Goals
- Learn to design and implement a custom memory allocator: The operating system provides mechanisms for dynamic allocation in contiguous memory spaces. There are multiple criteria to assess how well these mechanisms work, including speed of execution and the minimization of external fragmentation (to that end, you learned of different policies for allocation decisions, such as first-fit, best-fit, and worst-fit). In this lab, you will design and implement a memory allocation system to work in user space and apply the concepts you have studied so far.
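To make the policy distinction concrete, here is a minimal sketch of a first-fit search over a singly linked free list. The `block` structure and field names are illustrative assumptions, not part of the lab's required API; your allocator's bookkeeping may look quite different.

```c
#include <stddef.h>

/* Hypothetical free-list node; your allocator's actual structure may differ. */
struct block {
    size_t size;          /* usable bytes in this free block */
    struct block *next;   /* next block on the free list */
};

/* First-fit: return the first free block large enough for the request,
 * or NULL if none fits. Best-fit and worst-fit would instead scan the
 * whole list, tracking the smallest (or largest) block that fits. */
struct block *first_fit(struct block *free_list, size_t request)
{
    for (struct block *b = free_list; b != NULL; b = b->next) {
        if (b->size >= request)
            return b;
    }
    return NULL;
}
```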
Credits
This lab was developed by Prof. L. Felipe Perrone. Permission to reuse this material in parts or in its entirety is granted provided that this credits note is not removed. Additional students files associated with this lab, as well as any existing solutions can be provided upon request by e-mail to perrone[at]bucknell[dot]edu
Set Up
Create a new directory for today’s work:
~/csci315/Labs/Lab8
Copy to this directory all the files developed for your custom memory allocator. Using the terminal, you would do this with commands:
- cd ~/csci315/Labs/Lab8
- cp -R ../Lab7/* .
Create a file called answers.txt where you will write BRIEF and CLEAR answers to the questions (1.1), (1.2), (1.3).
Problem 1 (30 points)
1.1 What type(s) of fragmentation is (are) possible with a custom memory allocator like yours? Internal, external or yet something different? [8 points]
1.2 Propose two or more performance metrics (unrelated to fragmentation) that you could use to assess how well your memory allocator works and describe how you could measure each one. [8 points]
1.3 Create a new function in your allocator to compute the average fragmentation created in memory after repeated, randomly interleaved calls to your allocate and deallocate functions. Before you get down to coding this function, though, think about how you will implement it and write here your algorithm in the form of an algorithm in pseudo-C code. The signature for this function should be something like:
double average_frag();
If you change this API, make sure to explain in answers.txt why your new function signature was desirable.
Finally, once you have the algorithm for average_frag, implement the corresponding function in the C module that contains your allocator. That is, you will be changing your allocator.h and allocator.c files to include the new function (make sure to use Doxygen comments in your header file to explain what your function does and how its API works). [14 points]
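As a starting point for your own algorithm, here is one plausible way to compute a fragmentation figure; it is only a sketch under assumptions, not the required answer. The metric shown is the fraction of free memory lying outside the largest free block, and the `free_block` structure is hypothetical. Note also that in your module the free list will likely be file-scope state, so your real `average_frag()` takes no arguments; the parameterized version below is simply easier to illustrate and test.

```c
#include <stddef.h>

/* Hypothetical free-list node; adapt the names to your allocator.h. */
struct free_block {
    size_t size;
    struct free_block *next;
};

/* One plausible fragmentation metric (you may well choose another):
 * the fraction of free memory that lies OUTSIDE the largest free block.
 * 0.0 means all free memory is contiguous; values near 1.0 mean the
 * free space is shattered into many small pieces. */
double average_frag(struct free_block *free_list)
{
    size_t total = 0, largest = 0;

    for (struct free_block *b = free_list; b != NULL; b = b->next) {
        total += b->size;
        if (b->size > largest)
            largest = b->size;
    }
    if (total == 0)
        return 0.0;               /* no free memory, no fragmentation */
    return 1.0 - (double)largest / (double)total;
}
```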
When you are done with this, you need to:
- git add answers.txt
- git add include/allocator.h
- git add src/allocator.c
- git commit -m "Lab 8.1 completed"
- git push
Problem 2 (30 points)
Copy your memory-test.c file from the previous lab onto a new one called frag-eval.c. Modify your new file so that its main function accepts command line parameters as in the usage guide below:
frag-eval [algorithm] [seed] [requests]
where:
- algorithm (integer) can take only three values, which select different allocation policies: 0 (first-fit), 1 (best-fit), 2 (worst-fit).
- seed (unsigned integer) is used to initialize the pseudo-random number generator with a call to srandom(3).
- requests (integer) specifies the number of dynamic memory allocation requests to simulate before the evaluation of fragmentation is performed.
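One way to handle these arguments is sketched below, assuming a helper called `parse_args` (a hypothetical name; you may structure your `main` differently) and minimal error handling.

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of the argument handling for frag-eval. Returns 0 on success
 * and fills in the three parameters; returns -1 on bad input. */
int parse_args(int argc, char *argv[],
               int *algorithm, unsigned int *seed, int *requests)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s [algorithm] [seed] [requests]\n", argv[0]);
        return -1;
    }
    *algorithm = atoi(argv[1]);       /* 0=first-fit, 1=best-fit, 2=worst-fit */
    *seed      = (unsigned int) strtoul(argv[2], NULL, 10);
    *requests  = atoi(argv[3]);
    if (*algorithm < 0 || *algorithm > 2 || *requests <= 0) {
        fprintf(stderr, "invalid argument values\n");
        return -1;
    }
    return 0;
}
```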
Your main function must implement some pattern of allocate and deallocate requests to exercise the functionality of your allocator. Consider the algorithm given below in pseudo-C code:
    // initialize your custom memory allocator to work with a pool of 10240 bytes (10K)
    srandom(seed);  // initialize PRNG, or Pseudo-Random Number Generator
    int r = 0;
    void *p = NULL;
    while (r < requests) {
        s = a random number between 100 and 1000;
        p = allocate(algorithm, s);  // this will require you to change your allocate function
                                     // so that it accepts the algorithm parameter to select
                                     // an allocation policy
        deallocate(p);
        r++;
    }
2.1 Think critically about the pseudo-code given above. Is this algorithm appropriate to bring your allocator to a state where you can have a representative amount of fragmentation? If so, explain why in your answers.txt. If not, describe how you would change this algorithm to be more representative of a typical program. (For full credit on this question, you must provide pseudo-code for the complete algorithm that you would implement later in C.) [8 points]
Now, implement the algorithm of your choice into the main function of your frag-eval.c program. After the loop of dynamic allocation requests, call your average_frag function and display the result on the standard output. [17 points]
You will need to update your Makefile to contain a rule for building frag-eval.c. [5 points]
When you are done with this, you need to:
- git add answers.txt
- git add src/frag-eval.c
- git add Makefile
- git commit -m "Lab 8.2 completed"
- git push
Problem 3 (40 points)
The method you used in the previous problem to evaluate the fragmentation after n random requests for allocation and deallocation is stochastic. (Hint: if it uses pseudo-random numbers, it must be a stochastic process.) For every run with a chosen seed for your Pseudo-Random Number Generator (PRNG), your average_frag function creates a sample of the average fragmentation. That value by itself has very little statistical significance, but if you replicate these runs R times, with a different seed for each run, you can compute the average and the variance of the results. The average of your average fragmentation observations is a better estimator than a single sample; this is called a point estimate.
As it turns out though, your point estimate is still not the most statistically significant result you can get. If you look at the collection of your samples (say, if you were to plot a histogram with them), they will be somewhat dispersed around your point estimate (the variance you compute will tell you how much). If you start increasing your number of runs R, you will notice that your point estimate changes a bit. (There can be a fair amount of uncertainty about what the value you are estimating really is.) The right way to deal with this uncertainty is to compute a confidence interval for your average fragmentation.
The deliverable for this problem is as follows.
Choose a number R of runs to perform and also choose R different seeds to provide for the generation of pseudo-random numbers. (Consider only values of R >= 30 and remember that the more samples you have, the "tighter" your confidence intervals will be.)
In order to compute the confidence interval for your estimate, you will need to use the Student t distribution. For your chosen level of confidence (a value between 0 and 1), you will need to use a Student-t value that can come either from a table or from a computation using the Student-t inverse probability distribution. You are being offered some C code that can compute this value for you; look for it in
~cs315/Labs/Lab8
You may elect to use this code or possibly to get the Student-t value you need in other ways (possibly even from spreadsheet software). It's up to you whether you need this code or not. For review, consult a reference on the Student's t-distribution and on computing confidence intervals.
3.1 For each of the three allocation policies (first-fit, best-fit, and worst-fit), complete R runs using the selected seeds and store each sample you get. Compute the point estimate (the average of your samples) and also the 95% confidence interval. If your confidence interval "looks overly wide" (you will be the judge of how wide is too wide), consider getting more samples. For each policy, report the confidence interval you computed together with the raw data in nicely presented tables (you should have at least one row per sample, and columns for the number of the run, the seed value used, and the sample obtained). You should do this in a spreadsheet called exp-results.xls (or whatever extension is appropriate for the software you used). Please make sure to print or export this spreadsheet file to a PDF called exp-results.pdf.
3.2 Reflect upon the data you obtained and use it to argue which allocation policy works best for your simulation of allocation/deallocation activity. In answers.txt, write a conjecture of your own to explain why this policy worked better than the others.
- git add answers.txt
- git add exp-results.pdf
- git add [any scripts you used to automate your experiments]
- git commit -m "Lab 8.3 completed"
- git push