slab allocator

Slab allocation is used to avoid internal fragmentation. Its principal benefit is that each cache hands out memory in exactly the size requested, so no internal memory fragmentation exists. Allocation is also fast, because the system builds the objects in advance and can hand them out from a slab immediately.

Slabs

A slab is the unit by which a cache can grow or shrink. It represents one memory allocation to the cache from the machine, and its size is customarily a multiple of the page size. A slab must contain a list of free buffers (or bufctls), as well as a list of the bufctls that have been allocated (in the case of a large slab size).

Large Slabs

These are for caches that store objects that are not less than 1/8 of the page size for a given machine. The reason large slabs have a different layout from small slabs is that it allows large slabs to pack better into page-size units, which helps with fragmentation. The slab contains a list of bufctls, which are simply controllers for each buffer that can be allocated (a buffer is the memory that the user of a slab allocator actually uses).

Small Slabs

Small slabs contain objects that are less than 1/8 of the page size for a given machine. These slabs are optimized further by avoiding bufctls, which would be as large as the data itself and would make memory usage much greater. A small slab is exactly one page and has a defined structure that allows bufctls to be avoided. The last part of the page contains the slab header, the information needed to maintain the slab. Starting at the first address of the page, buffers are laid out one after another until they would run into the slab header at the end of the page. Instead of using bufctls, the free buffers themselves hold the free-list links, which is what allows the small slab to do without bufctls entirely.
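To make the small-slab layout concrete, here is a minimal C sketch of a one-page slab whose free list is threaded through the free buffers, with the slab header stored at the end of the page. The names (slab_header, slab_init, slab_alloc, slab_free), the 4096-byte page size, and the 64-byte object size are illustrative assumptions, not any particular allocator's API.

```c
/* Sketch of a "small slab": one page, free list threaded through the free
 * buffers themselves, slab header stored in the last bytes of the page.
 * PAGE_SIZE and OBJ_SIZE are assumed example values. */
#include <stddef.h>

#define PAGE_SIZE 4096
#define OBJ_SIZE  64                  /* example object size, < PAGE_SIZE / 8 */

struct slab_header {                  /* lives at the end of the page */
    void  *free_list;                 /* first free buffer in this slab */
    size_t in_use;                    /* number of buffers currently allocated */
};

/* Initialise one (page-aligned) page as a small slab: chain every buffer
 * into the free list, using the buffer's own memory to hold the link. */
static struct slab_header *slab_init(void *page)
{
    struct slab_header *hdr =
        (struct slab_header *)((char *)page + PAGE_SIZE - sizeof(*hdr));
    size_t nbufs = (PAGE_SIZE - sizeof(*hdr)) / OBJ_SIZE;

    hdr->free_list = NULL;
    hdr->in_use    = 0;
    for (size_t i = 0; i < nbufs; i++) {
        void *buf = (char *)page + i * OBJ_SIZE;
        *(void **)buf = hdr->free_list;   /* link stored inside the free buffer */
        hdr->free_list = buf;
    }
    return hdr;
}

/* Allocation just pops the head of the free list. */
static void *slab_alloc(struct slab_header *hdr)
{
    void *buf = hdr->free_list;
    if (buf != NULL) {
        hdr->free_list = *(void **)buf;
        hdr->in_use++;
    }
    return buf;
}

/* Freeing pushes the buffer back onto the free list. */
static void slab_free(struct slab_header *hdr, void *buf)
{
    *(void **)buf = hdr->free_list;
    hdr->free_list = buf;
    hdr->in_use--;
}
```

Because a free buffer's memory is otherwise unused, storing the next-pointer inside it costs nothing, which is exactly why small slabs can do without separate bufctls.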
buddy allocator

The buddy memory allocation technique divides memory into partitions to try to satisfy a memory request as suitably as possible. The system repeatedly splits memory into halves to try to give a best fit. Compared to the memory allocation techniques (such as paging) that modern operating systems use, buddy memory allocation is relatively easy to implement and does not require an MMU, so it can be implemented, for example, on Intel 80286 and older computers.

The buddy technique allocates memory in powers of 2, i.e. in blocks of 2^x, where x is an integer. The programmer therefore has to decide on, or write code to determine, the upper limit of x. For instance, if the system had 2000K of physical memory, the upper limit on x would be 10, since 2^10 (1024K) is the biggest allocatable block. This makes it impossible to allocate everything as a single chunk; the remaining 976K of memory would have to be taken in smaller blocks.

After deciding on the upper limit (call it u), the programmer has to decide on the lower limit, i.e. the smallest memory block that can be allocated. This lower limit is necessary to keep the overhead of tracking used and free memory blocks small: without it, if many programs requested small blocks of memory like 1K or 2K, the system would waste a lot of space just remembering which blocks are allocated and which are free. Typically this lower limit is a moderate number (like 2, so that memory is allocated in 2^2 = 4K blocks), small enough to minimize wasted space, but large enough to avoid excessive overhead.

When a memory request is made, the following happens:

* If memory is to be allocated:
  1. Look for a memory slot of a suitable size (the minimal 2^k block that is larger than the requested memory).
     1. If it is found, it is allocated to the program.
     2. If not, the system tries to make a suitable memory slot by doing the following:
        1. Split a free memory slot larger than the requested memory size in half.
        2. If the lower limit is reached, allocate that amount of memory.
        3. Go back to step 1 (look for a memory slot of a suitable size).
        4. Repeat this process until a suitable memory slot is found.
* If memory is to be freed:
  1. Free the block of memory.
  2. Look at the neighbouring block: is it free too?
  3. If it is, combine the two, then go back to step 2 and repeat until either the upper limit is reached (all memory is freed) or a non-free neighbouring block is encountered.

(A sketch of this splitting and coalescing logic appears at the end of this section.)

This method of freeing memory is rather efficient, as compaction is done relatively quickly, with the maximal number of compactions required equal to 2^u / 2^l (i.e. 2^(u-l)). Typically the buddy memory allocation system is implemented with a binary tree that represents used and unused split memory blocks. However, there still exists the problem of internal fragmentation, and in many situations it is essential to minimize it. This problem can be solved by slab allocation, described above.
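To make the splitting and coalescing steps above concrete, here is a minimal buddy allocator sketch in C. It manages a single statically allocated arena, keeps one free list per block order, and finds a block's buddy by XOR-ing the block's offset with its size. The names (buddy_init, buddy_alloc, buddy_free), the arena size, and the MIN_ORDER/MAX_ORDER limits are illustrative assumptions; as noted above, a real implementation would more likely use a binary tree or bitmap rather than linear free-list scans.

```c
/* Minimal buddy allocator sketch (illustrative, not production code).
 * Block sizes are powers of two between 2^MIN_ORDER and 2^MAX_ORDER bytes;
 * free blocks of each order sit on a singly linked free list, and a block's
 * buddy is found by XOR-ing its offset in the arena with its size. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MIN_ORDER 5   /* lower limit l: smallest block is 2^5  = 32 bytes */
#define MAX_ORDER 20  /* upper limit u: whole arena is   2^20 = 1 MiB    */

struct block { struct block *next; };                /* link stored in each free block */

static _Alignas(16) uint8_t arena[1u << MAX_ORDER];  /* the memory being managed */
static struct block *free_lists[MAX_ORDER + 1];      /* one free list per order */

static void buddy_init(void)
{
    memset(free_lists, 0, sizeof(free_lists));
    free_lists[MAX_ORDER] = (struct block *)arena;   /* start with one maximal free block */
    free_lists[MAX_ORDER]->next = NULL;
}

/* Allocate a block of 2^order bytes, splitting larger blocks as needed. */
static void *buddy_alloc(unsigned order)
{
    unsigned o = order;
    while (o <= MAX_ORDER && free_lists[o] == NULL)
        o++;                                         /* smallest free block that fits */
    if (o > MAX_ORDER)
        return NULL;                                 /* nothing large enough is free */

    struct block *blk = free_lists[o];
    free_lists[o] = blk->next;                       /* pop it from its free list */

    while (o > order) {                              /* split down to the requested order */
        o--;
        struct block *buddy = (struct block *)((uint8_t *)blk + ((size_t)1 << o));
        buddy->next = free_lists[o];                 /* upper half becomes a free buddy */
        free_lists[o] = buddy;
    }
    return blk;
}

/* Free a block of 2^order bytes, merging it with its buddy while possible. */
static void buddy_free(void *ptr, unsigned order)
{
    struct block *blk = ptr;

    while (order < MAX_ORDER) {
        size_t offset     = (size_t)((uint8_t *)blk - arena);
        struct block *bud = (struct block *)(arena + (offset ^ ((size_t)1 << order)));

        struct block **pp = &free_lists[order];      /* is the buddy currently free? */
        while (*pp != NULL && *pp != bud)
            pp = &(*pp)->next;
        if (*pp == NULL)
            break;                                   /* buddy in use: stop coalescing */

        *pp = bud->next;                             /* unlink the buddy */
        if (bud < blk)
            blk = bud;                               /* merged block starts at the lower address */
        order++;                                     /* and is one order larger */
    }
    blk->next = free_lists[order];
    free_lists[order] = blk;
}
```

For example, buddy_alloc(10) would return a 1 KiB block after splitting the 1 MiB arena down level by level, and freeing that block with buddy_free would merge it back up as far as its buddies allow; in this sketch the caller is assumed to remember the order it requested.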