Tuesday, 3 April 2012

Dynamic memory allocation


The task of fulfilling an allocation request consists of finding a block of unused memory of sufficient size. Even though this task seems simple, several issues make the implementation complex. One such problem is fragmentation: external fragmentation arises when many small gaps are left between allocated memory blocks, none of which is large enough to satisfy a request, while internal fragmentation is the space wasted inside blocks that are larger than what was actually requested. Another is that the allocator's metadata can inflate the size of (individually) small allocations; this effect can be reduced by chunking.
Usually, memory is allocated from a large pool of unused memory area called the heap (also called the free store). Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually via a pointer reference. The precise algorithm used to organize the memory area and allocate and deallocate chunks is hidden behind an abstract interface and may use any of the methods described below.
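As a concrete illustration, here is a minimal C example of the standard malloc/free interface: the program asks the allocator for a block from the heap and only ever touches that memory indirectly, through the returned pointer.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Request room for 100 ints from the heap; the allocator decides where
       the block actually lives, the program only receives a pointer to it. */
    int *data = malloc(100 * sizeof *data);
    if (data == NULL)            /* allocation can fail */
        return EXIT_FAILURE;

    for (int i = 0; i < 100; i++)
        data[i] = i * i;         /* memory is reached only through the pointer */
    printf("data[10] = %d\n", data[10]);

    free(data);                  /* return the block to the heap */
    return EXIT_SUCCESS;
}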


The dynamic memory allocation algorithm actually used can have a significant impact on performance. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators: the lowest average instruction path length required to allocate a single memory slot was 52 (as measured with an instruction-level profiler on a variety of software).


Fixed-size-blocks allocation, also called memory pool allocation, uses a free list of fixed-size blocks of memory (often all of the same size). This works well for simple embedded systems where no large objects need to be allocated, but it suffers from fragmentation.
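A minimal sketch of such a pool allocator in C follows, assuming the blocks are carved out of a static array and threaded into a singly linked free list; the names pool_init, pool_alloc and pool_free are made up for the example rather than taken from any particular library.

#include <stddef.h>

#define BLOCK_SIZE  32          /* every block has the same fixed size */
#define BLOCK_COUNT 128         /* pool capacity: 128 blocks           */

/* A free block stores the pointer to the next free block in its own first
   bytes, so the free list needs no extra memory. */
typedef union block {
    union block  *next;
    unsigned char payload[BLOCK_SIZE];
} block_t;

static block_t  pool[BLOCK_COUNT];
static block_t *free_list;

/* Thread every block of the static pool onto the free list. */
void pool_init(void)
{
    for (size_t i = 0; i < BLOCK_COUNT - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list = &pool[0];
}

/* Allocation: pop the head of the free list, or NULL if the pool is exhausted. */
void *pool_alloc(void)
{
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;
}

/* Deallocation: push the block back onto the free list. */
void pool_free(void *p)
{
    block_t *b = p;
    b->next   = free_list;
    free_list = b;
}

Both allocation and deallocation are a single pointer operation on the head of the free list, which is part of why this scheme is popular on small embedded systems.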


For more details on this topic, see Buddy memory allocation.
In this system, memory is allocated from several pools of memory instead of just one, where each pool represents blocks of memory of a certain power of two in size. If a smaller size is requested than is available, the smallest available size is selected and it is then broken in two. One of the resulting halves is selected, and the process repeats (checking the size again and splitting if needed) until the block is just large enough. All new blocks that are formed during these splits are added to their respective memory pools for later use.
All the blocks of a particular size are kept in a sorted linked list or tree. When a block is freed, it is compared to its buddy; if both are free, they are combined and placed in the free list for the next-largest block size. When a block is allocated, the allocator starts with the smallest sufficiently large block, to avoid needlessly breaking blocks.
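The splitting and coalescing described above can be sketched in C. The following simplified buddy allocator manages a small static arena; all of the names (buddy_alloc, buddy_free, MIN_ORDER, MAX_ORDER) are invented for the example, and to stay short it asks the caller to keep track of each block's order instead of storing that metadata itself, as a real allocator would.

#include <stddef.h>
#include <stdint.h>

#define MIN_ORDER 5                       /* smallest block: 2^5  = 32 bytes   */
#define MAX_ORDER 12                      /* whole arena:    2^12 = 4096 bytes */

static _Alignas(max_align_t) unsigned char arena[1 << MAX_ORDER];

/* One free list per order; a free block stores the next pointer in-place. */
typedef struct free_block { struct free_block *next; } free_block_t;
static free_block_t *free_lists[MAX_ORDER + 1];

void buddy_init(void)
{
    for (int o = MIN_ORDER; o <= MAX_ORDER; o++)
        free_lists[o] = NULL;
    free_lists[MAX_ORDER] = (free_block_t *)arena;   /* one big free block */
    free_lists[MAX_ORDER]->next = NULL;
}

/* Unlink a specific block from its order's free list; returns 1 if it was there. */
static int remove_from_list(int order, void *block)
{
    for (free_block_t **p = &free_lists[order]; *p; p = &(*p)->next)
        if (*p == block) { *p = (*p)->next; return 1; }
    return 0;
}

/* Allocate at least `size` bytes; writes the block's order so the caller can
   pass it back to buddy_free.  Returns NULL if no block is large enough. */
void *buddy_alloc(size_t size, int *order_out)
{
    int order = MIN_ORDER;
    while (order <= MAX_ORDER && ((size_t)1 << order) < size)
        order++;

    int o = order;                          /* smallest non-empty list that fits */
    while (o <= MAX_ORDER && free_lists[o] == NULL)
        o++;
    if (o > MAX_ORDER)
        return NULL;

    free_block_t *block = free_lists[o];    /* pop a block ...                    */
    free_lists[o] = block->next;
    while (o > order) {                     /* ... and split until just big enough */
        o--;
        free_block_t *half = (free_block_t *)((unsigned char *)block + ((size_t)1 << o));
        half->next = free_lists[o];         /* upper half joins the smaller pool  */
        free_lists[o] = half;
    }
    *order_out = order;
    return block;
}

/* Free a block of the given order, coalescing with its buddy while possible. */
void buddy_free(void *ptr, int order)
{
    uintptr_t offset = (uintptr_t)((unsigned char *)ptr - arena);
    while (order < MAX_ORDER) {
        /* A block's buddy differs only in the bit corresponding to its size. */
        uintptr_t buddy_off = offset ^ ((uintptr_t)1 << order);
        if (!remove_from_list(order, arena + buddy_off))
            break;                          /* buddy is in use: stop merging      */
        offset &= ~((uintptr_t)1 << order); /* merged block starts at the lower buddy */
        order++;
    }
    free_block_t *b = (free_block_t *)(arena + offset);
    b->next = free_lists[order];
    free_lists[order] = b;
}

The property exploited here is that a block's buddy lies at an offset differing only in the single bit corresponding to the block's size, so it can be located with one XOR.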
