CS 537 Notes, Section #13: Storage Allocation


Chapter 8, Sections 8.1 through 8.3 in Operating Systems Concepts.

Information stored in memory is used in many different ways. Some possible classifications are:

The compiler, linker, operating system, and run-time library all must cooperate to manage this information and perform allocation.

When a process is running, what does its memory look like? It is divided into regions that the OS treats similarly, called segments. In Unix/Linux, each process has three segments: code (text), data, and stack.



Some systems support many different kinds of segments.

One of the steps in creating a process is to load its information into main memory, creating the necessary segments. Information comes from a file that gives the size and contents of each segment (e.g. a.out in Unix/Linux and .exe in Windows). The file is called an object file.

Division of responsibility among the various parts of the system:

Compiler: translates each source file into an object file, assigning addresses within that file's segments; the information is incomplete because the file may refer to things defined elsewhere.
Linker: combines the object files into a single executable, resolving cross-file references and laying out the final segments.
Operating system: loads the executable into memory, creates the segments, and manages them while the process runs.
Run-time library: provides the dynamic allocation routines (e.g. malloc and free) that manage space within a segment.


Dynamic Memory Allocation

Why is static allocation not sufficient for everything? Unpredictability: we cannot predict ahead of time how much memory will be needed, or in what form. Examples: recursive procedures, where the depth of recursion is unknown; data structures such as lists and trees that grow and shrink as the program runs; and the OS itself, which does not know in advance how many processes will run or how large they will be.

Need dynamic memory allocation both for main memory and for file space on disk.

Two basic operations in dynamic storage management: allocate a block of a given size, and free a previously allocated block.

Dynamic allocation can be handled in one of two general ways: stack allocation (restricted, but simple and efficient) and heap allocation (more general, but harder to manage).

Stack organization: memory allocation and freeing are partially predictable (as usual, we do better when we can predict the future). Allocation is hierarchical: memory is freed in opposite order from allocation. If alloc(A) then alloc(B) then alloc(C), then it must be free(C) then free(B) then free(A).

A stack-based organization keeps all the free space together in one place.


[Figure: Stack Frames]
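Procedure call frames are the most common example: entering a procedure pushes a frame, returning pops it. Here is a minimal sketch of the same idea in C (hypothetical names, not code from the notes): allocation bumps a pointer into a fixed arena, and because frees come in strict reverse order, restoring a saved mark releases everything allocated after that mark.

    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical stack allocator: a fixed arena plus a "top" index.
       Allocation bumps the top; frees must come in reverse (LIFO) order,
       so all free space stays together above the top. */
    static char arena[4096];
    static size_t top = 0;                 /* everything below 'top' is in use */

    void *stack_alloc(size_t n) {
        if (n > sizeof(arena) - top)       /* out of space */
            return NULL;
        void *p = &arena[top];
        top += n;
        return p;
    }

    /* Saving a mark before a group of allocations and releasing back to it
       plays the same role as a procedure's frame pointer. */
    size_t stack_mark(void)        { return top; }
    void   stack_release(size_t m) { assert(m <= top); top = m; }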

Heap organization: allocation and release are unpredictable. Heaps are used for arbitrary list structures and complex data organizations. Example: a payroll system. We do not know when employees will join or leave the company, yet we must be able to keep track of all of them using the least possible amount of storage.


[Figure: Heap Allocation]


[Figure: Memory Bitmap]
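A simplified first-fit sketch in C (hypothetical names; a real allocator would also split blocks that are too big and coalesce adjacent free blocks): free memory is kept on a linked list, and allocation scans for the first block that is large enough.

    #include <stddef.h>

    /* Sketch of a free-list allocator.  Each free block starts with a small
       header giving its size and a link to the next free block. */
    struct block {
        size_t size;                /* usable bytes after the header */
        struct block *next;         /* next free block, or NULL */
    };

    static struct block *free_list = NULL;

    /* Seed the allocator with one large region of raw memory. */
    void ff_add_region(void *mem, size_t bytes) {
        struct block *b = (struct block *)mem;
        b->size = bytes - sizeof(struct block);
        b->next = free_list;
        free_list = b;
    }

    void *ff_alloc(size_t n) {
        struct block **prev = &free_list;
        for (struct block *b = free_list; b != NULL; prev = &b->next, b = b->next) {
            if (b->size >= n) {             /* first fit: take the first block big enough */
                *prev = b->next;            /* unlink it (no splitting in this sketch) */
                return (void *)(b + 1);     /* usable memory starts after the header */
            }
        }
        return NULL;                        /* nothing fits; a real allocator would grow the heap */
    }

    void ff_free(void *p) {
        struct block *b = (struct block *)p - 1;   /* recover the header */
        b->next = free_list;                       /* push the block back on the free list */
        free_list = b;
    }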

Pools: keep a separate allocation pool for each popular size. Allocation is fast, no fragmentation.
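A sketch of one such pool in C, assuming a single popular object size (the names and sizes are made up): every slot is the same size, so allocation and freeing are constant-time list operations.

    #include <stddef.h>

    /* Hypothetical fixed-size pool: 128 slots of 64 bytes each.
       Free slots are chained together through their own storage. */
    #define OBJ_SIZE  64
    #define POOL_OBJS 128

    union slot { union slot *next; char bytes[OBJ_SIZE]; };

    static union slot pool[POOL_OBJS];
    static union slot *free_slots = NULL;

    void pool_init(void) {
        for (int i = 0; i < POOL_OBJS; i++) {   /* chain every slot onto the free list */
            pool[i].next = free_slots;
            free_slots = &pool[i];
        }
    }

    void *pool_alloc(void) {
        if (free_slots == NULL) return NULL;    /* pool exhausted */
        union slot *s = free_slots;
        free_slots = s->next;
        return s;
    }

    void pool_free(void *p) {
        union slot *s = (union slot *)p;
        s->next = free_slots;                   /* push the slot back */
        free_slots = s;
    }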

Reclamation Methods: how do we know when memory can be freed?

Two problems in reclamation: dangling pointers (storage is recycled while some pointer still refers to it) and memory leaks (storage is never freed even though nothing refers to it any longer).

Reference Counts: keep track of the number of outstanding pointers to each chunk of memory. When the count drops to zero, free the chunk. Examples: Smalltalk, file descriptors in Unix/Linux. This works fine for hierarchical structures, but fails for circular ones, since the counts in a cycle never reach zero. The reference counts must be managed automatically (by the system) so that no mistakes are made in incrementing and decrementing them.


[Figure: Reference Counts]
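A minimal reference-counting sketch in C (hypothetical names): each chunk carries a count of outstanding pointers and is freed the moment the count drops to zero. In a real system the retain/release calls are generated automatically.

    #include <stdlib.h>

    /* Hypothetical reference-counted object. */
    struct rc_obj {
        int refs;               /* number of outstanding pointers to this chunk */
        /* ... payload ... */
    };

    struct rc_obj *rc_new(void) {
        struct rc_obj *o = malloc(sizeof *o);
        if (o) o->refs = 1;     /* the creator holds the first reference */
        return o;
    }

    void rc_retain(struct rc_obj *o) {      /* a new pointer to o was created */
        o->refs++;
    }

    void rc_release(struct rc_obj *o) {     /* a pointer to o was deleted */
        if (--o->refs == 0)
            free(o);            /* no pointers remain: reclaim the storage */
    }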

Garbage Collection: storage is not freed explicitly (with a free operation) but implicitly: the program simply deletes pointers. When the system needs storage, it searches through all of the pointers (it must be able to find them all!) and reclaims anything that is no longer referenced. If structures can be circular, this is the only way to reclaim the space. Garbage collection makes life easier for the application programmer, but garbage collectors are incredibly difficult to program and debug, especially if compaction is also done. Examples: Lisp, capability systems.

How does garbage collection work? The collector must be able to find all objects and all of the pointers to them. Pass 1 (mark): starting from the pointers the program can reach directly (the roots), mark every object reachable by following pointers. Pass 2 (sweep): scan all of memory and free every object that was not marked.


[Figure: Garbage Collection]
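A rough mark-and-sweep sketch in C (hypothetical structure; it assumes the collector can reach every object through a global list and can find every pointer through each object's fields):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    /* Hypothetical heap object: two outgoing pointers, a mark bit, and a link
       that chains every allocated object together so the sweep can find it. */
    struct obj {
        bool marked;
        struct obj *child[2];       /* pointers stored in this object */
        struct obj *all_next;       /* next object in the "all allocated" list */
    };

    static struct obj *all_objects = NULL;

    static void mark(struct obj *o) {
        if (o == NULL || o->marked) return;    /* already visited; also handles cycles */
        o->marked = true;
        mark(o->child[0]);
        mark(o->child[1]);
    }

    void gc(struct obj **roots, size_t nroots) {
        for (struct obj *o = all_objects; o; o = o->all_next)
            o->marked = false;                 /* clear marks from the last collection */
        for (size_t i = 0; i < nroots; i++)
            mark(roots[i]);                    /* pass 1: mark everything reachable */
        struct obj **p = &all_objects;         /* pass 2: sweep the unmarked objects */
        while (*p) {
            if ((*p)->marked) {
                p = &(*p)->all_next;
            } else {
                struct obj *dead = *p;
                *p = dead->all_next;           /* unlink and reclaim */
                free(dead);
            }
        }
    }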

Garbage collection is often expensive: 20% or more of all CPU time in systems that use it.



Copyright © 1997, 2002, 2008 Barton P. Miller
Non-University of Wisconsin students and teachers are welcome to print these notes for their personal use. Further reproduction requires permission of the author.