The Working Set Model for Program Behavior

P. J. Denning @ MIT

Communications of the ACM, May 1968, pages 323-333

Goal

Develop a unified approach to tackle process scheduling and memory management

Introduction

Model process behavior, in a multiprogrammed environment, in terms of system demand
System demand: a process's combined resource requirement (in this paper, only processor and memory)
Q: Why not disk, devices, network bandwidth, etc. too?
A: disk - much disk usage is already accounted for indirectly, as virtual-memory paging traffic
Process behavior should be modeled based only on its dynamic resource requirements
Hints from the user or assistance from the compiler can NOT be expected
Each process has its own virtual machine, composed of a processor and an infinite one-level virtual memory
A page is brought in only on demand; no look-ahead!

Background

Segmentation, Paging...
This paper assumes a two-level memory:
Main memory
Secondary memory with infinite capacity: space there is never the problem
The goal of memory management is to minimize the page traffic between main and secondary memory
Previous Strategies: Random, FIFO, LRU, ATLAS Loop Detection

Working Set Model

Introduction
Programmer's view of WS: the smallest collection of information that must be in memory to execute her program efficiently
System's view of WS: the set of most recently referenced pages
Working set W(t, τ): the set of pages referenced by the process during the interval (t - τ, t)
w(t, τ): the size of W(t, τ) (a small trace-based sketch follows the assumptions below)
x = a page's inter-reference interval, i.e. the virtual time between successive references to the same page; its expected value appears in the reentry-rate formula below
Assumptions (restrictions) about W(t, τ)
Pages in W(t, τ) are kept continuously in memory
The process is never interrupted except for page faults
A page is removed from memory the moment it leaves W
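A minimal sketch of the definition above (assumed names and a made-up reference string, not from the paper): W(t, τ) is computed as the distinct pages referenced in the last τ virtual-time units, and w(t, τ) is its size.

    def working_set(trace, t, tau):
        """W(t, tau): pages referenced in the last tau time units ending at t."""
        start = max(0, t - tau + 1)
        return set(trace[start:t + 1])

    trace = [0, 1, 2, 1, 1, 3, 2, 2, 4, 1]   # hypothetical page-reference string
    W = working_set(trace, t=9, tau=5)       # references at virtual times 5..9
    print(W, len(W))                         # W(t, tau) = {1, 2, 3, 4}, w(t, tau) = 4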
Properties of Working Set
working-set size as a function of τ: Fig 3
prediction: for α small relative to τ, W(t, τ) is a good predictor of W(t + α, τ)
reentry rate
reentry of a page = the page leaves the working set and is later brought back in
process reentry rate l(τ) = 1 / mean (virtual) time between reentries
proportional to [1 - prob(a page is referenced again within τ)]
inversely proportional to the expected inter-reference interval
l(τ) = [1 - prob(page is referenced again within τ)] / expected inter-reference interval
real-time reentry rate f(τ):
reentry rate in real time = (number of reentries during A sec of virtual time) / (A sec + total real time spent on page traffic for those reentries)
number of reentries during A sec = A * l(τ)
total time spent on page traffic for reentries = T * A * l(τ), where T = time to bring a page from disk into memory
f(τ) = [A * l(τ)] / [A + A * l(τ) * T] = l(τ) / [1 + l(τ) * T]
traffic rate between memory and disk: 2 * [real-time reentry rate * S(w(t, τ))]
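A minimal numeric sketch of l(τ) and f(τ) above, assuming an empirical list of inter-reference intervals (the values and the parameter names are illustrative, not from the paper):

    def reentry_rate(intervals, tau):
        """l(tau) = prob(x > tau) / mean(x), reentries per unit of virtual time."""
        mean_x = sum(intervals) / len(intervals)
        p_gt_tau = sum(1 for x in intervals if x > tau) / len(intervals)
        return p_gt_tau / mean_x

    def real_time_reentry_rate(lam, T):
        """f(tau) = l(tau) / (1 + l(tau) * T), where T is the page transit time."""
        return lam / (1 + lam * T)

    intervals = [2, 3, 3, 5, 8, 13, 40, 100]    # assumed inter-reference intervals
    lam = reentry_rate(intervals, tau=10)
    print(lam, real_time_reentry_rate(lam, T=20))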
τ-Sensitivity: how sensitive l(τ) is to changes in τ
sensitivity s(τ) = - d l(τ) / d τ
Decreasing τ never results in decreasing l(τ), so s(τ) >= 0
Choice of τ
residency: the fraction of time a page is potentially available in memory
x (= inter-reference interval) < τ -> residency = 1 = 100%
τ < x < τ + T (T = page transit time) -> residency = τ / [τ + 2T]
τ + T < x -> residency = τ / [x + T]
Fig. 4: residency vs. x => τ should be chosen with T in mind
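The piecewise residency rule above as a small sketch (tau, T, and the sample values are illustrative assumptions):

    def residency(x, tau, T):
        """Fraction of time a page with inter-reference interval x is in memory."""
        if x < tau:
            return 1.0                   # never leaves the working set
        elif x < tau + T:
            return tau / (tau + 2 * T)
        else:
            return tau / (x + T)

    for x in (5, 60, 200):               # e.g. tau = 50, T = 20
        print(x, residency(x, tau=50, T=20))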
Detecting W(t, τ)
Using special hardware is impractical
Software scheme (see the sketch below)
use the hardware-maintained referenced bit of each page
periodically shift the referenced bit into a per-page register of recent samples, then clear it
if OR(sampled bits) = 1, the page is in W(t, τ)
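A toy model of that software scheme (the class names and K are assumptions; K times the sampling period approximates τ):

    K = 4                                    # bits of reference history per page

    class Page:
        def __init__(self):
            self.referenced = 0              # set by "hardware" on each access
            self.history = 0                 # K-bit register of sampled bits

    def touch(page):
        page.referenced = 1                  # what the MMU would do on a reference

    def sample(pages):
        """Run once per sampling period: shift in the referenced bit, then clear it."""
        for p in pages:
            p.history = ((p.history >> 1) | (p.referenced << (K - 1))) & ((1 << K) - 1)
            p.referenced = 0

    def in_working_set(page):
        return page.history != 0             # OR of the sampled bits

    pages = [Page() for _ in range(3)]
    touch(pages[0]); sample(pages)           # only page 0 referenced this period
    print([in_working_set(p) for p in pages])    # [True, False, False]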
Memory Allocation
Look-ahead (bringing pages in before they are demanded) is not attractive
Look-ahead requires advance knowledge of W(t, τ)
Interactive programs make look-ahead futile
Reserving w(t, τ) page frames per process, together with demand paging, is enough
Q: Is this still true for a modern OS?

Scheduling

Desired properties of working set implementation
Process scheduling and memory management must be closely related
Should be efficient
Must be capable of providing w(t, τ) and the processor time consumed for each process
Sample implementation: Fig 7
Schedule a checker process together with the user processes. As a result, the checker will periodically
kick out pages not in W(t, τ), using the software scheme above
decrease w(t, τ) by the number of pages kicked out
The page-fault handler will increase w(t, τ) by one after bringing a page into memory
Process virtual time (= processor usage) is maintained
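A sketch of this bookkeeping (an assumed model of Fig 7, not the paper's code): the checker evicts pages whose reference history has aged to zero and shrinks w accordingly; the fault handler grows w by one.

    K = 4                                   # bits of reference history per page

    class Process:
        def __init__(self):
            self.resident = {}              # page -> K-bit reference-bit history
            self.w = 0                      # running estimate of w(t, tau)
            self.virtual_time = 0           # processor time consumed, in ticks

    def touch(proc, pg):
        """Reference to a resident page; a missing page goes through page_fault."""
        proc.resident[pg] |= 1 << (K - 1)

    def checker(proc):
        """Runs periodically alongside the user processes."""
        gone = [pg for pg, hist in proc.resident.items() if hist == 0]
        for pg in gone:
            del proc.resident[pg]           # page has left W(t, tau): kick it out
        proc.w -= len(gone)                 # decrease w by the pages kicked out
        for pg in proc.resident:
            proc.resident[pg] >>= 1         # age the remaining histories

    def page_fault(proc, pg):
        proc.resident[pg] = 1 << (K - 1)    # bring the page into memory
        proc.w += 1                         # increase w(t, tau) by one

    p = Process()
    page_fault(p, 7)
    for _ in range(K + 1):
        checker(p)                          # never re-referenced, so page 7 ages out
    print(p.w, p.resident)                  # 0 {}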

Resource Allocation: A Balancing Problem

Resource allocation can be formulated as a problem of balancing processor and memory demands
Memory demand: the fraction of main memory a process wants to use
= min { w(t, τ) / M, 1 }, where M = size of main memory
Processor demand
the fraction of a standard interval that the process is expected to use before it blocks, i.e. the expected length of its processor burst relative to the standard interval
Probability density of the interval between blockings: Fig 8
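A worked sketch of the two demands (the memory size, standard interval, candidate numbers, and the min-with-1 clamp on processor demand are illustrative assumptions):

    M = 1024                     # main-memory size in pages
    STANDARD_INTERVAL = 100      # ms

    def memory_demand(w):
        return min(w / M, 1.0)

    def processor_demand(expected_burst):
        """Fraction of a standard interval the process is expected to use."""
        return min(expected_burst / STANDARD_INTERVAL, 1.0)

    # three candidate processes: (w(t, tau) in pages, expected burst in ms)
    procs = [(300, 40), (500, 30), (200, 50)]
    mem = sum(memory_demand(w) for w, _ in procs)
    cpu = sum(processor_demand(b) for _, b in procs)
    print(mem, cpu)              # balancing keeps both sums near, but not above, 1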