Project 2b: xv6 Scheduler


There are three objectives to this assignment:

  • To familiarize yourself with a real scheduler.
  • To change that scheduler to a new algorithm.
  • To make graphs to validate the implementation.


With increased sharing in the cloud, compute resources have become a commodity that is sold by the hour. CPU schedulers play an important role in guaranteeing that buyers get what they pay for, and thus schedulers must evolve along with new pricing models.

In this project, you will build a scheduler that simultaneously supports two pricing schemes similar to those used by AWS (Amazon Web Services) and other cloud services: reserved processes and spot processes. Your scheduler will have two levels to support these two process types, and will keep track of a "customer charge" per process in dollars.

Level 1 is for reserved processes and will receive priority over Level 2. A reserved process is given a probabilistic guarantee of a certain share of CPU resources. To implement level 1, use a lottery scheduler, as described in this chapter of the online book. Reserved processes will be charged a fixed rate based on their guaranteed portion of CPU, regardless of whether they actually use their portion. You may choose to charge reserved processes at a constant rate (regardless of whether they win or lose the lottery), or you may choose to charge them only when they win. Our tests will accept either design.

Level 2 is for spot processes. These should run only using CPU resources that are left over from level 1. A spot process may run if not all CPU time is allocated at level 1, or if not all level 1 processes use their full allocation all the time. Of course, depending on the load at level 1, level 2 processes may never get an opportunity to run. Thus, people use reserved processes for time-critical work (e.g., web apps), and only use spot computing for work that can be done at any time (e.g., bitcoin mining). As in AWS, your scheduler should allow each spot process to specify how much the person running the process is willing to pay per CPU-hour, and the scheduler should always choose the highest bidder that is runnable (assuming level 1 has been satisfied).


You'll need three new system calls to implement this scheduler. The first is int reserve(int percent), which marks a process as a reserved process (level 1), and guarantees (probabilistically) the given portion of CPU time. The call should fail (return -1) if the percent is not between 0 and 100. Furthermore, the sum of the percents across all reserved processes should not exceed 200 (the default Makefile for xv6 emulates two CPUs); a call to reserve() should fail if the OS cannot meet this constraint. Reserved processes pay 100 nanodollars per millisecond of CPU time reserved.

The second call is int spot(int bid), which marks a process for spot computing (level 2). The bid is in nanodollars per millisecond of CPU time. For example, a bid of 25 means the process is willing to pay about $0.09/hour. Before a call to spot() or reserve(), processes default to being spot instances with a bid of 0.

The third is int getpinfo(struct pstat *). This routine returns some basic information about each active process, including its process ID, how many times it has been chosen to run, how much time it has run (in ms), and how much it has been charged (in dollars and microdollars). You can use this system call to build a variant of the command line program ps, which can then be called to see what is going on.


Most of the code for the scheduler is quite localized and can be found in proc.c; the associated header file, proc.h, is also quite useful to examine. To change the scheduler, not much needs to be done; study its control flow and then try some small changes.

You'll need to figure out how to generate random numbers in the kernel; some searching should lead you to a simple pseudo-random number generator, which you can then include in the kernel and use as appropriate. In this one case, simply copying code for a pseudo-random number generator from online is not considered cheating.

You'll need to understand how to fill in the structure pstat in the kernel and pass the results to user space. The structure looks like what you see here.

Uptime is tracked with millisecond granularity in the ticks variable in trap.c. This variable is protected by the tickslock spinlock and is readable from userspace via the uptime system call. If you use ticks to measure time, you need to be careful to avoid deadlock. In particular, the prior code sometimes acquires tickslock before ptable.lock; thus, you should never acquire the locks in the opposite order in the scheduler() function.

Correctly accessing ticks requires using locks, which we have not learned about yet. Thus, every time xv6 context switches to the scheduler() function in proc.c, you may assume 10 milliseconds has passed. If programs only use CPU, this assumption is actually accurate, but programs that yield often (i.e., more often than every 10 ms), perhaps by waiting for I/O, will be overcharged. For example, if process A is scheduled to run for 10 ms, but it does I/O after 2 ms (and therefore yields), A will be charged for 10 ms instead of just the 2 ms it used. This is acceptable for grading (although you can implement something more sophisticated if you like).

Beware of overflow. You may wish to use a pair of integers to store charges (e.g., one integer for dollars, and one for microdollars).

(Added Sep 28) There are times when we have not defined what your scheduler should do. For example, if only spot instances are runnable, and they all bid the same, which one should run? As another example, what if there are only reserved processes, but the reservations don't add up to 200%? In any of these cases, you should still run something (don't waste CPU). You can use any reasonable method to choose which process to run in such cases (e.g., pick randomly or with round robin).

(Added Sep 28) Lotteries should be held independently on each core. For example, if a process reserves 20%, that process might have 10 out of 100 tickets on each core (or 100 out of 1000, or whatever you decide).

(Added Sep 28) Currently in xv6, a process can only run on one CPU at a time (you will fix this in a later project). Thus, sometimes processes may have trouble using their full reservation. Processes with higher reservations should get more CPU time, but the fact that a process may get less than its reservation does not always indicate a bug, especially when a large percent is reserved.

(Added Sep 28) One concern is that because processes default to being spot processes with a bid of 0, a new process may never get a chance to set its reservation. You may optionally let each new process run for free for a short time after launch if you like.

The Code

The source code for xv6 (and associated README) can be found in ~cs537-2/ta/xv6/. Everything you need to build, run, and even debug the kernel is in there.

You may also find the following readings about xv6 useful, written by the same team that ported xv6 to x86: xv6 book.

Particularly useful for this project: Chapter 9

What To Turn In

Beyond the usual code, you'll have to generate one or more graphs that help you argue your implementation is correct. In a file called graphs.txt, give the file names of your graphs, and briefly explain how the graphs show your code is correct. For example, you could show that reserved processes receive their correct share of CPU time, and that spot processes pay less for CPU time but have less predictable performance.