Next, implement readDiskBlock and writeDiskBlock. You will need to write class Elevator and a helper class Request, which represents a read or write request that has not yet completed. When a new request arrives, Kernel.interrupt() should call Elevator.read or Elevator.write. This method creates a Request, adds it to the queue of requests, and, if the disk isn't busy, calls Disk.beginRead or Disk.beginWrite to start an operation. It then calls wait to block until its request has completed. When a disk completion interrupt occurs, Kernel.interrupt() calls Elevator.endIO, which wakes up the process that requested the just-completed operation so that it can return from the readDiskBlock or writeDiskBlock call.
The easiest way to wake up the right process is to make Request a monitor. The requesting process calls request.await(), which calls wait, and Elevator.endIO calls request.finish(), which calls notify. Elevator.endIO also checks whether the queue of requests is empty, and if not, it chooses another request and tells the disk to start servicing it. At first, you may want to use a FIFO queue (e.g., a LinkedList) to record requests. Once everything is working well, you can then modify the code that adds or removes requests to implement the Elevator algorithm. This approach lets you see whether the Elevator algorithm actually improves performance.
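As a concrete starting point, here is a minimal sketch of such a Request monitor. The field and method names (blockNumber, isWrite, done, and so on) are illustrative assumptions, not part of the provided classes:

    // A sketch of Request as a monitor; field names are assumptions.
    class Request {
        final int blockNumber;
        final byte[] buffer;
        final boolean isWrite;
        private boolean done = false;

        Request(int blockNumber, byte[] buffer, boolean isWrite) {
            this.blockNumber = blockNumber;
            this.buffer = buffer;
            this.isWrite = isWrite;
        }

        // Called by the requesting process; blocks until finish() runs.
        synchronized void await() {
            while (!done) {
                try {
                    wait();
                } catch (InterruptedException e) {
                    // ignore and re-check the condition
                }
            }
        }

        // Called by Elevator.endIO when the completion interrupt arrives.
        synchronized void finish() {
            done = true;
            notify();
        }
    }

The done flag makes the monitor safe even if finish() runs before await() gets around to calling wait.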
After you get this part of the project working, write BufferPool and put it "between" the Kernel and the Elevator by making the kernel call BufferPool.read or BufferPool.write, instead of Elevator.read or Elevator.write. When a request arrives, BufferPool first tries to satisfy it from the cache. If the requested block isn't in the cache, it chooses the least recently used buffer, calls Elevator.write to clean it if necessary, and calls Elevator.read to re-fill it with the requested block if the original call was BufferPool.read. Also write a method BufferPool.flush to be used for writing all remaining dirty blocks back to disk at shutdown.
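Putting those pieces together, the BufferPool read path might look roughly like the sketch below. Buffer, acquireBuffer, and releaseBuffer are assumed names for whatever bookkeeping your design uses:

    // Sketch of BufferPool.read; Buffer and the helpers are assumptions.
    public void read(int blockNumber, byte[] dest) {
        Buffer b = acquireBuffer(blockNumber);  // cache hit, or the LRU victim
        if (b.blockNumber != blockNumber) {     // miss: victim holds another block
            if (b.dirty) {
                elevator.write(b.blockNumber, b.data);  // clean the victim first
                b.dirty = false;
            }
            elevator.read(blockNumber, b.data);         // re-fill with the new block
            b.blockNumber = blockNumber;
        }
        System.arraycopy(b.data, 0, dest, 0, b.data.length);
        releaseBuffer(b);
    }

BufferPool.write can be symmetric; since every write covers a whole block, it need only copy into the buffer and mark it dirty, with no elevator.read.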
See also Q10 and Q16.
See also Q7, Q17, and Q18.
See also Q17.
Like a real disk, the simulated disk can only read and write whole blocks. If you look at the source of class Disk, you will see that it only checks that the length of the array is at least BLOCK_SIZE. If you give it a bigger array, it will only read or write the first BLOCK_SIZE bytes. However, there is no reason to use a larger array.
    command1 & command2 & command3

All three commands will be started concurrently, by calling Kernel.exec. The Shell then calls Kernel.join to wait for each command to complete before issuing the next prompt.
You can test your program with something like
    % java Boot 10 Disk 100 Shell
    Shell> DiskTester sequential & DiskTester random
    Shell> exit
    %

This example assumes your DiskTester.main looks at args[0] to see what kind of test to run.
You can also include the command line as an argument to the Shell, for example
    % java Boot 10 Disk 100 Shell 'command1 & command2 & command3'

This is handy for tests to be run over and over again. Create a file called runtest containing the single line

    java Boot 10 Disk 100 Shell 'command1 & command2 & command3'

and make it executable by typing

    chmod +x runtest

Then you can simply type "runtest" to run your test.
    class DiskTester {
        public static void main(String[] args) {
            int blockSize = Library.getDiskBlockSize();
            byte[] buffer = new byte[blockSize];
            Library.writeDiskBlock(1, buffer);
            Library.readDiskBlock(1, buffer);
        }
    }

This test makes sure you can get through all the layers of software. Because Java fills a newly allocated array of bytes with nulls, this test simply clears block 1 of the disk to nulls and then reads that block of nulls back. The next step would be to fill the block with a known test pattern before you write the data to disk, and to check the data you read from a particular block to make sure it contains the "right" pattern.
    private static void setPattern(int blockNumber, byte[] buffer) {
        for (int i = 0; i < buffer.length; i++) {
            buffer[i] = (byte) (blockNumber + i);
        }
    }

    private static boolean checkPattern(int blockNumber, byte[] buffer) {
        for (int i = 0; i < buffer.length; i++) {
            if (buffer[i] != (byte) (blockNumber + i)) {
                return false;
            }
        }
        return true;
    }

(These methods are declared static so they can be called directly from DiskTester.main.) When you have this all working and you start to add performance enhancements to your project (the Elevator; the BufferPool), you can make your tester write several different blocks in various patterns to check whether you get the performance you expect.
Let's look at two common paths through the code, assuming that there is no BufferPool yet.
    case INTERRUPT_DISK:
        break;

Code you added to this case will call Elevator.endIO, which calls notifyAll. The application program thread that started the operation wakes up, notices that its operation has completed, and returns from Elevator.read, Kernel.interrupt, and Library.readDiskBlock. If there are more requests waiting, endIO will select one and call disk.beginRead or disk.beginWrite to get the disk started on another operation. Note that the thread calling beginRead in this case is the disk thread, so in a sense, the disk is calling itself! However, beginRead is non-blocking, so there is no problem with circular waiting, which could otherwise lead to deadlock.
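In code, the completion path might look something like this sketch; currentRequest, queue, chooseNext, and diskIsBusy (discussed further below) are assumed names:

    // Sketch of Elevator.endIO; field and helper names are assumptions.
    public synchronized void endIO() {
        currentRequest.finish();        // wake the process that issued it
        currentRequest = chooseNext();  // null if the queue is empty
        if (currentRequest == null) {
            diskIsBusy = false;
        } else if (currentRequest.isWrite) {
            disk.beginWrite(currentRequest.blockNumber, currentRequest.buffer);
        } else {
            disk.beginRead(currentRequest.blockNumber, currentRequest.buffer);
        }
    }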
See also Q18.
You could do most of this project by adding new code to Kernel.java, but it is cleaner (and easier to debug) to put most of your code in the new classes Elevator and BufferPool. You will create one instance of each of these classes in Kernel.doPowerOn and store them in static fields of Kernel. The Elevator needs to be able to access the Disk, so pass a reference to it to the Elevator constructor:
    scheduler = new Elevator(disk);

Similarly, you can pass various information to the BufferPool constructor.
When a read request comes from an application program (via a call to Kernel.interrupt(INTERRUPT_USER, ...)), your doDiskRead method in the kernel will call Elevator.read. This call is blocking; that is, it will not return until the data has been copied into the program's buffer.
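The kernel side can then be a thin wrapper, along these lines (a sketch; the exact parameters and return convention are assumptions):

    // Sketch: the kernel simply delegates to the blocking Elevator call.
    private static int doDiskRead(int blockNumber, byte[] buffer) {
        scheduler.read(blockNumber, buffer);  // returns only after the copy is done
        return 0;                             // assume 0 means success
    }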
See also Q13.
Each buffer in the BufferPool should contain an indication of the block number of the disk block it currently holds (if any). Each read or write call for block b must use the cached copy of b if there is one. This is important not only for performance, but also for correctness. For example, suppose one application writes block 17 and another reads block 17 twice. If the second read "doesn't notice" that the block is in the cache (perhaps because of some race condition) and reads the copy from disk instead, it will look to the application like time went backwards: the first read will see the new data written by the other application, but the second read will see the old data. Thus adding or removing a block from the cache must be an atomic action; each search for a block in the cache must come entirely before or entirely after the change. Proper use of synchronized methods can guarantee this property.
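For example, a single synchronized method can make the lookup and the claiming of a victim buffer one atomic step, so two concurrent requests for the same block can never both miss and fetch separate copies. Buffer, markRecentlyUsed, leastRecentlyUsed, and oldBlockNumber are assumed names:

    // Sketch: atomic lookup-or-claim; helper names are assumptions.
    private synchronized Buffer findOrClaim(int blockNumber) {
        for (Buffer b : buffers) {
            if (b.blockNumber == blockNumber) {
                markRecentlyUsed(b);
                return b;                       // hit: always use the cached copy
            }
        }
        Buffer victim = leastRecentlyUsed();
        victim.oldBlockNumber = victim.blockNumber; // remember what to clean
        victim.blockNumber = blockNumber;           // claim it before unlocking
        return victim;
    }

Note that any disk I/O needed to clean or fill the victim must happen after this method returns, outside the BufferPool lock (see the discussion of synchronization below).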
The other kind of "correctness" you need to test is that the Elevator algorithm and the BufferPool are doing their jobs. If they are working correctly, certain kinds of workloads should see striking improvements in performance. For example, suppose one DiskTester program accesses blocks 1, 2, 3, 4, ..., while another copy, running in parallel with it, accesses blocks 1001, 1002, 1003, 1004, .... If the disk accesses are interleaved, the disk will spend most of its time jumping back and forth between the two regions, but if they are queued properly by the Elevator algorithm, the disk might satisfy requests 1, 2, 3, 4, ... first, and then seek to block 1001 and satisfy 1001, 1002, 1003, 1004, .... The overall throughput should be much better. Note that you can only see this improvement if you have more than one job running concurrently (see Q5).
Similarly, the cache should substantially improve the performance of workloads with a high degree of locality (most references are to a small set of blocks, which fits in the cache), but not of workloads that randomly hit all the blocks on disk (unless your cache is as big as the whole disk!).
In summary, you should think carefully about what your implementation is trying to accomplish, and design DiskTester to prove that it is doing the job.
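For instance, a DiskTester that dispatches on args[0] might look like this sketch (the block ranges, counts, and mode names are arbitrary choices):

    // Sketch of a DiskTester with selectable access patterns.
    class DiskTester {
        public static void main(String[] args) {
            int blockSize = Library.getDiskBlockSize();
            byte[] buffer = new byte[blockSize];
            java.util.Random random = new java.util.Random();
            boolean sequential = args.length > 0 && args[0].equals("sequential");
            for (int i = 0; i < 50; i++) {
                // sequential mode sweeps one region; random mode scatters accesses
                int block = sequential ? i : random.nextInt(100);
                Library.readDiskBlock(block, buffer);
            }
        }
    }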
The fix is the same here as it was in Project 2. Make BufferPool.read and BufferPool.write non-synchronized, but carefully wrap the portions of them that manipulate fields of BufferPool in synchronized (private) methods of BufferPool. Similarly, make sure the methods Elevator.read and Elevator.write are not synchronized. The key point is that calls to Elevator.read and Elevator.write must not be in (or called by) synchronized methods of BufferPool.
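The resulting structure might look like this sketch, where findOrClaim is the synchronized helper shown earlier and needsCleaning and fillAndMarkDirty are assumed helpers. (A real version also needs some way, such as a per-buffer busy flag, to keep other threads away from the buffer while its I/O is in progress.)

    // Sketch: the public method is NOT synchronized, so the blocking
    // Elevator calls run without holding the BufferPool lock.
    public void write(int blockNumber, byte[] src) {
        Buffer b = findOrClaim(blockNumber);  // synchronized: atomic lookup/claim
        if (b.needsCleaning()) {
            elevator.write(b.oldBlockNumber, b.data);  // blocking; lock not held
        }
        fillAndMarkDirty(b, src);             // synchronized: update pool fields
    }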
The simplest solution I can think of is to add a separate "cleaner" thread. When the LRU buffer allocator finds a dirty buffer, it marks it as "needs cleaning" and then looks for another buffer. It blocks only in the unlikely case that all buffers are dirty. The cleaner thread spends its whole life looking for a buffer labeled "needs cleaning" and cleaning it by calling Elevator.write. With this solution, you should see a measurable improvement in performance when there are many DiskTester processes writing blocks scattered around the disk. You may even find that it helps to have multiple cleaner threads.
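A cleaner thread could be as simple as the following sketch, assuming a BufferPool method awaitDirtyBuffer that blocks until some buffer has been marked "needs cleaning", and a markClean method to reset the label:

    // Sketch of a cleaner thread; the BufferPool methods are assumptions.
    class Cleaner extends Thread {
        private final BufferPool pool;
        private final Elevator elevator;

        Cleaner(BufferPool pool, Elevator elevator) {
            this.pool = pool;
            this.elevator = elevator;
        }

        public void run() {
            for (;;) {
                Buffer b = pool.awaitDirtyBuffer();     // block until there is work
                elevator.write(b.blockNumber, b.data);  // clean it
                pool.markClean(b);
            }
        }
    }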
This project is already pretty challenging for the time allotted, so you are not required to implement a cleaning thread. If you finish early and everything seems to be working perfectly, you might want to try out this idea and see how well it works.
See also Q11.
    while (Disk.busy) {
        wait();
    }

After the access to Disk.busy, the Disk process may complete its operation and call Kernel.interrupt, which calls Elevator.endIO, which calls notifyAll. If all this happens before the first process calls wait, the notifyAll may see no waiting threads and hence have no effect. The application process will stay blocked forever in wait -- deadlock!
The solution is to have a separate variable Elevator.diskIsBusy. Its meaning is slightly different from that of Disk.busy. Whereas Disk.busy == true means that the disk is currently working on a request, Elevator.diskIsBusy == true means that the Elevator has told the Disk to start an operation but has not yet been informed of its completion. Because diskIsBusy is a field of class Elevator, it can be accessed in a synchronized method of Elevator, allowing you to inspect the variable and take an action based on its value, all as part of a single atomic operation.
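Concretely, Elevator.read might use diskIsBusy like this (a sketch; startOrEnqueue, queue, and currentRequest are assumed names):

    // Sketch: the test of diskIsBusy and the action taken on it form one
    // atomic step, closing the race described above. Note that read itself
    // is NOT synchronized, so await() runs without the Elevator lock.
    public void read(int blockNumber, byte[] buffer) {
        Request r = new Request(blockNumber, buffer, false);
        startOrEnqueue(r);
        r.await();  // block until endIO calls r.finish()
    }

    private synchronized void startOrEnqueue(Request r) {
        if (diskIsBusy) {
            queue.add(r);
        } else {
            diskIsBusy = true;
            currentRequest = r;
            if (r.isWrite) {
                disk.beginWrite(r.blockNumber, r.buffer);
            } else {
                disk.beginRead(r.blockNumber, r.buffer);
            }
        }
    }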
If you start the system with

    % java Boot 10 Disk 100 Shell

the Kernel doesn't shut down until the Shell terminates. Suppose you type some commands (for example, various runs of DiskTester) and then type exit. The commands can only access the disk by calling readDiskBlock() and writeDiskBlock(), which are blocking, so they will not terminate until all their I/O operations have completed. Thus, the Elevator queue should be empty when you type exit.
If you type a command such as DiskTester & exit, the exit could take effect while the DiskTester was in the middle of a disk operation. However, the Shell implements the exit command by calling System.exit(0), which bypasses the normal Kernel shutdown and would prevent your flush method from being called anyhow. You might say that this is a bug in the Shell, and you would be right. However, it's our bug, not yours, so you don't have to deal with it. You may assume exit is never combined with any other command. In short, Elevator.flush() is not necessary.
However, BufferPool.flush() is definitely necessary. Otherwise, the only reason anything ever gets written to disk is that a "dirty" buffer in the buffer pool is found by the LRU replacement algorithm. Note that to test this feature, you will need to make two runs. First, remove the file DISK if it exists. Then run a test that writes known data to one or more blocks, and shut the system down completely (that is, exit from the Shell). Finally, run another test that checks those blocks and verifies that they contain the correct data. See also Q2.
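A flush can be quite short, as in this sketch (buffers, dirty, and elevator are assumed names; flush runs at shutdown, after all user processes have exited, so there is no contention for the pool):

    // Sketch of BufferPool.flush: write back every remaining dirty buffer.
    public void flush() {
        for (Buffer b : buffers) {
            if (b.dirty) {
                elevator.write(b.blockNumber, b.data);
                b.dirty = false;
            }
        }
    }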