Next implement readDiskBlock and writeDiskBlock. You will need to write a class DiskQueue and a helper class Request, which represents a read or write request that has not yet completed. When a new request arrives, Kernel.interrupt() should call DiskQueue.read or DiskQueue.write. This method creates a Request, adds it to the queue of requests, and, if the disk isn't busy, calls Disk.beginRead or Disk.beginWrite to start an operation. It then calls wait to sleep until its request has completed. When a disk-completion interrupt occurs, Kernel.interrupt() calls DiskQueue.endIO, which wakes up the process that requested the just-completed operation so that it can return from the readDiskBlock or writeDiskBlock call.
The easiest way to wake up the right process is to make Request a monitor. The requesting process calls request.await(), which calls wait, and DiskQueue.endIO calls request.finish(), which calls notify. DiskQueue.endIO also checks whether the queue of requests is empty, and if not, it chooses another request and tells the disk to start servicing it. At first, you may want to use a simple FIFO queue (e.g., an ArrayList) to record requests. Once everything is working well, you can then modify the code that adds and removes requests to implement the Elevator algorithm. This approach allows you to see whether the Elevator algorithm actually improves performance.
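As a concrete starting point, the Request monitor might look like the following sketch. The done flag and the field names are illustrative assumptions, not a required interface.

```java
// A hedged sketch of the Request monitor described above.
class Request {
    static final int READ = 0, WRITE = 1;   // operation kinds (assumed)
    final int op;                           // READ or WRITE
    final int blockNumber;                  // target disk block
    final byte[] buffer;                    // caller's buffer
    private boolean done = false;           // set by finish()

    Request(int op, int blockNumber, byte[] buffer) {
        this.op = op;
        this.blockNumber = blockNumber;
        this.buffer = buffer;
    }

    // Called by the requesting process; blocks until finish() is called.
    synchronized void await() {
        while (!done)                       // loop guards against spurious wakeups
            try { wait(); } catch (InterruptedException e) {}
    }

    // Called by DiskQueue.endIO when the completion interrupt arrives.
    synchronized void finish() {
        done = true;
        notify();                           // exactly one process waits per Request
    }
}
```

Since exactly one process ever waits on a given Request, notify() suffices here; notifyAll() would also be correct.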
After you get this part of the project working, write DiskCache and put it “between” the Kernel and the DiskQueue by making the kernel call DiskCache.read or DiskCache.write, instead of DiskQueue.read or DiskQueue.write. When a request arrives, DiskCache first tries to satisfy it from the cache. If the requested block isn't in the cache, it chooses a buffer, calling DiskQueue.write to clean any dirty buffers it encounters, and calls DiskQueue.read to re-fill it with the requested block if the original call was DiskCache.read. Also write a method DiskCache.shutdown() to be used for writing all remaining dirty blocks back to disk at shutdown.
See also Q10 and Q16.
See also Q7, Q17, and Q18.
See also Q17.
Like a real disk, the simulated disk can only read and write whole blocks. If you look at the source of class Disk, you will see that it only checks that the length of the array is at least BLOCK_SIZE. If you give it a bigger array, it will only read or write the first BLOCK_SIZE bytes. However, there is no reason to use a larger array.
    command1 & command2 & command3

All three commands will be started concurrently, by calling Kernel.exec. The Shell then calls Kernel.join to wait for each command to complete before issuing the next prompt.
You can test your program with something like
    % java Boot 10 Disk 100 Shell
    Shell> DiskTester -v 123 & DiskTester -v 456
    Shell> exit
    %

You can also include the command line as an argument to the Shell, for example

    % java Boot 10 Disk 100 Shell 'command1 & command2 & command3'

This is handy for tests to be run over and over again. Create a file called runtest containing the single line

    java Boot 10 Disk 100 Shell 'command1 & command2 & command3'

and make it executable by typing

    chmod +x runtest

Then you can simply type “./runtest” to run your test. You can also modify your Makefile to change the line following “run:” to read

    java -ea Boot 10 Disk 100 Shell 'command1 & command2 & command3'

and simply type “make” to re-compile and run your program.
Warning: This line must start with a TAB character, not spaces.
If you specify -c the reads and writes will be clustered rather than uniform: 90% of the accesses will be to 10% of the blocks. The particular set of blocks chosen as the “hot” blocks depends on the last digit of the id specified on the command line.
Let's look at two common paths through the code, assuming that there is no DiskCache yet.
    case INTERRUPT_DISK:
        break;

Code you added to this case will call DiskQueue.endIO, which calls request.finish. The application program thread that started the operation wakes up, notices that its operation has completed, and returns from DiskQueue.read, Kernel.interrupt, and Library.readDiskBlock. If there are more requests waiting, endIO will select one and call disk.beginRead or disk.beginWrite to get the disk started on another operation. Note that the thread that is calling beginRead in this case is the disk thread, so in a sense, the disk is calling itself! However, beginRead is non-blocking, so there is no problem with circular waiting, which could lead to deadlock.
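Putting the two paths together, a bare-bones FIFO DiskQueue might look like the following sketch. FakeDisk is a stand-in for the simulated Disk, the Request class is a compact version of the monitor described earlier, and everything except the names read, endIO, beginRead, and beginWrite is an assumption rather than part of the assigned interface.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Compact version of the Request monitor described earlier.
class Request {
    static final int READ = 0, WRITE = 1;
    final int op, blockNumber;
    final byte[] buffer;
    private boolean done = false;

    Request(int op, int blockNumber, byte[] buffer) {
        this.op = op; this.blockNumber = blockNumber; this.buffer = buffer;
    }
    synchronized void await() {
        while (!done)
            try { wait(); } catch (InterruptedException e) {}
    }
    synchronized void finish() { done = true; notify(); }
}

// Minimal stand-in for the simulated Disk; the real beginRead/beginWrite
// are also non-blocking, which is why endIO may safely call them.
class FakeDisk {
    void beginRead(int block, byte[] buf) {}
    void beginWrite(int block, byte[] buf) {}
}

// Hedged sketch of DiskQueue with a plain FIFO queue (no Elevator yet).
class DiskQueue {
    private final FakeDisk disk;
    private final Queue<Request> queue = new ArrayDeque<>();
    private Request current;                  // request the disk is working on

    DiskQueue(FakeDisk disk) { this.disk = disk; }

    // Blocking: returns only after the operation has completed.
    public void read(int block, byte[] buf) {
        Request r = new Request(Request.READ, block, buf);
        enqueue(r);
        r.await();                            // sleep until endIO finishes us
    }

    // Called from the INTERRUPT_DISK case in Kernel.interrupt().
    public synchronized void endIO() {
        current.finish();                     // wake the requesting process
        current = queue.poll();
        if (current != null) start(current);  // keep the disk busy
    }

    private synchronized void enqueue(Request r) {
        if (current == null) { current = r; start(r); }
        else queue.add(r);
    }

    private void start(Request r) {
        if (r.op == Request.READ) disk.beginRead(r.blockNumber, r.buffer);
        else disk.beginWrite(r.blockNumber, r.buffer);
    }
}
```

Note that endIO calls start() while holding the DiskQueue lock; that is safe precisely because beginRead and beginWrite are non-blocking.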
See also Q12, Q18 and Q21.
You could do most of this project by adding new code to Kernel.java, but it is cleaner (and easier to debug) to put most of your code in new classes DiskQueue and DiskCache. You will create one instance of each of these classes in Kernel.doPowerOn and store them in static fields of Kernel. The DiskQueue needs to be able to access the Disk, so pass a reference to it to the DiskQueue constructor:
    scheduler = new DiskQueue(disk);

Similarly, you can pass various information to the DiskCache constructor.
When a read request comes from an application program (via a call to Kernel.interrupt(INTERRUPT_USER, ...)), your doDiskRead method in the kernel will call DiskQueue.read. This call is blocking; that is, it will not return until the data has been copied into the program's buffer.
See also Q13, Q14, Q18, and Q21.
Each buffer in the DiskCache should contain an indication of the block number of the disk block it currently holds (if any). Each read or write call for block b must use the cached copy of b if there is one. This is important not only for performance, but also for correctness. For example, suppose one application writes block 17 and another reads block 17 twice. If the second read “doesn't notice” that the block is in the cache (perhaps because of some race condition) and reads the copy from disk instead, it will look to the application like time went backwards: the first read will see the new data written by the other application, but the second read will see the old data. Thus adding or removing a block from the cache must be an atomic action; each search for a block in the cache must come entirely before or entirely after the change. Proper use of synchronized methods can guarantee this property.
In summary, you should think carefully about what your implementation is trying to accomplish, and run tests that prove that it is doing the job.
The fix is the same here as it was in Project 2. Make DiskCache.read and DiskCache.write non-synchronized, but carefully wrap the portions of them that manipulate fields of DiskCache in synchronized (private) methods of DiskCache. Similarly, make sure the methods DiskQueue.read and DiskQueue.write are not synchronized. The key point is that calls to DiskQueue.read and DiskQueue.write must not appear in (or be called by) synchronized methods of DiskCache.
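The pattern looks like this toy example (not the real DiskCache): the public method holds no lock while it blocks, and only the quick bookkeeping helpers are synchronized. All names here are invented for the illustration, and slowDiskRead stands in for the blocking DiskQueue.read.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the locking pattern, not the real DiskCache.
class CachePattern {
    private final Map<Integer, byte[]> cache = new HashMap<>();

    public byte[] read(int block) {            // deliberately NOT synchronized
        byte[] data = lookup(block);           // synchronized helper: quick
        if (data == null) {
            data = slowDiskRead(block);        // blocking call: lock NOT held
            insert(block, data);               // synchronized helper: quick
        }
        return data;
    }

    private synchronized byte[] lookup(int block) { return cache.get(block); }
    private synchronized void insert(int block, byte[] d) { cache.put(block, d); }

    private byte[] slowDiskRead(int block) {   // stands in for DiskQueue.read
        return new byte[] { (byte) block };
    }
}
```

Note that this toy still has a double-miss race: two processes that miss on the same block will both read it from disk. The “contains block b but not yet ready” marking discussed elsewhere in this FAQ is one way to close that hole.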
Another approach is to add a separate “cleaner” thread. When the Clock buffer allocator finds a dirty buffer, it marks it as “needs cleaning” and then looks for another buffer. It only blocks in the unlikely case that all buffers are dirty. The cleaner thread spends its whole life looking for a buffer labeled “needs cleaning” and cleaning it by calling DiskQueue.write. With this solution, you should see a measurable improvement in performance in the case that there are many DiskTester processes writing blocks scattered around the disk. You may even find that it helps to have multiple cleaner threads.
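A cleaner thread might be structured like the following sketch. For simplicity it receives “needs cleaning” buffers through a queue rather than scanning buffer flags, and writeToDisk stands in for a call to DiskQueue.write; all names are invented.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Toy sketch of a cleaner thread (queue handoff instead of flag scanning).
class CleanerDemo {
    private final BlockingQueue<Integer> needsCleaning = new LinkedBlockingQueue<>();
    final AtomicInteger cleaned = new AtomicInteger();   // test hook

    // The Clock allocator would call this when it finds a dirty buffer.
    void markNeedsCleaning(int block) { needsCleaning.add(block); }

    private void writeToDisk(int block) {     // stands in for DiskQueue.write
        cleaned.incrementAndGet();
    }

    // The cleaner spends its whole life waiting for work and writing it back.
    Thread startCleaner() {
        Thread t = new Thread(() -> {
            try {
                while (true) writeToDisk(needsCleaning.take());
            } catch (InterruptedException e) { /* shutdown */ }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }
}
```

Multiple cleaner threads would just mean calling startCleaner() more than once; BlockingQueue.take() hands each buffer to exactly one cleaner.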
This project is already pretty challenging for the time allotted, so you are not required to implement either of these two suggested improvements. If you finish early and everything seems to be working perfectly, you might want to try out this idea and see how well it works.
See also Q11.
A solution is to have a separate variable DiskQueue.diskIsBusy. Its meaning is slightly different than Disk.busy. Whereas Disk.busy == true means that the disk is currently working on a request, DiskQueue.diskIsBusy == true means that the DiskQueue has told the Disk to start on an operation but has not yet been informed of its completion. Because diskIsBusy is a field of class DiskQueue, it can be accessed in a synchronized method of DiskQueue, allowing you to inspect the variable and take an action based on its value, all as part of a single atomic operation.
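The atomic test-and-set that diskIsBusy enables might look like this sketch. The class and method names are invented; the caller is assumed to call Disk.beginRead or Disk.beginWrite itself when submit returns true.

```java
import java.util.ArrayDeque;

// Sketch of the diskIsBusy idea: because the flag is a field of the queue
// object, a synchronized method can test it and act on it atomically.
class BusyFlagQueue {
    private boolean diskIsBusy = false;   // we started an op; no completion yet
    private final ArrayDeque<Integer> pending = new ArrayDeque<>();

    // Add a request; returns true if the caller should start the disk now.
    synchronized boolean submit(int block) {
        if (!diskIsBusy) {
            diskIsBusy = true;            // inspect-and-set, atomically
            return true;
        }
        pending.add(block);               // disk busy: just queue it
        return false;
    }

    // Called on the completion interrupt; returns the next block to start,
    // or -1 if the queue is empty and the disk goes idle.
    synchronized int complete() {
        Integer next = pending.poll();
        if (next == null) { diskIsBusy = false; return -1; }
        return next;                      // diskIsBusy stays true
    }
}
```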
    % java Boot 10 Disk 100 Shell

the Kernel doesn't shut down until the Shell terminates. Suppose you type some commands (for example, various runs of DiskTester) and then type exit. The commands can only access the disk by calling readDiskBlock() and writeDiskBlock(), which are blocking, so they will not terminate until all their I/O operations have completed. Thus, the DiskQueue queue should be empty when you type exit.
If you type a command such as DiskTester & exit, the exit could take effect while the DiskTester was in the middle of a disk operation. However, the Shell implements the exit command by calling System.exit(0), which bypasses the normal Kernel shutdown and would prevent your shutdown method from being called anyhow. You might say that this is a bug in the Shell, and you would be right. However, it's our bug, not yours, so you don't have to deal with it. You may assume exit is never combined with any other command. In short, DiskQueue.shutdown() is not necessary.
However, DiskCache.shutdown() is definitely necessary. Otherwise, the only reason anything ever gets written to disk is that a “dirty” buffer in the buffer pool is found by the Clock replacement algorithm. Note that to test this feature, you will need to make two runs. First remove the file DISK if it exists. Then run a test that writes known data to one or more blocks and shut down the system completely (that is, exit from the Shell). Finally, run another test that checks those blocks and verifies that they contain the correct data. See also Q2.
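The shutdown sweep itself can be quite small, as in this sketch; Buffer, its fields, and writeBack (standing in for a call to DiskQueue.write) are assumed names, and the caller is assumed to be the kernel's normal shutdown path.

```java
// Hedged sketch of DiskCache.shutdown(): write every remaining dirty
// buffer back to disk before the system exits.
class ShutdownSketch {
    static class Buffer {
        int blockNumber = -1;   // -1: holds no block
        boolean dirty = false;
        byte[] data = new byte[512];
    }

    final Buffer[] pool;        // the buffer pool
    int writesIssued = 0;       // counts calls that would go to DiskQueue.write

    ShutdownSketch(int nBuffers) {
        pool = new Buffer[nBuffers];
        for (int i = 0; i < nBuffers; i++) pool[i] = new Buffer();
    }

    private void writeBack(Buffer b) {   // stands in for DiskQueue.write
        writesIssued++;
        b.dirty = false;
    }

    // Called once, at shutdown, after all application processes have exited.
    synchronized void shutdown() {
        for (Buffer b : pool)
            if (b.blockNumber != -1 && b.dirty) writeBack(b);
    }
}
```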
One possible solution to this problem is for P1 to lock the buffer while it is waiting for the read to complete. In somewhat more detail, when P1 finds block 20 is not in the cache and allocates a buffer b to hold it, it should immediately mark b as “contains block 20 but is not yet ready”. When P2 looks for block 20, it will find buffer b and wait until the “not ready” flag is turned off. When P1's read request to the DiskQueue completes, it clears the “not ready” status and does a notifyAll() on b, allowing P2 to continue. Note that there should be a single synchronized method of DiskCache that looks for a buffer containing block 20 and if it doesn't find it, allocates a buffer and marks it as “contains 20 but not ready”. However the call to DiskQueue.read() needs to be outside this method, because it does not want to block other processes from using the DiskCache while it is waiting for the read to complete.
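Here is a one-buffer toy version of this protocol, using per-buffer wait/notifyAll as described above. All names are invented, and a real cache would of course have many buffers and a replacement policy; the point is only the atomic find-or-claim plus the “not ready” wait.

```java
// Sketch of the "contains block b but not yet ready" protocol.
class NotReadyDemo {
    static class Buffer {
        int blockNumber = -1;          // -1: holds no block yet
        boolean ready = false;         // false: claimed but not yet filled
        byte[] data = new byte[512];

        synchronized void awaitReady() {
            while (!ready)             // loop guards against spurious wakeups
                try { wait(); } catch (InterruptedException e) {}
        }

        synchronized void markReady() {
            ready = true;
            notifyAll();               // wake every process waiting for this block
        }
    }

    private final Buffer buf = new Buffer();   // one-buffer "cache" for the demo

    // One synchronized method makes find-or-claim a single atomic action.
    private synchronized boolean claim(int block) {
        if (buf.blockNumber == block) return false;  // hit: someone has it
        buf.blockNumber = block;       // miss: claim the buffer immediately,
        buf.ready = false;             // before starting the slow read
        return true;
    }

    byte[] read(int block) {
        if (claim(block)) {
            buf.data[0] = (byte) block;  // stands in for DiskQueue.read (slow)
            buf.markReady();             // clear "not ready" and notifyAll
        } else {
            buf.awaitReady();            // wait until the claimer fills it
        }
        return buf.data;
    }
}
```

Note that the slow fill happens outside any synchronized method of the cache, so other processes can still use the cache while the read is in flight; only the waiters for this particular block sleep.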
There is a small difference if the buffer b chosen by P1 happens to be dirty. P1 immediately marks b as containing block 20. That's a bit of a lie, since b still contains some other block, but P1 also locks b (marks it as unready). It then writes the dirty block back to disk and reads in block 20 before unlocking b. Meanwhile, process P2, looking for block 20, thinks it is in b, so it doesn't try a redundant read of block 20 from disk; but it also sees that b is locked, so it does not try to copy data out of b until b really does contain block 20.