CS 537 Spring 2007
Programming Assignment 3
Frequently Asked Questions
Last modified Mon Mar 19 15:22:35 CDT 2007

Q1: But I'm wondering how I choose a hypothesis. Are we supposed to prove that
each one is a specific algorithm such as FCFS, RR, etc. (for example, proving
our assumption that the first one is FCFS)?

A: No, this isn't simply an exercise in “guess the algorithm.” The algorithms
are inspired by FCFS, RR, etc., and figuring out which algorithm is which may
help you guess how they should behave, but you may find some surprises. FCFS,
RR, etc. are algorithms for short-term scheduling of the CPU, and the
situation here is quite different. For example, it is easy to define waiting
time in
this setting, but what would “penalty ratio” mean? Don't focus solely on
time while ignoring such measures as requests completed and grain consumed.
A CPU scheduler is allocating a resource called CPU time. Its goal is to
allocate time fairly and efficiently. In this case, the goal is to allocate
corn, barley, rice and wheat. How well does each of the algorithms achieve
this goal?
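
(For comparison, the “penalty ratio” used to evaluate CPU schedulers is
usually defined as turnaround time divided by required service time, so a
request that never waits scores exactly 1. It's not obvious what the analogous
ratio would be for a brewer waiting on several kinds of grain at once, which
is the point of the contrast above.)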

Q2: Are we supposed to figure out which algorithm is best?

A: No. There's unlikely to be a “best” algorithm, for a couple of reasons.
First, there are various objectives: increase throughput, minimize waiting
time, improve “fairness” (that is, decrease the variations in performance
seen by different customers), and so on. Second, the results may be sensitive
to offered load. An algorithm that's “good” under light load may not behave
well under heavy load. Perhaps the results are sensitive to other parameters,
such as the request rate (see Brewer.meanSleepTime), the supply rate
(Supplier.meanSleepTime), the sizes of requests
(Brewer.maxRequest), variation among brewers (see the Brewer
constructor) or the relations between them.
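
One way to make the “fairness” objective concrete is to look at the spread of
some per-brewer statistic, such as the number of requests each brewer
completed. The sketch below is not part of the assignment, and the perBrewer
numbers are made up, but it shows the idea: compute the mean and standard
deviation across brewers, and treat a smaller spread (relative to the mean) as
“fairer.”

    /** Toy illustration: quantify fairness as the spread of per-brewer results.
     *  The perBrewer array is hypothetical sample data, not output of P3. */
    public class Fairness {
        public static void main(String[] args) {
            double[] perBrewer = { 12, 9, 0, 15, 11 };  // requests completed by each brewer

            double sum = 0;
            for (double x : perBrewer) sum += x;
            double mean = sum / perBrewer.length;

            double sumSq = 0;
            for (double x : perBrewer) sumSq += (x - mean) * (x - mean);
            double stdDev = Math.sqrt(sumSq / perBrewer.length);

            System.out.printf("mean = %.2f, std dev = %.2f%n", mean, stdDev);
        }
    }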

Q3: While we've been running benchmarks on Problem 3, we've noticed something
strange. As we increase the number of brewers, the system generally performs
“slower” as wait times increase, as would be expected. However, at around 30
brewers, the system starts to perform much better, and then continues the
expected trend again. Then at precisely 97 brewers, the system suddenly
performs off the charts. What's going on?

A: First, how often did you run each experiment? There's quite a bit of
randomness in the simulation, so running it twice with the same parameters may
give very different results. When I tried running
    java P3 1 100 100
a dozen times, I saw mean waiting times ranging from 0.09 sec to 0.56 sec.
It's dangerous to draw conclusions from a single run at each parameter
setting.
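
If you'd rather not rerun the command by hand, a small driver along these
lines will do it for you. This is only a sketch, not part of the assignment;
it assumes the command shown above works from your current directory, and you
can substitute whatever arguments you are actually testing.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    /** Run the simulator several times with identical arguments so the
     *  run-to-run variation is easy to see. */
    public class Repeat {
        public static void main(String[] args) throws Exception {
            for (int trial = 1; trial <= 5; trial++) {
                System.out.println("=== trial " + trial + " ===");
                ProcessBuilder pb = new ProcessBuilder("java", "P3", "1", "100", "100");
                pb.redirectErrorStream(true);          // fold stderr into stdout
                Process p = pb.start();
                BufferedReader out =
                    new BufferedReader(new InputStreamReader(p.getInputStream()));
                for (String line; (line = out.readLine()) != null; )
                    System.out.println(line);          // echo the simulator's output
                p.waitFor();
            }
        }
    }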

Second, if you look at the entire output, not just the last line, you will see
that most brewers report zero completed requests and zero waiting time. If you
look at P3.java, you will see that the value printed for “Total waiting time”
is what's returned by Brewer.waitingTime(), and if you look at Brewer.java,
you will see that this is the value of the private field totalServiceTime,
which is updated in only one place, when a request successfully completes.
Thus a brewer that never gets anything will report no waiting time, and if you
have a very large number of brewers but only 100 rounds of production, you
will have a large number of brewers reporting no waiting time, leading to a
low mean waiting time.
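
Here is a toy illustration of that effect. The per-brewer numbers below are
made up, not actual P3 output, but they show how a mean taken over all brewers
can hide long waits experienced by the few brewers that were actually served.

    /** Hypothetical sample data: 8 of 10 brewers never completed a request
     *  and therefore report zero waiting time. */
    public class WaitStats {
        public static void main(String[] args) {
            double[] totalWait = { 20.0, 18.0, 0, 0, 0, 0, 0, 0, 0, 0 };  // seconds, per brewer
            int[]    completed = {    5,    4, 0, 0, 0, 0, 0, 0, 0, 0 };  // requests, per brewer

            double waitSum = 0;
            int requestSum = 0, servedBrewers = 0;
            for (int i = 0; i < totalWait.length; i++) {
                waitSum += totalWait[i];
                requestSum += completed[i];
                if (completed[i] > 0) servedBrewers++;
            }

            // Averaging over all brewers is dragged toward zero by the idle ones ...
            System.out.printf("mean wait over all brewers:      %.2f s%n", waitSum / totalWait.length);
            // ... while the brewers that actually got grain waited far longer.
            System.out.printf("mean wait over served brewers:   %.2f s%n", waitSum / servedBrewers);
            System.out.printf("mean wait per completed request: %.2f s%n", waitSum / requestSum);
        }
    }

With these made-up numbers, the mean over all ten brewers is 3.80 seconds,
while the two brewers that were actually served waited 19 seconds each on
average (about 4.22 seconds per completed request).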

Finally, read the comment in the Supplier constructor. The supplier attempts
to supply grain at about the same average rate the brewers consume it. How
well does it succeed? Can you tell by looking at the statistics printed by the
program? Would additional statistics help? What would happen if you increased
the number of brewers without increasing the supply rate? What would happen if
the supplier supplied grain much faster than the brewers could consume it?