Lecture 2: Machine Organizations
Hardware components:
-
CPU, hardware level-1 cache, and hardware level-2 cache; these three
components are often on the same board;
-
DRAM (or memory);
-
memory controller handles the memory reads and writes by the CPU;
-
Video display, keyboard, and speakers;
-
Disks, floppies, tapes, etc.
-
device controllers: each I/O device has its own controller; each device
controller typically has a (relatively slow) microprocessor and some DRAM
memory;
-
Network interface cards;
-
What links the above components together: buses
-
memory bus, which handles traffic between CPU and main memory (that is,
memory reads and writes), and handles traffic between the I/O bus and the
main memory;
-
I/O bus, which interfaces with I/O devices and moves data between DRAM memory
and I/O devices; the CPU and the I/O bus compete for cycles on the memory bus;
-
memory bus needs to keep up with the CPU's clock speed; I/O bus needs to
accommodate a variety of I/O devices; thus, the memory bus is typically much
faster than the I/O bus;
Change: A Constant Theme in Computer Technology
-
In 1985: 1 MIPS (Million Instruction Per Second) CPU, 1MHz memory bus clock
speed, ~100KHz I/O bus clock speed, 64K-1MB main memory, 100MB of disk
capacity, 10Mb/s ethernet;
-
In 1997: 100MIPS CPU, 200MHz~1GHz memory bus clock speed, ~20MHz I/O bus
clock speed, 64MB -1GB main memory, 4GB-9GB of disk capacity, 100Mb/s ethernet;
-
A factor of 100 improvement in a decade
-
"If the transportation industry had been advancing as fast as the computer
industry, today the trip from Madison to Los Angeles would only take half an hour."
Impacts of technology change on the operating system:
-
running out of address bits: memory addresses were initially 16 bits, but
16 bits can only address up to 64KB of memory. So now we have 32-bit
memory addresses. But these are running out as well --- 32 bits only
allow us to address up to 4GB of main memory. Thus, we will have to adopt
64-bit memory addresses soon.
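The arithmetic behind these limits is easy to check; a quick sketch in Python (illustrative, not part of the original notes):

```python
# Number of distinct bytes addressable with n address bits: 2**n.
def addressable_bytes(bits):
    return 2 ** bits

assert addressable_bytes(16) == 64 * 1024        # 16 bits -> 64KB
assert addressable_bytes(32) == 4 * 1024 ** 3    # 32 bits -> 4GB
# 64 bits reach 16 exabytes, far beyond any main memory today.
print(addressable_bytes(64))
```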
-
from batch processing to interrupt driven:
-
batch processing: the operating system is in charge. It runs:
for ( ; ; ) {
    get a job from the job queue;
    load the job (the executable and the data) from the tape to memory;
    execute the job;
    if the job needs to input data from I/O devices, input them;
    if the job needs to output data to I/O devices, output them;
    after the job finishes, output the results;
}
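The batch loop above can be sketched as runnable Python; the job functions and their outputs are hypothetical placeholders:

```python
from collections import deque

def run_batch(job_queue):
    """Run jobs one at a time, in arrival order, until the queue is empty."""
    results = []
    while job_queue:
        job = job_queue.popleft()   # get a job from the job queue
        output = job()              # load and execute the job
        results.append(output)      # after the job finishes, output the results
    return results

# Hypothetical jobs: each is just a function returning its result.
jobs = deque([lambda: "payroll done", lambda: "report done"])
print(run_batch(jobs))
```

Note that the OS never yields control mid-job: each job runs to completion before the next is even loaded, which is exactly what interrupt-driven designs move away from.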
-
interrupt driven: at any time, there are many devices at work. When
they finish their work or need some attention, they "interrupt" the CPU,
and the kernel inspects the situation and performs the appropriate processing.
The kernel is like a servant, being called at arbitrary times.
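A minimal simulation of the interrupt-driven style: devices post interrupts, and the kernel dispatches the matching handlers. The device names and handler behavior here are made up for illustration:

```python
from collections import deque

pending = deque()   # interrupts raised by devices, awaiting service
handlers = {}       # device name -> handler function

def register(device, handler):
    handlers[device] = handler

def raise_interrupt(device):
    pending.append(device)   # a device asks for the kernel's attention

def kernel_loop():
    """Service every pending interrupt: the kernel 'called' by devices."""
    serviced = []
    while pending:
        device = pending.popleft()
        serviced.append(handlers[device]())
    return serviced

register("disk", lambda: "disk transfer complete")
register("keyboard", lambda: "key pressed")
raise_interrupt("keyboard")
raise_interrupt("disk")
print(kernel_loop())   # handlers run in the order the interrupts arrived
```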
-
Implications of interrupt-driven style of operating systems:
-
synchronization: devices may access the same piece of memory (for example,
CPU and disk may read and write the same piece of DRAM memory); care must
be taken to avoid unpredictable results (for example, the kernel forbids
the CPU to read or write a piece of memory when the disk is reading or
writing it);
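A user-level sketch of the same idea (not kernel code): two threads update a shared counter, and a lock serializes access so no update is lost:

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread touches the counter at a time
            counter += 1

t1 = threading.Thread(target=add_many, args=(100_000,))
t2 = threading.Thread(target=add_many, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)   # 200000: with the lock, no updates are lost
```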
-
example of synchronization: temporarily disabling interrupts to avoid register
corruption. Basic interrupt processing involves the hardware saving the
Program Counter (PC) in a special register, and later reading the PC value
back from that register. If, during the processing of one interrupt,
another interrupt happens, the old PC value is overwritten and lost.
Thus, interrupts must be disabled during the processing of an interrupt.
(Well, that's not the complete story, but it conveys the spirit of the
solution.)
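The save-PC problem can be simulated: there is a single save slot, and a nested interrupt would clobber it unless interrupts stay disabled during processing. All names and values here are illustrative:

```python
saved_pc = None            # the single "special register" holding the saved PC
interrupts_enabled = True
deferred = []              # interrupts held back while disabled

def take_interrupt(pc, nested_pc=None):
    """Save the PC, process the interrupt (a nested one may arrive), restore."""
    global saved_pc, interrupts_enabled
    saved_pc = pc                      # hardware saves the PC in the special register
    interrupts_enabled = False         # disable interrupts while processing
    if nested_pc is not None:
        if interrupts_enabled:
            saved_pc = nested_pc       # would clobber the old PC: the bug we avoid
        else:
            deferred.append(nested_pc) # held until interrupts are re-enabled
    restored = saved_pc                # read the PC back; still the original value
    interrupts_enabled = True
    return restored

print(hex(take_interrupt(0x1000, nested_pc=0x2000)))   # 0x1000, not 0x2000
```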
Processes: a simple way to handle the complexity in an OS
-
A process is the execution of a program;
-
Process states: running, ready, blocked;
-
The kernel schedules the processes to run on the CPU, making it appear as if
each process had its own CPU; this is called "time-sharing";
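Time-sharing can be sketched as a round-robin scheduler that gives each ready process a fixed quantum of CPU time; the process names and quantum below are made up:

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, work_units). Returns names in completion order."""
    ready = deque(processes)
    finished = []
    while ready:
        name, work = ready.popleft()   # pick the next ready process
        work -= quantum                # let it run for one quantum
        if work > 0:
            ready.append((name, work)) # not done: back to the ready queue
        else:
            finished.append(name)      # done: record its completion
    return finished

print(round_robin([("editor", 3), ("compiler", 5), ("shell", 1)], quantum=2))
# -> ['shell', 'editor', 'compiler']: short jobs finish first, but every
#    process makes steady progress instead of waiting for its turn in a batch
```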
-
More on processes in next class;