James G. Mitchell @ Xerox PARC & Jeremy Dion @ Cambridge Univ
Communications of the ACM, April 1982, pages 233-245
This paper compares two file servers: CFS (Cambridge File Server) and XDFS (Xerox Distributed File System)
CFS: intended as the backing store of a virtual-memory system with small segments
Required characteristics
For efficiency, simplicity is favored
XDFS: intended as a file server for databases
Atomicity of transactions, including those spanning multiple files, is important
More parallelism: byte-level locking
Flexible configuration
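The byte-level locking requirement above can be illustrated as a range-overlap check. This is a hypothetical minimal sketch (class and method names are invented), not the XDFS implementation:

```python
# Hypothetical sketch of byte-range locking: two locks conflict only if
# their byte ranges overlap and at least one of them is exclusive (a write).
class ByteRangeLocks:
    def __init__(self):
        self.locks = []  # (client, start, end, exclusive)

    def acquire(self, client, start, end, exclusive):
        for c, s, e, x in self.locks:
            overlap = start < e and s < end
            if overlap and c != client and (exclusive or x):
                return False  # conflicting lock held by another client
        self.locks.append((client, start, end, exclusive))
        return True

locks = ByteRangeLocks()
assert locks.acquire("A", 0, 100, exclusive=False)
assert locks.acquire("B", 50, 150, exclusive=False)   # shared locks may overlap
assert not locks.acquire("C", 60, 70, exclusive=True) # write blocked by readers
```

Locking individual byte ranges rather than whole files is what lets many database clients update disjoint parts of one file in parallel.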
CFS: any client holding the capability for a file can access it; the server does not care who the client is
Capability: 32-bit disk address of the file + 32-bit random number
The server generates the capability when a file is created
A capability can be passed to other clients
Clients participating in a transaction just need to share the same capability
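The capability scheme described above can be sketched in a few lines. The names here are hypothetical; the point is that possession of the (address, random-number) pair is the only credential the server checks:

```python
# Sketch of a capability-based file server: a capability is the file's
# 32-bit disk address plus a 32-bit random number chosen at creation.
import secrets

class FileServer:
    def __init__(self):
        self.files = {}      # disk_addr -> (random_check, data)
        self.next_addr = 0

    def create(self):
        addr = self.next_addr
        self.next_addr += 1
        check = secrets.randbits(32)       # hard-to-guess 32-bit number
        self.files[addr] = (check, bytearray())
        return (addr, check)               # the capability

    def open(self, capability):
        addr, check = capability
        stored = self.files.get(addr)
        if stored is None or stored[0] != check:
            raise PermissionError("invalid capability")
        return stored[1]                   # any holder of the pair gets access

server = FileServer()
cap = server.create()
server.open(cap)   # works for whoever presents the capability
```

Because the server never asks who the caller is, sharing a file within a transaction is just a matter of passing the pair to the other clients.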
XDFS: access control is based on the identity of the client
When multiple clients participate in a single transaction, the server needs to keep track of whom it is talking to and enforce the appropriate policy based on who each client is
Issues about how to organize files -> naming
CFS organizes files in a hierarchy of files and indices (similar to directories, but with no pathname-to-id mapping)
Each client is given a slot within the root index
A client can organize its files and indices (= subdirectories) arbitrarily
Objects (files or indices) are shared by multiple clients, each client holding an entry for the shared object in its own file tree
A garbage collector deletes files and indices that are not reachable from the root index
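The reachability rule above is an ordinary mark phase over the index graph. A minimal sketch (data layout and names are assumptions, not the CFS on-disk format):

```python
# Mark-and-sweep sketch of garbage collection from the root index:
# anything not reachable through index entries is garbage.
def collect_garbage(root_index, indices, all_objects):
    # indices: object_id -> list of child object_ids; plain files have no entries
    reachable, stack = set(), [root_index]
    while stack:
        obj = stack.pop()
        if obj in reachable:
            continue
        reachable.add(obj)
        stack.extend(indices.get(obj, []))
    return all_objects - reachable  # the set of objects to delete

indices = {"root": ["idxA"], "idxA": ["file1"]}
dead = collect_garbage("root", indices, {"root", "idxA", "file1", "orphan"})
assert dead == {"orphan"}
```

Note how sharing falls out for free: an object stays alive as long as at least one client's tree still holds an entry for it.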
XDFS: essentially no organization
Files are automatically deleted when the transaction that created them aborts
If a client loses the reference to a file it created, the file never gets deleted
Issues about how to maintain consistency of files -> transactions
CFS: a series of update operations on a single file can be atomic
Session semantics of sharing
Clients can create either:
A timeout mechanism is used to detect client crashes; a client looping infinitely cannot be detected
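The timeout mechanism above amounts to expiring transactions whose client has gone silent. A hypothetical sketch (the class, the 30-second value, and the method names are all invented for illustration):

```python
# Sketch of timeout-based crash detection: abort any transaction whose
# client has been silent longer than TIMEOUT.  As noted above, this
# cannot distinguish a crashed client from one stuck in an infinite loop.
import time

class TransactionTable:
    TIMEOUT = 30.0  # seconds; hypothetical value

    def __init__(self):
        self.last_seen = {}  # txn_id -> time of last client activity

    def touch(self, txn):
        # Called on every request from the transaction's client.
        self.last_seen[txn] = time.monotonic()

    def reap(self):
        now = time.monotonic()
        expired = [t for t, ts in self.last_seen.items()
                   if now - ts > self.TIMEOUT]
        for t in expired:
            del self.last_seen[t]  # abort: discard the transaction's updates
        return expired
```

A looping client keeps issuing requests (or simply never times out while holding no conversation), so the server has no way to tell it apart from a slow but healthy one.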
XDFS: update operations over multiple files, on multiple servers, by multiple clients can be atomic
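Atomicity across multiple servers is classically achieved with two-phase commit. This is a generic in-memory sketch of that idea, not the XDFS protocol (all names are hypothetical, and real servers must log the prepare durably):

```python
# Two-phase commit sketch: phase 1 asks every participant to promise,
# phase 2 tells them all to commit (or all to abort).
class Server:
    def __init__(self):
        self.prepared, self.data = {}, {}

    def prepare(self, txn, updates):
        # Phase 1: hold the updates and vote yes.
        self.prepared[txn] = updates
        return True

    def commit(self, txn):
        # Phase 2: apply the held updates.
        self.data.update(self.prepared.pop(txn))

    def abort(self, txn):
        self.prepared.pop(txn, None)

def run_transaction(txn, participants):
    # participants: list of (server, updates-for-that-server)
    if all(s.prepare(txn, u) for s, u in participants):
        for s, _ in participants:
            s.commit(txn)
        return True
    for s, _ in participants:
        s.abort(txn)
    return False

a, b = Server(), Server()
assert run_transaction("t1", [(a, {"x": 1}), (b, {"y": 2})])
assert a.data == {"x": 1} and b.data == {"y": 2}
```

Either every server applies its share of the updates or none does, which is exactly the multi-file, multi-server atomicity claimed above.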
Transaction
Locking
Error control
Flow control
Stateful server: the server maintains the file offset
Partly idempotent client requests: reads and writes carry an absolute file offset to cope with possible retransmission of packets
Packet duplication and out-of-order delivery are not addressed
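The point of carrying an absolute offset is that re-executing a duplicated request gives the same answer, instead of advancing a server-side cursor twice. A minimal sketch (function name is hypothetical):

```python
# "Partly idempotent" request style: the request names the absolute
# offset, so a retransmitted packet is harmless to re-execute.
def handle_read(file_bytes, offset, length):
    return file_bytes[offset:offset + length]

data = b"hello file server"
first = handle_read(data, 6, 4)
retry = handle_read(data, 6, 4)   # duplicate packet: identical answer
assert first == retry == b"file"
```

A cursor-based read ("give me the next N bytes") would return different data on the retry, which is why statefully maintained offsets alone are unsafe under retransmission.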
Error control
Basically no flow control; whether the underlying Pup protocol handles it is unclear
CFS: a file is represented as a tree
XDFS: a single B-tree per partition
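The per-partition index can be pictured as one map keyed by (file id, page number), so all pages of all files on a partition live in a single sorted structure. A sketch using a plain dict with sorted keys in place of a real B-tree (names are assumptions):

```python
# Sketch of a single per-partition index keyed by (file_id, page_no),
# standing in for a B-tree: one structure holds every page of every file.
class Partition:
    def __init__(self):
        self.pages = {}  # (file_id, page_no) -> page bytes

    def write_page(self, file_id, page_no, data):
        self.pages[(file_id, page_no)] = data

    def read_file(self, file_id):
        # In key order, all pages of one file are contiguous,
        # so a file read is a range scan of the index.
        keys = sorted(k for k in self.pages if k[0] == file_id)
        return b"".join(self.pages[k] for k in keys)

p = Partition()
p.write_page(7, 1, b"world")
p.write_page(7, 0, b"hello ")
assert p.read_file(7) == b"hello world"
```

Contrast with the per-file tree: here there is no per-file metadata structure at all, just one shared index whose key ordering clusters each file's pages together.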