![](indbul1a.gif) |
Use extra disks to hold redundant information
(e.g., checksums or parity), and repair disk faults by reconstructing
data from those extra disks
![](indbul2a.gif) |
Partition disks into groups and assign
some extra check disks to each group |
|
![](indbul1a.gif) |
MTTR (Mean Time To Repair) |
![](indbul1a.gif) |
MTBF (group) = [MTBF (individual disk) / # of
disks in the group] * [1/prob(another failure in the group before repair)]
![](indbul2a.gif) |
prob(another failure) = MTTR /
MTBF(remaining disks in the group) |
|
![](indbul1a.gif) |
MTBF(raid) = MTBF(group) / # of groups =
MTBF(individual disk)^2 / [(D + C*n) * (G + C - 1) * MTTR]
![](indbul2a.gif) |
D = total # of data disks |
![](indbul2a.gif) |
G = # of data disks in a group |
![](indbul2a.gif) |
C = # of check disks in a group |
![](indbul2a.gif) |
n = # of groups |
|
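The reliability formulas above can be checked numerically. A minimal sketch, assuming purely illustrative values (30,000-hour per-disk MTBF, 1-hour MTTR, 100 data disks split into groups of 10 with 1 check disk each):

```python
# Illustrative values only -- not from the notes.
mtbf_disk = 30_000.0  # hours, MTBF of an individual disk
mttr = 1.0            # hours, mean time to repair
D = 100               # total number of data disks
G = 10                # data disks per group
C = 1                 # check disks per group
n = D // G            # number of groups

# prob(another failure in the group before repair)
# = MTTR / MTBF(remaining disks in the group)
prob_second_failure = mttr / (mtbf_disk / (G + C - 1))

# MTBF(group) = [MTBF(disk) / # of disks in the group] * [1 / prob(another failure)]
mtbf_group = (mtbf_disk / (G + C)) / prob_second_failure

# MTBF(raid) = MTBF(group) / # of groups
#            = MTBF(disk)^2 / [(D + C*n) * (G + C - 1) * MTTR]
mtbf_raid = mtbf_group / n
closed_form = mtbf_disk**2 / ((D + C * n) * (G + C - 1) * mttr)

print(mtbf_raid, closed_form)  # the two forms should agree
```

With these numbers both expressions come out to roughly 818,000 hours, confirming the step-by-step derivation matches the closed form.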
![](indbul1a.gif) |
Some metrics
![](indbul2a.gif) |
Reliability overhead = check-disks /
data-disks |
![](indbul2a.gif) |
Usable storage capacity percentage =
data-disks / (data-disks + check-disks), i.e., the complement of the reliability overhead |
![](indbul2a.gif) |
Performance: small independent reads vs.
large sequential reads |
![](indbul2a.gif) |
Effective performance = performance
gain weighed against the check-disk overhead |
|
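A quick worked instance of the two capacity metrics above (the group size of 10 data disks and 1 check disk is an assumed example, not from the notes):

```python
# Illustrative numbers only: 10 data disks and 1 check disk per group.
G, C = 10, 1

reliability_overhead = C / G   # check-disks / data-disks
usable_capacity = G / (G + C)  # fraction of raw disks holding user data

print(reliability_overhead)             # 0.1
print(round(usable_capacity * 100, 1))  # 90.9 (percent)
```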
![](indbul1a.gif) |
Bit
level interleaving
![](indbul2a.gif) |
Pros:
use all disks simultaneously; load balancing |
![](indbul2a.gif) |
Cons
![](indbul3a.gif) |
only
1 I/O at a time per group |
![](indbul3a.gif) |
if
the disks are not rotationally synchronized, the largest rotational
delay among them is incurred |
|
|
![](indbul1a.gif) |
Organization
![](indbul2a.gif) |
Sector
(or disk I/O unit) interleaving |
![](indbul2a.gif) |
Use
parity to reconstruct the data of the faulty disk |
|
![](indbul1a.gif) |
Read
- just read the data disk holding the sector |
![](indbul1a.gif) |
Write
![](indbul2a.gif) |
Read
the data disk having the sector |
![](indbul2a.gif) |
Read
the parity disk |
![](indbul2a.gif) |
Write
to the data disk |
![](indbul2a.gif) |
Write
to the parity disk
![](indbul3a.gif) |
new
parity = (old-data XOR new-data) XOR old-parity |
|
|
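The small-write parity update above can be sketched with XOR over byte strings standing in for disk sectors (the helper and the sample values are illustrative, not a real disk API):

```python
# Hedged sketch of the read-modify-write parity update:
# new parity = (old-data XOR new-data) XOR old-parity

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length sectors."""
    return bytes(x ^ y for x, y in zip(a, b))

old_data   = bytes([0b1010, 0b0011])  # sector being overwritten (illustrative)
new_data   = bytes([0b0110, 0b1111])  # replacement sector
old_parity = bytes([0b0101, 0b1100])  # parity over all data disks in the group

new_parity = xor_bytes(xor_bytes(old_data, new_data), old_parity)

# Sanity check: the XOR contribution of the *other* disks in the group
# (old_parity XOR old_data) is unchanged by this write, so the new parity
# must equal that contribution XOR new_data.
others = xor_bytes(old_parity, old_data)
assert new_parity == xor_bytes(others, new_data)
```

This is why a small write touches only two disks (data + parity) rather than re-reading every data disk in the group.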
![](indbul1a.gif) |
Advantages
![](indbul2a.gif) |
Same
cost as level 2 & 3 |
![](indbul2a.gif) |
Bulk
I/O is as fast as level 2 & 3 |
![](indbul2a.gif) |
Small
I/O is a lot faster than level 2 & 3 |
|
![](indbul1a.gif) |
Disadvantages
![](indbul2a.gif) |
Small
read-modify-writes are still slow, but this type of access is rare! |
![](indbul2a.gif) |
The parity
disk must be accessed on every write and thus becomes a bottleneck
![](indbul3a.gif) |
Only
one write is allowed at a time per group |
![](indbul3a.gif) |
LFS can help with this since it writes out to a log
instead of updating in place |
|
|