Section #2: PRAM models
(CS838: Topics in parallel computing, CS1221, Thu, Jan 21, 1999, 8:00-9:15 a.m.)
Pavel Tvrdik


Contents

  1. RAM model
  2. PRAM model
  3. Handling shared memory access conflicts
  4. Computational power
  5. Simulation of large PRAMs on small PRAMs
  6. Simulation of stronger PRAM models on weaker ones
  7. Conclusion

RAM model

The Random Access Machine (RAM) is a widely used model of a sequential computer. Its main features are:
  1. Computation unit with a user defined program.
  2. Read-only input tape and write-only output tape.
  3. Unbounded number of local memory cells.
  4. Each memory cell is capable of holding an integer of unbounded size.
  5. Instruction set includes operations for moving data between memory cells, comparisons and conditional branches, and simple arithmetic operations.
  6. Execution starts with the first instruction and ends when a HALT instruction is executed.
  7. All operations take unit time regardless of the lengths of operands.
  8. Time complexity = the number of instructions executed.
  9. Space complexity = the number of memory cells accessed.

PRAM model

Parallel Random Access Machine is a straightforward and natural generalization of RAM. It is an idealized model of a shared memory SIMD machine. Its main features are:
  1. Unbounded collection of numbered RAM processors P0, P1, P2,... (without tapes).
  2. Unbounded collection of shared memory cells M[0], M[1], M[2],....
  3. Each Pi has its own (unbounded) local memory (registers) and knows its index i.
  4. Each processor can access any shared memory cell (unless there is an access conflict, see further) in unit time.
  5. Input of a PRAM algorithm consists of n items stored in (usually the first) n shared memory cells.
  6. Output of a PRAM algorithm consists of n' items stored in n' shared memory cells.
  7. PRAM instructions execute in 3-phase cycles.
    1. Read (if any) from a shared memory cell.
    2. Local computation (if any).
    3. Write (if any) to a shared memory cell.
  8. Processors execute these 3-phase PRAM instructions synchronously (a minimal sketch of one such cycle follows this list).
  9. Special assumptions have to be made about R-R and W-W shared memory access conflicts.
  10. The only way processors can exchange data is by writing into and reading from memory cells.
  11. P0 has a special activation register specifying the maximum index of an active processor. Initially, only P0 is active; it computes the number of required active processors and loads this register, and then the corresponding processors start executing their programs.
  12. Computation proceeds until P0 halts, at which time all other active processors are halted.
  13. Parallel time complexity = the time elapsed for P0's computation.
  14. Space complexity = the number of shared memory cells accessed.
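To make the 3-phase cycle concrete, here is a minimal Python sketch (not part of the original notes; the names Proc and pram_step are invented for illustration). It shows the essential point: within one synchronous instruction, every READ sees the shared memory as it was before any WRITE of that instruction.

  from dataclasses import dataclass
  from typing import Callable, Optional, Tuple

  @dataclass
  class Proc:
      read_addr: Optional[int]                  # shared cell to read, or None
      compute: Callable[[Optional[int]], Optional[Tuple[int, int]]]

  def pram_step(shared, procs):
      # Phase 1: all READs, against the same snapshot of shared memory.
      vals = [shared[p.read_addr] if p.read_addr is not None else None
              for p in procs]
      # Phase 2: local computation; each processor may emit one write request.
      writes = [p.compute(v) for p, v in zip(procs, vals)]
      # Phase 3: all WRITEs applied together (EREW: addresses must be distinct).
      for w in writes:
          if w is not None:
              shared[w[0]] = w[1]

  # P_i doubles M[i], i = 0..3, in one synchronous step.
  M = [1, 2, 3, 4]
  pram_step(M, [Proc(i, lambda v, i=i: (i, 2 * v)) for i in range(4)])
  print(M)   # [2, 4, 6, 8]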
PRAM is an attractive and important model for designers of parallel algorithms. Why?
  1. It is natural: the number of operations executed per one cycle on p processors is at most p.
  2. It is strong: any processor can read or write any shared memory cell in unit time.
  3. It is simple: it abstracts from any communication or synchronization overhead, which makes the complexity and correctness analysis of PRAM algorithms easier.
  4. Therefore, it can be used as a benchmark: if a problem has no feasible/efficient solution on a PRAM, it has no feasible/efficient solution on any parallel machine.
  5. It is useful: it is an idealization of existing (and nowadays increasingly abundant) shared memory parallel machines.
The PRAM corresponds intuitively to the programmer's view of a parallel computer: it ignores lower-level architectural constraints and details, such as memory access contention and overhead, synchronization overhead, interconnection network throughput, connectivity, speed limits, link bandwidths, etc.

Constrained PRAM models

  1. Bounded number of shared memory cells. This may be called a small memory PRAM. If the input data set exceeds the capacity of the shared memory, the input and/or output values can be distributed evenly among the processors.
  2. Bounded size of a machine word and/or memory cell. This parameter is usually called word size of PRAM.
  3. Bounded number of processors. This may be called a small PRAM. If the number of threads of execution is higher, processors may interleave several threads.
  4. Constraints on simultaneous access to shared memory cells: handling access conflicts.
Limiting the amount of PRAM shared memory corresponds to restricting the amount of information that can be communicated between processors in one step. For example, a distributed memory machine with processors interconnected by a shared bus can be modeled as a PRAM with one shared memory cell.

Handling shared memory access conflicts

To make the PRAM model realistic and useful, some mechanism has to be defined to resolve read and write access conflicts on the same shared memory cell. Concurrent Read has a clear semantics, whereas Concurrent Write has to be further constrained. There exist several basic submodels:
  1. EREW (Exclusive Read Exclusive Write): no two processors may access the same shared memory cell in the same step.
  2. CREW (Concurrent Read Exclusive Write): simultaneous reads of the same cell are allowed, simultaneous writes are not.
  3. CRCW (Concurrent Read Concurrent Write): simultaneous writes are allowed, and their result is defined by a write-resolution rule:
    1. COMMON: all processors writing into the same cell must write the same value.
    2. ARBITRARY: one of the conflicting processors succeeds, but it is not known in advance which one.
    3. PRIORITY: the conflicting processor with the highest priority (typically the lowest index) succeeds.
A small sketch of these write-resolution rules follows; the example after it demonstrates the differences among the submodels.
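Here is a small Python sketch (invented for illustration; resolve_writes and the request format are not part of the notes) of how the three CW rules decide one WRITE step. Each request is a triple (processor index, cell, value).

  def resolve_writes(requests, policy):
      by_cell = {}
      for pid, cell, value in requests:
          by_cell.setdefault(cell, []).append((pid, value))
      result = {}
      for cell, group in by_cell.items():
          if policy == "COMMON":
              values = {v for _, v in group}    # all writers must agree
              assert len(values) == 1, "COMMON requires identical values"
              result[cell] = values.pop()
          elif policy == "ARBITRARY":
              result[cell] = group[0][1]        # some single writer succeeds
          elif policy == "PRIORITY":
              result[cell] = min(group)[1]      # lowest index = highest priority
      return result

  # P2, P3, and P7 all try to write into cell 5:
  print(resolve_writes([(2, 5, 9), (3, 5, 7), (7, 5, 4)], "PRIORITY"))   # {5: 9}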

Example

Assume a p-processor PRAM, p<n. Assume that the shared memory contains n distinct items and P0 owns a value x. The task is to let P0 find out whether x occurs within the input array.
  1. EREW PRAM algorithm (a Python sketch of this variant follows the list):
    1. P0 broadcasts x to P1,...,Pp in O(log p) steps using a binary broadcast tree.
    2. All processors perform local searches, each on ⌈n/p⌉ items in O(n/p) steps.
    3. Every processor sets a flag Found and all processors perform a parallel OR-reduction in O(log p) steps.
    T(n,p)=O(log p + n/p)
  2. CREW PRAM algorithm: A similar approach, but P1,...,Pp can read x simultaneously in O(1) time. The final reduction still takes O(log p) time, so
    T(n,p)=O(log p + n/p)
  3. COMMON CRCW PRAM algorithm: The final step now also takes O(1) time: all processors with the flag Found set simultaneously write the same value into P0's cell, so
    T(n,p)=O(n/p).
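A Python sketch of the EREW variant (erew_search is an invented name). It serializes the p processors in loops, but each while-round corresponds to one synchronous EREW step with exclusive reads and writes.

  import math

  def erew_search(items, x, p):
      n = len(items)
      # Step 1: binary-tree broadcast of x in ceil(log p) rounds;
      # cell[i] is the shared cell from which P_i learns x.
      cell = [None] * p
      cell[0] = x
      have = 1                                  # processors that already know x
      while have < p:
          for i in range(min(have, p - have)):
              cell[have + i] = cell[i]          # exclusive read, exclusive write
          have *= 2
      # Step 2: each P_i scans its own block of ceil(n/p) items.
      b = math.ceil(n / p)
      found = [x in items[i * b:(i + 1) * b] for i in range(p)]
      # Step 3: OR-reduction over a binary tree in ceil(log p) rounds.
      stride = 1
      while stride < p:
          for i in range(0, p - stride, 2 * stride):
              found[i] = found[i] or found[i + stride]
          stride *= 2
      return found[0]                           # P_0 now knows the answer

  print(erew_search([5, 1, 8, 3, 9, 2, 7, 6], 9, 4))   # True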

Computational power

Having this range of submodels, we must ask how they compare in their ability to execute parallel algorithms. Various submodels may have different computational power.

Definition

PRAM submodel A is computationally stronger than submodel B, written A>=B, if any algorithm written for B will run unchanged on A in the same parallel time, assuming the same basic properties.

Lemma

PRIORITY >= ARBITRARY >= COMMON >= CREW >= EREW

Simulation of large PRAMs on small PRAMs

Small PRAMs can simulate larger PRAMs. Even though relatively simple, the following two simulations are very useful and frequently applied. The first result says that if we decrease the number of processors, the cost of a PRAM algorithm does not change, up to a multiplicative constant.

Lemma

Assume p'<p. Any problem that can be solved on a p-processor PRAM in t steps can be solved on a p'-processor PRAM in t'=O(tp/p') steps assuming the same size of shared memory.

Proof:

  1. Partition the p simulated processors into p' groups of size ⌈p/p'⌉ each.
  2. Associate each of the p' simulating processors with one of these groups.
  3. Each simulating processor simulates one step of its group of processors by:
    1. executing all their READ and local computation substeps first,
    2. and only then executing their WRITE substeps. □
This result has an important consequence: if we develop a parallel PRAM algorithm with cost C(n,p)=p·T(n,p)=o(SU(n)), where SU(n) is the running time of the best known sequential algorithm, we have automatically developed a better sequential algorithm. A sketch of this simulation follows.
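A Python sketch of the simulation in the proof (simulate_step and the (read_addr, compute) encoding are invented for illustration). All simulated READs are executed before any simulated WRITE, exactly as the proof requires; the final example would give a wrong result otherwise.

  import math

  def simulate_step(shared, procs, p_prime):
      # procs[j] = (read_addr, compute); compute maps the value read to an
      # optional (addr, value) write request.
      p = len(procs)
      g = math.ceil(p / p_prime)                # group size per simulating P'_k
      pending = []
      for k in range(p_prime):                  # groups, O(p/p') work each
          for read_addr, compute in procs[k * g:(k + 1) * g]:
              v = shared[read_addr] if read_addr is not None else None
              pending.append(compute(v))        # READ + local computation first
      for w in pending:                         # only then the WRITE substeps
          if w is not None:
              shared[w[0]] = w[1]

  # 4 original processors reverse M in one step, simulated on p' = 2.
  M = [10, 20, 30, 40]
  simulate_step(M, [(i, lambda v, i=i: (3 - i, v)) for i in range(4)], 2)
  print(M)   # [40, 30, 20, 10]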

Lemma

Assume m'<m. Any problem that can be solved on a p-processor and m-cell PRAM in t steps can be solved on a max(p,m')-processor m'-cell PRAM in O(tm/m') steps.

Proof:

  1. Partition the m simulated shared memory cells into m' contiguous segments Si of size ⌈m/m'⌉ each.
  2. Each simulating processor P'i, 1<= i<= p, simulates processor Pi of the original PRAM.
  3. Each simulating processor P'i, 1<= i<= m', stores the initial contents of Si in its local memory and uses M'[i] as an auxiliary memory cell for simulating accesses to the cells of Si.
  4. Simulation of one original READ operation (sketched below):
    each P'i, i=1,...,max(p,m'), repeats for k=1,...,⌈m/m'⌉:
    1. write the value of the k-th cell of Si into M'[i], i=1,...,m',
    2. read the value which the simulated processor Pi, i=1,...,p, would read in this simulated substep, if it has just appeared in the shared memory.
  5. The local computation substep of Pi, i=1,...,p, is simulated in one step by P'i.
  6. Simulation of one original WRITE operation is analogous to that of READ. □
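A Python sketch of the READ simulation (simulated_read and 0-based indexing are illustrative choices, not from the notes). In round k each of the first m' simulating processors publishes the k-th cell of its segment in its auxiliary cell, and every reader grabs its value in the round where that value appears.

  def simulated_read(segments, read_addrs, seg_size):
      # segments[i] holds S_i in P'_i's local memory;
      # read_addrs[j] is the simulated cell P_j wants to read, or None.
      values = [None] * len(read_addrs)
      for k in range(seg_size):                 # m/m' rounds in total
          # Each P'_i writes the k-th cell of S_i into its auxiliary cell M'[i].
          M_aux = [seg[k] if k < len(seg) else None for seg in segments]
          for j, addr in enumerate(read_addrs):
              if addr is not None and addr % seg_size == k:
                  values[j] = M_aux[addr // seg_size]
      return values

  # m = 8 simulated cells on m' = 4 segments of size 2.
  cells = list(range(100, 108))                 # contents of simulated M[0..7]
  segs = [cells[i * 2:(i + 1) * 2] for i in range(4)]
  print(simulated_read(segs, [5, 0, None, 7], 2))   # [105, 100, None, 107]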

Simulation of stronger PRAM models on weaker ones

It is very useful to know efficient simulations of stronger PRAM models on weaker ones, since a stronger model is more convenient for the design of algorithms, whereas weaker models, such as EREW, are closer to real parallel computers. Since it is technologically difficult to build full massively parallel CREW or CRCW PRAM computers, it is important to understand the costs of simulating the CREW or CRCW machines on EREW. Any multiple access has to be converted into a series of exclusive accesses. The most important are simulations of the strongest PRIORITY CRCW on the weakest EREW.

Lemma

Assume a PRIORITY CRCW PRAM with the priority scheme based trivially on indexing: lower-indexed processors have higher priority. One step of a p-processor m-cell PRIORITY CRCW PRAM can be simulated by a p-processor mp-cell EREW PRAM in O(log p) steps.

Proof (constructive):

  1. Each PRIORITY processor Pk is simulated by EREW processor P'k.
  2. Each shared memory cell M[i], i=1,...,m, of the PRIORITY PRAM is simulated by an array of p shared memory cells M'[i,k], k=1,...,p, in the EREW PRAM. M'[i,1] plays the role of M[i]. M'[i,2],...,M'[i,p] are auxiliary cells used for resolving conflicts, initially empty, and organized as the internal nodes of a complete binary tree Ti with p leaves. The height of every Ti is ⌈log p⌉.
  3. Simulation of a PRIORITY WRITE substep: each EREW processor must find out whether it is the processor with the lowest index within the group of processors asking to write to the same cell; if so, it becomes the group winner and performs the WRITE operation, while the other processors in the group fail. This is done as follows (a sketch follows this proof):
    1. If Pk wants to write into M[i], processor P'k turns active and becomes the k-th leaf of Ti. It knows whether it is the left or the right child of its parent.
    2. Each active left processor stores its ID into the parent cell in its tree, marks it as occupied, and remains active.
    3. Each active right processor checks its parent cell. If it is empty, the processor stores its ID into it and remains active. If it is occupied, the processor becomes idle (it lost).
    4. This is repeated at the further levels of the trees, ⌈log p⌉ times in total.
    5. The processor that managed to reach the root of Ti becomes the winner and may write into M[i]. The processors that used Ti must then sweep down Ti in reverse order to reset the cells M'[i,2],...,M'[i,p] to empty.
  4. Simulation of a PRIORITY READ substep is similar:
    1. The same sweep-ups of the auxiliary trees are performed in parallel to determine the winners in the groups.
    2. The winners read the values from the cells M'[*,1].
    3. During the cleaning sweep-down, the read value is distributed to the losers.
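A Python sketch of the tournament in step 3 above (priority_winner is an invented name). Each iteration of the while-loop plays one tree level; the left-child-first rule guarantees that the lowest-indexed contender survives to the root.

  import math

  def priority_winner(p, contenders):
      # contenders: indices (1..p) of the processors writing into the same cell.
      size = 1 << math.ceil(math.log2(max(p, 2)))   # pad leaves to a power of 2
      level = [k if k in contenders else None for k in range(1, size + 1)]
      while len(level) > 1:                     # one EREW step per tree level
          nxt = []
          for left, right in zip(level[0::2], level[1::2]):
              # The left child occupies the parent cell first; the right child
              # proceeds only if the parent is still empty (otherwise it lost).
              nxt.append(left if left is not None else right)
          level = nxt
      return level[0]                           # the winner performs the WRITE

  print(priority_winner(7, [2, 3, 7]))          # 2 -- the lowest index wins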

Example

p=7, processors P2, P3, and P7 wish to write into M[5].


Lemma

One step of PRIORITY CRCW with p processors and m shared memory cells can be simulated by an EREW PRAM in O(log p) steps with p processors and m+p shared memory cells.

Proof (constructive):

  1. Each PRIORITY processor Pk is simulated by EREW processor P'k.
  2. Each PRIORITY cell M[i] is simulated by EREW cell M[i].
  3. EREW uses an auxiliary array A of p cells.
  4. If Pk wants to access M[i], processor P'k writes pair (i,k) into A[k].
    If Pk does not want to access any PRIORITY cell, processor P'k writes (0,k) into A[k].
  5. All p processors sort the array A into lexicographic order using (log p)-time parallel sort.
  6. Each P'k appends to cell A[k] a flag f: f=0 if the first component of A[k] is either 0 or the same as the first component of A[k-1]; f=1 otherwise.
    
  7. Further steps differ for the simulation of WRITE and READ.
  8. PRIORITY WRITE (a sketch follows this proof):
    1. Each P'k reads the triple (i,j,f) from cell A[k] and writes it into A[j].
    2. Each P'k reads the triple (i,k,f) from cell A[k] and writes into M[i] iff f=1.
  9. PRIORITY READ:
    1. Each P'k reads the triple (i,j,f) from cell A[k].
    2. If f=1, it reads the value vi from M[i] and overwrites the third component of A[k] (the flag f) with vi.
    3. In at most log p steps, this third component is then distributed to the subsequent cells of A until it reaches either the end of A or an element with a different first component.
    4. Each P'k reads the triple (i,j,vi) from cell A[k] and writes it into A[j].
    5. Each P'k that asked for a READ reads the value vi from the triple (i,k,vi) in cell A[k].
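A Python sketch of the WRITE simulation (priority_write is an invented name, and Python's built-in sort stands in for the O(log p)-time parallel sort of step 5). Sorting groups the requests by cell, the first request of each group gets flag f=1, and only the flagged winners touch memory.

  def priority_write(shared, requests):
      # requests[k] = (cell, value) asked by P_{k+1}, or None if P_{k+1} is idle.
      A = sorted((req[0], k) for k, req in enumerate(requests) if req is not None)
      prev = None
      for cell, k in A:
          flag = 1 if cell != prev else 0       # leader of its cell group
          prev = cell
          if flag:
              shared[cell] = requests[k][1]     # only the winner writes

  # The example below: P1, P3, P6 -> M[2]; P2, P5 -> M[4]; P4 -> M[1]; P7 idle.
  M = [0] * 5
  priority_write(M, [(2, 'a'), (4, 'b'), (2, 'c'), (1, 'd'), (4, 'e'), (2, 'f'), None])
  print(M)   # [0, 'd', 'a', 0, 'b'] -- the lowest-indexed writer wins each cell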

Example

Assume p=7, m=4, and
P1 wants to access M[2],
P2 wants to access M[4],
P3 wants to access M[2],
P4 wants to access M[1],
P5 wants to access M[4],
P6 wants to access M[2],
P7 wants to access no cell at all.
Array A in the first three steps of the simulation:
(2,1, ) (4,2, ) (2,3, ) (1,4, ) (4,5, ) (2,6, ) (0,7, )
(0,7, ) (1,4, ) (2,1, ) (2,3, ) (2,6, ) (4,2, ) (4,5, )
(0,7,0) (1,4,1) (2,1,1) (2,3,0) (2,6,0) (4,2,1) (4,5,0)
Array A in simulation of WRITE:
(2,1,1) (4,2,1) (2,3,0) (1,4,1) (4,5,0) (2,6,0) (0,7,0)
Array A in simulation of READ:
(0,7,0) (1,4,v1) (2,1,v2) (2,3,0) (2,6,0) (4,2,v4) (4,5,0)
(0,7,0) (1,4,v1) (2,1,v2) (2,3,v2) (2,6,0) (4,2,v4) (4,5,v4)
(0,7,0) (1,4,v1) (2,1,v2) (2,3,v2) (2,6,v2) (4,2,v4) (4,5,v4)
(2,1,v2) (4,2,v4) (2,3,v2) (1,4,v1) (4,5,v4) (2,6,v2) (0,7,0)

Conclusion

Any polylog-time PRAM algorithm is robust with respect to the choice of PRAM submodel: the simulations above show that switching submodels costs at most an O(log p) factor per step, which preserves polylogarithmic running time.
