Lottery Scheduling: Flexible Proportional-Share Resource Management
C. Waldspurger and W. Weihl. Lottery Scheduling: Flexible Proportional-Share Resource Management. Proceedings of the First USENIX Symposium on Operating Systems Design and Implementation, November 1994.
Reviews due Thursday, 3/12.
Comments
1. Summary
This paper proposes a scheduling model called lottery scheduling. It is a randomized resource-allocation scheduler that provides dynamic, flexible, fair, and responsive control over the relative execution rates of processes. It also provides modular insulation and protection through its ticket-currency abstraction.
2. Problem
Dynamic, fair, flexible and responsive scheduling is a requirement in multithreaded systems. Scheduling policies may vary over time according to priority, importance or interactivity. However, existing solutions either adopt simple priority scheduling with no modular insulation, or use fair-share mechanisms whose overheads allow only coarse, unresponsive control over long-running computations.
3. Contributions
The basic idea of lottery scheduling is to give each process a number of tickets in proportion to its intended share. Tickets are numbered from 1 to the total n. For each time slice, a random number generator draws a winning ticket, and the process holding that ticket gets to execute. Thus a process with more tickets has a proportionally higher chance to execute.
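To make the mechanism concrete, here is a minimal sketch of the straightforward O(n) lottery draw described above; it is my own illustration (names and structure invented), not the paper's Mach implementation.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical client descriptor: name and ticket count. */
struct client {
    const char *name;
    int tickets;
};

/* Pick a winner: draw a random number in [0, total_tickets) and walk
 * the client array, summing ticket counts until the partial sum
 * exceeds the draw. */
struct client *hold_lottery(struct client *clients, int n) {
    int total = 0, i;
    for (i = 0; i < n; i++)
        total += clients[i].tickets;

    int winner = rand() % total;   /* winning ticket number */
    int sum = 0;
    for (i = 0; i < n; i++) {
        sum += clients[i].tickets;
        if (winner < sum)
            return &clients[i];
    }
    return NULL;                   /* unreachable if total > 0 */
}

int main(void) {
    struct client clients[] = { {"A", 75}, {"B", 25} };
    int wins[2] = {0, 0};
    srand(42);
    /* Over many quanta, A should win about 75% of the lotteries. */
    for (int q = 0; q < 10000; q++)
        wins[hold_lottery(clients, 2) - clients]++;
    printf("A: %d  B: %d\n", wins[0], wins[1]);
    return 0;
}
```

Over a short run the counts wobble, which is exactly the short-term variability the paper acknowledges; over many draws they settle near the 3:1 ticket ratio.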
Tickets can be transferred among trusting processes, such as a client and a server, so that they execute cooperatively. Ticket inflation can dynamically adjust the execution rates among mutually trusting processes, but should be used cautiously. Within a protection boundary, a currency acts like local tickets that control relative execution rates inside the boundary, and it can be converted to universal tickets (the base currency) according to an exchange rate. Compensation tickets inflate the funding of a process that uses only a fraction of its winning time slice, ensuring its real execution time remains proportional to the tickets granted.
Lottery scheduling can also be applied to various shared resources such as mutex locks and memory.
4. Evaluation
Most evaluations demonstrate that lottery scheduling is effective and fair in ensuring execution time proportional to the tickets given. They also show that this proportionality is accurate in the long run and can be approached within a short time. Various techniques are evaluated, such as ticket transfer (client-server model) and load insulation. Finally, the overhead of lottery scheduling is measured to be less than 3%.
I don't understand why the multimedia application is evaluated; it seems redundant.
5. Confusion
When the system boots in the first place, how do we know how many base units/tickets should be assigned to each process? Is this done manually?
And in the long run, how does the system determine that a process should be given more tickets? Would this induce ticket manipulation/inflation among processes just like price manipulation/inflation in economics?
Posted by: Chaowen Yu | March 12, 2015 07:57 AM
Summary:
This paper presents lottery scheduling, a scheduling mechanism that allocates resources to processes based on a randomly selected lottery number. In addition, lottery scheduling implements a currency system that abstracts the conversion rates of shares. The authors present lottery scheduling as a way to provide modular resource management and it was designed to be general enough to manage most system resources.
Problem:
The authors tried to build a scheduling mechanism that would allow for responsive control over the execution rates of processes. They argue that priority-based systems aren't well understood and cannot provide fine-grained control over execution rates.
Contributions:
The authors introduced a novel scheduling mechanism that borrows concepts from economics. They designed a ticket system that allows system resources to be handled in an abstract manner. This also allowed them to provide modular resource management through the implementation of a currency system.
They added a mechanism to boost the priority of a process through compensation tickets to ensure that the resource allocation ratio is maintained.
The currency system allowed processes to deal with resource sharing among its threads in a very abstract way. This reduced the complexity of managing resources while also providing flexibility. In addition, the currency system also ensured that fluctuations in the currency of one module wouldn't affect the other modules in the system.
Their system allows tickets to be transferred through messages. This provided them with a mechanism to solve the priority inversion issue. This is especially apparent in a client-server system where they run the server without any tickets, ensuring that the resource allocation ratios are maintained on the server as well.
Evaluation:
The authors conducted a number of experiments to show that the lottery system met their design goals. These experiments also provide practical examples of the lottery system's features, namely ticket transfers and the modular insulation that the currency system provides. One interesting experiment shows how ticket transfers can be combined with the lottery system to implement a mutex.
However, they don't really conduct any performance testing. So, while their system might look good in theory and is functionally sound, there is no way of knowing if the overhead introduced has been amortized in any way. This would have been especially interesting for their mutex implementation.
Confusion:
Are there any systems that currently employ lottery scheduling?
Posted by: Clint Lestourgeon | March 12, 2015 07:42 AM
1. Summary
This paper presents Lottery Scheduling. It is a novel randomized scheduling algorithm that provides efficient and responsive control over the relative execution rates of computations. It implements proportional-share resource management, which is desirable in systems running a wide range of applications. It also supports modular resource management.
2. Motivation
The authors state that the schedulers of the time did not provide accurate control over the quality of service delivered to users and did not let applications specify relative computation rates. This is especially important in a system servicing requests of varying importance. Few general-purpose schedulers supported such policies; priority scheduling and fair-share scheduling tried to address this. Priority schedulers do not provide the modularity required for such a system, and the overheads associated with fair-share schedulers limit them to coarse control over computations.
3. Contributions
4. Evaluation
A prototype lottery scheduler was implemented for the Mach 3.0 microkernel.
With a scheduling quantum of 100 milliseconds, fairness can be achieved over 8-second intervals. With a scheduling quantum of 10 milliseconds, fairness can be achieved over subsecond intervals, provided that scheduler overhead can be bounded.
Three clients with an 8:3:1 ticket allocation compete for service from a multithreaded database server. The observed throughput and average response times match the allocation, with a standard deviation of less than 7%.
Dhrystone benchmark tasks running concurrently for 200 seconds executed 2.7% fewer iterations under lottery scheduling than under the unmodified Mach kernel scheduler. With eight tasks, lottery scheduling was observed to be 0.8% slower. A multithreaded database server with five client tasks running 20 queries each was 1.7% faster under lottery scheduling.
The experiments conclude that it provides flexible and responsive control over the relative execution rates of a wide range of applications. The overhead imposed by the unoptimized prototype is comparable to that of the standard Mach timesharing policy.
5. Confusion
The paper points out the problems associated with assigning an absolute priority to clients. Shouldn't a similar problem be encountered when assigning some number 'n' of lottery tickets to different clients?
Posted by: Rohith Subramanyam | March 12, 2015 07:37 AM
Summary:
This paper discusses lottery scheduling for randomised allocation of compute resources. The model is evaluated on the Mach 3.0 microkernel for a wide range of applications.
Problem:
The challenge is to schedule computations in multithreaded systems that multiplex service requests of varying importance. This requires support for specifying the relative computation rates of clients, which is especially significant for interactive systems.
Solution:
Lottery Scheduling uses proportional-share resource management for resources like I/O, memory and access to locks. To allocate a resource, a winning ticket number is drawn at random and the resource is granted to the client holding that ticket. Key solution details are:
-Lottery tickets: They quantify resource rights abstractly (independent of machine details), represent proportional shares that vary dynamically with contention, and give a uniform representation of heterogeneous resources.
-Probabilistically fair scheduling: This ensures that disparity in allocation will decrease over time.
-Ticket transfers: A client can pass its tickets to the server it is waiting on for a reply.
-Ticket currencies: A unique currency denominates tickets within each trust boundary, and a base currency is used to maintain the overall resource representation. An exchange rate is maintained between each local currency and the base currency.
-Compensation tickets: Maintain fairness by inflating the ticket value of applications that do not use their full allocated share of a resource quantum.
Evaluation:
A prototype lottery scheduler is implemented on the Mach 3.0 microkernel with support for ticket transfers, inflation, currencies, and compensation tickets. A pseudo-random number generator based on the Park-Miller algorithm is used to draw winning tickets. Evaluation shows that over a reasonable run time the resource allocation is close to the specified proportions. A client-server computation shows flexible transfer of client tickets to the server. The overall run time under lottery scheduling is comparable to, and in one case slightly better than, unmodified Mach.
Concerns:
-Can you explain the priority inversion problem in more detail?
-The scheduler's behaviour for short, one-time processes would be interesting to evaluate.
Posted by: Nidhi Tyagi | March 12, 2015 07:29 AM
Summary
This paper presents the lottery scheduler, a resource allocation mechanism based on random numbers and proportional-share allocation. It is efficient and responsive to changes in shares, and tasks run according to the relative shares of tickets they possess. Insulating the allocation policies of one module from another encourages modular programming; this is achieved using the currency abstraction. Lottery scheduling can be generalized to handle I/O bandwidth, memory and access to locks. The overhead of the unoptimized prototype is very similar to that of the standard Mach 3.0 time-sharing policy already in place. The paper discusses many experiments conducted to verify that the goals are met and provides empirical evidence.
Problem
Scheduling computations in multithreaded systems is a challenging and complex problem. Scarce resources must be shared to support requests of varying importance, and the policy chosen to handle this can have an enormous impact on response time and throughput. Accurate control over the quality of service provided to users and applications requires support for specifying relative computation rates. For long-running computations such as scientific applications and simulations, the consumption of shared computing resources must be regulated. For interactive computations, programmers need the ability to refocus available resources on tasks that are currently important. Existing fair-share and priority-based schedulers mostly achieve these goals but suffer either from poorly understood, ad-hoc behavior or from large overheads. Lottery scheduling was developed to address these concerns.
Contributions
1. Each allocation is determined by holding a lottery. The resource is granted to the client holding the winning ticket.
2. Scheduling by lottery is probabilistically fair. Since the scheduling algorithm is randomized, the actual allocated proportions are not guaranteed to match the expected proportions exactly.
3. Any change in relative ticket allocations is reflected immediately in the next allocation decision, so lottery scheduling is extremely responsive.
4. Tickets and currencies can be used to insulate the resource management policies of independent modules, because each ticket probabilistically guarantees its owner the right to a worst-case resource consumption rate.
5. Ticket transfers are explicit transfer of tickets from one client to another. Tickets transfers can be used in situations where a client blocks due to dependency.
6. Ticket inflation is a scenario among a group of trusted clients in which the clients have the power to escalate resource rights by creating more lottery tickets.
7. To enforce resource management abstraction barriers, the concept of currency is introduced. A unique currency is used to denominate tickets within each trust boundary. The effects of inflation can be locally contained by maintaining an exchange rate between each local currency and a base currency that is conserved.
8. A client which consumes only a fraction f of its allocated resource quantum can be granted a compensation ticket that inflates its value by 1/f until the client starts its next quantum.
9. In the prototype implementation, lotteries are held over a list ordered by a simple move-to-front heuristic, since the clients with the largest numbers of tickets are selected most frequently.
10. A minimal lottery scheduling interface is exported by the micro-kernel. It consists of operations to create and destroy tickets and currencies, operations to fund and unfund a currency and operations to compute the correct value of tickets and currencies in base units.
11. The mach_msg system call was modified to temporarily transfer tickets from the client to a server for synchronous RPCs. This automatically redirects resource rights from a blocked client to a server computing on its behalf.
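As a rough illustration of the transfer in point 11, here is a hypothetical sketch; the names and structure below are invented for illustration and are not the actual mach_msg implementation. The idea is simply that a synchronous request moves the caller's funding onto the server thread, and the reply moves it back.

```c
#include <stdio.h>

/* Hypothetical sketch of ticket transfer around a synchronous RPC.
 * thread_t and transfer_tickets are invented names, not Mach APIs. */
typedef struct { const char *name; int tickets; } thread_t;

/* Before the client blocks, its tickets temporarily fund the callee. */
static void transfer_tickets(thread_t *from, thread_t *to) {
    to->tickets += from->tickets;
    from->tickets = 0;
}

int main(void) {
    thread_t client = { "client", 800 };
    thread_t server = { "server", 0 };   /* server has no funding of its own */

    transfer_tickets(&client, &server);  /* request sent, client blocks */
    printf("during RPC: client=%d server=%d\n", client.tickets, server.tickets);

    transfer_tickets(&server, &client);  /* reply received, funding returns */
    printf("after  RPC: client=%d server=%d\n", client.tickets, server.tickets);
    return 0;
}
```

Because the server here holds no tickets of its own, its entire funding comes from whichever client it is currently serving, which is exactly how the paper keeps server work charged to clients.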
Evaluation
1. Fairness - The first experiment measured the accuracy with which the lottery scheduler could control the relative execution rates of computations. With the exception of the run for which the 10:1 allocation resulted in an average ratio of 13.42:1, all of the observed ratios are close to their corresponding allocations.
2. Flexible Control - A more interesting use of lottery scheduling is to dynamically control ticket inflation. A practical application is a Monte-Carlo computation: when a new task is started, it initially receives a large share of the processor, and this share diminishes as the task reduces its relative error to a value closer to that of the other executing tasks.
3. Client-Server Computation - The Mach IPC primitive mach_msg was modified to temporarily transfer tickets from client to server on synchronous RPCs. Executing three database clients with an 8:3:1 ticket allocation, with the server running only on the contributions from the clients, shows that the clients' query processing rates indeed hold that ratio to each other throughout the run.
4. Various other experiments include multimedia applications and load insulation. The system overhead due to lottery scheduling is very low: 2.7% fewer iterations were executed compared to the unmodified Mach kernel for the Dhrystone benchmark. Lottery scheduling can also be used for managing diverse resources such as processor time, I/O bandwidth, memory and access to locks.
Confusion
Why have the concept of a currency when you ultimately convert everything to base units?
Posted by: Jyotiprakash Mishra | March 12, 2015 06:31 AM
1. Summary
This paper introduces lottery scheduling which is a proportional-share scheduler that uses randomization to provide responsive control over the relative execution rates of processes as well as supporting modular resource management.
2. Problem
Schedulers need to support the specification of relative execution rates among processes. General-purpose schedulers rely upon a simple notion of priority that does not provide the encapsulation and modularity properties required. Also, the assignment of these priorities is often done in an ad-hoc manner. Fair-share and microeconomic schedulers address the above issues but incur too high an overhead, and so cannot be invoked frequently enough to provide the fine-grained control required for interactive systems.
3. Contributions
Lottery / Tickets
-----------------
- Lottery tickets encapsulate resource rights. The allocation of a resource to a process is in proportion to the number of tickets held
- Lottery scheduling is probabilistically fair. It solves the problem of starvation in theory. Accuracy of scheduling can be improved by decreasing time quanta
Ticket Transfer
-----------------
- A process can transfer its tickets to another process.
- Useful in client/server models where client transfers tickets to server in order to maximize server performance while it is handling client's task.
Ticket Inflation
-----------------
- A process can increase the allocation it owns by creating more tickets.
- Useful in a scenario with mutually trusting clients as it allows for changes in resource allocation without explicit communication between processes
Ticket Currencies
-----------------
- Each client can re-distribute the tickets it holds to its own jobs. This new distribution becomes a different local currency.
- The system automagically converts a local currency to the global in lotteries to ensure the right proportion.
- For example, threads A and B have 100 tickets each. Thread A re-distributes this by assigning 500 tickets (in its local currency) each to job 1 and job 2. Thread B re-distributes this by assigning 1 ticket to job 3 only. Now, when holding the lottery, the system converts job 3's funding to 100 base tickets, while jobs 1 and 2 are given 50 base tickets each.
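A tiny sketch of how that conversion might be computed (my own illustration with invented names), expressing each job's funding in base units:

```c
#include <stdio.h>

/* A job's funding: 'amount' tickets in a currency that issued
 * 'currency_issued' tickets and is backed by 'currency_backing' base units. */
struct funding { double amount; double currency_issued; double currency_backing; };

/* Value in base units = amount * (currency backing / amount issued). */
double base_value(struct funding f) {
    return f.amount * (f.currency_backing / f.currency_issued);
}

int main(void) {
    /* Thread A: backed by 100 base tickets, issues 1000 tickets (500 + 500). */
    struct funding job1 = { 500, 1000, 100 };
    struct funding job2 = { 500, 1000, 100 };
    /* Thread B: backed by 100 base tickets, issues a single ticket to job 3. */
    struct funding job3 = { 1, 1, 100 };

    printf("job1 = %.0f, job2 = %.0f, job3 = %.0f base units\n",
           base_value(job1), base_value(job2), base_value(job3));
    /* Prints: job1 = 50, job2 = 50, job3 = 100 */
    return 0;
}
```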
Compensation Tickets
-----------------
- A process may use only a fraction (f) of its allocated resource quantum. In this case it is disadvantaged against a process that uses its full quantum, as resource use is no longer proportional to ticket allocation.
- Such a process may be granted a compensation ticket that inflates its ticket value by 1/f until it starts its next quantum, so that the proportion is maintained.
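A short worked example of the 1/f rule in the bullet above, with numbers of my own choosing: suppose a client funded with 400 base units uses only f = 1/5 of each quantum it wins. Until it starts its next quantum it holds a compensation ticket, so its effective funding is

$$ v' = \frac{v}{f} = \frac{400}{1/5} = 2000 \ \text{base units}. $$

Its five-fold higher chance of winning lotteries then offsets the fact that it uses only one fifth of each quantum, so its CPU share stays proportional to the original 400 tickets.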
Synchronization resources
-----------------
- Lottery scheduling can also be used to control the waiting times of threads competing for lock access.
- All threads that are blocked on a mutex transfer their tickets to a mutex currency. The thread that holds the mutex executes with the sum of its own tickets and that of all the other waiting threads. This solves priority inversion problem.
Space-shared resources
-----------------
- Lottery scheduling can also be used to provide proportional share guarantees for finely divisible resources like memory.
- An inverse lottery is used where a client having more tickets is less likely to have a unit of its resource revoked.
4. Evaluation
It is observed that allocations among processes are close to their tickets ratios (especially over longer time intervals). Ticket transfer is shown to be effective in speeding up server computation. Tickets are shown to be efficient in dynamically adjusting frame rates and currencies are shown to be effective in insulating ticket inflations. Lottery scheduling has a very low overhead and is shown to improve performance by 1.7% on a multithreaded database server application, though there was no significant difference on the Dhrystone benchmarks.
5. Confusion
What do the authors refer to as a "simple notion of priority" when describing general purpose schedulers? What is wrong with this notion of priority?
How do we determine how many tickets each process gets? Is there an algorithm for that?
Is lottery scheduling currently used?
Posted by: Naveen Neelakandan | March 12, 2015 05:51 AM
1. summary
This paper introduces an innovative scheduling mechanism - lottery scheduling - that aims to achieve efficient and responsive control over the relative execution rates of computations. Lottery scheduling can also be generalized to manage other diverse resources.
2. Problem
This paper particularly addresses the scheduling problems in interactive systems. There are three major problems with the scheduling mechanisms of the time: priority schedulers are ad hoc and do not provide modularity, fair-share schedulers impose overheads that limit them to coarse control, and neither offers responsive, fine-grained control over relative execution rates.
3. Contributions
This paper presents lottery scheduling, a novel randomized mechanism that provides responsive control over the relative execution rates of computations. Each ticket represents a proportional share of a resource, also known as a resource right. The resource is granted to the client with the winning ticket, so clients' expected allocations of resources are proportional to the number of tickets they hold.
Scheduling by lottery is probabilistically fair. The number of lotteries won by a client has a binomial distribution, and the number of lotteries required for a client’s first win has a geometric distribution. Therefore for each allocation, every client is given a fair chance of winning proportional to its share of the total number of tickets.
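Concretely (these are the standard binomial/geometric facts, also given in the paper): if a client holds a fraction p = t/T of the tickets, then after n lotteries the number of wins w and the wait for a first win satisfy

$$ E[w] = np, \qquad \sigma_w^2 = np(1-p), \qquad E[\text{lotteries until first win}] = \frac{1}{p}. $$

Since the relative error $\sigma_w / E[w] = \sqrt{(1-p)/(np)}$ shrinks as n grows, the observed allocation converges to the ticket ratio over time.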
To better achieve modular resource management, the paper also introduces the following concepts: ticket transfers, ticket inflation, ticket currencies, and compensation tickets.
4. Evaluation
A prototype lottery scheduling system is designed and implemented. Experiments are conducted to evaluate its fairness over time, its flexibility, execution rates, query processing rates and video frame rates for different applications, load insulation, and system overhead.
5. Confusion
Why is there no starvation? Even though the lottery mechanism itself is random and fair, there could be threads that hold a very small number of tickets and would hardly ever get executed.
Posted by: Yiran Wang | March 12, 2015 04:48 AM
Summary
This paper proposes lottery scheduling, an efficient way to implement proportional-share resource management. Each contending entity is allocated a number of lottery tickets proportional to the desired relative execution rate of that entity. A lottery is then held by selecting a ticket with a random number generator, and the entity holding the winning ticket gets the resource until the next lottery is held. Techniques for implementing resource management policies with lottery tickets, such as ticket transfers, ticket inflation, currencies and compensation tickets, are also presented.
Problem
If the contending entities have service rates associated with them, there are no general-purpose schemes that can flexibly handle and provide responsive control over this kind of differentiation. Priority-based schemes schedule the entity with the highest priority, but the main problem is how to respect these service rates and schedule in proportion to them.
Contributions
The main contribution is the idea of envisioning the system as a collection of processes participating in a lottery at regular time intervals. The authors specify a straightforward algorithm for allocating lottery tickets (which encapsulate resource rights) and then holding a lottery. The winner can be found in O(n) operations, and with a tree-based implementation it can be found in O(lg n) operations. The scheduling is probabilistically fair, and as the number of lotteries held increases the observed allocation closely matches the specified proportions.
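A compact sketch of the O(lg n) selection mentioned above (my own illustration, not the authors' code), using a binary tree whose leaves are the clients and whose internal nodes store partial ticket sums:

```c
#include <stdio.h>
#include <stdlib.h>

#define N 4   /* number of clients; assume a power of two for simplicity */

/* tree[1] is the root; tree[N..2N-1] are the leaves (one per client).
 * Each internal node stores the total tickets in its subtree. */
static int tree[2 * N];

static void build(const int tickets[N]) {
    for (int i = 0; i < N; i++)
        tree[N + i] = tickets[i];
    for (int i = N - 1; i >= 1; i--)
        tree[i] = tree[2 * i] + tree[2 * i + 1];
}

/* Descend from the root: go left if the draw falls within the left
 * subtree's ticket range, otherwise subtract that range and go right. */
static int select_winner(int draw) {
    int node = 1;
    while (node < N) {
        if (draw < tree[2 * node]) {
            node = 2 * node;
        } else {
            draw -= tree[2 * node];
            node = 2 * node + 1;
        }
    }
    return node - N;   /* client index */
}

int main(void) {
    int tickets[N] = { 8, 3, 1, 0 };
    int wins[N] = { 0 };
    build(tickets);
    srand(1);
    for (int i = 0; i < 12000; i++)
        wins[select_winner(rand() % tree[1])]++;
    /* Expect roughly 8000 : 3000 : 1000 : 0. */
    printf("wins: %d %d %d %d\n", wins[0], wins[1], wins[2], wins[3]);
    return 0;
}
```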
Advanced techniques that model economic paradigms have also been proposed. Through ticket transfers the conventional priority inversion problem is solved. Modules can adjust their proportions through inflation/deflation of tickets, and the inflation/deflation within a module can be contained inside that module by using different ticket currencies and maintaining an exchange rate. Processes that are I/O bound and do not use their full resource quantum receive compensation tickets so that proportional-share allocation is still maintained.
Evaluation
A prototype lottery scheduler was implemented using a Mach 3.0 microkernel. They demonstrate the fairness by running two tasks in the Dhrystone benchmark by varying the proportion of the CPU allocation. They exhibit a practical application of ticket inflation by dynamically adjusting a task's ticket value as a function of its current relative error in a set of Monte-Carlo tasks. The ticket transfer technique was also evaluated by running several clients and a single multithreaded server application. All these experiments showed satisfactory results. The system overhead for lottery scheduling was found to be comparable to the native Mach kernel scheduler.
Confusion
Does the 'move to front' heuristic mean maintaining a priority queue ?
It looks like a process can try to game the system by relinquishing the CPU before its quantum is over and as a result gaining some compensation tickets. Has there been any step taken to prevent this?
Posted by: Subhankar Biswas | March 12, 2015 04:36 AM
Summary:
This paper talks about lottery scheduling, a randomized mechanism that provides a probabilistic guarantee of resources. The notion of tickets helps with modular resource management and also provides a way to control relative execution rates. The paper discusses the mechanism of lottery scheduling and how it provides modular resource management. The authors also describe its implementation and performance.
Problem: Most scheduling policies are unable to accommodate the relative computation rates of jobs due to their varied nature. Priority-based scheduling does not provide encapsulation and is difficult to modularize. Also, the scheduling overhead becomes costly when we move to highly sophisticated scheduling mechanisms. The authors try to come up with a scheduling mechanism that is simple and provides support for modularity and control over relative execution rates.
Contribution: The lottery scheduler uses tickets to quantify resource rights. Because it is based on randomized resource allocation, a job assigned a non-zero number of tickets will not be starved, unlike under conventional strict-priority mechanisms. The idea of ticket transfer helps in the case of RPC or a shared mutex, where a highly funded client/thread can transfer its tickets to whoever it is blocked on. Ticket currencies provide a way to abstract resource rights. Compensation tickets provide fairness for clients that consume only a part of their allocated resource quantum. Using an inverse lottery, we can identify a client that must relinquish a resource when it cannot be shared.
Evaluation: The authors evaluated their implementation for fairness, flexibility and overhead. Running the Dhrystone benchmark shows that the expected ratios are achieved, with small deviations for large ratios (though they converge over longer intervals). The system copes well with the Monte Carlo experiments, which were run to test flexibility by periodically changing ticket values. The mechanism was also tested in an environment where clients transfer their tickets to a server, and the ratio of query processing rates matched the ratio of tickets allotted. When tested with multimedia applications, the mechanism did not produce the expected result, which is attributed to the round-robin processing of client requests at the X server. Load insulation was tested by giving two tasks identical funding and having one of them create subtasks and inflate its currency; this inflation did not affect the performance of the other task, showing that insulation is provided. The system overhead on Mach with Dhrystone was a little higher because various optimizations were not done, but on a multithreaded database the overhead was lower. Though the individual characteristics are tested, there is no comparison with other mechanisms, especially priority-based ones.
Confused about: How are tickets assigned to processes? How does this work when there are user-interactive processes?
Posted by: Ashwin Karthi Narayanaswamy | March 12, 2015 04:01 AM
1. summary
This paper presents a novel, starvation free, proportional share resource management mechanism called "lottery scheduling". It provides efficient control over relative execution rates of computations. It also facilitates modular resource management. Lottery scheduling can be generalized to manage diverse resources.
2. Problem
Traditional priority-based schemes come close to providing flexible and responsive control over service rates but do not provide encapsulation and modularity, and they are often ad hoc. Fair-share schedulers address some of the issues with priority schemes, but their assumptions and overheads (changing priorities using feedback loops) limit their usage to long-running processes. There was a need for flexible and dynamic control of scheduling for interactive systems.
3. Contributions
Lottery Scheduling(randomized resource allocation algorithm) -
Resource rights are represented by tickets. Allocations are determined by holding a lottery, and the resource is granted to the client with the winning ticket. Tickets are abstract, relative and uniform. Lottery scheduling is probabilistically fair: clients' throughput is proportional to the number of tickets they hold, and their average response time is inversely proportional to the number of tickets they hold. Starvation is not a problem because every client holding tickets will eventually win a lottery. Lottery scheduling remains fair even when tickets or clients change dynamically, making it responsive.
Modular Resource Management -
Ticket transfers - Tickets can be transferred between clients. A client waiting on an RPC call can transfer its tickets to the server.
Ticket inflation - Among trusting processes, clients can inflate or deflate tickets without any communication (normally disallowed).
Ticket currencies - Provide abstraction for resource management across logical trust boundaries. Among mutually trusting clients, a client can be favored by inflating its tickets, which debases the tickets of the other clients in the group. An exchange rate is maintained between each local currency and the base currency. The total value of the group in base currency remains the same even after inflating a client.
Compensation tickets - Favors I/O bound and interactive threads, by helping them get their fair share of CPU.
- A faster implementation using a partial-sum tree with clients at the leaves is proposed.
- The lottery scheduling mechanism can be used to manage resources of various kinds.
4. Evaluation
The lottery scheduler provides fairness: tasks receive processor time corresponding to their allocations. The variance between allocated and observed execution ratios is larger for larger ratios, but the observed ratios converge toward the allocated ones. In a client-server setup where clients transfer tickets to the server, the experiment shows that highly funded clients get better response times and higher throughput. The lottery scheduler is extremely lightweight (with the partial-sum tree ensuring O(log n) lookup). The currency abstraction contains inflation locally, without affecting processes in other currencies.
5. Confusion
1. For interactive processes, would you assign a small or a large ticket count? Since an interactive process yields quickly, a large ticket count would waste many CPU cycles, but with a small ticket count the response time becomes large.
Posted by: Prasanth Krishnan | March 12, 2015 03:25 AM
1. Summary
The paper argues that conventional schedulers fall short of providing flexible and responsive control over relative execution rates, and presents lottery scheduling, a probabilistically fair scheduler that provides responsive control and modular resource management.
2. Problem
To design a proportional-share scheduler, which provides flexible and responsive control over execution rates, and provides mechanisms for modular resource management.
3. Contributions
In lottery scheduling, the clients possess lottery tickets in proportion to resource rights, and the resource allotment at every time slice is decided by a lottery. Since the scheduling is random, the exact expected proportion of resource allocation will likely not be achieved. However, the difference between expected and actual decreases as the number of scheduling decisions increase. Additionally, there is no problem of starvation, since any client with non-zero ticket share will eventually win the lottery and be scheduled.
Lottery scheduling ensures modular resource management through a variety of mechanisms like ticket transfers, ticket inflation, local currencies, and compensation tickets. Ticket transfer lets clients temporarily transfer tickets to other clients; this is useful when clients are blocked by other clients. Ticket inflation lets mutually trusting clients inflate/deflate the amount of tickets they possess, thereby modifying their share dynamically. Local currencies provide a mechanism to insulate share allocations within sets of clients. Compensation tickets are offered to clients that use only a fraction of the allotted time slice, to achieve proportional fairness in the long run.
4. Evaluation
The authors show that lottery scheduling is fair on the time scale of a minute by showing that the relative execution ratio of two compute-bound benchmarks is very close to their ticket ratio. They show the effectiveness of the ticket transfer mechanism by implementing a simple RPC benchmark and showing that the ratio of request completion rates is proportional to the clients' ticket ratios. They also show that lottery scheduling effectively insulates load across trust boundaries by running two sets of processes and showing that introducing a new process into one set does not affect the allocation of the other set; moreover, the aggregate allocation of each set also remains unchanged.
5. Confusion
Why do conventional fair-share schedulers have high overheads? Why are they limited to coarse-grained control of long-running tasks? What is a distributed lottery scheduler?
Posted by: Shoban Chandrabose | March 12, 2015 03:18 AM
Summary :
The authors talk about a lottery scheduling mechanism used to proportionally share resources in systems where requests of varying importance must be serviced. They discuss in detail the techniques used for implementing resource management policies with lottery tickets, such as ticket transfers, inflation and compensation tickets. It is implemented in the Mach microkernel, and experiments measuring fairness over time, query processing rate, load insulation and system overhead have been conducted.
Problem :
The problem with priority scheduling is that it is ad hoc and provides only coarse control over the execution rates of computations. The goal is therefore to implement a scheduling mechanism that provides efficient control over computation rates and proportional sharing of resources, along with easy management of resources.
Contributions :
1. Lottery scheduling is probabilistically fair. The allocation is proportional to the number of tickets held by a client. The problem of starvation does not occur.
2. Tickets are objects that encapsulate resource rights and can be transferred in messages.
3. Ticket transfers help in situations in which the client can transfer some tickets to server on which it is waiting so that the server could run. Ticket inflation is useful between mutually trusting clients and it allows a client to escalate its rights by creating more lottery tickets.
4. The ticket currency abstraction is useful for flexible naming, sharing and protecting resource rights.
5. When a client has used only a fraction of its allocated resource quantum, it is granted a compensation ticket that inflates the value of its tickets until it starts its next quantum.
6. Simple command line interface to create and destroy tickets and currencies, fund and unfund currencies, obtain information about the tickets and currencies.
7. Lottery scheduling could also be applied for scheduling of access to network ports and also to manage other resources like processor time, I/O bandwidth and access to locks.
Evaluation :
The lottery scheduler was evaluated with a variety of applications, including a multithreaded client-server application for searching text, an MPEG video viewer and a Monte-Carlo numerical integration program, mainly for fairness, rate of computation and system overhead.
It is observed that the scheduler was flexible, responsive and could efficiently control the relative rates of computation. A sample client-server computation showed how query processing rates differed for client requests of varying importance. An experiment with a multithreaded database server and five client tasks, each performing 20 queries, was found to perform 1.7% faster under lottery scheduling.
Confusion :
With compensation tickets, is it possible for tasks to run for a short time intentionally so that they get a longer fraction of the time quantum in subsequent runs?
The section on lock funding, about how lottery scheduling controls waiting times for threads competing for a lock, was not very clear.
Posted by: Krishna Gayatri Kuchimanchi | March 12, 2015 03:07 AM
Summary: This paper proposes a new scheduling algorithm: lottery scheduling. It is a randomized scheduling algorithm for resource allocation that is efficient and provides responsive control over the execution rates of computations.
Problem:
1. Few general-purpose scheduling algorithms support control over service rates.
2. Existing fair-share schedulers had too large an overhead, limiting them to coarse control over long-running computations.
Contributions:
1. Use a randomized algorithm to decide which process gets the resource, based on the number of shares it holds. More specifically, every process is assigned a number of tickets. When the system needs to schedule the next process, a random number from 0 to the total number of tickets - 1 is generated, and the process that holds this ticket is scheduled. There are many benefits to lottery scheduling. First, it is efficient: though the naive implementation takes O(n) time to select a winner (n is the number of processes), tree-based data structures improve this bound to O(log n). Second, it is fair: after a sufficient number of lotteries, the resource share of every process is proportional to the number of tickets it holds. Third, it is responsive: when the number of tickets of any process changes, the new number is used immediately in the next lottery, and the average response time is inversely proportional to a process's share of tickets.
2. Compensation tickets. When a process gives up part of its time slice, it is awarded a compensation ticket to account for the resource share it gave up.
3. Ticket inflation and deflation. A process can create more lottery tickets to escalate its resource rights.
Evaluation: The authors implemented lottery scheduling on Mach and performed several evaluations. First, they verified the correctness of their implementation by testing properties of lottery scheduling. Then they ran lottery scheduling with several applications and measured performance under different lottery configurations. Finally, they measured the overhead of lottery scheduling by comparing an application running under lottery scheduling and under the default Mach scheduler; the overhead of the lottery scheduler turned out to be comparable, and in one case slightly smaller.
Confusion: What are currencies for?
Posted by: Menghui Wang | March 12, 2015 02:55 AM
1. Summary
Lottery scheduling represents a flexible yet simple method for implementing proportional-share scheduling. By using randomness to ensure fairness with high probability, scheduling overhead is minimized, yet the ticket and currency based model allows flexibility: scheduling shares can be changed dynamically in response to system events, and multiple different priority schemes can coexist without adverse interactions.
2. Problem
Various schedulers exist, each with its own benefits and drawbacks. Generally, we have either very simple priority-based systems, which tend to be inflexible and very coarse grained, or more complex systems that attempt to achieve some sort of fairness but are often based on feedback mechanisms that end up being complex and slow. The goal, then, is to create a scheduler that is simple and still provides flexibility and fairness.
3. Contributions
The core of their scheduler is the lottery, which in its raw form simply uses randomness to implement proportional share. Resource rights are represented by lottery tickets; a ticket is randomly chosen, and the winner gets to run. Thus tasks with larger numbers of tickets have higher probabilities of winning, and in the long run, even without any feedback, the share of each resource given to each process will be fair with high probability.
Instead of having one currency for tickets, multiple currencies can be created. Each has an active amount, representing the amount of currency currently in use, and some number of backing tickets, each denominated in some other currency. Each of these is traced back to a global base currency that is used for global lottery scheduling. The use of distinct currencies allows the isolation of different scheduling domains, since allocation choices in one domain do not directly affect unrelated domains. This allows each user and each task to choose how to allocate its currency without worrying about how those choices will affect global fairness. Furthermore, the amount or type of backing currency can be changed transparently, without requiring reallocation. This modularity effectively isolates local scheduling decisions.
In its most straightforward form, this still suffers from priority inversion, where a process with many tickets may be waiting for a process with very few tickets. To alleviate this, the system allows for the transfer of tickets from one process to another. This itself is implemented on top of the currency system, allowing a simple transfer, again without worrying about how the transfer affects global ticket allocation.
This concept can be applied to other resources, for example mutexes or storage devices, or even to multiple resources at the same time. One of the benefits of lottery scheduling is its extreme flexibility: the general concept can be applied to any resource where one wants to give proportional access but does not need hard guarantees on who runs when.
4. Evaluation
First they demonstrate that lotteries actually give fairness in the long run. As expected with an algorithm with a random component, there is some short term variability. In the end, this ends up being a test of their random number generator, showing that their fast random number generator generates numbers suitable for the purpose of scheduling. This is important of course, since fairness is critical.
They further demonstrate that the scheduler responds quickly and fairly to changes in priorities, whether from changes in ticket allocation or from the termination of a task. Along these lines, they also demonstrate in figure 9 that changes in the allocation of one currency are isolated from other currencies, showing that currencies do allow for effective modularity in scheduling.
It is also important that the lottery scheduler have a low overhead, and here their data is not as clear. In one of their test cases the lottery scheduler does seem to introduce a slight overhead, although it is small and they suggest it is not significant. The other test actually shows a small improvement, which they claim does appear to be significant. Furthermore, they mention that the current implementation is not fully optimized, so although the data is limited, what they do have suggests that lottery scheduling can be done efficiently.
5. Confusion
With ticket transfers, it seems it would be easy to accidentally violate the acyclicity condition on the currency graph. How is this prevented? Aside from the fact that in general there is a client server relationship and thus most ticket transfers go in one direction.
Posted by: Jay Yang | March 12, 2015 02:36 AM
Summary:
This paper presents a novel scheduling approach called Lottery Scheduling. Lottery scheduling uses a randomized allocation mechanism by using lottery tickets to represent rights to a resource. The resource is allocated to the winner of a lottery, which is dictated by a random number generator. Thus, resources are allocated to competing clients in proportion to the number of tickets that they hold. It also supports modular resource management through the use of ticket currencies, thus enabling modules to insulate their resource allocation policies from one another.
Problem:
The two popular kinds of schedulers at the time of this paper were priority schedulers and fair-share schedulers. Priority schedulers, though popular, used ad-hoc dynamic priority adjustment schemes and did not provide the encapsulation and modularity properties desired when engineering large software systems. Fair-share schedulers addressed some of these issues, but they suffered from overheads that limited them to coarse control over long-running computations. Thus, there was a need for an effective scheduling algorithm that provided responsive control over the relative execution rates of computations.
Contributions:
- The lottery scheduler, through its probabilistically fair scheduling mechanism, prevents lower-priority clients from being starved by higher-priority ones. This is ensured by the simple fact that a client with a non-zero number of tickets will eventually win a lottery.
- The idea of ticket transfers implemented in lottery scheduling effectively solves the problem of priority inversion.
- Modular resource management is possible through the ticket currency abstraction. Within a module of clients, inflation of currencies can be used to adjust resource allocation. The elegance of this idea is that these inflations do not affect the resource allocations of clients outside the module. Thus there is a notion of fairness coupled with insulation in lottery schedulers.
- The idea of lottery scheduling can be extended to scheduling resources like I/O bandwidth, locks and also space-shared resources like memory. For lottery-scheduled locks, as all clients waiting on the lock transfer their ticket share to the client holding the lock, scheduling is based on the total contention for the lock and not just the priority of the thread currently holding it (a rough sketch of this funding idea follows after this list). For space-shared resources such as page allocations, an inverse lottery policy is used, in which a client holding fewer tickets is more likely to be chosen as the victim.
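As promised above, here is a rough, single-process sketch of the lock-funding idea; it is my own illustration with invented names, not the authors' code. Blocked threads fund the lock's currency, which backs the current holder, and on release the next holder is chosen by a lottery over the waiters' tickets.

```c
#include <stdio.h>
#include <stdlib.h>

#define MAX_WAITERS 8

struct waiter { const char *name; int tickets; };

/* A lottery-scheduled lock: the holder runs backed by its own tickets
 * plus the inherited funding of everyone blocked on the lock. */
struct lock {
    struct waiter *holder;
    struct waiter *waiters[MAX_WAITERS];
    int nwaiters;
    int inherited_funding;   /* sum of blocked waiters' tickets */
};

static void lock_block(struct lock *l, struct waiter *w) {
    l->waiters[l->nwaiters++] = w;
    l->inherited_funding += w->tickets;   /* waiter funds the lock currency */
}

/* On release, hold a lottery over the waiters' tickets to pick the next
 * holder; the remaining waiters keep funding the new holder. */
static void lock_release(struct lock *l) {
    if (l->nwaiters == 0) { l->holder = NULL; return; }
    int draw = rand() % l->inherited_funding, sum = 0, i, win = 0;
    for (i = 0; i < l->nwaiters; i++) {
        sum += l->waiters[i]->tickets;
        if (draw < sum) { win = i; break; }
    }
    l->holder = l->waiters[win];
    l->inherited_funding -= l->holder->tickets;
    l->waiters[win] = l->waiters[--l->nwaiters];   /* remove the winner */
}

int main(void) {
    struct waiter a = {"A", 8}, b = {"B", 3}, c = {"C", 1};
    struct lock l = { &a, {0}, 0, 0 };
    srand(7);
    lock_block(&l, &b);
    lock_block(&l, &c);
    lock_release(&l);    /* B wins with probability 3/4, C with 1/4 */
    printf("next holder: %s\n", l.holder->name);
    return 0;
}
```

This is what makes the scheme depend on the total contention for the lock rather than on whichever single thread happens to hold it.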
Evaluation:
The authors do an extensive evaluation of the lottery scheduling mechanism using a wide variety of benchmarks, including compute-bound benchmarks such as Dhrystone and a Monte-Carlo program, multithreaded client-server applications and competing MPEG video viewers. The lottery mechanism is evaluated on axes such as fairness, flexible control, client-server computation, performance on multimedia applications, load insulation and system overhead. On all these fronts, the authors observe that the lottery scheduler performs well.
Confusion:
- How are tickets assigned?
- Lottery schedulers make sense in context of domains where the assignment of shares can be easily done. But is it viable for all operating systems?
Posted by: Varun Channagiri Dattatreya | March 12, 2015 01:55 AM
Summary
The paper describes Lottery scheduling mechanism for resource allocation to various processes with a probability proportional to the number of lottery tickets held by it. It provides a currency abstraction to flexibly share, protect and schedule the resources within the trusted boundaries.
Problem
The authors want to have a scheduler that dynamically modifies the priority of the processes based on the computation they do. The existing schedulers rely on static priority or even if they use dynamic priority adjustment schemes, the overhead associated with those schemes limit them to have "relatively coarse control over long-running computations." So the authors developed a randomized mechanism that would provide efficient and flexible control over the resource allocation.
Contribution
The paper introduces the novel mechanism of lottery scheduling. The right to a resource is represented by lottery tickets. The allocation of a resource is decided by a lottery, and the resource is granted to the client who holds the winning ticket. The probability of a resource being allocated to a process is directly proportional to the number of tickets it holds. Any change in the relative ticket allocation is reflected in the next allocation decision, so the scheme is highly responsive. There is no starvation, because any client with a ticket will eventually win a lottery, if not immediately. Tickets can be explicitly transferred among processes. Ticket inflation is similar to ticket transfer except that there is no explicit transfer: by ticket inflation a process can escalate its resource rights by creating more lottery tickets. This is useful among mutually trusting processes and where explicit communication may not be possible. Compensation tickets increase a client's funding when it consumes less of the resource than is allocated to it.
Evaluation
They performed various experiments to evaluate the fairness, flexibility, response time and efficiency of the lottery allocation mechanism. They evaluated fairness by measuring the computation rates of processes with different allocation ratios; the results show that the computation rates are close to the allocation ratios. To demonstrate flexible control they ran three Monte Carlo experiments started two minutes apart: when a new experiment starts it is funded to compute more trials so as to achieve convergence quickly, and the results demonstrate this. They used a client-server setup to evaluate the ticket transfer technique, and the server processed requests at rates proportional to the clients' funding, as expected. Overall, the computation rates were proportional to the ticket counts. It would have been better if the authors had compared all the workloads against a conventional scheduling mechanism.
Confusion
Wouldn't clients use the compensation technique for malevolent purposes? I can't completely agree with this technique.
Posted by: Nikhil Collooru | March 12, 2015 01:51 AM
1. Summary
The authors discuss the theory and implementation of the lottery scheduling mechanism, which strives to put forth a probabilistically fair resource allocation system. Each client/process is allocated a share of tickets, and its probability of being granted access to a resource is directly proportional to that share. Interestingly, because the scheme is analogous to monetary shares, policies such as transfers and inflation have also been implemented by the authors.
2. Problem
The primary problem that the paper tackles is scheduling resources among clients. Prior to this paper, the most common implementations were priority-based, which proved inadequate for large software systems as they were ad hoc. Fair-share schedulers make many inherent assumptions and are thus limited to coarse control over long-running computations. The authors therefore want a simple and fair system of resource allocation that can manage diverse resources such as I/O, memory and even locks, which led to lottery scheduling.
3. Contributions
a) Resource rights are represented by lottery tickets. They are abstract (independent of machine details), relative (dependent on the contention for the resource) and uniform (heterogeneous resources are represented homogeneously).
b) Scheduling is probabilistically fair: the more tickets a client holds, the higher its probability of being allocated the resource. There is no starvation, since the winning ticket is picked randomly and every client gets scheduled at some point.
c) Tickets can be transferred while waiting for another client (RPC and locks). If a client uses only a portion of its time quantum, it gets more tickets to keep the system fair.
d) Ticket currencies across users and clients are kept fair by converting them to a base currency and thus represented against a common scale.
e) An inverse lottery mechanism is used to decide which client gives up a unit of a space-shared resource (memory, for example) in case of contention.
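One simple way an inverse lottery could be realized, sketched below with invented names and offered as an assumption rather than the paper's exact formulation, is to weight each of the n clients by the tickets everyone else holds (T - t_i), so that holding more tickets makes a client proportionally less likely to be picked as the victim.

```c
#include <stdio.h>
#include <stdlib.h>

/* Inverse lottery: pick a "victim" (e.g. a client that must give up a
 * page) with probability proportional to (T - t_i), where T is the
 * total ticket count.  More tickets => less likely to be chosen.
 * Assumes n >= 2. */
static int inverse_lottery(const int tickets[], int n) {
    int total = 0, i;
    for (i = 0; i < n; i++)
        total += tickets[i];

    int weight_sum = (n - 1) * total;       /* equals the sum of (T - t_i) */
    int draw = rand() % weight_sum, sum = 0;
    for (i = 0; i < n; i++) {
        sum += total - tickets[i];
        if (draw < sum)
            return i;
    }
    return n - 1;
}

int main(void) {
    int tickets[] = { 8, 3, 1 };            /* T = 12, weights 4, 9, 11 */
    int chosen[3] = { 0 };
    srand(3);
    for (int i = 0; i < 24000; i++)
        chosen[inverse_lottery(tickets, 3)]++;
    /* Expect roughly 4/24, 9/24 and 11/24 of the selections. */
    printf("victim counts: %d %d %d\n", chosen[0], chosen[1], chosen[2]);
    return 0;
}
```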
4. Evaluation
The authors implemented lottery scheduling on the Mach 3.0 microkernel. The implementation proves to be fair with respect to the allocated ratios: a client given more tickets completes more iterations. Discrepancies are found to even out over time rather than accumulating. These properties were also found to hold for client-server communication and video frame rates.
5. Confusion
In the inverse lottery mechanism where the client with a lower ticket share is more probable to give that resource away, say processor for example, wouldn’t this mean that we would have a lot of clients running at the end of it? Wouldn’t it just be easier to finish the client first and then we would have the resource essentially?
Posted by: Naveen Anand Subramaniam | March 12, 2015 01:34 AM
Summary:
This paper describes a scheduling mechanism ('Lottery Scheduler') that can allow efficient and responsive control over relative execution rates of processes/threads.
Motivation/Problem:
The authors state that existing scheduling techniques such as decay-usage scheduling and fair share scheduling do not allow rapid, fine-grained control over processor allocation. They argue that such a mechanism can be of use when running a variety of programs ranging from long-running scientific computations to interactive applications.
Contributions:
Some of the interesting ideas presented in the paper are:
- A probabilistically fair randomized scheduling algorithm which allocates processor share to threads based on the number of tickets they hold.
- The idea of denominating the tickets using a hierarchy of currencies (in different trust boundaries) and the possibility of the applications themselves transferring/inflating these tickets to affect the processor allocation.
- An efficient implementation and utilization of this currency hierarchy while scheduling (by tree traversal in logarithmic time).
- The possibility of using such a lottery mechanism in managing resources of various kinds ( ranging from mutexes to network switches).
Evaluation:
The authors first provide empirical data showing that their scheduler (implemented on the Mach 3.0 micro-kernel) shares the processor in a ratio that closely tracks the ticket allocation ratio over time. They also show that their scheduler can provide rapid control over processor allocation, demonstrate the effectiveness of their ticket transfer mechanism with a client-server experiment, and show that the scheduler provides load insulation across resource management boundaries (trust boundaries). They also demonstrate that the lottery scheduling mechanism has no significant overhead compared to the regular Mach 3.0 scheduler.
Confusions:
While explaining why the difference between allocated resource ratio and the ticket ratio was high for higher ratios, the authors simply say: 'As expected, the variance is greater for larger ratio'. Could you please explain why this is so?
Posted by: Hariharan Gopalakrishnan | March 12, 2015 01:33 AM
1. Summary
Lottery scheduling is an algorithm that provides control over the proportional rates of execution of running programs. This is opposed to other scheduling systems, where processes are given priorities that may be adjusted dynamically to manage execution rates.
2. Problem
Conventional resource management systems at the time this paper was written did not provide control over the relative execution rates of programs. Additionally, decay-usage scheduling, which was widely used at the time, is viewed as black magic that was poorly understood.
3. Contributions
The authors introduce the idea of random scheduling by lottery at the time of resource acquisition, which is guaranteed to be probabilistically fair. In the worst case, a client will attain the proportion of a resource equal to the percentage of lottery tickets it holds for it.
Tasks can achieve very fine-grained control over the performance of their subtasks by creating currencies backed by the tickets the parent task holds. Lotteries can then be held in which a subtask's chance of winning equals the fraction of the currency it holds multiplied by the parent's proportion of the total tickets.
4. Evaluation
The authors implemented lottery scheduling for the Mach 3.0 kernel, and their evaluation consists mostly of measurements showing that lottery scheduling behaves in the manner they defined it to. Benchmarks with example allocations such as 8:3:1 are run and graphed to show that the relative rates of execution behave as expected. Additionally, a few specific scenarios were implemented to show the usefulness of lottery scheduling, such as using it to control lock contention in a synchronized computation.
5. Confusion
I would have liked to know more about how tickets can be sent using RPC. Is there some way the transaction is verified to prevent unfriendly users faking ticket transactions?
Posted by: Peter Den Hartog | March 12, 2015 01:23 AM
1. Summary
The paper presents the lottery scheduling mechanism, which provides efficient control over the relative execution rates of tasks. In addition, it enables modular resource management and can be generalized to manage other system resources.
2. Problem
Scheduling multiple applications requires accurate and flexible control over resource sharing. Priority mechanisms provide only a crude form of control without any modular properties. Fair-share and microeconomic schedulers have overheads associated with them and thus are only suitable for coarse-grained control. In addition, interactive systems require dynamic, fine-grained scheduling control.
3. Contributions
The paper proposes a randomized resource allocation mechanism. Each client is allocated tickets in proportion to its share, and resources are granted by holding a lottery. Over time, this random selection gives each task a resource share proportional to its share of the tickets.
Tickets can be transferred between clients. This also solves the priority inversion problem and can shorten service times. Ticket inflation is an alternative to ticket transfers, in which mutually trusting clients can escalate their resource rights by creating more lottery tickets.
Resource sharing boundaries can be established by using a currency within each trust boundary. If a client does not consume its entire time slice, it is granted compensation tickets so that it still eventually receives its allocated share.
4. Evaluation
The scheduler is implemented in the Mach 3.0 microkernel. The complexity of the lottery is O(n) and can be reduced to O(log n) with a tree-based implementation. Experiments are done to evaluate the accuracy of resource allocation and to measure the overheads. Over longer time periods the received share is approximately equal to the allocated share, but there is variation over shorter periods. Experiments with client-server systems and multimedia applications also show that the relative resource shares are as desired. Compared to the unmodified Mach scheduler, the lottery scheduler adds only small overheads.
5. Confusion
In the client server system, the server depends on the tickets of the clients for scheduling. But, the client tickets are not relative to each other. How can this ensure fairness if the tickets share is local and not global?
Posted by: Bhuvana Kakunoori | March 12, 2015 01:11 AM
Summary:
The paper presents a randomized resource allocation and scheduling mechanism that accounts for dynamic changes in resource contention, among other changes. This responsive control is achieved with a lottery: each process is given a number of tickets (shares), a winner is randomly selected, and the winner is granted the resource rights. The greater the number (or value) of tickets a process holds, the greater the probability that it gets the resource it needs.
Problem:
The mechanisms available at the time either assign a process a priority for a resource that cannot change during its execution, or make certain assumptions and incur overheads to change resource allocation dynamically.
Contributions:
The paper provides a novel mechanism called ‘Lottery Scheduling’, which is used to dynamically change a process’s priority for a resource. Each process is given tickets in some currency, which convert to a certain base value. A winning ticket is then randomly selected, and the process holding that ticket gets the resource for a time quantum. The greater the value of the tickets a process holds, the higher the probability that it gets the resource.
Processes are also allowed to transfer their tickets to other processes, thereby increasing the probability that the other process gets the resource. A process would usually want to do this when it must wait on another process before continuing its own execution. In certain cases, a process is allowed to inflate its tickets, i.e., acquire more tickets and thereby increase its chances of getting the resource, but this is done in a controlled manner to keep a process from monopolizing the resource. A different ticket currency can be used within each trust boundary, but every currency ultimately converts into the base currency. The paper also introduces the idea of compensation tickets, which increase the ticket value of a process that does not use its assigned resource for the whole time quantum; the additional tickets compensate it for the unused portion.
Evaluation:
The paper introduces a simple and novel technique for dynamically changing resource allocation priorities. The lottery scheduling implementation appears to have very little overhead and seems to be as fast as traditional scheduling mechanisms, with the added benefit of responsive control.
Confusions:
Lottery Scheduling seems like a good technique to use. Is it currently being used in any system? Is there a reason why systems have decided to not use this technique?
Posted by: Varun Joshi Kishanlal Joshi | March 12, 2015 12:40 AM
Summary
Lottery scheduling is a mechanism for allocating resources to different processes. The basic idea is that each process receives a portion of the resource proportional to the number of tickets it holds. Priority scheduling and fair-share scheduling can both be approximated with lottery scheduling, depending on how many tickets are allocated to each process. Any process with a non-zero number of tickets is also guaranteed to eventually run. Lotteries are conducted by generating a random number, and the process holding the winning ticket is allowed to run. Lottery scheduling can be applied to any resource under contention, such as locks or memory.
Problem
Previous schedulers did not provide enough flexibility in scheduling processes, relying on the concept of a priority to determine which process to schedule next. Priority-based schedulers do not sufficiently encapsulate priority, and they make it hard to change a process’s priority dynamically. Lottery scheduling provides a simple mechanism for dynamically changing the priority of a process and encapsulates that priority in the tickets the process holds.
Contributions
Lottery scheduling is a mechanism for scheduling processes based on the lottery tickets each process holds. Processes are allocated tickets, and the probability that a process wins the lottery equals the ratio of that process’s tickets to the total tickets in the lottery. The winner is chosen by generating a random number and searching for the process that holds the winning ticket. To manage the system’s resources, tickets can be transferred between processes to elevate another process’s priority. In special situations, processes can increase their own priority by inflating the number of lottery tickets they hold; this should be disallowed most of the time to prevent monopolization. To ensure a fair share of the processor, compensation tickets are issued when a process does not fully utilize its allocated time. The abstraction of ticket currencies is used to give each ticket a value within a trust boundary.
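As a rough, hypothetical sketch of the selection step described above (the structure and field names are illustrative, not the authors' actual Mach code), a lottery can be held by drawing a random winning ticket below the total count and walking the runnable list while summing ticket counts:

#include <stdlib.h>

struct client {
    int tickets;              /* this client's ticket count */
    struct client *next;      /* next runnable client */
};

/* Hold one lottery: pick a winning ticket uniformly at random and return
 * the client whose cumulative ticket range covers it.  This costs O(n)
 * in the number of runnable clients. */
struct client *hold_lottery(struct client *head, int total_tickets)
{
    int winner = rand() % total_tickets;   /* winning ticket number */
    int sum = 0;
    struct client *c;

    for (c = head; c != NULL; c = c->next) {
        sum += c->tickets;
        if (winner < sum)
            return c;          /* first client whose range contains winner */
    }
    return NULL;               /* unreachable if total_tickets is consistent */
}

The winner then runs for one quantum; the "move to front" heuristic mentioned in the paper reorders such a list so that frequent winners are found near the head.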
Evaluation
The paper presents numerous experiments using the lottery-based scheduler, exercising it with the compute-bound Dhrystone benchmark, a Monte-Carlo numerical integration program, and a multithreaded client-server application. In the Dhrystone fairness experiments, they found that the observed ratios stayed fairly close to the allotted ticket ratios, and came closer the longer the test ran. For the multithreaded database server, they found that the total average time to execute 20 queries was 1135.5 seconds, as opposed to 1155.5 seconds, with an average standard deviation of only 0.1%. Overall, the lottery scheduler tracked the ticket allocations of each process very closely, and its performance was generally comparable to standard priority scheduling.
Confusions
I didn’t really understand the idea behind ticket currencies. What constitutes a trust boundary? Also, how is the number of tickets held by each process used to determine the base value of a currency? What does the paper mean when it says “If a ticket deactivation changes a currency’s active amount to zero, the deactivation propagates to each of its backing tickets”?
Posted by: Justin Moeller | March 12, 2015 12:31 AM
Summary: This paper proposes Lottery Scheduling (LS), which provides efficient, flexible, responsive control over service rates. It assigns each client some number of tickets, and the resource consumption rates of active computations are proportional to their relative ticket counts. The prototype is implemented on the Mach 3.0 microkernel and achieves flexible control over compute-bound tasks and client-server interactions (which previously could not be controlled this way).
Problem: When scheduling multithreaded systems, it is important to respond quickly to the varying importance of each thread (e.g. in scientific computing and interactive computing). Previous priority-based work is either built on absolute priorities or ad hoc, and work such as fair-share and microeconomic schedulers is not efficient enough. We need a flexible, efficient, theoretically well-understood scheduling policy.
Contribution:
1. The lottery algorithm: each thread is assigned some number of tickets, which abstract the resource rights it should receive. A lottery randomly generates a winning ticket number, and the resource is granted to the thread owning that ticket. This algorithm is probabilistically fair: a thread holding t of T tickets will win an n-round lottery nt/T times in expectation, which is proportional to t/T. The algorithm is efficient and can be optimized when ticket counts are uneven, or by partitioning the tickets as a tree (a sketch of the tree optimization follows this list).
2. Tickets behave like a currency: they can be transferred, inflated/deflated, and compensated to accommodate dynamic changes in resource requirements.
3. The authors implement the lottery algorithm on the Mach 3.0 microkernel, including the structures for tickets and currencies and the system calls for ticket compensation, inflation/deflation, and transfer.
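Here is a minimal sketch of the tree optimization mentioned in item 1 (my own illustrative structure, not the paper's Mach data types): each node caches the ticket total of its subtree, so a draw descends from the root in O(log n) instead of scanning a list.

struct node {
    int tickets;                 /* tickets held by this client (at leaves) */
    int subtree_sum;             /* total tickets in this subtree */
    struct node *left, *right;
};

/* Descend from the root with a winning ticket number in
 * [0, root->subtree_sum); each step discards one subtree. */
struct node *select_winner(struct node *n, int winner)
{
    while (n->left != NULL || n->right != NULL) {
        int left_sum = (n->left != NULL) ? n->left->subtree_sum : 0;
        if (winner < left_sum) {
            n = n->left;              /* winning ticket lies in the left subtree */
        } else {
            winner -= left_sum;       /* skip past the left subtree's tickets */
            n = n->right;
        }
    }
    return n;                         /* leaf holding the winning ticket */
}

A ticket transfer or inflation only needs to update the partial sums on the path from the affected leaf to the root, which is also O(log n).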
Evaluation:
They evaluate the fairness, flexibility, and efficiency (overhead) of the lottery scheduler.
1. Fairness: they compare the observed execution ratio against the allocated ticket ratio; over a 60-second run the two ratios are quite close.
2. Flexibility: ticket allocations are varied, and three independent runs show curves of cumulative progress that end up very close to one another. The client-server test shows that the server's response time closely matches the clients' allocated tickets.
3. Efficiency (overhead): the overhead does not significantly affect application run time (roughly 0.7%-2% slower).
Confusion: How has this idea influenced modern OS schedulers?
Posted by: Shike Mei | March 12, 2015 12:24 AM
Summary
This paper introduces Lottery Scheduling, which is based on a randomized resource allocation mechanism. It aims to provide efficient, responsive control over the relative execution rates of computations, which conventional schedulers are unable to provide. The mechanism can also be adapted to schedule many other diverse resources, such as I/O bandwidth, memory, and access to locks.
Problem
General-purpose schedulers are unable to provide flexible, responsive control or to adjust scheduling dynamically in systems involving components like databases, media applications, and networks. In existing priority-based schedulers, both the assignment of priorities and their dynamic adjustment are ad hoc. These schedulers also incur overheads that make them unsuitable for fast interactive applications.
Contribution
Lottery scheduling is randomized resource allocation in which the probability of acquiring a resource depends on the number of tickets a process holds. A random winning ticket is generated, and the process holding that ticket is granted the resource. Tickets are relative in nature: a process gets more of a lightly contended resource than of a highly contended one, while in the worst case it is still guaranteed at least a resource share proportional to its tickets. The mechanism does not suffer from starvation, since any process with a ticket will eventually be allocated the resource. Also, any change to the ticket allocation immediately affects subsequent allocation decisions, making the mechanism responsive.
Tickets can also be transferred between processes when one of them is blocked. An alternative that adjusts allocations without explicit communication is ticket inflation. The mechanism can also be used to allocate I/O bandwidth, to control mutex acquisition rates, and to manage space-shared resources like memory.
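As a hypothetical sketch of a ticket transfer around a blocking request (the thread structure and function are illustrative, not the paper's kernel interface), the client's tickets fund the server while the client waits and are returned with the reply:

struct thread {
    double tickets;     /* resource rights currently funding this thread */
};

/* Transfer the client's tickets to the server for the duration of a
 * blocking request, so the server competes for the CPU with the
 * client's rights instead of running unfunded while the client idles. */
void blocking_request(struct thread *client, struct thread *server)
{
    double transferred = client->tickets;

    server->tickets += transferred;    /* fund the server */
    client->tickets = 0.0;             /* client holds nothing while blocked */

    /* ... block until the server completes the request ... */

    server->tickets -= transferred;    /* return the tickets with the reply */
    client->tickets = transferred;
}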
Evaluation
A prototype lottery scheduler was implemented on the Mach 3.0 microkernel and exercised with different workloads to test fairness, flexible control, a client-server scenario, and multimedia applications. While running the Dhrystone benchmark, the tasks remained close to their allocated ratios. Dynamic control was tested by running a Monte-Carlo computation in which ticket values were dynamically adjusted to make the system more responsive. A client-server model was also implemented in which the server had no tickets of its own but depended entirely on tickets transferred by the clients. The standard deviation of the average query response times was less than 7% of the average.
Confusion
I am not very clear on the currency mechanism or on how tickets are initially allocated to processes.
Posted by: Nabarun Nag | March 11, 2015 11:47 PM
1. Summary
This paper describes lottery scheduling as a generic and responsive resource scheduling mechanism. The mechanism makes a scheduling decision after every fixed time quantum by generating a random number and choosing the client that holds the winning lottery ticket.
2. Problem
Most scheduling mechanisms and policies prior to lottery scheduling were not flexible and responsive enough to handle all kinds of workloads. Most were tailored for batch-processing jobs and not optimized for response time. They also required a lot of additional bookkeeping to implement the policy.
3. Contributions
The authors present a resource allocation mechanism based on the familiar idea of a lottery. Each contender for a resource is assigned a certain number of lottery tickets. A contender with more tickets has a better chance of winning the lottery (it is probabilistically fair) than a contender with fewer tickets. After every fixed time quantum the lottery scheduler runs and generates a random number, i.e., holds the lottery. The client with the winning ticket is granted access to the resource. The proposed scheme supports ticket transfers from one client to another when the former is waiting on the latter. Alternatively, a client can create more tickets for itself, inflating its own resource rights. Ticket currencies are the same lottery tickets, but they can be denominated differently in different contexts. The lottery scheduling mechanism is generic enough to apply to any resource in the system.
4. Evaluation
The authors present an extensive evaluation of their proposed mechanism. In all of their experiments they measure the actual resource allocation proportions against the preset ticket proportions of each client. They see some variation in the measured allocations, which is expected given that the scheme is probabilistic rather than exact.
5. Confusion
The authors do not talk about how the initial allocation of tickets among resource contenders is done. This could be more of a problem in situations where contenders arrive and leave arbitrarily.
Posted by: Mihir Patil | March 11, 2015 11:32 PM
Summary:
The paper presents an implementation of proportional-share resource management for multithreaded systems called lottery scheduling. Lottery scheduling is a randomized resource allocation mechanism that provides responsive control over the relative execution rates of computations. The authors claim it is efficient, fair, modular, and flexible compared to traditional priority-based and fair-share schedulers.
Problems:
Existing schedulers were incapable of differentiating service requests of varying importance and failed to provide responsive control over service rates. Priority-based algorithms did not provide encapsulation or modularity and generally had ad hoc implementations. Fair-share schedulers had large overheads and only coarse control over long-running computations. Existing schedulers also suffered from priority inversion and starvation.
Contribution:
The contribution of this paper is a novel randomized resource allocation mechanism. Tickets are allocated to clients and a winner is selected using a random number. A client holding relatively many tickets has a higher chance of getting the resource, while a client with relatively few tickets is still guaranteed to get the resource eventually, which solves the starvation problem found in priority-based scheduling. If a client is blocked due to some dependency, it can transfer its tickets to another client, which also addresses the priority inversion problem. The authors introduce the idea of ticket inflation, with which clients can adjust resource allocation without explicit communication; it works well only among mutually trusting clients. Currencies let different modules implement their own management policies, with tickets assigned to the module owner. If a process yields the CPU before its quantum expires, its tickets are increased proportionally through compensation, raising its chances of winning subsequent lotteries. Lottery scheduling can also be used in diverse resource management scenarios such as locks and I/O bandwidth.
Evaluation:
The lottery scheduler was implemented for the Mach 3.0 microkernel. The authors show that the scheduler can successfully control computation rates over long time intervals. For shorter intervals, although there was some variation, two tasks remained close to their allocated ratios. Adding tasks and intra-currency fluctuations within a module do not affect tasks outside that module. For the client-server computation, three clients are created with tickets in an 8:3:1 ratio and transfer their tickets to the server; experiments show that the allocated resources follow the ticket ratio. The authors claim that the overhead imposed by their unoptimized prototype is comparable to that of the standard Mach timesharing policy, and that a better implementation could outperform the Mach implementation.
Confusion:
Is lottery scheduling used in production system?
Posted by: Anup Rathi | March 11, 2015 11:32 PM
1. Summary
The authors describe a new resource management algorithm they call lottery scheduling. It is a proportional-share algorithm, meaning that the resource consumption rates of processes are proportional to their fraction of the shares. The paper also describes mechanisms that the lottery abstraction enables, such as ticket transfers, inflation with currencies and exchange rates, and compensation tickets awarded to under-consuming processes.
2. Problem
Resource management is the general problem tackled by this paper. The authors note that conventional systems at the time did not offer responsive control over relative execution rates. These systems generally use the notion of priority, which does not allow a task to adjust the service rates of its sub-modules or threads. The authors also argue that even though decay-usage scheduling is widely used, it is poorly understood. Existing fair scheduling algorithms at the time address some problems, but create their own overhead.
3. Contributions
Lottery scheduling provides great control over resources in a very modular way. The general idea is that programmers can control the rate at which resources are consumed by different tasks or subtasks by allocating them lottery tickets within their parent currency. When several tasks or subtasks compete for a resource, it is granted probabilistically: each ticket holder receives the resource with probability equal to its fraction of the total shares. A task may wish to further divide its tickets among subtasks; this is accomplished with what the authors call currencies. A task can create its own currency, backed by the tickets it holds, and hand out tickets in this new currency to its subtasks. When a lottery is held, a subtask’s tickets are worth its fraction of the currency’s total tickets multiplied by the parent’s fraction of its own currency. This modularity is valuable because it allows not only very fine-grained control of resource allocation, but also a form of protection between tasks, since they cannot influence currencies outside their own.
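A minimal sketch of that valuation rule, assuming for simplicity that each currency has a single backing ticket in one parent currency (these structures are mine, not the paper's): a ticket's base value is its amount times its currency's exchange rate, and a currency's rate is its backing divided by the amount it has issued.

struct currency {
    struct currency *parent;  /* NULL for the base currency */
    double backing;           /* tickets funding this currency, in the parent currency */
    double active;            /* total active tickets issued in this currency */
};

/* Exchange rate: base value of one ticket denominated in currency c. */
double currency_rate(const struct currency *c)
{
    if (c->parent == NULL)
        return 1.0;                       /* base currency: 1 ticket = 1 base unit */
    return currency_rate(c->parent) * c->backing / c->active;
}

/* Base value of 'amount' tickets denominated in currency c.  Lotteries are
 * held over base values, so inflation inside a subcurrency stays contained. */
double ticket_base_value(double amount, const struct currency *c)
{
    return amount * currency_rate(c);
}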
4. Evaluation
The authors’ evaluation is somewhat tautological, stating simply that the lottery scheduler does what they said it would: it is fair over time, and separately funded currencies are not prone to interference. They provide an example in which a task given twice as many tickets as a competing task is allocated twice as much processor time. They also show that starting a task within one currency does not affect other currencies, another point of their design. Additionally, they provide an example of their ticket transfer mechanism.
5. Confusion
I’m not entirely sure of the mechanism by which tickets are passed around in their system. How does the kernel validate that tasks aren’t lying and claiming tickets in currencies they should not have? Additionally, if a client can pass tickets to a server, how do they get returned?
Posted by: Alex Sherman | March 11, 2015 11:19 PM
Summary
This paper introduces a unique and starvation-free scheduling mechanism – lottery scheduling. A probabilistic algorithm is used to determine which process gets the CPU. Each process is assigned lottery tickets, a random number picks the winning ticket, and a process’s effective priority is determined by the number of tickets it holds. Techniques such as ticket transfers, ticket inflation, and compensation tickets help the scheduler remain fair.
Problem
Scheduling limited resources among concurrently competing processes is challenging. Resources like I/O bandwidth, memory, and access to locks often need to be weighted. Different applications may also require higher or more flexible priority during their execution; for instance, client requests can vary in importance, and long-running high-priority jobs could monopolize the CPU. Algorithms for monitoring and adjusting priorities are complex. Existing schedulers do not account for these factors, which leads to biased or unfair scheduling of processes in the worst case.
Contribution
Lottery scheduling is a probabilistically fair scheduling algorithm. The expected allocation of resources to clients is proportional to the number of tickets they hold. Since the algorithm is randomized, the actual allocated proportions are not guaranteed to exactly match the expected proportions; however, the disparity decreases over time. Since any client with a non-zero number of tickets will eventually win a lottery, starvation is avoided. The algorithm also remains fair when the number of clients or tickets varies dynamically, and any change in relative ticket allocations is immediately reflected in the next allocation decision.
Ticket transfers are a quick way of temporarily handing tickets from a client to a server. If a process is blocked on some other process, its tickets can be transferred to the process it is waiting on, which also solves the priority inversion problem. Ticket inflation can be used to bump up the priority of a process. Ticket currencies with an exchange rate against a base currency confine inflation to a group of mutually trusting clients. In this way, lottery scheduling attains modular resource management by insulating the resource allocation of one group from another. Finally, compensation tickets balance the CPU share of I/O-bound jobs by increasing their ticket counts when a job uses only a fraction of its quantum (which is the usual case for I/O-bound work).
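A rough sketch of how compensation might be computed, assuming a fixed quantum and per-task fields of my own naming (not the paper's exact bookkeeping): a task that used fraction f of its quantum is inflated by 1/f until its next win.

struct task {
    double base_tickets;   /* tickets the task is normally funded with */
    double comp_tickets;   /* temporary compensation, held until the next win */
};

/* Called when a task gives up the CPU after using 'used' of 'quantum' ticks.
 * Inflating by 1/f means the task temporarily holds base/f tickets, so its
 * long-run CPU share still matches its funding despite short runs. */
void grant_compensation(struct task *t, double used, double quantum)
{
    double f = used / quantum;

    if (f > 0.0 && f < 1.0)
        t->comp_tickets = t->base_tickets * (1.0 / f - 1.0);
    else
        t->comp_tickets = 0.0;   /* used the full quantum: no compensation */
}

/* The compensation is revoked the next time the task wins a lottery. */
void revoke_compensation(struct task *t)
{
    t->comp_tickets = 0.0;
}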
Evaluation
The authors conducted several experiments on various applications to evaluate the performance of the lottery scheduler. Two tasks executing the Dhrystone benchmark were run three times for 60 s each, and the actual allocation was fairly close to the allocated ratio; this also held on average for shorter intervals. To measure flexibility, three Monte Carlo computations were started one after the other, and the most recently started job received the largest CPU share, which then diminished over time. Likewise, the allocation ratio was respected in the multimedia and database applications and in the client-server communication case. Modularity is demonstrated by inflating a currency locally: changes in one module did not affect the scheduling of processes in another.
Confusion
Compensation tickets are explained for I/O-bound jobs, but how does the scheduler distinguish an I/O-bound job from a CPU-bound one? If it cannot differentiate, what happens if a CPU-intensive process also receives compensation tickets? Could this lead to a CPU-bound job taking over the entire system?
Posted by: Sukanya Chakraborty | March 11, 2015 10:56 PM
Summary:
This paper introduces lottery scheduling, which is a randomized resource allocation mechanism. It provides efficient, responsive control over the relative execution rates of computations.
Problem:
Few general-purpose schemes can support flexible, responsive control over service rates. Existing schedulers based on absolute priorities are limited to relatively coarse control because of their assumptions and overheads.
Contribution:
(1) The paper proposes lottery scheduling. Resource rights are represented by lottery tickets. Scheduling by lottery is probabilistically fair, and the number of lotteries won by a client follows a binomial distribution; a client's average response time is inversely proportional to its ticket allocation (see the short derivation after this list). Starvation does not occur, and any changes to relative ticket allocations are immediately reflected in the next allocation decision.
(2) Resource management policies can be implemented with lottery tickets. Ticket transfer can be used when a client blocks due to some dependency. Ticket inflation and deflation can adjust resource allocations without explicit communication. Ticket currencies are useful for flexibly naming, sharing, and protecting resource rights. Compensation tickets ensure that each client's resource consumption matches its allocated share.
(3) The authors implemented a prototype lottery scheduler. Tree-based implementation is used.
(4) The mechanism can also be used to synchronize resources and control space-shared resources.
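For reference, the probabilistic claims in (1) follow from elementary properties of the binomial and geometric distributions (this mirrors the paper's own analysis):

$p = t/T$ for a client holding $t$ of $T$ tickets.
Wins after $n$ lotteries: $w \sim \mathrm{Binomial}(n, p)$, so $E[w] = np$, $\sigma_w^2 = np(1-p)$, and the relative error $\sigma_w / E[w] = \sqrt{(1-p)/(np)}$ shrinks as $n$ grows.
Lotteries until the first win: geometric with $E[n] = 1/p = T/t$, so average response time is inversely proportional to the ticket share.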
Evaluation:
The authors evaluate the prototype lottery scheduler for flexibility, responsiveness, and efficiency. The experiments show that the relative execution ratios are close to the corresponding allocations. The implementation is also flexible thanks to its dynamic control. The client-server experiment shows that ticket allocations affect both response time and throughput. The authors run the same executables and workloads under their kernel and under the unmodified Mach kernel and find the overhead of their implementation is comparable to that of the standard Mach timesharing policy.
Confusion:
Can you provide some details about the "move to front" heuristic?
Posted by: Jing Fan | March 11, 2015 10:45 PM
Summary:
This paper introduces lottery scheduling, a proportional share resource management scheduler which provides responsive and modular control over the relative execution rates of processes in a multithreaded system. The implementation details of abstractions like tickets and currencies are explained and extensive experiments are performed to demonstrate the authors’ claims of fairness, flexibility and modularity.
Problem:
Resources such as CPU time, I/O bandwidth, and access to locks must be multiplexed among applications of varying importance. The policy used to manage these resources must provide encapsulation and modularity so that an application can manage its own resources without impacting the throughput and response time of other applications. Simple priority-based schemes do not provide this isolation. Absolute scheduling schemes are slow to adjust and so do not provide dynamic scheduling control at millisecond granularity. Lottery scheduling addresses both problems by providing modular and responsive control over scheduling, and is general enough to control diverse resources.
Contributions:
Using tickets as an abstraction for resource rights generalizes to any resource and gives a simple way to assign relative shares. Random selection is a simple lottery strategy that provides probabilistic fairness, which improves over longer runs and avoids starvation. The mechanism stays responsive since any changes in the shares or shareholders are reflected in the next lottery. Ticket transfer is an important mechanism that lets processes speed up bottlenecks such as lock contention, which prevents priority inversion. Currencies let applications apply local, modular policies to resource shares among their own threads while keeping other applications isolated from the effects of those changes.
Evaluation:
The authors modified a Mach 3.0 microkernel to incorporate a lottery scheduler and performed several experiments with different workloads to evaluate fairness, flexibility toward new processes, effectiveness in a client-server model, and load insulation. The observed execution ratios of tasks were found to closely match the allocated ratios, with minimal variation. Three Monte-Carlo integration tasks, started two minutes apart, were shown to converge quickly because each newer task was funded with shares proportional to its current error relative to the older tasks. The server's response time and throughput ratios matched the allocation ratios of its clients, even in transient cases, with a small standard deviation of around 7%. Finally, the load insulation experiment showed that currencies isolate tasks from the allocation changes of another task's subtasks.
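A rough sketch of that error-driven funding, with illustrative names (in the paper's experiment the Monte-Carlo tasks adjust their own funding): each task is periodically re-funded in the experiment's currency in proportion to its current relative error, so a newly started, high-error task gets a large share that shrinks as it converges.

struct mc_task {
    double relative_error;   /* current statistical error of the task's estimate */
    double tickets;          /* funding in the experiment's local currency */
};

/* Periodically re-fund every task in proportion to its relative error.
 * Monte-Carlo error falls off roughly as 1/sqrt(samples), so older tasks
 * naturally cede CPU share to newer, less converged ones. */
void refund_by_error(struct mc_task *tasks, int n, double scale)
{
    int i;
    for (i = 0; i < n; i++)
        tasks[i].tickets = scale * tasks[i].relative_error;
}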
Confusions:
What are the policies for ticket assignment? Or is this only employed in mutually trusting systems?
Posted by: Haseeb Tariq | March 11, 2015 10:43 PM
Summary
This paper presents lottery scheduling: a randomized scheduling mechanism that provides fast, efficient proportional-share allocations. Lottery scheduling guarantees fair sharing to a fine degree of accuracy for both short-run and long term processes. Although originally intended for processor scheduling, it can also be used to manage many different resources including I/O bandwidth, memory and thread locks. It achieves this with very small overhead and comparable performance to the standard Mach microkernel scheduler.
Problem
Some workloads require compute resource sharing between processes of varying importance. For example, a database server will want to allocate more CPU time to the server daemon than to a terminal shell. One way to accomplish this is by adding a notion of priority into the scheduler, but this is a vague, non-quantitative measure that is difficult to implement in practice. Fair-share and microeconomic schedulers address the problem better, but they bring significant overhead to the scheduler, which makes them unsuitable for general use.
Contributions
The major contribution of this paper is the lottery scheduler, a fast mechanism that provides accurate proportional sharing of resources between processes with a variety of different needs. In addition, ticket transfers let a blocked process lend its rights to the process it is waiting on, and ticket inflation lets mutually trusting processes adjust their own allocations. Ticket currencies allow fine-grained control within a smaller group of trusting processes. Compensation tickets inflate a client's tickets to make up for an unused portion of its quantum, so it still gets its fair share of the resource.
Evaluation
Several different evaluations in this paper show that lottery schedulers work as advertised. The authors perform tests showing that lottery-scheduled tasks maintain relative rate accuracy over varying allocation ratios, remain fair over long periods of time, and that multiple currencies effectively achieve isolation in sharing. Finally, a system overhead test shows lottery scheduling to be only marginally slower than the unmodified Mach kernel, and even this difference falls within the standard deviation of individual Mach runs.
Confusions
Who determines how many tickets each process gets? Is this mechanism useful in anything other than a single-user, mutually cooperative system?
Posted by: Mark Coatsworth | March 11, 2015 10:11 PM
Summary
This paper proposes a novel proportional, fair-share resource allocation policy called lottery scheduling. This algorithm allows fine-grained control over the relative execution priorities of contending processes. The authors describe the features of this mechanism such as ticket transfers, inflation, currencies, and extend it to support space-shared or diverse resources.
Problem
The authors claim that general-purpose resource allocation schemes do not offer any control over application service rates other than through priorities. Furthermore, the priority-based mechanisms also support a coarse view of priority, in which the only guarantee is that a high priority process will get a greater slice of CPU time than a low priority process. The fair-share schedulers that tried to improve on this have significant overheads and perform as expected only across large time-frames.
Contributions
The main contribution of this paper is the central idea itself: a simple and flexible means of controlling the distribution of resources among competing processes. The simplicity makes it straightforward (or so it seems to me) to reason about the fairness of the proposed scheme. Several advanced features are described to broaden the mechanism's applicability. Ticket transfers suit the context of local RPC and seem much more directed and useful than simply yielding for cooperative thread scheduling. Separate ticket currencies are necessary to enforce the isolation a fair scheduling policy needs. Processes that use less than their available time slice are compensated by inflating the value of the tickets they hold, again preserving fairness. All of these schemes are feasible because the ticket-based approach keeps bookkeeping simple. Finally, I liked how the authors applied lottery scheduling to resources other than compute, such as lock contention and space-shared resources.
Evaluation
The prototype design is implemented using a Mach 3.0 microkernel. The authors demonstrate the accurate control and fairness of lottery scheduling by showing that the relative runtime of two tasks closely matched the ratio of tickets allocated to them. Monte-carlo simulations are used to show the use of lottery scheduling to maintain fine-grained and dynamic control over process scheduling. The system overheads of implementing the lottery scheduler are also shown to be comparable to the default Mach timesharing policy. The load insulation property guaranteed by lottery scheduling is also tested suitably.
Confusions
I was unsure if a process returns to the ready queue on yielding. On checking chapter 21 of the OSTEP book (http://pages.cs.wisc.edu/~cs537-3/notes/21_threads-locks-os.pdf) , it says yield can be thought of as a system call taking a process from the running to the ready state. Then, isn’t giving it compensatory tickets a bad idea as the process yielded as it didn’t have anything urgent to do, but we still raise the probability of it getting CPU time in the near future. I know this works similarly in the multi-level feedback queue too, but I am not sure as to what am I missing in this scenario. I mean, if a process is polling on a lock, and yields periodically to avoid wasting CPU cycles, but we compensate for it anyways, aren’t we nullifying that optimization? Also, isn’t most of the evaluation section merely showing us that they used a good random number generator?
Posted by: Swapnil Haria | March 11, 2015 10:09 PM
1. Summary
This paper introduces the design and mechanism of lottery scheduling, along with an implementation and evaluation. The design addresses problems of traditional scheduling, which suffers from either poor performance or inflexibility.
2. Problems
Allocating scarce resources to the most important tasks is crucial. In traditional schemes, both the assignment of priorities and dynamic priority adjustment are often ad hoc. Though solutions exist that address some of these problems, they still provide only limited control over long-running computations. Lottery scheduling is therefore developed to support modular resource management and responsive control over the relative execution rates of computations.
3. Contribution
On lottery scheduling: resource rights are encapsulated in an abstract, relative, and uniform way; the scheduling is probabilistically fair, and the resources allocated to a client are proportional to the number of tickets it holds; starvation does not occur in this mechanism.
On modular resource management: ticket transfers move tickets from one client to another, which is broadly useful and also solves the conventional priority inversion problem; ticket inflation among mutually trusting clients adjusts resource allocations without explicit communication; the currency abstraction flexibly names, shares, and protects resource rights; and compensation tickets ensure that a client that does not consume its entire quantum still receives its proportional share.
In addition, a prototype is implemented and evaluated. The implementation includes a pseudo-random number generator, a tree of partial ticket sums as an optimization, a minimal lottery scheduling interface exported by the microkernel, mechanisms for evaluating ticket currencies and compensation tickets, system calls to transfer tickets, and a user interface. The paper also extends lottery scheduling to other applications.
4. Evaluation
The evaluation tests the fairness of lottery scheduling and shows that it can control computation rates. The flexibility test shows that a newly started experiment can obtain approximate results quickly while older experiments continue reducing their error at a slower rate. Lottery scheduling enables the desired control at the operating system level and eliminates the need for mutually trusting, well-behaved applications, though part of the results are distorted by the round-robin processing of client requests. Finally, the overhead of lottery scheduling is comparable to that of the standard Mach timesharing policy.
5. Confusions
Any other drawbacks of lottery scheduling?
Posted by: Junhan Zhu | March 11, 2015 09:56 PM
1. Summary
The paper outlines lottery scheduling, a ticket-based system that provides lightweight, responsive control over process scheduling. The idea of lotteries and currencies can be generalized to many other resources, including locks and memory allocation.
2. Problem
Schedulers that control scarce resources (particularly computation via the processor) must allow for control of quality of service. Most current schedulers implement this requirement via priorities. However, these priority systems are not modular, and thus lead to poorly understood behavior between different computations. Current fair-share systems can provide these guarantees of modularity, but incur a high overhead and thus cannot function well in interactive environments.
3. Contributions
The main contribution of the paper is the lightweight, randomized scheduling mechanism which provides probabilistic service guarantees. Clients wishing to access a resource hold a number of lottery tickets. Every time an allocation decision is made, a randomized lottery is held, and the winner is given access to the resource. As a result, clients which need a high quality of service simply require more tickets. The random nature of allocation means that the lottery is probabilistically fair, and that each client is likely to receive its fair share.
Because tickets can be accounted for very easily, a number of significant scheduling mechanisms are possible. First, different currencies can be created, and linked back to the single global currency. This means that clients which are assigned a specific sub-currency will only detrimentally affect other clients competing for the same sub-currency. Because this sub-currency equates to a fixed number of global tickets, other clients will not be affected. This modularity is important when clients require specific qualities of service. Additionally, clients which perform a call into some server can temporarily transfer their tickets to the server to improve performance.
This lottery-based system is not limited to computation scheduling. Locks can be allocated with the same mechanism: the client that holds the lock can receive the tickets of all waiting threads. This helps solve the priority inversion problem and means the lock will become available to waiters sooner. Memory can be handled similarly: a lottery can be held to choose which page to send to disk.
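A hypothetical sketch of such an inverse lottery for choosing a replacement victim (the structures and weighting are illustrative): a client holding t of T total tickets is picked with weight proportional to 1 - t/T, so heavily funded clients rarely lose pages.

#include <stdlib.h>

struct page_owner {
    double tickets;            /* funding of the client owning the page */
    struct page_owner *next;
};

/* Pick the client that must give up a page.  With n clients whose tickets
 * sum to total_tickets, the (1 - t/T) weights sum to n - 1, so a single
 * uniform draw in [0, n - 1) selects the victim. */
struct page_owner *pick_victim(struct page_owner *head, double total_tickets, int nclients)
{
    double r = ((double)rand() / RAND_MAX) * (nclients - 1);
    struct page_owner *p;

    for (p = head; p != NULL; p = p->next) {
        r -= 1.0 - p->tickets / total_tickets;
        if (r < 0.0)
            return p;          /* this client's page is evicted */
    }
    return head;               /* guard against floating-point rounding */
}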
4. Evaluation
The authors provide a number of examples which show the power of the lottery system. For small time intervals, there is a large variation in allocation, as shown in Figure 5. Over larger time intervals, the authors find the experimental allocations closely match the assigned allocations.
The ticket transfer mechanism is measured by creating three clients with an 8:3:1 ticket allocation. Each client calls a server and transfers its tickets. Experiments show that the clients see proportional service rates of 7.69:2.51:1, closely matching the allocation.
Other experiments show that the allocation system can successfully respond to changes in allocations, and that tickets allocated in two different currencies do not affect each other.
5. Confusion
How would tickets actually be assigned? If a client requests of number of tickets (in the base currency) which would violate the QoS guarantees of another client, is the new client simply denied?
Posted by: Michael Bauer | March 11, 2015 09:17 PM
Paper: Lottery Scheduling: Flexible Proportional-Share Resource Management
Author: Carl A. Waldspurger, William E. Weihl (MIT)
Summary
The paper introduces a randomized resource allocation mechanism, Lottery Scheduling, that provides efficient, responsive control over the relative execution rates of computations. The basic concept is the lottery ticket: the more tickets a client possesses, the better its chances of winning the lottery for a resource. The paper also discusses how lottery scheduling supports modular resource management and how it can be generalized to manage many diverse resources.
Problems
Existing fair-share schedulers did not provide dynamic control over scheduling, had overheads, and lacked fine-grained control over the distribution of resources. Choosing the right scheduling policy for multiplexing resources among service requests of varying importance was complex and challenging, since it affects response time and throughput. In this paper the authors propose lottery scheduling, a mechanism that is simple and flexible and provides proportional sharing.
Contributions
Lottery scheduling (LS) uses tickets to represent the share of a resource a process should receive, and it achieves this share probabilistically by holding a lottery every so often (say, every time slice). Since any client with a non-zero number of tickets will eventually win a lottery, it solves the starvation problem. LS also operates fairly when the number of clients or tickets varies dynamically. A few other contributions listed in the paper are:
1) It works well for multimedia applications. 2) It is easy to donate tickets to others (especially when waiting on locks); ticket transfers also solve the priority inversion problem. 3) Ticket currencies provide a useful abstraction for isolating trust boundaries. 4) It can be generalized to manage diverse resources - memory, I/O bandwidth, and locks.
In addition, the authors mention that it is conceptually simple and easily implemented, and can be added to an existing OS to provide better control over consumption rates.
Evaluations
The authors present a set of experiments showing that over short time scales there is some variation, but over large time scales the lottery scheduler approaches the desired share values. A client-server experiment with a multithreaded database server shows that the observed throughput and response time ratios closely match the allocations. The load insulation evaluation shows that ticket inflation within one trust boundary does not affect anything outside it. Further results are shown for a multimedia application and for managing diverse resources such as locks and space-shared resources. Overall, the scheduler is lightweight and has low overhead.
Confusions
Who ensures that ticket transfers and ticket inflation are done correctly?
Posted by: Yash Govind | March 11, 2015 09:13 PM
1. Summary
In the paper "Lottery Scheduling: Flexible Proportional-Share Resource Management", the authors present a resource allocation mechanism which provides efficient and responsive control over relative execution rate of computations. They implement proportional sharing resource management by holding a lottery to determine the allocation and granting the resource to the client with the winning ticket. They also show how lottery scheduling can be generalized to allocate resources wherever queuing is necessary for resource access.
2. Problem
When scheduling scarce resources, the policy for multiplexing requests of varying importance must be chosen carefully, since it impacts throughput and response time. For example, conventional task schedulers employ priority-based schemes (static or dynamically varying) that cannot insulate the resource allocation policies of separate modules, while fair-share schedulers employ complicated algorithms that periodically monitor task usage and adjust priorities through feedback loops. None of these support flexible, responsive control over service rates at a time scale of milliseconds to seconds.
3. Contributions
1) Flexible control - Lottery sharing can be used to manage diverse resources such as processor time, IO bandwidth and access to locks. A variant of lottery scheduling can be used for space shared resources such as memory.
2) Modular resource management - Support for multiple currencies can be used as an abstraction for isolating or grouping users, tasks and threads.
3) Responsive control - Since changes to ticket allocation (by inflation or deflation) are immediately reflected in making the next allocation decision, lottery scheduling is extremely responsive.
4) Fairness - Lottery scheduling is probabilistically fair. Since any client with a non-zero number of tickets will eventually win a lottery, the problem of starvation is overcome. By providing compensation tickets, a client that does not consume its entire allocated quantum is still guaranteed its allocation ratio.
4. Evaluation
Fairness - With few exceptions, all observed ratios are close to their corresponding allocations over a series of second or sub-second time windows.
Flexible control - Using ticket inflation/deflation, when a new task is started, it is initially given a large share of the processor which diminishes as the task reduces its error to a value closer to that of the other executing tasks.
Ticket transfer - By allowing a client to automatically redirect its resource rights to the server that is computing on its behalf, they show how high priority clients are guaranteed high response time and throughput.
Load insulation - For tasks executing the Dhrystone benchmark, they show how by inflating tickets of a currency type, only tasks holding tickets of that currency are affected.
Synchronization resources - The authors address the priority inversion problem by having all threads blocked waiting to acquire the mutex perform ticket transfers that fund the thread currently holding the mutex.
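A rough sketch of that lock-funding idea, with illustrative structures (not the paper's actual mutex implementation): waiters fund the current holder, and on release the inherited funding is withdrawn and a lottery among the waiters picks the next holder.

struct thread {
    double tickets;            /* the thread's current funding */
};

struct mutex {
    struct thread *holder;     /* thread currently inside the critical section */
    double inherited;          /* tickets transferred in by blocked waiters */
    /* ... plus a queue of waiting threads ... */
};

/* A thread that must block on the mutex lends its tickets to the holder,
 * so a lightly funded holder finishes the critical section quickly and
 * priority inversion is avoided. */
void mutex_fund_holder(struct mutex *m, struct thread *waiter)
{
    m->holder->tickets += waiter->tickets;
    m->inherited += waiter->tickets;
    /* ... enqueue waiter and block ... */
}

/* On release, withdraw the inherited funding; a lottery over the waiters
 * (not shown) then selects the next holder. */
void mutex_unfund_holder(struct mutex *m)
{
    m->holder->tickets -= m->inherited;
    m->inherited = 0.0;
}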
5. Confusion
- How are the tickets initially assigned to tasks?
- I am not sure compensation tickets are a good idea. Allocating compensation tickets to processes that may not even need the resource in the future, just to ensure fairness, does not seem right!
Posted by: Shruthi Venkatesan | March 11, 2015 09:04 PM
Summary:
The paper presents the Lottery Scheduler, a resource allocation mechanism that ensures efficiency, fairness, dynamic control, and modularity, i.e., isolation of resources for each process/thread. The authors claim that the mechanism, mainly intended for scheduling, can be extended to other resources such as I/O, memory, and synchronization locks.
Problem:
Existing scheduling mechanisms did not account for the varying resource needs of applications in some software systems. Those that did failed to address modularity or were ad hoc in handling dynamic changes in process priorities. Those that handled these issues had high overheads and so were not efficient in interactive systems.
Contributions:
The two major ideas in lottery scheduling are tickets and randomized resource allocation. Tickets provide an abstraction for allocating resource shares relative to other processes contending for the resource, independent of the system. Randomized sharing, realized with a random number generator, ensures probabilistic fairness, and any process with a non-zero number of tickets avoids starvation. Tickets also let the scheduler insulate one process from another that may recursively spawn threads, i.e., the first process still receives its share of resources even when the second spawns many threads and increases contention for the CPU. This is accomplished with ‘currencies’, which divide the resource within a boundary such as a process. Beyond these foundational ideas, the authors present ticket transfers, used in a client-server scenario where a blocked client transfers its tickets to the server it is waiting on; this keeps the server’s service rates proportional for different clients and also solves the priority inversion problem. Another idea is ‘compensation tickets’, which ensure that short-running or interactive processes get their fair share of resources. The authors also extend the idea to locks and waiting threads, and use a similar idea to choose a ‘loser’ for sharing space-based resources such as memory.
Evaluation:
Experiments testing specific aspects of the design, such as accuracy and dynamic, flexible control, show that the resource manager ensures proportional resource sharing with acceptable accuracy, adapts dynamically to changing resource requirements (the Monte Carlo experiment), transfers tickets efficiently, and eliminates the need for well-behaved applications to ensure fairness (the multimedia application experiment).
Confusions:
How does the lottery scheduler decide on the ticket allocation to a new process/ thread? Does it rely on another mechanism, such as MLFQ, to dynamically vary the number of tickets, depending on the behavior of the application?
Posted by: Kishore Kumar Jagadeesha | March 11, 2015 08:58 PM
1. Summary
This paper presents Lottery Scheduling, which provides efficient and responsive control over relative execution rates and is probabilistically fair. It supports modular resource management by insulating policies from one another. The same ticket abstraction can be applied to a variety of resource management domains beyond CPU scheduling.
2. Problem
Existing schedulers are mostly priority-based, which is static and ad hoc and defines priorities only at coarse granularities. They do not provide modularity, since a change in the priority of one process within a client can affect other clients in the system. Fair-share schedulers provide deterministic share allocations, but are not very responsive to changes in shares.
3. Contributions
Lottery scheduling abstracts resource rights as lottery tickets. Each allocation is determined by a lottery, and the resource is granted to the client holding the winning ticket. This abstraction enables tickets to be passed in messages from clients to servers to change priorities dynamically. Within a client’s trust boundary, tickets can be distributed in a local currency backed by the global ticket currency, so the effects of ticket inflation are limited to that boundary. Interactive processes get their fair share of the CPU through compensation tickets, which are awarded when a process yields the CPU before using its full quantum. The lottery abstraction can also be extended to other resources such as:
* Locks - waiting threads can lend their tickets to thread holding the lock.
* Memory Replacement - use an inverse-lottery to decide the thread whose page will be swapped out.
* Composite resource management - Manage multiple resources at the same time.
4. Evaluation
Rigorous experiments show the following:
* Fairness - over large time scales, the allocated proportion matches the desired proportion.
* Over short time scales, there is more variation, but the proportions are roughly the same.
* Flexibility - immediate responsiveness in scheduling profile to dynamic priority change.
* Client-Server ticket transfer.
* Multimedia applications - changing relative ticket proportions.
* Insulation - ticket inflation within one trust-boundary does not affect outside it.
* System overhead of lottery scheduling is low.
5. Confusions
* Why is lottery scheduling not implemented in commercial OSs?
* How is ticket transfer possible between un-trusting clients and servers? What is the backup measure to ensure tickets are returned by server?
Posted by: Aditya Venkataraman | March 11, 2015 08:42 PM
Summary :
This paper presents the lottery scheduler, a randomized proportional-share scheduler that, by assigning tickets to each process, ensures each job obtains a certain percentage of the CPU time in expectation.
Problem :
Traditional fair-share scheduling algorithms suffer from several problems. There are always corner cases that deterministic approaches cannot handle, and this hurts performance; for example, deterministic policies such as LRU page replacement perform badly on cyclic-sequential workloads. These schedulers are also not lightweight, because they must maintain a lot of per-process state to track how much CPU each process has used so far, and walking all of this state can make the scheduler slow. The authors address these problems with a randomized scheduler.
Contributions :
1. A randomized scheduler that is lightweight, fast, and does not suffer from corner cases. The tickets each process holds represent its share of the CPU; a ticket is chosen randomly from the global pool, and the process holding the winning ticket gets the resource.
2. Provides a lot of mechanisms to manipulate tickets in different useful ways.
3. Very simple idea where all the system needs is a good random number generator, a data structure to track the processes, and the total number of tickets.
4. Using an inverse-lottery mechanism to make a process relinquish a unit of a resource it holds. This way, the system provides share guarantees for space-shared resources like memory (a small sketch of the inverse lottery follows this list).
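A small sketch of the inverse lottery mentioned in point 4, assuming the selection probability (1/(n-1)) * (1 - t_i/T) that I believe the paper uses; the client names and ticket counts are made up.

```python
import random

def inverse_lottery(clients):
    """Pick a 'loser' to relinquish a resource unit (e.g., a page frame).
    Client i is chosen with probability (1/(n-1)) * (1 - t_i/T), so
    clients holding fewer tickets are more likely to give up their unit."""
    n = len(clients)
    total = sum(tickets for _, tickets in clients)
    weights = [(1.0 / (n - 1)) * (1 - tickets / total)
               for _, tickets in clients]      # weights sum to 1
    names = [name for name, _ in clients]
    return random.choices(names, weights=weights)[0]

# Illustrative: with a 1:2:3 ticket split, "A" loses most often.
print(inverse_lottery([("A", 1), ("B", 2), ("C", 3)]))
```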
Evaluations:
The authors observe that when processes run only for a short duration, unfairness can be observed between the allocated share and the actually obtained share. Only as processes run for a significant amount of time does the lottery scheduler approach the desired share values. The paper also presents results on ticket transfer with prioritized clients and a server: the low-priority clients get their fair share of the CPU in spite of competing with the high-priority client.
What I found confusing:
How does this extend to a system where all processes are short-running? Won't there be a lot of unfair resource allocation?
Posted by: Anusha Dasarakothapalli | March 11, 2015 07:06 PM
Summary:
The paper introduces a fair randomized resource-allocation mechanism named lottery scheduling. It is a proportional-share resource manager that provides responsive control over the relative execution rates of computations. The mechanism is applicable to interactive systems and can be used to manage memory, multiple resources, and virtual circuits.
Problem:
The policy used to multiplex resources in a multithreaded system has a high impact on the response time and throughput of the system. Resources should be multiplexed in such a way that every process gets a fair share of them, i.e., no starvation. General-purpose schemes based on priorities do not provide encapsulation or modularity. Fair-share schedulers tackle the priority problem but incur overhead and have relatively coarse control over long-running computations. The authors propose a mechanism that is simple, flexible, fast, and does proportional sharing.
Contributions:
1. Lottery scheduling is a randomized mechanism for proportional sharing of resources with responsive control, i.e., any change in ticket allocation takes effect immediately and influences subsequent resource allocation decisions.
2. Lottery scheduling supports modular resource management
- ticket transfers to tackle priority inversion issue.
- inflation/deflation to dynamically adjust resource allocation.
- currencies as an abstraction for flexible naming, sharing, and protection of resource rights (a conversion sketch follows this list).
- compensation tickets to ensure fairness.
3. The mechanism is general enough to manage variety of resources such as I/O bandwidth, memory, access to locks etc.
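As a small illustration of the currency abstraction from point 2 (a sketch with made-up names and amounts, with the currency's funding expressed directly in base units for simplicity):

```python
def base_value(local_amount, currency_funding_in_base, currency_tickets_issued):
    """Value in base units of a ticket amount denominated in a local
    currency: a currency funded with F base units that has issued N
    local tickets makes each local ticket worth F / N base units."""
    return local_amount * currency_funding_in_base / currency_tickets_issued

# Illustrative: user "alice" is funded with 300 base tickets and issues
# 100 alice-tickets to her tasks. A task holding 20 alice-tickets is
# worth 20 * 300 / 100 = 60 base tickets in the global lottery, and
# inflating alice's own tickets only dilutes her other tasks.
print(base_value(20, 300, 100))   # 60.0
```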
Evaluation:
A prototype was implemented by modifying the Mach 3.0 microkernel, which has a scheduling quantum of 100 ms. Various experiments were done to test the flexibility, responsiveness, and efficient-control claims. Relative-accuracy measurements show that the lottery scheduler controls execution rates successfully over longer time intervals; some variation was observed over smaller intervals. Clients with different ticket allocations (8:3:1) were evaluated while competing for resources, and the observed allocations closely match the throughput and response-time ratios. Tests that vary currencies and add extra tasks show that the system is load-insulated and provides responsive control.
Confusion:
The paper does not discuss the basis on which tickets are assigned to processes, which I believe can itself be a problem.
Posted by: Harneet Singh | March 11, 2015 05:14 PM
Summary:
The authors present a proportional-share scheduling policy in which a variable number of tickets is assigned to each process and a lottery is conducted by random number generation. The paper also makes the case that a similar lottery system could be generalized to manage other resources such as network bandwidth and access to locks.
Problem:
The three primary issues which the authors address are:
1. Overheads with fair-share schedulers are high and hence they typically work well only with long-running jobs.
2. Priority-based schedulers sometimes lead to starvation of processes.
3. Scheduling overhead for short, dynamic jobs needs to be minimized while a desirable quality of service across the system is maintained.
Contributions:
In simple terms, the lottery scheduler works by assigning a variable number of tickets to each process. Once tickets have been assigned, a random number generator picks a ticket to decide which process runs next. The core ideas presented in the paper are:
Ticket currency, transfer and inflation: Different users can distribute tickets to their own jobs in their own currency, which the lottery scheduler converts to the base currency to achieve fairness across users. Ticket transfers let a blocked process give its tickets to another process to reduce its wait time. Inflation can be used to create more tickets in an environment where clients trust each other.
Compensation tickets: Since threads may run for variable amounts of time, compensation tickets allow jobs that do not consume their entire quantum to progress in proportion to their allocation: a job that yields early is granted an extra compensation ticket until it next wins (a worked example follows below).
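A tiny worked example of the compensation formula (the t/f - t value is from the paper; the specific numbers here are just illustrative):

```python
def compensation_value(tickets, fraction_of_quantum_used):
    """Compensation ticket value for a client that used only a fraction
    f of its quantum: t / f - t, so its effective funding is t / f
    until it next wins a lottery."""
    return tickets / fraction_of_quantum_used - tickets

# Illustrative: a client holding 400 tickets that consumes 1/5 of its
# quantum gets a compensation ticket worth 400 / 0.2 - 400 = 1600,
# i.e., it competes with an effective 2000 tickets until its next win.
print(compensation_value(400, 0.2))   # 1600.0
```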
Lottery process: For the lottery itself, the authors implement a fast pseudo-random number generator based on the Park-Miller algorithm in assembly language (a sketch follows below).
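For reference, the Park-Miller "minimal standard" generator is just a multiplicative congruential step; a Python sketch (the ticket-selection step here is my own simplification, not the paper's code):

```python
MODULUS = 2**31 - 1      # 2147483647, a Mersenne prime
MULTIPLIER = 16807       # 7**5, the Park-Miller "minimal standard" constant

def park_miller(seed):
    """One step of the Park-Miller generator: returns the next seed,
    which also serves as the next pseudo-random value (1 .. MODULUS-1)."""
    return (seed * MULTIPLIER) % MODULUS

# Illustrative use in a lottery: reduce the raw value modulo the total
# ticket count to pick a winning ticket number.
seed = 1
seed = park_miller(seed)
total_tickets = 1000
print(seed % total_tickets)   # winning ticket number
```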
Locks using lottery: The paper also describes a way to reduce the wait time of threads competing for locks. This is done by having threads waiting on a lock transfer their tickets to the thread that holds the mutex, so the holder is scheduled sooner and the mutex is released faster (a simplified sketch follows below).
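A simplified sketch of this idea (a toy model of waiter funding, not the paper's exact currency-based protocol; the class and method names are my own):

```python
import random

class LotteryMutex:
    """Toy model: threads waiting on the mutex lend their tickets to the
    holder, boosting its chance of being scheduled; on release, a lottery
    among the waiters (weighted by their tickets) picks the next holder."""

    def __init__(self):
        self.holder = None
        self.lent = {}                     # waiting thread -> tickets lent

    def effective_tickets(self, holder_tickets):
        """Scheduling weight of the holder: its own tickets plus loans."""
        return holder_tickets + sum(self.lent.values())

    def acquire(self, thread, tickets):
        """Return True if the lock was taken; otherwise the caller is
        (conceptually) blocked and lends its tickets to the holder."""
        if self.holder is None:
            self.holder = thread
            return True
        self.lent[thread] = tickets
        return False

    def release(self):
        """Release the lock and hold a lottery among waiters for ownership."""
        if not self.lent:
            self.holder = None
            return None
        names = list(self.lent)
        weights = [self.lent[name] for name in names]
        winner = random.choices(names, weights=weights)[0]
        del self.lent[winner]              # winner's loaned tickets return
        self.holder = winner
        return winner
```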
Evaluation:
The authors present a comprehensive set of results evaluating fairness, flexibility, and ticket transfer, and supporting the claim that lottery scheduling has low overhead.
Fairness and flexibility: The fairness results show that each process's progress is proportional to the tickets assigned to it. To evaluate flexibility, the authors run Monte Carlo tasks that hold more tickets at the beginning of execution to obtain approximate results faster; the results show faster trials early in each task's execution.
Ticket transfer was evaluated using a client-server database application in which the server holds no tickets of its own. The results show that the server makes progress proportional to the tickets transferred to it by its clients.
Overhead results were obtained by comparing against the unmodified Mach kernel, running Dhrystone benchmarks and a database application. The lottery scheduler was marginally faster than the timesharing policy implemented by the Mach kernel.
Confusions:
Application behavior might be nondeterministic since a random lottery is taking place.
Even though the authors claim that lottery scheduling works for short-running jobs, I fail to understand why most of the results presented in the paper use long-running applications.
Posted by: Tejaswi Agarwal | March 10, 2015 06:07 AM