Processes 4
Overview
- Multi-level feedback queues
- Scheduling in Windows 7 (with illustration)
- Scheduling in Linux (implementation in the labs)
- Scheduling related processes/threads
Multi-level Feedback Queues
- Different scheduling algorithms can be used for the individual queues (e.g., round robin, SJF, FCFS)
Feedback queues allow priorities to change dynamically, i.e., jobs can move between queues:
- Move to a lower priority queue if too much CPU time is used (prioritising I/O-bound and interactive processes)
- Move to a higher priority queue to prevent starvation and avoid priority inversion
Defining characteristics of feedback queues include:
- The number of queues
- The scheduling algorithms used for the individual queues
- The migration policy between queues
- Initial access to the queues
Feedback queues are highly configurable and offer significant flexibility (see the sketch below)
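To make the migration rules concrete, here is a minimal sketch in C, not taken from any real OS: three queues with round robin quanta, demotion for tasks that burn a full quantum, promotion for tasks that block on I/O, and aging to prevent starvation. The queue count, quanta, and aging threshold are illustrative assumptions.

```c
/* Minimal sketch of multi-level feedback queue migration rules.
 * The three-queue layout, quanta and aging threshold are illustrative
 * assumptions, not taken from any particular OS. */
#include <stdio.h>

#define NUM_QUEUES 3                                        /* characteristic 1: number of queues */
static const int quantum_ms[NUM_QUEUES] = { 10, 20, 40 };   /* round robin quantum per level      */

struct task {
    const char *name;
    int level;          /* current queue: 0 = highest priority        */
    int waited_ms;      /* time spent waiting since last scheduled    */
};

/* Migration policy (characteristic 3): demote CPU-bound tasks, promote tasks
 * that block early (I/O bound / interactive), and promote tasks that have
 * waited too long, to prevent starvation. */
static void migrate(struct task *t, int ran_ms, int blocked_for_io)
{
    if (!blocked_for_io && ran_ms >= quantum_ms[t->level]) {
        if (t->level < NUM_QUEUES - 1)
            t->level++;                   /* used its whole quantum: lower priority queue */
    } else if (blocked_for_io && t->level > 0) {
        t->level--;                       /* interactive / I/O bound: higher priority queue */
    }
    if (t->waited_ms > 200 && t->level > 0) {
        t->level--;                       /* aging: avoid starvation */
        t->waited_ms = 0;
    }
}

int main(void)
{
    struct task a = { "cpu_hog", 0, 0 };
    struct task b = { "editor",  1, 0 };

    migrate(&a, 10, 0);   /* burned its full 10 ms quantum -> demoted  */
    migrate(&b,  3, 1);   /* blocked on keyboard input     -> promoted */

    printf("%s now in queue %d\n", a.name, a.level);
    printf("%s now in queue %d\n", b.name, b.level);
    return 0;
}
```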
Windows 7
- An interactive system using a preemptive scheduler with dynamic priority levels
- Two priority classes with 16 different priority levels exist
- “Real time” processes/threads have a fixed priority level
- “Variable” processes/threads can have their priorities boosted temporarily
A round robin algorithm is used within the queues
Priorities are based on the process base priority (between 0 and 15) and the thread base priority (±2 relative to the process priority)
- A thread’s priority dynamically changes during execution between its base priority and the maximum priority within its class
- Interactive I/O bound processes (e.g. keyboard) receive a larger boost
- Boosting priorities prevents priority inversion
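The sketch below illustrates the “variable” class behaviour described above: a thread’s priority is temporarily boosted and then clamped between its base priority and the top of its class. This is not the actual Windows scheduler; the base priorities and boost values are made-up examples.

```c
/* Illustrative sketch of the "variable" priority class behaviour: the
 * clamping range (base .. 15) follows the slide, the boost values and
 * base priorities are made-up examples, not real Windows values. */
#include <stdio.h>

#define VARIABLE_CLASS_MAX 15   /* top of the variable class */

static int effective_priority(int process_base, int thread_delta, int boost)
{
    int base = process_base + thread_delta;   /* thread base = process base +/- 2 */
    int prio = base + boost;                  /* temporary boost, e.g. after I/O  */
    if (prio > VARIABLE_CLASS_MAX) prio = VARIABLE_CLASS_MAX;
    if (prio < base) prio = base;             /* decays back to the base, never below it */
    return prio;
}

int main(void)
{
    /* keyboard-driven thread: large boost after I/O completion */
    printf("interactive: %d\n", effective_priority(8, +1, 6));
    /* CPU-bound thread: no boost */
    printf("cpu bound:   %d\n", effective_priority(8, -1, 0));
    return 0;
}
```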
Scheduling in Linux
The Completely Fair Scheduler
- Linux distinguishes between two types of tasks for scheduling:
- Real time tasks (to be POSIX compliant), divided into:
- Real time FIFO tasks
- Real time Round Robin tasks
- Time sharing tasks using a preemptive approach (similar to variable in Windows)
- The most recent scheduling algorithm in Linux for time sharing tasks is the “completely fair scheduler”
Real-Time Tasks
- Real time FIFO tasks have the highest priority and are scheduled using an FCFS approach, with preemption if a higher priority job arrives
- Real time round robin tasks are preemptable by clock interrupts and have a time slice associated with them
- Neither approach can guarantee hard deadlines
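Since Linux scheduling comes up in the labs, here is a hedged sketch of how a process can request a real-time policy through the POSIX API (sched_setscheduler with SCHED_FIFO; SCHED_RR works the same way but adds a time slice per task). It needs sufficient privileges (e.g. root or CAP_SYS_NICE) to succeed, and the priority value is only an example.

```c
/* Sketch: requesting a real-time policy for the calling process through the
 * POSIX scheduling API.  Requires privileges (e.g. root or CAP_SYS_NICE);
 * the priority value 10 is only an example. */
#include <stdio.h>
#include <sched.h>

int main(void)
{
    struct sched_param param;
    param.sched_priority = 10;   /* example real-time priority */

    /* SCHED_FIFO: run until the task blocks or a higher priority task preempts it.
     * SCHED_RR would additionally give each task a time slice. */
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("now running as a real-time FIFO task\n");
    return 0;
}
```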
Time Sharing Tasks
- The CFS divides the CPU time between all processes
- If all N processes have the same priority:
- They will be allocated a “time slice” equal to 1/N times the available CPU time
- I.e., if N equals 5, every process will receive 20% of the processor’s time
- The length of the time slice and the “available CPU time” are based on the targeted latency (⇒ every process should run at least once during this interval)
- If N is very large, the context switch time becomes dominant, hence a lower bound on the “time slice” is imposed by the minimum granularity
- A process’s time slice can be no less than the minimum granularity (if this bound takes effect, response time will deteriorate)
- If processes have different priorities, a weighting scheme is used to take the different priorities into account (see the sketch below):
- Every process i is allocated a weight Wi that reflects its priority
- The tasks with the lowest amount of “used CPU time” are selected first
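The two time-slice rules and the “lowest used CPU time” selection can be sketched as follows; the targeted latency, minimum granularity, and weights are illustrative assumptions, not the kernel’s actual values.

```c
/* Minimal sketch of the CFS time-slice rules described above.  The constants
 * and weights are illustrative assumptions, not the kernel's defaults. */
#include <stdio.h>

#define TARGETED_LATENCY_MS 20.0   /* every runnable task should run once per interval     */
#define MIN_GRANULARITY_MS   1.0   /* lower bound so context-switch cost doesn't dominate  */

struct task {
    const char *name;
    double weight;    /* W_i, reflects the task's priority */
    double used_ms;   /* "used CPU time" consumed so far   */
};

/* Equal priorities: each of N tasks gets 1/N of the targeted latency,
 * but never less than the minimum granularity. */
static double equal_slice_ms(int n)
{
    double slice = TARGETED_LATENCY_MS / n;
    return slice > MIN_GRANULARITY_MS ? slice : MIN_GRANULARITY_MS;
}

/* Different priorities: task i gets W_i / (sum of all weights) of the interval. */
static double weighted_slice_ms(const struct task *t, double total_weight)
{
    return TARGETED_LATENCY_MS * t->weight / total_weight;
}

/* The task with the least used CPU time runs next. */
static const struct task *pick_next(const struct task *tasks, int n)
{
    const struct task *next = &tasks[0];
    for (int i = 1; i < n; i++)
        if (tasks[i].used_ms < next->used_ms)
            next = &tasks[i];
    return next;
}

int main(void)
{
    printf("N = 5 equal tasks:   %.1f ms each (20%% of the CPU)\n", equal_slice_ms(5));
    printf("N = 100 equal tasks: %.1f ms each (clamped to the granularity)\n", equal_slice_ms(100));

    struct task tasks[] = { { "high_prio", 3.0, 4.0 }, { "low_prio", 1.0, 2.0 } };
    double total = tasks[0].weight + tasks[1].weight;
    printf("high_prio slice: %.1f ms, low_prio slice: %.1f ms\n",
           weighted_slice_ms(&tasks[0], total), weighted_slice_ms(&tasks[1], total));
    printf("next to run: %s\n", pick_next(tasks, 2)->name);
    return 0;
}
```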
Shared Queues
- A single or multi-level queue shared between all CPUs
- Advantage: automatic load balancing
- Disadvantages:
- Contention for the queues (locking is needed)
- “All CPUs are equal, but some are more equal than others”: does not account for processor affinity:
- Cache becomes invalid when moving to a different CPU
- Translation lookaside buffers (TLBs, part of the MMU) become invalid
- Windows will allocate the highest priority threads to the individual CPUs/cores
Private Queues
- Each process/thread is assigned to a queue private to an individual CPU
- Advantages:
- CPU affinity is automatically satisfied
- Contention for shared queue is minimised
- Disadvantages: less load balancing
- Push and pull migration between CPUs is possible
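A toy sketch of private per-CPU run queues with a simple pull migration step: an idle CPU takes a task from a loaded neighbour, trading a little affinity for balance. The fixed-size queues and the balancing trigger are illustrative assumptions.

```c
/* Toy sketch of private per-CPU run queues with a simple "pull" migration:
 * an idle CPU takes one task from the other (more loaded) CPU.  The fixed
 * queue size and the trigger condition are illustrative assumptions. */
#include <stdio.h>

#define NUM_CPUS  2
#define MAX_TASKS 8

struct runqueue {
    int tasks[MAX_TASKS];   /* ids of tasks assigned to this CPU (affinity kept) */
    int count;
};

static struct runqueue rq[NUM_CPUS];

/* Pull migration: when a CPU runs out of work it steals a task from its
 * neighbour, trading a little cache/TLB affinity for better load balance. */
static void pull_if_idle(int cpu)
{
    int other = (cpu + 1) % NUM_CPUS;

    if (rq[cpu].count == 0 && rq[other].count > 1) {
        int task = rq[other].tasks[--rq[other].count];
        rq[cpu].tasks[rq[cpu].count++] = task;
        printf("pulled task %d from CPU %d to CPU %d\n", task, other, cpu);
    }
}

int main(void)
{
    rq[0].tasks[0] = 1; rq[0].tasks[1] = 2; rq[0].count = 2;  /* CPU 0: two tasks */
    rq[1].count = 0;                                          /* CPU 1: idle      */

    pull_if_idle(1);
    printf("CPU 0 now has %d task(s), CPU 1 has %d task(s)\n", rq[0].count, rq[1].count);
    return 0;
}
```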
Related vs. Unrelated Threads
Related: multiple threads that communicate with one another and ideally run together (e.g. search algorithm)
Unrelated: e.g., processes/threads that are independent, possibly started by different users running different programs
Related Threads
- The aim is to get threads running, as much as possible, at the same time across multiple CPUs
- Approaches include:
- Space sharing
- Gang scheduling
Gang scheduling
- Time slices are synchronised and the scheduler groups threads together to run simultaneously (as much as possible)
- A preemptive algorithm
- Blocking threads result in idle CPUs
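As a small illustration, a gang schedule can be pictured as a table of synchronised time slots across CPUs, with every thread of a gang placed in the same slot; the gang sizes and slot layout below are made up.

```c
/* Sketch of a gang schedule: synchronised time slots across CPUs, with all
 * threads of a gang placed in the same slot so they run together.  The gang
 * sizes and slot layout are made up for illustration. */
#include <stdio.h>

#define NUM_CPUS  4
#define NUM_SLOTS 3

int main(void)
{
    /* schedule[slot][cpu] = id of the gang whose thread runs there (0 = idle) */
    int schedule[NUM_SLOTS][NUM_CPUS] = {
        { 1, 1, 1, 1 },   /* slot 0: gang 1 (4 related threads) gets all CPUs      */
        { 2, 2, 3, 3 },   /* slot 1: gangs 2 and 3 (2 threads each) share the slot */
        { 4, 4, 4, 0 },   /* slot 2: gang 4 has 3 threads; one CPU is left idle    */
    };

    for (int s = 0; s < NUM_SLOTS; s++) {
        printf("slot %d:", s);
        for (int c = 0; c < NUM_CPUS; c++) {
            if (schedule[s][c])
                printf("  cpu%d: gang %d", c, schedule[s][c]);
            else
                printf("  cpu%d: idle", c);   /* a blocked gang member leaves the CPU idle */
        }
        printf("\n");
    }
    return 0;
}
```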
Summary
- Scheduling on Windows and Linux
- Multi-processor/core scheduling is “a bit different” (load balancing, processor affinity, etc.)
- Related and unrelated threads
- Shared or private queues
- Space scheduling or gang scheduling