
Process Scheduling 2

재호맴매 2024. 3. 20. 14:36

Terminologies

  • Burst (time)
    • CPU burst : time during which the process executes on the CPU.
    • I/O burst : time during which the process waits for I/O to complete.
  • CPU-I/O burst cycle
    • Each process execution consists of a cycle of CPU execution and I/O wait.
    • Alternating CPU and I/O bursts.
  • Processes can be described as 
    • I/O-bound process : I/O-intensive (CPU burst < I/O burst)
    • CPU-bound process : CPU-intensive (CPU burst > I/O burst)

 

Scheduling Criteria

System

  • CPU utilization : keep the CPU as busy as possible.
  • Throughput : # of processes that complete their execution per time unit.

 

Process

  • Turnaround time
    • Amount of time to execute a particular process.
    • Completion time - arrival time (= waiting time + total CPU burst time)
  • Waiting time : amount of time a process spends waiting in the ready queue.
  • Response time
    • Amount of time it takes from when a request was submitted until the first response is produced.
    • New -> Ready -> Running time (until the process is first dispatched); a worked example follows this list.
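
As a small worked example, the following C program computes these three metrics for a hypothetical non-preemptive FCFS workload (the arrival and burst values are made up for illustration):

```c
#include <stdio.h>

/* Hypothetical workload: arrival time and CPU burst per process. */
struct proc { int arrival, burst; };

int main(void) {
    struct proc p[] = { {0, 7}, {2, 4}, {4, 1} };
    int n = sizeof p / sizeof p[0];
    int clock = 0;

    for (int i = 0; i < n; i++) {
        if (clock < p[i].arrival) clock = p[i].arrival; /* CPU idles until arrival */
        int start      = clock;                      /* first time on the CPU     */
        int completion = start + p[i].burst;
        int turnaround = completion - p[i].arrival;  /* completion - arrival      */
        int waiting    = turnaround - p[i].burst;    /* turnaround - CPU burst    */
        int response   = start - p[i].arrival;       /* first dispatch - arrival  */
        printf("P%d: turnaround=%d waiting=%d response=%d\n",
               i, turnaround, waiting, response);
        clock = completion;
    }
    return 0;
}
```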

 

Optimization criteria

  • Maximize CPU utilization.
  • Maximize throughput.
  • Minimize turnaround time, waiting time, response time.
  • Meet deadlines.
  • Maximize fairness.
  • Minimize scheduling overhead.

 

Which metrics are important?

  • Supercomputer : CPU utilization, throughput
    • Large, long-running jobs submitted by users.
    • Probably has lots of CPU-hungry tasks, not much I/O.
  • Personal computer : turnaround time, waiting time, response time
    • Human-like interactivity on the desktop.
    • Frequently waiting on I/O to make progress.
  • Embedded system : deadline
    • Emphasizes rapid interrupt handling and task dispatching.
    • A mix of criticality levels.
  • Server computer for cloud services : fairness
    • Ensures each task receives CPU resources in proportion to its share.

 

Virtual Round Robin

Cycle of virtual round robin

  • Reduces the bias in favor of long (CPU-bound) processes, a drawback of plain Round Robin.
    • A scheme that favors short (I/O-bound) processes.
  • CPU-bound processes generally use a complete time quantum.
    • They immediately return to the ready queue.
  • Processes in the auxiliary queue (generally I/O-bound processes) get preference, as sketched below.
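
A minimal sketch of the VRR dispatch decision, assuming two simple FIFO queues; the structure and function names are illustrative, not taken from any real kernel:

```c
#include <stddef.h>

struct task {
    struct task *next;
    int remaining_quantum;   /* quantum left over when the task blocked on I/O */
};

struct queue { struct task *head, *tail; };

static struct queue aux_q;    /* tasks that blocked before using their full quantum */
static struct queue ready_q;  /* ordinary ready queue */

static struct task *dequeue(struct queue *q) {
    struct task *t = q->head;
    if (t) { q->head = t->next; if (!q->head) q->tail = NULL; }
    return t;
}

/* The auxiliary queue always has preference, and a task taken from it runs
 * only for the quantum it had left when it blocked. */
struct task *vrr_pick_next(int full_quantum, int *quantum_out) {
    struct task *t = dequeue(&aux_q);
    if (t) { *quantum_out = t->remaining_quantum; return t; }
    t = dequeue(&ready_q);
    if (t) *quantum_out = full_quantum;
    return t;
}
```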

 

Multi-Level Feedback Queues

Multilevel Queue

  • The ready queue is partitioned into separate queues.
    • Each process belongs to a class.
    • Foreground : interactive (generally I/O-bound)
    • Background : batch (generally CPU-bound)
  • Scheduling within each queue / scheduling among the queues.
  • Starvation can occur because of fixed priorities.

 

Multilevel Feedback Queue

Cycle of Multilevel Feedback Queue

  • Scheduling algorithm for each queue
    • Foreground : RR
    • Background : FIFO
  • Migration : processes can move between queues.
    • Promote by aging : resolves starvation.
    • Demote on using up the entire time quantum : penalizes long processes (see the sketch after this list).
  • A Multilevel Feedback Queue scheduler is defined by the following parameters.
    • Number of queues
    • Scheduling algorithms for each queue
    • Method used to determine which queue a process will enter
    • Method used to determine when to upgrade / demote a process
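
The promotion and demotion rules can be sketched in C; the number of queues, the per-level quanta, and the aging threshold below are made-up values for exactly the parameters listed above:

```c
/* Made-up MLFQ parameters: 3 levels, RR quanta of 4/8/16 ticks, and an
 * aging threshold of 100 ticks. */
#define NUM_QUEUES  3
#define AGING_LIMIT 100

struct task { int level; int waited; };

static const int quantum[NUM_QUEUES] = { 4, 8, 16 };

/* Called when a task gives up the CPU after running for `used` ticks. */
void mlfq_requeue(struct task *t, int used, int blocked_on_io) {
    if (!blocked_on_io && used >= quantum[t->level] && t->level < NUM_QUEUES - 1)
        t->level++;     /* used the whole quantum: demote (penalize long tasks) */
    t->waited = 0;
}

/* Called every tick for each task waiting in a ready queue. */
void mlfq_age(struct task *t) {
    if (++t->waited >= AGING_LIMIT && t->level > 0) {
        t->level--;     /* waited too long: promote (prevents starvation) */
        t->waited = 0;
    }
}
```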

 

Multiprocessor Scheduling

Multiprocessor Scheduling

  • Uniprocessor scheduling (time domain) : decide when and which job will run.
  • Multiprocessor scheduling (time domain + space domain) : decide not only when but also where a job will run.
    • How to maintain the ready queue?
    • How to define and exploit affinity?
    • How to assign applications to multiple processors?
    • How to manage processor heterogeneity?
    • How to balance workload among processors?

 

HW Issue

  • Symmetric multiprocessing (SMP) vs. asymmetric multiprocessing (master-slave relationship).
  • Cache (processor) affinity : a process tends to perform better when it runs on the CPU it previously ran on, because of cache effects.
    • Soft affinity : the affinity relationship is attempted but not guaranteed.
    • Hard affinity : the process is not allowed to migrate to other processors (see the sched_setaffinity sketch after this list).
  • Load balancing
    • Push migration : a single monitor task in the system periodically pushes processes to other CPUs.
    • Pull migration : each CPU has its own monitor that pulls processes from other CPUs.
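
On Linux, hard affinity can be requested with the standard `sched_setaffinity` call; a minimal sketch that pins the calling process to CPU 0 (error handling reduced to `perror`):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                                 /* allow CPU 0 only   */
    if (sched_setaffinity(0, sizeof set, &set) != 0)  /* 0 = this process   */
        perror("sched_setaffinity");
    return 0;
}
```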

 

SW Issue

  • Concurrency (synchronization problem) : maximize parallelism.
  • When accessing shared data across CPUs, mutual exclusion primitives should be used to guarantee correctness and avoid race conditions (see the sketch below).
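
For example, a ready-queue length shared across CPUs (here, threads) can be protected with a mutex; a minimal POSIX-threads sketch, with the queue reduced to a counter for brevity:

```c
#include <pthread.h>

static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;
static int rq_len;                   /* shared across CPUs / threads */

void enqueue_task(void) {
    pthread_mutex_lock(&rq_lock);    /* mutual exclusion primitive */
    rq_len++;                        /* critical section: no race  */
    pthread_mutex_unlock(&rq_lock);
}
```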

 

Single Queue Multiprocessor Scheduling; SQMS

  • Simply reuse the basic framework for single processor scheduling.
  • Advantage : simplicity
  • Disadvantage
    • Poor cache affinity (tasks can end up bouncing between CPUs).
    • Some form of locking has to be inserted (lack of scalability).

 

Multi Queue Multiprocessor Scheduling; MQMS

  • The basic scheduling framework consists of multiple scheduling queues.
  • Avoids the problems of synchronization.
  • Provides more scalability and cache affinity.
  • Disadvantage : load imbalance

 

MQMS Migration

  • Work stealing
    • Checking other queues too often : high overhead.
    • Checking other queues too rarely : suffers from load imbalance.
  • The key is finding the right threshold (see the sketch below).
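
One possible work-stealing rule, with the CPU count, the probe policy (one random peer), and the steal threshold all chosen arbitrarily for illustration:

```c
#include <stdlib.h>

#define NCPU       4
#define STEAL_DIFF 2            /* only steal when the imbalance is clear */

static int queue_len[NCPU];     /* length of each per-CPU ready queue     */

void maybe_steal(int self) {
    int victim = rand() % NCPU;            /* probe one random peer queue */
    if (victim == self) return;
    /* Probing every queue constantly would be high overhead; probing too
     * rarely leaves load imbalance, hence an occasional random check.    */
    if (queue_len[victim] - queue_len[self] >= STEAL_DIFF) {
        queue_len[victim]--;               /* steal one task              */
        queue_len[self]++;
    }
}
```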

 

Proportional Share Scheduling

$O(1)$ Scheduler

  • Basic concept
    • A priority-based scheduler (RTOS, Linux 2.5)
    • Two run queues (active / expired run queue)
    • Each run queue consists of linked lists for priority levels.
    • Insert, delete, find operation : $O(1)$
    • Disadvantage : starvation
  • Algorithm
    • Scheduler inserts each runnable task into active run queue.
    • Whenever the task runs out of its time slice, it will be removed from active run queue, and inserted into expired run queue.
    • If the active run queue becomes empty, the active and expired run queues swap pointers (see the sketch below).
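
A structural sketch of the two-array idea; the type and field names are illustrative, not the kernel's actual data structures:

```c
#define NPRIO 140   /* Linux priority levels 0..139 */

struct task { struct task *next; int prio; };

struct runqueue {
    int nr_running;
    struct task *lists[NPRIO];    /* one linked list per priority level */
};

static struct runqueue rq_a, rq_b;
static struct runqueue *active = &rq_a, *expired = &rq_b;

/* Task used up its time slice: move it to the expired array, in O(1). */
void on_timeslice_expired(struct task *t) {
    t->next = expired->lists[t->prio];
    expired->lists[t->prio] = t;
    expired->nr_running++;
    active->nr_running--;
}

/* When the active array drains, just swap the two pointers, also O(1). */
void maybe_swap_arrays(void) {
    if (active->nr_running == 0) {
        struct runqueue *tmp = active;
        active = expired;
        expired = tmp;
    }
}
```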

Proportional Share Scheduling

  • Basic concept
    • A weight value is associated with each process.
    • The CPU is allocated to the process in proportion to its weight.
  • Two contexts
    • Fair queueing (in the context of networks)
    • Proportional share (in the context of OS) -> bandwidth scheduler, fair-share
  • Optimization criteria
    • Proportional fairness accuracy : minimize service time error
    • Scheduling overhead : execution time of the scheduling algorithm

Calculation of service time error

  • WFQ (Weighted Fair Queueing)
    • Select the process with the smallest virtual finish time (as sketched below).
    • Virtual Time (VT) : (actually allocated time) / (weight of the process)
    • Virtual Finish Time (VFT) : (virtual time) + 1 / (weight of the process)
    • Guarantee : $-0.9 \le \text{(service time error)} \le 1.2$
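
A tiny WFQ simulation in C with made-up weights: each tick it picks the task with the smallest virtual finish time (VT + 1/weight), runs it for one time unit, and advances its virtual time by 1/weight.

```c
#include <stdio.h>

struct proc { const char *name; double weight, vt; };

int main(void) {
    struct proc p[] = { {"A", 3.0, 0}, {"B", 1.0, 0} };  /* hypothetical weights */
    int n = sizeof p / sizeof p[0];

    for (int tick = 0; tick < 8; tick++) {
        int best = 0;
        for (int i = 1; i < n; i++)      /* smallest virtual finish time wins */
            if (p[i].vt + 1.0 / p[i].weight < p[best].vt + 1.0 / p[best].weight)
                best = i;
        p[best].vt += 1.0 / p[best].weight;   /* allocated time / weight */
        printf("tick %d -> %s (VT=%.2f)\n", tick, p[best].name, p[best].vt);
    }
    return 0;
}
```

With weights 3 and 1, A is selected roughly three out of every four ticks, which is the proportional-share behavior WFQ aims for.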

  • WRR (Weighted Round Robin)
    • Adds per-process weights to plain Round Robin.
    • At an arbitrary point in time (e.g., after 4 ticks), unfairness is still possible.
    • WFQ incurs more process-switch overhead, but it is much fairer.

  • CFS (Completely Fair Scheduler)
    • Merged into the 2.6.23 release of the Linux kernel.
    • Multiple queues, similar design to WFQ.
    • Select the task which has the smallest value of virtual runtime.
    • If the vruntime values are the same, compare weight or delta_exec.
    • vruntime += (NICE_0_LOAD / se->load.weight) * delta_exec
    • Higher weight -> vruntime grows more slowly -> effectively higher priority.
    • Larger delta_exec -> vruntime grows more -> effectively lower priority (see the sketch below).
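
A minimal sketch of the vruntime update above. The real kernel uses fixed-point weights from its nice-level table and a red-black tree keyed by vruntime; here NICE_0_LOAD (the weight of a nice-0 task, commonly 1024) and the task weights are plain numbers.

```c
#include <stdio.h>

#define NICE_0_LOAD 1024

struct sched_entity { unsigned long weight; double vruntime; };

/* vruntime grows by delta_exec scaled by NICE_0_LOAD / weight. */
void update_vruntime(struct sched_entity *se, double delta_exec) {
    se->vruntime += (double)NICE_0_LOAD / se->weight * delta_exec;
}

int main(void) {
    struct sched_entity heavy = { 2048, 0 }, normal = { 1024, 0 };
    update_vruntime(&heavy, 10);     /* same 10 ms of CPU for both task      */
    update_vruntime(&normal, 10);
    printf("heavy: %.1f  normal: %.1f\n", heavy.vruntime, normal.vruntime);
    /* heavy: 5.0, normal: 10.0 -> the heavier task is picked again sooner */
    return 0;
}
```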

 

Linux Schedulers

Different Priorities

  • 100 ~ 139 : conventional class -> SCHED_NORMAL
  • 0 ~ 99 : real-time class -> SCHED_FIFO, SCHED_RR
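
A process can be moved into the real-time class with the standard `sched_setscheduler` call; a minimal sketch (requires appropriate privileges, and the chosen static priority is just an example from the 1..99 real-time range):

```c
#include <sched.h>
#include <stdio.h>

int main(void) {
    struct sched_param sp = { .sched_priority = 10 };  /* RT priority 1..99 */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)   /* 0 = this process  */
        perror("sched_setscheduler");
    return 0;
}
```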

 

Scheduling of Real-Time Processes

  • Priority-based scheduling for tasks with low CPU bursts.
  • Rate Monotonic : the shorter the period, the higher the priority -> static priorities.
    • With n processes, where U is the total CPU utilization, the task set is schedulable if the following holds (a small check is sketched at the end of this post):
      $$U \ = \ \sum_{i=1}^n {C_i \over T_i} \ \le \ n(\sqrt[n] {2} \ - \ 1)$$
  • Earliest Deadline First (EDF) : the process closest to its deadline gets the highest priority -> dynamic priorities.
    • The utilization bound is fixed at 1; the task set is schedulable whenever total CPU utilization does not exceed 1.
    • Mathematically optimal, but impractical.
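
A small check of the Rate Monotonic bound above, using made-up task parameters (compile with `-lm`); when the RM test fails it is only inconclusive, and the EDF condition $U \le 1$ can still be checked:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double C[] = { 1, 2, 3 };    /* worst-case execution times (hypothetical) */
    double T[] = { 4, 8, 16 };   /* periods (hypothetical)                    */
    int n = sizeof C / sizeof C[0];

    double U = 0.0;
    for (int i = 0; i < n; i++) U += C[i] / T[i];       /* U = sum(C_i / T_i) */

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);       /* n * (2^(1/n) - 1)  */
    printf("U = %.3f, RM bound = %.3f -> %s\n", U, bound,
           U <= bound ? "schedulable by RM"
                      : "RM test inconclusive (check EDF: U <= 1)");
    return 0;
}
```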