Scheduling

Stephen M. Reaves :: 2024-02-08

Notes about Lecture 4e for CS-6210

Summary

First Principles

Run a thread for a while; when it blocks on I/O, run a different thread in the meantime so the CPU stays busy.
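A minimal sketch of this idea using POSIX threads (not from the lecture): one thread blocks on a simulated I/O call while a second thread keeps doing CPU work, so the wait is overlapped with useful computation. The sleep() call is just a stand-in for a blocking read.

/* Minimal sketch: one thread blocks on "I/O" (simulated with sleep) while
 * another thread keeps the CPU busy in the meantime.                      */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *io_bound(void *arg)
{
    (void)arg;
    sleep(1);                      /* stand-in for a blocking read()       */
    printf("I/O thread: request finished\n");
    return NULL;
}

static void *cpu_bound(void *arg)
{
    (void)arg;
    unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        sum += i;                  /* useful work proceeds during the I/O  */
    printf("CPU thread: sum = %lu\n", sum);
    return NULL;
}

int main(void)
{
    pthread_t io, cpu;
    pthread_create(&io, NULL, io_bound, NULL);
    pthread_create(&cpu, NULL, cpu_bound, NULL);
    pthread_join(io, NULL);
    pthread_join(cpu, NULL);
    return 0;
}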

Memory Hierarchy Refresher

Cache Affinity Scheduling

When rescheduling a thread, it makes sense to put it back on the processor it last ran on, so it can reuse whatever is still in that processor's cache.
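A hypothetical sketch of how that preference might look inside a scheduler: each thread records the processor it last ran on, and placement favors that processor unless it is noticeably busier than the least-loaded one. The processor count, run-queue lengths, and the imbalance threshold of 2 are made-up values for illustration, not anything from the notes.

/* Hypothetical sketch of an affinity-aware placement decision: each thread
 * remembers the processor it last ran on, and the scheduler prefers that
 * processor so the thread can reuse whatever is still in its cache.       */
#include <stdio.h>

#define NCPUS 4

struct thread {
    int id;
    int last_cpu;                 /* -1 if the thread has never run        */
};

static int runqueue_len[NCPUS];   /* current load per processor            */

/* Prefer the thread's previous processor unless it is much busier than the
 * least-loaded one (the threshold of 2 is an illustrative knob).          */
static int pick_cpu(const struct thread *t)
{
    int least = 0;
    for (int c = 1; c < NCPUS; c++)
        if (runqueue_len[c] < runqueue_len[least])
            least = c;

    if (t->last_cpu >= 0 &&
        runqueue_len[t->last_cpu] <= runqueue_len[least] + 2)
        return t->last_cpu;       /* cache affinity wins                   */
    return least;                 /* otherwise balance the load            */
}

int main(void)
{
    runqueue_len[0] = 5; runqueue_len[1] = 1;
    runqueue_len[2] = 2; runqueue_len[3] = 1;

    struct thread t = { .id = 7, .last_cpu = 2 };
    printf("thread %d placed on cpu %d\n", t.id, pick_cpu(&t));
    return 0;
}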

Scheduling Policies

Implementation Issues

Queue Based

Performance

Figures of merit: throughput (system-centric), response time and variance in response time (user-centric).

The bigger the memory footprint of a process, the more time it takes to reload its working set into the cache when it is rescheduled.

The heavier the load, the more a fixed-processor scheduler makes sense

If no runnable thread has high affinity for a given processor, that processor may sit idle even though other queues have work.
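The idle-processor point can be made concrete with a small, hypothetical sketch of per-processor queues: a strictly affinity-based pick leaves a CPU idle when its own queue is empty, while an optional fallback takes a thread from another queue, trading cache warmth for utilization. The queue lengths and victim-selection rule here are invented for illustration.

/* Hypothetical sketch of the idle-processor trade-off: with per-processor
 * queues, a strictly affinity-based pick leaves a CPU idle when its own
 * queue is empty; an optional fallback takes work from another queue.     */
#include <stdio.h>
#include <stdbool.h>

#define NCPUS 4

static int queue_len[NCPUS];      /* threads waiting on each processor     */

/* Returns true if `cpu` found work; `allow_fallback` trades cache affinity
 * for utilization by taking a thread from the longest other queue.        */
static bool find_work(int cpu, bool allow_fallback)
{
    if (queue_len[cpu] > 0) {
        queue_len[cpu]--;         /* run a thread with affinity for us     */
        return true;
    }
    if (!allow_fallback)
        return false;             /* pure affinity: sit idle               */

    int victim = -1;
    for (int c = 0; c < NCPUS; c++)
        if (c != cpu && queue_len[c] > (victim < 0 ? 0 : queue_len[victim]))
            victim = c;
    if (victim < 0)
        return false;             /* nothing runnable anywhere             */
    queue_len[victim]--;          /* run a "cold" thread rather than idle  */
    return true;
}

int main(void)
{
    queue_len[0] = 0; queue_len[1] = 3; queue_len[2] = 1; queue_len[3] = 0;
    printf("cpu 0, pure affinity: %s\n",
           find_work(0, false) ? "ran a thread" : "idle");
    printf("cpu 0, with fallback: %s\n",
           find_work(0, true) ? "ran a thread" : "idle");
    return 0;
}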

Cache Affinity and Multicore

Hardware multithreading lets the CPU switch to another hardware thread essentially for free, hiding the latency of memory accesses.

Layered caching means a miss in one cache level can often be served by the next level down instead of going all the way to memory, so misses are not as costly.

Cache Aware Scheduling

Schedule a mix of cache-frugal and cache-hungry threads across the cores so that, at any given time, the combined cache demand of all executing threads is less than the size of the L2 cache (or other last-level cache).

Characterize threads as cache-frugal or cache-hungry

\sum_{t=1}^{n} C_{ft} + \sum_{t=1}^{m} C_{ht} < \text{size(L2 cache)}
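Here C_{ft} and C_{ht} are the cache footprints of the n cache-frugal and m cache-hungry threads executing at a given time. As a rough illustration of the constraint (assumptions: a 4 MB shared L2, per-thread footprints already estimated in KB, and a simple greedy pick that is not the lecture's algorithm), a scheduler could admit candidate threads for the next quantum only while their combined footprint stays below the cache size:

/* Hypothetical sketch of the cache-aware co-scheduling constraint: admit a
 * mix of cache-frugal and cache-hungry threads only while their combined
 * footprint stays under the shared last-level cache size.                 */
#include <stdio.h>
#include <stddef.h>

#define L2_SIZE_KB 4096           /* assumed shared L2 of 4 MB             */

struct thread {
    int id;
    int footprint_kb;             /* measured/estimated cache demand       */
};

/* Greedily admit threads while the sum of footprints fits in the L2.
 * Returns how many of the n candidates were co-scheduled this quantum.    */
static size_t pick_batch(const struct thread *cand, size_t n)
{
    int used_kb = 0;
    size_t picked = 0;
    for (size_t i = 0; i < n; i++) {
        if (used_kb + cand[i].footprint_kb >= L2_SIZE_KB)
            continue;             /* would overflow the shared cache       */
        used_kb += cand[i].footprint_kb;
        printf("co-schedule thread %d (%d KB, total %d KB)\n",
               cand[i].id, cand[i].footprint_kb, used_kb);
        picked++;
    }
    return picked;
}

int main(void)
{
    /* a few cache-frugal and cache-hungry candidates (made-up numbers)    */
    struct thread cand[] = {
        {1, 256}, {2, 3000}, {3, 128}, {4, 2500}, {5, 64},
    };
    size_t n = pick_batch(cand, sizeof cand / sizeof cand[0]);
    printf("picked %zu threads this quantum\n", n);
    return 0;
}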