Side Channel Vulnerabilities
What can we infer about another thread by observing its effect on the system state? Can we trigger the exposure of private data?
1. Exfiltration
One example is exfiltration, where the attacker and victim threads share an L2 cache. The attacker can then use one of the following techniques:
- Prime & Probe - the attacker primes the L2 cache by filling one or more sets with its own data. Once the victim has executed, the attacker probes the state of the cache by timing its own accesses, to see if any of its lines were evicted. If so, the victim must have touched an address that maps to the same set.
- Evict & Time - the attacker runs the victim once to establish a baseline execution time. It then evicts a line of interest and runs the victim again. An increase in execution time indicates that the victim accessed the line of interest.
- Flush & Reload - this relies on memory shared between attacker and victim (e.g. shared library pages). The attacker flushes a line of interest. Once the victim has executed, the attacker reloads the line by touching it, measuring the time taken. A fast reload indicates that the victim touched the line, bringing it back into the cache.
2. Shared State
For a side channel to be exploited, there must be a shared state affected by the execution of both attacker and victim.
- For a single core this could be a cache level, TLB, branch predictor, prefetchers, physical rename registers, dispatch ports, etc.
- For a single NUMA domain this could be a memory controller.
- Separate cores may share caches, the interconnect, etc.
3. Victim Execution
To perform a side channel attack the attacker must trigger victim execution:
- Perform a syscall
- Release a lock
- Threads on the same core with simultaneous multithreading (SMT)
- Call it as a function
Function Calls as Side Channels
Here the attacker is already in the same address space as the victim! However, this setting is still useful for:
- Testing language based security.
- When victim is an object with secret state and public access method.
Historically, to limit the cost of a context switch, the OS kept the kernel's page address translations mapped into every process's address space, marked with supervisor-only access. This avoids a TLB flush on every syscall. However, it means a Spectre attack can be used to access kernel data.
4. Avoiding Attacks
Kernel Address Space Layout Randomisation (KASLR) randomises the placement of kernel code and data in the address space, forcing a Spectre attack to guess where its target data is stored. This is not foolproof.
Kernel Address Space Isolation (KASI) changes the virtual address space mapping every time the kernel is entered (flushing the TLB). This mitigates Spectre attacks but has a significant performance impact. However, it is no match for Spectre 2 :(
4.2 Spectre 2
An attacker can trick the branch predictor into speculatively executing a chosen piece of code. The attack:
- Finds a gadget (a code sequence that leaks secret data when executed) in the victim's code space.
- Trains the branch predictor to speculatively branch to the gadget when a syscall is executed.
- Observes a microarchitectural (e.g. cache) side channel left by the speculatively executed gadget.
- Steals the secret!
To mitigate this we could:
- Mess with cache probing by adding noise to the timers.
- Prevent the attacker from poisoning the branch predictor by adding an instruction that blocks the use of branch prediction.
- Block branch predictor contention by keeping separate branch predictor state per process.
- Retpoline: a code sequence that implements an indirect branch using a return instruction, and fixes the Return Address Stack (RAS) to ensure a benign prediction target:
```asm
RP0: call RP2                 ; push RP1 addr onto stack, jump to RP2
RP1: int 3                    ; breakpoint to capture speculation
RP2: mov [rsp], <Jump Target> ; overwrite return addr to desired target
RP3: ret                      ; return to <Jump Target>; RAS predicts RP1
```