Presented by

  • Rohan McLure

    Rohan is a first-year grad at IBM, mostly hacking on the Linux kernel. His kernel work includes modernising arch/powerpc, as well as hardening the kernel against vulnerabilities in the age of side channels arising from processor speculation. Past projects include novel parallel SAT solvers and climate models. With a background in mathematics and systems programming, Rohan's interests mainly boil down to making 'complex things simple', pursued through joint loves for both software engineering and teaching.


Dubbed by some the “world’s largest software project”, the Linux kernel has grown massively over its 30+ years of development. Despite each year’s accumulating design overhauls and support for new hardware, programs from decades ago are still expected to run without a developer somewhere needing to hit ‘recompile’ again. How can this be? And if that’s the case, why do my programs care that I’m running Linux, or any other operating system for that matter? In October of last year I received the following response to one of my kernel code contributions:
> This breaks powerpc32. The fallocate syscall misinterprets its arguments.
> It probably breaks every syscall with a 64-bit argument.

In a handful of lines, I’d managed to break virtually all 32-bit programs for an entire architecture.

Come to this talk for a brief crash course on one central deliverable of an operating system kernel - namely the ability to actually run the code that people have written and compiled for it. Addressing this one matter provides a primer to many core computing concepts:

- Virtual memory
- Calling convention
- The syscall interface
- Big / little endianness
- Assembly
- The 32-bit --> 64-bit transition
- Binary compatibility layers; think Rosetta 2, Wine
- C as the language of UNIX
- The ELF binary format
- Static / dynamic linking
- What even is a kernel?