In a modern, multicore chip, every core—or processor—has its own small memory cache, where it stores frequently used data. But the chip also has a larger, shared cache, which all the cores can access. If one core tries to update data in the shared cache, other cores working on the same data need to know. So the shared cache keeps a directory of which cores have copies of which data. That directory takes up a significant chunk of memory. As core counts climb, chips will need a more efficient way of maintaining cache coherence.
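To see why the directory's footprint matters, here is a brief sketch (not from the article; the function name is invented for illustration) of the classic "full-map" bookkeeping, which keeps one presence bit per core for every cached block, so storage per entry grows linearly with the number of cores:

```python
# Illustrative sketch: a "full-map" directory records, for each cached
# block, one presence bit per core. Per-entry storage is therefore
# proportional to the core count.

def full_map_directory_bits(num_cores: int) -> int:
    """Bits per directory entry: one sharer bit per core."""
    return num_cores

# Doubling the core count doubles the per-block directory overhead.
for cores in (16, 64, 256, 1024):
    print(cores, full_map_directory_bits(cores))
```

At 1,024 cores, every tracked block needs 1,024 bits of directory state, which is why this scheme becomes a burden on large chips.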
At the International Conference on Parallel Architectures and Compilation Techniques in October, MIT researchers will unveil the first fundamentally new approach to cache coherence in more than three decades. Whereas with existing techniques, the directory’s memory allotment increases in direct proportion to the number of cores, with the new approach, it increases according to the logarithm of the number of cores.
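A rough way to see the scaling difference (a hedged sketch, not the paper's actual design): an entry that names a single core, or holds a counter bounded by the core count, needs only about log2(N) bits, versus the N bits of a full sharer bit-vector:

```python
import math

# Hedged comparison of per-entry directory storage.
# full_map_bits: one presence bit per core (linear in core count).
# log_entry_bits: enough bits to identify one core or hold a bounded
# counter (logarithmic in core count). Function names are invented
# for illustration.

def full_map_bits(num_cores: int) -> int:
    return num_cores

def log_entry_bits(num_cores: int) -> int:
    return math.ceil(math.log2(num_cores))

for cores in (64, 256, 1024):
    print(cores, full_map_bits(cores), log_entry_bits(cores))
```

At 1,024 cores the linear scheme needs 1,024 bits per entry while the logarithmic one needs only 10, which is the kind of gap the MIT result exploits.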
“Directories guarantee that when a write happens, no stale copies of the data exist,” says Xiangyao Yu, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “After this write happens, no read to the previous version should happen. So this write is ordered after all the previous reads in physical-time order.”
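Yu's point is that coherence only requires an *ordering* of writes after reads, and that ordering can live in logical time rather than physical time. A toy model of that idea (names and details invented for illustration; this is a sketch of timestamp-based ordering, not the paper's exact protocol): each block carries a write timestamp and a read timestamp, and a write simply jumps to a logical time after every read seen so far, with no need to track which cores hold copies.

```python
# Toy model of logical-timestamp ordering. Each block keeps:
#   wts: logical time of the most recent write
#   rts: logical time through which reads of the current version are valid
# A write orders itself after all prior reads by advancing to rts + 1.

class Block:
    def __init__(self):
        self.wts = 0
        self.rts = 0

    def read(self, now: int) -> int:
        # Extend the read lease so this read is valid through logical `now`.
        self.rts = max(self.rts, now)
        return self.rts

    def write(self, now: int) -> int:
        # Order the write after every read of the old version.
        self.wts = max(self.rts + 1, now)
        self.rts = self.wts
        return self.wts

b = Block()
b.read(5)        # a read leased through logical time 5
t = b.write(1)   # a later write, even with a smaller local clock...
print(t)         # ...lands at logical time 6, after the read
```

Because ordering is resolved by comparing timestamps instead of consulting a per-core sharer list, the directory no longer needs state that grows with the number of cores.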