Improving Computing Memory Performance for Scientific Discovery

A new framework employs usage patterns to improve data placement.

Advanced Scientific Computing Research

December 9, 2025
ORNL’s Frontier supercomputer earned the top ranking on the TOP500 list in June 2022, marking it as the world’s fastest computer at the time with 1.1 exaflops of performance. Frontier is an example of the trend towards more complex memory architectures in today's most powerful machines.
Image courtesy of Terry Jones, ORNL

The Science   

For supercomputers, quickly loading and storing data in memory is just as important as performing the arithmetic itself. For many applications, memory performance is the main obstacle to higher speed. Vendors are addressing this challenge with increasingly complex memory designs: in recent years, supercomputers have begun to incorporate larger memory systems that combine different types of memory devices organized into multiple layers, or tiers. But mapping application data to the best type of memory is a difficult problem, and existing software does a poor job of using this hardware efficiently. Researchers set out to bridge the gap with new software approaches for managing complex memory designs.

The Impact

Scientists have designed a new framework for managing computer memory. It enables scientific computing applications to take advantage of new and emerging memory hardware without extra effort from developers or changes to program code. As an application runs, the framework automatically monitors how different parts of the application use the available memory hardware, then moves individual data items to the type of memory best suited to their recent usage. Evaluations show that this approach improves the performance of a variety of scientific computing applications on real supercomputers. In some cases, it makes computations up to seven times faster or more, depending on the machine and its memory configuration.

Summary

Many scientific computing systems and applications fail to use new and emerging memory technologies efficiently. Recently, researchers at the Department of Energy’s Oak Ridge National Laboratory and the University of Tennessee found a way to address this problem. The new approach places frequently used data items in faster (but smaller) memory devices and less frequently used items in slower (but larger) devices. In this way, the approach automatically enables many applications to take advantage of new memory designs and significantly improves computing performance.
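The core idea of usage-driven tiering can be illustrated with a minimal sketch. This is not the researchers' actual implementation (which operates on application objects at runtime, below the level of program code); it is a toy Python model under assumed names, where `place_objects`, the access log, and the capacity value are all hypothetical, showing how recent access counts might rank data items and assign the hottest ones to a small fast tier.

```python
from collections import Counter

def place_objects(access_log, fast_capacity=2):
    """Toy tiering policy: assign each data object to 'fast' or 'slow'
    memory based on how often it appears in a recent access log.

    access_log: sequence of object names, representing recent memory accesses.
    fast_capacity: how many objects the small, fast tier can hold (toy value).
    Returns a dict mapping object name -> tier.
    """
    counts = Counter(access_log)
    # Rank objects from most to least frequently accessed.
    ranked = [obj for obj, _ in counts.most_common()]
    # The hottest objects fill the fast tier; everything else goes to slow.
    return {obj: ("fast" if i < fast_capacity else "slow")
            for i, obj in enumerate(ranked)}

# Hypothetical access log: "grid" is touched most often, "params" rarely.
log = ["grid", "grid", "halo", "grid", "params", "halo", "grid"]
print(place_objects(log))  # → {'grid': 'fast', 'halo': 'fast', 'params': 'slow'}
```

A real system would repeat this decision periodically as usage patterns shift, migrating objects between tiers so placement tracks each application phase rather than a one-time snapshot.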

Contact

Terry Jones
Oak Ridge National Laboratory
trj@ornl.gov

Michael Jantz
University of Tennessee
mrjantz@utk.edu

Funding

This work was supported by the DOE Office of Advanced Scientific Computing Research (ASCR) through the Next-Generation Scientific Software Technologies (NGSST) program.

Publications

Brandon Kammerdiener, J. Zach McMichael, Michael R. Jantz, Kshitij A. Doshi, and Terry Jones, “Flexible and Effective Object Tiering for Heterogeneous Memory Systems.” ACM Transactions on Architecture and Code Optimization (TACO), Volume 22, Issue 1 (2025). Article No.: 28, Pages 1–24 [DOI: 10.1145/3708540]

Related Links

Supercomputing Memory Management Tool Makes Data Storage More Efficient, HPCWire