HPE engineers working on 160TB single-memory computer
Hewlett Packard Enterprise has developed the world's largest single-memory computer in order to ensure that machine learning and supercomputers have the memory they need for the Big Data revolution (HPE)

Hewlett Packard Enterprise (HPE) has developed the world's largest single-memory computer, in order to provide the memory that will be needed for supercomputers, machine learning and the ongoing Big Data explosion.

HPE has built a machine with 160TB of memory, which is enough to simultaneously access the contents of 160 million books, or the equivalent of every single book in the Library of Congress five times over.

Computer memory is separate from the long-term data storage on your hard drive. Memory is the fast working space a computer's processor uses to hold the instructions and data for all of the software programs currently open on a PC.

This is why a computer that is low on random-access memory (RAM) becomes slow – the processes already running on the machine have used up the available memory, leaving none to spare for additional programs.
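
To give a rough feel for the difference, here is a small illustrative Python sketch (the data sizes are arbitrary assumptions, not measurements from HPE's system) that compares working on numbers already held in memory with reloading the same numbers from a file on disk before they can be used.

# Rough illustration of memory versus storage (sizes are arbitrary assumptions)
import array
import os
import tempfile
import time

values = array.array("d", range(1_000_000))  # roughly 8MB of numbers held in RAM

# Write the same numbers to a temporary file to stand in for disk storage
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(values.tobytes())
    path = f.name

start = time.perf_counter()
total_in_memory = sum(values)                 # data is already in memory
memory_seconds = time.perf_counter() - start

start = time.perf_counter()
reloaded = array.array("d")
with open(path, "rb") as f:
    reloaded.frombytes(f.read())              # data must first be read back from storage
total_from_disk = sum(reloaded)
disk_seconds = time.perf_counter() - start

print(f"in memory: {memory_seconds:.4f}s, via disk: {disk_seconds:.4f}s")
os.remove(path)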

Finally enough memory for machine learning

The prototype contains 160TB of shared memory spread across 40 physical nodes, and it is designed to hold and manipulate entire datasets in memory at once. Larger memories are critically needed in machine learning, where large neural networks running on classical computers are trained to autonomously analyse vast amounts of unstructured data and find patterns that humans cannot detect.
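
To put that into rough numbers, the back-of-the-envelope Python sketch below estimates the memory footprint of one plausible large training set; the sample counts and sizes are illustrative assumptions rather than figures from HPE.

# Back-of-the-envelope dataset sizing (all figures are illustrative assumptions)
samples = 1_000_000_000        # one billion training examples (assumed)
features_per_sample = 1_000    # values recorded per example (assumed)
bytes_per_value = 4            # 32-bit floating point numbers

dataset_bytes = samples * features_per_sample * bytes_per_value
prototype_bytes = 160 * 10**12            # the prototype's 160TB of shared memory

print(f"dataset size: {dataset_bytes / 10**12:.1f} TB")                        # 4.0 TB
print(f"fits in the prototype {prototype_bytes // dataset_bytes} times over")  # 40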

Data science is a fast-growing field in which computers automatically analyse huge amounts of data and extract knowledge from it, offering insights into how people think, live and work. In the tech industry, this is known as "big data".

But it doesn't end with 160TB – HPE claims its prototype architecture can easily be scaled up to an exabyte-scale single-memory system, and beyond that to 4,096 yottabytes, a ridiculously huge amount of memory that is roughly 250,000 times the size of the entire digital universe today.
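
That jump is easier to grasp as plain arithmetic. The short Python sketch below works through the units, taking roughly 16 zettabytes – a commonly cited estimate for the size of the digital universe around 2017 – as an assumed reference point.

# Unit arithmetic behind the scale claims (digital-universe figure is an assumption)
TB = 10**12    # terabyte
EB = 10**18    # exabyte
YB = 10**24    # yottabyte

prototype = 160 * TB
future_target = 4_096 * YB
digital_universe = 16 * 10**21    # ~16 zettabytes, assumed 2017-era estimate

print(f"one exabyte is {EB / prototype:,.0f}x the prototype")                   # ~6,250x
print(f"4,096 YB is {future_target / digital_universe:,.0f}x the digital universe")  # ~256,000x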

4,096 yottabytes would be enough for a computer to access every single piece of data held by Facebook, all the data generated by every trip made by Google's self-driving cars and every single piece of data ever generated by Nasa for space exploration – all at the same time, which is unthinkable today.

Forget about processing power – memory matters too

A close-up of the prototype single-memory computer's circuit board (HPE)

HPE says the prototype was developed as part of its research and development programme The Machine, which focuses on "memory-driven computing", meaning that the memory, and not the processor, is at the centre of the computing architecture.

The Machine programme looks for the shortfalls in how memory, storage and processors interact in classical computers today, with the aim of reducing the time computers need to work through complex problems.

The 160TB prototype uses a high-performance fabric protocol to connect the 40 physical nodes. Cavium's ThunderX2, a second-generation ARMv8-A workload-optimised system on a chip, provides the processing power, and the system runs a Linux-based operating system as well as software designed to make the best use of the abundant persistent memory.
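
HPE has not published that software, but the general idea of making the best use of abundant persistent memory can be sketched with standard Linux memory mapping. In the minimal Python sketch below, a large file on an assumed persistent-memory mount (the /mnt/pmem path and file name are hypothetical) is mapped into the program's address space, so any byte of a huge dataset can be touched as if it were ordinary memory, with no explicit read calls.

# Minimal sketch of memory-mapped access to a large dataset.
# The path is hypothetical; on a real system it might be a file on a DAX-mounted
# persistent-memory device, but any large file demonstrates the idea.
import mmap

PATH = "/mnt/pmem/dataset.bin"    # hypothetical persistent-memory-backed file

with open(PATH, "rb") as f:
    # Map the whole file into the address space; nothing is copied up front.
    data = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    # Jumping to arbitrary offsets now looks like ordinary memory access.
    first_byte = data[0]
    middle_byte = data[len(data) // 2]
    last_page = data[-4096:]          # the final 4KB of the dataset

    print(len(data), first_byte, middle_byte, len(last_page))
    data.close()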

"We believe Memory-Driven Computing is the solution to move the technology industry forward in a way that can enable advancements across all aspects of society," said Mark Potter, CTO at HPE and Director, Hewlett Packard Labs. "The architecture we have unveiled can be applied to every computing category – from intelligent edge devices to supercomputers."