Waferscale, meet atomic scale: Uncle Sam to test Cerebras chips in nuke weapon sims

America’s Sandia National Labs this week said it will investigate using Cerebras’ wafer-sized accelerator chips to help verify that the nation’s nuclear weapons will work as intended, should global annihilation ever be desired.

With support from the Lawrence Livermore and Los Alamos national labs, the deployment will be overseen by the Dept of Energy’s National Nuclear Security Administration (NNSA), which is tasked with, among other things, maintaining the reliability and extending the lifespan of city-obliterating warheads through simulations running on supercomputers. These simulations reassure the agency that any changes to the United States’ nuclear arsenal – such as keeping the physics packages viable by replacing materials, or tweaking the designs – will not unacceptably impact their destructive potential.

Seeing as most of us have agreed to no longer conduct real-world tests of these devices, simulations with data from sub-critical experiments are needed instead. And so, Cerebras’ silicon will be test-driven to see whether it can help here.

“This collaboration with Cerebras Systems has great potential to impact future mission applications by enabling artificial intelligence and machine learning technologies, which are an emerging component of our production simulation workloads,” said Simon Hammond, the federal program manager who oversees computational systems and software for the NNSA’s Advanced Simulation and Computing (ASC) program.

That’s an interesting mention of AI: Cerebras’ chips are designed to accelerate this kind of work, and there’s considerable interest in using machine-learning models to predict the results of scientific experiments, as opposed to the classic computational approach of modeling the physical interactions directly. Using AI could be faster than pure computation, though accuracy may be sacrificed, and a hybrid of the two approaches may be best.
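To make that concrete, here’s a minimal, purely illustrative sketch of the surrogate idea: a toy stand-in for an expensive simulation, with scikit-learn’s gradient boosting playing the role of the machine-learning model. None of it reflects the labs’ actual codes or Cerebras’ software stack.

```python
# Illustrative only: a toy comparison of an ML surrogate against a (pretend)
# expensive simulation. expensive_simulation() and every number here are
# invented stand-ins, nothing like real NNSA workloads.
import time
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Stand-in for a costly physics code, artificially slowed down."""
    time.sleep(0.0005 * len(x))          # pretend each sample costs real compute
    return np.sin(3 * x) + 0.5 * x ** 2  # the "physics" being approximated

rng = np.random.default_rng(0)

# Train the surrogate on results the full code has already produced.
x_train = rng.uniform(-2, 2, size=500).reshape(-1, 1)
y_train = expensive_simulation(x_train.ravel())
surrogate = GradientBoostingRegressor(random_state=0).fit(x_train, y_train)

# Compare the two on fresh inputs: the surrogate is near-instant but inexact.
x_test = rng.uniform(-2, 2, size=200).reshape(-1, 1)

t0 = time.perf_counter()
y_direct = expensive_simulation(x_test.ravel())
t_direct = time.perf_counter() - t0

t0 = time.perf_counter()
y_surrogate = surrogate.predict(x_test)
t_surrogate = time.perf_counter() - t0

max_err = np.max(np.abs(y_surrogate - y_direct))
print(f"direct: {t_direct*1e3:.1f} ms  surrogate: {t_surrogate*1e3:.1f} ms  "
      f"max error: {max_err:.3f}")

# A hybrid scheme would route inputs where the surrogate looks unreliable
# back to the full simulation, trading a little speed for accuracy.
```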

Cerebras’ CS-2 systems feature a large, dinner-plate-sized chip packed with 2.6 trillion transistors. The startup contends that this super-sized “waferscale” chip allows far faster processing of huge datasets because the information can remain on the processor longer, or all the time, which avoids shuffling data in and out of slower system memory.

The upstart is one of several outfits exploring waferscale computing to accelerate large AI/ML workloads. Tesla, for instance, demoed its Dojo supercomputer at Hot Chips this year. For a full breakdown of Cerebras’ waferscale compute architecture or Tesla’s Dojo platform, check out our sister site The Next Platform.

Speaking with The Register, Sivasankaran Rajamanickam, an engineer involved in the deployment of the Cerebras technology at Sandia, expressed interest in examining how the architecture handles sparse models and on-chip data flows. “The scale of the hardware makes it really exciting to see what we can do with it,” he said.
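For a rough sense of why sparse models are interesting here, consider this generic SciPy sketch on an ordinary CPU. It is not Cerebras’ SDK or its dataflow programming model, merely an illustration that sparse formats store and move only the nonzero values, which is the kind of data-movement saving the architecture is built around.

```python
# Illustrative only: a generic host-CPU sketch of sparse compute, not
# Cerebras' SDK or dataflow model.
import numpy as np
from scipy import sparse

n = 5_000
rng = np.random.default_rng(0)

# Matrices from discretised physics (or pruned neural nets) are mostly zeros.
A_sparse = sparse.random(n, n, density=0.001, format="csr", random_state=0)
A_dense = A_sparse.toarray()
x = rng.standard_normal(n)

print(f"nonzeros: {A_sparse.nnz:,} of {n * n:,} entries")

# The sparse product touches roughly nnz values; the dense one touches all
# n*n. Keeping that small working set close to the compute, as a waferscale
# part with on-chip memory aims to, is the same bet.
y_sparse = A_sparse @ x
y_dense = A_dense @ x
assert np.allclose(y_sparse, y_dense)
```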

Cerebras is only the latest AI startup to deploy its hardware under the ASC program. The Dept of Energy routinely explores heterogeneous compute platforms using a variety of CPUs, GPUs, NICs, and other accelerators to improve the speed and resolution of these simulations. To date, the agency has employed systems from Intel, AMD, Graphcore, Fujitsu, Marvell, IBM, and Nvidia to name a few.

“We anticipate technologies developed as part of the program will be tested on the Advanced Simulation and Computing program’s advanced architecture prototype systems and will eventually affect the production of advanced and commodity technology platforms used by the three labs,” Robert Hoekstra, senior manager of the extreme scale computing group at Sandia, said in a statement.

We’re told the findings of these trials will inform future investments by the DoE. ®
