How to build a discovery machine

Engineers at Washington University in St. Louis have mapped out in further detail how to create “discovery machines” for solving problems with potentially trillions of complicating factors.

Leah Shaffer 
Complex problems, such as sorting supply chain routes, managing traffic systems in cities or designing optimized chemical compounds, require “discovery machines”: supercomputers combining neuromorphic architectures with quantum mechanical tunneling. (Image: Shutterstock)

The AI machines that guide the world can be grouped into three main categories: inference machines, learning machines and discovery machines. Researchers at Washington University in St. Louis are tackling the rarest of the three with a better way to build “discovery machines,” thanks to recent research from Shantanu Chakrabartty, the Clifford W. Murphy Professor and vice dean for research in the McKelvey School of Engineering.

The work, now published in Nature Communications, builds on previous research establishing a hybrid systems architecture: a “neuromorphic architecture” modeled on how human neurobiology functions, combined with systems that leverage quantum mechanics to find optimal solutions to complex problems.

The research shows that these machines can consistently produce state-of-the-art solutions with high reliability and with competitive time-to-solution metrics, Chakrabartty said.

To understand how the new systems work, think back to those three different types of machines. Inference machines are the most common and familiar: ChatGPT, for instance, can serve as an inference machine. If you ask the large language model (LLM) to solve a Rubik’s Cube, that LLM has already been trained on the exact steps involved in solving that problem and can provide the instructions within seconds.

Now, imagine if no one ever trained the machine on those steps and users instead wanted it to “learn” all the possible steps to solving the cube. That effort requires a learning machine. But more complex problems call for more complex computing, requiring more energy and time. Computer and systems engineers have gotten better at creating those machines, too.

It’s the third category — the discovery machines — where things get very difficult. Imagine a machine that can not only find all possible solutions to a particular puzzle but also identify the fastest, most optimized solution, even with trillions of factors. This type of effort involves tapping into the power of randomness.

With his new research, Chakrabartty is essentially offering a recipe to make AI machines with this kind of power. These machines are designed to “find a needle in a haystack,” he said, with guaranteed success.

His team’s formula for discovery machines boils down to a hybrid system of neuromorphic-inspired autoencoding plus Fowler-Nordheim annealing, a tool cribbed from quantum mechanics.

“These are the two ingredients you need,” Chakrabartty said. “It’s general enough you can apply it to any complex problem.”

An autoencoder is a technique that compresses large streams of data. With the compressed data, the machine can predict patterns, repeating the compression process until its predictions are accurate.
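To make that loop concrete, here is a minimal sketch of a linear autoencoder in Python. It is purely illustrative and is not the architecture from the paper; the toy data, the dimensions and the learning rate are all invented for the example.

```python
# Minimal linear autoencoder sketch (illustrative only; not the
# architecture from the paper). It compresses 8-dimensional data into a
# 2-dimensional code, reconstructs the original from that code, and
# repeats until the reconstruction error is small.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                 # toy data stream
W_enc = rng.normal(scale=0.1, size=(8, 2))    # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 8))    # decoder weights
lr = 0.01

for step in range(2000):
    code = X @ W_enc        # compress: 8 dimensions -> 2
    X_hat = code @ W_dec    # predict (reconstruct) the original pattern
    err = X_hat - X         # reconstruction error
    # Gradient descent on the mean squared reconstruction error.
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

print("final reconstruction error:", float(np.mean(err ** 2)))
```

A purely linear autoencoder like this ends up learning something close to a principal-component projection; the compress-predict-repeat loop is the same idea the article describes.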

Fowler-Nordheim annealing is a method to generate noise and randomness, which enables the machine to “tunnel” directly to the optimized solution. This is where the researchers find the biggest advantages over a classical computing approach.

New computer chips allow this kind of annealing to be emulated on-chip, using principles of quantum mechanics to take researchers to the “Eureka moment” more directly, Chakrabartty said. With the hybrid system they built, his team can tune discovery machines to get results.
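To see the idea in code, here is a toy annealing loop in Python. It is a software stand-in, not the team’s hardware: a Fowler-Nordheim annealer realizes the noise schedule physically, while this sketch uses the classical logarithmic cooling schedule (noise decaying roughly as 1/log t) that annealing convergence guarantees are traditionally tied to. The energy landscape and all constants are made up for illustration.

```python
# Toy annealing loop (a software stand-in for a physical annealer).
# Random proposals plus a slowly decaying "temperature" let the state
# hop out of local minima early on and settle into a deep minimum late.
import math
import random

def energy(x):
    # Invented landscape with many local minima; the global minima sit
    # near x = +/-0.63, where the cosine term bottoms out.
    return x * x + 3.0 * math.cos(5.0 * x)

random.seed(0)
x = 4.0                                 # start far from the optimum
best_x, best_e = x, energy(x)
for t in range(1, 200_000):
    T = 2.0 / math.log(t + 2)           # logarithmic cooling schedule
    x_new = x + random.gauss(0.0, 0.5)  # random proposal (the noise)
    dE = energy(x_new) - energy(x)
    # Metropolis rule: always accept downhill moves; accept uphill moves
    # with probability exp(-dE / T) so the search can escape traps.
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = x_new
    if energy(x) < best_e:
        best_x, best_e = x, energy(x)

print(f"best x = {best_x:.3f}, energy = {best_e:.3f}")
```

The logarithmic schedule is the slow-cooling regime under which simulated annealing is classically guaranteed to reach the global optimum, which is one way to read the convergence guarantees discussed below.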

Teamwork makes the machine work

Chakrabartty and his team are involved in this research with collaborators around the world via the Institute of Neuromorphic Engineering (INE) and through annual brainstorming events like the Telluride Neuromorphic AI workshop in Colorado and the Bangalore Neuromorphic Engineering Workshop in Bengaluru, India. This research includes co-authors from the Indian Institute of Science, Heidelberg University in Germany, Johns Hopkins University and the University of California, Santa Cruz. Chakrabartty’s doctoral student Faiek Ahsan, who is the first author on this work, has been investigating the synaptic origins of the process of “discovery.”

Over the years, the group has been trying to solve higher-order challenges using a standard test called the Ising model. Even neural networks found the Ising problems too difficult to solve, so the team came up with the idea of augmenting the next generation of AI models with a light touch of quantum mechanics. But there is one more bonus.
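For context, the standard (second-order) Ising model asks for the spin assignment s in {-1, +1}^N that minimizes an energy of the form E(s) = -Σ J_ij s_i s_j; the higher-order variants the team targets add terms coupling three or more spins at once. The brute-force Python sketch below, with made-up random couplings, shows why the problem blows up: the search space doubles with every added spin.

```python
# Brute-force search over a tiny random Ising instance (illustrative
# only). With N spins there are 2**N configurations, so exhaustive
# search stops being feasible almost immediately as N grows.
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 12
J = np.triu(rng.normal(size=(N, N)), 1)   # random couplings J_ij, i < j

def ising_energy(s):
    # E(s) = -sum over i<j of J_ij * s_i * s_j
    return -s @ J @ s

best = min(itertools.product([-1, 1], repeat=N),
           key=lambda s: ising_energy(np.array(s)))
print("ground state:", best)
print("energy:", float(ising_energy(np.array(best))))
```

At N = 12 the loop checks 4,096 configurations in a blink; at N = 300 it would need more configurations than there are atoms in the observable universe, which is why annealers and other heuristics take over.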

The proposed architecture also differs from previous higher-order Ising models in its convergence guarantees. This means they know that whether the machine takes six months or a year to find an answer, there will be an answer at the end of that period. With some supercomputers, if researchers don’t get the prompt right the first time, they could be waiting a year for nothing.

It brings to mind Deep Thought, the supercomputer from “The Hitchhiker’s Guide to the Galaxy,” Chakrabartty said. Asked “what is the answer to life, the universe and everything,” Deep Thought took millions of years to eventually answer “42,” much to the chagrin of its creators. That won’t be the case with the discovery machines emerging from the team’s hybrid system.

“These types of machines give you that guarantee,” he said. “After six months something useful will show up.”


Ahsan F, Maiti S, Chen Z, et al. Higher-order neuromorphic Ising machines—autoencoders and Fowler-Nordheim annealers are all you need for scalability. Nat Commun (2026). https://doi.org/10.1038/s41467-026-71937-4

This work is supported in part by research grants from the U.S. National Science Foundation: 2332166, 2208770 and 2020624. F.A., S.C. and A.N. would like to acknowledge the McDonnell International Scholars Academy seed grant and a Memorandum of Understanding between WashU and IISc. The initial work on XOR-SAT solvers was performed when A.N. was a visiting scholar at WashU under the Fulbright-Nehru Doctoral Fellowship program. C.S.T. acknowledges financial support from the Pratiksha Trust Grant, India. J.S. and J.K. also acknowledge financial support from the Horizon Europe grant (Agreement No. 101147319, EBRAINS 2.0).
