Solving a Mystery in Dark Matter Detectors Could Improve Quantum Computers


BYLINE: Lauren Biron

Newswise — Although dark matter makes up most of the mass in our universe, it has never been directly observed. To hunt for lighter dark matter and other rare phenomena, researchers must solve a puzzle in their supersensitive detectors: an unexpected number of low-energy events, called the “low-energy excess” or LEE, that can obscure the rare signals they seek.

In a study published on Dec. 30, 2025, in Applied Physics Letters, researchers with the TESSERACT (Transition-Edge Sensors with Sub-eV Resolution And Cryogenic Targets) experiment identified one of the culprits behind the low-energy excess. They found that the noise comes not from the electronics or the surrounding environment, but from tiny bursts of vibrational energy within the silicon crystal of the detectors themselves. And the thicker the silicon, the more LEE events there are.

Since at least some LEE events come from tiny changes in the detector material itself, researchers suspect they also cause problems in superconducting qubits, the sensitive building blocks of quantum computers, which are often built on silicon. The bursts of energy can create “quasiparticles” that disturb a qubit’s fragile quantum state, causing it to decohere or fail. So even in carefully shielded quantum systems, some errors could be coming from inside the house.

“Quantum computers could perform calculations our current systems can’t, but only if people can make qubits that are stable,” said Dan McKinsey, the director of TESSERACT and a scientist at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), which leads the experiment. “Because the detectors we use for our dark matter experiment have a similar backbone to what is in qubits, by understanding a problem in particle physics, we’re also getting information on how to improve the quantum computing side.”

To pinpoint where LEE events were coming from, TESSERACT collaborators fabricated superconducting phonon sensors (which pick up quantum vibrations, or phonons) on two nearly identical silicon chips that were 1 and 4 millimeters thick. In both detectors, the number of events decreased over time as they were cooled, and the thicker chip saw four times as many low-energy events — pointing to the volume of silicon itself as the source, rather than outside causes.
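The logic behind that conclusion is simple enough to sketch. The snippet below is a hypothetical illustration (the helper names are invented, and this is not TESSERACT analysis code), showing why a fourfold rate increase in a fourfold-thicker chip points to the silicon volume rather than to its surfaces or surroundings:

```python
# Toy version of the thickness comparison. If low-energy events
# originate in the bulk of the silicon, the rate should scale with
# detector volume; for equal-area chips, that means thickness.

def bulk_rate_ratio(thin_mm, thick_mm):
    """Event-rate ratio predicted by a bulk (volume) source."""
    return thick_mm / thin_mm

def surface_rate_ratio():
    """A surface source on equal-area faces predicts no thickness dependence."""
    return 1.0

# 1 mm vs. 4 mm chips: a bulk source predicts 4x, a surface source 1x.
# The observed 4x excess in the thicker chip matches the bulk prediction.
print(bulk_rate_ratio(1, 4))   # 4.0
print(surface_rate_ratio())    # 1.0
```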

Now that the scientific community knows the number of LEE events relates to how thick the silicon is, some groups will be able to improve their sensors simply by scaling back how much silicon they use. But it’s still just the first step in understanding exactly what causes the bursts of energy and finding an engineering solution to get rid of the background noise completely.

“Superconducting qubits for computers are designed to ignore the environment so that their quantum state survives,” said Matt Pyle, a TESSERACT collaborator, associate professor at UC Berkeley, and researcher at Berkeley Lab. “In contrast, our photon and phonon sensors use similar technology, but they’re designed to be incredibly sensitive to their environment so that they can sense dark matter. That makes our detectors unique and powerful tools for diagnosing environmental sources that cause decoherence and limit quantum computers.”

During the experiment, TESSERACT’s thinner detector also achieved a world-leading energy resolution of 258.5 millielectronvolts. That means it could distinguish between two events with energies differing by only about a quarter of an electronvolt, several times smaller than the amount of energy carried by a single particle of visible light. That precision will allow scientists to distinguish extremely faint signals from background noise, essential for tracking down dark matter.
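The scale comparison can be checked with the standard photon-energy relation E = hc/λ; the 550 nm wavelength below is a representative mid-visible value chosen for illustration, not a figure from the study:

```python
# Compare the quoted 258.5 meV energy resolution with the energy
# of a single visible-light photon, using E = h * c / wavelength.
H_EV_S = 4.135667696e-15   # Planck constant in eV*s
C_M_S = 2.99792458e8       # speed of light in m/s

def photon_energy_eV(wavelength_nm):
    return H_EV_S * C_M_S / (wavelength_nm * 1e-9)

resolution_eV = 0.2585                 # 258.5 meV
green_photon = photon_energy_eV(550)   # mid-visible light, ~2.25 eV
print(f"visible photon: {green_photon:.2f} eV")
print(f"ratio: {green_photon / resolution_eV:.1f}x")  # roughly 9x smaller
```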

TESSERACT is currently in the prototype and construction phase, and will eventually be installed in France’s Modane Underground Laboratory. The TESSERACT collaboration also includes researchers at Argonne National Laboratory, Caltech, Florida State University, IJCLab (Laboratoire de Physique des 2 Infinis Irène Joliot-Curie), IP2I (Institut de Physique des 2 Infinis de Lyon), LPSC (Laboratoire de Physique Subatomique et de Cosmologie), Texas A&M University, UC Berkeley, the University of Massachusetts Amherst, the University of Zürich, and QUP (the International Center for Quantum-field Measurement Systems for Studies of the Universe and Particles).

###

Lawrence Berkeley National Laboratory (Berkeley Lab) is committed to groundbreaking research focused on discovery science and solutions for abundant and reliable energy supplies. The lab’s expertise spans materials, chemistry, physics, biology, earth and environmental science, mathematics, and computing. Researchers from around the world rely on the lab’s world-class scientific facilities for their own pioneering research. Founded in 1931 on the belief that the biggest problems are best addressed by teams, Berkeley Lab and its scientists have been recognized with 17 Nobel Prizes. Berkeley Lab is a multiprogram national laboratory managed by the University of California for the U.S. Department of Energy’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.




Cracking the Code: Using AI to Solve Difficult-to-Map Proteins


BYLINE: Ashleigh Papp

Newswise — For most researchers in structural biology and computational chemistry, using a tool to solve a protein’s structure is not unlike using the Rosetta Stone to unlock the secrets of ancient Egyptian texts. Once a protein’s structure has been discovered, or defined, one can infer crucial information about its function or, in a diseased state, its dysfunction. While researchers have pursued protein structures for decades, advancing tools and computing technologies offer a new frontier for this work.

A collaborative study recently published in Nature Communications unveiled a new computing program that determines protein structure faster and with greater precision. Researchers from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) were part of the effort, alongside an international team. The tool, dubbed AI-enabled Quantum Refinement, or AQuaRef for short, uses quantum-mechanical (QM) calculations and artificial intelligence (AI) to accurately place atoms and electrons when determining a protein’s molecular structure.

This program is a part of Phenix, a comprehensive software suite that generates realistic computer models used by structural biologists around the world to solve macromolecular structures. “We’re all basically a bunch of proteins,” said Nigel Moriarty, a Berkeley Lab researcher and contributor to the recent publication. “They do so much in our bodies that detail the processes of life. Understanding their structure can give us insights into the mechanisms that cause disease in humans or produce energy in plants. All of this knowledge can lead to more effective therapeutics and bioenergy production.”

The current way of mapping a protein’s structure entails bringing together two streams of information: experimental data produced through techniques like X-ray crystallography and cryogenic electron microscopy (cryo-EM), and theoretical data that exists in a library of detailed, known protein structural information. But the current options are limited, explained Moriarty, a computational research scientist in the Molecular Biophysics and Integrated Bioimaging (MBIB) Division’s Phenix group: they cover only the chemical entities that have already been defined and don’t yet capture meaningful noncovalent interactions, the type of attraction that typically holds a protein in its structural form. “That’s where quantum and AI come in,” he said.

Nearly five years ago, members of the Phenix team began working with researchers at Carnegie Mellon University to explore how they might be able to apply their coding work to Phenix’s offerings. The collaborative approach, coupled with 15 years of incremental research, led to this breakthrough program. In addition to Moriarty, other members of the Phenix team involved in this work were Paul Adams and Billy Poon, with Pavel Afonine leading the research. AQuaRef uses machine learning (ML) tools developed at Carnegie Mellon integrated with the Phenix software to compute energy and forces for scientifically interesting proteins—making quantum-level refinement practical where it was previously impossible.
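The role of the energy term can be caricatured in a few lines. The sketch below is purely illustrative, not Phenix or AQuaRef code: a single coordinate is pulled toward both the experimental data and a chemically preferred position, with the energy term standing in for the AI model trained to reproduce quantum-mechanical calculations.

```python
# Toy refinement of one coordinate: minimize a weighted sum of a
# data-fit term and a chemical-energy term by gradient descent.
def refine(x, data_target, energy_minimum, weight=0.5, steps=200, lr=0.1):
    for _ in range(steps):
        grad_data = 2 * (x - data_target)        # pull toward the experimental map
        grad_energy = 2 * (x - energy_minimum)   # pull toward good chemistry
        x -= lr * (grad_data + weight * grad_energy)
    return x

# The noisy map puts an atom at 1.0; the energy model prefers 0.7.
refined = refine(x=0.0, data_target=1.0, energy_minimum=0.7)
print(f"{refined:.2f}")  # 0.90, a weighted compromise between data and chemistry
```

A more accurate energy term shifts that compromise toward chemically sensible geometry, which is where a quantum-accurate AI potential pays off.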

Across the 71 experiments tested in this study, AQuaRef produced higher-quality structural information at a substantially lower computational cost while maintaining an equal or better fit to experimental data. Beyond those proof-of-concept results, AQuaRef also correctly determined proton positions in DJ-1, a human protein linked to some forms of Parkinson’s disease whose structure has been notoriously difficult to map. Now that the team has confirmed that quantum-level refinement of a 3D protein model is possible, they aim to broaden the scope to more diverse structures, such as those required for pharmaceutical drug design. And the potential impacts reach far beyond human health, from better understanding the mechanisms of photosynthesis for enhanced crop productivity to mapping plant proteins relevant to biofuel production.

“There is a near-infinite number of things that can benefit from a detailed understanding of these mechanisms and protein structure,” said Moriarty. “I’m excited to see how the paradigm shift that AQuaRef represents impacts the field of protein structure determination.”

This international team also included collaborators from the University of Wrocław, Poland, the University of Florida, and Pending.AI, Australia.

This work was funded by the National Institutes of Health as well as with support from the Phenix Industrial Consortium.

###





AI, Automation, and Biosensors Speed the Path to Synthetic Jet Fuel


BYLINE: Will Ferguson

Newswise — When it comes to powering aircraft, jet engines need dense, energy-packed fuels. Right now, nearly all of that fuel comes from petroleum, as batteries don’t yet deliver enough punch for most flights. Scientists have long dreamed of a synthetic alternative: teaching microbes to ferment plant material into high-performance jet fuels. But designing these microbial “mini-factories” has traditionally been slow and expensive because of the unpredictability of biological systems.

In a pair of recent studies, two teams at the Joint BioEnergy Institute (JBEI), which is managed by Lawrence Berkeley National Laboratory (Berkeley Lab), have demonstrated complementary ways to dramatically speed up this process. One combines artificial intelligence and lab automation to rapidly test and refine the genetic designs of biofuel-producing microbes. The other turns a microbe’s “bad habit” into a powerful sensing tool, uncovering hidden pathways that boost production.

Their shared target is isoprenol — a clear, volatile alcohol that can be converted into DMCO, a next-generation jet fuel with higher energy density than today’s conventional aviation fuels. Producing isoprenol efficiently has been a long-standing challenge in synthetic biology.

The two studies — one published in Nature Communications, the other in Science Advances — tackle different sides of this challenge. The first uses automation and machine learning to engineer Pseudomonas putida strains that produce five times more isoprenol than before. The second approach turns the bacterium’s natural fuel-sensing ability into an advantage. By rewiring that system into a biosensor, the team could rapidly screen millions of variants and identify strains that make up to 36 times more isoprenol.

“These are two powerful complementary strategies,” said senior author of the biosensor study Thomas Eng, JBEI deputy director of Host Engineering and a research scientist in Berkeley Lab’s Biological Systems and Engineering (BSE) Division. “One is data-driven optimization; the other is discovery. Together, they give us a way to move much faster than traditional trial-and-error.”

A new engine for strain design

The AI and automation study was led by Taek Soon Lee, director of Pathway and Metabolic Engineering at JBEI, and Héctor García Martín, director of Data Science and Modeling at JBEI, both staff scientists in Berkeley Lab’s BSE Division. They set out to accelerate one of synthetic biology’s most time-consuming steps: improving microbial production by tweaking different combinations of genes. Traditionally, scientists alter a few genes at a time and test the results — a painstaking, intuition-driven process that can take months or even years to yield meaningful gains.

By contrast, the Berkeley Lab researchers built an automated pipeline that uses robotics to create and test hundreds of genetic designs in parallel. After each round, machine learning algorithms analyze the results to systematically suggest the next set of strain genetic designs. The result is a system that moves 10 to 100 times faster than conventional methods.

“Standard metabolic engineering is slow because you’re relying on human intuition and biological knowledge,” said García Martín. “Our goal was to make strain improvement systematic and fast.”

Lead author David Carruthers, a scientific engineering associate with JBEI and BSE, developed a robotic workflow that connects key lab steps into one automated system. Working with collaborators at Lawrence Livermore National Laboratory, the team introduced a custom microfluidic electroporation device that can insert genetic material into 384 Pseudomonas putida strains in under a minute — a task that typically takes hours by hand.

At the core of the system is CRISPR interference (CRISPRi), a tool that lets researchers “turn down” gene activity rather than switching genes off completely. This fine-tuning makes it possible to test subtle gene combinations that shape the cell’s metabolism and track the effects through detailed protein measurements. After each round, the machine learning model analyzes the results and recommends the next set of genes that are most likely to boost performance when dialed down.

“Traditionally, optimizing production is a kind of guess-and-check process,” Carruthers said. “You make one change, test it, and hope you’re climbing toward a higher peak. By combining automation and machine learning, we were able to climb that landscape systematically — in weeks, not years.”
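The cycle Carruthers describes can be sketched in miniature. Everything below is invented for illustration (the gene names, the hidden titer landscape, and the random batch proposals standing in for the machine learning model); it shows only the shape of a design-build-test-learn loop, not JBEI's actual pipeline:

```python
import itertools
import random

random.seed(0)

GENES = ["g1", "g2", "g3", "g4", "g5", "g6"]   # hypothetical CRISPRi targets
_WEIGHTS = {"g1": 0.8, "g2": -0.3, "g3": 0.5, "g4": 0.1, "g5": 0.6, "g6": -0.2}

def measure_titer(design):
    """Stand-in for a robotic build-and-test run: titer plus noise."""
    titer = 1.0 + sum(_WEIGHTS[g] for g in design)
    if "g1" in design and "g5" in design:
        titer += 0.7   # a non-intuitive synergy, the kind ML can surface
    return titer + random.gauss(0, 0.05)

def dbtl(rounds=6, batch=8):
    """Run several design-build-test-learn cycles; return the best design."""
    candidates = [frozenset(c) for r in range(1, 4)
                  for c in itertools.combinations(GENES, r)]
    best, best_titer = frozenset(), measure_titer(frozenset())
    for _ in range(rounds):
        for design in random.sample(candidates, batch):  # "build" a batch
            t = measure_titer(design)                    # "test" it
            if t > best_titer:                           # "learn" the winner
                best, best_titer = design, t
    return sorted(best), best_titer

design, titer = dbtl()
print("best knockdown set:", design, f"-> titer ~{titer:.2f}")
```

In the real pipeline, the proposal step is a trained model rather than random sampling, and the "test" step is a robot measuring actual strains, but the loop structure is the same.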

Lee, who led the metabolic engineering work, emphasized why this level of automation is so transformative for biology.

“We have been engineering Pseudomonas by hand for years, but biological experiments always come with small variations that are hard to control,” he said. “Automation gives us the ability to generate the same high-quality data every time, which is essential for machine learning to work well.”

Patrick Kinnunen, a former Berkeley Lab JBEI postdoctoral researcher who co-developed the data strategy, highlighted how crucial that reproducibility was for the algorithms. “Automation didn’t just make the experiments faster — it made the data cleaner,” he said. “That clarity is what lets it uncover non-intuitive genetic combinations that we probably would have missed by hand.”

Using their automated learning loop, the team completed six engineering cycles, each lasting just a few weeks instead of the months typical of manual workflows. They boosted isoprenol titers (the concentration of product in the culture) five-fold compared to their starting strain.

Turning a bug into a feature

Meanwhile, a second team led by Eng tackled a different but equally stubborn hurdle: how to select target genes that, when dialed down, significantly improve isoprenol production. The team’s microbe, Pseudomonas putida, posed a peculiar problem. It didn’t just make isoprenol; it also consumed the fuel molecule almost as soon as it produced it, undermining production efforts. Initially, this looked like a flaw. But during the COVID-19 pandemic, Eng and colleagues realized it might be a clue: if the microbe could sense and eat isoprenol, it likely had a built-in molecular sensor.

“There was a real ‘Aha!’ moment,” Eng said. “We had spent more than a year trying to figure out why the cells were consuming the product. One day we thought, ‘Wait, if they can sense it, there has to be a protein that detects it. Maybe we can turn that from a problem into a tool.’”

The team discovered the molecular system the microbe uses to sense isoprenol: two proteins that work together to detect the fuel and send signals inside the cell. They then rewired this system into a biosensor — a kind of biological “engine light” that turns on in proportion to how much fuel the cell produces.

Then came the clever twist: They linked the sensor to genes essential for survival, creating a system where only the microbes that make the most fuel can grow. Instead of measuring thousands of samples by hand, they let natural selection do the screening. This approach rapidly surfaced “champion” strains, including variants that produced up to 36 times more isoprenol than the original.
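The selection logic can be sketched with a toy population model (all numbers and the growth rule here are invented for illustration; this is not the actual biosensor circuit):

```python
import random

random.seed(1)

def enrich(population, generations=12, culture_size=1000):
    """Grow a culture in which production level sets survival odds."""
    for _ in range(generations):
        next_gen = []
        for production in population:
            # Biosensor ties production to essential genes: higher
            # producers are more likely to double each generation.
            offspring = 2 if random.uniform(0, 10) < production else 1
            next_gen.extend([production] * offspring)
        # Dilution keeps the culture at a fixed size.
        population = random.sample(next_gen, culture_size)
    return population

# Start with 99% low producers and 1% rare champions (arbitrary units).
start = [random.uniform(0, 2) for _ in range(990)] + [9.0] * 10
final = enrich(start)
champion_fraction = sum(1 for p in final if p > 8) / len(final)
print(f"champions: 1% at start -> {100 * champion_fraction:.0f}% after selection")
```

Because growth itself does the screening, a library of millions of variants can be winnowed without measuring each one, which is the advantage the biosensor approach exploits.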

“What started as a frustrating bug became our biggest asset,” Eng said. “We turned the microbe’s fuel-eating behavior into a sensor that reports and selects for the best producers automatically.”

The approach also revealed surprising biology: high-producing strains switched to feeding on their own amino acids once glucose ran out, sustaining production by rewiring their metabolism in unexpected ways. Just as importantly, the workflow can be applied to other molecules, offering a flexible new tool for rapidly engineering microbes — not just for isoprenol, but for a wide range of bio-based products.

Scaling up to industry-ready

Although developed independently, the two approaches fit together well. The AI-driven pipeline excels at rapidly optimizing combinations of a known set of gene targets, while the biosensor method is best for discovering novel gene targets, revealing genetic levers that would be difficult to predict.

“One is depth-first; the other is breadth-first,” Eng said. “Machine learning systematically optimizes combinations of annotated targets, while the biosensor approach starts fresh and lets the cells tell us which gene targets matter.”

Both teams are now working to scale their methods from lab experiments to industrially relevant fermentation systems — a critical step for producing synthetic aviation fuel at commercial levels. They’re also adapting their approaches to other microbes and target molecules, aiming to make them broadly applicable in biomanufacturing.

“If widely adopted, these approaches could reshape the industry,” García Martín said. “Instead of taking a decade and hundreds of people to develop one new bioproduct, small teams could do it in a year or less.”

Aindrila Mukhopadhyay, BSE deputy director for science, director of Host Engineering at JBEI, and a coauthor on the biosensor study, said these kinds of tools are changing how biological research gets done.

“Engineering biology is challenging due to the inherent unpredictability of metabolism and that makes the engineering slow,” Mukhopadhyay said. “By streamlining key steps — as we did through selections — and leveraging automation and AI, we’re making it a faster, more systematic process that is easier to adopt.”

JBEI is a Bioenergy Research Center funded by the Department of Energy Office of Science.

###
