Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions


Meta is debuting its first major artificial intelligence model since the costly hiring of Scale AI’s Alexandr Wang nine months ago, as the Facebook parent aims to carve out a niche in a market that’s being dominated by OpenAI, Anthropic and Google.

Dubbed Muse Spark and originally codenamed Avocado, the AI model announced Wednesday is the first from the company’s new Muse series developed by Meta Superintelligence Labs, the AI unit that Wang oversees. Wang joined Meta in June as part of the company’s $14.3 billion investment in Scale AI, where he was CEO.

Meta is desperate to regain momentum in the fiercely competitive AI market following the disappointing debut of its latest open-source models last April. The release failed to captivate developers, leading CEO Mark Zuckerberg to pivot his strategy.

“Over the last nine months, Meta Superintelligence Labs rebuilt our AI stack from the ground up, moving faster than any development cycle we have run before,” Meta said in a blog post on Wednesday. “This initial model is small and fast by design, yet capable enough to reason through complex questions in science, math, and health. It is a powerful foundation, and the next generation is already in development.”

Meta isn’t positioning Muse Spark as a top-of-the-line model, but is instead highlighting its efficiency and “competitive performance” on various tasks.

The new Muse Spark will be proprietary rather than open source, though the company said it does “hope to open-source future versions of the model.” The company had been taking an open-source approach to AI with its Llama family of models.

Meta said in a technical blog about the new model that improved AI training techniques, along with rebuilt technology infrastructure, have enabled the company to create smaller AI models that are as capable as its older midsize Llama 4 variant for “an order of magnitude less compute.”

“Muse Spark offers competitive performance in multimodal perception, reasoning, health, and agentic tasks,” Meta said in the post. “We continue to invest in areas with current performance gaps, specifically long-horizon agentic systems and coding workflows.”

Meta is also experimenting with a new AI model revenue stream by offering third-party developers access to Muse Spark’s underlying technology via an API. Currently, only unspecified “select partners” can access the AI model’s “private API preview,” but Meta said it plans to eventually offer paid API access to a wider audience.

The new model now powers the company’s digital assistant in the standalone Meta AI app and desktop website. Muse Spark will debut in the coming weeks inside Facebook, Instagram, WhatsApp and Messenger, as well as in the company’s Ray-Ban Meta AI glasses. Meta also plans for Muse Spark to eventually power the company’s Vibes AI video feature in the Meta AI app. That service currently uses AI models from third parties like Black Forest Labs.

With Muse Spark, users of the standalone Meta AI app and related website will now be able to alternate between certain modes depending on the sophistication of their prompts. With Instant mode, users can get quick answers to simple questions whereas Thinking mode lets them input more complicated queries related to tasks like analyzing legal documents or gleaning nutritional information from photos of grocery store products.

Additionally, a Contemplating mode “will be rolling out gradually” in the Meta AI app and site for the most complicated queries and tasks, Meta said in the technical blog. In this mode, the Muse Spark model utilizes a squad of AI agents to help “reason in parallel,” thus helping it “compete with the extreme reasoning modes of frontier models such as Gemini Deep Think and GPT Pro,” the technical blog said.
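
Meta has not published how Contemplating mode orchestrates its agents. As a purely illustrative sketch of the generic "reason in parallel" pattern, rather than Meta's actual system, the snippet below fans one prompt out to several workers and aggregates their answers by majority vote; query_model is a hypothetical placeholder for any model call.

```python
# Generic parallel-reasoning sketch; illustrative only, not Meta's implementation.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def query_model(prompt: str, seed: int) -> str:
    # Hypothetical placeholder: a real system would call a reasoning model
    # with sampling enabled so each worker can explore a different path.
    return f"candidate-answer-{seed % 2}"

def reason_in_parallel(prompt: str, n_agents: int = 4) -> str:
    # Fan the same prompt out to several workers running concurrently.
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        answers = list(pool.map(lambda s: query_model(prompt, s), range(n_agents)))
    # Aggregate by majority vote; production systems may use a judge model instead.
    return Counter(answers).most_common(1)[0][0]

print(reason_in_parallel("How many primes are below 20?"))
```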

The revamped Meta AI with Muse Spark will also contain a Shopping mode that the company said will be able to help people buy clothes or decorate rooms.

“Shopping mode draws from the styling inspiration and brand storytelling already happening across our apps, surfacing ideas from the creators and communities people already follow,” Meta’s blog post said.



Countries around the world are considering teen social media bans – why experts warn it’s a ‘lazy’ fix



Governments around the world are making efforts to crack down on teen social media use amid mounting evidence of potential harms, but critics argue blanket bans are an ineffective quick fix.

Australia became the first country to enforce a sweeping social media ban for under-16s in December, requiring platforms like Meta’s Instagram, ByteDance’s TikTok, Alphabet’s YouTube, Elon Musk’s X, and Reddit to implement age verification measures or face penalties.

Several European countries are now looking to follow Australia’s lead, with the U.K., Spain, France, and Austria drafting their own proposals. Although a national ban in the U.S. looks unlikely, state-level legislation is underway.


It comes after Meta, the parent company of Facebook, Instagram and Threads, faced two separate defeats in trials related to child safety and social media harms in March.

A Santa Fe jury found Meta misled users about child safety on its apps. The next day, a Los Angeles jury ruled that Meta and YouTube designed platform features that contributed to a plaintiff’s mental health harms.


These developments are set to “unleash a lot more legislation,” Sonia Livingstone, social psychology professor and director of the London School of Economics’ Digital Futures for Children center, told CNBC.

However, Livingstone said a social media ban for teens is a slapdash solution from governments that have failed to properly police tech giants for years.

“I think the argument for a ban is an admission of failure that we cannot regulate companies, so we can only restrict children,” she said, explaining that the U.S. and Europe already have a lot of legislation on the books that isn’t being enforced.

“When are governments really going to enforce, raise the stakes on fines, ban the companies if necessary for not complying?” she added.

Enforce existing laws

Experts argue the sector has for too long escaped accountability and the rigid requirements faced by other industries.

“[Governments] should be implementing the law [and] big tech companies should be facing a slew of regulatory interventions that forbid a whole series of practices that they currently do,” Livingstone said.

She highlighted the U.K.’s Online Safety Act, which “requires safety by design” — this means features such as Snapchat’s “Quick Add” that invite teens to befriend others should be stopped, according to Livingstone.

Livingstone believes that a blanket ban wouldn’t even be under discussion if social media companies had undergone appropriate premarket testing to establish if their features are safe for their target audience.

“There are lots of areas where we have a well functioning market that requires testing to establish it meets the standards…[before products] can go into the market,” she said. “If we did that for AI and for social media, we would be in a whole different place and we’d not be having to talk about banning children from anything.”

Josh Golin, executive director at Boston-based non-profit Fairplay for Kids, told CNBC that he’d like to see “privacy and safety by design legislation rather than blanket bans” across the U.S.

This includes passing the Children and Teens’ Online Privacy Protection Act to put a stop to personal data-driven advertising to children, so there’s “less financial incentive for social media companies to target and addict kids.”

Golin added that passing the Senate’s version of the Kids Online Safety Act (KOSA) is also key to ensuring platforms are held legally responsible for design features that can cause addiction or other harms.

Golin added that Meta has already successfully lobbied to stop KOSA, even though the bill passed the Senate in 2024. If the company keeps blocking legislation, he thinks more pressure could “line up behind bans because addictive and unsafe is not OK.”


A ban is ‘lazy’ and ‘unfair’

A sweeping social media ban only punishes a generation of young people who have become increasingly dependent on online means of interaction, according to Livingstone. She said bans are a “lazy” solution from governments and an “unfair” outcome for young people.

“It’s the 15 years in which we don’t let our children go outside and meet their friends. It’s the 15 years in which we stopped funding parks and youth clubs for them to meet in,” she said.

“So a ban now is to say to children: ‘We can’t make the regulation work. We can’t update it fast enough. We haven’t built you anything else to do, but that’s just tough. We’ve terrified your parents into feeling that there’s nothing they can do, and we’re going to take you away from the service where you hoped you would feel some sociability and entertainment.’”



Scientists Commission Crucial Subsystem in Pioneering Particle Physics Experiment


Newswise — The U.S. Department of Energy’s (DOE) Argonne National Laboratory made a major contribution to a high-profile experiment seeking to discover new physics. Hosted at DOE’s Fermi National Accelerator Laboratory (Fermilab), the Mu2e (muon-to-electron conversion) experiment aims to observe an extremely rare process in particle physics. The experiment is a multiyear collaboration among more than 30 institutions and 200 scientists from around the world.

An Argonne team of high energy physics scientists designed and — most recently — commissioned a crucial Mu2e subsystem called the Cosmic Ray Veto (CRV) detector. This subsystem filters out the biggest source of background noise in Mu2e. Background noise refers to signals that mimic the rare process that Mu2e seeks to detect. The commissioning tests demonstrated that the CRV detector is working properly and collecting background data.

The CRV’s development and deployment was a collaborative effort among Argonne, Fermilab, Kansas State University, University of South Alabama, University of Virginia, Northern Illinois University and University of Michigan.

“The CRV detector will enable Mu2e to more reliably and accurately detect an event expected to be vanishingly rare according to current particle physics theory,” said Yuri Oksuzian, an Argonne physicist who has played a key role in Mu2e and the CRV’s development. ​“The CRV is essential because it screens out background noise that could mimic this event. Observing even a few cases of the event would be compelling evidence of new physics.”

Searching for a muon-to-electron conversion

Since the 1970s, the dominant theory in particle physics has been the Standard Model. Widely considered a robust theory, the Standard Model describes the interactions among the fundamental particles and forces in the universe. But it leaves many big questions unanswered. For example, it cannot explain gravity or dark matter, a mysterious type of matter that cannot be observed directly.

As a result, particle physicists are searching for new theories, particles and forces beyond the Standard Model. The aim is to provide a more comprehensive understanding of the universe.

Mu2e seeks to observe a muon changing to an electron without any other particles being produced. A muon is a fundamental particle that is a heavier version of an electron. The Standard Model predicts this transition to be so rare that observing it at Mu2e would be a major discovery and strong evidence of new physics.

“If Mu2e detects a muon-to-electron conversion, it would indicate that there’s a new particle or force involved in this process,” said Oksuzian. ​“The discovery would fundamentally change our understanding of how the universe works.”

Screening out cosmic-ray muons

The Mu2e apparatus directs a high-intensity beam of muons to a thin aluminum foil target. Detectors probe the target for conversion events, indicated by the presence of electrons with a precise amount of energy and momentum. Remarkably, Mu2e is expected to be 10,000 times more sensitive to conversions than previous similar experiments.

The apparatus has critical subsystems that measure background — in other words, signals that look like conversions but aren’t. A major background source is cosmic rays. These are high-energy particles from space that collide with atoms in the Earth’s atmosphere. The collisions produce showers of particles, including muons, that reach the ground.

Muons can penetrate the Mu2e apparatus and knock electrons from the aluminum foil. These free electrons can potentially have the exact energy and momentum of an electron from a conversion event. If that happens, Mu2e’s detectors will mistakenly register them as the real signal.

The CRV detector, engineered by Argonne, detects background events caused by cosmic-ray muons. It is essentially a giant cage that covers key parts of the Mu2e apparatus. It consists of 83 modules, which together weigh about 60 tons. The modules are made of thousands of plastic strips that produce light photons when muons pass through them.

Special fibers inside the strips carry the photons to sensors called silicon photomultipliers. The sensors measure the photons, indicating the exact time of the muon’s passage. If the CRV detects a cosmic muon just before an electron appears in the Mu2e apparatus, that electron is rejected from the data.
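
The timing logic described above can be pictured with a short sketch. This is illustrative only; the coincidence window and the data layout are assumptions, not Mu2e parameters. A candidate electron is discarded whenever a CRV hit precedes it within the window.

```python
# Illustrative time-coincidence veto; the window value is assumed, not Mu2e's.
from bisect import bisect_left

VETO_WINDOW_NS = 150.0  # assumed coincidence window in nanoseconds

def apply_crv_veto(candidate_times_ns, crv_hit_times_ns, window_ns=VETO_WINDOW_NS):
    """Keep only candidate electrons not preceded by a CRV hit within the window."""
    crv_sorted = sorted(crv_hit_times_ns)
    kept = []
    for t in candidate_times_ns:
        i = bisect_left(crv_sorted, t)  # index of first CRV hit at or after t
        vetoed = i > 0 and (t - crv_sorted[i - 1]) <= window_ns
        if not vetoed:
            kept.append(t)
    return kept

# Toy example: the candidate at 1,000 ns is rejected by the CRV hit at 950 ns.
print(apply_crv_veto([1000.0, 5000.0], [950.0, 30000.0]))  # -> [5000.0]
```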

“We had to carefully design the CRV structure so that there are no gaps between the modules,” said Oksuzian. ​“The objective was to ensure that the detector would not miss any cosmic muons.”

Without the CRV, cosmic-ray muons would produce thousands of ​“fake” conversion events over Mu2e’s three-to-five-year run time. Because muon-to-electron conversions are so rare, even a small number of fake events would compromise Mu2e’s accuracy. As a result, the CRV must detect and reject 99.99% of cosmic-ray muons passing through. Argonne recently evaluated the CRV’s performance over a two-year period and found that it can meet this strict design requirement.
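
A back-of-the-envelope calculation shows why that target is so strict; the starting count below is an assumption for illustration (the lab says only "thousands" of fake events), not a number from the experiment.

```python
# Assumed count of would-be fake events without the veto, for illustration only.
fake_events_without_veto = 5000
rejection_efficiency = 0.9999  # the stated design requirement

surviving_fakes = fake_events_without_veto * (1 - rejection_efficiency)
print(surviving_fakes)  # 0.5 -> roughly one surviving fake event over the full run
```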

Commissioning the CRV at Mu2e

Recently, the Argonne team transported the CRV components to the Mu2e building at Fermilab in Batavia, Illinois, and commissioned them to detect cosmic-ray muons. The successful test enabled the CRV to fulfill a key DOE technical milestone and performance objective, helping to advance Mu2e’s commissioning. Other Mu2e subsystems will be commissioned and tested over the next year.

The experiment is expected to begin in 2027. Argonne scientists will have important roles in Mu2e operations, including the CRV, data acquisition system and analysis of datasets.

Besides Oksuzian, Argonne’s CRV team also includes Simon Corrodi, Sam Grant, Peter Winter and Lei Xia.

DOE’s Office of Science is a key supporter of Mu2e and Argonne’s CRV research and oversees Mu2e’s implementation.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology by conducting leading-edge basic and applied research in virtually every scientific discipline. Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.




Sandia Researchers Develop Rapid PFAS Detector


Newswise — ALBUQUERQUE, N.M. — When Sandia scientists Ryan Davis and Nathan Bays set out to find a better way to absorb and degrade PFAS in water sources, they kept running into the same issue: Detecting the chemicals in samples took too long.

So, they came up with their own solution.

They’ve developed a faster, cheaper way to test for PFAS.

The problem of PFAS and solving it

PFAS, or per- and polyfluoroalkyl substances, are commonly called forever chemicals because they don’t break down naturally in the environment. They can move through soil and water and build up in wildlife and humans.

Ryan, a chemist, has spent years developing technologies that can eliminate PFAS on both large and small scales. But that research has been time-consuming. Depending on the concentration, it can take hours to days to detect PFAS in a single sample.

“A common complaint of ours and others who are doing PFAS analysis is that it’s slow and can be costly depending on the technology,” Ryan said.

Traditional testing processes require repetitive extraction, concentration and processing.

It starts with a liter or more of liquid suspected to contain PFAS. The liquid is forced through a cartridge to extract the PFAS. The collected PFAS is then added to a smaller volume of water, and the process is repeated with new cartridges until enough PFAS is concentrated for detection.
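
A rough model of the enrichment per pass shows why this loop is slow and cartridge-hungry. The volumes and recovery fraction below are assumptions chosen for illustration, not figures from the Sandia work.

```python
# Illustrative solid-phase extraction enrichment model; all numbers are assumed.
def concentration_factor(passes):
    """passes: list of (volume_in_mL, volume_out_mL, recovery_fraction) per cycle."""
    factor = 1.0
    for v_in, v_out, recovery in passes:
        factor *= (v_in / v_out) * recovery  # each cycle trades volume for concentration
    return factor

# Example: 1 L eluted into 10 mL at 90% recovery, then 10 mL into 1 mL at 90%.
print(concentration_factor([(1000, 10, 0.9), (10, 1, 0.9)]))  # ~810x enrichment
```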

The process is not only time-consuming but also costly. Cartridges can cost several hundred dollars apiece.

That process not only slows research and development but puts testing out of reach for the average person.

“We want a technology that can be broadly accessible, not only for researchers but for the broader public and government,” Ryan said. “It will allow regulators to track PFAS in the environment, and for people to test their own tap water.”  

A new way to detect PFAS

Ryan and Sandia postdoctoral researcher Nathan Bays have developed that technology.

The pair stumbled onto the approach while experimenting with a mass spectrometer and a technique called desorption electrospray ionization, or DESI. The process sprays electrically charged droplets at the surface of an adsorbent, ionizing only the target chemical and not the adsorbent itself.

 Bays and Ryan said the results were unexpected.

“We had toyed with the idea of using DESI to confirm the presence of PFAS on adsorbent materials,” Ryan said. “When we did some preliminary testing, not only did we confirm the presence of PFAS, but we noticed that we got results well beyond our standard analysis.”

“At this point, it became very clear we had an opportunity to push further on this work,” Bays said. “One step at a time, we went from just being able to see PFAS at parts-per-million—to levels at parts-per-billion, and finally low parts-per-trillion.”

Ryan and Bays’ technique starts with an adsorbent about the size of a Rice Krispy. The adsorbent is placed in a solution for testing. Three minutes later, it is removed and placed in front of a mass spectrometer where it is sprayed with electrically charged droplets. The droplets remove PFAS from the adsorbent and carry it into the mass spectrometer, where it is analyzed for PFAS concentration and type.

The entire process can take as little as five minutes.

“It’s one of those outcomes that wasn’t exactly planned as we had initially envisioned it,” Ryan said. “It was surprising to see the concentration of PFAS so clearly. That may be why it hadn’t been done before. It was just unexpected.”

The pair has published details of the process in hopes it can be commercialized for widespread use. They also hope it can be developed to tackle other environmental pollutants besides PFAS and used for environmental analytics and testing, such as off-gassing measurements tied to Sandia’s nuclear deterrence work.

“It could help researchers understand the system’s environment and the off-gassing of chemicals in certain work,” Ryan said. “While our first phase worked with liquid, our more recent work has delved into the gas phase.”

Why they do it

Both Ryan and Nathan are passionate about this technology and PFAS remediation. Developing the new test is just a small part of the broader work they do aimed at reducing PFAS pollution.

“I’ve been working on this specific project since I joined Sandia two and a half years ago,” Nathan said. “My whole career has evolved around environmental remediation, so this was a natural fit. I’m a big outdoors person. My wife and I like to go out in nature, and we don’t like to see our world be polluted like this.”

One of the biggest focuses of PFAS remediation has been at U.S. Air Force bases, where soil and groundwater have been impacted by the long-term use of firefighting foam.

Ryan’s big goal, however, is to give people more power over their health. “More and more research shows that PFAS can have negative outcomes at even low concentrations, so detecting at those low concentrations is key,” Ryan said. “We don’t want families to worry about whether they can afford groceries this week or test their water for safety.”




Gevo Licenses Catalyst Technologies for Jet Fuel Production


BYLINE: Tina M. Johnson

Newswise — Gevo, an advanced biofuels company based in Colorado, has licensed two patented catalyst technologies from the U.S. Department of Energy’s (DOE) Oak Ridge National Laboratory (ORNL) for use in the production of sustainable aviation fuel (SAF).

“This partnership will streamline the transition of ORNL’s catalyst technologies from lab scale to pilot-scale reactors,” said Andrew Sutton, senior scientist in the Manufacturing Science Division at ORNL. “By demonstrating industrial viability, our goal is to accelerate the commercialization of this technology in the U.S., boosting global competitiveness and domestic production of aviation fuel.”

SAF is an alternative fuel made from plant- or waste-based feedstocks. The International Air Transport Association, representing more than 80% of global air traffic, is interested in SAF. Many air carriers have agreed to buy the fuel at scale, but production efficiencies remain an issue.

To meet the challenge, researchers at ORNL developed catalysts that enable a single-step conversion of ethanol to olefins (ETO), which can then be used to produce SAF. A catalyst accelerates chemical reactions and enhances the efficiency of the fuel production process.

In addition to SAF, olefins serve as key building blocks for a wide range of products, including plastics, solvents and surfactants. The global plastics market is poised for continued growth, with forecasts predicting a market worth more than $1.3 trillion by 2033.

Ethanol, commonly derived from agricultural or cellulosic feedstocks, often serves as the basis for SAF production through its conversion to olefins — key intermediates that simplify and reduce the cost of large-scale fuel manufacturing. Building on this foundation, ORNL’s novel conversion process not only achieves high carbon efficiency but does so at equal or lower cost compared with conventional methods.

Through the DOE Technology Commercialization Fund, the partnership was awarded support for a three-year cooperative research and development agreement (CRADA) to advance this technology for pilot-scale operation and industrial commercialization. Gevo will guide the overall process model and provide industry know-how for successful implementation in the company’s pilot reactor.

“Gevo’s collaboration with Oak Ridge National Laboratory focuses on evaluating a novel catalytic process that converts ethanol into valuable fuel precursors and alternative chemicals like butadiene,” said Andrew Ingram, Gevo’s director of process chemistry and catalysis. “This work complements our broader ethanol conversion portfolio but is distinct from both our commercial deployment of Axens’ alcohol-to-jet process and our next-generation ETO platform. If the economics prove out, this pathway could provide a flexible, cost-effective option to scale U.S. bio-based solutions, driven by American innovation that creates new markets and demand for farmers producing feedstocks for energy and materials.”

ORNL provides extensive scale-up expertise, employing advanced characterization capabilities at the Center for Nanophase Materials Sciences, which were used to provide deeper insight into catalytic processes in larger chemical reactors.

Under the CRADA, ORNL will develop catalyst pellets and test their performance in an advanced chemical reactor. Researchers will develop a computational model based on the testing data generated that can accurately predict how the process will behave at scale to clear the way for industrial use. 

Global demand for jet fuel is expected to increase from 106 billion gallons in 2019 to 230 billion gallons by 2050. Expanding SAF use could help the aviation industry meet this demand while advancing U.S. energy independence and security.

This project was supported by DOE’s Alternative Fuels and Feedstocks Office, formerly known as the Bioenergy Technologies Office, through the Chemical Catalysis for Bioenergy (ChemCatBio), a multi-laboratory consortium focused on accelerating the development of catalytic technologies that convert biomass and waste resources into bio-based fuels and chemicals. Initial program funding was provided by ORNL Laboratory Directed Research and Development and Technology Innovation programs.

In addition to Sutton, Stephen Purdy, Meijun Li, Michael Cordon and Hunter Jacobs are currently contributing to the CRADA project. Inventors of the patented technologies include ORNL’s Li and Brian Davison, former ORNL researcher Zhenglong Li and the University of Maryland’s Junyan Zhang. Jennifer Caldwell within Technology Transfer at ORNL negotiated the terms of the licensing agreement.

UT-Battelle manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. The Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit https://energy.gov/science. — Tina M. Johnson




Broadcom agrees to expanded chip deals with Google, Anthropic



Broadcom said Monday that it’s agreed to produce future versions of artificial intelligence chips for Google, and signed an expanded deal with Anthropic that will give the AI startup access to about 3.5 gigawatts worth of computing capacity drawing on Google’s AI processors.

Shares of Broadcom rose 3% in extended trading.

The disclosure in a securities filing underscores the surging demand for infrastructure that can run generative AI models. Anthropic’s popularity has soared this year, with its Claude app becoming the top free U.S. app listed in Apple’s App Store in February after a dispute between the company and the Pentagon became public.

On an earnings call last month, Broadcom CEO Hock Tan said that “for Anthropic, we are off to a very good start in 2026” in providing 1 gigawatt of compute from Google’s homegrown tensor processing units (TPUs). Broadcom helps Google make its TPUs.

“For 2027, this demand is expected to surge in excess of 3 gigawatts of compute,” he said.

In a note following the earnings call, analysts at Mizuho led by Vijay Rakesh estimated that Broadcom would pick up $21 billion in AI revenue from Anthropic in 2026 and $42 billion in 2027. The filing on Monday did not contain a dollar amount.

Meanwhile, Broadcom is also collaborating with Anthropic rival OpenAI on custom silicon for AI. Both model builders currently rely heavily on graphics processing units from Nvidia through cloud providers such as Amazon, Google and Microsoft. OpenAI has also committed to drawing on six gigawatts of AMD’s GPUs, with the first gigawatt set to come in the second half of this year.



AI data center boom ‘stress tests’ insurers as private capital floods in


AI data centers are becoming a “stress test” for insurers as rapid technological advancements and the use of increasingly complex financial structures present a unique set of challenges and opportunities for the sector.

Global spending on data centers could reach $7 trillion by 2030, according to McKinsey, and much of that spending can no longer come solely from hyperscalers. Instead, Big Tech is increasingly tapping private equity, private credit and using debt to finance the capital-intensive build-out of the facilities.

Private infrastructure data center deals were consistently above the $10 billion mark last year, according to data from Preqin. The largest deal amounted to $40 billion, with Nvidia, Microsoft, BlackRock and Elon Musk’s xAI forming part of a consortium of investors to buy Aligned Data Centers.

The fact that so much money is tied up in building and running data centers has been a “real stress test” over the last four to five years for the major insurance companies, Tom Harper, data center leader at insurance broker Gallagher, told CNBC.

“When you put $10 to $20 billion plus in a single location, it creates capacity issues in the marketplace. The marketplace has always had an appetite for these risks because they are such high-quality builds. They’ve got cutting-edge technology, they’re AA plus plus construction locations, but the capacity — the ability to provide the insurance capacity at these locations — has been tough.”

It was nearly impossible to reasonably insure a $20 billion campus in 2023, according to Harper. In 2026, however, it’s become a weekly conversation.


Estimated spending on AI data centers has been referred to as the biggest peacetime investment project in history. Rajat Rana, partner at Quinn Emanuel Urquhart & Sullivan, told CNBC he would take it a step further and stress that this is the “largest peacetime investment project in human history, which is financed largely off balance sheet.”

Rana, who worked on structured finance litigation in the wake of the housing crisis that triggered the 2008 financial crash, said tracking developments in AI data center financing feels like “deja vu.”

“We’re talking about trillions of dollars, and almost going back to the same cycle where there’s almost no transparency about the financing structures — the scale is astronomical,” he said.

The AI boom is not only driving a rush in demand for the facilities, it’s also spurring rapid advancements in power generation and chips — the critical tech that the data centers house. The advancements and huge sums of money flowing into the sector pose both risks and rewards for insurers and lenders.

Bespoke policies

Professional services firm Marsh launched a dedicated digital infrastructure advisory group designed to help clients as contracts become increasingly complex.

Last year, Marsh also launched Nimbus, a 1-billion-euro ($1.2 billion) insurance facility for covering the construction of data centers in the U.K. and Europe. Seven months later, it expanded the facility to offer limits of up to $2.7 billion.

“Private credit can meaningfully complement banks and can support non‑hyperscale contracted offtakes,” said Alex Wolfson, senior vice president of credit specialties at Marsh Risk.

As data center loans increase, insurers, who protect lenders if a borrower doesn’t pay, are starting to hit limits, Wolfson explained. Marsh is working on solutions to support lenders.

However, Quinn Emanuel’s Rana cautioned that when it comes to data centers, it’s not easy for insurance companies to fully understand the risk as financing moves off the balance sheet.

He noted that in January, four U.S. senators called on the government to investigate how Big Tech is increasingly turning to “complex and opaque debt markets to borrow staggering sums of cash.” In an open letter, the senators warned that massive debt loads could cause “destabilizing losses” for financial institutions, triggering a broader financial crisis that harms the economy.

That increased opacity in financing can lead to second-order litigation risks for downstream investors such as pension funds, insurers and asset managers invested in private credit funds who later learn they were not fully aware of concentration risk, Rana said in a note published in March.

He told CNBC that some PE funds have reached out to him with concerns about commercial leases and the valuation of properties.

Tenants are trying to negotiate lease extensions on their properties, while landlords dispute the valuations as they seek higher prices for AI data centers.

“I’m not a doomsday guy who’s saying, hey, it’s gonna crash. My point is, whether it crashes or not, the disputes are inevitable, and we have already seen those disputes,” Rana said.

‘GPU debt treadmill’

A key debate around potential cracks in financing centers on GPUs and the risk that their lifecycles may not align with the longer lifespan of the facilities that house them.

CoreWeave, which sells AI tech in the cloud, is the first company to secure GPU-backed loans, essentially using the value of the high-performance chips as collateral. Last week, the company announced it secured $8.5 billion in a first investment-grade rated GPU-backed deal. Its stock jumped 12% on the day.

While data centers typically have a decades-long lifecycle, the average lifecycle of a GPU is around seven years.
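
A hypothetical sketch makes the mismatch concrete; none of the figures below come from an actual deal. With straight-line depreciation over a seven-year GPU life, the collateral's book value reaches zero before a longer loan matures.

```python
# Hypothetical numbers for illustration only; not terms of any actual financing.
gpu_cost = 30_000.0        # assumed purchase price per GPU, USD
useful_life_years = 7      # average GPU lifecycle cited in the article
loan_tenor_years = 10      # assumed loan tenor, longer than the asset's life

for year in range(1, loan_tenor_years + 1):
    book_value = max(gpu_cost * (1 - year / useful_life_years), 0.0)
    print(f"year {year}: collateral book value ${book_value:,.0f}")
# From year 7 onward the collateral is fully written down while debt remains outstanding.
```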

“There are different data centers that are raising debt by disclosing different life cycles to investors,” said Rana. He referred to the problem as the “GPU debt treadmill,” a phrase coined by AI commentator Dave Friedman.

“This is almost like a treadmill that these AI data centers are running on,” Rana told CNBC. Even if the financing structure is ring-fenced and backed by an investment-grade counterparty, the real risk may lie in whether an equity issue today later evolves into a credit problem over time.


“As these new chips come in, the data centers will feel pressured to raise more debt, and then they will have to build new infrastructure, and then that basically creates a billion-dollar question: how fast can you build these facilities? How fast can you raise credit?”

The cost of funding these projects is likely to continue to fuel recent growth in asset-backed securitization deals, says Harper, with greater volumes of commercial mortgage-backed securities sold to investors.

For some firms, like Gallagher, the changing dynamics in the sector are opportunities rather than challenges. Harper said the lifecycles of GPUs have been increasing. Where assets have depreciated quickly, Gallagher has had to get creative and write bespoke insurance policies with a predetermined agreement on how to value them.

“It would be a nightmare with the size and scope of these [facilities] to determine [the value of] each individual unit,” he said.

Harper also stressed that GPUs are interchangeable. The firm has seen operators anticipate relatively short life cycles and construct facilities that are more modular in response.

“There is a core tension in data center project finance: lenders typically want asset lives that exceed loan tenors by a comfortable margin, and the shorter useful life of GPUs challenges that assumption,” said Marsh Risk’s Wolfson.

Lenders are therefore structuring loans more cautiously to protect themselves.



Polymarket removes wagers on U.S. service member rescue mission in Iran


Polymarket removed a market related to the rescue mission of U.S. military service members amid political pressure, the latest sign of mounting scrutiny around prediction markets.

U.S. and Iranian military forces are searching for a missing American airman after a U.S. F-15E fighter jet was shot down over Iran on Friday. One crew member has been rescued, but another is not accounted for.

Rep. Seth Moulton, D-Mass., decried the Polymarket page that allowed users to bet on which day the U.S. would confirm the rescue of the two airmen after an American F-15E fighter jet was shot down over Iran. The lawmaker called the page “DISGUSTING” in an X post.

“They could be your neighbor, a friend, a family member,” Moulton wrote on Friday. “And people are betting on whether or not they’ll be saved.”

In a response on X, Polymarket said: “We took this market down immediately as it does not meet our integrity standards.”

“It should not have been posted, and we are investigating how this slipped through our internal safeguards,” Polymarket wrote.

In a separate X post, Polymarket said it doesn’t “make money or charge any fees on any geopolitical markets.”

In an email to CNBC, Moulton said, “Polymarket didn’t take that market down because it violated their standards. They took it down because we called them out.”

Moulton also said that the Commodity Futures Trading Commission has the authority to regulate prediction market platforms, but it is doing nothing.

“That needs to change, too,” he said. “Yesterday, there were 219 active bets in Polymarket’s ‘war’ category. Today, there are 223. This is spreading, and Congress needs to act.”

Moulton last month banned his staff from using prediction market platforms like Polymarket or Kalshi, a policy that his office believes is the first of its kind in Congress.

“Constituents that we serve should trust us to make decisions based on the right thing to do for our nation, not based on how bets might turn out,” Moulton said Monday on CNBC’s “Squawk Box.”

Moulton also said on X that Donald Trump Jr., the son of President Donald Trump, “is an investor in this dystopian death market and may have access to intelligence that isn’t public yet.”

Trump Jr. did not immediately respond to CNBC’s request for comment.

The Massachusetts lawmaker is part of a growing chorus of voices in Washington calling for stronger oversight of these betting platforms as interest swells.

A group of congressional Democrats introduced legislation late last month that would bar prediction markets from allowing wagers on elections, war and government actions, in addition to sports.

In February, six Democratic senators urged the Commodity Futures Trading Commission to clarify that it will prohibit any contracts related to an individual’s death. These contracts “present dangerous national security risks,” the lawmakers wrote.

The CFTC on Thursday announced lawsuits against three states over what it saw as efforts to circumvent the organization’s sole regulatory authority over prediction markets.

The NFL has also asked prediction market operators to keep specific event contracts that the league deems “objectionable bets” off their platforms. The league outlined examples of event contracts that could be easily manipulated, inherently objectionable, related to officiating, and knowable in advance — and asked that operators refrain from offering such trades.

— CNBC’s Dan Mangan, Azhar Sukri and Luke Fountain contributed to this report.

Disclosure: CNBC and Kalshi have a commercial relationship that includes customer acquisition and a minority investment.


Can Practical Superconductors Work Without Extreme Cooling?


Newswise — Scientists discovered how tiny changes in superhydride structure enable superconductivity at near room temperatures but extreme pressure — offering clues for designing more practical superconductors.

Superconductors allow electricity to flow without resistance, meaning no energy is lost as heat. This property makes them useful for technologies such as MRI scanners, particle accelerators, magnetic-levitation trains and some power-transmission systems. Most superconductors, however, only work at extremely low temperatures — often hundreds of degrees below zero Fahrenheit. Keeping materials that cold requires complex and costly cooling systems, which limits where the superconductors can be used.

Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have helped take a step toward easing that limitation. They have gained new insight into a class of materials called superhydrides that can become superconducting at much higher temperatures — around 10 degrees Fahrenheit.

The research was carried out with collaborators from the University of Illinois Chicago (UIC), the University of Chicago and DOE’s Lawrence Livermore National Laboratory. A key tool was the upgraded Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne.

Superhydrides are made mostly of hydrogen, held together in an ordered structure (crystal) by a small number of metal atoms. When subjected to extremely high pressures, these materials can carry electric current with no resistance. In a landmark 2018 study, researchers led by UIC professor Russell Hemley showed that a lanthanum-based superhydride could superconduct near 8 degrees Fahrenheit. The drawback was that it only worked at pressures equivalent to those deep within the Earth (1.88 million atmospheres), making it impractical outside the lab.

In the new study, Hemley and his fellow researchers explored whether changing the material’s chemistry could lower the pressure needed for superconductivity. They added a small amount of yttrium to the lanthanum superhydride to make it more stable and reduce the pressure required.

“To reach these extreme pressures, we squeezed a tiny sample between two diamonds,” said Maddury Somayazulu, a physicist at the APS. The team’s diamond-anvil device can generate pressures as high as five million atmospheres.

After forming the superconducting material at high pressure and temperature, the team used high-energy X-rays from the APS to study its structure (at beamlines 16-ID-B and 13-ID-D). ​“We focused an intense X-ray beam onto a sample only a few micrometers thick and about ten to twenty micrometers across,” said Vitali Prakapenka, a beamline scientist and research professor at the University of Chicago. One micrometer is about 1/70th the width of a human hair.

The recent APS upgrade made these measurements possible. Its brighter, more tightly focused X-ray beam allowed researchers to study extremely small samples while changing the pressure. ​“That beam allowed us to separate signals coming from the tiny sample itself as opposed to those coming from the surrounding materials and diamond anvils,” Prakapenka said.

The team found that small differences in how atoms are arranged in a crystalline lattice can strongly affect superconductivity. They identified two different crystal structures, each becoming superconducting at a slightly different temperature.

“These experiments show what the upgraded APS can do,” Somayazulu said. ​“We can now study atomic-level structures with unprecedented detail in materials under extreme pressure.”

Although the pressures used in the experiments are still very high — about 1.4 million times atmospheric pressure — the researchers see this as part of a longer path forward. They are adding more elements to lower the pressure further with the goal of making these materials practical.

Diamonds provide a useful comparison, Somayazulu explained. Natural diamonds form deep inside the Earth under extreme pressure and temperature. Scientists later learned how to synthesize them in the lab, and eventually how to produce them without such intense conditions. Researchers believe superhydrides could follow a similar path.

“If we understand the physics well enough, we may be able to stabilize these structures at much lower pressures but still attain superconductivity close to room temperature,” Prakapenka said.

Experimental data from the APS will help guide theoretical models and AI tools in that search for new materials. Instead of testing only a few combinations at hard-to-reach extreme conditions, scientists can use AI to explore many possible multi-element compositions. They can then focus experiments on the most promising ones.

“The calculations are very demanding,” Prakapenka said. ​“Theorists rely on high-quality experimental data to make their predictions more accurate.”

Finding a material that superconducts at near room temperature and normal pressure could reshape the nation’s electrical infrastructure.

The research was supported by the DOE Office of Basic Energy Sciences, DOE National Nuclear Security Administration and the National Science Foundation. Contributors include Maddury Somayazulu, Russell Hemley, Vitali Prakapenka, Abdul Haseeb Manayil-Marathamkottil, Kui Wang, Nilesh Salke, Muhtar Ahart, Alexander Mark, Rostislav Hrubiak, Stella Chariton, Dean Smith and Nenad Velisavljevic.

This article was adapted from the UIC release.

About the Advanced Photon Source

The U. S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology by conducting leading-edge basic and applied research in virtually every scientific discipline. Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.




New Nanofluidic Holder Lets Scientists Heat, Cool, Electrify, and Watch Reactions in Real Time


Newswise — Micro- and nanofluidic systems are increasingly important in biology, medicine, chemistry, and materials science because they allow researchers to study reactions, transport, and molecular behavior in spaces that approach the dimensions of living capillaries or engineered nanosystems. Yet as chips become more integrated and more powerful, a bottleneck has emerged: the surrounding interface hardware often cannot match the chip’s sophistication. Researchers need systems that can simultaneously deliver multiple liquids, maintain stable seals, control heat and cooling, impose electric fields, and support in situ optical observation. Based on these challenges, deeper research was needed into multifunctional chip interfaces for highly integrated nanofluidic systems.

On January 19, 2026, a team from the Department of Physics at Chalmers University of Technology in Sweden reported (DOI: 10.1038/s41378-025-01125-9) in Microsystems & Nanoengineering a temperature-controlled nanofluidic chip holder with integrated electrodes for real-time optical analysis. The system was designed for 1 cm² silicon-based chips with up to 12 fluidic connection points. By combining heating, cooling, electrical control, and nanofluidic scattering spectroscopy in one platform, the researchers created a versatile interface for studying nanoscale transport and reaction processes directly on-chip.

The holder pairs a transparent acrylic channel plate with a thermally connected chip stage and four Peltier elements, allowing both heating and cooling while keeping the chip accessible to dark-field microscopy and spectroscopy. It can host miniature chips only 10 mm wide, yet each chip supports up to 12 independently addressable inlets or outlets, and 52 such chips can be produced from a single 4-inch wafer. In performance tests, the platform maintained stable cooling down to 12 °C at an optimized current and reached 112 °C in heating mode; under short high-current operation, the chip briefly dropped as low as 4 °C. The team then used Brilliant Blue and Fluorescein as model molecules to demonstrate three functions: on-chip solution switching and mixing, temperature-dependent diffusion inside a single nanochannel, and electrically modulated diffusion. Higher temperatures accelerated Fluorescein transport, while stronger applied voltages suppressed or slowed entry into the channel. At higher fields, the optical spectra also shifted toward longer wavelengths, suggesting field-induced changes in the dye’s electronic behavior.
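
The reported speed-up of Fluorescein transport at higher temperature is what a first-order diffusion model predicts. The sketch below uses the standard Stokes–Einstein relation, which is not cited in the release, with an assumed hydrodynamic radius and a common empirical fit for water viscosity.

```python
# First-order illustration via Stokes-Einstein: D = kT / (6*pi*eta*r).
# The hydrodynamic radius is assumed; the viscosity fit is a standard empirical formula.
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
RADIUS_M = 0.5e-9         # assumed hydrodynamic radius of a small dye molecule, m

def water_viscosity_pa_s(t_celsius):
    # Empirical Vogel-type fit for liquid water viscosity in Pa*s.
    return 2.414e-5 * 10 ** (247.8 / (t_celsius + 273.15 - 140.0))

def diffusion_coefficient(t_celsius, radius_m=RADIUS_M):
    t_kelvin = t_celsius + 273.15
    return K_B * t_kelvin / (6 * math.pi * water_viscosity_pa_s(t_celsius) * radius_m)

for t in (12, 25, 60):    # temperatures within the holder's demonstrated range
    print(f"{t:>2} C: D ~ {diffusion_coefficient(t):.2e} m^2/s")
```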

“This work addresses a practical but often overlooked problem in nanofluidics: not just how to fabricate advanced chips, but how to operate them with precision once they are made. By integrating temperature control, electrical actuation, pressure handling, and optical readout into a single compact holder, the study turns the chip interface itself into an enabling technology. That matters because many important nanoscale processes—from molecular transport to catalytic reactions—depend on tightly controlled conditions that must be adjusted and observed in real time.”

The new platform could expand the experimental reach of nanofluidics across several fields. In chemistry, it may support studies of nanoscale mixing, diffusion, and catalytic reactions under controlled thermal and electrical conditions. In biology and biophysics, it could help researchers examine processes such as protein aggregation, folding, or transport in confined environments. Because the design is compact, modular, and compatible with optical readout, it also offers a practical route toward more scalable lab-on-a-chip and organ-on-a-chip research tools. More broadly, the work highlights that the future of highly integrated fluidics will depend not only on smarter chips, but also on smarter interfaces that make those chips truly usable.

###

References

DOI: 10.1038/s41378-025-01125-9

Original source URL: https://doi.org/10.1038/s41378-025-01125-9

Funding information: Open access funding provided by Chalmers University of Technology.

About Microsystems & Nanoengineering

Microsystems & Nanoengineering is an online-only, open access international journal devoted to publishing original research results and reviews on all aspects of Micro and Nano Electro Mechanical Systems from fundamental to applied research. The journal is published by Springer Nature in partnership with the Aerospace Information Research Institute, Chinese Academy of Sciences, supported by the State Key Laboratory of Transducer Technology.