The Tech Download: Agentic tools and chips take center stage at Nvidia’s ‘Super Bowl of AI’


This report is from this week’s The Tech Download newsletter. Like what you see? You can subscribe here.

Nvidia’s yearly showcase event — dubbed the ‘Super Bowl of AI’ by some — kicked off at the start of the week to much fanfare across the tech sector. The event sees tens of thousands of attendees gather in California to get the latest on the world’s most valuable company’s plans for the future.

Didn’t manage to snag a ticket? No problem. I caught up with CNBC’s Katie Tarasov, who was on the ground at the event, to get a sense of what went down.

Jensen Huang, chief executive officer of Nvidia Corp., speaks during a news conference at the Nvidia GTC conference in San Jose, California, US, on Tuesday, March 17, 2026.

David Paul Morris | Bloomberg | Getty Images

Kai: What were the key announcements this year?

Katie: I’m always watching for the biggest hardware announcements because it’s Nvidia chips that are filling AI data centers and powering almost every major company’s AI ambitions. We saw two big new chip announcements during CEO Jensen Huang’s keynote on Monday.

First was an entirely new type of chip called a Language Processing Unit, or LPU. It’s the first chip Nvidia has unveiled using technology it acquired from chip startup Groq in December. That $20 billion deal was Nvidia’s biggest purchase ever. While Nvidia’s star graphics processing units have thousands of cores that perform many operations simultaneously, the Groq 3 LPU is built around a single core designed to speed up inference alongside those GPUs.

The other big chip announcement was the unveiling of a rack filled entirely with Nvidia’s newest Vera central processing units, or CPUs. I wrote a piece last week explaining how the CPU is having a renaissance as Nvidia sees it as a coming bottleneck for agentic AI, which requires more data transfer and general-purpose compute typically handled by the CPU.

And one software mention that stood out: Nvidia announced NemoClaw, an enterprise-level version of OpenClaw that layers Nvidia’s software stack on top of the autonomous AI agent platform.

Kai: What did we learn about Nvidia’s plans for the future from the conference?

Katie: Overall, I think we’re seeing Nvidia shift its strategy to align with changing compute needs as agentic AI takes off. Instead of putting all its eggs in the GPU basket, Nvidia’s taking a more “soup-to-nuts” strategy, as Creative Strategies analyst Ben Bajarin put it to me.

But Huang also showed a sneak peek of what’s next for its most buzz-worthy line: the Kyber rack-scale architecture. It will integrate 144 GPUs in compute trays that sit vertically instead of horizontally to boost density and lower latency. The Kyber design will be available in Vera Rubin Ultra, Nvidia’s next rack-scale system, expected to ship in 2027.

Latest updates

Stock of the week

Chipmaker Micron’s stock movement since Wednesday, March 18.

Micron revenue almost triples, tops estimates as demand for memory soars


Micron CEO Sanjay Mehrotra speaks at a groundbreaking ceremony for the company’s semiconductor manufacturing facility in Clay, New York, on Jan. 16, 2026.

Heather Ainsworth | Bloomberg | Getty Images

Micron’s revenue almost tripled in the latest quarter as results topped analysts’ estimates and guidance sailed past expectations. The stock, which is up more than 350% in the past year, slipped in extended trading.

Here’s how the company did relative to LSEG consensus:

  • Earnings per share: $12.20 adjusted vs. $9.31 expected
  • Revenue: $23.86 billion vs. $20.07 billion expected

Micron is benefiting from soaring demand for Nvidia graphics processing units that run generative artificial intelligence models. Each generation of Nvidia chip packs in more memory, creating a supply crunch. Micron has been working to add capacity, as have competitors Samsung and SK Hynix.

Revenue in the fiscal second quarter increased from $8.05 billion a year earlier, according to a statement.

For the current period, the company expects about $33.5 billion in revenue, up from $9.3 billion a year ago, implying growth of over 200%. Adjusted earnings per share will be about $19.15, Micron said. Analysts polled by LSEG had expected $12.05 in adjusted earnings per share on $24.3 billion in revenue.

“The step-up in our results and outlook are the outcome of an increase in memory demand driven by AI, structural supply constraints and Micron’s strong execution across the board,” CEO Sanjay Mehrotra said in prepared remarks the company issued at the time of the release.

Micron’s stock has been on a tear. The shares tripled in 2025 and have jumped another 62% year to date as of Wednesday’s close. Among the 10 most valuable U.S. tech companies, Micron is the only one that’s up. Oracle is the leading decliner, down 22%, and Microsoft and Tesla have also seen double-digit percentage drops.

“Looking at how the shares were trading going into this earnings report, I thought the biggest risk was high investor expectations,” said Hendi Susanto, a portfolio manager at Gabelli Funds, in an email. “However, fiscal third-quarter guidance is strong, well above analysts’ and my own expectations.”

Mehrotra said that AI and conventional servers are facing a “lack of adequate DRAM and NAND supply.” That refers to the company’s traditional memory products that have long been used in data centers and devices.

Memory companies have been shifting production capacity largely to high-bandwidth memory, which is embedded onto Nvidia’s latest GPUs and many other chips powering AI. Those products have higher margins.

The company’s GAAP gross margin, the profit left after accounting for the cost of goods sold, more than doubled in the past year to 74.4% from 36.8%, and increased from 56% in the prior quarter.
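For readers unfamiliar with the metric, the margin figures above follow directly from revenue and cost of goods sold. A minimal sketch, using Micron’s reported $23.86 billion in revenue and a cost-of-goods-sold figure implied by the 74.4% margin (not disclosed in this report):

```python
def gross_margin(revenue: float, cost_of_goods_sold: float) -> float:
    """Fraction of revenue left after subtracting cost of goods sold."""
    return (revenue - cost_of_goods_sold) / revenue

# Illustrative only: the ~$6.1B cost figure is back-calculated from the
# reported 74.4% margin, not taken from Micron's statement.
revenue = 23.86
implied_cogs = revenue * (1 - 0.744)
print(round(gross_margin(revenue, implied_cogs), 3))  # 0.744
```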

Net income climbed to $13.8 billion, or $12.07 per share, from $1.58 billion, or $1.41 per share, in the same quarter last year.

Micron said revenue in its cloud memory business rose more than 160% to $7.75 billion. The mobile and client unit saw even steeper growth, with revenue jumping to $7.71 billion from $2.24 billion a year ago.

Memory is typically a commodity business, which comes with lower margins than other silicon products and short-term contracts. In the past few months, memory companies have signed longer-term contracts as semiconductor makers work to ensure future capacity.

“As AI evolves, we expect compute architectures to become more memory-intensive,” the company said in an earnings presentation. “This is why we strongly believe that Micron is one of the biggest beneficiaries and enablers of AI.”

Mehrotra said on the earnings call that volume production of HBM4 for Nvidia’s Vera Rubin started in the fiscal first quarter, and next-generation HBM4e products will ramp in 2027. Nvidia has said it will utilize custom HBM in its next-generation Feynman GPU coming in 2028.

Mehrotra added that capital expenditures will “step up meaningfully” in fiscal 2027, with construction-related costs increasing by over $10 billion.

Micron is building two giant new campuses of fabrication plants in Idaho and New York to increase its memory manufacturing capacity in the U.S. Mehrotra said on the call that initial production at the Idaho site is expected by mid-2027. Micron broke ground in January on the massive $100 billion New York campus, and expects wafer output by the second half of 2028.

WATCH: How Micron is building the biggest-ever U.S. chip fab, despite China ban



CNBC Daily Open: Risk-off trade back on for oil


Hello, this is Leonie Kidd writing to you from London. Welcome to another edition of CNBC’s Daily Open.

U.S. President Donald Trump continues to dominate the news cycle, and his latest round with reporters in the Oval Office has yielded more headlines and market moves this morning. It’s only Tuesday and already it’s been a volatile week for oil, which remains the epicenter of trading action.

Market participants — as well as us journalists — will need to stay on their toes to keep up with developments.

What you need to know today

Oil prices jumped over 2% on Tuesday as uncertainty lingered over a U.S.-led coalition to protect shipping through the Strait of Hormuz. President Donald Trump suggested Monday that the coalition was not fully in place as he urged other countries to get involved.

He voiced his frustrations by saying “some are very enthusiastic, and some are less than enthusiastic … and I assume some will not do it.”

Washington, meanwhile, is looking to postpone a meeting between Trump and Chinese President Xi Jinping amid the conflict with Iran. During a press conference in the Oval Office, Trump said, “There’s no tricks to it either. It’s very simple. We’ve got a war going on. I think it’s important that I be here.”

Back in the Middle East, the United Arab Emirates reopened its airspace on Tuesday after a brief shutdown, as Iran continued missile and drone attacks. The UAE’s Defense Ministry said that air defenses have intercepted more than 300 ballistic missiles and 1,600 drones so far.

The volatility has prompted a rate hike from the Reserve Bank of Australia. The central bank raised its benchmark policy rate for a second consecutive time, citing concerns over the inflation risk posed by the war in Iran.

In stock markets, Asia-Pacific equities rose Tuesday as auto and tech stocks gained after Nvidia announced a robust revenue forecast for its key chips and partnerships with carmakers from the region. European and U.S. futures are lacking direction in early trade.

— Leonie Kidd

And finally…



AI chipmaker Cerebras name-dropped by Oracle, alongside Nvidia and AMD


As AI chipmaker Cerebras angles for an eventual IPO, the company appears to have landed a significant cloud-computing customer: Oracle.

On a conference call with analysts on Tuesday following Oracle’s quarterly earnings, Clay Magouyrk, one of the software vendor’s two CEOs, indicated that his company’s infrastructure includes Cerebras chips, alongside graphics processing units (GPUs) from market leader Nvidia and rival Advanced Micro Devices.

“We build infrastructure which is flexible, fungible, and can support the smallest workloads up to the largest,” Magouyrk said. “We continually offer the latest in accelerators, from the most recent Nvidia and AMD options to emerging designs from companies like Cerebras and Positron,” another AI hardware startup.

Cerebras offers cloud services that employ its large-scale WSE-3 chips. The company filed paperwork for an IPO in 2024 but withdrew the filing last October. Days later, it announced a $1.1 billion funding round at a valuation of $8.1 billion, and CEO Andrew Feldman said Cerebras still intends to go public.

For prospective investors, one of the most glaring concerns from Cerebras’ original prospectus was its reliance on a single customer based in the Middle East. G42, backed by Microsoft, is headquartered in Abu Dhabi, United Arab Emirates, and in the first half of 2024, it accounted for 87% of Cerebras’ revenue.

Bolstering its client roster with a name like Oracle could be a big boon for Cerebras, and it would follow another significant announcement earlier this year. In January, Cerebras said it had received a $10 billion commitment from OpenAI, which relies on Oracle and other companies for cloud services. The next month, OpenAI said it was collaborating with Cerebras on a research preview of Codex-Spark, a fast-acting AI model geared toward software development, for ChatGPT Pro customers.

Oracle didn’t immediately respond to a request for comment, and its price list does not mention a Cerebras option. Cerebras didn’t immediately provide a comment.

Oracle’s earnings call came after the company reported better-than-expected results, lifted its fiscal 2027 guidance and said remaining performance obligations more than quadrupled to $553 billion from a year earlier.

“Altogether, we are confident that the investments we make now in data centers, compute capacity and customer relationships will only grow more valuable over time,” Magouyrk said, after naming Cerebras and other chipmakers.

While Cerebras is trying to compete as an upstart against the world’s most valuable company, it’s playing in a market with seemingly insatiable demand for computing power as AI model developers scale to quickly respond to the needs of users.

Nvidia is using its mammoth cash pile to expand into new product areas. In December, the company bought key assets from AI chip startup Groq for about $20 billion. Nvidia plans to announce a new architecture drawing on Groq at its GTC developer conference in California next week, The Wall Street Journal reported.

Magouyrk said on the call that GTC will feature some “key announcements.” He also said that speed in responding to incoming requests requires innovative technology in addition to strategically located data centers.

“It’s the type of hardware that’s being deployed, and that’s why you’re seeing so much innovation going on around these AI accelerators,” he said. “If you look at what Groq does, or Cerebras or Positron, all of these different types of customers are saying, well, not only how do we reduce the cost of inferencing, but also, how can we significantly reduce the latency of it?”

WATCH: OpenAI unveils first AI model running on Cerebras chips


Broadcom CEO Hock Tan sees AI chip revenue ‘significantly’ above $100 billion next year


Broadcom CEO Hock Tan.

Lucas Jackson | Reuters

Broadcom CEO Hock Tan sees the artificial intelligence boom gaining so much steam that he’s projecting AI chip revenue next year “significantly in excess of $100 billion.”

After the chipmaker reported better-than-expected results for the fiscal first quarter and issued a strong forecast for the current period, Tan said on his company’s earnings call that demand is picking up from large customers that are increasingly in need of Broadcom’s help in designing custom silicon.

“We have also secured the supply chain required to achieve this,” Tan said, regarding the 2027 sales target.

AI revenue in the first quarter more than doubled from a year earlier to $8.4 billion, while total sales increased 29% to $19.3 billion. The company expects AI semiconductor revenue of $10.7 billion this quarter.

Broadcom shares popped more than 5% in extended trading on Thursday after Tan’s comments.

Chip companies like Broadcom have faced a number of headwinds in recent months, including a shortage of the high bandwidth memory crucial for custom accelerators, and capacity constraints at the most advanced levels of chip manufacturing and packaging.

Broadcom helps its customers translate their chip designs into silicon, providing back-end support before the processors are sent off to be manufactured at huge fabrication plants by companies like Taiwan Semiconductor Manufacturing Company.

It’s a role that’s fueled Broadcom’s growth as more tech giants design in-house accelerators for AI. Tan said custom AI deployment is entering its “next phase” and is expected to speed up, as the company helps six key customers design their chips. Chief among those are Google, Meta, Anthropic and OpenAI, with Fujitsu and ByteDance likely as the final two.

Google was the first to the in-house chip game in 2015, with its tensor processing units designed alongside Broadcom. Google has made its chips available to cloud customers since 2018, with key customers now including Apple and Anthropic. Broadcom expects even stronger demand from next-generation Google chips in 2027.

Meta is also reportedly in talks to use Google’s TPUs, and Broadcom assists the social media company with developing its own MTIA accelerator. Analysts have cast doubt on the future of Meta’s custom silicon program, but the “MTIA roadmap is alive and well,” Tan said on the earnings call.

During the question and answer portion of the call, Bernstein Research analyst Stacy Rasgon pushed Tan on the specific sources of the projected $100 billion in AI chip revenue. He counted 3 gigawatts of capacity at Anthropic, 3 gigawatts at Google, at least 2 gigawatts with Meta, and 1 gigawatt from OpenAI, among others. Tan said the dollars per gigawatt “vary, sometimes quite dramatically,” but that his estimates were “not far” off.
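As a rough, hypothetical back-of-the-envelope reading of that exchange (the four named customers are only part of the total, since Rasgon’s tally included “others”):

```python
# Capacity figures as counted by Rasgon on the call; "among others" means
# the true total is higher, so the implied per-gigawatt dollar figure below
# is an upper bound against the ~$100B projection.
capacity_gw = {"Anthropic": 3, "Google": 3, "Meta": 2, "OpenAI": 1}
total_gw = sum(capacity_gw.values())    # 9 GW from the named customers alone
implied_per_gw = 100 / total_gw         # ~$11.1B of projected revenue per GW
print(total_gw, round(implied_per_gw, 1))
```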

While Tan said that the AI revenue boost would come from “just chips,” Broadcom makes much more than just AI accelerators. Ben Bajarin of Creative Strategies said it includes digital signal processors, data processing units and networking switches.

It’s “everything in that bucket,” Bajarin said.

— CNBC’s Jordan Novet contributed to this report.

WATCH: Broadcom CEO on AI revenue

Broadcom CEO: AI revenue from chips could exceed $100 billion in 2027


Broadcom beats on earnings and guidance as AI revenue doubles


Broadcom CEO Hock Tan speaks at the digital X event in Cologne, Germany, on September 13, 2022.

Ying Tang | Nurphoto | Getty Images

Broadcom reported better-than-expected earnings and revenue and issued a strong forecast for the current period as the chipmaker continues to benefit from the artificial intelligence boom.

Here’s how the company performed in comparison with LSEG consensus:

  • Earnings per share: $2.05 adjusted vs. $2.03 estimated
  • Revenue: $19.31 billion vs. $19.18 billion estimated

Revenue jumped 29% year over year during the fiscal first quarter, which ended on Feb. 1, according to a statement.

Net income increased to $7.35 billion, or $1.50 per share, from $5.50 billion, or $1.14 per share, in the same quarter a year earlier. Adjusted earnings exclude stock-based compensation and tax adjustments.

For the second quarter, Broadcom said it anticipates a 68% adjusted profit margin, higher than StreetAccount’s 66% consensus. The company said it’s looking for $22 billion in revenue, beating the $20.56 billion average estimate, according to LSEG.

Broadcom helps other companies translate their chip designs into silicon, providing intellectual property and backend technologies before they’re sent off to chip fabrication plants from companies such as Taiwan Semiconductor Manufacturing Company. It’s a role that’s gained importance as Amazon, Google, Meta and Microsoft design customized chips.

AI revenue soared 106% from a year earlier to $8.4 billion, “driven by robust demand for custom AI accelerators and AI networking,” CEO Hock Tan said in the statement.

Tan had called for a doubling of AI revenue in December. He said the company expects AI semiconductor revenue of $10.7 billion this period.

Broadcom reported $12.52 billion in revenue from semiconductor solutions, higher than the $12.25 billion that analysts polled by StreetAccount expected. During the quarter, Broadcom announced new Wi-Fi 8 chips.

For infrastructure software, Broadcom said it generated $6.80 billion in revenue, lower than StreetAccount’s $7.02 billion consensus.

Broadcom said its board authorized up to $10 billion in new share buybacks through 2026.

In December, Tan said Anthropic had placed a $10 billion custom chip order. Last week, U.S. Defense Secretary Pete Hegseth said the Pentagon would dub Anthropic a “supply chain risk to national security,” and President Donald Trump directed government agencies to stop using Anthropic after the AI startup refused to permit use of its technology for mass domestic surveillance or fully autonomous weapons.

As of Wednesday’s close, Broadcom shares were down 8% so far in 2026, while the S&P 500 index was flat.

Executives will discuss the results on a conference call starting at 5 p.m. ET.



Chinese tech companies’ progress ‘remarkable,’ OpenAI’s Altman tells CNBC


The progress of Chinese tech companies across the entire stack is “remarkable,” OpenAI’s Sam Altman told CNBC, pointing to “many fields” including AI.

Altman’s comments come as China races against the U.S. to develop artificial general intelligence (AGI) — where AI matches human capabilities — and roll out the technology across society.

Chinese progress is “amazingly fast,” he said. In some areas Chinese tech companies are near the frontier, while in others they lag behind, Altman added.

India’s Prime Minister Narendra Modi (L) takes a group photo with AI company leaders including OpenAI CEO Sam Altman (C) and Anthropic CEO Dario Amodei (R) at the AI Impact Summit in New Delhi on February 19, 2026.

Ludovic Marin | Afp | Getty Images
