Amid Epstein fallout, Bill Gates becomes point of controversy at India AI summit


Bill Gates speaks during the Gates Foundation’s first global Goalkeepers event in the Nordics, held in Stockholm, Sweden, Jan. 22, 2026.

TT News Agency | Stefan Jerrevang | Via Reuters

Bill Gates has become a source of controversy at this week’s high-profile India AI Impact Summit, as days of speculation around his planned keynote address ended with his last-minute withdrawal.

The drama comes as the Microsoft co-founder receives public backlash for his past relationship with deceased financier and sex predator Jeffrey Epstein — with more details on the two men’s years of communications revealed in the Department of Justice’s file drop last month. 

The Gates Foundation India on Thursday said the billionaire would skip the address “[a]fter careful consideration, and to ensure the focus remains on the AI Summit’s key priorities,” adding that he would be replaced by another foundation representative. 

A spokesperson for Gates told CNBC separately that while he has acknowledged meeting Epstein was a serious error in judgment, he “unequivocally denies any improper conduct related to Epstein and the horrible activities in which Epstein was involved.”

“Mr. Gates never visited Epstein’s island, never attended parties with him, and had no involvement in any illegal activities associated with Epstein,” the spokesperson said.

The official announcement capped a back-and-forth saga that began earlier this week when local Indian media pointed out that Gates’ name had been removed from some of the summit’s public-facing materials. 

Government sources later briefed the media that Gates was not expected to attend the event. However, the Gates Foundation issued a conflicting message on Wednesday, insisting that he was participating “as planned” before the recent reversal. 

Asked about the controversy on Tuesday, India’s IT minister Ashwini Vaishnaw told reporters that Gates’ attendance would come down to “personal choices,” adding he “need not comment.” The summit organizers and the foundation did not immediately respond to a request for comment on Gates’ absence.

The American tech leader turned philanthropist has been under intense scrutiny in recent weeks following the release of millions of documents related to Epstein under the Epstein Files Transparency Act.

The files included a draft email Epstein wrote to himself in which he suggested that he had helped facilitate extramarital affairs and sexual encounters for Gates, among other references to the Microsoft co-founder.

In an interview with Australia’s 9News last month, Gates denied any wrongdoing, calling the allegations in the new files “absolutely absurd and completely false.”

He emphasized that his interactions with Epstein were limited to dinners aimed at potential philanthropy discussions, adding that he “never went to the island” and “never met any women.”

The New Delhi AI Impact Summit, where Gates had been scheduled to speak, has seen participation from leading tech names such as Alphabet CEO Sundar Pichai, OpenAI’s Sam Altman, and Anthropic’s Dario Amodei, besides a host of global leaders including French President Emmanuel Macron and UN Secretary-General António Guterres.

The Gates Foundation has invested in India across health and development, and has also backed projects related to AI.


Chinese tech companies progress ‘remarkable,’ OpenAI’s Altman tells CNBC


The progress of Chinese tech companies across the entire stack is “remarkable,” OpenAI’s Sam Altman told CNBC, pointing to “many fields” including AI.

Altman’s comments come as China races against the U.S. to develop artificial general intelligence (AGI) — where AI matches human capabilities — and roll out the technology across society.

Chinese progress is “amazingly fast,” he said. In some areas Chinese tech companies are near the frontier, while in others they lag behind, Altman added.

India’s Prime Minister Narendra Modi (L) takes a group photo with AI company leaders including OpenAI CEO Sam Altman (C) and Anthropic CEO Dario Amodei (R) at the AI Impact Summit in New Delhi on February 19, 2026.

Ludovic Marin | Afp | Getty Images



Mark Zuckerberg said he reached out to Apple CEO Tim Cook to discuss ‘wellbeing of teens and kids’


Meta CEO Mark Zuckerberg said in a Wednesday court testimony that he reached out to Apple CEO Tim Cook to discuss the “wellbeing of teens and kids.”

The comments came after defense lawyer Paul Schmidt pointed to an email exchange between Zuckerberg and Cook from February 2018. “I thought there were opportunities that our company and Apple could be doing and I wanted to talk to Tim about that,” Zuckerberg said.

The email exchange was part of a broader effort by the defense to show jurors that Zuckerberg was more proactive about the safety of young Instagram users than opposing counsel had portrayed, going so far as to reach out to a corporate rival.

“I care about the wellbeing of teens and kids who are using our services,” Zuckerberg said when characterizing some of the content of the email.

Zuckerberg testified during a landmark trial in Los Angeles Superior Court over social media safety, which is being likened to the industry’s “Big Tobacco” moment.

Part of the trial focused on the alleged harms of certain digital filters promoting cosmetic surgery, which Instagram chief Adam Mosseri testified about earlier in the trial.

Zuckerberg said that the company consulted with various stakeholders about the use of beauty filters on Instagram, but he did not name them. The plaintiff’s lawyer questioned Zuckerberg about messages showing he lifted a ban on the filters because it was “paternalistic.”

“It sounds like something I would say and something I feel,” Zuckerberg replied. “It feels a little overbearing.”

Zuckerberg was pressed about the decision to allow the feature when the company had guidance from experts that the beauty filters had negative effects, particularly on young girls.

He was specifically asked about one study by the University of Chicago in which 18 experts said that beauty filters as a feature cause harm to teenage girls.

Zuckerberg, who noted that he believed this was referring to so-called cosmetic surgery filters, said he saw that feedback and discussed with the team, and it came down to free expression. “I genuinely want to err on the side of giving people the ability to express themselves,” Zuckerberg said.

Meta CEO Mark Zuckerberg arrives at Los Angeles Superior Court on Feb. 18, 2026.

Jill Connelly | Getty Images

Zuckerberg echoed Mosseri’s previous sentiments shared in court that Meta ultimately decided to lift a temporary ban on the plastic surgery digital filters without promoting them to other users.

Defense attorney Mark Lanier noted that Facebook vice president of product design and responsible innovation Margaret Stewart said in an email that while she would support Zuckerberg’s ultimate decision, she didn’t believe it was the “right call given the risks.” She mentioned in her message that she had dealt with a personal family situation that she acknowledged made her biased but gave her “first-hand knowledge” of the alleged harms.

Zuckerberg said that many Meta employees disagree with the company’s decisions, which is something the company encourages, and while he understood Stewart’s perspective, there was ultimately not enough causal evidence to support the assertion of harms by the outside experts.

When Lanier asked if Zuckerberg has a college degree that would indicate expertise in causation, the Meta chief said, “I don’t have a college degree in anything.”

“I agree I do not know the legal understanding of causation, but I think I have a pretty good idea of how statistics work,” Zuckerberg said.

The trial, which began in late January, centers on a young woman who alleged that she became addicted to social media and video streaming apps like Instagram and YouTube.

The Facebook founder pushed back against the notion that the social media company made increasing time spent on Instagram a company goal.

Zuckerberg was addressing a 2015 email thread in which he appeared to highlight improving engagement metrics as an urgent matter for the company.

While the email chain may have contained the words “company goals,” Zuckerberg said the comments could have been an aspiration, and asserted that Meta doesn’t have those objectives.

Lawyers later brought up evidence from Mosseri, including goals to increase users’ daily engagement time on the platform to 40 minutes in 2023 and to 46 minutes in 2026.

Zuckerberg said the company uses milestones internally to measure against competitors and “deliver the results we want to see.” He asserted that the company is building services to help people connect.

Meta CEO Mark Zuckerberg arrives at Los Angeles Superior Court ahead of the social media trial tasked to determine whether social media giants deliberately designed their platforms to be addictive to children, in Los Angeles, Feb. 18, 2026.

Frederic J. Brown | AFP | Getty Images

Lawyers also raised questions over whether the company has taken adequate steps to remove underage users from its platform.

Zuckerberg said during his testimony that some users lie about their age when signing up for Instagram, which requires users to be 13 or older. Lawyers also shared a document which stated that 4 million kids under 13 used the platform in the U.S.

The Facebook founder said that the company removes all underage users it identifies and includes its age requirements in the sign-up process.

“You expect a 9-year-old to read all of the fine print?” a lawyer for the plaintiff asked. “That’s your basis for swearing under oath that children under 13 are not allowed?”

Instagram did not begin requiring birthdays at sign-up until late 2019. Several times, Zuckerberg brought up his belief that age verification is better handled by companies like Apple and Google, which maintain mobile operating systems and app stores.

Zuckerberg later responded to questions about documents in which the company reported a higher retention rate on its platform for users who join as tweens. He said lawyers were “mischaracterizing” his words and that Meta doesn’t always launch products in development such as an Instagram app for users under 13.

Meta Platforms CEO Mark Zuckerberg testifies at a Los Angeles Superior Court trial in a key test case accusing Meta and Google’s YouTube of harming kids’ mental health through addictive platforms, in Los Angeles, California, U.S., Feb. 18, 2026 in a courtroom sketch.

Mona Edwards | Reuters

During Wednesday’s session, Judge Carolyn B. Kuhl threatened to hold anyone using AI smart glasses during Zuckerberg’s testimony in contempt of court.

“If you have done that, you must delete that, or you will be held in contempt of the court,” the judge said. “This is very serious.”

Members of the team escorting Zuckerberg into the building just before noon ET were pictured wearing the Meta Ray-Ban artificial intelligence glasses.

Recording is not allowed in the courtroom.

Lawyers also questioned whether Zuckerberg previously lied about the board’s inability to fire him.

“If the board wants to fire me, I could elect a new board and reinstate myself,” he said, in response to remarks he previously made on Joe Rogan’s podcast.

During his interview with the podcaster last year, Zuckerberg had said he wasn’t worried about losing his job because he holds voting power.

Zuckerberg told the courtroom he is “very bad” at media.

Lawyers representing the plaintiff contend that Meta, YouTube, TikTok and Snap misled the public about the safety of their services and knew that the design of their apps and certain features caused mental health harms to young users.

Snap and TikTok settled with the plaintiff involved in the case before the trial began.

Meta has denied the allegations and a spokesperson told CNBC in a statement that “the question for the jury in Los Angeles is whether Instagram was a substantial factor in the plaintiff’s mental health struggles.”

Last week, Instagram’s Mosseri testified that while he thinks there can be problematic usage of social media, he doesn’t believe that’s the same as clinical addiction.

Adam Mosseri, head of Instagram at Meta Platforms Inc., arrives at Los Angeles Superior Court in Los Angeles, California, US, on Wednesday, Feb. 11, 2026.

Caroline Brehman | Bloomberg | Getty Images

“So it’s a personal thing, but yeah, I do think it’s possible to use Instagram more than you feel good about,” Mosseri said. “Too much is relative, it’s personal.”

The Los Angeles trial is one of several major court cases taking place this year that experts have described as the social media industry’s “Big Tobacco” moment, because of the alleged harm caused by their products and the companies’ alleged efforts to deceive the public.

Parents of children who they say suffered detrimental effects of social media gather outside the courthouse in Los Angeles on Wednesday, Feb. 18.

Jonathan Vanian

Meta is also involved in a major trial in New Mexico, in which the state’s attorney general, Raúl Torrez, alleges that the social media giant failed to ensure that children and young users are safe from online predators.

“What we are really alleging is that Meta has created a dangerous product, a product that enables not only the targeting of children, but the exploitation of children in virtual spaces and in the real world,” Torrez told CNBC’s “Squawk Box” last week when opening arguments for the trial began.

This summer, another social media trial is expected to begin in the Northern District of California. That trial also involves companies like Meta and YouTube and allegations that their respective apps contain design flaws that foster mental health harms in young users.

CNBC’s Jennifer Elias contributed reporting.

WATCH: New Mexico AG Raul Torrez talks about his case against Meta

New Mexico AG Raul Torrez: Meta has created a space for predators to target and exploit children


Figma stock jumps 16% as company sees AI monetization accelerating growth


Dylan Field, co-founder and chief executive officer of Figma, speaks during a Bloomberg Television interview outside of the New York Stock Exchange in New York on July 31, 2025.

Michael Nagle | Bloomberg | Getty Images

Figma shares jumped as much as 20% in extended trading on Wednesday after the design software maker reported stronger quarterly results and guidance than Wall Street had predicted.

Here’s how the company did in comparison with LSEG consensus:

  • Earnings per share: 8 cents adjusted vs. 7 cents expected
  • Revenue: $303.8 million vs. $293.15 million expected

Figma’s revenue grew 40% year over year in the fourth quarter, according to a statement. The company had a net loss of $226.6 million, or 44 cents per share, compared with net income of $33.1 million, or 15 cents per share, in the fourth quarter of 2024.

Management called for $315 million to $317 million in first-quarter revenue, which implies 38% growth. Analysts polled by LSEG were expecting $292 million.

For 2026, Figma sees $100 million to $110 million in adjusted operating income on $1.366 billion to $1.374 billion in revenue, which would suggest 30% revenue growth. The LSEG revenue consensus was $1.29 billion.

Lately, investors have become more concerned that generative artificial intelligence products could weaken the growth prospects of software companies. As of Wednesday’s close, Figma shares were down about 35% year to date, while the iShares Expanded Tech-Software Sector ETF has slipped 22%. The S&P 500 index has gained almost 1% in the same period.

“If you look at software, not only is it not going away. There’s going to be way more of it than ever before,” Figma’s co-founder and CEO, Dylan Field, said in a Wednesday interview. But he said the market is “potentially increasingly competitive.”

The company, which went public in July, wants to ensure it can benefit as people turn to AI products for design. The Figma Make tool allows people to type in a few words and have AI models from Anthropic and Google interpret the information to craft app prototypes. More than half of customers spending over $100,000 in annualized revenue had people using Figma Make every week during the quarter, according to the statement.

Figma managed to lower the cost of running the Make service by optimizing its computing infrastructure, Praveer Melwani, the company’s finance chief, said on a conference call with analysts. The company’s adjusted gross margin held at 86%, even though Figma Make weekly active users increased 70% from the third quarter.

Figma will soon bring in more revenue from AI adoption. In March, it will start enforcing monthly AI credit limits for different types of account holders. Clients will pay based on monthly usage or sign up for AI credit subscriptions, according to a blog post from December.

“What we’ve observed is it tends to be a power law distribution, where a subset of users within an organization are receiving outsized value, and as such, are going over the projected limits that we intend to enforce,” Melwani said. “Now, our expectation is that that will continue to evolve.”

Also during the quarter, Figma announced a collaboration with ServiceNow to convert designs into applications for large companies to adopt.

“We were pleased to see positive commentary around both Figma Make and Figma Design, indicating increased adoption of AI workflows across Figma’s platform,” RBC analyst Rishi Jaluria, with the equivalent of a hold rating on the stock, wrote in a note to clients.


WATCH: How the AI sell-off ripped through software



India’s Adani to invest $100 billion in AI data centers over the next decade


The logo of the Adani Group is seen on the facade of its Corporate House on the outskirts of Ahmedabad, India, November 21, 2024. 

Amit Dave | Reuters

India’s Adani on Tuesday announced plans to invest $100 billion to develop renewable energy-powered AI-ready data centers by 2035, seeking to establish the world’s largest integrated data center platform.

The blockbuster investment, which comes as India pushes to gain a stronger foothold in the global AI race, is expected to create a $250 billion AI infrastructure ecosystem in India over the next decade, Adani said.

The initiative is also poised to incentivize an additional $150 billion in spending across server manufacturing, sovereign cloud platforms, and supporting industries, the company said.

“The world is entering an Intelligence Revolution more profound than any previous Industrial Revolution,” Gautam Adani, chairman of Adani Group, said in a statement.

“India will not be a mere consumer in the AI age. We will be the creators, the builders and the exporters of intelligence and we are proud to be able to participate in that future,” he added.

The announcement coincides with India’s AI Impact Summit, a five-day event which got underway on Monday.

Global leaders and technology executives such as OpenAI CEO Sam Altman and Alphabet CEO Sundar Pichai are expected to take part in the summit, which has been billed as the first major international AI meeting hosted in the Global South.

Shares of Adani Enterprises, the flagship company of Adani Group, rose 2.3% on the news, making it one of the top gainers on the benchmark Nifty 50 stock index. Shares of Adani Green Energy were last seen up 1.8%.

Strategic partnership

Adani’s AI push is designed to build on AdaniConnex’s existing 2 gigawatt (GW) of national data center capacity, with plans to expand toward a 5 GW target. It is this deployment that the company says will create the world’s largest integrated data center platform.

AdaniConnex is a joint venture between Adani Group and EdgeConnex, a global data center provider.

Adani said its vision is supported by a strategic partnership with Google. The multinational conglomerate added that it was also in talks with other major players to establish large-scale campuses across India, without providing further details.

Google’s parent company Alphabet said in October that it would invest $15 billion over the next five years to build an AI data center hub in southern India.

Shares of Adani Group companies have been volatile in recent weeks.

Indeed, the firm’s stocks fell sharply after court filings late last month showed that the U.S. Securities and Exchange Commission is looking to send a summons to Indian billionaire and Adani Group chair Gautam Adani and nephew Sagar Adani on charges of bribery and fraud.

Adani Group’s chairman was indicted along with seven other men in New York federal court in November 2024 on charges related to a massive bribery and fraud scheme. CNBC has reached out to Adani Group and the U.S. SEC for comment.

India’s Ministry of Law and Justice twice refused last year to deliver the summons to Gautam Adani and Sagar Adani under the Hague Convention, the SEC told the court.

— CNBC’s Priyanka Salve contributed to this report.


AI chatbot firms face stricter regulation in online safety laws protecting children in the UK


Preteen girl at desk solving homework with AI chatbot.

Phynart Studio | E+ | Getty Images

The UK government is closing a “loophole” in new online safety legislation, making AI chatbots subject to its requirements to combat illegal material; platforms that fail to comply face fines or even being blocked.

After the country’s government staunchly criticized Elon Musk’s X over sexually explicit content created by its chatbot Grok, Prime Minister Keir Starmer announced new measures that mean chatbots such as OpenAI’s ChatGPT, Google’s Gemini, and Microsoft Copilot will be included in his government’s Online Safety Act.

The platforms will be expected to comply with “illegal content duties” or “face the consequences of breaking the law,” the announcement said.

This comes after the European Commission investigated Musk’s X in January for spreading sexually explicit images of children and other individuals. Starmer led calls for Musk to put a stop to it.

Keir Starmer, UK prime minister, during a news conference in London, UK, on Monday, Jan. 19, 2026.

Bloomberg | Bloomberg | Getty Images

Earlier, Ofcom, the UK’s media watchdog, opened an investigation into X over reports it was spreading sexually explicit images of children and other individuals.

“The action we took on Grok sent a clear message that no platform gets a free pass,” Starmer said, announcing the latest measures. “We are closing loopholes that put children at risk, and laying the groundwork for further action.”

Starmer gave a speech on Monday on the new powers, which extend to setting minimum age limits for social media platforms, restricting harmful features such as infinite scrolling, and limiting children’s use of AI chatbots and access to VPNs.

One measure announced would force social media companies to retain data after a child’s death, unless the online activity is clearly unrelated to the death.

“We are acting to protect children’s wellbeing and help parents to navigate the minefield of social media,” Starmer said.

Alex Brown, head of TMT at law firm Simmons & Simmons, said the announcement shows how the government is taking a different approach to regulating rapidly developing technology.

“Historically, our lawmakers have been reluctant to regulate the technology and have rather sought to regulate its use cases and for good reason,” Brown said in a statement to CNBC.

He said that regulations focused on specific technology can age quickly and risk missing aspects of its use. Generative AI is exposing the limits of the Online Safety Act, which focuses on “regulating services rather than technology,” Brown said.

Starmer’s latest announcement showed the UK government wanted to address the dangers “that arise from the design and behaviour of technologies themselves, not just from user‑generated content or platform features,” he added.

There’s been heightened scrutiny around children and teenagers’ access to social media in recent months, with lawmakers citing mental health and wellbeing harms. In December, Australia became the first country to implement a law banning teens under 16 from social media.

Australia’s ban forced apps like Alphabet’s YouTube, Meta’s Instagram, and ByteDance’s TikTok to have age-verification methods such as uploading IDs or bank details to prevent under-16s from making accounts.

Spain became the first European country to enforce a ban earlier this month, with France, Greece, Italy, Denmark, and Finland also considering similar proposals.

The UK government launched a consultation in January on banning social media for under-16s.

Additionally, the country’s House of Lords, an unelected upper legislative chamber, voted last month to amend the Children’s Wellbeing and Schools Bill to include a social media ban for under-16s.

The next phase will see the bill reviewed by parliament’s House of Commons. Both houses have to agree on any changes before they pass into law.


Elon Musk’s xAI faces threat of NAACP lawsuit over air pollution from Mississippi data center


Nikolas Kokovlis | Nurphoto | Getty Images

Elon Musk’s xAI, which merged with SpaceX last week, is facing increased pressure from environmental and civil rights groups over pollution concerns, this time at the company’s facility in Southaven, Mississippi.

On Friday, the Southern Environmental Law Center and Earthjustice, on behalf of the NAACP, sent a notice of intent to sue xAI and subsidiary MZ Tech LLC, saying the company’s use of dozens of natural gas-burning turbines requires a federal permit, violates the Clean Air Act and harms nearby communities.

Pollution from the turbines, which xAI has also used in Memphis, Tennessee, for its Colossus 1 and Colossus 2 data centers, has been a major source of local contention for more than a year.

Plans for a third data center in Southaven, located about 20 miles from Memphis, were announced early this year, when Mississippi Republican Governor Tate Reeves said he expected the project to create “hundreds of permanent jobs throughout DeSoto County.”

Launched by Musk in 2023, xAI is trying to compete with OpenAI, Anthropic and Google in the booming generative AI market. On Feb. 2, Musk said SpaceX, his rocket maker and defense contractor, acquired xAI in a deal that valued the combined entity at $1.25 trillion.

Musk is banking on the area in and around Memphis as the foundation of his AI ambitions, and he’s been flouting environmental rules in order to develop as quickly as possible. Musk’s social network X, formerly Twitter, is also owned by xAI, which created the Grok AI chatbot and image generator.

XAI is facing a myriad of government investigations in Europe, Asia and the U.S. after Grok enabled users to easily create and share deepfake porn, including explicit imagery depicting child sexual abuse.

Last year, residents in the majority-Black community of Boxtown in South Memphis testified at public hearings about a stench in the air, and the impact of worsening smog on their health caused by xAI’s use of natural gas turbines. Research by scientists at the University of Tennessee also found that xAI’s turbine use added to air pollution woes in the area.

Environmental advocates, including the NAACP, had previously said they would sue to stop xAI’s un-permitted use of the turbines in Memphis. But they stopped short of filing a legal complaint after Shelby County’s health department allowed xAI to treat the turbines as temporary, non-road engines, and issued permits for their use.

At the federal level, the EPA recently clarified gray areas of the law and said these turbines can’t be categorized as temporary non-road engines. Nonetheless, xAI has been using the turbines across state lines without obtaining federal permits.

XAI didn’t immediately respond to a request for comment.

Noise pollution from the turbines has also been a source of local consternation. Jason Haley, a Southaven resident, told CNBC the turbines make headache-inducing noises around the clock that he can hear inside his home.

Haley is part of a group called Safe and Sound, which documents the turbines’ decibel levels and is pressing local officials to stop xAI from making so much noise, especially overnight.

Mississippi officials will hold a public hearing, scheduled for Tuesday, for community members who wish to express their concerns about xAI’s expansion plans in the area. The hearing will focus on whether the state should give xAI permission to install and run 41 permanent turbines at its Southaven facility, Mississippi Today previously reported.

Similar community dynamics are playing out across the U.S. as tech giants rush to construct massive data centers, which can strain local energy and water supply and cause prices to increase.

In November, Microsoft ended efforts to build a data center in Wisconsin due to the community’s vocal opposition. Amazon also pulled out of plans for a data center in Arizona after community protests.

As for Musk’s Southaven project, Patrick Anderson, a senior attorney with SELC, said xAI “has to follow the law, just like any other company.”

“And when it flouts the Clean Air Act’s bedrock protections against unpermitted emissions, it puts the health and welfare of ordinary citizens at risk,” Anderson said in an email. “That’s why we intend to hold xAI accountable here.”

The Mississippi Department of Environmental Quality did not immediately respond to requests for comment.



Nearly a thousand Google workers sign letter urging company to divest from ICE, CBP


The logo for Google LLC is seen at the Google Store Chelsea in Manhattan, New York, Nov. 17, 2021.

Andrew Kelly | Reuters

More than 900 Google workers have signed an open letter condemning recent actions by U.S. Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP), urging the tech giant to disclose its dealings with the agencies and divest from them.

The letter, citing recent ICE killings of Keith Porter, Renee Good, and Alex Pretti, said that the employees are “appalled by the violence” and “horrified” by Google’s part in it.

“Google is powering this campaign of surveillance, violence, and repression,” the letter reads.

It goes on to allege that Google Cloud is aiding CBP surveillance and powering Palantir’s ImmigrationOS system, which is used by ICE. The letter also states that Google’s generative artificial intelligence is used by CBP and that the Google Play Store has blocked ICE-tracking apps.

The letter also quotes a social media post by Google Chief Scientist Jeff Dean from early January, who wrote, “We all bear a collective responsibility to speak up and not be silent when we see things like the events of the last week.”

“We are vehemently opposed to Google’s partnerships with DHS, CBP, and ICE,” the employees wrote. “We consider it our leadership’s ethical and policy-bound responsibility to disclose all contracts and collaboration with CBP and ICE, and to divest from these partnerships.”

The letter calls on Google to acknowledge the danger that workers face from ICE, host an emergency internal Q&A on the company’s DHS and military contracts, implement safety measures to protect workers — such as flexible work-from-home policies and immigration support — and reveal its ties with the government agencies to help all involved determine where the company will draw a line.

“As workers of conscience, we demand that our leadership end our backslide into contracting for governments enacting violence against civilians,” the letter reads. “Google is now a prominent node in a shameful lineage of private companies profiting from violent state repression. We must use this moment to come together as a Googler community and demand an end to this disgraceful use of our labor.”

Google did not immediately respond to a CNBC request for comment.

The letter comes as employees put mounting pressure on tech CEOs to speak out against ICE. Just two weeks prior, employees from Amazon, Spotify, Meta and other companies wrote a similar letter demanding ICE get “out of our cities.”


Why Amazon’s CEO is ‘confident’ with $200 billion spending plan


Andy Jassy, CEO of Amazon, speaks during an unveiling event in New York, Feb. 26, 2025.

Michael Nagle | Bloomberg | Getty Images

Amazon’s stock plunged 11% in extended trading on Thursday, dragged lower by market jitters around the company’s $200 billion capex plans, the highest spending forecast among the megacap companies.

The forecast is a sharp increase from Amazon’s capital expenditures last year, and it was more than $50 billion above analysts’ expectations. The company reported spending roughly $131 billion on purchases of property and equipment in 2025, up from about $83 billion in the year prior.

Tech companies have laid out aggressive spending plans on artificial intelligence infrastructure since OpenAI ushered in the modern era of this technology with the release of ChatGPT in late 2022, but at the start of 2026, those lavish commitments have only kept growing.

Google parent Alphabet on Wednesday said it would spend up to $185 billion in 2026, while Meta last week said its capital expenditures could nearly double from last year to somewhere between $115 billion and $135 billion in 2026.

On a conference call with investors, Wall Street analysts pressed Amazon executives for more clarity around the spending blitz and when it could begin to pay off. CEO Andy Jassy said in prepared remarks at the beginning of the call that he was “confident” the company’s cloud unit will see a “strong return on invested capital,” though he didn’t say when it could materialize.

“Help us, get to that — get to your level of confidence in having a strong long term return on that invested capital,” Mark Mahaney, Evercore ISI head of internet research, said to Jassy.

Jassy said the company needs the capital to keep pace with “very high demand” for Amazon’s AI compute, which requires more infrastructure such as data centers, chips and networking equipment.

“This isn’t some sort of quixotic, top-line grab,” Jassy said. “We have confidence that we, that these investments will yield strong returns on invested capital. We’ve done that with our core AWS business. I think that will very much be true here as well.”

Sales at Amazon Web Services grew 24% to $35.6 billion in the most recent period, beating analysts’ expectations and marking the cloud unit’s “fastest growth in 13 quarters,” Jassy said.

AWS could’ve grown faster if it had more capacity to meet demand, “so we are being incredibly scrappy around that,” he said.

The company’s cloud unit added almost 4 gigawatts of computing capacity in 2025, and AWS expects to double that power by the end of 2027, Jassy noted.

Barclays analyst Ross Sandler asked Jassy how he sees the AI market evolving from the current landscape, where it remains “a bit top-heavy with a lot of the spend clustering around a few of the AI-native labs.”

Jassy said the AI market has become more like a “barbell,” with the AI labs on one side and enterprises on the other end, looking to the technology as a “productivity and cost avoidance” tool. The middle consists of enterprises that are in various stages of building AI applications, he said.

“That middle part of the barbell very well may end up being the largest and most durable,” Jassy said.

WATCH: Amazon shares fall on earnings miss, $200 billion guidance for 2026 capex spending


Alphabet shares close flat after earnings beat. Here’s what’s happening


Alphabet’s shares closed largely flat on Thursday after the company beat Wall Street’s expectations on earnings and revenue, with artificial intelligence spending projected to rise sharply this year.

The Google parent closed nearly 2% lower on Wednesday. After the bell, Alphabet reported fourth-quarter revenue of $113.83 billion, above the $111.43 billion estimate from analysts polled by LSEG.

Its Google Cloud division had $17.66 billion in revenue versus a forecast of $16.18 billion, according to StreetAccount. YouTube Advertising posted $11.38 billion in revenue versus the estimated $11.84 billion.

The tech giant said it would significantly increase its 2026 capital expenditure to between $175 billion and $185 billion — more than double its 2025 spend. A significant portion of capex spending would go toward investing in AI compute capacity for Google DeepMind.

What analysts are saying

Barclays analysts said in a note Thursday that infrastructure, DeepMind and Waymo costs “weighed on overall Alphabet profitability,” and will continue to do so in 2026.

“Cloud’s growth is astonishing, measured by any metric: revenue, backlog, API tokens inferenced, enterprise adoption of Gemini. These metrics combined with DeepMind’s progress on the model side, starts to justify the 100% increase in capex in ’26,” they said.

“The AI story is getting better while Search is accelerating – that’s the most important take for GOOG,” they added.

Deutsche Bank analysts said in a note Thursday that Alphabet has “stunned the world” with its huge capex spending plan. “With tech in a current state of flux, it’s not clear whether that’s a good or a bad thing,” they wrote.

Correction: This story has been updated to correct that Alphabet shares were down on Thursday.