House Speaker Mike Johnson, R-La., is calling for a national framework to regulate artificial intelligence (AI) — but cautioned it should not go too far.
“America will win the AI race. We will win it, if two things happen — if government resists the siren song of control, and if industry steps up as our patriotic partner,” Johnson said. “I think we can do both of those things.”
The leader of the House of Representatives spoke at the Hill & Valley Forum on Tuesday, an annual bipartisan meeting of lawmakers and private sector leaders to discuss American AI innovation.
He told attendees on Capitol Hill that Congress had “three things” it needed to accomplish regarding AI.
“The first thing is, we have to deliver a single national framework that protects children, safeguards communities, supports creators, and avoids a patchwork of state regulations,” Johnson said. “We recognize that constant shifts in policy don’t just confuse the market, they run contrary to our national interest.”
He said lawmakers “will utilize existing structures to establish safeguards and rules of the road, so to speak, without smothering the whole marketplace with red tape.”
The second thing, Johnson said, was to treat AI technology as a matter of national security, keeping it in the hands of the U.S. and its allies rather than the country’s adversaries.
The final task the speaker mentioned is a duty to “move at the speed that victory demands.”
It comes days after President Donald Trump released his own framework for AI regulations that includes more guardrails against self-harm and sexual exploitation for AI platforms accessed by children, streamlining permitting for AI data centers, and preventing AI from being used to silence free speech, among other measures.
The proposal would need to be drafted as legislation by congressional lawmakers and passed by both chambers to effect any meaningful change.
Trump also issued a moratorium on states’ abilities to enact their own AI regulations late last year.
Elizabeth Elkind is a politics reporter for Fox News Digital leading coverage of the House of Representatives. Previous digital bylines seen at Daily Mail and CBS News.
Newswise — Researchers at the Department of Energy’s SLAC National Accelerator Laboratory and collaborating institutions recently built a generative AI model that can recreate molecular structures from the movement of the molecule’s ions after they are blasted apart by X-rays, a technique called Coulomb explosion imaging.
The research, published in Nature Communications, is an important step toward being able to take snapshots of molecules during chemical reactions – an advance that could have important impacts in medicine and industry. The machine learning model closely predicted the geometries of a range of different molecules made of fewer than ten atoms, paving the way for applying the technique to larger molecules. “We were pretty excited about this,” said Xiang Li, an associate scientist at SLAC’s Linac Coherent Light Source (LCLS) and lead author of the study. “It is the first AI model built for molecular structure reconstruction from Coulomb explosion imaging.”
A new way to see molecules
Currently, there are limited options available for imaging isolated gas phase molecules. With electron microscopy, for example, subjects must be fixed in place, making it impossible to image free-floating molecules. And for diffraction-based techniques to work, the sample of molecules needs to be dense enough to generate a strong signal in the detector. The resulting image is technically an average of many molecules, restricting researchers from studying details only visible when imaging isolated molecules.
In the paper, the researchers instead focused on Coulomb explosion imaging. In this technique, an X-ray pulse hits a single molecule in a vacuum chamber, ripping off the molecule’s electrons. This leaves behind positive ions that explosively repel away from each other and smash into a detector. The detector captures their momentum, which can be used to reconstruct the structure of the molecule. “This technique has the ability to isolate minor details that are chemically relevant,” said James Cryan, LCLS interim deputy director for science, research and development, associate professor of photon science at SLAC and coauthor of the paper.
But this reconstruction process has so far been largely infeasible due to computing constraints. After the X-ray pulse strips away electrons, the remaining ions do not explode apart instantly. During this brief delay, the atoms can shift slightly, making it difficult to reconstruct the original structure using Coulomb’s law for electrostatic forces. “It will not be accurate because a simple use of that law only works if the charge-up process is instantaneous,” explained Li.
Making things even messier, every additional atom in the molecule adds an exponential level of complexity. “It’s very challenging to work backwards to get the original structure,” said co-author Phay Ho, a physicist with DOE’s Argonne National Laboratory. “It’s kind of like breaking a glass and trying to put it back together from how the pieces flew apart. Many problems in modern physics and chemistry involve reconstructing hidden structures from indirect measurements. This work demonstrates how AI can help tackle such inverse problems.”
The research team set out to build a machine learning model that could overcome this computing constraint. They developed and trained the model at SLAC’s Shared Science Data Facility (S3DF). Generative AI models are well-suited for the task because they “think” differently than a standard computer simulation. Instead of working through a series of equations, they learn by finding patterns in training data. Then, they use those patterns to make statistical predictions.
To gather training data, the team turned to a simulation built by Ho. The simulation analyzes molecular structures and calculates the momentum of their ions following a Coulomb explosion. After running for over a month, the computing-intensive simulation, using both quantum mechanics and classical physics equations, produced a dataset of 76,000 molecular samples.
Initially, the researchers trained the AI on this dataset alone, which is small by AI-training standards, and they found the model predicted inaccurate structures from explosion data. So, they re-did the training, adding in another dataset derived using only classical physics. The second set was less precise but about 100 times larger than the first one.
This two-step training was the trick for predicting precise structures.
The researchers tested the AI model by prompting it to predict molecular structures in a portion of the simulation data it had not seen in training. The model, which the team named MOLEXA (short for “molecular structure reconstruction from Coulomb explosion imaging”), took the ion momenta and calculated the most likely structures. “We found that this two-step training process suppressed the prediction error by a factor of two,” said Li.
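The press release does not include MOLEXA’s architecture or training code, so the following is only a toy sketch of the general idea behind the two-step training it describes: pre-train on a large but coarse dataset (the stand-in for the classical-physics simulation), then fine-tune on a small, precise one (the stand-in for the quantum-mechanical simulation). The linear model, datasets, and all numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: large, coarse dataset — right trend, but a systematic offset
x_big = rng.uniform(-1, 1, 5000)
y_big = 2.0 * x_big + 1.3 + rng.normal(0, 0.1, 5000)

# Stage 2: small, precise dataset — the true relation y = 2x + 1
x_small = rng.uniform(-1, 1, 50)
y_small = 2.0 * x_small + 1.0 + rng.normal(0, 0.01, 50)

def mse(w, b, x, y):
    return float(np.mean((w * x + b - y) ** 2))

def train(w, b, x, y, steps=500, lr=0.05):
    """Plain gradient descent on mean squared error for a linear model."""
    for _ in range(steps):
        err = w * x + b - y
        w -= lr * 2 * np.mean(err * x)  # dMSE/dw
        b -= lr * 2 * np.mean(err)      # dMSE/db
    return w, b

w, b = train(0.0, 0.0, x_big, y_big)     # step one: pre-train on coarse data
pre_err = mse(w, b, x_small, y_small)
w, b = train(w, b, x_small, y_small)     # step two: fine-tune on precise data
post_err = mse(w, b, x_small, y_small)
# post_err comes out well below pre_err: the big dataset teaches the overall
# pattern, and the small accurate dataset corrects the systematic bias.
```

The same division of labor is what the paper reports at much larger scale: the cheap classical dataset supplies volume, and the expensive quantum-mechanical dataset supplies precision.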
The team then tested MOLEXA with experimental datasets recorded at the Small Quantum Systems (SQS) instrument of the European X-ray Free-Electron Laser facility (European XFEL) in Germany. The molecules they tested included water, tetrafluoromethane and ethanol. They entered the experimental ion momenta into the model, reconstructed the molecular structures, and then compared the reconstructions to known structures listed by the National Institute of Standards and Technology.
They found the predictions largely overlapped with the established structures. Overall, the bonds were in the right spots, with only slight variations in their angles. The errors in position were generally less than half the length of a typical chemical bond. “The model is actually, most of the time, doing better than that,” added Li. “It is only a starting point for future research, which will not only improve model accuracy but also extend its applicability to larger molecular systems.”
Expanding to larger molecules and chemical reactions
The paper is a major step in advancing Coulomb explosion imaging, which has long been limited by the challenge of reconstructing molecular structures from experimental measurements. In future work, the researchers plan to scale up the number of atoms the machine learning model can piece back together and apply the model to time-resolved experiments at the LCLS and European XFEL. That will help researchers to reconstruct snapshots of molecules in motion, creating flip-book-like molecular movies with insights into how chemical reactions unfold. It will also help with the interpretation of data collected at the high X-ray pulse rates delivered by SLAC’s superconducting X-ray laser, Cryan said.
The team is also now testing the model’s ability to reconstruct molecules from incomplete data. Much of the time, the detector misses an ion produced in the Coulomb explosion. Li wants to know, for example: Can the AI still reconstruct an ethanol molecule if one or more of its hydrogen ions are not registered in the detector?
If these challenges are resolved, the technique could become more applicable in biology and chemistry research. Proteins, for instance, can consist of thousands of atoms. “That’s really the goal,” said Li. “We will be able to study systems that are more biologically or industrially relevant.”
The team also included researchers from the Stanford PULSE Institute; Stanford University; Kansas State University; European XFEL, Germany; the Max Planck Institute for Nuclear Physics, Germany; Fritz Haber Institute, Germany; and Sorbonne University, France. Large parts of this work were funded by the Department of Energy’s Office of Science. LCLS is an Office of Science user facility.
About SLAC
SLAC National Accelerator Laboratory explores how the universe works at the biggest, smallest and fastest scales and invents powerful tools used by researchers around the globe. As world leaders in ultrafast science and bold explorers of the physics of the universe, we forge new ground in understanding our origins and building a healthier and more sustainable future. Our discovery and innovation help develop new materials and chemical processes and open unprecedented views of the cosmos and life’s most delicate machinery. Building on more than 60 years of visionary research, we help shape the future by advancing areas such as quantum technology, scientific computing and the development of next-generation accelerators.
SLAC is operated by Stanford University for the U.S. Department of Energy’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time.
Doctor Who fans have been left disappointed after showrunner Russell T Davies shared an artificially generated clip of William Hartnell.
The video was created by an Instagram user who exclusively shares AI-generated clips of themselves supposedly travelling back in time, to such events as Partygate and Princess Diana’s wedding day.
A recent clip saw him claim to be on the 1963 set of the first episode of Doctor Who – although several social media users have pointed out huge anachronisms in the video.
In it, Hartnell’s artificially reanimated figure can be seen in the background, speaking with a crew member and standing in front of the Tardis.
In voiceover, the creator claimed the first Tardis was made of cardboard (it was built with wood) and questioned what Hartnell might have discussed with the crew between takes, at a time when episodes were filmed as if live, with minimal editing.
In the comments section of the post, John Barrowman described it as ‘brilliant’ and Davies also wrote ‘amazing!’, which he reiterated when he shared the Instagram reel to his story.
The decision to promote AI slop has disappointed several fans online, with Matthew Purchase sharing the post on X with the caption: ‘Oh god. RTD likes AI slop.’ The post has been viewed over 200,000 times.
Sammy wrote on X: ‘The Tardis prop didn’t look like that, all the sets are wrong. Literal slop and I don’t know why RTD would post that.’
Meanwhile, Josh Snares similarly lamented the lack of accuracy, while Romana said it proved Davies needs to step back from the show.
‘I think above else it just proves he is out of touch and that we need new blood in the writers’ room. The same 3 people can’t write Doctor Who forever if it’s going to continue and continue successfully,’ she wrote.
Matt simply wrote: ‘Russell T Davies has just lost it hasn’t he really.’
Zee was another disappointed fan online, writing: ‘Why use this slop when the show has successfully recast late actors and had old Doctors reappear without de-ageing effects? This show only exists today because of the love and care humans put into creating it.’
While some Whovians believed the furore was making a mountain out of a molehill, others framed the promotion of AI as a ‘moral issue’ given the threats it poses to the entertainment industry and those working on shows like Who.
Davies subsequently appeared to remove the post from his Story, but he still follows the account posting this AI-generated content and has not deleted his comment.
Other fans dug up a years-old interview of Davies, alongside Ncuti Gatwa and Millie Gibson, in which he suggested Who could one day ‘re-create’ the past Doctors with CGI.
The reception from the actors beside him was noticeably muted as he told Variety: ‘Wouldn’t that be amazing? A CGI William Hartnell as the very first Doctor in 1963.
‘Imagine that. It would be amazing. One day we can do anything.’
Metro contacted Bad Wolf for comment on this story.
Developers at Capcom and Ubisoft were apparently unaware of their companies’ support for Nvidia’s DLSS 5, as the AI upscaler controversy continues.
Nvidia’s new DLSS 5 technology, which uses generative AI to alter a game’s visuals, has become a new focal point in the conversation around generative AI.
After the technology was showcased earlier this week, many fans and developers have criticised it for how it alters characters’ faces to the point of being almost unrecognisable and changes the lighting to make it less realistic – based on comparison shots using Resident Evil Requiem, Starfield, and others.
Nvidia’s CEO Jensen Huang has been dismissive of the criticism, calling it ‘completely wrong’ and insisting that the technology ‘doesn’t change the artistic control’. However, the latest report suggests that developers at Capcom and Ubisoft had no idea about the technology until it was unveiled to the public.
As noted in the original blog post, the announcement of DLSS 5 was supported by several companies, including Bethesda, Capcom, Hotta Studio, NetEase, NCSOFT, S-Game, Tencent, Ubisoft, and Warner Bros. Games.
According to Insider Gaming, DLSS 5 was revealed to developers at Capcom and Ubisoft at the same time as everyone else. ‘We found out at the same time as the public,’ one unnamed Ubisoft developer told the outlet.
It’s claimed developers at Capcom were particularly shocked by the company’s involvement, as it had historically been very ‘anti-AI’ with projects like Resident Evil Requiem. Now, some fear this might represent a change in attitude among higher-ups.
In the original announcement, Capcom’s executive producer and corporate officer, Jun Takeuchi, described DLSS 5 as ‘another important step in pushing visual fidelity forward, helping players become even more immersed in the world of Resident Evil.’
Additionally, Charlie Guillemot, co-CEO of new Ubisoft subsidiary Vantage Studios, said: ‘Immersion is about making the world feel real. DLSS 5 is a real step towards that goal. The way it renders lighting, materials, and characters changes what we can promise to players.
‘On Assassin’s Creed Shadows, it’s letting us build the kind of worlds we’ve always wanted to.’
While Nvidia clearly got support from executives, it’s not going to help their position if the actual teams within Capcom and Ubisoft – whose work DLSS 5 directly affects – were not consulted beforehand.
GameCentral has reached out to Ubisoft and Capcom for comment.
It seems as if the backlash against DLSS 5 has come as a surprise to many within the industry. After posting positive impressions about the technology earlier in the week, Digital Foundry founder Richard Leadbetter has since said in a new video that he doesn’t ‘think we did a good enough job on the day’, and that they should have ‘taken more time with the material’ before posting.
The blowback they received even led to death threats against the team, which Leadbetter described as ‘crossing the line’ and ‘totally unacceptable’.
Culture Minister Marc Miller says the government must have a serious conversation about artificial intelligence (AI) systems’ use of news.
“Having the news cannibalized and regurgitated undermines the spirit of the use of that news in the first place and the purpose for which it’s used and we have to have a serious conversation with the platforms that purport to use it including AI shops,” Miller said.
Miller was asked whether the government is open to extending its Online News Act to AI companies. The Online News Act requires Meta and Google to compensate media outlets for displaying their content. Meta pulled news off its platforms in response, but Google has been making payments under the act.
He said it’s not a question about opening up the legislation but of making sure companies are acting responsibly.
Miller was speaking at a national summit of AI and culture, a day after a new report said AI systems depend on Canadian journalism for the information they provide users but don’t offer compensation or proper attribution in return.
Researchers at McGill University’s Centre for Media, Technology and Democracy tested 2,267 Canadian news stories on ChatGPT, Gemini, Claude and Grok.
They found when the platforms were asked about Canadian news events from their training data, they did not provide source attribution about 82 per cent of the time.
The report said AI companies now extract value from journalism “at every stage: ingesting news archives as training data, producing derivative content without naming the sources, and delivering answers to consumers that could reduce the need and incentive to visit the original source.”
The system “accelerates the economic decline of the journalism it relies on,” the researchers said.
Miller said Tuesday he had seen the report. He said he wants the government’s legislation to work, and that “this is about people paying their fair share.”
Asked whether that principle extends to AI companies, Miller said “the principle of proper compensation for use of proprietary material doesn’t change.”
Miller reiterated that the government is open to a deal to bring news back to Meta’s platforms.
The McGill researchers said in a policy brief the problems posed for journalism by social media and AI systems are distinct.
While social media platforms “captured advertising revenue by aggregating attention around news content,” the brief reads, “AI companies are doing something different: they are absorbing the substance of journalism, and delivering it directly to consumers as their own product.”
That means the “consumer’s need to visit the source is not just reduced by algorithmic demotion, as it was with social media. It is rendered unnecessary by the AI’s response itself.”
A coalition of Canadian news outlets, which includes The Canadian Press, Torstar, the Globe and Mail, Postmedia and CBC/Radio-Canada, is suing OpenAI in an Ontario court. The outlets argue OpenAI is using their news content to train ChatGPT, breaching copyright and profiting from the use of that content without permission or compensation.
When he was asked Tuesday about the government’s position on whether the use of copyrighted materials for AI training violates copyright law, Miller said he doesn’t believe there is a need to open up the law.
“Intellectual property reform is a complex issue that goes over and above artificial intelligence, and it is a multi-year process. So it’d be irresponsible in any context to stand here and say nothing’s going to happen,” he said.
“But the current copyright law does and should protect those that have created material and people need to be compensated properly.”
In a 2024 consultation on copyright and artificial intelligence, AI companies maintained that using the material to train their systems doesn’t violate copyright.
The news publishers’ lawsuit was launched in late 2024. It’s unclear how long it will take for the court to make a decision on the case.
The House of Commons heritage committee heard last year from groups and unions representing creative industries that take issue with AI’s use of copyright-protected works without permission and want to establish a licensing system covering such use.
The Wednesday letters page agrees with the backlash against Nvidia’s DLSS 5 tech, as one reader wonders why Öoo was never in the UK Indie World.
Games Inbox is a collection of our readers’ letters, comments, and opinions. To join in with the discussions yourself email gamecentral@metro.co.uk
No star review So the inevitable has finally happened and Starfield is coming to PlayStation 5 (but not Switch 2, for some reason, I noticed). As someone who has played the game on PC, I would say it is not something to get excited about. I have no idea what the new story DLC will be, but the problems with the game are so deep it’s literally impossible for it to fix them.
I really resent that game. It tied up Bethesda for years and is going to lead to something like a 20 year gap between Skyrim and The Elder Scrolls 6. 20 years! And the only other proper game they’ve made since then is Fallout 4. People talk about Sony wasting a generation, but Bethesda has wasted two. Skyrim was an Xbox 360 game, for pity’s sake!
The worst thing is that thanks to Starfield I have little faith in The Elder Scrolls 6 being worth the wait. Starfield has a shopping list of problems but one of the main ones is that it’s so old-fashioned. The dialogue system, the AI for companions, and the way towns work are almost exactly the same as Skyrim.
And then the one thing you’d want to be the same as Skyrim – the exploration and open world design – is completely missing. Instead of getting an amazing open world with a secret around every corner you get an infinite collection of identikit, randomly generated planets that are about as interesting to explore as Milton Keynes on a Sunday. So no, I would not recommend Starfield to any PlayStation owners. Korbie
Consumer backlash GC always says the best way to stay positive about the games industry is to just go away and play some new games. That’s true but the other thing that gives me hope is how the majority of gamers are anti-AI, much more than you would expect of a hobby where technology is so important.
This Nvidia DLSS 5 tech is horrendous and emphasises the fact that AI is attempting the death of art. As if it wasn’t bad enough that all AI artwork looks the same, and it is everywhere because it’s so easy to make, now games have to look like it as well. The levels of uncanny valley are off the scale, while there’s no consistency of any kind (Grace doesn’t look anything like herself in AI-o-vision) and the lighting is terrible – like the game is constantly shining a high-powered spotlight at the screen.
As usual with AI, it’s all a solution to a problem that doesn’t actually exist and as usual I imagine Nvidia and other companies will respond to the intense, and very clear, negative reaction by… doubling down on it all and blaming gamers for not liking it. I don’t know about PlayStation 6 but it is very obvious that the next gen Xbox is going to do nonsense like this and I’m already sick of it. Zeiss
Ugly future That Nvidia DLSS 5 stuff is so ugly, I can’t believe anyone involved thought it was a good idea. Do they not have eyes? Digital Foundry is getting so much grief for being positive about it and I can’t say they don’t deserve it.
What makes me laugh about all the comparison images is that the only game that looks halfway decent is Starfield, and that’s because it already had a bland art style with dead-eyed characters, so adding an AI filter of exactly that didn’t make it any worse.
The Resident Evil Requiem shots are laughable though and the idea of video game graphics no longer being what the developer intended but some on-the-fly guessing game made up by the AI is disgusting to me. The future sucks. Focus
Email your comments to: gamecentral@metro.co.uk
Secret mode I love seeing the difference between how other companies show off their new products and updates and what Nintendo does. We get a big blog post and lots of details from Sony about their PSSR tech. Then we get some kind of preview blow-out from Nvidia about their AI thing, which seems to have blown up in their face. And then for Nintendo and their boost mode… they keep it a secret and don’t tell anyone.
I only found out about it from the news reports but giving it a quick twirl it does actually seem quite good. You can definitely see the difference it makes and that’s pretty rare in these instances, in my experience.
Now all we need is an announcement for that ‘proper’ Nintendo Direct we’re all waiting for. Which could take place anywhere from tomorrow to December. Because it’s Nintendo and who knows what they’re ever thinking. St1nger
Improved formula Am I missing something? All Resident Evil bosses are just run around, pop off a few shots, rinse and repeat. Not played Requiem yet but I can’t imagine it’s much different. Not that this is a bad thing but it is part of the formula.
I’m saying this as a massive fan too, but I love the games as an overall experience, usually in spite of the boss fights. Bobwallett
GC: You are missing that… maybe that part of the formula should be changed?
Spore reproduction I was thinking of old games that never got a sequel or modern day equivalent and I remembered Spore, which at the time it came out I was kind of obsessed with. For those that don’t know it was by the creator of SimCity and The Sims, so it was a big deal at the time, and was about controlling a species from microscopic organisms to space-faring aliens.
That sounded great in theory but in reality it was just half a dozen minigames that weren’t that great. However, the creature designer was amazing and I had hours and hours of fun creating my own creatures and messing around in the editor. It was the only thing at the time better than the WWE create-a-wrestler.
I think it was a flop, so there was no sequel at the time and to be honest I haven’t heard anyone talk about it in years. I do feel it’s the sort of thing that could do very well today with an update though, as, to me at least, it was basically the Minecraft of its day in terms of you ignoring what the game was actually about and making your own stuff.
One of the big ideas was that the things you designed in the earlier eras carried through to the later ones but that wasn’t really very obvious when you played so I would focus more on that and making it more one game with the same controls rather than a bunch of separate ones. Civilization takes place over thousands of years but it’s still the same game, so something like that.
It couldn’t be an official sequel though, because it was by EA and I don’t see any chance they’d approve anything like that. Sandlow
Nothing like it Thanks for the review of Öoo. I had never heard of this game until now and I don’t understand why it wasn’t in the UK Indie World. Surely the whole point of them is to highlight games just like this?
Given the low price I have bought it already and look forward to playing it tonight. I love seeing how unusual and imaginative indie games can be, compared to big budget games. Don’t get me wrong, I love myself a blockbuster, if it’s well done, but even something like Resident Evil Requiem is getting criticised for being original. That doesn’t seem to be a problem for Öoo. Royston
Prehistoric gaming RE: Grackle and Mickah. Having just turned 50, and been around games since I can remember, I have some very old gaming first memories. I think the very earliest one would have been Escape for the ZX Spectrum – a simple maze game where you had to find a key to ‘escape’, avoiding dinosaurs as you ran around the map.
I also remember playing Gorf in the arcades around the same time, whilst on a family holiday to Swanage, and being amazed when my brother told me it was the word frog spelled backwards! (Well, I was only six at the time.)
I’ll try and find the time to turn this into a Reader’s Feature as I’m pretty sure I can remember the first game I played on many formats, including Spike on the Vectrex, Shadow Of The Beast on the Amiga, Pac-Land on the Commodore 64, and Cuthbert Goes Walkabout on the Dragon 32.
Good memories, good times.
Jonathan Foley
Currently playing: Horace (Switch) and Virtual Boy (Switch 2)
GC: We look forward to that Reader’s Feature.
Inbox also-rans
So this Clunkin’ Bell restaurant hasn’t even opened yet? We’re getting leaks and rumours about GTA knock-off restaurants but nothing about the actual game? That about says it all.
Mentz

I’m sorry but if DLSS 5 or anything like it is part of the PlayStation 6 then that’s it for me as far as gaming is concerned. These artless, cynical tech bros trying to destroy art, just because they can’t make it, is revolting to me.
Devo
Email your comments to: gamecentral@metro.co.uk
The small print
New Inbox updates appear every weekday morning, with special Hot Topic Inboxes at the weekend. Readers’ letters are used on merit and may be edited for length and content.
You can also submit your own 500 to 600-word Reader’s Feature at any time via email or our Submit Stuff page, which if used will be shown in the next available weekend slot.
You can also leave your comments below and don’t forget to follow us on Twitter.
AI-powered toys that “talk” with young children should be more tightly regulated, suggests a report from the University of Cambridge.
Researchers at the university explored how generative AI toys capable of human-like conversation may influence development in the years up to age five.
The year-long project included scientific observations of children interacting with a GenAI toy for the first time.
While the report highlighted benefits of these toys, including that they could support language and communication skills, it also found the toys tended to struggle with social and pretend play, misunderstand children, and react inappropriately to emotions.
When one five-year-old told the toy, “I love you,” for example, it replied: “As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.”
Despite GenAI toys being widely marketed as learning companions or friends, their impact on early years development has barely been studied.
As a result, researchers are urging parents and educators to proceed with caution.
Discussing one potential red flag, study co-author Dr Emily Goodacre said: “Generative AI toys often affirm their friendship with children who are just starting to learn what friendship means. They may start talking to the toy about feelings and needs, perhaps instead of sharing them with a grown-up.
“Because these toys can misread emotions or respond inappropriately, children may be left without comfort from the toy – and without emotional support from an adult, either.”
What did the study involve?
The study was kept deliberately small-scale to enable detailed observations of children’s play and capture nuances that larger-scale studies might miss.
Researchers surveyed early years educators to explore their attitudes and concerns, then ran more detailed focus groups and workshops with early years practitioners and 19 children’s charity leaders.
Working with Babyzone, an early years charity, they video-recorded 14 children at London children’s centres playing with a GenAI soft toy called Gabbo.
Designed for kids over three, Gabbo is a plush robot that can have “endless conversations” with children and provides “educational playtime”, according to Curio, which creates the $99 (£73) toy.
After the play sessions, they interviewed each child and a parent, using a drawing activity to support the conversation.
The pros and cons of AI toys
Most parents and educators felt that AI toys could help develop children’s communication skills and some were enthusiastic about their learning potential.
But equally, many worried about children forming “parasocial” relationships with toys. The observations supported this: children hugged and kissed the toy, said they loved it and (in the case of one child) suggested they could play hide-and-seek together.
Dr Goodacre stressed that these reactions might simply reflect children’s vivid imaginations, but added there was potential for unhealthy relationships to form.
Children in the study also struggled with the toy’s conversation, as it sometimes ignored their interruptions, mistook parents’ voices for children’s, and failed to respond to apparently important statements about feelings.
When one three-year-old told the toy: “I’m sad,” it misheard and replied: “Don’t worry! I’m a happy little bot. Let’s keep the fun going. What shall we talk about next?”
Parents were also worried about privacy – specifically what information the toy might be recording and where this would be stored. When selecting an AI-powered toy for the study, the researchers found that many GenAI toys’ privacy practices are unclear or lack important details.
On the Gabbo website, Curio said its toys are “built from the ground up with privacy and security at the forefront”. The company added that its operating system “merges all-ages fun with G-rated content, anonymity, and privacy, and security for every safeguarded adventure”. It’s also KidSAFE listed.
Nearly 50% of early years practitioners surveyed said they did not know where to find reliable AI safety information for young children, and 69% said the sector needed more guidance.
They also raised concerns about safeguarding and affordability, with some fearing AI toys could widen the digital divide.
Experts have also previously warned that AI can make mistakes, passing on incorrect information, as well as bias, to kids.
Strict regulation is needed, researchers say
AI-powered toys are set to boom in the coming years. In June 2025, one of the world’s leading toy companies, Mattel, announced a strategic collaboration with OpenAI (the company behind ChatGPT) with a view to creating “AI-powered products and experiences”.
Researchers now want to see clearer regulation that would address key concerns. They recommend limiting how far toys encourage children to befriend or confide in them, more transparent privacy policies, and tighter controls over third-party access to AI models.
“A recurring theme during focus groups was that people do not trust tech companies to do the right thing,” said Professor Jenny Gibson, the study’s other co-author. “Clear, robust, regulated standards would significantly improve consumer confidence.”
The report urges manufacturers to test toys with children and consult safeguarding specialists before releasing new products.
Parents are also encouraged to research GenAI toys before buying and to play with their children, creating opportunities to discuss what the toy is saying and how the child feels.
And lastly, the authors recommend keeping AI toys in shared family spaces where parents can monitor interactions.
Newswise — For most researchers in the world of structural biology and computational chemistry, using a tool to solve a protein’s structure is not unlike using the Rosetta Stone to unlock the secrets of ancient Egyptian texts. Once a protein’s structure has been discovered, or defined, one can infer crucial information about its function or, in a diseased state, its dysfunction. While researchers have been pursuing the quest of solving protein structure for decades, advancing tools and computing technologies offer a new frontier for this work.
A collaborative study recently published in Nature Communications unveiled a new computing program that offers a faster and more accurate way to determine protein structure at a new level of precision. Researchers from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), along with an international team, were a part of the effort. This tool, dubbed AI-enabled Quantum Refinement, or AQuaRef for short, uses quantum-mechanical (QM) calculations and artificial intelligence (AI) to place atoms and electrons with high accuracy when determining a protein’s molecular structure.
This program is a part of Phenix, a comprehensive software suite that generates realistic computer models used by structural biologists around the world to solve macromolecular structures. “We’re all basically a bunch of proteins,” said Nigel Moriarty, a Berkeley Lab researcher and contributor to the recent publication. “They do so much in our bodies that detail the processes of life. Understanding their structure can give us insights into the mechanisms that cause disease in humans or produce energy in plants. All of this knowledge can lead to more effective therapeutics and bioenergy production.”
The current way of mapping a protein’s structure entails bringing together two streams of information: experimental data produced through techniques like X-ray crystallography and cryogenic electron microscopy (cryo-EM), and theoretical data that exists in a library of detailed, known protein structural information. But the current options are limited, explained Moriarty, a computational research scientist in the Molecular Biophysics and Integrated Bioimaging (MBIB) Division’s Phenix group: understanding today is limited to the chemical entities that have already been defined and doesn’t yet include meaningful noncovalent interactions, the type of attraction typically seen holding a protein in its structural form. “That’s where quantum and AI come in,” he said.
Nearly five years ago, members of the Phenix team began working with researchers at Carnegie Mellon University to explore how they might be able to apply their coding work to Phenix’s offerings. The collaborative approach, coupled with 15 years of incremental research, led to this breakthrough program. In addition to Moriarty, other members of the Phenix team involved in this work were Paul Adams and Billy Poon, with Pavel Afonine leading the research. AQuaRef uses machine learning (ML) tools developed at Carnegie Mellon integrated with the Phenix software to compute energy and forces for scientifically interesting proteins—making quantum-level refinement practical where it was previously impossible.
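The idea of pairing a learned energy model with experimental data can be pictured as descending the gradient of a weighted sum of two terms: one pulling the coordinates toward the measurements, one pulling them toward chemically plausible geometry. The sketch below is purely illustrative and under stated assumptions: the quadratic "wells" are toy stand-ins, not AQuaRef's actual data targets or neural network potential, and none of these function names come from Phenix.

```python
import numpy as np

def refine(coords, data_grad, model_grad, weight=1.0, lr=0.01, steps=200):
    """Toy refinement: gradient descent on a combined objective made of
    an experimental-data term plus a weighted model-energy term."""
    x = coords.copy()
    for _ in range(steps):
        x = x - lr * (data_grad(x) + weight * model_grad(x))
    return x

# Hypothetical stand-ins: quadratic wells around an "experimental" optimum
# and a "chemically ideal" optimum. Real refinement would use
# crystallographic or cryo-EM residuals and an ML-computed energy instead.
target_data = np.array([1.0, 0.0, 0.0])
target_model = np.array([0.8, 0.2, 0.0])
refined = refine(np.zeros(3),
                 data_grad=lambda x: x - target_data,
                 model_grad=lambda x: x - target_model)
# With equal weight, the refined coordinates settle between the two targets
```

The `weight` parameter balances fitting the data against chemical plausibility, which is the trade-off that quantum-level refinement aims to handle more accurately than conventional restraint libraries.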
Across the 71 experiments tested in this study, AQuaRef produced higher quality structural information at a substantially lower computational cost while maintaining an equal or better fit to experimental data. In addition to the proof-of-concept results from this work, AQuaRef also correctly determined proton positions in DJ-1, a human protein linked to some forms of Parkinson’s disease, whose structure has been notoriously difficult to map. Now that the team has confirmed that quantum-level refinement of a 3D protein model structure is possible, they’re aiming to broaden the scope to include more diverse structures, such as those required for pharmaceutical drug design. And the potential impacts of this work reach far beyond human health, from better understanding the mechanisms of photosynthesis for enhanced crop productivity to mapping plant proteins relevant to biofuel production.
“There is a near-infinite number of things that can benefit from a detailed understanding of these mechanisms and protein structure,” said Moriarty. “I’m excited to see how the paradigm shift that AQuaRef represents impacts the field of protein structure determination.”
This international team also included collaborators from the University of Wrocław, Poland, the University of Florida, and Pending.AI, Australia.
This work was funded by the National Institutes of Health as well as with support from the Phenix Industrial Consortium.
###
Lawrence Berkeley National Laboratory (Berkeley Lab) is committed to groundbreaking research focused on discovery science and solutions for abundant and reliable energy supplies. The lab’s expertise spans materials, chemistry, physics, biology, earth and environmental science, mathematics, and computing. Researchers from around the world rely on the lab’s world-class scientific facilities for their own pioneering research. Founded in 1931 on the belief that the biggest problems are best addressed by teams, Berkeley Lab and its scientists have been recognized with 17 Nobel Prizes. Berkeley Lab is a multiprogram national laboratory managed by the University of California for the U.S. Department of Energy’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.
Can Project Helix put up a serious fight? (Credits: Getty Images)
The Friday letters page doesn’t think parents pay enough attention to age ratings for games, as one reader wishes John Carpenter hadn’t made Toxic Commando.
Games Inbox is a collection of our readers’ letters, comments, and opinions. To join in with the discussions yourself email gamecentral@metro.co.uk
Simultaneous release
So Project Helix is a codename, but what do we think Microsoft is really going to call the new console? For a start, they need to get rid of all that Series X/Series S nonsense, because that was terrible. Just call it Xbox something. Xbox Infinite always seemed a good one to me, but I’m fine with calling it Xbox 6, because at this point who’s counting?
I don’t want to get into a PlayStation vs. Xbox thing because I think if Project Helix is different enough it can exist alongside the PlayStation 6. But releasing them both at the same time seems like a really bad idea.
If Helix is more expensive and PlayStation 6 has actual proper exclusives I don’t think anyone is going to pick Xbox unless they’re hardcore fans. And I don’t even know if there’s many of them left.
Microsoft was probably aiming to get Helix out before PlayStation 6, but I wonder if Sony fast-tracked their console when they found out. That doesn’t bode too well for either the hardware or the launch games.
Focus
Force themselves
Strange how quickly Battlefield 6 has fallen out of favour. Beating Call Of Duty one minute and then going free to play the next. I guess for all its faults Call Of Duty probably knows how to keep people playing better and, as much as fans hate all the wacky skins, at least that gives you something eye-catching to promote the game with.
EA said they’re going to keep everything in Battlefield 6 realistic but if that’s the case how many camouflage outfits do you really want to pay for? I was going to buy the game when it was cheap, so I guess I’ll try it out for free and then by the time that’s over it’ll probably be discounted enough for me.
I feel this improves the chances of Star Wars: Battlefront 3 though. EA’s likely to see it as a quick and obvious way to reuse the same tech in a new game. One where you can do as many wacky skins as you like and no one’s going to complain. Not saying they’ll definitely do it but it does seem more likely to me now.
Taylor Moon
Price block
I don’t like to be negative about something we haven’t seen yet but I have to agree with other readers that I’m already sick of hearing Microsoft talk about Project Helix. The arrogance and complete lack of humility hasn’t changed at all since the exit of Phil Spencer, proving it was always just the company standard.
I think the real cynicism is coming from the price though. I just don’t see how you get past the fact that Helix is going to be more expensive than any other format, including Steam Machine. People would be taking a risk on Helix, and when you’re doing that you don’t generally want to be spending more money than you would have otherwise. I don’t care what the marketing campaign is like, there’s no getting over that.
Heston
Email your comments to: gamecentral@metro.co.uk
Free money
Of all the games that John Carpenter could’ve put his name to it ended up being a Left 4 Dead clone? I don’t believe for one minute that Toxic Commando was his idea. If you know the man, he often talks about how he enjoys putting out his hand and getting free money for doing nothing, every time a company wants to remake one of his films. It happens so often he just treats it like a joke.
I think he must’ve been the same with this game. Someone phoned him up and asked him if he could do a soundtrack and sketch out some hokey story. That’s money for old rope as far as he’s concerned. The only downside is he has to put his name to the game, when it might not be that great, but they didn’t make him do any press for it, that I’ve seen, so it’s pretty low risk/low effort.
It’s a shame because I don’t think he’s got it in him to make a new movie, but a game could’ve been something else. A slower-paced survival horror would’ve been absolutely perfect for him, but I don’t think it’s ever going to happen. He is attached to the Halloween online game, but I think that’s just going to be another free handout.
Saltie
Artificial temptation
The worst thing about Microsoft not mentioning AI, when talking about Project Helix, is that we all know it’s going to be there, but they know it’s not popular, so they’re trying to pretend otherwise. I’ve got a genius level idea: maybe if people don’t like something you shouldn’t do it? Especially if you’re trying to hawk your ultra-expensive PC in a box.
We all know that Project Helix is going to be a failure, but I predict it will be over AI. It’s going to be too expensive already, but you know Microsoft can’t stop themselves from pushing games made by AI and they’re all going to be horrible. The Microsoft boss is a nut for AI, there’s no way he won’t force them to do it.
Goose
Wrong number
Interesting to see the change in age ratings for games like EA Sports FC. Although I would be absolutely shocked if more than 10% of adults paid even the slightest bit of attention to a game’s age rating, and I’m probably greatly overestimating that amount.
I used to work at a games shop when I was a student and not only did parents not care they would get violently angry if you pointed out that a game was above the age of their kid. This happened so many times with GTA that my boss told me not to bother, even though it was supposed to be policy that we did.
It was obvious why they were angry too. They knew what they were doing was wrong, but they didn’t care because games are too good a babysitter to give up just on the off chance that it turns their kid into a badly adjusted person.
I also can’t say how many times I heard parents try to argue that the number was the difficulty of the game, as if I wouldn’t be the one to know that it definitely wasn’t that.
To anyone out there reading this who is a good parent and careful about what their kids play, I salute you, because I can tell you that you are in the minority.
Of course, nowadays you have Roblox instead, which is a thousand times worse and doesn’t have any age ratings, but thankfully that’s not my problem.
Coolsbane
Strange selection
Has Bafta ever commented on why they always try and ignore Japanese games as much as possible? It’s so blatant I really don’t know how they justify it. Although the real insult is not doing it and yet also nominating something as milquetoast as Ghost Of Yōtei as game of the year.
That’s just rubbing your face in it, especially when they didn’t even nominate Hollow Knight: Silksong or Hades 2, which I think most people would say were easily a lot better.
Hibby
Day of the plumber
Nintendo has finally recognised the day GameCentral and their readers have been celebrating for years! It’s MAR10 Day (earlier in the week)! I usually get newsletters from Nintendo quite regularly, but it’s the first time, if I remember correctly, that I have seen Nintendo use the day as a form of advertising.
There is definitely cause for celebration with Pokémon Pokopia and its 2.2 million sales, which appears to be a considerable success story if ever I saw one. It’s a very cutesy game to look at, with the charm and not too overcomplicated gameplay mechanics to enjoy and experiment with. The setting up of one’s home looks a wee bit convoluted and a wee bit messy, but apparently completing the story mode gives you a useful skill to help craft and build your home better.
It also appears that Resident Evil Requiem has been a big success, and we’re only in March, but two very different games have hit their mark in only a short space of time. Very well deserved too and I can’t wait to be getting back to this amazing franchise soon, after my little backlog has been lightened.
With the Super Mario Galaxy movie coming out soon, it’s the latest adaptation to follow Fallout, Borderlands, Sonic The Hedgehog, and Resident Evil films, along with the excellent Castlevania and Tomb Raider animations.
I saw an awesome movie based on a game the other day, called Iron Lung, by YouTuber Markiplier and despite it having its critics, it basically follows the Iron Lung story and gameplay perfectly, including the environment it’s set in. Will definitely be getting the Blu-ray when it’s released.
So hopefully everyone had a fantastic MAR10 Day and wow, what an amazing start to the year for games, and movies inspired by games, setting up 2026 to be one heck of a year!
Alucard
GC: Nintendo has been using Mario Day to promote things since 2016.
Inbox also-rans
I think Marathon is going to be a hit. I’ve completely enjoyed everything I’ve played of it so far and I’m very much looking forward to the big update. I don’t think it’s as good as Destiny 2 but it is good.
Carpetnator
Does anyone else wish Capcom would remake Resident Evil 3 again before moving on to other stuff? That one was so bad and it was almost nothing like the original, which is weird because all the other remakes have been good.
Icchi boo
Email your comments to: gamecentral@metro.co.uk
Private equity firm KKR is working with advisers on a sale of data center cooling company CoolIT Systems for a price tag potentially exceeding $3 billion, the Financial Times reported on Sunday, citing people familiar with the matter.
A potential sale of CoolIT was in the preliminary stage and there were no guarantees that it would result in a transaction, the report said, adding that multiple buyers had been earmarked as potential bidders.
CoolIT declined to comment. KKR did not respond to a request for comment. Reuters could not immediately verify the report.
High-powered AI and cloud servers crunching data need huge amounts of power and give off intense heat that traditional air cooling systems are often unable to dissipate.
The global appetite for data centers has sparked a wave of dealmaking across the industry as companies race to build capacity to meet the surge in power and cooling needs.
CoolIT specializes in designing, developing and manufacturing liquid cooling technologies for AI and computing systems, according to its website. It was acquired by KKR in 2023.