Decoding the Shadows: Vehicle Recognition Software Uncovers Unusual Traffic Behavior | Newswise


Newswise — Researchers at the Department of Energy’s Oak Ridge National Laboratory have developed a deep learning algorithm that analyzes drone, camera and sensor data to reveal unusual vehicle patterns that may indicate illicit activity, including the movement of nuclear materials.

The software monitors routine traffic over time to establish a baseline for “patterns of life,” enabling detection of deviations that could signal something out of place. For example, a surge in overnight truck traffic at a facility that is normally visited only during the day could reveal illegal shipments.

The research builds on a previous ORNL-developed technology for recognizing specific vehicles from side views. Researchers improved the structure of this software’s deep learning network to provide much broader capabilities than existing recognition systems, said ORNL’s Sally Ghanem, lead researcher.

“The majority of the current re-identification models require specific views of the car from the same angles. But our model does not have any of these limitations,” Ghanem said. “We can basically put in any view, from any distance, and determine if it is the same vehicle.” That means the top of a truck seen from a drone can be matched with a side view from the ground. 
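
The article does not describe the model’s internals, but view-invariant matching of this kind is typically done by mapping each image to a fixed-length embedding and comparing the vectors. The sketch below is a minimal illustration of that idea, assuming a simple convolutional encoder and a cosine-similarity threshold; none of the names, sizes or parameters come from ORNL’s system.

```python
# Minimal sketch of view-invariant vehicle matching with a shared
# embedding network. Architecture, sizes and threshold are illustrative
# assumptions, not ORNL's published model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VehicleEncoder(nn.Module):
    """Maps an image from any viewpoint to a fixed-length embedding."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, embed_dim)

    def forward(self, x):
        features = self.backbone(x).flatten(1)
        # L2-normalize so a dot product equals cosine similarity.
        return F.normalize(self.head(features), dim=1)

def same_vehicle(encoder, img_a, img_b, threshold=0.8):
    """Score two views (e.g., drone top-down vs. roadside side view)."""
    with torch.no_grad():
        score = (encoder(img_a) * encoder(img_b)).sum(dim=1)
    return score.item() >= threshold

encoder = VehicleEncoder().eval()
top_view = torch.rand(1, 3, 224, 224)   # stand-ins for real frames
side_view = torch.rand(1, 3, 224, 224)
print(same_vehicle(encoder, top_view, side_view))
```

Because both views pass through the same encoder, a drone’s top-down frame and a roadside side view land in the same vector space, where a single threshold decides the match.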

This precision in recognition was achieved by training the software on hundreds of thousands of publicly available images from surveillance cameras, ground sensors and drones, combined with computer-generated images based on vehicle specifications. ORNL researcher John Holliman built 3D digital models of many car and truck brands, varying the paint jobs, perspectives and lighting conditions to create a wide range of training scenarios. Unlike most vehicle data sets, the ORNL training images also included older vehicle models.

The image set was expanded with footage captured during six data collections around three ORNL campus intersections chosen because vehicles enter and exit by the same route. “We’re using drones to improve the training data because they are very flexible,” Ghanem said. “Drones can circle a vehicle and change their distance to get many angles, so we can simulate images collected from a satellite or at road level.”

To demonstrate that flexibility, ORNL’s Zach Ryan and Jairus Hines piloted a drone hovering 80 feet over the road to ORNL’s High Flux Isotope Reactor, rotating the drone to follow vehicles through turns for multiple perspectives. They also deliberately captured footage of vehicles partly hidden by tree limbs or traffic lights, and even blurry shots caused by electrical or magnetic interference.

“The more low-resolution images we include, the more robust the model,” Ghanem said. Unclear footage and nighttime images train the software to more accurately identify vehicles even when visibility is poor, as in some satellite images.
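
One common way to build that robustness is to degrade clean training images on the fly. The pipeline below is a hypothetical sketch using standard torchvision transforms to mimic blur, low resolution and poor lighting; the specific transforms and parameters are illustrative assumptions, not ORNL’s recipe.

```python
# Hypothetical augmentation pipeline that degrades clean training images
# to mimic the hard cases described above: blur, low resolution and poor
# lighting. Transforms and parameters are illustrative assumptions.
from PIL import Image
from torchvision import transforms

degrade = transforms.Compose([
    # Occasionally blur, as with electrical or magnetic interference.
    transforms.RandomApply(
        [transforms.GaussianBlur(kernel_size=9, sigma=(0.5, 3.0))], p=0.5),
    # Simulate low-resolution capture by downscaling, then upscaling.
    transforms.Resize(64),
    transforms.Resize(224),
    # Simulate nighttime and poor visibility with lighting shifts.
    transforms.ColorJitter(brightness=0.6, contrast=0.4),
    transforms.ToTensor(),
])

frame = Image.new("RGB", (224, 224), color="gray")  # stand-in for a real frame
augmented = degrade(frame)  # degraded training tensor, shape (3, 224, 224)
```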

To avoid bias, Ghanem weeded out repetitive images of the same angle or vehicle type. She also taught the algorithm with both correct and incorrect matches, making sure the correct pairs represented different perspectives. These methods prevent the algorithm from choosing based only on obvious similarities, such as front views of white sedans. “By retraining the model on challenging pairs, we make it more capable of tricky matches,” Ghanem said. 
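
A rough sketch of that idea, assuming a contrastive-style setup: each labeled pair contributes a loss, and the pairs the current model handles worst are up-weighted before the next pass. The loss and weighting rule here are illustrative assumptions, not the published training objective.

```python
# Sketch of pair-based training that emphasizes challenging pairs. Loss
# and weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def weighted_pair_loss(emb_a, emb_b, same, margin=0.5):
    """emb_a, emb_b: L2-normalized embeddings of each image in a pair.
    same: 1.0 where the pair shows one vehicle, 0.0 where it shows two."""
    sim = (emb_a * emb_b).sum(dim=1)             # cosine similarity
    pos = same * (1.0 - sim)                     # pull true matches together
    neg = (1.0 - same) * F.relu(sim - margin)    # push non-matches apart
    per_pair = pos + neg
    # Up-weight the hardest half of the batch, echoing the retraining
    # on challenging pairs described above.
    weights = 1.0 + (per_pair > per_pair.median()).float()
    return (weights.detach() * per_pair).mean()

emb_a = F.normalize(torch.randn(8, 256), dim=1)  # stand-in embeddings
emb_b = F.normalize(torch.randn(8, 256), dim=1)
same = torch.tensor([1., 0., 1., 0., 1., 0., 1., 0.])
print(weighted_pair_loss(emb_a, emb_b, same))
```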

After training, the team tested the software against 10,000 image pairs, evenly split between correct and incorrect matches. The system proved more than 97% accurate. 

The software leverages a series of neural networks – computational models that function similarly to the brain – which can be trained to not only match different viewpoints but derive long-term patterns from the results. “The project supports nuclear nonproliferation, enabling us to identify whether shipment activities are happening at a specific place,” Ghanem said. 

But the algorithm is also precise enough to track an individual vehicle with stickers, dents or other distinguishing features across a variety of sensors, flagging repeated visits to the same location even if the vehicle takes different routes each time. Researchers are exploring possibilities for adapting the algorithm to incorporate information from non-visual sensors. It could also be applied to identifying the shipment of dangerous or illegal substances on other forms of transportation, such as ships and airplanes.

ORNL researchers and staff who contributed to the project, which was funded through ORNL’s Laboratory Directed Research and Development program, include Ghanem, John Holliman, Ryan Kerekes, Andrew Duncan, Jairus Hines, Ken Dayman, and former staff member Zach Ryan. The High Flux Isotope Reactor is a DOE Office of Science user facility.

UT-Battelle manages ORNL for the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. The Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.




Six PNNL Researchers Win DOE Early Career Research Awards | Newswise


Newswise — RICHLAND, Wash.—The Department of Energy granted early career awards to six researchers at Pacific Northwest National Laboratory—a record number of recipients for PNNL in a single year. The prestigious award is designated for outstanding scientists early in their research careers. It delivers generous support—$2,750,000 for each of the 2025 recipients over a period of five years—allowing researchers to delve into questions that are key to DOE missions. 

“This is the first time six PNNL researchers have received Early Career Research Awards in the same year. This recognition is a testament to their promising research and the impact they stand to make in a variety of fields over the course of their careers,” said Deb Gracio, PNNL director.

PNNL recipients of the awards include chemist Richard Cox, chemical engineer Josh Elmore, computational scientist Hadi Dinpajooh, materials scientist Le Wang, and Earth scientists Avni Malhotra and Nick Ward. Their work focuses on basic science, ranging from the chemistry of heavy elements like plutonium and uranium to plant and microbiological processes that could boost the development of the U.S. bioeconomy. The awards are given to scientists at DOE national laboratories, Office of Science user facilities and U.S. academic institutions.

“The Department of Energy’s Office of Science is dedicated to supporting these promising investigators, and the Early Career Research Program provides an incredible opportunity,” said Harriet Kung, DOE’s Deputy Director of Science Programs for the Office of Science. “These awards allow them to pursue new ideas and harness the resources of the user facilities to increase the potential for breakthrough new discoveries.” 

For some, like Malhotra, the funding presents a rare opportunity to lead a new research program. “It’s an incredible opportunity to build a program from scratch that can lead to long-term discoveries and new research capabilities,” said Malhotra. Her work will shed light on biological processes that occur in soil near plant roots, which are difficult to capture and have long gone understudied. 

Similarly, Nick Ward’s research could uncover important details about a large, lingering question in the Earth science community: just how much methane and nitrous oxide could flow into or out of the world’s trees, and how might the scientific community better capture the process of forest-based trace gas exchange in their models?

For other recipients, like Wang, the funding makes possible new investigations within an established research team. Wang’s work flows out of the lab’s research in thin oxide films: materials that are an essential component of many modern electronics. Scientists like Wang grow these films in extremely thin layers, atom by atom, and study them to glean details about materials that can give rise to new, promising energy and information-processing technologies. 

“I’ve proposed to focus on a new material system known as high-entropy oxides,” said Wang. “Exploring how these multicomponent materials behave at the atomic level could bring about new functional properties,” he added. 

Dinpajooh’s work developing new AI methodologies could accelerate discovery in basic energy sciences by helping researchers better understand chemical and physical processes in electrolyte solutions. Electrolyte solutions are central to energy storage technologies, separation of critical materials, and many other applications. These AI-enabled approaches could improve prediction of key phenomena such as speciation, nucleation, and electron transfer—helping scientists tailor electrolyte performance and guide the design of next-generation materials and processes.

Other funded work, like Elmore’s research on bacterial bioproduction, could ultimately harness the power of microorganisms to produce valuable chemicals. But before those chemicals and other critical materials can be produced, researchers must work toward a predictive understanding of how microbes regulate energy use. 

By exploring how certain proteins are modified within bacterial cells, Elmore’s research could help to realize that understanding. The proposed work builds upon the project he led within PNNL’s Predictive Phenomics Initiative, which focuses largely on unraveling the mysteries of molecular function in complex biological systems.

Much of the work from this year’s recipients could have wide-ranging implications in diverse fields—Cox’s research into nuclear chemistry being a prime example. Cox plans to study the basic chemical behavior of a subset of heavy elements known as actinides. With key roles in nuclear energy, environmental cleanup, energy storage, and even nuclear non-proliferation, a better understanding of why actinides behave the way they do could benefit many.

“It takes a special place like PNNL that has the access and the ability to handle these unique elements safely,” said Cox, who has pursued this line of research for roughly half a decade. “It was very exciting to find out that my proposed research was chosen, and I’m even more excited to venture out into a new scientific direction,” he added.

###

About PNNL

Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in energy resiliency and national security. Founded in 1965, PNNL is operated by Battelle and supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the DOE Office of Science website. For more information on PNNL, visit PNNL’s News Center. Follow us on Twitter, Facebook, LinkedIn and Instagram.




Feeling the Vibe


Newswise — It started with a social media post from Andrej Karpathy, one of the founders of OpenAI. Last year, he tweeted, ​“There’s a new kind of coding I call ​‘vibe coding,’ where you fully give into the vibes, embrace exponentials, and forget that the code even exists.” Karpathy said that large language models and voice-to-text programs had gotten so sophisticated that he could just ask a model to create something and then copy and paste the code it generated to build a project or create a web app from scratch. ​“I just see stuff, say stuff, run stuff, and copy-paste stuff, and it mostly works.” 

That groovy technique might be good for patching a glitchy website or building a phone app, but can it really change the way we do science? Researchers from the U.S. Department of Energy’s (DOE) Argonne National Laboratory are testing vibe coding tools and techniques to see how they stand up to data-intensive scientific challenges. At a recent hackathon, researchers from across the lab gathered to learn together and test commercially available coding tools like Cursor and Warp against scientific challenges as large and hairy as the hunt for dark matter and as pressing as the optimization of nuclear power plants. 

As a long-time leader in computational science and the home of Aurora, one of the world’s fastest and most powerful supercomputers, Argonne is no stranger to grand challenges. But to solve huge problems and to process more data than ever before, researchers are working to stay at the bleeding edge of harnessing artificial intelligence (AI) for science.

Rick Stevens sees vibe coding as another way Argonne researchers can continue to speed up scientific innovation. Stevens is the associate laboratory director for Computing, Environment and Life Sciences at Argonne. He has said that scientists need to be able to work as fast as they can think. He gets frustrated by the bottlenecks of current technology. But vibe coding is a productivity hack. ​“You’re unhobbled from your coding speed,” said Stevens. 

With vibe coding, researchers can interact with large language models in real time, asking them questions by talking rather than by typing commands, and then getting usable output in seconds or minutes. Stevens compared it to having an AI co-scientist — or even a team of co-scientists — working alongside you. He challenged fellow scientists to work with the technology every day. ​“You need to get your head around how to be productive in this environment,” he said. ​“Think, play and have a blast!”

Breaking barriers between ideas and action 

Part of the excitement around vibe coding is that we don’t know how it’s going to change science. At the hackathon, the vibe in the room was playful. The group was a mix of coders and non-coders from a variety of disciplines. Instead of quietly pecking away at their keyboards, researchers were laughing, bouncing ideas off each other and confidently speaking commands to their laptops. 

The promise of AI and vibe coding isn’t just about doing science faster, Stevens explained. These tools free up scientists to be more creative, to put their energy toward things that only a human can do. ​“With these tools, you’re not bottlenecked by writing code,” he said. ​“Now, you’re focused on ideas.” 

Here are some of the ideas Argonne scientists are vibing on:

1. Prototyping software to strengthen nuclear power plants

Nuclear power plants are an integral part of America’s energy supply and a reliable source of power for the growing energy needs of AI. Nuclear engineer Yeni Li and her team are creating AI models of those power plants to help plant engineers and managers predict the best times for maintenance. That knowledge can lead to more reliable and affordable energy production. 

Li said that vibe coding will be useful for setting up the software architecture she needs to turn her ideas into prototypes. ​“These tools will help us do a few days of work in a single afternoon,” said Li. 

2. Automating workflows in bioscience

Rosemarie Wilton doesn’t do a lot of coding in her work as a molecular biologist, but she does spend a significant amount of time using software tools for data analysis. Developing Python-coded pipelines would allow her to automate her data processing workflows and integrate multiple tools seamlessly. She was delighted to see how fast vibe coding could give her the command codes she needed. ​“For a coding novice, it’s really quite amazing. It will be a time saver,” she said. 
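
As a minimal illustration of what such a pipeline might look like, the sketch below chains loading, cleaning and analysis steps so a folder of raw files flows through without manual handoffs; every function body and file name is a hypothetical placeholder, not Wilton’s actual workflow.

```python
# Minimal sketch of an automated data-processing pipeline: independent
# steps chained so raw files flow through without manual handoffs.
# Function bodies and file names are hypothetical placeholders.
from pathlib import Path

def load_samples(folder: Path) -> list[Path]:
    return sorted(folder.glob("*.csv"))

def clean(sample: Path) -> str:
    return sample.read_text().strip()        # stand-in for real parsing

def analyze(data: str) -> dict:
    return {"rows": len(data.splitlines())}  # stand-in for real analysis

def run_pipeline(folder: Path) -> list[dict]:
    return [analyze(clean(sample)) for sample in load_samples(folder)]

if __name__ == "__main__":
    print(run_pipeline(Path("data")))
```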

That quick win in generating command codes led Wilton and Computational Biologist Nick Chia to think about other ways vibe coding could help. Chia mused, ​“If we have an AI agent generating hypotheses for experiments, could we create another AI agent to order the chemicals or samples needed to run those experiments?” Speeding up routine processes like these could help Wilton and her team track the spread of human pathogens with greater accuracy or engineer new enzymes and biosynthetic pathways faster than ever before. 

3. Translating coding languages in science infrastructure

Zachary Sherman is a software developer who manages open-source Python tools for the Atmospheric Radiation Measurement group. He came to the hackathon looking for ways to quickly translate other coding languages into Python, a task that could take years of tedious manual coding. 

“There are many different atmospheric tools in different coding languages and also databases with application programming interfaces for downloading and interacting with atmospheric datasets,” said Sherman. ​“Some of these tools are outdated. We think vibe coding can help us create tools in Python to interact with these interfaces to download and work with the datasets. We also think vibe coding will help us modernize these code bases so we can troubleshoot issues faster and save time and money as we maintain essential scientific infrastructure.”

4. Understanding the nature of the universe

Chiara Bissolotti is a nuclear physicist trying to understand how all known particles interact. Tim Hobbs is a theoretical particle physicist trying to identify unknown particles that can help us understand the nature of dark matter or other possible ​“new physics” in the universe. Both of their fields generate huge amounts of data from theoretical computer simulations, cosmological observations and experiments at research institutions such as CERN’s Large Hadron Collider and the planned Electron-Ion Collider at DOE’s Brookhaven National Laboratory. The information hidden where their data sets overlap could be the key to answering some of the biggest mysteries of the universe, from quarks to the cosmos. But merging those data sets is a monumental task if you’re coding and comparing them by hand. 

“Can the data sets talk to each other?” asked Hobbs. ​“Might they be hiding common patterns, or guide us toward novel theoretical predictions or the automation of burdensome calculations?” 

Bissolotti summed it up, ​“We have many, many ideas. Many more ideas than time. If vibe coding can help us build the scaffolding of the code or help us make the data comparisons more scalable and efficient, we can cut our time to solution by a huge factor.”

5. Collaborating on complex problems in national security

Jonathan Ozik is a computational scientist who uses supercomputers and simulations to understand large and complex systems across many scientific domains, such as biological systems, health care interventions and infectious diseases in urban settings. He said vibe coding can help him explain his work to the many collaborators from different backgrounds that he works with. He also sees it as a way that he can help himself switch between complex projects. ​“It could give me a two-minute reintroduction to the code and the context I’m working in,” he said. ​“There’s no reason not to try to make your daily tasks easier.” 

Ozik predicts vibe coding will open research up to ideas we can’t yet begin to imagine: ​“If you have fewer perceived barriers, you create new possibilities. Things that were previously infeasible in science will become common.”

Argonne National Laboratory seeks solutions to pressing national problems in science and technology by conducting leading-edge basic and applied research in virtually every scientific discipline. Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.




AI chatbot firms face stricter regulation in online safety laws protecting children in the UK



The UK government is closing a “loophole” in its new online safety legislation, making AI chatbots subject to its requirements to combat illegal material; platforms that fail to comply face fines or even being blocked.

After the country’s government sharply criticized Elon Musk’s X over sexually explicit content created by its chatbot Grok, Prime Minister Keir Starmer announced new measures that mean chatbots such as OpenAI’s ChatGPT, Google’s Gemini, and Microsoft Copilot will be included in his government’s Online Safety Act.

The platforms will be expected to comply with “illegal content duties” or “face the consequences of breaking the law,” the announcement said.

This comes after the European Commission investigated Musk’s X in January for spreading sexually explicit images of children and other individuals. Starmer led calls for Musk to put a stop to it.


Earlier, Ofcom, the UK’s media watchdog, began an investigation into X over reports it was spreading sexually explicit images of children and other individuals.

“The action we took on Grok sent a clear message that no platform gets a free pass,” Starmer said, announcing the latest measures. “We are closing loopholes that put children at risk, and laying the groundwork for further action.”

Starmer gave a speech on Monday on the new powers, which extend to setting minimum age limits for social media platforms, restricting harmful features such as infinite scrolling, and limiting children’s use of AI chatbots and access to VPNs.

One measure announced would force social media companies to retain data after a child’s death, unless the online activity is clearly unrelated to the death.

“We are acting to protect children’s wellbeing and help parents to navigate the minefield of social media,” Starmer said.

Alex Brown, head of TMT at law firm Simmons & Simmons, said the announcement shows how the government is taking a different approach to regulating rapidly developing technology.

“Historically, our lawmakers have been reluctant to regulate the technology and have rather sought to regulate its use cases and for good reason,” Brown said in a statement to CNBC.

He said that regulations focused on specific technology can age quickly and risk missing aspects of its use. Generative AI is exposing the limits of the Online Safety Act, which focuses on “regulating services rather than technology,” Brown said.

He said Starmer’s latest announcement showed the UK government wanted to address the dangers “that arise from the design and behaviour of technologies themselves, not just from user‑generated content or platform features.”

There’s been heightened scrutiny around children and teenagers’ access to social media in recent months, with lawmakers citing mental health and wellbeing harms. In December, Australia became the first country to implement a law banning teens under 16 from social media.

Australia’s ban forced apps like Alphabet’s YouTube, Meta’s Instagram, and ByteDance’s TikTok to have age-verification methods such as uploading IDs or bank details to prevent under-16s from making accounts.

Spain became the first European country to enforce a ban earlier this month, with France, Greece, Italy, Denmark, and Finland also considering similar proposals.

The UK government launched a consultation in January on banning social media for under-16s.

Additionally, the country’s House of Lords, an unelected upper legislative chamber, voted last month to amend the Children’s Wellbeing and Schools Bill to include a social media ban for under-16s.

The next phase will see the bill reviewed by the House of Commons, parliament’s elected lower chamber. Both houses have to agree on any changes before they pass into law.


Fentanyl or Phony? Machine Learning Algorithm Learns to Pick Out Opioid Signatures | Newswise


Newswise — New forms of fentanyl are created every day. For law enforcement, that poses a challenge: how do you identify a chemical you’ve never seen before?

Researchers at Lawrence Livermore National Laboratory (LLNL) aim to answer that question with a machine learning model that can distinguish opioids from other chemicals with an accuracy over 95% in a laboratory setting. The foundation for this new technique was published in Analytical Methods.

To identify synthetic opioids like fentanyl now, chemists try to match their signature to a library of a few hundred known samples. But studies suggest there could be thousands of unknown forms, some more dangerous than others. Recognizing those new versions requires a reference-free identification system: a way to catch an opioid even if it does not exist in a chemical database yet.

“When law enforcement finds a new clandestine drug operation, those labs often produce never-before-seen fentanyl derivatives. We can’t just go check a database, and we can’t just go back to who made it and ask how they did it,” said LLNL computational mathematician and author Colin Ponce. “And law enforcement needs to identify the samples they find quickly because there’s going to be another sample tomorrow. I think that’s a little bit of a unique situation.”

Machine learning might seem like a natural fit to identify novel or unknown opioids. And it is — to an extent. The method works best with large data sets, which are difficult to generate for toxic substances like synthetic opioids. 

To even get a machine learning algorithm off the ground, the team had to create the chemical data. They did so with LLNL’s mass spectrometry capabilities coupled to an autosampler, which enabled them to measure hundreds of samples under the same experimental conditions. This minimized variables for the machine learning algorithms. 

“In the world of AI, data is gold, and if you don’t have good data, then you’re not going to generate accurate machine learning models,” said LLNL chemist and author Carolyn Fisher. “Good data is something that we can control and generate at LLNL.” 

With that data in hand, they tried different machine learning techniques as they homed in on the best method: a random forest model. 

“When a model like this eventually gets into the hands of a user, the output has to be interpretable and trustworthy,” said LLNL scientist and author Kourosh Arasteh. “We explored machine learning methods ranging from simple regression and random forests to more complex neural network approaches to balance interpretability with performance.” 

The random forest approach runs through a collection of decision trees. Each tree asks a series of questions about the data and, based on each answer, lands on a prediction: opioid or not. Together, they vote on the final classification.
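
The toy example below shows that voting mechanism with scikit-learn on synthetic data; the random features stand in for mass-spectrometry signatures, and nothing here reproduces the published model.

```python
# Toy illustration of random-forest voting. Synthetic features stand in
# for mass-spectrometry signatures; this is not the published model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(650, 40))                 # 650 samples, 40 features
y = (X[:, :5].sum(axis=1) > 0).astype(int)     # 1 = "opioid", 0 = "not"

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

unknown = rng.normal(size=(1, 40))             # a never-before-seen sample
votes = [tree.predict(unknown)[0] for tree in forest.estimators_]
print(f"{int(sum(votes))} of {len(votes)} trees say 'opioid'")
print("ensemble call:", forest.predict(unknown)[0])
```

(Under the hood, scikit-learn averages per-tree probability estimates rather than counting strict votes, but for fully grown trees the two procedures almost always agree.)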

“Our 650 samples are not the same as having 300,000 samples. On the machine learning side, we needed to make sure that we were designing techniques that were appropriate for that kind of scale,” said Ponce.

This study trained and tested the algorithm with analytically pure samples. These ideal chemicals contain no contaminants or impurities.

“The challenge is that nothing is analytically pure in the real world,” said Fisher. “The next step is to add in background noise and have the AI understand what it should care about during a classification task.”

Fisher and Ponce emphasized that this work would have been impossible without collaboration across the disciplines of data science and chemistry. The two are friends outside of work, and this study, a Laboratory Directed Research and Development project, emerged from a series of organic conversations between them.

“To me, this project really captures what LLNL does best,” said fellow author and LLNL software engineer Steven Magana-Zook. “When you get chemists and data scientists working side by side, you end up with results that neither group could get on their own. That kind of cross-disciplinary work is exactly what makes this place so strong.”

That approach, while essential to the work, initially proved to be an obstacle. The team faced rejection of this manuscript from two journals — reviewers in chemistry didn’t fully grasp the machine learning aspects and experts on the computational side felt uncertain about the chemistry.

“I don’t think people talk about failure enough. It’s so common in science. We fail so much more than we succeed,” said Fisher. “But we keep iterating and improving. I’m proud of our resilience.” 

The team’s persistence paid off. Looking ahead, they aim to further develop their algorithm using real-world samples with higher background signals. 

Other LLNL coauthors include Roald Leif, Alex Vu, Mark Dreyer, Brian Mayer and Audrey Williams.




Heineken to slash up to 6,000 jobs in AI ‘productivity savings’ amid slump in beer sales



Dutch brewer Heineken is planning to lay off up to 7% of its workforce, as it looks to boost efficiency through productivity savings from AI, following weak beer sales last year.

The world’s second-largest brewer reported lackluster earnings on Wednesday, with total beer volumes declining 2.4% over the course of 2025, while adjusted operating profit was up 4.4%.

The company also said it plans to cut between 5,000 and 6,000 roles over the next two years and is targeting operating profit growth in the range of 2% to 6% this year. Heineken’s shares were last seen up 3.4%, and the stock is up nearly 7% so far this year.

[Chart: Heineken shares year-to-date]

Outgoing CEO Dolf van den Brink told CNBC’s “Squawk Box Europe” on Wednesday that the results were due to “challenging market circumstances,” but performance was overall well-balanced.

Heineken’s outlook for 2026 comes in below the usual range but “is in line with buyside expectations and consistent with peer Carlsberg, and prudent in light of a new incoming” CEO, UBS analysts said in a note on Wednesday.

Regarding the cuts, Van den Brink said: “Productivity has been a top priority in our evergreen strategy… we committed to 400 to 500 million euros ($476 million to $600 million) of savings on an annual basis, and this is a first operationalization of that debt commitment.”

The job reductions will help the brewer to invest in growth and in its premium brands, he said.

Van den Brink acknowledged that the cuts came “partly also due to AI, or let’s say digitization.”

“That’s a very big part of our EverGreen 2030 strategy, with around 3,000 roles moving to our business services, where technology digitization in general, and AI specifically, will be an important part of ongoing productivity savings,” he said.

The EverGreen 2030 strategy focuses on three core areas: accelerating growth, increasing productivity and becoming future-fit.

The company, headquartered in the Netherlands, has 87,000 employees and operates in over 70 countries.

Van den Brink is due to step down from his leadership position in May after six years at the helm. Heineken is currently searching for a successor.

More AI layoffs


Firms that cited AI in layoffs in 2025 range from Amazon, which announced 15,000 cuts last year, to Salesforce, with CEO Marc Benioff saying he let go of 4,000 customer support workers as AI was supposedly doing 50% of the work at the company.

Some European companies that cited AI in restructuring strategies were airline group Lufthansa and tech consultancy firm Accenture.

Kristalina Georgieva, managing director at the International Monetary Fund, told CNBC at the World Economic Forum in January that AI is “hitting the labor market like a tsunami” and warned that “most countries and most businesses are not prepared for it.”

— CNBC’s Steve Sedgwick, Karen Tso, and Ben Boulos contributed to this report.

Correction: This story has been updated to correct the U.S. dollar conversion of Heineken’s planned annual savings.


HKIAS Annual General Meeting 2025: Commemorating a Decade of Excellence and Embracing Future Endeavors | Newswise


Newswise — The Hong Kong Institute for Advanced Study (HKIAS) hosted its Annual General Meeting (AGM) on October 15, 2025, gathering Senior Fellows from across the globe to mark the Institute’s 10th anniversary and engage in discussions centered on strategic advancements in research and international collaboration. Under the leadership of Chairman Professor Serge Haroche, the meeting commenced with a warm welcome to the newly appointed HKIAS Senior Fellows: Professor Françoise Combes, Professor Étienne Ghys, Professor Dame Madeleine Atkins, Professor Alessio Figalli and Professor Sylvie Méléard. The Executive Director, Professor Shuk Han Cheng, presented a comprehensive review of recent initiatives at City University of Hong Kong (CityUHK) and HKIAS, highlighting current news and activities, collaborations between CityUHK faculty members and the Senior Fellows’ home institutions, and the Fellows’ significant achievements, underscoring a decade of excellence.

As a key component of the HKIAS 10th anniversary celebrations, HKIAS organised a series of distinguished lectures and a round table discussion. These activities, which showcased the cutting-edge research contributions of the Senior Fellows across a multitude of disciplines, were supported partially by the Kwang Hua Educational Foundation. Their reception among students and faculty at CityUHK and various academic institutions across Hong Kong highlighted a profound interest and active engagement within the academic community.

13 October: Professor Serge Haroche, an esteemed Nobel laureate in Physics, unveiled the intricacies of laser and quantum physics. On the same day, Professor Pierre-Louis Lions, the 1994 recipient of the Fields Medal, engaged the audience with a discourse on the intersection of mathematics and artificial intelligence (AI).

14 October: Professor George Fu Gao, a world-renowned virologist, delivered an insightful lecture on AI-empowered vaccine and antibody development. Additionally, Professor Mu-ming Poo, a distinguished figure in neuroscience and brain-inspired technology, illuminated the audience on brain science and its implications for AI development.

15 October: Professor Dame Madeleine Atkins, President Emeritus of Lucy Cavendish College at the University of Cambridge, led a Round Table Discussion on Additional Models of Research Grant Funding, with Mr David Foster, Executive Director of the Croucher Foundation, as the online guest speaker.

Throughout the AGM week, interdisciplinary meetings and networking events were integral to fostering mentorship opportunities and collaboration among HKIAS Senior Fellows, CityU Faculty members, emerging researchers and students from various disciplines.

These events reaffirmed HKIAS’s unwavering commitment to fostering global collaboration and scientific excellence over the past decade. As the Institute celebrates its 10th anniversary, it looks forward to organizing further initiatives that will enhance the international profile of the science and engineering community at CityUHK and explore new frontiers in research and collaboration.

For more details on the celebration events, please visit HKIAS past events.




As AI ‘very quickly’ blurs truth and fiction, experts warn of U.S. threat – National | Globalnews.ca


Less than two years ago, a federal government report warned Canada should prepare for a future where, thanks to artificial intelligence, it is “almost impossible to know what is fake or real.”


Now, researchers are warning that moment may already be here, and senior officials in Ottawa this week said the government is “very concerned” about increasingly sophisticated AI-generated content like deepfakes impacting elections.

“We are approaching that place very quickly,” said Brian McQuinn, an associate professor at the University of Regina and co-director of the Centre for Artificial Intelligence, Data and Conflict.

He added the United States could quickly become a top source of such content — a threat that could accelerate amid future independence battles in Quebec and particularly Alberta, whose independence movement has already been seized on by some U.S. government and media figures.


“We are 100 per cent guaranteed to be getting deepfakes originating from the U.S. administration and its proxies, without question,” said McQuinn. “We already have, and it’s just the question of the volume that’s coming.”

During a House of Commons committee hearing on foreign election interference on Tuesday, Prime Minister Mark Carney’s national security and intelligence advisor Nathalie Drouin said Canada expects the U.S., like all other foreign nations, to stay out of its domestic political affairs.

That came in response to the lone question from MPs about the possibility of the U.S. becoming a foreign interference threat on par with Russia, China or India.

The rest of the two-hour hearing focused on the previous federal election and whether Ottawa is prepared for future threats, including AI and disinformation.

“I do know that the government is very concerned about AI and the potentially pernicious effects,” said deputy foreign affairs minister David Morrison, who, like Drouin, is a member of the Critical Election Incident Public Protocol Panel tasked with warning Canadians about interference.




Asked if Canada should seek to label AI-generated content online, Morrison said: “I don’t know whether there’s an appetite for labelling specifically,” noting that’s a decision for platforms to make.


“It is not easy to put the government in the position of saying what is true and what is not true,” he added.

Ottawa is currently considering legislation that will address online harms and privacy concerns related to AI, but it’s not yet clear if the bill will seek to crack down on disinformation.

“Canada is working on the safety of that new technology. We’re developing standards for AI,” said Drouin, who also serves as deputy clerk of the Privy Council.


She noted that Justice Marie-Josée Hogue, who led the public inquiry into foreign interference, concluded in her final report last year that disinformation is the greatest threat to Canadian democracy — thanks in part to the rise of generative AI.


Addressing and combating that threat is “an endless, ongoing job,” Drouin said. “It never ends.”

The Privy Council Office told Global News it provided an “initial information session relating to deepfakes” to MPs on Wednesday, and would offer additional sessions to “all interested parliamentarians as well as to political parties over the coming weeks.”

Experts like McQuinn say such a briefing is long overdue, and that government, academia and media must also step up educating an already-skeptical Canadian public on how to discern truth from fiction.

“There should be annual training (for politicians and their staffs), not just on deepfakes and disinformation, but foreign interference altogether,” said Marcus Kolga, a senior fellow at the Macdonald-Laurier Institute and founder of DisinfoWatch.


“This needs leadership. Right now, I’m not seeing that leadership, but we desperately need it because all of us can see what is coming.”

Kolga also agreed there is “no doubt” that official U.S. government channels, and U.S. President Donald Trump himself, are becoming a major source of that content.

“The trajectory is rather clear,” he said. “So I think that we need to anticipate that that’s going to happen. Reacting to it after it happens isn’t all that helpful — we need to be preparing at this time.”

Threat growing from the U.S., researchers say

Morrison noted Tuesday that the elections panel, as well as the Security and Intelligence Threats to Elections (SITE) task force, did not observe any significant use of AI to interfere in last year’s federal election.

However, he added that “our adversaries in this space are continually evolving their tactics, so it’s only a matter of time, and we do need to be very vigilant.”


The Communications Security Establishment and the Canadian Centre for Cyber Security have issued similar warnings recently about hostile foreign actors further harnessing AI over the next two years against “voters, politicians, public figures, and electoral institutions.”

Researchers now say the U.S. is quickly becoming a part of that threat landscape.

McQuinn said part of the issue is the online disinformation that Canadians see is being spread primarily on American-owned social media platforms like X and Facebook, with TikTok now under U.S. ownership as well.

That has posed challenges to foreign countries trying to regulate content on those platforms, with European and British laws facing resistance and hostility from the companies and the Trump administration, which has threatened severe penalties, including tariffs and even sanctions.

Digital services taxes that sought to claw back revenues for operating in foreign countries have been identified by the U.S. as trade irritants, with Canada’s tax nearly scuttling negotiations last year before it was revoked.

Kolga noted the spread of disinformation by U.S. content creators and platforms is not new, whether it originates from America or from elsewhere in the world. Other countries, including Russia, India and China, are known to use disinformation campaigns and have been identified in Canadian security reports as significant sources of foreign interference efforts.

Russia has also been accused of covertly funding right-wing influencers in the U.S. and Canada to push pro-Russian talking points and disrupt domestic affairs.


What is new, McQuinn said, is the involvement of Trump and his administration in pushing that disinformation, including AI deepfakes.




While much of the content is clearly fake or designed to elicit a reaction — a White House image showing Trump and a penguin walking through an Arctic landscape suggested to be Greenland, or Trump sharing third-party AI content depicting him flying a feces-spraying fighter jet over protesters — there have been more subtle examples.

The White House was accused last month of using AI to alter a photo of a protester arrested in Minnesota during a federal immigration crackdown in the state to make the woman appear as though she were crying.

In response to criticism over the altered image, White House deputy communications director Kaelan Dorr wrote on X, “The memes will continue.” The image remains online.


“The present U.S. administration is the only western country that we know of (that) on a regular basis is publishing or sharing or promoting obvious fakes and deepfakes, at a level that has never been seen by a western government before,” McQuinn said.

He said the online strategy and behaviour matches that of common state disinformation actors like Russia and China, as well as armed groups like the Taliban, which don’t have “any respect” for the truth.

“If you don’t (have that respect), then you will always have an asymmetrical advantage against any actor, whether it’s state or non-state, who wants to in some way adhere to the truth,” he said.

“(This) U.S. administration will always have an advantage over Canadian actors because they no longer have any controls on them or restraints, because truth is no longer a factor in their communication.”




McQuinn added his own research suggests 83 per cent of disinformation is passed along by average Canadians who don’t immediately realize the content they’re sharing is fake.


“It’s not that they necessarily believe in the disinformation,” he said. “Something looks kind of catchy or aligns with their ideas of the world, and they will pass it on without reading in the second or third paragraph that the idea that they agreed with now morphs into something else.

“The good news is that Canadians are learning very quickly” how to spot things like deepfakes, he added, which is creating “a certain amount of skepticism that is naturally cropping up in the population.”

Yet Trump’s repeated sharing of AI content online that imagines U.S. control of Canada — an homage to his “51st state” threats — as well as tacit support between U.S. administration figures and the Alberta independence movement has researchers increasingly worried.

“My real concern is that when Donald Trump does order the U.S. government to start supporting some of those narratives and starts actually engaging in state disinformation, in terms of Canada’s unity — when that happens, then we’re in real trouble,” Kolga said.




Alphabet shares close flat after earnings beat. Here’s what’s happening



Alphabet’s shares closed largely flat on Thursday after the company beat Wall Street’s expectations on earnings and revenue, with artificial intelligence spending projected to more than double this year.

The Google parent closed nearly 2% lower on Wednesday. After the bell, Alphabet reported fourth-quarter revenue of $113.83 billion, above the $111.43 billion estimate from analysts polled by LSEG.

Its Google Cloud division had $17.66 billion in revenue versus a forecast of $16.18 billion, according to StreetAccount. YouTube Advertising posted $11.38 billion in revenue versus the estimated $11.84 billion.

The tech giant said it would significantly increase its 2026 capital expenditure to between $175 billion and $185 billion — more than double its 2025 spend. A significant portion of capex spending would go toward investing in AI compute capacity for Google DeepMind.

What analysts are saying

Barclays analysts said in a note Thursday that Infrastructure, DeepMind and Waymo costs “weighed on overall Alphabet profitability,” and will continue to do so in 2026.

“Cloud’s growth is astonishing, measured by any metric: revenue, backlog, API tokens inferenced, enterprise adoption of Gemini. These metrics combined with DeepMind’s progress on the model side, starts to justify the 100% increase in capex in ’26,” they said.

“The AI story is getting better while Search is accelerating – that’s the most important take for GOOG,” they added.

Deutsche Bank analysts said in a note Thursday that Alphabet has “stunned the world” with its huge capex spending plan. “With tech in a current state of flux, it’s not clear whether that’s a good or a bad thing,” they wrote.

Correction: This story has been updated to correct that Alphabet shares were down on Thursday.


AI, Automation, and Biosensors Speed the Path to Synthetic Jet Fuel | Newswise


BYLINE: Will Ferguson

Newswise — When it comes to powering aircraft, jet engines need dense, energy-packed fuels. Right now, nearly all of that fuel comes from petroleum, as batteries don’t yet deliver enough punch for most flights. Scientists have long dreamed of a synthetic alternative: teaching microbes to ferment plant material into high-performance jet fuels. But designing these microbial “mini-factories” has traditionally been slow and expensive because of the unpredictability of biological systems.

In a pair of recent studies, two teams at the Joint BioEnergy Institute (JBEI), which is managed by Lawrence Berkeley National Laboratory (Berkeley Lab), have demonstrated complementary ways to dramatically speed up this process. One combines artificial intelligence and lab automation to rapidly test and refine the genetic designs of biofuel-producing microbes. The other turns a microbe’s “bad habit” into a powerful sensing tool, uncovering hidden pathways that boost production.

Their shared target is isoprenol — a clear, volatile alcohol that can be converted into DMCO, a next-generation jet fuel with higher energy density than today’s conventional aviation fuels. Producing isoprenol efficiently has been a long-standing challenge in synthetic biology.

The two studies — one published in Nature Communications, the other in Science Advances — tackle different sides of this challenge. The first uses automation and machine learning to engineer Pseudomonas putida strains that produce five times more isoprenol than before. The second approach turns the bacterium’s natural fuel-sensing ability into an advantage. By rewiring that system into a biosensor, the team could rapidly screen millions of variants and identify strains that make up to 36 times more isoprenol.

“These are two powerful complementary strategies,” said senior author of the biosensor study Thomas Eng, JBEI deputy director of Host Engineering and a research scientist in Berkeley Lab’s Biological Systems and Engineering (BSE) Division. “One is data-driven optimization; the other is discovery. Together, they give us a way to move much faster than traditional trial-and-error.”

A new engine for strain design

The AI and automation study was led by Taek Soon Lee, director of Pathway and Metabolic Engineering at JBEI, and Héctor García Martín, director of Data Science and Modeling at JBEI, both staff scientists in Berkeley Lab’s BSE Division. They set out to accelerate one of synthetic biology’s most time-consuming steps: improving microbial production through a series of genetic tweaks to different combinations of genes. Traditionally, scientists alter a few genes at a time and test the results — a painstaking, intuition-driven process that can take months or even years to yield meaningful gains.

By contrast, the Berkeley Lab researchers built an automated pipeline that uses robotics to create and test hundreds of genetic designs in parallel. After each round, machine learning algorithms analyze the results to systematically suggest the next set of strain genetic designs. The result is a system that moves 10 to 100 times faster than conventional methods.

“Standard metabolic engineering is slow because you’re relying on human intuition and biological knowledge,” said García Martín. “Our goal was to make strain improvement systematic and fast.”

Lead author David Carruthers, a scientific engineering associate with JBEI and BSE, developed a robotic workflow that connects key lab steps into one automated system. Working with collaborators at Lawrence Livermore National Laboratory, the team introduced a custom microfluidic electroporation device that can insert genetic material into 384 Pseudomonas putida strains in under a minute — a task that typically takes hours by hand.

At the core of the system is CRISPR interference (CRISPRi), a tool that lets researchers “turn down” gene activity rather than switching genes off completely. This fine-tuning makes it possible to test subtle gene combinations that shape the cell’s metabolism and track the effects through detailed protein measurements. After each round, the machine learning model analyzes the results and recommends the next set of genes that are most likely to boost performance when dialed down.
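
A schematic of that loop, with hypothetical gene names, a random-forest surrogate standing in for the team’s machine learning model, and a random number generator standing in for the robotic build-and-measure step; this is an illustrative sketch of the design-build-test-learn pattern, not JBEI’s actual pipeline.

```python
# Schematic design-build-test-learn loop in the spirit described above.
# Gene names, the surrogate model and the measurement step are all
# illustrative assumptions.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestRegressor

GENES = [f"gene_{i}" for i in range(20)]      # hypothetical CRISPRi targets
candidates = [set(c) for c in itertools.combinations(GENES, 3)]

def encode(design):
    """Binary vector: 1 = gene dialed down by CRISPRi, 0 = left alone."""
    return [int(g in design) for g in GENES]

def build_and_measure(designs, seed):
    """Placeholder for the automated build and titer measurement."""
    rng = np.random.default_rng(seed)
    return rng.gamma(2.0, 1.0, size=len(designs))  # fake isoprenol titers

tested, titers = [], []
batch = candidates[:96]                        # first plate: arbitrary picks
for cycle in range(6):                         # six engineering cycles
    measured = build_and_measure(batch, seed=cycle)
    tested += [encode(d) for d in batch]
    titers += list(measured)
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(tested, titers)
    # Rank untested knockdown combinations by predicted titer and send
    # the most promising ones to the robots next round.
    pool = [d for d in candidates if encode(d) not in tested]
    preds = model.predict([encode(d) for d in pool])
    batch = [pool[i] for i in np.argsort(preds)[::-1][:96]]
```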

“Traditionally, optimizing production is a kind of guess-and-check process,” Carruthers said. “You make one change, test it, and hope you’re climbing toward a higher peak. By combining automation and machine learning, we were able to climb that landscape systematically — in weeks, not years.”

Lee, who led the metabolic engineering work, emphasized why this level of automation is so transformative for biology.

“We have been engineering Pseudomonas by hand for years, but biological experiments always come with small variations that are hard to control,” he said. “Automation gives us the ability to generate the same high-quality data every time, which is essential for machine learning to work well.”

Patrick Kinnunen, a former Berkeley Lab JBEI postdoctoral researcher who co-developed the data strategy, highlighted how crucial that reproducibility was for the algorithms. “Automation didn’t just make the experiments faster — it made the data cleaner,” he said. “That clarity is what lets it uncover non-intuitive genetic combinations that we probably would have missed by hand.”

Using their automated learning loop, the team completed six engineering cycles, each lasting just a few weeks instead of the months typical of manual workflows. They boosted isoprenol titers (the concentration of product in the culture) five-fold compared to their starting strain.

Turning a bug into a feature

Meanwhile, a second team led by Eng tackled a different but equally stubborn hurdle: how to select target genes that, when dialed down, significantly improve isoprenol production. The team’s microbe, Pseudomonas putida, posed a peculiar problem. It didn’t just make isoprenol; it also consumed the fuel molecule almost as soon as it produced it, undermining production efforts. Initially, this looked like a flaw. But during the COVID-19 pandemic, Eng and colleagues realized it might be a clue: if the microbe could sense and eat isoprenol, it likely had a built-in molecular sensor.

“There was a real ‘Aha!’ moment,” Eng said. “We had spent more than a year trying to figure out why the cells were consuming the product. One day we thought, ‘Wait, if they can sense it, there has to be a protein that detects it. Maybe we can turn that from a problem into a tool.’”

The team discovered the molecular system the microbe uses to sense isoprenol: two proteins that work together to detect the fuel and send signals inside the cell. They then rewired this system into a biosensor — a kind of biological “engine light” that turns on in proportion to how much fuel the cell produces.

Then came the clever twist: They linked the sensor to genes essential for survival, creating a system where only the microbes that make the most fuel can grow. Instead of measuring thousands of samples by hand, they let natural selection do the screening. This approach rapidly surfaced “champion” strains, including variants that produced up to 36 times more isoprenol than the original.
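
A toy simulation makes the logic concrete: if growth rate tracks production, repeated rounds of growth and dilution enrich the best producers without anyone measuring individual samples. All numbers below are illustrative, not taken from the study.

```python
# Toy simulation of growth-coupled selection: growth rate tracks
# isoprenol output, so serial passaging enriches top producers with no
# individual measurements. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
production = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)  # variants
abundance = np.full_like(production, 1.0 / production.size)

for passage in range(5):
    abundance *= np.exp(production)      # growth proportional to output
    abundance /= abundance.sum()         # dilute back to constant size

winners = np.argsort(abundance)[::-1][:5]
print("mean production at start:", round(production.mean(), 2))
print("production of dominant survivors:", np.round(production[winners], 2))
```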

“What started as a frustrating bug became our biggest asset,” Eng said. “We turned the microbe’s fuel-eating behavior into a sensor that reports and selects for the best producers automatically.”

The approach also revealed surprising biology: high-producing strains switched to feeding on their own amino acids once glucose ran out, sustaining production by rewiring their metabolism in unexpected ways. Just as importantly, the workflow can be applied to other molecules, offering a flexible new tool for rapidly engineering microbes — not just for isoprenol, but for a wide range of bio-based products.

Scaling up to industry-ready

Although developed independently, the two approaches fit together well. The AI-driven pipeline excels at rapidly optimizing combinations of a known set of gene targets, while the biosensor method is best for discovering novel gene targets, revealing genetic levers that would be difficult to predict.

“One is depth-first; the other is breadth-first,” Eng said. “Machine learning systematically optimizes combinations of annotated targets, while the biosensor approach starts fresh and lets the cells tell us which gene targets matter.”

Both teams are now working to scale their methods from lab experiments to industrially relevant fermentation systems — a critical step for producing synthetic aviation fuel at commercial levels. They’re also adapting their approaches to other microbes and target molecules, aiming to make them broadly applicable in biomanufacturing.

“If widely adopted, these approaches could reshape the industry,” García Martín said. “Instead of taking a decade and hundreds of people to develop one new bioproduct, small teams could do it in a year or less.”

Aindrila Mukhopadhyay, BSE deputy director for science, director of Host Engineering at JBEI, and a coauthor on the biosensor study, said these kinds of tools are changing how biological research gets done.

“Engineering biology is challenging due to the inherent unpredictability of metabolism and that makes the engineering slow,” Mukhopadhyay said. “By streamlining key steps — as we did through selections — and leveraging automation and AI, we’re making it a faster, more systematic process that is easier to adopt.”

JBEI is a Bioenergy Research Center funded by the Department of Energy Office of Science.

###

Lawrence Berkeley National Laboratory (Berkeley Lab) is committed to groundbreaking research focused on discovery science and solutions for abundant and reliable energy supplies. The lab’s expertise spans materials, chemistry, physics, biology, earth and environmental science, mathematics, and computing. Researchers from around the world rely on the lab’s world-class scientific facilities for their own pioneering research. Founded in 1931 on the belief that the biggest problems are best addressed by teams, Berkeley Lab and its scientists have been recognized with 17 Nobel Prizes. Berkeley Lab is a multiprogram national laboratory managed by the University of California for the U.S. Department of Energy’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.