HKIAS Annual General Meeting 2025: Commemorating a Decade of Excellence and Embracing Future Endeavors | Newswise


Newswise — The Hong Kong Institute for Advanced Study (HKIAS) hosted its Annual General Meeting (AGM) on October 15, 2025, gathering Senior Fellows from across the globe to mark the Institute’s 10th anniversary and engage in discussions centered on strategic advancements in research and international collaboration. Under the leadership of Chairman Professor Serge Haroche, the meeting commenced with a warm welcome to the newly appointed HKIAS Senior Fellows: Professor Françoise Combes, Professor Étienne Ghys, Professor Dame Madeleine Atkins, Professor Alessio Figalli and Professor Sylvie Méléard. The Executive Director, Professor Shuk Han Cheng, presented a comprehensive review of recent initiatives undertaken by City University of Hong Kong (CityUHK) and HKIAS, highlighting current news and activities, collaborations between CityUHK faculty members and the home institutions of the Senior Fellows, and the significant achievements of the Senior Fellows over a decade of excellence.

As a key component of the HKIAS 10th anniversary celebration activities, HKIAS organised a series of distinguished lectures and a round-table discussion. These activities, which showcased the cutting-edge research contributions of our Senior Fellows across a multitude of disciplines, were partially supported by the Kwang Hua Educational Foundation. Their reception among students and faculty at CityUHK and various academic institutions across Hong Kong reflected profound interest and active engagement within the academic community.

13 October: Professor Serge Haroche, an esteemed Nobel laureate in Physics, unveiled the intricacies of laser and quantum physics. On the same day, Professor Pierre-Louis Lions, the 1994 recipient of the Fields Medal, engaged the audience with a discourse on the intersection of mathematics and artificial intelligence (AI).

14 October: Professor George Fu Gao, a world-renowned virologist, delivered an insightful lecture on AI-empowered vaccine and antibody development. Additionally, Professor Mu-ming Poo, a distinguished figure in neuroscience and brain-inspired technology, delivered an illuminating lecture on brain science and its implications for AI development.

15 October: Professor Dame Madeleine Atkins, President Emeritus of Lucy Cavendish College at the University of Cambridge, led a Round Table Discussion on Additional Models of Research Grant Funding, with Mr David Foster, Executive Director of the Croucher Foundation, as the online guest speaker.

Throughout the AGM week, interdisciplinary meetings and networking events were integral to fostering mentorship opportunities and collaboration among HKIAS Senior Fellows, CityU Faculty members, emerging researchers and students from various disciplines.

These events reaffirmed HKIAS’s unwavering commitment to fostering global collaboration and scientific excellence over the past decade. As the Institute celebrates its 10th anniversary, we look forward to organizing further initiatives that will enhance the international profile of the science and engineering community at CityUHK and explore new frontiers in research and collaboration.

For more details on the celebration events, please visit HKIAS past events.




As AI ‘very quickly’ blurs truth and fiction, experts warn of U.S. threat – National | Globalnews.ca


Less than two years ago, a federal government report warned Canada should prepare for a future where, thanks to artificial intelligence, it is “almost impossible to know what is fake or real.”


Now, researchers are warning that moment may already be here, and senior officials in Ottawa this week said the government is “very concerned” about increasingly sophisticated AI-generated content like deepfakes impacting elections.

“We are approaching that place very quickly,” said Brian McQuinn, an associate professor at the University of Regina and co-director of the Centre for Artificial Intelligence, Data and Conflict.

He added the United States could quickly become a top source of such content — a threat that could accelerate amid future independence battles in Quebec and particularly Alberta, whose independence movement has already been seized on by some U.S. government and media figures.


“We are 100 per cent guaranteed to be getting deepfakes originating from the U.S. administration and its proxies, without question,” said McQuinn. “We already have, and it’s just the question of the volume that’s coming.”

During a House of Commons committee hearing on foreign election interference on Tuesday, Prime Minister Mark Carney’s national security and intelligence advisor Nathalie Drouin said Canada expects the U.S., like all other foreign nations, to stay out of its domestic political affairs.

That came in response to the lone question from MPs about the possibility of the U.S. becoming a foreign interference threat on par with Russia, China or India.

The rest of the two-hour hearing focused on the previous federal election and whether Ottawa is prepared for future threats, including AI and disinformation.

“I do know that the government is very concerned about AI and the potentially pernicious effects,” said deputy foreign affairs minister David Morrison, who, like Drouin, is a member of the Critical Election Incident Public Protocol Panel tasked with warning Canadians about interference.


Video: Canadian governments should regulate AI, 85% of Canadians say: poll


Asked if Canada should seek to label AI-generated content online, Morrison said: “I don’t know whether there’s an appetite for labelling specifically,” noting that’s a decision for platforms to make.


“It is not easy to put the government in the position of saying what is true and what is not true,” he added.

Ottawa is currently considering legislation that will address online harms and privacy concerns related to AI, but it’s not yet clear if the bill will seek to crack down on disinformation.

“Canada is working on the safety of that new technology. We’re developing standards for AI,” said Drouin, who also serves as deputy clerk of the Privy Council.


She noted that Justice Marie-Josée Hogue, who led the public inquiry into foreign interference, concluded in her final report last year that disinformation is the greatest threat to Canadian democracy — thanks in part to the rise of generative AI.


Addressing and combating that threat is “an endless, ongoing job,” Drouin said. “It never ends.”

The Privy Council Office told Global News it provided an “initial information session relating to deepfakes” to MPs on Wednesday, and would offer additional sessions to “all interested parliamentarians as well as to political parties over the coming weeks.”

Experts like McQuinn say such a briefing is long overdue, and that government, academia and media must also step up educating an already-skeptical Canadian public on how to discern truth from fiction.

“There should be annual training (for politicians and their staffs), not just on deepfakes and disinformation, but foreign interference altogether,” said Marcus Kolga, a senior fellow at the Macdonald-Laurier Institute and founder of DisinfoWatch.


“This needs leadership. Right now, I’m not seeing that leadership, but we desperately need it because all of us can see what is coming.”

Kolga also agreed there is “no doubt” that official U.S. government channels, and U.S. President Donald Trump himself, are becoming a major source of that content.

“The trajectory is rather clear,” he said. “So I think that we need to anticipate that that’s going to happen. Reacting to it after it happens isn’t all that helpful — we need to be preparing at this time.”

Threat growing from the U.S., researchers say

Morrison noted Tuesday that the elections panel, as well as the Security and Intelligence Threats to Elections (SITE) task force, did not observe any significant use of AI to interfere in last year’s federal election.

However, he added that “our adversaries in this space are continually evolving their tactics, so it’s only a matter of time, and we do need to be very vigilant.”


The Communications Security Establishment and the Canadian Centre for Cyber Security have issued similar warnings recently about hostile foreign actors further harnessing AI over the next two years against “voters, politicians, public figures, and electoral institutions.”

Researchers now say the U.S. is quickly becoming a part of that threat landscape.

McQuinn said part of the issue is that the online disinformation Canadians see is spread primarily on American-owned social media platforms like X and Facebook, with TikTok now under U.S. ownership as well.

That has posed challenges for foreign countries trying to regulate content on those platforms, with European and British laws facing resistance and hostility from the companies and the Trump administration, which has threatened severe penalties, including tariffs and even sanctions.

Digital services taxes, which seek to recoup revenue from platforms operating in foreign countries, have been identified by the U.S. as trade irritants, with Canada’s tax nearly scuttling negotiations last year before it was rescinded.

Kolga noted the spread of disinformation by U.S. content creators and platforms is not new, whether it originates from America or from elsewhere in the world. Other countries, including Russia, India and China, are known to use disinformation campaigns and have been identified in Canadian security reports as significant sources of foreign interference efforts.

Russia has also been accused of covertly funding right-wing influencers in the U.S. and Canada to push pro-Russian talking points and disrupt domestic affairs.


What is new, McQuinn said, is the involvement of Trump and his administration in pushing that disinformation, including AI deepfakes.


Video: Trump defends AI image of himself as Pope, says Melania thought it was ‘cute’


While much of the content is clearly fake or designed to elicit a reaction (a White House image showing Trump and a penguin walking through an Arctic landscape suggested to be Greenland, or Trump sharing third-party AI content depicting him flying a feces-spraying fighter jet over protesters), there have been more subtle examples.

The White House was accused last month of using AI to alter a photo of a protester arrested in Minnesota during a federal immigration crackdown in the state to make the woman appear as though she were crying.

In response to criticism over the altered image, White House deputy communications director Kaelan Dorr wrote on X, “The memes will continue.” The image remains online.


“The present U.S. administration is the only western country that we know of (that) on a regular basis is publishing or sharing or promoting obvious fakes and deepfakes, at a level that has never been seen by a western government before,” McQuinn said.

He said the online strategy and behaviour matches that of common state disinformation actors like Russia and China, as well as armed groups like the Taliban, which don’t have “any respect” for the truth.

“If you don’t (have that respect), then you will always have an asymmetrical advantage against any actor, whether it’s state or non-state, who wants to in some way adhere to the truth,” he said.

“(This) U.S. administration will always have an advantage over Canadian actors because they no longer have any controls on them or restraints, because truth is no longer a factor in their communication.”


Video: Gazans react to Trump AI video promoting plan for “Riviera of the Middle East”


McQuinn added his own research suggests 83 per cent of disinformation is passed along by average Canadians who don’t immediately realize the content they’re sharing is fake.


“It’s not that they necessarily believe in the disinformation,” he said. “Something looks kind of catchy or aligns with their ideas of the world, and they will pass it on without reading in the second or third paragraph that the idea that they agreed with now morphs into something else.

“The good news is that Canadians are learning very quickly” how to spot things like deepfakes, he added, which is creating “a certain amount of skepticism that is naturally cropping up in the population.”

Yet Trump’s repeated sharing of AI content online that imagines U.S. control of Canada, echoing his “51st state” threats, as well as tacit support from U.S. administration figures for the Alberta independence movement, has researchers increasingly worried.

“My real concern is that when Donald Trump does order the U.S. government to start supporting some of those narratives and starts actually engaging in state disinformation, in terms of Canada’s unity — when that happens, then we’re in real trouble,” Kolga said.




Alphabet shares close flat after earnings beat. Here’s what’s happening



Alphabet’s shares closed largely flat on Thursday after the company beat Wall Street’s expectations on earnings and revenue, with artificial intelligence spending projected to rise sharply this year.

The Google parent closed nearly 2% lower on Wednesday. After the bell, Alphabet reported fourth-quarter revenue of $113.83 billion, above the $111.43 billion estimate from analysts polled by LSEG.

Its Google Cloud division had $17.66 billion in revenue versus a forecast of $16.18 billion, according to StreetAccount. YouTube Advertising posted $11.38 billion in revenue versus the estimated $11.84 billion.

The tech giant said it would significantly increase its 2026 capital expenditure to between $175 billion and $185 billion, more than double its 2025 spend. A significant portion of that capex would go toward AI compute capacity for Google DeepMind.

What analysts are saying

Barclays analysts said in a note Thursday that Infrastructure, DeepMind and Waymo costs “weighed on overall Alphabet profitability,” and will continue to do so in 2026.

“Cloud’s growth is astonishing, measured by any metric: revenue, backlog, API tokens inferenced, enterprise adoption of Gemini. These metrics combined with DeepMind’s progress on the model side, starts to justify the 100% increase in capex in ’26,” they said.

“The AI story is getting better while Search is accelerating – that’s the most important take for GOOG,” they added.

Deutsche Bank analysts said in a note Thursday that Alphabet has “stunned the world” with its huge capex spending plan. “With tech in a current state of flux, it’s not clear whether that’s a good or a bad thing,” they wrote.

Correction: This story has been updated to correct that Alphabet shares were down on Thursday.


AI, Automation, and Biosensors Speed the Path to Synthetic Jet Fuel | Newswise


BYLINE: Will Ferguson

Newswise — When it comes to powering aircraft, jet engines need dense, energy-packed fuels. Right now, nearly all of that fuel comes from petroleum, as batteries don’t yet deliver enough punch for most flights. Scientists have long dreamed of a synthetic alternative: teaching microbes to ferment plant material into high-performance jet fuels. But designing these microbial “mini-factories” has traditionally been slow and expensive because of the unpredictability of biological systems.

In a pair of recent studies, two teams at the Joint BioEnergy Institute (JBEI), which is managed by Lawrence Berkeley National Laboratory (Berkeley Lab), have demonstrated complementary ways to dramatically speed up this process. One combines artificial intelligence and lab automation to rapidly test and refine the genetic designs of biofuel-producing microbes. The other turns a microbe’s “bad habit” into a powerful sensing tool, uncovering hidden pathways that boost production.

Their shared target is isoprenol — a clear, volatile alcohol that can be converted into DMCO, a next-generation jet fuel with higher energy density than today’s conventional aviation fuels. Producing isoprenol efficiently has been a long-standing challenge in synthetic biology.

The two studies — one published in Nature Communications, the other in Science Advances — tackle different sides of this challenge. The first uses automation and machine learning to engineer Pseudomonas putida strains that produce five times more isoprenol than before. The second approach turns the bacterium’s natural fuel-sensing ability into an advantage. By rewiring that system into a biosensor, the team could rapidly screen millions of variants and identify strains that make up to 36 times more isoprenol.

“These are two powerful complementary strategies,” said senior author of the biosensor study Thomas Eng, JBEI deputy director of Host Engineering and a research scientist in Berkeley Lab’s Biological Systems and Engineering (BSE) Division. “One is data-driven optimization; the other is discovery. Together, they give us a way to move much faster than traditional trial-and-error.”

A new engine for strain design

The AI and automation study was led by Taek Soon Lee, director of Pathway and Metabolic Engineering at JBEI, and Héctor García Martín, director of Data Science and Modeling at JBEI, both staff scientists in Berkeley Lab’s BSE Division. They set out to accelerate one of synthetic biology’s most time-consuming steps: improving microbial production through a series of genetic tweaks to different combinations of genes. Traditionally, scientists alter a few genes at a time and test the results — a painstaking, intuition-driven process that can take months or even years to yield meaningful gains.

By contrast, the Berkeley Lab researchers built an automated pipeline that uses robotics to create and test hundreds of genetic designs in parallel. After each round, machine learning algorithms analyze the results to systematically suggest the next set of strain genetic designs. The result is a system that moves 10 to 100 times faster than conventional methods.

“Standard metabolic engineering is slow because you’re relying on human intuition and biological knowledge,” said García Martín. “Our goal was to make strain improvement systematic and fast.”

Lead author David Carruthers, a scientific engineering associate with JBEI and BSE, developed a robotic workflow that connects key lab steps into one automated system. Working with collaborators at Lawrence Livermore National Laboratory, the team introduced a custom microfluidic electroporation device that can insert genetic material into 384 Pseudomonas putida strains in under a minute — a task that typically takes hours by hand.

At the core of the system is CRISPR interference (CRISPRi), a tool that lets researchers “turn down” gene activity rather than switching genes off completely. This fine-tuning makes it possible to test subtle gene combinations that shape the cell’s metabolism and track the effects through detailed protein measurements. After each round, the machine learning model analyzes the results and recommends the next set of genes that are most likely to boost performance when dialed down.

“Traditionally, optimizing production is a kind of guess-and-check process,” Carruthers said. “You make one change, test it, and hope you’re climbing toward a higher peak. By combining automation and machine learning, we were able to climb that landscape systematically — in weeks, not years.”
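The design-build-test-learn loop Carruthers describes can be sketched in miniature. The toy below is purely illustrative: the gene count, the additive titer model standing in for real fermentation measurements, and the simple per-gene effect estimator are hypothetical placeholders, not JBEI's actual pipeline, algorithms, or data.

```python
import itertools
import random

random.seed(0)

N_GENES = 6  # hypothetical number of CRISPRi knockdown targets
BATCH = 8    # designs built and tested per round (robots do this in parallel)

# Hidden "ground truth" used only to simulate lab measurements:
# each knockdown has an unknown additive effect on isoprenol titer.
TRUE_EFFECT = [0.8, -0.3, 0.5, 0.1, -0.6, 0.9]

def measure_titer(design):
    """Simulate a fermentation run for a knockdown design (tuple of 0/1)."""
    return 1.0 + sum(e for e, on in zip(TRUE_EFFECT, design) if on)

def fit_effects(observed):
    """Naive per-gene effect estimate from observed (design, titer) pairs."""
    estimates = []
    for g in range(N_GENES):
        on = [t for d, t in observed if d[g]]
        off = [t for d, t in observed if not d[g]]
        estimates.append(sum(on) / len(on) - sum(off) / len(off)
                         if on and off else 0.0)
    return estimates

all_designs = list(itertools.product([0, 1], repeat=N_GENES))
observed = []
batch = random.sample(all_designs, BATCH)  # round 0: random starting batch

for _round in range(4):  # a few design-build-test-learn cycles
    observed += [(d, measure_titer(d)) for d in batch]  # "build and test"
    est = fit_effects(observed)                          # "learn"
    seen = {d for d, _ in observed}
    untested = [d for d in all_designs if d not in seen]
    # Rank untested designs by predicted titer and pick the next batch
    untested.sort(key=lambda d: sum(e for e, on in zip(est, d) if on),
                  reverse=True)
    batch = untested[:BATCH]

best_design, best_titer = max(observed, key=lambda x: x[1])
print(best_design, round(best_titer, 2))
```

Because the learner concentrates each new batch on the most promising region of the design space, it reaches high-titer combinations after a handful of rounds instead of exhaustively testing all 64 designs.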

Lee, who led the metabolic engineering work, emphasized why this level of automation is so transformative for biology.

“We have been engineering Pseudomonas by hand for years, but biological experiments always come with small variations that are hard to control,” he said. “Automation gives us the ability to generate the same high-quality data every time, which is essential for machine learning to work well.”

Patrick Kinnunen, a former Berkeley Lab JBEI postdoctoral researcher who co-developed the data strategy, highlighted how crucial that reproducibility was for the algorithms. “Automation didn’t just make the experiments faster — it made the data cleaner,” he said. “That clarity is what lets it uncover non-intuitive genetic combinations that we probably would have missed by hand.”

Using their automated learning loop, the team completed six engineering cycles, each lasting just a few weeks instead of the months typical of manual workflows. They boosted isoprenol titers (the concentration of product in the culture) five-fold compared to their starting strain.

Turning a bug into a feature

Meanwhile, a second team led by Eng tackled a different but equally stubborn hurdle: how to select target genes that, when dialed down, improve isoprenol production significantly. The team’s microbe, Pseudomonas putida, posed a peculiar problem. It didn’t just make isoprenol; it also consumed the fuel molecule almost as soon as it produced it, undermining production efforts. Initially, this looked like a flaw. But during the COVID-19 pandemic, Eng and colleagues realized it might be a clue: if the microbe could sense and eat isoprenol, it likely had a built-in molecular sensor.

“There was a real ‘Aha!’ moment,” Eng said. “We had spent more than a year trying to figure out why the cells were consuming the product. One day we thought, ‘Wait, if they can sense it, there has to be a protein that detects it. Maybe we can turn that from a problem into a tool.’”

The team discovered the molecular system the microbe uses to sense isoprenol: two proteins that work together to detect the fuel and send signals inside the cell. They then rewired this system into a biosensor — a kind of biological “engine light” that turns on in proportion to how much fuel the cell produces.

Then came the clever twist: They linked the sensor to genes essential for survival, creating a system where only the microbes that make the most fuel can grow. Instead of measuring thousands of samples by hand, they let natural selection do the screening. This approach rapidly surfaced “champion” strains, including variants that produced up to 36 times more isoprenol than the original.
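The growth-coupled selection described above can be illustrated with a toy simulation. Everything here is a hypothetical stand-in (the population size, the uniform starting distribution of production levels, and the assumption that a variant's chance of propagating is directly proportional to its production), meant only to show why serial passaging enriches high producers when survival is tied to output.

```python
import random

random.seed(1)

# Hypothetical starting library: 10,000 variants, each with a random
# isoprenol production level (arbitrary units). In the real screen these
# would be mutagenized P. putida strains.
population = [random.uniform(0.0, 1.0) for _ in range(10_000)]

def passage(pop, size=10_000):
    """One round of growth-coupled selection: a variant's chance of
    propagating is proportional to its production level, because the
    biosensor ties an essential gene's expression to isoprenol."""
    return random.choices(pop, weights=pop, k=size)

for _ in range(5):  # several serial passages
    population = passage(population)

mean_production = sum(population) / len(population)
print(round(mean_production, 2))  # enriched well above the starting mean of ~0.5
```

Under this toy model, no one ever measures individual variants; low producers simply fail to propagate, and after a few passages the population mean shifts strongly toward the top producers, which is the logic behind letting natural selection do the screening.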

“What started as a frustrating bug became our biggest asset,” Eng said. “We turned the microbe’s fuel-eating behavior into a sensor that reports and selects for the best producers automatically.”

The approach also revealed surprising biology; high-producing strains switched to feed on their own amino acids once glucose ran out, sustaining production by rewiring their metabolism in unexpected ways. Just as importantly, the workflow can be applied to other molecules, offering a flexible new tool for rapidly engineering microbes — not just for isoprenol, but for a wide range of bio-based products.

Scaling up to industry-ready

Although developed independently, the two approaches fit together well. The AI-driven pipeline excels at rapidly optimizing combinations of a known set of gene targets, while the biosensor method is best for discovering novel gene targets, revealing genetic levers that would be difficult to predict.

“One is depth-first; the other is breadth-first,” Eng said. “Machine learning systematically optimizes combinations of annotated targets, while the biosensor approach starts fresh and lets the cells tell us which gene targets matter.”

Both teams are now working to scale their methods from lab experiments to industrially relevant fermentation systems — a critical step for producing synthetic aviation fuel at commercial levels. They’re also adapting their approaches to other microbes and target molecules, aiming to make them broadly applicable in biomanufacturing.

“If widely adopted, these approaches could reshape the industry,” García Martín said. “Instead of taking a decade and hundreds of people to develop one new bioproduct, small teams could do it in a year or less.”

Aindrila Mukhopadhyay, BSE deputy director for science, director of Host Engineering at JBEI, and a coauthor on the biosensor study, said these kinds of tools are changing how biological research gets done.

“Engineering biology is challenging due to the inherent unpredictability of metabolism and that makes the engineering slow,” Mukhopadhyay said. “By streamlining key steps — as we did through selections — and leveraging automation and AI, we’re making it a faster, more systematic process that is easier to adopt.”

JBEI is a Bioenergy Research Center funded by the Department of Energy Office of Science.

###

Lawrence Berkeley National Laboratory (Berkeley Lab) is committed to groundbreaking research focused on discovery science and solutions for abundant and reliable energy supplies. The lab’s expertise spans materials, chemistry, physics, biology, earth and environmental science, mathematics, and computing. Researchers from around the world rely on the lab’s world-class scientific facilities for their own pioneering research. Founded in 1931 on the belief that the biggest problems are best addressed by teams, Berkeley Lab and its scientists have been recognized with 17 Nobel Prizes. Berkeley Lab is a multiprogram national laboratory managed by the University of California for the U.S. Department of Energy’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.