AI regulation and Canadians’ privacy in wake of Tumbler Ridge shooting | Globalnews.ca


Regulators, cybersecurity and law experts all gathered in Victoria this month to work towards finding the balance between online safety, innovation and protecting Canadians’ privacy.


While dozens of workshops, keynotes and presentations covering a wide variety of topics were made at the Victoria International Privacy and Security Summit, last month’s mass shooting in Tumbler Ridge, B.C., was not far from people’s minds.

“In the ongoing context of discussions about whether platforms should be required to disclose information to prevent tragedies like Tumbler Ridge, it is an absolutely critical and timely topic,” said Canada’s privacy commissioner, Philippe Dufresne, in his keynote address.

“We need to ensure that Canadians are protected from imminent harms but we must — and can do so — in a way that protects Canadians’ privacy and includes appropriate thresholds,” Dufresne continued.

On Monday, 12-year-old Maya Gebala’s family filed a civil suit against tech giant OpenAI after the company disclosed that the shooter’s ChatGPT account had been disabled in June due to ‘violent activity’ but did not alert law enforcement.


Gebala’s family said she was shot in the head and neck while trying to lock the library door at her school to protect other students. She remains at BC Children’s Hospital, where she is being treated for serious injuries.




Tumbler Ridge family sues OpenAI


In February, OpenAI was summoned to Ottawa to discuss safety concerns and said it would enhance its police referral and repeat offender detection practices. An inquiry set to take place in B.C. will also look at the role artificial intelligence may have played in the shooting.

“Was there manipulation? Was there coercion? Or was it just enough to plant a seed?” Alberta Information and Privacy Commissioner Diane McLeod questioned. “I don’t know, but this is something that we need to be looking at very carefully.”

The lawsuit also claims that OpenAI took no steps to implement age verification or parental consent procedures and accuses the company of knowingly and intentionally permitting ChatGPT to provide pseudo-psychological treatment to the shooter.


This disturbing trend was addressed by Jim Richberg, Fortinet’s head of security, who also ran cyber operations for the Central Intelligence Agency (CIA) for two decades.

“For those of you who are parents, probably the most disturbing statistic I’ve seen is that more than one in seven teenagers in North America is taking mental health advice on a weekly basis from a GenAI chatbot,” Richberg said at the conference. “Almost none of their parents have any idea that it’s going on.”




OpenAI agrees to strengthen safety & security protocols


How to protect kids from online harm while safeguarding their privacy is an ongoing and complex discussion that is being examined by legal experts throughout Canada.


“The basic problem has been the design of these spaces and the algorithms that are pushing content to children that is causing deep harm,” said Emily Laidlaw, an associate professor of law at the University of Calgary and the Canada Research Chair in cybersecurity law.


“The gore videos, eating disorder content and violence, all of which is creating this kind of space that’s manipulating children’s thoughts and there’s no ability for parents to necessarily know that it’s happening,” Laidlaw explained.

Laidlaw said in the wake of the Tumbler Ridge shooting, previously proposed legislation in Canada may now be outdated and will need to be modernized.

“I think that we will likely see that the Online Harms Act will be scrutinized in a different way now,” Laidlaw said. “These types of AI-facilitated harms really didn’t exist in the same way and that shows you how fast this space evolves.”

“I think that when it comes to the online harm space and the privacy space, we have to start thinking about legislation as being almost modular,” Laidlaw explained. “Like building blocks where you can think through inserting new types of technologies, new types of harms, to be able to respond. It needs to be iterative to stay current.”

On Thursday, the federal government tabled a new version of its “lawful access” legislation that would give police new powers to pursue online data for investigative purposes while addressing some of the privacy concerns raised by the original version of the bill.

The new bill would also allow Canadian police to seek authority, through a court, to request transmission data or subscriber information from a foreign company like Google, Meta or OpenAI. However, it does not address calls to require AI companies to report troubling online behaviour to police.


“I want to be clear what C-22 is not. It is not about surveillance of Canadians going on about their daily lives,” said Public Safety Minister Gary Anandasangaree. “It is about keeping Canadians safe in the online space.”





Canada introducing new version of ‘lawful access’ bill to give CSIS, police more online powers


While Ottawa has signalled new AI regulations are also on the way, some experts said they are long overdue.

“We’ve been thinking about the next round of legislation for a long time now, and it just doesn’t come,” said Teresa Scassa, the Canada Research Chair in Information Law and Policy at the University of Ottawa.

“What we need to be thinking about in terms of reform, is reform. We need it, and we need it now,” she said.

Scassa said while privacy commissioners across Canada are working diligently to address new issues sparked by new technology, including AI, they need more tools to enforce their recommendations and investigative findings.


“For example, the Federal Privacy Commissioner can issue findings, but [the Commissioner] has no order-making powers,” she said. “[He] can’t impose fines or penalties on organizations that refuse to comply with recommendations.”

“There’s a possibility of going to the federal court,” Scassa explained. “But again, it just makes it slower and more cumbersome, so it’s just having those additional tools that would make a very significant difference.”

The Office of the Privacy Commissioner of Canada (OPC) has recently taken on several high-profile investigations, including probes into TikTok; Aylo, which runs Pornhub and YouPorn; and an ongoing investigation into X Corp., which operates the social media platform X and the chatbot Grok.

Dufresne agreed that while some companies have made improvements, more tools would be welcomed.

“We stand really in stark contrast with colleagues from all over the world, and having this power would be important,” he said. “It’s not that those fines should be imposed frequently, but having the possibility of the fine is going to make sure that companies come to the table and that they’re prepared to protect privacy, right at the beginning.”





TikTok allowed to keep operating in Canada after security review


The OPC is also in the midst of developing a Children’s Privacy Code that addresses the handling of children’s personal information to ensure it’s protected and that children are able to exercise their privacy rights.

“I’m using the powers that I have, and that means developing a code that’s going to set out my expectations for platforms and that’s going to include a reflection on age assurance and age verification,” Dufresne said. “How do we make sure that we prevent access to certain websites for kids?”

Experts said one ongoing challenge that continues to arise when it comes to age verification is the amount of information that would need to be collected by the platform to prove a child’s age.

“When you start to build these safeguards in, you’re talking in part about personal data and collecting more personal data and of course, now we can use biometric data, which is much more sensitive,” Scassa said.


“You’ve also got the questions about how our data is being used to control our access to technologies, and in controlling our access to technologies, how it can also be used to monitor our use of those technologies.”

Alberta’s privacy commissioner said it was also a key issue during the joint investigation into TikTok, which included the OPC along with provincial counterparts in Alberta, B.C. and Quebec.

“There were hundreds of thousands of Canadian children on the site that were under the age of 13,” McLeod said. “One of the recommendations we made is [TikTok] had to implement proper age verification, but then the question is: how much information do you need to collect to verify age?

“What’s the appropriate mechanism within these social media platforms?” McLeod questioned. “You know, this is what keeps me up at night.”

Many of these issues are expected to be front and centre when Evan Solomon, the federal minister of Artificial Intelligence and Digital Innovation, makes stops in Alberta next week, including his first official visit to Calgary on Wednesday.

None of the allegations in the lawsuit against OpenAI have been proven in court.


With files from Amy Judd, Sean Boynton and Catherine Urquhart, Global News