Evidence of meeting #108 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Ignacio Cofone  Canada Research Chair in AI Law and Data Governance, McGill University, As an Individual
Catherine Régis  Full Professor, Université de Montréal, As an Individual
Elissa Strome  Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research
Yoshua Bengio  Scientific Director, Mila - Quebec Artificial Intelligence Institute

12:20 p.m.

Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Dr. Elissa Strome

Obviously, these would have to be well-vetted individuals with the necessary skills and expertise to be able to provide this kind of advice. However, I think particularly when we think about legal scholars and scientific researchers who have the necessary expertise to understand the technical impacts of the technology, those would be important assets to bring into this conversation.

12:20 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Thank you.

Very quickly, we'll go to Mr. Bengio.

12:20 p.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

I agree with everything she said, but I want to add that there already are either for-profit or non-profit organizations—mostly in the U.S., but there could be some in Canada—working on AI safety. In other words, they are developing the technology to do what the regulator needs to do to figure out what is dangerous and what is not. I think this is a better route. It's going to take time for the government to build up that muscle; it's going to be much faster to work with non-governmental organizations that have that expertise.

12:20 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Thank you.

I have one more quick question for Ms. Strome.

Again, we're talking about developing an AI regulatory framework here. I don't necessarily know whether China and Russia, especially in the context of election interference, will apply the same types of safeguards for actors in those respective countries as it relates to AI innovation and potential harms. There's a philosophical discussion going on right now that is almost about the race to the bottom. If we hinder ourselves with a regulatory approach that's overly burdensome, are we holding ourselves back from addressing those serious harms that can come and impact Canadian society?

12:20 p.m.

Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Dr. Elissa Strome

Well, I think one opportunity for optimism is to look at the recent U.K. AI Safety Summit that was held late last year at Bletchley Park. At that meeting, representatives from the Chinese government were participating in those international discussions about the opportunity to work collaboratively with like-minded nations around the world to think about understanding, assessing and mitigating the risks of AI. I think we have to remain optimistic and hopeful and open to the opportunity for discourse and collaboration.

12:20 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

I'm an MP, and I have to remain constantly skeptical because I'm thinking about my one-year-old. Many of us around this table have kids, and I'm hearing about these 20-year threats. My daughter is going to be 21 in 20 years. The world that she's going to enter will be crazy. I don't know if there can be a regulatory approach or if we can even stop it. We might just be fooling ourselves that we can do anything to stop what's going to happen.

Can Mr. Bengio comment on that?

12:20 p.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

You're right: There's nothing we know right now that provides full guarantees that we can avoid all the harms that powerful AI systems can bring. However, it would be foolish not to try to move the needle towards more safety. In particular, we should be making sure that companies here behave well.

As for what Chinese organizations are doing, we should prepare countermeasures, and maybe this is not the purview of this law. This is more like a national security investment that needs to be made in order to protect Canadians against these attacks.

12:25 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Vis. You are out of time.

Ms. Lapointe, you have the floor.

February 5th, 2024 / 12:25 p.m.

Liberal

Viviane Lapointe Liberal Sudbury, ON

Thank you, Mr. Chair.

Dr. Strome, you cited three priorities for the government in your opening statement. When you were talking about the second priority, flexibility, you said it was important to note that AI was not contained within borders and that Canada should create systems and partnerships. It struck me, as you raised this point, that Canadian legislation would not be effective outside of Canada, so the point you raised was very relevant.

Can you expand on what you see as good and needed systems and partnerships?

12:25 p.m.

Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Dr. Elissa Strome

Absolutely.

We have many opportunities to work with like-minded peer nations around the world. Obviously, we are close allies with the U.S., the U.K. and other G7 countries. All of these countries are grappling with the same issues related to the risks associated with AI.

There are some good steps in the right direction. New systems are being developed and considered around international collaboration on the regulation of AI. One is the U.K. AI Safety Summit, which I just mentioned; it has become a group of like-minded countries that come together on a regular basis to explore and understand those risks and how we can work together to mitigate them.

It was really telling in the Bletchley Declaration, which was published following that meeting, that there was a recognition even in the statement that different countries will have different regulatory approaches, laws and legislation around AI. However, even within those differences, there are, first of all, opportunities to align, and even opportunities for interoperability. I think that's one great example, and it's an opportunity for Canada to actually make a really significant contribution.

12:25 p.m.

Liberal

Viviane Lapointe Liberal Sudbury, ON

It speaks to the concerns raised by my colleagues MP Masse and MP Généreux. My fear is that the good guys may be overly legislated and subject to red tape, while bad actors will have free rein without these international agreements. Do you also share those concerns?

12:25 p.m.

Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Dr. Elissa Strome

I think there are probably even deeper opportunities for collaboration and alignment on some of these issues, for sure.

12:25 p.m.

Liberal

Viviane Lapointe Liberal Sudbury, ON

The third priority you raised is the need for investments. In your opinion, where should investments first be directed to best accelerate the opportunities for Canada, while also protecting from individual harms and system risks?

12:25 p.m.

Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Dr. Elissa Strome

One of the areas we're deeply concerned about right now is the lack of investment in computing infrastructure within our AI ecosystem. Right now, there is truly a global race for computing technology. These large language models and advanced AI systems require very advanced and significant computational resources.

In Canada and the Canadian AI ecosystem, we don't have access on the ground to that level of computational power. Companies right now in Canada are buying it on the cloud. They're buying it primarily from U.S. cloud providers. Academics in Canada literally don't have access to that kind of technology.

For us to be able to develop the skills, tools and expertise to really interrogate these advanced AI systems and understand where their vulnerabilities are and where the safety and risk concerns are, we're going to need very significant computational power. As we talk about regulating AI, that goes for the academic sector, the government sector and the private sector as well. That's a critical component.

12:25 p.m.

Liberal

Viviane Lapointe Liberal Sudbury, ON

Mr. Cofone, I'd be interested in hearing your opinion on what kind of legal onus there should be on creators of high-impact AI systems and also on the platforms that allow the use of AI applications, such as Facebook and YouTube.

12:25 p.m.

Canada Research Chair in AI Law and Data Governance, McGill University, As an Individual

Ignacio Cofone

I think the main onus should be risk mitigation. This can go back to the principles of fairness, transparency and accountability that we were talking about at the very beginning of the session. It is important that creators and developers of AI systems keep track of the risks they create for a wide variety of harms when they are deploying and developing those systems, and that we have legal frameworks that will hold them accountable for that.

I think that also relates to your prior question. It is legitimately challenging and reasonably concerning that in other countries we may not be able to enforce the frameworks that are passed today. However, we should not let imperfect enforcement stop us from passing the rules and the principles that we believe ought to be enforced, because imperfect enforcement of them is better than not having enforcement at all.

This concern is similar to a concern that we had for privacy more than 20 years ago in relation to data that crosses borders. We didn't know whether we would be able to enforce Canadian privacy law abroad. Courts and regulators have surprised us with the extent to which they are sometimes able to do it.

12:30 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Ms. Lapointe.

Mr. Williams, the floor is yours.

12:30 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you very much, Mr. Chair.

I want to go back to a question that my colleague Mr. Vis had for Madam Régis, but I'll go to other witnesses. I'll start with Professor Cofone.

AIDA itself proposes that we create an artificial intelligence and data commissioner who will not be an independent body but rather will report to ISED, to the industry minister. Do we need the AI commissioner to be an independent office or an officer of Parliament?

12:30 p.m.

Canada Research Chair in AI Law and Data Governance, McGill University, As an Individual

Ignacio Cofone

I think we would have enormous benefits from the AI commissioner's being an independent officer. An alternative, a second best, would be to shift some of the powers that are now vested in the AI commissioner onto the tribunal, which is set up as an independent entity. To give the tribunal a better composition, we could increase the proportion of experts who occupy positions on it to compensate for that.

12:30 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

In terms of process, then, would you see it working very much like the Privacy Commissioner or the Competition Commissioner?

12:30 p.m.

Canada Research Chair in AI Law and Data Governance, McGill University, As an Individual

Ignacio Cofone

Yes. I think we could have a system that operates like the Privacy Commissioner's. Under the structure of the proposed bill, we could have, for example, the AI commissioner carrying out investigations and then the tribunal enforcing the fines.

12:30 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

For the other witnesses, are there any comments on that?

Mr. Bengio, go ahead.

12:30 p.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

Yes, I also think there is good reason to make sure that the regulator is not going to be under a single mission.

ISED has an innovation mission, which is really about growing the economy through technology, but there are other aspects, especially harms, risks, national security risks and even global affairs questions, that the management and governance of AI by the government need to cover.

How to do that right I don't know, but I think it will be healthier if the organization doing this within our government isn't under a single particular ministry.

12:30 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Ms. Strome, we've talked in the past here quite a bit about how Canada has really fallen behind with AI when it comes to IP commercialization. We've lost a lot of our patents. I think China developed more patents in AI in one year than we do across all of our patents in a year. They're really ahead of us, along with the U.S. and others.

When it comes to developing and protecting that area and really being back to being a leader in AI again, how does Canada do that? What parts of this bill may prevent that? What parts do we need to add that might encourage that?

12:30 p.m.

Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Dr. Elissa Strome

I actually believe that patents aren't the only measure of the value in our AI ecosystem. In fact, I believe that talent is one of the strongest measures of the strength and the value of our AI ecosystem.

Patents, absolutely, are important, particularly for start-up companies that are trying to protect their intellectual property. However, much of the AI that's developed is actually released into the public domain; it's open-sourced. We derive significant value and innovative new products and services based on AI through the very highly skilled people who come together with the right resources, the right expertise, the right collaborators and the right funding to actually develop new innovations based on AI.

Patents are one measure, but they're not the only one, so I think that we need to take a broader view on that.

When we look at where Canada stands internationally, it's true that AI is on a very competitive global stage right now. One index is the Global AI Index. For many years, Canada sat fourth in the world, which is not bad for a small economy relative to some of the other players there. However, we are slipping on that index. Just this year, we slipped from fourth to fifth position, and when you look deeply into the details of where we're losing ground on AI, you see that much of it comes from the lack of investment in AI infrastructure. Other countries are making significant pledges, significant commitments and significant investments in building and advancing AI infrastructure, and Canada has not kept pace with that.

In the most recent index, we actually dropped from 15th to 23rd in the world on AI infrastructure, and that affects our global competitiveness.