INDU Committee Meeting


Standing Committee on Industry and Technology


NUMBER 108 | 1st SESSION | 44th PARLIAMENT

EVIDENCE

Monday, February 5, 2024

[Recorded by Electronic Apparatus]

(1105)

[Translation]

    I call the meeting to order.
    Good morning one and all. Welcome to meeting number 108 of the House of Commons Standing Committee on Industry and Technology. Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders.
    Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming its study of Bill C-27, an act to enact the consumer privacy protection act, the personal information and data protection tribunal act and the artificial intelligence and data act and to make consequential and related amendments to other acts.
    Today's witnesses are all joining us by video conference. We have with us Ignacio Cofone, Canada research chair in artificial intelligence law and data governance at McGill University; Catherine Régis, full professor at Université de Montréal; Elissa Strome, executive director of pan-Canadian AI strategy at the Canadian Institute for Advanced Research; and Yoshua Bengio, scientific director at Mila - Quebec Artificial Intelligence Institute.
    Welcome and thank you all for being with us.
    Since we are already a bit behind schedule, I'm going to turn the floor right over to you, Mr. Cofone. You have five minutes for your opening statement.

[English]

    Good morning, everyone, and thank you for the invitation to share with the committee my thoughts on Bill C-27.
    I'm appearing today in my personal capacity. Mr. Chair has already introduced me, so I'm going to skip that part and say that it is crucial that Canada have a legal framework that fosters the enormous benefits of AI and data while preventing its population from becoming collateral damage.
    I'm happy to share my broad thoughts on the act, but today I want to focus on three important opportunities for improvement while maintaining the general characteristics and approach of the act as proposed. I have one recommendation for AIDA, one for the CPPA and one for both.
    My first recommendation is that AIDA needs an improved definition of “harms”. AIDA is an accountability framework, and the effectiveness of any accountability framework depends on what it is that we hold entities accountable for. AIDA currently recognizes property, economic, physical and psychological harms, but for it to be helpful and comprehensive, we need one more step.
    Consider the harms to democracy that were imposed during the Cambridge Analytica scandal and consider the meaningful but diffuse and invisible harms that are inflicted every day through intentional misinformation that polarizes voters. Consider the misrepresentation of minorities that disempowers them. These go unrecognized by the current definition of “harms”.
    AIDA needs two changes to recognize intangible harms beyond individual psychological ones: It needs to recognize harms to groups, such as harms to democracy, as AI harms often affect communities rather than discrete individuals, and it also needs to recognize dignitary harms, like those stemming from misrepresentation and from the deepening of systemic inequalities through automated means.
    I therefore urge the committee to amend subsection 5(1) of AIDA to incorporate these intangible harms to individuals and to communities. I would be happy to propose suggested language.
     This fuller account of harms would bring Canada up to international standards, such as the EU AI Act, which considers harms to “public interest”, to “rights protected” by EU law, to a “plurality of persons” and to people in a “vulnerable position”. It would also better align with AI ethics frameworks, such as the Montreal declaration for responsible AI, the Toronto declaration and the Asilomar AI principles. You would also increase consistency within Canadian law, as the directive on automated decision-making repeatedly refers to “individuals or communities”.
    My second recommendation is that the CPPA must recognize inferences as personal information. We live in a world where things as sensitive and dangerous as our sexuality, ethnicity or political affiliation can be inferred from things as inoffensive as our Spotify listens, our coffee orders or our text messages, and those are just some of the inferences that we know about.
    Inferences can be harmful even when they are incorrect. The credit rating agency TransUnion, for example, was sued in the United States a couple of years ago for mistakenly inferring that hundreds of people were terrorists. By supercharging inferences, AI has transformed the privacy landscape.
     We cannot afford a privacy statute that focuses on disclosed information, building a back door into our privacy law that strips it of its power to create meaningful protection in today's inferential economy. The CPPA doesn't rule out inferences being personal information, but it doesn't incorporate them explicitly. It should. I urge the committee to amend the definition of personal information in one of the acts to say that “personal information” means disclosed or inferred information about an identifiable individual or group.
    This change would also increase consistency within Canadian law, as the Office of the Privacy Commissioner has repeatedly stated that inferences should be personal information, and also with international standards, as foreign data protection authorities emphasize the importance of inferences for privacy law. The California attorney general has also stated that inferences should be personal information for the purposes of privacy law.
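[Editor's note: The sketch below illustrates the witness's point about the inferential economy. It is a minimal, synthetic example added for clarity, not part of the testimony; the features, data and model are hypothetical.]

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Innocuous behavioural signals: weekly counts of, say, playlist genres
# or coffee orders. Entirely synthetic.
n = 1000
features = rng.poisson(lam=3.0, size=(n, 4))

# A synthetic "sensitive attribute" that happens to correlate with them.
logits = features @ np.array([0.8, -0.5, 0.3, -0.2]) - 1.0
sensitive = (logits + rng.normal(0.0, 1.0, n) > 0).astype(int)

# A simple model infers the sensitive attribute from the innocuous data.
model = LogisticRegression().fit(features, sensitive)
print(f"Inference accuracy from innocuous data: {model.score(features, sensitive):.0%}")

# The inferred attribute was never disclosed by anyone, yet it is information
# about identifiable individuals: the gap the witness wants the CPPA's
# definition of personal information to close.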
    My third, brief recommendation, which follows from the others, is reforming enforcement. As AI and data continue to seep into more aspects of our social and economic lives, one regulator with limited resources and personnel will not be able to keep an eye on everything. They will need to prioritize. If we don't want all other harms to fall through the cracks, both parts of the act need a combined public and private enforcement system, taking inspiration from the GDPR, so that we have an agency that issues fines without preventing the court system from compensating for tangible and intangible harm done to individuals and groups.
    We have also submitted a brief elaborating on the suggestions outlined here.
(1110)
     I'd be happy to address any questions or elaborate on anything.
    Thank you very much for your time.
    Thank you very much.

[Translation]

    Ms. Régis, you may now go ahead. You have five minutes for your opening statement.
     Good morning, Mr. Chair and members of the committee. Thank you for the opportunity to comment on the AI portion of Bill C-27.
    I am a full professor in the faculty of law at Université de Montréal. I am also the Canada research chair in collaborative culture in health law and policy, as well as the Canada-CIFAR chair in AI, affiliated with Mila. From January 2022 to December 2023, I co-chaired the Working Group on Responsible AI for the Global Partnership on AI.
    The first point I want to make is to reaffirm not only the importance, but also the urgency of creating a better legal framework for AI, as proposed in Bill C-27. That has been my view for the past five years, and I am now more convinced than ever, given the dizzying pace of recent developments in AI, which you are all familiar with.
    We need legal tools that are binding. They must clearly set out our expectations, values and requirements in relation to AI, at the national level. During the citizen consultations that culminated in the development of the Montréal Declaration for a Responsible Development of Artificial Intelligence, the first need identified was for an appropriate legal framework that would enable the development of trusted AI technologies.
    As you probably know, that trend has spread across the world, the most obvious example definitely being the European Union's efforts. As of last week, the EU is now one step closer to adopting a regulatory framework for AI.
    In addition to these national requirements, the global discussions around AI and the resulting decisions will have repercussions for every country. In fact, the idea of creating a specific AI authority is being discussed.
    In order to ensure that Canadian values and interests are taken into account in the international space, Canada has to be able to influence the discussions and decisions. Setting out a national vision with strong and clear standards is vital to playing a credible, meaningful and influential role in the global governance of AI.
    That said, I think Bill C-27 could still use some improvements. I will focus on two of them today.
    The first improvement is to make the artificial intelligence and data commissioner more independent. Although recent amendments have resulted in improvements, the commissioner is still very much tied to Innovation, Science and Economic Development Canada. To avoid any conflict of interest, real or apparent, the government should create more of a wall between the two entities. This would address any tensions that might arise between the government's role as a funder on one hand, and its role as a watchdog on the other.
    Possible solutions include creating an office of the artificial intelligence commissioner that is totally independent of the department, and empowering the commissioner to impose administrative monetary penalties or require that corrective actions be taken to address the accountability framework. In addition, the commissioner could be asked to recommend new or improved regulations informed by their experience as a watchdog, mainly through the annual public report.
    Other measures could also be taken. Once the legislation is passed, for instance, the government could give the commissioner the financial and institutional resources, as well as the qualified staff necessary to successfully carry out the duties of the commissioner. Making sure that the commissioner has the means to achieve their objectives is really important. Another possibility is to create a mechanism whereby the public could report issues directly to the commissioner. That would establish a relationship between the two.
    The second major improvement that's needed, as I see it, is to further strengthen the crucial role that human rights can play in analyzing the risks and impacts of AI systems. The importance of taking into account human rights in defining the classes of high-impact AI systems is specifically mentioned. However, the importance of then incorporating consideration of those rights in companies' assessments, which could include an analysis of the risks of harm and adverse effects, is not quite so clear.
    I would also recommend adding specific language to address the need to conduct impact assessments for human rights in relation to individuals or groups of individuals who may be affected by high-impact AI systems. A portion of those assessments could also be made public. These are sometimes called human rights impact assessments.
    The Council of Europe, the European Union with its AI legislation, and even the United Nations Educational, Scientific and Cultural Organization are working on similar tools, so exploring the possibility of sharing expertise would be worthwhile.
    The second recommendation is fundamental. While the AI race is very real, there can be no winner of the race to violate human rights. The legislation must make that clear.
    Thank you.
(1115)
    Thank you.
    Ms. Strome, go ahead.

[English]

     Hello. My name is Elissa Strome. I am the executive director of the pan-Canadian AI strategy at the Canadian Institute for Advanced Research, CIFAR.

[Translation]

    Thank you for the opportunity to meet with the committee today.

[English]

    CIFAR is a Canadian-based global research organization that brings together brilliant people across disciplines and borders to address some of the most pressing problems facing science and humanity. Our programs span all areas of human discovery.
     CIFAR's focus on pushing scientific boundaries allowed us to recognize the promise of an idea that Geoffrey Hinton came to us with in 2004—to build a new CIFAR research program that would advance the concept of artificial neural networks. At the time, this concept was unpopular, and it was difficult to find funding to pursue it.
     Twenty years later, this CIFAR program continues to put Canada on the global stage of leading-edge AI research and counts Professor Hinton, Professor Yoshua Bengio—who is here with us today—Professor Richard Sutton at the University of Alberta and many other leading researchers as members.
     Due to this early foresight and our deep relationships, in 2017, CIFAR was asked to lead the pan-Canadian AI strategy. We continue to work with our many partners across the country and across sectors to build a robust and interconnected AI ecosystem around the central hubs of our three national AI institutes: Amii in Edmonton, Mila in Montreal and the Vector Institute in Toronto. There are now more than 140,000 people working in the highly skilled field of AI across the country.
     However, while the pan-Canadian AI strategy has delivered on its initial promise to build a deep pool of AI talent and a robust ecosystem, Canada has not kept up in our regulatory approaches and infrastructure. I will highlight three priorities for the work of this committee and ongoing efforts.
     First is speed. We cannot delay the work of AI regulation. Canada must move quickly to advance our regulatory bodies and processes and to work collaboratively, at an international level, to ensure that Canada's responsible AI framework is coordinated with those of our partners. We must also understand that regulation will not hinder innovation but will enhance it, providing greater stability and ensuring interoperability and competitiveness of Canadian-led AI products and services on the global stage.
     Second is flexibility. The approach we take must be able to adapt to a fast-changing technology and global context. So much is at stake, with the potential for AI to be incorporated into virtually every type of business or service. As the artificial intelligence and data act reflects, these effects can have a high impact. This means we must take an inclusive approach to this work across all sectors, with ongoing public engagement to ensure citizen buy-in, in parallel with the development and refinement of these regulations.
     We also must understand that AI is not contained within borders. This is why we must have systems for monitoring and adapting to the global context. We must also adapt to the advances and potentially unanticipated uses and capabilities of the technology. This is where collaboration with our global partners will continue to be key and will call upon the strengths of Canada's research community, not only in ways to advance AI safety but also in the ethical and legal frameworks that must guide it.
     Third is investment. Canada must make significant investments in infrastructure, systems and qualified personnel to meaningfully regulate AI used in high-impact systems. We were glad to see this defined in the amendments to the act.
     Just like those in the U.S. and the U.K., our governments must staff up with the expertise to understand the technology and its impacts.
     For Canada to remain a leader in advancing responsible AI, Canadian companies and public sector institutions must also have access to the funding and computing power they need to stay at the leading edge of AI. Again, the U.S., the U.K. and other G7 countries have a head start on us, having already pledged deep investments in computing infrastructure to support their AI ecosystems, and Canada must do the same.
     I won’t pretend that this work won't be resource-intensive; it will be. However, we are at an inflection point in the evolution of artificial intelligence, and if we get regulation right, Canada and the world can benefit from its immense potential.
     To conclude, Canada has tremendous strengths in our research excellence, deep talent pool and rich, interconnected ecosystem. However, we must act smartly and decisively now. Getting our regulatory framework, infrastructure and systems right will be critical to Canada's continued success as a global AI leader.
     I look forward to the committee's questions and to the comments from my fellow witnesses.
    Thank you.
(1120)

[Translation]

    Thank you very much.
    It is now Mr. Bengio's turn.
    Good morning.
    First, I want to say how much I appreciate this opportunity to meet with the committee.
    My name is Yoshua Bengio, and I am a full professor at Université de Montréal, as well as the founder and scientific director of Mila - Quebec Artificial Intelligence Institute. Here's a fun fact: I recently became the most cited computer scientist in the world.

[English]

     Over the past year I've had the privilege of sharing my perspective on AI in a number of important international forums, including the U.S. Senate, the first global AI Safety Summit, an advisory board to the UN Secretary-General and the U.K. Frontier AI Taskforce, in addition to the work I'm doing here in Canada, co-chairing the government's advisory committee on AI.
    In recent years, the pace of AI advancement has accelerated to such a degree that I and many leaders in the field of AI have revised downwards our estimates of when human levels of broad cognitive competence, also known as AGI, will be achieved—in other words, when we will have machines that are as smart as humans at a cognitive level.
    This was previously thought to be decades or even centuries away. I now believe, with many of my colleagues, including Geoff Hinton, that superhuman AI could be developed in the next two decades, and even possibly in the next few years.
    If we look at the low end, we're not ready, and this prospect is extremely worrying.

[Translation]

    The prospect of the early emergence of human-level AI is very worrisome.

[English]

    As discussed in those international forums, without adequate guardrails, the current AI trajectory poses serious risks of major societal harms even before AGI is reached.
    To be clear, progress in AI has opened exciting opportunities for numerous beneficial applications that have motivated me for many years, yet it is urgent to establish the necessary guardrails to foster innovation while mitigating risks and harms.
    With that in mind, we urgently need agile AI legislation. I think this law is doing that, and is moving in the right direction, but initial requirements must be put in place even before the consultations are completed to develop the more comprehensive regulatory framework. With the current approach, it would take something like two years before enforcement would be possible.

[Translation]

    I therefore support AIDA broadly and would like to formulate recommendations to this committee on ways to strengthen its capacity to meaningfully protect Canadians. They are laid out in detail in my submission, but there are three things that I would like to highlight.
    The first is the urgency to adopt legislation.

[English]

    Upcoming advances are likely to be disruptive, and the timeline for them is very uncertain. In this situation, an imperfect law whose regulations can be adapted later is better than no law, and better than postponing a law for too long. The best course is to move forward with AIDA's framework and rely on agile regulatory systems that can be adapted as this technology evolves.
    Also, because of the urgency, the law should include initial provisions that will apply as soon as it is adopted to ensure the public's protection while the regulatory framework is being developed.
    What would we do as an initial step? I'm talking about a registry.
    Systems beyond a certain level of capability should report to the government and provide information about their safety and security measures, as well as safety assessments. A regulator will be able to use that information to form best-in-class requirements for future permits to continue developing and deploying these advanced systems. This would put the burden of demonstrating safety on the developers with the billions required to build these advanced systems, rather than on taxpayers.
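[Editor's note: The sketch below, added for clarity, illustrates the registry idea the witness describes. The field names and the use of a compute threshold as a capability proxy are assumptions, not drawn from AIDA or the testimony.]

from dataclasses import dataclass, field

@dataclass
class RegistryFiling:
    """What a developer of an advanced system might report to a regulator."""
    developer: str
    system_name: str
    training_compute_ops: float  # total operations used to train the system
    safety_measures: list[str] = field(default_factory=list)
    security_measures: list[str] = field(default_factory=list)
    safety_assessments: list[str] = field(default_factory=list)

    def must_register(self, threshold_ops: float = 1e26) -> bool:
        # Training compute is used here as a rough proxy for capability.
        return self.training_compute_ops >= threshold_ops

filing = RegistryFiling(
    developer="Example Labs",
    system_name="frontier-model-v1",
    training_compute_ops=2e26,
    safety_measures=["red-teaming", "refusal training"],
    security_measures=["weight access controls"],
    safety_assessments=["third-party dangerous-capability evaluations"],
)
print(filing.must_register())  # True: must file before continuing development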
     Second, another important point to add in the law is that national security risks and societal threats should be listed among the high-impact categories. Examples of capabilities to bring harm include being easily transformable to help bad actors design dangerous cyber-attacks and weapons, deceiving and manipulating as well as or better than humans, or finding ways to self-replicate in spite of contrary programming instructions.
    Finally, my last main point concerns the need for pre-deployment requirements. Developers should be required to register their system and demonstrate its safety and security even before the system is fully trained and developed, and before deployment. We need to address and target the risks that emerge earlier in an AI's life cycle, which the current law doesn't seem to do.
(1125)

[Translation]

    In conclusion, I welcome the committee's questions and look forward to hearing what my fellow witnesses have to say. All of their comments thus far have been quite interesting.
    At this point, I would like to thank you for having this important conversation.

[English]

     Thank you very much.
    To start our conversation, I will yield the floor to MP Perkins for six minutes.
    Thank you, witnesses, for the continuation of this very important piece of legislation and some very interesting opening testimony.
    Originally this bill proposed legislating and regulating only what it called “high-impact systems”, which would not be defined in the law but would be defined in the regulation at some future date.
    Is it Mr. Bengio or Dr. Bengio?
    Either way is fine.
    Dr. Bengio, we now have two added definitions in the draft amendments that Minister Champagne has made to the bill. The amendments add a definition, in a schedule, of “high impact”. They also add a new category, which is specifically machine learning, with a third being general purpose. Is “general purpose” getting too broad in terms of the power?
    It strikes me that much of the AI that will be used in business supports business processes that are not attached to individuals, the Internet or that kind of thing. There's a company in my riding that's trying to train an AI system to identify the difference between a scallop and a surf clam. To me, that's not something that is high impact. It may be for their business, but at the end of the day, it's just business efficiency. It has and will have a general purpose application, if I'm reading it right.
    Does the bill go too far with the general purpose provision?
    No. I think it's very important to cover the general purpose AI systems in particular, because they could be the most dangerous if misused. This is the place where there is also the most uncertainty about the harms that could follow from these systems.
     I think that having a law that says more oversight is necessary for these general purpose systems will also be an encouragement for developers to create more specialized systems. In fact, in most applications in business and science or medicine, we want a system that's very specialized on one particular kind of question we care about. Until recently, these were the only kinds of AI systems that we knew how to build. General purpose systems like the large language models can be specialized and turned into something specific that doesn't know everything about the world and only knows some specific questions, in which case they become much more innocuous.
    Thank you.
    Mr. Cofone, I have a question around your discussion about groups and larger harms.
    Some witnesses, way back at the beginning of this bill, from Jim Balsillie on, talked about the fact that the bill is absent in dealing with group harms and group risks to privacy as they relate to artificial intelligence. Could you expand that a little more? What would you see as needing to be added?
    You mentioned proposed subsection 5(1) of AIDA. Can you share with us a little more about what you had in mind?
    Of course. The directive on automated decision-making explicitly recognizes that harms can be done to individuals or communities, but when AIDA defines harm in proposed subsection 5(1), it refers repeatedly to individuals, for harm to property and for economic, physical and psychological harm.
    The thing is that harms in AIDA, by their nature, are often diffuse. Oftentimes they are harms to groups, not to individuals. For a good example of this, think of AI bias, which is covered in proposed subsection 5(2), not in 5(1). If you have an automated system that allocates employment, for example, and it is biased, it is very difficult to know whether a particular individual got or didn't get the job because of that bias. It is easier to see that the system may be biased towards a certain group.
    The same goes for representation issues in AI. An individual would have difficulty in proving harm under the act, but the harm is very real for a particular group. The same is true of misinformation. The same is true of some types of systemic discrimination that may not be captured by the current definition of bias in the act.
    What I would find concerning is that by regulating a technology that is more likely to affect groups rather than individuals under a harm definition that specifically targets individuals, we may be leaving out most of what we want to cover.
(1130)
     I very much look forward to getting your draft amendment on that and taking a look at it. Thank you.
    Thank you.
    I would like to ask this of perhaps all of the witnesses, maybe starting with Ms. Strome.
    We've had a great debate about Dr. Bengio's saying that having an imperfect bill is better than not having a bill. The challenge for parliamentarians is in two aspects of that.
    One, I never like passing an imperfect bill, especially one as important as this. I don't think there's any merit in sort of saying that we're number one because we got our first bill through. The way Parliament works is that it's five to 10 years before legislation comes back.
    I also don't like giving the department a blank cheque to basically not have to come back to Parliament on an overall public policy framework for how we're going to govern this. This bill lacks that. It just talks about the specifics of high-impact, general purpose and machine learning systems. It doesn't speak to the overall picture, the way the Canada Health Act does in referring to five principles.
    What are the five principles of AI, such as transparency and that kind of thing? The bill doesn't speak to that, and it governs all AI. I think that's an issue going forward. I also think that it's an issue to give the bureaucracy, while maintaining flexibility, total control over future development without having to seek approval from Parliament.
    I would like to ask all of the witnesses about the five things, four things or three things that are high-level philosophies about how we should govern AI in Canada, which this bill does not seem to define.
    I'll start with Ms. Strome, and then we'll go from there.
    Just to make sure that I understand correctly, are you asking us to zero in on areas that the bill doesn't currently address?
    No. It's sort of the high-level idea that all AI, when a user is interacting with it, needs to be transparent.
    What are similar types of philosophies, forgetting about whether it's high-impact machine learning or general purpose, that should govern all of this in the act, which the bill is missing?
    Absolutely.
    There's a broad international consensus about what constitutes safe and trustworthy AI. Whether it's the OECD principles or the Montreal declaration, many organizations have a common consensus about what constitutes responsible AI.
    These principles include having fairness as a primary concern. That ensures that AI delivers recommendations that treat people fairly and equitably and that there's no discrimination and no bias.
    Another principle is accountability, which means ensuring that AI systems and developers of AI systems are accountable for the impacts of the technologies that they are developing.
    Transparency is one that you mentioned. That ensures that we understand and have the opportunity to interrogate AI systems and models and get a better understanding of how they are coming towards the decisions and recommendations that they are developing.
    Privacy is a principle that is very deeply interconnected with the bill that's before you today. Those questions are deeply intertwined with AI as well, to ensure that the fundamental principles and rights of privacy are also protected.
(1135)
    Thank you very much, Madam Strome.
    Mr. Perkins, hopefully another MP will pick up where you left off. We're way over time.
    Mr. Turnbull, you have the floor.
    Thanks, Chair.
    Thanks to all the witnesses for being here today. It seems that we have some really important testimony, so thank you for making the time. Thank you for lending your expertise to this important conversation.
    I think we've all heard the phrase or the cliché that “perfection can be the enemy of the good”. I wonder if this is one of those instances.
    We have a very fast-evolving AI space and lots of expertise here in Canada, but then we have people with differing opinions. Some people say that we should split the bill up and do the AIDA portion over again. We have others saying that we need to move forward. In a lot of the opening testimony that I heard from you today, speed is of the essence.
    Mr. Bengio, maybe you can comment on whether you think that we should start over with AIDA and maybe comment on the importance of moving quickly.
     Yes, I mentioned urgency many times in my little presentation because you have to understand AI not as a static thing where we are now but as the trajectory that is happening in research and development, mostly in large companies but also in academia. As these systems become smarter and more powerful, their abilities have dual use, and that means more good and more harm can happen. The harm part is what we need government to protect us from.
    In particular, going back to the question from Mr. Perkins, we need to make sure that one of the principles is that major harm, such as a national security threat, will not be coming easily from the products that are considered legal and are within the law. This is why the high-impact category and maybe the different ways that it could be spelled out are so important.
    Thank you.
    I'll stick with you, Mr. Bengio, for the moment. I want to also ask you what the risks are to Canadians if AI is not regulated sooner rather than later. You've mentioned the idea that there's more good and more harm that can be done, but in the absence of any regulation and any law, what are the potential harms you see?
    Maybe the shortest-term concern, which was a priority, for example, for the experts consulted by the World Economic Forum just a few weeks ago, is disinformation. An example is the current use of AI deepfakes to imitate people: imitating their voices, rendering their movements in video and interacting through text and dialogue in ways that can fool a social media user and change their mind on political questions.
    There's real concern about the use of AI in politically oriented ways that go against the principles of our democracy. That's a short-term thing.
    The one that I would say is next, which may be a year or two later, is the threat of these advanced AI systems being used for cyber-attacks. These systems have been making rapid progress on programming in recent years, and that progress is expected to continue even faster than in any other ability, because we can generate an infinite amount of training data for it, just as with the game of Go. When these systems get strong enough to defeat our current cyber-defences and our industrial digital infrastructure, we are in trouble, especially if these systems fall into the wrong hands. We need to secure those systems. One of the things that the Biden executive order insisted on is that these large systems need to be secured to minimize those risks.
    Then there were other risks that people talk about, such as helping bad actors to develop new weapons or to have the expertise that they wouldn't have otherwise. All of these things need a law as quickly as possible to make sure that we minimize those risks.
(1140)
    Thank you for that.
    I also just wanted to mention something. I know you're aware as a signatory that our government developed a voluntary code of conduct for advanced generative artificial intelligence systems. I wanted to ask how AIDA builds on that voluntary code. Do you see the two as complementary, with the voluntary code preceding the bill and the bill actually adding on to that and furthering this mission of ensuring that we have a regulatory environment that provides some certainty?
    Can you speak to that, Mr. Bengio?
    Absolutely. You are exactly right.
    Voluntary codes are useful to get off the ground quickly, but there's no guarantee that companies will follow that code. Also, the voluntary code is very vague. We need to have more precision about criteria for what is acceptable and what is not. Companies, I think, need to have that.
    We've seen that some companies have even declared publicly in the U.S. that they wouldn't follow the Biden voluntary code, so I think we have no choice. We have to make sure that there's a level playing field. Otherwise, we're favouring the corporations that don't go by the voluntary code. For them it means less expense [Technical difficulty—Editor] with the public. We really need to have regulations and not just [Technical difficulty—Editor].
     Thank you. I think I got that last part. You got a little choppy.
    Is my time up, Chair?
    The Chair: Yes.
    Mr. Ryan Turnbull: Thank you very much.
    Thank you, Mr. Turnbull.

[Translation]

    Over to you, Mr. Garon.
    Thank you, Mr. Chair.
    Thank you to the witnesses for being with us.
    Professor Bengio, you talked about the imminent threat that disinformation poses to democracy. Deepfakes are now more and more common. You are appearing by video conference, so under the current regulatory framework, what assurances do I have that it is actually you taking part in today's meeting?
    That's a good question.
    We need rules to prevent exactly that. For example, computer systems such as Zoom and social media platforms should have to state clearly whether any video content, audio content or text is computer-generated, in other words by AI, or whether it is really coming from a human. We need laws to protect the public from that sort of thing.
    Companies should also be incentivized to develop technology, so we are better able to distinguish between what is real and what is fake.
    Recently, we've heard about scams that use AI to imitate people's voices and dupe a grandmother or grandfather. You'll have to forgive me if I don't use the right terminology. As I understand it, you are saying that the current regulatory framework neither requires companies nor incentivizes them—because there is a cost attached—to identify when something is fake.
    Does Bill C-27, in its current form, remedy that? Does it cover everything it should, or does it need to be strengthened?
    I think some aspects of the bill could do with being strengthened, but my colleague Ms. Régis could probably answer that better than I could.
    If I understand correctly, the amendments recently proposed by the minister reflect a desire to have AI-generated information identified for the public's sake. Yes, I think that is an important element to prevent confusion and an overall climate of distrust in society. I think it's definitely a good idea to pursue that legislatively.
    Thank you, Ms. Régis.
    Professor Bengio, in your opening statement, you talked about provisions that could be implemented right away, given the urgent need for action. You described something along the lines of a registry, whereby large generative AI systems and models would be registered with the government and include a risk evaluation.
    Basically, you're saying that we should do the same thing we do for drugs: before a drug is allowed on the market, the manufacturer has to show that it is safe and that the benefits outweigh the risks.
    Are you likening the challenge with AI systems to a public health issue, thereby warranting that companies submit substantial evidence about their products to a government agency?
(1145)
    Yes, that's right. It actually works that way in many sectors of our society, not just for drugs. Think of when a bridge or train is built, or when a new technique is developed to process meat. The public has to be protected so that things don't go awry. Companies have to be transparent and demonstrate that their products will not cause harm.
    To date, computer technology has escaped all that—the thinking was that it wouldn't have any significant impacts on society. Now we are at a point where computer technology, AI in particular, is about to completely transform society. Transformation can be good or bad, so we need a framework.
    Professor Bengio, my next question is for both you and Professor Régis.
    Now and again, we've been told that the industry is able to regulate itself. We've also been told that the voluntary approach can work. Personally, I'm not inclined to put a lot of faith in that approach. What do you make of the industry's ability to regulate itself?
    Here's some food for thought to help get you started. Isn't self-regulation an incentive for illicit actors to hitch a ride on the wagon of everyone else—all those who are self-regulating—and thus reap the benefits of not doing it themselves?
    What do you make of the voluntary approach?
    As I've said in response to previous questions, I think self-regulation can be a good intermediate step because of how quickly it can be put in place. Companies can work in coordination to establish certain standards. That's the upside of self-regulation.
    However, there are going to be bad actors, and there will be something of an incentive to cut corners if we don't have mandatory rules that are the same across the board.
    Professor Régis, is Canada a big enough player to regulate the industry adequately? A lot of Canadians think Canada is a major G7 country, but the reality is that Canada is a relatively small economy. Are we powerful enough to wield any influence?
    Influence is an issue, but I'd like to briefly comment on the self-regulation aspect, if I may. I think it's important. In my view, self-regulation clearly isn't adequate. There's a pretty strong consensus in the international community that opting strictly for self-regulation isn't enough. That means legislation has its place: it imposes obligations and formal accountability measures on companies.
    That said, it's important to recognize that this legislation, Bill C-27, is one tool in the important tool box we need to ensure the responsible deployment of AI. It's not the only answer. The law is important, but highly responsive ethical standards are also necessary. The tool box should include technical defensive AI, where you have AI versus AI. International standards as well as business standards need to be established. Coming up with a comprehensive strategy is really key. This bill won't fix everything, but it is essential. That's my answer to your first question.
    Sorry, could you please remind me what your second question was?
    Can Canada have any real clout, since it doesn't have a huge economy or a strong presence in the AI world?
    While Canada obviously doesn't have as much clout as China or the United States in AI development, it is still an important player for a number of reasons. First, Canada is known for its strong research capacity. Canada has been involved in various initiatives, including the creation of the Global Partnership on AI. That makes Canada an actor that wants to take a stand and whose voice in this space is still important.
    Nevertheless, if Canada doesn't want to fall behind, it needs to be true to its vision and values by taking very clear action at the national level. That will give Canada real credibility in this space.
    Thank you very much, Professor Régis.
    Mr. Masse, you have the floor.

[English]

     Thank you, Mr. Chair.
    Thanks to our witnesses.
    There are a couple of things that have worked in the past in this committee that have come to light on where we are right now and how it relates to the voluntary code. One of the things is that it used to be legal in Canada for businesses to write off fines and penalties on the environment or on anti-consumer court cases. They would actually get a tax deduction of up to 50% off the fines and penalties. Drug companies were fined for being misleading and environmental companies were fined for doing the wrong thing—actually, it wasn't environmental companies, but there was environmental damage that was done.
     It led to this imbalance that made it actually an incentive, a business-related expense, to go ahead with bad practices that affected people and the environment, because it actually paid off for them. It created an imbalance for innovation and so forth.
    The other one is my work on enacting the right to repair, which passed through this committee and was in the Senate. We ended up taking a voluntary agreement in the auto sector. We basically said that we got a field goal instead of a touchdown. This has now emerged again as an issue, because some of the industry will follow the voluntary agreement and some won't. Some wouldn't even sign on to the voluntary agreement, including Tesla, until recently. There are still major issues, and now they're back to lobbying here on the Hill. We knew about this vulnerability 10 years ago, when we started this: once the issue moved toward electronics and the sharing of information and data, things changed again, and there was nothing in place.
    My question is for Ms. Strome, Ms. Régis and Mr. Bengio.
    With this voluntary agreement, have we created a potential system right now whereby good actors will come to the table and follow a voluntary agreement while bad actors might actually use it as an opportunity to extend their business plans and knock out competition? I've seen that happen in those two examples that we've had there.
    I'll start with you, Ms. Strome, because you haven't been on yet. Then we can hear from Ms. Régis and Mr. Bengio, if we can go in that order, please.
(1150)
    I actually think that the voluntary code was an important and critical first step towards regulating this sector. It was a way to move things forward, and it was also a way to open the conversation and the discourse about the need for responsible AI practices, methodologies and approaches as we innovate in this sector. It was a very important first step, but it can't be the last step.
    As you identify and as others have recognized, voluntary codes of conduct and voluntary regulations are just that; they're voluntary. We need much firmer and clearer rules, regulations and guidelines about what our expectations are about how the technology is developed, deployed, and monitored and how it's assessed for its impact so that we understand what those impacts may be.
     Thank you.
    Go ahead, Ms. Régis.
    I think it goes back to my previous point. Elissa was very clear, and I agree, that it clearly is not enough. There is a way to really avoid having to comply with these voluntary norms. I'm a law professor, so for me it makes sense, for sure, to have binding regulations in that space, especially since there are a lot of power dynamics and economic interests at stake.
    With the proposed bill, one thing that is very important and that I like about it is that it focuses on ex ante measures. We've been talking about what happens if something goes wrong and how Canadians would suffer the consequences. Let's not wait for too many of those consequences; let's focus on requiring ex ante measures so that before anything important is launched on the market, companies do their due diligence and we have access to it. We make sure that it's transparent and that we have accountability mechanisms in place so that those consequences are avoided. We force that.
    Thank you.
    Go ahead, Dr. Bengio.
    I completely agree with everything my colleagues said. I'll just add this notion of creating a level playing field.
    If you have corporations that really want to do good and there's no mandatory regulation, then they're forced to do as bad as the worst guy in the class. What you want is the opposite: You want best in class. Without regulation, we get into the worst-in-class scenario, where the organizations that are less responsible end up winning.
    Thank you for that.
    I'll go to Mr. Cofone, but first, I guess, the trouble we're in right now is that we have this voluntary code out there already. There are the actions and deliberations of companies that are making decisions right now, some in one direction and some in another, until they're brought under regulatory powers. I think the ship has sailed, to some degree, in terms of where this can go. We're left with this bill and all the warts it has on a number of different issues.
    One thing that's a challenge—and maybe Mr. Cofone can highlight a little bit of this with his governance background—is that I met with ACTRA, the actors guild, and a lot of their concerns on this issue have to be dealt with through the Copyright Act. If we don't somehow deal with it in this bill, though, then we actually leave a gaping hole for not just abuse of the actors—that includes children—and their welfare, but we also leave a blind spot for how the public can be manipulated and so forth in everything from consumer society to politics to a whole slew of things.
    What do we do? Do you have any suggestions? How do we fix those components that we're not even...? It's a separate act.
(1155)
    Yes. Perhaps I can quickly add something on your earlier question, beyond my agreement with the three prior responses.
    The environment you brought up is a great example. In environmental law, years ago, we thought that regulating was challenging because we believed, mistakenly, that the costs of regulating were local while the harms were global. Under that view, not regulating meant developing your industry locally while the harms were diffused globally.
    With AI, it's the same. We think sometimes the harms are global and the costs of regulating are local, but that is not the case. Many of the harms of AI are local. That makes it urgent for Canada to pass a regulation such as this one, a regulation that protects its citizens while it fosters industry.
    On the Copyright Act, it's a challenging question. As Professor Bengio pointed out a bit earlier, AI is not just one technology. Technologies do one thing—self-driving cars drive and cameras film—but AI is a family of methods that can do anything. Regulating AI is not about changing one variable or the other; AI will actually change or affect all of the law. We will have to reform several statutes.
    What is being discussed today is an accountability framework plus a privacy law, because that's the one that's most intimately affected by AI. I do not think we should have the illusion that changing this will account for all AI and for all the effects of AI, or think that we should stop it because it doesn't capture everything. It cannot. I think it is worth discussing an accountability framework to account for harm and bias and it is worth discussing the privacy change to account for AI. It is also possibly warranted to make a change in the Copyright Act to account for generative AI and the new challenges it brings for copyright.
     Do I have any time left, Mr. Chair?
    You can go ahead, Brian.
    Okay.
    Really quickly, maybe I could get a yes-or-no answer on whether it's a good idea or a bad idea if, eventually, we got to a joint House and Senate committee that oversaw AI on a regular basis, similar to what's done for defence. Would that be a good thing or a bad thing? It would cover all those bases of other jurisdictions, rather than just the industry committee, if we had both houses meet and oversee artificial intelligence in the future.
    I know it's a hard one—yes or no—but I don't have much time.
    Could we go in reverse order? Thank you.
    Yes.
    Yes.
    I can't answer. I'm not enough of a legal scholar.
    That's fine. Fair enough. It's just an idea.
    Would you comment, Ms. Strome?
    I think that would be helpful.
    Okay. Thank you.
    Thank you, Mr. Chair.

[Translation]

    Thank you very much.
    Mr. Généreux, the floor is now yours.
    I'd like to thank all the witnesses. Today's discussions are very interesting.
    I'm not necessarily speaking to anyone in particular, but rather to all the witnesses.
    Bad actors, whether they be terrorists, scammers or thieves, could misuse AI. I think that's one of Mr. Bengio's concerns. If we were to pass Bill C‑27 tomorrow morning, would that prevent such individuals from doing so?
    To follow up on the question from my Bloc Québécois colleague earlier, it seems clear to me that, even in the case of a recorded message intended to scam someone, the scammer will not specify that the message was created using AI.
    Do you really believe that Bill C‑27 will change things or truly make Quebeckers and Canadians safer when it comes to AI?
(1200)
    I think so, yes. What it will do, for example, is force legitimate Canadian companies to protect the AI systems they've developed from falling into the hands of criminals. Obviously, this won't prevent these criminals from using systems designed elsewhere, which is why we have to work on international treaties.
    We already have to work with our neighbour to the south to minimize those risks. What the Americans are asking companies to do today includes this protection. I think that if we want to align ourselves with the United States on this issue to prevent very powerful systems from falling into the wrong hands, we should at least provide the same protection as they do and work internationally to expand it.
    In addition, sending the signal that users must be able to distinguish between artificial intelligence and non‑artificial intelligence will encourage companies to find technical solutions. For example, one of the things I believe in is that the companies making cameras and recorders should embed an encrypted signature in the content they capture, to distinguish what is generated by artificial intelligence from what is not.
    For companies to move in that direction, they need legislation to tell them that they need to move in that direction as much as possible.
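[Editor's note: The sketch below, added for clarity, shows the core of the provenance mechanism the witness describes: a capture device signs content at creation so that anyone can verify it came from a real camera. It simplifies what standards such as C2PA do; the keys and data are hypothetical.]

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # provisioned in the camera at manufacture
public_key = device_key.public_key()       # published by the manufacturer

frame = b"...raw image bytes..."
signature = device_key.sign(frame)         # shipped with the file as metadata

# Anyone can later check authenticity; AI-generated fakes carry no valid signature.
try:
    public_key.verify(signature, frame)
    print("Verified: captured by a real device.")
except InvalidSignature:
    print("No valid device signature: possibly AI-generated.")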
    Will Bill C‑27 allow it to be as effective as, or equivalent to, the U.S. presidential executive order currently in force?
    Do you think the Americans will then pass legislation that will go further than this current presidential executive order?
    The EU has already been much quicker to adopt measures than we've been. What is the intersection between Bill C‑27 and the bill that's about to be passed in Europe?
    I'll let my colleagues answer some of those questions. However, I would like to clarify something I proposed in what I said and wrote. It has to do with setting a criterion related to the size of the systems in terms of computing power, with the current threshold above which a system would have to be registered being 10^26 operations. That would be the same as in the United States, and it would bring us up to the same level of oversight as the Americans.
    This criterion isn't currently set out in Bill C‑27. I would suggest that we adopt that as a starting point, but then allow the regulator to look at the science and misuse to adjust the criteria for what is a potentially dangerous and high‑impact system. We can start right away with the same thing as in the United States.
    In Europe, they've adopted more or less the same system, which is also based on computing power. Right now, it's a simple, agreed‑upon criterion that we can use to distinguish between the potentially risky systems in the high‑impact category and the 99.9% of AI systems that pose no national security risk.
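[Editor's note: A back-of-envelope illustration of the 10^26-operation threshold discussed above, added for clarity. It uses the common approximation that training a dense model costs roughly 6 x parameters x training tokens operations; the model sizes shown are hypothetical.]

def training_ops(params: float, tokens: float) -> float:
    # Widely used rule of thumb for dense transformer training compute.
    return 6 * params * tokens

THRESHOLD = 1e26  # registration threshold in the U.S. executive order

for params, tokens in [(7e9, 2e12), (70e9, 15e12), (2e12, 20e12)]:
    ops = training_ops(params, tokens)
    status = "must register" if ops >= THRESHOLD else "below threshold"
    print(f"{params:.0e} params x {tokens:.0e} tokens -> {ops:.1e} ops ({status})")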
    Professor Régis, I'd like to hear your opinion on how our bill compares with the European legislation.
    I would like to raise a few small points. There was a question about whether the Canadian legislation will be sufficient. First, it will certainly help, but it won't be enough, given the other legislative orders that must be taken into account. The provinces have a role to play in this regard. In fact, as we speak, Quebec is launching its recommendations report on regulating artificial intelligence, entitled “Prêt pour l'IA”. The Government of Quebec has mandated the Conseil de l'innovation du Québec to propose regulatory options, so we have to consider that the Canadian legislation will be part of a broader set of initiatives that will help solidify the guarantees and protect us well.
    As for the United States, it's difficult to predict which way it will go next. However, President Biden's executive order was a signal of a magnitude few expected. So it's a good move then, and one to watch.
    Your question touches a bit on the really important issue of interoperability. How will Canada align with the European Union, the United States and others?
    As for the European case, the final text of the legislation was published last week. Since it's 300 pages long, I don't have all the details; however, I will tell you that we certainly have to think about it, so as not to penalize our companies. In other words, we really need to know how our legislation and Canadians are going to align with it, to a certain extent.
    Furthermore, one of the questions I have right off the bat is this. European legislation is more focused on high‑risk AI systems, and their legal framework deals more with risk, while ours deals more with impact. How can the two really work together? This is something that needs more thought.
(1205)
    I'm joking, but you could ask ChatGPT to summarize these 300 pages for you.
    That would be hilarious.
    I would like to add one thing. Having principles‑based legislation protects us from upcoming changes and provides the necessary consistency. It gives regulators the chance to adapt the key details of our regulations to our partners' regulations.
    Thank you.
    Thank you, Mr. Généreux.
    Mr. Van Bynen, you have the floor.

[English]

     Thank you very much, Mr. Chair.
    We've had a number of witnesses before us with a very wide range of perspectives, some of whom have told us to rip up the bill and start all over again. At the same time, we've also heard that the genie is out of the bottle. It's operating almost like the Wild West out there.
    My question is for Ms. Strome. In November 2023, CIFAR published “Regulatory Transformation in the Age of AI”. The report summary cautions that the current efforts to regulate AI will be doomed if they ignore a crucial aspect of the transformative impact of AI on the regulatory processes themselves.
    Can you go over the findings of this report in a little more detail?
    Thank you for the question. I'll give it a shot for you.
    That report was authored by Professor Gillian Hadfield, who's the director of the Schwartz Reisman Institute at the University of Toronto. She's a Canada CIFAR AI chair, and I believe she was a witness at this committee last week or the week before.
    As you can understand, Professor Hadfield is an expert in regulation and in particular has developed significant expertise in understanding AI regulation in Canada and internationally.
    The policy brief that we published at CIFAR represented ideas that came from Professor Hadfield and her laboratory, her research associates and her colleagues, really looking directly at the need to be more innovative as we think about regulating AI. This is a technology that's moving very quickly. It's a technology with so many dimensions that we haven't explored previously in other regulated sectors.
    I believe the point that Professor Hadfield and her colleagues were making was that as we think about regulating AI, we also need to be incredibly flexible, dynamic and responsive to the technology as it moves forward.
    How can the findings of that report inform the consideration as we develop the act we're considering now?
     I think the important point is the one that I made in my opening statement: the need for deep flexibility. As the technology develops quickly and the world moves quickly around the development, deployment and adoption of AI, the regulations also need to be flexible and dynamic and move quickly. Innovation will be necessary in how we approach the regulation of AI in Canada.
    It's things like bringing together multi-stakeholder groups to provide insight, ideas, advice and expertise. It's learning from the processes and approaches that the private sector is taking in order to comply with the regulations under the AI and data act. It's bringing together academics with government and private sector experts to learn from experiences, perspectives and approaches.
    I think it's that idea of being flexible, trying new things and really trying to stay perhaps just one step behind the advancement of the technology rather than the many steps that we are behind right now.
    Thank you.
    You've been the executive director of the pan-Canadian AI strategy since 2018. Did the work on this strategy inform the drafting of the artificial intelligence and data act, and if so, how?
(1210)
    Not directly.
    The pan-Canadian AI strategy at its inception was really designed to advance Canada's leadership in AI research, training and innovation. It really focused on building a deep pool of talented individuals with AI expertise across the country and fielding very rich, robust, dynamic AI ecosystems in our three centres in Toronto, Montreal and Edmonton. That was the foundation of the strategy.
    As the strategy evolved over the years, we saw additional investments in budget 2021 to focus on advancing the responsible development, deployment and adoption of AI, as well as thinking about those opportunities to work collaboratively and internationally on things like standards, etc.
    Indirectly, I would say that the pan-Canadian AI strategy has at least been engaged in the development of the AI and data act through several channels. One is through the AI advisory council that Professor Bengio mentioned earlier. He's the co-chair of that council. We have several leaders across the AI ecosystem who are participants and members on that council. I'm also a member on that council. The AI and data act and Bill C-27 have been discussed at that council.
    Second—
    Thank you—
    Go ahead.
    That's fine.
    I was just going to say that the AI institutes—the really central hubs of AI research, innovation and commercialization in the country—also had the opportunity to convene their members and contribute ideas and advice on the act.
    Thank you.
    I think I'm out of time, Mr. Chair.

[Translation]

    Thank you, Mr. Van Bynen.
    Mr. Garon, you have the floor.
    Thank you, Mr. Chair.
    Professor Régis, I don't want to make any assumptions or age us unnecessarily. However, when I was young, I watched films on television. After 10 minutes or so, there would be advertisements. Persuasive tactics were used to try to sell me products. It was obvious that persuasion was involved and that, if I watched these things, I was explicitly consenting to having all sorts of items sold to me.
    With all the artificial intelligence or non‑artificial intelligence algorithms out there, I find that it's now getting harder and harder to identify a persuasion tactic. This issue will become increasingly widespread. We're often asked to agree to something. However, the fine print makes it incomprehensible to the average person, or even to a highly educated person.
    First, do you agree that it's becoming harder and harder to consent to these tactics? Second, how can the quality of consent be improved in this situation? Third, is there any way to improve the current bill in order to enhance the quality of consent?
    That's a lot of questions. You have one minute and 15 seconds left. You can answer the questions in quick succession.
     Yes. It's easier than ever to be persuaded. That's actually one of the strategies. It can involve a very personalized approach based on your history and some of your personal data. This is indeed a problem. In the case of children, the issue gives me even greater cause for concern.
    This issue not only affects consumers, as you pointed out, but democracies in general. I'm concerned about being locked into bubbles where we receive only information that confirms certain things or that exposes us to less diverse viewpoints. This issue raises a wide range of concerns, which must be taken into account. That's my answer to the first question.
    Now, what more can we do? As I was saying, we need to think about this. Consumer protection is also at stake. We could do more work on the provincial component. In a recent study carried out by the Canadian Institute for Advanced Research, millions of tweets were analyzed to find out how people across Canada viewed artificial intelligence. Contrary to what you might think, people sometimes have an extremely positive view of artificial intelligence. However, they're less critical and less aware of what this technology actually does in their lives and of its limits. We often hear about legal issues, for example, but this awareness is in its infancy.
    One recommendation in the Quebec innovation council's report is to encourage people to develop a critical mindset and to think about what artificial intelligence is doing, how it can influence us, and how we can create a guide for defending ourselves against it. This must start at an early age.
(1215)
    Mr. Bengio, do you have any comments?
    I want to add something. The recommendations that I listed briefly at the start include a line that must not be crossed. Companies should not be allowed to improve artificial intelligence technology to the point where it can influence people better than a human being could. The impact may be devastating. We must build significant barriers to avoid reaching this dreaded stage.
    Thank you.
    Mr. Masse, you have the floor.

[English]

     Thank you, Mr. Chair.
    One of the things we keep hearing is that we're supposed to somehow not wait any longer but also be within the same framework as our international counterparts, many of whom have not acted yet, or we don't even know where they're going. Do you have any advice on that?
    We'll do a really quick round here. I get only two and a half minutes. I'll start with Mr. Cofone again, and then go around.
    Do you have any advice as to how we square that? That's the position we're in. We've been told, "Hurry up—wait."
    I think part of the answer to that is correctly following the risk-based approach that this act is taking. With a risk-based approach based on standards, rather than trying to make specific rules for the specific technologies we have now versus the ones we'll have in five years, we'll be able to adjust as the technology changes. Avoiding the temptation to regulate the technologies we had a couple of years ago and focusing on being technologically neutral, while at the same time putting enough content into the bill, will allow us to be future-proof and stay aligned with these international principles.
    I think part of that relates to the question that was asked just before yours about the impossibility of meaningfully consenting today to most data processing, because it is impossible to anticipate the inferential harms from AI. I think part of the answer is again following standards and focusing on things like privacy by design, data minimization and purpose limitation. These are independent of an individual's consent. This approach will allow our laws to adjust to the different ways in which inferential harms will mutate in the next 10 years, and it's similar to the approach the EU is taking for artificial intelligence.
    I think that is my time. Thanks, Mr. Chair.
    Thank you, Mr. Masse.
    Mr. Vis, the floor is yours.

[Translation]

    Ms. Régis, you started by talking about the commissioner's independence. On November 28, the minister sent us a letter explaining the artificial intelligence commissioner's powers and office structure. I gather that you believe that the commissioner should have an office, financial resources and independent employees.
    What do you think about the idea of creating an office that would report specifically to Parliament, to ensure the independence that you referred to?
    The idea of setting up an independent body to make recommendations to regulators or to society stakeholders at large isn't new. A number of models can serve as an inspiration. This includes the ombudsman model, which we all know, particularly at the provincial level. There's also the Competition Bureau.
    To briefly answer your question, it would be good to explore the idea of creating the position of a commissioner who would report completely independently, including to Canadians. The various current models could be studied to determine which model would be best.
    Thank you.

[English]

     Ms. Strome, you mentioned in your opening testimony that we need to invest in subject matter experts in the Department of Industry. I'm very concerned about this. We know that artificial intelligence operates not just in Canada; it's also global. Even if we have a regulatory approach in Canada, if this bill is indeed passed, I don't think we can isolate ourselves from the potential societal and individual harms that will come from AI actors in other parts of the world.
     I remember a few years ago that the Government of Canada—and I'm not trying to make a political point here—had a hard time operating its pay system for public servants.
    Mr. Brian Masse: It still does.
    Mr. Brad Vis: It still does.
    How in the world is Industry Canada going to regulate online harms from AI? They can't even manage their own pay systems. I just don't know if our public service is nimble enough right now to do the job we need it to do in the format suggested thus far.
(1220)
    Let me offer an opportunity and a suggestion.
    There's a lot of expertise in the public sector, in the academic sector and in the private sector in advancing and thinking about responsible AI and measuring and assessing its impact and harms. I think that's a huge opportunity for the Government of Canada to bring some of that expertise into the government, either temporarily or on a case-by-case basis, to assist the government in developing and monitoring and evaluating the risks associated with AI.
    Let me just stop you right there. Are you suggesting somewhat of a hybrid model, whereby private sector actors are integrated with public officials to monitor and regulate, and maybe even make quick decisions on potential harms Canadians face?
    I'll let Mr. Bengio jump in right after that quickly.
    Obviously, these would have to be well-vetted individuals with the necessary skills and expertise to be able to provide this kind of advice. However, I think particularly when we think about legal scholars and scientific researchers who have the necessary expertise to understand the technical impacts of the technology, those would be important assets to bring into this conversation.
    Thank you.
    Very quickly, we'll go to Mr. Bengio.
    I agree with everything she said, but I want to add that there already are either for-profit or non-profit organizations—mostly in the U.S., but there could be some in Canada—working on AI safety. In other words, they are developing the technology to do what the regulator needs to do to figure out what is dangerous and what is not. I think this is a better route. It's going to take time for the government to build up that muscle; it's going to be much faster to work with non-governmental organizations that have that expertise.
    Thank you.
    I have one more quick question for Ms. Strome.
    Again, we're talking about developing an AI regulatory framework here. I don't necessarily know whether China and Russia, especially in the context of election interference, will apply the same types of safeguards to actors in those countries as it relates to AI innovation and potential harms. There's a philosophical discussion going on right now about a race to the bottom. If we hinder ourselves with a regulatory approach that's overly burdensome, are we holding ourselves back from addressing the serious harms that can impact Canadian society?
    Well, I think one opportunity for optimism is to look at the recent U.K. AI Safety Summit that was held late last year at Bletchley Park. At that meeting, representatives from the Chinese government were participating in those international discussions about the opportunity to work collaboratively with like-minded nations around the world to think about understanding, assessing and mitigating the risks of AI. I think we have to remain optimistic and hopeful and open to the opportunity for discourse and collaboration.
     I'm an MP, and I have to remain constantly skeptical because I'm thinking about my one-year-old. Many of us around this table have kids, and I'm hearing about these 20-year threats. My daughter is going to be 21 in 20 years. The world that she's going to enter will be crazy. I don't know if there can be a regulatory approach or if we can even stop it. We might just be fooling ourselves that we can do anything to stop what's going to happen.
    Can Mr. Bengio comment on that?
    You're right: There's nothing we know right now that provides full guarantees that we can avoid all the harms that powerful AI systems can bring. However, it would be foolish not to try to move the needle towards more safety. In particular, we should be making sure that companies here behave well.
    As for what Chinese organizations are doing, we should prepare countermeasures, and maybe this is not the purview of this law. This is more like a national security investment that needs to be made in order to protect Canadians against these attacks.
(1225)
    Thank you very much, Mr. Vis. You are out of time.

[Translation]

     Ms. Lapointe, you have the floor.
    Thank you, Mr. Chair.

[English]

    Dr. Strome, you cited three priorities for the government in your opening statement. When you were talking about the second priority, flexibility, you said it was important to note that AI was not contained within borders and that Canada should create systems and partnerships. It struck me, as you raised this point, that Canadian legislation would not be effective outside of Canada, so the point you raised was very relevant.
    Can you expand on what you see as good and needed systems and partnerships?
    Absolutely.
    We have many opportunities to work with like-minded peer nations around the world. Obviously, we are close allies with the U.S., the U.K. and other G7 countries. All of these countries are grappling with the same issues related to the risks associated with AI.
    There are some good steps in the right direction. New systems are being developed and considered around international collaboration on the regulation of AI. One is the one I just mentioned, the U.K. AI Safety Summit, which is now a collection of like-minded countries that are coming together on a regular basis to explore and understand those risks and how we can work together to mitigate them.
    It was really telling in the Bletchley declaration, which was published following that meeting, that there was a recognition even in the statement that different countries will have different regulatory approaches, laws, and legislation around AI. However, even within those differences, there are, first of all, opportunities to align, and even opportunities for interoperability. I think that's one great example, and it's an opportunity for Canada to actually make a really significant contribution.
    It speaks to the concerns raised by my colleagues MP Masse and MP Généreux. My fear is that the good guys may be overly legislated and subject to red tape, while bad actors will have free rein without these international agreements. Do you also share those concerns?
    I think there are probably even deeper opportunities for collaboration and alignment on some of these issues, for sure.
    The third priority you raised is the need for investments. In your opinion, where should investments first be directed to best accelerate the opportunities for Canada, while also protecting from individual harms and system risks?
    One of the areas we're deeply concerned about right now is the lack of investment in computing infrastructure within our AI ecosystem. Right now, there is really and truly a global race for computing technology. These large language models and advanced AI systems really require very advanced and significant computational technology.
    In Canada and the Canadian AI ecosystem, we don't have access on the ground to that level of computational power. Companies right now in Canada are buying it on the cloud. They're buying it primarily from U.S. cloud providers. Academics in Canada literally don't have access to that kind of technology.
    For us to be able to develop the skills, tools and expertise to really interrogate these advanced AI systems and understand where their vulnerabilities are and where the safety and risk concerns are, we're going to need very significant computational power. As we talk about regulating AI, that goes for the academic sector, the government sector and the private sector as well. That's a critical component.
     Mr. Cofone, I'd be interested in hearing your opinion on what kind of legal onus there should be on creators of high-impact AI systems and also on the platforms that allow the use of AI applications, such as Facebook and YouTube.
     I think the main onus should be risk mitigation. This can go back to the principles of fairness, transparency and accountability that we were talking about at the very beginning of the session. It is important that creators and developers of AI systems keep track of the risks they create for a wide variety of harms when they are deploying and developing those systems, and that we have legal frameworks that will hold them accountable for that.
    I think that also relates to your prior question. It is legitimately challenging, and reasonably concerning, that we may not be able to enforce in other countries the frameworks we pass today. However, we should not let imperfect enforcement stop us from passing the rules and principles that we believe ought to be enforced, because imperfect enforcement is better than no enforcement at all.
    This concern is similar to one we had for privacy more than 20 years ago in relation to data that crosses borders. We didn't know whether we would be able to enforce Canadian privacy law abroad. Courts and regulators have surprised us with the extent to which they are sometimes able to do so.
(1230)

[Translation]

    Thank you, Ms. Lapointe.

[English]

    Mr. Williams, the floor is yours.
    Thank you very much, Mr. Chair.
    I want to go back to a question that my colleague Mr. Vis had for Madam Régis, but I'll go to other witnesses. I'll start with Professor Cofone.
    AIDA itself proposes that we create an artificial intelligence and data commissioner who will not be an independent body but rather will report to ISED, to the industry minister. Do we need the AI commissioner to be an independent office or an officer of Parliament?
    I think we would see enormous benefits from the AI commissioner's being an independent officer. An alternative, a second best, would be to shift some of the powers that are now vested in the AI commissioner onto the tribunal, which is set up as an independent entity. To compensate, and to give the tribunal a better composition, we could increase the proportion of experts who occupy positions on it.
    In terms of process, then, would you see it working very much like the Privacy Commissioner or the competition commissioner?
    Yes. I think we could have a system that operates like the Privacy Commissioner's. Under the structure of the proposed bill, we could have, for example, the AI commissioner carrying out investigations and then the tribunal enforcing the fines.
    For the other witnesses, are there any comments on that?
    Mr. Bengio, go ahead.
     Yes, I also think there is good reason to make sure that the regulator is not going to be under a single mission.
     ISED has an innovation mission, which is really about growing the economy thanks to technology, but there are other aspects, especially harms, risks, national security risks and even global affairs questions, that the government's management and governance of AI needs to cover.
     How to do that right I don't know, but I think it will be healthier if the organization doing this within our government isn't under a single particular ministry.
    Ms. Strome, we've talked in the past here quite a bit about how Canada has really fallen behind with AI when it comes to IP commercialization. We've lost a lot of our patents. I think China developed more patents in AI in one year than we did with all of our patents combined. They're really ahead of us, along with the U.S. and others.
     When it comes to developing and protecting that area and getting back to being a leader in AI, how does Canada do that? What parts of this bill may prevent that? What parts do we need to add that might encourage it?
     I actually believe that patents aren't the only measure of the value in our AI ecosystem. In fact, I believe that talent is one of the strongest measures of the strength and the value of our AI ecosystem.
    Patents, absolutely, are important, particularly for start-up companies that are trying to protect their intellectual property. However, much of the AI that's developed is actually released into the public domain; it's open-sourced. We derive really significant value and really innovative new products and services that are based on AI through the very highly skilled people who come together with the right resources, the right expertise, the right collaborators and the right funding to actually develop new innovations that are based on AI.
    Patents are one measure, but they're not the only one, so I think that we need to take a broader view on that.
    When we look at where Canada stands internationally, it's true that AI is on a very competitive global stage right now. One index is the Global AI Index. For many years, Canada sat fourth in the world, which is not bad for a small economy relative to some of the other players. However, we are slipping on that index. Just this year, we slipped from fourth to fifth position, and when you look deeply into the details of where we're losing ground on AI, you see that much of it comes from the lack of investment in AI infrastructure. Other countries are making significant pledges, commitments and investments in building and advancing AI infrastructure, and Canada has not kept pace.
    In the most recent index, we actually dropped from 15th to 23rd in the world on AI infrastructure, and that affects our global competitiveness.
(1235)
    Thank you very much.
    I'll now yield the floor to Mr. Sorbara for our last question.
    Thank you very much, Mr. Chair. It's been a wonderful panel today.
    I want to go to Elissa.
    I think you mentioned that there are 140,000 people working in Canada in AI. Obviously, AI has become a huge economic generator, and that's just the industry itself and doesn't include the indirect jobs associated with AI.
    Are you aware of any estimates of what AI could become in terms of benefit for the Canadian economy as we go forward?
    I don't have any real, hard numbers on that, but it's something that we're interested in trying to understand ourselves, so I'll get back to you when we do have that number, for sure.
    I think that the benefits, absolutely, are economic for Canada, and we're seeing that in the number of jobs. We're seeing that in the number of start-up companies that are emerging in our Canadian centres, and particularly in the amount of venture capital investment that is going into those start-up companies. About 30% of all venture capital investment in Canada is going directly into companies that are developing AI. That's a really significant benefit to the economy.
    With regard to venture capital, it is usually early-stage venture capital that is being invested there.
    I'll go to Elissa first and then to Ignacio and anybody else who wants to jump in at that time.
    In terms of the AI regulations, you want them to be.... It's like accounting. You have a principle. You have very prescriptive regulations when it comes to accounting. You want to make sure that the regulations are not so tight that they limit growth and the capacity to evolve and innovate, but you also want to make sure that they are not so loose that there are holes and loopholes in them, if you want to use that word.
    Are we striking that right balance in terms of getting it done? That's very difficult to achieve. I've spoken to colleagues in Europe on AI, both at the subnational and the European level. All parliaments are grappling with this issue.
    Where are we in striking that right balance?
    I think we're on the right track. If the bill is passed and we move towards developing regulations, we have to take a really flexible approach: we have to literally be watching the opportunities and be changing, pivoting and adapting as the technology advances and as other international players also advance.
    Thank you.
    If I can just stop you there, I do want to go to Ignacio.
    Ignacio, can you chime in on this quickly? I know you talked about harms in your presentation. I'm going to reread your testimony on harms when I have a chance this week, because that does go to the point about how we regulate by principle versus being very prescriptive.
    I think we're almost there.
    I think, first of all, that it is laudable that this legislation is taking that trade-off seriously and that it's not being too technology-specific. I think it needs some more specificity than it currently has, and it can do that while maintaining its technological neutrality. We could have a bit more precision in the standard for biases. We could have some more precision in the standard for high-impact systems. We could have proportional degrees of due care that go across the categorization. All of those things could be applicable to systems that haven't developed yet while also giving some more guidance to the regulations.
     Chair, how much time do I have left?
    You have about a minute.
    I'll go to Mr. Bengio as well.
    You're one of the leaders in Canada. We have the Vector Institute here at the University of Toronto, I believe. Then we have other institutes in Montreal. Can you comment on the questions I have?
    Then I'll go to Ms. Régis if we have time.
    I think that we should not make the mistake of trying to put a lot of details into the law. All of us have little things that we would like to see. Because the technology is going to change and because the misuse and the harms are going to evolve, we have no choice but to let the regulator adapt quickly. Elissa made that point multiple times, and I did as well. We need to really stick to a principles-based approach. That is the only viable solution to protect the public.
(1240)

[Translation]

    Ms. Régis, what do you think?

[English]

    I would say that the best way, in my opinion, to resolve the tension between innovation and protection is a risk-based approach. This bill is based on that logic. The higher the risk, the more you demand. If you follow that path, I think you're on the right track to reach the equilibrium that you're looking for.

[Translation]

    Thank you, everyone.

[English]

    Thank you very much, MP Sorbara.
    Thanks to all of our witnesses today. It was a fascinating discussion.
     We have a bit of committee business to attend to. I'll let you all go.
    Some hon. member: Hear, hear!
    The Chair: I don't know if you can hear through Zoom, but you're receiving applause from our members here. It was really interesting. Thank you for your work on this and for sharing your insights with us as we go forward on this legislation.
    You are free to go, and thanks again.

[Translation]

    Colleagues, this brings us to committee business. I know that a few notices of motion have been tabled, including Mr. Williams' notice.
    Mr. Williams, you have the floor.

[English]

    Thank you, Mr. Chair.
    I'm sorry to be a dog with a bone on this topic, but I think it's really important that we continue to look at cellphone bills on behalf of Canadians. We're not going to stop until we get these cellphone bills down.
    We had a motion that we talked about last week, and I want to revisit it. Specifically, what we have changed in this motion to make it a little different is to ensure that we get not only Rogers and Bell to the committee but also Vidéotron. I think it's important that we get CEOs of these companies here to talk about what's happening with cellphone bills for Canadians.
    Second, it's that we will have the Minister of Innovation, Science and Industry here. I think it's important for him. I know from talking to him in the past that he's always talked about how he wants to come in front of Canadians and talk about lowering cellphone bills.
    I'm going to go back to March 5, 2020. The then industry minister, Navdeep Bains, announced that they were going to lower cellphone bills by 25% in the next two years, by 2022. It would save families $690 a year. With the announcements we had last week by Bell—I guess it's three weeks ago now—the average cellphone bill in Canada is $106. Rogers and Bell are going to follow suit with a $9-per-month increase, which will mean those bills are going to $115.
    It comes down to one thing, and that's data. That's the question we want to ask the CEOs. Canadians used three times more data in 2022 than they did in 2015. You know that when you go and look at Instagram or you're downloading Reels or you're using YouTube or Netflix, you consume more data. Cellphone bills, if you were consuming only five gigabytes a month, have gone down 25%, but Canadians consume more data, and cellphone bills are going up. These are good questions to ask on behalf of all Canadians.
    I move as follows:
That, as Canadians already pay the highest cellphone bills in the world, and Rogers and Bell have indicated an increase to cellphone bills of $9 a month, the committee call for two meetings to be held by February 15, 2024, one with the CEOs of Rogers, Vidéotron and Bell and the second one with the Minister of Innovation, Science and Industry, to explain why prices are going up; and that this committee also condemn pricing increases and report back to the House.
    Thank you, Mr. Chair.

[Translation]

    Thank you.

[English]

    Thank you, Mr. Williams.
    I recognize Mr. Masse. He wants to intervene on the motion before the committee.
    Go ahead, Mr. Masse.
    Thank you, Mr. Chair.
     I do support the motion, but I would ask that we amend it by adding Telus. Then we'd have the four horsemen of the apocalypse in front of us.
    To me, it's an important motion. We should be reporting this back to the House as well to give as much attention as possible to this issue. It's just become outrageous.
    I support the motion and I hope we can get this done and have Telus included as well.
    That's an amendment, Mr. Masse.
    As you all know, there's no such thing as a friendly amendment. It is an amendment to add Telus to the list so eloquently described by Mr. Masse.
    On the amendment of adding Telus, are there any comments on this, or is there consensus for this amendment to the motion?
    Go ahead, Mr. Turnbull.
     I have no issue adding Telus to the motion, but I have a bit of an issue with the motion in general. Perhaps I'll save my comments if we want to vote on the amendment first.
(1245)
    I think it will be more procedurally elegant if we proceed that way.
    There's always value in procedural elegance.
    I try, but I don't always succeed.
    Looking around the room, I think there is general consensus for this minor tweak to add Telus to the motion.
    (Amendment agreed to [See Minutes of Proceedings])
    We go back to the main motion as amended. Go ahead, Mr. Turnbull.
    We had a great subcommittee meeting. Obviously it was in camera, so I won't speak to the conversation, but the subcommittee report we did is public. It includes a very substantive study that Mr. Lemire put forward originally and that we agreed to commence with. It also included point 5, which covers the CEOs of all of the companies, including Telus. We have already agreed to bring forward the CEOs to question them on cellphone bills. I'm looking at point 5 in the subcommittee report that came back to this committee, which had consensus. I also understand, based on our current committee schedule, that we have dates and times to make sure those meetings happen.
    The way I feel about this is that it seems that Mr. Williams is just trying to push up something that's going to happen anyway, that we already agreed to. I don't see the rationale for that when we've already come to consensus on this. We've all agreed it's an important topic. We've all agreed there are concerns around cellphone price increases that are planned by Rogers and/or others. We also can get more acquainted with the facts, because there is lots of other information we need to look at. There is a whole spectrum of other issues we can talk about, but all of those are already included in the subcommittee report and its motion.
    From my understanding, we agreed that meetings would start as early as February 26 on this topic. Mr. Williams' motion, I believe, just bumps it up and is now calling to have those meetings about a week or 10 days earlier.
    What's the rationale for that? Why would this committee need to bump that up by two weeks or 19 days when we've already agreed to do it in due course? We agreed with that.
    We also have other studies that we've talked about. We've had that conversation together and agreed. We came to consensus.
    This seems like it blows up the consensus we had. We had a very constructive conversation to achieve consensus, and I thought we had a way forward, and now we have a motion that tries to bump this study up by 10 days. What is the rationale for that? I can't understand it.
    Please, someone clarify that for me. Maybe Mr. Williams can clarify what the rationale is.
    I'm looking around the room to see if there are more interventions.
    It is true that in the steering committee we did agree to start the telco study on the 26th and to finish the Bill C-27 witnesses before we adjourn for the constituency week in February.
    I'll let Mr. Williams speak to his motion.
    Thank you, Mr. Chair.
    I know that we've agreed to a broader study on telecommunications, which covers infrastructure and the problems we've had with companies. This motion is specific to the price increases announced by Rogers and by Bell. Of course, we're agreeing with the committee to bring other witnesses—the four horsemen or others—together in front of the committee, because this is needed now and is pertinent now. This is the third time that we've brought this motion forward to get these meetings scheduled.
    There is a broader study on telecommunications. It's going to talk about infrastructure, about wireless, and about the many Canadians who still do not have access to a cellphone signal. I know there are seven million Canadians who have been promised high-speed Internet access. Fifty per cent of Canada still does not have that access.
    At the end of the day, this is about one topic only—increases that have been announced by Rogers and getting those CEOs and the minister together on that increase.
    Why is that important? I'll tell you why.
    Just this morning at 11 o'clock, Manulife, which had announced last week that it was going to offer specialty drug medications only through Loblaws—an exclusive deal, which was going to be a problem—actually backed off, because of pressure. They announced that they are not going to follow through with that deal. That's what happens when we work together and put political pressure on these companies.
    Rogers needs to answer now, not in four weeks, not in six weeks. They need to answer within two weeks why they're increasing prices to Canadians now. We should be out doing this now and not waiting.
    Thank you.
(1250)
     Okay.
     I'll turn it over to Mr. Turnbull.
    There's a small thing to keep in mind in terms of scheduling. We have witnesses lined up for Bill C-27 on February 12 and 14. Should this motion be adopted, I would suggest we try to seek additional resources so as to not undo the great work that our clerk has done to get these witnesses before the committee. That's just something to keep in mind.
    Go ahead, Mr. Turnbull.
    No one is disagreeing with the fact that cellphone companies should be called before the committee and questioned about any planned increases. I think we've all agreed to that. That's actually in the subcommittee report. I think it's more substantive. It already includes the CEOs of Telus and Quebecor Media, etc. It includes all the CEOs of all the companies that have been mentioned. It also includes a focus on increased customer cellphone bills, so any.... It's already there.
    I think we've already agreed to do this work, so I still can't understand the rationale for an additional motion that just bumps it up. If you're asking for additional committee resources to start that component of the broader study earlier, okay, that's fine, but then isn't it subject to committee resources? If we've asked for additional resources to study Bill C-27, why shouldn't that be the first priority, which is what we agreed to?
    We've already agreed to that. We've already had that debate and that conversation. We agreed to what's in the subcommittee report, so why is this now...? Even though we've already agreed to it, somehow it's now an even higher priority because you just decided it in the last week or so.
     It doesn't make sense to me when we've already agreed to do a broader study. We've agreed to call all the witnesses. We've agreed to focus on cellphone prices and bills and we've agreed that it can be the first priority in that broader study. We've also agreed to a report of findings and recommendations back to the House.
    I just can't understand what the.... In a way, isn't this a redundant motion? We've already done this.
     Isn't there some rule in the Standing Orders that a motion has to be substantively different in order for it to be considered? This doesn't seem different at all. I don't see anything that's different here. I really can't understand the rationale for this, other than a bit of a grandstand.
    Thank you, Mr. Turnbull.
    Mr. Masse is next.
    Thank you.
    I think a couple of things have overtaken this from when we had our schedule earlier, but you made a really good point. If we have witnesses lined up, that also involves travel for them and so forth, so I think we can find consensus here.
    If we get further resources for this committee, we can start the study earlier. That's what I would like to see. I think this is an issue that is significantly important. There are good interventions, but if we're going to then create a problem for other witnesses.... That was something we didn't have before this was even tabled. At one point, it didn't look like we had some of those witnesses coming forth, but we do now.
    I would offer that we leave this in your hands, Mr. Chair, to find out if we can actually get some additional resources to start this a bit more quickly, if possible. That's the way I would like to approach it, and I think it's a fair compromise for what we're trying to do here.
    I think it's a heightened environment with regard to what the cellphone companies have been doing. We all feel it. It's the number one subject of correspondence that I get in my constituency on a regular basis, aside from Gaza and a few other situations that are taking place.
    At any rate, my position would be to leave it in your hands to see if we can actually get additional resources in this committee to start this study a bit earlier and go from there so that we don't disrupt what our clerk has done and any other flow of the work that you have to do.
     If that's okay with the rest of the committee, I think that's a good way to go forward.
    Okay. I can definitely see....
     If there's consensus around the room to say that we'll start this study on telecoms earlier than planned if we have the additional resources, and we'll keep Bill C-27 as planned.... The clerk is here by my side, so we'll be looking for additional resources.
    There is still a motion before this committee, though. I don't know how colleagues want to proceed with this motion or if there's an agreement that we just start the telecoms study earlier.
    I'm looking at Ryan and Brian.
     Ryan, I'll yield the floor to you.
(1255)
    I would just say I agree with Mr. Masse. I think if the committee can get the additional resources to start this study on telecoms a bit earlier and prioritize the CEOs, that's a good way forward.
    One difference is that in our subcommittee report, we've allocated more time to have those CEOs come before us, which I think is important. I think the subcommittee report gives us more of an opportunity to scrutinize the CEOs, as is the intention here, so maybe I could suggest that Mr. Williams....
    Mr. Williams, I know you won't like this, but maybe you want to withdraw the motion, and then we can come to a consensus to get the additional resources to hopefully start a bit earlier. We'd be happy to do that. We could reach consensus.
     I see Mr. Perkins and then Mr. Williams.
    Thanks.
    I'm open to the conversation. Just to let you know, it is public knowledge that next Tuesday and Wednesday after question period there are available time slots for committees to have additional meetings, as well as next Wednesday night at 7:30. That's just for consideration. There are additional resources at those times.
    Thank you, Mr. Perkins.
    Go ahead, Mr. Williams.
    It's a good discussion.
    Look, we all want to get to the CEOs. The question I have is whether the Minister of Innovation is scheduled for that additional study in the motion as well.
    One second, Mr. Williams. I don't have the committee report in front of me, so I'm just looking to the clerk to see.
    Yes, Mr. Williams, that's what I thought. The minister is not named specifically in the amendment or addition whereby we decided in the subcommittee report to have the CEOs appear. In the text of the main motion that Mr. Lemire presented, the minister is not specifically mentioned, but nothing prevents us from inviting the minister to testify, if it's the will of the committee, as part of this study on telcos. I'm certainly open to sending the invitation to the minister and his team.
    If it's the will of the committee that the minister attend these studies, to get the study moved up, to use the resources that we have....
    The one thing I'm missing here is reporting back to the House, which is the main point of this motion. That's certainly what I was looking for as well.
    It is in the subcommittee report at point 4 that the committee report its findings and recommendations to the House.
    Okay. As long as the committee can come to consensus, we're going to move forward with the resources required before the 15th. Is that correct?
    What I have is to seek additional resources with the clerk for extra meetings on top of what we have on Bill C-27 next week. With these resources, we invite the CEOs of the telcos. In addition, we also invite the minister to come and testify as part of this telco study.
    Okay, I'm fine with that.
    There is just one small clarification, because I was talking to the clerk when it was mentioned. Do we want, for the CEOs, one CEO per meeting or per hour? How do we divide them up? Do we want two per meeting or four for one meeting? It's just to have some clarity on that.
    If we have two meetings, it's two per meeting.
    It would be two per meeting for one hour each. That would be fine by me.
    Go ahead, Mr. Masse.
    I'm open to whatever. I have no problem in putting all four of them right here in front of us in that time frame.
    It would be two hours for the four.
    Okay, there's no strong feeling necessarily on this—
(1300)
    Yes.
    —so I'll work it out with their schedules and availabilities and with the resources that we might get. I trust that we're in good hands with Clerk Burke to figure it all out.
    That brings us to the end of the meeting.
    I know, Mr. Perkins, that you had some notices of motion. Are you okay to bring them up perhaps at a later point? We've reached the two hours for this meeting.
    Sure. We could maybe deal with them towards the end of the next meeting.
    Yes, we'll keep some time for—
    I may talk to a few folks in between to see if we can simplify them.
     That would make my job easier. I would like that very much, Mr. Perkins.
    Thank you very much.
    This meeting is adjourned.