Welcome to meeting number 109 of the House of Commons Standing Committee on Industry and Technology.
Today’s meeting is taking place in a hybrid format, pursuant to the Standing Orders.
Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C‑27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.
I’d like to welcome our witnesses today. From Amazon Web Services, we have Ms. Nicole Foster, director of global artificial intelligence and Canada public policy.
From Google Canada, we have Ms. Jeanette Patell, director of government affairs and public policy. Also from Google, and participating by videoconference, we have Mr. Will DeVries, director of privacy legal, as well as Ms. Tulsee Doshi, director of product management.
From Meta, we have Ms. Rachel Curran, head of public policy for Canada.
From Microsoft, we have Ms. Amanda Craig, senior director of public policy, Office of Responsible AI, as well as Mr. John Weigelt, chief technology officer.
We thank all of you for being here today.
[English]
You all have five minutes for your opening statements.
It's a privilege to be here as the committee conducts its study of the AI and data act within Bill C-27.
AWS has a strong presence in and commitment to Canada. We have two infrastructure regions here, in Montreal and Calgary, to support our Canadian customers, and we plan to invest nearly $25 billion in this digital infrastructure by 2037.
Globally, more than 100,000 organizations of all sizes are using AWS AI and machine-learning services. They include Canadian start-ups, national newspapers, professional sports organizations, federally regulated financial institutions, retailers, public institutions and more.
Specifically, AWS offers a set of capabilities across three layers of the technology stack. At the bottom layer is the AI infrastructure layer. We offer our own high-performance custom chips, as well as other computing options. At the middle layer, we provide the broadest selection of foundation models on which organizations build generative AI applications. This includes both Amazon-built models and those from other leading providers, such as Cohere—a Canadian company—Anthropic, AI21, Meta—who's here today—and Stability AI. At the top layer of the stack, we offer generative AI applications and services.
AWS continually invests in the responsible development and deployment of AI. We dedicate efforts to help customers innovate and implement necessary safeguards. Our efforts towards safe, secure and responsible AI are grounded in a deep collaboration with the global community, including in work to establish international technical standards. We applaud the Standards Council of Canada's continued leadership here.
We are excited about how AI will continue to grow and transform how we live and work. At the same time, we're also keenly aware of the potential risks and challenges. We support government's efforts to put in place effective, risk-based regulatory frameworks while also allowing for continued innovation and a practical application of the technology.
I'm pleased to share some thoughts on the approach Bill C-27 proposes.
First, AI regulations must account for the multiple stakeholders involved in the development and use of AI systems. Given that the AI value chain is complex, recent clarification from the minister that helps define rules for AI developers and deployers is a positive development. Developers are those who make available general purpose AI systems or services, and deployers are those who implement or deploy those AI systems.
Second, success in deploying responsible AI is often very use case- and context-specific. Regulation needs to differentiate between higher- and lower-risk systems. Trying to regulate all applications with the same approach is very impractical and can inadvertently stifle innovation.
Because the risks associated with AI are dependent on context, regulations will be most effective when they target specific high-risk uses of the technology. While Bill C-27 acknowledges a conceptual differentiation between high- and low-impact applications of AI, we are concerned that, even with the additional clarifications, the definition of “high impact” is still too ambiguous, capturing a number of use cases that would be unnecessarily subject to costly and burdensome compliance requirements.
As a quick example, there's the use of AI by a peace officer, which is deemed high impact. Is it still high impact if it includes the use of autocorrect when filling out a traffic violation? Laws and regulations must clearly differentiate between high-risk applications and those that pose little or no risk. This is a core principle that we have to get right. We should be very careful about imposing regulatory burdens on low-risk AI applications that can potentially provide much-needed productivity boosts to Canadian companies both big and small.
Third, criminal enforcement provisions of this bill could have a particularly chilling effect on innovation, even more so if the requirements are not tailored to risk and not drafted clearly.
Finally, Bill C-27 should ensure it is interoperable with other regulatory regimes. The AI policy world has changed and progressed quite quickly since Bill C-27 was first introduced in 2022. Many of Canada's most important trading partners, including the U.S., the U.K., Japan and Australia, have since outlined very different decentralized regulatory approaches, where AI regulations and risk mitigation are to be managed by regulators closest to the use cases. While it's commendable that the government has revised its initial approach following feedback from stakeholders, it should give itself the time necessary to get its approach right.
Leveraging emerging international norms and technical standards will ensure that Canada's regulatory regime can be interoperable with those of other leading economies and trading partners. Ultimately, this will help enable global growth for Canada's AI champions. In the meantime, we can and should address specific harms, like the risk of deepfakes for election disinformation, by reviewing existing legislation and crafting specific amendments where needed.
We are committed to sharing our knowledge and expertise with policy-makers as they move forward to promote the responsible use of AI. Thank you so much for the opportunity to be here today.
Good afternoon, Chair and members of the committee. My name is Jeanette Patell and I am the director of government affairs and public policy for Google in Ottawa. I am joined remotely by my colleagues Tulsee Doshi and Will DeVries. Tulsee is a director and head of product in responsible AI at Google. Will is a director on our privacy legal team and advises the company on global privacy laws and data protection compliance. We appreciate the invitation to appear today and to contribute to your consideration of Bill C-27.
As the committee knows, this is important legislation, and important legislation to get right.
[Translation]
Today, we will present a few remarks on the Consumer Privacy Protection Act and the Artificial Intelligence and Data Act. We will be very happy to answer your questions.
We will submit our brief to this committee shortly. It will set out our recommendations on aspects that could be improved to ensure better results for businesses, innovators and Canadian consumers.
[English]
When Canadians use our services, they are trusting us with their information. This is a responsibility that we take very seriously at Google, and we protect user privacy with industry-leading security infrastructure, responsible data practices and easy-to-use privacy tools that put our users in control.
Google has long championed smart, interoperable and adaptable data protection regulations—rules that will protect privacy rights, enhance trust in the digital ecosystem and enable responsible innovation. We support the government's efforts to modernize Canada's privacy and data protection regulatory framework and to codify important rights and obligations.
We also believe the CPPA would benefit from further consideration and targeted amendments in certain areas. For example, we agree with others, like the Canadian Chamber of Commerce, that consent provisions should be both clarified and tailored to more consequential activities. We also highlight the need for a consistent federal definition of “minors” and clearer protections for minors' rights and freedoms. Improvements to these areas would maintain and enhance Canadian privacy protections, make it easier for businesses to operate across Canada and the world and enable continued innovation throughout the economy.
Turning to the artificial intelligence and data act, as our CEO has said, “AI is too important not to regulate, and too important not to regulate well.” We are encouraged to see governments around the world developing policy frameworks for these new technologies, and we're deeply engaged in supporting these efforts to maximize AI's benefits while minimizing its risks.
Google has been working on AI for a long time, including at our sites in Montreal and Toronto, and in 2017 we reoriented to be an AI-first company. Today AI powers Google Search, Translate, Maps and other services Canadians use every day. We're also using AI to help solve societal issues, from forecasting floods to improving screening for diseases like breast cancer. Since 2018, our work with these technologies has been guided by our responsible AI principles, which are supported by a robust governance structure and review process. My colleague Tulsee has been at the centre of this work.
[Translation]
Canada has an exceptional opportunity to build on its investments in basic research in artificial intelligence. This committee will contribute to developing a legislative framework with solid public protection measures that also harnesses the economic and societal benefits.
[English]
We welcome the government's efforts to establish the right guardrails around AI, and we share some of the concerns that others have raised with this committee. We believe the bill can be thoughtfully amended in ways that support the government's objectives without hindering AI's development and use.
There is no one-size-fits-all approach to regulating AI. AI is a multi-purpose technology that takes many forms and spans a wide range of risk profiles. A regulatory framework for these technologies should recognize the vast range of beneficial uses and should weigh the opportunity costs of not developing or deploying AI systems. It should also tailor obligations to the magnitude and likelihood of harm specific to particular use cases. We believe the AIDA should establish a risk-based and proportionate approach tailored to specific applications and focused on ensuring global interoperability via widely accepted compliance tools such as international standards.
We hope to continue to work with the Canadian government, as we have with governments around the world, to build thoughtful, smart regulations that protect Canadians and capture this once-in-a-generation opportunity to strengthen our economy, position Canadian innovators for success on the global stage and drive transformational scientific breakthroughs.
Thank you again for the invitation to appear. We look forward to answering your questions and continuing this important conversation.
My name is Rachel Curran and I'm the head of public policy for Meta in Canada. It's a pleasure to address the committee this afternoon.
Meta supports risk-based, technology-neutral approaches to the regulation of artificial intelligence. We believe it's important for governments to work together to set common standards and governance models for AI. It's this approach that will enable the economic and social opportunities of an open science approach to AI and also bolster Canadian competitiveness.
Meta has been at the forefront of the development of artificial intelligence for more than a decade. We can talk about that later during this hearing. This innovation has allowed us to connect billions of people and generate real value for small businesses. For our community, AI is what helps people discover and engage with the content they care about. For the millions of businesses, particularly small businesses, that use our platforms, our AI-powered tools make an advertiser's job easier. That's a real game-changer for small and medium-sized businesses that are looking to reach customers who are interested in their products.
In addition, Meta's fundamental AI research team has taken an open approach to AI research, pioneering breakthroughs across a range of industries and sectors. In 2017 we launched our AI research lab in Montreal to contribute to the Canadian AI ecosystem. Today, Meta's global AI research efforts are led by Dr. Joelle Pineau, a world-leading Canadian researcher and a professor at McGill University.
Our Canadian team of researchers has worked on some of the biggest breakthroughs in AI, from developing more diverse and inclusive AI models to improving health care accessibility and patient care, which have benefited communities in Canada and abroad. This work is shared openly with the greater research community, a commitment to open science and a level of transparency that helps Meta set the highest standards of quality and responsibility and ultimately build better AI solutions.
We applaud Canada's leadership on the development of smart regulation and guardrails for AI development, particularly through its leadership on the Global Partnership on AI and the G7 process. We strongly support the work of this committee, of course, and the initial aim of Bill C-27, which is to ensure that AI is developed and deployed responsibly while also ensuring that global regulatory frameworks are aligned, maintaining Canada's status as a world leader in AI innovation and research.
We think AI is advancing so quickly that measures focused on specific technologies could soon become irrelevant and hinder innovation. As we look to the future, we hope that the government will consider a truly risk-based and outcome-focused approach that will be future-proof. In that regard, we would flag a few specific concerns with Bill C-27.
First, one proposed amendment from the minister to this bill would classify content moderation or prioritization systems as “high-impact”. We respectfully disagree that these systems are inherently high risk as defined in the legislation, and suggest that the regulation of risks associated with content that Canadians see online would be better dealt with in pending online harms legislation.
Similarly, we think the proposed regime for general purpose AI is not appropriately tailored to risk and more closely resembles the requirements for truly high-impact systems. We suggest that the obligations for general purpose AI should be harmonized with international frameworks, such as the ongoing G7 Hiroshima process, which I referenced earlier, the White House voluntary commitments and OECD work on AI governance.
Lastly, we'd flag the audit and access powers contemplated by Bill C-27. We think they are at odds with existing frameworks—for example, with the approach taken by other signatories of the Bletchley Declaration arising out of the recent U.K. AI safety summit, including the U.S. and the U.K. Again, we'd encourage Canada to pursue an approach that preserves privacy and is consistent with global standards.
(1730)
Members, we believe that Meta is uniquely poised to solve some of AI's biggest problems by weaving our learnings from our world-leading research into products that billions of people and businesses can benefit from while continuing to contribute to Canada's vibrant, world-leading AI ecosystem.
We look forward to working with this committee and to answering your questions.
Thank you, Mr. Chair and committee members for the opportunity to testify.
At Microsoft, we believe in the immense opportunity that AI presents to contribute to Canada's growth and to deliver prosperity to Canadians. To truly realize AI's potential and to improve people's lives, we must effectively address the very real challenges and risks of using AI without appropriate safeguards. That's why we have championed the need for regulation that navigates the complexity of AI to strengthen safety and to safeguard privacy and civil liberties.
Canada has been a leader in putting forward a framework for AI, and there are positive aspects of the legislative framework that provide a helpful foundation going forward. However, as it currently stands, Bill C-27 applies the rules and requirements too broadly. It regulates both low-risk and high-risk AI systems in a similar way without adjusting requirements according to risk, and it includes criminal penalties as part of the enforcement regime.
Not all risk is created equal. Intuitively we know that, but it can be difficult to determine risk levels and adjust for them. In our view, the set of rules and requirements in the AIDA should apply to AI systems that are used where the level of risk is high. For example, the AIDA applies the same rules and regulatory obligations to a high-risk system, such as AI that is used to determine whether to approve a mortgage, and to a low-risk system, such as AI that is used to optimize package delivery routes.
Applying the rules and requirements too broadly has several implications. Businesses in Canada, including small and medium-sized businesses, will need to focus on resource-intensive assessments and third party audits even for low-risk, general purpose systems, rather than focusing on where the risk is highest or on developing new safety systems. A restaurant chain and its AI system for inventory management and food waste reduction will be subject to the same requirements as facial recognition technology. This will spread the time, money, talent and resources of Canadian businesses thinly. It will potentially mean finite resources are not sufficiently focused on the highest risk.
Canada's approach is also out of step with that of some of its largest trading partners, including the U.S., the EU, the U.K., Japan and others. In fact, the Canadian law firm Osler has published a comparison of the AIDA with the EU's AI Act, which I'll be happy to submit to the committee. The comparison includes 11 examples where Canada has gone further than the EU, creating a set of unique requirements for businesses operating in Canada.
Going further than the EU does not mean that Canadians will be better protected from the risks of AI. It means that businesses in Canada that are already using lower-risk AI systems could face a more onerous regime than anywhere else in the world. Instead, Canadians will be better protected with more targeted regulation. By ensuring that the AIDA is risk-based and provides clarity and certainty on compliance, Canada can set a new standard for AI regulation.
We firmly believe that with the right amendments, it is possible to strike the right balance in the AIDA. You can achieve the crucial objective of reducing harm and protecting Canadians, and you can enable businesses in Canada to be more confident in adopting AI, which will provide enormous benefits for productivity, innovation and competitiveness.
In conclusion, we would recommend, first, better scoping of what is truly high-impact AI. Second, we recommend distinguishing the levels of risk of AI systems and defining requirements according to that level of risk. Third and finally, we recommend rethinking enforcement, including the use of criminal penalties, which is unlike any other jurisdiction in the OECD. This would also ensure that Canada's approach is interoperable with what other global leaders, such as the EU, the U.K. and the U.S., are doing.
We are happy to provide this committee with a written submission detailing our recommendations.
(1735)
Thank you, Mr. Chair. We look forward to your questions.
I'm going to start at a fairly high level, because we have the world leaders in AI before us today.
It's a pretty special committee meeting among a whole series of special meetings we've had on this bill, but we have Amazon, Google, Meta-Facebook and Microsoft before us. You guys are the world leaders and are putting the most money in everywhere.
This bill was tabled almost two years ago. Just give a yes or no: Were any of your companies consulted on this bill before the bill was tabled?
I find it shocking, frankly, that you wouldn't have been consulted. You've had meetings, I assume, since the bill was tabled, because the minister claims 300 meetings, although most of them were with academics and think tanks. In those meetings, did you propose any specific amendments to the bill?
I'll go through this one at a time in the same order.
Further to my remarks, we did advocate that there was a need for greater clarity for developers and deployers in defining their responsibilities in the act—
That specific amendment was, I feel, a reflection of our comments. Regarding other specific comments, what we don't think was heard was the need to really differentiate between what's high and low impact more clearly and the need for better clarity around the criminal provisions.
We also raised the concerns that I raised in my opening statement about content moderation and prioritization systems. Essentially, the content Canadians are seeing online—
—should not be scoped in as high risk, so we raised that with department officials, absolutely. We raised our concerns around remote access. We also raised concerns around the specific obligations for general purpose AI systems.
We have raised the issues I raised in my opening statement.
We have also had the opportunity to provide input since it was tabled, and we shared proposed amendments and fixes similar to what I raised in my opening remarks with regard to the need for a focus on high risk, for rethinking enforcement and for thinking about the requirements being differentiated based on risk.
A number of you mentioned interoperability. How is it possible to be interoperable with other countries when other countries don't have legislation on this?
Look, I understand that Canada wants to act quickly to regulate AI. The original form of the bill, which was a very high-level framework and would have allowed a lot of these issues to be discussed and finalized during the regulation-making process, would have allowed us, allowed Canada, to align with other international jurisdictions. The problem is that the amendments the minister has proposed to the bill in his letters to the committee take a position on all of the issues that are currently under discussion in international forums as part of the G7 process, as part of the Bletchley Declaration and as part of the OECD process. Other jurisdictions, our peer jurisdictions, are discussing these issues now.
The minister's proposed amendments, if accepted by this committee, are going to box Canada into a regulatory framework that may look very different from the one that emerges from international discussions. That is really our concern—not the original text of the bill, but the amendments proposed by the minister.
Right. Originally, the bill only dealt with high-impact systems without a definition. My problem with this bill was that everything was originally in regulation, including the definitions and the policing, except for the penalties. Of course, they knew how to penalize something they couldn't define in the bill. That's been replaced with two more definitions: "general purpose" and "machine learning".
Regarding what's high impact, you reference that they have included a definition in a schedule that they can amend by regulation after the bill passes. Number four is, I think, the one that speaks to the moderation of content. Is this a backdoor way to do Bill C-11?
Yes, and I'm not sure if that's deliberate or not.
Really, we think this is better dealt with in the context of the pending online harms legislation. This will determine what Canadians see online, whether it's something that appears in your social media feed or your YouTube recommendations. It's even something as innocuous as a Canadian company deciding how to rank camping gear for purchase by Canadians using an automated system.
The regulatory obligations proposed in this bill are really going to impact the way those systems work. We think this is better dealt with in a context where we can have a free, robust discussion about where the line should be drawn in terms of the content that Canadians see. I know department officials have said that this provision is designed to deal with misinformation, for instance. Let's have a discussion about where that line should be drawn in the context of a bill that's designed to deal with that.
Ms. Craig, you mentioned low risk. Low risk, in other words, is now being classified—I presume a lot of it—as general purpose.
I use an example from my riding. I have a seafood company in my riding that's using AI to determine whether something is a surf clam or scallop and what direction it should be in before it goes into the machine to be shucked. If it was used by other companies and got sold, that would fall under general purpose.
It seems like it's excessive. Is there any AI that wouldn't fit into a “general purpose” definition?
I think we should apply the same approach to high risk versus low risk, even when we're talking about general purpose systems. The same technology you're talking about can be in a high-risk context or a low-risk context. Even visual search, for example, uses computer vision technology or facial recognition technology. There can be a very low-risk or a very high-risk context for general purpose systems. I think we need to apply the same kind of lens around high or low risk.
To go back to your question about making our legislation more interoperable internationally, the best shortcut we have is to really lean on the work being done by international standards organizations and bodies to determine the right standards for how these systems are designed and deployed. Referencing those, instead of creating bespoke Canadian regulations, would be a huge enabler for Canadian companies that want to be able to scale globally, whether they're using AI or not.
You can create the framework for those things now and then reference those pieces as they evolve. We're going to see a number of international standards that will probably be published in the next few months. That work is happening at a very rapid pace. Again, the technology is also evolving at a very rapid pace. I think you want to give yourself flexible frameworks and allow the international technical work to inform what Canada will have to rely on as well.
Going back to your original question, that was the exact point I was trying to make in referencing low-risk systems. In the legislation, they're referred to as general purpose systems, but they are quite broadly defined to contemplate technology that's used for multiple different purposes or activities. That could encompass quite a broad swath of technology. As Nicole said, technology used in low-risk or high-risk circumstances may end up being treated similarly by being impacted by requirements, even if it's just used in a low-risk context.
It's not lost on me that as you folks are giving your presentations, literally millions if not hundreds of millions of people—your clients or customers—are using your services as we do this panel. They are benefiting from those services for productivity purposes and being connected. Obviously, AI is driving a lot of that.
There are a lot of positives going on with artificial intelligence that people benefit from every day without even thinking twice about it. It's something that we obviously are looking at and have looked at for two years now. It requires, in my view, guardrails, if I can use that term, or safeguards.
A few terms have been brought forth: content moderation or ecosystem, and high impact versus low impact. I'm going to try to keep this at a high level.
In terms of the differentiation between high-impact and low-impact systems, where is the right balance? I'll take us back to an accounting approach, where you have principles that are very prescriptive. How do we strike a balance between high impact and low impact when we're reviewing AI so that we're not spending a ton of bureaucratic effort or time capturing the low-impact systems, which in and of themselves are quite beneficial for consumers and companies?
I'll start with Nicole; then we can go across and go online.
I think we would start with a meaningful decision that may impact an individual's human rights or health and safety. We would probably start with that as the basic definition. It is an area where I think considerable debate went on in the EU to clearly define what those exact use cases are.
There are even low-risk systems that you would want to.... You wouldn't even regulate all high-risk systems the same way. The questions you would ask would be different, and the risk assessments you would do would be different.
Yes. There are a lot of use cases. A lot of times, for example, health care will be viewed as a high-impact use case, or human resources and recruiting will. If you use a video conferencing platform to conduct a job interview, you are using AI, and I would argue that that's not a high-risk use case. The system is identifying a face. It may be blurring a background. It may have—
I would agree with that. I think it's the definition of "harm". If you set a threshold of material harm in various areas, you're going to almost certainly capture high-impact use cases. For instance, we've dealt with the delivery of health care services, but Nicole referenced human rights issues, so those would be captured as well. Accommodation, employment or credit opportunities raise human rights issues and should probably be defined as high impact.
I don't think you have to capture every possible use case, but if you at least set a legal threshold of material harm, you're going to capture most high-impact cases.
I think it's a really hard but important question. At Microsoft, we've been contemplating this as well. We've been establishing our own internal governance program and understanding how we calibrate an application of our requirements in that governance process to higher-risk scenarios.
We have developed three categories of what we call “sensitive uses” internally at Microsoft.
The first is any system that has an impact on life opportunity or life consequences. In that category, we think about systems that impact opportunities for employment, education or legal status, for example.
The second category is any system that has an impact on physical or psychological safety. Think about safety-critical systems in the context of critical infrastructure, for example, or systems that might be used by vulnerable populations.
The third category is any system that has an impact on human rights.
I do think it's useful to have a framework to think about triggers for higher risk and then, where there is readiness to go further, to think about some of the more specific sorts of use cases like education and employment. That is represented in some of the high-impact examples in the AIDA as well. Then it's also to recognize that there is going to be a need to evolve and to put guardrails in place for how you think about the high-impact systems and the examples evolving over time. It's about not just having an open-ended process but also thinking about what the triggers are going to be for meeting that bar going forward.
There were some documents you were going to send to the committee. A few people mentioned that. If you can send them, that would be great. Any background information would be helpful.
I think I would echo a lot of the comments from my counterparts and point in particular to the definition of “harm”, because I think that could solve a lot of issues here. If you have a test of material harm, that can resolve exactly what the threshold is for both the identification and mitigation of risks associated with specific use cases for AI systems.
Right now, the definition simply says harm includes psychological harm or economic harm, but there's no calibration for what harm is really defined as, nor a test for that.
As a citizen and consumer, I like your products and I use them. As a person always lost while driving, I am especially happy that your services exist. You spoke ad nauseam of all the advantages these services offer to us. We like them. What can I say? That’s how it is.
I'll come back to the question from my colleague, Mr. Sorbara, because I found it very interesting. We are talking about high-impact, low-impact, high-risk and low-risk systems. We are talking about rapidly evolving technologies. I understand that some technologies can have different uses, which makes the situation a little complex. I understand your message about the bill's very fixed definitions of what does or doesn't have a high impact, and that in certain respects, you differ on those definitions. That's entirely legitimate.
Isn’t it normal for legislators, who are elected by the public, and the government, which wants to protect the public, to have definitions that differ from the industry’s when it comes to the terms “high impact”, “low impact”, “high risk” and “low risk”?
Given that our roles are not the same, isn’t it legitimate for us not to have the same definition?
It's perfectly legitimate for legislators to have a different view of high- versus low-impact AI. I think the point of the discussion is to try to make sure that the finite resources Canadian companies have are not used unnecessarily to do really complex assessments where they may not be necessary.
To give a bit more context, some risk assessments actually cost millions of dollars to complete. They're very complex, and they require a lot of due diligence and information. Once you've completed that risk assessment, you likely want it audited by a third party for very high-risk systems. That is not a small undertaking, especially for small companies and start-ups looking to get their business off the ground.
We sometimes interrupt the witnesses because our speaking time is so limited.
Some witnesses told us it might be useful to have a federal registry of broad generative models. For example, companies would be required to add certain codes or certain models to the registry, evaluate the risks and present a risk mitigation plan to the government based on the model and its uses.
If I understand correctly, you think it is too complex.
It's actually not effective. It's not that it's too complex; it isn't necessarily going to be an effective way to evaluate whether a system operates appropriately for that use case.
What we do for customers is provide good, clear information about recommended use cases for the models we offer to customers. The only way to evaluate how a model performs appropriately for your use case is to test it.
In that testing process with your data, you're going to be able to test and evaluate if it's performing appropriately for your use case. Just throwing up a bunch of models is not going to be that effective in giving us that information.
We talked about content moderation. One of my colleagues asked you if content moderation boils down to deciding what Quebeckers and Canadians see on the internet.
The content we see on your platforms is not random. It is very deterministic and based on what people looked at before, for instance. It causes a lot of worry among a lot of people.
I’d like you to give us a clear answer on this matter. Do you think your company also chooses what people see on the internet?
Yes, it does. We pick up signals from what people are interested in, largely. We've applied a lot more transparency and control around that for our users. You can click on a piece of content and go to “Why am I seeing this?”, and you can find out what signals our systems are reading in order to show you that particular piece of content.
We're trying to give users a lot more control over that. You're right that our systems decide what's most relevant and most interesting to our users. Our fear is that with an onerous regulatory system applied to that kind of content prioritization, ultimately, it's going to be up to government regulators what we should be showing to Canadians.
I understand, Ms. Curran, but you said you were in favour of a somewhat American approach, such as the White House voluntary commitments for artificial intelligence, and that these commitments might be worthwhile.
Do you think that a voluntary approach, self-regulation, could be successful? Do you think that a citizen can trust in what they’ve seen in the past and believe in the industry’s ability to self-regulate in the absence of a crisis?
No, I don't think self-regulation is the right approach. As someone who has worked in government for a lot of years, I'm a fan of smart regulation. The issue is what kind of regulation is applied and how those issues are debated. The government has indicated that it's in the process of preparing an online harms bill, and we think that's the context in which to debate where that line should be drawn.
We have always said to governments, “You tell us what you define as disinformation or misinformation, and we'll make sure we enforce against those rules.” That would be a lot easier for us than engaging in an internal debate—which we do daily—around where the line should be drawn and what content we should allow on our platforms.
We are very happy to engage in those discussions, and we would love policy-makers and decision-makers to set those rules for us, but there needs to be an open and robust public debate around where the line should be drawn. In this case—and I think another member referenced this—it looks like content regulation through the back door, and it doesn't really allow for an open, informed and public debate around where to draw the line for what's acceptable content online.
I know my time will be cut short during my next turn.
Based on what you are telling us, the approach you support is to pass online hate legislation that would lead to the same outcome, but it would not be a way to regulate what people see on the internet through the back door.
Is that not a somewhat contradictory way of looking at the situation?
No, I don't think so. We would like government, if it's willing, to regulate content issues. Those are sensitive issues. They're constitutionally sensitive issues. There are federal and provincial aspects to that kind of discussion, so if government wants to regulate in those areas, let's have that discussion and let's have it openly.
I think if we're asked to regulate content through a provision in an AI regulation bill, it's not going to allow us to explain why we're showing particular content to Canadians or why we're restricting particular content from Canadians on the grounds that it's misinformation or disinformation. That's not fair to our users. It's not fair to Canadians that they won't understand why we're taking those decisions.
Just for public awareness and for my colleagues, I will be tabling a motion, not for debate today but for a subsequent meeting, and I am looking for feedback and any amendments. It says:
That pursuant to Standing Order 108(1), the committee send for, from the auto manufacturers Ford Motor Company of Canada, Limited, General Motors of Canada Company, and Stellantis (FCA Canada Inc.), BMW Group Canada Inc., Honda Canada Inc., Hyundai Auto Canada Corp., Jaguar Land Rover Canada ULC, Kia Canada Inc., Maserati Canada Inc., Mazda Canada Inc., Mercedes-Benz Canada Inc., Mitsubishi Motor Sales of Canada, Inc., Nissan Canada Inc., Porsche Cars Canada Ltd., Subaru Canada, Inc., Toyota Canada Inc., Volkswagen Group Canada Inc. and Volvo Car Canada Ltd., a comprehensive report on their strategies and initiatives taken to date and on further actions aimed at improving security features to address auto theft in Canada; and that the documents be submitted to the committee within five working days.
We have done this before at this committee. The reason I'm suggesting we do it is that I don't want to turn this entirely over to Public Safety or Transport because of the amount of money that's going through this file to the auto industry. It won't take committee time, but we'll be able to figure out whether or not we might want to have a more comprehensive study about that issue in the future.
I'm looking forward to seeing if any of our colleagues have amendments to that. It will be in your mailboxes tomorrow morning.
The first thing I want to ask relates to an issue we have here: Either we trust the bill through regulation and a bit of vagueness, or we trust the industry by not having any legislation. That could mean upwards of five years, quite potentially...depending upon Parliament and how long it lasts. Even if it doesn't last, to get something through would take a lot of time, so we have a decision to make.
Ms. Curran, you mentioned that you were in public policy before. I think you worked for Prime Minister Harper, if my memory is correct, as director of policy. In July 2019, the U.S. Federal Trade Commission imposed a record $5-billion fine against Facebook for deceiving users about their ability to control the privacy of their personal data. First, in that case—and I don't know—were Canadians having the same problems that Americans were? Second, why would we just trust that no public policy would be the best policy at the moment versus the bill?
I don't think we are arguing that there should be no legislative framework. Our position may differ a bit from those of my colleagues here on that. I think the bill, in its original form as it's written now, minus some of the amendments that have been proposed by the minister, is actually quite good and quite workable.
I understand the political imperatives here. I understand the concern the public has for generative AI products in particular. I think it is incumbent upon the government and decision-makers to put some kind of guardrails, as Mr. Sorbara talked about, around the development and deployment of AI. Our only caution is that Canada not do that in a way that's so far out of alignment with other jurisdictions that it's going to have a negative impact on the development and deployment of AI in this country.
On that issue, I mentioned the Federal Trade Commission. How much was allocated to Canada for reparations? Did a reciprocal amount come to Canada?
You're asking for some harmonization here with other countries. In that particular case, there was $5 billion. Were there any reparations to Canadians who were affected by the breach the U.S. Federal Trade Commission noted? We actually get money sometimes, even from consumer abuse in the United States, through a number of different processes. Did any money come to Canada for that breach of trust?
You mentioned criminal provisions. Again, this is part of the challenge we're faced with. In July 2020, Amazon was hit with a record fine of almost $900 million U.S. by the European Union for processing personal data in violation of the GDPR privacy rules.
In that particular case, were Canadians subject to the same privacy violation for which citizens under the GDPR got reparations? Did Canada get any reciprocal treatment for the privacy violations that may have taken place?
I'm not aware of the particulars of that case. I'm also here on behalf of Amazon Web Services as opposed to Amazon, so it's more difficult for me to answer.
Would the Amazon case with the U.S. Federal Trade Commission be something you're familiar with? That's with regard to delivery drivers and the period of two and a half years...where there was a settlement. Is that one you'd be familiar with?
I appreciate that. I'm not trying to put you in an awkward spot, but you mentioned the criminal provisions.
I have a list of fines and penalties against Amazon in the United States from the Federal Trade Commission for that. Part of what we have to decide here is some of those elements. Canada, quite frankly, isn't getting the same treatment for its citizens.
I'll move to Microsoft.
Microsoft has agreed to pay $3 million in fines for selling software to sanctioned entities and individuals in Cuba, Iran, Syria and Russia from 2012 to 2019. The U.S. Department of the Treasury says that the majority of these apparent violations involved blocked Russian entities.
To Microsoft, is some of that activity still taking place or is that now completed? How can we be entirely trusting if there are no regulations related to AI, there's a waiting period to do something and we hear of cases like that?
Unfortunately, I also don't have the expertise to respond to your question about this specific case.
I will say that, from our perspective, there is an opportunity to still move forward with this legislation and to do so in a way that's swift and that addresses real concerns that exist about AI, deployment and high-risk scenarios. That can be done by adjusting the amendments around how high-impact systems are approached and how requirements are defined for general purpose or lower-risk systems.
I'm sorry, but I have limited time. If you don't know that one, then I'll move on to this one, because it is about what we're talking about.
The U.S. Federal Trade Commission charged that Microsoft violated children's online privacy protections. This involves privacy protection for us, given what the age of consent is. There was a $20-million settlement for that. I'm just wondering whether, in that Microsoft case, Canadian children were subject to the same violation that was addressed in the settlement.
Some of these cases that I have here.... This isn't hard research. This is from The New York Times. I'm not asking anything that's unknown, based on preparation, or gotcha stuff. It's just The New York Times information that you use your products to find.
I want to know if Canadian children had the same exposure that's been settled in the United States.
I just have one quick yes-or-no question for all the companies with regard to the 3% digital services tax. Are you opposed to the digital services tax that's been proposed by Canada? I'd be interested to know the positions. I can go by company or individual.
I'll leave it in your hands, Mr. Chair, but I'd be interested to know whether they support or oppose the tax, yes or no.
I think the concerns with the digital services tax are more about the timing, with Canada moving forward out of step with other jurisdictions. There are fewer concerns about the tax specifically. It's more the timing and alignment with other countries.
That's exactly our position as well. As long as there is a globally harmonized approach, we're happy to pay more tax and pay a digital services tax. Just don't make it a one-off.
There's a reason I asked that. I'm vice-chair of the Canada-U.S. Inter-Parliamentary Group, and when we go to Washington, we hear that congressional senators have been actively lobbied by your companies to get them to tell us to oppose the tax. If you do a quick search, you will find that many of the individuals are receiving financial donations.
Mr. Ryan Williams: Don't take away my time, Mr. Chair.
Thank you, witnesses.
I want to focus tonight on international standards and on catching up to some of our peers when it comes to minimizing harms to artists, creators and the general public. I know you all agree: You have consumer products and you want to protect creators and consumers.
A big concern with AI is people's control over how their likeness could be used for profit where the likeness's value reflects an investment by the individual...against online harms. Obviously, we also want to protect consumer use and rights.
I'll note some of the biggest examples we have today.
I met with a group yesterday from Music Canada. There's an AI-generated Johnny Cash who can sing Barbie Girl by Aqua perfectly. This is a computer system learning an artist and replicating them. We can look to what could happen if that were used with Michael Jackson or others to create full albums. Who's protecting them? Are there laws to protect consumers and, of course, consumer rights?
The second one is deepfakes. They are very concerning. The biggest example right now is Taylor Swift. That's also not just for celebrities. Something a colleague of ours, Michelle Rempel Garner, has been especially vocal on is the use of AI-generated fake photos and videos for intimate partner violence.
Looking at those concerns and this material harm, how do each of you see Canada catching up with the AIDA or existing legislation and ensuring we protect consumer rights?
I'll start with Ms. Foster and we'll go around the room.
I have a household full of Swifties. We talk about this at the dinner table.
I think you've raised a great example, actually, of where existing legislation could be more purpose-built to solve a problem like that. It is already illegal to share intimate images without consent. I think adding clarification in the Criminal Code to ensure that this covers AI-generated images is probably a more effective vehicle for addressing that particular issue. I think Canada could act on that very quickly and very efficiently.
The industry is working pretty hard to ensure that it is much easier to detect AI-generated content. Among the commitments we made in July at the White House, we made a commitment to develop watermarking and other tools to detect generated content. By November, we'd already released a watermarking tool within our Titan AI model. The industry is also trying to move very quickly to address some of these harms and is obviously collaborating to make sure that there are laws in place to address them, but there's also what we can do on the technical side to help ensure that those images are detected quickly.
The copyright question is a very hot topic, and the government has had consultations on this. From a policy perspective, I think there are two sides to consider. There's the AI training and the data input: How do we allow for greater assurance that we have good data to train models? There are also techniques on the output side to suppress copyrighted content, so that copyrighted material does not appear in a model's output.
It's good to separate the discussion into understanding the two aspects of AI and ensure that we have good models that reflect Canadian content as well. I think we should think about the fact that most available models are dominated by either the Chinese language or English. There's a lack of, for example, French-language content, but we have other minority languages in this country as well. We want to make sure that we permit appropriate content use for training models to ensure that we have access to Canadian-specific large language models when we interact with them.
The issue you've raised around deepfakes, both video and audio, would not be addressed by Bill C-27, at least not anytime soon. I know that the government made an announcement—I think today—around the issue of deepfakes and an intent to deal with them. They could be dealt with very easily through an amendment to the Criminal Code or existing legislation.
It's the same thing around election disinformation. If that's a harm committee members are concerned about, that can be addressed through a quick amendment to the Canada Elections Act. There's even the Copyright Act on issues of creator rights. The use of material in the context of AI development that impacts creator rights can be dealt with through the Copyright Act as well.
There are existing statutes. We advocated previously for a sectoral approach to AI regulation because of this, but those could all be dealt with very quickly. They won't be dealt with in the context of Bill C-27 quickly.
With regard to deepfakes, similar to my colleague from AWS, we see an opportunity to either make adjustments to the Criminal Code or think about opportunities to address that concern in the upcoming online harms legislation.
With regard to copyright, from our perspective there's a need to think about how we're enabling the use of AI to advance the spread of knowledge, to enable new creative works consistent with copyright law and to protect the rights and needs of creators. We want to continue to engage with partners and governments on how to achieve those objectives. We think it's been really productive to have the ongoing consultation that we've submitted a response to, and we look forward to any further opportunities to follow up on that.
Similar to others, we see other vehicles as being appropriate mechanisms to address the incredibly legitimate concerns around images of that sort.
I want to be clear that for non-consensual images, we have a zero tolerance policy on Google, regardless of whether those images are synthetic. We provide tools to individuals who might find themselves in that terrible situation so we can support them in addressing it. We also have very clear policies with regard to our own generative AI tools that ensure that generating sexually explicit content of this sort is a prohibited use.
I think maybe my colleague Tulsee can also speak a bit to this issue, because detection and watermarking are really important technological advances where we're going to need a very collaborative approach.
Thank you to the witnesses for making time for this committee.
My questions are largely for Microsoft.
Ms. Craig, how do you believe the government should approach regulating a field that's evolving so rapidly, and how do you think the AIDA gets it right in meeting that challenge? One of the examples I think of is the initial schedule for high-impact systems that can be edited as technology evolves and time goes on.
In the overall approach to legislation and the regulation of technology, and AI in particular, it is incredibly important to think about establishing a framework and a process through which you can iterate over time with appropriate guardrails in place. One example is allowing the high-impact system schedule to continue to reflect the growing deployment of this technology and new high-risk scenarios by adding to it over time. Appropriate guardrails should ensure that the process for doing so requires that additions reflect the same risk analysis and meet a threshold before being added to the schedule.
Ensuring that the processes for implementing requirements evolve over time with changing approaches to AI safety systems is also important. Likewise, ensuring that the risk-based approach is really foundational to the regulation will be important, as will ensuring that the way the regulation applies is not overly broad. Having onerous requirements applied to low-impact systems restricts how Canadian businesses can continue to use AI for lots of innovative purposes.
Regarding compliance, are the proposed amendments currently in the AIDA sufficiently specific for Microsoft to plan its compliance efforts, and would you already be in line with some of those efforts given your existing internal structures?
The legislation and the recently proposed amendments provide a high-level structure for requirements, and through the implementation process, we expect there will be more detail for defining the requirements.
There is also the enforcement approach, in which there is an ability—for example, through the power to audit—that is less specific. There seems to be an opportunity, through the implementation process, to provide more detail about how organizations can demonstrate compliance once the requirements are defined in more detail.
From a Microsoft perspective, we have been working on internal governance for responsible AI for seven years, and we have developed a lot of these constructs internally, which we can think of as a starting point. We can imagine a lot of other organizations may not have been spending as much time on that issue or may not have as many resources to apply to that issue. Providing more certainty on how to comply will be of incredible value.
We have developed our responsible AI principles, and we've developed a responsible AI standard to put those principles into practice. It is a set of very specific goals and requirements that apply to internal teams working on AI, covering practices such as mitigating the risk of bias and ensuring that sufficient transparency and accountability are built into our processes.
Maybe I can chime in here. I think Amanda is being very diplomatic.
The AIDA, in a number of respects, goes well beyond the most stringent proposal out there internationally, which is the EU AI Act. That act is already the subject of a lot of debate among member states, and it doesn't have the support of countries like France, which want to ensure their own domestic industries are given a chance to flourish.
The AIDA has created a standard that doesn't exist anywhere else in the world, so if you're asking us whether we would meet that standard if it were imposed here, sure, we have the resources to meet it. The compliance costs would be incredibly high. Would that mean certain products might not be launched in Canada? Maybe. However, all of us work for companies that are able to meet very high thresholds, because we have the resources and money to do that.
It's going to have a significant negative impact on the Canadian AI industry and on innovation in Canada. That's the word of caution. Canada should make sure it's aligning itself with other jurisdictions. We're a relatively small market. The EU is setting a benchmark that is world-leading. We should at the very least not exceed that.
I echo the comments of the panel, but I hope the committee will have the opportunity to hear from Canadian companies that deploy AI where AI is not their core business but is deployed within their operations. This is to understand the impact that these kinds of regimes are going to have on the agriculture, financial services, manufacturing and energy industries.
This technology is being deployed not just by AI-developing companies; our customers are in every sector of the economy in Canada. I think the committee should really take some time to hear from Canadian companies that are going to be impacted and that leverage our services or may develop their own services. It should be a high priority for this committee to hear from them.
From a Google perspective, obviously we've been public in our commitment to having responsible AI principles. My colleague Tulsee is at the centre of our governance process around that, which is scalable and incredibly rigorous and robust.
Similar to some of the comments we've heard from others, one of the areas where you might want to consider a focus is the definition of “high impact”. Inserting some considerations or factors for the threshold of what defines a high-impact system might provide clarity and flexibility as the technology evolves. If you think about the severity and probability of harm, the scale of use, the nature of the harm and these types of things, giving some guidance to regulators will help provide the certainty and protections the government is looking to establish here, while also giving clarity and predictability to companies large and small that will need to build the systems to comply with this.
I think we have a real opportunity right now to get this right from the outset and build a coherent and consistent policy environment for Canadian companies, which are going to need to be prepared to succeed on the global stage.
I will continue with you, Ms. Curran. Please don’t see it as badgering; I could have put the question to anyone.
The industry—and I repeat, we like it—is trying to show us its broad sense of responsibility, that it has principles and wants to evolve. However, there seem to be things that might appear simple to the broader public and that should have been done but weren't. We are talking about identifying fakes, deepfakes and the rest.
Everyone is wondering why this hasn’t been done yet. I understand you are in a competitive environment.
I am wondering if, in the current environment, there isn’t a cost in terms of market and profits when it comes to adopting ethical standards that surpass those of one’s competitors.
Since we only have two minutes, I’ll go quickly. My question is this: In the current environment, could raising ethical standards above those of one’s competitors lead to a cost in terms of market, profits or clientele? That is the crux of my question.
No, I don't think so. To the extent that we want to be a trusted, credible brand, we want to be world-leading and industry-leading in ethical standards. I think there is no negative cost to that; there is a positive benefit. However, what we're talking about here is the standard that Canada is setting in the regulation of AI systems. There will be a cost if that regulation, that threshold, is set at a level that far outstrips those of our international peers.
High-impact systems are defined as those that pose a risk for health, human rights or safety. If I understood correctly, those are basically the criteria.
What about electoral interference and disinformation on a daily basis?
High-impact systems are defined here as systems that pose risks for health, human rights or safety. That is a nice definition. I’m not an expert and I don’t know what I think of it, but it certainly is important.
However, I’d like to know where disinformation or electoral interference, for example, fit within these criteria.
Are they risks for health, human rights or safety? Might this definition need to evolve at some point?
You're referring to sensitive uses. I think there are ways to think about disinformation risk that would certainly fit into the categories that I described regarding potential psychological harm or impact to human rights.
I think what's really important is that the approach to defining what's high risk is clear and can evolve for changing challenges like misinformation and disinformation. That's the opportunity in the AIDA: to define clearly what's high risk and a process for evolving it over time.
I think your questioning also points out the challenge of trying to have such broad, sweeping, horizontal legislation, because it's very difficult to try to capture all of these very specific use cases in one piece of legislation. In a lot of cases, it might actually make more sense, for example, to direct Health Canada to look specifically at how to navigate complex use cases occurring in the health care industry, or to direct the financial services regulator to deal with the use cases in its sector.
Those are extremely complex use cases for people like us. While we may be very good at understanding AI policy and risk mitigation, those regulators already understand these use cases extremely well, and they know how to manage risk in their sectors.
As to the complexity of trying to create this broad legislation, it might be more appropriate to direct Health Canada to see what levers they have already to regulate AI in specific use cases, or OSFI to look at the financial services sector—
I'm sorry to jump in here, but you can also amend the legislation to identify specific use cases that you are concerned about: If it's election disinformation or if it's delivery of health care services or accommodation services, you can insert those into this bill as specific cases that you want addressed. You set a threshold for harm that is a materiality test, but then you can list specific examples of use cases that you are particularly concerned about and that you want addressed.
I just wanted to pick up on what Nicole Foster was saying, because it's consistent with the approach that the United Kingdom is taking. It really leans into the existing expertise that our sectoral regulators have in the space. They can best understand the risks that exist for their sectors, the actors in those sectors, and they know the questions to ask. Maybe Canada can take a cue from how other countries are approaching this specific question.
This is the quandary we're in. We have to trust either this legislative process or what we're hearing. This is where we're getting mixed messages from a lot of witnesses.
My question is for Mr. DeVries. You're the director of privacy legal for Google. Perhaps you can answer this.
I know you've paid several different fines and penalties, most recently in an antitrust lawsuit in the U.S. for $700 million. Specifically, you should hopefully know about the lawsuit you had where you were secretly tracking the Internet use of millions of people who thought they were browsing privately. For that, you were fined $5 billion.
Were Canadians caught up in that too? If that is the case, are we going to get compensated for it?
I'm not aware of that settlement applying outside of the United States, where it was made, but our changes to the incognito mode we offer in the Chrome browser—I'm actually using it right now—were made globally. All users are benefiting from those changes, which didn't change the functionality of the system but made it clearer how you were using it and what you were using.
The bottom line, then, is that the product had the same consequences whatever your national boundary. However, the U.S. and its citizens got compensation for the situation because the legal case went through there. Is that a good summary of how that took place?
I would say that was not based on privacy law—the same kind of law that the CPPA would consider in this bill. That was based on private litigation. There is a different private litigation approach in the U.S. compared to in Canada, as you know.
I appreciate the fact that you're open to saving taxes. I personally get lobbied—and so do the rest of the parliamentarians and senators when we go to the U.S.—against Canada's position on the digital tax.
I'm raising that because the position of your companies is to fight all of the fines and penalties that have been paid internationally, mostly in the United States and other places. However, Canada has the same products but doesn't get the same benefit of reparations. Then, on top of that, we even have to wait for taxes to come in from the OECD decision, which could take another decade.
I know that I'm out of time, Mr. Chair, but this is the disappointment that I have. The challenge Canadians have over those who are lobbying us on whether we act or don't act on this legislation is a matter of trust: trust the administration and regulations, or trust the companies and then keep a blind spot for Canadian consumers, because our laws right now don't even give us the same compensation that Americans and Europeans enjoy.
Thank you to all of the witnesses for an excellent discussion today. I'm going to move very quickly because I have a lot to cover in a very short period of time.
Ms. Foster, proposed section 40 of the AIDA is in relation to proposed sections 38 and 39. It's for a case where a company or an individual has committed a crime in contravention of the act, and it would apply a penalty of up to “5% of the person’s gross global revenues in its financial year”.
Are you aware of any other legislation related to artificial intelligence that applies such a fine?
Similarly, we're not aware. We're equally concerned about the remote-access provisions in the bill, which would allow the AI commissioner to remotely access data from all of our companies without any concern for privacy or protection of that data.
No, we're not aware of anyone else taking that approach. I would point out that the test is also a likelihood to contravene, so it goes beyond what we tend to see.
As to the data commissioner in proposed section 33 of the legislation, I'm particularly concerned about this, largely because in other panels we've heard—and it's well known—that the Government of Canada doesn't have the intellectual capacity to regulate AI.
I'm concerned that in this legislation we're going to be giving very broad powers to regulators, and in some cases, the minister may delegate those powers—in the current form of the proposed bill—to an artificial intelligence and data commissioner who reports directly to the minister. I'm concerned about this relationship because it may create a conflict between the multiple objectives housed in the Department of Industry, namely economic development and, in this case, protecting citizens from online harms.
Would any of the panellists be able to comment on whether they believe an artificial intelligence and data commissioner, in the context of artificial intelligence, may be better served if that individual or future government organization reports directly to Parliament and not to the Minister of Industry?
I don't know if we have a super strong view about that other than to say—just to repeat my previous comments—that I think it is very difficult for one organization or one person to understand risk mitigation in financial services or health care. These are very complex use cases. I would strongly recommend that the government consider devolving responsibilities of overseeing AI in those sectors to those regulators specifically.
I think it's a great idea to have an AI commissioner report to Parliament rather than to the minister of the department directly.
As we've said, the remote access provisions in here allow the commissioner to conduct audits and access user data in a company's possession. That legal power is unprecedented in liberal democracies, and even in non-liberal countries.
The next question I will direct to Google. When I said “Google” earlier, my Pixel 3 said, “Hey, Brad. How's it going?”
The first part of this legislation deals with privacy. We've had a big discussion about sensitive information. I think it's equally important in this section of the bill.
What we haven't talked a lot about with respect to AI up until today is the impact it's going to have on children. I heard one witness earlier mention that we need to look at taking a proportionate approach for high- and low-risk systems. Actually, I think everyone has commented on this.
In the context of children, how would Google define sensitive information as it relates to its policies and technologies applied to children?
Safety is our number one priority. We take a comprehensive approach to child safety. That begins with designing age-appropriate products, then providing settings and tools for parents and users to make the right choices for them and, finally, having policies that we enforce.
With regard to the specifics around the sensitivity of data from children, maybe my colleague Will can speak to how we approach privacy for children.
With respect to the data related to AI, or any data, we're going to take the circumstances of children into account, but obviously not as a monolith. You're going to think of children who are younger, whose parents will be very closely involved in their use of our products and services. You're going to think about teenagers, who have more agency but still are a special audience that needs consideration—
I think that's the key point right there. I'm sorry for interrupting.
The United Kingdom designed privacy legislation that was proportionate to age. Do we need to do something similar for artificial intelligence regulation? Perhaps we need to be a little more explicit, as it relates to high-impact systems, with designating certain ages and with the impact the systems could have on the psychological development of children.
I'm happy to talk about that with respect to data use in general. My colleague Jeanette could talk more broadly about this in the context of the AI bill.
I'd say for children overall, yes, we need something that gives us a framework as providers to design our products in relation to the age of our different users. That's the framework that has emerged globally. I think that's the same kind of idea we want to see here in Canada.
Finally, I have one more quick question for Microsoft related to national defence.
We haven't spoken a lot about national defence in the context of artificial intelligence at this committee, largely because national defence is outside the scope of this regulation. What's interesting is that we hear a lot about the possible harms that Canadians can face from artificial intelligence designed by national defence systems in countries such as Russia and China, which are being discussed right now at the foreign interference commission that's taking place.
What is Microsoft doing with the Government of the United States to counter some of the actions taken by China and Russia with respect to AI and destabilizing democracies like Canada?
We are in active conversations with governments across North America regarding how we make sure our elections are safe and how we ensure that we stamp out disinformation. We also have strong connections with defence here. We see they are putting in place AI programs that have responsible safeguards and tools to ensure that there's proper human oversight in this new world of conflict.
Is there a possibility of overlap between the roles of National Defence, personal security, the application of AI and the threats we face? Does this need to be studied further?
Thanks to all the witnesses for being here today. I'm really finding the exchanges valuable. The insights and expertise you're sharing are making a big contribution to this conversation today, so thanks for being here, all of you.
I think your companies are all very large, very profitable and very successful, in part because of what my colleague Mr. Sorbara said, which is that you're generating a lot of value for your customers. At the same time, I think all of you are trying to demonstrate leadership in responsible AI and the use of responsible AI. That's great.
As legislators, of course, we have a big role to play, and we have to make decisions based on what's in the public interest. I think we're partners in that conversation, so I appreciate our ability to work together and your candour in the comments you've made.
Earlier this week, we heard from Yoshua Bengio, who is sometimes referred to as the godfather of AI. He was quite emphatic about the exponential benefits and risks that are growing as AI evolves. This highlighted, at least from my perspective, the need for speed in getting this legislation through Parliament.
Notwithstanding that there may need to be some amendments and some changes, do all of you agree that as the Canadian government, we need to act with speed to make sure this legislation gets done?
To be honest, I don't know that the legislation changes much about how we'll be approaching responsible AI. It will change the level of compliance and the complexity we have to comply with. It may, in one way, divert resources towards compliance and away from responsible AI development, but that would not be a reason not to legislate. I don't think any of our companies are slowing down our efforts to ensure the responsible deployment of this technology, and we continue to rapidly innovate and invest to ensure that we're doing the right things.
I think some of the existential risk is very theoretical, and I think we're very focused on some of the real risks that need to be mitigated in how AI is deployed today. We continue to invest in determining what the next iteration of AI requires from AI developers. How do we manage some of those emerging risks around hallucination and toxicity, and how do we develop appropriate red teaming and safety testing for these generative AI models?
I don't think it will change how we approach responsible AI, but it might change—
I'm trying not to interrupt you. That wasn't a short answer, but I appreciate that I'm asking you to answer a tough question with a short answer. We politicians do that a lot, I find, but I didn't do it intentionally.
I'm just trying to get a sense of whether you agree that speed is necessary. I take it from what you're saying that you almost don't need government legislation because you're already responsible in AI development and usage. We could probably beg to differ, and you have said already that that doesn't mean governments shouldn't legislate.
Yes, I do think speed is necessary, if for no other reason than to maintain public confidence in AI and in our products and services. I think it's important to get it right and make sure that you don't step on the work that's being done by the Canadian AI ecosystem, but I think speed is a good idea. Passing something is a good idea.
We agree that there's a need to move quickly with amendments but also deliberately. My colleague mentioned earlier the importance of consulting with other industry sectors. That is one thing we think would be incredibly valuable because of the breadth of impact, especially to lower-risk systems and how they will impact Canadian businesses across sectors.
We would go to the proverb “If you want to go fast, go alone; if you want to go far, go together.” We see this as a real global opportunity, and we want to go far together, so I think this is one of those areas where we need to collaborate with international partners to get it right.
I'm going to ask another general question. How many of you have adopted the voluntary code of conduct on the responsible development and management of advanced generative AI systems?
We asked to be included in the Canadian code. We were told it was for Canadian companies only. We were one of the first to sign on to the White House commitments. We are signed on to other voluntary codes that are similar.
Similarly, we're focused on international entities like the G7. That's an area where we can work with Canada, which is also active in the development of that code of conduct.
I'm going to switch tracks here a bit. All of you have made comments on the definition of harm and defining material harm, but all of you have described slightly different ways that you determine that yourselves internally. I think that's what I heard. I can repeat back some of the things I jotted down, but it sounded like there was a slight variation in how you assess that internally. Most of you have said that a lot of what you consider to be a high-impact system depends upon the use case.
There are two questions here. Maybe I'll start with the use case question, because I think it's probably the most difficult one. My feeling is that if we were to try to predict all of the various use cases....
Ms. Curran, you said that we should identify the use cases we're concerned about and then, I think you said, identify the threshold of harm, if I'm not mistaken. I find that as regulators and legislators, it would be very difficult to determine all of the various use cases. I'm sure you can't predict use cases either. What I'm struggling with is how that is a real approach for legislators to take. Could you respond to that?
I think you can do both things at the same time. You can set a materiality threshold that's broad and potentially applicable to an infinite variety of use cases and also outline specific use cases that you are concerned about, including election disinformation and the provision of health care services or employment services. You can have an overriding threshold of materiality that applies to a broad range, an infinite range, of potential use cases.
Yes, you could. I think our concern is that the minister's proposed amendments identify use cases or specific scenarios that we don't believe are high impact. That's where we've identified concerns about things like content moderation and prioritization, and also the fact that there is no threshold for harm in the bill currently.
Do all of you agree that some threshold for harm needs to be added to the bill?
Again, I'll start with you, Ms. Curran, because you were responding directly to my previous question. Then I can really quickly survey the panel before my time is up.
—one technology that has an incredible breadth of sectoral applications. The depth and breadth of AI is immense. Trying to come up with something that factors in all of these use cases is so difficult. The reason it's so difficult is that how you manage harm or manage risk will be so specific to the data you're working with, the population you're engaging with and who will be interacting with these systems that trying to do this as a horizontal piece of legislation is a bit of a fool's errand.
Really, looking to the different sectors and those who have the expertise in their sector will be a lot more specific and efficient as a regulatory approach. These are the regulators that have expertise in these complex sectors and that already manage risk.
I think there will be a need for a backstop piece of legislation. Other countries have identified this. The U.K. has identified this as an approach it is looking at taking, but it is waiting to legislate. It is doing the work by sector first and seeing where the gaps are that need to be filled.
I think what you're being asked to do as a committee and with this piece of legislation is extremely difficult. It will be very difficult to make sure that we get at the risks we want to target.
Additional work needs to be done on the definitions, perhaps taking a principles-based approach. One thing that really surprised me when I was talking to an insurance underwriter about their use of AI was that they don't use AI for all their claims adjustments. They said 10% of their claims are silly and simple, so they're going to have AI do the silly, simple ones. They'll get that work out of the way and then keep people doing what people are uniquely qualified to do, which is creativity, ingenuity and critical thinking. That was taken off the table. If I look at harms and categories of uses, do they fall into high-impact use because you're using them for claims adjudication, whether it's the silly, simple ones or not?
There is a need to go back and look at those definitions and look at how these tools are actually implemented in the real world. I don't think that's been done sufficiently across the broad communities, as was said.
To your question about whether there should be a threshold or some sort of test for harm, I think we would agree that there should be. This is maybe just an issue of clarification, because elsewhere in the bill there are references to significant or material harm. Maybe what we really need to do is have some consistency in the language.
The final thing I'll say here is that I think these recommendations all speak to the ability to make some adjustments in this bill and land it in a really workable place. If you have use cases that are about the material harms you're concerned about, whether they're health impacts or economic outcomes, insert some factors for what a high-impact system is to give clarity there, establish a test for harm or a threshold for harm and then enable and empower sectoral regulators to do what they do best. They can draw upon their extensive expertise to apply that approach for their areas of regulation. I think that can provide a path to having the type of flexible policy framework you're looking to build.
Last Monday, when he appeared before this committee via videoconference, Mr. Bengio told us we had to move Bill C‑27 forward quickly, because in a decade or even within two years, robots as smart as humans could make decisions.
In today’s La Presse, an article on digital life shows that in 2019, during the pandemic, your four respective companies and Apple created nearly 1 million jobs. Since then, especially over the last two years, over 125,000 of them were cut, and it’s not over.
Are these employees, who created tools through artificial intelligence, now paying for it by having their jobs eliminated? Is this the start of a significant reduction in the number of employees?
I own an SME. As we speak, in the field of communications, tools like ChatGPT can create websites in five minutes. Obviously, it doesn’t take me five minutes to do it. One must adapt to today’s reality.
In the future, will artificial intelligence help us to create more jobs or fewer jobs in the field of information technology?
In fact, Ms. Craig, you talked about research and development. I think Ms. Curran did too.
Could Bill C‑27 undermine research and development in Canada if it sets out rules for artificial intelligence that are too strict?
My questions are for everyone. You may answer one after the other if you like.
I can speak a bit about the impact on jobs. We're pretty conscious of the fact that new technology changes the employment landscape. Not very many of us work in facilities that have secretaries and receptionists anymore; technology has taken away the need for a lot of the functions in those roles. We'll definitely experience a transition in our economy, but I think some of it could be extremely beneficial.
As a country, when we think about AI adoption and where we want to encourage strong AI adoption, we should look at sectors where we already have labour shortages. Health care is a great example, and the construction industry is another example. Targeted applications of AI can help alleviate labour shortages, and some of the administrative burdens that doctors and nurses face can be alleviated through different applications of AI to ensure that they're able to focus more on patient care.
We expect to see change. I think in some cases, AI actually creates new economic opportunity in different types of employment, but change is a certainty.
I agree with that. It will change the labour market. I'm actually really optimistic about this. Some 200 years ago, 72% of North American workers were farmers, and now there are fewer than 2%. Sixty per cent of the jobs that existed in North America in 2018 didn't exist in 1940, so I think we're going to create a whole new set of jobs. However, it is going to be incumbent upon government and policy-makers to work with companies on making sure we're training the labour force of the future and that Canadians are getting the skills they need to enter what's going to be a whole new workforce with a whole new set of jobs. I agree with Nicole as well that in an era of labour shortages, AI is going to be the best tool we have to solve some of those issues.
On the issue of research and development, our global AI research is based in Montreal. Dr. Joelle Pineau, who I hope can appear before this committee at some point, can speak to the amazing work she is doing out of Quebec to catalyze AI research globally.
I think the wrong regulatory framework and overreach or over-regulation by the government are going to drive activity out of the country. I would hate to see us lose it, because we are world leaders when it comes to AI research.
I think because of the way the bill very broadly affects even lower-risk technology, it could impact research and innovation. It could dampen innovation and ultimately result in a reluctance among Canadian businesses, including small and medium-size businesses, to adopt AI and to feel confident that they can meet regulatory requirements in using AI for innovation. It could also ultimately impact the competitiveness of those businesses and of Canada.
The media reported on a recent decision by Meta to allow a manipulated video of the President of the United States to remain on your platform. I note that Meta's oversight board stated that the loophole threatens elections worldwide and should be closed as soon as possible. That's a pretty big and serious statement. I think all Canadians want and expect Meta to adhere to the AIDA legislation once implemented.
Can you tell the committee today what your organization currently does to protect against altered video content, and what would be done differently if the AIDA were applied?
We do have clear policies against manipulated media, whether it's video or audio, if it's misleading, and we remove that kind of content.
We set up the oversight board a number of years ago to give us guidance on some of our content policies, and that's exactly what they have done in this instance. They have said that we made the wrong decision in this case, and they have given us guidance on how to apply our policies going forward. I think that's exactly what we're going to do.
We have industry-leading tools that are going to help us detect and remove AI-generated content. Certainly, for political ads and social ads, we're requiring that advertisers now disclose when they are using AI-generated content in those ads.
We just announced, for organic content, labelling requirements. We are going to label any AI-generated content as AI-generated so that users are aware, when they are seeing images on our platform, that they are generated by artificial intelligence. We're working with other industry members to make sure that we extend that to video and audio when it's technically feasible. That work is under way.
My next question is for all of the witnesses, and I will ask for some brief responses.
It's clear that governments need to work with social media platforms—we have heard that very loudly today—to protect both individuals and critical infrastructure systems. Interoperability will be key to that. Will this legislation help define expectations for your organizations?
I think as we engage in discussions about potential regulatory frameworks and guardrails, it always informs how we approach where we think we need to focus in responsible AI development and tooling for customers. We want to make sure that our customers are able to comply with laws as they emerge.
If the question is whether the AIDA is going to help us set guardrails in Canada, the answer is maybe. It depends on where things land. If some of the amendments the minister has proposed that we're concerned about are included in the legislation, I think we would need to think about how we respond to that.
Overall, we think regulation is a good thing. We think guardrails are a good thing. By and large, the AIDA in its original form is a pretty good bill, so it's just a question of whether we can get the details right.
Innovation, Science and Economic Development Canada has created a working group called the Canadian Forum for Digital Infrastructure Resilience. I chair the AI/ML working group, which is looking at critical infrastructure. It brings together a variety of different companies to have these conversations. We're working through and wrestling with the language in the AIDA definitions and how that applies across critical infrastructure.
We, as companies, support critical infrastructure providers, but we're not necessarily the providers ourselves, so the call to invite others in the agricultural sector, the water sector and the manufacturing sector to understand how these tools and solutions intersect is going to be critically important as we move forward.
I think we're looking for more clarity. In order to answer your question, we really need to see some clarity both in the legislation itself and in how it would be applied.
We are nearly at the end of the testimony and of this witness panel's appearance.
I am trying to form an opinion on what I heard. I am just trying to see where you are at regarding regulation so that I can think about it constructively.
I must admit I am a little confused.
On the one hand, when we asked all of you if this requires regulation, the answer was yes. When we asked you if quick action is needed, the answer was yes.
On the other hand, when we got into the details, you told us that Bill C‑27 is inadequate: it contains too many things and touches on too many aspects. Then you told us, more or less, that a lot of other legislation would need changes. I noted down the ones we discussed today: the Canada Health Act, the Canada Elections Act, the Personal Information Protection and Electronic Documents Act, the Criminal Code, the Copyright Act, the Patent Act and measures specifically targeting advertising aimed at children. These types of changes would require endless legislative work, especially with the type of Parliament we're sitting in today. In the end, it would lead to us not having any regulation.
Furthermore, I think if we presented a bill to you in which we changed all of that legislation at the same time, you would probably tell us we are coming back to the same problem at the start of Bill C‑27, and it all boils down to the same thing.
If I understand correctly, it’s a matter of public relations and strategy, among other things.
I have the bad habit of being very direct. I will therefore ask you the following question: Isn’t this a rather clever way of telling us that you don’t want any regulation?
I don't think we're arguing for no regulation, and we support good regulatory frameworks—all of us do. I think where there is a need for speed, there are potential opportunities for government to move more quickly with existing legislation.
I think they are in a position to regulate and make decisions more quickly than it will take for this bill to pass, develop regulations and then develop the expertise to understand a Health Canada use case. I think those regulators are in a position to understand where risks lie more quickly in those specific areas, yes.
I would like your companies to table with the committee, in writing, the list of all the individual statutes you think should be changed. That way, we have something relatively equivalent. It could be constructive. If you could do this, it would be interesting. I’d like to read it.
Mr. Chair, I do indeed ask you to please follow up on that.
I think some laws may not even need to be amended. The government may already have authorities that are technologically neutral. A lot of the things the committee has discussed or we hear discussed are already potentially addressed in existing law, and if you applied the law in a technologically neutral way, you may already have the powers to address those issues.
In the interests of making a positive contribution, your respective companies could each table a list of every statute you think should be changed, so that we have equivalent submissions to compare.
In response to Bill C-18, Ms. Curran, Meta pulled access to news articles and sharing. One of the criticisms of that from many experts was that it was going to make children and youth more vulnerable to abuse. Your CEO, Mark Zuckerberg, at least apologized to Americans during congressional hearings a few days ago for what has taken place.
Can you explain to us—I'm trying to get at the trust factor here—what type of analysis has continued from that point? Do you disagree with the experts about that exploitation taking place through the products you have? Are we getting the same protections for our youth?
I'm surprised that there hasn't been a general, wider apology. I don't know what difference there is, other than citizenship, among those who have succumbed to this, and it has caused significant problems, including connections to suicide and self-harm. Can you assure us on the committee that an analysis is continuously going on with regard to the response to C-18 and whether or not Canadian youth are further at risk because of the spread of misinformation, which affects them mentally?
I think these are two separate issues. Youth safety and youth exploitation online are a key, critical concern of ours. We just announced the further rollout of an initiative with the National Center for Missing & Exploited Children that is going to ensure intimate images and sexual exploitation content are more easily removed from platforms. We're assigning an individual digital code to those images. Youth can do that themselves from their devices without sharing the image. Once that code is received by us and by NCMEC, we can fan out to make sure that those images are not shared more broadly on our platforms and across the Internet.
We are taking a number of steps to make sure that youth are protected on our platforms. It's an ongoing battle, I have to say, and we're working with other members of industry to make sure that youth are protected.
That's very different from the issue of C-18 and online news content, which we have had to remove as a compliance strategy in response to the government's legislation. We didn't want to have to remove that news content. If we are carved out of that bill, we would be happy to put it back up. I'm hopeful that we can continue to work with the government on that front.
I am really concerned that the response to Bill C-18 is not the same across different countries. The spread of misinformation related to the topics that I've noted is particularly troublesome.
Is there any analysis going on about Bill C-18 and whether Meta's response has put youth at greater or lesser risk with respect to the spread of misinformation? Again, your CEO has at least apologized to Americans on this issue, but not to Canadians. In your response to Bill C-18, is there an ongoing analysis of whether it has caused further harm by spreading misinformation that affects the mental health of young people who use your product?
Again, no other government has pursued legislation like Bill C-18. We would be happy to work with the government to put news content back up, if they are able to carve us out of that bill. The Canadian government is unique in pursuing that particular piece of legislation.
That is separate and apart from our efforts to protect youth on our platforms, which are ongoing. We've had a number of recent announcements in that respect. We're going to continue that work very actively, because it's a key, critical priority for us.
Again, it's quite separate from the issue of news online in Canada, which we have had to remove in response to the government's Online News Act.
We are out of time, but I'll just give one more minute to Mr. Perkins for a last question. I hope he remembers this kind gesture when we get to clause-by-clause.
I will remember all of your kind gestures, and this is not the first one.
I have just a quick question, Mr. Weigelt.
I met earlier with one witness we had here who said that when it comes to thinking about the future and artificial general intelligence, something missing in the bill is the ability of the government to regulate based on computing power. Should a clause be added to the bill that gives the government the ability to adjust based on computing power as time goes on?
There are different schools of thought around the computing power required for these models. We've certainly seen the European Union put a metric out for that.
There are exciting new developments in what are called small language models, SLMs, and a huge open-source community is building very powerful models. I would advise against looking at performance and CPU capacity. Look at capabilities instead, and talk about what these models actually do and what they provide.
Again, it's important that we put in place the guardrails to look towards the future so that when these new tools come out, we have powerful regulatory environments that help us protect Canadians against the harm that could arise.
Thank you to all of the witnesses today. It's been a fascinating panel. Feel free to share with the committee, through the clerk, in writing, whatever you feel is pertinent for us to consider as we go through this process.
[Translation]
Thank you very much for participating in this meeting.