The Chair:
I call this meeting to order.
Good afternoon, everyone. Welcome to meeting No. 99 of the House of Commons Standing Committee on Industry and Technology.
I appreciate the cheery atmosphere and hope it will last for the entire meeting.
Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C‑27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.
I'd like to welcome our witnesses today.
[English]
We have with us today in person Mr. Barry Sookman, senior counsel at McCarthy Tétrault. Online, we have Elizabeth Denham, former information commissioner of the United Kingdom. Also joining us by video conference, on behalf of the Women's Legal Education and Action Fund, is Kristen Thomasen, assistant professor at the Peter A. Allard School of Law at the University of British Columbia.
Thank you to all of our witnesses. Each will have five minutes.
Before we start, I see that Mr. Perkins has a point of order.
Mr. Perkins.
Mr. Barry Sookman:
Thank you very much for the opportunity to appear.
I am senior counsel at McCarthy Tétrault, with a practice focused on technology, intellectual property and privacy. I'm the author of several books in the field, including an eight-volume treatise on computer, Internet and electronic commerce law. I'm here in my personal capacity.
Although my remarks will focus on AIDA, I've submitted to the clerk articles I have published on the CPPA and AIDA, which propose much-needed improvements. You can see that my submission is substantial.
My remarks are going to focus on AIDA, as I've mentioned. In my view, AIDA is fundamentally flawed.
Any law that is intended to regulate an emerging transformative technology like AI should meet certain basic criteria. It should protect the public from significant risks and harms and be effective in promoting, not hindering, innovation. It must also be intelligible—that is, members of Parliament and the public must be able to know what is being regulated and how the law will apply. It must respect parliamentary sovereignty and the constitutional division of powers and employ an efficient and accountable regulatory framework. On every one of these criteria, AIDA either fails or its impact is unknowable.
AIDA has no definition of “high-impact system”, and even with the minister's letter that was delivered, there are no criteria and no guiding principles for how AI systems will be regulated. We don't know what the public will be protected from, how the regulations will affect innovation or what the administrative monetary penalties will be. We know that fines for violating the regulations can reach $10 million or 3% of gross revenues, but we have no idea what the regulations will require that will trigger those mammoth fines against both small and large businesses operating in this country.
In short, none of the key criteria to assess AIDA are knowable. In its current form, AIDA is unintelligible.
AIDA is, in my view, an affront to parliamentary sovereignty. AIDA sets a dangerous precedent. What will be next? Fiat by regulation for quantum computing, blockchain, the climate crisis or other threats? We have no idea.
AIDA also invokes a centralized regulatory framework that leaves all regulation to ISED. This departs from the sensible, decentralized, hub-and-spoke, pro-innovation approach taken so far in the United Kingdom and the United States, which leverages existing agencies and their expertise and avoids overlapping regulation. That approach recognizes that AI systems of all types will pervade all aspects of society and that a single regulatory authority is ill-suited to regulate them. Rather, what is needed is a regulatory framework in which a central body sets standards and policies, coordinates regulation within Canada and internationally, and has a mechanism for addressing any gaps that arise.
AIDA also paves the way for a bloated and unaccountable bureaucracy within ISED. ISED will make the regulations, and they will be administered and enforced by the AI and data commissioner, who, unlike the Privacy Commissioner, is not accountable to Parliament. The commissioner is also not subject to any express judicial oversight, even though the commissioner has the power to shut down businesses and to levy substantial fines.
Last, a major problem with AIDA is that its lack of intelligibility and guiding principles makes it impossible to evaluate its impact on innovation. We need to recognize that Canada is a middle country. It is risky for Canada to be out in front of our major trading partners with a law that may not be interoperable with theirs and may inadvertently and unnecessarily create barriers to trade. Our AI entrepreneurs are heavily dependent on being able to access and exploit AI models like ChatGPT from the United States. We should not risk creating obstacles that inhibit adoption, the realization of AI's full potential, or the continued growth of the AI ecosystem and the high-paying jobs it will create.
AI is going to be as transformative and as important as the steam engine, electricity and the microchip in prior generations. Canadian organizations in all sectors need open access to AI systems to support adoption and innovation and to be competitive in world markets. If we fail to get this right, there could be significant long-term detrimental consequences for this country.
To go back to my first point, there is nothing in AIDA to provide comfort that these risks will be avoided. While my opening remarks, Mr. Chairman, relate to AIDA, I also have concerns about the CPPA. I would be glad to answer any questions you may have about AIDA or the CPPA.
Thank you again for the opportunity to appear.
Ms. Elizabeth Denham:
Good afternoon, Chair.
Good afternoon, committee members and Madam Clerk.
Thank you for the invitation to appear before you today. Hopefully my input will benefit the committee's important work.
I speak from decades of experience as a privacy professional and from 15 years as an information rights regulator in four jurisdictions. My ongoing work takes place really on the international stage, but it's backed by long-standing familiarity with our own federal and provincial privacy laws.
When I became the information commissioner for the United Kingdom in 2016, that role brought me onto the EU's oversight board that administered the GDPR's implementation. That brought me into direct collaboration with all EU member states, and the experience greatly expanded a view of data protection and privacy that was first cultivated in Canada, at the federal level and in Alberta and British Columbia.
During my five years as the U.K. information commissioner, I also served three years as the chair of the Global Privacy Assembly. That position greatly expanded my horizons once again and enhanced my knowledge of other laws and other cultures, including the global south, the Middle East and the Asia-Pacific. To this day, the work I do spans continents.
The issues of pressing concern are largely the same everywhere: children's privacy and safety, and the regulation of artificial intelligence.
Looking first at Canada's CPPA from a global perspective, I see a big missing piece: the legislation's language, in my view, needs adjusting so that it explicitly declares privacy as a fundamental right for Canadians. Its absence puts us behind the nations that lead the way in privacy and data protection.
The legislative package goes some way towards establishing expectations for AI governance, but it lacks specific and much-needed protections for children and youth. In a study I conducted through my work with an international law firm, Baker McKenzie, which surveyed 1,000 policy influencers across five jurisdictions, we found that all those surveyed came to a single point of agreement: The Internet was not created and not designed with children in mind.
All those policy influencers felt that we need to do better to protect children and youth online. Canada is a signatory to the United Nations Convention on the Rights of the Child, and I think Canada owes it to our young people to enshrine the right for them to learn and to play, to explore, to develop their agency and to be protected from harms online.
In the U.K., I oversaw the creation of a children's age-appropriate design code, which is a statutory enforceable code, and the design of that code has influenced laws, guidance and codes around the world. I'd be happy to answer more questions about that.
Additionally, I believe the legislation should go further than it does to provide the Privacy Commissioner with robust enforcement powers. I exported my career from Canada to the U.K. in large part because I wanted to gain hands-on experience administering laws with real powers and meaningful sanctions.
In Britain, privacy harms have been treated as real harms since the GDPR came into effect. One result was the leap in the U.K. information commissioner's fining authority, but the other enforcement powers were equally important: stop-processing orders, orders to destroy data, streamlined search and seizure powers, mandatory audit powers and so on.
These enforcement powers were mandated by a comprehensive law that covers all types of organizations—not just digital services but a business of any kind, a charity or a political party. By comparison with the GDPR, Bill C-27 lacks broad scope. It doesn't cover charitable organizations, which are not above misusing personal data in the name of their worthy causes. Neither does it cover political parties, leaving data and data-driven campaigns outside regulatory oversight.
Serving as a privacy commissioner at the federal and provincial levels in Canada exposed me to towering figures in my field. I think of Jennifer Stoddart, the former federal privacy commissioner, and David Flaherty, the former B.C. information and privacy commissioner. Their names recall a time when Canadian regulators and Canadian law were deeply respected internationally, when our laws and our regulators served the world as a bridge between the U.S. and Europe. Although the commissioners who followed, Daniel Therrien and Philippe Dufresne, have continued to contribute internationally, Canada's laws have fallen behind global benchmarks.
I think we can recover some ground by returning to fundamental Canadian values and by remembering that our laws once led the way in making accountability the cornerstone of the law. Enforceable accountability means companies taking responsibility and standing ready to demonstrate that the risks they are creating for others are being mitigated. That's increasingly part of reformed laws around the world, including AI regulation. The current draft of the CPPA does not have enforceable accountability. Neither does it require mandatory privacy impact assessments. That puts us alarmingly behind peer nations when it comes to governing emerging technologies like AI and quantum.
My last point is that Bill C-27 creates a tribunal that would review recommendations from the Privacy Commissioner, such as the amount of an administrative fine, inserting a new administrative layer between the commissioner and the courts. It limits the independence and the order-making powers of the commissioner. Many witnesses have spoken against this development, but a similar arrangement does function in the U.K.
Companies can appeal commissioner decisions, assessment notices and sanctions to what is called the first-tier tribunal. That tribunal is not there to mark the commissioner's homework or to conduct de novo hearings. I would suggest that, if Parliament proceeds with a tribunal, it has to be structured appropriately, with the right standard of review and with independence and political neutrality baked in.
As a witness before you today, I have a strong sense of what Canada can learn from other countries and what we can bring to the world. Today, Canada needs to do more to protect its citizens' data. Bill C-27 may bring us into the present, but it seems to me inadequate for limiting and controlling emerging technologies and for making sure they are responsible.
Thank you for hearing my perspective this afternoon. I very much look forward to your questions.
Ms. Kristen Thomasen:
I have been researching and writing in the areas of tort law, privacy law and the regulation of automated technologies for over a decade, with a particular focus on rights and substantive equality. This includes recent publications on safety in AI and robotics governance in Canada and work with the B.C. Law Institute's civil liability and AI project.
I'm here today representing the Women's Legal Education and Action Fund. LEAF is a national charitable, non-profit organization that works toward ensuring that the law guarantees substantive equality for all women, girls, trans and non-binary people. I'm a member of LEAF's technology-facilitated violence advisory committee and will speak to LEAF's written submissions, which I co-authored with LEAF senior staff lawyer Rosel Kim. Our submission and my comments today focus on the proposed AI and data act.
You've heard this before, but if we're going to regulate AI in Canada, we need to get it right. LEAF agrees with previous submissions emphasizing that AI legislation must be given the special attention it deserves and should not be rushed through with privacy reform. To the extent that this committee can do so, we urge that AIDA be separated from this bill and wholly revisited. We also urge that any new law be built from a foundation of human rights and must centre substantive equality.
If the AI and data act is to proceed, it will require amendments. We examined this law with an acute awareness that many of the harms already arising from the introduction of AI into social contexts are inequitably experienced by people who are already marginalized within society, including on the grounds of gender, race and class. If the law is not cognizant of the inequitable distribution of harm and profit from AI, then despite its written neutrality, it will offer inequitable protection. The companion document to AIDA suggests that the drafters are cognizant of this.
In our written submission, we made five recommendations, accompanied by textual amendments, to allow this law to better recognize at least some of the inequalities that will be exacerbated by the growing use of AI.
The act is structured to encourage the identification and mitigation of foreseeable harm. It does not require perfection and, in fact, is likely to be limited by the extent to which harms are not considered foreseeable to the developers and operators of AI systems.
In this vein, and most urgently, the definitions of “biased output” and “harm” need to be expanded to capture more of the many ways in which AI systems can negatively impact people, for instance, through proxies for protected grounds and through harm experienced at the group or collective level.
As we note in our submission, the introduction of one AI system can cause harm and discriminatory bias in a complex and multi-faceted manner. Take the example we cite of frontline care workers at an eating disorder clinic who had voted to unionize and were then replaced by an AI chatbot system. Through an equity lens, we can see how this would cause not just personal economic harm to those who lost their jobs but also collective harm to those workers and others considering collective action.
Additionally, the system threatened harm to care-seeking clients, who were left to access important medical services through an impersonal and ill-equipped AI system. When we consider equity, we should emphasize not only the vulnerable position of care workers and patients, but also the gendered, racialized and class dimensions of frontline work and experience with eating disorders. The act as currently framed does not seem to prompt a full understanding, nor a mitigation, of the different complex harms engaged here.
Furthermore, as you've already heard, the keystone concept in this legislation, “high-impact system”, is not defined. Creating only one threshold for the application of the act and setting it at a high bar undermines any regulatory flexibility that might be intended by this. At this stage in the drafting, absent a rethinking of the law, we would recommend removing this threshold concept and allowing the regulations to develop in various ways to apply to different systems.
Finally, a key challenge with a risk mitigation approach, such as the one represented in this act, is that many of the harms of AI that have already materialized were unforeseeable to the developers and operators of the systems, including at the initial decision to build a given tool. For this reason, our submission also recommends a requirement for privacy and equity audits that are transparent to the public and that direct the attention of the persons responsible toward the most extensive prevention and mitigation possible.
Lastly, I would emphasize that concerns about the resources required to mitigate harm should not dissuade this committee from ensuring that the act mitigates as much harm and discrimination as possible. We should not look to expand an AI industry that causes inequitable harm. Among many other reasons, we need a human rights approach to regulating AI for there to be any chance of a truly flourishing industry in this country.
Industries will also suffer if workers in small enterprises are not protected against harm and discrimination by AI.
Public resistance to a new technology is often based on an understanding that a select few stand to benefit, while many stand to lose out. To the extent possible, this bill should try to mitigate some of that inequity.
Thank you for your time, and I look forward to your questions and the conversation.
Mr. Barry Sookman:
Thank you for that follow-up question.
I have reviewed the amendments proposed by the minister, which make it clear that the joint purposes of the act are the protection of the fundamental right of privacy and the legitimate interests of business. I think that's an appropriate way to do it. One has to understand that every fundamental right, including the fundamental rights in the Charter, is subject to the Oakes test, which is a balancing exercise. The purpose clause makes it clear that we have to balance that fundamental right against the interests of business. This gives the courts the appropriate tools to solve the problem.
I'll also point out that the “Appropriate Purposes” section is an override section. If there is something an organization does that, frankly, is offside, this trumps everything. When you put together the purposes of the act and “Appropriate Purposes”, the public is adequately protected.
I won't even get into the fact that the Privacy Commissioner also has huge discretionary powers as to how to enforce it, with very limited rights of appeal. If the Privacy Commissioner believes a fundamental right has been violated and that a purpose is inappropriate, the commissioner has all the powers required to do the proper calibration to protect the public.
Mr. Barry Sookman:
Thank you very much for that question. I have concerns about legitimate interest as well.
As to whether it is weaker or stronger than the GDPR, it is substantially weaker than the GDPR in its wording. The GDPR, like the CPPA proposal, has a balancing.... It has no “get out of jail free” card. There must be a balance and the use must be appropriate, and it can't adversely affect individuals.
However, unlike the GDPR, the CPPA's legitimate interest exception doesn't apply to disclosures, which is very significant. For example, search engines or AI companies are either not going to be able to operate in this country without an amendment, or they'll operate and they won't be subject to the law. This section needs to be fixed.
The other thing is that it has additional tests, which aren't in the GDPR, about a reasonable person having to expect the collection or use for such an activity. That is tethered to old, known technologies rather than being technologically neutral. We want something that's going to work in the future, and this language doesn't work.
I think the problem is actually the opposite of what you were asking for. We need to fix it so that it works properly but still protects the public.
Ms. Elizabeth Denham:
As I said, in the U.K. there is a tribunal system, and administrative tribunals are used across many areas of law. In the U.K., when it comes to freedom of information, data protection, cybersecurity or electronic marketing—all of the areas the commissioner is responsible for—the commissioner's decisions, fines and sanctions are subject to review by the first-tier tribunal. Then the case can go to appeal at the second-tier tribunal, and then on to the court.
That sounds like what could be a very lengthy process. However, I think that over time the tribunals have become expert tribunals, so you're not taking a very specialist policy area like data protection and having a general court look at the issues.
I think there are pluses and minuses. Obviously, the government wants to make sure there is administrative fairness and an appeal system, because otherwise you have too much power concentrated in a government body.
You could understand why there should be appeals, but my argument is that, if there is going to be a tribunal, then the standard of review needs to be reasonableness, as it is in British Columbia. Also, the members of the tribunal need to be independent and appointed that way. Finally, I think it's really important that the tribunals not conduct an inquiry from scratch, because I think that undermines the commissioner's expertise.
If there is no tribunal, then I agree with the Privacy Commissioner's recommendation that an appeal go directly to the Federal Court of Appeal, rather than starting at the tribunal and then going to Federal Court.
Mr. Barry Sookman:
The structure of the adjudication of issues, where they start and how they get appealed, is a very important question. We have to recognize that, with the importance of privacy and the amount at stake for the public and organizations, getting it right is really important.
The powers of the Privacy Commissioner are very broad. There's really only anorexic protection procedurally, yet the Privacy Commissioner makes the determination as to whether there's a breach and can make a recommendation as to whether there are penalties.
The appeal goes to the tribunal, but the tribunal only has the power to order a reversal if there's an error of law. It has no power to do anything about a mixed question of fact and law or a question of fact. When you look at the way the CPPA operates, there are almost invariably going to be a huge number of questions of mixed fact and law. This means that, effectively, there are almost no procedural protections before the Privacy Commissioner, and any decisions are virtually unappealable. Also, the constitution of the tribunal doesn't require a judge, which is required in other contexts. I do think there needs to be procedural protection in front of the Privacy Commissioner.
As for the appeal, you heard Ms. Denham talk about at least a reasonableness standard. That doesn't even exist before the tribunal. That would only exist on a further judicial review, but it's almost impossible to get there.
I do think the structure really needs to be changed to provide at least a modicum of procedural protection.
Mr. Sookman, you expressed your thoughts in an article on artificial intelligence that was published three days ago, if I'm not mistaken, in which you analyzed the AI Regulation Bill introduced in the U.K. House of Lords.
In that analysis, you noted that the British bill could provide a roadmap as to how to improve the Artificial Intelligence and Data Act.
You outlined issues such as parliamentary sovereignty, the creation of an artificial intelligence authority and regulatory principles.
What lessons can we, as legislators, learn from the British bill, and what specific aspects would you recommend should be incorporated in the framework of the Artificial Intelligence and Data Act to strengthen AI regulations in Canada?
Mr. Barry Sookman:
Thank you for your question.
[English]
I think that is a fantastic question.
The private member's bill that you referred to is also an attempt to create an agile mechanism for the regulation of AI. There are two things about that bill that I think are fundamentally important and that I would strongly recommend this committee take from it if it is going to recommend improvements.
First, the Secretary of State has the power to make regulations in the first instance, but the regulations only become effective when they're passed by a resolution of both Houses of Parliament. Regulations that don't go through that procedure can be annulled by either House of Parliament.
In my mind, that is a way to give effect to the important principle of parliamentary sovereignty. That way, the government can go ahead with its regulatory analysis, but at the end of the day, the result is still subject to a mechanism of parliamentary control. I think that's a brilliant approach to solving the problem of parliamentary sovereignty.
The second thing about the draft AI bill that I think is really important is that it contains principles to guide the legislation. If you look at AIDA, it doesn't define “high impact”, and it tells you nothing about the principles that should guide regulation. This bill provides a good first look at what such an approach could be.
It starts off by saying the principles should be fundamental ethical principles for responsible AI: they should deliver safety, security, robustness and those sorts of things. Second, any business that's going to engage in AI—in this case, I would say with a high-impact system—should test it thoroughly and be transparent about its testing. Third, it has to comply with equalities legislation—that is, anti-discrimination law—which is extremely important.
Lastly—and this is completely missing in our bill—a consideration has to be that the benefits of regulation outweigh the burdens and that the burdens of any restriction don't prejudice, but rather enhance, the international competitiveness of the U.K.
I think having a set of principles like that to guide the regulatory framework would be very useful. When I saw that bill, I thought, this is genius. This is from a private member.
Mr. Brian Masse:
I'll start with Mr. Sookman, who's here, and then I'll go to our two virtual witnesses.
With regard to the process, one of the things that's been difficult is that we still don't have some of the amendments. That was brought up at the beginning of the meeting here.
Mr. Sookman, I'm kind of curious. When you were preparing to come here today, how well could you be prepared for testimony on the full bill itself? Will you submit more documents or information later on, or will the committee have to circle back to our original witnesses? We've tried to compartmentalize these things the best that we can. In fact, we split the bill into two sections for voting purposes, but it's still one bill.
I'm just curious about a witness coming here and the process that we've engaged in. Tell us what you think.
Second, what are we going to have to do once we get the other part of the bill in front of us?
Mr. Barry Sookman:
Mr. Masse, I don't envy the predicament you're in. In fact, your predicament is a microcosm of the one facing the whole public that's interested in the regulation of artificial intelligence.
It was quite clear from the minister's statement that there were amendments, but we haven't seen the amendments on AIDA. All we have right now is the letter of the minister, which describes in a very amorphous, open-ended form what the government's first focus is going to be. It's quite clear that there's no definition of the factors that will make a system “high-impact”, and there are no criteria for future systems.
The reality is that we still have no idea. What really concerns me is that, at the last minute—even if it's next week—we're going to get draft amendments, and those amendments are probably only going to be half of the amendments, because they're probably only going to relate to the pieces that were in the minister's letter. Everything else is going to be saved, I think, for clause-by-clause.
When you look at the work that's gone on around the world trying to come up with an appropriate regulatory framework, it has taken years. The British have really studied this issue and, as Ms. Denham said, have taken the view that what's needed is a decentralized, hub-and-spoke regulatory framework.
The committee may get amendments, and you're going to get a couple of weeks to review something that should have taken a year or more to evaluate. We're also still finding out what the Europeans are doing and exactly what the U.S. Congress is going to do. We are making a big mistake, I believe, if we think we can receive last-minute amendments, do a thorough analysis of them and make policy for the country that's going to affect jobs, the protection of the public and innovation for decades.
In my view, whatever comes out is not going to give this committee enough time to study it, and my strong recommendation would be to step back. The government has already said that it's going to take two years to do the regulations, so there is no need to go ahead with this part now. Do the study and introduce a proper bill. We won't lose any time, but we'll get something that's thought through and debated by the public and Parliament.
Ms. Elizabeth Denham:
Yes, Jennifer Stoddart is a personal hero of mine. I worked closely with her in Ottawa as her assistant commissioner.
One of the benefits of Canada is that it has been a hard-working member of the OECD. Jennifer Stoddart chaired the OECD's privacy and security committee for many years, and I think she was very influential in that role in bringing together the various members of the OECD and others around the world.
As a Canadian working in the U.K., I was able to chair the Global Privacy Assembly, which is the group that brings together 135 privacy authorities from around the world. Again, that was an influential post because it took me to G7 meetings and to meetings with ministers of industry, technology and trade. Privacy commissioners can fulfill a really important diplomatic and bridge-building role around the world.
We have a great example in our current privacy commissioner. He's already very well regarded in international circles, but I think investment is needed to extend that influence to other places and countries around the world. Given that data knows no borders and we're all dealing with the same big companies, there needs to be collaboration and co-operation.
I can see Canada continuing to play that role, but not while our laws are so 20th century. Updating the laws is so important because it's not sustainable for the Privacy Commissioner of Canada to have only ombudsman and recommendation-making powers when data is the greatest asset of the 21st century.
Thanks to the witnesses. I'm going to ask my questions in quick succession, and witnesses may answer them after that.
Mr. Sookman, the comments you've made from the outset lead me to believe that we're working backwards. We've heard that many times since we began our study. Do you think there are any positive or useful aspects that should be retained? Are there any that we should develop further?
Earlier you talked about the tribunal. Among the various opinions we've heard to date, certain individuals believe that the tribunal should exist, while others, on the contrary, believe we should immediately go to an appellate court. I would like to hear your opinion on that.
Ms. Denham, attendees at the conference that was held in the U.K. a few weeks ago came to a certain consensus that there should be a voluntary code. In fact, I believe we've already signed the agreement regarding such a code.
Will that voluntary code replace certain bills in certain countries? Will some countries back off on certain elements that they've already put in place to make room for the voluntary code?
I'll let you go first, Mr. Sookman.
Mr. Barry Sookman:
Thank you very much for those questions.
When I looked at the minister's letter, which provides some guidance, what I was very pleased to see were the new areas that are proposed to be regulated. We don't know how they're going to be regulated, but there was a proposal to address discrimination in the courts and administrative tribunals, and I think that's very important.
There was also something about introducing guardrails for criminal investigations, which can help prevent police investigations that might be discriminatory. Those sorts of public protections are very important.
There was also something about regulation of services. Again, we have no idea whether this is intended to be public or private, but the EU has proposed to ensure non-discrimination in emergency services, for example, which is very important.
However, what we didn't see...and this is a problem. All the authority is still with the minister. When you look at what is proposed in the letter, you see various areas that are intended to be regulated. You have courts, peace officers, human rights and content moderation, which should be the purview of the justice minister. You have issues relating to employment, which should be the purview of the labour minister. You have issues related to health regulation, which should be within the Department of Health.
I could go on and on, but I think one thing that could be done would be to give numerous ministers the power to make regulations in their areas. That would help make this decentralized.
I know I'm out of time, so I won't get to the tribunal question.
Ms. Kristen Thomasen:
You asked about any positive aspects of the bill. In our written submission, we did emphasize some aspects of the bill that we think are very commendable. Dealing with discriminatory bias is important. The recognition that psychological harm is an aspect of harm is important and commendable. Taking a regulatory approach that gives clarity about how AI systems can be operated and about what obligations and transparency requirements exist for companies using AI is, I think, all very important.
The companion document that accompanies the legislation contains a lot of very important points, as well, and perspectives that we don't see reflected in the draft legislation itself, which we noted, in our submission, is a missed opportunity. There's a discussion about collective rights in the companion document that is crucial when we're talking about AI systems because of the way these systems work on large quantities of data, drawing out inferences based on an assessment of a large group of people. The idea that harm can be actually materialized at the collective level, rather than solely at the personal level, is something that the law needs to acknowledge. This would be relatively new for our laws. It wouldn't be brand new because there are areas of law that recognize collective rights, of course. However, it's something that we're going to have to see recognized more and more, and I think that exists in the companion document. That should be integrated into this bill if it's going to go forward.
I would just say, very generally, that a lot of what we're talking about highlights that we actually need to step back when we think about AI regulation in Canada. AIDA did not benefit from consultation, which I think would have been useful in advance of its drafting. It could take a much more holistic approach. Mr. Sookman has highlighted some of this, and Ms. Denham has highlighted some of this. There are many considerations that have to go into how we would establish a framework for regulating AI in Canada that we don't see here and that I think would be difficult to integrate, solely through textual amendments, into what we have in front of us.
Ms. Elizabeth Denham:
The EU AI act is comprehensive in terms of its scope: it applies to the public sector, the third sector and the private sector. It's a product-safety statute, so it doesn't give individuals any rights. What it does require is that companies categorize the AI systems they are procuring or developing. Each system has to be slotted into one of the risk categories—prohibited AI, high-risk AI or low-risk AI—and it's up to the company to determine the risk it is creating for others. Due diligence, accountability, transparency and oversight are then tied to the level of risk.
To give you an example, there is a group of prohibited AI uses. One of them is live facial recognition technology used by police in public spaces. The EU has already decided that's prohibited. There are also many low-risk uses. Chatbots, for example, may be considered a low AI or algorithmic risk.
What companies and governments need to do is prove they have a comprehensive AI data governance program in place and an inventory of all AI systems in use, and then stand ready to demonstrate this to AI regulators across the EU.
What the EU has is first-mover advantage in terms of a comprehensive AI law. This doesn't mean the rest of the world is going to copy and paste its approach. That said, any company outside the EU that is directing services to citizens and organizations in the EU will be subject to the EU law. That means the world is paying attention to what the EU is doing, in the same way it did with the GDPR. There is first-mover advantage there.
I think what the U.S. is doing is extremely interesting. It's difficult to get anything through Congress these days. We know that. Instead, there is an executive order on AI, which requires all government agencies—the supply chains and procurement in every single agency, be it Health and Human Services or the Department of Defense—to comply with AI principles. They also have to stand ready to demonstrate that they are being responsible. I think that is going to be hugely influential but quite different from the approach the EU is taking.
Ms. Elizabeth Denham:
I can't recall at this moment whether the CPPA gives the commissioner a power to order the deletion of data that may have been gathered illegally or without proper consent.
I can't remember if the commissioner has that power, but it is certainly something that is necessary in the modern world. It could be much more effective in changing business practices than fining a large data platform or social media platform hundreds of millions of dollars. If data is collected illegally, or used in significant contravention of the act, requiring a company to delete that data has an enormous effect.
I think of a case I had when I was the commissioner in the U.K. One government department had illegally collected the biometric data of claimants, who had to provide voice prints—biometric data, which is sensitive information under the GDPR. Rather than fining the department for collecting data illegally, the order required it to delete all of that data and start over again. That was more significant, and among peer departments in the government, that lesson was learned really quickly.
You'll see that the U.S. Federal Trade Commission has acted in several cases around data deletion and data disgorgement. At the end of the day, companies want sustainable business practices. They want assurances that they're doing the right thing. That's the lens through which we should be looking in Canada at the powers of the Privacy Commissioner, for example.
Mr. Sébastien Lemire:
Mr. Sookman, at the ALL IN summit on AI, you hosted a panel entitled “Creating tomorrow today: AI, copyright, and the wisdom of experience”. I'd be very curious to hear what you have to say about it.
What are the concerns regarding copyright protection, particularly in the cultural sector, but also in a research context?
What are the weak points of Bill C-27 when it comes to protection in the context of artificial intelligence?
We know that Canada's Copyright Act is now out of date and that it generally provides little copyright protection.
Will Bill C-27 push us down into an even deeper hole?
Mr. Barry Sookman:
Mr. Lemire, thank you very much for that very important question.
The draft law that's in front of you doesn't deal with intellectual property rights at all. That's unlike the EU legislation, which at least has a provision requiring transparency about the data used in training; unlike the draft U.K. bill, which requires compliance with copyright laws; and unlike draft French legislation, which would also require compliance with copyright laws when training models.
As you know, there is an ongoing consultation by ISED and the Department of Canadian Heritage that asks a number of questions, including whether the act needs to be changed and, in particular, whether there should be a new text and data mining exception. That is a very important consultation. It raises the balance between the ability of creators, including many creators from Quebec, to control the uses of their works and to be compensated for those uses when models are trained, and the interest of AI entrepreneurs and larger businesses in using works to train models. It's a question with policy considerations on both sides.
In my view, the existing law adequately sets the standard. While training models would involve the reproduction right, which would be prima facie infringement, that reproduction is always subject to the fair dealing exception, and fair dealing is the best way to calibrate, using the current law and the current principles, the use of works without consent.
Ms. Kristen Thomasen:
Thank you. That's an excellent question.
My first reaction would be that this goes back to an important need for this bill: it has to allow for regulations that will address discriminatory bias in outputs from AI systems. If the bill is structured so that the obligation is clear and actually captures the range of ways in which algorithmic bias can arise, then companies will have an impetus to hire teams that are better able to anticipate and mitigate some of those harms.
One of the comments in my introduction was about what I see as one of the limits of this bill as it's structured right now: many occasions of discriminatory bias have been identified after the fact, usually through investigative reporting, by experts or by people who experience these harms themselves and so understand the kind of harm that might arise. Then, when that becomes publicly known and there's a public backlash, the explanation at that point is that the harm was unforeseeable at the time the system was being developed, or at the time the initial idea was developed to automate a decision-making process that was previously done by people, for example.
This bill, in its current structure, needs to capture not just discrimination on recognized grounds but also discrimination by proxy for a recognized ground—where a postal code, employment status, previous experience of imprisonment or social network might stand in for a protected ground and influence algorithmic decision-making. This bill needs to capture all the complexity of how these harms can arise so that companies are motivated to ensure, to the best of their abilities, that this kind of discrimination doesn't happen.
This is also why we recommended an equity audit, which would obviously need more structure, probably in regulation, to really signal and advance the importance of equity and anti-discrimination in the development of these systems.
It's a crucially important point. I honestly do think that the companion document reflects the importance of that, but we don't see it fleshed out in the bill right now.
:
That came directly from the recent Auditor General's report, which basically outlined that the government is failing Canadians as it relates to the management of data that the Government of Canada collects on behalf of all of us.
Ms. Denham, thank you for your testimony as well. I was very interested in some of the work you've done in the U.K., and I really believe that your testimony provides a lot of weight with respect to the amendments we need to make in respect of children.
Prior to this meeting, I looked at “Age appropriate design: a code of practice for online services”, which came from the Information Commissioner's Office in the U.K. In that document, it is outlined that there are 15 principles related to the protection of children's data. Those include the best interests of the child, data protection impact assessments, age-appropriate application, transparency—that's in respect to how companies are using data, I presume—detrimental use of data, policies and community standards, default settings, data minimization, data sharing, geolocation, parental controls, profiling, nudge techniques, connected toys and devices, and other online tools.
I presume, given your expertise, that you're somewhat familiar with the code of practice outlined in the United Kingdom, which was made under section 123 of the U.K.'s Data Protection Act 2018.
Given your expertise, would it be your recommendation that this committee adopt a similar code of practice to ensure that children across Canada are not subject to online harms?
Ms. Kristen Thomasen:
Absolutely. Thank you for the question. I'm happy to explain that.
The exact wording, of course, we would leave to the discussion of the expert lawmakers and to your esteemed committee, who are working on proposed amendments. The reason we incorporated that wording into proposed section 5 was to respond to the ways in which harm can be experienced at the collective level and not solely at the individual level.
The law has traditionally recognized individual harm. If I'm injured physically or psychologically or I lose money, that has been more the traditional focus of the law. As I mentioned in an earlier response, the law is going to have to start to recognize and acknowledge the ways that harm can be experienced by the group and that this harm affects us as individuals as well. Our wording is meant to provide a suggestion for how that could be incorporated here.
The reason that we incorporate that—as you see in some of the discussion that follows the suggestion—is that AI systems trained on large bodies of data are used to identify patterns and to draw out inferences that can affect folks based on the groups they are a member of, even if it doesn't affect them directly as an individual. To recognize that as a harm is significant. It allows the act to actually capture some of the ways in which inequitable injury and losses from the growing and flourishing of this industry in Canada are going to be distributed within society.
If there's time, I'm happy to give an example. We have some examples of collective injuries in the written submissions as well, if that is easier.
Mr. Barry Sookman:
Thank you very much, Mr. Perkins.
I think that one of the real failings of AIDA is that there are no guiding principles. The guiding principles are important, because they will influence the regulation and, where there's enforcement, what those principles should be.
I indicated in a previous answer that some guiding principles that could be a useful start are in that U.K. private member's bill. The member gave quite good consideration to various factors, which included responsible AI factors, the requirement for transparency and risk mitigation and, from an economic perspective, the need to be sure that regulations are proportionate—benefit versus burden—and take into account international competitiveness.
That may not be the be-all and end-all, but the act should.... Assuming this can somehow be salvaged, there are some things that could be done. One of them is guiding principles, and I would suggest that bill is worth considering.
Mr. Barry Sookman:
Thank you very much for the question.
Ms. Denham talked about the European approach as being a first mover. In fact, an emerging view is that, while the EU is a first mover, its approach is no longer the right one. The concern is very much that not only is it not the right structure, but it will impede innovation. The U.K. is very much focused on that, and so is the U.S. executive order on AI. The reality is that Canadians need access to these tools, and if there are impediments to that, it could set us back a lot.
You have to recognize as well that complicated regulations that aren't in place in other countries require Canadian entrepreneurs to make investments that their competitors in foreign markets don't have to make. A lot of these companies are small companies. They don't have the resources to sit down with lawyers and figure out how to comply with something that is really difficult. We need to empower them. We need to make sure they're responsible through voluntary codes and to deal with the very high-risk activities in a thoughtful way.
However, we have to be sure we don't make the mistake of bogging down entrepreneurs and having them move, as they've done in other areas, to the United States, where they can start businesses. Then we would lose the income and the people.
Mr. Ryan Turnbull:
I wanted to speak to this because my understanding is that the original motion we agreed to—and I don't want to read it to members, because I'm sure you all have a copy—specifically indicates that this committee would take the testimony of witnesses who have appeared or testified before other committees. It was explicitly part of the conversation we had that Annette Verschuren, the former chair of the board at SDTC, had already appeared before the ethics committee. I'm not sure why Ms. Verschuren was invited to this committee, given that we said we would take that testimony as our own so as to avoid duplication at this committee.
I would also direct people's attention to the fact that the motion also said, “the committee reserves the right to re-invite said witnesses as necessary”. However, my understanding is that no case has been made before this committee as to why that would be necessary.
I have Ms. Verschuren's testimony before the ethics committee here, and I've reviewed it; it seems quite substantive. I'm abiding by the original conversation and agreement we had in passing that motion. I don't understand why we would need to summon a witness—no one has made a case for why we need to reinvite her to this committee—when we all agreed we'd have her testimony as part of this study.
Those are my thoughts. I'm not inclined to support a summons at this time.
Mr. Ryan Turnbull:
The committee hadn't deliberated on that, so I don't understand a summons, given that any invitation was premature; the committee hadn't deliberated on it at all. I don't know why Ms. Verschuren was invited, but seeing how it is moot at this point—she's available, has gotten back to the committee and said December 12 works—why wouldn't the committee just take her up on that offer rather than summoning her, which is completely unnecessary at this point?
There are two good arguments there for.... What's the argument for why members think that we need her to come to this committee, when she's already testified at another committee? I don't really hear a good rationale for that from my perspective, but I would invite other members to provide that rationale.
My understanding is that she has resigned from the board and the minister has accepted her resignation. If that's in effect as of December 1, what can be gained?
Anyway, regardless of that, in terms of the timeline, I think there are two arguments that I've made here for why this committee would have to justify the invite to have her come and appear here, given the first motion that we all agreed to.
I'm just abiding by what the committee agreed to, and we haven't had a deliberation on why Ms. Verschuren should be invited. I certainly would say that a summons does not seem necessary from any perspective that I can entertain.
Mr. Rick Perkins:
The witness was already invited. She was invited for this Thursday, and she is around this Thursday. She's in the country; she leaves on Friday. She was already invited and has refused to come.
She's available, but she hasn't said whether she's coming on the 12th. Frankly, at this stage, given the lack of response and her refusal to even acknowledge an invitation by the committee until my Friday motion for a summons—yes, it's a powerful tool—was tabled.... All of a sudden she remembered that she had to phone back, or her assistant had to phone back, to say, "Oh, maybe I should come, now that you're going to use your power to summon a witness."
At this stage, I'm not feeling comfortable that the witness who is proposed here is going to comply unless she's summoned.
Mr. Ryan Turnbull:
Respectfully, I don't think we can confuse not responding with a refusal to appear before the committee. I want to make the point that I made before. Those are two very different things. An email can get lost in an inbox or someone can be unavailable. I think there could be a very reasonable explanation, other than what Mr. Perkins is assuming, which is the most negative interpretation that we could give.
Being a little bit more charitable in our interpretation, she has now responded and said she's available on the 12th, so perhaps the word “summons” is not necessary.
Chair, I want to go back to the point that I made earlier, which is that Ms. Verschuren shouldn't have been invited at all until our committee deliberated on whether that witness was necessary. How did she get invited twice when the motion itself, the letter of that motion and the agreement that this committee made, was not to invite witnesses but to use their testimony and then deliberate on whether those witnesses needed to be reinvited to this committee?
Mr. Rick Perkins: [Inaudible—Editor]
Mr. Ryan Turnbull: That's what the motion reads, Mr. Perkins, so don't tell me that it doesn't. I have it right here and I can read it back to you on the record. I know you don't need that—you're a smart enough guy—but the point is that she shouldn't have been invited before this committee had a conversation about it.
To then imply that she's somehow refusing to appear is not the case. It's just not true.
The Chair:
If I can just jump in on that, I'll take some of the blame.
The motion wasn't particularly specific about the list of witnesses in the text of the motion. I asked the clerk to invite the witness, so I'll take the blame for that. You're absolutely right that the text of the motion doesn't necessarily specify that the committee would invite that witness. It's a discussion that this committee should have had.
I guess it's hard to have it now, given that there is a motion on the floor that needs to be voted on. That's also, to some extent, an opportunity to have the discussion about whether we feel it's necessary.
I had some members come up to me and make the case that they felt we needed to invite that witness, so I instructed the clerk to invite the witness. However, I didn't have the instruction from the committee. I'll grant you that, Mr. Turnbull. You're correct.
We still have that motion with “summons” that Mr. Perkins presented.
Mr. Turnbull.
:
Just briefly, that might be the best way to do it. Here's my concern: we don't really want to have to summon somebody if we don't have to, but if somebody is going to leave the country and can't be here, then we lose control altogether.
I was there originally when the minister was at.... I still don't think the workers are getting due process. Therefore, if I have to vote on this amendment, I will not support it, because I will fall back on the summons. I don't really want to summon anybody, but at the same time, because of the workers and the way they are still compromised in this, I can't support their not having their day.
I don't know if there's any other way to do this. If we just do it this way and then nothing happens, I'll regret it, but I really don't want to summon anybody either. It's a tough thing to do, and I think we're all taking it seriously.
If there's a way to put the two together, I would be open to that. I don't have a solution, though, at the moment. I'm trying to think of how they could word that.
The Chair:
Colleagues, we'll resume this meeting. It's been a little more than five minutes.
[Translation]
Pursuant to the motion adopted on November 7, the committee is commencing consideration of the study on the Recent Investigation and Reports on Sustainable Development Technology Canada.
I would like to welcome our two witnesses: Geoffrey Cape, chief executive officer of R‑Hauz, who is participating in the meeting by video conference, and Andrée-Lise Méthot, founder and managing partner of Cycle Capital, who is attending the meeting in person.
Mr. Cape and Ms. Méthot, thank you for agreeing to join us, and I apologize for the brief delay.
Without further ado, I yield the floor to Mr. Cape, who will have five minutes to deliver his opening remarks.
:
Thank you very much, Mr. Chair.
Dear members of the committee, thank you for welcoming me today.
A few words about myself first: I grew up in Baie-Comeau, in the beautiful region of Côte-Nord in Quebec, and as far back as I can remember, my daily life has always been influenced by environmental issues.
In my career, I worked for Lucien Bouchard's Parti Québécois government in a policy capacity. Then I devoted my life to the development of new technologies and the fight against climate change. As a geological engineer, I have been working hard on this for over 25 years. I have dedicated my entire career to building a company that combines ecology and venture capital. It was a very surprising combination 20 years ago.
The purpose of my contribution to Sustainable Development Technology Canada, or SDTC, was to get involved in the Canadian ecosystem to help build the global economy of tomorrow. Let's start with my involvement on the board of SDTC, to which I was appointed in June 2016. Following that appointment, I sat on the board until 2021.
In the context of my commitment to SDTC, I have always acted with complete transparency by declaring all conflicts of interest, whether real, apparent or potential. That was the case for companies in the Cycle Capital portfolio, in which we generally hold a stake of between 4% and 30%. The same was also true of companies likely to attract future interest from Cycle Capital because, in my field, we see projects coming far in advance. Situations that could be perceived as conflicts were always declared.
Even where there was doubt, even though SDTC's governance did not always consider a situation as a conflict of interest, I chose to withdraw. I am convinced that my actions demonstrate the seriousness with which I approach this process in all situations.
As for the universal and equitable emergency aid granted during the pandemic to all organizations in the portfolio that met the conditions pre-established by SDTC, allow me to go back in time a little. At that time, schools were closed, Quebec was preparing for a general curfew, supply chains were disrupted, and our entrepreneurs were worried, wondering whether they would go bankrupt and whether they should lay off their employees.
It is important to note that several government programs, at both provincial and federal levels, were essential to keep our businesses afloat during this difficult period, in all sectors of activity in Canada, from Imperial Oil to the local restaurant.
As for what we implemented when I was at SDTC, these were measures aimed at protecting Canadian companies holding clean technology intellectual property in Canada, along with their human expertise, in other words, brains. The goal was to avoid weakening those companies and thus to shield them from low-cost foreign acquisitions by countries such as China.
Studies show that green technology companies in Canada are underfunded by as much as a factor of two compared with their American counterparts.
Every time we weaken our companies by refusing to support them in times of crisis, we are not building the Canada of tomorrow; instead, we expose them to the appetites of foreign interests, especially Chinese ones.
I would also like to point out that during our discussions on the SDTC board of directors, we obtained legal advice from a lawyer at Osler. This legal opinion stated that no conflict of interest existed, given that the measure in question was universal and exceptional, applying equitably to all companies that had benefited from SDTC in the past.
In conclusion, faced with this unprecedented situation, it had become imperative to protect our technological companies, to preserve Canadian intellectual property, and to ensure the sustainability of our green technology industry.
Investing in our Canadian green technology companies goes beyond mere economic considerations. It is a commitment to our nation, to the environment, and to our children.
By supporting these companies, we preserve our technological sovereignty, create sustainable jobs, and contribute to building a greener and more prosperous future for all. I personally believe in this just transition.
At Cycle Capital, for example, we have invested over $200 million in Canada. We have also taken the lead and contributed to injecting $2 billion in equity into Canadian companies.
Today, our portfolio is worth more than $3.9 billion.
We have also participated in the direct creation of 1,300 high-quality high-tech jobs, which are filled by engineers, PhDs and leading experts on these issues.
Ladies and gentlemen, I am now ready to answer your questions and to take part in the discussion.
:
Thanks to the witnesses for being here today. I know this is a hot topic for this committee and some others. I'm looking forward to your testimony.
I know that SDTC has been around for about 22 years, if I'm not mistaken. I know that it was, in fact, awarded an extra $325 million around 2013, which I think is a testament to the fact that the organization has been recognized by multiple governments as doing some really great work.
That doesn't solve the issue at hand, which is really about board governance and about assuring both us, as members of Parliament, and the general public that the funds the organization presided over were distributed in a way that navigated the challenges and abided by the highest standards of governance.
Obviously, I think that's a balancing act. That matters for us as members of Parliament, and for this committee in particular, which has the job of holding the government to account on everything to do with industry and its large portfolio. Given that SDTC falls under that portfolio, I think it's important that we have these debates and discussions.
I know that we've learned from investigations that were initiated by the and from the committee testimony that we heard around conflict of interest policies at SDTC in the past. They left a lot to be desired, I'll say quite frankly. Board members didn't necessarily recuse themselves from decisions where they had conflicts or where there could be perceived conflicts. At least, that's what we've heard. If they did recuse themselves, those things were not necessarily noted anywhere. Decisions were taken unanimously without any notation of dissenting votes. Board members, including the chair, gave money to their own companies in some cases.
I'm going to start with you, Ms. Méthot. Can you comment on some of these challenges and your experience while you were on the board?
:
Thank you for your question.
I have always disclosed conflicts of interest. I even have a letter here confirming that I complied with all the rules. This was also noted by a departmental representative. I don't really know his title, but I think he was an assistant deputy minister. He was there during all our deliberations. In addition, the common practice when I was there was to declare our conflicts of interest, whether actual, potential or apparent, and to recuse ourselves in advance. I have here a letter confirming that I recused myself in advance and complied with all the rules. It's available if you want to see it.
That being said, the comment about the minutes is fair. If there's one thing that all the organizations and boards that I've been a part of have in common, it's that they aren't perfect and are improving. In this instance, a certain amount of progress remains to be made. As you can understand, having left the board more than two years ago, I'm not in a position to judge the present situation. I can only speak to the period when I was there.
Here's a very specific example. I read the minutes on the weekend. They state that we are an investor in SPARK Microsystems, whereas the declaration of interest states that SPARK Microsystems could be an investment but in fact isn't one. That was incorrectly entered in the minutes.
You're entirely right on that point, sir. I agree with you.
:
First of all, I must say that both of you have impressive CVs. You are pioneers in investment, in sustainable development and in the emergence of a new technology that may well save the planet. Let's hope so. The fact remains that what we're interested in here is concerning.
I would mention, Ms. Méthot, that you have a very calm tone of voice, which I find quite reassuring in the circumstances. I'd like to go back to the context of the COVID‑19 pandemic because you mentioned it.
I remember the workload the committee was coping with at the time. We were considering a bill to amend the Investment Canada Act, and there was this sense that we had to save our businesses from devaluation. You suggested in your remarks that this desire to save businesses coloured your work.
What would have happened if the board hadn't made the decision to adopt an emergency plan for clean technology businesses during the pandemic?
:
We were deeply struck by the calls we were receiving from entrepreneurs, even though they came only from our own portfolio at Cycle Capital.
There was a wave of panic among them. In many instances, their businesses weren't profitable, and that's why they didn't meet the criteria of other available programs. Those companies develop technologies and invest in technology, but they don't turn a profit. Consequently, panicking entrepreneurs were wondering whether they had to declare bankruptcy or lay off their employees. However, losing human resources means losing business intelligence. You may have intellectual property, but without engineers and experts, it's very hard to turn that asset into real products and build an actual commercial enterprise that's globally viable.
Many scenarios were possible, depending on the phases of COVID‑19, obviously. Looking in the rearview mirror, we can see that those businesses could have been vastly weakened. Some very serious studies show us that more than 85% or 90% of new technology intellectual property in the water industry, for example, is owned by Chinese interests. In addition to the brain drain, that intellectual property could have been a target.
Let's say you stop all that, take a break and then start over. You will then have to find employees, people who understand and are capable of doing what's called “scaling”, which is the real commercial rollout, without losing knowledge.
I would have been more embarrassed here if you had asked me why we did nothing. I'm glad that we did something and that this committee is looking into this matter.
:
Welcome to the witnesses, one in person and one virtually.
Obviously, a number of questions were asked today.
[Translation]
Welcome to the committee, Ms. Méthot.
[English]
I have taken a look at Cycle Capital, the firm that you are integral to—if I can use the word integral. It's obvious that your firm has been around for a while. You have probably made a lot of investments in a number of companies, and for those investments you obviously examine the corporate governance structure of every company you invest in.
It sounds as though you sit on a few boards. You sat on the board of SDTC and, if my understanding is correct, you sit on the board of the Canada Infrastructure Bank, which speaks to your experience within the investment industry.
How would you judge the corporate governance structure that was in place at SDTC during your tenure?
:
It's important to note that with respect to the billion-dollar Liberal-Green slush fund, the Auditor General had given it a clean bill of health up to 2017. While I am interested in the documents from 2015 forward, I think that some of that has already been pronounced upon by the Auditor General. Irrespective of who made the various OIC appointments and who appointed the individuals to the board, it doesn't matter who appointed you: if you break the rules, then you need to be held to account for that.
Ms. Verschuren was in fact appointed to the billion-dollar SDTC Green slush fund by the current Liberal government and has resigned in disgrace. On the question of conflict of interest, the review the government commissioned determined that the conflict of interest policies were not followed. To Mr. Lemire's point, it's very important to note that the period of time they reviewed is the period for which we see those recusals. The in-house counsel, the one advising them on conflicts of interest, was the same one who told board members to backdate their conflict of interest documents. The documents they have are highly suspicious at best.
Certainly, the suggestion to go back to 2015 has my full support.