:
Good afternoon, everyone. I call this meeting to order.
Welcome to meeting no. 92 of the House of Commons Standing Committee on Industry and Technology.
Today's meeting is taking place in a hybrid format, pursuant to the standing orders.
Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C-27.
I'd like to welcome our witnesses today and also apologize for the brief delay caused by a vote in the House.
Joining us today are Colin J. Bennett, professor; Dr. Michael Geist, professor of law and Canada research chair in Internet and e‑commerce law; Vivek Krishnamurthy, associate professor of law at University of Colorado Law School; Dr. Brenda McPhail, acting executive director of the public policy in digital society program; and lastly, Teresa Scassa, Canada research chair in information law and policy, Faculty of Law, Common Law Section, University of Ottawa.
I'd like to welcome you all.
We'll begin the discussion without further ado.
Mr. Bennett has the floor for five minutes.
:
Thank you very much, Mr. Chair.
I'm from the University of Victoria, although I'm currently in Australia. I wish everybody a good day.
I would like to emphasize five specific areas for reform of the CPPA and to suggest ways in which the bill might be brought into better alignment with Quebec's law 25. I don't think that Bill C-27 should be allowed to undermine Quebec law, and in some respects, it does. I also think these are some of the areas where the bill will be vulnerable when the European Commission comes to evaluate whether Canadian law continues to provide an adequate level of protection.
Some of these recommendations are taken from the report that you have from the Centre for Digital Rights, which I'd like to commend to you.
First, I believe that CPPA's proposed section 15, on consent, is confusing to both consumers and businesses. In particular, I question the continued reliance on “implied consent” in proposed subsection 15(5), which states, “Consent must be expressly obtained unless...it is appropriate to rely on an individual's implied consent”.
The bill enumerates those business activities for which consent is not required, including if “the organization has a legitimate interest that outweighs any potential adverse effect on the individual”. That's a standard that has been imported from the GDPR. However, in the GDPR, “consent” means express consent; it's “freely given, specific, informed and unambiguous”.
In the current version of the CPPA, businesses can have it both ways. They can declare that they have implied consent because of some inaction that a consumer allegedly took in the past because of not reading the legalese in a complex terms-of-service agreement, or they can assert a “legitimate interest” in the personal data by claiming that there is no “potential adverse effect on the individual”. That is a risk assessment performed by the company rather than a judgment made about the human rights of individuals to control their personal information.
In that respect, it's really important that the bill be brought within a human rights framework. There should be no room for implied consent in this legislation. It's a dated idea that creates confusion for both consumers and businesses.
Second, there is no section in the CPPA on international data transfers. I find that very odd. I know of no other modern privacy law that fails to give businesses proper guidance on what they have to do if they want to process personal data offshore. The only requirement is for the organization to require the service provider, “by contract or otherwise,” to ensure “a level of protection of the personal information equivalent to that which the organization is required to provide under this Act.” That's proposed subsection 11(1) of the CPPA.
That due diligence applies whether the business is transferring personal data to another province in Canada or overseas to a country that may or may not have strong privacy protection or, indeed, a record of the protection of human rights. That's particularly troubling because of proposed section 19 of the CPPA, which reads, “An organization may transfer an individual's personal information to a service provider without their knowledge or consent.”
The Canadian government has never gotten into the business of adopting a safe harbour approach or a white list, and I'm not recommending that. However, Quebec, I believe, has legislated an appropriate compromise under section 17 of law 25, which requires businesses to do an assessment, including of the legal framework, when sending personal data outside of Quebec. As many businesses will have to comply with the Quebec legislation, why not mirror that provision in Bill C-27?
Third, the bill ignores important accountability mechanisms that were pioneered in Canada and exported to other jurisdictions, including Europe. Therefore, it's very strange that those same measures do not appear in the CPPA. In particular, privacy impact assessments are an established instrument and a critical component of accountable personal data governance, and they should be required in advance of product or service development, particularly where invasive technologies and business models are being applied, where minors are involved, where sensitive personal information is being collected, or where the processing is likely to result in a high risk to an individual's rights and freedoms. Businesses do the PIAs, and they stand ready to demonstrate their compliance or their accountability to the regulator.
A fourth and related problem is the absence of any definition of sensitive forms of personal data. The word “sensitivity” appears in several provisions of the bill, but with the exception of the specification about data on minors, it is nowhere defined. In my view, the bill should define what “sensitive information” means, and it should also enumerate a non-exhaustive list of categories, as in fact occurs in many other pieces of legislation.
Finally—I know you've heard about this in the past, and I have done research on this—the absence of proper privacy standards for federal political parties is unjustifiable and untenable. The government is relying on the argument that the FPPs’ privacy practices are regulated under the Elections Act, but those provisions are nowhere near as strong as those in Bill C-27. I think businesses resent the fact that parties are exempted. This is not an issue that will go away, given advances in technology and its use in modern digital campaigning. Canada is one of the few countries in the world in which political parties are not covered by applicable privacy law.
Thank you so much.
:
Thank you very much, Chair.
Good afternoon. As you heard, my name is Michael Geist. I am a law professor at the University of Ottawa, where I hold the Canada research chair in Internet and e-commerce law and am a member of the Centre for Law, Technology and Society. I appear in a personal capacity representing only my own views.
I’d like to start by noting that the very first time I appeared before a House of Commons committee was in March 1999 on Bill C-54, which would later become PIPEDA. I must admit that I don't think I really knew what I was doing at that appearance, but my focus at the time was on whether or not the law would provide sufficient privacy protections for those just coming online who had little background or knowledge of privacy or security, or even the Internet, for that matter.
I highlighted some of the shortcomings of the bill, including poorly defined consent standards that would lead to overreliance on implied consent, broad exceptions on the use or disclosure of personal information and doubts about enforcement. I urged the committee to strengthen the bill, but I have to say that I did not fully appreciate that the policy choices being made back then would last for decades.
I start with this brief trip down memory lane because I feel that we find ourselves in a similar position today, with policy choices on things like artificial intelligence and emerging technologies that will similarly last for far longer than we might care to admit.
It is for that reason that I think it is important to emphasize the need to get it right rather than to get it fast. I often hear the talk about being first, at least on AI, and I must admit that I don’t understand why that is a key objective. Indeed, if you leave aside the fact that the core of at least the privacy part of this bill was introduced in 2020 and languished for years, we now find ourselves in a race to conduct hearings that I don’t totally get. We have an AI bill where there is a major overhaul with no actual text available yet. Witnesses seemingly have to pick between privacy and AI, creating the risk of limited analysis all around.
I think we need to do better. I’ll focus these remarks on privacy, but to be clear, the AI bill and the proposed changes raise a host of concerns, including the need for independent enforcement and the high-impact definitions that puzzlingly include search and social media algorithms.
The other lesson from the past two decades is that you can seek to create a balanced statute—I know there's been a lot of talk about balance—but the playing field will never be balanced. It's always tilted in favour of businesses, many of which have the resources and expertise to challenge the law, challenge complaints and challenge the Privacy Commissioner. Most Canadians don’t stand a chance. That’s why we must craft rules that seek to balance the playing field, too, with broad scope of coverage, better oversight and audit mechanisms, and tough penalties to ensure that the incentives align with better privacy protection.
How do you do that? Given my limited time, I have five quick ideas.
First, to pick up where Professor Bennett ended, we must end the practice of “do what I say, not what I do” when it comes to privacy. I think it's unacceptable in 2023 for political parties to exempt themselves from the standards they expect all businesses to follow. Indeed, you can't credibly argue that privacy is a fundamental right and then claim that it should not apply in a robust manner to political parties.
Second, the addition of language around the fundamental right to privacy is welcome, but I think it should also be embedded elsewhere so that it factors more directly into the application of the law. For example, as former commissioner Therrien noted, it could be included in proposed subsection 12(2) among the factors to consider in an “appropriate purposes” test.
Third, the past 20 years have definitely demonstrated that penalties matter for compliance purposes and are a critical part of the balance. The bill features some odd exclusions. There are penalties for elements of the appropriate purposes provision in proposed section 12, but not for the main provision limiting collection, use and disclosure to appropriate purposes.
In the crucial proposed section 15 provision on consent, there are no penalties around the timing of consent or for using an implied consent within the legitimate interest exception. The bill says such a practice “is not appropriate”, whatever that means. It is an odd turn of phrase in a piece of legislation. But the penalty provision doesn't apply regardless.
Fourth, the committee has already heard debate about the appropriate standard of anonymized data. I get the pressure to align with other statutes. I’d note that proposed subsection 6(6) specifically excludes anonymized data from the act, and yet I think we want to ensure that the commissioner can play a data governance role here with potential audits or review, particularly if a lower standard is adopted.
Fifth and finally, provided we ensure that the privacy tribunal is regarded as an expert tribunal that will be granted deference by the courts, I’m okay with creating another layer of privacy governance. I appreciate the concerns that this may lengthen the timeline for resolution of cases, but the metric that counts is not how fast the Privacy Commissioner can address an issue but how fast a complainant can get a binding final outcome. Given the risks of appeals and of courts treating cases on a de novo basis, existing timelines can extend far beyond the commissioner's decision, and the tribunal might actually help.
Thanks for your attention. I look forward to your questions.
:
Thank you, Mr. Chair and members of the committee. I am very honoured to be speaking with you today regarding Bill C-27.
I am currently a professor of law at the University of Colorado, but when I was the director of CIPPIC at the University of Ottawa, we published two reports in the spring of 2023 that consider AIDA and the CPPA. I am going to focus my remarks on the CPPA, particularly on provisions that relate to the privacy of minors. I would be happy to share some of my thoughts around AIDA as well.
I would like to begin by saying that I agree with everything that Professor Bennett and Professor Geist said. You could treat these remarks as additive.
While it is very welcome that the CPPA, unlike PIPEDA, specifically provides that the personal information of minors is sensitive information, Professor Bennett has already told you that “sensitive information” is not a defined term in the legislation. It is positive that children would have—if this bill passes into law—some recognition of the importance of protecting their personal information to a higher standard. However, we believe that this legislation can do far better.
For context, it is important to realize that children spend increasing amounts of time online, at younger and younger ages. This is a trend that accelerated during COVID-19 and the transition to digital online learning. I am a parent, and I am sure many of you are parents. Our children are using devices under commercial terms of service all the time, and this poses a very significant risk to the privacy rights of children.
While COVID has receded, it's the new reality that kids are using more and more technology at younger ages. What can we do? There are three things, and then a fourth about jurisdictional competence.
The Privacy Commissioner, in his recommendations regarding the CPPA, suggested that “best interests of the children” language should be incorporated into the law, and he suggested doing that in the preamble. I take no position myself as to where that should be done, but it is clear that this is international best practice. The United Kingdom and California have both incorporated such language into recently enacted statutes, and we think that Canada should follow this approach. What would that mean? It means that organizations that handle children's personal data must take the best interests of children into account. That must come ahead of their commercial interests.
Second, we think it is important for the CPPA to require organizations that develop products or services that are likely to be accessed by children to set their privacy settings to the highest level. Defaults play a really important role in our subjective experience of privacy. It is great to have rights, but you can very easily leave those rights on the table if a default setting effectively contracts you out of them. We think that requiring a company to set those defaults to high levels when children are its likely or known users is very important.
Third, I'd like to pick up on what Professor Bennett told you about data protection impact assessments, a made-in-Canada idea. Bill C-27 is extremely weak when it comes to data protection impact assessments. The provisions apply only when the legitimate interest exception to consent is being used. This is a problem for everyone, especially for children.
We believe—and I specifically believe this personally—that the data protection impact assessment requirements of this bill need to be considerably strengthened whenever data-processing activities pose a high risk to the privacy rights of Canadians. I would say that if children's data is sensitive data, that means we basically need to do that impact assessment all the time.
Last, I'd like to talk about constitutional competence here. There may be some concerns that it may be beyond federal competence to protect the privacy rights of children with more expansive provisions. Our analysis suggests otherwise. CPPA, like PIPEDA before it, is being enacted under Parliament's power to regulate trade and commerce.
Now, it is true that in our federal system, provincial governments get to determine the age of majority, but there is plenty of federal legislation that is designed to protect the rights of children. This also leads to how we think of this law, the consumer privacy protection act. It's not just a form of privacy regulation; it's also, when you think about it, a form of consumer protection legislation that is regulating the safety of digital products that invade and interfere with our right to privacy.
In view of the long history of federal regulation directed at protecting children in the marketplace, we think it would be appropriate for the federal government to include stronger privacy protections, and that would not prejudice provincial laws, like Quebec's, that are stronger. Just as PIPEDA yields to provincial legislation when it's substantially equivalent or better, the same could be true of strengthened children's privacy protections in the new CPPA.
Thank you very much.
:
Thank you, Mr. Chair and members of the committee, for inviting me here today to speak to the submission authored by Jane Bailey, professor at the faculty of law of the University of Ottawa; Jacquelyn Burkell, professor at the faculty of information and media studies at Western University; and myself, currently the acting executive director of the public policy in digital society program at McMaster University.
It is a privilege to appear before you on this omnibus bill, which needs significant improvement to protect people in the face of emerging data-hungry technologies.
I will focus on part 1 and very briefly on part 3 of the bill in these initial remarks, and I welcome questions on both.
Privacy, of course, is a fundamental underpinning of our democratic society, but it is also a gateway right that enables or reinforces other rights, including equality rights. Our written submission explicitly focuses on the connection between privacy and equality, because strong, effective privacy laws help prevent excessive and discriminatory uses of data.
We identified eight areas where the CPPA falls short. In these remarks, I will focus on four.
First of all, privacy must be recognized as a fundamental human right. Like others on this panel, while we welcome the amendment suggested by the minister, we would note that proposed section 12 in particular also requires amendment so that the analysis to determine whether information is collected or used for an appropriate purpose is grounded in that right.
Second, Bill C-27 offers a significant improvement over PIPEDA in explicitly bringing de-identified information into the scope of the law, but it has diminished the definition from the predecessor bill, Bill C-11, by removing the mention of indirect identifiers. The bill also introduces a new category, anonymized information, which is deemed out of the scope of the act, in contrast to the superior approach taken by Quebec. Given that even effective anonymization of personal data fails to address the concerns about social sorting that sit at the junction of privacy and equality, all data derived from personal information, whether identifiable, de-identified or anonymized, should be subject to proportionate oversight by the OPC, simply to ensure that it's done right.
Third, proposed subsection 12(4) weakens requirements for purpose specification. It allows information collected for one purpose by organizations to be used for something else simply by recording that new purpose any time after the initial collection. How often have you shared information with a business and then gone back a year later to see if it had changed its mind about how it's going to use it? At a minimum, the bill needs constraints that limit new uses to purposes consistent with the original consensual purpose.
Finally, the CPPA adds a series of exceptions to consent. I'll focus here on the worst, the legitimate interest exception in proposed subsection 18(3), which I differ from my colleagues in believing should be struck from the bill. It is a dangerously permissive exception that allows collection without knowledge or consent if the organization that wants the information decides its mere interest outweighs adverse impacts on an individual.
This essentially allows collections for organizational purposes that don't have to provide benefits to the customer. Keeping in mind that the CPPA is the bill that turns the tap for the AIDA on or off, this exception opens the tap and then takes away the handle. Here, I would commend to you the concerns of the Right2YourFace coalition, which flags this exception as one in which organizations may attempt to justify and hide their use of invasive facial recognition technology.
Turning to part 3 of the bill, the AIDA received virtually no public consultation prior to being included in Bill C-27, and that lack of feedback has resulted in a bill that is fundamentally underdeveloped and prioritizes commercial over public interests. The bill, by focusing only on high-impact systems, leaves systems that fail to meet the threshold unregulated. AI can impact equality in nuanced ways not limited to systems that may be obviously high-impact, and we need an act that is flexible enough to also address bias in those systems in a proportionate manner.
A recommender system is mundane these days, yet it can affect whether we view the world with tolerance or prejudice from our filter bubble. Election time comes to mind as a time when that cumulative impact could change our society. Maybe that should be in, and maybe it should be out. We just haven't had the public conversation to work through the range of risks, and it's a disservice to Canadians that we're reduced to talking about amendments to a bad bill in the absence of a shared understanding of the full scope of what it needs to do and what it should not do.
Practically speaking, we nonetheless make specific recommendations in our brief to include law enforcement agencies in scope, to create independent oversight and to amend the definitions of harm and bias. We further support the recommendations submitted by the Women's Legal Education & Action Fund.
I would be very happy to address all of these recommendations during the question period.
Thank you.
I have concerns about both the CPPA and the AIDA. Many of these have been communicated in my own writings and in the report submitted to this committee by the Centre for Digital Rights. My comments today focus on the consumer privacy protection act. I note, however, that I have very substantial concerns about the AI and data act, and I would be happy to answer questions on that, as well.
Let me begin by stating that I am generally supportive of the recommendations of Commissioner Dufresne for the amendment of Bill C-27, as set out in his letter of April 26, 2023, to the chair of this committee.
I will address three other points.
The government has chosen to retain consent as the backbone of the CPPA, with specific exceptions to consent. One of the most significant of these is the “legitimate interest” exception in proposed subsection 18(3). This allows organizations to collect or use personal information without knowledge or consent if it is for an activity in which an organization has a legitimate interest. There are guardrails: The interest must outweigh any adverse effects on the individual; it must be one that a reasonable person would expect; and the information must not be collected or used to influence the behaviour or decisions of the individual. There are also additional documentation and mitigation requirements.
The problem lies in the continuing presence of “implied consent” in proposed subsection 15(5) of the CPPA. PIPEDA allowed for implied consent because there were circumstances where it made sense and there was no legitimate interest exception. However, in the CPPA, the legitimate interest exception does the work of implied consent. Leaving implied consent in the legislation provides a way to get around the guardrails in proposed subsection 18(3). An organization can opt for the implied consent route instead of legitimate interest. It will create confusion for organizations that might struggle to understand which is the appropriate approach. The solution is simple: Get rid of implied consent. I note that implied consent is not a basis for processing under the GDPR. Consent must be express, or processing must fall under another permitted ground.
My second point relates to proposed section 39 of the CPPA: an exception to an individual's knowledge and consent where information is disclosed to a potentially very broad range of entities for “socially beneficial purposes”. Such information need only be de-identified—not anonymized—making it more vulnerable to re-identification. I question whether there is social licence for sharing de-identified rather than anonymized data for these purposes. I note that proposed section 39 was carried over verbatim from Bill C-11, when “de-identified” was defined to mean what we now understand as anonymized. Permitting disclosure for socially beneficial purposes is a useful idea, but proposed section 39, especially with the shift in meaning of “de-identified”, lacks necessary safeguards.
First, there is no obvious transparency requirement. If we are to learn anything from the ETHI committee's inquiry into PHAC's use of Canadians' mobility data, transparency is fundamentally important. At the very least, there should be a requirement that written notice of data sharing for socially beneficial purposes be given to the Privacy Commissioner of Canada. Ideally, there should also be a requirement for public notice. Further, proposed section 39 should provide that any sharing be subject to a data-sharing agreement, which should also be provided to the Privacy Commissioner. None of this is too much to ask where Canadians' data are conscripted for public purposes. Failure to ensure transparency and a basic measure of oversight will undermine trust and legitimacy.
My third point relates to the exception to knowledge and consent for publicly available personal information. Bill C-27 reproduces PIPEDA's provision on publicly available personal information, providing in proposed section 51 that “An organization may collect, use or disclose an individual's personal information without their knowledge or consent if the personal information is publicly available and is specified by the regulations.” We have seen the consequences of data scraping from social media platforms in the case of Clearview AI, which used scraped photographs to build a massive facial recognition database. The Privacy Commissioner takes the position that personal information on social media platforms does not fall within the “publicly available personal information” exception.
Not only could this approach be upended in the future by the new personal information and data protection tribunal, but it could also easily be modified by new regulations. Recognizing the importance of proposed section 51, former Commissioner Therrien recommended amending it to add that the publicly available personal information be “such that the individual would have no reasonable expectation of privacy.” An alternative is to incorporate the text of the current regulations specifying publicly available information into the CPPA, revising them to clarify scope and application in our current data environment. I would be happy to provide some sample language.
This issue should not be left to regulations. The amount of publicly available personal information online is staggering, and it is easily susceptible to scraping and misuse. It should be clear and explicit in the law that personal data cannot be harvested from the Internet, except in limited circumstances set out in the statute.
Finally, I add my voice to those of so many others in saying that data protection obligations set out in the CPPA should apply to political parties. It is unacceptable that they do not.
Thank you.
After eight long years, we finally have privacy legislation in front of this committee. Of course, we've heard from witnesses that it's actually been 24 years since we updated the last privacy legislation.
When we looked at this at second reading in the House, we really focused on what was missing in this bill. What was missing was listing privacy as a fundamental right. However, when we came to committee and we had witnesses lined up, the minister added a bunch of amendments. The amendments seemed to indicate that he was listening. Of course, we're not sure where we are, because the amendments will go into certain parts of the bill.
Mr. Geist, thank you for appearing today. When we had the original version of this, I understand you were part of the debate on the first iteration of this legislation, PIPEDA, 24 years ago. You don't look that old, sir.
The minister came and presented a bill that did not list privacy as a fundamental right, and now there are all these amendments. Did the minister break this bill?
:
I do. I think that would provide clarity in how it is interpreted by the commissioner, obviously, as well as by the courts, and it would provide a strong signal from the legislative branch of the importance it accrues to privacy.
However, as I mentioned, in many respects I'd love to see this in some core provisions that are ultimately going to serve as a testing ground when there's analysis, when you make the determination, for example, of whether consent is appropriate or whether it is for an appropriate purpose. That's when you can begin to bring in the idea that privacy is a fundamental right, because at that stage you're engaged a bit in some of that balancing, rather than at the more overarching level, which, it seems to me, may come at a later date, when a court is reviewing a decision and asking whether the commissioner took adequate account of the fact that privacy has that elevated status.
:
It's in Canada's interest to get right what is a critically important issue—appropriate regulation of artificial intelligence. The idea that we want to race ahead with no consultation is just the wrong way to do something that all Canadians have an active interest in. We saw the government do the same on the generative AI guardrails, where consultations were conducted privately, in secret, over the summer, and the guardrails were then rushed out with practically no public discussion.
When we look at some of the developments taking place around the world, we see that it becomes essential in terms of the kinds of protections Canadians might get with AI systems as well as some of the economic interests driven by the adoption of AI. We want to ensure that we contribute to that global conversation, and that some of our rules are broadly consistent with where things are headed, provided that they meet the kinds of standards that we're looking for.
In this instance, it's hard to figure out what the government is doing, other than that it raced out a sort of skeleton piece of legislation, got criticized for the lack of consultation and the lack of detail, and now says, “Okay, we'll provide more detail that makes it look a little bit more like Europe”, but we don't even have the language on that yet either.
Thank you to all the witnesses for making time for the committee, and for their testimony.
My questions are for Ms. McPhail.
In the brief that you co-authored and submitted to the committee, you recommended that Bill C-27 be amended to “Ensure continued and appropriate protection of de-identified and 'anonymized' information”. We spent our last committee meeting on Monday with witnesses who talked about the definition maybe being too stringent, the fact that we've raised the bar too high.
What are your concerns regarding the protection of de-identified information and of anonymized information in the CPPA, and how can it perhaps be amended to address your concerns?
:
I think there will always be differences of opinions as to whether definitions are sufficiently stringent or overly weak.
What would address our concerns? There are three categories of concerns that we have around de-identified and anonymized information. The first is that the definition has been weakened between Bill C-11 and the current iteration, Bill C-27. In the past definition, it included indirect identifiers. You can identify me by my name, but you can also identify me if you have a combination of my postal code, my gender and a few other factors about me. To truly de-identify information to an adequate standard where re-identification is unlikely, I believe—and my co-submitters believe—that the definition should include indirect identifiers.
To some degree, that definition has been weakened because Bill C-27 adds a new category of information: anonymized information. The problem with that new category is that people agree that, as a technical matter, it is extremely difficult to achieve perfect and effective anonymization, and by taking anonymized information out of the scope of the bill, we remove the ability of the Office of the Privacy Commissioner of Canada to inspect the processing that has happened to ensure that it has been done to a reasonable standard.
Like some of the witnesses you heard from—who would disagree with me about whether or not definitions should be stronger or weaker—I think we all agree on the reality that when personal information is processed, whether it is used to create de-identified information or anonymized information, there should be some checks and balances to make sure that the companies doing it are doing it to a reasonable standard that is broadly accepted. The way to achieve that is by including the ability within the bill for the Office of the Privacy Commissioner to inspect that processing and give it a passing grade, should that be necessary.
The last piece of concern we have with anonymization, which makes that scrutiny even more important, is that the bill conflates anonymization with deletion. When this bill was put forward, it was announced to great fanfare that individuals would now have a right to request deletion of their personal information from the companies with which they deal.
That right, I believe, is rendered moderately illusory. Certainly members of the public would not expect that if they ask for their information to be deleted, an organization could say, yes, they'll do that, and then simply anonymize the information and continue to use it for their own purposes. If we are going to allow anonymized information to be equivalent to deletion, again, it's incredibly important that we are 100% certain that the equivalency is real and valid, that truly no individual can be identified from that information and that it's not going to harm them in its use after they've explicitly exercised their right to ask for deletion.
I'd like to thank all the witnesses.
Mr. Bennett, in your February 12, 2021, submission to the public consultations on Bill C-11, you distinguished between the concepts of interoperability and harmonization. I believe this is particularly germane to the subject before us, because these two concepts can be confused. You showed the difference between the two with an example I'd like to quote:
For instance, the processes for doing PIAs should be interoperable between the federal government and the provinces. If an organization does a PIA under the authority of one law, it may need the assurance that the PIA will also be acceptable in another jurisdiction. But that does not necessarily mean the harmonization or convergence of rules.
First, can you provide us with a definition of these two distinct concepts?
Second, can you tell us whether the provisions of Bill C-27 promote the interoperability of processes among the various levels of government or rather the harmonization of rules?
:
Thank you for that question.
I was trying to draw, in that statement, a distinction between harmonization, or convergence, which is a harmonization of text ensuring that the statutes essentially say the same thing, and interoperability, which I think means something subtly different. It means that if businesses have a requirement to do something in one province or one jurisdiction, such as a privacy impact assessment under Quebec's law 25, it will in fact be accepted by a regulator elsewhere. You can see that distinction in Canada among different provincial laws, where it has been worked out over time pragmatically, but it's also important to see it internationally through the GDPR.
That was the point I was trying to make. I'm not an expert on Quebec law, but I was trying to point out certain areas in Quebec's law where I think businesses would be required to do more under that law than they would under the current text of Bill C-27. Then you have to ask this question: What might be the economic impact of that across Canada if the CPPA is perceived to be lowering the standard within the Quebec legislation? That's the point I was making.
I think the particular provision on international data flows is an interesting example, because in the CPPA at the moment there's really nothing explicit for businesses on what to do when they are processing data offshore, and the vast majority of data protection laws that I know of.... This is also something that's of critical importance to the European Union when it comes to making a judgment about the adequacy, and the continued adequacy, of our laws in Canada. What happens when data on Europeans comes to Canada and then it is processed offshore elsewhere? Those are critical questions. I think there would be some concerns about that by our European friends when they come to make those judgments.
I hope that answers your question.
With respect to the shortcomings of the Canadian law, in an article entitled “What political parties know about you”, one thing you talk about is the factors affecting how political parties, MPs or independent candidates protect the personal information of Canadians that they may have in their possession. In the current context, Bill C-27 makes no mention of protection of this kind.
Is the government falling short in protecting voter data, and perhaps in the quest for open and transparent governance?
Do you think Canada should follow Quebec's lead and subject federal parties to the same privacy standards as organizations?
:
Thank you for reading that work.
I first wrote about this issue about 10 years ago, when I issued a report to then commissioner Stoddart on political parties and privacy. It was obvious back then that there was a major gap in our law. Then Cambridge Analytica came along, and the issue hit the front pages, and there was a lot more attention to this.
I'll say this. It's become increasingly indefensible and untenable for political parties to be exempted—to say that they're exempted, to be clear—from provisions that businesses have to comply with, and I don't think the issue is going to go away.
The question is how that is done. An easy thing to do would be to apply the CPPA to federal political parties. That wouldn't necessarily undermine what Quebec has done, although the Quebec law, in fact, is an amendment to your Elections Act. It's by no means as far as I would want to go.
In British Columbia, the commissioner's office has made a ruling that that law does, in fact, apply to federal political parties as well as to provincial political parties. That ruling is currently under judicial review, but you do have a real problem of interoperability, to go back to your original question, in that it has become absurd that provincial political parties should have to comply with a higher set of standards in B.C. and Quebec than federal political parties do federally.
I don't think that's in the interest of our political parties, either. It needs to be fixed. There need to be standards for federal political parties to comply with commonly accepted privacy standards of the kind that we are debating here with respect to businesses.
I apologize to our guests for being a little bit late. Our Conservative colleagues were up to mischief in the House today delaying things.
Some hon. members: Oh, oh!
Mr. Brian Masse: I'm just kidding.
If I ask a couple of questions that are a little out of context, I apologize.
I'd like to start with Mr. Geist.
With the Privacy Commissioner, one of the proposed changes is the creation of a tribunal. I'm just wondering if you have any thoughts about that. I have mixed emotions and thoughts on it, and a follow-up subsequent to that.
I've also seen recently what the tribunal has done to the Competition Bureau, and I'm really worried that we could be in the same boat. I was told by administrative people from the department that it couldn't happen, but others are now telling me that it can happen. I'm in a bit of a vacuum of space here, and I would like your opinion on that situation.
:
I did highlight that in the opening remarks, but I'm happy to engage.
You're right. When we look at what we just saw most recently involving Rogers and Shaw, it's understandable why people would be skeptical about the creation of a tribunal that provides that kind of oversight.
With that said, one of the things we have seen over the years is that, because of the way the federal court treats Privacy Commissioner decisions on a de novo basis—if there are appeals, they go to the court, and the court then effectively starts from scratch—you are faced with a situation where it almost incentivizes challenging tough cases—and we've seen that—because you get another chance at it.
Creating a tribunal, provided it is viewed from an administrative law perspective as an expert tribunal that's going to be granted some deference for the decision by follow-on courts, if it does go to court, has the potential, in some ways, to strengthen the outcomes of that process, because there is some of that deference, but that requires ensuring that the tribunal is genuinely viewed as an expert tribunal and properly constituted.
The initial version of this bill didn't go anywhere near there, requiring just one privacy expert. We are now at half.
Finding ways to ensure that public interest is well represented on the tribunal and that the tribunal has genuine expertise at least opens the door to the prospect that it might provide some advantages, although I recognize why some would say that these are already lengthy processes so we should just let the Privacy Commissioner handle it all.
:
What we want is for the courts to say that a decision that comes out of the tribunal is one they are prima facie going to respect.
At the moment, when the commissioner issues a decision, the courts start from scratch. They have the ability to start from the beginning, which is why we see.... The case of Cambridge Analytica has already been referenced a couple of times today. We still have Cambridge Analytica before the Canadian courts. We had the commissioner, then we had a first court decision, and now we have an appeal ongoing.
These are long processes, and the courts can play out somewhat differently from the administrative side when it comes to privacy itself. It's more about having the process of the Privacy Commissioner and the tribunal better respected by the courts, especially given the kinds of penalties we're envisioning. I think we can well assume that, if the penalties are significant, we're going to see organizations that are facing those penalties appealing the decisions through the courts.
:
I am less enamoured with the proposed privacy tribunal, for a number of reasons.
The first thing is that.... It's certainly true that the Federal Court, under the section 14 process in PIPEDA, specifically in the legislation, holds its hearings de novo. You can change that. You can keep the same framework, but you can require that it not be a proceeding de novo. It is currently part of the legislation, but that part could be changed without creating a whole new data tribunal.
Currently, the Privacy Commissioner doesn't make decisions. They don't have order-making powers. The commissioner makes findings. The process before the Federal Court is a process where either the commissioner or the complainant seeks an order, or the complainant is seeking damages. It's not an appeal proceeding. The organization, for example, doesn't have a mechanism through that process to go to the court. Again, you could certainly tinker with that formula, but it isn't really an appeal; it's a hearing de novo on this issue of whether an order should issue.
With the personal information and data protection tribunal, the tribunal is actually going to now have the authority not only to review orders of the Privacy Commissioner—because the commissioner will have new order-making powers—or to review recommendations for administrative monetary penalties, but also to hear appeals of any findings, because the commissioner can still make findings. Findings aren't orders. They're not binding. They're the commissioner expressing particular views about the law. Those can also be appealed to the tribunal.
I think that requires another look in this context. Really, what you're doing is taking an independent regulator with the approach and interpretations that the independent regulator brings to their decision-making role. You then have an appointed tribunal that is going to review those decisions and findings, those approaches and interpretations that are not binding and that are the commissioner's approach to the law.
One of the things I've expressed concerns about.... The previous commissioner and the current commissioner have been working very collaboratively with the provinces that have private sector data protection laws, so Alberta, B.C. and Quebec. They have engaged in joint investigations. They have issued joint findings. They work in a way that is very collaborative to try to ensure some sort of consistency across the country with respect to interpretations of their laws, and to try to find cohesive shared interpretations of the laws.
We're going to move into a situation where the commissioner has less latitude, because the commissioner may agree with the provincial commissioners, who have order-making powers and are only subject to judicial review and not appeal before any kind of tribunal. The federal commissioner will perhaps agree with them in joint findings and then find those findings appealed to a tribunal that might rule otherwise, disrupting that collaboration among commissioners and the balance that might be found there.
I have quite a number of concerns about the tribunal structure and what implications it might have for how decisions are made about the interpretation and application of the legislation.
Thank you to our witnesses here today.
I'm somewhat concerned about this bad bill before us today.
With Bill C-11, the Government of Canada had an opportunity to enshrine the fundamental right to privacy for children, to define what a minor is, to define perhaps an age of consent and to do a whole bunch of things to ensure that children were protected. That bill died on the Order Paper.
Then we had Bill C-27 when this Parliament opened up again. The minister again had an opportunity to enshrine the fundamental right of children to privacy in some of the actions they may take online. Then the government had the opportunity to define what sensitive information is—likely in the context of a child. They had an opportunity to define what a socially beneficial purpose was in the context of a child.
The minister came before us a few weeks ago. He said, “I have this bill. It's going to do so much work to protect children, but we have to amend it.” Then we had to put a motion forward to get a copy of those amendments. We're here today. I am not going to relent on this until we have more clarification and I hear from as many witnesses as possible to ensure that children's rights are protected.
My question is open-ended. I'll start with you, Mr. Geist. What clauses of the bill do you believe need to be amended to ensure that a child's fundamental right to privacy and their online actions are not used in a way that will compromise them as adults, or at a future period of time in their life?
:
I'll give you a brief answer, but Professor Krishnamurthy, who is one of the witnesses, has done studies and reports on this, so he's probably best suited to answer some of those particular questions. However, I will say two things in response to your opening comments.
First, to reiterate, I think there's general disappointment for many in the lack of prioritization of privacy over the last number of years. Bills, as you mentioned, get introduced and then seem to languish.
I'm glad that we're here now, but I'm inclined to agree with you that the best way to ensure that you get the best use of witnesses' time and the best kind of study is to reflect on the legislation as the government actually intends it. If we're left with this amalgam of a bill plus comments about where things are headed, that doesn't provide the best sort of study.
In terms of minors, specifically, I'll note that one of the real concerns arises in differing definitions of minors from province to province and the like. Therefore, one thing I think we need to include within the legislation—I know other witnesses have highlighted it—is the need for some sort of consistent definition here so that we know there is that consistency of protection.
I agree with Professor Geist's initial responses, but let me just take a step back.
There are certainly specific clauses of the bill that could be amended to improve the protection of children and minors. However, we need to consider the structure of the bill as a whole in protecting children—and adults, for that matter.
Several witnesses, including Professor Scassa and Professor Bennett, spoke about the interaction between the legitimate interest exception and the implied consent provisions of this bill. When they are applied to minors, structurally speaking, those exceptions could be very problematic. I think we need a structural approach to this. All of these pieces of the bill interact together.
What are the exceptions to consent? What are the situations in which someone collecting or using data has to go through a process to justify that? That's a data protection impact assessment. What are the remedies?
Specifically, there are a few things I would like to highlight, which I think are easy amendments, relatively speaking. The first is the “best interest of the child” language, and that could be inserted into—
:
I don't think we should view this as a competition between innovation and privacy. The two can be harmonized. The question is, how do we get responsible innovation that respects what I believe is the fundamental human right to privacy that all of us enjoy?
I think it is very instructive to look at what has happened in the European Union since the enactment of the GDPR, which has consent as one of six bases that an organization that collects, processes and uses personal data can use. Legitimate interest is a key part of the European data protection framework, and it is relied upon very extensively to provide all kinds of innovative services. In fact, in the European Union, as far as I know—and this is getting into AIDA territory—it's the main way that training data for AI is acquired.
The European law, unlike what we are thinking of here, includes many more protections around the use of those exceptions to consent. When we are relying on those exceptions in order to enable business activity or other forms of innovation, I think it's very important that there are, for example—and I've mentioned this before—data protection impact assessments, to think very carefully and evaluate very carefully the interests of the data processor and what they are doing versus the interests of the people whose data is being processed.
Especially when it comes to data that is sensitive, those protections are extremely strong in the European approach, and this goes to Professor Bennett's point in response to another honourable member's question about interoperability versus harmonization. We can make our law interoperable with other laws, but maybe here is an area where a bit of harmonization with Europe would be good, to protect the privacy rights of Canadians but also to allow us to do business on a transatlantic basis, because we are going to have that strong level of protection that one of our leading trading partners has as well.
:
Historically, there are two ways you can do this. You can do it the way that is included in PIPEDA, which is to put the onus on the organization to ensure that, when data is transferred anywhere to a service provider, whether that is in Canada or elsewhere, the same legal protections apply. The problem with that approach is that it relies on contract or other business-to-business agreements, and the individual tends to be excluded from that arrangement.
The other approach is to do what the Europeans have done over the years, which is a legal test, a jurisdiction-to-jurisdiction approach, which is to say, “These are the countries around the world to which personal data might be safely transferred.” The disadvantage with that is that it's a lengthy approach. It's highly legalistic. At the end of the day, it doesn't do a lot to ensure that the data is protected on the ground.
The short answer to your question is that it's complex. As I said, I think the approach that says that when a business is transferring data to a service provider, whether that's in Canada or offshore, it has to do an assessment, not only an assessment of what the company is doing but also an assessment of the legal and political environment.... For economic reasons, our businesses transfer personal data on Canadians to countries around the world that do not have proper privacy protection and, in some cases, have questionable human rights records. I think Canadians would be pretty annoyed about that if they knew it was happening.
A business should have to assess that. This is essentially what the Quebec law says. Do a privacy impact assessment—actually, broader than a privacy impact assessment—and be ready to demonstrate accountability for that data if and when a regulator comes calling.
That would be the compromise approach that I would suggest, but, at the moment, a business looks at this bill and says, “I want to transfer that data overseas. I want that data to be processed overseas. What do I have to do?” It's not clear. There's nothing there. Most legislation, as I said, has a section on international data transfers, and I think that would be something I would strongly advise.
:
A conventional way to look at the government-related collection of data is through the Privacy Act lens, which successive commissioners, going back well before even the creation of PIPEDA, have argued is insufficient and inadequate. Government has consistently failed to hold itself to the same standard that it expects of the private sector. We know the reason why privacy commissioners have regularly raised it, but it has rarely risen to the level of actual reform.
If the question is more about political parties and the potential application of the law to them—we've had several witnesses raise this—I think if we're honest about it, it's pretty obvious why they're not included. It's because political parties have grown addicted to access to that data. They value that data and, quite frankly, they fear that if they had to actually get the same level of consent that they are expecting businesses to obtain from users, they wouldn't get that consent and it would put that data at risk.
For me, this highlights two things. First, I just think it's so obvious: If you claim that there's a fundamental right to privacy and you're going to elevate the expectations for businesses, please put up the mirror and have that same expectation for yourselves as political parties.
It also highlights why there are real challenges with the law with respect to the private sector. Just as political parties don't want to have any sort of limitations on the collection and use of the data outside of some bare bones sort of legislation, so too for lots of businesses. They would also say that they are super innovative and acting in the public interest or have a legitimate interest. We know all the kinds of language that comes out of this. Fundamentally, they don't want to have to ask for actual, informed consent because they know they might not get it.
We can see why you need to ensure in these rules that we hold those businesses to a higher standard. I'd argue that we ought to be doing the same thing for political parties.
Part of the problem that I see—and I think it's been echoed by some of the other witnesses, who may want to chime in—is that this omnibus approach that has combined both privacy and AI fundamentally really impairs the ability to have an effective review of the legislation as a whole. We are unsurprisingly having much of our discussion on the privacy side, which I understand. That's where the committee was driving, at least initially, but the AI rules are critically important. As we've said, we don't even have the full text associated with them, and the implications are enormous.
To me, the starting point fix for this committee is to say that this is not working the way it needs to work for the committee to do its job effectively. You want to shelve the AI portion altogether for the moment and either go back to the drawing board or say that you're going to conduct two studies or that two committees are going to conduct studies. Perhaps ETHI gets involved. There has to be some sort of mechanism where both of these different pieces of legislation get the kind of attention they deserve.
In terms of the privacy side, very quickly, on this bill, I've highlighted the political party side. I would again emphasize that we will hear, and we do hear, from many of our witnesses that we need to be innovative and can't be out of step with these things. I have to say that you need to recognize that this goes back to the hearings in the 1990s, when we saw the same idea that the sky would fall if you legislated in this way. You saw the same kinds of comments being made in Europe when the GDPR was being developed. The reality is that businesses will adapt. They will adopt those rules and, in many instances, find competitive advantage in doing so.
I would urge you, as you go through the bill, to look for where it can be strengthened and where there are exceptions that are problematic—we heard today about many of them—but, most fundamentally, to recognize that the lens that needs to be brought here is not the scare tactic of “You're going to harm innovation in this country”, because that's just the basic playbook on privacy rules. It's rather how we can ensure that we have the best possible law looking a decade or two ahead.
I'm sorry. I had an urgent call. I had to leave, so like MP Masse, I apologize if somebody covered this.
My first questions will be for Dr. McPhail.
I want to start by saying that we've had some interesting testimony already, and some pulling of teeth out of the to get the amendments he said he would make and then refused to make and then did make as drafts—which I think, in some cases on privacy, are wholly inadequate.
You know, we had Bill , which the Liberal government brought in and which was flawed. They didn't listen to the privacy commissioner of the day and got responses afterwards, when it was tabled, that it was a bad bill. Then the 2021 election came along, so it died. The didn't listen to the testimony and brought in a flawed bill again, and let it sit in the House for a year before we debated it. Then, at the last minute, after four years of battling back and forth, he decides that maybe individual privacy matters, so we'll recognize a fundamental right.
Here's my problem with where the government is, and I think Dr. McPhail and Dr. Scassa outlined some of the reasons. If you had watched my earlier questioning.... While the Liberals are going to put the fundamental right in the “Purpose” section, the most important section, they also say the ability of an organization to use that data is basically of parallel importance in the purpose of the bill.
Then, as you've pointed out, there are issues in proposed section 12 around consent and implied consent. Quite frankly, I thought implied consent was gone a long time ago, in the 1990s, like reverse consent. Apparently, implied consent still exists here, so I can just say, “No problem, Brad. I think you would have consented to this, so I'll use it anyway.”
Then, in proposed subsection 15(5), as pointed out in the testimony we had earlier, there's a huge problem.
Proposed section 18, which I've talked a lot about, basically says, “No problem. Big business can use your data, no matter what the consent is, if it's in their interest to use it, even if it causes harm.”
Then there's proposed section 35. I brought up proposed section 35 to the former privacy commissioner last time. It says that if an organization is using your data for research or statistics, it can use the data however it wants—unidentified, directly. It doesn't say, like PIPEDA used to say, that it is for scholarly work. Those words are no longer there. It says that an organization can use it, and “an organization”, as we know, in this bill is a business.
There's a lot to fix in this bill to put the balance back on the individual. The Liberals have put the balance on big, multinational data-mining companies—Facebook, Google and others—to have the rights to do whatever they want with an individual's data. I am wondering, is it simply removing proposed section 18, the legitimate interest, that puts the balance back, or do you have to make another statement of a higher level in the “Purpose” section? Do you have to get rid of proposed section 35 and replace it with what already existed in PIPEDA that's being removed here?
Maybe I could ask Dr. McPhail and then Dr. Scassa to comment.
:
One of the real challenges with this bill is that it's not just specific, targeted amendments to the law that we had in place before, but it's a complete rewriting and reworking. Many of these provisions have come over verbatim from PIPEDA, but related concepts have been changed or redefined. In some cases, there have been new provisions added and new language used. As a result, there's a lot going on and there are substantive concerns, and then there are the concerns about how the provisions interact with each other. There are concerns about whether there are legacy problems created because things have been carried over but not adapted to other concepts that have been introduced in the bill.
I think all of us who have been studying and looking at this bill, when we were asked to name the top three things, came up with lists of 15 or 20 or 25 things and thought of those as short lists, because there's a lot that you could comment on and address. I think we've been trying to move up to the top things.
I'm very concerned that if this bill is passed, over the course of the next few years we're going to be finding problems and issues in the bill relating to some of these other changes and language adjustments that we didn't address or didn't anticipate. I think this is a concern, particularly in light of Professor Geist's comments that it takes an awfully long time before there's room on the legislative agenda for further amendments.
I think this is one of the fundamental challenges of this bill. I am sorry that this committee cannot fully devote its time to this and has to split the time between this and the AI and data act, which is very seriously problematic as well.
:
Thank you for that question, and also for reading my scholarship, which I appreciate. It's nice to see that it does get read.
I'd actually like to address that question, if I can, in connection with the previous discussion around proposed section 51 of the legislation, which is about how data can get reused with publicly available information, or earlier in the statute with regard to statistical and other kinds of purposes. I think this is a key point of interaction between the CPPA and AIDA—if you want to be operatic about it.
Technology is changing very quickly, and we are seeing a proliferation of extremely powerful technologies—we'll call them AI—that have an interesting business model. The business model is that smaller and smaller companies—those with less capacity than previously in terms of compliance, regulatory affairs and legal—are able to leverage extraordinarily powerful tools and do incredible things with our data.
In the article, I talk about a company that just takes pieces of Amazon's cloud offerings—a voice recognition system, a translation system, a transcription system, data analysis—and creates automatic systems to monitor prisoner phone calls. The uses to which personal data can be put once collected, because of the AI environment we live in and the fact that these technologies are available with the swipe of a credit card to anyone, really change what we are talking about when we talk about privacy.
PIPEDA was enacted 25 years ago. We could not have foreseen this revolutionary change, and now we have this legislation with its two central components—plus, of course, the procedural part that would establish the tribunal. Given that changing landscape, and given what we think is going to happen in the next 15, 20 or 25 years with regard to technological advances that can more efficiently use our data and determine things about us that we didn't know, we need to be extremely thoughtful about both parts of the legislative package.
Again, I believe Dr. McPhail mentioned this. The CPPA is the statute that governs the input of data into AI systems, for better or for worse, and the AIDA will hopefully, with amendment, regulate the uses to which the systems can be put. I think it is a critically important intersection that this committee needs to think about very carefully as technology becomes more powerful and accessible to more and more actors.
:
I think an excellent example that should be on this committee's mind and those of all Canadians is Clearview AI, which is a technology company based in the United States that takes publicly available information—your photos on Facebook, on Tumblr or whatever else—and has developed a very powerful AI-based facial recognition system, using publicly available information, with no consent for the collection or use of that. Of course, all the privacy commissioners who studied this came to the conclusion that this violates basically all existing Canadian privacy laws.
In this case, we have a company that is based in the U.S., so there are questions about how applicable the law is, but I think it demonstrates several things. It demonstrates how some of these exceptions are very susceptible to powerful forms of misuse, and then once you have ingested the data and trained the model, we also need the regulation of how it's used.
One could argue that there are some purposes where publicly accessible information can and should be collected and used. We can talk about that. However, the fact that it's not subject to any oversight—an impact assessment requirement, for example—is problematic when it comes to the CPPA part of the bill, and then we can talk separately about the many weaknesses of AIDA.
:
Many of my colleagues who conceptualize privacy legislation think about its substantive provisions—you can do this and you can do that—and also about privacy law as a process. The idea is to get organizations that collect and use personal data to build governance and accountability around how they do so. It's thinking very carefully about what the uses are, documenting that and having checks.
In my earlier comment, I talked about the overall framework of the legislation and the interacting components. This is where I believe a stronger process orientation in both.... In the CPPA, that's the data protection impact assessment provision, which will interact subsequently with enforcement provisions. When it comes to AIDA, it's making clear some uses that are beyond the pale, which we're going to forbid, and then calibrating the legislation to the level of risk. Right now, AIDA really only governs systems that are high-risk and we don't know what those are because the criteria are not there.
The European law, which I think is weak and could be improved, governs all AI systems presumptively. Even those that are low-risk, where the people who are developing those systems have actually shown that the risk is low, are subject to requirements. I think that's a key safeguard. If we're going to create legislation that is going to be durable for a long time, it needs to assess that entire risk environment and capture it in the legislative package.
:
Thanks for the question.
Technically speaking, the bill itself doesn't address it. What we do have now, as you know, is essentially a memo from the that highlights that they've begun to identify what they intend to include within high-impact systems. As we just heard from Professor Krishnamurthy, the approach is to seek to regulate those and establish a number of regulatory frameworks around those provisions.
There is generally a consensus that it's appropriate and necessary to have rule sets, particularly where there are concerns around bias coming out of AI systems. Think of the use of AI in labour markets for hiring. Think about it in the health sector, in the financial sector and in law enforcement. There are a lot of places where we can easily identify potential risks, potential harms and the like. That's where much of the discussion has been.
Oddly, at least in terms of the list that has been provided, search engine algorithms and social media algorithms are included here as well. Unquestionably, we need algorithmic transparency with respect to these companies. We need to identify ways to deal with the potential harms that are coming out of this, such as anti-competitive behaviour in search results, which is clearly an issue that is raising some significant concerns. However, I find very puzzling the notion that we would treat that as a high-impact system in the same way we would treat law enforcement's use of this or health uses. I'm not aware that anyone else anywhere in the world has seen fit to do that as they work through some of these questions.
I would like to thank my colleague for sharing his time with me.
I'd like to thank all the witnesses for being here.
Dr. Geist, I'd like to talk about the way this bill, formerly Bill , has been presented over the past two years. We know that amendments were requested and that the minister didn't really listen, because the new version is no better. So here we are, 18 months later, and you are having to testify about this bill.
During this whole process, which is set to last several months, we will be meeting with about 100 witnesses. How do you feel about this process, when we haven't had access to the eight amendments put forward by the minister, other than the few lines we've been able to get so far? I'm asking because you talked about this earlier.
I'd like you to speak as a witness. I'm not necessarily asking you to speak on behalf of others, but at the very least I'd like people to understand the process we are currently in, which I consider to be skewed. How can you or any of the witnesses who will appear possibly give your opinions on the content of a bill without access to the amendments?
:
First, I'll say that there were improvements made from the prior bill to this bill, so the government did listen, and some of the things that are in this bill are better than the one that was introduced in the first instance and really didn't go anywhere.
I will also say candidly that this is one of the hardest committee appearances I've had in a very long time, in part because typically, when you're invited—and I have appeared before on omnibus legislation, like, let's say, the budget implementation act—you're invited to speak to a specific kind of provision, and we recognize how that works.
In this instance, we really have two distinct bills, perhaps more than two, but two fundamentally on those two issues, plus, of course, the tribunal. You get five minutes. I recognize that you don't get multiple appearances, and you don't get multiple amounts of time to deal with it. I do think, as I look around, that the witnesses—people like Brenda McPhail, Professor Krishnamurthy, my colleague Teresa Scassa and Professor Bennett—have enormous expertise across the board. There is some correlation here.
This strikes me as not just an inefficient way of dealing with this but, if I'm honest about it, an ineffective way for me to give you the most useful responses I can. There is obviously a limit from a time perspective, a limit in terms of what I can prioritize and the kinds of issues I try to highlight. Something has to give, so to speak. At the end of the day, you can't talk about everything, and this would have been far better, I think, had we divided the two.
:
That was a reference, and we've heard it regularly, to a desire to strike a balance within these provisions, but let's recognize that, once you get outside and start applying these rules, it's not a balanced playing field. Many individual Canadians don't know their privacy rights or, if they do, the challenge of filing a complaint, proceeding with it and facing all the hurdles inherent in the system—and this is true for more than just privacy, but it's certainly true with privacy—is enormous.
We've had a system that, for the last two decades, has left people really disappointed because oftentimes they go through that process and are left with nothing other than a non-binding finding.
I'm glad that this law seeks to remedy that. The enforcement side is important, but we need to realize that, especially as we bring in tougher penalties, which are one of the really good things that I think are in this legislation, it also means that those who are facing the potential for penalties are going to take a much more aggressive approach in terms of how they deal with these various complaints.
It's not a level playing field, which means that you need to embed as much as you can within the legislation to limit the ability of those businesses to take advantage of what is quite clearly a power imbalance between themselves and the individuals.
:
Sure. I want to speak specifically to the importance of this legislation including far-improved enforcement measures. I think the experience over the last couple of decades is that for many.... If all you're left with are non-binding findings, it is very tough to enforce, and it is, I think, tough for the commissioner to ensure that companies themselves are compliant, because they will naturally engage in a bit of a risk analysis, at least under the current law. They'll ask, “What happens if I don't comply?” or “What happens if I push the envelope a little bit?” The answer is that you might face an investigation if someone realizes and files a complaint, and, at worst, at least initially, all you're going to get is a non-binding finding that you then need to try to do something with. We've even seen that. I can recall an incident, I believe it was with Bell, which rejected some of the initial findings of the commissioner, so there was some pressure, and they came back.
We've seen companies take a pretty aggressive approach. I'm glad that the government has seen fit to really improve on the enforcement side, both in terms of the powers the commissioner has and in terms of the penalties that are associated with the legislation. I'm glad it's seen fit to begin to adopt some of the kinds of provisions that we've seen in Europe. It's long overdue, and they're not quite the same, or at least not identical, but, nevertheless, it moves us in the right direction.
Absolutely, I think there are things that improve on our existing legislation. That legislation is more than 20 years old. We need to fix the legislation, but, as I said, the point of emphasis for me, and we've heard it from others as well, is that if you're, in all likelihood, fixing this bill or this legislation only every 10 or 20 years, you can't rest on your laurels and say, “Here, we got a bunch of things right”, when there are all sorts of other things that need fixing.
:
Undoubtedly, the Bill package of amendments is an improvement over the status quo. I think all of us would acknowledge that. However, I'm not sure we should settle for a C+ bill. I think Canadians deserve A+ privacy protection, and amendments to this bill can get us there.
I think that is the spirit in which all of us who are scholars and activists, and who think about privacy and take a big-picture approach to this, think of it. We understand that private information does need to be collected and processed, but that needs to be done in a way that respects what is a very fundamental human right, one that is becoming more important in our digital age over time, as technology becomes more invasive, and it is important to get that right.
Political oxygen is scarce. Again, you have many priorities, many things to legislate, so if this is our shot, we have to do our very best. I think everyone here today has provided lots of really good ideas, and if this committee would embrace them and enact some amendments, this could be a much better bill.
:
You're speaking to the importance of this process, which I think we're all very committed to. I was just trying to get a sense of what the repercussions are going to be if this bill stalls any longer, but I think you've answered that well enough.
Maybe I'll just say quickly a couple of things based on some testimony we've heard.
The AIDA portion of this bill went through over 300 consultations, so I think there has been a lot of consultation that has happened. I'll just put that out there.
In relation to some of the comments made about political parties, the government has been carefully studying the Chief Electoral Officer's recommendation on strengthening privacy measures, and we will have more to say about that in due course. Just for information purposes, I think that's helpful to reassure folks.
Maybe I'll leave it there.
We've had some meaningful discussions. However, I'm wondering whether this committee will really have the will or capacity to move quickly and help get this bill passed. To be honest, I even wonder if the government really wants to get Bill passed at this point, in the context of this legislature.
Having said that, I feel like asking you some questions, Dr. McPhail.
In your publications, you put a great deal of emphasis on developing responsible artificial intelligence and transparent governance of artificial intelligence.
Because the rapid development of technology poses significant data security and privacy challenges, what are your thoughts on establishing a technological sandbox that would isolate emerging technologies in a separate environment, with a view to assessing their compliance with privacy standards before they are made available to the public?
:
Thank you very much for that question.
There are a range of ways in which the AIDA could be improved to facilitate truly responsible AI governance.
The idea of a sandbox is an interesting model. One of the big problems with the ways artificial intelligence tools are currently developed is that they are created and tossed out into the wild, and then we see what happens. A sandbox, to the extent that it would be able to mitigate that kind of risk, is a really interesting concept. I would note that there's absolutely nothing in the current bill that actually fosters the creation of such a sandbox at this time.
Of course, that's only one of the many gaping holes in the truly skeletal structure of AIDA, which, even with some of the potential amendments that have now been floated, still has a long way to go in order to be the effective bill that people across Canada deserve. That is why many of us have actually called for a reset of that bill, rather than a revision. It is so fundamentally flawed that it's hard to imagine how you're going to make it something that truly respects Canadians' rights and truly reassures Canadians that artificial intelligence is a tool that can be used across all sectors of our economy as it is envisioned to be used, safely and with respect for their privacy rights.
We've heard a little bit about reticence risks. I would counter reticence risks, which is a business concept, with social licence. Members of the public are deeply concerned that their information is being collected and used in ways that they don't understand, often without their consent—something the CPPA would facilitate—and for purposes that they disagree with fundamentally.
If we allow our AI act to take data that is collected in that way and leverage it in tools that, again, members of the public find difficult to trust, we are not fostering a vibrant innovation economy in Canada; we are fostering a distrustful society that will not believe its government has its back, and as citizens we will be genuinely reticent to use these technologies in the way we would like to—if we take seriously the idea that this technology, used responsibly, has immense potential to improve our world.
:
Thank you very much, Mr. Lemire.
[English]
I have no more speakers, so I'll yield the floor to myself.
My first question is regarding proposed section 35, which MP Perkins brought up. I'll ask Professor Geist, but if anyone wants to volunteer comments.... Proposed section 35 provides that “An organization may disclose an individual's personal information without their knowledge or consent”. Proposed paragraph 35(c) is, to me, the oversight mechanism for that provision; it requires the organization to inform the commissioner of the disclosure before the information is disclosed.
Is this a sufficient form of oversight for that sort of transmission of personal information?
:
I think it's positive that the disclosure does need to be made to the commissioner when this happens. However, I think we need to question how much oversight the commissioner can exercise.
Again, this is a point where an interaction between the CPPA and AIDA.... What kinds of research or statistical uses might organizations make of the data? It might well be to train AI models. We now know from research that when data is used to train an AI, AI systems can retain that data. I believe the technical name is “imprinting”. If you use ChatGPT and you use it hard, you can probably get an AI system to spit that data back, and that's a big problem.
The mere disclosure to the commissioner that this is happening, without some kind of analysis of what the risks are.... This is why I keep coming back to this data protection impact assessment point. It's so important that this weighing occurs. What are the relevant risks?
We want to incentivize research, of course, but let's remember that Cambridge Analytica was a research organization. It was a research disclosure of data that was the beginning of that terrible privacy scandal. That safeguard alone is not enough. I think we need more.
I'm very interested in research. I'm at an academic institution. I want to promote that. It's a very pro-social thing, and there is a real anti-commons problem with trying to get individual consent every time, but the safeguards need to be stronger.
:
I completely agree that there are problems with this provision.
The one I flagged in my opening comments is that it refers to de-identified information. This was taken verbatim from Bill and put into Bill , but in Bill C-11, “de-identified” was given the definition that is commonly given to anonymized information.
Under Bill , we have two different categories: de-identified and anonymized. Anonymized is the more protected. Now you have a provision that allows de-identified information—which is not anonymized, just de-identified—to be shared, so there has actually been a weakening of proposed section 39 in Bill C-27 from Bill , which shouldn't be the case.
In addition to that, there are no guardrails, as you mentioned, for transparency or for other protections where information is shared for socially beneficial purposes. The ETHI committee held hearings about the PHAC use of mobility data, which is an example of this kind of sharing for socially beneficial purposes.
The purposes may be socially beneficial. They may be justifiable and it may be something we want to do, but unless there is a level of transparency and the potential for some oversight, there isn't going to be trust. I think we risk recreating the same sort of situation where people suddenly discover that their information has been shared with a public sector organization for particular purposes that have been deemed by somebody to be socially beneficial and those people don't know. They haven't been given an option to learn more about it, they haven't been able to opt out and the Privacy Commissioner hasn't been notified or given any opportunity to review.
I think we have to be really careful with proposed section 39, partly because I think it's been transplanted without appropriate changes and partly because it doesn't have the guardrails that are required for that provision.
:
Thank you for that question.
I think it's been mentioned already today, but I will repeat it. Merely looking at high-impact systems, however they are defined—and right now that's unclear in the current amendments—is not enough to fully mitigate the risks of AI, particularly the collective risks to communities and groups. That kind of risk, furthermore, is not covered under the current definition of “harm” in the bill, which is focused strictly on individuals and quantifiable forms of harm. In looking at how you can restructure that better, you could look at the European act, but I would refer you to something closer to home.
The Toronto Police Service recently did an extensive public consultation and developed rules on artificial intelligence for use by their service. They adopted a tiered approach, where there are some systems that are deemed low-risk, but require an assessment in order to determine that they are so. There are some systems that are deemed medium-risk, and there are different sets of precautions and safeguards in order to ensure that those risks are appropriately analyzed and mitigated prior to the technology being used. There are also systems that are considered high-risk, which have the highest level of protections and safeguards. Then there are systems that are considered beyond the pale. Some systems are considered so risky that it is not appropriate to use them in a country governed by the Charter of Rights and Freedoms and where democratic freedoms are valued.
That's a much more tiered and nuanced approach requiring assessments at different stages, and then proportionate safeguards and restrictions, depending on the level of risk, can be much more finely tuned and much more responsive to the genuine concerns that members of the public have about ways that AI systems can be used for them or against them in violation of their beliefs and values.
:
I'm sure it can be found; it's available online.
Thank you very much.
This concludes our meeting.
Thanks to all our witnesses. It's been a very informative discussion.
Thanks in particular to Professor Bennett. We've seen the sun rise in Australia through the blinds behind you. Thanks for waking up so early to meet with us. It's much appreciated.
[Translation]
I'd like to thank the analysts, interpreters, clerk and support staff.
The meeting is adjourned.