:
We'll call to order meeting 146 of the Standing Committee on Access to Information, Privacy and Ethics. Pursuant to Standing Order 108(2), we're continuing our study on the ethical aspects of artificial intelligence and algorithms.
In the first hour today, we have with us, as individuals, witnesses Marc-Antoine Dilhac, professor of philosophy, Université de Montréal, and Christian Sandvig, director of the Center for Ethics, Society, and Computing, University of Michigan.
Also, as we all know, following this hour we will meet in camera, pursuant to Standing Order 108, for a briefing by the law clerk of the House on the power of committees to summon witnesses.
We'll start off with Marc-Antoine for 10 minutes.
Thank you for inviting me to share with you some of the reflections on the ethical issues of artificial intelligence that we set out in Montreal.
I was asked to speak about the Montreal Declaration for a Responsible Development of Artificial Intelligence, which was presented in 2018, and that is the document I will discuss.
First I will outline the context in broad strokes. The technological revolution that is taking place is causing a profound change in the structure of society by automating administrative processes and decisions that affect the lives of our citizens. It also changes the architecture of choice by determining our default options, for instance. And it transforms lifestyles and mentalities through the personalization of recommendations, access to automated health advice online, the planning of activities in real time, forecasting, and so on.
This technological revolution is an unprecedented opportunity, it seems to me, to improve public services, correct injustices and meet the needs of every person and every group. We must seize this opportunity before the digital infrastructure is completely established, leaving us little or no leeway to act.
To do so we must first establish the fundamental ethical principles that will guide the responsible and sustainable development of artificial intelligence and digital technologies. We must then develop standards and appropriate regulations and legislation. In the Montreal Declaration for a Responsible Development of Artificial Intelligence, we proposed an ethical framework for the regulation of the artificial intelligence sector. Although it is not binding, the declaration seeks to guide the standardization, legislation and regulation of AI, or artificial intelligence. In addition, that ethical framework constitutes a basis for human rights in the digital age.
I will quickly explain how we developed that declaration. This may be of interest in the context of discussions about artificial intelligence in our democratic societies. Then I will briefly present its content.
The declaration is first and foremost a document produced via the consultation of various stakeholders. It was an initiative of the University of Montreal, which received support from the Fonds de recherche du Québec and from the Canadian Institute for Advanced Research, or CIFAR, in the rest of Canada. Behind this declaration there was a multidisciplinary inter-university working group from the fields of philosophy, ethics, the social sciences, law, medicine, and of course, computer science. Mr. Yoshua Bengio, for instance, was a member of this panel.
This university group then launched, in February 2018, a citizens' consultation process in order to benefit from the field expertise of citizens and AI stakeholders. It organized over 20 public events and discussion seminars or workshops over eight months, mainly in Quebec but also in Europe, in Paris and Brussels. More than 500 people took part in these workshops in person. The group also organized an online consultation. This consultation process was based on a prospective methodology applied to ethics: our group invited workshop participants to reflect on ethical issues based on prospective scenarios, that is to say, scenarios about the near future of the digital society.
We organized a broad citizen consultation with various stakeholders, rather than consulting experts alone, for several reasons. I will quickly mention three.
The first reason is that AI is being deployed in all societies and concerns everyone. Everyone must be given an opportunity to speak out about its deployment. That is a democratic requirement.
The second reason is that AI raises some complex ethical dilemmas that touch on many values. In a multicultural and diverse society, experts alone cannot make decisions on the ethical dilemmas posed by the spread of artificial intelligence. Although experts may clarify the ethical issues around AI and establish the conditions for a rational debate, they must design solutions in co-operation with citizens and all parties concerned.
The third reason is that only a participative process can sustain the public's trust, which is necessary to the deployment of AI. If we want to earn the population's trust and give it good reasons to trust the actors involved with AI, we have the duty to involve the public in the conversation about AI. That isn't a sufficient condition, but it is a necessary condition to establish trust.
I should add that although industry actors are very important as stakeholders, they must stop trying to write the ethical principles in place of citizens and experts, or the legislation that should be drafted by parliaments. That attitude is very widespread, and it can also undermine the public trust that needs to be fostered.
Let's talk about the content of the declaration. The consultation had a dual objective. First, we wanted to develop the ethical principles and then formulate public policy recommendations.
The result of that participatory process is a very complete declaration that includes 10 fundamental principles, 60 subprinciples or proposals to apply the principles, and 35 public policy recommendations.
The fundamental principles touch on well-being, autonomy, privacy and intimacy, solidarity—a principle not found in other documents—democracy, equity, diversity, responsibility, prudence and sustainable development.
The principles have not been ranked in order of priority. The last principle is no less important than the first, and depending on the circumstances, one principle may be considered more relevant than another. For instance, although privacy is generally considered a matter of human dignity, the privacy principle may be given less weight for medical purposes if two conditions are met: the use of the data must contribute to improving patients' health—the well-being principle—and the collection and use of private data must be subject to individual consent—the autonomy principle.
The declaration is thus not a simple checklist; it also makes it possible to establish standards and checklists by sector of activity. The privacy regime, for instance, will not be the same in every sector; it may vary depending on whether we are talking about the medical sector or the banking sector.
The declaration also constitutes a basis for the development of legal norms, such as legislation.
Like ours, other similar declarations, such as the Declaration of Helsinki on bioethics, are non-binding. Our declaration simply lists the principles that actors in AI development should commit to respecting. For us, the task now is to work on transposing those principles into industry standards that also apply to the deployment of artificial intelligence in public administrations.
We are also working on the transposition of those principles into human rights for the digital society. That is what we are going to try to establish through a citizens' consultation which we hope to conduct throughout Canada.
Thank you.
I appreciate the opportunity to address the committee. To frame my remarks before I turn to the substance of my comments, I want to say that I'm delighted the committee is holding these hearings. We are at a moment of increasingly widespread concern about the harms that might result from these systems, meaning artificial intelligence and algorithms.
I thought what I could offer you in my brief opening remarks would be an assessment of what governments might do in this situation. What I'd like to do with my opening statement is discuss five areas in which I believe there is the most excitement among researchers, practitioners and policy-makers right now. I offer you my assessment of these five areas. Many of them are areas that you at least preliminarily addressed in your earlier reports, but I think I have something to add.
The five areas I'll address are the following: transparency, structural solutions, technical solutions, auditing and the idea of an independent regulatory agency.
I'll start with transparency. By far the most excitement in practice and policy circles right now has to do with algorithmic transparency—the idea that we can achieve justice through transparency. I have to tell you, I'm quite skeptical of this area of work. Many of the problems we worry about in the area of artificial intelligence are simply not amenable to transparency as a solution. One example is that we're often not sure the problems are amenable to individual action, so it is not clear that disclosing anything to individuals would help ameliorate any difficulty.
For example, a problem with a social media platform might require expertise to understand the risk. The idea of disclosing something is in some ways regressive because it demands time and expertise to consider the sometimes quite arcane and complicated intricacies of a system. In addition, it might not be possible to perceive the risk at all from the perspective of the individual.
A basic tenet of transparency is that what is revealed has to be matched to the harm we hope to detect and prevent, and it's just not clear that we know how to make that match.
Sometimes we discuss transparency as a tactic whereby we match what is revealed to an audience that will listen. This is often missing from the current debates on transparency and artificial intelligence: it's not clear who the audience is that we would need to cultivate to understand disclosures of the details of these systems. It seems they would have to be experts, and it seems deconstructing these systems would be quite time-consuming, but we don't know exactly who they would be.
A key problem that's really specific to this domain, and that is sometimes elided in other discussions, is that algorithms are often not valuable without data, and data are often not valuable without algorithms. So if we disclose data, we might completely miss an ethically or societally problematic situation that exists in the algorithm, and vice versa.
The challenge there is that you also have a scale problem if you need both the data and the algorithm. It's often not clear, just in practical terms, how you would manage a disclosure of such magnitude or what you would do once you receive the information. Of course, the data on many systems are also continually updated.
Ultimately, as I think you will have gathered from my remarks, I'm pessimistic about many of the proposals on transparency. In fact, it's important to note that when governments pass transparency requirements in this area, they can often be counterproductive: they create the impression that something has happened, but without some effective mechanism of accountability and monitoring matched to the transparency, it may be that nothing has happened. So it may actually harm things to make them transparent.
An example of a transparency proposal that has generated a lot of excitement recently is dataset labels made somehow equivalent to food labels—nutrition facts for datasets, or something like that. There are some interesting ideas there. There would be a description of biases, or of ingredients that have an unusual provenance—where did the data come from?—and the metaphor is that tainted ingredients produce tainted food. Unfortunately, with the systems we have in AI, it's not a good metaphor, because without some indication of the use or context, it's often not clear what the data are meant to do and how they will affect the world.
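To make the labelling proposal concrete, here is a minimal sketch of what such a dataset label might contain. The field names and example values are illustrative assumptions, not any existing standard.

```python
# Illustrative sketch of a "nutrition facts"-style label for a dataset.
# All field names and example values are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetLabel:
    name: str
    source: str                  # provenance: where the data came from
    collection_period: str
    known_biases: List[str] = field(default_factory=list)
    intended_uses: List[str] = field(default_factory=list)
    prohibited_uses: List[str] = field(default_factory=list)

label = DatasetLabel(
    name="loan-applications-sample",          # hypothetical dataset
    source="partner bank records",
    collection_period="2015-2019",
    known_biases=["urban applicants over-represented"],
    intended_uses=["credit-risk research"],
    prohibited_uses=["re-identification of individuals"],
)
print(label)
```

As the witness notes, a label like this still says nothing about the context in which the data will be used, which is where the food-label metaphor breaks down.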
Another exciting idea in this space of transparency is the right to explanation, which is often discussed. I agree that it's an attractive idea, but it's often not clear that processes are amenable to explanation. Even a relatively simple process—it doesn't have to involve a computer; it could be the process by which you decided to join the House of Commons—might involve many factors, and simply stating a few of them doesn't capture the full complexity of how you made that decision. We find the same thing with computer systems.
The second big area I'll talk about is structural solutions. I think this was covered quite well in the committee's previous report, so I'll just say a couple of things about it.
The idea of a structural solution might be that because there are only a few companies operating in some of these areas, particularly in social media, we might use competition or antitrust policy to break up monopoly power. That, by changing the structure and incentives of the sector, could lead to the amelioration of any harms we foresee with the systems.
I think it is quite promising that if we change the incentives in a sector we could see changes in the harms that we foresee; however, as your report also mentioned, it's often not clear how economies of scale operate in these platforms. Without some quite robust mechanism for interoperability among systems, it's not clear how an alternative that's an upstart in the area of social media or artificial intelligence—or really any area where there is a large repository of data required—would be effective.
I think that one of the most exciting things about this area might be the idea of a public alternative in some sectors. Some people have talked about a public alternative to social media, but it still has this scale problem, this problem of network effects, so I guess we could summarize that area by saying that we are excited about the potential but we don't know exactly how to achieve the structural change.
One example of a structural change that people are excited about, and that is more modest, is the information fiduciary proposal, whereby a government might create a different incentive simply by requiring it. It's a little challenging to imagine, because it does seem that we are most successful with these proposals in domains with strong professionalization, such as doctors or lawyers.
The third area I will discuss is the idea of a technical solution to the problems of AI and algorithms. There's a lot of work currently under way that imagines we can engineer an unbiased, fair or just system and that this is fundamentally a technical problem. While it's true that we can imagine creating systems that are more effective in some ways than the systems we have, ultimately it's not a technical problem.
Some examples that have been put forward in this area include the idea of a seal of approval for systems that meet some sort of standard, perhaps through testing and certification. This is definitely an exciting area, but only a limited set of the problems we face fall into a domain that could be tested systematically and solved technically. Ultimately, these are societal problems, as the previous witness stated.
The fourth area I'll introduce is the idea of auditing, which I saw mentioned only briefly in the committee's last report. The auditing idea is my favourite. It actually comes from work to identify racial discrimination in housing and employment. The idea of an audit is that we send two testers to a landlord at roughly the same time to ask for an apartment. The testers then see whether they get different answers, and if they do, something is wrong.
The exciting thing about this area is that we don't need to know the landlord's mind or to explain it. We simply figure out whether something is wrong. There's a lot that legislatures can do in the area of testing. They can protect third parties that wish to investigate these systems, or they can create processes akin to software's “bug bounties”, with the bounties awarded for finding fairness or justice problems. This is, I think, the most promising avenue for governments to intervene.
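As a rough sketch of how this paired-testing idea carries over to an automated system, the example below sends matched requests that differ only in one attribute and compares the outcomes. The query_system function is a hypothetical placeholder for whatever system is being audited; it is simulated here with a deliberately biased rule purely for illustration.

```python
# Sketch of a paired "audit" test: send matched requests that differ only in
# one attribute and compare outcomes, without looking inside the system.
import random

def query_system(profile):
    # Hypothetical stand-in for the system under audit (e.g. a screening or
    # ad-delivery system). Simulated here with a deliberately biased rule.
    base_rate = 0.6
    penalty = 0.2 if profile["group"] == "B" else 0.0
    return random.random() < (base_rate - penalty)

def paired_audit(n_pairs=1000):
    approvals = {"A": 0, "B": 0}
    for _ in range(n_pairs):
        shared = {"income": 50000, "experience_years": 5}  # identical credentials
        for group in ("A", "B"):
            if query_system({**shared, "group": group}):
                approvals[group] += 1
    # A persistent gap between the two groups signals that something may be
    # wrong, even though we never see the system's internals.
    return {g: count / n_pairs for g, count in approvals.items()}

print(paired_audit())
```

The point of the sketch is that the auditor only needs outcomes, not explanations or source code, which is what makes the approach attractive.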
Finally, I'll conclude by mentioning that there is also talk of a new agency—a judicial or administrative law agency, or a commission—to handle the area of AI. I think this is an interesting idea, but the challenge is that it just defers many of the issues I raised earlier in my remarks. We would often imagine such an agency doing some of the same things I've already discussed, so the question then becomes: what is different about this area that requires processes other than those of the legislature and the standard law-making apparatus—the courts—that we already have? The argument has been made that expertise makes this different, but it's hard to sustain that argument, because we often see plain old legislatures making rules about quite complicated areas.
I'll conclude there. I'm happy to take your questions.
:
Thank you very much for this question, because I think it exposed a weakness in my own explanation.
In the social science literature, they use the term “audit”, but they don't use it in the financial sense. The audit simply describes the process I outlined where two testers, say one black and one white or one woman and one man, ask a landlord for a room or an employer for a job. They call that an audit, but it's quite confusing, because obviously the tax authorities also have an audit and it means something else.
I think the reason audits are exciting to me is that you can have an audit without transparency. Remember that I said you don't get to see the inside of the landlord's brain. That's why the audit is exciting. We can audit platforms like Facebook and Google without transparency by simply protecting third parties like researchers, investigative journalists and civil society organizations like NGOs, who wish to see if there are harms produced by these systems. To do that, they would act like the testers in my example. They would act as users of the systems and then aggregate these data to see if there were patterns that were worrying.
Now, this has some shortcomings. For example, you might have to lie. Auditors lie. The people who go in to ask a landlord for a room don't actually want a room; they're testers working for an NGO or a government agency. So you might have to lie; you might have to waste the landlord's time, but not very much.
Usually on systems like large Internet platforms, it's hard to imagine that an audit would be detectable. However, it's possible that you would provide false information that makes it into the system somehow, because you aren't actually looking for a job; you're just testing. There are definitely downsides.
As I mentioned, you also need some sort of system to continue...after your audit finds that there is a problem. For example, if you found that there was something worrying, you would then need some other mechanism like a judicial proceeding, say, involving some disclosure. You could say that transparency comes later through another process, if you needed to really understand how the system works. However, you might never need to understand that. You might just need to detect that there is a harm and tell the company they have to fix it, and they're the ones that have to worry about how.
This is why I'm excited about auditing, because it gets around the problems of transparency.
:
In the Montreal Declaration for a Responsible Development of Artificial Intelligence, for instance, one of the principles mentioned is prudence. The idea behind that is to state that there are security and reliability criteria for the algorithms, but not only for the algorithms. I would like to expound on the topic because the way in which the algorithm is put in place in a system is important.
There is a whole system around an algorithm, including other algorithms, databases, and their use in a specific context. In the case of a platform, it is easy, since you have an individual user behind a screen. However, when you are talking about aircraft or a complex enterprise, you have to take the entire system into account.
Here, the reliability involved is that of the system and not only that of the algorithm. The algorithm does its work. The issue is to see how the data are being used, what types of decisions are made and what human control there is over those decisions or predictions. From that perspective, it seems extremely important to me that algorithmic systems—not simply the algorithms—be audited. I'm talking about audits in the sense that people really look into the architecture of the system to find its possible shortcomings.
In the case of aircraft, since you mentioned those two recent tragic air catastrophes, we must, for instance, ensure in advance that human beings keep control, even though they may make mistakes. That is not the issue; human beings make mistakes. That is precisely why we could also put algorithmic aids in place. Accepting that to err is human while maintaining human control over the machine is part of what we need to discuss, and it is certainly an essential factor if we are to identify the problems with a given algorithmic system.
:
Well, I think I'm in sympathy with remarks made by my colleague.
What I can add is that it's hard to foresee specific legislation, in part because we don't have a good definition of what we mean by artificial intelligence. It's really a loose term that covers all kinds of different things. Even ideas within it that we're particularly concerned about, like machine learning...that term is itself a loose term that covers a variety of approaches that are quite different.
One of the challenges for us is the very success of computing: it has meant that things that look like artificial intelligence take all kinds of forms and appear in all kinds of domains. I think it's more likely that we will see legislation that specifically addresses a context and a use of technology, as opposed to an overarching principle.
A colleague of mine said that we are at “peak white paper”. We might be near peak principles as well. There are many statements of principles, and these are valuable. However, I think our task is to translate these into specific situations rather than to legislate all of AI, because I just don't know how to do it. There are some exceptions, though. There are a few areas where we might see overarching legislation that's of value.
One example would be that this committee has done some important work on the Cambridge Analytica scandal with its previous report. One of the challenges of that scandal for many countries around the world was that they had taken an approach to communication that said social media platforms essentially do nothing. Many governments, as you know, provide immunity from liability for online platforms or social media companies as conduits.... They did that in a very blanket way. We could say it was a terrible mistake of the United States.
This is an area where you have one piece of legislation that affects a huge swath of activity, because it affects all use of computers to act as intermediaries or conduits between humans. The idea that you would give away freedom from liability seems like a bad one.
There are some areas where there could be broad legislative action, but I think they're rare. It's more likely that we'll see domain-specific approaches.
You have to accept that there is a transparency aspect to this. I'll use an example. In the public sector at the moment—and this is very recent for the Government of Canada—there is a questionnaire that any department that is employing automated decision-making needs to fill out. It's 80-some-odd questions. Based upon the answers to those questions, they're assigned basically a level 1, level 2, level 3 or level 4 in terms of the risk.
Then there are measures that need to be taken, including some additional notice requirements. They have to obtain experts who peer-review the work. In the initial impact assessment itself, there are questions about the purpose of the automated decision-making they intend to employ and the impact it is likely to have on a particular area, such as individual rights, the environment or the economy.
We could argue about the generality of it and whether this could be improved, but it seems, on one hand, to provide a transparency mechanism in that it is requiring a disclosure of the purpose of the algorithm and potentially the inputs to the algorithm, its benefits and costs, and the potential externalities and risks. Then, depending on the outputs to that assessment, there are additional accountability mechanisms that could apply.
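As a rough illustration of how a questionnaire of that kind can map answers onto an impact level, here is a minimal sketch. The questions, weights and thresholds below are invented for illustration; they are not the actual Government of Canada Algorithmic Impact Assessment.

```python
# Illustrative sketch: mapping questionnaire answers to a risk/impact level.
# Questions, scores and thresholds are invented; not the real Canadian AIA.
def impact_level(answers):
    score = 0
    if answers.get("affects_individual_rights"):
        score += 3
    if answers.get("decision_is_fully_automated"):
        score += 2
    if answers.get("uses_personal_data"):
        score += 2
    if answers.get("outcome_is_reversible"):
        score -= 1
    if score >= 6:
        return 4   # highest impact: strongest notice, peer review, oversight
    if score >= 4:
        return 3
    if score >= 2:
        return 2
    return 1       # lowest impact: lightest requirements

example = {
    "affects_individual_rights": True,
    "decision_is_fully_automated": False,
    "uses_personal_data": True,
    "outcome_is_reversible": True,
}
print(impact_level(example))  # -> 3 under these invented thresholds
```

The higher the level, the heavier the accompanying obligations, which is the accountability mechanism described above.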
If you haven't looked at it yet, my question would be this: if and when you do take a look at the Canadian model for the public sector in more detail, is that something you could transpose to the private sector and treat more like a securities filing—that is, to say, “this is going to be required for private sector companies of a certain threshold, and if there is any non-compliance where material terms are excluded purposefully or negligently, then there are penalties for non-compliance”? Would that be sufficient to meet at least a baseline of transparency and accountability generally, before we get into sector-specific regulations?
:
You have to find a balance between the law and the contract. Today, the contractual mode is predominant when it comes to deploying artificial intelligence in applications. When you click the button at the end of Facebook's terms, you either accept them or you don't; you don't have time to read them.
If you look at the content of these contracts, you see that they contain totally unacceptable elements that should not be there, and I'll take Facebook as an example. We examined the conditions of use of Facebook a little. That enterprise gives itself the authority to obtain your information through third-party applications.
Whether or not you are online using Facebook, and whether or not you have registered with it, the enterprise has given itself the right to go and get information about you from other applications. That type of thing is entirely possible through the contract form. If, as a user, you accept that, well, it's too bad for you. That kind of contract should be regulated by law. That is precisely where a balance needs to be found. It isn't easy, but it is the government's job to find that balance between what should be in a contract between a service provider and a user, and what should be in the law.
What is the priority? There are a lot of things that need to be done, but I think that, in order to protect the public, your main and most serious priority should be the use that is made of the data. What matters isn't simply the fact that you like the colour blue; it's that if one day you no longer like it, an algorithm may conclude that you have a mental health problem or a disease you don't know you have, for instance, and that will be much more troublesome.
:
I'll try to keep my answer brief.
Yes, AI does come with unknowns. A modest stance would be to say that we don't quite know where we are headed. If we look at the past, we can find guideposts. You brought up the Industrial Revolution, which led to major advancements. However, the revolution occurred in the early 19th century—two centuries ago—without any groundwork being laid. It gave rise to more than a century of torment, more than a century of transitions and war, not to mention revolutions and, all told, millions of deaths. Government was completely overhauled.
If the Industrial Revolution taught us anything, it's that we need to address the period of transition that comes with technological advancement and new tools. Economist Joseph Schumpeter, whom you're probably familiar with, coined a relevant expression. He talked about the destructive transition, better known as creative destruction, meaning that something is destroyed in order to create new economic activities. Creative destruction can take a long time, and the destructive aspect is not necessarily appealing.
It's important to focus on the conditions for transition so that there are as few losers as possible. AI and the use of algorithms lead to tremendous progress, not just in medicine, but also with respect to repetitive tasks. That is something we should welcome, but we also need to prepare for the revolution.