The Chair:
Good morning, everybody.
I call the meeting to order. Welcome back. I hope everybody took full advantage of the two weeks back home, staying in touch with constituents or finding a few days to do something completely different. Here we are, back at work.
Welcome to the 19th meeting of the House of Commons Standing Committee on Public Safety and National Security. We will start by acknowledging we're meeting on the traditional unceded territory of the Algonquin people.
Today's meeting is taking place in a hybrid format pursuant to the House order of November 25, 2021. Members are attending in person in the room and remotely using the Zoom application. Members and witnesses participating virtually may speak in the official language of their choice. You have the choice at the bottom of your screen of floor, English or French.
Pursuant to Standing Order 108(2) and the motions adopted by the committee on Thursday, February 17, 2022, the committee is resuming its study of the rise of ideologically motivated violent extremism in Canada.
With us today by video conference we have Evan Balgord, executive director of the Canadian Anti-Hate Network; Barbara Perry, director, Ontario Tech University, Centre on Hate, Bias and Extremism; and Dr. Heidi Beirich and Wendy Via, Global Project Against Hate and Extremism.
Welcome to all. Up to five minutes will be given for opening remarks after which we will proceed with rounds of questions.
I now invite Mr. Balgord to make an opening statement of up to five minutes.
Mr. Balgord, the floor is yours.
My name is Evan Balgord. I'm the executive director of the Canadian Anti-Hate Network.
We're an anti-fascist and an anti-racist non-profit organization. Our mandate is to counter, monitor and expose hate-promoting movements, groups and individuals in Canada. We focus on the far right because it gives rise to the most issues of ideologically motivated violent extremism.
Today, I'm going to give a recent history of the far-right movement to explain in part how it escalated to the convoy and the occupation, and then I will describe the threat we are currently facing today.
I started doing this work originally as a journalist about five or six years ago. Today's far-right movement was really born out of a racist anti-Muslim movement. We had hate groups spring up that were emboldened by Trump's election and his rhetoric about Muslims, and then they took to the streets to protest against our Motion No. 103, which was to broadly condemn Islamophobia.
At the time there were groups involved that you might recognize, like the Proud Boys and the Soldiers of Odin, and there were two threats largely emerging out of this space. The first was that they were assaulting people at demonstrations, which could get quite violent. The second was that they were harassing Muslims at their places of worship, which was deeply concerning to those communities.
Of course, Motion No. 103 passed and the sky didn't fall, so they needed a new issue. They rebranded and started calling themselves Yellow Vests Canada. When they did that, they added new grievances: it wasn't just about Muslims anymore, but also about oil and gas and western separation. But make no mistake: if you went into the Facebook groups at the time, you would find regular occurrences of largely anti-Muslim racism—although you'll find every form of racism and anti-Semitism present—and you would also find calls for violence, oftentimes towards politicians.
They also had a convoy, interestingly enough, called United We Roll. A lot of people who organized that convoy would later organize the more successful occupation of Ottawa. You can see how you can draw a straight line from one thing to the other.
This was also around the time we saw the rise of livestreamers and content creators becoming more important than “hate” groups. These are individuals like Pat King, who would go on to have an outsized impact on the occupation.
Their convoy, United We Roll, was a bit of a flop. It did not meet their expectations, and the Yellow Vests Canada movement dwindled, although they were still holding weekly demonstrations in most of our cities. Then came the pandemic, which was like manna from heaven for these groups.
Far-right groups and racist groups are also conspiratorial groups, right? They believe there's this Muslim or Jewish or globalist takeover of Canada or of the world. At their core, they are conspiracists. So, when COVID came around, they very genuinely adopted COVID conspiracy theories. This was also very dangerous and led to awful second-order effects, because regular people were being fed misinformation and disinformation about COVID, and they would go out and find groups of like-minded people. Who were those groups of like-minded people? Well, they were started by our right-wing extremists here. We had more normal people coming into contact with our far-right movement. That was bad, because a lot of those people got radicalized, and we started to have marches in the hundreds and the thousands in our cities to protest things like public health measures. That all culminated with the convoy, and we saw that they were now capable of occupying Ottawa.
One of the things I want to point out moving forward is those people haven't gone anywhere. They're back to their regularly scheduled programming. They are still holding their large demonstrations in various cities and some of them are returning this weekend to Ottawa as part of a Rolling Thunder convoy, which will not be as significant, but the point is that this just continues and it grows.
I want to describe two threats we're facing today. We are talking about ideologically motivated violent extremism—extremism that turns violent or criminal. That's a lot of what we're talking about here. We face threats like that of a terrorist attack or a mass violence incident. We also face the threat posed by this movement of convoy-supporting COVID conspiracists. They're not all racists, and they're not all violent. Not all the people on January 6 were, either, but there were groups in their midst that decided they were going to try to do a coup, and they swept up a lot of the other people there.
The same thing is kind of happening here. We have more extreme elements of our far-right movement than others, but as a whole they are becoming a threat to our democracy. The goal of the whole thing is an undemocratic overthrow of the government so that they can take power and persecute their perceived political enemies. That would mean putting doctors, journalists and politicians on trial and perhaps executing them. That's what a lot of them want to do.
That's a pretty significant threat. That's the ecosystem threat, right? We can't just talk about ideologically motivated violent extremists in a vacuum—
Dr. Barbara Perry:
Thank you very much, and thanks for the opportunity.
Evan, thanks for providing a good segue for me. I really want to emphasize the lessons we can learn about the far-right movement more broadly from their engagement in the convoys and the occupation.
There are really four points I want to stress here. The first is what it tells us about their organizational capacity. We saw a capacity to organize on a large scale, in a Canadian context, unlike anything we've seen before, largely facilitated by both encrypted and unencrypted social media platforms. That theme will run through what I say today, because social media was also the venue through which they displayed the real adeptness they have at exploiting broader popular concerns, grievances and anxieties and weaving them into their own narratives. There are also the implications of social media platforms for the deployment and, disturbingly, the ready acceptance of the sorts of disinformation and conspiracy theories that we see underlying much of far-right activism, particularly in the context of the convoy and COVID more broadly, as Evan suggested.
The convoy and the occupation also tell us a great deal about the risks and threats associated with the right-wing movement in Canada. Obviously we have the threats to public safety, as we saw in Ottawa in particular, not just in terms of the disruption of the whole downtown community but also in terms of the harassment, the hate crime, the threats, the intimidation of people of colour or LGBTQ+ people or even people who were wearing masks in the downtown area.
We see threats to national security. Obviously the fact that they occupied that space so close to Parliament Hill is paramount, but also very important to keep in mind is the threat to border security that we saw in the border blockades, especially with the discovery of the arms and weapons in Coutts associated with far-right groups.
On dangers to democracy, there's obviously the threat that Evan referred to in terms of attempts to overthrow a democratic government, but even more broadly, the far right in this context is also bent on hastening the erosion of trust in an array of key institutions—certainly the state, but also science, media, education and academe.
The next and final key point in terms of the pattern is the failure of law enforcement in this context to properly evaluate, prepare for and understand the risks associated with the far right and, more broadly again, their failure to intervene and counter right-wing extremism generally. In fact, in the convoy and in other contexts, we've seen sympathy for the far right within law enforcement, as with the fundraising donations to the convoy that came from law enforcement officers. We've also seen social media platforms and pages devoted to law enforcement sharing some of these same conspiracy theories and this disinformation.
The last point I want to make is about what the points of intervention are, given what I've identified here as some of the key lessons. The first is the need to enhance not just critical digital literacy but civic literacy as well. There was an awful lot of misinformation and misunderstanding about the nature of the charter, about the role of the Governor General, about how governments operate generally. Both of those pieces are important.
Another point of intervention is around the law enforcement/intelligence community enhancing their awareness, their capacity and their willingness to intervene around right-wing extremism.
Finally, there is a need to create opportunities and incentives to engage in civil dialogue across partisan lines, whether we are talking about the general public or about politicians.
I will end there. Thank you.
Ms. Wendy Via:
Good morning, committee members. Thank you for the honour of inviting us to speak today on the important issue of ideologically motivated extremism.
My name is Wendy Via, and I'm joined by my colleague, Heidi Beirich. We co-founded the Global Project Against Hate and Extremism, an American organization that counters ideologically motivated extremism and promotes human rights that support flourishing, inclusive democracies. We particularly focus on the transnational nature of extremist movements and the export of hate and extremism from the United States.
The United States, Canada and many countries are currently awash in hate speech and conspiracy theories like QAnon, anti-vax, election disinformation and “the great replacement” spreading on poorly moderated social media. It is indisputable that social media companies are major drivers of the growth of global hate and extremist movements, conspiracy theories, the radicalization of individuals and organization of potentially violent events.
The consequence of this spread is the polarization of our societies and violence in the form of rising hate crimes and terrorist attacks. The tragedies of the Quebec City mosque shooting, the Toronto van attack and others, such as the shootings at the Tree of Life synagogue in Pittsburgh and the mosques in Christchurch, are a horrific reminder of the toll that hate and online radicalization can take. These movements also manifest in direct threats to our democracies, as we've seen so clearly with the January 6 insurrection and the trucker occupation that held Ottawa hostage for weeks.
Canada and the United States have long had similar and intertwined white supremacist, anti-government and other hate movements. In recent years we have seen American hate and militia organizations, including the neo-Nazi The Base, the anti-government Three Percenters, the misogynistic and racist Proud Boys and others establish themselves on both sides of the border. Because these organizations attempt to infiltrate key institutions, both countries are facing the issue of extremists in the military and the police, though to varying degrees.
In the U.S. and other countries, political figures and media influencers with tremendous online reach, and in particular, former president Donald Trump, have legitimized hate and other extremist ideas, injecting them into the mainstream political discourse and legitimizing bigoted and fringe ideas across borders. Research shows that Trump's campaign and politics galvanized Canadian white supremacist ideologies and movements, and his endorsement of the trucker convoy, along with media personalities like Tucker Carlson, undoubtedly contributed to the influx of American donations to the trucker siege.
In addition to the key role of social media, a more systemic driver of extremism is the growing demographic diversity in both countries, which, along with histories of white supremacy, though different in each country, fuels nostalgic arguments that a more successful white past is being erased and intentionally reconstituted with communities who do not belong. The movements pushing these ideas will likely become stronger in the years to come, as they have a historical foundation and sympathy that other extremist movements will never achieve. It is for this reason that countering them is of the utmost importance.
If I may, I'll offer some recommendations here with a broader list in our written testimony.
This growing problem will not be solved without taking on the online social media and financial spaces. Absent a domestic law with teeth, tech companies will not reform their practices. Importantly, the tech companies must be held to account in all languages, not just American English. A sovereign democracy cannot thrive when there are massive ungovernable spaces. Most research into the impact of social media on our democracies and societies is generated by civil society and focuses on the U.S.
Independent research into online harms should be funded. We should improve cross-border co-operation, particularly in terms of transnational travel and the sharing of intelligence and threat assessments. We should fully implement the Christchurch Call commitments, of which Canada was an original signatory. We should put in place and enforce strong policies against extremism in the military and police forces, from recruitment to active duty to veteran status.
Finally, extremist movements are emboldened by the endorsement of their ideas by influential people. They can also be diminished by public rejection—by publicly and forcefully condemning hate, extremism and disinformation whenever possible.
I hope these suggestions will be helpful.
Thank you.
I appreciate the honesty, Mr. Balgord. It's important. I'm not diminishing some of the work that you do.
I come from an area where last summer a hundred-year-old church burned to the ground and dozens of people had to be evacuated from an apartment building close by, which nearly went up in flames itself and could have killed dozens of people, but you just don't hear it talked about in this country. I understand that it's not your organization's mandate to talk about these things. As you've said, you're clearly focused on the far right.
During the convoy protests, your executive director—I believe that's his position—Bernie Farber, posted a tweet with a photo of a vile anti-Semitic flyer and claimed that this was a picture of the flyer being circulated in Ottawa among the trucker protesters. Upon further examination, it was proven that this exact same photo was taken in Miami, Florida, weeks before the protests ever began.
Can you explain why the executive director of your organization was claiming that this photo was being circulated at the protests when, in fact, it was a photo that was from a completely different country weeks before the protests?
Mr. Evan Balgord:
Thank you very much for giving me a way to address this.
First off, that was our chair. I'm the executive director. I was privy to the email chain that led to him tweeting that out. What had occurred was that somebody in Ottawa had reached out and said that they saw that flyer there, and they provided the photo. At that moment, Bernie was not aware that the photo itself was taken from an American source.
What the person was trying to communicate to our organization was that they had seen the same flyer, but they attached the photo from the States. It was our error not to communicate more clearly where the photo itself originated. What the person was reporting to us was that they had seen the same flyer in Ottawa.
Thank you.
Let's move on here. We're talking about the Ottawa protest, but I appreciate your clarification on that matter.
You've raised some pretty disturbing allegations about the potential for a terrorist attack or a mass violence event. I think we can all be thankful that this didn't happen during the protests. The fact that we didn't see a terrorist attack or a mass violence event sort of undermines the argument that was being made by many, including by organizations such as yours, that this protest had violent motivations and a desire to commit violence.
You've connected this to the United We Roll protest, which came to Ottawa in 2019, I believe.... A lot of people from western Canada were concerned about the carbon tax and pipelines being blocked.... How do you draw this connection between white supremacy and fascism and people who are concerned about protecting their livelihoods?
Dr. Barbara Perry:
I have been studying far-right extremism in the Canadian context since about 2012-13. I had done a little work in this space previously in the U.S., in the mid-nineties or so, but I have been working more broadly in the area of hate studies for about 30 years now.
In 2015, we published a report coming from a study that was funded by Public Safety Canada, which was really the first comprehensive academic approach to understanding right-wing extremism in Canada. We have just finished another three-year study, which is an update of that.
What we have found in that report in 2015—and I can share it or the subsequent book that came out of that—was a very conservative estimate of about 100 active groups across Canada. We could document through open-source data that there were over 100 incidents of violence of some sort associated with the far-right in Canada. Just to put that in context, during the same period of time there were about eight incidents of Islamist-inspired extremism, which is what the focus was at the time.
What else did we find there? In the update, we have found, in the last couple of years in particular, over 300 active groups associated with the far right and, just in the last seven years or so, we have now seen 26 murders—24 of them in mass murders—motivated by some variant of right-wing extremism.
What else are we finding? One of the things alluded to earlier was the shifting demographics within the movement. As we saw with the convoy, it is a much older demographic than what we were seeing previously, when it was not wholly but predominantly a youth movement—skinheads, neo-Nazis, those traditional sorts of groups. We're now seeing an older, better-educated demographic being brought into the movement as well. Certainly, it is a movement that is much more facile with social media, ready to use it in very ironic, as well as very open, ways to share its narratives.
I thank the witnesses for joining us.
I will address Mr. Balgord.
In an article from September about protests during the election campaign, you said that protest groups were organizing their activities through online groups, including on platforms like Facebook. I assume something similar happened with the “freedom convoy”. You talked a bit about that earlier.
Do you think platforms like Facebook are doing enough with their service policies to counter those activities? Do you think they are helping hate groups get organized?
Mr. Evan Balgord:
Through all of the whistle-blower data that has come out and from the whistle-blowers themselves who have told the story of what happens behind the scenes at Facebook, we've seen pretty conclusively that they identify problems like polarization and hate speech. When they propose solutions, they're told by their executives not to do them because it would hurt engagement or they discover that some of the things they do to increase engagement are in fact driving polarization. They move forward with those decisions because engagement is money for them. Platforms like Facebook and Twitter have more of a built-in incentive to drive engagement at all costs.
No, they are not doing enough to combat things. I know that right now the government is looking at an online safety piece of legislation. That would have been very effective five years ago. It's still going to be effective and it's important because when people get involved in ideologically motivated violent extremism or far-right organizing or COVID conspiracies, they don't start doing that on the weird fringe platforms like Telegram. They start on the Facebooks and the Twitters of the world.
If we can stop people from connecting with that misinformation and disinformation, we can help a lot of families who are dealing with their grandmother, their uncle or their aunt who's been swept up into this alternate reality that's causing a lot of trouble.
There's still a lot that we can accomplish with the platforms, but we need to change the incentives. We need to make it so that they act responsibly.
They've had 10 years to figure out how to do it themselves. Unfortunately, nobody really likes the idea of government having to step in and tell an industry what to do. Everybody bristles at that here and there, but we have to because, quite frankly, the status quo is untenable.
Mr. Evan Balgord:
As far as I can tell, none of the legislation that has tried to address online harms has made a difference to people who are victimized by it. I mean, platforms may point and say they did this and they did that, but I dare say that if you ask people who use these platforms, they will not perceive that there's much of a difference in their safety or how they perceive these platforms.
Of course, we run into opposition to doing anything about online harms, so I think we should be moving forward with a different model. I don't think we should have a complicated model that looks at censoring or taking down individual pieces of content. I think that we should have an ombudsperson model.
The basic idea is that you have an ombudsperson that is a well-resourced regulator with investigatory powers, so they can kick down the door of Facebook and take its hard drives. I'm being a little hyperbolic here, but we know that these platforms hide data from us and lie to journalists, so we do need broad investigatory powers to investigate them.
I believe that this ombudsperson should be able to issue recommendations to the platforms about their algorithms and things like that. That would be very similar to what their own employees want to do behind the scenes: if they learn that something drives polarization and negative engagement and is leading to hate speech, they might suggest doing something else instead, or putting a stopgap measure in place.
If we had an ombudsperson who could look at what is happening under the hood and make recommendations to the platforms, that's the direction we want to go in. Where the platforms do not take those recommendations, we feel the ombudsperson should be able to apply to a court. The court can weigh what the ombudsperson is recommending against all the charter implications. If the court decides that it's a good measure and it's charter-consistent, then the court can make it an order. Then, if the platforms don't follow it, they could face a big fine.
This is a much more flexible way to move forward, because it means that any particular arguments we might have about free speech versus hate speech, etc., are taken out of the hands of government and instead happen with a bunch of intervenors in front of a court and a judge. That's how we would move forward, because it's flexible: we can put it in place now and defer some of those arguments to a court, where they belong.
Thank you to all of our witnesses who are aiding our committee in this study.
Mr. Balgord, maybe I'll start with you. On the subject of Elon Musk, I was reading some of his tweets. In one that stuck out to me, he likened Twitter to the next iteration of the “public town square” and said that in this digital space it was important to protect people's ability to voice their opinions and to enshrine free speech.
I guess the main issue with social media on a variety of platforms is that it allows users to cloak themselves in anonymity. For example, I can't just go out among the public and start shouting obscenities and directing hate speech against identifiable groups, because I'll be held liable. People will see who I am. I can be held to account for my actions. But the cloak of anonymity is very prevalent on many social media platforms. There have also been problems with fake accounts being set up, and with troll factories, bot farms and so on.
If social media companies to date have been wildly unsuccessful at tackling that problem, could you perhaps offer some comments on whether or not you foresee the role of the ombudsperson that you mentioned tackling that issue? Perhaps you could expand a bit more on that theme.
Mr. Evan Balgord:
On the issue of anonymity, I think you are entirely correct in how you've kind of diagnosed it. Our public square is more socially located and more democratic, in a sense. If you go spout off in your local Starbucks or Tim Hortons or whatever, you might be held socially responsible for it, whereas you are not online. Of course, now we have the social media companies that are very much not a democratic space. They can make unilateral decisions over who gets to speak, and how and when.
On the issue of anonymity, I do very much take your point that people are more likely to troll and be abusive anonymously. However, we have to look at the case of perhaps a trans teenager whose parents are not supportive and they're looking to connect with a community online. Anonymity for them is safety, as it is for a woman who is perhaps fleeing a domestic violence situation who wants to engage with a social network online. In some cases, anonymity is absolutely the most valuable thing to people who are vulnerable. In the case of individuals overseas as well, where they face very real and very direct persecution by the government, anonymity is the only thing that keeps them safe.
So I don't think making the Internet not anonymous is necessarily the way to go, because there are all these cases where it would have unintended consequences for people who do need safety.
I appreciate your raising that point. I think that is a very fair consideration. Perhaps the focus should be exclusively on the content.
I'd like to turn my next question to the Global Project Against Hate and Extremism.
In your opening remarks, you were talking about the fact that we do have to take social media companies on “with teeth”. In previous testimony from other witnesses in front of this committee, we heard a little bit about how far right and extremist groups are using different avenues to monetize their hate. For example, they may be using platforms like Amazon and Etsy to sell paraphernalia and raise funds that way.
With the work that your organization does, is there anything on that particular subject you can inform our committee on that would help us produce some recommendations to the federal government?
Maybe I'll direct the one question I have to Ms. Perry.
A lot of the things that we're contemplating, policy-wise, are essentially reactive in nature, so I'm more interested in the proactive end of the spectrum. How can we properly address people's legitimate grievances and their frustrations with the way things in life are going right now?
Also, with respect to our youth, we know education is largely within the provincial domain, but do you have any recommendations that our committee could make about what could be done at the federal level to ensure that young Canadians are aware of the narratives used by radical and extremist groups? Do you have any strategies we can use at the federal level to counteract that?
The Chair:
I call the meeting back to order.
Colleagues, we're ready to resume with our second panel. With us this second hour, we have Ilan Kogan, data scientist at Klackle. From Meta Platforms, we have Rachel Curran, public policy manager of Meta Canada, and David Tessler, public policy manager. From Twitter Inc., we have Michele Austin, director of public policy for the U.S. and Canada.
I would like to invite our guests to give an opening statement of up to five minutes. I will begin with Mr. Kogan.
Mr. Kogan, the floor is yours.
Mr. Ilan Kogan:
Mr. Chair, members of the committee, I would like to thank you for inviting me today to discuss artificial intelligence and social media regulation in Canada.
I begin with an oft-quoted observation: “For every complex problem, there is a solution that is clear, simple and wrong.”
Canada is not the first country to consider how best to keep the Internet safe. In 2020, for instance, the French Parliament adopted the Avia law, a bill very similar to the online harms legislation that the Canadian government considered last year. The bill required social media platforms to remove “clearly illegal content”, including hate speech, from their platforms. Under threat of significant monetary penalties, service providers had to remove hate speech within 24 hours of notification. Remarkably, France's constitutional court struck the law down. The court held that it overly burdened free expression.
However, France's hate speech laws are far stricter than Canada's. Why did this seemingly minor extension of hate speech law to the online sphere cross the constitutional line? The answer is what human rights scholars call “collateral censorship”. Collateral censorship is the phenomenon where if a social media company is punished for its users' speech, the platform will overcensor. Where there's even a small possibility that speech is unlawful, the intermediary will err on the side of caution, censoring speech, because the cost of failing to remove unlawful content is too high. France's constitutional court was unwilling to accept the law's restrictive impact on legal expression.
The risk of collateral censorship depends on how difficult it is for a platform to distinguish legal from illegal content. Some categories of illegal content are easier to identify than others. Due to scale, most content moderation is done using artificial intelligence systems. Identifying child pornography is relatively easy for such a system; identifying hate speech is not.
Consider that over 500 million tweets are posted on Twitter every day. Many seemingly hateful tweets are actually counter-speech, news reporting or art. Artificial intelligence systems cannot tell these categories apart. Human reviewers cannot accurately make these assessments in mere seconds either. Because Facebook instructs moderators to err on the side of removal, the speech of marginalized groups may, counterintuitively, be censored online by these good-faith efforts to protect them. That is why so many marginalized communities objected to the proposed online harms legislation that was unveiled last year.
Let me share an example from my time working at the Oversight Board, Facebook's content moderation supreme court. In August 2021, following the tragic discovery of unmarked graves in Kamloops, British Columbia, a Facebook user posted a picture of art with the title “Kill the Indian, Save the Man”, and an associated description. Without any user complaints, two of Facebook's automated systems identified the content as potentially violating Facebook's policies on hate speech. A human reviewer in the Asia-Pacific region then determined that the content was prohibited and removed it. The user appealed. A second human reviewer reached the same conclusion as the first.
To an algorithm, this sounds like success, but it is not. The post was made by a member of the Canadian indigenous community. It included text that stated the user's sole purpose was to bring awareness to one of the darkest periods in Canadian history. This was not hate speech; it was counter-speech. Facebook got it wrong, four times.
Of course, one should not set policy by anecdote. Indeed, the risk of collateral censorship might not necessarily preclude regulation under the charter. To determine whether limits on free expression are reasonable, the appropriate question to ask is, for each category of harmful content, such as child pornography, hate speech or terrorist materials, how often do these platforms make moderation errors?
Although most human rights scholars believe that collateral censorship is a very significant problem, social media platforms refuse to share their data. Therefore, the path forward is a focus on transparency and due process, not outcomes: independent audits; accuracy statistics; and a right to meaningful review and appeal, both for users and complainants.
This is the path that the European Union is now taking and the path that the Canadian government should take as well.
Thank you.
Thank you for the invitation to appear before the committee today to talk about the important issue of ideologically motivated violent extremism in Canada.
My name is David Tessler and I am the public policy manager on Meta's counterterrorism and dangerous organizations and individuals team.
With me today is Rachel Curran, public policy manager for Canada.
Meta invests billions of dollars each year in people and technology to keep our platform safe. We have tripled the number of people working globally on safety and security to more than 40,000. We continue to refine our policies based on direct feedback from experts and impacted communities to address new risks as they emerge. We're a pioneer in artificial intelligence technology for removing harmful content at scale, which enables us to remove the vast majority of terrorism- and organized-hate-related content before any users report it.
Our policies around platform content are contained in our community standards, which outline what is and what is not allowed on our platforms. The most relevant sections for this discussion are entitled “violence and incitement” and “dangerous individuals and organizations”.
With respect to violence and incitement, we aim to prevent potential offline harm that may be related to content on Facebook, so we remove language that incites or facilitates serious violence. We remove content, disable accounts and work with law enforcement when we believe there's a genuine risk of physical harm or direct threats to public safety.
We also do not allow any organizations or individuals who proclaim a violent mission or who are engaged in violence to have a presence on our platforms. We follow an extensive process to determine which organizations and individuals meet our thresholds of “dangerous”, and we have worked with a number of different academics and organizations around the world, including here in Canada, to refine this process.
The “dangerous” organizations and individuals we focus on include those involved in terrorist activities, organized hate, mass or serial murder, human trafficking, and organized violence or criminal activity. Our work is ongoing. We are constantly evaluating individuals and groups against this policy as they are brought to our attention. We use a combination of technology, reports from our community and human review to enforce our policies. We proactively look for and review reports of prohibited content and remove it in line with our community standards.
Enforcement of our policies is not perfect, but we're getting better by the month. We report our efforts and results quarterly and publicly in our community standards enforcement reports.
The second important point, beyond noting that these standards exist, is that we are always working to evolve our policies in response to stakeholder input and current real-world contexts. Our content policy team works with subject matter experts from across Canada and around the world who are dedicated to following trends across a spectrum of issues, including hate speech and organized hate.
We also regularly team up with other companies, governments and NGOs, because we know that those seeking to abuse digital platforms attempt to do so not solely on our apps. For instance, in 2017, along with YouTube, Microsoft and Twitter, we launched the Global Internet Forum to Counter Terrorism, GIFCT. The forum, which is now an independent non-profit, brings together the technology industry, government, civil society and academia to foster collaboration and information sharing to counter terrorist and violent extremist activity online.
Now I'll turn it over to my colleague, Rachel.
In Canada, in 2020, in partnership with Ontario Tech University Centre on Hate, Bias and Extremism, led by Dr. Perry, who you just heard from, we launched the Global Network Against Hate. This five-year program will help advance the centre's work and research on violent extremism based on ethnic, racial, gender and other forms of prejudice, including how it spreads and how to stop it.
The Global Network Against Hate also facilitates global partnerships and knowledge sharing focused on researching, understanding and preventing hate, bias and extremism online and off. Our partnerships with the academics and experts who study organized hate groups and figures help us stay ahead of trends and activities among extremist groups. Our experts are able to share information with us on how these organizations are adapting to social media and to give us feedback on how we might better tackle them.
Based on this feedback, in Canada we've designated several Canadian hate organizations and figures in recent years, including Faith Goldy, Kevin Goudreau, the Canadian Nationalist Front, Aryan Strikeforce, Wolves of Odin and Soldiers of Odin. They've all been banned from having any further presence on Facebook and Instagram.
We also remove affiliate representation for these entities, including linked pages and groups. Recent removals include Alexis Cossette-Trudel, Atalante Québec and Radio-Québec—
Ms. Michele Austin:
Thank you very much, Chair and members of the committee, for the opportunity to be here, and thank you for your service.
I'd also like to acknowledge the political staff who are in the room and thank them for their service and support.
Twitter's purpose is to serve the public conversation. People from around the world come together on Twitter in an open and free exchange of ideas and issues they care about. Twitter is committed to improving the collective health, openness and civility of public conversation on our platform. We do this work with the recognition that freedom of expression and safety are interconnected.
Twitter approaches issues such as terrorism, violent extremism and violent organizations through a combination of interventions, including the development and enforcement of our rules, product solutions and work with external partners such as government, civil society and academia.
For my opening remarks, I will focus on our work with partners and, in particular, the Government of Canada.
Twitter shares the Government of Canada's view that online safety is a shared responsibility. Digital service providers, governments, law enforcement, digital platforms, network service providers, non-government organizations and citizens all play an important role in protecting communities from harmful content online. Twitter is grateful for the Government of Canada's willingness to convene honest and sometimes difficult conversations through venues such as the Christchurch call to action and organizations such as Five Eyes.
Through our joint work on the Global Internet Forum to Counter Terrorism, commonly known as GIFCT, which my colleague Mr. Tessler referred to in his remarks, we have made real progress across a wide range of issues, including establishing GIFCT as an independent, non-government organization; building out GIFCT's resources and impact; forming the independent advisory committee and working groups; and implementing a step change on how we respond to crisis events around the world.
In Canada, the Anti-terrorism Act and the Criminal Code of Canada provide measures for the Government of Canada to identify and publicly list known terrorist and violent extremist organizations. Twitter carefully monitors the Government of Canada's list, as well as other lists from governments around the world. The last time that list was updated was on June 25, 2021. We also collaborate and co-operate with law enforcement entities when appropriate and in accordance with legal processes. I also want to acknowledge the regular and timely dialogue I have with officials across government working on domestic issues related to these files.
In addition to governments, Twitter partners with non-government organizations around the world to help inform our work and to counter online extremist content. For example, we partner closely with Tech Against Terrorism, the global NGO, to share information, knowledge and best practices. We recently participated alongside the Government of Canada in the Global Counterterrorism Forum's workshop to develop a tool kit to focus on countering racially motivated violent extremism.
Our approach is not stagnant. We aggressively fight online violent extremist activity and have invested heavily in technology and tools to enforce our policies. As the nature of these threats has changed, so has our approach to tackling this behaviour. As an open platform for free expression, Twitter has always sought to balance the enforcement of our own rules covering prohibited behaviour, and the legitimate needs of law enforcement, with the ability of people to express their views freely on Twitter, including views that some people may disagree with or find offensive.
I would like to end my testimony with a quote from Canada's Global Affairs Minister on March 2 of this year. She said:
More than ever, social media platforms are powerful tools of information. They play a key role in the health of democracies and global stability. Social media platforms play an important role in the fight against disinformation....
Twitter agrees.
I'm happy to answer any questions you might have on policies, policy enforcement, product solutions and the ways in which we're working to protect the safety of the conversation on Twitter.
Thank you.
Thank you to the witnesses for being here. My first question is for Twitter.
Today in committee, as you may have heard, we talked a lot about right-wing and left-wing opinions shared online, and the harmful content coming from extreme elements of both. I'm sure you're also aware that Conservatives sometimes comment on how they feel unfairly targeted by social media censorship.
In that same vein, in your joint statement with Elon Musk, he explained his motivation for wanting to buy Twitter and take it private. He said, “Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are [being] debated”. Elon Musk, as you know, has also said he wants to enhance Twitter with new features, “making the algorithms open source to increase [user] trust, defeating the spam bots, and authenticating all [human users]”.
Do you feel that Mr. Musk can achieve these goals, and do you feel that will ensure all sides of the political spectrum, so to speak, including Conservatives, are better protected to share their opinions freely on your platform?
My next question is for Facebook.
Thank you, Ms. Curran, for being here today.
I want to talk a bit about what happened in Australia. As you know, the Australian government brought forward legislation that would force Facebook to pay publishers of news media if Facebook hosted, or users shared, news content. In protest against that law, Facebook retaliated: it banned news links from being shared by Facebook users in Australia and shut down Australian news pages hosted on the Facebook platform, cutting off the ability to share news publications entirely. An agreement was reached shortly afterwards, but Facebook did take that extraordinary step of banning the sharing of news publications.
We know that the Liberal government has brought forward a similar bill to what the Australian government did. Bill C-18 has some similarities. It's called, in short, the online news act. You may be familiar with it. There's also Bill C-11, which aims to control what Canadians see when they open social media apps such as Facebook, Twitter and the like.
Ms. Curran, is it reasonable to believe that Facebook could do the same thing in Canada as it did in Australia and prohibit the sharing of news, should the Liberal government move forward with bills such as Bill C-18 or other iterations of it?
Ms. Rachel Curran:
The short answer is that we're still evaluating that legislation. We didn't know the scope of it until it was tabled very recently.
We have some pretty serious concerns. Our view is that when publishers place links to their content on our platforms, they receive significant value from doing that. We don't actually control when or how or to what degree they post news material on our platforms.
I will say this. We're committed to fuelling innovative solutions for the news industry and to the sustainability of the news industry in Canada. That's why we've entered into a number of partnerships to support that kind of work.
I can't comment definitively on our future action with respect to that bill specifically, since we're still evaluating it.
Ms. Michele Austin:
I agree with Rachel that we're still in the early stages of analysis.
There are a couple of things to say with regard to Bill C-18.
Twitter, like the news industry, does not make a lot of money on news. In fact, we have nobody in Canada who is selling news content. If you see news advertised on Twitter, it is largely self-serve advertising; the news organizations have chosen to advertise on their own.
We are also what's called a “closed” platform. When you link to news on Twitter, you have to leave the site. That is not necessarily the case with the other platforms.
The thing we're most concerned about is scope and transparency. The question is whether or not Twitter is scoped in under that bill. That is very unclear. I understand that there will be quite an extensive Governor in Council process coming after the bill is passed.
I am more than happy to meet with anybody to discuss the content of Bill C-18.
Ms. Rachel Curran:
Thank you for that question. It's actually a very good question.
On this question of algorithms, what you see in your newsfeed, including advertising, depends on a number of what we call “signals”. Those signals include what you have liked before, what kinds of accounts you follow, what you have indicated your particular interests are, and any information that you have given us about your location, who you are and your demographic information. Those all act to prioritize, or not, particular information in your newsfeed. That will determine what you see when you open it up. It's personalized for each user.
I thank the witnesses for joining us.
I will first go to Ms. Austin, from Twitter.
A little earlier, we discussed Mr. Musk's purchase of Twitter with the previous panel. Mr. Musk carried out two surveys in March asking users whether they felt that Twitter's algorithm should be open source and whether freedom of expression was respected. Those surveyed answered yes to the first question and no to the second. Of course, Mr. Musk accused the platform of applying censorship.
Do you think Mr. Musk's taking over Twitter may lead to changes in some of the platform's policies and ways of operating? The fact that people could speak out more may unfortunately encourage the spread of disinformation and hate speech.
I will now turn to the Meta Platforms representative.
In October 2021, a former Facebook data scientist told members of the U.S. Congress that Facebook knows the algorithms its platforms use are causing harm, but that it refuses to change them because eliciting negative emotions in people encourages them to spend more time on its sites or to visit them more often, which helps sell advertising. To reduce that harm without hurting Facebook's profits, she suggested that posts be displayed in chronological order instead of letting the algorithm anticipate what will engage the reader, and that an additional step be added before people can share content.
What do you think of those accusations?
What would be the consequences of removing the engagement prediction function from a platform like Facebook?
Ms. Rachel Curran:
The assertion that we algorithmically prioritize hateful and false content because it increases our profits is just plain wrong. As a company, we have every commercial and moral incentive to try to give the maximum number of people as much of a positive experience as possible on the platform, and that includes advertisers. Advertisers do not want their brands linked to or next to hateful content.
Our view is that the growth of people or advertisers using our platforms means nothing if our services aren't being used in ways that bring people closer together. That's why we take steps to keep people safe, even if it impacts our bottom line and even if it reduces their time spent on the platform. We made a change to News Feed in 2018, for instance, which significantly reduced the amount of time that people were spending on our platforms.
Since 2016, we've invested $13 billion in safety and security on Facebook, and we've got 40,000 people working on safety and security alone at the company.
I'd like to go back to the previous conversation you had with respect to the convoy that made its way to Ottawa and then turned itself into an illegal occupation. When we had GoFundMe before our committee, they pointed out that any fundraising campaigns relating to misinformation, hate speech, violence or more are prohibited by their terms of service. Yet their crowdfunding platform allowed this convoy to raise money all the way up until they shut the campaign down on February 4, despite ample evidence that misinformation had been circulating everywhere for the previous two weeks.
I want to know from Meta's perspective what you were doing during the time that you were monitoring these Facebook groups. How did you change tactics when GoFundMe stopped the fundraiser, when Ottawa declared a local state of emergency on February 6, when the Province of Ontario followed suit on February 11, and when finally the federal government was forced to do so on February 14? How did your company escalate its actions in that regard?
I will close by addressing Ms. Austin.
You concluded your opening remarks by saying that social media platforms played an important role in the fight against disinformation, and I agree with you. However, a lot of disinformation exists on those platforms.
Even we, elected members, are facing those kinds of problems. On the one hand, social media are our best friends because they enable us to reach out to people we represent, but, on the other hand, they are our worst enemies because we get bad comments and hate speech, if I may say so.
Despite everything, you announced something interesting last Friday, to mark Earth Day: misleading advertising on climate change will be prohibited, to prevent the undermining of efforts to protect the environment. That decision came at a time when the platform's content moderation is being roundly criticized from all sides, both by those who accuse it of censorship and by those who criticize its lax approach. I personally think this is a wonderful announcement and a good decision.
Can we expect a similar policy from Twitter to counter hate speech and disinformation?
Ms. Austin, I'll ask you my last question. I know that for both Meta and your platform, it is a struggle to.... You do care about your platform. You want to ensure that there are legitimate users. What I wanted to know from you is, can you inform our committee about what the trend has been like over the last number of years with unverified accounts and bots, the ones that are pushing extremist content?
Is it like a game of Whac-a-Mole? How difficult is it, from your company's perspective, to actually verify that an account is a real person? What are some of the ways in which people are finding unique features in your platform to exploit the loopholes that might exist?
To both platforms: as a woman in politics, I am subjected to some of the most vile, misogynistic comments on all of your platforms—Instagram, Facebook and Twitter. Your reporting tool is not effective. If it's a direct message on Facebook, I can't report it at all. You're not doing a good job of monitoring your social media sites. When I'm tagged by a colleague who is a person of colour, the racist comments are absolutely disgusting.
My comment was that you need to do better. I've brought this up before with these platforms at the status of women committee. It's not acceptable that people should be subjected to these kinds of comments on these platforms.