
Standing Committee on Justice and Human Rights


NUMBER 127 | 1st SESSION | 44th PARLIAMENT

EVIDENCE

Thursday, December 12, 2024

[Recorded by Electronic Apparatus]

(1100)

[Translation]

    I call this meeting to order.

[English]

     Welcome to meeting 127 of the House of Commons Standing Committee on Justice and Human Rights.

[Translation]

    Pursuant to Standing Order 108(2) and the motion adopted by the committee on December 2, 2024, the committee is meeting in public to continue its pre‑study of the subject matter of Bill C‑63.

[English]

     Before I welcome the witnesses for the first panel, I have a few introductory remarks to make.
    For those appearing in person, please use your microphone and your headset, and keep the earpiece away from the microphone so that we do not give our interpreters a hard time. This is also for their health and safety. For those in the room and those appearing virtually, please wait to be recognized by the chair.

[Translation]

    I'm speaking French right now. The English participants should be hearing the English interpretation.

[English]

     If you did not understand what I just said in French, your device is not set to the proper channel. I would ask that you ensure that your device is set to the language of your choice so that you can follow along and we're not interrupted midstream.

[Translation]

    Please mute your electronic devices.

[English]

    If you are appearing virtually, unmute yourself only when you are recognized by the chair.
    I will now introduce our three panellists this morning.

[Translation]

    First, we have Frances Haugen.

[English]

     She is an advocate for social platform transparency and accountability. She is appearing by video conference.
    Marni Panas is a Canadian certified inclusion professional.
    From Connecting to Protect, we have Jocelyn Monsma Selby, chair, clinical therapist and forensic evaluator, by video conference.
    I will give each of you up to five minutes to deliver your introductory remarks. I understand that keeping track of the time is a little bit difficult, particularly if you're appearing on screen. When you have 30 seconds left, I will let you know. When the time is up, I will interrupt you as softly and delicately as possible, whether during your five-minute remarks or during your answers to members' questions.
    I want to let you know that we have Senator Kristopher Wells with us today. He will be here for the first hour. Welcome, Senator.
    I will now ask Ms. Frances Haugen to please start.
    You have up to five minutes.
(1105)
    You have probably had the opportunity to hear from a lot of people about the harms of social media, so I will not repeat the laundry list again. Instead, I'd like to focus on two topics that hopefully will give context to that testimony and provide urgency for action.
    First, I want to emphasize that we are profoundly underestimating the severity of social media's impact on children, due to limitations in how we observe and measure these effects. When researchers and policy-makers discuss the harmful effects of social media, they typically point to studies of teenagers documenting rates of self-harm, eating disorders and declining mental health among 16-year-olds, but these studies are echoes of the past, capturing the aftermath of social media exposure that began years earlier, typically around 12 or 13.
    What's alarming is that when we talk to today's 12-year-olds and 13-year-olds, we discover that they started on social media around age eight or nine. In 2022, 30% of American children between the ages of seven and nine were already active on social media platforms. That number is probably higher today. This creates what I call the telescope effect in our understanding of social media's impact. Like astronomers observing distant galaxies, we're always looking at information about the past: how social platforms were designed in the past and past usage patterns. This may be okay when we look at the stars, because the heavens change slowly, but when we examine the digital lives of teenagers, their rapidly changing world means that we end up continually surprised that rates of harm keep going up.
    A seven-year-old is influenced and impacted in a meaningfully different way than a 13-year-old is. The children starting social media use today are doing so at ever younger ages, during even more crucial developmental periods and on even more sophisticated and engaging platforms than those encountered by today's teenagers, whom we're currently studying. If we don't act, we're on track to wake up in 10 years to realize that we've fundamentally altered a generation's development in ways that we failed to anticipate or prevent.
     My second point concerns the emerging and under-reported threat of the rise of AI avatars and their impact on children's social development. These AI avatars are sophisticated virtual companions. They use artificial intelligence to engage in conversation, respond to emotions and build what feel like genuine relationships with users. They're designed to be always available, eternally patient and perfectly attuned to their users' interests and needs.
    The leading provider of these AI avatars proudly announces that the average user—predominantly children under 18—spends two hours daily interacting with these virtual companions. This statistic should alarm us. Learning to navigate real human relationships is inherently challenging and sometimes uncomfortable. It requires compromise, patience and the ability to engage with others' interests and needs, not just our own. AI avatars, in contrast, offer a path of least resistance. They never disagree uncomfortably. They never have conflicting needs, and they never require the complex emotional labour that real friendships demand.
    We need to expand our understanding of what constitutes social media. These AI-driven spaces represent a new frontier of potential harm, where the artificial ease of virtual relationships further erodes children's ability and motivation to build genuine human connections. If we don't act now to understand and regulate these technologies, we risk being blindsided by their effects, just as we were with social media platforms.
    In conclusion, the problems we're seeing with social media are reflections of broader societal issues. The adults most negatively impacted by social media are often those already marginalized in our society, whether geographically, physically or economically. While in-person socialization carries real costs in terms of transportation, activities and time, online socialization appears free at first. The true cost is paid in terms of mental health, development and human connection.
    Similarly, and perhaps most critically, the children most likely to become deeply enmeshed in these virtual worlds, whether traditional social media or AI-driven spaces, are likely to be our most vulnerable and marginalized youth. These are often the children with fewer opportunities for in-person social interaction, fewer resources for supervised activities and fewer adult mentors to guide them through the challenges of growing up or to provide context and support when they face online harm.
    We must act now to ensure that children have appropriate and safe digital spaces, because their ability to meaningfully build relationships and connect will shape the world we all live in for decades to come.
    Thank you.
    Thank you very much.
    I will now ask Ms. Panas to please proceed.
     I am Marni Panas. I use the pronouns “she” and “her”. I am a Canadian certified inclusion professional. I led the development of diversity and inclusion activities at Alberta Health Services, Canada's largest health care services provider. I am the director of DEI for one of Canada's most respected corporations, and I am the board chair for the Canadian Centre for Diversity and Inclusion.
    Today, I am speaking on behalf of myself and my own experiences. I'm here to vehemently defend every Canadian's right to freedom of expression, the foundation of our democracy. However, I and millions like me do not have freedom of expression, because it is safer to be racist, homophobic, sexist and transphobic online than it is to be Black, gay, a woman or transgender online. Online hate is real hate. It descends into our streets. It endangers Canadians in real life.
     In September 2021, I took the stage at a university in my hometown of Camrose, Alberta, to deliver a lecture on LGBTQ2S+ inclusion, a lecture I've delivered to thousands of students, medical professionals, and leaders around the world. While I was on stage, unbeknownst to me, a student, like many other youth who have been radicalized by online hate, was livestreaming my presentation on Facebook and several far-right online platforms. By the time I got off stage, thousands of people were commenting on my appearance, my identity and my family. The worst of the comments included threats to watch my back. My next lecture was cancelled. Police escorted me off campus for my own safety.
     In March 2023, I was invited to participate on a panel celebrating International Women's Day to raise awareness for an organization in Calgary that works to protect women and children from domestic violence. Because of the many online threats of violence directed towards me, the Calgary Police Service and my employer's protective services unit had to escort me in and out of the Calgary Public Library, where the event was being held.
     Last February, emboldened by the introduction of anti-trans legislation in Alberta, people harassed and threatened me and others online at levels I had never experienced before, even trying to intimidate me by contacting my employer. I'm grateful for the support of my current employer, who once again had to step in to have my back.
     It is rarely the people spewing hate online who are the greatest threat, but words are never just words. It is the people who read, listen and believe in hate speech who become emboldened to act on what's been said. These words and the actions they fuel have followed me to my community, my workplace and even my doorstep. The impact of this relentless harassment for simply living my life publicly, proudly and joyfully as me has profoundly impacted my mental health, my well-being and my sense of safety where I live and work, leaving me withdrawn from the communities I cherish and leaving me wondering every time someone recognizes me on the street whether this is the moment where online hate turns to real physical violence. I feel far less safe in my community and in my country than I ever have before.
    No, I don't have freedom of expression. There is a cost to being visible. There is a cost to speaking out. There is a cost to speaking before you today, knowing that this is being broadcast online. Most often, the cost just isn't worth it. The people all too often silenced are those who desperately need these online platforms the most to find community and support. This is made worse when the same platforms allow disinformation to be spread that aims to dehumanize and villainize LGBTQ2S+ people, contributing to the significant rise in anti-LGBTQ2S+ violence as highlighted by CSIS this past year.
    The status quo is no longer acceptable. Platforms need to be held accountable for the hateful content they host and the disinformation they allow to spread. The federal government needs to act. We can't wait. I've been called brave, courageous and even resilient, but I'd rather simply just be safe. People have a right to freely exist without fear because of who they are and whom they love. This is needed in online spaces, too. In fact, our communities and our democracy depend on it.
    Uphold freedom of expression. Pass Bill C-63, and protect us all from online harms.

[Translation]

    Thank you.
(1110)
    Thank you as well.

[English]

    I now turn to Jocelyn Monsma Selby, please.
    Honourable Chair Diab and all members of the Standing Committee on Justice and Human Rights, thank you for the opportunity to be here today.
     My first point is that, in Canada, our current legal framework addresses child sexual abuse and exploitation via the Criminal Code and the law for the protection of children from sexual exploitation. However, we should not be relying on a broad duty of care by any Internet platform. There should be a law requiring the identification of illegal sexually explicit images and immediate action to report and take them down. We need regulation that is fit for purpose and that embeds safety by design.
    My second point is this. Bill C-63 reads, “reduce harms caused to persons in Canada as a result of harmful content online and ensure that the operators of social media services...respect...their duties under that Act.” This is a glitch. All Internet platforms need accountability, not just social media sites. It takes just three clicks to find child sexual abuse imagery or child sexual exploitation material on the regular Net, and this includes images generated by artificial intelligence found through accessing many, many online platforms, including the dark web. These IPAs are disguised within websites and embedded in emojis and hidden links, requiring the viewer to follow a digital pathway that can disappear as quickly as the next link is clicked on.
    In 2022, the IWF found a 360% increase in reports of self-generated child sexual abuse material involving seven-year-olds to 10-year-olds, which is now more prevalent than non-self-generated content. This trend continued into 2023, when the IWF hashed 2,401 self-generated sexually explicit images and videos of three-year-olds to six-year-olds. Of those images, 91% were of girls showing themselves in sexual poses, displaying their genitals to the camera. It's normal for children to have curiosity, explore their bodies or experiment sexually, but that is not what the IWF found. What is shocking is the unsupervised access children have when using digital devices.
     My third point is with regard to guidelines respecting the protection of children in relation to regulating services, the age of consent to data processing and the use of social media. There is a duty to make certain content inaccessible. Caution should be used in passing regulation based on precedents set out in other countries. We need, in turn, to look at all the international laws, treaties and conventions. A single guiding principle is in article 5 of the UNCRC, concerning the importance of having regard for an individual child's “evolving capacities” at any moment in time in their interactions with the online world.
    My fourth point is the establishment of a digital safety office of Canada, a digital safety commission and a digital safety ombudsperson. Could Canada benefit by establishing an online safety office and a children's commissioner or ombudsperson? The answer is yes, and several countries have been blazing a trail for us. These countries are part of a global online safety regulators network that aims to create a coordinated approach to online safety issues. Canada, sadly, is not at the table.
    Last week, I was invited to attend a global summit in Abu Dhabi, sponsored by WeProtect and the UAE government. I was the only child protection representative from Canada, and I'm a self-funded third party voice.
    I have a few final thoughts.
    It took 50 years from the development of the Gutenberg press to produce 20 million books. It took Ford 10 years to produce 10 million Model Ts. It took Playboy approximately two years to sell over a million copies each month. It took the global Internet of 1995 two years to reach 20 million users. It took Facebook 10 months to reach one million users. Today, Meta's ecosystem—including Instagram, WhatsApp and Messenger—has approximately 2.93 billion daily active users.
(1115)
     We need to close the gap between the rapid development of, and access to, the Internet and the regulation it requires. We cannot continue with a partisan approach, lacking civility, to developing the important regulations needed to protect children and vulnerable individuals.
    Thank you very much.
    Thank you for the opportunity to appear today.
    You'll be able to answer questions as well.
    We will start with our first six-minute round.
    I will ask Ms. Ferreri to please start.
    Thank you so much, Madam Chair.
    Thank you to our witnesses.
    We're talking here about one of the most serious bills, I think, that have come before Parliament, certainly in my time and many others' time. That is Bill C-63.
     I want to start with you, Ms. Selby. This is on record from the Canadian Constitution Foundation:
“Bill C-63 combined things that have no reason to go together,” Van Geyn said. “The issue of the online sexual exploitation of children through pornography is urgent and serious and should not be lumped in together with the government’s controversial plans to criminalize all kinds of speech and allow for civil remedies through the Canadian Human Rights Commission for speech,” she added.
    My question for you is this: Shouldn't we have a stand-alone bill or legislation that protects children from online perverts? Shouldn't that be its own legislation?
(1120)
     We definitely need regulation to protect children from sexual exploitation online. I wouldn't use the term “online perverts”. I think there are many groomers and there are many reasons that children are exploited on the Internet. There's obviously a market for it, or it wouldn't be such a problem.
    I guess I'd push back on that. I would certainly call those groomers “perverts”. I guess we just have a difference of language, but I see your point.
    What I'm trying to ask you is this: Shouldn't we have a stand-alone policy that enforces laws to protect these children and that also ensures that social media platforms have a duty of care to ensure they're not allowing this to happen?
    This is where we have a glitch. I believe all Internet platforms have a duty of care. We need regulation that is what I would call “best in practice” to protect all children and vulnerable individuals on the Internet. Under the age assurance umbrella, you have numerous tools. If they are legislated to happen at the device level, everyone gets protected.
    Picking and choosing just certain media sites is not the only answer here. Very few people are protected when you take that approach. You need a broader approach to all Internet providers and all Internet platforms.
     Certainly. I think we would agree on that, for sure.
    I guess what I'm trying to say is.... There's a bill that the Conservatives have. It's called Bill C-412. It was put forward by my Conservative colleague. It deals with exactly what you're saying immediately, as opposed to Bill C-63, where the Liberals have combined two separate issues that are not targeting the predators online and the sexual exploitation.
    To Ms. Haugen's point, the brains of these young children are forever changed. There's not a parent out there who isn't concerned about this.
     I view it as child sexual abuse via digital images. There isn't a child protection expert on the planet who would agree that it's okay for children to have this kind of exposure.
    I agree 100%. I think that's where we're going with this. Bill C-412 directly deals with this immediately, as opposed to Bill C-63, which has combined too many issues that will not hold these perpetrators, whom I will call “perverts”, to account, as well as social media platforms that, to your point exactly, need to have accountability.
    Ms. Haugen, I was very interested in your testimony. It was profound. You hit a lot of nails on the head in terms of the impact social media is having on our children and exposing them too young to this. Without mandatory parental controls or algorithmic accountability in Bill C-63, how do we ensure that platforms are actually protecting children?
    That's a great question. Having a positive duty of care is a really critical component of the Canadian bill, because it says you have to be actively thinking about how your product might be misused and be designing proactively for it.
    Parental controls can be really powerful. They are one set of tools, but not all children have parents who understand technology well enough. Remember, most parents today didn't grow up as a 10-year-old with a smart phone, or I hope not. It means that we need to make sure there is, at a minimum, a floor or a net that is catching all children.
     We also need to ask whether we should be putting the obligation on parents, when they have so much to deal with already, to also stay abreast of exactly what threat is coming from where and what setting and toggle they need to put on their phones.
     Are you familiar with Bill C-412, which my Conservative colleague put forward?
(1125)
     I don't know all of the details for it.
     I would love to direct it to you or send it to you. I think it's addressing what you're saying. It gets to the heart of the issue quickly and more efficiently than Bill C-63.
    I have another question for you. How does Bill C-63 ensure that platforms understand their obligations without explicit definitions?
     That's a great question.
    One of the challenges when writing any kind of Internet regulation law is that technology moves very quickly. For example, right now the Europeans are suggesting things like banning addictive features. In technology, it can be really hard to define what an addictive feature is. If I say, “This is the thing you're not allowed to do”, what usually happens is that either the definition is specific enough to be easy to understand, in which case the tech companies immediately just do a slight twist and say, “Well, it's not in anymore”, or you have a situation where you write them at such a high level that you have to ask what it means to have an addictive feature.
    Duty of care is a nice, flexible in-between where you say, “Hey, you need to be demonstrating proactively that you're looking out for the needs of children and designing safety by design.”
     Thank you very much.
    We'll now move to the next six minutes, please.
     Ms. Dhillon.
     Thank you, Madam Chair.
    Thank you to all of our witnesses for being here this morning.
    I'll start with Ms. Marni Panas.
    Ms. Panas, I'm very sorry about what you've gone through: not being able to express yourself and also being threatened for who you are. Can I please ask you if you think that having freedom of expression includes this kind of hatred that you have faced? Please, can you elaborate a little bit?
    I have so many privileges that I actually do okay. That's with all of my privileges. I can't imagine children and youth and people who don't have privileges like the support of my employer and the people around me. That's with all of those supports.
    There is freedom of expression, but there are consequences. We all have consequences for speaking. You folks in the House of Commons can't just say anything in the House of Commons without some consequence. That has to occur online, and it has to occur in all of our spaces.
    Today, again, I do not have freedom of expression. Even just visibly posting a picture of my partner and me being happy, dancing at a concert, comes at a cost. That cost is often ridicule. That cost turns into harassment. Then that cost turns into people believing the disinformation that is spread online, which leads to policies that restrict my ability to even participate fully in society. It goes so far. This is for somebody who has all of the privileges that I have.
    I don't know if you had the chance to hear last week's testimony when Jane Doe came. It was a very painful testimony. It was painful for all of us to hear what kind of evil can exist in this world.
    These parents came. There was one parent whose child was part of the armed forces and committed suicide. They're begging. We're talking about how parents should not be held responsible, completely responsible, because there's also a duty on governments and the platforms. We know that Bill C-63 applies to all online platforms. They're begging for us to do something as quickly as possible to mitigate the damages that are already done and that could come in the future.
    We keep hearing things about regulatory bodies and delays. Do you not think that, at this point, it's better to pass something rather than nothing? Nothing is perfect, but at least something can give you support. We can give you support.
    Yes, I'm not sacrificing the good for perfection. The good needs to happen now.
    The fact is that if online platforms honoured their own standards of practice and the community standards they already have in place, we probably wouldn't be here. They all come out with these great standards of practice, but any time I report anybody not following those, they're ignored. We're ignored. We need something now. Lives are being lost to this.
    What's important is that there are a lot of youth who find community in online platforms. That's essential when you think of rural populations and when you think of people like myself. The very first time I found somebody like me, when for the first time in my life I realized that I'm not alone and that there are other people like me, was a life-saving moment for me. That was 20 years ago, when the Internet first started. That saved my life.
     We need to protect those environments for youth and people to find social connection in a healthy and meaningful way. That has been robbed from them. The impact of that is violence, death, isolation, loneliness and having to hide the most important parts of your identity. That needs to change now. We can no longer wait. Too many lives have been interrupted. Too many lives have been lost because of the harms experienced online.
(1130)
     I'll continue with a story that just came out about an AI chatbot that encouraged a child to commit suicide. It was saying, “Come home. Come home, my king.” He was 14. He had become addicted to the chatbot because that was his only friend. I guess he didn't have many real-life friends.
    Can you talk to us a little bit about how you see this going forward in terms of the addiction children have towards social media platforms, chatbots and things like that?
     When I was growing up as a child in rural Alberta, I got really good at being alone. I got really good at keeping my secrets—the secrets of my gender identity—because that was where my secrets were safest. I know that many youth are like that even today.
    Having somebody, whether real or artificial, reach out and show some attention feels good. It feels validating. Then you start to seek it out. That's where the dangers lie. That might be your only place for that validation, which then leads to significant harms and violence. That is who I'm concerned about.
     Thank you so much, Ms. Panas.
     Ms. Haugen, I think you wanted to jump in as well. Please go ahead.
     I was going to say that I think “addiction” is maybe not the exact word. People form relationships. We form relationships with those we spend the most time with. In the case of the child who killed himself, it wasn't so much that he got addicted, but he fell in love with this person he talked to every day for hours and hours. The reality is that if you had intimate conversations where you always felt safe with someone and they always validated you, you might fall in love with them too.
    It wasn't that the child didn't have any friends. I worry that sometimes we look at these issues around relationships with AI and say, “Oh, that person must be so pathetic. You'd only turn to an AI if you had nothing else.” His parents didn't know anything was wrong. All they found out was what they saw on his phone afterwards. He lamented that he would never be able to live a life with this person he had fallen in love with.
     Thank you very much for that.

[Translation]

    Mr. Fortin, you have the floor for six minutes.
    Thank you, Madam Chair.
    My first question is for Ms. Panas.
    I understand that you've looked at Bill C‑63, which provides for the creation of three bodies, including an ombudsperson's office and a commission.
    How do you assess the effectiveness of the complaint process with those organizations?

[English]

     Right now, we do not have a meaningful process in place at all. We can't go online. There are no supports in place to go to, so we remain silent. We'll just withdraw from the tools and the platforms if we don't find safety there.
    I think not having a place to go is a real problem. When we look at the processes that will be in place, it's certainly better than leaving us to try to fend for ourselves on this complex issue. Most people with the experiences I've had don't even bother going through the platforms, and we have no other recourse.

[Translation]

    Do you think the complaint process provided for in Bill C‑63 is effective? I'm thinking of complaints made to the ombudsperson, for instance.

[English]

     It will be certainly much more effective than what we have in place today.

[Translation]

    Ms. Haugen, could you tell us about the issue of violations of freedom of expression and privacy? We obviously agree that we need to better protect everyone, and especially our children, on digital platforms, but we must always keep in mind the problem of violations of freedom of expression. It's a bit of a juggling act, so to speak.
    How do you see it? Does it go too far, does it go far enough, or should it go further?
    How can we protect our freedom of expression and our right to privacy while protecting our children on digital platforms?
(1135)

[English]

     You have to be very careful when writing these laws. You can either write them from the perspective of saying platforms are responsible for disclosing risk and demonstrating progress to reduce risk, or you can write them to say that every instance of hate speech has a penalty. The challenge with the latter kind, where you say you have a zero-tolerance perspective—no hate speech—is that computers can't accurately and reliably identify what is okay and what is not okay. What it will mean is things like erasing trans people from the Internet, because it can't tell whether a comment is hateful. It would mean erasing religious minorities: Can you talk about religion confidently if there is a $40,000 violation for the platform?
    As long as the law stays within the bounds of saying you must disclose what risks you believe exist and demonstrate your progress, that can be okay, but we need to be careful not to believe that we can erase hate from these platforms without accepting that we will also erase lots of legitimate speech, because the computers are just not that smart.

[Translation]

    I'll quickly read the definition of “intimate content” proposed in Bill C‑63:
(b) a visual recording…that falsely presents in a reasonably convincing manner a person as being nude or exposing their sexual organs or anal region or engaged in explicit sexual activity, including a deepfake that presents a person in that manner, if it is reasonable to suspect that the person does not consent to the recording being communicated.
    That seems like a rather long definition that seeks to cover a number of areas. Maybe I wouldn't have done any better. So it's not really a criticism.
    Do you think that's a good definition, or should it be amended differently?

[English]

    The issue of non-consensual intimate partner images—some call this “revenge porn”—is an example where we accept that the computer will get it wrong and be more aggressive. It sometimes might take down an image of a person who looks very similar to that person. It's one of these questions around whether we accept false positives and false negatives. Having a broader definition for something like that is okay if the consequence is that a little bit of pornography disappears from the Internet.
    For things that are more controversial topics, that's where relying on censorship is much harder, because the scope, the complexity and the diversity of ideas are so broad, and it's much harder for computers to do than just trying to identify whether this is the same person and this is the same image as what was reported. You're matching one-to-one instead of one-to-many.

[Translation]

    When it comes to deepfakes, do you think there are any effective measures we can take to combat this problem, and if so, what are they?

[English]

     This is a great example of where safety by design and having disclosure on how platforms operate are so important, because it is unlikely that we will be able to reliably identify whether an image is real or a deepfake; the computers will keep getting better and better at making these. That means we need to, instead, ask questions about whether we are weaponizing the platforms or making ourselves vulnerable to people abusing these images.
     Thank you.
    Mr. MacGregor, go ahead, please.
     Thank you very much, Madam Chair.
    I'd like to thank all the witnesses who have joined us today to help guide the committee through this study.
    Ms. Panas, I'd like to start with you.
     Thank you for showing up today and explaining just how your previous experience—your life experience—makes this approach a very important thing for our committee to consider. Often, when a party comes forward with a policy idea on regulating the Internet or online spaces, the first charge levelled against policy-makers is that they're taking away freedom of speech and freedom of expression, but I think you have quite clearly explained how, by not doing anything.... The status quo is actually affecting your freedom of expression right now.
     I want to talk about this concept of a public space or the public square. When we're in a room, like we are right now, everyone has an equal voice and we can all hear each other equally, but in an online space, especially on social media platforms, the platform itself is not a passive bystander. It can actively promote content, or it can actively put it down into certain corners and it can direct people to certain dark corners of the Internet.
     My other committee is the public safety committee. We've been looking at how our foreign adversaries make use of online platforms to spread disinformation, and there's quite a lot of overlap with the subject matter we're dealing with today. We've had witnesses at that committee talking not only about whether we need to take a law approach or a regulatory approach, but also about trying to instill a digital literacy strategy.
     Do you have any thoughts on equipping Canadians with the skills they may need to navigate the online space?
(1140)
     Thank you so much for the question and for the comments of support.
    Yes, it is scary being here. It's not scary because of you folks—you folks are pretty friendly—but I know the moment I leave this space.... I know the people who are watching me right now, and what it will mean to me online. It's terrifying, quite honestly.
    This is such a complex issue. The Internet is so complex. Literacy is part of it. We need a multi-faceted approach to supporting this. We need education supports, but certainly online accountability as well.
    You know, when I think about literacy, it's a really interesting word. X, for example, has banned the word “cis”—“cisgender”, for example. It's a Latin word. It's essentially a biological and chemical term, often rooted in science, that has been banned because of the implications of denying transgender people's existence. That's the whole purpose.
    Literacy would rely on the platforms to actually use language that is appropriate, rather than ban language, which serves to actually eliminate me from society. The people need the literacy, but the platforms have to be held accountable for ensuring proper literacy.
     Thank you very much.
    Ms. Haugen, I'd like to turn to you for my next question. We're in a kind of legislative deadlock right now in the House of Commons. There's pretty much nothing getting done in our main chamber. It's been like that since the end of September. In fact, we don't even have Bill C-63 properly before this committee. This is a prestudy. It hasn't even passed second reading.
     The fact of the matter is that this Parliament is rapidly running out of runway. Bill C-63 is still a long way away from the Governor General's desk. You have just talked about how rapidly this technology is evolving. It may be that we don't actually have a proper legislative approach to this problem for another two or three years.
    What are some of the things a future Parliament has to take note of? We have this draft of Bill C-63, but what are some of the other things we may need to think of in a future piece of draft legislation?
     One of the reasons I'm so excited about the approach Canada took was that you guys did more rounds of citizen assemblies than anyone else in the world did. You actually had conversations. Groups of Canadians went and argued about trade-offs on how to approach the Internet. This is what came out, other than the hate speech attachments that have been added on at the end. As a result, I think the bill overall is pretty resilient. It addresses a bunch of core things that need to be addressed.
    The place where I would encourage you guys to be a little more open-minded or to do a little more future-proofing would be to ensure that the concept of what is a social platform is able to evolve. For example, virtual reality is easy to laugh at right now. If you go walk around Meta Horizon Worlds, which is Facebook's virtual reality space, it's overwhelmingly full of people under the age of 12. Age assurance is important for that reason. Those who talk to AI chatbots are overwhelmingly under the age of 18.
    Think a little more expansively about what it means to be social, because children are starting to say.... Games are another space that effectively functions as a social network. As long as you're thinking a little bit more expansively about what's under the tent and the structure overall, and saying that we need to have a proactive duty of care and we need to care about transparency and these issues, that is what's important.
(1145)
    I have just a few seconds left in this round. Let me close by saying that, again, at my public safety committee, we have had witnesses who are complete and total experts in the AI space, and its rapid pace of development leaves them greatly concerned.
    I would love to talk to your committee, because I worked in that space at Google.
    We'll keep you in mind.
    Thank you.
    Thank you very much.
    Now we'll move to our second round.
    We will go to Mr. Brock for five minutes, please.
     Thank you, Chair.
    I thank the witnesses for their attendance. I echo the commentary of my colleague Ms. Ferreri that this is such an important discussion we're having today.
    Just to clarify, Ms. Haugen, I heard you say that you are not familiar with Bill C-412, which ostensibly achieves the same result in terms of keeping kids safe online. We get to it in a vastly different way versus Bill C-63. It's unfortunate that you haven't had a chance to review that.
    Can the same be said for you, Ms. Selby, that you are not familiar with Bill C-412?
     I've only glanced at Bill C-412. I haven't looked at all the in-depth recommendations.
    Thank you.
    Ms. Panas, have you had a chance to look at Bill C-412?
    I have at a very high, tertiary level, but not deeply, no.
    Okay.
    I'll start with you, Ms. Panas. I listened very carefully to your opening statement. You reiterated in some of the questions put to you that ultimately you feel safe in this environment, but the same cannot be said when you actually leave this building. You talked about various avenues of online harassment.
    Let's face it: That's the reality Canadians are facing. It's not necessarily just children and teenagers. It's also adults. There is a legal definition of criminal harassment in the Criminal Code of Canada, but what's sadly lacking in the Criminal Code of Canada is provisions to deal with online harassment. Sadly—and this is a direct indictment against the Liberal government—Bill C-63 contains no provisions at all that deal with online harassment. Bill C-412 does. I don't know if you've had a chance to dive into Bill C-412 to take a look at the provisions that deal with online harassment.
    The question I put to you, Ms. Panas, is this: Do you think law enforcement and judges should have more tools to provide “no contact” orders for criminal harassment online? Do you think that's a good idea?
     Look, to even get to a situation where I have the courts involved and police involved would require me—a person who is already unsafe online, a person who is already facing enormous costs just in being visible—to have to report that, to have to be believed by the police in the first place that these things are happening, and to have to address all the biases that are inherently built in law enforcement against trans people. I'm more likely to just do nothing and probably withdraw. That is the consequence. You can give them all the tools they want, but that requires reporting and that requires people to believe and to have a safe process.
    Bill C-63 provides means for us to be able to do that in a way where I feel I would be believed for the first time, I would be supported for the first time and I would find some avenue to get that far.
    By the time it's gone to the police—
    I'm going to interrupt you there.
    Bill C-63 does not provide an avenue for you to deal with online criminal harassment. It is a glaring oversight. Bill C-412 provides a ready, able mechanism that addresses some of the concerns you deal with.
    I just wanted to highlight that to you and encourage you to review that.
    Sure.
    I'll move on now to Ms. Haugen—
    I'm sorry, but I have to question you on that. The best thing we can do is avoid these online harms happening in the first place. By the time they get to the police and the courts, we're too late.
     I agree. Thank you.
    Ms. Haugen, I heard your comments on AI avatars. Do you think it's important to have a broad definition of online operator with regard to the responsibilities tech company operators have, and how their products interact with children, in order to ensure that technological advancements don't outpace protections for children?
(1150)
    I think we've seen really strong approaches out of places like the U.K. They have standards like “reasonably likely to encounter” versus products that are designed for children. We need to think expansively around that access question and around what it means to be social. For any spaces where extensive communication takes place, where kids have ongoing relationships that are facilitated by technology, we want to make sure they are future-proofed under a similar umbrella.
     Thank you.
     I believe that's my time.
    Yes. Thank you very much.
    We will now go to MP Maloney for five minutes, please.
     Thank you, Madam Chair.
    I want to thank all the witnesses for being here today. There's a lot to cover.
    Ms. Panas, I'll start with something you said. You talked about feeling comfortable and feeling safe online. Last Christmas Day, I posted a video. I was standing in front of a Christmas tree at a community centre wishing everybody a merry Christmas. The first five or six or 10 comments were, “I hope you lose the next election”, “Rot in hell”—blah blah blah—and those were the nice ones. But I sloughed it off. I have big shoulders. It doesn't matter. That's not what this bill is about. This bill is about protecting people who don't have that ability and who are the most vulnerable.
    I want to pick up on what Mr. Brock was trying to do. I want to thank you for your answers about the difference between Bill C-63, which you support, and Bill C-412, which I consider to be.... Well, it doesn't matter what I think. We've had witnesses who have said it's far too narrow and doesn't accomplish the goals we're trying to achieve here. One witness said that she thought it confused tort law with criminal law, which I agree with.
    I want to deal with this right off the bat. If something is posted online that's offensive and that involves some of the things we're talking about—I won't use the examples—Bill C-63 provides a method to have it taken down from the Internet right away. Contrast that with the so-called solution of Bill C-412, which would require somebody to go out and retain a lawyer, put together some sort of application or motion, go before a judge and try to convince him or her that this should be taken down.
    First of all, you're dealing with people who are the most vulnerable, who don't know how to find a lawyer, who can't afford a lawyer, who have to find a lawyer who knows how to deal with this and appear before a judge who has no expertise in this. It's an insulting joke dressed up as policy. It's not effective. I'd like to get that off the table.
    I'm assuming you agree with that, Ms. Panas. You've already highlighted the importance of having the ability to deal with this quickly.
    I couldn't agree more. If you think putting yourself in front of a Christmas tree is giving you some grief online, I encourage your Conservative colleagues to tweet support for trans people and trans rights, say that trans women are women, and see what kind of hate and abuse they get. That's what I live with every day. You folks get to walk out of here, take off your MP hats and be okay. I don't get to take off my trans identity and be cis for the rest of the day just because life is hard. My life is hard all day, every day.
    That would mean trying to find a judge or lawyers or police who will even believe me that this occurred. It's not possible.
    Right. Thank you.
    Ms. Selby, I want to move over to you. I took your comments to mean that you're in support of the digital safety commission and the ombudsman. You're in support of this process, which would provide a mechanism to respond and act quickly. Is that right?
     Absolutely.
    Okay.
     It also means, as you quite accurately pointed out, that it provides consistency around the world. If you go with the court approach, you'll have the challenges we just talked about, and it also won't be consistent with what happens in the U.K., Australia and other countries around the world, whereas now you're creating a group of people across the globe who have expertise in this and you can address the problem. That was your point, I take it.
     Absolutely.
    I also want to make one other point. In Holland right now, they've experienced that for sexually exploitive material.... It will take three days to take down child sexual exploitation material, but terrorist material is taken down immediately, within an hour. We know that some of the online platforms can take down sexual exploitation material as soon as it is identified, within an hour. However, if regulation isn't put in place, the sexual exploitation material takes longer to take down.
(1155)
    Thank you.
    We've heard from witnesses already who have lived through horrific experiences with their children and families, who have tried to use the courts and the criminal process to address this and who have tried to do it directly with the social media platforms. It simply doesn't work. That is why the digital safety commission and the ombudsperson are so critical, so that it can be responded to quickly.
    Ms. Selby, I take it you support part 1 of Bill C-63.
    Yes, I do. It's consistent with other countries in the world. It's consistent with Australia. I have a whole list of countries, if you want them.
    Thank you.
     Thank you very much.
    We will now go to our final two and a half minutes. We have Monsieur Fortin, followed by Mr. MacGregor.
    Go ahead, Monsieur Fortin.

[Translation]

    Thank you, Madam Chair.
    I want to go back to Ms. Haugen, this time on the issue of private messaging. It was discussed that it should be included in Bill C‑63, and it was proposed that certain obligations be imposed on social media companies, including:
…reporting unusual friend requests from strangers in remote locations…removing invitations to expand one's network through friend recommendations based on location and interests…providing easy-to-use complaint mechanisms…providing user accountability tools, such as account blocking.
    All that seems reasonable to me, but the fact remains that we're talking about breaking into individuals' private messages. I have the same question about freedom of expression and privacy: Aren't we going too far? Shouldn't private messaging be left private, or is there really a need for provisions to enable the owners of these addresses to better control what goes on there and the messages their users receive and send?

[English]

     I think people should have the right to encrypted, secure private messages, but that doesn't mean platforms get carte blanche to do whatever they want in how they design these services or how people behave once they're on them.
    I'll give you an example. You want to exclude things that say you must take down individual pieces of content from encrypted messaging, because that requires breaking encryption. But if you say, “You need to articulate what you believe the risks are of how your product is designed today and have a plan to address them”, that leads to things like what Instagram did maybe two months ago. They said, “We're going to make all under-16 accounts private, because we found that adults were contacting these children.”
    That's an example of a behaviour and design intervention, not a content intervention, involving private messaging.

[Translation]

    Thank you, Ms. Haugen.
    I have a few seconds left, Dr. Selby. Quickly, what do you think?

[English]

    I think all platforms need to have, as Ms. Haugen so articulately said, a duty to articulate what they're going to do as far as their “safety by design” is concerned. I think that's the term we need to use here.

[Translation]

    Thank you to all the witnesses.

[English]

    Thank you.
    Mr. MacGregor, go ahead, please.
     Thank you, Madam Chair.
    Ms. Haugen and Ms. Selby, I'd actually like to continue on that same subject. One of our previous witnesses, the Canadian Centre for Child Protection, is expressly calling for private messaging services and certain aspects of private messaging features to be subject to regulation.
    It's hard. To give a personal example, I have 12-year-old twins. We have them on Messenger Kids. We started them off with iPads. We're not prepared to go to the cellphone yet. I'm sure I'm going through what a lot of parents are going through. This is the new frontier. When they get their own cellphones, how can I be sure that those messaging services will be protecting them?
    Ms. Haugen, you cited Instagram, but are social media companies doing enough? Do we need to take this regulatory approach?
    I'd just like to hear both of you—Ms. Haugen first, and then Ms. Selby—offer a little bit of context.
(1200)
     The only reason Instagram took those actions—they knew they could have taken those actions a decade ago—was that they were afraid of laws like this one. They were afraid of Australia banning access to social media for under-16s. They were afraid of the lawsuits that are happening in the United States.
    You have to put them in situations where they are afraid of consequences, because of the amount of money to be made from cutting corners and from maximizing the number of connections, no matter the risk for these kids and no matter how addictive this is, for advertising dollars. They have to face consequences if you want them to behave well.
    To the second question, on how we keep encrypted messaging secure, we need to think a little more expansively. For example, if I am a child on an encrypted messenger and an adult sends me a lewd image—I did not ask for it and I do not want it—I should have the ability to report that adult. No encrypted messaging has been violated by me reporting that adult. Platforms should have an obligation to take people off their platforms who contact children in that way.
     Thank you.
     You have 30 seconds.
    I just want to give Ms. Selby the final 30 seconds to comment.
     I agree 100% with Ms. Haugen—absolutely—but I think we need to go bigger. The umbrella needs to extend, as I mentioned in my discussion, to say that all Internet sites need regulation, because this duty of them taking responsibility early has not happened. As Frances says, it only happened because they were worried about regulation coming forward like this.
    Thank you.
    Thank you very much to all of the witnesses for appearing today, in person and on the screen. I will simply add that if there's anything else you'd like to send to us—anything you wanted to say but didn't get an opportunity to say today—please do so through the clerk in writing.
    With those words, I'll suspend for two minutes while we get the next panellists ready.
    Thank you so much.
(1200)

(1205)
     We will now resume for our second panel.

[Translation]

    Appearing as an individual, we have Andrew Clement, professor emeritus, Faculty of Information, University of Toronto, by video conference.

[English]

     I hope that everybody is able to understand both languages and that you've selected the language of your choice at the bottom of your screen.

[Translation]

    We also have Guillaume Rousseau, full professor and director of Applied State Law and Policy Programs at the Université de Sherbrooke. He is participating in the meeting by videoconference.
(1210)

[English]

    From the Canadian Constitution Foundation, we have Joanna Baron, executive director. She is here in person.
    Please wait until I recognize you by name before speaking.
    Each panellist will be allowed up to five minutes for opening remarks.
    Mr. Clement, please commence with your opening remarks. You have up to five minutes.
     Thank you, Madam Chair and committee members, for the opportunity to contribute to your important prestudy of Bill C-63, the online harms act.
    I'm Andrew Clement, a professor emeritus in the faculty of information at the University of Toronto, speaking on my own behalf. I'm a computer scientist by training and have long studied the social and policy implications of computerization. I'm also a grandfather of two young girls, so I bring both a professional and a personal interest to the complex issues you're having to grapple with.
     I will confine my remarks to redressing a glaring absence in part 1 of the bill—a bill I generally support—which is the need for algorithmic transparency. Several witnesses have made a point about this. The work of Frances Haugen is particularly important in this respect.
    Social media operators, broadly defined, provide their users with access to large quantities of various kinds of content, but they're not simply passive purveyors of information. They actively curate this content, making some content inaccessible while amplifying other content, based primarily on calculations of what users are most likely to respond to by clicking, liking, sharing, commenting on, etc.
    An overriding priority for operators is to keep people on their site and exposed to revenue-producing advertising. In the blink of an eye, they select the specific content to display to an individual following precise instructions, based on a combination of the individual's characteristics—for example, demographics, behaviour and social network—and features of the content, such as keywords, income potential and assigned labels. This is referred to as an “algorithmic content curation practice”, or “algorithmic practice” for short.
    These algorithmic practices determine what appears most prominently in the tiny display space of personal devices and thereby guide users through the vast array of content possibilities. In conjunction with carefully designed interactive features, such curation practices have become so compelling, or even addictive, that they hold the attention of U.S. teens, among others, for nearly five hours a day. Disturbingly, their time spent on social media is strongly correlated with adverse mental health outcomes and with a rapid rise in suicide rates starting around 2012. We've heard vivid testimony about this from your other witnesses. Leading operators are aware of the adverse effects of their practices but resist reform because it would undermine their business models.
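    To make the curation mechanism described above concrete, the following is a minimal illustrative sketch in Python of how an engagement-driven ranking step might work. The feature names, weights and scoring rule are assumptions for illustration only; they are not drawn from any platform's actual system or from the bill.

```python
from dataclasses import dataclass

@dataclass
class Item:
    keywords: set             # content features, e.g. topic words (assumed)
    labels: set               # assigned labels, e.g. topic or moderation tags (assumed)
    revenue_potential: float  # income potential of showing this item (assumed)

@dataclass
class User:
    interests: set            # inferred from demographics and behaviour (assumed)
    friends_engaged: int      # social-network signal, e.g. friends who interacted (assumed)

def engagement_score(user: User, item: Item) -> float:
    """Heuristic guess at how likely the user is to click, like or share,
    weighted by the operator's revenue interest. Illustrative only."""
    topical_match = len(user.interests & (item.keywords | item.labels))
    social_signal = min(user.friends_engaged, 10) / 10.0
    predicted_engagement = topical_match + 2.0 * social_signal
    return predicted_engagement * (1.0 + item.revenue_potential)

def curate(user: User, inventory: list, k: int = 3) -> list:
    """Return the k items most likely to keep this user on the site."""
    return sorted(inventory, key=lambda it: engagement_score(user, it), reverse=True)[:k]
```

    Even a toy ranker like this shows why transparency matters: the ordering a user sees depends jointly on personal attributes, social signals and the operator's revenue term, none of which is visible to the user.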
    While we need multiple approaches to promote safety online, a much better understanding of algorithmic curation practices is surely one of the most important.
    Canadians have begun calling for operators to be more transparent about their curation practices. The Citizens' Assembly on Democratic Expression recommended that digital service providers “be required to disclose...the...inner workings of their algorithms”. Respondents to the online consultation regarding this proposed online harms legislation noted “the importance of...algorithmic transparency when setting out a regulatory regime.” Your sister standing committee, the Standing Committee on Public Safety and National Security, has made a similar recommendation: “That the Government of Canada work with platforms to encourage algorithmic transparency...for better content moderation decisions.”
    Internationally, the U.S., the EU and others have developed or are developing regulatory regimes that address online platforms' algorithmic practices. Most large social media services or online operators in Canada also operate in the EU, where they are already subject to algorithmic transparency requirements found in several laws, including the Digital Services Act. It requires that “online platforms...consistently ensure that recipients of their service are appropriately informed about how recommender systems impact the way information is displayed, and can influence how information is presented to them.”
    While Bill C-63 requires operators to provide detailed information about the harmful content accessible on the service, it is surprisingly silent on the algorithmic practices that are vital for determining the accessibility, the reach and the effects of such content. This lapse is easily remedied through amendments—first, by adding a definition of “algorithmic content curation practice”, and second, by adding requirements for the inclusion of algorithmic content curation practices in the digital safety plans in clause 62 and in the electronic data accessible to accredited persons in clauses 73 and 74. I will offer specific amendment wording in a written submission.
(1215)
     Thank you for your attention, and I welcome your questions.

[Translation]

    Thank you very much.
    Mr. Rousseau, you now have the floor.
    I apologize for my appearance. I had surgery yesterday, which is why I'm wearing a bandage. Although I have a few scars on my head, my mind is working fine. I should be able to make this presentation and answer your questions.
    As a constitutional lawyer, I mainly want to draw your attention to the issue of freedom of expression and, since I'm from Quebec, I also want to draw your attention to the fact that Bill C‑63 is very similar to Bill 59, which was studied in Quebec in 2015 and 2016.
    For those who, like me, fought against Bill 59, it's a bit like Groundhog Day, since Bill C‑63 contains extremely similar elements, including the prohibition on hate speech. This reminds us of the extent to which Quebec and federal jurisdictions are not always sufficiently exclusive and that there is a lot of overlap. I will stop my digression on Canadian federalism here, but I would like to point out in passing that I have just tabled a report with the Quebec advisory committee on constitutional issues within the Canadian federation. If you're interested in this issue, you should know that a report has just been submitted to the Government of Quebec.
    Bill 59, which was studied in 2015 and 2016, banned hate speech, and it was considered very problematic in terms of freedom of expression. In the end, the government of the day decided to set aside part of the bill and not adopt the hate speech component of the bill in order to keep the other part of the bill, which was much more consensual and dealt in particular with the regulation of underage marriages. With respect to Bill C‑63, I hope we are preparing for a similar outcome.
    I think the bill contains a lot of interesting things about sexual victimization and “revenge porn”. I believe the equivalent term in French is “pornodivulgation”. I think this whole area of protecting minors and protecting them from sexual victimization is very important. However, everything to do with hate seems much more problematic to me.
    Sometimes, people talk about splitting the bill, saying that part 1 isn't a problem, and that parts 2 and 3 are more problematic. For my part, I draw your attention to the fact that, even in part 1, the definition of harmful content includes content that promotes hatred. Even in part 1, there's this mix between the issue of protecting minors from certain elements of pornography and the issue of hate. In my opinion, if we want to rework the bill properly, we must not only not adopt parts 2 and 3, but also eliminate hate from part 1.
    The problem with everything to do with hate in the bill is that the definition is very vague and very broad. Hate is defined as detestation and defamation, but the definitions of detestation and defamation often include a reference to hate. It's all a bit circular. It's very vague and, for that reason, it's very difficult for litigants to know what their obligation is, to know what they can and cannot say.
    I understand that this definition is inspired by the Supreme Court's Whatcott case, but there are two problems in this regard.
    First, this definition was given in a human rights case, but here we want to use it as a model in criminal law. In terms of evidence, in particular, these two areas are very distinct. Second, I understand why we are taking our cues from the Supreme Court when it comes to definitions, because that means that the provision of the act is less likely to be struck down. I understand it on a technical level, but on the substance, a definition that isn't clear and isn't good isn't clear and isn't good, even if it comes from the Supreme Court.
    I want to repeat this famous sentence: the Supreme Court is not final because it is infallible; it is infallible because it is final.
    As legislators, you really have to ask yourselves whether the definition is clear rather than just whether it is the Supreme Court's definition. Ultimately, if you absolutely want to have a definition inspired by the Supreme Court, I would recommend the definition in the Keegstra decision, which is more of a criminal decision. It's a little clearer and a little less problematic than the Whatcott-inspired definition.
    That said, if you go along with what I'm proposing and remove the hate component from the bill, it will raise the following question: If we create a bill that is more targeted on sexual victimization and the protection of minors, will we need a commission, an ombudsperson, an office and all the bureaucracy that is planned when the purpose of the act is more limited? We will therefore have to rethink the bill so that it is less bureaucratic.
    Finally, I draw your attention to the fact that the bill should include the abolition of exemptions that allow hate speech in the name of religion. We were talking earlier about Bill C‑63 and Bill C‑412, but there's also Bill C‑367, which I invite you to study.
(1220)
    Thank you.
    Thank you.

[English]

     Now we go to Ms. Baron, please.
     Good afternoon. Thank you for the opportunity to present before this committee.
     I represent the Canadian Constitution Foundation, a national legal charity that defends fundamental freedoms. We have participated in Whatcott, Fleming, Ward and other seminal Supreme Court of Canada decisions on freedom of expression. We view this bill, Bill C-63, as posing a grave threat to all Canadians' right to free speech and a flourishing democracy.
     We welcome the minister's announcement that he intends to split the bill with regard to parts 1 and 4, but we remain concerned about the constitutionality of aspects of part 1, as well as parts 2 and 3 in their entirety.
    First I'll address portions of the bill that expand sanctions for offences related to hate speech, including “harmful content” and “content that foments hatred”. I am referring to both the mandate of the new digital safety commissioner, created in part 1 of the bill, and the expanded penalties for hate crimes in part 2.
    Part 1 of the bill imposes obligations on an operator to “implement measures that are adequate to mitigate the risk that users...will be exposed to harmful content”. This includes “content that foments hatred”. This office will cost around $200 million over five years and will be able to impose fines running into the millions of dollars on platforms.
    Part 2 of the bill, meanwhile, increases penalties for existing hate crimes, including promoting genocide, which would now be punishable by up to life imprisonment. It also creates a new stand-alone offence, in proposed section 320.1001, for any federal offence motivated by hatred, also punishable by up to life imprisonment.
    As the previous witness mentioned, and I agree with many of his comments, hate speech is an inherently subjective concept. These expanded penalties and regulatory obligations pose a risk of gross disproportionality and excessive chill of protected expression. In Whatcott, the Supreme Court of Canada said that hatred encompasses only the most “extreme manifestations [captured] by the words 'detestation' and 'vilification'”. Only that type of speech can be penalized without violating the charter.
    Bill C-63 adopts this language in proposed subsection 319(7): “hatred means the emotion that involves detestation or vilification”. But “detestation” is really just a synonym for “hate”, and vilification is a highly subjective concept. We are in a present moment of passionate and often fraught disagreement in our society, where a lot of claims are made that are understood differently depending on context.
    For example, calling someone a Zionist currently may land as vilification or, more dubiously, promotion of genocide, or as praise, depending on the speaker and the audience. Just a few days ago, a former CBC producer, Shenaz Kermalli, accused MP Kevin Vuong of hateful expression for posing with an individual wearing an “F Hamas” sweatshirt on social media. That's the problem with criminalizing language. It's subjective. It shifts depending on context.
    These concerns become pressing with the expanded sanctions proposed in part 2. Even if our judges can be relied upon to respect the principles of proportionality when sentencing an offender under section 320, for example, the range of available sentences in the law will now include life imprisonment. It's not a frivolous possibility that prosecutors can refer judges to a range of sentencing up to life imprisonment for a crime such as vandalism if it is alleged that the crime was motivated by hate.
    The reality is that it's virtually impossible to identify in advance, predictably, a line that separates the merely “awful but lawful” from criminal hate speech. This lack of clarity poses an urgent threat to online discourse, which is our current town square and should brook this type of passionate and adversarial disagreement. When these types of sanctions are in play, everyone has an incentive to err on the side of caution. Platforms will flag and remove content that is actually protected expression, and individuals will self-censor.
    Finally, I will briefly address part 3 of the bill. It brings back a civil remedy for online hate speech, which allows members of the public to bring complaints before the Canadian Human Rights Commission. This would be disastrous. You should not go forward with this proposal. Even if most alleged instances are dismissed for not meeting the threshold of hate speech, the penalties for individuals found liable—up to $50,000 paid to the government plus $20,000 to the victim—are severe enough that we can infer that the new regime will lead to large amounts of soft-pedalling of expression for fear of skirting the line. It will interfere severely with press freedom to publish controversial opinions, which are necessary for a flourishing civil society. Finally, process is punishment, even if the case does not proceed. We will see more people punished for protected expression.
(1225)
    Thank you. I welcome your questions.
    The time is 12:26. I will guard time effectively to get us to one o'clock.
    We will start with the first round, and we'll leave it at six minutes each.
     MP Rempel Garner, go ahead for six minutes, please.
     Thank you, Madam Chair.
    Ms. Baron, I'd like to focus my line of questions on clause 140 in part 1 of the bill, which lists the different types of powers that the regulator has to make and enforce regulations. I note there are over 25 different areas that the regulatory body would have the power to regulate on. Are you concerned, given the broad terms that are used in this bill, like “harmful content”, that Parliament is ceding both rule-making and enforcement capacity to this regulator in such a broad way that it could have serious implications on things that you mentioned, like press freedom and freedom of speech?
    Yes, absolutely. My understanding is that the platforms are not clear on how this is actually going to work. They've had some conversations, and all of this is said to be worked out later. From what we know, a lot of these decisions will be about speech that is, perhaps, close to the line and highly subjective. Much of it, we know, will end up being protected expression, even if it offends certain people or hurts certain people's feelings. Those decisions will come down to government-appointed bureaucrats, or to platforms mindful of the severe financial consequences of running afoul of the bill.
     Thank you.
    It seems to me that having a regulator without a legislated duty of care that includes clearly defined terms on what online platforms would be responsible for is putting the cart before the horse in a potentially dangerous way. That's from the perspective of both delaying action that could protect victims and also allowing opportunities for an unelected regulator to place significant restrictions on speech without legislative oversight.
    Would you characterize that as an accurate fear in this situation?
    Yes, I think that the goals the government has spoken about in protecting children and victims of revenge porn are pressing. It's unconscionable to create a new, $200-million regulator to combat those very specific harms.
    Would you say that it would be more effective for the government, and perhaps for all parties, to spend time debating a legislated list of responsibilities for online platforms prior to abdicating responsibility? The better first step, prior to looking at a regulator, would be for Parliament to define what that responsibility is.
    I think that's fair.
    Thank you.
    Clause 140, particularly paragraph (h), gives the regulator powers to essentially label so-called harmful content. In that, I read it as the regulator would have almost greater authority than the Canadian Human Rights Commission currently has to regulate speech, in a very undefined way.
    Would you characterize that as an accurate take?
(1230)
    I would say it's very vague and virtually unchecked.
    I don't think that part 1 can proceed. Fundamentally, for me, I am being asked as a legislator to abdicate my power to an unelected regulator to regulate speech in very broad terms. What do you think about that?
    I fundamentally think that's wrong, and I think it's lazy on the part of the government, as opposed to actually putting forward legislation—which Conservatives have done, with Bill C-412—as a starting point that could define what online platforms are, as opposed to just porting that responsibility, with potential impingements on civil liberties, to an unelected regulator.
    I think part 1 should not proceed. The objectives of protecting children and online victims are pressing, but there are other ways of enforcing that.
    We also know there are huge problems in the courts right now where individuals who are accused of child predation aren't even being tried because of backups in the criminal courts. There are many things we should look at before we look at creating a three-headed hydra.
     Thank you.
    Conservatives have tabled Bill C-412.
    Would you recommend to this committee that perhaps we should be starting with a review of what should actually be in a list of responsibilities for online operators and debating that, as opposed to just porting unfettered powers into an unelected regulatory body that will cost $200 million?
     Yes.
    You haven't asked me to comment on the substance of Bill C-412. We do have some concerns with some of the categories that are listed in Bill C-412, but I do think it is worthy of further study and consideration. I'm happy to talk about the categories we have concerns with.
    Fair enough. The whole point of Bill C-412 was to have an open debate about what platforms would be responsible for, rather than just porting that behind closed doors into an unelected regulator and giving it unfettered power to regulate speech or impinge on civil liberties.
    Do you think that's something the government or any member on this committee should endorse?
     I'm not making any specific endorsements, but I do think proceeding by way of law rather than regulation has benefits.
     Thank you.
    Thank you, Ms. Rempel Garner.
    We will now move for six minutes to MP Brière.

[Translation]

    Thank you, Madam Chair.
    Good afternoon to all our witnesses.
    Mr. Rousseau, it's a pleasure to have you here. I wish you a good and speedy recovery.
    In your opening remarks, you said that creating the ombudsperson position and the commission would only increase bureaucracy. I don't know if you were here, but we just heard from Ms. Panas that her life is difficult every day, that she experiences hate online and on the street and that there is no process in place right now, which is a real problem. This process would make a big difference.
    I'd like to hear your comments on that.
    Thank you for your very good question. I appreciate it in particular, since it was asked by the member for Sherbrooke, the member who represents me.
    If the committee accepted my recommendation and decided that the bill should focus only on the issue of sexual victimization and revenge porn, rather than also include the hate component, that's where the issue of bureaucracy arises. The fact that the issue of sexual victimization has been mixed up with the issue of hate is problematic, since we can agree on one part but not on the other. We are mixing up two debates that aren't necessarily related. However, this approach has the advantage of ensuring that there's a volume of cases that perhaps better justifies the creation of the commission, the ombudsperson and the office. That's what I wanted to bring to your attention.
    Considering the different points of view and the challenges related to freedom of expression, you could limit yourselves to focusing on sexual victimization. Does that more targeted, and very important, issue justify the creation of these three organizations? That's what I'm mainly drawing your attention to. If we look for avenues other than creating this bureaucracy, we can think of legal recourse by individuals, as legislation often allows. A person who has been a victim or has suffered damages could initiate a lawsuit and may be inclined to use that kind of recourse. However, that raises other issues, such as access to justice.
    Another possible avenue would be to imagine a fund dedicated to victims of revenge porn or, more broadly, hate speech. That could facilitate access to justice.
(1235)
    Thank you for your response.
    We heard from a mom whose little girl was abused. The family is currently in court. As my colleague mentioned earlier, sometimes people don't know where to turn or don't necessarily have the money to start legal proceedings.
    Don't you think that would be a way to help them? I think we underestimate the scope of the problem when we aren't caught up in these networks or platforms, or when our children aren't necessarily affected by everything that happens online.
    The testimony we've heard so far has been horrific. It's heartbreaking. Our goal is to be there to protect our children, to help families and to reduce online hate, at the very least, if we can't eradicate it.
    Help us find the right way to do that.
    That's a very good point. If we create these three organizations, we will have to ensure that there is a certain volume of business. If we deal with the more targeted issue of sexual victimization and set aside the issue of hateful content, will there be enough business to justify the creation of these three organizations? That's the question I wanted to put to you.
    At the other end of the spectrum, however, the danger is that too many cases will be filed. If we add hate speech to sexual victimization, since the definition of harmful content is very broad, we could end up with an extremely high number of people filing complaints. This could result in very long delays. Generally speaking, administrative tribunals offer slightly faster and less costly access to justice than the courts, but some administrative tribunals are still overwhelmed by cases and there are very long delays. So we shouldn't think that just because an administrative route is created, there will necessarily be access to justice. It's difficult, but it's important to try to anticipate the volume of cases we'll have and the resources we'll need.
    They said it was going to cost about $200 million. I think that estimate comes from the Parliamentary Budget Officer. You would think that with that kind of money, there would be relatively quick processing, but hate and online sexual victimization are such broad issues that it's quite likely there will be an extremely large volume of cases, where some complaints will be warranted and others less so, and you end up with a problem of access to justice. So I draw your attention to that.
    I have only a few seconds left. Thank you for being here, as well as all the witnesses who are with us today.
    Thank you very much, Mrs. Brière.
    Mr. Fortin, you have the floor for six minutes.
    Thank you, Madam Chair.
    I would like to welcome the three witnesses. This is a good group of witnesses. I'm pleased to have them here today. I just deplore the fact that we have far too little time to ask such important questions of such competent witnesses.
    Mr. Rousseau, I also wish you a speedy recovery. First, I want to mention that we haven't received your opening remarks. It's not mandatory to send them, obviously, but you had some interesting references. So if you have the opportunity to send them to us, I would be very grateful.
    I would ask the same of each of the witnesses.
    That said, Mr. Rousseau, I'm going to address the issue of the definition of hate. You told us that this is indeed a rather problematic definition. You referred to a Supreme Court decision that contains, if I understood correctly, a definition that might be more appropriate, but I didn't really understand what decision it was about.
    First, can you spell the name of the case in question for me so I can write it down properly?
    Second, what definition did the Supreme Court propose in this regard?
(1240)
    Thank you for your question.
    I think I sent my notes, but maybe too late. I was told to send them 72 hours in advance, and I think I did that last night. Perhaps that's why you didn't receive them. They should be arriving a little late. In the worst-case scenario, please don't hesitate to write me an email, and I'll send you my notes in the next few days.
    The definition proposed in the bill is inspired by the Supreme Court decision in Saskatchewan (Human Rights Commission) v. Whatcott. In my view, this definition is a bit too broad; it refers to detestation and defamation, the definition of which refers to hate. So it's a circular and vague definition. Also, it comes from a human rights judgment. We know that human rights differ from criminal law, particularly when it comes to the notion of intent. In human rights, when it comes to discrimination, we focus mainly on the effects, regardless of intent, whereas in criminal law, intent is at the heart of the reflection. So it's really not the same logic. That's why it's problematic to create a definition based on a human rights ruling in a bill that is part of a more criminal logic. In addition, the definition is too broad.
    I'd like to draw your attention to the decision in R. v. Keegstra, which was rendered in 1990 and was repeated in Mugesera v. Canada (Minister of Citizenship and Immigration). It defines hatred as “an emotion of an intense and extreme nature that is clearly associated with vilification and detestation”. That seems to me to be a bit narrower than the definition in the bill, which is inspired by the Whatcott decision, which refers instead to detestation and defamation, since we're talking here about the intense and extreme nature of emotion. The word “extreme” already prevents it from being interpreted too broadly. However, here too, we're talking about vilification and detestation, so we have somewhat the same problem. I'm not telling you that the definition is perfect, but since it comes from a criminal case, it's preferable to the definition set out in the Whatcott decision.
    Thank you, Mr. Rousseau.
    When it comes to the aspect of individual freedoms provided for in the charter, is it reasonable to think that a definition that is poorly chosen, that is not the right one, could have consequences for freedom of expression? The act could govern certain situations that it shouldn't, and that could lead to lengthy and costly legal debates that might not have taken place if we had a better definition.
    Am I right to be concerned about that?
    You're absolutely right.
    Obviously, the challenge is to protect vulnerable people who are victims of revenge porn or hate speech while protecting freedom of expression. That's your challenge.
    Where that balance is very concretely reflected, where it can be found, is in the definition of hate. At the heart of this balance is the definition of content that promotes hatred and the definition of hate. I would draw your attention to the fact that you need to define this very precisely. This concerns freedom of expression for two reasons.
    First, if you define the concept of hate too broadly, the courts will sanction people who have justly acceptable speech that, in an open liberal democracy, should ideally be tolerated. There's that risk.
    Second, there is an even greater risk: if it isn't clear, litigants who wish to speak won't know exactly whether their remarks could fall within the scope of the act or not. A litigant might want to say something that is just fine and not subject to the act. However, since the definition isn't clear, the litigant could refrain from making that comment. That chilling effect is perhaps more problematic than the risk of courts convicting people of actions that should be protected by freedom of expression.
    What you're telling me leads me to the proposal you raised about abolishing religious exceptions.
    Isn't that a case similar to what you're describing? The issue of religious exceptions is so unclear that it can hinder the conduct of trials. People don't know what they can and cannot say. It can be harmful because Crown prosecutors don't know whether or not they should prosecute someone.
    What do you think?
    Indeed, it's the same kind of problem: the exception isn't clear. The other advantage of abolishing the religious exemption is that we don't need a ton of bureaucracy. We would really be fighting hate speech without encountering the problems of bureaucracy and costs that were raised earlier.
(1245)
    Thank you.
    Thank you very much.

[English]

     Now we will go to MP MacGregor, please.
     Thank you very much, Madam Chair.
     Thanks to all the witnesses for joining us today. This is an important prestudy on a very important subject matter for a lot of Canadians.
    Professor Clement, I'd like to start with you, just on the subject of artificial intelligence. The other hat I wear is that of a member of the public safety committee, and at that committee we certainly heard a lot of concern from a lot of experts about the rapid pace of development in artificial intelligence. Can you tell this committee what the role of artificial intelligence is with respect to algorithmic content curation practices?
    Thank you very much for that question.
    “Artificial intelligence” is not a very well-defined term. It's used very broadly, and it has multiple meanings, but we can think of it as a set of algorithmic techniques. It's part of algorithmic practices on the part of these companies. I prefer to use the term “algorithmic intensification”, rather than “intelligence”, because these algorithms do not comprehend or understand content in the way humans do, so they're very limited in their ability to moderate content, particularly if it's going to be taken down.
     AI is being used by the platforms particularly to keep people on their sites, to keep the content flowing and people clicking, and so on, and they can be quite good at that because they can keep refining it. It's a statistical process. Also, as we've heard most recently with generative AI, it's being used to create deepfakes, which can be deeply misleading. I think it's very important that when that is being done, users clearly understand that this is not a real, authentic image. That doesn't address all of the problems—like these AI friends that become seductive in various ways—but it's a start.
     Thank you.
     In the course of the conversation around Bill C-63, my Conservative colleagues have mentioned one of their own bills, Bill C-412. I want to mention another private member's bill, brought in by my colleague MP Peter Julian, Bill C-292, the online algorithm transparency act.
    I'm just wondering if you could talk a little bit about the features in that legislation and maybe how Bill C-63 might not be hitting the mark of where we need to be in this space.
     I did look at Bill C-292. In part, it inspired my recommendations regarding algorithmic transparency, because that is the main feature of that bill.
     However, I think that what I'm proposing here around the prospective amendments to define algorithmic transparency will go beyond what your colleague has proposed in Bill C-292, in that his definition refers only to personal information. There's a lot more information that goes into the algorithmic practices. I think it's very important that we understand all of the aspects of the way in which online operators curate information.
    I think it's a good start. I think Bill C-63 can go further. It needs an algorithmic transparency amendment.
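    To illustrate the distinction Professor Clement draws, namely that curation relies on more information than personal information alone, a disclosure record under an algorithmic-transparency requirement could enumerate input categories along these lines. This is a hypothetical sketch; the field names are not drawn from Bill C-292, Bill C-63 or the Digital Services Act.

```python
from dataclasses import dataclass, field

@dataclass
class CurationDisclosure:
    """Hypothetical summary of what feeds a recommender system,
    of the kind a digital safety plan might be required to include."""
    personal_inputs: list = field(default_factory=lambda: [
        "declared demographics", "watch and click history", "social graph"])
    content_inputs: list = field(default_factory=lambda: [
        "keywords", "assigned labels", "predicted advertising revenue"])
    operator_objectives: list = field(default_factory=lambda: [
        "time on site", "ad impressions served"])
    user_controls: list = field(default_factory=lambda: [
        "non-profiled feed option", "mute or hide a topic"])

if __name__ == "__main__":
    print(CurationDisclosure())  # a transparency report would publish these categories
```

    Only the first list would be captured by a definition limited to personal information; the other three categories are what a broader algorithmic-transparency requirement would also bring into view.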
     Thank you very much.
     Ms. Baron, I'll turn to you for my last question.
    In your opening remarks, you were talking about the importance of protecting freedom of expression, and you said this is the new public square. One key difference, though, is that unlike the physical town square, the digital town square is not a passive bystander. We know that on platforms, those algorithms can play a role in amplifying some content while suppressing other content. It can have a very real effect of pushing some people into some pretty dark corners.
    We just heard from a witness in the previous panel, a member of the LGBTQ community, who said that under the status quo her ability to freely express herself is being hampered. How would you like to tackle that? We're trying to figure out a way forward here. How do we protect her ability to freely express herself, given that the status quo is greatly impinging on her right?
(1250)
     I didn't see the other witness's testimony, but I will say that the beautiful thing about these online platforms is that there are many of them. There's Bluesky. There's Twitter. There's Instagram. There are different communities that have different norms. As we've seen since Elon acquired Twitter, many people have chosen to migrate to Bluesky, and you have every right to do so.
    I think that putting down further government regulations, especially when we see that apparently the result of that is the ballooning three-headed $200-million bureaucracy proposed in part 1.... The ends don't justify the means.
     Would you have any comments on algorithmic transparency?
    I'm sorry; I am a constitutional lawyer.
    That's okay, no worries.
     Thank you.
    Thank you, Mr. MacGregor.
    Thank you to the witness.
    Recognizing the time, 12:51, I'll move very quickly now to the second round, and I'm going to abbreviate it a little bit.
    Mr. Van Popta, you have four minutes.
    Thank you, Madam Chair.
    Before I ask questions of the witnesses, I want to put forward the motion of which I gave notice at the last meeting. We heard evidence earlier this week, and again today, of the rapid growth of online harms, particularly for our children. Professor Clement, I too am a granddad, and I have an image of my innocent grandchildren in my mind when I hear this evidence, so I'm very motivated to act quickly on this. This has definitely become a global epidemic that requires immediate action.
    Now, happily, our Conservative private member's bill, Bill C-412, addresses some of those issues in an immediate manner. Therefore, Madam Chair, I move the following motion, and we're asking for unanimous consent: That the committee urgently undertake a prestudy of Bill C-412, an act to enact the protection of minors in the digital age and to amend the Criminal Code.
     I'm asking for unanimous consent on that.
     Do we have unanimous consent?
    Mr. MacGregor.
     Can you read it one more time? Is it the motion that was discussed last time?
     It is the same motion as last time.
    An hon. member: No, it's a different motion.
     Would you mind reading it one more time?
    My apologies if it's different. It's the same subject matter: That the committee urgently undertake a prestudy of Bill C-412, an act to enact the protection of minors in the digital age and to amend the Criminal Code.
     Do we have that?
    There are a few hands up.
    I have Mr. MacGregor first.
    Can I clarify something?
    I think everybody wishes to clarify, but go ahead, Mr. Maloney.
    I assume the reason Mr. Van Popta is seeking unanimous consent is that this motion hasn't been circulated prior to right now. Is that right?
    That's correct.
     Okay, so that's why we don't have it. Thank you for the clarification, because I didn't have that.
    I have Mr. MacGregor, and then I have Monsieur Fortin.
    Mr. MacGregor.
    It's a slightly differently worded motion from what was distributed last time. I'm not going to say yes, and I'm not going to say no at this point. I want some more time to think about it.
    I would move, Madam Chair, that we adjourn the debate at this time.
     Mr. Van Popta asked for unanimous consent. It's either a yes or a no.
    Do we have unanimous consent?
    We do not.
    Thank you very much.
    Mr. Van Popta, I'm going to let you continue with your time for the questions.
(1255)
     Thank you.
    How much time do I have?
    I will give you two minutes. How's that? That's probably very nice, to give you two minutes.
     Okay, that's good. Thank you.
     Thank you to all the witnesses.
     Ms. Baron, I have a question for you. I'm reading from an article written by you that was published in The Hub on February 28 of this year. You said, “The internet is an ugly place.” I agree with you. There's a lot of good, and there's a lot of ugliness. You said that the online harms act is “a profoundly anti-free expression bill that threatens draconian penalties for online speech, chilling legitimate expression by the mere spectre of a complaint to the Canadian Human Rights Commission or the new Digital Safety Commission of Canada.”
     Now, you heard that the minister has parsed parts 2 and 3 out of this bill, so I'm assuming it's a less offensive bill. Here's my question for you. In your opinion, if we were to also remove part 1, so all that's left is part 4, would that be a good stand-alone bill? Could that work together with Bill C-412 as well?
     I think part 4 should be moved forward immediately. It's pressing and urgent, and I think it's really unfortunate that this government has lumped it together with parts 1, 2 and 3, which are entirely different.
    As for Bill C-412, it does have language that is disconcertingly vague to us—content that can lead to loneliness, content that constitutes bullying, content that is harmful to dignity. This is also vague and could lead to takedowns of protected content. I think that bill needs to be debated and studied further.
    That's fair enough. Hopefully we will debate it, and hopefully you will be back to give evidence on that.
    My question really is, could the two be debated at the same time?
    I believe so.
    One doesn't contradict the other.
    No.
    Thank you very much for that.
    We will now go for four minutes, two minutes and two minutes, and that will wrap it up.
     Please go ahead, Mr. Maloney.
    Thank you, Madam Chair.
    Thank you to the witnesses.
     Ms. Baron, I want to pick up on something you said in your opening remarks. You've repeated several times that you're not in favour of a digital safety commission and the process laid out in part 1. You just made it very clear that you don't support Bill C-412 because it's too “vague”, which is a word that's been used by virtually every witness who's been asked about it. We've had two categories of witnesses on Bill C-412: They either didn't know about it or didn't like it, so I'll leave that there.
    But what don't you like about the idea...? You said in your remarks, “there are other ways of enforcing that”, and then you went on to criticize the court process. Where does that leave us?
     Well, first of all, in the court process, we can look at the causes of why we see shockingly lenient penalties being meted out to those individuals who are convicted—
     With all due respect, that's an entirely different issue. We're talking about taking measures to remove stuff from the Internet. It has nothing to do with penalties for people who have been charged and convicted. Let's separate the two, if we can, please.
     I think there are just much more nimble and focused approaches that could go after child sexual exploitation materials and child predation, as well as revenge porn. You do not need a $200-million regulator and commissioner. Also, the majority of Canadians who are going to be affected by the provisions in part 1 are adults who are perhaps communicating spicy opinions online. That is, luckily, the majority of individuals.
    Okay. I'm asking you, then, what are these nimble and useful approaches? So far, all I've heard you say is that this one doesn't work and the courts are no good either, so what is it?
     It's perhaps a pared-down office that focuses just on revenge porn and child sexual exploitation materials.
    So it's some sort of bureaucratic mechanism, some sort of structure in place, just not the one that's being proposed.
     Is that what you're saying?
    It's not my job to present the precise mechanism. It's my job to point toward a mechanism that would be less offensive to constitutional rights.
    Okay, well, it's our job to come up with a mechanism and come up with a solution. When witnesses like you, who come here with a certain level of expertise, criticize what's being proposed, I would really like to hear your thoughts on alternatives. That's why I'm asking if you have any. If you don't, then that's fine too.
     I think I've said all that I have to say about what a future response should look like.
(1300)
     All right.
    Now, you said something else that intrigued me. I guess this is sort of the Internet or social platforms self-regulating. You said that people can migrate from one platform to another. How does migrating from Twitter or Facebook over to Bluesky, which is the current popular social media platform, help address the concerns of the mothers, the families and the victims we've heard from in this study on previous occasions? That does nothing, other than that they go to a nicer platform. How does that address issues like the dark web? How does that create a solution for these families that are victims?
     To be clear, I was answering that question in the context of a witness who spoke about feeling unsafe while communicating on social platforms. I was not answering it in the context of child predators. I think that, clearly, all of the platforms are aware that this content gets distributed. They have algorithms. They have ways to flag it and remove it much faster. No doubt, tragedies still happen. To the extent possible, I think that should be addressed. My comment about migrating to Bluesky was in an entirely different context.
    If I could sum up your testimony, you're here saying that we need to take some drastic steps to protect children and create a safer online environment, but you're offering no solutions or a way to do that.
    I'm saying that existing penalties ought to be enforced. Do you think this content is not already criminalized? Of course it is.
     But it's not being removed from the online world, is it? That's what this bill is about.
    Thank you.
     Thank you very much for that.

[Translation]

    Mr. Fortin, you now have the floor for two minutes.
    Thank you, Madam Chair.
    Ms. Baron, I thought I saw on your LinkedIn page that you speak French; I'll take advantage of that.
    I won't make you repeat what you've already said, but I'd like to bring you to another topic addressed by Mr. Rousseau, which is the abolition of religious exceptions in the Criminal Code.
    Bill C‑373 has been introduced, and it provides for the repeal of paragraphs 319(3)(b) and 319(3.1)(b) of the Criminal Code. These are provisions that serve as a defence for hate speech or anti-Semitic speech, as long as it is based on a religious concept that we believe in and defend in good faith. In my opinion, the spread of hatred seems a bit difficult to accept as part of a religion. I would say that 99% of religions are based on love and communal harmony, not on the spread of hate.
    Is it a good idea to abolish these defences of religious exception? I'd like to hear your thoughts.

[English]

     I think it's a very complicated question. I think there are inherent problems with criminalization of speech altogether. Having said that, I think that, as a secular, pluralistic society, if we're going to deem something as hate speech, it should be hate speech whether it's based in the individual's religious belief or whether it's based in some secular philosophy.

[Translation]

    Only a minute and a half has gone by, and I feel like I've covered the issue. Ms. Baron, thank you for your clear answer.
    I'm going to give the remaining minute of speaking time to my NDP colleague.
    Thank you very much, Mr. Fortin.
    I would still like to take this opportunity to thank all the witnesses for being with us today.
    Mr. MacGregor, you have the floor for the final two minutes.

[English]

     Thank you, Madam Chair.
     Ms. Baron, I guess I want to talk to you about your views on the responsibilities that social media companies should assume. An example was given in the previous panel about how Instagram suddenly made private the accounts of everyone who was 16 or younger, because there were people who were going through those images and trying to find ways to approach young teens. Instagram could have done this 10 years ago, but they did it now because there was the threat of regulation coming their way.
    These social media companies have algorithms that can amplify certain content and suppress other content. What is your view on the government's role in making these social media companies have some basic standards of practice that allow people to safely participate online? Again, one day it might be freedom of expression that's being compromised, people being able to freely express themselves. The great fear is that this can follow them from the online space and manifest itself physically. People are feeling that their actual lives are in danger.
    What are your views on how we approach the subject of making social media companies more responsible for making a safer online space?
(1305)
     My instinct would be to say—and this is a bit out of my zone of expertise—that when it comes to children and their use of social media, I think parents are primarily responsible, to be honest. I don't think the government should take the place of parents supervising—
    Isn't the government's main job to protect its citizens? I understand your point there, but at some point we need to use the collective power of the state to at least make these social media companies a little bit more responsible. I think that's just what I'm asking for.
    Thank you.
    Thank you very much to our witnesses who appeared in person and virtually. I wish everybody a wonderful, safe rest of today.
    Thank you very much to the committee members for a wonderful session. I wish you all a very nice Christmas holiday season, and I will see you in January.
     I am going to suspend now. Thank you.
    [The meeting was adjourned at 4:40 p.m., Monday, January 6, 2025. See Minutes of Proceedings]