:
Thank you very much for inviting me back.
I understand that today one of the things we've been asked to focus on is this notion of algorithmic curation. I'm making these remarks as the co-leader of the eQuality Project, which is a project that in fact is focused on the big data environment and its impacts on online conflict between young people. I'm also a member of the steering committee of the National Association of Women and the Law.
Big data, or the big data environment, where each of us trades our data online for the services we get, is a mechanism for sorting all of us, including young people, into categories in an attempt to predict what we will do based on what we've done in the past, and also to influence our behaviour in the future, especially around marketing, to encourage us to purchase certain goods or to consume in certain ways.
In terms of our concerns at the eQuality Project with the big data model, and with algorithmic sorting in particular, there are three that I want to touch on.
The first is this assumption that the past predicts the future. This can become a self-fulfilling prophecy, which in the context of youth is particularly concerning. The assumption is not only that what we do predicts what we will do individually in the future, but that what people who are assumed to be like us will do or have done in the past somehow predicts what we as individuals will do in the future.
We can begin with an example that will appear soon in the eQuality Project annual report, courtesy of my co-leader Valerie Steeves. Think about online advertising and targeting. If you are a racialized male online and the algorithmic sort categorizes racialized males as people who are more likely to commit crimes, then the advertising targeted to people in that category—the young racialized male—might lean more toward names of criminal lawyers and ads for searching out people's criminal records, as opposed to advertising for law schools, which might be the kind of advertising that a middle-class white young person might get. There's a study by Latanya Sweeney about this.
The shaping of our online experience, the information to which we have access, according to our algorithmic sorting into groups can then become a bit of a self-fulfilling prophecy, because it's assumed that there's certain information that's relevant to us, and that's the information we have access to. I don't know if you have ever sat side by side with someone, done the same Google search, and seen that you get different results. That's one thing. The assumption that the past predicts the future is problematic in a very conservative way, and it's problematic when the groups we're being sorted into are based on discriminatory categories as well.
The second problem, obviously, is the constraint that this imposes on change, the constraint that it imposes on people's equal capacity to participate and to grow. In the context of young people, our concern is whether young people will be influenced in ways such that they internalize the stereotypes that are wallpapering their online spaces; how internalization of those stereotypes may affect their self-presentation, their self-understanding, and their understanding of their possibilities for future growth and participation; and in what ways this may set youth up for conflict with one another and set them up to judge each other according to the stereotyped, marketed standards that are part of the algorithmic sort in the online environment.
The third problem that we're particularly concerned with is the lack of transparency, of course, around this algorithmic sort. We cannot question it. Most people, even people who are computer programmers, don't necessarily understand the outcomes of the algorithmic sort. When important decisions are getting made about people's lives, such as what information they have access to and what categories they're being sorted into, and we have a system that we are not allowed to question, that isn't required to be transparent, and that isn't required to provide us with an explanation of why it is we've been sorted in this particular way, there are obviously serious democratic issues.
Again, our concern in the eQuality Project is to focus on the impact that this has on young people, particularly young people from vulnerable communities, which includes girls.
What to do about this?
One of the important points, which came from earlier work that I did with Professor Steeves in the eGirls Project, is that more surveillance is not the solution. The big data algorithmic environment is a surveillance environment. It's a corporate surveillance environment and, of course, the corporate collection of this data spills over into the public environment, because it creates opportunities for public law enforcement access to this data.
What the girls in the eGirls Project told us about their experiences in the online environment was that surveillance was a problem, not a solution. Algorithmic solutions that purport to categorize young people according to surveillance of their data instill greater distrust between young people and adults, and greater distrust among young people of the systems they're using.
I think it's really important to refocus and reshape our concerns around corporate practices here, rather than around training children to accept an algorithmic model, to accept that they're going to be sorted in this particular way. We should take a step back and ask corporations to better explain their practices—the how, the why, the when—and consider regulation if necessary, including to require that explanations be provided where decisions are being made about a young person's life chances according to algorithmic curation and sorting.
Those are my remarks for now.
:
Thank you to the committee for inviting MediaSmarts to testify on this issue.
Our research suggests that algorithms and the collection of the data that make them work are poorly understood by youth. Only one in six young Canadians feel that the companies that operate social networks should be able to access the information they post there, and just one in 20 think advertisers should be able to access that information, but almost half of youth appear to be unaware that this is how most of these businesses make money.
With support from the Office of the Privacy Commissioner, we've been creating resources to educate youth about this issue and to teach them how to take greater control of their online privacy.
Algorithmic content curation is relevant to cyber-violence and youth in a number of ways. When algorithms are used to determine what content users see, they can make it a challenge to manage one's online privacy and reputation. Because algorithms are typically opaque in terms of how they work, it can be hard to manage your online reputation if you don't understand why certain content appears at the top of searches for you. Algorithms can also present problems in terms of how they deliver content, because they can embody their creators' conscious or unconscious biases and prejudices.
I believe Ms. Chemaly testified before this committee about how women may be shown different want ads than men. There are other examples that are perhaps more closely related to cyber-violence: auto-complete features that won't complete the words “rape” or “abortion”, for example, or Internet content filters, which are often used in schools and may prevent students from accessing legitimate information about sexual health or sexual identity.
This is why it remains vital that youth learn both digital and media literacy skills. One of the core concepts of media literacy is the idea that all media texts have social and political implications, even if those weren't consciously intended by the producers. This is entirely true of algorithms as well and may be particularly relevant because we're so rarely aware of how algorithms are operating and how they influence content that we see.
Even if there is no conscious bias involved in the design of algorithms, they can be the product and embodiment of our unconscious assumptions, such as one algorithm that led to a delivery service not being offered in minority neighbourhoods in the United States. Similarly, algorithms that are designed primarily to solve a technical problem, without any consideration of the possible social implications, may lead to unequal or even harmful results entirely accidentally.
At the same time, a group that is skilled at gaming algorithms can amplify harassment by what's called “brigading”: boosting harmful content in ways that make it seem more relevant to the algorithm, which can place it higher in search results or make it more likely to be delivered to audiences as a trending topic. This was an identified problem in the recent U.S. election, where various groups successfully manipulated several social networks' content algorithms to spread fake news stories. Also, it could be easily used to greatly magnify the reach of an embarrassing or intimate photo, for example, that was shared without the subject's consent.
Manipulating algorithms in this way can also be used to essentially silence victims of cyber-violence, especially in platforms that allow for downvoting content as well as upvoting.
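To make the mechanics concrete, here is a minimal, hypothetical sketch of vote-based ranking in Python. The scoring formula and the example posts are invented for illustration; real platforms use far more complex and undisclosed relevancy algorithms, but the basic dynamic is the same: coordinated voting can push harmful content up a feed and bury a victim's voice.

```python
# Hypothetical illustration of vote-driven ranking and "brigading".
# Scoring formula and posts are invented; no real platform's algorithm is shown.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int = 0
    downvotes: int = 0

def score(post: Post) -> int:
    # Simplified score: net votes. Real systems also weight recency,
    # engagement, and user history.
    return post.upvotes - post.downvotes

def rank(posts):
    # Higher-scoring posts appear first in the feed.
    return sorted(posts, key=score, reverse=True)

victim_report = Post("Victim speaks out about harassment", upvotes=40, downvotes=5)
harmful_post = Post("Harassing post targeting the victim", upvotes=30, downvotes=10)
feed = [victim_report, harmful_post]

print([p.title for p in rank(feed)])  # the victim's post ranks first

# A coordinated group ("brigade") mass-downvotes the victim's post
# and mass-upvotes the harmful one.
victim_report.downvotes += 60
harmful_post.upvotes += 60

print([p.title for p in rank(feed)])  # now the harmful post ranks first
```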
In terms of digital literacy, it's clear that we need to teach students how to recognize false and biased information. Our research has found that youth are least likely to take steps to authenticate information that comes to them via social media, which, of course, is where they get most of their information. We need to educate them about the role that algorithms play in deciding what information they see. We also need to promote digital citizenship, both in terms of using counter-speech to confront hate and harassment, and in terms of understanding and exercising their rights as citizens and consumers. For example, there have been a number of cases where consumer action has successfully led to modifying algorithms that were seen to embody racist or sexist attitudes.
Thank you.
:
There are models in the EU in particular, in the EU directives around data privacy, that focus more on bringing human decision-making into the loop. Where a decision is made that affects someone's life chances, for example, there needs to be some sort of human element in the determination of the result.
Again, this adds a certain level of accountability or transparency in situations where neither you nor I—or maybe even a computer scientist—could actually explain how the algorithm came to the conclusion that you were in a particular group, or that certain information should or should not come to you. Thus, we can have some other form of explanation of what is actually being taken into account in determining what kind of information we're seeing and why a particular decision is being made about us. This is becoming more and more important as we move toward machine-made decision-making in all kinds of spheres.
I think people and countries are beginning to think about ways to put the “public” in public values and public discourse back into decision-making in this sphere, which, although it is largely privately controlled, is really a public infrastructure, in the sense that people increasingly need access to it for work, for social life, and for education. It's about how to right the balance between the decisions being made from a private sector perspective—not for nefarious reasons, but for profit reasons, because that's what they're in business to do—and how we re-inject public conversation and public discourse around what's happening, what kinds of decisions are being made, how people are being profiled, and how they're being categorized. I think this is a really important start.
Hi, and thank you very much for coming.
I want to start off with a personal story. Maybe you can share with me how this came about. You can say, no, these were algorithms—or maybe I had a bad past I don't know about—but what happened is this. I was on a flight the other day, and I watched two YouTube videos, parts one and two from the international advertising awards. I'll share with everybody that they were regarding men's underwear. It was a funny clip—very funny; two testicles; great.
After the first two videos, the third video, which automatically went to play, was pornography. It was a young man and a young woman. Unfortunately, I was sitting there with my 13-year-old son, and I went, “Oh, my gosh”, because the video itself that I was watching with my son wasn't too inappropriate—somewhat, but not too inappropriate—but I can tell you that the third thing absolutely should not have been there.
Would that have been an algorithm? Would that have been something from a previous search history, although I can tell you that I've never searched for pornography on YouTube? How would that have come up? Can you share with me your thoughts on how you start with something that's getting a national award for advertising and the third thing is pornography?
One of the things that we educate young people about is, again, digital citizenship: their ability to make a difference online. We teach them, for instance, that when they see inappropriate content, particularly when it's something like cyber-bullying or hate content, there are a lot of steps they can take. Almost every platform, whether it's a video platform or a social network, has ways of reporting content. Many of them do have downvoting. That's one of the reasons downvoting exists, even though it can be misused. We teach them that they have a responsibility to do that, and that they have a right to have an online experience where they're not exposed to harassment and hate.
We also advocate and provide resources for parents to talk to their kids and for teachers to teach students about all of these different issues. We know that kids are going to be exposed to them, whether intentionally or unintentionally. We know it happens. We know that even the best filters don't block out all of this content, and often, when it comes to things like hate or cyber-bullying, filters don't do a good job.
It's important that we talk about these things, so that by the first time someone encounters pornography, they already know that it's not real, and they already know not to take it as a realistic or healthy view of sexuality.
:
Thank you, Madam Chair.
Thanks to the witnesses. It's good to see you back again.
It was last June, at the very beginning of our study, when we talked with you, Ms. Bailey. I'm glad you're here again.
Just last month, the United Nations Committee on the Elimination of Discrimination against Women issued its report on Canada. The report comes out every five years, and it's a good opportunity for us to check in. One of the items the committee noted was this concern, and I'll quote:
The repeal of section 13 of the Canadian Human Rights Act, which provided a civil remedy to victims of cyber violence, and the enactment of the Protecting Canadians from Online Crime Act (2015), which penalizes the non-consensual distribution of intimate images, but fails to cover all situations that were previously covered by section 13 of the...Human Rights Act.
The recommendation from the committee in paragraph 25(g) is that the federal government:
Review and amend legislation in order to provide an adequate civil remedy to victims of cyber violence and reintroduce section 13 of the Canadian Human Rights Act.
Do either of you, in your professional experience, have any advice for the committee on a recommendation that we might reinforce in that area?
:
On the repeal of section 13 of the Canadian Human Rights Act under the prior government, I testified before the Senate about that. It came, I would say, at the most ironic time in history. It was a time when everyone was talking about the impact of online hate and harassment. Canada was uniquely placed in having a federal human rights provision that allowed for a tribunal rather than a court to respond to online hate and harassment as a human rights issue, hate and harassment that's identity based. We were uniquely and proudly situated in Canada to have had that remedy.
Right at the time, I would have said, when the remedy had its most meaning, when most experts were saying the way to respond to this was not going to be primarily through criminal law remedies but through a human rights approach, Canada chose to repeal section 13. I think that was an unfortunate decision. It hobbled Canada's ability to deal effectively with online hate and harassment and to offer a variety of responses. Also, it's not just that. It's the symbolic recognition that what's underlying these attacks is harassment, discrimination, and prejudice based on identity.
To me, the reinstatement of section 13 of the Canadian Human Rights Act would make a lot of sense at this time, because, with all due respect, it made no sense to repeal it at the time that it was repealed.
In terms of civil remedies, I think one of the more interesting ones I've looked at recently is in Manitoba, where they are using the body that runs Cybertip to assist those whose intimate images are posted online and to get the images taken down quickly. I think that's a very meaningful support mechanism. Getting the image down as quickly as possible is one of the number one issues for victims of non-consensual distribution.
None of that is to negate the criminal law provision, but these things do something different. I think having a panoply of different legal responses that suit different people in different situations and their abilities and needs makes a lot of sense.
:
We can look at “the right to be forgotten” in the EU. An example is the Google case in Spain, where Google was upset that EU law was applying to their situation, because they didn't think their presence in Spain was sufficient to justify the application of that particular directive.
In brief, the case was about whether Google could be ordered to delete a particular result from its search engine, so that if you searched for a particular person, that story would not surface in the Google search. The premise was that most of us get our information from Google searches, and even if a story is still out there on a website somewhere, our access to it is relatively limited if it doesn't come up in the first one or two pages of a search. Google was ordered to remove the result, so that when someone searched for this individual, the story about a prior proceeding wouldn't come up. The idea was that if you don't want to do this broadly, you need to figure out how to do it so that residents in Spain or in the EU don't have access to this material, because this person is entitled, under EU law, not to have other people in the EU getting access to it through their search engines.
It can be done.
Ms. Pam Damoff: Okay.
Ms. Jane Bailey: On the other question about a different kind of algorithmic sort, I think my answer would be that I'm not sure. It would depend on what was happening. That isn't to say that it's not complicated. It's just to say that sometimes it's the first thing that gets put on the table, and I think there is a corporate interest in its being the first thing that gets put on the table.
:
Technically, we have a number of easy steps that can be partially effective.
If you're talking on an individual level, almost every search engine has a “SafeSearch” setting. There are also content filters available; most ISPs make those available, and there are commercial filter programs as well. These are never going to be 100% effective, particularly when you broaden your definition of inappropriate content beyond just nudity, but there are certainly things that we recommend, especially using something free like SafeSearch.
This is one of the reasons why we approach digital literacy in a holistic way, and why things like authentication and search skills address content issues as well. One of the best ways to avoid finding this kind of content is having sufficient search skills so that you're looking for only the thing you're looking for and are able to craft a search string that will filter out things you don't want.
There are certainly also steps that you can take to avoid having a profile built. If you watch a video that may, for whatever reason, have inappropriate content algorithmically connected with it, and there isn't a profile of you being built, it's going to have less of an effect. There are measures like using search engines that don't collect data on you, possibly using an IP proxy, using the incognito modes of browsers in some cases, or activating the do-not-track function in browsers.
All of those, again, are incomplete on their own. Again, that's why we say that you can never entirely shield young people. That's why we have to talk about these issues. Those are all effective steps that you can take to reduce the odds of those things happening.
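As a small technical illustration of why measures like do-not-track are incomplete on their own, here is a minimal sketch assuming only Python's standard library and a placeholder URL: the do-not-track signal is simply a header (DNT: 1) attached to each request, and whether it has any effect depends entirely on the receiving site choosing to honour it.

```python
# Minimal sketch: "do not track" is just a request header the browser sends.
# The URL is a placeholder; honouring the DNT header is voluntary for sites.
import urllib.request

req = urllib.request.Request(
    "https://example.com/",
    headers={
        "DNT": "1",                  # ask the site not to track this request
        "User-Agent": "privacy-demo/0.1",
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # the request succeeds either way; tracking may still occur
```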
:
I will call the meeting back to order. We are going to start our second panel discussion. I have a couple of announcements before we get to that.
I want to remind members that tomorrow is the National Day of Remembrance and Action on Violence against Women. You will remember that years ago the most savage, violent attack in Canada happened at École Polytechnique, and women engineers—I have to say that they were my sisters—were killed in an act of horrific gender violence. Please remember tomorrow. I know that we're not meeting because of votes in the evening, but I'm sure there will be other activities going on to remember that by.
The other thing I want to let you know is that when we were discussing our next study at committee and how we were going to move forward, we were going to have a bunch of the economic development area networks come and speak first. They've all declined to appear—amazing—so we have an opportunity instead to have one panel discussion with ISED, ESDC, and StatsCan, along with Status of Women. We could have that whole bunch come and talk to us in the first hour. For the second hour, the analysts have agreed to get our work plan ready by Friday and sent out to us, so that we can start talking about the work plan and at least agree on some of the initial meetings in the new year. Unless there's an objection, I'm going to suggest that we do that.
Without any further ado, we want to welcome our witnesses for this panel discussion. We have with us Sandra Robinson, who is an instructor at Carleton University. I will just let you know that Sandra wants to be sure she can hear your questions, so if you would ask them loudly and enunciate, that would be very good. We also have with us, from the Department of Industry, Corinne Charette, Senior Assistant Deputy Minister, Spectrum, Information Technologies, and Telecommunications Sector.
Welcome, ladies. We are going to give each of you seven minutes for your remarks.
We'll start with you, Sandra.
:
Thanks to the committee for the invitation today. It's a pleasure and a privilege to appear before you.
I am a full-time faculty member at Carleton University in communication and media studies. I teach in the areas of media and gender; law, communication and culture; and, on the more technical side, algorithmic culture and data analytics. I'd like to share some concerns and considerations about the role of algorithms in the context of networked communications, such as those for social media and search, and, in particular, about what is broadly conceived as automatic content curation by algorithms.
There's been some discussion of this already, obviously, so I'll focus on three things: defining algorithms and their operations; the trade-off between user interfaces and the increasing complexity of software; and, the impact of algorithmic content curation.
I want to be clear at the start about what I mean when I refer to an “algorithm”. In very simple terms and in the context of information systems and networked communication, it can be thought of as a series of computational steps or procedures that are carried out on information as an input to produce a particular output. For example, a search term typed in as input to “Google Search” produces an output in terms of search results.
Also, they don't operate in isolation. Algorithms are part of a complex network of digital devices, people, and processes constantly at work in our contemporary communication environment.
Embedded in any algorithmic system is a capacity for control over the information it analyzes, in that it curates or shapes the output, based on multiple factors or capacities the algorithm uses to generate the outputs. Again, in the case of Google Search, their suite of algorithms takes in the search term, personal search history, similar aggregated history, location, popularity, and many other factors to generate a particular set of filtered results for us.
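As a rough illustration of that input-to-output pipeline, here is a minimal sketch in Python. The signals, weights, and documents are invented for illustration only; they are not Google's actual ranking factors or formula, but they show how an algorithm combines a query with other signals to produce a curated, ordered output, and how the same query can produce different results for different users.

```python
# Toy illustration of multi-signal ranking: query + history + location + popularity.
# All signals and weights are invented; this is not any real search engine's formula.

def rank_results(query, documents, user_history, user_location):
    scored = []
    for doc in documents:
        relevance = 1.0 if query.lower() in doc["text"].lower() else 0.0
        personalization = 0.5 if doc["topic"] in user_history else 0.0
        locality = 0.3 if doc.get("region") == user_location else 0.0
        popularity = 0.1 * doc.get("clicks", 0) / 100.0
        scored.append((relevance + personalization + locality + popularity, doc))
    # Output: documents ordered by combined score, i.e. a "curated" result list.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc["text"] for _, doc in scored]

docs = [
    {"text": "Apply to law school", "topic": "education", "region": "ON", "clicks": 50},
    {"text": "Find a criminal lawyer", "topic": "legal-services", "region": "ON", "clicks": 80},
]
# Two users issuing the same query receive differently ordered results
# because their recorded histories differ.
print(rank_results("law", docs, user_history={"education"}, user_location="ON"))
print(rank_results("law", docs, user_history={"legal-services"}, user_location="ON"))
```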
The rather amazing thing about any of the algorithms incorporated into our contemporary communication is that these computational systems know much more about us than we know about them. They're often mysterious and non-transparent, as has been mentioned: a black box that governs our information landscape, persistently at work to shape information flows, determining what information we see and in what order we see it, and then nudging us towards certain actions by organizing our choices.
Algorithms do govern content automatically, but they do so because they have been designed that way. The capacity of algorithms to curate or sort information has been designed to sit behind the user interface of our popular search and social media applications, so we don't directly interact with the algorithm. Curation and filtering of information is sometimes something we can see happening, but it's not entirely clear how it is happening. The interface simplifies our interactions to things like swiping, tapping, and clicking icons in our mobile apps—highly simplified behaviour.
The extraordinary complexity of algorithms in automated curation is thus deeply hidden in the software and digital infrastructure necessary for networked communication, and this leads to a sort of distancing effect between us as human users and the complexity in the systems we are interacting with, such as Google Search, for example. It becomes difficult for us to connect our simple button choices or search queries to any wider effect. We don't necessarily think that our own individual actions are contributing to the ranking and sorting of other information searches or the popularity of a particular newsfeed post.
Social media companies tell us that reaction buttons like “Like” and “Don't Like”, or the love or angry icons, are a way to give feedback on other users' stories and posts and to connect with the issues, ideas, and people we care about, but this effectively trains us to input information that feeds the algorithm so that it can generate its output, including ranking posts and shares based on these measures.
I was recently reminded of the powerful ways algorithmic curation happens. A group of Facebook users made a few original, offensive posts, the situation quickly escalated over a week, and hundreds of reactions or clicks on all those “like”, “angry”, or “haha” buttons continually moved that cyber-bullying incident up in people's newsfeeds. As Facebook itself notes about the relevancy score of its newsfeed algorithm, “we will use any Reaction similar to a Like to infer that you want to see more of that type of content”. These simple actions literally feed the algorithm and drive the issue up.
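To illustrate that feedback loop in simple terms, here is a hypothetical sketch in Python. The weights are invented and Facebook's actual relevancy model is not public, but it shows how treating every reaction, including “angry”, as a signal of interest pushes content higher in a feed.

```python
# Hypothetical sketch of a reaction-fed relevancy score.
# Weights and posts are invented; no real newsfeed algorithm is reproduced here.

REACTION_WEIGHT = {"like": 1.0, "love": 1.0, "haha": 1.0, "angry": 1.0, "sad": 1.0}

def relevancy(post):
    # Any reaction counts toward relevancy, so a wave of "angry" clicks on a
    # cyber-bullying post increases, rather than decreases, its visibility.
    return sum(REACTION_WEIGHT[r] * n for r, n in post["reactions"].items())

def newsfeed(posts):
    return sorted(posts, key=relevancy, reverse=True)

posts = [
    {"title": "Class fundraiser", "reactions": {"like": 25}},
    {"title": "Offensive post about a classmate", "reactions": {"angry": 180, "haha": 40}},
]
for p in newsfeed(posts):
    print(p["title"])  # the offensive post surfaces first
```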
I also find Google's auto-complete algorithm even more troubling. While Google likes to make grand public assurances that their auto-complete algorithm—the drop-down of suggestions you see when you're searching—is completely objective and won't link personal names with offensive auto-completes, it still drives users to problematic content via its complex and comprehensive knowledge graph.
Google's knowledge graph combines search results in one page with images, site links, stories, and so on, but it still combines information that is problematic. For example, the Google auto-complete algorithm still points us to details of the late Ms. Rehtaeh Parsons' horrific case that were propagated by Internet trolls and continue to feature in Google's “searches related to” suggestions that appear at the bottom of the search page, pointing to images and other problematic content.
Recent changes to automated curation techniques point to our need for sustained efforts to build digital literacy skills, as discussed earlier, that steer young people toward thinking more critically and being more ethically minded about what's going on. I would argue that we also need a specific effort to educate young people about what algorithms are, not in their mathematical complexity, but in terms of how they operate on the simplified user actions that young people are so eager to participate in.
Visibility, publicity, shares, and various Snapchat scores are part of the new social accounting that young people value, and it's driven by an increasingly subtle yet complex infrastructure: an algorithmic milieu of communication and control that leaves very little in the hands of users.
Algorithmic sorting, ranking, and archiving are persistent and ceaseless, churning away continuously as social media and search users navigate, click, view, search, post, share, retweet, @mention, hashtag, and react. To us as users, these actions and their immediate results feel dynamic and vital. At its best, this infrastructure affords us efficiencies in information retrieval and communication; at its worst, it amplifies some of our most problematic and prejudicial expression, action, and representation online.
Thank you. I look forward to your questions.
:
Thank you very much, Chair. Thank you for inviting Innovation, Science and Economic Development to address the issue of big data analytics and its applications to algorithm-based content curation, to the detriment, in some cases, of young girls and women.
[Translation]
This is an important issue for me not only because of its impact on my work, but also because I am a woman engineer who was in Montreal during the events at the Polytechnique, which were devastating for me.
[English]
Following graduation as an electrical engineer, I was very fortunate to have many great roles in technology with a number of leading organizations, including IBM, KPMG, and FINTRAC, our money-laundering detection agency. I was the government CIO until taking up my current post as the SADM of SITT. For 30 years I've been working in technology, and I've seen the adoption of many great technology trends, including the Internet and big data analytics.
[Translation]
Now, as senior assistant deputy minister, my job is to use key tools, including policies, programs, regulations, and research, to advance Canada's digital economy for all Canadians.
[English]
Briefly, my sector is responsible for a wide range of programs, including the radio frequency spectrum, helping to maintain the security of our critical telecommunications infrastructure, and building trust and confidence in the digital economy. We safeguard the privacy of Canadians through two key pieces of legislation: the Personal Information Protection and Electronic Documents Act, or PIPEDA, Canada's private sector privacy legislation, and Canada's anti-spam legislation. In my capacity, I can affirm that the Government of Canada is committed to seizing the benefits of big data analytics through the discovery, interpretation, and communication of meaningful patterns in data, while protecting the privacy of Canadians.
[Translation]
Today, I would like to share with the committee two linked ideas about predictive analytics and algorithm based content curation.
[English]
The first relates to the personal stewardship of our digital information and the second is about the Government of Canada's commitment to building trust and confidence in the economy.
[Translation]
To begin, I would like to note that what citizens, business, and government do online generates a massive amount of data about our world and about us as individuals.
[English]
Every day, businesses and consumers generate trillions of gigabytes of data, structured and unstructured, in texts, videos, and images. Data is collected every time someone uses their mobile device, checks their GPS, makes a purchase electronically, and so on. This data can provide beneficial insights on developing new products and services, predicting preferences of individuals, and guiding individualized marketing.
This is a tremendous opportunity for Canadian innovation. According to International Data Corporation, the big data analytics market is expected to be worth more than $187 billion in 2019. The amount of data available to analyze will continue to double rapidly; however, there are growing concerns about whether the benefits of big data analytics could be overshadowed by the accompanying pitfalls and risks.
Studies demonstrating biased results and decisions that impact whether people can access, for example, higher education or employment opportunities, are increasing. We do need to better understand how biases towards individuals are generated and how we can guard against this. This phenomenon may be explained in part by algorithms that are poorly designed—poorly from a user's perspective—or data that is poorly selected, incorrect, or not truly representative of a population.
[Translation]
It is easy to see that spotty data and mediocre algorithms could lead to poor predictive analysis which can be very detrimental to individuals.
[English]
I share my colleague's comment that step one in terms of risk mitigation involves better digital literacy for all Canadians as an increasingly important tool to ensure that we know what bread crumbs we are leaving behind online. It can give Canadians the knowledge and tools to understand how to use the Internet and technology effectively, critically, and responsibly.
Personal stewardship of our online information can help all Canadians, especially young women and girls, but it also needs to be supported by my second point, which is about the frameworks that preserve our privacy, and now I will talk about PIPEDA. Canada's federal private sector privacy law, PIPEDA, sets out a flexible, principles-based regulatory framework for the protection of individual privacy.
The principles set out in PIPEDA are technologically neutral and are based on the idea that individuals should have a degree of control over what information businesses collect about them and what they use it for, regardless of the circumstances.
Of course, some information, such as demographics, geographic location, and so on, can be a determinant of targeted advertising. This is the data that these algorithms use to produce their recommendations, but it can have significant implications for the privacy of individuals, especially given the lack of transparency of the privacy policies of many online sites and the lack of awareness among young people—and also older Canadians—about the data they are freely sharing.
[Translation]
We need to strike the right balance between privacy and the economic opportunities resulting from the collection of personal information.
[English]
In conclusion, Innovation, Science and Economic Development Canada
[Translation]
works with a number of other government departments to promote the use of big data analytics and other digital technologies by the government and the private sector.
[English]
We need to promote an increased understanding of both the opportunities and the risks of our digital world, of how our data can be used, and of the privacy obligations of predictive analytics users, so that the benefits can be enjoyed by all, especially young women and girls.
I want to thank you for making this issue a part of your important work.
[Translation]
Thank you.
:
That's a great question. Thank you.
From my perspective, it's interesting because I catch up with youth in their first year of university. I think I get a sense, then, of that lack of digital literacy. They can Snapchat the heck out of the world, but they are struggling to understand how some of the pieces make that technology happen.
For teachers, perhaps, in those years before their students burst out onto the world, I think we need to make a concerted effort. One recommendation is to organize appropriate training for teachers. I even think that teachers who feel they have a facility with technology should be marked out as champions within their schools or within their programs, to help lead and to encourage their colleagues. People fear technology, and in most studies women do more so than men, so I think there has to be a very safe and encouraging environment in which to consider what kind of participation can happen.
I think we need to tackle not just the surface level of software applications, the “how do we use this”.... We've come full circle with the Internet, and it's time to come back and say, “Hang on a second.” These simple user interfaces are masking a very complex ecosystem of software, and we can't escape making an effort to understand it and then to share that understanding with youth and among ourselves, pulling each other into the 21st century. I think it absolutely has to happen before students reach the upper levels of their schooling, in high school and whatnot. In some ways it's never too early, given where you see young people and kids with cellphones.
:
Thank you, Madam Chair.
Before I start with the questions, I would like to thank the witnesses for being here today.
I have a question for Ms. Charette.
[English]
You spoke a bit about the innovative practices and all the great applications that exist, which seemingly have nothing to do with violence against women specifically. I'm very excited about the further development in Canada of a personalized, user-based web experience. I think there are a lot of great applications, both from the consumer's perspective and from a business perspective.
Of course, I share the same concern the committee has, which is that the Internet seems to have gotten very good at showing us the good things that we want, but also at showing us bad things, whether we want to see them or not.
As we're seeking to make recommendations to the government, I've heard you loud and clear: education has to be at the fore when we're making these recommendations. Are there certain areas that you'd like to see us avoid to make sure we don't stifle the positive innovative practices that are developing in the private sector?
:
Hackers have a variety of tools. The most basic one is phishing. Everybody here probably routinely gets a number of phishing emails. Now they're starting to do text-message phishing that says “click on this link”, and as soon as you click, that's it: your device has just downloaded an unwelcome visitor that resides there, often gathering information about your daily online activities without your knowing it, until such time as that information is of value, for whatever purpose.
Phishing and the ability to infect devices with malware are very prevalent, and hackers are really keen to do this to capture personal information such as your bank account number and your password, so that they can log in as you from another place and time and transfer all of your funds to some other destination.
Phishing is number one, but you might have heard a couple of weeks ago about quite a successful denial-of-service attack generated by Internet of things devices as basic as your home thermostat. All of these devices that connect to the Internet often come with pre-set standard passwords. The user may not always know what the standard password is or even how to change it, but hackers will know how to do it, so they'll be able to penetrate your home network and lurk until such time as they need to do what they'd like to do.
Unfortunately, given the complexity of technology in our homes and businesses.... At least in our businesses we have LAN administrators and technologists who are looking out for us, but at home we're all obliged to become at least basic technology defenders in some way.