:
Thank you for this opportunity to speak with you today.
Over the past several years my research has focused on some of the social media sites most popular among children, from online communities like Neopets to virtual worlds like Club Penguin. These types of sites don't look very much like Facebook, but they nonetheless do allow for many of the same types of social interactions and activities we identify as characteristic of social media.
Privacy issues are of enormous relevance within these environments. The research shows that since the very early days of the World Wide Web, kids' privacy rights have been infringed upon for commercial purposes within certain online social forums. This happens with much greater frequency than most of the other risks associated with kids online. It's also something that in other countries has led directly to the establishment of child-specific privacy legislation. The key example here is the U.S. Children's Online Privacy Protection Act, or COPPA, which was initially created in response to the then growing practice of soliciting names and addresses from children in order to direct-market to them.
Today the type of data collected from kids and the purposes for which it's used have both expanded significantly. The article that was circulated to you in advance of my appearance here today describes this shift in detail, explaining industry trends toward data mining, in which children's conversations, behaviours, and ideas can become fodder for market research and product development.
In my work in this area, I have observed that within social media forums, when children are considered at all, concern for their rights often plays second fiddle to narrowly defined notions of risk. Children are still far more often seen as potential victims or, conversely, as potential criminals in the online environment. As such, the emphasis is placed on protecting them from strangers, from each other, and from themselves, rather than on supporting and empowering them as citizens.
This tendency has greatly impacted the way in which social media companies address child users. The first and most common response has been to simply ban children under the age of 13 from participating in social media sites. This was the strategy found until very recently on Facebook, and it remains common throughout other popular social media as well. Although some children may, and often do, bypass these bans—by lying about their age, for instance—a formalized age restriction still has deep impacts on how and where children use social media. It also serves as a way of deflecting some of the public and regulatory scrutiny that can be associated with sites that do openly allow children or invite children to participate.
While in some cases age restrictions may very well be appropriate—there are many sites where they would be—in others, the no-children-allowed approach has more to do with wanting to avoid the risks and complications that kids might bring than it does with the actual content or activities that unfold there. As a result, younger children are frequently banned from participating fully and inclusively in online culture, and from reaping many of the benefits and opportunities that social media presents, simply because it's been deemed too much work, too expensive, or simply too risky to accommodate them.
Another increasingly common response is the creation of tightly controlled child-specific social media: social networking sites, virtual worlds, and online communities designed and targeted specifically to children, usually under the age of 13. In my research I've found that in many of these cases the emphasis on risk has put privacy front and centre. Privacy concerns integrated at the level of design are quite apparent. They surface in legal documents such as privacy policies and terms of use, and they appear in the marketing of the sites themselves.
However, a number of areas are in dire need of improvement. As mentioned, there is continued evidence that children's online interactions are being surveilled and data-mined, most often without the full knowledge or consent of the kids involved, or that of their parents and guardians. While kids are regularly asked to agree to these kinds of activities through the privacy policies and terms of use they must accept in order to participate, even on sites designed and targeted to younger children, these documents are long and extremely complex. They describe a wide variety of data collection activities and include a number of terms that are inappropriate and even inapplicable to ask children to agree to.
This raises important questions about informed consent, an issue that's particularly pressing when the users consist of young children with widely varying literacy levels and emerging capacities for understanding complex legal relationships. Best practices would include providing a child-friendly version of both of these documents to ensure that children and their parents know exactly what they're agreeing to. While there are definitely some really great examples of this practice out there, overall very few sites for kids bother to do it. When they do, the child-friendly versions are rarely comprehensive: most don't explain the full reasons for user data collection or only describe items that present the social media company in a positive light.
The misrepresentation of children's privacy as a matter of online safety is also becoming an increasingly prevalent trend. Now, don't get me wrong here. A broader consideration of how rules and design features aimed at protecting children's privacy rights might also offer protection from online predators and bullies has some very real benefits for children's safety and for their enjoyment of social media. But so far, in many cases this dual function has been realized in ways that work primarily to obscure the underlying commercial practices that privacy policies are actually meant to address. By reframing children's privacy as predominantly a matter of online safety, defined in these cases as safety from other users, the more mundane and less obviously risky threats to children's privacy, such as corporate surveillance and invasive market research, are sidelined.
A related emerging trend is to commercialize the safety features themselves, as I discovered in a recent study of kids' virtual worlds. Some kids' virtual worlds come with a “safe chat” version, where chat between users is limited to selecting preconstructed sentences from a drop-down menu. In one case, the “safe chat” version limited kids' options to a mere 323 different phrases, 45 of which were cross-promotional and 30 of which promoted third-party ads. As you might have guessed, none of these phrases were in the least bit negative. Kids could chat about how much they loved the brand but were prohibited, by design, from saying anything critical about it.
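The drop-down mechanic described above can be sketched in a few lines of code. This is a hypothetical illustration only: the phrases, counts, and function names are invented, not drawn from any actual product, but it shows how such a design makes criticism impossible rather than merely discouraged.

```python
# Hypothetical sketch of a whitelist-only "safe chat" system.
# Users can only select from a fixed, pre-approved phrase list;
# there is no code path for free-form (and therefore critical) speech.

APPROVED_PHRASES = [
    "Hi there!",
    "Want to play a game?",
    "I love this world!",              # brand-positive option
    "Check out the new store items!",  # cross-promotional option
]

def send_chat(user: str, phrase_id: int) -> str:
    """Accepts only an index into the approved list, never raw text."""
    if not 0 <= phrase_id < len(APPROVED_PHRASES):
        raise ValueError("Unknown phrase")
    return f"{user}: {APPROVED_PHRASES[phrase_id]}"

# A message like "I don't like this game" simply cannot be sent:
# the restriction is enforced by the design itself, not by moderation.
```

Because the interface accepts only an index, every possible utterance is authored in advance by the operator, which is precisely how promotional phrases can be built in while negative ones are excluded.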
Among the many potentially negative impacts this can have on children is the effect on their rights. These examples reveal that an unfortunate trade-off is taking place, as limited approaches to children's privacy and safety can place undue restrictions on children's other rights, such as the right to freedom of expression or the right to participate freely in cultural life.
Now, it's important to note that what I've described here are general trends, mostly found in commercial social media sites that are considered to be popular among children. Not all social media companies follow these practices. And there are, in fact, a number of Canadian companies that have come up with some pretty brilliant alternative strategies for balancing kids' privacy, safety, self-expression, and cultural participation. There is potential for real leadership here, but there's currently a lack of the kind of regulatory and government support that would be necessary for these types of individual, small-scale, ethical, rights-based approaches to develop into widespread industry practice.
In the time I have left, I'd like to outline four key take-aways or recommendations.
First, there is a clear and growing need for child-specific regulation on the collection, management, and use of children's data. In crafting it, however, we'll need to avoid repeating the mistakes that have plagued previous attempts, such as COPPA in the U.S., which resulted in kids losing access to certain very important social spaces and/or in widespread lying about their ages. We'll also need to expand this regulation in ways that better reflect current and emerging online data collection practices.
Second, we need a much clearer articulation of the ethics of informed consent where children of various ages are involved.
Third, we need to strive for a better balance between children's privacy rights and other rights, such as freedom of expression and the right to participate in cultural life, both within our discussions of these issues and within regulations, either amended or new.
Last, we need to establish clearer leadership and stronger enforcement of these child-specific rules, which would include acknowledging and supporting the innovative, ethical, rights-based examples that certain independent and small Canadian social media companies are already working to build.
I look forward to discussing these issues further with you during the question period.
Thank you.
:
I'll talk a little slower.
The growing importance and benefits of social media to Canadians cannot be overstated. These are far-reaching and permeate every aspect of our individual, social, and political lives. The innovative and commercial growth of such networks should not be unduly restricted. At the same time, Canadians should not be forced to choose between their privacy rights and their right to participate in this new interactive world.
PIPEDA, which forms the backbone of privacy regulation in Canada, provides a flexible set of principles that cater to the legitimate needs of businesses while providing safeguards for user privacy. While PIPEDA has largely withstood the test of time, the privacy landscape has changed substantially since its enactment, and a decade of experience has exposed a number of shortcomings that should be addressed if the statute is to continue to meet its objectives.
I will quickly say a few words about the shifting privacy landscape and proceed to elaborate on four areas that I think need immediate attention.
In recent testimony before this committee, Professor Valerie Steeves pointed to research indicating a growing lack of trust in online companies. A survey conducted for Natural Resources Canada in late 2009 similarly found that respondents' level of trust in different types of organizations to keep their personal information secure was moderate to low. The least trusted were small private sector businesses and social networking sites.
The study similarly found that the ability to control the context in which information is shared increased levels of trust. In another study conducted by researchers at Annenberg and Berkeley, 67% of Americans agreed or strongly agreed that users have lost all control over how personal information is collected and used by companies.
Feeding this sense of lost control is an increasingly complex ecosystem where the scope and nature of data collected increases daily, even as the sophistication of information collection and analysis mechanisms keeps pace. While Google and Facebook have been at the forefront of debates on these issues, numerous other companies are involved. Acxiom, a data broker based in Arkansas, has reportedly collected an average of 1,500 data points on each of its 500 million active user profiles.
Few of these users have heard of Acxiom, let alone had any direct interaction with the company. Yet the profiles, which data brokers such as Acxiom sell, are populated with their browsing habits; the Facebook discussions they have with their friends and family; their sensitive medical and financial information; their ethnic, religious, and political alignments; and even real-world locations visited. All this data is collected, analyzed, and refined into a sophisticated socio-economic categorization scheme, which Acxiom's customers use as the basis of decision-making.
The sheer complexity of the ecosystem that fuels databases such as Acxiom's defies any attempt to articulate it within the confines of a privacy policy. I'll point out as well that the nature of the data being collected in this ecosystem is also increasing in sensitivity: newly emerging capacities aim to incorporate real-time location and even emotional state into the categories of information available for targeting. A number of jurisdictions are looking at ways of addressing the need for greater transparency and choice. I'll touch on four changes, relevant specifically to PIPEDA, that I think we should focus on. The first is transparency.
Greater transparency is needed. To this end, the United States Federal Trade Commission has recently stated it will push data brokers to provide centralized online mechanisms that will help users discover which data brokers have collected their data. This can serve as the basis for the exercise of other user rights.
Informing users can be achieved in a number of contexts through greater integration of notification into the service itself. This not only allows for greater flexibility and nuance in notification, but also increases privacy salience by reminding users, in context, of the privacy decisions they are making. In addition, elements of privacy policies can be standardized, but care must be taken not to oversimplify data practices that are in reality complex. The danger of oversimplification is that organizations will begin to rely on blanket and categorical consent, which is simple but does not provide customers or advocacy groups with the details they need to properly assess an organization's practices.
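The oversimplification risk can be made concrete with a small contrast. This is a hypothetical sketch: the field names and purposes are invented for illustration, but they show why a single blanket flag hides exactly the detail that users and advocacy groups would need to assess a company's practices.

```python
# Hypothetical contrast between blanket consent and per-purpose consent.

# Oversimplified: one bit of information. Nobody reviewing this record
# can tell what the user actually authorized.
blanket_consent = {"user_id": 42, "accepts_data_practices": True}

# More transparent: each collection purpose is disclosed and consented
# to separately, so the practices can actually be reviewed and audited.
granular_consent = {
    "user_id": 42,
    "purposes": {
        "service_delivery": True,
        "behavioural_profiling": False,
        "sale_to_data_brokers": False,
    },
}

def may_use(record: dict, purpose: str) -> bool:
    """A blanket record silently authorizes everything; a granular
    record authorizes only what the user explicitly agreed to."""
    if record.get("accepts_data_practices"):
        return True
    return record.get("purposes", {}).get(purpose, False)
```

Under the blanket record, a query for any purpose, including sale to data brokers, comes back authorized; under the granular record, only the purposes the user actually ticked do.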
Another area I'd like to touch on is privacy by default, as opposed to what might be called privacy by effort.
Transparency alone is not enough to protect privacy in this interconnected age. In a recent consultation process on online privacy, it was noted that many online services are public by default and private only by effort. New users will rarely know how to configure the complex web of often conflicting privacy controls they are offered when first signing on. Settings constantly shift and change as new ones are introduced and old ones replaced, or as new features are added to existing services. Simply maintaining a constant level of privacy is a never-ending effort.
Compounding such efforts is a tendency for social networking sites to make occasional tectonic shifts in the constitution and nature of their services. These are often imposed on ingrained users as “take it or leave it” propositions. At other times, pre-selected defaults are used to nudge users in directions that are very different from the service they have grown accustomed to.
As you've heard from other experts, the devil is indeed in the defaults. Stronger protections are needed to ensure new services and settings are introduced with privacy-friendly defaults that reflect the expectations of users and the sensitivity of the data in question, not whatever configuration is best fitted to the service provider's business model.
Under PIPEDA, the form of consent should already be tailored to user expectations and the sensitivity of the data that might be affected. However, in order to firmly ingrain this concept in service design, privacy by default should be explicitly adopted as a principle under PIPEDA.
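At the level of service design, the privacy-by-default principle can be sketched in a few lines. This is a hypothetical illustration with invented setting names: every setting ships in its most protective state, and any loosening requires an explicit user action rather than a pre-selected default.

```python
from dataclasses import dataclass

# Hypothetical privacy-by-default settings object: every setting,
# including any added for a new feature, starts in its most
# protective state. Sharing requires an explicit opt-in by the user.

@dataclass
class PrivacySettings:
    profile_public: bool = False      # default: profile is private
    share_location: bool = False      # default: location not shared
    allow_ad_targeting: bool = False  # default: no ad targeting

    def opt_in(self, setting: str) -> None:
        """Loosening a protection is always an explicit user action."""
        if not hasattr(self, setting):
            raise AttributeError(f"No such setting: {setting}")
        setattr(self, setting, True)

# A newly created account exposes nothing until the user opts in.
settings = PrivacySettings()
```

The design choice is that the service provider never flips these flags on the user's behalf; a new feature added tomorrow would get another `False` default rather than whatever configuration best fits the business model.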
Another area I want to touch on briefly is enforcement and process.
The committee has heard from a number of parties about the importance of strengthening the enforcement powers of the Office of the Privacy Commissioner. Adding bite to PIPEDA is critical for a number of reasons. First, it is necessary in order to provide incentives for compliance: currently there are very few penalties for non-compliance, and in most cases the worst an organization can expect is the threat of being publicly shamed. Second, having these powers in place will assist the Office of the Privacy Commissioner in its interactions with large multinational organizations, so that it can carry out its mandate of protecting the privacy of Canadians.
In addition to adding penalties, procedural changes to the OPC's investigative and compliance framework should be explored. Compliance with OPC recommendations in a social networking context may be a long and complicated road, requiring changes to system design. However, under PIPEDA the OPC's legal mandate to exercise its powers over a particular complaint ends 45 days following the issuance of an official finding. The mechanism lacks the flexibility necessary to ensure Privacy Commissioner recommendations are carried out adequately.
Finally, I'll touch briefly on breach notification requirements.
Canada is in dire need of a breach notification obligation. Such an obligation will improve incentives to build stronger technical safeguards and provide users with opportunities to redress harms, such as identity theft and the potential humiliation that may result from a breach of their data.
Bill , which is currently in first reading, provides a workable framework for breach notification, but it requires fixes and a commitment to introduce penalties for non-compliance if it is to be effective.
I would be happy to elaborate further on any of these points. CIPPIC plans to file a more detailed brief with the committee at a later point.
Thank you very much for your time and attention.
:
Good morning, Mr. Chair and honourable members. Thank you for the opportunity to speak with you today.
My name is Adam Kardash. I am a partner at the national law firm of Heenan Blaikie, and chair of the firm's national privacy and information management practice. I am also managing director and head of AccessPrivacy, a Heenan Blaikie consulting and information service focusing on privacy and information-related matters.
I appear before this committee in a personal capacity, representing only my own views. However, my views are based upon my experience at Heenan Blaikie and AccessPrivacy.
Over the past ten years I have focused almost exclusively on advising private sector organizations on privacy and information management matters. I regularly consider the privacy law implications of new technologies and platforms.
In my opening remarks I will offer a number of comments that centre on a single theme; namely, that our federal private sector privacy law, the Personal Information Protection and Electronic Documents Act, or PIPEDA, works very well. Since coming into force in 2001, and despite all sorts of criticism from a range of stakeholders across the Canadian privacy arena when first introduced, the statute has stood the test of time. In my view, PIPEDA has worked and continues to work particularly well in addressing privacy challenges raised by new technologies.
The act sets out a comprehensive set of requirements that regulates an organization's collection, use, disclosure, storage, and management of personal information. One of the reasons the statute remains effective today is because it was drafted in a technologically neutral fashion. PIPEDA's core rules are mainly set out in plain language as broad principles, and therefore can be applied to any new technology, new application, or new system that involves the processing of personal information, including social media platforms.
It is precisely because PIPEDA does not focus on any particular type of technology that it is so well suited to addressing seemingly novel privacy issues that may be raised by new technological developments. In this regard, it is important that PIPEDA remain drafted in a technologically neutral manner. Given the increasingly rapid pace of technological innovation, any statute that is drafted around a certain technology or platform, whether social media or otherwise, will be obsolete by the time it comes into force.
In my experience, technology-based issues, privacy or otherwise, are most effectively addressed through self-regulatory frameworks that work in concert with the statutory regime. Compared to statutes or regulations, self-regulatory frameworks are far easier to develop, implement, supplement, or revise in order to remain current with changing technological developments.
Notably, under PIPEDA, a self-regulatory framework developed by way of a meaningful consultation process would have legal value under the statute. Self-regulatory frameworks establish industry standards, and well-developed industry standards inform the meaning of PIPEDA's overarching reasonable person test. This is in subsection 5(3) of the act, which provides that organizations may only collect, use, or disclose personal information for a purpose that a reasonable person would consider appropriate in the circumstances.
When advising clients, as a matter of practice I do not refer to PIPEDA as merely a set of legal rules. Rather, the statute sets out a useful framework for organizations to proactively address privacy concerns in a manner that balances individual privacy with the collection, use, and disclosure of personal information in the course of legitimate business activities. PIPEDA's rules are dynamic, in that they apply to the entire life cycle of data, from the collection or creation to the ultimate destruction of personal information held by an organization.
All of these rules fall under the principal feature of PIPEDA: the accountability principle. The accountability principle is a simply worded but very powerful requirement. It provides that organizations are responsible for personal information in their possession or control.
Notably, PIPEDA's accountability model is now being referred to around the world, by foreign data protection authorities, foreign governmental bodies, and global privacy think tanks, as the enlightened statutory model for the protection of personal information. PIPEDA's framework, in large part due to its accountability model, is specifically cited in these international fora as being well positioned to appropriately address the privacy concerns that may arise in the online sector, and otherwise in the technological context.
There are a number of published letters of findings from the Office of the Privacy Commissioner of Canada that clearly demonstrate the OPC's effectiveness, under PIPEDA's existing framework, in considering and appropriately resolving emerging privacy issues raised by new technologies. They include several letters of findings issued in the social media context.
One of the central and in my view critical features of PIPEDA is the ombudsman model incorporated into the act. The Privacy Commissioner is vested with the role of ombudsman in carrying out her duty to oversee the personal information practices of organizations subject to PIPEDA, with recourse to the Federal Court where issues remain unresolved.
The ombudsman model is hardly new. It is typically employed by governments to regulate public administration. But PIPEDA applies the ombudsman model, in a novel fashion, as a means of regulating private sector activity. In my experience dealing and interacting with the OPC when advising clients across all sectors, the OPC's ombudsman model has proven over time to be very effective and generally well received by private sector organizations.
An ombudsman model is particularly well suited to facilitating effective privacy compliance, since meaningful privacy protection is not just about an organization satisfying legal rules. Rather, privacy interests are addressed meaningfully when a privacy mindset is fostered within an organization in a manner that's tailored to the reality of an organization's business context. Experienced chief privacy officers understand that privacy is about enhancing trust. And building trust requires engaged discussion with stakeholders within an organization, within industry sectors, and across the privacy arena. The OPC plays an important part in this discussion, and the ombudsman model facilitates flexible and collaborative interaction with private sector organizations.
Commissioner Jennifer Stoddart eloquently described the nature of her role as ombudsman in a 2005 speech in which she considered the merits of the ombudsman model. She stated:
It must be underscored that the Ombuds-role is not simply remedial, but transformative in nature. The aim is the resolution of individual complaints, but it is also the development of a lasting culture of privacy sensitivity among parties through their willing and active involvement in the process itself. In order to achieve these twin goals, the process must necessarily be flexible, participative and individuated in its approach.
Recently there have been calls from various stakeholders in the Canadian privacy arena, including from Commissioner Stoddart, for PIPEDA to be amended to provide the OPC with greater enforcement powers. Based on my experience in the privacy arena over the last ten years, it is not clear that any such amendments are necessary.
To their credit, Commissioner Stoddart and the more recently appointed assistant commissioner, Chantal Bernier, have been remarkably successful in carrying out their mandate in the ombudsman model context. They have done so with an arsenal of several powers under PIPEDA. In particular, they have the power to publicly name organizations that are in breach of PIPEDA, the power to self-initiate investigations or audits of an organization's personal information practices, and, as I noted, the power to refer complaints to the Federal Court.
The OPC has been highly respected in the international privacy arena for years, but it enhanced its reputation considerably among foreign data protection authorities as a result of its highly publicized investigation of Facebook's personal information practices. As a direct result of the OPC's enforcement activities, Canada is now regarded as one of the leading jurisdictions globally in exploring privacy issues associated with new technologies, including in the social media context. The OPC's achievements in this regard have been accomplished without order-making power or other enforcement mechanisms, such as the ability to levy fines. Notably, Commissioner Stoddart has made public statements to the effect that the mere public threat by the OPC of potential Federal Court action against a given organization has almost always resulted in the organization satisfying the OPC's concerns.
Innovative new technologies, such as social media platforms, offer Canadians tremendous value. As we continue to engage with and take advantage of new technologies, and we all provide our personal information in the course of doing so, privacy will continue to play an increasingly integral part of private sector organizations' trust relationship with individuals.
As we consider emerging privacy issues, it is of course important to reflect upon whether the existing privacy regulatory framework serves to ensure that individual privacy is appropriately addressed. With PIPEDA, we're fortunate: we have a technologically neutral, principle-based statutory framework that has served us exceedingly well in ensuring the protection of privacy in a balanced fashion.
As the committee continues its study, I respectfully offer the following concluding suggestions as it considers whether, and the extent to which, PIPEDA needs to be amended to address challenges posed by new technologies, in particular amendments that would provide enhanced enforcement powers.
First, as individuals we all have a responsibility to be careful with how we use our personal information in public contexts. Public outreach and regular training and awareness by privacy regulatory authorities and relevant private sector organizations are critical in this regard. No amendments to PIPEDA would be required to enhance our collective efforts in this fashion.
Second, I respectfully submit that the committee carefully consider the costs of moving to an enforcement model under PIPEDA. To accommodate new enforcement powers such as order-making power, structural changes to the OPC will be required, and key benefits afforded by the ombudsman model will be lost.
Third, as part of a national strategy to ensure growth of our domestic technology sector, we need to ensure that any legislative change or initiative be carefully considered in a manner that ensures we don't impose unnecessary impediments to legitimate business activity. In short, in my view, the economic costs of privacy regulatory change need to be carefully considered. We need a regulatory framework that fosters innovation. In the privacy arena, PIPEDA provides us now with an appropriate model that has served us well in this regard.
Finally, the constitutional impact of any legislative change to PIPEDA, in particular with respect to new enforcement powers, needs to be carefully reflected upon. The recent Supreme Court of Canada decision in the securities reference, a case that considered the constitutionality of a national securities administrator, serves as an important reminder that constitutional considerations need to be a part of any study of privacy legislative reform.
Thank you again for the opportunity to speak with you this morning. I would be pleased to respond to any questions from the committee.
Thank you very much to our witnesses. We've heard some fascinating information here.
I don't even know where to begin, but I'm going to just start, Ms. Grimes, with you.
You said that unbeknownst to most Canadians—I think this is fairly common knowledge—online activities are surveilled. We have data-mining going on out there. We have spiders. We have bots. We have all kinds of things that are downloaded onto people's computers unwittingly. We have spyware, malware, adware, and whatever you want to call it tracking people's activities, whether they're on a laptop or a mobile device.

In these user agreements, we agree to how our information will be collected and used. It's in our settings in our devices whether or not we want to allow cookies, for example, on our computers. It's in our settings on our iPods and our iPads. We get push notifications. We can turn these kinds of things on or off. An educated user will have to make a little bit of an effort to do that. We can get third-party software that will help us protect, for example, our computers at home that our children are on when they're trying to do their homework, so that I as a parent can get notification on what kinds of activities my children may or may not be doing online.
And that's going to be a question I have for you: Do you think my child has the right to be able to do that on a computer, without me knowing what my child is actually doing? I'll save that question for the end.
In all of these agreements, I have one choice: I either accept the terms of the agreement in its entirety or I don't. That's the choice I have. I don't have the option to parse parts out.
My question, broadly, for all three of you is do you think there should be a legislative or a regulatory requirement to have these kinds of agreements parsed out in such a way that an end-user can actually have the ability to select which parts they're going to agree to, or which parts they're not going to agree to? Most of these things set defaults on how my information is going to be shared with a company like Acxiom, which frankly has me terrified.
I know how these things work, because I used to be a database administrator. I understand how these data points are collected, and many of these things are collected without my knowledge. I'm sure my name's in Acxiom, because I'm an avid computer user, or if it's not in Acxiom it's somewhere else. Somebody has information about me and my browsing habits and my user habits, and so on. So this is a very frustrating thing.
Why can I as a user not have the ability to choose which parts of the agreement I want to agree with and which parts I don't? Is that a reasonable thing, from a regulatory environment point of view, for a government to be involved in?
:
There have been some developments and some recommendations in the EU, though I'm not up to date enough to know where they are in that process. They did a huge study there, which ended recently: academics and a number of government agencies studied these types of issues with kids of various ages online in something called the EU Kids Online project. After the reports came out, I know that discussions started about industry guidelines and potential new regulations. Where that process stands now, I'm not entirely sure, but it would be one place to look. I know they have been considering it, and they've also grounded a lot of what they've been doing in research, which is great.
In terms of examples of children being specifically targeted and used for commercial gain, one of the big problems of studying this area and these processes is the lack of transparency. Data is collected, and you can read the terms of service and kind of see the data coming out in different places, but it's not always clear what the links are, how the data is being transferred, or how it's being used. The examples I've looked at to see how this process can work tend to be sites that actually sell the data to other companies and are quite open about doing so. They function as a social media space, but they also do data-mining and data-brokering in-house.
An example from a few years ago is Neopets, an online community for kids. They sold the market research they had done to various companies and published an annual report in AdAge, a big advertising industry trade publication in the U.S. They would include surveys and pretty easy-to-identify market research strategies within the site.
A more recent example is Habbo Hotel, which is based out of Finland but is popular all around the world. Most of its users are between the ages, I think, of 13 and 18, but it also has a significant number of users aged 11 to 13. They offer a similar type of service, called Habbel, through which they package data and sell it to other companies. Through that service, companies can also hire them in advance to, in effect, spy on conversations that kids might be having about a particular product. They report not just what the kids are saying about the product but the larger context within which that conversation emerges: what likes those kids have, what areas of the site they gravitate towards, what other things they talk about, what time of day they are there, and where they plan to go afterwards if they plan to meet up in real life, because a lot of kids who meet and communicate in social media actually do know each other in real life, go to the same schools, and that kind of thing. It can be very detailed information.
The only reason we know how detailed the data is, and know about these kinds of processes at all, is that they're openly selling it. In many cases, though, companies are not selling the data openly. They're keeping it, or they're selling it through more covert means, so it's not as obvious what's happening to it.
Are kids concerned about privacy? Definitely. There's been a lot of talk about the different concepts of privacy that kids have. I think this comes back to Mr. Andrews' comment earlier about kids being born in this age of Facebook, and not knowing any different type of environment and having pictures of themselves online before they're even old enough to go online themselves.
They may have slightly different concepts of privacy, but a lot of them are very similar to traditional concepts of privacy. In study after study, what comes out the most is that they're most concerned with privacy infringements that impact them on an immediate level: friends infringing on their privacy, or parents infringing on their privacy, or perceiving that their parent is infringing on their privacy. These abstract forms of infringement are at arm's length; they don't seem to impact them on that day-to-day basis. Kids are dealing with these privacy issues in ways that we have yet to fully appreciate. They might not seem as concerned about these things, but oftentimes they just don't really understand how these infringements are going to impact them, and where. Frankly, because so many of us also don't understand how those types of privacy infringements are impacting us, and where, we're worried about what might happen, but we're not completely seeing the consequences yet. It's more difficult to find out how they feel about that.
A new study of Canadian children and youth that came out just recently has explored these issues. Increasingly, kids are even able to articulate concerns about abstract privacy infringement, which I think is a really important development. They're learning about it more, they're experiencing it more, and they're better able to communicate how it makes them feel and whether they feel their rights are being infringed.
The sad thing is that I'm not sure if they feel there's an escape, a solution, or an alternative. There certainly isn't one being presented to them right now.
:
That's a very good question. I think it's something that really needs a lot of closer study.
The same issue is starting to arise in the child gaming context. This information used to be easier to get because companies would describe their practices in their marketing materials. If I were trying to figure out what a specific site was doing, I could pick up its marketing materials and see it there, as Sara was saying. Now they've moved away from that. Those marketing materials aren't available any more, so it's not as easy to do.
It's the same issue as with the data brokers. It's not very clear to me what they're doing. Some of their marketing materials are available, so you can get a sense, but I think you need.... I don't have a solution. I think what's needed is a more in-depth investigation, with those data brokers at the table, that tries to get them to explain what their processes are.
What's been suggested is to have a centralized place where individuals can query all of these data brokers in one spot to see if their names are on there. Under PIPEDA, for example, you have a right to request that an organization give you everything it has on you, but you first have to know which organizations to go to, and what those organizations are. I don't want to send out 100,000 of these requests. If there are 100,000 data brokers, I want to be able to go to one spot, see who they are, send them requests, see what data they have on me, and then maybe correct any errors that are there.
In addition to that transparency mechanism, there's probably an analogous regulatory-ish mechanism that could be put in place that would talk to these organizations and get a sense of where their data's going, how it's being used, and where it's being collected from. That's a fact-finding type of expedition that I think would be really useful, but it's very difficult for individuals to undertake on their own.
That's a starting point.