HUMA Committee Report

Implications of Artificial Intelligence Technologies for the Canadian Labour Force

Introduction

Automation, the use of technology to perform a process or procedure with minimal or no human assistance, has been used in workplaces for decades. Artificial intelligence (AI) is an emerging technology that is increasingly being adopted in businesses, workplaces and classrooms. Its current and future impacts on the world of work remain unclear, though many agree that there will be significant shifts in how large portions of the workforce perform their daily tasks.

On 2 June 2023, the House of Commons Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities (HUMA or the committee) adopted the following motion:

That, pursuant to Standing Order 108(2), the committee undertake a study regarding the implications of artificial intelligence technologies for the Canadian labor force; that the study will examine the impact these technologies may have upon, including but not limited to: regions, age groups, organized labor, workforce sectors, gender, person with disabilities, income levels, and race within Canada. That the committee hold a minimum of five meetings on this study; and that the committee invite to testify representatives, including but not limited to: representatives from Statistics Canada, the Minister of Innovation, Science and Technology, the Minister of Labour and related department officials. That the committee report its findings and recommendations to the House; and that pursuant to Standing Order 109, the committee request that the government table a comprehensive response to the report.[1]

Over the course of this study, the committee heard from 21 witnesses, including government officials, researchers and academics, labour groups and businesses, and received 10 written briefs. The committee thanks all those who participated in the study for their contributions.

Following a brief overview of AI technologies and their current and potential effects on the Canadian labour force, this report highlights key testimony the committee received relating to: AI’s potential impacts on workers, businesses and workplaces; the need to develop a mechanism for the federal government to hear from experts on emerging AI issues; and better data collection and research. Based on the evidence heard, the committee provides recommendations to the Government of Canada.

Background Information

What Is Artificial Intelligence?

An AI system is defined in Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, as “a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.”[2] In the context of HUMA’s study, AI was more broadly described by the Council of Canadian Innovators as being “fundamentally a family of technologies that makes both labour and capital inputs more efficient.”[3]

During its study, the committee heard about early research on AI in Canada over the last six decades.[4] A variety of more modern types of AI and their uses in the workplace were discussed, such as: large language models like generative pre-trained transformers (GPTs), which allow AI applications to create human-like text and content; image generators, which create images based on text prompts using content pulled from a database or search engine; and algorithmic management, which uses AI and predictive analytics to manage employees, including such aspects as scheduling work, assigning tasks and evaluating performance.[5] The committee also heard about the estimated value of “the broad AI sector”; one witness noted that it is currently valued at around $200 billion, will likely expand to around $2 trillion by 2030, and has the potential to be an “era-defining” technology that could bolster Canada’s economy.[6]

The Canadian Labour Force and Artificial Intelligence

Technologies that affect the labour force or change the way work is performed are not new; AI, however, is more recent and, as the committee heard, has not yet had a major impact on large portions of the workforce.[7] Marguerita Lane, Economist at the Organisation for Economic Co-operation and Development (OECD), noted that in a survey conducted by the OECD in 2022, firms in countries including Canada indicated that AI has not yet “had a massive effect on employment, or hasn’t reduced employment” as “AI mostly tends to automate tasks, rather than jobs.”[8] Statistics Canada similarly shared with the committee that, according to its 2022 Survey of Advanced Technology, 3.1% of firms had adopted AI at some point.[9] However, Laurent Carbonneau, Director at the Council of Canadian Innovators, told the committee that Canadian companies were less likely to be using AI than their international counterparts, particularly those in the United States and the United Kingdom.[10] The Dais similarly noted in its brief that in 2021, Canada ranked 21st out of 38 OECD countries when it came to AI adoption in businesses.[11]

Regardless, Marc Frenette, Research Economist at Statistics Canada, indicated that in Canada, technological adoption has historically been associated with job transformation.[12] Over time, certain advances in technology have changed the way people work – and many predict this will be the case with AI. This was echoed by Morgan Frank, Professor at the University of Pittsburgh, who indicated that “technology does not automate occupations wholesale, but instead automates specific activities within a job” and that the impact of AI will be felt through the realignment of skills and tasks.[13]

Additionally, witnesses discussed the likelihood that AI technologies will impact some businesses and sectors more than others. In particular, the committee heard that larger companies are more likely to be early AI adopters because they can better absorb the high costs of implementing new technologies and are better positioned to attract top talent through higher wages.[14] In its brief, the Dais referenced the Statistics Canada Survey of Digital Technology and Internet Use, noting that in 2021, 20% of firms with over 100 employees had adopted AI, while just 3% of businesses with fewer than 20 employees had done so.[15]

Finally, witnesses indicated that new types of jobs will likely emerge as a result of AI technologies in areas such as AI development, cybersecurity, ethical management, data labelling, hardware specialization or prompt engineering.[16] James Bessen, Professor at Boston University, suggested that expanding the use of AI in workplaces will not result in mass unemployment but that AI “will require people to change jobs, to acquire new skills, to maybe change locations or to learn new occupations.”[17]

Issues Affecting Workers

Throughout the study, many witnesses spoke to the diverse impacts AI technologies are having and will likely have on workers. For the most part, these comments fell into three categories: the need for better protection of workers’ rights, including privacy, the use of workers’ personal information, and intellectual property; the differential impacts AI technologies may have on various groups of workers; and the need for workers to receive training or upskilling to adjust to AI implementation in workplaces. These topics are explored in the following sections.

Protection of Workers’ Rights

The committee heard from a variety of witnesses on the need for protection of workers’ rights as AI is implemented across sectors and in workplaces. As Fenwick McKelvey, Associate Professor at Concordia University, noted, “AI will affect the labour force, and these effects will be unevenly distributed … AI’s effects are not simply about automation but about the quality of work.”[18] Théo Lepage-Richer, Social Sciences and Humanities Research Council and Fonds de recherche du Québec Postdoctoral Fellow, University of Toronto, echoed and elaborated on this, indicating that “rather than completely replacing positions, artificial intelligence tends to be deployed in such a way as to … limit the opportunities that workers have to exercise their judgment, reduce the dependence of organizations on certain forms of expertise and replace investments in training and workforce development.”[19] While the committee learned about the benefits AI could bring to workers, it also heard that AI will likely increase the pace at which work is completed; one witness recognized that “increased work intensity can also induce psychosocial risks such as increased stress and anxiety.”[20]

Several witnesses stressed to the committee the need to protect workers swiftly. As Danick Soucy, President, Canadian Union of Public Employees – Quebec, noted, protections must be established “before companies undertake large-scale implementation, so that everything possible is done to avoid bringing in systems that cause problems for workers.”[21] Witnesses shared a variety of ways in which workers could be better protected, including:

  • Create a framework or “road map for appropriate regulation of AI development and adoption in workplaces,” which could include “a strategy for ensuring a voice for workers and unions in the regulation and oversight of AI;”[22]
  • Determine how to protect workers from “invasive data-gathering” that could reduce their autonomy or “train less skilled workers or automated replacements;”[23]
  • Obligate employers to declare the use of AI in the workplace, including requiring consultation with employees or worker representatives in the design and implementation of the use of AI;[24]
  • Have “clear laws that define the responsibilities around the use of AI,” including adapting the Canada Labour Code to address the impacts of these new technologies;[25] and
  • Include guidelines that protect the rights of workers in Bill C-27.[26]

Marguerita Lane indicated, however, that current legislation may be sufficient to protect workers.[27] Conversely, Gillian Hadfield, Chair and Director at the Schwartz Reisman Institute for Technology and Society, pointed out that Canada’s current laws and regulations were designed before the existence of AI and that these should be updated to “address the unique challenges and opportunities that AI presents.”[28]

Overall, worker protections were noted as a major concern in large part due to the nature of AI technologies, which David Autor, Ford Professor at the Massachusetts Institute of Technology, described as “opaque”; since it may not always be clear how decisions about work are being made, a major concern is protecting those most likely to feel AI’s impact.[29]

Privacy Concerns

A particular concern the committee heard from witnesses and in briefs related to workers’ privacy and the ways in which AI technologies may be applied to monitor employees and to use their personal information.[30] For example, the British Columbia Civil Liberties Association indicated in its brief that while employers can already use a range of technologies to monitor employees, such as video surveillance or keyloggers that capture keystrokes, more recent AI technology “incentivizes the expansion of workplace surveillance by enabling more surveillance for the same or fewer resources.”[31] In its brief, First West Credit Union recognized these privacy concerns, indicating that they create a need to restrict the provision of sensitive information, such as financial records, employee data or confidential information, when using AI tools.[32]

Some witnesses suggested adopting further regulation or legislation to address the use of personal information as AI technology is implemented in workplaces. The committee heard that this should include directives on the types of personal information that may be provided to AI systems to develop and train them.[33] Morgan Frank cautioned the committee that given the nature of this technology, data from one population could be used in another country that may operate under different rules – something that would need to be considered when developing legislation.[34]

Intellectual Property

Issues relating to AI technology’s impact on workers’ intellectual or cognitive property were also discussed. Witnesses provided examples of AI using people’s work or property, such as a performer’s name, image or likeness, or an author’s writing, and raised more general concerns about the way that large language models are trained on text and material, some of which is copyrighted.[35]

Some told the committee it is important to protect workers’ intellectual property through regulation or legislation.[36] The Alliance of Canadian Cinema, Television and Radio Artists emphasized in its brief the importance of better defining “personal information” to include an individual’s “personality rights,” which would include biometric data, and of preventing the unauthorized use of such personality rights.[37]

Impacts on Workers with Diverse Identities and Perpetuating Bias

With respect to impacts on workers with diverse identity factors, the committee heard testimony that fell into two main categories: the benefits and negative impacts AI can cause for diverse groups of workers, and the bias and discrimination AI may reproduce.

Benefits and Disadvantages for Workers with Diverse Identities

Witnesses spoke to the benefits that workers with disabilities could reap with implementation of AI in the workplace. For example, Angus Lockhart, Senior Policy Analyst at the Dais at Toronto Metropolitan University, shared with the committee the opportunities that AI tools could present for workers with disabilities, indicating there are instances “in which AI has been used to improve the capacity of people with disabilities to operate in a workplace.”[38] Fenwick McKelvey provided the committee with the example of workers who do not have English or French as a first language and stated that AI technologies could serve as a proxy, which could “ultimately be beneficial in allowing people who are non-native [English or French] speakers to actually access those [writing] skills.”[39]

However, many witnesses cautioned about AI’s potential drawbacks for workers with diverse identity factors. Some noted that it has the potential “to increase socio-economic inequalities,” as some jobs may become obsolete while other higher-skilled jobs are created, resulting in “a gap that could disproportionately affect certain segments of the workforce.”[40] As the Dais noted in its brief, women are currently underrepresented in technology occupations; additionally, businesses that are majority owned by women, Indigenous people or persons with disabilities are less likely to have adopted AI technology than other businesses.[41] Fenwick McKelvey also pointed out that AI technologies are beginning to affect precarious or “gig” workers and could increase their numbers; he predicted that jobs that “have already been deskilled or marginalized” could become even more so.[42] Additionally, Morgan Frank noted that workers in rural areas are less likely to experience the benefits of AI, as work that can leverage the use of AI is more likely to be performed in cities, given the requirement for access to high-speed Internet.[43] Accordingly, witnesses spoke to the need to ensure that diverse perspectives are included in the development and implementation of AI.[44] As Marguerita Lane told the committee, it will be important to ensure that the benefits of AI “are available to everybody and to every company, as well as every worker.”[45]

Artificial Intelligence Reproducing Bias

The committee also heard about the potential of AI technologies to reproduce existing societal bias and the ethical concerns surrounding this.[46] The brief submitted by the British Columbia Civil Liberties Association summarized the issue:

AI models are only as good as their training data: they have no knowledge or insight into the world beyond what is included in that data, and no independent judgment or sense of ethics … If discriminatory bias is present in the dataset that the AI model is trained on – whether in the underlying data itself or in the way the data has been packaged and formatted to render it intelligible to the program – the AI will replicate this pattern … There is also a risk that bias will be introduced if an AI model is used for a different purpose than it was trained for, as the new context may raise forms or expressions of bias that were not considered or controlled for in the preparation of the training dataset.[47]

As the Ontario Nonprofit Network noted in its brief, “AI is only as informed as those who build it” and “the technology can profoundly perpetuate and deepen inequities” if its builders are not diverse.[48] To compensate, some witnesses indicated the need for education on the potential for bias in AI technologies or to require developers to be transparent with regard to which data and information were used to train AI software.[49] Nicole Janssen, Co‑founder of AltaML Inc., summarized the need for implementing AI in a responsible manner, which would involve transparency, accountability and privacy.[50]

Need for Training

Throughout the study, several witnesses described the need for training and education to help workers adapt to changes related to AI implementation, since AI’s most direct impact is likely to be “through a shift in workers’ skills and activities.”[51] First, many described the need for workers affected by the adoption of AI to receive training and stated that greater learning supports will be “imperative to ensure that people of diverse age groups, genders, income levels, and races are supported.”[52] Witnesses noted that employers should be required to train or retrain employees affected by the adoption of AI or provide them with opportunities to move to other positions; as Danick Soucy indicated, if these individuals end up out of work, it will ultimately be society as a whole that has to support them.[53]

Suggestions regarding the need for broader AI-related training and information were also received. Chris Roberts, National Director at the Canadian Labour Congress, cited the importance of investing in training and “continuous and lifelong learning opportunities” and of ensuring that such training is “equitably distributed.”[54] McGill University advocated for establishing scholarships or bursaries for adult learners, with a focus on underrepresented or underserved groups, women returning to the workforce and older workers, among others.[55]

In their briefs, Chegg Inc. and McGill University both suggested a need for increasing AI awareness more globally through a variety of mechanisms, including an awareness campaign, and that this should include an AI literacy component for the greater public.[56] Finally, Laurent Carbonneau summarized the issue by noting that companies in Canada are having trouble finding people with the right skillsets and are losing talent to companies in other countries that are able to provide higher wages.[57]

Considering the testimony relating to supporting workers’ rights, the committee makes the following recommendations:

Recommendation 1

That Employment and Social Development Canada, with Justice Canada, undertake a review of federal labour legislation to assess its capacity to protect diverse workers’ rights in the context of current and future implementation of artificial intelligence technologies.

Recommendation 2

That Employment and Social Development Canada develop a framework, in collaboration with provinces and territories and labour representatives, to support the ethical adoption of artificial intelligence technologies in workplaces.

Recommendation 3

That Employment and Social Development Canada invest in skills training to increase the adaptability of the Canadian workforce to the use of artificial intelligence technologies.

Recommendation 4

That the Office of the Privacy Commissioner of Canada undertake a review of how artificial intelligence is affecting the privacy of Canadian workers; that it develop appropriate regulations to ensure the protection of Canadians from artificial intelligence and ensure that those regulations can be, and are, properly enforced; and that it consider how those regulations will interact with provinces and territories.

Impacts for Workplaces and Businesses

Another main topic discussed throughout the study centred on the current and future impacts of AI technologies on workplaces and businesses. Witnesses focused on the benefits of AI in terms of compensating for high employee turnover and increasing worker productivity but also raised concerns about the need for Canadian companies to remain competitive, both internationally and between sectors. Ethical considerations were also mentioned.

Benefits for Workplaces and Businesses

Several witnesses discussed the benefits that AI could bring to workplaces by compensating for high employee turnover or gaps in the workforce. Ryan Smith, Divisional Director of Planning and Development at the City of Kelowna, spoke of the ways in which the city is leveraging AI tools to fill labour gaps and reduce repetitive work for employees. He noted that high turnover among frontline staff results in a lower knowledge base, because employees have been in their roles for only a short time, and provided one example of using AI tools to “permit housing faster, with less red tape.”[58] AI tools are being used to complete tasks more efficiently or optimize manual workloads; they have also been used to perform repetitive, often time-intensive, tasks such as answering phone calls about snow removal or producing financial reports that summarize large volumes of data.[59] Another example, provided by First West Credit Union in its brief, was the use of AI to respond to repetitive process questions to gain efficiencies and provide faster client service.[60]

AI technologies can also compensate for lower levels of education among employees; David Autor expressed the belief that “AI can create novel opportunities for … low and middle-educated workers. With the support of AI tools, these workers could perform tasks that had previously required more costly training and highly specific knowledge.” He added that AI tools will not make human experience irrelevant; instead, “AI can enable valuable expertise to go further.”[61]

Laurent Carbonneau mentioned that productivity in Canada, measured by gross domestic product per hour worked, had dipped back to its 2018 level and fallen below the OECD average.[62] Witnesses described the potential of AI technologies to increase productivity by helping workers focus on more strategic or “higher-level tasks that are better rewarded in the labour market” while letting “computer algorithms handle the more repetitive tasks.”[63] Implementing AI to handle recurring tasks could “facilitate mobility” and “improve employee satisfaction by efficiently matching workers with tasks.”[64] Morgan Frank described the efficiency and productivity gains that could be achieved, providing the example of graphic designers, who could leverage AI technologies to complete portions of their work in less time.[65] Another example, provided in a brief to the committee by Peter Cihon, was of computer programmers being able to write software up to 55% faster.[66]

The committee also heard of the benefits to workplaces that could arise from more access to talent: AI could grow labour pools by increasing people’s abilities to perform tasks through augmentation as well as provide workers with more opportunities to work across what were previously geographic boundaries.[67]

However, two witnesses suggested caution in this regard. Fenwick McKelvey noted that certain tasks or jobs being completed more quickly or efficiently could result in deskilling or “a kind of devaluing of the type of labour that’s being done.”[68] David Kiron, Editorial Director at the Massachusetts Institute of Technology Sloan Management Review, warned further that AI could “create work without jobs,” in that designing work around tasks or projects, as can be the case when AI is leveraged, could “increase reliance on contingent workers for whom fewer benefits are required” and noted that this could also lead to greater reliance on social benefits.[69]

Notably, witnesses consistently indicated that, to date, implementing AI technologies has not resulted in job losses, with some even arguing that there will be net job gains as a result.[70] Witnesses indicated in this regard that AI is used to “augment humans,” not replace them, and that it frees up workers’ time to perform different or more complex tasks.[71]

Better Supports for Workplaces and Businesses

Discussions touched on the need to support businesses in Canada in remaining competitive in the global market, particularly in relation to AI technologies, given that they are digital in nature and “easy to share across borders.”[72] Accordingly, two witnesses indicated a need to adopt a harmonized approach so that Canada’s regulations align with those of other jurisdictions, such as the United States or the European Union; to invest more in Canadian companies;[73] and to address AI in trade agreements or treaties so that companies in Canada are not left at a disadvantage.[74]

The committee heard words of caution from others regarding the risks to businesses and sectors in Canada of not “keeping pace” with technological change.[75] Some witnesses indicated that AI has increased the dominance of larger firms; Anthony Durocher, Deputy Commissioner at the Competition Bureau of Canada, advised the committee on regulation, noting that “smaller players have much fewer resources they can devote to compliance and to [a] regulatory regime” and that governments must therefore “be mindful of the potential undue burden on smaller players to make sure that it’s as level a playing field as it can be in the sector.”[76] The committee heard that the Competition Bureau of Canada is preparing for challenges that may arise from the use of AI, including forming a Canadian digital regulators forum with the Office of the Privacy Commissioner of Canada and the Canadian Radio-television and Telecommunications Commission.[77]

Imagine Canada and the Ontario Nonprofit Network noted in their briefs the importance of also providing supports to the nonprofit sector, indicating that nonprofit organizations in Canada have an important role in creating and governing AI.[78]

Lastly, the committee heard about the ethical side of using AI tools in business. Ryan Smith cited the need to be transparent with the end user by clearly indicating when AI was used in the process of generating products or responses – for example, in cases where an AI tool was applied to generate a response to a client’s question.[79] Further, James Bessen told the committee that ethical considerations with respect to AI adoption are “going to become more important as these systems develop and we understand more about what they can do and what their effects will be.”[80]

The committee sees increased need to support smaller businesses and nonprofit organizations in this regard and, accordingly, recommends:

Recommendation 5

That Innovation, Science and Economic Development Canada deliver dedicated funding to support small businesses and nonprofit organizations in all regions of the country, including rural Canada, in adopting artificial intelligence technologies in an ethical manner that supports Canadian productivity, has clear objectives, is transparent and accountable, and includes clear measurement of results.

Recommendation 6

That the Government of Canada seek ways to pragmatically increase efficiency and productivity and reduce red tape in its operations and workplaces by utilizing artificial intelligence.

Mechanism to Hear from Experts

Some witnesses and briefs raised the need for an advisory table, or similar mechanism, for the federal government to receive regular advice on emerging issues related to AI.[81] As Gillian Hadfield noted, an advisory table is an agile method “for increasing government visibility into how things are changing on the ground.”[82] The committee recognizes the Advisory Council on Artificial Intelligence, established in 2019, whose mandate includes identifying opportunities in the AI sector and making recommendations to the Government of Canada for ensuring Canadians benefit from the growth in the AI sector, leveraging AI to foster job growth in Canada and ensuring people have the skills and training for these jobs.[83] Witnesses suggested that an advisory group should also provide recommendations about areas of concern, identify data gaps, disseminate research and address emerging technology issues.[84] Regarding membership, witnesses indicated the importance of having diverse representation, such as industry, universities, civil society, workers and labour organizations.[85]

Another suggestion the committee heard was to create a role of parliamentary science and technology officer that could play a function similar to that of the Parliamentary Budget Officer. Laurent Carbonneau noted that this role would allow access to “timely, actionable information on emerging technology and science issues that would help inform a lot of these debates and give us all a level ground to understand a lot of these emerging technology issues.”[86]

Given the testimony received, the committee recommends:

Recommendation 7

That Innovation, Science and Economic Development Canada ensure that the membership of the Advisory Council on Artificial Intelligence encompasses a wide diversity of perspectives, including labour, academia, civil society and the private sector; that the Advisory Council be asked to undertake work to examine mechanisms to protect workers and to identify existing data and research gaps; and that the department report back to the committee on these matters within one year.

Data Collection

Throughout the study, witnesses referenced many data points to describe the impacts of AI. However, those who spoke to data often indicated that it was limited and that additional data collection would benefit decision makers. Several suggestions were provided relating to the need for new or expanded data collection to better examine the impacts of AI on the labour force, including:

  • Measuring job separations with better granularity, such as by industry, firm or job title, or reason for separation (to capture the case where an employee is not adaptable to new demands), to have a better understanding of the shifts in skillsets resulting from AI;[87]
  • Tracking within-occupation skill changes to predict which types of skills AI will enable;[88]
  • Capturing “unemployment risk” by occupation or industry – something that is not typically measured, as someone who is unemployed does not have an occupation; however, estimating the probability of becoming unemployed could allow better understanding of AI job disruptions;[89]
  • Ongoing data collection through the Statistics Canada Survey of Digital Technology and Internet Use; Canadian Internet Use Survey; and Canadian Survey of Cyber Security and Cyber Crime, all of which are currently conducted on an occasional basis;[90] and
  • Overall continued monitoring of labour market impacts over time.[91]

The committee also received suggestions relating to better understanding AI use in workplaces through research. The Dais at Toronto Metropolitan University advocated for more research to better understand these types of technologies and how they impact productivity, working conditions and workers themselves.[92] David Autor indicated the importance of understanding which tasks AI is applied to, in which sectors and for which types of activities. His observations were echoed by Morgan Frank, who also mentioned the need for more information on where these types of skills are employed in the workforce.[93]

Given the suggestions received, the committee makes the following recommendation:

Recommendation 8

That Statistics Canada develop a methodology to monitor labour market impacts of artificial intelligence technologies over time, including by collecting data on job separations by reason for job separation and industry type and by tracking unemployment risk by occupation.

Conclusion

Throughout this study, HUMA members received testimony relating to the benefits and risks of AI technologies for the Canadian labour force. They heard that AI technologies are being implemented at a rapid pace and that, while technological shifts can be disruptive, there are likely to be benefits, including for productivity and growth. During the study, witnesses discussed the importance of addressing concerns for workers, businesses and the labour market in general before it is too late.

While the risks and benefits presented to the committee were vast, the recurring themes were the need to protect workers, the need to support businesses and the need to strengthen access to information, data and research to facilitate better decision making. The committee sees the need for the Government of Canada to strengthen the ways in which workers are protected, to support the ethical adoption of AI and to undertake additional data collection to monitor the current and inevitable future impacts of AI technologies on the Canadian labour force.


[1]              House of Commons Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities (HUMA), Minutes, 2 June 2023.

[2]              Bill C-27 An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 44th Parliament, 1st Session.  At the time of writing, Bill C-27 has completed second reading and is being considered by the House of Commons Standing Committee on Industry and Technology.

[3]              HUMA, Evidence, 1 November 2023, 1645 (Laurent Carbonneau, Director, Policy and Research, Council of Canadian Innovators).

[4]              HUMA, Evidence, 20 November 2023, 1215 (Théo Lepage-Richer, Social Sciences and Humanities Research Council and Fonds de recherche du Québec Postdoctoral Fellow, University of Toronto).

[5]              HUMA, Evidence, 8 November 2023, 1800 (Fenwick McKelvey, Associate Professor, Information and Communication Technology Policy, Concordia University).

[6]              HUMA, Evidence, 1 November 2023, 1645 (Carbonneau); HUMA, Evidence, 22 November 2023, 1645 (Yana Lukasheh, Vice-President, Government Affairs and Business Development, SAP Canada Inc.).

[7]              HUMA, Brief, the Dais.

[8]              HUMA, Evidence, 1 November 2023, 1630 (Marguerita Lane, Economist, Organisation for Economic Co‑operation and Development).

[9]              Background document submitted to the committee by Statistics Canada, 21 November 2023.

[10]            HUMA, Evidence, 1 November 2023, 1645 (Carbonneau).

[11]            HUMA, Brief, the Dais.

[12]            HUMA, Evidence, 1 November 2023, 1650 (Marc Frenette, Research Economist, Statistics Canada).

[13]            HUMA, Evidence, 8 November 2023, 1630 (Morgan Frank, Professor, Department of Informatics and Networked Systems, University of Pittsburgh).

[14]            HUMA, Evidence, 20 November 2023, 1100 (James Bessen, Professor, Technology & Policy Research Initiative, Boston University); HUMA, Evidence, 1 November 2023, 1650 (Frenette); HUMA, Evidence, 8 November 2023, 1635 (McKelvey); HUMA, Evidence, 1 November 2023, 1700 (Carbonneau).

[15]            HUMA, Brief, the Dais.

[16]            HUMA, Evidence, 20 November 2023, 1220 (Nicole Janssen, Co-Founder, AltaML Inc.).

[17]            HUMA, Evidence, 20 November 2023, 1100 (Bessen).

[18]            HUMA, Evidence, 8 November 2023, 1635 (McKelvey).

[19]            HUMA, Evidence, 20 November 2023, 1215 (Lepage-Richer).

[20]            HUMA, Evidence, 1 November 2023, 1635 (Lane).

[21]            HUMA, Evidence, 22 November 2023, 1640 (Danick Soucy, President, Political Official, Committee on New Technologies, Canadian Union of Public Employees - Quebec).

[22]            HUMA, Evidence, 1 November 2023, 1640 (Chris Roberts, National Director, Social and Economic Policy Department, Canadian Labour Congress). See also HUMA, Evidence, 20 November 2023, 1135 (Angus Lockhart, Senior Policy Analyst, the Dais at Toronto Metropolitan University); HUMA, Evidence, 22 November 2023, 1645 (Soucy); HUMA, Evidence, 20 November 2023, 1125 (Olivier Carrière, Executive Assistant to the Quebec Director, Unifor).

[23]            HUMA, Evidence, 8 November 2023, 1635 (McKelvey).

[24]            HUMA, Evidence, 22 November 2023, 1640 (Soucy); HUMA, Evidence, 20 November 2023, 1130 (Carrière); HUMA, Evidence, 20 November 2023, 1230 (Janssen).

[25]            HUMA, Evidence, 22 November 2023, 1700 (Soucy).

[26]            HUMA, Evidence, 22 November 2023, 1710 (Nathalie Blais, Research Representative, Canadian Union of Public Employees - Quebec).

[27]            HUMA, Evidence, 1 November 2023, 1635 (Lane).

[28]            HUMA, Evidence, 20 November 2023, 1210 (Gillian Hadfield, Chair and Director, Schwartz Reisman Institute for Technology and Society).

[29]            HUMA, Evidence, 20 November 2023, 1255 (David Autor, Ford Professor, Massachusetts Institute of Technology).

[30]            HUMA, Evidence, 22 November 2023, 1720 (David Kiron, Editorial Director, Massachusetts Institute of Technology Sloan Management Review); HUMA, Evidence, 22 November 2023, 1640 (Soucy); HUMA, Evidence, 22 November 2023, 1735 (Blais).

[31]            HUMA, Brief, British Columbia Civil Liberties Association.

[32]            HUMA, Brief, First West Credit Union.

[33]            HUMA, Evidence, 20 November 2023, 1120 (Lockhart); HUMA, Evidence, 20 November 2023, 1125 (Bessen); HUMA, Evidence, 20 November 2023, 1125 (Carrière); HUMA, Evidence, 8 November 2023, 1640 (McKelvey); HUMA, Brief, First West Credit Union.

[34]            HUMA, Evidence, 8 November 2023, 1745 (Frank).

[35]            HUMA, Brief, Alliance of Canadian Cinema, Television and Radio Artists (ACTRA); HUMA, Evidence, 22 November 2023, 1720 (Kiron); HUMA, Evidence, 20 November 2023, 1120 (Bessen); HUMA, Evidence, 20 November 2023, 1255 (Autor).

[36]            HUMA, Brief, Ontario Nonprofit Network; HUMA, Evidence, 20 November 2023, 1255 (Autor); HUMA, Brief, ACTRA.

[37]            HUMA, Brief, ACTRA.

[38]            HUMA, Evidence, 20 November 2023, 1200 (Lockhart). See also HUMA, Evidence, 22 November 2023, 1715 (Lukasheh); HUMA, Evidence, 1 November 2023, 1720 (Roberts).

[39]            HUMA, Evidence, 8 November 2023, 1700 (McKelvey).

[40]            HUMA, Brief, Imagine Canada.

[41]            HUMA, Brief, the Dais.

[42]            HUMA, Evidence, 8 November 2023, 1700, 1755 (McKelvey).

[43]            HUMA, Evidence, 8 November 2023, 1720 (Frank).

[44]            HUMA, Brief, the Dais; HUMA, Evidence, 8 November 2023, 1800 (McKelvey); HUMA, Evidence, 20 November 2023, 1200 (Lockhart); HUMA, Evidence, 1 November 2023, 1640 (Roberts).

[45]            HUMA, Evidence, 1 November 2023, 1700 (Lane).

[46]            HUMA, Evidence, 22 November 2023, 1640 (Soucy); HUMA, Evidence, 1 November 2023, 1710 (Lane).

[47]            HUMA, Brief, British Columbia Civil Liberties Association.

[48]            HUMA, Brief, Ontario Nonprofit Network.

[49]            HUMA, Evidence, 22 November 2023, 1640 (Soucy); HUMA, Evidence, 22 November 2023, 1700 (Blais); HUMA, Evidence, 20 November 2023, 1255 (Autor).

[50]            HUMA, Evidence, 20 November 2023, 1255 (Janssen).

[51]            HUMA, Evidence, 8 November 2023, 1630 (Frank).

[52]            HUMA, Brief, Chegg Inc.

[53]            HUMA, Evidence, 1 November 2023, 1800 (Roberts). See also HUMA, Evidence, 22 November 2023, 1645 (Soucy); HUMA, Evidence, 1 November 2023, 1700 (Lane).

[54]            HUMA, Evidence, 1 November 2023, 1730 (Roberts). See also HUMA, Evidence, 22 November 2023, 1710 (Blais); HUMA, Evidence, 8 November 2023, 1630 (Frank).

[55]            HUMA, Brief, McGill University.

[56]            Ibid.; HUMA, Brief, Chegg Inc.

[57]            HUMA, Evidence, 1 November 2023, 1700 (Carbonneau).

[58]            HUMA, Evidence, 6 November 2023, 1100, 1115 (Ryan Smith, Divisional Director of Planning and Development, City of Kelowna).

[59]            HUMA, Evidence, 6 November 2023, 1125 (Smith); HUMA, Evidence, 22 November 2023, 1655 (Lukasheh).

[60]            HUMA, Brief, First West Credit Union.

[61]            HUMA, Evidence, 20 November 2023, 1205 (Autor).

[62]            HUMA, Evidence, 1 November 2023, 1645 (Carbonneau).

[63]            HUMA, Evidence, 1 November 2023, 1650 (Frenette).

[64]            HUMA, Evidence, 22 November 2023, 1635 (Kiron). See also HUMA, Evidence, 22 November 2023, 1645 (Lukasheh).

[65]            HUMA, Evidence, 8 November 2023, 1650 (Frank). See also HUMA, Evidence, 1 November 2023, 1720 (Carbonneau).

[66]            HUMA, Brief, Peter Cihon.

[67]            HUMA, Evidence, 22 November 2023, 1635 (Kiron).

[68]            HUMA, Evidence, 8 November 2023, 1700 (McKelvey).

[69]            HUMA, Evidence, 22 November 2023, 1635 (Kiron).

[70]            HUMA, Evidence, 20 November 2023, 1220 (Janssen).

[71]            HUMA, Evidence, 20 November 2023, 1255 (Autor). See also, HUMA, Evidence, 6 November 2023, 1110 (Smith); HUMA, Evidence, 20 November 2023, 1220 (Janssen).

[72]            HUMA, Evidence, 8 November 2023, 1740 (Frank).

[73]            Current investments are being made through the Pan-Canadian Artificial Intelligence Strategy. These include up to $60 million for the National Artificial Intelligence Institutes to grow the capacity of businesses to adopt AI technologies; $125 million to strengthen the adoption of made-in-Canada AI technologies by businesses, public and nonprofit entities; and $208 million to attract, retain and develop academic talent and maintain centres of research, including training and knowledge mobilization programs. See Innovation, Science and Economic Development Canada, Pan-Canadian Artificial Intelligence Strategy.

[74]            HUMA, Evidence, 8 November 2023, 1740 (McKelvey); HUMA, Evidence, 1 November 2023, 1725, 1755 (Carbonneau).

[75]            HUMA, Evidence, 20 November 2023, 1110 (Lockhart).

[76]            HUMA, Evidence, 6 November 2023, 1140 (Anthony Durocher, Deputy Commissioner, Competition Promotion Branch, Competition Bureau Canada).

[77]            HUMA, Evidence, 6 November 2023, 1105 (Durocher).

[78]            HUMA, Brief, Ontario Nonprofit Network; HUMA, Brief, Imagine Canada.

[79]            HUMA, Evidence, 6 November 2023, 1120 (Smith).

[80]            HUMA, Evidence, 20 November 2023, 1120 (Bessen).

[81]            HUMA, Evidence, 1 November 2023, 1640 (Roberts); HUMA, Evidence, 8 November 2023, 1720 (Frank); HUMA, Evidence, 20 November 2023, 1255 (Autor).

[82]            HUMA, Evidence, 20 November 2023, 1240 (Hadfield).

[84]            HUMA, Evidence, 1 November 2023, 1715 (Carbonneau); HUMA, Evidence, 1 November 2023, 1640 (Roberts).

[85]            HUMA, Evidence, 8 November 2023, 1720 (Frank); HUMA, Brief, McGill University; HUMA, Evidence, 1 November 2023, 1730 (Roberts); and HUMA, Evidence, 20 November 2023, 1140 (Lockhart).

[86]            HUMA, Evidence, 1 November 2023, 1715 (Carbonneau).

[87]            HUMA, Evidence, 8 November 2023, 1745 (Frank); HUMA, Brief, Morgan Frank.

[88]            HUMA, Brief, Morgan Frank.

[89]            Ibid.

[90]            HUMA, Brief, the Dais.

[91]            HUMA, Brief, Peter Cihon; HUMA, Evidence, 1 November 2023, 1740 (Vincent Dale, Director General, Labour Market, Education and Socio-Economic Wellbeing Statistics, Statistics Canada).

[92]            HUMA, Brief, the Dais. See also HUMA, Evidence, 20 November 2023, 1235 (Autor); HUMA, Evidence, 1 November 2023, 1750 (Frenette).

[93]            HUMA, Evidence, 20 November 2023, 1235 (Autor); HUMA, Evidence, 8 November 2023, 1630 (Frank).