In today’s digital age, many apps and websites available to consumers leverage artificial intelligence (AI). These tools, often unbeknownst to consumers, collect, process, and analyze personal data. This data availability supports the expansion of AI-powered technologies and significantly impacts how businesses operate, how they employ data, and how consumers share it. So, how does AI-powered technology work?

AI is integrated into our daily lives, and its effectiveness relies on consuming vast amounts of data, including personal data, to develop and train the models supporting this technology. Unlike traditional software programming, where a programmer writes explicit instructions for a computer to execute with predictable outcomes, AI algorithms need access to data to learn. The models gather data, sort it, recognize patterns, and make decisions based on what they have learned.
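To make that distinction concrete, here is a minimal sketch in Python, assuming scikit-learn is available. The spam-filtering scenario, feature names, and toy data are invented purely for illustration; real models train on far larger and messier datasets.

```python
# A minimal sketch contrasting explicit programming with a learned model.
# The "spam" features and toy data below are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: the programmer writes the rule explicitly.
def looks_like_spam(num_links: int, has_urgent_wording: bool) -> bool:
    return num_links > 5 and has_urgent_wording

# Machine learning: no rule is written. The model infers one from examples.
# Each row is [num_links, has_urgent_wording]; each label is 1 (spam) or 0 (not).
X = [[8, 1], [7, 1], [0, 0], [1, 0], [9, 1], [2, 0]]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier().fit(X, y)  # the model learns the pattern from the data
print(model.predict([[6, 1]]))              # decides based on learned patterns -> [1]
```

The point of the contrast: in the second approach, the behavior of the system is determined by the data it was fed, which is exactly why the quality, provenance, and privacy of that data matter so much.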

The personal data that AI models can collect includes text, voice, images, videos, biometrics, behaviors, and anything else shared through consumers’ interactions with technology. The data provided to these models may come from internet-connected devices, social media, third-party data brokers selling information to organizations, or internal tools developed by the organizations creating AI models. These models are trained through the ongoing collection of data, sometimes including sensitive personal data. Depending on their nature, these models can track consumers’ preferences and habits continuously, identifying patterns to predict human behavior.

Concerns about the use of AI

Consumers and regulators worldwide have raised several concerns about the use of AI and its potential privacy risks. In a 2024 KPMG survey, 63% of consumers expressed concerns about AI technology and their privacy. Major concerns include targeted scamming and phishing, bias and discrimination, and image and voice manipulation.

Regulators have launched investigations and even banned organizations from collecting and using personal data to train their AI models. For example, Clearview AI, a US technology company, was found in 2021 to have violated Canadian privacy laws following a joint investigation by four Canadian privacy commissioners. The Federal Privacy Commissioner at the time, Daniel Therrien, stated, “What Clearview does is mass surveillance, and it is illegal; it is an affront to individuals’ privacy rights and inflicts broad-based harm on all members of society who find themselves continually in a police lineup.”

In 2023, the current Canadian Federal Privacy Commissioner, Philippe Dufresne, also launched an investigation into OpenAI, the company behind ChatGPT. The Commissioner’s office highlighted concerns about “the AI company’s collection, use, and disclosure of personal information without consent,” noting that “[a]rtificial intelligence and its effects on privacy are a top priority.” In a similar vein, countries such as Italy and China have banned or restricted the use of ChatGPT.

Privacy is paramount in the context of AI technologies

The importance of privacy cannot be overstated, given how AI technology consumes data and how an AI model’s learning may impact individuals. Privacy risks related to AI go beyond the known impacts of a typical privacy breach. Individuals are at risk of manipulation when AI tools learn their preferences and behaviors. Unauthorized data collection and use may damage an individual’s reputation or inflict emotional or physical harm, as in cases of mistaken identity. AI development and training must therefore be carried out ethically, professionally, and in compliance with the law.

Organizations must appropriately govern AI technology development to avoid risks such as losing customer trust, damaging their reputation, or facing legal and regulatory consequences for inappropriate use of these tools.

The privacy-first approach to AI development and governance

Organizations can take several initial steps to develop privacy-friendly AI models that comply with regulations:

  • Understand the data you have and need: Knowing what data is already available avoids duplication and lowers the risk of data breaches. Understanding what data is needed allows for proactive planning within the boundaries of privacy law.
  • Map the data flow in AI models: Mapping a model’s inputs and outputs helps the organization identify potential threats, unauthorized access, and manipulation (a minimal sketch of such a map follows this list).
  • Conduct ongoing risk assessments and testing: Identifying “what can go wrong” in a given scenario before it occurs allows mitigation strategies to be in place when mistakes happen, because mistakes certainly will.
  • Train your staff: Provide privacy and ethical AI training to ensure everyone takes a “privacy-first” approach in the development of AI models.
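To make the mapping step concrete, here is a minimal sketch of what a machine-readable data-flow record might look like, written in Python. The field names, data sources, and values are hypothetical and not drawn from any specific framework; the idea is simply that recording each flow in a structured way lets an organization spot risky inputs before training begins.

```python
# A minimal, hypothetical sketch of a data-flow map for an AI model.
from dataclasses import dataclass

@dataclass
class DataFlow:
    source: str               # where the data enters (e.g., a mobile app)
    data_categories: list     # what personal data the flow carries
    purpose: str              # why the AI model needs it
    contains_sensitive: bool  # biometrics, health data, voice, etc.
    retention_days: int       # how long the data is kept

# Hypothetical inventory: one record per flow into the model.
flows = [
    DataFlow("mobile_app_signup", ["name", "email"], "account creation", False, 365),
    DataFlow("voice_assistant", ["voice recordings"], "model training", True, 90),
]

# A simple automated check: flag sensitive inputs for a risk assessment.
for flow in flows:
    if flow.contains_sensitive:
        print(f"Review required: '{flow.source}' feeds sensitive data into the model")
```

A structured inventory like this also supports the other steps above: it doubles as the record of what data you have, and it gives risk assessments a concrete list of inputs to test against.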

Where do we go from here?

AI technology is powerful and unprecedented, and it is here to stay. Used ethically, it can positively transform our lives in unimaginable ways. It is exciting and promising. However, with great power comes great responsibility. It is up to us, as industry leaders, to ensure these tools are used for the greater good without infringing on the rights of the individuals we serve.
