Ekaterina Lyapustina
7 min read · Apr 27, 2021


Data Privacy Risks to Consider When Using AI

Credit: Gerd Altmann • Freiburg/Deutschland

By Ekaterina Lyapustina and Jared Maslin

Since the go-live of regulations like the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA and CCPA 2.0) in California, we’ve seen consistent growth and evolution of new and more stringent standards for the collection and processing of personal information. Amongst many nuanced requirements, these laws consistently seek to provide consumers with enhanced rights when it comes to traditional personal information practices and how businesses interact with their user bases. As requirements evolve, so must the available technologies for preserving privacy in an efficient and effective manner. One such instance is the introduction of artificial intelligence (AI) to privacy challenges on a global scale.

Recent advances in AI have left little doubt that it can solve many routine issues and address a variety of business challenges, from predicting consumers’ needs, preferences, and wants to spotting questionable charges in a pile of thousands of invoices. Understanding the logic underlying any specific AI-based solution, including its algorithmic processes and decision-making criteria, presents new and dynamic challenges for privacy officers and executives responsible for data protection and governance across industries, from fashion retailers and healthcare companies to cloud service providers.

Given a general absence of targeted regulatory or legal obligations and responsibilities, AI poses both new ethical as well as practical challenges for organizations striving to maximize consumer benefits while minimizing potential harms.

How Can Artificial Intelligence Compromise Privacy?

There are many advantages to AI that make it attractive and ideal for use in information gathering and analytics. A few clear benefits that come to mind are scale, speed, and automation. It is no secret that artificial intelligence can perform computations far faster and more efficiently than any human data analyst. Furthermore, its scale can be increased almost arbitrarily by adding more hardware, which is far more difficult with human capital.

AI technology is also better at utilizing large data sets for analysis. It is perhaps the only feasible way to process truly “big data” at the scale that today’s business environment demands. These foundational capabilities mean that AI, through the performance of recurring operations, can also have a tangible impact on privacy in many different ways. Let’s take a look at a few examples:

1. Identification, Monitoring, and Tracking

AI can be a powerful tool for processing large data sets to identify users, monitor them, and track their behavior across multiple devices, such as smartphones, whether they are at home, at work, or in any public venue. Alongside the benefits of doing so, this also means that even if your customers’ data is considered “anonymized,” AI may be able to de-anonymize it through statistical inference and prediction once it becomes part of a larger data set linked with data from other devices. This is one example of where gains made in the analytical space can also erode consumer privacy if left unchecked.
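The re-identification risk described above does not even require sophisticated AI; a simple linkage of quasi-identifiers (such as ZIP code, birth year, and gender) against a public auxiliary dataset can already defeat naive anonymization. Below is a minimal, hedged sketch in Python; all records and field names are fabricated for illustration:

```python
# Toy illustration of quasi-identifier linkage: an "anonymized" dataset
# (direct identifiers stripped) is re-identified by joining it with a
# public auxiliary dataset on the attributes both still share.

# "Anonymized" health records: names removed, but ZIP code,
# birth year, and gender remain.
anonymized = [
    {"zip": "94110", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
    {"zip": "60614", "birth_year": 1972, "gender": "M", "diagnosis": "diabetes"},
]

# Public auxiliary data (e.g., a voter roll) containing the same
# quasi-identifiers alongside names.
public = [
    {"name": "Alice Example", "zip": "94110", "birth_year": 1985, "gender": "F"},
    {"name": "Bob Example", "zip": "60614", "birth_year": 1972, "gender": "M"},
]

def reidentify(anon_rows, aux_rows, keys=("zip", "birth_year", "gender")):
    """Link records whose quasi-identifiers match exactly."""
    matches = []
    for a in anon_rows:
        candidates = [p for p in aux_rows if all(p[k] == a[k] for k in keys)]
        if len(candidates) == 1:  # a unique match defeats the "anonymization"
            matches.append((candidates[0]["name"], a["diagnosis"]))
    return matches

print(reidentify(anonymized, public))
# → [('Alice Example', 'asthma'), ('Bob Example', 'diabetes')]
```

A statistical model only widens this attack: instead of requiring exact matches, it can link records probabilistically across far noisier data.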

2. Data Exploitation

Did you know that numerous consumer products, such as computer applications and smart home appliances, often contain weaknesses, such as firmware vulnerabilities, that make them prone to data exploitation by AI? An example of this is data gathered from microprocessors embedded into household items such as washing machines or TV remotes to allow for voice controls. This data runs the risk of being easily de-anonymized by AI, enabling the tracking, monitoring, profiling, and predicting of people’s behaviors.

What makes matters worse is that people are usually not aware of how much personal data their devices and software generate, process, and share. Smart-home devices and gadgets hold a treasure trove of confidential personal information, such as your birth date and credit card details, which cybercriminals and hackers can steal if these devices lack robust protections to thwart attacks. Personal activity and location logs can be used to discern a person’s political views, ethnicity, sexual orientation, and overall health.

Note that as consumers become more reliant on digital technologies in their everyday lives, be it smartphones or smartwatches, it is likely that the potential for data exploitation will continue to increase. As such, developing and maintaining awareness of these potential weaknesses in emerging technology will be critical to proactive protection of consumer interests.

3. Predictions and Inferences

AI can leverage sophisticated machine learning algorithms to infer and predict sensitive information from seemingly non-sensitive data sets. For example, it is possible to use a person’s keyboard typing patterns to deduce their emotional state, such as nervousness, sadness, confidence, or anxiety. More alarmingly, AI can even potentially determine an individual’s ethnic identity, political views, and sexual orientation from information like activity logs, location data, and similar metrics.

Tech companies long ago started using facial recognition to read your emotions with the help of AI. As an example, visit emojify.info (built by researchers from the University of Cambridge) to see how your emotions are “read” by your computer via your webcam. According to The Verge, “emotion recognition technology is rapidly gaining traction, with companies promising that such systems can be used to vet job candidates (giving them an ‘employability score’), spot would-be terrorists, or assess whether commercial drivers are sleepy or drowsy. (Amazon is even deploying similar technology in its own vans.)”

Overseeing Data Privacy in an AI-Dominated World

Irrespective of their scope or size, organizations using AI technologies have to think carefully about how they protect and use their client and customer data. This is critical to ensuring that these companies do not violate the privacy expectations of consumers and regulators alike. Here are some effective ways to bring privacy issues and ethical concerns into corporate and management discussions about AI:

1. Boards should Lead the Initiatives for Privacy Protections

Executive leaders are central to stressing that any new technology has to take data security and data privacy risks into consideration. While Board members do not necessarily need to understand all the ins and outs of every single technology (which simply isn’t feasible), they can still ensure that their business follows best practices to keep consumers’ data safe and secure by default.

2. Collect Fewer Data Points

Based on the goals of consumer privacy and data protection, one could argue that companies of all sizes can positively contribute to privacy preservation by collecting and processing less data (i.e., data minimization). Stockpiling every available bit of personal data is rarely necessary, and looking back at recent data breaches impacting global consumers, unnecessarily storing large volumes of confidential and personal data often contributes to privacy issues.
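As an illustration of data minimization in practice, the sketch below keeps only the fields that a declared processing purpose actually needs and drops everything else at the point of collection. The field names and purposes here are illustrative assumptions, not any real schema:

```python
# Hypothetical purpose-based allowlist: each declared processing purpose
# maps to the minimal set of fields it legitimately requires.
ALLOWED_FIELDS = {
    "order_fulfillment": {"email", "shipping_address"},
    "analytics": {"country", "signup_month"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return only the fields permitted for the given processing purpose."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "email": "user@example.com",
    "shipping_address": "1 Main St",
    "birth_date": "1990-01-01",   # collected but not needed, so it is dropped
    "country": "DE",
    "signup_month": "2021-04",
}

print(minimize(raw, "analytics"))
# → {'country': 'DE', 'signup_month': '2021-04'}
```

Filtering at collection time, rather than after storage, means the extra fields never enter the data lake in the first place, which is exactly the point of minimization.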

3. Consider Data Security and Privacy from the Start — Privacy By Design

Traditional business and operational models often stress that a focus on revenue growth and innovation paves a more reliable path to success, including healthy cash flows and attracting new funding to grow. As a result, protecting data and personal information typically isn’t a priority, especially in the first few stages of an organization’s life. However, by incorporating data protection measures and best practices to regularly screen for issues, organizations will be much better off in the long run. Companies with a longer-term vision on customer data privacy will win customer trust and loyalty by being at the forefront of protecting their customers’ data.

4. Audits

The benefits of AI can be (and often are) significant, but the risks it can pose to the data rights and freedoms of individuals are even bigger. This is why it’s important for companies to perform an audit of their AI applications to ensure:

• They process personal data fairly, lawfully and transparently;

• The necessary measures are in place to assess and manage risks to rights and freedoms that arise from AI.
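The audit goals above can be partially automated. As a hypothetical sketch (the fields, rules, and thresholds are illustrative assumptions, not a real compliance framework), a pre-audit script might flag documentation and risk-management gaps across an inventory of AI applications:

```python
# Illustrative pre-audit check for one AI application's processing record.
# Every key and rule here is a made-up example of what an internal privacy
# team might require, not a statement of any regulation's actual text.

REQUIRED_KEYS = {"lawful_basis", "purpose", "dpia_completed", "retention_days"}

def audit_findings(app: dict) -> list:
    """Flag gaps a privacy audit would examine for one AI application."""
    findings = []
    missing = REQUIRED_KEYS - app.keys()
    if missing:
        findings.append("missing documentation: %s" % sorted(missing))
    if app.get("dpia_completed") is False:
        findings.append("no data protection impact assessment on record")
    if app.get("retention_days", 0) > 365:
        findings.append("retention period exceeds internal 1-year policy")
    return findings

app = {
    "lawful_basis": "consent",
    "purpose": "churn prediction",
    "dpia_completed": False,
    "retention_days": 730,
}
print(audit_findings(app))
# → ['no data protection impact assessment on record',
#    'retention period exceeds internal 1-year policy']
```

An empty findings list does not prove compliance, of course; it only means the application cleared this particular set of automated checks and is ready for human review.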

Slalom advises companies on how to assess the impact of regulatory evolution in their operations, including as it applies to AI, and recommend best practices for organizational and technical measures to mitigate the risks to individuals that AI may cause or exacerbate.

Summary

The role of AI in our daily lives, whether as consumers or practitioners, continues to grow in prevalence and is key to product and service evolution across many industries. Innovation in the space will often outpace our ability to fully understand the risks it carries, placing the rights of consumers in jeopardy before much else. As a result, it is becoming a social imperative that, for a business to operate ethically and in the best interest of its consumer base, the same level of focus and investment be applied proactively to both the development and the risk management of new technology, before it’s too late.

“For privacy experts, AI is more than just Big Data on a larger scale. Understanding AI and its underlying algorithmic processes presents new challenges for privacy officers and others responsible for data governance in companies ranging from retailers to cloud service providers. In the absence of targeted legal or regulatory obligations, AI poses new ethical and practical challenges for companies that strive to maximize consumer benefits while preventing potential harms.” (Future of Privacy Forum)


Ekaterina Lyapustina

Passionate about data privacy, security, and building better technology that matters. Privacy Consultant, @Slalom Global Privacy Center of Excellence