AI products and services need data, but they should not invade privacy. They also need sustainability

  • Artificial intelligence is an integral part of developments in healthcare, technology and other sectors, but there are concerns about how data privacy is regulated.
  • Data privacy is key to gaining public trust in technological advancements.

Data privacy is often tied to artificial intelligence (AI) models built on consumer data. Understandably, users are wary of automated technologies that collect and use their data, which may include sensitive information. Since AI models depend on the quality of their data to deliver useful results, their continued viability depends on privacy being built into their design.

More than just a way to allay customer fears and concerns, good privacy and data management practices have a lot to do with a company’s core organizational values, business processes, and security management. Privacy issues have been widely researched and publicized, and data from our privacy perception survey indicates that privacy is a top concern for consumers.

Addressing these concerns in context is crucial. For companies building consumer AI, several methods and techniques can help mitigate the privacy issues often associated with artificial intelligence.

Some products and services need data, but they don’t need to invade anyone’s privacy

Companies working with artificial intelligence already face a disadvantage in the public eye when it comes to privacy. According to a 2020 survey by the European Consumer Organization, 45-60% of Europeans agree that AI will lead to more abuse of personal data.

Many popular online services and products rely on large datasets to train and improve their AI algorithms. Some of the data in these datasets may be considered private even by the least privacy-conscious users. Streams of data from networks, social media pages, mobile phones, and other devices add to the volume of information companies use to train machine learning systems. Because of the excessive use of personal data, and its mismanagement by some companies, privacy protection is becoming a matter of public policy all over the world.

Much of our sensitive data is collected to improve AI-enabled processes. Much of this analysis is driven by the adoption of machine learning, since sophisticated algorithms must make real-time decisions based on these datasets. Search algorithms, voice assistants, and recommendation engines are just a few of the solutions that apply AI to large sets of real-world user data.

Massive databases can encompass a wide range of data, and one of the most pressing issues is that this data can be personally identifiable and sensitive. In reality, teaching algorithms to make decisions does not rely on knowing who the data relates to. The companies behind these products should therefore strive to make their datasets private, with little or no way to identify users in the source data, and put measures in place to suppress edge cases in their algorithms' outputs to prevent reverse engineering and re-identification.

The relationship between data privacy and artificial intelligence is quite nuanced. Although some algorithms may inevitably require private data, there are ways to use them in a much more secure and non-invasive way. The following methods are just a few of the ways companies using private data can be part of the solution.

Designing Artificial Intelligence with Data Privacy in Mind

We have already touched on reverse engineering, where bad actors discover vulnerabilities in AI models and extract potentially critical information from model outputs. Reverse engineering is why modifying and improving datasets and training data is essential for AI applications that face this challenge.

For example, combining conflicting datasets in the machine learning process (adversarial learning) is a good way to expose flaws and biases in an AI algorithm's output. There is also the option of using synthetic datasets that contain no real personal data, although their effectiveness is still in question.
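To make the synthetic-data idea concrete, here is a minimal sketch in Python. The column names and distributions are invented for illustration: it fits simple per-column statistics on a private table and samples brand-new records that match them. Because columns are sampled independently, cross-column correlations are lost, which is one reason the effectiveness of synthetic data remains debated; dedicated tools model joint distributions instead.

```python
# A minimal sketch of replacing real records with synthetic ones.
# All column names and distributions here are invented for illustration;
# a real project would use a dedicated synthetic-data library and
# measure how well the synthetic set preserves model utility.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Stand-in for a private dataset that should never reach training as-is.
real = pd.DataFrame({
    "age": rng.integers(18, 80, size=1_000),
    "monthly_spend": rng.gamma(shape=2.0, scale=50.0, size=1_000),
})

# Fit simple per-column statistics on the private data...
age_mu, age_sigma = real["age"].mean(), real["age"].std()
spend_mu, spend_sigma = real["monthly_spend"].mean(), real["monthly_spend"].std()

# ...then sample brand-new records that match those aggregate statistics
# but correspond to no real individual. Note: sampling columns
# independently discards cross-column correlations.
synthetic = pd.DataFrame({
    "age": rng.normal(age_mu, age_sigma, 1_000).clip(18, 80).round().astype(int),
    "monthly_spend": rng.normal(spend_mu, spend_sigma, 1_000).clip(0),
})

print(real.describe())       # statistics of the private data
print(synthetic.describe())  # similar statistics, no real users
```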

Healthcare is a leader in governance around AI and data privacy, especially in handling sensitive private data. The sector has also worked extensively on consent, both for medical procedures and for the processing of patient data: the risks are high, and the requirements are legally enforced.

When it comes to the overall design of AI products and algorithms, decoupling user data through anonymization and aggregation is essential for any company that uses user data to train its AI models.
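As a rough illustration of such decoupling, the sketch below pseudonymizes a toy events table with a salted hash and aggregates it before it would reach a training pipeline. The table, column names, and salt are assumptions made for the example, and salted hashing is pseudonymization rather than full anonymization, so treat it as a starting point rather than a guarantee.

```python
# A minimal sketch of pseudonymization plus aggregation before training.
# The table and column names are hypothetical; in practice the salt would
# live in a secrets manager, and salted hashing alone is pseudonymization,
# not full anonymization.
import hashlib
import pandas as pd

SALT = b"replace-with-secret-salt"  # hypothetical; never hard-code in production

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with an irreversible salted hash."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()

events = pd.DataFrame({
    "user_id": ["alice", "bob", "alice", "carol"],
    "category": ["books", "music", "books", "books"],
    "amount": [12.0, 8.5, 30.0, 5.0],
})

# 1. Strip the direct identifier before the data reaches the pipeline.
events["user_key"] = events["user_id"].map(pseudonymize)
events = events.drop(columns=["user_id"])

# 2. Aggregate so the model sees group-level signals, not individual rows.
training_view = events.groupby("category")["amount"].agg(["count", "mean"])
print(training_view)
```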

Many considerations can strengthen privacy in AI enterprises:

  • Privacy at the core: put privacy protection on developers' radar and find ways to strengthen security effectively.
  • Anonymize and aggregate datasets, removing all personal identifiers and unique data points.
  • Keep strict control over who in the company has access to specific datasets, and continuously monitor how that data is accessed; lax access control has been the root cause of past data breaches.
  • More data is not always better. Test your algorithms with reduced datasets to find the smallest amount of data you need to collect and process to keep your use case viable (see the sketch after this list).
  • Provide a simple way for users to have their personal data deleted on request. Companies that only pseudonymize user data must also continually retrain their models on the most up-to-date data.
  • Use strong anonymization tactics: for example, fully anonymized aggregated and synthetic datasets, irreversible identifiers for algorithm training, and auditing and quality assurance.
  • Protect both user autonomy and privacy by rethinking how critical third-party data is obtained and used: carefully review data sources and use only those that collect data with users' clear and informed consent.
  • Consider the risks: could an attack jeopardize users' privacy based on your AI system's outputs?
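The data-minimization point above lends itself to a simple experiment. The sketch below is one possible setup, assuming scikit-learn and a toy classification task: it trains the same model on shrinking fractions of the training data, and the point where test accuracy plateaus suggests how little data the use case actually needs.

```python
# A minimal data-minimization experiment on a toy task (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on growing fractions of the data; where accuracy plateaus is a hint
# at the smallest dataset that keeps the use case viable.
for fraction in (0.01, 0.05, 0.10, 0.25, 0.50, 1.00):
    n = max(50, int(len(X_train) * fraction))
    model = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
    print(f"{fraction:>5.0%} of data ({n:>5} rows): "
          f"test accuracy = {model.score(X_test, y_test):.3f}")
```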

What is the future of data privacy and AI?

AI systems need a lot of data, and some popular online services and products could not work without the personal data used to train their AI algorithms. Nevertheless, there are many ways to improve how data is acquired, managed, and used, from the algorithms themselves to overall data governance. Privacy-friendly AI requires privacy-friendly companies.

This article originally appeared on the World Economic Forum's website.

