Sonia Randhawa on Rethinking Artificial Intelligence and Machine Learning Data Privacy


The Artificial Intelligence (AI) market is projected to be worth $169.41 billion by 2025. AI is on a roll, with most industries adopting machine learning in one way or another. And we don't need to look far; if you examine your daily life, you will realize AI plays a crucial role without you noticing it. Your Netflix or Amazon recommendations, Google searches, map navigation, YouTube suggestions, and voice assistants like Alexa are all driven by machine learning.

On the surface, the impact of AI on society looks promising, and the hype created by media and companies would lead you to believe so. However, a closer look at the current AI landscape reveals glaring issues of data privacy and ethics that are causing adverse social consequences.

An important concern is the potential bias introduced into AI and ML big data analytics. To accurately assess and address data privacy risks, organizations need a strategic plan that focuses on data use, monitors it in different ways, identifies and quantifies the risk factors, and then mitigates them, suggests Sonia Randhawa, a San Francisco-based Senior Technology Leader.

Data Privacy and Ethics

People almost never read the terms and conditions of a website or mobile app, so they do not know what data they are unknowingly sharing. In fact, many shady websites and mobile apps look harmless but collect your data and sell it to other companies, which often process it with AI.

In 2018, it was revealed that the makers of a quiz app for Facebook had collected users' and their friends' data by exploiting loopholes in Facebook's privacy permissions and sold it to the analytics firm Cambridge Analytica. It is believed that this firm mined Facebook users' data to target them with ad campaigns to influence the 2016 US presidential election. The collection of users' data without their knowledge, and its unethical processing with AI, erupted into the Facebook-Cambridge Analytica data scandal.

Similarly, in 2020 the face recognition application Clearview, which serves police agencies, found itself in controversy. A report by The New York Times revealed that it had scraped 3 billion photos of users, without their knowledge, from Facebook, Instagram, YouTube, and Twitter to train its face recognition software. Beyond the data privacy and ethical issues, the scary part is that if this unregulated software lands in the wrong hands or produces a wrong identification, it could create trouble for innocent people.

Another deeply disturbing use of AI appeared in 2019, when someone created an application called DeepNude that anyone could use to generate realistic-looking fake nude photos of women. It was shut down soon after because of its controversial nature, which compromised both data privacy and ethics, but that did not stop other websites from appearing with similar offerings.

If you wonder how much worse it can get, look to China, where citizens are under constant mass surveillance by their government. AI-enabled surveillance cameras track people's behavior, and the government uses this data to assign citizens social credit scores that grant or deny them benefits. And all of this happens without people's knowledge.

Need for Regulation

By now it should be clear that data privacy and ethics in AI are critical issues that need regulation; otherwise they will lead to more such repercussions. Governments across the world have already started to implement their own data privacy policies, the GDPR being one prime example.

"There already exists privacy-preserving Machine Learning (ML) capability via Homomorphic Encryption (HE). It's a privacy-enhancing technology that allows multiple parties to analyze encrypted data and gain insight from it without exposing Personally Identifiable Information (PII)," confirms Sonia Randhawa.
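To make the idea concrete, here is a minimal sketch of one form of homomorphic encryption: a toy Paillier cryptosystem, which is additively homomorphic, written in plain Python. The tiny hard-coded primes and the salary figures are illustrative assumptions for this article, not Sonia Randhawa's implementation or a production HE library; real privacy-preserving ML systems rely on hardened libraries and far larger keys.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# Tiny hard-coded primes for illustration only -- NOT a secure key size.
p, q = 1009, 1013
n = p * q
n2 = n * n
g = n + 1                        # standard Paillier choice of generator
lam = math.lcm(p - 1, q - 1)     # private key: lambda = lcm(p-1, q-1)
mu = pow(lam, -1, n)             # with g = n+1, mu = lambda^-1 mod n

def encrypt(m):
    """Encrypt an integer m (0 <= m < n) under the public key (n, g)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Decrypt ciphertext c with the private key (lam, mu)."""
    L = lambda x: (x - 1) // n
    return (L(pow(c, lam, n2)) * mu) % n

# Each party encrypts its own value; an aggregator multiplies the
# ciphertexts, which adds the plaintexts underneath -- the aggregator
# never sees any individual value.
salaries = [52000, 61000, 58000]          # hypothetical per-person data
ciphertexts = [encrypt(s) for s in salaries]

encrypted_sum = 1
for c in ciphertexts:
    encrypted_sum = (encrypted_sum * c) % n2

print(decrypt(encrypted_sum))             # 171000 == sum(salaries)
```

The key point of the sketch is that the party computing the sum only ever handles ciphertexts, so an aggregate statistic can be produced without any individual's data being exposed; fully homomorphic schemes extend the same principle to the multiplications and additions needed for ML inference and analytics.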

When it comes to the ethical use of AI, the responsibility for creating awareness lies with the AI community. However, a 2020 survey revealed that only 15% of college professors educate students on ethics in AI/ML, which is another cause for concern. It is high time ethics became a core part of all AI/ML subjects and courses in order to build a better society with artificial intelligence.
