The ‘ChatGPT Effect’ – Parsing Privacy and AI Regulation in India
The increasingly common use of AI in our daily lives raises multiple concerns, particularly around data privacy and regulatory preparedness. In this article, we discuss the privacy risks associated with AI and the position under Indian law.
ChatGPT and its various ‘features’ are at the center of many a recent dinner-table conversation. Its popularity underscores just how much AI-based tools such as chatbots, facial recognition software, text editors, personal assistants, etc., have already become part of our everyday lives. Since regulation always lags innovation, discourse on regulating AI is still at a nascent stage. But this discussion has been given a shot in the arm by the ubiquity of ChatGPT and its various avatars (and, to be fair, there is a separate conversation to be had about the privacy gaps in ChatGPT itself).
Regulation of AI in India is presently attempted indirectly, via regulations on issues such as data privacy, intellectual property, and cybersecurity. Regulators have made efforts to start a dialogue on the use of these technologies. For instance, the Indian telecom regulator has issued a consultation paper on leveraging AI and big data in the telecommunications sector. Similarly, the Reserve Bank of India (the financial regulator) has encouraged the adoption of AI for "Know-Your-Customer" processes, while reiterating the need for its ethical use from a consumer-safety perspective (owing to risks around privacy, security, profiling, etc.). In 2018, NITI Aayog (a policy think tank of the Government of India) issued a discussion paper titled "National Strategy for Artificial Intelligence", identifying healthcare, agriculture, education, smart cities and mobility as the focus sectors for the deployment of AI. Separately, the Ministry of Electronics and Information Technology (“MEITY”) has confirmed the commencement of the "National Program on AI" for transformational use of AI and has set up a 'knowledge hub' for AI developments.
Recent global legislative attempts contemplate graded regulation, i.e., regulating AI based on potential risk. The Indian government's approach may be similar. During a recent consultation session on the framework of the proposed "Digital India Act" (a potential successor to extant Indian IT laws), MEITY stated that the proposed law may define and regulate "high risk AI systems". Regulation may rely on a legal and quality-testing framework to examine regulatory models, algorithmic accountability, zero-day threat and vulnerability assessment, AI-based ad targeting, content moderation, etc.
A key consideration around AI is data privacy; this will be particularly relevant for India, since it is in the process of finalising a new data privacy regime. While the proposed law is sector-agnostic and does not specifically address the challenges that may arise from the use of AI, it does contemplate a certification-based mechanism for the use of ‘new technologies’. In the future, there are a number of questions that the Indian data regulator may need to parse and answer, based on AI use cases. Here are three examples:
- Individual Profiling: AI and machine learning solutions are increasingly deployed to build an outline of a consumer and their likely preferences. Since AI relies primarily on the information fed into its systems and typically uses 'patterns' (i.e., common data points such as behavioral trends) to arrive at its inferences, an individual who exhibits these patterns is likely to be pegged into a certain profile. From a business perspective, the purpose of profiling is to pitch services and products to a particular individual based on their profile, i.e., the likelihood of their opting for a certain product or service due to certain attributes. Although gauging an individual's choices from their body language and speech is an intrinsic part of offline business as well, AI has the capability to record such attributes and deploy them for purposes beyond mere transactions. For instance, profiling based on markers such as location, educational background, residential address, and past purchasing trends can be used to classify an individual as more likely than others to commit an offence. Alternatively, it may be used to introduce bias into recruitment decisions based on a certain profile (built on markers such as location, educational background, financial position, and purchase history).
- Non-consented purposes: Given the dynamic nature of AI, data may be used for purposes that the data subject did not consent to, e.g., aggregating data of individuals from a particular location for marketing strategies, or extracting health information for use by insurance companies. Since data subjects have no visibility into how their data is further used, it may be used to unfairly influence their opinions, choices, and/or the offers made to them by a particular business (e.g., higher interest rates on loans). Also, given that AI systems can be ‘opaque’, it may not be readily apparent whether a particular algorithm used a data point in a decision (say, a hiring decision) or not.
- Surveillance: Profiling, when combined with sensor-equipped devices that gather data from voice control, gestures, and biometrics, can be used to identify individuals, and geo-tracking can trace an individual's movements continually. As such, businesses (or governments) may be able to leverage this data for surveillance. Introducing the processing power of AI into this equation takes it to another level. For instance, CCTVs are omnipresent in public places, and if used in conjunction with facial recognition technologies and an AI tracking model, they could seriously intrude on privacy. Unregulated use of these technologies, particularly in conjunction with increasingly powerful AI, is troubling from a data privacy perspective.
How should these issues be handled in the emerging regulation? Are they even capable of being legislated?
Historically, Indian lawmakers have taken a reactive approach to drafting regulations: rules are formulated only when a legal issue simmers in a sector, or after loopholes are identified because they have been exploited to defraud the public. While these knee-jerk regulations have managed to patch flaws in some cases, the dynamics of the digital world are different and ever-evolving; not having adequate tools to deal with its consequences could have a lasting impact.
Regulating AI will likely require nuanced laws that are conducive to its growth while simultaneously addressing issues such as the potential threat to privacy. It will be interesting to see how the increased use of AI challenges the status quo under Indian and other data privacy laws.
Article provided by INPLP member: Vikram Jeet Singh (BTG legal, India)
Dr. Tobias Höllwarth (Managing Director INPLP)