Recommendations for reliable artificial intelligence
Argentine Information Technology Subsecretariat approves a set of recommendations for trustworthy artificial intelligence (“AI”), specifically directed to the public sector.

Very recently, the Argentine Information Technology Subsecretariat, which is part of the Chief of Staff Office, issued Resolution 2/2023 approving a set of recommendations for trustworthy artificial intelligence (“AI”), specifically directed to the public sector.
This Resolution comes amid very active discussions worldwide regarding the use of artificial intelligence across all industries, particularly the use of generative AI and tools such as ChatGPT (in its different versions).
In that connection, Argentina still has no specific general legislation regulating the use, development and/or deployment of AI. The terms "AI" and "artificial intelligence" can be found in the recitals of many laws and regulations, but there is still no specific guidance in that respect. For example, communications from the Argentine Central Bank refer to certain obligations and requirements (including conducting an impact assessment). At the same time, Argentina has adhered to the UNESCO Recommendation on the Ethics of Artificial Intelligence.
With all of this as background, the recommendations aim to compile and provide tools for those carrying out innovation projects through technology, specifically focusing on projects involving the use of AI. They seek to provide a framework for the technological adoption of AI centred on individuals and their rights. As anticipated, these recommendations are directed specifically at the public sector; nonetheless, it is reasonable to expect that, in the absence of other guidelines directed at the private sector or mandatory applicable regulations, they could also serve as non-mandatory guidelines for the private sector.
In general, the recommendations focus on establishing a set of ethical principles to guarantee the protection of fundamental rights, respect democratic values, prevent or reduce risks, foster innovation and promote people-centred design. To establish and conceptualise these principles, the recommendations are structured around the lifecycle of AI projects.
In that connection, they establish a preparatory stage that deals with how artificial intelligence should be conceived and what measures are recommended before starting the AI cycle. The guidelines highlight building an interdisciplinary team, running awareness campaigns, conducting pre-mortem exercises and defining the model's scope, among other measures.
Among others, the guidelines highlight the principles of proportionality and harmlessness, safety and security, equity and non-discrimination, sustainability, the right to privacy and data protection, human oversight and decision-making, transparency and explainability, and responsibility and accountability.
On the other hand, within the AI cycle, the recommendations are divided into four stages: "Design and data modelling" (Stage 1), "Verification/Validation" (Stage 2), "Implementation" (Stage 3) and "Operation and maintenance" (Stage 4). Finally, in a closing section, the recommendations set out the ethical issues that should be considered outside the AI cycle.
In that connection, the recommendations suggest emphasising the difference between the concepts of execution and responsibility, making it clear that although the execution of a task or service might be delegated to the algorithms embedded in an AI project, the decision and, therefore, the responsibility should always rest with the organisation or individual controlling the development and deployment.
Regarding measures that could be taken during the AI cycle, Stage 1 (Design and data modelling) and Stage 2 (Verification/Validation) address measures aimed at reducing risks and ensuring transparency, accountability, data quality and the elimination of modelling biases. The guidelines recommend several tools to pursue these objectives, such as the signing of ethical commitments, the participation of data scientists and the implementation of records of processing activities.
They also suggest measures to be taken within Stage 3 (Implementation) depending on whether the implementation is carried out on-premises (on the organisation's own infrastructure), via cloud services or a combination of both, aiming to guarantee an adequate degree of information security and the traceability of actions and decisions. For Stage 4 (Operation and maintenance), the guidelines recommend certain actions to guarantee the availability, continuity and sustainability of the service provided by this technology, such as ensuring system performance, adopting improvements in response to detected biases and ethical incidents, and establishing control procedures for access, updates and authentication management.
In the last section, the guidelines raise several issues related to the post-cycle phase of AI, recognising that each stage requires constant assessment of both changes and risks, the appointment of individuals responsible for containing and remedying any harm generated by artificial intelligence, and the proper recording of accountability and responsibility actions for learning and process improvement.
As previously mentioned, these guidelines are a first set of recommendations aimed at the public sector and follow, in many respects, international principles similar to those of UNESCO (and others).
It remains to be seen how and whether they will be followed by the public sector, and whether they will carry any weight in the private sector as well. At the same time, many expect the different public regulators, including for example the Data Protection Authority, to work together with the Information Technology Subsecretariat and other bodies to produce a more comprehensive set of recommendations tackling AI from many different angles, including privacy and data protection as well as intellectual property rights.
Article provided by INPLP member: Diego Fernandez (Marval O’Farrell Mairal, Argentina)
Discover more about the INPLP and the INPLP-Members
Dr. Tobias Höllwarth (Managing Director INPLP)