Data Protection Authority Recommendations in the Field of AI

The Personal Data Protection Authority (“KVKK”) published its Recommendations on Protecting Personal Data in the Field of Artificial Intelligence (“Recommendations”) on September 15, 2021. The Recommendations consist of general policy recommendations and special considerations for developers, manufacturers, service providers, and policy makers operating in the field of AI.

The Recommendations are based on the “Guidelines on Artificial Intelligence and Data Protection” published by the Council of Europe, the “Recommendation of the Council on Artificial Intelligence” by the OECD, and the “Ethics Guidelines for Trustworthy AI” by the European Commission.

Today, great progress has been made in AI techniques and applications, and AI-based systems have started to directly affect life in many areas. Although AI provides important benefits for individuals and society, it must be managed properly with respect to the protection of personal data.

Keeping this in mind, KVKK provides its recommendations in three sections: 

(i) General recommendations,

(ii) Recommendations for developers, manufacturers, service providers, and

(iii) Recommendations for policy makers.

The recommendations of the KVKK are as follows:


1. General Recommendations

  • During the development and implementation of AI applications, the fundamental rights and freedoms of the data subjects and the right to protection of human dignity should be respected.
  • AI studies and data collection activities based on data processing should conform to the principles of data protection — lawfulness, integrity, proportionality, accountability, transparency, accuracy, data minimization, and data security — and should be based on an approach that protects the fundamental rights and freedoms of individuals.
  • While processing personal data, a perspective that focuses on the prevention and reduction of potential risks should be adopted, and human rights, the functioning of democracy, and social and ethical values should be considered.
  • Data subjects should be able to control the data processing activities.
  • If processing is likely to entail a high risk to the rights and freedoms of natural persons, a privacy impact assessment should be conducted, and the lawfulness of the data processing should be determined.
  • In AI studies, compliance with the personal data protection legislation should be ensured from the first stage, and all systems should be developed and managed in accordance with the principle of data protection by design. In this context, a data protection compliance program specific to each project should be established and implemented.
  • If special categories of personal data are processed while developing and applying AI technologies, technical and administrative measures should be applied more strictly.
  • Where it is possible to achieve the intended outcome without processing personal data, anonymization of the data should be considered.
  • At the beginning of an AI project, the data controller and the data processor should be determined.

2. Recommendations for Developers, Manufacturers and Service Providers

  • In the design phase, an approach based on personal data privacy, consistent with national and international regulations, should be observed.
  • A prudent approach should be adopted based on appropriate risk prevention and mitigation measures.
  • At every stage of data processing, the risk of discrimination, prejudice, or other negative effects on data subjects should be prevented.
  • Data use should be minimized. The accuracy of the developed AI program should be constantly monitored.
  • The use of algorithms originally designed for a specific AI model but used for a different purpose should be carefully evaluated for the risk of causing adverse effects on individuals and society.
  • Academic institutions which can contribute to the design of human rights-based and ethical AI applications should be contacted. Opinions of impartial experts and organizations should be sought in areas where transparency and stakeholder participation may be difficult.
  • Data subjects should have the right to object to processing based on technologies that affect their views and personal development.
  • Considering the power of AI systems to analyze and use personal data, the rights of the persons concerned arising from national and international legislation should be protected while personal data is processed.
  • Risk assessment based on the active participation of individuals who are most likely to be affected by AI practices should be encouraged.
  • Products and services should be developed in a way that ensures individuals are not subject to automated decision making.
  • Alternatives with less interference with personal rights should also be offered and the freedom of choice of users should be guaranteed.
  • Algorithms should be adopted that ensure accountability for all stakeholders in terms of compliance with personal data protection laws, starting from the design of products and services and continuing throughout their lifecycle.
  • Data subjects should have the right to restrict processing, and mechanisms for the erasure, destruction, or anonymization of data should be designed.
  • Persons interacting with the application should be informed of the reasons for the personal data processing activity, the details of the methods used in processing the personal data, and the possible consequences. An effective consent mechanism should be designed for cases where consent is required.

3. Recommendations for Policy Makers

  • The principle of accountability should be observed at all stages.
  • Risk assessment procedures should be adopted and an implementation matrix should be established on the basis of sector/application/hardware/software.
  • Appropriate measures such as codes of conduct and certification mechanisms should be taken.
  • Decision makers should allocate adequate resources to monitor whether AI models are used in a context or for a purpose different from their original one.
  • The role of human intervention in decision-making processes should be established. The freedom of individuals not to trust the suggestions presented by AI should be protected.
  • Supervisory authorities should be consulted when there is a possibility of significantly affecting the fundamental rights and freedoms of the data subjects.
  • Cooperation between supervisory authorities and other authorized bodies should be encouraged on data privacy, consumer protection, promotion of competition and anti-discrimination.
  • Research measuring effects of AI applications on human rights, ethics, sociology, and psychology should be supported.
  • Individuals and stakeholders should be actively involved in discussing the role of AI, including big data systems, in shaping social dynamics and in the decision-making processes.
  • Appropriate open software-based mechanisms should be encouraged to create a digital ecosystem that supports secure, fair, legal and ethical sharing of data.
  • Investments should be made in digital literacy and educational resources to raise awareness of AI applications and their implications.
  • Trainings should be encouraged to raise awareness of personal data protection for developers.

Comments and Conclusion

The Recommendations are a step forward for personal data protection in the field of AI, as none of the KVKK’s guidelines had addressed AI until now, and the Turkish Data Protection Law (“Law”), which aimed to be technology neutral, deliberately left out subjects specific to any particular technology.

The Recommendations involve some concrete obligations, for instance conducting a privacy impact assessment or contacting academic institutions which can contribute to the design of human rights-based and ethical AI applications. On the other hand, the consequences of non-compliance with such obligations are not mentioned, nor are the specific provisions of the Law regulating administrative fines referred to. It is not clear whether a data controller should interpret the Recommendations as “recommendations” only, in the literal sense, or should adapt its processing activities in line with them.

Under the GDPR, a data controller must contact the data protection authority if a data protection impact assessment (“DPIA”) indicates a high risk to the rights and freedoms of data subjects, and must seek the authority’s advice on the DPIA. Does the KVKK require the same? If so, what should a DPIA include, and again, what is the consequence if the KVKK’s advice is not sought? There are certainly many questions that need to be answered.

In conclusion, both the GDPR and the Law provide data subjects with the right to object to a decision based solely on automated processing. It should be accepted that AI enables cost-effective and more precise automated predictions, especially in direct marketing through profiling. However, such predictions could also introduce biases and cause discrimination, not to mention that the predictions could simply be inaccurate. Controllers engaging in AI-based processing must consider purpose limitation and data minimization, and policies and guidelines should provide more precise guidance instead of ambiguous recommendations.

Authors: Soley Çoban, Sinan Erkan

© 2019 Deriş - All Rights Reserved