The use of AI offers companies numerous opportunities for process optimisation and the development of new business models. However, its use also brings with it legal and ethical requirements, particularly in the area of data protection.

The "Regulation on Artificial Intelligence (AI Act)" was officially published in the Official Journal of the European Union on 12 July 2024 and entered into force 20 days later, on 1 August 2024.

The following transitional periods apply from entry into force:
  • 6 months: from 2 February 2025, the rules on prohibited AI systems already apply. In addition, there is an extensive training obligation for employees who work with AI systems.
  • 12 months: from 2 August 2025, the rules on general-purpose AI models and the provisions on authorities, governance and sanctions apply.
  • 24 months (basic rule): from 2 August 2026, all other provisions of the AI Regulation apply.
  • 36 months: from 2 August 2027, the rules for so-called "embedded high-risk AI systems", which are installed as components in e.g. devices, vehicles or machines explicitly referred to in Annex I, apply.
EU's regulatory objectives

A central element of the EU's regulatory objectives in the area of data management and artificial intelligence is the General Data Protection Regulation (GDPR). Its scope extends to personal data; anonymous data is not covered. A natural person may be identifiable from a single piece of information on its own. Identifiability also exists if the information (e.g. a technical identifier or a quotation) can be attributed to the person in conjunction with other information available about that person.
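This kind of identifiability through combined information can be illustrated with a small sketch. All names and data below are invented for illustration; the point is only that linking seemingly harmless attributes (a so-called linkage attack) can single out a person in a "pseudonymised" data set.

```python
# Hypothetical illustration with invented data: a record without names can
# still become identifiable when combined with background knowledge.

pseudonymised_records = [
    {"id": "u1", "zip": "10115", "birth_year": 1980, "diagnosis": "A"},
    {"id": "u2", "zip": "10115", "birth_year": 1975, "diagnosis": "B"},
]

# Background knowledge available about one specific person.
auxiliary = {"name": "Jane Doe", "zip": "10115", "birth_year": 1975}

# Linking on the quasi-identifiers (zip code, birth year) singles out one
# record: the "anonymous" diagnosis is now attributable to that person.
matches = [
    r for r in pseudonymised_records
    if r["zip"] == auxiliary["zip"] and r["birth_year"] == auxiliary["birth_year"]
]
```

If exactly one record matches, the data was personal data all along in the sense of the GDPR, despite the missing name.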

A major challenge for AI in the context of anonymisation is that AI has a great deal of "background knowledge" from which it can draw conclusions that lead to identifiability. Another problem lies in "prompt engineering": especially in the case of free-text input, the user could deliberately induce the AI to undo an anonymisation that was actually intended, or could enter additional background knowledge that leads to identifiability.
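One common technical safeguard against such risks is to filter free-text input before it reaches the AI system. The following is only a minimal sketch; the function name and the two regular expressions are our own illustrative assumptions, not a complete personal-data detector, and real deployments would need far more robust recognition.

```python
import re

# Illustrative patterns for two obvious kinds of personal data.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d /-]{7,}\d")

def redact_prompt(prompt: str) -> str:
    """Replace e-mail addresses and phone numbers in a free-text prompt
    with placeholders before the prompt is sent to an AI system."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt
```

A call such as `redact_prompt("Contact Max at max@example.com")` would yield `"Contact Max at [EMAIL]"`; the raw identifier never reaches the model.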

Principles of data protection

The principles of data protection must be guaranteed when processing personal data (Art. 5 GDPR): lawfulness, fairness of processing, transparency, purpose limitation, data minimisation, accuracy, storage limitation, and integrity and confidentiality.
Controllers must not only ensure that they comply with the requirements of the GDPR, but must also be able to demonstrate this (accountability). Suitable technical and organisational measures must be implemented to ensure the security and protection of personal data.
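One example of such a technical measure is pseudonymising a direct identifier with a keyed hash before it enters an AI processing pipeline. This is a minimal sketch under our own assumptions: the function name is invented, and in practice the key would be generated securely and stored separately from the pseudonymised data.

```python
import hashlib
import hmac

# Illustrative only: a real key must be generated securely and kept separate
# from the data it pseudonymises.
SECRET_KEY = b"store-this-key-separately"

def pseudonymise(identifier: str) -> str:
    """Derive a stable pseudonym from a direct identifier using HMAC-SHA256.

    The same input always yields the same pseudonym (so records remain
    linkable), but without the key the mapping cannot be recomputed,
    supporting integrity and confidentiality (Art. 5 GDPR).
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()
```

The downstream AI system then only ever sees the pseudonym, never the raw identifier.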

The permissibility of using training data is a priority for many companies. For example, internal guidelines and processes should be established for monitoring, including compliance with copyright regulations.
Liability for AI-generated content is also an important aspect.

Regulation (EU) 2024/1689 of the European Parliament and of the Council – Artificial Intelligence Act: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689

German translation: https://www.adorgasolutions.de/datenschutz-verordnung-ueber-kuenstliche-intelligenz-ki-vo-tritt-in-kraft/

Do you have any questions on this and other topics? We are of course at your disposal – by e-mail at consulting@adorgasolutions.de or by telephone on 0173 8198864.

How can we help you?

Contact us – we are happy to assist you!