The rapid development of artificial intelligence (AI) opens up new opportunities for companies, but also presents them with regulatory and ethical challenges. Our AI governance check supports companies in meeting the complex requirements of the EU AI Regulation and other standards, minimising risks and strengthening stakeholder trust.

EU AI Regulation - What exactly is changing now (objectives and priorities)?

The Artificial Intelligence Regulation (AI Regulation) is a regulation of the European Union. It creates the first common regulatory and legal framework for AI in the European Union (EU). It entered into force on 1 August 2024, with its provisions becoming applicable gradually over the following 6 to 36 months. The AI Regulation covers all types of AI in a wide range of sectors, with exceptions for AI systems used exclusively for military, national security, research and non-commercial purposes. The aim is to promote the development and use of AI in the EU while minimising the associated risks. The AI Regulation categorises non-exempt AI applications according to their risk of harm. There are four levels - unacceptable, high, limited, minimal - as well as an additional category for general purpose AI.

  • Applications with unacceptable risks are prohibited. These include, for example, AI systems that can manipulate or exploit humans.
  • High-risk applications must fulfil safety, transparency and quality requirements and undergo conformity assessments. This includes AI systems that are used in critical infrastructures, education or human resources.
  • Limited-risk applications are subject only to transparency obligations. These include chatbots or emotion recognition systems, for example.
  • Applications with minimal or no risk are not regulated. These include, for example, AI-supported video games or spam filters.

Additional and stricter assessment standards apply to AI systems that serve very general purposes or have a particularly high level of performance, i.e. AI systems that can be used for a wide variety of applications, such as language models or image generators. The AI Regulation also provides for very high fines for violations - penalties of up to 35 million euros or 7 per cent of the previous year's global turnover, whichever is higher - and thus significantly more than the fines for GDPR violations. Companies must therefore ensure that their AI systems comply with the requirements of the regulation in order to avoid high fines.
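To make the risk tiering and the penalty ceiling concrete, here is a minimal Python sketch. The tier assignments, the example use-case mapping and all names in the code are illustrative assumptions for this blog post, not quotations from the regulation text, and they are no substitute for a legal assessment of an individual system.

```python
# Minimal sketch of the AI Regulation's four risk tiers and the penalty
# ceiling described above. Tier assignments and example use cases are
# illustrative simplifications, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"                  # e.g. manipulative systems
    HIGH = "conformity assessment required"      # e.g. critical infrastructure, HR
    LIMITED = "transparency obligations"         # e.g. chatbots
    MINIMAL = "no specific obligations"          # e.g. spam filters

# Hypothetical mapping used only for illustration.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "cv screening": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound for the most serious violations: 35 million euros or
    7 per cent of the previous year's global turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

print(EXAMPLE_USE_CASES["cv screening"].value)  # conformity assessment required
print(f"{max_fine_eur(2_000_000_000):,.0f}")    # 140,000,000 for 2 bn turnover
```

Even this toy calculation makes the scale clear: for a company with two billion euros in annual turnover, the ceiling is four times the nominal 35 million euro figure.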


Figure: Classification of AI systems according to risk potential, source: https://blog.adesso-insure.de/business/ai-act-folgen-fuer-versicherer

Other relevant regulations, ordinances and systems

In addition to technological expertise, the implementation of AI in companies also requires compliance with various regulatory and ethical standards. AI governance can be defined as a set of frameworks and guidelines that govern the use of AI in various areas. The aim is to ensure the responsible development and use of AI systems that meet ethical, legal and social standards. In addition to the EU AI Regulation, there are other important regulations, guidelines, standards and tools that are relevant for assessing the maturity level of AI. These include:

  • ISO/IEC 42001:2023: This international standard specifies requirements for establishing, implementing, maintaining and continually improving an AI management system within an organisation, with the goal of trustworthy AI. It covers aspects such as transparency, explainability, fairness and robustness.
  • VDE SPEC 90012: This specification from the German Association for Electrical, Electronic & Information Technologies (VDE) provides practical guidance for the development of trustworthy AI systems. It contains specific recommendations on topics such as data protection, IT security and ethical principles.
  • Microsoft Responsible AI: Microsoft has developed its own framework for responsible AI, which comprises six core principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability. This framework is designed to help companies develop and deploy trustworthy AI systems.
  • AIC4 (AI Cloud Services Compliance Criteria Catalogue): The AIC4 catalogue was developed by the German Federal Office for Information Security (BSI) and defines criteria for the security and compliance requirements of AI cloud services. It covers aspects such as data protection, IT security, risk management and compliance to ensure that AI services can be operated securely and trustworthily in the cloud.

  • Fraunhofer Test Catalogue for Artificial Intelligence: The Fraunhofer Test Catalogue offers comprehensive guidelines for the evaluation and certification of AI systems, which can be used to assure the quality and trustworthiness of AI applications. The catalogue covers various aspects, including technical safety, ethical requirements, legal conformity and the transparency and traceability of decisions.

  • Analysis tool DORA.KI: From 17 January 2025, the EU's Digital Operational Resilience Act (DORA) will require all EU financial companies to strengthen their IT resilience. This also includes clear agreements with third-party information and communication technology service providers and their sub-service providers, which must fulfil certain information security standards in order to ensure robust IT operations. With the new DORA.KI analysis tool, adesso supports customers in checking contracts and associated documents for DORA compliance.

The implementation of these standards in day-to-day business requires a comprehensive and detailed analysis of existing processes and systems. The identification and assessment of risks plays a central role in this. Companies must ensure that they not only fulfil the regulatory requirements, but also integrate ethical considerations into their decision-making processes.

Ensuring transparency and traceability is a key concern when implementing AI regulation. Companies are faced with the challenge of explaining complex AI algorithms in an understandable way and making their decisions comprehensible. This requires not only technical expertise, but also interdisciplinary collaboration between technicians, lawyers and ethicists.

The future of AI will largely depend on the ability of companies to adapt to dynamic regulatory requirements and ethical standards. The combination of technical expertise and a deep understanding of the regulatory landscape enables companies not only to be compliant, but also to act responsibly and sustainably. By implementing our comprehensive AI governance check, companies can create the conditions to gain the trust of their stakeholders and successfully drive their AI initiatives forward.
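One practical building block for the explainability challenge described above is a feature-importance analysis, which shows which inputs actually drive a model's decisions. The following Python sketch uses scikit-learn's permutation importance on synthetic data; the data set and the feature names are illustrative assumptions, and a real assessment would of course use the system's actual model and data.

```python
# Minimal sketch of one way to make a model's decisions more traceable:
# permutation feature importance with scikit-learn. The synthetic data
# stands in for a real use case; the feature names are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "income", "tenure", "region_code"]  # assumed names

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Report which inputs drive the model's decisions, most important first.
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

A ranking like this does not replace a full explainability concept, but it gives technicians, lawyers and ethicists a shared, concrete starting point for discussing whether a model's behaviour is acceptable.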

Implementing the AI Governance Check together with adesso

How does the AI Governance Check work?

  • Initial phase: Carrying out an in-depth analysis as part of a comprehensive joint workshop.
  • AI governance check: Implementation of a risk assessment, compliance check and comparison with ethical guidelines.
  • Recommendation for action: Derivation of strengths and optimisation potential for the development of a business field-oriented alignment.
  • Support: Joint consideration and adaptation of data protection, algorithm bias, transparency, etc.
  • Data strategy: Evaluation of data quality and optimisation of the associated data strategy.
  • Securing the future: Long-term risk minimisation and maintaining compliance. Realising potential and future orientation.

Our approach begins with a thorough analysis of our customers' AI usage. In a joint workshop, we develop a comprehensive understanding of current practices and challenges. These findings form the basis for our AI governance check, which covers important aspects such as risk assessment, compliance and ethical guidelines. Our methodology combines technical know-how with industry-specific expertise. We interpret the results of our AI governance check in the context of our clients' individual corporate structure and culture. In this way, we precisely identify the strengths and optimisation potential of their AI strategy and develop business field-oriented recommendations for action. Our AI governance check actively helps to uncover potential challenges in areas such as data protection, algorithm bias or lack of transparency. Against the backdrop of increasing regulatory requirements, in particular the EU AI Regulation, we support our clients in minimising risks and ensuring the compliance of their AI software.

Our AI governance check enables us to examine and optimise the company's governance structures with regard to the development and use of AI. We identify potential for improvement in organisational processes and provide specific recommendations for strengthening AI governance. This can include the establishment of AI-specific committees, for example for ethics and equality, or the implementation of robust quality assurance processes for AI systems. Analysing and evaluating the current situation forms the basis for future compliance with relevant standards and guidelines. With the help of our AI governance check, we can identify concrete and specific requirements for AI applications and work out how these can be effectively fulfilled. This creates legal certainty and strengthens stakeholder confidence in the company's AI activities. Another important aspect of our AI governance check is the assessment of data quality and integrity. We analyse how companies collect, process and protect data for the training and use of AI models. In doing so, we identify potential weaknesses and make recommendations to improve the data strategy to ensure fair and unbiased AI.
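To illustrate what one such data-quality check can look like in practice, the sketch below computes a simple fairness indicator, the demographic parity difference, on a toy data set: the gap in positive-outcome rates between groups of a protected attribute. The column names, the sample data and the 0.2 threshold are assumptions chosen for this example; in a real engagement, the attributes and thresholds would be defined for the specific use case.

```python
# Minimal sketch of a data-quality check for fairness: comparing
# positive-outcome rates across groups of a protected attribute
# (demographic parity difference). Column names and the threshold
# below are illustrative assumptions.
from collections import defaultdict

def parity_difference(rows, group_key="gender", label_key="approved"):
    """Return the gap between the highest and lowest positive-outcome
    rate across groups; 0.0 means perfectly balanced outcomes."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += int(bool(row[label_key]))
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

training_data = [
    {"gender": "f", "approved": 1}, {"gender": "f", "approved": 0},
    {"gender": "m", "approved": 1}, {"gender": "m", "approved": 1},
]
gap = parity_difference(training_data)
print(f"demographic parity difference: {gap:.2f}")  # 0.50 for this toy sample
if gap > 0.2:  # illustrative threshold, to be set per use case
    print("potential bias: review sampling and labelling strategy")
```

A single metric like this never proves fairness on its own; it is one of several indicators we look at when assessing whether training data could systematically disadvantage a group.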

We see working with our clients as an iterative process. After the initial analysis and implementation of the first recommendations, we carry out regular follow-up assessments. This enables us to measure progress, recognise new challenges at an early stage and continuously adapt the AI strategy to changing regulatory and legal frameworks. With our AI governance check, we support our clients in minimising risks, meeting compliance requirements and exploiting the potential of their AI investments. In this way, we ensure that our clients are set up for the future in the dynamically developing world of AI. We accompany them on the path to a responsible and sustainable use of AI that not only fulfils regulatory requirements, but also strengthens stakeholder trust and promotes business success.

Conclusion: Future development of AI and areas of application

While AI is not fundamentally new, recent technological and regulatory developments have been groundbreaking, and the pace of change is expected to remain high for a long time to come. These rapid developments present organisations with the challenge of finding appropriate governance mechanisms while complying with the growing number of AI regulations. A comprehensive AI governance check is crucial to meet these challenges. It combines our technological expertise, regulatory knowledge and ethical considerations to identify and minimise risks at an early stage. This increases legal certainty and strengthens stakeholder trust. Companies that continuously improve their governance structures can fully utilise the economic benefits of AI while meeting the high requirements for transparency, ethics and compliance. The AI governance check helps organisations to be successful and sustainable not only in the present, but also in the future.

However, in order to utilise the potential responsibly, companies must ensure that their AI systems comply with ethical, legal and social standards. The AI governance check provides a valuable basis for this and helps companies to continue working successfully and sustainably with AI in the future.

Would you like to find out more about exciting topics from the world of adesso? Then take a look at our previous blog posts.

AI @ adesso

Would you like to find out more about AI and how we can support you? Then take a look at our website. Podcasts, blog posts, events, studies and much more - we offer you a compact overview of all topics relating to GenAI.

Find out more about GenAI on our website


Author Marina Žagar

Marina Žagar studied law with a focus on IT law and has worked in the field of data security for international organisations abroad. Her main areas of interest are regulatory requirements in the field of information security and data protection, such as the BSI standards and IT-Grundschutz Compendium, ISO 27001 and the GDPR. She has extensive experience as a consultant and information security officer in various sectors and industries: banks, public administration, energy suppliers, healthcare and critical infrastructures.


Author Jonas Reinhardt

Jonas Reinhardt has been working intensively on IT security and software development for over four years. He specialises in the public sector, healthcare and life sciences. His work focuses on security by design, privacy by design, threat modelling, SAST, the (secure) software development lifecycle and the relevant regulatory frameworks such as BSI IT-Grundschutz and the GDPR. In addition, Jonas Reinhardt has several years of experience in customised, platform-independent software design and development.
