With the EU AI Act, the European Union has adopted a comprehensive set of rules to regulate artificial intelligence (AI).
This law has far-reaching implications, affecting banks and insurance companies in particular. In our blog post, we summarize the most important points and explain what banks and insurance companies need to do to remain legally compliant.
A risk-based approach to regulation
The EU AI Act classifies AI systems according to the risk they pose to the rights and freedoms of individuals. Three main categories can be distinguished (a short code sketch of this tiering follows the list):
- Unacceptable risk: AI systems that pose an unacceptable risk, such as social scoring by public authorities, are prohibited.
- High risk: Systems that are used in critical areas such as infrastructure, education, the labor market, financial services and administration are subject to strict requirements.
- Low risk: Low-risk systems are subject only to transparency obligations and otherwise less stringent requirements. Voluntarily following the rules for high-risk AI systems can nevertheless make sense.
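To illustrate, the tiering can be modeled as a small lookup. The Python sketch below is a minimal illustration, not a legal classification tool; the use-case names and their tier assignments are assumptions made for the example, since the actual classification always depends on the concrete system.

```python
from enum import Enum

class RiskTier(Enum):
    """The three risk tiers described above (simplified)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements apply"
    LOW = "transparency obligations only"

# Hypothetical tier assignments for example use cases.
EXAMPLE_TIERS = {
    "social_scoring_by_authority": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LOW,
}

def obligations_for(use_case: str) -> str:
    """Look up the assumed tier and return a readable summary."""
    tier = EXAMPLE_TIERS.get(use_case)
    if tier is None:
        raise ValueError(f"use case {use_case!r} has not been classified yet")
    return f"{use_case}: {tier.name} -> {tier.value}"

print(obligations_for("credit_scoring"))  # credit_scoring: HIGH -> strict requirements apply
```

In practice, such an inventory of use cases and tiers is typically the first step: only once every AI system in the organization is classified can the right set of obligations be attached to it.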
Requirements of the EU AI Act for high-risk AI systems
Banks and insurance companies must meet specific requirements for high-risk AI systems; a sketch of how these duties could be tracked follows the list:
- Risk management: Systems must be implemented to identify, assess and mitigate risks.
- Data processing: High-quality, relevant and representative data must be used to train the AI systems.
- Documentation and recording: There is an obligation to document the functionality and design of AI systems in detail.
- Transparency and information: Users must be informed about the functionality and risks of AI systems.
- Monitoring and human supervision: It is essential to ensure appropriate human supervision of the AI systems.
- Accuracy, robustness and cybersecurity: Companies have a duty to ensure accurate, robust and secure results.
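To make these duties operational, an institution could track evidence per system in a simple compliance record. The sketch below is illustrative only; the duty keys, the system name and the evidence reference are assumptions invented for the example.

```python
from dataclasses import dataclass, field

# The six duty areas for high-risk systems listed above.
HIGH_RISK_DUTIES = (
    "risk_management",
    "data_governance",
    "documentation",
    "transparency",
    "human_oversight",
    "accuracy_robustness_cybersecurity",
)

@dataclass
class HighRiskSystemRecord:
    """Compliance record for one high-risk AI system (illustrative)."""
    name: str
    evidence: dict[str, str] = field(default_factory=dict)  # duty -> evidence reference

    def open_duties(self) -> list[str]:
        """Duty areas for which no evidence has been recorded yet."""
        return [duty for duty in HIGH_RISK_DUTIES if duty not in self.evidence]

record = HighRiskSystemRecord(name="credit_scoring_model_v3")       # hypothetical system
record.evidence["risk_management"] = "risk-assessment-2025-03.pdf"  # hypothetical evidence
print(record.open_duties())  # the five duty areas still lacking evidence
```

A record like this doubles as input for the documentation duty itself: the list of open duties shows at a glance where a system is not yet audit-ready.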
How can you tell if an AI system complies with the AI Act?
Before high-risk AI systems are placed on the market, they must undergo a conformity assessment, which verifies that they meet the requirements of the AI Act. Systems that pass receive a CE marking as proof of conformity.
What are the penalties for violating the AI Act?
Significant fines can be imposed: for prohibited AI practices, up to EUR 35 million or 7 percent of a company’s global annual turnover, whichever is higher, with lower caps for other violations. This underlines the weight of the new regulations.
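The cap works as a simple maximum, as the following mini-calculation shows (the turnover figure is invented for the example):

```python
def max_fine_prohibited_practices(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for prohibited AI practices:
    the higher of EUR 35 million and 7 % of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Hypothetical bank with EUR 2 billion global annual turnover:
print(f"{max_fine_prohibited_practices(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR
```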
Who is responsible for governance and supervision?
At EU level, the European AI Office and the European Artificial Intelligence Board oversee the application and enforcement of the AI Act. National supervisory authorities in the Member States will also play an important role in enforcement.
Why is the EU AI Act particularly relevant for banks and insurance companies?
The EU AI Act is of great importance for banks and insurance companies, as many of their AI systems are classified as high-risk. These include systems for credit assessment (creditworthiness scoring), risk assessment and pricing in life and health insurance, and automated decision-making processes; systems used solely to detect financial fraud, by contrast, are explicitly exempted from the high-risk category. Institutions must ensure that their AI models meet the requirements of the AI Act, particularly in terms of transparency, fairness and accuracy.
What are the latest developments regarding the EU AI Act?
The AI Act entered into force on August 1, 2024, and applies in stages: the bans on unacceptable-risk practices have applied since February 2025, the obligations for general-purpose AI models since August 2025, and most requirements for high-risk systems follow from August 2026.
However, implementation across the EU member states is proving difficult, as some provisions are still unclear. In Germany, only a quarter of companies have engaged with the new regulatory framework so far, so the need for clarification remains considerable.
Another important point is the balance between innovation and risk protection. The German government emphasizes that clear rules strengthen trust in AI technologies and at the same time leave room for innovation. The AI Act takes a risk-based approach here, with requirements varying depending on the risk level of the AI application.
What does transparency of AI systems involve?
The white paper from the German Federal Office for Information Security (BSI) emphasizes the importance of transparency in AI systems. Transparency means providing information about the entire life cycle of an AI system and its ecosystem; it makes systems comprehensible and explainable and thus keeps them trustworthy. The BSI distinguishes four transparency factors, summarized below; a short code sketch follows the list.
Factors for transparency
- AI system: The definition and functioning of the system, including the technologies used, such as machine learning and neural networks.
- Ecosystem: The context in which the AI system is developed, provided and operated, including information about the provider and the supply chain.
- Information: The relevant and appropriate data that must be disclosed to enable an informed assessment of the system.
- Lifecycle: Transparency must be present in all phases, from planning and development through commissioning to decommissioning.
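One way to operationalize these four factors is a structured transparency record that accompanies each system through its life cycle. A minimal sketch, with all field contents invented for the example:

```python
from dataclasses import dataclass

@dataclass
class TransparencyRecord:
    """Record covering the four BSI transparency factors (illustrative)."""
    system: str           # definition and functioning, techniques used
    ecosystem: str        # provider, supply chain, operating context
    information: str      # what is disclosed, and to whom
    lifecycle_phase: str  # planning, development, operation, decommissioning

record = TransparencyRecord(
    system="credit scoring; gradient-boosted trees on application data",
    ecosystem="developed in-house; hosted by an external cloud provider",
    information="technical documentation for the supervisor; plain-language notice for customers",
    lifecycle_phase="operation",
)
print(record.lifecycle_phase)  # operation
```

Updating such a record at each phase transition keeps the lifecycle dimension of transparency from being an afterthought.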
Opportunities and threats through transparency
Transparency helps keep decisions comprehensible. It deters misuse and makes it possible to identify potential risks at an early stage and respond appropriately.
However, disclosing information also carries the risk of opening up new attack vectors that adversaries could exploit. A balanced level of transparency is therefore required, one that takes both security and the information needs of stakeholders into account.
Conclusion
Compliance with the EU AI Act requires banks and insurance companies to adapt their IT systems and internal processes. As an IT consultancy, X1F supports these institutions in implementing risk management systems, ensuring compliance with regulatory requirements and meeting transparency and security standards.
Get in touch with us. Discover how you can bring your AI systems up to date and meet the requirements of the EU AI Act. matrix technology, as part of the X1F Group, is at your side as a competent partner to help you successfully master the challenges of the EU AI Act.