Understanding the EU’s New Act on Artificial Intelligence
Artificial Intelligence (AI) is becoming a necessity in our daily lives, from healthcare decisions to policing. Whether you need help drafting a quick email to a client, are stuck writing code, manage social media and need monthly content ideas, or want to create a quick 3D model for a clothing brand, AI helps make life easier.
It improves efficiency, accuracy, productivity, and decision-making in many areas of our lives. Some claim that AI will someday take our jobs (I do not believe this is possible).
To date, there has been only one legal definition of the term “AI systems”, provided by the European Commission:
“The range of software-based technologies using specific techniques and approaches (‘machine learning’, ‘logic and knowledge-based’ systems, and ‘statistical’ approaches) that could be complemented through the adoption of delegated acts to factor in technological developments.”
As with everything that has an advantage, there is also a disadvantage.
The rapid rise of AI systems such as Google’s chatbot Gemini, OpenAI’s ChatGPT, and QuantX, used in the diagnosis of breast abnormalities, raises ethical and privacy concerns. These include breaches of fundamental human rights, lack of originality, the continuous adaptation and unpredictability of AI systems, questions of authenticity, and, not least, cyber threats such as deepfakes. This rapid development highlights the need to regulate AI systems and their use.
As of the time of writing, the European Commission’s proposed AI Act is expected to come into force by June 2024. The Act establishes a global benchmark for protecting EU citizens from the potential risks of AI and sets guidelines for businesses developing AI systems for the European market.
Unlike China, whose aim is to maintain social stability and state control, or the USA, which has taken a lighter-touch approach by proposing an AI Bill of Rights for AI ethics, the EU’s AI Act takes a comprehensive, risk-based approach: AI systems are assessed according to the specific level of risk they pose.
Four categories of risk are identified:
- Harmful AI practices: Any AI system that would exploit people’s safety, livelihoods, or rights, or lead to undesirable consequences, is prohibited under the Act. This includes AI systems that use biometric identification to infer emotions or to profile people based on race, sexual orientation, or religion.
- High-Risk AI systems: AI systems that threaten users’ health, safety, livelihoods, or rights, such as products falling under EU health and safety regulations, are regarded as high-risk. These systems are permitted but must meet a set of requirements and obligations before gaining access to the EU market. Before an organization providing a public service deploys a high-risk AI system, a fundamental rights impact assessment must be carried out.
- Limited Risk: AI systems that interact with humans or generate image, audio, or video content are subject to light transparency obligations. Examples include Synthesia AI, PlazmaPunk (an AI platform that generates music videos from audio files), and DALL-E 3 from OpenAI.
- Low or Minimal Risk: AI systems that pose low or minimal risk can be developed and used in the EU and are not subject to obligations beyond currently applicable legislation.
The European Commission proposed a two-tiered enforcement system for the AI Act.
At the EU level, a European Artificial Intelligence Board composed of representatives from the Member States and the Commission would oversee the application and implementation of the regulation.
At the national level, National Supervisory Authorities would ensure that individual Member States enforce the regulation for high-risk AI systems within their markets.
AI regulatory sandboxes will be established to provide a controlled environment for the development, testing, deployment, and validation of innovative AI systems, including testing in real-world conditions.
Fines for Noncompliance with the AI Act
Any organization operating in the EU that fails to comply with the Act will be liable for administrative fines of up to €30 million or 6% of total global annual turnover, whichever is higher. The amount depends on the severity of the breach and the nature of the non-compliance.
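To make the penalty ceiling concrete, it can be sketched as the greater of the two amounts. This is a simplified illustration only; actual fines are set case by case and depend on the severity of the breach.

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper bound on an administrative fine under the proposed AI Act:
    the greater of EUR 30 million or 6% of total global annual turnover.
    (Illustrative sketch; real fines are determined by regulators.)"""
    return max(30_000_000, 0.06 * annual_global_turnover_eur)

# A company with EUR 1 billion in turnover: 6% is EUR 60 million,
# which exceeds the EUR 30 million floor.
print(max_fine_eur(1_000_000_000))  # 60000000.0

# A smaller company with EUR 100 million in turnover: 6% is only
# EUR 6 million, so the EUR 30 million figure applies instead.
print(max_fine_eur(100_000_000))  # 30000000
```

The "whichever is higher" structure means the €30 million figure acts as a floor on the maximum, so large firms cannot treat it as a fixed cap.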
Criticism of the EU AI Act
- One major concern is that the rules on prohibited and high-risk practices may prove ineffective in practice, because the risk assessment is left to the provider’s self-assessment, with no obligation on the developer to evaluate whether these risks are acceptable in light of fundamental rights law.
- Whether research and development in AI should be regulated is a hot topic of debate. Some argue that regulating AI could stifle innovation; I disagree. AI is currently one of the leading emerging threats to both businesses and critical infrastructure, as the problems of deepfakes and the data privacy issues associated with LLMs and generative AI make clear. Without some form of regulation, more threats may emerge, potentially causing more harm than good, so it is essential to have safeguards in place when designing AI systems. At the same time, there should be a balanced approach that regulates AI and promotes innovation while ensuring it remains ethical, transparent, and aligned with societal values.
- AI systems classified as high-risk are subject to a set of requirements and obligations. However, these requirements do not apply to AI systems developed or used solely for national security purposes, or developed by law enforcement or government agencies. Often, the public is unaware that the authorities are using a high-risk AI system in the first place: information related to the use of AI by law enforcement will only be available in a non-public database, severely restricting scrutiny.
The EU AI Act is a significant step in regulating the use of AI and sets a global benchmark, but it is not a perfect Act and is still open to improvement. The EU AI Act will be enforced next month. There is presently no specific legislation on AI in Nigeria.
I hope other countries and international bodies follow suit in setting up regulations concerning the use and design of AI systems that will not stifle innovation.
What do you think? Is regulating AI a good power move, or would it cause more harm than good?
To explore the draft text of the EU AI Act, check out: https://artificialintelligenceact.eu/ai-act-explorer/
Did you enjoy reading this? You can follow me on:
X: @afrotechiee
LinkedIn: Naomi Emma Ekwealor