“Software is eating the world,” wrote American businessman and former software engineer Marc Andreessen in 2011, pointing out how nearly every modern activity involves the use of software. Little did Andreessen know that the software industry itself could be at risk of being eaten. From 2011 onwards, a new cutting-edge technology surged massively, posing a threat to that very industry: Artificial Intelligence. Artificial intelligence, or AI, is a technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. Inspired by the way neurons communicate, it works through algorithms and can produce complex results almost instantly. Since 2011, AI has become a ubiquitous technology, transforming many sectors. Indeed, its applications have spanned diverse industries, from healthcare to finance, retail, transportation and manufacturing, revolutionizing how humans work and interact.
That AI is destined to take over software was clear already in 2017, when NVIDIA CEO Jensen Huang said “Software Is Eating the World, but AI Is Going to Eat Software”, underlining the new frontier of technological development. Today artificial intelligence is widely used in many sectors and for many activities, though not without implications. While this technology holds immense potential to transform various aspects of our lives, its rapid development and deployment raise concerns about potential risks and unintended consequences. Not only does AI raise ethical questions about its use of personal data and its capacity to make moral decisions, but it may also pose concrete threats: from job displacement and economic inequality, to bias and discrimination, up to weaponization and the conduct of autonomous warfare. Addressing these threats requires a multi-pronged approach involving collaboration among governments, industry, academia, and civil society. It is crucial to establish ethical guidelines, develop robust regulatory frameworks, promote transparency and accountability, and foster public engagement and education about AI. The European Union has been a world pioneer in developing a comprehensive legislative framework for the regulation of artificial intelligence: the AI Act. Proposed by the Commission in 2021 and adopted by the Council in May 2024, the AI Act represents the first regulation of its kind on the use of artificial intelligence. The EU wants to regulate artificial intelligence to ensure better conditions for the development and use of this innovative technology, which can create many benefits, such as better healthcare, safer and cleaner transport, more efficient manufacturing, and cheaper and more sustainable energy, while at the same time ensuring that its use does not pose harmful risks to its citizens.
The new law aims to foster the development and uptake of safe and trustworthy AI systems across the EU’s single market by both private and public actors. At the same time, it aims to ensure respect for the fundamental rights of EU citizens and stimulate investment and innovation in artificial intelligence in Europe.
Under the EU AI Act, AI systems are analysed and classified according to the risks they pose to users, with four regimes used to define the risk. Unacceptable risk covers cases that threaten core EU values such as respect for human dignity, democracy, or the protection of fundamental rights; each Member State of the Union must ban systems that may harm these values. High risk refers to AI systems that concern health, safety, education, employment, justice, migration and basic services. This categorisation applies to imports too: under the AI Act, such systems can only access the EU market if they conform to various legal requirements. Limited risk refers to the risks associated with a lack of transparency in the use of AI. The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, thereby promoting trust; systems caught in this regime are to be governed chiefly by other legal instruments at the EU and national level. The fourth category is minimal risk, which includes applications such as AI-enabled video games or spam filters; the vast majority of AI systems currently used in the EU fall into this category. The aim of this classification is to ensure the safety of the EU single market by applying regulation before a product enters the market, safeguarding European consumers.
The EU AI Act represents a milestone in the regulation of the use of artificial intelligence; however, some grey areas remain. First of all, the AI Act applies only to areas within EU law and provides exemptions, such as for systems used exclusively for military and defence purposes or for research. This matters because military weapons that use artificial intelligence can select and engage targets without human intervention, which could lead to unintended consequences such as attacks on civilians or even the outbreak of war. Moreover, within the framework, risk-based regulation is a broad concept: several regulatory techniques can be deployed under a risk-based approach, some of which can lead to framings of risk different from those reflected in the AI Act. Another point of criticism is the lack of precision in addressing rights through a product safety lens. The AI Act aims to protect fundamental rights by controlling products, but this approach is not always effective and risks failing to protect fundamental rights efficiently. Critics argue, moreover, that the Act could end up protecting the interests of large tech companies, which have the resources to comply with its requirements, while smaller companies and startups may be left behind.
To conclude, the AI Act represents a milestone in the development of a regulatory framework for the use and development of AI products, and there is deep hope that a “Brussels effect” will spread around the world. It is worth underlining, however, that this legal recognition represents just the beginning of the regulation of a sector in constant evolution, one that will require an adaptive regulatory approach capable of keeping pace with its swift development.
Sources:
Almada, M., & Petit, N. 2023. The EU AI Act: A Medley of Product Safety and Fundamental Rights? European University Institute.
European Commission. 2024. Shaping Europe's digital future. https://digital-strategy.ec.europa.eu/it/policies/regulatory-framework-ai
European Council. 2024. Artificial Intelligence (AI) Act: The Council gives the final green light to the first worldwide rules on AI. https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-rules-on-ai/
European Parliament. 2024. EU AI Act: first regulation on artificial intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
European Parliament. 2024. Artificial intelligence act. A Europe Fit for the Digital Age. https://www.europarl.europa.eu/legislative-train/theme-a-europe-fit-for-the-digital-age/file-regulation-on-artificial-intelligence
Author: Elisa Modonutti