Artificial intelligence (AI) is a branch of computer science aimed at the programming and design of hardware and software systems that enable machines to be endowed with characteristics considered typically human.
These are systems by which machines can perform complex actions and reasoning, learn from mistakes, and perform functions hitherto exclusive to human intelligence.
Simply put, AI allows machines to function like humans, applying their own reasoning and mechanisms.
Due to its purpose and functioning, artificial intelligence inevitably entails a change in society and its greater digitization.
RISKS AND BENEFITS OF AI
AI can bring benefits to our society across many areas and for many audiences, and in many ways it already has.
Thanks to AI we have navigation systems and machine translators, simple tools that facilitate and speed up aspects of our daily lives. It has likewise enabled more intuitive search engines, new medical tools, and new methods of crime prevention.
As for goods and services, AI can facilitate and improve both their production and their supply, helping businesses, citizens, and the state alike. It could also strengthen democracy by easing access to information and ensuring greater transparency.
However, as much as AI can improve our lives, it also carries several risks, which is the main reason why good regulation of it is needed.
Such risks arise first and foremost from manipulation: AI must be programmed, and this is naturally done by a person, who could deliberately bias the programming. This would pose serious risks to security, to democracy, and to the protection of people's rights.
It is therefore also necessary to determine who should be held responsible when a machine commits a crime: the machine itself, or the person who programmed and thus operated it.
Finally, a major challenge is striking a balance between the risk of underusing artificial intelligence, thereby missing the opportunities it offers, and the risk of overusing it.
It is therefore both useful and necessary to create legislation regulating the use of AI in all its aspects, so that it is used in the right measure and in the right ways, limiting the possible risks.
THE REGULATION OF THE EUROPEAN UNION
European Union policy on technology in general follows two lines:
- technology stimulation and development, through public and private investment;
- the adaptation of technologies to the needs of people and respect for their fundamental rights.
In this field, too, the EU has therefore developed a strategy that promotes cooperation among member states and the creation of a set of standards enabling a "human-centric" artificial intelligence.
Let’s therefore analyze the steps of the European Union for the regulation of AI.
In 2018, the Commission presented a Communication outlining a European Strategy on Artificial Intelligence, which defines three main objectives in the field:
- Increasing the use of AI in the European context across the board;
- Ensuring its ethically and legally appropriate use;
- Preparing for the socioeconomic changes that may result.
That same year, the Commission and the Member States adopted the Coordinated Plan on Artificial Intelligence, aimed at strengthening synergies between States and European institutions and at encouraging national governments to adopt a national strategy on AI.
In 2020, the Commission presented the White Paper on Artificial Intelligence, aimed both at supporting the adoption of AI and at addressing the risks involved.
In 2021, the Commission proposed a Regulation (the AI Act) aimed at promoting AI while laying down strict rules to minimize its risks. To this end, it introduces a risk-based ranking under which mitigation measures or prohibitions apply.
In order of seriousness, we therefore find:
- low-risk systems, for which only minimum transparency requirements are envisaged.
- high-risk systems, for which there are both specific technical requirements (use of high-quality data sets, establishment of proper documentation to improve traceability, information sharing, design and implementation of human surveillance measures, and achievement of robustness, safety and accuracy standards) and the establishment of a risk management system.
- prohibited systems, deemed incompatible with EU principles and the fundamental human rights contained in the European Charter. Such are systems that: use subliminal techniques or exploit a person’s vulnerabilities in order to distort his or her behavior or cause physical or psychological harm; assign people a social score on which prejudicial or unfavorable treatment depends; allow remote biometric identification (except in cases exceptionally authorized by law).
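The tiered structure above can be sketched as a simple lookup. This is only an illustrative sketch: the tier names and obligation strings paraphrase the summary in this article, not the Regulation's legal wording.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers, loosely following the Regulation's ranking."""
    LOW = "low"
    HIGH = "high"
    PROHIBITED = "prohibited"


# Hypothetical mapping from tier to obligations; entries paraphrase the
# article's summary and are not the Regulation's legal text.
OBLIGATIONS = {
    RiskTier.LOW: ["minimum transparency requirements"],
    RiskTier.HIGH: [
        "high-quality data sets",
        "documentation and traceability",
        "information sharing",
        "human surveillance measures",
        "robustness, safety and accuracy standards",
        "risk management system",
    ],
    RiskTier.PROHIBITED: ["placement on the market is banned"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is that obligations scale with the tier: a low-risk system carries one light duty, while a high-risk system accumulates several, and a prohibited system cannot be deployed at all.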
Article 29 also sets out the obligations imposed on users of AI systems: follow the instructions for use, ensure the data's relevance to the intended purpose, monitor operation, suspend use in the event of an incident, and keep the logs generated.
To ensure the implementation of the Regulation and cooperation among them, the States are required to designate national competent authorities, while at the European level a European Artificial Intelligence Board is established. The latter also acts as a bridge between the States and the Commission.
Finally, a system of ex ante conformity assessment and a sanction mechanism in case of violation of the rules are envisaged.
Artificial intelligence has developed rapidly in recent years and is set to keep expanding, at the European and global level, across a variety of sectors.
There is still much work to be done in Europe in terms of investment, but there is a good starting point and sound regulation to support the ethically and legally appropriate development of AI.