by Helga Zanotti
Let's examine the history of AI regulation in Europe, focus on the risks that GPT-3 presents, and conclude with the principles that may be essential to adopt in order to employ this form of generative AI within our companies while reducing its risks.
The European Union long maintained a position of substantial neutrality on the topic of artificial intelligence, studying risk scenarios but limiting itself to the issuance of soft-law instruments, such as the Council Recommendation adopted on May 14, 2019. The Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, published on April 21, 2021, represents a change of course. It intervenes with binding force in the functioning of the internal market, through a uniform legal framework for the development, marketing, and use of artificial intelligence in accordance with the values of the European Union. The focus of the rule is the public interest: for example, protecting the health of European citizens, their safety, and their fundamental rights. This represents the first, and potential, obstacle to the implementation of a generative AI tool like GPT-3, which belongs to the family of technologies grouped under the name Artificial Intelligence and aims to make systems for human interaction more intuitive.
From the White Paper on Artificial Intelligence, published by the European Commission in 2020, it is clear that the European Union intends to use Artificial Intelligence to change pace. The proposed regulation, under examination since 2021, adopts a risk-based approach to fundamental rights, distinguishing three tiers: 1. unacceptable risk; 2. high risk; 3. limited or minimal risk. This focus on risk is the distinctive feature of the regulations issued within the EU digital framework. For this reason, it is also the first element to study in relation to the implementation of GPT-3.
In particular, this software expresses moral judgments and evaluations of the requests made by the user, such as the use of controlled substances in pain therapy, which is illegal in some states and legitimate in others. A search for drugs for clinical use, entered into the Bing search engine (1) in a state that qualifies the use of such drugs for patients with end-stage cancer as illegal, may lead GPT-3 to limit or interrupt the user's search. It is not a human being but GPT-3 that makes the selection, i.e., evaluates whether what the user wants to know is legitimate, limiting the user's freedom according to rules and principles that may, for example, be affected by bias, or that, more generally, it is in no position to review through comparison with human beings. In addition, GPT-3 can be abused by circumventing its limits (2). GPT-3 has self-learning capabilities and therefore falls among the high-risk technologies, which the proposed regulation subjects to constant human oversight, including with respect to violations of the ethical principles informing the text.
Reading the Ethics Guidelines for Trustworthy AI in Europe suggests the need to introduce binding rules suited to governing the operational autonomy of Artificial Intelligence systems, which have the power to modify their behavior based on the conditions in which they operate (3). Although it lacks the so-called moral stress that belongs to human beings, i.e., the perception of the disvalue of conduct, Artificial Intelligence can still be instructed on the basis of fundamental principles, such as the choice of the lesser evil. To this end, it becomes essential to monitor the quality of the data, that is, of the sources on which GPT-3 is trained. To date, GPT-3 draws on data made available by the Google search engine and on the doctrinal texts it finds there.
The new generation of EU regulations, which the regulation on Artificial Intelligence will join as soon as it is issued, is characterized by the provision of tests, procedures, and checks that make compliance with the rules verifiable in highly operational terms. In light of this, ethics for new technologies can be useful in practical terms, as a tool to guarantee:
• The correct use of algorithms towards stakeholders;
• Transparency of processes related to Artificial Intelligence systems and privacy by design;
• Transparency of decision-making processes related to Artificial Intelligence systems and to the implementation of ethical principles.
The implementation of GPT-3 entails the urgency of integrating corporate values and objectives with:
• Respect for human rights, such as those of customers/consumers;
• The safeguarding of the emotional and professional well-being of workers in relation to the development and use of increasingly capable algorithms, even for complex tasks;
• The protection of the environment and the appropriate use of technology, to reduce resource use;
• The construction of trust in Artificial Intelligence in decision-making and in the learning of the specific know-how needed to justify the responses given to users;
• The use of intelligible criteria in the processing of outputs.
On October 20, 2020, the European Parliament had already approved an initiative focusing on the value of ethical principles and on liability for damages caused by AI (4), including a focus on the legal principles that must be respected in the development, implementation, and use of AI and related technologies, including software like GPT-3.
In conclusion, it is necessary to verify that GPT-3 observes:
1. Detailed guidelines for input data control;
2. Security and transparency;
3. Constantly monitored guarantees against distortions and discrimination;
4. Social responsibility.
The indicated checks represent the necessary tool for achieving a controlled and effective implementation of GPT-3.
*Thanks to Ing. Giovanni Mandelli for the technical data related to the operation of the GPT-3 software.
1 Bing is the first search engine to have declared using GPT-3 within searches made by users, editor's note.
2 On this point, Matteo Flora, https://youtube.be/EKT5-OH2Ns4.
3 The American Algorithmic Accountability Act leads to entirely similar conclusions, editor's note.
4 More properly, the expression "by the human being through Artificial Intelligence" is considered correct, editor's note.