Ethical and legal aspects of the use of artificial intelligence in Russia, EU, and the USA: comparative legal analysis
RELIGACIÓN. Revista de Ciencias Sociales y Humanidades, vol. 4, núm. 19, pp. 212-220, 2019
Centro de Investigaciones en Ciencias Sociales y Humanidades

Received: 29 May 2019
Approved: 20 August 2019
Abstract: The article is devoted to a comparative legal analysis of the ethical and legal questions of the use of artificial intelligence in Russia, the European Union and the USA. The paper notes a deep and ambiguous discussion among scholars regarding the understanding and use of artificial intelligence as an equal of the human being. The EU adheres to a cautious approach to the legal regulation of robotics, taking all risks into account and placing full responsibility for compensation of damage caused by artificial intelligence on the person who created (programmed) and controlled the robot. In the Russian Federation, in the absence of a basic law, the draft "Grishin law" was proposed, which may be the boldest in world practice. There is likewise no unified legal regulation of artificial intelligence in the United States, although some states regulate the use of robots, including unmanned vehicles in road traffic.
Keywords: Artificial intelligence, subject of law, robotics, personality, risk, ethics of law, digital technology.
INTRODUCTION
Robotics and artificial intelligence have become an integral part of society over the last decade. The use of robots and algorithms has significantly affected professional, economic and public activities. Robots are successfully used in industry, in medicine (robot surgeons and diagnosticians), in social care (nannies and nurses), in transport and search systems, in environmental and space studies, and in work involving hazardous conditions. Thanks to robots, the risk of human error is minimized, people are freed from routine work for the sake of creativity, and risks to human life and health are eliminated. However, along with the positive results of the use of robots, a number of negative and socially significant questions arise: loss of work due to the replacement of humans by robots (especially in areas of routine work, such as accounting and driving), anthropomorphism in relation to robots, the use of robots as a means of warfare or violence against people, and so on.
In the sphere of ethics and law, one of the most complicated issues is the problem of granting robots legal personality and defining the subject who should be responsible for the harm caused by artificial intelligence. In the absence of a unanimous opinion of the public and scholars regarding the recognition of the qualities of human personality in artificial intelligence, the legislation and judicial practice of different countries retain gaps and uncertainty on the issue of the legal personality of robots endowed with artificial intelligence.
The legal systems of the European Union, the USA and Russia are the most instructive for analysis and for taking their legal experience in the regulation of robotics into account. These legal systems offer distinct models for solving the problems of using artificial intelligence in both ethical and legal relations.
Among the ethical and legal issues of the application of artificial intelligence technology, the following can be identified: the development of a unified approach to understanding artificial intelligence; the problem of ascribing the qualities of personality to robots; the types of robots; robots as subjects of law; the problem of determining the subject responsible for harm caused by "smart machines"; the possibility of recognizing the robot as a subject of intellectual property; and public administration, control and standardization in the field of robotics and cyber-physical systems (Aletras et al., 2016).
One of the tasks of jurisprudence is to develop a definition of “artificial intelligence”. At the same time, one should pay attention to a number of complications in solving this issue:
• There is no generally accepted scientific definition of the basic term "natural (human) intelligence", which is perceived as a leading property of human nature; the intellect is often associated with such a property as thinking;
• There is no unity in the understanding of artificial intelligence in the specialized literature, where alongside this term such notions as "machine learning" and "neural network" are used.
It should be noted that scholars distinguish between "weak artificial intelligence" and "strong artificial intelligence". Weak artificial intelligence is understood as a smart machine for solving specific tasks (for example, developing scripts for movies), while strong artificial intelligence is designed to solve a wide range of tasks (Morehat, 2017).
In the variety of definitions of artificial intelligence we can highlight the following main directions:
• Artificial Intelligence as a system that acts like a person with similar cognitive abilities;
• Artificial intelligence as a system (device) possessing at least one of the properties of the human mind;
• Artificial intelligence as a superintelligence: a system surpassing a person's intellectual abilities (Bostrom, 2016);
• Artificial Intelligence as a scientific direction, studying the possibility and use of systems (devices) for modeling human thinking (machine learning).
An interesting definition is offered by A.V. Ponkin and A.I. Redkin: "Artificial intelligence is an artificial complex cybernetic computer hardware and software system (electronic, including virtual, electronic-mechanical, bioelectronic-mechanical or hybrid) with a cognitive-functional architecture and its own or correspondingly accessible (downlinked) computational power of the necessary capacity and performance". This cybernetic system has a number of properties: substantivity (subjectivity and the ability to improve); a high-level ability to perceive information, to make decisions and perform them, and to analyze its own experience; the ability to adapt to the external environment and to perform cognitive functions (creative, analytical); and the capacity for consciousness (Ponkin, Redkin, 2018).
P.M. Morehat offers a similar definition: "Artificial intelligence is a fully or partially autonomous self-organizing computer hardware-software virtual or cyber-physical system, including a bio-cybernetic one, endowed with the abilities and capabilities of:
• Anthropomorphic intelligent thinking and cognitive actions, such as the recognition of images, symbolic systems and languages, reflection, reasoning, modeling, figurative (semantic) thinking, analysis and evaluation;
• Self-reference, self-regulation, self-adaptation to changing conditions, self-limitation;
• Self-maintenance in homeostasis;
• A genetic algorithm (a heuristic search algorithm that preserves important aspects of "parental" information for the "next generations" of information), with accumulation of information and experience;
• Learning and self-study (including from one's own mistakes and experience); independent development and application of algorithms of self-homologation;
• Independent development of tests for self-testing, independent performance of self-tests and of tests of computer and, where possible, physical reality;
• Anthropomorphic and reasonable independent (including numerical) decision-making and problem solving" (Aletras et al., 2016).
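The genetic-algorithm property named in the list above can be illustrated with a minimal sketch. This is a toy "one-max" example written for this article's explanation only (the function names and parameters are illustrative, not drawn from any system the article describes): a population of bit strings evolves by keeping the fitter half as parents, recombining them so that each child preserves aspects of "parental" information, and occasionally mutating.

```python
import random

def fitness(bits):
    # Toy objective ("one-max"): the more 1-bits, the fitter the string.
    return sum(bits)

def evolve(pop_size=20, length=16, generations=40, seed=1):
    rng = random.Random(seed)
    # Random initial population of bit strings.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives as "parents".
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            # Crossover: the child preserves parts of both parents' information.
            child = a[:cut] + b[cut:]
            # Mutation: an occasional random bit flip introduces novelty.
            if rng.random() < 0.2:
                i = rng.randrange(length)
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the parents are carried over unchanged each generation, the best fitness never decreases; the "next generations" inherit and gradually improve on the accumulated "parental" information, which is the essence of the heuristic search the definition refers to.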
A number of foreign countries have attempted to define the term “artificial intelligence” in their legislation. As early as 2008, South Korea adopted the Law on Promotion of Development and Dissemination of Smart Robots. The law proposes the following definition of a smart robot: a mechanical device that is able to perceive the environment, recognize the circumstances in which it operates, and move around on its own in a targeted manner.
The analysis of existing approaches to the concept of “artificial intelligence” allows us to identify the following features:
• Availability of a technical device (cyber-physical system) capable of perceiving information and transmitting it;
• A certain degree of autonomous operation without human participation (subjectivity), despite the absence of biological life in such a system;
• Ability to analyze, generalize information, develop intellectual solutions based on the studied data (thinking), self-awareness;
• Ability to learn, to search for information independently and to make decisions based on this information.
Such features as autonomy and the ability to make independent decisions on the basis of one's own experience and learning raise the question of the similarity between a human being and artificial intelligence, and of the emergence of an actor separate from the human being: a personality, a subject, and perhaps a subject of law. Undoubtedly, this question concerns philosophy and the problem of understanding the human mind and personality. The fundamental similarity between the human mind and the work of neural networks (built on the principles of the human brain), capable of self-study, leads some to conclude that robots with artificial intelligence have a personality. As a consequence, ethical questions arise about the creation, use and disposal of robots: the admissibility of cruel treatment, the use of shutdown and destruction functions, and the use of the robot as a killer or a military robot.
DEVELOPMENT
Methods
Complex ethical issues related to the legal regulation of the status of artificial intelligence, an unconventional phenomenon for the classical subject of legal regulation, determine the complexity of the methodology of this study. The authors use, first of all, the legal-dogmatic method, by means of which the current legislation is studied, as well as the bills regulating the information sphere, including artificial intelligence. At the same time, a critical analysis and assessment of existing formulations and approaches is carried out in terms of their compliance with social realities and intra-systemic legal characteristics. The comparative legal method is also actively used in combination with the dogmatic one and is central to research of this kind. Moreover, not only legislation, of which there is as yet little in the sphere researched here, but also program documents of a political-legal nature are studied comparatively. The predominance of program documents over normative legal acts testifies to the fact that some states, as well as the world community as a whole, have not yet defined their attitude to artificial intelligence. Theoretical concepts of the legal status of artificial intelligence proposed by Russian, European, and American scholars are also examined in a comparative manner. The problem is first analyzed in relation to individual countries; in conclusion, the authors generalize and formulate synthesized approaches to the problem of artificial intelligence.
A key methodological principle of the article is its problem-polemic character: since legal acts, political documents and scholars in different countries do not at present answer the question of the legal model for regulating the status of artificial intelligence, the authors largely identify problem areas and invite discussion rather than put forward an integral and coherent legal concept of artificial intelligence.
Results and discussion
There is virtually no legal regulation of robotics and artificial intelligence in the Russian Federation. A number of strategic documents have been adopted as part of the Digital Economy project. The Decree of the President of the Russian Federation of May 7, 2018 "On the national goals and strategic development objectives for the period up to 2024" and the Information Society Strategy 2017-2030 list the development of a system of legal regulation of the digital economy and the use of artificial intelligence among the priority tasks in the sphere of the digital economy.
The State Program "Digital Economy" (Order of the Government of the Russian Federation of June 28, 2017) lists neurotechnologies, artificial intelligence and robotics among the end-to-end digital technologies and particularly stipulates the need for systemic regulatory support for the use of digital technologies. At the same time, the Russian Federation lacks the necessary legal framework for the use of artificial intelligence technology. There is a vacuum of legal regulation in this area, while artificial intelligence is already in use in the world and in Russia. Meanwhile, various proposals regarding the regulation of robotics are being made in legal and corporate circles.
In 2017, at the request of the Grishin Robotics corporation, a group of scientists from the Research Center for Regulation of Robotics and Artificial Intelligence (A. Neznamov, V. Naumov, V. Arkhipov) developed a draft Russian federal law on robotics. The authors of the bill took a rather cautious approach to the legal regulation of the use of robots (Robots.., 2019). First of all, the developers refrained from comprehensive regulation of all aspects of the use of robotics, touching only upon the most pressing issues of its application. The task of drafting a basic law on robotics was deferred by the authors of the "Grishin law" to a later stage (Arkhipov, Naumov, 2017).
The bill raises very serious legal issues of robotics application.
First, the authors of the draft law introduce a legal definition of robots as intellectual systems autonomous from humans. "A robot is a device capable of acting, determining its own actions and evaluating their consequences on the basis of information coming from the external environment, without full human control," says the text of the bill. The emphasis is clearly placed on such qualities as the independence of robots in decision-making and the lack of complete human control over their actions.
Moreover, it is the autonomy of robots that can be a key issue in determining the legal personality and responsibility of cyber-physical systems. The lack of autonomy turns the robot into an object of legal relationship, into a technical structure and software product. The qualities of the subject of law in this case are typical only for the robot owner.
Second, the bill proceeds from the separation of two types of robots and a corresponding dualism in their nature:
• Robots as a type of property;
• Robots-agents as independent participants of civil turnover, endowed with the status of legal entities with special legal personality.
Obviously, the question of whether robots possess legal personality should be decided on the basis of the notion of a subject of law and of the decisive quality determining independent legal personality, as conventionally developed in legal science and practice. At the same time, some scholars state that there is no unambiguous method of determining the subject of law, given the existence of a category of legal entities that are not identical to a person. Most researchers note that it is premature to give smart machines the status of legal entities. This possibility is acknowledged for the event that, in the future, an artificial mind appears that is largely similar to the human mind and, in addition, has such qualities as conscience and emotions.
Among the criteria for the identification of the subject of law in legal science are given:
• Consciousness, will, and emotions (the absence of consciousness in mentally ill people, children, and legal entities is not a reason to deprive them of their legal personality);
• Independence of a person in making decisions and managing his actions;
• The concept of fiction of a legal entity as a means of risk management and limitation of property liability.
Comparison of a physical person with an autonomous robot (artificial intelligence) leads to the conclusion that in the presence of a common feature of consciousness, the robot does not possess such qualities of a person as emotions and will. Consequently, the identity between a person and a robot as subjects of law is impossible.
At the same time, there are no external obstacles to extending the qualities of a legal entity to an autonomous robot through fiction as a legal technique. It is accepted to recognize artificially the presence of some organization, although no real person or organization of people may stand behind it. Thus, V. Naumov and V. Arkhipov emphasize in one of their works that robots can be endowed with a special legal personality for certain purposes (trade through bots on the Internet, etc.) (Robots.., 2019). P. Cerka notes that any subject of law acquires the quality of legal personality by virtue of law. Legal personality is a legal category, and it can be applied to various entities on the basis of certain interests and goals (Cerka et al., 2017). The understanding of the subject of law as a legal concept that does not coincide with the person is very accurately conveyed by D.V. Pyatkov. The scholar notes, in particular: "If an individual is only a legal representation, one property of a person, his mask, then a legal entity is likewise a legal representation, a property, a mask. One must suppose that behind this mask stands the same person; that is, the legal entity is one more persona, one more of his masks. Better still: behind every persona is a human being, because under the legislation one person can have many legal personae" (Pyatkov, 2012). Accordingly, to solve the issues of civil turnover and to satisfy other public interests, the method of legal fiction is used, by means of which various actors are endowed with the qualities of a subject of law: legal entities, animals, robots. In each case, a human being stands behind this or that subject of law. Thus, a "smart machine" can be endowed with legal personality, by analogy with corporate legal capacity, in order to satisfy human interests.
Another question is whether there is a need to give robots legal personality of a legal entity. Understanding of a legal entity as a way to minimize the risks of legal liability is unlikely to apply to robots, because it will allow avoiding legal liability for manufacturers and owners of robots. Thus, during the discussion of the resolution on robots, Member of the European Parliament J. Lebreton noted: “I object to this perspective for two reasons: firstly, because it would release all responsibility from their producers and users, which would undoubtedly make powerful lobbies happy; secondly, and most importantly, because I believe, as well as Jacques Maritain, that the human being is endowed with a spiritual existence, with which no artificial intelligence can be compared” (European.., 2017). Giving robots a special legal personality is possible in the presence of sufficient guarantees of the rights of other participants of legal relations.
Third, the issue of liability for damage caused by robots is being addressed in an original way.
A positive decision on endowing robots with delictual capacity (the ability to be liable for harm) can be made only if such a robot has sufficient property or its property liability is insured.
In addition, it should not be forgotten that the punishment with its purpose of correction and prevention of new offences in respect of robots is inapplicable. Although in the legal literature it is proposed to use such a measure as the destruction of robots.
It should be noted that the authors of the draft law have not managed to draw a clear line in establishing liability for damage caused by the robot as a legal entity and, at the same time, as an item of property.
Fourth, the draft law proposes to establish safety requirements for robots and to introduce a special system of warnings about legal conflicts.
Let us now turn to the official legal documents of the European Union regulating the social and legal status of artificial intelligence. The European Parliament resolution of 16 February 2017, "Civil Law Rules on Robotics", is a set of legal principles and ethical requirements for the creation and use of robots. As such, the resolution does not contain specific legal norms but serves as a basic reference point for the European Union member states in the development of legal acts on robotics.
It should be noted that the resolution takes a thorough and serious approach to the topic of autonomous robots and artificial intelligence. The resolution reflects various aspects of the use of robots: the use of robots in industry, assistance to the sick, the replacement of routine work, etc. At the same time, the resolution is devoid of enthusiasm and calls for the cautious use of self-learning robots because of the threats they may pose: loss of jobs for people; growing social inequality; the problem of robot control and manageability; the issue of responsibility for the harm caused by autonomous robots, etc. Taking into account both the advantages and the threats of using artificial intelligence, the European Parliament has been led to establish the principle of gradualism, pragmatism and caution with respect to future initiatives in the field of robotics and artificial intelligence. On the one hand, this principle ensures that all risks and threats are taken into account; on the other hand, it does not hinder innovative development in the field of robotics.
The European Parliament’s resolution specifically addresses liability issues in the case of autonomous robots. The autonomy of robots in the context of the resolution is understood as the ability to make decisions and implement them independently without external control or influence. At the same time, autonomy is perceived in purely technical terms as the implementation of the program.
It should be noted that the independence of the robot at the current level of technological development is relative. First of all, the algorithm of robot’s actions is created by a human being, even if we are talking about artificial intelligence and self-learning neural networks. It is the person who lays down the model of robot activity at the level of the program. Secondly, most often the robot acts in deep interaction with a person both remotely (drone control, deep-water submersibles) and inside the device (aircraft control). As D. Mindell notes, there is a myth about the robots’ independence and it is necessary to clearly realize the robots’ dependence on humans (Mindell, 2017).
The resolution of the European Parliament proposes the following features of autonomous (smart) robots:
• The ability to become autonomous by using sensors and/or exchanging data with their environment;
• The ability to learn from experience;
• Availability of at least minimal physical support;
• The ability to adapt their actions and behaviour to the environment;
• Lack of life from a biological point of view.
The world legal literature attempts to consider the nature of robots by analogy with the legal regime of animals. In most legal systems of the world, animals are considered objects of law, subject to the principle of humane treatment. Unlike robots, some animals are capable of displaying emotions, but they do not have freedom of will and therefore cannot exercise rights and obligations.
Another way to solve the problem of the legal personality of "reasonable robots" is the concept of the "electronic person", actively defended by P.M. Morehat in his research (Morehat, 2018). P.M. Morehat names the following as the main prerequisites for granting legal personality: the existence of moral law, social potential and legal convenience. Naturally, artificial intelligence can be used only for the purposes of legal convenience in a number of cases: the conduct of electronic business and the determination of jurisdiction, the creation of intellectual property objects, and limiting the liability of developers of artificial intelligence units (Morehat, 2018).
At the same time, these goals can be achieved with the help of other legal means and regimes without giving robots legal personality: a type of property, a database (Schrijver, 2018). Therefore, the concept of an electronic person at the current stage of scientific discussion is very controversial.
G.A. Gadzhiev and E.A. Voynikas propose that, in deciding the question of the legal personality of robots, one should proceed from whether the robot is able to satisfy claims for compensation of the resulting damage on its own. According to them, "if recognition of the robot as a subject of law has any meaning or purpose, it consists in a more efficient and balanced distribution of responsibility. On the contrary, if the robot is unable to compensate for the harm it has done, the need for its recognition as a subject of law becomes problematic. In turn, the task, or social need, of distributing responsibility is a consequence of a more complex, universal need for balance" (Gadzhiev, Voynikas, 2018).
Since the question of the autonomy of the robot as a subject of law is premature, the European Parliament resolution notes the possibility of applying two legal constructs to the tort relations involving robots:
• The construct of the manufacturer's liability for robot malfunctions (for the quality and safety of the robot);
• The construct of liability for malicious acts, under which the user of the robot is responsible for the behavior that caused the damage.
Thus, the European Parliament's act emphasizes the insufficiency of the above rules in the event that the harm results from the robot's own actions and decisions, in the absence of a person's fault and of a causal connection between a person's actions and the resulting harm. Without prejudging the final legal decisions on this matter, the European Parliament has outlined a number of directions for the development of legislation on responsibility for the actions of smart robots.
First of all, the European Parliament, in addressing the issue of liability in connection with the use of autonomous robots, adheres to the idea of inadmissibility of limiting the types, forms and scope of compensation for the harm that may be caused by intelligent robots. This approach ensures that the interests of the victims are taken into account and limits the lobbying aspirations of robot manufacturers by reducing the scope of their own responsibility.
The European Parliament, in determining the person who will be responsible for the robot’s actions, proceeds from the theory of risk, in which the person who could have minimised the risks and taken into account the negative consequences is held responsible. At the same time, the degree of responsibility should be determined by the degree of robot autonomy and the role played by the person teaching the robot.
A European Parliament resolution notes the difficulty of addressing the issue of liability when harm occurs in the absence of human control in a highly autonomous robot. In this case, the absence of guilt and causality is an obstacle to assigning responsibility to a person. For such cases, the European Parliament proposes to use the mechanism of liability insurance through the payment of contributions by manufacturers and owners of robots to compensate for damage to affected persons.
Within the framework of European law, the position of P. Cerka is of interest; he sees two options for solving the issue of liability for damage caused in connection with the use of artificial intelligence:
1) Imposition of liability on the person who programs the artificial intelligence, by analogy with Article 12 of the UN Convention on the Use of Electronic Communications in International Contracts;
2) The use of “deep pocket” theory through an insurance institute or wealthy and interested corporations (Cerka, Grigiene, 2015).
The “deep pocket” theory is based on the idea of shifting legal liability to better-resourced persons. The insurance scheme may involve large technology companies that produce robots. The risk of liability would thus be distributed rationally and fairly.
In the long term, the European Parliament proposes to consider granting a separate legal status to robots that, as electronic persons, make autonomous decisions.
Among the positive aspects of the European Parliament resolution are the following:
• A comprehensive approach to the regulation of robotics, taking into account advances in technology, ethics and law;
• The recommendation of the European Commission to establish an EU Agency for Robotics and Artificial Intelligence, which would become a regulator of the legal, technical and ethical aspects of the use of intelligent robots;
• Objectivity and caution, taking into account both the advantages and threats posed by the use of artificial intelligence.
A separate subject of the study is the ethics of artificial intelligence, as reflected in the Charter of Robotics, an annex to the European Parliament resolution. It is fair to say that the European Parliament has concluded that not only legal but also ethical standards must be developed in the field of artificial intelligence. Robotics raises a number of philosophical and ethical questions: the recognition of smart robots as equal to human beings; the acceptability of using robots in a number of spheres of life; the possibility of using robots as weapons; and the preservation of the inviolability of private and family life in contact with robots.
The Charter of Robotics formulates ethical principles for researchers in robotics:
1) The principle of “do good”, which defines the use of robots for the benefit of people;
2) The “do no harm” principle aimed at preventing harm to people when using robots;
3) The principle of autonomy, which means the right of a person to independently decide on the possibility of interacting with a robot;
4) The principle of equity, according to which all benefits derived from the use of robots should be fairly distributed.
The Charter of Robotics pays special attention to the principles of the Committee on the Ethics of Scientific Research in the Field of Robotics, as well as ethical standards for developers and users of smart robots. First and foremost, the Charter stresses the need to respect human dignity and privacy when interacting with the robot and to prohibit the use of the robot as a means of harm (weapon). Moreover, it is the developers who are responsible for all possible harmful consequences.
Let us now turn to the North American approach to solving the legal problems of artificial intelligence. In the U.S., as in Russia and the European countries, there is no unified legal regulation of robotics and artificial intelligence. Issues of robotics development are addressed at the level of strategic documents and plans. Among the strategic acts is the National Robotics Initiative (2011), which aggregates plans for the financing and development of robots in various fields: astronautics, healthcare, agriculture, etc. (National.., 2017).
More specific issues are addressed at the level of A Roadmap for US Robotics (2009). The Roadmap notes the urgent need to develop legislation on robots, as existing legal acts hamper progress in this area. The issues of robot safety, manufacturers’ liability and insurance remain unresolved, which deters companies from investing in this area.
Finally, of particular interest are the White House National Artificial Intelligence Research and Development Strategic Plan (2016) and Preparing For The Future Of Artificial Intelligence (2016), which provide a strategy for managing and regulating robotics development processes (The National.., 2018). Responsibilities have been assigned to the Networking and Information Technology Research and Development (NITRD) Subcommittee and the U.S. Government.
The White House Plan provides a set of principles for the development of robotics in the United States:
• Transparency and clarity of artificial intelligence technologies for society;
• Development of robot architecture in accordance with the requirements of ethics;
• Inclusion of ethical principles in decision-making algorithms, as robots will inevitably face moral dilemmas.
Specific issues of robot use are defined in state legislation and judicial practice. For example, a number of states allow the use of unmanned vehicles on public roads.
A unique analysis of case law is presented in R. Calo’s article “Robots in American Law”, which analyzes nine court decisions concerning robots. Some of the decisions treat robots as objects of law. It is noteworthy that in several of them the courts grapple with the problem of the legal personality of robots and compare humans with artificial intelligence. The author shows that robots have become part of modern society and that lawyers will inevitably have to resolve complex legal and ethical issues in understanding and using smart machines (Calo, 2016).
We believe that American law on artificial intelligence will develop through the accumulation of precedents, in keeping with the established common-law tradition of reasoning by analogy. Most likely, it is in practice that balanced and fair solutions will be found regarding the legal personality and liability of robots.
CONCLUSION
It can be noted that the draft law on the regulation of robotics (Grishin’s law) could have become one of the boldest in world practice, as it was meant to give autonomous robots a special legal personality by analogy with legal entities. However, granting artificial intelligence legal personality presupposes such criteria as the independence of robots from humans, the capacity to exercise rights and duties (will), and consideration of the interests of persons who may suffer from the actions of robots.
At the same time, there is an interesting EU experience in defining the legal nature of artificial intelligence in the legislation of European countries. The European Parliament’s resolution “Norms of civil law on robotics” is a set of legal and ethical guidelines for scientists, manufacturers, users and public authorities in the use of robotics and artificial intelligence. The resolution proposes further public debate on the status of smart robots as part of a shared belief that liability for harm caused by the use of robots should not be restricted. Importantly, the Resolution raises the serious issue of the definition of the subject of liability where the harm was caused by the robot in making its own decisions based on learning and experience.
The European experience seems to indicate the need for careful regulation in the field of robotics, given the inadmissibility of widening the horizon of uncertainty and the risk of dangerous ethical and social consequences of granting robots quasi-legal capacity.
American law in the field of robotics is developing in a way similar to Russia’s, through the adoption of strategic documents. At the same time, individual states have enacted legal acts on specific robotics issues. In addition, we can expect that the problem of the legal personality of robots will be resolved in U.S. case law.
REFERENCES
Aletras, N, Tsarapatsanis, D, Preoţiuc-Pietro, D, Lampos, V. (2016). Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective. PeerJ Computer Science, 2:e93, doi: https://doi.org/10.7717/peerj-cs.93
Arkhipov, V.V., Naumov, V.B. (2017). About some questions of theoretical grounds for the development of legislation on robotics: aspects of will and legal personality. Law, 5, 157-170.
Arkhipov, V.V., Naumov, V.B. (2017). Artificial intelligence and autonomous devices in the context of law: on the development of the first Russian law on robotics. SPIIRAN Labor, 6, 47.
Bostrom, N. (2016). Artificial Intelligence. Stages. Threats. Strategies. Moscow.
Calo, R. (2016). Robots in American Law. University of Washington School of Law. Research Paper No. 2016-04. Available from: https://ssrn.com/abstract=2737598.
Cerka, P., Grigiene, J., Sirbikyte, G. (2015). Liability for damages caused by artificial intelligence. Computer Law & Security Review, 31, 376-389.
Cerka, P., Grigiene, J., Sirbikyte, G. (2017). Is it possible to grant legal personality to artificial intelligence software systems? Computer Law & Security Review, 33, 685–699.
European Parliament. (2017). REPORT with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). Debates, 15 February 2017. Available from: http://www.europarl.europa.eu/sides/getDoc.do?type=CRE&reference=20170215&secondRef=ITEM-014&language=EN&ring=A8-2017-0005/ (Accessed 01.02.2019)
Gadzhiev, G.A. (2018). Is the robot-agent a face? (Search for legal forms to regulate the digital economy). Journal of Russian Law, 1, 29-35.
Gadzhiev, G.A., Voynikas, E.A. (2018). Can the robot be a subject of law? (Search for legal forms to regulate the digital economy). Law. Journal of the Higher School of Economics, 4, 41-48.
Mindell, D. (2017). Machine revolt is canceled. The myth of robotization. Moscow. Alpina.
Morehat, P.M. (2017). Artificial Intelligence. Legal view. Moscow.
Morehat, P.M. (2018) To the question of the legal personality of the “electronic person”. Yuridicheskie Nauki, 4, 1 -8. DOI: 10.25136/2409-7136.2018.4.25647
Morhat, P.M. (2018). Unit of Artificial Intelligence as an Electronic Person. Bulletin of Moscow State Regional University. Series of Law, 2, 66-73.
National Robotics Initiative 2.0. Ubiquitous Collaborative Robots (NRI-2.0). (2017). The National Science Foundation. Available from: https://www.nsf.gov/pubs/2017/nsf17518/nsf17518.htm/ (Accessed: 08.08.2019)
Ponkin, A.V., Redkin, A.I. (2018). Artificial intelligence from the point of view of law. PFUR Newsletter. Series: Law sciences, 91-109.
Pyatkov, D.V. (2012). People’s faces (in Russian). Russian Law Journal, 5, 33-40.
Robots Law. (2019) Available from: http://robopravo.ru/matierialy_dlia_skachivaniia#ul-id-4-35/ (Accessed: 05.08.2019)
Schrijver, S. (2018). The Future Is Now: Legal Consequences of Electronic Personality for Autonomous Robots. In: Who’s Who Legal, 2018. Available from: http://whoswholegal.com/news/features/article/34313/future-now-legal-consequences-electronic-personality-autonomous-robots/ (Accessed: 05.08.2019)
The National Artificial Intelligence Research and Development Strategic Plan. (2018). The Federal Networking and Information Technology Research and Development. Available from: https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf/ (Accessed 03.08.2019).
Additional information
CITE AS: Vasiliev, A., Zemlyukov, S., Ibragimov, Z., Kulikov, E., & Mankovsky, I. (2019). Ethical and legal aspects of the use of artificial intelligence in Russia, EU, and the USA: comparative legal analysis. Religación. Revista De Ciencias Sociales Y Humanidades, 4(19), 212-220. Retrieved from https://revista.religacion.com/index.php/religacion/article/view/459