Software Agents as Legal Persons

Francisco Andrade (1), José Neves (2), Paulo Novais (2) and José Machado (2)
(1) Escola de Direito, Universidade do Minho, Braga, Portugal
(2) Departamento de Informática, Universidade do Minho, Braga, Portugal
{fandrade, jneves, pjon, jmac}@di.uminho.pt

The Law has long recognized that, besides natural persons, other socially engaged entities must also be subjects of rights and obligations. Western legal systems usually recognize corporate bodies as having legal personality and the capacity for every right and obligation needed or convenient for the pursuit of their social goals. But can we foresee a similar attribution of such a regime to software agents? In other words, are intelligent software agents capable of being personified? One of the main characteristics of personality is the existence of a physical being or organization endowed with a will of its own. In that sense, intelligent software agents are quite close to human beings: they have a physical existence, and they are capable of learning and of having a will of their own.

1. INTRODUCTION

In order to evaluate the chances of attributing legal personality to intelligent software agents, it is interesting to analyse, drawing the due comparisons, the arguments that justified treating corporate bodies as legal persons (Gonçalves, 1929). Legal persons are to be seen as a technical reality (Fernandes, 1995), an instrument at the service of The Law through which a way of dealing with certain human interests is achieved (Fernandes, 1995). Legal persons are thus a reality of the legal world corresponding to a social need, to a social interest worth being dealt with under The Law. Applying such considerations to intelligent software agents, it may be argued that these are physical and logical entities capable of multiple and autonomous interventions in the legal world, and that their personification under The Law might be foreseen as a technical way of responding to a social need: the need for more efficient and reliable ways of undertaking actions that man alone cannot perform, or cannot perform sufficiently, economically and in time.

Besides a will of its own, two basic requirements have been stated as necessary for a corporate body to acquire personality: substratum (e.g., a personal or patrimonial component, a teleological component, an intentional component) and recognition (Andrade, 1974). Does a substratum exist in software agents? Can we consider their physical and logical structures a personal or patrimonial element? Can we speak of teleological and intentional elements when referring to software? And how could the recognition of legal personality for such a software substratum be handled?

The attribution of legal personality to intelligent software agents would have some obvious advantages: it would solve the question of consent and of the validity of declarations and contracts enacted or concluded by software agents (Felliu) (Fisher, 2001), and it would reassure the owners and users of the agents with respect to eventual liability concerns (Sartor, 2002). But it would also face several difficulties, due to the intrinsic characteristics of software agents: difficult problems could arise relating to questions such as domicile or patrimony (Weitzenboeck) (Lerouge).
And, of course, we must also wonder whether or not electronic agents could be held liable for negligent acts or omissions, whether or not they can be considered to act in good or bad faith (Miglio et al.), and whether or not it is possible to sue a software agent in court or to impose sanctions upon it (Andrade and Neves, 2004). The attribution of legal personality to electronic agents would require at least some sort of constitution or declaration act, and eventually registration (Allen et al., 1996), in order to attribute a physical location to the agent, as well as a minimum patrimony through a bank deposit, or even a compulsory insurance regime, in order to meet financial obligations and liabilities. But even if all those difficulties could be overcome, would such a legal attribution be worthwhile? Or should we rather foresee the creation of special corporate bodies on whose behalf the electronic agents would act? In any case, we must take a realistic approach to this issue, considering the challenging technical possibilities of software agents as entities requiring a particular new legal setting in order to enable the full use of e-Commerce in a global world.

Much work has been done on the humanization of the behaviour of virtual entities through the expression of human-like feelings and emotions; the work presented in (Ortony et al., 1988) and (Picard et al., 1997) details studies and proposes lines of action concerning the way emotions may be assigned to machines. Attitudes such as cooperation, competition and the socialization of agents (Bazzan et al., 2000) are explored, for example, in the areas of Economics (Arthur, 1994) and Physics (Challet and Zhang, 1998), as is the case of the El Farol Bar Problem, the Minority Game and the Iterated Prisoner's Dilemma. In (Bazzan and Bordini, 2000) and (Castelfranchi et al., 1997) the importance of modelling the virtual agent's mental states in a human-like form is recognized.

Indeed, an important motivation for the development of this work comes from the work going on at the intersection of the disciplines of AI and The Law, which has fostered new forms of knowledge representation and reasoning in terms of an extension to the language of logic programming, i.e., Extended Logic Programming (ELP) (Alferes et al., 1998) (Neves, 1984) (Traylor and Gelfond, 1993) (Costa et al., 2000) (Costa and Neves, 2000).

This introduction touches a very wide scientific area and raises a substantial number of issues. However, since space is limited, the links between the key discussions addressed in this paper will be set in terms of an analysis of the predicate reputation(). The formalization presented in sections 3 and 4, which is aimed at reputation, intends to open the way to dealing with legal issues in a similar form. Indeed, reputation, in itself, may also be understood as a legal issue.

2. MULTIAGENT SYSTEMS

In the current economic context, characterized by the existence of a global society, access to information is crucial for any economic and social development, yet important technological challenges remain. The representation, maintenance and querying of information are a central part of this problem. How can we obtain the adequate information at the adequate time? How can we supply the correct items to the correct people at the correct time? How and where can we get the relevant information for good decision making?
Organizations focus their competences on strategic areas and resort to external suppliers, cooperating with occasional partners, with the objective of reducing costs, risks and technological faults, or of maximizing benefits and business opportunities. Among the most radical and far-reaching changes are the dematerialization of information, the automation of tasks and procedures, and the recourse to decision support or intelligent systems and to new forms of concluding contracts (e.g., is it possible to perform commercial acts and close deals using autonomous and pro-active computational agents?). e-Commerce now faces new challenges, searching for new answers to old questions. Negotiation processes conducted through electronic means and e-Commerce platforms may give rise to new forms of contracts, with engagements and negotiations among virtual entities.

Software agents are computational entities with a rich knowledge component, having sophisticated properties such as planning ability, reactivity, learning, cooperation, communication and the possibility of argumentation. The agent paradigm is particularly well suited to such problems. The objective is to build logical and computational models, as well as to implement them, taking into consideration the norms of The Law (i.e., legislation, doctrine and jurisprudence). Agent societies may mirror a great variety of human societies, such as commercial societies with an emphasis on behavioural patterns, or even more complex ones, with pre-defined roles of engagement, obligations, and contractual and specific communication rules.

Traditional programming languages do not support the description of certain types of behaviour that usually involve computational agents. Typically, systems that incorporate such functionalities have a multi-layer architecture and evolve from intricate software sub-systems, network protocols, and the like. On the other hand, when one deals with multi-agent systems, it must be guaranteed that they can answer different and simultaneous demands in a secure and error-free way.

An agent must be able to manage its knowledge, beliefs, desires, intentions, goals and values. It may also be able to plan, to receive information or instructions, and to react to environmental stimuli. It may communicate with other agents, share knowledge and beliefs, and respond to other agents upon request. It may cooperate by diagnosing errors or information faults in its knowledge bases, sharing resources, avoiding undesirable interference, or joining efforts to revise its own knowledge bases and those of its peers in order to reach common goals.

Knowledge and belief are generally incomplete, contradictory or error sensitive, so it is desirable to use formal tools to deal with the problems that arise from the use of incomplete, contradictory, imperfect, wrong, nebulous or missing information. The Extended Logic Programming language presents itself as a formal and flexible tool for addressing the problems just referred to.
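Before formalizing the agent knowledge base, it may help to sketch the kind of belief base the previous paragraphs have in mind. The fragment below is a minimal illustration in standard Prolog syntax, where \+ stands for negation-by-failure; the predicates believes/2, complaint/2 and cooperates_with/2 are our own illustrative choices and are not taken from this paper.

  % A toy belief base for a single agent (illustrative names only).
  :- dynamic complaint/2.          % no complaints have been recorded so far

  believes(agent_a, delivers_on_time(supplier_1)).
  believes(agent_a, price(supplier_1, 100)).

  % Default rule: cooperate with a supplier believed to deliver on time,
  % unless a complaint against it is known (negation-by-failure).
  cooperates_with(Agent, Supplier) :-
      believes(Agent, delivers_on_time(Supplier)),
      \+ complaint(Agent, Supplier).

  % ?- cooperates_with(agent_a, supplier_1).   succeeds, by default

A conclusion such as cooperates_with(agent_a, supplier_1) is drawn by default and must be withdrawn if a complaint is later asserted; it is precisely this kind of non-monotonic behaviour over incomplete information that motivates the use of ELP in the next section.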
3. AN AGENT KNOWLEDGE BASE

The use of knowledge representation techniques to describe the real world, based on mechanical, logical or other means, will always be a function of the system's ability to describe the existing world. Therefore, in the conception of an agent knowledge base, attention must be paid to the existing information, which may not be known in its full extension; to the observed information, which is determined by experience and obtained by contact or observation; and to the information to be represented, which refers to a given event and may or may not be taken into consideration.

Definition 1. The knowledge in an agent's knowledge base is made up of logic clauses of the form

  r_k: P_{i+j+1} ← P_1 ∧ P_2 ∧ … ∧ P_{i-1} ∧ not P_i ∧ … ∧ not P_{i+j}

where i, j and k belong to the set of natural numbers and P_1, …, P_{i+j} are literals, i.e., formulas of the form P or ¬P, where P is an atom, and where r_k, not, P_{i+j+1} and P_1 ∧ P_2 ∧ … ∧ P_{i-1} ∧ not P_i ∧ … ∧ not P_{i+j} stand, respectively, for the clause's identifier, the negation-by-failure operator, the rule's consequent and the rule's antecedent. If i = j = 0 the clause is called a fact and is represented as r_k: P_1. An Extended Logic Programming (ELP) program is seen as a set of such clauses, as given by the definition below.

Definition 2. An Agent Knowledge Base (AKB) is taken from an ordered theory OT = (T, <, (S, ≺)), where T, <, S and ≺ stand, respectively, for an AKB in clausal form, a non-circular ordering relation over such clauses, a set of priority rules, and a non-circular ordering relation over such rules.

Definition 3. An argument (i.e., a proof, or a series of reasons in support or refutation of a proposition) has its genesis in mental states, seen as a consequence of the proof processes that go on unceasingly over the agent's own knowledge about its states of awareness, consciousness or erudition.

On the other hand, mental states are themselves a product of reasoning processes over incomplete or unknown information; an argument may not only be evaluated in terms of true or false, but may also be quantified over the interval [0…1]. An argument may be built over abnormal or exceptional situations, and it may resort to incomplete or contradictory information. This will be accomplished through the use of disjunctive logic programming, here defined in terms of ELP, for the representation of the partial information that commonly occurs in AKBs.

We will focus our attention on representing various forms of null values, through a set of techniques for distinguishing between known, unknown and inapplicable values of the attributes in the extensions of the predicates present in the AKBs. The identification of null values emerges as a strategy for the enumeration of cases, whenever one intends to distinguish situations where the answer is known (true or false) from those where it is unknown (Traylor and Gelfond, 1993) (Neves et al., 1997). The representation of null values will be scoped by the ELP. In this work, two types of null values will be considered: the former caters for the representation of unknown values, not necessarily taken from a given set of values, and the latter denotes unknown values taken from a given set of possible values.

Let us consider the predicate reputation(), which stands for an agent's reputation and its valuation,

  reputation: Entity × Valuation

where the first argument denotes the agent and the second its degree of reputation.
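As a purely illustrative instance of the clause format of Definition 1, and anticipating the predicate just introduced, a rule assigning a reputation might be written as follows (the auxiliary predicates certified() and complaint() are hypothetical, introduced here only for the sake of the example):

  r_1: reputation(E, 0.9) ← certified(E) ∧ not complaint(E)

Here reputation(E, 0.9) is the consequent, certified(E) is a positive condition, and not complaint(E) is a condition evaluated through negation-by-failure; in the notation of Definition 1, this corresponds to i = 2 and j = 0.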
For example, reputation(paul, 0.5) denotes that the reputation of the agent Paul has a valuation of 0.5, a situation that may formally be given in terms of the clauses depicted below (Program 1):

  reputation(paul, 0.5)
  ¬reputation(E, V) ← not reputation(E, V), not exception-to-reputation(E, V)

Program 1 - Extension of the predicate that describes the reputation of the agent Paul

In Program 1, the symbol ¬ stands for strong negation, not designates negation-by-failure, and exception-to-reputation() denotes the set of clauses that are to be considered as exceptions to the extension of the predicate reputation(). Considering the example given by Program 1, one may now state that the reputation of the agent John has not yet been established. This situation is represented by a null value of the type unknown, which allows one to conclude that John has a certain reputation, although it is not possible to be precise with respect to its valuation (Program 2).

  reputation(paul, 0.5)
  reputation(john, ⊥)
  ¬reputation(E, V) ← not reputation(E, V), not exception-to-reputation(E, V)
  exception-to-reputation(E, V) ← reputation(E, ⊥)

Program 2 - Extension of the predicate that sets the reputation of the agent John

The symbol ⊥ denotes a null value of an undefined type, in the sense that any solution to the problem may be subscribed to, although nothing is said about which solution one is speaking of. Computationally, it is not possible to determine, from the positive information alone, the reputation of the agent John; however, if one looks at the exceptions to the extension of the predicate reputation() (fourth clause of Program 2, which sets the closure of the predicate reputation()), the possibility of any non-standard question about the reputation of the agent John being assumed as false is discarded.

Consider now the example where the reputation of an agent, the agent Ivan, is estimated at 0.75, with a margin of error of 10%. It is not possible to be conclusive as to whether the reputation is 0.75, or 0.70, or even 0.825. However, it is false, for instance, that Ivan's reputation is 1. This example suggests that the lack of knowledge may be described by an enumerated set of possible values (Program 3).

  reputation(paul, 0.5)
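Although Programs 1 and 2 above are written in the ELP notation of this section, their behaviour can be approximated in a standard Prolog system. The encoding below is our own sketch, not part of the original formalization: strong negation ¬reputation is represented by an auxiliary predicate neg_reputation/2, the null value ⊥ by the constant unknown, and the correspondence is only intended for ground queries.

  % Positive information (Program 2).
  reputation(paul, 0.5).
  reputation(john, unknown).        % 'unknown' plays the role of the null value ⊥

  % Closure rule: a valuation V is explicitly denied for agent E only when it is
  % neither stated as positive information nor covered by an exception.
  neg_reputation(E, V) :-
      \+ reputation(E, V),
      \+ exception_to_reputation(E, V).

  % An agent whose reputation is the null value gives rise to an exception
  % for every possible valuation.
  exception_to_reputation(E, _V) :-
      reputation(E, unknown).

  % ?- reputation(paul, 0.5).        succeeds
  % ?- neg_reputation(paul, 0.3).    succeeds: any other valuation for Paul is taken as false
  % ?- neg_reputation(john, 0.3).    fails: John's valuation is unknown, so nothing is denied

The correspondence is only approximate, since negation-by-failure in plain Prolog does not by itself distinguish false from unknown; it is the exception clause that prevents unknown valuations, such as John's, from being wrongly classified as false.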