Intelligent Agents and the Future of Identity

Thierry Nabeth and Claudia Roda, INSEAD

Issue: New information technologies make it possible to track users’ information flows and on-line behaviour in a way that enables the design of "intelligent agent" systems that could deliver radically personalized services. Potential applications include personal electronic "tutors" or advisors, and the technology could provide mechanisms facilitating group formation and the sharing of opinions in society.

Relevance: The use of software agents, which gather personal information and use it in socially and humanly aware ways, raises several issues: (1) the highly personal nature of the tasks delegated to personal agents and the consequences for the development of people’s identity; (2) the potential impact on social interactions; (3) the management of such highly sensitive personal information and, in particular, the risk of its disclosure and misuse.

The views expressed here are the authors’ and do not necessarily reflect those of the European Commission.

The new "intelligent" services exploiting personal information

Many commentators are already sketching out some of the details of how the Internet could develop from a passive medium over which data is transferred, to a more intelligent "semantic web" which has meaning embedded in it. For instance, Tim Berners-Lee, the inventor of the World Wide Web, defines it thus: "The real power of the Semantic Web will be realized when people create many programs that collect Web content from diverse sources, process the information and exchange the results with other programs. The effectiveness of such software agents will increase exponentially as more machine-readable Web content and automated services (including other agents) become available".

We can easily imagine some of the radically new categories of services that artificial agents could make it possible to implement, and their potential benefits for both individuals and society: (1) first, highly personalized agents that closely assist users in a variety of different personal activities; (2) second, social agents inhabiting digital "social spaces" that facilitate social interactions.

Artificial "intelligent agents" could make it possible to offer highly personalized individual services that assist users in a variety of activities and group-oriented services that facilitate social interactions

In specific terms, the first category of agents consists primarily of guides and assistants able to build up a good understanding of the user, anticipating the user’s needs and expectations and possibly making decisions on his or her behalf. An example of an agent of this kind would be an interactive on-line tutor which knows about the user’s existing skills and qualifications, goals, and learning style and is able to use this knowledge to propose the most suitable learning strategy for this user. The agent would then go on to support the user in executing this strategy by selecting and delivering the most appropriate teaching materials, measuring the effectiveness of the learning process and even by giving the user encouragement. Other types of intelligent agents could help people at work, while shopping, or in choosing entertainment.

By means of techniques such as "collaborative filtering" social agents could help form groups of people with similar profiles and interests and thereby bring people together

The second category includes "social agents" that operate in digital "social spaces", such as "virtual community platforms", and whose role is to facilitate the process of social interaction. For instance, the function of some of these agents is to encourage group formation by clustering users with similar profiles and interests (often referred to as social or collaborative filtering1) and thereby connecting groups of people. The function of other social agents is to encourage knowledge exchange and collaboration, which they do by encouraging transparency. In practical terms, these agents track and display users’ activity (such as who is active, who contributes, who is reliable) in digital "social spaces".

A concrete example of this latter category of agent can be found in eBay’s auction system or the freelance marketplace eLance, both of which keep track of the transaction history of all participants (buyers and sellers), and thereby help people to form an opinion about the reliability of entering into a transaction or business relationship with another party.
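The kind of transaction-history tracking described above can be sketched very simply: record the feedback each party receives and summarize it into a score that other users can consult before transacting. The sketch below is illustrative only; the class and method names are invented, and the real eBay and eLance mechanisms are considerably more elaborate.

```python
# Toy sketch of a transaction-feedback ledger of the kind described in the
# text. All names are hypothetical; real marketplace systems are far richer.
from collections import defaultdict

class ReputationLedger:
    def __init__(self):
        self.feedback = defaultdict(list)   # party -> list of (rater, +1/-1)

    def record(self, rater: str, rated: str, positive: bool) -> None:
        """Store one piece of feedback left by a rater about a party."""
        self.feedback[rated].append((rater, 1 if positive else -1))

    def score(self, party: str) -> int:
        """Net feedback score: the simplest summary a user might consult."""
        return sum(r for _, r in self.feedback[party])

    def positive_ratio(self, party: str) -> float:
        """Share of positive ratings, a second common summary."""
        ratings = [r for _, r in self.feedback[party]]
        return ratings.count(1) / len(ratings) if ratings else 0.0

ledger = ReputationLedger()
ledger.record("buyer1", "seller_x", positive=True)
ledger.record("buyer2", "seller_x", positive=True)
ledger.record("buyer3", "seller_x", positive=False)
print(ledger.score("seller_x"), ledger.positive_ratio("seller_x"))
```

Even this minimal design shows why such agents raise the transparency issues discussed later: every rating is permanently attributed, and nothing is ever forgotten.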

What are the implications for people’s identity of using artificial agents?

Identity in this context refers to more than just "identity information" (such as a person’s social security or tax number); rather, it represents a more general concept relating to the individual’s development and his or her sense of belonging. Moreover, the next generation of artificial personal or social agents should not be confused with basic "anthropomorphic" wizards (such as the Microsoft agents) that provide help with certain simple and highly specific tasks. "Intelligent" agents develop a very deep understanding of users in order to provide sophisticated guidance in essential areas of their lives (for instance helping the user throughout the educational process or initiating and supporting the development of a human relationship).

In this context, artificial agents with an understanding of both individuals and social contexts may have important implications for personal identity both at the individual and social level.

Artificial agents with an understanding of both individuals and social contexts may have important implications for personal identity both at the individual and social level

At the individual level, personal agents may transform users’ personal construction of themselves by creating a symbiotic alter ego. In practice, personal agents may help in an individual’s self-development by providing feedback and guidance, and by relieving the user of repetitive tasks. These agents may also have some negative implications, such as the risk of loss of the user’s autonomy and excessive dependence ("delegating too much to the agents may mean the agents do not follow the individual’s course in life"). They may also encourage the emergence in society of passive ("why should I bother to make an effort") and individualistic attitudes ("I don’t need other people").

Although the role of an artificial agent may not differ fundamentally from that of human agents such as friends or instructors, their availability is likely to depend less on socio-economic factors

The influence of artificial agents on the individual is not fundamentally different in nature from that which human agents such as companions, friends, or instructors can have. Good human agents can, however, be difficult to find even among the relatively favoured social classes (few people can afford a personal tutor, it can take time to develop a trustful relationship, and apparently "friendly" gestures may have other motives). Besides, the human agents available to an individual often reflect (and perpetuate) that individual’s socio-economic circumstances. There are reasons for believing that artificial agents are less prone to these difficulties, even taking into account that wealthier individuals might be able to avail themselves of more sophisticated personal agents.

Finally, it has to be noted that in some cases, artificial agents may have a role that the individual would not find in a human agent. For instance, people tend to compartmentalize their life (separating work, family, leisure) in order to secure some areas of freedom, and therefore may not want to disclose to a single human agent all the information about their "multiple identities". Moreover, there are limits to what a human being is ready to disclose to another human being (because of self-esteem, shyness, embarrassment, etc.). These restrictions are less likely to apply in the case of an artificial agent.

By allowing users to improve the quality of their social interactions, such agents could be of particular benefit to users with limited social capital

At the social level, social agents may transform radically the dynamics of social interaction, and therefore the development of the user’s social identity(ies).

First, because the user may get personal support for socially oriented activities (getting advice about how to behave in society and how to forge useful relationships with others, or delegating social tasks such as filtering social contacts to agents). The consequences may be manifold: first, agents will help the user to improve the quality of his or her social interactions. In particular, agents could benefit categories of users with limited access to social capital2, who do not have the chance to find in their environment a "human" mentor to help them "find their place" in society. Second, agents will allow the user to manage social interaction more efficiently. For instance, an agent may act as a proxy for a user and help him or her to manage a larger set of identities. But the consequences could also be negative: agents may reduce the occurrence of "chance encounters" and introduce another barrier to the diversity of social interaction.

Second, because some artificial agents will help to make the process of social interaction more transparent (by tracking and displaying people’s behaviour) and more efficient (less risky). The social transparency generated by "social agents" in digital social spaces will certainly transform user behaviour: on the one hand, the availability of more reliable information about other people may make users more ready to engage in an interaction or transaction with them; on the other hand, the knowledge that one’s actions may be recorded and made available to others for a long period of time may also inhibit those actions (mistakes may not easily be forgiven in a system that does not forget!). As has occurred with other monitoring systems in the past, however, it is likely that people and technology will find ways to evade or mislead the monitoring system, especially as the benefits of evading it become greater.

The consequence of this "social transparency" may be an increase in peer pressure in these social spaces, the outcome of which may be the more rigid enforcement and homogenization of existing social values, potentially leading to greater conformity and limiting the emergence of new values.

On the downside, the "social transparency" such agents bring may lead to greater peer pressure toward uniformity in social values

The exact implications (whether positive or negative) of artificial agents for people’s identity are extremely difficult to determine, and will depend on a complex set of factors: the functionality that these agents deliver and their ability to fulfil people’s needs and desires (such as personal achievement, security, reduced effort, interpersonal relationships); people’s perception of the balance of risks and benefits that this functionality offers; and social, technological and economic constraints.

Capturing, managing and protecting personal information

Personal information includes potentially all the information that directly relates to a given individual and which can be exploited to deliver services that take that individual’s characteristics into account. The more information a personal agent can extract about a given user, the better able the agent will be to deliver services that meet the user’s needs appropriately. Likewise, the more information a social agent has about a group of users, the more effective this agent will be at supporting their social interaction.

Personal information covers a wide range of aspects, and the more information an agent has the more effectively it can operate at either the individual or group level. However, at the same time, this raises privacy concerns

A user’s information covers a very diverse range of facets: his or her identity (name, address, telephone number, email), preferences (e.g. does he/she prefer small or large fonts), skills and qualifications (what domains he/she has experience in, what university degrees he/she holds, etc.), interests (what areas motivate him or her), goals and expectations (does he/she have an "agenda" for career or life development), personality (introverted or extroverted), cognitive style (does he/she like abstraction or feel more comfortable with concrete cases) or attitude (is he/she a risk taker or risk averse).

Capturing a user’s information can take a variety of forms, ranging from the simplest, which consists of asking the user to enter information manually, through more advanced techniques by which information is extracted from databases, to highly sophisticated techniques in which the actions of users are automatically recorded and the information is extracted and categorized using data-mining tools.
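The sophisticated end of this spectrum, automatic recording plus categorization, can be illustrated with a deliberately crude sketch: a log of user actions is mapped onto interest categories and reduced to frequencies. The action names, categories and mapping below are all invented for illustration; real data-mining tools would of course go far beyond simple counting.

```python
# Minimal sketch of automatic profile extraction from a logged action stream.
# The action vocabulary and category mapping are hypothetical examples.
from collections import Counter

ACTION_CATEGORIES = {            # assumed mapping: observed action -> interest
    "read_article": "reading",
    "post_message": "contributing",
    "search_topic": "exploring",
}

def build_profile(action_log: list) -> dict:
    """Turn a raw action log into interest frequencies (a trivial stand-in
    for the data-mining step mentioned in the text)."""
    counts = Counter(ACTION_CATEGORIES.get(a, "other") for a in action_log)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

log = ["read_article", "read_article", "search_topic", "post_message"]
print(build_profile(log))
```

Even this toy version makes the privacy point concrete: the profile is derived silently from behaviour the user may not realize is being recorded.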

A user’s personal information is very sensitive and needs to be protected. In particular, it is important that the user be guaranteed that highly personal information (such as his or her psychological profile) is only used within defined boundaries. This kind of protection can be ensured by legal means (typically by prohibiting the recording of certain types of private information). However, excessive regulation at this stage could prevent the development of these new services, and it may be more desirable to achieve protection through a combination of technology (to manage the storage and disclosure of this information securely) and of informing users and placing the management of their information under their control (making very clear which systems actually have access to this personal information).

A combination of technology solutions and providing users with information and control is probably preferable to implementing excessive regulation at this stage

Conclusion: intelligent agents represent an opportunity, but are not without risks

Artificial agents represent an important opportunity for the development of radically new services (guidance, support, etc.) which have the potential to transform (or even revolutionize) people’s individual and social life. They represent a chance for the less-favoured to mitigate some of the limitations of their environment both at an individual and social level: (1) by providing them with the individual assistance (education, advice, motivation) that they could not otherwise afford or that would not otherwise be available to them; and (2) by helping them to acquire social skills, in particular how to interact with others, and to overcome their lack of social capital.

Intelligent agents also represent an opportunity for society as a whole to benefit from more highly personalized and more effective services, and to develop a richer social life in the many digital spaces that are opening up and which occupy an increasingly large part of people’s lives. In particular, they offer users the possibility of developing and managing a much larger and more diverse set of identities and thereby of increasing their degree of personal fulfilment.

However, these agents also pose a number of risks. These include: 1) the risk of increased user dependence; 2) increasing transparency of people’s behaviour, loss of privacy, and the possibility of exercising social control; 3) risks related to the disclosure of personal information.

Although the precise implications (positive or negative) of artificial agents for the future of personal identity are still somewhat hazy, some preliminary measures can at least be suggested to facilitate the development of these approaches and at the same time limit the risks, in particular the use of technological means to protect information, and encouraging the maximum of transparency regarding how, and by whom, this personal information is accessed.


artificial agents, user profiling, personalization, personal guide, social transparency


1. Collaborative filtering is a way of guiding people’s choices based on information gathered from other people. The basic method involves registering the preferences of a large group of people. A subgroup is selected whose preferences are similar to the preferences of the person to whom advice or recommendations are to be offered. A (possibly weighted) average of the preferences for that subgroup is calculated and the resulting preference function is used to make recommendations.
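The recipe in this footnote (register preferences, select a similar subgroup, compute a weighted average, recommend) can be sketched in a few lines. Everything here is illustrative: the similarity measure, the ratings data and all names are invented, and ratings are assumed to lie in [0, 1].

```python
# Minimal sketch of the collaborative-filtering steps described in footnote 1.
# All data and function names are illustrative, not from the article.

def similarity(a: dict, b: dict) -> float:
    """Agreement on commonly rated items (1.0 = identical ratings)."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    diff = sum(abs(a[i] - b[i]) for i in shared) / len(shared)
    return 1.0 - diff  # ratings assumed to lie in [0, 1]

def recommend(target: str, ratings: dict, k: int = 2) -> list:
    """Similarity-weighted average preferences of the k most similar users,
    restricted to items the target user has not yet rated."""
    others = [(similarity(ratings[target], r), u)
              for u, r in ratings.items() if u != target]
    subgroup = sorted(others, reverse=True)[:k]       # the "similar subgroup"
    scores = {}
    for sim, user in subgroup:
        for item, rating in ratings[user].items():
            if item not in ratings[target]:           # only unseen items
                total, weight = scores.get(item, (0.0, 0.0))
                scores[item] = (total + sim * rating, weight + sim)
    return sorted(((t / w, item) for item, (t, w) in scores.items() if w),
                  reverse=True)

ratings = {
    "ann":  {"film_a": 1.0, "film_b": 0.8, "film_c": 0.1},
    "bob":  {"film_a": 0.9, "film_b": 0.9, "film_d": 1.0},
    "carl": {"film_a": 0.2, "film_c": 0.9, "film_d": 0.3},
    "dora": {"film_a": 1.0, "film_b": 0.7},
}
print(recommend("dora", ratings))
```

Here "dora" rates films much as "ann" and "bob" do, so their preferences dominate the weighted average and the film they liked but she has not seen comes out on top.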

2. The concept of social capital rests on the idea that social networks have value to their members. Thus, social capital refers to the collective value of all "social networks" (i.e. who the members of the networks know) and the "norms of reciprocity" between the members of the networks, which shape the extent and type of actions they are prepared to undertake for one another.


Thierry Nabeth & Claudia Roda, INSEAD CALT (Centre for Advanced Learning Technologies),

Tel.: 33 160 72 43 12, fax: 33 160 74 55 50, e-mail:

Laurent Beslay, IPTS

Tel.: +34 95 448 82 06, fax: +34 95 448 82 08, e-mail:

About the authors