Digital doppelgängers: Building an army of you
- 15 August 2012 by Sally Adee
One morning in Tokyo, Alex Schwartzkopf furrows his brow as he evaluates a grant proposal. At the same time, Alex Schwartzkopf is thousands of kilometres away in Virginia, chatting with a colleague. A knock at the door causes them to look up. Alex Schwartzkopf walks in.
Schwartzkopf is one of a small number of people who can be in more than one place at once and, in principle, do thousands of things at the same time. He and his colleagues at the US National Science Foundation have trained up a smart, animated, digital doppelgänger - mimicking everything from his professional knowledge to the way he moves his eyebrows - that can interact with people via a screen when he is not around. He can even talk to himself.
Many more people could soon be getting an idea of what it's like to have a double. It's becoming possible to create digital copies of ourselves to represent us when we can't be there in person. They can be programmed with your characteristics and preferences, are able to perform chores like updating social networks, and can even hold a conversation.
These autonomous identities are not duplicates of human beings in all their complexity, but simple and potentially useful personas. If they become more widespread, they could transform how people relate to each other and do business. They will save time, take onerous tasks out of our hands and perhaps even modify people's behaviour. So what would it be like to meet a digital you? And would you want to?
It might not feel like it, but technology has been acting autonomously on our behalf for quite a while. Answering machines and out-of-office email responders are rudimentary representatives. Limited as they are, these technologies obey explicit instructions to impersonate us to others.
One of the first attempts to take this impersonation a step further took place in the late 1990s at Xerox's labs in Palo Alto, California. Researchers were trying to create an animated, quasi-intelligent persona to live on a person's website, where it would talk to virtual visitors and relay messages to and from them. But it was unsophisticated and certainly far from capable of intelligent conversation, says Tim Bickmore of Northeastern University in Boston, who worked on the project, so it was not commercialised.
The consensus has long been that the roadblock to creating a convincing persona is artificial intelligence. It still hasn't advanced sufficiently to reproduce complex human behaviour, and it would take years of training for an AI to resemble a person. Yet it has lately become clear that fully representing a human is unnecessary in today's digital environments. While we cannot program machines to think, getting them to do specific tasks is not a problem, says Joseph Paradiso, an engineer at the Massachusetts Institute of Technology.
Faceted identity
To understand why and where this could be useful, consider the way that a person's identity is represented on the internet. The typical user has a fragmented digital self, broken up into social media profiles, professional websites, comment boards, Twitter and so on. Of course, people have always presented themselves differently depending on context - be it the workplace or a bar - but Danah Boyd, a social media researcher at Microsoft Research in Cambridge, Massachusetts, argues that digital communication enhances this because it inherently gives a narrow view of a person.

People manage these subsets of their identity like puppets, leaving them dormant when they're not needed. What researchers and companies have realised is that some of these puppets could be programmed to act autonomously. You don't need to copy a whole person, just a facet, and it doesn't require impressive AI and months of training.
For example, the website rep.licants.org, developed by artist Matthieu Cherubini, allows you to create a copy of your "social media self", which can take over Facebook and Twitter accounts when required. You prime it with data such as your location, age and topics that interest you, and it analyses what you've already posted on your various social networks. Armed with this knowledge, it then posts on your behalf.
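The mechanics behind such a service can be sketched simply: tally how often each of the user's declared topics appears in their past posts, then compose new updates biased toward the most frequent one. This is a minimal illustration of the idea, not rep.licants.org's actual code; the function names and post format are invented.

```python
from collections import Counter

def favourite_topic(past_posts, topics):
    """Count how often each declared topic appears in past posts
    and return the most frequent one."""
    counts = Counter()
    for post in past_posts:
        words = post.lower().split()
        for topic in topics:
            counts[topic] += words.count(topic)
    return counts.most_common(1)[0][0]

def compose_update(past_posts, topics):
    """Draft a new status update about the user's favourite topic."""
    topic = favourite_topic(past_posts, topics)
    return f"Thinking about {topic} again today."

posts = ["Great robotics demo at the lab",
         "More robotics reading tonight",
         "Coffee first, then email"]
print(compose_update(posts, ["robotics", "coffee", "email"]))
# prints: Thinking about robotics again today.
```

A real service would draw on far richer signals - location, posting times, phrasing - but the principle is the same: reuse a person's own data to approximate their voice.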
In principle, such services could one day perform a similar job to the ghostwriters who manage the social media profiles of busy celebrities and politicians today. In fact, some people already automate their social media selves: some add-ons to a Twitter account can be programmed to send out messages such as a thank-you note if somebody follows you. As far as the recipients are concerned, the messages were sent by a real person.
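The logic of such a thank-you add-on is straightforward: when the list of followers is checked, anyone not yet thanked gets a message and is recorded so they are only thanked once. This sketch shows the bookkeeping only - it does not use any real Twitter API, and all names are hypothetical.

```python
def thank_new_followers(followers, already_thanked):
    """Return thank-you messages for followers not yet thanked,
    recording each so no one is thanked twice."""
    messages = []
    for handle in followers:
        if handle not in already_thanked:
            messages.append(f"@{handle} thanks for the follow!")
            already_thanked.add(handle)
    return messages

thanked = set()
print(thank_new_followers(["alice", "bob"], thanked))
# prints: ['@alice thanks for the follow!', '@bob thanks for the follow!']
print(thank_new_followers(["bob", "carol"], thanked))
# prints: ['@carol thanks for the follow!']
```

The deduplication is what keeps the illusion intact: a real person would not thank the same follower twice.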
Your professional persona can be replicated, too. The Australian company MyCyberTwin allows users to create copies of themselves that can engage visitors in a text conversation, accompanied by a photo or cartoon representation. These copies perform tasks such as answering questions about your work, like an interactive CV. "A single CyberTwin could be talking with millions of people at the same time," says John Zakos, who co-founded the firm. MyCyberTwin also uses tricks to add a touch of humanity. Users are asked to fill in a 30-question personality test, which means that the digital persona may act introverted or extroverted, for example.
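How a questionnaire might steer a persona's tone can be sketched in a few lines: score the answers, then pick a response style from the result. The scoring rule and phrasing below are assumptions for illustration, not MyCyberTwin's actual method.

```python
def persona_style(answers):
    """Map questionnaire answers (1-5 each) to a response style.
    Averages above the scale midpoint read as extroverted."""
    score = sum(answers) / len(answers)
    return "extroverted" if score > 3 else "introverted"

def greet(visitor, style):
    """Open a conversation in the chosen style."""
    if style == "extroverted":
        return f"Hi {visitor}! Great to meet you - ask me anything."
    return f"Hello {visitor}."

style = persona_style([4, 5, 4] * 10)  # 30 upbeat answers
print(greet("Dana", style))
# prints: Hi Dana! Great to meet you - ask me anything.
```

Even a crude switch like this changes how human the copy feels, which is the point of the personality test.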
In a few years, this simple persona could be extended to become an avatar - a visual animation of you. Avatars have long been associated with niche uses such as gaming or virtual worlds like Second Life, but there are signs that they could become more widespread. In the past year or two, Apple has filed a series of patents related to using animated avatars in social networking and video conferencing. Microsoft, too, is interested. It has been exploring how its Kinect motion-tracking device could map a user's face so it can be reproduced and animated digitally. The firm also plans to extend the avatars that millions of people use in its Xbox gaming system into Windows and the work environment.
So could avatars be automated too? It already happens in gaming: many people employ intelligent software to control their avatars when they're not around. For example, some World of Warcraft players program their avatars to fight for status or to farm gold.
To similar ends, in 2007 the National Science Foundation began Project Lifelike, an experiment to build an intelligent, animated avatar of Schwartzkopf, who at the time was a program director. The hope was to make the avatar good enough to train new employees.
Jason Leigh, a computer scientist at the University of Illinois at Chicago, used video capture of Schwartzkopf's face to create a dynamic, photorealistic animation. He also added a few characteristic quirks. For example, if Schwartzkopf's copy was speaking intensely, his eyebrows would furrow, and he would occasionally chew his nails. "People's personal mannerisms are almost as distinguishing as their signature," Leigh says.
These tricks combined to make the copy seem more, well, human, which helped when Leigh introduced people to Schwartzkopf's doppelgänger. "They had a conversation with it as if it were a real person," he recalls. "Afterwards, they thanked it for the conversation."
The Project Lifelike researchers are now building a copy of the astronaut Jim Lovell, who flew on Apollo 13; it will answer questions at Chicago's Adler Planetarium. Another, of Alan Turing, will field questions at the Orlando Science Center in Florida. Others are working on ways to create doppelgängers that will persist after people die.
Dear doctor
Meanwhile, Bickmore and his team are developing animated avatars of doctors and other healthcare providers. One of the nurse avatars they created is designed to discharge people from hospitals. In tests, he found 70 per cent of patients preferred talking to the copy rather than a real nurse, because they felt less self-conscious. Doctors, meanwhile, could use avatars to streamline their work. "A doctor might want to make a copy, for example, if they are the pre-eminent expert in a field," Bickmore says.

Admittedly, some of these avatars did take a lot of time to train. Schwartzkopf spent months teaching his digital self about his job. But it depends on the sophistication of the task, says Jeremy Bailenson, who directs the Virtual Human Interaction Lab at Stanford University in California.
One way to shortcut this process is to give an avatar specific behaviours adapted for the purpose, says Bailenson. "We've demonstrated that it doesn't matter how good the AI is. What matters is the belief in the social presence." Along with collaborator Jim Blascovich, he created an avatar to teach students via a screen in a lecture theatre. The pair designed it to peer at each student for 2 seconds at a time. They called it "supergaze", and found it made all the difference. When the students thought of the avatar as an unthinking, unfeeling AI, they stopped paying attention - even if it was programmed with the necessary knowledge. But with the supergaze, they were more likely to respond as if there was a human in control.
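One way to read the 2-second rule is as a simple round-robin schedule, with the avatar's gaze cycling through the class. This scheduling sketch is an assumption for illustration, not Bailenson's implementation.

```python
def gaze_target(students, elapsed_seconds, dwell=2):
    """Round-robin 'supergaze': the avatar peers at each student
    for `dwell` seconds before moving to the next."""
    slot = int(elapsed_seconds // dwell)
    return students[slot % len(students)]

students = ["Ana", "Ben", "Chloe"]
for t in [0, 1, 2, 5, 6]:
    print(t, gaze_target(students, t))
# prints: 0 Ana / 1 Ana / 2 Ben / 5 Chloe / 6 Ana
```

A behaviour this cheap to compute is the opposite of hard AI, which is exactly Bailenson's point: social presence can come from well-chosen cues, not intelligence.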
The point, says Bailenson, is that AI is not the stumbling block many researchers once thought it was. He argues that people will engage with a screen avatar if its abilities suit the task at hand, and if there is a small possibility that a human is operating it.
As with doctors, academics could spread their workload too. "This would allow you to teach as many sections as your department desires," Bailenson says. With several copies operating simultaneously, a teacher could jump between them at will, inhabiting any one without ever letting on to the students.
Of course, many people might be reluctant to set loose autonomous facets of themselves. What happens if they say something inappropriate, or even evolve on their own? The experience of British writer Jon Ronson provides a hint. Earlier this year, a Twitter account under the name @jon_ronson began issuing tweets, raising the hackles of the real Ronson, who tweets as @jonronson. It was an impersonation, operated by an algorithm.
Ronson discovered that it was created by a British company called Philter Phactory, which makes autonomous bots called Weavrs. These can operate Twitter accounts and other social media on a person's behalf. The company's selling point is that Weavrs can be used to trawl the web for interesting links about certain topics, then post status updates or share videos and articles about them.
The Ronson-bot's chatter was anodyne, expressing, among other things, a love for midnight snacking. In a film Ronson made about the experience, he described how he felt unsettled and angry, because he had no control over this copy. Someone had mimicked his digital persona without his knowledge and there was nothing he could do to stop them.
Toon army
Many actors and performers have digital personas, sometimes created against their will. It seems laws will need to be adapted to define who can control people's digital selves (see "Double jeopardy").

Some people are more troubled by the effects on society as a whole. Jaron Lanier, an author and Microsoft researcher, worries about technologies that claim to amplify our efficiency. The promise that technology will free our time for leisure is an old one, and a hollow one: in practice we simply find new chores to keep us busy instead. Create 10,000 selves, he says, and we will create a world that demands a million. And in principle, doppelgängers could be cheaper to employ than real people. "If you're a history professor and you can operate 10,000 of these things, why does the university have to hire any other history professors?" Lanier asks.
For individuals, however, seeing copies of themselves acting outside their own bodies might have positive side effects. For example, when Bailenson subtly morphed people's avatars to be slightly more attractive, he found it gave them a confidence boost that persisted afterwards. Half an hour after the experiment, volunteers were asked to identify the most attractive person they thought they could successfully date: people made bolder choices when, unbeknown to them, their copy was slightly prettier or more handsome than in reality. The same was true of a slight increase in height. Conversely, in another experiment, virtual doubles that were presented as fatter than their real counterparts successfully motivated participants to exercise.
Clearly, then, creating virtual selves could have unintended consequences. Meeting our digital counterparts will not be like meeting ourselves, at least not at first. But they might be a convincing facet, and could even give you insights into how other people see you. The several Alex Schwartzkopfs could be the start of a whole new population explosion.
Double jeopardy
Earlier this year, the rapper Tupac Shakur appeared on stage at a music festival in California. This was a surprise, because Shakur had been dead for more than 15 years. He was projected as a hologram.

Digital versions of performers and athletes routinely appear in movies, advertisements and games. Yet these advances raise ownership issues that lawyers are only now beginning to tackle.
When the actress Sigourney Weaver had her face and emotional expressions digitised to star inside the virtual world of the movie Avatar, the information was stored in a database. Who owns the rights to that face? To what extent are manipulations - such as putting words into Weaver's or Tupac's mouth - acceptable without consent?
Some existing laws can be adapted to deal with these issues, says Simon Baggs, a partner at Wiggin, a media law firm based in London. When a photo of the racing driver Eddie Irvine was manipulated suggestively in an advert, he successfully sued. Meanwhile, some actors and athletes have already launched lawsuits under trademark law after their likenesses were used without their permission in advertisements and games.
If more of us start creating digital selves (see main story), other laws could be more suitable. Design rights, for example, could protect an animated avatar of your face and body against misuse, much as copyright protects other creative works.
Still, each country's laws vary. Individuals in the US have far greater rights over their own image than they do in the UK, for example. Baggs says this could attract lawsuits to certain countries, much as the UK's strict defamation laws have attracted libel tourism.