People increasingly rely on software agents in their lives, from searching for information to chatting with a bot. These goal-oriented computer programmes are spreading across many fields to assist, care for and entertain people.
By decoding users' voice commands, voice assistants (e.g. Google Assistant, Siri, Cortana, Alexa) deliver tailored search engine results. Personal healthcare assistants make it easier for patients to communicate with their care team and keep control of their health in their own hands. Intelligent game agents make games more responsive and challenging for players.
Evolving from rational to sentient agents
Current intelligent systems are rational and lack the emotional component. “By design, intelligent personal assistants do not have anthropomorphic representations: they lack most of the features we rely on to communicate. We interact with them via smart phones and smart speakers, while communication is voice-driven and episodic,” notes Josep Blat, coordinator of the EU-funded PRESENT project.
The assistants also lack the ability to leverage visual cues or build an affective relationship that might evolve. “While augmented and virtual reality involve visual and spatial data, the systems create physical barriers that restrict interaction by getting in the way of sensory input and communication,” adds Blat.
Building on advances in the real-time creation of photorealistic computer-generated characters, coupled with emotion and behaviour recognition, the PRESENT team designed a sentient virtual agent. The agent is realistic in both looks and behaviour and can interact with users as they navigate rich and complex environments. “Importantly, the sentient agent has both high visual and animation quality and establishes meaningful dialogue, adapting to human mood and emotional states and evolving in response to the user’s behaviour,” highlights Blat.
Avatars revolutionising digital experience
PRESENT’s agents offer the best of artificial intelligence and human conversation, interacting using both verbal and non-verbal cues like tone of voice and facial expression to recreate natural human interaction. With highly realistic facial and bodily appearance and sophisticated animation, PRESENT’s digital human technology is already being leveraged in research institutions and some visual effects and facial animation companies. Partner CREW demonstrated the results in different public performance venues, including SIGGRAPH.
Project partner Framestore advanced machine learning methods to develop faster animation rigs (the skeletal and muscle control structures used to animate 3D character models in computer animation). Improved representation of digital characters will greatly benefit film companies, enabling them to create digital characters that could fool the audience into believing they are played by humans.
Cubic Motion has been improving the facial and body animation of the characters, bringing it closer to an actor’s performance. The advanced methods relying on dense stereo reconstruction make it possible to overlay emotional states on faces.
Brainstorm demonstrated the integration of sentient agents in Virtual Studio solutions for broadcasting.
The University of Augsburg investigated sentient agent interaction based on the integration of diverse sensory/emotional inputs, while Inria investigated how to render the virtual characters more reactive and expressive in situations involving 1-to-1 and 1-to-n interactions between a user and virtual characters. Using interaction fields, researchers sought to sketch collective behaviours, namely how one agent moves relative to the others when forming groups, fleeing, hiding, etc.
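An interaction field of this kind can be pictured as a vector field attached to one agent that neighbouring agents sample to decide where to move next. A minimal sketch of the idea in Python (the function names, field formulas and parameters here are illustrative assumptions, not the project's actual model):

```python
import math

def interaction_field(source_pos, source_heading, point, mode="follow"):
    """Toy vector field attached to a source agent.

    Returns the unit velocity suggested at `point`. The two modes are
    illustrative: "follow" steers towards a spot just behind the source,
    "flee" pushes radially away from it.
    """
    dx = point[0] - source_pos[0]
    dy = point[1] - source_pos[1]
    dist = math.hypot(dx, dy) or 1e-9
    if mode == "follow":
        # Target a point one unit behind the source, along its heading.
        tx = source_pos[0] - math.cos(source_heading)
        ty = source_pos[1] - math.sin(source_heading)
        vx, vy = tx - point[0], ty - point[1]
    else:  # "flee"
        vx, vy = dx / dist, dy / dist
    norm = math.hypot(vx, vy) or 1e-9
    return (vx / norm, vy / norm)

def step(agents, source_pos, source_heading, mode, dt=0.1):
    """Advance each agent one time step along the sampled field."""
    return [(x + dt * u, y + dt * v)
            for (x, y) in agents
            for (u, v) in [interaction_field(source_pos, source_heading,
                                             (x, y), mode)]]
```

Composing several such fields (one per nearby agent or obstacle) is one simple way to obtain the group-forming, fleeing and hiding behaviours the text describes.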
Partners InfoCert and Pompeu Fabra University provide virtual agents for banking and healthcare with higher degrees of security and trust.