Communications

The presentation of the communications of the CHIST-ERA Conference 2011 (keynote and short talks, posters) will be updated continuously over the course of the summer.

Elisabeth André
Exploring Unconscious Signals in Human-Computer Interaction: Identification of Key Issues and Outline of a Roadmap for Future Research

Elisabeth André is a full professor of Computer Science at Augsburg University and Chair of the Research Unit for Human-Centered Multimedia. Prior to that, she worked as a principal researcher at DFKI GmbH, where she led various academic and industrial projects in the area of intelligent user interfaces.

Elisabeth André has a long track record in embodied conversational agents, multimodal interfaces and social signal processing. She serves on the editorial boards of various renowned international journals, such as the Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS), IEEE Transactions on Affective Computing (TAC), ACM Transactions on Interactive Intelligent Systems (TiiS), and AI Communications. In summer 2007 she was nominated a Fellow of the Alcatel-Lucent Foundation for Communications Research. In 2010, she was elected a member of the prestigious German Academy of Sciences Leopoldina, the Academy of Europe and AcademiaNet.

Abstract

Societal challenges, such as assisted living for elderly people, create a high demand for technology able to emulate human-style interaction modes. Currently, most human-machine interfaces focus on input that is explicitly issued by the human users. However, it is often the myriad of unconsciously conveyed signals that determines whether or not an interaction with a machine is successful. Even though a significant amount of effort has been spent on the analysis of behavioral data, most approaches focus on offline analysis, partially relying on acted data or laboratory conditions. On the other hand, significant progress in the area of pervasive computing enables us to collect data in more realistic settings using unobtrusive sensors. In my talk, I will demonstrate how progress in the areas of social signal processing and pervasive computing can contribute to a deeper symbiosis in human-machine interaction by collecting subtle behavioral cues under naturalistic conditions and linking them to higher-level intentional states. On the way to this goal, however, a number of challenges need to be solved: users show a great deal of individuality in their behaviors, and there is no clear mapping between behavioral cues and intentional states. This is particularly true for real-life settings, where users are exposed to a more diverse set of stimuli than under laboratory conditions. Furthermore, it is not obvious how to acquire ground-truth data against which to evaluate the performance of system components that map unconsciously conveyed behavioral cues onto intentional states. Finally, we need to cope with limited resources when recording behavioral cues in a mobile context and responding to them in real time. Apart from technological challenges, psychological, societal and privacy issues need to be taken into account. Based on an analysis of recent activities in the areas of social signal processing and pervasive computing, I will outline a roadmap for future research.

Intelligent User Interfaces
July 5

Keynote talk

Karim Jerbi
Turning thoughts into actions with Brain-Machine Interfaces (BMIs): State-of-the-art and future challenges

After initially training as a biomedical engineer at the Technical University of Karlsruhe in Germany, Karim Jerbi took on a research assistant position at the University of Southern California (USC) in Los Angeles (CA, USA), where he developed advanced techniques to localize brain activity from electroencephalography (EEG) and magnetoencephalography (MEG) recordings. In 2006 he completed a PhD in Cognitive Neuroscience at the Cognitive Neuroscience and Brain Imaging Lab (CNRS UPR 640) in Paris. His PhD research focused on the identification of large-scale network dynamics that mediate visuomotor control in humans. During his post-doctoral research at the Collège de France (CNRS Action and Perception Laboratory, Paris), in collaboration with the Brain Dynamics and Cognition Lab (INSERM, Lyon) and the University Hospital in Grenoble, he investigated the utility of direct brain recordings in human subjects for exploring new strategies for novel Brain-Computer Interfaces.

Karim Jerbi currently holds a position as a permanent researcher with INSERM in Lyon. He is a principal investigator in the Brain Dynamics and Cognition team of the newly launched Lyon Neuroscience Research Center (headed by Dr Olivier Bertrand). He is an internationally recognized expert in the study of brain connectivity, visuomotor control, human electrophysiology and Brain-Computer Interfaces. In recent years, Dr Jerbi has participated in numerous French national projects (e.g. the ANR-RNTL OpenViBE consortium) and EC projects such as NEUROBOTICS (FP6), NEUROPROBES (FP6) and BRAINSYNC (FP7), and he regularly reviews for leading journals and grant-awarding bodies in the fields of neuroscience, cognition and brain imaging.

Abstract

Over the last decade, increasing collaboration and conceptual convergence between neuroscience, engineering and robotics have shifted the idea of direct brain-machine communication from the realm of science fiction into an exciting and thriving field of research. The most convincing examples of functional Brain-Machine Interfaces (BMIs) have been reported in monkeys, where direct recordings of neuronal discharges in primary motor cortex can be translated into motor commands that actuate, for instance, a robotic arm. In humans, reports of invasive BMI systems are less numerous than in non-human primates, and signal selection as well as signal decoding methods for optimal control are still in their early days. Although non-invasive BMIs remain an important goal for BMI research with both clinical and non-clinical objectives, invasive recordings in humans in the context of neuronal decoding of intended actions will continue to be crucial in order to bridge the gap between human and non-human primate research into BMIs. In addition, the debate about whether invasive BMIs might ultimately be unavoidable for specific clinical applications is still ongoing.

In this talk, I will first provide a brief overview of the state of the art in BMI research and highlight the major challenges that need to be tackled today in order to move from the proof-of-concept phase to real implementations in various contexts. I will also describe a few studies that we performed to evaluate the possible utility of intracerebral recordings, obtained via stereo-electroencephalography (SEEG) depth electrodes in epilepsy patients, for the development of novel Brain-Computer Interfaces. To test the ability of patients to control various parameters of their intracranial recordings, we used an online signal analysis system that computes and displays ongoing brain power variations in real time. This system, which we coined “Brain TV”, paves the way for the development of novel strategies for BCI, neurofeedback and real-time functional mapping (Lachaux, Jerbi et al. 2007). I will also present findings from offline analysis of intracranial EEG data acquired during delayed motor tasks, which address the question of whether motor intentions can be decoded from the human brain. Among other things, our results suggest that BCI performance may be improved by using signals recorded from various brain structures, including neural activity in the motor and oculomotor systems as well as in higher cognitive processes such as attention and mental calculation networks (Jerbi et al. 2007, 2009). Finally, I will discuss putative implications of our findings for the development of novel real-time tools for clinical applications (restoring communication, or neurorehabilitation via neurofeedback training) as well as non-clinical applications (video games, artistic creativity and augmenting human performance).
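
To make the real-time band-power idea behind a system like “Brain TV” concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' actual implementation: it band-pass filters short sliding windows of a single EEG/SEEG channel and reports their power, which a feedback display could then visualize. The sampling rate, frequency band and window length are illustrative assumptions.

```python
# Minimal, illustrative sketch of real-time band-power estimation for one
# intracranial EEG channel (not the actual "Brain TV" implementation).
# Assumed parameters: 512 Hz sampling, 60-90 Hz band, 0.5 s windows.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 512             # sampling rate in Hz (assumption)
BAND = (60.0, 90.0)  # frequency band of interest in Hz (assumption)
WIN = FS // 2        # sliding-window length: 0.5 s of samples

# Design a 4th-order Butterworth band-pass filter for the chosen band.
b, a = butter(4, [BAND[0] / (FS / 2), BAND[1] / (FS / 2)], btype="band")

def band_power(window: np.ndarray) -> float:
    """Return the mean power of one signal window within the chosen band."""
    filtered = filtfilt(b, a, window)   # zero-phase band-pass filtering
    return float(np.mean(filtered ** 2))

# Simulated acquisition loop: in a real system each window would come from
# the amplifier; here white noise stands in for the recorded signal.
rng = np.random.default_rng(0)
for _ in range(5):
    window = rng.standard_normal(WIN)
    print(f"band power: {band_power(window):.4f}")  # value shown to the user
```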

Intelligent User Interfaces
July 5

Keynote talk

Laurence Devillers
Towards a natural Interaction with Robots: Affective and social dimensions of spoken interactions

Prof. Dr. Laurence Devillers is a full-time Professor of Computer Science applied to the human and social sciences at Paris-Sorbonne (Paris IV) University, France. Her current research activities fall in the areas of affective computing and spoken interaction (http://perso.limsi.fr/Individu/devil/). She conducts her research at the Computer Science Laboratory for Mechanics and Engineering Sciences (LIMSI – CNRS) within the Spoken Language Processing Group, where she heads the team on “Affective and social dimensions of spoken interactions”, working on machine analysis of human non-verbal behaviour, including audio and multimodal analysis (paralinguistic cues, affect bursts such as laughter, postures) of affective states and social signals, and its applications to HCI. Prof. Devillers was also a partner in several ESPRIT, FP5 and FP6 European projects, including the HUMAINE Network of Excellence. She is also involved in several national projects for building assistive or companion robots, as well as serious-games applications for health, security and education.

Abstract

This talk will highlight some open challenges on the way towards natural interaction with robots. We will report on the state of the art in this area, share our experience from different projects, and highlight future trends in this field of research. In order to design affective interactive systems, experimental grounding is required to study expressions of emotion and social cues during interaction.

Robotics is a relevant framework for assistive applications thanks to robots' capacity for learning and their skills. In the framework of the ANR Tecsan ARMEN project, we are building Assistive Robotics to Maintain Elderly people in their Natural environment. We also participate in the Cap Digital FUI ROMEO project, which has two goals: to build a social humanoid robot that helps the visually impaired and the elderly in their everyday activities, and to build a game companion for children. In the near future, socially assistive robotics aims to address critical areas and gaps in care by automating the supervision, coaching, motivation and companionship aspects of one-to-one interactions with individuals from various large and growing populations, including the elderly, children, disabled people and individuals with social phobias, among many others. The ethical issues, including the safety, privacy and dependability of robot behaviour, are also more and more widely discussed. It is therefore necessary that a broader ethical reflection accompany the scientific and technological development of robots, to ensure harmony in their relation with human beings.

Intelligent User Interfaces
July 5

Keynote talk

Tevfik Metin Sezgin
Intelligent Interaction, Recognition and Multimodal Fusion: The cases for Sketching, Gestures, Speech, Haptics, Eye-gaze, and Affect

T. Metin Sezgin graduated summa cum laude with Honors from Syracuse University in 1999. He completed his MS in the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology in 2001 and received his PhD from MIT in 2006. He subsequently joined the Rainbow group at the University of Cambridge Computer Laboratory as a Postdoctoral Research Associate. Dr. Sezgin is currently an Assistant Professor in the College of Engineering at Koç University, Istanbul.

His research interests include intelligent human-computer interfaces, multimodal sensor fusion, and HCI applications of machine learning. Dr. Sezgin is particularly interested in applying these technologies to building intelligent pen-based interfaces. His research has been supported by international and national grants, including grants from DARPA (USA) and Turk Telekom. He is a recipient of the Career Award of the Scientific and Technological Research Council of Turkey. Dr. Sezgin has delivered invited lectures and tutorials at MIT (USA), Nottingham University (UK), and Bogazici University (Turkey). He also held a visiting researcher position at Harvard University in 2010.

Abstract

In this presentation, I will talk about the projects that I lead at the Intelligent User Interfaces Laboratory at Koç University. I will also summarize various projects that I have been involved in at MIT and the University of Cambridge. The overarching theme in my research is to build computer systems capable of supporting the natural modes of communication that humans use in their daily human-human interactions. People communicate with each other predominantly through speech, and they support speech with accompanying cues. Pen input (sketching) is one particularly powerful means of communication. The increasing availability of pen-based hardware such as tablet PCs and smartphones has fueled the demand for applications that can intelligently interpret pen input. The Intelligent User Interfaces Laboratory at Koç University houses leading experts in the field of sketch recognition and pen-based interaction. In addition, the group has a strong track record in building multimodal interfaces that combine speech recognition, emotion recognition, and haptic interaction technologies. The group places particular emphasis on building intelligent interactive systems through well-founded applications of machine learning, computer vision, and pattern recognition techniques.
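
As a toy illustration of the kind of multimodal fusion mentioned above, and not a description of the laboratory's actual systems, the sketch below combines hypothetical per-modality class probabilities (e.g. from sketch, speech and gesture recognizers) by weighted late fusion; the modality names, weights and class labels are all assumptions made for the example.

```python
# Toy late-fusion sketch: combine class probabilities from independent
# modality-specific recognizers with fixed weights (all values hypothetical).
import numpy as np

CLASSES = ["circle", "arrow", "rectangle"]   # assumed sketch symbols

def late_fusion(per_modality_probs: dict, weights: dict) -> str:
    """Weighted average of per-modality posteriors; returns the top class."""
    fused = np.zeros(len(CLASSES))
    for modality, probs in per_modality_probs.items():
        fused += weights[modality] * np.asarray(probs)
    fused /= sum(weights[m] for m in per_modality_probs)  # renormalize
    return CLASSES[int(np.argmax(fused))]

# Example: the pen-input recognizer is fairly sure of "arrow", while speech
# and gesture are less certain; the fused decision still picks "arrow".
probs = {
    "sketch":  [0.10, 0.80, 0.10],
    "speech":  [0.30, 0.40, 0.30],
    "gesture": [0.25, 0.45, 0.30],
}
weights = {"sketch": 0.5, "speech": 0.3, "gesture": 0.2}
print(late_fusion(probs, weights))  # -> "arrow"
```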

Intelligent User Interfaces
July 5

Keynote talk
