The LILIBUSE project will provide new means of searching and browsing a user's documents, either locally or in the cloud. Novel graphical and vocal user interfaces will be developed for both desktop and mobile devices. Navigation will be supported by semantic content indexing, realized with free linguistic analysis tools. This will enable intuitive and non-intrusive navigation in semantic data associated with user activities. Cloud technologies will ensure system scalability, and results will be validated via technological and user-based evaluation campaigns.
The presentations from the CHIST-ERA Conference 2011 (keynote and short talks, posters) will be updated continuously over the course of the summer.
Adrian Popescu is a researcher at the LIST laboratory of the French Alternative Energies and Atomic Energy Commission (CEA) in France. Before joining CEA LIST, he was a postdoctoral researcher at TELECOM Bretagne, where he contributed to the Georama project.
He is a research scientist with experience in building linguistic resources for Internet image retrieval and for geographic information retrieval. Adrian Popescu is primarily interested in building large-scale systems that are sensitive to semantics, but also in introducing image processing techniques into information retrieval. He is interested in continuing his current research work in industry or academia.
Daniela Petrelli is Reader in Interaction Design at the Art & Design Research Centre at Sheffield Hallam University, UK.
She has a PhD in Interaction Design, an Italian High Degree (Laurea) in Computer Science, a Fine Art Diploma, and extensive experience in social research. Her current research interests focus on all aspects of people interacting with technology in contexts other than the office, mainly the family home, for purposes different from work, e.g. leisure or reminiscing, and with devices that are not obviously digital, e.g. a radio from the 70s or a set of Christmas baubles.
In the area of intelligent interfaces she has carried out research in: language-based and multimodal human-computer interaction; human-computer collaboration and task sharing to improve the effectiveness of machine learning; the visualisation and manipulation of semantic data for sense making; and the personalisation of content for context sensitive museum visits.
Her research approach is human-centred, i.e. needs and aspirations are the focus and the technology is designed around them. She uses ethnography and co-design practices to explore and design innovative solutions that are then evaluated 'in the wild' with the actual users.
The Virtual Physiological Human (VPH) is an integrated model of the human body that will make it possible to predict how the condition of an individual patient will evolve. There is a stigma attached to long-term conditions and ageing, often due to the illness-centred design of assistive technology. Good human-centred design can help change this perception. We propose to embed digital technology in objects specifically designed to adapt to changing health conditions that are monitored and compared with one's own individual VPH.
The EU has invested substantially in the VPH for medical aims. This exceptional tool holds much potential for other disciplines that address issues of health and well-being, specifically engineering, computing and design. Multidisciplinarity is essential to reach solutions centred on quality of life.
Dr. Eric Ras is R&D manager in the Service Science and Innovation Department of the Public Research Centre Henri Tudor, Luxembourg. He supervises PhD students and postdoctoral fellows in the domain of service innovation, knowledge systems and services, and technology-based assessment in particular. He combines knowledge-based approaches and Semantic Web technologies with innovative human-computer interfaces (e.g., tangible user interfaces, dialogue-based systems) in order to foster knowledge externalization and its formal semantic representation. In addition, he works on a measurement framework for IT services and an ontology of services. He lectures at different universities in Germany and Luxembourg. In 2012, he became national contact point for the FET flagship project FuturICT.
The need for more adequate technology-based assessment has been recognized by the e-learning community and practitioners. This applies especially to the assessment of 21st-century skills and competencies in collaborative environments. Intelligent user interfaces offer huge potential to address this challenge. In particular, tangible user interfaces enhance engagement in solving tasks in collaborative settings and allow the recording of so-called traces, for example activity traces, that allow us to reason about solving strategies.
Having more reliable information on people's skills and competencies will allow us to extend user profiles beyond preferences, personal interests, etc. Hence, new possibilities arise to enhance recommendation and personalization approaches for intelligent user interfaces.
Grzegorz J. Nalepa holds a position of Assistant Professor in the Computer Science Laboratory of the Institute of Automatics at AGH UST, Poland.
His primary interests concern Artificial Intelligence, knowledge-based intelligent systems, rule-based expert systems, knowledge engineering and design, logic programming and Prolog, evaluation, verification and validation of intelligent systems, software engineering, semantic web, semantic knowledge wikis, Internet technologies. He is also interested in user interfaces, operating systems, embedded systems, and computer security.
Semantic wikis have been proposed as a flexible tool for social knowledge sharing. They provide simple UIs facilitating rapid and collaborative knowledge acquisition and management on the Web. Numerous improvements of the UI include the use of semantic forms as well as controlled natural languages. However, the widespread use of this social platform is limited by its desktop-oriented UI. The presentation discusses how novel UIs oriented towards mobile devices, as well as ambient intelligence techniques, could be used to push semantic wikis to the next level.
Dr. Jacek Ruminski is a researcher in the Department of Biomedical Engineering, Faculty of Electronics, Telecommunication and Informatics of Gdansk University of Technology (GUT) in Poland.
His research activities cover the design (including theoretical studies), implementation and deployment of sensors, electrodes, interfaces, measurement systems, integrated circuits, software for hardware at different scales, etc. In the field of user interfaces his group has received a number of awards, including for best papers (e.g. for eye-tracking solutions and colour identification solutions) and for practical innovations (e.g. a blowing device, an extended TV remote).
Our R&D is focused on portable interfaces for therapy, assisted living and e-Inclusion. Examples include multimedia eyeglasses, an extended remote controller, and a blowing device. For example, the prototype of the multimedia eyeglasses uses three cameras for the estimation of gaze direction and for analysis of the user's surroundings. A Linux-based processor collects and processes data from sensors embedded in the smart eyeglasses. Successful applications of the eyeglasses include colour identification (for the colour-blind), remote pulse recognition, etc.
Sviatlana Danileva is a PhD student in the Computer Science and Communications Research Unit of the Sciences, Technology and Communication Department of the University of Luxembourg.
Her primary academic interests are the analysis and modelling of interaction over a prolonged period of time, and of the mechanisms that allow communication to succeed or lead it to failure. The goal of her work is to create more sophisticated computational systems by understanding how interaction between humans is shaped and how it influences relationships. This interdisciplinary work focuses on chat-dialogue interaction in the context of second language acquisition.
We currently observe a demand for highly adaptive conversational interfaces, which draw from data sets and integrate organizational elements pertaining to human-human dialogues.
Within our interdisciplinary perspective, we focus on the implementation of a companionable agent integrating utility, adaptability, talk-in-interaction and socio-cultural features. We have therefore set up qualitative experiments for collecting data from natural scenarios to motivate computational models of companionship. Specifically, the focus is on chat conversations consisting of 30-minute sessions run over a longer period of time.
The poster highlights some research directions in the field of long-term interaction and human-machine social connections:
- Utility analysis: scenarios where multimodal, spoken and natural language interfaces make sense, explored through "wizarded" or qualitative experiments
- Highly adaptive user models
- Adaptive machine behaviour
- Formal models of interaction, interdependencies and forgetting
Professor Sylviane Cardey is a senior researcher at the Research Centre Lucien Tesnière (Linguistics and Natural Language Processing) of the Université de Franche-Comté in Besançon (France) and a senior member of the Institut Universitaire de France.
The 6th MUC (Message Understanding Conference) identified Named Entity Recognition as a fundamental step in text understanding. This step is equally important for natural language interfaces. Since then, research has been undertaken into identifying and describing entities in various domains: time, space, chemistry, biomedicine, etc. It has become apparent that a unified theory needs to be developed for the identification and description of these entities. In addition to corpus data, this theory can be based on generalized measure theory.
User environments include more and more heterogeneous data, whose automated and integrated exploitation would help users master such environments. A unified theory, in the spirit of UML (Unified Modeling Language), would increase the efficacy of user interfaces.
Yacine Bellik is Assistant Professor in Computer Science at the Orsay University Institute of Technology, Paris-South University, France. He holds the French habilitation to supervise research (HDR), is a member of the AMI (Architecture and Models for Interaction) group at the LIMSI laboratory, and heads the MIA (Modalities, Interactions and Ambient) research topic there.
His research interests belong to the domains of Human-Computer Interaction, Multimodal Interaction, and Ambient Intelligence.
Ambient Intelligence aims at providing embedded environments that are able to assist users in their daily tasks intelligently. Due to their specific characteristics, ambient environments require interfaces that are able to use different communication modalities, to adapt their behaviour to the interaction context, to acquire knowledge and learn from users' habits, to explain their own behaviour to the user, etc. In this poster we discuss the main research topics that should be addressed to achieve this kind of intelligent interface.