Projects

Eye Tracking

Eye gaze trackers are devices that estimate where the user is looking. Most eye trackers use video cameras to capture images of the user's eyes and infer the gaze direction from features such as the pupil position. We are currently researching methods to improve eye tracking technology.
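As an illustration, the sketch below shows one common building block of a video-based tracker: locating the pupil center in a cropped eye image. This is a minimal Python/OpenCV example; the grayscale input crop, the threshold value, and the blur size are illustrative assumptions, and a real tracker would add steps such as corneal-reflection detection and calibration.

import cv2

def locate_pupil(eye):
    """Return the (x, y) center of the darkest blob in a grayscale eye crop."""
    blurred = cv2.GaussianBlur(eye, (7, 7), 0)
    # The pupil is usually the darkest region of the eye image.
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)   # keep the largest dark blob
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])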

Eye Movements

The user's gaze behavior provides clues about their focus of attention and even their intentions. Gaze information can therefore be useful for evaluating interfaces. Moreover, understanding eye movement behavior is crucial when developing gaze-based interfaces.
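A basic step in analyzing gaze behavior is segmenting raw gaze samples into fixations. The sketch below implements the classic dispersion-threshold identification (I-DT) algorithm; the (time, x, y) sample format and the 30-pixel / 0.1-second thresholds are illustrative assumptions.

def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Group consecutive (t, x, y) gaze samples into fixations.

    A window is a fixation when it lasts at least min_duration seconds and
    its dispersion (x-range + y-range) stays below max_dispersion pixels.
    """
    fixations = []
    start = 0
    while start < len(samples):
        # Grow the window until it covers at least min_duration.
        end = start
        while end < len(samples) and samples[end][0] - samples[start][0] < min_duration:
            end += 1
        if end >= len(samples):
            break
        if _dispersion(samples[start:end + 1]) <= max_dispersion:
            # Keep extending while dispersion stays under the threshold.
            while end + 1 < len(samples) and _dispersion(samples[start:end + 2]) <= max_dispersion:
                end += 1
            xs = [x for _, x, _ in samples[start:end + 1]]
            ys = [y for _, _, y in samples[start:end + 1]]
            # Record (start time, end time, centroid x, centroid y).
            fixations.append((samples[start][0], samples[end][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            start = end + 1
        else:
            start += 1
    return fixations

def _dispersion(window):
    xs = [x for _, x, _ in window]
    ys = [y for _, _, y in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))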

Gaze-based Interaction

People with quadriplegia can use gaze trackers to interact with a computer through their eye movements. Virtual keyboards are commonly used to enter text, either by looking at the keys or by performing eye gestures.
Gaze can also enable hands-free interaction for people without disabilities. We are researching gaze-based interaction techniques that are useful to people with and without disabilities alike.
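A minimal sketch of dwell-time selection, the selection mechanism commonly used by gaze keyboards, is shown below: a key is triggered when the gaze rests on it long enough. The key layout and the 0.8-second dwell threshold are illustrative assumptions.

class DwellSelector:
    def __init__(self, keys, dwell_time=0.8):
        self.keys = keys            # {name: (x, y, w, h)} key rectangles
        self.dwell_time = dwell_time
        self.current = None         # key currently under the gaze
        self.enter_time = None      # when the gaze entered that key

    def update(self, t, gx, gy):
        """Feed one gaze sample; return a key name when it is selected."""
        hit = next((name for name, (x, y, w, h) in self.keys.items()
                    if x <= gx < x + w and y <= gy < y + h), None)
        if hit != self.current:
            self.current, self.enter_time = hit, t
            return None
        if hit is not None and t - self.enter_time >= self.dwell_time:
            self.enter_time = t     # reset so the key does not repeat instantly
            return hit
        return None

# Illustrative usage: feed gaze samples (time, x, y) as they arrive.
selector = DwellSelector({"A": (0, 0, 100, 100), "B": (100, 0, 100, 100)})
print(selector.update(0.0, 50, 50))   # None: gaze just entered "A"
print(selector.update(0.9, 50, 50))   # "A": dwell threshold reached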

Wearable Gaze-based Interaction

Wearable computing creates an environment of devices that are always on and available to the user. Interacting with such devices, however, remains a challenge; touch screens and speech recognition are the most commonly adopted techniques. We are researching new interaction methods for wearable environments that use gaze information.

Magic Surfaces

Ubiquitous computing envisions computing that is omnipresent and effectively "invisible", bringing humans and computers together in a seamless way. Advances in hardware and software technologies make it compelling to investigate innovative possibilities of interaction with computers.
We are exploring novel forms of interaction in interactive installations that use portable, low-cost equipment. The Magic Surface is a system that transforms a regular surface, such as a wall or a tabletop, into a multitouch interactive space. It detects touches from fingers, colored pens, and an eraser, and also supports a magic wand for 3D interaction. The Magic Surface can run applications that turn a regular surface into an interactive drawing area, a map explorer, or a 3D simulator for navigating virtual environments, among other possibilities. Areas of application range from education to interactive art and entertainment. Our prototype setup consists of a Microsoft Kinect sensor, a video projector, and a personal computer.
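The sketch below illustrates one plausible way such a setup can detect touches: comparing each Kinect depth frame against a background model of the empty surface and keeping blobs that hover a few millimeters above it. The frame format, the 10-40 mm touch band, and the minimum blob area are illustrative assumptions, not the system's actual parameters.

import numpy as np
from scipy import ndimage

def build_background(depth_frames):
    """Median of several empty-surface depth frames models the surface."""
    return np.median(np.stack(depth_frames), axis=0)

def detect_touches(depth, background, near=10, far=40, min_area=30):
    """Return (row, col) centroids of blobs 10-40 mm above the surface.

    `depth` and `background` are millimeter-valued 2D arrays from the sensor.
    """
    height = background - depth          # how far each pixel is above the surface
    touching = (height > near) & (height < far)
    labels, n = ndimage.label(touching)  # group touching pixels into blobs
    centers = []
    for i in range(1, n + 1):
        blob = labels == i
        if blob.sum() >= min_area:       # ignore small blobs from depth noise
            centers.append(ndimage.center_of_mass(blob))
    return centers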

Interactive Mirror

The objective of this project is to develop an interactive virtual mirror for makeup simulation, improving the user experience of trying different makeup products. Interaction with the simulator is based on the metaphor of a mirror: the system interface allows the user to choose makeup and apply it over their own image. The system renders the mirrored video in real time, without fiducial markers on the user's face and without building a 3D face model before use. The proposed interface consists of a touch-sensitive monitor and an RGBD camera.
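As an illustration, the sketch below shows a simplified version of the rendering step: alpha-blending a lipstick color into the lip region of a camera frame. Obtaining the binary lip mask (for example, from facial landmarks) is assumed to be given, and the color and opacity values are placeholders.

import cv2
import numpy as np

def apply_lipstick(frame, mask, color=(40, 40, 200), opacity=0.4):
    """Blend a BGR `color` into `frame` wherever the binary lip `mask` is set."""
    overlay = frame.copy()
    overlay[mask > 0] = color
    # Feather the mask edge so the makeup fades smoothly into the skin.
    soft = cv2.GaussianBlur(mask.astype(np.float32), (15, 15), 0)
    if soft.max() == 0:
        return frame
    alpha = (soft / soft.max() * opacity)[..., None]
    return (frame * (1 - alpha) + overlay * alpha).astype(np.uint8)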

Sparse Video Understanding

It is now common for places such as stores, parking lots, and train stations to be monitored by a considerable number of video cameras, not to mention sports and other televised events. Multiple sensors are also required to monitor large, complex environments, both because a single camera has a limited field of view and to avoid blind spots caused by occlusion.
In typical large-scale surveillance applications, however, displaying multiple video streams on separate monitors can overwhelm human operators: while a single monitor produces information that is easy to understand, it is difficult for humans to integrate multiple views into a global comprehension of events. Even the simple task of tracking a moving object from monitor to monitor can become complex, since there may be several moving objects, and the operator may not know which monitor to look at next once the object leaves a camera's field of view.
The integration of multiple videos onto a 3D scene model in an augmented virtual environment (AVE) has been shown to be an effective tool for coping with multiple cameras monitoring complex scenes [1–4]. The 3D visualization provides users with natural browsing and a multi-resolution view of the spatial and temporal data captured by the multiple cameras.
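At the core of such video-to-model integration is projecting the scene geometry into each calibrated camera, so that model surfaces can be textured with live video (projective texture mapping). The sketch below shows that geometry for a pinhole camera; the intrinsics K and pose (R, t) values are placeholders.

import numpy as np

def project_points(points_3d, K, R, t):
    """Map Nx3 world points to Nx2 pixel coordinates of a pinhole camera."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera coordinates
    pix = K @ cam                             # camera -> image plane
    return (pix[:2] / pix[2]).T               # perspective divide

# Each vertex of the scene model can then sample its color from the video
# frame at the projected coordinates, provided it is visible from the camera.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
print(project_points(np.array([[0.0, 0.0, 0.0]]), K, R, t))  # -> [[320. 240.]]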