Seminars

Thu
11:00 a.m.
Jul 13 2017

Controlling Simulations in Graphics

Speaker:
Prof. John Keyser
Location:
Auditório Jacy Monteiro

Abstract

Simulations are an important part of generating realistic computer graphics animations. It is often desirable to control a simulation so that it can be directed the way an artist wants, or made to match known data. However, the desired behavior may conflict significantly with the results a simulation would produce from first principles. In this talk, I will describe some ways that we can apply control to simulations, modifying the simulation to attempt to match user-specified goals while still appearing realistic. We will look specifically at two motivating examples: simulating large piles of rigid objects and simulating the behavior of gaseous fluids. The talk will conclude with a discussion of the challenges in determining the acceptability of graphics simulations, and some initial work addressing this problem.
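The tension between control and first-principles dynamics can be illustrated with a generic sketch in Python (this is not Prof. Keyser's method, and all constants are assumptions): a 1-D particle under gravity is nudged by a proportional-derivative control force toward an artist-specified goal. The control gain trades physical plausibility against goal matching; a large gain tracks the goal tightly but looks less like free physics.

# Generic sketch: blending a first-principles force with a control force.
# All constants are illustrative assumptions, not values from the talk.
GRAVITY = -9.8    # m/s^2, the first-principles acceleration
K_GAIN = 20.0     # proportional control gain (assumed)
C_DAMP = 5.0      # derivative (damping) gain (assumed)
DT = 0.01         # integration time step in seconds

def step(pos, vel, goal):
    """One semi-implicit Euler step of a 1-D controlled particle."""
    control = K_GAIN * (goal - pos) - C_DAMP * vel  # PD pull toward the goal
    accel = GRAVITY + control                       # physics plus control
    vel += accel * DT
    pos += vel * DT
    return pos, vel

pos, vel = 0.0, 0.0
for _ in range(500):              # five simulated seconds
    pos, vel = step(pos, vel, goal=3.0)
print(round(pos, 2))              # settles near the goal, sagging slightly under gravity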

Short Bio:
John Keyser is a Professor and the Associate Department Head in the Department of Computer Science and Engineering at Texas A&M University, one of the largest universities in the United States. He joined Texas A&M in 2000, after receiving his PhD in Computer Science from the University of North Carolina, and earlier earned Bachelor’s Degrees in Applied Math, Engineering Physics, and Computer Science from Abilene Christian University. His research has spanned a wide range of graphics, with the majority of his work in the areas of geometric modeling, especially in robust solid modeling applications, and physically-based simulation. He has also worked on topics in rendering, data visualization, and a large interdisciplinary project on scanning and reconstructing small animal brains at sub-micrometer resolution.

Tue
11:00 a.m.
Mar 07 2017

Vision-based Human-Computer Interfaces, Personalization, and Crowdsourcing

Speaker:
Margrit Betke
Location:
Auditório Jacy Monteiro

Abstract

On a single day, billions of videos are watched on the internet and millions of new photographs are uploaded. Cell phone cameras and smart-home sensors enable us to interface with computing devices and with each other. Automatically interpreting visual data is challenging and leads to exciting new research tasks. In this talk, I will highlight several projects my research group has worked on recently, including “personalizing gesture recognition,” “remote monitoring of physical therapy,” “document image analysis,” “crowdsourcing the annotations of videos of political discourse,” and “subitizing with deep learning.”

Short bio:
Margrit Betke is a Professor at Boston University, where she co-leads the Image and Video Computing Research Group. Her team has worked on research projects that connect computer vision, machine learning, human computation, and human-computer interaction. Prof. Betke earned a PhD from MIT, has published over 140 original research papers, is an Associate Editor of IEEE TPAMI, and has led large sponsored research programs.

Wed
01:00 p.m.
Nov 30 2016

Identifying usability problems and accessibility barriers from interaction log analysis

Speaker:
Vagner Figueredo de Santana
Location:
CCSL Auditorium

Abstract

User interface evaluation is one of the most popular activities in Human-Computer Interaction. Part of this activity involves the capture and analysis of usage data, also known as interaction logs. Interaction log analysis appears in multiple industries and research problems. However, two different lines of thought propose different ways of analyzing interaction data: 1) methods based on optimal task models/grammars; 2) quantitative analysis of click streams. Each approach has pros and cons: in (1), the models/grammars are hard to build and maintain, while in (2), detailed interaction data is hardly taken into account. In this talk I will present a log analysis approach that identifies accessibility barriers and/or usability problems from detailed usage data, and show how it can be related to graph topology metrics, supporting its use at scale.
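To make the graph view of interaction logs concrete, here is a minimal Python sketch (the log format is an assumption for illustration, not the approach presented in the talk): UI elements become nodes, observed transitions become weighted directed edges, and simple topology metrics such as out-degree or back-and-forth edge pairs can flag elements where users hesitate or loop.

from collections import Counter, defaultdict

# Hypothetical interaction log: (session_id, ui_element) events,
# ordered by timestamp within each session.
log = [
    ("s1", "search_box"), ("s1", "submit"), ("s1", "results"),
    ("s2", "search_box"), ("s2", "submit"), ("s2", "search_box"),
    ("s2", "submit"), ("s2", "results"),
]

# Build a directed transition graph: one edge (a, b) each time a user
# moves from element a to element b within the same session.
by_session = defaultdict(list)
for session, element in log:
    by_session[session].append(element)
edges = Counter()
for events in by_session.values():
    for a, b in zip(events, events[1:]):
        edges[(a, b)] += 1

# Simple topology signals: a high out-degree or back-and-forth edges
# (a -> b and b -> a) may indicate hesitation or a usability problem.
out_degree = Counter(a for (a, b) in edges)
back_and_forth = {e: n for e, n in edges.items() if (e[1], e[0]) in edges}
print(dict(edges))
print(dict(out_degree))
print(back_and_forth)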

Short bio:
Vagner Figueredo de Santana is an IBM Master Inventor, a researcher at IBM, and a collaborating researcher at the Federal University of ABC (UFABC). He holds PhD (2012) and MSc (2009) degrees in Computer Science from the University of Campinas (UNICAMP) and a BSc degree (2006) in Computer Science from Mackenzie Presbyterian University. He was webmaster of Folha Online (www.folha.com.br) from 2002 to 2007 and a visiting professor at Mackenzie Presbyterian University from 2009 to 2015. He has researched topics in Human-Computer Interaction since 2006 and serves as a reviewer for the following conferences and journals: CHI, EICS, IHC, IHCI, INTERACT, MobileHCI, CBIE, IJHCS (International Journal of Human-Computer Studies), and UAIS (Universal Access in the Information Society).

Fri
12:00 p.m.
Apr 29 2016

Eyes on the interface: typing by gaze

Speakers:
Andrew Kurauchi, PhD student at DCC/IME/USP
Antonio Diaz Tula, postdoctoral researcher at DCC/IME/USP
Location:
CCSL Auditorium

Abstract

Gaze-based interaction allows people with severe motion disabilities to communicate through a computer using their eye movements. The most common method is text entry, or "typing", by gaze.
The most widely used techniques for text entry by gaze are based on virtual keyboards. The user performs a selection by fixating their gaze on the desired key for a given dwell time. This technique, though simple, forces the user to wait before each selection, resulting in low typing speeds.
In this seminar we will present two recent advances proposed by our research group to improve the speed of text entry by gaze with virtual keyboards. The first, developed in collaboration with researchers from Boston University, eliminates the need to wait for each selection, as words are obtained from a dictionary according to the gaze pattern of the user on the keyboard.
The second, instead of trying to eliminate the wait time for each selection, shows additional information on the keys, so the user can use the dwell time to detect typing errors and explore the list of most probable words without moving their gaze from the fixated key. Both techniques improve typing speed, as shown by user studies.
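The dwell-time mechanism described in this abstract is simple to state in code. Below is a minimal Python illustration (not the group's implementation; the sample format and the 0.8 s threshold are assumptions): a key is selected once the gaze has rested on it continuously for the dwell time.

DWELL_TIME = 0.8  # seconds of sustained fixation per selection (assumed value)

def dwell_select(gaze_samples, dwell_time=DWELL_TIME):
    """Return the first key fixated continuously for at least dwell_time.

    gaze_samples is an iterable of (timestamp, key) pairs, where key is
    the virtual-keyboard key under the current gaze position, or None.
    """
    fixated, start = None, None
    for timestamp, key in gaze_samples:
        if key != fixated:                # gaze moved to another key: restart the clock
            fixated, start = key, timestamp
        elif key is not None and timestamp - start >= dwell_time:
            return key                    # dwell threshold reached: select this key
    return None                           # no key was fixated long enough

# Example: gaze samples at 10 Hz resting on "A" for 0.9 s select "A".
samples = [(t / 10, "A") for t in range(10)]
print(dwell_select(samples))  # -> A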

Fri
12:00 p.m.
Apr 01 2016

Gaze-based interaction - challenges and opportunities

Speaker:
Prof. Carlos Hitoshi Morimoto
Location:
CCSL Auditorium

Abstract

Eye gaze trackers are devices that indicate the gaze position on a known object, for example, a computer monitor. Until recently, these devices were used primarily for research. Reductions in their size and cost have been allowing their use in more general applications, with particular success in assisting people with severe motion disabilities. In this talk we will present recent advances, current challenges, and research opportunities in the field of eye gaze tracking, based on work presented at ETRA 2016. As this is the first HCI seminar of 2016, Prof. Morimoto will also give a brief description of the projects being developed by the HCI research group at DCC-IME-USP, with applications in wearable computing, augmented reality, and gaze-based interaction.