University of Toronto
Facile interaction with displays all over the place
As computing increasingly veers towards mobile and "everywhere" usage scenarios, the user interface must evolve to better support such activities. In this talk, I will provide a broad overview of some of the more promising research being undertaken in the area of next-generation user interfaces for future computing environments, illustrated with examples from work at the Department of Computer Science, University of Toronto. This will include interaction using handheld projectors, sketch- and gesture-based interfaces, interfaces for very large but expensive displays, interfaces for very cheap "displays" all over the place, and the supporting infrastructure.
Ravin Balakrishnan is a Professor of Computer Science and Canada Research Chair in Human-Centred Interfaces at the Department of Computer Science, University of Toronto, where he co-directs the Dynamic Graphics Project (DGP) laboratory. His research interests are in Human-Computer Interaction (HCI), Information and Communications Technology for Development, and Interactive Computer Graphics. He earned his Ph.D. in Computer Science from the University of Toronto, working with Bill Buxton, while concurrently a part-time researcher at Alias|wavefront (now part of Autodesk). He was elected to the ACM CHI Academy in 2011, and is the recipient of an Alfred P. Sloan Research Fellowship (2007), an Ontario Premier's Research Excellence Award (2003), the Bell University Laboratories Associate Chair in HCI at the University of Toronto (2002-2006), and best paper awards, nominations, and honourable mentions at the CHI 2010, CSCW 2010, CHI 2009, CHI 2008, CSCW 2006, UIST 2006, CHI 2005, Graphics Interface 2005, and UIST 2004 conferences. In addition to working with students and colleagues at Toronto, he collaborates with researchers at leading industrial laboratories and universities worldwide, including stints as a visiting researcher at Mitsubishi Electric Research Laboratories (MERL) (2005-2007), a visiting professor at the University of Paris & INRIA (2006), and a visiting researcher at Microsoft Research's Redmond, Beijing, Bangalore, and Cambridge labs while on sabbatical from the University of Toronto during the 2007-2008 academic year. He was a co-founder of Bump Technologies Inc., which was acquired by Google in 2010, and is involved in another startup, Arcestra, that is commercializing research conducted in his lab.
Carnegie Mellon University
Programmers are People Too: Applying HCI to Software Developers
My Natural Programming Project is working on making programming languages and environments easier to learn, more effective, and less error-prone. We take a human-centered approach: first studying how people perform their tasks, and then designing languages and environments that take into account people's natural tendencies. We are designing new programming languages for people who are not professional programmers (sometimes called "end-user programmers") based on how people think about expressing algorithms and tasks. We are also working on improving programming environments and libraries for professional programmers. For example, by studying programmers working on everyday bugs, we found that they continually ask "Why" and "Why Not" questions, so we developed the "Whyline" debugging tool, which allows programmers to ask these questions directly of their programs and get a visualization of the answers. The Whyline increases productivity by about a factor of two. When reverse-engineering unfamiliar code, we saw that programmers frequently need to trace feasible execution paths, so we developed a new visualization tool to present this information directly. We studied the usability of APIs, such as the Java SDK and the SAP eSOA APIs, and discovered some common patterns that make programmers up to 10 times slower in finding and using the appropriate methods, so we developed new tools to compensate. This talk will provide an overview of our studies and the resulting designs and tools, which benefit from applying both Software Engineering and Human-Computer Interaction approaches.
Brad A. Myers is a Professor in the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University. He is an ACM Fellow and a member of the CHI Academy, an honor bestowed on the principal leaders of the field. He is the principal investigator for the Natural Programming Project and the Pebbles Handheld Computer Project, and previously led the Amulet and Garnet projects. He is the author or editor of over 400 publications, including the books "Creating User Interfaces by Demonstration" and "Languages for Developing User Interfaces," and he has been on the editorial board of five journals. He has been a consultant on user interface design and implementation to over 70 companies, and regularly teaches courses on user interface design and software. Myers received his PhD in computer science from the University of Toronto, where he developed the Peridot UIMS. He received the MS and BSc degrees from the Massachusetts Institute of Technology, during which time he was a research intern at Xerox PARC. From 1980 until 1983, he worked at PERQ Systems Corporation. His research interests include user interface development systems, user interfaces, handheld computers, programming environments, programming language design, programming by example, visual programming, interaction techniques, and window management. He is a Senior Member of the IEEE, and also belongs to SIGCHI, ACM, and the IEEE Computer Society.
Purdue University and Microsoft Research Asia
Adding Touch Feedback to Human Computer Interactions
For a long time, the sense of touch has been regarded as an inferior sense compared to vision or audition. However, the potential to receive information through touch is well illustrated by the natural communication methods used by individuals with severe auditory and/or visual impairments. With the advent of cellphones and handheld digital devices, there is renewed interest in transmitting information through touch, whether for privacy or for an enhanced interaction experience. My talk will start with a historical review of vibrotactile displays for sensory substitution, with an emphasis on wearable/portable systems. I will then provide an overview of more recent advances in haptics research enabled by force-feedback human-machine interfaces. Looking towards the future, haptics research has now reached a level of maturity such that it is only a matter of time before human-computer interaction benefits not only from touch input but also from touch feedback. In fact, many technologies are readily available today to make this happen. I will speculate on near-term opportunities for adding touch feedback to mobile devices, keyboards, and tablets.
Hong Z. Tan is a professor of electrical and computer engineering, with courtesy appointments in mechanical engineering and psychological sciences, at Purdue University. Her research focuses on haptic human-machine interfaces and haptic perception. She received her Bachelor's degree in Biomedical Engineering from Shanghai Jiao Tong University, P.R. China, and earned her Master's and Doctorate degrees, both in Electrical Engineering and Computer Science, from the Massachusetts Institute of Technology (MIT). She was a Research Scientist at the MIT Media Laboratory before joining the faculty of Purdue's School of Electrical and Computer Engineering in 1998. She has held a McDonnell Visiting Fellowship at Oxford University and a Visiting Associate Professorship in the Department of Computer Science at Stanford University. She is currently a Visiting Researcher with Microsoft Research Asia in Beijing, P.R. China.
Tan was a recipient of the US National Science Foundation's Early Faculty Development (CAREER) Award from 2000 to 2004. In addition to serving on numerous program committees, she was a co-organizer (with Blake Hannaford) of the International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems from 2003 to 2005. She was the founding chair of the IEEE Technical Committee on Haptics, a home for the international interdisciplinary haptics research community, from 2006 to 2008. She is currently an associate editor of Presence: Teleoperators & Virtual Environments, ACM Transactions on Applied Perception, and IEEE Transactions on Haptics.
University of Tokyo
Design Everything by Yourself: User interfaces for graphics, CAD modeling, and robots
I will introduce our research project (the Design Interface Project), which aims at developing various design tools for end-users. We live in a mass-production society today, and everyone buys and uses the same things all over the world. This is cheap, but not necessarily ideal for individuals. We envision that computer tools that help people design things by themselves can enrich their lives. To that end, we develop innovative interaction techniques for end users to (1) create rich graphics, such as three-dimensional models and animations, by simple sketching, (2) design their own real-world, everyday objects, such as clothing and furniture, with real-time physical simulation integrated into a simple geometry editor, and (3) design the behavior of their personal robots and give instructions to them to satisfy their particular needs.
Takeo Igarashi is a professor in the Department of Computer Science at the University of Tokyo. He received his PhD from the Department of Information Engineering at the University of Tokyo in 2000. His research interests are in user interfaces in general, with a current focus on interaction techniques for 3D graphics. He is known as the inventor of the sketch-based modeling system Teddy, and received the Significant New Researcher Award at SIGGRAPH 2006. He currently directs the JST ERATO Igarashi Design Interface Project.