
 

Helping Consumers and Patients Manage their Health

 

Patient-Friendly Information Displays
Electronic medical records are increasingly comprehensive but rarely accessible to patients, who are frequently under-informed about their own hospital courses. In this work, we propose various patient-centric information displays to deliver wait-time predictions as well as in-room progress reports. More info...

Pfeifer Vardoulakis, L., Karlson, A., Morris, D., Smith, G., Gatewood, J., Tan, D.S. (2012). Using Mobile Phones to Present Medical Information to Hospital Patients. CHI 2012.

Wilcox, L., Morris, D., Tan, D.S., Gatewood, J., Horvitz, E. (2011). Characterizing Patient-Friendly “Micro-Explanations” of Medical Events. CHI 2011.

Wilcox, L., Gatewood, J., Morris, D., Tan, D.S., Horvitz, E., Feiner, S. (2010). Physician Attitudes About Patient-Facing Information Displays at an Urban Emergency Department. AMIA 2010.

Skeels, M., Tan, D.S. (2010). Identifying Opportunities for Inpatient-Centric Technology. ACM IHI 2010.

Wilcox, L., Morris, D., Tan, D.S., Gatewood, J. (2010). Designing Patient-Centric Information Displays for Hospitals. CHI 2010. (Best Paper Nomination)


 

Healthcare is shifting from reactive to preventive care, with a focus on maintaining general wellness through positive decisions about diet, exercise, and lifestyle. In our work, we explore the information consumers and patients need to understand and manage their health, and we begin to develop technologies that empower them to do so.

Lester, J., Tan, D.S., Patel, S., Brush, A.J. (2010). Automatic Classification of Daily Fluid Intake. Pervasive Health 2010. (Best Paper Award)

shraefel, m., White, R., Andre, P., Tan, D.S. (2009). Investigating Web Search Strategies and Forum Use to Support Diet and Weight Loss. CHI 2009.

Grimes, A., Tan, D.S., Morris, D. (2009). Understanding Family Reflections on Health Data. Group 2009 Conference.

Kirovski, D., Oliver, N., Sinclair, M., Tan, D.S. (2007). Health-OS: A Position Paper. HealthNet 2007.


Physiological Computing: Bio-sensing for Mobile Natural User Interfaces

 

Sensing Gestures Using the Body as an Antenna
This work utilizes the human body as an antenna to receive electromagnetic (EM) noise that already exists in our environments and to infer gestures. While this noise is bothersome to nearly every other EM sensing application, we treat it as the core of our signal. By observing the properties of the noise picked up by the body, we can infer gestures on and around existing surfaces and objects, specifically the walls and appliances in the home. More info...

Cohn, G., Gupta, S., Lee, T.J., Morris, D., Smith, J., Reynolds, M., Tan, D.S., Patel, S. (2012). An Ultra-Low-Power Human Body Motion Sensor Using Static Electric Field Sensing. Ubicomp 2012. (Best Paper Award)

Cohn, G., Morris, D., Patel, S., Tan, D.S. (2012). Humantenna: Using the Body as an Antenna for Real-Time Whole-Body Interaction. CHI 2012. (Best Paper Nominee)

Cohn, G., Morris, D., Patel, S., Tan, D.S. (2011). Your Noise is My Command: Sensing Gestures Using the Body as an Antenna. CHI 2011. (Best Paper Award)


 

Skinput: Bioacoustic Sensing to Appropriate the Body as an Input Surface
Skinput appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface. We resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. We collect these signals using a novel array of sensors worn as an armband. This approach provides an always available, naturally portable, and on-body finger input system. More info...

Harrison, C., Tan, D.S., Morris, D. (2010). Skinput: Appropriating the Body as an Input Surface. CHI 2010. (Best Paper Award)

Harrison, C., Tan, D.S., Morris, D. (2011). Skinput: Appropriating the Skin as an Interactive Canvas. Communications of the ACM.


 

Muscle-Computer Interfaces
Muscle-computer interfaces directly sense and decode human muscular activity rather than relying on physical actuation or perceptible user actions. Using a wireless EMG armband, we have demonstrated relatively high accuracy in decoding simple finger gestures, both when the arm rests on a surface and when the gestures are performed in free space. More info...

Saponas, T.S., Tan, D.S., Morris, D., Turner, J., Landay, J. (2010). Making Muscle-Computer Interfaces More Practical. CHI 2010.

Benko, H., Morris, D., Saponas, T.S., Tan, D.S. (2009). Enhancing Input On and Above the Interactive Surface with Muscle Sensing. Interactive Tabletops and Surfaces 2009.

Saponas, T.S., Tan, D.S., Morris, D., Balakrishnan, R., Turner, J., Landay, J. (2009). Enabling Always-Available Input with Muscle-Computer Interfaces. UIST 2009.

Saponas, T.S., Tan, D.S., Morris, D., Balakrishnan, R. (2008). Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces. CHI 2008.
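A first step in decoding such gestures is extracting amplitude features from the raw EMG stream. The sketch below is an illustrative stand-in, not the pipeline from the papers above; the function name and parameters are hypothetical. It computes windowed root-mean-square features per channel, a standard starting point for surface-EMG classification:

```python
import numpy as np

def rms_features(emg, window, step):
    """Root-mean-square amplitude per channel over sliding windows.

    emg: (n_samples, n_channels) array of raw EMG readings.
    Returns an (n_windows, n_channels) feature matrix suitable as
    input to an ordinary classifier (e.g. an SVM over gestures)."""
    emg = np.asarray(emg, dtype=float)
    feats = []
    for start in range(0, len(emg) - window + 1, step):
        seg = emg[start:start + window]
        feats.append(np.sqrt((seg ** 2).mean(axis=0)))  # RMS per channel
    return np.array(feats)
```

In practice, features like these would be computed per armband channel and fed to a gesture classifier trained on labeled examples.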


 

Tongue-Computer Interfaces
Many patients with paralyzing injuries or medical conditions retain the use of their cranial nerves, which control the eyes, jaw, and tongue. We use infrared optical sensors embedded within a dental retainer to sense tongue gestures. Our system effectively discriminates between four simple gestures with over 90% accuracy. To demonstrate, we have had users play Tetris with their tongues.

Saponas, T.S., Kelly, D., Parviz, B., Tan, D.S. (2009). Optically Sensing Tongue Gestures for Computer Input. UIST 2009.


 

Bionic Contact Lenses
Millions of people use contact lenses daily. We are incorporating technology directly into the structure of these contacts, which provides an unparalleled opportunity to build augmented reality displays and explore new interaction paradigms, as well as to perform continuous healthcare monitoring. More info...

 

Brain-Computer Interfaces
Advances in cognitive neuroscience and brain imaging technologies provide us with the unprecedented ability to interface directly with activity in the brain. In our work, we use brain imaging to passively sense and model the user’s cognitive state as they perform their tasks. We use brain imaging to explore human cognition in the real world, evaluate interface design, and build interfaces that adapt based on cognitive state. More info...

Tan, D.S., Nijholt, A., eds. (2010). Brain-Computer Interaction: Applying our Minds to Human-Computer Interaction. Springer-Verlag: London. ISBN 978-1-84996-271-1, e-ISBN 978-1-84996-272-8.

Nijholt, A., Tan, D.S., Pfurtscheller, G., Brunner, C., del R. Millán, J., Allison, B., et al. (2008). Brain-Computer Interfacing for Intelligent Systems. IEEE Intelligent Systems.

Nijholt, A., Tan, D.S., Allison, B., del R Millan, J., Graimann, B., Moore-Jackson, M. (2008). Brain-Computer Interfaces for Human-Computer Interaction and Games. Workshop at CHI 2008.

Kapoor, A., Shenoy, P., Tan, D.S. (2008). Combining Brain Computer Interfaces with Vision for Object Categorization. IEEE CVPR.

Grimes, D., Tan, D.S., Hudson, S., Shenoy, P., Rao, R. (2008). Feasibility and Pragmatics of Classifying Working Memory Load with an Electroencephalograph. CHI 2008. (Best Paper Nomination)

Shenoy, P., Tan, D.S. (2008). Human-Aided Computing: Utilizing Implicit Human Processing to Classify Images. CHI 2008.

Lee, J.C., Tan, D.S. (2006). Using a Low-Cost Electroencephalograph for Task Classification in HCI Research. UIST 2006.


End-User Interactive Machine Learning

 

ManiMatrix: Interactive Optimization for Steering Machine Classification
ManiMatrix is a system that provides controls and visualizations that enable system builders to refine the behavior of classification systems in an intuitive manner. With ManiMatrix, users directly refine parameters of a confusion matrix via an interactive cycle of reclassification and visualization. More info...

Kapoor, A., Lee, B., Tan, D.S., Horvitz, E. (2010). Interactive Optimization for Steering Machine Classification. CHI 2010.
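One primitive behind this kind of steering is recomputing the confusion matrix as the user adjusts a decision boundary. The sketch below is a simplified binary stand-in, not ManiMatrix's actual algorithm; both function names are hypothetical. It shows one reclassify-and-inspect step: move the threshold until a cell of the confusion matrix meets a user-specified constraint.

```python
import numpy as np

def confusion_at_threshold(y_true, scores, threshold):
    """Binary confusion matrix [[TN, FP], [FN, TP]] for a score cutoff."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(scores) >= threshold).astype(int)
    tn = int(((y_true == 0) & (y_pred == 0)).sum())
    fp = int(((y_true == 0) & (y_pred == 1)).sum())
    fn = int(((y_true == 1) & (y_pred == 0)).sum())
    tp = int(((y_true == 1) & (y_pred == 1)).sum())
    return [[tn, fp], [fn, tp]]

def lowest_threshold_capping_fp(y_true, scores, max_fp):
    """Smallest threshold whose confusion matrix has at most max_fp
    false positives -- one step of the reclassify-and-inspect loop."""
    for t in sorted(set(scores)):
        if confusion_at_threshold(y_true, scores, t)[0][1] <= max_fp:
            return t
    return None
```

An interactive tool would repeat this loop each time the user clicks a matrix cell and expresses a preference, re-rendering the updated matrix.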


 

EnsembleMatrix: Interactive Visualization to Support Multiple Classifier Models
Ensemble learning algorithms combine multiple classifiers to build one that is superior to its components. EnsembleMatrix is an interactive visualization system that presents a graphical view of confusion matrices, allowing users to understand the relative merits of various classifiers and to build combination models. More info...

Talbot, J., Lee, B., Kapoor, A., Tan, D.S. (2009). EnsembleMatrix: Interactive Visualization to Support Machine Learning with Multiple Classifiers. CHI 2009.
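The core combination step can be sketched as a weighted linear blend of per-classifier class probabilities, with a confusion matrix computed for the blend. This is a minimal illustration under assumed function names, not EnsembleMatrix itself, which lets users explore these weights interactively:

```python
import numpy as np

def combine_predictions(probas, weights):
    """Weighted linear combination of per-classifier class probabilities.

    probas: list of (n_samples, n_classes) arrays, one per classifier.
    weights: one float per classifier.
    Returns an (n_samples, n_classes) array of blended probabilities."""
    stacked = np.stack(probas)                       # (n_clf, n_samples, n_classes)
    w = np.asarray(weights, dtype=float)[:, None, None]
    return (w * stacked).sum(axis=0)

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm
```

Adjusting the weights and re-rendering the resulting confusion matrix is exactly the interaction loop the visualization supports.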


 

CueFlik: Interactive Concept Learning in Image Search
While image search engines provide tags based on simple characteristics of images, such approaches are limited by our inability to articulate all possible tags. CueFlik is an image search application that allows end-users to quickly create (and reuse) their own rules for re-ranking images based on specified visual characteristics. We use CueFlik to explore general properties of end-user interactive machine learning systems. (Partially available as Find Similar Images in Bing Image Search) More info...

Amershi, S., Fogarty, J., Kapoor, A., Tan, D.S. (2011). Effective End-User Interaction with Interactive Machine Learning. AAAI 2011 Special Track on NECTAR.

Amershi, S., Fogarty, J., Kapoor, A., Tan, D.S. (2010). Examining Multiple Potential Models in End-User Interactive Concept Learning. CHI 2010.

Andre, P., Cutrell, E., Smith, G., Tan, D.S. (2009). Designing Novel Image Search Interfaces by Understanding Unique Characteristics and Usage. Interact 2009.

Amershi, S., Fogarty, J., Kapoor, A., Tan, D.S. (2009). Overview-Based Example Selection in Mixed-Initiative Interactive Concept Learning. UIST 2009.

Fogarty, J., Tan, D.S., Kapoor, A., Winder, S. (2008). CueFlik: Interactive Concept Learning in Image Search. CHI 2008.
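A minimal version of this kind of re-ranking scores every image by its distance to the user's positive examples in a visual feature space. The sketch below uses a nearest-centroid rule as a stand-in for CueFlik's learned distance metric; the function name and feature representation are assumptions for illustration:

```python
import numpy as np

def rerank(image_features, positive_examples):
    """Re-rank images by similarity to user-provided positive examples.

    image_features: (n_images, d) array of visual features (e.g. color
    histograms). positive_examples: (n_pos, d) array of features for
    images the user marked as matching the concept. Returns image
    indices sorted from most to least similar to the concept."""
    centroid = np.asarray(positive_examples, dtype=float).mean(axis=0)
    dists = np.linalg.norm(np.asarray(image_features, dtype=float) - centroid, axis=1)
    return np.argsort(dists)  # nearest-to-centroid first
```

Each time the user adds or removes an example, the ranking is recomputed, which is what makes the concept-learning loop feel interactive.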


 

CueTIP: Mixed-Initiative Handwriting Recognition
We propose a mixed-initiative approach to handwriting recognition error correction. CueTIP is a novel correction interface that takes advantage of the recognizer to continually evolve its results using the additional information from user corrections. This significantly reduces the number of actions required to reach the intended result. (Available as Windows Tablet Input Panel) More info...

Shilman, M., Tan, D.S., Simard, P. (2006). CueTIP: A Mixed-Initiative Interface for Correcting Handwriting Errors. UIST 2006.


Visualization and Interaction for Sensemaking and Task Management

 

Faceted Browsing for Scalable Information Exploration
We are working on new user interfaces that present faceted metadata to make searching and browsing various data sources simpler and more informative. We are particularly interested in approaches that work across a wide range of display sizes, data domains, and dataset sizes. More Info...

Lee, B., Smith, G., Robertson, G., Czerwinski, M., Tan, D.S. (2009). FacetLens: Exposing Trends and Relationships to Support Sensemaking within Faceted Datasets. CHI 2009.

Smith, G., Czerwinski, M., Meyers, B., Robbins, D., Robertson, G., Tan, D.S. (2006). FacetMap: A Scalable Search and Browse Visualization. IEEE Infovis & IEEE TVCG.


 

Collabio: Annotating People using Social Computation
Much information about you is latent in your social network: your friends have developed complex models of your history, opinions, personality, interests, and expertise. This information can be used for tasks such as personalization, expert matching, and friend finding. Collabio is a Facebook game that leverages this by fusing human computation with online social networks. More Info...

Bernstein, M., Tan, D.S., Smith, G., Czerwinski, M., Horvitz, E. (2010). Personalization via Friendsourcing. TOCHI 2010.

Bernstein, M., Tan, D.S., Smith, G., Czerwinski, M., Horvitz, E. (2009). Collabio: A Game for Annotating People within Social Networks. UIST 2009.


 

iSee: Interactive Scenario Explorer for Entertainment
Current online fantasy sports portals make it easy to set up competitions and track progress. iSee is a system of interactive visualizations that allows players to project potential outcomes and explore future scenarios. These capabilities make games vastly more compelling, both socially and cognitively. More Info...

Smith, G., Tan, D.S., Lee, B. (2009). iSee: Interactive Scenario Explorer for Online Tournament Games. ACE 2009.

Tan, D.S., Smith, G., Lee, B., Robertson, G.G. (2007). AdaptiviTree: Adaptive Tree Visualization for Tournament-Style Brackets. IEEE InfoVis 2007.


 

Exploring the Design Space for Adaptive Graphical User Interfaces
Researchers have presented adaptive user interfaces and discussed how each affects task performance and satisfaction. We design and implement adaptive graphical interfaces to isolate and study the aspects, such as accuracy and predictability, that make some adaptive interfaces successful and others not.

Gajos, K., Everitt, K., Tan, D.S., Czerwinski, M., Weld, D. (2008). Predictability and Accuracy in Adaptive User Interfaces. CHI 2008.

Gajos, K., Czerwinski, M., Tan, D.S., Weld, D. (2006). Exploring the Design Space for Adaptive Graphical User Interfaces. Advanced Visual Interfaces 2006.


 

InkSeine: In Situ Search for Active Note Taking
InkSeine is a prototype ink application designed from the ground up to have a user interface uniquely tailored to pen input. Some people have described InkSeine as "Windows Journal on steroids." But InkSeine goes well beyond Windows Journal, particularly in its features to search from ink and to easily drag hyperlinks for documents and web pages into your notes. (Available for download here)...

Hinckley, K., Zhao, S., Sarin, R., Baudisch, P., Cutrell, E., Shilman, M., Tan, D.S. (2007). InkSeine: In Situ Search for Active Note Taking. CHI 2007.


 

An Evaluation of Extended Validation and Picture-in-Picture Phishing Attacks
In this study of browser antiphishing defenses, users classified web sites as fraudulent or legitimate. We found that picture-in-picture attacks, which display a fake browser window, were as effective as homograph attacks. Extended validation did not help users identify either attack. Additionally, reading help files made users more likely to classify both real and fake web sites as legitimate when the phishing warning did not appear.

Jackson, C., Simon, D., Tan, D.S., Barth, A. (2007). An Evaluation of Extended Validation and Picture-in-Picture Phishing Attacks. Usable Security 2007.


 

Improving Multitasking Efficiency with Peripheral Information Design
Clipping Lists and Change Borders are peripheral information designs that present glanceable views of background tasks, helping users monitor secondary activities and switch between tasks more efficiently.

Matthews, T., Czerwinski, M., Robertson, G.G., Tan, D.S. (2006). Clipping Lists and Change Borders: Improving Multitasking Efficiency with Peripheral Information Design. CHI 2006.


 

Tumble! Splat! Helping Users Access and Manipulate Layers in 2D Drawings
Tumbler and Splatter facilitate easy access to occluded content in 2D drawings. Tumbler allows users to spread layers of occluded content into a 3D stack by right-click-dragging, then select and rearrange layers. Splatter allows users to spread occluded objects out in 2D. (Available as Dynamic Reordering in Office 2010 for Mac) More Info...

Ramos, G., Robertson, G., Baudisch, P., Czerwinski, M., Tan, D.S., Robbins, D., Hinckley, K., Agrawala, M. (2006). Tumble! Splat! Helping Users Access and Manipulate Occluded Content in 2D Drawings. AVI 2006.


 

Phosphor: Explaining Interface Transitions using Afterglow Effects
Phosphor shows the outcome of interface transitions instantly while explaining the change in retrospect. Users who already understood the transition can continue interacting without delay, while those who are inexperienced can take time to view the effects at their own pace. More Info...

Baudisch, P., Tan, D., Collomb, M., Robbins, D., Hinckley, K., Agrawala, M., Zhao, S., Ramos, G. (2006). Phosphor: Explaining Transitions in the User Interface Using Afterglow Effects. UIST 2006.


 

Panoramic Viewfinder: Real-Time Previews for Better Panoramic Photography
Panoramic Viewfinder is an interactive system that offers a real-time preview of the panorama while shooting pictures. As the user sweeps the camera across the scene, each photo is interactively added to the preview. By providing a preview of the cropped panorama and making ghosting and stitching failures apparent, the system allows users to immediately correct and optimally complete their panoramas. (Available as Interactive Panorama Capture in Microsoft Photosynth) More Info...

Baudisch, P., Tan, D.S., Steedly, D., Rudolph, E., Uyttendaele, M., Pal, C., Szeliski, R. (2006). An Exploration of User Interface Designs for Real-time Panoramic Photography. Australasian Journal of Information Systems.

Baudisch, P., Tan, D., Steedly, D., Rudolph, E., Uyttendaele, M., Pal, C., Szeliski, R. (2005). Panoramic Viewfinder: providing a real-time preview to help users avoid flaws in panoramic pictures. OZCHI 2005.

Baudisch, P., Tan, D., Steedly, D., Rudolph, E., Uyttendaele, M., Pal, C., Szeliski, R. (2005). Panoramic Viewfinder: shooting panoramic pictures with the help of a real-time preview. Demo at UIST 2005.


Multiple Device Computing Environments

 

IMPROMPTU: Supporting Collaboration in Multiple Display Environments
IMPROMPTU allows users to share task information across displays via off-the-shelf applications, to jointly interact with information for focused problem solving and to place information on shared displays for discussion and reflection. Our framework also includes a lightweight interface for performing these and related actions.

Biehl, J., Baker, W., Bailey, B., Tan, D.S., Inkpen, K., Czerwinski, M. (2008). IMPROMPTU: A New Interaction Framework for Supporting Collaboration in Multiple Display Environments and its Field Evaluation for Co-located Software Development. CHI 2008.


 

WinCuts: Manipulating and Sharing Arbitrary Window Regions
WinCuts allows users to replicate arbitrary regions of existing windows into independent windows. Each WinCut is a live view of some region of a source window with which users can interact. By sharing WinCuts between multiple machines, users can work across display space and input devices.

Tan, D.S., Gergle, D., Czerwinski, M. (2005). A Job-Shop Scheduling Task for Evaluating Coordination during Computer Supported Collaboration. Journal of Personal and Ubiquitous Computing.

Tan, D.S., Meyers, B., Czerwinski, M. (2004). WinCuts: Manipulating Arbitrary Window Regions for More Effective Use of Screen Space. Short paper at CHI 2004.


Large Display User Experiences

 

Physical Size Affects Spatial Task Performance
We compare the performance of users on large wall-sized displays to that of users on smaller displays viewed at equivalent visual angles. Our experiments show that large displays bias users toward egocentric strategies that improve performance on spatial orientation tasks, including 3D virtual navigation.

Robertson, G., Czerwinski, M., Baudisch, P., Meyers, B., Robbins, D., Smith, G., Tan, D.S. (2005). Large Display User Experience. IEEE CG&A.

Tan, D.S., Gergle, D., Scupelli, P., Pausch, R. (2005). Physically Large Displays Improve Performance on Spatial Tasks. ACM TOCHI.

Tan, D.S., Gergle, D., Scupelli, P., Pausch, R. (2004). Physically Large Displays Improve Path Integration in 3D Virtual Navigation Tasks. CHI 2004.

Tan, D.S., Gergle, D., Scupelli, P.G., Pausch, R. (2003). With Similar Visual Angles, Larger Displays Improve Performance on Spatial Tasks. CHI 2003.

Tan, D.S. (2003). Exploiting the Cognitive and Social Benefits of Physically Large Displays. Carnegie Mellon University Dissertation Proposal.

Tan, D.S. (2004). Exploiting the Cognitive and Social Benefits of Physically Large Displays. Carnegie Mellon University Dissertation.


 

Physical Size Affects Social Perception of Information
We show, using the novel application of an implicit memory paradigm, that even with equivalent visual angles and legibility, visitors are still more likely to glance over a user’s shoulder and read information on a large wall-projected display than on a smaller desktop monitor. We also present the spy-resistant keyboard, an interaction technique that allows users to type sensitive information safely on large public displays.

Tan, D.S., Keyani, P., Czerwinski, M. (2005). Spy-Resistant Keyboard: More Secure Password Entry on Public Touch Screen Displays. OZCHI 2005.

Tan, D.S., Czerwinski, M. (2003). Information Voyeurism: Social Impact of Physically Large Displays on Information Privacy. Short paper at CHI 2003.


 

Wider Fields of View Close the Gender Gap in 3D Navigation
Existing reports suggest that males significantly outperform females in navigating 3D virtual environments. We show that providing wider fields of view on large displays not only increases the performance of all users on average, but also benefits females to such a degree that they perform as well as males. These benefits stem from the richer optical flow cues offered by displays with wider fields of view.

Tan, D.S., Czerwinski, M., Robertson, G.G. (2005). Large Displays Enhance Optical Flow Cues and Narrow the Gender Gap in 3D Virtual Navigation. Human Factors Journal.

Tan, D.S., Czerwinski, M., Robertson, G.G. (2003). Women Go With the (Optical) Flow. CHI 2003.

Czerwinski, M., Tan, D.S., Robertson, G.G. (2002). Women Take a Wider View. CHI 2002.

Tan, D.S., Robertson, G.G., Czerwinski, M. (2001). Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting. CHI 2001.


 

Pragmatics of using Large Displays
We explore the pragmatics of distributing large displays within our environments. We show that separation of divided attention tasks across the visual field is detrimental to task performance, but only when coupled with an offset in depth. In separate work, we present a technique that eliminates the blinding light that front-projected large display systems emit.

Tan, D.S., Czerwinski, M. (2003). Effects of Visual Separation and Physical Discontinuities when Distributing Information across Multiple Displays. OZCHI 2003.

Tan, D.S., Czerwinski, M. (2003). Effects of Visual Separation and Physical Discontinuities when Distributing Information across Multiple Displays. Short paper at INTERACT 2003.

Tan, D.S., Pausch, R. (2002). Pre-emptive Shadows: Eliminating the Blinding Light from Projectors. Interactive poster at CHI 2002.


Augmenting Human Memory

 

Infocockpits: Interfaces that Improve Human Memory
We utilize well-understood psychology principles and show the benefits of our two basic strategies: (1) Ambient context displays (both visual and auditory), to engage human memory for place; (2) Multiple spatial displays surrounding the user, to engage human memory for location.

Tan, D.S., Czerwinski, M., Bell, G., Berry, E., Gemmell, J., Hodges, S., et al. (2007). Supporting Human Memory with a Personal Digital Lifetime Store. In Personal Information Management: Challenges and Opportunities.

Brush, A.J., Meyers, B., Tan, D.S., Czerwinski, M. (2007). Understanding Memory Triggers for Task Tracking. Extended Abstracts CHI 2007.

Tan, D.S., Stefanucci, J.K., Proffitt, D.R., Pausch, R. (2002). Kinesthesis Aids Human Memory. Short paper at CHI 2002.

Tan, D.S., Stefanucci, J.K., Proffitt, D.R., Pausch, R. (2001). The Infocockpit: Providing Location and Place to Aid Human Memory. Workshop on Perceptive User Interfaces 2001.


Augmented and Virtual Reality

 

Tiles: A Tangible Augmented Reality Interface
The Tiles system seamlessly blends virtual and physical objects to create a workspace that combines the power and flexibility of computing environments with the comfort and familiarity of the traditional workplace. We use this system to support rapid prototyping and collaborative evaluation of aircraft instrument panels.

Poupyrev, I., Tan, D.S., Billinghurst, M., Kato, H., Regenbrecht, H., Tetsutani, N. (2002). Developing a Generic Augmented-Reality Interface. IEEE Computer 35(3).

Poupyrev, I., Tan, D.S., Billinghurst, M., Kato, H., Regenbrecht, H., Tetsutani, N. (2001). Tiles: A Mixed Reality Authoring Interface. INTERACT 2001.

Tan, D.S., Poupyrev, I., Billinghurst, M., Kato, H., Regenbrecht, H., Tetsutani, N. (2001). On-demand, In-place Help for Augmented Reality Environments. Presented as poster at Ubicomp 2001.

Tan, D.S., Poupyrev, I., Billinghurst, M., Kato, H., Regenbrecht, H., Tetsutani, N. (2000). The Best of Two Worlds: Merging Virtual and Real for Face-to-Face Collaboration. Presented at the Shared Environments to Support Face-To-Face Collaboration Workshop, CSCW 2000.

Poupyrev, I., Billinghurst, M., Kato, H., Tan, D.S., Regenbrecht, H. (2000). A Tangible Augmented Reality Interface for Prototyping Aircraft Instrument Panels. Public Demonstration at ISAR 2000. 


 

Alice: Interactive 3D Graphics Programming
Alice is a rapid prototyping system that makes it easy to create interactive virtual worlds for telling stories, playing games, or sharing media over the web. Early versions of Alice required users to script code in an object-oriented, interpreted language (Python) that immediately displayed the effects of changes. Today, Alice uses a drag-and-drop interface and is used as a tool for teaching introductory computing. (Available for download here).

Autonomous Robotic Path Planning

 

Path Planning Algorithms using Framed Quadtrees and Octrees
Motion planning is central to the fields of robotics, spatial planning, and automated design. To find shortest collision-free paths through obstacle-scattered environments, we utilize framed-quadtree (2D) and framed-octree (3D) data structures to propagate a circular path planning wave through the environment.

Tan, D. S., Herro, J. T., Szczerba, R. J. (1996). Simulation of Shortest Path Planning Algorithms Based on the Framed-octree Data Structure. University of Notre Dame CSE Technical Report, 96-36.

Tan, D. S., Szczerba, R. J., Uhran, J. J., Jr. (1996). Developing Graphical User Interfaces for Robotic Motion Planning Simulations. Paper presented at the Eighth Annual Butler University Undergraduate Research Conference.

Tan, D. S., Herro, J. T., Szczerba, R. J. (1995). Simulation of Euclidean Shortest Path Planning Algorithms Based on the Framed-quadtree Data Structure. University of Notre Dame CSE Technical Report, 95-26.
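On a uniform grid, the wave propagation reduces to breadth-first search from the goal, followed by descending the distance field from the start. The sketch below shows this simplified grid version; the framed-quadtree and framed-octree structures in the papers above add border cells to each leaf to recover near-Euclidean paths, which is omitted here for brevity.

```python
from collections import deque

def wavefront(grid, start, goal):
    """Propagate a path-planning wave (BFS) from the goal across free
    cells, then follow strictly decreasing distances from the start.

    grid[r][c] == 1 marks an obstacle; start and goal are (row, col)
    tuples on free cells. Returns the cells on a shortest 4-connected
    path, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    q = deque([goal])
    while q:                                   # wave expansion phase
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    if start not in dist:
        return None
    path = [start]                             # descent phase
    while path[-1] != goal:
        r, c = path[-1]
        # step to the neighbor closest to the goal
        path.append(min(((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)),
                        key=lambda p: dist.get(p, float("inf"))))
    return path
```

Framed quadtrees improve on this uniform grid by merging large free regions into single leaves while keeping fine-grained border cells, so the wave can cross open space in large steps without sacrificing path quality.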