The ability to gauge distances is crucial for interacting with the world. Mislocalizing objects can have dire consequences, such as collisions when driving, stumbling over trip hazards, falls, and sports injuries. These outcomes place a tremendous burden on individuals and cost our nation billions of health care dollars each year. Factors such as normal aging, neurological and visual disorders, high-workload situations, and fast-moving environments can all limit the time and attention available to pick up important distance cues.

Our work uses motion capture to study the factors that contribute to normal localization of objects and the ways these processes can break down. For example, we use motion tracking to record eye movements as people look around a room and attempt to judge distances to objects. This allows us to study how attention and eye movements contribute to object localization during brief glimpses of the environment. We also use motion tracking to sense people's head and body movements as they maneuver through virtual environments. Virtual environment technology lets us precisely control the kinds of visual information available when viewing a scene.
By determining how perceptual and cognitive factors govern the human ability to localize objects in the environment, our work promises to improve efforts to reduce the substantial personal and health care costs of injury across a broad range of populations.