Medical / Visualization

Volumetric datasets are used in many surgical applications. Displaying the 3D information contained in these datasets effectively on 2D screens is challenging because of the loss of information that occurs when projecting from 3D to 2D space. Problems such as opaque structures occluding each other and the difficulty of perceiving the depth of a given pixel complicate these volume visualizations. Applying effects such as transparency and transfer functions to reveal particular anatomical structures makes the mental mapping between the visualization and the patient difficult.

We have used the focus+context visualization paradigm to address these problems. Users can explore the datasets by applying different transfer functions in different parts of the rendering. In this way, internal structures can be displayed by exploring multiple co-registered datasets in a single coherent view. A novel rendering method using depth images minimizes the performance impact of this visualization approach. We have also extended this idea with volumetric brushes that perform volume editing tasks, allowing users to accumulate selections of arbitrarily shaped regions and combine multiple datasets in a single visualization.
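As a rough illustration of the per-region transfer function idea, the sketch below composites one ray front-to-back, switching transfer functions between the focus region and the surrounding context. The function names and ray-marching structure are our own simplification, not the actual depth-image-based renderer:

```python
import numpy as np

def render_pixel(ray_samples, focus_mask, tf_focus, tf_context):
    """Composite one ray front-to-back, switching transfer functions
    between the focus region and the surrounding context.

    ray_samples: 1D array of scalar values sampled along the ray
    focus_mask:  boolean array, True where the sample lies inside the lens
    tf_focus / tf_context: functions mapping a scalar to (rgb, alpha)
    """
    color = np.zeros(3)
    alpha = 0.0
    for s, in_focus in zip(ray_samples, focus_mask):
        rgb, a = (tf_focus if in_focus else tf_context)(s)
        color += (1.0 - alpha) * a * np.asarray(rgb)
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:        # early ray termination
            break
    return color, alpha
```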

[More info and video]
Participants: Can Kirmizibayrak, James Hahn

 

Traditional tactile interaction methods such as trackers or the mouse can cause problems when used in the operating room. Trackers can be expensive, difficult to set up and calibrate, and prone to errors due to line-of-sight requirements or electromagnetic interference. The mouse is ubiquitous, but it is a 2D interface and can be awkward for the 3D interaction tasks necessary for volume visualization. The common problem with all of these interfaces is that they require sterilization, a process that is time-consuming, costly, and can potentially cause complications.

The use of gestures can replace these interaction methods and eliminate these problems. Furthermore, intuitive methods like these can decrease the training time needed to perform interaction tasks important for medical volume visualization. We are developing such methods for common tasks such as data exploration, editing, and rotation. Our user studies showed that, compared with the mouse, novice users with a gesture-based interface can quickly and effectively perform tasks such as matching rotations and finding internal structures in a volumetric dataset.

[More info and video]
Participants: Can Kirmizibayrak, Nadezhda Radeva, James Hahn

 

A logical extension of current conventional videographic analysis of swimming is three-dimensional (3D) computer animation and visualization. Recent advances in computer graphics now make it possible to construct realistic, 3D animated computer models of swimmers, which can then be used by coaches and athletes for a detailed analysis of swimming technique. This kind of approach has been used in a variety of domains, including biomechanics, to analyze human gait.

Such an approach would be able to answer questions that 2D video or live action cannot. For example, precisely how is the motion of one swimmer different from another's? How do the various body parts move during a stroke? 3D animations could also become an invaluable tool for training athletes. Simple models of fluid forces could be incorporated into these animated models and used for rapid assessment of various strokes. Finally, 3D animations can also provide body motion data that can be fed into the CFD analysis described above.

Preliminary proof-of-concept work in this direction has already been done by the group using body-scan and videographic data provided by USA Swimming; the adjacent figure shows a multi-exposure view of a 3D computer model captured from video of a real swimmer executing a dolphin kick. This model can be made to move precisely like the athlete in question. The model can then be measured and visualized to give a variety of information about the swimmer that would be difficult to obtain using conventional video analysis. For instance, the red line in the figure traces the motion of the toe, and the animation can be viewed from any direction, as shown in the lower figures, in order to examine the various stages of the stroke in detail. Currently, several swimming motions such as the backstroke are being added to the motion library, and the visualization application has several tools for comparing and analyzing different styles of swimming motion.

Participants: Can Kirmizibayrak, James Hahn
Project Poster: (3072x2034, TIFF file) | (1024x768, JPEG File)
(.AVI 51.8M) | (.WMV 21.3M)

 

We have started a new line of research on the use of medical informatics in a number of applications. In one of these applications, we are interested in creating a prototype system capable of detecting biological, chemical, nuclear, and radiological terrorism attacks in real time, as well as routinely supporting the local public health community with information regarding natural outbreaks of contagious and traumatic diseases. This can be accomplished through syndromic surveillance of the Emergency Medical Dispatch (EMD) interrogatory data collected by the computer-aided dispatch (CAD) systems already in place at 911 centers. A set of tools will allow further analysis to determine possible causes (e.g., the type of biological agent), the mechanism of transmission (for example, based on current atmospheric conditions and simulations of the dispersion of the plume), and possible actions (e.g., evacuation of a population center in danger). The system is intended to be used by local, regional, and national public health officials, giving them advance information about possible man-made or naturally occurring events and helping them decide on possible actions to take.

 

The researchers in the GW H21C Lab focus on providing a seamless integration of technology within a simulated living space. Through a computer-vision-based game, home residents can engage in personal fitness in a variety of exciting environments. With special modifications for physical limitations, senior residents with disabilities can undergo physical rehabilitation in their own living space. Using currently available sensors and other technologies connected to the Internet, the home will become a context-aware environment, able to identify who is present and react accordingly. This will enable home residents to communicate with ease, to receive music and video anywhere, to have remote access to home appliances and environmental controls, to monitor the home remotely, and to be protected against intrusion, fire, and other hazards.

Also visit Home of the 21st Century

 

The system in this project provides GIS (Geospatial Information System) data analysts with an effective way to visualize multidimensional data and visually filter information from it. In GISs, most data are tied to location information and are shown to analysts on top of a location layer such as a map. It is therefore necessary to help analysts stay focused on the spatially distributed data while still manipulating the underlying database intuitively. The system we propose avoids exposing a complex database schema and lets analysts build toward a final decision through repeated interaction with the data. Extending the idea of the Magic Lens(TM) interface, we let analysts bring a movable visualization interface to the data, as opposed to repeatedly loading the data into a visualization tool and analyzing it piece by piece. The system thus allows analysts to keep their hypotheses in mind while applying them to the data, so they can reach a higher level of knowledge and, ultimately, a final decision.
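A minimal sketch of the movable-lens idea follows. The Lens class and record layout are hypothetical stand-ins; the actual system operates on GIS layers rather than Python dicts:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Lens:
    """A movable, rectangular Magic Lens-style filter (illustrative only)."""
    x: float
    y: float
    w: float
    h: float
    predicate: Callable        # the analyst's current hypothesis, as a test

    def contains(self, px, py):
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

    def apply(self, records):
        """Return only the records under the lens that satisfy the hypothesis."""
        return [r for r in records if self.contains(r["x"], r["y"]) and self.predicate(r)]

# Drag the lens over the map; only the data beneath it is filtered and redrawn.
lens = Lens(x=10, y=20, w=50, h=50, predicate=lambda r: r["severity"] > 3)
data = [{"x": 15, "y": 30, "severity": 5}, {"x": 90, "y": 90, "severity": 9}]
print(lens.apply(data))        # only the first record lies under the lens
```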

Participants: Sang-Joon Lee, James K. Hahn

 

The goal of this project is a computer-based educational system that trains medical personnel in the performance of a variety of needle-stick procedures. The system is designed to cover two syringe procedures: subcutaneous insertion and intravenous insertion. For each procedure, the system consists of a multimedia training component and a virtual reality (VR) simulation in which the student performs the procedure. The tutorial subsystem provides a lesson that presents information through multimedia content and user-friendly widgets. At the end of the module, the student can run the simulation module for practice. The VR simulation incorporates a visual display that presents a realistic view of the procedure as it is performed. During the simulation, the user also feels haptic feedback from various virtual patients through the computer mouse and a PHANToM device, which supplies 6-DOF input manipulation and 3-DOF haptic feedback. (Funded by Casde Corp. and the Army Research Institute)
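For illustration, a common way to render such contact forces is a penalty-based spring model. The sketch below is a generic example, not the simulator's actual tissue model; the stiffness value and function names are assumptions:

```python
import numpy as np

def needle_contact_force(tip_pos, surface_point, surface_normal, k=800.0):
    """Penalty-based haptic force sketch: push back in proportion to penetration.

    tip_pos, surface_point, surface_normal: 3-vectors (numpy arrays)
    k: stiffness in N/m (illustrative value, not a calibrated tissue model)
    """
    penetration = np.dot(surface_point - tip_pos, surface_normal)
    if penetration <= 0.0:          # tip is above the skin surface: no force
        return np.zeros(3)
    return k * penetration * surface_normal
```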

Participants: Dongho Kim, Sang-Joon Lee, James K. Hahn

Medical simulations are used for a number of purposes. One of the most promising is the development of surgical simulators. The accepted paradigm for teaching in medicine has been "see one, do one, teach one." Although this methodology has served medicine well, there is a growing interest in the use of computer-based surgical simulators to teach complex surgical procedures. This has been prompted by the prevalence of "minimally invasive" procedures, which use imaging techniques (MRI, CT, ultrasound, laparoscopes) to guide instruments through a small opening in the patient to perform certain surgical procedures.

The benefit is the reduced amount of trauma to the patient. However, the procedures have become extremely complex, making effective training critical. Personnel from SEAS and SMHS have been involved in a number of research projects that bring together computer scientists, electrical engineers, mechanical engineers, and physicians to develop virtual reality simulators that allow physicians to see as well as feel the simulated procedure. This type of training, although of limited use currently, has great potential to revolutionize medicine in much the same way that flight simulators have revolutionized pilot training.

Participants: James K. Hahn, Roger Kaufman, Raymond Walsh, Thurston Carleton, Dongho Kim, Sang-Joon Lee

Vocal cord paralysis and paresis are debilitating conditions leading to difficulty with voice production. Medialization laryngoplasty is a surgical procedure designed to restore the voice by implanting a uniquely configured structural support lateral to the paretic vocal fold through a window cut in the thyroid cartilage of the larynx. Currently, the surgeon relies on experience and intuition to place the implant in the desired location, so placement is subject to a significant level of uncertainty. Window-placement errors of up to 5 mm in the vertical dimension are common in patients admitted for revision surgery.

The failure rate of this procedure is as high as 24%, even for experienced surgeons. An intraoperative image-guided system will help the surgeon accurately place the implant by superimposing the patient's CT data on the patient's actual larynx during surgery. One of the fundamental challenges in our system is to accurately register the preoperative 3D CT data to the intraoperative 3D surfaces of the patient. Our proposed image-guided system will use anatomical and geometric landmarks and points to register the intraoperative 3D surface of the thyroid cartilage to the preoperative 3D radiological data. The proposed approach has three phases. First, the laryngeal cartilage surface is segmented out of the preoperative 3D CT data. Second, the surface of the laryngeal cartilage exposed during surgery is reconstructed intraoperatively using stereo vision and structured-light surface scanning. Third, the two geometries are registered using ICP-based shape matching. The proposed approach has several advantages over alternatives: the combination of stereo vision and structured-light surface scanning can track the fiducial markers, reconstruct the surface of the laryngeal cartilage, and match the preoperative and intraoperative surfaces for registration. This computer-vision-based approach can be applied to delicate areas like the laryngeal cartilage with no danger of causing physical damage.
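For readers unfamiliar with ICP, the sketch below shows one iteration of the standard algorithm: closest-point matching followed by an SVD-based rigid-transform solve. This is a textbook formulation, not our system's implementation:

```python
import numpy as np

def icp_step(source, target):
    """One ICP iteration: match closest points, then solve for the best
    rigid transform with the SVD-based (Kabsch/Horn) method.

    source, target: (N, 3) and (M, 3) arrays of surface points
    """
    # 1. Closest-point correspondences (brute force, for clarity)
    d = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    matched = target[d.argmin(axis=1)]

    # 2. Best-fit rotation and translation between the paired point sets
    mu_s, mu_t = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return (source @ R.T) + t, R, t
```

Iterating this step until the alignment error stops decreasing yields the registration between the scanned cartilage surface and the CT-derived surface.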

Participants: Steven Bielamowicz, M.D., James Hahn, Ph.D., Rajat Mittal, Ph.D., Raymond Walsh, Ph.D.
Image Guided Medialization Laryngoplasty (20.6M)

 


The analysis of myocardial function is important for the diagnosis of heart disease, the planning of therapy, and the understanding of the effect of cardiac drugs on regional function. Many cardiac disorders result in regionally altered myocardial mechanics. Traditionally, abnormal contractile function of the ventricles is determined by measuring wall thickening using MRI, echocardiography, and SPECT. Because abnormalities in myocardial strain are detectable before the first symptoms of a heart attack, we want to establish a finite element mesh based on time-series imaging data.

This finite element mesh enables us to calculate the deformation gradient of the left ventricle and all the quantities derivable from it, such as volume and the strain tensor. Strain vectors at each vertex are visualized so that users can understand the major direction and magnitude of strain. Since many individual vectors must be shown over a time period, it can be hard for users to discern a pattern of major direction and magnitude if the interrelation between the vector data is not shown carefully. For example, drawing changing arrows representing the vector information at each vertex over time is one option, but because the deformation is animated at the same time, the result can be visually inconsistent over time and produce a cluttered visualization. Streamlines interconnecting the individual vertices can show where the major strains are and what their directions are. Scalar information can be encoded as color and mapped onto the surface.
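As background, the quantities involved can be computed per element roughly as follows. This is a generic linear-tetrahedron sketch; the element type and function names are our assumptions, not the pipeline's actual code:

```python
import numpy as np

def deformation_gradient(X, x):
    """Deformation gradient F for one tetrahedral element.

    X, x: (4, 3) arrays of rest and deformed vertex positions
    (assumes a non-degenerate tetrahedron).
    """
    Dm = (X[1:] - X[0]).T          # rest-shape edge matrix (3x3)
    Ds = (x[1:] - x[0]).T          # deformed edge matrix (3x3)
    return Ds @ np.linalg.inv(Dm)

def green_strain(F):
    """Green-Lagrange strain tensor E = 1/2 (F^T F - I)."""
    return 0.5 * (F.T @ F - np.eye(3))
```

The principal direction and magnitude of strain at a point are then the dominant eigenvector and eigenvalue of E, which is what the arrows and streamlines visualize.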

Image-guided technology provides surgeons with intra-operative anatomical images of patients, which can help surgeons decide the best location for incisions, the optimal path to the target area, and the critical structures along the path. Because of its minimally invasive nature and the accuracy it requires, image-guided technology has been widely applied to brain tumor biopsies, spinal surgery, breast cancer biopsies, and other surgical applications. We have applied ultrasound guidance to cryosurgery of the prostate. We are also in the process of developing image-guided techniques for medialization laryngoplasty.

The biggest obstacles are (1) registering the geometry of the patient during surgery to the pre-operative 3D CT data, (2) presenting this additional visualization to the surgeon with minimal intrusion or modification to current surgical practice, and (3) implementing the system with only a moderate increase in the equipment required.

 

This project simulates the interaction of a needle with skin using deformable surfaces and particle systems. We are also simulating human hair and its movement using a hybrid model.

Participants: Yi Wu, James K. Hahn

In this research, we propose a new robust segmentation method for blood vessels from volume data. The proposed method extracts a blood vessel with detailed geometry, such as bifurcations and changes in radius. In addition, it can generate an abstract tree data structure representing the extracted blood vessel. Thus, the resulting data is very useful for various geometric operations and visualization in many 3D medical applications, including surgical simulation.
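The abstract tree structure might look roughly like the following sketch; the fields and methods shown are illustrative assumptions, not the actual data structure:

```python
import math
from dataclasses import dataclass, field

@dataclass
class VesselSegment:
    """One branch of an extracted vessel tree (illustrative structure).

    centerline: list of (x, y, z) points along the segment axis
    radii:      estimated vessel radius at each centerline point
    children:   segments that branch off at this segment's distal end
    """
    centerline: list
    radii: list
    children: list = field(default_factory=list)

    def total_length(self):
        """Sum of straight-line distances along this segment and its children."""
        dist = sum(math.dist(a, b)
                   for a, b in zip(self.centerline, self.centerline[1:]))
        return dist + sum(c.total_length() for c in self.children)
```

Geometric queries such as bifurcation counts, branch lengths, or radius profiles become simple tree traversals on this representation.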

Participants: Kwang-Man Oh, James K. Hahn

CT or MR images acquired from the patient are used to automatically segment out the vasculature which is then stored in a hierarchical blood vessel data structure. This information is used by the simulation module to perform a dynamic simulation of a catheter moving through the vasculature.

A proprietary force-feedback device has been developed by the mechanical engineering department. This device relays user movements to the simulation, which computes the catheter interaction with the vasculature, and returns the proper forces and torques to the device, which then outputs them to the user.

The rendering module accesses the blood vessel data structure, the catheter data structure, as well as an environment map to produce a fluoroscopic representation on a display screen. The entire system operates at interactive frame rates.
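Putting the modules together, the overall loop is conceptually as follows. Every name below is hypothetical, intended only to show how the pieces connect; the real system's interfaces differ:

```python
def simulation_loop(vessel_tree, catheter, haptic_device, renderer, dt=1/60):
    """Illustrative main loop tying the modules together (names hypothetical)."""
    while renderer.window_open():
        tip_motion = haptic_device.read_input()        # user push/twist
        catheter.advance(tip_motion, dt)               # dynamic simulation step
        forces = catheter.collide_with(vessel_tree)    # wall-contact response
        haptic_device.write_forces(forces)             # force/torque feedback
        renderer.draw_fluoroscopic_view(vessel_tree, catheter)
```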

The catheter simulator has been used in the creation of a tutorial and training simulation of Inferior Vena Cava (IVC) filter placement. The filter is guided into place using a catheter inserted through an incision at a remote location. The procedure is monitored through a fluoroscopic view of the patient presented on a screen in front of the surgeon.

Participants: James K. Hahn, Roger Kaufman, Raymond Walsh, Adam Winick, Thurston Carleton, Nadia al-Ghreimil

We are designing an object-oriented library of tools and techniques for scientific data visualization. We have defined all common scientific objects as smart objects (poly-lines, surfaces, volumes, etc.) and provide a design that allows Functional Composition of Techniques.
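The flavor of functional composition we have in mind can be sketched as follows. The Technique, Threshold, and Stats classes are illustrative stand-ins, not the library's actual smart objects:

```python
class Technique:
    """A visualization technique maps one scientific object to another."""
    def apply(self, obj):
        raise NotImplementedError

    def then(self, other):
        """Functional composition: apply self first, then other."""
        first, second = self, other
        composed = Technique()
        composed.apply = lambda obj: second.apply(first.apply(obj))
        return composed

class Threshold(Technique):
    """Keep only samples above a level (volume -> volume)."""
    def __init__(self, level):
        self.level = level
    def apply(self, volume):
        return [v for v in volume if v >= self.level]

class Stats(Technique):
    """Summarize a dataset as (min, max, mean) for display."""
    def apply(self, volume):
        return (min(volume), max(volume), sum(volume) / len(volume))

pipeline = Threshold(0.5).then(Stats())
print(pipeline.apply([0.1, 0.6, 0.9, 0.4, 0.7]))   # (0.6, 0.9, 0.733...)
```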

Participants: Jean Favre, James Hahn

Motion Control

With the increased use of motion-captured data for character animation, the need to edit such data arises. Participants in this research area seek new ways to effectively combine partial motions (e.g., throwing) with full-body motions (e.g., walking). The resulting motion has to look natural and retain the original characteristics of the input motions. Since motion capture data is used, it must be converted to joint angles usable for animating an articulated figure. In the figure below, the top row shows a newly generated sequence of frames for a person throwing while walking. It is generated by combining the person's walking motion (middle row) with a throwing motion (lower row), where the throwing motion is actually extracted from another person's throwing-while-walking motion.
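At its core, splicing a partial motion onto a full-body motion amounts to overwriting a subset of joint channels per frame, roughly as sketched below. The joint names and frame representation are simplified assumptions; the actual method also handles timing and the extraction of the partial motion:

```python
UPPER_BODY = {"spine", "neck", "l_shoulder", "l_elbow", "r_shoulder", "r_elbow"}

def splice(full_body_frame, partial_frame):
    """Combine a partial motion (e.g., throwing) with a full-body motion
    (e.g., walking): upper-body joints come from the partial clip, the
    rest from the base clip. Frames are dicts of joint -> joint angles."""
    out = dict(full_body_frame)
    for joint in UPPER_BODY.intersection(partial_frame):
        out[joint] = partial_frame[joint]
    return out
```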

Participants: Nadia Al-Ghreimil, James K. Hahn

This research presents procedural methods that solve problems in physically based modeling; we call them physically based procedural methods. Unlike previous procedural methods, they formulate equations of motion based mainly on physical quantities and their physical meanings. The resulting motions therefore correspond better with physical properties, and the approach provides an easy way to control both motion and sound.

Participants: Jong Won Lee, Dongho Kim, James K. Hahn

We have recently been applying the AI technique known as Genetic Programming (GP) to control the motion of articulated figures. This allows the system to automatically generate life-like motion for jointed figures. The human animator must only provide a fitness function that rates the motions the system generates.
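The sketch below conveys what such a fitness function might look like. It is a generic example with hypothetical controller and figure interfaces, not the fitness criteria used in the published work:

```python
def fitness(controller, figure, goal, sim_time=2.0, dt=0.01):
    """Score one evolved controller: simulate the articulated figure under
    its control and reward ending near the goal with low joint effort.
    (Illustrative only; the actual criteria vary per animation task.)"""
    effort = 0.0
    t = 0.0
    while t < sim_time:
        torques = controller.evaluate(figure.state(), t)   # GP tree output
        figure.step(torques, dt)                           # forward dynamics
        effort += sum(abs(u) for u in torques) * dt
        t += dt
    distance = figure.distance_to(goal)
    return -(distance + 0.1 * effort)    # higher is better
```

GP then evolves a population of controller trees, keeping and recombining those with the highest scores.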



Published paper:

Gritz, Larry and James K. Hahn. "Genetic Programming for Articulated Figure Motion", Journal of Visualization and Computer Animation, vol. 6: 129-142 (1995).

Participants: Larry Gritz, James Hahn
Genetic Programming for Articulated Figure Motion (4.25M)

 

Articulated figure motion remains a challenging area of computer animation. It is difficult to create realistic motion for animated characters using conventional approaches based on traditional animation. These approaches largely employ a process known as "keyframing," in which individual poses are constructed at specific points in time; interpolation is then performed to obtain continuous character motion ("in-betweening"). This project explores alternative approaches to the creation of character motion, including the use of inverse kinematics, adaptation to the character's environment, and dynamic simulation. Another approach tailors prerecorded motion capture sequences to the requirements of a new motion, so that the result retains the expressive qualities of the data without repeating a particular motion.
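As a reminder of the baseline these alternatives improve upon, in-betweening two keyframes is, in its simplest form, a per-joint interpolation. Real systems use splines or quaternion interpolation rather than this naive linear version:

```python
def interpolate_pose(key_a, key_b, t):
    """Linear 'in-betweening' between two keyframed poses.

    key_a, key_b: dicts mapping joint name -> angle (radians)
    t: normalized time in [0, 1] between the two keyframes
    """
    return {j: (1.0 - t) * key_a[j] + t * key_b[j] for j in key_a}

# Example: the halfway pose between two elbow keyframes
print(interpolate_pose({"elbow": 0.0}, {"elbow": 1.2}, 0.5))   # {'elbow': 0.6}
```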

Participants: James K. Hahn, Shih-kai Chung (Kiles), Nadia Al-Ghreimil, Doug Wiley

An image from the "Blowing in the Wind" animation, which demonstrated that one dynamic analysis can be used for both motion and sound seamlessly. Work done with Sang Yoon Lee and Larry Gritz.

Dynamics has been used extensively to simulate the physical world. We have tried to combine it with geometry, constraints, and user interaction. This approach gives us intuitive control and fast computation for some applications.

Participants: Won Lee, James Hahn

Interactive Constraint Dynamics (1.89M)

Rendering

Current infrared (IR) ship-signature codes have limitations in accurately predicting ship IR signatures: they either model a general BRDF while ignoring multiple reflections, or compute multiple reflections with simplified BRDF models. In this project we apply state-of-the-art global illumination models to accurately simulate the general case for low-observable targets. (Funded by the Office of Naval Research)
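The general case being targeted is described by the rendering equation with an unrestricted BRDF f_r; multiple reflections arise because the incident radiance L_i itself recursively contains reflected light:

```latex
% Outgoing radiance at a hull point x in direction \omega_o: thermal emission
% plus BRDF-weighted incident radiance integrated over the hemisphere.
L_o(x,\omega_o) \;=\; L_e(x,\omega_o)
  \;+\; \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\,
        (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

For IR prediction, the emission term L_e is the thermal signature of the surface itself, while the integral accounts for inter-reflections between hull structures.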

Participants: Dongho Kim, Ge Jin, James Hahn

This project combined radiosity and ray tracing to generate realistic images, with special attention paid to specular-to-diffuse transfer of illumination. The work led to the MS thesis of Larry Gritz.

Some of the ideas from this research were incorporated into the Blue Moon Rendering Tools, a RenderMan-compliant rendering system which supports ray tracing and radiosity.

Participants: Larry Gritz, James Hahn

Illumination and Rendering System (2.69M)

Virtual Reality

Windowing within immersive virtual environments is an attempt to apply 2D interface techniques to three-dimensional (3D) worlds. 2D techniques are attractive because of their proven acceptance and widespread use on the desktop. With current methods of performing 2D interaction in immersive virtual environments, however, it is difficult for users of 3D worlds to perform precise manipulations, such as dragging sliders, or precisely positioning or orienting objects. We have developed a testbed for comparing different indirect user interaction techniques using bimanual interaction, proprioception, and passive-haptic feedback.

The HARP system testbed provides users with a physical surface on which to perform 2D interactions. The paddle is held in the non-dominant hand of the user. The dominant hand is used as a selection device. The user interacts with GUI widgets simply by touching them with the index fingertip of the dominant hand.
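Conceptually, the touch interaction reduces to projecting the tracked fingertip into the paddle's 2D coordinate frame and hit-testing the widgets there. The sketch below is our own simplification, with assumed names and an assumed touch threshold:

```python
import numpy as np

def paddle_hit(fingertip, paddle_origin, paddle_u, paddle_v, widgets):
    """Project the tracked fingertip onto the paddle plane and hit-test
    the 2D widgets drawn there (all names and frames are illustrative).

    paddle_origin: 3D corner of the paddle surface
    paddle_u, paddle_v: orthonormal vectors spanning the paddle plane
    widgets: list of (name, umin, vmin, umax, vmax) in paddle coordinates
    """
    rel = fingertip - paddle_origin
    u, v = np.dot(rel, paddle_u), np.dot(rel, paddle_v)
    depth = np.dot(rel, np.cross(paddle_u, paddle_v))
    if abs(depth) > 0.01:            # ~1 cm touch threshold (assumed)
        return None                  # finger is not touching the surface
    for name, umin, vmin, umax, vmax in widgets:
        if umin <= u <= umax and vmin <= v <= vmax:
            return name
    return None
```

The physical paddle supplies the passive-haptic feedback: the user feels a real surface exactly where the virtual widgets appear.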

Participants: Robert W. Lindeman, James K. Hahn, John L. Sibert

In this project, we are investigating the design and generation of sounds as well as problems related to environmental effects and synchronization to motion. We have used the idea of "timbre trees" (analogous to "shade trees") to express sounds procedurally. Genetic algorithms have been used to alter these trees to design new sounds.
We are exploring the use of these techniques in virtual environments. Our work in sound generation for virtual environments has focused on the problem of real-time generation of synthetic sound, as well as frameworks for integrating sound into virtual environment interfaces.
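To convey the timbre-tree idea, here is a toy expression tree evaluated per audio sample. The node set (Osc, Gain, Add) is an illustrative assumption, not the actual timbre-tree vocabulary:

```python
import math

class Osc:
    """Sine oscillator leaf node."""
    def __init__(self, freq):
        self.freq = freq
    def sample(self, t):
        return math.sin(2 * math.pi * self.freq * t)

class Gain:
    """Scale a child node's output."""
    def __init__(self, amount, child):
        self.amount, self.child = amount, child
    def sample(self, t):
        return self.amount * self.child.sample(t)

class Add:
    """Mix several child nodes."""
    def __init__(self, *children):
        self.children = children
    def sample(self, t):
        return sum(c.sample(t) for c in self.children)

# A bell-ish tone: a fundamental plus a quieter, inharmonic partial
tree = Add(Osc(440.0), Gain(0.3, Osc(1108.7)))
signal = [tree.sample(n / 44100.0) for n in range(44100)]   # one second
```

Because the sound is a tree of operators rather than a fixed sample, genetic algorithms can mutate and recombine subtrees to design new timbres.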

We have developed a framework for integrating sound into virtual environments that supports real-time generation of spatialized synthetic and sampled sound sources. The system provides high-level abstractions for modeling the auditory world, making the integration of sound a relatively painless process. The Virtual Audio Server is now available to the public.

Participants: Hesham Fouad, James Hahn

Demo Video1 (3.67M) | Demo Video2 (2.3M)

A virtual environment differs from a classical frame-based animation system mainly in its non-deterministic nature. To address these dynamic conditions, actors must respond to events within the environment as they occur, not simply follow pre-specified scripts.

We are developing an adaptive control technique to improve the creation and runtime control of reactive actors. A reactive actor is defined as a control entity that autonomously chooses its behavior based on the information it receives from the environment and its own internal state.

RAVE (Reactive Actors in Virtual Environments) uses a reinforcement learning model to automatically generate controllers for typical 2D navigational tasks. Collective Learning Systems (CLS) theory is integrated within a hierarchical control model to create controllers which quickly converge on optimal navigational strategies and also adapt to changing environment conditions during runtime.
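For readers unfamiliar with reinforcement learning, the sketch below shows a generic tabular Q-learning update for a discrete navigation task. Note that RAVE itself uses Collective Learning Systems rather than Q-learning; this is only a minimal example of the same learn-from-reward idea:

```python
import random
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right"]
Q = defaultdict(float)                      # (state, action) -> learned value

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy policy: mostly exploit the learned values, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular temporal-difference update toward the observed reward."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

Because the table is updated continuously, the controller keeps adapting when the environment changes at runtime.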

Participants: Daria E. Bergen, James K. Hahn

In this research we explore the use of visualization and animation in support of several projects at the National Zoo's Center for Biological Research. The project includes visualization of digitized biological artifacts (e.g., skulls and skeletons), animation of animal locomotion, and shape transformation (3D morphing). The ultimate objective is a "Digital Museum" that allows users to electronically access 3D artifacts that may be located far away.

Participants: James K. Hahn, Shih-kai Chung (Kiles), Randy Rohrer, Pavadee Sompagdee

Morphing ("metamorphosis") is a smooth transition between two objects (images, volumes, geometric models). This research focuses on improving morphs between 3D geometric models. Geometric morphing can be viewed as 3 distinct problems: correspondance, interpolation, feature specification. Correspondance refers to how points on one surface get mapped to points on another surface. Interpolation refers to how an object transitions to a new object (the actual transformation process). Feature specification refers to a users ability to control how features of a source object get mapped to features of destination object (including mid-morph feature control). In this research, we focus on problems of interpolation and feature-based morphing.

Participants: Randy Rohrer, Pavadee Sompagdee, James Hahn

Go to Current Research Page