Project on Image Guided Surgery:
A collaboration between the MIT AI Lab and Brigham and Women's
Surgical Planning Laboratory
The Computer Vision Group of the MIT Artificial
Intelligence Lab has been collaborating closely for several years
with the Surgical
Planning Laboratory of Brigham and Women's Hospital. As part of
the collaboration, tools are being developed to support image guided
surgery. Such tools will enable surgeons to visualize internal
structures through an automated overlay of 3D reconstructions of
internal anatomy on top of live video views of a patient.
We are developing image analysis tools that leverage the detailed
three-dimensional structure and relationships present in medical images.
Sample applications include preoperative surgical planning,
intraoperative surgical guidance, navigation, and instrument tracking.
Constructing 3D Models
The anatomical structures that appear in an internal scan such as MR
or CT must be explicitly extracted, or segmented, from the scan before
they can be directly used for surface registration or for 3D
visualization. By segmentation, we refer to the process of labeling
individual voxels in the volumetric scan by tissue type, based on
properties of the observed intensities as well as known anatomical
information about normal subjects. Below are an image of the raw MR
scan and an MPEG movie of all the sagittal slices.
The segmentation is performed using a combination of automated and
semi-automated techniques. The automated techniques include gain
artifact suppression based on expectation-maximization, coupled with
cortical volume isolation based on image morphology and active
contours.
Below is a slice of an MRI, with the brain, ventricles, and tumor
segmented.
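As a rough illustration of the expectation-maximization idea behind the
gain artifact suppression, the sketch below alternates between
classifying voxels into tissue classes and re-estimating a smoothly
varying gain (bias) field from the classification residual. The class
statistics, smoothing scale, and iteration count are illustrative
assumptions, not the parameters of the actual system.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def em_gain_correction(log_img, class_means, class_vars, n_iter=10, smooth_sigma=20.0):
        # Toy EM loop: alternately classify voxels by tissue type and
        # re-estimate a slowly varying gain (bias) field (illustrative only).
        # log_img: array of log-intensities (float).
        class_means = np.asarray(class_means, dtype=float)
        class_vars = np.asarray(class_vars, dtype=float)
        bias = np.zeros_like(log_img, dtype=float)
        for _ in range(n_iter):
            corrected = log_img - bias
            # E-step: Gaussian posterior over tissue classes at each voxel
            probs = np.stack([
                np.exp(-0.5 * (corrected - m) ** 2 / v) / np.sqrt(2 * np.pi * v)
                for m, v in zip(class_means, class_vars)
            ])
            probs /= probs.sum(axis=0) + 1e-12
            # M-step: residual between observed and predicted log-intensity,
            # smoothed heavily so the estimated gain field stays slowly varying
            predicted = np.tensordot(class_means, probs, axes=(0, 0))
            bias = gaussian_filter(log_img - predicted, smooth_sigma)
        labels = probs.argmax(axis=0)  # final per-voxel tissue labels
        return labels, bias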
We use surface rendering techniques to display the segmented MRI
structures. This procedure consists of first extracting bounding
surfaces from the segmented MRI volume using the marching cubes
algorithm. This algorithm generates a set of connected triangles to
represent the 3D surface for each segmented structure. These surfaces
are then displayed by selecting a virtual viewing camera location and
orientation in the MRI coordinate frame and using standard computer
graphics techniques to project the surface onto the viewing camera.
This rendering process removes hidden portions of the surface, shades
the surface according to its local normal, and optionally varies the
surface opacity to allow glimpses into internal structures. Sample
renderings and two movies are shown below.
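A minimal sketch of the surface extraction step, assuming a per-voxel
label volume from the segmentation and using scikit-image's marching
cubes (the label value and voxel spacing below are illustrative):

    import numpy as np
    from skimage import measure

    def extract_surface(label_volume, structure_id, spacing=(1.0, 1.0, 1.5)):
        # Binary mask of one segmented structure (e.g. the tumor)
        mask = (label_volume == structure_id).astype(np.float32)
        # The iso-surface at 0.5 separates inside from outside of the mask;
        # `spacing` is the voxel size in mm, so vertices come out in MRI coordinates
        verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5, spacing=spacing)
        return verts, faces, normals

The resulting triangle mesh can then be handed to any standard graphics
pipeline (for example OpenGL or VTK), where hidden-surface removal,
normal-based shading, and per-surface opacity are applied at render time.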
Setup in the Operating Room
We have built a surgical navigation system that is currently used
regularly for neurosurgical cases such as tumor resection at Brigham
and Women's Hospital. The system consists of a portable cart
containing a Sun UltraSPARC workstation and the hardware to drive the
laser scanner and Flashpoint tracking system (Image Guided
Technologies, Boulder, CO). On top of the cart is mounted an
articulated extendible arm to which we attach a bar housing the laser
scanner and Flashpoint cameras. The three linear Flashpoint cameras
are inside the bar. The laser is attached to one end of the bar, and
a video camera to the other. The joint between the arm and scanning
bar has three degrees-of-freedom to allow easy placement of the bar in
desired configurations. The figure below shows the cart set up in the
operating room.
Laser Scanning
In order to register the patient to the segmented MR skin, the
coordinates of points on the patient's skin must be obtained. We use
a laser scanner to collect 3D data of the patient's scalp surface as
positioned on the operating table. The scanner is a laser striping
triangulation system consisting of a laser unit (a low-power laser
source and a cylindrical lens mounted on a stepper motor) at one end
of a bar and a video camera at the other end. The cylindrical lens
spreads the beam into a plane of light projected at an angle
determined by the stepper motor. Each pixel of the camera defines a ray going through
the center of projection of the camera. When the plane of light hits
an object, a visible line appears on the object. Intersecting the
laser plane with the optical ray yields a 3D point that lies on the
object. The data is acquired with high positional accuracy (< 1 mm)
without direct contact with the patient. In the images below, the
patient's head is scanned with the
laser scanner. The points of interest on the patient's head are
selected using a simple mouse interface and are shown in red.
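The triangulation itself reduces to intersecting the camera ray through
each stripe pixel with the known laser plane. A minimal sketch, assuming
a calibrated camera with intrinsic matrix K and a laser plane expressed
in camera coordinates (all names here are illustrative):

    import numpy as np

    def triangulate_stripe_point(pixel, K, plane_normal, plane_d):
        # pixel: (u, v) image coordinates of a point on the visible laser line
        # plane_normal, plane_d: laser plane in camera coordinates, n.X + d = 0,
        # determined by the current stepper motor angle
        u, v = pixel
        # Ray from the camera's center of projection through the pixel
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        # Solve n.(t * ray) + d = 0 for the distance t along the ray
        t = -plane_d / (plane_normal @ ray)
        return t * ray  # 3D point on the patient's skin, in camera coordinates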
Registration
After a rough initial alignment has been performed, the automatic
registration process performs a two-step optimization to accurately
localize the best laser-to-MRI transformation. The basis of the
registration algorithm we use has been described previously in some of
our group's papers, listed below.
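Conceptually, the refinement resembles an iterative closest point
scheme: each laser point is matched to its nearest point on the MRI skin
surface and a rigid transform is re-estimated from the matches. The
sketch below shows that generic idea with an SVD-based rigid fit; it is
a simplification for illustration, not the algorithm from the papers
listed below.

    import numpy as np
    from scipy.spatial import cKDTree

    def rigid_fit(src, dst):
        # Least-squares rotation R and translation t mapping src onto dst (Kabsch)
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, dst.mean(0) - R @ src.mean(0)

    def refine_registration(laser_pts, skin_pts, n_iter=50):
        # ICP-style refinement of the laser-to-MRI transform after rough alignment
        tree = cKDTree(skin_pts)
        R, t = np.eye(3), np.zeros(3)
        for _ in range(n_iter):
            _, idx = tree.query(laser_pts @ R.T + t)  # nearest skin point per laser point
            R, t = rigid_fit(laser_pts, skin_pts[idx])
        return R, t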
After the registration, we have the transformation from the MRI
coordinate frame to the operating room coordinate frame--that is, we
know exactly where the MRI points are positioned in the patient--both
on the surface and internally. In the image below, we have blended
the 3D skin model with the video image of the patient. The movies
show the skin model being blended in and out to confirm the
registration.
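The blend itself is simple alpha compositing of the rendered skin model
over the live video frame, assuming both are already in the same camera
view:

    import numpy as np

    def blend(video_frame, rendered_skin, alpha=0.5):
        # alpha = 0 shows the video only, alpha = 1 the model only;
        # sweeping alpha back and forth produces the blend-in/blend-out movies
        return ((1.0 - alpha) * video_frame + alpha * rendered_skin).astype(np.uint8)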
The registration points are overlaid on the 3D skin model as another
method to verify the registration. The points are color coded based
on the distance to the skin model (green = 0 mm, yellow = 2.5 mm, red
= 5 mm).
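The color coding is a linear ramp on the point-to-surface distance, for
example:

    def distance_to_color(d_mm, d_max=5.0):
        # 0 mm -> green, d_max/2 -> yellow, d_max and beyond -> red
        f = min(max(d_mm / d_max, 0.0), 1.0)
        if f < 0.5:
            return (2 * f, 1.0, 0.0)        # green toward yellow
        return (1.0, 2 * (1.0 - f), 0.0)    # yellow toward red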
Enhanced Reality Visualization
We can peel back the MRI skin and see where the internal structures
are located relative to the viewpoint of the camera. The surgeon
effectively gains x-ray vision, a capability that will be needed more
and more as we continue moving toward minimally invasive surgery.
Surgical Instrument Tracking
Another method of leveraging the 3D imagery is the tracking of medical
instruments in the frame of reference of the medical imagery. Such
visualization is useful for identifying the exact position of internal
probes whose tips are not directly visible or for identifying the
tissue properties of structures that are visible but not necessarily
known.
In addition to our intraoperative pointer, we have attached a bipolar
stimulator (Cadwell Laboratories Inc., Washington, USA) to the
trackable probe (see image, right). This stimulator is used to
determine the location of vital regions of the brain, including motor
and sensory cortices and the language area. When the stimulator is
placed on motor cortex, a muscle response occurs, and when placed on
sensory cortex, sensation in different areas is reported. Language
suppression (including temporary loss of speech) occurs when the
stimulator touches the language area. As the neurosurgeon stimulates
different areas of the brain and receives responses, it is common for
the surgeon to place numbered markers on the cortex highlighting regions to
avoid. When our probe is attached to the stimulator, we can obtain
the position of the tip during stimulations and immediately produce a
color-coded visualization highlighting these important areas.
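In outline, each stimulation site is recorded by mapping the tracked
probe tip from the operating room (Flashpoint) frame into MRI
coordinates with the registration transform and tagging it with the
observed response; the names, categories, and colors below are
illustrative only.

    import numpy as np

    # R, t: registration transform from operating-room to MRI coordinates
    def record_stimulation_site(tip_or, R, t, label, sites):
        # tip_or: probe tip position reported by the tracker during stimulation
        # label: observed response, e.g. 'motor', 'sensory', or 'language'
        tip_mri = R @ np.asarray(tip_or) + t
        sites.append((tip_mri, label))
        return tip_mri

    # Illustrative color table for rendering the recorded sites on the cortex model
    SITE_COLORS = {"motor": (1, 0, 0), "sensory": (0, 0, 1), "language": (1, 1, 0)}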
Determining Stimulation Grid Positions
In some neurosurgical cases where the patient suffers from seizures,
it is difficult to locate the focus of the seizure activity either
visually or in the MR scan. In such cases, it is common for the
patient to undergo two surgical procedures: one for placement of an
electrode grid and one, about a week later, for removal of the lesion.
During the first surgery, a grid of electrodes is placed on the
surface of the cortex with wires coming out of the skin. During the
next week, when the patient has seizure activity, the responses from
the grid are monitored to localize the focus of the seizures.
In most cases, one of the technicians sketches on paper where the grid
is located on the cortex as a reference during the week of monitoring.
Using our navigational system, we touch each grid point with the
Flashpoint probe and obtain the positions in model coordinates. Below
is the rendered image with the grid points in red. The doctors
monitoring the grid responses have reported that our images were very
helpful in drawing a correspondence between grid numbers and positions
on the cortex.
The electrodes can also be used to directly stimulate the surface of
the cortex to map out the positions of the motor and sensory cortices.
In one case, we created a visualization where we colored the grid
points depending on whether they were adjacent to motor cortex,
sensory cortex, or the seizure focus. The neurosurgeon reported that
the color-coding was very useful as he moved our probe over the cortex
when planning out the region to resect.
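A sketch of that visualization, assuming each touched grid point has
been stored in model coordinates along with the function reported for
its electrode (the electrode numbers, categories, and colors below are
made up for illustration):

    # Reported finding per electrode number (illustrative values)
    GRID_FINDINGS = {1: "motor", 2: "motor", 5: "sensory", 7: "seizure_focus"}

    COLORS = {"motor": (1, 0, 0), "sensory": (0, 0, 1),
              "seizure_focus": (1, 1, 0), None: (0.7, 0.7, 0.7)}

    def grid_point_colors(grid_points):
        # grid_points: list of (electrode_number, position_in_model_coords)
        # Returns one RGB color per point for rendering on the cortex model
        return [COLORS[GRID_FINDINGS.get(num)] for num, _pos in grid_points]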
Impact of Surgical Navigation System
The navigation system has been used at the Brigham and Women's
Hospital for over 200 neurosurgical cases, and is currently being used
routinely for 1-2 cases per week. The system achieves high positional
accuracy with a simple, efficient interface that interferes little
with normal operating room procedures, while supporting a wide range
of cases. An investigation is underway to calculate the monetary
savings of using our system for neurosurgery. Initial estimates
indicate that the use of our system reduces the cost of a neurosurgical
procedure by $1000 to $5000, on average, per case. These savings are
mainly due to the fact that our system enables the surgeon to
confidently perform the surgery more quickly. In one case, the
neurosurgeon reported that the use of our system reduced the length of
the surgery from eight hours to five. Click here
to see the neurosurgical case of the month at the Brigham and Women's
Hospital Surgical Planning Lab web site.
Selected publications of the project
W.E.L. Grimson, G.J. Ettinger, T. Kapur, M.E. Leventon, W.M. Wells III,
and R. Kikinis. "Utilizing Segmented MRI Data in Image-Guided Surgery."
In IJPRAI, 1996. [color postscript 13.0M]

W.E.L. Grimson, T. Lozano-Perez, W.M. Wells III, G.J. Ettinger,
S.J. White, and R. Kikinis. "An Automatic Registration Method for
Frameless Stereotaxy, Image Guided Surgery, and Enhanced Reality
Visualization." In IEEE Transactions on Medical Imaging, 1996.
[gzipped postscript 3.2M]
Last updated Feb 5, 1999.
Michael Leventon