Getting the mice out of the operating theatres
If doctors could use natural user interfaces, like gestures, surgery could be quicker and safer.
Wordlessly, with infinite care, a team of highly specialised clinicians weaves a catheter through a conscious patient’s blood vessels towards a blockage. When the catheter reaches the precise spot, a tiny balloon will be inflated to open up the vessel. A radiologist injects a contrast agent and takes a run of images as it flows through the blood vessels.
Then she moves to a bank of screens displaying the stream of X-ray images. Time is of the essence. Almost automatically, she flips her sterile gown over her gloved hand and, with the non-sterile inside surface, clasps the computer mouse to scroll through the images and assess whether the flow through the vessels is normal.
The gown routine is a familiar, less-than-ideal workaround, demanded whenever clinicians move between the sterile operating table and their information systems. But wouldn’t it be quicker – and safer – if the radiologist didn’t need a mouse at all, and could instead find and manipulate the images with a voice command or a hand gesture?
These are among the possibilities offered by putting natural user interfaces to work in healthcare. Others include therapy for children with cerebral palsy, helping people with long-term conditions to manage their care at home, and giving surgical robots a real human touch.
The term natural user interface (NUI) covers a spectrum of ways to communicate with IT devices using more intuitive means than mouse and keyboard.
The concept is already familiar in entertainment; an example is Microsoft’s Kinect for Xbox 360, which combines facial recognition with gesture and voice control, and which has taken the games industry by storm since its launch last year.
Now something similar looks set to happen in the healthcare and medical sectors. “It’s going to be huge,” says Dr Bill Crounse, Senior Director, Worldwide Health at Microsoft. “There’s been an explosion of interest in Kinect. Every doctor who has seen the device immediately understands how it might one day relate to their work.”
Crounse says that healthcare has lagged behind other sectors in IT transformation because the standard user interface – keyboard and mouse – is awkward in the extremely demanding environment of the operating theatre. Besides, he says, of all industries healthcare applies the most exacting criteria to IT: portability, security and the ability to absorb and make sense of intensive streams of data – ideally all wrapped up in a graphical user interface that is intuitive and doesn’t require lots of training. “Doctors want it all,” he says, and it’s only recently that information technology has matured enough to deliver.
[Image caption: Computers are now essential surgical tools. The screen demonstrates the difficulty of using a mouse in sterile environments, such as operations involving interventional radiology.]
Hence the interest in NUI, especially in “touchless” interaction. Researchers at Microsoft’s labs are investigating a number of concepts (see panel). Among the most interesting work is a collaborative effort between social science and computer vision researchers at Microsoft Research in Cambridge, UK. One of its projects, led by Microsoft Research’s Kenton O’Hara, takes us into the world of interventional radiology and the possibility of controlling computer images by gesture during surgical procedures.
Interventional radiology involves operating on patients’ circulatory systems from the inside, by inserting wires and catheters through the blood vessels. Procedures require absolute precision. Radiologists navigate with real-time images produced by angiography techniques such as X-ray fluoroscopy (continuous X-ray imaging, typically combined with injected contrast dyes that show up on the images), computed tomography and magnetic resonance imaging.
It’s a demanding procedure, physically and mentally, and requires close teamwork. The patient, who is awake throughout, often has to be reassured and warned of discomfort. In such settings, members of the team may communicate by means of gestures out of the patient’s line of sight. IT is an essential element: images are displayed on a bank of screens above the patient, while the computers and keyboards sit to one side, in a non-sterile area behind a radiation screen.
Working with colleagues at the Open University and Addenbrooke’s Hospital, Cambridge, O’Hara studied how radiologists juggle the task of managing the procedure with that of manipulating fluoroscopy images to get the best possible view. “The whole idea is to get an understanding of how surgeons interact with people and technology in the operating room,” O’Hara says. “You can’t touch certain things when you’re scrubbed up. We’re looking at what that means for how you coordinate teamwork, and what it would mean if some of these obstacles were removed.”
Results of the study were presented at the CHI 2011 conference, generally considered the most prestigious in the field of human–computer interaction. The conclusion: though there is strong potential for controlling the images by gesture, building a touchless system will not be as simple as wiring the imaging systems up to an Xbox. Further research is needed into the ways surgeons already use movement to communicate, in order to distinguish gestures aimed at the computer from those aimed at other members of the surgical team. Possibilities include making the system respond only to gestures in close proximity to the screen, or to commands from one designated hand – a kind of “gesture gating”, sketched below. The next phase of the project will begin shortly at a London hospital. And while the final design of the touchless architecture is still far from settled, O’Hara predicts a major role for consumer devices: “Kinect technology has made NUI cheap and acceptable.”
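To make that idea concrete, here is a minimal sketch in Python of what such gesture gating might look like. It is an illustration only, not the project’s actual design: the HandFrame record, the interaction-zone distance and the single-command-hand rule are all assumptions standing in for whatever a Kinect-style skeleton tracker would really supply.

    from dataclasses import dataclass

    # Hypothetical per-frame output from a skeleton tracker.
    # A real Kinect-style pipeline would supply far richer joint data.
    @dataclass
    class HandFrame:
        hand: str          # "left" or "right"
        distance_m: float  # hand's distance from the display, in metres
        gesture: str       # e.g. "swipe_left", "swipe_right", "none"

    # Assumed design parameters (not taken from the study itself).
    INTERACTION_ZONE_M = 0.8   # only gestures this close to the screen count
    COMMAND_HAND = "right"     # only one designated hand issues commands

    def accept_gesture(frame: HandFrame) -> bool:
        """Gate a gesture: act on it only if it is a recognised command,
        comes from the designated hand, and occurs inside the zone."""
        return (
            frame.gesture != "none"
            and frame.hand == COMMAND_HAND
            and frame.distance_m <= INTERACTION_ZONE_M
        )

    if __name__ == "__main__":
        frames = [
            HandFrame("right", 0.5, "swipe_left"),   # accepted: command hand, in zone
            HandFrame("left", 0.5, "swipe_left"),    # rejected: non-command hand
            HandFrame("right", 1.6, "swipe_right"),  # rejected: likely aimed at a colleague
        ]
        for f in frames:
            verdict = "execute" if accept_gesture(f) else "ignore"
            print(f"{f.hand} hand at {f.distance_m} m, {f.gesture}: {verdict}")

The point of the gate is social as much as technical: by insisting on one hand and a tight interaction zone, the system is less likely to mistake a gesture meant for a colleague for a command meant for the computer.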
This article was originally published in Issue 9 of Futures Magazine.