Sunday, July 10, 2011

Inner eye Project


Microsoft supports a growing number of Natural User Interface (NUI) projects; the most celebrated example of its early NUI work is the Kinect body-motion sensor for Xbox. There is also quite a bit of focus by Microsoft's NUI researchers on the intersection between NUI and healthcare, and a number of Microsoft Research projects explore this connection. One of these projects, named "InnerEye," focuses on the automated analysis of medical Computed Tomography (CT) scans, using modern machine learning techniques for 3D model navigation and visualization.
InnerEye takes advantage of advances in human-computer interaction that have put computers on a path to work for us and collaborate with us. The development of a natural user interface (NUI) enables computers to adapt to you and become more integrated into your environment via speech, touch, and gesture. As NUI systems become more powerful and are imbued with more situational awareness, they can provide beneficial, real-time interactions that are seamless and naturally suited to your context; in short, systems will understand where you are and what you are doing.
Antonio Criminisi leads the research group at Microsoft Research Cambridge that is developing the system, which will make it easier for doctors to work with databases of medical imagery. The system indexes the images generated during scans and automatically recognizes organs, and the team is working to train it to detect certain kinds of brain tumors.
The software takes a collection of 2D and 3D images and indexes them together. Once they are combined, a medical imaging database is built from the text comments linked to each image, which doctors can then search. This gives them the ability to search, but it takes time because not all of the results are relevant. Systems of this kind will allow doctors to easily navigate from new images to older images of the same patient, side by side, and to pull up images from other patients for comparison.
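To make the idea concrete, here is a minimal sketch of how a comment-based scan index might work. Everything here is illustrative: the `ScanIndex` class, scan IDs, and comments are assumptions for the example, not InnerEye's actual design. It also shows why purely lexical search returns results that are not always relevant, which is the limitation described above.

```python
# Hypothetical sketch: an inverted index from comment words to scan IDs.
# Names and data are invented for illustration, not InnerEye's real API.
from collections import defaultdict

class ScanIndex:
    def __init__(self):
        self.inverted = defaultdict(set)  # word -> set of scan IDs

    def add_scan(self, scan_id, comment):
        # Index each word of the free-text comment attached to the scan.
        for word in comment.lower().split():
            self.inverted[word].add(scan_id)

    def search(self, query):
        # Return scans whose comments contain every query word.
        words = query.lower().split()
        if not words:
            return set()
        results = self.inverted[words[0]].copy()
        for word in words[1:]:
            results &= self.inverted[word]
        return results

index = ScanIndex()
index.add_scan("ct-001", "axial ct left kidney lesion")
index.add_scan("ct-002", "axial ct liver metastasis")
index.add_scan("ct-003", "follow-up ct kidney lesion stable")

print(sorted(index.search("kidney lesion")))  # → ['ct-001', 'ct-003']
```

Because matching is on words rather than meaning, a query for "lesion" would also surface scans of entirely different organs, which is exactly why doctors must spend time filtering results by hand.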
Criminisi’s team is also working on embedding the technology found in Kinect. This will let surgeons navigate through the images with gestures, giving them access to the images mid-procedure without having to touch a mouse, a keyboard, or even a touch screen. Since all of those could compromise the sterility of the operation, this will be a very useful tool. The team plans for the tool to be deployed at large scale, automatically indexing images as they are scanned and tying them into the wider database seamlessly.
Using Kinect technology, surgeons would only have to gesture with their hands to access the parts they need to focus on. The potential Microsoft solution is quicker and slicker, and it could help to save lives. Criminisi said: “Our solution enables surgeons to wave at the screen and access the patient’s images without touching any physical device, thus maintaining asepsis. By gesturing in mid-air surgeons can zoom in on specific organs or lesions and manipulate 3D views; they can also search for images of other patients with similar conditions. It’s amazing how such images can offer clues of disease and potential cure. Pre-filtering patient data can be an important tool for doctors and surgeons.”
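The touch-free workflow described above can be pictured as a small dispatch loop: a gesture recognizer emits events, and each event drives the image viewer. This is only a sketch under assumptions; the gesture names and the `Viewer` class are invented for illustration and are not the Kinect SDK's actual API.

```python
# Illustrative sketch: mapping recognized mid-air gestures to viewer
# commands, so the surgeon never touches a physical device.
# Gesture names and the Viewer class are assumptions, not Kinect's API.
class Viewer:
    def __init__(self):
        self.zoom = 1.0   # current magnification
        self.slice = 0    # current CT slice

    def handle_gesture(self, gesture):
        if gesture == "pinch_out":
            self.zoom *= 1.25               # zoom in on an organ or lesion
        elif gesture == "pinch_in":
            self.zoom *= 0.8                # zoom back out
        elif gesture == "swipe_up":
            self.slice += 1                 # step forward through slices
        elif gesture == "swipe_down":
            self.slice = max(0, self.slice - 1)  # step back, not below 0

viewer = Viewer()
for g in ["pinch_out", "swipe_up", "swipe_up", "pinch_in"]:
    viewer.handle_gesture(g)
print(viewer.zoom, viewer.slice)
```

A real system would sit a recognition layer (depth camera plus skeletal tracking) in front of this loop, but the design point is the same: the viewer only ever consumes discrete, sterile gesture events.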
Although needs and levels of sophistication differ from hospital to hospital, the general outcome was sufficiently encouraging to drive scientific research toward a new, efficient tool to aid surgery.




Resources: 
  • Technology Review
  • Pappas Evangelos. 30/03/2010 - Assessment for CO3 6th Semester Academic English. University of Wales
