Touchless control module for diagnostic images at the surgery room using the Leap Motion system and 3D Slicer Software

Módulo para el control sin contacto de imágenes diagnósticas en la sala de cirugía con el sistema Leap Motion y el software 3D Slicer

Andrés Felipe Botero-Ospina; Sara Isabel Duque-Vallejo; John Fredy Ochoa-Gómez; Alher Mauricio Hernández-Valdivieso
Universidad de Antioquia, Colombia

Revista Facultad de Ingeniería Universidad de Antioquia, no. 82, pp. 40-46, 2017
Facultad de Ingeniería, Universidad de Antioquia

ABSTRACT: During surgical procedures, it is important that the personnel (surgeons, residents, or assistants) interact with the patient while avoiding any physical contact with equipment and materials that might not have been appropriately sterilized. This is done in order to prevent infections and complications after surgery. With their increased availability, diagnostic images have become indispensable in operating rooms, but it is not always possible to maintain the asepsis of the computer equipment on which the visualization programs run, which hinders the personnel's access to the information contained in the images. This paper describes the development of a system that allows the personnel to manipulate a medical imaging display program using gestures, so that neither the surgeon nor the nurse has direct contact with the computer. The system, which requires a computer with the 3D-Slicer software and the Leap Motion (LM) device, gives access, through hand gestures, to basic operations such as moving between the sections of a volume, changing the image size and changing the anatomical plane of visualization; these operations are essential for the surgeon's spatial location and decision making.

Keywords: Diagnostic imaging, human-computer interaction, image guided neurosurgery, medical informatics computing, positioning system.

RESUMEN: Durante los procedimientos quirúrgicos es importante que el personal (cirujanos, residentes o asistentes) interactúe con el paciente, evitando cualquier contacto físico con equipo y materiales que pudieron no ser esterilizados apropiadamente. Esto se hace con el fin de evitar al paciente infecciones y complicaciones posteriores a la cirugía. Con el aumento de la disponibilidad de imágenes diagnósticas esta herramienta se ha hecho cada vez más indispensable en los quirófanos, pero no siempre es posible mantener el control de asepsia de los equipos informáticos en los cuales se ejecutan los programas de visualización, factor que dificulta el acceso del personal asistencial a la información contenida en las imágenes. En este trabajo se presenta el desarrollo de un sistema que permite manipular un programa de visualización de imágenes diagnósticas mediante gestos, evitando que el cirujano tenga contacto directo con la computadora. El sistema, que requiere una computadora con el software 3D-Slicer y el dispositivo Leap Motion, permite mediante gestos realizados con las manos acceder a operaciones básicas como el movimiento entre cortes de un volumen, cambio del tamaño de la imagen y cambio del plano anatómico de visualización, operaciones que para el cirujano son esenciales para la ubicación espacial y la toma de decisiones.

Palabras clave: Imágenes diagnósticas, interacción humano-computador, neurocirugía guiada por imágenes, informática médica, sistemas de posicionamiento.


Received: 07 May 2016

Accepted: 21 December 2016

1. Introduction

The operating room is divided into several areas; one of these is the sterile area, which is the cleanest and contains the surgical scrub area. Aseptic rules state that only objects that have been sterilized may be used in this workspace. In some procedures it is necessary to use diagnostic images such as computed tomography (CT) or magnetic resonance imaging (MRI), and the specialist needs to interact with the electronic devices that allow the manipulation of those images [1], [2]. These devices are usually computers running software that facilitates the visualization of medical images, such as 3D-Slicer, a free and open-source software package for image analysis [3]. These display systems are usually operated through manual input devices such as a mouse, a keyboard or a touch screen, putting at risk the sterile conditions of the room and therefore the success of the intervention.

There are human-computer interfaces controlled by hand gestures [4], facial expressions [5] and body gestures [6] that allow surgeons to browse and manipulate medical images while keeping sterile conditions, but access to such technologies in countries like Colombia is limited.

The LM (see Figure 1) is a computer hardware sensor device that supports hand and finger motions as input [7], [8]. It has recently been used in different applications, such as the control of 3D molecular graphics using gestures [9], the interaction between virtual reality environments and pieces of cultural heritage [10], and the package Cyber Science 3D®, which allows the exploration of natural anatomy and mechanical structures [11].


Figure 1
Left: Leap Motion positioning device. Right: physical space defined by the Leap Motion

3D-Slicer is a well-known development platform, based on the Visualization Toolkit [12], that is useful for the analysis and visualization of medical images. Its modular organization easily allows the addition of new functionality [3]. All the information in the 3D-Slicer environment is known as a Scene and can contain medical images, virtual models, geometric transformations and user annotations (see Figure 2).


Figure 2
Typical scene in 3D-Slicer. Bottom: axial, sagittal and coronal planes. Top: volume rendering reconstruction of the data in the medical images

Based on the above, the development of a tool that allows the specialist to manipulate diagnostic images without touching the input device, using the Leap Motion (LM) positioning system, is proposed. Because the LM is an inexpensive device, it can help spread the developed technology.

This paper discusses the development of a software module based on the LM positioning device and the 3D-Slicer visualization software. The final software module is oriented to handling different types of diagnostic images just by moving a hand. It uses intuitive gestures and can be useful in the operating room, increasing patient safety.

2. Materials and methods

A module that recognizes the Leap Motion (LM) as an input device was developed in 3D-Slicer. It was written in the Python programming language, which includes libraries for the communication protocol used (TCP/IP sockets) and is supported by both the Leap Motion and 3D-Slicer. Objects in 3D-Slicer, such as diagnostic images and 3D models, have attributes such as size, orientation and position in space that can be modified using Python (see Figure 3).


Figure 3
Interaction between the developed module and the 3D-Slicer software
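For illustration, this kind of attribute manipulation can be performed from 3D-Slicer's embedded Python console. The following is a minimal sketch, assuming a volume named 'PatientVolume' has been loaded; the node name is hypothetical, while the slice-logic calls belong to Slicer's scripted interface.

```python
import slicer

# Fetch a loaded diagnostic image (a volume node) from the Scene;
# 'PatientVolume' is a hypothetical node name used for illustration.
volumeNode = slicer.util.getNode('PatientVolume')

# Geometric attributes of the volume that a module can read or modify.
spacing = volumeNode.GetSpacing()  # voxel size in mm
origin = volumeNode.GetOrigin()    # position of the volume in space

# Move the axial ('Red') slice view 5 mm along its normal.
redLogic = slicer.app.layoutManager().sliceWidget('Red').sliceLogic()
redLogic.SetSliceOffset(redLogic.GetSliceOffset() + 5.0)
```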

The proposed system requires a computer with the 3D-Slicer software previously installed and the LM connected. The first requirement to use the developed module is that a diagnostic image must be loaded in the 3D-Slicer environment.

2.1 Client-server model and communication protocol (TCP/IP) between the Leap Motion device and the 3D-Slicer software

The model used for the transmission of information between the LM device and the 3D-Slicer software is client-server, and the communication protocol is TCP/IP. The software module receives the information transmitted by the LM and generates a series of events, under the established conditions, for the handling of medical images in 3D-Slicer (see Figure 4).


Figure 4
Client-server model between Leap Motion and 3D-Slicer

The communication is initiated by the LM, which connects through a port number to find the Slicer program. The LM includes a Python library that enables the capture of the gestures, in our case every nine milliseconds, and according to the current gesture a predefined code is transmitted to the 3D-Slicer software. In addition to the gestures, the LM captures the position of each finger; that information is also sent to the visualization software.
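A minimal sketch of this client loop is shown below, assuming the classic Leap Motion V2 Python bindings (import Leap) and a JSON-over-TCP message format; the port number and the message fields are illustrative assumptions, not the module's exact protocol.

```python
import json
import socket
import time

import Leap  # Leap Motion V2 Python bindings

HOST, PORT = 'localhost', 5005  # hypothetical port of the 3D-Slicer server

def parity_bit(values):
    # Simple even-parity bit over the rounded coordinates, mirroring the
    # integrity check described in the paper.
    return sum(int(round(v)) for v in values) % 2

controller = Leap.Controller()
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((HOST, PORT))

while True:
    frame = controller.frame()
    if not frame.fingers.is_empty:
        tip = frame.fingers.frontmost.tip_position  # (x, y, z) in mm
        coords = [tip.x, tip.y, tip.z]
        message = {'hands': len(frame.hands),
                   'coords': coords,
                   'parity': parity_bit(coords)}
        sock.sendall((json.dumps(message) + '\n').encode('utf-8'))
    time.sleep(0.009)  # capture roughly every nine milliseconds
```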

Client and server are synchronized once the communication starts. Data reliability is checked with a parity bit over the data transferred between client and server, and the TCP/IP protocol provides security in the handling of exceptions during communication.

The overall algorithm used is shown in Figure 5. Once the communication has started, the LM transmits the (x, y, z) coordinates of the fingers in the LM scene and a parity bit that ensures data integrity and availability. The default function is zoom on images. When the LM detects two hands, it switches to another functionality, which may be either the movement of anatomical planes or the navigation in a 3D space. The information is sent to the server only if there are objects in the LM physical space.


Figure 5
Algorithm to get the position sent by the client to the server program and to switch between functions
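The server side of this algorithm can be sketched as follows, under the same assumed message format as the client sketch above; the handler bodies are placeholders for the three functions described in Section 3.2.

```python
import json
import socket

# Placeholder handlers for the three functions described in Section 3.2.
def do_zoom(coords):      print('zoom at', coords)
def move_planes(coords):  print('move planes to', coords)
def navigate_3d(coords):  print('navigate 3D to', coords)

FUNCTIONS = [do_zoom, move_planes, navigate_3d]

def serve(port=5005):  # hypothetical port, matching the client sketch
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('localhost', port))
    server.listen(1)
    conn, _ = server.accept()
    current, buffer = 0, ''  # zoom is the default function
    while True:
        data = conn.recv(4096)
        if not data:
            break
        buffer += data.decode('utf-8')
        while '\n' in buffer:
            line, buffer = buffer.split('\n', 1)
            msg = json.loads(line)
            coords = msg['coords']
            # Discard messages whose parity bit does not match.
            if sum(int(round(v)) for v in coords) % 2 != msg['parity']:
                continue
            if msg['hands'] == 2:
                current = (current + 1) % len(FUNCTIONS)  # switch function
            else:
                FUNCTIONS[current](coords)
```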

2.2 Medical images handling in 3D-Slicer

Using the LM as input device, the developed module enables the manipulation of the trackball navigation that is included in each 3D-Slicer Scene, which corresponds to manipulating the viewpoint of the loaded image scene. The motion of the axial, sagittal and coronal planes is also possible, as well as zooming in and out on specific areas.

Calibration is achieved once the module recognizes the input device; if the LM is calibrated correctly, it ensures data integrity [13]. The origin of the coordinate system of the physical space detected by the LM is set to coincide with the origin of the coordinate system of the 3D-Slicer software. The LM provides positions in millimeters (mm), maintaining the correspondence with the 3D-Slicer coordinate system.
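As a sketch of this correspondence, the alignment can be reduced to storing a reference position at calibration time and subtracting it from every subsequent reading; the axis reordering below is an assumption for illustration, since both systems work in millimeters but orient their axes differently.

```python
class CoordinateMapper:
    """Maps LM positions (mm) into the 3D-Slicer coordinate system."""

    def __init__(self):
        self.offset = (0.0, 0.0, 0.0)

    def calibrate(self, reference_mm):
        # Store the LM position that should map to the Slicer origin.
        self.offset = reference_mm

    def to_slicer(self, point_mm):
        # Translate so both origins coincide; no scaling is needed because
        # both systems work in millimeters. The axis reordering (LM y is
        # up, z is toward the user) is an illustrative assumption.
        x, y, z = (p - o for p, o in zip(point_mm, self.offset))
        return (x, -z, y)
```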

2.3 Performance assessment

To assess the performance of the system in terms of gesture recognition, a test was carried out with 20 subjects, including clinical and development engineers from the University of Antioquia, and 40 samples of each of the gestures. The evaluation was made by calculating the sensitivity as follows:

Sensitivity = TP / (TP + FN) (1)

where TP corresponds to the true positives, FN to the false negatives and FP to the false positives. Additionally, the positive predictive value was calculated as follows:

PPV = TP / (TP + FP) (2)
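As a worked example of Eqs. (1) and (2), suppose that for one gesture 38 of the 40 samples were recognized correctly (TP), 2 were missed (FN), and 1 spurious detection occurred (FP); the counts are illustrative, not the paper's data.

```python
def sensitivity(tp, fn):
    # Eq. (1): percentage of true gestures that were recognized.
    return 100.0 * tp / (tp + fn)

def positive_predictive_value(tp, fp):
    # Eq. (2): percentage of detections that were true gestures.
    return 100.0 * tp / (tp + fp)

print(sensitivity(38, 2))                # 95.0
print(positive_predictive_value(38, 1))  # ~97.4
```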

3. Results

3.1 Connection between devices

The software module has a user-friendly interface: once the connection between the LM and 3D-Slicer is started, the interface shows which function is active, helping the users in the interaction. Exception handling is also an important part of the developed module, allowing it to disclose any connection error between the devices.

When the LM is turned on, the connection is set by pressing the button labeled "Connect with Leap Motion". When the connection is successful, the program notifies the user and the images that were loaded for display are shown in 3D space. Then the transmission of information between the devices is initiated by pressing the "Start" button. Similarly, when the user wants to finish the data transmission, the "Finish" button must be pressed (see Figure 6).


Figure 6
Left: connection module with LM. Right: main interface with connection-successful message
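This interface can be sketched as a 3D-Slicer scripted module, assuming Slicer's PythonQt bindings; the handler bodies are placeholders for the connection logic described above.

```python
import qt
from slicer.ScriptedLoadableModule import ScriptedLoadableModuleWidget

class LeapControlWidget(ScriptedLoadableModuleWidget):
    def setup(self):
        ScriptedLoadableModuleWidget.setup(self)
        # One button per action described in the text.
        for label, handler in [('Connect with Leap Motion', self.onConnect),
                               ('Start', self.onStart),
                               ('Finish', self.onFinish)]:
            button = qt.QPushButton(label)
            button.connect('clicked(bool)', handler)
            self.layout.addWidget(button)

    def onConnect(self):
        print('Connection with Leap Motion successful')  # placeholder

    def onStart(self):
        print('Data transmission started')  # placeholder

    def onFinish(self):
        print('Data transmission finished')  # placeholder
```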

3.2 Module functions

The developed module has three functions: zoom on images, movement of anatomical planes, and navigation in the 3D space. Switching between functions is made by placing two hands in the LM field of action (see Figure 7(a)). For each of the gestures described in the following sections, it does not matter whether the user works with the left or the right hand, as long as the gesture is preserved. In addition, each of the gestures described in Figure 7 was suggested by neurosurgeons with the criterion of performing the minimum number of movements, so that the process is intuitive and the time needed to learn to manipulate the software is short.


Figure 7
Supported gesture controls for the Diagnostic Imaging Viewer

1) Zoom on images: The zoom function is set by default in the visualization module. This function allows the user to zoom in or out on the axial, sagittal or coronal planes of the images (see Figure 8).


Figure 8
Zoom function. The user has selected a point to Zoom in

As a first step, a point on one of the chosen planes is enabled; the point moves by simply moving the index finger in the physical space defined by the LM device. When an area of interest for visualization is found, it can be selected as the new center of the image by placing four fingers in the physical space of the LM. Once the area is selected, zooming may be done: zooming in is done by moving one finger towards the screen, and zooming out by moving the finger away from the screen (see Figure 7(b)). There is a correspondence between the hand movement and the cursor movement (dot in Figure 9) in the visualization software.


Figure 9
Top: Original image. Bottom: Zoom in on an area that shows a patient's tumor
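One way to realize this mapping is to scale the slice field of view with the finger's depth coordinate. The following is a minimal sketch assuming Slicer's slice-node API; the gain constant and sign convention are illustrative assumptions.

```python
import slicer

def zoom_slice(view_name, finger_z_mm, gain=0.005):
    # Scale the field of view of one slice view ('Red', 'Yellow' or
    # 'Green'); a finger moving toward the screen (negative z) shrinks
    # the field of view, i.e. zooms in.
    node = slicer.app.layoutManager().sliceWidget(view_name) \
                 .sliceLogic().GetSliceNode()
    fov = node.GetFieldOfView()
    factor = max(0.1, 1.0 + gain * finger_z_mm)
    node.SetFieldOfView(fov[0] * factor, fov[1] * factor, fov[2])
```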

To select another anatomical plane, four fingers should be placed in the space defined by the LM, and the change takes place immediately (see Figure 7(b)). In the case shown in Figure 8, the sagittal plane was selected, a point located in a patient's tumor became the center of the image, and the zoom was performed on this area, allowing users to examine this specific area without using the keyboard or any other physical input device.

2) Movement of anatomical planes: The movement of the anatomical planes is another function of the developed module. This feature allows the user to navigate through the anatomical planes by hand movements: moving the hand up and down, right to left, or toward and away from the monitor moves the axial, sagittal and coronal planes respectively, keeping the correspondence between the planes and the movements of the hand (see Figure 7(c) and Figure 10).


Figure 10
The movement of the planes is done by moving the hand over the physical space defined by the Leap Motion
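A minimal sketch of this mapping is shown below, assuming the palm position is applied directly as slice offsets; 'Red', 'Yellow' and 'Green' are 3D-Slicer's default axial, sagittal and coronal views, and the direct pass-through is an illustrative simplification.

```python
import slicer

def move_planes(palm_x_mm, palm_y_mm, palm_z_mm):
    lm = slicer.app.layoutManager()
    # Up/down moves the axial plane, right/left the sagittal plane,
    # toward/away from the monitor the coronal plane.
    lm.sliceWidget('Red').sliceLogic().SetSliceOffset(palm_y_mm)     # axial
    lm.sliceWidget('Yellow').sliceLogic().SetSliceOffset(palm_x_mm)  # sagittal
    lm.sliceWidget('Green').sliceLogic().SetSliceOffset(palm_z_mm)   # coronal
```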

3) Navigation in the three-dimensional space defined by the Leap Motion: The navigation function allows the manipulation of the trackball navigation provided by the visualization software, which can be used to locate desired points for specific views in the three-dimensional scene. This location is given by the position of a finger in the 3D space defined by the LM, and the module informs the user which functionality is running.

The position of the trackball navigation keeps correspondence with the location of the hand in the field of action of the LM: the location of the hand determines the location of the trackball navigation in the 3D space defined in the viewing environment (see Figure 7(d)).
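As a sketch of this correspondence, the hand position can be applied to the camera of the first 3D view; the pass-through mapping and the fixed focal point are illustrative assumptions.

```python
import slicer

def navigate_3d(hand_x_mm, hand_y_mm, hand_z_mm):
    view_node = slicer.app.layoutManager().threeDWidget(0).mrmlViewNode()
    camera_node = slicer.modules.cameras.logic() \
                        .GetViewActiveCameraNode(view_node)
    camera = camera_node.GetCamera()
    # Place the camera at the hand position while keeping it aimed at
    # the scene origin, emulating the trackball behaviour.
    camera.SetPosition(hand_x_mm, hand_y_mm, hand_z_mm)
    camera.SetFocalPoint(0.0, 0.0, 0.0)
```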

3.3 Performance assessment

Tables 1 and 2 show the average percentages of sensitivity and positive predictive value, with the standard deviation (σ), for the different functions developed. The percentages describe a good performance, with an average value exceeding 94%.

Table 1
Performance of the system: Sensitivity

Table 2
Performance of the system: Positive predictive value

4. Discussion

Human-computer interfaces are considered a major advance at the technological and scientific level [4]. The growth of this technology poses new challenges to developers of biomedical applications. The possibilities offered by the Leap Motion positioning device are enormous because of its high accuracy and resolution [7], advantages that were exploited in the software module presented in this paper. The software is at a prototype stage but has important applications in the surgical area, where maintaining the aseptic conditions of the surgical room is a requirement, since it enables the manipulation of diagnostic images without the need to touch a physical input device like a mouse.

The sensitivity and positive predictive values in the recognition of gestures show the good performance of the system for real-time applications and ensure its proper functioning for non-experienced users. Controlling the diagnostic images with the module can fatigue the user after a few minutes of interaction, since the hand must be kept raised, and the working range of the Leap Motion (approximately 1.2 meters from the device) may be a limiting factor when working in an operating room. Personnel who need to manipulate the diagnostic images should stay within the working range of the sensor to control the images and keep their hands free. Nevertheless, the personnel also need to be close to the images to see the details clearly; therefore, 1.2 meters is an acceptable working distance if both issues are taken into account. Additional limitations will arise when clinical trials are done; issues concerning comfort when the hand is raised, as well as movement artifacts, will need rigorous future study.

In spite of recent publications along the same line as our work [14], [15], our software module has been developed to work with 3D-Slicer, which is an open-source software platform for the analysis (including registration and interactive segmentation) and visualization (including volume rendering) of medical images, and for research in image-guided therapy.

The spread of this technology requires the standardization of gestures, so that previously developed applications for viewing images controlled by motion become easier to handle and less training is required when using another application. Further improvements should explore methods such as augmented virtuality, so that the performed gesture is superimposed on the manipulated images, finally achieving a more intuitive interaction.

5. Conclusion

The integration of the LM with a standard image viewer presented in this paper is the starting point for developing new tools for planning, training and surgeon assistance. As a consequence, patient safety will improve during complex procedures.

Where diagnostic imaging such as CT, MRI or three-dimensional reconstructions is needed, these kinds of systems allow the addition of new functions that improve or incorporate better ways to interact with the diagnostic images without the need to touch the computer's hardware.

The software module presented here to control diagnostic images was developed with free software and represents a low-cost tool that can help maintain sterile conditions in surgical environments where the control of diagnostic images is required.

Part of the future work focuses on validating the system under clinical conditions in operating rooms and on extending the gesture library according to physicians' needs, using a large sample of surgeons.

6. Acknowledgment

This work has been supported by the Vicerrectoría de Investigación of Universidad de Antioquia (CODI), Project “Sistema de Entrenamiento en Neurocirugía”, code MDC-10-1-6.

7. References
1. R. Johnson, K. O’Hara, A. Sellen, C. Cousins, and A. Criminisi, “Exploring the Potential for Touchless Interaction in Image-Guided Interventional Radiology,” in 29th Int. Conf. Hum. Factors Comput. Syst., Vancouver, Canada, 2011, pp. 3323-3332.
2. L. C. Ebert, G. Hatch, M. J. Thali, and S. Ross, “Invisible touch-Control of a DICOM viewer with finger gestures using the Kinect depth camera,” J. Forensic Radiol. Imaging, vol. 1, no. 1, pp. 10-14, 2013.
3. A. Fedorov et al., “3D Slicer as an image computing platform for the Quantitative Imaging Network,” Magn. Reson. Imaging, vol. 30, no. 9, pp. 1323-1341, 2012.
4. M. Sato et al., “Development of an image operation system with a motion sensor in dental radiology,” Radiol. Phys. Technol., vol. 8, no. 2, pp. 243-247, 2015.
5. A. Nishikawa et al., “FAceMOUSe: a novel human-machine interface for controlling the position of a laparoscope,” IEEE Trans. Robot. Autom., vol. 19, no. 5, pp. 825-841, 2003.
6. A. B. Albu, “Vision-Based User Interfaces for Health Applications: A Survey,” in Advances in Visual Computing, G. Bebis et al. (eds.). Germany: Springer, 2006, pp. 771-782.
7. M. Spiegelmock, Leap Motion Development Essentials, 1st ed. Birmingham, UK: Packt Publishing Ltd., 2013.
8. P. Garg, N. Aggarwal, and S. Sofat, “Vision based hand gesture recognition,” Int. J. of Computer, Electrical, Automation, Control and Information Engineering, vol. 3, no. 1, pp. 186-191, 2009.
9. K. Sabir and B. Tabor, “The molecular control toolkit: Controlling 3D molecular graphics via gesture and voice,” in IEEE Symposium on Biological Data Visualization (BioVis), Atlanta, GA, USA, 2013, pp. 49-56.
10. S. Webel, M. Olbrich, T. Franke, and J. Keil, “Immersive experience of current and ancient reconstructed cultural attractions,” in Digital Heritage International Congress (DigitalHeritage), Marseille, France, 2013.
11. Cyber Science 3D, Products. [Online]. Available: http://cyberscience3d.com/products/. Accessed on: Mar. 15, 2016.
12. W. Schroeder, K. Martin, and B. Lorensen, The Visualization Toolkit, 3rd ed. Kitware, 1996.
13. Leap Motion, Inc., Recalibrating your Leap Motion Controller. [Online]. Available: https://support.leapmotion.com/hc/en-us/articles/223782328-Recalibrating-your-Leap-Motion-Controller. Accessed on: Mar. 15, 2016.
14. N. Bizzotto et al., “Leap Motion Gesture Control With OsiriX in the Operating Room to Control Imaging: First Experiences During Live Surgery,” Surg. Innov., vol. 21, no. 6, pp. 655-656, 2014.
15. L. Di Tommaso, S. Aubry, J. Godard, H. Katranji, and J. Pauchot, “A new human machine interface in neurosurgery: The Leap Motion (R). Technical note regarding a new touchless interface,” Neurochirurgie, vol. 62, no. 3, pp. 178-181, 2016.
Author notes

* Corresponding author: Andrés Felipe Botero-Ospina, e-mail: felipe.botero.ospina@gmail.com


