
ROBOTIC SURGERY

TECHNICAL PAPER PRESENTATION

STANLEY STEPHEN COLLEGE OF ENGG & TECH.


Panchalingala, Kurnool 518 004, A.P. (Approved by AICTE, New Delhi) (Affiliated to JNTU, Hyderabad)

PRESENTED BY T SREE SAHITHI


(REGD NO: 08BM1A0453) (DEPARTMENT: ECE) Email: sreesahithi453@gmail.com &

K SINDU PRIYA
(REGD NO: 08BM1A0424) (DEPARTMENT: ECE) Email: k.sindupriya424@gmail.com

ABSTRACT:
Surgery now uses robotic and image-processing systems to interactively assist the medical team, both in planning the surgical intervention and in its execution. The objective of this new technique is to enhance the quality of surgical procedures by minimizing their side effects (smaller incisions, less trauma, more precision, ...), thus increasing patient benefit while decreasing surgical cost. These techniques are being successfully introduced in several areas of surgery: neurosurgery, orthopaedics, microsurgery, and cardiovascular and general surgery, among others. Three main steps can be identified in a typical robotic surgery intervention: data acquisition and subsequent planning, intra-operative assistance, and post-operative patient control. In the pre-operative phase, a patient-specific model of the rigid (e.g. bones) and deformable (e.g. the heart) anatomical entities involved in the surgical act has to be built. For this, several medical imaging techniques (MRI, CT, ultrasound, etc.) are used, with which the anatomical structures are detected, located and modelled. At the same time, the mechanical model of the robotic system is fused into an overall geometric model. This is used to describe and simulate the different potential problems that may occur during the intervention. The results obtained in the planning phase are then calibrated and put in correspondence with the patient in the intra-operative situation. As a consequence, the robotic system is able to provide interactive assistance and guidance, and to constrain the movements of the surgeon so that a possibly pre-defined procedure (e.g. a neurosurgical biopsy) is performed with the desired precision. In some cases, the robot may behave autonomously in order to carry out a dedicated and fixed part of the procedure (e.g. thighbone drilling for artificial hip installation). As for tele-operated robots, the surgeon, at a master console, benefits from an enhanced (sometimes 3-D) view of the organs. In addition, augmented reality allows the overlay, in real time, of the patient's pre-operative data during the intervention. The surgeon's movements may be scaled down to increase precision, and smoothed to remove hand tremor, by virtue of a decoupled master/slave unit.
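The last point lends itself to a short illustration. The sketch below shows how a decoupled master/slave unit might scale down and low-pass filter the surgeon's hand motion before it reaches the slave manipulator. It is a minimal sketch under assumed values (10:1 scale factor, 1 kHz loop rate, 5 Hz cutoff), not any particular system's control loop:

```python
import math

# Minimal sketch of master/slave motion conditioning (illustrative values):
# commands are scaled down for precision and low-pass filtered to suppress
# physiological hand tremor, which lies roughly in the 8-14 Hz band.

SCALE = 0.1          # 10:1 motion scaling (assumed)
SAMPLE_HZ = 1000.0   # control-loop rate (assumed)
CUTOFF_HZ = 5.0      # pass deliberate motion, attenuate tremor (assumed)

# First-order low-pass filter coefficient for the chosen cutoff frequency.
_alpha = 1.0 - math.exp(-2.0 * math.pi * CUTOFF_HZ / SAMPLE_HZ)

def condition_motion(master_positions):
    """Yield slave set-points from raw master hand positions (mm)."""
    filtered = None
    for x in master_positions:
        filtered = x if filtered is None else filtered + _alpha * (x - filtered)
        yield SCALE * filtered  # scaled, tremor-smoothed slave command

trace = [10.0, 10.2, 9.9, 10.1, 10.0]          # mm, with simulated jitter
print([round(p, 4) for p in condition_motion(trace)])
```

With SCALE = 0.1, a 10 mm hand movement produces a 1 mm instrument movement, while the filter attenuates the tremor band well above its cutoff.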

INRIA Sophia Antipolis (Chir Robotics Medical Team) is directing its efforts towards this specific application field. In particular, a recent French national research grant, "Telemedecine", carried out in collaboration with Prof. Alain Carpentier's heart surgery team, has been initiated and is now in progress. Fundamental research in key areas of robotic surgery is being pursued: the modelling of deformable organs (e.g. the heart), the planning and simulation of robotic procedures (e.g. using the da Vinci robotic system), and safe, real-time integration with augmented reality. Our major concern is having safe and robust software, especially as regards the software integration of various disciplines. Several efforts in computer science have been devoted to developing modular and safe control mechanisms for the software components used in robotic systems. This is the case of the Orccad/Maestro initiative at INRIA, which is a software integration environment for the design and validation of complex robotic tasks. The fundamental properties required of such software systems can be validated beforehand, using formal verification of their logical behaviour, thus enforcing system safety. The project's second priority is the demonstration of the validity of the developed research and tools through experimentation (e.g. in coronary artery bypass in heart surgery), and through industrial transfer to specialized partners.
- Introduction
- Development of current systems
- The commercialization years
- Challenges and future systems
- Conclusion
- References

The use of robotics has been emerging for about 75 years; however, it is only during the past 5 years that the potential of robotics has been recognized by the surgical community as a whole. This personal perspective is intended to chronicle the development of robotics for the general surgical community and the role of the military medical research effort, and to document many of the major programs that contributed to the current success.

INTRODUCTION

For about 75 years, robots had been the sole purview of science fiction. Their descriptions ranged from the dumb machine that replaced monotonous work, as first described by the Czech playwright Karel Capek in the classic play "Rossum's Universal Robots (RUR)" in 1921, to the ultra-intelligent anthropomorphic robots of Isaac Asimov's classic science fiction books of the 1950s, to the familiar R2-D2 and C-3PO of the Star Wars films beginning in the 1970s, to the incredible cyborgs of the Terminator film series. Depictions of medical robots, however, were a rare exception (a few scenes in Star Wars).

Robots gradually made their way into factories for dangerous, repetitive, high-accuracy tasks (automobile assembly), into the nuclear industry for handling hazardous wastes (figure 1), into work requiring great dexterity and precision (computer chip assembly), and into service as delivery robots, such as those of Joseph Engelberger (figure 2). None of these was anthropomorphic; all were designed to provide functionality. While many could exceed human performance in a specific dexterous task, such as the MIT-Utah Hand (figure 3), or exceed human sensory perception, none achieved even the minimal intelligence of a two-year-old child. Many attempted expertise in a specific domain, with various recognition capabilities; however, these robots were never able to demonstrate cognitive abilities. This is the background upon which the origins of medical robotics arose. The earliest conceptions of surgical robotics came from Scott Fisher, PhD (1) (at the National Aeronautics and Space Administration (NASA) Ames Research Center, Palo Alto, CA) and Joseph Rosen, MD (Plastic Surgery, Stanford University, Palo Alto, CA) in the mid to late 1980s. At that time the NASA-Ames group led by Michael McGreevy, PhD and Steve Ellis, PhD was working in virtual reality (VR). This group was joined by Scott Fisher and Joe Rosen as the first head-mounted display (HMD) (figure 4) was being developed as a way to display the massive amounts of data being returned from NASA's planetary exploration missions of Voyager and others. At this time Jaron Lanier, who coined the term "virtual reality" (VR), contributed the DataGlove and object-oriented programming (his company, VPL Inc., was an abbreviation for Visual Programming Language), which made it possible to interact with three-dimensional (3-D) virtual scenes. Scott Fisher and Joe Rosen integrated these ideas of VR interactivity and applied them to surgical robotics. Their earliest concepts (figure 5) envisioned telepresence surgery (a term coined by Scott Fisher, who later started his own company, Telepresence Research Inc.) using the DataGlove as a method of controlling the remote robotic hands. The NASA-Ames team had expertise in VR, but not robotics. Joe Rosen and Scott Fisher took their vision to Phil Green, PhD at Stanford Research Institute (SRI, later changed to SRI International after acquiring Sarnoff Research Institute of Princeton, NJ). Phil Green was head of the biomechanics section at SRI and was working with other roboticists such as John Hill, PhD, Joel Jensen, PhD and Ajit Shah, PhD. Tom Piantanida, PhD provided the human-interface technology expertise and was SRI's expert in the emerging VR field. With Joe Rosen's clinical input, the first direction for development was an extremely dexterous telemanipulator to greatly enhance vascular and nerve anastomoses for hand surgery. In keeping with the VR and telepresence concept, the design focused upon an intuitive interface (figure 6) which was able to give surgeons the sense that they were operating upon a hand directly in front of their eyes, when it was in fact located on the other side of the room. Scott Fisher was fond of saying that, although he could not teleport (as in "Beam me up, Scotty"), he could send his presence to a remote site. In 1988-89, the parallel development of laparoscopic cholecystectomy emerged on the surgical front. Jacques Perrisat, MD of Bordeaux, France presented a videotape of a laparoscopic cholecystectomy to the Society of American Gastrointestinal Endoscopic Surgeons (SAGES) annual meeting in Atlanta, GA. The profound effect of the introduction of laparoscopic surgery on the mainstream surgical community (in addition to the pioneering procedures performed by Joe Eddy Reddick, MD, Douglas Owens, MD, Barry McKearnen, MD and George Berci, MD) caused an explosion in the use of laparoscopic cholecystectomy. It soon became apparent that, although laparoscopic surgery was of great benefit to the patient, it created enormous difficulty for the surgeon: a degraded sense of touch, the loss of natural 3-D visualization (2), and impaired dexterity, principally due to the fulcrum effect of the instruments. While Joe Rosen was beginning animal and then early clinical trials with the Green Telepresence Surgery System (as it was being called), Richard Satava, MD began working first with the NASA-Ames group and was then introduced to the SRI telepresence team. To a general surgeon and surgical endoscopist, it was immediately evident that the telepresence system provided a number of solutions to the problems of laparoscopic surgery (3). In response to Rick Satava's suggestions, Phil Green began devoting the telepresence effort towards macroscopic surgery (4), and specifically to improving upon laparoscopic surgery, in addition to the microscopic surgery for Joe Rosen in hand surgery. A videotape of the telepresence surgery system was demonstrated to COL Russ Zajtchuck, MD and Donald Jenkins, PhD of the Borden Institute of Walter

Reed Army Medical Center. They brought this to the attention of the Surgeon General of the Army, Alcide LaNoue, which resulted in the transfer of Satava from Ft. Ord, CA to the Pentagon's Advanced Research Projects Agency (ARPA - which was developing the ARPA-net that later evolved into the Internet). Under the Surgeon General's support, ARPA (later to become DARPA in 1993) was requested to begin a program in Advanced Biomedical Technologies, to include telepresence surgery (as it was now being called). Donald Jenkins was requested to be co-program manager in this effort, which over the next 7 years funded a majority of the projects in telepresence and robotic surgery. A third effort was also beginning independently in the early 1990s. Dr. Hap Paul, DVM and William Barger, MD (orthopedic surgeon) began collaborating with Russell Taylor, PhD of IBM's T.J. Watson Research Center (5) to develop a robotic system (based upon the IBM Puma arm) that would be able to be used for hip replacement surgery (many breeds of dogs, including German shepherds and golden retrievers, came to Hap Paul for replacement of fractured and dislocated hips). The research which this team conducted resulted in the first robotic surgical device, named RoboDoc (figure 7). This was a modification of the basic principles of the Puma arm which enabled pre-operative planning of the procedure (to include matching the prosthesis exactly with the femur that would accept it). RoboDoc was able to core out the femoral shaft with 96% precision, while the standard hand broach provided an accuracy of only about 75%. Dr. Barger then took the system to clinical trials in humans after Hap Paul proved its efficacy clinically in his veterinary practice, and RoboDoc is now a commercial product. Subsequently, other orthopedic surgeons, such as Dr. Anthony DiGioia, MD (6), are developing systems such as HipNav for replacement of the knee and hip joints. On the other side of the Atlantic in this same early timeframe, two teams were producing early prototype surgical robotic systems, one in each of the categories above. One system, by Sir John Wickham, MD and Brian Davies, PhD of Guy's Hospital in London (7), was similar to RoboDoc in that it was used for precise coring; but, as a urologist, John Wickham developed the system to assist in trans-urethral resection of the prostate (TURP). This was a mechanically constrained system which used a robotic arm similar to the Puma and RoboDoc. However, for patient safety there was a large circular metal ring (figure 8) through which the resection instrument was passed and which prevented the robotic arm from moving out of the precise field of the prostate; a software sketch of this constraint idea is given below. After successfully proving the accuracy of the system on potatoes and then on a few patients in a clinical trial, John Wickham was given permission to conduct studies on animals to show efficacy and safety.
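The safety ring is, in effect, a mechanical "virtual fixture". Purely as an illustration of the same idea in software (the geometry, names and limits below are hypothetical, not those of Wickham's actual controller), each commanded tool position can be clamped to a pre-defined safe volume:

```python
import math
from dataclasses import dataclass

# Illustrative software analogue of a mechanical constraint ring: commanded
# tool positions are clamped to a safe cylindrical volume around the planned
# working axis. All geometry here is hypothetical.

@dataclass
class SafeCylinder:
    radius_mm: float   # maximum lateral excursion from the axis
    depth_mm: float    # maximum insertion depth along the axis

    def clamp(self, x, y, z):
        """Return the nearest point inside the safe volume (axis = z)."""
        r = math.hypot(x, y)
        if r > self.radius_mm:                 # lateral breach: project back
            x, y = (x * self.radius_mm / r, y * self.radius_mm / r)
        z = min(max(z, 0.0), self.depth_mm)    # limit insertion depth
        return x, y, z

zone = SafeCylinder(radius_mm=15.0, depth_mm=40.0)
print(zone.clamp(20.0, 0.0, 55.0))  # -> (15.0, 0.0, 40.0): motion is contained
```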

The second system being developed in Europe was a collaboration between Hermann Rinnsland, PhD of the Forschungszentrum Karlsruhe (Karlsruhe Nuclear Research Center, Karlsruhe, Germany) and Gerhard Buess, MD of the University of Tuebingen, in Tuebingen, Germany (8). Hermann Rinnsland was head of the group which developed Germany's telemanipulation robotics for handling nuclear waste. This was a highly dexterous system, similar to the SRI system, but with significant differences, especially in the surgeon's workstation. The system, called the Advanced Robot and Telemanipulator System for Minimally Invasive Surgery (ARTEMIS) (figure 9), had remote telemanipulators like the SRI system, but the surgeon's console had the hand input devices "over the shoulder" to provide extra manipulation abilities. The system was very efficient; however, after the first prototype was developed and demonstrated to be effective, funding for the Forschungszentrum project was not renewed, and this promising system has yet to progress into the commercial phase. Thus the state of the art in robotic surgery in 1993 was that of the systems described above. However, the military (through the DARPA program) began to dramatically increase its attention to the Green Telepresence Surgery System in the following years, until the close-out of the DARPA program in 1999 (see below for the military system). All during this timeframe (late 1980s to 1993), the neurosurgical and radiological communities were investigating robotics to precisely position probes, resection instruments, ablation devices and other surgical tools, principally for minimally invasive brain surgery. Frank Jolesz, MD of Brigham and Women's Medical Center and William Lorensen, PhD of the General Electric Research Center were pioneers in the open MRI system (9) for real-time updates of brain images in the initial real-time, image-guided neurosurgical systems. Neurosurgeon Richard Bucholz, MD of St. Louis University Medical Center (10) was also independently developing a tracking system, called the Stealth Station, that could be used during neurosurgery to provide accurate stereotactic navigation. This effort resulted in an image-guided system for real-time tracking of instruments in surgery. In 1996, Daniel Karron, PhD of New York University, New York (11) developed an audio system that provided audio feedback depending upon proximity to the intended target: in essence, audio navigation assistance for the Stealth Station. This is one of the only systems designed to provide synesthesia, the substitution of one sense (audio) for another (vision), in order to improve accuracy.
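As a rough illustration of such audio guidance (a generic sketch, not Karron's actual algorithm, which is not described here), distance to the planned target can be mapped to the pitch and repetition rate of a tone, so the surgeon "hears" the approach:

```python
# Hedged sketch of a proximity-to-audio mapping for navigation assistance.
# The mapping constants are assumptions for illustration, not the values
# used in the actual Stealth Station audio add-on.

def proximity_tone(distance_mm, max_range_mm=50.0):
    """Map tip-to-target distance to (pitch in Hz, beeps per second)."""
    closeness = max(0.0, 1.0 - min(distance_mm, max_range_mm) / max_range_mm)
    pitch_hz = 220.0 + 660.0 * closeness      # rises from 220 Hz to 880 Hz
    beep_rate = 1.0 + 9.0 * closeness         # speeds up from 1/s to 10/s
    return pitch_hz, beep_rate

for d in (50.0, 25.0, 5.0, 0.0):
    print(d, proximity_tone(d))   # pitch and rate climb as the tip closes in
```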

Beginning in July 1992, Rick Satava and Don Jenkins developed the Defense Advanced Research Projects Agency (DARPA) Advanced Biomedical Technologies (ABMT) program. The military imperative was to save soldiers who had been wounded on the battlefield, using advanced technologies. Review of the Wound Data and Munitions Effectiveness Team (WDMET) database of the casualties of the Vietnam war (12) revealed that, although great improvement had occurred in overall mortality, for soldiers with life-threatening wounds in the far-forward battlefield there had been little change since as early as the Civil War. As a rough generalization, one-third died of head or massive injuries, about one-third died from wounds (principally exsanguinating hemorrhage) which were estimated to be survivable with today's technology, and one-third survived. A comprehensive program was initiated, utilizing advanced sensor, robotics, telemedicine and virtual reality systems. One of the prime concepts was to implement Scott Fisher and Joe Rosen's idea to "bring the surgeon to the wounded soldier - through telepresence". The Green Telepresence Surgery System was seen as a method of providing surgical care right on the battlefield to save those soldiers who would otherwise exsanguinate (13). It was envisioned that the robotic manipulator arms would be mounted in a vehicle for Medical Forward Advanced Surgical Treatment (MEDFAST). The vehicle chosen was a Bradley Fighting Vehicle 577A (figure 10). The surgical workstation was to be placed in the rear-echelon Mobile Advanced Surgical Hospital (MASH); when a soldier was wounded, the medic would place him into the MEDFAST, and the surgeon (at the telesurgery unit in the MASH) and the medic in the MEDFAST would together perform just enough surgery to stop the hemorrhage (the current concept of "damage-control surgery"), so that the casualty could be transported as soon as possible back to the MASH - but now alive, instead of exsanguinating before arrival. In 1996, a military field test was conducted by SRI International which successfully demonstrated that surgery could be performed over a 5 kilometer distance with a microwave telecommunication link between a MASH hospital and the MEDFAST. However, the battlefield of the 1990s was changing from conventional, open battlefields to the close quarters of urban terrain, which was ill-suited to the MEDFAST vehicle. Although successfully demonstrated on the animal model, the system has not yet been implemented for battlefield casualty care. A significant number of other robotic surgery projects were funded by DARPA to provide solutions for many of the difficult issues. Thomas Sheridan, PhD of the Massachusetts Institute of Technology (MIT) (14) was tackling the latency problem: the travel time of the electronic signal from the moment the handle of the instrument on the workstation moves until the signal arrives at the tip of the manipulator. It is known that humans can compensate for a latency (delay) of up to 200 milliseconds (msec), after which the delay is too great for accuracy. Tom Sheridan attempted to solve the problem with predictive algorithms, and was successful in demonstrating tolerance of delays up to 300 msec.
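A common form of such predictive compensation (a generic sketch, not necessarily Sheridan's method, which is detailed in reference 14) is to extrapolate the master's recent motion forward by the known delay, so the slave acts on a prediction of where the hand will be rather than where it was:

```python
# Generic latency-compensation sketch: linear extrapolation of the master
# position forward by the signal delay. All constants are illustrative.

DELAY_S = 0.3       # assumed delay to compensate (300 msec)
SAMPLE_S = 0.01     # assumed control period (100 Hz)

def predict(prev_pos, curr_pos, delay_s=DELAY_S, dt=SAMPLE_S):
    """Predict the master position delay_s seconds ahead from two samples."""
    velocity = (curr_pos - prev_pos) / dt   # finite-difference velocity
    return curr_pos + velocity * delay_s    # first-order extrapolation

# Example: hand moving steadily at 5 mm per sample (0.5 m/s).
print(predict(prev_pos=100.0, curr_pos=105.0))  # -> 255.0 mm predicted
```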

Other investigators, such as Alberto Rovetta, PhD of Milan, Italy, tried to work around the problem by having identical software programs at the two remote sites, so that the only thing transmitted is the hand signals. In 1993, Alberto Rovetta (15) was able to successfully perform a biopsy on a pig liver with the surgeon's station at the NASA Jet Propulsion Laboratory (JPL) in Pasadena, CA and the manipulators and pig liver in his laboratory in Milan. The time lag using a satellite was over 1200 msec (1.2 seconds). (Note: radio propagation to a geosynchronous satellite 22,000 miles above the earth and back accounts for roughly a quarter of a second of this; the remainder presumably came from processing and additional network legs.) Other contributions came from Kenneth Salisbury, PhD, Marc Raibert, PhD, and Robert Playter, PhD of the MIT Artificial Intelligence and Robotics Laboratory, under the direction of Rodney Brooks, PhD. This group was working on haptics (the sense of touch) and developed an accurate force-feedback system for robotic devices (16). This became a commercial product called "The Phantom" (figure 11), which has become the industry standard for providing haptics in a virtual environment and the basis for robotic systems. Other researchers from MIT working to improve telepresence surgery in one fashion or another included Blake Hannaford, PhD and David Brock, PhD.

The commercialization of robotic surgery started with the RoboDoc system in 1992-93, as indicated above. In spite of the exceptional performance of RoboDoc, the system went through a prolonged approval process with the Food and Drug Administration (FDA). For direct surgical manipulation in laparoscopic surgery, however, the first application was control of the camera. With initial seed funding from DARPA, Yulun Wang, PhD began developing the Automated Endoscopic System for Optimal Positioning (AESOP) (17) in his newly formed company, Computer Motion, Inc. This won the medical and surgical community's acceptance of robotics as an effective assistive device. This system was the first robotic device to receive FDA approval, and it launched the robotics-in-general-surgery movement. During this timeframe, image-guided surgery systems began commercialization with both the NeuroMate in Switzerland (figure 12) and Richard Bucholz's Stealth system. These systems were specifically developed for neurosurgery, as was the General Electric open magnetic resonance imaging (MRI) system that was popularized by Ferenc Jolesz and Ron Kikinis of Brigham and Women's Hospital (9). While AESOP was being marketed to the surgical community, Fredrick Moll, MD licensed the SRI Green Telepresence Surgery rights and started Intuitive Surgical, Inc. After extensive redesign from the ground up, the daVinci surgical system (figure 13) was produced and introduced. In April 1997, the first robotic surgical (tele-operation) procedure on a patient was performed in Brussels, Belgium by Jacques Himpens, MD and Guy Cardiere, MD (18). Within a year, Computer Motion had fielded their own system, Zeus (figure 14). The two systems are similar, in that they have remote manipulators which are controlled from a surgical workstation. One major difference is in the surgical workstations. The daVinci system has a stereoscopic image which is displayed just above the surgeon's hands, so it appears as if the surgical instrument tips are an extension of the handles; this gives the impression that the patient is actually right in front of the surgeon (or, conversely, that the surgeon's presence has been transported to right next to the patient - hence the term tele-presence). The Zeus system is ergonomically designed, with the monitor comfortably in front of the surgeon's chair and the instrument handles in the correct eye-hand axis for maximum dexterity. There is no illusion of being at the patient's side; rather, there is the sense of an operation at a distant site but with enhanced capabilities. Initially, the daVinci system was the only one with an additional degree of freedom, a "wrist"; however, the Zeus system has recently introduced instruments with a wrist. The concept of dexterity enhancement was suitable for the emerging laparoscopic surgery field, and especially for minimally invasive cardiac surgery applications. Although the original Green Telepresence Surgery System was designed for remote trauma surgery on the battlefield, the commercial telepresence systems were envisioned for delicate cardiac surgery, specifically coronary artery bypass grafting. It was believed that the robotic systems would allow minimal-access surgery on the beating heart. This is to be achieved by first blocking and then overpacing the heart, and gating the motion of the robotic system to the heart rate (a simple sketch of such gating follows). While the minimal-access approach has been achieved, the "virtual stillness" of the gating method is still in development.
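As a conceptual sketch only (production gating methods are more sophisticated and, as noted, still in development), gating can be thought of as enabling fine robot motion only during a quiescent window of each cardiac cycle, triggered from the ECG R-wave. The window fractions below are assumptions, not clinical values:

```python
# Conceptual cardiac-gating sketch: permit fine robot motion only during an
# assumed quiescent window of the cardiac cycle (e.g. mid-diastole).

WINDOW = (0.55, 0.85)  # assumed quiescent fraction of the R-R interval

def motion_enabled(t_now, last_r_wave, rr_interval):
    """True if a commanded motion may be executed at time t_now (seconds)."""
    phase = ((t_now - last_r_wave) % rr_interval) / rr_interval
    return WINDOW[0] <= phase <= WINDOW[1]

# At 75 beats/min the R-R interval is 0.8 s; check a few instants.
for t in (0.1, 0.5, 0.6):
    print(t, motion_enabled(t, last_r_wave=0.0, rr_interval=0.8))
```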

The challenge of extremely accurate and dexterous robotics was taken up for ophthalmologic surgery, and specifically for laser retinal surgery. The blood vessels on the retina are 25 microns apart, while the limit of human performance is an accuracy of approximately 200 microns. Stephen Charles, MD of Baptist Hospital and MicroDexterity Systems, Inc. (MDS) in Memphis, TN collaborated with a brilliant team at the NASA Jet Propulsion Laboratory (JPL), which included Paul Schenker, Hari Das, Edward Barlow and others, to develop the Robot Assisted MicroSurgery (RAMS) system (figure 15). This system included three basic innovations: 1) tracking of the saccades of the eye (200 Hz), so that the video image was perfectly still on the video monitor; 2) motion scaling of 100 to 1, giving the system 10 micron accuracy; and 3) tremor reduction (between 8 and 14 Hz), removing any tremor or inaccuracy. Today, any surgeon could sit down at the microdexterity system and perform laser surgery with 10 micron accuracy, that is, 20 times beyond the accuracy of the unaided human hand. The use of robotics for remote surgery was limited to short distances because of the latency issue. It was only recently (2001) that the Zeus system was used for a trans-Atlantic robotic surgery operation between New York City and Strasbourg, France by Jacques Marescaux and Michel Gagner. The limitation on long-distance surgery is the latency, or delay, which cannot exceed about 200 msec. At longer delays, the time from the hand motion of the surgeon until the action of the robot's end effector (instrument) is so great that the tissue could move and the surgeon would cut the wrong structure. In addition, with delays greater than 200 msec there is conflict within the system such that the robotic system becomes unstable. However, Marescaux and Gagner employed a dedicated high-bandwidth Asynchronous Transfer Mode (ATM) terrestrial fiber-optic cable and were able to conduct the surgery with a delay of only 155 msec. Thus, with a very broadband terrestrial fiber-optic connection, it is possible to perform remote surgery over thousands of miles. When the Next Generation Internet, with its 45 Mbyte/sec fiber-optic cabling, becomes universally available, such remote surgery can become a reality in many places in the world.
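The 200 msec budget can be checked against simple propagation physics. The sketch below is straightforward speed-of-light arithmetic, ignoring switching and coding overheads and assuming a typical refractive index for fiber; it shows why a geostationary satellite link cannot meet the threshold while a terrestrial fiber route of a few thousand kilometers can:

```python
# One-way propagation delay estimates for tele-surgery links (lower bounds:
# real links add coding, switching and video-processing time on top).

C_VACUUM_KM_S = 299_792.0       # speed of light in vacuum
FIBER_INDEX = 1.47              # assumed refractive index of optical fiber
GEO_ALTITUDE_KM = 35_786.0      # geostationary orbit altitude (~22,000 mi)

def satellite_round_trip_ms():
    """Command out and video back, each via the up-and-down radio path."""
    one_leg = 2 * GEO_ALTITUDE_KM / C_VACUUM_KM_S   # up + down, seconds
    return 2 * one_leg * 1000

def fiber_round_trip_ms(route_km):
    """Round trip through glass, where light travels at c / FIBER_INDEX."""
    return 2 * route_km * FIBER_INDEX / C_VACUUM_KM_S * 1000

print(satellite_round_trip_ms())     # ~477 ms: already over the 200 ms limit
print(fiber_round_trip_ms(7000.0))   # ~69 ms for an assumed ~7,000 km route
```

The measured 155 msec New York to Strasbourg delay is consistent with this: roughly 70 msec of light-in-glass propagation, with the remainder consumed by coding and video processing.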

CHALLENGES AND FUTURE SYSTEMS

The current systems are just the beginning of the robotics revolution. All of the systems have in common a central workstation from which the surgeon conducts the surgery.

This workstation (see figures 13, 14) is the central point which integrates the entire spectrum of surgery (figure 16). Patient-specific pre-operative images can be imported into the surgical workstation for pre-operative planning and rehearsal of a complicated surgical procedure, as is being developed by Jacques Marescaux, MD (19). Figure 17 illustrates a patient's liver with a malignant lesion, and the methods of visualization, pre-operative planning and procedure rehearsal. At the time of surgery, this image can be imported into the workstation for intra-operative navigation. It can also be used as a stand-alone workstation for surgical simulation, for training of surgical skills and operative techniques. Thus, on an invisible level, the challenge is going to be to integrate the software of all the different elements: importing images, pre-operative planning tools, automatic segmentation and registration for data fusion and image guidance, and sophisticated decision and analysis tools to provide automatic outcomes analysis of the surgical procedure. (A sketch of the registration step, the piece that ties pre-operative images to the patient on the table, is given below.)
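Registration aligns pre-operative image coordinates with the patient's actual position. As an illustrative sketch (the classic point-based rigid registration used with fiducial markers, not the specific algorithm of any system named above), the following uses the SVD-based Kabsch method:

```python
import numpy as np

# Point-based rigid registration (Kabsch/SVD): find rotation R and
# translation t mapping image-space fiducials onto patient-space fiducials.

def register_rigid(image_pts, patient_pts):
    """Both inputs are (N, 3) arrays of corresponding fiducial positions."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # proper rotation (det = +1)
    t = cp - R @ ci
    return R, t

# Toy check: patient space is the image space rotated 90 degrees about z,
# then shifted; the recovered transform should match exactly.
img = np.array([[0., 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
Rz = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])
pat = img @ Rz.T + np.array([5., 5, 0])
R, t = register_rigid(img, pat)
print(np.allclose(R, Rz), np.round(t, 6))           # -> True [5. 5. 0.]
```

In practice the residual error at the fiducials (the fiducial registration error) is what an image-guided system reports as its accuracy check before navigation begins.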

On the technical side, few if any of the systems include the full range of sensory input (e.g. sound, haptics or touch), and there are but a few simple instruments (end effectors). Next-generation systems will add the sense of touch and improved instruments. The instruments will need to include both standard mechanical instruments and energy-directed instruments for electrocoagulation, high-intensity focused ultrasound, radiotherapy, desiccation, ablation, etc. In addition, advanced diagnostic systems, such as ultrasound, near-infrared imaging and confocal microscopy, can be mounted on the robotic systems and used for minimally invasive diagnosis. The systems will become smaller, more robust (not requiring a full-time technician) and less expensive, and they will adapt to the requirements of the other surgical subspecialties. As robotics evolves, the systems will become more intelligent, eventually performing most, if not all, of an operative procedure. In current systems such as RoboDoc and NeuroMate, the surgeon pre-plans the operation on patient-specific CT scans. This plan is then programmed into the surgical robot, and the robot performs precisely what the surgeon would have done if (s)he were performing the operation, but with precision and dexterity above human limitations. This is a trend which will continue, with the surgeon planning more and more of the operation which the robot can effectively and efficiently carry out. The robot must remain under the complete control of the surgeon, so that the surgeon can take over if something unexpected were to occur. It is conceivable that in the distant future, under special circumstances such as remote expeditions or a NASA mission to Mars, robots will perform the entire surgical procedure. In the nearer future, however, hybrid hardware-software systems will be developed that perform complete portions of an operation, such as an anastomosis, nerve grafting, etc. These systems will require a complicated infrastructure, and the operating room (OR) of the future will have to accommodate them. The unique requirements for these systems include a very robust information infrastructure, access to information from the outside (such as x-rays, images and consultations), voice control of the system by the surgeon, and micro-miniaturization of the systems. Perhaps the OR will evolve to resemble more of a "control room" because of the large number of electronic systems which need to be controlled. An interesting product involved with patient monitoring and control is the Life Support for Trauma and Transport (LSTAT) (figure 18), which is in essence an entire intensive care unit (ICU). Although the LSTAT was developed by the military as an evacuation system for the battlefield (the "trauma pod" from Robert Heinlein's "Starship Troopers"), it contains complete monitoring and administration systems and telemedicine capability, can be docked and undocked without removing the patient, and is fully compatible with current tele-robotic systems. A system similar to this may be incorporated into the OR of the future (figure 19) to facilitate patient anesthesia, surgery and transportation while maintaining continuous monitoring. There has been speculation about the use of nanotechnology to inject minuscule robots into the blood stream to migrate, or be navigated, to the target. Numerous concept diagrams show mechanical types of systems that either are controlled by a surgeon or are autonomous. While interesting conceptually, there is little practical understanding of how to actually construct such complete, complex systems on a molecular level and, more importantly, how to control them. The first generations of these systems will not be visible to the eye, will probably be manufactured chemically by the billions, and will not be directly controlled but rather, like drug design, be programmed to recognize certain cell or tissue types to deliver medication or cause ablation. Micro-electro-mechanical systems (MEMS) are frequently discussed in conjunction with nanotechnology; however, these systems are one thousand times larger (1.0 x 10^-6 meters) than nanotechnology (1.0 x 10^-9 meters). Such systems would be visible as very tiny robots which could be directly controlled by a surgeon. However, as the technology scales down in size, it also scales down in the power or force which can be generated, making it extremely difficult to actually perform work at this scale. While there are a number of MEMS robots (figure 20), none are actually performing any significant work, let alone any activity resembling a surgical procedure. Nevertheless, MEMS and nanotechnology are areas of future potential for surgical robotics which will take decades to develop and perfect. It is essential for surgeons to be aware of these technologies, and others such as quantum mechanics, biomimetic materials and systems, tissue engineering and genetic programming, in order to anticipate the great revolution that is developing.

CONCLUSION

Robotics has established a foothold in surgical practice. Commercial systems have been available for a few years, and their value is undergoing stringent scientific evaluation in randomized clinical trials. While the initial reports are promising, more long-term, evidence-based outcomes will be necessary to prove their efficacy. More importantly, it will be necessary to prove their cost-effectiveness, in addition to the other significant non-technical issues of accommodating the operating rooms, training OR personnel and surgeons, and acceptance of the technology. However, the future is promising because of the great potential of these systems to extend the capabilities of surgical performance beyond human limitations. In addition to the typical robotic systems that are available today,

next-generation systems, using the emerging MEMS and nanotechnology fields, will extend these capabilities even further. This nascent field will provide fruitful and rewarding research for decades to come, with the promise of greatly improving the quality of surgical care for patients.

ILLUSTRATIONS
Figure 1. The Argonne National Laboratory dexterous manipulator (from Johnson and Corliss, Archives of Argonne National Laboratory, 1967).
Figure 2. HelpMate, a medical delivery robot of Joseph Engelberger (courtesy of HRI website, http://www.pyxis.com).
Figure 3. MIT-Utah dexterous hand (courtesy Stephen Jacobsen, PhD, University of Utah, Salt Lake City, UT).
Figure 4. Scott Fisher wearing one of the first head-mounted displays at the NASA Ames Research Center virtual reality laboratory, ca. 1985 (courtesy Dr. Scott Fisher, PhD, Telepresence Research, Inc., Palo Alto, CA).
Figure 5. Earliest concept of telepresence surgery, from drawings by Drs. Joseph Rosen, MD and Scott Fisher, PhD, ca. 1986 (courtesy Dr. Joseph Rosen, MD, Dartmouth University Medical Center, Hanover, VT).
Figure 6. Initial telepresence surgery workstation showing the intuitive interface, ca. 1987 (courtesy Dr. Philip Green, PhD, SRI International, Menlo Park, CA).
Figure 7. RoboDoc, the first robotic surgical system, used to core the femoral shaft in total hip replacement, ca. 1986 (courtesy Hap Paul, DVM, University of California-Davis, Sacramento, CA).
Figure 8. The trans-urethral resection of the prostate (TURP) robot, with a mechanically constraining ring to ensure safety, ca. 1986 (courtesy Sir John Wickham, MD, Guy's Hospital, London).
Figure 9. The ARTEMIS robotic surgery system, showing the remote manipulators and surgical workstation, ca. 1989 (courtesy Gerhard Buess, MD, Tuebingen University Medical Center, Tuebingen, Germany).
Figure 10. Medical Forward Advanced Surgical Treatment (MEDFAST) vehicle (courtesy Anthony Aponick, Foster-Miller, Inc., Waltham, MA, 1995).
Figure 11. Phantom haptic input device (courtesy Marc Raibert, PhD, Boston Dynamics Inc., Cambridge, MA, 1995).
Figure 12. NeuroMate neurosurgical system, originally a Swiss development, but now a component of Integrated Surgical Systems (ISS), who developed RoboDoc.
Figure 13. daVinci robotic telepresence surgery system (courtesy Dr. Frederick Moll, MD, Intuitive Surgical Incorporated, Menlo Park, CA, 1999).
Figure 14. Zeus robotic system (courtesy Dr. Yulun Wang, Computer Motion, Inc., Goleta, CA).
Figure 15. Robot Assisted MicroSurgery (RAMS) for 10 micron accuracy in laser retinal surgery (courtesy Steve Charles, MD and the NASA-JPL team, Pasadena, CA).
Figure 16. The concept of telepresence surgery and a central workstation which integrates the entire spectrum of surgical care (courtesy of Joel Jensen, PhD, SRI International, Menlo Park, CA).
Figure 17. Patient-specific imaging from a CT scan of a liver metastasis, used for pre-operative planning, operative rehearsal, intra-operative navigation and surgical simulation (courtesy of Jacques Marescaux, MD, IRCAD, Strasbourg, France).
Figure 18. The Life Support for Trauma and Transport (LSTAT) in its battlefield configuration (courtesy of Matt Hanson, Integrated Medical Systems, Signal Hill, CA).
Figure 19. Concept of integration of the LSTAT into a robotic surgical system (courtesy of Matt Hanson, Integrated Medical Systems, Signal Hill, CA).
Figure 20. A small, autonomous biomimetic robot constructed with MEMS technology (courtesy Sandia National Labs, Albuquerque, NM).

REFERENCES

1. Fisher SS, McGreevy MM, Humphries J, Robinett W. Virtual Environment Display System. In: Crow F, Pizer S (eds) Proceedings of the Workshop on Interactive 3-D Graphics 1:1-12, 1986.
2. Cuschieri A. Visual Displays and Visual Perception in Minimal Access Surgery. Seminars in Laparoscopic Surgery 2:209-14, 1995.
3. Green PS, Hill JH, Satava RM. Telepresence: Dextrous procedures in a virtual operating field (abstract). Surg Endosc 57:192, 1991.
4. Satava RM. Robotics, telepresence and virtual reality: a critical analysis of the future of surgery. Minimally Invasive Therapy 1:357-63, 1992.
