Smarter robot, better surgery
Problem being addressed
Many tasks in robot-assisted surgeries can be represented by finite-state machines, where each state represents either an action (such as picking up a needle) or an observation (such as bleeding). A crucial step towards the automation of such surgical tasks is the temporal perception of the current surgical scene, which requires a real-time estimation of the states in the finite-state machines.
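The finite-state-machine view described above can be sketched minimally as follows. The state and event names here are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of a surgical task as a finite-state machine.
# Each state is an action (e.g. holding a needle) or an observation
# (e.g. bleeding); transitions fire on recognized events.
TRANSITIONS = {
    ("idle", "grasp"): "needle_held",
    ("needle_held", "insert"): "needle_inserted",
    ("needle_inserted", "bleeding_observed"): "bleeding",
    ("bleeding", "cauterize"): "idle",
}

def step(state: str, event: str) -> str:
    """Advance the FSM; remain in the current state on unknown events."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["grasp", "insert", "bleeding_observed", "cauterize"]:
    state = step(state, event)
print(state)  # -> idle
```

Real-time state estimation then amounts to inferring, at every frame, which of these states the surgical scene is currently in.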
The authors propose a unified surgical state estimation model that incorporates multiple data sources, including kinematics, vision, and system events. The outputs of the individual estimators are fed to a fusion model that makes a comprehensive inference.
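One way such a fusion step can work is late fusion: each input source produces a probability distribution over the possible states, and a weighted combination yields the final estimate. The sketch below is an assumption for illustration; the state names, weights, and the simple weighted average are not taken from the paper:

```python
# Hypothetical late-fusion sketch: kinematics, vision, and system-event
# models each output per-state probabilities; a weighted average fuses
# them into one distribution. All numbers below are illustrative.
STATES = ["picking_needle", "inserting", "bleeding"]

def fuse(estimates: dict, weights: dict) -> list:
    """Return the normalized weighted average of per-source probabilities."""
    fused = [0.0] * len(STATES)
    for source, probs in estimates.items():
        w = weights[source]
        for i, p in enumerate(probs):
            fused[i] += w * p
    total = sum(fused)
    return [f / total for f in fused]

estimates = {
    "kinematics": [0.6, 0.3, 0.1],
    "vision":     [0.2, 0.2, 0.6],
    "events":     [0.5, 0.4, 0.1],
}
weights = {"kinematics": 0.4, "vision": 0.4, "events": 0.2}
fused = fuse(estimates, weights)
print(STATES[fused.index(max(fused))])  # -> picking_needle
```

The design point is that sources disagreeing on a frame (here, vision favors "bleeding" while kinematics and events favor "picking_needle") are reconciled by the fusion step rather than by any single input.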
Advantages of this solution
The model improves the frame-wise state estimation accuracy of state-of-the-art methods by up to 11% through the incorporation of multiple sources of data. By comparing it against single-input models on a complex and realistic surgical task, the authors demonstrate the advantage of a multi-input model in recognizing states with different representative features and levels of granularity.
Solution originally applied in these industries
Healthcare (robot-assisted surgery)
Possible New Application of the Work
Aerospace & Defence Sector
In low-Earth-orbit space stations, the model could enable robots that are remotely controlled from Earth to take over long-duration tasks from astronauts, reducing the burden on crews, shortening the time required for work in space, and cutting costs.
Methodologies that improve the efficiency of robots performing human-like tasks could also be applied in disaster response (fires, earthquakes, floods, etc.), where human rescuers currently face threats to their health and lives.
Source URL: #############