
Goodbye sound, hello vision: less $$$ methods to help Unmanned Underwater Robots stay on track.

Manzanilla, A., Reyes, S., Garcia, M., Mercado, D. and Lozano, R., 2019. Autonomous Navigation for Unmanned Underwater Vehicles: Real-Time Experiments Using Computer Vision. IEEE Robotics and Automation Letters, 4(2), pp. 1351–1356.

Robots are here to stay. We encounter them at the grocery store in the self-checkout aisle and on the road in self-driving cars. In the ocean, they help us explore areas we cannot access ourselves. Some robots have a neat feature called ‘autonomy’ that enables them to work without a remote. For instance, when the race car came to life in Toy Story, it drove without a remote, so in a sense it had autonomy.

Autonomous vehicles complete tasks based on programmed sets of rules. Some tasks are simple, such as driving forward at 1 foot per second for 20 seconds and then coming to a complete stop. The task becomes more complicated when the robot’s path is obstructed by a wall. If the robot does not have a rule for dealing with the obstruction, it will continue forward into the wall. For the robot to recognize the wall and then decide to either stop or go around it, it must be given a set of rules that can handle the obstacle. It must also be equipped with the appropriate sensors, such as a way to detect that there is a wall there at all.
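That kind of rule set can be surprisingly short. Here is a minimal sketch in Python of the wall scenario above; the robot, its distance sensor, and the thresholds are all made up for illustration:

```python
def next_action(distance_to_wall_m):
    """Pick an action for a hypothetical robot with a forward distance sensor."""
    if distance_to_wall_m > 1.0:
        return "drive forward"   # the path ahead is clear
    elif distance_to_wall_m > 0.3:
        return "turn"            # a wall is coming up: steer around it
    else:
        return "stop"            # too close: halt to avoid a collision

print(next_action(5.0))   # plenty of room: drive forward
print(next_action(0.5))   # wall detected: turn
print(next_action(0.1))   # nearly touching: stop
```

Without the middle two rules, the robot would happily drive into the wall: it only does what its rules cover.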

In the ocean, autonomous robots called unmanned underwater vehicles (UUVs) are used to inspect underwater infrastructure and map the seafloor, for example. One of the greatest limitations to UUVs finishing their autonomous tasks is their failure to regain their position (where they are) and orientation (which way they are facing), together called their pose, if either is compromised. A major contributor to this failure is uncertainty in the estimates of position and orientation. Unfortunately, minimizing that uncertainty typically requires costly equipment that comes with its own limitations, such as limited battery life and high power demands.

A group of researchers from Mexico sought to improve a UUV’s ability to recover position and orientation by using an alternative, less costly method.  While traditional position and orientation recovery methods use sound waves and measures of the UUV’s acceleration to determine pose, this group of scientists investigated the application of a different approach called Vision Based Localization (VBL). VBL is simpler than traditional methods because it uses a single camera to generate a 3D representation of the area. The VBL method in submerged environments has been used for 3D reconstruction of underwater features, but has faced trouble from light, turbidity, and scale complications that result in skewed images.

For their experiment, Manzanilla and team combined camera imagery, measurements of how the vehicle turned and twisted, and mathematical filtering to estimate the robot’s pose in real time. A camera secured to the robot captured a scene while a motion sensor recorded changes in the robot’s orientation. Then, based on a set of rules, the combined visual and motion information was used to correct the robot’s estimate of its pose.
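The core idea of combining two imperfect sensors can be sketched very simply. The toy function below (a generic illustration, not the authors’ filter) nudges the motion sensor’s predicted pose toward the camera’s measurement; the poses, the gain, and the numbers are all invented for the example:

```python
def fuse(predicted_pose, camera_pose, gain=0.2):
    """Blend a motion-sensor prediction with a camera measurement.

    A small gain trusts the prediction; a large gain trusts the camera.
    """
    return tuple(p + gain * (c - p) for p, c in zip(predicted_pose, camera_pose))

# Suppose the motion sensor predicts the robot at (x, y, heading) = (1.0, 2.0, 0.10)
# while the camera places it at (1.2, 2.1, 0.05):
print(fuse((1.0, 2.0, 0.10), (1.2, 2.1, 0.05)))
```

Real filters weight the two sources by how uncertain each one is, but the principle is the same: each new camera frame pulls the drifting motion estimate back toward where the robot actually is.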

Manzanilla’s team tested how well the robot could recover its pose using a ‘PID controller’ algorithm. A PID controller updates corrections to the pose in real time. This is important because by maintaining its pose the robot is able to stick to its intended task. Examples of PID controllers we encounter day to day include thermostats and cruise control. Manzanilla and team’s PID controller used the pose information to continuously compute and apply an appropriate correction so that the robot maintained an ellipsoidal trajectory. The authors produced a video demonstrating their successful application of VBL to improving a UUV’s ability to navigate. In the video, you can see the UUV in the water while the scientists disturb its equilibrium to observe how well it regained its orientation and position. They then test how well it was able to return to an ellipsoidal trajectory when its vision was obscured or its tether was tugged.
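The idea behind a PID controller fits in a few lines of code. The sketch below is a generic textbook version, not the authors’ implementation, and the depth-hold scenario and gain values are made up:

```python
class PID:
    """Minimal PID controller: output = Kp*error + Ki*sum(error) + Kd*d(error)/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # P reacts to the current error, I to accumulated error, D to its trend.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical use: holding a UUV at 2 m depth.
pid = PID(kp=1.2, ki=0.1, kd=0.3)          # illustrative gains
target_depth, measured_depth = 2.0, 1.4    # metres
thrust = pid.update(target_depth - measured_depth, dt=0.05)
print(thrust)
```

Called over and over in a loop, the controller keeps shrinking the error between where the robot is and where it should be, which is exactly what keeps it on its ellipsoidal path.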

Screen shot from Magallanes, A. M. (2018, December 07). Retrieved April 11, 2019, from https://www.youtube.com/watch?v=v0Uj-IuztIs

While the team was able to improve UUV pose recovery using VBL, more research is needed to determine the limits of this method and its set of rules. Looking ahead, the team plans to explore how to overcome communication delays and unexpected water-column dynamics that may influence the UUV’s behavior.

Research like Manzanilla and his team’s is important because robots are tools for both military and civilian applications. Although our most common interactions with them are lighthearted ways to make our lives easier, such as fast checkout and driverless cars, they can also substitute for humans at dangerous jobs. For instance, autonomous robots can inspect underwater bridge and dam infrastructure in high-current waterways, and they can search for mines in unfamiliar terrain. The better robots are able to recover from a disturbance, the more success they will have at completing their tasks.

link to UUV video: https://www.youtube.com/watch?v=v0Uj-IuztIs
