
Learning to see through cloudy waters
MIT-WHOI Joint Program student Amy Phung is combining optical and sonar data to help ROVs accomplish delicate tasks in tough conditions
Remotely operated vehicles (ROVs) allow researchers to explore areas of the ocean that human divers can’t reach. But if the water is turbid—clouded with suspended particles or stirred-up sediment—ROVs are limited in what they can do. Human pilots need to be able to see clearly to carry out delicate tasks like manipulating dangerous objects or interacting with a fragile environment.
“If there are too many particulates in the water, it becomes very difficult for pilots operating from a surface ship to see what they're doing,” said Amy Phung, an MIT-WHOI Joint Program student. “In some cases, they just have to pause operations and wait for the turbidity to clear up, which can make for a really expensive operation.”
ROVs aren’t completely helpless in murky water—many are equipped with sonar sensors, which provide enough information for pilots to navigate and avoid obstacles. But high-resolution sonar data takes time to process, so it can’t provide the instant feedback pilots need to manipulate objects with robotic arms.
Phung is working to combine aspects of both visual and sonar data to give pilots the information they need to conduct delicate operations even when visibility is limited. This could help ROVs conduct scientific sampling in tougher conditions, inspect and repair offshore wind turbines or other infrastructure, or even complete hazardous tasks like collecting and removing unexploded munitions from the seafloor and surf zones.
“I’m trying to find a way to complete dexterous tasks in areas that are difficult for robotic systems right now, and are either too deep or too dangerous for human divers,” Phung said. “Using optical and acoustic data to complement each other can enable us to safely do robotic manipulation in these conditions.”
Phung is developing a method to process high-resolution sonar data in real time to provide an ROV pilot with a 3D map of the nearby environment. This map provides enough information for an ROV to get close without colliding with anything, Phung said, but in most cases it is not enough to identify or manipulate objects underwater. However, if the ROV can safely approach objects, it might be able to get a camera very close to them. With less murky water to see through, the camera could potentially start to pick up some visual data—the lumpy ridges of a coral head or the rust on a piece of old metal, for example—that could supplement the sonar information.
“If it was just camera data, we couldn't safely approach anything in very turbid environments because we don't know what the geometry of the scene looks like,” Phung said. “My plan is to use the map generated by the sonar data, so that I can get the camera really close to objects without accidental collisions with the environment, and start gathering color and texture information to fill in the gaps.”
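To illustrate the general idea—not Phung’s actual implementation—here is a minimal Python sketch of how per-beam sonar returns might be converted into a 3D point map and used for a crude clearance check before the vehicle moves closer. The function names, frame conventions, and the 0.5-meter standoff distance are all assumptions made for the example.

```python
import numpy as np

def sonar_to_points(ranges, azimuths, elevations):
    """Convert per-beam sonar returns (range, azimuth, elevation)
    into 3D points in the sonar frame. Angles are in radians."""
    r = np.asarray(ranges)
    az = np.asarray(azimuths)
    el = np.asarray(elevations)
    x = r * np.cos(el) * np.cos(az)   # forward
    y = r * np.cos(el) * np.sin(az)   # starboard
    z = r * np.sin(el)                # down
    return np.column_stack([x, y, z])

def transform_points(points, R, t):
    """Rotate and translate sonar-frame points into the world frame,
    given an estimated vehicle pose (rotation matrix R, translation t)."""
    return points @ R.T + t

def safe_to_advance(points, standoff=0.5):
    """Crude clearance check: is every mapped point farther from the
    vehicle origin than the chosen standoff distance?"""
    if len(points) == 0:
        return True
    return np.min(np.linalg.norm(points, axis=1)) > standoff

# Example: one simulated ping, with the vehicle at the world origin
pts_sonar = sonar_to_points([2.1, 1.8, 2.4],
                            np.radians([-10.0, 0.0, 10.0]),
                            np.radians([0.0, 0.0, 0.0]))
pts_world = transform_points(pts_sonar, np.eye(3), np.zeros(3))
print(safe_to_advance(pts_world, standoff=0.5))   # True: nothing within 0.5 m
```

In practice a real pipeline would also filter noisy returns and accumulate many pings into a persistent map, but the sketch captures the basic geometry of turning acoustic returns into points an ROV can steer around.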
Phung is still working on integrating the visual and sonar data, but her goal is to render three-dimensional maps for pilots in real time. She envisions ROV pilots viewing these maps with virtual reality headsets that provide additional context and depth perception as they navigate and manipulate objects underwater.
“By providing both the 3D information and the color information, and rendering it in a way in which pilots actually have depth perception, I think that will provide a lot of input that they don't currently have,” Phung said.
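As a rough illustration of the “geometry plus color” idea, the sketch below projects sonar-derived 3D points into a camera image with a standard pinhole model and copies the pixel colors onto the points. The camera intrinsics, pose, and helper names are hypothetical; this is not Phung’s rendering pipeline.

```python
import numpy as np

def colorize_points(points_world, image, K, R, t):
    """Assign an RGB color to each 3D point by projecting it into a
    camera image (pinhole model). Points behind the camera or outside
    the frame keep a 'no color' marker of -1."""
    pts_cam = points_world @ R.T + t        # world frame -> camera frame
    colors = np.full((len(points_world), 3), -1, dtype=np.int32)

    in_front = pts_cam[:, 2] > 0            # keep points in front of the lens
    uvw = pts_cam[in_front] @ K.T           # perspective projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)

    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)

    idx = np.flatnonzero(in_front)[valid]
    colors[idx] = image[uv[valid, 1], uv[valid, 0]]
    return colors

# Example: one point straight ahead of the camera, colored from a toy image
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
image = np.zeros((480, 640, 3), dtype=np.uint8)
image[240, 320] = [200, 40, 40]
pts = np.array([[0.0, 0.0, 3.0]])
print(colorize_points(pts, image, K, np.eye(3), np.zeros(3)))  # [[200 40 40]]
```

A colored point map like this is the kind of data that could then be streamed to a 3D or VR display, giving pilots both the shape of the scene and the visual texture the sonar alone cannot provide.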
While Phung is currently focusing on helping ROV pilots, the work could also potentially be used to improve navigation systems in autonomous underwater vehicles, which are controlled by software instead of a pilot.
“Perception is an enabler for both robot autonomy and for human pilots,” Phung said. “I'm excited by the prospect of this work being usable in the immediate short term for pilots, but also having it open the door for autonomy.”