Article

ROV Navigation in a Fish Cage with Laser-Camera Triangulation

by Magnus Bjerkeng, Trine Kirkhus, Walter Caharija, Jens T. Thielemann, Herman B. Amundsen, Sveinung Johan Ohrem and Esten Ingar Grøtli
1 SINTEF Digital, 0373 Oslo, Norway
2 SINTEF Ocean, 7010 Trondheim, Norway
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2021, 9(1), 79; https://0-doi-org.brum.beds.ac.uk/10.3390/jmse9010079
Submission received: 8 December 2020 / Revised: 8 January 2021 / Accepted: 11 January 2021 / Published: 13 January 2021
(This article belongs to the Special Issue Localization, Mapping and SLAM in Marine and Underwater Environments)

Abstract: Aquaculture net cage inspection and maintenance is a central issue in fish farming. Inspection using autonomous underwater vehicles is a promising solution. This paper proposes laser-camera triangulation for pose estimation to enable autonomous net following for an autonomous vehicle. The laser triangulation 3D data are experimentally compared to a Doppler velocity log (DVL) in an active fish farm. We show that our system is comparable in performance to a DVL for distance and angular pose measurements. Laser triangulation is promising as a low-cost, short-distance ranging sensor for autonomous vehicles compared to acoustic sensors.

1. Introduction

Nearly half of the earth’s land is used for food production, and marine resources can help feed the growing population. The number of fish farms is increasing rapidly [1]. Typically, the fish are raised in open sea net cages, which consist of a floating collar, a net pen and a mooring system. The cages sit in natural marine environments, and fish that escape may harm the surrounding ecosystem and its food chain. To minimize escapes caused by failure of the net cage, the net must be inspected routinely [2]. Net inspections today are commonly performed either by divers or by human-piloted remotely operated vehicles (ROVs). The ROV operations are challenging for the pilot, as they require both precise maneuvering and a keen eye for detail to spot failures in the net cage from the video stream.
One of the main problems in applying autonomous underwater vehicles to fish cage inspection is the automatic detection and tracking of net pens, both to maintain a safe distance from the net pen and to ensure complete coverage of the cage during the inspection. In addition, cost constraints are tight, since the autonomous vehicle needs to cost no more, and ideally less, than the human divers used today. Even though there are no industrial deployments of autonomous net inspection systems that we are aware of, it is an active research topic [3]. There are also similar research programs within subsea oil and gas [4] and seabed mapping [5]. Aquaculture net pens are especially challenging to inspect because their shape changes with the water current and biofouling changes their visual appearance [6].
Successful operation of autonomous underwater vehicles requires the ability to navigate and to understand dynamic environments. Many mature systems exist for positioning underwater vehicles. The long baseline (LBL) and ultrashort baseline (USBL) methods use acoustic ranging relative to fixed beacons. These methods require pre-deployed and localized infrastructure, which increases the cost and complexity of the operations [7,8]. Furthermore, underwater ranging systems are challenged by infrastructure that blocks line-of-sight, as at, e.g., aquaculture sites [9]. Position measurements can be integrated with velocity measurements provided by an acoustic Doppler velocity log (DVL) and an onboard digital compass [10].
Several optical systems are used to acquire 3D data underwater. Video cameras in combination with markers are commonly used for autonomy and navigation [11]. Underwater stereo [12,13] and photogrammetry [14] need unique, non-repetitive features in the scene to estimate disparity and thereby depth. Due to the repetitive structure of the nets to be inspected, stereo vision is not well suited to our use case.
Structured light methods typically use a DMD (digital micromirror device) to project a single pattern or a series of spatially coded patterns, giving highly accurate 3D measurements of the scene in real time. In [15], multi-frequency phase-stepping patterns are used to acquire high-resolution 3D data from a static scene in turbid water.
Scanning LIDARs (Light Detection and Ranging) are used for inspection tasks underwater [5]. Due to their scanning nature and capture time, these systems need compensation for the relative motion of the vehicle. Flash LIDARs do not scan and provide real-time 3D data at 10 Hz, with depth precision below 1 cm at high signal levels [16].
Laser triangulation systems typically project a laser line [17,18,19,20,21] or point [22] onto the scene to triangulate distances between the laser and a camera. Commercial systems are also available, e.g., from 2GRobotics (www.2grobotics.com). Such systems need a scanning device to cover the whole scene, which makes them slow, mechanically complex and expensive. We needed a cheap system, and to reduce complexity we wanted to use the ROV’s built-in camera. The suggested solution uses two laser lines to enable detection of a plane from a single image of the projected lasers. Parallel lines were chosen to get an optimal baseline geometry between the camera and both laser line sources; this also results in a compact system suited for mounting on the available ROV, and enables estimation of both pitch and yaw.
The algorithms that interpret the data from these sensors to achieve autonomy were first addressed in a probabilistic framework in [23], which is known as the simultaneous localization and mapping (SLAM) problem. In view-based or dense SLAM, visual odometry is performed by comparing two complete views [24], e.g., by registering overlapping perceptual data such as optical imagery [25] or sonar bathymetry [26]. Unstructured underwater environments pose a more challenging task for feature extraction and data association than terrestrial environments. Hence, the application of feature-based SLAM frameworks has so far had limited success in real-world underwater environments [27].
To enable cost-effective underwater SLAM for net inspection, this paper proposes laser-camera triangulation, consisting of two laser lines and one camera, for pose estimation from a single image. By assuming that the net wall can be approximated by a plane, two laser lines suffice to fit a plane to the net cage’s wall from one image only. This enables estimation of the pose relative to the net pen in real time. The partial pose of a camera with respect to an observed net pen can be used for closed-loop net-following control.
The on-board camera used for net pen inspection was also used for the laser triangulation. The only extra hardware needed is two lasers and their power supply, which keeps the cost low compared to acoustic sensors. No synchronization circuit or communication between the lasers and the camera is needed, as the lasers run continuously.

2. Materials and Methods

2.1. The ROV and the Sensors Employed

The ROV employed in the trials is an Argus Mini, manufactured by Argus Remote Systems AS, shown in Figure 1. It is an observation-class ROV built specifically for inspection and intervention operations in shallow waters, meant to serve scientific purposes as well as the offshore, inshore, and fish farming industries. The Mini weighs 90 kg with dimensions L × B × H = 0.9 m × 0.65 m × 0.6 m and is designed around six ARS800 thrusters. Four of the thrusters are placed in the horizontal plane, while the other two are placed in the vertical plane, guaranteeing actuation in 4 degrees of freedom (DOFs), i.e., surge, sway, heave, and yaw. The ROV is passively stabilized by gravity in roll and pitch.
The Argus Mini is equipped with five sensors: a SONY FCB-EV7100 Full HD camera, a fluxgate compass, a depth sensor, a gyro, and a Nortek DVL 1000 velocity sensor. In addition, position measurement is provided by a Sonardyne USBL system that consists of a Micro-Ranger Transceiver mounted onboard the support vessel and a Nano Transponder mounted on the vehicle. The ROV contains no sensors for direct measurement of acceleration.
The Nortek DVL is forward-looking, i.e., the instrument is mounted on the front of the ROV, pointing in the x-direction of the body/vehicle frame. This unconventional DVL configuration is employed to enable DVL lock on submerged vertical structures present in the aquaculture context, such as the net walls of large fish farming cages (50 m in diameter). These features of the DVL instrument, combined with its ranging capability, are utilized in [28] to estimate the ROV distance and heading relative to a net cage and to validate a guidance law for autonomous net following.

2.2. The ROV Control System

SINTEF employs its in-house control system on the Argus Mini ROV. The ROV has three operational modes: manual (assisted with auto-heading (AH) and auto-depth (AD)), dynamic positioning, and net following. Relevant to the experiment presented here is the net-following controller, which uses feedback from the forward-looking DVL. Net following makes the ROV autonomously traverse aquaculture net cages at a given depth. The method exploits the four range measurements provided by the DVL beams to approximate the geometry of the net cage in front of the ROV as a plane through a least-squares regression, as sketched in the code below. It then calculates the ROV position and orientation relative to this local plane. The relative position and orientation are subsequently fed as inputs to a nonlinear line-of-sight guidance law [29]. Further details on the employed net-following (NF) guidance, as well as the net cage geometry approximation by use of DVL range measurements, can be found in [28].
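The plane approximation step can be illustrated with a short sketch. This is not the authors’ implementation: the beam layout, frame conventions and angle extraction below are assumptions for illustration only.

import numpy as np

def net_pose_from_dvl(beam_dirs, ranges):
    """Approximate the net in front of the vehicle as a plane from four
    DVL beam ranges, then extract the net-relative distance and angles."""
    pts = beam_dirs * ranges[:, None]          # 3D points where the beams hit the net
    centroid = pts.mean(axis=0)
    # Least-squares plane: the normal is the singular vector of the
    # centered points with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    if n[0] < 0:
        n = -n                                 # orient the normal away from the ROV
    distance = abs(centroid @ n)               # ROV-to-plane distance
    yaw = np.degrees(np.arctan2(n[1], n[0]))   # net-relative heading offset
    pitch = np.degrees(np.arcsin(np.clip(-n[2], -1.0, 1.0)))
    return distance, yaw, pitch

# Hypothetical forward-looking beam layout: four beams tilted 25 degrees
# off the body x-axis (the real instrument geometry may differ).
a = np.deg2rad(25)
beams = np.array([[np.cos(a),  np.sin(a), 0.0], [np.cos(a), -np.sin(a), 0.0],
                  [np.cos(a), 0.0,  np.sin(a)], [np.cos(a), 0.0, -np.sin(a)]])
print(net_pose_from_dvl(beams, np.array([2.1, 1.9, 2.0, 2.0])))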
A 4 DOF extended Kalman filter also runs to assist the dynamic positioning. The Kalman filter fuses the position measurements provided by the USBL, the velocity measurements from the DVL and a mathematical model of the ROV to estimate the vehicle state [30].

2.3. Sea Trials for Data Collection

The trials were executed at the SINTEF ACE Tristeinen aquaculture facility shown in Figure 2. SINTEF ACE is a full-scale laboratory designed to develop and test new aquaculture technologies under realistic conditions. The tests were performed inside a salmon farming cage in full operational state. ROV operations in fish farms are commonly performed inside the fish cages, not outside, because of the ropes, chains and mooring lines on the outside of the cages in which the ROV’s tether can get tangled. When operating inside the cage, the fish will sometimes obstruct the cameras and sensor measurements, but this is a minor problem compared to the ROV becoming tangled. The cage has a cylindrical shape with a conical bottom, where the upper diameter is approximately 50 m and the total depth is about 30 m. The cage used in the trial is equipped with double nets in the regions around the main ropes to secure these regions against fish escapes. This double-net setup is used in all fish cages operated by the company operating at SINTEF ACE, but it is unknown whether this is standard for other companies. As will be shown, the presence of the double nets influences the quality of the distance measurements when using the DVL. The nets at the Tristeinen facility are square, with a mesh width of 33 mm. They had been cleaned eight days prior to the trials. The nets originally had a green coating, but some of this seems to have worn off. The fish population was approximately 190,000 individuals, which is normal.
The tests consisted of pointing the laser lines at the cage net and simultaneously recording HD videos at 60 frames per second (fps) while the ROV executed several net cage traverses at constant depth using the NF guidance, interrupted by short intervals in which the ROV was placed in dynamic positioning (DP) mode. This configuration allows direct comparison of the laser-camera system with the DVL during the execution of net-following tasks, which is highly relevant in the context of subsea aquaculture operations [9].

2.4. 3D Data from Camera–Laser Line Triangulation

To get 3D data from the fish cage net, we chose to triangulate between two laser lines and one camera. Due to the repetitiveness of the pattern in the cage’s net, we chose a method that does not rely on correlating features in the scene but instead projects a pattern (here, two laser lines). This also enables 3D data acquisition in the dark, e.g., at night or at larger depths. A blue laser was chosen to limit the effect of light-scattering particles and the attenuation of light in the water.
Two laser lines (OdicForce Lasers 80 mW blue, 450 nm, adjustable locking focus direct diode module, line pattern) were chosen to enable estimation of both the distance and the orientation relative to the net wall, assuming the wall is planar. The camera used was the camera available on the ROV (SONY FCB-EV7100 Full HD). The actual setup is shown in Figure 3.
We calibrated the system by capturing images of submerged checkerboards onto which the laser lines were also projected (Figure 4). The calibration was performed underwater. The refraction at the glass–water interface is then seen as a lens effect handled by the calibration routines; this effect is also reduced by using a dome-shaped glass port, which keeps the glass surface perpendicular to the rays from the camera lens. Attenuation and scattering due to the water and its turbidity are handled by using enough light for the distances at which we operate. This aligns with the conclusion in [31]. The calibration was performed by moving the ROV to get a good dataset with different viewing angles. Using Zhang’s method of camera calibration [32], we recovered the camera distortion parameters using OpenCV’s camera model, i.e., the camera matrix A and the distortion coefficients K,
A = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix},
where f_x, f_y are the focal lengths and (c_x, c_y) is the principal point; and the distortion parameters,
K = [k_1, k_2, p_1, p_2, k_3],
where k_1, k_2, k_3 are the radial distortion coefficients and p_1, p_2 are the tangential distortion coefficients, listed in OpenCV’s order.
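As an illustration of this calibration step, the following is a minimal sketch using OpenCV’s implementation of Zhang’s method; the board dimensions and file paths are assumptions, not those used in the paper.

import glob
import cv2
import numpy as np

# Checkerboard with 9 x 6 inner corners and 30 mm squares (assumed dimensions).
pattern, square = (9, 6), 0.03
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for fname in glob.glob("calib/*.png"):        # underwater calibration images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Zhang's method: recovers the camera matrix A and distortion coefficients K.
rms, A, K, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS [px]:", rms)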
We also recover the 3D positions of the laser lines on the checkerboards. This information was used to recover the plane parameters of the two projected laser planes, meaning that we can perform laser triangulation by intersecting camera rays (lines) with a laser plane to recover xyz coordinates. Taking the camera center as the origin, all points on a camera ray can be expressed as
p = d \mathbf{L},
where \mathbf{L} = [x_L, y_L, z_L] is the direction of the ray and d is the position along the ray. The laser plane can be defined as
(\mathbf{p} - \mathbf{p}_0) \cdot \mathbf{n} = 0,
where \mathbf{p}_0 is a point on the plane, \mathbf{n} is the normal to the plane and \mathbf{p} are the points of the plane. To determine d we solve
d = \frac{(\mathbf{p}_0 - \mathbf{l}_0) \cdot \mathbf{n}}{\mathbf{L} \cdot \mathbf{n}},
where \mathbf{l}_0 is the origin of the ray (here the camera center, i.e., the origin), meaning that we can find the xyz point of the intersection as d \mathbf{L}.
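A minimal sketch of this triangulation, assuming undistorted pixel coordinates and a pinhole model with the camera center at the origin:

import numpy as np

def triangulate(pixel, A, plane_n, plane_p0):
    """Intersect the camera ray through `pixel` with a calibrated laser plane.

    pixel:    (u, v) undistorted pixel coordinates of a detected laser point.
    A:        3x3 camera matrix.
    plane_n:  laser plane normal (from calibration).
    plane_p0: a point on the laser plane (from calibration).
    Returns the xyz point in the camera frame.
    """
    # Back-project the pixel to a ray direction L; the ray origin l0 is
    # the camera center, i.e., the origin of the frame.
    L = np.linalg.inv(A) @ np.array([pixel[0], pixel[1], 1.0])
    # d = (p0 . n) / (L . n), the ray-plane intersection from the text.
    d = (plane_p0 @ plane_n) / (L @ plane_n)
    return d * L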
To get the distance, yaw and pitch between the ROV and the net cage wall, the laser lines were detected in images of the net onto which the two laser lines were projected. The laser line points’ positions in the images were located with sub-pixel accuracy and looked up in the calibration lookup table to get each point’s absolute x, y and z distance from the camera’s center. The resulting positions from both laser lines were then fitted to a plane using MLESAC (Maximum Likelihood Estimation SAmple Consensus) [33]. MLESAC is a generalization of the RANSAC (RANdom SAmple Consensus) algorithm that picks subsets of points, fits a plane to each subset and keeps the plane with the highest likelihood over all points. MLESAC also brings robustness improvements relative to the original RANSAC algorithm. From the plane parameters, we get the distance to the plane and the yaw and pitch angles of the camera relative to the net wall; a sketch of this step follows below. Example images and corresponding fitted planes are shown in Figure 5. We handle the detected laser lines as point sets (searching for points per image row) and do not try to turn them into lines. This makes us robust to outlier detections and to the holes in the nets, which reduce the number of line points detected. Points from both laser lines are used simultaneously in the MLESAC algorithm.
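The plane fit and pose extraction can be sketched as follows. For brevity, the sketch scores hypotheses by inlier count (plain RANSAC) rather than the MLESAC likelihood used in the paper, and the camera-frame conventions are assumed.

import numpy as np

def net_pose_from_laser_points(points, iters=200, tol=0.02, seed=0):
    """Robustly fit a plane to triangulated laser points (assumed camera
    frame: x right, y down, z forward; tol in the same units as points)
    and extract the net-relative distance, yaw and pitch."""
    rng = np.random.default_rng(seed)
    best_score, best = -1, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                          # degenerate (collinear) sample
        n /= norm
        dist = np.abs((points - sample[0]) @ n)
        score = (dist < tol).sum()            # inlier count stands in for MLESAC
        if score > best_score:
            best_score, best = score, (n, sample[0])
    n, p0 = best
    if n[2] < 0:
        n = -n                                # orient the normal consistently (+z)
    distance = abs(p0 @ n)                    # camera-to-net distance
    yaw = np.degrees(np.arctan2(n[0], n[2]))  # rotation about the vertical axis
    pitch = np.degrees(np.arctan2(n[1], n[2]))
    return distance, yaw, pitch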

3. Results

This section compares the sensor readouts from the DVL and the laser-camera system for an experiment in which the ROV navigated inside a fish cage. Fish were swimming in the cage during the experiment, which contributes significantly to the noise. The DVL measurements are processed and filtered at the sensor, which can affect the apparent signal smoothness. The laser triangulation signal is raw data, with no outlier rejection, smoothing or filtering applied. A graphic showing the geometry of the measurement setup is shown in Figure 6.
An indirect sensor-to-sensor calibration was performed to compare the DVL measurements with the camera measurements. The camera position relative to the DVL position was determined during installation on the ROV by measuring the positions of the mounting points. No closed-loop extrinsic calibration was performed to precisely position the DVL with respect to the camera. The distance calibration consists of: (1) a manual time synchronization, shifting the camera signal by a static offset to bring it in step with the DVL time; and (2) a single static bias of 0.37 m added to the laser measurement to bring the camera plane in line with the DVL plane. A sketch of such an alignment is given below. The yaw calibration only synchronizes the time, since the two sensors were mounted with the same orientation and should not need any signal correction. The net-to-ROV pitch angle is also measured but is not reported, since the ROV is pitch stable and the signal is small.
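A sketch of such a two-step alignment (a time shift found by cross-correlation, then a static bias), assuming both signals have been resampled onto a common time base; the paper’s synchronization was done manually, so this is an illustration only.

import numpy as np

def align_laser_to_dvl(laser_d, dvl_d, dt, max_lag_s=5.0):
    """Estimate the time lag and static bias that align the laser distance
    series to the DVL distance series (both sampled at interval dt)."""
    a = laser_d - np.mean(laser_d)
    b = dvl_d - np.mean(dvl_d)
    max_lag = int(max_lag_s / dt)
    lags = np.arange(-max_lag, max_lag + 1)
    # Cross-correlation over a limited lag window; wrap-around edge
    # effects from np.roll are ignored in this sketch.
    scores = [np.sum(a * np.roll(b, k)) for k in lags]
    lag = int(lags[int(np.argmax(scores))])
    bias = np.mean(np.roll(dvl_d, -lag) - laser_d)   # e.g., ~0.37 m in the paper
    return lag * dt, bias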

3.1. DVL vs. Laser Triangulation Depth Data

Figure 7 compares the raw output data from the DVL and the laser triangulation distance measurements. The two sensors are largely in agreement. An attempt was made to use the USBL localization system as a third sensor to establish ground truth and determine which sensor principle is more accurate in absolute terms, but the dynamic nature of the net cages made the USBL data unusable for this purpose. The laser triangulation measurements are noisier than the DVL distance. The high level of agreement between the two sensors is surprising, given that only an open-loop calibration along the distance axis was used to position the sensors relative to each other.
There are two areas in the distance data we will look at in detail. The first is the disagreement around t = 650 s, where the DVL and the laser disagree by around 25 cm. A picture taken at that time is shown in Figure 8. It is seen that a double net is the cause of the problem. The laser line algorithm returns the distance to the closest net, as seen in Figure 9. We conclude that the laser triangulation distance is more reliable than the DVL when measuring the distance to a double net.
The next area of interest is at the maximum distance of 2.5 m, reached at t = 1050 s. Here, the laser triangulation is significantly noisier than the DVL. Looking at the picture at that time, seen in Figure 10, it is evident that low laser visibility due to turbid water is the problem, and the triangulation is close to its maximum range. For longer ranges, we would need a brighter laser. Introducing a band-pass filter would suppress stray light from other light sources and improve the laser line contrast, but it would also filter out information needed for net pen inspection.

3.2. DVL vs. Laser Triangulation Yaw Data

The net-relative yaw angles for both sensors are shown in Figure 11. The data show that the ROV was looking at the net head-on with deviations of up to 20 degrees in yaw, i.e., the net-following controller is performing well. The loss of the DVL signal towards the end of the dataset is due to the ROV ascent, which indicates that laser triangulation may be more robust than the DVL in shallow waters, since the laser is less affected by reflections from the water surface than acoustic signals. The agreement between the two sensors is impressive, given that no extrinsic calibration was performed apart from time synchronization. The increase in noise at the end of the dataset is due to the large amount of fish present at that time.

3.3. Kalman Filter Comparison

Figure 12 and Figure 13 show the trajectory traversed by the ROV during the experiment, as estimated by an extended Kalman filter. The curve seen is the circular fish cage. The main sensor driving the Kalman filter is a ship-mounted USBL system. Changes to the USBL, whether due to ship movement, net movement or other error sources, show up as jumps in the position estimates, showing that a USBL-only system is not sufficient for robust net inspection. The dots are net positions relative to the ROV from the DVL and the laser triangulation. It is seen that the two distance sensors are mostly in agreement. It seems feasible to base a net-following controller on the output of the laser triangulation sensor. Figure 14, Figure 15 and Figure 16 show the measured planes overlaid on the Kalman pose estimate for leg 3 only, to increase readability. The two sensors report similar data in this interval, showing that one could be exchanged for the other. The overlapping planes would enable a well-behaved pose graph for a SLAM implementation.

3.4. Quantization

This section compares a quantization effect seen on the DVL to the laser triangulation sensor. Figure 17 shows two sections of the distance measurements. A quantization error of around 1 cm sometimes affects the DVL data. The laser triangulation quantization error is below the millimeter scale. No similar quantization issue was seen in the DVL yaw data. It is not known whether this issue is inherent to the acoustics or specific to this sensor.

3.5. Noise Comparison

This section attempts to compare the noise levels of the laser distance measurements and the DVL distance measurements. The comparison is not strictly correct, since there is no ground truth, but it may still be of interest. A linear Kalman filter was used to fuse the two distance measurements into a quasi-ground-truth; a sketch of this procedure follows below. The filter was manually tuned, and outliers were removed manually, since they would otherwise dominate the result. We compared a time series where both sensors returned signals at the same time. The results are shown in Figure 18: the DVL distance has a standard deviation measurement error of 2.9 cm, and the laser sensor has a comparable but slightly larger error of 3.2 cm.
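A sketch of such a comparison, using a one-dimensional random-walk Kalman filter to fuse the two series and the pre-update innovations as a per-sensor noise proxy; the tuning values are illustrative, not those used in the paper.

import numpy as np

def innovation_std(z_dvl, z_laser, q=1e-4, r_dvl=9e-4, r_laser=1e-3):
    """Fuse two synchronized distance series with a scalar Kalman filter
    and return the standard deviation of each sensor's innovations."""
    x, p = z_dvl[0], 1.0                      # state: fused distance estimate
    innov = {"dvl": [], "laser": []}
    for zd, zl in zip(z_dvl, z_laser):
        p += q                                # predict with a random-walk model
        for name, z, r in (("dvl", zd, r_dvl), ("laser", zl, r_laser)):
            innov[name].append(z - x)         # innovation before the update
            k = p / (p + r)                   # Kalman gain
            x += k * (z - x)
            p *= (1.0 - k)
    return np.std(innov["dvl"]), np.std(innov["laser"])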
This proof of concept shows that an autonomous vehicle with a camera can be cheaply upgraded with net cage sensing capabilities. Most inspection vehicles already have an RGB camera installed, which is the expensive part of the sensor, and given that camera sensors improve in performance per dollar every year, this sensor type will remain attractive in the future. Cost-wise, a DVL costs in the range of thousands to tens of thousands of USD; in comparison, the two lasers and housings cost $200, a cost saving by a factor of 25 to 100. These cost reductions are significant, especially for large fleets of inspection vehicles.
An added result is the verification that the DVL sensor measures the correct ROV-to-net distance. In, e.g., [9], a DVL is tested as a net cage navigation sensor, but no independent measurement was available, so it was not known whether the DVL was unbiased for net pen measurements. Our comparative estimates indicate a standard deviation of 3.2 cm for the laser system and 2.9 cm for the DVL.
One particular concern was that the double nets, an outer net and an inner net, would result in a systematic bias in the DVL. The high degree of consistency between the laser and acoustic measurements shows that either sensor is viable for net-relative navigation. A fundamental advantage of the DVL is that it can also measure velocity, which the laser triangulation sensor cannot.
The next steps include testing the laser sensor in closed-loop in an autonomous net following and mapping application.

4. Conclusions

We have shown experimentally that laser triangulation can be used to navigate relative to an aquaculture net cage. The signal quality is nearly as good as that of a DVL, at less than 1/25th of the price.

Author Contributions

Software, H.B.A., S.J.O. and J.T.T.; data curation, W.C., S.J.O. and H.B.A.; formal analysis, J.T.T., M.B. and T.K.; writing—original draft preparation, M.B. and T.K.; writing—review and editing, M.B., T.K., W.C., H.B.A., S.J.O. and E.I.G. All authors have read and agreed to the published version of the manuscript.

Funding

The work is part of the project “SFI Exposed” (237790) with funding from the Research Council of Norway.

Institutional Review Board Statement

Ethical review and approval were waived for this study, since appropriate operational measures were taken to minimize the exposure of fish to laser radiation: (1) operating from inside the cage, (2) pointing the laser constantly at the net from a short distance, and (3) turning on the laser only when necessary for the experiment itself.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

Thanks to Gregory Bouquet for making the parallel laser illumination, and to Asbjørn Berge for fruitful discussions. Thanks, too, to Henrik Grønbech for helping with the modification of the Aqueous user interface.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bianchi, M.C.G.; Chopin, F.; Farmer, T.; Franz, N.; Fuentevilla, C.; Garibaldi, L.; Grainger, N.H.R.; Jara, F.; Karunasagar, I.; Laurenti, A.L.G. FAO: The State of World Fisheries and Aquaculture; Food and Agriculture Organization of the United Nations: Rome, Italy, 2014. [Google Scholar]
  2. Jensen, Ø.; Dempster, T.; Thorstad, E.; Uglem, I.; Fredheim, A. Escapes of fishes from Norwegian sea-cage aquaculture: Causes, consequences and prevention. Aquac. Environ. Interact. 2010, 1, 71–83. [Google Scholar] [CrossRef] [Green Version]
  3. Føre, M.; Frank, K.; Norton, T.; Svendsen, E.; Alfredsen, J.A.; Dempster, T.; Eguiraun, H.; Watson, W.; Stahl, A.; Sunde, L.M.; et al. Precision fish farming: A new framework to improve production in aquaculture. Biosyst. Eng. 2018, 173, 176–193. [Google Scholar] [CrossRef]
  4. Schjølberg, I.; Gjersvik, T.B.; Transeth, A.A.; Utne, I.B. Next Generation Subsea Inspection, Maintenance and Repair Operations. IFAC-PapersOnLine 2016, 49, 434–439. [Google Scholar] [CrossRef]
  5. McLeod, D.; Jacobson, J.; Hardy, M.; Embry, C. Autonomous inspection using an underwater 3D LiDAR. In Proceedings of the 2013 OCEANS—San Diego, San Diego, CA, USA, 23–27 September 2013; pp. 1–8. [Google Scholar]
  6. Bannister, J.; Sievers, M.; Bush, F.; Bloecher, N. Biofouling in marine aquaculture: A review of recent research and developments. Biofouling 2019, 35, 631–648. [Google Scholar] [CrossRef] [Green Version]
  7. Matos, A.; Cruz, N.; Martins, A.; Pereira, F.L. Development and implementation of a low-cost LBL navigation system for an AUV. In Proceedings of the Oceans ‘99. MTS/IEEE. Riding the Crest into the 21st Century. Conference and Exhibition. Conference Proceedings (IEEE Cat. No.99CH37008), Seattle, WA, USA, 13–16 September 1999; Volume 2, pp. 774–779. [Google Scholar]
  8. Alcocer, A.; Oliveira, P.; Pascoal, A. Study and implementation of an EKF GIB-based underwater positioning system. Control Eng. Pract. 2007, 15, 689–701. [Google Scholar] [CrossRef]
  9. Rundtop, P.; Frank, K. Experimental evaluation of hydroacoustic instruments for ROV navigation along aquaculture net pens. Aquac. Eng. 2016, 74, 143–156. [Google Scholar] [CrossRef] [Green Version]
  10. Rigby, P.; Pizarro, O.; Williams, S.B. Towards Geo-Referenced AUV Navigation Through Fusion of USBL and DVL Measurements. In Proceedings of the OCEANS 2006, Boston, MA, USA, 18–21 September 2006; pp. 1–6. [Google Scholar]
  11. Cesar, D.B.D.S.; Gaudig, C.; Fritsche, M.; Dos Reis, M.A.; Kirchner, F. An evaluation of artificial fiducial markers in underwater environments. In Proceedings of the OCEANS 2015—Genova, Genoa, Italy, 18–21 May 2015; pp. 1–6. [Google Scholar]
  12. Massot-Campos, M.; Oliver-Codina, G. Optical Sensors and Methods for Underwater 3D Reconstruction. Sensors 2015, 15, 31525–31557. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Leone, A.; Diraco, G.; Distante, C. Stereoscopic System for 3-D Seabed Mosaic Reconstruction. In Proceedings of the 2007 IEEE International Conference on Image Processing, San Antonio, TX, USA, 16 September–19 October 2007; Volume 2, pp. 541–544. [Google Scholar]
  14. Telem, G.; Filin, S. Photogrammetric modeling of underwater environments. ISPRS J. Photogramm. Remote Sens. 2010, 65, 433–444. [Google Scholar] [CrossRef]
  15. Risholm, P.; Kirkhus, T.; Thielemann, J.T.; Thorstensen, J. Adaptive Structured Light with Scatter Correction for High-Precision Underwater 3D Measurements. Sensors 2019, 19, 1043. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Risholm, P.; Thorstensen, J.; Thielemann, J.T.; Kaspersen, K.; Tschudi, J.; Yates, C.; Softley, C.; Abrosimov, I.; Alexander, J.; Haugholt, K.H. Real-time super-resolved 3D in turbid water using a fast range-gated CMOS camera. Appl. Opt. 2018, 57, 3927–3937. [Google Scholar] [CrossRef]
  17. Jaffe, J. Computer modeling and the design of optimal underwater imaging systems. IEEE J. Ocean. Eng. 1990, 15, 101–111. [Google Scholar] [CrossRef]
  18. Prats, M.; Fernandez, J.J.; Sanz, P.J. An approach for semi-autonomous recovery of unknown objects in underwater environments. In Proceedings of the 2012 13th International Conference on Optimization of Electrical and Electronic Equipment (OPTIM), Brasov, Romania, 24–26 May 2012; pp. 1452–1457. [Google Scholar]
  19. Hildebrandt, M.; Kerdels, J.; Albiez, J.; Kirchner, F. A practical underwater 3D-Laserscanner. In Proceedings of the OCEANS 2008, Quebec City, QC, Canada, 15–18 September 2008; pp. 1–5. [Google Scholar]
  20. Narasimhan, S.G.; Nayar, S.K.; Sun, B.; Koppal, S.J. Structured Light in Scattering Media. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–21 October 2005; Volume 1, pp. 420–427. [Google Scholar]
  21. Albiez, J.; Duda, A.; Fritsche, M.; Rehrmann, F.; Kirchner, F. CSurvey—An autonomous optical inspection head for AUVs. Robot. Auton. Syst. 2015, 67, 72–79. [Google Scholar] [CrossRef]
  22. Moore, K.D.; Jaffe, J.S.; Ochoa, B.L. Development of a New Underwater Bathymetric Laser Imaging System: L-Bath. J. Atmospheric Ocean. Technol. 2000, 17, 1106–1117. [Google Scholar] [CrossRef]
  23. Smith, R.C.; Cheeseman, P. On the Representation and Estimation of Spatial Uncertainty. Int. J. Robot. Res. 1986, 5, 56–68. [Google Scholar] [CrossRef]
  24. Hidalgo, F.; Bräunl, T. Review of underwater SLAM techniques. In Proceedings of the 2015 6th International Conference on Automation, Robotics and Applications (ICARA), Queenstown, New Zealand, 17–19 February 2015; pp. 306–311. [Google Scholar]
  25. Eustice, R.M.; Camilli, R.; Singh, H. Towards Bathymetry-Optimized Doppler Re-navigation for AUVs. In Proceedings of the Proceedings of OCEANS 2005 MTS/IEEE, Washington, DC, USA, 17–23 September 2005; pp. 1430–1436. [Google Scholar]
  26. Roman, N.C. Self Consistent Bathymetric Mapping from Robotic Vehicles in the Deep Ocean; Massachusetts Institute of Technology: Cambridge, MA, USA, 2005. [Google Scholar]
  27. Kinsey, J.C.; Eustice, R.M.; Whitcomb, L.L. A survey of underwater vehicle navigation: Recent advances and new challenges. In Proceedings of the IFAC Conference of Manoeuvering and Control of Marine Craft, Girona, Spain, 17–19 September 1997; Volume 88, pp. 1–12. [Google Scholar]
  28. Amundsen, H.B. Robust Nonlinear ROV Motion Control for Autonomous Inspections of Aquaculture Net Pens. Master’s Thesis, Department of Engineering Cybernetics, Norwegian University of Science and Technology (NTNU), Trondheim, Norway, 2020. [Google Scholar]
  29. Caharija, W.; Pettersen, K.Y.; Bibuli, M.; Calado, P.; Zereik, E.; Braga, J.; Gravdahl, J.T.; Sorensen, A.J.; Milovanovic, M.; Bruzzone, G. Integral Line-of-Sight Guidance and Control of Underactuated Marine Vehicles: Theory, Simulations, and Experiments. IEEE Trans. Control Syst. Technol. 2016, 24, 1623–1642. [Google Scholar] [CrossRef] [Green Version]
  30. Candeloro, M.; Sørensen, A.J.; Longhi, S.; Dukan, F. Observers for dynamic positioning of ROVs with experimental results. IFAC Proc. Vol. 2012, 45, 85–90. [Google Scholar] [CrossRef]
  31. Shortis, M. Camera Calibration Techniques for Accurate Measurement Underwater. In 3D Recording and Interpretation for Maritime Archaeology; McCarthy, J.K., Benjamin, J., Winton, T., van Duivenvoorde, W., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 11–27. [Google Scholar]
  32. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  33. Torr, P.H.S.; Zisserman, A. MLESAC: A New Robust Estimator with Application to Estimating Image Geometry. Comput. Vis. Image Underst. 2000, 78, 138–156. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The Argus Mini remotely operated vehicle (ROV), courtesy of Argus Remote Systems AS.
Figure 2. SINTEF ACE, a full-scale laboratory facility designed to develop and test new aquaculture technologies.
Figure 3. (a) The laser lines are mounted inside watertight tubes from Blue Robotics. The laser lines are perpendicular to the plane containing the two tubes and the camera. (b) The laser lines are mounted onto the ROV. The camera used is the on-board ROV camera, seen in the center of the ROV.
Figure 4. Example of calibration images for the camera–laser 3D measurement. The laser lines can be seen in the left part of the checkerboard.
Figure 5. Example of the ROV’s laser line images for the camera–laser 3D measurement and the fitted plane representing the net cage wall. The upper row shows the images, and the lower row the planes fitted to the x, y, z positions of the points; units are in mm.
Figure 6. The geometry of the fish cage and the measured net distance and yaw angle in the horizontal North-East plane. The ROV is roll and pitch stable, but the pitch varies ±10 degrees around the zero point, so the horizontal assumption is not perfect. The yaw angle is calculated by projecting into the North-East plane.
Figure 7. The Doppler velocity log (DVL) distance measurement compared with the laser triangulation measurement.
Figure 8. A picture taken at t = 650 s, where the triangulation distance disagrees with the DVL distance. The second net in the upper part of the image confuses the DVL. Four laser lines are visible, two from each net.
Figure 9. The laser line detection in the picture in Figure 8. The two brightest laser lines are found even when two extra, less bright laser lines appear on a more distant net.
Figure 10. A picture taken at t = 1020 s, where the triangulation data are significantly noisier than the DVL distance. A weak laser signal, due to the large distance (>2.5 m) to the net, makes the laser triangulation lose data. The image is brightness-corrected for display purposes.
Figure 11. The DVL net-relative yaw compared with the laser triangulation measurement.
Figure 12. The position of the ROV is the blue line, the red dots are DVL distance measurements, and the green dots are laser measurements.
Figure 13. A Kalman filter estimate of the experiment showing the descent, navigation along the net pen, and the ascent.
Figure 14. Kalman filter data from leg 3 of the experiment, with the DVL-measured planes overlaid, seen in the North-East plane.
Figure 15. Kalman filter data from leg 3 of the experiment, with the laser-measured planes overlaid, seen in the North-East plane.
Figure 16. A 3D view of the measured planes. The DVL data are in (a), the laser data in (b).
Figure 17. Zoomed-in distance measurements from the laser triangulation sensor and the DVL, showing quantization effects.
Figure 18. (a) A Kalman filter fusion of the distance measurements, with outliers removed; the outliers are indicated with circles. (b) The Kalman prediction error, which is the basis for estimating the noise levels of the two sensors.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
