Diver Drone Intuitive Interactions (PACA Region / Notilo Plus)

  • Support: INPS cluster (IAPS, LIS, COSMER, IMSIC)
  • COSMER lead: Claire DUNE
  • Funding: PACA Region and the company Notilo Plus
  • Duration: 2 years (2020-2022)
  • Partners: Notilo Plus, FFESSM, FIRST/OCEANIDE, IFREMER, CEPHISMER

Following in the footsteps of their aerial counterparts, underwater drones are now available to the general public at affordable prices. Among them, the autonomous robot IBubble from the company NOTILO PLUS has achieved a notable technological breakthrough by introducing a new underwater remote-operation mode: the diver carries a remote control that allows the drone to locate them and to change its operating mode during the dive. This new interaction mode has set the stage for cooperation between the drone and the diver it is tracking.

However, the interaction remains a master-slave relationship, reduced to selecting a behavior from a pre-recorded list. In this project, we propose to move beyond this master-slave relationship by integrating the robot more closely into the dive group (French: palanquée). This emancipation of the robot will be achieved by giving it high-performance sensing, suitable cognitive capacities, and enough decision-making power to interpret the divers' attitude and identify problematic situations, or simply to assist them more efficiently.

The DPII project is built around three questions:

1) What are the objective criteria that characterize an accident situation?

2) How can drone-diver communication be made bidirectional and intuitive?

3) Does the presence of the drone modify the behaviour of the dive group?

Scuba diving on air is a risky activity practiced by 3 million divers in Europe, 75% of whom choose the Mediterranean as their destination [1], a region that in 2018 hosted more than 70% of French clubs and commercial structures [2]. Underwater video and photography are selling points for commercial operators. In recent years, these operators have begun equipping themselves with camera drones to film first dives and exploration dives, and to support debriefing during training courses. The drones bring back images of the dive and can light up an area of interest or help identify species while freeing the guide's hands. They offer new possibilities for diver training and assistance [18,31]. However, their impact on divers, whether in exploration or in training, remains an open question. Can they be a helping factor during that singular moment when a novice diver learns to breathe underwater [4,5]? Could they induce stress or even risky behavior? Or could they reduce each diver's sense of responsibility by diluting individual responsibility in an "augmented" collective (cf. the agentic state and the work of Stanley Milgram)?

Most consumer underwater drones are wire-guided ROVs operated from a surface vessel. The IBubble robot from the NOTILO PLUS company is wireless. This AUV has an innovative underwater remote-operation system: the diver carries a remote control that allows the drone to locate them and to change its operating mode during the dive. This novel interaction mode has set the stage for close cooperation between the drone and the diver it is tracking. However, the interaction remains limited to selecting a behavior from a pre-recorded list, so the drone has little cognitive and decision-making autonomy. For example, in "diver tracking" mode, the robot positions itself at a fixed relative distance and holds that position even when the turbidity of the water occludes the diver. Nor does it monitor other divers entering its field of view who might want to interact with it (ask it to hold a depth, report a problem, point out an area of interest to film, request lighting, …).
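
For illustration, the "diver tracking" behaviour described above can be sketched as a simple station-keeping loop on range and bearing. This is a hypothetical Python sketch, not Notilo Plus's proprietary controller; the observation fields, gains, and set-point are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DiverObservation:
    range_m: float       # estimated distance to the diver's remote, in metres
    bearing_rad: float   # bearing to the diver in the robot's body frame

def follow_controller(obs: DiverObservation,
                      target_range_m: float = 3.0,
                      k_surge: float = 0.5,
                      k_yaw: float = 1.0) -> tuple[float, float]:
    """Proportional station-keeping at a fixed relative distance.

    Returns (surge, yaw_rate) commands. Note the limitation discussed
    above: the loop blindly holds the set-point and has no notion of
    occlusion by turbidity or of other divers in the field of view.
    """
    surge = k_surge * (obs.range_m - target_range_m)  # close the range error
    yaw_rate = k_yaw * obs.bearing_rad                # keep the diver centred
    return surge, yaw_rate
```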

The DPII project proposes to broaden the range of interactions between a diver and an AUV so that information can be exchanged more intuitively. The European FP7 project CADDY [28], the work of the Minnesota Interactive Robotics and Vision Laboratory [10-14], and later the ADRIATIC project have explored different communication modalities between an underwater robot and a diver: tags, an underwater tablet [28], recognition of the diver's gestures with marked or instrumented gloves [22] or with bare hands [12], and more recently the robot's own motion [16]. The objective of the DPII project is to propose an intuitive drone-diver interaction mode that is as unintrusive as possible: pictograms on a tablet, body movements, gestures, light signals, sound signals, etc. These interaction modes will be studied (AMIDP thesis), along with their impact on divers' affects and degree of engagement.
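
To make the envisioned bidirectional channel concrete, here is a minimal Python sketch pairing drone-to-diver signals (pictogram, light, sound, motion) with diver-to-drone requests (gestures, remote). The message names are illustrative assumptions, not a vocabulary defined by the project.

```python
from enum import Enum, auto

class DroneToDiver(Enum):
    PICTOGRAM_CHECK_OK = auto()   # shown on an underwater tablet, cf. [28]
    LIGHT_FLASH_ALERT = auto()    # light signal
    SOUND_ATTENTION = auto()      # sound signal
    MOTION_NOD = auto()           # communication through robot motion, cf. [16]

class DiverToDrone(Enum):
    GESTURE_OK = auto()           # bare-hand or gloved gesture, cf. [12,22]
    GESTURE_PROBLEM = auto()
    GESTURE_FILM_HERE = auto()    # point out an area of interest to film
    REMOTE_HOLD_DEPTH = auto()    # request relayed by the diver's remote

def acknowledge(msg: DiverToDrone) -> DroneToDiver:
    """Pick the least intrusive acknowledgement for a diver request."""
    if msg is DiverToDrone.GESTURE_PROBLEM:
        return DroneToDiver.LIGHT_FLASH_ALERT
    return DroneToDiver.MOTION_NOD
```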

To interact with a diver, the drone must position itself in front of them. It must therefore first be able to detect [25] and localize the divers [23], estimate their poses [6,15,20,24], and predict their future movements so as to anticipate them, using sensor-based predictive control schemes. The robot must also plan its trajectory within the dive group so as not to collide with the divers or trigger panic movements by darting between them too quickly. Terrestrial robotics has already tackled the problem of autonomous navigation in a crowd [7,8,9,17,31,32]. The DPII project proposes to develop a navigation method within a dive group adapted to the 3D underwater environment, while questioning the impact of the drone's movements on the divers' emotional state. In addition, the spatio-temporal evolution of the dive appears to be a measurable and relevant criterion for the prevention of diving accidents: a diver who is cold loses interest in the dive (visible, in particular, through reduced head movements), stays away from the group, and slows their swimming rhythm, while a diver affected by narcosis tends to behave erratically compared to the rest of the group and stops responding to interactions. Other criteria could be identified and used to raise alarms, such as a large cloud of bubbles released by a diver, symptomatic of overexertion, or, on the contrary, a reduction in bubbles suggesting a lack of air, or a swimming rhythm altered by cold or fatigue.
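
As a toy example of the group-relative criteria mentioned above, the sketch below scores each diver by how far simple features (distance to the group centroid, swim rhythm, bubble-release rate) deviate from the rest of the group. The features and the z-score rule are assumptions for illustration, not the project's validated accident criteria.

```python
import numpy as np

def anomaly_scores(positions: np.ndarray,
                   swim_rates: np.ndarray,
                   bubble_rates: np.ndarray) -> np.ndarray:
    """Per-diver anomaly score relative to the group.

    positions: (N, 3) diver positions; swim_rates, bubble_rates: (N,).
    Returns the largest absolute z-score over the features for each diver,
    so a diver straying from the group, slowing down, or releasing an
    unusual amount of bubbles stands out.
    """
    centroid = positions.mean(axis=0)
    dist_to_group = np.linalg.norm(positions - centroid, axis=1)
    feats = np.stack([dist_to_group, swim_rates, bubble_rates], axis=1)
    z = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-6)
    return np.abs(z).max(axis=1)
```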

The DPII project proposes to study the following scenario: a camera drone follows a dive group, monitors all its divers, and characterizes the group's compactness. It detects a diver with singular behavior. It plans a trajectory, taking the environment and the group into account, to position itself in front of that diver. It then interacts with the diver to evaluate their state. In case of an anomaly, it signals the diver to the dive guide (light flash, circling above the diver in distress, sound signal). If there turns out to be no problem, it updates its knowledge base to refine its detection of future anomalies.
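
The scenario can be read as a small state machine. The following Python sketch makes the loop explicit; the states and transitions are a plain reading of the paragraph above, with names chosen for illustration.

```python
from enum import Enum, auto
from typing import Optional

class MissionState(Enum):
    MONITOR = auto()       # follow the group, track its compactness
    APPROACH = auto()      # plan a path to the singular diver
    INTERACT = auto()      # query the diver's state
    ALERT_GUIDE = auto()   # flash, circle, or sound signal to the guide
    UPDATE_MODEL = auto()  # false alarm: refine the anomaly detector

def step(state: MissionState, anomaly: bool,
         diver_ok: Optional[bool]) -> MissionState:
    """One transition of the mission loop; diver_ok stays None until
    the diver has answered the drone's query."""
    if state is MissionState.MONITOR:
        return MissionState.APPROACH if anomaly else MissionState.MONITOR
    if state is MissionState.APPROACH:
        return MissionState.INTERACT      # once positioned facing the diver
    if state is MissionState.INTERACT:
        if diver_ok is None:
            return MissionState.INTERACT  # still waiting for an answer
        return MissionState.UPDATE_MODEL if diver_ok else MissionState.ALERT_GUIDE
    return MissionState.MONITOR           # after alerting or updating, resume
```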

Each step of this scenario raises a scientific challenge: estimating the locations of all the divers in a group with the sensors onboard an AUV, estimating their 3D attitudes, navigating within the group (which echoes the problem of navigation in a crowd), detecting an anomaly (using recent artificial intelligence techniques), and interacting with a diver using their usual equipment. Algorithms for estimating divers' positions and attitudes (position and orientation) will be compared against trajectories measured by a dynamic motion-capture system (see the equipment funding section). These scenarios will be tested on groups of divers with different levels of expertise and interaction abilities: recreational divers of level 2 or 3 and divers with disabilities, in partnership with the FFESSM, as well as military divers, in partnership with CEPHISMER.
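
One common way to carry out the comparison against motion-capture ground truth is a root-mean-square position error over time-aligned trajectories, as in the short sketch below; the project's actual evaluation protocol may of course differ.

```python
import numpy as np

def trajectory_rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """RMSE between diver positions estimated onboard the AUV and the
    motion-capture reference, both given as time-aligned (T, 3) arrays."""
    err = estimated - ground_truth
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```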

The originality of this project lies in building interdisciplinary work that fully interweaves the humanities and social sciences, sport and physical activity sciences, basic sciences, and engineering sciences at every stage.

References

  1. http://livreplongee.fr/chiffres-cles-de-la-plongee-en-france
  2. http://coindespros.ffessm.fr/
  3. Miguel Simao. Segmentation et reconnaissance des gestes pour l'interaction homme-robot cognitive [Gesture segmentation and recognition for cognitive human-robot interaction]. PhD thesis, École nationale supérieure d'arts et métiers (ENSAM) / Universidade de Coimbra, 2018. In French.
  4. Candace L. Sidner, Christopher Lee, Cory D. Kidd, Neal Lesh & Charles Rich. Explorations in engagement for humans and robots. Artif. Intell.,vol. 166, pages 140–164, August 2005.
  5. C. Rich, B. Ponsler, A. Holroyd & C.L. Sidner. Recognizing engagement in human-robot interaction. In Human-Robot Interaction (HRI), 2010 5thACM/IEEE International Conference on, pages 375 –382, March 2010.
  6. Stephanie Rosenthal & Manuela M. Veloso. Modeling Humans as Observation Providers using POMDPs. In RO-MAN, 2011
  7. E.A. Sisbot, L.F. Marin-Urias, R. Alami & T. Simeon.A Human Aware Mobile Robot Motion Planner. IEEE Transactions on Robotics, vol. 23,no. 5, pages 874–883, october 2007
  8. Emrah Akin Sisbot, Luis F. Marin-Urias, Xavier Broquere, Daniel Sidobre & Rachid Alami.Synthesizing Robot Motions Adapted to Human Presence. International Journal of Social Robotics, vol. 2, no. 3, pages 329–343,2010.
  9. Tarek Taha, Jaime Valls Miŕo & Gamini Dissanayake. POMDP-based long-term user intention prediction for wheelchair navigation. In ICRA, pages 3920–3925, 2008
  10. J. Sattar and G. Dudek. Visual identification of biological motion for underwater human–robot interaction. Autonomous Robots, vol. 42, pp. 1-14, 2017. doi:10.1007/s10514-017-9644-y.
  11. M. J. Islam, M. Ho, and J. Sattar. Understanding human motion and gestures for underwater human–robot collaboration. Journal of Field Robotics, vol. 36, 2018. doi:10.1002/rob.21837.
  12. J. Sattar, P. Giguere, G. Dudek, and C. Prahacs, “A Visual Servoing System for an Aquatic Swimming Robot,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2005, pp. 1483–1488.
  13. J. Sattar and G. Dudek, “Robust Servo-control for Underwater Robots using Banks of Visual Filters,” in IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2009, pp. 3583–3588.
  14. Michael Fulton, Chelsey Edge, Junaed Sattar, “Robot Communication Via Motion: Closing the Underwater Human-Robot Interaction Loop”, Robotics and Automation (ICRA) 2019 International Conference on, pp. 4660-4666, 2019
  15. J. DeMarco, M. E. West, A. M. Howard, “Sonar-Based Detection and Tracking of a Diver for Underwater Human-Robot Interaction Scenarios”, 2013 IEEE International Conference on Systems Man and Cybernetics, pp. 2378-2383, Oct. 2013.
  16. M. J. Islam, M. Ho, and J. Sattar, “Dynamic reconfiguration of mission parameters in underwater human-robot collaboration,” IEEE International Conference on Robotics and Automation (ICRA), pp. 1-8, May 2018.
  17. M. J. Islam, J. Hong, and J. Sattar, “Person following by autonomous robots: A categorical overview,” CoRR, 2018.
  18. Marco Bibuli, “Diving with robots,” HANSA International Maritime Journal, p. 66, May 2016.
  19. Guštin, Franka; Rendulić, Ivor; Mišković, Nikola; Vukić, Zoran. Hand gesture recognition from multibeam sonar imagery, Proceedings of the 10th IFAC Conference on Control Applications in Marine Systems (CAMS’16), 470-475 https://doi.org/10.1016/j.ifacol.2016.10.450
  20. A. G. Chavez, C. A. Mueller, A. Birk, A. Babic and N. Miskovic, “Stereo-vision based diver pose estimation using LSTM recurrent neural networks for AUV navigation guidance,” OCEANS 2017 – Aberdeen, Aberdeen, 2017, pp. 1-7. doi: 10.1109/OCEANSE.2017.8085020
  21. Chiarella, D.; Bibuli, M.; Bruzzone, G.; Caccia, M.; Ranieri, A.; Zereik, E.; Marconi, L.; Cutugno, P. A Novel Gesture-Based Language for Underwater Human–Robot Interaction. J. Mar. Sci. Eng. 2018, 6, 91. 
  22. Đ. Nađ, C. Walker, I. Kvasić, D. Orbaugh Antillon, N. Mišković, I. Anderson, and I. Lončar. Towards Advancing Diver-Robot Interaction Capabilities. 12th IFAC Conference on Control Applications in Marine Systems, Robotics, and Vehicles, Daejeon, South Korea, 2019, pp. 1-6.
  23. Mandić, Filip; Mišković, Nikola. Tracking Underwater Target Using Extremum Seeking, Proceedings of the 4th IFAC Workshop on Navigation, Guidance and Control of Underwater Vehicles (NGCUV’2015).
  24. Rendulić, Ivor; Bibulić, Aleksandar; Mišković, Nikola. Estimating diver orientation from video using body markers, Proceedings of MIPRO 2015 Conference / Petar Biljanović (ur.). Rijeka : Croatian Society for Information and Communication Technology, Electronics and Microelectronics – MIPRO, 2015. 1257-1263
  25. Gomez Chavez, Arturo; Pfingsthorn, Max; Birk, Andreas; Rendulić, Ivor; Mišković, Nikola. Visual Diver Detection using Multi-Descriptor Nearest- Class-Mean Random Forests in the Context of Underwater Human Robot Interaction (HRI), Proceedings of MTS/IEEE OCEANS’15 Conference
  26. S. Murat Egi, Guy Thomas, Massimo Pieri, Danilo Cialoni, Costantino Balestra, Alessandro Marroni, Safety rules for the development of a Cognitive Autonomous Underwater Buddy (CADDY), ISUR – 8th International Symposium on Underwater Research; 26-29 March 2014, Procida, Italy.
  27. N. Mišković, A. Pascoal, M. Bibuli, M. Caccia, J. A. Neasham, A. Birk, M. Egi, K. Grammer, A. Marroni, A. Vasilijević, et al. CADDY project, year 3: The final validation trials. OCEANS 2017 – Aberdeen, Aberdeen, United Kingdom: IEEE, 2017, pp. 1-5. doi:10.1109/oceanse.2017.8084715.
  28. M. Menix, N. Mišković, and Z. Vukić. Interpretation of divers’ symbolic language by using hidden Markov models. Proceedings of the 35th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO/CTS 2014), Opatija, Croatia.
  29. P. Abreu, B. Bayat, J. Botelho, P. Góis, A. Pascoal, J. Ribeiro, M. Ribeiro, M. Rufino, L. Sebastião, and H. Silva, “Cooperative Control and Navigation in the scope of the EC CADDY Project,” submitted to OCEANS’15 MTS/IEEE, Genova, Italy, 18-21 May 2015.
  30. N. Mišković, Z. Vukić, and A. Vasilijević. Autonomous Marine Robots Assisting Divers. vol. 8112, pp. 357-364, 2013. doi:10.1007/978-3-642-53862-9-46.
  31. Jorge Rios-Martinez, Anne Spalanzani, Christian Laugier. From Proxemics Theory to Socially-Aware Navigation: A Survey. International Journal of Social Robotics, Springer, 2015
  32. Ginés Clavero, Jonatan & Martín, Francisco & Vargas, David & Rodríguez Lera, Francisco & Matellán, Vicente. (2019). Social Navigation in a Cognitive Architecture Using Dynamic Proxemic Zones. Sensors. 19. 5189. 10.3390/s19235189.