Article

Robot-Assisted Glovebox Teleoperation for Nuclear Industry

Remote Applications in Challenging Environments (RACE), United Kingdom Atomic Energy Authority, Abingdon, Oxfordshire OX14 3DB, UK
*
Author to whom correspondence should be addressed.
Current address: Humanoid Sensing and Perception, Italian Institute of Technology, 16163 Genova, Italy.
Submission received: 28 May 2021 / Revised: 20 June 2021 / Accepted: 29 June 2021 / Published: 3 July 2021
(This article belongs to the Special Issue Advances in Robots for Hazardous Environments in the UK)

Abstract

The nuclear industry contains some of the most extreme environments in the world, with radiation levels and harsh conditions restricting human access to many facilities. One method for minimising human exposure to these hazards is the glovebox: a sealed volume with controlled access for performing handling tasks. While gloveboxes allow operators to perform complex handling tasks, they put operators at considerable risk of breaking the confinement, and, historically, serious incidents have occurred, including punctured gloves leading to lifetime doses. To date, robotic systems have had relatively little impact on the industry, even though it is clear that they offer major opportunities for improving productivity and significantly reducing risks to human health. This work presents the challenges of robotic and AI solutions for nuclear gloveboxes and introduces a step forward in bringing cutting-edge technology to them. The problem statement and challenges are highlighted, and an integrated demonstrator is then proposed for robotic handling of nuclear material in gloveboxes. The proposed approach spans tele-manipulation to shared autonomy, computer vision solutions for robotic manipulation, and machine learning solutions for condition monitoring.

1. Introduction

Robots are indispensable tools for manipulation in challenging environments such as nuclear applications [1]. Robotics in the nuclear industry can not only ensure the safety of operators from unsafe levels of radiation but also provide cost-effective solutions for manipulation, inspection, and maintenance of nuclear sites.
The extreme conditions encountered in the nuclear industry lead to a conservative attitude towards cutting-edge robotics technology, despite its high potential for solving problems that the industry faces [2]. In order to bridge the gap between state-of-the-art robotics research and the nuclear industry, the Robotics and AI in Nuclear (RAIN) Hub was established, where various problems encountered on nuclear sites are being investigated and robotic solutions are being developed [3].
Nuclear gloveboxes are contained environments for the safe handling of hazardous objects and materials. A glovebox prevents the spread of contamination while handling nuclear materials or contaminated objects. One of the problems considered in the RAIN Hub is introducing modern robotics and AI technology into existing gloveboxes and paving the path for the next generation of glovebox designs. Our approach covers a wide range of technologies, from computer vision to teleoperated robotics and from assistive technologies to machine learning, aiming towards safer and more efficient operations with nuclear gloveboxes.
As for all nuclear applications, the safety of the operator using the glovebox is the primary goal for every operation inside the glovebox. To establish safe operational conditions, operators are equipped with personal protective equipment (PPE) and are required to closely follow operational rules. However, glovebox operations do not fully mitigate all hazards and remain high risk activities for the operators [4].
Using PPE and working in a confined space with additional safety procedures lowers the manipulation capabilities of the operators [5]. Gloves severely reduce the tactile feedback from the hands. Moreover, working through glove ports that limit the arm movements of the operator introduces further challenges during the handling of high-risk objects inside the glovebox. As a result, a simple task, such as opening the screw lid of a container, becomes strenuous and challenging for the operator.
Gloveboxes, such as in Figure 1, may be cluttered, dynamic environments, and the tasks executed within can be complex, numerous, safety-critical, and often one-of-a-kind; therefore, a stand-alone autonomous robotic system cannot be expected to outperform a human operator with current technology. Moreover, due to safety concerns, human-in-the-loop solutions are deemed more desirable, at least in the early phases of deployment.
Novel technologies in robotics and artificial intelligence can be exploited to increase the safety in the legacy glovebox or to design new robotic gloveboxes [6,7,8]; in both cases, dexterous robotic manipulators, sensors, and control algorithms can avoid direct contact between the operator and hazardous material. Inside the unstructured environment of gloveboxes, robots could be controlled by the operator via teleoperation while more autonomous control strategies could be exploited in more standard tasks. Robot arms could be profitably used to accomplish operations that today are performed by an operator in order to reduce the workload and the risks of accident or contamination.
In this paper, the challenges encountered in nuclear applications, particularly nuclear gloveboxes, are described. Furthermore, this paper draws a general framework for bringing cutting-edge robotics and AI research into legacy gloveboxes. In this framework, autonomous robotic grasping, collision avoidance, and condition monitoring of robotic manipulation systems are described as solutions for improving the safety and manipulation capability of legacy gloveboxes. Moreover, supporting the operators during complex task execution using operational management software is described as part of the general framework.
The following article presents the problem of nuclear glovebox robotics and proposes an integrated demonstrator of a robotic handling system for nuclear gloveboxes, spanning teleoperation to autonomy. The paper is organised as follows. In Section 2, nuclear gloveboxes are introduced and the challenges for robotics and AI are presented. Section 3 presents previous work on the use of robotics technology in nuclear gloveboxes. In Section 4, the hardware and the simulator built around this hardware are presented. Section 5 defines the research fields of the project and describes how they address the challenges. Finally, Section 6 concludes the paper.

2. Challenge Statement

2.1. Glovebox Challenges

The majority of successful robotic applications involve structured, known, open environments where obstructions to motion and sensing are minimal. Moreover, the operational conditions are expected to be clean and suitable for mechatronic systems, so as not to cause damage to the mechanisms and electronics. On the contrary, the working conditions inside nuclear gloveboxes are considered to be dirty, dark, dull, dangerous, and cluttered. Therefore, a thorough understanding of gloveboxes is key for the success of any robotic solution.
Gloveboxes comprise six major components, illustrated in Figure 2: the hull, windows, glove ports, posting ports, monitoring equipment, and the glovebox internals.

2.1.1. Hull

The hull is the primary component of the glovebox that separates the glovebox internal environment from the external environment. In some glovebox solutions, the hull encloses a vacuum or a pressurized inert gas to ensure the containment of the radiation hazard. The hull is often lead lined for improved shielding. Due to the hazards inside the glovebox, it is imperative that the hull is not damaged or containment breached.

2.1.2. Windows

The windows allow operators to see within the glovebox. The glass is often doped with lead to increase its nuclear shielding; however, over time it is common for this glass to become yellowed (with lower visibility) and brittle from radiation damage. It is not uncommon for the glass to become crazed, further weakening the integrity of the containment and reducing visibility.

2.1.3. Glove Ports

These are fixed holes in the hull that allow the gloves, and hence the operators, to penetrate the hull. They are normally of a standard fixed dimension (e.g., 11 cm in radius), and most gloveboxes have multiple ports dotted around the hull to enable operators to reach anywhere in the glovebox interior. These ports have a standard method for glove replacement without losing containment and can house ports for non-glove equipment, such as cable routing. The gloves used by the operators are often thick, heavy, and leaded, and when under pressure they require the operator to hook their hands in with their last two fingers to stop their hands being forced out. Overall, the glove design significantly increases operator safety while sacrificing dexterity and reducing the manipulation capability of the operator.

2.1.4. Posting in/out Ports

These ports allow operators to post items in or out of the hull through an airlock, which maintains the containment. Before items are posted out, it must be ensured that they are appropriately decontaminated. Posted-out items are double bagged and limited to a fixed size.

2.1.5. Environment Monitoring and Maintenance Equipment

This is the equipment for monitoring the glovebox internal environment, maintaining any containment requirements (e.g., vacuum, temperature), and performing containment testing (e.g., leak tests).

2.1.6. Glovebox Internals

The glovebox internals include the operational equipment used by the operators. This is a wide and diverse set of objects, from chemical processing equipment to powered hand tools (e.g., dremels). Any operation for handling nuclear material/objects is performed inside the glovebox internal area.
As an example of a nuclear application, consider post-operational clean-out (POCO). This requires nuclear gloveboxes that have been in service for decades to be dismantled and decontaminated from the inside out: surveying, separating conventional and radiological wastes, reducing the size of elements through deconstruction or cutting, draining liquids from process plant equipment, sweeping, and posting contained elements out. Beyond this, it is common for operators to require additional complex PPE, or other equipment such as ladders, to access gloveboxes while limiting their exposure to contamination.

2.2. Challenges of Robots in Gloveboxes

Whilst reducing the amount of time human arms spend in the gloves reduces the risk to operators, new challenges are posed to the robots. POCO is used here as the primary use case, as it covers a wide range of complex tasks in nuclear gloveboxes.

2.2.1. Mechatronics Challenges

The first mechatronic challenge is how to place the robot into the area. In a new glovebox, robots can be built into the internal side of the hull, but this causes issues for maintenance, as they must then be maintained in situ. Alternatively, the robot can access the area through the glovebox ports. This requires the robot to fit through the glovebox port whilst also having a long reach and a payload capability similar to a human. It is worth noting that this pushes the robots towards an inline joint configuration, rather than an offset approach such as that used by Universal Robots, for example.
Many glovebox operations generate swarf, grease, and dust, making gloveboxes exceptionally dirty. This poses issues for robots and any mechanism: swarf entering joints can be very damaging to the mechatronics. Similarly, other functions can be impaired, such as magnetic grippers being fouled by magnetic swarf. A unique challenge in nuclear gloveboxes is alpha-emitting powders (e.g., plutonium), which are highly abrasive, penetrative, radioactive, and volatile if not managed correctly.
This leads to the consideration of whether the robot should be in the glove or affixed directly to the port. The environments are filled with dust and detritus, which can damage joints. Moreover, it is preferred that robots do not become contaminated, to simplify maintenance. Both considerations push robot designs towards operating inside the gloves. Manipulating from inside the glove applies pressure to the robot and limits rotations and dexterity. It is worth noting that the end-effectors may be on the inside of the environment, connected to the robot through a modified glove that can dock a robot and end-effector.
In a similar fashion, the glove may have a window modified into it to allow the robot to have a wrist camera. External sensors may be challenging to install: either they and their cabling have to be posted in, or they have to cope with the reduced-visibility glass interfering with their functioning. In the case of posting in, the sensor must be able to withstand the environment, a mechanism is needed to connect power and data without breaking containment, and an affixing method must be determined. Moreover, posting sensors in increases the secondary waste generated in the decommissioning process. Secondary waste is waste generated in the process of decommissioning primary waste.
While robots that replicate human physiology will have an advantage in being able to replicate operations, other robot kinematic layouts will also have their advantages, such as slender continuum robots, which will have advantages in inspecting complex shapes and internals such as pipes.
Another significant challenge is radiation, which will degrade many parts of the robot. Gamma radiation is the most challenging type of radiation to protect a robotic system from in nuclear gloveboxes, due to its penetrating power (shielding the whole robot would be impractical due to the thickness of material required to stop it) and its negative effects on sensitive components commonly used in robotics. Components and materials such as semiconductors (used in sensors, local motor drive electronics, etc.), plastics (polymers), optical components, and lubricants are degraded or rendered unusable after certain levels of accumulated Total Integrated Dose (TID) of gamma radiation. Electronic components using a standard layout will accumulate trapped charges that can change the voltage levels at which transistors switch on or off, induce leakage currents in critical parts of devices, or even outright inhibit their functioning. Polymers suffer from either an increase or decrease in cross-linking, or from off-gassing, which can change the polymer morphology and liberate some of the (sometimes volatile) additives used in the polymer's manufacturing. Either of these processes will change the polymer properties, making some polymers stiffer and more brittle and others more soluble or liable to melt at a lower temperature. Finally, greases such as mineral oil may oxidise and stiffen in a radiation environment due to the off-gassing of hydrocarbon molecules.
Since the damage caused by ionising gamma radiation is done over time as a consequence of the accumulating dose, limiting the amount of time the robotic tool spends inside the glovebox to active operations only is a good first step to extending its useful lifetime. However, this method requires a reliable mechanism for insertion and removal which does not rely on extensive human intervention and does not add an unacceptable risk of spreading contamination.
There are different approaches for dealing with the radiation degradation of a robot. One method is to utilise standard COTS components, which are replaced on a regular basis and/or as they stop functioning. This has the advantage of being achievable with commercially available technologies, but it puts requirements on the glovebox/robot design: all "perishable" components must be easy to remove and replace, and a robust safety system must be in place to handle unexpected robot failures at inconvenient times. The mean time to failure due to radiation degradation cannot be easily predicted in COTS devices that have not been designed with this environment in mind, and they could fail after anything between 10s and 1000s of hours depending on dose rate and radiation sensitivity. There is also the risk of creating further secondary waste from this process.
A better long-term approach to this challenge is to use radiation hardened components, which are designed, manufactured, and certified to withstand a particular TID before failing. Historically, such technology has mainly been developed for use in the space sector, but electronics designed for spaceflight are often prohibitively expensive, and the space sector is more concerned with protecting devices from the effects of charged particles and high-energy electrons than gamma radiation due to these factors dominating the space environment.
Traditionally, the nuclear sector has been able to work around the lack of radiation-tolerant electronics through the extensive use of shielding and simple electro-mechanical solutions, but the maturing field of nuclear fusion has created a strong research push towards radiation-tolerant sensors and electronics. For example, devices such as DC-to-DC converters, resolver-to-digital converters, relay drivers, and even sensor components such as digital camera image sensors [9] and LIDAR components [10] have been designed and qualified for multi-MGy TID tolerance. These devices are now in advanced prototype and/or early commercialisation stages and would be capable of surviving many thousands of hours in a typical glovebox environment. This means that it is only a matter of time until the control systems of robots can be made tolerant to even the harshest glovebox radiation environments.

2.2.2. Control and Intelligent Systems Challenges

Now that there is a robot reaching into the environment, the next set of challenges present themselves. The biggest element of this is that these robots should be aiming to match or outperform the human operator.
Robotic solutions for gloveboxes mostly rely on teleoperation in order to keep the human in the decision-making process. However, ideal robotic solutions will attain better productivity, reduced cost, and increased safety by relying on autonomous systems. Despite the considerable amount of pre-existing research, deploying a fully autonomous robotic system inside a glovebox is not feasible with current technology; however, certain parts of task execution can benefit from autonomy or semi-autonomy.
Whether teleoperated or autonomous, the robot operates in a cluttered area and cannot risk hitting the windows and breaking containment. It must therefore be able to sense its location and environment and avoid collisions.
In teleoperation, this primarily manifests as the complex task of managing redundant joints re-orienting in the null space without risking collisions or reducing manipulability. Managing these additional degrees of freedom imposes a heavy cognitive load on the operator.
A further challenge is the limited number of sensors and cluttered environment, which leads to limited visibility. This then affects the ability of intelligent systems to act within the glovebox.
The variety of tasks, events, and elements that the robot may encounter is numerous and unpredictable. For example, the faults that the robot may encounter cannot be predicted, as testing for them through accelerated destructive testing would be prohibitively difficult. Similarly, an autonomous grasping system can be given a priori knowledge of the items it must deal with, but many items, such as shrapnel from decommissioning, will be novel, possibly even in their physical characteristics.
The next issue is assurance. The robot and control system must meet nuclear regulator and site owner requirements, and their safety and operation must be verified and validated. This does not preclude advanced techniques such as deep learning, as verification through statistical methods has been used in the nuclear sector, but it is a consideration.

3. Previous Work

In the last 40 years, the robotics research community has investigated innovative robotic solutions to improve the safety and efficiency of operational activities in nuclear environments. In [11], the authors highlight the importance of robotic solutions for accomplishing inspection and decommissioning tasks in hazardous environments and gloveboxes; this aspect in particular was investigated more in depth with preliminary experiments in [12], where a robotic manipulator was exploited to dismantle the JPDR reactor. Autonomous robotics and teleoperation are also key factors in innovating the dismantling of legacy gloveboxes in nuclear facilities around the world. Up to now, operators have accomplished different tasks by inserting their hands (with proper equipment) into a hazardous environment where the consequences of an accident could be serious: the operator could be contaminated by accidental cuts of the rubber glove [13] or by an error in the operation process [14]. Robotics and artificial intelligence can be profitably used to remove the operator from these dangerous tasks while autonomous or semi-autonomous systems accomplish the activities. To pursue this aim, it is necessary to improve the control strategies of manipulation systems in order to operate in complex environments with constraints and robot redundancy [15].
One preliminary study into the use of automated robotics within a glovebox is presented in [16], where an automation system and non-redundant robotic arms are proposed to mitigate human operator risks in handling activities. In order to reduce operational cost, robotic solutions are proposed to execute ad-hoc tasks [17,18] and simulations are developed to aid in mitigating hazards that may be introduced as a result of the deployment of robotic manipulators [19]. The solutions proposed above are not multipurpose because they are designed to solve specific tasks. In this scenario, redundant collaborative robots can potentially improve the system manipulation capabilities [20] as redundancy can be exploited to adapt robot poses, for example, to avoid collision with objects in the constrained space, or to handle an object with a higher quality grasping index [21] leading to more robust handling. At the same time, novel strategies need to be designed to exploit redundancy within individual applications or tasks with the aim to reduce the control complexity.
The same strategies could support the operators in manipulation and grasping tasks that are accomplished with difficulty by teleoperation inside the glovebox, as shown in [6,22].
While a training course can improve operator ability in manipulation tasks [23] and reduce operator fatigue, in some cases an autonomous system could provide direct aid to the operator [24], controlling the robot at any level of autonomy.
More recent research explores how to reduce the operator workload through high-level instructions given to the robot by voice command [2], while the usability of a humanoid robot for bi-manual tasks inside a legacy glovebox is explored in [25,26]. In general, all the solutions cited above exploit methods and strategies presented in the robotics literature in order to identify reliable grasping poses.

4. The RAIN Solution: Teleoperated Robotic Manipulation

The following is a proposed testing framework for glovebox robotics. It does not attempt to represent the challenges of contamination, but does attempt to reproduce in a safe environment the other challenges presented in Section 2.

4.1. Hardware

To best represent a human-like kinematic chain, it is proposed to use a serial robot with inline joints and a narrow diameter, able to fit through the glovebox ports. To limit the forces that can be exerted on internal surfaces, a cobot is desirable due to its in-built force limitations. This leads to the proposed option of the Kinova Gen3. The robot will be in-glove and the end-effector will be inside the glovebox. This will allow the end-effector to perform high-dexterity tasks while minimising contamination, and it also enables the possibility of tool changing. Two robots are mounted at a standard port width of 450 mm on a mobile plinth that can be raised and lowered.
The Kinova Gen3 has a wrist mounted RGB-D camera. In addition, two external RGB-D camera sensors are installed, their positioning is subject to the operation being tested. All of this is integrated with ROS and MoveIt [27], to deliver path planning, collision avoidance, teleoperation, and visualisation.
The glovebox mock-up itself is an aluminium extrusion frame, with an enclosed upper section with closed panels and a support structure, as illustrated in Figure 3.
Continuum and cable-driven robots are promising solutions for manipulating objects in a constrained environment; however, there is no commercially available continuum robot that is suitable for glovebox access and can be operated with the required payloads. As future work, cable-driven manipulators could be explored in order to evaluate the advantages and disadvantages of different solutions.
For the local (master) side of the teleoperator, an HTC Vive controller is selected as the interface for remote (slave) robot control. Despite the success of many haptic teleoperation applications [28], a VR system controller is selected due to its intuitiveness. The controller is a 6D motion tracking device that allows the operator to use hand motion to control the motion of the remote robot in a unilateral teleoperation architecture. The resulting system therefore requires less training for the operator, while motion control of the remote robot becomes a trivial task. The same intuitive interface with haptic feedback could be achieved by using a device such as that in [29]; however, an important goal of this project is to demonstrate the robotics and AI capabilities of COTS systems in nuclear environments. Therefore, in the absence of satisfactory wearable haptic interfaces, a VR controller without haptic feedback is preferred.
The teleoperation system built with the aforementioned hardware follows a basic unilateral teleoperation architecture. The hand motion of the operator is tracked by the local device and sent to the remote robot for manipulation. The operator receives visual and auditory feedback from the glovebox.
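To make the unilateral architecture concrete, the following is a minimal Python sketch of the control loop; `read_hand_pose`, `read_robot_pose`, and `send_pose_target` are hypothetical placeholders for the tracking and robot command interfaces, not the actual RAIN APIs.

```python
# Minimal sketch of a unilateral teleoperation loop: the hand pose increment
# since engagement is applied (optionally scaled) to the end-effector pose.
# All poses are 4x4 homogeneous transforms as numpy arrays.
import time
import numpy as np

def teleop_loop(read_hand_pose, read_robot_pose, send_pose_target,
                scale=1.0, rate_hz=100.0):
    hand0 = read_hand_pose()     # hand pose when the operator engages
    robot0 = read_robot_pose()   # end-effector pose at the same instant
    while True:
        delta = np.linalg.inv(hand0) @ read_hand_pose()  # motion since engagement
        delta[:3, 3] *= scale                            # scale translation only
        send_pose_target(robot0 @ delta)                 # command the remote robot
        time.sleep(1.0 / rate_hz)
```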

4.2. Simulator

An important asset for development and testing is a simulator, as it allows simpler, safer, faster, repeat testing without risk to humans or robots. For this reason a Glovebox Robot simulator was created [30].
The simulator has been generated in Gazebo and integrates the robots, the glovebox, and the sensors. These expose the same ROS APIs for control and MoveIt as the real robots. Additionally, some tools in Python have been created to enable easy scripting. Two versions of the simulator have been generated: a ROS package (https://github.com/ukaea/Glovebox-Simulator accessed on 2 July 2021) and a Docker container (https://github.com/ukaea/Glovebox-Simulator-Docker accessed on 2 July 2021). The Docker option is essentially the same as the ROS package, but it does not require installation, can start with a single command, and has an entirely browser-based interface, with gzweb for visualisation and Jupyter notebooks for interaction.
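As an illustration of scripting against the simulator's ROS/MoveIt interface, the following is a minimal sketch; the move group name "arm" is an assumption for illustration, and the actual names should be taken from the repository documentation.

```python
# Minimal sketch: commanding the simulated arm through MoveIt. The move group
# name "arm" is illustrative; MoveIt performs planning and collision checking
# against the glovebox scene.
import rospy
import moveit_commander

moveit_commander.roscpp_initialize([])
rospy.init_node("glovebox_demo")
arm = moveit_commander.MoveGroupCommander("arm")

# Plan and execute a small upward shift of the end-effector.
pose = arm.get_current_pose().pose
pose.position.z += 0.05
arm.set_pose_target(pose)
arm.go(wait=True)
arm.stop()
arm.clear_pose_targets()
```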

5. Research Areas

5.1. Autonomous Grasping

As with all remote handling tasks, the robot must do more than inspect; it must be able to interact with the world. This may be achieved through specially designed remote handling tooling, enabling mechanical automation to simplify tasks. Eventually, the robot will need to handle objects. This may be achieved through teleoperation. However, for performance and repeatability it would be advantageous to have an autonomous method.
The glovebox presents a few abnormal issues with respect to the state of the art in autonomous grasping. First is the constrained and cluttered environment, which limits robot motions and causes some optimal grasps to become unreachable. Then there is the nature of the objects to be grasped. Even known objects may be damaged or contaminated, making it desirable to pick them up at very particular points, with optimality and success rates degrading away from those points. Alternatively, many of the objects in the boxes may be entirely unique and novel, with humans not having performed detailed inspections on them for 30 years. For this reason, the system should also be capable of coping with a clutter of novel objects, which will need to be sorted in order to be put into different waste streams, for example.

5.1.1. Grasp Synthesis

Operations in gloveboxes require the manipulation of objects and tools in order to follow complex procedures; in this context, grasping plays a fundamental role in ensuring safe and successful operation. Identifying a feasible grasp in an unstructured environment is one of the fundamental research questions that is yet to be solved. The synthesis of a reliable grasp is complex because of the need to (i) consider the geometric constraints (such as obstacles in the environment and the glovebox boundaries) on the arm/gripper pose, (ii) identify a suitable grasp pose on the manipulated object, and (iii) apply a suitable contact force distribution for a safe hold. In order to provide a reliable solution to the problem described above, autonomous grasping strategies have to be improved, both to provide novel tools for supporting operators in identifying feasible grasping poses and to develop robotic gloveboxes with a high degree of autonomy.
In the robotics literature, grasp synthesis in a glovebox can be formulated as the problem of identifying feasible grasps in a constrained workspace. Two different strategies are commonly used to identify feasible grasping poses that satisfy the environmental constraints: (1) finding grasp poses without considering constraints and then filtering them to respect environment constraints [31,32,33,34], and (2) modelling the constraints inside the algorithm that finds the grasping poses [35,36,37,38].
Taking into account a priori knowledge of the properties of the object, the first group can be split into two subgroups based on: (i) the model of the object, or (ii) sensor signals used to partially estimate object properties.
Several strategies have been proposed to identify optimal grasping poses in environments without constraints. If the object model is available, swept volumes and continuous collision detection [39] or independent contact region algorithms [40] can be used to identify a handling pose. Force closure [31] and form closure index [41] optimisation could be considered a valid offline method to collect high quality grasping poses. In [32], a real-time algorithm is proposed to collect stable grasping poses.
In [42,43], the authors design an optimisation algorithm to identify suitable grasping poses taking into account optimal contact force distribution constraints; environment constraints and hand kinematics are not considered in that work. A different approach is presented in [44], where support functions and wrench-oriented grasp quality measures are used; this solver is not tested in a real scenario where a cluttered environment restricts feasible grasping poses.

5.1.2. Grasping without Object Model

The object model may not be available in all scenarios; in these cases, sensor data are exploited to estimate some properties of the scene, and a partial reconstruction of the object is used to identify grasping poses.
One possible approach exploits a grasp quality neural network that is trained with information from a synthetic data set and RGB-D images; grasping pose candidates could be estimated in real time as shown in [45,46]. Usually, good performance is only achieved after extensive neural network training with a very large dataset.
Different light conditions and partial views of the scene could reduce the performance of these methods; in such conditions Gaussian Process Implicit Surfaces and Sequential Convex programming could be used to recover the performance as shown in [33].
Alternatively, grasping strategies can be inspired by human motor control: a tactile sensor can be used to implement human-inspired grasping strategies, as shown in [34], or a video recording of a human handling sequence can be used to train the robot [47].

5.1.3. Grasping in Constrained Environments

Filtering grasping poses by constraints has the disadvantage that high-quality grasping poses may not be identified; an alternative approach is to model the constraints directly in the search algorithm. Following this concept, reliable kinematic chain configurations in a constrained environment are identified by minimising a suitable cost index subject to linear and nonlinear constraints, as presented and tested on humanoid characters in [48,49].
A similar approach for robotics applications is provided by GraspIt! [35], which synthesises stable holding poses in constrained environments by exploiting simulation and shape primitives.
In a structured scenario, the environment can be modelled and an accurate simulation tool developed using multi-body dynamics tools in order to avoid collisions [36]. Complete knowledge of the workspace can be useful for avoiding collisions between the robot and objects, as shown in [50], exploiting the motion constraint graph.
In some hazardous applications, it is mandatory to guarantee a safety distance between the gripper and dangerous objects in the scene; in these scenarios, it is possible to use a list of grasp candidates associated with a metric [37]. In order to identify feasible grasping poses in glovebox environments, a constrained optimisation is proposed in [51], which allows the system to synthesise manipulation-system poses that are force closure and not in collision with the glovebox walls.
Visual feedback could be a valid alternative, in unstructured environments, to evaluate the constraints and object positions that are necessary to plan grasping poses [38] or to move obstacles in order to reach a target object.
In recent studies [52,53], environment constraints are exploited to perform grasping tasks; this approach is promising for applications with a compliant hand in environments where no risks are caused by interactions between the manipulation system and the environment; however, it may also apply to a wider range of situations.
Multiple manipulators or highly dexterous robotic hands can be used to grasp and re-grasp objects so that they can be handled in a suitable configuration when environmental constraints prevent an optimal grasp on the first attempt. In the robotics literature, three different approaches are proposed. The first uses a dexterous robotic hand to change the configuration of the object during the grasp, as shown in [54]; this approach is not possible with grippers or manipulators with a low level of dexterity. The second plans different grasp, pick, and place actions with a manipulator, as proposed in [55]. As an alternative solution, [56] proposes an algorithm to grasp and pass an object between two different manipulators, which increases the manipulation capability and dexterity of the whole manipulation setup.
If the cluttered environment is populated with movable objects, grasping a desired object may be possible only if a sequence of actions is generated and executed to push the obstacles in the scene. A learning-based motion modelling method [57] has been proposed for predicting the motion of obstacles pushed by the manipulator, with the trained models then utilised in motion planning. An alternative solution, which first clears the workspace and then plans a collision-free path, is presented in [58]; it is less efficient than the previous one. Finally, if the objects are fixed, any possible collision must be avoided, exploiting, for example, [59].

5.2. Grasp Detection Using Deep Learning

Advancements in deep learning models, especially in computer vision, have led to their widespread application in robotics, and they have been gaining popularity in autonomous operations. One of the limitations of this approach is that its performance is tied to the quality of the data, which is sometimes difficult to acquire. For an active agent in a dynamic environment, these data-driven models can become challenging to implement where accuracy and speed are essential to ensuring safe operations. In recent years, however, significant progress has been made, leading to vastly improved levels of speed, accuracy, and generalisation that make it possible to apply these models in a closed-loop control system.
Robotic grasping is a difficult problem to solve due to the many sources of potential uncertainties such as object pose, shape, friction, and camera pose [45]. Nuclear industry gloveboxes include the added challenges of limited visibility, clutter, and objects with varying shapes and textures. In such cases, where finding an accurate model of the physical properties is difficult, data-driven approaches have demonstrated that a level of adaptability can be reached when the robots learn from example.

5.2.1. Grasp Estimation with Convolutional Neural Networks

There have been many different approaches to the grasp detection problem using deep neural networks. Instead of a separate module that extracts object properties whose output is further processed to extract grasp information, these models estimate the grasp pose directly from the input data. While some models directly estimate 6-DOF gripper poses from 3D inputs such as pointclouds, others estimate 2D gripper poses from depth or RGB images and project them to 3D space. The availability of standardised grasp datasets such as Cornell [60] and Jacquard [61], together with their relative speed of detection, has made 2D input models a popular choice for application in robotic grasping. These 2D input models can also be categorised based on the type of outputs produced. Earlier models generated a 6-dimensional vector representing the position, angle, and width of a parallel plate gripper [60,62,63]. Models such as the grasp quality convolutional neural network (GQ-CNN) [45] perform grasp sampling, followed by a grasp quality evaluator model that ranks the sampled grasps. In recent developments, grasp map estimator models such as the generative grasping convolutional neural network (GG-CNN), first proposed in [64], have demonstrated the highest performance in terms of speed and accuracy. These networks, which generally follow an encoder-decoder structure similar to image segmentation models, generate 2D maps associated with position, angle, and width, with pixel-wise grasp representation.

5.2.2. Grasp Convolutional Neural Network with Variational Autoencoders

For autonomous grasping in a glovebox, it is important to identify feasible gripper poses for novel objects in a cluttered environment. For this purpose, a neural network was developed where a variational autoencoder (VAE) was added to a grasp map estimator type of model.
The VAE, first proposed by Kingma and Welling [65], maps the data into a distribution, also known as the latent space, from which drawn samples can generate data similar to the input. A VAE consists of two neural networks, the encoder and the decoder, and a loss function. The encoder maps the input sample into a reduced-size space, called the latent space, containing the main characteristics of the sample. The decoder, in a similar way, maps back from the latent space to the original form. The distinctiveness of a VAE is that the latent space takes the form of a Gaussian distribution, expressed as mean and logarithmic variance values. The loss function is given as the sum of two components: the reconstruction loss and the latent loss. The former measures the ability of the VAE to reconstruct the presented input at the output, while the latter is a metric of how close the latent space is to a Gaussian distribution.
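As a concrete reference for this two-term loss, the following is a minimal PyTorch sketch; the mean squared error reconstruction term and the weighting are illustrative choices, not necessarily those used in the models below.

```python
# Minimal PyTorch sketch of the VAE loss: reconstruction term plus latent (KL)
# term that pushes q(z|x) = N(mu, exp(logvar)) towards the unit Gaussian.
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # Sample z = mu + sigma * eps with eps ~ N(0, I), keeping gradients.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    recon = F.mse_loss(x_recon, x, reduction="sum")               # reconstruction loss
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # latent loss
    return recon + beta * kl
```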
In the proposed models, variational autoencoders were used for modelling the grasp estimation neural network. Two different types of VAEs were explored in this work: conditional variational autoencoders (CVAE) [66] and vector quantized variational autoencoders (VQ-VAE) [67]. Similar to other grasp map estimation models such as [64,68], these models are very lightweight and are able to generate grasp poses at relatively high speed, with a response time of around 19 ms. Evaluation of these approaches on the Cornell dataset also demonstrated a high grasp detection accuracy of 95.4% for the VQ-VAE and 94.3% for the CVAE-based models. Figure 4 depicts the output (for a validation set from the Cornell dataset) of the grasp neural network using the VQ-VAE, which generates a grasp quality map (Q) and its associated angle and width maps. The oriented rectangle representation of the grasp (bottom row of Figure 4) is then calculated from the maximum pixel value of Q and its corresponding angle and width. These models were also evaluated with 3D models of objects with complex geometry, such as those in the Evolved Grasping Analysis Dataset (EGAD) [69]. In Figure 5, a simulated testing platform developed in Gazebo is shown, where EGAD objects with random poses are generated to replicate a cluttered environment. The simulated RGBD camera attached to the robot wrist is used to capture the depth image given as input to the neural network.
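The rectangle-extraction step described above can be sketched in a few lines of Python; the Gaussian smoothing of Q is an illustrative post-processing choice rather than a detail reported in the text.

```python
# Minimal sketch of grasp extraction from the network's output maps: lightly
# smooth the quality map Q, then read the grasp at its maximum pixel.
import numpy as np
from scipy.ndimage import gaussian_filter

def grasp_from_maps(q_map, angle_map, width_map, sigma=2.0):
    """q_map, angle_map, width_map: HxW arrays output by the network.
    Returns (row, col, angle in radians, width in pixels) of the best grasp."""
    q_smooth = gaussian_filter(q_map, sigma=sigma)  # suppress isolated noisy peaks
    row, col = np.unravel_index(np.argmax(q_smooth), q_smooth.shape)
    return row, col, angle_map[row, col], width_map[row, col]
```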
While grasp models using VAEs have shown promising results, the full extent of their capabilities is currently being investigated in simulation and real-world trials. Further improvements can potentially be introduced by applying them to 3D input. Future work will include using data from the simulation environment to train deep learning models that learn grasping poses directly from 3D data.

5.3. Assisting the Operator

Nuclear decommissioning requires material handling inside radioactively contaminated gloveboxes [6]. Working inside gloveboxes is not only dangerous for the operators, but also strenuous. These strenuous tasks typically include the various POCO tasks described in Section 2.1. This work introduces robot manipulators inside nuclear gloveboxes so that the different glovebox tasks can be handled remotely using teleoperation [16]. Introducing a teleoperated robotic system into gloveboxes ensures the safety of the operator by detaching the operator from the hazardous glovebox environment. However, the resulting manipulation system is usually not intuitive to use and requires a certain level of familiarisation with the technology, via extensive training, to achieve effective use.
The teleoperated glovebox system improves the safety of the operator, but the safety of the manipulation is not ensured by default. During the manipulation, the operator cannot ignore the risks involving the robot and environment and, therefore, has to pay the utmost attention to the movement of the robotic arms, consider the possible collision scenarios, and ensure the safety of the manipulated objects and the environment. Overall, the task load on the operator during teleoperated manipulation is significantly high.
The RAIN project not only improves the safety of the operator, but also aims to improve the safety of the manipulation while keeping the task load on the operator as low as possible. Using a teleoperated robotic solution inherently provides the required operator safety; however, how to ensure the safety of the operations, such as ensuring the safe manipulation of objects in the glovebox and avoiding collisions that might damage either the robot or the integrity of the glovebox components, is the fundamental question of this research package.
The teleoperated robotic system in the RAIN project allows the operator to plan and execute the manipulation in the task space of the robot using an intuitive interface at the local (operator) side. Well-known telerobotic solutions, such as the Mascot system used at the Joint European Torus, provide two kinematically similar robotic interfaces for tele-manipulation to achieve a simplified control architecture and to allow operators to control robots at the joint level. While this approach can be viewed as giving operators more control of the robot, the resulting teleoperation system is more costly (due to the use of similar robots) and is not always as intuitive as expected, due to the kinematic structure of the robots. In order to achieve a cost-effective solution with ease of use, the teleoperation system in the RAIN gloveboxes relies on local-remote devices with dissimilar kinematics, where the local device is a hand tracking system while the remote robot is an industrial robotic arm.
The local device, an HTC Vive controller, is part of a vision-based tracking system that closely monitors the pose of the operator's hand. The tracking system introduces an unmatched level of intuitiveness to the robot control by allowing the operators to use hand motion to drive the end-effector of the remote robot. The reference signal, which is the operator's hand pose, is tracked by the low-level motion controller of the remote robot of the teleoperator. The choice of allowing the operators to plan and execute their actions in the task space of the remote robot is the first step in reducing the task load on the operator.
The intuitive control interface and task space control approach is prone to unwanted collisions because there is no mechanism to prevent the remote robots from colliding with the environment or objects. Therefore, without any assistance mechanism in the teleoperation, the resulting teleoperator system would require the operator to ensure the safety of the operation.
The motivation for this work is to achieve a system that follows a given end-effector motion reference without colliding with the environment or the obstacles while keeping the manipulation capability of the robot as high as possible.
An example setup is introduced in Figure 6, which depicts one of the remote robot arms with an obstacle inside the glovebox. The operator is expected to manoeuvre the robot while avoiding any collision with the obstacle; however, in the given robot configuration, the elbow of the robot is likely to collide with the cylindrical object. Instead of relying on the operator's skills to avoid collisions and secure operational safety, our approach utilises the redundancy available in the remote robot and implements a collision-avoidance rule in the inverse kinematics solution of the robot. Hence, the proposed approach retains task space planning and control of the robot arm during tele-manipulation, while collisions are avoided at the inverse kinematics level.
Obtaining the joint space motion synthesis from a given end-effector trajectory is a challenging problem due to the inherent nonlinear relation between the joint and task space positions. For a majority of robots, this nonlinear mapping prevents obtaining analytical solutions to the inverse kinematics problem. As a result, numerical solution methods are popular for solving the inverse kinematics problem.
The inverse kinematics problem becomes more intricate for redundant robots, since the mapping from task space to joint space becomes one-to-many: multiple joint space configurations map to the same task space configuration. These multiple inverse kinematics solutions naturally vary in optimality with respect to different performance measures, such as collision or singularity metrics.
The assisting-the-operator research package designs an inverse kinematics solution algorithm for the teleoperation of redundant remote robots. In this approach, the joint space trajectories required to control the remote robot are generated from the operator's motion reference. The inverse kinematics solution simultaneously considers collisions of the robot arm with the objects/obstacles in the environment and improves the manipulability of the remote robot configuration for better manipulation.
Manoeuvring the teleoperated manipulators in a cluttered environment and/or a confined space is a well-established problem in the robotics literature [70]. The likes of [71,72,73] have addressed the problem of collision detection and trajectory generation for moving the manipulator through the clutter. However, the problem becomes more complicated when the space where the whole body of the manipulator will move becomes restricted due to scattered clutter. This situation is explained in the following example.
Figure 7 depicts a manipulator inside a confined space whose end-effector needs to reach particular objects amidst many other objects inside the space. It should be noted that, in addition to the end-effector, the links of the robot can collide with the objects in the glovebox. Precise trajectory estimation can therefore help to avoid catastrophic accidents. In this work, we primarily focus on collision detection and avoidance for teleoperated robots inside nuclear gloveboxes.
Avoiding collisions is important for safe operations; however, smooth manoeuvring of the remote robot is another important step in reducing the task load on the operator. The ability to move the robot end-effector in an arbitrary direction is characterised by the manipulability of the robot. In this work, both collision avoidance in the inverse kinematics solution and the manipulability of the robot are taken into account so that safer and easier handling of objects is achieved.
Considering multiple performance metrics, such as collision avoidance and manipulability, in the inverse kinematics problem is challenging when the redundancy of the remote manipulator is less than the number of performance metrics considered. In order to solve this problem, the multiple performance metrics are combined into one single performance metric using a weighted sum. The weighted sum of the performance metrics not only allows us to consider as many metrics as needed in the inverse kinematics, but also allows the trade-off between metrics to be studied by investigating different weight combinations.
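The following is a minimal numpy sketch of this weighted-sum redundancy resolution; the metric functions, gains, and the damped least-squares formulation are illustrative placeholders rather than the exact solver used in RAIN.

```python
# Minimal sketch of weighted-sum redundancy resolution: a single scalar
# objective (e.g., w1 * manipulability + w2 * obstacle clearance) is maximised
# in the null space of the task Jacobian while the end-effector error is tracked.
import numpy as np

def combined_metric(q, metrics, weights):
    # metrics: callables q -> scalar (e.g., manipulability, clearance).
    return sum(w * m(q) for w, m in zip(weights, metrics))

def numerical_gradient(f, q, eps=1e-6):
    g = np.zeros_like(q)
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        g[i] = (f(q + dq) - f(q - dq)) / (2.0 * eps)
    return g

def redundant_ik_step(q, J, x_err, metrics, weights, k0=0.1, damping=1e-3):
    """One damped least-squares step: track the task-space error x_err and
    project the combined metric gradient into the Jacobian null space."""
    J_pinv = J.T @ np.linalg.inv(J @ J.T + damping * np.eye(J.shape[0]))
    null_proj = np.eye(len(q)) - J_pinv @ J
    grad = numerical_gradient(lambda v: combined_metric(v, metrics, weights), q)
    return q + J_pinv @ x_err + k0 * null_proj @ grad
```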

Augmenting Sensing

The challenges of working with gloveboxes also extend to poor visibility, caused by the combination of discoloured and damaged windows, a dark and cluttered environment, and personal protective equipment that usually limits the operator's field of view. While the introduction of a simple camera view of the interior can be useful, additional information about the environment, such as the types of objects and their positions and poses, would not only provide helpful guidance during teleoperation, but also form an important component of grasp estimation and collision avoidance systems.
For the glovebox computer vision, multiple sources of visual information were acquired through RGBD and stereo cameras, and different processing modules were developed to extract valuable information about the environment. In addition to the static sensors, the RGBD wrist cameras attached to the Kinova robots were used for surveying the less accessible areas. The vision modules include object detection and tracking, semantic segmentation of RGB images, grasp detection, and pointcloud segmentation.
Deep learning models were trained using custom annotated images representative of a glovebox environment. An object detection network was trained with this dataset, and the output detections were then fed into a tracking algorithm. For object detection, models similar to You Only Look Once (YOLO) [74] were chosen, since they generate detections at a much faster rate (45 frames per second). In addition, a scene segmentation model was implemented to extract more detailed information about the environment. These models provide a pixel-wise categorisation of the image. Models such as Deeplab [75] demonstrated high accuracy, but had a much slower response time of 8 frames per second. The segmented objects were projected to 3D to extract segmented pointclouds. This technique was used mainly for estimating the shape and pose of known objects and obtaining an initial map of the environment. While these supervised techniques for object detection and segmentation demonstrated high accuracy on the training dataset, they leave little room for generalising to novel objects. The grasp detection model was therefore kept independent of object recognition and is able to detect grasping poses for objects regardless of their type.
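The projection of segmented pixels to 3D follows the standard pinhole camera model; the following is a minimal sketch, assuming the intrinsics (fx, fy, cx, cy) are known from calibration.

```python
# Minimal sketch of back-projecting a segmentation mask to a 3D pointcloud
# with the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
import numpy as np

def mask_to_pointcloud(depth, mask, fx, fy, cx, cy):
    """depth: HxW depth image in metres; mask: HxW boolean segmentation mask.
    Returns an Nx3 array of points in the camera frame."""
    v, u = np.nonzero(mask)   # pixel coordinates inside the segment
    z = depth[v, u]
    valid = z > 0             # drop missing depth readings
    u, v, z = u[valid], v[valid], z[valid]
    return np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
```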
Unsupervised detection, which includes traditional computer vision techniques, was also introduced to extract objects with simpler geometries, such as cylinders, cubes, and spheres. The PCL library [76] was used for pointcloud segmentation, implementing a RANSAC-based [77] technique to extract object position, orientation, and size. This information was the input for the grasp synthesis module (described in Section 5.1.3), which then generated optimal grasping poses for the objects. The extracted objects were also introduced into the simulation platform, which is useful for testing the algorithms before deployment.
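To illustrate the RANSAC idea, the sketch below uses Open3D in place of PCL (an assumption made purely for brevity; the pipeline described above is PCL-based): the dominant plane, e.g., the glovebox floor, is fitted and removed so that the remaining points can be clustered into objects.

```python
# Illustrative RANSAC segmentation with Open3D (stand-in for PCL): fit and
# remove the dominant plane, leaving the points belonging to objects.
import numpy as np
import open3d as o3d

def remove_dominant_plane(points, dist=0.01):
    """points: Nx3 numpy array. Returns (plane coefficients, object points)."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    plane, inliers = pcd.segment_plane(distance_threshold=dist,
                                       ransac_n=3, num_iterations=1000)
    objects = pcd.select_by_index(inliers, invert=True)
    return plane, np.asarray(objects.points)
```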

5.4. Condition Monitoring of the Robots

In a robotic glovebox, it is extremely important to have confidence that the robot will not suffer any failure during operations. Such a failure can have dramatic impacts on both safety and costs. A robot that cannot be properly controlled can have catastrophic consequences; for example, it can impact the glovebox walls and damage them. In addition, a robot that is unable to move can be difficult or impossible to retrieve and repair, which has a big impact on costs in terms of hardware and time delays.
A condition monitoring system (CMS) has the objective of monitoring robot measurements and identifying any anomalous behaviour.
In recent years many deep learning techniques have been used to identify anomalies in many different environments, from images to bank transactions. In this work we focused our attention on a variational autoencoder (VAE) (see Section 5.2.2).
We applied the VAE model to a set of automated moves we perform specifically for CMS as part of our operational routine. They are performed at the beginning and at the end of operations, in order to inform the operator that the robot is respectively safe to use or has not been damaged during the session.
Our VAE model consists of a fully-connected multiple-layer neural network. The encoder layers have dimensions [512, 256, 128, 64, 32], with a latent space of dimension 6. The decoder is implemented symmetrically. Measurements collected from the control system are very diverse in physical units and ranges, which are not limited to [0.0, 1.0]. For this reason, a ReLU activation function has been used. Moreover, the mean absolute percentage error (MAPE) is used as the reconstruction loss function. In this way, the reconstruction error is weighted by measurement amplitude, and errors are evenly distributed across the measurements.
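A minimal PyTorch sketch matching this description is given below; the input dimension depends on the measurement channels and window length, and training details (optimiser, KL weighting) are omitted.

```python
# Minimal PyTorch sketch of the CMS model: fully-connected encoder with layers
# [512, 256, 128, 64, 32], a 6-dimensional latent space, a symmetric decoder,
# ReLU activations, and MAPE as the reconstruction loss.
import torch
import torch.nn as nn

class CmsVae(nn.Module):
    def __init__(self, input_dim, hidden=(512, 256, 128, 64, 32), latent=6):
        super().__init__()
        enc, d = [], input_dim
        for h in hidden:
            enc += [nn.Linear(d, h), nn.ReLU()]
            d = h
        self.encoder = nn.Sequential(*enc)
        self.fc_mu = nn.Linear(d, latent)
        self.fc_logvar = nn.Linear(d, latent)
        dec, d = [], latent
        for h in reversed(hidden):
            dec += [nn.Linear(d, h), nn.ReLU()]
            d = h
        dec.append(nn.Linear(d, input_dim))  # linear output: data not in [0, 1]
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def mape_loss(x, x_recon, eps=1e-6):
    # Mean absolute percentage error: weights errors by measurement amplitude.
    return torch.mean(torch.abs((x - x_recon) / (torch.abs(x) + eps))) * 100.0
```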
As already mentioned earlier, our glovebox consists of two identical Kinova Gen3 robots equipped with different end-effectors. We have used data collected from only one robot, from now on called the training robot, to train the model and data collected from the other robot, from now on called the testing robot, for testing purposes only.
In order to capture the dynamic behaviour of the system, we consider as a single sample at time $t_{now}$ all the measurements collected in the interval $[t_{now} - h,\, t_{now})$, where $h$ is the length of the time window. It is important to note that this does not affect the ability of the system to work online. The window length also affects how much information the system captures and, therefore, which types of anomalies it can identify.
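A minimal sketch of this windowing, assuming the control-system measurements are stacked in a (T × n_channels) array (names are illustrative):

```python
import numpy as np

def windowed_samples(measurements, h):
    """Build one flattened sample per time step t from the interval [t - h, t).

    measurements : (T, n_channels) array of control-system readings.
    h            : window length in samples.
    Only past readings are needed at each t, so the scheme works online.
    """
    T, _ = measurements.shape
    return np.stack([measurements[t - h:t].ravel() for t in range(h, T + 1)])
```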
Figure 8, Figure 9 and Figure 10 show how the trained VAE reconstructs measurements collected from CMS moves. For simplicity, the figures report only the reconstruction of joint 3 over a few time intervals. In particular, Figure 8 shows the actual measurements and their reconstruction for data collected from the training robot and included in the training set. Figure 9 reports the same quantities for data not included in the training set. Similarly, Figure 10 shows the actual measurements and their reconstruction for data collected from the testing robot.
It is clearly visible that in some time intervals the VAE is unable to correctly reconstruct the measurements; these time intervals should be considered anomalies.
As a quantitative metric of reconstruction quality, needed to discriminate faults from nominal behaviour, we use the loss value associated with each sample. This measures the mean absolute percentage difference between the input sample and its reconstruction: the smaller the value, the better the reconstruction.
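A sketch of how such a per-sample score could be computed, reusing the CmsVae model and mape_loss from the sketch above (the choice of alarm threshold is left to the operator):

```python
import torch

def anomaly_scores(model, samples):
    """Score each windowed sample by its MAPE reconstruction error (in %).
    Larger scores mean poorer reconstructions, i.e. candidate anomalies.
    Reuses CmsVae and mape_loss from the earlier sketch."""
    model.eval()
    scores = []
    with torch.no_grad():
        for x in samples:
            x = torch.as_tensor(x, dtype=torch.float32).unsqueeze(0)
            x_hat, _, _ = model(x)
            scores.append(mape_loss(x_hat, x).item())
    return scores
```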
In the cases reported in Figure 8, Figure 9 and Figure 10, the maximum errors are, respectively, 1.73%, 13.49%, and 73.86%. It is important to note that these errors relate only to the time windows shown in the figures.
A more comprehensive analysis is needed over the whole length of the move. Figure 11 shows the VAE score for each sample of a CMS move in the three cases above, i.e., data from the training robot included in the VAE training set, data from the training robot not included in the training set, and data collected from the testing robot.
The results show that data collected from the training robot give similar scores regardless of whether they were included in the training of the VAE. In particular, the maximum score is about 19.78% at sample 2640 for data not included in the training set, and about 4.00% at sample 100 for data included in the training set.
The results also show that the testing robot behaves quite differently from the training robot; its score reaches a maximum of 340.30% at sample 5480. We believe this difference is caused by the different end-effector weights and dimensions affecting the dynamics of the robot during the CMS move.

5.5. Operations

The Operations Management System (OMS) is a web application that supports the three main facets of operations: management of the assets used or encountered during an operation, preparation of the operational procedures to be carried out, and execution of those procedures. Built on 35,000 hours of remote handling operations at JET, OMS is a unique operations management tool.
In particular, RAIN intends to use the planning and execution capabilities of OMS to reduce the cognitive load on the operator in two ways. Firstly, an in-built capability of OMS procedures highlights, at all times, a single action or decision as the current operational activity, with progression tracked throughout the procedure, including along any sub-procedures or branches resulting from decision points. Secondly, the planned procedures often give the operator the choice of completing an action via teleoperation or allowing the robotic system to complete it autonomously by submitting pre-configured commands through OMS; a sketch of such a procedure structure is given below.
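As an illustration of how a branching procedure with both completion modes might be represented, the following is a minimal sketch; the class names, field names, and example step identifiers are hypothetical and do not reflect OMS's internal data model.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Step:
    """Hypothetical procedure step; the names are illustrative, not OMS's API."""
    description: str
    autonomous_command: Optional[str] = None      # pre-configured command, if any
    branches: Dict[str, str] = field(default_factory=dict)  # decision -> next step id
    next: Optional[str] = None                    # default next step id

def advance(procedure, current_id, decision=None):
    """Return the id of the next current step, following a branch when a
    decision is taken; exactly one step is highlighted at any time."""
    step = procedure[current_id]
    if decision is not None and decision in step.branches:
        return step.branches[decision]
    return step.next

# Example: a grasp step that can be completed autonomously or branch to a
# teleoperated recovery step on failure.
procedure = {
    "grasp": Step("Grasp the sample container",
                  autonomous_command="grasp_preset_01",
                  branches={"grasp failed": "regrasp"},
                  next="place"),
    "regrasp": Step("Re-attempt the grasp via teleoperation", next="place"),
    "place": Step("Place the sample in the posting port"),
}
assert advance(procedure, "grasp", decision="grasp failed") == "regrasp"
```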

6. Conclusions

Nuclear gloveboxes are designed for the safe handling of hazardous objects. The safety measures, personal protective equipment, and the glovebox construction provide some degree of assurance to the operators. However, operators remain exposed to hazards, and working conditions are still challenging: long working hours in a glovebox make for an arduous task.
In the RAIN project, we are introducing and developing cutting-edge robotics and AI technology for legacy gloveboxes to improve the safety of the operator and the operations, along with ease of operation. Moreover, our approach can increase the efficiency of handling nuclear materials inside gloveboxes. The technologies we develop are automated grasping for robotic manipulators working inside gloveboxes, assistive teleoperation for easing the task load of operators using the robotic glovebox solution, and condition monitoring of the robots for the early detection of hardware failures.
This paper has presented the problem statement and challenges of utilising robots in a nuclear glovebox, together with a glovebox mock-up platform and simulation suitable for testing and developing robotics and AI systems. Beyond this, it has presented a selection of technologies developed and integrated into the glovebox as a step towards safer and more efficient manipulation interfaces for handling nuclear materials and contaminated objects. These technologies can form the basis of the next generation of gloveboxes.

Author Contributions

Conceptualisation, G.B. and R.S.; software platform, G.B.; operations management system, M.F.T.; autonomous grasping, A.A.; grasping in constrained environments, R.N.; assisting the operator, O.T. and P.D.; condition monitoring of the robots, L.P.; writing, O.T., P.D., R.N., L.P., A.A., G.B., and E.T.J.; supervision, G.B. and R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This project has been supported by the RAIN Hub, which is funded by the Industrial Strategy Challenge Fund, part of the government’s Industrial Strategy. The fund is delivered by UK Research and Innovation and managed by EPSRC [EP/R026084/1].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Bogue, R. Robots in the nuclear industry: A review of technologies and applications. Ind. Robot. 2011, 38, 113–118.
2. Ghosh, A.; Alonso Paredes Soto, D.; Veres, S.M.; Rossiter, J. Human robot interaction for future remote manipulations in Industry 4.0. IFAC-PapersOnLine 2020, 53, 10223–10228.
3. RAIN Hub. 2018. Available online: https://rainhub.org.uk/ (accessed on 18 June 2021).
4. Worker exposed to Sellafield plutonium had skin removed. BBC News. 2 April 2019. Available online: https://www.bbc.co.uk/news/uk-england-cumbria-47786659 (accessed on 3 July 2021).
5. Chen, S.; Demachi, K. A Vision-Based Approach for Ensuring Proper Use of Personal Protective Equipment (PPE) in Decommissioning of Fukushima Daiichi Nuclear Power Station. Appl. Sci. 2020, 10, 5129.
6. Talha, M.; Ghalamzan, E.; Takahashi, C.; Kuo, J.; Ingamells, W.; Stolkin, R. Towards robotic decommissioning of legacy nuclear plant: Results of human-factors experiments with tele-robotic manipulation, and a discussion of challenges and approaches for decommissioning. In Proceedings of the 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Lausanne, Switzerland, 23–27 October 2016; pp. 166–173.
7. Domning, E.E.; McMahon, T.T.; Sievers, R.H. Robotic and Nuclear Safety for an Automated/Teleoperated Glove Box System; Technical Report; Lawrence Livermore National Lab.: Livermore, CA, USA, 1991.
8. Ghosh, A.; Veres, S.M.; Paredes-Soto, D.; Clarke, J.E.; Rossiter, J.A. Intuitive Programming with Remotely Instructed Robots inside Future Gloveboxes. In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 209–211.
9. Goiffon, V.; Rolando, S.; Corbière, F.; Rizzolo, S.; Chabane, A.; Girard, S.; Baer, J.; Estribeau, M.; Magnan, P.; Paillet, P.; et al. Radiation Hardening of Digital Color CMOS Camera-on-a-Chip Building Blocks for Multi-MGy Total Ionizing Dose Environments. IEEE Trans. Nucl. Sci. 2017, 64, 45–53.
10. Cao, Y.; De Cock, W.; Steyaert, M.; Leroux, P. Design and Assessment of a 6 ps-Resolution Time-to-Digital Converter With 5 MGy Gamma-Dose Tolerance for LIDAR Application. IEEE Trans. Nucl. Sci. 2012, 59, 1382–1389.
11. Grasz, E.; Perez, M. Addressing Nuclear and Hostile Environment Challenges with Intelligent Automation; Technical Report; Lawrence Livermore National Lab.: Livermore, CA, USA, 1997.
12. Akiyama, M. Research and development on decommissioning of nuclear facilities in Japan. Nucl. Eng. Des. 1996, 165, 307–319.
13. Rollow, T. Type A Accident Investigation of the March 16, 2000 Plutonium-238 Multiple Intake Event at the Plutonium Facility, Los Alamos National Laboratory, New Mexico; Office of Oversight, Office of Environment, Safety and Health, U.S. Department of Energy: Washington, DC, USA, 2000.
14. Hagemeyer, D.; McCormick, Y. DOE 2011 Occupational Radiation Exposure Report; Prepared for the US Department of Energy, Office of Health, Safety and Security; Technical Report; Oak Ridge Institute for Science and Education (ORISE): Oak Ridge, TN, USA, December 2012.
15. Wehe, D.K.; Lee, J.; Martin, W.R.; Mann, R.; Hamel, W.; Tulenko, J. Intelligent robotics and remote systems for the nuclear industry. Nucl. Eng. Des. 1989, 113, 259–267.
16. Harden, T.A.; Lloyd, J.A.; Turner, C.J. Robotics for Nuclear Material Handling at LANL: Capabilities and Needs; Technical Report; Los Alamos National Lab.: Los Alamos, NM, USA, 2009.
17. Pegman, G.; Sands, D. Cost effective robotics in the nuclear industry. Ind. Robot. 2006, 33, 170–173.
18. Peterson, K.D. Robotic System for Automated Handling of Ceramic Pucks; Technical Report; Lawrence Livermore National Lab.: Livermore, CA, USA, 2000.
19. Foster, C. Computer Model and Simulation of a Glove Box Process; Technical Report; Los Alamos National Lab.: Santa Fe, NM, USA, 2001.
20. Turner, C.; Pehl, J. Design of Small Automation Work Cell System Demonstrations; Technical Report; Los Alamos National Lab.: Santa Fe, NM, USA, 2000.
21. Roa, M.A.; Suárez, R. Grasp quality measures: Review and performance. Auton. Robot. 2015, 38, 65–88.
22. Kitamura, A.; Watahiki, M.; Kashiro, K. Remote glovebox size reduction in glovebox dismantling facility. Nucl. Eng. Des. 2011, 241, 999–1005.
23. Sharp, A.; Hom, M.W.; Pryor, M. Operator training for preferred manipulator trajectories in a glovebox. In Proceedings of the 2017 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Austin, TX, USA, 8–10 March 2017; pp. 1–6.
24. O'Neil, B.E. Graph-Based World-Model for Robotic Manipulation. Ph.D. Thesis, University of Texas at Austin, Austin, TX, USA, 2010.
25. Önol, A.Ö.; Long, P.; Padır, T. Using contact to increase robot performance for glovebox D&D tasks. arXiv 2018, arXiv:1807.04198.
26. Long, P.; Padir, T. Evaluating robot manipulability in constrained environments by velocity polytope reduction. In Proceedings of the 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), Beijing, China, 6–9 November 2018; pp. 1–9.
27. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the ICRA 2009, Kobe, Japan, 12–17 May 2009; Volume 3, p. 5.
28. González, C.; Solanes, J.E.; Muñoz, A.; Gracia, L.; Girbés-Juan, V.; Tornero, J. Advanced teleoperation and control system for industrial robots based on augmented virtuality and haptic feedback. J. Manuf. Syst. 2021, 59, 283–298.
29. Jacinto-Villegas, J.M.; Satler, M.; Filippeschi, A.; Bergamasco, M.; Ragaglia, M.; Argiolas, A.; Niccolini, M.; Avizzano, C.A. A Novel Wearable Haptic Controller for Teleoperating Robotic Platforms. IEEE Robot. Autom. Lett. 2017, 2, 2072–2079.
30. Burroughes, G. Glovebox Robotics Simulator; GitHub: San Francisco, CA, USA, 2020.
31. Averta, G.; Angelini, F.; Bonilla, M.; Bianchi, M.; Bicchi, A. Incrementality and hierarchies in the enrollment of multiple synergies for grasp planning. IEEE Robot. Autom. Lett. 2018, 3, 2686–2693.
32. Liu, G.; Xu, J.; Wang, X.; Li, Z. On quality functions for grasp synthesis, fixture planning, and coordinated manipulation. IEEE Trans. Autom. Sci. Eng. 2004, 1, 146–162.
33. Mahler, J.; Patil, S.; Kehoe, B.; Van Den Berg, J.; Ciocarlie, M.; Abbeel, P.; Goldberg, K. GP-GPIS-OPT: Grasp planning with shape uncertainty using Gaussian process implicit surfaces and sequential convex programming. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 4919–4926.
34. Romano, J.M.; Hsiao, K.; Niemeyer, G.; Chitta, S.; Kuchenbecker, K.J. Human-inspired robotic grasp control with tactile sensing. IEEE Trans. Robot. 2011, 27, 1067–1079.
35. Miller, A.T.; Knoop, S.; Christensen, H.I.; Allen, P.K. Automatic grasp planning using shape primitives. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 14–19 September 2003; Volume 2, pp. 1824–1829.
36. Bonilla, M.; Farnioli, E.; Piazza, C.; Catalano, M.; Grioli, G.; Garabini, M.; Gabiccini, M.; Bicchi, A. Grasping with soft hands. In Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, 18–20 November 2014; pp. 581–587.
37. Berenson, D.; Diankov, R.; Nishiwaki, K.; Kagami, S.; Kuffner, J. Grasp planning in complex scenes. In Proceedings of the 2007 7th IEEE-RAS International Conference on Humanoid Robots, Pittsburgh, PA, USA, 29 November–1 December 2007; pp. 42–48.
38. Diankov, R.; Kanade, T.; Kuffner, J. Integrating grasp planning and visual feedback for reliable manipulation. In Proceedings of the 2009 9th IEEE-RAS International Conference on Humanoid Robots, Paris, France, 7–10 December 2009; pp. 646–652.
39. Xue, Z.; Zoellner, J.M.; Dillmann, R. Automatic optimal grasp planning based on found contact points. In Proceedings of the 2008 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Xi'an, China, 2–5 July 2008; pp. 1053–1058.
40. Hertkorn, K. Shared Grasping: A Combination of Telepresence and Grasp Planning; KIT Scientific Publishing: Karlsruhe, Germany, 2016.
41. Ding, D.; Liu, Y.H.; Wang, M.Y.; Wang, S. Automatic selection of fixturing surfaces and fixturing points for polyhedral workpieces. IEEE Trans. Robot. Autom. 2001, 17, 833–841.
42. El Khoury, S.; Li, M.; Billard, A. Bridging the gap: One shot grasp synthesis approach. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 2027–2034.
43. El-Khoury, S.; Li, M.; Billard, A. On the generation of a variety of grasps. Robot. Auton. Syst. 2013, 61, 1335–1349.
44. Zheng, Y. Computing the best grasp in a discrete point set with wrench-oriented grasp quality measures. Auton. Robot. 2019, 43, 1041–1062.
45. Mahler, J.; Liang, J.; Niyaz, S.; Laskey, M.; Doan, R.; Liu, X.; Ojea, J.A.; Goldberg, K. Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics. arXiv 2017, arXiv:1703.09312.
46. Kumra, S.; Kanan, C. Robotic grasp detection using deep convolutional neural networks. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 769–776.
47. Lin, Y.; Sun, Y. Robot grasp planning based on demonstrated grasp strategies. Int. J. Robot. Res. 2015, 34, 26–42.
48. Baerlocher, P.; Boulic, R. An inverse kinematics architecture enforcing an arbitrary number of strict priority levels. Vis. Comput. 2004, 20, 402–417.
49. Shi, X.; Zhou, K.; Tong, Y.; Desbrun, M.; Bao, H.; Guo, B. Mesh Puppetry: Cascading Optimization of Mesh Deformation with Inverse Kinematics. ACM Trans. Graph. 2007, 26.
50. Azizi, V.; Kimmel, A.; Bekris, K.; Kapadia, M. Geometric reachability analysis for grasp planning in cluttered scenes for varying end-effectors. In Proceedings of the 2017 13th IEEE Conference on Automation Science and Engineering (CASE), Xi'an, China, 20–23 August 2017; pp. 764–769.
51. Altobelli, A.; Tokatli, O.; Burroughes, G.; Skilton, R. Optimal Grasping Pose Synthesis in a Constrained Environment. Robotics 2021, 10, 4.
52. Eppner, C.; Deimel, R.; Alvarez-Ruiz, J.; Maertens, M.; Brock, O. Exploitation of environmental constraints in human and robotic grasping. Int. J. Robot. Res. 2015, 34, 1021–1038.
53. Salvietti, G.; Malvezzi, M.; Gioioso, G.; Prattichizzo, D. Modeling compliant grasps exploiting environmental constraints. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 4941–4946.
54. Saut, J.P.; Sahbani, A.; El-Khoury, S.; Perdereau, V. Dexterous manipulation planning using probabilistic roadmaps in continuous grasp subspaces. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 2907–2912.
55. Tournassoud, P.; Lozano-Perez, T.; Mazer, E. Regrasping. In Proceedings of the 1987 IEEE International Conference on Robotics and Automation, Raleigh, NC, USA, 31 March–3 April 1987; Volume 4, pp. 1924–1928.
56. Balaguer, B.; Carpin, S. Bimanual regrasping from unimanual machine learning. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 3264–3270.
57. Wei, Z.; Chen, W.; Wang, H.; Wang, J. Manipulator motion planning using flexible obstacle avoidance based on model learning. Int. J. Adv. Robot. Syst. 2017, 14, 1729881417703930.
58. Dogar, M.; Srinivasa, S. A framework for push-grasping in clutter. Robot. Sci. Syst. 2011, 1.
59. LaValle, S.M.; Kuffner, J.J., Jr. Randomized kinodynamic planning. Int. J. Robot. Res. 2001, 20, 378–400.
60. Lenz, I.; Lee, H.; Saxena, A. Deep Learning for Detecting Robotic Grasps. arXiv 2014, arXiv:1301.3592.
61. Depierre, A.; Dellandréa, E.; Chen, L. Jacquard: A Large Scale Dataset for Robotic Grasp Detection. arXiv 2018, arXiv:1803.11469.
62. Redmon, J.; Angelova, A. Real-Time Grasp Detection Using Convolutional Neural Networks. arXiv 2014, arXiv:1412.3128.
63. Kumra, S.; Kanan, C. Robotic Grasp Detection using Deep Convolutional Neural Networks. arXiv 2017, arXiv:1611.08036.
64. Morrison, D.; Corke, P.; Leitner, J. Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach. arXiv 2018, arXiv:1804.05172.
65. Kingma, D.P.; Welling, M. Auto-Encoding Variational Bayes. arXiv 2014, arXiv:1312.6114.
66. Sohn, K.; Lee, H.; Yan, X. Learning Structured Output Representation using Deep Conditional Generative Models. In Advances in Neural Information Processing Systems; Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., Garnett, R., Eds.; Curran Associates, Inc.: San Jose, CA, USA, 2015; Volume 28.
67. Oord, A.v.d.; Vinyals, O.; Kavukcuoglu, K. Neural Discrete Representation Learning. arXiv 2018, arXiv:1711.00937.
68. Kumra, S.; Joshi, S.; Sahin, F. Antipodal Robotic Grasping using Generative Residual Convolutional Neural Network. arXiv 2020, arXiv:1909.04810.
69. Morrison, D.; Corke, P.; Leitner, J. EGAD! An Evolved Grasping Analysis Dataset for diversity and reproducibility in robotic manipulation. arXiv 2020, arXiv:2003.01314.
70. Jain, A.; Killpack, M.D.; Edsinger, A.; Kemp, C.C. Reaching in clutter with whole-arm tactile sensing. Int. J. Robot. Res. 2013, 32, 458–482.
71. Mi, K.; Zheng, J.; Wang, Y.; Hu, J. A Multi-Heuristic A* Algorithm Based on Stagnation Detection for Path Planning of Manipulators in Cluttered Environments. IEEE Access 2019, 7, 135870–135881.
72. Huber, L.; Billard, A.; Slotine, J. Avoidance of Convex and Concave Obstacles with Convergence Ensured Through Contraction. IEEE Robot. Autom. Lett. 2019, 4, 1462–1469.
73. Nguyen, P.D.H.; Hoffmann, M.; Pattacini, U.; Metta, G. A fast heuristic Cartesian space motion planning algorithm for many-DoF robotic manipulators in dynamic environments. In Proceedings of the 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, Mexico, 15–17 November 2016; pp. 884–891.
74. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
75. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. arXiv 2017, arXiv:1606.00915.
76. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4.
77. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
Figure 1. A nuclear glovebox where access to the interior is through the glove ports. The operator is wearing specially designed gloves for protection from contamination. Source: Wikimedia Commons.
Figure 2. Sections of a glovebox: (1) hull, (2) posting in/out port, (3) glove ports, (4) environmental monitoring/maintenance equipment, (5) glovebox window, and (6) glovebox internals.
Figure 3. The glovebox mock-up hardware. The dimensions of the glovebox mock-up are based on legacy gloveboxes that are still in use in the industry; the dimensions in the drawing are in millimetres.
Figure 4. Grasp detection from the grasp quality, width, and angle maps generated by the VQ-VAE grasp model on test images from the Cornell Dataset.
Figure 5. Grasp evaluation in simulation for a cluttered environment with objects from the EGAD dataset. The top two pictures from RViz show the image and estimated grasp map.
Figure 6. Remote robot colliding with an obstacle in the glovebox interior.
Figure 7. Glovebox simulator built in the ROS/Gazebo environment. The simulator depicts the remote robot arms, glovebox, and obstacles for manipulation.
Figure 8. Example of data reconstructed by the VAE for data collected from the training robot and included in the VAE training set. Light blue shows the original measurement; dark blue shows the reconstructed output.
Figure 9. Example of data reconstructed by the VAE for data collected from the training robot but not included in the VAE training set. Light blue shows the original measurement; dark blue shows the reconstructed output.
Figure 10. Example of data reconstructed by the VAE for data collected from the testing robot. Light blue shows the original measurement; dark blue shows the reconstructed output.
Figure 11. Example of VAE score evolution over time for the following three cases: data collected from the training robot and included in the training set (red line), data collected from the training robot but not included in the training set (blue line), and data collected from the testing robot (green line).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
