Article

Augmented Reality in Industry 4.0 and Future Innovation Programs

Department of Industrial Engineering, University of Bologna, 40136 Bologna, Italy
*
Author to whom correspondence should be addressed.
Submission received: 25 March 2021 / Revised: 21 April 2021 / Accepted: 27 April 2021 / Published: 29 April 2021

Abstract

Augmented Reality (AR) is recognized worldwide as one of the leading technologies of the 21st century and one of the pillars of the new industrial revolution envisaged by the Industry 4.0 international program. Several papers describe, in detail, specific applications of Augmented Reality developed to test its potential in a variety of fields. However, there is a lack of sources detailing the current limits of this technology in the event of its introduction in a real working environment, where everyday tasks could be carried out by operators using an AR-based approach. A literature analysis to detect AR strengths and weaknesses has been carried out, and a set of case studies has been implemented by the authors to find the limits of current AR technologies in industrial applications outside the laboratory-protected environment. The outcome of this paper is that, even though Augmented Reality is a well-consolidated computer graphic technique in research applications, several improvements, both from a software and a hardware point of view, are needed before it can be introduced in industrial operations. The originality of this paper lies in the definition of guidelines to improve the potential of Augmented Reality in factories and industries.

1. Introduction

Several definitions of Augmented Reality have been included in scientific papers from the 1990s onwards [1]. AR can be summarized as a computer graphic technique where an artificial “virtual” object (CAD model, symbol, picture, writing) is added to a real-time video stream of the external real environment. The hardware and software necessary to implement it depend on the indoor/outdoor application, the complexity of the virtual scene to add, the device held by the user, and the real-time performance and definition required by the application. However, the minimum hardware required to run an AR application consists of a camera framing the external world, a screen or a lens to project a video stream, and the computational resources (PC, smartphone processor, microcontroller) necessary to handle the video recording, the pose detection, and the superimposition of the visual symbols. AR is one of the pillars of the Industry 4.0 program, whose aim is to introduce new and advanced technologies in manufacturing systems and factories. Some examples of other technologies proposed by Industry 4.0 (I4.0) are big data, analytics, the Internet of Things, additive manufacturing, smart sensors, machine networking, and self-monitoring. Both AR and Industry 4.0 have attracted the interest of researchers in recent years. However, as Figure 1 suggests, few researchers treat the integration of AR into Industry 4.0 factories. If we consider the SCOPUS database and check the number of papers with the keywords “Augmented Reality” or “Industry 4.0” alone, we find that more than 2000 papers are written each year for each topic (data are obtained as of 6 April 2021, and data for 2021 are projections based on the trend to that date). However, if we look at the number of papers with both the keywords “Augmented Reality” AND “Industry 4.0”, we find a very limited number of papers, around 100 per year. This suggests that AR and I4.0 are studied separately.
Another interesting statistic can be obtained by checking the other keywords of the papers in the SCOPUS database where at least one keyword is “Augmented Reality”.
The SCOPUS database indicates, as of 7 April 2021, that 26,621 papers (both conference and journal) include at least one keyword equal to “Augmented Reality”. Figure 2 shows the number of papers where one keyword is Augmented Reality and another is one of those listed in the first column.
It is quite disappointing that keywords such as maintenance, manufacturing, and Industry 4.0 appear dramatically less often in papers with respect to others such as “Virtual Reality”, “Design”, and “Education”. Only 270 papers include both the Augmented Reality and Industry 4.0 keywords at the same time, and themes like maintenance and manufacturing present low values as well (550 and 480 papers, respectively). These figures suggest that the literature shines its light on more theoretical aspects, where AR is compared to VR or Mixed Reality, or on general-purpose research, where AR is identified as a potential technology useful to support design, education, engineering, or visualization.
This paper tries to focus attention on the practical use of AR in a factory/manufacturing environment, describing the potential of this technology, but also the problems that are still open and limit the spread of AR in manufacturing contexts.
Comments about AR and its introduction in an Industry 4.0 context are based on experiments carried out by the authors and on the literature. The Scopus database has been set as the main source for the papers considered in this research. Due to the huge number of papers under the AR and Industry 4.0 topics, a combination of keywords given by “Augmented Reality” plus another keyword (see the list in Figure 2) has been set as the main criterion to unveil the best AR industrial applications. In particular, keywords such as “maintenance”, “training”, “manufacturing” and “medical” have been used to find relevant literature.
Figure 3 shows the logical structure of the paper. A first analysis of the AR state of the art is carried out, showing the best hardware and software available for any AR application. After this in-depth research, industrial applications are analysed considering limitations arising not only from the hardware/software point of view but also from the working conditions in real environments. Afterwards, the answer to the question of whether AR is a mature technology for industrial applications or not is provided in the conclusion section, where guidelines to make AR a suitable technology for Industry 4.0 and beyond are listed.
The paper is thus structured as follows: after this introduction, Section 2 describes the AR technology in detail; Section 3 describes AR applications developed up to the present and the limitations that the authors noticed with current AR technology; Section 4 provides guidelines about what must be improved to lead AR to Technology Readiness Level 9; Section 5 ends the paper with conclusions.

2. State of the Art of Augmented Reality

Nowadays, there are many software packages and tools that allow for the creation of AR and VR applications [2,3]. Following the literature [4], there are three possible kinds of combinations of realities: Augmented Reality, Augmented Virtuality (AV), and Virtual Reality. The first is the integration of virtual objects into real life thanks to see-through head-mounted devices. This technology allows for the interaction between two worlds, combining what is real and what is not, thus giving a more detailed perception of reality [5]. The second is based upon the merging of real objects into a virtual environment. Applications of this technology in maintenance can be found in [6], where the reader can find a more detailed description of AV. Finally, Virtual Reality is a fully digitalized world, where the observer stands in the first person in a completely virtual environment populated by digital objects and scenes. VR requires the use of immersive devices, such as HMDs like the Oculus Rift and PlayStation VR. In this framework, the definition of “Mixed Reality” deals with the relationship between real and virtual. The bridge connecting real and virtual is populated by Mixed Reality technologies that are capable of blending virtual content into the real world or vice versa. A major contribution to this topic is represented by the seminal work by Paul Milgram et al. [7] describing the “Reality-Virtuality Continuum”.
This continuum spans from Real Environment to fully Virtual Environment: Augmented Reality and Augmented Virtuality can be considered intermediate steps between the outer limits (see Figure 4). Shortening this bridge means obtaining the best immersive experience, so that optimal results will be achieved when no differences between real and virtual will be perceived by the end-user (see Figure 5).
The following two subsections will describe, in more detail, the aspects related to the software tools and hardware devices developed to support AR.

2.1. Review of Augmented Reality Software

There are many libraries dedicated to Virtual and Augmented Reality: a detailed comparison between various Software Development Kits (SDKs) can be found in [9]. Marker-based tools like ARToolkit are available on the market: they exploit black and white markers [10,11] to retrieve the orientation of the camera used to frame the external world and to correctly relate the several reference systems (camera, marker, object) necessary to implement AR. The market offers other programs, such as Vuforia, which allow for the use of more advanced technologies such as the marker-less AR strategy. In this latter case, it is not necessary to use a predefined marker; instead, the external scene itself is exploited to detect the orientation and position of the camera. This library can recognize patterns on real 2D (image tracking) and 3D objects (object tracking) [12], turning them into a marker and providing the final user with higher flexibility. In Figure 6, an example of the use of Vuforia to develop an AR application is presented, where virtual images are referenced to a real controller device exploiting marker-less technology.
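To make the marker-based principle concrete, the following minimal sketch (our illustration, not taken from any of the cited SDKs) uses the OpenCV ArUco module to detect a fiducial marker and recover the camera pose with respect to it, which is exactly the transformation needed to superimpose a virtual model on the video stream. It assumes opencv-contrib-python ≥ 4.7; the intrinsic parameters and marker size are placeholder values, and a real application would use a calibrated camera.

```python
import cv2
import numpy as np

# Placeholder intrinsics: a real AR application would calibrate the camera
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)          # assume negligible lens distortion
marker_len = 0.05                  # marker side length in metres (assumed)

# 3D corners of the marker in its own reference system (on the z = 0 plane)
half = marker_len / 2.0
obj_points = np.array([[-half,  half, 0], [ half,  half, 0],
                       [ half, -half, 0], [-half, -half, 0]], dtype=np.float32)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)          # camera framing the external world
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if ids is not None:
        for c in corners:
            # Solve the camera/marker transformation: this pose is what an AR
            # engine uses to register virtual models onto the real scene
            found, rvec, tvec = cv2.solvePnP(obj_points, c[0],
                                             camera_matrix, dist_coeffs)
            if found:
                cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs,
                                  rvec, tvec, marker_len)
    cv2.imshow("marker-based AR tracking", frame)
    if cv2.waitKey(1) == 27:       # ESC quits
        break
cap.release()
cv2.destroyAllWindows()
```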
For example, in maintenance applications the object to maintain can itself be used as a marker, avoiding the need for markers or other bulky tracking devices to support AR: this approach is essential to transform AR from a laboratory curiosity into a flexible tool, handy in real industrial applications [13]. Important tools have recently been released under the commercial names ARCore [14] and ARKit [15] by Google and Apple, respectively. Both tools track the environment using the device camera. The first, developed from the so-called “Project Tango”, operates on Google Pixel devices and, nowadays, on other high-end devices too. In contrast to ARCore, ARKit only works with iOS (iPhone Operating System) devices. These tools can anchor holograms to horizontally mapped environments (recent updates are investigating the possibility of working with vertical planes as well) and they can render shadows for 3D objects. Image processing libraries such as OpenCV are essential: this kind of tool can acquire images from webcams and apply filters and mathematical algorithms for pattern recognition, which represent the basis of image tracking. It is worth citing Unity and Unreal Engine, which are software packages capable of handling all these image analysis libraries in a user-friendly way. The Unreal Engine is relatively new to the AR/VR world, whereas Unity developers are more experienced in the development of effective and efficient integration tools. In this context, Unity can be considered a container giving the experimenter the ability to use many codes at the same time and to create the final scene that the AR application will run. This software can compile applications for many different platforms: from Windows to Mac OS, but also Linux, iOS, and Android. The point of strength, and at the same time the main weakness, of this program is that it was originally conceived for general-purpose computer graphic applications: it includes a real-time render engine that helps the user easily interact with the AR or VR scene in the most natural way, but it is unsuitable for mechanical and industrial applications. Unity is not a CAD. It is not possible to handle parametric objects (as even a basic 3D CAD currently does) or to modify elaborated meshes. The purpose of this software is to collect models, set up scenes with interactions between objects, and compile an application for their final use: it can handle simple objects that could fit an entertainment or research case study, but it currently does not support the complexity of a real CAD model. This statement is based on experiments carried out by the authors. An AR application for smartphones based on Unity has been implemented for evaluation purposes, namely the ARdroid app. It can superimpose CAD models in OBJ format on the external view framed by a smartphone, provided the Astronaut marker from Unity is within the scene. When an OBJ model with a low weight (1 MB) is loaded, all works fine, but when larger models are loaded (e.g., >3 MB), visualization problems are noticed, as Figure 7 shows. This picture reports the outcome of an experiment made with a CAD model saved in STL format with low and high quality (see top figure) and later saved in OBJ format. The low-resolution model weighs 142 kB, while the high-resolution model weighs 2729 kB. The results in the visualization are self-explanatory, since the heavier model suffers from visualization problems where part of the model is not rendered, and other surface defects appear.
These issues are also related to the maximum number of polygons that Unity can handle.
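A common workaround, consistent with the pipeline described above, is to decimate the mesh before importing it into Unity. The sketch below (an illustration of one possible pre-processing step, not the tool used in our experiments) uses the Open3D library’s quadric-decimation function to reduce an OBJ model to a polygon budget that real-time engines handle comfortably; the file names and target triangle count are placeholders.

```python
import open3d as o3d

# Load the high-resolution OBJ exported from the CAD system (placeholder name)
mesh = o3d.io.read_triangle_mesh("model_high.obj")
print(f"input: {len(mesh.triangles)} triangles")

# Quadric edge-collapse decimation: shrink the mesh to a real-time-friendly
# polygon count while preserving the overall shape as much as possible
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=10000)
simplified.compute_vertex_normals()    # recompute normals for correct shading
print(f"output: {len(simplified.triangles)} triangles")

o3d.io.write_triangle_mesh("model_low.obj", simplified)
```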

2.2. Review of Augmented Reality Hardware

The scope of the final application is the main driver in the selection of hardware for AR applications. If we look at hardware specifically developed to tailor AR needs, Head-Mounted Displays (HMDs) play a crucial role. HMDs can be classified into two main kinds: Video-See-Through (VST) and Optical-See-Through (OST). The VST technology is based upon the use of a camera that frames the external scene; the virtual symbols or models are then added in real time to the video stream. The OST devices work in a different way, projecting synthetic virtual models onto semi-transparent lenses: in this case, the user sees the real external world augmented with virtual models. On the other hand, using VST hardware, the user sees the real world on the display of the device [16]: smartphone applications are typical examples of VST technology. A further classification of AR hardware can be carried out by splitting hardware into two categories based on the processing unit: stand-alone devices include the electronic boards and processors capable of framing the external world, superimposing models, and visualizing the final stream; other devices require additional hardware to work properly, which must be connected through cables, Bluetooth, or Wi-Fi.

2.2.1. Video See-Through Devices

HMDs are not generally associated with the VST approach, since most of them lack the embedded cameras needed to capture reality and reproduce it on a standard pixel-based screen. Nevertheless, it is worth citing the Oculus Rift S and HTC Vive Pro, since they offer the possibility of stereo view thanks to two cameras mounted in front of the headset. Despite that, they are mainly used for VR applications, leading to a minor usage of these resources. VST HMDs could indeed be a breakthrough in a Mixed Reality environment, since they can couple the powerful behaviour of Virtual Reality with the space perception of Augmented Reality. Unfortunately, they are not stand-alone, but they have no competitors when the display specifications are considered. Both the Oculus Rift and the HTC Vive offer a 1080 × 1200 pixel resolution for each eye (2160 × 1200 in total). They have a 90 Hz refresh rate, thus ensuring a frame rate high enough to prevent motion sickness and provide a smooth experience overall, and they offer a 110-degree Field of View (FoV). The HTC Vive and Oculus Rift S support stereographic 3D view, and the large FoV lets users have a more immersive experience. Both of them are equipped with controllers that can interact with the Virtual Reality environment. Due to their specs, VSTs are usually chosen for Virtual Reality applications, but this does not prevent possible applications in Augmented Reality with different purposes. On the other hand, smartphones can represent an alternative to other VST devices because they are usually equipped with one or even more cameras and a screen, and it is quite straightforward to write mobile applications. Apple opened the way to a wide range of applications by implementing Lidar (Light Detection and Ranging) in its iPad Pro 2020, enhancing surface detection and allowing for a better representation of the environment where the holograms are placed.

2.2.2. Optical See-Through Devices

OST devices can exploit a set of technologies specifically developed to support AR needs. In particular, the attention of research and industry has been directed towards the projection techniques, which can probably be considered the heart of OST HMDs [17]. The simplest solution developed to project images in an HMD is the half-mirror technology, but it suffers from a limited FoV. Increasing the complexity, convex or free-form shaped mirrors can be used: this is useful to increase the FoV. To provide the reader with an example, the Meta2 device uses this kind of mirror to attain a large FoV and obtain a good display resolution. The waveguide grating, which is based on holographic and diffractive optics, is the latest technology available on the market. The diffraction grating works by imitating what happens in a lens: in this technology, the light is blended into a thin layer of plastic or glass. The light beams rebound through a channel up towards the user’s eye. This kind of solution has been adopted by the HoloLens family of devices. In recent years, a large number of companies have developed proprietary AR devices. At first, Google introduced Google Glass; Vuzix brought to the market a wide set of eyewear devices, both stand-alone and requiring a host device; Microsoft developed its own proprietary solutions. Among the most discussed devices, it is worth citing the Meta2 (by the Meta Company) and the Microsoft HoloLens. The Meta2 has no integrated computing capability; on the contrary, the Microsoft HoloLens and some models of the Vuzix Glasses family integrate computational power. To provide the user with an example, the Microsoft HoloLens exploits the computing power of an Intel Atom processor, which is used by the Vuzix M300 Smart Glasses too. On one hand, Meta2 headsets rely on an external PC to run any type of application; on the other, the Microsoft HoloLens can be defined as a stand-alone computer that the experimenter can wear on his/her head. The Vuzix M300 Smart Glasses support the Android operating system and, therefore, support not only specific eyewear applications, but also mobile applications. The HoloLens by Microsoft exploits holographic lenses that are based on the waveguide principle; on the other hand, devices such as the Meta2 work thanks to a convex mirror that projects images directly in front of the experimenter’s view. If we compare these two devices, it appears that the solution adopted by the Meta2 can guarantee a 90-degree field of view, which is useful for obtaining a more immersive experience with respect to the HoloLens. The main performance limit of the HoloLens is its 30-degree field of view: it is true that a stereographic 3D view experience is obtained, but the experimenter must direct her/his eyes towards the frontal direction due to the poor FoV. Moreover, the larger FoV of the Meta2 could be dangerous in on-field usage because the user’s view may be obstructed by holograms. As the most significant Vuzix device currently on the market, the Vuzix M300 Smart Glasses are monocular, with a small screen that does not occupy the entire field of view, but only a small window.
Another difference between the Meta2 and HoloLens that must be highlighted is gesture management. The Meta2 presents superior performance because the gestures implemented in it are more intuitive than the movements required to manage the HoloLens. The authors evaluated both devices, and we believe that the Meta2 solution, based on the use of the whole hand to handle holograms and of the fingertip to pick and move icons on the screen, is more effective than what is offered by the HoloLens. In the HoloLens, the user must perform a gesture described as “blooming” in the literature because it imitates the blossoming of flowers: this is quite an uncomfortable and unintuitive move. The alternative way to handle picking in the HoloLens is based on the “air tapping” finger motion: in the authors’ opinion, this kind of control also presents critical issues. One of the main problems of this gesture is that it is only captured well if performed using the index finger and thumb. Moreover, the index finger should be kept straight, parallel to and in front of the sensors of the HoloLens, in an unnatural position, since fingers are usually slightly curved. Finally, “air tapping” is not always user friendly because it requires the user to keep his/her arm stretched out in front of the head, and sometimes this position can be difficult to achieve in an industrial application. The Vuzix M300 Smart Glasses come with Android- and iOS-compatible software to be installed on tablets or mobile phones, allowing them to function as controllers for the eyewear device: this is because many mobile applications may need a human–machine interface such as a touchpad or a keyboard.

2.2.3. Embedded Tracking Systems and Tools

The devices analysed in the previous section also differ in their tracking systems. The HTC Vive has six-DOF perception thanks to an IR sensor that can handle 360-degree tracking and cover an area of 4.5 × 4.5 m where the user can walk around freely; the Oculus Rift is equipped with a similar set of sensors (IR LEDs) that can track the user over 360 degrees, but in a smaller area of about 2.5 square metres. On the other hand, the HoloLens has no space limits, since its sensors continuously map the game area. This is a huge advantage because the HoloLens does not need an area setup. Its tracking system is based on two “environment understanding cameras” per side. The “environment understanding cameras”, coupled with the depth sensor, achieve head tracking and environment detection. This latter feature lets the user position an object in 3D space. The Meta2 performs a tracking function similar to the HoloLens because it is based on webcams, depth sensors, and an IMU (Inertial Measurement Unit). The data fusion is obtained through an algorithm similar to SLAM (Simultaneous Localization and Mapping) [16]. In addition to these embedded devices, pure sensors exist, such as the Microsoft Kinect, Leap Motion, and Occipital Structure Sensor 3D scanner. These devices need integration with a computer, but if correctly connected they can achieve similar results at an affordable price. It is worth noting that these devices are older than the embedded helmets and made software development possible in the early years. In particular, the Microsoft Kinect is the direct ancestor of the Microsoft HoloLens, since it shares similar technology (webcam and IR sensor). Moreover, the gesture algorithms take advantage of the knowledge achieved through the Leap Motion, a device designed for hand recognition for object interaction. This device can accurately detect each finger of a human hand, associating a task with each movement. This is possible by blending the data from two cameras and three IR sensors. Due to its precision in a range of 0.40 m over the sensor, the Leap Motion has frequently been used as a support tool for older helmets in order to obtain more accurate interaction with the augmented environment [17]. The Structure Sensor is a similar piece of apparatus that can be connected to a wide range of portable devices, such as smartphones and tablets: it is the first 3D sensor developed specifically for mobile devices. It is possible to capture detailed, full-colour 3D scans of objects or people in real time by clipping the sensor to a compatible mobile device. It can be used for VR and mixed reality with the provided SDK in Unity 3D. The Structure Sensor can be used to obtain positional tracking, similar to the HTC Vive, without the need for the set-up or calibration of fixed lighthouses, so that it does not require high computational power.
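As an aside on the sensor-fusion step mentioned above, the following toy sketch (our illustration; the actual algorithms in these devices are proprietary and closer to SLAM) shows the classic complementary filter often used to fuse gyroscope and accelerometer readings into a drift-corrected orientation estimate, the basic principle behind IMU-based head tracking. All sample values are invented.

```python
import numpy as np

def complementary_filter(pitch_deg, gyro_rate_dps, accel, dt, alpha=0.98):
    """Blend the integrated gyroscope rate (smooth but drifting) with the
    accelerometer tilt estimate (noisy but drift-free)."""
    pitch_gyro = pitch_deg + gyro_rate_dps * dt                 # integrate rate
    pitch_accel = np.degrees(np.arctan2(accel[0], accel[2]))    # gravity tilt
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

# Invented sensor stream: 1.5 deg/s rotation, gravity mostly along z
pitch = 0.0
for _ in range(100):
    pitch = complementary_filter(pitch, gyro_rate_dps=1.5,
                                 accel=(0.05, 0.0, 0.99), dt=0.01)
print(f"estimated head pitch: {pitch:.2f} deg")
```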
The future trend in the evolution of AR hardware points towards the reduction in the gap between the virtual world and the authentic world. Moreover, more attention will be given to the ability to implement the digital twin concept [18]: future hardware will allow for a reliable real-time switch between a genuine real object and its virtual facsimile. This will facilitate human–robot interactions, support better engineering decisions based on real-world data, and enable real-time alert management systems for improving safety [19].

3. Analysis of AR Applications and Comments

This section comments on the AR capabilities and drawbacks in industrial applications. As better explained in the following subsections, the main problem is that current AR technology presents good visualization performance, but it lacks interaction capabilities. Examples and case studies developed by the authors support the statements in these sections, which reflect the authors’ point of view about AR.

3.1. Applications Proposed by Literature

This section briefly describes some of the applications proposed by the literature where AR capabilities are exploited to enhance the users’ experience.
Several papers deal with the description of simple case studies where AR is adopted in an industrial environment to support maintenance [20,21]. The superimposition of virtual models on complex products [22] can help in detecting the correct part to operate on; on the other hand, enriching the real scene [23] with writing and animations showing assembly/disassembly procedures can be useful to allow unskilled operators to solve complex engineering problems. AR can be useful to reduce the complexity of user/maintenance manuals as well [24]. In this case, instead of pages and pages of small complex drawings that are often unclear or hard to interpret, the contact with the real world can be useful to save time and increase process efficiency. Following the literature, maintenance training [25] could take advantage of AR as well: operators can be trained to operate on a real machine by acting on a virtual model, so that there is no need to be in front of the object to familiarize themselves with it, and any errors do not lead to damage. Remote collaboration capabilities have been proposed in [26], where a remote maintenance centre can guide an operator in complex tasks by preparing the required animations to support him/her in the work on the field. Pick-by-vision is another interesting concept addressed in the literature: it has been proposed to guide operators in the selection of items to pick up in assembly tasks or logistics. As suggested by [27,28,29], AR can be used to project a virtual “bull’s eye” or a “tunnel” to guide the operator towards the component to select. The industrial environment is particularly advantageous for AR, since many highly detectable symbols lie in the factory and can be used as markers. On the other hand, outdoor applications require the use of GPS, IMUs, and other devices to fix the position and point of view of the operator: a bulky backpack with compass, IMU, and GPS can be necessary when the exploration of an unknown real world is demanded of AR.
Medicine is another scenario deeply described in the literature to test and apply AR. Training surgeons, precisely positioning needles and medical devices during surgery, enhancing the view, and aligning Computer Tomography (CT) images with the real patient form a non-exhaustive list of tasks proposed to be carried out with AR support [30]. In this case, the precision that can be obtained through AR in the alignment of the virtual model with the real patient, and the need for augmentation systems able to continue tracking even when a medical operator covers the field of view of cameras or markers, are the main limits of the current technology. Moreover, the need for good image contrast in a real surgery room, where lights can be switched on and off depending on the surgeons’ needs, can be quite a challenging problem. Recent studies [31] have tried to introduce AR—especially with OST HMDs—into the medical environment in order to help medical doctors in surgery [32].
Entertainment ensures a high volume of income for the electronics industry. First-person games and scene augmentation games have been developed both for smartphones and for home entertainment game platforms for many years. In this case, there is no need for precise positioning or high performance, and AR can represent a mature, even if expensive, technology to be introduced in games.
Several outdoor navigation systems have been developed for smartphones to support tourism and provide a description of real environments. In this case, known images (e.g., a mountain silhouette or a building) can be tracked. Moreover, these applications do not require a precision comparable to what is needed in medicine. Focusing attention on high-cost equipment, heads-up displays (HUDs) presenting information for pilots are standard in high-performance aircraft; the automotive industry is also evaluating AR devices such as heads-up displays to support the driver and present guidance information on the windshield. In this way, it is not necessary to move the eyesight to the panel to read data about speed, RPM, fuel, settings, and knob positions.
The projection of symbols and writing on the external environment for advertising is a commonly used capability, exploited by toy manufacturers to show the content of boxes, or to obtain Augmented Business Cards showing information in 3D. There are AR applications where, when a shop sign is within the frame, additional messages appear on the smartphone of the client. In these cases, the ability to run AR applications on smartphones or low-cost devices is more important than high refresh rates or good positioning precision: symbols and animations can be prepared once, and there is no need to change them depending on specific tasks to accomplish: it is simple content popping up when an image or symbol is framed.

3.2. Hardware Limitations

There are different pros and cons when comparing and evaluating the projection techniques available on the market. The half-mirror solution offers a limited FoV, due to its light and small geometry; on the contrary, a larger FoV would require bigger and heavier hardware. The convex-mirror-based solution gives the widest FoV (e.g., 90° in the Meta2 HMD). However, the hardware of this configuration is bulky, and the resolution can change depending on how well the combiner (the support device on which images are displayed) plastic material maintains its quality. Besides, the waveguide offers a limited FoV when compared to the convex mirror configuration. A brightness problem with OST HMDs can also be noticed: the combination of the virtual image with the brightly lit real-world environment is the main drawback of see-through devices. To support this statement, it is fundamental to notice that the augmented images are faded and do not cover or block out the real world (Figure 8).
With AR see-through optics, much of the display light is sacrificed to allow seeing through to the real world, and the user has to deal with the light that is in the real world. The user could turn off the room lights and use a darker background for the holograms in order to accentuate the brightness of 3D objects, but in this way the “see-through” feature becomes useless. For instance, the HoloLens is equipped with a transparent screen, and it can only add light to the user’s view, thus giving virtual objects a faded appearance. There is no additional layer that can turn the screen off and opaque pixel by pixel. The screen is so bright that, in a controlled environment such as a darkened demo room, the background is effectively erased by virtual objects, but when viewing objects close to a bright background (e.g., a table lamp lying close by), they become barely visible. Given that such devices can only add light to the user’s view, it is impossible to render totally black 3D objects (RGB = [0,0,0]) with optical see-through devices, because the black colour is used as an alpha channel, that is, the layer used for transparency. To provide an example for the HoloLens, the Clear Flag pop-up menu of the main camera in the Unity Editor should be set to a black solid colour in order to set the transparent colour. The same applies to the Meta2 device (see Figure 8 and Figure 9, for example).
This brightness problem also affects the resolution of the images, and it represents the main difference from video see-through HMDs. In particular, with optical see-through HMDs, holograms have a “glow” around them, so that 3D objects have no sharp contours and suffer from a loss of detail. On the other hand, the brightness problem does not appear in devices such as the HTC Vive and Oculus Rift, thanks to the use of OLED displays and eyepieces. Furthermore, the occlusion problem in HMDs was solved earlier and more easily for the VST technology than for the OST HMDs. The use of eyepieces is necessary to cover the peripheral view, but this solution cannot be used with the OST technology. With VST devices, it is possible to completely turn off pixels with image processing libraries (for example, OpenCV), in order to delete real objects and replace them with augmented ones, or to render completely black objects. In contrast, the Meta2 projection system is based on an LCD screen that projects images onto the user’s eye thanks to a half-silvered convex mirror. The Meta2 features a wider FoV because of the adopted projection system. However, if the distance between the eye and the convex mirror is too large, the see-through feature fails due to the lenses’ distortion factor. The computational power is another big difference between optical see-through and video see-through HMDs. Currently, only the Microsoft HoloLens and some Vuzix models have embedded computational power, thanks to their Intel Atom processors. The HTC Vive and Oculus Rift support only apps running on external PCs, and they must be wire-connected to the PC. Moreover, they rely on external devices for the tracking system. Therefore, they are not ergonomic in an industrial scenario. On the other hand, Microsoft’s product is much more versatile in the case of portable use, since it does not need a wired connection. On the contrary, it requires a constant and powerful wireless connection, since its spatial tracking system associates each environment, the so-called “Spaces”, with a Wi-Fi network. These meshes are saved in the internal storage memory and loaded when the HoloLens detects the associated Wi-Fi network. For the above-cited reasons, the use of the Microsoft HoloLens in the factory environment is not so easy.
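The additive nature of OST displays described above can be summarized in a few lines of code. The toy sketch below (our illustration, with invented pixel values) contrasts OST compositing, where the display can only add light to the real background, with VST compositing, where virtual pixels simply overwrite real ones: a black virtual object vanishes in the first case and stays black in the second.

```python
import numpy as np

real = np.full((2, 2, 3), 0.6)             # bright real-world background
virtual = np.zeros((2, 2, 3))              # a "black" virtual object (RGB = 0)
mask = np.ones((2, 2, 1), dtype=bool)      # region covered by the virtual object

# OST: the combiner can only ADD display light to the incoming real light
ost_view = np.clip(real + virtual, 0.0, 1.0)

# VST: the rendered pixels replace the camera pixels wherever the object sits
vst_view = np.where(mask, virtual, real)

print("OST pixel:", ost_view[0, 0])        # still 0.6: the object is invisible
print("VST pixel:", vst_view[0, 0])        # 0.0: a truly black object
```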

3.3. Software Limitations

Regarding industrial applications, it is quite ambitious to say that Augmented Reality, rather than Virtual Reality, is a ready-to-use technology for factories, because there are many difficulties to deal with. This happens because of the variety of software packages and libraries, which cannot always be combined: this can cause confusion and waste the final user’s time. Long preparation times are required to produce just one animation. In the opinion of the authors, the basic steps for a real industrial application are the creation of a 3D parametric CAD model, the export of this model as a mesh, the simplification of the geometry with a dedicated tool, and the import into Unity, where it is then possible to start scripting the Augmented Reality for the final scene. It is clear that this chain of events requires not only a lot of time but also a wide knowledge of every single tool. Besides, polygon limitations for real-time rendering force the final user to adopt low-detail models that do not fit all industrial applications. This waste of time makes the technology not ready for maintenance manuals, because it is nearly impossible to prepare the animations for all the possible tasks for operators. In this context, it is important to underline the efforts made by PTC Creo® to provide users with a single platform for the creation of 3D models and their visualization in AR. This feature has been achieved thanks to the integration of Vuforia inside the parametric PTC CAD and the setting up of the scene in the ThingWorx Studio software package. Even though the scene environment is limited compared to the Unity Editor, this effort can be evaluated as a significant step towards the introduction of AR in real industrial applications, and not only in games or computer graphics. Despite these facts, AR is currently applied more to training than to maintenance or virtual manual implementation. Thanks to AR, the physical presence of the instructor where and when a training session is performed is no longer necessary.

3.4. Usability Limitation in Industrial Environment

Aside from the software and hardware limitations, several usability issues should be taken into account when considering the use of AR in the industrial environment. State-of-the-art AR applications often require set-up operations before starting a working session: this can be acceptable in a laboratory, but it is not acceptable to a worker who is not skilled in computer graphics. Moreover, the industrial working flow does not allow wasting time in long and quite complex setup operations. Another issue to consider in the industrial environment is that current AR applications lack easy collision/occlusion capabilities: this effect is fundamental to provide depth sensation and realism, and to allow maintenance operations where 3D parts should be moved and oriented in space. AR devices are sensitive to light levels: this can be a critical issue in factories, where there are dark and bright areas, and the same zone can pass from one lighting level to another by simply switching a light on or off or opening a door. When human–machine interfaces are considered, it is worth saying that current touch-based command/input windows are not adequately robust for industrial applications: in an industrial environment, the user does not have time to use complex input devices [33,34]. In this case, finger pick-and-drag capabilities could represent a solution to increase ease of use and accessibility, but these technologies are not a standard for AR applications today. As for glasses, no comfortable AR devices are currently available for complex operations where high computational efforts are required: today, glasses can be divided between ergonomic models with scarce computational capability and definition, and high-performance Head-Mounted Displays or bulky glasses. It is a matter of fact that glasses with good ergonomics, high resolution, and high computational capability are not available nowadays. The statements reported above are the result of a set of “first-person” experiments carried out by the authors in an industrial environment. To provide the reader with visual feedback, Figure 10a shows a picture of an operator wearing see-through glasses in a workshop where an AR-assisted drilling operation is being simulated; Figure 10b shows the augmented view seen by the user.
As a matter of fact, the precision required for drilling (0.05 mm), typical of mechanical manufacturing, can hardly be achieved: a time-consuming registration process is necessary even to obtain a precision of 1 mm (both with marker and marker-less technology), which is the typical value expected with current AR capabilities. Moreover, light conditions interfere with the brightness of the scene perceived while wearing the glasses: the opening of a door (or the switching on of a light bulb) is not compensated for by the AR software. Lastly, even a simple AR animation prepared with a dedicated AR software tool requires a long time to prepare the scene. In Figure 11, the virtual model of a screw/bolt has been added close to the real one to show how to disassemble the bolt: the CAD model has been imported into Vuforia and correctly oriented with respect to the marker (see Figure 11a), a symbol has been added, and the animation has been loaded onto the HoloLens glasses.
After a set of tests with several users, the average time required to prepare the animation by a skilled operator has been assessed at 2 h; a marker-less approach can be followed as well (see Figure 11b). In this latter case, the features of the environment itself act as a marker: therefore, clear workspaces (e.g., flat floors and walls without details and with uniform colour) could be unsuitable to track and pose the object. However, the typical workshop environment does not present this problem, because machines usually show complex geometry and easily recognizable parts with edges and multi-oriented surfaces.
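A rough order-of-magnitude check (our own back-of-the-envelope reasoning with an assumed focal length, not a measurement from the experiments) clarifies why sub-millimetre registration is so hard: under a simple pinhole camera model, the metric footprint of one pixel grows linearly with the working distance, so even a single pixel of tracking error already exceeds typical machining tolerances.

```python
# Pinhole model: the real-world size of one pixel at distance z is z / f,
# with the focal length f expressed in pixels (assumed typical webcam value).
f_px = 800.0
for z_m in (0.3, 0.5, 1.0):                      # plausible working distances [m]
    px_footprint_mm = z_m / f_px * 1000.0
    print(f"at {z_m:.1f} m, 1 px covers about {px_footprint_mm:.2f} mm")

# Even one pixel of error at 0.5 m (~0.63 mm) is an order of magnitude larger
# than the 0.05 mm precision required for the drilling task discussed above.
```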

4. Bridging the Gap between AR and Industry Needs

4.1. Technology

It is the authors’ opinion that effective and efficient exploitation of AR capabilities in the industrial environment will be possible once a full Reality-Virtuality continuum is implemented. This concept, introduced by Paul Milgram, implies going beyond the distinction between Reality and Virtuality and gives the user the capability to set, in a simple way, the level of Virtuality he/she wants to work with. In AR industrial applications, the capability to mix Reality and Virtuality, and to switch between them in a simple way, is of paramount importance: the operator should have complete control and be able to decide whether to stress the attention on the virtual or the real depending on the task he/she has to carry out. Shadows, occlusion implemented in an effective way, rendered virtual models, and high-capability human–machine interfaces are some of the technologies that should be made available to users to overcome the distinction between real and virtual. This technology change impacts the hardware and software requirements to run AR, as better explained in the following sections.

4.2. Hardware

Concerning hardware, an improvement in the computational capabilities of glasses is required. As mentioned before, at present only the Microsoft HoloLens and some Vuzix models implement embedded computational power. Other devices are comfortable but do not provide computational power and require a connection with external hardware. Besides, the calculation power installed in the HoloLens and Vuzix Glasses is scarce, and the devices fail when they need to process complex geometries that are not sufficiently simplified, not to mention that they are quite bulky and unwieldy when used in an industrial scenario. The portability and freedom of movement in an external environment (i.e., on-field applications) represent the main advantage of internal computing capacity. Devices such as the Meta2, which rely on external computing power, offer higher performance but reduce the freedom of movement. Even if the hardware is installed in a backpack, the recommended hardware requirements for the use of tethered devices are quite demanding. For example, the Meta2 requires the latest generation of processors and dedicated high-end graphics cards, which are not available in a common laptop of moderate size and weight. Besides, other devices such as the HTC Vive and Oculus Rift, even in a backpack configuration, could not be used in an external environment because of the tracking system, which requires external sensors. The separation of the HMD from the calculation hardware can be considered another solution proposed by the literature to maintain portability. To provide an example, a solution of this type is proposed by Magic Leap [35], in which the calculation operations are performed by an easily wearable mini-PC, separated from the display. This approach could be a backup solution to solve the trade-off between computational capabilities and portability: as soon as electronic engineering provides small and powerful computers with dedicated graphics processing units (GPUs), the processor will be integrated into the glasses frame structure. The new optical see-through HMD generation should solve the problems of environmental lighting in order to make it possible to use them even in outdoor environments. Furthermore, solving the problem of external light blockage can result in effective occlusion of real objects, which can then be replaced by their virtual counterparts, filling the gap between real and virtual. The real-virtual data exchange needs to be improved by integrating the hardware needed to scan real objects and automatically import them into the virtual environment. When these capabilities are available, as envisaged by Segovia et al. [36], AR could be exploited to support manufacturing. However, dealing with manufacturing dimensional tolerances implies a precision in AR scene registration that is higher than the tolerance range to evaluate. The real-virtual data exchange also affects the way in which users can deal with real objects during their normal use. The concept of the digital twin is a pillar of Industry 4.0: this means interacting with the virtual model by working on the real model (and vice versa) and using the captured data immediately in order to make better engineering decisions and designs using real-world data. Improvements in resolution, FPS, and refresh rates are also needed to reduce and minimize the wake effect during the movement of the viewpoint.
The improvement in display definition, and, therefore, in the definition of virtual objects, especially 3D object edges, will allow for the more accurate placement of 3D objects. The current refresh rate of 30 fps, which is nowadays available in several hardware items, is appropriate because it is beyond the threshold frequency appreciated by human eyes. Concerning the resolution of the screen, meaningful values depend on the final application. According to Ferwerda [37], there are three levels of realism in computer graphics: physical realism (the same visual stimulation as the scene), photo-realism (the same visual feedback as the scene), and functional realism (the same visual information as the scene). When dealing with maintenance, where functional realism should be assured, the current resolution offered by devices such as the HoloLens should be appropriate. However, when dealing with photo-realism, much larger resolutions are necessary. Concerning rendering requirements, a possible solution to overcome the problem of rendering completely black objects could be the use of lenses or glasses capable of obscuring the black zones, exploiting new technologies such as LCD electrochromic films.

4.3. Software

For industrial applications, tracking based on symbols (e.g., road signs) can be exploited, since there are many known panels in factories; a similar technology is currently used in Laser-Guided Vehicles (LGVs). SLAM is another technology that can be used to recognize the environment for autonomous applications in the industrial environment without the need for an operator. Software simplicity is the key point of all these applications: flexible integration among widespread popular programs is necessary. This gap can be covered by introducing plugins that can open communication channels between AR libraries and commercial CAD systems without the need for manual model conversion and scene creation in different programs. Regarding this issue, PTC has already presented its Vuforia Object Tracking [38] tool.
After the acquisition of the Vuforia technology, PTC created a direct link between the two pieces of software, integrating both the AR and CAD tools, thus enabling a simple set-up of the augmented reality scene. In particular, thanks to object tracking, it is now possible to directly match real objects with CAD models without manually handling the conversion. With this link, it is possible to easily use AR applications in the most natural way. Aside from this, it is important to remark that AR is more promising than VR for industrial applications because it is less invasive and allows for a closer connection with reality. It is true that Virtual Reality can be used in training sessions, but it is impossible to conceive VR applications for real working sessions in industry, the typical scenario where interaction with real objects is fundamental. The advantage of Augmented Reality over VR is also noticed in the higher resolution of images and holograms that can be obtained with AR technology. Additionally, it is also very important to take care of image quality. Years ago, when Augmented Reality was first developed, the registration problem was the key point of research: today, with higher computing power and sophisticated tracking methods, it is possible to concentrate efforts on increasing the realism of the projected objects. Progress has been achieved by VTT [39] with their proposal for solving real-time illumination. Of course, this cannot be considered a standard solution, but it is a starting point to increase model accuracy. New software packages should allow for the reverse engineering of objects framed by cameras, and real and virtual objects should be fully and completely interchangeable. Optimizing the registration problem [40] and rendering techniques will lead to real “digital twin” capabilities. The understanding of the surrounding environment (in terms of scene reverse engineering and scanning) is a key feature to reach the goal of real- and virtual-world integration. Last, but not least, AR also needs lighter devices, allowing for more comfort and less stress on operators. Less demanding requirements on calculation power will give AR portability characteristics, thus allowing it to cover more and more fields of industrial applications.
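Natural-feature tracking of the kind discussed above ultimately rests on detecting and matching distinctive image features between frames. The short sketch below (an illustration of the general principle, using the ORB detector at the core of ORB-SLAM [11], not the proprietary pipeline of any commercial SDK) shows this building block with OpenCV; the frame file names are placeholders.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)     # binary ORB features, cheap to compute

# Two consecutive frames of the workshop scene (placeholder file names)
frame_prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame_curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

kp1, des1 = orb.detectAndCompute(frame_prev, None)
kp2, des2 = orb.detectAndCompute(frame_curr, None)

# Brute-force Hamming matching, the usual choice for binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# These correspondences feed the pose-estimation/SLAM back-end that anchors
# virtual content to the scene without any artificial marker
print(f"{len(matches)} feature correspondences between consecutive frames")
```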

5. Conclusions

The aim of this work is to provide the reader with the current state of the art of AR applications. In particular, it focuses on listing the problems that limit applications in the Industry 4.0 environment. The paper proposes some interesting features for future development to stimulate a smooth introduction of AR into real everyday applications. AR is a well-known technology, and the literature covers applications in different fields, from industry to entertainment and advertising. A wide review of hardware and software has been carried out, showing the capabilities and limitations of VST and OST devices. Despite the technological know-how behind Augmented Reality, it remains expensive for stand-alone devices and rather incomplete for the wide range of applications it pursues. The technology spans many different scientific disciplines, from programming languages to tracking systems and electronic engineering, but it still struggles to find a specific application sector. The case studies implemented by the authors suggest that the current AR technology is not mature enough for its introduction into the industrial environment. Several improvements must be achieved before applying it to factories, both from a hardware and a software point of view. New-generation glasses with high computational capabilities, good ergonomics, and robustness to lighting conditions should be developed. From a software point of view, a better handling of the following issues should be achieved: shadow management, with the automatic setup of global and direct illumination depending on the position of the user in internal or external environments; occlusion and its realistic implementation; the capability to develop animations with CAD models in short times; the integration of AR into CAD systems; user-friendly human–machine interfaces based on gesture and finger tracking; and automatic set-up and tuning. Moreover, a lowering of HMD prices would enhance further research and push AR into the future. Once these milestones are reached, AR could be effectively introduced in industry to support tasks such as maintenance, training, user manuals, collaborative design, and the fast visualization of product changes and modifications.

Author Contributions

Formal analysis, A.C.; writing—original draft preparation, G.M.S., F.O.; writing—review and editing, G.M.S., A.C.; supervision, A.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The CAD models and software examples presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Azuma, R.T. A survey of augmented reality. In Presence: Teleoperators and Virtual Environments; MIT Press: Cambridge, MA, USA, 1997; Volume 6, pp. 355–385. [Google Scholar] [CrossRef]
  2. Merino, L.; Schwarzl, M.; Kraus, M.; Sedlmair, M.; Schmalstieg, D.; Weiskopf, D. Evaluating Mixed and Augmented Reality: A Systematic Literature Review (2009–2019). In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2020), Porto de Galinhas, Brazil, 9–13 November 2020; pp. 438–451. Available online: http://arxiv.org/abs/2010.05988 (accessed on 9 April 2021).
  3. Gattullo, M.; Evangelista, A.; Uva, A.E.; Fiorentino, M.; Gabbard, J. What, How, and Why are Visual Assets used in Industrial Augmented Reality? A Systematic Review and Classification in Maintenance, Assembly, and Training (from 1997 to 2019). IEEE Trans. Vis. Comput. Graph. 2020. [Google Scholar] [CrossRef]
  4. Simsarian, K.; Åkesson, K. Windows on the World: An example of Augmented Virtuality. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.27.4281 (accessed on 7 December 2020).
  5. Azuma, R.; Baillot, Y.; Behringer, R.; Feiner, S.; Julier, S.; MacIntyre, B. Recent advances in augmented reality. IEEE Comput. Graph. Appl. 2001, 21, 34–47. [Google Scholar] [CrossRef] [Green Version]
  6. Neges, M.; Adwernat, S.; Abramovici, M. Augmented Virtuality for maintenance training simulation under various stress conditions. Procedia Manuf. 2018, 19, 171–178. [Google Scholar] [CrossRef]
  7. Milgram, P.; Takemura, H.; Utsumi, A.; Kishino, F. Augmented reality: A class of displays on the reality-virtuality continuum. Telemanip. Telepresence Technol. 1995, 2351, 282–292. [Google Scholar] [CrossRef]
  8. Microsoft Official Website. Available online: https://developer.microsoft.com/en-us/windows/mixed-reality/mixed_reality (accessed on 7 December 2020).
  9. Amin, D.; Govilkar, S. Comparative Study of Augmented Reality Sdk’s. Int. J. Comput. Sci. Appl. 2015, 5, 11–26. [Google Scholar] [CrossRef]
  10. Siltanen, S. Texture generation over the marker area. In Proceedings of the ISMAR 2006: Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality, Santa Barbara, CA, USA, 22–25 October 2006; pp. 253–254. [Google Scholar] [CrossRef]
  11. Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef] [Green Version]
  12. Osti, F.; Ceruti, A.; Liverani, A.; Caligiana, G. Semi-automatic Design for Disassembly Strategy Planning: An Augmented Reality Approach. Procedia Manuf. 2017, 11, 1481–1488. [Google Scholar] [CrossRef]
  13. Gattullo, M.; Dammacco, L.; Ruospo, F.; Evangelista, A.; Fiorentino, M.; Schmitt, J.; Uva, A.E. Design preferences on Industrial Augmented Reality: A survey with potential technical writers. In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2020, Recife, Brazil, 9–13 November 2020; pp. 172–177. [Google Scholar] [CrossRef]
  14. ARCore Official Website. Available online: https://developers.google.com/ar/ (accessed on 3 January 2018).
  15. ARKit Official Website. Available online: https://developer.apple.com/arkit/ (accessed on 3 January 2018).
  16. Meta Official Website. Available online: https://meta-eu.myshopify.com/ (accessed on 7 November 2020).
  17. Beattie, N.; Horan, B.; McKenzie, S. Taking the LEAP with the Oculus HMD and CAD—Plucking at thin Air? Procedia Technol. 2015, 20, 149–154. [Google Scholar] [CrossRef] [Green Version]
  18. Agnusdei, G.P.; Elia, V.; Gnoni, M.G. Is Digital Twin Technology Supporting Safety Management? A Bibliometric and Systematic Review. Appl. Sci. 2021, 11, 2767. [Google Scholar] [CrossRef]
  19. Agnusdei, G.P.; Elia, V.; Gnoni, M.G. A classification proposal of digital twin applications in the safety domain. Comput. Ind. Eng. 2021, 154, 107137. [Google Scholar] [CrossRef]
  20. De Marchi, L.; Ceruti, A.; Marzani, A.; Liverani, A. Augmented reality to support on-field post-impact maintenance operations on thin structures. J. Sens. 2013, 2013. [Google Scholar] [CrossRef] [Green Version]
  21. Mourtzis, D.; Siatras, V.; Angelopoulos, J. Real-Time Remote Maintenance Support Based on Augmented Reality (AR). Appl. Sci. 2020, 10, 1855. [Google Scholar] [CrossRef] [Green Version]
  22. Ceruti, A.; Liverani, A.; Bombardi, T. Augmented vision and interactive monitoring in 3D printing process. Int. J. Interact. Des. Manuf. 2017, 11, 385–395. [Google Scholar] [CrossRef]
  23. Baron, L.; Braune, A. Case study on applying augmented reality for process supervision in industrial use cases. In Proceedings of the IEEE International Conference on Emerging Technologies and Factory Automation, ETFA, Berlin, Germany, 6–9 September 2016; Volume 2016. [Google Scholar] [CrossRef]
  24. Di Donato, M.; Fiorentino, M.; Uva, A.E.; Gattullo, M.; Monno, G. Text legibility for projected Augmented Reality on industrial workbenches. Comput. Ind. 2015, 70, 70–78. [Google Scholar] [CrossRef]
  25. Maly, I.; Sedlacek, D.; Leitao, P. Augmented reality experiments with industrial robot in industry 4.0 environment. In Proceedings of the IEEE International Conference on Industrial Informatics (INDIN), Poitiers, France, 18–21 July 2016; pp. 176–181. [Google Scholar] [CrossRef] [Green Version]
  26. Fiorentino, M.; Uva, A.E.; Gattullo, M.; Debernardis, S.; Monno, G. Augmented reality on large screen for interactive maintenance instructions. Comput. Ind. 2014, 65, 270–278. [Google Scholar] [CrossRef]
  27. Hanson, R.; Falkenström, W.; Miettinen, M. Augmented reality as a means of conveying picking information in kit preparation for mixed-model assembly. Comput. Ind. Eng. 2017, 113, 570–575. [Google Scholar] [CrossRef]
  28. Reif, R.; Günthner, W.A.; Schwerdtfeger, B.; Klinker, G. Evaluation of an augmented reality supported picking system under practical conditions. Comput. Graph. Forum. 2010, 29, 2–12. [Google Scholar] [CrossRef]
  29. Schwerdtfeger, B.; Reif, R.; Günthner, W.A.; Klinker, G. Pick-by-vision: There is something to pick at the end of the augmented tunnel. Virtual Real. 2011, 15, 213–223. [Google Scholar] [CrossRef]
  30. Casari, F.A.; Navab, N.; Hruby, L.A.; Kriechling, P.; Nakamura, R.; Tori, R.; de Lourdes dos Santos Nunes, F.; Queiroz, M.C.; Fürnstahl, P.; Farshad, M. Augmented Reality in Orthopedic Surgery Is Emerging from Proof of Concept Towards Clinical Studies: A Literature Review Explaining the Technology and Current State of the Art. Curr. Rev. Musculoskelet. Med. 2021, 14. [Google Scholar] [CrossRef]
  31. Pratt, P.; Ives, M.; Lawton, G.; Simmons, J.; Radev, N.; Spyropoulou, L.; Amiras, D. Through the HoloLensTM looking glass: Augmented reality for extremity reconstruction surgery using 3D vascular models with perforating vessels. Eur. Radiol. Exp. 2018, 2. [Google Scholar] [CrossRef] [PubMed]
  32. Tepper, O.M.; Rudy, H.L.; Lefkowitz, A.; Weimer, K.A.; Marks, S.M.; Stern, C.S.; Garfein, E.S. Mixed reality with hololens: Where virtual reality meets augmented reality in the operating room. Plast. Reconstr. Surg. 2017, 140, 1066–1070. [Google Scholar] [CrossRef] [PubMed]
  33. Lee, G.A.; Ahn, S.; Hoff, W.; Billinghurst, M. Enhancing First-Person View Task Instruction Videos with Augmented Reality Cues. In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2020, Porto de Galinhas, Brazil, 9–13 November 2020; pp. 498–508. [Google Scholar] [CrossRef]
  34. Wang, Z.; Bai, X.; Zhang, S.; Billinghurst, M.; He, W.; Wang, Y.; Han, D.; Chen, G.; Li, J. The role of user-centered AR instruction in improving novice spatial cognition in a high-precision procedural task. Adv. Eng. Inform. 2021, 47, 101250. [Google Scholar] [CrossRef]
  35. Magic Leap Official Website. Available online: https://www.magicleap.com/en-us (accessed on 7 January 2021).
  36. Segovia, D.; Ramírez, H.; Mendoza, M.; Mendoza, M.; Mendoza, E.; González, E. Machining and Dimensional Validation Training Using Augmented Reality for a Lean Process. Procedia Comput. Sci. 2015, 75, 195–204. [Google Scholar] [CrossRef] [Green Version]
  37. Ferwerda, J.A. Three varieties of realism in computer graphics. Hum. Vis. Electron. Imaging VIII 2003, 5007, 290. [Google Scholar] [CrossRef]
  38. Vuforia Official Website. Available online: https://library.vuforia.com/articles/Solution/model-target-test-app-user-guide.html (accessed on 1 December 2018).
  39. Aittala, M. Inverse lighting and photorealistic rendering for augmented reality. Visual Comput. 2010, 26, 669–678. [Google Scholar] [CrossRef]
  40. Gao, Q.H.; Wan, T.R.; Tang, W.; Chen, L. A stable and accurate marker-less augmented reality registration method. In Proceedings of the 2017 International Conference on Cyberworlds, CW 2017-in Cooperation with: Eurographics Association International Federation for Information Processing ACM SIGGRAPH, Chester, UK, 30 November 2017; Volume 2017, pp. 41–47. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Trend of the number of published papers with keywords "Augmented Reality", "Industry 4.0", and "Augmented Reality" AND "Industry 4.0" from the Scopus database.
Figure 2. Number of papers with keyword "Augmented Reality" combined with the quoted expressions listed in the first column.
Figure 3. Workflow of the paper methodology, starting from existing technology applied to industrial applications to produce strategic guidelines.
Figure 4. Simplified representation of the RV Continuum. Produced by the authors, adapting source [7].
Figure 5. Human–Computer–Environment interaction for MR creation. Produced by the authors, adapting source [8].
Figure 6. Application of Design for Disassembly developed at the University of Bologna.
Figure 7. ARdroid application developed in Unity by the authors. Top left and right: high- and low-resolution CAD model. Bottom left and right: low- and high-weight OBJ model in the ARdroid application.
Figure 8. Solid red cube on the left and solid black cube (which is transparent but visible thanks to refraction) on the right in the Microsoft HoloLens display. Experiment carried out at the University of Bologna by the authors.
Figure 9. A solid black cube framed with a webcam in the Unity Editor. Experiment carried out at the University of Bologna by the authors.
Figure 10. (a) Experiment carried out in the University of Bologna workshop with AR technology by the authors to reproduce a factory environment. (b) 3D object positioned in the real world.
Figure 11. Marker-less AR scene developed by the authors in the University of Bologna mechanical workshop: (a) with marker; (b) marker-less.