Advanced 3D Navigation System for AGV in Complex Smart Factory Environments


1. Introduction

Industry 4.0 originated in Germany. The concept refers to the use of the Internet of Things and information systems to digitize and automate supply, manufacturing, and sales information in production, aiming to achieve personalized, fast, and efficient product supply [1,2,3]. In smart factory systems that comply with Industry 4.0 standards [4], traditional manual transportation faces numerous limitations related to low efficiency and safety hazards. To alleviate these issues, the crucial role of mobile robots in society has received widespread emphasis [5,6]. Smart factories have started utilizing automated guided vehicle (AGV) systems that provide point-to-point autonomous transportation services through QR code guidance, 2D LiDAR, or visual navigation methods [7,8,9]. AGVs are capable of autonomous navigation based on requirements, offering flexibility and adaptability. They can reduce the need for human resources while ensuring quality and efficiency, thus improving transportation efficiency.

1.1. Related Work

The autonomous navigation of mobile robots entails directing entities toward predetermined destinations using real-time observations in a three-dimensional environment. Its core elements include localization, mapping, path planning, and mobility [10]. Present research centers predominantly on distinct navigation methodologies: trajectory navigation employing QR codes and landmarks as guides, GPS navigation depending on GPS positioning, inertial navigation utilizing inertial sensors, and visual and laser navigation predominantly leveraging cameras and LiDAR as sensors [11].

Indoor navigation commonly relies on ground-level QR codes or environmental landmarks for guiding trajectories, notably in compact smart factory settings. However, this method is substantially influenced by the layout of the surrounding environment, creating stability challenges when external conditions change [12]. Within GPS navigation, Winterhalter [13] introduced a positioning methodology utilizing the Global Navigation Satellite System (GNSS), merging GNSS maps and signals to facilitate independent robot navigation. Tu [14] employed dual RTK-GPS receivers for location and directional feedback. Yet, these approaches are exclusively effective outdoors or in areas with robust signal strength. In indoor settings or areas with limited signal reception, robots encounter difficulty in accurately interpreting signals, hindering autonomous navigation. Inertial navigation methods leverage the Inertial Measurement Unit (IMU) for relative robot positioning, relying on internal accelerometers, gyroscopes, and magnetometers to capture posture data; they operate without constraints from environmental factors or usage scenarios, demonstrating superior adaptability. However, IMU sensors are susceptible to noise, resulting in accumulated errors during extended use, thereby compromising navigation precision [15].
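This drift behavior can be illustrated with a minimal sketch: double-integrating a noisy, slightly biased 1D accelerometer signal. The sampling rate, bias, and noise level below are assumed values for demonstration, not figures from any cited work.

```python
import numpy as np

# Illustrative 1D dead reckoning: double-integrate a noisy accelerometer.
# Noise level, bias, and time step are assumed values for demonstration.
rng = np.random.default_rng(0)
dt = 0.01                              # 100 Hz IMU rate (assumed)
t = np.arange(0.0, 60.0, dt)
true_accel = np.zeros_like(t)          # the robot actually stands still
bias = 0.02                            # small constant accelerometer bias (m/s^2)
noise = rng.normal(0.0, 0.05, t.size)  # white measurement noise (m/s^2)
measured = true_accel + bias + noise

# Integrate acceleration -> velocity -> position.
velocity = np.cumsum(measured) * dt
position = np.cumsum(velocity) * dt

# Under a constant bias the position error grows roughly quadratically
# with time, which is why pure inertial navigation degrades in extended use.
print(f"position error after {t[-1] + dt:.0f} s: {abs(position[-1]):.2f} m")
```

Under a constant bias b, the position error grows as roughly b*t^2/2, which is why inertial measurements are typically fused with exteroceptive sensors rather than used alone.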

The simultaneous localization and mapping (SLAM) method is a prevalent choice for localization and mapping, leveraging the environmental map and the robot’s present position while considering aspects of its physical control and mobility [16]. SLAM is divided into two main categories based on sensors: visual SLAM and laser SLAM [17,18,19]. In the realm of visual SLAM, Campos and colleagues [20] introduced a groundbreaking visual-inertial SLAM system in 2021, tightly integrating an IMU with a camera. This pioneering system utilized monocular, stereo, and RGB-D cameras within visual SLAM, significantly improving operational accuracy by two to ten times compared to prior methods, both indoors and outdoors. Qin Tong et al. [21] presented a visual-inertial system utilizing a monocular camera and a cost-effective IMU, employing a tightly coupled nonlinear optimization approach that amalgamates pre-integrated measurements and feature observations, achieving high-precision visual-inertial odometry. Additionally, this system underwent four-degree-of-freedom pose graph optimization to enhance global consistency and remains compatible with mobile devices such as smartphones. Visual SLAM technology, primarily employing cameras as the main sensor, delivers rich environmental information at a low cost, becoming a focal point in current SLAM research. Researchers have proposed a positioning and navigation method for service robots based on visual SLAM and visual sensors [22]. The system underwent validation using the Pepper robot platform, encountering limitations in large-scale navigation due to the platform’s short-range laser and RGB depth cameras. Validation encompassed two diverse environments: a medium-sized laboratory and a hall. However, challenges persist in visual SLAM technology, including camera sensitivity to lighting, heightened environmental prerequisites, and limited adaptability. Furthermore, due to its feature-rich nature, visual SLAM demands substantial computational resources, remaining in the experimental phase without commercialization.

Laser SLAM technology, widely used in navigation, employs laser radar sensors to model the environment, generating detailed 2D grid maps crucial for subsequent localization and path planning. Google’s Cartographer [23], an open-source SLAM algorithm, plays a significant role in constructing efficient 2D maps, providing robust support for positioning and planning in robotics and autonomous vehicles. Other commonly utilized laser SLAM mapping algorithms include Gmapping and Karto SLAM. However, these 2D SLAM algorithms, adept at indoor mapping, encounter feature loss in long corridors and texture-sparse environments, thereby impacting map accuracy. Additionally, their 2D nature limits the representation of height information in real scenes, resulting in a scale information deficit. In response to these limitations, 3D laser SLAM technology emerged, employing multi-line laser radar to replicate real working environments accurately, giving rise to various 3D SLAM techniques. Lin et al. [24] proposed an enhanced LiDAR Odometry and Mapping (LOAM) algorithm, customized for narrow field-of-view and irregular laser radar. Their refinements in point selection, iterative pose estimation, and parallelization facilitated real-time odometry and map generation, surpassing the original algorithm. Shang et al. [25] introduced a tightly coupled LiDAR-inertial odometry framework. Leveraging IMU pre-integration for motion estimation, they optimized LiDAR odometry and corrected point cloud skewness. Their approach involved estimating IMU biases, integrating keyframe and sliding window strategies, enhancing accuracy, and improving real-time system performance. Xu et al. [26] devised an efficient and robust laser-inertial odometry framework. Employing a tightly coupled iterated extended Kalman filter, they merged laser radar feature points with IMU data, enabling precise navigation in high-speed and complex environments. Their novel formula for computing the Kalman gain notably boosted real-time capability. Remarkably, over 1200 valid feature points were fused in a single scan, completing all iterations of the iterated extended Kalman filter (IEKF) step within 25 milliseconds.
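As a rough illustration of the iterated update at the heart of such tightly coupled filters, the minimal sketch below relinearizes a nonlinear measurement model around the current iterate rather than only around the prior. A single scalar range measurement to a known landmark stands in for a batch of LiDAR feature residuals; the state, noise values, and the `iekf_update` helper are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

def iekf_update(x_prior, P, z, h, jac_h, R, n_iter=5):
    """One iterated-EKF measurement update (illustrative sketch):
    re-evaluate the Jacobian and gain at each iterate."""
    x = x_prior.copy()
    for _ in range(n_iter):
        H = jac_h(x)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        # Standard IEKF iterate: correct relative to the prior,
        # linearizing the measurement at the current estimate x.
        x = x_prior + K @ (z - h(x) - H @ (x_prior - x))
    H = jac_h(x)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    P_post = (np.eye(len(x)) - K @ H) @ P
    return x, P_post

# Toy example: 2D position state, one range measurement to a known
# landmark standing in for a LiDAR feature residual (values assumed).
landmark = np.array([5.0, 0.0])
h = lambda x: np.array([np.linalg.norm(x - landmark)])
jac_h = lambda x: ((x - landmark) / np.linalg.norm(x - landmark)).reshape(1, 2)

x_prior = np.array([0.4, 0.6])   # IMU-propagated prediction
P = np.eye(2) * 0.25             # prior covariance
R = np.array([[0.01]])           # range measurement noise
z = np.array([5.0])              # measured range (truth is the origin)
x_post, P_post = iekf_update(x_prior, P, z, h, jac_h, R)
print("posterior position:", x_post)
```

Re-evaluating the Jacobian and gain at each iterate handles the nonlinearity of range or point-to-plane residuals better than a single EKF linearization around the prior, which is the property such frameworks exploit.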

Yu et al. [27] devised the ACO-A* dual-layer optimization algorithm, amalgamating ant colony optimization (ACO) with A* search to tackle path planning complexities in obstacle-rich settings. This approach employs ACO for sequential progression determination toward the target, followed by fine-tuning the A* heuristic function for optimal path derivation. Modeling environments at various granularities enabled them to attain optimal paths in intricate settings. Rosas [28] introduced the membrane evolution artificial potential field (MEMEAPF) method, a fusion of membrane computing, genetic, and artificial potential field algorithms, aimed at optimizing path planning by considering minimal path length, safety, and smoothness. Surpassing APF-based strategies, this method harnessed parallel computing techniques to expedite computations, heightening its efficacy and real-time adaptability in dynamic scenarios. Addressing discrepancies in traditional local path planning frameworks, Jian et al. [29] identified potential lateral overshoot and maneuverability reduction during lane changes, impacting ADS modules and vehicle functionality. To combat these challenges, they proposed a multi-model-based framework. Comprising path planning, speed control, and scheduling layers, this structure dynamically generates paths, adjusts speed according to the path, and adapts scheduling based on environmental and vehicle conditions. Consequently, the multi-model approach rectifies limitations inherent in conventional planning methods.
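For context on the A* component that such dual-layer methods build upon, the sketch below is a minimal textbook A* search over a 2D occupancy grid with a Euclidean heuristic. The grid, start and goal cells, and 8-connected motion model are assumed for illustration; this is not the ACO-A* method of [27].

```python
import heapq
import math

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    open_heap = [(heuristic(start, goal), 0.0, start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_heap:
        _, g, cell, parent = heapq.heappop(open_heap)
        if cell in came_from:          # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:               # walk parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in ((1,0), (-1,0), (0,1), (0,-1),
                       (1,1), (1,-1), (-1,1), (-1,-1)):
            nbr = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nbr[0] < rows and 0 <= nbr[1] < cols):
                continue
            if grid[nbr[0]][nbr[1]]:   # occupied cell
                continue
            ng = g + math.hypot(dr, dc)
            if ng < g_cost.get(nbr, float("inf")):
                g_cost[nbr] = ng
                heapq.heappush(open_heap,
                               (ng + heuristic(nbr, goal), ng, nbr, cell))
    return None

# Toy 5x5 grid with an obstacle wall; layout is assumed for illustration.
grid = [[0,0,0,0,0],
        [0,1,1,1,0],
        [0,0,0,1,0],
        [1,1,0,1,0],
        [0,0,0,0,0]]
print(astar(grid, (0, 0), (4, 4)))
```

Tuning the heuristic, or seeding the search with intermediate waypoints as the ACO layer in [27] does, changes which of the many cost-equivalent grid paths the search prefers.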

1.2. Contributions and Structure

An extensive review of autonomous navigation systems for AGVs in smart factory settings reveals shortcomings in GPS, trajectory, and inertial navigation. These methods demonstrate limitations in terms of adapting to diverse scenarios and scales. While visual navigation offers intricate feature data, it demands substantial computational resources and encounters constraints in intricate pathways, varying scales, or environments with significant lighting fluctuations. Laser navigation, primarily reliant on 2D maps, lacks comprehensive information and struggles to maintain navigation precision during sudden robot posture changes.

In response, this paper introduces the Complex Scene Navigation System (CSNS). Its key contributions include: designing an adaptable AGV navigation system tailored for intricate smart factory environments; proposing a rapid 3D mapping and localization technique distinct from conventional 2D SLAM approaches, achieving heightened localization precision via 3D point cloud maps; devising a path fusion algorithm based on real-world scenarios; and validating the system’s efficacy through extensive simulation experiments and real-world vehicle deployment, achieving navigation accuracy at the centimeter level.

The paper’s structure unfolds as follows: Section 2 delineates the CSNS system framework, detailing the principles governing 3D point cloud maps, 2D grid maps, 3D point cloud localization, and integrated path planning. Section 3 encompasses both simulation and real vehicle experiments, offering a quantitative analysis of navigation accuracy. Finally, Section 4 consolidates the experimental findings and presents avenues for future research.