
ISSN: 2977-0041 | Open Access

Journal of Material Sciences and Engineering Technology

Volume 1, Issue 3

Principle of Operation of a Line Follower Robot

Muhammad Ahmad Baballe1*, Abdu Ibrahim Adamu2, Abdulkadir Shehu Bari2, and Amina Ibrahim3

1Department of Computer Engineering Technology, School of Technology, Kano State Polytechnic, Kano, Nigeria
2Department of Computer Science, Audu Bako College of Agriculture Danbatta, Kano, Nigeria
3Department of Computer Science, School of Technology, Kano State Polytechnic, Nigeria

*Corresponding author
Muhammad Ahmad Baballe, Department of Computer Engineering Technology, School of Technology, Kano State Polytechnic, Kano, Nigeria.

A Line Follower Robot (LFR) is a rudimentary autonomously guided robot that follows a line drawn on the ground, detecting either a white line on a dark surface or a dark line on a white surface. Working with an LFR is quite interesting. In this study, we show how to construct a black-line follower robot using an Arduino Uno and a few readily available parts.

Keywords: Arduino Uno, Path Following, Avoiding Obstacle, Mobile Robot

Introduction
Mobile robots have drawn the attention of many academics in recent years because of their agility, adaptability, and capacity to be used in a variety of challenging missions, particularly with regard to autonomous navigation in warehouses or restricted regions [1-10]. There is a rising demand for complex applications nowadays, where the working environment includes humans, moving robots, or unexpected obstacles [11-13]. In this setting, robots and people are regarded as mobile obstacles that make up the dynamic environment. The camera-based navigation approach has since generated a lot of interest, since it overcomes the drawbacks of the conventional line-detection method while also enabling path optimization. Furthermore, because the working space is typically a warehouse or another confined area, the robot must operate flexibly, following a predefined trajectory while avoiding moving obstacles that may appear, without deviating from a safe planned trajectory [14-23]. Because the environment is assumed static and mapped, with obstacle locations known in advance, many traditional approaches, such as RRT*, A*, the Visibility Graph, and the Fast Marching Tree, focus primarily on autonomous path planning for mobile robots. In addition, complex algorithms have been used to prevent collisions, including Adaptive Genetic, Bacterial Evolutionary, Predictive Behavior, and Particle Swarm algorithms [24-37]. With those approaches, the dynamic environment map must be built and updated manually, which results in low forecast accuracy. The obstacle avoidance approach based on fuzzy logic control develops specific rules in accordance with prior information. Recently, intelligent control algorithms have also been applied to obstacle avoidance. The fuzzy logic control approach has strong robustness, real-time performance, and less reliance on the environment [38-53].
However, there is a symmetry phenomenon that cannot be explained. The neural-network-based obstacle avoidance technique creates controllers based on the locations of obstacles, but gathering data and training the network to discover a path take a lot of effort. When knowledge of an obstacle is lacking or unobtainable, conventional obstacle avoidance algorithms are useless, and designing an intelligent controller requires knowledge or experience [54,55]. Contrary to other artificial intelligence algorithms, reinforcement learning (RL) is a learning technique that does not need any rules [55-58]. RL is a machine learning technique that acts on the environment and uses the environment's feedback as input. One of the most widely used RL algorithms is Q-learning, a value-based method in which the algorithm examines the environment through a Q-value function that is updated over time [57-60]. Recently, intelligent control and Q-learning have also been combined [61-65]. These studies, however, are limited to simulations and trials using straightforward static objects. Additionally, there is no clear discussion of how to construct the controller for a robot to follow a processed (virtual) path. As is well known, Q-learning relies on a trial-and-error learning process in which the agent experiences several failures before finally succeeding, which is costly in both money and time. This means that training is challenging to carry out in a real environment and is frequently done in simulation. It is clear that robot applications using Q-learning can be trained in virtual environments before being applied in the actual world [63-68]. To overcome these issues, the training environment for the RL agent in this research is established in a virtual environment, which simulates the actual scenario and lets the user gather numerous responses in diverse locations.
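The Q-value update referred to above can be made concrete. The following is a minimal sketch of the one-step Q-learning rule, Q(s,a) ← Q(s,a) + α[r + γ·max Q(s',·) − Q(s,a)]; the table sizes, learning rate, and discount factor are our illustrative assumptions, not values from the cited works.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>

// One-step Q-learning update:
//   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
// State/action counts below are illustrative assumptions.
constexpr int NUM_STATES = 4;
constexpr int NUM_ACTIONS = 2;
using QTable = std::array<std::array<double, NUM_ACTIONS>, NUM_STATES>;

// Best achievable value from a state under the current table.
double maxQ(const QTable& q, int state) {
    return *std::max_element(q[state].begin(), q[state].end());
}

// Apply the temporal-difference update for one observed transition (s, a, r, s').
void qUpdate(QTable& q, int s, int a, double reward, int sNext,
             double alpha = 0.5, double gamma = 0.9) {
    q[s][a] += alpha * (reward + gamma * maxQ(q, sNext) - q[s][a]);
}
```

Starting from an all-zero table, one rewarded transition raises the corresponding entry, and later updates propagate that value backward through the table over time, which is the "updated over time" behavior described above.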
The contribution of this study is that:

  • Unlike traditional and other artificial-intelligence algorithms, the obstacle avoidance methodology for moving obstacles in this paper uses no prior dataset for training and no prior experience for designing the controller. 
  • The training data are transferred to the robot and work in real time in the real application. 
  • In the experiment, RL proved to have better performance than an intelligent control algorithm in terms of total time and errors [69].

How a Line Follower Robot Works
Line follower robots (LFRs), as previously mentioned, follow lines, and to do so the robot must first detect the line. The question at hand is how to implement the LFR's line-detecting technique. We know that a black surface absorbs most incident light while a white surface reflects it, so reflection is maximal on a white surface and minimal on a black one. We exploit this property of light to find the line. Light can be detected with an IR sensor or an LDR (light-dependent resistor); we chose the IR sensor for this project because of its superior precision. Two IR sensors are placed on the left and right sides of the robot to detect the line, and the robot is positioned on the line so that the line runs directly between the two sensors.

An infrared sensor consists of two components: a transmitter and a receiver. The transmitter is simply an IR LED that produces the signal, and the receiver is a photodiode that detects the signal produced by the transmitter. When infrared light from the sensor strikes a black part of the surface, it is absorbed and the sensor produces a low output; when it strikes a white part, it reflects back, is picked up by the receiver, and produces an analog signal.

Using this technique, we move the robot by driving the wheels, which are connected to motors managed by a microcontroller. Call the two sets of motors typically found in line-following robots the left motor and the right motor; each rotates based on the signals from the left and right sensors, respectively. The robot must execute four sets of motions: moving forward, turning left, turning right, and stopping.
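The detection principle just described reduces to a simple threshold test. The sketch below is plain C++ so it can be checked off-board (on the Arduino the reading would come from analogRead); it assumes a 10-bit analog value that rises with reflected light, and the threshold of 500 is our illustrative value, to be calibrated for the actual sensors and track.

```cpp
#include <cassert>

// Classify one IR sensor reading as "on the black line" or "on the white surface".
// Assumptions: a 10-bit analog value (0-1023) that rises with reflected light,
// so white reads high and black reads low; the 500 threshold is illustrative
// and must be calibrated on the real track.
constexpr int THRESHOLD = 500;

bool onBlackLine(int analogReading) {
    return analogReading < THRESHOLD;  // little reflected light -> black surface
}
```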

Moving Forward: In this scenario, both sensors are on the white surface and the line is between them, so the robot should advance, i.e., both motors should rotate so that the robot moves forward. Strictly speaking, due to the arrangement of the motors in our configuration, each must rotate in the opposite physical direction; but for ease of explanation, we refer to both motors as rotating forward.

Turning Left: In this instance, the left sensor is on top of the dark line and detects it, sending a signal to the microcontroller, while the right sensor is on the white portion. Because the left sensor is sending a signal, the robot should turn to the left. The left motor therefore rotates backward while the right motor rotates forward, and the robot pivots to the left.

Turning Right: This case mirrors the left-turn case: only the right sensor detects the black line, so the robot should turn to the right. The left motor rotates forward while the right motor rotates backward, and the robot pivots to the right.

Stopping: In this instance, both sensors detect the black line simultaneously because both are on top of it. The microcontroller is programmed to treat this as a signal to stop, so both motors are turned off and the robot halts [70].
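The four cases above can be condensed into a single decision function. This is a sketch rather than the authors' code: the `Motion` enum and `decide` names are ours, and on the actual robot each returned value would be translated into the motor commands described above.

```cpp
#include <cassert>

// The four motions described above.
enum class Motion { Forward, TurnLeft, TurnRight, Stop };

// Map the two sensor states to a motion; true means that sensor sees the black
// line. Left on the line -> pivot left (left motor backward, right forward);
// right on the line -> pivot right; both -> stop; neither -> drive forward.
Motion decide(bool leftOnLine, bool rightOnLine) {
    if (leftOnLine && rightOnLine) return Motion::Stop;
    if (leftOnLine)  return Motion::TurnLeft;
    if (rightOnLine) return Motion::TurnRight;
    return Motion::Forward;
}
```

In an Arduino loop, something like `decide(onBlackLine(analogRead(LEFT_PIN)), onBlackLine(analogRead(RIGHT_PIN)))` (pin names hypothetical) would be evaluated each iteration and mapped to digitalWrite calls on the motor driver pins.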

Conclusion
We have reviewed many articles on line-following robots and obstacle detection in this paper. The basic operation and implementation strategy of the line follower robot are also addressed.


References
  1. Mora A, Prados A, Mendez A, Barber R, Garrido S (2022) Sensor Fusion for Social Navigation on a Mobile Robot Based on Fast Marching Square and Gaussian Mixture Model. Sensors 22: 8728. 
  2. Sousa RM, Aranibar DB, Amado JD, Escarcina REP, Trindade RMP (2022) A New Approach for Including Social Conventions into Social Robots Navigation by Using Polygonal Triangulation and Group Asymmetric Gaussian Functions. Sensors 22: 4602. 
  3. Santos LC, Aguiar AS, Santos FN, Valente A, Petry M (2020) Occupancy Grid and Topological Maps Extraction from Satellite Images for Path Planning in Agricultural Robots. Robotics 9: 77. 
  4. Gao Y, Bai C, Fu R, Quan Q (2023) A non-potential orthogonal vector field method for more efficient robot navigation and control. Robotics and Autonomous Systems 159: 104291. 
  5. Santilli M, Mukherjee P, Williams RK, Gasparri A (2022) Multirobot Field of View Control with Adaptive Decentralization. IEEE Transactions on Robotics 38: 2131-2150. 
  6. Zhao Y, Wang T, Bi W (2019) Consensus Protocol for Multiagent Systems with Undirected Topologies and Binary-Valued Communications. IEEE Transactions on Automatic Control 64: 206-221.
  7. Gadjov D, Pavel L (2019) A Passivity-Based Approach to Nash Equilibrium Seeking Over Networks. IEEE Transactions on Automatic Control 64: 1077-1092. 
  8. Hua Y, Dong X, Li Q, Ren Z (2018) Distributed Fault-Tolerant Time-Varying Formation Control for Second-Order Multi-Agent Systems with Actuator Failures and Directed Topologies. IEEE Transactions on Circuits and Systems II: Express Briefs 65: 774-778. 
  9. Carrio A, Tordesillas J, Vemprala S, Saripalli S, Campoy P, et al. (2020) Onboard Detection and Localization of Drones Using Depth Maps. IEEE Access 8: 30480-30490. 
  10. Hanh LD, Cong VD (2023) Path Following and Avoiding Obstacle for Mobile Robot Under Dynamic Environments Using Reinforcement Learning. Journal of Robotics and Control (JRC) 4: 157-164. 
  11. Çavaş M, Ahmad MB (2019) A Review on Spider Robotic System. International Journal of New Computer Architectures and their Applications (IJNCAA) 9: 19-24.
  12. Ahmad MB, Muhammad AS (2020) A general review on advancement in the robotic system. Artificial & Computational Intelligence 1-7. 
  13. Baballe MA, Bello MI, Hussaini A, Musa US (2022) Pipeline Inspection Robot Monitoring System. Journal of Advancement in Robotics 9. 
  14. Seo H, Cho G, Kim S-J, Chun J-H, Choi J (2022) Multievent Histogramming TDC With Pre-Post Weighted Histogramming Filter for CMOS LiDAR Sensors. IEEE Sensors Journal 22: 22785-22798. 
  15. Tahir Z, Qureshi AH, Ayaz Y, Nawaz R (2018) Potentially guided bidirectionalized RRT* for fast optimal path planning in cluttered environments. Robotics and Autonomous Systems 108: 13- 27. 
  16. Khalilullah KMI, Ota S, Yasuda T, Jindai M (2018) Road area detection method based on DBNN for robot navigation using single camera in outdoor environments. Industrial Robot 45: 275-286. 
  17. Lei G, Yao R, Zhao Y, Zheng Y (2021) Detection and Modeling of Unstructured Roads in Forest Areas Based on Visual-2D Lidar Data Fusion. Forests 12: 820. 
  18. Amorós F, Payá L, Mayol-Cuevas W, Jiménez LM, Reinoso O (2020) Holistic Descriptors of Omnidirectional Color Images and Their Performance in Estimation of Position and Orientation. IEEE Access 8: 81822-81848. 
  19. Yao AWL, Chen HC (2022) An Intelligent Color Image Recognition and Mobile Control System for Robotic Arm. International Journal of Robotics and Control Systems 2: 97-104. 
  20. Hassani I, Ergui I, Rekik C (2022) Turning Point and Free Segments Strategies for Navigation of Wheeled Mobile Robot. International Journal of Robotics and Control Systems 2: 172-186. 
  21. Farid G, Cocuzza S, Younas T, Razzaqi AA, Wattoo WA, et al. (2022) Modified A-Star (A*) Approach to Plan the Motion of a Quadrotor UAV in Three-Dimensional Obstacle-Cluttered Environment. Appl Sci 12: 5791. 
  22. Lin T-Y, Wu K-R, Chen Y-S, Shen Y-S (2022) Collision-Free Motion Algorithms for Sensors Automated Deployment to Enable a Smart Environmental Sensing-Net. IEEE Transactions on Automation Science and Engineering 19: 3853-3870. 
  23. Li K, Hu Q, Liu J (2021) Path planning of mobile robot based on improved multiobjective genetic algorithm. Wireless Communications and Mobile Computing 2021: 1-12. 
  24. Hao K, Zhao J, Wang B, Liu Y, Wang C (2021) The application of an adaptive genetic algorithm based on collision detection in path planning of mobile robots. Computational Intelligence and Neuroscience 2021: 1-20. 
  25. Noreen I, Khan K, Asghar K, Habib Z (2019) A path-planning performance comparison of RRT*-AB with MEA* in a 2-dimensional environment. Symmetry 11: 945-960. 
  26. Wang J, Chi W, Li C, Wang C, Meng MQ-H (2020) Neural RRT*: Learning-Based Optimal Path Planning. IEEE Transactions on Automation Science and Engineering 17: 1748-1758. 
  27. Bai Y, Li G, Li N (2022) Motion Planning and Tracking Control of Autonomous Vehicle Based on Improved A* Algorithm. Journal of Advanced Transportation 2022: 1-14. 
  28. Min H, Xiong X, Wang P, Yu Y (2021) Autonomous driving path planning algorithm based on improved algorithm in unstructured environment. Proceedings of the Institution of Mechanical Engineers - Part D: Journal of Automobile Engineering 235: 513-526. 
  29. Fu B, Chen L, Zhou Y, Zheng D, Wei Z, et al. (2018) An improved A* algorithm for the industrial robot path planning with high success rate and short length. Robotics and Autonomous Systems 106: 26-37. 
  30. Latip NBA, Omar R, Debnath SK (2017) Optimal path planning using equilateral spaces-oriented visibility graph method. Intl J Electr Comput Eng 7: 3046-3051. 
  31. Janson L, Schmerling E, Clark A, Pavone M (2015) Fast marching tree: A fast marching sampling-based method for optimal motion planning in many dimensions. The International journal of robotics research 34: 883-921. 
  32. Montiel O, Orozco-Rosas U, Sepúlveda R (2015) Path planning for mobile robots using bacterial potential field for avoiding static and dynamic obstacles. Expert Syst Appl 42: 5177-5191. 
  33. Karami AH, Hasanzadeh M (2015) An adaptive genetic algorithm for robot motion planning in 2D complex environments. Computers & Electrical Engineering 43: 317-329. 
  34. Kamil F, Hong TS, Khaksar W, Moghrabiah MY, Zulkifli N, et al. (2017) New robot navigation algorithm for arbitrary unknown dynamic environments based on future prediction and priority behavior. Expert Syst Appl 86: 274-291. 
  35. Alajlan M, Chaari I, Koubaa A, Bennaceur H, Ammar A, et al. (2016) Global robot path planning using GA for large grid maps: Modelling performance and experimentation. International Journal of Robotics and Automation 31: 484-495. 
  36. Zhao Y, Liu X, Wang G, Wu S, Han S (2020) Dynamic Resource Reservation Based Collision and Deadlock Prevention for Multi AGVs. IEEE Access 8: 82120-82130. 
  37. Yue L, Fan H (2022) Dynamic Scheduling and Path Planning of Automated Guided Vehicles in Automatic Container Terminal. IEEE/CAA Journal of Automatica Sinica 9: 2005-2019. 
  38. Xiao H, Wu X, Qin D, Zhai A (2020) A Collision and Deadlock Prevention Method with Traffic Sequence Optimization Strategy for UGN-Based AGVS. IEEE Access 8: 209452-209470. 
  39. Faisal M, Hedjar R, Al-Sulaiman M, Al-Mutib K (2013) Fuzzy Logic Navigation and Obstacle Avoidance by a Mobile Robot in an Unknown Dynamic Environment. International Journal of Advanced Robotic Systems 10: 1. 
  40. Shitsukane A, Cheruiyot W, Otieno C, Mvurya M (2018) Fuzzy Logic Sensor Fusion for Obstacle Avoidance Mobile Robot. 2018 IST-Africa Week Conference (IST-Africa) 1-8. 
  41. Chiang S-Y (2017) Vision-based obstacle avoidance system with fuzzy logic for humanoid robots. The Knowledge Engineering Review 32: 9. 
  42. Ayub S, Singh N, Hussain MZ, Ashraf M, Singh DK, et al. (2022) Hybrid approach to implement multi-robotic navigation system using neural network, fuzzy logic, and bio-inspired optimization methodologies. Computational Intelligence 1-15. 
  43. Farah K, Mohammed MY (2021) Multilayer Decision-Based Fuzzy Logic Model to Navigate Mobile Robot in Unknown Dynamic Environments. Fuzzy Information and Engineering 14: 51-73. 
  44. Haider HM, Wang Z, Khan AA, Ali H, Zeng H, et al. (2022) Robust mobile robot navigation in cluttered environments based on hybrid adaptive neuro-fuzzy inference and sensor fusion. Journal of King Saud University - Computer and Information Sciences 34: 9060-9070. 
  45. Szili FA, Botzheim J, Nagy B (2022) Bacterial Evolutionary Algorithm-Trained Interpolative Fuzzy System for Mobile Robot Navigation. Electronics 11: 1734. 
  46. Zubair S, ABU S, Ruzairi R, Andi A, Mohd H (2023) Non-Verbal Human-Robot Interaction Using Neural Network for The Application of Service Robot. IIUM Engineering Journal 24: 301-318. 
  47. Syed UA, Kunwar F, Iqbal M (2014) Guided autowave pulse coupled neural network (GAPCNN) based real time path planning and an obstacle avoidance scheme for mobile robots. Robotics and Autonomous Systems 62: 474-486. 
  48. Chi K-H, Lee M-FR (2011) Obstacle avoidance in mobile robot using Neural Network. 2011 International Conference on Consumer Electronics, Communications and Networks (CECNet) 5082-5085. 
  49. Medina-Santiago A, Camas-Anzueto JL, Vazquez-Feijoo JA, Hernández-de León HR, Mota-Grajales R (2014) Neural Control System in Obstacle Avoidance in Mobile Robots Using Ultrasonic Sensors. Journal of Applied Research and Technology 12: 104-110. 
  50. Rafai ANA, Adzhar N, Jaini NI (2022) A Review on Path Planning and Obstacle Avoidance Algorithms for Autonomous Mobile Robots. Journal of Robotics 2022: 1-14. 
  51. Qu F, Yu W, Xiao K, Liu C, Liu W (2022) Trajectory Generation and Optimization Using the Mutual Learning and Adaptive Ant Colony Algorithm in Uneven Environments. Applied Sciences 12: 4629. 
  52. Muhammad A, Ali MAH, Shanono IH (2020) Path Planning Methods for Mobile Robots: A systematic and Bibliometric Review. ELEKTRIKA- Journal of Electrical Engineering 19: 14- 34. 
  53. Zhu D, Tian C, Sun B, Luo C (2019) Complete Coverage Path Planning of Autonomous Underwater Vehicle Based on GBNN Algorithm. Journal of Intelligent & Robotic Systems 94: 237-249. 
  54. Sun C, He W, Hong J (2017) Neural Network Control of a Flexible Robotic Manipulator Using the Lumped Spring-Mass Model. IEEE Transactions on Systems, Man, and Cybernetics: Systems 47: 1863-1874. 
  55. Kumar A, Kumar PB, Parhi DR (2018) Intelligent Navigation of Humanoids in Cluttered Environments Using Regression Analysis and Genetic Algorithm. Arabian Journal for Science and Engineering 43: 7655-7678. 
  56. Abbas AK, Mashhadany YA, Hameed MJ, Algburi S (2022) Review of Intelligent Control Systems with Robotics. Indonesian Journal of Electrical Engineering and Informatics (IJEEI) 10: 734-753. 
  57. Naeem M, Rizvi STH, Coronato A (2020) A Gentle Introduction to Reinforcement Learning and its Application in Different Fields. IEEE Access 8: 209320-209344. 
  58. Naeem M, Coronato A, Ullah Z, Bashir S, Paragliola G (2022) Optimal User Scheduling in Multi Antenna System Using Multi Agent Reinforcement Learning. Sensors 22: 8278. 
  59. Kumwilaisak W, Phikulngoen S, Piriyataravet J, Thatphithakkul N, Hansakunbuntheung C (2022) Adaptive Call Center Workforce Management with Deep Neural Network and Reinforcement Learning. IEEE Access 10: 35712-35724. 
  60. Alwarafy A, Abdallah M, Çiftler BS, Al-Fuqaha A, Hamdi M (2022) The Frontiers of Deep Reinforcement Learning for Resource Management in Future Wireless HetNets: Techniques, Challenges, and Research Directions. IEEE Open Journal of the Communications Society 3: 322-365. 
  61. Jang B, Kim M, Harerimana G, Kim JW (2019) Q-Learning Algorithms: A Comprehensive Classification and Applications. IEEE Access 7: 133653-133667. 
  62. Sutisna N, Ilmy AMR, Syafalni L, Mulyawan R, Adiono T (2023) FARANE-Q: Fast Parallel and Pipeline Q-Learning Accelerator for Configurable Reinforcement Learning SoC. IEEE Access 11: 144-161. 
  63. Sutisna N, Arifuzzaki ZN, Syafalni I, Mulyawan R, Adiono T (2022) Architecture Design of Q-Learning Accelerator for Intelligent Traffic Control System. 2022 International Symposium on Electronics and Smart Devices (ISESD) 1-6. 
  64. Chandrakar A, Paliwal P (2023) An Intelligent Mechanism for Utility and Active Customers in Demand Response Using Single and Double Q Learning Approach. Smart Energy and Advancement in Power Technologies 926: 397-413. 
  65. Raajan J, Srihari PV, Satya JP, Bhikkaji B, Pasumarthy R (2020) Real Time Path Planning of Robot using Deep Reinforcement Learning. IFAC-PapersOnLine 53: 15602-15607. 
  66. Cimurs R, Lee JH, Suh IH (2020) Goal-Oriented Obstacle Avoidance with Deep Reinforcement Learning in Continuous Action Space. Electronics 9: 411. 
  67. Huang L, Qu H, Fu M, Deng W (2018) Reinforcement Learning for Mobile Robot Obstacle Avoidance Under Dynamic Environments. PRICAI 2018: Trends in Artificial Intelligence 11012. 
  68. Pambudi AD, Agustinah T, Effendi R (2019) Reinforcement Point and Fuzzy Input Design of Fuzzy Q-Learning for Mobile Robot Navigation System. 2019 International Conference of Artificial Intelligence and Information Technology (ICAIIT) 186-191. 
  69. Wen S, Hu X, Li Z, Lam HK, Sun F, et al. (2020) NAO robot obstacle avoidance based on fuzzy Q-learning. Industrial Robot 47: 801-811.
  70. https://circuitdigest.com/microcontroller-projects/arduino-uno-line-follower-robot.