Research Articles
Corresponding author: ga259965@uaeh.edu.mx

Abstract: This project implements a Parrot Bebop 2 quadcopter as an autonomous line follower. The application is developed on the Robot Operating System (ROS), which, through scripts, establishes a Wi-Fi communication link between the quadcopter and the computer that performs the line detection and flight control tasks. The script is written in Python 3.7 using the Atom text editor. The following features are integrated in the script: a control algorithm for the quadcopter flight, a line detector that operates on the video acquired from the quadcopter, and a driver that implements a joystick as a quadcopter manipulation method and safety element.
Keywords: Line follower, Quadcopter, Python, ROS.
1. Introduction
Quadcopters are very useful tools in research, commercial and professional fields. These devices are used for tasks such as patrolling, military operations, monitoring, navigation, mapping and even advertising, owing to the versatility of their implementation (Piotr Kardasz and Zarzycki, 2016). Quadcopters can be programmed with routines that fulfill some activity, but to achieve this goal a control algorithm is needed. This algorithm not only minimizes the error margin but also guarantees the safety of the implementation under circumstances such as unexpected air currents (Johannes Meyer and von Stryk, 2012).

A flight control algorithm (depending on the application and purpose of the quadcopter) tends to be quite specific. In the line follower case, the quadcopter has to be able to follow a drawn line that indicates a fixed path. There are different ways to do so, depending on how the quadcopter acquires information about its position. One example is to use the video camera integrated in the quadcopter and then, through digital image processing, obtain the relationship between the line to follow and some reference point. This approach is especially applicable to patrolling, where the advantages of a wider observation field and an unmanned, remotely controlled device tend to be more profitable (Alexandre S. Brandao and Soneguetti, 2015).
Conventionally, line followers are limited to robots that move on the floor, which implement infrared sensors that detect the absence or existence of a line to follow (Pakdaman and Sanaatiyan, 2009) and do their job with microcontrollers, operational amplifiers and mechanical switches (M. Zafri Baharuddin and Chuan, 2016). This comparison highlights the advantage of using a quadcopter: since it is not attached to the floor, it can avoid obstacles and cover a wider range of movements in unforeseen events.
The quadcopter used in this work is made by the Parrot company. The Parrot Bebop 2 model (Figure 1) has a 14-megapixel HD 1080p video camera with a fisheye lens able to focus on 3 axes. It also has a digital stabilization system, ideal for coping with winds of up to 60 km/h, and, compared with other models from the same company, its price is inexpensive (Parrot, 2016).
The Parrot Bebop 2 can be handled in different ways, the most popular being the smartphone applications developed officially by the manufacturer. In this work the quadcopter is manipulated through the software development system for robots, ROS (Koubaa, 2017). This platform provides multiple open-source tools for the manipulation and control of this kind of device. It currently has a driver called bebop_autonomy for the Parrot Bebop 1.0 and Parrot Bebop 2.0 quadcopter models, which is based on the official SDK of the Parrot company, called ARDroneSDK3 (Bristeau et al., 2011). bebop_autonomy is open source and can be downloaded from the GitHub platform.
The bebop_autonomy driver uses the ROS communication system; therefore, its control is based on the elaboration of scripts that allow us to obtain and manipulate the information that the quadcopter publishes through topics. The recorded video is the most important feature of this work, since the information in every frame is reinterpreted and processed to obtain an input for the control algorithm that has been developed (Monajjemi, 2015).
ROS is only available for certain operating systems; the version used in this case was ROS Kinetic (Koubaa, 2017) on the Ubuntu 16.04 LTS operating system. This platform works mainly with two programming languages: C++ and Python. The scripts elaborated in this work were written in Python 3.7. One advantage of using scripts is compatibility with language libraries focused on digital image processing, such as OpenCV (Bradski and Kaehler, 2008), which is used in this work. Within the script, a driver for controlling the quadcopter through a joystick was added, in order to provide a method of manipulating the quadcopter as well as a safety control element in case of any unforeseen event during the device's flight.
The document is organized as follows: Section 2 presents a general description of the implemented system and some considerations concerning the set-up of the experimental plant. Section 3 describes the structure of the developed script, divided into four subsections: the communication system, joystick event reading, digital image processing and the implementation of the control algorithm. The paper ends with some conclusions.
2. System’s structure
Figure 2 shows the system diagram. It represents the integrated elements as well as the relationships among them. The inputs of the system are the video camera and the joystick gamepad. It is important to note that the video camera input has priority over the joystick input as far as piloting is concerned. However, the joystick remains a fundamental element, since the communication node is initialized by one of its buttons, as are some commands such as taking off and landing the quadcopter. The outputs obtained from the quadcopter are the video camera focus position and the rotor speed for each propeller, given as 6 parameters divided into 2 categories: the linear speed and the angular speed (Figure 3). Both categories are given in Cartesian coordinates in YAML format (an acronym for "YAML Ain't Markup Language"), a standard for serializing package information for programming languages (Ben-Kiki et al., 2005).
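For illustration, the speed command is a geometry_msgs/Twist message; echoed from the quadcopter's velocity topic, its YAML rendering looks as follows (the numeric values shown are arbitrary examples, not measured data):

```yaml
# geometry_msgs/Twist as rendered by `rostopic echo`
linear:
  x: 0.1    # forward/backward speed
  y: 0.0    # left/right speed
  z: 0.0    # up/down speed
angular:
  x: 0.0
  y: 0.0
  z: 0.2    # yaw rotation speed
```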


2.1. Set-up and security elements
Throughout the performance and behavior tests of the quadcopter, a mechanism to control and pilot the quadcopter was needed; in this case an Xbox 360 joystick was employed (Ikeda et al., 2012). The video recorded by the quadcopter is visualized through a window in the Ubuntu environment, which is a direct reading of the quadcopter's /bebop/image_raw topic. All tests were performed in isolated environments in order to safeguard the physical integrity of third parties, as well as that of the quadcopter against unforeseen events such as air currents exceeding its self-stabilization capacity. For testing, a line approximately 25 meters long was drawn with blue tape on a surface without unevenness.
3. Script structure
The script consists of 3 features. The first is the ROS communication method, which allows communication between the ROS core and the quadcopter, enabling the reading and writing of topics. The second feature is the manipulation of the quadcopter with the joystick; in this part the script identifies the characteristics of the joystick as well as its input parameters, which are interpolated through a function before being published to the movement speed and camera position topics of the quadcopter. The script also assigns the quadcopter's take-off, landing and camera initialization commands to the joystick buttons. Although this can be done with command lines, the immediate response of the joystick provides an extra safety measure for piloting the quadcopter. Finally, the third feature is the digital image processing of the video recorded by the quadcopter and the implementation of the control algorithm. In this part, the image is obtained from the quadcopter's /bebop/image_raw topic, then identified and manipulated by the script in order to process the information about the line to follow and make decisions that drive the movement speed of the quadcopter.
3.1. ROSc communication system
The script was written with the qualities of a node within the ROS core. To achieve this, the rospy library, which is part of the ROS repositories (Quigley et al., 2015), is required. The script is then able to interact with the topics available within the ROS core at that moment. Figure 4 shows the initialization of the script as a ROS node. Figure 5 shows the publisher methodology with 4 different target topics: landing, taking off, camera control and movement speed of the quadcopter. These four topics are the only ones used as far as publication is concerned. Figure 6 shows the initialization of the subscriber to the quadcopter's video camera topic; this element is fundamental, since it is responsible for obtaining the image from the quadcopter.
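A minimal sketch of this node structure follows. The topic names are those exposed by the bebop_autonomy driver; the node name and callback are illustrative, and the ROS imports are deferred into the function so the sketch can be inspected without a ROS installation:

```python
# Topics exposed by the bebop_autonomy driver
TOPICS = {
    "takeoff": "/bebop/takeoff",
    "land": "/bebop/land",
    "camera": "/bebop/camera_control",
    "velocity": "/bebop/cmd_vel",
    "video": "/bebop/image_raw",
}

def image_callback(msg):
    """Placeholder: frame processing happens here (Section 3.3)."""
    pass

def build_node():
    """Initialize the script as a ROS node with the four publishers and
    the image subscriber. Requires a running roscore and the driver."""
    import rospy
    from std_msgs.msg import Empty
    from geometry_msgs.msg import Twist
    from sensor_msgs.msg import Image

    rospy.init_node("line_follower", anonymous=True)
    publishers = {
        "takeoff": rospy.Publisher(TOPICS["takeoff"], Empty, queue_size=1),
        "land": rospy.Publisher(TOPICS["land"], Empty, queue_size=1),
        "camera": rospy.Publisher(TOPICS["camera"], Twist, queue_size=1),
        "velocity": rospy.Publisher(TOPICS["velocity"], Twist, queue_size=1),
    }
    subscriber = rospy.Subscriber(TOPICS["video"], Image, image_callback)
    return publishers, subscriber

if __name__ == "__main__":
    build_node()
```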



3.2. Joystick event reading
The script recognizes the joystick device by reading a file within the Ubuntu operating system, hosted under the /dev/input directory (Nguyen, 2003), which consists of a structure that records the events performed on the joystick. The ioctl function of the fcntl library (Stevens et al., 2008) of Unix-based operating systems (W. Richard Stevens and Rudoff, 2004) allows the reading and recognition of the device through responses generated by the operations defined for ioctl. With this function the name of the device and the number and mapping of the joystick's buttons and axes are obtained. Both the axes and the buttons have an alias as a hexadecimal number; for practicality these numbers are stored in a dictionary within the script. The dictionary stores the hexadecimal numbers for the buttons and axes of the most common generic controllers that can be used within the operating system (DAI and SHU, 2008).


Each time there is an event (pressing a button or moving an axis), the script reads and interprets the corresponding information, based on 4 parameters: time, value, type and number. Time corresponds to the moment at which the event occurred. Value identifies, in the case of an axis, the stick position in a range of -32767 to 32767 and, in the case of a button, 1 if it is pressed and 0 if it is not. Type identifies what kind of event it was: axis, button or initialization; initialization events show information about the joystick status at the moment the driver starts running. Finally, number identifies which button or axis changed (DAI and SHU, 2008).
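On Linux, each record read from the joystick device file is an 8-byte structure with exactly these four fields. A sketch of the decoding (the struct layout and event-type constants follow the standard Linux joystick API; the function names are illustrative):

```python
import struct

# Linux joystick event: __u32 time, __s16 value, __u8 type, __u8 number
JS_EVENT_FORMAT = "IhBB"
JS_EVENT_SIZE = struct.calcsize(JS_EVENT_FORMAT)  # 8 bytes

JS_EVENT_BUTTON = 0x01  # button press/release
JS_EVENT_AXIS = 0x02    # stick movement
JS_EVENT_INIT = 0x80    # ORed into type for initial-state events

def decode_event(buf):
    """Unpack one raw joystick event into (time, value, type, number)."""
    time_ms, value, ev_type, number = struct.unpack(JS_EVENT_FORMAT, buf)
    return time_ms, value, ev_type, number

def read_events(path="/dev/input/js0"):
    """Yield decoded events from the device file (requires a joystick)."""
    with open(path, "rb") as dev:
        while True:
            buf = dev.read(JS_EVENT_SIZE)
            if len(buf) < JS_EVENT_SIZE:
                break
            yield decode_event(buf)
```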

Three buttons were assigned to control the quadcopter (Figure 7): the "X" button publishes to the landing topic, the "B" button publishes to the take-off topic and the "mode" button starts the subscription to the video camera as well as the image processing and control algorithm of the quadcopter (Figure 8). If the "mode" button has not been pressed, the quadcopter can be piloted with the following axes: the x axis controls the linear speed in y, the y axis controls the linear speed in x, the rx axis controls the angular velocity in z and the ry axis controls the linear velocity in z (Figure 9). It is important to mention that the values obtained from the joystick are first interpolated into the range allowed for publishing the quadcopter's operating speeds; this is done with an interpolation function that returns the conditioned value (Figure 10).
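The interpolation is a linear mapping from the raw joystick range to the speed range accepted by the driver. A sketch, assuming a normalized output range of -1.0 to 1.0 (the function name and default output range are illustrative):

```python
JOY_MIN, JOY_MAX = -32767, 32767  # raw axis range reported by the driver

def interpolate(value, out_min=-1.0, out_max=1.0):
    """Linearly map a raw joystick axis value to the speed range."""
    span_in = JOY_MAX - JOY_MIN
    span_out = out_max - out_min
    return out_min + (value - JOY_MIN) * span_out / span_in
```

The same function can serve all four axes, with per-axis output bounds if a slower response is desired on some of them.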

3.3. Digital image processing
The control algorithm begins the moment the "mode" button is pressed. The button starts the node's subscription to the /bebop/image_raw topic. This subscription uses a "callback" function where the image processing takes place. OpenCV offers different tools as far as image processing is concerned. With the help of CvBridge, an exception handler is defined; this handler notifies of any event that prevents the execution of an OpenCV function. In order to process the image obtained from the /bebop/image_raw topic, it must be converted from the ROS message format to an OpenCV object; this is achieved with the CvBridge imgmsg_to_cv2 function (Martinez and Fernández, 2013).

The default setting points the quadcopter's camera at the area in front of it, so, to visualize the ground where the line to be detected is located, the value -83 has to be published to the /bebop/camera_control topic. This modifies the angular "y" camera position and consequently the visualized area. This is done only once and preferably not modified afterwards. The camera has a position stabilizer which can move the camera; therefore, if the instruction to look toward the ground is kept constant, the stabilizer is expected not to move the camera to another, unwanted position.
By default, the size of the image obtained is 1920 x 1080 pixels. This resolution is not suitable for image processing, since the computation time per frame is significantly larger, so the image is resized to a resolution of 160 x 120 pixels. Another library used in the process is NumPy, which is used for the creation and manipulation of matrices. The recorded video, after being converted into an OpenCV object, becomes a three-dimensional array that can be interpreted by NumPy (Oliphant, 2006).
There are different color detection algorithms; the HSV (Hue-Saturation-Value) model is ideal for color detection, since a range can be established around the hue of the color to be detected, which remains the same despite changes in saturation or brightness (Deswal and Sharma, 2014). The line to be detected is marked by a blue tape. To determine the HSV range of the blue color of the tape itself, it was necessary to take samples of the blue tape with a camera at different light exposures and then convert the RGB (Red-Green-Blue) values into HSV parameters.
With the cv.inRange function a mask matrix is obtained; it contains the information corresponding to the blue color range of the line to be detected. Then, with the cv.bitwise_and function, the blue line to follow is extracted from every frame of the video recorded by the quadcopter's camera. The resulting video contains only the color range established by the mask previously created for the blue of the tape, over a wide range of brightness (Figure 12).


The video obtained from the mask is converted to grayscale; then the OpenCV function cv.findContours identifies the contour of the tape of the line to follow. If there is any contour in the video, the line exists. Noise in the detection is reduced by taking the largest detectable area: any element that is not the line to follow is discarded, unless it has a larger area than the line to follow itself (García et al., 2015).

The cv.moments function determines the centroid of the line to follow as well as its coordinates in the video, which are stored in the variables cx (horizontal position) and cy (vertical position). These coordinates are given in a range defined by the resolution of the processed image, which in this case is 120 (cy) x 160 (cx) pixels. The centroid is the reference point for piloting the quadcopter; its position determines in which direction the quadcopter must move (Figure 13).
Any movement of the quadcopter in any direction changes the position of the centroid of the line to follow; the quadcopter must then be able to return to the point where the centroid is right at the center of the video, in order to always keep the line visible. This is carried out by defining areas across the video, as seen in Figure 14. Six vertical red lines in the video delimit different areas for the possible cases where the line to follow can be detected, as well as the corresponding decisions regarding the control of the quadcopter.

3.4. Control algorithm
The six red lines drawn in the quadcopter video delimit 7 areas. These lines are distributed according to the width in pixels of the processed video, in this case 160 pixels; each area determines a different decision by the quadcopter when the centroid of the detected line falls within it. This is interpreted as different movement speeds that try to head the quadcopter toward the central area. In addition, when there is no line to follow in the video, the quadcopter stops moving. Table 1 shows the relationship between the speeds published to the /bebop/cmd_vel topic and the area where the centroid of the line to follow is located. It is important to emphasize that the cx coordinate is a parameter given by the centroid of the detected line and is fixed in a range of 0 to 160, matching the width of every video frame.
Straight line (Figure 15(a)): the centroid of the line to follow is between 60 and 100 pixels with respect to the vertical axis (cx) of the image (central area); the quadcopter moves forward only.
Turn to the left (Figure 15(b)): the centroid of the line to follow is between 100 and 120 pixels with respect to the vertical axis (cx) of the image (left area). The quadcopter moves slightly to the left, modifying both the angular velocity on the z axis and the linear velocity on the y axis. The linear velocity in x also decreases by half to make a better turn without stopping the movement along the line to follow.
Turn to the right (Figure 15(c)): the centroid of the line to follow is between 40 and 60 pixels with respect to the vertical axis (cx) of the image (right area). As in the left turn, the quadcopter moves slightly to the right, keeping the same values for the angular and linear velocities but with the opposite sign. The linear movement velocity in x also decreases but does not change its sign (no reverse).
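The decision logic described above can be sketched as a function from the centroid position to a velocity command. The area boundaries are those just described; the speed magnitudes and the behavior of the two outermost areas are assumptions standing in for the values of Table 1:

```python
def decide(cx, forward=0.1, lateral=0.05, yaw=0.2):
    """Map the centroid's horizontal position to a (vx, vy, wz) command.

    cx is in [0, 160); None means no line was detected.
    Speed magnitudes are illustrative, not the values of Table 1.
    """
    if cx is None:                  # no line detected: stop
        return (0.0, 0.0, 0.0)
    if 60 <= cx <= 100:             # central area: straight ahead
        return (forward, 0.0, 0.0)
    if 100 < cx <= 120:             # left area: gentle left turn, half speed
        return (forward / 2, lateral, yaw)
    if 40 <= cx < 60:               # right area: gentle right turn
        return (forward / 2, -lateral, -yaw)
    if cx > 120:                    # far left area: sharper turn (assumed)
        return (forward / 2, lateral, 2 * yaw)
    return (forward / 2, -lateral, -2 * yaw)  # far right area (assumed)
```

The returned tuple maps directly onto linear.x, linear.y and angular.z of the Twist message published to /bebop/cmd_vel.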



Figure 15: Quadcopter shift in different scenarios.
4. Conclusions
Part of the application's precision comes from the detection of the line, where the size of the tape, the distance between the quadcopter and the floor, the color detection algorithm, the movement speed of the quadcopter and the acquisition speed of the video camera all matter.
During the development of this work, a black tape was considered in the first instance for the line; however, the techniques used to detect this color failed under exposure to different levels of brightness, so the black tape was discarded, leading to the conclusion that the HSV format is more useful for detecting ranges of any other color.
The control algorithm does not implement any mathematical model; the proposed movement speeds are the product of observation by the developers of this work.
The system can be improved by implementing a more efficient control stage based on the same principle of line color, coordinate and centroid detection.
The developed work achieved its objective as a line follower; moreover, the results presented here can be used for different tasks where tracking requirements arise.
References
Alexandre S. Brandao, F. N. M., Soneguetti, H. B., 2015. A vision-based line following strategy for an autonomous uav. 12th International Conference on Informatics in Control, Automation and Robotics 2, 317–318.
Ben-Kiki, O., Evans, C., Ingerson, B., 2005. YAML Ain't Markup Language (YAML) version 1.1. yaml.org, Tech. Rep, 23.
Bradski, G., Kaehler, A., 2008. Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, Inc.
Bristeau, P.-J., Callou, F., Vissiere, D., Petit, N., 2011. The navigation and control technology inside the AR.Drone micro UAV. IFAC Proceedings Volumes 44 (1), 1477–1484.
DAI, Y.-f., SHU, W.-q., 2008. Linux-based implementation of game joysticks. Computer Engineering & Science (7), 36.
Deswal, M., Sharma, N., 2014. A fast HSV image color and texture detection and image conversion algorithm. International Journal of Science and Research (IJSR) 3, 5.
García, G. B., Suarez, O. D., Aranda, J. L. E., Tercero, J. S., Gracia, I. S., Enano, N. V., 2015. Learning Image Processing with OpenCV. Packt Publishing Ltd.
Ikeda, J., Ledbetter, C. L., Bristol, P., Apr. 24 2012. Game controller with multi-positional state controller element. US Patent App. 29/368,839.
Johannes Meyer, Alexander Sendobry, S. K. U. K., von Stryk, O., 2012. Comprehensive simulation of quadrotor UAVs using ROS and Gazebo. Simulation, Modeling, and Programming for Autonomous Robots: Third International Conference 1, 1. DOI: 10.1007/978-3-642-34327-8_36
Koubâa, A., 2017. Robot Operating System (ROS). Springer.
M. Zafri Baharuddin, Izham Z. Abidin, S. S. K. M. Y. K. S., Chuan, J. T. T., 2016. Analysis of line sensor configuration for the advanced line follower robot, 1–2.
Martinez, A., Fernández, E., 2013. Learning ROS for robotics programming. Packt Publishing Ltd.
Monajjemi, M., 2015. bebop_autonomy: ROS driver for Parrot Bebop drone (quadrocopter) 1.0 & 2.0.
Nguyen, B., 2003. Linux Filesystem Hierarchy. Binh Nguyen.
Oliphant, T. E., 2006. A guide to NumPy. Vol. 1. Trelgol Publishing USA.
Pakdaman, M., Sanaatiyan, M. M., 2009. Design and implementation of line follower robot. IEEE 43, 1–2.
Parrot, S., 2016. Parrot Bebop 2. Retrieved from Parrot.com: http://www.parrot.com/products/bebop2.
Piotr Kardasz, Jacek Doskocz, M. H. P. W., Zarzycki, H., 2016. Drones and possibilities of their using. Journal of Civil & Environmental Engineering 6, 5–6.
Quigley, M., Gerkey, B., Smart, W. D., 2015. Programming Robots with ROS: A Practical Introduction to the Robot Operating System. O'Reilly Media, Inc.
Stevens, W. R., Fenner, B., Rudoff, A. M., 2008. UNIX Network Programming: The Sockets Networking API. Addison-Wesley.
W. Richard Stevens, B. F., Rudoff, A. M., 2004. Addison-Wesley, Boston, MA 02116, USA.

