This section describes the adaptation of the existing FormulaPi code base to the Raspberry Pi and Arduino hardware. This code base serves as the foundation for the Elegoo autonomous vehicle project.
FormulaPi is an autonomous vehicle race series designed for people with little hardware or software experience. The league provides every competitor with a common hardware platform and a basic code base maintained by the PiBorg team. This software contains the minimum set of functions needed to process images and control the YetiBorg. The software is only available to competitors and is hosted privately on SourceForge. The intent is to keep the software simple enough for entry-level competitors while allowing more advanced competitors the flexibility to modify it to improve performance.
While the FormulaPi competition itself runs on PiBorg's YetiBorg, integrating this code base onto the Elegoo vehicle allows users to modify the FormulaPi software and test those changes in hardware before submitting the code to the competition.
FormulaPi Code Installation
This section assumes the reader is a FormulaPi participant and has access to the FormulaPi code base. We will start with the Python3 port of the standard FormulaPi race code.
$ git clone https://git.code.sf.net/p/formulapi3/code formulapi3-code
There are a couple of advantages to using this code base. This version of the software was designed for Python 3.x instead of Python 2.7 and uses OpenCV 3.x. It also provides a modified version of the standard ZeroBorg motor driver. Since we will be interfacing with the Arduino on the Elegoo, the modified ZeroBorg.py file interfaces with the L298N motor controller shield for the Arduino that is provided in the Elegoo vehicle kit. The driver uses the CmdMessenger library to communicate with the Arduino and relies on an Arduino sketch to run a PWM motor driver. Installation of the CmdMessenger library is covered in the Raspberry Pi Integration section. This modified driver also leaves open the possibility of integrating other sensors, such as the sonar sensor on the Elegoo, to create a more comprehensive autonomous vehicle system beyond the camera-only FormulaPi system.
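As a rough illustration of the kind of scaling such a driver performs, here is a minimal sketch (not taken from the actual ZeroBorg.py driver) of how a drive level in the -1.0 to +1.0 range used by the FormulaPi control code might be mapped to a signed integer PWM duty for an L298N; the 0..255 duty scale and the clamping behavior are assumptions about the Arduino sketch.

```python
# Hypothetical helper, not part of the FormulaPi code base: map a drive
# level in the range -1.0..+1.0 to a signed integer PWM duty for an L298N.
# The 0..255 duty scale is an assumption about the Arduino sketch.
def drive_to_pwm(drive, max_pwm=255):
    # Clamp out-of-range drive requests before scaling.
    drive = max(-1.0, min(1.0, drive))
    return int(drive * max_pwm)
```

For example, a full-forward request of 1.0 maps to 255, and anything beyond the valid range is clamped before scaling.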
FormulaPi Code Modifications
Some minor changes are necessary to update the software to the current versions of the package dependencies. This includes syntax and formatting changes to make the code compatible with Python 3 and OpenCV 3. It also involves removing some of the more advanced features to simplify the simulations and aid understanding and debugging.
In SimulationImages.py make the following changes:
Line 17:
+++ import importlib

Line 149:
--- reload(Settings)
+++ importlib.reload(Settings)

Line 258:
--- image = cv2.imread(fileIn)
+++ filepath = os.path.join(autoPath, fileIn)
+++ image = cv2.imread(filepath)
In ImageProcessor.py, more significant changes are needed. Comment out the call to the PerformOverrides function; this makes debugging easier since we are not initially concerned with the advanced features this function provides.
Line 239:
--- filteredSpeed, filteredSteering = self.PerformOverrides(filteredSpeed, filteredSteering) #TODO debug overrides
+++ #filteredSpeed, filteredSteering = self.PerformOverrides(filteredSpeed, filteredSteering)
Also add the following line if you want to display the “Predator” mode images.
Line 911:
+++ self.ShowImage('Predator', adjusted)
Add the following line so that frames read from a video file are resized to the image dimensions the processing pipeline expects from the camera.
Line 464:
+++ image = cv2.resize(image, (Settings.imageWidth, Settings.imageHeight), interpolation = cv2.INTER_CUBIC)
Several changes are made to the PID control. The additional terms are removed to simplify tuning of the PID gains: only the estimated track position is used in the control system, while the vehicle angle and track curvature terms are ignored for now. The other added lines ensure that when computing d1, the estimated vehicle angle is expressed in radians instead of pixel units.
Line 426:
--- steering = self.pid0 + self.pid1 + self.pid2
+++ steering = self.pid0

Line 1058:
+++ d1Pix = d1
+++ gradient = math.atan(d1/dXdY)
+++ d1 = gradient
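The intent of the Line 1058 addition can be sketched in isolation as follows. Here dXdY stands for the pixel-space x-per-y scaling used by the image processor; the exact variable meanings are assumptions inferred from the diff above.

```python
import math

# Hypothetical standalone version of the Line 1058 change: convert a lane
# gradient measured in pixel units into an angle in radians via atan.
# dXdY is assumed to be the pixel aspect scaling used by the image processor.
def pixel_gradient_to_radians(d1, dXdY):
    return math.atan(d1 / dXdY)
```

For instance, a gradient of 1 pixel per unit with dXdY = 1 corresponds to an angle of pi/4 radians, which is then fed to the PID term in consistent angular units.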
Some functions need to be updated to ensure the proper variable types are input as arguments.
Line 494:
--- image.itemset((y, x, 0), b)
+++ image.itemset((int(y), int(x), 0), b)

Line 496:
--- image.itemset((y, x, 1), b)
+++ image.itemset((int(y), int(x), 1), b)

Line 498:
--- image.itemset((y, x, 2), b)
+++ image.itemset((int(y), int(x), 2), b)
In OpenCV 3.x, the findContours function returns 3 values instead of 2, so the function calls need to be updated to capture the additional return value.
Line 917-920:
--- rContours, hierarchy = cv2.findContours(red, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)
--- gContours, hierarchy = cv2.findContours(green, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)
--- bContours, hierarchy = cv2.findContours(blue, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)
--- kContours, hierarchy = cv2.findContours(black, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)
+++ red, rContours, hierarchy = cv2.findContours(red, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)
+++ green, gContours, hierarchy = cv2.findContours(green, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)
+++ blue, bContours, hierarchy = cv2.findContours(blue, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)
+++ black, kContours, hierarchy = cv2.findContours(black, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)
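If you expect to run this code against more than one OpenCV release, a small compatibility shim can absorb the return-signature difference (OpenCV 2.x and 4.x return two values, while 3.x returns three). This helper is a suggestion, not part of the FormulaPi code base:

```python
# Suggested shim, not in the FormulaPi code base: normalize the return value
# of cv2.findContours across OpenCV versions.
def unpack_contours(result):
    if len(result) == 3:
        # OpenCV 3.x: (modified image, contours, hierarchy)
        _, contours, hierarchy = result
    else:
        # OpenCV 2.x / 4.x: (contours, hierarchy)
        contours, hierarchy = result
    return contours, hierarchy
```

With this in place, each call site becomes `rContours, hierarchy = unpack_contours(cv2.findContours(...))` and no longer needs editing when the OpenCV version changes.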
The changes below will fix some errors in the visualization of the track position, vehicle angle, and track curvature estimates.
Line 1071:
--- offsetPoint = (offsetX + Settings.cropX1, offsetY + Settings.cropY1)
+++ offsetPoint = (int(offsetX + Settings.cropX1), int(offsetY + Settings.cropY1))

Line 1073:
--- dPoint = (int(centralPoint + dPlotY * d1), int(centralPoint - dPlotY))
+++ dPointC = (int(centralPoint + dPlotY * d1Pix), int(centralPoint - dPlotY))
+++ dPointO = (int(offsetPoint + dPlotY * d1Pix), int(offsetPoint - dPlotY))

Line 1077:
--- cv2.line(displayImage, centralPoint, dPoint, lineColour, 3, lineType = cv2.CV_AA)
+++ cv2.line(displayImage, offsetPoint, dPointO, lineColour, 3, cv2.LINE_AA)

Line 1078:
--- cv2.line(displayImage, d2PointA, d2PointB, d2Colour, 3, lineType = cv2.CV_AA)
+++ cv2.line(displayImage, d2PointA, d2PointB, d2Colour, 3, cv2.LINE_AA)

Line 1080:
--- cv2.line(displayImage, centralPoint, dPoint, white, 1, lineType = cv2.CV_AA)
+++ cv2.line(displayImage, centralPoint, dPointC, white, 1, cv2.LINE_AA)

Line 1081:
--- cv2.line(displayImage, d2PointA, d2PointB, white, 1, lineType = cv2.CV_AA)
+++ cv2.line(displayImage, d2PointA, d2PointB, white, 1, cv2.LINE_AA)
Finally, these changes fix errors in the estimated track position graphic.
Line 1104-1111:
--- x1 = section * line
--- x2 = x1 + section
--- cv2.line(displayImage, (x1, y), (x2, y), line, fat, lineType = cv2.CV_AA)
--- y1 = y - Settings.offsetHeight
--- y2 = y + Settings.offsetHeight
--- x = int(section * (4 - d0Sum))
--- cv2.line(displayImage, (x, y1), (x, y2), black, fat, lineType = cv2.CV_AA)
--- cv2.line(displayImage, (x, y1), (x, y2), orange, thin, lineType = cv2.CV_AA)
+++ x1 = int(section * line)
+++ x2 = int(x1 + section)
+++ cv2.line(displayImage, (x1, y), (x2, y), line, fat, cv2.LINE_AA)
+++ y1 = int(y - Settings.offsetHeight)
+++ y2 = int(y + Settings.offsetHeight)
+++ x = int(section * (4 - d0Sum))
+++ cv2.line(displayImage, (x, y1), (x, y2), black, fat, cv2.LINE_AA)
+++ cv2.line(displayImage, (x, y1), (x, y2), orange, thin, cv2.LINE_AA)
In order to start with the simplest possible simulation, the Race.py file was minimized to contain only the following lines and renamed Race_img_sim.py. This removes the position estimation and starting-lights identification functions and lets us focus on the core lane detection algorithm.
### Enable logging ###
StartDetailedLoging()
StartUserLog()

ImageProcessor.SetImageMode(ImageProcessor.FOLLOW_TRACK)
In the original ZeroBorg.py file, there were separate functions for the left wheels and the right wheels, defined as SetMotor1 and SetMotor2. During experimentation, it was found that the vehicle responded more smoothly when motor commands for both sides were sent in a single command. Therefore, in ZeroBorg.py, we define a SetMotors function that takes the desired left and right motor commands and sends them to the Arduino using the PyCmdMessenger library. The function is defined below.
def SetMotors(self, right, left):
    left = int(left)
    right = int(right)
    c.send("motors", 150, left, right)
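The function above relies on a PyCmdMessenger connection object c created elsewhere in the driver. To inspect the message it produces without an Arduino attached, we can substitute a stand-in messenger; the "motors" command name and the leading 150 argument simply mirror the snippet above, and FakeMessenger is purely illustrative.

```python
# Illustrative stand-in for the PyCmdMessenger connection object, so the
# message format can be inspected without hardware attached.
class FakeMessenger:
    def __init__(self):
        self.sent = []

    def send(self, cmd, *args):
        # Record the command instead of writing it to the serial port.
        self.sent.append((cmd,) + args)

def set_motors(messenger, right, left):
    # Mirrors the SetMotors body: cast to int and send both sides at once.
    messenger.send("motors", 150, int(left), int(right))

c = FakeMessenger()
set_motors(c, 63.7, -63.2)
# c.sent is now [("motors", 150, -63, 63)]
```

Sending both wheel commands in one message, as noted above, avoids the brief asymmetry that occurs when the two sides are updated by separate serial commands.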
In order to use the SetMotors function, we need to modify Formula.py. We will use only the SetMotors function and comment out the remaining SetMotor# function calls. Additionally, update the file to use the simplified Race_img_sim.py file.
Line 55:
+++ ZB.SetMotors(driveRight * Settings.maxPower, driveLeft * Settings.maxPower)

Line 200:
--- exec(compile(open('Race.py').read(), 'Race.py', 'exec'), raceGlobals.copy(), raceLocals.copy())
+++ exec(compile(open('Race_img_sim.py').read(), 'Race_img_sim.py', 'exec'), raceGlobals.copy(), raceLocals.copy())
Computer Vision Simulation
Before running the FormulaPi computer vision algorithms in real time on video captured from the Raspberry Pi camera onboard the vehicle, we first feed some sample images into the algorithms and verify that the output is accurate.
Since the algorithms were optimized for the FormulaPi track at the PiBorg offices, we can use sample images taken from their YetiBorg vehicles on that track. I have compiled a range of sample photos into a .zip file that can be downloaded here.
The SimulationImages.py file is the primary file for running software-in-the-loop simulations to tune the FormulaPi lane detection algorithm. For the initial tests, we will use the formulapi_1.jpg file.
This results in the following changes being made to the SimulationImages.py file. Note that some changes depend on the file path where the sample images are located, which will be unique to each user's machine.
fileIn = 'formulapi_1.jpg'
autoPath = r'/home/pi/Desktop/data_sets/formulapi_sample_images/'
ImageProcessor.filePattern = './formulapi_%01d.jpg'
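The os.path.join change made earlier in SimulationImages.py means these two settings combine to form the full path of each sample image, so the simulation no longer depends on the current working directory. A quick sketch of the resolution:

```python
import os

# Values from the settings above.
autoPath = '/home/pi/Desktop/data_sets/formulapi_sample_images/'
fileIn = 'formulapi_1.jpg'

# This is the path that cv2.imread receives after the Line 258 change.
frame_path = os.path.join(autoPath, fileIn)
```

The filePattern setting works the same way for the numbered sequence of sample images, with %01d replaced by each image's index.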
We will also make sure the correct flags are set for the image processor debugging settings. This will allow us to see the output of the lane detection algorithm at various stages in the process.
ImageProcessor.writeRawImages = False
ImageProcessor.writeImages = False
ImageProcessor.debugImages = True
ImageProcessor.showProcessing = True
ImageProcessor.showFps = False
ImageProcessor.showUnknownPoints = True
ImageProcessor.predatorView = True
Lastly, make sure we are simulating our modified race file, Race_img_sim.py.
exec(compile(open('Race_img_sim.py').read(), 'Race_img_sim.py', 'exec'), raceGlobals.copy(), raceLocals.copy())
Once those changes are complete, run the simulation on the single formulapi_1.jpg image file.
$ python3 /path/to/SimulationImages.py
The following images will be displayed as a result:
Here we see that each lane color is detected as well as the black wall. The lines between the lanes are identified and the estimated position of the vehicle is also accurate.
Several other images are useful to verify the accuracy of the lane detection algorithm. For example, the file formulapi_img3.jpg contains all of the lanes of the FormulaPi track.
Lastly, the lane detection uses the red and green lanes to identify whether the vehicle is facing the wrong direction. Specifically, the algorithm checks whether a green lane ever appears to the left of a red lane, which only occurs when the vehicle is facing the opposite direction. The file formulapi_img10.jpg is used to test this portion of the algorithm.
When the algorithm detects that the vehicle is facing the wrong direction, it identifies the lane with white markers as shown in the output image below.
We can easily test all of the sample images in the formulapi_sample_images directory by ensuring that the autoDelay variable is set to 0 in the SimulationImages.py file.
autoDelay = 0
With this modification, the algorithm will process each image and display the results. To move on to the next image in the directory, press a key; the next image will load, be processed, and its output displayed.
Once the user is comfortable that the algorithm is functioning properly and understands the nuances of the lane detection algorithm and its associated variables, pre-recorded video footage can be fed in to test the algorithm and motor control.
The DIY Robocars Oakland Meetup group and companion website were created for people who want to build and race DIY and pro-level autonomous cars on a budget, indoors. The site provides a community and digital resources for those unable to attend the Meetup events in person in California.
The Meetup group has replicated the FormulaPi track, and at one event the Cometa Team recorded some sample car-perspective video while driving around the track. Their complete data set is available as the DIY Robocars OaklandWarehouse dataset. The team recorded several runs around varying types of tracks. We will use their Video 3 file, which begins with runs around the FormulaPi track. They've also provided a CSV file with the steering and throttle settings for each frame here.
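If you want to compare the algorithm's commands against the recorded driving, the per-frame CSV can be loaded with Python's standard csv module. This is a hypothetical sketch: the column names below are assumptions, so check the header of the actual file before using it.

```python
import csv
import io

# Hypothetical sketch of reading the per-frame steering/throttle CSV.
# The column names here are assumptions; check the real file's header.
sample = "frame,steering,throttle\n0,0.12,0.55\n1,-0.03,0.60\n"

rows = list(csv.DictReader(io.StringIO(sample)))
steering = [float(r["steering"]) for r in rows]
```

For the real dataset, replace the io.StringIO(sample) stand-in with open() on the downloaded CSV file.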
For this simulation, the lane detection algorithm will be running and calculating motor commands to keep the vehicle centered on a specified lane. Since the motors will now be receiving commands and spinning the wheels, some additional modifications must be made.
First, we want to make sure that the Arduino is configured to receive motor commands from the FormulaPi software over the USB serial connection with the help of the CmdMessenger library. While this file is provided in PiBorg's Python3 port of the standard FormulaPi race code, it can also be downloaded here. Using the Arduino IDE, compile the FormulaPi.ino sketch and upload it to the Arduino on the Elegoo.
Additionally, in Formula.py, modify the capture variable to read the pre-recorded video file instead of using the Raspberry Pi camera.
Line 204:
--- Globals.capture = cv2.VideoCapture(0)
+++ Globals.capture = cv2.VideoCapture("/home/pi/Desktop/data_sets/74DA388EAC61-1484426563.flv")
Running Formula.py will display the processed frames from the video along with the computer vision outputs, and the motors will spin based on the commands generated by the control algorithm.