Have you ever wondered how a vehicle detects the lane lines on the road? It is actually simpler than you might think: you can do it with Canny edge detection and a Hough transform. Below I am going to show the first lesson I learned from Udacity's Self-Driving Car ND. (Of course, the algorithms used in real applications are much more robust.)
I broke the whole process down into five steps. First, I converted the image to grayscale. Then I smoothed it with a Gaussian blur and used cv2.Canny to detect edges. Next, the image is cropped so that only the main lanes are included in the following analysis. To find the line segments, I used a Hough transform. Finally, the detected lanes were drawn onto the original image as line segments.
In order to draw a single line for each of the left and right lanes, I modified the draw_lines() function. First, I separate the segments into left and right lanes by the sign of their slope. Then I calculate a weighted average slope, weighting each segment by its length, so longer segments have more influence on the slope determination. Next, a center point is picked as the average of all the endpoints of the left (or right) lane segments. The slope and center point together define each lane, and the endpoints of the drawn left and right lanes are set by the area of interest (vertices in the code).
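The averaging logic above can be sketched as a couple of small NumPy functions. Note these are illustrative helpers I am naming here (average_lane and lane_endpoints are not from the original notebook), and they assume the segments for one lane have already been separated by slope sign:

```python
import numpy as np

def average_lane(segments):
    """Return (slope, center) for one lane from its [x1, y1, x2, y2] segments."""
    segs = np.asarray(segments, dtype=float)
    dx = segs[:, 2] - segs[:, 0]
    dy = segs[:, 3] - segs[:, 1]
    lengths = np.hypot(dx, dy)
    # Weighted average slope: longer segments get more influence.
    slope = np.sum((dy / dx) * lengths) / np.sum(lengths)
    # Center point: the mean of all segment endpoints.
    xs = np.concatenate([segs[:, 0], segs[:, 2]])
    ys = np.concatenate([segs[:, 1], segs[:, 3]])
    return slope, (xs.mean(), ys.mean())

def lane_endpoints(slope, center, y_bottom, y_top):
    """Extrapolate the lane through its center point out to the
    bottom and top of the area of interest."""
    cx, cy = center
    x_bottom = cx + (y_bottom - cy) / slope
    x_top = cx + (y_top - cy) / slope
    return (int(x_bottom), y_bottom), (int(x_top), y_top)
```

With the slope and center in hand, the lane is drawn as one line between the two extrapolated endpoints instead of many short segments.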
Define helper functions
Now we can use the helper functions to process the example image.
There you have it: using Canny edge detection and a Hough transform, I can successfully detect the lane lines in this image. However, this example is probably the simplest case for lane detection. I later found the algorithm is not robust enough for more complex images, for example when the lane lines are yellow, the lane is curved, or there are curbs and other sources of noise. One way to make it more robust is to look into color spaces. That is our next lesson, so stay tuned!
The entire project, with many more details, can be found here.