Hola People!! Recently, I was reading about self-driving cars and was instantly hooked by the idea. I wanted to experiment with a few of the underlying image-processing concepts, and lane detection turned out to be one of those classic computer vision problems.

When we drive, we use our eyes to decide where to go. The lines on the road that show us where the lanes are act as our constant reference for where to steer the vehicle. Naturally, one of the first things we would like to do in developing a self-driving car is to automatically detect lane lines using an algorithm.

So, in today’s blog, I’ll explain how to build your own lane detection system using OpenCV.

Here’s a look at the lane detection system we’ll be building…

Pretty cool, right? I can’t wait to get started and guide you on this computer vision inspired journey into the world of lane detection and self-driving cars using the OpenCV library. We will, of course, go through the Python code as well in this tutorial.

I’ll be using the following techniques:

  • Color Selection
  • Canny Edge Detection
  • Region of Interest Selection
  • Hough Transform Line Detection

I’ll apply all the techniques to process video clips to find lane lines in them.

Importing necessary Libraries

We’ll start by importing necessary dependencies.

I’ll be using:

  • cv2 for image processing
  • os, glob for directory operations
  • numpy for mathematical calculations
  • moviepy for reading and writing video files
import cv2
import os, glob
import numpy as np
from moviepy.editor import VideoFileClip

Test Images

Consider the following images:

There are so many other objects in the scene apart from the lane markings. There are vehicles on the road, road-side barriers, street-lights, etc. And in a video, a scene changes at every frame. This mirrors real-life driving situations pretty well.

We have to find a way to ignore these unwanted objects from the driving scene. So, for this we need to process the image. One approach is to perform color selection on the image along with canny edge detection followed by narrowing down the area of interest and performing Hough line transforms at last.

Color Selection

Before we can detect our edges, we need to make it clear exactly what we’re looking for. Lane lines are typically yellow and white. Yellow can be a tricky color to isolate in RGB space, so let’s convert from the RGB color space to the Hue, Lightness, Saturation (HLS) color space.

def convert_hls(image):
    return cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
HSL image

We can see that both the white and yellow lines are clearly recognizable. Next, we will apply a mask to the original RGB image to return the pixels we’re interested in.

  • Use cv2.inRange to filter the white color and the yellow color separately.
    The function returns 255 when the filter condition is satisfied. Otherwise, it returns 0.
  • Use cv2.bitwise_or to combine these two binary masks.
    The combined mask returns 255 when either white or yellow color is detected.
  • Use cv2.bitwise_and to apply the combined mask onto the original RGB image
def select_white_yellow(image):
    converted = convert_hls(image)
    # white color mask
    lower_white = np.uint8([  0, 200,   0])
    upper_white = np.uint8([255, 255, 255])
    white_mask = cv2.inRange(converted, lower_white, upper_white)
    # yellow color mask
    lower_yellow = np.uint8([ 10,   0, 100])
    upper_yellow = np.uint8([ 40, 255, 255])
    yellow_mask = cv2.inRange(converted, lower_yellow, upper_yellow)
    # combine the mask
    mask = cv2.bitwise_or(white_mask, yellow_mask)
    return cv2.bitwise_and(image, image, mask = mask)
RGB image

Canny Edge Detection

Next, we need to detect edges, so that we can later find straight lane lines among them.

For this, we’ll:

  • Convert images into gray scale
  • Smooth out rough lines
  • Find edges

To detect edges, the image should first be converted to grayscale. This is because Canny edge detection measures the magnitude of pixel intensity changes.

Here, I’m converting the white and yellow lane images from above into grayscale for edge detection.

def convert_gray_scale(image):
    return cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

Now, let’s apply a quick Gaussian blur. This filter will help to suppress noise in our Canny Edge Detection by averaging out the pixel values in a neighborhood.

def apply_smoothing(image, kernel_size=15):
    return cv2.GaussianBlur(image, (kernel_size, kernel_size), 0)
Gaussian Blur image

We’re ready! Let’s compute our Canny Edge Detection. Canny edge detection uses the fact that edges have high gradients.

def detect_edges(image, low_threshold=50, high_threshold=150):
    return cv2.Canny(image, low_threshold, high_threshold)
Detected Edges

Region of Interest Selection

We’ve come a long way, but we’re not there yet. We don’t want our car to be paying attention to anything on the horizon, or even in the other lane. Our lane detection pipeline should focus on what’s in front of the car.


To do that, we are going to create another mask called our region of interest (ROI). Everything outside of the ROI will be set to black/zero, so we are only working with the relevant edges.

def filter_region(image, vertices):
    """Create the mask using the vertices and apply it to the input image."""
    mask = np.zeros_like(image)
    if len(mask.shape) == 2:
        cv2.fillPoly(mask, vertices, 255)
    else:
        cv2.fillPoly(mask, vertices, (255,) * mask.shape[2]) # in case the input image has a channel dimension
    return cv2.bitwise_and(image, mask)

def select_region(image):
    """Keep the region surrounded by the `vertices` (i.e. the polygon); other areas are set to 0 (black)."""
    # first, define the polygon by vertices
    rows, cols = image.shape[:2]
    bottom_left  = [cols*0.1, rows*0.95]
    top_left     = [cols*0.4, rows*0.6]
    bottom_right = [cols*0.9, rows*0.95]
    top_right    = [cols*0.6, rows*0.6] 
    # the vertices are an array of polygons (i.e array of arrays) and the data type must be integer
    vertices = np.array([[bottom_left, top_left, top_right, bottom_right]], dtype=np.int32)
    return filter_region(image, vertices)

Hough Transform

Hough Transform is a technique to detect any shape that can be represented mathematically.

We are interested in detecting lane markings that can be represented as lines.

def hough_lines(image):
    #'image' should be the output of a Canny transform.
    #Returns hough lines (not the image with lines)
    return cv2.HoughLinesP(image, rho=1, theta=np.pi/180, threshold=20, minLineLength=20, maxLineGap=300)

Multiple segments can be detected for a single lane line, so we need to average them into one line.

Also, some lane lines are only partially recognized, so we should extrapolate each line to cover the full lane length.

We want two lane lines: one for the left and one for the right. Since the y-axis is reversed in image coordinates (y increases downward), the left lane has a negative slope and the right lane a positive slope. Therefore, we’ll collect negative-slope lines and positive-slope lines separately and take their length-weighted averages.

def average_slope_intercept(lines):
    left_lines    = [] # (slope, intercept)
    left_weights  = [] # (length,)
    right_lines   = [] # (slope, intercept)
    right_weights = [] # (length,)
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2==x1:
                continue # ignore a vertical line
            slope = (y2-y1)/(x2-x1)
            intercept = y1 - slope*x1
            length = np.sqrt((y2-y1)**2+(x2-x1)**2)
            if slope < 0: # y is reversed in image, so the left lane has a negative slope
                left_lines.append((slope, intercept))
                left_weights.append(length)
            else:
                right_lines.append((slope, intercept))
                right_weights.append(length)
    # add more weight to longer lines    
    left_lane  = np.dot(left_weights,  left_lines) /np.sum(left_weights)  if len(left_weights) >0 else None
    right_lane = np.dot(right_weights, right_lines)/np.sum(right_weights) if len(right_weights)>0 else None
    return left_lane, right_lane # (slope, intercept), (slope, intercept)
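To see what the length-weighted average does, here is a tiny numpy-only example (synthetic numbers of my own, not from an actual frame): two segments on the same left lane line, where the longer segment pulls the average toward its own slope.

```python
import numpy as np

# Two detected segments on the left lane, as (slope, intercept) pairs
left_lines   = [(-0.8, 400.0), (-0.6, 380.0)]
left_weights = [10.0, 30.0]   # segment lengths: the longer one counts more

# np.dot(weights, pairs) / sum(weights) is a length-weighted average
left_lane = np.dot(left_weights, left_lines) / np.sum(left_weights)

# Result: slope -0.65, intercept 385 -- pulled toward the longer segment
print(left_lane)
```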

Next, we’ll draw the lanes.

def make_line_points(y1, y2, line):
    #Convert a line represented in slope and intercept into pixel points  
    if line is None:
        return None    
    slope, intercept = line    
    # make sure everything is integer as cv2.line requires it
    x1 = int((y1 - intercept)/slope)
    x2 = int((y2 - intercept)/slope)
    y1 = int(y1)
    y2 = int(y2)
    return ((x1, y1), (x2, y2))
def lane_lines(image, lines):
    left_lane, right_lane = average_slope_intercept(lines)
    y1 = image.shape[0] # bottom of the image
    y2 = y1*0.6         # slightly lower than the middle

    left_line  = make_line_points(y1, y2, left_lane)
    right_line = make_line_points(y1, y2, right_lane)
    return left_line, right_line

def draw_lane_lines(image, lines, color=[255, 0, 0], thickness=20):
    # make a separate image to draw lines on, and combine with the original later
    line_image = np.zeros_like(image)
    for line in lines:
        if line is not None:
            cv2.line(line_image, *line,  color, thickness)
    # image1 * α + image2 * β + λ
    # image1 and image2 must be the same shape.
    return cv2.addWeighted(image, 1.0, line_image, 0.95, 0.0)

Detecting lanes and saving output

Next, I’ll create a LaneDetector class containing a process() function that runs each frame through the steps above and generates a video output with the detected lanes.

from collections import deque

QUEUE_LENGTH = 50 # number of recent frames to average lane lines over


class LaneDetector:
    def __init__(self):
        self.left_lines  = deque(maxlen=QUEUE_LENGTH)
        self.right_lines = deque(maxlen=QUEUE_LENGTH)

    def process(self, image):
        white_yellow = select_white_yellow(image)
        gray         = convert_gray_scale(white_yellow)
        smooth_gray  = apply_smoothing(gray)
        edges        = detect_edges(smooth_gray)
        regions      = select_region(edges)
        lines        = hough_lines(regions)
        left_line, right_line = lane_lines(image, lines)

        def mean_line(line, lines):
            if line is not None:
                lines.append(line)
            if len(lines) > 0:
                line = np.mean(lines, axis=0, dtype=np.int32)
                line = tuple(map(tuple, line)) # make sure it's tuples, not a numpy array, for cv2.line to work
            return line

        left_line  = mean_line(left_line,  self.left_lines)
        right_line = mean_line(right_line, self.right_lines)

        return draw_lane_lines(image, (left_line, right_line))

Next, I’ll create a process_video() function that takes two arguments: the input video and the output video. The function creates a LaneDetector object, calls its process() function on each frame, and then saves the output video.

In the end, we’ll call this function with the paths of the input and output videos.

def process_video(video_input, video_output):
    detector = LaneDetector()

    clip = VideoFileClip(os.path.join('test_videos', video_input))
    processed = clip.fl_image(detector.process)
    processed.write_videofile(os.path.join('output_videos', video_output), audio=False)

process_video('video_input.mp4', 'video_output.mp4')  


In this tutorial, we covered a simple technique for lane detection. We did not use any model or complex image features. Instead, our solution was purely based on certain image pre-processing operations.

However, there are many scenarios where this solution will not work. For example, when there are no lane markings, or when there is too much traffic on the road, this system will fail. There are more sophisticated methods to overcome such problems in lane detection, and I encourage you to explore them if the concept of self-driving cars interests you. Feel free to use the comments section in case you have any doubts or feedback for me.

You can access the source code on GitHub.


-Pratiksha Goyal

