Hey guys! In today’s blog, we’ll be looking at ways to automate the process of monitoring social distancing using computer vision and deep learning.

Maintaining social distancing is one of the best ways we have to avoid exposure to the coronavirus and to slow its spread locally and across the globe.

So, let’s not waste time and look at the steps involved in creating our own social distancing monitoring tool. 

Pedestrian Detection

First, we’ll perform pedestrian detection and tracking. We are using a pre-trained SSD (Single Shot Detector) model. It is a multi-class detector trained on the 80 object classes of the COCO dataset.
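As a rough sketch of this step, here is how a pre-trained SSD can be loaded and run with OpenCV’s DNN module. I’m assuming the commonly used MobileNet-SSD Caffe weights (where class id 15 is “person”); the file names, thresholds, and input size below are placeholders, not necessarily the exact model used in this project.

```python
# Minimal sketch of pedestrian detection with a pre-trained SSD.
# Assumes the widely used MobileNet-SSD Caffe weights; file names and
# thresholds are placeholders for whichever pre-trained model you use.
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
PERSON_CLASS_ID = 15      # "person" class in the MobileNet-SSD label map
CONF_THRESHOLD = 0.4

def detect_pedestrians(frame):
    """Return a list of (x1, y1, x2, y2) boxes around detected people."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()          # shape: (1, 1, N, 7)

    boxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        class_id = int(detections[0, 0, i, 1])
        if class_id == PERSON_CLASS_ID and confidence > CONF_THRESHOLD:
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            boxes.append(tuple(box.astype(int)))
    return boxes
```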

Here is a result of our object detection model detecting pedestrians.

Marking a Region Of Interest

Now, we’ll find the distance between two pedestrians in an image and mark them if that distance is less than a certain threshold. In our case, we’ll take 6 feet as the threshold value.

My initial approach to measuring the distance between any two given pedestrians was to ask the user for two points on a frame that would mark a known distance. But I ran into the perspective issue, which limited this approach.
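For reference, here is one possible way to let the user mark points on a frame, sketched with an OpenCV mouse callback. The window name, helper names, and Esc-to-abort behaviour are my own choices, not taken from the project code.

```python
# A sketch of collecting user-marked points with an OpenCV mouse callback.
import cv2

clicked_points = []

def on_mouse(event, x, y, flags, param):
    # Record the pixel coordinates of every left click.
    if event == cv2.EVENT_LBUTTONDOWN:
        clicked_points.append((x, y))

def collect_points(frame, num_points, window="mark points"):
    """Show the frame and block until the user has clicked num_points times."""
    clicked_points.clear()
    cv2.namedWindow(window)
    cv2.setMouseCallback(window, on_mouse)
    while len(clicked_points) < num_points:
        cv2.imshow(window, frame)
        if cv2.waitKey(20) & 0xFF == 27:   # Esc aborts early
            break
    cv2.destroyWindow(window)
    return list(clicked_points)
```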

So what exactly is the perspective issue? 

Since the camera is not equidistant from all points on the ground plane, we cannot set a fixed pixel limit to determine whether two pedestrians are a certain real-world distance apart.

This issue can be solved by warping a region of interest into a view in which equal pixel distances correspond to equal real-world distances.

Creating a Bird’s Eye View

Next, I’ll ask the user to mark four points that form a polygon denoting the region of interest. Warping this polygon gives a bird’s eye view of the space, in which pixel distances map uniformly to real-world distances.
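Here is a minimal sketch of that warp using OpenCV’s getPerspectiveTransform. It assumes the four ROI corners are marked in top-left, top-right, bottom-right, bottom-left order, and the output rectangle size is an arbitrary choice.

```python
# Sketch of the bird's-eye-view mapping from four user-marked ROI corners.
import cv2
import numpy as np

def birds_eye_transform(roi_points, dst_w=400, dst_h=600):
    """Return the 3x3 homography that maps the ROI polygon to a top-down rectangle."""
    src = np.float32(roi_points)                                  # 4 marked corners
    dst = np.float32([[0, 0], [dst_w, 0], [dst_w, dst_h], [0, dst_h]])
    return cv2.getPerspectiveTransform(src, dst)

def to_birds_eye(points, matrix):
    """Project image points (e.g. bounding-box bottom-centres) into the bird's-eye view."""
    pts = np.float32(points).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, matrix).reshape(-1, 2)
```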

We then ask the user for two more points in the plane that would denote a certain threshold.

I have taken 6 feet as the threshold value, and to capture it I marked the two points at a person’s head and feet, using that person’s height as a rough 6-foot reference.
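A small sketch of how those two points can be turned into a pixel threshold: project them through the same homography and take their Euclidean distance in the bird’s eye view. The function name here is illustrative, not from the original code.

```python
# Convert the two "6 feet" reference points into a pixel distance threshold
# in the bird's-eye view. `matrix` is the homography from the previous sketch.
import cv2
import numpy as np

def pixel_distance_threshold(six_feet_points, matrix):
    """six_feet_points: the two user-marked points spanning roughly 6 feet."""
    pts = np.float32(six_feet_points).reshape(-1, 1, 2)
    (x1, y1), (x2, y2) = cv2.perspectiveTransform(pts, matrix).reshape(-1, 2)
    return float(np.hypot(x2 - x1, y2 - y1))
```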

Here is a demo of marking the points that create the region of interest and the 6-foot reference in the plane.

Summary

The complete set of steps our algorithm takes can be summed up as:

  • Ask the user to input four points to create a region of interest in the plane. 
  • Ask the user to input two more points to mark the 6-foot distance in the plane.
  • Detect pedestrians in each frame using a pre-trained SSD object detection model.
  • Localize the pedestrians and create bounding boxes around them. 
  • Change the perspective according to the region of interest to create a bird’s eye view of the plane.
  • Find the distance between every pair of pedestrians in the frame.
  • Add a pair to the violations list if their distance is less than 6 feet. 
  • Display how many social distancing violations took place (a per-frame sketch of these last steps follows this list).
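As promised above, here is a rough per-frame sketch tying the distance and violation steps together. The bounding boxes come from whatever detector you use, `matrix` and `pixel_threshold` come from the earlier sketches, and all names are illustrative rather than the project’s actual code.

```python
# Per-frame violation check: map each box to its ground position, project
# into the bird's-eye view, and count pairs closer than the pixel threshold.
import itertools
import cv2
import numpy as np

def count_violations(boxes, matrix, pixel_threshold):
    """boxes: list of (x1, y1, x2, y2) pedestrian boxes from the detector."""
    # Use the bottom-centre of each box as the pedestrian's position on the ground.
    feet = [((x1 + x2) / 2.0, float(y2)) for (x1, y1, x2, y2) in boxes]
    if len(feet) < 2:
        return 0, []

    pts = np.float32(feet).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(pts, matrix).reshape(-1, 2)

    violations = [
        (i, j)
        for i, j in itertools.combinations(range(len(warped)), 2)
        if np.linalg.norm(warped[i] - warped[j]) < pixel_threshold
    ]
    return len(violations), violations
```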

Outcome

To access the source code, click here.

Conclusion

We have successfully implemented our social distancing monitoring tool.

The SSD algorithm is tuned more for speed than accuracy, so a single pedestrian can end up with multiple bounding boxes in a frame, which creates false positives in our results.

To improve this, we can apply non-maximum suppression (NMS) to the detections, and we can raise the confidence threshold or tune the NMS threshold of our SSD model. We could also use a more accurate model to see how it performs in comparison.
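As a sketch of that NMS step, OpenCV ships a built-in cv2.dnn.NMSBoxes that filters overlapping detections. The box format, thresholds, and function name below are assumptions about how it could be wired in, not the project’s exact code.

```python
# Filter overlapping detections with OpenCV's built-in non-maximum suppression.
# Boxes are assumed to be [x, y, width, height] lists with one score per box.
import cv2
import numpy as np

def suppress_overlaps(boxes_xywh, scores, score_thresh=0.4, nms_thresh=0.3):
    """Return only the boxes that survive NMS."""
    if not boxes_xywh:
        return []
    indices = cv2.dnn.NMSBoxes(boxes_xywh, scores, score_thresh, nms_thresh)
    indices = np.array(indices).flatten()     # handles both (N,1) and flat outputs
    return [boxes_xywh[i] for i in indices]
```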

I will be creating a similar model with a different object detection algorithm, such as YOLO, and will compare the results of the two. Stay tuned!

References:

Demo Video Data

Pedestrian tracking using Deep Learning

Object Detection Models Tensorflow

Source Code

Roshan Mishra

