Hola people!! Do you love clicking pictures with different filters but are bored with the same old apps and want to develop your own filter? Then you are at the correct desk. Don’t worry, here we have an amazing filter developed in Python to amuse you all: a filter that adds a tongue to your smiling face. Excited to learn how? Then let’s get started!!
The trick is quite simple: the program first detects your face, then detects your smile, adds the tongue filter, and saves your filtered image.

The underlying concept used here is Computer Vision and Image Processing.
Overview
Here, to design our tool that adds a tongue to your smiling face, we have used Haar cascade files and the OpenCV library.
Haar cascade is a machine learning approach to object detection in which a cascade function is trained on a large number of positive and negative images. The trained cascade is then used to detect objects, such as faces, in other images.
OpenCV already ships with many pre-trained Haar classifiers, including the smile classifier used in this model.
To apply these pre-trained classifiers, follow these steps:
- Load the XML file of the required classifier.
- Load the image in grayscale mode, as OpenCV mostly operates in grayscale.
- Apply the classifier to the image.
Importing the necessary libraries
We’ll be using four main Python libraries:
- “cv2” for video capture, frame reading, pixel management, and colour management of the video,
- “NumPy” for mathematical manipulation of the video’s pixels, such as flipping the order of matrices,
- “datetime” for saving the date and time in file names and
- “os” for performing operations on directories.
import cv2            # reading images and video frames
import datetime
import os
import numpy as np
Detecting Face and Smile
Now, I’ll detect the face and smile using the cascade files. For this, we need to load the required XML classifiers and our input image (or video) in grayscale mode.
cascade_face = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cascade_smile = cv2.CascadeClassifier('haarcascade_smile.xml')

# if you want to add the sticker to a still image instead
img = cv2.imread('image.png')
Function to detect face and smile and add sticker
Now, we’ll find the face in the image. If faces are found, detectMultiScale returns the positions of the detected faces as rectangles (x, y, w, h). Once we have these locations, we can create a region of interest (ROI) for the face and apply smile detection on this ROI.
I have created a detection function that takes two arguments: the image and its grayscale (i.e. black and white) version. We use the Haar cascade to detect the face and store the result in a variable ‘face’. x_face and y_face are the top-left coordinates, and w_face and h_face are the width and depth of the face, respectively. Now, we’ll define a loop to detect a smile on the face.
def detection(grayscale, img):
    face = cascade_face.detectMultiScale(grayscale, 1.3, 5)
    for (x_face, y_face, w_face, h_face) in face:
        # region of interest in the grayscale image
        ri_grayscale = grayscale[y_face:y_face+h_face, x_face:x_face+w_face]
        # region of interest in the coloured image
        ri_color = img[y_face:y_face+h_face, x_face:x_face+w_face]
        # smile detection: 1.7 is the scale factor, 30 the min neighbours
        smile = cascade_smile.detectMultiScale(ri_grayscale, 1.7, 30)
        for (x_smile, y_smile, w_smile, h_smile) in smile:
            # uncomment to draw a rectangle around the smile
            # cv2.rectangle(ri_color, (x_smile, y_smile), (x_smile+w_smile, y_smile+h_smile), (255, 0, 130), 2)
            # load the tongue sticker as a 3-channel colour image
            img_tongue = cv2.imread('tongue.png')
            # depth and width of the tongue image
            dept_tong, width_tong = img_tongue.shape[:2]
            # make the width one third of the smile width
            width_for_tongue = w_smile / 3
            scale = width_for_tongue / width_tong
            # scale the depth by the same factor, maintaining the aspect ratio
            depth_for_tongue = scale * dept_tong
            width_for_tongue = int(width_for_tongue)
            depth_for_tongue = int(depth_for_tongue)
            img_tongue = cv2.resize(img_tongue, (width_for_tongue, depth_for_tongue))
            # turn the white background black
            img_tongue[np.where((img_tongue == [255, 255, 255]).all(axis=2))] = [0, 0, 0]
            # find the centre of the smile
            centre_smile_x = int(x_smile + w_smile / 2)
            centre_smile_y = int(y_smile + h_smile / 2)
            # find the top-left coordinates for the tongue
            x_tongue = int(centre_smile_x - width_for_tongue / 2)
            y_tongue = int(centre_smile_y)
            # region of interest for the tongue
            roi_tongue = ri_color[y_tongue:y_tongue+depth_for_tongue, x_tongue:x_tongue+width_for_tongue]
            # crop both images to the same dimensions
            min_d = min(roi_tongue.shape[0], img_tongue.shape[0])
            min_w = min(roi_tongue.shape[1], img_tongue.shape[1])
            roi_tongue = ri_color[y_tongue:y_tongue+min_d, x_tongue:x_tongue+min_w]
            img_tongue = img_tongue[0:min_d, 0:min_w]
            # blend the tongue onto the face region
            dst = cv2.addWeighted(roi_tongue, 0.9, img_tongue, 0.5, 0)
            ri_color[y_tongue:y_tongue+min_d, x_tongue:x_tongue+min_w] = dst
    return img
Now that the smile has been detected by the Haar cascade classifier, we loop over the four variables x_smile, y_smile, w_smile and h_smile that hold the smile coordinates, and on each iteration load the tongue sticker into the img_tongue variable.
We store the depth and width of the sticker in variables and then resize the tongue image, taking the width as one third of the smile width and scaling the depth to preserve the aspect ratio.
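The resizing arithmetic works out like this; the pixel sizes are made up purely for illustration:

```python
# suppose the detected smile is 90 px wide and the tongue
# sticker is 60 px wide by 40 px deep
w_smile = 90
width_tong, dept_tong = 60, 40

width_for_tongue = w_smile / 3          # one third of the smile width
scale = width_for_tongue / width_tong   # shrink factor applied to the sticker
depth_for_tongue = scale * dept_tong    # same factor keeps the aspect ratio

print(int(width_for_tongue), int(depth_for_tongue))  # 30 20
```

The width and depth shrink by the same factor, so the resized sticker keeps the original 60:40 proportions.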
Then, we use NumPy indexing to scan the tongue image and replace the white colour with black. Note that white pixels are [255, 255, 255] and black pixels are [0, 0, 0].
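The replacement can be seen in isolation on a tiny 3-channel array (the pixel values here are illustrative):

```python
import numpy as np

# a 2x2 "image": one pure-white pixel and three coloured ones
frame = np.array([[[255, 255, 255], [10, 20, 30]],
                  [[0, 0, 255],     [255, 255, 254]]], dtype=np.uint8)

# the mask is True only where all three channels equal 255
mask = (frame == [255, 255, 255]).all(axis=2)
frame[np.where(mask)] = [0, 0, 0]

print(frame[0, 0])  # the white pixel is now black
print(frame[1, 1])  # near-white pixels are untouched
```

Only exact [255, 255, 255] pixels match the mask, which is why the sticker needs a pure-white background for this trick to work.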
Then, we define the x and y coordinates for the tongue from the centre of the smile and the dimensions of the tongue, and define the tongue’s region of interest.
I have cropped the two images so that their dimensions match and stored the values in the respective variables.
The function addWeighted() calculates the weighted sum of two arrays. Its signature is cv2.addWeighted(src1, alpha, src2, beta, gamma[, dst[, dtype]]) → dst.
Parameters:
src1 – first input array
alpha – weight of the first array elements
src2 – second input array of the same size and channel number as src1
beta – weight of the second array elements
gamma – scalar added to each sum
dst – output array that has the same size and number of channels as the input arrays
dtype – optional depth of the output array; when both input arrays have the same depth, dtype can be set to -1, which will be equivalent to src1.depth().
The function addWeighted calculates the weighted sum of two arrays as follows:

dst(I) = saturate(src1(I) * alpha + src2(I) * beta + gamma)

where ‘I’ is a multi-dimensional index of array elements. In case of multi-channel arrays, each channel is processed independently.
Capturing the Image
Now, I have created a variable to capture the video and set the path for storing the output files. If the directory already exists, os.mkdir raises a FileExistsError, so the call is placed in a try/except block to handle this case.
vc = cv2.VideoCapture(0)

# current working directory
path = os.getcwd()

# create a directory for the output images
dirName = 'tempimage_folder'
try:
    os.mkdir(dirName)
except FileExistsError:
    print("Directory", dirName, "already exists")
path = path + '/' + dirName
Now, after creating the directory, we capture the image and convert it to grayscale using the cv2.cvtColor() function. Then, we pass the arguments to the detection function and store the result in the final variable.
cnt = 0
while cnt < 500:
    # read the status of the camera and the frame
    _, img = vc.read()
    # convert the image to grayscale
    grayscale = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # result from the detection function
    final = detection(grayscale, img)
    # show the captured image
    cv2.imshow('Video', final)
    # name the image with the current time so each frame gets a new name
    string = "pic" + str(datetime.datetime.now()) + ".jpg"
    # save the image
    cv2.imwrite(os.path.join(path, string), final)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    cnt += 1
vc.release()
cv2.destroyAllWindows()
After getting the result, the next task is to display the image using the cv2.imshow() function and then save it with a name containing the current date and time. The file is saved using the cv2.imwrite() function, which writes the current frame to disk. vc.release() releases the capturing device, and finally cv2.destroyAllWindows() closes all the windows.
Conclusion
Finally, we have implemented our own tool to add a tongue sticker to your face. If you like my work, please feel free to share it, and also share your views in the comments section.
-Suniti Jain