Hey Guys!! In today’s blog, I will be explaining real-time “Facial Sentiment Analysis” using TensorFlow, Keras, and OpenCV. Basically, it means identifying a person’s current mood from their facial expression.
Dataset Required
For this, I will be using a Kaggle dataset for emotion detection. The dataset contains face images labeled with 7 emotions, listed below:
- Happy
- Sad
- Neutral
- Disgust
- Fear
- Surprise
- Angry

It has two folders, train and validation, and each of them contains one subfolder per emotion.
In this tutorial, I will be training my model on the following 5 emotions (a quick folder sanity check is shown right after this list):
- Happy
- Sad
- Angry
- Neutral
- Surprise
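If you train on only these 5 emotions, the images\train and images\validation folders should contain just those five subfolders. Here is a minimal sanity-check sketch (assuming the folder layout used later in this post) that lists each emotion folder and counts its images:

import os

train_dir = r'images\train'   # same path used in the training script below

# Print every emotion subfolder and the number of images it contains,
# so you can confirm which classes are actually present.
for emotion in sorted(os.listdir(train_dir)):
    emotion_dir = os.path.join(train_dir, emotion)
    if os.path.isdir(emotion_dir):
        print(emotion, len(os.listdir(emotion_dir)))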
Training the Model
To train the model, I’ll first import the necessary libraries, then load and augment the dataset according to our requirements, and finally train the model and save it.
Importing necessary Libraries
from __future__ import print_function
import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, BatchNormalization, Conv2D, MaxPooling2D
from keras.optimizers import RMSprop, SGD, Adam
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
import os
Loading Dataset (Training/Validation)
I’ll start by storing the paths for the train and validation datasets in two variables. I’ll also define a few constants used throughout the script: the input image size (48×48 grayscale, matching the dataset), the number of emotion classes, and the batch size (the exact batch size is an assumption here; 32 is a common choice).

num_classes = 5               # Happy, Sad, Angry, Neutral, Surprise
img_rows, img_cols = 48, 48   # the dataset images are 48x48 grayscale
batch_size = 32               # assumption: a typical batch size; adjust to taste

train_data_dir = r'images\train'
validation_data_dir = r'images\validation'
Next, I set up data augmentation for the training set (rescaling, rotation, shear, zoom, width/height shifts, horizontal flips, and nearest-neighbour fill); the validation set is only rescaled. Both are then loaded as grayscale image generators.
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=30,
    shear_range=0.3,
    zoom_range=0.3,
    width_shift_range=0.4,
    height_shift_range=0.4,
    horizontal_flip=True,
    fill_mode='nearest')

validation_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    color_mode='grayscale',
    target_size=(img_rows, img_cols),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=True)

validation_generator = validation_datagen.flow_from_directory(
    validation_data_dir,
    color_mode='grayscale',
    target_size=(img_rows, img_cols),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=True)
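Because flow_from_directory assigns class indices from the subfolder names in alphabetical order, it is worth printing the mapping once so you know which index corresponds to which emotion (this is also the order used for class_labels in the testing script later):

# The generator maps each subfolder name to a class index in alphabetical order.
print(train_generator.class_indices)
# e.g. {'Angry': 0, 'Happy': 1, 'Neutral': 2, 'Sad': 3, 'Surprise': 4}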
Next, I will define the CNN architecture, using the ELU activation function throughout. The network consists of 7 blocks: four convolutional blocks whose filter counts double from 32 to 64, 128, and 256, followed by two fully connected blocks and a final softmax classification block.
model = Sequential()

# Block-1: two 32-filter convolutional layers
model.add(Conv2D(32, (3, 3), padding='same', kernel_initializer='he_normal',
                 input_shape=(img_rows, img_cols, 1)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(32, (3, 3), padding='same', kernel_initializer='he_normal',
                 input_shape=(img_rows, img_cols, 1)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

# Block-2: two 64-filter convolutional layers
model.add(Conv2D(64, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(64, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

# Block-3: two 128-filter convolutional layers
model.add(Conv2D(128, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(128, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

# Block-4: two 256-filter convolutional layers
model.add(Conv2D(256, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(256, (3, 3), padding='same', kernel_initializer='he_normal'))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

# Block-5: flatten and first fully connected layer
model.add(Flatten())
model.add(Dense(64, kernel_initializer='he_normal'))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))

# Block-6: second fully connected layer
model.add(Dense(64, kernel_initializer='he_normal'))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))

# Block-7: softmax output layer, one unit per emotion class
model.add(Dense(num_classes, kernel_initializer='he_normal'))
model.add(Activation('softmax'))
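To double-check that the architecture looks as intended before training, it can help to print the layer summary:

# Prints every layer, its output shape, and the parameter count.
model.summary()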
In addition, I will be using ModelCheckpoint, EarlyStopping, and ReduceLROnPlateau as callbacks, so the best model is saved automatically and training stops once the validation loss stops improving. I’ll compile the model with the Adam optimizer and categorical cross-entropy loss.
checkpoint = ModelCheckpoint('Emotion_little_vgg.h5',
                             monitor='val_loss',
                             mode='min',
                             save_best_only=True,
                             verbose=1)

earlystop = EarlyStopping(monitor='val_loss',
                          min_delta=0,
                          patience=3,
                          verbose=1,
                          restore_best_weights=True)

reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                              factor=0.2,
                              patience=3,
                              verbose=1,
                              min_delta=0.0001)

callbacks = [earlystop, checkpoint, reduce_lr]

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=0.001),
              metrics=['accuracy'])

nb_train_samples = 24176
nb_validation_samples = 3006
epochs = 25

history = model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    callbacks=callbacks,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
Thanks to the ModelCheckpoint callback, the best model is saved as “Emotion_little_vgg.h5” during training.
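Optionally, the history object returned by fit_generator can be used to plot the training curves. A minimal sketch with matplotlib (note that older Keras versions record the metrics under 'acc'/'val_acc' instead of 'accuracy'/'val_accuracy'):

import matplotlib.pyplot as plt

# history.history holds the per-epoch metrics recorded during training.
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()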
Testing of the Model
After training, I will test the model to check whether it predicts the correct emotion.
Import Necessary Libraries
from keras.models import load_model
from time import sleep
from keras.preprocessing.image import img_to_array
from keras.preprocessing import image
import cv2
import numpy as np
Now, I will load the “haarcascade_frontalface_default” Haar cascade for frontal face detection, along with our saved model “Emotion_little_vgg.h5” for classifying emotions.
face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
classifier = load_model('Emotion_little_vgg.h5')
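If the XML file is not in your working directory, note that the pip package opencv-python bundles the Haar cascades and exposes their folder via cv2.data.haarcascades. A small sketch, plus a check that the cascade actually loaded:

# Load the cascade from OpenCV's bundled data folder instead of the working directory.
face_classifier = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# empty() returns True if the cascade failed to load (e.g. a wrong path).
assert not face_classifier.empty(), 'Failed to load Haar cascade'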
Now, I will capture frames from the webcam, detect each face, and predict its emotion in real time. Each detected face is cropped, resized to 48×48 grayscale, normalized, and passed to the model; the predicted label is then drawn on the frame. Press “q” to quit.
class_labels = ['Angry', 'Happy', 'Neutral', 'Sad', 'Surprise']

cap = cv2.VideoCapture(0)

while True:
    # Grab a single frame of video
    ret, frame = cap.read()
    labels = []
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)

    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_gray = cv2.resize(roi_gray, (48, 48), interpolation=cv2.INTER_AREA)

        if np.sum([roi_gray]) != 0:
            roi = roi_gray.astype('float') / 255.0
            roi = img_to_array(roi)
            roi = np.expand_dims(roi, axis=0)

            # make a prediction on the ROI, then look up the class label
            preds = classifier.predict(roi)[0]
            label = class_labels[preds.argmax()]
            label_position = (x, y)
            cv2.putText(frame, label, label_position, cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
        else:
            cv2.putText(frame, 'No Face Found', (20, 60), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)

    cv2.imshow('Emotion Detector', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
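The same pipeline also works on a single image file instead of the webcam. Here is a minimal sketch (the file name test.jpg is just a placeholder, and it reuses face_classifier, classifier, and class_labels from above):

# Run face detection + emotion classification on one image file.
frame = cv2.imread('test.jpg')   # placeholder file name
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_classifier.detectMultiScale(gray, 1.3, 5):
    roi = cv2.resize(gray[y:y+h, x:x+w], (48, 48)).astype('float') / 255.0
    roi = np.expand_dims(img_to_array(roi), axis=0)
    label = class_labels[classifier.predict(roi)[0].argmax()]
    cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
    cv2.putText(frame, label, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)

cv2.imshow('Emotion Detector', frame)
cv2.waitKey(0)
cv2.destroyAllWindows()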
[Sample input image and the annotated output image with the predicted emotion]
Conclusion
Yeah!! We have finally performed real-time facial sentiment analysis.
You can access the source code here.
–Shruti Sharma