Hello peeps! Hope you are doing great! Today I thought of a process to restore colors in black and white images, and soon I started implementing it. I'm sure you must have wondered about it too, so here is a handy blog post to help you do the same.
In this tutorial, you will learn how to colorize black and white images using OpenCV, Deep Learning, and Python.
Image colorization is the process of taking an input grayscale (black and white) image and producing an output colorized image that represents the semantic colors and tones of the input.
There is a well-trained deep learning model that has learned what colored photographs look like when they are converted to grayscale. Note that each color the human eye perceives depends heavily on the intensity of the light it arrives with. Studying the pattern between a color and its converted black and white version can help in restoring the color.
The LAB color space is a way of storing images, just like RGB. L stands for luminosity (more than 90% of the receptors in our eyes are light receptors; the rest are color receptors), A encodes the red-green axis, and B the blue-yellow axis. This format helps us study that pattern between intensity and color easily.
Notice how in this sample image, the color of Thor’s armor has been restored correctly to red. The trees and the complexion has been colorized correctly too!
Importing the libraries
We will be using NumPy for the mathematics and cv2 (OpenCV) for image processing.
import numpy as np
import cv2
Loading the model
Next, we’ll define variables that hold the paths to our Caffe prototxt file, the pre-trained model, the NumPy cluster center points file, and the input black and white image.
protxt = "model/colorization_deploy_v2.prototxt"
model = "model/colorization_release_v2.caffemodel"
points = "model/pts_in_hull.npy"
image = "images/bx1.jpeg"
Next, we’ll load our Caffe model and cluster center points. OpenCV can read Caffe models via the cv2.dnn.readNetFromCaffe function. We use np.load() because the cluster point file is in NumPy format.
net = cv2.dnn.readNetFromCaffe(protxt, model)
pts = np.load(points)
Now, we’ll be loading the centers for ab channel quantization. We’ll treat each of the points as 1×1 convolutions and add them to the model.
class8 = net.getLayerId("class8_ab")
conv8 = net.getLayerId("conv8_313_rh")
pts = pts.transpose().reshape(2, 313, 1, 1)
net.getLayer(class8).blobs = [pts.astype("float32")]
net.getLayer(conv8).blobs = [np.full([1, 313], 2.606, dtype="float32")]
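The transpose-and-reshape step above can be sketched in isolation with a dummy array standing in for the real pts_in_hull.npy file (which holds 313 cluster centers, each an (a, b) pair):

```python
import numpy as np

# Hypothetical stand-in for pts_in_hull.npy: 313 cluster centers, each an (a, b) pair.
pts = np.zeros((313, 2), dtype="float32")

# The network expects the centers as 1x1 convolution kernels: one 313-filter bank
# per ab channel, so (313, 2) becomes (2, 313) and then (2, 313, 1, 1).
kernels = pts.transpose().reshape(2, 313, 1, 1)
print(kernels.shape)  # (2, 313, 1, 1)
```

The trailing (1, 1) dimensions are what make each cluster center act as a 1×1 convolution when assigned as a layer blob.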
Pre-processing the image
Now, we’ll load the input image using cv2.imread(), then scale it to the range [0, 1] and convert it to type float. These calculations are specific to the deep learning model chosen. The image is then converted to the LAB color space.
image = cv2.imread(image)
scaled = image.astype("float32") / 255.0
lab = cv2.cvtColor(scaled, cv2.COLOR_BGR2LAB)
Next, we’ll resize the image to 224×224 (the input dimensions the network expects) and then split off the ‘L’ channel. Just a reminder: the intensity of light plays a major role in determining color.
resized = cv2.resize(lab, (224, 224))
L = cv2.split(resized)[0]
L -= 50
Now we can pass the input L channel through the network to predict the ab channels.
net.setInput(cv2.dnn.blobFromImage(L))
ab = net.forward()[0, :, :, :].transpose((1, 2, 0))
We’ll resize the predicted ‘ab’ volume to the same dimensions as our input image.
ab = cv2.resize(ab, (image.shape[1], image.shape[0]))
Post-processing the image
Now, it’s time for post-processing. Post-processing includes:
- Grabbing the L channel from the original input image and concatenating it with the predicted ab channels, which results in the colorized image.
- Converting the colorized image from the LAB color space to BGR
- Clipping any pixel intensities that fall outside the range [0, 1]
- Bringing the pixel intensities back into the range [0, 255]. We divided by 255 during the pre-processing steps, so during post-processing we multiply by 255. The scaling and “uint8” conversion aren’t strictly required, but they help the code work across OpenCV 3.4.x and 4.x versions.
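The clip-and-rescale steps in the list above can be sketched on a tiny hypothetical array whose values have drifted slightly outside [0, 1] after the color conversion:

```python
import numpy as np

# Hypothetical float pixel values slightly outside [0, 1] after color conversion.
img = np.array([[-0.2, 0.5, 1.3]], dtype="float32")

clipped = np.clip(img, 0, 1)           # drop out-of-range intensities
out = (255 * clipped).astype("uint8")  # back to the usual 8-bit range
print(out)  # [[  0 127 255]]
```

Note that astype("uint8") truncates rather than rounds, which is why 0.5 maps to 127 and not 128.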
So, we’ll start by grabbing the L channel and concatenating it with predicted ab channels.
L = cv2.split(lab)[0]
colorized = np.concatenate((L[:, :, np.newaxis], ab), axis=2)
Next, we’ll convert the LAB image back to BGR and restore the values to the 0-255 range.
colorized = cv2.cvtColor(colorized, cv2.COLOR_LAB2BGR)
colorized = np.clip(colorized, 0, 1)
colorized = (255 * colorized).astype("uint8")
Finally, both our original image and colorized image are displayed on the screen!
cv2.imshow("Original", image)
cv2.imshow("Colorized", colorized)
cv2.waitKey(0)
cv2.destroyAllWindows()
We have successfully built and executed the code to restore colors in black and white images. If you liked my work, feel free to share it, and use the comments section for feedback.