TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks and platforms. It is a symbolic math library and is also used for machine learning applications such as neural networks. The library is developed and maintained by the Google Brain team at Google, and is written mainly in C++ and Python.

This library can reduce what would otherwise be thousands of lines of code to just a few hundred. Take, for example, a neural network: we can build one from library functions without hand-coding each unit of it.

TensorFlow comes in two variants: a CPU-only variant and a GPU variant.

Let’s now understand how it works by taking some examples.

TensorFlow is basically a software library for numerical computation using data flow graphs.

Take an example of a graph:

Here a and b are input tensors, c is an output tensor, and add is the operation.
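The a + b = c graph above can be sketched in code. This is a minimal sketch: the names a, b, and the fed values 3.0 and 4.0 are illustrative, and tf.compat.v1 is used so the TensorFlow 1.x-style graph also runs under a TensorFlow 2.x install.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # use graph mode, as in TF 1.x

a = tf.compat.v1.placeholder(tf.float32, name="a")  # input tensor a
b = tf.compat.v1.placeholder(tf.float32, name="b")  # input tensor b
c = tf.add(a, b, name="add")                        # the "add" operation produces output tensor c

with tf.compat.v1.Session() as session:
    result = session.run(c, feed_dict={a: 3.0, b: 4.0})
print(result)  # 7.0
```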

First we will understand different components of it.

NOTE: To import TensorFlow in your program, write the following line:

import tensorflow

So when we have to use any function of TensorFlow, say multiply, we write it as:

tensorflow.multiply()

We can also import tensorflow under a shorter alias:

import tensorflow as tf

So the above function call can also be written as:

tf.multiply()

In TensorFlow, placeholders hold the data that will be fed into the graph at run time. (Note: placeholders and sessions belong to the TensorFlow 1.x API; under TensorFlow 2.x they are available through tf.compat.v1.)

X_1 = tf.placeholder(tf.float32, name = "X_1") 
X_2 = tf.placeholder(tf.float32, name = "X_2")

Here tf.float32 tells the function that the placeholder will hold values of 32-bit floating-point type.

Now let's create a computation graph:

multiply = tf.multiply(X_1, X_2, name = "multiply")

This graph multiplies the two inputs X_1 and X_2 element-wise.

But to execute this operation we need to understand the concept of a session in TensorFlow.

To execute operations in the graph, we first have to create a session, which in TensorFlow is done with tf.Session(). We then ask the session to execute operations on our computational graph by calling its run function. The feed_dict parameter of run supplies the values for the placeholders.

with tf.Session() as session:
    result = session.run(multiply, feed_dict={X_1: [1, 2, 3], X_2: [4, 5, 6]})
    print(result)

So the complete code is:

import tensorflow as tf

X_1 = tf.placeholder(tf.float32, name = "X_1")
X_2 = tf.placeholder(tf.float32, name = "X_2")

multiply = tf.multiply(X_1, X_2, name = "multiply")

with tf.Session() as session:
    result = session.run(multiply, feed_dict={X_1:[1,2,3], X_2:[4,5,6]})
    print(result)

The output would be:

[ 4. 10. 18.]
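As an aside, under TensorFlow 2.x the same element-wise multiply can be run eagerly, with no graph, session, or feed_dict. A sketch, assuming a TensorFlow 2.x install with eager execution enabled (the default):

```python
import tensorflow as tf

# Eager execution: the operation runs immediately and returns a
# concrete tensor, so no placeholders or session are needed.
product = tf.multiply([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(product.numpy())  # [ 4. 10. 18.]
```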

We can also give a placeholder a fixed shape.

X = tf.placeholder(tf.int32, shape=(3, 1))
Y = tf.placeholder(tf.int32, shape=(1, 3))
Z = tf.matmul(X, Y)

with tf.Session() as session:
    print(session.run(Z, feed_dict={X: [[3], [2], [1]], Y: [[1, 2, 3]]}))

This will give an output tensor of shape 3×3:

[[3 6 9]
 [2 4 6]
 [1 2 3]]
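A fixed shape also acts as a safety check: feeding data whose shape does not match the declaration raises an error. A small sketch of this behavior (using tf.compat.v1 so it also runs under TensorFlow 2.x; the variable names are illustrative):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

X = tf.compat.v1.placeholder(tf.int32, shape=(3, 1))  # exactly 3 rows, 1 column
Y = tf.compat.v1.placeholder(tf.int32, shape=(1, 3))  # exactly 1 row, 3 columns
Z = tf.matmul(X, Y)                                   # (3,1) x (1,3) -> (3,3)

with tf.compat.v1.Session() as session:
    try:
        # Wrong shape for X: we feed (1, 3) instead of the declared (3, 1).
        session.run(Z, feed_dict={X: [[3, 2, 1]], Y: [[1, 2, 3]]})
        ok = True
    except ValueError:
        ok = False
print("shape mismatch rejected:", not ok)
```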

A simple TensorFlow program is given below. The following block of code recognizes which digit (0, 1, …, 9) is drawn in an image.

import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

Now let's break down the program step by step.

  1. First we import tensorflow and assign it the short name tf.
  2. Next we get a reference to the MNIST dataset module, tf.keras.datasets.mnist.
  3. Next we load the dataset with load_data().
  4. The call returns the data already split into training and test sets.
  5. Next we normalize the pixel values, which range from 0 to 255, down to [0, 1] (feature scaling).
  6. Next we create the model using the high-level Keras API: a Flatten layer turns each 28×28 image into a vector, a Dense hidden layer with ReLU activation learns features, a Dropout layer regularizes, and a 10-unit softmax layer outputs one probability per digit.
  7. Then we call compile to configure the network with the adam optimizer and the sparse_categorical_crossentropy loss, since the network outputs a probability for each of the 10 classes (0-9).
  8. Then we train the network for 5 epochs on x_train and y_train.
  9. Then we evaluate the network's accuracy on x_test and y_test.
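The normalization in step 5 can be checked in isolation. A small sketch using NumPy only (no TensorFlow needed; the sample pixel values are illustrative):

```python
import numpy as np

# MNIST pixels are integers in [0, 255]; dividing by 255.0 rescales
# them to floats in [0, 1], which helps gradient-based training.
raw = np.array([[0, 64, 128, 255]], dtype=np.uint8)
scaled = raw / 255.0
print(scaled.min(), scaled.max())  # 0.0 1.0
```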
Categories: Deep Learning
