Besides accuracy, which we covered in an earlier blog, there are other performance measures: Precision, Recall, F1 score, and ROC. We will discuss each one with an example and its use.


Precision

Of the transactions classified as fraudulent (positive), how many are actually fraudulent?

Precision = (True Positive) / (True Positive + False Positive)

Recall

Of the transactions that are actually fraudulent (positive), how many are classified as fraudulent?

Recall = (True Positive) / (True Positive + False Negative)
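Here is a minimal sketch in plain Python of how precision and recall can be computed from true and predicted labels. The function name and the toy labels are made up for illustration, with 1 standing for fraudulent (positive) and 0 for non-fraudulent.

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall, treating label 1 as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy example: 8 transactions, 4 of them actually fraudulent.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```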

Which performance measure (accuracy, precision, or recall) is better suited to the problem below, and how do we know whether the model is doing well?

Let's say a financial institution (FI) has the statistics below.

Total Transaction=10000
Fraudulent Transactions=20
Non-Fraudulent Transactions=9980

The top priority for any FI is to identify the fraudulent transactions in the dataset. The problem is to determine whether the model is well suited for this purpose.

When we train a machine learning model on such an imbalanced dataset, it may become biased toward the majority class and start predicting every transaction as non-fraudulent.

Then how do we know whether the model is doing well or not?

Here accuracy = (TP + TN) / (TP + FN + FP + TN) = 9980/10000 = 0.998. The model's accuracy is 99.8%, yet it still does not serve the financial company's purpose: they need to predict fraudulent transactions accurately, while the existing model only ever predicts "non-fraudulent".

Precision = TP / (TP + FP) = 0 (since TP = 0)

Recall = TP / (TP + FN) = 0 / 20 = 0

Precision and recall are both zero, meaning the model is not efficient enough. It requires more feature engineering, different algorithms, or different parameters so that the retrained model can achieve higher recall or higher precision.
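As a sanity check, below is a small sketch of the scenario above: a model that always predicts non-fraudulent on 10,000 transactions, 20 of which are fraudulent. The listing is plain Python and the variable names are my own; it simply reproduces the accuracy, precision, and recall numbers discussed above.

```python
# 10,000 transactions: 20 fraudulent (1), 9,980 non-fraudulent (0).
y_true = [1] * 20 + [0] * 9980
# A biased "model" that always predicts non-fraudulent.
y_pred = [0] * 10000

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # 0
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # 9980
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # 0
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # 20

accuracy = (tp + tn) / len(y_true)                 # 0.998
precision = tp / (tp + fp) if (tp + fp) else 0.0   # 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0      # 0.0
print(accuracy, precision, recall)
```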

Find the accuracy, precision, and recall values for the confusion matrix below.

Confusion matrix (fraudulent = positive class):
True Positive (TP) = 60
False Positive (FP) = 140
False Negative (FN) = 40
True Negative (TN) = 9760
Total (N) = 10000

accuracy = (TP + TN) / N = (9760 + 60) / 10000 = 0.982

precision = TP / (TP + FP) = 60 / (60 + 140) = 0.30

recall = TP / (TP + FN) = 60 / 100 = 0.60
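The same exercise can be checked with a short Python snippet; the counts below are taken directly from the confusion matrix above.

```python
# Counts from the confusion matrix: TP=60, FP=140, FN=40, TN=9760.
tp, fp, fn, tn = 60, 140, 40, 9760
n = tp + fp + fn + tn  # 10000

accuracy = (tp + tn) / n      # 0.982
precision = tp / (tp + fp)    # 0.30
recall = tp / (tp + fn)       # 0.60
print(accuracy, precision, recall)
```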

In some situations, we know we want to maximise either recall or precision at the expense of the other metric. For example, in preliminary disease screening of patients for follow-up examinations, we want to find all patients who actually have the disease, so recall should be close to one and we can accept low precision.


However, in cases where we want to keep both precision and recall high, or balance between them, the F1 score comes into the picture.

F1 Score

F1 Score = 2 * (Precision * Recall) / (Precision + Recall)

Let's say two models have been developed for the same problem and their performance measures have been calculated.

Model-1
Precision-70%
Recall-60%

Model-2
Precision-80%
Recall-50%

Which model is better, Model-1 or Model-2 ?

F1 score for Model-1 = 2 * (70 * 60) / (70 + 60) = 64.6

F1 score for Model-2 = 2 * (80 * 50) / (80 + 50) = 61.5

Model-1 is better, as its F1 score is higher than that of Model-2.
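For completeness, here is a tiny Python sketch that computes the F1 scores of both models from their precision and recall values (given in percent); the helper function name is my own.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(f1_score(70, 60))  # ~64.6 for Model-1
print(f1_score(80, 50))  # ~61.5 for Model-2
```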
