Here we discuss the ROC-AUC curve (Receiver Operating Characteristic - Area Under the Curve) and how it is used to evaluate the performance of a classification model.

ROC-AUC

• The ROC curve is a graphical plot that illustrates the diagnostic ability of a classification model.
• ROC is a probability curve, created by plotting the TPR (True Positive Rate) against the FPR (False Positive Rate) at various threshold settings.
• It tells how capable the model is of distinguishing between classes.
• The higher the AUC, the better the model is at predicting "Yes" as "Yes" and "No" as "No".
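The AUC summarized in the bullets above can be computed directly from its probabilistic interpretation: it is the chance that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A minimal pure-Python sketch (the labels and scores below are made-up illustration data):

```python
def roc_auc(y_true, scores):
    """AUC via the pairwise-ranking interpretation:
    the probability that a random positive outranks a random negative.
    Ties count as half a win."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 positives, 3 negatives
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
print(roc_auc(y_true, scores))  # 8 of 9 positive/negative pairs are ranked correctly, ≈ 0.889
```

In practice you would use a library routine (e.g. scikit-learn's `roc_auc_score`), but the pairwise definition makes the "distinguishing between classes" idea concrete.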

Below is a ROC curve with the area under the curve (AUC) marked.

Let’s revisit a few terms before diving deeper into the ROC curve.

What is TPR and FPR ?

TPR (True Positive Rate) is also called Recall or Sensitivity.
It is defined as the fraction of actual positives that are classified as positive.

TPR = TP / (TP + FN)
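The formula above translates directly into code; a small sketch with made-up confusion-matrix counts:

```python
def tpr(tp, fn):
    # Recall/Sensitivity: fraction of actual positives classified as positive
    return tp / (tp + fn)

# e.g. 80 true positives and 20 false negatives out of 100 actual positives
print(tpr(80, 20))  # 0.8
```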

FPR (False Positive Rate) is defined as "1 - Specificity", where Specificity is the fraction of actual negatives that are classified as negative.

Specificity = TN / (TN + FP)
FPR = 1 - TN / (TN + FP)
FPR = FP / (TN + FP)
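The complementary relationship between FPR and Specificity can be checked in a few lines (again with illustrative counts):

```python
def specificity(tn, fp):
    # Fraction of actual negatives classified as negative
    return tn / (tn + fp)

def fpr(tn, fp):
    # FPR = 1 - Specificity = FP / (TN + FP)
    return fp / (tn + fp)

# e.g. 90 true negatives and 10 false positives
print(specificity(90, 10))  # 0.9
print(fpr(90, 10))          # 0.1, i.e. 1 - 0.9
```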

Below is how these terms relate to each other as the classification threshold varies:

Sensitivity and Specificity trade off against each other: lowering the threshold raises Sensitivity but lowers Specificity.
TPR and FPR move in the same direction: lowering the threshold increases both.
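These relationships can be seen by sweeping the threshold over a small made-up set of labels and scores and printing the resulting (TPR, FPR) pairs; each such pair is one point on the ROC curve:

```python
def rates(y_true, scores, thresh):
    """Return (TPR, FPR) when predicting positive for score >= thresh."""
    tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= thresh)
    fn = sum(1 for y, s in zip(y_true, scores) if y == 1 and s < thresh)
    fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= thresh)
    tn = sum(1 for y, s in zip(y_true, scores) if y == 0 and s < thresh)
    return tp / (tp + fn), fp / (fp + tn)

y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
# Lowering the threshold raises TPR and FPR together
for t in (0.9, 0.5, 0.2):
    print(t, rates(y, s, t))
```

As the threshold drops, more examples are called positive, so both the true-positive and false-positive counts grow.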

Comparison of ROC curves :

When AUC is 1, the model is called the best or ideal model, as it can distinguish between the classes perfectly. When AUC is 0, the model is the worst case: it ranks every positive below every negative, i.e. its predictions are completely reversed. So AUC lies between 0 and 1.

This is the ideal model, which is able to predict all the classes correctly.

A real model introduces Type-1 (false positive) and Type-2 (false negative) errors. When AUC is 0.7, there is a 70% chance that the model ranks a randomly chosen positive example above a randomly chosen negative one, i.e. a 70% chance that it can distinguish the classes.

When AUC is near 0.5, the model has no discrimination ability: it performs no better than random guessing.

When AUC approaches 0, the model is reciprocating the classes: it predicts negatives as positives and vice versa.
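A quick sketch (with the pairwise definition of AUC written inline, on made-up data) shows what such a "reversed" model looks like, and that flipping its scores turns the worst model into the best one:

```python
def auc(y, s):
    # Pairwise AUC: fraction of positive/negative pairs ranked correctly
    pos = [v for t, v in zip(y, s) if t == 1]
    neg = [v for t, v in zip(y, s) if t == 0]
    return sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

y = [1, 1, 0, 0]
reversed_scores = [0.1, 0.2, 0.8, 0.9]  # every positive scored below every negative
print(auc(y, reversed_scores))                   # 0.0 -> worst model
print(auc(y, [1 - v for v in reversed_scores]))  # 1.0 -> flipping the scores fixes it
```

This is why AUC = 0 is "perfectly wrong" rather than uninformative; it is AUC = 0.5 that carries no information.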
