CONFUSION MATRIX — Delving into its two types of errors and its application in Cyber Crime cases
In the domain of Data Science and Machine Learning, it is important to visualize a model’s performance so as to understand the model more thoroughly.
Furthermore, it is important to understand the pros and cons of our model. We need to know the different types of errors the model can make and, even more so, which type of error would be more costly for the problem at hand. This is where the confusion matrix plays a key role.
WHAT EXACTLY IS CONFUSION MATRIX?
So, the million-dollar question arises: what exactly is a Confusion Matrix, and why the term “Confusion”?
A confusion matrix is a matrix used to determine the performance of a classification model (or “classifier”) on a given set of test data for which the true values are known. Since it shows the errors in the model’s performance in the form of a matrix, it is also known as an error matrix.
THE LAYOUT OF THE MATRIX:
The matrix has two dimensions, predicted values and actual values, along with the total number of predictions. Each row of the matrix represents the instances in an actual class while each column represents the instances in a predicted class, or vice versa.
Predicted values are the values output by the model, and actual values are the true values for the given observations.
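As a quick illustration, here is a minimal sketch of how such a matrix can be computed with scikit-learn; the labels and predictions below are made-up toy data, not taken from any real model.

```python
from sklearn.metrics import confusion_matrix

# Hypothetical ground-truth labels and model predictions (1 = "Yes", 0 = "No")
y_actual    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_predicted = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# labels=[1, 0] puts the positive class first, so the layout matches the
# description above: rows = actual values, columns = predicted values
cm = confusion_matrix(y_actual, y_predicted, labels=[1, 0])
print(cm)
# [[4 1]
#  [1 4]]
```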
WHY CONFUSION MATRIX?
For a binary classification problem (judging whether an instance is positive or negative), the prediction does not always agree with the actual outcome, and this is where the confusion matrix comes in. The confusion matrix shows the ways in which your classification model is confused when it makes predictions.
It gives us insight not only into the errors being made by our classifier but, more importantly, into the types of errors that are being made. It is this breakdown that overcomes the limitation of using classification accuracy alone.
THE DETAILED STRUCTURE:
                Predicted: Yes      Predicted: No
Actual: Yes     True Positive       False Negative
Actual: No      False Positive      True Negative
Matrix’s Columns: The first column represents every instance the classifier predicted to be “Yes” while the second column represents the instances predicted to be “No”.
Matrix’s Rows: The first row represents all of the instances that are actually “Yes” and the second row represents all of the instances that are actually “No”.
Each cell in the matrix represents one of four possible outcomes used to evaluate our model’s predictions.
Now, let’s explain the cells with a prediction example: predicting the winner of the recent Cricket World Cup is taken as a positive prediction, and predicting the loser as a negative one.
True Positive (TP): The predicted value matches the actual value. The actual value was positive and the model predicted a positive value.
For example, you had predicted that England would win the World Cup, and it won.
True Negative (TN): The predicted value matches the actual value. The actual value was negative and the model predicted a negative value.
For example, you had predicted that New Zealand would not win the World Cup, and it lost.
False Positive (FP): The prediction is wrong. The actual value was negative but the model predicted a positive value.
For example, you had predicted that India would win, but it lost.
It is also known as a Type I error.
False Negative (FN): The prediction is wrong. The actual value was positive but the model predicted a negative value.
For example, you had predicted that England would not win, but it won.
It is also known as a Type II error.
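In code, these four counts can be read straight off the matrix. Here is a minimal sketch with scikit-learn, reusing the toy labels from the earlier example:

```python
from sklearn.metrics import confusion_matrix

# Same hypothetical labels as before (1 = positive, 0 = negative)
y_actual    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_predicted = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# With the default label order [0, 1], ravel() returns TN, FP, FN, TP
tn, fp, fn, tp = confusion_matrix(y_actual, y_predicted).ravel()
print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")  # TP=4, TN=4, FP=1, FN=1
```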
CALCULATIONS USING CONFUSION MATRIX
CLASSIFICATION ACCURACY:
It defines how often the model predicts the correct output. It can be calculated as the ratio of the number of correct predictions made by the classifier to the total number of predictions made by the classifier.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
PRECISION:
Precision tells us how many of the cases predicted as positive actually turned out to be positive. Precision is a useful metric in cases where a False Positive is a bigger concern than a False Negative.
Precision = TP / (TP + FP)
RECALL:
Recall tells us how many of the actual positive cases we were able to predict correctly with our model. Recall is a useful metric in cases where a False Negative is costlier than a False Positive.
Recall = TP / (TP + FN)
F-MEASURE:
The F1 score is the harmonic mean of Precision and Recall, so it combines these two metrics into a single number that is only high when both of them are high.
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)
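All four of these metrics are available directly in scikit-learn; a minimal sketch, again using the made-up labels from the earlier examples:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_actual    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_predicted = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

print("Accuracy :", accuracy_score(y_actual, y_predicted))   # (TP + TN) / total = 8/10
print("Precision:", precision_score(y_actual, y_predicted))  # TP / (TP + FP) = 4/5
print("Recall   :", recall_score(y_actual, y_predicted))     # TP / (TP + FN) = 4/5
print("F1 score :", f1_score(y_actual, y_predicted))         # harmonic mean of the two = 0.8
```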
ROC SCORE:
The ROC curve plots the true positive rate against the false positive rate at various threshold (cut-off) points, and the area under this curve (the ROC AUC score) summarizes it as a single number.
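Since the ROC curve is built from predicted probabilities rather than hard labels, a sketch might look like this; the probability scores below are invented for illustration:

```python
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical true labels and predicted probabilities of the positive class
y_actual = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_scores = [0.9, 0.2, 0.4, 0.8, 0.3, 0.7, 0.6, 0.1, 0.85, 0.35]

# True positive rate and false positive rate at each cut-off point
fpr, tpr, thresholds = roc_curve(y_actual, y_scores)
print("ROC AUC:", roc_auc_score(y_actual, y_scores))
```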
NULL ERROR RATE: This term defines how often your prediction would be wrong if you always predicted the majority class. It is a useful baseline to compare your classifier against.
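A minimal sketch of that baseline, computed from the same toy labels:

```python
from collections import Counter

y_actual = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]

# Error rate of a "classifier" that always predicts the majority class
majority_count = Counter(y_actual).most_common(1)[0][1]
null_error_rate = 1 - majority_count / len(y_actual)
print("Null error rate:", null_error_rate)  # 0.5 here, since the classes are balanced
```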
WHAT ARE THE TWO TYPES OF ERROR?
Type I Errors: False Positives
In data science, False Positives are also commonly referred to as Type I Errors. These errors occur when our binary classification model incorrectly classifies an instance as “Yes”.
INTERPRETATION: You predicted positive, and the prediction is false.
Depending on the specific problem at hand, Type I Errors might become very costly.
An example might be a model that classifies incoming emails as SPAM or HAM (not-spam). A Type I Error occurs every time our model mislabels a HAM email as SPAM. A SPAM/HAM classifier with many Type I Errors threatens to flag important HAM emails as SPAM and hide them from the user.
Type II Errors: False Negatives
False Negatives are commonly referred to as Type II Errors and occur when our binary classification model incorrectly classifies an instance as “No”.
INTERPRETATION: You predicted negative, and the prediction is false.
Just like Type I Errors, Type II Errors can be very costly, and depending on the problem they can be far costlier than Type I Errors. A relevant example presented in a previous post about evaluation metrics involved a classification model that attempts to classify passengers as terrorists or non-terrorists. The cost of mislabeling the sole terrorist in a group of 1,000 passengers could be tragic.
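To make the trade-off between the two error types concrete, here is a small hypothetical sketch: lowering the decision threshold of a probabilistic classifier reduces False Negatives at the cost of more False Positives, and raising it does the opposite. The labels and scores are invented for illustration.

```python
from sklearn.metrics import confusion_matrix

# Hypothetical true labels and predicted probabilities of the positive class
y_actual = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_scores = [0.9, 0.2, 0.4, 0.8, 0.3, 0.7, 0.6, 0.1, 0.85, 0.35]

for threshold in (0.3, 0.5, 0.7):
    y_pred = [1 if s >= threshold else 0 for s in y_scores]
    tn, fp, fn, tp = confusion_matrix(y_actual, y_pred).ravel()
    # A lower threshold flags more instances as positive: fewer FN but more FP
    print(f"threshold={threshold}: FP (Type I) = {fp}, FN (Type II) = {fn}")
```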
CYBER CRIME CASES BASED ON THE IMPACT OF CONFUSION MATRIX
Cyber attacks constitute a significant threat to organizations, with economic, reputational, and legal consequences. As cybercriminals’ techniques grow more sophisticated, information security professionals face an ever greater challenge in protecting information systems.
FALSE POSITIVES: False positives, or non-malicious alerts (SIEM events), add noise for already overworked security teams and can be triggered by software bugs, poorly written software, or unrecognized network traffic.
By default, most security teams are conditioned to ignore false positives. Unfortunately, this practice of ignoring security alerts, no matter how trivial they may seem, can create alert fatigue and cause your team to miss actual, important alerts related to real, malicious cyber threats.
These false alarms account for roughly 40% of the alerts cybersecurity teams receive on a daily basis.
FALSE NEGATIVES: These are uncaught cyber threats, overlooked by security tooling because they are dormant or highly sophisticated (e.g. fileless or capable of lateral movement), or because the security infrastructure in place lacks the technological ability to detect these attacks.
These advanced/hidden cyber threats are capable of evading prevention technologies, like next-gen firewalls, antivirus software, and endpoint detection and response (EDR) platforms trained to look for “known” attacks and malware.
No cybersecurity or data breach prevention technology can block 100% of the threats it encounters. When an alert turns out to be a false positive, analysts put in hours of work that could have been dedicated to more meaningful tasks. At worst, true cybersecurity threats can be missed when busy IT departments cannot spare the resources needed to examine every potential threat.
A policy that encourages employees to disregard security threats, no matter how small, can leave your company vulnerable to data privacy breaches and other cyber attacks.
WHAT CAN WE DO?
Advances in cybersecurity have led to a new generation of smart technology that can help you proactively combat both false negatives and false positives: analyzing network traffic, limiting network access for IoT devices, using Web Application Firewalls, researching AI solutions, using asset discovery tools to map the hosts, systems, servers, and applications within your network environment, and implementing tools and technology that improve your speed of detection and time to respond.
These measures are key and can help your security team prevent a data breach.
CONCLUSION
Thus, the confusion matrix is significant in the field of Machine Learning because it shows how a classification model is confused when it makes predictions. The confusion matrix not only gives us insight into the errors being made by the classifier but also into the types of errors that are being made.
This breakdown helps us to overcome the limitation of using classification accuracy alone.