Confusion Matrix and Its Type 1 and Type 2 Errors
What is a confusion matrix and why do you need it?
Well, it is a performance measurement for machine learning classification problems, where the output can be two or more classes. For a binary problem, it is a table with 4 different combinations of predicted and actual values.

True Positive:
Interpretation: You predicted positive and it’s true.
You predicted that a woman is pregnant and she actually is.
True Negative:
Interpretation: You predicted negative and it’s true.
You predicted that a man is not pregnant and he actually is not.
False Positive: (Type 1 Error)
Interpretation: You predicted positive and it’s false.
You predicted that a man is pregnant but he actually is not.
False Negative: (Type 2 Error)
Interpretation: You predicted negative and it’s false.
You predicted that a woman is not pregnant but she actually is.
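The four definitions above can be sketched as a small counting function. This is a minimal illustration in plain Python, using 1 for the positive class ("pregnant") and 0 for the negative class:

```python
# Count the four confusion-matrix cells from binary labels,
# where 1 = positive and 0 = negative.
def confusion_counts(actual, predicted):
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # True Positive
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # True Negative
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # False Positive (Type 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # False Negative (Type 2)
    return tp, tn, fp, fn

actual    = [1, 0, 1, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0]
print(confusion_counts(actual, predicted))  # (2, 2, 1, 1)
```

Here the third example is a Type 2 error (actual 1, predicted 0) and the fourth is a Type 1 error (actual 0, predicted 1).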
Now let's look at some common domains in which Type 1 and Type 2 errors come up:
Medical testing
False negatives and false positives are significant issues in medical testing.
Type I error (false positive): "The patient does not actually have the disease, but the physician judges from the test report that the patient is ill."
False positives can also produce serious and counter-intuitive problems when the condition being searched for is rare, as in screening. If a test has a false positive rate of one in ten thousand, but only one in a million samples (or people) is a true positive, most of the positives detected by that test will be false.
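The arithmetic behind this screening claim is worth spelling out. The sketch below plugs in the numbers from the paragraph above and assumes, for simplicity, that the test catches every true case (100% sensitivity):

```python
# Rare-condition screening: why most positives end up being false.
population          = 1_000_000
prevalence          = 1 / 1_000_000   # one true case per million
false_positive_rate = 1 / 10_000      # test's false positive rate

true_positives  = population * prevalence                              # 1
false_positives = population * (1 - prevalence) * false_positive_rate  # ~100

precision = true_positives / (true_positives + false_positives)
print(f"{precision:.1%} of positive results are real")  # ~1.0%
```

Roughly 100 false positives appear for every single true positive, so about 99% of all positive results are false, exactly the counter-intuitive effect described above.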
Type II error (false negative): "The disease is actually present, but the test report provides a falsely reassuring message to patients and physicians that the disease is absent."
False negatives produce serious and counter-intuitive problems, especially when the condition being searched for is common. If a test with a false negative rate of only 10% is used to test a population with a true occurrence rate of 70%, many of the negatives detected by the test will be false.
This sometimes leads to inappropriate or inadequate treatment of both the patient and their disease.
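The common-condition numbers above can be worked through the same way. This sketch assumes, for simplicity, that the test never produces false positives (100% specificity):

```python
# Common-condition testing: how many negative results are wrong.
population          = 100
prevalence          = 0.70   # 70% of the population truly has the condition
false_negative_rate = 0.10   # test misses 10% of true cases

false_negatives = population * prevalence * false_negative_rate  # 7
true_negatives  = population * (1 - prevalence)                  # 30

share_false = false_negatives / (false_negatives + true_negatives)
print(f"{share_false:.1%} of negative results are wrong")  # ~18.9%
```

Out of every 37 negative results, 7 are false, so nearly one in five "all clear" messages is misleading.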
Biometrics
Biometric matching, such as fingerprint or facial recognition, is susceptible to Type I and Type II errors.
Type I error (false reject rate): "The person really is someone on the searched list, but the system concludes from the data that the person is not."
Type II error (false match rate): "The person is not someone on the searched list, but the system concludes from the data that the person is someone we are looking for."
The probability of type I errors is called the “false reject rate” (FRR) or false non-match rate (FNMR), while the probability of type II errors is called the “false accept rate” (FAR) or false match rate (FMR).
If the system is designed to rarely match suspects then the probability of type II errors can be called the “false alarm rate”. On the other hand, if the system is used for validation (and acceptance is the norm) then the FAR is a measure of system security, while the FRR measures user inconvenience level.
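The FRR/FAR trade-off described above comes from where the match threshold is set on the similarity scores. The scores and thresholds below are invented purely for illustration:

```python
# FRR/FAR trade-off: sliding the match threshold on hypothetical
# similarity scores (all values invented for illustration).
genuine  = [0.91, 0.85, 0.78, 0.60, 0.95]  # scores for true matches
impostor = [0.30, 0.55, 0.62, 0.20, 0.45]  # scores for non-matches

def rates(threshold):
    # FRR (Type I): genuine users scoring below the threshold are rejected.
    frr = sum(s < threshold for s in genuine) / len(genuine)
    # FAR (Type II): impostors scoring at/above the threshold are accepted.
    far = sum(s >= threshold for s in impostor) / len(impostor)
    return frr, far

for t in (0.4, 0.6, 0.8):
    frr, far = rates(t)
    print(f"threshold={t}: FRR={frr:.0%}, FAR={far:.0%}")
```

Raising the threshold lowers the FAR (better security) but raises the FRR (more user inconvenience), and vice versa, which is exactly the tension the paragraph above describes.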
Computers
The notions of false positives and false negatives have wide currency in the realm of computers and computer applications, including computer security, spam filtering, malware detection, and many others.
For example, consider detecting a cyber attack.
Type I error (false positive): "It says an attack has been initiated, but in reality it has not. This wastes time and attention, but is usually recoverable."
Type II error (false negative): "It says no attack has been initiated, but in reality one has. This will lead us to a nightmare, and it is far more hazardous than a Type I error."
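As a rough illustration of why the two error types rarely cost the same in attack detection, here is a hypothetical comparison of a missed attack versus a false alarm. All dollar figures are invented for illustration:

```python
# Hypothetical cost comparison for an intrusion-detection system.
cost_false_alarm   = 50        # analyst time to triage one false alert
cost_missed_attack = 500_000   # breach cleanup, downtime, reputation

false_alarms   = 200   # Type I errors over some period
missed_attacks = 1     # a single Type II error

total = false_alarms * cost_false_alarm + missed_attacks * cost_missed_attack
print(total)  # 510000: one missed attack dominates hundreds of false alarms
```

Under assumptions like these, a single false negative outweighs hundreds of false positives, which is why such systems are usually tuned to tolerate false alarms rather than risk missed attacks.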