
False negatives and false positives

What are they and what is the difference between them?

~15mins estimated

General

False negatives and positives: the basics

What are false negatives and false positives?

To start with a brief overview of what these terms mean: a false positive occurs when a potential threat is flagged but turns out to be only normal traffic. A false negative, on the other hand, occurs when no threat is detected even though an actual attack is underway. On the true-identification side (what every detection system should be aiming for), a true positive is the successful identification of an attack, and a true negative is when acceptable behavior is correctly identified as acceptable and is not flagged by the detection system.
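The four outcomes above form a simple confusion matrix, which can be captured in a few lines of Python (a minimal illustrative sketch; the function name and labels are ours, not a standard API):

```python
def classify_outcome(alert_raised: bool, is_real_attack: bool) -> str:
    """Map one detection result onto its confusion-matrix quadrant."""
    if alert_raised and is_real_attack:
        return "true positive"    # attack correctly flagged
    if alert_raised and not is_real_attack:
        return "false positive"   # normal traffic wrongly flagged
    if not alert_raised and is_real_attack:
        return "false negative"   # real attack missed
    return "true negative"        # normal traffic correctly ignored
```

For instance, `classify_outcome(True, False)` describes the case this lesson warns about first: an alert fired on harmless traffic.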

Both false negatives and false positives are unfortunately quite common, but they pose different problems to the entities involved. One of the main risks of false positives is alert fatigue. When a detection system is poorly calibrated and flags more activity as suspicious than it should, cybersecurity employees have to check every alert even though most are non-threatening. They start skimming over alerts, accumulate a huge backlog of threats to check, or ignore alerts completely, which means real risks are easily overlooked. Additionally, too many false positives result in skewed risk assessments and the consequent misallocation of the organization’s resources.

Conversely, false negatives pose a serious security risk: the detection system never surfaces enough of the suspicious traffic, so employees never look into it, and real threats are missed entirely. This is often considered the most dangerous state and can result in regulatory fines, legal action, and irreparable damage to user trust.

False negatives and positives confusion matrix.

About this lesson

Take a look at the difference between false negatives and false positives in cybersecurity, why they both pose such a threat, and methods developers can use to mitigate the harm they cause.

FUN FACT

Breach alert!

In 2010, the cybersecurity company McAfee released an anti-virus definition update that caused its threat detection system to incorrectly identify a legitimate Windows system file (svchost.exe) as malware (a false positive!). As a result, customers and internal employees experienced widespread system failures and connectivity issues. Even after withdrawing the update and releasing an emergency DAT file, many computers were still rendered useless, and the company had to send IT admins in person to fix them. Intel announced its acquisition of McAfee later that year.

Importance of false negatives and positives

Now we’ll go into a little more detail about why false negatives and positives should concern you.

As mentioned before, one of the main threats of false positives is their potential to cause massive alert fatigue. According to an International Data Corp. survey of 300 U.S.-based IT executives, real threats take employees an average of 30 minutes to investigate and triage, while false positives take 32 minutes. With this in mind, it makes sense that in a 2025 Splunk survey roughly half of respondents said they spend too much time on alerting issues, which lines up with the IDC finding that 27% of notifications are ignored or never investigated at companies with 500-1,499 employees (30% at 1,500-4,999 employees and 23% at 5,000+). This may help explain why IT departments have such high turnover rates; at the administrative level, false positives are also extremely costly to any organization and slow down software delivery.

In a way, false negatives pose an even more palpable threat, with their potential to cause disastrous data breaches. Such breaches often result in loss of intellectual property, which is terrible for profit margins, and open the door to severe attacks like ransomware, especially in industries such as healthcare where protected health information is at stake (see our PHI lesson for more).

To contextualize this information, let’s take a look at a real-life example of false negatives and positives: the 2013 Target data breach. By infiltrating Target’s computer network, attackers were able to steal financial and personal information from roughly 110 million Target customers. On top of that, they exfiltrated the sensitive data to a server in Eastern Europe, and the breach cost the company $18.5 million in settlements.

The attackers used malware to infect Target’s systems and steal sensitive information. Target’s internal threat detection system issued several urgent warnings about the malware’s installation and updates. Somehow, though, the threats that were detected were overlooked by employees, likely due to a large volume of false positives, and further warnings were never issued later in the kill chain because insufficient protections led to false negatives.

False negatives and positives in action

Clearly, when organizations and the developers within them don’t properly mitigate both false negatives and positives, employees and customers suffer. Before we discuss the ways that these issues can be mitigated in practice, let’s see an example of false positives and negatives in action.

Sof Tware is an enterprise security expert working within a large company...

She notices that she and her team are starting to get majorly burned out by the constant need to investigate false alarms raised by their intrusion detection system.

Here's what it might look like on their end:

Too many false positives!


Logging in to the enterprise security portal.

Sof needs to log in to the portal to see what threats were picked up by the detection system since the last time she checked.


You can imagine how checking alert after alert, only to find that each one is a false positive, gets tedious and makes missing a real threat much easier.

Sof decides that it makes sense to loosen their detection system’s sensitivity to compensate for so many false positives, but unfortunately, because of this looser configuration, an actual threat falls through the cracks! She went from one extreme to the other... how is she possibly meant to balance these opposing forces?!

Let’s look more closely at methods she and her team could have used to repair their detection system without letting real threats in.

FUN FACT

Why fingerprints?

Your fingerprint is so distinctive that the likelihood of two small sections of separate fingerprints being similar enough to register as the same for Touch ID is about 1 in 50,000 (0.002%). In the language of this lesson, Touch ID almost never produces a false negative: it almost never fails to register that someone else is trying to get into your device maliciously.


False negatives and positives mitigation

The main issue is balancing two failure modes: missing real threats on one side, and treating everything except (almost certainly) legitimate traffic as an attack on the other. It is difficult to find an equilibrium where only a reasonable amount of user traffic is flagged, yet detection remains strict enough that real attacks are caught.

Improve detection

One of the most effective ways to strike this balance is by improving your detection algorithm. For example, a single incorrect password attempt shouldn’t get flagged (nor even up to five or so), but many incorrect attempts in rapid succession are far more likely to signal a brute-force attack. Detection algorithms apply rules and procedures to find potential threats and decrease the concentration of false positives, and the data they are trained on includes network traffic patterns, file characteristics, user behavior, and more. Considering all of these aspects leads to a more dependable and accurate algorithm.
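The password example above can be sketched as a sliding-window rule (a simplified illustration; the window size and failure limit are made-up tuning values, not recommendations):

```python
from collections import deque

WINDOW_SECONDS = 60  # look-back period (illustrative value)
MAX_FAILURES = 5     # failures tolerated inside the window before flagging

def make_detector():
    """Return a closure that flags rapid bursts of failed logins.

    A single failure, or a handful spread out over time, is ignored so
    ordinary typos don't become false positives; many failures packed
    into a short window are treated as a likely brute-force attempt.
    """
    failures = deque()  # timestamps of recent failed attempts

    def record_failure(timestamp):
        failures.append(timestamp)
        # Drop failures that have aged out of the look-back window.
        while failures and timestamp - failures[0] > WINDOW_SECONDS:
            failures.popleft()
        return len(failures) > MAX_FAILURES  # True means "raise an alert"

    return record_failure
```

Five typos spread across a minute stay quiet, while a sixth failure inside the same window trips the alert.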

To refine these systems, detection algorithms should be trained on past alerts and the common triggers of false positives, making them more attuned to a company’s unique attack surface. One thing developers can do is adjust threshold values: the parameter that determines how likely a flagged action must be to be an actual threat before the system raises an alert. Adding more context and more in-depth pattern recognition to the algorithm (e.g. large file transfers, time of day, user roles, the nature of the files being transferred) can also make it more clever and accurate. Prioritize what is most important to your organization’s specific needs.
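One common way to combine a tunable threshold with contextual signals is a simple risk score. Here is a hedged sketch (every signal, weight, and the threshold itself are hypothetical; a real system would derive them from historical alert data):

```python
# Raising this threshold trades false positives for false negatives,
# and lowering it does the reverse.
ALERT_THRESHOLD = 0.7

def risk_score(event):
    """Accumulate a score from contextual signals about one event."""
    score = 0.0
    if event.get("bytes_transferred", 0) > 1_000_000_000:
        score += 0.4  # unusually large data transfer
    if event.get("hour", 12) not in range(8, 19):
        score += 0.3  # activity outside business hours
    if event.get("user_role") == "contractor":
        score += 0.2  # less-trusted account type
    if event.get("file_sensitivity") == "restricted":
        score += 0.3  # sensitive data involved
    return score

def should_alert(event):
    """Flag only events whose combined context crosses the threshold."""
    return risk_score(event) >= ALERT_THRESHOLD
```

A huge transfer during business hours by a trusted employee stays below the threshold, while the same transfer at 2 a.m. crosses it: context, not any single signal, decides.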

Logging and contextual analysis

An adjacent concept to improving your threat detection algorithms overall is incorporating contextual analysis, which is often informed by historical data contained in logs. Logging allows developers to track what kinds of attacks have happened and where, which can inform the strategy they use to protect against future attacks. Logging, system audits, and asset inventories are all vital parts of intrusion detection, but they only record activity; they don’t prevent or catch any attacks on their own. Instead, they provide ample data for developers to use to refine and inform their actual detection systems and algorithms.
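As a small illustration of how log data feeds back into tuning, here is a sketch of structured alert logging plus a summary of which rules mostly fire on benign traffic (the record fields and verdict labels are our own invention; a real setup would write to a log file or SIEM rather than an in-memory list):

```python
import time

def log_alert(log, rule, source_ip, verdict):
    """Append one structured alert record to the log."""
    log.append({
        "timestamp": time.time(),
        "rule": rule,
        "source_ip": source_ip,
        "verdict": verdict,  # analyst-assigned: "true_positive" or "false_positive"
    })

def false_positive_rate_by_rule(log):
    """Summarize which rules mostly fire on benign traffic.

    Rules with a high false-positive rate are the prime candidates for
    threshold tuning or added context.
    """
    totals, false_positives = {}, {}
    for entry in log:
        rule = entry["rule"]
        totals[rule] = totals.get(rule, 0) + 1
        if entry["verdict"] == "false_positive":
            false_positives[rule] = false_positives.get(rule, 0) + 1
    return {rule: false_positives.get(rule, 0) / count
            for rule, count in totals.items()}
```

Reviewing this kind of per-rule summary periodically is one concrete way historical logs turn into better-calibrated detection.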

Updates and system maintenance

Another general best practice is keeping updates and maintenance current. Updates often contain improved threat signatures (more extensive identification of common threats), behavioral analysis models that can more accurately pinpoint abnormal activity, and whitelisting procedures (which ensure only approved computers, accounts, etc. can access your service).

Outdated systems are not equipped with the most applicable threat detection and intelligence, so continuously fixing bugs and updating your system keeps everything modern and dependable. Not only does this help prevent false positives, but it also mitigates the occurrence of false negatives.

Machine learning and AI

A big, newly emerging option for dealing with these threats effectively is machine learning and AI. Machine learning and AI help prevent false positives by learning more effectively from historical data, adapting in real time to new information (e.g. changing environments and evolving threats), and synthesizing data from multiple sources for higher accuracy. They also enable detection systems to analyze vast amounts of data in short periods of time, making it possible to follow complex patterns and learn more deeply from past experience.
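The core idea of learning a baseline from historical data can be shown without any ML library at all. This is a deliberately tiny statistical stand-in, not a real ML pipeline: it learns a per-user "normal" from past activity and flags large deviations (the z-score cutoff of 3 is an illustrative choice):

```python
from statistics import mean, stdev

def build_baseline(history):
    """Learn a baseline (mean and spread) from past activity counts,
    e.g. daily login counts or megabytes transferred per day."""
    return mean(history), stdev(history)

def is_anomalous(value, baseline, z=3.0):
    """Flag activity more than `z` standard deviations above the learned norm."""
    mu, sigma = baseline
    return sigma > 0 and (value - mu) / sigma > z
```

Real ML-based detectors are far more sophisticated, but they follow the same pattern: the more representative the historical data, the better the baseline, which is exactly why data quality is the bottleneck mentioned below.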

However, the drawback is that it can be difficult to obtain such high volumes of quality data, and it takes a lot of human labor to constantly monitor and adjust the models. Not to mention, bad actors are starting to manipulate AI and machine learning algorithms themselves, which can introduce new vulnerabilities (see our OWASP Top 10 LLM and GenAI learning path for more). Machine learning and AI have powerful potential to mitigate false negatives and positives, but they must be used with care.

Avoiding negative security models

Finally, one more way to alleviate the burden of false negatives in particular is to avoid strict negative security models. A negative security model grants all traffic access unless it specifically fits a threat signature or can be explicitly identified as hostile. You can think of this as an ‘innocent until proven guilty’ mindset, where all traffic is assumed to be valid and must look reasonably suspicious to be flagged. This reductive model does help decrease false positives, but it really increases the risk of false negatives. Developers should instead layer in positive security measures, under which access is denied unless the traffic can be confirmed as valid - more of a ‘guilty until proven innocent’ type deal.
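The two models boil down to a blocklist check versus an allowlist check. A minimal sketch (the signature and client names are hypothetical):

```python
KNOWN_BAD_SIGNATURES = {"sqlmap-scan", "nikto-probe"}   # negative model: block what we recognize
APPROVED_CLIENTS = {"internal-app", "partner-api"}      # positive model: allow what we trust

def negative_model_allows(client_id):
    """'Innocent until proven guilty': allow unless it matches a known
    threat signature. Anything not yet in the signature list slips
    through - the false-negative risk."""
    return client_id not in KNOWN_BAD_SIGNATURES

def positive_model_allows(client_id):
    """'Guilty until proven innocent': deny unless explicitly approved.
    Unknown attackers are blocked by default, at the cost of sometimes
    blocking legitimate-but-unlisted traffic (false positives)."""
    return client_id in APPROVED_CLIENTS
```

Note how a brand-new attack tool sails past the negative model but is stopped cold by the positive one; that asymmetry is the whole argument of this section.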

Again, though, it is important to balance this with decreasing your false positives as much as possible to avoid alert fatigue. That is where some of our other methods can come in, like using machine learning, keeping up with system maintenance, and implementing context within the detection algorithms.

The risks posed by false negatives and positives can have severe consequences, as we have seen through our examples, and it is important for developers to understand these different threats and how best to balance their mitigation. Beyond the methods listed above, organizations should keep their security teams up to date with modern methods so they have the resources to deal with potential threats, and should conduct frequent internal testing to see how different kinds of attacks are handled by both the detection system and the people overseeing it.

Quiz

Test your knowledge!

Quiz

Which state would you classify the following scenario as: Your threat detection system is warning you that someone is trying to introduce malware into your backend code, but in actuality it is legitimate software that poses no threat to your company.

Keep learning

Continue your cybersecurity journey by checking out some of our other general/less technical lessons:

  • Here is our lesson all about securing your protected health information.
  • This lesson will teach you what common weakness enumerations are, and this one provides an overview of common vulnerabilities and how they are classified.

Congratulations

You’ve learned what false negatives and positives are, and what you can do to improve security detection systems. We hope you will apply your new knowledge wisely and make your code much safer. Also, make sure to check out our lessons on other common vulnerabilities.