Vulnerability Scans and False Positives

The importance of checking a web application for vulnerabilities is well understood, but it can take a lot of skill and time to do this manually.

There are many tools available that can automate the process but, as with all tools, it is important to understand their limitations.

Web application scanning tools will automatically review a website by crawling through all its links, reviewing each page using an algorithm to match responses to signatures.

If a match is found, the tool may perform additional checks to determine, with a degree of certainty, whether there is a vulnerability.

Example 1:

Using its database of signatures, the scanner identifies that a version of a library in use has vulnerabilities. It then reports the vulnerability and the page it was found on.
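To illustrate the idea, the version-matching step might look something like the minimal sketch below. The signature database, library name and version strings here are entirely hypothetical; real scanners use far larger databases and richer fingerprinting.

```python
# Minimal sketch of signature-based version matching (hypothetical data).
import re

# Hypothetical signature database: library name -> versions known to be vulnerable.
SIGNATURES = {
    "examplelib": {"1.2.3", "1.2.4"},
}

def check_response_headers(headers):
    """Return a list of (library, version) matches flagged as vulnerable."""
    findings = []
    for value in headers.values():
        # Look for patterns like "examplelib/1.2.3" in the header values.
        for match in re.finditer(r"([a-z]+)/(\d+\.\d+\.\d+)", value.lower()):
            lib, version = match.groups()
            if version in SIGNATURES.get(lib, set()):
                findings.append((lib, version))
    return findings

# Example: a response header advertising a vulnerable library version.
print(check_response_headers({"Server": "ExampleLib/1.2.3"}))
```

Note that this check only proves the version string was advertised, not that the vulnerable code is reachable, which is exactly why verification is needed.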

Example 2:

The scanner identifies an input field and tests whether a blind injection attack is possible by inserting input that contains a delay command and monitoring the response time. The response takes longer than normal, so the scanner marks the input field as vulnerable to a blind injection attack.
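The timing check can be sketched as follows, using a mock, deliberately vulnerable request handler in place of a real web application; all names and the delay threshold are hypothetical.

```python
# Minimal sketch of time-based blind injection detection against a mock,
# deliberately vulnerable request handler (names and threshold are hypothetical).
import re
import time

def vulnerable_handler(user_input):
    """Stand-in for a web app that unsafely interprets a SLEEP() directive."""
    match = re.search(r"SLEEP\((\d+)\)", user_input)
    if match:
        time.sleep(int(match.group(1)))  # simulates the database delaying
    return "<html>results</html>"

def looks_time_vulnerable(handler, delay=1, margin=0.5):
    """Flag the input as vulnerable if an injected delay slows the response."""
    start = time.monotonic()
    handler("normal input")
    baseline = time.monotonic() - start

    start = time.monotonic()
    handler(f"' OR SLEEP({delay})--")
    injected = time.monotonic() - start

    return injected - baseline > margin

print(looks_time_vulnerable(vulnerable_handler))  # True for this mock handler
```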

However, are these genuine vulnerabilities? During a manual vulnerability test or penetration test, the tester will try to verify such findings before producing the report.

Example 1:

The tester attempts to get the web application to run the vulnerable function in the library; if it does, it is a genuine vulnerability. If the application does not use the function and cannot be tricked into calling it, it is not a vulnerability.

Example 2:

The tester uses a range of inputs with different delays to see if the response time changes correspondingly, while examining the output. If the response time changes according to the delay, it is a genuine vulnerability. If the response time is constant or the output explains the delay, such as a timeout because the application didn’t understand the input, then it is a false positive.
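The verification step above can be sketched as a correlation check: a genuine time-based vulnerability responds more slowly as the requested delay grows, whereas a fixed timeout does not. Both handlers below are hypothetical mocks.

```python
# Minimal sketch of verifying a time-based finding: does the response time
# track the requested delay, or is it a constant timeout? (Hypothetical mocks.)
import re
import time

def vulnerable_handler(user_input):
    """Genuinely vulnerable mock: honours the injected delay."""
    match = re.search(r"SLEEP\((\d+)\)", user_input)
    if match:
        time.sleep(int(match.group(1)))
    return "ok"

def timeout_handler(user_input):
    """A false-positive source: odd input triggers the same fixed delay."""
    if "SLEEP" in user_input:
        time.sleep(1)  # constant timeout, regardless of the requested delay
    return "error: request timed out"

def delay_tracks_input(handler, delays=(0, 1, 2), tolerance=0.5):
    """Genuine only if measured time rises with each requested delay."""
    times = []
    for d in delays:
        start = time.monotonic()
        handler(f"' OR SLEEP({d})--")
        times.append(time.monotonic() - start)
    # Each step up in requested delay should add roughly that much time.
    return all(t2 - t1 > tolerance for t1, t2 in zip(times, times[1:]))
```

Examining the output alongside the timings, as the tester does, catches the timeout case even when a single measurement looks convincing.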

Automated scanners’ lack of precision about their findings is their most contentious issue, especially when testing web applications.

It is also important to understand that they will not necessarily find all vulnerabilities. The same goes for human testing.

False positives and false negatives

Whether you are using a vulnerability scanning tool or another form of vulnerability identification, there are two types of errors to be aware of:

  • Type I error – false positive, a result that indicates a vulnerability is present when it is not. This creates noise and results in unnecessary remediation work.
  • Type II error – false negative, where a vulnerability is present but is not identified.

The false negative is the more serious error, as it creates a false sense of security. How to identify false negatives is beyond the scope of this article, but our general advice is to use multiple tools and techniques for vulnerability identification, and not to assume a clean result from a tool or tester means you are 100% secure.


Now let’s look at false positives. Web applications are vulnerable to injection exploits, which involve a variety of techniques such as SQL injection, cookie manipulation, command injection, HTML injection, cross-site scripting, and so on.

To prevent injection attacks, the application should use a combination of parameterised statements, escaping, pattern checking, database permissions and input validation.

It should also check input to ensure that input values are within range, and unexpected values are handled in a consistent manner.
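The defences above can be sketched in a few lines: a parameterised statement combined with up-front range checking. This uses Python's sqlite3 module and a hypothetical "users" table purely for illustration.

```python
# Minimal sketch of parameterised statements plus input validation,
# using sqlite3 and a hypothetical "users" table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

def get_user(user_id):
    # Input validation: reject values outside the expected range up front,
    # handling unexpected values in a consistent manner.
    if not isinstance(user_id, int) or not 1 <= user_id <= 10_000:
        return None
    # Parameterised statement: the driver binds the value, so injected
    # strings are never interpreted as SQL.
    row = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    return row[0] if row else None

print(get_user(1))           # alice
print(get_user("1 OR 1=1"))  # None - rejected by validation
```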

Generated error messages should not give away information that may be useful to an attacker, but should help a user enter the required input, improving the user’s experience of the website.

The scanner tests for injection vulnerabilities by modifying GET and POST requests as well as cookies and other persistent data storage. It attempts an injection attack by changing the content to inject a piece of additional code before sending the modified requests to the web application.

The additional code can represent a SQL command, HTML code or an operating system command, depending on the type of attack being simulated.
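The mutation step can be sketched as follows: take a recorded request's parameters and produce variants with a payload appended to each field in turn. The payload strings and field names are hypothetical examples, not an exhaustive attack set.

```python
# Minimal sketch of the mutation step: produce modified requests with a
# hypothetical payload appended to each field in turn.
PAYLOADS = {
    "sql": "' OR '1'='1",
    "html": "<script>alert(1)</script>",
    "os": "; cat /etc/passwd",
}

def mutate_request(params):
    """Yield (attack_type, field, mutated_params) for every field/payload pair."""
    for attack, payload in PAYLOADS.items():
        for field in params:
            mutated = dict(params)
            mutated[field] = params[field] + payload
            yield attack, field, mutated

original = {"user": "alice", "page": "1"}
variants = list(mutate_request(original))
print(len(variants))  # 3 payloads x 2 fields = 6 modified requests
```

The same loop applies equally to cookies and other persistent values, which is why scanners test those too.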

What happens next?

The scanner then examines the web application’s responses to determine if the injection attempt was successful. It looks for evidence of a successful execution of the code, which can be:

  • A delay in the return of the response;
  • A response including the injected input within it, which the browser interprets as HTML code;
  • An error message generated by the application; or
  • Data that has been retrieved by the simulated attack.
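The evidence checks above can be sketched as a simple classifier over the response; the error-signature list and timing threshold here are hypothetical.

```python
# Minimal sketch of the evidence checks above: classify a response against
# the injected payload (signatures and threshold are hypothetical).
def classify_response(payload, body, elapsed, baseline, errors=("SQL syntax",)):
    """Return which pieces of evidence, if any, the response shows."""
    evidence = []
    if elapsed - baseline > 0.5:
        evidence.append("delay")
    if payload in body:
        evidence.append("reflected input")  # may still be a false positive
    if any(err in body for err in errors):
        evidence.append("error message")
    return evidence

print(classify_response("<script>x</script>",
                        "<p>You searched for <script>x</script></p>",
                        elapsed=0.1, baseline=0.1))
```

Note that "reflected input" on its own is exactly the ambiguous case discussed below: the payload came back, but that does not prove it executed.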

ASV (Approved Scanning Vendor) scans and other vulnerability scans generate a large number of reactions when testing different injection techniques. These reactions can indicate that a vulnerability exists, or they can be false positives.

Often, scanning detects that the response has the injected code embedded even though it failed to execute as intended. The automated software includes this as a result in its report.

In reality, these results are false positives, as the attempt failed. They also indicate that the inputs to the application have not been sanitised to ensure only the expected range of inputs are processed by the application.

The modified input has passed through the application and been included in the web server’s response back to the vulnerability scan engine without being filtered out.

Although these false positives can be ignored, they may show that the application is not sanitising variables and values.

You must address these inputs, along with cookies and other forms of persistent data within the web application environment, to eliminate attack vectors and help protect against future attacks.

Pros and cons

The advantage of having an application that correctly sanitises input is that the number of false positives detected during vulnerability scanning is reduced. Therefore ‘noise’ that may be masking a true vulnerability is removed, which is especially important if ASV scans are being conducted.

A disadvantage of not sanitising input is that blocks of results are often classed as false positives, rather than examined individually.

Occasionally this means a true result can be incorrectly classed as a false positive, creating a type II error – a false negative.

There are other issues when attempting to manually examine the results from automated testing to identify false positives.

Vulnerability scanners use their own engines to generate HTTP requests and analyse responses, and trying to emulate this using browsers is problematic. Browsers are equipped with technology designed to reduce attack vectors by filtering the requests sent and the responses received. For example, Internet Explorer will detect and block cross-site scripting attempts, traversal attacks and the like entered via the URL box.

As such, additional tools are required to manually test for vulnerabilities using browsers. Tools such as a web proxy (like WebScarab or Burp Suite) are used to intercept the request objects from the browser, allowing modification before sending them onto the server.

The proxy also allows the response objects to be intercepted before they reach the browser. This permits the response to be examined at an HTML source code level, rather than by the browser.
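Examining the raw source is what separates a harmless, escaped reflection from a genuinely executable one; a browser would render both and potentially filter the dangerous case. A minimal sketch of that source-level check, with hypothetical response bodies:

```python
# Minimal sketch of examining a response at the HTML source level, as a
# proxy allows: an escaped reflection is harmless, a verbatim one is not.
import html

payload = "<script>alert(1)</script>"

def reflection_is_executable(payload, body):
    """True only if the payload appears verbatim, not HTML-escaped."""
    return payload in body

safe_body = f"<p>You searched for {html.escape(payload)}</p>"
unsafe_body = f"<p>You searched for {payload}</p>"

print(reflection_is_executable(payload, safe_body))    # False - escaped
print(reflection_is_executable(payload, unsafe_body))  # True - verbatim
```

A real tester would also consider context (attribute values, JavaScript strings and so on), where escaping rules differ; this sketch covers only the simplest HTML-body case.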

The average website can have hundreds of results from testing. Eliminating the generation of responses (especially the false ones) by correctly sanitising input to the application will make scanning and reporting more efficient and reduce the time spent on false positives in the future.

An organisation that looks at what is causing the generation of false positive responses to a test scenario and eliminates the causes rather than ignoring the false responses will be improving its security and making scanning more efficient, reducing the chance of a vulnerability being ignored.

In summary, it is important to ensure a web application correctly sanitises input to reduce the production of false positives and improve the effectiveness of vulnerability scanning.

If vulnerability scanning tools generate false positives, is there a use for them? Yes. Penetration testers use them to do the heavy lifting at the start of an engagement to identify areas that are potentially vulnerable and require more detailed testing.

For non-penetration testers, they are useful if the user understands what they are doing and what they return in the way of results. A craftsman does not blame his tools; he knows how to maintain them and what tool is appropriate for the job.

For a security professional, it is the same. They must know the appropriate tool, maintain it and understand its limitations as part of the process of securing an organisation.

The first time you use a vulnerability scanner or an automated scanning service on an application it will return results that you need to review to identify the false positives, which may take some time.

However, once you understand how it works, automated scanning is a cost-effective way of monitoring the attack surface of your site. You will be able to see when new vulnerabilities arise, and monitor how long it takes to remedy them. The scanner will also identify if updates reintroduce old vulnerabilities, which occurs when remediation is done to the production site and not the development code, and is more common than you might think.

Make the most of vulnerability scanning

Although vulnerability scanning is not a perfect solution, it’s an essential process – and there are ways of maximising the benefits while minimising the drawbacks.

For example, our Vulnerability Scanning Service contains all the advantages of an automated tool and the expertise of a security professional.

The tool scans for thousands of weaknesses each month, and you’ll receive a detailed vulnerability assessment that gives you a breakdown of the weak spots you must address.

Vulnerability scanning from just £49.95 a month with IT Governance

A version of this blog was originally published on 13 December 2012.