
Web Application Scanners Comparison

anantasec posted a scanner comparison to the web security mailing list today.

"In the past weeks, I've performed an evaluation/comparison of three

popular web vulnerability scanners.This evaluation was ordered by a
penetration testing company that will remain anonymous. The vendors
were not contacted during or after the evaluation.

The applications (web scanners) included in this evaluation are:
- Acunetix WVS version 6.0 (Build 20081217)
- IBM Rational AppScan version 7.7.620 Service Pack 2
- HP WebInspect version 7.7.869

I've tested 13 web applications (some of them containing a lot of vulnerabilities), 3 demo applications provided by the vendors (testphp.acunetix.com, demo.testfire.net, zero.webappsecurity.com), and I've done some tests to verify JavaScript execution capabilities.

In total, 16 applications were tested. I've tried to cover all the major platforms; therefore, I have applications in PHP, ASP, ASP.NET and Java.

The report can be found at http://drop.io/anantasecfiles/
The full URL to the PDF document:
http://drop.io/download/497f0f4e/c1d8b2966f85fb8549a18cbe2d78922...

I've included enough information in this report (the JavaScript files used for testing, the exact version and URL for all the tested applications) so anybody with enough patience can verify and reproduce the results presented here.

Therefore, I will not respond to emails from vendors. You have the information, fix your scanners!" - anantasec

If you're lazy and just want the conclusions, here they are:

"Conclusions
Before starting this evaluation my favorite scanner was AppScan. They have a nice interface and I had the impression they are very fast.
After the evaluation, I've radically changed my opinion: AppScan scored worst in almost all the cases.
They are finishing the scan quickly because they don't do a comprehensive test.
And they have a huge rate of false positives. Almost all scans contain some false positives (most of the times for applications that are not
even installed on the machine). They have a lot of space for improvement.

Acunetix WVS and WebInspect are relatively good scanners.
If you are in a position to use the AcuSensor technology (PHP, ASP.NET, and you are not required to do black-box testing) then Acunetix WVS + AcuSensor is the better choice.

As these results show, black-box testing is not enough anymore.

If you cannot use AcuSensor then you should decide between WebInspect and Acunetix WVS.
Both have their advantages and disadvantages. Browse the results and decide for yourself." - anantasec
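
To make that grey-box point concrete, here's a toy Python sketch (mine, and not how AcuSensor actually works under the hood) of why an in-process sensor can flag a bug that a black-box probe never sees: the application swallows the SQL error, so every probe gets the same page back.

    # Toy model: the app hides SQL errors behind a generic page, so a
    # black-box scanner probing with quotes learns nothing. An in-process
    # "sensor" hook sees the real query and can still raise an alert.

    def run_query(query):
        # Crude stand-in for a database: unbalanced quotes = syntax error.
        if query.count("'") % 2:
            raise ValueError("SQL syntax error")

    def sensor_hook(query, tainted_input):
        # Grey-box check: input containing a quote reached the SQL string
        # verbatim, i.e. it broke out of the intended string literal.
        if "'" in tainted_input and tainted_input in query:
            print("SENSOR ALERT: unsanitized input in SQL:", query)

    def handle_request(user_id):
        query = "SELECT * FROM users WHERE id = '%s'" % user_id  # injectable
        sensor_hook(query, user_id)
        try:
            run_query(query)
        except ValueError:
            pass  # error swallowed: the HTTP response stays identical
        return "200 OK, generic page"

    print(handle_request("1"))             # benign: 200 OK, no alert
    print(handle_request("1' OR '1'='1"))  # black-box sees the same 200 OK,
                                           # but the sensor fires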

Comments






What policy were you using during your AppScan test? Comparing apples to apples with the policies set by these different scanners has to be a challenge if you are not using the same rigor of scan. Also, AppScan requires some level of training to make sure that the pages you are browsing retain their session; your results tend not to show that condition. Also, you should bring up the scan capabilities in new areas like Flash and Flex. Check out AppScan 7.8, there have been some major improvements.


Chris,

Reply to the post on the web security mailing list above (click the 'posted' link).


Hi Chris,

I used the default policies for all the scanners. I didn't perform any tuning for any of them. That would be cheating. Each scanner had an equal chance to perform.

My opinion is that the vast majority of the users of these tools don't have enough knowledge to configure them. Therefore I've scanned with the default settings.

It's possible that AppScan didn't retain the session in some cases. But if the other scanners managed to retain the session and find those vulnerabilities, it means there's a problem with AppScan.

Anyway, let's be real: AppScan has a lot of problems to solve. It's not about retaining the session or not. It's about not performing an extensive scan; your scanner is just rushing through the application to finish the scan quicker than the others.

You say that I should check scan capabilities in areas like Flash and Flex. Yes, no comparison is perfect; I might do that in a future evaluation.

I know about AppScan 7.8, but it was released in the middle of my evaluation.


I am guessing that you did not include Cenzic's solution because you were not asked?
Have you done an eval like this that included Cenzic?

Regards,
Doug


It is important to state that testing a tool without configuring it isn't typically a fair way to evaluate a product. Every single site needs to be customized for the most accurate scan possible.

To your point, you're right: most customers have zero clue how to configure these products. At the same time, from what it sounds like, you're evaluating default configurations instead of actual tool capabilities.

My 0.02


@Doug: I didn't include Cenzic and/or NTO Spider because I don't have a copy of these scanners. In the past, when I looked at Cenzic/NTO, it wasn't easy to get a copy. Now things have changed, from what I can see. I don't have an evaluation where Cenzic was included. Maybe I will do a future evaluation that includes these scanners as well. I'm pretty curious about the results; I've heard that NTO Spider has a very good crawler, but I never had the chance to test it out.

@Robert: Yes, you are right. I'm just evaluating default configurations. It would be a nightmare to configure/tune all the scanners for all the websites and at the same time not favor one or another. The vendors will always find something to pick at.


Agree with you that being the quickest scanner may be a value in some people's minds, but most security testers would only partially agree. There is a sweet spot between speed and performance that is sought after, completeness of scan being paramount. Default configurations are there purely to give the user a high-level understanding of an application's risk based on a core set of analysis. We stick with the OWASP Top Ten as a rule of thumb. AppScan also provides Scan Expert suggestions when you start a default scan; did you take the suggestions? I think another important criterion you overlook in your comparison is what you get out of the result set. Does the scanner provide you with guidance on how to fix the issue? Will it deliver it in a format that a novice or non-security expert can understand? How does it deliver those results to developers, and what about integration with bug tracking systems, etc.?


Yes, I'm aware that AppScan provides Scan Expert suggestions, and yes, I followed all the provided suggestions (that's a pretty obvious thing to do).
I've noticed that for some reason (maybe performance) you don't turn JavaScript parsing ON by default. It will be turned on only if the Scan Expert finds some JavaScript code in the initial analysis. That's not a good idea. The Scan Expert doesn't crawl all the files from the application, and therefore it may not enable JavaScript parsing. At this point in time (2009), I think that JavaScript parsing should be enabled by default in all the scanners.

Yes, you are right about guidance on fixing the issues. My report didn't cover this aspect.

However, I have to congratulate you on this aspect: your scanner provides the best guidance for fixing problems of all the scanners (by far).


@anantasec: AppScan has JavaScript parsing set to "ON" at all times. You are confusing parsing with JavaScript execution, which is turned off and, when needed, can be set to "ON". That's also the Scan Expert recommendation that you mentioned.

And BTW - assuming that a scanner should work 100% of the time in an out-of-the-box manner is ridiculously optimistic and naive.

Anyway, thanks for the valuable bakeoff, our team will research your findings soon.


Silliness.

We could debate scan configuration vs. accuracy until the cows come home. At the bottom line there are several things that must be "evaluated"... here they are:

1) Optimal scan configuration vs. "Out of the box" scan configuration results
2) Crawler performance (I can't stress this enough) should have separate results, and not directly influence the result of the scanning engine test
3) Difficulty in configuring an "advanced" scan configuration to do something like address multi-level nested JavaScript nav menus, or advanced state-tracking mechanisms

Some things to consider:
1) Yes, users are generally clueless - but not all of them are (consider educating the users?)
2) Metrics are important to standardize on - not all of the metrics people collect in "tests" like this make any sense... Scan "speed" and "number of checks" are useless metrics...
3) Scan policies are always interesting topics - what is included in one vendor's "standard" policy is often radically different (and maybe for a good reason?) than another vendor

Without transparency, and with the publicly accessible information that's critically missing from this "test"... this is just another review of "one person's opinion".

Cheers


Hi Rafael,

1) Optimal scan configuration vs. "Out of the box" scan configuration results

Yes, that's a good point. However, it's pretty hard to quantify/compare. Any ideas?

2) Crawler performance

Maybe I don't understand something here, please excuse me if that's the case, but: when you are comparing alerts you automatically compare crawling results (because if the crawler didn't find the link/parameter/combination/value, the scanner cannot find the vulnerability).

3) Difficulty in configuring an "advanced" scan configuration

I agree; that's why I included some JavaScript tests. Maybe they aren't complex enough because of lack of time, but they are still included in the report.

1) Yes, users are generally clueless - but not all of them are (consider educating the users?)

Yes, but the ones that are not clueless don't need others to tell them what scanners to use; they can decide for themselves.

Educating users is a good proposal in theory but doesn't work well in practice, for a number of reasons.

2) Metrics are important to standardize on - not all of the metrics people collect in "tests" like this make any sense... Scan "speed" and "number of checks" are useless metrics...

I totally agree, but this has no connection with my report. I didn't include any scan speed or number of checks in there. Maybe it would be a good idea to read it first and comment later.

3) Scan policies are always interesting topics - what is included in one vendor's "standard" policy is often radically different (and maybe for a good reason?) than another vendor

I only compared classic vulnerabilities like XSS, SQL Injection, File Inclusion and so on.
I don't see any reason (other than insanity) why any vendor wouldn't include these vulnerabilities in their standard policies.

Without transparency, and with the publicly accessible information that's critically missing from this "test"... this is just another review of "one person's opinion".

Let me know what information is missing from this test and if I have it I will try to make it available.


@Ory Segal: When I mentioned JavaScript parsing I was thinking about JavaScript execution. Maybe I'm missing something, but I don't see how you could usefully parse JavaScript without executing the JavaScript code. OK, you could do some regex matching, but that's just lame. BTW, I know that AppScan is doing exactly that.
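
For example (my own quick sketch, not AppScan's actual code), a regex-based "parser" looks something like this and goes blind the moment a URL is built at runtime:

    import re

    # A link assembled at runtime; trivially discovered by real execution.
    js = "var page = 'view' + '.php?id=' + userId; window.location = page;"

    # The regex approach: grep the source for literal URLs with known extensions.
    links = re.findall(r"""["']([^"']+\.(?:php|asp|aspx|jsp)[^"']*)["']""", js)
    print(links)  # [] -- the concatenated URL is invisible to the regex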

Anyway, my point was that JavaScript execution is turned OFF in your scanner by default, and I had cases when it didn't get turned ON when it should have.

I mean, the Scan Expert just crawls a limited number of files. What if none of those files have JavaScript code but other pages do?

> assuming that a scanner should work 100% of the time in an out-of-the-box manner is ridiculously optimistic and naive.

I totally agree that would be a stupid assumption to make.
However, I never made that statement.
What I did mention, and it still stands, is that if AppScan doesn't find a vulnerability that the other two scanners find without any tuning, it means that AppScan has a problem.


Nice, thanks for the in-depth eval as this continues to be a difficult area for most to make a decision on, especially given how much the products cost!

I'm fairly surprised with the Acunetix results... I'll have to include them in our next eval. When we looked at it before, it was automatically discounted as an option because it had no enterprise platform capabilities; I'm not sure whether that has changed.

I would also definitely recommend you look at Cenzic's Hailstorm product, as its base engine operates in a totally different fashion (e.g. it renders responses in a built-in browser to detect JS events, instead of just parsing for 200 OK, etc.), so generally there are fewer false positives.

It also depends heavily on what additional features you are interested in... AppScan has nice extensibility support, Hailstorm allows you to code/modify the attacks, WebInspect comes with additional tools for SQL injection, etc. Still, it would seem these tools are mostly worthless in untrained hands, and I don't see that changing any time soon.


Thanks Anonymous,

No, from what I know Acunetix doesn't have any enterprise platform capabilities.

Like I said before, I would like to take a look at Cenzic and NTO but haven't had a chance yet. Maybe this will change in the near future. I'm curious how they perform.

And yes, it surely depends on what features you need. Everybody needs to investigate and decide for themselves.


Thanks for the test, Anantasec. Even though I understand that the testing methodology has some shortcomings, I agree that "out of the box" testing is still pretty fair. If you start tuning and teaching the scanner before the tests, you are stepping into a swamp and will be accused even more of "doing this but not doing that". The crawler and the scan engine are different components, and with this setup a bad crawler can spoil the results of an excellent engine, but should a random tester really have to think about that? After all, he or she is buying a web app scanner as a whole package, so he or she is entitled to trust that it performs well as a whole package. In the end, do web app scanner vendors state that they expect their customers to be trained web app testing professionals? I doubt it.

If someone wants to do more in-depth testing with even more tools, I would be happy to see that testing report too!


Attackers are well aware of the valuable information accessible through Web applications, and their attempts to get at it are often unwittingly assisted by several important factors. Conscientious organizations carefully protect their perimeters with intrusion detection systems and firewalls, but these firewalls must keep ports 80 and 443 (SSL) open to conduct online business. These ports represent open doors to attackers, who have figured out thousands of ways to penetrate Web applications.

The standard security measures for protecting network traffic, network firewalls and Intrusion Prevention Systems (IPS) and Intrusion Detection Systems (IDS), do not offer a solution to application level threats. Network firewalls are designed to secure the internal network perimeter, leaving organizations vulnerable to various application attacks. Intrusion Prevention and Detection Systems (IDS/IPS) do not provide thorough analysis of packet contents. Applications without an added layer of protection increase the risk of harmful attacks and extreme vulnerabilities.

Web application level attacks are the Achilles' heel. In the past, security breaches occurred at the network level of corporate systems. Today, hackers are manipulating web applications inside the corporate firewall. This entry enables them to access sensitive corporate and customer data. An experienced hacker can break into most commercial websites through even the smallest hole in a company's website application code. These sophisticated attacks have become increasingly threatening to organizations.

I recommend a service called GamaSec (www.gamasec.com), a remote online web vulnerability-assessment service that tests web servers, web-interfaced systems and web-based applications against thousands of known vulnerabilities with dynamic testing, and by simulating web-application attacks during online scanning. The service identifies security vulnerabilities and produces recommended solutions that can fix, or provide a viable workaround for, the identified vulnerabilities.

www.gamasec.com


@GamaSec (er I mean 'Anonymous'):

If you're gonna shill for your company, maybe you should mix it up a bit from your marketing copy -- http://www.gamasec.com/pdf/WebsiteSecurityTests.pdf

P.S. Foundstone has a new appliance as well. A friend of mine just did the install and he said it only took one day for both the install and training. It's built like a tank, so if you need reliable hardware and software, check out Foundstone's appliance. (See http://attrition.org/errata/sec-co/foundstone-01.html, for those who aren't in on the joke.)


@Suzy

I hadn't seen the Foundstone post before. Nice!


Ah, the sweet scent of astroturf.

***

Executing JavaScript is something to be done with caution. Any execution of code in a Turing-equivalent language is vulnerable to non-halting programs, the simplest being something like

while (1) {}

You can build heuristics around that, but the problem is still there underneath it all.

Less theoretically, JavaScript execution is also prone to doing things to the underlying website, such as actually submitting forms to CGIs, possibly with real-world consequences.

Unless you know upfront that you're analysing a test system, potentially damaging things have to be turned off.

(It would be an interesting comparison to see how much damage the various scanners wreak with the intrusive parts turned on!)


@Henry: You can do what the browsers do: just terminate the script if it doesn't finish in a reasonable amount of time (a few seconds).
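
Roughly like this (a quick Python sketch of the watchdog idea, assuming some standalone JavaScript interpreter such as node is on the PATH; none of the scanners necessarily implement it this way):

    import subprocess

    def run_js_with_watchdog(script, timeout_s=3.0):
        # Run untrusted JavaScript in a separate process so a runaway
        # script can't hang the scanner itself.
        try:
            result = subprocess.run(
                ["node", "-e", script],
                capture_output=True, text=True,
                timeout=timeout_s,  # on expiry the child is killed and
            )                       # TimeoutExpired is raised
            return result.stdout
        except subprocess.TimeoutExpired:
            return None  # treat non-halting scripts as "no result"

    print(run_js_with_watchdog("console.log(1 + 1)"))  # -> "2"
    print(run_js_with_watchdog("while (1) {}"))        # -> None after ~3s

    # In a real scanner you'd also have to stub out the network and DOM side
    # effects Henry mentions, so the script can't actually submit forms
    # while being analyzed.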


Hi all,

I'd just like to comment on out-of-the-box scanning with Rational AppScan and HP WebInspect.

I was with a customer that scanned the same application with the default settings and default test policies for both tools. The result was that WI "identified" a lot of issues, but in actual fact, when we dug into them a little closer, all these "issues" were just false positives.

AppScan did have false positives too, but it was definitely much, much more accurate compared to WI.

Not too sure about Acunetix, as at that time the AcuSensor technology was not out yet.

I did have to educate the customer on some of the configurations not covered by the default settings.

Cheers
SS


Go with open source tools. I wouldn't spend tons of money on an automated tool that would not find anything other than low-hanging-fruit vulnerabilities. I found Powerfuzzer (http://www.powerfuzzer.com) usable and effective.


Just came across this... are the docs still around? The link above doesn't work anymore.


I don't think so. Try http://web.archive.org; they may have a copy floating around.
