Automating a brute force web attack

2017-03-24  Cyber Security

This week was all about hacking at the SANS Cyber Retraining Academy, as students attempted to take control of a drone before embarking on a two-day NetWars capture-the-flag marathon.

The challenges spanned a huge range of skills – we found ourselves doing everything from setting up backdoors and stealing WordPress credentials to delivering Metasploit payloads and gaining root access on Linux target machines. It was a lot of hard work, but also plenty of fun.

There’s plenty I could write about, but the most satisfying moments came when I managed to automate attacks, so I’ll focus on one specific time my command line skills paid off.

The goal of the exercise was to access a web page as an administrator. Authorisation was managed by a security code supplied as a parameter in the HTTP GET request, which had to match a randomly-generated code on the page itself. The code changed on every load and was impossible to predict, so the only way forward was to subject the site to a brute force attack.
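In other words, every attempt amounted to requesting a URL along these lines and hoping the value in the query string happened to match the code on the page (the hostname, parameter name and value here are the ones from the command further down):

http://targetsystem.com/?securitycode=2208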

My initial reaction was to choose a valid security code and refresh the page with F5 until the values happened to align, but it quickly became clear that this would take forever and the repetitive action ran the risk of refreshing beyond the authenticated page.

What if, I thought, I could automate the process from the Linux shell, saving the results of each attempt so there was no need to spot the target page in real time? Here’s what I came up with:

watch -n 1 wget "http://targetsystem.com/?securitycode=2208"

The first part of this command – watch -n 1 – re-runs the command that follows once every second. The wget command fetches the page at that URL and saves it to the current directory, which in my case was Downloads; because wget won’t overwrite an existing file by default, each new copy is saved with a numeric suffix, so the attempts pile up side by side rather than replacing one another. (Quoting the URL keeps the shell from trying to interpret the question mark.)
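The same idea can also be written as an explicit shell loop, which gives more control over how each attempt is named and when the run stops. This is a rough sketch of that variation rather than the exact command I ran – it assumes around 600 attempts at one-second intervals and labels each saved response by attempt number:

for i in $(seq 1 600); do wget -q -O "attempt-$i.html" "http://targetsystem.com/?securitycode=2208"; sleep 1; done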

Running the watch command for 10 minutes put hundreds of HTML files in my Downloads folder. Spotting the page where authentication had succeeded was as easy as sorting the files by size and looking for the anomaly. It was simple, but a solution to what might have been an overwhelming task.
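Sorting by size is itself a one-liner. Something like the following (assuming the files landed in ~/Downloads, as they did for me) lists them largest first, so a single page that differs from the rest stands out at one end of the listing:

ls -lS ~/Downloads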

The lessons I learnt from this trial-and-error process came in useful on several more occasions over the days that followed, when I created more shell scripts and looping commands. It just goes to show that the most effective solution is not always the most complex one.

Looking for the comments? My website doesn't have a comments section because it would take a fair amount of effort to maintain and wouldn't usually present much value to readers. However, if you have thoughts to share I'd love to hear from you - feel free to send me a tweet or an email.