AverageSecurityGuy

Security, Programming, Pentesting

DNS Footprinting at Scale

Recently I wrote an article on domain footprinting. Shortly after that article was published, a friend on Twitter mentioned that he was doing zone transfer research against the Alexa top 1 million web sites, so I decided to try my hand at it as well. My work on that project eventually resulted in code, raw data, and some analysis, which can be found here: https://github.com/averagesecurityguy/axfr.

Part of the analysis from the zone transfer research resulted in a huge list of subdomain names. I took the top 10,000 subdomains and used a modified version of the resolve_mt.py script to footprint the Alexa top 1000 domains.

The modified script, dnsbrute.py, and the resulting dataset (in tar.gz format) are both available. The dnsbrute.py script only works on one domain at a time, so I used the GNU parallel program to run 8 copies of the script in parallel. With that I can get through the top 1000 domains within a day. Most domains take anywhere from 2-10 minutes, depending on how fast their DNS servers respond.

To run the script in parallel use the following command:

parallel -a domain.list -j 8 ./dnsbrute.py {1} subdomain.list

Make sure you have Python3 installed, along with the dnspython3, netaddr, and ipwhois libraries.
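
If any of the libraries are missing, they can usually be installed with pip3 (assuming pip3 is available on your path):

pip3 install dnspython3 netaddr ipwhois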

DNS Footprinting

There are a lot of tools available for target footprinting: Spiderfoot, Maltego, and theHarvester, to name a few. Unfortunately, I find something lacking in each of these tools. Spiderfoot and Maltego are too complicated for me. I really like the Unix philosophy of simple tools that do one thing well, and both of these fall outside of that philosophy. TheHarvester fits much better into this philosophy, but it also provides a lot of data I don't want when doing network footprinting, like email addresses and shared hosts.

When I am trying to footprint a network, I am often given only a domain name, and I want to know the DNS names and IP addresses associated with that domain. In addition, I want to know the network blocks those IP addresses belong to and any other servers that may be in those network blocks. With that in mind, I wrote the resolve.py Python script.

Overview

The resolve.py script takes a domain name and provides the SOA record, MX records, and NS records. It then attempts a zone transfer from each of the name servers and brute forces DNS names using the provided word list. Next, it does a whois lookup to find the network blocks associated with any IP addresses found in the A and AAAA records. Finally, it performs a reverse lookup on all of the identified IP addresses and on the small network blocks.
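
For a rough sense of what the script automates, here is the same set of lookups done by hand with dig and whois. This is only a sketch; the domain, name server, and IP address below are placeholders, and the script also handles the word list brute force and network block handling for you.

# Placeholders: example.com, ns1.example.com, and 198.51.100.7
dig +short SOA example.com
dig +short MX example.com
dig +short NS example.com

# Try a zone transfer against one of the name servers
dig AXFR example.com @ns1.example.com

# Find the network block and reverse record for a discovered address
whois 198.51.100.7 | grep -iE 'cidr|netrange|inetnum'
dig +short -x 198.51.100.7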

Dependencies

The following Python3 libraries are needed.

  • dnspython3
  • netaddr
  • ipwhois

Usage

resolve.py domain wordlist
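
For example, using a hypothetical word list named subdomains.txt:

./resolve.py example.com subdomains.txt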

You can find an example of the output here: http://pastebin.com/yUxjeTcj

Update 2016-01-20:

There is now a multi-threaded version of the script, resolve_mt.py. The usage is the same as resolve.py.

Web Content Discovery with Parallel

In my previous post I showed you how to do content discovery using a bash one-liner and the dirb program. This works great if you have 5-10 servers, but if you have more than that, you may need to scan multiple servers at the same time. This is where the parallel command can help.

If the parallel command is not installed on your Kali box, you can install it with apt-get install parallel.

Using the following command, we can run dirb against 16 servers at once.

cat websites.txt | parallel -j 16 dirb {} -f -o websites.dirb

The output from all 16 jobs will be written to the websites.dirb file. Once the command completes, you can grep the websites.dirb file for any identified files. The command grep '+' websites.dirb should produce results similar to the following:

+ https://yahoo.com/t (CODE:302|SIZE:257)
+ https://twitter.com/tos (CODE:200|SIZE:3751)
+ https://yahoo.com/ticket (CODE:200|SIZE:0)
+ https://yahoo.com/ticket_list (CODE:200|SIZE:0)
+ https://yahoo.com/ticket_new (CODE:200|SIZE:0)
+ https://yahoo.com/tickets (CODE:200|SIZE:0)
+ https://netflix.com/_borders (CODE:504|SIZE:0)
+ https://netflix.com/_database (CODE:504|SIZE:0)
+ https://netflix.com/_js (CODE:504|SIZE:0)
+ https://netflix.com/~apache (CODE:504|SIZE:0)
+ https://netflix.com/apis (CODE:301|SIZE:0)
+ https://netflix.com/crossdomain.xml (CODE:200|SIZE:3)
+ https://craigslist.org/about (CODE:302|SIZE:1)
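
If you only want the discovered URLs without the status codes and sizes, you can filter the same file a little further (this assumes the output format shown above):

grep '^+ ' websites.dirb | awk '{print $2}' | sort -u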

We've Lost Sight of the Basics

I’m a penetration tester and I have a checklist that I use on just about every test I do. My checklist includes things like:

  • Scan the external network for open SMB ports.
  • Scan the internal network for shared folders with no authentication.
  • Scan web servers for files like info.php, .htaccess, config.php, etc.
  • Test every login for default credentials.

Do you know why these items are on my checklist? It's because they work. Inevitably some sysadmin or web admin has misconfigured their firewall, screwed up the file permissions on their server, or installed a system and didn't bother changing the default password (or, worse yet, the vendor wouldn't let them).
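
As an example of how little effort the first checklist item takes, a single nmap scan covers it (the target range below is a placeholder):

nmap -p 139,445 --open -oG smb_hosts.gnmap 198.51.100.0/24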

The reason we are losing the war in infosec is that we've lost sight of the basics. We don't segment our networks, we don't change default passwords, and we don't harden servers before putting them in production. Some of these tasks can be automated and some cannot, but the initial investment of time is more than worth the long-term benefit.

Many of the common vulnerabilities I find can be found with free or cheap tools in a matter of minutes. Often they can be fixed in a matter of days, if the desire to fix them is there.

It's been at least five years since I had to defend a network, so maybe I'm out of touch, but unless we find a way to fix these dumb mistakes and prevent them from happening again, we will never win the infosec war.

Here's a presentation I did recently to help illustrate the point: You Will Get Owned in 2016.

Web Content Discovery on Many Servers

Often on black box network tests I run across a large number of web servers on the network. I like to look for common files on all of the web servers I identify because you never know when you may run across a configuration file, a PHP info file, or some other interesting bit of information. When there are tens of servers to check, I don't have time to kick off each scan and babysit it until it is done, so I use a simple bash one-liner to help me out.

This one-liner assumes you have a text file with each of your web servers listed on a separate line in the following format: http(s)://servername_or_ip:<port>. It also assumes you have the dirb program installed; it should already be installed on Kali or can be installed with apt-get install dirb.

Here is the one-liner:

for u in $(cat web_servers.txt); do dirb $u -f; done

If you would like to use a different word list, add the path to the word list after the $u, like this:

for u in $(cat web_servers.txt); do dirb $u /path/to/wordlist -f; done
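
If you want to keep each server's results for later grepping, dirb's -o flag can write them to per-server files. The filename scheme here is just one option:

for u in $(cat web_servers.txt); do dirb $u -f -o "dirb_$(echo $u | tr ':/.' '___').txt"; done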