Some questions while experimenting with domain enumeration

Experimenting with domain enumeration. I’m using amass to search for subdomains and running gobuster to search for directories and files. Should I be running gobuster on all the subdomains or will running it on the domain cover everything?


You need to run it on each subdomain separately. Scanning google.com won’t cover stuff on meme.google.com, since they resolve to different addresses.
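Something like a loop over your amass output is the usual way to handle it. Rough sketch, assuming the subdomains are one-per-line in a subdomains.txt and you adjust the wordlist path to whatever you actually use:

```bash
# run one gobuster dir scan per discovered subdomain, keeping results per host
while read -r sub; do
    gobuster dir \
        -u "https://$sub" \
        -w /usr/share/wordlists/dirb/common.txt \
        -o "gobuster_${sub}.txt"
done < subdomains.txt
```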

Thanks, I was wondering if that was the case. Currently trying to make a bash script for the ultimate automatic enumeration tool: gobuster, amass, Sublist3r, nmap, httprobe, EyeWitness…

One thing to keep in mind with this: it’s probably not a concern if you’re just checking out your own security on internal stuff, but running this kind of “scattershot discovery” against any sort of actual target is going to be incredibly noisy and will almost immediately flag your traffic as something to be looked into and watched.

In my opinion, it’s pointless to try to automate all that enumeration unless you’re smart about the stealth and you bring some new convenience to the process, such as a good method of formatting output, parsing all the output down to just the important bits, or making one tool output data in the format another tool needs.
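As a concrete example of that last point (just a sketch, and flag behaviour may vary by version): amass writes one hostname per line, which httprobe will happily read off stdin, printing only the hosts that actually answer over HTTP/HTTPS.

```bash
# passive subdomain discovery, one hostname per line
amass enum -passive -d example.com -o subs.txt

# httprobe reads hostnames from stdin and prints only the live http(s):// URLs
cat subs.txt | httprobe > live_hosts.txt
```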

Really, the only good reason to automate that kind of thing is if you’re automating it to run, unattended, over DAYS, with a LOT of random entropy added.
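For the entropy part, even something as simple as a random pause between steps helps break up the pattern. Very rough sketch (the 1–6 minute window is arbitrary and the scan commands are just examples):

```bash
# wrap any scan step so there's a random 1-6 minute pause after it
run_with_jitter() {
    "$@"
    sleep $(( (RANDOM % 300) + 60 ))
}

run_with_jitter amass enum -passive -d example.com -o subs.txt
run_with_jitter nmap -sT -T2 -iL subs.txt -oN nmap.txt
```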

That’s the plan. I can set it off while I work my day job and hopefully come back to something interesting. Also, I won’t be running all those scripts at once; I’ll be using switches to choose which scans I want to run, or a passive vs brute-force method of enumeration (rough sketch of the idea below).
I’ll be organising any findings into a neat file structure for each target to avoid scraping through long files of potential rubbish.
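Something along these lines is what I have in mind so far. Very much a sketch, all the flag names, paths and directory names are placeholders:

```bash
#!/usr/bin/env bash
# -p = passive only, -b = include brute force, -t = target domain
passive=false; brute=false; target=""

while getopts "pbt:" opt; do
    case "$opt" in
        p) passive=true ;;
        b) brute=true ;;
        t) target="$OPTARG" ;;
        *) echo "usage: $0 [-p] [-b] -t domain" >&2; exit 1 ;;
    esac
done

[ -z "$target" ] && { echo "a target is required (-t)" >&2; exit 1; }

# one tidy directory per target so nothing ends up in one giant dump file
mkdir -p "results/$target"/{amass,gobuster,nmap,screenshots}

$passive && amass enum -passive -d "$target" -o "results/$target/amass/passive.txt"
$brute   && amass enum -brute   -d "$target" -o "results/$target/amass/brute.txt"
```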

And also only looking for, e.g., HTTP code 200 to cut down on the volume of information.

Other status codes are highly important to take note of too, though. For example, if a login page returns a 301 directing you to an error page on an unsuccessful auth attempt, you can just edit the HTTP response with a proxy to make it a 200, provided they don’t explicitly use a 403 or something.

That’s the debate I’m having with myself: balancing including the most relevant information without having to perform a scan that takes like a week or something. I think I’ll take just HTTP code 200 and maybe a couple of others like 301 for the automatic scan, and I can always perform a more in-depth manual scan if necessary. No one tool can do it all.
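For the filtering itself I’ll probably just grep the saved gobuster output rather than fight with per-version flags. Sketch, assuming gobuster’s default output format (which prints the status code next to each path) and placeholder paths:

```bash
# full scan saved to disk, then pull out only the codes I care about
gobuster dir -u "https://example.com" -w wordlist.txt -o full_scan.txt
grep -E 'Status: (200|301)' full_scan.txt > interesting.txt
```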

Then you probably won’t detect things that can have sensitive info that you just don’t have access to yet, but that you could access later. If you have a dir that 403’s but then later you get creds, you can revisit it since you knew it was there.
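Even just parking those in a separate file costs nothing, e.g. (assuming a saved gobuster output file like the one above):

```bash
# keep a running list of forbidden paths to revisit once you have creds
grep 'Status: 403' full_scan.txt >> revisit_when_authed.txt
```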