carnal0wnage

Sunday, July 8, 2007

Web-Based Directory Enumeration

Inspired by a post on , I decided to play with http-dir-enum.

In a simple web-based directory enumeration attack, we look for all response codes besides 404: 200, 403, and so on. Anything that isn't a 404 deserves a closer look and may point to old, forgotten, or "hidden in plain sight" directories that contain useful information.
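The idea above can be sketched in a few lines of Python: send a HEAD request for each candidate directory name and keep anything that doesn't come back as a 404. This is a minimal sketch of the technique, not http-dir-enum's actual implementation; the wordlist and target URL are placeholders.

```python
# Probe candidate directories and report anything that is not a 404.
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def probe(url):
    """Return the HTTP status code for a HEAD request to url."""
    req = Request(url, method="HEAD")
    try:
        with urlopen(req, timeout=20) as resp:
            return resp.status
    except HTTPError as e:
        return e.code            # urllib raises on 4xx/5xx; the code is on the error

def enum_dirs(base, words, probe=probe):
    """Yield (directory, code) for every candidate whose response is not 404."""
    for word in words:
        code = probe(f"{base.rstrip('/')}/{word}/")
        if code != 404:          # 200, 403, 301 ... all deserve a closer look
            yield word, code

# Example (would hit the network):
# for name, code in enum_dirs("http://target", ["admin", "images", "cgi-bin"]):
#     print(name, code)
```

The `probe` callable is injectable so the filtering logic can be exercised without touching the network.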

Let's run the Perl script without any options to see the usage information:

SegFault:~/Desktop/http-dir-enum-0.4.2 cg$ ./
http-dir-enum v0.4.2
Copyright (C) 2006 Mark Lowe

Given a URL and a wordlist, http-dir-enum will attempt to determine names of
directories that exist on a website.

Usage: [options] -f dir-file url

options are:
-m n Maximum number of worker processes (default: 8)
-f file File of potential directory names
-k file File of known directory names
-c 0|1 Close connection between each attempt (default: 0)
-r 0|1 Recursively enumerate sub directories (default: 1)
-t n Wait a maximum of n seconds for reply (default: 20)
-u user Username to use for basic authentication
-p pass Password to use for basic authentication
-H g|h HTTP method g=GET, h=HEAD (default: head)
-i code Ignore HTTP response code (e.g. 404 or '404|200')
-U str Set User-Agent header to str (default based on Firefox)
-s 0|1 Add a trailing slash to the URL (default: 1)
-S 0|1 Case sensitive directory names (default: 1)
-a 0|1 Automatically determine HTTP response code to ignore (default: 1)
-l n Limit scan to n attempts per second (default: unlimited)
-R 0|1 Follow redirects (default: 0)
-q Quiet. Don't print out info ("[I]") messages
-n n Only read first n lines of dirs file (default: unlimited)
-o file Save XML report of dirs found to file (default: don't save a report)
-x regx Return only results that match this regular expression
-X regx Ignore results that match this regular expression
-P url Proxy URL
-C str Use cookie
-v Verbose
-d Debugging output
-D code Print out whole response if it has HTTP code "code" (e.g. 500)
-h This help message

The default options should be suitable most of the time, so the
typical usage would be: -f dirs.txt http://host


* Make sure the number of processes (-m) is less than the number of directories
passed via the -f option. It normally is anyway.

* Use a lower number of processes (e.g. 2) over fast connections like localhost. Use a
higher number (e.g. 8 or 32) over laggy connections.
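The -m (worker count) and -l (rate limit) options might look roughly like the sketch below in Python. Note this is an approximation of the idea, not the tool's design: http-dir-enum forks Perl worker processes, while this uses threads, and the per-request sleep is a cruder throttle than a true requests-per-second limiter.

```python
# Probe directories with a bounded worker pool and an optional rate limit.
import time
from concurrent.futures import ThreadPoolExecutor

def enum_parallel(base, words, probe, workers=8, max_per_sec=None):
    """Probe candidate directories concurrently; return {word: status_code}."""
    delay = 1.0 / max_per_sec if max_per_sec else 0.0

    def task(word):
        if delay:
            time.sleep(delay)    # crude per-request throttle (approximates -l)
        return word, probe(f"{base.rstrip('/')}/{word}/")

    # max_workers approximates http-dir-enum's -m option
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(task, words))
```

As the notes above suggest, a small pool is plenty on low-latency links, while a laggy connection benefits from more in-flight requests.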

Let's run it against the target:

SegFault:~/Desktop/http-dir-enum-0.4.2 cg$ ./ -v -o carnal-output -f directory-names.txt
Starting http-dir-enum v0.4.2
Copyright (C) 2006 Mark Lowe

| Scan Information |

URL ....................
Processes .............. 8
Directory name file .... directory-names.txt
Query timeout .......... 20 secs
HTTP Method ............ HEAD
Max Queries / sec ...... unlimited
Trailing slash ......... On
Recursive dir search ... On
Close connections ...... Off
Follow redirects ....... Off
Case sensitive dirs ... On
Auto-ignore ............ On
Output file ............ carnal-output

######## Scan started on Sat Jul 7 22:23:40 2007 #########
[I] Processing directory: / (0 dirs remaining)
[I] Auto-ignoring HTTP code 404 for
cgi-bin 403
include 404
bin 404
images 403
license 404
man 404
logs 404
modules 404

######## Scan completed on Sat Jul 7 22:22:35 2007 #########
6 results.

4949 queries in 76 seconds (65 queries / sec)
XML report saved to output.
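The "[I] Auto-ignoring HTTP code 404" line hints at what the -a option does: request a name that almost certainly does not exist and treat whatever status comes back as the "not found" code. That matters on servers that answer missing paths with 200 or 302 instead of 404. A sketch of that idea, assuming a `probe(url)` helper that returns the HTTP status code:

```python
# Detect the status code this server uses for nonexistent directories.
import uuid

def detect_ignore_code(base, probe):
    """Probe a random, almost-certainly-missing directory; return its code."""
    bogus = uuid.uuid4().hex                      # random 32-char hex name
    return probe(f"{base.rstrip('/')}/{bogus}/")
```

Whatever code this returns becomes the one the enumeration loop filters out, in place of a hard-coded 404.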

And the results:

SegFault:~/Desktop/http-dir-enum-0.4.2 cg$ cat carnal-output

<dirs_found name="admin" code="200" />
<dirs_found name="cgi-bin" code="403" />
<dirs_found name="icons" code="200" />
<dirs_found name="images" code="403" />
<dirs_found name="images/blog" code="403" />
<dirs_found name="research" code="403" />
<scan_options name="auto_detect_ignore" value="On" />
<scan_options name="close_connection" value="Off" />
<scan_options name="cookie" value="[not_set]" />
<scan_options name="dirsfile" value="directory-names.txt" />
<scan_options name="end_time" value="1183872399" />
<scan_options name="end_time_string" value="Sat Jul 7 22:26:39 2007" />
<scan_options name="follow_redirects" value="Off" />
<scan_options name="http_method" value="head" />
<scan_options name="ignore_code" value="[not_set]" />
<scan_options name="password" value="[not_set]" />
<scan_options name="processes" value="8" />
<scan_options name="proxy" value="None" />
<scan_options name="recursive" value="On" />
<scan_options name="scan_rate" value="unlimited" />
<scan_options name="start_time" value="1183872319" />
<scan_options name="start_time_string" value="Sat Jul 7 22:25:19 2007" />
<scan_options name="starturl" value="" />
<scan_options name="timeout" value="20" />
<scan_options name="trailing_slash" value="On" />
<scan_options name="username" value="[not_set]" />
<scan_options name="version" value="0.4.2" />
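Since the report is XML, the interesting directories can be pulled out programmatically. A sketch using the standard library, assuming the elements shown above sit inside a single root element in the actual file (the fragment here appears to have lost its wrapper in publishing):

```python
# Extract (name, code) pairs from an http-dir-enum XML report.
import xml.etree.ElementTree as ET

def dirs_from_report(xml_text):
    """Return [(name, code), ...] for every dirs_found element in the report."""
    root = ET.fromstring(xml_text)
    return [(e.get("name"), int(e.get("code")))
            for e in root.iter("dirs_found")]
```

This makes it easy to feed the 200s and 403s straight into a follow-up tool rather than eyeballing the report.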

Not too shabby :-)

have fun



Hello,Google. said...

CG, I love reading your blog.
With a private directory file, this script is still absolutely useful.

Jock Pereira said...

Excellent information, thanks! I did a review of what I think are the top 10 web directory enumeration tools - you might be interested in some of these...