Nimbuzzmasters forum

The forum of the nimbuzz forums


Web Pentest - Part 1: Information Gathering


#1

r00t d3str0y3r

Member

Posted Sat Nov 01, 2014 8:24 pm

 
Information Gathering with online websites

Hello and welcome to my first tutorial on Information Gathering.

In this tutorial we will gather information about our target website using some freely available online services:

1. Netcraft
2. YouGetSignal
3. Archive.org
4. robots.txt

The last of these, the robots.txt file, lets us view the paths the web admin wants to hide from bots and keep out of public view. All of this information can often give your testing a head start. I will explain each one with an example.

1. Netcraft
This website gives detailed information about the web hosting and the server: what is running on it, its IP address, WHOIS data, server-side technologies, and so on. Save all of it in your report, so you can choose the right tests and define the attack surface, which is the most important part of a pentest.
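You can reproduce a small part of what Netcraft reports using only the Python standard library. The sketch below resolves the target's IP addresses and reads the server-identifying response headers; the target hostname here is a placeholder, and keep in mind many servers suppress or fake these headers:

Code:
import socket
import urllib.request

target = "example.com"  # placeholder, replace with the site you are testing

# Resolve the hostname to its IP address(es)
hostname, aliases, ips = socket.gethostbyname_ex(target)
print("IP addresses:", ips)

# Send a HEAD request and read server-identifying response headers
req = urllib.request.Request("http://" + target, method="HEAD")
with urllib.request.urlopen(req) as resp:
    for header in ("Server", "X-Powered-By"):
        if resp.headers.get(header):
            print(header + ":", resp.headers[header])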

2. YouGetSignal
Sometimes the particular domain you are targeting is not vulnerable, or you cannot find the right attack surface. In that case you can run a reverse IP domain lookup and find the other domains hosted on the same server; one of them may be vulnerable and let you onto the server. In this way you can work your way into the target website.
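A true reverse IP domain lookup needs a domain-to-IP database like the one YouGetSignal maintains, so it cannot be done fully from the standard library. As a rough approximation, the sketch below performs a PTR (reverse DNS) lookup, which returns at most the single name registered for the IP, not every virtual host on the server:

Code:
import socket

target = "example.com"  # placeholder target

ip = socket.gethostbyname(target)
print("Target IP:", ip)

# PTR lookup: returns only the one name registered for the IP.
# A full reverse IP lookup needs a service such as YouGetSignal.
try:
    ptr_name, aliases, _ = socket.gethostbyaddr(ip)
    print("PTR record:", ptr_name)
except socket.herror:
    print("No PTR record found")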

3. Archive.org
Archive.org maintains the history of many websites on the internet. You can often find information there that is no longer displayed on the live site, perhaps removed because of a security issue, yet something related to it can still be found in the archive.
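The Wayback Machine exposes a public availability endpoint that returns the closest archived snapshot of a URL as JSON. A minimal sketch (the target hostname is a placeholder):

Code:
import json
import urllib.parse
import urllib.request

target = "example.com"  # placeholder target

# Query the public Wayback Machine availability API for the closest snapshot
api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(target)
with urllib.request.urlopen(api) as resp:
    data = json.load(resp)

closest = data.get("archived_snapshots", {}).get("closest")
if closest:
    print("Snapshot from", closest["timestamp"], "->", closest["url"])
else:
    print("No archived snapshot found")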

4. Robots.txt
robots.txt is a file websites use to tell crawlers not to index certain sensitive paths or admin panels. Since it is publicly viewable, we can read that list ourselves and make use of those paths later on.

With all of that done, we move to the target domain and open its robots.txt file directly. Reading it gives you the paths to everything the web admin or the web application tried to hide from bots, and visiting those paths sometimes turns up content left completely open through the carelessness of the web admin.
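Fetching and parsing the file takes only a few lines of standard-library Python. This sketch prints every Disallow path, each of which is a candidate for a manual look; the target hostname is a placeholder:

Code:
import urllib.error
import urllib.request

target = "example.com"  # placeholder, replace with the site you are testing

# Fetch robots.txt and list the paths the admin asks crawlers to avoid
try:
    with urllib.request.urlopen("http://" + target + "/robots.txt") as resp:
        body = resp.read().decode("utf-8", errors="replace")
except urllib.error.HTTPError as e:
    body = ""
    print("No robots.txt found:", e.code)

for line in body.splitlines():
    line = line.strip()
    if line.lower().startswith("disallow:"):
        print(line.split(":", 1)[1].strip())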

Author: r00t d3str0y3r



http://securitymafia.com


#2

Broken Angel

Designer

Posted Sat Nov 01, 2014 11:08 pm

 




