
OpenMPI parallel and distributed processing

Published on December 28, 2012 by modnet on www.modnet.org

OpenMPI & JTR

OpenMPI: How Does It Work?


OpenMPI is a Message Passing Interface (MPI) library used for parallel and distributed processing. OpenMPI itself is an open source project and an implementation of the MPI-1 and MPI-2 standards. MPI is not a language: all MPI operations are expressed as functions, subroutines, or methods, according to the language bindings which, for C, C++, Fortran-77, and Fortran-95, are part of the MPI standard. There are different builds of OpenMPI for different compiler families. We can start a program from a node of the cluster called the master, specify how many processes we want to use, and list the other nodes in the cluster we want to run on. The program starts on each node of the cluster, each process can communicate with the others through the MPI library, and all results are sent back to the node the program was launched from (in this case, the master).
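The last point above, that each process computes locally and results come back to the node the program was launched from, can be sketched with a minimal MPI program. This is an illustrative example of mine, not code from the article; the file name and variable names are my own, and it needs an installed MPI implementation (mpicc/mpirun) to build and run:

```c
/* gather.c -- illustrative sketch: every process computes a partial
 * value, and rank 0 (the process on the node mpirun was started from)
 * collects the sum with MPI_Reduce. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

    int partial = rank + 1;   /* each process contributes rank+1 */
    int total = 0;

    /* All partial results are sent back to rank 0 and summed there. */
    MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d processes: %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and launched with mpirun -np N, rank 0 prints the sum 1+2+...+N while the other ranks print nothing.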

Building a distributed resource cluster


We need at least two Unix/Linux nodes using the same processor architecture. Each node must be configured with a common user name and the same configuration path, and that user must be able to log into each node without password authentication (this can be done with ssh public-key authentication). For this configuration I will use a 64-bit Intel architecture. I am using Debian as the main operating system, and these nodes are part of a cluster (the Morpheus three-node cluster built previously: echelon as master, phoenix as Slave1, nexus as Slave2); however, the nodes can also be used stand-alone.

The same user must be created on each node (to build this I made a user named xmpi). To run commands interactively on multiple servers over an ssh connection you can use a tool called ClusterSSH (a.k.a. cssh). This tool can save time when you need to run the same command in real time on more than one cluster node, but it should be used carefully, especially with commands like rm or shutdown.

Add the user, set the password, and generate a DSA ssh key:

useradd xmpi -m -b /home
passwd xmpi
ssh-keygen -t dsa

Note: the user is created by root, but the other commands should be performed by the user itself. The keys should then be copied across all nodes so the xmpi user can log in without a password.

xmpi@echelon:~$ scp /home/xmpi/.ssh/id_dsa.pub xmpi@nexus:/home/xmpi/.ssh/authorized_keys
xmpi@echelon:~$ scp /home/xmpi/.ssh/id_dsa.pub xmpi@phoenix:/home/xmpi/.ssh/authorized_keys

Setting up OpenMPI

Install the packages:

apt-get install openmpi-bin openmpi-common libopenmpi-dev

Debian should install all needed dependencies; this must be done on each node. OpenMPI can be used with languages like C, C++, and Fortran, and provides wrapper compilers: mpicc is the OpenMPI C wrapper compiler and can be used to compile C code. OpenMPI uses ssh to connect to each node, so ensure the user has the appropriate permissions. The program must first be compiled and replicated to each node under the same path.

An example: report.c. This program prints a hello message from each process on each node of the cluster.

xmpi@echelon:~$ mpicc -o report report.c

To run this program across the cluster we must also create a file containing all the host names on which we want to run:

xmpi@echelon:~$ cat machine
echelon
phoenix
nexus

Running the code:

mpirun --mca btl ^openib -np 10 -machinefile machine report

-np runs this many copies of the program on the given nodes; this option also indicates that the specified file is an executable program and not an application context. --mca btl ^openib disables the openib transport: since the cluster does not have an OpenFabrics expansion card, disabling it suppresses the related errors.
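The article compiles report.c but does not show its source. A minimal sketch of what such a program typically looks like is below; this is my reconstruction, not the author's original code, and it requires an MPI installation to compile:

```c
/* report.c -- hedged reconstruction of the article's example: prints a
 * hello message from every MPI process, with its rank and host name. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, name_len;
    char hostname[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(hostname, &name_len);

    printf("Hello from process %d of %d on %s\n", rank, size, hostname);

    MPI_Finalize();
    return 0;
}
```

Run as shown in the article (mpirun -np 10 -machinefile machine report), each of the 10 processes prints one line, naming the cluster node it was scheduled on.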


John the Ripper with OpenMPI?


Googling, I found some custom builds of John the Ripper that support parallel and distributed processing. I downloaded the latest john-1.7.9-jumbo-7-Linux-x86-64 and did some benchmarking.

Without parallelization:

Benchmarking: dynamic_10: md5($s.md5($s.$p)) [128/128 SSE2 intrinsics 10x4x3]... DONE
Many salts: 1543K c/s real, 1543K c/s virtual
Only one salt: 1307K c/s real, 1307K c/s virtual
(Benchmarking_no_parallelization.txt)

With parallel and distributed processing enabled using OpenMPI:

cat nodes.txt
echelon slots=2
phoenix slots=2

mpirun -np 12 -hostfile nodes.txt ./john --test

Benchmarking: dynamic_10: md5($s.md5($s.$p)) [128/128 SSE2 intrinsics 10x4x3]... DONE
Many salts: 2016K c/s real, 12732K c/s virtual
Only one salt: 780720 c/s real, 4592K c/s virtual
(Benchmarking_distributed_processing.txt)

References

OpenMPI Project Homepage
John the Ripper distributed processing
www.modnet.org
