This piece raises several interesting questions: the cost of cloud computing versus private infrastructure, and what the cloud means for your security. I don't want to force my opinions on you, so I'll leave you with the article. Enjoy!
Sameh Attia
-------------------------------------------------------------------------
As of today, Amazon EC2 provides what they call "Cluster GPU Instances": an instance type in the Amazon cloud that gives you the power of two NVIDIA Tesla “Fermi” M2050 GPUs. The exact specifications look like this:
22 GB of memory
33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core “Nehalem” architecture)
2 x NVIDIA Tesla “Fermi” M2050 GPUs
1690 GB of instance storage
64-bit platform
I/O Performance: Very High (10 Gigabit Ethernet)
API name: cg1.4xlarge
GPUs are known to be the best hardware accelerators for password cracking, so I decided to give it a try: how fast can this instance type crack SHA1 hashes?
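For context, the targets here are plain, unsalted SHA1 digests of the passwords. A single candidate can be hashed with standard tools, and the resulting 40-character hex string is the kind of value that ends up in a hash list (this command is just an illustration, not part of the original setup):

# echo -n 'letmein' | sha1sum

Because each guess is a single, unsalted hash computation, it costs almost nothing, which is exactly what makes GPU brute force so effective.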
Using the CUDA-Multiforcer, I was able to crack all hashes from this file with password lengths from 1 to 6 in only 49 minutes (one hour of this instance type costs $2.10, by the way):
Compute done: Reference time 2950.1 seconds
Stepping rate: 249.2M MD4/s
Search rate: 3488.4M NTLM/s
This shows once more that SHA1 for password hashing is deprecated - you really don't want to use it anymore! Instead, use something like scrypt or PBKDF2. Now imagine a whole cluster of these machines (which anybody can spin up easily thanks to Amazon) cracking passwords for you: convenient, large-scale password cracking for everybody!
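The root of the problem is that SHA1 is designed to be fast. If you want a rough feel for how cheap a single SHA1 computation is even without a GPU, OpenSSL's built-in benchmark will show it (throughput is reported in bytes processed per second for several block sizes; this is an illustrative aside, not part of the original walkthrough):

# openssl speed sha1

Dedicated password-hashing schemes like scrypt and PBKDF2 deliberately make each guess orders of magnitude more expensive, which is exactly what you want.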
Some more details:
If I find the time, I'll write a tool that uses the AWS API to launch on-demand password-cracking instances from a preconfigured AMI. Stay tuned, either via RSS or via Twitter.
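For a rough idea of what such a tool would boil down to, here is a hedged sketch using the AWS command line interface (which appeared after this article was written; the key pair name is a placeholder, and cg1.4xlarge is only available in regions that offer Cluster GPU instances):

# aws ec2 run-instances --image-id ami-aa30c7c3 --instance-type cg1.4xlarge --key-name my-keypair --count 1

A real tool would additionally wait for the instance to reach the running state, copy the hash file over, and kick off the cracker via SSH.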
Installation Instructions:
I used the "Cluster Instances HVM CentOS 5.5 (AMI Id: ami-aa30c7c3)" machine image as provided by Amazon (I chose this image because it was the only one with CUDA support built in) and selected "Cluster GPU (cg1.4xlarge, 22GB)" as the instance type. After launching the instance and SSHing into it, you can continue by installing the cracker:
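Connecting typically looks something like this (a sketch with a placeholder key and hostname; it assumes the AMI's default login user is root, which matches the root prompts in the commands below):

$ ssh -i my-keypair.pem root@<public-dns-of-the-instance>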
I decided to install the "CUDA-Multiforcer" in version 0.7, as it's the latest version of which the source is available. To compile it, you first need to download the "GPU Computing SDK code samples":
# wget http://developer.download.nvidia.com/compute/cuda/3_2/sdk/gpucomputingsdk_3.2.12_linux.run
# chmod +x gpucomputingsdk_3.2.12_linux.run
# ./gpucomputingsdk_3.2.12_linux.run
(Just press enter when asked for the installation directory and the CUDA directory.)
Now we need to install the g++ compiler:
# yum install automake autoconf gcc-c++
The next step is compiling the libraries of the SDK samples:
# cd ~/NVIDIA_GPU_Computing_SDK/C/
# make lib/libcutil.so
# make shared/libshrutil.so
Now it's time to download and compile the CUDA-Multiforcer:
# cd ~/NVIDIA_GPU_Computing_SDK/C/
# wget http://www.cryptohaze.com/releases/CUDA-Multiforcer-src-0.7.tar.bz2 -O src/CUDA-Multiforcer.tar.bz2
# cd src/
# tar xjf CUDA-Multiforcer.tar.bz2
# cd CUDA-Multiforcer-Release/argtable2-9/
# ./configure && make && make install
# cd ../
As the Makefile of the CUDA-Multiforcer doesn't work out of the box, we need to open it up and find the line
CCFILES := -largtable2 -lcuda
Replace CCFILES with LINKFLAGS so that the line looks like this:
LINKFLAGS := -largtable2 -lcuda
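If you prefer not to edit the Makefile by hand, the same substitution can be done with a one-liner (a sketch, assuming the line appears exactly as shown above and that you are still in the CUDA-Multiforcer-Release directory):

# sed -i 's/^CCFILES := -largtable2 -lcuda$/LINKFLAGS := -largtable2 -lcuda/' Makefile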
And type make. If everything worked out, you should have a file ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release/CUDA-Multiforcer right now. You can try the Multiforcer by doing something like this:
# export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
# export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
# cd ~/NVIDIA_GPU_Computing_SDK/C/src/CUDA-Multiforcer-Release/
# ../../bin/linux/release/CUDA-Multiforcer -h SHA1 -f test_hashes/Hashes-SHA1-Full.txt --min=1 --max=6 -c charsets/charset-upper-lower-numeric-symbol-95.chr
Congratulations, you now have a fully working, CUDA-based hash-cracker running on an Amazon EC2 instance.