Wednesday, August 31, 2011

Data Deduplication with Linux


 Lessfs offers a flexible solution to utilize data deduplication on affordable commodity hardware.
In recent years, the storage industry has been busy providing some of the most advanced features to its customers, including data deduplication. Data deduplication is a unique data compression technique used to eliminate redundant data and decrease the total capacity consumed on an enabled storage volume. A volume can refer to a disk device, a partition or a grouped set of disk devices all represented as a single device. During the process of deduplication, redundant data is deleted, leaving only a single copy of the data to be stored on the storage volume.
One ideal use-case scenario is when multiple copies of a large e-mail message are distributed and stored on a mail server. An e-mail message the size of just a couple megabytes does not seem too bad, but if it were sent and forwarded to more than 100 recipients—that's more than 200MB of copies of the same file(s).
Another great example is in the arena of host virtualization. In recent years, virtualization has been the hottest trend in server administration. If you are deploying multiple virtual guests across a network that may share the same common operating system image, data deduplication can significantly reduce the total capacity consumed down to a single copy of that image and, in turn, reference only the differences when and where needed.
Again, the primary focus of this technology is to identify large sections of data (entire files or large sections of files) that are identical, and to store only one copy of them. Other benefits include reduced costs for additional storage capacity, which, in turn, can be used to increase volume sizes or protect larger numbers of existing volumes (such as RAID, archival copies and so on). Using less storage equipment also leads to reduced costs in energy, space and cooling.
Two types of data deduplication exist: post-process and inline deduplication. Each has its advantages and disadvantages. To summarize, post-process deduplication occurs after the data has been written to the storage volume, in a separate process. Although no write performance is lost computing the necessary deduplication, every redundant copy of a file is written out in full until the post-process deduplication completes, which may become problematic if the available capacity runs low. With inline deduplication, less storage is required, because all deduplication is handled in real time as the data is written to the storage volume, although you will notice a degradation in performance as the process attempts to identify redundant copies of the data coming in.
Storage technology manufacturers have been providing the technology as part of their proprietary and external storage solutions, but with Linux, it also is possible to use the same technology on commodity and very affordable hardware. The solutions provided by these storage technology manufacturers are in some cases available only on the physical device level (that is, the block level) and are able to work only with redundant streams of data blocks as opposed to individual files, because the logic is unable to recognize separate files over the most commonly used protocols, such as SCSI, Serial Attached SCSI (SAS), Fibre Channel, InfiniBand and even Serial ATA (SATA). This is referred to as a chunking method. The filesystem I cover here is Lessfs, a block-level-based deduplication and FUSE-enabled Linux filesystem.
FUSE, or Filesystem in Userspace, is a kernel module commonly seen on UNIX-like operating systems that provides the ability for users to create their own filesystems without touching kernel code. It is designed to run filesystem code in user space while the FUSE module acts as a bridge for communication to the kernel interfaces.
In order to use these filesystems, FUSE must be installed on the system. Most mainstream Linux distributions, such as Ubuntu and Fedora, will most likely have the module and userland tools preinstalled already, usually in order to support the ntfs-3g filesystem.
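If you want to verify that FUSE is present before proceeding, a quick check along these lines should work on most distributions (package and tool names can vary slightly, so treat this as a rough sketch):

$ lsmod | grep fuse        # is the kernel module loaded?
$ sudo modprobe fuse       # load it manually if it isn't
$ fusermount -V            # confirms the userland tools are installed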

Lessfs

Lessfs is a high-performance inline data deduplication filesystem written for Linux and is currently licensed under the GNU General Public License version 3. It also supports LZO, QuickLZ and BZip compression (among a couple others), and data encryption. At the time of this writing, the latest stable version is 1.3.3.1, which can be downloaded from the SourceForge project page: http://sourceforge.net/projects/lessfs/files/lessfs.
Before installing the lessfs package, make sure you install all known dependencies for it. Some, if not most, of these dependencies may be available in your distribution's package repositories. You will need to install a few manually though, including mhash, tokyocabinet and fuse (if not already installed).
Your distribution may have the libraries for mhash2 either available or installed, but lessfs still requires mhash. This also can be downloaded from SourceForge: http://sourceforge.net/projects/mhash/files/mhash. At the time of this writing, the latest stable build is 0.9.9.9. Download, build and install the package:

$ tar xvzf mhash-0.9.9.9.tar.gz
$ cd mhash-0.9.9.9/
$ ./configure
$ make
$ sudo make install

Lessfs also requires tokyocabinet, as it is the main database on which it relies. The latest stable build is 1.4.47. To build tokyocabinet, you need to have zlib1g-dev and libbz2-dev already installed, which usually are provided by most, if not all, mainstream Linux distributions.
Download, build and install the package using the same configure, make and sudo make install commands from earlier. On 32-bit systems, you need to append --enable-off64 to the configure command. Failure to use --enable-off64 limits the databases to a 2GB file size.
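To spell that out, building tokyocabinet on a 32-bit system would look roughly like this (the archive name assumes the 1.4.47 release mentioned above):

$ tar xvzf tokyocabinet-1.4.47.tar.gz
$ cd tokyocabinet-1.4.47/
$ ./configure --enable-off64
$ make
$ sudo make install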
If it is not already installed or if you want to use the latest and greatest stable build of FUSE, download it from SourceForge: http://sourceforge.net/projects/fuse. At the time of this writing, the latest stable build is 2.8.5. Download, build and install the package using the same configure, make and sudo make install commands from earlier.

 After resolving all the more obscure dependencies, you're ready to build and install the lessfs package. Download, build and install the package using the same configure, make and sudo make install commands from earlier.
Now you're ready to go, but before you can do anything, some preparation is needed. In the lessfs source directory, there is a subdirectory called etc/, and in it is a configuration file. Copy the configuration file to the system's /etc directory path:

$ sudo cp etc/lessfs.cfg /etc/

This file defines the location of the databases among a few other details (which I discuss later in this article, but for now let's concentrate on getting the filesystem up and running). You will need to create the directory path for the file data (default is /data/dta) and also for the metadata (default is /data/mta) for all file I/O operations sent to/from the lessfs filesystem. Create the directory paths:

$ sudo mkdir -p /data/{dta,mta}

Initialize the databases in the directory paths with the mklessfs command:

$ sudo mklessfs -c /etc/lessfs.cfg

The -c option is used to specify the path and name of the configuration file. A man page does not exist for the command, but you still can invoke its online help with the -h option.
Now that the databases have been initialized, you're ready to mount a lessfs-enabled filesystem. In the following example, let's mount it to the /mnt path:

$ sudo lessfs /etc/lessfs.cfg /mnt

When mounted, the filesystem assumes the total capacity of the filesystem to which it is being mounted. In my case, it is the filesystem on /dev/sda1:

$ df -t fuse.lessfs
Filesystem        1K-blocks      Used Available Use% Mounted on
lessfs              5871080   3031812   2541028  55% /mnt

$ df -t ext4
Filesystem        1K-blocks      Used Available Use% Mounted on
/dev/sda1           5871080   3031812   2541028  55% /

Currently, you should see nothing but a hidden .lessfs subdirectory when listing the contents of the newly mounted lessfs volume:

$ ls -a /mnt/
.  ..  .lessfs

Once mounted, the lessfs volume can be unmounted like any other volume:

$ sudo umount /mnt

Let's put the volume to the test. Writing file data to a lessfs volume is no different from what it would be to any other filesystem. In the example below, I'm using the dd command to write approximately 100MB of all zeros to /mnt/test.dat:

$ sudo dd if=/dev/zero of=/mnt/test.dat bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 5.05418 s, 20.7 MB/s

Because the filesystem is designed to eliminate redundant copies of data, and a file filled with nothing but zeros is a prime example of such data, you can observe that only 48KB of capacity was consumed, and that may be nothing more than the necessary data synchronized to the databases:

$ df -t fuse.lessfs
Filesystem        1K-blocks      Used Available Use% Mounted on
lessfs              5871080   3031860   2540980  55% /mnt

If you take a long listing of that same file in the lessfs-enabled directory, it appears that all 100MB have been written. Utilizing its embedded logic, lessfs reconstructs all data on the fly when additional read and write operations are initiated to the file(s):

$ ls -l
total 102400
-rw-r--r-- 1 root root 104857600 2011-02-26 13:57 test.dat

Now, let's work with something a bit more complex—something containing a lot of random data. For this example, I decided to download the latest stable release candidate of the Linux kernel source from http://www.kernel.org, but before I did, I listed the total capacity consumed available on the lessfs volume as a reference point:

$ df -t fuse.lessfs
Filesystem        1K-blocks      Used Available Use% Mounted on
lessfs              5871080   3031896   2540944  55% /mnt

$ sudo wget http://www.kernel.org/pub/linux/kernel/v2.6/testing/linux-2.6.38-rc6.tar.bz2

Listing the contents, you can see that the package is approximately 75MB:

$ ls -l linux-2.6.38-rc6.tar.bz2
-rw-r--r-- 1 root root 74783787 2011-02-21 19:50 linux-2.6.38-rc6.tar.bz2

Listing the capacity used to store the Linux kernel source archive yields a difference of roughly 75MB:

$ df -t fuse.lessfs
Filesystem        1K-blocks      Used Available Use% Mounted on
lessfs              5871080   3106440   2466400  56% /mnt

Now, let's create a copy of the archived kernel source:

$ sudo cp linux-2.6.38-rc6.tar.bz2 linux-2.6.38-rc6.tar.bz2-bak

$ ls -l linux-2.6.38-rc6.tar.bz2*
-rw-r--r-- 1 root root 74783787 2011-02-21 19:50 linux-2.6.38-rc6.tar.bz2
-rw-r--r-- 1 root root 74783787 2011-02-26 14:43 linux-2.6.38-rc6.tar.bz2-bak

By having a redundant copy of the same file, an additional 44KB is consumed—not nearly as much as an additional 75MB:

$ df -t fuse.lessfs
Filesystem        1K-blocks      Used Available Use% Mounted on
lessfs              5871080   3106484   2466356  56% /mnt

And, because the databases contain the actual file and metadata, if an accidental or intentional system reboot occurred, or if for whatever reason you need to unmount the filesystem, the physical data will not be lost. All you need to do is invoke the same mount command and everything is restored:

$ sudo umount /mnt/
$ sudo lessfs /etc/lessfs.cfg /mnt
$ ls
linux-2.6.38-rc6.tar.bz2  linux-2.6.38-rc6.tar.bz2-bak 

In the situation when a system suffers from an accidental reboot, possibly due to power loss, as of version 1.0.4, lessfs supports transactions, which eliminates the need for an fsck after a crash.

Shifting focus back to lessfs preparation, note that the lessfs volume's options can be defined by the user when mounting. For instance, you can define the desired options for big_writes, max_read and max_write. The big_writes option improves throughput when the volume is used for backup purposes, and both max_read and max_write must be defined to use it. The max_read and max_write options must always be equal to one another and define the block size for lessfs to use: 4, 8, 16, 32, 64 or 128KB.
The definition of a block size can be used to tune the filesystem. For example, a larger block size, such as 128KB (131072), offers faster performance but, unfortunately, at the cost of less deduplication (remember from earlier that lessfs uses block-level deduplication). All other options are FUSE-generic options defined in the FUSE documentation. An example of the use of supported mount options can be found in the lessfs man page:

$ man 1 lessfs

The following example is given to mount lessfs with a 128KB block size:

$ sudo lessfs /etc/lessfs.cfg /fuse -o negative_timeout=0,\
entry_timeout=0,attr_timeout=0,use_ino,readdir_ino,\
default_permissions,allow_other,big_writes,\
max_read=131072,max_write=131072

Additional configurable options for the database exist in your lessfs.cfg file (the same file you copied over to the /etc directory path earlier). The block size can be defined here, as well as the method of additional data compression to use on the deduplicated data and more. Below is an excerpt of what the configuration file contains. To define a new value for any of these options, just uncomment the desired option and, in turn, comment out the others:

BLKSIZE=131072
#BLKSIZE=65536
#BLKSIZE=32768
#BLKSIZE=16384
#BLKSIZE=4096
#COMPRESSION=none
COMPRESSION=qlz
#COMPRESSION=lzo
#COMPRESSION=bzip
#COMPRESSION=deflate
#COMPRESSION=disabled

This excerpt sets the block size to 128KB and the compression method to QuickLZ. If the defaults are not to your liking, in this file you also can define the commit-to-disk interval (default is 30 seconds) or a new path for your databases, but make sure to initialize the databases before use; otherwise, you'll get an error when you try to mount the lessfs filesystem.

Summary

Now, Linux is not limited to a single data deduplication solution. There also is SDFS, a file-level deduplication filesystem that also runs on the FUSE module. SDFS is a freely available cross-platform solution (Linux and Windows) made available by the Opendedup Project. On its official Web site, the project highlights the filesystem's scalability (it can dedup a petabyte or more of data); speed, performing deduplication/reduplication at a line speed of 290MB/s and higher; support for VMware while also mentioning its usage in Xen and KVM; flexibility in storage, as deduplicated data can be stored locally, on the network across multiple nodes (NFS/CIFS and iSCSI), or in the cloud; inline and batch mode deduplication (a method of post-process deduplication); and file and folder snapshot support. The project seems to be pushing itself as an enterprise-class solution, and with features like these, Opendedup means business.
It is also not surprising that since 2008, data deduplication has been a requested feature for Btrfs, the next-generation Linux filesystem, although that may also be in response to Sun Microsystems' (now Oracle's) development of data deduplication for its advanced ZFS filesystem. Unfortunately, at this point in time, it is unknown if and when Btrfs will introduce data deduplication support, although it already contains support for various types of data compression (such as zlib and LZO).
Currently, the lessfs2 release is under development, and it is supposed to introduce snapshot support, fast inode cloning, new databases (including hamsterdb and possibly BerkeleyDB) apart from tokyocabinet, self-healing RAID (to repair corrupted chunks) and more.
As you can see, with a little time and effort, it is relatively simple to utilize the recent trend of data deduplication to reduce the total capacity consumed on a storage volume by removing all redundant copies of data. I recommend its usage not only in server administration but even for personal use, primarily because with implementations such as lessfs, even if there isn't much redundant data, the additional data compression will help reduce the total size of each file when it is eventually written to disk. It is also worth mentioning that a lessfs-enabled volume does not need to remain local to the host system; it can be exported across a network via NFS or even iSCSI and utilized by other devices within that same network, providing a more flexible solution.
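As a rough illustration of that last point, exporting a mounted lessfs volume over NFS works like exporting any other directory; the subnet below is only an example, and FUSE-based filesystems generally need an explicit fsid entry:

# /etc/exports
/mnt  192.168.0.0/24(rw,sync,no_subtree_check,fsid=1)

$ sudo exportfs -ra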

Resources

Official Lessfs Project Web Site: http://www.lessfs.com
Lessfs SourceForge Project: http://sourceforge.net/projects/lessfs
Opendedup (SDFS) Project: http://www.opendedup.org
Wikipedia: Data Deduplication: http://en.wikipedia.org/wiki/Data_deduplication
Notes on the Integration of Lessfs into Fedora 15: http://fedoraproject.org/wiki/Features/LessFS
Lessfs with SCST How-To: http://www.lessfs.com/wordpress/?page_id=577

Sunday, August 28, 2011

Hacking Joomla! -- the fast and easy way


Popular open source Content Management Systems (CMSs) like Drupal, Joomla! and WordPress are regularly subject to source code reviews as well as blackbox pentesting. Thus, vulnerabilities in these systems are quickly identified and fixed, and security updates are frequently released.
Unfortunately, people tend to install the base CMS, add plugins, build their website and then never upgrade when security patches become available. Furthermore, third-party plugins usually extend the attack surface and expose the CMS-based website to new threats.

During pentests, and facing a CMS based website, I often look for open source security tools that are targeted specifically at the CMS in question. These tools usually excel at fingerprinting the CMS version used by the target, detecting installed plugins/themes, and identifying corresponding vulnerabilities. 
Of course, I'd love to fire up generic web active scanners (Skipfish, Arachni, w3af, etc.), as well as my preferred proxy tools (ZAP and WebScarab), to perform a full-blown web pentest of the target application. However, during short-timed penetration tests, I'm compelled to look for the low-hanging fruit. Hence, instead of trying to reinvent the wheel, I make good use of CMS-targeted tools.

In this post, I'm going to describe the free security tools I use against Joomla!-based websites. If you know another utility/tip to use against Joomla! installations, feel free to mention it below as a comment.

Test Lab Setup
I'm going to run the tests against the default Joomla! installation on a TurnKey virtual machine. For those of you who are not familiar with TurnKey, it is a collection of 45+ free ready-to-use solutions, including popular CMSs like Joomla!. 
Anyway, the tools I'm going to demonstrate are:
  • CMS Explorer
  • OWASP Joomla Vulnerability Scanner (joomscan)
  • Nmap (http-joomla-brute NSE script)

The base operating system for the attack toolset is going to be BackTrack 5. Lucky me, all three tools are pre-installed on the distribution.

CMS Explorer
CMS Explorer is a tool developed by the creator of Nikto. It covers several CMSs like Drupal, WordPress, and Joomla!. 
The first thing you should do when using CMS Explorer is to create an osvdb.key file containing an OSVDB API key, and place it into the CMS Explorer install directory. You can get an OSVDB API key for free from http://osvdb.org/api/about. The CMS Explorer install directory in BackTrack 5 is /pentest/enumeration/web/cms-explorer.
Anyway, this key will be used by the tool to query OSVDB for vulnerabilities corresponding to the identified installed plugins and themes.
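Creating that file is just a matter of dropping the API key string into the install directory (the key value below is a placeholder):

root@bt:/pentest/enumeration/web/cms-explorer# echo "YOUR_OSVDB_API_KEY" > osvdb.key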
Here is the command line I run in order to launch a CMS Explorer scan:

root@bt:/pentest/enumeration/web/cms-explorer# ./cms-explorer.pl -url http://192.168.1.103/ -explore -type=Joomla -osvdb 

First, CMS Explorer will identify the themes and plugins installed on the Joomla!-based website:


Then, it will identify all the vulnerabilities in OSVDB that correspond to the found plugins and themes.


Maybe CMS Explorer is a little too verbose, but it does a decent job detecting installed Joomla! components and identifying the vulnerabilities that are associated with them.

OWASP Joomla Vulnerability Scanner (aka joomscan) 
OWASP Joomla Vulnerability Scanner, or Joomscan, is an official OWASP project and a flagship Joomla! scanner. Joomscan's features include thorough version detection as well as signature-based vulnerability identification of Joomla! installations. As of this writing, the Joomscan vulnerability database contains 466 distinct entries.
The tool is ready to use on BackTrack 5 and using it is as simple as running the following command:

root@bt:/pentest/web/scanners/joomscan# ./joomscan.pl -u http://192.168.1.103/ 


Joomscan will first perform version probing against the target, as shown below:


Then, it will detect vulnerabilities affecting the target:

Nmap (http-joomla-brute NSE script) 
The final tool I'm going to demonstrate is Nmap, or more precisely the http-joomla-brute NSE script. Written by @calderpwn, this Nmap script performs bruteforcing of Joomla! administration authentication forms. Unfortunately, it hasn't been added to the official repository yet, but you can get it here: https://github.com/cldrn/nmap-nse-scripts/blob/master/scripts/http-joomla-brute.nse

First, let's add the script to the Nmap scripts directory:
root@bt:~# cp http-joomla-brute.nse /usr/local/share/nmap/scripts/ 
Then, we update the Nmap script database using the following command:
root@bt:~# nmap --script-updatedb 
Finally, we're ready to go:
root@bt:~# nmap -p80 --script http-joomla-brute --script-args 'userdb=/root/users.txt,passdb=/root/passwds.txt,http-joomla-brute.threads=3,brute.firstonly=true' 192.168.1.103

users.txt and passwds.txt are two files containing the usernames and passwords that will be used when bruteforcing the form.


Well, that's it for today's Joomla! hacking round. I'm not going to compare the utilities, as each one is specific and useful in its own way. Please don't forget to add your favorite Joomla! hacking tools and tips as a comment below. I'll try to keep this post updated, and hopefully post about other CMSs. Meanwhile, happy Joomla! hacking :)

Protecting a Laptop from Simple and Sophisticated Attacks


I recently replaced my OSX based Macbook with an Ubuntu based Lenovo Thinkpad T420. I've done a number of things out of the ordinary to secure it, so I thought I'd write an overview. You may find some of these techniques interesting, and maybe even useful. You may even learn about an attack or two that you were unaware of.
Defending from common thieves

My most likely adversary is the common thief. If my laptop is stolen, I want a chance to recover it that doesn't involve relying solely on the police. Although the laptop came with Windows 7 installed, I had no intention of using it; Ubuntu is my current operating system of choice for laptops/desktops. Rather than wiping Windows 7, I've left it as a honeypot operating system. If a thief steals the laptop, when they turn it on, it will automatically boot up into Windows, without so much as even being prompted for a password. I installed a free application called Prey, which will allow me to grab loads of information from the laptop, such as its location and pictures from the built-in webcam. The location is pretty accurate because the laptop came with an F5521gw card, which provides GPS and 3G modem capability, and Prey is happy to take advantage of GPS data. Incidentally, this card also works fine under Linux using MBM. Hopefully, the thief will be too lazy or too dumb to do an immediate full reinstall of the OS, as it will just work out of the box as far as they're concerned.

To make room for Ubuntu on the disk, I installed GParted to a USB stick and booted that up. This allowed me to shrink the Windows 7 partition. The laptop only has a small 128GB drive though (SSD) so I had to try and recover as much space as possible. From Windows 7 I deleted the recovery partition, I disabled system restore, and I disabled swap. Disabling the swap file recovered a massive 8GB of space as the machine has 8GB of RAM.

Defending from experts

In the space recovered from Windows, went my Ubuntu installation. Natty Narwhal (11.04) was the latest version of Ubuntu at time of writing, so that's what I went with. I consider full disk encryption to be essential if you want to secure your laptop. However, there are several attacks against machines that use full disk encryption; I decided to address as many of them as possible.

Evil maid attacks

Even if you have a machine which uses full disk encryption, the boot partition and boot loader need to be stored somewhere unencrypted. Typically, people store them on the hard drive along with the encrypted partitions. The problem with doing this is that whenever you go to your machine, you don't know if somebody has tampered with the unencrypted data to install a software keylogger to capture your password. To get around this, I installed my boot partition and boot loader on a Corsair Survivor USB stick. I wanted a USB stick which would never leave my side. This particular USB stick is very strong and waterproof, so even when I go swimming or scuba diving, I don't need to leave it in a locker somewhere, unattended. I got one of my friends to take it on a scuba diving holiday before I used it. It survived several hours under the water at depths of between 10 and 15 metres.

Coldboot attacks

On a typical system with disk encryption, the encryption key is stored in RAM. This would be fine, if it weren't for the fact that there are several ways for an attacker with physical access to read the contents of the RAM on a machine which is running, or which has been running recently. You might think that your machine's RAM is wiped as soon as it loses power, but that is not the case. It can take several minutes for the RAM to completely clear after losing power. Cooling the RAM with spray from an air duster can extend that time period.

An attacker with access to the running machine could simply hard reboot it from a USB stick or CD containing msramdmp to grab a copy of the RAM. You could password protect the BIOS and disable booting from anything other than the hard drive, but that still doesn't protect you. An attacker could cool the RAM, remove it from the running machine, place it in a second machine and boot from that instead.

The first defence I used against this attack is procedure based. I shut down the machine when it's not in use. My old Macbook was hardly ever shut down, and lived in suspend to RAM mode when not in use. The second defence I used is far more interesting. I use something called TRESOR. TRESOR is an implementation of AES as a cipher kernel module which stores the keys in the CPU debug registers, and which handles all of the crypto operations directly on the CPU, in a way which prevents the key from ever entering RAM. The laptop I purchased works perfectly with TRESOR as it contains a Core i5 processor which has the AES-NI instruction set.
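If you want to check whether your own CPU exposes the AES-NI instructions before attempting any of this, the flag shows up in /proc/cpuinfo:

grep -o aes /proc/cpuinfo | uniq     # prints "aes" if AES-NI is available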

Getting TRESOR to work was the most complicated part of installing my laptop. Not because it's particularly difficult, but because you have to build a custom kernel, with the TRESOR patch applied. And once you've got the custom kernel, you need to build custom installation media which uses that kernel. I did a basic Ubuntu installation without encryption to create a platform for building the custom kernel and custom installation media. Once I had the install CD ready, I did a second installation over the top of the first one using that CD instead. I'm not going to go into detail on how to do that, but I will link to the various HOWTOs that I used:

Building a custom Ubuntu Kernel Package
More about building a custom kernel package, but with some useful info about using custom flavours
Patching the kernel with TRESOR
Building a custom Ubuntu Live CD

Attacks via firewire

If a machine has a firewire port, or a card slot which would allow an attacker to insert a firewire card, then there's something else you need to address. It is possible to read the contents of RAM via a firewire port. Here is a great article detailing the issues and fixes for multiple operating systems. My laptop has a firewire port. I could have built a kernel without support for firewire and without firewire kernel modules, but I may need to use that port at some point. So instead, I built firewire as a set of kernel modules, and prevent the modules from loading under normal circumstances using /etc/modprobe.d/blacklist.
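For reference, the blacklist entries themselves are just one line per module. On recent kernels the FireWire stack is usually provided by firewire-core and firewire-ohci, but check lsmod on your own system, as module names vary between kernel versions:

# /etc/modprobe.d/blacklist (excerpt)
blacklist firewire-core
blacklist firewire-ohci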

Preparing a disk for encryption

During my research, I found numerous people advocating that you should completely wipe a new hard drive with random data before setting up disk encryption. This is to make it impossible for somebody to detect which parts of the drive have had encrypted data written to them. Doing this is as simple as creating a partition on the space you want to fill with random data, and then using the "dd" command to copy data directly to that partition device in /dev/ from /dev/urandom. This took a few hours to run on my system. I complicated this procedure slightly by using something I purchased called an EntropyKey. The EntropyKey provides a much larger source of "real" random data, as opposed to the much more limited "pseudo" random data that is generated by the operating system. It talks to an application called ekeyd in order to feed /dev/random directly. I also use the entropy key when generating GnuPG keys and any other task which requires a source of good random data.
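A minimal sketch of that wipe, assuming the partition to be filled is /dev/sda2 (substitute your own device and double-check it first, since this is destructive):

sudo dd if=/dev/urandom of=/dev/sda2 bs=1M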

More on disk encryption

The LiveCD I modified doesn't have a nice GUI for handling full disk encryption. I needed to learn how to use the command line tool "cryptsetup" to set up encryption. Because TRESOR is built as a cipher kernel module, once you've booted from your custom LiveCD, you can just use the option "--cipher tresor" when using cryptsetup to create encrypted devices. It's worth spending some time playing with this tool and understanding what the various options do, if you don't want to lose access to your encrypted device.
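As a rough sketch only (the partition name is a placeholder, and the exact cipher string is worth verifying against the cryptsetup man page on your own build), creating and opening a TRESOR-backed device from the custom LiveCD looks something like this:

cryptsetup luksFormat --cipher tresor /dev/sda2    # create the encrypted container
cryptsetup luksOpen /dev/sda2 cryptroot            # map it to /dev/mapper/cryptroot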

When I initially did the installation, I chose to protect the full disk encryption key with a passphrase. It is also possible to protect it with a keyfile. The advantage of using a keyfile is that you can store it on an external device. An attacker can't just observe you entering the password, they also need to get hold of the keyfile. It's also much more difficult to brute force. I have now moved my laptop to using a keyfile. That keyfile is stored on the USB boot stick which never leaves my side, and it is GPG encrypted. Cryptsetup on Ubuntu comes with helper tools to do this. The basic process was:

  1. Generate the keyfile
  2. Use cryptsetup to add it to an additional key slot on the encrypted device
  3. Encrypt with gnupg's "--symmetric" option and copy the encrypted version to somewhere like /etc/keys/
  4. Update /etc/crypttab to use the new keyfile
  5. Run "update-initramfs -u" to build a new initrd on the boot partition

The update-initramfs command calls a hook script which copies the gpg binary, the gpg-protected key and the appropriate boot scripts to the initrd on the boot partition. Once I'd confirmed that I could still successfully boot the machine, I emptied the key slot which contained the original passphrase. It would now be impossible to compel me to decrypt my hard drive if I were to lose, "lose", or irreparably damage my USB boot drive.
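A rough sketch of those five steps follows. The paths, key size and the decrypt_gnupg keyscript are assumptions based on the Debian/Ubuntu cryptsetup helpers, so check your own system before relying on any of it:

# 1. generate a random keyfile (path and size are arbitrary examples)
dd if=/dev/random of=/tmp/keyfile bs=1 count=256

# 2. add it to a spare key slot on the encrypted device
sudo cryptsetup luksAddKey /dev/sda2 /tmp/keyfile

# 3. encrypt it with gnupg and keep only the encrypted copy
gpg --symmetric --output cryptkey.gpg /tmp/keyfile
sudo mkdir -p /etc/keys && sudo mv cryptkey.gpg /etc/keys/

# 4. reference it in /etc/crypttab, roughly like:
#    sda2_crypt /dev/sda2 /etc/keys/cryptkey.gpg luks,keyscript=/lib/cryptsetup/scripts/decrypt_gnupg

# 5. rebuild the initrd on the boot partition
sudo update-initramfs -u

# then securely remove the plaintext keyfile
shred -u /tmp/keyfile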

Swap

If you need to use swap, make sure it is encrypted too. The easiest way to make sure everything is encrypted is to create an encrypted device, and then use LVM on top of that so that all of your partitions and swap end up on top of the same encrypted device. As this laptop has 8GB of RAM, I decided to go without swap altogether. I'm not going to be using the suspend-to-disk function, which requires swap, and I don't want swapping to cause wear on my SSD.
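If you do want encrypted swap via that LVM-on-LUKS layering, it looks roughly like this (device names, volume names and sizes are placeholders, not what I actually ran):

# one LUKS container holding everything
sudo cryptsetup luksFormat /dev/sda2
sudo cryptsetup luksOpen /dev/sda2 cryptlvm

# LVM stacked on top, so swap and root share the same encryption
sudo pvcreate /dev/mapper/cryptlvm
sudo vgcreate vg0 /dev/mapper/cryptlvm
sudo lvcreate -L 8G -n swap vg0
sudo lvcreate -l 100%FREE -n root vg0
sudo mkswap /dev/vg0/swap
sudo mkfs.ext4 /dev/vg0/root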

Trusted Platform Modules

The laptop I purchased has something called a Trusted Platform Module. This TPM can handle a number of crypto operations itself. It also provides a random number generator similar to the EntropyKey. Apparently a lot of modern laptops contain one of these. I decided to use the random number generator on the TPM as another source of entropy for when my EntropyKey isn't inserted. To do this I used a piece of software called TrouSerS. There is also a modified version of Grub called Trusted Grub which can use the TPM to do a number of integrity checks on the system as it boots. I'm not sure that this is of any use to me though, as my boot partition and boot loader will never leave my side.

Securing the Web browser

I use Firefox as my web browser. Surfing the web scares me; the browser strikes me as the most likely way in for a remote attacker. And yet, most people run the browser under the same user id as the rest of their programs. So if the browser is compromised, all of the files that your user can access are also instantly compromised. To try and minimise any damage if this happens, I decided to run Firefox in its own account. My normal user account is called "mike". For Firefox I created a new user account called "mike.firefox". "/usr/bin/firefox" was merely a symlink to /usr/lib/firefox-6.0/firefox.sh so I replaced it with a shell script which runs:
sudo -u mike.firefox -H /usr/lib/firefox-6.0/firefox.sh
I didn't want to be prompted for a password every time I tried to run firefox though, so I configured sudo to allow me to run that command without entering my password by adding this to the end of my /etc/sudoers (use the visudo command to do this)
mike ALL=(mike.firefox) NOPASSWD: /usr/lib/firefox-6.0/firefox.sh
The "mike.firefox" user doesn't have access to the X display though when I'm logged in as "mike". To give it access I went to "System->Preferences->Startup Applications" and told it to run the command "xhost +local:mike.firefox" when I log in. Now, when I run firefox, it runs as user mike.firefox instead. Something to look out for when you do this: Any command that firefox spawns, will it's self run as user mike.firefox. I noticed that when playing flash, there was no audio. This is because the mike.firefox user that I created did not have access to the audio device. To give it permission, I ran the command "adduser mike.firefox audio". I also set up permissions so that user "mike" could access "/home/mike.firefox/Downloads" as that is where Firefox will now download to. I symlinked /home/mike/Downloads/firefox to this directory for simplicity.

PGP smart cards

All of my incoming email is encrypted using my public GPG key. I detailed how I do this here. This means that I need to store my private GPG keys on my laptop. They're protected by a passphrase, but is this enough? If my account was compromised, an attacker could keylog my passphrase and then steal my keys. Luckily, when I purchased my laptop, I ticked the "Smartcard Reader" option. I then purchased an OpenPGP Smartcard. My encryption and signing subkeys have been transferred to the smartcard, and the master key has been removed from my laptop. All that remains in the PGP private keyring on my laptop are stubs which refer to the keys on the smartcard. You cannot read a key from a smartcard. If you want to decrypt or sign data, gpg sends that data to the smartcard, which then performs the crypto operations on board and sends the results back. This isn't perfect of course. An undetected attacker could potentially use the card to decrypt data when it is inserted, without my knowledge. I wrote a custom "pinentry" application to further secure my smartcard from observation attacks. You can read about that here.

Miscellaneous

I use the following Firefox addons to minimise the chance of MITM attacks against my browsing, and to prevent most XSS/CSRF attacks: Certificate Patrol, Cipherfox, DNSSEC Validator, HTTPS Everywhere, HTTPS Finder, NoScript, Perspectives and Request Policy.

I have installed OpenVPN. It connects to my Linode VPS and I route all traffic over it when I'm on untrusted networks such as cafes.

I installed a local DNS resolver called Unbound. It supports DNSSEC. I don't know how many sites support DNSSEC yet, but I should benefit more from this as time goes on.

I installed an application called blueproximity. It detects when my phone is in range, via bluetooth. If my phone moves out of range, the screen automatically locks. I've no doubt that this can be prevented via spoofing my phone, but it adds another layer of security.

My Windows honeypot also has a VPN to my Linode server, and Internet Explorer is configured to use the web proxy at the end of it. If my laptop is stolen, I should be able to intercept all of the browser traffic that comes from it.

Summary

Some people might say that many of these precautions are over the top and paranoid. I don't consider myself an "elite hacker", but I know that I could pull off most of the attacks that I've discussed above without much trouble. Cold boot and Evil maid are practical, easy to pull off, attacks. Why wouldn't I defend against them?

I'm not claiming that my laptop is impenetrable. An attacker could still grab me when I'm using the machine, rip the RAM out, and pull sensitive data from it. They could still grab my USB boot key and then beat the password out of me. They could still remotely compromise Firefox and then use an unknown kernel exploit to gain root privileges. The whole point of this exercise was to reduce the number of attack vectors, not eliminate them. That would be impossible.

If you do anything differently, or better, please let me know in the comments. I'd especially love to hear ways that I can make my Windows honeypot more effective.

If you want to read more stuff like this, follow my blog using Twitter, Atom or RSS, and check out the rest of my articles (all, security related and web related).

If you want to hire me, you can do it through my UK based consultancy: Cardwell IT Ltd.

Thursday, August 25, 2011

Windows 8 Explorer: improved copy, delete, and conflict resolution




I published this not to show how advanced MS products are, but to show how LATE they are, as this was already done in GNOME and KDE years ago :)
Sameh Attia
------------------------------------
Ahh, the Windows Explorer progress dialog. For years it has been struggling to figure out how to calculate how long our copy and delete operations would take, sliding the progress bar back and forth in a seemingly random, haphazard way, the laws of time all but ceasing to exist — five seconds remaining one moment and 13 minutes the next. That’s (almost) all going to change, with the arrival of a greatly improved file management experience in Windows 8.
Over on the Building Windows 8 blog, Microsoft’s Alex Simmons, a director of program management for Windows, has laid bare most of the new functionality. If you’d rather look at the reworked dialog boxes, they’re in a video that’s embedded below; otherwise, read on.
Simmons states that his team’s focus was to improve the high-volume copying experience — which makes good sense, since Explorer really isn’t that bad in its present state if you’re just moving around a handful of files. Gone are the multiple progress windows that stack atop your Explorer taskbar icon in Windows 7. All operations will be consolidated into a single window, similar to the way Internet Explorer or Firefox handle your downloads. And, just like your browser’s download manager, the updated file dialog allows you to pause and cancel jobs with the click of a button.
Want some more in-depth knowledge about what’s going on? Tap the more details button, and you’re presented with a real-time graph (pictured right) that charts the current speed of your operation and also reports the time remaining and the number and size of files remaining. As for those off-the-mark time estimates, Simmons says that coming up with a precise calculation is nearly impossible due to the variables involved — such as interference from security software or network congestion. To that end, the Windows 8 Explorer interface has been tweaked to play up elements that can be detailed precisely — like transfer speeds.
One more area Microsoft has focused on is conflict resolution, something that had already been improved in Windows 7. The new copy and replace options allow users greater flexibility when identically named files are dropped into a folder. In Windows 7, you can choose to replace, not copy, or keep both copies of a file and let Windows rename the new addition. This can be done on a file-by-file basis, or you can check off the box and apply your preference en masse. In Windows 8, Explorer consolidates conflicts onto a single thumbnailed pane (below left) where you can check off the versions you want to keep.
Last but not least, Simmons quietly mentions a tweak to delete dialogs in Windows 8. No longer will Windows default to notifying users every time they send a file off to the Recycle Bin (a toggle you could flip in earlier versions of Windows). The aim is to create a “quieter, less distracting experience,” but my admin sense is tingling. You’ve got to imagine that this change is going to lead to more than a couple Delete > Empty Recycle Bin operations.
Read more at Building Windows 8 or check out the XKCD comic tackling the tricky topic of progress estimation…

Setting Up Network RAID1 With DRBD On Debian Squeeze


 This tutorial shows how to set up network RAID1 with the help of DRBD on two Debian Squeeze systems. DRBD stands for Distributed Replicated Block Device and allows you to mirror block devices over a network. This is useful for high-availability setups (like a HA NFS server) because if one node fails, all data is still available from the other node.
I do not issue any guarantee that this will work for you!

1 Preliminary Note

I will use two servers here (both running Debian Squeeze):
  • server1.example.com (IP address 192.168.0.100)
  • server2.example.com (IP address: 192.168.0.101)
Both nodes have an unpartitioned second drive (/dev/sdb) with identical size (30GB in this example) that I want to mirror over the network (network RAID1) with the help of DRBD.
It is important that both nodes can resolve each other, either through DNS or through /etc/hosts. If you did not create DNS records for server1.example.com and server2.example.com, you can modify /etc/hosts on both nodes as follows:
server1/server2:
vi /etc/hosts
127.0.0.1       localhost.localdomain   localhost
192.168.0.100   server1.example.com     server1
192.168.0.101   server2.example.com     server2

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

2 Synchronize Time

server1/server2:
It is very important that both nodes have the same time. Therefore we install the ntp packages:
apt-get install ntp ntpdate

3 Partition /dev/sdb

server1/server2:
Right now, our partitioning looks as follows:
fdisk -l
root@server1:~# fdisk -l

Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00029d5c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        3793    30461952   83  Linux
/dev/sda2            3793        3917      992257    5  Extended
/dev/sda5            3793        3917      992256   82  Linux swap / Solaris

Disk /dev/sdb: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
root@server1:~#
As you see, /dev/sdb is not partitioned. We change that now and create one big partition on it, /dev/sdb1:
fdisk /dev/sdb
root@server1:~# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x8042e800.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): <-- n
Command action
   e   extended
   p   primary partition (1-4)
<-- p
Partition number (1-4): <-- 1
First cylinder (1-3916, default 1): <-- ENTER
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-3916, default 3916): <-- ENTER
Using default value 3916

Command (m for help): <-- t
Selected partition 1
Hex code (type L to list codes): <-- 83

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
root@server1:~#

Now run
fdisk -l
again, and you should find /dev/sdb1 in the output:
root@server1:~# fdisk -l

Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00029d5c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        3793    30461952   83  Linux
/dev/sda2            3793        3917      992257    5  Extended
/dev/sda5            3793        3917      992256   82  Linux swap / Solaris

Disk /dev/sdb: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x78f21e78

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3916    31455238+  83  Linux
root@server1:~#

4 Install And Configure DRBD

 
server1/server2:
Now install DRBD on both nodes as follows:
apt-get install drbd8-utils
Load the DRBD kernel module:
modprobe drbd
To check if it is loaded, run:
lsmod | grep drbd
Output should be similar to this one:
root@server1:~# lsmod | grep drbd
drbd                  193312  0
lru_cache               5042  1 drbd
cn                      4563  1 drbd
root@server1:~#
Now we back up the original /etc/drbd.conf file and create a new one on both nodes as follows:
cp /etc/drbd.conf /etc/drbd.conf_orig
cat /dev/null > /etc/drbd.conf
vi /etc/drbd.conf
global { usage-count no; }
common { syncer { rate 100M; } }
resource r0 {
        protocol C;
        startup {
                wfc-timeout  15;
                degr-wfc-timeout 60;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "secret";
        }
        on server1.example.com {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.0.100:7788;
                meta-disk internal;
        }
        on server2.example.com {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 192.168.0.101:7788;
                meta-disk internal;
        }
}
Make sure you use the correct node names in the file (instead of server1.example.com and server2.example.com) - please make sure you use the node names that the command
uname -n
shows on both nodes. Also make sure you fill in the correct IP addresses in the address lines and the correct disk in the disk lines (if you don't use /dev/sdb1).
Now we initialize the meta data storage. On both nodes run:
drbdadm create-md r0
root@server1:~# drbdadm create-md r0
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
root@server1:~#
Then start DRBD on both nodes:
/etc/init.d/drbd start
root@server1:~# /etc/init.d/drbd start
Starting DRBD resources:[ d(r0) s(r0) n(r0) ]....
root@server1:~#
The next step has to be carried out on server1 only:
server1:
Now make server1 the primary node:
drbdadm -- --overwrite-data-of-peer primary all
Afterwards, data will start to synchronize between server1 and server2.
server2:
Take a look at
cat /proc/drbd
to see the synchronization progress:
root@server2:~# cat /proc/drbd
version: 8.3.7 (api:88/proto:86-91)
srcversion: EE47D8BF18AC166BE219757
 0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r----
    ns:0 nr:15790400 dw:15790144 dr:0 al:0 bm:963 lo:9 pe:29622 ua:8 ap:0 ep:1 wo:b oos:15664096
        [=========>..........] sync'ed: 50.3% (15296/30716)M
        finish: 0:02:44 speed: 95,212 (85,352) K/sec
root@server2:~#
(You can run
watch cat /proc/drbd
to get an ongoing output of the process. To leave watch, press CTRL+C.)
Wait until the synchronization has finished - output should be as follows:
root@server2:~# cat /proc/drbd
version: 8.3.7 (api:88/proto:86-91)
srcversion: EE47D8BF18AC166BE219757
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
    ns:0 nr:31454240 dw:31454240 dr:0 al:0 bm:1920 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
root@server2:~#
The snippet ro:Secondary/Primary tells you that this node is the secondary node.
server1:
On server1, the output of
cat /proc/drbd
is as follows (after the synchronization has finished):
root@server1:~# cat /proc/drbd
version: 8.3.7 (api:88/proto:86-91)
srcversion: EE47D8BF18AC166BE219757
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
    ns:31454240 nr:0 dw:0 dr:31454440 al:0 bm:1920 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
root@server1:~#
The snippet Primary/Secondary tells you that this is the primary node.
Now that we have our new network RAID1 block device /dev/drbd0 (which consists of /dev/sdb1 from server1 and server2), let's create an ext3 filesystem on it and mount it to the /data directory. This has to be done only on server1!
mkfs.ext3 /dev/drbd0
mkdir /data
mount /dev/drbd0 /data
Afterwards you should see /dev/drbd0 in the outputs of...
mount
root@server1:~# mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/drbd0 on /data type ext3 (rw)
root@server1:~#
... and:
df -h
root@server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              29G  775M   27G   3% /
tmpfs                 249M     0  249M   0% /lib/init/rw
udev                  244M  112K  244M   1% /dev
tmpfs                 249M     0  249M   0% /dev/shm
/dev/drbd0             30G  173M   28G   1% /data
root@server1:~#

5 Test

server1:
Now let's create some files or directories in the /data directory and check whether they get replicated to server2.
touch /data/test1.txt
touch /data/test2.txt
ls -l /data/
root@server1:~# ls -l /data/
total 16
drwx------ 2 root root 16384 Aug  8 12:45 lost+found
-rw-r--r-- 1 root root     0 Aug  8 12:48 test1.txt
-rw-r--r-- 1 root root     0 Aug  8 12:48 test2.txt
root@server1:~#
Now let's unmount the /data directory on server1:
umount /data
Then assign the secondary role to server1:
drbdadm secondary r0
Now we go to server2, make it the primary node and check if we can see the files/directories we created on server1 in the /data directory on server2.
server2:
First we assign the primary role to server2:
drbdadm primary r0
Check the output of
cat /proc/drbd
... and you should see that server2 is the primary node now:
root@server2:~# cat /proc/drbd
version: 8.3.7 (api:88/proto:86-91)
srcversion: EE47D8BF18AC166BE219757
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
    ns:4 nr:32083300 dw:32083304 dr:325 al:1 bm:1920 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
root@server2:~#
Next we create the /data directory and mount /dev/drbd0 to it:
mkdir /data
mount /dev/drbd0 /data
Let's check the contents of the /data directory:
ls -l /data/
If everything went fine, it should contain the files/directories that we created on server1:
root@server2:~# ls -l /data/
total 16
drwx------ 2 root root 16384 Aug  8 12:45 lost+found
-rw-r--r-- 1 root root     0 Aug  8 12:48 test1.txt
-rw-r--r-- 1 root root     0 Aug  8 12:48 test2.txt
root@server2:~#
server1:
Now that we have switched roles, the output of
cat /proc/drbd
on server1 should show you that server1 has the secondary role:
root@server1:~# cat /proc/drbd
version: 8.3.7 (api:88/proto:86-91)
srcversion: EE47D8BF18AC166BE219757
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
    ns:32083300 nr:4 dw:629064 dr:31454797 al:529 bm:2044 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
root@server1:~#

6 Links