Saturday, February 27, 2010

How To: Change the Timezone in Linux/Unix

In a Linux/Unix system, time is kept as the number of seconds elapsed since midnight UTC on January 1, 1970 (the Unix epoch), not counting leap seconds.

There are different ways to change the timezone on different flavors of Linux/Unix (which I will explain later in this HowTo), but the universal procedure that works on all of them is explained below:

Using /etc/localtime to set the timezone:
This is the file which sets the timezone of your system. Usually it is a symbolic link to one of the timezone files stored under "/usr/share/zoneinfo/".
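To see which zone your system currently uses, and which zone files are available, you can run something like the following (the exact symlink target will of course vary from system to system):

# ls -l /etc/localtime
# ls /usr/share/zoneinfo/
# ls /usr/share/zoneinfo/Asia/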

To change the timezone with this file follow this procedure:

 # cd /etc/

If you wish, back up the previous timezone configuration by copying it to a different location, such as:

# cp /etc/localtime  /etc/localtime-old

Browse through the timezones given in “/usr/share/zoneinfo/” and finalize which one you would like to use.

Then create a symlink to that timezone file in the "/etc/" directory with the name "localtime":

# ln -sf /usr/share/zoneinfo/dir/zonefile localtime

For example, if you want to set it to IST (Indian Standard Time), use this command:

# ln -sf /usr/share/zoneinfo/Asia/Calcutta localtime

You can verify the change using the "date" command:

# date

Wed Feb 24 22:50:50 IST 2010

If you have the "rdate" utility, update the current system time by executing:

# /usr/bin/rdate -s time-a.nist.gov

Now set the ZONE entry in the "/etc/sysconfig/clock" file (e.g. ZONE="Asia/Calcutta"; the ZONE parameter is only evaluated by system-config-date).

After doing all this, remember to sync the hardware clock with your new timezone setting. This can be done by executing the following command:

# /sbin/hwclock --systohc

The above changes the timezone at the system level. But if you only want to change the timezone to test some script, you can do it by setting an environment variable in your shell:

# export TZ=Asia/Calcutta

Remember that in this case the timezone is only changed for the current shell and is temporary: as soon as you log out of the shell, the setting is gone.
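For example, you can compare the output of date under different TZ values without touching the system configuration at all; a TZ prefix like this only affects the single command it precedes:

# TZ=UTC date
# TZ=Asia/Calcutta date
# TZ=America/New_York date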

For different flavors of Linux/Unix you can use different commands if you don't want to follow the above procedure:
  • Red Hat: "redhat-config-date" will open a dialog box –> follow the instructions.
  • CentOS/Fedora: "system-config-date" will open a dialog box –> follow the instructions.
  • Slackware/FreeBSD: "tzselect" will open an interactive selector –> follow the instructions.
  • Ubuntu: "dpkg-reconfigure tzdata" OR "tzconfig" will open a dialog box –> follow the instructions.
In most of the above cases you will get an interface for selecting your region and zone.


Move to the timezone you would like to use and then simply press "OK". This will change the timezone of your system.

Change Timezone on Linux Cellphones:
On mobile phones and other small devices that run Linux, the timezone is stored in /etc/TZ, so the same general approach applies when changing the timezone on cellphones and other small devices running Linux.

To set UTC: To keep the hardware clock in UTC you need just one simple step: set "UTC=true" in the "/etc/sysconfig/clock" file.

# vi /etc/sysconfig/clock and change the UTC line to: "UTC=true"

BitTorrent Client for Linux

This article does not cover clients without a GUI, such as rtorrent, and Linux offers more alternatives than the programs listed below. I did not always use the latest release of each program; the version I tested is noted next to each title.

BitTornado - 0.3.18
BitTornado uses around 30 MB of RAM on average and is written in Python. Super seeding and web seeding were developed by the BitTornado group. Newly added torrents open in separate windows. Each time the program starts, you must add your torrent files again; it does not re-add them automatically or continue where it left off. While adding a torrent you cannot choose which files to download, but you can do so after the torrent has been added, and you can give priority to the files you want.
http://bittornado.com/

Deluge - 1.2.0
Deluge, which is written with PyGTK, uses around 35 MB of RAM on average and has plugin support. Super (initial) seeding support will arrive with version 0.15 of the libtorrent library. You can give priority to the files you choose, and with the pieces extension you can even prioritize individual pieces of a file within a torrent.
http://deluge-torrent.org/


KTorrent - 3.2.4
KTorrent is a BitTorrent client for KDE, though you can use it in other desktop environments as well. It is distributed under the GPL and uses around 15 MB of RAM on average. A selected peer can be kicked or banned. While adding a torrent, you can select which files you want to download, and you can give priority to both files and torrents. Under various conditions it can warn you with a sound or a message, or execute a command; for example, when a torrent finishes downloading or you reach a specific share ratio, you can have the computer shut down automatically. You can also customize KTorrent with plugins such as a search box, a log viewer, and so on.
http://ktorrent.org/

qBittorrent - v2.1.3
qBittorrent is available in approximately 25 languages. It is written with Qt4 and C++ and uses approximately 15 MB of RAM. There is a customizable search box, and search results can be added directly to the download list. Selected peers can be banned, and per-peer download and upload speeds can be limited. You can skip unwanted files and give priority to the ones you want.
http://qbittorrent.sourceforge.net/

Vuze - 4.2.0.8
Vuze, formerly known as Azureus, is written in Java. Because Vuze needs a JRE, it uses more system resources than the other clients. Vuze can show detailed information about peers, pieces, files, and so on. It has plugin and super (initial) seeding support. You can share torrent files and chat with your friends in Vuze, watch and share DVD and HD videos, and send files directly to devices like the iPhone. You can use Vuze as a social platform, leaving comments and voting on files.
http://vuze.com/

Transmission - 1.75
Because it is written in C, Transmission uses less CPU and RAM than the other clients, approximately 10 MB of RAM. Transmission is fast and plain, with a simple GUI that can be extended through plugins. There are separate builds written with GTK and Qt: you can use the GTK version on GNOME or a similar desktop environment and the Qt version on KDE or a similar desktop environment. You can give priority to files and limit speeds.
http://transmissionbt.com/




BitStorm Lite - 0.2q
BitStorm Lite is a simple client. It does not have a lot of features and uses approximately 5 MB of RAM.
https://sourceforge.net/projects/bbom/




GNOME BitTorrent Downloader - 0.0.32
GNOME BitTorrent Downloader has a plain and simple GUI. When the program starts, it asks you for torrent files; it does not automatically continue where it left off, but after you add a torrent file and choose a directory, it resumes from where it stopped. Each added torrent runs in its own window.
http://gnome-bt.sourceforge.net/


TopBT - 2.0
Because TopBT is based on Vuze, it needs a JRE. TopBT evaluates peers' network proximity by using the ping and nmap tools and can reduce traffic by approximately 25%. It shows the traffic saved and some statistics in the status bar. TopBT's aim is to reduce unnecessary BitTorrent traffic: according to an IPOQUE report published in 2007, BitTorrent accounts for about 40% of Internet traffic.
http://topbt.cse.ohio-state.edu/



FrostWire - 4.18.6
FrostWire is a file sharing program with BitTorrent support that uses the Gnutella network. It is written in Java and includes a search box and a media player.
http://frostwire.com/

Friday, February 26, 2010

Introducing Linux virtual containers with LXC

In the past we have looked at using OpenVZ for container virtualization on Linux. OpenVZ is great as it allows you to run compartmentalized “servers” within an operating system so you can separate systems, much like running virtual machines on a host system.

With OpenVZ, you can get the benefits of virtualization without the overhead.

The downside of OpenVZ is that it isn’t in the mainline kernel. This means you need to run a kernel provided by the OpenVZ project.

By itself this isn't necessarily a problem, unless you are running an unsupported Linux distribution or you mind a bit of lag behind upstream security fixes.

Like OpenVZ, Linux Containers (LXC) provide the ability to run containers whose processes are isolated from the host operating system.

The project is part of the upstream kernel, which means that any Linux distribution using kernel 2.6.29 or later will have the kernel-level bits available, without resorting to a third-party to provide it.

For instance, Fedora 12 comes with the appropriate kernel and the user-space tools to use LXC.

To start using LXC, you must install the LXC user-space tools and have an appropriate kernel with LXC support enabled.

On Fedora 12, the kernel is provided and the user-space tools can be installed via:
# yum install lxc

The next step is to make sure the kernel properly supports LXC:
$ lxc-checkconfig

It will provide a list of capabilities; if every capability is listed as “enabled,” LXC is ready to be used with the kernel.

You must first create and mount the LXC control group filesystem:
# mkdir /cgroup
# mount none -t cgroup /cgroup
# echo "none /cgroup cgroup defaults 0 0" >> /etc/fstab

Next, you need to configure bridge networking. This can be done as root with the brctl command, part of the bridge-utils package (install this package if it is not already installed):
# brctl addbr br0
# brctl setfd br0 0
# ifconfig br0 192.168.250.52 promisc up
# brctl addif br0 eth0
# ifconfig eth0 0.0.0.0 up
# route add -net default gw 192.168.250.1 br0

This creates the bridge interface, br0, and assigns it the existing host IP address (in this case, 192.168.250.52).

You will need to do this locally, as once you bring br0 up, the network will go down until the rest of the reconfiguration is complete.

The next commands then reset the IP address of eth0 to 0.0.0.0, but since it is bound to the bridge interface, it will still respond to the previous IP address anyway.

Finally, a route is added for br0, which will be used by containers to connect to the network.

Once this is done, we must create a configuration file for a new container. This is a very basic example, so create the configuration file with the following contents:
lxc.utsname = test
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 4a:49:43:49:79:bd
lxc.network.ipv4 = 192.168.250.150
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596

Save it as /etc/lxc/lxc-test.conf or something similar. The next command will start a confined shell process:
# /usr/bin/lxc-execute -n test -f /etc/lxc/lxc-test.conf /bin/bash
[root@test lxc]# ps ax
  PID TTY      STAT   TIME COMMAND
    1 pts/1    S      0:00 /usr/libexec/lxc-init -- /bin/bash
    2 pts/1    S      0:00 /bin/bash
   20 pts/1    R+     0:00 ps ax

At this point, the confined shell can ping a remote host and can also be pinged by a remote host. It shares the same host filesystem, so /etc in this container is the same as /etc of the host, but as can be seen by the ps output, the process is fully isolated from the host process table.

On the host, you can use LXC tools to view the state of the container:
# lxc-info -n test
'test' is RUNNING
# lxc-ps
CONTAINER    PID TTY          TIME CMD
           13095 pts/2    00:00:00 su
           13099 pts/2    00:00:00 bash
           13134 pts/2    00:00:00 lxc-ps
           13135 pts/2    00:00:00 ps

The above is an example of an LXC application container. This example had its own separate networking; however, you can also isolate a single application that uses the existing host network (and, as a result, does not require a configuration file) using:
# lxc-execute -n test /bin/bash

You can also create LXC system containers that are more similar to OpenVZ containers. These mimic an entire operating system with its own file system and network address, fully separate from the host operating system.

The simplest way to create these containers is to use OpenVZ templates. Next week, we will create an LXC-based system container.
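As a rough preview, and only as a sketch (the container name and config path here are hypothetical, the available options depend on your LXC version, and a root filesystem must already be prepared for the container), the basic lifecycle of a system container with the LXC tools looks like this:

# lxc-create -n syscontainer -f /etc/lxc/lxc-syscontainer.conf
# lxc-start -n syscontainer
# lxc-info -n syscontainer
# lxc-stop -n syscontainer
# lxc-destroy -n syscontainer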

LXC is powerful, and finally Linux users have something similar to the jail feature that BSD has enjoyed for years.

While OpenVZ works great, having something immediately available from your Linux vendor makes maintenance of the system easier as all the bits are already available, and even though LXC is not as mature as OpenVZ, it is quite capable and under active development.

Monday, February 22, 2010

Top 5 Best Linux Firewalls

As part of the contest we conducted recently, we got 160+ comments from the geeky readers who chose their favorite firewall.
Based on this data, the top spot goes to.. drum roll please..
iptables
If you are new to any of the top 5 firewalls mentioned here, please read the rest of the article to understand more about them.

1. Iptables

iptables is a user space application program that does packet filtering, network address translation (NAT), and port address translation (PAT).  iptables is for IPv4.  ip6tables is for IPv6.
iptables needs a kernel with the ip_tables packet filter (Linux kernels 2.4.x and 2.6.x include it). Using iptables you can view, add, remove, or modify the rules in the packet filter ruleset.
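For a quick taste of the syntax (an illustrative sketch only, not a complete firewall policy; adjust ports and policies to your own needs), the first command below lists the current ruleset, and the next three accept established connections and new SSH connections, then set the default INPUT policy to DROP:

# iptables -L -n -v
# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# iptables -P INPUT DROP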

2. IPCop

IPCop is for small-office and home-office users. It is a Linux firewall distribution that requires a separate low-power PC to run the software. You can configure the firewall rules from a friendly web interface. It is a stateful firewall based on Linux netfilter.
You can take an old PC and turn it into a secure Internet appliance with IPCop, which will protect the home/small-office network from the Internet and also improve web browsing performance by caching some frequently used information.

3. Shorewall

Shorewall's tag-line is "iptables made easy". It is also known as the "Shoreline Firewall" and is built upon the netfilter/iptables system.
If you have a hard time understanding iptables rules, you should try Shorewall, as it provides a high-level abstraction of iptables rules using plain text files (see the sketch after the package list below).
Shorewall contains the following packages:
  • Shorewall – Helps to create ipv4 firewall
  • Shorewall6 – Helps to create ipv6 firewall
  • Shorewall-lite – Helps to administer multiple ipv4 firewalls
  • Shorewall6-lite – Helps to administer multiple ipv6 firewalls
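To illustrate that abstraction, allowing inbound SSH in a standard two-zone (net/$FW) setup is roughly a one-line entry in /etc/shorewall/rules (a hypothetical sketch; zone names and file locations depend on your configuration), after which the firewall is reloaded:

#ACTION    SOURCE    DEST    PROTO    DEST PORT(S)
ACCEPT     net       $FW     tcp      22

# shorewall restart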

4. UFW – Uncomplicated Firewall

UFW is a command line program that helps manage the netfilter iptables firewall. It provides a few simple commands to manage iptables. Gufw is a graphical interface for UFW used on the Ubuntu distribution; it makes managing your iptables firewall very intuitive and easy. You can run Gufw on any Linux distribution that has Python, GTK, and ufw.
To allow ssh access in UFW you have to do the following. It’s that easy.
$ sudo ufw allow ssh/tcp

5. OpenBSD and PF

PF stands for packet filter. PF is licensed under the BSD license and developed on OpenBSD. The PF firewall is included by default in OpenBSD, FreeBSD, and NetBSD.
PF does the following:
  • Packet Filtering
  • NAT
  • Traffic redirection (port forwarding)
  • Packet Queueing and Prioritization
  • Packet Tagging (Policy Filtering)
  • Excellent log capabilities
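On OpenBSD the rules live in /etc/pf.conf, and a few pfctl commands cover day-to-day management. As a sketch, assuming a ruleset already exists in that file, the following enable PF, load the ruleset, show the loaded rules, and show filter statistics, respectively:

# pfctl -e
# pfctl -f /etc/pf.conf
# pfctl -sr
# pfctl -si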

Additional Firewall Software

The following are additional firewalls mentioned by readers, along with the total number of votes each received.
  • CheckPoint FireWall-1 – 5 votes
  • pfsense – 5 votes
  • Firestarter – 5 votes
  • Netfilter – 4 votes
  • SmoothWall Express – 3 votes
  • Guarddog – 3 votes
  • ipchain – 3 votes
  • Endian – 2 votes
  • Susefirewall – 1 vote
  • Cisco ASA/PIX – 1 vote
  • ClearOS – 1 vote
  • APF – 1 vote
  • Firewall Builder – 1 vote
  • Auto firewall in Puppy Linux – 1 vote
  • Drawbridge – 1 vote
  • Monowall – 1 vote
  • Firehol – 1 vote
  • SuSEfirewall2 – 1 vote
  • Plesk – 1 vote

Thursday, February 18, 2010

Installing Kernel Security Updates Without Reboot With Ksplice Uptrack On Ubuntu 9.10 Desktop

Ksplice Uptrack is a subscription service that lets you apply 100% of the important kernel security updates released by your Linux vendor without rebooting.

Ksplice Uptrack is freely available for the desktop versions of Ubuntu 9.10 Karmic and Ubuntu 9.04 Jaunty. This tutorial shows how to install and use it on an Ubuntu 9.10 desktop.

I do not issue any guarantee that this will work for you!


1 Installing Ksplice Uptrack
Open Firefox and visit http://www.ksplice.com/uptrack/download. Click the Ksplice Uptrack for Ubuntu 9.10 Karmic - Download now button:



Select Open with GDebi Package Installer (default) in the Firefox download dialogue:



After the download has finished, the Package Installer will come up. Click the Install Package button:



Type in your password:



Afterwards, the dependencies for Ksplice Uptrack are being downloaded and installed:


Next you must accept the Ksplice Uptrack license:



Ksplice Uptrack is now being installed:



Click the Close button to leave the Package Installer afterwards:



2 Using Ksplice Uptrack
The Ksplice Uptrack Manager should have opened automatically at the end of the installation process. This is how it looks:




You should also find a Ksplice Uptrack icon in the taskbar - the Ksplice Uptrack Manager can also be opened by clicking it.

This is how it looks when there are kernel updates available...


... and this is how it looks when there are no kernel updates (i.e., all kernel security fixes are installed):


If there are kernel updates available, you can install them by clicking the Install all updates button in the Ksplice Uptrack Manager.

The kernel updates are then being downloaded and installed:



Afterwards, there should be a green check in front of each update which means the system is up to date.

Click the Close button to leave the Ksplice Uptrack Manager:

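If you prefer the terminal, the Ksplice Uptrack package also ships command-line tools that mirror what the Manager does (to the best of my knowledge; check the Ksplice documentation for the exact names and options in your version). The first command lists the updates currently applied to the running kernel, the second downloads and applies any available updates:

# uptrack-show
# uptrack-upgrade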

Regular Expressions In grep

How do I use the Grep command with regular expressions under Linux operating systems?

Linux comes with GNU grep, which supports extended regular expressions. GNU grep is the default on all Linux systems. The grep command searches files for lines matching a pattern and can be used to locate information stored anywhere on your server or workstation.

Regular Expressions
A regular expression is nothing but a pattern to match against each input line. A pattern is a sequence of characters. The following are all examples of patterns:

^w1
w1|w2
[^ ]

grep Regular Expressions Examples

Search for 'vivek' in /etc/passwd

# grep vivek /etc/passwd

Sample outputs:

vivek:x:1000:1000:Vivek Gite,,,:/home/vivek:/bin/bash
vivekgite:x:1001:1001::/home/vivekgite:/bin/sh
gitevivek:x:1002:1002::/home/gitevivek:/bin/sh

Search vivek in any case (i.e. case insensitive search)

# grep -i -w vivek /etc/passwd

Search vivek or raj in any case

# grep -E -i -w 'vivek|raj' /etc/passwd

The pattern in the last example is treated as an extended regular expression (because of the -E option).
Anchors

You can use ^ and $ to force a regex to match only at the start or end of a line, respectively. The following example displays lines starting with the vivek only:

# grep ^vivek /etc/passwd

Sample outputs:

vivek:x:1000:1000:Vivek Gite,,,:/home/vivek:/bin/bash
vivekgite:x:1001:1001::/home/vivekgite:/bin/sh

You can display only lines starting with the whole word vivek (i.e. do not display vivekgite, vivekg, etc.):

# grep -w ^vivek /etc/passwd

Find lines ending with word foo:

# grep 'foo$' filename

Match lines containing only foo (and nothing else):

# grep '^foo$' filename

You can search for blank lines with the following examples:

# grep '^$' filename

Character Class

Match Vivek or vivek:

# grep '[vV]ivek' filename

OR

# grep '[vV][iI][Vv][Ee][kK]' filename

You can also match digits (i.e match vivek1 or Vivek2 etc):

# grep -w '[vV]ivek[0-9]' filename

You can match two numeric digits (i.e. match foo11, foo12 etc):

# grep 'foo[0-9][0-9]' filename

You are not limited to digits, you can match at least one letter:

# grep '[A-Za-z]' filename

Display all the lines containing either a "w" or "n" character:

# grep '[wn]' filename

Within a bracket expression, the name of a character class enclosed in "[:" and ":]" stands for the list of all characters belonging to that class. Standard character class names are:

* [:alnum:] - Alphanumeric characters.
* [:alpha:] - Alphabetic characters
* [:blank:] - Blank characters: space and tab.
* [:digit:] - Digits: '0 1 2 3 4 5 6 7 8 9'.
* [:lower:] - Lower-case letters: 'a b c d e f g h i j k l m n o p q r s t u v w x y z'.
* [:space:] - Space characters: tab, newline, vertical tab, form feed, carriage return, and space.
* [:upper:] - Upper-case letters: 'A B C D E F G H I J K L M N O P Q R S T U V W X Y Z'.

In this example, match all lines containing an upper case letter (note that the character class must itself appear inside a bracket expression):

# grep '[[:upper:]]' filename

Wildcards
You can use "." to match any single character. In this example, match all 3-character words starting with "b" and ending in "t":

# grep '\<b.t\>' filename

Where,

* \< Match the empty string at the beginning of a word.
* \> Match the empty string at the end of a word.

Print all lines with exactly two characters:


# grep '^..$' filename

Display any lines starting with a dot and digit:


# grep '^\.[0-9]' filename

Escaping the dot

The following regex to find the IP address 192.168.1.254 is not reliable, because the unescaped dots match any single character (so it would also match strings such as 192x168y1z254):


# grep '192.168.1.254' /etc/hosts

All three dots need to be escaped:


# grep '192\.168\.1\.254' /etc/hosts

The following example will match an IP-address-like pattern, i.e. four groups of one to three digits separated by dots:

# egrep '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' filename


The following will match lines starting with the word Linux or UNIX in any case:


# egrep -i '^(linux|unix)' filename


How Do I Search a Pattern Which Has a Leading - Symbol?

To search for all lines matching '--test--', use the -e option. Without -e, grep would attempt to parse '--test--' as a list of options:


# grep -e '--test--' filename


How Do I do OR with grep?

Use the following syntax:


# grep 'word1\|word2' filename

or, using extended regular expressions:

# grep -E 'word1|word2' filename

How Do I do AND with grep?

Use the following syntax to display all lines that contain both 'word1' and 'word2'


# grep 'word1' filename | grep 'word2'

How Do I Test Sequence?

You can specify how many times a character must be repeated in sequence using the following syntax:

{N}
{N,}
{min,max}

Match a character "v" two times:


# egrep "v{2}" filename

The following will match both "col" and "cool":


# egrep 'co{1,2}l' filename

The following will match any run of at least three letters 'c'.


# egrep 'c{3,}' filename

The following example will match a mobile number in the format 91-1234567890 (i.e. two digits, an optional hyphen or space, then ten digits):

# grep "[[:digit:]]\{2\}[ -]\?[[:digit:]]\{10\}" filename

How Do I Highlight with grep?

Use the following syntax:


# grep --color regex filename


How Do I Show Only The Matches, Not The Lines?

Use the following syntax:


# grep -o regex filename

Regular Expression Operator
Regex operator Meaning


. Matches any single character.
? The preceding item is optional and will be matched, at most, once.
* The preceding item will be matched zero or more times.
+ The preceding item will be matched one or more times.
{N} The preceding item is matched exactly N times.
{N,} The preceding item is matched N or more times.
{N,M} The preceding item is matched at least N times, but not more than M times.
- Represents a range when it is not the first or last character in a bracket list.
^ Matches the empty string at the beginning of a line; also represents the characters not in the range of a list.
$ Matches the empty string at the end of a line.
\b Matches the empty string at the edge of a word.
\B Matches the empty string provided it's not at the edge of a word.
\< Matches the empty string at the beginning of a word.
\> Matches the empty string at the end of a word.


grep vs egrep
egrep is the same as grep -E. It interprets PATTERN as an extended regular expression. From the grep man page:

In basic regular expressions the meta-characters ?, +, {, |, (, and ) lose their special meaning; instead use the backslashed versions \?, \+, \{,
\|, \(, and \).

Traditional egrep did not support the { meta-character, and some egrep implementations support \{ instead, so portable scripts should avoid { in grep -E patterns and should use [{] to match a literal {.

GNU grep -E attempts to support traditional usage by assuming that { is not special if it would be the start of an invalid interval specification.


For example, the command grep -E '{1' searches for the two-character string {1 instead of reporting a syntax error in the regular expression.


POSIX.2 allows this behavior as an extension, but portable scripts should avoid it.
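To make the difference concrete, the following two commands match the same thing (a run of exactly two 'a' characters); the first uses basic regular expression syntax with backslashes, the second extended syntax via -E:

# grep 'a\{2\}' filename
# grep -E 'a{2}' filename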

References:
* man page grep and regex(7)
* info page grep

Use Live USB Creator to install Fedora 12 from a USB stick

Linux runs great on netbooks, but unfortunately most of them come without an optical drive of any kind which can make it a challenge to install an operating system on them.

Unless you have an external DVD-ROM or CD-ROM drive to connect to them, the ideal solution would be to boot from a USB stick.

Since most modern computers, if not all of them, permit booting from a USB device, this makes for a simple solution.

Not only that, USB devices can be used multiple times, unlike the DVD you burn an ISO to and use a handful of times.

Fedora makes it very easy to create a bootable USB stick with the Live USB Creator tool. You can use an existing Fedora installation, and probably any other Linux distribution, to run the tool.

There is even a Windows application to allow for creating a Fedora-based bootable USB stick as well.

To begin, install the liveusb-creator package on Fedora:

# yum install liveusb-creator

or download the Windows installer from the project page. When it is installed, execute the liveusb-creator tool (it must be started as root, in Linux):

# liveusb-creator

Here you can use an existing downloaded LiveCD, or the tool can download a Fedora image for you to burn.

You can choose which version of Fedora to install (10, 11, or 12) and also for which desktop: KDE or GNOME.

You can also download the Sugar on a Stick operating system, an educational Sugar environment that lets children boot any computer into their own personalized Sugar environment.

You can also tell the tool how much persistent storage to reserve on the USB stick. This space can be used to save files and make modifications to the LiveCD image, allowing you to boot and run Fedora with any changes you make.

Insert the USB stick to use (should be at least 4GB in size), and when you have chosen which version of Fedora or Sugar on a Stick to install, or have supplied your own LiveCD image, click the Create Live USB button.

Make sure that the target device shows up properly; if it isn’t already selected, make sure you select the correct device (i.e., /dev/sdg1 on Linux or ‘E:’ in Windows).
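On Linux, a quick way to double-check which device node belongs to the stick (so you do not point the tool at the wrong disk) is to plug it in and then look at the kernel messages and the partition list; the device names you see will of course differ on your machine:

# dmesg | tail
# fdisk -l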

Note that the install is completely non-destructive, so the device can contain other data as well.

Depending on what you have chosen to do, the installation can take some time, especially if you need to download the LiveCD image first.

A progress bar on the screen will indicate how far along it is, and the text pane indicates exactly what it is doing at any given point in time.

When the Live USB Creator has finished, you can eject the USB stick, insert it into your other computer (or reboot the existing computer) and indicate in the BIOS or via boot selection at startup which device to boot from.

Select the USB stick and watch Fedora boot, at which point you can either select to use Fedora as installed on the USB stick, or use it to install Fedora onto the computer.

Wednesday, February 17, 2010

Hacking Wi-Fi Password Using Ubuntu Linux

Hacking Wi-Fi Password Using Ubuntu Linux - I know a lot of you out there would love to know how to hack or crack Wi-Fi passwords from coffee shops or just about any place with a managed or secured network.

I've already featured several pieces of hacking software (and more hacking tools) before, some of which can help you crack Wi-Fi passwords, be they WEP or WPA protected.

This time, I'm going to share with you some of my favorite wireless tools that can be used to hack Wi-Fi password using Ubuntu or any other Linux distribution:

Aircrack-ng
Aircrack-ng (a fork of aircrack) is my main tool for cracking Wi-Fi passwords. It has a wireless network detector, a packet sniffer, WEP and WPA/WPA2-PSK cracker, and an analysis tool for 802.11 wireless LANs.

Aircrack-ng works with any wireless card whose driver supports raw monitoring mode and can sniff 802.11a, 802.11b and 802.11g traffic.

Kismet
Kismet is a really good network detector, packet sniffer, and intrusion detection system for 802.11 wireless LANs.

It will work with any wireless card which supports raw monitoring mode, and can sniff 802.11a, 802.11b, 802.11g, and 802.11n traffic.

Kismet works in passive mode, which means it is capable of detecting the presence of both wireless access points and wireless clients without sending any loggable packets.

SWScanner
SWScanner is specifically designed to make the whole wardriving process a lot easier. It is also intended to manage many tasks related to wireless networking. SWScanner is compatible with NetStumbler files and can be integrated with GPS devices.

These are only three of the many wireless tools that can get you going in no time, so feel free to explore.

I don't want to give step-by-step instructions just yet on how to hack or crack a WiFi password using Ubuntu, but for a little inspiration, I'll share a YouTube video that pretty much illustrates the process of using those Wi-Fi hacking tools:



Happy WiFi hacking, but be responsible and do it only for testing or if you have permission (*cough!).

Ripping DVDs on Linux

I recently assembled a new computer to use strictly for playing media files on my television/home theatre setup.

Sporting just over 2.5 terabytes of storage I decided it was about time to back up my DVD collection. There are a few different programs for ripping DVDs on Linux.

The following details my personal experience with four different applications over the last week and a half.

Personally, I found DVD::Rip's GUI counterintuitive and did not care for the way it has you set up "projects" for each disc. Beyond this, it also does not support "queuing" of video files, meaning that if you are ripping a DVD with multiple episodes on a disc, you need to babysit the ripping process and start the next episode after each one ends.

AcidRip is a GTK-based GUI front end for MPlayer and MEncoder. It is simple, yet powerful. In addition to the GUI functions, it also gives the user access to edit the ripping command manually, meaning it does not take any control away from the power user. My only complaint about AcidRip is that it was unable to rip media files using the x264 video format on either of my two Linux systems.

As I'm sure many of you were able to guess from its name, K9Copy is a KDE ripping program. It provides a solid GUI that is easy to learn and navigate, and it supports x264 encoding. Like DVD::Rip, K9Copy does not support queuing media tracks from a disc; however, I found it easy enough to open a couple of copies of K9Copy and have it rip multiple titles at once. My only complaint about K9Copy is that the version provided in the Ubuntu repositories crashes on me every so often (and I was less than successful in getting the latest version to compile from source).

I saved the best for last. HandBrake is my preferred application at this point in time for ripping video from DVDs. It provides a very sleek GUI that is easy to navigate and intuitive to use. It supports queuing media tracks and x264 encoding to .m4v files. Also worth mentioning is that HandBrake is the only one of these four applications not included in the Ubuntu repositories; however, it is still FOSS.


Fifteen seasons of television shows and a half dozen movies later I am still ripping DVDs, slow and steady. Is there another application you know of for ripping DVDs on Linux that I did not mention here? If so, feel free to drop a comment letting me know what it is. 

Configure or Remove ETags in Apache/HTTP

There are various steps you can perform to optimize your site.

One of them is to put some sort of cache/expiry mechanism in place so that if a client visits your site a second time, they do not need to download all the data again; instead they only need to download the data that has changed since their last visit.

If you want to put some sort of cache/expiry mechanism on the content served from your web server, you can use the following methods:
  1. Using the mod_expires module (a brief sketch follows below)
  2. Using ETags (Entity Tags)
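For completeness, here is a minimal, illustrative mod_expires snippet for Apache (the module must be loaded, and the types and lifetimes below are only examples; the rest of this post focuses on the second method):

ExpiresActive On
ExpiresByType image/gif "access plus 1 month"
ExpiresByType text/css "access plus 1 week"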
Here I will explain how ETags work and how to configure or remove them according to your needs.

What are ETags?
Entity tags (ETags) are a mechanism that web servers and browsers use to determine whether the component in the browser’s cache matches the one on the origin server.

ETag is a validator which can be used instead of, or in addition to, the Last-Modified header.

By sending an ETag, the server promises that the content of a specific resource has not changed as long as its ETag stays the same.

How ETags work:
The origin server specifies the component's ETag using the ETag response header.
  1. The client requests a static resource (for example /a/test.gif) from the server. The following are the response headers the client receives from the server:
    HTTP/1.1 200 OK
    Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
    ETag: "10c24bc-4ab-457e1c1f"
    Content-Length: 12195
  2. Later, when the client requests the same file and the browser has to validate the component, it uses the If-None-Match header to pass the ETag back to the origin server. If the ETags match, a 304 status code is returned, reducing the response by 12195 bytes in this example.
    The following are the headers when the file is requested again from the same server:
    GET /a/test.gif HTTP/1.1
    Host: www.geekride.com
    If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
    If-None-Match: "10c24bc-4ab-457e1c1f"
    HTTP/1.1 304 Not Modified
    *** A 304 status code means that the file you are requesting is the same as the one present in the browser's cache. (For more details about Apache status codes please refer to: http-apache-status-error-codes)

Where not to use:

The problem with the ETags is that they are generated with attributes that make them unique to a server.

By default, Apache will generate an Etag based on the file’s inode number, last-modified date, and size.

So even if the same file sits on multiple servers with the same size, permissions, timestamp, etc., the ETags won't match because the files can't have the same inode number.

This creates a problem in scenarios where a cluster of web servers serves the same content.

When a file is served from one server and later validated from another, the ETags for that file won't match, and hence the complete file will be fetched again.

That means that if you are serving your site from a cluster of web servers, you shouldn't use the default ETags.

Configure ETags:
ETags are enabled by default, so you don't need to do anything to turn them on.

Remove ETags:
You need to follow these steps to remove ETags from the respective web servers.
  1. Apache: In Apache you need to add these directives to the Apache config file and restart Apache:
    Header unset ETag
    FileETag None
  2. IIS: This article explains how to remove ETags in IIS.
  3. Lighttpd: For the lighttpd server you can disable ETags by putting this in the config file:
    static-file.etags = "disable"

References:

  1. http://httpd.apache.org/docs/2.0/mod/core.html
  2. http://developer.yahoo.net/blog/archives/2007/07/high_performanc_11.html
  3. http://www.webscalingblog.com/performance/caching-http-headers-last-modified-and-etag.html

Linux cloning over the network using netcat

If you find yourself in a situation where you need to set up a series of Linux computers that use the same configuration, using dd and netcat is one solution to clone servers over the network.

Using netcat with tar
Netcat is known as the Linux Swiss Army knife, meaning that you can do lots of things with it. You can use netcat to open a port on one computer and use that port to pipe data through it from another computer.

For instance, you can use it to easily copy the contents of a directory, as shown in the sample command below where netcat and tar are combined.

On the receiving computer, you can start a netcat listener process. The following command tells netcat to listen on port 1968; everything it receives through that port is piped to the tar x command, which extracts the tar stream that comes through the pipe.

# netcat -l -p 1968 | tar x

At the other end of the connection is the netcat sender. In this example, that is the command that creates the tar archive and pipes it through netcat to the destination host. This command looks like:

# tar c . | netcat 10.0.0.10 1968

The first part of the command tars up the contents of the current directory; the second part pipes the result to netcat, which connects to port 1968 on host 10.0.0.10, where the listener is waiting.

Netcat and multicast using tee
As you have read, netcat is an easy way to get a file from one computer to another. There is a disadvantage, though: the command does not support multicast.

That means that you can't start netcat as a listener on multiple computers and have one computer send data to the multicast port.

But, you can use a workaround and connect multiple computers in a netcat chain. Let's imagine that you have ten computers.

On 10.0.0.10 there is a bunch of ISO files that you want to distribute over the network to the computers 10.0.0.11 through 10.0.0.20.

You first have to prepare a netcat session on all of the computers; then, on the computer that has the ISO files, you would type the following command:

# tar c . | netcat 10.0.0.11 1968

That sends the tar stream to computer 10.0.0.11. That computer needs a netcat process waiting for incoming data, which it can then extract through a tar pipe.

At the same time, you need to send the data through to another computer; for that you can use the tee command. With tee (combined with process substitution), you can feed the data that comes in through the pipe to two destinations at once. An example would look like the following line:

# netcat -l -p 1968 | tee >(tar x) | netcat 10.0.0.12 1968

As you can see, with tee and process substitution, the data is fed to the tar x command to be extracted. At the same time, the data is sent on to the computer with IP address 10.0.0.12, where a netcat process has to be listening on port 1968.

So on that computer, you would also have a netcat process waiting for incoming data:

# netcat -l -p 1968 | tee >(tar x) | netcat 10.0.0.13 1968

This process is repeated through to the last computer in the chain, the one that has IP address 10.0.0.20.

On that computer, you would just have netcat listening for incoming data and sending it directly to the tar process.

So on 10.0.0.20, you would have the following command waiting:

# netcat -l -p 1968 | tar x

To start this multicast-like sequence, you begin with the listener on 10.0.0.20, then enter the command on 10.0.0.19, and so on, all the way up to the netcat sender that is started on 10.0.0.10.

You will see the files being copied to all machines in the chain very quickly. But, this is just a test drive. Once you have confirmed that it works on your Linux system, you can get to the serious work, and use this method to distribute an image to multiple computers.

Distributing a Linux server image with netcat multicast
You just did a test drive distributing some files with tar. You can do the same with dd, which you can use to clone complete hard drives.

First, consider this command:

# dd if=/dev/sda of=/dev/sdb bs=4096

Using this command, you would copy the entire /dev/sda disk, block by block, to /dev/sdb.

If /dev/sdb, for instance, is a USB hard drive connected to your computer, once this command is complete you would have a one-to-one copy of the original hard drive.

Make sure that you have tried this and understand it completely before proceeding to the next step.
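One rough way to verify a local clone like this, assuming both disks are exactly the same size and neither is mounted (otherwise the checksums will legitimately differ), is to checksum both devices, or only a fixed number of leading blocks if the disks are large:

# md5sum /dev/sda /dev/sdb
# dd if=/dev/sda bs=4096 count=25000 2>/dev/null | md5sum
# dd if=/dev/sdb bs=4096 count=25000 2>/dev/null | md5sum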

What you can do with local hard drives, you can do over the network as well. That means that to clone a hard drive /dev/sdb that is connected to computer 10.0.0.10 to the /dev/sda on 10.0.0.11, you can use a combination of dd and netcat.

But: to make sure it works, you have to boot both of the computers from a live CD, so that there are no files on the local hard disk in use. If both the computers are booted from a live CD, just start the listener process on 10.0.0.11:

# netcat -l -p 1968 | dd of=/dev/sda

and on 10.0.0.10 start the sending process:

# dd if=/dev/sdb | netcat 10.0.0.11 1968

After you have verified that this works, you can create the netcat-dd daisy chain, by starting on the last computer in the range (10.0.0.20):

# netcat -l -p 1968 | dd of=/dev/sda

Next, on 10.0.0.19, start the following command:

# netcat -l -p 1968 | tee >(dd of=/dev/sda) | netcat 10.0.0.20 1968

and on 10.0.0.18 it would look like:

# netcat -l -p 1968 | tee >(dd of=/dev/sda) | netcat 10.0.0.19 1968

Next you continue up the chain until you are at the first computer, where you have started the initiating netcat process:

# dd if=/dev/sdb | netcat 10.0.0.11 1968

Once the work has been done, you have cloned a hard drive to multiple computers on the network.

This is a nice method to clone a Linux hard drive over the network to multiple computers using netcat.

However, if you have to do this type of work often, there are other solutions that you should consider, such as Clonezilla.

But, that tool does require you to set up a server, which is not the case for the netcat solution.