Wednesday, December 30, 2009

Continuous Monitoring With Tail Fails

If you can’t get the tail command to continuously monitor a file, then read on. I was working on a script yesterday, part of which depended on continuously monitoring a text file.

I had used our trusty old “tail” command for this, but while testing by manually putting some data into the file it kept failing; curiously, it worked fine in the actual scenario.

Befuddled, I did a simple test. I created a simple text file “a.txt” with a few lines of data and then ran the following command.

# tail -f a.txt

It showed the last few lines of the file and kept waiting. So far so good. Then I opened the file in the vim editor, wrote a few more lines, saved the file and waited, but nothing appeared in the window that was running the tail command.

Thinking that the data might be buffered and not yet flushed to disk, I ran the sync command, but still nothing.


Then I got a hint: when I used the “-F” (or “--follow=name”) option instead of “-f”, the tail command detected the change just fine. The only problem was that in this mode it prints the last few lines again, not just the newly added ones.

The main difference with this option is that tail tracks the file for changes by its name and not by the file descriptor, and then it dawned on me.

The problem was not in the tail command but in my testing method itself. When I save the file opened in vim, it creates a new file with a new inode, while the one opened by tail is still the old one (which is now an unlinked file that has effectively been deleted).

The kernel frees that old file for good only when I quit tail and its last file descriptor is closed. This is confirmed by running “lsof | grep a.txt” (lsof lists the open files, and we then pick out the ones related to a.txt).
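Vim’s rename-on-save behaviour can be reproduced without vim at all. The sketch below simulates a “save” by writing a new file and renaming it over the old one, then shows that the path kept its name while the inode changed (note that vim’s exact behaviour depends on its ‘backupcopy’ setting):

```shell
# Simulate a rename-on-save: the path "a.txt" survives, the inode does not,
# which is why a tail -f holding the old descriptor never sees the new data.
dir=$(mktemp -d)
echo "line 1" > "$dir/a.txt"
before=$(ls -i "$dir/a.txt" | awk '{print $1}')   # inode before "saving"
cp "$dir/a.txt" "$dir/a.txt.new"
echo "line 2" >> "$dir/a.txt.new"
mv "$dir/a.txt.new" "$dir/a.txt"                  # rename: a.txt is now a different inode
after=$(ls -i "$dir/a.txt" | awk '{print $1}')
[ "$before" != "$after" ] && echo "inode changed: $before -> $after"
rm -r "$dir"
```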

The output shown is:
tail 11966 shantanu 3r REG 8,6 8 224954 /home/shantanu/dev/perl/plot/a.txt~ (deleted)
vim 12576 shantanu 9u REG 8,6 12288 210918 /home/shantanu/dev/perl/plot/.a.txt.swp
which confirms what we discussed above.

The -F option works around this because tail then periodically reopens the file by its name and reads it again, bypassing the issue above.

Then I simply tried running tail again on the same file and doing something like “echo abc >> a.txt” and I could see the behaviour as expected with tail immediately detecting the change and displaying it in its window.
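A quick way to confirm why the append case works: `>>` opens the existing file and writes through the same inode, so the descriptor tail is following still points at live data. A minimal sketch:

```shell
# Appending with >> reuses the existing inode, so a plain `tail -f`
# (which follows the file descriptor) keeps seeing the new data.
dir=$(mktemp -d)
echo "first" > "$dir/a.txt"
before=$(ls -i "$dir/a.txt" | awk '{print $1}')
echo "abc" >> "$dir/a.txt"
after=$(ls -i "$dir/a.txt" | awk '{print $1}')
[ "$before" = "$after" ] && echo "same inode: $after"
rm -r "$dir"
```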

Hope this helps if you have been pulling out your hair, thinking you have gone crazy because your favourite little tool, the one you have used for so many years, has suddenly stopped working and no one apart from you is even complaining :P

Tuesday, December 29, 2009

Fix Slow Browsing — and More — With OpenDNS


For most small businesses, a reliable connection to the Internet is vital for both communication and commerce.

A key component of Internet access is the Domain Name System (DNS), which allows you to reach sites using familiar and user-friendly names like smallbusinesscomputing.com rather than inscrutable and difficult to remember IP addresses like 63.236.73.55.

Whenever you access a Web site, send or receive e-mail, chat via instant messaging, or use any other type of Internet application, DNS is working behind the scenes matching domain names to IP addresses.

As you read this, your business is probably relying on ISP-provided DNS servers to reach sites and services on the Internet.

They often do an adequate job, but they’re prone to sluggishness (and sometimes outages). Switching your business over to the independent DNS service provider OpenDNS, on the other hand, can make Internet access a bit speedier and safer for everyone on your network, and it adds features like content filtering so you can determine which Web sites your employees can and can’t visit.

OpenDNS uses a combination of caching technology and a network of strategically located servers that generally perform DNS lookups much quicker than ISP servers do.

Considering that loading all the components of a single Web page can often involve lots of individual DNS lookups, saving even a fraction of a second on each can really add up.

OpenDNS also provides a phishing filter and checks every site you visit to make sure it’s legitimate before taking you to it.

Best of all, you can take advantage of OpenDNS for free (or at minimal cost) and without having to make any major configuration changes to your network or any of your computers.

Getting Started with OpenDNS

Getting up and running with OpenDNS ranges from easy to very easy.

If you’re a small firm that relies exclusively on ISP-provided DNS — that is, you don’t maintain your own DNS server — all you need to do is make a quick tweak to your router settings so that it uses OpenDNS’s DNS servers rather than your ISP’s.

The exact configuration steps vary by router, but it basically involves logging into it and looking for a screen similar to the one shown in Figure 1 where you can specify custom DNS servers. (The OpenDNS server addresses are 208.67.222.222 and 208.67.220.220.) You can also get manufacturer- and model-specific setup instructions for consumer and small office routers at the OpenDNS store.
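If you’d rather point a single Linux machine at OpenDNS directly instead of (or in addition to) changing the router, the equivalent change is two lines in /etc/resolv.conf. A sketch; note that on DHCP-managed hosts this file is often rewritten automatically, so a router-level change is usually the more durable option:

```
# /etc/resolv.conf
nameserver 208.67.222.222
nameserver 208.67.220.220
```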

If your company is running a DNS server, you’ll need to configure it to use OpenDNS to look up addresses outside your own network (i.e. on the Internet).

The configuration process is simple, and this page on the OpenDNS Web site will give you step-by-step instructions on how to change your DNS on Windows, Mac or Unix/Linux-based servers.

Regardless of which setup method you use, when you’re finished you’ll have the benefit of OpenDNS’s speed improvements and phishing protection.

You can then verify that OpenDNS is properly configured, and go to the test site to see the phishing filter in action.

Content Filtering
To take advantage of the aforementioned Web content filtering, you’ll need to take the extra step of creating an OpenDNS Basic account (still free), so that the service can identify your specific network and apply unique settings to it.

OpenDNS identifies your network by the public IP address assigned to it by your ISP.

Although it’s not too common with business-class Internet service, if your network’s public IP address is dynamic — i.e. subject to periodic changes — you’ll need to run a small utility on one of your systems (preferably one that’s left running all the time).

This will detect any changes to your public IP and update OpenDNS accordingly. (You’ll see a link to the Dynamic IP software when you set up your network, but you can also download the utility.)

Once your OpenDNS account is created and your network defined, you’re ready to apply network-specific configuration options via the Settings tab.

For content filtering, you can use general settings — minimal, low, moderate, high — or customized ones to filter almost 60 specific categories of inappropriate or time-wasting content (e.g. adult, games, social networking, Webmail, etc.).

You’ll also have the option to block or allow access to particular domain names, known as whitelisting or blacklisting. (See Figure 2.)

Other benefits of using OpenDNS with an account include the capability to view statistics about your network’s DNS usage, such as which domains were visited most and which access attempts were blocked.

You’ll also be able to customize the message that’s displayed when the phishing or content filter blocks a site, as well as on the guide page, which presents a list of suggested alternatives when someone types in an invalid or unresponsive address.

It’s worth noting that OpenDNS only knows about your network and not its users, so it won’t allow you to apply different settings to individual employees.

Similarly, OpenDNS collects network stats in aggregate; it will be able to tell you when someone attempts to access a forbidden site, but not that it was Fred in accounting. (Sorry to narc on you, Fred.)


What’s the Catch?
At this point you might be wondering how OpenDNS manages to provide its service for free. As is so often the case, “free” really means “advertising supported,” and the upshot is that sponsored links will appear on every block and guide page.

If you’d rather not deal with the ads and are willing to ante up $5 per user per year — still pretty cheap — to make them go away, you can upgrade to OpenDNS Deluxe.

Aside from being ad-free the Deluxe version offers a handful of additional enhancements including more customization options and a much longer stats history. (See a detailed comparison between Basic and Deluxe.)

Note: Google recently released a DNS service of its own called Google Public DNS, which promises speed and security benefits similar to OpenDNS, but it doesn’t currently offer any advanced/customizable features.

Switching to OpenDNS won’t make an inherently slow Internet connection lightning-quick, nor will it protect you against every form of Internet-borne malady. But if you want faster, more secure Internet access, plus more control and insight over how your small business’s connection is used, it’s worth checking out.

Udev: Introduction to Device Management in a Modern Linux System

Modern Linux distributions are capable of identifying a hardware component which is plugged into an already-running system.

There are a lot of user-friendly distributions like Ubuntu, which will automatically run specific applications like Rhythmbox when a portable device like an iPod is plugged into the system.

Hotplugging (which is the word used to describe the process of inserting devices into a running system) is achieved in a Linux distribution by a combination of three components: Udev, HAL, and Dbus.


Udev supplies a dynamic device directory containing only the nodes for devices which are connected to the system.

It creates or removes the device node files in the /dev directory as they are plugged in or taken out. Dbus is like a system bus which is used for inter-process communication.

HAL gets information from the Udev service when a device is attached to the system, and it creates an XML representation of that device.

It then notifies the corresponding desktop application like Nautilus through the Dbus and Nautilus will open the mounted device’s files.

This article focuses only on Udev, which does the basic device identification.

What is Udev?
Udev is the device manager for the Linux 2.6 kernel that creates/removes device nodes in the /dev directory dynamically.

It is the successor of devfs and hotplug. It runs in userspace and the user can change device names using Udev rules.

Udev depends on the sysfs file system which was introduced in the 2.5 kernel. It is sysfs which makes devices visible in user space.

When a device is added or removed, kernel events are produced which will notify Udev in user space.

The external binary /sbin/hotplug was used in earlier releases to inform Udev about device state change. That has been replaced and Udev can now directly listen to those events through Netlink.

Why Do We Need It?
In the older kernels, the /dev directory contained static device files. But with dynamic device creation, device nodes are created only for those devices which are actually present in the system.

Let us see the disadvantages of the static /dev directory, which led to the development of Udev.

Problems Identifying the Exact Hardware Device for a Device Node in /dev
The kernel will assign a major/minor number pair when it detects a hardware device while booting the system. Let us consider two hard disks.

The connection is such that one is attached as the master and the other as the slave. The Linux system will call them /dev/hda and /dev/hdb.

Now, if we interchange the disks the device name will change.

This makes it difficult to identify the correct device that is related to the available static device node. The condition gets worse when there are a bunch of hard disks connected to the system.

Udev provides a persistent device naming system through the /dev directory, making it easier to identify the device.

The following is an example of persistent symbolic links created by Udev for the hard disks attached to a system.
$ ls -lR /dev/disk/
/dev/disk/by-id:
lrwxrwxrwx 1 root root 9 Jul 4 06:48 scsi-SATA_WDC_WD800JD-75M_WD-WMAM9UT48593 -> ../../sda 
lrwxrwxrwx 1 root root 10 Jul 4 06:48 scsi-SATA_WDC_WD800JD-75M_WD-WMAM9UT48593-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jul 4 06:48 scsi-SATA_WDC_WD800JD-75M_WD-WMAM9UT48593-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jul 4 06:48 scsi-SATA_WDC_WD800JD-75M_WD-WMAM9UT48593-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jul 4 06:48 scsi-SATA_WDC_WD800JD-75M_WD-WMAM9UT48593-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 Jul 4 06:48 scsi-SATA_WDC_WD800JD-75M_WD-WMAM9UT48593-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Jul 4 06:48 scsi-SATA_WDC_WD800JD-75M_WD-WMAM9UT48593-part6 -> ../../sda6
lrwxrwxrwx 1 root root 10 Jul 4 06:48 scsi-SATA_WDC_WD800JD-75M_WD-WMAM9UT48593-part7 -> ../../sda7
/dev/disk/by-label:
lrwxrwxrwx 1 root root 10 Jul 4 06:48 1 -> ../../sda6
lrwxrwxrwx 1 root root 10 Jul 4 06:48 boot1 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jul 4 06:48 project -> ../../sda3
lrwxrwxrwx 1 root root 10 Jul 4 06:48 SWAP-sda7 -> ../../sda7
/dev/disk/by-path:
lrwxrwxrwx 1 root root 9 Jul 4 06:48 pci-0000:00:1f.2-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Jul 4 06:48 pci-0000:00:1f.2-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jul 4 06:48 pci-0000:00:1f.2-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jul 4 06:48 pci-0000:00:1f.2-scsi-0:0:0:0-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jul 4 06:48 pci-0000:00:1f.2-scsi-0:0:0:0-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 Jul 4 06:48 pci-0000:00:1f.2-scsi-0:0:0:0-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Jul 4 06:48 pci-0000:00:1f.2-scsi-0:0:0:0-part6 -> ../../sda6
lrwxrwxrwx 1 root root 10 Jul 4 06:48 pci-0000:00:1f.2-scsi-0:0:0:0-part7 -> ../../sda7
/dev/disk/by-uuid:
lrwxrwxrwx 1 root root 10 Jul 4 06:48 18283DC6283DA422 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jul 4 06:48 25a4068c-e84a-44ac-85e6-461b064d08cd -> ../../sda6
lrwxrwxrwx 1 root root 10 Jul 4 06:48 3ea7cf15-511b-407a-a56b-c6bfa046fd9f -> ../../sda5
lrwxrwxrwx 1 root root 10 Jul 4 06:48 8878a0a4-604e-4ddf-b62c-637c4fa84d3f -> ../../sda2
lrwxrwxrwx 1 root root 10 Jul 4 06:48 e50bcd6d-61ea-4b05-81a8-3cbe17ad6674 -> ../../sda3

Persistent device naming helps to identify the hardware device without much trouble.

Huge Number of Device Nodes in /dev
In the static model of device node creation, no method was available to identify the hardware devices actually present in the system.

So, device nodes were created for all the devices that Linux was known to support at the time. The huge mess of device nodes in /dev made it difficult to identify the devices actually present in the system.

Not Enough Major/Minor Number Pairs
The number of static device nodes to be included grew greatly over time, and the 8-bit scheme that was used proved insufficient for handling all the devices.

As a result the major/minor number pairs started running out.

Character devices and block devices have a fixed major/minor number pair assigned to them. The authority responsible for assigning the major/minor pair is the Linux Assigned Name and Number Authority.
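You can inspect a node’s assigned major/minor pair with stat. As a sketch, /dev/null is the canonical example: LANANA fixes it at major 1, minor 3 on Linux (stat prints the values in hex):

```shell
# %t and %T print the major and minor device numbers (in hex) of a device node.
stat -c 'major=%t minor=%T' /dev/null
# prints: major=1 minor=3
```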

But, a machine will not use all the available devices. So, there will be free major/minor numbers within a system.

In such a situation, the kernel of that machine will borrow major/minor numbers from those free devices and assign them to other devices which require them.

This can create issues at times. The user space application which handles the device through the device node will not be aware of the number change.

For the user space application, the device number assigned by LANANA is very important. So, the user space application should be informed about the major/minor number change.

This is called dynamic assignment of major/minor numbers and Udev does this task.

Udev’s Goals
  • Run in user space.
  • Create persistent device names, take the device naming out of kernel space and implement rule based device naming.
  • Create a dynamic /dev with device nodes for devices present in the system and allocate major/minor numbers dynamically.
  • Provide a user space API to access the device information in the system.

Installation of Udev

Udev is the default device manager in the 2.6 kernel. Almost all modern Linux distributions come with Udev as part of the default installation.

You can also get Udev from http://www.kernel.org/pub/linux/utils/kernel/hotplug/. The latest version of Udev needs the 2.6.25 kernel with sysfs, procfs, signalfd, inotify, Unix domain sockets, networking, and hotplug enabled.

CONFIG_HOTPLUG=y
CONFIG_UEVENT_HELPER_PATH=""
CONFIG_NET=y
CONFIG_UNIX=y
CONFIG_SYSFS=y
CONFIG_SYSFS_DEPRECATED*=n
CONFIG_PROC_FS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y 
CONFIG_INOTIFY=y
CONFIG_SIGNALFD=y

For more reliable operation, the kernel must not use the CONFIG_SYSFS_DEPRECATED* options.

Udev depends on the proc and sys file systems and they must be mounted on /proc and /sys.

Working of Udev
The Udev daemon listens to the netlink socket that the kernel uses for communicating with user space applications.

The kernel will send a bunch of data through the netlink socket when a device is added to, or removed from a system.

The Udev daemon catches all this data and will do the rest, i.e., device node creation, module loading etc.

Kernel Device Event Management
  • At boot, a tmpfs is mounted on the /dev directory.
  • After that, Udev will copy the static device nodes from /lib/udev/devices to the /dev directory.
  • The Udev daemon then runs and collects uevents from the kernel, for all the devices connected to the system.
  • The Udev daemon will parse the uevent data and it will match the data with the rules specified in /etc/udev/rules.d.
  • It will create the device nodes and symbolic links for the devices as specified in the rules.
  • The Udev daemon reads the rules from /etc/udev/rules.d/*.rules and stores them in the memory.
  • Udev will receive an inotify event, if any rules were changed. It will read the changes and will update the memory.

Device Driver Loading For Devices

Udev uses the modalias method to load device drivers. The modalias file located at /lib/modules/`uname -r`/modules.alias helps Udev load the drivers.

The modalias file is created by the depmod binary and it contains alternate names for the device drivers.

Let us examine an example of device driver loading in Linux:
I am using a C program to collect data from the netlink socket that Udev uses to create device nodes and load modules.
[root@arch ~]# ./a.out
add@/devices/pci0000:00/0000:00:02.1/usb1/1-4
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:02.1/usb1/1-4
SUBSYSTEM=usb
MAJOR=189
MINOR=1
DEVTYPE=usb_device
DEVICE=/proc/bus/usb/001/002
PRODUCT=1058/1010/105
TYPE=0/0/0
BUSNUM=001
DEVNUM=002
SEQNUM=1163
add@/devices/pci0000:00/0000:00:02.1/usb1/1-4/1-4:1.0
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:02.1/usb1/1-4/1-4:1.0
SUBSYSTEM=usb
DEVTYPE=usb_interface
DEVICE=/proc/bus/usb/001/002
PRODUCT=1058/1010/105 ...

You can see that it provides a lot of information about the device. This includes the modalias variable that tells Udev to load a particular module.

The modalias data will look like :
MODALIAS=pci:v000010ECd00008169sv00001385sd0000311Abc02sc00i00

The modalias data contains all the information required to find the corresponding device driver :
pci :- it’s a PCI device
v :- vendor ID of the device; here it is 000010EC (i.e. 10EC)
d :- device ID of the device; here it is 00008169 (i.e. 8169)
sv and sd :- the subsystem vendor and subsystem device IDs.
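The vendor and device fields can be pulled out of a modalias string with a couple of sed expressions. A quick sketch (the sample string is the one shown above):

```shell
# Extract the vendor and device IDs from a PCI modalias string.
modalias="pci:v000010ECd00008169sv00001385sd0000311Abc02sc00i00"
# vendor: the hex run between "v" and the following "d0...", zeros stripped
vendor=$(echo "$modalias" | sed 's/^pci:v0*\([0-9A-Fa-f]*\)d0.*/\1/')
# device: the hex run between that "d" and "sv", zeros stripped
device=$(echo "$modalias" | sed 's/^pci:v[0-9A-Fa-f]*d0*\([0-9A-Fa-f]*\)sv.*/\1/')
echo "vendor=$vendor device=$device"
# prints: vendor=10EC device=8169
```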

The best place to find the vendor/product from the id of a PCI device is http://www.pcidatabase.com.
Udev uses the modalias data to find the correct device driver from /lib/modules/`uname -r`/modules.alias.
$ grep -i 10EC /lib/modules/`uname -r`/modules.alias | grep -i 8169
alias pci:v000010ECd00008129sv*sd*bc*sc*i* r8169
alias pci:v000010ECd00008169sv*sd*bc*sc*i* r8169

You can see that the module which is suitable for the device is r8169. Let us get some more information about the driver.
$ /sbin/modinfo r8169
filename: /lib/modules/2.6.18-53.el5/kernel/drivers/net/r8169.ko
version: 2.2LK-NAPI
license: GPL
description: RealTek RTL-8169 Gigabit Ethernet driver
author: Realtek and the Linux r8169 crew 
srcversion: D5EDA4980B92CA2CF677B62
alias: pci:v00001737d00001032sv*sd00000024bc*sc*i*
alias: pci:v000016ECd00000116sv*sd*bc*sc*i*
alias: pci:v00001186d00004300sv*sd*bc*sc*i*
alias: pci:v000010ECd00008129sv*sd*bc*sc*i*
alias: pci:v000010ECd00008169sv*sd*bc*sc*i*
depends:
vermagic: 2.6.18-53.el5 SMP mod_unload 686 REGPARM 4KSTACKS gcc-4.1
parm: media:force phy operation. Deprecated by ethtool (8). (array of int)
parm: rx_copybreak:Copy breakpoint for copy-only-tiny-frames (int)
parm: use_dac:Enable PCI DAC. Unsafe on 32 bit PCI slot. (int)
parm: debug:Debug verbosity level (0=none, …, 16=all) (int)

Check out the line starting with “depends”. It describes the other modules which the r8169 module depends on. Udev will load these modules also.

Rule Processing and Device Node Creation

As already mentioned, Udev parses the rules in /etc/udev/rules.d/ for every device state change in the kernel.

The Udev rule can be used to manipulate the device node name/permission/symlink in user space.

Let us see some sample rules that will help you understand Udev rules better.

The data supplied by the kernel through netlink is used by Udev to create the device nodes. The data includes the major/minor number pair and other device specific data such as device/vendor id, device serial number etc.

The Udev rule can match all this data to change the name of the device node, create symbolic links or register the network link.

The following example shows how to write a Udev rule to rename the network device in a system.

We need to get the device information to create a rule.
# udevadm info -a -p /sys/class/net/eth0/
looking at device '/devices/pci0000:00/0000:00:04.0/0000:01:06.0/net/eth0':
KERNEL=="eth0"
SUBSYSTEM=="net"
DRIVER==""
ATTR{addr_len}=="6"
ATTR{dev_id}=="0x0"
ATTR{ifalias}==""
ATTR{iflink}=="3"
ATTR{ifindex}=="3"
ATTR{features}=="0x829"
ATTR{type}=="1"
ATTR{link_mode}=="0"
ATTR{address}=="00:80:48:62:2a:33"
ATTR{broadcast}=="ff:ff:ff:ff:ff:ff"
ATTR{carrier}=="1"
ATTR{dormant}=="0"
ATTR{operstate}=="unknown"
ATTR{mtu}=="1500"
ATTR{flags}=="0x1003"
ATTR{tx_queue_len}=="1000"
looking at parent device '/devices/pci0000:00/0000:00:04.0/0000:01:06.0':
KERNELS=="0000:01:06.0"
SUBSYSTEMS=="pci"
DRIVERS=="8139too"
ATTRS{vendor}=="0x10ec"
ATTRS{device}=="0x8139"
ATTRS{subsystem_vendor}=="0x10ec"
ATTRS{subsystem_device}=="0x8139"
ATTRS{class}=="0x020000"
ATTRS{irq}=="19"
ATTRS{local_cpus}=="ff"
ATTRS{local_cpulist}=="0-7"
ATTRS{modalias}=="pci:v000010ECd00008139sv000010ECsd00008139bc02sc00i00"
ATTRS{enable}=="1"
ATTRS{broken_parity_status}=="0"
ATTRS{msi_bus}==""
looking at parent device '/devices/pci0000:00/0000:00:04.0':
KERNELS=="0000:00:04.0"
SUBSYSTEMS=="pci"
DRIVERS==""
ATTRS{vendor}=="0x10de"
ATTRS{device}=="0x03f3"
ATTRS{subsystem_vendor}=="0x0000"
ATTRS{subsystem_device}=="0x0000"
ATTRS{class}=="0x060401"
ATTRS{irq}=="0"
ATTRS{local_cpus}=="ff"
ATTRS{local_cpulist}=="0-7"
ATTRS{modalias}=="pci:v000010DEd000003F3sv00000000sd00000000bc06sc04i01"
ATTRS{enable}=="1"
ATTRS{broken_parity_status}=="0"
ATTRS{msi_bus}=="1"
looking at parent device '/devices/pci0000:00':
KERNELS=="pci0000:00"
SUBSYSTEMS==""
DRIVERS==""

You can see that Udev has a lot of information about the network device. Let us examine it in detail:
KERNEL=="eth0" :- kernel name of the device is eth0
DRIVERS=="8139too" :- the loaded driver is 8139too
ATTR{address}=="00:80:48:62:2a:33" :- hardware address of the device
ATTRS{vendor}=="0x10ec" :- vendor ID
ATTRS{device}=="0x8139" :- device ID

Let us create a rule to rename this network device to eth1 (This name will be persistent and will not be reset after a reboot).
[root@arch ~]# cat /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:80:48:62:2a:33", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

This rule renames the device to eth1. We can easily manage the network and other device nodes in the system this way.

Udev Utilities

Udev provides some user space utilities to manage devices and device nodes in a system. One such command that you will find in all of the latest Linux distributions is ‘udevadm’.

The udevadm command consolidates the functionality of the older standalone udev utilities (udevinfo, udevtrigger, udevcontrol and so on).

This utility can be used to regenerate the device nodes in a running system as shown:
[root@arch ~]# ls -l /dev/ | wc -l
150
[root@arch ~]# rm -rf /dev/*
rm: cannot remove `/dev/pts/0': Operation not permitted
rm: cannot remove directory `/dev/shm': Device or resource busy
[root@arch ~]# ls -l /dev/ | wc -l
4
[root@arch ~]# udevadm trigger
[root@arch ~]# ls -l /dev/ | wc -l
150

There are many other useful operations that can be done using the udevadm command. You can get more information from the man page of udevadm.

What is the Future of Udev?

It is impossible to predict the future of a Linux subsystem. Linux is undergoing rapid development, and it is probably not wise to predict the future of the Linux kernel.

The devfs system, which was introduced as a solution to static device nodes, disappeared within a short span of time.

But Udev has proven to be a successful device manager for the modern Linux kernel, and promises to be a more stable, feature rich device management system in future releases.

Friday, December 25, 2009

Top Ten Things I Miss in Windows

There is an old saying that goes "you can't miss what you never had," meaning that those who have never had these things will have no idea what they are missing out on.

Typically I use Ubuntu or some other Linux flavor as my operating system for everyday tasks; however, as most techs know, using Windows is unavoidable at times (whether because I am fixing someone else's machine, I'm at work/school, or I'm queuing up some Netflix watch-instantly on my home system).

That being said, the following are the top ten features/programs I find myself grumbling about or missing the most when I am working on the Windows platform:

10.) Klipper/Copy & Paste Manager - I use this one a lot when I am either coding or writing a research paper for school.

More often than not I find I have copied something new only to discover I need to paste a link or block of code again from two copies back.

Having a tray icon where I can recall the last ten copies or so is mighty useful.

9.) Desktop Notifications - This is something that was first largely introduced in Ubuntu 9.04 and something I quickly grew accustomed to having.

Basically it is a small message (notification) that pops up in the upper right-hand corner of your screen for a few moments when something happens in one of your programs (a torrent finishes, you get a new instant message, etc.) or when you adjust the volume/brightness settings on your system.

8.) "Always on Top" Window Option - This is something I find useful when I am instant messaging while typing a paper, surfing the net, or watching a movie on my computer.

Essentially, it makes sure that the window with this option toggled on always stays on top of your view, regardless of which program you have selected or are working in.

It is useful because it allows me to read instant messages without having to click out of something else that I am working on.

7.) Multiple Workspaces - When I get to really heavy multitasking on a system, having multiple desktops to assign applications to is a godsend.

It allows for better organization of the different things I am working on and keeps me moving at a faster pace.

6.) Scrolling in the Window/Application the Cursor is Over - This one again is mostly applicable when some heavy multitasking is going on (but hey - it's almost 2010, who isn't always doing at least three things at once, right?).

Basically, on the Ubuntu/Gnome desktop, when I use the scroll on my mouse (whether it is the multi-touch on my track pad or the scroll wheel on my USB mouse) it scrolls in whatever program/window my mouse is currently over, instead of only scrolling in whatever application I have selected.

5.) Gnome-Do - Most anyone who uses the computer in their everyday work will tell you that less mouse clicks means faster speed and thus (typically) more productivity.

Gnome-Do is a program that allows you to cut down on mouse clicks (so long as you know what program you are looking to load).

The gist of what it does is this: you assign a series of hot keys to call up the search bar (personally I use control+alt+space), then you start typing the name of an application or folder you want to open and it will start searching for it - once the correct thing is displayed, all you need to do is tap enter to load it.

The best part is that it remembers which programs you use most often. Meaning that most times you only need to type the first letter or two of a commonly used application for it to find the one you are looking for.

4.) Tabbed File/Folder Viewing - Internet Explorer finally got tabs! Why can't the default Windows explorer for viewing files/folders join it in the world of twenty-first-century computing?

Tabs are very useful and are a much cleaner option when sorting through files as opposed to having several windows open on your screen.

3.) Removable Media Should Not Have a Drive Letter - The system Windows uses for assigning letters to storage devices was clearly invented before flash drives existed, and I feel it works very poorly for handling such devices.

It is confusing to new computer users that their removable media appears as a different drive letter on most every machine (and even on the same machine sometimes if you have multiple drives attached).

A better solution is something like what Gnome/KDE/OS X do: have the drive appear as an icon on the desktop with the name of the drive displayed, not the drive letter (it's fine if the letter still exists - I understand the media needs a mount point - it just adds confusion to display the letter instead of the drive name).

2.) Hidden Files that are Easy/Make Sense - I love how Linux handles hidden files. You simply prefix your file name with a "." and then poof, it's gone, unless you have your file browser set to view hidden files.

I think it is goofy to have it set up as a toggleable option within the file's settings. Beyond that, Windows has "hidden" files and "hidden" files to further confuse things.
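The dot-prefix convention is easy to see in a terminal. A small sketch using a throwaway directory:

```shell
# Files whose names start with "." are skipped by a plain ls;
# -A asks ls to include them (except for "." and "..").
dir=$(mktemp -d)
touch "$dir/.secret" "$dir/visible"
plain=$(ls "$dir")     # the dotfile is hidden: just "visible"
all=$(ls -A "$dir")    # includes ".secret" as well
echo "plain: $plain"
echo "all: $all"
rm -r "$dir"
```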

1.) System Updates that Install/Configure Once - I've done more than my fair share of Windows installs and the update process it goes through each time irks me beyond belief.

The system downloads and "installs" the updates, then it needs to restart. Upon shutting down it "installs" the updates again and then proceeds to "configure" them.

Then once it comes back online, it "installs" and "configures" the updates one last time. Why? On Ubuntu the only update I need to restart for is a kernel update - and even then, I stick with my older kernel most times unless I have a specific reason for changing to the new one.

0.) Wobbly Windows - This one doesn't affect productivity or usability like the other ten, but I must say, after using mostly Ubuntu for the last year and a half, not having the windows wobble when I drag them around the screen is a huge killjoy.

I'm aware that a few of the things mentioned above can be added to Windows through third-party software; however, like I said, most times when I am using Windows it is at work, at school, or for a few moments on a friend's system, meaning I'm not about to go installing extra things on them or changing configurations.

Anyone else have some other key things/features they miss when using the Windows platform after coming from elsewhere?

Wednesday, December 23, 2009

Windows vs Linux Server and why is Windows less secure than Linux?

On April 14th, 2006, Richard Stiennon wrote an article in ZDNet entitled Why Windows is less secure than Linux.

Stiennon starts by saying: "Many millions of words have been written and said on this topic. I have a couple of pictures.

The basic argument goes like this. In its long evolution, Windows has grown so complicated that it is harder to secure. Well these images make the point very well".

In his post, Stiennon explains that both images (shown here) represent a map of system calls that occur when a web server serves a single HTML page with a picture.

The same page and picture have been used on both servers for the purpose of testing. Richard further explains: "A system call is an opportunity to address memory.

A hacker investigates each memory access to see if it is vulnerable to a buffer overflow attack. The developer must do QA on each of these entry points.

The more system calls, the greater potential for vulnerability, the more effort needed to create secure applications".

The resulting images were generated by Sana Security. The first image is of the system calls that occur on a Linux server running Apache; while the second is of a Windows Server running IIS.

The images speak for themselves.




Thursday, December 17, 2009

It's About Time: Why Your Network Needs an NTP Server

Good timekeeping is not an obvious priority for network administrators, but the more you think about it, the clearer it is that accurate clocks have a crucial role to play on any network.

Let the clocks on your networked devices get out of sync and you could end up losing valuable corporate data.

Here are just a few things that rely on hardware clocks which are accurately set and in sync with each other:

Scheduled data backups
Successful backups are vital to any organization. Systems that are too far out of sync may fail to back up correctly, or even at all.
Network accelerators
These and other devices that use caching and wide area file systems may rely heavily on file time stamps to work out which version of a piece of data is the most current.

Bad time syncing could cause these systems to work incorrectly and use the wrong versions of data.

Network management systems
When things go wrong, examining system logs is a key part of fault diagnosis. But if the timing in these logs is out of sync, it can take much longer than necessary to figure out what went wrong and to get systems up and running again.

Intrusion analysis
In the event of a network intrusion, working out how your network was compromised and what data was accessed may only be possible if you have accurately time-stamped router and server logs.

Hackers will often delete logs if they can; but even when they don't, inaccurate time data makes the job of analyzing them far harder, giving hackers more time to exploit your network.

Compliance regulations
Sarbanes-Oxley, HIPAA, GLBA and other regulations do, or may in the future, require accurate time stamping of some categories of transactions and data.

Trading systems
Companies in some sectors may make thousands of electronic trades per second. In this sort of environment system clocks need to be very accurate indeed.

Many companies set and synchronize their devices using Network Time Protocol (NTP), with NTP clients or daemons connecting to time servers on the network known as stratum-2 devices.

To ensure these stratum-2 time servers are accurate, they are synced over the Internet through port 123 with a stratum-1 device.

This public time server is connected directly (i.e. not over a network) to one or more stratum-0 devices - extremely accurate reference clocks.
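In practice this hierarchy shows up directly in the NTP client configuration. Here is a minimal sketch of an /etc/ntp.conf that follows this model (the pool.ntp.org names are public example servers; substitute your own stratum-2 sources):

```
# /etc/ntp.conf (sketch)
# Query several upstream time servers; ntpd picks the best and disciplines the clock
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
# Record the clock's frequency error so ntpd recovers quickly after restarts
driftfile /var/lib/ntp/ntp.drift
```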

Unfortunately, there are a number of potential problems with this approach. The most basic one is that the time that a stratum-2 server on a corporate network receives over the Internet from a stratum-1 server is not very precise.

That's because the time data has to travel over the Internet - from the time server to the corporate time source - in an unpredictable way, and at an unpredictable speed.

This means it always has a varying, and unknown, error factor. Although all the devices on a local area network that update themselves from the same corporate stratum-2 time server may be reasonably well synchronized (to within anything from 1 to about 100 milliseconds), keeping the time synchronized between stratum-2 devices on different local area networks to a reasonable degree of accuracy can be difficult.

Security Risks with NTP Servers
There are also security risks involved in using public stratum-1 NTP servers, most notably:
NTP clients and daemons are in themselves a potential security risk. Vulnerabilities in this type of software could be (and have in the past been) exploited by hackers sending appropriately crafted packets through the corporate firewall on port 123.

Organizations that use public NTP servers are susceptible to denial of service attacks by a hacker sending spoofed NTP data, making time syncing impossible during the attack.

For companies involved in activities such as financial trading—which requires very precise timing information—this could be very damaging.

Related Articles
One way to both avoid these potential security issues and to get more accurate time data is simply to run one or more stratum-1 servers inside your network, behind your corporate firewall.

Running Your Own Stratum-1 Servers
Stratum-1 time servers are available in a single 1U rack-mountable form factor that can easily be installed in your server room or data center and connected to your network, and most have a way of connecting to a stratum-0 reference clock built in.

The most commonly used ways to connect to a stratum-0 device are by terrestrial radio or GPS signals.

Terrestrial radio based connections use radio signals such as WWVB out of Fort Collins, Colorado, MSF from Anthorn, UK, or DCF77 from Frankfurt, Germany.

This is similar to the way consumer devices such as watches and alarm clocks update themselves with signals from reference clocks to keep accurate time.

Stratum-1 time servers that sync with GPS satellite signals are more accurate, but are less convenient to install as they need to be connected to an antenna fitted in a suitable position on the roof of the building.

Using time data from a number of satellites, and by calculating the distance of each satellite from the antenna, a stratum-1 time server that uses GPS reference clock signals is able to get the precise time to within 50 or so nanoseconds.

More importantly, two or more of these servers at separate locations and running on separate local area networks can also remain in sync with each other to a similar degree of accuracy.

Companies that supply this type of equipment include Symmetricom, Spectracom, EndRun Technologies and Time Tools.

To provide redundancy, some larger organizations install multiple GPS-based time servers at each location.

An alternative is to have a radio-based time server as a backup to a GPS-based one, in case the GPS server itself fails or, more likely, the GPS antenna is damaged, perhaps during bad weather.

Given that most radio and GPS based time servers cost between $1,000 and $5,000, purchasing two or more time servers is not a major investment for a medium or large organization.

Smaller companies, including those at isolated sites which are not connected to the Internet, can also use a low cost stratum-1 GPS PCI card (connected to an appropriate antenna) to enable a standard PC to act as a time server for the local area network, using the satellites as an external time source.

In the concluding piece in this series we'll take a look at how to implement a GPS-based time server in your data center.

My 10 UNIX Command Line Mistakes

Anyone who has never made a mistake has never tried anything new. -- Albert Einstein.

Here are a few mistakes that I made while working at the UNIX prompt. Some of these mistakes caused a good amount of downtime.

Most of these mistakes are from my early days as a UNIX admin.

userdel Command
The file /etc/deluser.conf was configured to remove the home directory and mail spool of the user being removed (this was done by the previous sysadmin, and it was my first day at work).

I just wanted to remove the user account, but ended up deleting everything (note: -r was effectively activated via deluser.conf):

# userdel foo
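A habit that would have caught this: inspect the removal policy before deleting an account. The sketch below greps a fabricated sample file; on a real Debian-style system you would point it at /etc/deluser.conf itself:

```shell
# Inspect the account-removal policy before running userdel/deluser.
# We grep a fabricated sample here; on a real system use /etc/deluser.conf.
cat > /tmp/deluser.conf.sample <<'EOF'
REMOVE_HOME = 1
REMOVE_ALL_FILES = 0
EOF
grep -E 'REMOVE_HOME|REMOVE_ALL_FILES' /tmp/deluser.conf.sample
```

If REMOVE_HOME is set to 1, a plain account removal will also wipe the home directory, exactly the surprise described above.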


Rebooted Solaris Box
On Linux the killall command kills processes by name (killall httpd). On Solaris it kills all active processes.

As root I killed all processes - this was our main Oracle DB box:

killall process-name
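On Linux, a safer habit is to preview matches with pgrep -x before killing by exact name with pkill -x. A small demonstration using a throwaway process:

```shell
# Start a disposable background process, preview the match, then kill by exact name
sleep 300 &
pgrep -x sleep          # preview: prints matching PIDs, kills nothing
pkill -x sleep          # kills only processes whose name is exactly "sleep"
```

pgrep/pkill behave the same way everywhere they exist, which avoids the Linux/Solaris killall trap entirely.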


Destroyed named.conf
I wanted to append a new zone to the /var/named/chroot/etc/named.conf file, but ended up running (note > instead of >>, which truncated the file):

# ./mkzone example.com > /var/named/chroot/etc/named.conf


Destroyed Working Backups with Tar and Rsync (personal backups)
I had only one backup copy of my QT project and I just wanted to extract a directory called functions. I ended up destroying the entire backup (note the -c switch instead of -x):


# cd /mnt/bacupusbharddisk
# tar -zcvf project.tar.gz functions



I had no backup. Similarly, I once ran the rsync command with source and destination swapped and deleted all new files by overwriting them from the backup set (I've since switched to rsnapshot):

# rsync -av --delete /dest /src


Again, I had no backup.

Deleted Apache DocumentRoot
I had symlinks for my web server docroot (/home/httpd/http was symlinked to /www). I forgot about the symlink. To save disk space, I ran rm -rf on the http directory. Luckily, I had a full working backup set.

Accidentally Changed Hostname and Triggered False Alarm
I accidentally changed the current hostname (I only wanted to see the current hostname setting) on one of our cluster nodes.

Within minutes I received an alert message on both mobile and email.

hostname foo.example.com
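The trap here is that hostname with an argument sets the name, while with no argument it only prints it. uname -n is a read-only alternative that can never change anything:

```shell
hostname    # no argument: prints the current hostname, changes nothing
uname -n    # read-only equivalent; cannot set anything
```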


Public Network Interface Shutdown
I wanted to shut down the VPN interface eth0, but ended up shutting down eth1 while I was logged in via SSH:

# ifconfig eth1 down


Firewall Lockdown
I made changes to sshd_config, changing the ssh port number from 22 to 1022, but failed to update the firewall rules.

After a quick kernel upgrade, I rebooted the box and locked myself out. I had to call a remote data center tech to reset the firewall settings. (Now I use a firewall reset script to avoid lockdowns.)

Typing UNIX Commands on Wrong Box
I wanted to shut down my local Fedora desktop system, but I issued halt on a remote server (I was logged into the remote box via SSH):


# halt
# service httpd stop



Wrong CNAME DNS Entry
I created a wrong DNS CNAME entry in the example.com zone file. The end result: a few visitors went to /dev/null:


# echo 'foo 86400 IN CNAME lb0.example.com' >> example.com && rndc reload

Failed To Update Postfix RBL Configuration
In 2006 ORDB went out of operation, but I failed to update my Postfix RBL settings.

One day ORDB was re-activated and began returning every IP address queried as being on its blacklist. The end result was a disaster.

Conclusion
All men make mistakes, but only wise men learn from their mistakes -- Winston Churchill.

From all those mistakes I've learnt that:
  1. Backup = ( Full + Removable tapes (or media) + Offline + Offsite + Tested )
  2. The clear choice for preserving all data on UNIX file systems is dump, which is the only tool that guarantees recovery under all conditions (see the Torture-testing Backup and Archive Programs paper).
  3. Never use rsync with a single backup directory. Create snapshots using rsync or rsnapshot.
  4. Use CVS to store configuration files.
  5. Wait and read the command line again before hitting the damn [Enter] key.
  6. Use your well-tested Perl / shell scripts and open source configuration management software such as Puppet, CFEngine or Chef to configure all servers. This also applies to day-to-day jobs such as creating users and so on.
Mistakes are inevitable, so have you made any that caused some sort of downtime? Please add them in the comments below.

Top 20 OpenSSH Server Best Security Practices

OpenSSH is a free and open source implementation of the SSH protocol. It is recommended for remote login, making backups, remote file transfer via scp or sftp, and much more.

SSH is perfect for maintaining the confidentiality and integrity of data exchanged between two networks and systems.

However, the main advantage is server authentication through the use of public key cryptography. From time to time, rumors surface about OpenSSH zero-day exploits.

Here are a few things you need to tweak in order to improve OpenSSH server security.

Default Config Files and SSH Port
  • /etc/ssh/sshd_config - OpenSSH server configuration file.
  • /etc/ssh/ssh_config - OpenSSH client configuration file.
  • ~/.ssh/ - User's ssh configuration directory.
  • ~/.ssh/authorized_keys - Lists the public keys (RSA or DSA) that can be used to log into the user's account.
  • /etc/nologin - If this file exists, sshd refuses to let anyone except root log in.
  • /etc/hosts.allow and /etc/hosts.deny - Access control lists that should be enforced by tcp-wrappers are defined here.
  • SSH default port: TCP 22
SSH Session in Action

#1: Disable OpenSSH Server

Workstations and laptops can work without an OpenSSH server. If you do not need to provide the remote login and file transfer capabilities of SSH, disable and remove the sshd server. CentOS / RHEL / Fedora Linux users can disable and remove openssh-server with the yum command:


# chkconfig sshd off
# yum erase openssh-server



Debian / Ubuntu Linux users can disable and remove the same with the apt-get command:


# apt-get remove openssh-server


You may need to update your iptables script to remove the ssh exception rule. Under CentOS / RHEL / Fedora edit the files /etc/sysconfig/iptables and /etc/sysconfig/ip6tables. Once done, restart the iptables service:


# service iptables restart
# service ip6tables restart



#2: Only Use SSH Protocol 2
SSH protocol version 1 (SSH-1) has man-in-the-middle attack problems and security vulnerabilities. SSH-1 is obsolete and should be avoided at all costs.

Open sshd_config file and make sure the following line exists:
Protocol 2
#3: Limit Users' SSH Access
By default all system users can log in via SSH using their password or public key. Sometimes you create UNIX / Linux user accounts for FTP or email purposes only.

However, those users can log in to the system using ssh.

They will have full access to system tools, including compilers and scripting languages such as Perl and Python, which can open network ports and do many other fancy things.

One of my clients had a really outdated PHP script, and an attacker was able to create a new account on the system via that script.

However, the attacker failed to get into the box via ssh because the account wasn't in AllowUsers.

To allow only the root, vivek and jerry users to log in via SSH, add the following to sshd_config:
AllowUsers root vivek jerry

Alternatively, you can allow all users to login via SSH but deny only a few users, with the following line:
DenyUsers saroj anjali foo

You can also configure Linux PAM to allow or deny login via the sshd server, and you can allow or deny access by group name.

#4: Configure Idle Log Out Timeout Interval
Users can log in to the server via ssh, and you can set an idle timeout interval to avoid unattended ssh sessions.

Open sshd_config and make sure following values are configured:
ClientAliveInterval 300
ClientAliveCountMax 0

You are setting an idle timeout interval in seconds (300 secs = 5 minutes). After this interval has passed, the idle user will be automatically kicked out (read: logged out).

See how to automatically log BASH / TCSH / SSH users out after a period of inactivity for more details.
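For interactive bash sessions there is also a shell-level variant: the built-in TMOUT variable makes bash exit after a period of inactivity at the prompt. A sketch (the profile.d path is an assumption; adjust for your distro):

```shell
# /etc/profile.d/idle-logout.sh (assumed path; sourced by login shells)
TMOUT=300        # bash terminates the session after 300s of prompt inactivity
readonly TMOUT   # keep users from unsetting or changing it
export TMOUT
```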

#5: Disable .rhosts Files
Don't read the user's ~/.rhosts and ~/.shosts files. Update sshd_config with the following setting:
IgnoreRhosts yes

SSH can emulate the behavior of the obsolete rsh command, so disable this insecure RSH-style access.

#6: Disable Host-Based Authentication
To disable host-based authentication, update sshd_config with the following option:
HostbasedAuthentication no
#7: Disable root Login via SSH
There is no need to log in as root via ssh over a network. Normal users can use su or sudo (recommended) to gain root-level access.

This also makes sure you get full auditing information about who ran privileged commands on the system via sudo.

To disable root login via SSH, update sshd_config with the following line:
PermitRootLogin no

However, bob made an excellent point:
Saying "don't login as root" is h******t. It stems from the days when people sniffed the first packets of sessions so logging in as yourself and su-ing decreased the chance an attacker would see the root pw, and decreast the chance you got spoofed as to your telnet host target, You'd get your password spoofed but not root's pw. Gimme a break. this is 2005 - We have ssh, used properly it's secure. used improperly none of this 1989 will make a damn bit of difference. -Bob

#8: Enable a Warning Banner

Set a warning banner by updating sshd_config with the following line:

Banner /etc/issue

Sample /etc/issue file:
----------------------------------------------------------------------------------------------
You are accessing a XYZ Government (XYZG) Information System (IS) that is provided for authorized use only.
By using this IS (which includes any device attached to this IS), you consent to the following conditions:

+ The XYZG routinely intercepts and monitors communications on this IS for purposes including, but not limited to,
penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM),
law enforcement (LE), and counterintelligence (CI) investigations. 

+ At any time, the XYZG may inspect and seize data stored on this IS.

+ Communications using, or data stored on, this IS are not private, are subject to routine monitoring,
interception, and search, and may be disclosed or used for any XYZG authorized purpose.

+ This IS includes security measures (e.g., authentication and access controls) to protect XYZG interests--not
for your personal benefit or privacy.

+ Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching
or monitoring of the content of privileged communications, or work product, related to personal representation
or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work
product are private and confidential. See User Agreement for details.
----------------------------------------------------------------------------------------------
The above is a standard sample; consult your legal team for exact user agreement and legal notice details.

#8: Firewall SSH Port # 22

You need to firewall ssh port 22 by updating your iptables or pf firewall configuration. Usually, the OpenSSH server should accept connections from your LAN or other trusted remote WAN sites only.

Netfilter (Iptables) Configuration

Update /etc/sysconfig/iptables (a Red Hat and friends specific file) to accept connections only from 192.168.1.0/24 and 202.54.1.5/29, enter:
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -s 202.54.1.5/29 -m state --state NEW -p tcp --dport 22 -j ACCEPT

If you have dual-stacked sshd with IPv6, edit /etc/sysconfig/ip6tables (a Red Hat and friends specific file), enter:
-A RH-Firewall-1-INPUT -s ipv6network::/ipv6mask -m tcp -p tcp --dport 22 -j ACCEPT

Replace ipv6network::/ipv6mask with actual IPv6 ranges.

*BSD PF Firewall Configuration
If you are using PF firewall update /etc/pf.conf as follows:
pass in on $ext_if inet proto tcp from {192.168.1.0/24, 202.54.1.5/29} to $ssh_server_ip port ssh flags S/SA synproxy state
#9: Change SSH Port and Limit IP Binding
By default SSH listens on all available interfaces and IP addresses on the system. Limit the ssh port binding and change the ssh port (by default, brute-forcing scripts only try to connect to port 22). To bind to the 192.168.1.5 and 202.54.1.5 IPs and to port 300, add or correct the following lines:
Port 300
ListenAddress 192.168.1.5
ListenAddress 202.54.1.5
A better approach is to use proactive scripts such as fail2ban or denyhosts (see below).

#10: Use Strong SSH Passwords and Passphrase
It cannot be stressed enough how important it is to use strong user passwords and passphrases for your keys.

Brute force attacks work because users pick dictionary-based passwords. You can force users to avoid dictionary-based passwords, and use the john the ripper tool to find existing weak passwords.

Here is a sample random password generator (put in your ~/.bashrc):
genpasswd() {
    local l=$1
    [ "$l" == "" ] && l=20
    tr -dc A-Za-z0-9_ < /dev/urandom | head -c ${l} | xargs
}

Run it:


genpasswd 16


Output:
uw8CnDVMwC6vOKgW
#11:  Use Public Key Based Authentication
Use public/private key pair with password protection for the private key.

See how to use RSA and DSA key based authentication. Never, ever use a passphrase-free (passwordless) key for login.

#12: Use Keychain Based Authentication
keychain is a special bash script designed to make key-based authentication incredibly convenient and flexible.

It offers various security benefits over passphrase-free keys. See how to setup and use keychain software.

#13: Chroot SSHD (Lock Down Users To Their Home Directories)
By default users are allowed to browse server directories such as /etc/, /bin and so on. You can protect ssh using an OS-based chroot or special tools such as rssh.

With the release of OpenSSH 4.8p1 or 4.9p1, you no longer have to rely on third-party hacks such as rssh or complicated chroot(1) setups to lock users to their home directories.

See this blog post about new ChrootDirectory directive to lock down users to their home directories.

#14: Use TCP Wrappers
TCP Wrapper is a host-based networking ACL system, used to filter network access to the Internet. OpenSSH supports TCP wrappers.

Just update your /etc/hosts.allow file as follows to allow SSH only from 192.168.1.2 and 172.16.23.12:
sshd : 192.168.1.2 172.16.23.12 

See this FAQ about setting and using TCP wrappers under Linux / Mac OS X and UNIX like operating systems.

#15: Disable Empty Passwords
You need to explicitly disallow remote login from accounts with empty passwords; update sshd_config with the following line:
PermitEmptyPasswords no
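You can also audit for such accounts: any entry in /etc/shadow whose second field is empty has no password at all. A sketch shown against a fabricated shadow-format sample (on a real system, run the awk line as root against /etc/shadow):

```shell
# Any account whose second colon-separated field is empty can log in with no password
cat > /tmp/shadow.sample <<'EOF'
root:$6$salt$hash:18000:0:99999:7:::
guest::18000:0:99999:7:::
EOF
awk -F: '($2 == "") {print $1}' /tmp/shadow.sample   # prints: guest
```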
#16: Thwart SSH Crackers (Brute Force Attack)
Brute force is a method of defeating a cryptographic scheme by trying a large number of possibilities using a single computer or a distributed network.

To prevent brute force attacks against SSH, use the following software:
  • DenyHosts is a Python-based security tool for SSH servers. It is intended to prevent brute force attacks on SSH servers by monitoring invalid login attempts in the authentication log and blocking the originating IP addresses. See how to set up DenyHosts under RHEL / Fedora and CentOS Linux.
  • Fail2ban is a similar program that prevents brute force attacks against SSH.
  • security/sshguard-pf protects hosts from brute force attacks against ssh and other services using pf.
  • security/sshguard-ipfw protects hosts from brute force attacks against ssh and other services using ipfw.
  • security/sshguard-ipfilter protects hosts from brute force attacks against ssh and other services using ipfilter.
  • security/sshblock blocks abusive SSH login attempts.
  • security/sshit checks for SSH/FTP brute force attempts and blocks the offending IPs.
  • BlockHosts automatically blocks abusive IP hosts.
  • Blacklist gets rid of brute force attempts.
  • Brute Force Detection is a modular shell script for parsing application logs and checking for authentication failures. It does this using a rules system where application-specific options are stored, including regular expressions for each unique auth format.
  • IPQ BDB filter may be considered a fail2ban lite.

#17: Rate-limit Incoming Port # 22 Connections

Both netfilter and pf provide rate-limiting options to perform simple throttling of incoming connections on port 22.

Iptables Example

The following example will drop incoming connections from hosts that make more than 5 connection attempts on port 22 within 60 seconds:

#!/bin/bash
IPT=/sbin/iptables
inet_if=eth1
ssh_port=22
$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --set
$IPT -I INPUT -p tcp --dport ${ssh_port} -i ${inet_if} -m state --state NEW -m recent --update --seconds 60 --hitcount 5 -j DROP
 
Call the above script from your iptables scripts. Another config option:
$IPT -A INPUT  -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state NEW -m limit --limit 3/min --limit-burst 3 -j ACCEPT
$IPT -A INPUT  -i ${inet_if} -p tcp --dport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT
$IPT -A OUTPUT -o ${inet_if} -p tcp --sport ${ssh_port} -m state --state ESTABLISHED -j ACCEPT
# another one line example
# $IPT -A INPUT -i ${inet_if} -m state --state NEW,ESTABLISHED,RELATED -p tcp --dport 22 -m limit --limit 5/minute --limit-burst 5 -j ACCEPT

See iptables man page for more details.

*BSD PF Example
The following limits the maximum number of connections per source to 20 and rate-limits connections to 15 in a 5 second span.

If anyone breaks our rules, they are added to our abusive_ips table and blocked from making any further connections.

Finally, the flush keyword kills all states created by the matching rule which originate from the host that exceeds these limits.

sshd_server_ip="202.54.1.5"
table <abusive_ips> persist
block in quick from <abusive_ips>
pass in on $ext_if proto tcp to $sshd_server_ip port ssh flags S/SA keep state (max-src-conn 20, max-src-conn-rate 15/5, overload <abusive_ips> flush)

#18: Use Port Knocking
Port knocking is a method of externally opening ports on a firewall by generating a connection attempt on a set of prespecified closed ports.

Once a correct sequence of connection attempts is received, the firewall rules are dynamically modified to allow the host which sent the connection attempts to connect over specific port(s).

A sample port knocking setup for ssh using iptables:

$IPT -N stage1
$IPT -A stage1 -m recent --remove --name knock
$IPT -A stage1 -p tcp --dport 3456 -m recent --set --name knock2
 
$IPT -N stage2
$IPT -A stage2 -m recent --remove --name knock2
$IPT -A stage2 -p tcp --dport 2345 -m recent --set --name heaven
 
$IPT -N door
$IPT -A door -m recent --rcheck --seconds 5 --name knock2 -j stage2
$IPT -A door -m recent --rcheck --seconds 5 --name knock -j stage1
$IPT -A door -p tcp --dport 1234 -m recent --set --name knock
 
$IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$IPT -A INPUT -p tcp --dport 22 -m recent --rcheck --seconds 5 --name heaven -j ACCEPT
$IPT -A INPUT -p tcp --syn -j door
  • fwknop is an implementation that combines port knocking and passive OS fingerprinting.
  • Multiple-port knocking Netfilter/IPtables only implementation.

#19: Use Log Analyzer

Read your logs using logwatch or logcheck. These tools make your log reading life easier: they go through your logs for a given period of time and make a report on the areas you wish, with the detail you wish.

Make sure LogLevel is set to INFO or DEBUG in sshd_config:
LogLevel INFO
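Once logging is enabled, even a one-liner surfaces brute force activity. A sketch run against a fabricated auth-log sample; the real file is /var/log/auth.log or /var/log/secure depending on your distro:

```shell
# Count failed SSH password attempts per source IP (worst offender first)
cat > /tmp/auth.sample <<'EOF'
Dec 29 10:00:01 host sshd[123]: Failed password for root from 203.0.113.7 port 4242 ssh2
Dec 29 10:00:02 host sshd[124]: Failed password for invalid user admin from 203.0.113.7 port 4243 ssh2
Dec 29 10:00:07 host sshd[125]: Failed password for root from 198.51.100.9 port 5151 ssh2
EOF
grep 'Failed password' /tmp/auth.sample \
  | grep -oE 'from [0-9.]+' | awk '{print $2}' \
  | sort | uniq -c | sort -rn
```

IPs that dominate this list are good candidates for the DenyHosts/fail2ban blocklists described in #16.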
#20: Patch OpenSSH and Operating Systems
It is recommended that you use tools such as yum, apt-get, freebsd-update and others to keep systems up to date with the latest security patches.

Other Options
To hide the OpenSSH version, you need to update the source code and recompile OpenSSH. Also make sure the following options are set in sshd_config:
#  Turn on privilege separation
UsePrivilegeSeparation yes
# Prevent the use of insecure home directory and key file permissions
StrictModes yes
# Turn on  reverse name checking
VerifyReverseMapping yes
# Do you need port forwarding?
AllowTcpForwarding no
X11Forwarding no
#  Specifies whether password authentication is allowed.  The default is yes.
PasswordAuthentication no

Verify your sshd_config file before restarting / reloading changes:


# /usr/sbin/sshd -t

Consider tighter SSH security with two-factor or three-factor (or more) authentication.

References:
  1. The official OpenSSH project.
  2. Forum thread: Failed SSH login attempts and how to avoid brute ssh attacks
  3. man pages sshd_config, ssh_config, tcpd, yum, and apt-get.
If you have a technique or handy software not mentioned here, please share in the comments below to help your fellow readers keep their OpenSSH-based servers secure.

20 Linux Server Hardening Security Tips
