Wednesday, December 31, 2014

How to check CPU info on Linux

Question: I would like to know detailed information about the CPU processor of my computer. What are the available methods to check CPU information on Linux?
Depending on your need, there are various pieces of information you may want to know about your computer's CPU(s), such as the CPU vendor name, model name, clock speed, number of sockets/cores, L1/L2/L3 cache configuration, available processor capabilities (e.g., hardware virtualization, AES, MMX, SSE), and so on. In Linux, there are many command-line and GUI-based tools that show detailed information about your CPU hardware.

1. /proc/cpuinfo

The simplest method is to check /proc/cpuinfo. This virtual file shows the configuration of the available CPU hardware.
$ more /proc/cpuinfo

By inspecting this file, you can identify the number of physical processors, the number of cores per CPU, available CPU flags, and a number of other things.
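For example, a few grep/awk one-liners (assuming the usual x86-style /proc/cpuinfo layout) extract the most commonly needed fields:

```shell
# Count logical processors: each one gets its own "processor" stanza
cpu_count=$(grep -c '^processor' /proc/cpuinfo)

# The CPU model name is repeated in every stanza, so take the first match
model=$(awk -F': ' '/^model name/{print $2; exit}' /proc/cpuinfo)

# Hardware virtualization shows up as the vmx (Intel) or svm (AMD) flag
if grep -qE '\b(vmx|svm)\b' /proc/cpuinfo; then virt=yes; else virt=no; fi

echo "CPUs: $cpu_count, model: $model, virtualization: $virt"
```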

2. cpufreq-info

The cpufreq-info command (which is part of cpufrequtils package) collects and reports CPU frequency information from the kernel/hardware. The command shows the hardware frequency that the CPU currently runs at, as well as the minimum/maximum CPU frequency allowed, CPUfreq policy/statistics, and so on. To check up on CPU #0:
$ cpufreq-info -c 0

3. cpuid

The cpuid command-line utility is a dedicated CPU information tool that displays verbose information about CPU hardware by using CPUID functions. Reported information includes processor type/family, CPU extensions, cache/TLB configuration, power management features, etc.
$ cpuid

4. dmidecode

The dmidecode command collects detailed information about system hardware directly from DMI data of the BIOS. Reported CPU information includes CPU vendor, version, CPU flags, maximum/current clock speed, (enabled) core count, L1/L2/L3 cache configuration, and so on.
$ sudo dmidecode

5. hardinfo

hardinfo is a GUI-based system information tool which gives you an easy-to-understand summary of your CPU hardware, as well as of other hardware components in your system.
$ hardinfo

6. inxi

inxi is a bash script written to gather system information in a human-friendly format. It shows a quick summary of CPU information including CPU model, cache size, clock speed, and supported CPU capabilities.
$ inxi -C

7. likwid-topology

likwid (Like I Knew What I'm Doing) is a collection of command-line tools to measure, configure and display hardware-related properties. Among them is likwid-topology, which shows CPU hardware (thread/cache/NUMA) topology information. It can also identify processor families (e.g., Intel Core 2, AMD Shanghai).
$ likwid-topology

8. lscpu

The lscpu command summarizes /proc/cpuinfo content in a more user-friendly format, e.g., the number of (online/offline) CPUs, cores, sockets, and NUMA nodes.
$ lscpu
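For instance, the socket and per-socket core counts can be scripted off lscpu's "Key: value" output (field names as printed by the util-linux lscpu):

```shell
# lscpu prints one "Key: value" pair per line; strip the padding with awk
sockets=$(lscpu | awk -F': +' '/^Socket\(s\)/{print $2}')
cores=$(lscpu | awk -F': +' '/^Core\(s\) per socket/{print $2}')
echo "$sockets socket(s) x $cores core(s) per socket"
```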

9. lshw

The lshw command is a comprehensive hardware query tool. Unlike other tools, lshw requires root privilege because it queries DMI information from the system BIOS. It can report the total number of cores and enabled cores, but misses out on information such as L1/L2/L3 cache configuration. The GTK version lshw-gtk is also available.
$ sudo lshw -class processor

10. lstopo

The lstopo command (contained in hwloc package) visualizes the topology of the system which is composed of CPUs, cache, memory and I/O devices. This command is useful to identify the processor architecture and NUMA topology of the system.
$ lstopo

11. numactl

Originally developed to set the NUMA scheduling and memory placement policy of Linux processes, the numactl command can also show information about the NUMA topology of the CPU hardware from the command line.
$ numactl --hardware

How to boot into command line on Ubuntu or Debian

A Linux desktop comes with a display manager (e.g., GDM, KDM, LightDM), which lets the desktop machine automatically boot into a GUI-based login environment. However, what if you want to disable the GUI and boot straight into a text-mode console? For example, you may be troubleshooting desktop-related issues, or may want to run a heavy-duty application that does not need a desktop GUI.
Note that you can temporarily switch from the desktop GUI to a virtual console by pressing Ctrl+Alt+F1 through F6. However, in this case your desktop GUI will still be running in the background, so this is different from pure text-mode booting.
On Ubuntu or Debian desktop, you can enable text-mode booting by passing appropriate kernel parameters.

Boot into Command-line Temporarily

If you want to disable the desktop GUI and boot in text mode just once, you can use the GRUB menu interface.
First, power on your desktop. When you see the initial GRUB menu, press 'e'.

This will lead you to the next screen, where you can modify kernel boot parameters. Scroll down to the line that begins with "linux", which lists the kernel parameters. Remove "quiet" and "splash" from the list, and add "text" in their place.

The updated kernel parameter list looks like the following. Press Ctrl+x to continue booting. This will enable one-time console booting in verbose mode.

Boot into Command-line Permanently

If you want to boot into command-line permanently, you need to update GRUB configuration which defines kernel booting parameters.
Open a default GRUB config file with a text editor.
$ sudo vi /etc/default/grub
Look for a line that starts with GRUB_CMDLINE_LINUX_DEFAULT, and comment out that line by prepending # sign. This will disable the initial splash screen, and enable verbose mode (i.e., showing the detailed booting procedure).
Then change GRUB_CMDLINE_LINUX="" to GRUB_CMDLINE_LINUX="text".
Next, uncomment the line that says "#GRUB_TERMINAL=console".
The updated GRUB default configuration looks like the following.
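As a reference, the relevant lines of the updated /etc/default/grub would look roughly like the following (any other variables in the file stay untouched):

```
#GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX="text"
GRUB_TERMINAL=console
```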

Finally, invoke update-grub command to re-generate a GRUB2 config file under /boot, based on these changes.
$ sudo update-grub
At this point, your desktop should be switched from GUI booting to console booting. Verify this by rebooting.

2 Ways To Fix The UEFI Bootloader When Dual Booting Windows And Ubuntu

The main problem that users experience after following my tutorials for dual booting Ubuntu and Windows 8 is that their computer continues to boot directly into Windows 8 with no option for running Ubuntu.
Here are two ways to fix the EFI boot loader to get the Ubuntu portion to boot correctly.

1.  Make GRUB The Active Bootloader

There are a few things that may have gone wrong during the installation.
In theory, if you managed to install Ubuntu in the first place, you will have turned off fast boot.
Hopefully you followed this guide to create a bootable UEFI Ubuntu USB drive as this installs the correct UEFI boot loader.
If you have done both of these things as part of the installation, the bit that may have gone wrong is the part where you set GRUB2 as the boot manager.
To set GRUB2 as the default bootloader follow these steps:
  1. Login to Windows 8
  2. Go to the desktop
  3. Right click on the start button and choose administrator command prompt
  4. Type mountvol g: /s (This maps your EFI folder structure to the G drive). 
  5. Type cd g:\EFI
  6. Type dir. The directory listing will show a folder for Ubuntu.
  7. There should be options for grubx64.efi and shimx64.efi
  8. Run the following command to set grubx64.efi as the bootloader:

    bcdedit /set {bootmgr} path \EFI\ubuntu\grubx64.efi
  9. Reboot your computer
  10. You should now have a GRUB menu appear with options for Ubuntu and Windows.
  11. If your computer still boots straight to Windows repeat steps 1 through 7 again but this time type:

    bcdedit /set {bootmgr} path \EFI\ubuntu\shimx64.efi
  12. Reboot your computer
What you are doing here is logging into the Windows administration command prompt, mapping a drive to the EFI partition so that you can see where the Ubuntu bootloaders are installed and then either choosing grubx64.efi or shimx64.efi as the bootloader.
So what is the difference between grubx64.efi and shimx64.efi? You should choose grubx64.efi if secureboot is turned off. If secureboot is turned on you should choose shimx64.efi.
In the steps above I have suggested trying one and then the other. Another option is to install one of them and then turn secure boot on or off within your computer's UEFI firmware, depending on the bootloader you chose.

2.  Use rEFInd To Dual Boot Windows 8 And Ubuntu

The rEFInd boot loader works by listing all of your operating systems as icons. You will therefore be able to boot Windows, Ubuntu and operating systems from USB drives simply by clicking the appropriate icon.
Download the rEFInd binary zip file from the rEFInd website.
After you have downloaded the file extract the zip file.
Now follow these steps to install rEFInd.
  1. Go to the desktop
  2. Right click on the start button and choose administrator command prompt
  3. Type mountvol g: /s (This maps your EFI folder structure to the G drive)
  4. Navigate to the extracted rEFInd folder. For example:

    cd c:\users\gary\downloads\refind-bin-0.8.4\refind-bin-0.8.4

    When you type dir you should see a folder for refind
  5. Type the following to copy refind to the EFI partition:

    xcopy /E refind g:\EFI\refind\
  6. Type the following to navigate to the refind folder

    cd g:\EFI\refind

  7. Rename the sample configuration file:

    rename refind.conf-sample refind.conf
  8. Run the following command to set rEFInd as the bootloader

    bcdedit /set {bootmgr} path \EFI\refind\refind_x64.efi
  9. Reboot your computer
  10. You should now have a menu similar to the image above with options to boot Windows and Ubuntu

This process is fairly similar to choosing the GRUB bootloader.

Basically it involves downloading rEFInd, extracting the files, copying them to the EFI partition, renaming the configuration file, and then setting rEFInd as the boot loader.



Hopefully this guide has solved the issues that some of you have been having with dual booting Ubuntu and Windows 8.1. If you are still having issues feel free to get back in touch using the email link above.

Sunday, December 28, 2014

How to configure fail2ban to protect Apache HTTP server

An Apache HTTP server in production environments can come under attack in various ways. Attackers may attempt to gain access to unauthorized or forbidden directories by using brute-force attacks or executing malicious scripts. Some malicious bots may scan your websites for security vulnerabilities, or harvest email addresses and web forms to send spam to.

Apache HTTP server comes with comprehensive logging capabilities capturing various abnormal events indicative of such attacks. However, it is still non-trivial to systematically parse detailed Apache logs and react to potential attacks quickly (e.g., ban/unban offending IP addresses) as they are perpetrated in the wild. That is when fail2ban comes to the rescue, making a sysadmin's life easier.
fail2ban is an open-source intrusion prevention tool which detects various attacks based on system logs and automatically initiates prevention actions, e.g., banning IP addresses with iptables, blocking connections via /etc/hosts.deny, or sending email notifications. fail2ban comes with a set of predefined "jails" which use application-specific log filters to detect common attacks. You can also write custom jails to deter any specific attack on an arbitrary application.

In this tutorial, I am going to demonstrate how you can configure fail2ban to protect your Apache HTTP server. I assume that you have Apache HTTP server and fail2ban already installed. Refer to another tutorial for fail2ban installation.

What is a Fail2ban Jail

Let me go over fail2ban jails in more detail. A jail defines an application-specific policy under which fail2ban triggers an action to protect a given application. fail2ban comes with several jails pre-defined in /etc/fail2ban/jail.conf, for popular applications such as Apache, Dovecot, Lighttpd, MySQL, Postfix, SSH, etc. Each jail relies on application-specific log filters (found in /etc/fail2ban/filter.d) to detect common attacks. Let's check out one example jail: the SSH jail.
[ssh]
enabled   = true
port      = ssh
filter    = sshd
logpath   = /var/log/auth.log
maxretry  = 6
banaction = iptables-multiport
This SSH jail configuration is defined with several parameters:
  • [ssh]: the name of a jail with square brackets.
  • enabled: whether the jail is activated or not.
  • port: the port to protect (either a numeric port number or a well-known service name).
  • filter: a log parsing rule to detect attacks with.
  • logpath: a log file to examine.
  • maxretry: maximum number of failures before banning.
  • banaction: a banning action.
Any parameter defined in a jail configuration will override the corresponding fail2ban-wide default parameter. Conversely, any missing parameter will be assigned the default value defined in the [DEFAULT] section.
Predefined log filters are found in /etc/fail2ban/filter.d, and available actions are in /etc/fail2ban/action.d.

If you want to overwrite fail2ban defaults or define any custom jail, you can do so by creating /etc/fail2ban/jail.local file. In this tutorial, I am going to use /etc/fail2ban/jail.local.
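For example, a minimal jail.local that overrides one fail2ban-wide default and enables the predefined SSH jail could look like this (the bantime value is purely illustrative):

```
[DEFAULT]
# ban offending hosts for one hour instead of the stock default
bantime = 3600

[ssh]
enabled = true
```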

Enable Predefined Apache Jails

Default installation of fail2ban offers several predefined jails and filters for Apache HTTP server. I am going to enable those built-in Apache jails. Due to slight differences between Debian and Red Hat configurations, let me provide fail2ban jail configurations for them separately.

Enable Apache Jails on Debian or Ubuntu

To enable predefined Apache jails on a Debian-based system, create /etc/fail2ban/jail.local as follows.
$ sudo vi /etc/fail2ban/jail.local
# detect password authentication failures
[apache]
enabled  = true
port     = http,https
filter   = apache-auth
logpath  = /var/log/apache*/*error.log
maxretry = 6
# detect potential search for exploits and php vulnerabilities
[apache-noscript]
enabled  = true
port     = http,https
filter   = apache-noscript
logpath  = /var/log/apache*/*error.log
maxretry = 6
# detect Apache overflow attempts
[apache-overflows]
enabled  = true
port     = http,https
filter   = apache-overflows
logpath  = /var/log/apache*/*error.log
maxretry = 2
# detect failures to find a home directory on a server
[apache-nohome]
enabled  = true
port     = http,https
filter   = apache-nohome
logpath  = /var/log/apache*/*error.log
maxretry = 2
Since none of the jails above specifies an action, all of these jails will perform a default action when triggered. To find out the default action, look for "banaction" under [DEFAULT] section in /etc/fail2ban/jail.conf.
banaction = iptables-multiport
In this case, the default action is iptables-multiport (defined in /etc/fail2ban/action.d/iptables-multiport.conf). This action bans an IP address using iptables with multiport module.
After enabling jails, you must restart fail2ban to load the jails.
$ sudo service fail2ban restart

Enable Apache Jails on CentOS/RHEL or Fedora

To enable predefined Apache jails on a Red Hat based system, create /etc/fail2ban/jail.local as follows.
$ sudo vi /etc/fail2ban/jail.local
# detect password authentication failures
[apache]
enabled  = true
port     = http,https
filter   = apache-auth
logpath  = /var/log/httpd/*error_log
maxretry = 6
# detect spammer robots crawling email addresses
[apache-badbots]
enabled  = true
port     = http,https
filter   = apache-badbots
logpath  = /var/log/httpd/*access_log
bantime  = 172800
maxretry = 1
# detect potential search for exploits and php vulnerabilities
[apache-noscript]
enabled  = true
port     = http,https
filter   = apache-noscript
logpath  = /var/log/httpd/*error_log
maxretry = 6
# detect Apache overflow attempts
[apache-overflows]
enabled  = true
port     = http,https
filter   = apache-overflows
logpath  = /var/log/httpd/*error_log
maxretry = 2
# detect failures to find a home directory on a server
[apache-nohome]
enabled  = true
port     = http,https
filter   = apache-nohome
logpath  = /var/log/httpd/*error_log
maxretry = 2
# detect failures to execute non-existing scripts that
# are associated with several popular web services
# e.g. webmail, phpMyAdmin, WordPress
[apache-botsearch]
enabled  = true
port     = http,https
filter   = apache-botsearch
logpath  = /var/log/httpd/*error_log
maxretry = 2
Note that the default action for all these jails is iptables-multiport (defined as "banaction" under [DEFAULT] in /etc/fail2ban/jail.conf). This action bans an IP address using iptables with multiport module.
After enabling jails, you must restart fail2ban to load the jails in fail2ban.
On Fedora or CentOS/RHEL 7:
$ sudo systemctl restart fail2ban
On CentOS/RHEL 6:
$ sudo service fail2ban restart

Check and Manage Fail2ban Banning Status

Once jails are activated, you can monitor current banning status with fail2ban-client command-line tool.
To see a list of active jails:
$ sudo fail2ban-client status
To see the status of a particular jail (including banned IP list):
$ sudo fail2ban-client status [name-of-jail]

You can also manually ban or unban IP addresses.
To ban an IP address with a particular jail:
$ sudo fail2ban-client set [name-of-jail] banip [ip-address]
To unban an IP address blocked by a particular jail:
$ sudo fail2ban-client set [name-of-jail] unbanip [ip-address]


This tutorial explains how a fail2ban jail works and how to protect an Apache HTTP server using built-in Apache jails. Depending on your environment and the types of web services you need to protect, you may need to adapt the existing jails, or write custom jails and log filters. Check out fail2ban's official GitHub page for more up-to-date examples of jails and filters.
Are you using fail2ban in any production environment? Share your experience.

Wednesday, December 24, 2014

How to check SSH protocol version on Linux

Question: I am aware that there exist SSH protocol version 1 and 2 (SSH1 and SSH2). What is the difference between SSH1 and SSH2, and how can I check which SSH protocol version is supported on a Linux server?

Secure Shell (SSH) is a network protocol that enables remote login or remote command execution between two hosts over a cryptographically secure communication channel. SSH was designed to replace insecure clear-text protocols such as telnet, rsh or rlogin. SSH provides a number of desirable features such as authentication, encryption, data integrity, authorization, and forwarding/tunneling.

SSH1 vs. SSH2

The SSH protocol specification has a number of minor version differences, but there are two major versions of the protocol: SSH1 (SSH version 1.XX) and SSH2 (SSH version 2.00).

In fact, SSH1 and SSH2 are two entirely different protocols with no compatibility between them. SSH2 is a significantly improved version of SSH1 in many respects. First of all, while SSH1 is a monolithic design where several different functions (e.g., authentication, transport, connection) are packed into a single protocol, SSH2 is a layered architecture designed with extensibility and flexibility in mind. In terms of security, SSH2 comes with a number of stronger security features than SSH1, such as MAC-based integrity checks, flexible session re-keying, fully negotiable cryptographic algorithms, public-key certificates, etc.

SSH2 is standardized by the IETF, and as such its implementation is widely deployed and accepted in the industry. Due to SSH2's popularity and cryptographic superiority over SSH1, many products are dropping support for SSH1. As of this writing, OpenSSH still supports both SSH1 and SSH2, but on all modern Linux distributions the OpenSSH server comes with SSH1 disabled by default.

Check Supported SSH Protocol Version

Method One

If you want to check what SSH protocol version(s) are supported by a local OpenSSH server, you can refer to /etc/ssh/sshd_config file. Open /etc/ssh/sshd_config with a text editor, and look for "Protocol" field.

If it shows the following, it means that OpenSSH server supports SSH2 only.
Protocol 2

If it displays the following instead, OpenSSH server supports both SSH1 and SSH2.
Protocol 1,2
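This check is easy to script. The sketch below runs the extraction against a generated sample file so that it is self-contained; on a real server, point it at /etc/ssh/sshd_config instead:

```shell
# Print the value of the (uncommented) Protocol directive
get_proto() {
    awk '$1 == "Protocol" {print $2}' "$1"
}

sample=$(mktemp)
printf 'Port 22\nProtocol 2\n' > "$sample"
get_proto "$sample"   # prints: 2
```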

Method Two

If you cannot access /etc/ssh/sshd_config because the OpenSSH server is running on a remote server, you can test its SSH protocol support by using the SSH client program, ssh. More specifically, we force ssh to use a specific SSH protocol version, and see how the remote SSH server responds.

The following command will force ssh command to use SSH1:
$ ssh -1 user@remote_server
The following command will force ssh command to use SSH2:
$ ssh -2 user@remote_server
If the remote SSH server supports SSH2 only, the first command with the "-1" option will fail with an error message like this:
Protocol major versions differ: 1 vs. 2
If the SSH server supports both SSH1 and SSH2, both commands should work successfully.
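A related trick: every SSH server announces an identification banner of the form "SSH-protoversion-softwareversion" as soon as a client connects, so reading the first line of the connection reveals the protocol version. The netcat line below is commented out since it needs a reachable server; the parsing is shown on a sample banner:

```shell
# On a live host you could capture the banner with:
#   banner=$(nc -w 3 remote_server 22 </dev/null | head -n 1)
banner="SSH-2.0-OpenSSH_6.6.1p1"   # sample banner for illustration

# The second dash-separated field is the protocol version;
# a value of "1.99" means the server accepts both SSH1 and SSH2.
proto=$(printf '%s' "$banner" | cut -d- -f2)
echo "$proto"
```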

Method Three

Another method to check the supported SSH protocol version of a remote SSH server is to run an SSH scanning tool called scanssh. This command-line tool is useful when you want to check SSH protocol versions for a large number of IP addresses or an entire local network, e.g., to find SSH1-capable servers to upgrade.

Here is the basic syntax of scanssh for SSH version scanning.
$ sudo scanssh -s ssh -n [ports] [IP addresses or CIDR prefix]

The "-n" option specifies the SSH port number(s) to scan. You can specify multiple port numbers separated by commas. Without this option, scanssh scans port 22 by default.

Use the following command to discover SSH servers on the local network and detect their SSH protocol versions (substitute your own network's CIDR prefix):
$ sudo scanssh -s ssh 192.168.1.0/24

If scanssh reports "SSH-1.XX-XXXX" for a particular IP address, it implies that the minimum SSH protocol version supported by the corresponding SSH server is SSH1. If the remote server supports SSH2 only, scanssh will show "SSH-2.0-XXXX".

20 Linux Commands Interview Questions & Answers

Q:1 How to check current run level of a linux server ?
Ans: ‘who -r’ & ‘runlevel’ commands are used to check the current runlevel of a linux box.
Q:2 How to check the default gateway in linux ?
Ans: Using the commands "route -n" and "netstat -nr", we can check the default gateway. Apart from the default gateway info, these commands also display the current routing tables.
Q:3 How to rebuild initrd image file on Linux ?
Ans: In case of CentOS 5.X / RHEL 5.X, the mkinitrd command is used to create the initrd file. An example is shown below:
# mkinitrd -f -v /boot/initrd-$(uname -r).img $(uname -r)
If you want to create an initrd for a specific kernel version, replace 'uname -r' with the desired kernel version.
In case of CentOS 6.X / RHEL 6.X, the dracut command is used to create the initrd file. An example is shown below:
# dracut -f
The above command will create the initrd file for the currently running kernel. To rebuild the initrd file for a specific kernel, use the command below:
# dracut -f initramfs-2.x.xx-xx.el6.x86_64.img 2.x.xx-xx.el6.x86_64
Q:4 What is cpio command ?
Ans: cpio stands for "copy in and copy out". cpio copies files, and lists and extracts files to and from an archive (or a single file).
Q:5 What is patch command and where to use it ?
Ans: As the name suggests, the patch command is used to apply changes (or patches) to a text file. patch generally accepts output from diff and converts older versions of files into newer versions. For example, the Linux kernel source code consists of a large number of files with millions of lines, so whenever a contributor submits changes, he/she sends only the changes instead of the whole source code. The receiver then applies the changes to the original source code with the patch command.
Create a diff file for use with patch,
# diff -Naur old_file new_file > diff_file
Where old_file and new_file are either single files or directories containing files. The -r option enables recursion through a directory tree.
Once the diff file has been created, we can apply it to patch the old file into the new file:
# patch < diff_file
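The diff/patch round trip described above can be tried end-to-end in a scratch directory (this assumes the diffutils and patch packages are installed):

```shell
workdir=$(mktemp -d)
printf 'line one\nline two\n' > "$workdir/old_file"
printf 'line one\nline 2\n'  > "$workdir/new_file"

# diff exits with status 1 when the files differ, so tolerate that
diff -Naur "$workdir/old_file" "$workdir/new_file" > "$workdir/diff_file" || true

# Apply the patch: old_file is rewritten to match new_file
patch "$workdir/old_file" < "$workdir/diff_file"
```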
Q:6 What is use of aspell ?
Ans: As the name suggests, aspell is an interactive spelling checker for the Linux operating system. The aspell command is the successor to an earlier program named ispell, and can be used, for the most part, as a drop-in replacement. While the aspell program is mostly used by other programs that require spell-checking capability, it can also be used very effectively as a stand-alone tool from the command line.
Q:7 How to check the SPF record of domain from command line ?
Ans: We can check the SPF record of a domain using the dig command. An example is shown below:
linuxtechi@localhost:~$ dig -t TXT <domain_name>
Q:8 How to identify which package the specified file (/etc/fstab) is associated with in linux ?
Ans: # rpm -qf /etc/fstab
Above command will list the package which provides file “/etc/fstab”
Q:9 Which command is used to check the status of bond0 ?
Ans: cat /proc/net/bonding/bond0
Q:10 What is the use of /proc file system in linux ?
Ans: The /proc file system is a RAM based file system which maintains information about the current state of the running kernel including details on CPU, memory, partitioning, interrupts, I/O addresses, DMA channels, and running processes. This file system is represented by various files which do not actually store the information, they point to the information in the memory. The /proc file system is maintained automatically by the system.
Q:11 How to find files larger than 10MB in size in /usr directory ?
Ans: # find /usr -size +10M
Q:12 How to find files in the /home directory that were modified more than 120 days ago ?
Ans: # find /home -mtime +120
Q:13 How to find files in the /var directory that have not been accessed in the last 90 days ?
Ans: # find /var -atime -90
Q:14 Search for core files in the entire directory tree and delete them as found without prompting for confirmation
Ans: # find / -name core -exec rm {} \;
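The find predicates from Q11-Q14 can be tried safely against throwaway files; truncate creates sparse files, so the 15 MB file below costs no real disk space:

```shell
scratch=$(mktemp -d)
truncate -s 15M "$scratch/big.img"    # 15 MB sparse file
truncate -s 1M  "$scratch/small.img"  # 1 MB sparse file

# Only big.img exceeds the +10M (larger than 10 MiB) threshold
found=$(find "$scratch" -size +10M)
echo "$found"
```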
Q:15 What is the purpose of strings command ?
Ans: The strings command is used to extract and display the legible contents of a non-text file.
Q:16 What is the use tee filter ?
Ans: The tee filter is used to send an output to more than one destination. It can send one copy of the output to a file and another to the screen (or some other program) if used with pipe.
linuxtechi@localhost:~$ ll /etc | nl | tee /tmp/ll.out
In the above example, the output from ll is numbered and captured in /tmp/ll.out file. The output is also displayed on the screen.
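A self-contained variant of the same idea, easy to paste into any shell:

```shell
out=$(mktemp)
# tee passes stdin through to stdout while writing a copy to $out
result=$(echo "hello" | tee "$out")
echo "$result"
```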
Q:17 What would the command export PS1="$LOGNAME@`hostname`:\$PWD: " do ?
Ans: The export command provided will change the login prompt to display username, hostname, and the current working directory.
Q:18 What would the command ll | awk ‘{print $3,”owns”,$9}’ do ?
Ans: The ll command provided will display file names and their owners.
Q:19 What is the use of at command in linux ?
Ans: The at command is used to schedule a one-time execution of a program in the future. All submitted jobs are spooled in the /var/spool/at directory and executed by the atd daemon when the scheduled time arrives.
Q:20 What is the role of lspci command in linux ?
Ans: The lspci command displays information about PCI buses and the devices attached to your system. Specify -v, -vv, or -vvv for detailed output. With the -m option, the command produces more legible output.

Monday, December 22, 2014

How to rename multiple files on Linux

Question: I know I can rename a file using the mv command. But what if I want to change the names of many files? It would be tedious to invoke mv for every such file. Is there a more convenient way to rename multiple files at once?
In Linux, when you want to change a file name, the mv command gets the job done. However, mv cannot rename multiple files using a wildcard. There are ways to deal with multiple files by using sed, awk or find in combination with xargs. However, these are rather cumbersome and not user-friendly.

When it comes to renaming multiple files, the rename utility is probably the easiest and the most powerful command-line tool. The rename command is actually a Perl script, and comes pre-installed on all modern Linux distributions.

Here is the basic syntax of rename command.
rename [-v -n -f] <pcre> <files>

<pcre> is a Perl-compatible regular expression (PCRE) which specifies the file(s) to rename and how.

This regular expression is in the form 's/old-name/new-name/'.

The '-v' option shows the details of file name changes (e.g., XXX renamed as YYY).
The '-n' option tells rename to show how the files would be renamed without actually changing the names. This option is useful when you want to simulate filename change without touching files.
The '-f' option forces overwriting of existing files.

In the following, let's see several rename command examples.

Change File Extensions

Suppose you have many image files with .jpeg extension. You want to change their file names to *.jpg. The following command converts *.jpeg files to *.jpg.
$ rename 's/\.jpeg$/\.jpg/' *.jpeg
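If the Perl rename script happens to be unavailable, the same extension change can be approximated with a plain shell loop using suffix stripping (an alternative sketch, not the rename tool itself):

```shell
workdir=$(mktemp -d)
cd "$workdir"
touch a.jpeg b.jpeg

# ${f%.jpeg} strips the old suffix; then append the new one
for f in *.jpeg; do
    mv -- "$f" "${f%.jpeg}.jpg"
done
ls
```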

Convert Uppercase to Lowercase and Vice-Versa

In case you want to change text case in filenames, you can use the following commands.
To rename all files to lower-case:
# rename 'y/A-Z/a-z/' *
To rename all files to upper-case:
# rename 'y/a-z/A-Z/' *

Change File Name Patterns

Now let's consider more complex regular expressions which involve subpatterns. In PCRE, a subpattern captured within round brackets can be referenced by a number preceded by a dollar sign (e.g., $1, $2).
For example, the following command will rename 'img_NNNN.jpeg' to 'dan_NNNN.jpg'.
# rename -v 's/img_(\d{4})\.jpeg$/dan_$1\.jpg/' *.jpeg
img_5417.jpeg renamed as dan_5417.jpg
img_5418.jpeg renamed as dan_5418.jpg
img_5419.jpeg renamed as dan_5419.jpg
img_5420.jpeg renamed as dan_5420.jpg
img_5421.jpeg renamed as dan_5421.jpg
The next command will rename 'img_000NNNN.jpeg' to 'dan_NNNN.jpg'.
# rename -v 's/img_\d{3}(\d{4})\.jpeg$/dan_$1\.jpg/' *jpeg
img_0005417.jpeg renamed as dan_5417.jpg
img_0005418.jpeg renamed as dan_5418.jpg
img_0005419.jpeg renamed as dan_5419.jpg
img_0005420.jpeg renamed as dan_5420.jpg
img_0005421.jpeg renamed as dan_5421.jpg
In both cases above, the subpattern '\d{4}' captures four consecutive digits. The captured four digits are then referred to as $1, and used as part of new filenames.
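The same capture-group idea can be reproduced without rename by combining a shell loop with sed backreferences (again an alternative sketch rather than the rename command itself):

```shell
workdir=$(mktemp -d)
cd "$workdir"
touch img_5417.jpeg img_5418.jpeg

for f in img_*.jpeg; do
    # \([0-9]\{4\}\) captures four digits, recalled in the replacement as \1
    newname=$(printf '%s\n' "$f" | sed 's/^img_\([0-9]\{4\}\)\.jpeg$/dan_\1.jpg/')
    mv -- "$f" "$newname"
done
ls
```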

Sunday, December 21, 2014

Introduction to Server-Sent Events with PHP example

Server-sent events (SSE) is a web technology where a browser receives automatic updates from a server via HTTP protocol. SSE was known before as EventSource and first is introduced in 2006 by Opera. During 2009 W3C started to work on first draft. And here is a latest W3C proposed recommendation from December 2014. It is little know feature that was implemented by all major web browsers except Internet Explorer and may be this is the reason why it is not widely known and used. The idea behind Server-sent events is very simple – a web application subscribes to a stream of updates generated by a server and, whenever a new event occurs, a notification is sent to the client.
But to really understand power of Server-Sent Events, we need to understand the limitations of AJAX version. First was Polling – polling is a technique used by majority of AJAX applications. Idea is that the JavaScript via AJAX repeatedly polls a server for a new data in a given interval (5 seconds for example). If there is new data, server returns it. If there is no new data, server simply return nothing. The problem with this technique is that creates additional overhead. Each time connection needs to be open and then closed.
Next method that was introduces was Long polling (aka COMET). Difference between polling and long polling is that when request is made and there is no data – server simply hangs until new data comes. Then server returns data and closes connection. This was also know as hanging GET method. So instead to returns empty response server waits until data comes, then returns data and closes HTTP connection.
Next comes WebSocket which is bi-directional rich media protocol and can be used in a lot of cases. But WebSocket needs a different protocol, different server side code and it is a little bit complicated compared to SSE.
So what are Server-sent events good for? They fit cases where data flows one way, from server to client. Here are a couple of scenarios in which SSE is very useful: real-time stock price updates; live scores for sports events; server-monitoring web applications.
The benefit of using Server-sent events over AJAX polling or long polling is that the technology is supported directly by major web browsers, and since the underlying protocol is plain HTTP, it is very easy to implement on the server side as well. SSE does not generate polling overhead, and everything you need is handled by the web browser. Here is the current state of SSE support.

Protocol description

Data is sent in plain text, so SSE is not suitable for binary payloads, but it is perfect for text events. The first step is to set the correct response header, Content-Type, to text/event-stream.
header("Content-Type: text/event-stream");
The next step is to construct and send the data. Basically, the response contains the keyword data, followed by the data you want to send, terminated by two newlines.
data: Hello World!\n\n
If you want to send multiple lines, you can separate them with a single newline.
data: Hello World!\n
data: This is a second line!\n\n
Here is how to send JSON data:
data: {"msg": "Hello World!"}\n\n
Or to split JSON in multiple lines:
data: {\n
data: "msg": "Hello World!",\n
data: "line2": "This is a second line!"\n
data: }\n\n
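The framing rules shown so far (one data field per line, a blank line terminating the event) can be captured in a small helper. This Python sketch is for illustration only; the article's server code is PHP, and the name sse_format is hypothetical:

```python
def sse_format(data, event=None, event_id=None, retry=None):
    """Format one message per the SSE wire protocol: each field on its
    own 'field: value' line, multi-line data split across several
    'data:' lines, and a blank line terminating the event."""
    lines = []
    if retry is not None:
        lines.append("retry: %d" % retry)
    if event_id is not None:
        lines.append("id: %s" % event_id)
    if event is not None:
        lines.append("event: %s" % event)
    for chunk in str(data).split("\n"):
        lines.append("data: " + chunk)
    return "\n".join(lines) + "\n\n"
```

The optional event, event_id, and retry parameters correspond to the event, id, and retry keywords described below.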
Here is PHP code that does the above:
header("Content-Type: text/event-stream");
echo "data: Hello World!\n\n";
This is one event spread across multiple lines. Note that the newline character is a separator: if you need to send a literal newline character, either escape it or properly split the message into multiple lines:
header("Content-Type: text/event-stream");
echo "data: Hello World!\n";
echo "data: This is a second line!\n\n";
Here is how to send two events with some delay between them (flushing pushes each event to the browser immediately instead of buffering the whole response):
header("Content-Type: text/event-stream");
echo "data: First message\n\n";
ob_flush(); flush(); // push the first event out immediately
sleep(2);            // the delay between the two events
echo "data: Second message\n\n";
But what happens when the connection is lost or closed? The browser simply re-opens the connection after 3 seconds. To control the re-connection time, use the keyword retry with the first message; the number after retry is in milliseconds. Here is an example that tells the browser to reconnect after 2 seconds if the HTTP connection is lost.
header("Content-Type: text/event-stream");
echo "retry: 2000\n";
echo "data: Hello World!\n\n";
When the browser reconnects, how do you know which events were delivered and which were not, if events occurred in the meantime? You can associate a unique id with each event. On reconnect, the browser sends the HTTP header Last-Event-ID; based on that header you know the last event the browser received.
id: 1\n
data: Hello World!\n\n

id: 2\n
data: Second message\n\n
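The browser reassembles such a stream for you, but a minimal parser sketch makes the framing concrete (Python, for illustration only; the name parse_sse_stream is hypothetical):

```python
def parse_sse_stream(text):
    """Split a raw SSE body into (id, data) events. Events are separated
    by blank lines; multiple 'data:' lines within one event are joined
    with newlines, matching how browsers reassemble them."""
    events = []
    for block in text.split("\n\n"):
        event_id, data_lines = None, []
        for line in block.split("\n"):
            if line.startswith("id:"):
                event_id = line[len("id:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if data_lines:
            events.append((event_id, "\n".join(data_lines)))
    return events
```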

JavaScript API

Using EventSource in the browser is simple and easy. First you check that the browser supports the EventSource API; then you create an event source by passing the URL it should listen to.
if (!!window.EventSource) {
    var source = new EventSource("data.php");
} else {
    alert("Your browser does not support Server-sent events! Please upgrade it!");
}
The EventSource object has three listeners you can subscribe to. The most important is message; the others are open and error.
source.addEventListener("message", function(e) {
    console.log(e.data);
}, false);

source.addEventListener("open", function(e) {
    console.log("Connection was opened.");
}, false);

source.addEventListener("error", function(e) {
    console.log("Error - connection was lost.");
}, false);
The most important properties of the Event object passed to the listener functions are data and lastEventId.
One interesting feature is named events: you can give events different names on the server side, and on the client side fire different listeners based on those names.
event: priceUp\n
data: GOOG:540\n\n
Then, on the client side, you can subscribe to this event by passing its name to addEventListener:
source.addEventListener("priceUp", function(e) {
    console.log("Price UP - " + e.data);
}, false);

PHP Server Code

The only change in the PHP code (or other server-side code) is that you need an infinite loop to keep the connection open. Here is some code from the example:
header("Content-Type: text/event-stream");
header("Cache-Control: no-cache");
header("Connection: keep-alive");

// $lastId comes from the Last-Event-ID header the browser sends on reconnect
$lastId = isset($_SERVER["HTTP_LAST_EVENT_ID"]) ? $_SERVER["HTTP_LAST_EVENT_ID"] : null;
if (isset($lastId) && !empty($lastId) && is_numeric($lastId)) {
    $lastId = intval($lastId);
}

while (true) {
    $data = getData($lastId); // hypothetical helper: query a DB or any other source - consider $lastId to avoid sending the same data twice
    if ($data) {
        sendMessage($lastId, $data);
    }
    sleep(1); // do not hammer the data source
}

function sendMessage($id, $data) {
    echo "id: $id\n";
    echo "data: $data\n\n";
    ob_flush();
    flush(); // push the event to the browser immediately
}

Show me an example

After so many words and code snippets, I put it all together and created a simple stock ticker web application that updates the prices of a few selected stocks. The data source is not real; it is a simple multidimensional array with a very simple structure: ticker – price. Otherwise everything is real, and the code can be explored and studied.
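The demo's data source is a simple array of ticker and price pairs. A stand-in like it can be sketched in a few lines (Python for illustration; the names stocks and next_quote are hypothetical, not taken from the demo):

```python
import random

# Fake data source with the same shape as the demo's: ticker -> price.
stocks = {"GOOG": 540.0, "AAPL": 112.0, "MSFT": 47.0}

def next_quote(stocks):
    """Nudge a random ticker's price by up to 1% and return it in the
    'TICKER:PRICE' string format used in the priceUp example above."""
    ticker = random.choice(sorted(stocks))
    stocks[ticker] = round(stocks[ticker] * (1 + random.uniform(-0.01, 0.01)), 2)
    return "%s:%s" % (ticker, stocks[ticker])
```

Each call produces one event payload that the server loop could pass to its sendMessage routine.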

What is good audio editing software on Linux

Whether you are an amateur musician or just a student recording a professor, you need to edit and work with audio recordings. If for a long time such tasks were the exclusive domain of the Macintosh, that time is over: Linux now has what it takes to do the job. In short, here is a non-exhaustive list of good audio editing software, fit for different tasks and needs.

1. Audacity

Let's get started head-on with my personal favorite. Audacity works on Windows, Mac, and Linux. It is open source. It is easy to use. You get it: Audacity is almost perfect. This program lets you manipulate the audio waveform from a clean interface. In short, you can overlay tracks, cut and edit them easily, apply effects, perform advanced sound analysis, and finally export to a plethora of formats. The reason I like it so much is that it combines basic features with more complicated ones while maintaining an easy learning curve. However, it is not fully optimized for hardcore musicians or people with professional needs.

2. Jokosher

On a different level, Jokosher focuses more on the multi-track aspect for musical artists. Developed in Python, using GTK+ for the interface and GStreamer for the audio back-end, Jokosher really impressed me with its slick interface and its extensions. If its editing features are not the most advanced, its language is clear and aimed at musicians, and I really like the association between tracks and instruments, for example. In short, if you are starting out as a musician, it might be a good place to get some experience before moving on to more complex suites.

3. Ardour

And speaking of complex suites, Ardour is a complete solution for recording, editing, and mixing. Designed this time to appeal to professionals, Ardour's features in terms of sound routing and plugins go way beyond my comprehension. So if you are looking for a beast and are not afraid to tame it, Ardour is probably a good pick. Again, the interface contributes to its charm, as does its extensive documentation. I particularly appreciated the first-launch configuration tool.

4. Kwave

For all KDE lovers, KWave corresponds to your idea of design and features. There are plenty of shortcuts and interesting options, like memory management. Even if the few effects are nice, we are dealing more with a simple tool to cut and paste audio together. It unfortunately becomes hard not to compare it with Audacity, and on top of that, the interface did not appeal to me that much.

5. Qtractor

If Kwave is too simplistic for you but a Qt-based program really has some appeal, then Qtractor might be your option. It aims to be "simple enough for the average home user, and yet powerful enough for the professional user." Indeed, the quantity of features and options is almost overwhelming, my favorite being, of course, the customizable shortcuts. Apart from that, Qtractor is probably one of my favorite tools for dealing with MIDI files.


6. LMMS

Standing for Linux MultiMedia Studio, LMMS is directly targeted at music production. If you do not have prior experience and do not want to spend too much time getting some, go elsewhere. LMMS is one of those complex but powerful programs that only a few will truly master. The list of features and effects is simply too long to enumerate, but if I had to pick one, I would say that the Freeboy plugin, which emulates the Game Boy sound system, is just magical. Past that, go see their amazing documentation.

7. Traverso

Finally, Traverso stood out to me for its unlimited track count and its direct integration with CD burning capabilities. Aside from that, it struck me as a middle ground between simplistic software and a professional program. The interface is very KDE-like, and the keyboard configuration is always welcome. And as the cherry on the cake, Traverso monitors your resources and makes sure your CPU or hard drive does not go overboard.
To conclude, it is always a pleasure to see such a large diversity of applications on Linux. It makes it always possible to find the software that best fits your needs. While my personal favorite remains Audacity, I was very surprised by the design of programs like LMMS and Jokosher.
Did we miss one? What do you use for audio editing on Linux? And why? Let us know in the comments.

Creating your first Linux App with Python and Flask

Whether playing on Linux or working on Linux, there is a good chance you have come across a program written in Python. Back in college, I wish they had taught us Python instead of Java; it's fun to learn and useful for building practical applications like the yum package manager.
In this tutorial I will take you through how I built a simple application that displays useful information, like memory usage per process and CPU percentage, using Python and a micro framework called Flask.
You don't have to be an advanced Python programmer to follow this tutorial, but before you go further you should be familiar with:
  • Python basics: lists, classes, functions, modules
  • HTML/CSS (basic)
Installing Python 3 on Linux
On most Linux distributions python is installed by default. This is how you can find out the python version on your system.
[root@linux-vps ~]# python -V
Python 2.7.5
We will be using Python version 3.x to build our app, since all new improvements are now only available in this branch, which is not backward compatible with Python 2.
Caution: Before you proceed, I strongly recommend you try this tutorial out on a virtual machine; since Python is a core component of many Linux distributions, any accident may cause permanent damage to your system.
This step is for Red Hat-based variants like CentOS (6 & 7). Debian-based variants like Ubuntu, Mint, and Raspbian can skip it, as you should have Python version 3 installed by default; if not, use apt-get instead of yum to install the relevant packages below.
[leo@linux-vps] yum groupinstall 'Development Tools'
[leo@linux-vps] yum install -y zlib-dev openssl-devel sqlite-devel bzip2-devel
[leo@linux-vps] wget
[leo@linux-vps] tar -xvzf Python-3.4.2.tgz
[leo@linux-vps] cd Python-3.4.2
[leo@linux-vps] ./configure
[leo@linux-vps] make
# "make altinstall" is recommended, as "make install" can overwrite the current python binary
[leo@linux-vps]   make altinstall
After a successful installation, you should be able to access the Python 3.4 shell with the command below.
[leo@linux-vps]# python3.4
Python 3.4.2 (default, Dec 12 2014, 08:01:15)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-16)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
Installing packages in python with PIP
Python comes with it’s own package manager, similar to yum and apt-get. You will need to use it to download, install and uninstall packages.
[leo@linux-vps] pip3.4 install "packagename"

[leo@linux-vps] pip3.4 list

[leo@linux-vps] pip3.4 uninstall "packagename"
Python Virtual Environment
In Python, a virtual environment is a directory where your project's dependencies are installed. It is a good way to segregate projects with different dependencies, and it also allows you to install packages without needing sudo access.
[leo@linux-vps] mkdir python3.4-flask
[leo@linux-vps] cd python3.4-flask 
 [leo@linux-vps python3.4-flask] pyvenv-3.4 venv
To create the virtual environment, you use the "pyvenv-3.4" command. This creates a directory called "lib" inside the venv folder, where the dependencies for this project will be installed, as well as a "bin" folder containing pip and python executables for this virtual environment.
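For reference, pyvenv-3.4 is a thin wrapper around Python's standard-library venv module, so the same layout can be created programmatically. A minimal sketch, using a throwaway temporary directory rather than a real project path:

```python
import os
import tempfile
import venv

# Create a virtual environment the same way pyvenv-3.4 does.
# The target path here is a temporary directory, purely for illustration.
target = os.path.join(tempfile.mkdtemp(), "venv")
venv.create(target)  # creates bin/, lib/ and a pyvenv.cfg marker file
print(sorted(os.listdir(target)))
```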
Activating the Virtual Environment for our Linux system information project
 [leo@linux-vps python3.4-flask] source venv/bin/activate
 [leo@linux-vps python3.4-flask] which pip3.4
[leo@linux-vps python3.4-flask] which python3.4
Installing flask with PIP
Let's go ahead and install our first module, the Flask framework, which will take care of the routing and template rendering for our app.
 [leo@linux-vps python3.4-flask]pip3.4 install flask
Creating your first app in flask.
Step 1: Create the directories where your app will reside.
[leo@linux-vps python3.4-flask] mkdir  app
 [leo@linux-vps python3.4-flask]mkdir app/static
 [leo@linux-vps python3.4-flask]mkdir app/templates
Inside the python3.4-flask folder, create a folder called app, which will contain two sub-folders, "static" and "templates". Our Python script will reside inside the app folder, files like CSS/JS will go inside the static folder, and the templates folder will contain our HTML templates.
Step 2: Create an initialization file inside the app folder.
[leo@linux-vps python3.4-flask] vim app/__init__.py
from flask import Flask

app = Flask(__name__)
from app import index
This file will create a new instance of Flask and load our Python program, stored in a file called index.py, which we will create next.
[leo@linux-vps python3.4-flask]vim app/index.py
from app import app

@app.route("/")
def index():
    import subprocess
    cmd = subprocess.Popen(['ps_mem'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, error = cmd.communicate()
    memory = out.splitlines()
Routing in Flask is handled by the route decorator, which is used to bind a URL to a function.
To run a shell command in Python, you can use the Popen class from the subprocess module.
This class takes a list as its argument: the first item of the list is the executable, and the following items are its options. Here is another example:
subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout and stderr will store the output and error of this command, respectively. You can then access them via the communicate() method of the Popen class.
out,error = cmd.communicate()
To display the output in a better way in the HTML template, I have used the splitlines() method:
memory = out.splitlines()
More information on python subprocess module is available in the docs at the end of this tutorial.
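To tie the pieces together, here is a self-contained run of the same Popen pattern; since ps_mem may not be installed on your system, the ubiquitous echo command stands in for it here:

```python
import subprocess

# Same pattern as index(): run a command, collect its output, split into lines.
# 'echo' stands in for ps_mem, which may not be installed everywhere.
cmd = subprocess.Popen(['echo', 'hello world'],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, error = cmd.communicate()  # blocks until the command exits
memory = out.splitlines()       # list of bytes objects, one per line
print(memory)
```

Note that the items in memory are bytes, which is why the template later calls decode('utf-8') on each line.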
Step 3: Create an html template where we can display the output of our command.
In order to do this we need to use the Jinja2 template engine in flask which will do the template rendering for us.
Your final index.py file should look as follows:
from flask import render_template
from app import app

@app.route("/")
def index():
    import subprocess
    cmd = subprocess.Popen(['ps_mem'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, error = cmd.communicate()
    memory = out.splitlines()
    return render_template('index.html', memory=memory)
Now create an index.html template inside the templates folder; Flask will search for templates in this folder.
[leo@linux-vps python3.4-flask]vim app/templates/index.html

<html>
<body>
<h2>Memory usage per process</h2>
{% for line in memory %}
    {{ line.decode('utf-8') }}<br>
{% endfor %}
</body>
</html>
The Jinja2 template engine allows you to use the "{{ ... }}" delimiters to print results and "{% ... %}" for loops and value assignment. I used the "decode()" method to turn the bytes output into text.
Step 4: Running the app.
[leo@linux-vps python3.4-flask]vim run.py
from app import app
app.debug = True
app.run('0.0.0.0', port=80)
The above code runs the app in debug mode. If you leave out the IP address and port, it defaults to localhost:5000.
[leo@linux-vps python3.4-flask] chmod +x run.py
[leo@linux-vps python3.4-flask] python3.4 run.py
I have added more code to the app so that it gives you CPU, I/O, and load average information as well.
You can access the code to this app here.
This was a brief introduction to Flask, and I recommend reading the tutorials and docs below for in-depth information.

Advance your OpenStack with new guides and howtos

The cloud is the future, and now is the time to start learning more about how you can use OpenStack to solve your organization's IT infrastructure needs. Fortunately, we're here to help with that.
Every month, we compile the very best of recently published how-tos, guides, tutorials, and tips into this handy collection. And don't forget, for much more information beyond these snippets, the official documentation for OpenStack is always a great place to look.
  • First up this month is a look at importing and converting disk images for OpenStack Glance directly in Ceph. Writes Sébastian Han, "Ceph, to work in optimal circumstances, requires the usage of RAW images. However, it is painful to upload RAW images in Glance because it takes a while. Let see how we can make our life easier."
  • Next, let's take a dive into dealing with attached storage volumes. What happens when a launched instance gets deleted but due to some, erm, strangeness in your system, the disk image attached to it remains? How do you go about getting rid of that disk? Learn what to do in this guide to deleting root volumes attached to non-existent images.
  • Are you an OpenStack contributor? This tip is for you. Have you ever wondered how to compare two different patchsets in Gerrit? Sylvain Bauza shares a little bit about how his review process works.
  • Most datacenters are a mixed environment of old and new, with different applications at different places in their lifecycle, and different support tools underlying them. It's not uncommon to need to support a variety of different hypervisors. Fortunately, OpenStack can handle that. Learn how to deploy OpenStack in a multi-hypervisor environment.
  • Your application layer may be stuck on a closed source operating system, but that doesn’t mean you can’t run it in your open source datacenter with OpenStack. If you need to build a Windows image to use with OpenStack, here's how. Bonus tip: want to do the same thing with FreeBSD? There's a solution to that too. Of course many Linux distributions, including the recently-released Fedora 21, are OpenStack-ready on day one.
  • Okay, one more for those OpenStack core developers. Setting up unit test suites with Python virtual environments is a common practice, but it can be a time consuming one. Daniel P. Berrangé writes up a way to do faster rebuilds for Python virtualenv trees.
  • Our final tutorial this month isn't just a single guide, but rather a collection of helpful posts that detail the process of deploying OpenStack through a variety of different methods. Edmund Haselwanter takes you through deploying OpenStack with Chef-server, Chef-zero, RPC, Fuel, and my favorite, with RDO Packstack.
That's it for this month. Check out our complete OpenStack tutorials collection for more great guides and hints. And if we missed your favorite new guide or resource, let us know in the comments!

How Linux containers can solve a problem for defense virtualization


As the virtualization of U.S. defense agencies commences, the technology’s many attributes—and drawbacks—are becoming apparent.

Virtualization has enabled users to pack more computing power in a smaller space than ever before. It has also created an abstraction layer between the operating system and hardware, which gives users choice, flexibility, vendor competition and best value for their requirements. But there is a price to be paid in the form of expensive and cumbersome equipment, software licensing and acquisition fees, and long install times and patch cycles.

These challenges have led many administrators to turn to application container technology for answers to their virtualization needs. For this article, we’ll focus specifically on Linux containers, which are made of two core components: the container technology itself and application packaging technology. They enable multiple isolated Linux systems to run on a single control host. Most importantly, they enable the warfighter to have more capabilities in a fraction of the space required by traditional virtualization.

Getting past tradition

In traditional virtualization, each application runs on its own guest operating system. These operating systems need to be individually purchased, installed, and maintained throughout their lifecycles. That can be time-consuming and costly.

With Linux containers, only one Linux operating system needs to be purchased, installed, and maintained. Instead of separating every application by installing them on their own guest operating systems, Linux containers are separated using control groups (cgroups) for resource management; namespaces for process isolation; and NSA-developed Security-Enhanced Linux (SELinux) for security, which enables secure multi-tenancy and reduces the potential for security exploits.

Linux figure 1
Photo from Dave Egts
The SELinux-based Linux container isolation provides an additional layer of defense for KVM-based virtualized and cloud environments that use SELinux-based sVirt isolation technology. Similar to a Russian nesting doll, many Linux containers are packed in a VM, many VMs in a hypervisor, and many hypervisors in a secure cloud. The result is a fast, efficient, and lightweight solution that is independent of the underlying physical hardware, making it ideal for the military embedded space.

Finally, by eliminating the overhead of a guest operating system for every application, Linux containers enable densities up to 10x greater than traditional virtualization. SWaP (size, weight, and power) is decreased significantly, and the need for traditional virtualization is potentially eliminated, as containers can run natively on bare metal with Linux.

The Docker factor

Containerized applications no longer need to use the same runtime stack as the underlying system, thanks to the open source Docker project. Docker enables an application to run on the same Linux kernel as the underlying container host while using a wholly different runtime stack:
Linux figure 2
Photo from Dave Egts
Docker also:
  • Provides the ability to package mission applications and their user space runtime dependencies in a standard format. This enables “golden image” warfighter applications to be shared and deployed on Linux hosts from various vendors who also support Docker.
  • Works with Linux container hosts running on physical, virtual or cloud systems. Integrators can develop using agile methods in their cloud and field containerized applications on tactical bare metal appliances without the need of virtualized infrastructure.
  • Lets administrators layer containerized images and put them in an app store-like registry. For instance, the U.S. Army could develop a pre-STIGed Linux container and publish it in an Army app registry for all authorized government and integrator employees’ use. These images could be extended to contain certified layered products and services for Java application servers, Web servers, and more:
Linux figure 3
Photo from Dave Egts
Integrators could also develop applications based upon these containers and publish them back into the Army registry for use and remixing by the government and other integrators.

The Atomic option

Tactical environments require slimmer containerization footprints that are easier to maintain. Enter Project Atomic.
Linux figure 4
Photo from Dave Egts
Project Atomic provides an Atomic host, which is actually a slimmed-down enterprise Linux distribution whose sole job is to run Docker containers. Its name is derived from two plays on words: "atomic," for the small footprint discussed above, and "atomic operations," which must be performed entirely or not at all. Atomic hosts are compelling for tactical environments because they enable containerized apps to be uniformly "flashed," or quickly "reflashed" if a system needs rebuilding or updating. This is quite different from the traditional approach of patching deployed systems, where configurations can drift over time, making security measurement extremely difficult.

As the U.S. military continues its march toward virtualization, it will need to operate in an environment that runs on more agile solutions. Linux containers fit that bill nicely, enabling Defense Department agencies to take full advantage of virtualization benefits.

Originally published on as How Linux containers can solve a problem for DOD virtualization. Reposted with permission.