Thursday, March 23, 2017

How To Find The Geolocation Of An IP Address From Commandline

Find The Geolocation Of An IP Address From Commandline
A while ago, we wrote an article describing how to find your own geolocation from the command line using the whereami utility. Today, we will see how to find the geolocation of any IP address. Of course, you can look these details up in a web browser, but it is a lot easier from the command line. geoiplookup is a command-line utility that reports the country an IP address or hostname originates from. It uses the GeoIP library and database to look up the details of an IP address.
This brief guide describes how to install and use geoiplookup utility to find the location of an IP address in Unix-like operating systems.

Find The Geolocation Of An IP Address Using Geoiplookup From Commandline

Install Geoiplookup

Geoiplookup is available in the default repositories of most Linux operating systems.
To install it on Arch Linux and its derivatives, run:
sudo pacman -S geoip
On Debian, Ubuntu, Linux Mint:
sudo apt-get install geoip-bin
On RHEL, CentOS, Fedora, Scientific Linux:
sudo yum install geoip
On SUSE/openSUSE:
sudo zypper install geoip


Once installed, you can find any IP address's geolocation by passing the address to geoiplookup.
The command finds and displays the country the IP address originates from, in the following format:
GeoIP Country Edition: NL, Netherlands
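For scripting, the two-letter country code can be pulled out of that line with standard text tools. A minimal sketch (the sample output line is hard-coded here, and the helper name is hypothetical):

```shell
# Extract the ISO country code from a geoiplookup output line
# such as "GeoIP Country Edition: NL, Netherlands".
country_code() {
    printf '%s\n' "$1" | sed 's/^.*: \([A-Z][A-Z]\),.*$/\1/'
}

country_code "GeoIP Country Edition: NL, Netherlands"   # prints NL
```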

Download and update Geoip databases

Generally, the default location of the GeoIP databases is /usr/share/GeoIP/. The bundled databases might be a bit outdated. You can download the latest databases, with updated geolocation details, from MaxMind, the company that provides the GeoIP data.
Go to geoip default database folder:
cd /usr/share/GeoIP/
Download the latest country database archive (GeoIP.dat.gz) from MaxMind, then extract it:
gunzip GeoIP.dat.gz
Now, run the geoiplookup command to find the most up-to-date geolocation details of an IP address.
Sample output:
GeoIP Country Edition: US, United States
As you see in the above output, it displays only the country. Geoiplookup can display more details, such as the state, city, zip code, latitude, and longitude. To do so, you need to download the city database from MaxMind as shown below. Make sure you download it to the /usr/share/GeoIP/ location.
gunzip GeoLiteCity.dat.gz
Now, run the below command to get more details of an IP address’s geolocation.
geoiplookup -f /usr/share/GeoIP/GeoLiteCity.dat
Sample output would be:
GeoIP City Edition, Rev 1: US, CA, California, Mountain View, 94043, 37.419201, -122.057404, 807, 650
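The fields in that City Edition line are comma-separated after the colon, so individual values are easy to pick out in a script. A minimal sketch (the sample line is hard-coded, and the field positions assume the Rev 1 format shown above):

```shell
# Split the City Edition record into fields; after the colon,
# field 4 is the city, fields 6 and 7 are latitude and longitude.
line='GeoIP City Edition, Rev 1: US, CA, California, Mountain View, 94043, 37.419201, -122.057404, 807, 650'
city=$(printf '%s\n' "$line" | cut -d: -f2 | cut -d, -f4 | sed 's/^ *//')
echo "$city"   # prints Mountain View
```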
If you have saved the database files in a custom location other than the default, use the '-d' parameter to specify the path. For example, if you saved the database files in /home/sk/geoip/, the command to find the geolocation of an IP address would be:
geoiplookup -d /home/sk/geoip/
For more details, see man pages.
man geoiplookup
Hope this helps. If you find this guide useful, please share it on your social networks and support us.

rtop – A Nifty Tool to Monitor Remote Server Over SSH

rtop is a simple, agentless remote server monitoring tool that works over SSH. It doesn't require any software to be installed on the remote machine; all it needs is the OpenSSH server package and the remote server's credentials.
rtop is written in Go and requires Go version 1.2 or higher. It can monitor any modern Linux distribution. rtop can connect to the remote system using ssh-agent, private keys, or password authentication; choose whichever you prefer.
It works by establishing an SSH session and running commands on the remote server to collect system metrics such as CPU, disk, memory, and network usage. It refreshes the information every few seconds, like the top utility.

How to Install rtop in Linux

Run the go get command to build it. The rtop binary is automatically saved under $GOPATH/bin; no run-time dependencies or configuration are needed.
$ go get
$ ls -lh /home/magi/go_proj/bin
total 5.9M
-rwxr-xr-x 1 magi magi 1.5M Mar  7 14:45 hello
-rwxr-xr-x 1 magi magi 4.4M Mar 21 13:33 rtop

How to Use rtop

The rtop binary is in $GOPATH/bin, so just run $GOBIN/rtop to get the usage information.
$ $GOBIN/rtop
rtop 1.0 - (c) 2015 RapidLoop - MIT Licensed -
rtop monitors server statistics over an ssh connection

Usage: rtop [-i private-key-file] [user@]host[:port] [interval]

    -i private-key-file
        PEM-encoded private key file to use (default: ~/.ssh/id_rsa if present)
    [user@]host[:port]
        the SSH server to connect to, with optional username and port
    interval
        refresh interval in seconds (default: 5)
To monitor a host, just pass the remote host information to the rtop command. The default refresh interval is 5 seconds.
$ $GOBIN/rtop   magi@
magi@'s password: 

2daygeek.vps up 21d 16h 59m 46s

    0.13 0.03 0.01

    0.00% user, 0.00% sys, 0.00% nice, 0.00% idle, 0.00% iowait, 0.00% hardirq, 0.00% softirq, 0.00% guest

    1 running of 29 total

    free    = 927.66 MiB
    used    =  55.77 MiB
    buffers = 0 bytes
    cached  =  40.57 MiB
    swap    = 128.00 MiB free of 128.00 MiB

           /:   9.40 GiB free of  10.20 GiB

Network Interfaces:
    lo -, ::1/128
      rx =  14.18 MiB, tx =  14.18 MiB

    venet0 -, 2607:5300:100:200::81a/56
      rx =  98.76 MiB, tx = 129.90 MiB
You can also set the refresh interval manually. Here I use a 10-second interval instead of the default 5 seconds.
$ $GOBIN/rtop magi@ 10
magi@'s password:

2daygeek.vps up 21d 17h 7m 1s

    0.00 0.00 0.00

    0.00% user, 0.00% sys, 0.00% nice, 0.00% idle, 0.00% iowait, 0.00% hardirq, 0.00% softirq, 0.00% guest

    1 running of 28 total

    free    = 926.83 MiB
    used    =  56.51 MiB
    buffers = 0 bytes
    cached  =  40.66 MiB
    swap    = 128.00 MiB free of 128.00 MiB

           /:   9.40 GiB free of  10.20 GiB

Network Interfaces:
    lo -, ::1/128
      rx =  14.18 MiB, tx =  14.18 MiB

    venet0 -, 2607:5300:100:200::81a/56
      rx =  98.94 MiB, tx = 130.33 MiB
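Since rtop prints plain text, its output is easy to post-process. As a sketch, the memory lines above can be piped through awk to compute the used fraction (the sample values are hard-coded here for illustration):

```shell
# Compute used memory as a percentage of used+free from rtop's
# memory section (both values are in MiB in this sample).
printf 'free    = 926.83 MiB\nused    =  56.51 MiB\n' |
awk '/free/ {f=$3} /used/ {u=$3} END {printf "%.1f%% used\n", 100*u/(u+f)}'
```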

Wednesday, March 8, 2017

Linux Disable USB Devices (Disable loading of USB Storage Driver)

In our research lab, we would like to disable all USB devices connected to our HP Red Hat Linux-based workstations. In particular, I would like to disable USB flash and hard drives, which users with physical access to a system can use to quickly copy sensitive data from it. How do I disable USB device support under CentOS Linux, RHEL version 5.x/6.x/7.x, and the latest version of Fedora?

The usb-storage driver automatically detects USB flash and hard drives. You can quickly disable USB storage devices under any Linux distribution. The modprobe program is used for automatic kernel module loading, and it can be configured not to load the usb-storage driver on demand. This will prevent modprobe from loading the usb-storage module, but will not prevent root (or another privileged program) from using insmod/modprobe to load the module manually. USB sticks containing malware may be used to steal your personal data; it is not uncommon for USB sticks to carry and transmit destructive malware and viruses to computers. Attackers can target MS-Windows, macOS (OS X), Android, and Linux-based systems alike.

usb-storage driver

usb-storage.ko is the USB Mass Storage driver for the Linux operating system. You can locate the file by typing the following command:
# ls -l /lib/modules/$(uname -r)/kernel/drivers/usb/storage/usb-storage.ko
All you have to do is disable or remove the usb-storage.ko driver to block USB mass storage devices on Linux, such as:
  1. USB pen drives
  2. USB hard disks
  3. USB card readers
  4. Other USB block storage devices
Note that this does not affect USB keyboards and mice, which are handled by the usbhid driver.

How to block USB storage devices using the fake install method

Type the following command under CentOS or RHEL 5.x or older:
# echo 'install usb-storage : ' >> /etc/modprobe.conf
Please note that you can use ':' (a shell builtin) or /bin/true.
Type the following command under CentOS or RHEL 6.x/7.x or newer (including the latest version of Fedora):
# echo 'install usb-storage /bin/true' >> /etc/modprobe.d/disable-usb-storage.conf
Save and close the file. Now the driver will not load. You can also remove USB Storage driver without rebooting the system, enter:
# modprobe -r usb-storage
# mv -v /lib/modules/$(uname -r)/kernel/drivers/usb/storage/usb-storage.ko /root/
#### verify it ###
# modinfo usb-storage
# lsmod | grep -i usb-storage
# lsscsi -H

Sample outputs:

Fig.01: How to disable USB mass storage devices on physical Linux system?
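If you want to script this, it is safer to stage the rule in a temporary file and inspect it before copying it into place. A minimal sketch (the final copy into /etc/modprobe.d/ requires root and is left out):

```shell
# Stage the fake-install rule in a temp file and verify it before
# installing it system-wide (cp to /etc/modprobe.d/ as root).
tmpconf="$(mktemp)"
echo 'install usb-storage /bin/true' > "$tmpconf"
grep -c '^install usb-storage' "$tmpconf"   # prints 1
```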

Blacklist usb-storage

Edit /etc/modprobe.d/blacklist.conf, enter:
# vi /etc/modprobe.d/blacklist.conf
Edit or append as follows:
blacklist usb-storage
Save and close the file.

BIOS option

You can also disable USB from the system BIOS configuration. Make sure the BIOS is password protected. This is the recommended option, since it also prevents anyone from booting the system from USB.

Encrypt hard disk

Linux supports various cryptographic techniques to protect a hard disk, directory, or partition. See "Linux Hard Disk Encryption With LUKS [ cryptsetup Command ]" for more info.

Grub option

You can disable all USB devices by disabling kernel support for USB via GRUB. Open grub.conf or menu.lst and append "nousb" to the kernel line as follows (taken from RHEL 5.x):
kernel /vmlinuz-2.6.18-128.1.1.el5 ro root=LABEL=/ console=tty0 console=ttyS1,19200n8 nousb
Make sure you remove any other reference to usb-storage in the grub or grub2 config files. Save and close the file. Once done just reboot the system:
# reboot
For grub2 use /etc/default/grub config file under Fedora / Debian / Ubuntu / RHEL / CentOS Linux. I strongly suggest that you read RHEL/CentOS grub2 config and Ubuntu/Debian grub2 config help pages.

A Linux user's guide to Logical Volume Management

Logical Volume Management (LVM)
Managing disk space has always been a significant task for sysadmins. Running out of disk space used to be the start of a long and complex series of tasks to increase the space available to a disk partition. It also required taking the system off-line. This usually involved installing a new hard drive, booting to recovery or single-user mode, creating a partition and a filesystem on the new hard drive, using temporary mount points to move the data from the too-small filesystem to the new, larger one, changing the content of the /etc/fstab file to reflect the correct device name for the new partition, and rebooting to remount the new filesystem on the correct mount point.
I have to tell you that, when LVM (Logical Volume Manager) first made its appearance in Fedora Linux, I resisted it rather strongly. My initial reaction was that I did not need this additional layer of abstraction between me and the hard drives. It turns out that I was wrong, and that logical volume management is very useful.
LVM allows for very flexible disk space management. It provides features like the ability to add disk space to a logical volume and its filesystem while that filesystem is mounted and active and it allows for the collection of multiple physical hard drives and partitions into a single volume group which can then be divided into logical volumes.
The volume manager also allows reducing the amount of disk space allocated to a logical volume, but there are a couple requirements. First, the volume must be unmounted. Second, the filesystem itself must be reduced in size before the volume on which it resides can be reduced.
It is important to note that the filesystem itself must allow resizing for this feature to work. The EXT2, 3, and 4 filesystems all allow both offline (unmounted) and online (mounted) resizing when increasing the size of a filesystem, and offline resizing when reducing the size. You should check the details of the filesystems you intend to use in order to verify whether they can be resized at all and especially whether they can be resized while online.

Expanding a filesystem on the fly

I always like to run new distributions in a VirtualBox virtual machine for a few days or weeks to ensure that I will not run into any devastating problems when I start installing it on my production machines. One morning a couple years ago I started installing a newly released version of Fedora in a virtual machine on my primary workstation. I thought that I had enough disk space allocated to the host filesystem in which the VM was being installed. I did not. About a third of the way through the installation I ran out of space on that filesystem. Fortunately, VirtualBox detected the out-of-space condition and paused the virtual machine, and even displayed an error message indicating the exact cause of the problem.
Note that this problem was not due to the fact that the virtual disk was too small, it was rather the logical volume on the host computer that was running out of space so that the virtual disk belonging to the virtual machine did not have enough space to expand on the host's logical volume.
Since most modern distributions use Logical Volume Management by default, and I had some free space available in the volume group, I was able to assign additional disk space to the appropriate logical volume and then expand the host's filesystem on the fly. This means that I did not have to reformat the entire hard drive and reinstall the operating system, or even reboot. I simply assigned some of the available space to the appropriate logical volume and resized the filesystem, all while the filesystem was online and the running program, the virtual machine, was still using the host filesystem. After resizing the logical volume and the filesystem, I resumed the virtual machine and the installation continued as if no problems had occurred.
Although this type of problem may never have happened to you, running out of disk space while a critical program is running has happened to many people. And while many programs, especially Windows programs, are not as well written and resilient as VirtualBox, Linux Logical Volume Management made it possible to recover without losing any data and without having to restart the time-consuming installation.

LVM Structure

The structure of a Logical Volume Manager disk environment is illustrated by Figure 1, below. Logical Volume Management enables the combining of multiple individual hard drives and/or disk partitions into a single volume group (VG). That volume group can then be subdivided into logical volumes (LV) or used as a single large volume. Regular file systems, such as EXT3 or EXT4, can then be created on a logical volume.
In Figure 1, two complete physical hard drives and one partition from a third hard drive have been combined into a single volume group. Two logical volumes have been created from the space in the volume group, and a filesystem, such as an EXT3 or EXT4 filesystem has been created on each of the two logical volumes.
Figure 1: LVM allows combining partitions and entire hard drives into Volume Groups.
Adding disk space to a host is fairly straightforward but, in my experience, is done relatively infrequently. The basic steps needed are listed below. You can either create an entirely new volume group or you can add the new space to an existing volume group and either expand an existing logical volume or create a new one.

Adding a new logical volume

There are times when it is necessary to add a new logical volume to a host. For example, after noticing that the directory containing virtual disks for my VirtualBox virtual machines was filling up the /home filesystem, I decided to create a new logical volume in which to store the virtual machine data, including the virtual disks. This would free up a great deal of space in my /home filesystem and also allow me to manage the disk space for the VMs independently.
The basic steps for adding a new logical volume are as follows.
  1. If necessary, install a new hard drive.
  2. Optional: Create a partition on the hard drive.
  3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.
  4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.
  5. Create a new logical volume (LV) from the space in the volume group.
  6. Create a filesystem on the new logical volume.
  7. Add appropriate entries to /etc/fstab for mounting the filesystem.
  8. Mount the filesystem.
Now for the details. The following sequence is taken from an example I used as a lab project when teaching about Linux filesystems.


This example shows how to use the CLI to extend an existing volume group to add more space to it, create a new logical volume in that space, and create a filesystem on the logical volume. This procedure can be performed on a running, mounted filesystem.
WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems cannot; check your filesystem's documentation before attempting an online resize.

Install hard drive

If there is not enough space in the volume group on the existing hard drive(s) in the system to add the desired amount of space it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive, and then perform the following steps.

Create Physical Volume from hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.
pvcreate /dev/hdd
It is not necessary to create a partition of any kind on the new hard drive. This creation of the Physical Volume which will be recognized by the Logical Volume Manager can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

Extend the existing Volume Group

In this example we will extend an existing volume group rather than creating a new one; you can choose to do it either way. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example the existing Volume Group is named MyVG01.
vgextend /dev/MyVG01 /dev/hdd

Create the Logical Volume

First create the Logical Volume (LV) from existing free space within the Volume Group. The command below creates a LV with a size of 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.
lvcreate -L 50G --name Stuff MyVG01

Create the filesystem

Creating the Logical Volume does not create the filesystem. That task must be performed separately. The command below creates an EXT4 filesystem that fits the newly created Logical Volume.
mkfs -t ext4 /dev/MyVG01/Stuff

Add a filesystem label

Adding a filesystem label makes it easy to identify the filesystem later in case of a crash or other disk related problems.
e2label /dev/MyVG01/Stuff Stuff

Mount the filesystem

At this point you can create a mount point, add an appropriate entry to the /etc/fstab file, and mount the filesystem.
You should also check to verify the volume has been created correctly. You can use the df, lvs, and vgs commands to do this.
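The whole procedure above can be collected into one short script. Because these commands need root and a spare disk, the sketch below only echoes each step; change run() to actually execute them. The device and volume names are the ones assumed in the example, and the mount point /Stuff is hypothetical:

```shell
# Dry-run sketch of the create-a-new-LV sequence. run() only prints
# each command; replace its body with "$@" to really execute them.
run() { echo "+ $*"; }

run pvcreate /dev/hdd                      # PV on the whole new disk
run vgextend /dev/MyVG01 /dev/hdd          # add the PV to the VG
run lvcreate -L 50G --name Stuff MyVG01    # 50 GB logical volume
run mkfs -t ext4 /dev/MyVG01/Stuff         # filesystem on the new LV
run e2label /dev/MyVG01/Stuff Stuff        # label it
run mount /dev/MyVG01/Stuff /Stuff         # mount point must exist
```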

Resizing a logical volume in an LVM filesystem

The need to resize a filesystem has been around since the beginning of the first versions of Unix and has not gone away with Linux. It has gotten easier, however, with Logical Volume Management.
  1. If necessary, install a new hard drive.
  2. Optional: Create a partition on the hard drive.
  3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.
  4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.
  5. Create one or more logical volumes (LV) from the space in the volume group, or expand an existing logical volume with some or all of the new space in the volume group.
  6. If you created a new logical volume, create a filesystem on it. If adding space to an existing logical volume, use the resize2fs command to enlarge the filesystem to fill the space in the logical volume.
  7. Add appropriate entries to /etc/fstab for mounting the filesystem.
  8. Mount the filesystem.


This example describes how to resize an existing Logical Volume in an LVM environment using the CLI. It adds about 50GB of space to the /Stuff filesystem. This procedure can be used on a mounted, live filesystem only with the Linux 2.6 Kernel (and higher) and EXT3 and EXT4 filesystems. I do not recommend that you do so on any critical system, but it can be done and I have done so many times; even on the root (/) filesystem. Use your judgment.
WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems cannot; check your filesystem's documentation before attempting an online resize.

Install the hard drive

If there is not enough space on the existing hard drive(s) in the system to add the desired amount of space it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive and then perform the following steps.

Create a Physical Volume from the hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.
pvcreate /dev/hdd
It is not necessary to create a partition of any kind on the new hard drive. This creation of the Physical Volume which will be recognized by the Logical Volume Manager can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

Add PV to existing Volume Group

For this example, we will use the new PV to extend an existing Volume Group. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example, the existing Volume Group is named MyVG01.
vgextend /dev/MyVG01 /dev/hdd

Extend the Logical Volume

Extend the Logical Volume (LV) from existing free space within the Volume Group. The command below expands the LV by 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.
lvextend -L +50G /dev/MyVG01/Stuff

Expand the filesystem

Extending the Logical Volume will also expand the filesystem if you use the -r option. If you do not use the -r option, that task must be performed separately. The command below resizes the filesystem to fit the newly resized Logical Volume.
resize2fs /dev/MyVG01/Stuff
You should check to verify the resizing has been performed correctly. You can use the df, lvs, and vgs commands to do this.
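The resize procedure condenses to just three commands once the new disk is installed. As above, this sketch only echoes the commands, since they need root; the -r option makes lvextend run resize2fs itself:

```shell
# Dry-run sketch of extending an LV and its filesystem in one step.
# run() only prints each command; swap its body for "$@" to execute.
run() { echo "+ $*"; }

run pvcreate /dev/hdd                        # PV on the new disk
run vgextend /dev/MyVG01 /dev/hdd            # grow the VG
run lvextend -r -L +50G /dev/MyVG01/Stuff    # grow LV and filesystem
```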


Over the years I have learned a few things that can make logical volume management even easier than it already is. Hopefully these tips can prove of some value to you.
  • Use the Extended file systems unless you have a clear reason to use another filesystem. Not all filesystems support resizing, but EXT2, 3, and 4 do. The EXT filesystems are also very fast and efficient. In any event, they can be tuned by a knowledgeable sysadmin to meet the needs of most environments if the default tuning parameters do not.
  • Use meaningful volume and volume group names.
  • Use EXT filesystem labels.
I know that, like me, many sysadmins have resisted the change to Logical Volume Management. I hope that this article will encourage you to at least try LVM. I am really glad that I did; my disk management tasks are much easier since I made the switch.

How to Create Virtual Machines in oVirt 4.0 Environment

To create virtual machines from the oVirt Engine Web Administration portal, we first have to make sure the following things are set up:
  • Data Center
  • Clusters
  • Hosts (oVirt Node or hypervisor)
  • Network (the default ovirtmgmt is created)
  • Storage Domains (ISO storage and Data storage)
In our previous article, we have already discussed the oVirt Engine and oVirt Node/hypervisor installation. Please refer to "Installation Steps of oVirt Engine and oVirt Node".
Follow the steps below to complete the above set of tasks. Log in to your oVirt Engine Web Administration Portal. In my case, the web portal URL is “”.

Step:1 Create new Data Center

Go to the Data Centers tab and then click New.
Specify the Data Center name, description, storage type, and compatibility version. In my case, the Data Center name is “test_dc”.

Step:2 Configure Cluster for Data Center

When we click OK in the above step, it will ask us to configure a cluster, so select the “Configure Cluster” option.
Specify the cluster name, description, and CPU architecture as per your setup, and leave the other parameters as they are. We can define optimization, migration, and fencing policies as per our requirements, but I am not touching these policies for now.
In my case, the cluster name is “testcluster”.
Click OK.
In the next step, click Configure Later.

Step:3 Add Host or oVirt Node to above created data center & cluster.

By default, when we add a host or oVirt Node to oVirt Engine, it is added to the default data center and cluster. To change the data center and cluster of a node, first put the host into maintenance mode.
Select the node, click the Maintenance option, then click OK.
Now select the Edit option and update the data center and cluster information for the selected host.
Click OK.
Now click the Activate option to activate the host.

Step:4 Creating Storage Domains

As the name suggests, a storage domain is a centralized repository of disks used for storing VM disk images, ISO files, VM metadata, and snapshots. Storage domains are classified into three types:
  • Data Storage Domain: used for storing the hard disk images of all the VMs.
  • Export Storage Domain: used to store backup copies of VMs; it also provides transitory storage for hard disk images and templates being transferred between data centers.
  • ISO Storage Domain: used for storing ISO files.
In this article, the data storage and ISO storage are shared via NFS, though data storage can also be configured via iSCSI, GlusterFS, or Fibre Channel. The following NFS shares are available for the Data Storage and ISO domains:
[root@ovirtnode ~]# showmount -e
Export list for
[root@ovirtnode ~]#
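For reference, the corresponding entries on the NFS server might look like the fragment below in /etc/exports. The paths are assumptions for illustration; note that oVirt requires the exported directories to be owned by vdsm:kvm (UID/GID 36):

```
/exports/data  *(rw)
/exports/iso   *(rw)
```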
To create the Data Storage Domain, click on the Storage tab and then click New Domain; select the domain function as “Data” and the storage type as NFS, and specify the NFS server's share IP and export path.
Now, again click New Domain from the Storage tab and select the domain function as “ISO” and the storage type as “NFS”.
As we can see, both storage domains are now activated. Once the storage domains are activated, our data center is automatically initialized and becomes active.

Step:5 Upload ISO files to ISO Storage Domain.

Transfer the ISO file to the ovirt-engine host and run engine-iso-uploader. In my case, I am uploading the Ubuntu 16.04 LTS ISO file.
[root@ovirtengine ~]# engine-iso-uploader -i ISO_Domain_test_dc upload ubuntu-16.04-desktop-amd64.iso
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
Uploading, please wait...
INFO: Start uploading ubuntu-16.04-desktop-amd64.iso
Uploading: [########################################] 100%
INFO: ubuntu-16.04-desktop-amd64.iso uploaded successfully
[root@ovirtengine ~]#
Now we are ready to create virtual machines.

Step:6 Create Virtual Machine

As we have uploaded the Ubuntu 16.04 ISO file, at this point we will create an Ubuntu virtual machine.
Click on New VM from the Virtual Machines tab. Specify the following parameters under the “General” tab:
  • Data Center “test_dc”
  • Operating System type as “Linux”
  • Optimized for “Desktop”
  • Name as “Ubuntu 16.04”
  • nic1 as “ovirtmgmt”
Specify the disk space for the virtual machine: click the Create option available in front of “Instance Images”, specify the disk size, leave the other parameters as they are, and click OK.
Click on “Show Advanced Options”, then go to the System tab and specify the memory and CPU for the virtual machine.
Go to the “Boot Options” tab, attach the Ubuntu 16.04 ISO file, change the boot sequence, and click OK.
Now select the VM and click the “Run Once” option from the Virtual Machines tab.
To get the console of the VM, right-click on the VM and select Console.
Click on Install Ubuntu, follow the on-screen instructions, and reboot the VM once the installation is completed.
Change the boot sequence of the VM so that it boots from disk.
Enter the credentials that you set during installation.
That's all for this article. Hope you understand how to create and deploy virtual machines in an oVirt environment.

10 tips for DIY IoT home automation

We live in an exciting time. Every day, more things become Internet-connected. They have sensors, can communicate with other things, and help us perform tasks like never before, especially at home.
Home automation is made possible for amateur developers and tinkerers because the price of microcontrollers with the ability to talk over a network continues to drop. It all started for me when I was stuck in the office, wishing I was at home playing with my kids. Since I couldn't be there physically, I built a squirt gun out of a microcontroller, a couple of servos, a solenoid valve, and a water hose for around $80 US. See how I did it.
I was on to something. Next I built what I call The Logical Living home automation system out of inexpensive microcontrollers, custom circuits, and other household components, and I published the code at Code Project. My house now has hundreds of IoT features that help it run efficiently, with more input from me, the homeowner.
Along the way, I've learned a few things that can help other beginner IoT makers.

6 design lessons for getting started

Lesson 1: Make each thing smart.
It is hard to move things around when all of your things are connected with wires to a central controller. If each "thing" is self-contained, then it's easy to move it around and easy to take it with you when you move.
Lesson 2: Update the program (firmware) Over The Air (OTA).
It is important to select a microcontroller or microprocessor that has the capability to flash code updates to your remote device. I built a 20-foot outdoor Christmas tree made of lights that I can program while sitting in the office, or anywhere with an Internet connection. This is especially nice when it is cold and raining outside. It is very inconvenient to plug my laptop into some of my other IoT projects to do code updates. There is a simple feature that I have been wanting to add for a long time to an IoT cat toy project built on a different platform, but the pain of connecting my laptop to the hard-to-access microcontroller has kept me from making the update.
Lesson 3: Use DHCP or an identity service.
And have one program for all of the devices for each type of microcontroller in your fleet.
Lesson 4: Use a publish / subscribe model.
Do so with a broker to loosely couple all of your things. A broker is software middleware between the "thing" and whatever is communicating with it. Many of my previous IoT implementations were done with "things" that were tightly coupled to a broker to dispatch messages to other "things". I have learned that a well-designed broker can connect publishers with subscribers in a loosely coupled way without opening up a port in the firewall. It is a smart idea to leverage the MQTT protocol and an open source broker like Mosquitto.
Lesson 5: Leverage existing cloud services.
Machine learning algorithms can be complex and you can develop new features much quicker by leveraging work from large teams of people with specialties in the area. I'm working on an IoT project to predict the health of my pets that I would not have the time to get the expertise to do without the help from existing cloud services.
Lesson 6: Make the code available to the community.
When I open sourced the code and made it available to the community, I put extra time and thought into making sure the code was clean, of high quality, and used best practices. I knew that many eyes would be looking at and reviewing the code which caused me to want to refactor it often. Open sourcing your project is a great way to get feedback from the community and improve.

4 tips for IoT in the home 

I've learned just as many lessons about people as I did about technology.
Lesson 1: With great power comes great responsibility.
I can control the TV, DVR, and music player with IR signals. So, to be funny, I'd randomly change the TV channel or music station when I was away from home, while my family was at home. It was my way of telling them I was thinking of them, but they didn't exactly see it that way! When I got home someone had disabled the control by removing wires from my circuit. Needless to say, I was proud they figured out which wires to remove to disable it. Smart!
Lesson 2: Be aware of pets.
We have a cat that likes to play in funny places, and she was particularly interested in my project to control the fireplace with voice recognition. A burned kitty would mean the end of my IoT projects, so I quickly wired up a mesh screen to keep the cat out.
Lesson 3: Beware of fire.
I built an IoT-controlled pumpkin for Halloween that shot a 4-foot flame out of its face when mentioned on Twitter, or alternatively when controlled with a watch or phone. This was a huge hit, but it became difficult to keep all the kids at a safe distance all night long. This year, I'm building a 12-foot monster that shoots the flame way above the kids' heads and is controlled by speech commands. See some of my other Halloween IoT projects.
Lesson 4: When it's in the home it needs to be nearly 100% reliable.
Family members are not forgiving of quality defects, and your home automation projects will not be used if they are not reliable.
Some of my microcontrollers would lock up after a couple of days because of Ethernet communication issues, and I knew I had a problem when my wife called me while I was traveling because the garden wasn't being watered. I spent days working out the issue and finally resolved it by having the code detect the issue and reboot the device to recover. The reboot is so fast that people don't usually notice the downtime.
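The detect-and-reboot fix boils down to a simple watchdog loop. Here is a hypothetical sketch of that pattern (the check_health and reboot hooks, and the failure threshold, are stand-ins for device-specific code, not my actual firmware):

```python
def watchdog(check_health, reboot, max_failures=3):
    """After max_failures consecutive failed health checks, call the
    reboot hook and start counting again instead of staying locked up."""
    failures = 0
    while True:
        if check_health():
            failures = 0
        else:
            failures += 1
            if failures >= max_failures:
                reboot()   # on a microcontroller this would reset the device
                failures = 0
        yield failures     # yield so the loop can be paced and tested

# Simulate three failed checks in a row followed by a recovery.
checks = iter([True, False, False, False, True])
events = []
wd = watchdog(lambda: next(checks), lambda: events.append("reboot"))
history = [next(wd) for _ in range(5)]
```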

5 Tips on Using OAuth 2.0 for Secure Authorization

OAuth 2.0 can be an effective authorization method. Here we offer tips on implementing and using an OAuth 2.0 authorization server using the OWIN framework.

By Aleksey Gavrilenko, Itransition
Approaches to security issues change constantly, along with evolving threats. One approach is to implement OAuth, an open authorization standard that provides secure access to server resources. OAuth is a broad topic with hundreds of articles covering dozens of its aspects. This particular article will help you create a secure authorization server using OAuth 2.0 in .NET to use for your mobile clients and web applications.

What is OAuth?

OAuth is an open authorization standard that allows delegating access to remote resources without sharing the owner's credentials. Instead of credentials, OAuth introduces tokens, generated by the authorization server and accepted by the resource owner.
In OAuth 1.0, each registered client was given a client secret and the token was provided in response to an authentication request signed by the client secret. That produced a secure implementation even in the case of communicating through an insecure channel, because the secret itself was only used to sign the request and was not passed across the network.
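The key property, that the secret signs the request but never crosses the network, can be illustrated with a simplified sketch (real OAuth 1.0 signatures use a precisely specified base string, percent-encoding and HMAC-SHA1; this Python example only shows the idea):

```python
import hashlib
import hmac

def sign_request(client_secret, method, url, params):
    # Simplified signature: an HMAC over a canonical form of the request.
    # Only the resulting signature travels over the wire, never the secret.
    base = "&".join([method, url] + sorted(f"{k}={v}" for k, v in params.items()))
    return hmac.new(client_secret.encode(), base.encode(), hashlib.sha256).hexdigest()

# Client signs the request with its secret...
sig = sign_request("s3cret", "GET", "https://api.example.com/photos", {"user": "alice"})
# ...and the server, which knows the same secret, recomputes and compares.
server_sig = sign_request("s3cret", "GET", "https://api.example.com/photos", {"user": "alice"})
```

Because the server can recompute the signature from its own copy of the secret, an eavesdropper who sees the request learns nothing that lets them forge a different one.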
OAuth 2.0 is a more straightforward protocol that passes the client secret with every authentication request. Therefore, this protocol is not backward compatible with OAuth 1.0. Moreover, it is deemed less secure because it relies solely on the SSL/TLS layer. One of the OAuth contributors, Eran Hammer, even said that OAuth 2.0 may become "the road to hell," because:
"… OAuth 2.0 at the hand of a developer with deep understanding of web security will likely result in a secure implementation. However, at the hands of most developers – as has been the experience from the past two years – 2.0 is likely to produce insecure implementations."
Despite this opinion, making a secure implementation of OAuth 2.0 is not that hard, because there are frameworks supporting it and documented best practices to follow. SSL itself is a very reliable protocol that is hard to compromise when proper certificate checks are thoroughly performed.
Of course, if you are using OAuth 1.0, then continue to use it; there is no point in migrating to OAuth 2.0. But if you are developing a new mobile or an Angular web application (and often mobile and web applications come together, sharing the same server), then OAuth 2.0 will be a better choice. It already has some built-in support in the OWIN framework for .NET that can be easily extended to create different clients and use different security settings.

Implementing OAuth 2.0 in OWIN

OWIN (the Open Web Interface for .NET) is a framework commonly used for building ASP.NET Web API applications. It offers its own implementation of the OAuth 2.0 protocol in which two major OAuth terms (clients and refresh tokens) are not strictly defined and need to be implemented separately. On the one hand, this adds some complexity, because each developer needs to decide how to implement them exactly; on the other hand, it adds extensibility and new opportunities.
The exact implementation, with code snippets, can be found in tutorials across the web and in open source projects on GitHub, and is therefore out of the scope of this article. In particular, Taiseer Joudeh, a Microsoft consultant, has written an article with a step-by-step description of the exact implementation.
From my own experience, it's best to use the following techniques when implementing and using an OAuth 2.0 authorization server:
      1. Always use SSL. OAuth 2.0 security depends solely on SSL, and using OAuth 2.0 without it is just like sending a password in plaintext across an insecure Wi-Fi connection.
      2. Always check the SSL certificate to protect from the man-in-the-middle attacks. For web applications, the browser does that job and warns the user if the certificate is not to be trusted. For mobile applications, the application itself should check the certificate for validity.
      3. Do not store client secrets in the database in plaintext; store the hashed value instead. You may choose not to store client secrets at all (which is an acceptable solution if the authentication relies solely on passwords), but keeping them in plaintext will pose a security threat if they become critical in the future.
      4. Always use refresh tokens and make access tokens short-lived. Using refresh tokens will give you the following three benefits:
        • They let you avoid access tokens that live forever without forcing the user to re-enter credentials. As a bonus, for web applications they can be used to imitate session expiration: when the user is idle for some time, both the access and the refresh token will expire and the user will be forced to re-login.
        • They are revocable. When the user changes the password, the token can be revoked and the user will be forced to re-login on all mobile devices. This is very important because a device may be stolen and having a logged-in session on it will pose a significant security threat.
        • They can be used for updating access token content. Normally, access tokens are validated without a roundtrip to the database. This makes it faster to process, but user roles (that are cached in claims) may not be easily updated or, even more importantly, revoked if access token expiration takes a long time. Refresh tokens are of great help here because they shorten the access tokens' life.
      5. Choose the lifetime for access tokens and refresh tokens properly. For financial or other critical applications, the token's lifetime should be as short as possible: 30-60 seconds for access tokens and five to 10 minutes for refresh tokens. Non-critical applications may have refresh tokens living for weeks so that users are not bothered with re-entering credentials.
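Tips 3 and 4 can be sketched in a few lines, shown here in Python rather than .NET for brevity. The token format and lifetimes below are illustrative assumptions, not the OWIN implementation:

```python
import hashlib
import secrets
import time

def hash_secret(client_secret):
    # Tip 3: store only a hash of the client secret, never plaintext.
    # (Fine for high-entropy random secrets; user passwords need a slow hash.)
    return hashlib.sha256(client_secret.encode()).hexdigest()

def issue_tokens(now=None):
    # Tip 4: a short-lived access token paired with a longer-lived,
    # revocable refresh token. Lifetimes here are example values.
    now = now if now is not None else time.time()
    return {
        "access_token": secrets.token_urlsafe(32),
        "access_expires": now + 60,      # e.g. 60 seconds
        "refresh_token": secrets.token_urlsafe(32),
        "refresh_expires": now + 600,    # e.g. 10 minutes
    }

tokens = issue_tokens(now=1000.0)
```

When the access token expires, the client exchanges the refresh token for a new pair; revoking the refresh token server-side forces a re-login on every device.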

OWIN Implementation of OAuth 2.0 Offers Flexibility

Also, current OWIN implementation of OAuth 2.0 is flexible enough to be altered to fit particular business needs:
        1. If there is a background service that needs to act as any user, it can be integrated seamlessly into the authentication process in the following way:
          • Alter the clients table by adding a PasswordRequired column.
          • Handle the case when the password is not required in the source code.
          • Create a new client in the clients table and use it for the background service. Always secure the secret for this client as it will act like the master password. (Never store this secret in plaintext.)
        2. If there are several applications (mobile apps, admin console, etc.) that need to be restricted by roles, you can protect the client applications in the following way:
          • Alter the clients table by adding an AllowedRoles column.
          • Implement additional checks for the user role to the authentication code.
          • Dedicate different rows in the client's table for each application. Remember that the authorization checks in the server API must be implemented in any case.
        3. Sometimes the requirements may be vice versa: the same user logging in through different applications should have different business roles when accessing the server resources. In this case, the client's table can be altered by adding and maintaining a new BusinessRole column. The value from this column can be added to the access token claims to be eventually checked in the server API.
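The role-restriction idea in point 2 can be sketched independently of OWIN. The table layout and helper below are hypothetical, shown in Python for brevity:

```python
# Hypothetical clients table: one row per application, each with an
# AllowedRoles-style column listing the roles it may authenticate.
CLIENTS = {
    "mobile_app":    {"allowed_roles": {"user"}},
    "admin_console": {"allowed_roles": {"user", "admin"}},
}

def can_authenticate(client_id, user_role):
    client = CLIENTS.get(client_id)
    # Reject unknown clients and roles the client is not allowed to use.
    return client is not None and user_role in client["allowed_roles"]
```

Remember that this check only gates login per application; the authorization checks in the server API must still be implemented in any case.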

Remember, No Authentication Method Is Perfect

There is no ideal way to protect users from attacks when using applications, and even OAuth 2.0 has advantages and flaws exposed in implementations. By avoiding implementation mistakes and using the methods described in the article above, developers can help users stay more secure without breaking the seamless interaction with the app.

Tuesday, March 7, 2017

Create Virtual Machine Template in oVirt Environment

A template is a pre-installed and pre-configured virtual machine. Templates are beneficial when we need to deploy a large number of similar virtual machines: they reduce both the time needed to deploy a virtual machine and the amount of disk space required. A template does not need to be cloned; instead, a small overlay can be placed on top of the base image to store just the changes for one particular instance.
To convert a virtual machine into a template we need to generalize, or in other words "seal", the virtual machine.
In our previous articles we have already discussed the following topics.
I am assuming a CentOS 7 or RHEL 7 virtual machine is already deployed in the oVirt environment. We will convert this virtual machine into a template. Refer to the following steps:

Step:1 Login to Virtual Machine Console

SSH into the virtual machine as the root user.

Step:2 Remove the SSH host keys using the rm command.

[root@linuxtechi ~]# rm -f /etc/ssh/ssh_host_*

Step:3 Remove the hostname and set it to localhost

[root@linuxtechi ~]# hostnamectl set-hostname 'localhost'

Step:4 Remove host-specific information

Remove the following:
  • udev rules
  • MAC Address & UUID
[root@linuxtechi ~]# rm -f /etc/udev/rules.d/*-persistent-*.rules
[root@linuxtechi ~]# sed -i '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-*
[root@linuxtechi ~]# sed -i '/^UUID=/d' /etc/sysconfig/network-scripts/ifcfg-*

Step:5 Remove RHN systemid associated with virtual machine

[root@linuxtechi ~]# rm -f /etc/sysconfig/rhn/systemid

Step:6 Run the command sys-unconfig

Run the sys-unconfig command to complete the process; it will also shut down the virtual machine.
[root@linuxtechi ~]# sys-unconfig

Now our virtual machine is ready to be turned into a template.
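For convenience, the sealing steps above can be collected into one script. Here is a sketch in Python with a dry-run flag (the commands mirror steps 2 to 5 above; run it only as root inside the virtual machine you intend to seal):

```python
import subprocess

# Commands corresponding to the sealing steps 2-5 above.
SEAL_COMMANDS = [
    "rm -f /etc/ssh/ssh_host_*",
    "hostnamectl set-hostname localhost",
    "rm -f /etc/udev/rules.d/*-persistent-*.rules",
    "sed -i '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-*",
    "sed -i '/^UUID=/d' /etc/sysconfig/network-scripts/ifcfg-*",
    "rm -f /etc/sysconfig/rhn/systemid",
]

def seal(dry_run=True):
    """Run (or, with dry_run=True, merely list) the sealing commands."""
    executed = []
    for cmd in SEAL_COMMANDS:
        if not dry_run:
            subprocess.run(cmd, shell=True, check=False)
        executed.append(cmd)
    return executed
```

After this, running sys-unconfig (step 6) completes the process and shuts the machine down.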

Right-click on the machine and select the “Make Template” option.
Specify the name and description of the template and click OK.
It will take a couple of minutes to create the template from the virtual machine. Once done, go to the Templates tab and verify that the newly created template is listed.

Now let's start deploying a virtual machine from the template.

Go to the Virtual Machines tab, click on “New VM”, and select the template that we created in the steps above. Specify the VM name and description.
When we click OK, it will start creating the virtual machine from the template. An example is shown below:
As we can see, after a couple of minutes the virtual machine “test_server1” has been successfully launched from the template.
That’s all; I hope you got an idea of how to create a template from a virtual machine. Please share your feedback and comments.

Linux Directory Structure (File System Hierarchy) Explained with Examples

Are you new to Linux? If so, I would advise you to understand the Linux directory structure (file system hierarchy) first. Don’t panic after seeing the image below; if you’re confused about /bin, /sbin, /usr/bin & /usr/sbin, don’t worry, we will walk you through it step by step.
The Filesystem Hierarchy Standard (FHS) defines the structure of file systems in Linux and other Unix-like operating systems.
In Linux everything is a file, and we can modify anything whenever necessary, but make sure you know what you are doing: changing something without understanding it can potentially damage your system. So learn the basics first to avoid such issues in a production environment.
  • / : The Root Directory – Primary hierarchy root and root directory of the entire file system hierarchy, containing all other directories and files. Note that / and /root are different directories.
  • /bin : Essential User Binaries – Contains essential user binaries, the most commonly used basic commands available to all users, such as ps, ls, ping, grep, cp & cat.
  • /boot : Static Boot Files – Contains boot loader related files needed to start up the system, such as initrd (the initial RAM disk image), vmlinuz (the compressed Linux kernel executable) & grub (the GRand Unified Bootloader). Note that it is vmlinuz, not vmlinux: vmlinuz is the compressed kernel executable, while vmlinux is the non-compressed one.
  • /dev : Device Files – Contains the device files for the various hardware devices on the system, including hard drives, RAM, terminals (tty), cdrom, etc. These are not regular files.
  • /etc : Configuration Files – Contains system-wide configuration files, which affect the system’s behavior for all users when modified. It also holds service scripts (start, stop, status, etc.).
  • /home : Users’ Home Directories – Users’ home directories, where users save their personal files.
  • /lib : Essential Shared Libraries – Contains important dynamic libraries and kernel modules that support the binaries found under the /bin & /sbin directories.
  • /lost+found : Recovered Files – If the file system crashes (which can happen for many reasons: power failure, applications not properly closed, etc.), corrupted files are placed in this directory. A file system check is performed on the next boot.
  • /media : Removable Media – Mount directory for external removable media/devices (floppies, CDs, DVDs).
  • /mnt : Temporary Mount Points – Temporary mount directory, where we can mount filesystems temporarily.
  • /opt : Optional Packages – opt stands for optional; third-party or proprietary applications that are not available in the official repositories can be installed under /opt.
  • /proc : Kernel & Process Files – A virtual filesystem that contains information about running processes (/proc/<pid>) and about kernel & system resources (/proc/uptime & /proc/vmstat).
  • /root : Root Home Directory – The superuser’s home directory, which is not the same as /.
  • /run : Application State Files – A tmpfs (temporary file system) available early in the boot process; its contents do not survive a reboot and are cleared at boot time.
  • /sbin : System Administration Binaries – /sbin also contains binary executables, similar to /bin, but these commands require superuser privileges and are used for system maintenance.
  • /selinux : SELinux Virtual File System – Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies; this directory appears on RPM-based systems such as RHEL, CentOS, Fedora, Oracle Linux & Scientific Linux.
  • /srv : Service Data – srv stands for service; contains the data directories of various services provided by the system, such as HTTP (/srv/www/) or FTP (/srv/ftp/).
  • /sys : Virtual Filesystem (sysfs) – Modern Linux distributions include a /sys directory (since the 2.6.x kernels). It provides a set of virtual files exporting information about kernel subsystems, hardware devices and associated device drivers from the kernel’s device model to user space.
  • /tmp : Temporary Directory – Applications store temporary files in /tmp while they are running. These files are typically deleted on the next reboot.
  • /usr : User Binaries – Contains binaries, libraries, documentation and source code for second-level programs (read-only user data): command binaries (/usr/bin), system binaries (/usr/sbin), libraries for those binaries (/usr/lib), source code (/usr/src) and documents (/usr/share/doc).
  • /var : Variable Files – var stands for variable; it contains application cache files (/var/cache), package manager & database files (/var/lib), lock files (/var/lock), various logs (/var/log), users’ mailboxes (/var/mail) & print queues and the outgoing mail queue (/var/spool).
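Because /proc exposes kernel data as plain text files, scripts can read it directly. For example, /proc/uptime holds two numbers: seconds since boot and cumulative idle time across all CPUs. A small sketch:

```python
def parse_uptime(text):
    # /proc/uptime contains e.g. "352735.47 234388.90":
    # uptime in seconds, then cumulative idle time across all CPUs.
    uptime, idle = (float(x) for x in text.split())
    return uptime, idle

# On a Linux system you would read the real file:
# with open("/proc/uptime") as f:
#     uptime, idle = parse_uptime(f.read())
sample = parse_uptime("352735.47 234388.90")
```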

Running Asynchronous background Tasks on Linux with Python 3 Flask and Celery

In this tutorial I will describe how you can run asynchronous tasks on Linux using Celery, an asynchronous task queue manager.
While running scripts on Linux, some tasks that take time to complete can be done asynchronously, for example a system update. With Celery you can run such tasks asynchronously in the background and then fetch the results once the task is complete.
You can use Celery in your Python scripts and run them from the command line as well, but in this tutorial I will be using Flask, a web framework for Python, to show you how you can achieve this through a web application.
Before we start, it helps to have some familiarity with Flask; if not, you can quickly read my earlier tutorial on building web applications on Linux with Flask before you proceed.
This tutorial is for Python 3.4, Flask 0.10, Celery 3.1.23 and rabbitmq-server 3.2.4-1
To make it easier for you, I have generated all the code required for the web interface using Flask-Scaffold and uploaded it online. You will just need to clone the code and proceed with the installation and configuration as follows.
As described above, the first step is to clone the code on your Linux server and install the requirements:
git clone
cd Flask-Celery-Linux
virtualenv -p python3.4 venv-3.4
source venv-3.4/bin/activate
pip install -r requirements.txt
sudo apt-get install rabbitmq-server
Most of the requirements, including Flask and Celery, will be installed using ‘pip’; however, we will need to install RabbitMQ via ‘apt-get’ or your distro’s default package manager.
What is RabbitMQ?
RabbitMQ is a message broker. Celery uses a message broker like RabbitMQ to mediate between clients and workers. To initiate a task, a client adds a message to the queue, which the broker then delivers to a worker. There are other message brokers as well but RabbitMQ is the recommended broker for Celery.
Configurations are stored in the config file. There are two settings you will need to add: your database details, where the state and results of your tasks will be stored, and the RabbitMQ message broker URL for Celery.
#You can add either a Postgres or MySQL Database
#I am using MySQL for this tutorial
mysql_db_username = 'youruser'
mysql_db_password = 'yourpass'
mysql_db_name = 'flask_celery_linux'
mysql_db_hostname = 'localhost'
SQLALCHEMY_DATABASE_URI = "mysql+pymysql://{DB_USER}:{DB_PASS}@{DB_ADDR}/{DB_NAME}".format(
    DB_USER=mysql_db_username,
    DB_PASS=mysql_db_password,
    DB_ADDR=mysql_db_hostname,
    DB_NAME=mysql_db_name)
#Celery Message Broker Configuration
CELERY_BROKER_URL = 'amqp://guest@localhost//'
Database Migrations
Run the script to create the database tables
python db init
python db migrate
python db upgrade
And finally, run the built-in web server.
You should be able to see the Web Interface at http://localhost:5000
You will need to create a username and password by clicking on Sign Up, after which you can log in.
Starting the Celery Worker Process
In a new window/terminal activate the virtual environment and start the celery worker process
cd Flask-Celery-Linux
source venv-3.4/bin/activate
celery worker -A celery_worker.celery --loglevel=debug
Now go back to the Web interface and click on Commands –> New. Here you can type in any Linux command and see it run asynchronously.
The video below will show you a live demonstration
To integrate Celery into your Python script or web application, you first need to create an instance of Celery with your application name and the message broker URL:
from celery import Celery
from config import CELERY_BROKER_URL
celery = Celery(__name__, broker=CELERY_BROKER_URL)
Any task that has to run asynchronously then needs to be wrapped with the Celery task decorator:
import subprocess
from app import celery

@celery.task
def run_command(command):
    cmd = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, error = cmd.communicate()
    return {"result": stdout, "error": error}
You can then call the task in your Python scripts using the ‘delay’ or ‘apply_async’ method. The difference between ‘delay’ and ‘apply_async’ is that the latter allows you to specify a time after which the task will be executed:
run_command.apply_async(args=[command], countdown=30)
The above command will be executed on Linux after a 30-second delay.
In order to obtain the task status and result you will need the task id.
task = run_command.delay(cmd.split())
task_id = task.id
task_status = run_command.AsyncResult(task_id)
task_state = task_status.state
result = str(task_status.result)
# Store results in the database using SQLAlchemy
from models import Commands
command = Commands(request_dict['name'], task_id, task_state, result)
Tasks can have different states. Pre-defined states include PENDING, FAILURE and SUCCESS, and you can define custom states as well.
In case a task takes a long time to execute, or you want to terminate a task prematurely, you can use the ‘revoke’ method. Just be sure to pass the terminate flag to it, or else the task will be respawned when a Celery worker process restarts.
Finally, if you are using Flask application factories, you will need to instantiate Celery when you create your Flask application:
def create_app(config_filename):
    app = Flask(__name__, static_folder='templates/static')
    app.config.from_object(config_filename)
    # Init Flask-SQLAlchemy
    from app.basemodels import db
    db.init_app(app)
    return app

from app import create_app
app = create_app('config')
if __name__ == '__main__':
    app.run(host=app.config['HOST'])
To run Celery in the background you can use supervisord.
That’s it for now; if you have any suggestions, add them in the comments below.
Images are not mine and are found on the internet