Thursday, February 26, 2015

Understanding Linux CPU stats

http://blog.scoutapp.com/articles/2015/02/24/understanding-linuxs-cpu-stats

Your Linux server is running slow, so you follow standard procedure and run top. You see the CPU metrics.


But what do all of those 2-letter abbreviations mean?


The 3 CPU states

Let's take a step back. There are 3 general states your CPU can be in:
  • Idle, which means it has nothing to do.
  • Running a user space program, like a command shell, an email server, or a compiler.
  • Running the kernel, servicing interrupts or managing resources.
These three meta states can be further subdivided. For example, user space programs can be categorized as those running under their initial priority level or those running with a nice priority. Niceness is a way to tweak the priority level of a process so that it runs less frequently. The niceness level ranges from -20 (most favorable scheduling) to 19 (least favorable). By default processes on Linux are started with a niceness of 0. See our blog post Restricting process CPU usage using nice, cpulimit, and cgroups for more information on nice.
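As a quick illustration, the nice and renice commands control this value (the PID and command here are hypothetical):

$ nice -n 10 gzip hugefile    # start gzip with a niceness of 10 (lower priority)
$ renice 5 -p 1234            # set the niceness of running process 1234 to 5
$ ps -o pid,ni,comm -p 1234   # confirm the new niceness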

The 7 cpu statistics explained

There are several different ways to see the various CPU statistics. The most common is probably using the top command.
To start the top command, you just type top at the command line.
The output from top is divided into two sections. The first few lines give a summary of the system resources including a breakdown of the number of tasks, the CPU statistics, and the current memory usage. Beneath these stats is a live list of the current running processes. This list can be sorted by PID, CPU usage, memory usage, and so on.
The CPU line will look something like this:
%Cpu(s): 24.8 us,  0.5 sy,  0.0 ni, 73.6 id,  0.4 wa,  0.0 hi,  0.2 si,  0.0 st
24.8 us - This tells us that the processor is spending 24.8% of its time running user space processes. A user space program is any process that doesn't belong to the kernel. Shells, compilers, databases, web servers, and the programs associated with the desktop are all user space processes. If the processor isn't idle, it is quite normal that the majority of the CPU time should be spent running user space processes.
73.6 id - Skipping over a few of the other statistics just for a moment, the id statistic tells us that the processor was idle just over 73% of the time during the last sampling period. The total of the user space percentage - us, the niced percentage - ni, and the idle percentage - id, should be close to 100%, which it is in this case. If the CPU is spending more time in the other states then something is probably awry - see the Troubleshooting section below.
0.5 sy - This is the amount of time that the CPU spent running the kernel. All the processes and system resources are handled by the Linux kernel. When a user space process needs something from the system, for example when it needs to allocate memory, perform some I/O, or create a child process, then the kernel is running. In fact, the scheduler itself, which determines which process runs next, is part of the kernel. The amount of time spent in the kernel should be as low as possible. In this case, just 0.5% of the time given to the different processes was spent in the kernel. This number can peak much higher, especially when there is a lot of I/O happening.
0.0 ni - As mentioned above, the priority level of a user space process can be tweaked by adjusting its niceness. The ni stat shows how much time the CPU spent running user space processes that have been niced. On a system where no processes have been niced, the number will be 0.
0.4 wa - Input and output operations, like reading or writing to a disk, are slow compared to the speed of a CPU. Although these operations happen very fast compared to everyday human activities, they are still slow when compared to the performance of a CPU. There are times when the processor has initiated a read or write operation and then has to wait for the result, but has nothing else to do. In other words, it is idle while waiting for an I/O operation to complete. The time the CPU spends in this state is shown by the wa statistic.
0.0 hi & 0.2 si - These two statistics show how much time the processor has spent servicing interrupts. hi is for hardware interrupts, and si is for software interrupts. Hardware interrupts are physical interrupts sent to the CPU from various peripherals like disks and network interfaces. Software interrupts come from processes running on the system. A hardware interrupt will actually cause the CPU to stop what it is doing and go handle the interrupt. A software interrupt doesn't occur at the CPU level, but rather at the kernel level.
0.0 st - This last number only applies to virtual machines. When Linux is running as a virtual machine on a hypervisor, the st (short for stolen) statistic shows how long the virtual CPU has spent waiting for the hypervisor to service another virtual CPU running on a different virtual machine. Since in the real world these virtual processors share the same physical processor(s), there will be times when the virtual machine wanted to run but the hypervisor scheduled another virtual machine instead.
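For the curious: top derives all of these percentages from the cumulative counters in /proc/stat (one running total per state, in clock ticks since boot), taking the difference between two samples. A rough sketch for the idle counter (field order on the "cpu" line per the proc(5) man page: user, nice, system, idle, iowait, irq, softirq, steal):

$ head -1 /proc/stat    # cpu  user nice system idle iowait irq softirq steal ...
$ t1=$(awk '/^cpu /{print $5}' /proc/stat); sleep 1
$ t2=$(awk '/^cpu /{print $5}' /proc/stat)
$ echo "idle ticks during the last second: $((t2 - t1))"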

Monitoring CPU Stats with Scout

Scout automatically monitors the key CPU statistics by default so you can see how they change over time. Scout's realtime mode is great, as you get more context with a chart than with changing numbers in a terminal.
You can also easily compare CPU metrics across many servers.
You can try Scout free for 30 days.

Troubleshooting

On a busy server or desktop PC, you can expect the amount of time the CPU spends idle to be small. However, if a system rarely has any idle time then it is either a) overloaded (and you need a better one), or b) something is wrong.
Here is a brief look at some of the things that can go wrong and how they affect the CPU utilization.
High user mode - If a system suddenly jumps from having spare CPU cycles to running flat out, then the first thing to check is the amount of time the CPU spends running user space processes. If this is high then it probably means that a process has gone crazy and is eating up all the CPU time. Using the top command you will be able to see which process is to blame and restart the service or kill the process.
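If you prefer a one-shot view over the interactive top display, the standard ps options can show the top CPU consumers directly:

$ ps -eo pid,user,%cpu,comm --sort=-%cpu | head -5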
High kernel usage - Sometimes this is acceptable. For example, a program that does lots of console I/O can cause kernel usage to spike. However, if it remains high for long periods of time, it could be an indication that something isn't right. A possible cause of such spikes could be a problem with a driver/kernel module.
High niced value - If the amount of time the CPU is spending running processes with a niced priority value jumps then it means that someone has started some intensive CPU jobs on the system, but they have niced the task.
If the niceness level is greater than zero then the user has been courteous enough to lower the priority of the process and thereby avoid a CPU overload. There is probably little that needs to be done in this case, other than maybe find out who has started the process and talk about how you can help out!
But if the niceness level is less than 0, then you will need to investigate what is happening and who is responsible, as such a task could easily cripple the responsiveness of the system.
High waiting on I/O - This means that there are some intensive I/O tasks running on the system that don't use up much CPU time. If this number is high for anything other than short bursts then it means that either the I/O performed by the task is very inefficient, or the data is being transferred to a very slow device, or there is a potential problem with a hard disk that is taking a long time to process reads & writes.
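Tools like iostat (from the sysstat package, if installed) can help pin down which device is the bottleneck; the %util and await columns are the ones to watch:

$ iostat -x 2 5    # extended device stats, 5 reports at 2-second intervals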
High interrupt processing - This could be an indication of a broken peripheral that is causing lots of hardware interrupts or of a process that is issuing lots of software interrupts.
Large stolen time - Basically this means that the host system running the hypervisor is too busy. If possible, check the other virtual machines running on the hypervisor, and/or migrate your virtual machine to another host.

TL;DR

Linux keeps statistics on how much time the CPU spends performing different tasks. Most of its time should be spent running user space programs or being idle. However there are several other execution states including running the kernel and servicing interrupts. Monitoring these different states can help you keep your system healthy and running smoothly.

Also see

See your CPU stats in realtime with Scout. Try Scout free for 30 days.

DNSMasq, the Pint-Sized Super Dæmon!

http://www.linuxjournal.com/content/dnsmasq-pint-sized-super-d%C3%A6mon

I've always been a fan of putting aftermarket firmware on consumer-grade routers. Whether it's DD-WRT, Tomato, OpenWRT or whatever your favorite flavor of "better than stock" firmware might be, it just makes economic sense. Unfortunately, my routing needs have surpassed my trusty Linksys router. Although I could certainly buy a several-hundred-dollar, business-class router, I really don't like spending money like that. Thankfully, I found an incredible little router (the EdgeRouter Lite) that can route a million packets per second and has three gigabit Ethernet ports. So far, it's an incredible router, but that's all it does—route. Which brings me to the point of this article.

I've always used the DHCP and DNS server built in to DD-WRT to serve my network. I like having those two services tied to the router, because if every other server on my network fails, I still can get on-line. I figure the next best thing is to have a Raspberry Pi dedicated to those services. Because all my RPi devices currently are attached to televisions around the house (running XBMC), I decided to enlist the Cubox computer I reviewed in November 2013 (Figure 1). It's been sitting on my shelf collecting dust, and I'd rather have it do something useful.
Figure 1. The Cubox is more powerful than a Raspberry Pi, but even an RPi is more power than DNSMasq requires!

 Although the Cubox certainly is powerful enough to run BIND and the ISC DHCP server, that's really overkill for my network. Plus, BIND really annoys me with its serial-number incrementation and such whenever an update is made. It wasn't until I started to research alternate DNS servers that I realized just how powerful DNSMasq can be. Plus, the way it works is simplicity at its finest. First, let's look at its features:
  • Extremely small memory and CPU footprint: I knew this was the case, because it's the program that runs on Linux-based consumer routers where memory and CPU are at a premium.
  • DNS server: DNSMasq approaches DNS in a different way from the traditional BIND dæmon. It doesn't offer the complexity of domain transfers, master/slave relationships and so on. It does offer extremely simple and highly configurable options that are, in my opinion, far more useful in a small- to medium-size network. It even does reverse DNS (PTR records) automatically! (More on those details later.)
  • DHCP server: whereas the DNS portion of DNSMasq lacks certain advanced features, the DHCP services offered are actually extremely robust. Most routers running firmware like DD-WRT don't offer a Web interface to the advanced features DNSMasq provides, but it rivals and even surpasses some of the standalone DHCP servers.
  • TFTP server: working in perfect tandem with the advanced features of DHCP, DNSMasq even offers a built-in TFTP server for things like booting thin clients or sending configuration files.
  • A single configuration file: it's possible to use multiple configuration files, and I even recommend it for clarity's sake. In the end, however, DNSMasq requires you to edit only a single configuration file to manage all of its powerful services. That configuration file also is very well commented, which makes using it much nicer.

Installation

DNSMasq has been around for a very long time. Installing it on any Linux operating system should be as simple as searching for it in your distribution's package management system. On Debian-based systems that would mean something like:

sudo apt-get install dnsmasq

Or, on a Red Hat/CentOS system:

yum install dnsmasq (as root)

The configuration file (there's just one!) is usually stored at /etc/dnsmasq.conf, and like I mentioned earlier, it is very well commented. Figuring out even the most advanced features is usually as easy as reading the configuration file and un-commenting those directives you want to enable. There are even examples for those directives that require you to enter information specific to your environment.

After the dnsmasq package is installed, it most likely will get started automatically. From that point on, any time you make changes to the configuration (or make changes to the /etc/hosts file), you'll need to restart the service or send an HUP signal to the dæmon. I recommend using the init script to do that:

sudo service dnsmasq restart

But, if your system doesn't have the init service set up for dnsmasq, you can issue an HUP signal by typing something like this:

sudo kill -HUP $(pidof dnsmasq)

This will find the PID (process ID) and send the signal to reload its configuration files. Either way should work, but the init script will give you more feedback if there are errors.

First Up: DNS

Of all the features DNSMasq offers, I find its DNS services to be the most useful and awesome. You get the full functionality of your upstream DNS server (usually provided by your ISP), while seamlessly integrating DNS records for your own network. To accomplish that "split DNS"-type setup with BIND, you need to create a fake DNS master file, and even then you run into problems if you are missing a DNS name in your local master file, because BIND won't query another server by default for records it thinks it's in charge of serving. DNSMasq, on the other hand, follows a very simple procedure when it receives a request. Figure 2 shows that process.
Figure 2. DNSMasq makes DNS queries simple, flexible and highly configurable.

For my purposes, this means I can put a single entry into my server's /etc/hosts file for something like "server.brainofshawn.com", and DNSMasq will return the IP address in the /etc/hosts file. If a host queries DNSMasq for an entry not in the server's /etc/hosts file, www.brainofshawn.com for instance, it will query the upstream DNS server and return the live IP for my Web host. DNSMasq makes a split DNS scenario extremely easy to maintain, and because it uses the server's /etc/hosts file, it's simple to modify entries.

My personal favorite feature of DNSMasq's DNS service, however, is that it supports round-robin load balancing. This isn't something that normally works with an /etc/hosts file entry, but with DNSMasq, it does. Say you have two entries in your /etc/hosts file like this:

192.168.1.10	webserver.example.com
192.168.1.11	webserver.example.com

On a regular system (that is, if you put it in your client's /etc/hosts file), the DNS query always will return 192.168.1.10 first. DNSMasq, on the other hand, will see those two entries and mix up their order every time it's queried. So instead of 192.168.1.10 being the first IP, half of the time, it will return 192.168.1.11 as the first IP. It's a very rudimentary form of load balancing, but it's a feature most people don't know exists!
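You can watch this behavior by pointing dig at the DNSMasq server and running the same query a few times (assuming the webserver.example.com entries above are in place); the order of the two returned IPs changes between runs:

$ dig +short webserver.example.com @localhost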

Finally, DNSMasq automatically will create and serve reverse DNS responses based on entries found in the server's /etc/hosts file. In the previous example, running the command:

dig -x 192.168.1.10

would get the response "webserver.example.com" as the reverse DNS lookup. If you have multiple DNS entries for a single IP address, DNSMasq uses the first entry as the reverse DNS response. So if you have a line like this in your server's /etc/hosts file:

192.168.1.15 www.example.com mail.example.com ftp.example.com

Any regular DNS queries for www.example.com, mail.example.com or ftp.example.com will get answered with "192.168.1.15", but a reverse DNS lookup on 192.168.1.15 will get the single response "www.example.com".

DNSMasq is so flexible and feature-rich, it's hard to find a reason not to use it. Sure, there are valid reasons for using a more robust DNS server like BIND, but for most small to medium networks, DNSMasq is far more appropriate and much, much simpler.

Serving DHCP

It's possible to use DNSMasq for DNS only and disable the DHCP services it offers. Much like its DNS services, however, the simplicity and power offered by DNSMasq's DHCP server make it a perfect candidate for small- to medium-sized networks. It supports both dynamic ranges for automatic IP assignment and static reservations based on the MAC address of computers on your network. Plus, since it also acts as the DNS server for your network, it has really great hostname-DNS integration for computers connected to your network that may not have a DNS entry. How does that work? Figure 3 shows the modified method used when the DNS server receives a query if it's also serving as a DHCP server.

(The extra step is shown as the orange-colored diamond in the flowchart.)
Figure 3. If you use DHCP, it automatically integrates into your DNS system—awesome for finding dynamically assigned IPs!

Basically, if your friend brings a laptop to your house and connects to your network, when it requests a DHCP address, it tells the DNSMasq server its hostname. From that point on, until the lease expires, any DNS queries the server receives for that hostname will be returned as the IP it assigned via DHCP. This is very convenient if you have a computer connected to your network whose hostname you know, but it gets a dynamically assigned IP address. In my house, I have a Hackintosh computer that just gets a random IP assigned via DNSMasq's DHCP server. Figure 4 shows what happens when I ping the name "hackintosh" on my network. Even though it isn't listed in any of the server's configuration files, since it handles DHCP, it creates a DNS entry on the fly.
Figure 4. There are no DNS entries anywhere for my Hackintosh, but thanks to DNSMasq, it's pingable via its hostname.

Static DHCP entries can be entered in the single configuration file using this format:

dhcp-host=90:fb:a6:86:0d:60,xbmc-livingroom,192.168.1.20
dhcp-host=b8:27:eb:e3:4c:5f,xbmc-familyroom,192.168.1.21
dhcp-host=b8:27:eb:16:d9:08,xbmc-masterbedroom,192.168.1.22
dhcp-host=00:1b:a9:fa:98:a9,officelaser,192.168.1.100
dhcp-host=04:46:65:d4:e8:c9,birdcam,192.168.1.201

It's also valid to leave the hostname out of the static declaration, but adding it to the DHCP reservation adds it to the DNS server's list of known addresses, even if the client itself doesn't tell the DHCP server its hostname. You also could just add the hostname to your DNSMasq server's /etc/hosts file, but I prefer to make my static DHCP entries with hostnames, so I can tell at a glance what computer the reservation is for.
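For reference, the dynamic pool itself is configured with the dhcp-range directive. A minimal sketch (the address range and lease time here are illustrative):

dhcp-range=192.168.1.50,192.168.1.150,12h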

And If That's Not Enough...

The above scenarios are all I use DNSMasq for on my local network. It's more incredible than any DHCP/DNS combination I've ever used before, including the Windows and OS X server-based services I've used in large networks. It does provide even more services, however, for those folks needing them.

The TFTP server can be activated via configuration file to serve boot files, configuration files or any other TFTP files you might need served on your network. The service integrates flawlessly with the DHCP server to provide boot filenames, PXE/BOOTP information, and custom DHCP options needed for booting even the most finicky devices. Even if you need TFTP services for a non-boot-related reason, DNSMasq's server is just a standard TFTP service that will work for any computer or device requiring it.
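As a rough sketch, enabling it takes only a few lines in the configuration file (the root directory and boot filename below are illustrative):

enable-tftp
tftp-root=/srv/tftp
dhcp-boot=pxelinux.0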

If you've read Kyle Rankin's recent articles on DNSSEC and want to make sure your DNS information is secure, there's no need to install BIND. DNSMasq supports DNSSEC, and once again provides configuration examples in the configuration file.

Truly, DNSMasq is the unsung hero for consumer-grade Internet routers. It allows those tiny devices to provide DNS and DHCP for your entire network. If you install the program on a regular server (or teeny tiny Raspberry Pi or Cubox), however, it can become an extremely robust platform for all your network needs. If it weren't for my need to get a more powerful and reliable router, I never would have learned about just how amazing DNSMasq is. If you've ever been frustrated by BIND, or if you'd just like to have more control over the DNS and DHCP services on your network, I urge you to give DNSMasq a closer look. It's for more than just your DD-WRT router!

How to disable IPv6 on Linux

http://ask.xmodulo.com/disable-ipv6-linux.html

Question: I notice that one of my applications is trying to establish a connection over IPv6. But since our local network is not able to route IPv6 traffic, the IPv6 connection times out, and the application falls back to IPv4, which causes unnecessary delay. As I don't have any need for IPv6 at the moment, I would like to disable IPv6 on my Linux box. What is a proper way to turn off IPv6 on Linux?
IPv6 has been introduced as a replacement for IPv4, the traditional 32-bit address space used on the Internet, to solve the imminent exhaustion of the available IPv4 address space. However, since IPv4 is used by every host or device connected to the Internet, it is practically impossible to switch every one of them to IPv6 overnight. Numerous IPv4-to-IPv6 transition mechanisms (e.g., dual IP stack, tunneling, proxying) have been proposed to facilitate the adoption of IPv6, and many applications are being rewritten, as we speak, to add support for IPv6. One thing for sure is that IPv4 and IPv6 will inevitably coexist for the foreseeable future.
Ideally the ongoing IPv6 transition process should not be visible to end users, but the mixed IPv4/IPv6 environment might sometimes cause you to encounter various hiccups originating from unintended interaction between IPv4 and IPv6. For example, you may experience timeouts from applications such as apt-get or ssh unsuccessfully trying to connect via IPv6, a DNS server accidentally dropping AAAA records for IPv6, or your IPv6-capable device not being compatible with your ISP's legacy IPv4 network, etc.
Of course this doesn't mean that you should blindly disable IPv6 on your Linux box. With all the benefits promised by IPv6, we want to fully embrace it eventually, but as part of the troubleshooting process for end-user hiccups like those above, you may try turning off IPv6 to see whether it is indeed the culprit.
Here are a few techniques allowing you to disable IPv6 partially (e.g., for a certain network interface) or completely on Linux. These tips should be applicable to all major Linux distributions including Ubuntu, Debian, Linux Mint, CentOS, Fedora, RHEL, and Arch Linux.

Check if IPv6 is Enabled on Linux

All modern Linux distributions have IPv6 enabled by default. To see whether IPv6 is activated on your Linux system, use the ifconfig or ip commands. If you see "inet6" in the output of these commands, your Linux system has IPv6 enabled.
$ ifconfig

$ ip addr
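You can also query the kernel setting directly with sysctl (a printed value of 0 means IPv6 is enabled, 1 means it is disabled):
$ sysctl net.ipv6.conf.all.disable_ipv6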

Disable IPv6 Temporarily

If you want to turn off IPv6 temporarily on your Linux system, you can use the /proc file system. By "temporarily", we mean that the change we make to disable IPv6 will not be preserved across reboots. IPv6 will be enabled again after you reboot your Linux box.
To disable IPv6 for a particular network interface, use the following command.
$ sudo sh -c 'echo 1 > /proc/sys/net/ipv6/conf/<interface>/disable_ipv6'
For example, to disable IPv6 for the eth0 interface:
$ sudo sh -c 'echo 1 > /proc/sys/net/ipv6/conf/eth0/disable_ipv6'

To re-enable IPv6 on the eth0 interface:
$ sudo sh -c 'echo 0 > /proc/sys/net/ipv6/conf/eth0/disable_ipv6'
If you want to disable IPv6 system-wide for all interfaces including the loopback interface, use this command:
$ sudo sh -c 'echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6'

Disable IPv6 Permanently across Reboots

The above method does not permanently disable IPv6 across reboots. IPv6 will be activated again once you reboot your system. If you want to turn off IPv6 for good, there are several ways you can do it.

Method One

The first method is to apply the above /proc changes persistently via the /etc/sysctl.conf file.
That is, open /etc/sysctl.conf with a text editor, and add the following lines.
# to disable IPv6 on all interfaces system wide
net.ipv6.conf.all.disable_ipv6 = 1

# to disable IPv6 on a specific interface (e.g., eth0, lo)
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1
To activate these changes in /etc/sysctl.conf, run:
$ sudo sysctl -p /etc/sysctl.conf
or simply reboot.

Method Two

An alternative way to disable IPv6 permanently is to pass a necessary kernel parameter via GRUB/GRUB2 during boot time.
Open /etc/default/grub with a text editor, and add "ipv6.disable=1" to GRUB_CMDLINE_LINUX variable.
$ sudo vi /etc/default/grub
GRUB_CMDLINE_LINUX="xxxxx ipv6.disable=1"
In the above, "xxxxx" denotes any existing kernel parameter(s). Add "ipv6.disable=1" after them.

Finally, don't forget to apply the modified GRUB/GRUB2 settings by running:
On Debian, Ubuntu or Linux Mint:
$ sudo update-grub
On Fedora, CentOS/RHEL:
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Now IPv6 will be completely disabled once you reboot your Linux system.

Other Optional Steps after Disabling IPv6

Here are a few optional steps you can consider after disabling IPv6. Even if IPv6 is disabled in the kernel, other programs may still try to use it. In most cases, such application behavior will not break things, but you may want to disable IPv6 for those programs as well, for efficiency or safety reasons.

/etc/hosts

Depending on your setup, /etc/hosts may contain one or more IPv6 hosts and their addresses. Open /etc/hosts with a text editor, and comment out all lines which contain IPv6 hosts.
$ sudo vi /etc/hosts
# comment these IPv6 hosts
# ::1     ip6-localhost ip6-loopback
# fe00::0 ip6-localnet
# ff00::0 ip6-mcastprefix
# ff02::1 ip6-allnodes
# ff02::2 ip6-allrouters

Network Manager

If you are using NetworkManager to manage your network settings, you can disable IPv6 on NetworkManager as follows. Open the wired connection on NetworkManager, click on "IPv6 Settings" tab, and choose "Ignore" in "Method" field. Save the change and exit.

SSH server

By default, OpenSSH server (sshd) tries to bind on both IPv4 and IPv6 addresses.
To force sshd to bind only to IPv4 addresses, open /etc/ssh/sshd_config with a text editor, and add the following line. inet is for IPv4 only, and inet6 is for IPv6 only.
$ sudo vi /etc/ssh/sshd_config
AddressFamily inet
and restart the sshd server.

Wednesday, February 25, 2015

Scripted window actions on Ubuntu with Devilspie 2

https://www.howtoforge.com/tutorial/ubuntu-desktop-devilspie-2

Devilspie2 is a program that detects windows as they are created and performs scripted actions on them. The scripts are written in Lua, allowing a great deal of customization. This tutorial will show you the installation of Devilspie 2 on Ubuntu 14.04 and give you an introduction to Devilspie scripting.

What is LUA?

Lua is a powerful, fast, lightweight, embeddable scripting language. Lua combines simple procedural syntax with powerful data description constructs based on associative arrays and extensible semantics. Lua is dynamically typed, runs by interpreting bytecode for a register-based virtual machine, and has automatic memory management with incremental garbage collection, making it ideal for configuration, scripting, and rapid prototyping.

For further information visit: http://www.lua.org/

Installation.

Type in the following:
sudo apt-get install devilspie2
(Make sure it is devilspie2, because the original devilspie is no longer in development.)
Unfortunately, the rules of the original Devils Pie are not supported in Devilspie 2 anymore.

Config and Scripting.

If you don't give devilspie2 a folder with the --folder option, it will read Lua scripts from the ~/.config/devilspie2/ folder, creating that folder if it doesn't already exist. If devilspie2 doesn't find any Lua files in the folder, it will stop execution.




Sample Scripts.

-- the debug_print command only prints anything to stdout
-- if devilspie2 is run using the --debug option

debug_print("Window Name: "..get_window_name());
debug_print("Application name: "..get_application_name())

I want my Xfce4-terminal to the right on the second screen of my two-monitor setup:

if (get_window_name()=="Terminal") then
	-- x,y, xsize, ysize
	set_window_geometry(1600,300,900,700);
end

Make Iceweasel always start maximized.

if (get_application_name()=="Iceweasel") then
	maximize();
end
To learn more about the scripting language, visit the following:
  • FAQ: www.lua.org/FAQ.html
  • Documentation: www.lua.org/docs.html
  • Tutorials: http://lua-users.org/wiki/TutorialDirectory

Script Commands.

get_window_name()
     returns a string containing the name of the current window.

get_application_name()
     returns the application name of the current window.

set_window_position(xpos, ypos)
     Sets the position of a window.

set_window_size(xsize, ysize)
     Sets the size of a window.

set_window_geometry(xpos, ypos, xsize, ysize)
     Set the geometry of a window.

make_always_on_top()
     Sets the window's always-on-top flag.

set_on_top()
     Sets a window on top of the others (this will however not lock the window in this position).

debug_print()
     Debug helper that prints a string to stdout. It is only printed if devilspie2 is run with the --debug option.

shade()
     "Shades" a window, showing only the title-bar.

unshade()
     Unshades a window - the opposite of "shade".

maximize()
     maximizes a window

unmaximize()
     unmaximizes a window

maximize_vertically()
     maximizes the current window vertically.

maximize_horisontally()
     maximizes the current window horizontally.

minimize()
     minimizes a window

unminimize()
     unminimizes a window, that is bringing it back to screen from the minimized position/size.

decorate_window()
     Shows all window decoration.

undecorate_window()
     Removes all window decorations.

set_window_workspace(number)
     Moves a window to another workspace. The number variable starts counting at 1.

change_workspace(number)
     Changes the current workspace to another. The number variable starts counting at 1.

pin_window()
     asks the window manager to put the window on all workspaces.

unpin_window()
     Asks the window manager to put window only in the currently active workspace.

stick_window()
     Asks the window manager to keep the window's position fixed on the screen, even when the workspace or viewport scrolls.

unstick_window()
     Asks the window manager to not have window's position fixed on the screen when the workspace or viewport scrolls.

This concludes the tutorial on using devilspie2.


Creating Forms for Easy LibreOffice Database Entry on Linux

http://www.linux.com/learn/tutorials/811444-creating-forms-for-easy-libreoffice-database-entry-on-linux

The LibreOffice suite of tools includes a very powerful database application ─ one that happens to be incredibly user-friendly. These databases can be managed/edited by any user and data can be entered by anyone using a LibreOffice-generated form. These forms are very simple to create and can be attached to existing databases or you can create both a database and a form in one fell swoop.
There are two ways to create LibreOffice Base forms:
  • Form Wizard
  • Design View.
Design view is a versatile drag and drop form creator that is quite powerful and allows you to add elements and assign those elements to database tables. The Form Wizard is a very simple step-by-step wizard that walks the user through the process of creating a form. Although the Wizard isn’t nearly as powerful as the Design View ─ it will get the job done quickly and doesn’t require any form design experience.
For this entry, I will address the Form Wizard (in a later post, I will walk you through the more challenging Design View). I will assume you already have a database created and ready for data entry. This database can either be created with LibreOffice and reside on the local system or be a remote database of the format:
  • Oracle JDBC
  • Spreadsheet
  • dBASE
  • Text
  • MySQL
  • ODBC.
For purposes of simplicity, we’ll go with a local LibreOffice Base-generated database. I’ve created a very simple database with two tables to be used for this process. Let’s create a data entry form for this database.

Opening the database

The first step is to open LibreOffice Base. When the Database Wizard window appears (Figure 1), select Open an existing database file, click the Open button, navigate to the database to be used, and click Finish.
Figure 1: Opening your database for editing in LibreOffice Base.
The next window to appear is the heart and soul of LibreOffice Base. Here (Figure 2) you can manage tables, run queries, create/edit forms, and view reports of the opened database.
Figure 2: The heart and soul of LibreOffice Base.
Click the Forms button in the left-side navigation and then double-click Use Wizard to Create Form under Tasks.
When the database opens in the Form Wizard, your first step is to select the fields available to the form. You do not have to select all fields from the database. You can select them all or you can select as few as one.
If your database has more than one table, you can select between the tables in the Tables or queries drop-down (NOTE: You can only select fields from one table in the database at this point). Select the table to be used and then add the fields from the Available fields section to the Fields in the form section (Figure 3).
Figure 3: Adding fields to be used with your form.

Add a sub-form

Once you’ve selected all the necessary fields, click Next. At this point, you can choose to add a sub-form. A sub-form is a form-within-a-form and allows you to add more specific data to the original form. For example, you can include secondary data for employee records (such as work history, raises, etc.) to a form. This is the point at which you can include fields from other tables (besides the initial table selected from the Tables or queries drop-down). If you opt to create a sub-form for your data, the steps include:
  • Selecting the table
  • Adding the fields
  • Joining the fields (such as AuthorID to ID ─ Figure 4).
Figure 4: Adding sub-forms to your form.

Arrange form controls

After all sub-forms are added, click Next to continue. In the next step, you must arrange the controls of the form. This is just another way of saying how you want the form to look and feel (where you want the data entry field to reside relative to the field label). You can have different layouts for forms and sub-forms (Figure 5).
Figure 5: Selecting the arrangement of the form and sub-form controls.

Select data entry mode

Click Next when you’ve arranged your controls. The next step is to select the data entry mode (Figure 6). There are two data entry modes:
  • Enter new data only
  • Display all data.
If you want to use the form only as a means to enter new data, select Enter new data only. If, however, you know you’ll want to use the form to enter and view data, select Display all data. If you go for the latter option, you will want to select whether previously entered data can be modified or not. If you want to prevent write access to the previous data, select Do not allow modification of existing data.
Figure 6: Selecting if the form is to be used only for entering new data or not.
Make your selection and click Next.

Start entering data

At this point you can select a style for your form. This allows you to pick a color and field border (no border, 3D border, or flat). Make your selection and click Next.
The last step is to name your form. In this same window, you can select the option to immediately begin working with the form (Figure 7). Select that option and click Finish. At this point, your form will open and you can start entering data.
Figure 7: You are ready to start working with your form!
After a form is created, and you’ve worked with and closed said form … how do you re-open a form to add more data? Simple:
  1. Open LibreOffice Base.
  2. Open the existing database (in the same manner you did when creating the form).
  3. Double-click the form name under Forms (Figure 8).
  4. Start entering data.
Figure 8: Opening a previously created form.
As a final note, make sure, after you finish working with your forms, that you click File > Save in the LibreOffice Base main window, to ensure you save all of your work.
You can create as many forms as you need with a single database ─ there is no limit to what you can do.
If you’re looking to easily enter data into LibreOffice databases, creating user-friendly forms is just a few steps away. Next time we visit this topic, we’ll walk through the Design View method of form creation.

Localhost DNS Cache

http://www.linuxjournal.com/content/localhost-dns-cache

Is it weird to say that DNS is my favorite protocol? Because DNS is my favorite protocol. There's something about the simplicity of UDP packets combined with the power of a service that the entire Internet relies on that grabs my interest. Through the years, I've been impressed with just how few resources you need to run a modest DNS infrastructure for an internal network.

Recently, as one of my environments started to grow, I noticed that even though the DNS servers were keeping up with the load, the query logs were full of queries for the same hosts over and over within seconds of each other. You see, often a default Linux installation does not come with any sort of local DNS caching. That means that every time a hostname needs to be resolved to an IP, the external DNS server is hit no matter what TTL you set for that record.

This article explains how simple it is to set up a lightweight local DNS cache that does nothing more than forward DNS requests to your normal resolvers and honor the TTL of the records it gets back.

There are a number of different ways to implement DNS caching. In the past, I've used systems like nscd that intercept DNS queries before they would go to name servers in /etc/resolv.conf and see if they already are present in the cache. Although it works, I always found nscd more difficult to troubleshoot than DNS when something went wrong. What I really wanted was just a local DNS server that honored TTL but would forward all requests to my real name servers. That way, I would get the speed and load benefits of a local cache, while also being able to troubleshoot any errors with standard DNS tools.

The solution I found was dnsmasq. Normally I am not a big advocate for dnsmasq, because it's often touted as an easy-to-configure full DNS and DHCP server solution, and I prefer going with standalone services for that. Dnsmasq often will be configured to read /etc/resolv.conf for a list of upstream name servers to forward to and use /etc/hosts for zone configuration. I wanted something completely different. I had full-featured DNS servers already in place, and if I liked relying on /etc/hosts instead of DNS for hostname resolution, I'd hop in my DeLorean and go back to the early 1980s. Instead, the bulk of my dnsmasq configuration will be focused on disabling a lot of the default features.

The first step is to install dnsmasq. This software is widely available for most distributions, so just use your standard package manager to install the dnsmasq package. In my case, I'm installing this on Debian, so there are a few Debianisms to deal with that you might not have to consider if you use a different distribution. First is the fact that there are some rather important settings placed in /etc/default/dnsmasq. The file is fully commented, so I won't paste it here. Instead, I list two variables I made sure to set:

ENABLED=1
IGNORE_RESOLVCONF=yes

The first variable makes sure the service starts, and the second will tell dnsmasq to ignore any input from the resolvconf service (if it's installed) when determining what name servers to use. I will be specifying those manually anyway.

The next step is to configure dnsmasq itself. The default configuration file can be found at /etc/dnsmasq.conf, and you can edit it directly if you want, but in my case, Debian automatically sets up an /etc/dnsmasq.d directory and will load the configuration from any file you find in there. As a heavy user of configuration management systems, I prefer the servicename.d configuration model, as it makes it easy to push different configurations for different uses. If your distribution doesn't set up this directory for you, you can just edit /etc/dnsmasq.conf directly or look into adding an option like this to dnsmasq.conf:

conf-dir=/etc/dnsmasq.d

In my case, I created a new file called /etc/dnsmasq.d/dnscache.conf with the following settings:

no-hosts
no-resolv
listen-address=127.0.0.1
bind-interfaces
server=/dev.example.com/10.0.0.5
server=/10.in-addr.arpa/10.0.0.5
server=/dev.example.com/10.0.0.6
server=/10.in-addr.arpa/10.0.0.6
server=/dev.example.com/10.0.0.7
server=/10.in-addr.arpa/10.0.0.7

Let's go over each setting. The first, no-hosts, tells dnsmasq to ignore /etc/hosts and not use it as a source of DNS records. You want dnsmasq to use your upstream name servers only. The no-resolv setting tells dnsmasq not to use /etc/resolv.conf for the list of name servers to use. This is important, as later on, you will add dnsmasq's own IP to the top of /etc/resolv.conf, and you don't want it to end up in some loop. The next two settings, listen-address and bind-interfaces ensure that dnsmasq binds to and listens on only the localhost interface (127.0.0.1). You don't want to risk outsiders using your service as an open DNS relay.

The server configuration lines are where you add the upstream name servers you want dnsmasq to use. In my case, I added three different upstream name servers in my preferred order. The syntax for this line is server=/domain_to_use/nameserver_ip. So in the above example, it would use those name servers for dev.example.com resolution. In my case, I also wanted dnsmasq to use those name servers for IP-to-name resolution (PTR records), so since all the internal IPs are in the 10.x.x.x network, I added 10.in-addr.arpa as the domain.

Once this configuration file is in place, restart dnsmasq so the settings take effect. Then you can use dig pointed to localhost to test whether dnsmasq works:

$ dig ns1.dev.example.com @localhost

; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> ns1.dev.example.com @localhost
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4208
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;ns1.dev.example.com.       IN  A

;; ANSWER SECTION:
ns1.dev.example.com.  265  IN  A  10.0.0.5

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Thu Sep 18 00:59:18 2014
;; MSG SIZE  rcvd: 56

Here, I tested ns1.dev.example.com and saw that it correctly resolved to 10.0.0.5. If you inspect the dig output, you can see near the bottom of the output that SERVER: 127.0.0.1#53(127.0.0.1) confirms that I was indeed talking to 127.0.0.1 to get my answer. If you run this command again shortly afterward, you should notice that the TTL setting in the output (in the above example it was set to 265) will decrement. Dnsmasq is caching the response, and once the TTL gets to 0, dnsmasq will query a remote name server again.

After you have validated that dnsmasq functions, the final step is to edit /etc/resolv.conf and make sure that you have nameserver 127.0.0.1 listed above all other nameserver lines. Note that you can leave all of the existing name servers in place. In fact, that provides a means of safety in case dnsmasq ever were to crash. If you use DHCP to get an IP or otherwise have these values set from a different file (such as is the case when resolvconf is installed), you'll need to track down what files to modify instead; otherwise, the next time you get a DHCP lease, it will overwrite this with your new settings.
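The end result is an /etc/resolv.conf along these lines (the 10.0.0.x addresses being the example upstream name servers from earlier):

nameserver 127.0.0.1
nameserver 10.0.0.5
nameserver 10.0.0.6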

I deployed this simple change to around 100 servers in a particular environment, and it was amazing to see the dramatic drop in DNS traffic, load and log entries on my internal name servers. What's more, with this in place, the environment is even more tolerant in the case there ever were a real problem with downstream DNS servers—existing cached entries still would resolve for the host until TTL expired. So if you find your internal name servers are getting hammered with traffic, an internal DNS cache is something you definitely should consider.

10 quick tar command examples to create/extract archives in Linux

http://www.binarytides.com/linux-tar-command

Tar command on Linux

The tar (tape archive) command is a frequently used command on linux that allows you to store files into an archive.
The commonly seen file extensions are .tar.gz and .tar.bz2, which are tar archives further compressed using the gzip or bzip2 algorithms respectively.
In this tutorial we shall take a look at simple examples of using the tar command to do daily jobs of creating and extracting archives on linux desktops or servers.

Using the tar command

The tar command is available by default on most linux systems and you do not need to install it separately.
With tar there are 2 compression formats, gzip and bzip2. The "z" option specifies gzip and the "j" option specifies bzip2. It is also possible to create uncompressed archives.

1. Extract a tar.gz archive

Well, the more common use is to extract tar archives. The following command shall extract the files out of a tar.gz archive
$ tar -xvzf tarfile.tar.gz
Here is a quick explanation of the parameters used -
x - Extract files
v - verbose, print the file names as they are extracted one by one
z - The file is a "gzipped" file
f - Use the following tar archive for the operation
Those are some of the important options to memorise
Extract tar.bz2/bzip2 archives
Files with extension bz2 are compressed with the bzip2 algorithm and the tar command can deal with them as well. Use the j option instead of the z option.
$ tar -xvjf archivefile.tar.bz2

2. Extract files to a specific directory or path

To extract out the files to a specific directory, specify the path using the "-C" option. Note that it's a capital C.
$ tar -xvzf abc.tar.gz -C /opt/folder/
However, first make sure that the destination directory exists, since tar is not going to create the directory for you and will fail if it does not exist.

3. Extract a single file

To extract a single file out of an archive, just add the file name after the command like this
$ tar -xz -f abc.tar.gz "./new/abc.txt"
More than one file can be specified in the above command like this
$ tar -xv -f abc.tar.gz "./new/cde.txt" "./new/abc.txt"

4. Extract multiple files using wildcards

Wildcards can be used to extract out a bunch of files matching the given wildcards. For example all files with ".txt" extension.
$ tar -xv -f abc.tar.gz --wildcards "*.txt"



5. List and search contents of the tar archive

If you want to just list out the contents of the tar archive and not extract them, use the "-t" option. The following command prints the contents of a gzipped tar archive,
$ tar -tz -f abc.tar.gz
./new/
./new/cde.txt
./new/subdir/
./new/subdir/in.txt
./new/abc.txt
...
Pipe the output to grep to search for a file, or to the less command to browse the list. Using the "v" verbose option shall print additional details about each file.
For tar.bz2/bzip2 files use the "j" option
Use the above command in combination with the grep command to search the archive. Simple!
$ tar -tvz -f abc.tar.gz | grep abc.txt
-rw-rw-r-- enlightened/enlightened 0 2015-01-13 11:40 ./new/abc.txt

6. Create a tar/tar.gz archive

Now that we have learnt how to extract existing tar archives, it's time to start creating new ones. The tar command can be told to put selected files in an archive or an entire directory. Here are some examples.
The following command creates a tar archive using a directory, adding all files in it and sub directories as well.
$ tar -cvf abc.tar ./new/
./new/
./new/cde.txt
./new/abc.txt
The above example does not create a compressed archive, just a plain archive that puts multiple files together without any real compression.
In order to compress, use the "z" or "j" option for gzip or bzip respectively.
$ tar -cvzf abc.tar.gz ./new/
The extension of the file name does not really matter. ".tar.gz" and ".tgz" are common extensions for files compressed with gzip. ".tar.bz2" and ".tbz" are commonly used extensions for bzip2 compressed files.

7. Ask confirmation before adding files

A useful option is "w", which makes tar ask for confirmation for every file before adding it to the archive. This can sometimes be useful.
Only those files which are given a yes answer would be added. If you do not enter anything, the default answer is "No".
# Add specific files

$ tar -czw -f abc.tar.gz ./new/*
add ‘./new/abc.txt’?y
add ‘./new/cde.txt’?y
add ‘./new/newfile.txt’?n
add ‘./new/subdir’?y
add ‘./new/subdir/in.txt’?n

# Now list the files added
$ tar -t -f abc.tar.gz 
./new/abc.txt
./new/cde.txt
./new/subdir/

8. Add files to existing archives

The r option can be used to add files to existing archives, without having to create new ones. Here is a quick example
$ tar -rv -f abc.tar abc.txt
Files cannot be added to compressed archives (gz or bzip2); files can only be added to plain tar archives.

9. Add files to compressed archives (tar.gz/tar.bz2)

As already mentioned, it's not possible to add files to compressed archives. However, it can still be done with a simple trick. Use the gunzip command to uncompress the archive, add the file to the archive, and compress it again.
$ gunzip archive.tar.gz
$ tar -rf archive.tar ./path/to/file
$ gzip archive.tar
For bzip2 files, use the bunzip2 and bzip2 commands to uncompress and compress respectively.

10. Backup with tar

A common real-world scenario is to back up directories at regular intervals. The tar command can be scheduled to take such backups via cron. Here is an example -
$ tar -cvz -f archive-$(date +%Y%m%d).tar.gz ./new/
Run the above command via cron and it would keep creating backup files with names like 'archive-20150218.tar.gz'.
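A hypothetical crontab entry for a nightly backup at 2 AM might look like this (the paths here are illustrative; note that % is special in crontab and must be escaped with a backslash):

0 2 * * * tar -cz -f /backups/archive-$(date +\%Y\%m\%d).tar.gz /home/user/new/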
Of course, make sure that the disk does not fill up with ever-larger archives.

11. Verify archive files while creation

The "W" option can be used to verify the files after creating archives. Here is a quick example.
$ tar -cvW -f abc.tar ./new/
./new/
./new/cde.txt
./new/subdir/
./new/subdir/in.txt
./new/newfile.txt
./new/abc.txt
Verify ./new/
Verify ./new/cde.txt
Verify ./new/subdir/
Verify ./new/subdir/in.txt
Verify ./new/newfile.txt                                                                                                                              
Verify ./new/abc.txt
Note that the verification cannot be done on compressed archives. It works only with uncompressed tar archives.
That's all for now. For more, check out the man page for the tar command, with "man tar".

5 specialized Linux distributions for computer repair

http://opensource.com/life/15/2/five-specialized-linux-distributions-computer-repair

Computers are incredible tools that let users do amazing things, but sometimes things go wrong. The problem could be as small as accidentally deleting files or forgetting a password, or as major as having an operating system rendered non-bootable by file system corruption. Or, worst case scenario, a hard drive dying completely. In each of these cases, and many more like them, there are specialized tools that can aid you in fixing problems with a computer or help you be prepared for when something bad does happen.

Many of these tools are actually highly specialized Linux distributions. These distributions have a much narrower focus than the major desktop and server Linux distributions. While the vast majority of the same software packages are included in the repositories for the major distributions, these specialized distributions are designed to put all the programs you would need for computer repair or backup/restoration in one convenient place. Many of them even have customized user interfaces to make using the software easier.

Below, I look at five different Linux distributions designed to make your life easier when computers start giving you a headache. Give them a try, and make sure you keep CDs or USB drives with your favorites handy for when something does go wrong. If you like, you can even try using Scott Nesbitt's instructions for how to test drive Linux to install these distributions to a USB stick instead of burning a CD or using the sometimes more complex instructions available on the projects' websites for creating a bootable flash drive installation.

Clonezilla Live

Designed for backup and recovery of disk images and partitions, Clonezilla Live is an open source alternative to Norton Ghost. Clonezilla can save images to and restore images from a local device (such as a hard disk or USB flash drive) or over the network using SSH, Samba, or NFS. The underlying software used for creating images is Partclone, which provides a wide array of options and supports a large number of file systems. Clonezilla's user interface is a spartan ncurses-based menu system, but is very usable. The menu options in the interface walk you through everything. As an added bonus, once you have selected a task, Clonezilla provides you with the command line options you can use to run that task again without having to work your way through all the menus.
Clonezilla is developed by Taiwan's National Center for High-Performance Computing's Free Software Labs and is released under the GNU General Public License Version 2. Users needing an even more robust backup and recovery system should check out Clonezilla Server Edition, which works much like the Live version but requires a dedicated server installation.

Rescatux

Rescatux is a repair distribution designed to fix problems with both Linux and Windows. It's still a beta release, so there are some rough edges, but it provides easy access to various tools using its wizard, Rescapp. The wizard helps you perform various repair tasks without having to have extensive knowledge of the command line. You can reset passwords for Windows and Linux, restore GRUB or a Windows Master Boot Record, and perform a file system check for Linux systems. There are also a few "expert tools" for testing and repairing hard disks and recovering deleted files. Despite the beta nature of Rescatux, the inline documentation is already quite good, and you can learn even more by visiting the Rescatux wiki or by watching the tutorial videos on YouTube.
Based on Debian 7 (Wheezy), Rescatux is released under Version 3 of the GNU General Public License.

Redo Backup & Recovery

Like Clonezilla Live, Redo Backup & Recovery uses Partclone to clone disks and partitions. However, unlike Clonezilla, it has a polished graphic user interface. Redo Backup & Recovery boots into a graphic environment and has a lightweight desktop which provides access to other tools you can use while Redo Backup & Recovery completes its tasks. In addition to the backup & restore functionality, Redo Backup and Recovery's desktop includes a file manager, terminal, text editor, web browser, and utilities to recover deleted files, manage partitions and logical volumes, and to erase all data on a drive and restore it to factory defaults.
The Redo Backup & Recovery utility is released under the GNU General Public License Version 3 and is based on Ubuntu 12.04 LTS.

SystemRescueCD

Aimed at system administrators, SystemRescueCD is a powerful tool for repairing Linux systems. By default, SystemRescueCD boots into a console interface with very little hand-holding, but a welcome message provides basic instructions for starting the network interface, running various command line programs (text editors and a web browser), enabling NTFS support in order to read Windows hard drives, and starting the XFCE-based graphical desktop environment. SystemRescueCD does include a large number of utilities, but you really need to know what you are doing to use it.
SystemRescueCD is based on Gentoo and is released under the GNU General Public License Version 2.

Trinity Rescue Kit

Designed for repairing Microsoft Windows, Trinity Rescue Kit provides a wide variety of tools to help rescue a broken Windows system. Trinity includes five different virus scanners: Clam AV, F-Prot, BitDefender, Vexira, and Avast (though Avast requires a license key). It also has an option for cleaning junk files, such as temp files and files in the Recycle Bin. Password resetting is handled by Winpass, which can reset passwords for the Administrator account or regular users. All of these features, and several other more advanced functions, are accessed using an interactive text menu, which includes a very extensive help file. The menu might intimidate someone not used to a text-based interface, but Trinity Rescue Kit is really easy to use.
Trinity Rescue Kit is released under Version 2 of the GNU General Public License.

Sunday, February 22, 2015

Linux Basics: How To Check The State Of A Network Interface Card

http://www.unixmen.com/linux-basics-check-state-network-interface-card

Normally, it is easy to check the state of a network interface card from a graphical desktop: whether the cable is plugged in, and whether the interface is up or down. But what if you only have the command line? Of course, you could walk around to the machine and check that the cable is properly plugged in, but it is much easier to do the same from your terminal. Here is how. The method is almost the same on both Debian- and RPM-based systems.

Check Network Card State

I have two network interfaces on my laptop. One, eth0, is wired, and the other, wlan0, is wireless.
Let us check the state of eth0.
cat /sys/class/net/eth0/carrier
Sample output:
0
Or, use the following command to check the status.
cat /sys/class/net/eth0/operstate
Sample output:
down
As you can see from the results above, the NIC is down, i.e. the cable is not connected.
Let me plug a network cable into the eth0 slot and check again.
After plugging in the cable, I executed the same commands:
cat /sys/class/net/eth0/carrier
Sample output:
1
Or,
cat /sys/class/net/eth0/operstate
Sample output:
up
Voila, eth0 is up: the cable is connected.
Keep in mind that this does not mean an IP address has been assigned to eth0; it only tells you that a cable is connected to the port. That's all.
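If you also want to confirm whether an address has actually been configured on the interface, the iproute2 tools can tell you (a quick check, assuming ip is available on your system):
ip -4 addr show eth0
If the interface has an IPv4 address, you will see an inet line in the output; otherwise there is none.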
Let us check for the wlan0 state.
cat /sys/class/net/wlan0/carrier
Sample output:
1
The result is 1, which means wlan0 is up and connected.
Or,
cat /sys/class/net/wlan0/operstate
Sample output:
up
Likewise, you can check all the network cards on your machine.
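For example, this small shell loop over /sys/class/net prints the state of every interface in one go (a minimal sketch; your interface names will differ):
for nic in /sys/class/net/*; do
    echo "$(basename "$nic"): $(cat "$nic"/operstate)"
done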
Cheers!

Thursday, February 19, 2015

How to share files between computers over network with btsync

http://xmodulo.com/share-files-between-computers-over-network.html

If you are the type of person who uses several devices to work online, I'm sure you must be using, or at least wishing to use, a method for syncing files and directories among those devices.
BitTorrent Sync, also known as btsync for short, is a cross-platform sync tool (freeware) which is powered by BitTorrent, the famous protocol for peer-to-peer (P2P) file sharing. Unlike classic BitTorrent clients, however, btsync encrypts traffic and grants access to shared files based on auto-generated keys across different operating system and device types.
More specifically, when you add files or folders to btsync as shareable, corresponding read/write keys (so-called secret codes) are created. These keys are then shared among devices via HTTPS links, emails, QR codes, and so on. Once two devices are paired via a key, the linked content can be synced directly between them. There is no file size limit, and transfer speeds are never throttled unless you explicitly say so. You can create accounts inside btsync, under which you can create and manage keys and files to share via the web interface.
BitTorrent Sync is available for multiple operating systems including Linux, Mac OS X, and Windows, as well as Android and iOS. In this tutorial, I will show you how to use BitTorrent Sync to sync files between a Linux box (a home server) and a Windows machine (a work laptop).

Installing Btsync on Linux

BitTorrent Sync is available for download from the project's website. I assume that the Windows version of BitTorrent Sync is already installed on your Windows laptop, which is very easy to do. I will focus on installing and configuring it on the Linux server.
On the download page, choose your architecture, right-click on the corresponding link, choose Copy link location (or similar, depending on your browser), and paste the link into wget in your terminal, as follows:
For 64-bit Linux:
# wget http://download.getsyncapp.com/endpoint/btsync/os/linux-x64/track/stable
For 32-bit Linux:
# wget http://download.getsyncapp.com/endpoint/btsync/os/linux-i386/track/stable

Once the download has completed, create a directory for the program and extract the tarball into it. The downloaded file is simply named stable, so run tar from the directory you downloaded it to:
# mkdir /usr/local/bin/btsync
# tar xzf stable -C /usr/local/bin/btsync

You can now either add /usr/local/bin/btsync to your PATH environment variable:
export PATH=$PATH:/usr/local/bin/btsync
or run the btsync binary right from that folder. We'll go with the first option, as it requires less typing and is easier to remember.
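Keep in mind that the export line above only lasts for the current shell session. To make it permanent, you could append it to your shell's startup file (assuming bash):
echo 'export PATH=$PATH:/usr/local/bin/btsync' >> ~/.bashrc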

Configuring Btsync

Btsync comes with a built-in web server which is used as the management interface for BitTorrent Sync. To be able to access the web interface, you need to create a configuration file. You can do that with the following command:
# btsync --dump-sample-config > btsync.config
Then edit the btsync.config file (webui section) with your preferred text editor, as follows:
"listen" : "0.0.0.0:8888",
"login" : "yourusername",
"password" : "yourpassword"
You can choose any username and password.

Feel free to check the README file in /usr/local/bin/btsync directory if you want to tweak the configuration further, but this will do for now.

Running Btsync for the First Time

As system administrators we believe in logs! So before launching btsync, we will create a log file for it.
# touch /var/log/btsync.log
Finally it's time to start btsync:
# btsync --config /usr/local/bin/btsync/btsync.config --log /var/log/btsync.log

Now point your web browser to the IP address of the Linux server and the port btsync is listening on (192.168.0.15:8888 in my case), and agree to the privacy policy, terms, and EULA:

and you will be taken to the home page of your btsync installation:

Click on Add a folder, and choose a directory in your file system that you want to share. In our example, we will use /btsync:

That's enough for now. Please install BitTorrent Sync on your Windows machine (or another Linux box, if you want) before proceeding.

Sharing Files with Btsync

The following screencast shows how to sync an existing folder on a Windows 8 machine [192.168.0.106]. After adding the desired folder, get its key and add it to your Linux installation via the "Enter a key or link" menu (as shown in the previous image); the sync will then start:
Now repeat the process for other computers or devices: select a folder or files to share, and import the corresponding key(s) into your "central" btsync installation via the web interface on your Linux server.

Auto-start Btsync as a Normal User

You will notice that the synced files in the screencast were created in the /btsync directory, owned by user and group 'root'. That is because we launched BitTorrent Sync manually as the superuser. Under normal circumstances, however, you will want BitTorrent Sync to start on boot and run as a non-privileged user (www-data, or another special account created for that purpose, such as a btsync user).
To do so, create a user called btsync, and add the following stanza to the /etc/rc.local file (before the exit 0 line):
sudo -u btsync /usr/local/bin/btsync/btsync --config /usr/local/bin/btsync/btsync.config --log /var/log/btsync.log
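If the btsync account does not exist yet, creating it as a system user with no login shell might look like this (a sketch; the exact options and the path to nologin vary slightly between distributions):
# useradd --system --no-create-home --shell /usr/sbin/nologin btsync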
Finally, create the pid file:
# touch /usr/local/bin/btsync/.sync/sync.pid
and change the ownership of /usr/local/bin/btsync recursively:
# chown -R btsync:root /usr/local/bin/btsync
Now reboot and verify that btsync is running as the intended user:
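One way to check from the terminal is to look up the owner of the running process via the pid file we created above (a sketch):
# ps -o user= -p $(cat /usr/local/bin/btsync/.sync/sync.pid)
This should print btsync rather than root.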

Depending on your distribution, you may find other ways to enable btsync to start on boot. In this tutorial I chose the rc.local approach since it's distribution-agnostic.

Final Remarks

As you can see, BitTorrent Sync is almost like a server-less Dropbox. I say "almost" because of one caveat: when you sync between devices on the same local network, the sync happens directly between the two devices. However, if you sync across different networks and the devices to be paired are behind restrictive firewalls, there is a chance that the sync traffic will go through a third-party relay server operated by BitTorrent. While they claim the traffic is AES-encrypted, you may still not want this to happen. For your privacy, be sure to turn off the relay/tracker server options in every folder that you share.

Hope it helps! Happy syncing!

Linux Namespaces

https://www.howtoforge.com/linux-namespaces

Background

Starting from kernel 2.6.24, Linux supports 6 different types of namespaces. Namespaces are useful in creating processes that are more isolated from the rest of the system, without needing to use full low level virtualization technology.
  • CLONE_NEWIPC: IPC Namespaces: SystemV IPC and POSIX Message Queues can be isolated.
  • CLONE_NEWPID: PID Namespaces: PIDs are isolated, meaning that a process inside the namespace can have the same virtual PID as an unrelated process outside of it. PIDs inside the namespace are mapped to different PIDs outside of it. The first PID inside the namespace is '1', which outside of the namespace is assigned to init.
  • CLONE_NEWNET: Network Namespaces: Networking (/proc/net, IPs, interfaces and routes) is isolated. Services can be run on the same ports within different namespaces, and "duplicate" virtual interfaces can be created.
  • CLONE_NEWNS: Mount Namespaces. We have the ability to isolate mount points as they appear to processes. Using mount namespaces, we can achieve similar functionality to chroot(), but with improved security.
  • CLONE_NEWUTS: UTS Namespaces. This namespace's primary purpose is to isolate the hostname and the NIS domain name.
  • CLONE_NEWUSER: User Namespaces. Here, user and group IDs can differ inside and outside of a namespace, and the same numeric IDs can exist in both.
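Before we get to the C code, note that recent versions of util-linux ship an unshare command that exposes some of these flags directly. For a quick taste of a PID namespace (a sketch, assuming your unshare supports the --fork and --mount-proc options):
root@w:~# unshare --fork --pid --mount-proc /bin/bash
root@w:~# ps aux
Inside the new shell, ps should show only bash and ps itself, with low PIDs, much like the C demonstration below.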
Let's look first at the structure of a C program required to demonstrate process namespaces. The following has been tested on Debian 6 and 7. First, we need to allocate a page of memory on the stack, and set a pointer to the end of that memory page; clone expects the topmost address of the child's stack, since the stack grows downward. We use alloca to allocate stack memory rather than malloc, which would allocate memory on the heap.
void *mem = alloca(sysconf(_SC_PAGESIZE)) + sysconf(_SC_PAGESIZE);
Next, we use clone to create a child process, passing the location of our child stack 'mem', as well as the required flags to specify a new namespace. We specify 'callee' as the function to execute within the child space:
mypid = clone(callee, mem, SIGCHLD | CLONE_NEWIPC | CLONE_NEWPID | CLONE_NEWNS | CLONE_FILES, NULL);
After calling clone we wait for the child process to finish before terminating the parent. If we didn't, the parent's execution flow would continue and terminate immediately afterwards, taking the child with it:
while (waitpid(mypid, &r, 0) < 0 && errno == EINTR)
{
	continue;
}
Lastly, we'll return to the shell with the exit code of the child:
if (WIFEXITED(r))
{
	return WEXITSTATUS(r);
}
return EXIT_FAILURE;
Now, let's look at the callee function:
static int callee()
{
	int ret;
	mount("proc", "/proc", "proc", 0, "");
	setgid(u);
	setgroups(0, NULL);
	setuid(u);
	ret = execl("/bin/bash", "/bin/bash", NULL);
	return ret;
}
Here, we mount a /proc filesystem, and then set the uid (User ID) and gid (Group ID) to the value of 'u' before spawning the /bin/bash shell. (LXC, an OS-level virtualization tool, builds on the same cgroups and namespaces machinery for resource isolation.) Let's put it all together, setting 'u' to 65534, which is user "nobody" and group "nogroup" on Debian:
#define _GNU_SOURCE
#include <alloca.h>     /* alloca() */
#include <errno.h>
#include <grp.h>        /* setgroups() */
#include <sched.h>      /* clone() and the CLONE_* flags */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mount.h>  /* mount() */
#include <sys/types.h>
#include <sys/wait.h>   /* waitpid() */
#include <unistd.h>
static int callee();
const int u = 65534;
int main(int argc, char *argv[])
{
	int r;
	pid_t mypid;
	void *mem = alloca(sysconf(_SC_PAGESIZE)) + sysconf(_SC_PAGESIZE);
	mypid = clone(callee, mem, SIGCHLD | CLONE_NEWIPC | CLONE_NEWPID | CLONE_NEWNS | CLONE_FILES, NULL);
	while (waitpid(mypid, &r, 0) < 0 && errno == EINTR)
	{
		continue;
	}
	if (WIFEXITED(r))
	{
		return WEXITSTATUS(r);
	}
	return EXIT_FAILURE;
}
static int callee()
{
	int ret;
	mount("proc", "/proc", "proc", 0, "");
	setgid(u);
	setgroups(0, NULL);
	setuid(u);
	ret = execl("/bin/bash", "/bin/bash", NULL);
	return ret;
}
Compiling and executing the code produces the following:
root@w:~/pen/tmp# gcc -O -Wall -Werror -o ns ns.c
root@w:~/pen/tmp# ./ns
nobody@w:~/pen/tmp$ id
uid=65534(nobody) gid=65534(nogroup)
nobody@w:~/pen/tmp$ ps auxw
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
nobody       1  0.0  0.0   4620  1816 pts/1    S    21:21   0:00 /bin/bash
nobody       5  0.0  0.0   2784  1064 pts/1    R+   21:21   0:00 ps auxw
nobody@w:~/pen/tmp$ 
Notice that the UID and GID are set to that of nobody and nogroup. Specifically notice that the full ps output shows only two running processes and that their PIDs are 1 and 5 respectively. Now, let's move on to using ip netns to work with network namespaces. First, let's confirm that no namespaces exist currently:
root@w:~# ip netns list
Object "netns" is unknown, try "ip help".
In this case, either ip needs an upgrade or the kernel does. Assuming you have a kernel newer than 2.6.24, it is most likely ip. After upgrading, ip netns list should return nothing. Let's add a new namespace called 'ns1':
root@w:~# ip netns add ns1
root@w:~# ip netns list
ns1
First, let's list the current interfaces:
root@w:~# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 1000
    link/ether 00:0c:29:65:25:9e brd ff:ff:ff:ff:ff:ff
Now to create a new virtual interface, and add it to our new namespace. Virtual interfaces are created in pairs, and are linked to each other - imagine a virtual crossover cable:
root@w:~# ip link add veth0 type veth peer name veth1
root@w:~# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 1000
    link/ether 00:0c:29:65:25:9e brd ff:ff:ff:ff:ff:ff
3: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether d2:e9:52:18:19:ab brd ff:ff:ff:ff:ff:ff
4: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether f2:f7:5e:e2:22:ac brd ff:ff:ff:ff:ff:ff
ifconfig -a will also now show the addition of both veth0 and veth1. Great, now to assign one end of the pair, veth1, to our new namespace. Note that ip netns exec is used to execute commands within the namespace:
root@w:~# ip link set veth1 netns ns1
root@w:~# ip netns exec ns1 ip link list
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether d2:e9:52:18:19:ab brd ff:ff:ff:ff:ff:ff
ifconfig -a will now only show veth0, as veth1 is in the ns1 namespace. Should we want to delete the pair (deleting one end of a veth pair removes the other end as well):
ip netns exec ns1 ip link del veth1
We can now assign IP address 192.168.5.5/24 to veth0 on our host:
ifconfig veth0 192.168.5.5/24
And assign veth1 192.168.5.10/24 within ns1:
ip netns exec ns1 ifconfig veth1 192.168.5.10/24 up
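If you prefer to stay entirely within iproute2 rather than mixing in ifconfig, the equivalent commands would be (assuming the same addresses as above):
root@w:~# ip addr add 192.168.5.5/24 dev veth0
root@w:~# ip link set veth0 up
root@w:~# ip netns exec ns1 ip addr add 192.168.5.10/24 dev veth1
root@w:~# ip netns exec ns1 ip link set veth1 up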
Now let's run ip addr list both on our host and within our namespace:
root@w:~# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:65:25:9e brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.122/24 brd 192.168.3.255 scope global eth0
    inet6 fe80::20c:29ff:fe65:259e/64 scope link 
       valid_lft forever preferred_lft forever
6: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 86:b2:c7:bd:c9:11 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.5/24 brd 192.168.5.255 scope global veth0
    inet6 fe80::84b2:c7ff:febd:c911/64 scope link 
       valid_lft forever preferred_lft forever
root@w:~# ip netns exec ns1 ip addr list
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 12:bd:b6:76:a6:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.10/24 brd 192.168.5.255 scope global veth1
    inet6 fe80::10bd:b6ff:fe76:a6eb/64 scope link 
       valid_lft forever preferred_lft forever
To view routing tables inside and outside of the namespace:
root@w:~# ip route list
default via 192.168.3.1 dev eth0  proto static 
192.168.3.0/24 dev eth0  proto kernel  scope link  src 192.168.3.122 
192.168.5.0/24 dev veth0  proto kernel  scope link  src 192.168.5.5 
root@w:~# ip netns exec ns1 ip route list
192.168.5.0/24 dev veth1  proto kernel  scope link  src 192.168.5.10 
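At this point the two ends of the pair should be able to reach each other. A quick connectivity check, using the addresses assigned above:
root@w:~# ping -c 1 192.168.5.10
root@w:~# ip netns exec ns1 ping -c 1 192.168.5.5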
Lastly, to connect our physical and virtual interfaces, we'll require a bridge. Let's bridge eth0 and veth0 on the host, and then use DHCP to gain an IP within the ns1 namespace:
root@w:~# brctl addbr br0
root@w:~# brctl addif br0 eth0
root@w:~# brctl addif br0 veth0
root@w:~# ifconfig eth0 0.0.0.0
root@w:~# ifconfig veth0 0.0.0.0
root@w:~# dhclient br0
root@w:~# ip addr list br0
7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 00:0c:29:65:25:9e brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.122/24 brd 192.168.3.255 scope global br0
    inet6 fe80::20c:29ff:fe65:259e/64 scope link 
       valid_lft forever preferred_lft forever
br0 has been assigned an IP of 192.168.3.122/24. Now for the namespace:
root@w:~# ip netns exec ns1 dhclient veth1
root@w:~# ip netns exec ns1 ip addr list
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 12:bd:b6:76:a6:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.248/24 brd 192.168.3.255 scope global veth1
    inet6 fe80::10bd:b6ff:fe76:a6eb/64 scope link 
       valid_lft forever preferred_lft forever
Excellent! veth1 has been assigned 192.168.3.248/24.
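When you are done experimenting, everything can be torn down again (a sketch; deleting the namespace destroys veth1, which takes its veth0 peer with it):
root@w:~# ip netns del ns1
root@w:~# ifconfig br0 down
root@w:~# brctl delbr br0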
