Sunday, December 27, 2015

Getting Started with Docker

https://www.linux.com/news/enterprise/systems-management/873287-getting-started-with-docker

Docker is the excellent new container application that is generating much buzz and many silly stock photos of shipping containers. Containers are not new; so, what's so great about Docker? Docker is built on Linux Containers (LXC). It runs on Linux, is easy to use, and is resource-efficient.
Docker containers are commonly compared with virtual machines. Virtual machines carry all the overhead of virtualized hardware running multiple operating systems. Docker containers, however, dump all that and share only the host's operating system kernel. Docker can replace virtual machines in some use cases; for example, I now use Docker in my test lab to spin up various Linux distributions, instead of VirtualBox. It's a lot faster, and it's considerably lighter on system resources.
Docker is great for datacenters, as they can run many times more containers on the same hardware than virtual machines. It makes packaging and distributing software a lot easier:
"Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries -- anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in."
Docker runs natively on Linux, and in virtualized environments on Mac OS X and MS Windows. The good Docker people have made installation very easy on all three platforms.

Installing Docker

That's enough gasbagging; let's open a terminal and have some fun. The best way to install Docker is with the Docker installer, which is amazingly thorough. Note how it detects my Linux distro version and pulls in dependencies. The output is abbreviated to show the commands that the installer runs:
$ wget -qO- https://get.docker.com/ | sh
You're using 'linuxmint' version 'rebecca'.
Upstream release is 'ubuntu' version 'trusty'.
apparmor is enabled in the kernel, but apparmor_parser missing
+ sudo -E sh -c sleep 3; apt-get update
+ sudo -E sh -c sleep 3; apt-get install -y -q apparmor
+ sudo -E sh -c apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 
  --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
+ sudo -E sh -c mkdir -p /etc/apt/sources.list.d
+ sudo -E sh -c echo deb https://apt.dockerproject.org/repo ubuntu-trusty main > /etc/apt/sources.list.d/docker.list
+ sudo -E sh -c sleep 3; apt-get update; apt-get install -y -q docker-engine
The following NEW packages will be installed:
  docker-engine
As you can see, it uses standard Linux commands. When it's finished, you should add yourself to the docker group so that you can run it without root permissions. (Remember to log out and then back in to activate your new group membership.)
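Adding yourself to the group is a single command; a quick sketch, assuming your shell sets $USER:
$ sudo usermod -aG docker $USER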

Hello World!

We can run a Hello World example to test that Docker is installed correctly:
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
[snip]
Hello from Docker.
This message shows that your installation appears to be working correctly.
This downloads and runs the hello-world image from Docker Hub, which hosts a library of Docker images that you can access with a simple registration. You can also upload and share your own images. Docker provides a fun test image to play with, Whalesay. Whalesay is an adaptation of Cowsay that draws the Docker whale instead of a cow (see Figure 1 above).
$ docker run docker/whalesay cowsay "Visit Linux.com every day!"
The first time you run a new image from Docker Hub, it gets downloaded to your computer. After that, Docker uses your local copy. You can see which images are installed on your system:
$ docker images
REPOSITORY       TAG      IMAGE ID      CREATED       VIRTUAL SIZE
hello-world      latest   0a6ba66e537a  7 weeks ago   960 B
docker/whalesay  latest   ded5e192a685  6 months ago  247 MB
So, where, exactly, are these images stored? Look in /var/lib/docker.
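For example (the exact subdirectories vary with your Docker version and storage driver, so treat this listing as illustrative):
$ sudo ls /var/lib/docker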

Build a Docker Image

Now let's build our own Docker image. Docker Hub has a lot of prefab images to play with (Figure 2), and that's the best way to start because building one from scratch is a fair bit of work. (There is even an empty scratch image for building your image from the ground up.) There are many distro images, such as Ubuntu, CentOS, Arch Linux, and Debian.
Figure 2: Docker Hub.

We'll start with a plain Ubuntu image. Create a directory for your Docker project, change to it, and create a new Dockerfile with your favorite text editor.
$ mkdir dockerstuff
$ cd dockerstuff
$ nano Dockerfile
Enter a single line in your Dockerfile:
FROM ubuntu
Now build your new image and give it a name. In this example the name is testproj. Make sure to include the trailing dot:
$ docker build -t testproj .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM ubuntu
 ---> 89d5d8e8bafb
Successfully built 89d5d8e8bafb
Now you can run your new Ubuntu image interactively:
$ docker run -it ubuntu
root@fc21879c961d:/#
And there you are at the root prompt of your image, which in this example is a minimal Ubuntu installation that you can run just like any Ubuntu system. You can see all of your local images:
$ docker images
REPOSITORY       TAG       IMAGE ID        CREATED        VIRTUAL SIZE
testproj         latest    89d5d8e8bafb    6 hours ago    187.9 MB
ubuntu           latest    89d5d8e8bafb    6 hours ago    187.9 MB
hello-world      latest    0a6ba66e537a    8 weeks ago    960 B
docker/whalesay  latest    ded5e192a685    6 months ago   247 MB
The real power of Docker lies in creating Dockerfiles that allow you to create customized images and quickly replicate them whenever you want. This simple example shows how to create a bare-bones Apache server. First, create a new directory, change to it, and start a new Dockerfile that includes the following lines.
FROM ubuntu

MAINTAINER DockerFan version 1.0

ENV DEBIAN_FRONTEND noninteractive

ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid

RUN apt-get update && apt-get install -y apache2

EXPOSE 8080

CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
Now build your new project:
$ docker build -t apacheserver  .
This will take a little while as it downloads and installs the Apache packages. You'll see a lot of output on your screen, and when you see "Successfully built 538fea9dda79" (but with a different ID, of course), your image built successfully. Now you can run it. This runs it in the background:
$ docker run -d  apacheserver
8defbf68cc7926053a848bfe7b55ef507a05d471fb5f3f68da5c9aede8d75137
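Note that, run this way, Apache is reachable only at the container's own IP address. To reach it from the host, you can publish a port with -p; a sketch (this maps host port 8080 to container port 80, where Apache listens by default):
$ docker run -d -p 8080:80 apacheserver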
List your running containers:
$ docker ps
CONTAINER ID  IMAGE        COMMAND                 CREATED            
8defbf68cc79  apacheserver "/usr/sbin/apache2ctl"  34 seconds ago
And kill your running container:
$ docker kill 8defbf68cc79
You might want to run it interactively for testing and debugging:
$ docker run -it  apacheserver /bin/bash
root@495b998c031c:/# ps ax
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:00 /bin/bash
   14 ?        R+     0:00 ps ax
root@495b998c031c:/# apachectl start
AH00558: apache2: Could not reliably determine the server's fully qualified
domain name, using 172.17.0.3. Set the 'ServerName' directive globally to 
suppress this message
root@495b998c031c:/#
A more comprehensive Dockerfile could install a complete LAMP stack, load Apache modules, configuration files, and everything you need to launch a complete Web server with a single command.
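As a rough sketch of that idea (the package names are Ubuntu's, and the COPY sources are hypothetical files you would place in your build directory), such a Dockerfile might extend the example above like this:

FROM ubuntu

ENV DEBIAN_FRONTEND noninteractive

RUN apt-get update && apt-get install -y apache2 mysql-server php5 php5-mysql libapache2-mod-php5

# Hypothetical files provided alongside the Dockerfile
COPY site.conf /etc/apache2/sites-available/000-default.conf
COPY html/ /var/www/html/

EXPOSE 80

CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]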
We have come to the end of this introduction to Docker, but don't stop now. Visit docs.docker.com to study the excellent documentation and try a little Web searching for Dockerfile examples. There are thousands of them, all free and easy to try.

Saturday, December 26, 2015

How to install RegRipper registry data extraction tool on Linux

http://linuxconfig.org/how-to-install-regripper-registry-data-extraction-tool-on-linux#h5-regripper-command-examples

RegRipper is open source forensic software used as a command-line or GUI tool for extracting data from the Windows Registry. It is written in Perl, and this article describes the installation of the RegRipper command-line tool on Linux systems such as Debian, Ubuntu, Fedora, CentOS, or Red Hat. For the most part, the installation process of the command-line tool is OS-agnostic, except for the part where we install the prerequisites.

1. Pre-requisites

First we need to install all the prerequisites. Choose the relevant command below based on the Linux distribution you are running:
DEBIAN/UBUNTU
# apt-get install cpanminus make unzip wget
FEDORA
# dnf install perl-App-cpanminus.noarch make unzip wget perl-Archive-Extract-gz-gzip.noarch which
CENTOS/REDHAT
# yum install  perl-App-cpanminus.noarch make unzip wget perl-Archive-Extract-gz-gzip.noarch which

2. Installation of required libraries

The RegRipper command-line tool depends on the Perl Parse::Win32Registry library. The following commands take care of this prerequisite and install the library into the /usr/local/lib/rip-lib directory:
# mkdir /usr/local/lib/rip-lib
#  cpanm -l /usr/local/lib/rip-lib Parse::Win32Registry

3. RegRipper script installation

At this stage we are ready to install the rip.pl script. The script is intended to run on MS Windows systems, so we need to make some small modifications. We will also include a path to the Parse::Win32Registry library installed above. Download the RegRipper source code from https://regripper.googlecode.com/files/. The current version is 2.8:
#  wget -q https://regripper.googlecode.com/files/rrv2.8.zip
Extract rip.pl script:
# unzip -q rrv2.8.zip rip.pl 
Remove the interpreter line and the unwanted DOS newline characters (^M):
 
# tail -n +2 rip.pl > rip
# perl -pi -e 'tr[\r][]d' rip
Modify the script to include an interpreter relevant to your Linux system, and also include the library path to Parse::Win32Registry:
# sed -i "1i #!`which perl`" rip
# sed -i '2i use lib qw(/usr/local/lib/rip-lib/lib/perl5/);' rip
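After these two edits, the top of the script should read as follows (assuming which perl resolves to /usr/bin/perl):
# head -2 rip
#!/usr/bin/perl
use lib qw(/usr/local/lib/rip-lib/lib/perl5/);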
Install your RegRipper rip script and make it executable:
# cp rip /usr/local/bin
# chmod +x /usr/local/bin/rip

4. RegRipper Plugins installation

Lastly, we need to install RegRipper's Plugins.
# wget -q https://regripper.googlecode.com/files/plugins20130429.zip
# mkdir /usr/local/bin/plugins 
# unzip -q plugins20130429.zip -d /usr/local/bin/plugins
The RegRipper registry data extraction tool is now installed on your system and available via the rip command:
# rip
Rip v.2.8 - CLI RegRipper tool
Rip [-r Reg hive file] [-f plugin file] [-p plugin module] [-l] [-h]
Parse Windows Registry files, using either a single module, or a plugins file.

  -r Reg hive file...Registry hive file to parse
  -g ................Guess the hive file (experimental)
  -f [profile].......use the plugin file (default: plugins\plugins)
  -p plugin module...use only this module
  -l ................list all plugins
  -c ................Output list in CSV format (use with -l)
  -s system name.....Server name (TLN support)
  -u username........User name (TLN support)
  -h.................Help (print this information)
  
Ex: C:\>rip -r c:\case\system -f system
    C:\>rip -r c:\case\ntuser.dat -p userassist
    C:\>rip -l -c

All output goes to STDOUT; use redirection (ie, > or >>) to output to a file.
  
copyright 2013 Quantum Analytics Research, LLC

5. RegRipper command examples

A few examples using RegRipper and an NTUSER.DAT registry hive file.

List all available plugins:
$ rip -l -c
List software installed by the user:
$ rip -p listsoft -r NTUSER.DAT
Launching listsoft v.20080324
listsoft v.20080324
(NTUSER.DAT) Lists contents of user's Software key

listsoft v.20080324
List the contents of the Software key in the NTUSER.DAT hive
file, in order by LastWrite time.

Mon Dec 14 06:06:41 2015Z       Google
Mon Dec 14 05:54:33 2015Z       Microsoft
Sun Dec 29 16:44:47 2013Z       Bitstream
Sun Dec 29 16:33:11 2013Z       Adobe
Sun Dec 29 12:56:03 2013Z       Corel
Thu Dec 12 07:34:40 2013Z       Clients
Thu Dec 12 07:34:40 2013Z       Mozilla
Thu Dec 12 07:30:08 2013Z       MozillaPlugins
Thu Dec 12 07:22:34 2013Z       AppDataLow
Thu Dec 12 07:22:34 2013Z       Wow6432Node
Thu Dec 12 07:22:32 2013Z       Policies
Extract all available information using all NTUSER.DAT plugins and save it to the case1.txt file:
$ for i in $( rip -l -c | grep NTUSER.DAT | cut -d , -f1 ); do rip -p $i -r NTUSER.DAT &>> case1.txt ; done

How To Avoid Sudden Outburst Of Backup Shell Script or Program Disk I/O on Linux

http://www.cyberciti.biz/tips/linux-set-io-scheduling-class-priority.html

A sudden outburst of violent disk I/O activity can bring down your email or web server. Typically, web, MySQL, or mail servers serving millions of pages (requests) per month are prone to this kind of problem. Backup activity can increase the current system load, too. To avoid this kind of sudden outburst, run your script with a scheduling class and priority. Linux comes with various utilities to manage this kind of madness.

CFQ scheduler

You need Linux kernel 2.6.13+ with the CFQ I/O scheduler. CFQ (Completely Fair Queuing) is an I/O scheduler for the Linux kernel and is the default in 2.6.18+ kernels. RHEL 4/5 and SUSE Linux have all schedulers built into the kernel, so there is no need to rebuild your kernel. To find out your scheduler name, enter:
# for d in /sys/block/sd[a-z]/queue/scheduler; do echo "$d => $(cat $d)" ; done
Sample output for each disk:
/sys/block/sda/queue/scheduler => noop anticipatory deadline [cfq]
/sys/block/sdb/queue/scheduler => noop anticipatory deadline [cfq]
/sys/block/sdc/queue/scheduler => noop anticipatory deadline [cfq] 
CFQ is default and recommended for good performance.
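If one of your disks is using a different scheduler, you can switch it at runtime by writing the scheduler name to the same sysfs file (a sketch using sda; the change does not persist across reboots):
# echo cfq > /sys/block/sda/queue/scheduler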

The good old nice program

You can run a program with a modified scheduling priority using the nice command (19 = least favorable):
# nice -n19 /path/to/backup.sh
Sample cronjob:
@midnight /bin/nice -n19 /path/to/backup.sh

Say hello to ionice utility

The ionice command provides better control than the nice command over the I/O scheduling class and priority of a program or script. It supports the following three scheduling classes (quoting from the man page):
  • Idle: A program running with idle I/O priority will only get disk time when no other program has asked for disk I/O for a defined grace period. The impact of idle I/O processes on normal system activity should be zero. This scheduling class does not take a priority argument.
  • Best effort: This is the default scheduling class for any process that hasn't asked for a specific I/O priority. Programs inherit the CPU nice setting for I/O priorities. This class takes a priority argument from 0-7, with a lower number being higher priority. Programs running at the same best-effort priority are served in a round-robin fashion. This is usually recommended for most applications.
  • Real time: The RT scheduling class is given first access to the disk, regardless of what else is going on in the system. Thus the RT class needs to be used with some care, as it can starve other processes. As with the best-effort class, 8 priority levels are defined, denoting how big a time slice a given process will receive on each scheduling window. This class should be avoided on heavily loaded systems.

Syntax

The syntax is:
 
ionice options  PID
ionice options -p PID
ionice -c1 -n0  PID
 

How do I use the ionice command on Linux?

Linux refers to the scheduling classes using the following numbers and priorities:

Scheduling class   Number   Possible priority
real time          1        8 priority levels, denoting how big a time slice a given process will receive on each scheduling window
best-effort        2        0-7, with a lower number being higher priority
idle               3        Nil (does not take a priority argument)

Examples

To display the class and priority of a running process, enter:
# ionice -p {PID}
# ionice -p 1

Sample output:
none: prio 0
Dump a full web server disk / MySQL or PostgreSQL database backup using the best-effort scheduling class (2) with priority 7:
# /usr/bin/ionice -c2 -n7 /root/scripts/nas.backup.full
Open another terminal and watch disk I/O and network stats using atop, top, or your favorite monitoring tool:
# atop
Sample cronjob:
@weekly /usr/bin/ionice -c2 -n7 /root/scripts/nas.backup.full >/dev/null 2>&1
To set the process with PID 1004 as an idle I/O process, enter:
# ionice -c3 -p 1004
To run the rsync.sh script as a best-effort program with the highest priority, enter:
# ionice -c2 -n0 /path/to/rsync.sh
Type the following command to run 'zsh' as a best-effort program with the highest priority:
# ionice -c 2 -n 0 zsh
Finally, you can combine both nice and ionice together:
# nice -n 19 ionice -c2 -n7 /path/to/shell.script
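You can also have a backup script lower its own priority, so that it behaves the same no matter how it is launched. Here is a minimal sketch (the IONICED guard variable is just a convention of this example, and the tar command stands in for your real backup work):

#!/bin/bash
# Re-exec this script under low CPU and I/O priority, exactly once.
if [ -z "$IONICED" ]; then
    exec env IONICED=1 nice -n 19 ionice -c2 -n7 "$0" "$@"
fi
# ...the actual backup work goes here...
tar czf /backup/home-$(date +%F).tar.gz /home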
Related: the chrt command to set/manipulate the real-time attributes of a Linux process, and the taskset command to retrieve or set a process's CPU affinity.
To see help on options type:
$ ionice --help
Sample outputs:
 
Sets or gets the IO scheduling class and priority of processes.
 
Usage:
 ionice [options] -p <pid>...
 ionice [options] -P <pgid>...
 ionice [options] -u <uid>...
 ionice [options] <command>
 
Options:
 -c, --class <class>    name or number of scheduling class,
                          0: none, 1: realtime, 2: best-effort, 3: idle
 -n, --classdata <num>  priority (0..7) in the specified scheduling class,
                          only for the realtime and best-effort classes
 -p, --pid <pid>...     act on these already running processes
 -P, --pgid <pgrp>...   act on already running processes in these groups
 -t, --ignore           ignore failures
 -u, --uid <uid>...     act on already running processes owned by these users
 
 -h, --help     display this help and exit
 -V, --version  output version information and exit
 

Other suggestions to improve disk I/O

  1. Use a hardware RAID controller.
  2. Use fast SCSI / SAS 15k RPM disks.
  3. Use fast SSD-based storage (a costly option).
  4. Use a slave / passive server to back up MySQL.

A simple way to install and configure puppet on CentOS 6

http://techarena51.com/index.php/a-simple-way-to-install-and-configure-a-puppet-server-on-linux


Puppet is an automation tool that allows you to automate the configuration of software like Apache and Nginx across multiple servers.
Puppet installation
In this tutorial we will be installing Puppet in master/agent mode. You can install it in standalone mode as well.
OS & software Versions
CentOS 6.5
Linux kernel 2.6.32
Puppet 3.6.2
Let’s get to it then.
Puppet server configuration
#Add Puppet repos 
[user@puppet ~]# sudo rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

[user@puppet ~]# sudo yum install puppet-server

# Add your puppet server hostnames to the conf file under the [main] section
[user@puppet ~]# sudo vim /etc/puppet/puppet.conf

 dns_alt_names = puppet,puppet.yourserver.com

[user@puppet ~]# sudo  service puppetmaster start 
Puppet listens on port 8140; make sure to open it in CSF or your firewall.
Puppet client configuration
#Add Puppet repos 
[user@client ~]# sudo rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

[user@client ~]# sudo yum install puppet

#Open the conf file and add the puppet server hostname 
[user@client ~]#sudo vim /etc/puppet/puppet.conf
[main]
# The puppetmaster server
server=puppet.yourserver.com



[user@client ~]# sudo service puppet start
In the log file you should see the following lines.
info: Creating a new SSL key for vps.client.com
warning: peer certificate won't be verified in this SSL session
info: Caching certificate for ca
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
info: Creating a new SSL certificate request for agent1.localdomain
info: Certificate Request fingerprint (md5): FD:E7:41:C9:5C:B7:5C:27:11:0C:8F:9C:1D:F6:F9:46
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
Exiting; no certificate found and waitforcert is disabled
Puppet uses SSL to communicate with its clients. When you start Puppet on a client, it will automatically connect to the Puppet server named in its conf file and request that its certificate be signed.
On the puppet server run
[user@puppet ~]# sudo  puppet cert list
vps.client.com (FD:E7:41:C9:2C:B7:5C:27:11:0C:8F:9C:1D:F6:F9:46)

[user@puppet ~]# sudo  puppet cert sign vps.client.com
notice: Signed certificate request for vps.client.com
notice: Removing file Puppet::SSL::CertificateRequest vps.client.com at '/etc/puppetlabs/puppet/ssl/ca/requests/vps.client.pem'
Now our client server "vps.client.com" is authorized to fetch and apply configurations from the puppet server. To understand how Puppet SSL works and to troubleshoot any issues, you can read http://docs.puppetlabs.com/learning/agent_master_basic.html
Let’s look at a sample puppet configuration.
Installing apache web server with puppet
Although puppet server configuration is stored in “/etc/puppet/puppet.conf”, client configurations are stored in files called manifests.
#On the puppet server run
[user@puppet ~]# sudo vim /etc/puppet/manifests/site.pp

node 'vps.client.com' {

  package { 'httpd':
    ensure => installed,
  }
}
The configuration is pretty self-explanatory: the first line indicates that we need to install this configuration on a client machine with the hostname 'vps.client.com'. If you want to apply the configuration to every node that has no explicit definition of its own, replace 'vps.client.com' with 'default'.
Read node definitions for multiple node configurations.
The next two lines tell puppet that we need to ensure that the apache web server is installed. Puppet will check if apache is installed and if not, install it.
Think of "package" as an object, "httpd" as the name of the object, and "ensure => installed" as the action to be performed on the object.
So if I wanted puppet to install a mysql database server, the configuration would be
node 'vps.client.com' {
  package { 'mysql-server':
    ensure => installed,
  }
}
The puppet server will compile this configuration into a catalog and serve it to a client when a request is sent to it.
How do I pull my configuration to a client immediately?
Puppet clients usually pull their configuration once every 30 minutes, but you can pull a configuration immediately by running "service puppet restart" or the following command.
[user@puppet ~]# sudo puppet agent --test
What if I wanted puppet to add a user ‘Tom’?
Then the object would be user, the name of the object would be ‘tom’ and the action would be ‘present’.
node 'vps.client.com' {

  user { 'tom':
    ensure => present,
  }
}
In Puppet terms, these objects are known as Resources, the names of the objects are Titles, and the actions are called Attributes.
Puppet has a number of these resources to help ease your automation, You can read about them at http://docs.puppetlabs.com/references/latest/type.html
How to ensure a service is running with puppet?
Once you have a package like Apache installed, you will want to ensure that it is running. On the command line you can do this with the service command; in Puppet, you add the configuration to the manifest file as follows.
node 'vps.client.com' {

  package { 'httpd':
    ensure => installed,
  }
  ->
  service { 'httpd':       # Our resource and its title
    ensure => running,     # Action (attribute) to be performed on the resource
    enable => true,        # Start apache at boot
  }
}
Now you must have noticed that I have added an "->" symbol. This is because Puppet is not particular about ordering, but we want the service resource to be applied only after Apache is installed and not before; the arrow tells Puppet to apply it only after "httpd" is installed.
To know more, read the Puppet documentation on resource ordering.
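If you prefer not to use the chaining arrow, the same ordering can be expressed with the require metaparameter; here is a minimal sketch equivalent to the manifest above:

node 'vps.client.com' {
  package { 'httpd':
    ensure => installed,
  }
  service { 'httpd':
    ensure  => running,
    enable  => true,
    require => Package['httpd'],  # Apply this resource only after the httpd package is installed
  }
}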
How to automate installation of predefined conf files?
You may want to have a customised apache conf file for this client, which will have the vhost entry and other specific parameters you choose. In this case we need to use the file resource.
Before we go into the configuration, you should know how puppet serves files. A Puppet server provides access to custom files via mount points. One such mount point by default is the modules directory.
The modules directory is where you would add your modules. Modules make it easier to reuse configurations, rather than having to write configurations for every node we can store them as a module and call them whenever we like.
In order to write a module, you need to create a subdirectory inside the modules directory with the module name and create a manifest file called init.pp which should contain a class with the same name as the subdirectory.
[user@puppet ~]# cd /etc/puppet/modules
[user@puppet ~]# mkdir httpd
[user@puppet ~]# mkdir -p httpd/manifests httpd/files
[user@puppet ~]# vim httpd/manifests/init.pp


class httpd {     # Same name as our subdirectory

  package { 'httpd':
    ensure => present,
  }
  ->
  file { '/etc/httpd/conf/httpd.conf':  # Path to the file on the client we want puppet to administer
    ensure => file,                     # Ensure it is a file
    mode   => 0644,                     # Permissions for the file
    source => 'puppet:///modules/httpd/httpd.conf', # Path to our customised file on the puppet server
  }
  ->
  service { 'httpd':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/httpd/conf/httpd.conf'], # Restart the service if any change is made to httpd.conf
  }
}
You need to add your custom httpd.conf file in the files subdirectory located at “/etc/puppet/modules/httpd/files/”
To understand how the URI in the source attribute works, read http://docs.puppetlabs.com/guides/file_serving.html
Now call the module in our main manifest file.
[user@puppet ~]#sudo vim /etc/puppet/manifests/site.pp

node 'vps.client.com' {

  include httpd

}

In case you need a web interface to manage your Linux servers, read my tutorial Using Foreman, an Opensource Frontend for Puppet.
Update: For more Automation and other System Administration/Devops Guides see https://github.com/Leo-G/DevopsWiki
Puppet FAQ
How do I change the time interval at which a client fetches its configuration from the server?
Add "runinterval = 3600" under the [main] section in "/etc/puppet/puppet.conf" on the client.
Time is in seconds.
How do I install modules from puppet forge?
[user@puppet ~]#sudo puppet module install "full module name"

#Example
[user@puppet ~]#sudo puppet module install puppetlabs-mysql
To read more, and for publishing your own modules, see http://docs.puppetlabs.com/puppet/latest/reference/modules_publishing.html

Protecting Apache Server From Denial-of-Service (Dos) Attack

http://www.unixmen.com/protecting-apache-server-denial-service-dos-attack

A Denial-of-Service (DoS) attack is an attempt to make a machine or network resource unavailable to its intended users, for example to temporarily or indefinitely interrupt or suspend the services of a host connected to the Internet. A distributed denial-of-service (DDoS) attack is one where the attack source consists of more than one (often thousands of) unique IP addresses.

What is mod_evasive?

mod_evasive is an evasive maneuvers module for Apache that provides evasive action in the event of an HTTP DoS, DDoS, or brute-force attack. It is also designed to be a detection and network management tool, and it can be easily configured to talk to ipchains, firewalls, routers, and so on. mod_evasive presently reports abuses via email and syslog facilities.

Installing mod_evasive

  • Server Distro: Debian 8 jessie
  • Server IP: 10.42.0.109
  • Apache Version: Apache/2.4.10
mod_evasive is available in the official Debian repository, so we can install it using apt:
# apt-get update
# apt-get install libapache2-mod-evasive

Setting up mod_evasive

We have mod_evasive installed but not configured. Its configuration lives at /etc/apache2/mods-available/evasive.conf. We will be editing that file, which should look similar to this:

#DOSHashTableSize    3097
#DOSPageCount        2
#DOSSiteCount        50
#DOSPageInterval     1
#DOSSiteInterval     1
#DOSBlockingPeriod   10
#DOSEmailNotify      you@yourdomain.com
#DOSSystemCommand    "su - someuser -c '/sbin/... %s ...'"
#DOSLogDir           "/var/log/mod_evasive"

mod_evasive Configuration Directives

  • DOSHashTableSize
    This directive defines the hash table size, i.e. the number of top-level nodes for each child’s hash table. Increasing this number will provide faster performance by decreasing the number of iterations required to get to the record, but will consume more memory for table space. It is advisable to increase this parameter on heavy load web servers.
  • DOSPageCount:
    This sets the threshold for the total number of hits on the same page (or URI) per page interval. Once this threshold is reached, the client IP is locked out: its requests are answered with 403 and the IP is added to the blacklist.
  • DOSSiteCount:
    This sets the threshold for the total number of requests for any object by the same client IP per site interval. Once this threshold is reached, the client IP is added to the blacklist.
  • DOSPageInterval:
    The page count interval; accepts a real number in seconds. The default value is 1 second.
  • DOSSiteInterval:
    The site count interval; accepts a real number in seconds. The default value is 1 second.
  • DOSBlockingPeriod:
    This directive sets the amount of time that a client will be blocked for if it is added to the blocking list. During this time, all subsequent requests from the client will result in a 403 (Forbidden) response and the timer will be reset (e.g., for another 10 seconds). Since the timer is reset for every subsequent request, it is not necessary to have a long blocking period; in the event of a DoS attack, this timer will keep getting reset. The interval is specified in seconds and may be a real number.
  • DOSEmailNotify:
    If an email address is provided, a notification will be sent to it whenever an IP is blacklisted.
  • DOSSystemCommand:
    A system command to execute whenever an IP is blacklisted, if enabled; %s is the blacklisted IP. This is designed for system calls to IP filters or other tools.
  • DOSLogDir:
    The directory where mod_evasive stores its logs.
The following configuration is what I'm using; it works well, and I recommend it if you are not sure how to tune these values yourself:

DOSHashTableSize    2048
DOSPageCount        5
DOSSiteCount        100
DOSPageInterval     1
DOSSiteInterval     2
DOSBlockingPeriod   10
DOSEmailNotify      you@yourdomain.com
#DOSSystemCommand    "su - someuser -c '/sbin/... %s ...'"
DOSLogDir           "/var/log/mod_evasive"
Replace you@yourdomain.com with your email address. Since mod_evasive doesn't create the log directory automatically, we need to create it ourselves:
# mkdir /var/log/mod_evasive
# chown :www-data /var/log/mod_evasive
# chmod 771 /var/log/mod_evasive
Once setup is done, make sure mod_evasive is enabled by typing:
# a2enmod evasive
Restart Apache for changes to take effect
# systemctl restart apache2
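You can also confirm that the module is actually loaded by listing Apache's active modules (a quick check; the module's internal name may be reported as evasive20_module):
# apachectl -M | grep -i evasive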

Testing mod_evasive Setup

With mod_evasive set up correctly, we are now going to test whether our web server has protection against a DoS attack, using ab (Apache Benchmark). Install ab if you don't have it by typing:
# apt-get install apache2-utils
The current state of our /var/log/mod_evasive directory:
root@debian-server:/var/log/mod_evasive# ls -l
total 0
root@debian-server:/var/log/mod_evasive#
We will now send bulk requests to the server, simulating a DoS attack, by typing:
# ab -n 100 -c 10 http://10.42.0.109/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 10.42.0.109 (be patient).....done
Server Software:        Apache/2.4.10
Server Hostname:        10.42.0.109
Server Port:            80
Document Path:          /
Document Length:        11104 bytes
Concurrency Level:      10
Time taken for tests:   0.205 seconds
Complete requests:      100
Failed requests:        70
(Connect: 0, Receive: 0, Length: 70, Exceptions: 0)
Non-2xx responses:      70
Total transferred:      373960 bytes
HTML transferred:       353140 bytes
Requests per second:    488.51 [#/sec] (mean)
Time per request:       20.471 [ms] (mean)
Time per request:       2.047 [ms] (mean, across all concurrent requests)
Transfer rate:          1784.01 [Kbytes/sec] received
Connection Times (ms)
min  mean[+/-sd] median   max
Connect:        0    1   1.5      1       7
Processing:     3   15  28.0     10     177
Waiting:        2   14  28.0      9     176
Total:          3   17  28.4     12     182
Percentage of the requests served within a certain time (ms)
50%     12
66%     13
75%     14
80%     15
90%     18
95%     28
98%    175
99%    182
100%    182 (longest request)
After sending 100 requests at 10 concurrent requests at a time, the current state of my /var/log/mod_evasive directory is now:
root@debian-server:/var/log/mod_evasive# ls -l
total 4
-rw-r--r-- 1 www-data www-data 5 Dec 15 22:10 dos-10.42.0.1
Checking the Apache access logs at /var/log/apache2/access.log, we can see that all connections from ApacheBench/2.3 were answered with 403.
You see, with mod_evasive you can reduce the impact of a DoS attack. Something that Nginx doesn't have ;)

Linux: Use smartctl To Check Disk Behind Adaptec RAID Controllers

http://www.cyberciti.biz/faq/linux-checking-sas-sata-disks-behind-adaptec-raid-controllers

I can use the "smartctl -d ata -a /dev/sdb" command to read hard disk health status directly connected to my system. But, how do I read smartctl command to check SAS or SCSI disk behind Adaptec RAID controller from the shell prompt on Linux operating system?

Adaptec RAID controllers typically present a (logical) disk to the OS for each array of (physical) disks. You need to use the following syntax to check the underlying SATA or SAS disks: the /dev/sgX devices act as pass-through I/O controls, providing direct access to each physical disk behind the controller.

Is my Adaptec RAID card detected by Linux?

Type the following command:
# lspci | egrep -i 'raid|adaptec'
Sample outputs:
81:00.0 RAID bus controller: Adaptec AAC-RAID (rev 09)

Download and install Adaptec Storage Manager

You need to install Adaptec Storage Manager for your Linux distribution as per installed RAID card. Visit this site to grab the software.

SATA Health Check Disk Syntax

To scan disk, enter:
# smartctl --scan
Sample outputs:
/dev/sda -d scsi # /dev/sda, SCSI device
So /dev/sda is one device reported as a SCSI device. This RAID device is made of 4 disks located at /dev/sg{1,2,3,4}. Type the following smartctl command to check a disk behind the /dev/sda RAID device:
# smartctl -d sat --all /dev/sgX
# smartctl -d sat --all /dev/sg1

To ask the device to report its SMART health status or any pending TapeAlert messages, run:
# smartctl -d sat --all /dev/sg1 -H
For SAS disk use the following syntax:
# smartctl -d scsi --all /dev/sgX
# smartctl -d scsi --all /dev/sg1
### Ask the device to report its SMART health status or pending TapeAlert message ###
# smartctl -d scsi --all /dev/sg1 -H

Sample outputs:
smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
 
Device: SEAGATE  ST3146855SS      Version: 0002
Serial number: xxxxxxxxxxxxxxx
Device type: disk
Transport protocol: SAS
Local Time is: Wed Jul  7 04:34:30 2010 CDT
Device supports SMART and is Enabled
Temperature Warning Enabled
SMART Health Status: OK
 
Current Drive Temperature:     24 C
Drive Trip Temperature:        68 C
Elements in grown defect list: 0
Vendor (Seagate) cache information
  Blocks sent to initiator = 1857385803
  Blocks received from initiator = 1967221471
  Blocks read from cache and sent to initiator = 804439119
  Number of read and write commands whose size <= segment size = 312098925
  Number of read and write commands whose size > segment size = 45998
Vendor (Seagate/Hitachi) factory information
  number of hours powered up = 13224.42
  number of minutes until next internal SMART test = 42
 
Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/    errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:   58984049        1         0  58984050   58984050       3151.730           0
write:         0        0         0         0          0   9921230881.600           0
verify:     1308        0         0      1308       1308          0.000           0
 
Non-medium error count:        0
No self-tests have been logged
Long (extended) Self Test duration: 1367 seconds [22.8 minutes]
 
Here is another output from a SAS-based disk called /dev/sg2:
# smartctl -d scsi --all /dev/sg2 -H
Sample outputs:
Fig.01: How To Check Hardware Raid Status in Linux Command Line

Replace /dev/sg1 with your disk number. If you have a RAID 10 array with 4 disks, then:
  • /dev/sg0 - RAID 10 controller (you will not get any info for /dev/sg0).
  • /dev/sg1 - First disk in RAID 10 array.
  • /dev/sg2 - Second disk in RAID 10 array.
  • /dev/sg3 - Third disk in RAID 10 array.
  • /dev/sg4 - Fourth disk in RAID 10 array.
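With that layout, you can check the health of every physical disk in one pass; a sketch assuming the four SAS disks above (use -d sat instead for SATA disks):
# for i in 1 2 3 4; do smartctl -d scsi -H /dev/sg$i; done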

How do I run hard disk check?

Type the following command:
# smartctl -t short -d scsi /dev/sg2
# smartctl -t long -d scsi /dev/sg2

Where,
  1. -t short : Run short test.
  2. -t long : Run long test.
  3. -d scsi : Specify scsi as device type.
  4. --all : Show all SMART information for device.

How do I use Adaptec Storage Manager?

Another simple command to just check basic status is as follows:
# /usr/StorMan/arcconf getconfig 1 | more
# /usr/StorMan/arcconf getconfig 1 | grep State
# /usr/StorMan/arcconf getconfig 1 | grep -B 3 State

Sample outputs:
----------------------------------------------------------------------
      Device #0
         Device is a Hard drive
         State                              : Online
--
         S.M.A.R.T.                         : No
      Device #1
         Device is a Hard drive
         State                              : Online
--
         S.M.A.R.T.                         : No
      Device #2
         Device is a Hard drive
         State                              : Online
--
         S.M.A.R.T.                         : No
      Device #3
         Device is a Hard drive
         State                              : Online
 
Please note that the newer version of arcconf is located in the /usr/Adaptec_Event_Monitor directory, so your full path must be as follows:
# /usr/Adaptec_Event_Monitor/arcconf getconfig [AD | LD [LD#] | PD | MC | [AL]] [nologs]
Here, getconfig prints controller configuration information, and the options are:

    Option  AD  : Adapter information only
            LD  : Logical device information only
            LD# : Optionally display information about the specified logical device
            PD  : Physical device information only
            MC  : Maxcache 3.0 information only
            AL  : All information (optional)

How do I check the health of my Adaptec RAID array itself on Linux?

Simply use the following command:
# /usr/Adaptec_Event_Monitor/arcconf getconfig 1
OR (older version)
# /usr/StorMan/arcconf getconfig 1
Sample outputs:
Fig.02: Device #1 is Online, while Device #2 is Failed i.e. you have a degraded array.


Friday, December 25, 2015

Linux / Unix Curl: Find Out If a Website Is Using Gzip / Deflate

http://www.cyberciti.biz/faq/linux-unix-curl-gzip-compression-test

How do I find out if a web page is gzipped or compressed using the Unix command-line utility called curl? How do I make sure mod_deflate or mod_gzip is working under the Apache web server?

When content is compressed, downloads are faster because the files are smaller; in many cases, less than a quarter the size of the original. This is very useful for JavaScript and CSS files (as well as HTML): faster downloads translate into faster rendering of web pages for the end user. The mod_deflate or mod_gzip Apache module provides the DEFLATE output filter that allows output from your server to be compressed before being sent to the client over the network. Most modern web browsers support this feature. You can use the curl command to find out if a web page is gzipped or not, using the following simple syntax.
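You can also measure the difference yourself with curl's -w option; a quick sketch, with example.com standing in for your own URL:

curl -so /dev/null -w '%{size_download} bytes\n' http://example.com/
curl -so /dev/null -H 'Accept-Encoding: gzip' -w '%{size_download} bytes\n' http://example.com/

If the server compresses the page, the second number should be noticeably smaller.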

Syntax

The syntax is:

curl -I -H 'Accept-Encoding: gzip,deflate' http://example.com

OR

curl -s -I -L -H 'Accept-Encoding: gzip,deflate' http://example.com

Where,
  1. -s - Don't show progress meter or error messages.
  2. -I - Work on the HTTP-header only.
  3. -H 'Accept-Encoding: gzip,deflate' - Send extra header in the request when sending HTTP to a server.
  4. -L - If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request at the new location.
  5. http://example.com - Your URL, it can start with http or https.

Examples

Type the following command:
 
curl -I -H 'Accept-Encoding: gzip,deflate' http://www.cyberciti.biz/
 
Sample outputs:
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 06 Nov 2012 18:59:26 GMT
Content-Type: text/html
Connection: keep-alive
X-Whom: l2-com-cyber
Vary: Cookie
Vary: Accept-Encoding
Last-Modified: Tue, 06 Nov 2012 18:51:58 GMT
Cache-Control: max-age=152, must-revalidate
Content-Encoding: gzip
X-Galaxy: Andromeda-1
X-Origin-Type: DynamicViaDAL

Curl command accept-encoding gzip bash test function

Create a bash shell function and add it to your ~/.bashrc file:
 
gzipchk(){ curl -I -H 'Accept-Encoding: gzip,deflate' "$@" | grep --color 'Content-Encoding:'; }
 
OR use the silent mode to hide progress bar:
 
gzipchk(){ curl -sILH 'Accept-Encoding: gzip,deflate' "$@" | grep --color 'Content-Encoding:'; }
 
Save and close the file. To reload the ~/.bashrc file, run:
$ source ~/.bashrc
Test the gzipchk() as follows:
$ gzipchk www.cyberciti.biz
$ gzipchk http://www.redhat.com

Sample outputs:
Fig.01: Linux curl deflate gzip test in action

Monday, December 14, 2015

Take Control of Your PC with UEFI Secure Boot

http://www.linuxjournal.com/content/take-control-your-pc-uefi-secure-boot

UEFI (Unified Extensible Firmware Interface) is the open, multi-vendor replacement for the aging BIOS standard, which first appeared in IBM computers in 1976. The UEFI standard is extensive, covering the full boot architecture. This article focuses on a single useful but typically overlooked feature of UEFI: secure boot.
Often maligned, you've probably encountered UEFI secure boot only when you disabled it during initial setup of your computer. Indeed, the introduction of secure boot was mired with controversy over Microsoft being in charge of signing third-party operating system code that would boot under a secure boot environment.
In this article, we explore the basics of secure boot and how to take control of it. We describe how to install your own keys and sign your own binaries with those keys. We also show how you can build a single standalone GRUB EFI binary, which will protect your system from tampering, such as cold-boot attacks. Finally, we show how full disk encryption can be used to protect the entire hard disk, including the kernel image (which ordinarily needs to be stored unencrypted).

UEFI Secure Boot

Secure boot is designed to protect a system against malicious code being loaded and executed early in the boot process, before the operating system has been loaded. This is to prevent malicious software from installing a "bootkit" and maintaining control over a computer to mask its presence. If an invalid binary is loaded while secure boot is enabled, the user is alerted, and the system will refuse to boot the tampered binary.
On each boot-up, the UEFI firmware inspects each EFI binary that is loaded and ensures that it has either a valid signature (backed by a locally trusted certificate) or that the binary's checksum is present on an allowed list. It also verifies that the signature or checksum does not appear in the deny list. Lists of trusted certificates or checksums are stored as EFI variables within the non-volatile memory used by the UEFI firmware environment to store settings and configuration data.

UEFI Key Overview

The four main EFI variables used for secure boot are shown in Figure a. The Platform Key (often abbreviated to PK) offers full control of the secure boot key hierarchy. The holder of the PK can install a new PK and update the KEK (Key Exchange Key). This is a second key, which either can sign executable EFI binaries directly or be used to sign the db and dbx databases. The db (signature database) variable contains a list of allowed signing certificates or the cryptographic hashes of allowed binaries. The dbx is the inverse of db, and it is used as a blacklist of specific certificates or hashes, which otherwise would have been accepted, but which should not be able to run. Only the KEK and db (shown in green) keys can sign binaries that may boot the system.
Figure a. Secure Boot Keys
The PK on most systems is issued by the manufacturer of the hardware, while a KEK is held by the operating system vendor (such as Microsoft). Hardware vendors also commonly have their own KEK installed (since multiple KEKs can be present). To take full ownership of a computer using secure boot, you need to replace (at a minimum) the PK and KEK, in order to prevent new keys being installed without your consent. You also should replace the signature database (db) if you want to prevent commercially signed EFI binaries from running on your system.
Secure boot is designed to allow someone with physical control over a computer to take control of the installed keys. A pre-installed manufacturer PK can be programmatically replaced only by signing it with the existing PK. With physical access to the computer, and access to the UEFI firmware environment, this key can be removed and a new one installed. Requiring physical access to the system to override the default keys is an important security requirement of secure boot to prevent malicious software from completing this process. Note that some locked-down ARM-based devices implement UEFI secure boot without the ability to change the pre-installed keys.

Testing Procedure

You can follow these procedures on a physical computer, or alternatively in a virtualized instance of the Intel Tianocore reference UEFI implementation. The ovmf package available in most Linux distributions includes this. The QEMU virtualization tool can launch an instance of ovmf for experimentation. Note that the fat argument specifies that a directory, storage, will be presented to the virtualized firmware as a persistent storage volume. Create this directory in the current working directory, and launch QEMU:

qemu-system-x86_64 -enable-kvm -net none \
-m 1024 -pflash /usr/share/ovmf/ovmf_x64.bin \
-hda fat:storage/
Files present in this folder when starting QEMU will appear as a volume to the virtualized UEFI firmware. Note that files added to it after starting QEMU will not appear in the system—restart QEMU and they will appear. This directory can be used to hold the public keys you want to install to the UEFI firmware, as well as UEFI images to be booted later in the process.

Generating Your Own Keys

Secure boot keys are self-signed 2048-bit RSA keys, in X.509 certificate format. Note that most implementations do not support key lengths greater than 2048 bits at present. You can generate a 2048-bit keypair (with a validity period of 3650 days, or ten years) with the following openssl command:

openssl req -new -x509 -newkey rsa:2048 -keyout PK.key \
-out PK.crt -days 3650 -subj "/CN=My Secure PK/"
The CN subject can be customized as you wish, and its value is not important. The resulting PK.key is a private key, and PK.crt is the corresponding certificate (containing the public key), which you will install into the UEFI firmware shortly. You should store the private key securely on an encrypted storage device in a safe place.
Now you can carry out the same process for both the KEK and for the db key. Note that the db and KEK EFI variables can contain multiple keys (and in the case of db, SHA256 hashes of bootable binaries), although for simplicity, this article considers only storing a single certificate in each. This is more than adequate for taking control of your own computer. Once again, the .key files are private keys, which should be stored securely, and the .crt files are public certificates to be installed into your UEFI system variables.
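Concretely, that is the same openssl invocation twice more, with only the filenames and CN changed (the DB.key and DB.crt names match those used for signing binaries later in this article):

openssl req -new -x509 -newkey rsa:2048 -keyout KEK.key \
-out KEK.crt -days 3650 -subj "/CN=My Secure KEK/"

openssl req -new -x509 -newkey rsa:2048 -keyout DB.key \
-out DB.crt -days 3650 -subj "/CN=My Secure DB/"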

Taking Ownership and Installing Keys

Every UEFI firmware interface differs, and it is therefore not possible to provide step-by-step instructions on how to install your own keys. Refer to your motherboard or laptop's instruction manual, or search on-line for the maker of the UEFI firmware. Enter the UEFI firmware interface, usually by holding a key down at boot time, and locate the security menu. Here there should be a section or submenu for secure boot. Change the mode control to "custom" mode. This should allow you to access the key management menus.
Figure 1. Enabling Secure Boot and Entering Custom Mode
At this point, you should make a backup of the UEFI platform keys currently installed. You should not need this, since there should be an option within your UEFI firmware interface to restore the default keys, but it does no harm to be cautious. There should be an option to export or save the current keys to a USB Flash drive. It is best to format this with the FAT filesystem if you have any issues with it being detected.
After you have copied the backup keys somewhere safe, load the public certificate (.crt) files you created previously onto the USB Flash drive. Take care not to mix them up with the backup certificates from earlier. Enter the UEFI firmware interface, and use the option to reset or clear all existing secure boot keys.
Figure 2. Erasing the Existing Platform Key
This also might be referred to as "taking ownership" of secure boot. Your system is now in secure boot "setup" mode, which will remain until a new PK is installed. At this point, the EFI PK variable is unprotected by the system, and a new value can be loaded in from the UEFI firmware interface or from software running on the computer (such as an operating system).
Figure 3. Loading a New Key from a Storage Device
At this point, you should disable secure boot temporarily, in order to continue following this article. Your newly installed keys will remain in place for when secure boot is enabled.

Signing Binaries

After you have installed your custom UEFI signing keys, you need to sign your own EFI binaries. There are a variety of different ways to build (or obtain) these. Most modern Linux bootloaders are EFI-compatible (for example, GRUB 2, rEFInd or gummiboot), and the Linux kernel itself can be built as a bootable EFI binary since version 3.3. It's possible to sign and boot any valid EFI binary, although the approach you take here depends on your preference.
One option is to sign the kernel image directly. If your distribution uses a binary kernel, you would need to sign each new kernel update before rebooting your system. If you use a self-compiled kernel, you would need to sign each kernel after building it. This approach, however, requires you to keep on top of kernel updates and sign each image. This can become arduous, especially if you use a rolling-release distribution or test mainline release candidates. An alternative, and the approach we used in this article, is to sign a locked-down UEFI-compatible bootloader (GRUB 2 in the case of this article), and use this to boot various kernels from your system.
Some distributions configure GRUB to validate kernel image signatures against a distribution-specified public key (with which they sign all kernel binaries) and disable editing of the kernel cmdline variable when secure boot is in use. You therefore should refer to the documentation for your distribution, as the section on ensuring your boot images are encrypted would not be essential in this case.
The Linux sbsigntools package is available from the repositories of most Linux distributions and is a good first port of call when signing UEFI binaries. UEFI secure boot binaries should be signed with an Authenticode-format signature. The command of interest is sbsign, which is invoked as follows:

sbsign --key DB.key --cert DB.crt unsigned.efi \
--output signed.efi
Due to subtle variations in the implementation of the UEFI standards, some systems may reject a correctly signed binary from sbsign. The best alternative we found was to use the osslsigncode utility, which also generates Authenticode signatures. Although this tool was not specifically intended for use with secure boot, it produces signatures that match the required specification. Since osslsigncode does not appear to be commonly included in distribution repositories, you should build it from its source code. The process is relatively straightforward and simply requires running make, which will produce the executable binary. If you encounter any issues, ensure you have installed openssl and curl, which are dependencies of the package. (See Resources for a link to the source code repository.)
Binaries are signed with osslsigncode in a similar manner to sbsign (note that the hash is defined as sha256 per the UEFI specification; this should not be altered):

osslsigncode -certs DB.crt -key DB.key \
-h sha256 -in unsigned.efi -out signed.efi

Booting with UEFI

After you have signed an EFI binary (such as the GRUB bootloader binary), the obvious next step is to test it. Computers using the legacy BIOS boot technology load the initial operating system bootloader from the MBR (master boot record) of the selected boot device. The MBR contains code to load a further (and larger) bootloader held within the disk, which loads the operating system. In contrast, UEFI is designed to allow for more than one bootloader to exist on one drive, without the need for those bootloaders to cooperate or even know the others exist.
Bootable UEFI binaries are located on a storage device (such as a hard disk) within a standard path. The partition containing these binaries is referred to as the EFI System Partition. It has a partition ID of 0xEF00 in gdisk, the GPT-compatible equivalent to fdisk. This partition is conventionally located at the beginning of the filesystem and formatted with a FAT32 filesystem. UEFI-bootable binaries are then stored as files in the EFI/BOOT/ directory.
This signed binary should now boot if it is placed at EFI/BOOT/BOOTX64.EFI within the EFI system partition or an external drive, which is set as the boot device. It is possible to have multiple EFI binaries available on one EFI system partition, which makes it easier to create a multi-boot setup. For that to work however, the UEFI firmware needs a boot entry created in its non-volatile memory. Otherwise, the default filename (BOOTX64.EFI) will be used, if it exists.
To add a new EFI binary to your firmware's list of available binaries, you should use the efibootmgr utility. This tool can be found in distribution repositories and often is used automatically by the installers for popular bootloaders, such as GRUB.
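For example, the following registers a signed GRUB image stored on the EFI system partition (partition 1 of /dev/sda here); this is a sketch, so adjust the disk, partition number, label and loader path to match your layout:

# efibootmgr -c -d /dev/sda -p 1 -L "My Signed GRUB" -l '\EFI\BOOT\BOOTX64.EFI'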
At this point, you should re-enable secure boot within your UEFI firmware. To ensure that secure boot is operating correctly, you should attempt to boot an unsigned EFI binary. To do so, you can place a binary (such as an unsigned GRUB EFI binary) at EFI/BOOT/BOOTX64.EFI on a FAT32-formatted USB Flash drive. Use the UEFI firmware interface to set this drive as the current boot drive, and ensure that a security warning appears, which halts the boot process. You also should verify that an image signed with the default UEFI secure boot keys does not boot—an Ubuntu 12.04 (or newer) CD or bootable USB stick should allow you to verify this. Finally, you should ensure that your self-signed binary boots correctly and without error.

Installing Standalone GRUB

By default, the GRUB bootloader uses a configuration file stored at /boot/grub/grub.cfg. Ordinarily, this file could be edited by anyone able to modify the contents of your /boot partition, either by booting to another OS or by placing your drive in another computer.

Bootloader Security

Prior to the advent of secure boot and UEFI, someone with physical access to a computer was presumed to have full access to it. User passwords could be bypassed by simply adding init=/bin/bash to the kernel cmdline parameter, and the computer would boot straight up into a root shell, with full access to all files on the system.
Setting up full disk encryption is one way to protect your data from physical attack: if the contents of the hard disk are encrypted, the disk must be decrypted before the system can boot. It is not possible to mount the disk's partitions without the decryption key, so the data is protected.
Another approach is to prevent an attacker from altering the kernel cmdline parameter. This approach is easily bypassed on most computers, however, by installing a new bootloader. This bootloader need not respect the restrictions imposed by the original bootloader. In many cases, replacing the bootloader may prove unnecessary—GRUB and other bootloaders are fully configurable by means of a separate configuration file, which could be edited to bypass security restrictions, such as passwords.
Therefore, there would be little security advantage in signing the GRUB bootloader alone, since the signed (and verified) bootloader would still load unsigned modules from the hard disk and use an unsigned configuration file. By having GRUB create a single bootable EFI binary containing all the necessary modules and the configuration file, those modules and that configuration are covered by the signature on the binary itself. Once signed, the GRUB binary cannot be modified without secure boot rejecting it and refusing to load, and such a failure would alert you that someone has attempted to compromise your computer by modifying the bootloader.
As mentioned earlier, this step may not be necessary on some distributions, as their GRUB bootloader automatically will enforce similar restrictions and checks on kernels when booted with secure boot enabled. So, this section is intended for those who are not using such a distribution or who wish to implement something similar themselves for learning purposes.
To create a standalone GRUB binary, the grub-mkstandalone tool is needed. This tool should be included as part of recent GRUB2 distribution packages:

grub-mkstandalone -d /usr/lib/grub/x86_64-efi/ \
-O x86_64-efi --modules="part_gpt part_msdos" \
--fonts="unicode" --locales="en@quot" \
--themes=""  -o "/home/user/grub-standalone.efi" \
"boot/grub/grub.cfg=/boot/grub/grub.cfg"
A more detailed explanation of the arguments used here is available on the grub-mkstandalone man page. The significant arguments are -o, which specifies the output file, and the final string argument, which specifies the path to the current GRUB configuration file. The resulting standalone GRUB binary is directly bootable and contains a memdisk holding the modules and the embedded configuration file. This binary can now be signed and used to boot the system. Note that this process should be repeated whenever the GRUB configuration file is regenerated (such as after adding a new kernel, changing boot parameters or adding a new operating system), since the embedded configuration file will otherwise fall out of sync with the regular system one.

A Licensing Warning

As GRUB 2 is licensed under the GPLv3 (or later), this raises one consideration to be aware of. It is not an issue for individual users, who can simply install new secure boot keys and boot a modified bootloader. However, if the GRUB 2 bootloader (or any other GPLv3-licensed bootloader) were signed with a private signing key, and the distributed computer system were designed to prevent the use of unsigned bootloaders, use of the GPLv3-licensed software would not be in compliance with the license. This is a result of the so-called anti-Tivoization clause of GPLv3, which requires that users be able to install and execute their own modified versions of GPLv3 software on a system, without being technically restricted from doing so.

Locking Down GRUB

To prevent a malicious user from modifying the kernel cmdline of your system (for example, to point to a different init binary), a GRUB password should be set. GRUB passwords are stored within the configuration file, after being hashed with a cryptographic hashing function. Generate a password hash with the grub-mkpasswd-pbkdf2 command, which will prompt you to enter a password.
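The interaction looks roughly like this (the hash is shortened here for illustration):

grub-mkpasswd-pbkdf2
Enter password:
Reenter password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.C0A6...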
The PBKDF2 function is a slow hash, designed to be computationally intensive and to resist brute-force attacks against the password. The number of rounds can be adjusted with the -c parameter, if desired, to slow the process further on a fast computer; the default is 10,000 rounds. Copy the resulting password hash, and add it to your GRUB configuration files (normally located in /etc/grub.d or similar). In the file 40_custom, add the following:

set superusers="root"
password_pbkdf2 root <password hash from grub-mkpasswd-pbkdf2>
This will create a GRUB superuser account named root, which is able to boot any GRUB entry, edit existing boot items and enter a GRUB console. Without further configuration, this password will also be required to boot the system. If you prefer to have yet another password on boot-up, you can skip the next step. With full disk encryption in use, though, there is little need to require a password on each boot-up.
To remove the requirement for the superuser password to be entered on a normal boot-up, edit the standard boot menu template (normally /etc/grub.d/10_linux), and locate the line creating a regular menu entry. It should look somewhat similar to this:

echo "menuentry '$(echo "$title" | grub_quote)' 
 ↪${CLASS} \$menuentry_id_option 
 ↪'gnulinux-$version-$type-$boot_device_id' {" | sed
 ↪"s/^/$submenu_indentation/"
Change this line by adding the argument --unrestricted before the opening curly bracket. This tells GRUB that booting this entry does not require a password prompt. Depending on your distribution and GRUB version, the exact contents of the line may differ. The resulting line should be similar to this:

echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS}
 ↪\$menuentry_id_option 
 ↪'gnulinux-$version-$type-$boot_device_id'
 ↪--unrestricted {" | sed "s/^/$submenu_indentation/"
After adding a superuser account and configuring the need (or otherwise) for boot-up passwords, the main GRUB configuration file should be re-generated. The command for this is distribution-specific, but is often update-grub or grub-mkconfig. The standalone GRUB binary also should be re-generated and tested.
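As a sketch, the full cycle using the same paths and keys as earlier in this article would be:

grub-mkconfig -o /boot/grub/grub.cfg    # or: update-grub, on Debian-based systems
grub-mkstandalone -d /usr/lib/grub/x86_64-efi/ \
-O x86_64-efi --modules="part_gpt part_msdos" \
--fonts="unicode" --locales="en@quot" \
--themes="" -o "/home/user/grub-standalone.efi" \
"boot/grub/grub.cfg=/boot/grub/grub.cfg"
osslsigncode -certs DB.crt -key DB.key \
-h sha256 -in /home/user/grub-standalone.efi \
-out /home/user/grub-signed.efi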

Protecting the Kernel

At this point, you should have a system capable of booting a signed (and password-protected) GRUB bootloader. An adversary without access to your keys would not be able to modify the bootloader or its configuration or modules. Likewise, attackers would not be able to change the parameters passed by the bootloader to the kernel. They could, however, modify your kernel image (by swapping the hard disk into another computer). This would then be booted by GRUB. Although it is possible for GRUB to verify kernel image signatures, this requires you to re-sign each kernel update.
An alternative approach is to use full disk encryption to protect the full system, including kernel images, the root filesystem and your home directory. This prevents someone from removing your computer's drive and accessing your data or modifying it—without knowing your encryption password, the drive contents will be unreadable (and thus unmodifiable).
Most online guides show full disk encryption with a separate, unencrypted /boot partition (which holds the kernel and initrd images) for ease of booting. If you instead create only a single, encrypted root partition, there won't be an unencrypted kernel or initrd stored on the disk. You can, of course, create a separate boot partition and encrypt it using dm-crypt as normal, if you prefer.
The full process of carrying out full disk encryption including the boot partition is worthy of an article in itself, given the various distribution-specific changes necessary. A good starting point, however, is the ArchLinux Wiki (see Resources). The main difference from a conventional encryption setup is the use of the GRUB GRUB_ENABLE_CRYPTODISK=y configuration parameter, which tells GRUB to attempt to decrypt an encrypted volume prior to loading the main GRUB menu.
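The parameter itself belongs in /etc/default/grub, after which the GRUB configuration must be regenerated:

GRUB_ENABLE_CRYPTODISK=y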
To avoid having to enter the encryption password twice per boot-up, the system's /etc/crypttab can be used to decrypt the filesystem with a keyfile automatically. This keyfile then can be included in the (encrypted) initrd of the filesystem (refer to your distribution's documentation to find out how to add this to the initrd, so it will be included each time it is regenerated for a kernel update).
This keyfile should be owned by the root user, and no user or group needs read access to it. Likewise, you should give the initrd image (in the boot partition) the same protection, to prevent the keyfile from being extracted from it while the system is powered up.
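A hedged sketch of the keyfile setup (the device, filename and crypttab entry name are illustrative; refer to your distribution's documentation for adding the keyfile to the initrd):

dd if=/dev/urandom of=/root/crypto_keyfile.bin bs=512 count=4
cryptsetup luksAddKey /dev/sda2 /root/crypto_keyfile.bin   # enroll the keyfile in the LUKS header
chmod 000 /root/crypto_keyfile.bin                         # root only; no other access needed
chmod -R go-rwx /boot                                      # protect the initrd images as well

The matching /etc/crypttab entry would then look something like this:

cryptroot  UUID=<luks-partition-uuid>  /root/crypto_keyfile.bin  luks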

Final Considerations

UEFI secure boot allows you to take control over what code can run on your computer. Installing your own keys allows you to prevent malicious people from easily booting their own code on your computer. Combining this with full disk encryption will keep your data protected against unauthorized access and theft, and prevent an attacker from tricking you into booting a malicious kernel.
As a final step, you should apply a password to your UEFI setup interface, to prevent a physical attacker from entering the setup interface and installing their own PK, KEK and db keys, as these instructions did. Be aware, however, that a weakness in your motherboard's or laptop's implementation of UEFI could potentially allow this password to be bypassed or removed, and that the ability to re-flash the UEFI firmware through a "rescue mode" on your system could potentially clear NVRAM variables. Nonetheless, by taking control of secure boot and using it to protect your system, you should be better protected against malicious software and against those with temporary physical access to your computer.

Resources

Information about third-party secure boot keys: http://mjg59.dreamwidth.org/23400.html
More information about the keys and inner workings of secure boot: http://blog.hansenpartnership.com/the-meaning-of-all-the-uefi-keys
osslsigncode repository: http://sourceforge.net/projects/osslsigncode
ArchLinux Wiki instructions for fully encrypted systems: https://wiki.archlinux.org/index.php/Dm-crypt/Encrypting_an_entire_system#Encrypted_boot_partition_.28GRUB.29
Guide for full-disk encryption including kernel image: http://www.pavelkogan.com/2014/05/23/luks-full-disk-encryption
Fedora Wiki on its secure boot implementation: https://fedoraproject.org/wiki/Features/SecureBoot

How to resume a large SCP file transfer on Linux

http://ask.xmodulo.com/resume-large-scp-file-transfer-linux.html

Question: I was downloading a large file using SCP, but the download transfer failed in the middle because my laptop got disconnected from the network. Is there a way to resume the interrupted SCP transfer where I left off, instead of downloading the file all over again?
Originally based on the BSD RCP protocol, SCP (secure copy) is a mechanism for transferring files between two endpoints over a secure SSH connection. However, as a simple copy protocol, SCP does not understand range requests or partial transfers the way HTTP does. As such, popular SCP implementations, such as the scp command-line tool, cannot resume aborted downloads after a lost network connection.
If you want to resume an interrupted SCP transfer, you need to rely on other programs that support partial transfers. One popular such program is rsync. Like scp, rsync can transfer files over SSH.
Suppose you were trying to download a file (bigdata.tgz) from a remote host remotehost.com using scp, but the SCP transfer was stopped in the middle due to a stalled SSH connection. You can use the following rsync command to easily resume the stopped transfer. Note that the remote server must have rsync installed as well.
$ cd /path/to/directory/of/partially_downloaded_file
$ rsync -P --rsh=ssh userid@remotehost.com:bigdata.tgz ./bigdata.tgz
The "-P" option is the same as "--partial --progress", which allows rsync to work with partially downloaded files. The "--rsh=ssh" option tells rsync to use ssh as the remote shell.
Once the command is invoked, the rsync processes on the local and remote hosts compare the local file (./bigdata.tgz) with the remote file (userid@remotehost.com:bigdata.tgz), determine between themselves which portions of the file differ, and transfer the difference to the appropriate end. In this case, the missing bytes of the partially downloaded local file are downloaded from the remote host.

If the above rsync session itself gets interrupted, you can resume it as many times as you want by typing the same command. rsync will automatically restart the transfer where it left off.
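The same approach works in the reverse direction. As a hedged sketch, resuming an interrupted upload would look like this (the remote destination path is illustrative):

$ rsync -P -e ssh ./bigdata.tgz userid@remotehost.com:/path/to/destination/

Here "-e ssh" is simply the short form of "--rsh=ssh".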

How do I forcefully unmount a Linux disk partition?

http://www.cyberciti.biz/tips/how-do-i-forcefully-unmount-a-disk-partition.html

Sometimes when you try to unmount a disk partition, a mounted CD/DVD or another device that is being accessed by other users, you will get the error "umount: /xxx: device is busy". Linux and FreeBSD come with the fuser command, which can forcefully kill the processes keeping a mounted partition busy. For example, you can kill all processes accessing the file system mounted at /nas01 using the fuser command.

Understanding the device busy error

Linux / UNIX will not allow you to unmount a device that is busy. There are many reasons for this (such as a program accessing the partition or holding files open), but the most important one is to prevent data loss. To find out which processes have activity on the device/partition, if your device name is /dev/sda1, enter the following command as the root user:
# lsof | grep '/dev/sda1'
Output:
vi 4453       vivek    3u      BLK        8,1                 8167 /dev/sda1
The above output tells you that user vivek has a vi process running that is using /dev/sda1. All you have to do is stop the vi process and run umount again. As soon as that program terminates its task, the device will no longer be busy, and you can unmount it with the following command:
# umount /dev/sda1

How do I list the users on the file-system /nas01/?

Type the following command:
# fuser -mu /nas01/
# fuser -mu /var/www/

Sample outputs:
/var/www:             3781rc(root)  3782rc(nginx)  3783rc(nginx)  3784rc(nginx)  3785rc(nginx)  3786rc(nginx)  3787rc(nginx)  3788rc(nginx)  3789rc(nginx)  3790rc(nginx)  3791rc(nginx)  3792rc(nginx)  3793rc(nginx)  3794rc(nginx)  3795rc(nginx)  3796rc(nginx)  3797rc(nginx)  3798rc(nginx)  3800rc(nginx)  3801rc(nginx)  3802rc(nginx)  3803rc(nginx)  3804rc(nginx)  3805rc(nginx)  3807rc(nginx)  3808rc(nginx)  3809rc(nginx)  3810rc(nginx)  3811rc(nginx)  3812rc(nginx)  3813rc(nginx)  3815rc(nginx)  3816rc(nginx)  3817rc(nginx)
The following sections show how to forcefully unmount a device or partition using the umount or fuser Linux commands.

Linux fuser command to forcefully unmount a disk partition

Suppose you have /dev/sda1 mounted on the /mnt directory; you can then use the fuser command as follows:
WARNING! These examples may result in data loss if not executed properly (see "Understanding the device busy error" above for more information).
Type the command to unmount /mnt forcefully:
# fuser -km /mnt
Where,
  • -k : Kill processes accessing the file.
  • -m : The name that follows specifies a file on a mounted file system or a mounted block device; all processes accessing that file system are killed. In the above example, this is /mnt.

Linux umount command to unmount a disk partition

You can also try the umount command with the -l option on a Linux-based system:
# umount -l /mnt
Where,
  • -l : Lazy unmount. Detach the filesystem from the filesystem hierarchy now, and clean up all references to it as soon as it is no longer busy. This option requires kernel version 2.4.11 or later.
If you would like to unmount an NFS mount point, try the following command:
# umount -f /mnt
Where,
  • -f: Force unmount in case of an unreachable NFS system
Please note that using these commands or options can cause data loss for open files; programs that access files after the file system has been unmounted will get an error.
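Before forcing anything, it can help to see exactly who is using the mount point. A hedged example (the mount point is illustrative):

# fuser -vm /mnt

The -v option prints a verbose listing showing each process, its owner and how it is accessing the file system, so you can decide whether killing it is safe.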

Linux / Unix: jobs Command Examples

http://www.cyberciti.biz/faq/unix-linux-jobs-command-examples-usage-syntax

I am a new Linux and Unix user. How do I show the active jobs on Linux or Unix-like systems using a BASH/KSH/TCSH or POSIX-based shell? How can I display the status of jobs in the current session on Unix/Linux?

Job control is the ability to stop/suspend the execution of processes (commands) and continue/resume their execution as required. This is done using your operating system and a shell such as bash, ksh, or a POSIX shell.
jobs command details
  • Description : Show the active jobs in the shell
  • Root privileges : No
  • Estimated completion time : 10m

Your shell keeps a table of currently executing jobs, which can be displayed with the jobs command.

Purpose

Displays status of jobs in the current shell session.

Syntax

The basic syntax is as follows:
jobs
OR
jobs jobID
OR
jobs [options] jobID

Starting a few jobs for demonstration purposes

Before you start using the jobs command, you need to start a couple of jobs on your system. Type the following commands to start the jobs:
### Start xeyes, calculator, and gedit text editor ###
xeyes &
gnome-calculator &
gedit fetch-stock-prices.py &
 
Finally, run the ping command in the foreground:
 
ping www.cyberciti.biz
 
To suspend the ping command job, press the Ctrl-Z key sequence.

jobs command examples

To display the status of jobs in the current shell, enter:
$ jobs
Sample outputs:
[1]   7895 Running                 xeyes &
[2]   7906 Running                 gnome-calculator &
[3]-  7910 Running                 gedit fetch-stock-prices.py &
[4]+  7946 Stopped                 ping www.cyberciti.biz
To display the process ID or job information for the job whose command name begins with "p", enter:
$ jobs -p %p
OR
$ jobs %p
Sample outputs:
[4]-  Stopped                 ping www.cyberciti.biz
The character % introduces a job specification. In this example, %p matches the job whose command name begins with "p"; you could equally use a longer prefix of the suspended command, such as %ping.
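Other job specifications work the same way. For example (the job numbers here assume the demo session started above):

$ fg %1        # bring job 1 (xeyes) to the foreground
$ kill %ping   # send SIGTERM to the job whose command begins with "ping"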

How do I show process IDs in addition to the normal information?

Pass the -l (lowercase L) option to the jobs command for more information about each job listed:
$ jobs -l
Sample outputs:
Fig.01: Displaying the status of jobs in the shell

How do I list only processes that have changed status since the last notification?

First, start a new job as follows:
$ sleep 100 &
Now, to show only the jobs that have stopped or exited since the last notification, type:
$ jobs -n
Sample outputs:
[5]-  Running                 sleep 100 &

Displaying process IDs (PIDs) only

Pass the -p option to the jobs command to display PIDs only:
$ jobs -p
Sample outputs:
7895
7906
7910
7946
7949

How do I display only running jobs?

Pass the -r option to the jobs command to display running jobs only:
$ jobs -r
Sample outputs:
[1]   Running                 xeyes &
[2]   Running                 gnome-calculator &
[3]-  Running                 gedit fetch-stock-prices.py &

How do I display only jobs that have stopped?

Pass the -s option to the jobs command to display stopped jobs only:
$ jobs -s
Sample outputs:
[4]+  Stopped                 ping www.cyberciti.biz
To resume the ping job, enter the following bg command:
$ bg %4

jobs command options

From the bash(1) man page:
  • -l : Show process IDs in addition to the normal information.
  • -p : Show process IDs only.
  • -n : Show only processes that have changed status since the last notification.
  • -r : Restrict output to running jobs.
  • -s : Restrict output to stopped jobs.
  • -x : COMMAND is run after all job specifications that appear in ARGS have been replaced with the process ID of that job's process group leader.
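As a small illustration of the -x option (the job number assumes the session above), the following replaces %1 with the process group ID of job 1, so echo simply prints that number:

$ jobs -x echo %1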

A note about /usr/bin/jobs and shell builtin

Type the following type command to find out whether jobs is a shell builtin, an external command, or both:
$ type -a jobs
Sample outputs:
jobs is a shell builtin
jobs is /usr/bin/jobs
In almost all cases, you should use the jobs command implemented as a BASH/KSH/POSIX shell builtin. The /usr/bin/jobs command cannot report on the jobs of the current shell: it runs in a separate process and does not share the parent shell's knowledge of its jobs.
