Friday, January 22, 2016

How to set up an intermediate-compatible SSL website with a Let's Encrypt certificate

Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit.
The key principles behind Let’s Encrypt are:
  • Free: Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost.
  • Automatic: Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal.
  • Secure: Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers.
  • Transparent: All certificates issued or revoked will be publicly recorded and available for anyone to inspect.
  • Open: The automatic issuance and renewal protocol will be published as an open standard that others can adopt.
  • Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the community, beyond the control of any one organization.


First, we have to mention some dark sides of the Let's Encrypt service. However great the idea of a free, public, and open certificate authority is, it also brings many troubles for us. The developers have tried to make the system of obtaining certificates as simple as possible, but it still requires above-average server administration skills. Therefore, many developers, such as those behind ISPConfig, have implemented this solution directly into their products, which makes deployment more effective and supervision over the whole system much easier and more flexible.

Real complication:

Many people have decided to implement Let's Encrypt on their production sites. I still find this a very bad idea to do without being very (but really very) careful. Let's Encrypt brings you freedom but also limits you to certificates signed with SHA-256 RSA. Support for SHA-2 has improved over the last few years. Most browsers, platforms, mail clients and mobile devices already support SHA-2. However, some older operating systems, such as Windows XP before SP3, do not support SHA-2 signatures. Many organizations will be able to convert to SHA-2 without running into user experience issues, and many may want to encourage users running older, less secure systems to upgrade.
In this tutorial, we are going to deal with this incompatibility in a simple, but still nasty way.


Requirements:

  • Apache version 2.4 and higher
  • OpenSSL version 1.0.1e and higher
  • Apache mod_rewrite enabled

The whole idea:

As mentioned before, there are still devices on the Internet that are incompatible with SHA-256 signatures. When I was forced to deploy SSL on some websites, I had to decide between two options:
  1. Using Let's Encrypt, having it for free but not working for everyone
  2. Buying a certificate with a 128-bit signature
Well, option no. 1 was still the only way, as it had been promised to the customer long ago :)

No more theory:

I hope I have explained what is needed, and now we can deal with the visitors our website cannot support. There are many people using Windows XP machines with SP2 and lower (yes, there are still plenty of them), so we have to filter these people out.
In your vhost file in “/etc/apache2/sites-available/”, add the following at the end:
RewriteEngine on
RewriteCond %{HTTP_USER_AGENT} !(NT\ 5) [NC]
RewriteRule ^(.*) https:// [R]
RewriteCond tests a string from the HTTP header sent by the guest accessing your page. You can simply check your own User-Agent string, and find more information in the Apache mod_rewrite documentation.
The condition we used says, in effect, “if the User-Agent string doesn't contain 'NT 5'”; when it holds, the RewriteRule applies the redirect [R] to the https variant of your domain. “NT 5” is the OS version string sent by Windows XP devices.
Without this conditional redirect (that is, if you redirected everyone to https), incompatible users wouldn't be able to access your website at all.
I have to warn you that this solution is not 100% perfect, as some guests may not provide relevant or real information in their headers. I have worked with AWstats to figure out what share of requests comes from unknown systems accessing my page, and it is about 1.3%, so pretty few requests. If you want to treat unknown operating systems as potentially incompatible as well, you can add “unknown” to the condition (RewriteCond %{HTTP_USER_AGENT} !(NT\ 5|unknown) [NC]).
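Putting the pieces together, a fuller sketch of the vhost redirect might look like the following. Note that example.com and the use of %{HTTP_HOST}%{REQUEST_URI} as the redirect target are my stand-ins; substitute your own domain:

```apache
<VirtualHost *:80>
    ServerName example.com
    RewriteEngine on
    # Everything except Windows XP (NT 5.x) and unknown user agents goes to HTTPS
    RewriteCond %{HTTP_USER_AGENT} !(NT\ 5|unknown) [NC]
    RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
</VirtualHost>
```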
AWstats graphic.
After successfully “non-redirecting” your incompatible visitors (keeping them in the insecure http world), you can focus on the https side.

HTTPS configuration:

Now we assume you already assigned the certificate to your web server and also enabled it.
In your vhost config file again, add following:
SSLProtocol All -SSLv2 -SSLv3 
SSLCompression off 
SSLHonorCipherOrder On 
The cipher suite used alongside these settings should be a bit longer than usual, for better compatibility. You can generate your own with a tool such as the Mozilla SSL Configuration Generator.
I must mention again that you won't ever get a configuration that satisfies both a strict security policy and full compatibility; you have to find a compromise.
After applying these settings, you can test your server configuration and compatibility with the Qualys SSL Labs server test.
You will find a long list of compatible and incompatible devices, along with more information to point you toward your own “perfect” solution.

What Is Fork Bomb And How Can You Prevent This Danger?

If you are not thrilled by the idea of a real bomb, you can try typing :(){ :|:& };: in your Linux terminal to crash your computer. You do not need to be the root user to do that. That string is known as a fork bomb. Before you get to know how it works, it is better to know what a fork bomb does exactly.

Despite the name, a fork bomb does not throw dining forks at you when you execute the string in a terminal. In *nix terminology, the word “fork” means to create a new process: when you create a new process using fork() (a function that can be called on Linux/Unix-like machines), the new process is created from the image of the original one and is essentially an inherited copy of the parent process.
A fork bomb calls the fork function indefinitely and rapidly, exhausting all system resources in no time. It falls into the category of denial-of-service attacks because of how quickly it eats up system resources and makes the machine unusable.
Each of the special characters in the string has its own function in a *nix operating system. The fork bomb is built on the concept of recursion in a bash script, using a function that calls itself.
Syntax of a function in bash:

name () {
  commands
}
In this case, the colon ‘:’ is the function name, followed by parentheses ‘()’ and an opening curly brace ‘{’ that starts the function body. The definition ‘:|:&’ tells bash to launch the ‘:’ function, pipe (‘|’) its output into another invocation of the same function ‘:’, and send the whole thing to the background with ‘&’, so that it can't be killed by hitting “Ctrl + C”. The closing curly brace ‘}’ ends the definition, and after the ‘;’ the final ‘:’ calls the function, setting the recursion in motion.
To launch the bomb, all you need to do is copy or type :(){ :|:& };: into a terminal and hit Enter. Your session will slow to a crawl in a matter of seconds, and you will be left with no option but a warm reset. The actual time in which your system succumbs depends on the speed of your processor, the number of processing cores, and the amount of RAM installed. The size of the swap partition is also a factor: once the bomb starts eating into swap, the system will typically take far too long to respond for you to do anything.
Before we proceed further, let me make clear that the fork bomb is not specific to Linux. The technique of creating new processes works on Windows as well, and the same problem can be reproduced in systems programming languages such as C or C++. For instance, compile the following code and execute it, and you will see for yourself.

#include <unistd.h>

int main(void) {
    while (1)
        fork();  /* each call spawns another copy of this process */
    return 0;
}
HOW TO PROTECT YOURSELF FROM A FORK BOMB? Fork bombs hang the system by creating an enormous number of processes, and the worst part is that one does not even need to be the superuser to launch one. To shield your system from fork bombs, limit the number of processes local users are allowed to create; somewhere around 1000 to 4000 processes is a reasonable cap. A user generally works with about 200-300 processes at the same time, although for people who do a lot of multitasking, 1000 might be a little low.
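Before picking a cap, it helps to see how many processes you actually use; a quick sketch (assumes a POSIX ps and bash):

```shell
# Count processes owned by the current user and show the per-user limit.
count=$(ps -u "$(whoami)" -o pid= | wc -l)
echo "processes running for $(whoami): $count"
echo "per-user process limit (ulimit -u): $(ulimit -u)"
```

If the count sits in the low hundreds, a cap of a few thousand leaves comfortable headroom.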
Understanding /etc/security/limits.conf file!
Each line describes a limit for a user in the form:

<domain> <type> <item> <value>
  • <domain> can be:
    • a user name
    • a group name, with @group syntax
    • the wildcard *, for the default entry
    • the wildcard %, which can also be used with %group syntax, for the maxlogins limit
  • <type> can have two values:
    • “soft”, for enforcing soft limits
    • “hard”, for enforcing hard limits
  • <item> can be one of the following:
    • core – limits the core file size (KB)
    • data – max data size (KB)
    • fsize – maximum filesize (KB)
    • memlock – max locked-in-memory address space (KB)
    • nofile – max number of open files
    • rss – max resident set size (KB)
    • stack – max stack size (KB)
    • cpu – max CPU time (MIN)
    • nproc – max number of processes
    • as – address space limit
    • maxlogins – max number of logins for this user
    • maxsyslogins – max number of logins on the system
    • priority – the priority to run user process with
    • locks – max number of file locks the user can hold
    • sigpending – max number of pending signals
    • msgqueue – max memory used by POSIX message queues (bytes)
    • nice – max nice priority allowed to raise to
    • rtprio – max realtime priority
    • chroot – change root to directory (Debian-specific)
To limit the number of processes per user, open the file /etc/security/limits.conf and add the following line at the bottom:

mohammad hard nproc 4000

This restricts the user account “mohammad” to at most 4000 processes. Save the file, reboot the system, and try launching the fork bomb: the system should now prevent the crash and withstand the attack!
If a fork bomb has already been launched and the process-count restrictions are active, you can log in as root and kill all the bash processes to terminate it. If, however, the bomb was launched by a local user whose process count you haven't restricted, your CPU will be left with little time for you to clean it up.
You should not use the following command to kill the script.

$ killall -9 Script_Name

This will not work, due to the nature of a fork bomb: killall does not hold a lock on the process table, so for each process that is killed, a new one takes its place. You may also be unable to run killall at all, because your shell would need to fork yet another process to run it.
Instead, you can run these commands to stop a fork bomb (exec replaces the shell itself rather than forking):

$ exec killall -9 ScriptName

$ exec killall -STOP ScriptName
Note that these restrictions have no effect on the root user or on any process with the CAP_SYS_ADMIN or CAP_SYS_RESOURCE capabilities on a Linux-based system.
Hope you enjoyed the article; make me happy with your comments if you have any questions :-)

How to reset the password in an LXC container

Question: I created an LXC container, but I cannot log in to the container as I forgot the default user's password and the root password. How can I reset the password on an LXC container?
When you create an LXC container, it will have the default username/password set up. The default username/password will vary depending on which LXC template was used to create the container. For example, Debian LXC will have the default username/password set to root/root. Fedora LXC will have the root password set as expired, so it can be set on the first login. Ubuntu LXC will have ubuntu/ubuntu as the default username/password. For any pre-built container images downloaded from third-party repositories, their default username/password will also be image-specific.
If you do not know the default username/password of your LXC container, there is an easy way to find the default username and reset its password.
First of all, make sure to stop the LXC container before proceeding.
$ sudo lxc-stop -n <container-name>

Find the Default User of an LXC Container

To find the default username created in an LXC container, open the container's /etc/passwd, which can be found at /var/lib/lxc/<container-name>/rootfs/etc/passwd on the LXC host. In the passwd file of the container, look for "login-enabled" users, which have "/bin/bash" (or something similar) listed as their login shell. Any such username can be the default username of the container; for example, usernames like "ubuntu" or "sdn" would be login-enabled.

Any username which has "/usr/sbin/nologin" or "/bin/false" as its login shell is login-disabled.
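The same check can be scripted. A small sketch that lists login-enabled accounts from a passwd file (here the host's own /etc/passwd stands in for the container's rootfs path):

```shell
# Print username and shell for accounts whose shell is not nologin or false.
grep -Ev '(nologin|false)$' /etc/passwd | cut -d: -f1,7
```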

Reset the User Password in an LXC Container

To reset the password of any login-enabled username, you can modify the /etc/shadow file of the container, which can be found at /var/lib/lxc/<container-name>/rootfs/etc/shadow on the LXC host. In Linux, the /etc/shadow file stores the one-way encrypted passwords (password hashes) of user accounts. Each line in /etc/shadow consists of fields concatenated with the ":" delimiter. The first two fields represent a username and its encrypted password.

If the password field is set to '!' or '*', it means the user account is locked for access or completely disabled for login.
To reset the password of any login-enabled username, all you have to do is remove the password hash of that username and leave the ":" delimiters only. For example, for username "sdn", blank out the second field so that the line begins with "sdn::".
Similarly, to reset the root password, simply delete the password hash of root.
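The edit itself just empties the second ":"-separated field. As a sketch, with a made-up shadow line standing in for a real one:

```shell
# Blank the password hash (second field) of a sample shadow entry.
line='sdn:$6$abc123$hash:17182:0:99999:7:::'
echo "$line" | sed 's/^\([^:]*\):[^:]*:/\1::/'
# prints: sdn::17182:0:99999:7:::
```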
With the password field set to empty, you will be able to log in to the user account from the console without any password. Now start the container, and verify password-less console login.
Don't forget to set a new password using passwd after successful login.

Wednesday, January 20, 2016

Monkeying around on the bash command line in 2016

Year of the Monkey
Soon it will be 2016 -- the Year of the Monkey in the Chinese Zodiac's 12-year cycle. People born in these years (e.g., 1920, 1932, 1944, 1956, 1968, 1980, 1992, 2004, and now 2016) are supposed to be quick-witted, optimistic, ambitious, etc. So, let's see what quick-witted, optimistic, ambitious things we can do to monkey around on the command line more gracefully.
To start, let's look at some of the more unusual and less obvious things that you can do with Linux history. The tricks that most command line users know include these relatively easy commands:
Example   Description
=======   ===========
!e        run the last command you ran that started
          with "e"
!22       run the 22nd command as stored in the
          history of commands
!!        run the previously entered command
sudo !!   run the previous command using sudo (very
          helpful if you forgot to use sudo and don't
          want to retype the entire command)
sudo !e   run the last command you ran that started
          with "e" using sudo
sudo !22  run the 22nd command in your history using
          sudo
Less obvious are commands such as !-2 that run previous commands based on how far we have to reach back in our history. This one runs the command that was entered two commands ago (i.e., the command before the most recent command).
$ echo one
$ echo two
$ echo three
$ !-2
echo two
Command line substitution commands can also be very helpful. One that I find fairly useful is !$ which allows you to reuse the last string from the previous command. Here's an example where the argument is a fairly long path:

# ls /home/jdoe/scripts/bin
showmypath  trythis
# cd !$
cd /home/jdoe/scripts/bin
Here's another example that makes it easier to see that !$ reuses only the last of three arguments, while !^ reuses the first:
# echo one two three
one two three
# echo !$
echo three
# echo one two three
one two three
# echo !^
echo one
Clearing your history can be useful if you want to focus on just recent commands, though this doesn't clear out your .bash_history file; you should also remove that file if you want your command history gone from the system.
# history -c
# history
  238  history
Some other interesting options for history commands include the HISTCONTROL settings. These allow you to ignore commands entered with preceding blanks, ignore duplicate commands (when they've been entered consecutively), or do both.
  ignoredups    ignore duplicate commands
  ignorespace   ignore commands starting with spaces
  ignoreboth    ignore consecutive duplicates and commands starting with blanks
HISTSIZE, in turn, sets the size of your history queue.
The first of these (HISTCONTROL=ignoredups) ensures that, when you type the same command multiple times in a row, you will only see one of the repeated commands in your history file. Notice how the pwd command appears only once in the history output though we entered it three times.

# pwd
# pwd
# pwd
# echo hello
# history | tail -4
  249  HISTCONTROL=ignoredups
  250  pwd
  251  echo hello
  252  history | tail -4
The second option (HISTCONTROL=ignorespace) means that, when you start a command with blanks, it won't be stored in your history file. This can be very useful when you want to run commands that you don't want recorded (e.g., when they include a password).
#      echo do not store this command in my history
do not store this command in my history
# echo ok
# history 3
  244  clear
  245  echo ok
  246  history 3
The third (HISTCONTROL=ignoreboth) option sets up both of these history control settings.
The HISTSIZE setting adjusts the size of your history queue. Unless you make this change permanent by putting in one of your dot files (e.g., .bashrc), the setting won't persist and your history queue will go back to its original size.
The following command line options are useful, but hard enough to remember that it might be easier to make many of these changes manually than to keep the key sequences in your head. Also keep in mind that the position of the cursor on the command line often determines what each of the commands will do. Fortunately, there's an undo command to reverse any change you just made.
ctl+a    move to the beginning of the command line
ctl+e    move to end of line
alt+f    move to space following the end of the word
alt+b    move to start of current or previous word (if you're in
         the space)
ctl+t    swap 2 characters (the current and preceding)
alt+t    swap the current and previous words
ctl+u    cut text before cursor
ctl+w    cut part of the word before the cursor
ctl+k    cut text of current command after the cursor
ctl+y    paste the cut text after the cursor
alt+u    uppercase the next word or remaining part of current
         word (cursor position and on)
alt+l    lowercase the next word
alt+c    capitalize the next word
ctl+l    clear the screen
ctl+_    undo the change you just made

In all of these command line tricks, alt+ means hold down the alt key and then type the letter that follows, while ctl+ means hold down the control key. I show the subsequent letters all in lowercase since you don't need to use the shift key.
Position your cursor on the "three" in this echo command, press alt+t, and you'll see "two" and "three" swap places:
$ echo one two three four five
one two three four five
$ echo one three two four five
While these are all nice options for manipulating your commands, you might find that many are just not worth trying to keep all the alt and ctl sequences straight. Maybe several will come in handy depending on the command line changes you frequently need to make.

Tuesday, January 19, 2016

Tools for Managing OpenStack

As I mentioned in the previous article in this series, at its most basic level, OpenStack consists of an API. The group heading up OpenStack has created developer software that implements OpenStack called DevStack. DevStack is meant for testing and development but not for running an actual data center. Various companies and organizations have created their own implementations of OpenStack that are intended for production.
Although these are all separate software products, they all share the fact that they expose an API consistent with the OpenStack specification. That API allows you to control the OpenStack software programmatically, which opens up a whole world of possibilities. Furthermore, the API is RESTful, allowing you to use it from a browser or through any programming platform that lets you make HTTP calls.
As a developer, this design allows you to take a couple of approaches to managing an OpenStack infrastructure. For one, you could make calls to the API through your browser. Or, you can write scripts and programs that run from the command line or desktop and make the calls; these scripts can then be run by various automation tools.
First, let’s consider the browser apps. Remember that a browser app lives on two ends: The server side serving out the HTML and JavaScript and so on, and the app in the browser running said HTML and JavaScript. The code running in the browser is easily viewable and debuggable in the browser itself by an experienced developer. What this means is that you do not want to put any security code in the browser. That, in turn, means you typically wouldn’t make calls from the browser directly to the OpenStack API unless you’re operating in a strictly trusted development and testing environment.
The main reason for this is that you don't want to be sending private keys down to the browser, where anyone could access and pass them around. Instead, you should follow web development best practices and implement a security layer between the browser and your server, and then have the server make the RESTful calls to the OpenStack API.
For the other case of scripts and programs outside the browser, you have several options. You can make the RESTful calls yourself, or you can use a third-party library that understands OpenStack. These scripts and apps can manage your infrastructure by making the OpenStack API calls.
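As an illustration, here is roughly what a raw token request to the Keystone identity API looked like in the v2.0 era. The endpoint, tenant, and credentials below are placeholders of mine, not values from this article:

```shell
# Build the JSON body for a Keystone v2.0 token request.
payload='{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "demo", "password": "secret"}}}'
echo "$payload"
# Against a real endpoint you would POST it, e.g.:
#   curl -s -X POST http://controller:5000/v2.0/tokens \
#        -H "Content-Type: application/json" -d "$payload"
```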
But, there’s yet another possibility. Various management tools allow you to manage an OpenStack environment using modules built specifically for OpenStack. Two such management tools are Puppet and Chef.


Puppet

With Puppet, you first define the state of your IT infrastructure, and Puppet automatically enforces that desired state. So, to get started using Puppet, you need to create some configuration files. You can use these files in a descriptive sense, essentially describing the state of your system. However, the configuration language also includes procedural constructs such as loops, along with support for variables.
Puppet provides full support for OpenStack, and the OpenStack organization has even devoted a page to Puppet’s support. The modules described on this page are created by the OpenStack community for Puppet and as such reside on OpenStack’s own Git repository system.
Figure 1: Supported OpenStack modules from Puppet Forge.

Additionally, the Puppet community has contributed modules that support OpenStack. If you head over to the Puppet Forge site, you can search by simply entering OpenStack into the search box. This brings up a few dozen modules (see Figure 1). Some are created by members of the community. The ones that are on OpenStack’s Git repository are also here as well. (Just a quick note here; in the list shown in the image, make sure you click on the module name -- the word after the slash -- not the username, which is the word before the slash. Clicking on the username takes you to a list of all modules by that user.)
Installing the modules for Puppet takes a quick and easy command, like so:
puppet module install openstack-keystone
This step installs the keystone module that’s created by the OpenStack organization. (Keystone is the name of OpenStack’s identity service.)
The modules come with examples, which you'll want to study carefully. The openstack-keystone module includes four examples, one of which is for basic LDAP testing. Take a look at the file called ldap_identity.pp. It creates a class called keystone::roles::admin, which includes username and password members.
Because this module is just for testing, the username and password are hardcoded in it. Then, it creates a class called keystone::ldap that contains information for connecting to LDAP, such as the following familiar-looking user string:
and other such members. The best way to become familiar with managing OpenStack through Puppet is to play with the examples and use them with a small OpenStack setup.
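For flavor, a bare-bones manifest in the spirit of those examples might look like the following; the class parameters here are illustrative, so check the module's own documentation for the exact interface:

```puppet
# Hardcoded credentials, as in the test examples - never do this in production.
class { 'keystone':
  admin_token => 'testing-only-token',
}
class { 'keystone::roles::admin':
  email    => 'admin@example.com',
  password => 'ChangeMe',
}
```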


Chef

Chef offers similar tools for automating the provisioning and configuration of your infrastructure.
Chef uses cooking metaphors for its names. For example, a small piece of code is called a recipe, and a set of recipes is a cookbook. Here’s a page from the Chef documentation about working with OpenStack. If you’re planning to use Chef, this page includes a series of examples and explanations that will give you exactly what you need to get started (Figure 2).
Figure 2: Architecture diagram from the Chef documentation.

Like Puppet, Chef includes cookbooks for working with the different aspects of OpenStack, such as Keystone. Unlike Puppet, Chef doesn't use its own configuration language; instead, it uses Ruby. You don't need to be a Ruby programming expert to use Chef, however. In many cases, you can get by knowing just enough Ruby to configure your system. But if you need to perform advanced tasks, because it's Ruby, you can draw on the rest of the language, such as its procedural constructs.
Also like Puppet, Chef includes a searchable portal where you can find community-contributed recipes and cookbooks. Staying with the cooking metaphor, the portal is called the Supermarket. Note, however, that searching the Supermarket for OpenStack doesn’t provide as many libraries as with Puppet. Although I encourage you to browse through the Supermarket, you’ll want to pay close attention to Chef’s own documentation regarding OpenStack that I mentioned earlier.
You’ll also want to install the OpenStack Chef repo found on GitHub. This page contains the repo itself and shows a README page that also contains some great step-by-step information.


OpenStack is not small. Although you can control it programmatically from a browser or using HTTP calls within your own programming language of choice, you can also greatly simplify your life by embracing either Puppet or Chef. Which one should you choose? I suggest trying out both to see what works for you. Be forewarned that, in either case, you’ll need to learn some syntax of the files -- especially in the case of Chef, if you’re not already familiar with Ruby. Take some time to work through the examples. Install OpenStack on your own virtual machine, and go for it. You’ll be up to speed in no time.
Learn more about OpenStack. Download a free 26-page ebook, "IaaS and OpenStack - How to Get Started" from Linux Foundation Training. The ebook gives you an overview of the concepts and technologies involved in IaaS, as well as a practical guide for trying OpenStack yourself.


A wiki of Guides, Scripts, Tutorials related to devops
Devops tools

Table of Contents

  1. Vim
  2. Bash Guides and Scripts
  3. Python Guides and Scripts
  4. Awk Guides
  5. Sed
  6. Automation Guides
  7. Git
  8. Troubleshooting
  9. Backups
  10. Email Server Configuration
  11. Firewall and Monitoring
  12. Miscellaneous
  13. C programming
  14. Data Structures
  15. Code Editors
  16. Video Tutorials
  17. Continuous Integration
  18. Docker


Vim

Vim Cheat Sheet
Vim Regular Expressions 101

Bash Guides and Scripts

Real time file syncing daemon with inotify tools
Creating Init/Systemd Scripts
Building an RPM on CentOS
Bash Scripting Tutorials for Beginners
Bash variable Expansion
Bash Special Characters explained
Bash process substitution
Bash Indepth Tutorial
Back to top

Python Guides and Scripts

Python 3 String Encoding and Formatting
Python Local and Global Scopes
Building system monitoring apps in Python with Flask
Building a Database driven RESTFUL API in Python 3 with Flask
Building Database driven apps with MySQL or PostgreSQL using Python and SQLAlchemy ORM
Token based Authentication with Pyjwt
Script to automatically Scaffold a database driven CRUD app in python
Psutil a cross-platform Python library for retrieving information on running processes and system utilization (CPU, memory, disks, network)
Automating web testing with Selenium
Flask Github Webhook Handler
Flask Web Sockets
Understanding Threading and the Global Interpreter Lock
Packaging and Distributing Python Projects
Python Indepth Tutorial
Back to top

Awk Guides

An introduction to Awk
Text Processing examples with Awk
Back to top


Sed

An introduction and Tutorial
Back to top

Automation Guides

Automating Server Configs with Puppet
Automating Server Configs with the SaltStack
Using Foreman, an Opensource Frontend for Puppet
Using StackStorm, an Opensource platform for integration and automation across services and tools.
Back to top


Git

Git Quick Start
Git Indepth Tutorial
Back to top


Troubleshooting

Troubleshooting Linux Server Memory Usage
Troubleshooting Programs on Linux with Strace
Using Watch to continuously Monitor a command
Troubleshooting with Tcpdump
Back to top


Backups

BUP Git based Backup
Real time Backup Script written in bash
MySQL incremental Backup with Percona
Back to top

Email Server Configuration

Postfix configuration
Fail2ban configuration
Adding DMARC records
Back to top

Firewall and Monitoring

Configuring a Firewall for linux with CSF and LFD
Monitoring Linux Servers with Monit
Back to top


Miscellaneous

Linux System Calls
Linux one second boot
Installing a VPN server on Linux
Installing Ruby on Rails on Linux
Installing Gunicorn on Linux
Installing Django on Linux
The Twelve-Factor Software-As-A-Service App building methodology
Back to top

C programming

File I/O
C Programming Boot Camp
Beej's Guide to Network Programming
Back to top

Data Structures

Stack vs Heap
Back to top

Code Editors

Sublime Text
GNU Emacs
Back to top

Video Tutorials

Sys Admin
Youtube Channel
Back to top

Continuous Integration

Back to top



Server Hardening

Server hardening. The very words conjure up images of tempering soft steel into an unbreakable blade, or taking soft clay and firing it in a kiln, producing a hardened vessel that will last many years. Indeed, server hardening is very much like that. Putting an unprotected server out on the Internet is like putting chum in the ocean water you are swimming in—it won't be long and you'll have a lot of excited sharks circling you, and the outcome is unlikely to be good. Everyone knows it, but sometimes under the pressure of deadlines, not to mention the inevitable push from the business interests to prioritize those things with more immediate visibility and that add to the bottom line, it can be difficult to keep up with even what threats you need to mitigate, much less the best techniques to use to do so. This is how corners get cut—corners that increase our risk of catastrophe.
This isn't entirely inexcusable. A sysadmin must necessarily be a jack of all trades, and security is only one responsibility that must be considered, and not the one most likely to cause immediate pain. Even in organizations that have dedicated security staff, those parts of the organization dedicated to it often spend their time keeping up with the nitty gritty of the latest exploits and can't know the stack they are protecting as well as those who are knee deep in maintaining it. The more specialized and diversified the separate organizations, the more isolated each group becomes from the big picture. Without the big picture, sensible trade-offs between security and functionality are harder to make. Since a deep and thorough knowledge of the technology stack along with the business it serves is necessary to do a thorough job with security, it sometimes seems nearly hopeless.
A truly comprehensive work on server hardening would be beyond the scope not only of a single article, but a single (very large) book, yet all is not lost. It is true that there can be no "one true hardening procedure" due to the many and varied environments, technologies and purposes to which those technologies are put, but it is also true that you can develop a methodology for governing those technologies and the processes that put the technology to use that can guide you toward a sane setup. You can boil down the essentials to a few principles that you then can apply across the board. In this article, I explore some examples of application.
I also should say that server hardening, in itself, is almost a useless endeavor if you are going to undercut yourself with lazy choices like passwords of "abc123" or lack a holistic approach to security in the environment. Insecure coding practices can mean that the one hole you open is gaping, and users e-mailing passwords can negate all your hard work. The human element is key, and that means fostering security consciousness at all steps of the process. Security that is bolted on instead of baked in will never be as complete or as easy to maintain, but when you don't have executive support for organizational standards, bolting it on may be the best you can do. You can sleep well though knowing that at least the Linux server for which you are responsible is in fact properly if not exhaustively secured.
The single most important principle of server hardening is this: minimize your attack surface. The reason is simple and intuitive: a smaller target is harder to hit. Applying this principle across all facets of the server is essential. This begins with installing only the specific packages and software that are exactly necessary for the business purpose of the server and the minimal set of management and maintenance packages. Everything present must be vetted and trusted and maintained. Every line of code that can be run is another potential exploit on your system, and what is not installed can not be used against you. Every distribution and service of which I am aware has an option for a minimal install, and this is always where you should begin.
The second most important principle is like it: secure that which must be exposed. This likewise spans the environment from physical access to the hardware, to encrypting everything that you can everywhere—at rest on the disk, on the network and everywhere in between. For the physical location of the server, locks, biometrics, access logs—all the tools you can bring to bear to controlling and recording who gains physical access to your server are good things, because physical access, an accessible BIOS and a bootable USB drive are just one combination that can mean that your server might as well have grown legs and walked away with all your data on it. Rogue, hidden wireless SSIDs broadcast from a USB device can exist for some time before being stumbled upon.
For the purposes of this article though, I'm going to make a few assumptions that will shrink the topics to cover a bit. Let's assume you are putting a new Linux-based server on a cloud service like AWS or Rackspace. What do you need to do first? Since this is in someone else's data center, and you already have vetted the physical security practices of the provider (right?), you begin with your distribution of choice and a minimal install—just enough to boot and start SSH so you can access your shiny new server.
Within the parameters of this example scenario, there are levels of concern that differ depending on the purpose of the server, ranging from "this is a toy I'm playing with, and I don't care what happens to it" all the way to "governments will topple and masses of people die if this information is leaked", and although a different level of paranoia and effort needs to be applied in each case, the principles remain the same. Even if you don't care what ultimately happens to the server, you still don't want it joining a botnet and contributing to Internet Mayhem. If you don't care, you are bad and you should feel bad. If you are setting up a server for the latter purpose, you are probably more expert than myself and have no reason to be reading this article, so let's split the difference and assume that should your server be cracked, embarrassment, brand damage and loss of revenue (along with your job) will ensue.
In any of these cases, the very first thing to do is tighten your network access. If the hosting provider provides a mechanism for this, like Amazon's "Zones", use it, but don't stop there. Underneath securing what must be exposed is another principle: layers within layers containing hurdle after hurdle. Increase the effort required to reach the final destination, and you reduce the number that are willing and able to reach it. Zones, or network firewalls, can fail due to bugs, mistakes and who knows what factors that could come into play. Maximizing redundancy and backup systems in the case of failure is a good in itself. All of the most celebrated data thefts have happened when not just some but all of the advice contained in this article was ignored, and if only one hurdle had required some effort to surmount, it is likely that those responsible would have moved on to someone else with lower hanging fruit. Don't be the lower hanging fruit. You don't always have to outrun the bear.
The first principle, that which is not present (installed or running) can not be used against you, requires that you ensure you've both closed down and turned off all unnecessary services and ports in all runlevels and made them inaccessible via your server's firewall, in addition to whatever other firewalling you are doing on the network. This can be done via your distribution's tools or simply by editing filenames in /etc/rcX.d directories. If you aren't sure if you need something, turn it off, reboot, and see what breaks.
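On most distributions you don't have to rename the rcX.d symlinks by hand; the service tools will do it for you. As a sketch (avahi-daemon is just an example of a service you may not need; substitute your own):

```
# systemctl disable --now avahi-daemon    (systemd systems)
# update-rc.d -f avahi-daemon remove      (sysvinit/Debian)
# chkconfig avahi-daemon off              (sysvinit/Red Hat)
```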
But, before doing the above, make sure you have an emergency console back door first! This won't be the last time you need it. When just beginning to tinker with securing a server, it is likely you will lock yourself out more than once. If your provider doesn't provide a console that works when the network is inaccessible, the next best thing is to take an image and roll back if the server goes dark.
I suggest first doing two things: running ps -ef and making sure you understand what all running processes are doing, and lsof -ni | grep LISTEN to make sure you understand why all the listening ports are open, and that the process you expect has opened them.
For instance, on one of my servers running WordPress, the results are these:

# ps -ef | grep -v \] | wc -l
I won't list out all of my process names, but after pulling out all the kernel processes, I have 39 other processes running, and I know exactly what all of them are and why they are running. Next I examine:

# lsof -ni | grep LISTEN
mysqld    1638    mysql  10u  IPv4  10579  0t0  TCP (LISTEN)
sshd      1952     root   3u  IPv4  11571  0t0  TCP *:ssh (LISTEN)
sshd      1952     root   4u  IPv6  11573  0t0  TCP *:ssh (LISTEN)
nginx     2319     root   7u  IPv4  12400  0t0  TCP *:http (LISTEN)
nginx     2319     root   8u  IPv4  12401  0t0  TCP *:https (LISTEN)
nginx     2319     root   9u  IPv6  12402  0t0  TCP *:http (LISTEN)
nginx     2320 www-data   7u  IPv4  12400  0t0  TCP *:http (LISTEN)
nginx     2320 www-data   8u  IPv4  12401  0t0  TCP *:https (LISTEN)
nginx     2320 www-data   9u  IPv6  12402  0t0  TCP *:http (LISTEN)
This is exactly as I expect, and it's the minimal set of ports necessary for the purpose of the server (to run WordPress).
Now, to make sure only the necessary ports are open, you need to tune your firewall. Most hosting providers, if you use one of their templates, will by default have all rules set to "accept". This is bad. This defies the second principle: whatever must be exposed must be secured. If, by some accident of nature, some software opened a port you did not expect, you need to make sure it will be inaccessible.
Every distribution has its tools for managing a firewall, and others are available in most package managers. I don't bother with them, as iptables (once you gain some familiarity with it) is fairly easy to understand and use, and it is the same on all systems. Like vi, you can expect its presence everywhere, so it pays to be able to use it. A basic firewall looks something like this:

# make sure forwarding is off and clear everything
# also turn off ipv6, because if you don't need it,
# turn it off
sysctl net.ipv6.conf.all.disable_ipv6=1
sysctl net.ipv4.ip_forward=0
iptables --flush
iptables -t nat --flush
iptables -t mangle --flush
iptables --delete-chain
iptables -t nat --delete-chain
iptables -t mangle --delete-chain

# make the default policy drop everything
iptables --policy INPUT DROP
iptables --policy OUTPUT ACCEPT
iptables --policy FORWARD DROP

# allow everything on loopback
iptables -A INPUT -i lo -j ACCEPT

# allow established and related connections
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# allow ssh
iptables -A INPUT -m tcp -p tcp --dport 22 -j ACCEPT
You can get fancy, wrap this in a script, drop a file in /etc/rc.d, link it to the runlevels in /etc/rcX.d, and have it start right after networking, or it might be sufficient for your purposes to run it straight out of /etc/rc.local. Then you modify this file as requirements change. For instance, to allow ssh, http and https traffic, you can switch the last line above to this one:

iptables -A INPUT -p tcp -m state --state NEW -m multiport --dports ssh,http,https -j ACCEPT
More specific rules are better. Let's say what you've built is an intranet server, and you know where your traffic will be coming from and on what interface. You instead could add something like this to the bottom of your iptables script:

iptables -A INPUT -i eth0 -s <your-intranet-subnet> -p tcp -m state --state NEW -m multiport --dports http,https -j ACCEPT
There are a couple things to consider in this example that you might need to tweak. For one, this allows all outbound traffic initiated from the server. Depending on your needs and paranoia level, you may not wish to do so. Setting outbound traffic to default deny will significantly complicate maintenance for things like security updates, so weigh that complication against your level of concern about rootkits communicating outbound to phone home. Should you go with default deny for outbound, iptables is an extremely powerful and flexible tool—you can control outbound communications based on parameters like process name and owning user ID, rate limit connections—almost anything you can think of—so if you have the time to experiment, you can control your network traffic with a very high degree of granularity.
Second, I'm setting the default to DROP instead of REJECT. DROP is a bit of security by obscurity. It can discourage a script kiddie if his port scan takes too long, but since you have commonly scanned ports open, it will not deter a determined attacker, and it might complicate your own troubleshooting as you have to wait for the client-side timeout in the case you've blocked a port in iptables, either on purpose or by accident. Also, as I've detailed in a previous article in Linux Journal, TCP-level rejects are very useful in high traffic situations to clear out the resources used to track connections statefully on the server and on network gear farther out. Your mileage may vary.
Finally, your distribution's minimal install might not have sysctl installed or on by default. You'll need that, so make sure it is on and works. It makes inspecting and changing system values much easier, as most versions support tab auto-completion. You also might need to include full paths to the binaries (usually /sbin/iptables and /sbin/sysctl), depending on the base path variable of your particular system.
All of the above probably should be finished within a few minutes of bringing up the server. I recommend not opening the ports for your application until after you've installed and configured the applications you are running on the server. So at the point when you have a new minimal server with only SSH open, you should apply all updates using your distribution's method. You can decide now if you want to do this manually on a schedule or set them to automatic, which your distribution probably has a mechanism to do. If not, a script dropped in cron.daily will do the trick. Sometimes updates break things, so evaluate carefully. Whether you do automatic updates or not, with the frequency with which critical flaws that sometimes require manual configuration changes are being uncovered right now, you need to monitor the appropriate lists and sites for critical security updates to your stack manually, and apply them as necessary.
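If you opt for automatic updates and your distribution lacks a built-in mechanism, a cron entry is enough. A minimal sketch for a Debian-style system (the file path and schedule here are examples, not a standard):

```
# /etc/cron.d/security-updates (hypothetical) - nightly package updates
0 4 * * * root apt-get update -qq && apt-get -y -qq upgrade
```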
Once you've dealt with updates, you can move on and continue to evaluate your server against the two security principles of 1) minimal attack surface and 2) secure everything that must be exposed. At this point, you are pretty solid on point one. On point two, there is more you can yet do.
The concept of hurdles requires that you not allow root to log in remotely. Gaining root should be at least a two-part process. This is easy enough; you simply set this line in /etc/ssh/sshd_config:

PermitRootLogin no
For that matter, root should not be able to log in directly at all. The account should have no password and should be accessible only via sudo—another hurdle to clear.
If a user doesn't need to have remote login, don't allow it, or better said, allow only users that you know need remote access. This satisfies both principles. Use the AllowUsers and AllowGroups settings in /etc/ssh/sshd_config to make sure you are allowing only the necessary users.
You can set a password policy on your server to require a complex password for any and all users, but I believe it is generally a better idea to bypass crackable passwords altogether and use key-only login, and have the key require a complex passphrase. This raises the bar for cracking into your system, as it is virtually impossible to brute force an RSA key. The key could be physically stolen from your client system, which is why you need the complex passphrase. Without getting into a discussion of length or strength of key or passphrase, one way to create it is like this:

ssh-keygen -t rsa
Then when prompted, enter and re-enter the desired passphrase. Copy the public portion (id_rsa.pub or similar) into a file in the user's home directory called ~/.ssh/authorized_keys, and then in a new terminal window, try logging in, and troubleshoot as necessary. I store the key and the passphrase in a secure data vault provided by Personal, Inc., and this will allow me, even if away from home and away from my normal systems, to install the key and have the passphrase to unlock it, in case an emergency arises. (Disclaimer: Personal is the startup I work with currently.)
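The whole key setup can be sketched non-interactively as below. The paths are examples chosen so the commands can run anywhere; on a real server the target would be ~/.ssh/authorized_keys for the login user, and you would normally be prompted for the passphrase rather than passing it with -N:

```shell
# Generate a passphrase-protected RSA keypair into an example location
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t rsa -b 4096 -N 'a long complex passphrase' -f /tmp/demo_key -q

# Install the public half the way you would on the server side
mkdir -p /tmp/demo_ssh
cat /tmp/demo_key.pub >> /tmp/demo_ssh/authorized_keys
chmod 700 /tmp/demo_ssh
chmod 600 /tmp/demo_ssh/authorized_keys
```

Note that sshd is picky about permissions: the .ssh directory and authorized_keys file must not be group- or world-writable, or key login will silently fail.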
Once it works, change this line in /etc/ssh/sshd_config:

PasswordAuthentication no
Now you can log in only with the key. I still recommend keeping a complex password for the users, so that when you sudo, you have that layer of protection as well. Now to take complete control of your server, an attacker needs your private key, your passphrase and your password on the server—hurdle after hurdle. In fact, in my company, we also use multi-factor authentication in addition to these other methods, so you must have the key, the passphrase, the pre-secured device that will receive the notification of the login request and the user's password. That is a pretty steep hill to climb.
Encryption is a big part of keeping your server secure—encrypt everything that matters to you. Always be aware of how data, particularly authentication data, is stored and transmitted. Needless to say, you never should allow login or connections over an unencrypted channel like FTP, Telnet, rsh or other legacy protocols. These are huge no-nos that completely undo all the hard work you've put into securing your server. Anyone who can gain access to a switch nearby and perform reverse arp poisoning to mirror your traffic will own your servers. Always use sftp or scp for file transfers and ssh for secure shell access. Use https for logins to your applications, and never store passwords, only hashes.
Even with strong encryption in use, in the recent past, many flaws have been found in widely used programs and protocols—get used to turning ciphers on and off in both OpenSSH and OpenSSL. I'm not covering Web servers here, but the lines of interest you would put in your /etc/ssh/sshd_config file would look something like this:

Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128
MACs hmac-sha1,hmac-ripemd160
Then you can add or remove as necessary. See man sshd_config for all the details.
Depending on your level of paranoia and the purpose of your server, you might be tempted to stop here. I wouldn't. Get used to installing, using and tuning a few more security essentials, because these last few steps will make you exponentially more secure. I'm well into principle two now (secure everything that must be exposed), and I'm bordering on the third principle: assume that every measure will be defeated. There is definitely a point of diminishing returns with the third principle, where the change to the risk does not justify the additional time and effort, but where that point falls is something you and your organization have to decide.
The fact of the matter is that even though you've locked down your authentication, there still exists the chance, however small, that a configuration mistake or an update is changing/breaking your config, or by blind luck an attacker could find a way into your system, or even that the system came with a backdoor. There are a few things you can do that will further protect you from those risks.
Speaking of backdoors, everything from phones to the firmware of hard drives has backdoors pre-installed. Lenovo has been caught no fewer than three times pre-installing rootkits, and Sony rooted customer systems in a misguided attempt at DRM. A programming mistake in OpenSSL left a hole open that the NSA has been exploiting to defeat encryption for at least a decade without informing the community, and this was apparently only one of several. In 2003, someone anonymously attempted to insert a two-line programming error into the Linux kernel that would cause a remote root exploit under certain conditions. So suffice it to say, I personally do not trust anything sourced from the NSA, and I turn SELinux off because I'm a fan of warrants and the fourth amendment. The instructions are generally available, but usually all you need to do is make this change to /etc/selinux/config:

#SELINUX=enforcing # comment out
SELINUX=disabled # turn it off, restart the system
In the spirit of turning off and blocking what isn't needed, since most of the malicious traffic on the Internet comes from just a few sources, why do you need to give them a shot at cracking your servers? I run a short script that collects various blacklists of exploited servers in botnets, Chinese and Russian CIDR ranges and so on, and creates a blocklist from them, updating once a day. Back in the day, you couldn't do this, as iptables gets bogged down matching more than a few thousand lines, so having a rule for every malicious IP out there just wasn't feasible. With the maturity of the ipset project, now it is. ipset uses a binary search algorithm that adds only one pass to the search each time the list doubles, so an arbitrarily large list can be searched efficiently for a match, although I believe there is a limit of 65k entries in the ipset table.
To make use of it, add this at the bottom of your iptables script:

#create iptables blocklist rule and ipset hash
ipset create blocklist hash:net
iptables -I INPUT 1 -m set --match-set blocklist src -j DROP
Then put this somewhere executable and run it out of cron once a day:


#!/bin/bash
# Note: the blocklist URLs were elided in the original; the "" entries
# below are placeholders to be filled in with your chosen sources.
# The variable setup at the top is reconstructed to make the script whole.

TMP_DIR=/tmp/blocklists
IP_TMP=$TMP_DIR/ip.tmp
IP_BLOCKLIST_TMP=$TMP_DIR/ip-blocklist.tmp
IP_BLOCKLIST=/etc/ip-blocklist.conf
mkdir -p $TMP_DIR

list="chinese nigerian russian lacnic exploited-servers"
BLOCKLISTS=(
"" # Project Honey Pot Directory of Dictionary Attacker IPs
"" # TOR Exit Nodes
"" # MaxMind GeoIP Anonymous Proxies
"" # BruteForceBlocker IP List
"" # Emerging Threats - Russian Business Networks List
"" # Spamhaus Dont Route Or Peer List (DROP)
"" # C.I. Army Malicious IP List
"" # 30 day List
"" # Autoshun Shun List
"" # attackers
)

cd $TMP_DIR
# This gets the various lists
for i in "${BLOCKLISTS[@]}"; do
    curl "$i" > $IP_TMP
    grep -Po '(?:\d{1,3}\.){3}\d{1,3}(?:/\d{1,2})?' $IP_TMP >> $IP_BLOCKLIST_TMP
done
for i in `echo $list`; do
    # This section gets the wizcrafts lists (base URL elided in the original)
    wget --quiet $i-iptables-blocklist.html
    # Grep out all but ip blocks
    cat $i-iptables-blocklist.html | grep -v \< | grep -v \: |
     grep -v \; | grep -v \# | grep [0-9] > $i.txt
    # Consolidate blocks into master list
    cat $i.txt >> $IP_BLOCKLIST_TMP
done

# De-duplicate into the master blocklist file
sort -u $IP_BLOCKLIST_TMP > $IP_BLOCKLIST

ipset flush blocklist
egrep -v "^#|^$" $IP_BLOCKLIST | while IFS= read -r ip; do
        ipset add blocklist $ip
done

rm -fR $TMP_DIR/*

exit 0
It's possible you don't want all these blocked. I usually leave Tor exit nodes open to enable anonymity, and if you do business in China, you certainly can't block every IP range coming from there. Remove unwanted items from the URLs to be downloaded. When I turned this on, within 24 hours, the number of banned IPs triggered by brute-force crack attempts on SSH dropped from hundreds to fewer than ten.
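The heavy lifting in the blocklist script is the grep -Po pattern that extracts IPv4 addresses and optional CIDR suffixes from whatever format a list happens to use. You can sanity-check the pattern on sample text before trusting it with your firewall:

```shell
# Pull IPv4 addresses and optional /nn CIDR suffixes out of mixed text
printf 'host 203.0.113.7 seen\n198.51.100.0/24 # botnet range\nno ips here\n' |
  grep -Po '(?:\d{1,3}\.){3}\d{1,3}(?:/\d{1,2})?'
# prints:
# 203.0.113.7
# 198.51.100.0/24
```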
Although there are many more areas to be hardened, since according to principle three we assume all measures will be defeated, I will have to leave things like locking down cron and bash as well as automating standard security configurations across environments for another day. There are a few more packages I consider security musts, including multiple methods to check for intrusion (I run both chkrootkit and rkhunter to update signatures and scan my systems at least daily). I want to conclude with one last must-use tool: Fail2ban.
Fail2ban is available in virtually every distribution's repositories now, and it has become my go-to. Not only is it an extensible Swiss-army knife of brute-force authentication prevention, it comes with an additional bevy of filters to detect other attempts to do bad things to your system. If you do nothing but install it, run it, keep it updated and turn on its filters for any services you run, especially SSH, you will be far better off than you were otherwise. As for me, I have other higher-level software like WordPress log to auth.log for filtering and banning of malefactors with Fail2ban. You can custom-configure how long to ban based on how many filter matches (like failed login attempts of various kinds) and specify longer bans for "recidivist" abusers that keep coming back.
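As a sketch, a minimal /etc/fail2ban/jail.local enabling the SSH filter plus the recidive jail for repeat offenders might look like this (the ban and find times are arbitrary examples; on some older packages the SSH jail is named ssh rather than sshd):

```
[sshd]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 3600

# "recidive" scans fail2ban's own log and hands out week-long bans
# to IPs that keep getting banned by the other jails
[recidive]
enabled  = true
maxretry = 5
findtime = 86400
bantime  = 604800
```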
Here's one example of the extensibility of the tool. During log review (another important component of a holistic security approach), I noticed many thousands of the following kinds of probes, coming especially from China:

sshd[***]: Received disconnect from **.**.**.**: 11: Bye Bye [preauth]
sshd[***]: Received disconnect from **.**.**.**: 11: Bye Bye [preauth]
sshd[***]: Received disconnect from **.**.**.**: 11: Bye Bye [preauth]
There were two forms of this, and I could not find any explanation of a known exploit that matched this pattern, but there had to be a reason I was getting so many so quickly. It wasn't enough to be a denial of service, but it was a steady flow. Either it was a zero-day exploit or some algorithm sending malformed requests of various kinds hoping to trigger a memory problem in hopes of uncovering an exploit—in any case, there was no reason to allow them to continue.
I added this line to the failregex = section of /etc/fail2ban/filter.d/sshd.local:

^%(__prefix_line)sReceived disconnect from <HOST>: 11: (Bye Bye)? \[preauth\]$
Within minutes, I had banned 20 new IP addresses, and my logs were almost completely clear of these lines going forward.
By now, you've seen my three primary principles of server hardening in action enough to know that systematically applying them to your systems will have you churning out reasonably hardened systems in no time. But, just to reiterate one more time:
  1. Minimize attack surface.
  2. Secure whatever remains and must be exposed.
  3. Assume all security measures will be defeated.
Feel free to give me a shout and let me know what you thought about the article. Let me know your thoughts on what I decided to include, any major omissions I cut for the sake of space you thought should have been included, and things you'd like to see in the future!

Squid Proxy Hide System’s Real IP Address

My Squid proxy server is displaying the system's real IP address. I have a corporate password-protected Squid proxy server. My clients work from home or offices via ADSL / cable connections. Squid should hide each system's IP address, but it is forwarding and displaying it. How do I configure Squid to hide the client's real IP address?

The Squid proxy server has a directive called forwarded_for. If set, Squid will include your system's IP address or name in the HTTP requests it forwards, as an X-Forwarded-For request header.
If you disable this (set to "off"), it will appear as
X-Forwarded-For: unknown
If set to "transparent", Squid will not alter the X-Forwarded-For header in any way. If set to "delete", Squid will delete the entire X-Forwarded-For header. If set to "truncate", Squid will remove all existing X-Forwarded-For entries, and place the client IP as the sole entry.


Open squid.conf file:
# vi /etc/squid/squid.conf
Or (for squid version 3)
# vi /etc/squid3/squid.conf
Set forwarded_for to off:
forwarded_for off
OR set it to delete:
forwarded_for delete
Save and close the file.

Reload squid server

You need to restart the squid server, enter:
# /etc/init.d/squid restart
# squid -k reconfigure
For squid version 3, run:
# squid3 -k reconfigure
Here are my options:
# Hide client ip #
forwarded_for delete
# Turn off via header #
via off
# Deny request for original source of a request
follow_x_forwarded_for deny all
# See below
request_header_access X-Forwarded-For deny all

Say hello to request_header_access

By default, all headers are allowed (no anonymizing is performed for privacy). You can anonymize outgoing HTTP headers (i.e., headers sent by Squid to the next HTTP hop, such as a cache peer or an origin server) to create a standard or paranoid privacy experience. The following options have only been tested on Squid server version 3.x:

Squid standard anonymizer privacy experience

Set the following options in squid3.conf:
 request_header_access From deny all
 request_header_access Referer deny all
 request_header_access User-Agent deny all
Save and close the file. Do not forget to restart the squid3 as described above.

Squid paranoid privacy experience

Set the following options in squid3.conf:
  request_header_access Authorization allow all
  request_header_access Proxy-Authorization allow all
  request_header_access Cache-Control allow all
  request_header_access Content-Length allow all
  request_header_access Content-Type allow all
  request_header_access Date allow all
  request_header_access Host allow all
  request_header_access If-Modified-Since allow all
  request_header_access Pragma allow all
  request_header_access Accept allow all
  request_header_access Accept-Charset allow all
  request_header_access Accept-Encoding allow all
  request_header_access Accept-Language allow all
  request_header_access Connection allow all
  request_header_access All deny all
Save and close the file. Do not forget to restart the squid3 as described above.