Tuesday, August 26, 2014

Security Hardening with Ansible

http://www.linuxjournal.com/content/security-hardening-ansible

Ansible is an open-source automation tool developed and released by Michael DeHaan and others in 2012. DeHaan calls it a "general-purpose automation pipeline" (see Resources for a link to the article "Ansible's Architecture: Beyond Configuration Management"). Not only can it be used for automated configuration management, but it also excels at orchestration, provisioning of systems, zero-downtime rolling updates and application deployment. Ansible can be used to keep all your systems configured exactly the way you want them, and if you have many identical systems, Ansible will ensure they stay identical. For Linux system administrators, Ansible is an indispensable tool for implementing and maintaining a strong security posture.
Ansible can be used to deploy and configure multiple Linux servers (Red Hat, Debian, CentOS, OS X, any of the BSDs and others) using secure shell (SSH) instead of the more common client-server methodologies used by other configuration management packages, such as Puppet and Chef (Chef does have a solo version that does not require a server, per se). Utilizing SSH is a more secure method because the traffic is encrypted. The secure shell transport layer protocol is used for communications between the Ansible server and the target hosts. Authentication is accomplished using Kerberos, public-key authentication or passwords.
When I began working in system administration some years ago, a senior colleague gave me a simple formula for success. He said, "Just remember, automate, automate, automate." If this is true, and I believe it is, then Ansible can be a crucial tool in making any administrator's career successful. If you do not have a few really good automation tools, every task must be accomplished manually. That wastes a lot of time, and time is precious. Ansible makes it possible to manage many servers almost effortlessly.
Ansible uses a very simple method called playbooks to orchestrate configurations. A playbook is a set of instructions written in YAML that tells the Ansible server what "plays" to carry out on the target hosts. YAML is a very simple, human-readable data serialization language that gives the user fine granularity when setting up configuration schemes. It is installed, along with Ansible, as a dependency. Ansible uses YAML because it is much easier to write than other common data formats, like JSON and XML. The learning curve for YAML is very low, so proficiency can be gained very quickly. For example, the simple playbook shown in Figure 1 keeps the Apache RPM on targeted Web servers up to date.
Figure 1. Example Playbook That Will Upgrade Apache to the Latest Version
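A minimal sketch of such a playbook (the webservers group name is an assumption; the yum module's state=latest keeps a package at its newest version):

---
- hosts: webservers
  remote_user: root
  tasks:
  - name: ensure the Apache RPM is the latest version
    yum: name=httpd state=latest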
From the Ansible management server, you can create a cron job to push the playbook to the target hosts on a regular basis, thus ensuring you always will have the latest-and-greatest version of the Apache Web server.
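For instance, a root crontab entry along these lines (the playbook path and schedule are hypothetical) would run the playbook nightly at 2am:

0 2 * * * ansible-playbook /etc/ansible/plays/apache.yml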
Using YAML, you can instruct Ansible to target a specific group of servers, the remote user you want to run as, tasks to assign and many other details. You can name each task, which makes for easier reading of the playbook. You can set variables, and use loops and conditional statements. If you have updated a configuration file that requires restarting a service, Ansible uses tasks called handlers to notify the system that a service restart is necessary. Handlers also can be used for other things, but this is the most common.
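A handler fires only when a task that notifies it reports a change. A minimal sketch (the file names are illustrative):

tasks:
- name: deploy the Apache configuration file
  copy: src=httpd.conf dest=/etc/httpd/conf/httpd.conf
  notify: restart apache

handlers:
- name: restart apache
  service: name=httpd state=restarted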
The ability to reuse certain tasks from previously written playbooks is another great feature. Ansible uses a mechanism called roles to accomplish this. Roles are organizational units that are used to implement a specific configuration on a group of hosts. A role can include a set of variable values, handlers and tasks that can be assigned to a host group, or hosts corresponding to specific patterns. For instance, you could create a role for installing and configuring MySQL on a group of targeted servers. Roles make this a very simple task.
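A role is simply a conventional directory layout that Ansible knows how to search; a hypothetical mysql role might be laid out like this, and a play would then apply it to a host group via its roles: list:

roles/
  mysql/
    tasks/main.yml      # install and configure MySQL
    handlers/main.yml   # e.g., restart mysqld on config changes
    vars/main.yml       # variable values used by this role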
Besides intelligent automation, you also can use Ansible for ad hoc commands to contact all your target hosts simultaneously. Ad hoc commands can be performed on the command line. It is a very quick method to use when you want to see a specific type of output from all your target machines, or just a subset of them. For example, if you want to see the uptime for all the hosts in a group called dbservers, you would type, as user root:

# ansible dbservers -a /usr/bin/uptime
The output will look like Figure 2.
Figure 2. Example of ad hoc Command Showing Uptime Output for All Targets
If you want to specify a particular user, use the command in this way:

# ansible dbservers -a /usr/bin/uptime -u username
If you are running the command as a particular user but want to act as root, you can run it through sudo and have Ansible ask for the sudo password:

# ansible dbservers -a /usr/bin/uptime -u username --sudo --ask-sudo-pass
You also can switch to a different user by using the -U option:

# ansible dbservers -a /usr/bin/uptime -u username -U otheruser --sudo --ask-sudo-pass
Occasionally, you may want to run the command with 12 parallel forks, or processes:

# ansible dbservers -a /usr/bin/uptime -f 12
This will get the job done faster by using 12 simultaneous processes, instead of the default value of 5. If you would like to set a permanent default for the number of forks, you can set it in the Ansible configuration file, located at /etc/ansible/ansible.cfg.
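The setting lives in the [defaults] section of that file:

[defaults]
forks = 12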
It also is possible to use Ansible modules in ad hoc mode by using the -m option. In this example, Ansible pings the target hosts using the ping module:

# ansible dbservers -m ping
Figure 3. In this example, Ansible pings the target hosts using the ping module.
As I write this, Michael DeHaan has announced that, in a few weeks, a new command-line tool will be added to Ansible version 1.5 that will enable the encrypting of various data within the configuration. The new tool will be called ansible-vault. It will be implemented by using the new --ask-vault-pass option. According to DeHaan, anything you write in YAML for your configuration can be encrypted with ansible-vault by using a password.
Server security hardening is crucial to any IT enterprise. We must face the fact that we are protecting assets in what has become an informational war zone. Almost daily, we hear of enterprise systems that have fallen prey to malevolent individuals. Ansible can help us, as administrators, protect our systems. I have developed a very simple way to use Ansible, along with an open-source project called Aqueduct, to harden RHEL6 Linux servers. These machines are secured according to the standards formulated by the Defense Information Systems Agency (DISA). DISA publishes Security Technical Implementation Guides (STIGs) for various operating systems that provide administrators with solid guidelines for securing systems.
In a typical client-server setup, the remote client dæmon communicates with a server dæmon. Usually, this communication is in the clear (not encrypted), although Puppet and Chef have their own proprietary mechanisms to encrypt traffic. The implementation of public-key authentication in SSH has been well vetted for many years by security professionals and system administrators. For my purposes, SSH is strongly preferred. Typically, there is a greater risk in using proprietary client-server dæmons than in using SSH: they may be relatively new and could be compromised by malevolent individuals using buffer-overflow or denial-of-service attacks. Any time we can reduce the total number of services running on a server, it will be more secure.
To install the current version of Ansible (1.4.3 at the time of this writing), you will need Python 2.4 or later and the Extra Packages for Enterprise Linux (EPEL) repository RPM. For the purposes of this article, I use Ansible along with another set of scripts from an open-source project called Aqueduct. This is not, however, a requirement for Ansible. You also will need to install Git, if you are not already using it. Git will be used to pull down the Aqueduct package.
Vincent Passaro, Senior Security Architect at Fotis Networks, pilots the Aqueduct project, which consists of the development of both bash scripts and Puppet manifests. These are written to deploy the hardening guidelines provided in the STIGs. Also included are CIS (Center for Internet Security) benchmarks and several others. On the Aqueduct home page, Passaro says, "Content is currently being developed (by me) for the Red Hat Enterprise Linux 5 (RHEL 5) Draft STIG, CIS Benchmarks, NISPOM, PCI", but I have found RHEL6 bash scripts there as well. I combined these bash scripts to construct a very basic Ansible playbook to simplify security hardening of RHEL6 systems. I accomplished this by using the included Ansible module called script.
According to the Ansible documentation, "The script module takes the script name followed by a list of space-delimited arguments. The local script at path will be transferred to the remote node and then executed. The given script will be processed through the shell environment on the remote node. This module does not require Python on the remote system, much like the raw module."
Ansible modules are tiny bits of code used for specific purposes by the API to carry out tasks. The documentation states, "Ansible modules are reusable units of magic that can be used by the Ansible API, or by the ansible or ansible-playbook programs." I view them as being very much like functions or subroutines. Ansible ships with many modules ready for use. Administrators also can write modules to fit specific needs using any programming language. Many of the Ansible modules are idempotent, which means they will not make a change to your system if a change does not need to be made. In other words, it is safe to run these modules repeatedly without worrying they will break things. For instance, running a playbook that sets permissions on a certain file will, by default, update the permissions on that file only if its permissions differ from those specified in the playbook.
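As a sketch of that example (the path and mode here are illustrative), a task using the file module reports a change only when the file's attributes actually differ:

- name: ensure strict permissions on /etc/shadow
  file: path=/etc/shadow owner=root group=root mode=0000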
For my needs, the script module works perfectly. Each Aqueduct bash script corresponds to a hardening recommendation given in the STIG. The scripts are named according to the numbered sections of the STIG document.
In my test environment, I have a small high-performance compute cluster consisting of one management node and ten compute nodes. For this test, the SSH server dæmon is configured for public-key authentication for the root user. To install Ansible on RHEL6, the EPEL repository must first be installed. Download the EPEL RPM from the EPEL site (see Resources).
Then, install it on your management node:

# rpm -ivh epel-release-6-8.noarch.rpm
Now, you are ready to install Ansible:

# yum install ansible
Ansible's main configuration file is located in /etc/ansible/ansible.cfg. Unless you want to add your own customizations, you can configure it with the default settings.
Now, create a directory in /etc/ansible called prod. This is where you will copy the Aqueduct STIG bash scripts. Also, create a directory in /etc/ansible called plays, where you will keep your Ansible playbooks. Create another directory called manual-check. This will hold scripts with information that must be checked manually. Next, a hosts file must be created in /etc/ansible. It is simply called hosts. Figure 4 shows how I configured mine for the ten compute nodes.
Figure 4. The /etc/ansible/hosts File for My Test Cluster
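A sketch of such an inventory file (the IP addresses are hypothetical):

[hosts]
10.0.0.1
10.0.0.2
# ...six more typical compute nodes...

[gpus]
10.0.0.9
10.0.0.10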
Eight of the compute nodes are typical nodes, but two are equipped with GPGPUs, so there are two groups: "hosts" and "gpus". Provide the IP address of each node (the host name also can be given if your DNS is set up properly). With this tiny bit of configuration, Ansible is now functional. To test it, use Ansible in ad hoc mode and execute the following command on your management node:

# ansible all -m ping
If this results in a "success" message from each host, all is well.
The Aqueduct scripts must be downloaded using Git. If you do not have this on your management node, then:

# yum install git 
Git "is a distributed revision control and source code management (SCM) system with an emphasis on speed" (Wikipedia). The command-line for acquiring the Aqueduct package of scripts and manifests goes like this:
# git clone git://git.fedorahosted.org/git/aqueduct.git This will create a directory under the current directory called aqueduct. The bash scripts for RHEL6 are located in aqueduct/compliance/bash/stig/rhel-6/prod. Now, copy all scripts therein to /etc/ansible/prod. There are some other aspects of the STIG that will need to be checked by either running the scripts manually or reading the script and performing the required actions. These scripts are located in aqueduct/compliance/bash/stig/rhel-6/manual-check. Copy these scripts to /etc/ansible/manual-check.
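The copy steps might look like this:

# cp aqueduct/compliance/bash/stig/rhel-6/prod/* /etc/ansible/prod/
# cp aqueduct/compliance/bash/stig/rhel-6/manual-check/* /etc/ansible/manual-check/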
Now that the scripts are in place, a playbook must be written to deploy them on all target hosts. Figure 5 shows the contents of my simple playbook, called aqueduct.yml. Copy the playbook to /etc/ansible/plays, and make sure all of the scripts are executable.
Figure 5. My Simple Playbook to Execute STIG Scripts on All Targets
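A playbook of this shape might look like the following sketch (the script names are placeholders for the actual STIG-numbered Aqueduct scripts):

---
- hosts: all
  remote_user: root
  tasks:
  - name: apply STIG fix GEN000020
    script: /etc/ansible/prod/GEN000020.sh
  - name: apply STIG fix GEN000480
    script: /etc/ansible/prod/GEN000480.sh
  # ...one script task per STIG script...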
A few of the STIG scripts needed minor edits to execute correctly. Admittedly, a more elegant solution would be to replace the STIG scripts by translating them into customized Ansible modules. For now, however, I am taking the easier route of calling the STIG scripts from my custom Ansible playbook. The script module makes this possible. Next, simply execute the playbook on the management node with the command:
# ansible-playbook aqueduct.yml

This operation takes about five minutes to run on my ten nodes, with the plays running in parallel on the target hosts. Ansible produces detailed output that shows the progress of each play and host. When Ansible finishes running the plays, all of the target machines should be identically hardened, and a summary is displayed. In this case, everything ran successfully.
Figure 6. Output Showing a Successful STIG Playbook Execution
For system security hardening, the combination of Ansible and Aqueduct is a powerfully productive force in keeping systems safe from intruders.
If you've ever worked as a system administrator, you know how much time a tool like this can save. The more I learn about Ansible, the more useful it becomes. I am constantly thinking of new ways to implement it. As my system administration duties drift more toward using virtual technologies, I plan on using Ansible to provision and manage my virtual configurations quickly. I am also looking for more avenues to explore in the way of managing high-performance computing systems, since this is my primary duty. Michael DeHaan has developed another tool called Cobbler, which is excellent for taking advantage of Red Hat's installation method, Kickstart, to build systems quickly. Together, Cobbler and Ansible create an impressive arsenal for system management.
As system administrators, we are living in exciting times. Creative developers are inventing an amazing array of tools that not only make our jobs easier but also make them more fun. I can only imagine what the future may hold. One thing is certain: we will be responsible for more and more systems, thanks to the automation wizardry of technologies like Ansible that enable a single administrator to manage hundreds or even thousands of servers. These tools will only keep improving, and as security becomes more and more crucial, their importance will only increase.

Resources

Ansible's Architecture: Beyond Configuration Management: http://blog.ansibleworks.com/2013/11/29/ansibles-architecture-beyond-configuration-management
Michael DeHaan's Blog: http://michaeldehaan.net
Git Home: http://git-scm.com
Aqueduct Home: http://www.vincentpassaro.com/open-source-projects/aqueduct-red-hat-enterprise-linux-security-development
Ansible Documentation: http://docs.ansible.com/index.html
EPEL Repository Home: https://fedoraproject.org/wiki/EPEL
DISA RHEL6 STIG: http://iase.disa.mil/stigs/os/unix/red_hat.html

jBilling tutorial – an open source billing platform

http://www.linuxuser.co.uk/tutorials/jbilling-tutorial-an-open-source-billing-platform

Discover jBilling and make managing invoices, payments and billing simple and stress-free


A lot more people are taking up the entrepreneurial route these days. To the uninitiated it looks very easy; you are your own boss and can do whatever you wish. But anyone who has already taken the plunge knows that being an entrepreneur is a lot tougher – whether working as a freelancer or the founder of a start-up, you will almost always find yourself donning several hats. While managing everything is relatively easy when you are small, it can become a daunting task when you start growing rapidly. Multitasking becomes a real skill as you negotiate with clients, send proposals and work on current assignments. With all this chaos, you certainly don’t want to miss out on payments – after all, that’s what you’re working for!
Today we introduce jBilling, which can help you manage the most important aspect of your business – the income. This is not a typical invoice-management tool, but rather a full-fledged platform with several innovative features. jBilling helps you manage invoices, track payments, bill your customers and more with little effort on your part – just what you want when juggling responsibilities.
In this tutorial we will first cover the necessary steps to install and set up jBilling before having a closer look at the various features that can help you manage your business better. We have used the latest stable community edition of jBilling, version 3.1.0, for demo purposes in this article.
The main menu bar gives you access to all the pages you’ll use most frequently

Step-by-step

Step 01 Installation
jBilling comes with the web server integrated out of the box, which makes the installation process straightforward. Just unzip the downloaded zip file to a folder (wherever you want the installation to live), eg ‘my_jBilling’. Open the command prompt and navigate to the folder /path/my_jBilling/bin. Assign executable permissions to all the shell script files with the command chmod +x *.sh. Also, remember to set the JAVA_HOME variable to your Java home path. You can then start jBilling by running ./startup.sh. This completes the installation process – note that the process may differ slightly depending on the OS you use. As the startup.sh script executes, the command prompt shows five lines of logs indicating a successful start. You can then access jBilling via your browser at http://localhost:8080/jbilling and log in with the credentials admin/123qwe. You can also visit http://localhost:8080/jbilling/signup to create a new account.
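Condensed into commands, the sequence might look like this on Linux (the zip file name and JAVA_HOME path are assumptions; adjust both to your system):

$ unzip jbilling_community_3.1.0.zip -d my_jBilling
$ cd my_jBilling/bin
$ chmod +x *.sh
$ export JAVA_HOME=/usr/lib/jvm/java-6-openjdk   # point this at your JDK
$ ./startup.sh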
Step 02 Customers
No one wants to add a customer’s details to the system every single time an invoice is sent to them! It is generally a good idea to keep the details of your customers with you, and that’s precisely what jBilling lets you do – simply click on the ‘Customer’ button on the main menu to go to the customer page. Here you can view all the details related to a customer – but first, you need to add one. To do so, click on the ‘Add New’ button and then fill in all of the relevant details. Note that once you add a customer, a separate login for the customer is also created; they can then log in to your jBilling system and manage their account as well (to make payments, view invoices and so on). This may seem trivial for smaller organisations with few customers, but if you have a huge customer base and would like customers to handle payments themselves, you will definitely like this feature.
Step 03 Products
Besides customers, the other important aspect of a business is what you sell – your products or services. Handling your products in jBilling is nice and straightforward. Simply click on the ‘Products’ button to go to the products page. To add a new product here, you must add product categories first – click on the ‘Add Category’ button to do that. After the category is created, select it to add new products to that particular category or view all the products within it. Once you have all your products listed in the system, you can use them to create orders, invoices and so on.
Step 04 Orders
Before serving your customer you need an order from them. jBilling lets you handle orders in a way that closely resembles real-world scenarios. Clicking on the ‘Orders’ link on the main menu will take you to the orders page, where you can view a list of all the orders received so far. At this point you may be puzzled; unlike other pages, there is no button to create an order here. To create an order you must first navigate to the particular customer you plan to create it for (on the customer page) and then click the ‘Create Order’ button (located below the customer details). This arrangement ensures tight coupling between an order and its related customer. Once the order is created you can see it on the orders page. You can then edit orders to add products, or create invoices from them.
Step 05 Invoices
Customers and orders are tightly coupled, so it makes sense that invoices in jBilling are related to orders too. To create an invoice, you need to go to the order for which you are raising the invoice and click the ‘Generate Invoice’ button. The invoice is then created – note that you can even apply other orders to an invoice (as long as it hasn’t been paid). Also, an order can’t be used to generate an invoice if an earlier invoice related to it has already been paid. Having generated the invoice, you can send it via email or download it as a PDF. You may find that you want to change the invoice logo – but we’ll get to configuration and customisation later on. We will also see in later steps how the payments related to an invoice can be tracked.
Step 06 Billing
Billing is the feature that helps you automate the whole process of invoicing and payments. It can come in handy for businesses with a subscription model, or other cases where customers are charged on a recurring basis. To set up the billing process, you need to go to the Configuration page first. Once you are on the page, click on ‘Billing Process’ in the left-hand menu bar to set the date and other parameters. With the parameters set, the billing process runs automatically and produces a preview of the invoices. This output needs to be approved by the admin – only once this has happened are the real invoices generated and delivered to the customers. Customers whose payments are not automatic can then pay their bills with their own logins.
Step 07 Payments
Any payment made against an invoice is tracked on the Payments page, where you can view a list of all the payments already taken care of. To create a new payment, you need to select the customer (for whom the payment is being made) on the customer page and then click the ‘Make Payment’ button at the very bottom (next to the ‘Create Order’ button). This takes you to a page with details of all the paid and unpaid invoices raised for that customer. Just select the relevant invoice and fill in the details of the payment method to complete the payment process. Later, if you need to edit the payment details, you must unlink the invoice before editing them.
Step 08 Partners
Partners – for example, any affiliate marketing partners for an eCommerce website – are people or organisations that help your business grow. They are generally paid a mutually agreed percentage of the revenue they bring in. jBilling helps you manage partners in an easy, automated way. Click on the Partners link on the homepage to reach the Partners page and set about adding a new partner. Here you will need to fill in the details related to the percentage rate, referral fee, payout date and period, and so on. Now, whenever a new customer is added with the Partner ID field filled in, the relevant partner becomes entitled to the commission percentage (as set when adding the partner), and the jBilling system keeps track of the partner’s due payment. Note that, as with customers, partners also get their own login once you add their details to jBilling. It is up to you to give them login access, though.
Step 09 Reports
The reporting engine of jBilling gives you a bird’s-eye view of what’s going on with your company’s accounts. Click on the Reports link on the main menu; there are four report types available – invoice, order, payment and customer. You can select one to reveal the different reports available inside that type. After a report is selected, you can see a brief summary of what the report is supposed to show. Set the end date and then click on the ‘Run Report’ button to run the report; the system then shows you the output. You can also change the output format to PDF, Excel or HTML.
Step 10 Configuration
The configuration page lets you fine-tune your jBilling installation settings. Click on the Configuration link and you will see a list of settings available on the left menu bar. The links are somewhat self-explanatory but we’ll run through the more useful ones. The Billing Process link allows you to set the billing run parameters. You can change the invoice logo using the Invoice Display setting. To add new users, simply click on the ‘Users’ link. To set the default currency or add a new currency to the system, click on the ‘Currencies’ link. You can even blacklist customers under the ‘Blacklist’ link. You will find many more settings to customise jBilling as per your tastes and requirements – just keep exploring and make jBilling work for you.

Linux Performance Tools at LinuxCon North America 2014

http://www.brendangregg.com/blog/2014-08-23/linux-perf-tools-linuxcon-na-2014.html

This week I spoke at LinuxCon North America 2014 in Chicago, which was also my first LinuxCon. I really enjoyed the conference, and it was a privilege to take part and contribute. I'll be returning to work with some useful ideas, both from the talks and from talking with attendees.
I included my latest Linux performance observability tools diagram, which I keep updated on my website. I was also really excited to share some new diagrams, which are all in the slides.
I gave a similar talk two years ago at SCaLE11x, where I covered performance observability tools. This time, I covered observability, benchmarking, and tuning tools, providing a more complete picture of the performance tools landscape. I hope these help you in a similar way, when you move from observability to performing load tests with benchmarks, and finally tuning the system.
I also presented an updated summary on the state of tracing, following my recent discoveries with ftrace, which can serve some tracing needs in existing kernels. For more about ftrace, see my lwn.net article Ftrace: The hidden light switch, which was made freely available the same day as my talk.
At one point I included a blank template for observability tools (PNG).
My suggestion was to print this out and fill it in with whatever observability tools make most sense in your environment. This may include monitoring tools, both in-house and commercial, and can be supplemented by the server tools from my diagram above.

At Netflix, we have our own monitoring system to observe our thousands of cloud instances, and this diagram helps show which Linux and server components it currently measures, and what can be developed next. (This monitoring tool also includes many application metrics.) As I said in the talk, we'll sometimes need to log in to an instance using SSH and run the regular server tools.
This diagram may also help you develop your own monitoring tools, by showing what would ideally be observed. It can also help rank commercial products: next time a salesperson tells you their tool can see everything, hand them this diagram and a pen. :-)
My talk was standing room only, and some people couldn't get in the room and missed out. Unfortunately, it wasn't videoed, either. Sorry, I should have figured this out sooner and arranged something in time. Given how popular it was, I suspect I'll give it again some time, and will hopefully get it on video.
Thanks to those who attended, and the Linux Foundation for having me and organizing a great event!