http://www.linuxjournal.com/content/hacking-safe-bash
Through the years, I have settled on maintaining my sensitive data in plain-text files that I then encrypt asymmetrically. Although I take care to harden my system and encrypt partitions with LUKS wherever possible, I want to secure my most important data using higher-level tools, thereby lessening dependence on the underlying system configuration. Many powerful tools and utilities exist in this space, but some introduce unacceptable levels of "bloat" in one way or another. Being a minimalist, I have little interest in dealing with GUI applications that slow down my work flow or application-specific solutions (such as browser password vaults) that are applicable only toward a subset of my sensitive data. Working with text files affords greater flexibility over how my data is structured and provides the ability to leverage standard tools I can expect to find most anywhere.
Asymmetric Encryption
Asymmetric encryption, or public-key cryptography, relies on the use of two keys: one is held private, while the other is published freely. This model offers greater security than the symmetric approach, which is based on a single key that must be shared between the sender and receiver. GnuPG is a free software implementation of the OpenPGP standard as defined by RFC 4880. GnuPG supports both asymmetric and symmetric algorithms. Refer to https://gnupg.org for additional information.
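As a minimal sketch of the model (the recipient address and file names are placeholders, not from the article), the first command encrypts to the public key of you@example.com and writes notes.txt.asc; the second recovers the plain text, which requires the matching private key:

$ gpg --encrypt --armor -r you@example.com notes.txt
$ gpg --decrypt notes.txt.asc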
GPG

This article makes extensive use of GPG to interact with files stored in your safe. Many tutorials and HOWTOs exist that will walk you through how to set up and manage your keys properly. It is highly recommended to configure gpg-agent in order to avoid having to type your passphrase each time you interact with your private key. One popular approach used for this job is Keychain, because it also is capable of managing ssh-agent.

Let's take the classic example of managing credentials. This is a necessary evil, and while both pass and KeePassC look interesting, I am not yet convinced they would fit into my work flow. Also, I am definitely not lulled by any "copy to clipboard" feature. You've all seen the inevitable clipboard spills on IRC and such—no thanks! For the time being, let's fold this job into a "safe" concept by managing this data in a file. Each line in the file will conform to a simple format of:
resource:userid:password
Where "resource" is something mnemonic, such as an FQDN or even a hardware device
like a router that is limited to providing telnet access. Both
userid
and
password
fields are represented as hints. This hinting approach works nicely
given my conscious effort to limit the number of user IDs and passwords I
routinely use. This means a hint is all that is needed for muscle memory to
kick in. If a particular resource uses some exotic complexity rules, I
quickly can understand the slight variation by modifying the hint accordingly. For
example, a hint of "fo" might end up as "!fo" or
"fO". Another example of
achieving this balance between security and usability comes up when you need to
use an especially long password. One practical solution would be to combine
familiar passwords and document the hint accordingly. For example, a hint
representing a combination of "fo" and "ba" could
be represented as "fo..ba".
Finally, the hinting approach provides reasonable fall-back protection since
the limited information would be of little use to an intruder.
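To make the format concrete, a hypothetical safe file (entries invented for illustration, reusing the hints above) might read:

www.example.com:wr:fo
router1:admin:!fo
bank.example.net:wr:fo..ba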
Despite the obscurity, leaving this data in the clear would be silly and irresponsible. Having GnuPG configured provides an opportunity to encrypt the data to your own key. After creating the file, my work flow was looking something like this:
$ gpg --ear <key_id> <file>
$ shred -u <file>
Updating the file would involve decrypting, editing and repeating the steps above.
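A minimal sketch of that round trip, assuming a hypothetical file named credentials and a gpg-agent that has already cached the passphrase:

$ gpg -d credentials.asc > credentials
$ vi credentials
$ gpg --ear <key_id> --yes credentials
$ shred -u credentials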
This was tolerable for a while since, practically speaking, I'm not establishing credentials on a daily basis. However, I knew the day would eventually come when the tedious routine would become too much of a burden. As expected, that day came when I found myself keeping insurance-related notes that I then considered encrypting using the same technique. Now, I am talking about managing multiple files—a clear sign that it is time to write a script to act as a wrapper. My requirements were simple:
- Leverage common tools, such as GPG, shred and bash built-ins.
- Reduce typing for common operations (encrypt, decrypt and so on).
- Keep things clean and readable in order to accommodate future growth.
- Accommodate plain-text files but avoid having to micro-manage them.
Interestingly, the vim-gnupg Vim plugin easily can handle these requirements, because it integrates seamlessly with files ending in .asc, .gpg or .pgp extensions. Despite its abilities, I wanted to avoid having to manage multiple encrypted files and instead work with a higher-level "vault" of sorts. With that goal in mind, the initial scaffolding was cobbled together:
#!/bin/bash

CONF=${HOME}/.saferc
[ -f $CONF ] && . $CONF
[ -z "$SOURCE_DIR" ] && SOURCE_DIR=${HOME}/safe
SOURCE_BASE=$(basename $SOURCE_DIR)
TAR_ENC=$HOME/${SOURCE_BASE}.tar.gz.asc
TAR="tar -C $(dirname $SOURCE_DIR)"

usage() {
cat <<EOF
usage: $(basename $0) <options>
EOF
exit 1
}

This framework is simple enough to build from and establishes some ground rules. For starters, you're going to avoid micro-managing files by maintaining them in a single tar archive. The $SOURCE_DIR variable will fall back to $HOME/safe unless it is defined in ~/.saferc. Thinking ahead, this will allow people to collaborate on this project without clobbering the variable over and over. Either way, the value of $SOURCE_DIR is used as a base for the $SOURCE_BASE, $TAR_ENC and $TAR variables. If my ~/.saferc were to define $SOURCE_DIR as $HOME/foo, my safe will be maintained as $HOME/foo.tar.gz.asc. If I choose not to maintain a ~/.saferc file, my safe will reside in $HOME/safe.tar.gz.asc.
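To make that concrete, the entire ~/.saferc for the $HOME/foo example would be this single assignment:

# ~/.saferc
SOURCE_DIR=$HOME/foo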
Back to this primitive script, let's limit the focus simply to being able to open and close the safe. Let's work on the create_safe() function first so you have something to extract later:
create_safe() {
  [ -d $SOURCE_DIR ] || { echo "Missing directory: $SOURCE_DIR"; exit 1; }
  $TAR -cz $SOURCE_BASE | gpg -ear $(whoami) --yes -o $TAR_ENC
  find $SOURCE_DIR -type f | xargs shred -u
  rm -fr $SOURCE_DIR
}

The create_safe() function is looking pretty good at this point, since it automates a number of tedious steps. First, you ensure that the archive's base directory exists. If so, you compress the directory into a tar archive and pipe the output straight into GPG in order to encrypt the end result. Notice how the result of whoami is used for GPG's -r option. This assumes the private GPG key can be referenced using the same ID that is logged in to the system. This is strictly a convenience, as I have taken care to keep these elements in sync, but it will need to be modified if your setup is different. In fact, I could see eventually supporting an override of sorts with the ~/.saferc approach. For now though, let's put that idea on the back burner. Finally, the function calls the shred binary on all files within the base directory. This solves the annoying "Do I have a plain-text version laying around?" dilemma by automating the cleanup.
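For what it's worth, a hedged sketch of that back-burner override: a GPG_ID variable (my invention, not part of the script) could be honored from ~/.saferc before falling back to whoami:

# hypothetical: let ~/.saferc set GPG_ID; default to the login name
[ -z "$GPG_ID" ] && GPG_ID=$(whoami)
$TAR -cz $SOURCE_BASE | gpg -ear $GPG_ID --yes -o $TAR_ENC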
Now you should be able to create the safe. Assuming no ~/.saferc exists and the $PATH environment variable contains the directory containing safe.sh, you can begin to test this script:
$ cd
$ mkdir safe
$ for i in $(seq 5); do echo "this is secret #$i" > safe/file${i}.txt; done
$ safe.sh -c

You now should have a file named safe.tar.gz.asc in your home directory. This is an encrypted tarball containing the five files previously written to the ~/safe directory. You then cleaned things up by shredding each file and finally removing the ~/safe directory. This is probably a good time to recognize you are basing the design around an expectation to manage a single directory of files. For my purposes, this is acceptable. If subdirectories are needed, the code would need to be refactored accordingly.
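Since the -a folded into --ear requests ASCII armor, a quick peek confirms the result is an armored OpenPGP message:

$ head -1 ~/safe.tar.gz.asc
-----BEGIN PGP MESSAGE-----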
Now that you are able to create your safe, let's focus on being able to open it. The following extract_safe() function will do the trick nicely:
extract_safe() {
  [ -f $TAR_ENC ] || { echo "Missing file: $TAR_ENC"; exit 1; }
  gpg --batch -q -d $TAR_ENC | $TAR -zx
}

Essentially, you are just using GPG and tar in the opposite order. After opening the safe by running the script with -x, you should see the ~/safe directory.
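Continuing the earlier test run, opening the safe should restore the five files:

$ safe.sh -x
$ ls ~/safe
file1.txt  file2.txt  file3.txt  file4.txt  file5.txt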
Things seem to be moving along, but you easily can see the need to list the contents of your safe, because you do not want to have to open it each time in order to know what is inside. Let's add a list_safe() function:
list_safe() {
  [ -f $TAR_ENC ] || { echo "Missing file: $TAR_ENC"; exit 1; }
  gpg --batch -q -d $TAR_ENC | tar -zt
}

No big deal there, as you are just using tar's ability to list contents rather than extract them. While you are here, you can start DRYing this up a bit by consolidating all the file and directory tests into a single function. You even can add a handy little backup feature to scp your archive to a remote host. Listing 1 is an updated version of the script up to this point.
Listing 1. safe.sh
#!/bin/bash
#
# safe.sh - wrapper to interact with my encrypted file archive

CONF=${HOME}/.saferc
[ -f $CONF ] && . $CONF
[ -z "$SOURCE_DIR" ] && SOURCE_DIR=${HOME}/safe
SOURCE_BASE=$(basename $SOURCE_DIR)
TAR_ENC=$HOME/${SOURCE_BASE}.tar.gz.asc
TAR="tar -C $(dirname $SOURCE_DIR)"

usage() {
# usage text and option parsing below are reconstructed from the
# surrounding text; see the project repository for the canonical version
cat <<EOF
usage: $(basename $0) <options>

-b HOST  back up the archive to HOST (may be repeated)
-c       create the safe
-l       list the safe's contents
-x       extract the safe's contents
EOF
exit 1
}

is_or_die() {
  [ -f ${1:-$TAR_ENC} ] || { echo "Unable to find ${1:-$TAR_ENC}"; exit 1; }
}

create_safe() {
  [ -d $SOURCE_DIR ] || { echo "Missing directory: $SOURCE_DIR"; exit 1; }
  $TAR -cz $SOURCE_BASE | gpg -ear $(whoami) --yes -o $TAR_ENC
  find $SOURCE_DIR -type f | xargs shred -u
  rm -fr $SOURCE_DIR
}

extract_safe() {
  is_or_die
  gpg --batch -q -d $TAR_ENC | $TAR -zx
}

list_safe() {
  is_or_die
  gpg --batch -q -d $TAR_ENC | tar -zt
}

[ $# -eq 0 ] && usage

while getopts "b:clx" opt; do
  case $opt in
    b) BACKUP_HOSTS+=("$OPTARG") ;;
    c) create_safe ;;
    l) list_safe ;;
    x) extract_safe ;;
    *) usage ;;
  esac
done

for h in "${BACKUP_HOSTS[@]}"; do
  is_or_die
  scp $TAR_ENC ${h}: &> /dev/null
  [ $? -eq 0 ] && echo OK || echo Failed
done
The new -b option requires a hostname passed as an argument. When used, the archive will be scp'd accordingly. As a bonus, you can use the -b option multiple times in order to back up to multiple hosts. This means you have the option to configure a routine cron job to automate your backups while still being able to run a "one off" at any point. Of course, you will want to manage your SSH keys and configure ssh-agent if you plan to automate your backups. Recently, I have converted over to pam_ssh in order to fire up my ssh-agent, but that's a different discussion.
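As a hedged example (the path and hostnames are invented), a crontab entry along these lines would push a backup to two machines nightly:

# run nightly at 2am; each -b pushes the archive to one host
0 2 * * * $HOME/bin/safe.sh -b backup1.example.com -b backup2.example.com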
Back to the code, there is a small is_or_die() function that accepts an argument but falls back to the archive specified in $TAR_ENC. This will help keep the script lean and mean since, depending on the option(s) used, you know you are going to want to check for one or more files and/or directories before taking action.
For the remainder of this article, I'm going to avoid writing out the updated script in its entirety. Instead, I simply provide small snippets as new functionality is added.
For starters, how about adding the ability to output the contents of a single file being stored in your safe? All you would need to do is check for the file's presence and modify your tar options appropriately. In fact, you have an opportunity to avoid re-inventing the wheel by simply refactoring your extract_safe() function. The updated function will operate on a single file if called accordingly. Otherwise, it will operate on the entire archive. Worth noting is the extra step to provide a bit of user-friendliness. Using the default $SOURCE_DIR of ~/safe, the user can pass either safe/my_file or just my_file to the -o option:

list_safe() {
  is_or_die
  gpg --batch -q -d $TAR_ENC | tar -zt | sort
}

search_safe() {
  is_or_die
  FILE=${1#*/}
  for f in $(list_safe); do
    ARCHIVE_FILE=${f#$SOURCE_BASE/}
    [ "$ARCHIVE_FILE" == "$FILE" ] && return
  done
  false
}

extract_safe() {
  is_or_die
  OPTS=" -zx"
  [ $# -eq 1 ] && OPTS+=" $SOURCE_BASE/${1#*/} -O"
  gpg --batch -q -d $TAR_ENC | $TAR $OPTS
}
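Reusing the test files created earlier, and assuming -o is wired to call extract_safe with the file name (as the final script does), a quick sanity check of the single-file path might look like:

$ safe.sh -o file3.txt
this is secret #3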
The final version of safe.sh is maintained at https://github.com/windowsrefund/safe. It supports a few more use cases, such as the ability to add and remove files. When adding these features, I tried to avoid actually having to extract the archive to disk as a precursor to modifying its contents. I was unsuccessful due to GNU tar's refusal to read from STDIN when -r is used. A nice alternative to connecting GPG with tar via pipes might exist in GnuPG's gpg-zip binary. However, the Arch package maintainer appears to have included only the gpg-zip man page. In short, I prefer the "keep things as simple as possible; but no simpler" approach. If anyone is interested in improving the methods used to add and remove files, feel free to submit your pull requests. This also applies to the edit_safe() function, although I foresee refactoring that at some point given some recent activity with the vim-gnupg plugin.
Integrating with Mutt
My MUA of choice is mutt. Like many people, I have configured my mail client to interact with multiple IMAP accounts, each requiring authentication. In general, these credentials simply could be hard-coded in one or more configuration files but that would lead to shame, regret and terrible things. Instead, let's use a slight variation of Aaron Toponce's clever approach that empowers mutt with the ability to decrypt and source sensitive data:
$ echo "set my_pass_imap = l@mepassw0rd" > /tmp/pass_mail
$ safe.sh -a /tmp/pass_mail

Now that your safe contains the pass_mail file, you have mutt read it with this line in your ~/.muttrc:
source "safe.sh -o pass_mail |"

By reading the file, mutt initializes a variable you have named my_pass_imap. That variable can be used in other areas of mutt's configuration. For example, another area of your mutt configuration can use these lines:
set imap_user = "my_user_id"
set imap_pass = $my_pass_imap
set folder = "imaps://example.com"
set smtp_url = smtp://$imap_user:$imap_pass@example.com

By combining appropriately named variables with mutt's ability to support multiple accounts, it is possible to use this technique to manage all of your mail-related credentials securely while never needing to store plain-text copies on your hard drive.
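As a hedged sketch of that multiple-account idea (the second account, its variable and hostname are invented), mutt's account-hook can swap credentials per server:

# hypothetical second credential file, stored in the safe like the first
source "safe.sh -o pass_work |"
account-hook imaps://work.example.org/ 'set imap_user=worker imap_pass=$my_pass_work'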