Wednesday, May 9, 2018

The Numfmt Command Tutorial With Examples For Beginners

https://www.ostechnix.com/the-numfmt-command-tutorial-with-examples-for-beginners


numfmt command
Today, I came across an interesting and rather little-known command named “Numfmt” that converts numbers to and from human readable format. It reads numbers in various representations and reformats them according to the specified options. If no numbers are given, it reads them from standard input. It is part of the GNU coreutils package, so there is nothing to install. In this brief tutorial, let us look at the usage of the Numfmt command with some practical examples.

The Numfmt Command Tutorial With Examples

Picture a large number, for example ‘1003040500’. Of course, the mathematics ninjas among you can work out a human readable representation of this number in seconds, but it is a bit hard for me. This is where the Numfmt command comes to the rescue. Run the following command to convert the given number into human readable form.
$ numfmt --to=si 1003040500
1.1G
Let us try an even longer number than the previous one. How about “10090008000700060005”? A bit hard, right? Yes. But the Numfmt command will display the human readable form of this number instantly.
$ numfmt --to=si 10090008000700060005
11E
Here, si refers to the International System of Units (abbreviated SI, from the French Système international d'unités).
So, if you use si, the numfmt command will auto-scale numbers according to the International System of Units (SI) standard.
Numfmt also supports the following unit options:
  • iec and iec-i – Auto-scale numbers according to the International Electrotechnical Commission (IEC) standard.
  • auto – With this method, numbers with ‘K’,‘M’,‘G’,‘T’,‘P’,‘E’,‘Z’,‘Y’ suffixes are interpreted as SI values, and numbers with ‘Ki’, ‘Mi’,‘Gi’,‘Ti’,‘Pi’,‘Ei’,‘Zi’,‘Yi’ suffixes are interpreted as IEC values.
  • none – no auto-scaling.
Here are some more examples of the above options.
$ numfmt --to=iec 10090008000700060005
8.8E
$ numfmt --to=iec-i 10090008000700060005
8.8Ei
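By the way, the same value scales differently under the SI (powers of 1000) and IEC (powers of 1024) standards. Here is a quick comparison using 1048576 (i.e. 1024*1024); the exact rounding of the SI result may vary slightly between coreutils versions.
$ numfmt --to=si 1048576
1.1M
$ numfmt --to=iec 1048576
1.0M
$ numfmt --to=iec-i 1048576
1.0Mi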
We have seen how to convert numbers to human readable format. Now let us do the reverse, i.e. convert numbers from human readable format. To do so, simply replace the “--to” option with “--from”, as shown below.
$ numfmt --from=si 1G
1000000000
$ numfmt --from=si 1M
1000000
$ numfmt --from=si 1P
1000000000000000
We can also do this with iec and iec-i standards.
$ numfmt --from=iec 1G
1073741824
$ numfmt --from=iec-i 1Gi
1073741824
$ numfmt --from=auto 1G
1000000000
$ numfmt --from=auto 1Gi
1073741824
Like I already mentioned, when using “auto”, the numbers with ‘K’,‘M’,‘G’,‘T’,‘P’,‘E’,‘Z’,‘Y’ suffixes are interpreted as SI values, and numbers with ‘Ki’, ‘Mi’,‘Gi’,‘Ti’,‘Pi’,‘Ei’,‘Zi’,‘Yi’ suffixes are interpreted as IEC values.
Numfmt command can also be used in conjunction with other commands. Have a look at the following examples.
$ echo 1G | numfmt --from=si
1000000000
$ echo 1G | numfmt --from=iec
1073741824
$ df -B1 | numfmt --header --field 2-4 --to=si
$ ls -l | numfmt --header --field 5 --to=si
Please note that the ls and df commands already have a “--human-readable” option to display their output in human readable form. The above examples are given for demonstration purposes only.
You can tweak the output using the “--format” or “--padding” options as well.
Pad to 5 characters, right aligned, using the ‘--format’ option:
$ du -s * | numfmt --to=si --format="%5f"
Pad to 5 characters, left aligned, using the ‘--format’ option:
$ du -s * | numfmt --to=si --format="%-5f"
Pad to 5 characters, right aligned, using the ‘--padding’ option:
$ du -s * | numfmt --to=si --padding=5
Pad to 5 characters, left aligned, using the ‘--padding’ option:
$ du -s * | numfmt --to=si --padding=-5
For more options and usage details, refer to the man page.
$ man numfmt
And, that’s all for now. More good stuff to come. Stay tuned!
Cheers!
Resource:

Developing Console Applications with Bash

https://www.linuxjournal.com/content/developing-console-applications-bash

Bash screenshot from Wikipedia, https://en.wikipedia.org/wiki/Bash_(Unix_shell)
Bring the power of the Linux command line into your application development process.
As a novice software developer, the one thing I look for when choosing a programming language is this: is there a library that allows me to interface with the system to accomplish a task? If Python didn't have Flask, I might choose a different language to write a web application. For this same reason, I've begun to develop many, admittedly small, applications with Bash. Although Python, for example, has many modules to import and extend functionality, Bash has thousands of commands that perform a variety of features, including string manipulation, mathematic computation, encryption and database operations. In this article, I take a look at these features and how to use them easily within a Bash application.

Reusable Code Snippets

Bash provides three features that I've found particularly useful when creating reusable functions: aliases, functions and command substitution. An alias is a command-line shortcut for a long command. Here's an example:

alias getloadavg='cat /proc/loadavg'

The alias for this example is getloadavg. Once defined, it can be executed as any other Linux command. In this instance, the alias will dump the contents of the /proc/loadavg file. Something to keep in mind is that this is a static command alias. No matter how many times it is executed, it always will dump the contents of the same file. If there is a need to vary the way a command is executed (by passing arguments, for instance), you can create a function. A function in Bash works the same way as a function in any other language: arguments are evaluated, and commands within the function are executed. Here's an example function:

getfilecontent() {
    if [ -f $1 ]; then
        cat $1
    else
        echo "usage: getfilecontent "
    fi
}

This function declaration defines the function name as getfilecontent. The if/else statement checks whether the file specified as the first function argument ($1) exists. If it does, the contents of the file are output. If not, usage text is displayed. Because of the incorporation of the argument, the output of this function will vary based on the argument provided.
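For example, calling the function with a readable file prints its contents, while calling it with a missing path falls through to the usage message:

getfilecontent /proc/loadavg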
The final feature I want to cover is command substitution. This is a mechanism for reassigning output of a command. Because of the versatility of this feature, let's take a look at two examples. This one involves reassigning the output to a variable:

LOADAVG="$(cat /proc/loadavg)"

The syntax for command substitution is $(command) where "command" is the command to be executed. In this example, the LOADAVG variable will have the contents of the /proc/loadavg file stored in it. At this point, the variable can be evaluated, manipulated or simply echoed to the console.
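The second example mentioned above isn't shown in this excerpt, but another common use of the same syntax (just a sketch) is to embed a command's output directly inside a string handed to another command:

echo "Current load: $(cat /proc/loadavg)"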

Text Manipulation

If there is one feature that sets scripting on UNIX apart from other environments, it is the robust ability to process text. Although many text processing mechanisms are available when scripting in Linux, here I'm looking at grep, awk, sed and variable-based operations. The grep command allows for searching through text whether in a file or piped from another command. Here's a grep example:

alias searchdate='grep "[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]"'

The alias created here will search through data for a date in the YYYY-MM-DD format. Like the grep command, text either can be provided as piped data or as a file path following the command. As the example shows, search syntax for the grep command includes the use of regular expressions (or regex).
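As a quick illustration (the sample text is made up), piping a line containing a date through the alias prints that line back, because the regex matches the 2018-05-09 portion:

echo "backup finished 2018-05-09 02:00" | searchdate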
When processing lines of text for the purpose of pulling out delimited fields, awk is the easiest tool for the job. You can use awk to create verbose output of the /proc/loadavg file:

awk '{ printf("1-minute: %s\n5-minute: %s\n15-minute: %s\n",$1,$2,$3); }' /proc/loadavg

For the purpose of this example, let's examine the structure of the /proc/loadavg file. It is a single-line file, and there are typically five space-delimited fields, although this example uses only the first three. Much like Bash function arguments, fields in awk are referenced as variables named by their position in the line ($1 is the first field and so on). In this example, the first three fields are referenced as arguments to the printf statement. The printf statement will display three lines, each containing a description of the data and the data itself. Note that each %s is substituted with the corresponding parameter to the printf function.
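With a hypothetical load average of 0.52 0.48 0.45, the command above would print:

1-minute: 0.52
5-minute: 0.48
15-minute: 0.45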
Within all of the commands available for text processing on Linux, sed may be considered the Swiss army knife for text processing. Like grep, sed uses regex. The specific operation I'm looking at here involves regex substitution. For an accurate comparison, let's re-create the previous awk example using sed:

sed 's/^\([0-9]\+\.[0-9]\+\) \([0-9]\+\.[0-9]\+\) \([0-9]\+\.[0-9]\+\).*$/1-minute: \1\n5-minute: \2\n15-minute: \3/g' /proc/loadavg

Since this is a long example, I'm going to separate this into smaller parts. As I mentioned, this example uses regex substitution, which follows this syntax: s/search/replace/g. The "s" begins the definition of the substitution statement. The "search" value defines the text pattern you want to search for, and the "replace" value defines what you want to replace the search value with. The "g" at the end is a flag that denotes global substitution within the file and is one of many flags available with the substitute statement. The search pattern in this example is:

^\([0-9]\+\.[0-9]\+\) \([0-9]\+\.[0-9]\+\) \([0-9]\+\.[0-9]\+\).*$

The caret (^) at the beginning of the string denotes the beginning of a line of text being processed, and the dollar sign ($) at the end of the string denotes the end of a line of text. Four things are being searched for within this example. The first three items are:

\([0-9]\+\.[0-9]\+\)

This entire string is enclosed in escaped parentheses, which makes the value within available for use in the replace value. Just like the grep example, the [0-9] will match a single numeric character. When followed by an escaped plus sign, it will match one or more numeric characters. The escaped period will match a single period. When you put this whole expression together, you get a pattern that matches a decimal number.
The fourth item in the search value is simply a period followed by an asterisk. The period will match any character, and the asterisk will match zero or more of whatever preceded it. The replace value of the example is:

1-minute: \1\n5-minute: \2\n15-minute: \3

This is largely composed of plain text; however, it contains four unique special items. There are newline characters, represented by backslash-"n" ("\n"). The other three items are backslashes followed by a number. Each number corresponds to one of the parenthesized patterns in the search value: "\1" is the first pattern in parentheses, "\2" is the second and so on. The output of this sed command will be exactly the same as the awk command from earlier.
The final mechanism for string manipulation that I want to discuss involves using Bash variables to manipulate strings. Although this is much less powerful than traditional regex, it provides a number of ways to manipulate text. Here are a few examples using Bash variables:

MYTEXT="my example string"
echo "String Length:  ${#MYTEXT}"
echo "First 5 Characters: ${MYTEXT:0:5}"
echo "Remove \"example\": ${MYTEXT/ example/}"

The variable named MYTEXT is the sample string this example works with. The first echo command shows how to determine the length of a string variable. The second echo command will return the first five characters of the string. This substring syntax involves the beginning character index (in this case, zero) and the length of the substring (in this case, five). The third echo command removes the word "example" along with a leading space.
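For reference, running those three lines produces the following output:

String Length:  17
First 5 Characters: my ex
Remove "example": my string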

Mathematic Computation

Although text processing might be what makes Bash scripting great, the need to do mathematics still exists. Basic math problems can be evaluated using either bc, awk or Bash arithmetic expansion. The bc command has the ability to evaluate math problems via an interactive console interface and piped input. For the purpose of this article, let's look at evaluating piped data. Consider the following:

pow() {
    if [ -z "$1" ]; then
        echo "usage: pow  "
    else
        echo "$1^$2" | bc
    fi
}

This example shows creating an implementation of the pow function from C++. The function requires two arguments. The result of the function will be the first number raised to the power of the second number. The math statement of "$1^$2" is piped into the bc command for calculation.
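For example, pow 2 10 prints 1024. Because bc performs arbitrary-precision arithmetic, much larger results are no problem either:

pow 2 100
1267650600228229401496703205376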
Although awk does provide the ability to do basic math calculation, the ability for awk to iterate through lines of text makes it especially useful for creating summary data. For instance, if you want to calculate the total size of all files within a folder, you might use something like this:

foldersize() {
    if [ -d $1 ]; then
        ls -alRF $1/ | grep '^-' | awk 'BEGIN {tot=0} { tot=tot+$5 } END { print tot }'
    else
        echo "$1: folder does not exist"
    fi
}

This function will do a recursive long-listing for all entries underneath the folder supplied as an argument. It then will search for all lines beginning with a dash (this will select all files). The final step is to use awk to iterate through the output and calculate the combined size of all files.
Here is how the awk statement breaks down. Before processing of the piped data begins, the BEGIN block sets a variable named tot to zero. Then for each line, the next block is executed. This block will add to tot the value of the fifth field in each line, which is the file size. Finally, after the piped data has been processed, the END block then will print the value of tot.
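As a quick illustration (the path is arbitrary), the function also pairs nicely with the numfmt command covered earlier in this post to turn the raw byte count into something human readable:

foldersize /var/log | numfmt --to=iec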
The other way to perform basic math is through arithmetic expansion, which looks visually similar to command substitution. Let's rewrite the previous example using arithmetic expansion:

pow() {
    if [ -z "$1" ]; then
        echo "usage: pow  "
    else
        echo "$[$1**$2]"
    fi
}

The syntax for arithmetic expansion is $[expression], where expression is a mathematic expression. Notice that instead of using the caret operator for exponents, this example uses a double-asterisk. Although there are differences and limitations to this method of calculation, the syntax can be more intuitive than piping data to the bc command.
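Note that $[expression] is an older form that Bash still accepts but documents as deprecated; the equivalent, more portable syntax is $((expression)). The echo line in the function above could just as well be written as:

echo "$(( $1 ** $2 ))"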

Cryptography

The ability to perform cryptographic operations on data may be necessary depending on the needs of an application. If a string needs to be hashed, a file needs to be encrypted, or data needs to be base64-encoded, this all can be accomplished using the openssl command. Although openssl provides a large set of ciphers, hashing algorithms and other functions, I cover only a few here.
The first example shows encrypting a file using the blowfish cipher:

bf-enc() {
    if [ -f $1 ] && [ -n "$2" ]; then
        cat $1 | openssl enc -e -blowfish -pass pass:$2 > $1.enc
    else
        echo "usage: bf-enc <file> <password>"
    fi
}

This function requires two arguments: a file to encrypt and the password to use to encrypt it. After running, this function produces a file with the same name as your original but with an ".enc" extension added.
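For example (the file name and password are made up), the following call leaves the original file untouched and writes the encrypted copy next to it as notes.txt.enc. Note that OpenSSL 3.x moved Blowfish into the legacy provider, so the cipher may not be available out of the box on newer systems:

bf-enc notes.txt s3cret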
Once you have the data encrypted, you need a function to decrypt it. Here's the decryption function:

bf-dec() {
    if [ -f $1 ] && [ -n "$2" ]; then
        cat $1 | openssl enc -d -blowfish -pass pass:$2 > ${1%%.enc}
    else
        echo "usage: bf-dec  "
    fi
}

The syntax for the decryption function is almost identical to the encryption function with the addition of "-d" to decrypt the piped data and the syntax to remove ".enc" from the end of the decrypted filename.
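Continuing the made-up example from above, decrypting the file restores the original name by stripping the ".enc" suffix:

bf-dec notes.txt.enc s3cret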
Another piece of functionality provided by openssl is the ability to create hashes. Although files may be hashed using openssl, I'm going to focus on hashing strings here. Let's make a function to create an MD5 hash of a string:

md5hash() {
    if [ -z "$1" ]; then
        echo "usage: md5hash "
    else
        echo "$1" | openssl dgst -md5 | sed 's/^.*= //g'
    fi
}

This function will take the string argument provided to the function and generate an MD5 hash of that string. The sed statement at the end of the command will strip off text that openssl puts at the beginning of the command output, so that the only text returned by the function is the hash itself.
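For example, hashing the string "hello" should print the digest shown below. Keep in mind that echo appends a newline before the data reaches openssl, so the result matches the MD5 of "hello" plus a newline rather than of the bare string:

md5hash hello
b1946ac92492d2347c6235b4d2611184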
The way that you would validate a hash (as opposed to decrypting it) is to create a new hash and compare it to the old hash. If the hashes match, the original strings will match.
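Building on that idea, a small helper (a sketch, not part of the original article) can wrap the comparison using the md5hash function defined above:

md5verify() {
    if [ "$(md5hash "$1")" = "$2" ]; then
        echo "hash matches"
    else
        echo "hash mismatch"
    fi
}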
I also want to discuss the ability to create a base64-encoded string of data. One particular application that I have found this useful for is creating an HTTP basic authentication header string (this contains username:password). Here is a function that accomplishes this:

basicauth() {
    if [ -z "$1" ]; then
        echo "usage: basicauth "
    else
        echo "$1:$(read -s -p "Enter password: " pass ;
         ↪echo $pass)" | openssl enc -base64
    fi
}

This function will take the user name provided as the first function argument and the password provided by user input through command substitution and use openssl to base64-encode the string. This string then can be added to an HTTP authorization header field.
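The result slots straight into a request header; for example with curl (the user name and URL are placeholders). Note the echo -n in the function above, which keeps a trailing newline from being folded into the encoded credentials:

curl -H "Authorization: Basic $(basicauth apiuser)" https://example.com/api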

Database Operations

An application is only as useful as the data that sits behind it. Although there are command-line tools to interact with database server software, here I focus on the SQLite file-based database. Something that can be difficult when moving an application from one computer to another is that depending on the version of SQLite, the executable may be named differently (typically either sqlite or sqlite3). Using command substitution, you can create a fool-proof way of calling sqlite:

$(ls /usr/bin/sqlite* | grep 'sqlite[0-9]*$' | head -n1)

This will return the full file path of the sqlite executable available on a system.
Consider an application that, upon first execution, creates an empty database. If this syntax is used to invoke the sqlite binary, the empty database always will be created using the correct version of sqlite on that system.
Here's an example of how to create a new database with a table for personal information:

$(ls /usr/bin/sqlite* | grep 'sqlite[0-9]*$' | head -n1) test.db "CREATE TABLE people(fname text, lname text, age int)"

This will create a database file named test.db and will create the people table as described. This same syntax could be used to perform any SQL operations that SQLite provides, including SELECT, INSERT, DELETE, DROP and many more.
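Following the same pattern, rows can be inserted into and read back from the people table (the sample data is made up):

$(ls /usr/bin/sqlite* | grep 'sqlite[0-9]*$' | head -n1) test.db "INSERT INTO people VALUES('John','Doe',30)"
$(ls /usr/bin/sqlite* | grep 'sqlite[0-9]*$' | head -n1) test.db "SELECT * FROM people"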
This article barely scrapes the surface of commands available to develop console applications on Linux. There are a number of great resources for learning more in-depth scripting techniques, whether in Bash, awk, sed or any other console-based toolset. See the Resources section for links to more helpful information.

Resources

How to Use Systemd Timers as a Cron Replacement

https://www.maketecheasier.com/use-systemd-timers-as-cron-replacement


As a Linux user you’re probably familiar with cron. It has worked as the go-to Unix time-based job scheduler for many years. Now many users are seeing Systemd timers begin to replace cron’s dominance.
This article will discuss the basics of how to set up your own timer and make sure it’s running properly on your system.
If you’re already using Systemd as an init system (many popular Linux distros run it by default, including Arch, Debian, Fedora, Red Hat, and Ubuntu), timers are already available on your system; there is nothing extra to install before you start using them.
The easiest way to check which timers exist on your computer is with the following command (you don’t have to run this as root):

systemctl list-timers --all

The --all option here shows inactive timers as well. There aren’t any inactive timers currently on this system.
You should find an output similar to the following image:
Systemd timer list
You can see the date and time each timer will activate, the countdown until that point, how much time has passed since it last ran, the unit name of the timer itself, and the service each timer unit activates.
All timers must be paired with a corresponding service. In the next section you will see how to create a “.timer” file that activates a “.service” file.
You can create a new timer by placing a custom .timer file in “/etc/systemd/system/”. In creating a new timer for my DuckDNS dynamic DNS service file, I ended up with this text:
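(A reconstruction, not the author's original file: the contents below follow the sections described next, with the Description wording and the 11:43:00 time inferred from the rest of the article.)

[Unit]
Description=Update DuckDNS dynamic DNS record

[Timer]
OnCalendar=*-*-* 11:43:00
Persistent=true

[Install]
WantedBy=timers.target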

1. [Unit] section

The “Description=…” option in the file tells you the name/description of the timer itself. In this case my “duckdns.timer” will update my DNS by telling the “duckdns.service” file to run a series of commands.
You can change the wording after “Description=” to say whatever you like.

2. [Timer] section

“OnCalendar=…” here shows one way of telling the timer when to activate. “*-*-*” stands for “Year-Month-Day,” and the asterisks mean it will run every day of every month of every year from here on forward. The time that follows the asterisks shows what time of day the timer should run.
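OnCalendar accepts many other calendar expressions as well; a few common forms (see man systemd.time for the full syntax) include:

OnCalendar=hourly
OnCalendar=weekly
OnCalendar=Mon..Fri 09:00:00
OnCalendar=*-*-01 03:00:00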
“Persistent=true” just means that the timer will run automatically if it missed the previous start time. This could happen because the computer was turned off before the event could take place. This is optional but recommended.

3. [Install] section

Finally, “WantedBy=timers.target” shows that the Systemd timers.target will use this .timer file. This line in the file shows the dependency chain from one file to another. You should not omit or change this line.

Other options

You can find many other features by scanning Systemd’s man page with man systemd.timer. Navigate to the “OPTIONS” section to discover options for accuracy, persistence, and running after boot.
Activate any timer you’ve created with the systemctl enable and systemctl start syntax.
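For the timer created above, that would be (assuming the file was saved as /etc/systemd/system/duckdns.timer; running systemctl daemon-reload beforehand makes sure Systemd sees the new file):

sudo systemctl enable duckdns.timer
sudo systemctl start duckdns.timer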
Look again at systemctl list-timers to see your timer in action.
Systemd timer list
You can see if your timer ran as expected by inspecting its corresponding service file with systemctl status. In this case you can see that my timer ran at 11:43:00 like it was supposed to.
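In this example, that check is simply:

systemctl status duckdns.service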
Systemd status
Although many third-party programs, including DuckDNS, come with scripts that allow them to update as needed, creating timers in Systemd is a helpful skill to know. My creation of a timer for DuckDNS here was unnecessary, but it shows how other types of Systemd timers would work.
This knowledge will be helpful, for instance, for creating and running your own Bash scripts. You could even use it to alter an existing timer to better suit your preferences. Furthermore, it’s always good to know how your computer operates, and since Systemd controls many basic functions, this one piece of the puzzle can help you better understand the manner in which events are triggered every day.
Thanks for following along. Good luck with creating your own timers.