Saturday, March 14, 2026

MultiTail – What It Is and How It Can Make You a Better SysAdmin

https://idolinux.com/multitail-what-it-is-and-how-it-can-make-you-a-better-sysadmin


As a Linux administrator, you already know how important it is to master fundamental tools like iptables, netcat, df, du, ls, and vim. But once your infrastructure grows beyond a single service and a couple of log files, the classic tail -f workflow starts to feel painfully limited.

This is where MultiTail becomes a game changer.

In this in-depth guide, we’ll explore what MultiTail is, how it works, why it’s superior to traditional approaches, and how mastering it can seriously improve your effectiveness as a sysadmin.


What Is MultiTail?

MultiTail is a powerful terminal-based utility that allows you to view multiple log files simultaneously in a single terminal window. Think of it as tail -f on steroids.

Instead of opening several terminal tabs or splitting your screen with tmux, MultiTail creates dynamically managed panes inside one terminal session. Each pane can follow a different file, command output, or even network stream.

At its core, MultiTail is designed to:

  • Monitor multiple log files at once
  • Display them in split windows
  • Apply colorization rules
  • Merge multiple files into one unified view
  • Filter content live
  • Follow new files dynamically

If you manage web servers, databases, firewalls, containers, or microservices, this is not just convenient — it’s transformative.


Why tail -f Is No Longer Enough

Before diving into MultiTail, let’s be honest about traditional workflows.

Most admins start with:

tail -f /var/log/syslog

Then maybe:

tail -f /var/log/nginx/access.log

Then another terminal for:

tail -f /var/log/nginx/error.log

Soon you’re juggling:

  • Multiple SSH sessions
  • Split panes in tmux
  • Scroll chaos
  • Missed correlations between logs

Correlating events across multiple files in real time becomes difficult. When debugging production issues, seconds matter.
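For what it's worth, plain GNU tail can already follow several files at once, printing a ==> name <== header each time output switches files. The throwaway sketch below (using disposable files under /tmp) shows the raw form that MultiTail improves on:

```shell
# Plain GNU tail on two files: workable, but the headers interleave with
# output and there is no color, filtering, merging, or pane layout.
printf 'alpha\n' > /tmp/demo_a.log
printf 'beta\n'  > /tmp/demo_b.log
tail -n 1 /tmp/demo_a.log /tmp/demo_b.log
```

MultiTail's panes, colorization, and merging are built on top of this basic capability.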

MultiTail solves this problem elegantly.


Installing MultiTail

On Debian/Ubuntu systems:

sudo apt update
sudo apt install multitail

On RHEL/CentOS (if available via EPEL):

sudo yum install multitail

To verify installation:

multitail -V

That’s it. No complex configuration required to get started.


Basic Usage: Viewing Multiple Files

The simplest use case:

multitail /var/log/syslog /var/log/auth.log

The terminal splits automatically into sections. Each file gets its own pane.

You can interact with the panes using keyboard shortcuts (for example, pressing b lets you pick a window and scroll back through its history).

Already more powerful than multiple tail -f sessions.


Vertical and Horizontal Splits

MultiTail allows layout control.

For vertical split:

multitail -s 2 /var/log/syslog /var/log/auth.log

By default, windows are stacked one above the other. You can also pass commands instead of files:

multitail -l "tail -f /var/log/syslog" -l "tail -f /var/log/auth.log"

The -l option lets you monitor command output instead of just files.

This means you’re not limited to logs — you can monitor any command in real time.


Monitoring Commands Instead of Files

You can follow dynamic command outputs like:

multitail -l "dmesg -w" -l "journalctl -f"

Or combine log files and commands:

multitail /var/log/syslog -r 5 -l "netstat -tulpn"

(The -r 5 option re-runs the next command every five seconds, since netstat prints once and exits.)

This is incredibly useful when debugging:

  • Network activity
  • Firewall events
  • Kernel messages
  • Service logs

Imagine diagnosing connectivity issues while watching firewall drops and application logs side by side.


Merging Multiple Logs Into One View

Sometimes separate panes are not what you want. You want a chronological, merged stream.

MultiTail can combine logs:

multitail /var/log/syslog -I /var/log/auth.log

The -I option merges the second file into the previous window, interleaving new lines as they arrive.

This is extremely useful when correlating authentication failures with system events.


Automatic Detection of New Files

One powerful feature often overlooked: MultiTail can track files that appear dynamically.

Example scenario:

Your application generates logs like:

app-2026-03-01.log
app-2026-03-02.log

Instead of restarting your monitoring session daily, use the -q option with a quoted search pattern (quoting stops the shell from expanding the glob just once at startup):

multitail -q 10 "/var/log/app-*.log"

MultiTail then checks every 10 seconds for newly created files matching the pattern and starts following them automatically.

This is particularly useful in environments where logs rotate frequently.


Color Highlighting and Filtering

MultiTail supports automatic colorization and filtering.

You can filter a specific word:

multitail -e "ERROR" /var/log/syslog

Or display separate filtered views, piping through tail -f so each view stays live:

multitail -l "tail -f /var/log/syslog | grep ERROR" -l "tail -f /var/log/syslog | grep WARNING"

With color rules enabled, errors can appear red, warnings yellow, and info messages green.

This dramatically improves visual parsing speed during incident response.
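As a sketch, a custom color scheme can be defined in /etc/multitail.conf or ~/.multitailrc; the scheme name errwarn below is a hypothetical example, and cs_re lines map a color to a regular expression:

```
colorscheme:errwarn
cs_re:red:ERROR
cs_re:yellow:WARNING
cs_re:green:INFO
```

You would then select it at startup with multitail -cS errwarn /var/log/syslog.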


Recursive Monitoring of Directories

If you need to monitor many logs scattered across subdirectories, combine MultiTail with find:

multitail $(find /var/log -maxdepth 3 -name "*.log")

This opens a window for every log file found, down to the specified depth.

For large infrastructures with complex logging trees, this feature saves enormous time.


Using MultiTail with systemd journalctl

Modern Linux systems use systemd, and logs often live in the journal.

You can combine MultiTail with journalctl:

multitail -l "journalctl -f -u nginx" -l "journalctl -f -u mysql"

Now you monitor multiple systemd services in parallel.

This avoids multiple terminal tabs and gives you synchronized visibility.


Navigating Inside MultiTail

MultiTail isn’t just a viewer — it’s interactive.

Common controls:

  • b – switch to next window
  • q – quit
  • Ctrl + c – stop command in active pane
  • Scroll up support (depending on configuration)
  • Resize windows dynamically

You’re no longer blind to previous output; you can inspect context more effectively than with plain tail -f.


Advanced Example: Real Incident Debugging

Let’s imagine a real production issue:

Users report slow logins.

You open:

multitail \
/var/log/nginx/access.log \
/var/log/nginx/error.log \
/var/log/auth.log \
-l "journalctl -f -u php-fpm"

In one terminal window you see:

  • Incoming requests
  • Backend errors
  • Authentication failures
  • PHP processing logs

Instead of context switching between terminals, everything appears in one place. Correlation becomes almost effortless.

This is where you transition from reactive administrator to proactive operator.


How MultiTail Makes You a Better SysAdmin

Mastering MultiTail improves you in multiple ways:

1. Faster Diagnosis

Less tab switching means faster thinking.
Faster thinking means faster resolution.

2. Better Event Correlation

Seeing logs side-by-side exposes patterns you would otherwise miss.

3. Reduced Cognitive Load

Instead of managing terminal sessions, you focus on the problem.

4. Improved Incident Handling

During outages, structure matters. MultiTail gives you structured visibility.

5. Stronger Command-Line Fluency

MultiTail encourages combining tools like:

  • grep
  • awk
  • journalctl
  • netstat
  • dmesg

This deepens your Linux proficiency overall.


MultiTail vs Alternatives

You could use:

  • tmux splits with multiple tail -f
  • watch command
  • less +F
  • GUI log aggregators

But MultiTail provides:

  • Native multi-pane layout
  • Built-in merging
  • Automatic file detection
  • Color coding
  • Interactive controls
  • Lightweight execution

No heavy centralized logging stack required.


Final Thoughts

If tools like df, du, vim, and ls are part of your daily routine, MultiTail deserves a place next to them.

It’s lightweight, powerful, and extremely practical.

You won’t notice how much time you’re wasting with traditional tail -f workflows — until you start using MultiTail.

After that, going back feels primitive.

In modern Linux environments where logs multiply rapidly and services interact constantly, MultiTail gives you clarity, speed, and confidence.

And those are exactly the qualities that separate average administrators from excellent ones.


Monday, July 14, 2025

How to Run a Python Script Using Docker

https://www.maketecheasier.com/run-python-script-using-docker


Running Python scripts is one of the most common tasks in automation. However, managing dependencies across different systems can be challenging. That’s where Docker comes in. Docker lets you package your Python script along with all its required dependencies into a container, ensuring it runs the same way on any machine. In this step-by-step guide, we’ll walk through the process of creating a real-life Python script and running it inside a Docker container.

Why Use Docker for Python Scripts

When you’re working with Python scripts, things can get messy/complex very fast. Different projects need different libraries, and what runs on your machine might break on someone else’s. Docker solves that by packaging your script and its environment together. So instead of saying “It works on my machine”, you can be sure it works the same everywhere.

It also keeps your system clean. You don’t have to install every Python package globally or worry about version conflicts. Everything stays inside the container.

If you’re deploying or handing your script off to someone else, Docker makes that easy, too. No setup instructions, no “install this and that”. Just one command, and it runs.

Write the Python Script

Let’s create a project directory to keep your Python script and Dockerfile. Once created, navigate into this directory using the cd command:

mkdir docker_file_organizer
cd docker_file_organizer

Create a script named “organize_files.py” to scan a directory and group files into folders based on their file extensions:

nano organize_files.py

Paste the following code into the “organize_files.py” file. Here, we use two built-in Python modules, os and shutil, to handle files and create directories dynamically:

import os
import shutil

SOURCE_DIR = "/files"

def organize_by_extension(directory):
    try:
        for fname in os.listdir(directory):
            path = os.path.join(directory, fname)
            if os.path.isfile(path):
                ext = fname.split('.')[-1].lower() if '.' in fname else 'no_extension'
                dest_dir = os.path.join(directory, ext)
                os.makedirs(dest_dir, exist_ok=True)
                shutil.move(path, os.path.join(dest_dir, fname))
                print(f"Moved: {fname} → {ext}/")
    except Exception as e:
        print(f"Error organizing files: {e}")

if __name__ == "__main__":
    organize_by_extension(SOURCE_DIR)

In this script, we organize files in a given directory based on their extensions. We use the os module to list the files, check if each item is a file, extract its extension, and create folders named after those extensions (if they don’t already exist). Then, we use the shutil module to move each file into its corresponding folder. For each move, we print a message showing the file’s new location.
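The grouping rule boils down to a single expression. As a minimal sketch (dest_folder is a hypothetical helper name, not part of the script above), it behaves like this:

```python
# Mirrors the destination rule in organize_files.py: the target folder is the
# lowercased text after the last dot, or "no_extension" if there is no dot.
def dest_folder(fname: str) -> str:
    return fname.split('.')[-1].lower() if '.' in fname else 'no_extension'

print(dest_folder("Report.PDF"))      # pdf
print(dest_folder("archive.tar.gz"))  # gz
print(dest_folder("README"))          # no_extension
```

Note that multi-part extensions like .tar.gz collapse to their last component, matching the behavior of the full script.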

Create the Dockerfile

Now, create a Dockerfile to define the environment in which your script will run:

FROM python:latest
LABEL maintainer="you@example.com"
WORKDIR /usr/src/app
COPY organize_files.py .
CMD ["python", "./organize_files.py"]

We use this Dockerfile to build an image that contains Python, add our script to it, and make sure the script runs automatically when the container starts.
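One hedged refinement: python:latest points to a moving target, so builds may differ between machines over time. Pinning a specific slim tag (the exact tag below is an example, not a requirement) keeps the image reproducible and smaller:

```
FROM python:3.12-slim
LABEL maintainer="you@example.com"
WORKDIR /usr/src/app
COPY organize_files.py .
CMD ["python", "./organize_files.py"]
```

Everything else in the guide works unchanged with a pinned base image.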

Build the Docker Image

Before you can build the Docker image, you need to install Docker first. After that, run the following command to package everything into a Docker image:

sudo docker build -t file-organizer .

It reads our Dockerfile and puts together the Python setup and our script so they’re ready to run as a single container image.

Create a Sample Folder with Files

To see our script in action, create a test folder named “sample_files” with a few files of different types. These files just make the folder a bit messy so we can see how our Python script handles it:

mkdir ~/sample_files
touch ~/sample_files/test.txt
touch ~/sample_files/image.jpg
touch ~/sample_files/data.csv

Run the Script Inside Docker

Finally, we run our Docker container and mount the sample folder into it. The -v flag mounts your local “~/sample_files” directory to the “/files” directory in the container, which allows the Python script to read and organize files on your host machine:

docker run --rm -v ~/sample_files:/files file-organizer

Here, we use the --rm option to remove the container automatically after it finishes running, which saves disk space.

In the end, we use the tree command to check if the files have been sorted into folders based on their extensions:

tree sample_files

Note: The tree command isn’t pre-installed on most systems. You can easily install it using a package manager like apt on Ubuntu, brew on macOS, and so on.
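If installing tree isn’t an option, a few lines of Python can report the same information. This sketch builds a throwaway sample layout under a temporary directory (the paths are hypothetical) and lists every file relative to it:

```python
import os
import tempfile

# Build a tiny sample layout to walk: <root>/txt/test.txt
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "txt"))
open(os.path.join(root, "txt", "test.txt"), "w").close()

def list_tree(base):
    """Return the relative paths of all files under base, sorted."""
    found = []
    for dirpath, _dirs, files in os.walk(base):
        rel = os.path.relpath(dirpath, base)
        for f in files:
            found.append(os.path.normpath(os.path.join(rel, f)))
    return sorted(found)

print(list_tree(root))
```

Point list_tree at sample_files after the container run to confirm each file landed in its extension folder.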

Final Thoughts

With your Python script running inside Docker, you’re all set to take full advantage of a clean, portable, and consistent development setup. You can easily reuse this containerized workflow for other automation tasks, share your script without worrying about dependencies, and keep your system clutter-free. As a next step, consider exploring how to build multi-script Docker images, schedule containers with cron jobs, or integrate your scripts with other tools like Git, Jenkins, or even cloud platforms to streamline your automation and deployment process. 

Saturday, May 24, 2025

LogKeys: Monitor Keyboard Keystrokes in Linux

https://www.tecmint.com/logkeys-monitor-keyboard-keystroke-linux


Keylogging, short for “keystroke logging,” is the process of recording the keys struck on a keyboard, usually without the user’s knowledge.

Keyloggers can be implemented via hardware or software:

  • Hardware keyloggers intercept data at the physical level (e.g., between the keyboard and computer).
  • Software keyloggers, like LogKeys, capture keystrokes through the operating system.

This article explains how to use a popular open-source Linux keylogger called LogKeys for educational or testing purposes only. Unauthorized use of keyloggers to monitor someone else’s activity is unethical and illegal.

What is LogKeys?

LogKeys is an open-source keylogger for Linux that captures and logs keyboard input, including characters, function keys, and special keys. It is designed to work reliably across a wide range of Linux systems without crashing the X server.

LogKeys also correctly handles modifier keys like Alt and Shift, and is compatible with both USB and serial keyboards.

While there are numerous keylogger tools available for Windows, Linux has fewer well-supported options. Although LogKeys has not been actively maintained since 2019, it remains one of the more stable and functional keyloggers available for Linux as of today.

Installation of Logkeys in Linux

If you’ve previously installed Linux packages from a tarball (source), you should find installing the LogKeys package straightforward.

However, if you’ve never built a package from source before, you’ll need to install some required development tools first, such as C++ compilers and GCC libraries, before proceeding.

Installing Prerequisites

Before building LogKeys from source, ensure your system has the required development tools and libraries installed:

On Debian/Ubuntu:

sudo apt update
sudo apt install build-essential autotools-dev autoconf kbd

On Fedora/CentOS/RHEL:

sudo dnf install automake make gcc-c++ kbd

On openSUSE:

sudo zypper install automake gcc-c++ kbd

On Arch Linux:

sudo pacman -S base-devel kbd

Installing LogKeys from Source

First, download the latest LogKeys source package using the wget command, then extract the ZIP archive and navigate into the extracted directory:

wget https://github.com/kernc/logkeys/archive/master.zip
unzip master.zip  
cd logkeys-master/

or clone the repository using Git, as shown below:

git clone https://github.com/kernc/logkeys.git
cd logkeys

Next, run the following commands to build and install LogKeys:

./autogen.sh              # Generate build configuration scripts
cd build                  # Switch to the build directory
../configure              # Configure the build
make                      # Compile the source code
sudo make install         # Install binaries and man pages

If you encounter issues related to keyboard layout or character encoding, regenerate your locale settings:

sudo locale-gen

Usage of LogKeys in Linux

Once LogKeys is installed, you can begin using it to monitor and log keyboard input using the following commands.

Start Keylogging

This command starts the keylogging process, which must be run with superuser (root) privileges because it needs access to low-level input devices. Once started, LogKeys begins recording all keystrokes and saves them to the default log file: /var/log/logkeys.log.

Note: You won’t see any output in the terminal; logging runs silently in the background.

sudo logkeys --start

Stop Keylogging

This command terminates the keylogging process started earlier. It’s important to stop LogKeys when you’re done, both to conserve system resources and to ensure the log file is safely closed.

sudo logkeys --kill

Get Help / View Available Options

The following command displays all available command-line options and flags you can use with LogKeys.

logkeys --help

Useful options include:

  • --start : Start the logger
  • --kill : Stop the logger
  • --output <file> : Specify a custom log output file
  • --no-func-keys : Don’t log function keys (F1-F12)
  • --no-control-keys : Skip control characters (e.g., Ctrl+C, Backspace)

View the Logged Keystrokes

The cat command displays the contents of the default log file where LogKeys saves keystrokes.

sudo cat /var/log/logkeys.log

You can also open it with a text editor like nano or less:

sudo nano /var/log/logkeys.log
or
sudo less /var/log/logkeys.log

Uninstall LogKeys in Linux

To remove LogKeys from your system and clean up the installed binaries, manuals, and scripts, use the following commands:

cd build
sudo make uninstall

This will remove all files that were installed with make install, including the logkeys binary and man pages.

Conclusion

LogKeys is a powerful keylogger for Linux that enables users to monitor keystrokes in a variety of environments. Its compatibility with modern systems and ease of installation make it a valuable tool for security auditing, parental control testing, and educational research.

However, it’s crucial to emphasize that keylogging should only be used in ethical, lawful contexts—such as with explicit user consent or for personal system monitoring. Misuse can lead to serious legal consequences. Use responsibly and stay informed.