Saturday, May 24, 2025

LogKeys: Monitor Keyboard Keystrokes in Linux

https://www.tecmint.com/logkeys-monitor-keyboard-keystroke-linux

Keylogging, short for “keystroke logging,” is the process of recording the keys struck on a keyboard, usually without the user’s knowledge.

Keyloggers can be implemented via hardware or software:

  • Hardware keyloggers intercept data at the physical level (e.g., between the keyboard and computer).
  • Software keyloggers, like LogKeys, capture keystrokes through the operating system.

This article explains how to use a popular open-source Linux keylogger called LogKeys for educational or testing purposes only. Unauthorized use of keyloggers to monitor someone else’s activity is unethical and illegal.

What is LogKeys?

LogKeys is an open-source keylogger for Linux that captures and logs keyboard input, including characters, function keys, and special keys. It is designed to work reliably across a wide range of Linux systems without crashing the X server.

LogKeys also correctly handles modifier keys like Alt and Shift, and is compatible with both USB and serial keyboards.

While there are numerous keylogger tools available for Windows, Linux has fewer well-supported options. Although LogKeys has not been actively maintained since 2019, it remains one of the more stable and functional keyloggers available for Linux as of today.

Installation of LogKeys in Linux

If you’ve previously installed Linux packages from a tarball (source), you should find installing the LogKeys package straightforward.

However, if you’ve never built a package from source before, you’ll need to install some required development tools first, such as C++ compilers and GCC libraries, before proceeding.

Installing Prerequisites

Before building LogKeys from source, ensure your system has the required development tools and libraries installed:

On Debian/Ubuntu:

sudo apt update
sudo apt install build-essential autotools-dev autoconf kbd

On Fedora/CentOS/RHEL:

sudo dnf install automake make gcc-c++ kbd

On openSUSE:

sudo zypper install automake gcc-c++ kbd

On Arch Linux:

sudo pacman -S base-devel kbd

Installing LogKeys from Source

First, download the latest LogKeys source package using the wget command, then extract the ZIP archive and navigate into the extracted directory:

wget https://github.com/kernc/logkeys/archive/master.zip
unzip master.zip  
cd logkeys-master/

or clone the repository using Git, as shown below:

git clone https://github.com/kernc/logkeys.git
cd logkeys

Next, run the following commands to build and install LogKeys:

./autogen.sh              # Generate build configuration scripts
cd build                  # Switch to the build directory
../configure              # Configure the build
make                      # Compile the source code
sudo make install         # Install binaries and man pages

If you encounter issues related to keyboard layout or character encoding, regenerate your locale settings:

sudo locale-gen

Usage of LogKeys in Linux

Once LogKeys is installed, you can begin using it to monitor and log keyboard input using the following commands.

Start Keylogging

This command starts the keylogging process, which must be run with superuser (root) privileges because it needs access to low-level input devices. Once started, LogKeys begins recording all keystrokes and saves them to the default log file: /var/log/logkeys.log.

Note: You won’t see any output in the terminal; logging runs silently in the background.

sudo logkeys --start
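
To confirm that the logger is actually running, you can look it up in the process list (a quick check with standard tools, not part of LogKeys itself):

pgrep -a logkeys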

Stop Keylogging

This command terminates the keylogging process that was started earlier, which is important to stop LogKeys when you’re done, both to conserve system resources and to ensure the log file is safely closed.

sudo logkeys --kill

Get Help / View Available Options

The following command displays all available command-line options and flags you can use with LogKeys.

logkeys --help

Useful options include the following (combined in the example after this list):

  • --start : Start the logger
  • --kill : Stop the logger
  • --output <file> : Specify a custom log output file
  • --no-func-keys : Don’t log function keys (F1-F12)
  • --no-control-keys : Skip control characters (e.g., Ctrl+C, Backspace)
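
These options can be combined; for example, the following command (with an arbitrary log path) logs to a custom file while skipping function keys:

sudo logkeys --start --output /root/mykeys.log --no-func-keys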

View the Logged Keystrokes

The cat command displays the contents of the default log file where LogKeys saves keystrokes.

sudo cat /var/log/logkeys.log

You can also open it with a text editor like nano or less:

sudo nano /var/log/logkeys.log
or
sudo less /var/log/logkeys.log

Uninstall LogKeys in Linux

To remove LogKeys from your system and clean up the installed binaries, manuals, and scripts, run the following commands from within the LogKeys source directory:

cd build
sudo make uninstall

This will remove all files that were installed with make install, including the logkeys binary and man pages.

Conclusion

LogKeys is a powerful keylogger for Linux that enables users to monitor keystrokes in a variety of environments. Its compatibility with modern systems and ease of installation make it a valuable tool for security auditing, parental control testing, and educational research.

However, it’s crucial to emphasize that keylogging should only be used in ethical, lawful contexts—such as with explicit user consent or for personal system monitoring. Misuse can lead to serious legal consequences. Use responsibly and stay informed.


Sunday, May 11, 2025

How to Use Systemd to Run Bash Scripts at Boot in Linux

https://www.tecmint.com/create-new-service-units-in-systemd

A few days ago, I came across a CentOS 8 32-bit distro and decided to test it on an old 32-bit machine. After booting up, I realized there was a bug causing the network connection to drop. Every time I rebooted, I had to manually bring the network back up, which led me to wonder: How can I automate this process with a script that runs every time the system boots?

The solution is straightforward, and today I’ll show you how to do it using systemd service units. But before we jump into that, let’s first take a quick look at what a service unit is and how it works.

In this article, we’ll cover the basics of systemd service units, their relationship with “targets,” and how to set up a service unit to run a script at boot. I’ll keep things simple, focusing on the practical steps, so you’ll have everything you need to know to tackle this on your own.

What is a Systemd Service Unit?

In simple terms, a service unit in systemd is a configuration file that defines how a service should behave on your system. It could be something like a network service, a program, or even a script that needs to run when your computer boots or at a specific point during the boot process.

These service units are grouped into targets, which can be seen as milestones or stages in the boot process. For example, when your system reaches the multi-user target (runlevel 3), certain services will be started. You can think of these targets as “collections” of services that work together at various stages of the boot sequence.

If you’d like to see the services running in a particular target (for example, graphical.target), you can use the systemctl command:

systemctl --type=service

This will show you all active services in your current target. Some services run continuously, while others start up once and then exit.

List All Active Services

Checking the Status of a Service

If you’re curious about a particular service, you can use systemctl status to see whether it’s active or inactive:

systemctl status firewalld.service

This command checks the status of the firewalld service. You’ll notice that it’s active, meaning it’s running, and enabled, which means it will start automatically on the next boot.

You can also stop a service temporarily (until the next boot) using:

systemctl stop firewalld.service
systemctl status firewalld.service

This will stop the firewalld service for this session, but won’t prevent it from starting up next time.

Check Status of Service

Enabling and Disabling Services

To ensure a service starts automatically on boot, you need to enable it, which creates a symbolic link in the appropriate target’s wants directory:

systemctl enable firewalld.service

To disable it, you would simply run:

systemctl disable firewalld.service
Enabling and Disabling Systemd Services

Creating a Custom Service Unit

To set up a service that runs a script at boot, we’ll create a new service unit under the /etc/systemd/system directory. Here you’ll see existing service unit files and folders for different targets.

cd /etc/systemd/system
ls -l
Existing Service Unit Files

Let’s create our own service unit called connection.service using Vim or your preferred text editor:

vim connection.service
Or
vi connection.service

Add the following content to the file.

[Unit]
Description=Bring up network connection
After=network.target

[Service]
ExecStart=/root/scripts/conup.sh

[Install]
WantedBy=multi-user.target

Explanation:

  • [Unit]: The unit’s metadata. We’ve given it a description and told it to run after network.target, meaning it will only execute after the network has been initialized.
  • [Service]: This section defines the command to execute when the service starts. In this case, it runs the script conup.sh.
  • [Install]: This section tells systemd that the service should be loaded at the multi-user target, which is the standard runlevel for most systems.
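
Before enabling the unit, it is worth asking systemd to reload its unit files and sanity-check the new one (both commands are standard systemd tooling):

systemctl daemon-reload
systemd-analyze verify connection.service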

Now, enable the service so it will start automatically on the next boot:

systemctl enable connection.service

You can confirm that it has been enabled by checking the multi-user.target.wants directory:

ls -l multi-user.target.wants/

The symbolic link to connection.service should now be present. However, we still need to create the script that this service will run.

Verify Service Unit File

Creating the Script

Let’s now create the conup.sh script that will bring the network connection up.

cd /root
mkdir scripts
cd scripts
vi conup.sh

Add the following content to bring the network up; the script uses the nmcli command to bring the network connection on the enp0s3 interface up (adjust the interface name to match your system).

#!/bin/bash
nmcli connection up enp0s3

Don’t forget to make the script executable.

chmod +x conup.sh

At this point, the service is ready to go.

SELinux Contexts (For RHEL/CentOS Users)

If you’re using a RHEL-based system (like CentOS or Rocky Linux), don’t forget about SELinux, which can block scripts from running if the correct security context isn’t applied.

To temporarily set the context so the system treats the script as a regular executable, use:

chcon -t bin_t /root/scripts/conup.sh

However, this change won’t survive a reboot or file relabeling.

To make it permanent, use:

semanage fcontext -a -t bin_t "/root/scripts/conup.sh"
restorecon -v /root/scripts/conup.sh

This step ensures the script continues to run properly even after reboots or SELinux policy reloads.

Testing the Service

To test it without rebooting, you can start it manually.

systemctl start connection.service

If everything is set up correctly, the service will execute the script, and your network connection should be restored.

Alternatively, if you wrote a simpler script like touch /tmp/testbootfile, you can check if the file was created in /tmp to confirm the service is running as expected.
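
For reference, such a throwaway test script (hypothetical, laid out the same way as conup.sh) might be:

#!/bin/bash
# Create a marker file so we can confirm the unit actually ran
touch /tmp/testbootfile

If the marker file appears after systemctl start connection.service (or after a reboot), the unit fired as expected.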

Conclusion

By now, you should have a good understanding of systemd service units and how to create and manage them on your system. You’ve also automated a common task – bringing up the network connection on boot using a simple service unit.

Hopefully, this guide helps you get more comfortable with managing services, targets, and scripts in systemd, making your system more automated and efficient.


15 Useful ‘dpkg’ Commands for Debian and Ubuntu Users [With Examples]

https://www.tecmint.com/dpkg-command-examples

Debian GNU/Linux is the backbone of several popular Linux distributions like Knoppix, Kali, Ubuntu, Mint, and more. One of its strongest features is its robust package management system, which makes installing, removing, and managing software a breeze.

Debian and its derivatives use a variety of package managers such as dpkg, apt, apt-get, aptitude, synaptic, tasksel, dselect, dpkg-deb, and dpkg-split, each serving a different purpose.

Let’s quickly go over the most common ones before diving deeper into the dpkg command.

Common Debian-Based Package Managers

  • apt – short for Advanced Package Tool, used in Debian-based systems to install, remove, and update software packages.
  • aptitude – a text-based front-end to apt, great for those who prefer a terminal-based interface with menus.
  • synaptic – a graphical package manager that makes it easy to install, upgrade, and uninstall packages, even for novices.
  • tasksel – allows users to install all packages related to a specific task (like a desktop environment or a LAMP server).
  • dselect – a menu-driven package management tool originally used during the first install, now replaced by aptitude.
  • dpkg-deb – used for working directly with .deb archives: creating, extracting, and inspecting them.
  • dpkg-split – useful for splitting large files into smaller chunks and merging them back, so they can be stored on smaller media such as floppy disks.

dpkg is the main package management program in Debian and Debian-based systems, used to install, build, remove, and manage packages, while tools like apt and aptitude serve as higher-level front-ends to it.

Some of the most commonly used dpkg commands, along with their usages, are listed here:

1. Install a Package on Ubuntu

To install a package using dpkg, first download the .deb package file from the official package repositories for Debian or Ubuntu-based distributions.

Once downloaded, you can install it using the -i option followed by the name of the .deb package file.

sudo dpkg -i 2048-qt_0.1.6-2+b2_amd64.deb
Install the Deb Package

2. List Installed Packages on Ubuntu

To view and list all the installed packages, use the “-l” option along with the command.

dpkg -l
List Installed Debian Packages

To check whether a specific package is installed, use the “-l” option along with the package name. For example, check whether the apache2 package is installed:

dpkg -l apache2
Check Package Installation
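
If you only remember part of a package name, you can also filter the full list with grep or pass a glob pattern to dpkg -l (standard usage, shown here as a quick aside):

dpkg -l | grep -i apache
dpkg -l 'apache*'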

3. Remove a Package on Ubuntu

To remove the “.deb” package, we must specify the package name “2048-qt” with the “-r” option, which is used to remove/uninstall a package.

sudo dpkg -r 2048-qt
Remove Deb Package

You can also use the ‘-P‘ (purge) option in place of ‘-r‘, which removes the package along with its configuration files; the ‘-r‘ option only removes the package and not the configuration files.

sudo dpkg -P flashplugin-nonfree

4. View Contents of a .deb Package

To view the content of a particular .deb package, use the “-c” option, which will display the contents of a deb package in long-list format.

dpkg -c 2048-qt_0.1.6-2+b2_amd64.deb
View Contents of Deb Package

5. Check Status of Deb Package Installation

Using the “-s” option with the package name will display whether a deb package is installed or not.

dpkg -s 2048-qt
Check Deb Package Installation

6. List Files Installed by Deb Package

To list the location of all the files installed by a particular package, use the -L option as shown.

dpkg -L 2048-qt
List Files Installed by Deb Package

7. Install Multiple Debian Packages from a Directory

To recursively install all .deb files found in specified directories and all of their subdirectories, use the '-R' and '--install' options together.

For example, to install all '.deb' packages from the directory named ‘debpackages‘.

sudo dpkg -R --install debpackages
Install All Deb Packages

8. Extract Contents of a Deb Package

To extract the contents of a .deb package without configuring the package, use the --unpack option.

sudo dpkg --unpack 2048-qt_0.1.6-2+b2_amd64.deb
Extract Contents of Deb Package

9. Reconfigure an Unpacked Deb Package

To configure a package that has been unpacked but not yet configured, use the “--configure” option as shown.

sudo dpkg --configure flashplugin-nonfree

10. Updating Package Information in System Database

The “--update-avail” option replaces the old information with the available information from a Packages file in the package management system’s database.

sudo dpkg --update-avail Packages-file

11. Delete Information of Package

The action “--clear-avail” will erase the current information about what packages are available.

sudo dpkg --clear-avail

12. Forget Uninstalled and Unavailable Packages

The dpkg command with the option “--forget-old-unavail” will automatically forget uninstalled and unavailable packages.

sudo dpkg --forget-old-unavail

13. Display dpkg Licence

The “--licence” option (also spelled “--license”) displays the dpkg licence information.

dpkg --licence

14. Display dpkg Version

The “--version” argument will display the dpkg version information.

dpkg --version

15. View dpkg Help

The “--help” option will display a list of available options of the dpkg command.

dpkg --help

That’s all for now. I’ll soon be here again with another interesting article. If I’ve missed any commands in the list, do let me know via comments.

Till then, stay tuned and keep connected to Tecmint. Like and share this article to help us spread the word, and don’t forget to mention your valuable thoughts in a comment.


Wednesday, May 7, 2025

5 Best Tools to Monitor and Debug Disk I/O Performance in Linux

https://www.tecmint.com/monitor-linux-disk-io-performance

Brief: In this guide, we will discuss the best tools for monitoring and debugging disk I/O activity (performance) on Linux servers.

A key performance metric to monitor on a Linux server is disk I/O (input/output) activity, which can significantly impact several aspects of a Linux server, particularly the speed of saving files or data to disk and retrieving them (especially on database servers). This has a ripple effect on the performance of applications and services.

1. iostat – Shows Device Input and Output Statistics

iostat is one of the many terminal-based system monitoring utilities in the sysstat package, a widely used collection designed for reporting CPU statistics and I/O statistics for block devices and partitions.

To use iostat on your Linux server, you need to install the sysstat package on your Linux system by running the applicable command for your Linux distribution.

sudo apt install sysstat          [On Debian, Ubuntu and Mint]
sudo yum install sysstat          [On RHEL/CentOS/Fedora and Rocky Linux/AlmaLinux]
sudo emerge -a app-admin/sysstat  [On Gentoo Linux]
sudo apk add sysstat              [On Alpine Linux]
sudo pacman -S sysstat            [On Arch Linux]
sudo zypper install sysstat       [On OpenSUSE]    

To show a simple device utilization report, run iostat with the -d command line option. Usually, the first report provides statistics about the time since the system startup (boot time), and each subsequent report is concerned with the time since the previous report.

Use the -x flag for an extended statistics report and the -t flag to include a timestamp with each report. If you wish to eliminate devices without any activity from the report output, add the -z flag:

iostat -d -t 
OR
iostat -d -x -t 
iostat – Monitor Device Statistics in Linux

To display statistics in kilobytes per second as opposed to blocks per second, add the -k flag, or use the -m flag to display stats in megabytes per second.

iostat -d -k
OR
iostat -d -m

iostat can also display continuous device reports at x second intervals. For example, the following command displays reports at two-second intervals:

iostat -d 2

Related to the previous command, you can display n number of reports at x second intervals. The following command will display 10 reports at two-second intervals.

iostat -d 2 10

Alternatively, you can save the report to a file for later analysis.

iostat -d 2 10 > disk_io_report.txt &

For more information about the report columns, read the iostat man page:

man iostat

2. sar – Show Linux System Activity

sar is another useful utility that ships with the sysstat package, intended to collect, report, or save system activity information. Before you can start using it, you need to set it up as follows.

First, enable data collection in the /etc/default/sysstat file.

sudo vi /etc/default/sysstat

Look for the following line and change the value to “true” as shown.

ENABLED="true"
Enable Sar in Linux

Next, you need to reduce the data collection interval defined in the sysstat cron jobs. By default, it is set to every 10 minutes; you can lower it to every 2 minutes.

You can do this in the /etc/cron.d/sysstat file:

sudo vi /etc/cron.d/sysstat
Configure Sar Cron in Linux
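
On Debian-based systems, the relevant entry looks roughly like the excerpt below (the exact command can vary by distribution, so verify against your own file); changing the schedule field to */2 makes the collector run every 2 minutes:

# /etc/cron.d/sysstat (Debian-style excerpt)
*/2 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1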

Save the file and close it.

Finally, enable and start the sysstat service using the following systemctl command (the --now flag both enables the service and starts it immediately):

sudo systemctl enable --now sysstat.service

Next, wait for 2 minutes to start viewing sar reports. Use the sar command and the -b command line option to report I/O and transfer rate statistics and -d to report activity for each block device as shown.

sar -d -b
Sar – Monitor Linux System Activity

3. iotop – Monitor Linux Disk I/O Usage

Similar to the top monitoring tool in terms of design, iotop is a simple utility that enables you to monitor disk I/O activity and usage on a per-process basis.

You can get it installed on your Linux server as follows (remember to run the appropriate command for your Linux distribution):

sudo apt install iotop             [On Debian, Ubuntu and Mint]
sudo yum install iotop             [On RHEL/CentOS/Fedora and Rocky Linux/AlmaLinux]
sudo emerge -a sys-process/iotop   [On Gentoo Linux]
sudo apk add iotop                 [On Alpine Linux]
sudo pacman -S iotop               [On Arch Linux]
sudo zypper install iotop          [On OpenSUSE]    

To monitor per-process I/O activity, you can run iotop without any arguments as follows. By default, the delay between iterations is 1 second. You can change this using the -d flag.

iotop
OR
iotop -d 2
iotop – Monitor Linux Disk Usage

iotop will by default display all threads of a process. To change this behavior so that it only shows processes, use the -P command line option.

iotop -P

Also, using the -a option, you can instruct it to display accumulated I/O instead of bandwidth. In this mode, iotop shows the amount of I/O each process has performed since iotop was invoked.

iotop -P -a

4. dstat – Versatile Real-Time Resource Statistics

dstat is a powerful all-in-one replacement for older tools like vmstat, iostat, netstat, and others. It provides real-time stats for various system resources—including CPU, disk, memory, and network—in a clean, color-coded format.

To install dstat, use the relevant command for your Linux distro:

sudo apt install dstat             # On Debian, Ubuntu, and Mint
sudo yum install dstat             # On RHEL, CentOS, Fedora, Rocky Linux, AlmaLinux
sudo emerge -a sys-process/dstat   # On Gentoo Linux
sudo apk add dstat                 # On Alpine Linux
sudo pacman -S dstat               # On Arch Linux
sudo zypper install dstat          # On OpenSUSE

To run it with default settings (which includes CPU, disk, and network I/O):

dstat

If you want to focus only on disk activity, use:

dstat -d

You can also mix and match different options. For example, to monitor CPU, memory, and disk:

dstat -cdm

To log output to a CSV file for later analysis:

dstat -cdm --output system_stats.csv

dstat is super flexible and great for getting a quick, holistic view of your system in real time.

5. atop – Advanced System and Process Monitor

atop is like top on steroids: it gives you detailed, per-process resource usage, including disk I/O, memory, CPU, and network, making it great for in-depth analysis, especially when diagnosing performance issues over time.

Install it using your distro’s package manager:

sudo apt install atop             # On Debian, Ubuntu, and Mint
sudo yum install atop             # On RHEL, CentOS, Fedora, Rocky Linux, AlmaLinux
sudo emerge -a sys-process/atop   # On Gentoo Linux
sudo apk add atop                 # On Alpine Linux
sudo pacman -S atop               # On Arch Linux
sudo zypper install atop          # On OpenSUSE

To launch it:

atop

By default, it updates every 10 seconds. You can change the interval like this:

atop 2

One of its best features is that it records data to a log file automatically (usually in /var/log/atop/), which you can read back later:

atop -r /var/log/atop/atop_YYYYMMDD

It’s especially useful for tracing performance issues after they’ve already happened.

That’s all we had for you! We would like to know your thoughts about this guide or the above tools. Leave a comment via the feedback form below.

You can also inform us about tools that you think are missing in this list, but deserve to appear here.


How to Delete All Files in a Folder Except Certain Extensions

https://www.tecmint.com/delete-files-except-certain-file-extensions

Sometimes, you may find yourself in a situation where you need to delete all files in a directory or simply clean up a directory by removing all files except those with a specific extension (e.g., files ending with a particular type).

In this article, we will show you how to delete files in a directory, excluding certain file extensions or types, using the rm and find commands and the bash GLOBIGNORE variable.

Before we move any further, let us start by briefly having a look at one important concept in Linux – filename pattern matching, which will enable us to deal with our issue at hand.

In Linux, a shell pattern is a string that consists of the following special characters, known as wildcards or metacharacters:

  • * – matches zero or more characters
  • ? – matches any single character
  • [seq] – matches any character in seq
  • [!seq] – matches any character not in seq

There are three possible methods we shall explore here, and these include:

Delete Files Using Extended Pattern Matching Operators

The extended pattern matching operators are listed below. In this case, pattern-list refers to one or more patterns separated by the | character:

  • *(pattern-list) – matches zero or more occurrences of the specified patterns
  • ?(pattern-list) – matches zero or one occurrence of the specified patterns
  • +(pattern-list) – matches one or more occurrences of the specified patterns
  • @(pattern-list) – matches one of the specified patterns
  • !(pattern-list) – matches anything except one of the given patterns

To use them, enable the extglob shell option as follows:

shopt -s extglob

1. To delete all files in a directory except a specific file, type the following command:

rm -v !("filename")
Delete All Files Except One File in Linux

2. To delete all files with the exception of filename1 and filename2:

rm -v !("filename1"|"filename2") 
Delete All Files Except a Few Files in Linux

3. To remove all files except for .zip files, interactively:

rm -i !(*.zip)
Delete All Files Except Zip Files in Linux

4. To delete all files except for .zip and .odt files while displaying the actions being performed:

rm -v !(*.zip|*.odt)
Delete All Files Except Certain File Extensions
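
Before running a destructive rm, it can help to preview exactly what a pattern matches; with extglob still enabled, ls accepts the same patterns:

ls -d !(*.zip|*.odt)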

Once you have run all the required commands, turn off the extglob shell option like so:

shopt -u extglob

Delete Files Using Linux find Command

Under this method, we can use the find command either on its own with appropriate options or in conjunction with the xargs command via a pipeline, as in the forms below:

find /directory/ -type f -not -name 'PATTERN' -delete
find /directory/ -type f -not -name 'PATTERN' -print0 | xargs -0 -I {} rm {}
find /directory/ -type f -not -name 'PATTERN' -print0 | xargs -0 -I {} rm [options] {}

5. The following command will delete all files apart from .gz files in the current directory:

find . -type f -not -name '*.gz' -delete
Command find – Remove All Files Except .gz Files

6. Using a pipeline and xargs, you can modify the case above as follows:

find . -type f -not -name '*.gz' -print0 | xargs -0 -I {} rm -v {}
Remove Files Using find and xargs Commands

Let us look at one additional example; the command below will wipe out all files excluding .gz, .odt, and .jpg files in the current directory:

find . -type f -not \( -name '*.gz' -or -name '*.odt' -or -name '*.jpg' \) -delete
Remove All Files Except File Extensions

Delete Files Using Bash GLOBIGNORE Variable

This last approach, however, only works with bash. Here, the GLOBIGNORE variable stores a colon-separated pattern-list (filenames) to be ignored by pathname expansion.

To employ this method, move into the directory that you wish to clean up, then set the GLOBIGNORE variable as follows:

cd test
GLOBIGNORE=*.odt:*.iso:*.txt

In this instance, all files other than .odt, .iso, and .txt files will be removed from the current directory.

Now run the command to clean up the directory:

rm -v *

Afterwards, turn off GLOBIGNORE variable:

unset GLOBIGNORE
Delete Files Using Bash GLOBIGNORE Variable

Note: To understand the meaning of the flags employed in the commands above, refer to the man pages of each command we have used in the various illustrations.

Conclusion

These are a few simple and effective ways to delete files in Linux, keeping only those with specific extensions or filenames intact. If you know of any other useful command-line techniques for cleaning up directories, feel free to share them in the feedback section below.


How to Detect Bad Sectors or Bad Blocks on Linux Hard Drives

https://www.tecmint.com/check-linux-hard-disk-bad-sectors-bad-blocks

Let’s start by defining a bad sector (or bad block): it is a section on a disk drive or flash memory that can no longer be read from or written to, usually due to permanent physical damage on the disk surface or failing flash memory transistors.

As more bad sectors build up, they can seriously impact your storage device’s performance, reduce its capacity, or even lead to complete hardware failure.

It is also important to note that the presence of bad blocks should alert you to start thinking of getting a new disk drive or simply mark the bad blocks as unusable.

Therefore, in this article, we will go through the necessary steps that can enable you to determine the presence or absence of bad sectors on your Linux disk drive or flash memory using certain disk scanning utilities.

That said, below are the methods:

1. Check for Bad Sectors Using the badblocks Tool

The badblocks tool lets you scan a storage device, like a hard disk or external drive, for bad sectors. Devices are usually listed as files like /dev/sdc or /dev/sda.

Step 1: List All Disks and Partitions

Firstly, use the fdisk command with superuser privileges to display information about all your disk drives or flash memory plus their partitions:

sudo fdisk -l
List Linux Filesystem Partitions

This will help you identify the correct device name to scan.

Step 2: Scan for Bad Blocks

Then scan your Linux disk drive to check for bad sectors/blocks by typing:

sudo badblocks -v /dev/sda10 > badsectors.txt
Scan Hard Disk Bad Sectors in Linux

In the command above, badblocks scans the device /dev/sda10 (remember to specify your actual device), with the -v flag enabling it to display details of the operation. In addition, the results of the operation are stored in the file badsectors.txt by means of output redirection.

In case you discover any bad sectors on your disk drive, unmount the disk and instruct the operating system not to write to the reported sectors as follows.

Step 3: Mark Bad Sectors as Unusable

You will need to employ the e2fsck command (for ext2/ext3/ext4 file systems) or the fsck command with the badsectors.txt file and the device file, as in the commands below.

For ext2/ext3/ext4 File Systems:

sudo e2fsck -l badsectors.txt /dev/sda10

For Other File Systems:

sudo fsck -l badsectors.txt /dev/sda10

2. Scan Disk Health with Smartmontools (Recommended)

This method is more reliable and efficient for modern disks (ATA/SATA and SCSI/SAS hard drives and solid-state drives), which ship with a S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) system that helps detect, report, and possibly log their health status, so that you can figure out any impending hardware failures.

Step 1: Install smartmontools in Linux

You can install smartmontools by running the command below:

sudo apt install smartmontools  #For Debian-based
sudo dnf install smartmontools  #For RHEL-based

Step 2: Use smartctl to Run Health Checks

Once the installation is complete, use smartctl, which controls the S.M.A.R.T system integrated into a disk. You can look through its man page or help page as follows:

man smartctl
smartctl -h

Step 3: Run a Basic Health Test

Now execute the smartctl command with your specific device as an argument, as in the following command; the -H or --health flag is included to display the SMART overall health self-assessment test result.

sudo smartctl -H /dev/sda10
Check Linux Hard Disk Health

The result above indicates that your hard disk is healthy and may not experience hardware failures anytime soon.

Optional: View Full SMART Report

For an overview of disk information, use the -a or --all option to print out all SMART information concerning a disk, or the -x or --xall option, which displays all SMART and non-SMART information about a disk.

sudo smartctl -a /dev/sda10

Or even more comprehensive:

sudo smartctl -x /dev/sda10
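
Beyond reading attributes, smartctl can also trigger the drive’s built-in self-tests and report their results (run against the whole disk, e.g. /dev/sda):

sudo smartctl -t short /dev/sda
sudo smartctl -l selftest /dev/sda
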
Wrapping Up

In this guide, we explored how to identify and manage bad sectors on Linux drives using badblocks and smartmontools. Keeping tabs on your storage health is crucial—and these tools make it pretty straightforward.

If you have any questions, feedback, or suggestions, feel free to reach out in the comment section below. And as always, stay tuned to Tecmint for more Linux tips and tutorials!


Tuesday, May 6, 2025

fuser – Find and Kill Processes by File, Directory, or Port

https://www.tecmint.com/fuser-find-monitor-kill-linux-processes

One of the most important tasks in Linux systems administration is process management, which involves several operations such as monitoring, signaling processes, and setting process priorities on the system.

There are numerous Linux tools/utilities designed for monitoring and handling processes, such as top, ps, pgrep, kill, killall, nice, and many others.

In this article, we shall uncover how to find processes using a powerful and resourceful Linux utility called fuser.

What is a fuser in Linux?

fuser is a simple yet powerful command-line utility intended to locate processes based on the files, directories, or sockets a particular process is accessing. In short, it helps a system user identify which processes are using specific files or sockets.

The basic syntax for using fuser is:

fuser [options] [file|socket]
fuser [options] -SIGNAL [file|socket]
fuser -l 

Find Which Process is Accessing a Directory

Running fuser without any options displays the PIDs of processes currently accessing your current working directory.

fuser .
OR
fuser /home/tecmint
Find Running Processes of a Directory

Find Running Processes of a Directory (Verbose Output)

For a more detailed and clear output, enable the -v or --verbose option as follows. In the output, fuser prints the name of the current directory, then columns showing the process owner (USER), process ID (PID), the access type (ACCESS), and command (COMMAND), as in the image below.

fuser -v .
List of Running Processes of the Directory

Under the ACCESS column, you will see access types signified by the following letters:

  • c – current directory.
  • e – an executable file being run.
  • f – open file (f is omitted in the default display mode).
  • F – open file for writing (F is likewise omitted in the default display mode).
  • r – root directory.
  • m – mmap’ed file or shared library.

Find Which Process is Accessing a File or Filesystem

To determine which processes are accessing your ~/.bashrc file, run:

fuser -v -m .bashrc

The -m NAME or --mount NAME option shows all processes accessing the given file or directory. If you pass a directory as NAME, it automatically appends a / to reference the file system mounted on that directory.

Check Which Process is Using Your ~/.bashrc File

Find Which Process is Using a Specific Port

Another practical use case is identifying which process is using a specific network port, which is especially useful for debugging service conflicts.

sudo fuser 80/tcp
OR
sudo fuser -v 80/tcp

This shows the PID of the process using TCP port 80. Add -v for detailed output.

Find Which Process is Using a Specific Port
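
Combining this with -k gives a quick way to free up a busy port; use it with care, since the default signal is SIGKILL (port 8080 here is just an arbitrary example):

sudo fuser -k 8080/tcp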

How to Kill and Signal Processes Using fuser

To kill all processes accessing a file or socket, use the -k or --kill option.

sudo fuser -k .

To interactively kill a process, where you are asked to confirm your intention to kill the processes accessing a file or socket, make use of -i or --interactive option.

sudo fuser -ki .
Interactively Kill Process in Linux

The two previous commands will kill all processes accessing your current directory; the default signal sent to the processes is SIGKILL, except when -SIGNAL is used.

List All Available Signals in Linux

You can list all the signals using the -l or --list-signals option, as shown below.

sudo fuser --list-signals 
List All Kill Process Signals

Send a Specific Signal to Processes

Therefore, you can send a specific signal to processes as in the next command, where SIGNAL is any of the signals listed in the output above and NAME is the file or socket being accessed:

sudo fuser -k -SIGNAL NAME

For example, to send the HUP (hang up) signal to processes accessing /boot.

sudo fuser -k -HUP /boot

For advanced usage and more details, check the fuser manual page.

man fuser

Conclusion

The fuser command might not be the first tool that comes to mind when managing processes, but it’s a hidden gem for any Linux user or system admin. It’s perfect for finding out which processes are using specific files, directories, or ports – and gives you the power to deal with them directly.

If you’re working with files, directories, or network services on a Linux system, learning how to use fuser is 100% worth your time.

 

Monday, May 5, 2025

How to Use diff3 Command for File Merging in Linux

https://www.tecmint.com/diff3-command-in-linux

The diff3 command in Linux is a helpful tool that compares three files and shows their differences, which is mainly useful for programmers and system administrators who work with multiple versions of the same file and need to merge them, or identify changes between different versions.

In this article, we’ll go through the basics of using the diff3 command, its common options, and a few examples to understand how it works in Linux.

What is the diff3 Command?

diff3 is a tool that compares three files line by line, identifies the differences, and displays them in a format that’s easy to understand.

It can be used to:

  • Find differences between the three files.
  • Automatically merge changes from different files.
  • Handle conflicts that occur when merging file versions.

The diff3 command is similar to the diff command or sdiff command but works with three files instead of two, which is particularly useful when you have multiple contributors working on the same file, and you need to merge their changes into a single version.

Basic Syntax of diff3 Command

The basic syntax of the diff3 command is:

diff3 [options] file1 file2 file3

Explanation of the above command.

  • file1: The first version of the file.
  • file2: The second version of the file.
  • file3: The third version of the file.

Commonly Used Options

Following are some commonly used options of the diff3 command:

  • -e: Create an ed script that merges the changes between file2 and file3 into file1.
  • -m: Merge the files directly, marking any conflicts.
  • -A: Include all changes, bracketing conflicts (--show-all).
  • -E: Like -e, but bracket conflicting changes (--show-overlap).
  • -3: Like -e, but incorporate only non-overlapping changes (--easy-only).

Finding Differences Between Files in Linux

Let’s say you have three files: file1.txt, file2.txt, and file3.txt. Each file contains a slightly different version of the same content, and you want to compare them to see where the differences lie.

Example Files
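
Since the screenshot of the sample files is not reproduced here, they can be recreated from the output discussed below (reconstructed content, for illustration only):

printf 'This is line 1.\nThis is line 2.\n' > file1.txt
printf 'This is line 1.\nThis is modified line 2.\n' > file2.txt
printf 'This is line 1.\nThis is line 2.\nThis is an added line.\n' > file3.txt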

To compare these three files, you can use the following command:

diff3 file1.txt file2.txt file3.txt
Compare Files for Differences

Here’s what this output means:

  • 1:2c: This shows that in file1.txt, the change occurs at line 2, and the content of line 2 is This is line 2..
  • 2:2c: This shows that in file2.txt, the change also happens at line 2, but the content of that line has been modified to This is modified line 2..
  • 3:2,3c: This shows that in file3.txt, there are changes in lines 2 and 3. Line 2 remains the same (This is line 2.), but line 3 is an additional line that states: This is an added line..

Merging Files with diff3 in Linux

If you want to merge the three files and create a new file with all the changes, you can use the -m option:

diff3 -m file1.txt file2.txt file3.txt

This will output the merged content with conflict markers showing where there are conflicting changes.

Merging Files in Linux

Here’s what this output means:

  • <<<<<<< file1.txt: This marks the beginning of a conflict and shows the version from file1.txt.
  • ||||||| file2.txt: This line shows the content from file2.txt (middle file in the comparison).
  • =======: This separates the conflicting lines.
  • >>>>>>> file3.txt: This marks the version from file3.txt and the end of the conflict block.

You can manually edit this to keep the changes you want.

Applying Changes from Multiple Files to One with diff3

You can also use diff3 to create an ed script that applies changes from file2.txt and file3.txt to file1.txt. This can be done using the -e option:

diff3 -e file1.txt file2.txt file3.txt > scriptfile

This command creates a file named scriptfile that contains the generated ed script, which you can then apply to file1.txt using the ed command.

ed file1.txt < scriptfile

This will modify file1.txt according to the changes specified in the scriptfile. You can then run the following cat command to verify that the changes have been applied:

cat file1.txt
Resolving Conflicts with diff3 Command

This is helpful if you want to automate the merging of files using scripts.
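
One caveat worth noting: ed only saves its changes if the script ends with the w (write) and q (quit) commands. GNU diff3 appends them for you when you pass the -i flag alongside -e:

diff3 -e -i file1.txt file2.txt file3.txt > scriptfile
ed file1.txt < scriptfile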

Resolving Conflicts in diff3 Merges

When using diff3 for merging, conflicts may arise when there are differences between all three files at the same location. These conflicts are marked in the output, and you’ll need to manually resolve them.

  • To resolve conflicts, open the file that contains the conflict markers.
  • Edit the file to remove the unwanted lines and keep the changes you want.
  • After resolving the conflict, save the file.

Conclusion

The diff3 command is a powerful tool for comparing and merging three files in Linux, which is particularly useful for handling multiple versions of the same file and resolving conflicts when merging changes.

By understanding its basic usage and options, you can effectively manage file versions and collaborate with others on projects.

 

Sunday, May 4, 2025

How to Set the Default Text Editor in csh and tcsh

https://idolinux.com/how-to-set-the-default-text-editor-in-csh-and-tcsh

Choosing your default text editor is an important part of customizing your Unix or BSD system environment. Whether you are editing configuration files or writing scripts, having your preferred editor available by default can make your workflow much more efficient. In this guide, we will explore how to set the default text editor in the C shell (csh) and the TENEX C shell (tcsh), both temporarily and permanently.

The default shell in systems like FreeBSD and PC-BSD is often the C shell (csh), making this information especially useful for *BSD users. Although Linux users typically work with bash, zsh, or other shells, knowing how to configure csh or tcsh remains valuable for certain environments or legacy systems.

Setting the Default Text Editor Temporarily

If you want to set the default text editor for just the current session, you can do so by setting the VISUAL and EDITOR environment variables. Here is how you can temporarily set nano as the default editor:

$ setenv VISUAL /usr/local/bin/nano
$ setenv EDITOR /usr/local/bin/nano

The VISUAL and EDITOR environment variables are both used by programs to determine which editor to launch. While VISUAL is often prioritized for visual editors (like nano or vim), EDITOR is used more generally. Setting both variables to the same value helps prevent inconsistencies across different programs and scripts that rely on these variables.

Remember that setting the editor this way is temporary. Once you close the shell or log out, these settings will be lost.

Making the Default Editor Permanent

To make your preferred text editor setting permanent across shell sessions, you need to add the setenv commands to your shell configuration file. In csh and tcsh, this is typically the ~/.cshrc file, or the ~/.tcshrc file if you are using tcsh.

You can add the following lines to ~/.cshrc:

$ echo "setenv VISUAL /usr/local/bin/nano" >> ~/.cshrc
$ echo "setenv EDITOR /usr/local/bin/nano" >> ~/.cshrc

Or, if you are using tcsh and the ~/.tcshrc file exists, you should add them there instead:

$ echo "setenv VISUAL /usr/local/bin/nano" >> ~/.tcshrc
$ echo "setenv EDITOR /usr/local/bin/nano" >> ~/.tcshrc

After editing the configuration file, either log out and log back in, or reload the configuration by sourcing the file manually:

$ source ~/.cshrc

or

$ source ~/.tcshrc

This will apply your new settings immediately without the need to restart the terminal.
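
You can confirm that the variables are set for the current session by echoing them; you should see the paths you configured:

$ echo $VISUAL
/usr/local/bin/nano
$ echo $EDITOR
/usr/local/bin/nano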

Why Setting Both VISUAL and EDITOR Matters

You might wonder why it’s necessary to set both VISUAL and EDITOR. Some programs respect only one of these environment variables, and different programs may prioritize them differently. For example, crontab -e may use EDITOR, while a graphical program that launches a terminal editor may check VISUAL first. Setting both ensures consistent behavior across the system.

Moreover, if you prefer using more advanced editors like vim or simpler ones like nano, setting both variables avoids confusion, especially when dealing with scripts, cron jobs, version control systems, or remote server administration tasks.

Conclusion

Customizing your environment by setting the default text editor in csh and tcsh is a straightforward but important step for anyone using BSD or similar Unix systems. By setting both VISUAL and EDITOR environment variables—temporarily for a session or permanently for all future sessions—you ensure a smoother and more predictable editing experience.

Remember to always set both variables to the same path to avoid inconsistencies and unexpected behavior. Whether you are a seasoned system administrator or a beginner exploring *BSD systems, mastering these small configuration tweaks can significantly enhance your productivity and comfort.

 


Monday, April 28, 2025

How to Verify Debian and Ubuntu Packages Using MD5 Checksums

https://www.tecmint.com/check-verify-md5sum-packages-files-in-linux

Have you ever wondered why a given binary or package installed on your system does not work as expected, meaning it does not function correctly, or perhaps cannot even start at all?

While downloading packages, you may face challenges such as unsteady network connections or unexpected power blackouts. This can result in the installation of a corrupted package.

Since uncorrupted packages are an important factor in a reliable system, it is a vital step to verify the files on the file system against the information stored in the package.

In this article, we will explain how to verify the MD5 checksums of installed packages on Debian-based distributions such as Ubuntu and Mint.

How to Verify Installed Packages Against MD5 Checksums

On Debian/Ubuntu systems, you can use the debsums tool to check the MD5 sums of installed packages. If you want to know more about the debsums package before installing it, you can use the apt-cache command as follows:

apt-cache search debsums

Next, install it using the apt command.

sudo apt install debsums
Install debsums in Ubuntu

Now it’s time to learn how to use the debsums tool to verify the MD5 sum of installed packages.

Note: I have used sudo with all the commands below, because certain files may not have read permissions for regular users.

Understanding the Output of debsums

The output from the debsums command shows you the file location on the left and the check results on the right.

There are three possible results you can get:

  • OK – indicates that a file’s MD5 sum is good.
  • FAILED – shows that a file’s MD5 sum does not match.
  • REPLACED – means that the specific file has been replaced by a file from another package.

When you run it without any options, debsums checks every file on your system against the stock MD5 sum files.

sudo debsums
Verify MD5 Checksums of Installed Packages

Checking MD5 Sums of All Files for Changes

To enable checking every file and configuration file for changes, include the -a or --all option.

sudo debsums --all
Check Every File and Configuration for Changes

Checking MD5 Sums of Only Configuration Files

It is also possible to check only the configuration files, excluding all other package files, by using the -e or --config option.

sudo debsums --config
Check MD5 Sums of Configuration Files

Displaying Only Changed Files

To display only the changed files in the output of debsums, use the -c or --changed option.

sudo debsums --changed
Checking for Modified Files

Listing Missing MD5 Sums of Files

To display files that do not have MD5 sum information, use the -l or --list-missing option. On my system, this command does not show any files.

sudo debsums --list-missing

Verify the MD5 Sum of a Single Package

You can also verify the MD5 sum of a single package by specifying its name.

sudo debsums curl
Verify MD5 Checksums of Single Package

Ignoring File Permission Errors in Debsums

Assuming that you are running debsums as a regular user without sudo, you can treat permission errors as warnings by using the --ignore-permissions option:

debsums --ignore-permissions
Using Debsums Without Sudo Privileges

How to Generate MD5 Sums from .Deb Files

The -g option tells debsums to generate MD5 sums from the .deb contents.

Here are the additional options you can use:

  • missing – instructs debsums to generate MD5 sums from the .deb for packages that don’t provide one.
  • all – directs debsums to ignore the on-disk sums and use the one present in the .deb file, or generate one from it if none exists.
  • keep – tells debsums to write the extracted/generated sums to /var/lib/dpkg/info/package.md5sums file.
  • nocheck – means the extracted/generated sums are not checked against the installed package.

When you look at the contents of the /var/lib/dpkg/info/ directory, you will see MD5 sums for various files that packages include, as shown below:

cd /var/lib/dpkg/info
ls *.md5sums
Listing MD5 Sum Files from Installed Packages
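
Since the paths inside a .md5sums file are relative to the root directory, you can also spot-check a single package by hand with md5sum (coreutils is just an arbitrary example here; --quiet prints only failures):

cd /
sudo md5sum --quiet -c /var/lib/dpkg/info/coreutils.md5sums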

You can generate an MD5 sum for the apache2 package by running the following command:

sudo debsums --generate=missing apache2

Since the apache2 package on my system already has MD5 sums, this will show the same output as running:

sudo debsums apache2
Generating MD5 Sums for a Specific Package

For more interesting options and usage information, look through the debsums man page:

man debsums

Conclusion

In this article, we shared how to verify installed Debian/Ubuntu packages against MD5 checksums. This can be useful to avoid installing and executing corrupted binaries or package files on your system by checking the files on the file system against the information stored in the package.

For any questions or feedback, feel free to use the comment form below. You can also offer one or two suggestions to make this post better.


Sunday, April 20, 2025

WattWise: Monitor Your Computer’s Power Usage in Real-Time

https://ostechnix.com/wattwise-monitor-computer-power-usage

Track Your PC's Power Consumption in Real-time with WattWise!

Have you ever wondered just how much power your computer is using? With energy costs going up, it’s good to know. That’s why Naveen Kulandaivelu, a robotics and machine learning engineer, created WattWise, a real-time power-monitoring CLI tool that runs in your computer’s terminal and helps you track power usage.

What is WattWise?

WattWise is a lightweight, open-source, command-line tool to monitor the power usage of your system in real-time.

WattWise Dashboard

WattWise leverages smart plugs (primarily TP-Link Kasa) to gather real-time power data and presents it in a user-friendly terminal-based dashboard.

Initially, the developer created it to track the power usage of his high-performance workstation. The future goal of this project is to automatically throttle CPU and GPU performance based on electricity pricing (Time-of-Use) and system load, aiming to reduce energy costs during peak hours.

The power monitoring functionality is currently available, while the automatic power management features are under development.

Features

Here are some cool things WattWise can do right now:

1. Real-time power monitoring

It shows you the current power your system is using in watts (that's the amount of power) and amperes (that's the electrical current).

2. Multiple connection options

You can connect WattWise directly to your TP-Link Kasa smart plugs or even through Home Assistant if you use that smart home platform. This gives you flexibility in how you get the power data.

3. Colour-coded display

To make it super easy to understand, the power usage is shown with colours.

  • If it's green, you're using less than 300 watts;
  • yellow means you're between 300 and 1200 watts;
  • and red pops up if you're going over 1200 watts.

It's a quick way to see if your system is working hard!

4. Historical data

WattWise can even show you charts of your power usage over time right in the terminal. It uses simple block characters so it works on pretty much any terminal. This helps you see trends and how your power consumption changes.

5. Simple command-line interface

Don't worry if you're not a super techy person! The commands are straightforward.

For example, just typing wattwise can give you a quick view, and wattwise --watch keeps monitoring in real-time with those cool charts.

6. Raw output

If you're into scripting and want to use the power data in other tools, WattWise can even output just the raw wattage number using the --raw flag.
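
If the --raw output is indeed a bare number of watts (as described above), a small shell wrapper could act on it; here is a minimal sketch with an arbitrary threshold:

#!/bin/bash
# Warn when the measured draw crosses a chosen threshold (in watts)
watts=$(wattwise --raw)
if [ "${watts%.*}" -ge 1200 ]; then
    echo "High power draw: ${watts} W"
fi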

7. Configurable refresh

You can even tell WattWise how often you want it to check the power usage, say every 5 seconds with wattwise --watch --interval 5.

Why Was WattWise Created?

Naveen built a powerful workstation for tasks like AI work. But using a high-performance system means higher electricity bills.

He had TP-Link Kasa smart plugs in his home. These plugs can measure electricity use. The Kasa app and Home Assistant could show the data, but switching between apps was inconvenient.

He wanted something that worked inside the terminal. That’s how WattWise was born.

How to get WattWise?

WattWise is open-source and free to use! You can grab it from GitHub. There are a couple of ways to install it: using pip or using Docker.

Please note that WattWise requires Python 3.8 or later. For Docker usage, you will need to have Docker installed on your system.

Also, note that the power management features, which include automatic throttling, currently require Linux systems with appropriate CPU/GPU frequency control capabilities.

Install WattWise via pip

Clone the WattWise repository from GitHub:

git clone https://github.com/naveenkul/WattWise.git

Navigate to the WattWise directory:

cd WattWise

Install the required Python packages first:

pip install -r requirements.txt

Finally, install WattWise itself using pip:

pip install .

Install WattWise using Docker

Go to the directory where you cloned WattWise.

Build the Docker image from the project directory (if you have cloned the repository):

docker build -t wattwise .

Create directories for persistence (this is usually done only the first time):

mkdir -p ~/.config/wattwise
mkdir -p ~/.local/share/wattwise

These directories are mounted into the Docker container to store configuration and data.

For first-time setup, you need to configure a data source: either Home Assistant or a Kasa smart plug.

To configure Home Assistant:

docker run -it --rm --network host \
  -v ~/.config/wattwise:/root/.config/wattwise \
  wattwise config ha

OR to configure a Kasa smart plug:

docker run -it --rm --network host \
  -v ~/.config/wattwise:/root/.config/wattwise \
  wattwise config kasa

Once configured, you can run a single check with Docker:

docker run -it --rm --network host \
  -v ~/.config/wattwise:/root/.config/wattwise \
  -v ~/.local/share/wattwise:/root/.local/share/wattwise \
  wattwise

Or you can run WattWise with continuous monitoring using the --watch flag:

docker run -it --rm --network host \
  -v ~/.config/wattwise:/root/.config/wattwise \
  -v ~/.local/share/wattwise:/root/.local/share/wattwise \
  wattwise --watch

For easier Docker usage, you can also create a bash alias as described in the documentation:

Add the following line to your ~/.bashrc or ~/.zshrc file:

alias wattwise='docker run -it --rm --network host \
  -v ~/.config/wattwise:/root/.config/wattwise \
  -v ~/.local/share/wattwise:/root/.local/share/wattwise \
  wattwise'

After adding the alias, you'll need to source your bash configuration (e.g., source ~/.bashrc or source ~/.zshrc) for the alias to take effect.

You can then use the wattwise command directly, just like the normal command:

wattwise
wattwise --watch

Here's the visual demonstration of WattWise:

Monitor Your PC's Power Usage Using WattWise

Future Plans

While the current version is great for monitoring, Naveen's original idea was even more ambitious.

Since his electricity provider uses Time-of-Use (ToU) pricing – meaning electricity costs more during peak hours – he wanted WattWise to be able to automatically adjust his computer's performance based on these prices.

Imagine this: during those expensive peak hours, WattWise could automatically reduce the speed of his CPU and maybe even his GPUs.

This would use less power and save him some money. Then, when the prices drop, it could go back to full speed.

Naveen has even done some testing that showed reducing his CPU speed could save around 225 watts. That can add up! The plan is to use some clever tech, like a Proportional-Integral (PI) controller, to manage this power and performance balancing act.

Current Limitations

It's still a work in progress, and there are a few things to keep in mind:

  • Right now, it only supports one smart plug at a time.
  • It only works with TP-Link Kasa smart plugs that can monitor energy usage (like the EP25).
  • For the power management features (the automatic throttling), you'll need a Linux system with the ability to control CPU/GPU frequencies.
  • The automatic power optimiser part isn't fully ready yet – the current open-source version on GitHub is mostly the monitoring dashboard.

What's next for WattWise?

Naveen has lots of ideas for the future, including:

  • Supporting multiple smart plugs and showing combined power usage.
  • Adding compatibility for more brands of smart plugs.
  • Improving the visualisations and maybe even allowing you to export the data.
  • Integrating with other power management tools.
  • Making the predictions for power usage even smarter.

Naveen's goal with WattWise was simple: to solve his own problem of wanting to monitor his power-hungry workstation from the terminal he always has open. The idea of automatically saving money during peak electricity hours is just a fantastic bonus.

The dashboard part of WattWise is open-source under the MIT license. So, if you're interested, feel free to check out the official WattWise GitHub Repository. You can also contribute your own ideas and help make it even better!