Disable swap in systemd

Tested in Ubuntu 18 / Debian 9

Usually you can disable swap with this command:

~ $ swapoff -a

Then comment out the swap entry in the /etc/fstab file to make the change persist across reboots. But on systems with systemd this is sometimes not enough; in that case the steps are these:
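As a sketch, the fstab swap line can be commented out with sed (this assumes a standard fstab swap entry; a backup copy is kept):

```shell
# comment out any uncommented /etc/fstab line whose fields include "swap",
# keeping a backup of the original file in /etc/fstab.bak
sed -i.bak '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
```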

Get the swap service name in use:

~ $ systemctl --type swap

UNIT                                       LOAD   ACTIVE SUB    DESCRIPTION                  
dev-mapper-ubuntu\x2d\x2dvg\x2dswap_1.swap loaded active active /dev/mapper/ubuntu--vg-swap_1

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

Stop the service. Note the quotes at the start and end of the service name; they keep the shell from interpreting the backslash escapes:

~ $ systemctl stop 'dev-mapper-ubuntu\x2d\x2dvg\x2dswap_1.swap'

Mask service:

~ $ systemctl mask 'dev-mapper-ubuntu\x2d\x2dvg\x2dswap_1.swap'
Created symlink /etc/systemd/system/dev-mapper-ubuntu\x2d\x2dvg\x2dswap_1.swap → /dev/null.

Finally check service:

~ $ systemctl --type swap
0 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

If you want to enable swap again:

~ $ systemctl unmask 'dev-mapper-ubuntu\x2d\x2dvg\x2dswap_1.swap'
~ $ systemctl start 'dev-mapper-ubuntu\x2d\x2dvg\x2dswap_1.swap'
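After masking the unit you can verify from the shell that no swap remains active:

```shell
# /proc/swaps lists active swap areas; with swap disabled, only the header remains
cat /proc/swaps
# free should report 0 total swap once everything is disabled
free -h | grep -i '^swap'
```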

Getting the parent process pid

Tested in Debian 8

Sometimes it can be useful to know the parent PID of a process, to get information about it or to kill it. You can use ps with the ppid option. For example, take this apache2 process tree (an excerpt of ps faux output):

root      1208  0.0 11.0 312388 113172 ?       Ss   Oct10   0:06 /usr/sbin/apache2 -k start
www-data  3915  0.0 11.1 1363796 113332 ?      Sl   06:25   0:05  \_ /usr/sbin/apache2 -k start
www-data  3916  0.0 10.9 1362304 112000 ?      Sl   06:25   0:03  \_ /usr/sbin/apache2 -k start

Getting the PPID knowing the child PID:

~ $ ps -O ppid= -p 3916
  PID       S TTY          TIME COMMAND
 3916  1208 S ?        00:00:03 /usr/sbin/apache2 -k start

Or in short format, printing only the PPID:

~ $ ps -o ppid= -p 3916

Or knowing the child process name:

~ $ ps -O ppid= -p $(pgrep apache2)
  PID       S TTY          TIME COMMAND
 1208     1 S ?        00:00:06 /usr/sbin/apache2 -k start
 3915  1208 S ?        00:00:05 /usr/sbin/apache2 -k start
 3916  1208 S ?        00:00:04 /usr/sbin/apache2 -k start

And vice versa, knowing the parent PID, get the PIDs of all its children:

~ $ ps --ppid=1208 -f
www-data  3915  1208  0 06:25 ?        00:00:05 /usr/sbin/apache2 -k start
www-data  3916  1208  0 06:25 ?        00:00:04 /usr/sbin/apache2 -k start
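On Linux the same information is also available under /proc; a minimal sketch reading the PPid field:

```shell
# /proc/<pid>/status contains a "PPid:" line with the parent PID;
# /proc/self is the current process, so this prints the PID of the
# shell that launched awk
awk '/^PPid:/ {print $2}' /proc/self/status
```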

Special characters in URL rewrite with mod_rewrite

Tested in Debian 8 / Apache 2.4.10

In an Apache environment with mod_rewrite, you can use the NE (no escape) flag to rewrite URLs containing special characters like #, ?, &, etc. Example:

RewriteEngine On
RewriteRule ^(.*)$ "http://domain.com/#tag" [R=301,NC,L,NE]

R=301: type of redirection, 301 in this case
NC: no case, i.e. case-insensitive matching
L: stop processing the rule set, like “break” in C
NE: no escape, don’t percent-encode special characters in the rewritten URL
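For comparison, the same rule without the NE flag (a hedged example): Apache would percent-encode the substitution, redirecting to http://domain.com/%23tag and losing the fragment:

```
RewriteEngine On
# without NE, the "#" in the target is escaped to %23
RewriteRule ^(.*)$ "http://domain.com/#tag" [R=301,NC,L]
```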

Separate varnishncsa logs per domain

Tested in Ubuntu 16 / Varnish 4.1.9

Here is an init.d script (/etc/init.d/vhostlog) that starts a varnishncsa daemon per domain:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          vhostlog
# Required-Start:    $local_fs $remote_fs $network
# Required-Stop:     $all
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: starts vhostlog service
# Description:       starts vhostlog service
### END INIT INFO

# adjust to your log directory
LogsPath=/var/log/varnish

case "$1" in
    start)
        while read domain; do
            varnishncsa -D -q 'ReqHeader:Host ~ "^(www\.)?'$domain'$"' \
                -a -w $LogsPath/$domain-access.log \
                -F '%h %l %u %t "%m %U %H" %s %b "%{Referer}i" "%{User-agent}i"'
        done < /path/to/domains-list.txt
        ;;
    stop)
        killall varnishncsa
        sleep 3
        ;;
    restart)
        $0 stop
        $0 start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac

exit 0

The content of the file /path/to/domains-list.txt is one domain per line, for example:

domain1.com
domain2.com
If varnish is behind another proxy (like nginx serving SSL, for example) you can replace %h with %{X-Forwarded-For}i or %{X-Real-IP}i.
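For example, a variant of the -F format string taking the client IP from the proxy header instead of %h (this assumes the upstream proxy actually sets X-Real-IP):

```
-F '%{X-Real-IP}i %l %u %t "%m %U %H" %s %b "%{Referer}i" "%{User-agent}i"'
```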

Once created, make the script executable and register it in init.d:

~ $ chmod +x /etc/init.d/vhostlog
~ $ update-rc.d vhostlog defaults

And finally start it:

~ $ /etc/init.d/vhostlog start

Snow in your shell

A command to generate snow in your shell console; it’s cool:

~ $ clear;while :;do echo $LINES $COLUMNS $(($RANDOM%$COLUMNS)) $(printf "\u2744\n");sleep 0.1;done|gawk '{a[$3]=0;for(x in a) {o=a[x];a[x]=a[x]+1;printf "\033[%s;%sH ",o,x;printf "\033[%s;%sH%s \033[0;0H",a[x],x,$4;}}'

You will need gawk installed.

Source: http://climagic.org/coolstuff/let-it-snow.html (@climagic)

Suunto Moon Age App

Tested in Suunto Traverse

This app calculates the moon age in days:

/* While in sport mode do this once per second */
MoonMonth = 29.53;
FirstNewMoonDayIn2000 = 6;

DaysFrom2000 = SUUNTO_DAYS_AFTER_1_1_2000 + (SUUNTO_TIME/86400);
DaysFromNewMoon = DaysFrom2000 - FirstNewMoonDayIn2000;
MoonAge = Suunto.mod(DaysFromNewMoon/MoonMonth,1) * MoonMonth;

postfix = "Days";
RESULT = MoonAge;
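The same formula can be reproduced in the shell to cross-check the app (a sketch using GNU date for the epoch maths and awk for the floating-point modulo):

```shell
# days elapsed since the first new moon of 2000 (2000-01-06, as above)
days=$(( ( $(date -u +%s) - $(date -u -d 2000-01-06 +%s) ) / 86400 ))
# moon age in days = days mod 29.53 (the synodic month length)
awk -v d="$days" 'BEGIN { m = 29.53; printf "%.1f\n", d - int(d/m)*m }'
```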

Glusterfs server and client at the same node

Tested in Ubuntu 16 / Glusterfs 3.8

We’re going to configure a GlusterFS cluster on two nodes, with server and client on both hosts, and without a dedicated partition or disk for storage.

First, add the node names and IP addresses to the /etc/hosts file on both nodes. For security reasons, it’s important to run GlusterFS in a local network or to use a firewall to drop external traffic. For example (adjust the IPs to your network):

192.168.1.101   server01
192.168.1.102   server02

Then add glusterfs repositories, in this case the stable version was 3.8:

~ $ echo 'deb http://ppa.launchpad.net/gluster/glusterfs-3.8/ubuntu xenial main' > /etc/apt/sources.list.d/gluster-glusterfs.list 

Update and install needed packages:

~ $ apt-get update
~ $ apt-get install glusterfs-client glusterfs-server glusterfs-common

Start glusterfs daemon:

~ $ /etc/init.d/glusterfs-server start 

Configure the peers. On server01 type:

~ $ gluster peer probe server02 
~ $ gluster peer status

Or if you do it from server02, then:

~ $ gluster peer probe server01 
~ $ gluster peer status

List peers in the pool:

~ $ gluster pool list
UUID					Hostname   	State
bbc3443a-2433-4bba-25a1-0474ec77b571	server02	Connected 
df55a706-a32c-4d5b-a240-b29b6d16024b	localhost  	Connected

Now is the time to create a volume:

~ $ gluster volume create storage-volume replica 2 transport tcp server01:/storage-volume server02:/storage-volume force

gluster volume create: create a volume named storage-volume
replica 2: volume replication with two replicas; each node has a copy of all the data
transport tcp: transport protocol to use
server01:/storage-volume and server02:/storage-volume: the node bricks
force: force creating the volume on the root partition (root filesystem)

Start volume:

~ $ gluster volume start storage-volume 

Show the volume status:

~ $ gluster volume status 

Show the volume info:

~ $ gluster volume info 

You can configure a lot of settings to tune performance and security. For example, to permit traffic only between the nodes (substitute your node IPs):

~ $ gluster volume set storage-volume auth.allow <server01-ip>,<server02-ip>

Or to improve I/O performance (be careful, because it could lead to inconsistency):

~ $ gluster volume set storage-volume performance.flush-behind on 

Now create a directory where the volume will be mounted:

~ $ mkdir /mnt/dir-storage-volume 

And finally mount it on both nodes, each one from its local daemon:

~ $ mount -t glusterfs localhost:/storage-volume /mnt/dir-storage-volume

Now test the replication: write to the /mnt/dir-storage-volume directory on the first node and check whether the changes are replicated to the second node, and vice versa.
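To mount the volume automatically at boot, a hedged /etc/fstab entry could look like this (_netdev delays the mount until the network is up):

```
localhost:/storage-volume  /mnt/dir-storage-volume  glusterfs  defaults,_netdev  0  0
```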

TIP: If you need to add more bricks/nodes to extend the size of the volume, first add the bricks and then rebalance the data; remember we’re using two replicas:

~ $ gluster volume add-brick storage-volume replica 2 server03:/storage-volume server04:/storage-volume
~ $ gluster volume rebalance storage-volume start
~ $ gluster volume rebalance storage-volume status

Boot on LVM root partition in Raspbian

Tested on Raspberry Pi 2 Model B / Raspbian Jessie

It’s possible to boot from an LVM root partition on an external USB disk instead of an SD card on the Raspberry Pi. The steps are as follows (creating the LVM partitions is out of the scope of this post):

Check if your Raspberry Pi kernel supports booting from an initrd; this is necessary to activate LVM at boot:

~ $ zcat /proc/config.gz | grep INITRD

If /proc/config.gz doesn’t exist, load the configs module first:

~ $ modprobe configs

If the result is “y” (it usually is), create the initrd file:

~ $ mkinitramfs -o /boot/initramfs.gz

In the /boot/cmdline.txt file, change the root partition by replacing:

root=/dev/mmcblk0p2

with the LVM disk using its mapper designation, assuming vg0 as the LVM group and lv01 as the LVM volume:

root=/dev/mapper/vg0-lv01

You can add rootdelay=5 because the USB disk can take a while to appear. Lastly, add:

initramfs initramfs.gz

at the end of the /boot/config.txt file. Finally reboot; with a little luck your Raspberry Pi should boot from the new LVM root partition.

Check opendkim keys

To check OpenDKIM private and public keys, you can use opendkim-testkey:

~ $ opendkim-testkey -d mydomain.com -s default -k /etc/opendkim/keys/mydomain.com/key.private -vvv
opendkim-testkey: key loaded from /etc/opendkim/keys/mydomain.com/key.private
opendkim-testkey: checking key 'default._domainkey.mydomain.com'
opendkim-testkey: key not secure
opendkim-testkey: key OK

-d: domain
-s: selector, in this case “default”
-k: local path to the private key
-vvv: extra verbose info

The “key not secure” message is just because the domain does not have DNSSEC configured.
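The public half checked above is published as a DNS TXT record under the selector; a hedged example of what the record looks like (p= holds the base64-encoded public key, truncated here):

```
default._domainkey.mydomain.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."
```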

Custom errors in apache + php-fpm

Tested in Apache 2.4 / PHP-FPM 5.6

By default php-fpm returns “File not found.” when someone requests a PHP file that doesn’t exist. If you use php-fpm with Apache through mod_proxy, to serve custom error pages instead just configure this in apache2.conf:

ProxyErrorOverride on
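With that directive Apache intercepts the backend’s error status codes, so its own ErrorDocument pages apply; a hedged example with a custom 404 page (the path is an assumption):

```
ProxyErrorOverride On
ErrorDocument 404 /errors/404.html
```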