Saturday, January 26, 2013

Howto Raspberry Pi - Use your Pi as a secure Reverse Proxy gateway to your internal Web Sites and Services

Last update 02/01/2013

The Goal: 

You have a Raspberry Pi and want to use it as a secure Web reverse proxy gateway to access your various internal services through your main fully qualified domain name or IP.

Let's say:
  • You have a main router or ISP box
  • Your Rpi will be in front of the Internet, with the http/https ports redirected from your router to your Rpi
  • For this configuration to work from both inside and outside your home network, your domain name (here "") must be associated with your public IP
  • You may also have internal servers providing Web sites or services you want to reach from your public IP / domain name
We will use:
  • nginx as the great secure reverse proxy instance
  • SSL with auto signed or officially signed certificate to secure our web traffic
  • htpasswd to password protect your shellinabox from being visible and accessible without credentials
  • shellinabox to host a nice Web SSH frontend

Summary of steps: 

Step 1: OPTIONAL - Get a fully Qualified Domain Name (FQDN)
Step 2: Manage your SSL certificate
Step 3: Put a Shellinabox in your Pi ^^
Step 4: Install and configure Nginx

Step 1: OPTIONAL - Get a Fully Qualified Domain Name  

This is absolutely optional, but you could think about getting a qualified domain name to access your home network. (a domain costs very little per year, and you can dynamically associate it with your public IP)

In many cases, when you connect from locked-down places (such as your company network), trying to access a web site by its bare public IP will be blocked by internal web proxies and firewalls.
By using a FQDN to associate your public IP with a real Internet domain name, your site looks as official as any Internet company web site :-)

As an alternative to buying your own domain name, you can also use free dynamic DNS services, but most company proxies will block those too.

And finally, this is just clean and beautiful ^^

In this post, I will assume for the sake of the example that your domain name is "". (the FQDN is still optional)
Step 2: Manage your SSL certificate

Of course, we will want to secure our Web traffic using an SSL certificate; there are 2 ways to achieve this:

1. Generating a "self-signed" SSL certificate

You can very easily generate a self-signed SSL certificate; you get exactly the same security and encryption level as with any official certificate, but the certificate won't be officially recognized over the Internet.

That means that when connecting to your site, your Web browser will warn you that it cannot guarantee the site's identity, and you will have to accept the exception.

I personally prefer having an official SSL certificate :-)

2. Buy and generate an Officially signed SSL certificate

You can also buy an official SSL certificate for very little; in this case your browser will automatically recognize your certificate as valid and you won't get any warning.

There are some places where you can get a free official SSL certificate for personal use. (look for "startssl")

In both cases, Google is your friend ^^

How to generate a self-signed certificate:

Install OpenSSL:
$ sudo apt-get install openssl

Generate your self signed certificate:
sudo mkdir -p /etc/ssl/localcerts
sudo openssl req -new -x509 -days 3650 -nodes -out /etc/ssl/localcerts/autosigned.crt -keyout /etc/ssl/localcerts/autosigned.key
sudo chmod 600 /etc/ssl/localcerts/*

Note: Answer the OpenSSL questions however you wish; it does not really matter since your certificate is self-signed anyway.
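If you prefer a non-interactive run, the questions can be pre-answered with -subj (a sketch: the subject fields and the temporary directory are placeholder values; in practice you would write to /etc/ssl/localcerts with sudo):

```shell
# Generate a self-signed certificate without interactive prompts, then
# inspect its subject and validity dates.
CERTDIR=$(mktemp -d)    # placeholder for /etc/ssl/localcerts
openssl req -new -x509 -days 3650 -nodes \
  -subj "/C=FR/O=Home/CN=example.invalid" \
  -out "$CERTDIR/autosigned.crt" -keyout "$CERTDIR/autosigned.key"
chmod 600 "$CERTDIR"/*
openssl x509 -noout -subject -dates -in "$CERTDIR/autosigned.crt"
```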

Step 3: Put a shellinabox in your Pi ^^

As explained before, shellinabox is a wonderful web frontend to SSH; with it you can access your SSH server without having to deal with an SSH client.

In the past, I wrote an article about another SSH web frontend, "ajaxterm", which is nice too, but in my opinion much more limited and slow.
So I recommend using shellinabox instead.

You will be able to access your SSH server over standard Web ports even when connecting from places where outbound SSH traffic is prohibited :-)

To install:
# sudo apt-get install shellinabox

By default, shellinabox listens on port "4200"; you can leave that as it is, as your nginx reverse proxy will take care of redirecting requests to this internal service.

If you want to manage your shellinabox configuration, take a look at main config files:
  • /etc/default/shellinabox
  • /etc/shellinabox/*
The default configuration is fine for us; test your shellinabox by connecting from a browser inside your network: http://<mypiserver>:4200

Note that even if we won't use it, shellinabox comes with an embedded self-signed SSL certificate configuration to redirect http to https and secure your web traffic.

Step 4: Install and configure Nginx

Ok, serious things now, let's install and configure nginx.

Nginx is an extremely powerful open-source Web server, light, secure and fast, that can be used as a reverse proxy gateway to your internal Web services.

It is used more and more by companies running high-load Web sites; do not hesitate to take a look at the official sites.
I used to run Apache as a reverse proxy for this job, but nginx handles it with great success: it's very modular and easy to maintain, which is why I recommend Nginx.

To install:
# sudo apt-get install nginx-full

Now let's configure the beast:

First, some configuration in main config file "/etc/nginx/nginx.conf", here is a sample config file:
# /etc/nginx/nginx.conf

user www-data;
worker_processes 4;
pid /var/run/;

events {
 worker_connections 768;
}

http {

 sendfile on;
 tcp_nopush on;
 tcp_nodelay on;
 keepalive_timeout 65;
 types_hash_max_size 2048;

 include /etc/nginx/mime.types;
 default_type application/octet-stream;

 access_log /var/log/nginx/access.log;
 error_log /var/log/nginx/error.log;

 gzip on;
 gzip_disable "msie6";

 include /etc/nginx/conf.d/*.conf;
 include /etc/nginx/sites-enabled/*;
}

Please note that, as with the Apache configuration style under Debian/Ubuntu, any configuration file (for a site or module) placed in conf.d or sites-enabled will be loaded when Nginx starts.

A good practice is to keep the real configuration files in "sites-available" and enable them with symbolic links in "sites-enabled".

Let's deactivate the default web site we won't use by removing its symbolic link:
$ sudo rm /etc/nginx/sites-enabled/default 

Create an htpasswd file that will contain your credentials (adapt <username>)
$ sudo htpasswd -c /etc/nginx/.htpasswd <username>
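The htpasswd utility comes from the apache2-utils package; if you'd rather not install it, an equivalent entry can be produced with openssl's apr1 password hasher (a sketch: "alice" and "s3cret" are example values):

```shell
# Build an htpasswd-style "user:hash" line without the htpasswd utility.
entry="alice:$(openssl passwd -apr1 s3cret)"
echo "$entry"
# To use it, append the line to the file nginx reads:
#   echo "$entry" | sudo tee -a /etc/nginx/.htpasswd
```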

Now create your main web site configuration file, example:
  • /etc/nginx/sites-available/main
Here is a sample secured configuration:
access_log off;
add_header Cache-Control public;
server_tokens off;

# HTTP 80
server {
 listen 80;
 server_name _;
 # redirect all plain http requests to https
 rewrite ^ https://$host$request_uri? permanent;
}

# HTTPS 443
server {

 include    /etc/nginx/proxy.conf;

 listen 443 ssl;
 keepalive_timeout 70;


 # SSL config
 ssl on;
 ssl_certificate /etc/ssl/localcerts/autosigned.crt;
 ssl_certificate_key /etc/ssl/localcerts/autosigned.key;

 ssl_session_timeout 5m;
 ssl_protocols SSLv3 TLSv1.2;
 ssl_ciphers RC4:HIGH:!aNULL:!MD5;
 ssl_prefer_server_ciphers on;
 ssl_session_cache shared:SSL:10m;

 add_header X-Frame-Options DENY;

 # DDOS protection - Tune Values or deactivate in case of issue
 # limit_conn conn_limit_per_ip 20;
 # limit_req zone=req_limit_per_ip burst=20 nodelay;

 # status for nginx auditing
 location /nginx-status {
      stub_status on;
      access_log off;
      deny all;
 }

 location / {
  # send visitors to the default internal site, here shellinabox
  rewrite ^ /shellinabox/ permanent;
 }

 location /shellinabox/ {
  proxy_pass http://localhost:4200;
  auth_basic            "Access Restricted";
  auth_basic_user_file  "/etc/nginx/.htpasswd";
  access_log /var/log/nginx/shellinabox.access.log;
  error_log /var/log/nginx/shellinabox.error.log;
 }
}
Create the file "/etc/nginx/proxy.conf" with following content:
proxy_redirect          off;
proxy_set_header        Host            $host;
proxy_set_header        X-Real-IP       $remote_addr;
proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size    10m;
client_body_buffer_size 128k;
proxy_connect_timeout   90;
proxy_send_timeout      90;
proxy_read_timeout      90;
proxy_buffers           32 4k;

Activate your nginx web site, check the configuration and restart:
sudo ln -s /etc/nginx/sites-available/main /etc/nginx/sites-enabled/main
sudo nginx -t
sudo service nginx restart


For this configuration to work from both inside and outside your home network, your domain name (here "") must be associated with your public IP

Now test accessing to your Web site from both internal and external access :-)

As you've understood, you can expose as many internal Web sites as you need through a single Nginx instance, by adding server blocks (virtual hosts) or location blocks.

In the sample config, shellinabox is the default site accessible with your domain name, but you can change that and/or add any other internal web site very easily.

Just add a new location related to your internal Web site you want to be able to access and you're done :-)
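For instance, to expose a hypothetical internal web application running on 192.168.0.10:8080 (the address, port and path are made-up values to adapt), add a location block like this inside the HTTPS server section, then reload nginx:

```nginx
 location /mysite/ {
  proxy_pass http://192.168.0.10:8080/;
  access_log /var/log/nginx/mysite.access.log;
  error_log /var/log/nginx/mysite.error.log;
 }
```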

Friday, January 18, 2013

Howto Raspberry Pi: Monitor your Raspberry Pi with Observium!

The Goal: 

With Observium combined with the check_mk Unix agent, the goal is to monitor any available indicator (CPU, memory, interface traffic...) and, above all, the specific Raspberry Pi indicators that vary dynamically when running overclocked in Turbo mode:
  • CPU Frequency
  • CORE Frequency
  • CORE Voltage
  • BCM2835 SoC Temperature

Corresponding "vcgencmd" commands:
# CPU Frequency
vcgencmd measure_clock arm

# CORE Frequency
vcgencmd measure_clock core

# CORE Voltage
vcgencmd measure_volts core

# SoC Temp
vcgencmd measure_temp

There are also other indicators you may want to monitor, even if I don't find them useful myself.
The present article takes care of these 4 indicators.

Take a look here:

Global list of indicators available through "vcgencmd":
vcgencmd measure_clock arm
vcgencmd measure_clock core
vcgencmd measure_clock h264
vcgencmd measure_clock isp
vcgencmd measure_clock v3d
vcgencmd measure_clock uart
vcgencmd measure_clock pwm
vcgencmd measure_clock emmc
vcgencmd measure_clock pixel
vcgencmd measure_clock vec
vcgencmd measure_clock hdmi
vcgencmd measure_clock dpi
vcgencmd measure_volts core
vcgencmd measure_volts sdram_c
vcgencmd measure_volts sdram_i
vcgencmd measure_volts sdram_p

Installing Observium is out of the scope of this article; the Observium installation documentation is well known and easy to read, see the main sources below.

Main sources:

I recommend installing Observium and MySQL on a central server which will poll our Rpi to generate graphs and so on.

We will use an additional agent called "check_mk" to query the Rpi; the system load generated by snmp and the Unix agent is very limited, which is great: the Rpi is a low-power device and you don't want monitoring to generate high system load!

Once you have Observium up and running, follow this guide to integrate any Raspberry Pi you want to monitor :-)

Summary of steps: 

Step 1: Install and configure snmpd
Step 2: Install check_mk agent (Unix Agent)
Step 3: Add the custom Raspberry agent script
Step 4: Observium custom application configuration
Step 5: Configure your Rpi in Observium, the easy part!


Step 1: Install and configure snmpd

First, we will begin by installing the snmpd daemon:
$ sudo apt-get install snmpd snmp-mibs-downloader
Let's configure some little things:

Edit "/etc/default/snmpd" and:
  • set: export MIBS=UCD-SNMP-MIB
  • Replace the line "SNMPDOPTS=" with the following values to prevent snmpd from logging each connection (default behavior):
SNMPDOPTS='-LS 0-4 d -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/'

Edit "/etc/snmp/snmpd.conf" and:
  • Comment out with "#" the default line "agentaddress udp:" which only allows connections from localhost itself
  • Uncomment the line "agentaddress udp:161,udp6:[::1]:161" to allow remote connections
  • Uncomment the line "rocommunity secret <LANSUBNET>" (adapt <LANSUBNET> to the CIDR value of your LAN subnet, example: 192.168.0/24)

Note: "secret" will be the name of the snmp community, only accessible from your local network.

  • Configure "sysLocation" and "sysContact"
  • Look for the section "EXTENDING THE AGENT" and add the following line:
extend . distro /usr/bin/distro
  • Install the "distro" script coming from observium (to recognize the remote OS)
$ sudo wget -O /usr/bin/distro
$ sudo chmod 755 /usr/bin/distro

Finally restart snmpd daemon:
$ sudo service snmpd restart

Step 2: Install check_mk agent (Unix agent)

We will use the great agent "check_mk", called "Unix agent" by Observium.

If you want more information about this very cool tool, check its main Web site:

Install Xinetd requirement:
$ sudo apt-get install xinetd

Download and install check_mk:
$ wget
$ sudo dpkg -i check-mk-agent_1.2.0p3-2_all.deb

Verify that the package installation generated the corresponding xinetd configuration file.

If not (it seems this part fails under Rpi), create the file with the following content:
# +------------------------------------------------------------------+
# |             ____ _               _        __  __ _  __           |
# |            / ___| |__   ___  ___| | __   |  \/  | |/ /           |
# |           | |   | '_ \ / _ \/ __| |/ /   | |\/| | ' /            |
# |           | |___| | | |  __/ (__|   <    | |  | | . \            |
# |            \____|_| |_|\___|\___|_|\_\___|_|  |_|_|\_\           |
# |                                                                  |
# | Copyright Mathias Kettner 2012    |
# +------------------------------------------------------------------+
# This file is part of Check_MK.
# The official homepage is at
# check_mk is free software;  you can redistribute it and/or modify it
# under the  terms of the  GNU General Public License  as published by
# the Free Software Foundation in version 2.  check_mk is  distributed
# in the hope that it will be useful, but WITHOUT ANY WARRANTY;  with-
# out even the implied warranty of  MERCHANTABILITY  or  FITNESS FOR A
# PARTICULAR PURPOSE. See the  GNU General Public License for more de-
# ails.  You should have  received  a copy of the  GNU  General Public
# License along with GNU Make; see the file  COPYING.  If  not,  write
# to the Free Software Foundation, Inc., 51 Franklin St,  Fifth Floor,
# Boston, MA 02110-1301 USA.

service check_mk
{
 type           = UNLISTED
 port           = 6556
 socket_type    = stream
 protocol       = tcp
 wait           = no
 user           = root
 server         = /usr/bin/check_mk_agent

 # If you use fully redundant monitoring and poll the client
 # from more then one monitoring servers in parallel you might
 # want to use the agent cache wrapper:
 #server         = /usr/bin/check_mk_caching_agent

 # configure the IP address(es) of your Nagios server here:
 #only_from      =

 # Don't be too verbose. Don't log every check. This might be
 # commented out for debugging. If this option is commented out
 # the default options will be used for this service.
 log_on_success =

 disable        = no
}

Restart xinetd:

$ sudo service xinetd restart

Finally, ensure your Observium machine will be authorized to access the Rpi check_mk service running on port TCP/6556.

Step 3: Add the custom Raspberry agent script

Create a new file "/usr/lib/check_mk_agent/local/raspberry":
#!/bin/sh
#set -x
echo "<<<app-raspberry>>>"
# CPU Frequency
expr `vcgencmd measure_clock arm|cut -f 2 -d "="` / 1000000
# CORE Frequency
expr `vcgencmd measure_clock core|cut -f 2 -d "="` / 1000000
# CORE Voltage
vcgencmd measure_volts core|cut -f 2 -d "="|cut -f 1 -d "V"
# SoC Temp
vcgencmd measure_temp|cut -f 2 -d "="| cut -f 1 -d "'"

Add execution right:
$ sudo chmod a+rx /usr/lib/check_mk_agent/local/raspberry

This script will be called by Observium at each poller time.
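Since vcgencmd only exists on the Pi itself, here is what the script's cut/expr pipeline extracts, illustrated on sample vcgencmd output (the numeric values are made up):

```shell
# Sample outputs in the shape vcgencmd produces on the Pi.
clock_sample="frequency(45)=700000000"   # vcgencmd measure_clock arm
volts_sample="volt=1.20V"                # vcgencmd measure_volts core
temp_sample="temp=49.1'C"                # vcgencmd measure_temp

# Same extraction as the agent script, applied to the samples.
mhz=$(expr "$(echo "$clock_sample" | cut -f 2 -d "=")" / 1000000)
volts=$(echo "$volts_sample" | cut -f 2 -d "=" | cut -f 1 -d "V")
temp=$(echo "$temp_sample" | cut -f 2 -d "=" | cut -f 1 -d "'")
echo "$mhz MHz, $volts V, $temp C"
```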

Step 4: Observium custom application configuration

Ok, now a bigger part: we need to configure Observium to add our custom application like any other.
This way, you can reuse it with as many Rpis as you want ;-)

To do so, we need to create and/or modify different configuration files.

Go into your Observium root directory, usually "/opt/observium"

1. "./includes/polling/" (modify)

Look for the section containing:
      if ($section == "apache") { $sa = "app"; $sb = "apache"; }

And add a new one just below it:
      if ($section == "raspberry") { $sa = "app"; $sb = "raspberry"; }

2. "./includes/polling/applications/" (create)

Create with following content:

if (!empty($agent_data['app']['raspberry']))
  $raspberry = $agent_data['app']['raspberry'];

$raspberry_rrd  = $config['rrd_dir'] . "/" . $device['hostname'] . "/app-raspberry-".$app['app_id'].".rrd";

echo(" raspberry statistics\n");

list($cpufreq, $corefreq, $corevoltage, $soctemp) = explode("\n", $raspberry);
if (!is_file($raspberry_rrd))
  rrdtool_create ($raspberry_rrd, "--step 300 \
        DS:cpufreq:GAUGE:600:0:125000000000 \
        DS:corefreq:GAUGE:600:0:125000000000 \
        DS:corevoltage:GAUGE:600:0:125000000000 \
        DS:soctemp:GAUGE:600:0:125000000000 ".$config['rrd_rra']);

print "cpufreq: $cpufreq corefreq: $corefreq corevoltage: $corevoltage soctemp: $soctemp";
rrdtool_update($raspberry_rrd, "N:$cpufreq:$corefreq:$corevoltage:$soctemp");

// Unset the variables we set here
unset($raspberry, $raspberry_rrd, $cpufreq, $corefreq, $corevoltage, $soctemp);


3. "./html/includes/graphs/application/" (create)

Create with following content:

$scale_min = 0;


$raspberry_rrd   = $config['rrd_dir'] . "/" . $device['hostname'] . "/app-raspberry-".$app['app_id'].".rrd";

if (is_file($raspberry_rrd))
  $rrd_filename = $raspberry_rrd;

$ds = "soctemp";

$colour_area = "F0E68C";
$colour_line = "FF4500";

$colour_area_max = "FFEE99";

$graph_max = 1;

$unit_text = "°C";



4. "./html/includes/graphs/application/" (create)

Create with following content:

$scale_min = 0;


$raspberry_rrd   = $config['rrd_dir'] . "/" . $device['hostname'] . "/app-raspberry-".$app['app_id'].".rrd";

if (is_file($raspberry_rrd))
  $rrd_filename = $raspberry_rrd;

$ds = "corevoltage";

$colour_area = "CDEB8B";
$colour_line = "006600";

$colour_area_max = "FFEE99";

$graph_max = 1;

$unit_text = "Volts";



5. "./html/includes/graphs/application/" (create)

Create with following content:

$scale_min = 0;


$raspberry_rrd   = $config['rrd_dir'] . "/" . $device['hostname'] . "/app-raspberry-".$app['app_id'].".rrd";

if (is_file($raspberry_rrd))
  $rrd_filename = $raspberry_rrd;

$ds = "corefreq";

$colour_area = "B0C4DE";
$colour_line = "191970";

$colour_area_max = "FFEE99";

$graph_max = 1;

$unit_text = "Mhz";



6. "./html/includes/graphs/application/" (create)

Create with following content:

$scale_min = 0;


$raspberry_rrd   = $config['rrd_dir'] . "/" . $device['hostname'] . "/app-raspberry-".$app['app_id'].".rrd";

if (is_file($raspberry_rrd))
  $rrd_filename = $raspberry_rrd;

$ds = "cpufreq";

$colour_area = "B0C4DE";
$colour_line = "191970";

$colour_area_max = "FFEE99";

$graph_max = 1;

$unit_text = "Mhz";



7. "./html/pages/device/apps/" (create)

Create with following content:

global $config;

$graphs = array('raspberry_cpufreq' => 'CPU Frequency',
                'raspberry_corefreq' => 'CORE Frequency',
                'raspberry_corevoltage' => 'CORE Voltage',
                'raspberry_soctemp' => 'BCM2835 SoC Temperature',
               );

foreach ($graphs as $key => $text)
{
  $graph_array['to']     = $config['time']['now'];
  $graph_array['id']     = $app['app_id'];
  $graph_array['type']   = "application_".$key;

  echo("<tr bgcolor='$row_colour'><td colspan=5>");
}



8. "./html/pages/" (modify)

Look for the section containing:
$graphs['apache']     = array('bits', 'hits', 'scoreboard', 'cpu');

And add a new one just below it:
$graphs['raspberry']  = array('cpufreq', 'corefreq', 'corevoltage', 'soctemp');

Ok, we're done!

Step 5: Configure your Rpi in Observium, the easy part!

Now the easiest part: add your Rpi into Observium; go to the menu <Devices>, <Add device>.

In our case:
  • Hostname: Enter the hostname or IP of your Rpi
  • snmp Community: secret

Leave all the rest at their defaults.

The Rpi should be detected with success, and the Debian logo appears:

Now enter the device and go to device settings:

Go to "Applications" and activate the box corresponding to our Raspberry application:

Then, go to "Modules" and activate the Unix agent (disabled by default):

Great, you're done with all configuration parts; wait for a few poller executions. (by default Observium proposes a cron task every 5 minutes)

You can run the poller manually on the host running Observium:
$ sudo /opt/observium/poller.php -h all

And if you want to run it into debug mode to get more details:
$ sudo /opt/observium/poller.php -h all -d

In my experience, you have to wait 10-15 minutes before data starts being graphed.

Some screenshots with application data:

CPU Frequency:

CORE Frequency:

CORE Voltage:

BCM2835 SoC Temperature:

Great :-)

Wednesday, January 16, 2013

Howto: Raspberry Pi Root NFS share - boot your system over an NFS share and definitively get rid of Flash data corruption

*** Updated March 14, 2013  ***

The Goal: 

If you have a Raspberry Pi and a Linux server (or a NAS), you should really be interested in this post! :-)

I had a lot of issues with my main Rpi when overclocked, generating file system data corruption...
And finally, the real best solution, solid and efficient, has been to convert my installation to boot its root fs over NFS.

The easiest approach, which I recommend, is to first have a running installation of your system on your Flash card and simply migrate it to root fs over NFS.

Since I've done this, I have never had any system freeze, corruption or even kernel panic with my Rpi overclocked in turbo mode :-)

Also, before beginning, you should ensure that your system is up to date (sudo apt-get update && sudo apt-get dist-upgrade -f) and that you have the latest firmware version (sudo rpi-update).

Major source: 

Summary of steps: 

Step 1: Set your NFS share
Step 2: Copy your root fs into your NFS share
Step 3: Modify your Raspberry Pi boot configuration
Step 4: Adapt your Rpi fstab
Step 5: Boot your Rpi!
Step 6: Correct your swap configuration by migrating to a loop device


Step 1: Set your NFS share

If you have a Linux Home Server or NAS, then you probably already share data using NFS.

To set up an NFS share dedicated to your Rpi root fs, add the share to "/etc/exports" (adapt <raspberrypi_ip> to your Rpi LAN IP, or to your LAN subnet if you prefer):
# Raspberry Root FS                                     
/data/rpi_rootfs <raspberrypi_ip>(rw,sync,no_root_squash,no_subtree_check)

Under Debian / Ubuntu, reload your NFS server config:
sudo /etc/init.d/nfs-kernel-server reload

Step 2: Copy your Rpi root fs into your NFS share

Then simply copy all of your Rpi root fs into your new NFS share; you can do it directly on the Rpi or by plugging your Flash card into a client computer:

Example with a client computer having the flash card and NFS share mounted:
cp -rav /media/mmcblk0p2/* /data/rpi_rootfs/ 
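A quick way to sanity-check the copy is to compare file counts on both sides. The sketch below illustrates the pattern on temporary directories; in practice SRC would be your mounted flash partition (e.g. /media/mmcblk0p2) and DST your NFS share (/data/rpi_rootfs):

```shell
# Copy a tree and verify the copy is complete (demonstrated on temp dirs).
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/etc"; echo "hostname=rpi" > "$SRC/etc/hostname"
cp -rav "$SRC"/. "$DST"/
src_count=$(find "$SRC" | wc -l)
dst_count=$(find "$DST" | wc -l)
[ "$src_count" -eq "$dst_count" ] && echo "copy looks complete"
```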

Step 3: Modify your Raspberry Pi boot configuration

The only partition you will need to keep in your Rpi Flash card will be the boot partition (first partition), containing main boot configuration files and the Rpi firmware.

To boot over NFS, we need to modify the file "/boot/cmdline.txt" (contained into the first fat partition of your Flash card) to add/correct some sections:
  • root= --> will point to "/dev/nfs"
  • nfsroot=<nfs_server_ip>:/data/rpi_rootfs,udp,vers=3 ip=dhcp (replace <nfs_server_ip> with your NFS server IP)
  • rootfstype=nfs
  • smsc95xx.turbo_mode=N --> a workaround to prevent kernel panics under high network load (I recommend it)


In this example, we use DHCP to set the Rpi LAN IP at boot time; this is in my opinion the easiest way, as you can reserve a fixed address for your Rpi in your DHCP server.
Still, you can also set a fixed IP manually at boot time.

Also note we will be using NFS v3 over UDP for better performance. (see the Memorandum for performance fine tuning)

"cmdline.txt" example with DHCP (the file must contain only one line):

  • <nfs_server_ip> with the NFS server IP
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1
root=/dev/nfs nfsroot=<nfs_server_ip>:/data/rpi_rootfs,udp,vers=3
ip=dhcp rootfstype=nfs smsc95xx.turbo_mode=N 

"cmdline.txt" example with Fix IP(the file must contain only one line):

  • <raspberrypi_ip> with the Lan IP of your Rpi
  • <nfs_server_ip> with the NFS server IP
  • <default_gateway> with the IP of your local gateway
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1
root=/dev/nfs nfsroot=<nfs_server_ip>:/data/rpi_rootfs,udp,vers=3
ip=<raspberrypi_ip>:<nfs_server_ip>:<default_gateway>:<mask>:rpi:eth0:off rootfstype=nfs smsc95xx.turbo_mode=N

Step 4: Adapt your Rpi fstab

Now edit the Rpi "/etc/fstab" file before trying to boot; do this from your NFS server or a client computer.

  • Delete the original line corresponding to your root fs and pointing to the second partition of your flash card (/dev/mmcblk0p2); we don't need it anymore, as the root fs will automatically be mounted over NFS at boot time

Step 5: Boot your Rpi!

Ok, let's go, time to boot :-)

If you followed all steps carefully, your system should boot with no major issue.

However, you will no longer have any swap available: by default Raspbian uses dphys-swapfile to use a local file as swap.

We will correct this now.

Step 6: Correct your swap configuration by migrating to a loop device

By default, Raspbian uses dphys-swapfile to generate a local file being used as swap; this won't work anymore when booting over NFS.

I don't recommend using your Flash card as a swap partition; this may generate system freezes or kernel panics if you get data corruption.

The better way is to set up a local file as a loop device that will be used as the swap device; here is how.

Clean current non working swap file and uninstall dphys-swapfile:
sudo apt-get remove --purge dphys-swapfile
sudo rm /var/swap
sudo rm /etc/init.d/dphys-swapfile
sudo update-rc.d dphys-swapfile remove

Create a new swap file, create the loop swap device and activate swap (example with 1GB swap):
sudo dd if=/dev/zero of=/var/swapfile bs=1M count=1024
sudo losetup /dev/loop0 /var/swapfile
sudo mkswap /dev/loop0
sudo swapon /dev/loop0

Check your current swap availability:
$ free

Output example:
             total       used       free     shared    buffers     cached
Mem:        237656     213092      24564          0         24      93192
-/+ buffers/cache:     119876     117780
Swap:      1048572       1556    1047016

Make it permanent, edit "/etc/rc.local" and add this section before "exit 0":
echo "Setting up loop device swap on /var/swapfile.."
sleep 2
losetup /dev/loop0 /var/swapfile
mkswap /dev/loop0
swapon /dev/loop0

Step 7: Other tunings

There are also some other little things to tune:

Edit "/etc/default/rcS" and:

Edit "/etc/sysctl.conf" and:
  • add or set: vm.min_free_kbytes = 12288
This will ensure the system always keeps 12MB of RAM free to reduce the risk of kernel panic; you may try a lower value if you prefer.


  • NFS version and fine tuning
You may want to try different settings to get the best performance possible.

First, to test your write speed, using dd is very easy:

Create a 10 Mb file test:
$ dd if=/dev/zero of=/tmp/test.file bs=1M count=10

Output sample:
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.52525 s, 6.9 MB/s
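A companion read test can be done the same way (a sketch using a temporary file; dd prints the throughput on its last line):

```shell
# Write a 10 MB test file, verify its size, then read it back discarding
# the data so dd reports the read throughput.
testfile=$(mktemp)
dd if=/dev/zero of="$testfile" bs=1M count=10
size=$(stat -c %s "$testfile")
echo "test file size: $size bytes"
dd if="$testfile" of=/dev/null bs=1M
rm -f "$testfile"
```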

You may want to try with different NFS version, change in cmdline.txt:
  • Change the section nfsroot "vers=2/3"
You may want to try TCP versus UDP
  • Change the section nfsroot  "udp" or "tcp"
You may want to try different values of "rsize" and "wsize", example with NFS V3 and TCP:

  • root=/dev/nfs nfsroot=<nfs_server_ip>:/data/rpi_rootfs,rsize=32768,wsize=32768,tcp,vers=3

In my case, it did not really change anything, so I kept the kernel default values for wsize and rsize, with UDP and NFS v3.

  • Netfilter Iptables
When modifying your iptables configuration, keep in mind that blocking the NFS traffic to your NFS server will result in a system halt.

Setting the default outbound policy to DROP (usually "iptables -P OUTPUT DROP") will result in a system crash.
You should apply "iptables -P OUTPUT ACCEPT" instead, which permits any outbound traffic from your Rpi. (not a big deal, usually you trust your own machine)

Also, make sure NFS traffic with your NFS server is accepted before applying any other rules.

Tuesday, January 15, 2013

Splunk Howto - Splunk for Fail2ban, get the Fail2ban multi-host frontend with Splunk!

*** Updated June 9, 2013  ***

Current Version = 2.02

Splunk (if you don't yet know it) is an incredibly powerful solution that collects, indexes and exploits any kind of data from any system, offering you as many solutions as you need, and even the possibility to create custom applications with graphical front-ends. (dashboards, reports, saved searches...)

In a few words, I am really impressed by Splunk; I think I've been looking for this for many, many years!

Don't hesitate to take a look at the main Splunk Website; you will easily find a lot of information and great documentation:

Splunk can be used for free with some small restrictions. (no more than 500MB of input data per day)

I developed my first Splunk application, "Splunk For Fail2ban", to provide a cool frontend and log managing tool for the well known and powerful Fail2ban. (take a look at my older post)

To install this addon, follow the link on Splunkbase or install it through the standard Splunk online application search:

Splunk pre-requirements:

Ensure to install the required Splunk addons:

Splunk For Fail2ban provides:

Home page with realtime quick summary activity overview and links to interfaces:

A complete Dashboard Overview of Fail2ban activity for all managed systems: 

Activity overview:

Activity and Alert Trend:

Various Top 10 Charts and stats:

Google Maps Dashboard, to identify the source of connection attempts

A Fail2ban Event search interface with selection per kind of data (IPs, ID, Jail...)

Pre-defined major searches to get all the most important information

System view: Index activity

Installation and utilization


Installing and configuring Splunk is out of the scope of this post; still, installing Splunk is really easy and well documented, in 10 minutes you'll be done ^^

As a brief description, here is how Splunk for Fail2ban works:

- We modify Fail2ban to add a specific message for each ban action, containing fields Splunk will analyse
- Through Syslog, we can manage as many Fail2ban servers as required
- Splunk collects our data and produces the IT intelligence

Installation and configuration will be done in a few steps:

1. Modifying the Fail2ban configuration files related to the ban action (the goal is to send fields we will analyse with Splunk)
2. Setting up Fail2ban to log to Syslog system
3. Setting up Syslog to trap custom Fail2ban events into a specific log file (can be local or remote Syslog if numerous Fail2ban hosts)
4. Installation and configuring Splunk for Fail2ban

Part 1: Configure Fail2ban

1. Set Fail2ban output to Syslog

I recommend using "rsyslog" as your main Syslog daemon; it comes with many improvements over the standard syslog.

First, we need to set Fail2ban to log its messages into Syslog instead of a standard log file.

To do so, edit "/etc/fail2ban/fail2ban.conf" and set:
logtarget = SYSLOG

2. Add a new action.d configuration file for events logging

See this configuration sample if required: splunk.conf.example

Create a new file: "/etc/fail2ban/action.d/splunk.conf" with the following content:
actionban = logger -i "[fail2ban.banevent]: fail2ban_host: [`hostname`] \
Banhost: [<ip>] jailname: [<name>] numberoffailures: [<failures>] \
logmessage: [ `grep '\<<ip>\>' <logpath> | tail -1` ] "
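For reference, Fail2ban action files are normally structured with a [Definition] section that also declares the other action hooks. A minimal sketch of the full file, with the unused hooks left as no-ops (the hook names are standard Fail2ban; the actionban line is the one from above):

```ini
# /etc/fail2ban/action.d/splunk.conf
[Definition]
# No setup/teardown needed: this action only logs ban events to Syslog
actionstart =
actionstop =
actioncheck =
actionunban =
actionban = logger -i "[fail2ban.banevent]: fail2ban_host: [`hostname`] \
            Banhost: [<ip>] jailname: [<name>] numberoffailures: [<failures>] \
            logmessage: [ `grep '\<<ip>\>' <logpath> | tail -1` ] "
```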


3. Configure "/etc/fail2ban/jail.conf":

Depending on your needs, you can set Fail2ban to use one of these 3 actions (by editing /etc/fail2ban/jail.conf):
  • action_ = Fail2ban will temporarily ban the source IP host
  • action_mw = Fail2ban will temporarily ban the IP host and send a warning mail including the whois request result
  • action_mwl = Fail2ban will temporarily ban the IP host and send a warning mail including the whois request result and log traces
All you need is to modify jail.conf for each of these action levels to include our specific logging for Splunk.

See this configuration sample if required: jail.conf.example

In jail.conf, add the following lines just before the 3 action definition lines (action_, action_mw, action_mwl):
# Name of Splunk config file
splunkconf = splunk

Then, add a new Splunk-related line underneath each action level; your configuration file will look like this:
# Action shortcuts. To be used to define action parameter

# The simplest action to take: ban only
action_ = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
          %(splunkconf)s[name=%(__name__)s, logpath=%(logpath)s]

# ban & send an e-mail with whois report to the destemail.
action_mw = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
            %(mta)s-whois[name=%(__name__)s, dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s"]
            %(splunkconf)s[name=%(__name__)s, logpath=%(logpath)s]

# ban & send an e-mail with whois report and relevant log lines
# to the destemail.
action_mwl = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
             %(mta)s-whois-lines[name=%(__name__)s, dest="%(destemail)s", logpath=%(logpath)s, chain="%(chain)s"]
             %(splunkconf)s[name=%(__name__)s, logpath=%(logpath)s]

4. Restart Fail2ban and check logging to Syslog:

Now let's test your system: generate a ban event (try to log in through SSH with bad credentials) and check your Syslog file to find the generated event (look for the pattern "fail2ban.banevent").

You should find a ban event like this:
Jan 11 20:24:34 myhostname logger[30720]: [fail2ban.banevent]: fail2ban_host: [myfail2ban] Banhost: [xx.xx.xx.xx] jailname: [ssh] numberoffailures: [6] logmessage: [ Jan 11 20:24:32 myhostname sshd[30706]: Received disconnect from xx.xx.xx.xx: 11: Bye Bye [preauth] ] 
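Since every field is wrapped in brackets, the event is easy to parse with standard tools if you ever need to script against it. A small sketch against a sample line (the IP, host names and log message below are made up for illustration):

```shell
# Sample ban event in the format produced by our actionban (values are made up)
line='Jan 11 20:24:34 myhostname logger[30720]: [fail2ban.banevent]: fail2ban_host: [myfail2ban] Banhost: [198.51.100.7] jailname: [ssh] numberoffailures: [6] logmessage: [ sshd: Received disconnect ]'

# Extract the banned IP and the jail name from their bracketed fields
ip=$(printf '%s\n' "$line" | sed -n 's/.*Banhost: \[\([^]]*\)\].*/\1/p')
jail=$(printf '%s\n' "$line" | sed -n 's/.*jailname: \[\([^]]*\)\].*/\1/p')
echo "$ip $jail"
```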

Now you're done with Fail2ban, let's configure Syslog ^^

Part 2: Configure Syslog - Standalone and Multi-Hosts

In 2 steps:
  • If you want to manage several Fail2ban servers from Splunk, then read the Multiple Fail2ban client configuration note
  • If you have just one host to manage (Fail2ban and Splunk are installed on the same host), then just follow the common configuration section

MULTIPLE FAIL2BAN CLIENT CONFIGURATION NOTE: Remote and centralized Syslog configuration

Configuring Syslog to send events from a Syslog host to a remote Syslog server is out of the scope of this guide.

Still, if you want to collect fail2ban events from different hosts, you can choose between different solutions, such as:
  • Sending events using Syslog to a remote centralized Syslog
  • Sending events from local log file using Splunk forwarder module
  • Others (homemade scripts, file sharing...)
I would recommend using Rsyslog (the default enhanced Syslog on many Linux systems) to achieve this, which is indeed easy enough, robust and efficient.

Here is in 2 steps a quick rsyslog centralized configuration: (remember to restart rsyslog after each modification)

1. On each rsyslog client host, modify "/etc/rsyslog.conf" and add a line to send all events to your Syslog server: (adapt the example IP)

*.* @ 
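For illustration, a filled-in version of that forwarding line (192.0.2.10 and port 514 are placeholders; replace them with your own Syslog server's IP and port):

```
# /etc/rsyslog.conf on each Fail2ban client: forward everything over UDP
*.* @192.0.2.10:514
```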

2. In the Syslog server configuration, create a configuration file that will trap any remote client Syslog events and put them into a dedicated per-host log file:

Make sure this configuration file is read after the fail2ban Syslog config file you will create later. (see below, this is very important)

Create "/etc/rsyslog.d/10-fail2ban.conf" with the following content: (Note: the fail2ban config we will create later will be called 08 so it is read before this one and intercepts the ban messages)

# PerHostLog defines the per-host destination file (example path, adapt it)
$template PerHostLog,"/var/log/remote/%HOSTNAME%.log"
$template RemoteHostFileFormat,"%TIMESTAMP% %HOSTNAME% %syslogfacility-text% %syslogtag%%msg:::sp-if-no-1st-sp%%msg:::space-cc,drop-last-lf%\n"
:inputname, isequal, "imudp" ?PerHostLog;RemoteHostFileFormat
& ~
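Since the filter above matches on the "imudp" input, the centralized server must also have rsyslog's UDP listener enabled. A minimal sketch, assuming the default port 514, typically near the top of "/etc/rsyslog.conf":

```
# Load the UDP input module and listen on port 514
$ModLoad imudp
$UDPServerRun 514
```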

Restart rsyslog after any config modification.

COMMON CONFIGURATION for Single and Multiple (for the centralized rsyslog server) Fail2ban installation: 

1. Set Syslog to trap ban events to a dedicated logfile

This configuration part will depend on your system and needs; I recommend the use of "rsyslog".

The goal is to configure Syslog to trap any event containing the keyword "[fail2ban.banevent]" into a dedicated log file.

On Debian/Ubuntu systems for example, create an rsyslog configuration file:
Create "/etc/rsyslog.d/08-fail2ban.conf" with the following content: 

:msg, contains, "[fail2ban.banevent]" /var/log/fail2ban_banevent.log
& ~

Restart rsyslog to take effect:
sudo service rsyslog restart

2. Generate a ban event and check your logfile

Generate a new ban event and check your log file, you should see a new ban event message! 

If you are ok with that, then you're done with system configuration ^^ 

Part 3: Configuration of Splunk (the easy part!)

Here comes the easiest part, no doubt :-)

1. Configure Input file
Go to "Manager", "Data inputs" and MANUALLY configure a new file input pointing to your Fail2ban log file, with the following settings:


You can keep the default settings here; they do not matter, as we don't use them to identify the fail2ban reporting server.

Source type:

- Set the source type: Manual
- Source type: fail2ban_banevent

Index:

- Set the destination index: fail2ban_index
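If you prefer configuration files over the Manager UI, the equivalent monitor stanza would look roughly like this in "$SPLUNK_HOME/etc/system/local/inputs.conf" (a sketch: the monitor path is the log file we created earlier; the sourcetype and index are the values above):

```ini
[monitor:///var/log/fail2ban_banevent.log]
sourcetype = fail2ban_banevent
index = fail2ban_index
```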

Good news, you're done!!!
Just wait a few minutes to let Splunk ingest the content of your fail2ban log file, then go to the Splunk for Fail2ban application.

Don't hesitate to share any comments with me; this is my very first Splunk application and it may still need some improvements :-)