Graylog2 and Nagios Integration

Say you already have a Nagios system monitoring everything, and in addition a Graylog2 installation which parses logs from everywhere and provides you with invaluable feedback on what's really going on in your system.
And then comes the problem, or one of them:

  • Graylog2 is not really good at sending alerts (or maybe it is?)
  • Nagios is already configured to send alerts, and you would like to reuse the same contact groups, for instance

The solution is below.

Design

Before making you read through the whole blog entry, I’ll just outline the solution I’ve chosen to implement and you can decide whether it’s good for you or not. Here it is in a nutshell:

  • An alert is generated in Graylog2 on a configured stream
  • Graylog2 uses the exec callback plugin to call an external alerting command, call it graylog2-alert.sh for instance
  • graylog2-alert.sh pushes an alert using send_nsca
  • Nagios parses the alert and notifies whoever is subscribed to that service

Pretty simple and bulletproof.

Graylog2 Configuration

I assume you already have Graylog2 fully configured. In that case, download the wonderful exec callback plugin and place it under the plugin/alarm_callbacks directory (under the Graylog2 directory, obviously).

Log in to Graylog2 and, under Settings->System, enable the Exec alarm callback.

Click configure and point it to /usr/local/sbin/graylog2-alert.sh.

That's it for now on the Graylog2 interface side. When an alarm fires, the plugin executes the configured command with the alarm details exposed in environment variables (GL2_TOPIC and GL2_DESCRIPTION are the ones we'll use below).

NSCA – Nagios Service Check Acceptor

Properly configure NSCA to work with your nagios setup. That usually means:

  • Opening port 5667 (or another port) on your nagios server
  • Choosing a password for symmetric encryption on the nagios server and the NSCA clients
  • Starting the nsca daemon on the nagios server, so it will accept NSCA communications

Generally speaking, configuring NSCA is out of the scope of this article; more information can be found here:
http://www.nsclient.org/nscp/wiki/doc/usage/nagios/nsca
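
Still, the relevant bits of the two configuration files boil down to roughly this (a sketch; SHARED_SECRET is a placeholder and the file paths vary between distributions):

# /etc/nagios/nsca.cfg - on the nagios server
server_port=5667
password=SHARED_SECRET
# 1 = XOR; pick a stronger method if your network is untrusted
decryption_method=1

# /etc/nagios/send_nsca.cfg - on the Graylog2 host
password=SHARED_SECRET
encryption_method=1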

That said, I'll just mention that when everything works, you should be able to run this successfully:

echo "HOSTNAME;SERVICE;2;Critical" | send_nsca -d ';' -H NAGIOS_HOSTNAME

graylog2-alert.sh

On the Graylog2 host, place the following file under /usr/local/sbin/graylog2-alert.sh (and make it executable):

#!/bin/bash

# nagios servers to notify
NAGIOS_SERVERS="NAGIOS_SERVER_1 NAGIOS_SERVER_2 NAGIOS_SERVER_3"
# add a link to the nagios message, so it's easy to access the interface
# on your mobile device once you get an alert
GL2_LINK="http://GRAYLOG_URL/streams"

main() {
	local tmp_file=$(mktemp)
	# GL2_TOPIC arrives as something like '[stream-name] ...' - extract
	# just the stream name between the brackets
	local gl2_topic=$(echo "$GL2_TOPIC" | cut -d'[' -f2 | cut -d']' -f1)
	# passive check format is: host;service;return-code;output (2 = CRITICAL)
	echo "$(hostname);Graylog2-$gl2_topic;2;$GL2_LINK $GL2_DESCRIPTION" > "$tmp_file"
	local nagios_server
	for nagios_server in $NAGIOS_SERVERS; do
		/usr/sbin/send_nsca -d ';' -H "$nagios_server" < "$tmp_file"
	done
	rm -f "$tmp_file"
}

main "$@"
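
Before wiring it into Graylog2, you can test the script by hand by faking the environment variables the exec callback plugin provides (the values below are made up):

GL2_TOPIC='[mystream] Graylog2 alert' GL2_DESCRIPTION='test alert, please ignore' \
	/usr/local/sbin/graylog2-alert.sh

If everything is in order, nagios should receive a critical passive result for the Graylog2-mystream service.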

This, in combination with what we did before, will fire alerts from Graylog2 -> exec callback plugin -> graylog2-alert.sh -> NSCA -> nagios server.

The nagios side

All you have left to do now is define services for the Graylog2 alerts. It is a rather straightforward service configuration for nagios; here is mine (generated by puppet, in case you wonder):

define service {
        service_description            Graylog2-STREAM_NAME
        host_name                      REPLACE_WITH_YOUR_GRAYLOG2_HOST
        use                            generic-service
        # set the contact group (one group per stream, defined below)
        contact_groups                 Graylog2-STREAM_NAME
        passive_checks_enabled         1
        max_check_attempts             1
        # enable active checks only to reset the alarm
        active_checks_enabled          1
        check_command                  check_tcp!22
        normal_check_interval          10
        notification_interval          10
        flap_detection_enabled         0
}

define contactgroup{
        contactgroup_name       Graylog2-STREAM_NAME
        alias                   Graylog2-STREAM_NAME
        members                 dan
}

We usually have a contact group per Graylog2 stream. We just associate developers with the topic that’s relevant to them.

Restart your nagios and you’re set. Don’t forget to also start nsca!!

Resetting the alert

Graylog2 and NSCA will never generate “positive” OK results, only critical ones, so you need a mechanism to reset the alert every once in a while. If you scroll up you'll see that I check port 22 (SSH) on the Graylog2 host.
How often, you ask?
When configuring a new stream in Graylog2, it is best to match the grace period in Graylog2 to the normal_check_interval in nagios, which guarantees the alert will be reset before a new one comes in.
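
If an active check doesn't fit your setup, the same NSCA channel can also reset the alert: push a passive OK (return code 0) yourself, e.g. from cron. A sketch, using the placeholders from above:

echo "GRAYLOG2_HOSTNAME;Graylog2-STREAM_NAME;0;OK - resetting alert" | \
	send_nsca -d ';' -H NAGIOS_HOSTNAME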

Puppet

The whole shenanigan is obviously puppetized in our environment. However, tailoring nagios to an environment differs so much from site to site that I decided pasting our puppet recipes here would be rather redundant.

I hope you can find this semi-tutorial helpful.

Posted May 26, 2013 by malkodan in Bash, Linux, System Administration

Hebrew Keyboard Layout In Linux

Since I got this question from way too many people, I wanted to share my “cross distribution” and “cross desktop environment” way of doing that very simple thing of enabling a Hebrew keyboard layout under Linux.

Easy As

After logging into your desktop environment, type this:

setxkbmap -option grp:switch,grp:alt_shift_toggle,grp_led:scroll us,il

Alt+Shift will switch between Hebrew and English. Easy as.
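
To verify the layouts were actually loaded, you can query the current keymap settings:

setxkbmap -query

You should see both us and il listed under layout.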

Sustainability

Making it permanent is just as easy:

mkdir -p ~/.config/autostart && cat <<EOF > ~/.config/autostart/hebrew.desktop
[Desktop Entry]
Type=Application
Encoding=UTF-8
Name=Hebrew
Comment=Enable a Hebrew keyboard layout
Exec=setxkbmap -option grp:switch,grp:alt_shift_toggle,grp_led:scroll us,il
EOF

It should survive logout/login, reboots, reinstalls (as long as you keep /home on a separate partition), distribution changes and switching to a different desktop environment (KDE, GNOME, LXDE, etc.).

Posted May 3, 2013 by malkodan in Bash, Linux

Handling many files in one directory

The Assignment

You have a directory with a gazillion files. Since most filesystems are not very efficient with many files in one directory, it is advisable to spread them among a hierarchy of directories. Write a program (or script) which takes a directory with many files and spreads them into an efficient hierarchy.

Does that sound like a university assignment or something? Yes, it does.

Well, apparently such a situation just happened to me in real life. Searching across the internet I couldn't find anything too useful, though I'll stand corrected if something already deals with that problem: post ahead if so.

And yes, thank god I'm using Unix (Linux); I don't even want to think what one would do on Windows.

The Situation

An application was spooling many files into the same directory, generating up to a million files in a single directory. I'm sorry I cannot disclose any more information about it, but let's just say it is a well known open source application.
Access to these files was still fast thanks to ext4 and dir_index, but the directory index was too big to actually list files or do anything else without clogging everything in the system. And we need these files.

So we’ve decided to model the files in a way that’ll be more efficient for browsing and we can then handle it from there.

The Solution

After implementing something quick and dirty to mitigate the pain, I sat down and wrote something a bit more generic. I'm happy to introduce the spread_files.sh utility.
What it takes care of:

  • Reading the directory index just once
  • Hierarchy depth as parameter
  • Stacking up to X files per mv command
  • Has recursion in Bash!!

Obviously the best solution is never to get into that situation in the first place; however if you do, feel free to use spread_files.sh. A rough sketch of the core idea follows.
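
The real utility has a few more knobs, but here's a minimal sketch of the idea (not spread_files.sh itself): hash each filename and spread the files into a two-level hierarchy. Both directory arguments are placeholders:

#!/bin/bash
# spread files from one flat directory into a two-level hierarchy,
# keyed on the md5 of each filename
src_dir=$1
dst_dir=$2

# read the huge directory index only once
find "$src_dir" -maxdepth 1 -type f | while read -r file; do
	# first 4 hex characters of the filename's md5
	hash=$(basename "$file" | md5sum | cut -c1-4)
	sub_dir="$dst_dir/${hash:0:2}/${hash:2:2}"
	mkdir -p "$sub_dir"
	mv "$file" "$sub_dir/"
done

This caps every leaf directory at roughly 1/65536 of the original file count, which is usually enough to make ls bearable again.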

SSHing efficiently

I personally have a large number of hosts which I sometimes have to SSH to. It can get rather confusing and inefficient if you get lost among them.

I’m going to show you here how you can get your SSHing to be heaps more efficient with just 5 minutes of your time.

.ssh/config

In $HOME/.ssh/config I usually store all my hosts in such a way:

Host host1
    Port 1234
    User root
    HostName host1.potentially.very.long.domain.name.com

Host host2
    Port 5678
    User root
    HostName host2.potentially.very.long.domain.name.com

Host host3
    Port 9012
    User root
    HostName host3.potentially.very.long.domain.name.com

You obviously get the idea. So if I'd like to SSH to host2, all I have to do is:

ssh host2

That will SSH to root@host2.potentially.very.long.domain.name.com on port 5678, which saves a bit of time.

I usually manage all of my hosts in that file. It makes life simpler; you can even keep it in git if you feel like it.
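
A handy trick: options shared by all hosts can live in a wildcard entry at the bottom of the file (ssh uses the first value it finds per option, so the specific entries must come first). A sketch, pick options that suit you:

Host *
    User root
    ServerAliveInterval 30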

Auto complete

I’ve added to my .bashrc the following:

_ssh_hosts() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=()
    local ssh_hosts=$(grep ^Host ~/.ssh/config | cut -d' ' -f2 | xargs)
    [[ ! ${cur} == -* ]] && COMPREPLY=( $(compgen -W "${ssh_hosts}" -- ${cur}) )
}

complete -o bashdefault -o default -o nospace -F _ssh_hosts ssh 2>/dev/null \
    || complete -o default -o nospace -F _ssh_hosts ssh
complete -o bashdefault -o default -o nospace -F _ssh_hosts scp 2>/dev/null \
    || complete -o default -o nospace -F _ssh_hosts scp

Sweet. All that you have to do now is:

$ ssh TAB TAB
host1 host2 host3

We are a bit more efficient today.

Posted March 31, 2013 by malkodan in Bash, Linux, System Administration

NetworkManager-ssh

SSH is amazing

Show me one unix machine today without SSH. It’s everywhere, for a reason.
OpenSSH specifically allows you to do so much with it. What would we have done without SSH?

OpenSSH Tunnelling and full VPN

Tunnelling with SSH is really cool: utilizing the secure SSH connection, you can secure virtually any TCP/IP connection using port forwarding (-R and -L):
http://www.openssh.org/faq.html#2.11

However, for full VPN support you can use -w, which opens a tun/tap device on both ends of the connection, potentially allowing you to pass all of your network traffic through the SSH connection. In other words: full VPN support for free!!!

Server configuration

On the server, the configuration would be minimal:

  • Allow tunnelling in the sshd configuration:

    echo 'PermitTunnel=yes' >> /etc/ssh/sshd_config
    service sshd reload

  • Allow forwarding (note that the MASQUERADE rule lives in the nat table):

    iptables -I FORWARD -i tun+ -j ACCEPT
    iptables -I FORWARD -o tun+ -j ACCEPT
    iptables -I INPUT -i tun+ -j ACCEPT
    iptables -t nat -I POSTROUTING -o EXTERNAL_INTERFACE -j MASQUERADE
    echo 1 > /proc/sys/net/ipv4/ip_forward


That’s all!! Congratulations on your new VPN server!!

Client configuration (your personal linux machine)

These 2 commands will configure you with a very simple VPN (run as root!!!):

ssh -f -v -o Tunnel=point-to-point \
  -o ServerAliveInterval=10 \
  -o TCPKeepAlive=yes \
  -w 100:100 root@YOUR_SSH_SERVER \
  '/sbin/ifconfig tun100 172.16.40.1 netmask 255.255.255.252 pointopoint 172.16.40.2' && \
/sbin/ifconfig tun100 172.16.40.2 netmask 255.255.255.252 pointopoint 172.16.40.1
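
If everything went well, the remote end of the tunnel should now answer (a quick sanity check, using the addresses from above):

ping -c 3 172.16.40.1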

The only downside of this awesome VPN is that you have to be root on both ends.
But this whole setup is rather clumsy; let's use some UI for that, no?

NetworkManager-ssh

Somewhere in time, after working intensively in a company dealing with VPNs (but no SSH VPNs at all), I was looking at NetworkManager in my taskbar and thinking: “Hey! There's an OpenVPN, PPTP and IPSEC plugin for NetworkManager, why not build an SSH VPN plugin?”
And hell, why not?
I started searching the Internet frantically, believing that someone had already implemented that ingenious idea (like most good ideas), but except for one mailing list post from a few years ago where someone suggested implementing it: nada.

Guess it's my prime time. Within a week of forking the code of NetworkManager-openvpn (the NetworkManager OpenVPN plugin) I managed to get something that actually works (ssh-agent authentication only). I was surprised, because I had never dealt with the glib/gtk infrastructure, not to mention UI programming (I'm a pure backend/infrastructure developer for the most part).

And today?

I'm writing this post perhaps 2 months after I started development and committed my first alpha release. While writing this post I'm trying to submit NetworkManager-ssh to Fedora (fedora-extras to be precise).

Getting into the bits and bytes behind it is redundant; all you have to know is that the source is available here:
https://github.com/danfruehauf/NetworkManager-ssh
It compiles easily into an RPM or DEB for your convenience. I urge you to give it a shot, and please open issues on GitHub if you find any.

Posted March 23, 2013 by malkodan in C++, Linux

Creating a puppet ready image (CentOS/Fedora)

Cloud computing and being lazy

The need to create template images in our cloud environment is obvious, especially with Amazon EC2 offering an amazing API and spot instances at ridiculously low prices.
In the following post I'll show what I do in order to prepare a “puppet-ready” image.

Puppet for the rescue

In my environment I have puppet configured and provisioning all of my machines. With puppet I can deploy anything I need: “if it's not in puppet, it doesn't exist”.
Coupled with Puppet Dashboard, the interface for manually adding nodes is rather simple. But doing stuff manually is slow. I assume that, given the right base image, I (and you) can deploy and configure that machine with puppet.
In other words, the ability to convert a bare machine into a usable machine is taken for granted (although it is heaps of work on its own).

Handling the “bare” image

Most cloud computing providers today give you an interface for starting/stopping/provisioning machines on their cloud.
The images the providers supply are usually bare, such as CentOS 6.3 with nothing on it. Configuring an image like that requires some manual labour, as you can't even auto-login to it without some random password or something similar.

Create a “puppet ready” image

So when I boot up a plain CentOS 6.x image, these are the steps I take in order to make it “puppet ready” (and I do it only once per cloud computing provider):

# install EPEL, because it's really useful
rpm -q epel-release-6-8 || rpm -Uvh http://download.fedoraproject.org/pub/epel/6/`uname -i`/epel-release-6-8.noarch.rpm

# install puppet labs repository
rpm -q puppetlabs-release-6-6 || rpm -ivh http://yum.puppetlabs.com/el/6/products/i386/puppetlabs-release-6-6.noarch.rpm

# i usually disable selinux, because it's mostly a pain
setenforce 0
sed -i -e 's!^SELINUX=.*!SELINUX=disabled!' /etc/selinux/config

# install puppet
yum -y install puppet

# basic puppet configuration
echo '[agent]' > /etc/puppet/puppet.conf
echo '  pluginsync = true' >> /etc/puppet/puppet.conf
echo '  report = true' >> /etc/puppet/puppet.conf
echo '  server = YOUR_PUPPETMASTER_ADDRESS' >> /etc/puppet/puppet.conf
echo '  rundir = /var/run/puppet' >> /etc/puppet/puppet.conf

# run an update
yum update -y

# highly recommended is to install any package you might deploy later on
# the reason behind it is that it will save a lot of precious time if you
# install 'httpd' just once, instead of 300 times, if you deploy 300 machines
# also recommended is to run any 'baseline' configuration you have for your nodes here
# such as changing SSH port or applying common firewall configuration for instance
yum install -y MANY_PACKAGES_YOU_MIGHT_USE

# and now comes the cleanup phase, where we actually make the machine "bare", removing
# any identity it could have

# set machine hostname to 'changeme'
hostname changeme
sed -i -e "s/^HOSTNAME=.*/HOSTNAME=changeme/" /etc/sysconfig/network

# remove puppet generated certificates (they should be recreated)
rm -rf /etc/puppet/ssl

# stop puppet, as you should change the hostname before it will be permitted to run again
service puppet stop; chkconfig puppet off

# remove SSH keys - they should be recreated with the new machine identity
rm -f /etc/ssh/ssh_host_*

# finally add your key to authorized_keys
mkdir -p /root/.ssh; echo "YOUR_SSH_PUBLIC_KEY" > /root/.ssh/authorized_keys

Power off the machine and create an image. This is your “puppet-ready” image.

Using the image

Now you’re good to go, create a new image from that machine and any machine you’re going to create in the future should be based on that image.

When creating a new machine the steps you should follow are:

  • Start the machine with the “puppet-ready” image
  • Set the machine's hostname:

    hostname=uga.bait.com
    hostname $hostname
    sed -i -e "s/^HOSTNAME=.*/HOSTNAME=$hostname/" /etc/sysconfig/network

  • Run 'puppet agent --test' to generate a new certificate request
  • Add the puppet configuration for the machine; for puppet dashboard it'll be something similar to:

    hostname=uga.bait.com
    sudo -u puppet-dashboard RAILS_ENV=production rake -f /usr/share/puppet-dashboard/Rakefile node:add name=$hostname
    sudo -u puppet-dashboard RAILS_ENV=production rake -f /usr/share/puppet-dashboard/Rakefile node:groups name=$hostname groups=group1,group2
    sudo -u puppet-dashboard RAILS_ENV=production rake -f /usr/share/puppet-dashboard/Rakefile node:parameters name=$hostname parameters=parameter1=value1,parameter2=value2

  • Authorize the machine in puppetmaster (if autosign is disabled)
  • Run puppet:

    # initial run, might actually change stuff
    puppet agent --test
    service puppet start; chkconfig puppet on


This is 90% of the work. If you want to quickly create usable machines on the fly, it shortens the process significantly and can easily be implemented to support virtually any cloud computing provider!

I personally have it all scripted, and a new instance on EC2 takes me 2-3 minutes to load and configure. It even notifies me politely via email when it's done.
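
For the curious, a rough sketch of what such a glue script could look like once the new instance answers on SSH (a hypothetical helper; the host names are placeholders, and the dashboard rake calls are the same ones shown above):

#!/bin/bash
# new-node.sh - glue the steps above together (sketch)
hostname=$1
puppetmaster=YOUR_PUPPETMASTER_ADDRESS

# set the machine identity
ssh root@$hostname "hostname $hostname && \
	sed -i -e \"s/^HOSTNAME=.*/HOSTNAME=$hostname/\" /etc/sysconfig/network"

# generate a certificate request on the node
ssh root@$hostname "puppet agent --test"

# register the node in puppet dashboard
ssh root@$puppetmaster "sudo -u puppet-dashboard RAILS_ENV=production \
	rake -f /usr/share/puppet-dashboard/Rakefile node:add name=$hostname"

# sign the certificate (if autosign is disabled), then kick off puppet
ssh root@$puppetmaster "puppet cert sign $hostname"
ssh root@$hostname "puppet agent --test; service puppet start; chkconfig puppet on"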

I’m such a lazy bastard.

Posted March 23, 2013 by malkodan in Bash, Linux, System Administration

Primus (primusrun) and FC18

Continued from my previous article at:
https://bashinglinux.wordpress.com/2013/02/18/bumblebee-and-fc18-a-horror-show/

This is a little manual about running primus on top of the setup I suggested previously.

Packages

Pretty simple:

yum install glibc-devel.x86_64 glibc-devel.i686 libX11-devel.x86_64 libX11-devel.i686

We should be good to go in terms of packages (both x86_64 and i686).

Download and compile primus

Clone from github:

cd /tmp && git clone https://github.com/amonakov/primus.git

Compiling for x86_64:

export PRIMUS_libGLd='/usr/lib64/libGL.so.1'
export PRIMUS_libGLa='/usr/lib64/nvidia/libGL.so.1'
LIBDIR=lib64 make
unset PRIMUS_libGLd PRIMUS_libGLa

And for i686 (32 bit):

export PRIMUS_libGLd='/usr/lib/libGL.so.1'
export PRIMUS_libGLa='/usr/lib/nvidia/libGL.so.1'
CXX=g++\ -m32 LIBDIR=lib make
unset PRIMUS_libGLd PRIMUS_libGLa

Running

Running with x86_64:

cd /tmp/primus && \
LD_LIBRARY_PATH=/usr/lib64/nvidia:lib64 ./primusrun glxspheres

Untested by me, but that should be the procedure for i686 (32 bit):

cd /tmp/primus && \
LD_LIBRARY_PATH=/usr/lib/nvidia:lib ./primusrun YOUR_32_BIT_OPENGL_APP