Archive for the ‘lazy’ Tag

Rocket science

I still remember my Linux nightmares of the previous century: trying to install Linux and wiping my whole HD while attempting to multi-boot RedHat 5.0.
There was a reason people said you had to be a rocket scientist to install Linux properly.
Times have changed and Linux is easy to install. Two things are probably different: one is that Linux objectively became much easier to handle, and the second is that I gained much more experience.
In my opinion, one of the reasons Linux became easier over the years is the improving support for various device drivers. For home users this is excellent news. However, for the SysAdmin who deals mostly with servers and some high-end devices, the headache, I believe, still exists.
If you thought that having a NIC without a driver is a problem, I can assure you that having a RAID controller without a driver is ten times the headache.
I bring you here the story of the RocketRAID device, how to remaster initrd and driver disks and of course, how to become a rocket scientist!

WTF

With CentOS 5.4 you get an ugly error in the middle of the installation saying you have no devices you can partition.
DOH!!! It discovered no HDs.

So now you’re asking yourself, where am I going? – Google of course.
RocketRAID 3530 driver page

And you discover there are drivers only for RHEL/CentOS 5.3. Oh! but there’s also source code!
That means we can do either of the following:

  1. Remaster initrd and insert the RocketRAID drivers where needed
  2. Create a new driver disk and use it

I’ll show how to do both.
I’ll assume you have the RocketRAID driver compiled for the installation kernel.
In addition, I’m also going to assume you have a network installation that’s easy to remaster.

Remastering the initrd

What do we have?

# file initrd.img
initrd.img: gzip compressed data, from Unix, last modified: Sun Jul 26 17:39:09 2009, max compression

I’ll make it quicker for you. It’s a gzipped cpio archive.
Let’s open it:

# mkdir /tmp/initrd; gunzip -c initrd.img | (cd /tmp/initrd && cpio -idm)
12113 blocks

It’s open, let’s modify what’s needed.

  • modules/modules.alias – Contains a list of PCI device IDs and the module to load
  • modules/pci.ids – Common names for PCI devices
  • modules/modules.dep – Dependency tree for modules (loading order of modules)
  • modules/modules.cgz – The actual modules inside this initrd (itself a gzipped cpio archive)

Most of the work was already done for us in the official driver package from HighPoint.
Edit modules.alias and add the relevant new IDs there:

alias pci:v00001103d00003220sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00003320sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00003410sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00003510sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00003511sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00003520sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00003521sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00003522sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00003530sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00003540sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00003560sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00004210sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00004211sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00004310sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00004311sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00004320sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00004321sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00004322sv*sd*bc*sc*i* hptiop
alias pci:v00001103d00004400sv*sd*bc*sc*i* hptiop

This was taken from the RHEL5.3 package on the HighPoint website.

So now the installer (anaconda) knows it should load hptiop for the relevant devices. But it needs the module itself!
Download the source package and do the usual build dance – I’m not going into it here. I assume from now on that you have hptiop.ko compiled against the kernel version the installation is going to use.
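If the driver source follows the standard kbuild convention for out-of-tree modules, the build boils down to something like the sketch below – the kernel-devel path is an assumption for the installer’s kernel, and the real targets live in the package’s README:

# cd rr3xxx_4xxx-linux-src-v1.6
# make -C /usr/src/kernels/2.6.18-164.el5-x86_64 M=`pwd` modules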
OK, so the real deal is in modules.cgz, let’s open it:

# file modules/modules.cgz
modules/modules.cgz: gzip compressed data, from Unix, last modified: Sat Mar 21 15:13:43 2009, max compression
# mkdir /tmp/modules; gunzip -c modules/modules.cgz | (cd /tmp/modules && cpio -idm)
41082 blocks
# cp /home/dan/hptiop.ko /tmp/modules/2.6.18-164.el5/x86_64

Now we need to repackage both modules.cgz and initrd.img:

# (cd /tmp/modules && find . -print | cpio -c -o | gzip -c9 > /tmp/initrd/modules/modules.cgz)
41083 blocks
# (cd /tmp/initrd && find . -print | cpio -c -o | gzip -c9 > /tmp/initrd-with-rr.img)
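Before booting the installer with it, a quick sanity check doesn’t hurt – this simply lists the new module inside the repackaged archive:

# gunzip -c /tmp/initrd/modules/modules.cgz | cpio -t | grep hptiop
./2.6.18-164.el5/x86_64/hptiop.ko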

Great, use initrd-with-rr.img for your installation now; it should load your RocketRAID device!

A driver disk

Creating a driver disk is much cleaner in my opinion. You do not remaster a stock initrd just for a stupid driver.
So what is a driver disk? – Without going into the bits and bytes, I’ll just say that it’s a brilliant way of incorporating a custom modules.cgz and modules.alias without touching the installation initrd at all!
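If you do want a peek at the bits and bytes, loop-mount a driver disk image and look around; the listing below is indicative of the stock RHEL5 layout, not a verbatim capture:

# mount -o loop rr.img /mnt/dd && ls /mnt/dd
modinfo  modules.alias  modules.cgz  modules.dep  rhdd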
I knew I couldn’t live quietly with the initrd remaster, so choosing the driver disk (dd for short) option was inevitable.
As I noted before, HighPoint provided only a RHEL/CentOS 5.3 driver disk (and binary), but they also provided the source. I knew it was only a matter of some adjustments to get it working for 5.4 as well.
It is much easier to approach the driver disk now that we are familiar with how the installation initrd works.
I’m lazy, so I already wrote a script that takes the 5.3 driver package and creates a dd:

#!/bin/bash

# $1 - driver_package
# $2 - destination of driver disk
make_rocketraid_driverdisk() {
        local driver_package=$1; shift
        local destination=$1; shift

        local tmp_image=`mktemp`
        local tmp_mount_dir=`mktemp -d`

        dd if=/dev/zero of=$tmp_image count=1 bs=1M && \
        mkdosfs $tmp_image && \
        mount -o loop $tmp_image $tmp_mount_dir && \
        tar -xf $driver_package -C $tmp_mount_dir && \
        umount $tmp_mount_dir
        local -i retval=$?

        if [ $retval -eq 0 ]; then
                cp -aL $tmp_image $destination
                chmod 644 $destination
                echo "Driver disk created at: $destination"
        fi

        rm -f $tmp_image
        rmdir $tmp_mount_dir

        return $retval
}

make_rocketraid_driverdisk rr3xxx_4xxx-rhel_centos-5u3-x86_64-v1.6.09.0702.tgz /tmp/rr.img

Want it for 5.4? – easy. Just remaster the modules.cgz inside rr3xxx_4xxx-rhel_centos-5u3-x86_64-v1.6.09.0702.tgz, replacing the hptiop.ko in it with one built for the 5.4 kernel 🙂
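That remaster is the same trick we pulled on the initrd; a sketch, with temporary paths of my own choosing:

# mkdir /tmp/dd /tmp/ddmod
# tar -xzf rr3xxx_4xxx-rhel_centos-5u3-x86_64-v1.6.09.0702.tgz -C /tmp/dd
# gunzip -c /tmp/dd/modules.cgz | (cd /tmp/ddmod && cpio -idm)
# mkdir -p /tmp/ddmod/2.6.18-164.el5/x86_64
# cp hptiop.ko /tmp/ddmod/2.6.18-164.el5/x86_64
# (cd /tmp/ddmod && find . -print | cpio -c -o | gzip -c9 > /tmp/dd/modules.cgz)
# (cd /tmp/dd && tar -czf /tmp/rr-5u4.tgz .)

Feed /tmp/rr-5u4.tgz to the script above and out comes a 5.4 driver disk.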

Edit your kickstart to load the driver disk:

driverdisk --source=http://UGAT/HA/BAIT/INC/HighPoint/RocketRAID/3xxx-4xxx/rr3xxx-4xxx-2.6.18-164.el5.img

Make sure this line is in the main section and not meta-generated in your %pre section, as the driverdisk directive is processed before the %pre section runs.

The OS doesn’t boot after installation

You moron! This is because the installation kernel/initrd and the one that boots afterwards are not the same!
You can fix it in one of the following three ways:

  1. Recompile the CentOS/RHEL kernel and repackage it with the RocketRAID driver – pretty ugly, not to mention time consuming.
  2. Build a module RPM for the specific kernel version you’re going to use – very clean but also very time consuming!
  3. Just build the module for the relevant kernel in the %post section – my way.

In the %post section of your kickstart, add the following:

(cd /tmp && \
        wget http://UGAT/HA/BAIT/INC/rr3xxx_4xxx-linux-src-v1.6-072009-1131.tar.gz && \
        tar -xf rr3xxx_4xxx-linux-src-v1.6-072009-1131.tar.gz && \
        cd rr3xxx_4xxx-linux-src-v1.6 && \
        make install)

The next boot obviously uses a different initrd image. Generally speaking, initrd creation happens after the %post section, so you shouldn’t worry about it too much. Note, though, that building in %post assumes gcc and a matching kernel-devel are in your %packages list – otherwise there is nothing to compile against.
The server should boot now. Go play with your 12x2TB RAID array.

I hope I could teach you something in this post. It was a hell of a war, discovering how to do all of this properly.
Now if you’ll excuse me – I’ll be going to play with spaceships and shoot rockets!


got r00t?

Introduction

Landing in a new startup company has its pros and cons.
The pros being:

  1. You can do almost whatever you want

The cons:

  1. You have to do it from scratch!

The Developers

Linux developers are not dumb. They can’t be. If they were dumb, they couldn’t have developed anything on Linux. They might have been called developers on some other platforms.
Quite early I was confronted with the question:
“Am I, as a SysAdmin, going to give those Linux developers root access on their machines?”

Why not:

  1. They can cause a mess and break their system in a second.
    A fellow developer (the chowner) once ran:

    # chown -R his_username:his_group *
    

    He came to me saying “My Linux workstation stopped working well!!!”
    Later on I also discovered his working directory was / when he ran this command! 🙂
    In his defence he added: “But I stopped the command quickly! after I saw the mistake!”

  2. And there’s no 2. I think this is the only real reason, given that these are actually people I generally trust.

Why yes:

  1. They’ll bother me less with small things such as mounting/umounting media.
  2. If they need to perform any other administrative action – they’ll learn from it.
  3. Heck, it’s their own workstation, if they really want, they’ll get root access, so who am I to play god with them?

Choosing the latter and letting the developers rejoice with root access on their machines, I had to take some proactive measures in order to avoid unwanted situations down the road.

Installation

Your flavor of installation should be idempotent: the user may destroy his workstation, but you should still be able to reinstall and get back to the same state.
Take for example the chowner developer. His workstation was ruined. I never even considered changing permissions back to their originals; it would cause much more trouble in the long run than any good.
We reinstalled his workstation and after 15 minutes he was happily back to development.

Automatic network installations are too easy to implement today on Linux. If you don’t have one, you must be living in medieval times.
I can give you one suggestion about partitioning though – make sure your developers have /home on a separate partition. When reinstalling, it’ll be easier to preserve /home and wipe all the rest; a kickstart sketch follows.
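Here is what that could look like in kickstart terms – a sketch only, with arbitrary sizes and an assumed device layout:

part /boot --fstype ext3 --size=200
part swap --size=4096
part / --fstype ext3 --size=20480
part /home --fstype ext3 --size=1 --grow

On a reinstall you can then keep the data with something like part /home --noformat --onpart=sda4, letting the installer recreate everything else.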

Consolidating software

I consider installing non-packaged software on Linux a very dirty action.
The reasons for that are:

  1. You can’t uninstall it using standard ways
  2. You can’t upgrade it using standard ways
  3. You can’t keep track of it

In addition to installing only packaged software, you must also have all your workstations and servers synchronize against the same software repositories.
If user A installs software from repository A and user B from repository B, they might see different behavior in their software.
Have you ever heard: “How come it works on my computer and doesn’t work on yours??”
As a SysAdmin, you must reduce the chance of this happening to zero.

How do you do it?
Well, on CentOS – use a YUM repository and cache whatever packages you need from the various internet repositories out there.
Debian? – just the same, only with apt.
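With yum this boils down to one small file on every machine, pointing at your internal mirror instead of the internet – a sketch, with an invented hostname:

# /etc/yum.repos.d/internal.repo
[internal-base]
name=Internal CentOS mirror
baseurl=http://repo.internal.example/centos/5/os/x86_64
enabled=1
gpgcheck=1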

Remember – if you have any software on workstations that is not well packaged or not well controlled – you’ll run into awkward situations very soon.

Today

Up until today the Linux developers in my company still possess their root access, but they barely use it. To be honest, I don’t think they even really need it. However, they have it. It is also about educating the developers that they are given root access because they are trusted. If they blow it, it’s mostly their fault, not yours.

I’ll continue to let them be root when needed. They have proved worthy so far.
And I’ll ask you another question – do you really think that someone who can’t handle his own workstation can be a good developer? Think again!

Posted December 5, 2009 by malkodan in System Administration


Bash RPC

Auto configuring complex cluster architectures is a task many of you would probably skip, with the excuse that it’s a one-time task that will never repeat itself. WRONG!
Being as lazy as I could, and needing to quickly deploy systems for hungry customers, I started out with a small infrastructure that helped me auto configure clusters of two nodes or more.
My use cases were:
1. RedHat cluster
2. LVS
3. Heartbeat
4. Oracle RAC

Auto configuring an Oracle RAC DB is not an easy task at all. However, with the proper infrastructure, it becomes noticeably easier.
The common denominators for all the cluster configurations I had to carry out were:
1. They had to run on more than one node
2. Apart from entering a password once for all nodes, I didn’t want any interaction
3. They all consisted of steps that should run either on all nodes or on just one node
4. Sometimes you could logically split the task into a few phases of configuration, making it easier to comprehend what you have to achieve

Even though it is not the tidiest piece of Bash code I’m going to post here, I’m very proud of it, as it saved me countless hours. I give you the skeleton of the code, which is the essence of what I nicknamed Bash RPC. On top of it you should easily be able to auto configure various setups involving more than one computer.

Using it

The attached sample is a simple script that should be able to bake /home/cake on all relevant nodes.
In order to use the script properly, open it in your favorite editor and stroll through the configuration_vars() function. Populate the HOST_IP array with your relevant hosts.
Now you can simply run the script.
I’m aware of the slight disadvantage that the configuration can’t come from the command line; on the other hand, when dealing with big, complex setups, do you really think your configuration can be expressed in a single line of arguments?

Taming it

OK, this is the interesting part. Obviously no one needs to bake cakes on his nodes, it is truly pointless and was merely given as an example.
So how would you go about customizing this skeleton to your needs?
First and foremost, we must plan. Planning and designing are the key to every tech-related activity you carry out, be it a SysAdmin task or a pure development task. While configuring your cluster for the first time, keep notes of the steps (and commands) you go through. Later on, try to logically separate the steps into phases. When you have them all, we can start hacking Bash RPC.
Start drawing your phases and steps as functions of the form PHASEx_STEPx.
Fill your functions with your ideas and start testing! And that’s it!

How does it work?

Simplicity is the key for everything.
Bash RPC can be run in two ways – either running all phases (a full run) or running just one step.
If you give Bash RPC exactly one argument, it assumes it is a function it has to run. If no arguments are given, it runs the whole script.
Have a look at run_function_on_node(). This function receives a node and a function it should run there. It copies the script to the destination node and executes it with the arguments it received.
And this is more or less the essence of Bash RPC. REALLY!

Oh, there’s one small thing. Sometimes you have to run things on just one host; in that case you can add a ___SINGLE_HOST suffix to a step. This makes sure the step runs on just one host (the first one you defined). A sketch of the whole mechanism follows.
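To make it concrete, here is a minimal sketch of the whole mechanism. HOST_IP and run_function_on_node() are from the real script; everything else here is my own simplification, not the actual cluster-config-cake.sh:

#!/bin/bash
# nodes to configure, normally populated in configuration_vars()
HOST_IP=(192.168.8.1 192.168.8.2)

# a step that runs on every node
PHASE1_STEP1() {
	echo "baking /home/cake on `hostname`"
}

# the ___SINGLE_HOST suffix restricts a step to the first node
PHASE1_STEP2___SINGLE_HOST() {
	echo "icing the cake on `hostname` only"
}

# copy this very script to a node and run a single function there
run_function_on_node() {
	local node=$1; shift
	local func=$1; shift
	scp "$0" root@$node:/tmp/bash-rpc.sh && \
	ssh root@$node /bin/bash /tmp/bash-rpc.sh "$func"
}

# run one step, honoring the ___SINGLE_HOST suffix
run_step() {
	local func=$1; shift
	local node
	if [[ "$func" == *___SINGLE_HOST ]]; then
		run_function_on_node ${HOST_IP[0]} "$func"
	else
		for node in ${HOST_IP[@]}; do
			run_function_on_node $node "$func"
		done
	fi
}

if [ $# -eq 1 ]; then
	# one argument - we are on a node, run the given function locally
	"$1"
else
	# no arguments - full run: all phases and steps, in order
	for step in `declare -F | awk '{print $3}' | grep '^PHASE' | sort`; do
		run_step "$step"
	done
fi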

I’m well aware that this Bash RPC skeleton can be polished some more, and indeed I have a list of TODOs for it. But all in all – I really think this one is a big time saver.

Real world use cases

I consider this script a success mainly because of two real world use cases.
The first one is configuring Oracle RAC from A to Z. Those of you who have been through this nightmare can testify that configuring Oracle RAC takes 2-3 days (a modest estimation) of a DBA and a SysAdmin working closely together. How about an unattended script running in the background, notifying you 45 minutes later that you have an Oracle RAC ready for service?

The second use case is my good friend and former colleague, Oren Held. Oren could easily take this skeleton and use it to auto configure a totally different cluster using LVS over Heartbeat. He was even satisfied while using it. Oren never consulted me while performing this task – and that is the great achievement.

I hope you can take this code snippet and customize it for your own needs, continuing with the YOU CAN CREATE A SCRIPT FOR EVERYTHING attitude!!

Have a look at cluster-config-cake.sh to get an idea about how it’s being done.

Posted October 10, 2009 by malkodan in Bash, Linux, System Administration


Buying time

Wearing both hats of developer and SysAdmin, I believe you shouldn’t work hard as a SysAdmin. It is for a reason that developers sometimes look at SysAdmins as inferior. It’s not because SysAdmins really are inferior or have an easier job; I think it is mainly because a large portion of their workload could be automated. And the little that can’t be automated, you should at least be able to do quickly.

Obviously, we cannot buy time, but we can at least be efficient. In the following post I’ll introduce a few of the most useful aliases (or functions) I’ve written and used over the last few years. Some of them are development related while others could make you a happy SysAdmin.
You may find them childish – but trust me, sometimes the most childish alias is your best friend.

Jumping on the tree

Our subversion repository really reminds me of a tree sometimes. The trunk is very thick, but it has many branches and eventually many leaves. While dealing with the leaves is rare, jumping between the branches is very common. Many times I have found myself typing a lot of ‘cd’ commands, some longer than others, repeatedly, just to get to a certain place. Here my stupid aliases come to help me.

lib takes me straight to our libraries subdirectory, while sim takes me to the simulators subdirectory. Not to mention tr (short for trunk), which takes me exactly to the subdirectory where the sources are checked out. Oh, and pup, which takes me to my puppet root on my puppet master (examples below). Yes, I’m aware that you’re probably wondering “Hey, is this guy going to teach me something new today? – Aliases are for babies!!” – I can identify. I didn’t come to teach you how to write aliases; I’m here to preach that you start using aliases. Ask yourself how many useful day-to-day aliases you have actually defined. Do you have any at all? – Don’t be shy to answer no.
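For the record, each of these is a one-liner. The paths below are made up, and beware that an alias named tr shadows /usr/bin/tr at the interactive prompt, so pick your shortcuts with care:

$ alias tr='cd ~/svn/trunk'
$ alias lib='cd ~/svn/trunk/libraries'
$ alias sim='cd ~/svn/trunk/simulators'
$ alias pup='cd /etc/puppet'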

*nix jobs are diverse and heterogeneous, but let’s see if I can encourage you to write some useful aliases after all.
In case you need some ideas for aliases, run the following:

$ history | tr -s ' ' | cut -d' ' -f3- | sort | uniq -c | sort -n

Yes, this shows how often you’ve used each command in your history, most frequent last. Still not getting it? – OK, I’ll give you a hint, it should be similar to this:

$ alias recently_used_commands="history | tr -s ' ' | cut -d' ' -f3- | sort | uniq -c | sort -n"

If you did it – you’ve just kick-started your way to liberation. Enjoy.
As a dessert – my last childish alias:

$ alias rsrc='source ~/.bashrc'

Always useful if you want to re-source your .bashrc while working on some new aliases.

Two more things I must mention though:

  1. Enlarge your history size – see the snippet right after this list if you can’t figure it out alone.
  2. If you’re feeling generous – periodically collect the history files from your fellow team members (automatically of course, with another alias) and create aliases that will suit them too.
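For item 1, it’s two lines in your .bashrc – the exact numbers are a matter of taste:

export HISTSIZE=100000
export HISTFILESIZE=100000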

The serial SSHer

Our network is 192.168.8.0/24. Many times I’ve found myself issuing commands like:

$ ssh root@192.168.8.1
$ ssh root@192.168.8.2
$ ssh root@192.168.8.3

Dozens of these in a single day. It was really frustrating. One day I decided to put an end to it:

# function for easier ssh
# $1 - network
# $2 - subnet
# $3 - host in subnet
_ssh_specific_network() {
	local network=$1; shift
	local subnet=$1; shift
	local host=$1; shift
	ssh root@$network.$subnet.$host
}

# easy ssh to 192.168.8.0/24
# $1 - host
_ssh_net_192_168_8() {
	local host=$1; shift
	_ssh_specific_network 192.168 8 $host
}
alias ssh8='_ssh_net_192_168_8'

Splendid, now I can run the following:

$ ssh8 1

Which is equivalent to:

$ ssh root@192.168.8.1

Childish, but extremely efficient. Do the same for other commands you use often, such as ping, telnet, rdesktop and many others.

The cop

Using KDE’s konsole? – I really like DCOP. Let’s add some spice to the above function: we’ll rename the session to the host we ssh to, and restore the session name after logging out:

# returns session name
_konsole_get_session_name() {
	# using dcop - obtain session name
	dcop $KONSOLE_DCOP_SESSION sessionName
}

# renames a konsole session
# $1 - session name
_konsole_rename_session_name() {
	local session_name=$1; shift
	_konsole_store_session_name
	dcop $KONSOLE_DCOP_SESSION renameSession "$session_name"
}

# store the current session name
_konsole_store_session_name() {
	STORED_SESSION_NAME=`_konsole_get_session_name`
}

# restores session name
_konsole_restore_session_name() {
	if [ x"$STORED_SESSION_NAME" != x ]; then
		_konsole_rename_session_name "$STORED_SESSION_NAME"
	fi
}

# function for easier ssh
# $1 - network
# $2 - subnet
# $3 - host in subnet
_ssh_specific_network() {
	local network=$1; shift
	local subnet=$1; shift
	local host=$1; shift
	# rename the konsole session name
	_konsole_rename_session_name .$subnet.$host
	ssh root@$network.$subnet.$host
	# restore the konsole session name
	_konsole_restore_session_name
}

Extend it as needed, this is only the tip of the iceberg! I can assure you that my aliases are much more complex than these.

Finders keepers

For the next one I’m not taking full credit – this one belongs to Uri Sivan, obviously one of the better developers I’ve met along the way.
Grepping cpp files is essential; many times I’ve found myself looking for a function reference across all of our cpp files.
The following usually does it:

$ find . -name "*.cpp" | xargs grep -H -r 'HomeCake'

But seriously, do I look like someone who likes to work hard?

# $* - string to grep
grepcpp() {
	local grep_string="$*";
	local filename=""
	find . -name "*.cpp" -exec grep -l "$grep_string" "{}" ";" | while read filename; do
		echo "=== $filename"
		grep -C 3 --color=auto "$grep_string" "$filename"
		echo ""
	done
}

OK, let’s generalize it:

# grepping made easy, taken from the suri
# grepext - grep by extension
# $1 - extension of file
# $* - string to grep
_grepext() {
	local extension=$1; shift
	local grep_string="$*"
	local filename=""
	find . -name "*.${extension}" -exec grep -l "$grep_string" "{}" ";" | while read filename; do
		echo "=== $filename"
		grep -C 3 --color=auto "$grep_string" "$filename"
		echo ""
	done
}

# meta generate the grepext functions
declare -r GREPEXT_EXTENSIONS="h c cpp spec vpj sh php html js"
_meta_generate_grepext_functions() {
	local tmp_grepext_functions=`mktemp`
	local extension
	for extension in $GREPEXT_EXTENSIONS; do
		echo "grep$extension() {" >> $tmp_grepext_functions
		echo '  local grep_string=$*' >> $tmp_grepext_functions
		echo '  _grepext '"$extension"' "$grep_string"' >> $tmp_grepext_functions
		echo "}" >> $tmp_grepext_functions
	done
	source $tmp_grepext_functions
	rm -f $tmp_grepext_functions
}
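One last detail – the generator has to actually run for the grep* functions to exist, so call it from your .bashrc after sourcing these helpers:

_meta_generate_grepext_functions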

After this, you have all of your C++/Bash/PHP/etc developers happy!

Time to showoff

My development environment is my theme park; here is my proof:

$ (env; declare -f; alias) | wc -l
2791

I encourage you to run this as well, if my line count is small and I’m bragging about something I shouldn’t – let me know!

Posted August 21, 2009 by malkodan in Bash, Linux, System Administration


The master of puppets

After several conversations with my good friend (and now also boss) I decided to open a small little Blog which will describe my Linux/SysAdmin/C++/Bash adventures.

I’m Dan; you can learn a lot about me from my website at http://nevela.com . My expertise is Unix/Linux system administration and C++ development. Linux is my home and Bash is my mother tongue. In this Blog I hope to share interesting topics, stories and decisions from the mentioned fields.

I’ll start with something fresh and new for me – Puppet (http://reductivelabs.com/trac/puppet).

I believe system administration is an art, just like programming. From what I’ve seen, SysAdmins divide into two types:

  1. The fireman – most likely the mediocre SysAdmin type you are very familiar with. This type of SysAdmin knows how to troubleshoot and apply the specific fix where it’s needed. They get things done, but not more than that. Gradually their workload grows (especially if they do a bad job) and consumes all of their time. This is the bad type of SysAdmin. Words like planning are usually not part of their jargon (although they might argue with you).
  2. The developer – this type of SysAdmin plans before doing anything. I hope I can belong to this group. He thinks as a developer – whenever a problem arises in a system he manages, he treats it as a bug rather than a problem. This type of SysAdmin will always have his configuration stored in a source control repository. He will always solve the bugs he encounters by fixing the script or program that caused the so-called problem.

If you’re a fireman – then this post is not for you. However, if you’re the second type, then puppet is exactly for you.

As a SysAdmin in a small startup company, I got to a position where I have to manage 2 types of machines:

  1. Production machines
  2. In house machines for internal company usage

Luckily, managing dozens of production machines is easy for us (for me :)), as the configuration is most likely the same on all of them. Ironically, managing the in house machines became the more time consuming task, as each machine has many different and diverse jobs. This is where puppet kicks in.

Up until today, we had a few main servers doing most of the simple tasks, among them:

  1. LDAP
  2. Samba/NFS file sharing
  3. SVN
  4. Apache
  5. In house DNS and DNS caching
  6. Buildbot master
  7. Nagios
  8. And many more other things (you got the point)

Obviously I have all of my configuration stored in our SVN and backed up. Should one of these servers die, I could generate a new one within hours. And yet, most of the configuration there was done manually. Who is the crazy SysAdmin who would write a script to meta-generate named.conf? Who would auto create SVN repositories (especially when there’s just one repository to manage)?

Yes, this is where puppet comes in. Puppet forces you to work correctly. Working with puppet, you realize that the only way you’re going to work is by writing puppet recipes and automating everything that makes your servers what they are. Puppet is also revolutionary in many other ways, but I’ll focus on the way it forces the SysAdmin to work in a clean manner.

One thing I have to warn about: puppet has a slightly steep learning curve, but if you’re the serious Linux SysAdmin type – you won’t find it that steep. You’ll struggle with it. You must.

So I said that (most likely) no one would write a script to generate their DNS configuration, but I saved mine in SVN and wrote a recipe for DNS servers to use:

$dns_username = 'named'
$dns_group = 'named'
$dns_configuration_file = '/etc/named.conf'
class dns-server {
	# puppet takes care to install the latest package bind and it's cross platform!
	package { bind:
		ensure => latest
	}
	# configuring named. subscribing it on /etc/named.conf, which means that every time this
	# file would change, named will refresh
	service { named:
		ensure    => running,
		enable    => true,
		subscribe => File[$dns_configuration_file],
		require   => Package["bind"]
	}
	# this is the /etc/named.conf file. in subclasses we'll specify the source of the file
	file { $dns_configuration_file:
		owner   => $dns_username,
		group   => $dns_group,
		replace => true,
		mode    => 640
	}
}

Alright, this puppet code snippet only configures /etc/named.conf for use with bind – but we never told puppet where to take the configuration from. I’ll share my DNS master server configuration with you:

class dns-master inherits dns-server {
	# if it's a DNS master - lets take the named-master.conf file (obviously stored in SVN as well)
	File[$dns_configuration_file] {
		source => "$puppet_fileserver_base/etc/named/named-master.conf",
	}

	# copy all the zone files
	file { "/var/named":
		source       => "$puppet_fileserver_base/etc/named/zones",
		ignore       => ".svn*",
		checksum     => "md5",
		sourceselect => all,
		recurse      => true,
		purge        => false,
		owner        => $dns_username,
		group        => $dns_group,
		replace      => true,
		mode         => 640
	}

	# subscribe all the zone files in named
	Service[named] {
		subscribe +> File["/var/named"]
	}
}

Great, so now every time I change a zone on my puppet master, named will automatically restart. I believe this is great. Needless to say, I do not use dynamic DNS updates, so I don’t care about zone files being updated behind puppet’s back.

Yes, so I did it once for DNS. If I ever need to change the configuration now, I’ll do it via puppet and let the configuration propagate. Maybe a small phase of checking the new configuration, but right afterwards comes the svn commit – and it’s in. Now I know I’m tidy and clean forever.

Needless to say, if I ever need to configure another DNS server, I’ll use this recipe or extend it as needed – puppet lets you inherit from base classes and add properties and behavior as needed (as you might have imagined, I also have a class for a dns-slave; a sketch of hooking hosts to these classes follows).
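Attaching a machine to one of these recipes is then a single node definition in the site manifest; a sketch with invented hostnames:

node 'ns1.internal' {
	include dns-master
}
node 'ns2.internal' {
	include dns-slave
}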

OK, I should sum it up. To be honest – I’m glad I’m writing this part of the post after several discussions with a few of my wise colleagues.

Why should I use puppet?!

The answer to this one is not easy – and I’m not the one who should answer it. It is you who should answer it.

Adopting puppet reflects one main thing about you as a SysAdmin:

You are a lazy, strict and tidy SysAdmin, an expert.

Posted July 27, 2009 by malkodan in Linux, System Administration
