
got r00t?


Landing in a new startup company has its pros and cons.
The pros:

  1. You can do almost whatever you want

The cons:

  1. You have to do it all from scratch!

The Developers

Linux developers are not dumb. They can’t be. If they were dumb, they couldn’t have developed anything on Linux. They might have been called developers on some other platform.
I was confronted quite early with the question:
“Am I, as a SysAdmin, going to give these Linux developers root access on their machines?”

Why not:

  1. They can make a mess and break their system in a second.
    A fellow developer (the chowner) once ran:

    # chown -R his_username:his_group *

    He came to me saying: “My Linux workstation stopped working well!!!”
    Later on I also discovered he had been at / when running this command! 🙂
    In his defence he added: “But I stopped the command quickly, after I saw the mistake!”

  2. And there’s no 2. I think this is the only real reason, given that these are people I generally trust.

Why yes:

  1. They’ll bother me less with small things such as mounting/unmounting media.
  2. If they need to perform any other administrative action – they’ll learn from it.
  3. Heck, it’s their own workstation, if they really want, they’ll get root access, so who am I to play god with them?

Choosing the latter and letting the developers rejoice with root access on their machines, I had to take some proactive measures to avoid the unwanted situations I might otherwise encounter.
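In practice, granting root doesn’t have to mean handing out the root password; one sudoers entry per developer is enough. A minimal sketch – the username is a placeholder, and the file should only ever be edited through visudo:

```
# /etc/sudoers.d/developers -- edit with 'visudo -f', never directly
# 'jdoe' is a hypothetical username
jdoe    ALL=(ALL) ALL
```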

Idempotent installations
Your flavor of installation should be idempotent: let the user destroy his workstation, but still be able to reinstall and get back to exactly the same state.
Let’s take for example the chowner developer. His workstation was ruined. I never even thought of starting to change back permissions to their originals. It would cause much more trouble in the long run than any good.
We reinstalled his workstation and after 15 minutes he was happy again to continue development.

Automatic network installations are easy to implement today on Linux. If you don’t have one, you must be living in the medieval times or so.
I can give you one suggestion about partitioning though – make sure your developers have /home on a separate partition. When reinstalling, it’ll be easier to preserve /home and wipe all the rest.
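With kickstart on CentOS, for instance, the separate /home is a few lines in ks.cfg. This is only a sketch – sizes and filesystem types here are arbitrary placeholders:

```
# kickstart partitioning sketch -- sizes are placeholders
clearpart --all --initlabel
part /boot --fstype ext3 --size=200
part swap  --size=2048
part /     --fstype ext3 --size=10240
# /home takes the remaining space
part /home --fstype ext3 --size=1 --grow
```

On a reinstall that preserves /home, you would skip the clearpart and mark /home with --noformat instead.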

Consolidating software

I consider installing non-packaged software on Linux a very dirty action.
The reasons for that are:

  1. You can’t uninstall it using standard ways
  2. You can’t upgrade it using standard ways
  3. You can’t keep track of it

In addition to installing only packaged software, you must also have all your workstations and servers synchronize against the same software repositories.
If user A installs software from repository A and user B from repository B, they might see different behavior from the same software.
Have you ever heard: “How come it works on my computer and doesn’t work on yours??”
As a SysAdmin, you must reduce the chances of that happening to zero.

How do you do it?
Well, using CentOS – use a YUM repository and cache whatever packages you need from the various internet repositories out there.
Debian? – just the same – just with apt.
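The client side of that boils down to every machine carrying the same repository definition. A sketch of generating one – the mirror URL is made up, point it at whatever box you populate with reposync and createrepo:

```shell
#!/bin/bash

# write a yum repository definition pointing all machines at one internal mirror
# MIRROR_URL is a placeholder -- substitute your own caching server
declare -r MIRROR_URL="http://repo.example.com/centos/base"
# in real life this goes to /etc/yum.repos.d/internal.repo
declare -r REPO_FILE="internal.repo"

cat > "$REPO_FILE" <<EOF
[internal-base]
name=Internal CentOS mirror
baseurl=$MIRROR_URL
enabled=1
gpgcheck=1
EOF

echo "wrote $REPO_FILE"
```

Disable every other .repo file on the workstations and the “works on my machine” class of surprises mostly goes away.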

Remember – if you have any software on workstations that is not well packaged or not well controlled – you’ll run into awkward situations very soon.


Up until today the Linux developers in my company still possess their root access, but they barely use it. To be honest, I don’t think they even really need it. However, they have it. It is also about educating the developers: they are given root access because they are trusted. If they blow it, it’s mostly their fault, not yours.

I’ll continue to let them be root when needed. They have proved worthy so far.
And I’ll ask you another question – do you really think that someone who can’t handle his own workstation can be a good developer? Think again!


Posted December 5, 2009 by malkodan in System Administration


Conventions

I like Bash, I really like Bash. It’s simple, it’s neat, it’s quick, it’s many things. No one will convince me otherwise. It’s not for big things though, I know, but for most system administration tasks it will suffice.
The problem with Bash begins when incompetent SysAdmins start to use it. They will most likely treat it poorly and write sloppy, lame code.
Conventions are truly needed when writing in Bash. In our Subversion repository we have ~30,000 lines of Bash. And HELL NO! – our product is definitely not a lame conglomerate of Bash scripts gluing many binaries together. Bash is there for everything that C++ (in our case) shouldn’t do, things like:

  • Packaging and anything related to packaging (post scripts of packages for instance)
  • SysV service infrastructure
  • Backups
  • Customization of the development environment
  • Deployment infrastructure

So, where were we? Oh, right – how do we manage ~30,000 lines of Bash without everything getting messed up? For this, we need 3 main things:

  • Version control system (CVS, SVN, git – take your pick)
  • Competent people
  • Coding conventions

Configuring a version control system is easy, especially if you are a competent developer, so I’ll skip straight to the 3rd one.

Conventions. Show me just one competent C/C++/Java/C# developer who would skip coding conventions and program like an ape. OK, I know, there might be some, but we both think the same thing about them. Scripting in Bash shouldn’t be an exception. We should use strict scripting conventions in Bash in particular and in any other scripting language in general.
There’s nothing uglier and messier than a Bash script that is written without conventions (well, maybe Perl without conventions).

In the following post I’ll try to introduce my Bash scripting conventions, and if you find them neat – feel free to adopt them. Up until today, sadly, I haven’t seen anyone suggest any Bash scripting conventions.

Before starting any Bash script, I lay down the following skeleton:


main() {
	# all code lives in functions, starting here
}

main "$@"

Call me crazy, but without my main() function I ain’t going anywhere. Now we can start writing some Bash. Once you have your main(), it means you’re going to write code ONLY inside functions. I don’t care that it’s a scripting language – let’s be strict.


# $1 - hostname
# $2 - destination directory
main() {
	local hostname=$1; shift
	local destination_directory=$1; shift
}

main "$@"

Need to receive any arguments in a function? – Do the following:

  • Document the variables above the function
  • Use shift after extracting each variable, and always use $1 – it makes it easier to reorder the parameters later
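A tiny runnable illustration of the two rules above (the function and its arguments are made up for the example):

```shell
#!/bin/bash

# $1 - greeting
# $2 - name
format_greeting() {
	local greeting="$1"; shift
	local name="$1"; shift
	echo "$greeting, $name!"
}

main() {
	format_greeting "Hello" "world"
}

main "$@"
```

Reordering the parameters now only means swapping two extraction lines and two comment lines.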

Notice that I’m using the local keyword to make sure the scope of the variable is only within its function. Be strict with variables. If for some reason you decide you need a global variable, that’s OK, but make sure it’s in capital letters and declared as needed:

# a read only variable
declare -r CONFIGURATION_FILE="/etc/named.conf"
# a read only integer variable
declare -i -r TIMEOUT=600

# inside a function
sample_function() {
	local -i retries=0
}
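What those flags actually buy you can be checked in a couple of lines; a quick sketch reusing the TIMEOUT variable from the example above:

```shell
#!/bin/bash

declare -i -r TIMEOUT=600

# -i makes every assignment arithmetic: variable names on the right are resolved
declare -i total=TIMEOUT*2

# -r makes reassignment fail; try it in a subshell so the script itself survives
if ! (TIMEOUT=5) 2>/dev/null; then
	echo "TIMEOUT is read-only, total=$total"
fi
```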

You get the point – I’m not going to make it any easier for you. Adopt whatever you like and remember that being strict in Bash yields higher-quality code, not to mention better readability. Feeling like writing a sloppy script today? – Go home and come back tomorrow.

Following is a script written by the conventions I’m suggesting. It is a very simple script that displays a dialog asking the user which host he would like to ping, then shows the result in a tailbox dialog. This is more or less how many of my scripts look. I admit it took me a while to find a script which is not too long (under 100 lines) and can still represent some of the ideas I mentioned.

I urge you to question my way and conventions and suggest some of your own, anyway, here it is:


#!/bin/bash

# dialog options (notice it's read only)
# notice as well it's in capital letters
declare -r DIALOG_OPTS="0 0 0"
declare -i -r OUTPUT_DIALOG_WIDTH=60

# ping timeout in seconds (using declare -i because it's an integer and -r because it's read only)
declare -i -r PING_TIMEOUT=5
# size of pings
declare -i -r DEFAULT_SIZE_OF_PINGS=56
# number of pings to perform (4 here is an arbitrary default)
declare -i -r DEFAULT_NUMBER_OF_PINGS=4

# neatly pipes a command to a tailbox dialog
# here we can specify the parameters the function expects
# this is how i like to do it, perhaps there are smarter ways that may integrate better
# with doxygen or some other tools
# $1 - tailbox dialog height
# $2 - title
# $3 - command
pipe_command_to_tailbox_dialog() {
	# parameters extraction, always using $1, with `shift` right afterwards
	# in case you want to play with the order of parameters, just move the lines around
	local -i height=5+$1; shift
	local title="$1"; shift
	local command="$1"; shift

	# need a temporary file? - always use `mktemp`, please spare me the `/tmp/$$` stuff
	# or other insecure alternative to temporary file creations, use only `mktemp`
	local output_file=`mktemp`
	# run in a subshell and with eval, so we can pass pipes and stuff...
	# eval is a favorite of mine, it means you truely understand the Bash shell
	# nevertheless - it is really needed in this case
	(eval $command) >& $output_file &
	# need to obtain a pid? - it's surely an integer, so use 'local -i'
	local -i bg_job=$!

	# ok, show the user the dialog
	dialog --title "$title" --tailbox $output_file $height $OUTPUT_DIALOG_WIDTH

	# TODO my lame way of checking if a process is running on linux, anyone has a better way??
	# my way of specifying a 'TODO' is in the above line
	if [ -d /proc/$bg_job ]; then
		# if the process is stubborn, use 'kill -9'
		kill $bg_job || kill -9 $bg_job >& /dev/null
	fi
	# wait for the process to end by itself
	wait $bg_job

	# not cleaning up your temporary files is similar to a memory leak in C++
	rm -f $output_file
}

# pings a host with a nice dialog
ping_host_dialog() {
	local ping_params_tmp=`mktemp`
	# slice lines nicely, i have long commands, i'm not going even to mention
	# line indentation - that goes without saying
	# i like to use dialogs, it makes more people use your scripts
	if dialog --ok-label "Submit" \
		--form "Ping host" \
			"Address:" 1 1 "" 1 30 40 0 \
			"Size of pings:" 2 1 "$DEFAULT_SIZE_OF_PINGS" 2 30 40 0 \
			"Number of pings:" 3 1 "$DEFAULT_NUMBER_OF_PINGS" 3 30 40 0 2> $ping_params_tmp; then
		# ping_params_tmp will be empty if the user aborted the dialog...
		local address=`head -1 $ping_params_tmp | tail -1`
		# yet again if you expect an integer, use 'local -i'
		local -i size_of_pings=`head -2 $ping_params_tmp | tail -1`
		local -i number_of_pings=`head -3 $ping_params_tmp | tail -1`
	fi
	rm -f $ping_params_tmp

	# this is my standard way of checking if a variable is empty
	# may not be the prettiest way, but it surely catches the eye...
	if [ x"$address" != x ]; then
		pipe_command_to_tailbox_dialog 15 "Pinging host \"$address\"" "ping -c $number_of_pings -s $size_of_pings -W $PING_TIMEOUT \"$address\""
	fi
}

# main function, can't live without it
main() {
	ping_host_dialog
}

# although there are no parameters passed in this script, I still pass them on to main, as a habit
#main $*
# after Uri's comment, I'm fixing the following and calling "$@" instead
main "$@"
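On the TODO inside pipe_command_to_tailbox_dialog: a common alternative to poking /proc is kill -0, which delivers no signal and only reports whether the pid exists. A sketch, not necessarily better than the above:

```shell
#!/bin/bash

# returns success if the pid is alive (kill -0 sends no signal, only checks)
# $1 - pid
is_running() {
	local -i pid=$1; shift
	kill -0 "$pid" 2>/dev/null
}

if is_running $$; then
	echo "this very shell is, unsurprisingly, running"
fi
```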

I don’t want this post to be too long (it is already quite long), but I think you got the idea. I still have some conventions I did not introduce here. In case you liked what you’ve seen – do not hesitate to contact me so I can provide you with a document describing all of my Bash scripting conventions.