the Relief

Yet another Blosxom weblog.

The Humble Developer Manifesto

  1. You don’t run this. Users do.

  2. You don’t define what your program is. Users do.

  3. You don’t own the environment. Users do.

  4. It’s not a merit but a privilege to write software.

  5. If people’s workflows depend on it, it’s not a bug, it’s a feature.

  6. Documentation, not behavior, is what matters in the definition.

  7. Project name, commands/options/module/function names—the whole interface is part of the documentation.

  8. By making mistakes, you may get locked inside the bug/feature maze. You belong there, but you may earn your way out by means of proper versioning. It can be a very long way.

  9. Code is not for computers. Code is for humans.

Big Bang and the migration to FLOSS

In the past 40-50 years there has been a huge big bang that ended up making computers part of everyone’s lives. But since the whole thing was so sci-fi and mysterious, the vast majority of “normals” were easily lured into the illusion that it’s OK to have proprietary formats.

If someone offered cars that could only drive on one type of road, they would stand no chance. But with computers, given the mysteriousness of the whole thing, the infinite usability and the infinite fun, it felt natural to pay and easy to agree to everything.

The point I’m trying to make is that I don’t think any massive migration to FLOSS is going to happen before people wake up from the dream and realize that the IT revolution can and should mean way more than new blinking gadgets and fast messages.

That the real revolution was that power went into their hands. And as with every time in history that people have gained more power, it will take generations to learn how to use it without stepping on each other’s toes.

It’s not going to be different this time, but it can be way faster.

Frankly, I believe the only way out of this mess is education. Explain to them who is in power, and how. Explain to them what is important (hint: it’s the people behind FLOSS, and that could be them; then the data formats; then the software) and where they should put their money.

Until then, they will always just grab the more colorful box. But once they understand, they will build a world where proprietary means inferior.

Stop fixing bugs!

Yes, dear developers, you heard me right: don’t fix bugs!

You know, for most users, using software is like geocaching: you find a bug, and you are happy about it. Some people go brag about it in a bug tracker, some don’t, but that’s OK, as long as we are all having fun.

Those complaining about caches are just nitpickers or old-timers who want to spoil the fun. Or have some kind of OCD.

Do cache owners remove caches once they are found? No! In fact, some caches are found by thousands and still remain in place!

Please don’t waste your time removing bugs when you could be adding more!

Playing with OpenLMI indications

Before I forget it, here’s how I managed to get OpenLMI indications working. I’ll be showing how to do it in an interactive LMIShell (although the same should work in a “plain” Python shell or e.g. IPython), so that you end up with an Indication object to play with.

I will be creating an account deletion indication on my Fedora 20 testing machine.

Prepare the CIMOM server

To make this work, you will need to have a CIMOM running. Installation of the server, including the OpenLMI components used here, is described on the official OpenLMI pages.

However, if you are brave enough to jump right in, here’s what works for me on my Fedora 20:

Warning: in case you missed it, the configuration below is dirty and dangerous and should only be used on a throw-away testing machine.

yum install tog-pegasus openlmi-providers openlmi-tools
echo " $(hostname)" >> /etc/hosts
service tog-pegasus start      # to generate server.pem
service tog-pegasus stop
cp "/etc/Pegasus/server.pem" \
echo "setting pegasus password"
echo "pegasus:aaa" | chpasswd

As you can see, I used Pegasus, as is currently recommended. Unfortunately, due to a bug, I needed to run Pegasus outside systemd, i.e. as an ordinary process as opposed to a proper service. And due to another bug, I needed to disable SELinux. (Yes, OpenLMI is in a pretty early phase ;)

# setenforce 0
# cimserver daemon=false forceProviderProcesses=false \
> &>/dev/null &
[1] 4567

Now the CIMOM should be up and running.

Subscribe to indications

In order to let the CIMOM know where we (as the application that receives the indication) are listening and what we are interested in, we need to subscribe to the indication. This can be done in a single LMIShell instance along with setting up the listener, but as with most things in OpenLMI, you can also do it from another machine.

To subscribe manually, open lmishell, connect to the CIMOM and type the following code:

> c = connect("https://hostname", "pegasus", "aaa")
> q = ("select * from LMI_AccountInstanceDeletionIndication"
... " where sourceinstance isa LMI_Account")
> c.subscribe_indication(
... Name="hello",
... Destination="http://localhost:1234",
... Query=q)
LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='')

From this point on, the CIMOM expects that an HTTP server interested in account deletion is listening on http://localhost:1234. So if it detects such an event, it will pack a CIM indication into XML and HTTP-POST it to that port.

The query format is CQL, i.e. the CIM Query Language, which is similar to SQL but tailored for querying a CIMOM. (Don’t confuse it with the Cassandra Query Language.)

Create and start a listener

Now we need to create a listener, i.e. something that will accept the mentioned POST and help us make use of it. Sure, we could write our own server in any language, but LMIShell already has a nice one.

The LMIIndicationListener is a simple HTTP server that, apart from accepting the POST, will also convert it to a proper object, take care of threading, and launch any handlers we have added to it when a message arrives.
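Under the hood, that is really all it is. For illustration only, here is a minimal stand-in written in plain Python 3 (this is a sketch, not LMIShell code; port 1234 just matches the subscription used in this post):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class IndicationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # the indication arrives as a CIM-XML document in the POST body
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)
        print("got indication: %d bytes of CIM-XML" % len(payload))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the console quiet

server = HTTPServer(("", 1234), IndicationHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

Of course, this sketch only reports the payload size; the real LMIIndicationListener additionally parses the CIM-XML into an object and dispatches it to your handlers.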

In order to do this in an interactive shell, so that we can explore and play with it, we will use a dictionary as a holder for the arrived indication. The only task of the handler will be to store the indication in that dictionary:

> from lmi.shell import LMIIndicationListener
> d = {}
> l = LMIIndicationListener("", 1234)
> han = lambda ind, d: d.update({'ind': ind})
> l.add_handler("hello", han, d)
> l.start()

So at this moment, we have everything set up:


The only thing we need now is to actually trigger the event. For example, we can open another terminal window and fire:

# useradd foo && userdel -r foo


Now if you return to the interactive LMIShell, d should contain a new element. With some knowledge of the LMI/CIM structure, we can, for example, see the name of the account that was deleted:

> d
{'ind': <lmi.shell.LMIIndication.LMIIndication object at 0xf7b650>}
> d['ind'].exported_objects()[0]['SourceInstance'].properties['name'].value

Easy, right? ;)


Here’s a copy-pasting friendly version of the above commands:

c = connect("https://hostname", "pegasus", "aaa")
q = ("select * from LMI_AccountInstanceDeletionIndication where sourceinstance isa LMI_Account")
c.subscribe_indication(Name="hello", Destination="http://localhost:1234", Query=q)


from lmi.shell import LMIIndicationListener
d = {}
l = LMIIndicationListener("", 1234)
han = lambda ind, d: d.update({'ind': ind})
l.add_handler("hello", han, d)
l.start()



Notch: On patents

I came across quite an interesting article about patents on the blog of Markus Persson, alias “Notch”, the Swedish developer of the popular game Minecraft. Here is my translation:

On patents

Let’s say you are Neo, and you are the first to come up with the idea of a novel. It’s like a short story, only longer, and you are really proud of it.

Then Trinity runs up to you and takes a few copies of your freshly printed novel. You don’t want that, because you paid good money for the printing and you hope to get that money back, so you stun her with a taser. Trinity just attempted theft.

Trinity sulks for a while and then asks whether she can borrow one copy to read. “Sure,” you say, but then she slips over to the copier and starts printing her own copies of your novel. You don’t want that, because you want to be the only one making new copies, and you want to profit from them, so you stun her with a taser. Trinity just attempted copyright infringement.

Trinity sobs for a while longer and finally starts writing a novel of her own. You don’t want that, because writing longer stories was your idea, and you want to profit from every novel anyone ever writes, so you stun her with a taser. Trinity just attempted patent infringement.

I have no problem with the concept of ownership, so I am against theft. If people cannot own things, society falls apart.

On the whole, I have no problem with the concept of “selling what I came up with” either, so I am also against copyright infringement. I don’t think it is as bad as theft, and I am not sure it is good for society that some professions can be paid over and over again for work they did once (like, say, a game developer), while others have to do the work over and over again to get paid at all (like, say, a hairdresser or a lawyer). But yes, the principle of “selling what I came up with” is a good one.

But there is no way in the world you can convince me that it is good for society not to share ideas. Ideas are free. They improve old things, making them better, and that gradually adds up to improving society as a whole. We advance precisely by sharing ideas.

A common argument for patents is that inventors will not invent unless they can protect their ideas. The problem is that patents apply even if the infringer came up with the idea independently of the original author. If the idea is that simple, why should we reward whoever just happened to be first?

There are areas where research is very expensive but the long-term benefit to humankind is very positive. Personally, I would prefer such research to be funded by governments (like, say, CERN or NASA) and be free of patents (unlike what is happening in medicine), but I can at least understand why some people think patents are good for these areas.

Trivial patents, like those on software, are counterproductive (they slow down technical progress), evil (they sacrifice goslings to Bhaal) and expensive (they get companies tangled up in pointless lawsuits).

If you own a software patent, shame on you.

Perl diamond operator: input priority

Q: When using the diamond operator in Perl and providing both STDIN and arguments, which input is used?

A: If arguments are provided, they are used in the order in which they appear. Otherwise, STDIN is used. So, given this script:

while (<>) {
    print "I found: $_\n";
}

The following behavior is seen:

me@here:~$ echo AAA > a
me@here:~$ echo BBB > b
me@here:~$ ./ a b
I found: AAA
I found: BBB
me@here:~$ echo CCC | ./ a b
I found: AAA
I found: BBB
me@here:~$ echo CCC | ./
I found: CCC
me@here:~$ ./ 
I found: DDD
I found: EEE

Got it? (In the last case, DDD and EEE were typed at the terminal: with no file arguments and no pipe, <> reads interactively from STDIN.)

Explanation of immutability (finally!)

I had some trouble understanding the concept of (string) immutability in some scripting languages. And by understanding, I usually mean understanding to the point where I’m able to explain it to another person. So here it is (using Python in the examples):

Immutability means that you cannot change the value of something. You can always create a new value and assign it to another (or the same) variable, but you cannot change the original value. No, not in the sense that you would change the meaning of “2” forever (that is obviously unthinkable), but in the sense that you can’t change the data in place.

Consider these assignments:

number = 123
string = "hello"

For both numbers and strings, it is typical to perform operations on them like this:

number = number + 321
string = string + " world"

Now the immutability of strings means that you cannot do something like this:

string[0] = "H"

While I never had a problem disciplining myself not to want to do things like that, until recently I couldn’t resist asking: Well, but why? Isn’t that just another operation, like the ones above? And if I have a pointer (a variable name) and data, why can’t I just go and alter the data as I wish?
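And indeed, Python refuses the in-place assignment outright; the only way to get the changed value is to build a new string:

```python
string = "hello"

try:
    string[0] = "H"            # in-place mutation is refused...
except TypeError as e:
    print("refused:", e)

string = "H" + string[1:]      # ...building a new value is the way
print(string)                  # prints "Hello"
```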

Recently, a nice analogy popped into my head: if you could do

string[0] = "H"

wouldn’t you also want to do

number[1] = 5

and expect to have 153 in the same place in memory as 123 was previously?

Now that sounds crazy, doesn’t it? I think one of the possible reasons for this immutability is the following:

Most data structures are much more complicated than they might seem. There are various notations for numbers, encodings for strings and countless ways of how these values can be actually stored.

Scripting languages are designed to be portable, and they are designed to be high-level. If you wanted to change the “2” in “123”, not only would we need to be sure what base (10, 16, 8, … 2?) we are talking about on both sides of the equals sign, but also, without knowledge of the underlying storage, the performance impact would be unpredictable.

Since strings (especially when you consider Unicode) are even more complicated than that, letting users mangle them without giving them control over the exact details would lead to unpredictable code. On the other hand, giving them that control would lead to non-portable code.

The only easy way out: do not mangle values. Say what you want, but do not mangle values.

Nicest PS1 so far (git-enabled)

Today I lost my patience over not seeing current git branch at my prompt.

But after adding the __git_ps1 call to my PS1 variable in .bashrc, it got way too long. No way to fit it in 80 characters. Also, there’s a catch: while variables are expanded at the moment .bashrc is sourced, you don’t want that to happen to the __git_ps1 call, otherwise you would be stuck with the same value all the time.

Instead, you need to pass it to the variable literally, i.e. by use of single quotes (') or backslashes (\). Now the problem is that if you’re like me and are scared of those butt-ugly color codes, you use variables for them.

But for those to work, you need exactly the opposite behavior: you want them to expand at the moment PS1 is defined and exported! Let’s say you decide to be a real hero and use the raw codes after all. As a “reward”, what you get in your .bashrc is not only way uglier but also even longer.

So I decided to solve it once and for all:

# get some fancy colorz (codes below are just examples; pick your own)
white='\[\e[0m\]';      red='\[\e[0;31m\]';     green='\[\e[0;32m\]';
lyellow='\[\e[1;33m\]'; lblue='\[\e[1;34m\]';

# and use to assemble own PS1
ps1u="$lyellow\u$white";                    # user
ps1h="$red\h$white";                        # host
ps1w="$lblue\w$white";                      # working directory
ps1G='$(__git_ps1 "(%s)")';                 # the git call
ps1g="$green$ps1G$white";                   # coloring around the git call 
export PS1="$ps1u@$ps1h:$ps1w$ps1g\$ ";     # final PS1

Not only did I create enough space to neatly comment the composition. Since I use different colors for different hosts (and for different users), it’s now even easier to share this across my multiple machines and users: it’s much more obvious which spot is the right one where a color needs changing.

My First Roundcube Installation

Today I decided to try out roundcube for my server.

If you don’t know it, Roundcube is a web-based e-mail client with pretty cool looks that simply works in most browsers I know. (Well, I admit that IIUC I have only really used it in Opera.)


So on my Debian VPS box, the installation itself was trivial:

$ sudo aptitude install roundcube

After the initial aptitude stuff, the installer asked me only two questions:


I always feel so stupid when tackling Apache configuration, but since I wanted to

this was a bit “tricky” and actually made me read about three whole paragraphs of the Apache documentation.

So finally it boiled down to:

  1. Create new Apache VirtualHost (assuming DNS part is already solved):

    $ cd /etc/apache2/sites-available/
    $ sudo cp default webmail
  2. Add ServerName directives as you need (depends on how browsers will call you) e.g.:

    ServerName webmail
  3. Change DocumentRoot to /var/lib/roundcube

    DocumentRoot /var/lib/roundcube
  4. Add an Alias for TinyMCE (needed by Roundcube internally), i.e. add this to webmail:

    Alias /roundcube/program/js/tiny_mce/ /usr/share/tinymce/www/
  5. Enable new site and restart Apache

    $ sudo a2ensite webmail
    $ sudo apache2ctl restart

And I was free to go!
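For reference, after steps 1-4 the resulting /etc/apache2/sites-available/webmail could look roughly like this (a sketch only; the Directory block is what the copied default site typically carries, and your copy may differ):

```apache
<VirtualHost *:80>
    ServerName webmail

    DocumentRoot /var/lib/roundcube
    <Directory /var/lib/roundcube>
        Options FollowSymLinks
        AllowOverride All
    </Directory>

    # TinyMCE is referenced by Roundcube under its own path
    Alias /roundcube/program/js/tiny_mce/ /usr/share/tinymce/www/
</VirtualHost>
```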

SSL Configuration

Now the next step is the SSL part. This is what worked for me:

  1. Generate SSL certs using

    # make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /etc/ssl/private/webmail.crt

    Note that the above command places both the certificate and the key together. Do not make them readable by anyone but root. (www-data won’t need them, as its parent process takes care of this.)

  2. Copy default-ssl to webmail-ssl

    # cd /etc/apache2/sites-available
    # cp default-ssl webmail-ssl
  3. Configure the site similar to webmail above (ServerName, DocumentRoot…)

  4. Set the SSLCertificateFile directive to the above certificate:

    SSLCertificateFile /etc/ssl/private/webmail.crt

    (For the reason I mentioned above, the SSLCertificateKeyFile directive can be removed.)

  5. If you want to force the use of SSL by redirecting the old non-SSL site to the new one, also add this to the old webmail site:

    RewriteEngine On
    RewriteCond %{SERVER_PORT} !^443$
    RewriteRule ^(.*)$1
  6. Enable the site, restart Apache and have fun!

    $ sudo a2ensite webmail-ssl
    $ sudo apache2ctl restart


What is missing yet is

How I set up my `ssh-agent` 2

In one of my previous posts, I described how I set up my box to make proper use of ssh-agent.

I was quite pleased with my new super-secure .-) system, but soon I realized the bad news: this won’t work on my terminal-only VPS. And since I’m using the box for development more and more, which involves using git+GitHub all the time, I grew tired of typing my passphrase over and over.

Fortunately, it was all as easy as reading the rest of the aforementioned post plus some copy & pasting. There are three solutions; I chose the last one:

SSH_ENV="$HOME/.ssh/environment"

function start_agent {
     echo "Initialising new SSH agent..."
     /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
     echo succeeded
     chmod 600 "${SSH_ENV}"
     . "${SSH_ENV}" > /dev/null
     /usr/bin/ssh-add;
}

# Source SSH settings, if applicable

if [ -f "${SSH_ENV}" ]; then
     . "${SSH_ENV}" > /dev/null
     #ps ${SSH_AGENT_PID} doesn't work under cywgin
     ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
         start_agent;
     }
else
     start_agent;
fi

I put the above at the end of my .bashrc and everything works like a charm.

Tale of: Node.js installation on my Debians

Today I decided to install the marvellous thing called Node.js on both of my Debians (for learnvelopment purposes).

Here’s the story.

The first catch

As usual, the first approach I tried was using the official Debian repositories. However, there was a catch: the boxes are both Wheezy, but Node.js is only available in Sid.

So what next? I tried so-called apt pinning, a method that enables you to install a particular package (and its dependencies) from a newer branch while leaving the rest of your system as is.

Basically, you achieve that by adding all the relevant repos to your /etc/apt/sources.list, then setting up apt to prefer the branch you want to stick with (so your whole system does not end up on Sid), and installing the package from the given branch using the syntax pkgname/branchname, e.g. aptitude install nodejs/unstable.
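For the record, the pinning part could look something like this in /etc/apt/preferences (the priorities are illustrative; anything below 500 keeps Sid packages from winning upgrades while still allowing explicit installs):

```
Package: *
Pin: release n=wheezy
Pin-Priority: 900

Package: *
Pin: release n=sid
Pin-Priority: 100
```

With that in place, aptitude install nodejs/unstable pulls nodejs (and only what it needs) from Sid.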

The next catch

After I successfully had apt deliver nodejs to my box (without breaking everything), I noticed one discrepancy: in Sid’s Debian package, the main Node.js engine binary is called nodejs, as opposed to node when installed from source.

OK, given that both of these sources are moving targets, we don’t have to be scared by this: hopefully it will be settled at some point, and definitely before Node.js can actually be considered stable enough to get into Debian stable.

The only real problem with this would be more of a nuisance (well, an ugly kind of one): if you needed to run the same script on multiple machines where the actual node binary lives in different places, you would have to update the shebang line every time!

But as I said, these are moving targets, and I’m not sure whether a settled consensus exists yet.

The showstopper

The next thing I wanted to do was add the module node-tap (a TAP producer for Node.js) to my project. The recommended way is to add it using npm, the standard Node.js package manager.

It turns out that npm is not part of nodejs, so one more aptitude install was needed. But now the problem: I won’t go into details, but while trying to install npm, I found out that it has dependency problems with nodejs itself! The only option to resolve the dependencies for npm involved removing nodejs!

I guess this is what you get when you mess with Sid. aptitude purge nodejs.

Old way the gold way

As Julian Knight says on SO:

Node.js is rather too fast moving for Debian packages to keep up. I would strongly recommend that you do your own installs until such a time as Node settles down. For example, there was an unexpected update recently that fixes an important security loophole - you do not want to be dependent on a package that is for an out-of-date version of Debian when things need to move fast.

So I finally ended up building a custom install. Thanks to Tim Matison’s blog post (leaving out the CouchDB part), it was quite easy:

CORES=2             # set to how many cores you want to use
git clone
cd node
export JOBS=$CORES
mkdir ~/local
./configure --prefix=$HOME/local/node
make -j$CORES
make install

After the install, the only thing I did was add aliases for node and npm to my .bashrc:

alias node='~/local/node/bin/node'
alias npm='~/local/node/bin/npm'

OK, there is one more thing: as with all custom installs, scripts downloaded from wherever won’t know where the binary is. So you will have to remember to alter the shebang line appropriately.

You could avoid that either by doing a full install or by creating a symbolic link somewhere on the system-wide PATH. In other words, going one step further towards full integration into your system. However, for the reasons regarding Node.js’s development stage mentioned above, I decided to avoid both.

The moral

Don’t be afraid of building from scratch. There’s one nice advantage: as long as you choose reasonable folders for sources and binaries, you can always update as “fast” as git pull && make -j2 && make install.

Hex vs. Dec via `printf`

It’s possible to convert between hexadecimal and decimal formats using only bash and printf, but I always spend a while re-creating the syntax. So here is the reminder.

The principle is very easy. You just need to tell printf what format to use for output, and provide the number in correct notation as argument.

The notation for decimal numbers is as usual, e.g. 123; for hexadecimal numbers we prepend 0x, e.g. 0xf4c5. The case does not matter: 0XF4C5 is the same as 0xf4c5, although it’s unusual to use a capital X.

The way we tell printf what to use is via placeholders. The placeholder for a decimal number is %d, and for hexadecimal it’s %x. For better readability, we’ll also include \n for a newline, as printf does not append it by default. So, to convert B1EF hex to decimal and back:

aloism@azzgoat:~$ printf "%d\n" 0xB1EF
45551
aloism@azzgoat:~$ printf "%x\n" 45551
b1ef

Now a more advanced example: to convert an IPv4 address to an IPv4-mapped IPv6 address, you may want to use the following:

aloism@azzgoat:~$ printf "::FFFF:%02x%02x:%02x%02x\n" 192 0 43 10
::FFFF:c000:2b0a

See printf(3) manpage for more on printf syntax.

Nice feature: nmap stats

Just noticed a nice feature in nmap:

While an nmap scan is running, you can press any key to show the progress. This is how it looked when I pressed an arrow key three times after starting an nmap -sV scan:

lennycz@hugo:~$ nmap -sV

Starting Nmap 6.00 ( ) at 2012-07-18 20:14 CEST
Stats: 0:00:04 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 51.33% done; ETC: 20:14 (0:00:04 remaining)
Stats: 0:00:06 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 51.93% done; ETC: 20:14 (0:00:06 remaining)
Stats: 0:00:09 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 52.93% done; ETC: 20:14 (0:00:09 remaining)

So if the scan takes longer and you are impatient, just press any key to see how it stands.

How I set up my `ssh-agent`

Based on this nice guide by Mark A. Hershberger, it was quite easy to set up a basic ssh-agent configuration. Here’s what worked nicely for me on my Debian Wheezy box with Xfce4:

  1. I created a key-pair and added it to my favorite servers (well, I did that a pretty long time ago, but that’s just a basic stuff)

  2. I added ssh-agent xfce4-session to my ~/.xsession file

  3. And that’s it. Every time I want to use ssh (or scp or whatever), I just need to run ssh-add, which defaults to ~/.ssh/id_rsa (or I could add another identity by providing a path to the file as the only parameter), but I only need to do it once per session.

  4. From that point on, no ssh-agent-compatible client will ask me for the passphrase.

  5. Well, at this point, physical access to my box would mean easy access to the identity and therefore to any servers that trust it, so if I get paranoid, I can always remove the identity with ssh-add -d (or ssh-add -d some/other/id_rsa, respectively).

Provided that the actual host is and user is me, a nice addition to convenient use of ssh is to provide following into your ~/.ssh/config:

Host mh
User me

This will create an mh “alias”, so you can run ssh mh or scp foo mh: instead of typing all the long stuff over and over. You can set multiple aliases as well as other host-specific options, like Port or the actual identity file to use. More on that in this nice article on How-To Geek.

StackExchange and others

The Stack Exchange Network is a family of Q&A sites. A huge family of Q&A sites, each with its own topic, ranging from programming and PC gaming through physics and mathematics to cooking, bicycles and the English language. Currently there are over 80 sites already established and about another 50 in beta.

These are some of the most significant things about StackExchange sites:

Sounds great, yeah? It is. In fact, it’s awesome. But there’s one warning. Follow it carefully or you might get frustrated:

The community’s idea of what is considered good or bad is presented in the form of a FAQ. (The FAQ is almost identical on most sites, although there are rare exceptions.) You’ll soon find that many community members can be very pedantic and bitter regarding the quality of posts, and often refer users to the FAQ.

That’s because the FAQ is something like the highest law of all SE sites. Disobey it and you will lose reputation. It may sound harsh, but on the other hand, it works. And it is rewarding: obey the rules and your chances of getting good answers go up!

So be prepared and do read the FAQ. It’s really not very long.

Wireshark off-the-screen: Workaround

Due to bug 553, Wireshark sometimes appears off-screen, making it impossible to drag it by the window title. This happens particularly often with multi-monitor setups.

The workaround is:

  1. Move window to desired position using keyboard
    • e.g. Alt+Space, M, Enter (you don’t actually have to move it, just press Enter and Windows will snap the window to the nearest space)
  2. Go to “Edit” → “Preferences…”
  3. Turn on “Save window position” check-box
  4. Restart Wireshark

Next time, Wireshark will appear in desired position.

Thanks to Jasper for his answer at

Older versions of Firefox

It always took me some time to find an older version of Firefox. On the official Mozilla page it’s almost impossible to find anything older within 30 seconds, and it seems to offer only the en_US localization.

Note to self: Next time, your favorite link is the official FTP archive.

Now I have SmartyPants!

I have just installed SmartyPants:

SmartyPants can perform the following transformations:

  • Straight quotes ( " and ' ) into “curly” quote HTML entities
  • Backticks-style quotes (``like this'') into “curly” quote HTML entities
  • Dashes (“--” and “---”) into en- and em-dash entities
  • Three consecutive dots (“...”) into an ellipsis entity

One more thing I had to do was switching Blosxom to UTF-8, which was easy
thanks to x3ro’s blog post.

Markdown: *example*

Blosxom supports Markdown, which is great. Actually, thanks to SOFU, I got used to Markdown so much that I almost cannot imagine writing in anything else.

Markdown is

  • Simple
  • Readable
  • Usable

…as well as is Blosxom :)

Here is the code to this article up to this point:

Markdown: *example*
Blosxom supports [Markdown](, which
is great.  Actually, thanks to SOFU, I got used to Markdown so much that I almost
cannot imagine writing in anything else.

Markdown is

* Simple
* Readable
* Usable

...as well as is Blosxom :)

Here is the code to this article up to this

Designing test cases using All-Pairs method (AKA pairwise testing)

All-pairs (or pairwise) testing is a method of designing test cases in which, instead of covering every possible combination of input values, you only cover every possible pair of values.

Here is a list of the tools available at

For example, say you have a sandwich shop that offers:

  • two types of bread: brown and white
  • two bases: salad and butter
  • three meats: chicken, ham or none
  • three vegetables: lettuce, tomato or none

There are 36 (2*2*3*3) possible sandwiches. If you were to test every one of them… ehm, that would mean eating nothing but sandwiches for at least a week, pretty costly and unhealthy. :)

Now all-pairs means that you select only a certain subset of all possible sandwiches such that you test every possible pair of options. The final set of 10 sandwiches is much more reasonable:

type    base    meat      vegetable
brown   salad   chicken   lettuce
white   butter  chicken   tomato
white   salad   ham       lettuce
brown   butter  ham       tomato
brown   salad   none      none
white   butter  none      lettuce
white   butter  chicken   none
brown   salad   ham       tomato
white   salad   ham       none
brown   butter  none      tomato

This might not seem like such a huge saving, but in cases where there are more parameters with more values, the difference can grow astronomical. As James Bach says in the manual to his AllPairs script:

Consider the case of 10 parameters with 10 values each. Allpairs will do it in 177 cases. The smallest number of test cases possible is somewhere between 100 and 177. I suspect it’s something down around 130. But compared to the alternative of 10 billion test cases to achieve all permutations, 177 is not too bad.

That isn’t bad indeed, is it?