M. Ward’s Fuel for Fire samples Saint-Saëns’ “The Swan”

February 15, 2013

I was listening to M. Ward’s Fuel for Fire recently and I couldn’t place the cello melody. I’d heard the song before, but for some reason this time I needed to identify the quote. The best you’ll get on the Internet is that it’s a sample from a Pablo Casals track. Googling for “famous cello melodies” is surprisingly ineffective, and searching YouTube leaves you with a lot of unaccompanied Bach sonatas and Elgar concertos.

In a sudden flash of clarity, I remembered there was a famous cello solo in Swan Lake. Okay, maybe not Swan Lake, Google told me, but Saint-Saëns’ “Carnival of the Animals”, specifically “The Swan”. In any case, I got where I was going, so I’m posting this here in the hope that it helps out anyone who is similarly annoyed.

Sloppy Focus or Focus Follows Mouse

November 17, 2011

Some people like Focus Follows Mouse or (my preferred) Sloppy Focus. I’m sure some of those same people probably also like autoraise. I don’t.

You can easily get either sloppy focus or FFM with gnome-tweak-tool. The autoraise thing is a bit trickier, but I found the answer here. All you need to do is open up gconf-editor and uncheck /apps/metacity/general/auto_raise.
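If you’d rather not open a GUI for one checkbox, the same key can be flipped from a terminal with gconftool-2. This is a sketch using the key path above; it assumes GNOME 2’s GConf tools are installed:

```shell
# Disable metacity's auto-raise via GConf from the command line
gconftool-2 --type bool --set /apps/metacity/general/auto_raise false

# Read the key back to verify the change
gconftool-2 --get /apps/metacity/general/auto_raise
```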

Buffer Bloat: Experiment 1

August 11, 2011

Ever since I first saw Jim Gettys’s post on testing bufferbloat, I wanted in. But at the time I was living with roommates, and I didn’t feel like taking the router out of our network to rerun tests that had already been done multiple times over.

Now that I’m on my own, though, I have a chance to experiment on a clean network without disturbing anyone else’s throughput. I expect his numbers will be remarkably similar to mine, simply because we have very similar hardware (both revisions of the Linksys WRT54; mine is the WRT54G).

Experiment 1a:

$ nttcp -t -D -n2048000 nfsserver & ping -n nfsserver 
[1] 5193 
PING nfsserver.lan ( 56(84) bytes of data. 
64 bytes from icmp_req=1 ttl=64 time=0.293 ms 
64 bytes from icmp_req=2 ttl=64 time=48.3 ms 
64 bytes from icmp_req=3 ttl=64 time=91.5 ms 
64 bytes from icmp_req=4 ttl=64 time=140 ms 
64 bytes from icmp_req=5 ttl=64 time=183 ms 
64 bytes from icmp_req=6 ttl=64 time=221 ms 
64 bytes from icmp_req=7 ttl=64 time=222 ms 
64 bytes from icmp_req=8 ttl=64 time=224 ms 
64 bytes from icmp_req=9 ttl=64 time=220 ms 
64 bytes from icmp_req=10 ttl=64 time=222 ms 
64 bytes from icmp_req=11 ttl=64 time=223 ms 
64 bytes from icmp_req=12 ttl=64 time=224 ms 

Experiment 1b:

For the rest of the experiment, I’m just going to use the format that Gettys uses in comment 843.

First, tune the tx ring buffer down to 64 as he did, with ethtool -G eth0 tx 64. Then run across multiple txqueuelen values. These can be tuned with ifconfig. For example:

ifconfig eth0 txqueuelen 0 
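The whole sweep can be scripted. Here’s a sketch assuming the same eth0 interface and nfsserver host from Experiment 1a; the nttcp/ping pairing mirrors the command above:

```shell
#!/bin/sh
# Sweep txqueuelen values and measure ping RTT while nttcp loads the link.
# Assumes eth0 and the nfsserver host from Experiment 1a.
for qlen in 0 2 5 10 25 50 100 1000 10000; do
    ifconfig eth0 txqueuelen "$qlen"
    nttcp -t -D -n2048000 nfsserver >/dev/null &
    ping -n -c 10 nfsserver | tail -n 1   # min/avg/max/mdev summary line
    wait                                  # let nttcp finish before the next run
done
```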

Here are the numbers in tabular form:

txqueuelen   tx-ring   RTT
      1000       256   220ms   (default - Experiment 1a)
         0        64     2ms
         2        64     3ms
         5        64    10ms
        10        64    45ms
        25        64   110ms
        50        64   190ms
       100        64   210ms
      1000        64   225ms
     10000        64   228ms

Experiment 1c:

Experiment with a different tx-ring value. Set this with ethtool -G eth0 tx 4096:

txqueuelen   tx-ring   RTT
         0      4096   130ms
         2      4096   140ms
         5      4096   160ms
        10      4096   180ms
        25      4096   190ms
        50      4096   200ms
       100      4096   225ms
      1000      4096   230ms
     10000      4096   230ms

Experiment 1d:

I don’t have a Windows or Mac system, and I’m not intending to get either any time soon. By the way, for those who are curious:

$ uname -r
3.0.0-gentoo


This hardware configuration gives results very similar to Gettys’s. Mine were a little better in general, with the same trend, though for big buffers my performance seemed to suffer more. We’re both running Intel cards and a Linksys WRT54 variant. If there’s any difference, it’s that I’m using an ancient 3Com NIC in my server, but that didn’t seem to have a huge effect on the tests. Tomorrow I’ll continue with Experiments 2+3, but for now I want to try my new router 🙂

Textinput troubles

August 10, 2011

Those who use dark GTK+ themes in Firefox on Linux have an interesting problem. If a website assumes you are using a light theme, it may set the text color in its CSS but not the background color, or it might do the reverse. You can find a pretty good summary at

Now, I realize that I’m a small minority of the minority of Linux users that has this problem, so there has to be another solution. In Firefox 4 I tried in vain to disable system colors; no matter what I did, Firefox kept insisting on using them. So I found this dark textbox fix, using a Firefox add-on called Stylish. Oddly enough, the “dark textbox” fix defaults your textboxes to white with black text. The basic template gives you enough information to figure it out, though, so I modified it a bit:

@namespace url(http://www.w3.org/1999/xhtml);
@-moz-document url-prefix(http), url-prefix(file) {

    select {
        color: -moz-FieldText !important;
        background: -moz-Field !important;
    }
}
This uses the colors that Firefox would use by default, which means if you change themes you won’t have to change the script.

Of course, the websites that do a good job of modifying both the text and the background color are now always using the defaults, which makes for a lot of design ugliness. For a simple example, here is google.com:

google.com with simple blue Mist GTK+ theme

They Say Yum Is Not Slow, But it Feels That Way

September 23, 2010

When the usual RPM/dpkg flame fest comes around, Debian/Ubuntu advocates will not hesitate to point out that Yum is slower than APT, and RPM is slower than dpkg. That may be true, but it’s usually repeated without any suggestions for a fix, or even any supporting data.

The flame fest resulted in a link to “Lies, damn lies, and benchmarks”, by James Antill. His point is that benchmarkers rarely standardize their variables, and often compare numbers that have nothing to do with one another, drawing the wrong conclusions in the process. So I guess Yum/RPM is just as fast as APT/dpkg.

Yet as I type this I’m trying to get a simple source RPM for libvirt over a mobile “broadband” connection, and I’ve got:

updates-source/primary_db 77% [============    ] 9.0 kB/s | 610 kB     00:19 ETA

Yet if I claimed that Yum was slower than APT, I would be falling into the exact trap that Antill was describing, so I won’t do that. It still sucks that it’s going to take me 10 minutes to get a simple RPM, though, so what can we do? Well, first of all, by default I have 3 repos enabled; I should have used --disablerepo=* --enablerepo="updates". But even that would have taken a while.

The problem is my cache was invalidated, which, as I understand it, is not a problem when using APT, due to what the APT guys would call a design flaw in Yum. However, it is the Fedora users that have to live with this, so how do we mitigate the problem?

I have two potential solutions. One is that we could have a cron job download all your enabled repo metadata whenever you’re on a high-bandwidth link, so that if you’re on the road you can still update necessary packages without spending forever downloading metadata. The other problem is that primary_db is a database of everything, not just new data (presumably I already have cached metadata for the “old” updates). This was a design flaw in up2date, yet Yum chose to repeat it. But I have no idea how the original Yellowdog Updater worked or what the history is, so I’ll hold my tongue.
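The cron job idea could look something like this. The on_fast_link check is a hypothetical placeholder (there’s no standard tool for it, and the ppp0 test is just one possible heuristic); yum makecache is the real command that prefetches metadata for all enabled repos:

```shell
#!/bin/sh
# Hypothetical /etc/cron.hourly/yum-prefetch: refresh yum metadata for
# all enabled repos, but only when on a fast link.

on_fast_link() {
    # Placeholder heuristic: assume the link is slow if the default
    # route goes out over the 3G modem (ppp0 is an assumption).
    ! ip route show default | grep -q ppp0
}

on_fast_link && yum makecache >/dev/null 2>&1
```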

We could solve this in a manner similar to delta RPMs: call it deltaMetadata or deltaYum. Diff primary_db every time, and have the server recompose whatever is necessary. Again, I’m not positive primary_db is the entire database, so this may be a non-solution to a problem that doesn’t exist.

The APT people may be right about the design flaw, however. As far as I know, it is against policy to pull packages from the repos, or it’s very rare in any case. If true, then the links for the old packages will still be there, but yum will go ahead and fetch the metadata on the off chance that libvirt has been updated in the interim. I haven’t checked, but I doubt it was in this specific case, and even if it was, I would have been fine with the old source RPM. Even if the package had been pulled from the repo, we would get an HTTP 404 and could decide to re-download the metadata then and there.

Possibly there is a way to tell Yum to use only its cache, but it isn’t the default and I don’t know how to do it. And let us leave the package-churn discussion for another time; that’s hard to solve and has a lot of people smarter than me thinking about it.
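As it turns out, yum does ship a cache-only switch, -C (--cacheonly), which makes it run entirely from the local metadata cache instead of refreshing it. A sketch of how the libvirt case above could have gone; yumdownloader comes from the yum-utils package:

```shell
# Run yum entirely from the local metadata cache (-C / --cacheonly),
# restricted to the one repo I actually needed
yum -C --disablerepo='*' --enablerepo=updates-source list libvirt

# yumdownloader (from yum-utils) fetches the source RPM itself
yumdownloader --source libvirt
```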

In any case, Antill and I are saying the same thing: don’t benchmark the app, benchmark the primitives. If you do this with Yum and APT, you probably won’t notice a difference, or it’ll be on the order of seconds. However, in Fedora we don’t address the design issues that make Yum feel slower, and so the perception (perhaps rightly) persists. And perception isn’t benchmarked, so looking for technical solutions and benchmarks for feelings is the wrong thing to do anyway.

Nilfs2: Fedora 13 Follow Up

August 31, 2010

As you might imagine, I have been running with F13 for a while now,
but I was too lazy to update the blog. In any case, I did run the
numbers on the Toshiba. They aren’t pretty:


$ dd if=/dev/zero of=./zeros.dat oflag=sync bs=4k count=1024
1024+0 records in
1024+0 records out
4194304 bytes (4.2 MB) copied, 3.6206 s, 1.2 MB/s
$ dd if=/dev/zero of=./zeros.dat oflag=sync bs=1024k count=4
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.22621 s, 18.5 MB/s
$ dd if=/dev/zero of=./zeros.dat oflag=sync bs=1024k count=256
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 15.2122 s, 17.6 MB/s


$ dd if=/dev/zero of=./zeros.dat oflag=sync bs=4k count=1024
1024+0 records in
1024+0 records out
4194304 bytes (4.2 MB) copied, 10.2423 s, 410 kB/s
$ dd if=/dev/zero of=./zeros.dat oflag=sync bs=1024k count=4
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.204465 s, 20.5 MB/s
$ dd if=/dev/zero of=./zeros.dat oflag=sync bs=1024k count=256
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 13.1899 s, 20.4 MB/s

I have no idea how to explain Ext4’s awful performance on small
blocks. However, the Intel/Ext4 combo won in just about every
category, so I’m keeping the Toshiba as the backup drive for now.

Interestingly, Nilfs2 performs better on the Toshiba than it does on
the Intel. If we combine this with the observation that small block
sizes do relatively poorly on the Toshiba drive, then we have a pretty
convincing case that the Intel drive is doing some magic under the
covers to make it awesome on traditional filesystems, even at the
expense of filesystems specifically designed for SSDs.

In retrospect, I regret not trying out the other obvious filesystem,
btrfs. However, it was never really in the running anyway, since I
can’t boot to a btrfs filesystem, even with Grub 2. It would have been
interesting to chart, though.

Thus concludes my analysis of Nilfs2 for SSD. I encourage everyone with
an SSD to test, and get your results out there. Hopefully as Grub 2
matures and more distributions than Ubuntu use it, and as more SSDs
make it to the market, we can see what is really the fastest
filesystem out there.

Nilfs2: A File System to Make SSDs Quiet

August 15, 2010

I have been using Nilfs2 ever since I installed Fedora 12, which happened right after it came out. The graphs from Nilfs2: A Filesystem to Make SSDs Scream convinced me I had to have it, because my ThinkPad X301 actually does have an SSD. Managing to get a Fedora 12 install with a root Nilfs2 filesystem was a bit of a challenge, though. If I recall correctly, I did it by installing on ext3 and then rsyncing everything over, which is a hassle, to say the least.
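For the record, the install-on-ext3-then-rsync dance looks roughly like this. The device name and mount point are illustrative assumptions, not my actual layout:

```shell
# Rough shape of the ext3 -> nilfs2 migration described above.
# /dev/sda2 and /mnt/nilfs are assumptions for illustration.
mkfs -t nilfs2 /dev/sda2
mount -t nilfs2 /dev/sda2 /mnt/nilfs
# Copy everything, preserving hard links, ACLs, and xattrs,
# without crossing filesystem boundaries
rsync -aHAXx --numeric-ids / /mnt/nilfs/
# ...then point /etc/fstab and the bootloader at the new root
```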

However, I was never sure if I had set up the alignment properly, and this article by Ted Ts’o made me even less sure. Additionally, it made sense to start backing up my system, now that I was getting an error message on every boot:

mount.nilfs2: WARNING! - The NILFS on-disk format may change at any time.
mount.nilfs2: WARNING! - Do not place critical data on a NILFS filesystem.

So I thought I’d try out Jamie Zawinski’s backup strategy since it’s relatively simple and made a lot of sense. I would need a spare drive for the install anyway, and I wasn’t sure that the drive that shipped with my ThinkPad had TRIM, which is supposed to be essential for the lifetime of the drive. Rather than order a mystery drive from Lenovo, I settled on the X18-M. The guys over at NotebookReview helped with that decision.

I did want to make sure I was using the right filesystem for the job. After all, it’s nearly impossible to bitch slap Fedora the right way to get the thing to boot Nilfs2. And Evan Hoffman seems to think there is no benefit whatsoever, although his test, “ghetto” by his own admission, is quite flawed. The obvious flaw is not using sync(): the test basically measures how long it takes to dirty the page cache. The second flaw, pointed out in the comment section, is that the block size is too small. That doesn’t make sense to me, since 4k is the filesystem block size and the storage layer should coalesce the I/O operations at the block layer anyway, but it doesn’t hurt to test.


$ dd if=/dev/zero of=./zeros.dat oflag=sync bs=4k count=1024
1024+0 records in
1024+0 records out
4194304 bytes (4.2 MB) copied, 2.64042 s, 1.6 MB/s
$ dd if=/dev/zero of=./zeros.dat oflag=sync bs=1024k count=4
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.199603 s, 21.0 MB/s
$ dd if=/dev/zero of=./zeros.dat oflag=sync bs=1024k count=256
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 11.0575 s, 24.3 MB/s


$ dd if=/dev/zero of=./zeros.dat oflag=sync bs=4 count=1024 
1024+0 records in
1024+0 records out
4096 bytes (4.1 kB) copied, 4.20935 s, 1.0 kB/s
$ dd if=/dev/zero of=./zeros.dat oflag=sync bs=1024k count=4
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.257392 s, 16.3 MB/s
$ dd if=/dev/zero of=./zeros.dat oflag=sync bs=1024k count=256
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 16.2356 s, 16.5 MB/s

Wow, okay. So he’s right: Nilfs2 is slower on this test. Even though this is a fairly simple benchmark, the disparity probably shows up in other benchmarks, like the “extract kernel” benchmark, and in real-world use as well. And block size does make quite a difference: both filesystems performed better with a 1 MB block size, and there Ext4 beat Nilfs2 by a wider margin.
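As a sanity check on dd’s arithmetic: the throughput it prints is just bytes copied divided by elapsed seconds, in decimal megabytes. For the first run above (4194304 bytes in 2.64042 s):

```shell
# Reproduce dd's 1.6 MB/s figure: bytes / seconds / 10^6
awk 'BEGIN { printf "%.1f MB/s\n", 4194304 / 2.64042 / 1000000 }'
```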

So where does that leave us in terms of the original Linux Magazine article? It points to two (old at this point) studies showing that Nilfs2 is pretty good. Chris Samuel’s comprehensive test showed that Nilfs2 was best in class for sequential delete, even on rotational media, and Dongjun Shin from Samsung had some graphs showing Nilfs2 performance to be off the charts on solid state media. So let’s try to recreate those results:

Postmark results comparing two filesystems on SSD

Underwhelming, to say the least. So how is it possible for two people to do the exact same thing and get wildly different results? I have a few theories:

Even Linux Magazine admits that the tests the entire article is based on are quite old. The editor relied on previous research for everything, and didn’t even bother doing his own testing. It’s possible that in the intervening time either Nilfs2 got really slow for some reason, or Ext4 woke up and started kicking ass on SSDs once filesystem developers realized the popularity of that use case.

The second theory is that SSD vendors are getting really good at emulating rotational media. The Intel drives are known to be quite good in this regard, and there is an open question as to whether people should even bother setting up filesystems differently for SSDs. It’s possible that the guys over at Intel benchmarked against traditional filesystems like Ext4 (or NTFS), and that the resulting hyper-tuned optimization came at a cost to log-based filesystems, which theoretically should be better. Maybe Nilfs2 is still better on dumb drives.

I plan on testing both these theories soon after installing Fedora 13, by testing the Toshiba drive in my laptop on the newer Fedora 13 kernel. I’m going to miss continuous snapshotting, but for now it looks like I’ll be using ext4. I’ll catch you on the flip side.

Move from Blogger

August 14, 2010

Hi all! I recently moved from Blogger, because Blogger now has some AJAX shit that steals control-key characters. I don’t know about you, but I need my Emacs key bindings in Firefox or I flip. And because, you know, open source. And because everyone else is doing it.

