Of Linux and Distros
The Linux box was so old (AMD Athlon x2 4400+ with 3 GB of memory) that it was becoming a pain to use for any kind of desktop application. I ended up neglecting it and spending most of my time on my Windows box (I know! The horror!) with multiple PuTTY sessions open. Also, it was running Gentoo Linux. In my local Linux user group, we have determined that people over 30 don't use Gentoo. It is a worthy distro with many strengths. I have been using it for over 10 years on some system or another, and because of its minimal nature I learned a lot about Linux that I would otherwise be oblivious to. But it can be a real pain to maintain, especially for desktop use. I was getting tired of every other emerge --sync generating package conflicts, lengthy config file merges and the occasional system toolchain breakage that you had to resolve before rebooting (or losing power!), otherwise the system would completely eat itself. So, being a distinguished 31 years of age now, I decided it was time to drop Gentoo. So long, and thanks for all the fish.
So what to replace it with? Well, I've been an Ubuntu user for a while as well. But they really worked hard to tick me off with the whole Unity business. It's not that I am completely opposed to new things, but I think Ubuntu messed up this transition pretty badly. First, they rolled out Unity to the netbook edition a year ago, before it was ready. It crashed the first time I logged in, and it continued to be buggy after that. Also, I really, really don't like the Mac-style combined launcher/task bar that is central to Unity. I want a taskbar that displays currently running programs, not some auto-hiding, moving-with-your-mouse contraption that shows a whole host of programs, some of which may be running and some of which may not. The running ones have a dot on them or something? Whatever. Just give me a taskbar. And now with Ubuntu 11.10 the option to go back to regular old GNOME 2 is mostly gone. It's either Unity or GNOME 3 with gnome-shell. On my netbook I now use gnome-shell, and it works well enough for such a limited device with a small screen. But I'm not really a fan of some of the unilateral decisions they made when it comes to running it on a dual-monitor desktop.
Enter this blog post from Linux Mint. The condensed version: "GNOME 2 is dead so we have to move to something newer. But we'll ease the transition by giving you similar functionality to what you've been used to for the last 15 years and let you choose how much new stuff you want." Perfect. So right now I'm typing this on a Linux Mint 12 release candidate install. So far I'm pretty happy. I even find myself using some of the new gnome-shell features from time to time, but I still have my taskbar, traditional alt-tab and always-visible notification area. I'm still trying to hammer out a few bugs I'm encountering. One thing that annoyed me greatly was the default workspace switching behavior on dual monitors: only my left monitor switched, while the right one always displayed the same workspace. Turns out it is just a dumb default chosen by a GNOME developer. Unfortunately they don't provide an easy checkbox in the settings to change it. Luckily I found this blog post that fixed it for me in 30 seconds.
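For reference, the fix boils down to flipping a single GNOME Shell override. The key name below is from memory for the 3.2-era shell and may differ in other versions, so verify it first:

```shell
# List the available override keys to confirm the exact key name
# (the name below is assumed from the GNOME 3.2-era schema):
gsettings list-keys org.gnome.shell.overrides

# Make workspaces switch on all monitors instead of only the primary one.
gsettings set org.gnome.shell.overrides workspaces-only-on-primary false
```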
Gnome-shell does seem to be a little bit crashy, but I'm going to blame that on the ATI video card I have in this thing. I bought the card several years ago, right after ATI announced that they would open up their hardware specs to allow people to write decent open source drivers for ATI cards. It seems that hasn't happened as quickly as I had hoped. Notice that I'm back to nvidia for my new gaming card. Luckily, when gnome-shell crashes it doesn't take down the whole system, and I can usually hit Alt+F2 to bring up a run dialog (even though it doesn't appear on the screen) and execute "r", which tells gnome-shell to restart itself. Otherwise, I can also switch to a virtual terminal and run a gnome-shell --replace command to make everything happy again.
Update: I think I have tracked down the GNOME 3 crashiness. It is in the Mint media player plugin. This bug on Launchpad indicates that there is a fix which should be rolled out to the masses shortly. The workaround until then is to just disable the extension.
Of Zabbix and Migrating Old Data
I have been using Zabbix to monitor and track system metrics for a while. Not that I have anything super critical running on my boxes here at home but sometimes it yields some interesting data and I'm kind of a sucker for graphs.
I installed the latest Zabbix on my new machine, but I didn't really want to migrate my whole database from the old one because there was some cruft in it, including several old hosts that haven't existed for a while. There are, however, a few monitored items that I did want to migrate. In particular, the data that generates my license acceptance graphs. Zabbix does have an export feature for a given item, but it only exports the definition, not the data. Fortunately, it keeps all its data in a fairly plain MySQL database, and I'm kind of handy with databases, so I managed to migrate the data as follows:
- Create the item in the new Zabbix install. I did an export from the old system and imported it into the new one to create the item. In this case I also had to copy over a shell script that Zabbix calls to update the counts and then configure the new Zabbix agent to poll it.
- Export the data from the old MySQL database. First you need the item ID. This can be found by looking at the item you are exporting in the Zabbix web UI and examining the URL; you are looking for an itemid=???? parameter. Then plug that into the following query, which will write out a CSV file. Note that the MySQL user you connect as must have the FILE privilege - I just logged in as the MySQL root user.
select * into outfile '/tmp/history_uint_agreed.csv' fields terminated by ',' lines terminated by '\n' from history_uint where itemid = 19913;
- Now, find the item ID in the target Zabbix install the same way. Then use sed or vim or something to substitute the new item ID for the old one in the CSV file. I used vim's substitute command; with a (hypothetical) new item ID of 20001 it would be :%s/^19913,/20001,/.
- Import the data into the new database:
load data infile '/tmp/history_uint_agreed.csv' into table history_uint fields terminated by ',' lines terminated by '\n';
- Finally, activate the item in the new Zabbix install so that it starts updating with new data.
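Put together, the item ID rewrite in the middle of those steps is just a substitution on the first CSV column. A minimal sketch, using the old item ID from above and a made-up new ID of 20001 (the sample rows and their column layout are illustrative only, not the real Zabbix schema):

```shell
# Fake sample rows standing in for the real export (itemid,clock,value).
printf '19913,1321747200,42\n19913,1321747260,43\n' > /tmp/history_uint_agreed.csv

# Rewrite the leading itemid field so the rows attach to the new item.
sed 's/^19913,/20001,/' /tmp/history_uint_agreed.csv > /tmp/history_uint_new.csv

cat /tmp/history_uint_new.csv
```

Anchoring the pattern to the start of the line matters: a bare substitution could also hit a timestamp or value that happens to contain the same digits.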
Of PostGIS, osmosis and the jxapi
Since I was using PostgreSQL 8.4 on my old system and 9.1 is out now, I ended up just dumping my existing OSM database that I blogged about in the past and setting up a brand new one from scratch. I set up the database right before I left for Thanksgiving. I started the planet file torrent and then drove to Nebraska, where my only access to the internet was tethering through my phone, which only gets EDGE data service up there. Plus, I was of course being (a little) social with family, so I didn't have as much time to tinker with it. For a few tweaks dealing with PostgreSQL 9.1 vs 8.4, I followed Paul Norman's post on the subject. But I didn't want to deal with the dump files from osmosis, for a couple of reasons, so I went with the --write-pgsql task instead of the --write-pgsql-dump one so that the data would be read out of the planet file and put straight into the database. Well, almost: it still creates some temporary files along the way, but you don't have to mess with loading them into the database after it finishes. Unfortunately, the planet file has grown so much since I last did this that the nodes no longer fit in my 12 GB of memory, so I had to use the TempFile option for node location storage. It actually didn't impact the speed nearly as much as I thought it would. I'm guessing Linux's aggressive caching paid off and a good chunk of the temporary node file ended up in memory anyway. My final osmosis command line looked like this:
bzcat planet-111123.osm.bz2 | osmosis --fast-read-xml file=- --log-progress interval=30 --write-pgsql nodeLocationStoreType=TempFile host=localhost database=xapi user=xapi password=xapi
Osmosis ran for 53 hours but when it was done, the database was ready to go.
For fun, I ran a few benchmark queries that Paul came up with a while ago against my old PostgreSQL 8.4 install and then again against the 9.1 database on the new hardware. Before running each query I restarted PostgreSQL and flushed the disk cache using the command
sync; echo 3 > /proc/sys/vm/drop_caches
to make sure I was getting the raw hardware speed, not assisted by the various caches that the operating system and the database both have. Vital stats of the old and new systems:
Old: AMD Athlon x2 4400+ (2.2 GHz) with 3 GB of memory
New: Intel Core i5 (2.67 GHz) with 12 GB of memory
The I/O system remained pretty much the same: software RAID 5 of four 7,200 RPM 1TB drives with LVM on top of it.
|Query|Old system|New system|Speedup|
|---|---|---|---|
|map?bbox=11.54,48.14,11.543,48.145|61 sec|46 sec|25%|
|*[bbox=11.54,48.14,11.543,48.145]|56 sec|42 sec|25%|
|*[amenity=library][bbox=-0.57,51.24,0.31,51.75]|172 sec|153 sec|11%|
|node[power=generator]|55 sec|85 sec|-55%|
|*[amenity=parking][capacity:disabled=*][bbox=-0.57,51.24,0.31,51.75]|192 sec|177 sec|8%|
|*[waterway=sluice_gate]|40 sec|9 sec|78%|
|*[bdad6d21a=1378b02bc]|6.3 sec|3.3 sec|47%|
|*[power=generator]|405 sec|367 sec|9%|
|map?bbox=11.54,48.14,11.543,48.145|45 sec|43 sec|4%|
|map?bbox=-124,49,-123,50|272 sec|227 sec|17%|
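The speedup column is just the relative change in wall time, (old - new) / old; the table's percentages line up with this, give or take rounding. For the first query:

```shell
# (61 - 46) / 61 is roughly 24.6%, which rounds to the 25% shown above.
awk 'BEGIN { old = 61; new = 46; printf "%.0f%%\n", (old - new) / old * 100 }'
```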
Several queries are noticeably faster, but nothing too earth-shattering. Then there is the one that is 55% slower; no clue what's up with that. Of course, this is all pretty anecdotal. So much changed between the two runs that it is hard to pinpoint whether the speedup came from the faster CPU, the extra memory, improvements in PostgreSQL or changes in the data. Probably a little from all of the above.
I leave you with a picture of the ridiculous video card fans that I mentioned in the first paragraph. Notice at the upper right corner of the card that it also requires two power connections. I hope it will help me achieve some epic frags at the upcoming semiannual LAN party I have with my college friends. It has already propelled my BOINC score to new heights.