New Feature for HN Collapsible Threads: Collapse Whole Thread

I have added a feature to the HN Collapsible Threads bookmarklet that lets you collapse a whole thread from any point within it.

This is useful when you are reading a thread, decide you have had enough of it, and want to move on to the next one. Previously you had to scroll all the way up to the top post and collapse that one.

Drag this to your bookmarks bar: collapsible threads

Install Greasemonkey script

Posted in web

preg_match, UTF-8 and whitespace

Just a quick note: be careful when using the whitespace character class \s in preg_match when operating on UTF-8 strings.

Suppose you have a string containing a dagger symbol. When you try to strip all whitespace from the string like this, you will end up with an invalid UTF-8 character:

$ php -r 'echo preg_replace("#\s#", "", "†");' | xxd
0000000: e280

(On a side note: xxd displays all bytes in hexadecimal representation. The resulting string here consists of just the two bytes e2 and 80.)

\s stripped away the a0 byte. I was unaware that this byte was included in the whitespace list, but it actually represents the non-breaking space.
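
You can verify that the lone a0 byte (by itself the Latin-1 non-breaking space) is indeed matched by \s when the u modifier is absent:

$ php -r 'var_dump(preg_replace("#\s#", "", chr(0xa0)));'
string(0) ""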

So use the u (PCRE_UTF8) modifier, as it will be aware that the a0 “belongs” to the dagger:

$ php -r 'echo preg_replace("#\s#u", "", "†");' | xxd
0000000: e280 a0

By the way, trim() doesn’t strip non-breaking spaces and can therefore safely be used for UTF-8 strings. (If you still want to trim non-breaking spaces with trim, read this comment on PHP.net)

Finally, here you can see which characters \s matches with and without the u modifier. (Note that the vertical tab, 0x0b, is traditionally not part of PCRE’s \s.)

$ php -r '$i = 0; while (++$i < 256) echo preg_replace("#[^\s]#", "", chr($i));' | xxd
0000000: 090a 0c0d 2085 a0

$ php -r '$i = 0; while (++$i < 256) echo preg_replace("#[^\s]#u", "", chr($i));' | xxd
0000000: 090a 0c0d 20

Functions operating just on ASCII characters (with byte values below 128) are generally safe, as every byte of a multi-byte UTF-8 character has its leading bit set (and is therefore at or above 128).
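
You can see this property on the dagger itself: each of its three bytes has its leading bit set:

$ php -r 'foreach (str_split("†") as $b) printf("%08b\n", ord($b));'
11100010
10000000
10100000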

Restoring single objects in mongodb

Today I needed to restore single objects from a mongodb installation. mongodb offers two tools for this, mongodump and mongorestore, both of which seem to be designed to dump and restore only whole collections.

So I’ll demonstrate a workflow for restoring just a bunch of objects. Maybe it’s a clumsy way, but we can improve it over time.

So, we have an existing backup, done with mongodump (for example through a daily, overnight backup). This consists of several .bson files, one for each collection.

  1. Restore a whole collection to a new database: mongorestore -d newdb collection.bson
  2. Open this database: mongo newdb
  3. Find the items you want to restore through a query, for example: db.collection.find({"_id": {"$gte": ObjectId("4da4231c747359d16c370000")}});
  4. Back on the command line again, just dump those objects to a new bson file: mongodump -d newdb -c collection -q '{"_id": {"$gte": ObjectId("4da4231c747359d16c370000")}}'
  5. Now you can finally import just those objects into your existing collection: mongorestore -d realdb collection.bson (mongodump writes its output below a dump/ directory, so adjust the path to the .bson file accordingly; a driver-based alternative is sketched below)
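
If you want to skip the intermediate dump file, the same copy can also be done directly with the PHP Mongo driver of that era. This is just a hedged sketch (class names Mongo and MongoId come from the old extension; database and collection names are taken from the example above), not part of the workflow itself:

<?php
// Sketch: copy the matching objects straight from the scratch
// database into the real one, instead of dumping and restoring.
$m = new Mongo();  // connects to localhost:27017 by default

$query = array('_id' => array('$gte' => new MongoId('4da4231c747359d16c370000')));

foreach ($m->newdb->collection->find($query) as $doc) {
    $m->realdb->collection->save($doc);  // save() upserts by _id
}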

Safari Extension: Clean URLs

I have been picking up and developing a fork of Grant Heaslip’s Safari extension URL cleanser, which removes all sorts of unnecessary junk from the URL so that you can easily pass on a clean URL to someone else. Things being removed include:

  • Google Analytics parameters (utm_source=, utm_medium, etc.)
  • YouTube-related parameters (feature=)
  • Partner tracking stuff for NYTimes, Macworld, CNN, CBC Canada and The Star

You can download my version here: url_cleanser.safariextz

Posted in web

trac Report for Feature Voting

I use trac for quite a few projects of mine. Recently I tried to find a plugin for deciding which features to implement next. Usually trac hacks has something in store for that, but not this time.

I wanted to be able to create a ticket and then collect user feedback as comments for the feature, with each piece of feedback counting as a vote for that feature.

After searching for a bit I came up with a solution by using just a report with a nicely constructed SQL query.

SELECT p.value AS __color__,
   t.type AS `type`, id AS ticket, count(tc.ticket) as votes, summary, component, version, milestone,
   t.time AS created,
   changetime AS _changetime, description AS _description,
   reporter AS _reporter
  FROM ticket t, ticket_change tc, enum p
  WHERE t.status <> 'closed'
AND tc.ticket = t.id and tc.field = 'comment' and tc.newvalue like '%#vote%'
AND p.name = t.priority AND p.type = 'priority'
GROUP BY id, summary, component, version, milestone, t.type, owner, t.time,
  changetime, description, reporter, p.value, status
HAVING count(tc.ticket) >= 1
 ORDER BY votes DESC, milestone, t.type, t.time

So just by including “#vote” in a comment, it counts towards the number of votes. You can change this marker to anything you want, of course; for example, match on '%+1%' instead of '%#vote%' in the query above.

I hope this can be useful for someone else, too.

iOS 2011 Alarm Clock Bug

Just to add to the speculation about the causes of the 2011 iOS alarm clock bug, where one-time alarms would not activate on January 1 and January 2, 2011.

My guess is that the code that sets off the alarm takes day, month and year into account when checking whether the alarm should go off. But instead of using the “normal” year, an ISO-8601 year could have been used. This type of year is derived from the week number (with Monday as the first day of the week); thus for week 52 (from December 27, 2010 to January 2, 2011) the respective year remains 2010.

When setting the date to January 1, 2012, the alarm doesn’t go off either (that day falls into week 52 of 2011). This supports my theory and also means that this isn’t a one-time issue and requires a bug fix by Apple.
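
You can reproduce the suspected mix-up with PHP, where the date() format character o yields the ISO-8601 year while Y yields the calendar year (a sketch of the theory, of course, not Apple’s actual code):

$ php -r 'echo date("Y o", strtotime("2011-01-01"));'
2011 2010
$ php -r 'echo date("Y o", strtotime("2012-01-01"));'
2012 2011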

Title Junk: Solve it with Javascript

There is some back and forth between John Gruber and others about HTML <title> tags, with Gruber complaining (and rightly so) that for SEO reasons titles are filled up with junk that has little to do with the real page content.

The writers of cam.ly suggest using the SEO title in the HTML and having something proper displayed in Google by using an OpenSearch description. But that still doesn’t solve the problem of bloated window titles and bookmarks.

So my solution: use JavaScript. If you want to present the SEO title to Google but still satisfy your readers with a good title, simply set the title to something nice with JavaScript after the page has loaded:


document.title = "Title Junk: Solve it with JavaScript";

Everyone happy. Except those who have JavaScript disabled maybe.

I have also created a tiny WordPress plugin that does just that: title-junk.zip
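
The plugin essentially boils down to printing a one-line script in the footer. Here is a minimal sketch of the idea (the wp_footer hook and the use of wp_title() are my assumptions for illustration, not necessarily how title-junk.zip is implemented):

<?php
/*
Plugin Name: Title Junk (sketch)
*/

// Emit a one-line script in the footer that replaces the
// SEO-laden <title> with the plain page title.
function title_junk_fix_title() {
    $title = trim(wp_title('', false)); // the bare page title
    echo '<script type="text/javascript">document.title = '
        . json_encode($title) . ';</script>';
}
add_action('wp_footer', 'title_junk_fix_title');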

Discussion on Hacker News

Posted in web

Colorillo: Draw on an LED Wall

Colorillo currently powers a collaborative drawing event in Vienna: At Adria Wien there is a temporary LED wall on which you can draw with the help of Colorillo.

Take your mobile phone (iPhone or Android; an iPod touch, iPad or laptop works too) out of your pocket, enter the URL that you find there, and on your screen you will see what’s on the LED wall. Then you can paint on it. Of course, with the technology of Colorillo, multiple people can draw at the same time.

The resolution is a little limited, as the LED wall was built by hand by architecture students, so in my experience it’s a little more like splash painting with colors, but we’ll see how it turns out tonight.

Tonight (July 22, 2010) at 8:30pm there is a special drawing event with John Megill of FM4 as a DJ. If you’re around, come by! It will be fun!

The whole event is hosted at the Ärzte ohne Grenzen (Doctors Without Borders) beach that has been set up for the duration of the AIDS conference being held in Vienna this year.

You can also join the event on Facebook and check out more info at the Colorillo blog post.

Reddit-like Collapsible Threads for Hacker News

I enjoy consuming and participating at Hacker News, run by Y Combinator and Paul Graham.

One thing that needs improvement is reading the comments there. At times the first comment develops into a huge thread, and the second top-level comment (which might also be well worth reading) disappears somewhere far down the page.

Collapsible Threads at Hacker News through a bookmarklet

Reddit has combated this common problem by making threads easily collapsible. I think it is worth having this on Hacker News too, so I implemented it and wrapped it in a bookmarklet so that you can use the functionality on demand at Hacker News.

Drag this to your bookmarks bar: collapsible threads

As soon as it is available in your bookmarks bar, go to Hacker News and click on it when viewing a comments page. A [+] symbol will appear next to each thread. Click it to collapse the thread, and it will change to a [-]. Click that to expand the thread again.

I have licensed the source code under an MIT License. Click here to view the source code of hackernews-collapsible-threads.js. (For caching reasons the bookmarklet currently loads hackernews-collapsible-threads-v6.js, which is just the same.)

The Hacker News HTML source code seems quite fragile in the sense that the comments section of a page can’t be identified in a really unique way (for example, it does not have an HTML id attribute), so the script might break when the layout of the page changes. This is why the bookmarklet is actually only a loader for the script on my server. I have tuned the HTTP headers in a way that your browser should properly cache the script, so that the speed of my server should not affect the loading of the bookmarklet.
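
For the curious: headers along these lines would achieve that kind of caching. This is only a sketch of the idea in PHP (the file is presumably served statically; the exact server configuration is not part of this post):

<?php
// Serve the script with far-future caching headers so the browser
// doesn't hit the server on every click of the bookmarklet.
header('Content-Type: application/javascript');
header('Cache-Control: public, max-age=31536000'); // one year
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 31536000) . ' GMT');
readfile('hackernews-collapsible-threads-v6.js');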

Enjoy :)

If you use Hacker News at a URL other than news.ycombinator.com or hackerne.ws, use this bookmarklet: collapsible threads (no domain check)

Update March 18, 2011: Paul Biggar has contributed a Greasemonkey script that also works on Firefox 4. I have adapted it so that it also works in Safari and Chrome (using NinjaKit), which basically involved copying the jQuery script above mine.

Install Greasemonkey script

Install Paul Biggar’s Greasemonkey script

Update November 22, 2011: Eemeli Aro has sent me a little CSS tweak so that the lines don’t move around when collapsing. The code downloadable from above contains his code. Thank you!

Posted in web

Colorillo

Currently I am doing my civilian service in Austria (only 1.5 months to go), but in summer, when I had a little free time, I built something small and neat: Colorillo.

Colorillo

Colorillo is a very simple drawing program on a web site. What makes it particularly fun is that you can draw together with other people: whatever someone draws on the page you are currently on, you will see right away.

Colorillo

Colorillo makes use of a plethora of interesting technologies to accomplish simultaneous drawing.

Then in October I got the chance to bring Colorillo onto the LED wall of the archdiploma 2009 exhibition at Kunsthalle Vienna, so at the moment you can stand in front of the building and use Colorillo to draw on it. It’s fun!

Colorillo on Kunsthalle Wien

Even Faster Web Sites, a book by Steve Souders

Steve Souders has recently released something like a sequel to his previous book “High Performance Web Sites” (HPWS), which I reviewed earlier. With Even Faster Web Sites he and his co-authors (specialists in their fields, such as Doug Crockford (JavaScript: The Good Parts) on JavaScript) elaborate on some of the rules Steve postulated in HPWS.

It needs to be stated first that if you haven’t read and followed Steve’s first book, you should go and do that first. It’s a must-read that makes it pretty easy to understand why your page might be slow and how to improve it.

In “Even Faster Web Sites”, Steve and his co-authors walk a fine line between fast and maintainable code. While most techniques described in his first book could be integrated with an intelligent deployment process, it is much harder with “Even Faster Web Sites”.

In the chapters that Steve wrote himself for “Even Faster Web Sites,” he is pretty much obsessed with analyzing when, in what sequence, and how parallel the parts of a web page are loaded. Having resources transferred in parallel leads to the highest gains in page loading speed. The enemy of the parallel download is the script tag, so Steve spends (as in HPWS, but in greater detail in this book) quite a few pages analyzing which technique of embedding external scripts leads to which sequence in loading the resources of the page.

Steve also covers interesting techniques such as ways to split the initial payload of a web site (lazy loading), and takes chunked HTTP responses into consideration, which allow sending back parts of the HTTP response even before the script has finished. Downgrading to HTTP/1.0 is a hard-core technique that only huge sites such as Wikipedia are using right now, and should be considered covered for educational reasons only.

There is a section focusing on optimizing images which thankfully takes the deployment process into consideration and shows how to automate the techniques it suggests for optimizing the images.

My only real disappointment with “Even Faster Web Sites” is the section by Nicholas C. Zakas. He writes about how to Write Efficient JavaScript but fails to prove it. To be fair: in the first section of the chapter he shows benchmarks and draws conclusions that I can confirm in the real world (accessing properties of objects and their child objects can be expensive). But then he gives advice for writing code that can hardly be called maintainable (e.g. re-ordering and nesting if-statements (!), re-writing loops as repeated statements (!!!)) and doesn’t even prove that this makes the code any faster. I suspect that the gains of these micro-optimizations are negligible, so chapters like these should be included in an appendix, if at all.

Speaking of appendices, I love what Steve has put in here: he shows a selection of the finest performance tools that can be found in the field.

This book can help you make your site dangerously fast. You also need to be dangerously careful about which tips you follow and how you keep your site maintainable at the same time. “Even Faster Web Sites” is great for people who can’t get enough of site optimization and is therefore a worthy sequel to “High Performance Web Sites”; just make sure that you also read and follow Steve’s first book first.

The book has been published by O’Reilly in June 2009, ISBN 9780596522308.

Posted in web

Debugging PHP on Mac OS X


I have been using Mac OS X as my primary operating system for a few years now, and only today did I find a very neat way to debug PHP code, as is common for application code (i.e. stepping through code for debugging purposes).

The solution is a combination of Xdebug and MacGDBp.

macgdbp-debugger

I have been using the PHP package by Marc Liyanage almost ever since I started working on OS X, because it’s far more flexible than the PHP shipped with OS X.

Unfortunately, installing Xdebug the usual way with pecl install xdebug doesn’t work. But on the internetz you can find a solution to this problem.

Basically you need to download the source tarball and use the magic command CFLAGS='-arch x86_64' ./configure --enable-xdebug for configuring it. (The same works for installing APC by the way)


/usr/local/php5/php.d $ cat 50-extension-xdebug.ini
[xdebug]
zend_extension=/usr/local/php5/lib/php/extensions/no-debug-non-zts-20060613/xdebug.so

xdebug.remote_autostart=on
xdebug.remote_enable=on
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_host=localhost
xdebug.remote_port=9000
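
After restarting PHP you can quickly check whether the extension was actually picked up:

$ php -r 'var_dump(extension_loaded("xdebug"));'
bool(true)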

Now you can use MacGDBp. There is an article on Particletree that describes the interface in a little more detail.

I really enjoy this method: I only fire up this external program when I want to debug some PHP code, and I can continue to use my small editor, so I don’t have to switch to a huge IDE to accomplish the same.

Posted in php

Website Optimization, a book by Andrew B. King

Website Optimization

This time I’m reviewing a book by Andy King. Unlike “High Performance Web Sites” by Steve Souders, it doesn’t solely focus on the speed side of optimization, but adds the art of Search Engine Optimization to form a compelling mix in a single book.

If you have a website that underperforms your expectations, this single book can be your one-stop shop to get all the knowledge you need.

Andy uses interesting examples of how he succeeded in improving his clients’ pages, which nicely illustrate what he describes in theory beforehand. He not only focuses on how to make your website show up at high ranks in search engines (what he calls “natural SEO”), but also discusses in detail how to use pay-per-click (PPC) ads to drive even more people to one’s site. I especially liked how Andy describes how to find the best keywords to pick and how to monitor the success of PPC.

The part about optimization for speed feels a little too separated in the book. It is a good read and provides similar content to Steve Souders’s book, though the level of detail feels a little awkward considering how different the audience for the SEO part of the book is. Still, programmers can easily get deep knowledge about how to make that page load fast.

Unfortunately Andy misses out a little on bringing this all into the grand picture. Why would I want to pursue not only SEO but also optimize the speed of the page? There is a chapter meant to “bridge” the topics, but it turns out to be about how to properly do statistics and use the correct metrics. Important, but not enough to really connect the topics (and actually I would have expected this bridging beforehand).

Altogether I would have structured things a little differently. For example: it’s the content that makes search engines find the page and makes people return to it, yet Andy explains how to pick the right keywords for the content first and tells the reader how to create that content only afterwards. Everything is there; I had just hoped for a different organization of things.

All in all, the book really deserves the broad title “Website Optimization.” Other books leave out SEO, which is usually what people mean when they want to optimize their websites (or have them optimized).

I really liked that the topics are combined in one book, and I highly recommend it to everyone who wants to get his or her website in shape.

The book has been published by O’Reilly in July 2008, ISBN 9780596515089. Also take a look at the Website Optimization Secrets companion site.

Thanks to Andy for providing me a review copy of this book.

Upgrade WordPress Script

Whenever a new version of WordPress comes out (as just WordPress 2.6 did), it is somewhat of a pain to upgrade it.

But not for me anymore, because a few versions ago I created a small (and simple) script which I would like to share with you.


$ cat upgrade_wordpress.sh
# fetch the latest WordPress release
wget http://www.wordpress.org/latest.tar.gz
# the tarball extracts into wordpress/, so temporarily rename the document root
mv www wordpress
tar --overwrite -xzf latest.tar.gz
rm latest.tar.gz
# and rename it back
mv wordpress www

www is my document root, and the script sits outside of it. It downloads the most recent version and extracts it, overwriting the existing files. The script doesn’t contain anything extraordinary, but it makes upgrading really easy.

Of course this script is only useful if you have ssh access to your web server, but if you do, it might ease the (almost too frequent) pain of upgrading WordPress.

bash completion for the pear command

I am only scratching my own itch here, but maybe someone can use it or expand from it.

I just always found it annoying that pear run-tests <tab> offers all files instead of just *.phpt files. That is what this snippet fixes.

Paste this into the file /opt/local/etc/bash_completion on OSX (for me it is just before _filedir_xspec()) or into a new file /etc/bash_completion.d/pear on Debian.


# pear completion
#
have pear &&
{
_pear()
{
    local cur prev commands options command

    COMPREPLY=()
    cur=${COMP_WORDS[COMP_CWORD]}

    commands='build bundle channel-add channel-alias channel-delete channel-discover channel-info channel-update clear-cache config-create config-get config-help config-set config-show convert cvsdiff cvstag download download-all info install list list-all list-channels list-files list-upgrades login logout makerpm package package-dependencies package-validate pickle remote-info remote-list run-scripts run-tests search shell-test sign uninstall update-channels upgrade upgrade-all'

    if [[ $COMP_CWORD -eq 1 ]] ; then
        if [[ "$cur" == -* ]]; then
            COMPREPLY=( $( compgen -W '-V' -- $cur ) )
        else
            COMPREPLY=( $( compgen -W "$commands" -- $cur ) )
        fi
    else
        command=${COMP_WORDS[1]}

        case $command in
            run-tests)
                _filedir 'phpt'
                ;;
        esac
    fi

    return 0
}
complete -F _pear $default pear
}

Then re-source your bashrc, or log out and log back in.

I am far from being an expert in bash_completion programming, so I hope someone can go on from here (or maybe has something more complete lying around?).

Facebook discloses its users to 3rd party web sites

Q&A with Dave Morin of Facebook

Just a quick post, because what I read at Joshua Porter’s blog somewhat alarms me: Facebook’s Brilliant but Evil design.

I feel more and more reassured about why I don’t use Facebook and have a bad feeling about them.

The gist is this: when you buy something at a participating web site (Ethan Zuckerman shows how it is done at overstock.com), Facebook discloses to that 3rd party web site that you are a user of Facebook and hands over some more details about you — while you are only visiting that 3rd party page (and not facebook.com)!

This goes against the idea of separate domains on the Internet. Joshua fortunately also goes into technical detail about how this could be done.

In my opinion Facebook users should quit the service and heavily protest against these practices. But I am afraid few of them will even notice that this is happening.

Posted in web

This was FOWA Expo 2007


I have been attending this year’s Future of Web Apps Expo in London’s ExCeL centre.

There were a ton of interesting speakers and I enjoyed listening a lot. Amongst others there were Steve Souders of Yahoo (High Performance Web Sites), Paul Graham of Y Combinator (The future of web startups), Matt Mullenweg of WordPress.com (The architecture of WordPress.com, he was the only one to go into some detail) and Kevin Rose of digg (Launching Startups).

I also enjoyed Robin Christopherson’s talk very much. He is visually impaired and showed how he browses the web (amazing how fast he had set the speed of his screen reader — I understand why most visually impaired people turn up the speed, yet it still feels awkward to listen to) and which challenges arise from that. Unfortunately Chris Shiflett only held a workshop, which I did not attend.

The conference was clearly not so much for developers (at some points I would have greatly enjoyed some delving into code), so I am trying to keep my eyes open for even nerdier conferences :) Any suggestions?

On the evening of the first day there was a “live” Diggnation recorded, which was pretty fun.

According to Ryan Carson, he will be publishing audio files of the talks on www.futureofwebapps.com soon. Thanks to Carsonified for putting on this great conference. I hope I will be able to return next year.

I have posted more photos to flickr.


High Performance Web Sites, a book by Steve Souders

I’d like to introduce you to this great book by Steve Souders. There have already been several reports about it on the Internet, for example on the Yahoo Developers Blog. There is also a video of Steve Souders talking about the book.

The book is structured into 14 rules, which, when applied properly, can vastly improve the speed of a web site or web application.

Alongside the book he also introduced YSlow, an extension for the Firefox extension Firebug. YSlow helps the developer see how well his site complies with the rules Steve has set up.

I had the honour of doing the technical review on this book, and I love it. Apart from some standard techniques (for example employing HTTP headers like Expires or Last-Modified/ETag), Steve certainly has some tricks up his sleeve:

For instance he shows how it is possible to reduce the number of HTTP requests (by inlining the script sources) for first time visitors, while still filling up their cache for their next page load (see page 59ff).

The small downside of this book is that some rules need to be taken with care when applied to smaller environments; for example, it does not make sense (from a cost-benefit perspective) for everyone to employ a CDN. A book just can’t be perfect for all readers.

If you are interested in web site performance and have a developer background, then buy this book (or read it online). It is certainly something for you.

The book has been published by O’Reilly in September 2007, ISBN 9780596529307.



What does “size” in int(size) of MySQL mean?

I always wondered what the size of numeric columns in MySQL meant. Forgive me if this is obvious to everyone else, but for me the MySQL manual lacks a great deal in this area.

TL;DR: It’s about the display width. You only see it when you use ZEROFILL.

Usually you see something like int(11) in CREATE TABLE statements, but you can also change it to int(4).

So what does this size mean? Can you store higher values in an int(11) than in an int(4)?

Let’s see what the MySQL manual says:

INT[(M)] [UNSIGNED] [ZEROFILL]
A normal-size integer. The signed range is -2147483648 to 2147483647. The unsigned range is 0 to 4294967295.

No word about the M. The entry about BOOL suggests that the size is not there just for fun, as BOOL is a synonym for TINYINT(1) (with the specific size of 1).

TINYINT[(M)] [UNSIGNED] [ZEROFILL]
A very small integer. The signed range is -128 to 127. The unsigned range is 0 to 255.

BOOL, BOOLEAN
These types are synonyms for TINYINT(1). A value of zero is considered false. Non-zero values are considered true: […]

So TINYINT(1) must be different in some way from TINYINT(4), which is assumed by default when you leave the size out¹. Still, you can store, for example, 100 in a TINYINT(1).

Finally, let’s get to the place in the manual with the biggest hint as to what the number means:

Several of the data type descriptions use these conventions:

M indicates the maximum display width for integer types. For floating-point and fixed-point types, M is the total number of digits that can be stored. For string types, M is the maximum length. The maximum allowable value of M depends on the data type.

It’s about the display width. The weird thing, though², is that if you have, for example, a value of 5 digits in a field with a display width of 4 digits, the display width will not cut any digits off.

If the value has fewer digits than the display width, nothing happens either. So it seems like the display width doesn’t have any effect in real life.

Now² ZEROFILL comes into play. It is a neat feature that pads values that are (here it comes) shorter than the specified display width with zeros, so that you always receive a value of the specified length. This is useful for invoice IDs, for example.

So, concluding: the size is neither bits nor bytes. It’s just the display width, which is used when the field has ZEROFILL specified.

If you see any more uses for the size value, please tell me. I am curious to know.

¹ See this example:
mysql> create table a ( a tinyint );
Query OK, 0 rows affected (0.29 sec)
mysql> show columns from a;
+-------+------------+------+-----+---------+-------+
| Field | Type       | Null | Key | Default | Extra |
+-------+------------+------+-----+---------+-------+
| a     | tinyint(4) | YES  |     | NULL    |       |
+-------+------------+------+-----+---------+-------+
1 row in set (0.26 sec)

mysql> alter table a change a a tinyint(1);
Query OK, 0 rows affected (0.09 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> insert into a values (100);
Query OK, 1 row affected (0.00 sec)

mysql> select * from a;
+-----+
| a   |
+-----+
| 100 |
+-----+
1 row in set (0.00 sec)

² Some code to better explain what I described so clumsily:
mysql> create table b ( b int (4));
Query OK, 0 rows affected (0.25 sec)

mysql> insert into b values (10000);
Query OK, 1 row affected (0.00 sec)

mysql> select * from b;
+-------+
| b     |
+-------+
| 10000 |
+-------+
1 row in set (0.00 sec)

mysql> alter table b change b b int(11);
Query OK, 1 row affected (0.00 sec)
Records: 1 Duplicates: 0 Warnings: 0

mysql> select * from b;
+-------+
| b |
+-------+
| 10000 |
+-------+
1 row in set (0.00 sec)

mysql> alter table b change b b int(11) zerofill;
Query OK, 1 row affected (0.00 sec)
Records: 1 Duplicates: 0 Warnings: 0

mysql> select * from b;
+-------------+
| b |
+-------------+
| 00000010000 |
+-------------+
1 row in set (0.00 sec)

mysql> alter table b change b b int(4) zerofill;
Query OK, 1 row affected (0.08 sec)
Records: 1 Duplicates: 0 Warnings: 0

mysql> select * from b;
+-------+
| b     |
+-------+
| 10000 |
+-------+
1 row in set (0.00 sec)

mysql> alter table b change b b int(6) zerofill;
Query OK, 1 row affected (0.01 sec)
Records: 1 Duplicates: 0 Warnings: 0

mysql> select * from b;
+--------+
| b      |
+--------+
| 010000 |
+--------+
1 row in set (0.00 sec)

,

Webkit catching up with Firefox and Firebug

Webkit, the rendering engine that powers Apple’s Safari web browser, is getting a lot of love lately (iPhone, Windows beta version).

But for developers it was always hard to debug and inspect web applications running in Safari. With Drosera, a decent debugger has existed since June 2006 (only for Webkit so far, though — it’s not going to happen with Safari 2).

And now the (already existing, but somewhat weird-looking) Web Inspector got a makeover:

Webkit: New Inspector

This is a big step, giving web developers not only the chance to precisely identify why this or that DOM element is shown the way it is, but also a look into how the web page loads, much like Firebug on Firefox.

As a neat extra, you can view how your components add to the loading time of the page.

Webkit: Transfer Time

Even though Webkit is in some ways just mimicking Firebug, it is a good step for future web development on Safari. All the more so as the new Webkit builds contain fewer of the usual browser quirks that make programming for Safari difficult in the Ajax world.

The Webkit nightly builds provide the new feature via a right click on the page and selecting “Inspect Element”. For more info, see the blog post on the Surfin’ Safari Webkit blog.

Finally one more pic, because it’s quite beautiful :)

Webkit: CSS/DOM


Posted in web

Spamhaus.org no longer lists Austrian Registry on its Block List

It has come to my attention today that Spamhaus, the almost famous spam block list provider, put the IP addresses of the Austrian registry nic.at on their block list.

The list that Spamhaus provides is actually something good: it allows mail server administrators to automatically block mails arriving from servers that are known to be operated by phishers.

At this point Spamhaus took a wrong turn, though. They demanded that the Austrian registry delete 15 domains that they consider to be used by phishers, apparently without providing (enough) evidence to nic.at. So nic.at responded that — because of Austrian law — they cannot just delete domains without proof of bogus WHOIS addresses.

I cannot judge who is ultimately right in this dispute (e.g. whether Spamhaus provided enough evidence or not), but I can definitely judge that Spamhaus made the wrong decision when they started to block the IP addresses of nic.at in their list.

Welcome to the Kindergarten, guys.

nic.at is bound to Austrian law, and as a foreign company you can’t just come along and ask them to remove certain domains. What if someone went to your registry and requested the deletion of spamhaus.org without providing any legitimate reason?

Dear Spamhaus, you need to stick to your policy. Your block list is about phishers, and nic.at did not send out any phishing mails. You can’t just put someone on there because you want to pressure them.

As a result, mail server administrators should no longer rely on the block lists of a provider who misuses his own list to try to put other companies and organizations under pressure. So this is the right moment to remove sbl-xbl.spamhaus.org from your server configuration.

Coverage on the German Heise.de.

Update 2007-06-20: They have stopped listing nic.at. Finally they see reason. (They have changed the listing to the IP address block 193.170.120.0/32, which matches no addresses.) Also see the German futurezone.


Subversion: The Magic of Merging

When programming professionally, Subversion is a must-have. The same goes for system administration: it’s quite a good idea to keep your configuration files (e.g. in Linux the whole /etc/ directory) as a Subversion checkout.

So the goal of Subversion (or any other source control system) is to allow you to do something Apple will introduce with its new Leopard operating system: Time Machine. Go back in time (and restore a version of a file as it was on day X).

Using Subversion on a daily basis is quite easy. Just check in (svn ci) your changes after you have completed a certain task. When you work collaboratively, and someone else has committed some changes, you do a svn up and the changes of the others are applied to your codebase.

That’s all you basically need. But how can you go back in time now?

So you poke around a bit and find that svn up has a parameter -r which lets you put your checkout into the state it was in at a certain revision.

Let’s suppose we know that something was OK on Monday and is not today. So let’s use the command from above to see what it looks like.

~/project/trunk$ svn up -r {2006-10-09} app.php
U app.php

Voila, there it is. Now we decide to use that code and throw away all changes that have been committed since. We modify the file a bit and do a check-in:

~/project/trunk$ svn ci -m "revert to monday" app.php
Sending app.php
svn: Commit failed (details follow):
svn: Your file or directory 'app.php' is probably out-of-date
svn: The version resource does not correspond to the resource within the transaction. Either the requested version resource is out of date (needs to be updated), or the requested version resource is newer than the transaction root (restart the commit).

Uh… OK. You probably know that error message already. It is also returned when you try to check in a file that has been changed by someone else since your last svn up.

When you check something into a subversion repository, one of the basic rules is that the file you want to commit is “up to date”, i.e. the revision number of your local file (updated by svn up) equals the number in the repository (on the server).

OK, so let’s update our checkout so we can re-run the check-in.

~/project/trunk$ svn up
G app.php

So you discover that the changes that happened since have been re-inserted into the file. Maybe Subversion has even alerted you of a conflict, because you changed some lines that had also been modified since Monday.

Great! Basically we are back to where we started.

Let’s not give up here, but rather use the appropriate command: svn merge. That command is mostly known for merging changes from one branch of development to another. But it can also help you go back in time.

svn merge takes a revision range, specifying which changes are to be merged, and a source — the part of the Subversion repository that should be searched for the changes.

Usually you would find this command used like this:

~/project/trunk$ svn merge -r 15:26 ../branches/first_release/
G app.php

So with two revisions specified, you define a range of changes to be merged into the current checkout. OK, so how does this help us here?

You can also specify the revisions backwards, to go back in time. So to undo the command from before, you can write:

~/project/trunk$ svn merge -r 26:15 ../branches/first_release/
G app.php

To put it simply, Subversion generates a diff behind the scenes that incorporates the changes between the given revisions. Then the changes are merged into the files in the same way the patch command (Linux, Unix, OS X, …) does it. When going back in time, it works as if the patch were applied with the -R parameter, i.e. in the reverse direction. Voila.

So as a final solution this leaves us with:

~/project/trunk$ svn merge -r head:{2006-10-09} .
U app.php
~/project/trunk$ svn ci -m "revert to monday" app.php
Sending app.php
Transmitting file data .
Committed revision 27.

For further questions, the Subversion FAQ is a good starting point when you know exactly what you want (i.e. the correct terminology). (For example, reverting does not mean going back to a previous version of a file, but rather removing the changes you made locally.)

There is the Subversion book (also published by O’Reilly), of which the Guided Tour is a good starting point.

The process I described above as trial and error is also described in that book under Undoing changes.

Also, OSCON: Subversion Best Practices, a transcript by Brad Choate of a talk given by the Subversion creators (Ben Collins-Sussman and Brian W. Fitzpatrick), has some good tips.

Have fun :)
