Reducing Our Dependency On Third-Party Platforms For Our Online Activity

There is something that I feel is not right with today’s web structure. We, as the population of the web, create so much content that ends up on servers of large companies. We could own our data.

Therefore, I’d like to help reduce everyone’s dependency on third-party platforms for their online activity.

Vision

You use your own server for publishing and interact with other people’s content on their server.

Since everybody follows this method, your content only reaches the people it is meant for, without involving a third party in the distribution.

You interact using the same means as today: web UI, mobile apps; the difference is that they don’t talk to a third-party server, neither for publishing nor for fetching and interacting with others’ content.

I am trying to create a solution for this with WordPress and the Friends plugin.

Note: in the following text I use the word “server” somewhat synonymously with the WordPress+Friends setup, but there could be alternative implementations. Also, “server” doesn’t mean “a dedicated machine somewhere”; you just need web space where you can install WordPress, and that doesn’t have to be expensive. For example, wpfriends.at is hosted for €1,90/month incl. domain.

Separation of Content

Today, separate social networks exist for different types of content; for example, short content has its home on Twitter, and Instagram is for photos. Facebook is a mix of these but allows for private content. Instant messengers like WhatsApp (or Facebook groups) are very clear about who the content or conversation is for.

So, separating content makes for a more homogeneous experience when consuming it, and you also have a good grasp of where (and for whom) you want to publish your content.

On your own server, you can separate out different types of content as well (in WordPress this is called “post formats”). You can also post something privately, only giving access to authenticated users.

Adding a friend and subscribing to multiple platforms on which they publish

The key is to connect the different types appropriately. By fetching the content from your friends and placing it in such “buckets,” you now have the option to view everyone’s content from multiple perspectives: everything from a single friend (or a group) across content types, or a single content type across all friends.
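
To make these “buckets” a bit more tangible: in WordPress terms, such a view is essentially a query filtered by author and/or post format. This is just a rough sketch using standard WordPress APIs, not the plugin’s actual code; the author slug “alice” is a made-up example.

// Rough sketch (not the plugin's actual code): all short "status" posts
// by the friend with the (made-up) user slug "alice".
$friend_statuses = new WP_Query( array(
    'author_name' => 'alice',
    'tax_query'   => array(
        array(
            'taxonomy' => 'post_format',
            'field'    => 'slug',
            'terms'    => array( 'post-format-status' ),
        ),
    ),
) );

// Dropping 'author_name' gives the other perspective: one content type
// across all friends.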

By continuing to use specialized apps for different types of content, you keep their benefits without giving a third party full access to the content, since the apps now talk to your own server.

Platforms

The possibility of publishing on the Internet using your own server (most of the time: rented web space) has been well established since blogging was invented. It becomes clunky when you interact with others: how do I respond to someone else’s post?

That’s why platforms are appealing, for several reasons:

  • It’s easy to connect with others on the same platform.
  • Leveraging network effects is easier since the platform knows who is connected to whom.
  • Spam is usually under control.

Thinking back to the era of blogging, we had lots of interaction with others through comments and pingbacks. What mainly led to its demise was automated spam, which reduced everybody’s willingness to be open to interaction. Commenting meant moderating spam.

Core

So one of the core features of the Friends plugin is to solve the authentication problem between people you know and trust.

When you decide “to become friends” with someone, it means that you both get (low privilege) accounts on each other’s servers.

You can give them permission to see your privately published content, or not. When you want to respond to their post, you are automatically authenticated on their side.

This means that you could close comments for unauthenticated users (thus eliminating spam) but keep the discussion open for your friends.
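
As a sketch of what that enables (this is not the plugin’s actual code, and how the plugin manages friend accounts is glossed over here): with WordPress’s standard comments_open filter you can close comments for anyone who isn’t logged in, so friends with an account can still comment while anonymous spam is locked out.

// Sketch: only logged-in users (e.g. friends who received an account on this
// server) may comment; unauthenticated visitors cannot, which eliminates spam.
add_filter( 'comments_open', function ( $open, $post_id ) {
    if ( ! is_user_logged_in() ) {
        return false;
    }
    return $open;
}, 10, 2 );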

Transitioning

Using your own server still has a learning curve. Getting started with a self-hosted WordPress and installing the Friends plugin is not as trivial as creating an account on a social network.

While it has become considerably easier to register a domain, get web space connected, and get WordPress installed, there are still many further steps until you can really get started.

So, reality is that your friends likely won’t be migrating off third-party networks. Possibly never.

To still allow yourself to disconnect, the Friends plugin allows you to follow your friends across different (possibly third-party) channels including popular social networks like Twitter.

You can subscribe to someone on Twitter and their messages will be aggregated on your server under their user, in the respective post format.

This means that you can either view all the content (that you follow) for a single friend, or you can view the aggregate type of content, e.g. all the short messages your friends posted across services.
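
Under the hood, aggregating a friend’s message boils down to storing the fetched item as a regular post under that friend’s user and marking it with a post format. The following is only an illustration with standard WordPress functions; the plugin’s actual internals differ, and $friend_user_id, $item_title and $item_content are placeholder variables.

// Illustration only: store a fetched short message as a "status" post
// attributed to the friend's local user account.
$post_id = wp_insert_post( array(
    'post_author'  => $friend_user_id,          // the friend's user on this server
    'post_title'   => wp_strip_all_tags( $item_title ),
    'post_content' => $item_content,
    'post_status'  => 'publish',
) );
set_post_format( $post_id, 'status' );          // puts it into the short-message bucket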

Not only for “real” Friends

Many use social networks to not only follow people they know and trust, but also to follow the news or celebrities. Despite the name of the Friends plugin, it can also do that.

You can subscribe to any supported content (some out of the box; for others you can extend it with plugins) without going into the friendship realm.

Your server takes care of fetching the content from various third-party sources and you can then consume the content in the way you see fit:

  • The local “Friends UI” on your own server,
  • an RSS reader,
  • or more specialized clients like mobile apps.

Ads

One possible side effect: our content would no longer be wrapped in ads and in websites trying to make us spend as much time as possible on their pages.

We’d all spend a little money on a domain and server every month or year, for our own gain. We don’t need to put ads on our content.

For journalistic content it is already not uncommon for tech publications to provide an ad-free RSS feed as part of their paid subscription.

More Ideas

You can use your server to store more things, for example:

Your personal bookmark collection and todo lists (I have a work in progress to transition my previous project thinkery.me to a WordPress plugin, thinkery, so that you can host this yourself and use the Android app as a client).

Leverage your aggregated content as your own browser start page (my WordPress plugin called Startpage).

Right now, this is only implemented for WordPress, and the authentication is only leveraged for consuming private content and posting comments. But it doesn’t need to end there.

The authentication could be used for further actions, for example you could give posting permissions to your friends to create a (private) forum, hosted by you or a friend.

The Friendship protocol is a REST API that can be implemented in other software as well.
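
In WordPress, such an endpoint can be registered through the standard REST API. The snippet below is only a sketch of what that shape looks like; the namespace, route and handler names are made up and do not match the plugin’s actual endpoints.

// Sketch of a friendship endpoint (made-up names, not the plugin's real routes).
add_action( 'rest_api_init', function () {
    register_rest_route( 'my-friends/v1', '/friend-request', array(
        'methods'             => 'POST',
        'callback'            => 'my_handle_friend_request',
        'permission_callback' => '__return_true', // the request itself carries the proof
    ) );
} );

function my_handle_friend_request( WP_REST_Request $request ) {
    // Validate the requesting site, store its key, create the (low privilege)
    // friend user, and leave the request pending until the owner accepts it.
    return new WP_REST_Response( array( 'status' => 'pending' ), 200 );
}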

The Friends plugin is open source and GPL.

Work in Progress

How far along is the Friends plugin? It’s already well usable for your own purposes. As I said above, you can establish friendships between servers and consume your friends’ content, even from third-party social networks.

There is still a lot to do, especially around commenting and notifications. Better tools are needed for leveraging network effects, like searching your friends’ posts and exploring their friends (if they allow you to do so).

Posted in web

Pocketbook Color

I read a lot of web articles on my e-reader (often using Push to Kindle which is fantastic). I left the Kindle ecosystem a while ago and Pocketbook (a TouchHD 3) has been a good home so far.

Since my content is often a mix of text and non-text, a color eInk screen appealed to me. The Pocketbook Color recently came out and I purchased one to test it.

Here are my conclusions after about two weeks of usage:

Pro

  • While limited in their range, colors work well for diagrams, illustrations and screenshots. Photos can look a bit awkward but it’s definitely better than greyscale.
  • Before this e-reader, I hadn’t read comics on a device, but with color it’s quite fun. The clipping tools in the UI are useful for getting rid of white borders around the actual comics.

Con

  • It’s not very suitable for night-time reading (which is my main use case): the minimum brightness is high, and there is only a white backlight.
  • With the strong white backlight it feels like an LCD screen which kind of diminishes the idea behind eInk.
  • The technology seems to use two layers: one “common” 300dpi greyscale layer, and one color layer at around 150dpi. This results in a pattern overlaying the whole screen, making it less crisp when reading text.
  • Pocketbook readers are rather slow. I always wonder how they do scrolling and inertia so much better than Kindles but feel so. slow. navigating through the UI.

Overall, I think color eInk is a technology well worth exploring, especially where no backlight is necessary and it doesn’t need any battery power (think photo walls).

I am torn whether I’ll keep the Pocketbook Color as my main reading device because using the reader in the dark is like turning on the light in the room.

Posted in web

Decentralized Social Networking with WordPress

Over the past year, I’ve been working on the side on a WordPress plugin that implements an idea that has been growing in me over the last couple of years: decentralized social networking. The plugin that does it is called Friends.

It starts with the frustration that there are few alternatives for people who use Facebook: if you don’t want them to own your data but still want to privately keep your friends and family up to date about your life and discuss what interests you, where do you go?

I realize that many people have just switched to instant messaging (like WhatsApp), which does allow exchanging private messages (and photos) with your friends, but overall I like the idea of a more structured publishing platform. I just don’t want a single entity to control it all.

So I realized: We actually had an alternative all along: blogging.

Blogging is decentralized: you decide where you host, you decide which blogs you read and nobody really knows which ones you have subscribed to.

What disqualifies it as an alternative for my wish to keep friends and family up to date is that it is public by default. While there is the option to publish something as “private,” there is not a lot you can then do with a privately published post.

So what if you were easily able to give your friends access to your private posts?

Here comes the Friends plugin. And with it, the downside to the solution:

You need to have your own blog to become friends with each other.

Right now, this is only implemented for WordPress; the technology is framework-agnostic, though.

Your friends get their own user on your blog. Here my request is still pending.

When everyone involved has their own blogging platform and they’d decide where they want to host, we automatically get a decentralized platform.

The technological ingredients to this are actually pretty old:

  • RSS
  • REST API
  • Authentication via keys

After a friendship has been established (this involves both parties accepting the friendship and exchanging private keys in the background), your server will use that key when requesting your friend’s RSS feed, which will in turn (since you are friends) contain private posts. And vice versa.
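
Conceptually, the feed request then looks like a normal RSS fetch with the exchanged key attached. The sketch below uses standard WordPress HTTP functions; the query parameter name, option name and feed URL are made-up placeholders, not the plugin’s actual wire format.

// Sketch only: fetch a friend's feed with the exchanged key attached, so the
// feed also contains posts published as private. All names are placeholders.
$key      = get_option( 'friend_key_for_alice' );
$feed_url = add_query_arg( 'friend', $key, 'https://alice.example/feed/' );

$response = wp_remote_get( $feed_url );
if ( ! is_wp_error( $response ) ) {
    $feed_xml = wp_remote_retrieve_body( $response );
    // parse $feed_xml and store the new items locally
}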

For commenting, you’ll go to your friend’s blog; it’s a one-click authentication away. This can also eliminate spam if you only allow friends to comment on your posts.

All of this is highly compatible with standard WordPress: if you want to accept a friend request on your mobile phone, use the WordPress iOS or Android app and change that user’s role to “Friend.”

Chicken-and-egg problem: I use a social network because all my friends are there.

So why use a social network that not all of your friends use (yet)? Because the Friends plugin is actually a pretty decent way to consume RSS feeds.

Since it is based on RSS, you can also subscribe to any blog or website that offers this well-established way of distributing content.

As you are in control of the server, you can also decide what you’re interested in and tailor the feeds to your liking: you can define rules for incoming feed items and ignore posts you know you won’t be interested in.

I personally like to consume notifications via e-mail as it provides read/unread functionality and I can sort and categorize e-mails to my needs. You can now also read your friends’ posts (or subscriptions) via e-mail. But you don’t need to.

Another way to view your friends’ posts and subscriptions is the “Friends Page,” a timeline of both. You can scroll down the list of your friends’ posts and subscriptions, just as you see people doing all the time on Facebook.

Read your friends’ posts on the Friends Page

There is quite a bit more to this: you can Emoji-react to a post, recommend it specifically to friends, have sections in your posts for friends/not-friends, and more.

Overall, I see this as a way to take blogging to the next level: choose your private audience. Choose where you host. Publish publicly if it’s meant to be public.

It’s clear that setting up and having your own blog is not (yet) for everyone. It’s more work than just signing up for some social network. Often you’ll need to pay for hosting (and domain). But it also gives you the freedom to take your data somewhere else if you want to. Or delete it.

We’ll have to see who will use this. As of now, it’s for a technical audience but maybe someday there will be dedicated Friends-WordPress hosting?

The Friends Plugin is open source under GPL2. If you have a WordPress blog, try it out, and if you think this could be better, different, enhanced: create an issue, or better: create a patch and send a pull request.

It’s probably not ready for prime time yet; we’re at version 0.14. But it’s getting there, at least I am already using it daily :) There are many ideas left to be implemented, and these are only mine so far.

Oh, and if you happen to be in Vienna in the coming week, I’m going to talk about Decentralized Social Networking with WordPress at our WordPress meetup on November 7, 2018.

November WordPress Meetup Vienna

Wednesday, Nov 7, 2018, 6:30 PM

CodeFactory Vienna
Kettenbrückengasse 23/2/12 Wien, AT


Schedule:
18:30 | Arrival, Registration
19:00 | Welcome & Introduction
19:15 | Alex Kirk – Decentralized Social Networking with WordPress
19:45 | Break
20:00 | Harry Martin – Hello to 5.0, a first look at the next major release and the new theme Twenty Nineteen
20:30 | Socialising!
21:00 | Leaving CodeFactory, maybe for drinks somewhere close by

Check out this Meetup →

Posted in web

Fixing WhatsApp image dates after Android Migration

Recently I ran into the problem of a completely unsorted photo library in Android after migrating to a new phone. The reason is that WhatsApp images are copied into internal storage and end up with the modification date of when they were copied, thus clumping together when they should be spread out over time.

The problem is not that trivial to solve, because you cannot mount internal storage on a computer and then modify the file dates. Thankfully, I was still able to create a viable solution using a bash script.

It all revolves around the Android Terminal Emulator Termux which allows you to execute scripts on your phone.

  1. Install Termux.
  2. Grant Termux access to your storage directories.
  3. Install the core tools: apt install coreutils
  4. Copy this script into your file tree (for example via Android File Transfer):
# Derive the date from the filename (IMG-YYYYMMDD-..., VID-..., AUD-...)
# and set it as the file's modification time.
for f in IMG-20* VID-20* AUD-20*; do
    [ -e "$f" ] || continue
    NEWDATE=`echo $f | cut -c5-8`-`echo $f | cut -c9-10`-`echo $f | cut -c11-12`
    echo touch -d "$NEWDATE" "$f"
    touch -d "$NEWDATE" "$f"
done
  5. Run the script in the directories where you need to fix file dates (for example in /storage/emulated/0/WhatsApp/Media/WhatsApp Images).
  6. Delete the data of MediaStorage (and maybe reboot) to make Android re-index the files with the new file dates.
Posted in web

Stack Overflow: Ways out of the negativity

This is in response to the Stack Overflow Meta question: Why is Stack Overflow so negative of late?

In my opinion, the problem that Stack Overflow is currently facing is caused by a lot of new users characterized by user Mysticial as "help vampires". They care nothing for the site and just want their code fixed. They do little or no research and provide less than the minimum information needed. Most of the time the questions are very basic and can be answered by an intermediate programmer in a few minutes.

In a normal forum, such questions would not yield any responses. Not so on Stack Overflow: you get reputation for answering questions, and therefore even these badly researched questions get answers in under a minute. Mysticial calls these users "reputation whores".

The problem is that "help vampires" and "reputation whores" create a vicious circle: they both need each other and therefore the circle continues to spin.

The outcome of this situation: the site is flooded with a high number of low-quality questions, and experienced programmers who are interested in learning something don’t see the forest for the trees. Even though questions can be voted up, they don’t stand out enough to gain momentum.

Proposed Solutions

a) Create a "beginners test"

This would create a higher burden for low reputation users before they can ask their question. They need to invest more time and rethink their action before they get to post something.

A few ideas what that could be:

  • The user needs to give 3 search queries that he used either on Google or on Stack Overflow that didn’t yield results.
  • If they don’t include any code, they must confirm that they are asking a non-code question. See this proposal on Stack Exchange Meta.
  • Specify the time that they took to research the problem (while this can be easily faked, it makes the user reconsider if they had taken enough time for the problem)

b) Have experienced users review a question, before it goes online

There would be a process where a new user asks his or her question, but it doesn’t go online immediately. Higher-reputation users read the question but cannot answer it yet; instead they give feedback on whether the question has enough information or has been researched enough. Only then does the question get thrown into the shark tank.

It would be fine to reward these reviewing higher-reputation users with even more reputation: they are helping to improve the site, which is what the reputation system was actually designed for: making the site interesting, not feeding the "help vampires".

All in all, it is remarkable that despite the current situation, Stack Overflow has reached the quality it has. The reputation and badge system has surely been a very big factor in this, but it is rather appalling that in order to reach a certain reputation level, you really have to feed the "help vampires".

You can find me on Stack Overflow as akirk.

Posted in web

Fix qTranslate with WordPress 3.9

When updating a blog of mine to WordPress 3.9, the page wouldn’t load anymore because qTranslate could not cope with the update. The error log says:

PHP Catchable fatal error: Object of class WP_Post could not be converted to string in ../wp-content/plugins/qtranslate/qtranslate_core.php on line 455

The error is caused by this change: get_the_date() to accept optional $post argument

There is a proposed quick fix by Saverio Proto, but it doesn’t tackle the problem at its root:

qTranslate registers the function qtrans_dateFromPostForCurrentLanguage($old_date, $format = '', $before = '', $after = '') for the hook get_the_date, but so far the hook passed fewer arguments. With the update, the hook additionally passes the $post object, which now wrongly lands in the variable $before, and $before is subsequently converted to a string.

So the solution is simply to delete the two parameters that were assigned the wrong meaning and have defaults anyway.
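
For illustration, here is a minimal reproduction of the mismatch and of the fix, using made-up callbacks rather than qTranslate’s actual code (only the get_the_date hook itself is the real one):

// Broken variant: keeping the optional $before/$after parameters means the
// newly passed $post object (a WP_Post) lands in $before and is then
// concatenated as a string, which triggers the catchable fatal error.
add_filter( 'get_the_date', function ( $the_date, $format = '', $before = '', $after = '' ) {
    return $before . $the_date . $after;
}, 10, 4 );

// Fixed variant, as described above: drop the two parameters that were given
// the wrong meaning (they had defaults anyway), so extra arguments are ignored.
add_filter( 'get_the_date', function ( $the_date, $format = '' ) {
    return $the_date;
}, 10, 2 );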

Posted in web

Thinkery API launched

Just a quick note, we made the Thinkery API public.

If you don’t know thinkery.me, it is a simple yet powerful tool for storing both notes and bookmarks. The contents of a saved page are stored in your Thinkery, which you can keep even if the webpage goes down. With #hashtags you can easily categorize everything.

Check it out!

Posted in web

Genial Daneben Analysis

For a change, a post in German. I am a fan of the (meanwhile cancelled) TV show Genial daneben. There is a Genial Daneben database with (nearly) all the questions that came up in the shows. Without any concrete use in mind, I converted this data from its text form into a real database (script here) and arrived at the following table. For anyone who is interested:

Questions: 2732 (1087 answered, which is almost 40%)

Panelist Appearances Solved Solved per appearance
Bernhard Hoecker 399 223 0.56
Hella von Sinnen 390 202 0.52
Wigald Boning 120 56 0.47
Guido Cantz 96 51 0.53
Barbara Schöneberger 62 14 0.23
Dieter Nuhr 62 18 0.29
Bastian Pastewka 62 6 0.10
Ralf Schmitz 60 6 0.10
Oliver Kalkofe 55 6 0.11
Herbert Feuerstein 47 25 0.53
Georg Uecker 39 13 0.33
Olli Dittrich 38 11 0.29
Oliver Welke 37 11 0.30
Ingo Appelt 32 8 0.25
Martin Schneider 31 0 0.00
Michael Kessler 29 6 0.21
Thomas Hermanns 27 7 0.26
Anke Engelke 26 8 0.31
Matze Knop 23 2 0.09
Christoph Maria Herbst 20 3 0.15
Mario Barth 20 8 0.40
Jürgen von der Lippe 19 3 0.16
Lou Richter 19 7 0.37
Ingo Oschmann 13 2 0.15
Guildo Horn 13 1 0.08
Dirk Bach 12 3 0.25
Kim Fisher 12 1 0.08
Urban Priol 10 2 0.20
Cordula Stratmann 9 0 0.00
Eckart von Hirschhausen 9 0 0.00
Tetje Mierendorf 8 0 0.00
Oliver Pocher 8 0 0.00
Bodo Bach 8 0 0.00
Hennes Bender 7 1 0.14
Rüdiger Hoffmann 7 0 0.00
Elton 7 1 0.14
Johann Köhnich 7 0 0.00
Bürger Lars Dietrich 7 2 0.29
Helge Schneider 7 7 1.00
Anka Zink 6 0 0.00
Hans Werner Olm 6 0 0.00
Cindy aus Marzahn 6 0 0.00
Kaya Yanar 5 2 0.40
Mike Krüger 5 2 0.40
Horst Lichter 5 2 0.40
Susanne Pätzold 5 1 0.20
Jochen Busse 5 1 0.20
Karl Dall 5 0 0.00
Mirja Regensburg 4 1 0.25
Oli Petszokat 4 2 0.50
Janine Kunze 4 0 0.00
Michael Mittermeier 4 0 0.00
Paul Panzer 3 0 0.00
Florian Schroeder 3 0 0.00
Konrad Stöckel 3 2 0.67
Axel Stein 3 0 0.00
Gayle Tufts 3 1 0.33
Verona Pooth 3 1 0.33
Zack Michalowski 2 0 0.00
Atze Schröder 2 1 0.50
Emily Wood 2 1 0.50
Susanne Fröhlich 2 0 0.00
Gabi Decker 2 0 0.00
Helfried 2 1 0.50
Mirja Boes 2 1 0.50
April Hailer 2 0 0.00
Michael “Bully” Herbig 2 1 0.50
Roberto Cappelluti 2 0 0.00
Olaf Schubert 2 0 0.00
Sissi Perlinger 2 0 0.00
Ottfried Fischer 2 0 0.00
Klaus Eberhartinger 2 0 0.00
Rick Kavanian 2 0 0.00
Lisa Feller 1 0 0.00
Sascha Korf 1 0 0.00
Marc Metzger 1 0 0.00
Joachim Fuchsberger 1 1 1.00
Martin Klempnow 1 0 0.00
Tom Gerhard 1 0 0.00
Ralph Morgenstern 1 1 1.00
Valerie Bolzano 1 0 0.00
Kurt Krömer 1 0 0.00
Hubertus Meyer-Burkhardt 1 0 0.00
Smudo 1 1 1.00
Badesalz (Gerd Knebel und Hendrik Nachtsheim) 1 0 0.00
Markus Maria Profitlich 1 1 1.00
Sven Nagel 1 0 0.00
Bernd Stelter 1 0 0.00
Matthias Matschke 1 0 0.00
Otto Waalkes 1 0 0.00
Till Hoheneder 1 0 0.00
Waldemar Hartmann 1 0 0.00
Lisa Fitz 1 0 0.00
Fatih Cevikkollu 1 0 0.00
Matze Knop (as Franz Beckenbauer) 1 0 0.00
Ralf Morgenstern 1 0 0.00
Ulrike von der Gröben 1 0 0.00

I have not verified most of the data, but by and large it should be correct.

Posted in web

Android WebView: Web page not available

Just a quick note in order to save someone else searching for a solution to this problem.

When you want to display HTML content in an Android WebView do it like this:

String html = "my <b>HTML content</b>. 100% cool.";
WebView webView = (WebView) findViewById(R.id.myWebView);
webView.loadData("<?xml version=\"1.0\" encoding=\"UTF-8\" ?>" + html.replace("%", "%25"), "text/html", "UTF-8");

If you don’t replace the % with its URL-encoded equivalent, you will get a “Web page not available” error. Simple, arguably, but not at all apparent.

Posted in web

Git tip: Changing your mind: Push pending changes to a (not-yet existing) new branch

It happens quite often to me that I start committing things and only afterwards decide I should have created a new branch.

So git status says something like:

Your branch is ahead of 'origin/master' by 4 commits.


but I don’t want to push to origin/master but rather create a new branch. (Of course this works for any other branch name, not just master.)

So you can use this sequence of commands:

git checkout -b newbranch
git push origin newbranch
git checkout master
git reset --hard origin/master

Explanation: This…

1. creates a new branch pointing to the current changes (and switches to it)

2. pushes this new branch including the changes to the server

3. switches back to the branch master
4. and undoes the changes that were made locally

The important thing to know is that the changes remain in the repository because a branch is merely a pointer to a commit.

Afterwards you can continue to commit to master, for example:

(screenshots done with a fork of gitx)

Posted in web

Use an authenticated feed in Google Reader

You currently can’t subscribe to an authenticated feed (for example in Basecamp) in Google Reader.

If you want to do it nonetheless, you can use this script of mine, which will talk to the server that needs authentication, passing through all the headers (so that cookies and “not modified” requests will also come through): download authenticated-feed-passthru.php


<?php
// change this url
$url = "https://username:password@proj.basecamphq.com/projects/123/feed/recent_items_rss";

$ch = curl_init($url);

if (isset($_SERVER['REQUEST_METHOD']) && strtolower($_SERVER['REQUEST_METHOD']) == 'post') {
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $_POST);
}

curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_HEADER, true);

// forward the incoming request headers (cookies, If-Modified-Since, ...) to the target server
$headers = array();
foreach ($_SERVER as $name => $value) {
    if (substr($name, 0, 5) != 'HTTP_') continue;
    if ($name == "HTTP_HOST") continue;
    // e.g. HTTP_IF_MODIFIED_SINCE becomes If-Modified-Since
    $headers[] = str_replace(' ', '-', ucwords(strtolower(str_replace('_', ' ', substr($name, 5))))) . ": " . $value;
}
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// split the response into the header block and the body
list($header, $contents) = preg_split('/([\r\n][\r\n])\\1/', curl_exec($ch), 2);
curl_close($ch);

// pass the response headers (e.g. Content-Type, caching, cookies) on to the client
foreach (preg_split('/[\r\n]+/', $header) as $header) {
    header($header);
}

echo $contents;

If you don’t mind giving away your credentials you can also use Free My Feed.

Posted in web

New Feature for HN Collapsible Threads: Collapse Whole Thread

I have added a feature to the HN Collapsible Threads bookmarklet that enables you to close a whole thread from any point within the thread:

This is useful when you are reading a thread and decide that you have had enough of it and want to move on to the next one. Before, you had to scroll all the way up to the top post and collapse that one.

Drag this to your bookmarks bar: collapsible threads

Install Greasemonkey script

Posted in web

Safari Extension: Clean URLs

I have picked up and developed a fork of Grant Heaslip’s Safari extension URL clenser, which removes all sorts of unnecessary junk from the URL so that you can easily pass on a clean URL to someone else. Things being removed include:

  • Google Analytics parameters (utm_source=, utm_medium, etc.)
  • YouTube-related parameters (feature=)
  • Partner tracking stuff for NYTimes, Macworld, CNN, CBC Canada and The Star

You can download my version here: url_cleanser.safariextz

Posted in web

Title Junk: Solve it with Javascript

There is some back and forth by John Gruber and others, about HTML <title> tags, with Gruber complaining (and rightly so) that for SEO reasons the titles are filled up with junk having little to do with the real page content.

The writers of cam.ly suggest using the SEO title in the HTML and having something proper displayed in Google by using an OpenSearch description. But this still doesn’t solve the problem of bloated window titles and bookmarks.

So my solution to that: use JavaScript. If you want to satisfy your readers with a good title and present a nice title to Google, simply set the title to something nice after the page has loaded with JavaScript:


document.title = "Title Junk: Solve it with JavaScript";

Everyone happy. Except those who have JavaScript disabled maybe.

I have also created a tiny WordPress plugin that does just that: title-junk.zip

Discussion on Hacker News

Posted in web

Reddit-like Collapsible Threads for Hacker News

I enjoy consuming and participating on Hacker News by Y Combinator and Paul Graham.

One thing that needs improvement is reading the comments there. At times the first comment develops into a huge thread, and then the second top-level comment (which might also be well worth reading) disappears somewhere far down the page.

Collapsible Threads at Hacker News through a bookmarklet

Reddit has combated this common problem by making threads easily collapsible. I think it is worth having this on Hacker News too, so I implemented it and wrapped it into a bookmarklet so that you can use this functionality on demand at Hacker News.

Drag this to your bookmarks bar: collapsible threads

As soon as it is available in your bookmarks bar, go to Hacker News and click on it when viewing a comments page. Next to each thread a symbol [+] will appear. Click it to collapse the thread and it will change to a [-]. Click that to expand the thread again.

I have licensed the source code under an MIT License. Click here to view the source code of hackernews-collapsible-threads.js. (For caching reasons the bookmarklet currently loads hackernews-collapsible-threads-v6.js, which is just the same.)

The Hacker News HTML source code seems quite fragile in the sense that the comments section of a page can’t be identified in a really unique way (for example it does not have an HTML id attribute), so it might break when the layout of the page changes. This is why the bookmarklet is actually only a loader for the script on my server. I have tuned the HTTP headers in a way that your browser should properly cache the script so that the speed of my server should not affect the loading of the bookmarklet.

Enjoy :)

If you use Hackernews on another URL than news.ycombinator.com or hackerne.ws, use this bookmarklet: collapsible threads (no domain check)

Update March 18, 2011: Paul Biggar has contributed a Greasemonkey script that also works on Firefox 4. I have adapted it so that it also works in Safari and Chrome (using NinjaKit), which basically involved copying the jQuery script above mine.

Install Greasemonkey script

Install Paul Biggar’s Greasemonkey script

Update November 22, 2011: Eemeli Aro has sent me a little CSS tweak so that the lines don’t move around when collapsing. The code downloadable from above contains his code. Thank you!

Posted in web

Even Faster Web Sites, a book by Steve Souders

Steve Souders has recently released something like a sequel to his previous book “High Performance Web Sites” (HPWS) which I have already reviewed earlier. With Even Faster Web Sites he and his co-authors (specialists in their fields, such as Doug Crockford (JavaScript: The Good Parts) on Javascript) elaborate on some of the rules Steve postulated in HPWS.

It needs to be stated first that if you haven’t read and followed Steve’s first book, you should go and do that first. It’s a must-read that makes it pretty easy to understand why your page might be slow and how to improve it.

In “Even Faster Web Sites”, Steve and his co-authors walk a fine line between fast and maintainable code. While most techniques described in his first book could be integrated with an intelligent deployment process, it is much harder with “Even Faster Web Sites”.

In the chapters that Steve wrote himself for “Even Faster Web Sites,” he is pretty much obsessed with analyzing when, in what sequence, and how parallel the parts of a web page are loaded. Being able to have resources transferred in parallel leads to the highest gains in page loading speed. The enemy of the parallel download is the script tag, so Steve spends (like in HPWS, but in greater detail in this book) quite a few pages analyzing which technique of embedding external scripts leads to which sequence of loading the resources of the page.

Steve also covers interesting techniques such as ways to split the initial payload of a web site (lazy loading), and he also takes chunked HTTP responses into consideration, which allow sending back parts of the response even before the script has finished. Downgrading to HTTP/1.0 can only be considered a hard-core technique that only huge sites such as Wikipedia are using right now, and it should be regarded as covered for educational reasons only.

There is a section focussing on Optimizing Images which thankfully takes the deployment process into consideration and shows how to automate the techniques they suggest to optimize the images.

My only real disappointment with “Even Faster Web Sites” is the section by Nicholas C. Zakas. He writes about how to Write Efficient JavaScript but fails to prove it. To be fair: in the first section of the chapter he shows benchmarks and draws conclusions that I can confirm in the real world (accessing properties of objects and their child objects can be expensive). But then he gives advice for writing code that can hardly be called maintainable (e.g. re-ordering and nesting if-statements (!), re-writing loops as repeated statements (!!!)) and then doesn’t even prove that this makes the code any faster. I suspect that the gains of these micro-optimizations are negligible, so chapters like these should be (if at all) included in an appendix.

Speaking of appendices, I love what Steve has put in here: he shows a selection of the finest performance tools that can be found in the field.

This book can help you make your site dangerously fast. You also need to be dangerously careful what tips you follow and how you try to keep your site maintainable at the same time. “Even Faster Web Sites” is great for people who can’t get enough of site optimization and therefore a worthy sequel to “High Performance Web Sites,” but just make sure that you also read and follow Steve’s first book first.

The book has been published by O’Reilly in June 2009, ISBN 9780596522308.

Posted in web

Website Optimization, a book by Andrew B. King


This time I’m reviewing a book by Andy King. Unlike High Performance Web Sites by Steve Souders, it doesn’t solely focus on the speed side of optimization; it adds the art of Search Engine Optimization to form a compelling mix in a single book.

If you have a website that underperforms your expectations, this single book can be your one-stop shop to get all the knowledge you need.

Andy uses interesting examples of how he succeeded in improving his clients’ pages, which illustrate well what he describes in theory beforehand. He not only focuses on how to make your website show up at high ranks in search engines (what he calls “natural SEO”), but also discusses in detail how to use pay-per-click (PPC) ads to drive even more people to one’s site. I especially liked how Andy describes how to find the best keywords to pick and how to monitor the success of PPC.

The part about optimization for speed feels a little too separate in the book. It is a good read and provides similar content to Steve Souders’s book, though the level of detail feels a little awkward considering how different the audience for the SEO part of the book is. Still, programmers can easily gain deep knowledge about how to make that page load fast.

Unfortunately, Andy misses out a little on bringing this all into the grand picture. Why would I want to not only follow SEO practices but also optimize the speed of the page? There is a chapter meant to “bridge” the topics, but it turns out to be about how to properly do statistics and use the correct metrics. Important, but not enough to really connect the topics (and actually I would have expected this bridging beforehand).

Altogether, I would have structured things a little differently. For example: it’s the content that makes search engines find the page and makes people return to a page, yet Andy explains how to pick the right keywords for the content first, and only afterwards tells the reader how to create it. Everything is there; I had just hoped for a different organization.

All in all, the book really deserves the broad title “Website Optimization.” Other books leave out SEO which usually is the thing that people mean when they want to optimize their websites (or have them optimized).

I really liked that the topics are combined in one book, and I highly recommend it for everyone who wants to get his or her website in shape.

The book has been published by O’Reilly in July 2008, ISBN 9780596515089. Also take a look at the Website Optimization Secrets companion site.

Thanks to Andy for providing me a review copy of this book.

Facebook discloses its users to 3rd party web sites

Q&A with Dave Morin of Facebook

Just a quick post, because what I read at Joshua Porter’s blog somewhat alarms me: Facebook’s Brilliant but Evil design.

I feel more and more reassured at why I don’t use Facebook and have a bad feeling about them.

The gist is this: when you buy something at a participating web site (Ethan Zuckerman shows how it is done at overstock.com), Facebook discloses to that 3rd party web site that you are a user of Facebook, and hands over some more details about you, even though you are only visiting that 3rd party page (and not facebook.com)!!

This goes against the idea of separate domains on the Internet. Joshua fortunately also goes into technical detail on how this could be done.

In my opinion, Facebook users should quit the service and heavily protest against these practices. But I am afraid few of them will even notice that this is happening.

Posted in web

This was FOWA Expo 2007


I have been attending this year’s Future of Web Apps Expo in London’s ExCeL centre.

There were a ton of interesting speakers and I enjoyed listening a lot. Amongst others there were Steve Souders of Yahoo (High Performance Web Sites), Paul Graham of Y Combinator (The future of web startups), Matt Mullenweg of WordPress.com (The architecture of WordPress.com, he was the only one to go into some detail) and Kevin Rose of digg (Launching Startups).

I also enjoyed Robin Christopherson’s talk very much. He is vision impaired and showed how he browses the web (it is amazing how fast he had set the speed of his screen reader; I know why, and I guess that most vision-impaired people turn up the speed, yet it still feels awkward to listen to) and which challenges arise from that. Unfortunately, Chris Shiflett only held a workshop, which I did not attend.

The conference was clearly not so much for developers (at some points I would have greatly enjoyed some delving into code), so I am trying to keep my eyes open for even nerdier conferences :) Any suggestions?

On the evening of the first day there was a “live” diggnation recorded which was pretty fun.

According to Ryan Carson, he will be publishing audio files of the talks on www.futureofwebapps.com soon. Thanks to Carsonified for putting on this great conference. I hope I will be able to return next year.

I have posted more photos to flickr.


High Performance Web Sites, a book by Steve Souders

I’d like to introduce you to this great book by Steve Souders. There have already been several reports about it on the Internet, for example on the Yahoo Developers Blog. There is also a video of Steve Souders talking about the book.

The book is structured into 14 rules, which, when applied properly, can vastly improve the speed of a web site or web application.

Alongside the book he also introduced YSlow, an extension for the Firefox extension FireBug. YSlow helps developers see how well their site complies with the rules Steve has set up.

I had the honour of doing the technical review of this book, and I love it. Apart from some standard techniques (for example employing HTTP headers like Expires or Last-Modified/ETag), Steve certainly has some tricks up his sleeve:

For instance he shows how it is possible to reduce the number of HTTP requests (by inlining the script sources) for first time visitors, while still filling up their cache for their next page load (see page 59ff).

The small downside of this book is that some rules need to be taken with care when applied to smaller environments; for example, it does not make sense (from a cost-benefit perspective) for everyone to employ a CDN. A book just can’t be perfect for all readers.

If you are interested in web site performance and have a developer background, then buy this book (or read it online). It is certainly something for you.

The book has been published by O’Reilly in September 2007, ISBN 9780596529307.

Some more links on the topic:
