Better code downloading with AJAX

I've been playing with code downloading (or JavaScript on Demand) a little more.

Michael Mahemoff pointed me at his great Ajaxpatterns site, in which he suggests a different solution:

if (self.uploadMessages) { // Already exists
  return;
}
var head = document.getElementsByTagName("head")[0];
var script = document.createElement("script");
script.type = "text/javascript";
script.src = "upload.js";
head.appendChild(script);

Via DOM manipulation a new script tag is added to our document, loading the new script via its 'src' attribute. I have put a working example here. As you can see, this does not even need an XMLHttpRequest (XHR from here on), so it will also work in browsers that do not support it.

So why use this approach and not mine? Initially I thought it was not as good as the XHR variant, because with XHR you get direct feedback (i.e. a function call) when the script has been loaded. That is not possible per se with this technique. But, as in good ol' times, a simple function call at the end of the script file does the same job (compare the source code of the last example with this one (plus load.js)).
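That callback pattern can be sketched like this (the names `loadScript` and `window.scriptLoaded` are my own illustration, not taken from the example files):

```javascript
// Inject a script tag and arrange for the loaded file to call us back.
function loadScript(src, callback) {
  // Expose the callback globally so the loaded file can find it.
  window.scriptLoaded = callback;
  var head = document.getElementsByTagName("head")[0];
  var script = document.createElement("script");
  script.type = "text/javascript";
  script.src = src;
  head.appendChild(script);
}

// The loaded file (e.g. upload.js) then ends with one extra line:
//   window.scriptLoaded();
```

The browser executes the file as soon as it arrives, so that final call fires exactly when the code is ready, which is the same signal an XHR callback would have given you.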

Loading code this way also provides another "feature" (thanks to Erik Arvidsson for the hint): unlike with XHR, Firefox also caches scripts loaded this way. There seems to be some disagreement about whether this is a bug or a feature (people complain that IE caches such requests, while it can be quite useful in this scenario).

When serving dynamically generated JavaScript code you will also have to keep your HTTP headers in mind (generated scripts don't send caching headers by default). The headers Cache-Control and Last-Modified will usually do (see section 6.1.2 of my thesis).
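A minimal sketch of that header logic, kept as a pure function so any server can wrap it per request (the function name, the max-age value, and the JavaScript server environment are my own assumptions; the post does not prescribe a server-side technology):

```javascript
// Decide status and headers for a request for generated JavaScript.
function cacheResponse(ifModifiedSince, lastModified) {
  if (ifModifiedSince === lastModified) {
    // The browser's cached copy is still fresh: answer 304, no body.
    return { status: 304, headers: {} };
  }
  return {
    status: 200,
    headers: {
      "Content-Type": "text/javascript",
      "Cache-Control": "max-age=3600",   // allow caching for an hour
      "Last-Modified": lastModified       // enables If-Modified-Since revalidation
    }
  };
}
```

With these headers set, a revisiting browser can send If-Modified-Since and receive a cheap 304 instead of re-downloading the whole script.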

As a developer (David Schontzler) commented, the method above is also the one used by Dojo. He says that Dojo also loads only the stuff the programmer needs, so little overhead can be expected from this project.

Alex Russell from Dojo also left a comment, about bloated JavaScript libraries. He has some good points to make about script size (read them for yourself); I just want to quote the best point of his posting:

So yes, large libraries are a problem, but developers need some of the capabilities they provide. The best libraries, though, should make you only pay for what you use. Hopefully Dojo and JSAN will make this the defacto way of doing things.

So keep an eye on Dojo, they seem to be on the right track (coverage of Dojo to follow).

Finally I want to thank you all for your great and insightful comments!


3 thoughts on “Better code downloading with AJAX”

  1. Great stuff! I need to add some discussion of caching in On-Demand Javascript.

I was interested in Erik's comment wrt XHR caching and Firefox and found this link - https://bugzilla.mozilla.org/show_bug.cgi?id=268844 - which confirms there was a bug.

    And the commentary there links to an interesting comment in the WHATWG webapp spec:
    "In particular, UAs must not automatically set the Cache-Control or Pragma headers to defeat caching." http://whatwg.org/specs/web-apps/current-work/#setrequestheader

  2. I haven't yet had time to verify Erik's comment myself (by trying it out), but I did some research on the web which seemed to confirm it.

    Furthermore I had a look at the part of the Mozilla source code where a comment says:

    1581 // Bypass the network cache in cases where it makes no sense:
    1582 // 1) Multipart responses are very large and would likely be doomed by the
    1583 // cache once they grow too large, so they are not worth caching.
    1584 // 2) POST responses are always unique, and we provide no API that would
    1585 // allow our consumers to specify a "cache key" to access old POST
    1586 // responses, so they are not worth caching.

    So, when loading data via XHR with a GET request, it should indeed be cached. I will verify this.
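    In other words, the request method decides cacheability. A hypothetical sketch (the wrapper name is mine, not from the comment above): fetching code with GET keeps the response eligible for the network cache, while POST would bypass it.

    ```javascript
    // Fetch code with a cacheable GET request; per the Mozilla comment
    // quoted above, POST responses are never cached.
    function fetchCacheable(url, onDone) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", url, true); // GET keeps the response cacheable
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
          onDone(xhr.responseText);
        }
      };
      xhr.send(null);
    }
    ```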

  3. I have looked at the Mozilla code but, like you, I haven't really found where the bug is. I assume the bug is that they do not add the If-Modified-Since header, and therefore the files will always be re-downloaded from the server.

    It is pretty easy to see what goes on in Mozilla using Live HTTP Headers. You'll see that it completely ignores Last-Modified.

Comments are closed.