Friday 21 November 2008

Mark Nottingham on "HTTP Status"

From the author of the XML and AtomPub specs to the author of the other half of the picture, the Atom Syndication Format spec: Mark Nottingham. Seems like a nice guy, and apparently he lives in Melbourne now! Smart guy.

Quick notes...

HTTP/1.1 was basically only written to "contain the damage" of 0.9 and 1.0 (virtual hosting, persistent connections, caching).
Mark was involved with the WS-* stack, but he graciously apologised to the room for his sins ;-) An interesting comment regarding SOAP et al: "having that much extension available in a protocol is socially irresponsible - protocols are all about agreement", and you need to draw lines somewhere to make something useful. He was basically saying that WS-* lets you do too much, handing you enough rope to hang yourself and making the normal case hard just to make an extreme case possible. (Or something like that; if there's a blog post where he explains himself, I'll gladly link to it instead of badly paraphrasing him.)

Mark had a neat way of saying that RESTful APIs "use HTTP as a protocol construction toolkit". They're not built on top of HTTP; they're built as part of HTTP (in a way).

HTTP 1.1 bis: With Roy Fielding and others, Mark is working on "HTTP 1.1bis", a rewrite of the HTTP spec to make it much easier to read, to resolve ambiguities, and to define edge cases that were missed in the first version (eg what happens when you put an ETag on the response to a PUT?!). All this sounds very esoteric, but people are really pushing the boundaries of HTTP these days with streaming services, Comet, Ajax, etc, so it's best to resolve the ambiguities now rather than wait for implementations to define the behaviour (and possibly end up with two competing versions of what happens in these scenarios).
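
To make the ETag-on-PUT case concrete, here's a rough Python sketch of my own (not from the talk; the host and resource are made up):

    import http.client

    # PUT a new representation of a (hypothetical) resource and look at the
    # ETag the server hands back.
    conn = http.client.HTTPConnection("example.org")
    body = b'{"title": "Hello"}'
    conn.request("PUT", "/posts/1", body,
                 headers={"Content-Type": "application/json"})
    resp = conn.getresponse()
    print(resp.status, resp.getheader("ETag"))
    # The ambiguity: does that ETag describe the bytes we sent, or the entity
    # as the server actually stored it (it may have rewritten or normalised
    # it)? Whether a later If-Match with this value succeeds depends on the
    # answer, which is exactly the kind of thing HTTPbis wants to pin down.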

One question I was wondering about is how they will market the new spec: if it's sold as "just a rewrite to make it easier to understand", then people won't pay much attention, but if people start creating new web servers that are "HTTP/1.1bis compliant" then it's effectively a new standard, not a rewrite, and might as well include some new functionality too! It's not obvious how this will work.

Compatibility: Mark mentioned an interesting point in passing: that "an HTTP/1.0 server can still possibly take 1.1 directives", with Squid as the canonical example. Squid officially doesn't support HTTP/1.1 yet, but it actually supports most 1.1 directives and commands.

HTTP methods: conventional wisdom says that intermediaries might reject PUT and DELETE verbs due to security concerns, old gateways, etc, but Mark asserted that it doesn't really happen in practice. Google created a workaround whereby they send everything as a POST and use an extra HTTP header, "X-HTTP-Method-Override", to "pretend" to do a PUT or DELETE. A bit silly really - a sign that things need to change!
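
Roughly, the workaround looks like this (a sketch in Python, with a made-up endpoint): you send a POST and ask the server to treat it as something else.

    import http.client

    conn = http.client.HTTPConnection("example.org")
    # Tunnel a DELETE through POST for the benefit of any intermediary that
    # would reject the real verb; a server that honours the override header
    # acts as if it received DELETE /entries/42.
    conn.request("POST", "/entries/42",
                 headers={"X-HTTP-Method-Override": "DELETE"})
    print(conn.getresponse().status)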

URI length: IE still limits URIs to 2k in length. Squid limits headers to 20k. HTTPbis is going to recommend at least 8k.

Cache testing: Co-Advisor is a compliance test suite for intermediaries (proxies and caches).

Headers/trailers: Most web programmers know how annoying it is to have to set all HTTP headers before you output any text. So they're thinking about "trailers" as well as headers: metadata that comes after the payload instead of before it. This could be really useful.
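
For the curious, here's roughly what a trailer looks like on the wire - a hand-rolled chunked HTTP/1.1 response (as Python bytes, purely for illustration) where a digest header is only computed after the body has been streamed:

    response = (
        b"HTTP/1.1 200 OK\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"Trailer: Content-MD5\r\n"     # announce which header arrives at the end
        b"\r\n"
        b"5\r\nhello\r\n"               # first chunk
        b"6\r\n world\r\n"              # second chunk
        b"0\r\n"                        # last chunk
        b"Content-MD5: XrY7u+Ae7tCTyyK7j1rNww==\r\n"  # the trailer itself
        b"\r\n"
    )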

Something about a 307 redirect in response to a POST - not handled by Safari... I kinda missed that bit.
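
For background (this part is me, not Mark): a 307, unlike a 302 or 303, tells the client to repeat the same method, body and all, at the new location. A sketch, with a made-up endpoint:

    import http.client

    conn = http.client.HTTPConnection("example.org")
    body = b"title=hello"
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    conn.request("POST", "/posts", body, headers)
    resp = conn.getresponse()
    if resp.status == 307:
        location = resp.getheader("Location")  # assuming a path-only Location for brevity
        resp.read()
        # A compliant client re-sends the POST, not a GET, to the new URI.
        conn.request("POST", location, body, headers)
        resp = conn.getresponse()
    print(resp.status)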

Request-side cache control isn't well supported - eg the act of posting to a blog should be able to invalidate the cached copies of the pages it changes. HTTP does have request-side cache control today, but only in a limited form: eg "I'm okay with this being up to X seconds old".
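
What exists today looks something like this (my sketch, made-up host): the client tells caches along the way how stale a stored response it will tolerate, but there's no equivalent way for that blog POST to tell them to invalidate the pages it just changed.

    import http.client

    conn = http.client.HTTPConnection("example.org")
    conn.request("GET", "/feed.atom",
                 headers={"Cache-Control": "max-age=60"})  # "up to 60 seconds old is fine"
    print(conn.getresponse().status)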

Request pipelining - not supported except in Safari, but it would be very useful if it worked.
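
Pipelining just means firing off several requests on one connection before reading any of the responses. A rough socket-level sketch (against a made-up host; plenty of servers and intermediaries will choke on this, which is the point):

    import socket

    s = socket.create_connection(("example.org", 80))
    # Two GETs back to back, no waiting in between.
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\n\r\n"
              b"GET /about HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
    data = b""
    while chunk := s.recv(4096):
        data += chunk
    s.close()
    # If the server plays along, both responses come back in order on the one connection.
    print(data[:200])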

Data ranges: need to be better supported. We should be able to jump to a section of a video etc without putting query params in the URI (although one thing you do get from a URI is addressability, which shouldn't be overlooked in the quest to make things neat from an architecture perspective...).
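
A range request keeps the one URI and asks for a slice of it, something like this sketch (made-up host and file):

    import http.client

    conn = http.client.HTTPConnection("example.org")
    conn.request("GET", "/talk.ogv",
                 headers={"Range": "bytes=1000000-1999999"})
    resp = conn.getresponse()
    print(resp.status)                       # 206 Partial Content if ranges are supported
    print(resp.getheader("Content-Range"))   # e.g. "bytes 1000000-1999999/52428800"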

OAuth: the IETF OAuth BoF the other day went well. It could have been a culture clash, but the "grey hairs" were visibly excited by the enthusiasm and drive of the OAuth guys (I guess this was Chris Messina, Eran Hammer-Lahav, etc) and it ended up being "a bit of a love fest". So OAuth looks like becoming an IETF standard. Let's hope that means HTTP authentication improves a lot as a result.

New transport protocols: they're looking at HTTP over SCTP, a multi-streaming transport protocol I'm not really familiar with. Mark is thinking of proxy-to-proxy overlays: one point-to-point, many-streamed SCTP connection being muxed/demuxed to TCP at the edges.

Prefer header: is in an internet draft now. Going further than content negotiation, which lets you choose languages, encodings, etc, Prefer lets you ask for semantically different content, eg only summaries or only pictures.
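
In spirit it would look something like this (my sketch - the draft is still settling, so treat the preference token as hypothetical):

    import http.client

    conn = http.client.HTTPConnection("example.org")
    conn.request("GET", "/photos/album/7",
                 headers={"Prefer": "return=minimal"})  # hypothetical: "just summaries, please"
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))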

Typed links: making a comeback: "this invalidates X, the previous one is Y, edit this at Z" - similar to what Atom does with prev/next/edit etc. There will be a controlled list of types based on URIs (very semweb, which is nice), using the registry which already exists for Atom. (Mark didn't mention that he's the author of the internet draft!)
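
On the wire that means link relations in a response header rather than in an Atom document, roughly like this sketch (the URIs and relations are made up):

    import http.client

    conn = http.client.HTTPConnection("example.org")
    conn.request("GET", "/posts?page=2")
    resp = conn.getresponse()
    print(resp.getheader("Link"))
    # e.g.  </posts?page=1>; rel="prev", </posts?page=3>; rel="next",
    #       </posts/2/edit>; rel="edit"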

What does the future hold...?
  • libraries of "higher-level but still RESTful abstractions" - Webmachine is an example
  • Rack::Cache gives Ruby web apps a better HTTP cache implementation built in; he hopes to see the same in other languages and frameworks soon
  • Building blocks for intermediaries, so people don't have to extend Squid every time they want to build some kind of intermediary system – eg xLightweb (Java)
  • The “O2.0” stack -- openid, oauth, comet, html5, gears etc -- “fail to consider overall architecture” - “cowboy development on the web” - “new pseudo-standards” - basically he wasn't very friendly to them! But I think he is much happier now that Messina etc are working with the IETF.

Link: http://tools.ietf.org/wg/httpbis/
