About a week ago Simon St.Laurent posted a comment on Facebook in a thread about httpRange-14. Alas I can't find Simon's original comment, but the gist was more or less: it's a bad idea to use HTTP URIs for anything that isn't directly associated with the HTTP protocol, e.g. as names for real-world things, concepts, "non-Information Resources" etc. I've been mulling this over a bit.
I don't disagree with him that it might be conceptually more elegant to have a different namespace for identifiers of things rather than identifiers of Web documents (namespaces in the broad sense; in practice we're probably talking about URI schemes). You can still describe and reason about things in a model like RDF using non-HTTP URNs. But for the identifiers to be useful on a global scale, you need a discovery mechanism, and HTTP with hypermedia offers one. But how would you mesh the 'thing' namespace with the 'HTTP' namespace? Ok, you could use HTTP to access RDF documents that link to other documents. Mike Amundsen might disagree a little about the extent of this, but I'd suggest any RDF is inherently a kind of hypermedia simply by supporting HTTP URIs. As soon as you see the http:// prefix, the methods are implicitly available on top (they are made explicit to some extent with RDFa etc.).
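To make that point concrete, here's a minimal sketch (the URI is hypothetical, not from the post): given any http:// URI appearing in RDF, you can always construct the GET request it implicitly licenses, asking for an RDF representation. Nothing is sent over the wire here; the point is just that the method and the conneg hook come for free with the scheme.

```python
import urllib.request

# Hypothetical HTTP URI used as a name for a thing in some RDF document.
thing_uri = "http://example.org/id/basil"

# Build (but don't send) the dereference request the http:// scheme implies,
# asking for a Turtle representation via content negotiation.
req = urllib.request.Request(thing_uri, headers={"Accept": "text/turtle"})

print(req.get_method())          # the default method for a Request is GET
print(req.get_header("Accept"))  # the RDF media type we asked for
```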
Ok, so to find out about my dog Basil you'd want to try and find statements that match a template like
?doc a foaf:Document ; foaf:primaryTopic <urn:basil> . You're working with an indirection between real-world things and document space. A map of identifiers
ThingSpace <-> DocSpace. But this is in effect what we've got when we use HTTP URIs for things: the protocol doesn't support resolving them to things any more than a URN-scheme URI would resolve to anything over HTTP. The net result is the same as in what Jeni Tennison recently called the web of data view. But by allowing HTTP URIs to identify things, we can kludge past needing a separate space for thing identifiers and an explicit namespace map. Yes, there is a downside - the httpRange-14 permathread - but leaving aside the philosophical niceties, the concrete problem is just a matter of choosing an appropriate mechanical convention. That seems a small price to pay given the way it simplifies the publication of linked data. The linked data cloud is progress! Once again I refer you to Dan Connolly's question: are there parts of traditional logic and databases that, if we set them aside, will result in viral growth of the Semantic Web? By the same token we might ask: are there parts of Web best practices that it might be worth setting aside, you know, as a bootstrap, like just for a little while... (cf. schema.org vs. distributed vocabularies).
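A toy sketch of the two arrangements above (all URIs hypothetical, not from the post). With a separate 'thing' namespace you need an explicit ThingSpace <-> DocSpace map, standing in for matches on the template ?doc a foaf:Document ; foaf:primaryTopic <thing> . With HTTP URIs for things, a mechanical convention such as a 303 See Other redirect folds the same indirection into the protocol itself, so no out-of-band map is needed.

```python
# Arrangement 1: URNs name things; an explicit map plays the role of
# querying for  ?doc a foaf:Document ; foaf:primaryTopic <thing> .
thing_to_doc = {
    "urn:basil": "http://example.org/doc/basil",  # hypothetical URIs
}

def doc_for(thing_uri):
    """Look up a document whose foaf:primaryTopic is thing_uri, or None."""
    return thing_to_doc.get(thing_uri)

# Arrangement 2: the thing's name is itself an HTTP URI; dereferencing it
# yields a 303 redirect to a document about the thing, so the map lives
# in the server's responses rather than in a separate namespace map.
see_other = {
    "http://example.org/id/basil":
        ("303 See Other", "http://example.org/doc/basil"),
}

print(doc_for("urn:basil"))
print(see_other["http://example.org/id/basil"][1])
```

Either way you land on the same document; the difference is whether the indirection is an extra artefact you have to publish and discover, or a convention riding on plain HTTP.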
PS. Simon has blogged his thoughts on this: Original sin and the ruin of HTTP URIs. Although much of that sounds critical of semweb efforts (we have been talking at cross-purposes a little), there is significant common ground on the G+ thread, around the notion that Linked Data HTTP URIs should always resolve to something useful over HTTP - i.e. that HTTP URIs for things should make sense on the Web as well as in the triplestore, to the extent that you can put one in a browser's address bar and not expect an error.