For me, the heart of what we're trying to do is make the Web itself offer some of the facilities we previously got from just one particular site or another.
I rather like that line even without its context (the social platform). What danbri says next:
So copying everything into one big site, even if it is a non-profit with open APIs, feels a bit over-centralised. I do like the direction of the sixapart feed though; with some filtering mechanisms it could make a big difference.
Yup. The Six Apart Relationship Update Stream reminded me of the continuous feed update Bob Wyman set up some time ago at PubSub.com using direct TCP/IP. Googling for that, I ran across this post from around the same time on the subject, from none other than Brad Fitzpatrick. There is still a live Six Apart Update Stream.
I'm still not sure about the best protocol approach for this kind of thing: keeping an HTTP connection open seems a bit kludgy, raw TCP/IP a bit low-level; Jabber may be closer to optimal when you're talking diff streams.
While the Six Apart stream is Atom on the surface, the actual data is carried in the content element in an XML dialect that is, er, a little eccentric. It describes changes to the social graph, with edges having characteristics like:
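To give a rough idea of the shape of the thing, here's a sketch of such a payload. The element and attribute names below are made up for illustration, not the actual Six Apart/glueon markup:

```xml
<!-- hypothetical sketch only; not the real glueon vocabulary -->
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>relationship update</title>
  <content type="application/xml">
    <relationship xmlns="http://example.org/glueon-ish#">
      <from>http://profile.example.com/alice/</from>
      <to>http://profile.example.com/bob/</to>
      <edge direction="outbound" nature="follows" status="added"/>
    </relationship>
  </content>
</entry>
```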
Though there's probably value in using Atom for timestamping changes (it's not total Revenge of Babble), if anything ever begged for RDFification, it's this payload. I've already got some other XML dialects/vocabs in the queue for RDF mapping (via GRDDL), so I do hope someone else can pick this one up (there's nothing useful at the glueon namespace btw, so here's a pointer to the relevant tutorial).
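For illustration only, a GRDDL-style mapping of an update might yield Turtle along these lines, leaning on FOAF plus a made-up term or two (the real vocabulary would presumably live at the glueon namespace, which is currently empty):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/relationship#> .  # hypothetical terms

<http://profile.example.com/alice/#me>
    a foaf:Person ;
    foaf:knows <http://profile.example.com/bob/#me> ;
    ex:edgeStatus "added" .
```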
On a meta level, I reckon this could probably use a bit more distribution (using the Web more, as danbri suggests), very much the kind of thing I've had in mind for HTTP/RDF-based lightweight agents, with the messaging as small but complete request/response chunks (with moderately smart filtering and routing), providing for data pull & push.
In terms of the Relationship Update Stream, this would mean exploiting the notion that triplestores are just little caches of chunks of the Semantic Web. You could have short-lived caches of all the data in the stream, or (for moderately smart filtering and routing) only save data corresponding to resources of interest to the local system, and ideally offer SPARQL endpoints so downstream systems could get at the info. A more active agent could maybe interact with pingthesemanticweb.
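The filtering step above can be sketched in a few lines of Python. This is a toy, with triples as plain tuples rather than a real triplestore, and `filter_updates` is a name I've invented for the purpose:

```python
# Toy sketch of "moderately smart filtering": a short-lived cache that
# keeps only statements about resources of local interest.
# Triples are modelled as (subject, predicate, object) string tuples.

def filter_updates(stream, interesting):
    """Keep triples whose subject or object is a resource we care about."""
    return [(s, p, o) for (s, p, o) in stream
            if s in interesting or o in interesting]

# A toy relationship-update stream.
stream = [
    ("http://example.org/alice", "foaf:knows", "http://example.org/bob"),
    ("http://example.org/carol", "foaf:knows", "http://example.org/dave"),
]

# The local system only cares about alice; everything else is dropped.
cache = filter_updates(stream, {"http://example.org/alice"})
```

A real version would load each chunk into a triplestore and expose it via SPARQL, but the shape of the decision is the same: is this resource one we're watching?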
There was just a flurry of posts about the web as platform, which I'll likely comment on when I get a bit more time...