...lightweight database abstraction layer suitable for high-load websites where you need the scalable advantages of connection pooling. Written in C for speed, DBSlayer talks to clients via JSON over HTTP, meaning it's simple to monitor and can swiftly interoperate with any web framework you choose.
You interact with it by encoding SQL queries into URIs, and the results come back as flatly tabular JSON.
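Here's a rough sketch of the round trip, assuming a local DBSlayer instance on its default port; the /db endpoint, the JSON-in-the-query-string convention, and the RESULT/HEADER/ROWS response shape are as I understand them from DBSlayer's documentation, so treat the details as approximate:

```python
# Minimal sketch of a DBSlayer round trip. The host, port and /db endpoint
# follow DBSlayer's documented defaults; adjust for your own setup.
import json
import urllib.parse
import urllib.request

# The query itself is a small JSON document, URL-encoded into the query string.
query = {"SQL": "SELECT id, name FROM users LIMIT 2"}
uri = "http://localhost:9090/db?" + urllib.parse.quote(json.dumps(query))

with urllib.request.urlopen(uri) as resp:
    result = json.load(resp)

# The response is flatly tabular, roughly:
# {"RESULT": {"HEADER": ["id", "name"],
#             "ROWS":   [[1, "alice"], [2, "bob"]],
#             "TYPES":  ["MYSQL_TYPE_LONG", "MYSQL_TYPE_VAR_STRING"]}}
print(result["RESULT"]["HEADER"])
for row in result["RESULT"]["ROWS"]:
    print(row)
```

This seems to fall at the far end of an HTTP-interface query continuum: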
- Linked Data - pure Web: a link is a manifestation of a relation, a GET yields a representation of the target resource (see the dereferencing sketch after this list)
- SPARQL - Web model, SQL-like implementation
- Microsoft Project Astoria - entity/relation model, SQL-like implementation
- DBSlayer - SQL model, SQL-like implementation
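To make the "pure Web" end of the continuum concrete, here's a minimal dereferencing sketch. The DBpedia URI is just a convenient example, and the content-negotiation dance (Accept header, 303 redirect to a data document) is standard Linked Data practice rather than anything specific to DBpedia:

```python
# Minimal sketch of the "pure Web" end of the continuum: dereference a URI,
# get back a representation of the resource it identifies. The DBpedia URI
# is only an example; any Linked Data URI would do.
import urllib.request

req = urllib.request.Request(
    "http://dbpedia.org/resource/Tim_Berners-Lee",
    headers={"Accept": "text/turtle"},  # ask for an RDF representation
)
with urllib.request.urlopen(req) as resp:
    # Linked Data servers typically 303-redirect from the thing itself to a
    # document about the thing; urllib follows the redirect for us.
    print(resp.url)
    print(resp.read(500).decode("utf-8", errors="replace"))
```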
I won't sully the list with SOAP/XML-RPC-style method tunnelling; that's not declarative and hence beyond the pale.
As a Web enthusiast I'd naturally favour the first two, which are distinguished by using a data model with URIs as keys. The latter two put an extra layer of indirection between the Web and the things being described: you're looking at entities in a database, not first-class identified resources. The query/result syntax is, I'd say, secondary to this distinction, because the opacity of URIs and the notions of resources and representations are what carry the semantics.
But that's not to say there isn't useful stuff around the latter two approaches - Astoria can be used in a very SemWeb-oriented fashion (and its query URI construction can be a lot prettier than SPARQL's, as the side-by-side sketch below suggests).
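For a feel of the difference, here's a hypothetical side-by-side: the same "people over 30" query as an Astoria-style URI and as a SPARQL protocol GET. The /people data service, the endpoint address, and the age predicate are invented for illustration; only the general shapes are the point:

```python
# Hypothetical side-by-side of query URI styles; the data service, the
# SPARQL endpoint and the age predicate below are made up for illustration.
import urllib.parse

# Astoria style: the query lives in the path and $-parameters.
# (Spaces would be percent-encoded on the wire.)
astoria_uri = "http://example.org/data.svc/people?$filter=age gt 30&$orderby=name"

# SPARQL protocol style: a full query-language statement, URL-encoded
# into a single 'query' parameter.
sparql = """SELECT ?name WHERE {
  ?p <http://xmlns.com/foaf/0.1/name> ?name ;
     <http://example.org/age> ?age .
  FILTER (?age > 30)
}"""
sparql_uri = "http://example.org/sparql?query=" + urllib.parse.quote(sparql)

print(astoria_uri)
print(sparql_uri)  # much harder to eyeball once encoded
```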
DBSlayer is superficially a lot like SPARQL, yet it was actually designed with scalability in mind, which remains an open question for SPARQL. That has to be interesting (hmm, perhaps I should've tried the thing rather than typing this post...).
While I'm (almost) as fascinated as the next coder by the material around parallelism (Hadoop, Erlang etc.), part of me suspects the drive in this direction is a little misguided. If the aim is to make efficient Web sites/services - which seems like most of the motivation around here - then I reckon there's a lot more to explore around HTTP and Web-oriented information modelling. No matter how scalable the back end is, if you're only exposing it to the Web through narrow channels then you've still got a bottleneck. As the bumper sticker says: "Get your data structures correct first, and the rest of the program will write itself."