A wee rant.
Ok, I'm totally with the consensus that the future is Cloud-based, and, to be a little more specific, Platform-based, and, to be even more specific, primarily HTTP-based. To back that up, cf.
But to expand on something I mentioned in passing here recently:
Here I could point to the traditional DOM API, blame the W3C for all the world's ills, and an awful lot of people would nod and smile knowingly. But although that's arguably valid (heh), I reckon the problem is more systemic and can mostly be blamed on browser developers.
Ok, blame is too strong. The decisions made over the years and the directions taken have generally been perfectly rational in the context of the prevailing conditions. But there have been feedback loops at work. The flashy [sic] chrome [sic] surrounding HTML dev, from the img tag onwards, has pulled Web developers in like moths around a flame. So the browser developers act to improve that experience. Meanwhile server-side tech has developed out of the corporate legacy of silo-based systems. Let me quote Steve Yegge here: "It's a big stretch even to get most teams to offer a stubby service to get programmatic access to their data and computations." The way services are offered over the Web, even Web 2.0 services, still carries a big hangover from this mentality. I'd argue that most Web APIs are only marginally better than SOAPy stubs, largely because XML and JSON aren't particularly Web-friendly. Ok, don't bite my head off, let me qualify that.
First, XML. There have been plenty of arguments over the years around XHTML, and back in the day (I wonder how old that phrase is) there were arguments about the XML nature of RSS. Postel's Law, the "Robustness Principle", got cited a lot. Let me give you some deja vu:
Be liberal in what you accept, and conservative in what you send.
What a lot of people misinterpreted was the keyword robust. A robust system is one designed to fail gracefully or continue working acceptably with noisy data. That's exactly what we want for the Web, right? Well, not necessarily: if I were ordering a book from Amazon and there was a partial failure, I'd rather they didn't make a best guess when it came to taking money off my credit card (I think I'm paraphrasing Tim Bray there). Anyhow, XML is not robust, by design. XML is designed to bail out completely at the first sniff of anything dodgy. As it happens, XML is often served on the Web without proper regard for the media type, i.e. dodgy and hence broken.
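To make that draconian behaviour concrete, here's a minimal Python sketch (the element names are just placeholders): a well-formed document parses fine, while a single mismatched tag aborts the whole parse rather than triggering any best guess.

```python
import xml.etree.ElementTree as ET

# Well-formed XML parses without complaint...
ET.fromstring("<book><title>Some Title</title></book>")

# ...but one mismatched tag is fatal: the parser bails out completely
# instead of guessing, exactly as the XML spec requires.
try:
    ET.fromstring("<book><title>Some Title</book>")
except ET.ParseError as err:
    print("parse failed:", err)
```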
Sorry, that was a gratuitous deviation. The real reason I'd say XML, like JSON, isn't Web-friendly is the way people use it. Whether data is conveyed as name-value pairs or through more complex structures, the key parts are generally just simple strings. But by itself, a string on the Web is next to useless. You or I can (maybe) read it, or even paste it into Google and get a definition. But what is a poor machine client to do? Links are what make the Web. It's 101 but somehow still manages to be overlooked: a link has two facets: a universally unambiguous name (URI/IRI) and a protocol for following it (HTTP). If a client on the Web encounters a link, it can follow its nose to find out more information. That's what we as humans do in browsers all the time, yet when it comes to Web services, for some reason a simple string is seen as adequate to identify something.
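A sketch of the difference, in Python. The URI here is made up, but the point stands: a machine client can dereference a URI, where a bare string is a dead end.

```python
import json
import urllib.request

def follow_your_nose(uri: str) -> bytes:
    """Dereference a URI-valued field to find out more about it."""
    req = urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# A bare string: a human might Google it, a machine client is stuck.
opaque = json.loads('{"author": "Tim Bray"}')

# A URI: the client can simply follow it for more information.
linked = json.loads('{"author": "https://example.org/people/timbray"}')
# description = follow_your_nose(linked["author"])  # (address is hypothetical)
```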
Ok, with XML, the HTML DOM and to some extent JSON, there's been some justifiable resistance to the use of URIs for names, because namespaces have traditionally been unintuitive at best and agony at worst. Using URIs instead of simple strings certainly adds a burden (it doesn't have to be that great, check Turtle syntax), but the benefits far outweigh the costs.
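For instance, here's a rough sketch of what Turtle's @prefix mechanism buys you, using the third-party rdflib library (the vocabulary and addresses are illustrative): the names stay nearly as terse as bare strings, yet each one expands to a full, followable URI.

```python
from rdflib import Graph  # third-party: pip install rdflib

# @prefix keeps URI-based names almost as lightweight as plain strings.
turtle = """
@prefix dc: <http://purl.org/dc/terms/> .
@prefix ex: <https://example.org/books/> .

ex:somebook dc:title   "Some Title" ;
            dc:creator <https://example.org/people/someauthor> .
"""

g = Graph()
g.parse(data=turtle, format="turtle")
for subj, pred, obj in g:
    print(subj, pred, obj)  # every name is a full URI a client can follow
```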
The thing is, you'll hear talk of snowflake APIs - only one implementation of each exists - but what gets overlooked is that by their very nature, most APIs just aren't Webby. The client must have prior knowledge that the service at endpoint X uses API Y. What you end up with is effectively a series of 1:1 client-server connections. That, while REST's uniform interface may mean it's less brittle than an RPC connection, still means tight coupling.
Ok, you might argue that for any communication to take place, some prior knowledge is required. Sure, but that can be minimised: just like the way we follow links for more information in a browser, a service client can follow links to get more information. This is only a small conceptual step, but what it enables is hugely powerful. Above everything else, it's what Linked Data and the Semantic Web get right.
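As a sketch of how little prior knowledge that could be (the entry point URI and link names are hypothetical): the client hard-codes one entry point and one link relation, and discovers everything else from the responses themselves.

```python
import json
import urllib.request

def fetch(uri: str) -> dict:
    """GET a resource, assuming a JSON representation for simplicity."""
    with urllib.request.urlopen(uri) as resp:
        return json.load(resp)

def find_orders(entry_point: str) -> dict:
    # The only baked-in knowledge is the entry point and the name of one
    # link relation. The orders URI itself is discovered, not hard-coded,
    # so the server is free to move or restructure it.
    home = fetch(entry_point)
    return fetch(home["links"]["orders"])
```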
I reckon that browser developers, with their emphasis on doc-oriented HTML, have a natural tendency to carry their experience in that domain across and apply it to data. Naturally, namespace-less XML and JSON will seem preferable through that lens. But in practice, documents and data are apples and oranges. Browsers have been optimized over the years for the former, incidentally making the latter harder than necessary.
It's funny how you don't hear so much about service mashups these days, despite their undeniable coolness. I'll assert that it's because developing for Web data in the browser is bloody hard work, especially when there are NxN arbitrary API mappings to know.
Overall it's actually something of a miracle that the notion of cloud-based platforms has emerged.
I had planned to say more about Cloud Computing Outside of the Browser - or to put it another way, evolving old-fashioned non-browser Rich Internet Clients (as well as server-server and every other non-browser configuration). But ranting's worn me out. Anyhow, in short, I reckon that for the foreseeable future, non-browser clients in many circumstances are probably preferable to browser-based equivalents, primarily because they're easier to develop (as I keep saying, I reckon the agent model of combined client/server units is a good way to go). While I personally welcome HTML5 and the APIs as a clean-up of document markup and processing, when it comes to data it isn't even a Band-Aid.