Rambling chain of thought I had earlier, jotted down to think on some more another time. The main idea's no doubt been in the literature since early AI; I'm just wondering if it might usefully be tweaked & recycled.
HTTP PATCH, a method that appeared in a draft of the 1.1 spec, has a pause step in a two-phase process: "the duration of the pause is five (5) seconds or until a response is received from the server". Why 5 seconds and not (say) 5 milliseconds? I can't be bothered hunting through the archives for the definitive answer; it may just be what seemed reasonable given state-of-the-art performance and bandwidth in 1996. Whatever, it now reads more like a figure on a human timescale than a machine one.
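As a rough illustration of what that pause might look like in code (this is my own sketch, not anything from the draft itself; the function name, payloads, and buffer size are all invented), the client sends the first phase, then waits up to five seconds for the server to object before committing to the second phase:

```python
import socket

PAUSE_SECONDS = 5.0  # the draft's figure; shorter values work the same way


def two_phase_send(sock, headers, body, pause=PAUSE_SECONDS):
    """Send headers, pause, then send the body unless the server objects."""
    sock.sendall(headers)            # phase one: headers only
    sock.settimeout(pause)           # "five (5) seconds or until a response"
    try:
        return sock.recv(4096)       # an early response (e.g. an error) ends it
    except socket.timeout:
        pass                         # silence from the server; carry on
    sock.settimeout(None)
    sock.sendall(body)               # phase two: the body
    return sock.recv(4096)           # the final response
```

The interesting design question is exactly the one above: the timeout is the only place in the exchange where a wall-clock number appears, and the number chosen says something about who (or what) is assumed to be waiting.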
This leads down the path of asking why an HTTP server should do anything on human timescales when HTTP is a machine-to-machine protocol. But most web systems assume a human in front of a browser. While this may be wrong-headed in principle (it's the protocol that matters), in practice it might make a lot of sense, and not only for the obvious reason that right now, in most cases, there is a GUI HTML viewer at one end.
In the traditional Turing Test, communications take place between a person at a terminal/teletype and something that may be a person or may be a machine. But what connects these two systems is a wire. There may be a person tapping away at a keyboard at the other end of the wire, responding to messages on a monitor. But the stuff at this end of the wire is not human, no matter what's at the other end. To put it another way, from the connected machine's point of view, humans are machines too. (Hence the title of this post - nothing to do with my being a rather Newtonian old fart...)
Now flipping back again, to the assumption that at one end of the web service there's a person in front of a browser. Where this may be useful in other respects is in exploiting human-style timing and expectations. Within the scope of an individual interaction cycle with an application (e.g. doing a search on Google), the response is expected to appear in reasonable time and to be complete within the known limitations of the application (you get a bunch of hits for that search). But in practice a single interaction cycle often won't fulfil the requirements (refine the query, try again). With a complex application it may be necessary to navigate multiple menus and join different subsystems together (or even write some code) before getting the desired results.
Thing is, in this kind of interaction it doesn't matter how fast the software is: there'll still be quite a bit of latency between the human seeing the results of the last phase of interaction and responding to trigger the next. In these periods there are plenty of cycles available on the machine for further processing and/or information retrieval.
Finally, flipping back to a machine in front of the machine: if that machine behaved like a human, taking its time between interactions, it would itself have free cycles for other work. OK, this is no doubt getting into the territory of bog-standard concurrent processing, with an agent-style view of the connected subsystems. But given that most web services assume a human user (and hence will tend to be optimised to work that way), perhaps this provides room for net system optimisation. Or something.
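The "machine pacing itself like a human" idea can be sketched with cooperative scheduling - this is just the bog-standard concurrency I mention above, with names (`fetch`, `paced_client`, the think-time range) all made up for the purpose:

```python
import asyncio
import random


async def fetch(url: str) -> str:
    await asyncio.sleep(0.01)                    # stand-in for real network I/O
    return f"response from {url}"


async def paced_client(urls, results, think_time=(0.05, 0.1)):
    for url in urls:
        results.append(await fetch(url))
        # Human-style pause between interactions; because it's an async
        # sleep rather than a busy wait, other tasks get the cycles.
        await asyncio.sleep(random.uniform(*think_time))


async def background_work(counter):
    while True:                                  # soaks up the free cycles
        counter[0] += 1
        await asyncio.sleep(0.01)


async def main():
    results, counter = [], [0]
    bg = asyncio.create_task(background_work(counter))
    await paced_client(["/a", "/b", "/c"], results)
    bg.cancel()
    try:
        await bg
    except asyncio.CancelledError:
        pass
    return results, counter[0]
```

The point is just that the deliberate pauses aren't wasted: the background task ticks over steadily during every "think time" gap.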
may wish to give me a kick for mentioning AI in a post I've categorised under 'Semantic Web' (although I didn't actually mention the Semantic Web per se, so maybe that's ok).