The past few weeks I've been on pause, my head not working properly. Finally got around to seeing a doctor yesterday; now waiting for the antidepressants to take effect. I haven't totally wasted my disconnected time, though: watched a lot of stuff, including a Midsomer, a couple of Bargain Hunts and a geeky-great vid on poker bots (have I said I really like Berlin? This is a Chaos Communication Camp production, wonderful material). Simulating an actual poker player is really hard, but it got me thinking about the similarly hard problem of what consciousness is, which felt appropriately mental for my state of mind.
Caveat: I'm not up to date on theories in psychology or even AI. The last big thing I read anywhere near this was a lay-reader book, I think with "Intelligence" in the title, about how what humans are really good at is predicting the future - a pretty good hypothesis IMHO. Maybe someone can enlighten me about current thought (I'll cc Planet RDF). But the thing that has been on my mind is more old-school: the internal-model bit that I think was popular around the 17th century and has gone downhill since. Although it may well be rubbish as an account of human stuff, something makes me imagine it might be worth thinking about for machine stuff. I really like the agent metaphor.
Ok, generation 0, we have an agent (A) in a universe (U), and it just sits there. It's a rock. It's surrounded by other agents (which might also be rocks).
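To make the metaphor a bit more concrete, here's a minimal Python sketch of generation 0. The Agent and Universe classes are mine, just toy scaffolding for the later generations to hang off, not a claim about anything.

```python
class Agent:
    """Generation 0: it just sits there. It's a rock."""
    def step(self, universe):
        pass  # no interaction with anything

class Universe:
    """U: a bag of agents, ticked along together."""
    def __init__(self, agents):
        self.agents = agents
    def tick(self):
        for a in self.agents:
            a.step(self)

u = Universe([Agent(), Agent()])  # an agent surrounded by other agents (also rocks)
u.tick()  # nothing happens, as advertised
```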
Generation 1, we have an agent capable of interacting with its environment, but the interactions are pretty minimal, ranging from a pebble on a beach that has a wander with each tide up to a living creature with built-in stimulus-response maps along with learnt ones. Kinda Behaviourist. I'm starting with the pebble because interaction with the environment can take a lot of forms, and there's quite a history, from at least the Neolithic, of broadly anthropomorphic views of agency in facets of the environment (weather etc.), through the Bronze Age deities, up to the modern-day religious mythologies.
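Building on the rock above, generation 1 might look something like this. The specific stimuli and responses here ("tide", "wander", the Pavlovian bell) are invented placeholders standing in for the built-in and learnt maps.

```python
class ReactiveAgent(Agent):
    """Generation 1: built-in stimulus-response maps along with learnt ones."""
    def __init__(self):
        self.responses = {"tide": "wander"}  # built-in, pebble-grade behaviour
    def learn(self, stimulus, response):
        self.responses[stimulus] = response  # a learnt map laid over the built-ins
    def react(self, stimulus):
        return self.responses.get(stimulus)  # pure lookup, no model: kinda Behaviourist

p = ReactiveAgent()
p.learn("bell", "salivate")  # Pavlov-grade learning
p.react("tide")              # -> "wander"
```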
With generation 2 we approach the Enlightenment and/or Smalltalk: the agent in question has an internal model of the universe, containing the agents outside it.
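A toy version of generation 2, extending the sketch above. The internal model is nothing fancier than a table of observed stimulus/response pairs per external agent, which is my own simplification, not a claim about how real internal models work.

```python
class ModellingAgent(ReactiveAgent):
    """Generation 2: carries an internal model of the universe's other agents."""
    def __init__(self):
        super().__init__()
        self.model = {}  # agent name -> its observed stimulus/response pairs
    def observe(self, name, stimulus, response):
        # update the internal model from watching an external agent
        self.model.setdefault(name, {})[stimulus] = response
    def predict(self, name, stimulus):
        # consult the model rather than the universe itself
        return self.model.get(name, {}).get(stimulus)

m = ModellingAgent()
m.observe("pebble", "tide", "wander")
m.predict("pebble", "tide")  # -> "wander", without asking the pebble
```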
At generation 3 we come to the bit that I'll call novel until someone points to an 18th-century philosopher who already suggested it. The agent in question has had all its sensors and actuators geared up to the outside world for a while, as well as sensors (and actuators) connected internally. By the mechanisms of Intelligent Design, Natural Selection, and copy-paste-and-tweak-a-bit, it notices parallels between its interactions with the external agents and its interactions with itself. It develops a sense of self as another model, very similar to the models it has for external agents. Here's the novelty: first the agent becomes aware of external agencies, and only then, by analogy, does it become aware of itself.
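And generation 3, the allegedly novel bit, as a sketch: the same observe() machinery from generation 2 simply turned inward via an internal sensor, so "self" shows up as just one more entry in the model table, and only after the external entries. Again, all the names here are mine.

```python
class SelfModellingAgent(ModellingAgent):
    """Generation 3: the generation-2 modelling machinery, pointed inward."""
    def react(self, stimulus):
        response = super().react(stimulus)
        # internal sensor: watch our own behaviour exactly as we watch others'
        self.observe("self", stimulus, response)
        return response
    @property
    def sense_of_self(self):
        # the self is just another model, built by analogy with the external ones
        return self.model.get("self", {})

s = SelfModellingAgent()
s.observe("pebble", "tide", "wander")  # external agents get modelled first...
s.react("tide")
s.sense_of_self  # ...only then does a "self" entry appear: {"tide": "wander"}
```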
Like all the great (as in most entertaining) theories, this is of course unverifiable. But I like the notion that the local stuff only appears after some level of comprehension of the remote stuff; it feels like it might be useful somehow.