Artificial Intelligence, Finite State Machines and Clockwork Oranges: Part 1

During my Computer Science finals a relatively interesting thing happened during the paper for Artificial Intelligence. We’d been told to begin, so I did. All the questions seemed pretty straightforward, so I dived right in. Then I heard talking. In an exam! Looking up, I saw one of the overseas students with their hand in the air, and I could just about make out what they were saying:

“I don’t understand ‘aesthetic’, what does this mean, can I see a dictionary please.”
Cue muffled middle-class outrage around the room: “how dare they; if you don’t understand what it means, you can’t answer the question”. But to me, this was an act of intelligence. The candidate was in an alien country, using a foreign language, sitting an examination on the development of methods and processes to simulate intelligence. It seemed like an appropriate place to start when considering software intelligences.



The question in question ran along the lines of “Describe a model and its analytical process that could evaluate the aesthetic appeal of a chair”. Seriously, that was the question. The second mildly interesting thing was that I managed to get a first for all my AI work despite not attending any of the lectures – I went to the first one, the lecturer simply read extracts from the set text, and I left very dissatisfied. So I gambled on making sure I totally understood the set texts against the lecture plans, and set about spending all my time reading philosophy primers, particularly on the dialectic method. This could have backfired spectacularly but, however flawed, it was a product of my ‘intelligence’. If it had failed, the word ‘stupidity’ would surely replace ‘intelligence’, and the two are not as dissimilar as you might first think. It’s only the end result that enables the process that derived it to be viewed one way or the other.



Could software have come up with that method of solving the problem? It would be very difficult, for the main reason that we don’t, as yet, have a generically effective way to semantically formalise the way we make connections ‘in a satisfying way’. That last qualification is, I believe, the root problem of providing something meaningful, because the endeavour looks to model what we do, and I don’t believe that will work, for the earlier reason. All this yields is a clockwork orange. Intelligence is not the product of a series of Finite State Machines (FSMs) but the emergent behaviour that results from them operating in parallel, their ordering defined initially by genetics and latterly by experience. Intelligence is being able to pull the signal out of the resultant noise by recognising and evaluating the pattern within it.
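To make the FSM point concrete, here’s a minimal sketch (the machine names, states and events are all invented for illustration, not from any real engine): each machine on its own is trivially mechanical, but the agent’s observable behaviour is the combination of several machines stepping in parallel on the same events.

```python
from dataclasses import dataclass

@dataclass
class FSM:
    """A minimal finite state machine: a current state plus a
    transition table mapping (state, event) -> next state."""
    state: str
    transitions: dict

    def step(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Two toy 'behaviour' machines for a game agent, run side by side.
mood = FSM("calm", {
    ("calm", "threat"): "alert",
    ("alert", "quiet"): "calm",
})
stance = FSM("wander", {
    ("wander", "threat"): "hide",
    ("hide", "quiet"): "wander",
})

def world_step(event):
    # Both machines see the same event; the agent's overall behaviour
    # is the combination of their states, not the output of either alone.
    return (mood.step(event), stance.step(event))

print(world_step("threat"))  # ('alert', 'hide')
print(world_step("quiet"))   # ('calm', 'wander')
```

Each machine here is dumb clockwork; whatever “behaviour” an observer reads into the agent lives in the combined state, which is the sense in which the interesting part is emergent rather than in any single FSM.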



For what it’s worth, I think texts on biological evolution (particularly Richard Dawkins’s The Blind Watchmaker) and the philosophy of reason are just as valuable for A.I. So too is Garry Kasparov’s How Life Imitates Chess. I would also recommend Bertrand Russell’s brilliant A History of Western Philosophy as a primer rather than trying to plumb the depths of something like Kant’s Religion within the Limits of Reason Alone. Likewise Wittgenstein’s Tractatus Logico-Philosophicus: it’s probably a work of genius, but you probably also need to be one to understand it. Safe to say I had a very hard time reading it and gave up about a third of the way in. It reminded me of trying to appreciate hardcore jazz. I quickly retreated back to the safe simplicity of pop music!



Intelligence is an umbrella term covering very different meanings, although I can clearly remember my naive university essays starting with the commonplace and ghastly “The Oxford dictionary defines intelligence as…”. For example, someone lucky enough to have a photographic memory could recall every story they read in the preceding week’s newspapers. To hear that person answer questions such as “last Tuesday, who visited the Iranian Embassy in London?” with laser-like precision renders us in awe of them. It would be common to think this person very intelligent. But if they were merely retrieving items from their exceptionally well-ordered mind without ‘understanding’ their meaning, it would not strictly be intelligence. In fact, this is what computers can do remarkably well. What they can’t do is tell you what Tuesday felt like, or why reading about the Iranian Embassy reminded them to phone their friend (because they also remembered that their friend’s mother is originally from Iran).
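That kind of recall-without-understanding can be sketched as plain key-value retrieval (the store, keys and answers below are entirely made up for illustration): the lookup is precise whenever the key matches, and there is nothing behind it at all when it doesn’t.

```python
# A toy 'perfect recall' store. Retrieval is exact and fast, but it is
# just key matching -- there is no understanding behind the answers.
news_memory = {
    ("last Tuesday", "Iranian Embassy visitor"): "a trade delegation",
    ("last Monday", "weather"): "rain",
}

def recall(when, what):
    # Laser-like precision when the key exists; nothing at all otherwise.
    return news_memory.get((when, what), "no memory of that")

print(recall("last Tuesday", "Iranian Embassy visitor"))
print(recall("last Tuesday", "how it felt"))  # the part a lookup can't do
```

The second query is the essay’s point: the store can’t say what Tuesday felt like, because feeling was never one of its keys.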



What computers are remarkably poor at is linking these pieces of information through abstract thought. Unlike, say, real-time lighting methods or ragdoll physics, we have as yet no standard way of framing the problem or its solution. Interestingly, I’m not sure why it’s so much easier to model the physical world; it’s not as if black-hole physics is a straightforward process to model and test.



My frustration with A.I. is that I saw it, and still do, as a storage problem. Computers are fantastic at storing and retrieving information, but we’re still at the atomic level with it. The human mind doesn’t just store a single piece of information; it stores lots of ‘incidental’ things alongside it, such as sense data, colours and emotions.



My opinion has always been that intelligence is an emergent effect of this evolutionary outcome, and that our thought process is a receiver able to listen to the channel which illuminates, like a broad searchlight in the dark, simultaneous areas of this vast archive. I’m not even sure we can achieve this with current von Neumann architectures; perhaps we need to evolve that part first.



So this is really just a first pass at setting out my stall (or excuses) here on what I think, before delving into the game’s A.I., because no doubt I’ll end up with a basket of clockwork oranges. But perhaps, for the purposes of my game at least, that’s an acceptable (and pragmatic) goal.



By way of a postscript, it’s been pointed out that I’ve not mentioned the Turing Test. That was deliberate: I see it as a contrived benchmark that isn’t relevant. An interesting experiment but, for the record, I think measuring an entity’s intelligence using a human as the yardstick is a bit rich. As Dylan said:

“We’re idiots, babe, it’s a wonder we can even feed ourselves”
