We have always wondered about knowledge, truth, and being. In every civilisation we built, while learning how to construct roads, houses, and societies, we were also constructing systems of thought and methods to try to describe how we think, learn, and deduce truths. These were so precious to us that even during long centuries when commerce between the ends of the earth barely existed, we still found ways to communicate about mathematics, philosophy, religion, and logic. Centuries of brutal colonialism wiped much of that from collective memory, replacing it with moronic lies about the supremacy of European intellect; regardless, the search for a deeper understanding of the universe was always a globalised endeavour.
It was tortured and slow for most of it, with bursts of growth scattered over millennia and across the globe, at times when one civilisation or another felt a pressing need to search for meaning deeper than what the current narrative had to offer.
By the 19th century, we developed some basic forms of logic, and quite a few algorithms for solving equations and measuring things larger than what we could pace.
Then we learned how to build machines, really complicated ones. We even built some that could take instructions on a wooden card with holes and weave different beautiful patterns by following them. A few decades later, in an incredibly insightful collaboration between a genius engineer and an aristocratic mathematician, the daughter of a flamboyant gay poet, the first attempt was made to build a thinking machine based on the rules of logic and mathematics. He was building it to solve equations; she hoped it could write poetry and compose music (providing, along the way, the first encoding of higher-order concepts into numbers and functions, to make a convincing case). They failed, because at the time the undertaking was far too expensive, and possibly impossible to manufacture with the tools available.
What survived their failure was the understanding that, at least in principle, we can build machines to do so much of what only humans could do.
As the century turned, one side of the world was drowning in decadence, and the rest of it was growing tired of colonial exploitation. The liberation that new machines brought, the speed at which news travelled unhindered and unfiltered, and the availability of cheap knowledge printed en masse on one hand, and the undeniable savagery performed in the name of a once-loving god on the other, all culminated in the most horrific butchery fest the world has ever seen: World War One. A war that took millions upon millions of our children only to rearrange the commerce of the world a little.
It scarred us; it brutalised our faith in an external agent of good that would sort it out for us. God failed us terribly, and we were at once abandoned and freed like never before.
Free to explore and driven to find out what else is there, we learned about forces that bend the shape of space and light, about inexplicable randomness at the core of existence, about the darkest and purest thoughts that drive us; we were painting pure thoughts and visions and composing music about lust and meditation; we learned how to capture the energy of water, wind, or fire and send it, pure as can be, along thin strips of metal that we forged ourselves; we brought light where there had been darkness for as long as we existed; we found ways to see inside our bodies while still alive, to talk to each other through thin air across the world; we discovered antibiotics; and we discovered so much mathematics that no single person could even keep track of all that is there.
It was beautiful, and it was Chaos.
Which is not necessarily a bad thing, but in the west we were not used to that. Not at all.
The greatest minds of the time, arguably amongst the greatest minds we ever produced, started an international movement in Europe and later America, a programme with conferences and manifestos (they called them differently, but that is what they were) to restore order by looking inwards, into our minds, by way of systematising all mathematics. They were going to do it using the power of pure rational analytical thinking and logic. Their mission was to encode the way intelligence works in a set of axioms and rules, a complex but clear and unambiguous mechanical statement.
They failed. Of course they did; we are not made of rational analysis alone, nor is the universe. Their lives were, almost without exception, stories of nervous breakdowns, deep depressions, paranoid delusions, or suicides. The notion that completeness of mind must involve unruly randomness and chaos was simply too alien to accept; and that was exactly what they were discovering at the limits of their efforts, again and again, no matter what they tried. Quantum randomness, general relativity, the Freudian psyche driven by lust, the heart-wrenching and laughter-inducing beauty of surrealism … it was everywhere.
It all became painfully clear when a very young mathematician, Gödel, published his Incompleteness Theorem. It says that no matter how hard you try to come up with a set of axioms and rules for deriving theorems from those axioms, there will always be statements that you know are true but that you can neither prove nor disprove within that system. In other words: no system consisting only of axioms and rules can ever be a complete description of truth.
A little later, Turing imagined a machine that could execute algorithms, proved that it could execute any algorithm, and proved that it is impossible to construct an algorithm that will give a definitive answer to every question about every number. All in a single paper. The key point was that last bit.
It is known as the Halting Problem, and it was formulated along these lines: is it possible to write a big mamma computer program (call it the oracle) that will take as input any other program along with any input to it, and tell you whether executing that program on that input will ever halt, or will go on forever? The answer is no.
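The reason the answer is no can be sketched in a few lines of (deliberately hypothetical) code. Assume the oracle exists as a function `halts(program, data)`, and build a program that contradicts whatever the oracle says:

```python
# A sketch of Turing's diagonal argument. `halts` is the hypothetical
# oracle that we pretend exists; `trouble` is built to contradict it.

def halts(program, data):
    """The impossible oracle: would return True if program(data) ever stops."""
    raise NotImplementedError("no such oracle can exist")

def trouble(program):
    # Ask the oracle about a program fed to itself...
    if halts(program, program):
        while True:   # ...and if it says "halts", loop forever.
            pass
    else:
        return        # ...and if it says "loops", halt at once.

# Now ask: does trouble(trouble) halt? If halts(trouble, trouble)
# returns True, trouble loops forever; if it returns False, trouble
# halts immediately. Either way the oracle's answer is wrong, so no
# such oracle can be written.
```

The self-reference is the whole trick: the oracle has to answer for every program, including the one built to spite it.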
It is a monumental result, because you can formulate any question in mathematics this way. For example, want to know if there are infinitely many primes? Write a program that takes a number and searches for a prime larger than it, then ask the oracle about it – if the search always halts, the primes never run out; if there is an input on which it runs forever, there is a largest prime. Care to know if an integer equation has a solution at all? Write a program that tries all numbers in turn, feed it to the oracle… Is there a closed-form solution to an integral? Start trying things out, and let the oracle tell you if you will ever stop.
Nope, can’t be done, not by writing an algorithm. The thing is though, we do this all the time.
How do we do it? We don’t know. Ok? We just don’t. Still working on it.
In the meantime, other models of computation were developed – you can represent computation using functions alone (Church’s lambda calculus), graphs, Conway’s Game of Life (apparently!) … all sorts. These models have all been proved equivalent to one another; the broader claim that they capture everything we would call computation is known as the Church–Turing thesis. No matter how you go about it, there is a small set of basic operations that a machine (or a method) has to supply to be able to execute algorithms. Such a set of operations is said to be Turing Complete – it can do anything that Turing’s machine can do. There are many variants of minimal sets like this, but in essence they all need:
- An ability to store, read, and write values to and from somewhere – some kind of memory – and a way to address the relevant memory locations. This could be explicit, like RAM, or quite implicit, like function arguments and return values in functional programming.
- Some version of a concept of the current state of execution (a stack pointer and an instruction pointer, a named state, a position in a stack of function calls … something).
- An ability to perform some version of basic arithmetic (boolean algebra, integer algebra, function composition … any of them will do for this purpose).
- An ability to inspect the results of calculations and change the state of execution based on the value of the result (jump to a different location in code, slip to a different named state, invoke a different function …).
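Those four ingredients can be sketched as a toy register-machine interpreter – memory (the registers), a current state (the instruction pointer), basic arithmetic, and a conditional jump. The instruction set here is invented for illustration, not any standard machine:

```python
# A minimal register machine: enough to demonstrate the four
# ingredients of Turing completeness (memory, execution state,
# arithmetic, conditional branching).

def run(program, registers, max_steps=10_000):
    pc = 0                                  # current state: instruction pointer
    steps = 0                               # safety valve for this sketch
    while pc < len(program) and steps < max_steps:
        steps += 1
        op, *args = program[pc]
        if op == "set":                     # memory: write a constant
            registers[args[0]] = args[1]
        elif op == "add":                   # basic arithmetic
            registers[args[0]] += registers[args[1]]
        elif op == "jnz":                   # inspect a value, change state
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# Multiply 3 * 4 by repeated addition:
prog = [
    ("set", "acc", 0),
    ("set", "n", 4),
    ("set", "neg1", -1),
    ("add", "acc", "three"),   # acc += 3
    ("add", "n", "neg1"),      # n -= 1
    ("jnz", "n", 3),           # loop while n != 0
]
regs = run(prog, {"three": 3})
# regs["acc"] == 12
```

Note the `max_steps` guard: the moment we added `jnz`, the machine became able to loop forever, which is exactly the twist discussed next.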
The twist is, as soon as something is Turing Complete, it allows for infinite loops – and the halting problem comes with them.
There are very many things you can do without Turing completeness, and much research went into this. Theory of Computation is a large area of science.
The most commonly used case of this is the finite state machine. This is a model of computation that drops the requirement for memory – you have to have some to keep track of the current state and process inputs, but that’s it: nothing permanent, nothing dynamic. They are often implemented as parts of larger algorithms. Their huge advantage is that they can only loop forever if the input is infinite.
Look them up; they are very powerful and used all the time, even though much of the time you don’t even know it. In short, this is how they work: define a finite set of states, then read input in chunks; for every allowable type of input, define how the state changes. Most often they are drawn as graphs, but you can think of them as tables too: rows are states, columns are valid input chunks, and each cell defines which state the machine goes to for a given input and a given starting state.
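That table description translates almost directly into code. A minimal sketch (states and alphabet invented for this example): a machine that accepts binary strings containing an even number of 1s.

```python
# A finite state machine written as the table described above:
# (state, input chunk) -> next state. This one tracks the parity
# of 1s seen so far in a binary string.

TABLE = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def accepts(bits, start="even", accepting=("even",)):
    state = start
    for ch in bits:                 # read the input one chunk at a time
        state = TABLE[(state, ch)]  # look up the next state in the table
    return state in accepting       # accept if we end in an accepting state

# accepts("1001") -> True (two 1s), accepts("1011") -> False (three 1s)
```

Notice there is no memory beyond the single `state` variable, and the loop runs exactly once per input symbol – it cannot run forever on a finite string.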
For example, regular expressions are classically implemented as finite state machines (formal regular expressions and finite state machines recognise exactly the same set of languages). Any sort of workflow that you don’t want to keep going forever is also a finite state machine. Language parsing uses them a whole lot.
There is a hierarchy of computation models with finite state machines at one end and Turing machines at the other, with capabilities increasing as you go. It’s called the Chomsky hierarchy, and it’s only four deep. The details get boring quickly, but what is fascinating about it is that it came from linguistics – Chomsky is a linguist. He was looking at the underlying structures of human languages and came up with the idea that once you try to specify language grammars formally, there is a hierarchy of grammars with increasing expressive power. Each level maps directly to a type of computational model, an abstract machine.
Language expressiveness maps directly to the ability to express and solve logical problems. Just as our minds need more than algorithms to solve problems, our languages need more to sound real.
But we can create simplified languages that can express enough thought to describe any algorithm and do it unambiguously. This is what programming languages do; and this is why they work.
It’s a beautiful result, and very useful, but only surprising in the context of the European view of logic as separate from humanity; in Indian versions of logic, the language that expresses statements, and its expressive capability, are considerations from the get go.
Free Will, Strong AI and Machine Learning
There is a school of philosophy that claims that none of this work matters, and none of the results count; we are still just implementing very complex algorithms.
The justifications get very detailed, but in the end they boil down to the claim that you don’t actually need imagination – it doesn’t even exist, it’s just an illusion. Gödel’s and Turing’s work is dismissed on the (flimsy) grounds that the proofs require a concept of infinity, and infinity doesn’t exist. It is a rather dogmatic approach, dismissive (with disdain) of the idealistic interpretation of the world. It deals with quantum randomness in a similar fashion.
It has an important consequence – if it is true, then there is no such thing as free will. The universe is an immensely complicated clockwork and we are tiny cogs.
I have two problems with it.
Firstly, it purports to be purely, rationally scientific, but it fails the core test of any scientific theory: it is not falsifiable. You cannot prove it wrong even in principle; the answer is always that all you need is more computational power and you will get there.
Mind you, the same goes for the opposing view – that there is more to it than algorithms, we just don’t know what; we don’t even know why we don’t know, or whether we can ever know.
Where we are now boils down to the question of free will. Do you believe in it or not?
I like my free will. That is my second problem with Strong AI. It is depressing enough knowing that you cannot always exercise what you feel is good; believing that it utterly does not matter, not even in principle, would make life unbearable. Life is an exercise in free will; you must have it in order to love.
The current state of research into all this is that it is mostly stalled; it certainly ain’t cool or well funded. We are overwhelmed instead by the promise that neural networks / machine learning algorithms / artificial intelligence will finally create thinking machines and solve our problems.
Curiously, this time the attempt to mimic humans is not brought around by an existential crisis, it is symbiotic with it, they amplify each other.
Don’t get me wrong, the ability to implement algorithms that can configure themselves with enormous flexibility, just by looking at data, in ways that we would literally never have time for, is hugely entertaining, impressive, and will have many uses. It will make life better and easier in many ways.
But they are just algorithms. The big questions they pose are about decadence: how much of our ability to create beauty and love are we willing to give up for the sake of comfort and an illusion of safety that comes with brutal ordering by strict rules? It is not a new question; we’ve been through this every time we conquered a new technology. Every time we chose life and love, even if the lesson came at huge cost to the generations that had to decide.
I remain hopeful.