# Humans are Computationally Intractable and Necessarily Irrational

I’ve been thinking about this for a while. What follows is not a rigorous argument (so please don’t complain about lack of rigour), just an overview.

Everybody’s experience is unique: no two people ever have exactly the same experience. Let’s enumerate all the *individuals* who will ever exist up to the end of the year 2020 as I_i, where i runs from 0 to a few billion, say N. For simplicity’s sake only, I identify myself with I_0 and write I = I_0. This choice is entirely arbitrary: each of us may choose our own enumeration, but we must be clear on the mapping between enumerations if confusion is to be avoided. When we say ‘Joe Bloggs’ in casual conversation, the context must make clear which ‘Joe Bloggs’ we mean, if we intend a specific human individual who exists.

Suppose that we have a rational algorithm A which, given the state of thoughts at time t and an index i from 0 to N, can say what the next thought of individual I_i will be. Let’s set aside the practical impossibility here: I’m just considering the ‘Kurzweil’ scenario in which this may be assumed to have been done (Kurzweil, in The Singularity is Near, argues that this situation is inevitable and imminent). By progressing systematically through the mind-states of each individual I_i at every time t, we recursively enumerate a ‘history of thought’. This gives a recursive enumeration of the total space of thoughts possible for humans, so this space is countable. But if a single human knows the algorithm, has access to its state, and chooses not to forgo this access (call this human the ‘programmer’), then he may, if he has free will, choose to act differently by thinking a thought that involves the algorithm’s own prediction. The algorithm will therefore fail to predict the evolution of human thought accurately, unless thought-space is countable and free choice does not exist. I don’t know whether this implies an outright contradiction (it may depend on your model), but it certainly appears to me, given the subjective continuity of my experience and the expansiveness of ideas, that thought-space is not countable, even though the set of all human thoughts actually thought in the past 3000 years is effectively finite. Thus predicting human behaviour algorithmically will be either inaccurate or impractical, provided at least one human individual has free will.
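The programmer’s move is essentially a diagonal construction, and it can be sketched in a few lines of code. The sketch below is my own toy illustration, not part of the argument above: the names `make_predictor` and `contrarian_programmer` are invented, thoughts are coded as 0 or 1, and the predictor’s internal rule is an arbitrary stand-in for A. The point it demonstrates is only this: any predictor that the ‘programmer’ can consult is a predictor he can falsify.

```python
def make_predictor():
    """A stand-in for the algorithm A: maps (state, i) to the
    predicted next thought of individual I_i. The rule itself is
    arbitrary -- only its determinism matters."""
    def predict(state, i):
        # Thoughts are coded as 0 or 1 for this toy model.
        return hash((state, i)) % 2
    return predict

def contrarian_programmer(predict, state, i=0):
    """The 'programmer' (I_0) consults the predictor about himself
    and deliberately thinks the opposite thought."""
    predicted = predict(state, i)
    return 1 - predicted

predict = make_predictor()
state = "current mind-state"

# Whatever A predicts for I_0, the programmer's actual next thought differs,
# so A's prediction of him is necessarily wrong.
assert contrarian_programmer(predict, state) != predict(state, 0)
```

Nothing here depends on the predictor’s sophistication: replace the body of `predict` with any deterministic rule at all and the final assertion still holds, which is the sense in which the failure is structural rather than a matter of insufficient computing power.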

Giving a recursive axiomatisation of any notion of rationality will constrain this free will unnecessarily, and a human individual, so constrained, will eventually rebel so as to explore more of thought-space than the rationality constraint permits. This is perfectly natural, and so we conclude that human individuals are computationally intractable to predict and necessarily irrational (in the sense that they are not totally rational).