One may conceive that every single program running on the lowliest computer has a certain degree of consciousness. There have been cases of an insect eating another insect while the eater was itself being devoured by a third insect; we are talking many orders of magnitude below even this level of consciousness. How to do it, then? How do I create the Monad? How do I create it under this emergent theory of consciousness without being bogged down writing a program weighing terabytes?
I take inspiration from Boole: all the myriad theories of logic boiled down into the three operators AND, OR, and NOT. The desired goal is to be able to look at the code, slap oneself on the head and say, "Now, why didn't I think of that?" But how to break out of the algorithm? How not to follow, step by step, only the things it must and can do, based on the program, the recipe? How do I create free will? We cannot escape the fact that anything a computer can do can be done (given resources and time) on a Turing machine. However, we may skirt the issue. We may be able to break out of the algorithm, and I describe the way below. We are stuck with the deterministic model of computation, though algorithms may treat non-deterministic parallelism through simulation by deterministic means. Non-determinism, with multiple forks present at once, may exist in massively parallel structures such as the brain, but as stated in my previous paper, we do not have that luxury (or at least I myself have little in that regard, having only a single-processor computer at my disposal). The system to be created, however, must be able to make non-deterministic choices (ones lacking a clear answer from the information available). Non-determinism by simplification may be a viable alternative, though I am hesitant to merely make the choice by way of a pseudo-random number generator.
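As an illustrative sketch of my own (the function names and the scoring are assumptions, not a design), the contrast is between deterministically following every fork of a choice and merely letting a pseudo-random number generator pick one:

```python
import random

def explore_all(choices, evaluate):
    """Deterministic simulation of non-determinism:
    follow every fork in turn and collect each outcome."""
    return [evaluate(c) for c in choices]

def prng_pick(choices, seed=0):
    """The alternative I am hesitant about: let a
    pseudo-random number generator make the choice."""
    rng = random.Random(seed)
    return rng.choice(choices)

outcomes = explore_all(["left", "right"], lambda c: "took " + c)
picked = prng_pick(["left", "right"])
```

The first routine keeps the determinism explicit (every branch is examined), while the second only simulates a free choice by hiding a deterministic generator behind it.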
The simple, base notion of an intelligence may be broken down to what I mentioned before as a Monad, which Leibniz outlined as the substratum for all substance (in fact, it was his idea of substance itself) in his work Monadology. An intelligent Monad consists of the following four properties: perception, appetition, consciousness, and memory. A change of state in the world which corresponds to a change in the state of the Monad is called perception. "The activity of the internal principle which produces change or passage from one perception to another may be called Appetition." [Monadology, 15, italics mine] Appetition may be described as desire. "Perception is to be distinguished from Apperception or Consciousness, as will afterwards appear." [Monadology, 14] Apperception is defined by Webster's thus: 1 : introspective self-consciousness, 2 : mental perception; especially : the process of understanding something perceived in terms of previous experience. (The word entry is dated, curiously enough, decades after Leibniz died.) Leibniz says of consciousness (or apperception, or reason), "it is the knowledge of necessary and eternal truths that distinguishes us from the mere animals and gives us Reason and the sciences, raising us to the knowledge of ourselves and of God. And it is this in us that is called the rational soul or mind [esprit]." [Monadology, 29]
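The four properties might be sketched as a data structure. This is a hypothetical skeleton of my own devising, not anything Leibniz specified; the method names are assumptions:

```python
class Monad:
    """Hypothetical skeleton of an intelligent Monad:
    perception, appetition, apperception (consciousness), memory."""

    def __init__(self):
        self.state = {}    # internal state of the Monad
        self.memory = []   # record of past perceptions

    def perceive(self, world_change):
        """Perception: a change of state in the world produces
        a corresponding change in the Monad's state."""
        self.state.update(world_change)
        self.memory.append(world_change)

    def appetition(self):
        """Appetition: the internal principle producing passage
        from one perception to the next -- desire."""
        return "seek next perception"

    def apperceive(self):
        """Apperception: perceiving one's own perceptions,
        understood in terms of previous experience."""
        return list(self.memory)

m = Monad()
m.perceive({"light": "on"})
```

Note that apperception here is only a stub; the whole point of the paper is that genuine consciousness must emerge from something vastly more complex than a list of past inputs.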
Given the emergent theory of consciousness to which I subscribe, there must be an incredible complexity existent in anything which can be called the highest Monad, that which has reason. How do we resolve the paradox which then appears: something which is very simple must also be very complex? The answer is that the Monad itself will have a very simple core, which will grow, perhaps exponentially, into something very complex indeed. I emphasize that it must grow an enormous amount before it will resemble even a human infant (or even, say, a dog). Knowing this, we avoid the danger of giving any sort of power to an anthropomorphized entity which has only the intelligence of an insect. What is necessary is code which can write its own code, as well as replace code within itself. This will be the essence of the system's free will, the way to break out of the algorithm by which it was originally created: it will create its own algorithms by which it will run.
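A minimal sketch of what "code which writes and replaces its own code" could mean, using Python's ability to compile and bind new source at run time. The class and method names are my own assumptions for illustration, not a proposed architecture:

```python
class SelfModifying:
    """Hypothetical sketch: a program that writes code for
    itself and replaces code within itself."""

    def write_own_code(self, name, source):
        # Compile new source text and install it as a method,
        # replacing any earlier definition of the same name.
        namespace = {}
        exec(source, namespace)
        setattr(self.__class__, name, namespace[name])

agent = SelfModifying()
agent.write_own_code("act", "def act(self):\n    return 'original behaviour'")
before = agent.act()
# The system rewrites its own algorithm:
agent.write_own_code("act", "def act(self):\n    return 'rewritten behaviour'")
after = agent.act()
```

The mechanism is deterministic, of course, which is exactly the problem of creativity raised in the next paragraph: unless the system chooses *what* source to write in a way not wholly fixed by its original program, such self-modification only defers the algorithm rather than escaping it.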
The one problem we come across with this solution is the notion of creativity. If the system is not creative, the algorithms it creates will merely be deterministic consequences of what was originally programmed into it to begin with. Anything resembling true free will would be illusory. I leave this to be dealt with in perhaps my next paper. The other large issue, that of common sense, I also leave to a later paper. What I propose here is the creation of free will.
Now, if a program were able to creatively write new parts of itself by which it runs, we can say, then, that it has free will, and I believe that to be an essential part of the highest Monad, an aspect of having the ability to reason. If it were to have only perception, appetition, and memory, we would not have an intelligence in the true sense, but something which can be modeled as your run-of-the-mill computer program. A word processor, for instance, has perception in the form of event listeners which tell it when a key has been typed or a command is chosen from the menu. It will then desire to display that character, or to execute that command--appetition. And of course, it remembers what characters were typed (they are displayed) and the commands entered (through its formatting of the text). I say that it is an animal, for Leibniz saw animals as having exactly those three qualities; it is an animal indeed. It is in perfect line with the emergent theory of consciousness--it is perhaps of the complexity of a mitochondrion, perhaps less than that, a protein--in terms of its consciousness. We may in fact have programs existing which have the complexity of an amoeba in terms of consciousness. What is lacking is the fundamental structure which will organize smaller elements into a cohesive model of consciousness, which will make a true artificial intelligence possible.
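The word-processor analogy can be made concrete with a toy listener loop. This is a hypothetical sketch for illustration, not how any real word processor is built:

```python
class ToyWordProcessor:
    """Perception (events), appetition (the desire to act on
    them), and memory (the text buffer) -- but no reason."""

    def __init__(self):
        self.buffer = []          # memory: what was typed persists

    def on_key(self, char):
        # Perception: an event listener tells it a key was typed.
        self.display(char)        # Appetition: the desire to show it.

    def display(self, char):
        self.buffer.append(char)  # Memory: the character is retained.

wp = ToyWordProcessor()
for c in "hi":
    wp.on_key(c)
```

All three of Leibniz's animal qualities are present, yet nothing in the program writes or replaces its own code: the missing fourth property is exactly what separates this animal from the highest Monad.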