Artificial Intelligence, or AI, is one of the hottest topics of the modern computing age in which we live.  Debate and speculation about how close we are runs everywhere from the highest levels of academia to the tabloid newsstands.  In my humble and somewhat lay-person understanding of the technology involved, I’ll take a stab at explaining why we are much further away than some might believe, and how certain basics are continually overlooked when mainstream media reports update us on the latest developments.

Let me first explain how the term AI is, in and of itself, entirely wrong in the context of what everyone believes when they hear it.  Artificial intelligence exists right now insofar as we rely on computational machines to perform tasks on which our daily lives depend.  And even when the commonly understood version of AI does arrive, it won’t be artificial at that point.  Simply put: artificial sugar isn’t artificial if it’s sugar.  Artificial in this context is simply the distinction between man-made and naturally occurring.

What we should be talking about is organic and inorganic consciousness, and I for one would feel better if we started using those terms.  In fact, it is this exacting distinction that hints at why it will be so difficult for an organic consciousness to create an inorganic one.

Memory certainly won’t be an issue.  Most of us can hold more data in our hands than in our minds at this point in history.  I would stipulate that overburdening any attempt at inorganic intelligence with too much raw data would be counterproductive to the development of intelligence and consciousness.  Relationships between objects, words, numbers, memories, instincts and emotions all play a part in how intelligent we are, but consider this: instincts and emotions are the only items on that list we are born with.

Every object in our daily lives, every word, every number, and the languages we utilize to communally apply understanding are human constructs.  There is also a lot we don’t yet understand about understanding itself.  Consider the sky for just a moment.  Whoever you are, you would most likely agree the sky is blue.  The science behind that is far less important than the basic fact that we agree on blue.  Now consider that no matter how much we agree, you cannot see through my eyes any more than I can see through yours.  Given that, how do either of us know the other is not seeing red and simply understanding it to be blue?

Going back to basics, what is intelligence in the first place?  IQ measurements are human constructs, and even the popular definitions are made up of constructed language implying reason, comprehension, mental acuity and learning ability.  Starting, then, with learning: how do organic organisms do it, and perhaps more importantly, why?  Referring back to the earlier list, we (at least most humans) begin with instincts and emotions.  Few infants fail to know what to do when brought to the teat for the first time.  This is a basic survival instinct that drives the infant to take in nutrition.

Were an AI machine (AIM) to be equipped with robotic eyes, arms, legs, backup battery power and so on, what would the machine do if you unplugged it from mains power?  This is a much more complex scenario than it may appear at first.  For the next little while, we will consider the multiple variations of this situation in terms of the infant brought to the teat.  In no particular order of consideration:

  • Does the AIM know it needs the power to survive?
  • Does the AIM know what it means to survive or perish?
  • Does the AIM value survival over perishing?
  • Does the AIM know its power comes from the outlet?
  • Does the AIM know what power is?
  • Does the AIM know human creators exist?

The last question is of vital consideration.  Like the infant again, would an unplugged AIM wait for us to feed it?  Even after its backup power fails, the AIM is not dead; once plugged in again, it can simply be rebooted.  An infant, by contrast, never thinks that mom won’t come back to provide food and doesn’t equate her absence with any threat to survival.  How then can a survival instinct be programmed into a computer?  And can what we think of as AI, or artificial consciousness, exist without one?
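To make the difficulty concrete, here is a minimal, entirely hypothetical sketch of what a "programmed survival instinct" amounts to in practice.  The class name, thresholds and states are my own inventions, not any real system; the point is that every boundary the machine "fears" is a number a programmer picked, not a value the machine holds.

```python
# A toy "survival instinct": nothing but programmer-chosen thresholds.
# All names and numbers here are illustrative assumptions.

class AIM:
    """A hypothetical AI machine with mains power and a backup battery."""

    LOW_BATTERY = 0.2  # an arbitrary boundary WE decided counts as "danger"

    def __init__(self):
        self.battery = 1.0       # fraction of backup charge remaining
        self.mains_power = True  # plugged into the wall outlet

    def unplug(self):
        self.mains_power = False

    def tick(self):
        """One step of operation; drains the battery when unplugged."""
        if not self.mains_power:
            self.battery = max(0.0, self.battery - 0.1)

    def status(self):
        if self.mains_power:
            return "operating"
        if self.battery <= 0.0:
            return "halted"      # not "dead": plug it back in and reboot
        if self.battery <= self.LOW_BATTERY:
            return "seek power"  # a reflex we wrote, not a fear it feels
        return "operating"


aim = AIM()
aim.unplug()
for _ in range(9):
    aim.tick()
print(aim.status())  # battery nearly drained: "seek power"
```

Notice that the "instinct" answers none of the bulleted questions above.  The machine does not know what power is or value survival over perishing; it merely compares a number against a threshold we assigned, which is precisely the gap the infant-at-the-teat comparison exposes.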

Tropisms are behaviors in response to stimuli found in nature, and humans exhibit them too, even though they are most often referenced with respect to plant life.  We recognize night and day, up and down (gravity’s influence), hot and cold, wet and dry, and use these influences to shape a basic understanding of our environment.  Like other animals, we recognize basic survival instincts and needs.  We must eat, drink, breathe and sleep, and a lack of any one of these requirements can significantly alter our cognitive abilities.  Our earliest ancestors chose to learn only those things that aided in simplifying survival and reducing the effort expended to achieve it.

A machine, however, can be built of materials that could easily continue functioning in environments fatal to humans.  How then, if a programmed survival instinct is even possible, do we assign the boundary conditions that, when approached, become perceived as a threat to survival?  Would you select the extremes of our Earth as the boundaries, or would you consider the coldest vacuum of space to the surface of the sun as the defined extremes?  Does humankind even know the limits of the boundaries to be found in the universe?

Many a science fiction story has used AI (OK, I want to use consciousness as the benchmark, so from now on it’s IC, for inorganic consciousness, instead of AI) to show off societies where humans bask in relaxation while robots and robotic instruments take care of our needs and even a few whims.  Then the machines become our overlords, and we realize only too late that we are without our freedom.

The only two such stories I know of that have any credibility relative to today’s technology are “Colossus: The Forbin Project” from 1970 and “WarGames” from 1983.  Society’s freedom was lost not to any IC but rather to learning programs that were given control of nuclear arsenals without anybody thinking to put in an override or kill switch.  Sorry for the oversimplifying spoilers.  Even the sci-fi complexities and risks to humankind offered in Kubrick’s 2001 or Asimov’s I, Robot eventually boiled down to programming conflicts more colloquially known as “human error”.

Ultimately, for the present and foreseeable future, any claims of breakthroughs in IC or AI are unmitigated egotistical nonsense on the part of the claimant.
