The Best of Creative Computing Volume 2 (published 1977)

Page 48

Primer on Artificial Intelligence (elements necessary for artificial intelligence)


The human nervous system is vastly more intricate than that of the computer, but it is more sluggish in handling messages (Fig. 4). The reason lies in the speeds at which electrical impulses are transmitted: human nerve pulses last about one thousandth of a second, whereas typical computer pulses last only a few billionths of a second. Hence, the human brain can process only 50 billion bits of information within a conscious lifetime, while the computer can process the same number of bits within a couple of hours.
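The speed comparison can be put in back-of-the-envelope terms. The pulse widths below are the figures quoted above; the computer's effective throughput is an illustrative assumption, not a number from the article.

```python
# Illustrative arithmetic for the speed comparison; pulse widths are the
# article's figures, the throughput is an assumed round number.
nerve_pulse_s = 1e-3          # human nerve pulse lasts ~1/1000 second
computer_pulse_s = 5e-9       # computer pulse lasts a few billionths of a second
print(f"pulse-speed ratio: {nerve_pulse_s / computer_pulse_s:,.0f}x")

lifetime_bits = 50e9          # bits the brain processes in a conscious lifetime
throughput_bps = 7_000_000    # assumed effective computer throughput (bits/sec)
hours = lifetime_bits / throughput_bps / 3600
print(f"computer time for the same bits: {hours:.1f} hours")  # about 2 hours
```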

Fig. 4. Complexity or sophistication of the information processes available.


A machine must have facilities for representing and analyzing its own goals and
resources. There are three basic elements necessary to achieve true artificial
intelligence: memory, pattern recognition, and learning.

It is assumed that the long-term storage of information and data in the brain is necessary to learning. Memory is, in actuality, a problem of recognition, because facts are rarely at hand in the form in which they are needed. Man's recognition of patterns in data is largely due to his fabulous memory system and its ability to classify information. If 225,000,000,000 computers (IBM 370/135 or equivalent) were connected together, they still would not achieve the memory capacity of the human brain.

Pattern Recognition
Many of the problems in artificial intelligence are problems of pattern recognition, such as identifying printed letters (Fig. 5).

Pattern recognition techniques are necessary to cut down the number of possibilities that must be considered in solving a problem. Unless this is done, the search for a solution grows exponentially and soon outstrips the limits of the computer.
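The exponential growth can be made concrete with a little arithmetic. In a blind search with branching factor b explored to depth d, roughly b to the power d states must be examined; the figures below are illustrative assumptions.

```python
# Why unpruned search "outstrips the limits of the computer": a blind
# search with branching factor b and depth d examines b**d states.
def states_examined(branching: int, depth: int) -> int:
    """Number of states a blind exhaustive search must consider."""
    return branching ** depth

print(states_examined(10, 6))   # 1,000,000 states
print(states_examined(10, 12))  # 1,000,000,000,000 -- doubling the depth
                                # squares the work

# Pattern recognition acts as pruning: discarding most alternatives at
# each step cuts the effective branching factor, and the savings compound.
print(states_examined(2, 12))   # only 4,096 states at branching factor 2
```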

At present, pattern recognition programs do not even approach the flexibility of
human pattern recognition abilities. Until they do, true artificial intelligence
will not be possible.

Over 2300 years ago the Greek philosopher Aristotle studied the process of associative learning, and for centuries man has been fascinated by it. The two most important families of contemporary learning theory are Stimulus-Response theory and Cognitive-field (or Gestalt) theory.

	Fig. 5. Sequential-processing program for distinguishing four letters, A, H, V and Y. It employs three test features: presence or absence of a concavity above, a crossbar, and a vertical line. The tests are applied in order, with each outcome determining the next step.
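The sequential scheme the Fig. 5 caption describes can be sketched in a few lines. The feature values below are stubbed in a lookup table, an assumption for illustration; a real program would extract them from the letter's image.

```python
# Sketch of a sequential-processing classifier for A, H, V and Y.
# Feature values are assumed by hand rather than measured from an image.
FEATURES = {
    "A": dict(concavity_above=False, crossbar=True,  vertical_line=False),
    "H": dict(concavity_above=True,  crossbar=True,  vertical_line=True),
    "V": dict(concavity_above=True,  crossbar=False, vertical_line=False),
    "Y": dict(concavity_above=True,  crossbar=False, vertical_line=True),
}

def classify(features: dict) -> str:
    """Apply the three tests in order; each outcome picks the next step."""
    if not features["concavity_above"]:
        return "A"          # closed apex: no concavity above
    if features["crossbar"]:
        return "H"          # open top plus a crossbar
    if features["vertical_line"]:
        return "Y"          # open top meeting a vertical stem
    return "V"              # open top, no crossbar, no vertical line

for letter, feats in FEATURES.items():
    assert classify(feats) == letter
print("all four letters distinguished")
```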


Stimulus-Response. Under the Stimulus-Response view, behavior is seen as a transaction between the stimuli that impinge on an organism and the resulting responses. The Stimulus-Response theorist sees learning as the formation of a permanent relation between stimulus and response.

In the early 1900s the American psychologist E. L. Thorndike formulated the Law of Effect: when a person repeatedly does something successfully, the neural pathways become reinforced; when a person repeatedly fails, they become inhibited. Ivan Pavlov's famous experiments on the salivation of a dog when a bell rings illustrate this theory quite well.

	Current computer programs, for the most part, follow the Stimulus-Response theory, since the same input usually engenders the same output.
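A minimal sketch of that point, assuming a fixed lookup table as the program's behavior: the stimulus-response mapping never changes, so the same input always yields the same output.

```python
# A conventional program as a fixed stimulus -> response table
# (illustrative entries; no learning takes place).
responses = {"bell": "salivate", "food": "salivate", "light": "ignore"}

def respond(stimulus: str) -> str:
    """Fixed S-R relation: the mapping is permanent and never adapts."""
    return responses.get(stimulus, "no response")

assert respond("bell") == respond("bell")   # determinism: same in, same out
print(respond("bell"))                      # salivate
```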

	Gestalt. Gestalt is a German word for a configuration whose characteristics are more broadly based than those of its parts. Gestalt psychology originated in Germany in the early part of the 20th century with four psychologists: Max Wertheimer, Wolfgang Kohler, Kurt Koffka, and Kurt Lewin. Gestalt theorists see learning as goal-oriented, with the learner creatively bent toward that goal.

	Researchers in artificial intelligence who use Gestalt theories as their guide generally analyze the techniques human subjects employ and then incorporate them into a program. This type of research is generally called Cognitive Simulation.

	Cognitive Simulation (Gestalt) in artificial intelligence research was marked
by early success. Unfortunately, the successes diminished quite rapidly until
researchers became disenchanted with this approach.

	Feedback in the Learning Process. Learning is necessarily a goal-seeking process, and feedback is inherent in it. In practice, feedback is the process of regulating a procedure or system by returning information gained from its outputs to its inputs (Fig. 9). In order for a system to obtain feedback information, it must be able to develop associative patterns from which it can determine how to use that information (Fig. 10). In other words, a system must be told something for it to learn something.
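The output-to-input loop described above can be sketched as a few lines of error correction. The goal, learning rate, and step count below are illustrative assumptions, not anything from the article.

```python
# A minimal feedback loop: the error in the system's output is returned
# to adjust its internal state, driving the state toward the goal.
target = 10.0     # the goal the system seeks (assumed)
estimate = 0.0    # internal state -- the thing being learned
rate = 0.5        # how strongly the feedback corrects the state (assumed)

for step in range(20):
    output = estimate            # the system produces an output
    error = target - output      # feedback: compare output to the goal
    estimate += rate * error     # return the error to the input side

print(round(estimate, 4))        # the state has converged on 10.0
```

Each pass shrinks the remaining error by the learning rate, so the estimate closes in on the target geometrically; without the feedback line, the state would never change at all.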
