Notes for my tutorial – any feedback is
always most welcome.
I recall those chilling childhood memories of out-of-control, crudely constructed robotic figures, made of tin, taking over the world in the movies.
As I recall they were serials which left you in suspense as to how those trapped in burning buildings would escape unharmed, as inevitably happened in the next episode. Today I can imagine a future play involving far more humanistic robotic faces, in which a man says to his robot, 'What think you of this matter in your consciousness, dear Robot, my faithful companion?', to which it replies, 'The sum of my conscious perceptions is yours to command.' But it finally ends in tragedy, as my conclusion will show.
Consciousness is defined as the state of being aware of and responsive to one's surroundings, or of being aware of something.
Awareness varies enormously; from purely instinctive responses by non-thinking insects to the advanced responses of the upper echelons of the animal kingdom and ourselves.
This paper will discuss artificial intelligence, its risks and the madness of attempting to build a conscious robot not too dissimilar to us. Along the way I will consider non-material theories of consciousness.
However, consciousness and intelligence are not the same thing: intelligence involves learning and being able to make judgments or form opinions based on reasoning.
Hubert Dreyfus discusses the history of artificial intelligence, which he contends in its infancy wisely developed only physically based systems using symbols and algorithms.
Think of stick-like inventions, analogous to non-thinking insects, in the form of robotic vacuum cleaners. They were able to guide themselves around autonomously and avoid bumping into things via inbuilt cameras and sensory devices.
Even so, some modern-day versions still get stuck on different surfaces, others work as if they were wearing a blindfold, while a few manage to avoid bumping into tables and chairs.
More successful applications are in the automotive and other manufacturing industries.
Importantly, they are physically developed systems with no learning capability.
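The control loop of such a non-learning robot can be sketched in a few lines. This is a hypothetical illustration, not the code of any real vacuum cleaner: each step maps raw sensor readings directly to a fixed action, with no stored model of the world and nothing learned over time.

```python
# Purely reactive control: sensor readings in, fixed action out.
# No memory, no world-model, no learning - which is exactly the
# limitation described above.

def choose_action(front_distance_cm: float, bump_left: bool, bump_right: bool) -> str:
    """Map raw sensor readings to a hard-wired action."""
    if bump_left:
        return "reverse-and-turn-right"
    if bump_right:
        return "reverse-and-turn-left"
    if front_distance_cm < 10.0:  # obstacle close ahead
        return "turn-right"
    return "forward"

# Example readings and the actions they trigger:
print(choose_action(100.0, False, False))  # forward
print(choose_action(5.0, False, False))    # turn-right
print(choose_action(50.0, True, False))    # reverse-and-turn-right
```

However sophisticated the sensors, the mapping itself never changes, which is why such a machine can still end up stuck on a rug it has bumped into a hundred times before.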
Intelligent systems able to learn
Dreyfus summarises his ideas on how we learn as human beings and why he thinks this cannot easily be replicated in any so-called advanced intelligent computer system capable of machine learning.
We experience the world through our existence, and we store information about that experience. That enables us to see changes in the light of continually gained knowledge. Hence how the world looks to us depends on our inner perceptions, continually updated as we gain more knowledge or information. It may well be more complicated than that, but even so, one recognises immediately the difficulty a machine is going to have. How is the machine to continually look at itself and update that perception as new knowledge is gained, so as to observe the difference from what the world previously looked like?
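The updating problem just described can be made concrete with a toy illustration (my own, not any real system): a machine would need to revise its stored picture of the world as each new observation arrives, and notice what changed against what it previously believed.

```python
# Toy world-model update: merge each new observation into the stored
# picture of the world and report what changed. The hard philosophical
# question - how the machine would *experience* that difference - is
# exactly what this mechanical bookkeeping leaves untouched.

def update_model(model: dict, observation: dict) -> dict:
    """Merge an observation into the model; return {key: (old, new)} for changes."""
    changes = {k: (model.get(k), v) for k, v in observation.items()
               if model.get(k) != v}
    model.update(observation)
    return changes

world = {"door": "closed", "light": "off"}
print(update_model(world, {"door": "open"}))  # {'door': ('closed', 'open')}
print(world)                                  # {'door': 'open', 'light': 'off'}
```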
COG and other impossible quests.
Undeterred, computer scientists continue in the quest to introduce a form of advanced machine learning, which has not been successful. An early example was a robot called COG, which was going to capture essential human-style representations, but unsurprisingly such a fanciful idea ended in abject failure.
Even so, computer scientists still hold out high hopes that in the future one might be able to construct such a robot, like ourselves. That envisages a closed deterministic system, assembled from bits and pieces of atoms, capable of reactions based on optimal pre-programmed perceptions and complete with an invented language: a system with some sort of advanced consciousness comparable to human beings.
But let us consider what we know about the human brain and the baffling question of how consciousness arises.
Notwithstanding the enormous amount of information known about our human brains, we cannot say precisely how that consciousness comes about. In other words, we can explain the brain's output, understand where our consciousness principally arises, and figure out where it is that we discern what the world looks like to us, but not how precisely that consciousness came into being in the first place.
But the enthusiastic trailblazers now argue that the answers will be provided by ever more powerful machines, such as quantum computers, able to exceed the computational capacity of our brains many times over.
But as our present brain capacity already works to provide us with a highly developed conscious experience and the ability to learn, why would one think a flawed system with merely more capacity and speed would be capable of the same? There must be something else inherent in consciousness, but we cannot precisely put our finger on it and say how it arises or even where it comes from.
That is a question whose answer seems to me to
stretch back in time and is inextricably tied to our existence and possibly predates
the formation of life itself.
Non-materialistic ideas on consciousness.
Hence, given these failures in physically based systems, there has been a resurgence of interest in non-materialistic ideas on consciousness.
The author Steve Taylor, whose article appeared recently in 'Philosophy Now', argues that from its inception mind was always in matter. Hence all living things either have a mind or some sort of mind-like quality. Bear in mind that the terms "all things" and "mind" here refer to anything having in itself a reaction to phenomena (an inner reaction), as opposed to one injected or sustained from outside.
As you may have gathered already, this is not an entirely new idea, as a fundamental consciousness already present prior to the formation of the universe is included in many ancient world views. The distinction Taylor makes lies in his idea of channelling that more basic form of consciousness, combined with phenomena, which is expressed in our advanced human form of consciousness.
Hence Taylor's theory suggests we can think simply in terms of an early, fundamental consciousness, mind and matter. From an evolutionary perspective, in the beginning there was fundamental consciousness, followed by simple life forms with mind within matter, evolving with increasing complexity.
However, in summing up, there exists a veritable smorgasbord of different ideas about it. Hume's proposition is that all there can be is perceptions, similar to Russell's different perspectives. Russell contends that how things appear to us will depend on the relationships between physical things and mental pictures. He likens it to using a telephone book, where you view the information either by an alphabetical index of names or by location. It is the same thing seen from different perspectives.
At this point, suffice it to say we just don't know precisely how it originally arose or precisely how it translates into our psyche.
Risks in artificial intelligence
Already we have some evidence that reliance on artificial intelligence impedes, over time, our own brain's amazing array of conceptual abilities. That is, as we rely more and more on the machine to do our thinking for us, will our own ability be thwarted by the machine's convenience in doing all the hard work for us?
Currently there exists evidence that adolescents whose lifestyle principally revolves around an interpersonal life and external-world experiences will rate higher in conceptual ability than those mostly in tune with just the digital world.
Any inroads made into far more complex systems must be deterministic and based upon the material inputs of their designers. Inevitably that must include some sort of belief system, and it is hard to see how machine learning can be factored into the machine in the way learning applies to ourselves. Hence there is a growing realisation that any advanced intelligence system must have inbuilt ethical checkpoints and allow human oversight if we are to avoid the whole project getting completely out of control.
Conclusion
At this point I want to return to the final scene
in my play, the moment of tragedy.
The human being realises his humanoid-like robotic companion has taken away his ability to think of the things he used to contemplate. His neuronal highways have decayed through lack of use and cannot be repaired or replaced. So, though still a young man, he is in a state of deep despair and complains to the robot. The robot's sensors overreact, identifying a dangerous terrorist or alien, and the human is tragically killed.
The curtain closes as the evening news reports that a lawsuit has been filed against the leading global manufacturer of robot companions, alleged to have fitted defective, over-sensitive sensors.