2/21/2006

Stealing Light went sailing past the forty-five thousand word mark a couple of days ago, and continues to steam ahead; part of the reason for this is that the actual main narrative didn't kick in until the thirty-five thousand word mark, something I'm going to have to sort out in subsequent drafts by taking the earlier flashback material and distributing it in smaller chunks throughout the rest of the text. That way, I can have the story proper start much earlier than it currently does.

In the meantime, I've been reading and re-reading a couple of novels in the same vein as Stealing Light, mainly so I can be sure what I'm doing is as much my own thing as possible, without owing too much to Those Who Have Gone Before Me. How do I make what I do sufficiently distinct from the works of, say, Peter Hamilton, or Dan Simmons, or Alastair Reynolds (see what kind of company I like to keep)?

What is it I can do to make it a Gary Gibson novel, rather than an 'anyone else' novel?

One thing I've been thinking about for a while is artificial intelligence; as in, the nature of.

A lot of writers like to play around with the whole idea of mind transference - from body to body, from body to machine, from machine to body, and so on. Unfortunately, as much fun as this kind of thing really can be, in some ways it's starting to get a little old - and when it gets old, it's time to start thinking of new spins on the idea.

Problems: nobody can tell you what the human mind is. We have a pretty good idea of the mechanics of the brain, of certain fundamental ways in which it appears to process information, and of the ways particular abilities and information storage appear to be located in specific regions of the cerebrum. But nobody out there with a hankering to maintain a serious scientific career is going to stand up in front of an international audience just yet and tell you exactly what a mind is. Which is strange, since for the past couple of decades science fiction has been treating the human mind as little more than a datastream that can be converted back and forth between digital and non-digital forms.

Again: there's nothing wrong with this, if it facilitates the narrative drive. If it makes for a better story, great. I'm just thinking about ways in which it might be different. Our genuine knowledge about the actual nature of what makes a living being living is effectively zero. There may be plenty of good guesses out there, but when it comes to quantifiable evidence the jury is still very much out. Put simply, when we're dealing with the mind, we don't really know what we're dealing with.

One thing I often like to say is that the conscious mind is like a horseback rider. The conscious human mind as we understand it is somewhere between, say, half a million and a million years old. It rides piggyback on the unconscious mind - the instinctive, animal mind, if you will - itself a machine close on a billion years old.

A billion years old: as old as life itself. This is important. Our fundamental animal nature is an exquisitely honed machine, the descendant of a billion years of evolutionary ancestry - and to survive for a billion years, you have to be very very good at what you do. One of the ways this ability to survive expresses itself is in what we call 'instinct' - a non-verbal interpretation of the world around us. The feeling you get when a person looks at you a particular way. The sense of danger when you walk down an unfamiliar street with inadequate lighting. The good feeling when someone you've never met walks into a room - or the bad feeling, or the uneasy feeling, or the sense of desire.

This is why I'm a big believer in trusting your instincts: your instinctual senses have been around, in one form or another, for a very long time - certainly far longer than the human race itself. Like the conscious mind itself, they may be contained solely within the brain, or distributed in some form throughout the entire human body, or they may exist in some form we simply don't yet understand. This is not in any way to suggest the intellect doesn't matter - of course not. There are so many other matters that could be discussed here - people's worst instincts, the drive to separate each other into 'them and us' that leads to most of our problems as a species - but it's not my concern to discuss them today.

Right now, I'm concerned with writing a science fiction novel which may or may not include mind transference of the aforementioned varieties.

If the human mind is intrinsically allied to the blood and meat that carries it, what happens when you remove it from that context and free it from the finely honed nervous system that supports and informs it? If you free a mind from the constraints of genetic imperatives, does it then lose the will to live? Or to reproduce? Or to conquer its neighbours? Given that so much of our science fiction, at least in televisual/cinematic form, is dedicated to the frankly bizarre notion that machines or computers might be somehow sufficiently threatened by us as to wish to destroy us rather than simply ignore us ('Don't you see? They hate us because we're human!', etc., etc.), wouldn't it be interesting to consider the nature of a mind shifted so violently from its natural context into a distinctly unnatural one?

Many of our ideas concerning the nature of alien intelligence - amongst which I include artificial intelligence - are informed by the ape/tribal politics that continue to shape both our daily lives and the decisions made by governments and corporations. One thing that has crossed my mind is to consider the modern international corporation as quite literally an artificial organism, with its workers, managers, CEOs and foot soldiers in suits as its individual cells, which can either join or leave the main organism. I find this an interesting analogy because it suggests to me that any machine intelligence of the kind that usually turns up in science fiction novels might well, by its very nature, behave in ways entirely incomprehensible to us.

If the human mind really is so simple that it can be reduced to digitised information, then doesn't it make sense to think of, say, Shell or any other giant company in the same way, as something complete with its own desires, motivations, and will to live and survive? And if it's possible to accept this notion, is there any reason to think such an entity's motivations would make any sense to us whatsoever?

Since so many of our desires, emotions and drives stem more or less directly from genetic imperatives for food, shelter, survival (including the conquest of or subjugation by other tribes) and reproduction, what happens when you 'free' the mind from those constraints?

Creature of pure intellect, emotional/autistic vacuum, or digitised idiot?

As a result of these thoughts, it strikes me as more interesting to think of digitised/uploaded human minds as a brand new species - not necessarily superior, just different, and quite incomprehensibly so. Otherwise, all you have is a voice in a box.

Perhaps it's time, as it were, to think outside the box.

1 comment:

Anonymous said...

Hi Gary
An interesting aspect is Gödel's work on 'Incompleteness'. Gödel has been getting a bit of a revival lately, as a philosopher of time but also on AI.
Most of the arguments seem to go like this:
Gödel proved that any consistent formal system that is capable of generating and proving true statements of arithmetic is also capable of generating true statements that a HUMAN can intuitively recognise as true but which CANNOT be proved true by the formal system. THEREFORE any program that can generate logical truths/concepts will be able to generate new truths/concepts but will NOT be able to recognise them as true itself. There is a brief discussion in 'A World Without Time: The Forgotten Legacy of Gödel and Einstein' by Palle Yourgrau and an extensive discussion in Penrose's 'The Emperor's New Mind'.
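To pin the first step down a little - this is a rough modern restatement rather than Gödel's own wording, and F and G_F are just my labels for the formal system and its Gödel sentence: for any consistent, effectively axiomatised formal system F strong enough to express basic arithmetic, there is a sentence G_F which in effect says 'G_F is not provable in F', and for which
\[ F \nvdash G_F \quad \text{even though } G_F \text{ is true of the natural numbers.} \]
The consistency of F guarantees that G_F really isn't provable, which is exactly what G_F claims - so G_F is true, but F can't prove it. Strictly speaking, the 'a human can intuitively recognise it as true' step is Penrose's gloss rather than part of the theorem itself, but that's the shape of the argument.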
I think it's an interesting idea that you can have the best AI imaginable, but the program unavoidably cannot recognise new logical truths that aren't already part of its original program. It needs a human brain for this.
I don't know if this idea has been used in SF. (I'm planning to try it in a story.)