[This was originally written as a response to People Plus: Is Transhumanism the Next Stage In Our Evolution? on The Conversation.]
My biggest problem with transhumanism as a movement is that it seems to be largely about the extremes of techno-neepery on the part of a lot of IT and engineering nerds. Now, that would be fine, except IT and engineering nerdery share a couple of rather crucial traits which make them, and the resulting transhumanist movement, somewhat dodgy.
Firstly, there’s the very real tendency on the part of IT and engineering types to assume that expert knowledge in their own field makes them experts in every other field. The clearest example of where this goes wrong is the “AI Singularity” crowd: because they know a lot about IT or engineering, they presume every other field works just like their own, and therefore the whole problem of Artificial Intelligence is easy enough to solve; we just have to make computers small enough, or fast enough, or create interesting enough algorithms, or teach them how to learn. The problem is that even a cursory knowledge of psychology reveals the biggest obstacle for AI: they’re trying to virtualise a virtualisation, to create an artificial version of something we haven’t even defined properly in the natural version. The best definition of “intelligence” the psychological and neurological sciences have come up with is “intelligence is the thing measured by intelligence tests”. (Which is, incidentally, why humans are so lousy at recognising it when we’re faced with it, even in other humans; see colonialism, sexism, racism, ableism and similar for examples.)
The “mind upload” people are another example of this kind of thinking pushed too far. Basically, the uploaders assume that because we know everything there is to know about the way computers are built, from the digital logic through the micro-instructions and instruction sets, up through assembly language and operating systems to the programming languages, we can easily transfer this knowledge over to the human brain. But here’s the biggest problem: we don’t have this information for the human brain. We don’t know enough about the wetware.
What we know about the brain, and about how neural impulses become what we experience as thought and sensation, is minimal. We know how neurons work in isolation. We have guesses about how altering the neurochemistry alters the thought processes (but these are very much guesses; there’s no such thing, for example, as a blood test or cerebro-spinal fluid test to check for depression or schizophrenia). But aside from that? We know the neurons fire, and we know certain areas of the brain appear to show neural activity in response to certain stimuli, and then there’s this big blank… (wherein a miracle occurs) …and then somehow, thoughts appear and are made manifest as behaviours. There’s a lot of work that needs to be done on step two.
To put it bluntly, as far as the cognitive sciences are concerned, we have a map of the way the mind works which looks a lot like the business plan of the underpants gnomes from South Park.
Which leads to the second big problem with the IT and engineering types who make up the majority of transhumanists (and, to be fair to them, it’s a human problem more than an IT/engineering-specific one). This is the tendency to believe that if they don’t know something about a subject, then whatever they don’t know can’t be relevant, difficult, or important. Given that this is something IT and engineering types constantly bitch about in relation to management, it’s somewhat ironic to see them displaying the same tendency in such great numbers within the transhumanist movement; but feelings of schadenfreude aside, it’s a big blind spot to have.
It leads to a huge blind spot when it comes to questions of ethics: the “why” of even their own disciplines, rather than just the “what” and the “how”. I’m studying IT myself, and one of the things which worries me about what I’m studying is that there doesn’t seem to be much consideration of the ethics of what’s being done (admittedly, I’m only at first-year level, and there may be more consideration of this later on down the track): of whether the ability to do something is the same as the question of whether it should be done at all. Instead, there are merely considerations of what can and can’t be done, as though these were the same thing.
The whole area of transhumanism is fraught with ethical rabbit holes and landmines, and yet the main concern of most transhumanists appears to be “can we get the shiny tech to work properly sooner?”, without a thought for the consequences. Now, on the one hand, I agree that if all inventors had been required to sit and consider the full consequences of their work before they created it, nothing would ever have materialised and we’d still be grubbing for fruit and roots in the rainforests of Africa like our bonobo cousins. However, there’s a happy medium to be reached between infinite contemplation of the possible consequences and the “build it and see what happens” mindset which is so encouraged in IT and engineering.
We’re seeing the consequences of “build it and see what happens” with the internet right now: people are taking this lovely technology, with its many possible positive uses, and using it to pass around upskirt photographs of young women on public transport, or to destroy the infrastructure and economy of an ideologically opposed nation, or to hound someone to the point where they kill themselves, or simply to build databases of information about people until personal privacy becomes a distant memory. The knock-on effects of the internet are still being discovered, the ethical ramifications of this greater degree of connectedness are still being explored, and the pitfalls are being fallen into as we discover them. (Incidentally, great ethical questions like “should we breach the anonymity of people doing anti-social things on the internet?” are being asked without the counter-question of “why is the privacy of the offender apparently valued more highly than the privacy of the people offended against?” ever being put.)
To put it simply, my biggest worry when it comes to transhumanism is the way a lot of transhumanists are rushing forward with their eyes firmly fixed on some distant horizon, paying nowhere near enough attention to their footing on the rather complex moral ground they’re covering.