[This was originally written as a response to People Plus: Is Transhumanism the Next Stage In Our Evolution? on The Conversation.]
My biggest problem with transhumanism as a movement is that it seems to be largely about the extremes of techno-neepery on the part of a lot of IT and engineering nerds. Now, that would be fine, except IT and engineering nerdery share a couple of rather crucial traits which make them, and the resulting transhumanist movement, somewhat dodgy.
Firstly, there’s the very real tendency on the part of IT and engineering types to assume their expert knowledge of their own field makes them experts in every other field. The clearest example of where this can go wrong is visible in the “AI Singularity” types – because they know a lot about IT or engineering, they presume every other field is just like their own, and therefore the whole problem of Artificial Intelligence is easy enough to solve: we just have to make computers small enough, or fast enough, or create interesting enough algorithms, or teach them how to learn. The trouble is that even a cursory knowledge of psychology shows up the biggest problem with AI: they’re trying to virtualise a virtualisation – trying to create an artificial version of something we haven’t even defined properly in the natural version. The best definition of “intelligence” the psychological and neurological sciences can come up with is “intelligence is the thing measured by intelligence tests”. (Which is, incidentally, why humans are so lousy at recognising it when we’re faced with it, even in other humans; see colonialism, feminism, racism, ableism and similar for examples.)
The “mind upload” people are another example of this kind of thinking pushed too hard. Basically, the uploaders assume that because we know everything there is to know about the way computers are built – how things work all the way from digital logic, through micro-instructions and instruction sets, up through assembly, operating systems and programming languages – we can easily transfer this knowledge over to the human brain. But here’s the biggest problem: we don’t have this information for the human brain. We don’t know enough about the wetware.
What we know about the brain, and about how neural impulses become what we experience as thought and sensation, is minimal. We know how neurons work in isolation. We have guesses about how altering the neurochemistry alters the thought processes (but these are very much guesses – there’s no such thing, for example, as a blood test or cerebrospinal fluid test for depression or schizophrenia). But aside from that? We know the neurons fire, and we know certain areas of the brain appear to be associated with neural activity for certain stimuli, and then there’s this big blank… (wherein a miracle occurs) and then somehow, thoughts appear and are made manifest as behaviours. There’s a lot of work that needs to be done on step two.
To put it bluntly, as far as the cognitive sciences are concerned, we have a map of the way the mind works which looks a lot like the business plan of the underpants gnomes from South Park.
Which leads to the second big problem with the IT and engineering types who tend to make up the majority of transhumanists – and to be fair to them, it’s a human problem more than an IT/engineering-specific one. This is the tendency to believe that if they don’t know something about a subject, then what they don’t know can’t be relevant, difficult, or important. Given this is something IT and engineering types bitch about constantly in relation to management, it’s somewhat ironic to see them displaying the same tendency in great numbers in the transhumanist movement; but feelings of schadenfreude aside, it’s a big blind spot to have.
That blind spot is at its worst when it comes to questions of ethics – the “why” of even their own disciplines, rather than the “what” and the “how”. I’m studying IT myself, and one of the things which worries me is that there doesn’t seem to be much consideration of the ethics of what’s being done (admittedly, I’m only at first-year level, and there may be more of this later on down the track) – of whether the ability to do something is the same as the question of whether it should be done at all. Instead, there are merely considerations of what can and can’t be done, as though these were the same things.
The whole area of transhumanism is fraught with ethical rabbit holes and landmines, and yet the main concern of most transhumanists appears to be “can we get the shiny tech to work properly sooner?”, without a thought for the consequences. Now, on the one hand, I agree that if all inventors had been required to sit and consider the full consequences of their work before they created it, nothing would ever have materialised and we’d still be grubbing for fruit and roots in the rainforests of Africa like our bonobo cousins. However, there’s a happy medium to be reached between infinite contemplation of the possible consequences and the “build it and see what happens” mindset which is so encouraged in IT and engineering.
We’re seeing the consequences of “build it and see what happens” with the internet right now – people are taking this lovely technology with so many possible positive uses, and using it for things like passing around upskirt photographs of young women on public transport, or destroying the infrastructure and economy of an ideologically opposed nation, or hounding someone to the point where they kill themselves, or simply building databases of information about people until personal privacy becomes a distant memory. The knock-on effects of the internet are still being discovered, the ethical ramifications of this greater degree of connectedness are still being explored, and the pitfalls are being fallen into as we discover them. (Incidentally, there are also great ethical questions like “should we breach the anonymity of persons doing anti-social things on the internet?” being asked without the counter-question of “why is the privacy of the offender apparently more highly valued than the privacy of the persons offended against?” being put.)
To put it simply, my biggest worry when it comes to transhumanism is the way a lot of transhumanists are rushing forward with their eyes firmly fixed on some distant horizon, and not paying anywhere near enough attention to the footing of the rather complex moral ground they’re covering.
There’s a fairly simple explanation for the domination of IT-types in transhumanism. We’ve seen extraordinary exponential progress over several decades. No other field I know of comes close.
The first computer I used regularly had 64KB of RAM. The one I’m typing this message on has precisely a factor of one million more (64GB). This happened in 25 years.
By extrapolating, I can guesstimate that in another 25 years, I’ll have ready access to computers with RAM measured in petabytes.
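To put rough numbers on that extrapolation, here’s a quick back-of-the-envelope sketch in Python. The 64KB and 64GB figures are the ones above; the constant exponential growth rate is an assumption (it’s the whole premise), not a law:

```python
import math

# Assumed figures from the comment above: 64 KB then, 64 GB now, 25 years apart.
start_bytes = 64 * 10**3
end_bytes = 64 * 10**9
years = 25

growth = end_bytes / start_bytes              # a factor of one million
doublings = math.log2(growth)                 # ~19.9 doublings in 25 years
months_per_doubling = years * 12 / doublings  # ~15 months per doubling

# Assume the same rate holds for another 25 years (the big "if").
projected_bytes = end_bytes * growth
print(f"RAM doubled roughly every {months_per_doubling:.0f} months")
print(f"Projected RAM in another {years} years: {projected_bytes / 10**15:.0f} PB")
```

Which lands at around 64 petabytes – hence “RAM measured in petabytes”.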
It’s pretty reasonable to spend some time thinking about just what we’ll use those computers for.
What you say – that we don’t know how the brain works – is true. But it’s also true that algorithms and hardware always get better over time, and it’s reasonable to assume that a million times more powerful hardware, combined with better algorithms, will be able to do more than today’s computers.
Strong AI? No idea. Things that are closer to strong AI than what we’ve got today? Certainly!
Ignoring the culture in the H+ movement for a second, I have to admit that the whole business of artificial intelligence and mind uploading feels too distant for me to be bothered with. I don’t share the optimistic time frames that most H+ people seem to think are possible, and I’m not sure that AI is the panacea it’s talked up to be.
I’m more excited about the possibility of mechanical augmentation. To start off with, it would just be a matter of replacing limbs lost to accident, but the end-game of augmentation would allow a human to have direct control over, say, a bomb disposal drone. Or a vehicle – complete with an expanded kinesthetic sense, so that controlling it felt as natural as walking does to an able-bodied person. I feel augmentation is a vastly overlooked area of H+ that’s a lot closer and more exciting than most of the rest of it. I mean, we’ve already got bionic ears, and we’re starting to be able to give limited bionic sight to some otherwise blind people.
Back to the culture aspect; I agree with you, Megpie, that most H+ people are blinding themselves to the potential harms. I think that this will change over the next 10 years, as more women and PsOC get into STEM fields and forcibly remove the ‘other’ label by way of being a consistent presence and reminder that these ‘others’ are people too.
I think that the next decade will also produce vast changes in education for autistics, like myself, so that arseholes don’t get to pretend they’re incapable instead of unwilling. Especially since most autistics aren’t incapable of considering such concepts as othering. And as we are more widely understood, fewer opportunities will arise for arseholes to use us as cover. Of course, they’ll try another front. But I think they’re running out…
Well, yes, that would be the end game for some people, and for others just a side effect of building a battlemech that can plant bombs.
YetAnotherMatt: There are downsides to all technologies. That particular downside is easily predictable and, depending on how you look at it, not particularly down. It’s the same kind of situation the US Air Force is raising with UAV aircraft: on the one hand, you have the capability of harm to civilians – much like piloted fighter and bomber craft. On the other hand, UAVs mean no more pilot deaths. Battle mechs potentially harming civilians? Yes, but only in the same way that soldiers already were. Battle mechs saving soldiers’ lives? I count that as a win. The problem comes not from the technology, but from the misuse of power – you know, one of the major reasons HaT exists, albeit in the form of misused male privilege rather than military power.
In short, I think the downsides can be overcome by working on humans rather than preventing technology. Otherwise, I wouldn’t be a feminist.
My background is maths/stats and evolutionary biology, so I tend to assume stuff I know nothing about will be more complicated than it first appears, and that it’s really important to pay attention to the bits you don’t know about at all times because that’s what is going to bite you on the bum.
So transhumanism either confuses or annoys me. Because by one set of definitions, we’ve been “transhuman” as long as we’ve been human, possibly longer (there are two species that might have self-domesticated: cats and humans). The other set of definitions seems to be based on ideas about biology and evolution that aren’t worthy of being printed on the back of cereal boxes.
Overall, I’m in total agreement with Megpie, but from a different angle, I think.
Well, everyone knows that B.E. stands for Bachelor of Everything 🙂 And there are certainly a lot of examples out there who believe that’s true. But I don’t know if that stereotype is specific to IT/engineers or just to generally smart people (doctors, lawyers, etc.), because you see it a lot in people from other fields too. Though perhaps the number of people in IT/engineering with poor social skills is larger, so they come across worse.
I did both computer science and engineering. In computer science there weren’t any courses (elective or not) that covered ethics. It simply wasn’t on the map. The engineering course did have some coverage, though mostly related to the compulsory professional accreditation that you need to act as an engineer.
I think there’s a general attitude in computer science that what people build are merely tools – for example, you can use advances in supercomputing to cure cancer or to test nuclear weapons, and it’s pretty much impossible to advance one without the other. Another example: the networking technology that can protect your privacy and data from being stolen is exactly the same technology that can be used by oppressive governments to control their citizens.
From what I’ve read about the AI singularity (which admittedly is not a whole lot), it appears that a popular belief is that it’s going to happen accidentally: someone will be building an extremely complicated system, and someone will realise it’s actually sentient. If that’s true, it’s going to be pretty hard to put a stop to it even if we wanted to.
That’s true, but I don’t think this is something that can be regulated or controlled as you might find with, say, medical research.
We’re busy wiping out other species and destroying habitat and stuffing the environment life depends on, and these clowns are wrapped up in this sort of stuff …
Fortunately, there is a whole discipline of people who worry about the ethics of this sort of stuff for you! Donna Haraway is perhaps one of the most famous ethicists who works on this, but this is a big field. There is also a branch called ‘cyberfeminism’, although this is now bigger than just AI and cyborgs and can include scholars working on less biologically-intensive technologies, like the internet.
One of the things that strikes me about this is that it isn’t just a question of whether it’s right or wrong or will cause problems: it also offers some fundamental challenges to some disability rights positions, which argue that being differently bodied does not mean a need or desire to be ‘fixed’.
There is also a whole secular philosophy around the human ‘soul’, to apply a religious term to a complex concept. There are whole branches of philosophy devoted to this idea, whose scholars might wonder whether it can be replicated, and indeed whether human cyborgs might become less rather than more, by being less than ‘fully’ human.