The presentation hosted by Leadership Jersey at the offices of Prosperity 24/7 was fascinating. Eve Poole spoke about "Robot Souls," the title of her new book, which explores programming humanity into artificial intelligence.
I won't attempt to summarize her talk, which was truly excellent: witty, informative, and engaging, and more than I could capture in a written article. I would, however, strongly recommend the book. What follows are some reflections from the evening, several of which are questions I would have liked to ask had there been time.
The audience was packed and fully engaged, with lots of questions, and many people lingered long into the evening after the presentation had ended. I instead took the time to reflect, and these are the thoughts I came away with.
My personal view is that AI is an accelerant: it helps you reach a point much more quickly. If it is a good point, it will get you there faster; if it is a bad point, it will have the same effect. So if you are pointing towards doom, AI will accelerate you there, but it can equally push you much more rapidly towards success.
The challenge, however, is that we tend to think about success and solutions, and rarely about downsides and consequences. For every single success there are many consequences. If we look at the success of the global economy, many people, if not most, have prospered, but there have been real consequences for both society and the planet. One of the difficulties is that the successes arrive quickly and the consequences arrive slowly, effectively allowing companies to privatize the gains and socialize the losses: oil companies make huge sums of money, and generations have to deal with the results of climate change.
I agree with Eve Poole that AI will probably deliver huge successes in healthcare and may well tackle some of the issues of climate change. But the problem with complex adaptive systems is that for everything you adjust there are first-, second-, and third-order consequences, many of which are genuinely unpredictable.
As regards AI fixing climate change, the interesting challenge is that most of the gains from AI will be privatized, and the real incentive is to maximize money rather than to resolve the problems of future generations. The solutions AI comes up with will serve short-term profit maximization, not long-term planetary sustainability, because who, realistically, is going to invest billions of dollars to create a fantastic planet in 2,000 years? Humanity is short-term and selfish, and indeed it is this that has made us a successful species in the long term. To be fair, the species will survive, but it will be very different, and it will evolve very differently from where we are today, likely on a planet with a very different climate.
Another thing to consider is that AI is just another tool, and our tools have offered greater and greater leverage. Many thousands of years ago, if we had an argument we might throw sticks at each other, then rocks, then perhaps set up a catapult. Eventually we invented the gun, then the rocket, then the missile, and now nuclear weapons. These advances took thousands of years, rather like a child progressing from a scooter to a bicycle, a moped, and perhaps one day a motorcycle. But we now live in a world where the next generation may be learning to drive something far faster than a Formula One car, and I wonder how well we will be able to maneuver and control it, or whether there will be countless crashes.
I am also conscious of the Jevons paradox: when we become more efficient at something, we do more of it rather than less. When we have automation to help us with emails, we send more emails. When we have technology to help us extract more oil, suddenly we extract more oil. We never actually do more with less; we simply become better at extraction, and the better we become at extraction, the more we extract. This has devastating consequences for nature and the climate, but it also has consequences for humanity. The more hours we can work, the more we do work. The more adverts we can see, the more adverts we do see. We are caught in a cycle of productivity that demands ever more of us simply because now we can.
I wonder what the result of AI will be if it lets us do more: will people actually be required to do more? The myth of the AI economy, in which we are all wealthy and only have to work a few hours a week, never materialized for those of us who grew up watching the TV programme "Tomorrow's World." It is true that there may be mass underemployment as a result of AI, simply because many cognitive jobs can be done faster, cheaper, and better by technology, and we can dispense with many routine operational tasks. The idea that we will suddenly need millions of programmers to do that work is also a naïve myth.
Humans are effectively defined by their relationships with other people: by how people respond to us, whether as nurturing caregivers at birth, friends, colleagues, or members of a community or faith. I wonder how we will redefine ourselves when we spend more time with an algorithm than we do with a human. Indeed, there is a strong argument that for some this has already happened. We spend more time on social media being pumped full of fake-news memes and fabricated entertainment, to the extent that I suspect many now feel socially anxious and potentially depressed because they are living a disembodied experience, tethered to a technology platform that feeds them dopamine hits.
I really wonder what it will mean to be human when AI is providing that dopamine hit to entertain us, because many of us will be underemployed, if not necessarily unemployed.
As an extension of the idea that humans are defined by their relationships with other people, the increasing importance and attention we give to artificial intelligence effectively undermines lived experience. We no longer have the experience of learning; we simply outsource it. We may no longer have many of our empathic, emotional experiences, because AI lets us avoid, say, the anxiety of not having done our homework, replacing the human need for connection with dopamine alternatives provided by algorithms. So I do think we may lose some of our thinking, feeling, and being faculties as a result of AI.
There is no conclusion here; these are simply thoughts provoked by Eve Poole's fantastic presentation. I strongly recommend her book, and I want to thank Leadership Jersey and Prosperity 24/7 for hosting the event, and of course Eve Poole, for making me think my own thoughts, unassisted by AI.