What Ray Kurzweil and geoengineering have in common

August 17, 2010

John Rennie points me to the most eye-rollingest article of the week, in which we’re told that brilliant inventor-turned-futurist Ray Kurzweil claims we are perhaps two decades from “reverse-engineering the human brain so we can simulate it using computers.”

I say eye-rollingest. In fact, I had a more visceral reaction to this non-story.

If you haven’t heard of Kurzweil, let me give you some context: He pops 200 vitamin pills a day. He goes to a longevity clinic once a week to be pumped full of untested life-extending drugs. Why? Because he believes mankind is destined to achieve immortality by uploading itself into ultra-powerful computers, and he doesn’t want to miss his shot. He has written a best-selling book arguing that computers will soon surpass human intelligence, an event he calls the “singularity.” He has also founded the Singularity University to advance that goal.

In short, he badly needs to accept his mortality, and people need to stop listening to him.

But instead he gets press for making incorrect statements like this…:

The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.

About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.

…and for coughing up techno-Utopian wank material like this:

[Reverse-engineering the brain] would be the first step toward creating machines that are more powerful than the human brain. These supercomputers could be networked into a cloud computing architecture to amplify their processing capabilities. Meanwhile, algorithms that power them could get more intelligent. Together these could create the ultimate machine that can help us handle the challenges of the future, says Kurzweil.
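
(For the curious, here’s how the genome arithmetic in that first quote hangs together. This is just a sketch of Kurzweil’s own back-of-the-envelope numbers, not an endorsement; the two-bits-per-base-pair encoding and the 25-bytes-per-line figure are assumptions I’ve supplied to make his totals meet up.)

    # Kurzweil's back-of-the-envelope genome-to-brain arithmetic, as quoted above.
    # The bits-per-base-pair and bytes-per-line figures are assumptions supplied
    # here so his numbers connect; they are not established facts.
    base_pairs = 3_000_000_000           # ~3 billion base pairs in the human genome
    bits = base_pairs * 2                # 2 bits per base pair (A, C, G, T) -> 6 billion bits
    raw_bytes = bits // 8                # 750 million bytes ("about 800 million" in the quote)
    compressed_bytes = 50_000_000        # his claimed size after lossless compression
    brain_bytes = compressed_bytes // 2  # he attributes roughly half the genome to the brain
    bytes_per_line = 25                  # assumed average line length needed to hit his figure
    lines_of_code = brain_bytes // bytes_per_line
    print(raw_bytes, brain_bytes, lines_of_code)  # 750000000 25000000 1000000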

The challenges of the future? Really? As far as I’m aware, there is one primary challenge of the future, and it is the looming global catastrophe known as climate change. Kurzweil’s singularity will never happen. It’s a distraction, a fantasy for people who take their science and technology a little too seriously.

Not coincidentally, climate change has its own version of Kurzweil’s singularity. Called geoengineering, it’s the vague concept that we can somehow pump substances into the atmosphere or the oceans to counteract global warming. Of course, a) we have absolutely no idea how to do such a thing, and b) even if we did, we’d still have to find a place to stash all the CO2 in the atmosphere, because otherwise as soon as our geoengineering fix wore off the warming would kick right back in where it left off.

In other words, it’s as incoherent as the vision offered by Kurzweil, and it taps into the same twisted belief that we can engineer our way out of the constraints of the real world.

Indian activist and eco-feminist Vandana Shiva strums my pain with her fingers on Democracy Now!:

[I]t is the idea of being able to engineer our lives on this very fragile and complex and interrelated and interconnected planet that’s created the mess we are in. It’s an engineering paradigm that created the fossil fuel age, that gave us climate change. And Einstein warned us and said you can’t solve problems with the same mindset that created them. Geoengineering is trying to solve the problems with the same old mindset of controlling nature.

Or, in Audre Lorde’s famous line, “The master’s tools will never dismantle the master’s house.”

And believe you me, Kurzweil and the rest of us are living in the master’s house, a house in which workers, women and minorities are too often treated as objects not fully real; in which objects are fetishized; and in which our ability to find solidarity and meaning in one another has been corrupted, leading some of us to cling to crazy ideas like the singularity.

Now, how do I convince Kurzweil of any of this?


12 Responses to “What Ray Kurzweil and geoengineering have in common”

  1. S.K.Graham Says:

    Hi. You can thank PZ for this hit. Since I’m here, I thought I’d comment. While I can sympathize, I can’t say I agree.

    Your Shiva quote… No. What got us here was the inevitability of human population growth. The only way to feed/shelter everyone was to advance technology. When resources become scarce, wars happen. The only way to avoid those wars is to advance technology to produce more resources, especially food. The technology had side effects (pollution) which couldn’t be helped in the past.

    The only way we could have avoided getting where we are now would have been to have had much, much larger and deadlier wars or exterminations in the past. No way to “go back”, unless you want to exterminate a bunch of people now.

    So what is left? We have to invent new tools to clean up the mess made by the tools we’ve been using up until now.

    I do have to agree that the modern world makes it damn hard to find any kind of meaning. We are psychologically best suited to grow up and live our lives among a small group of mostly extended family. People we know, trust, and live with every day from birth till death. Tribal life gave people all the meaning they needed, doing all they could to keep the tribe alive and healthy, especially the children. Corporations, schools, political parties and religions are poor substitutes for the “tribe”.

    But there is no going back. And tribal life may have been emotionally more satisfying, but it was physically more brutal. Many people live full lives today who would have died in childhood in more primitive times.

  2. Svlad Cjelli Says:

    “The master’s tools will never dismantle the master’s house.”

    The quote really isn’t a good illustration of your point, since A) houses can be dismantled by tools, regardless of ownership, and B) houses are frequently dismantled by tools, regardless of ownership.

  3. McDuff Says:

    I suppose I’m a “singularitarian”, in the words of that SciAm podcast, but I don’t buy Kurzweil’s timescales.

    Still, I wonder a little about the leap from “Kurzweil is off in his predictions” to “it will never happen, it’s all a fantasy!”

    Extend the timescales tenfold, call it 200 years. Hell, call it half a millennium. Assume, as I think is entirely valid, the idea that the human brain as currently constituted is in all likelihood incapable of comprehending itself.

    None of that indicates that the emergence of something like machine intelligence is an impossibility. Nor does it indicate that human beings themselves will never change due to a process somewhat akin to evolutionary development – although I hesitate to use the word since there would be at least some element of conscious design; it’s more a lack of vocabulary – as the boundaries around human “brain-based” consciousnesses get broken down and those minds become further integrated with the various technologies that surround them.

    Let’s take as an example the “Matrix” hypothetical world, where a human brain simply has its sensory inputs hijacked by something. There are, as I understand it, no theoretical barriers to that kind of thing. Further, it does not even require us to understand the brain in order to get started. A rudimentary understanding simply of the interface between brain and senses, something beyond our current technology but certainly nothing on the scale of replicating and mapping the whole thing, would be all that is required. Theoretically possible, certainly.

    Now, take that several generations down the line, and it’s not at all difficult to see how the boundaries would continue to get blurred, as experimental data from the wired-in subjects got integrated into the mass of knowledge and various other brain/mind functions got transferred across the boundaries. One wouldn’t have to hypothesise a complete understanding of how the brain forms memories to see how some aspects of memory formation could be passed on to a secondary machine. Similarly, the adaptability of the brain itself indicates that various existing processes could be hijacked for more complex two-way communication. Again, understanding them completely would be unnecessary, as long as we got the inputs and outputs mostly right.

    You can, of course, follow that rabbit hole all the way down.

    Now, this is a significant departure from the Kurzweil “do it in one fell swoop” hypothesis of modeling the human brain and uploading a data model into a computer. It’s also worth noticing that, should such a pathway get embarked upon, the mind at that stage would differ in a vast number of ways from what we’d recognise as an early-21st-century Homo sapiens. Indeed, if it ever did produce something capable of comprehending the nature of consciousness, it would almost by definition have to be something significantly removed from the current design.

    Also, of course, this is a speculative pathway on which many obstacles can be strewn, not least the possibility that the human race might carbonise itself into a state where it can’t access the technology required to take some of the steps over the next couple of centuries.

    But the theory of the slow-burn, gradual “singularity” emerging over time doesn’t seem to be at all inconsistent with our current understanding. Kurzweil is a grandstander who has a lot invested personally in the idea of his own immortality, and I think it’s more wishful thinking than good judgement when he talks about timescales. But to go from that to “this will never happen”, well, I think that’s falling foul of one of Clarke’s laws there. “Never” is an awfully long time for things to not happen in.

  4. JR Minkel Says:

    Hi. Thanks for commenting on my half-baked observations.

    @S.K. Graham: You make a reasonable point about population growth. We do have to come up with new tools, but we also have to find the moral clarity and the political will to implement those tools.

    @Svlad Cjelli: The master’s house is built on income inequality. To maintain a reasonable standard of living with a sustainable carbon footprint, we will have to work toward a more egalitarian society.

    @McDuff: I don’t deny that what Kurzweil has in mind is logically possible. The point is, we are rapidly running out of time before climate change takes away our freedom to indulge in this kind of speculation.

    • S.K.Graham Says:

      Moral clarity and political will (of the right kind) are often hard to come by. But I find a lot of hope in our modern world. Despite the wars we have still had over the last 50 years, I suspect that if we could crunch the numbers, we would find that violent death per capita for the whole human race is lower now than ever in history. I credit this to the massive ease of communication (even language barriers are falling) enabled by computers and networks, and also credit technology which provides more resources per capita to go around. The communication aspect enhances trust and cooperation. There are at least something like 2 billion of us in the developed world who pretty much have zero interest in going to war with each other any more. Actually, I’d say 4 billion in the developed and developing world. If we could figure out how to share with the remaining, impoverished 2 billion and bring them “up to speed”, so to speak, that’d help a lot. Even without war, there are still too many people driven by ambition, greed, fear and mistrust for those of us with more foresight to cooperatively overcome global problems.

      I suspect one form of geo-engineering or another is the only hope for global warming. Investment in all kinds of research and development of technologies really is our only hope.

      The crisis of not enough energy is looming large, though. In fact, thinking about it here has motivated me to start my own blog… the first post is on this very topic: http://rightnice.blogspot.com/2010/08/in-long-run-is-even-solar-power-going.html

  5. Uplift Says:

    Thanks for this. I don’t buy Shiva’s quote, though, nor your argument that reducing income inequality per se will solve our problems.

    I’m all in favor of reducing income inequality. But to the extent that’s happening right now, it often contributes to climate change. China and India are building their economies in an effort to raise standards of living: more factories, more pollution, more cars, more people traveling by plane. So define your methods: by what method will we reduce global inequality, and by what specific mechanism will that reduction lead to reduced climate change?

    Re: Shiva, what’s the proposed solution to climate change that involves no science or engineering? (I know she didn’t say “science”, but IMO that quote doesn’t leave any room for science, either.) I’m not snarking: what is her actual, proposed solution? When you give up on science and engineering, you’re left with – what? Prayer?

    In general, given probable tipping points in climate change, how specifically ought we reduce our emissions by, say, 50% or more in the next 20-30 years while maintaining a reasonable standard of living without massive engineering projects, probably including lots and lots of nuclear power?

  6. Uplift Says:

    Let me add – if you were NOT arguing that “reducing income inequality per se will solve our problems” (in climate change, I mean), then I apologize – I was not trying to misquote you.

  7. JR Minkel Says:

    Thanks for your comment. You’ve got it right. I am arguing that “reducing income inequality per se will [go a long way toward] solv[ing] our problem.” I hope to write a post this week that lays out the evidence for that view. The short answer is here and here.

    I don’t doubt that new and creative use of technology will provide another piece of the solution. We’re a technological species. But I question whether some newfangled technology is going to come along and save us. That’s just my feeling.

  8. Alan Kellogg Says:

    The big problem with Kurzweil’s thinking is that we don’t know how the brain works. We have found that individual neurons can do things we once thought it took clumps of neurons to pull off. Far from the brain being a computer made up of organic transistors, it’s more likely that the neurons themselves are organic computers, using an operating system utterly unlike that of electronic computers, and that the brain is an enormous net of these organic computers.

    All nucleated cells process information; the neuron is especially dedicated to it. Exactly how that information is processed is as yet unknown, and may not be known for some time. So I have to consider Kurzweil’s ideas uninformed and downright silly.

  9. Mark Bellis Says:

    Poor Albert Einstein gets his name hung onto quotes that never passed his lips, like the one in this story – civilization is all about controlling nature.


  10. Ryan McGivern Says:

    Your article seems strongly opinionated, which I can appreciate, but not argued.

    You write that Kurzweil is patently wrong in his statement about the brain and how much code and computing power would be needed to reflect it. Why? Is it a brain no-brainer? Count me brainless – but tell us why.

    You write that Kurzweil is pumping out Utopian wank. The quote you provide is:
    “…Together these could create the ultimate machine that can help us handle the challenges of the future…” A statement with a ‘could’ and a ‘can help us handle’ is a far cry from the usual assurances and strong retreat/flight-type language I think of as Utopian. I’ve read his book “The Singularity Is Near”, and the picture he paints is replete with continued military conflict. It is no lion-and-lamb “Walden Two” community.

    I share with you the deep and activating concern for women, the exploited, the marginalized. I believe that Kurzweil does also. His writing and presentations do reflect to me a very human interest and compassion.
    I don’t know if Kurzweil is part of the condition you describe as one “in which our ability to find solidarity and meaning in one another has been corrupted,” but I would disagree if so. He writes about the radical solidarity and erotic fusion that is love in quite flowery ways.
    And yes, meaning has been wildly transformed of late (a la Marshall McLuhan) but I don’t see meaning or social cohesion and compassion being dissolved at all. Quite the opposite in my opinion.
    I also fail to see how geo-engineering or Kurzweil or anyone who is excited about technology may represent a “fetishizing” of objects. I don’t fetishize a well. I celebrate a village’s health. Nor do I worship the laptops that come to the same village. I am eager to see what self-expression and enjoyment in education come from the use of them as just another tool.

    I appreciate that you are concerned for the human situation, but this article just isn’t ringing true or convincing to me.
    Ryan McGivern

  11. JR Minkel Says:

    Hi Ryan. I admit, as a blogger, my reach sometimes exceeds my grasp. I’ve made an attempt to spell a few things out here.

