So I finished reading Ray Kurzweil's latest book, The Singularity Is Near, and I have to say it is mighty weird.
The concept of the Singularity is simply that all human technology, culture, and even biology are converging, more or less at a rate dictated by Moore's Law, and will eventually (within the next 200 years, in fact) turn us all into a giant happy cloud of benevolent nanites with a shared consciousness, gobbling up the universe at probably faster than the speed of light to transform all extant energy and matter into one giant universal computer. Wicked!
He pins his thesis principally to the convergence of three key technologies, which he calls GNR (Genetics, Nanotechnology, and Robotics - with Robotics referring not to physical robots, but to Strong AI).
Basically, the rate at which this stuff is accelerating, and the accelerating rate of that acceleration, mean that within 30-40 years we should all be sort of functionally immortal. Our brains will be so infused with nanites, and we will be so intimately connected to our information technology, that death would really be more akin to a Windows crash with an open document (though hopefully somewhat less frequent). We might lose a few moments of our experience should something catastrophic happen, but our pattern will remain, due to the ease and necessity of having a more widely distributed self.
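To get a feel for the kind of arithmetic behind that claim, here's a toy sketch of plain exponential doubling. To be clear, this isn't Ray's actual model (his "law of accelerating returns" is fancier, with the doubling period itself shrinking over time), and the 18-month doubling period is just my assumption, borrowed from the folk version of Moore's Law:

```python
# Toy illustration only: a capability that doubles every 18 months.
# This is NOT Kurzweil's actual "law of accelerating returns" model.
DOUBLING_PERIOD_YEARS = 1.5  # assumed, per the folk version of Moore's Law

def growth_factor(years: float) -> float:
    """Total multiplier on capability after the given number of years."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (10, 20, 30, 40):
    print(f"{years} years -> roughly {growth_factor(years):,.0f}x")

# 10 years -> roughly 102x
# 20 years -> roughly 10,321x
# 30 years -> roughly 1,048,576x
# 40 years -> roughly 106,528,681x
```

A million-fold improvement in 30 years is the sort of number that makes "functionally immortal by mid-century" sound merely optimistic rather than insane - and if the doubling period itself keeps shrinking, as Ray argues it does, the numbers get even sillier.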
As a side note, this perspective starts to solve some nasty existentialist philosophical questions. I remember wrestling with the teleportation problem in a college philosophy class - if a teleporter could replicate you perfectly at the other end, would you mind being disintegrated on this end to solve the problem of there being two of you? I would mind that. But on thinking about Ray's vision of the complete future that would necessarily surround the required technology, the problem becomes less troubling. In a world where such teleportation was commonplace, I would likely consider whatever physical embodiment I was using at the time to be a mere copy of something. Again, like a Word document - I really wouldn't care about any one printed copy of it. We currently live in a world where we only have one hard copy of ourselves, and the idea of having it destroyed is anathema to us. But in a world where the existence of our patterns is tied into some worldwide source control and versioning system... fuck it... who cares what happens to your body?
Now Ray is in his fifties, and in a sort of desperate bid to make sure he survives to the critical date of feasible immortality, he is locked in a moment-to-moment battle with death. The guy admits to taking over 250 supplements a day, and even to regularly cleansing his own blood. The results he reports are admittedly impressive, and if he's right, his odds of surviving to the stage where immortality is feasible are good. If he's wrong, though, it's all kind of ghoulish and creepy. His unsettling fear of death (which he would likely claim is not fear, but merely a rational approach to a solvable problem) tends to throw his thesis into disrepute, because he wants - in fact he needs - to believe in his future vision. He's trying to prove the result he desires.
Ray's perspective on technology growth and convergence should probably not be ignored, however. The man has been doing this stuff for a long time, and he has a fair amount of compelling insight into the challenges, benefits, and potential risks of these emerging technologies. He advises a number of government and private bodies that keep their eyes on things like AI research and nanotech development. His book presents an interesting glimpse of a possible future from a guy who invests a lot of his (considerable) brain power in anticipating that future.
The book's main problem, in my opinion, is that Ray is just too damn logical. He presents his arguments as 'If A then B, if B then C, if C then D - therefore-we-will-gobble-up-all-spacetime-as-hyperintelligent-nanodust-within-200-years'. He spends much of the book deflecting counterpoints, and does admirably well, but it still all seems just too logical. He presents his material as though his vision of the future were inevitable - indeed, as though his logic were an irrefutable proof. Being so logical, he could have prefaced the book by admitting that, since so little of the problem is actually knowable, there is a nearly 100% chance he is grossly incorrect in at least one place, and that that one error could very possibly cause a cascading failure down his entire chain of reasoning, making everything else wrong. I expect he would strongly refute that idea - and he could probably refute it very effectively, because he knows a lot more about chaos theory than I do.
I admit that his vision of the future is pretty strange, but it's also pretty cool. It's not as cool as the future I hoped for as a kid, but it'll do. And in any case, his future would not prevent me from living in my ideal one, since I could simply create that world for myself out of the computing power inherent in a mere kilogram of matter... I hope the rest of you won't mind if I keep a single kilogram out of the entire universe to create a pocket universe where I can zip around in a flight jacket with my jet-pack and raygun, doing battle against chaotic evil Martians.