Participatory Spirituality for the 21st Century
It looks like Ray Kurzweil's book, The Singularity is Near, is being made into a movie and will be released later this year:
"The Singularity is Near, A True Story about the Future, based on Ray Kurzweil’s New York Times best selling book, will be a full-length motion picture slated for theatrical release in 2010. The movie intertwines a fast-paced A-line documentary with a B-line narrative story."
Here's an excerpt from a Q&A overview of Kurzweil's singularity thesis:
So what is the Singularity?
Within a quarter century, nonbiological intelligence will match the range and subtlety of
human intelligence. It will then soar past it because of the continuing acceleration of
information-based technologies, as well as the ability of machines to instantly share their
knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and
our environment, overcoming pollution and poverty, providing vastly extended longevity,
full-immersion virtual reality incorporating all of the senses (like “The Matrix”),
"experience beaming” (like “Being John Malkovich”), and vastly enhanced human
intelligence. The result will be an intimate merger between the technology-creating
species and the technological evolutionary process it spawned.
And that’s the Singularity?
No, that’s just the precursor. Nonbiological intelligence will have access to its own
design and will be able to improve itself in an increasingly rapid redesign cycle. We’ll
get to a point where technical progress will be so fast that unenhanced human intelligence
will be unable to follow it. That will mark the Singularity.
When will that occur?
I set the date for the Singularity—representing a profound and disruptive transformation
in human capability—as 2045. The nonbiological intelligence created in that year will be
one billion times more powerful than all human intelligence today.
Why is this called the Singularity?
The term “Singularity” in my book is comparable to the use of this term by the physics
community. Just as we find it hard to see beyond the event horizon of a black hole, we
also find it difficult to see beyond the event horizon of the historical Singularity. How can
we, with our limited biological brains, imagine what our future civilization, with its
intelligence multiplied trillions-fold, will be capable of thinking and doing? Nevertheless,
just as we can draw conclusions about the nature of black holes through our conceptual
thinking, despite never having actually been inside one, our thinking today is powerful
enough to have meaningful insights into the implications of the Singularity. That’s what
I’ve tried to do in this book.
Okay, let’s break this down. It seems a key part of your thesis is that we will be
able to capture the intelligence of our brains in a machine.
Indeed.
So how are we going to achieve that?
We can break this down further into hardware and software requirements. In the book, I
show how we need about 10 quadrillion (10^16) calculations per second (cps) to provide a
functional equivalent to all the regions of the brain. Some estimates are lower than this
by a factor of 100. Supercomputers are already at 100 trillion (10^14) cps, and will hit 10^16
cps around the end of this decade. Several supercomputers with 1 quadrillion cps are
already on the drawing board, with two Japanese efforts targeting 10 quadrillion cps
around the end of the decade. By 2020, 10 quadrillion cps will be available for around
$1,000. Achieving the hardware requirement was controversial when my last book on
this topic, The Age of Spiritual Machines, came out in 1999, but is now pretty much a
mainstream view among informed observers. Now the controversy is focused on the
algorithms.
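These hardware numbers are easy to sanity-check. Here's a minimal back-of-the-envelope sketch, assuming (as Kurzweil does) that the computing power $1,000 buys doubles roughly every year, and taking the interview's 2020 figure of 10^16 cps per $1,000 as the anchor:

```python
# Back-of-the-envelope projection of cps per $1,000, assuming one doubling per
# year of price-performance. The 2020 anchor (1e16 cps per $1,000) comes from
# the interview; the rest is extrapolation under that doubling assumption.

BRAIN_CPS = 1e16  # the interview's functional-equivalence estimate for the brain

def cps_per_1000_dollars(year, base_year=2020, base_cps=1e16):
    """Project cps purchasable for $1,000 under annual doubling."""
    return base_cps * 2 ** (year - base_year)

for year in (2020, 2025, 2030):
    cps = cps_per_1000_dollars(year)
    print(f"{year}: {cps:.1e} cps, about {cps / BRAIN_CPS:,.0f}x one brain")
# 2030 comes out near 1e19 cps (~1,000 brains per $1,000), which matches the
# interview's later claim about $1,000 of computation in 2030.
```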
And how will we recreate the algorithms of human intelligence?
To understand the principles of human intelligence we need to reverse-engineer the
human brain. Here, progress is far greater than most people realize. The spatial and
temporal (time) resolution of brain scanning is also progressing at an exponential rate,
roughly doubling each year, like most everything else having to do with information. Only
recently have scanning tools become able to see individual interneuronal connections and
watch them fire in real time. Already, we have mathematical models and simulations of a couple
dozen regions of the brain, including the cerebellum, which comprises more than half the
neurons in the brain. IBM is now creating a simulation of about 10,000 cortical neurons,
including tens of millions of connections. The first version will simulate the electrical
activity, and a future version will also simulate the relevant chemical activity. By the
mid 2020s, it’s conservative to conclude that we will have effective models for all of the
brain.
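To make "simulate the electrical activity" concrete, here is a minimal sketch of the kind of single-neuron model such simulations build on. It is a textbook leaky integrate-and-fire neuron, not IBM's actual model, and all parameter values are illustrative:

```python
# A minimal leaky integrate-and-fire neuron: tau * dV/dt = -(V - V_rest) + R*I.
# Illustrative parameters only; this stands in for the electrical-activity
# simulations described above, not for any specific project's model.

def simulate_lif(current_nA, dt_ms=0.1, t_max_ms=100.0,
                 tau_ms=10.0, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-70.0, r_mohm=10.0):
    """Euler-integrate one neuron; return the spike times in milliseconds."""
    v = v_rest
    spikes = []
    for step in range(int(t_max_ms / dt_ms)):
        dv = (-(v - v_rest) + r_mohm * current_nA) / tau_ms
        v += dv * dt_ms
        if v >= v_thresh:              # threshold crossing -> emit a spike
            spikes.append(step * dt_ms)
            v = v_reset                # reset the membrane potential
    return spikes

print(simulate_lif(current_nA=2.0))   # constant input drives regular firing
```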
So at that point we’ll just copy a human brain into a supercomputer?
I would rather put it this way: At that point, we’ll have a full understanding of the
methods of the human brain. One benefit will be a deep understanding of ourselves, but
the key implication is that it will expand the toolkit of techniques we can apply to create
artificial intelligence. We will then be able to create nonbiological systems that match
human intelligence in the ways that humans are now superior, for example, our
pattern-recognition abilities. These superintelligent computers will be able to do things we are
not able to do, such as share knowledge and skills at electronic speeds.
By 2030, a thousand dollars of computation will be about a thousand times more
powerful than a human brain. Keep in mind also that computers will not be organized as
discrete objects as they are today. There will be a web of computing deeply integrated
into the environment, our bodies and brains.
You mentioned the AI tool kit. Hasn’t AI failed to live up to its expectations?
There was a boom and bust cycle in AI during the 1980s, similar to what we saw recently
in e-commerce and telecommunications. Such boom-bust cycles are often harbingers of
true revolutions; recall the railroad boom and bust in the 19th century. But just as the
Internet “bust” was not the end of the Internet, the so-called “AI Winter” was not the end
of the story for AI either. There are hundreds of applications of “narrow AI” (machine
intelligence that equals or exceeds human intelligence for specific tasks) now permeating
our modern infrastructure. Every time you send an email or make a cell phone call,
intelligent algorithms route the information. AI programs diagnose electrocardiograms
with an accuracy rivaling doctors, evaluate medical images, fly and land airplanes, guide
intelligent autonomous weapons, make automated investment decisions for over a trillion
dollars of funds, and guide industrial processes. These were all research projects a couple
of decades ago. If all the intelligent software in the world were to suddenly stop
functioning, modern civilization would grind to a halt. Of course, our AI programs are
not intelligent enough to organize such a conspiracy, at least not yet.
Why don’t more people see these profound changes ahead?
Hopefully after they read my new book, they will. But the primary failure is the inability
of many observers to think in exponential terms. Most long-range forecasts of what is
technically feasible in future time periods dramatically underestimate the power of future
developments because they are based on what I call the “intuitive linear” view of history
rather than the “historical exponential” view. My models show that we are doubling the
paradigm-shift rate every decade. Thus the 20th century was gradually speeding up to the
rate of progress at the end of the century; its achievements, therefore, were equivalent to
about twenty years of progress at the rate in 2000. We’ll make another twenty years of
progress in just fourteen years (by 2014), and then do the same again in only seven years.
To express this another way, we won’t experience one hundred years of technological
advance in the 21st century; we will witness on the order of 20,000 years of progress
(again, when measured by the rate of progress in 2000), or about 1,000 times greater than
what was achieved in the 20th century.
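The arithmetic behind those figures is straightforward to reconstruct. A short sketch, modeling the paradigm-shift rate as 2^(t/10) in units of the year-2000 rate (t in years after 2000) and integrating it; this is a continuous-growth model, so the numbers land near, not exactly on, the interview's round figures:

```python
# Reconstruction of the "20,000 years of progress" arithmetic, assuming the
# paradigm-shift rate doubles every decade: rate(t) = 2**(t/10) in year-2000
# units. Cumulative progress is the integral of that rate.
from math import log

def progress_years(t_years, doubling_decades=10.0):
    """Integral of 2**(t/d) from 0 to t: equivalent years at the year-2000 rate."""
    d = doubling_decades
    return (d / log(2)) * (2 ** (t_years / d) - 1)

print(f"first 14 years : {progress_years(14):.0f} equivalent years")    # ~24
print(f"full century   : {progress_years(100):,.0f} equivalent years")  # ~14,800
# The closed-form century total is roughly 15,000 year-2000-equivalent years,
# the same order of magnitude as the interview's "on the order of 20,000".
```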
The exponential growth of information technologies is even greater: we’re doubling the
power of information technologies, as measured by price-performance, bandwidth,
capacity and many other types of measures, about every year. That’s a factor of a
thousand in ten years, a million in twenty years, and a billion in thirty years. This goes
far beyond Moore’s law (the shrinking of transistors on an integrated circuit, allowing us
to double the price-performance of electronics each year). Electronics is just one
example of many. As another example, it took us 14 years to sequence HIV; we recently
sequenced SARS in only 31 days.
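The ten/twenty/thirty-year factors follow directly from annual doubling, since 2^10 is roughly 10^3:

```python
# Annual doubling compounds to the factors quoted above: roughly a thousand in
# ten years, a million in twenty, a billion in thirty (because 2**10 ≈ 10**3).
for years in (10, 20, 30):
    print(f"{years} years: {2 ** years:,}x")
# 10 years: 1,024x; 20 years: 1,048,576x; 30 years: 1,073,741,824x
```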
So this acceleration of information technologies applies to biology as well?
Absolutely. It’s not just computer devices like cell phones and digital cameras that are
accelerating in capability. Ultimately, everything of importance will be composed
essentially of information technology. With the advent of nanotechnology-based
manufacturing in the 2020s, we’ll be able to use inexpensive table-top devices to
manufacture on-demand just about anything from very inexpensive raw materials using
information processes that will rearrange matter and energy at the molecular level.
We’ll meet our energy needs using nanotechnology-based solar panels that will capture
the energy in 0.03 percent of the sunlight that falls on the Earth, which is all we need to
meet our projected energy needs in 2030. We’ll store the energy in highly distributed
fuel cells.
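That 0.03 percent figure is easy to sanity-check with standard numbers. A rough sketch, using the solar constant and Earth's radius; the ~20 TW figure for projected demand is my own ballpark assumption, not from the interview:

```python
# Hedged sanity check of the "0.03 percent of sunlight" claim. The solar
# constant (~1361 W/m^2) and Earth's radius (~6.371e6 m) are standard values;
# the ~20 TW projected-demand figure is an assumption for comparison only.
from math import pi

SOLAR_CONSTANT = 1361.0   # W/m^2 at the top of the atmosphere
EARTH_RADIUS = 6.371e6    # m

intercepted = SOLAR_CONSTANT * pi * EARTH_RADIUS ** 2  # ~1.7e17 W total
captured = 0.0003 * intercepted                        # 0.03 percent of it
print(f"captured: {captured / 1e12:.0f} TW vs ~20 TW assumed demand")
# ~52 TW -- comfortably above any plausible 2030 demand, consistent with the claim.
```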
I want to come back to both biology and nanotechnology, but how can you be so
sure of these developments? Isn’t technical progress on specific projects
essentially unpredictable?
Predicting specific projects is indeed not feasible. But the result of the overall complex,
chaotic evolutionary process of technological progress is predictable.
People intuitively assume that the current rate of progress will continue for future
periods. Even for those who have been around long enough to experience how the pace of
change increases over time, unexamined intuition leaves one with the impression that
change occurs at the same rate that we have experienced most recently. From the
mathematician’s perspective, the reason for this is that an exponential curve looks like a
straight line when examined for only a brief duration. As a result, even sophisticated
commentators, when considering the future, typically use the current pace of change to
determine their expectations in extrapolating progress over the next ten years or one
hundred years. This is why I describe this way of looking at the future as the “intuitive
linear” view. But a serious assessment of the history of technology reveals that
technological change is exponential. Exponential growth is a feature of any evolutionary
process, of which technology is a primary example.
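The "exponential looks linear up close" point is easy to demonstrate. A small illustration, with an arbitrary 7 percent growth rate standing in for any exponential trend:

```python
# The "intuitive linear" trap: extrapolating the *current* pace (the tangent
# line at t = 0) badly undershoots an exponential. Growth rate is illustrative.
from math import exp

GROWTH = 0.07                  # 7%/year, an arbitrary illustrative rate

def exponential(t):            # the actual trajectory
    return exp(GROWTH * t)

def linear_extrapolation(t):   # tangent line at t = 0: same level, same slope
    return 1.0 + GROWTH * t

for t in (1, 10, 50, 100):
    print(f"t={t:>3}: exponential {exponential(t):>8.1f}  "
          f"linear {linear_extrapolation(t):>5.1f}")
# Nearly identical at t=1; by t=100 the exponential is ~1097 vs 8 for the line.
```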
As I show in the book, this has also been true of biological evolution. Indeed,
technological evolution emerges from biological evolution. You can examine the data in
different ways, on different timescales, and for a wide variety of technologies, ranging
from electronic to biological, as well as for their implications, ranging from the amount
of human knowledge to the size of the economy, and you get the same exponential—not
linear—progression. I have over forty graphs in the book from a broad variety of fields
that show the exponential nature of progress in information-based measures. For the
price-performance of computing, this goes back over a century, well before Gordon
Moore was even born.
Aren’t there are a lot of predictions of the future from the past that look a little
ridiculous now?
Yes, any number of bad predictions from other futurists in earlier eras can be cited to
support the notion that we cannot make reliable predictions. In general, these
prognosticators were not using a methodology based on a sound theory of technology
evolution. I say this not just looking backwards now. I’ve been making accurate
forward-looking predictions for over twenty years based on these models....
What will the impact of these developments be?
Radical life extension, for one.
Sounds interesting, how does that work?
In the book, I talk about three great overlapping revolutions that go by the letters “GNR,”
which stands for genetics, nanotechnology, and robotics. Each will provide a dramatic
increase to human longevity, among other profound impacts. We’re in the early stages of
the genetics—also called biotechnology—revolution right now. Biotechnology is
providing the means to actually change your genes: not just designer babies but designer
baby boomers. We’ll also be able to rejuvenate all of your body’s tissues and organs by
transforming your skin cells into youthful versions of every other cell type. Already, new
drug development is precisely targeting key steps in the process of atherosclerosis (the
cause of heart disease), cancerous tumor formation, and the metabolic processes
underlying each major disease and aging process. The biotechnology revolution is
already in its early stages and will reach its peak in the second decade of this century, at
which point we’ll be able to overcome most major diseases and dramatically slow down
the aging process.
That will bring us to the nanotechnology revolution, which will achieve maturity in the
2020s. With nanotechnology, we will be able to go beyond the limits of biology, and
replace your current “human body version 1.0” with a dramatically upgraded version 2.0,
providing radical life extension.
And how does that work?
The “killer app” of nanotechnology is “nanobots,” which are blood-cell sized robots that
can travel in the bloodstream destroying pathogens, removing debris, correcting DNA
errors, and reversing aging processes.
Human body version 2.0?
We’re already in the early stages of augmenting and replacing each of our organs, even
portions of our brains with neural implants, the most recent versions of which allow
patients to download new software to their neural implants from outside their bodies. In
the book, I describe how each of our organs will ultimately be replaced. For example,
nanobots could deliver to our bloodstream an optimal set of all the nutrients, hormones,
and other substances we need, as well as remove toxins and waste products. The
gastrointestinal tract could be reserved for culinary pleasures rather than the tedious
biological function of providing nutrients. After all, we’ve already in some ways
separated the communication and pleasurable aspects of sex from its biological function.
And the third revolution?
The robotics revolution, which really refers to “strong” AI, that is, artificial intelligence
at the human level, which we talked about earlier. We’ll have both the hardware and
software to recreate human intelligence by the end of the 2020s. We’ll be able to
improve these methods and harness the speed, memory capabilities, and knowledge-sharing
ability of machines.
We’ll ultimately be able to scan all the salient details of our brains from inside, using
billions of nanobots in the capillaries. We can then back up the information. Using
nanotechnology-based manufacturing, we could recreate your brain, or better yet
reinstantiate it in a more capable computing substrate.
Which means?
Our biological brains use chemical signaling, which transmits information at only a few
hundred feet per second. Electronics is already millions of times faster than this. In the
book, I show how one cubic inch of nanotube circuitry would be about one hundred
million times more powerful than the human brain. So we’ll have more powerful means
of instantiating our intelligence than the extremely slow speeds of our interneuronal
connections.
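The speed comparison itself is simple arithmetic. A quick sketch using ballpark values: roughly 100 m/s (a few hundred feet per second) for neural conduction, and about two-thirds of light speed for electrical signals in a conductor; both are assumptions for illustration, not measurements:

```python
# Rough version of the speed comparison: neural conduction versus electronic
# signaling. Both speeds are ballpark figures, not measurements.
NEURAL_SPEED = 100.0     # m/s (~330 ft/s; fast myelinated axons are faster)
ELECTRONIC_SPEED = 2e8   # m/s (~2/3 of c, typical signal speed in a conductor)

print(f"electronics ≈ {ELECTRONIC_SPEED / NEURAL_SPEED:,.0f}x faster")
# ≈ 2,000,000x -- "millions of times faster", as the interview says.
```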
So we’ll just replace our biological brains with circuitry?
I see this starting with nanobots in our bodies and brains. The nanobots will keep us
healthy, provide full-immersion virtual reality from within the nervous system, provide
direct brain-to-brain communication over the Internet, and otherwise greatly expand
human intelligence. But keep in mind that nonbiological intelligence is doubling in
capability each year, whereas our biological intelligence is essentially fixed in capacity.
As we get to the 2030s, the nonbiological portion of our intelligence will predominate.
The closest life extension technology, however, is biotechnology, isn’t that right?
There’s certainly overlap in the G, N and R revolutions, but that’s essentially correct.
[Read Ray's full Q & A here.]
I think Kurzweil's predictions rest largely on one big assumption: human-like consciousness = adequate complexity of circuitry. That may be the case, but it's not clear that we understand consciousness well enough to simply "assume" it.