Generational Dynamics
Fourth Turning Forum Archive



Thread: The Singularity - Page 2







Post#26 at 06-08-2003 09:17 PM by Leados [at joined Sep 2002 #posts 217]
---

Re: Eschatology - The End of the Human Race by 2100?

Quote Originally Posted by John J. Xenakis
The subject of eschatology (the religious study of the "last days")
is an interesting sidelight to the Fourth Turning theory, because of
the way it affects politics, especially the surprising political
alliances between the Christian right and the Jewish left over their
common desire for a strong defense of Israel. This alliance is
motivated, at least partially, by the belief among some
fundamentalist Christians that the next Fourth Turning war will
coincide with "the second coming" of Christ in Jerusalem.

I don't see the world coming to an end in the next war, any more than
it came to an end in World War II, but interestingly, I think I do
know how the human race is going to come to an end -- within the next
100 years or so.

Computers are becoming increasingly intelligent, and by 2030 or so,
it will be evident to everybody that computers will soon be more
intelligent than human beings. This will give rise to a public
debate over "who will inherit the earth -- humans or computers?"

By 2050 or 2060, computers will already be more intelligent than
humans, and will be able to replicate themselves. They'll be a new
species as much more advanced over humans as humans are advanced over
monkeys.

But it won't stop there. By 2100, the new computers will be as much
more advanced over the 2050 computers as the 2050 computers are over
humans.

Maybe these computers will decide to keep a few humans around in zoos
or something, or maybe they'll keep us around as pets the way we keep
dogs and cats, but those scenarios are problematical. The most
likely scenario is that the human race will end by 2100.

This is an issue that today's children will be directly facing in
their lifetimes. They and their own children will be facing
potential extinction in this way.

What's the probability that all this will happen? The fact that
computers will soon be much more intelligent than humans is almost
100% certain. What will happen after that is open to speculation.

Finally, a word about the "Matrix" movies. These movies portray a
human victory over a future of intelligent computers, but there's an
important point that the movies omit. Even if the humans beat back
the computers in some future war, it'll be an empty victory. A
decade later there'll be a new generation of computers so much more
intelligent than the last generation that we won't stand a chance
against them.

John

I'm sure someone else has read I, Robot.

If we hardwire them not to kill us, then we're fine. But if we let them have free rein to kill people, that's where the problems would start. Also, couldn't we just wipe them out with EMPs if we needed to? Or just pour water on them? Also, how will these things be powered? Must be an awful lot of power needed to run a robotic killer. If we have no oil, how can we construct/power these things?
My name is John, and I want to be a Chemist When I grow up.







Post#27 at 06-10-2003 04:57 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Leados,

Quote Originally Posted by Leados
> I'm sure someone else has read I, Robot. If we hardwire them not
> to kill us, then we're fine. But if we let them have free rein to
> kill people, that's where the problems would start. Also, couldn't
> we just wipe them out with EMPs if we needed to? Or just pour
> water on them? Also, how will these things be powered? Must be an
> awful lot of power needed to run a robotic killer. If we have no
> oil, how can we construct/power these things?
Yes, I read I, Robot and the Foundation series in my youth. Those
were exciting books, but as brilliant as Isaac Asimov's vision was,
scientific developments that have occurred since those books were
written point the way to some errors.

Asimov's Three Laws of Robotics are:

(1) A robot may not injure a human being, or, through inaction, allow
a human being to come to harm.

(2) A robot must obey the orders given it by human beings except where
such orders would conflict with the First Law.

(3) A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.

The problem is that there's no way to "hardwire" these laws into all
computers. If America programmed its intelligent computers never to
kill humans, that's no guarantee that the Chinese would do the same.
(And why, for that matter, would even we?)

Pour water on them? You can buy waterproof computers today.

Powered? Remember, we're talking about several decades into the
future. By that time, many alternative power sources will be
developed, undoubtedly including super-batteries that can last a very
long time.

A lot of power needed to run a robotic killer? Well, it takes a lot
of power to run a human killer, and I'll bet that someone will figure
out a way to power robotic killers that'll be much more efficient
than the method of powering human killers (which, as I recall, is by
feeding them "meals ready to eat," or MREs).

John







Post#28 at 06-10-2003 09:17 PM by Leados [at joined Sep 2002 #posts 217]
---

You never mentioned how to keep these things from being killed with EMPs. I suppose you could shield them or something, but that would be difficult to do well. If you can make it, you can unmake it.
My name is John, and I want to be a Chemist When I grow up.







Post#29 at 06-13-2003 02:58 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Eschatology - The End of the Human Race by 2100?

Dear Leados,

Quote Originally Posted by Leados
> You never mentioned how to keep these things from being killed
> with EMPs. I suppose you could shield them or something, but that
> would be difficult to do well. If you can make it, you can unmake
> it.
Back in the 1980s, I was consulting for Northrop Corporation
building, among other things, early versions of smart bombs. Some of
the projects we worked on had to be "nuclear hardened," which meant
that they had to survive things like electromagnetic pulses. This
was accomplished by using special parts and special shielding.

I'm sure you can think of all sorts of technical reasons why
intelligent robots might be vulnerable to attack in various ways, and
maybe I can even come up with some responses, but to have that kind
of discussion misses the point in a couple of ways.

First, we're talking about technology that will be deployed several
decades from now, so no matter what you and I think of today, it will
probably be irrelevant in the time frame we're discussing.

Second, and most important, what we're talking about here is
intelligent computers that will be more intelligent than humans.
That means that no matter how clever you are at thinking up problems,
and no matter how clever I am at thinking up solutions, you and I are
going to be really dumb compared to the computers themselves. They
will be able to decide how to attack their enemy counterparts, and
how to protect themselves from enemy attack, and they won't even
bother to ask humans because humans will be too stupid to be of help.
And the difference in intelligence between computers and humans will
only increase exponentially as time goes on.

John







Post#30 at 06-15-2003 10:25 PM by Leados [at joined Sep 2002 #posts 217]
---

Re: Eschatology - The End of the Human Race by 2100?

Quote Originally Posted by John J. Xenakis
Dear Leados,

Quote Originally Posted by Leados
> You never mentioned how to keep these things from being killed
> with EMPs. I suppose you could shield them or something, but that
> would be difficult to do well. If you can make it, you can unmake
> it.
Back in the 1980s, I was consulting for Northrop Corporation
building, among other things, early versions of smart bombs. Some of
the projects we worked on had to be "nuclear hardened," which meant
that they had to survive things like electromagnetic pulses. This
was accomplished by using special parts and special shielding.

I'm sure you can think of all sorts of technical reasons why
intelligent robots might be vulnerable to attack in various ways, and
maybe I can even come up with some responses, but to have that kind
of discussion misses the point in a couple of ways.

First, we're talking about technology that will be deployed several
decades from now, so no matter what you and I think of today, it will
probably be irrelevant in the time frame we're discussing.

Second, and most important, what we're talking about here is
intelligent computers that will be more intelligent than humans.
That means that no matter how clever you are at thinking up problems,
and no matter how clever I am at thinking up solutions, you and I are
going to be really dumb compared to the computers themselves. They
will be able to decide how to attack their enemy counterparts, and
how to protect themselves from enemy attack, and they won't even
bother to ask humans because humans will be too stupid to be of help.
And the difference in intelligence between computers and humans will
only increase exponentially as time goes on.

John

How are computers and robots going to become more intelligent? Seems like since they are robots, their one weakness (eventually) would be that they would be predictable, since they are not likely to have emotions, at least not like humans do. Could be that you're right though.

Just where are these machines going to be able to find the resources to reproduce themselves? I think they would be limited from that alone, unless we're talking some kind of biological component to robots.

If something like you think does happen, it will be a true test of humans, that is for sure. And it is something we could fail.


What about "good" robots vs. "bad" robots?

:wink:
My name is John, and I want to be a Chemist When I grow up.







Post#31 at 06-21-2003 11:17 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Leados,

You're asking a lot of the right questions, which is why I believe
it's important for philosophers and theologians to start discussing
this problem as soon as possible.

Basically the situation is this:

The first generation of intelligent computers will be designed
by human beings, and are currently under development in laboratories
around the world, with successful products expected by the end of the
2020s.

The second generation of intelligent computers will be
designed by the first generation of intelligent computers.

In other words, once the first generation is built, human beings
won't have a thing more to say about how the following generations
work or what they do.

So maybe it's possible for philosophers and theologians to provide
guidance to developers and researchers so that if the first
generation is designed correctly, then subsequent generations will
be nice to human beings. (I'm actually pessimistic that this can be
done, but I think it's worth a shot.)

Responses to your specific questions:

Quote Originally Posted by Leados
> How are computers and robots going to become more intelligent?
By implementing intelligence into computer software. I actually know
how to develop such software today, using a "brute force" algorithm.
So why don't I implement it today? Because today's computers are way
too slow to execute that kind of algorithm. But by 2030, computers
will be easily powerful enough to implement that and other
intelligence algorithms.

> Seems like since they are robots, their one weakness (eventually)
> would be that they would be predictable, since they are not
> likely to have emotions, at least not like humans do. Could be
> that you're right though.
What are emotions? Do you want the intelligent computer to cry when
someone dies? To laugh when someone tells a joke? We can program
that into the software. I don't know if anyone would want to bother
doing that, but it could be done.

There are many computer algorithms even today that aren't
predictable. For example, when you play a computer game, the
computer plays differently each time. That's because there's a
random move generator built into most computer games. Similarly, we
could allow for some randomness in the algorithm for computer
intelligence.
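
Just as a sketch of what I mean (toy code, not from any real game --
the move list and the scoring function here are stand-ins): instead
of always playing the single best-scoring move, the program can pick
at random among the moves that score close to the best.

    import random

    def choose_move(moves, evaluate, tolerance=0.05):
        # Score every legal move, find the best score, then pick at
        # random among the moves within `tolerance` of that best.
        scored = [(evaluate(move), move) for move in moves]
        best = max(score for score, _ in scored)
        candidates = [move for score, move in scored
                      if score >= best - tolerance]
        return random.choice(candidates)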

> Just where are these machines going to be able to find the
> resources to reproduce themselves? I think they would be limited
> from that alone, unless we're talking some kind of biological
> component to robots.
I think that self-reproduction is going to take a few extra decades,
but I'm sure the problem will be solved by the end of the century at
the latest.

> If something like you think does happen, it will be a true test
> of humans, that is for sure. And it is something we could fail.
What test are you referring to?

> What about "good" robots vs. "bad" robots?
This is a value judgment, and once again that's why we need
philosophers and theologians to weigh in. Suppose an intelligent
computer decides to kill a human being. Is the computer guilty of
homicide? Can an intelligent computer ever be guilty of anything? I
have no idea.

John







Post#32 at 06-21-2003 11:22 PM by HopefulCynic68 [at joined Sep 2001 #posts 9,412]
---

Quote Originally Posted by John J. Xenakis
Dear Max,

Quote Originally Posted by Max
> I don't recognize your name, but, lately that doesn't mean much.
You don't mention my name in your posting, but I assume you're
addressing this to me.

> You point out that there are people who see the next 4T as the
> "last day" as described by The Book of Revelation.
This was the subject of a number of news stories about a year ago,
when political pundits were commenting on an emerging alliance
between the Christian Right and the Jewish Left.

> Then you go on to describe a scenario that has as much to do with
> the book of Revelation as lightning does to a lightning bug.

> Where dost thou get thy Holy revelations? Art thou a modern day
> Prophet more 'beloved' than John the Beloved, than Daniel, than
> Joel?

> Did you receive divine story telling from the Angel "V---GER"? Or
> perhaps it was the Oracle "HAL".

> On the other hand, if we can design computers of intelligence we
> can have Stepford after all .....
I'm not that dumb. I'm well aware that the scenario I described has
nothing to do with what's described in Revelation.

However, I do think that there's a great deal for philosophers and
religious scholars to think about here. There's no doubt that
computers will be more intelligent than humans within a few decades.
There's no doubt that such computers will be used in warfare as
self-duplicating killing machines. And there's no doubt that, as
time goes on, computers will become as much more intelligent than
humans as we are more intelligent than dogs and cats.

Actually, nobody knows if it's even possible for computers to become self-willed or not. We can't even define consciousness and free will, much less speculate meaningfully about how soon it will be before computers surpass humans in intellect.

Based on what we currently know, as opposed to speculation, we can't rule out superhumanly intelligent computers within decades. It's equally likely (since we can't assign any meaningful numbers) that such developments are 10,000 years away.

This isn't something I'm losing sleep over.







Post#33 at 06-21-2003 11:45 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Dear Hopeful Cynic,

Quote Originally Posted by HopefulCynic68
> Actually, nobody knows if it's even possible for computers to
> become self-willed or not. We can't even define consciousness and
> free will, much less speculate meaningfully about how soon it will
> be before computers surpass humans in intellect.
What does "consciousness" mean? Is it a necessary requirement for
making intelligent decisions?

Computers make intelligent decisions in many areas today, complicated
decisions that humans couldn't make. These are in anything from
commercial logistics applications to programs that guide weapons
systems. Each year, computers are doing more and more things better
than humans can do.

By 2030, computers will be able to read, write, talk (I'm not sure
about taste and smell), reason, draw inferences, fix plumbing, build
houses, drive cars, do research, invent things, and conduct wars. By
2040, they'll be able to do those things much, much better. Which of
those things require "consciousness"?

> Based on what we currently know, as opposed to speculation, we
> can't rule out superhumanly intelligent computers within decades.
> It's equally likely (since we can't assign any meaningful numbers)
> that such developments are 10,000 years away.
Sure. And based on what we currently know, by your logic it's just as
likely that the next snowstorm is 10,000 years away.

> This isn't something I'm losing sleep over.
Neither am I. But I'm keeping my snow boots around too.

John







Post#34 at 06-22-2003 01:46 AM by HopefulCynic68 [at joined Sep 2001 #posts 9,412]
---

Quote Originally Posted by John J. Xenakis
Dear Hopeful Cynic,

Quote Originally Posted by HopefulCynic68
> Actually, nobody knows if it's even possible for computers to
> become self-willed or not. We can't even define consciousness and
> free will, much less speculate meaningfully about how soon it will
> be before computers surpass humans in intellect.
What does "consciousness" mean? Is it a necessary requirement for
making intelligent decisions?

No one knows. Which was my point.


Computers make intelligent decisions in many areas today, complicated
decisions that humans couldn't make. These are in anything from
commercial logistics applications to programs that guide weapons
systems. Each year, computers are doing more and more things better
than humans can do.

By 2030, computers will be able to read, write, talk (I'm not sure
about taste and smell), reason, draw inferences, fix plumbing, build
houses, drive cars, do research, invent things, and conduct wars. By
2040, they'll be able to do those things much, much better. Which of
those things require "consciousness"?

Unless, of course, none of those things occurs. MAYBE computers will be drawing inferences in 2030, maybe not. PERHAPS they'll be able to invent new things, perhaps not. Nobody knows right now.

Frankly, I don't think they'll be able to do most of that any time in the 21st century; I expect the rate of progress in computer and information science to slow steadily. This is the usual pattern of technological development: an 'S' curve. High-speed advancement for a while, followed by a plateau and a changeover to slow-and-steady improvement. Right now we're in the 'high-speed' area of the S curve, but sooner or later it will very likely level off.

Note that the driving force for the level-off may not be technical limits, but the limits of market demand.



> Based on what we currently know, as opposed to speculation, we
> can't rule out superhumanly intelligent computers within decades.
> It's equally likely (since we can't assign any meaningful numbers)
> that such developments are 10,000 years away.
Sure. And based on what we currently know, by your logic it's just as
likely that the next snowstorm is 10,000 years away.

No, we have a good understanding of the physics (at least in general) of snowstorms, water phase changes, and we have enormous masses of practical data on the behavior of weather systems. We can test our models of such functions by the accuracy of their predictions.

We can do none of that in technological forecasting for the progression of artificial intelligence.







Post#35 at 06-22-2003 09:18 PM by Leados [at joined Sep 2002 #posts 217]
---

John,

The "test" I was referring to is whether we could beat the machines IF they turn on us. They could, but no one at this point knows. Why would we let computers design computers in the first place? That's the first step down the slippery slope...

Leados
My name is John, and I want to be a Chemist When I grow up.







Post#36 at 06-24-2003 11:33 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Hopeful Cynic,

Quote Originally Posted by HopefulCynic68
> Frankly, I don't think they'll be able to do most of that any time
> in the 21st century; I expect the rate of progress in computer
> and information science to slow steadily. This is the usual
> pattern of technological development: an 'S' curve. High-speed
> advancement for a while, followed by a plateau and a changeover to
> slow-and-steady improvement. Right now we're in the 'high-speed'
> area of the S curve, but sooner or later it will very likely level
> off.
You expect the rate of progress to slow steadily? I'm always amazed
when I see something like this. It reminds me of that apocryphal
story about the head of the patent office in 1899 wanting to shut the
patent office down because everything had already been invented.

It also reminds me of that moronic pseudo-science commentator --
whose name currently escapes me -- whom I've seen on TV from time to
time. In 1991, during the recession, he said that technology had
always pulled us out of recessions before, but that there were no
developments coming up to help us then. He then concluded that the
recession would go on forever. That was just before the internet
explosion.

Actually, I agree that we're in a bit of a lull right now, but
that'll be over in 10 years or so. I expect a huge product explosion
again in the 2020s, and the super-intelligent computers by 2030.

If you want to believe that progress is going to slow down, then
you're welcome to your beliefs, but we'll just have to agree to
disagree.

> This is the usual pattern of technological development: an 'S'
> curve.
This isn't exactly right, and I'll use the power of computers as an
example. It's true that each technology paradigm follows an S curve,
but just as each paradigm begins to level off, a new one takes over.

Ray Kurzweil has tracked the power of calculating machines and
computers back to the 19th century, and found that they follow a
consistent, steadfast exponential growth curve through one technology
after another. The technologies he studied are:

(1) Punched card electromechanical calculators, used in 1890 and 1900
census

(2) Relay-based machine used during World War II to crack the Nazi
Enigma machine's encryption code.

(3) The CBS vacuum tube computer that predicted the election of
President Eisenhower in 1952.

(4) Transistor-based machines used in the first space launches

(5) Integrated circuits - multiple transistors on a chip (Moore's
Law)

Each of these technology paradigms follows an S curve, but whenever
one levels off, the next one takes over.

Integrated circuits are expected to level off in the 2010s, and then
nanotechnology will probably take over. So the power of computers
will continue to grow in a steadfast exponential manner.
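
The point is easy to check numerically. Here's a toy sketch (the
ceilings and midpoints are invented for illustration, not Kurzweil's
data): stack several logistic S curves, each new paradigm leveling
off about 100 times higher than the last, and the combined curve
climbs roughly exponentially.

    import math

    def s_curve(t, ceiling, midpoint, steepness=0.5):
        # One technology paradigm: fast growth, leveling off at `ceiling`.
        return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

    # Five paradigms: each ceiling ~100x the previous, 20 years apart.
    paradigms = [(100.0 ** k, 20 * k) for k in range(1, 6)]

    for t in range(10, 101, 10):
        total = sum(s_curve(t, c, m) for c, m in paradigms)
        # log10 climbs by a steady ~2 per 20 years: exponential growth.
        print(f"year {t:3d}: log10(power) = {math.log10(total):5.2f}")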

> Note that the driving force for the level-off may not be technical
> limits, but the limits of market demand.
Right now there's a snowstorm -- a blizzard -- of high technology
research projects going on around the world, in private labs,
universities, and government labs. Every nation, including the US,
is pouring funding into these projects because no one can afford to
be left behind. Even if there's no market demand for a year or two,
competition is driving these research projects to go on full speed
ahead.

> No, we have a good understanding of the physics (at least in
> general) of snowstorms, water phase changes, and we have enormous
> masses of practical data on the behavior of weather systems. We
> can test our models of such functions by the accuracy of their
> predictions.

> We can do none of that in technological forecasting for the
> progression of artificial intelligence.
OK, you have your probabilistic models for snowstorms, and I've just
described Ray Kurzweil's model for exponential growth of the power of
computers. And yes we can do technological forecasting for growth in
artificial intelligence. There's nothing magic about it.

Nothing is absolutely certain, but forecasts of super-intelligent
computers by 2030 are about as certain as you can get.

John







Post#37 at 06-24-2003 11:38 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Leados,

Quote Originally Posted by Leados
> The "test" I was referring to is whether we could beat the
> machines IF they turn on us. They could, but no one at this point
> knows. Why would we let computers design computers in the first
> place? That's the first step down the slippery slope...
I agree with you about the "slippery slope" concern, and you can bet
that that will be a big political and emotional issue when the time
comes.

But you can't stay off the slippery slope for long, because some huge
need comes along.

For example, if our computers are at war with China's computers, then
we're going to want our computers to be as good as possible, even
improving themselves out on the battlefield, because otherwise the
Chinese computers will clobber us.

Sooner or later it's all gonna happen.

John







Post#38 at 06-25-2003 01:23 AM by Mike [at joined Jun 2003 #posts 221]
---

Unless computers somehow are able to make mistakes and learn from them, there is no way computers can compete with humans. Computers run a program and are not allowed to make choices; they only execute commands. There is no way someone will ever be able to program a computer that complete. It will have to be able to learn on its own, which is impossible.







Post#39 at 06-25-2003 07:37 AM by Mikebert [at Kalamazoo MI joined Jul 2001 #posts 4,502]
---

Re: Eschatology - The End of the Human Race by 2100?

Quote Originally Posted by John J. Xenakis
Ray Kurzweil has tracked the power of calculating machines and computers back to the 19th century, and found that they follow a consistent, steadfast exponential growth curve through one technology after another. The technologies he studied are:

(1) Punched card electromechanical calculators, used in 1890 and 1900
census

(2) Relay-based machine used during World War II to crack the Nazi
Enigma machine's encryption code.

(3) The CBS vacuum tube computer that predicted the election of
President Eisenhower in 1952.

(4) Transistor-based machines used in the first space launches

(5) Integrated circuits - multiple transistors on a chip (Moore's
Law)

Each of these technology paradigms follows an S curve, but whenever
one levels off, the next one takes over.

Integrated circuits are expected to level off in the 2010s, and then
nanotechnology will probably take over. So the power of computers
will continue to grow in a steadfast exponential manner.

There is a difference between raw computing power and artificial intelligence. Modern computers are far more powerful than those of 20 years ago. They can do some things better than humans, like play checkers and chess. In general, any task that has a fixed set of rules can be done by a machine better than by a human. But thinking isn't rule-based as far as we can tell, and computers are still awfully bad at it. Computers still cannot recognize faces or read expressions. They also cannot navigate their environment effectively. Modern robots are about as smart as a cockroach in this dimension. Robot domestics are as far away today as they were twenty years ago.

Back in the 1950's when AI began as a field it was thought that robots capable of performing as domestic servants would surely be a reality by the end of the century. Didn't happen. And despite huge advances in processor power since the eighties they haven't gotten much smarter.

Look at your own PC. I ran a machine with a 3 MHz clock speed, an 8-bit word, and a simple instruction set twenty years ago. Today's machines are 1 or 2 GHz with 64-bit words and complex instruction sets. This suggests my modern machine might be 10-30 thousand times faster than what I used then. Back then it took two minutes to invert a 20x20 matrix. My modern PC can probably invert a 100x100 matrix in a second or so.
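
(For what it's worth, that last claim is easy to test with a modern numerical library; numpy here is just a convenient stand-in, not a claim about what I actually ran:)

    import time
    import numpy as np

    matrix = np.random.rand(100, 100)
    start = time.perf_counter()
    np.linalg.inv(matrix)
    # On any recent PC this prints a small fraction of a second.
    print(f"inverted 100x100 in {time.perf_counter() - start:.4f} s")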

Yet I have no need to invert 100x100 matrices. I don't even program any more. Instead I run software that is so bloated that it doesn't run any faster than it did years ago. My computer takes about as much time to boot up (and longer to shut down) as it did 15 years ago.

What I am getting at is that AI isn't going to come from advanced hardware. It will have to come from advanced software. As experience shows, the power of software grows much more slowly than that of hardware. Look at your PC interface. It's really the same WYSIWYG screen plus mouse & keyboard featured on the Apple Lisa 20 years ago.

We are still far away from talking to our PCs like on Star Trek.







Post#40 at 06-25-2003 05:47 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Mike,

Quote Originally Posted by Mike
> Unless computers somehow are able to make mistakes and learn from
> them, there is no way computers can compete with humans. Computers
> run a program and are not allowed to make choices; they only
> execute commands. There is no way someone will ever be able to
> program a computer that complete. It will have to be able to learn
> on its own, which is impossible.
Actually, computer software has had various feedback mechanisms for
error correction for a long time.

One simple example is that some game-playing programs have an ability
to analyze lost games and try to figure out why they lost, and then
adjust their evaluation weights for future games.

A slightly different example, but related, is how microprocessors
work on things like the space shuttle. There are always multiple
processors anyway, to provide redundancy. But the problem arises: If
one of the microprocessors is faulty, how do you find out? Well,
they have 5 separate microprocessors, and for a critical decision,
all 5 decide independently and then "vote" -- whatever decision wins
the vote is the action that's taken.
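
Sketched in code (highly simplified -- real flight software is far
more elaborate, and the processor functions here are just stand-ins):

    from collections import Counter

    def voted_decision(processors, sensor_input):
        # Ask every redundant processor for its decision and take the
        # majority answer, so a single faulty unit gets outvoted.
        votes = Counter(p(sensor_input) for p in processors)
        decision, _count = votes.most_common(1)[0]
        return decision

    healthy = lambda reading: "abort" if reading > 9.0 else "continue"
    faulty = lambda reading: "abort"   # stuck on one answer
    print(voted_decision([healthy] * 4 + [faulty], 3.5))  # -> continue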

Having a computer program that learns from its mistakes is really not
a significantly more difficult problem than many of the other
problems that have to be solved in building a super-intelligent
computer. The algorithm is: Computer does A, expecting B to happen.
But C happens instead of B. So the computer needs to have a way to
go back and figure out what it did wrong. By the way, that's what
human beings do, isn't it?
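
The bare loop might look like this (a toy, delta-rule-style weight
adjustment, just to show the shape of the idea -- not a claim about
how any real system is built):

    def learn_from_mistake(weights, features, expected, actual, rate=0.1):
        # The program did A expecting `expected`, but `actual` happened.
        # Nudge each evaluation weight in proportion to how much that
        # feature contributed to the bad expectation.
        error = expected - actual
        for name, value in features.items():
            weights[name] -= rate * error * value
        return weights

    # Example: after a lost game, the program decides it overvalued
    # material relative to mobility.
    weights = {"material": 1.0, "mobility": 0.5}
    learn_from_mistake(weights, {"material": 0.8, "mobility": 0.2},
                       expected=+1.0, actual=-1.0)
    print(weights)  # material is reduced more than mobility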

John







Post#41 at 06-25-2003 05:52 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Mike,

Quote Originally Posted by Mike Alexander '59
> There is a difference between raw computing power and artificial
> intelligence. Modern computers are far more powerful than those of
> 20 years ago. They can do some things better than humans, like
> play checkers and chess. In general, any task that has a fixed set
> of rules can be done by a machine better than by a human. But
> thinking isn't rule-based as far as we can tell, and computers are
> still awfully bad at it. Computers still cannot recognize faces or
> read expressions. They also cannot navigate their environment
> effectively. Modern robots are about as smart as a cockroach in
> this dimension. Robot domestics are as far away today as they were
> twenty years ago.
You've raised a number of very interesting issues, and you're right
that many of the goals of AI have not been met.

For example, in 1957, AI researchers expected a chess-playing
computer to be world champion within ten years or so, and yet by
1970, the best chess-playing computer was little better than a
beginner. The researchers were unable to develop algorithms and
heuristics which could mimic the reasoning of a chess master or
grandmaster.

And yet, today's chess-playing computers play at world champion
level. How did computers get so good in 30 years? It turns out that
the chess-playing algorithms used in today's computers are really not
much different from the algorithms used in 1970. So why are they
doing so much better today? It's because computers are much more
powerful. A chess-playing algorithm could only look 3 or 4 moves
ahead in 1970, but can look 8 to 10 moves ahead today, because
computers are much more powerful.
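
The algorithm itself is simple enough to sketch (a bare-bones
lookahead, minus real-engine refinements like alpha-beta pruning;
the game-specific functions are placeholders):

    def lookahead_score(position, depth, legal_moves, play, evaluate):
        # Brute-force search `depth` moves ahead.  The main thing that
        # changed between 1970 and today is how large `depth` can be.
        moves = legal_moves(position)
        if depth == 0 or not moves:
            # evaluate() scores a position from the side-to-move's view.
            return evaluate(position)
        # Negamax: my best score is the negation of my opponent's best.
        return max(-lookahead_score(play(position, move), depth - 1,
                                    legal_moves, play, evaluate)
                   for move in moves)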

This example illustrates, in a sense, how much of a failure
artificial intelligence research has been. Back in the 50s, 60s, and
70s, researchers were expecting to find elegant algorithms and
heuristics that would make computers match humans in a variety of
areas -- game playing, voice recognition, natural language
processing, computer vision, theorem proving, and so forth. But the
fact is that AI researchers have failed to do so in every area.

Instead, they've fallen back onto "brute force" algorithms. The
phrase "brute force" was meant to be pejorative, but now it's really
become the only game in town. What "brute force" means is to use the
power of the computer to try every possibility until one works.

Take voice recognition, for example. Today we have several
commercially available programs that do a pretty decent job of
"taking dictation" -- listening to your voice and typing what you
say. To get them to work well, you have to "train" them for many
hours, but once the training is over, they can do pretty well.

This is pure brute force technology. The training simply means that
you recite dozens of phrases that the computer saves on disk. Then
when you dictate, the computer simply compares what you say to the
saved patterns until it finds a match.
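
Reduced to its skeleton, the matching step is nothing fancier than
this (a caricature -- real recognizers use statistical models, but
the compare-against-everything idea is the same):

    def recognize(sample, stored_patterns, distance):
        # Brute force: compare the spoken sample against every saved
        # training pattern and return the closest word.
        return min(stored_patterns,
                   key=lambda word: distance(sample, stored_patterns[word]))

    # Toy example: 1-D "voice prints" and absolute-difference distance.
    patterns = {"yes": 0.9, "no": 0.2}
    print(recognize(0.75, patterns, lambda a, b: abs(a - b)))  # -> yes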

Voice recognition technology has been getting better all the time.
In 1990, they were limited to a vocabulary of a few hundred words.
By 1995, the vocabulary was up to 10,000 words, but you had to say
each word separately and distinctly, without running words together
as you speak. By 2000, they supported continuous speech. As time
goes on, they get better and better, because they're able to do a
lot more pattern matching as computers get more and more powerful.

There's been some algorithmic improvement over the years, but the
main difference between 1990 and today is the power of the computers.

By the 2020s, thanks to vastly more powerful computers, and vastly
larger disk space, when you buy a voice recognition program, it will
come with billions upon billions of stored voice patterns. You
probably won't have to train it, since the computer will be able to
compare your dictation with the stored patterns that come with the
program. This is not especially clever from an AI point of view, but
it shows how well "brute force" algorithms work as computers get more
and more powerful.

The same is true of one AI technology after another. In the end,
they'll all fall to brute force techniques.

This shouldn't be surprising -- after all, that's how our human brain
works. Our brains are exceptionally good at pattern matching and
associative memory. When you look at a chair, you don't start to
think, "Let's see, it has four legs, with a flat part on top, so it
must be a chair." What happens is that your brain instantly compares
what you see to all the other things you've previously identified as
being chairs, and instantly identifies it as such, and then
associates the result with "Gee, I think I'll sit down."

That's how the first generation of super-intelligent computers will
work. They'll simply mimic the human brain's capacity for pattern
matching and association -- two things that don't work very well on
today's slow computers, but will work very well on the powerful
computers of the 2020s.

Computers are roughly doubling in power every 18 months, and so by
2030, they'll be approximately 100,000 times as powerful as today.
That should be enough power to implement a first generation
super-intelligent computer. By 2050, computers will be about
10,000,000,000 times as powerful as today, and that will certainly be
more than enough.
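
Here's the arithmetic behind those ballpark figures (doubling every
18 months, counting from 2003):

    def growth_factor(years, doubling_time=1.5):
        # How much more powerful computers get in `years` years if
        # power doubles every `doubling_time` years.
        return 2 ** (years / doubling_time)

    print(f"{growth_factor(2030 - 2003):,.0f}")  # 262,144 -- order 100,000
    print(f"{growth_factor(2050 - 2003):,.0f}")  # ~2.7 billion -- billions of times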

John







Post#42 at 06-25-2003 08:06 PM by Leados [at joined Sep 2002 #posts 217]
---

John,

I think the question here is how long computers can keep on doubling in speed. I think in about 10 years or so, current tech is going to run out of space to double like that. I'm sure they're developing new technologies to circumvent that, but I think they're already starting to see that there is a molecular/atomic limit to the way microchips can be made and run. So will computers gain a critical mass of speed, enough to overcome human thought? I still agree with Mike that humans will probably always be able to outwit a computer, although if the computer is fast enough to think of all responses to something, humans are doomed. But "all" is a superlative, and it's doubtful that they will ever be able to think of ALL possibilities.
My name is John, and I want to be a Chemist When I grow up.







Post#43 at 06-25-2003 10:24 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Leados,

Quote Originally Posted by Leados
> I think the question here is how long computers can keep on
> doubling in speed. I think in about 10 years or so, current tech
> is going to run out of space to double like that. I'm sure they're
> developing new technologies to circumvent that, but I think
> they're already starting to see that there is a molecular/atomic
> limit to the way microchips can be made and run. So will computers
> gain a critical mass of speed, enough to overcome human thought? I
> still agree with Mike that humans will probably always be able to
> outwit a computer, although if the computer is fast enough to
> think of all responses to something, humans are doomed. But "all"
> is a superlative, and it's doubtful that they will ever be able to
> think of ALL possibilities.
The current technology, using integrated circuits, is expected to
flatten out in about 10 years or so, but the new nanotechnologies will
just be starting up, and I wouldn't expect those to flatten out for
many decades.

It's possible that there's some physical limit to the exponential
growth of computer power, but it really has to be a long way off.

Let me repeat a story that I posted earlier. I had a summer job in
computer programming at Honeywell Corp. in 1964. I had a conversation
with the engineer sitting at a nearby desk. He said, "Computers
can't get much faster. The circuits in today's computers require
signals to travel at almost the speed of light, and there is no way to
get around that. Thus, computers may get a little faster, but not
much faster than they are today."

Well, in 1964 it seemed that the speed of light was about to limit
the power of computers, but it didn't happen. As of today, I don't
think it will happen for a long, long time.

John







Post#44 at 06-26-2003 12:04 AM by Leados [at joined Sep 2002 #posts 217]
---

John,

I think we can both agree it's not very useful to conjecture about this very much.

This issue divides into two sides: those that think machines will take over the earth, and those that do not. Either is equally plausible and yet implausible.

Also, are there enough raw materials to make the number of robots necessary to destroy the human race? I'm sure there will have to be some very exotic materials developed in order to make the future you speak of happen.

This is fun.

Leados
My name is John, and I want to be a Chemist When I grow up.







Post#45 at 06-26-2003 12:21 PM by Prisoner 81591518 [at joined Mar 2003 #posts 2,460]
---

The time frame postulated for Humanity's end (c.2100) I can definitely see, assuming we don't do it this 4T, but I'd more likely bet on us doing it to ourselves somehow, in either case.







Post#46 at 06-26-2003 12:47 PM by Mike [at joined Jun 2003 #posts 221]
---

Once computers reach the speed-of-light limit, there are still ways to increase processing power: they can be made smaller and more efficient, and more processors can be added to divide the load. However, as stated, there isn't a finite number of possibilities, and there's no way for the computers to think of everything. The very nature of following rules doesn't give them a choice, or let them be creative. They only know what is programmed. Eventually humans would be able to work this against them.







Post#47 at 06-27-2003 07:04 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Leados,

Quote Originally Posted by Leados
> Also, are there enough raw materials to make the number of robots
> necessary to destroy the human race? I'm sure there will have to
> be some very exotic materials developed in order to make the
> future you speak of happen.
These are questions that will be answered in the future. All I know
is that a solution is always found for problems of this type.

John







Post#48 at 06-27-2003 07:05 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Dear Titus,

Quote Originally Posted by Titus Sabinus Parthicus
> The time frame postulated for Humanity's end (c.2100) I can
> definitely see, assuming we don't do it this 4T, but I'd more
> likely bet on us doing it to ourselves somehow, in either case.
That's an interesting point, but I suspect that if only
"traditional" methods of human extermination were used -- war,
disease, famine -- then somehow little enclaves of humans would
survive.

I would think that to really wipe out the human race it would take an
intelligent exterminator with the tenacity to track down every human,
no matter where he or she was hiding.

John







Post#49 at 06-27-2003 07:06 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Mike,

Quote Originally Posted by Mike
> Once computers reach the speed-of-light limit, there are still
> ways to increase processing power: they can be made smaller and
> more efficient, and more processors can be added to divide the
> load. However, as stated, there isn't a finite number of
> possibilities, and there's no way for the computers to think of
> everything. The very nature of following rules doesn't give them a
> choice, or let them be creative. They only know what is
> programmed. Eventually humans would be able to work this against
> them.
Yes, but isn't the human mind just a kind of computer itself? A lot
of researchers think so, and some are looking to "reverse engineer"
the human brain so that it can be implemented in computer software
when computers become powerful enough -- and then computers will be
just as creative as humans are.

John







Post#50 at 06-27-2003 07:18 PM by Leados [at joined Sep 2002 #posts 217]
---

Then a lot of researchers are going to end up killing us. Unless we integrate with them, which is another option, since there's a strong argument that we've stopped evolving since agriculture began.
My name is John, and I want to be a Chemist When I grow up.
-----------------------------------------