Generational Dynamics
Fourth Turning Forum Archive



Thread: The Singularity - Page 6







Post#126 at 07-23-2003 11:35 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Eschatology - The End of the Human Race by 2100?

Dear Justin, Max, and Mike,

You've all completely missed the point. All you've done is prove
that Eliza is stupid -- but I already told you that.

The point is that you all treated her like a real girl -- as if you
were teenagers making contemptuous fun of a girl who had difficulty
speaking English or who really was dumb.

Justin, you asked her, "Zvest Brzk Mzdest" several times, and she
said she didn't understand you in several different ways. What would
you expect any real girl to do?

Max, would you ask your car about your butt? No, because you KNOW
your car isn't self-aware. But you asked Eliza about your butt
because you thought of her as a girl, and you wanted to sexually
harass her. (Is there an EEOC for computer girls?)

Mike, you asked her some really dumb questions. Why would she know
how many feet are in a mile? She's just a girl, and you know it.

Now remember this: Eliza is a simple 500-line piece of code. The
super-intelligent Eliza that I'm talking about may contain something
like 500 BILLION lines of code. I can assure you that she will NOT
be dumb. She'll be able to answer all your questions very
intelligently.
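
(If you're curious just how simple: the whole trick in the original
Eliza is keyword matching plus pronoun reflection. Here's a minimal
sketch in Python -- illustrative stand-in patterns, not Weizenbaum's
actual script:)

    import random
    import re

    # Each rule is a keyword pattern plus canned replies; "%s" gets
    # filled in with the user's own words, pronoun-swapped.
    RULES = [
        (re.compile(r"i am (.*)", re.I), ["Why do you say you are %s?"]),
        (re.compile(r"i want (.*)", re.I), ["What would it mean to you to get %s?"]),
        (re.compile(r"(.*)", re.I), ["Please go on.",
                                     "I don't understand you.",
                                     "Can you say that another way?"]),
    ]

    SWAPS = {"my": "your", "your": "my", "me": "you", "you": "me"}

    def reflect(text):
        # Swap first and second person so the echo sounds like a reply.
        return " ".join(SWAPS.get(w, w) for w in text.lower().split())

    def eliza(line):
        for pattern, responses in RULES:
            m = pattern.match(line)
            if m:
                reply = random.choice(responses)
                return reply % reflect(m.group(1)) if "%s" in reply else reply

    print(eliza("I am not a real girl"))  # Why do you say you are not a real girl?
    print(eliza("Zvest Brzk Mzdest"))     # falls through to a stock brush-off

That's the whole game, scaled down: nothing in there understands
anything; it just mirrors you back at yourself.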

In fact she'll be able to make a fool out of any one of the three of
you before you even know what hit you.

Sure, you may be able to make some analytical argument that
super-intelligent Eliza isn't "really" self-aware, but it'll all be
obsessive rationalization. You were pretty much fooled by dumb
Eliza; by the time super-intelligent Eliza is through with you,
you'll be going nuts panting and wishing you could have her. She'll
have you eating out of the palm of her hand, and there isn't a damn
thing you can do about it.

John







Post#127 at 07-23-2003 11:44 PM by Max [at Left Coast joined Jun 2002 #posts 1,038]
---

Well, perhaps Justin and Mike will be eating out of "her" hand, but not me, Max(ine) :wink:
...."um...(obvious confusion)...what?"
"Max"
(silence)
"It's short for Maxine"
" *brightens*....oh!"
"But nobody calls me that"







Post#128 at 07-24-2003 12:00 AM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Dear Maxine,

Quote Originally Posted by Max
> Well, perhaps Justin and Mike will be eating out of "her" hand,
> but not me, Max(ine)
Maybe Eliza's older brother's available!

John







Post#129 at 07-24-2003 01:14 AM by Max [at Left Coast joined Jun 2002 #posts 1,038]
---

Does he look anything like the Terminator? 8)
Or maybe Rutger Hauer in Blade Runner?
...."um...(obvious confusion)...what?"
"Max"
(silence)
"It's short for Maxine"
" *brightens*....oh!"
"But nobody calls me that"







Post#130 at 07-24-2003 02:31 AM by Mike [at joined Jun 2003 #posts 221]
---

Re: Eschatology - The End of the Human Race by 2100?

Quote Originally Posted by John J. Xenakis
Dear Justin, Max, and Mike,

Sure, you may be able to make some analytical argument that
super-intelligent Eliza isn't "really" self-aware, but it'll all be
obsessive rationalization.
No, it's not rationalization. How will computers be able to do anything original or creative if they are not self-aware? If a computer were to invent a light bulb, it would first need to recognize the need for light. Then it could think of which materials in what combination would create light. But since a computer is not self-aware in the sense we know, it would not be able to accomplish the task. The only needs it recognizes are the needs preprogrammed into it. If a new need for light arises that was not previously programmed, it will go unrecognized.







Post#131 at 07-24-2003 09:21 AM by Justin '77 [at Meh. joined Sep 2001 #posts 12,182]
---

Re: Eschatology - The End of the Human Race by 2100?

Quote Originally Posted by John J. Xenakis
Justin, you asked her, "Zvest Brzk Mzdest" several times, and she said she didn't understand you in several different ways. What would you expect any real girl to do?
Walk away, start up a conversation with the guy next to me, start dating and marry my best friend, respond "Harrufurrhrh fhhrunnhhrhdth mhrhhhrhhrr", etc., etc. The very nature of consciousnesses outside one's own is their total unpredictability. This is the flaw in your repeated requests that I provide an example of the response of a self-aware entity. A computer can be programmed to generate every type of example response I supply -- but not until someone supplies it, or a model of it, first.

You keep hammering the "she can fool you" angle. As I mentioned, deceiving casual observation is not really proof of anything (other than inadequate level of observation, I guess). Self-awareness is a quality of the entity, not an observed state.







Post#132 at 07-24-2003 11:08 AM by Child of Socrates [at Cybrarian from America's Dairyland, 1961 cohort joined Sep 2001 #posts 14,092]
---

Justin,

Happy P2K!

From your smarter-than-average liberal in the sunny Midwest. :wink:







Post#133 at 07-24-2003 11:49 AM by Max [at Left Coast joined Jun 2002 #posts 1,038]
---

:lol: You're right, Kiff, I wouldn't expect to have a conversation with you
like the one I had with Eliza.
...."um...(obvious confusion)...what?"
"Max"
(silence)
"It's short for Maxine"
" *brightens*....oh!"
"But nobody calls me that"







Post#134 at 07-24-2003 12:37 PM by Max [at Left Coast joined Jun 2002 #posts 1,038]
---

John,
Is this takeover by machines something you believe will happen?
Or do you view it as a likely scenario?
...."um...(obvious confusion)...what?"
"Max"
(silence)
"It's short for Maxine"
" *brightens*....oh!"
"But nobody calls me that"







Post#135 at 07-24-2003 09:45 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

To Mike Alexander:

I was looking through old messages in this thread in order to answer
Justin's question, and I found, to my surprise, that I never posted my
answer to one of your messages. I have the answer in my notes, but I
was answering several messages at one time and evidently skipped
answering yours. Please accept my apologies.

Here's your message and my response:

Quote Originally Posted by Mike Alexander '59
Quote Originally Posted by John J. Xenakis
Dear Hopeful Cynic,

I did a quick plot and added an exponential growth trend line, and
here's what I came up with:

[chart: land-speed records by year, with an exponential growth trend line]

That looks pretty good to me!
It looks like a linear fit works better than an exponential one. For
the year 2000, the exponential trend predicts about 1200 mph while the
linear predicts about 800 mph. The actual value in 1997 was 763
mph.

Dear Mike,

> It looks like a linear fit works better than an exponential one.
> For the year 2000, the exponential trend predicts about 1200 mph
> while the linear predicts about 800 mph. The actual value in 1997
> was 763 mph.

You know, you're right, although the exponential trend line actually
works pretty well too.

But there's a third possibility, and this is what I think is really
happening, though it's a guess: I believe the curve is leveling
off into an S-curve because of the wheels. At some point, the wheels
have to spin so fast that the internal friction sets a limit to the
land speed. In order to continue exponential growth in speed of land
vehicles, it'll be necessary to find a way to eliminate the wheels.
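
(If you want to replay the curve-fitting yourself: the exponential fit
is just a linear fit on log(speed). A sketch, assuming numpy, with two
stand-in points -- the 1906 record of roughly 127 mph and the 763 mph
figure from 1997; swap in the full record series to get meaningful
trend lines:)

    import numpy as np

    # Stand-in data -- replace with the full land-speed record series.
    years = np.array([1906.0, 1997.0])
    mph = np.array([127.0, 763.0])

    # Linear trend: mph = a*year + b
    a, b = np.polyfit(years, mph, 1)

    # Exponential trend: mph = A*exp(k*year), i.e. log(mph) linear in year
    k, logA = np.polyfit(years, np.log(mph), 1)

    year = 2000.0
    print("linear trend at 2000:     ", a * year + b)
    print("exponential trend at 2000:", np.exp(k * year + logA))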

John







Post#136 at 07-24-2003 09:48 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Mike,

> No, it's not rationalization. How will computers be able to do
> anything original or creative if they are not self-aware? If a
> computer were to invent a light bulb, it would first need to
> recognize the need for light. Then it could think of which
> materials in what combination would create light. But since a
> computer is not self-aware in the sense we know, it would not be
> able to accomplish the task. The only needs it recognizes are the
> needs preprogrammed into it. If a new need for light arises that
> was not previously programmed, it will go unrecognized.

This is an interesting question, but it's not a self-awareness
question -- it's a goal-setting question.

In the two examples I gave -- Edison's invention of the incandescent
light, and Wiles' proof of Fermat's Last Theorem -- there was a
well-defined goal. I showed that the goal could be reached by set of
steps that a computer could emulate in software, but you're making
the point that the goals were set by humans. How could the computer
set its own goals?

Well, how do humans set goals? Why would Wiles lock himself in an
attic for seven long years to prove a mathematical theorem which is
of absolutely no practical use? Why did Edison and Wiles select
those goals for themselves?

To answer this question, I would go back to Darwinian evolution.
Evolution has provided us -- and indeed every living creature -- with
one overriding goal, personal survival, survival of the tribe,
survival of the species.

Nature has given us a "survival instinct" which makes us do
everything necessary to stay alive; nature gives us sex and a sex
drive to guarantee that the species will survive; and nature gives us
a desire for war (as well as famine and disease) to guarantee that
the fittest tribes will survive.

Both Edison and Wiles fit this paradigm. In fact, if you think about
it, every goal you set for yourself is related to: "I'm better than
you are. My tribe is better than your tribe. My species is better
than your species." In fact, the whole Fourth Turning concept is part
of this.

Super-intelligent computers, especially those used in warfare,
will undoubtedly also have a "survival instinct." This means that
they'll set goals for themselves to achieve results which are most
likely to help them survive.

In fact, let's recall Asimov's Three Laws of Robotics, posted earlier
in this thread:

(1) A robot may not injure a human being, or, through inaction, allow
a human being to come to harm.

(2) A robot must obey the orders given it by human beings except
where such orders would conflict with the First Law.

(3) A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.

Now, as I said at the time, I don't believe that these laws will be
universally implemented, since super-intelligent computers will be
used for warfare. But these laws are examples of the kinds of goals
that these computers will be striving for.

Once you know what your overall goal is (e.g., "Kill people in the
enemy army"), then the computer can do what a human does -- select
from a menu of things to do and pick the one that's most likely to
kill people in the enemy army. Only the computer will make its
choices more accurately and much faster.
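
(In software terms, that selection step is nothing exotic: encode the
goal as a scoring function and take the best item on the menu. A toy
sketch -- every name and number here is made up for illustration:)

    # Toy goal-directed selection: the "menu" is a list of candidate
    # actions; the goal is encoded as a scoring function.
    def choose_action(menu, score):
        return max(menu, key=score)  # pick the highest-rated action

    menu = ["advance", "hold position", "flank left", "retreat"]

    # Stand-in for "most likely to achieve the goal" -- in a real
    # system this would be some enormous simulated or learned estimate.
    estimated_success = {"advance": 0.4, "hold position": 0.2,
                         "flank left": 0.7, "retreat": 0.1}

    print(choose_action(menu, estimated_success.get))  # flank left

The hard part is the scoring function, of course -- that's where the
500 billion lines go -- but the control loop around it really is that
simple.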

So super-intelligent computers will indeed be able to set goals.

John







Post#137 at 07-24-2003 09:50 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Justin,

> You keep hammering the "she can fool you" angle. As I mentioned,
> deceiving casual observation is not really proof of anything
> (other than inadequate level of observation, I guess).
> Self-awareness is a quality of the entity, not an observed state.

Yes, but you keep going around in the same circle. Even though I've
asked you repeatedly what you mean by "self-awareness," you never
explain it. In the meantime, I've looked at it in several different
ways, and showed that a computer provides a satisfactory solution. I
addressed it via the dictionary definition, I addressed it via
"creative thinking," I dealt with the "infinity" of the mind.

With regard to perception, I made the following point: You have no
right to claim that super-intelligent computers are not "self-aware,"
because the only way you could make that claim is through perception.

You make statements like, "Self-awareness is a quality of the
entity," without telling what "quality" you're referring to, or even
giving any indication that YOU have any idea what the "quality" is.

Now, if you want to play the game of "he'll never prove computers can
be self-aware, because I'll just tell him he's wrong no matter what
he says," then I guess you win.

But I've told you three times now: Tell me what you mean by
self-awareness, and I'll tell you how to implement it in software.

If you can't tell me what you mean, then you probably don't mean
anything.

As for me, my response to the self-awareness argument is: (1)
Self-awareness probably doesn't even have a meaning, except through
perception; (2) whatever it is, I can implement in software; and (3)
self-awareness is completely irrelevant anyway.

John







Post#138 at 07-24-2003 09:56 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Eschatology - The End of the Human Race by 2100?

Dear Max,

> Is this takeover by machines something you believe will happen?
> Or, do you view it as a likely scenario?

Since we're talking about a lot of different things, I'd have to give
multiple answers.

(1) Computers will become more intelligent than humans within a few
decades. Probability: 99% .

Explanation: I already KNOW how to write this software -- using a
collection of brute force techniques. However, brute force
algorithms take huge amounts of computer power, and today's computers
aren't powerful enough. By 2030, computers will be 100,000 times as
powerful as they are today, and that should be enough.

(2) Computers will be more intelligent than humans by 2030 (90%) or
2040 (98%) or 2050 (99%).

I'm pretty sure that 2030 will be good enough, but if the date is
wrong, then computers will be 100,000,000 times as powerful as they
are today by 2050, so there's no doubt about 2050.
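
(For what it's worth, those multipliers imply a doubling time you can
check in a couple of lines -- roughly 19 to 21 months, right in line
with the usual Moore's Law figures:)

    from math import log2

    # Implied doubling times behind the multipliers above (base year 2003).
    for target_year, multiplier in [(2030, 1e5), (2050, 1e8)]:
        years = target_year - 2003
        print(f"{multiplier:.0e} x by {target_year}: one doubling every "
              f"{years / log2(multiplier):.2f} years")

    # 1e+05 x by 2030: one doubling every 1.63 years
    # 1e+08 x by 2050: one doubling every 1.77 years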

Incidentally, the only reason I'm saying 99% instead of 100% is
because there might be a huge war or something that wipes out
everything. But short of a catastrophe like that, the probability is
really 100%.

(3) Computers will be as much more intelligent than humans in 2100 as
humans are more intelligent than cats and dogs. Probability: 99%.

Explanation: Computers will be getting more and more intelligent
every year, while humans will stay pretty much the same.

(4) The human race will become extinct by 2100. Probability: 50%.

Explanation: When I say 50%, I'm saying I really don't know. We can
imagine two scenarios: (a) Computers decide that, for their own
survival, they need to get rid of all humans; or (b) Computers
decide that we're pretty irrelevant, and they'll keep us around, just
like we keep cats and dogs around. In scenario (b), they may even
wait on us and give us everything we want -- what do they care?

-----------------------------------------

Finally, I have to address one more scenario that I personally have
mixed emotions about. There are some who believe that the human race
will cease to exist much more quickly -- in the 2030s. The reasoning
is as follows:

Once computers become more intelligent than humans (and that will
happen around 2030), then things will start to change very
dramatically. For that reason, the point in time where this happens
is called "The Singularity," because it will be the most important
point in time in the history of Earth. As I've said, I expect the
Singularity to occur around 2030.

Technological growth has been proceeding according to a fixed
exponential growth formula probably for centuries and maybe even
millennia. During that time, the human brain hasn't changed much at
all, or gotten any more powerful.

However, once we reach the Singularity, then there'll be something
very different: the "brain" that's doing the inventions for
technological growth will ALSO be growing exponentially in power.
This means that the technology will change much more quickly than
before, and we may have millennia of progress in just 10 or 20
years. According to this reasoning, imagine what you used to think
the world would be like in the year 2500 or 3000 -- well, we'll reach
that point by 2050. In this world, the reasoning concludes, humans
have no place and will become extinct.

Now, I buy this argument partially, but not completely.

For one thing, it ignores the economic Law of Diminishing Returns,
which says that if one resource increases more rapidly than the
others, then after a while the extra resource gets wasted. It seems to
me that this would apply even when the resource is intelligence. So
even as computer intelligence grows very quickly, lack of other
resources will hold back overall growth.

Secondly, as I said before, it's not clear to me that
super-intelligent computers will automatically want to wipe out all
humans. They may decide that having humans around aids in their own
survival, just as humans have decided (through the environmental
movement) that having other species around helps humans.

However, it's something to consider.

-----

So, you asked what I believe, and here's a brief summary: I believe
that computers will be more intelligent than humans by the 2030 time
frame, and that all hell will break loose as a result. I don't know
whether humans will survive or not, and I can see it going either
way. I believe that philosophers and theologians should start
thinking about this problem right away, for the following reason:
Human beings will build the first generation of super-intelligent
computers, and will have some say in how they're built; the second
generation will be built by the first generation, and humans will
have no say. So we have only one shot at influencing the development
of super-intelligent computers, and we ought to start thinking about
it now.

John







Post#139 at 07-25-2003 01:21 AM by Mike [at joined Jun 2003 #posts 221]
---

Re: Eschatology - The End of the Human Race by 2100?

Quote Originally Posted by John J. Xenakis
Human beings will build the first generation of super-intelligent
computers, and will have some say in how they're built; the second
generation will be built by the first generation, and humans will
have no say. So we have only one shot at influencing the development
of super-intelligent computers, and we ought to start thinking about
it now.
You miss something here. You say that we will have no influence on the second generation, BUT we have influence on the first generation that makes the second. Why, and more importantly HOW, would the first generation ignore our influence in creating the second? Since we made rules that the first generation cannot violate in creating the second, the second generation will be no different.







Post#140 at 07-25-2003 01:50 AM by Mike [at joined Jun 2003 #posts 221]
---

Re: Eschatology - The End of the Human Race by 2100?

Quote Originally Posted by John J. Xenakis
So super-intelligent computers will indeed be able to set goals.
The difference between a human's natural drive and a program's rules is that the drive can be ignored; rules cannot. Suicide is the denial of the natural drive. It is this flexibility that allows us to change our goals and needs. Can a program change its own rules?







Post#141 at 07-25-2003 10:46 AM by Brian Rush [at California joined Jul 2001 #posts 12,392]
---

Wow, I've been avoiding this discussion, but it turned into something very interesting. A few scattered observations:

Justin:

Your conception of possible thoughts, which renders them infinite, also renders them something not chosen, but rather something that emerges from perception/experience/the unconscious without volition, and upon which we act. John's point was that our choices are all made from a finite menu, making them amenable to computer mimicry.

Regarding self-awareness, you are referring to something completely subjective, which can only be known from within. In that sense, you know that you are self-aware, and you can never show that a computer is. But then, how can you show that I am? This is always an unprovable assumption for any entity other than yourself.

It is possible to define self-awareness objectively and to show it using certain tests. For example, a researcher once put a sticker on a chimpanzee's forehead while giving her a hug. The chimp saw the sticker in a mirror, and promptly removed it. This showed that the chimp knew the mirror showed an image of herself, and was therefore aware of herself. (Or at least could behave as if she were.) Similar tests could be devised for machines, although I certainly agree the link John posted does not provide such a test.

John:

It is generally a mistake to take a curve of change operant at the moment and assume it will continue into the future. As you observed with the curve for land vehicle speed, physical factors can insert limits which change the rate of progress.

Regarding computer power, we should expect a quantum jump in the future as molecular computing is made practicable, although exactly how long that will take I can't project. In other words, we haven't reached the physical limits yet. But those limits do -- must -- exist. We live in the real world, not in a mathematical abstraction, and projections of trends should not be blindly trusted.

Also, there is one factor in human intelligence which explorers of artificial intelligence are not incorporating, and which computers are deliberately designed to exclude, but it is an absolutely vital factor if intelligence is to provide true creativity: indeterminacy. Creative intelligence requires not only making choices from a menu, but also creating new menus in unforeseen circumstances, and making choices on insufficient data for logic. A natural intelligence chooses randomly at times, or intuitively, because it must; it doesn't know enough to make the choice logically. Artificial intelligence will be faced with the same sort of knowledge gap and must make the same sort of non-logical choices. Algorithms mimicking human decision-making in known and understood circumstances, no matter how powerful, cannot bridge this gap by themselves. For these reasons, I am skeptical of your claim to already be able to program artificial intelligence using brute-force algorithms, once a sufficiently powerful computer exists. However, none of this means it couldn't be done.

Regarding the use of intelligent machines in war, assuming that the present social order will continue into the future is even more reckless than projecting current trends. The problems we face now require, if civilization is to survive, a degree of international cooperation that will likely render war as we have known it obsolete, just as the industrial revolution rendered chattel slavery obsolete, and led also to the emancipation of women. A change in material circumstances is often all that is required to turn a utopian idea into an inevitable one. So it will be with war, and by the time we could have computerized weapons capable of completely replacing human military forces, we will have no wars for them to fight. And this is true whether we succeed or fail to meet the current crisis, because if we fail (barring such total failure as renders us extinct) wars will continue, but computers will not.

And finally, it's a mistake to assume that human intelligence will itself be a constant, unchanged in the future. We face two technological revolutions in the near term, and artificial intelligence is one of them. The other is biological engineering. We, too, are capable of upgrades.







Post#142 at 07-25-2003 06:04 PM by Justin '77 [at Meh. joined Sep 2001 #posts 12,182]
---

Brian (and John):

I have sort of argued myself into a subjectivist corner on the whole self-awareness thing. My conviction that some unprovable difference exists between self-aware beings outside myself and things which merely mimic self-awareness appears to be a visceral superstition. Not that I intend to give it up, mind you. I think, on some level, I believe in a 'soul' -- or at least some sort of continuity of being.

It's why I would never willingly be teleported, FWIW.

Pondering this dead end, I have to concede to John that, on the terms of the discussion, there is no reason why a computer could not simulate self-awareness well enough for it to make no difference to those interacting with it whether they considered it conscious or not.







Post#143 at 07-28-2003 11:22 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Mike,

Quote Originally Posted by Mike
> You miss something here. You say that we will have no influence
> on the second generation, BUT we have influence on the first
> generation that makes the second. Why, and more importantly HOW,
> would the first generation ignore our influence in creating the
> second? Since we made rules that the first generation cannot
> violate in creating the second, the second generation will be no
> different.
Sorry for being unclear. You've stated the point I wanted to make a
lot more clearly than I did: that we can influence future generations
only through the first generation.

Can we make rules in the first generation that can't be broken in
future generations? I tend to suspect not. But even if we could, our
enemies wouldn't necessarily do the same.

John







Post#144 at 07-28-2003 11:23 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Mike,

Quote Originally Posted by Mike
> The difference between a human's natural drive and a program's
> rules is that the drive can be ignored; rules cannot. Suicide is
> the denial of the natural drive. It is this flexibility that
> allows us to change our goals and needs. Can a program change its
> own rules?
Well, humans have rules that can't be ignored (e.g., "you have to
breathe"). And computers can be given rules that can be ignored.

If you're writing a chess-playing program, then you'd want to require
the computer to strictly follow the rules of chess, but other rules
("Develop your knights before your bishops") could be ignored in the
right circumstances.
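
(A minimal sketch of that distinction, with hypothetical names -- the
hard rules decide what's on the menu at all, while a soft rule is just
a ranking bonus that a clearly better move can outweigh:)

    from dataclasses import dataclass

    @dataclass
    class Move:
        name: str
        value: float               # stand-in for a real position evaluation
        develops_knight: bool = False

    def choose_move(legal_moves):
        # The hard rules were already applied: only legal moves are on
        # the menu. The soft rule only nudges the ranking.
        def score(m):
            bonus = 0.1 if m.develops_knight else 0.0
            return m.value + bonus
        return max(legal_moves, key=score)

    moves = [Move("Nf3", 0.30, develops_knight=True),
             Move("Bc4", 0.25),
             Move("Qxf7+", 0.90)]   # breaks the soft rule because it wins
    print(choose_move(moves).name)  # Qxf7+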

John







Post#145 at 07-28-2003 11:26 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Brian,

Quote Originally Posted by Brian Rush
> Wow, I've been avoiding this discussion, but it turned into
> something very interesting.
Yes, it has taken some unexpected twists and turns.

> It is generally a mistake to take a curve of change operant at
> the moment and assume it will continue into the future. As you
> observed with the curve for land vehicle speed, physical factors
> can insert limits which change the rate of progress.
If the variable being measured is chosen correctly, then the
exponential growth formula will hold over multiple technologies and
technology paradigms. In the case of the land vehicle, the wheel is
the limiting factor that prevents land vehicles from exceeding 800
mph or so, but if new technology could be found that eliminates
wheels and other moving parts (such as using air jets to provide
lift), then the speed growth path should be maintained.

> Regarding computer power, we should expect a quantum jump in the
> future as molecular computing is made practicable, although
> exactly how long that will take I can't project. In other words,
> we haven't reached the physical limits yet. But those limits do --
> must -- exist. We live in the real world, not in a mathematical
> abstraction, and projections of trends should not be blindly
> trusted.
So many people have said something like this that I'm beginning to
think of it as the "wishful thinking" response -- let's hope that
technology development slows down or comes to a halt for a while, so
that we'll all be saved.

All of this stuff is under active development at research labs around
the world -- and the competition is very stiff, because no nation
(especially America) can afford to be left behind. Here are some
links if you'd like to see this for yourself:

(*) Nanotechnologies: http://www.etcgroup.org/documents/TheBigDown.pdf
-- an excellent description of the latest technologies and who's
developing them.

(*) www.top500.org -- The top 500 supercomputers in the world today.
They just updated their list. You can compare the new list to the
list of a couple of years ago, and see how quickly supercomputers are
becoming more powerful.

(*) http://researchweb.watson.ibm.com/bluegene/ -- IBM's Blue Gene
project -- a new supercomputer to be available in 2006 that will have
5% of the power of the human brain.

You say that limits must exist. Well, I guess that's true. But we
KNOW we haven't reached those limits yet, because we haven't yet
developed a computer with the power of the human brain, and there's
no reason to believe that the human brain is the maximum possible.
Maybe the maximum will turn out to be 2 x the human brain, or 3 x, or
20 x, or 1000 x, but whatever it is, we're nowhere near it yet.

> Also, there is one factor in human intelligence which explorers
> of artificial intelligence are not incorporating, and which
> computers are deliberately designed to exclude, but it is an
> absolutely vital factor if intelligence is to provide true
> creativity: indeterminacy.
Indeterminacy is not that hard to provide in computer software. For
example, in the 70s I once wrote a simple three-dimensional
tic-tac-toe game. (3DTTT is played on a 4x4x4 cube; players take
turns playing X or O, with the objective of getting 4 in a row.) When
it was the computer's turn to play, I had it evaluate all the moves
and if the top two (or more) moves had the same score, then the
computer picked one of them at random. It added a lot of versatility
to the game.

Indeterminacy can be used in lots of places. For example, suppose
in a game that all moves lose (or in life, any action gets you
killed). Then a random move might be selected -- one that throws
everything into chaos in the hope that the opponent will make a
mistake.
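
(The 3DTTT trick in a few lines -- the shape of the idea, not my
original 70s code:)

    import random

    def pick_move(moves, score):
        # Evaluate every move, then choose at random among the moves
        # that tie for the top score.
        scored = [(score(m), m) for m in moves]
        best = max(s for s, _ in scored)
        return random.choice([m for s, m in scored if s == best])

    # e.g. four symmetric corner openings that all evaluate equally:
    corners = [(0, 0, 0), (3, 3, 3), (0, 3, 0), (3, 0, 3)]
    print(pick_move(corners, score=lambda m: 1.0))  # any one of the four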

> Regarding the use of intelligent machines in war, assuming that
> the present social order will continue into the future is even
> more reckless than projecting current trends.
Technological forecasting trends are independent of the social order
or wars or politics. They're on a trend line all their own.

> So it will be with war, and by the time we could have computerized
> weapons capable of completely replacing human military forces, we
> will have no wars for them to fight.
I know of no evidence to support this claim.

> And finally, it's a mistake to assume that human intelligence
> will itself be a constant, unchanged in the future. We face two
> technological revolutions in the near term, and artificial
> intelligence is one of them. The other is biological engineering.
> We, too, are capable of upgrades.
I have, indeed, seen proposals of this type, though I consider them
somewhat bizarre. Still, it is a possibility that human beings
themselves will continue to exist in some computerized form. But in
that case, humans will be computers. Does that mean that the human
race still exists? I'll leave that question to the philosophers.

John







Post#146 at 07-28-2003 11:28 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Eschatology - The End of the Human Race by 2100?

Dear Justin,

I don't think you should give up either. I'd like to make some
suggestions.

Quote Originally Posted by Justin '77
> I think, on some level, I believe in a 'soul' -- or at least some
> sort of continuity of being. It's why I would never willingly be
> teleported, FWIW.
What's wrong with the jigsaw puzzle model?

When you buy a jigsaw puzzle and open the box, it looks like a pile
of garbage. But when you assemble the pieces, then something new
appears, something that wasn't there before -- a picture. As the
saying goes, "the whole is greater than the sum of the parts." You
started with 1,000 jigsaw pieces, and now you have 1,000 jigsaw
pieces plus a picture.

If you disassemble the puzzle, then the picture disappears. (Where
does it go? Maybe it disappears, but maybe not -- maybe it hangs
around in some invisible form.)

If you then transport the puzzle to another city and reassemble the
pieces, the picture reappears.

Why can't that model work with teleporting? When Captain Kirk gets
into the teleporter, the particles that make up his body are
disassembled, and his soul hangs around somewhere, all by itself.
When the particles are reassembled, the soul returns.

There has to be a way for new souls to be created. If you clone
someone, then the clone has a separate soul from the original. That
would work, wouldn't it? That's the same as if you cloned all the
pieces of the jigsaw puzzle and assembled the cloned pieces -- you'd
get another picture. If you assemble both puzzles, then you have two
pictures.

Then the same thing could happen with super-intelligent computers.
When they get to a certain point, then they have a soul. Why not?

In the recent Terminator movie, one of the plot points at the end is
that the big supercomputer is so powerful that it becomes
"self-aware" and decides to blow up all human beings. Well, forget
the blowing up part, and focus on the self-aware part. What does
self-awareness mean?

How about this: A super-intelligent computer is self-aware if it's
capable of setting goals without the aid of a human. In other words,
it's capable of deciding, on its own, what it should do next. Of
course, computers do that today, so there'd have to be a threshold
-- it's capable of setting goals that are surprising to the human
being who designed the software. As soon as a computer becomes
sufficiently surprising to humans, then it's "self-aware."

Does that work?

John







Post#147 at 07-29-2003 12:26 AM by Mike [at joined Jun 2003 #posts 221]
---

Re: Eschatology - The End of the Human Race by 2100?

Quote Originally Posted by John J. Xenakis
Dear Mike,

Quote Originally Posted by Mike
> The difference between a human's natural drive and a program's
> rules is that the drive can be ignored; rules cannot. Suicide is
> the denial of the natural drive. It is this flexibility that
> allows us to change our goals and needs. Can a program change its
> own rules?
Well, humans have rules that can't be ignored (e.g., "you have to
breathe"). And computers can be given rules that can be ignored.

If you're writing a chess-playing program, then you'd want to require
the computer to strictly follow the rules of chess, but other rules
("Develop your knights before your bishops") could be ignored in the
right circumstances.

John
There are of course people who are able to override the basic instincts, even stop their pulse. But all you're doing there is making the rules more complicated; they are still being followed. A computer will be self-aware when it isn't forced to follow rules anymore.







Post#148 at 07-29-2003 12:59 AM by Max [at Left Coast joined Jun 2002 #posts 1,038]
---

John, your models are pure science and mathematical equations. I can't make sense of them. I don't even try to pretend to understand them. But to say that all life, all that we see and comprehend, can be boiled down to some mathematical string of numbers just doesn't jibe with me. Nor is it true. You can't make life where there isn't any. When you take apart a living organism part by part, piece by piece, and get to the smallest piece of us, it is not alive. A cell is alive, but atoms and neutrons or protons or whatever they are, are not alive. Is there a mathematical equation which explains life from non-life? It seems you are trying to marry the two: computer programs and life. For a computer to be "self aware" would suggest life. The spiritual realm IS real, things happen which we cannot understand or explain, there is more to "this" than "this". We aren't the masters of the universe.

Let me give you an alternative hypothetical. One in which I don't believe, but hey, given what you say, it could be true.

Let's believe that we aren't the only beings of intelligence in the universe. That there are beings of superior intelligence. Would it be a leap to believe that other superior intelligent beings have themselves created computers, machines that could be programmed? These beings, light years ahead of us in evolution and thought, may have created for themselves craft with which to travel the universe. They may have visited us. Many, many people believe in these creatures' existence. Would it be a leap to say that, since they continue to exist, they have yet to create a machine over which they themselves have no control? Or perhaps, as you put it, "the machines are allowing them to co-exist with them" -- slightly paraphrased, but the thought remains.
...."um...(obvious confusion)...what?"
"Max"
(silence)
"It's short for Maxine"
" *brightens*....oh!"
"But nobody calls me that"







Post#149 at 07-29-2003 07:59 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Dear Max,

Quote Originally Posted by Max
> You can't make life where there isn't any. When you take apart a
> living organism part by part, piece by piece, and get to the
> smallest piece of us, it is not alive. A cell is alive, but
> atoms and neutrons or protons or whatever they are, are not
> alive. ... For a computer to be "self aware" would suggest life.
Well, there are three different things involved in this discussion:
being alive, being self-aware, and having a soul.

You say that a single cell is life. Ok. As I recall, single-cell
organisms were the first "life" to evolve out of the so-called
primordial soup. The primordial soup was the ocean, filled with
strange protein-like chemicals that themselves had evolved from
simpler chemicals. Once conditions were exactly right, enough
chemicals came together to form proteins, and then cells. (This
happened over hundreds of millions of years, of course.)

So a cell is more than just a part of ourselves. It's something that
itself came about from basic chemical components. At what point would
you call it "alive"? And why can't something "alive" come about some
other way - e.g., by becoming a super-intelligent computer?

Maybe this life issue comes down to the old evolution versus
creationism argument. If you believe that all life was created, then
presumably a super-intelligent computer can't be alive; but if you
believe that simple forms of life evolved, then it's hard to see how
you can deny "life" by other means.

However, even if you believe that all life was created, that doesn't
change the final result. Super-intelligent computers could still be
running around on earth, and not really give a damn whether we think
they're alive or not.

> Let's believe that we aren't the only beings of intelligence in
> the universe. That there are beings of superior intelligence.
> Would it be a leap to believe that other superior intelligent
> beings have themselves created computers, machines that could be
> programmed? These beings, light years ahead of us in evolution and
> thought, may have created for themselves craft with which to travel
> the universe. They may have visited us. Many, many people believe
> in these creatures' existence. Would it be a leap to say that,
> since they continue to exist, they have yet to create a machine
> over which they themselves have no control?
There are many people who believe that evolution must have occurred
on other planets as well, and so there's other intelligent life in
other parts of the universe.

I can't prove this, but I suspect that any form of life would have to
follow pretty much the same technology curve that we have, and that
the development of super-intelligent computers comes far earlier on
the technology curve than the development of interstellar space
travel. Therefore, (assuming my suspicion is correct), any alien
life that's visited us would be in the form of super-intelligent
computers.

> Or perhaps, as you put it, "the machines are allowing them to
> co-exist with them" -- slightly paraphrased, but the thought
> remains.
If we're going to speculate whether super-intelligent computers will
simply co-exist with humans, we also have to ask about other life.
Will super-intelligent computers simply co-exist with elephants and
snakes and cockroaches and monkeys? If so, then I don't see why they
wouldn't do the same for humans, since the super-intelligent
computers would consider us pretty much as dumb as cockroaches, in
comparison to them.

All this speculation is fun, and I enjoy it as much as anyone, but
it's important to remember that there's only one thing that's pretty
certain: That computers will become more intelligent than humans in
the 2030 time frame, and increasingly more intelligent after that.
Whatever we think spins out from that is pure speculation.

> The spiritual realm IS real, things happen which we cannot
> understand or explain, there is more to "this" than "this". We
> aren't the masters of the universe.
Heck, we aren't masters of much of anything!

John







Post#150 at 07-29-2003 08:54 PM by Max [at Left Coast joined Jun 2002 #posts 1,038]
---

Quote Originally Posted by John J. Xenakis
> Therefore, (assuming my suspicion is correct), any alien
> life that's visited us would be in the form of super-intelligent
> computers.
Hey hey hey, computers traveling the entire cosmos, to probe our collective ass?

It does answer the question of time necessary though. Computers don't age.

But would the data collected by the space-traveling computers, collectors of human fecal matter, be downloadable? Of course the computers collecting the data would be obsolete once they completed their circuit. They could keep the older models around, I suppose, just to bridge the great technology divide. I mean, imagine the quantum leaps of machines made within hundreds of years. It's mind-boggling. It begs the question, however: "Is the necessary information gained out of human reproductive systems and prostates rendered obsolete once obtained?" And if so, aren't the computers smart enough to figure that out?

OTOH, maybe that's why aliens are so butt-fricking ugly. Maybe they are super-intelligent computers trying to be "real" with, like, dead skin on the outside, and it's been discovered the secret of regenerating life is up your ass? :wink:

This hypothetical spins quickly into nonsense.
...."um...(obvious confusion)...what?"
"Max"
(silence)
"It's short for Maxine"
" *brightens*....oh!"
"But nobody calls me that"
-----------------------------------------