Generational Dynamics
Fourth Turning Forum Archive



Thread: The Singularity - Page 10







Post#226 at 04-04-2004 12:19 PM by Tim Walker '56 [at joined Jun 2001 #posts 24]
---

Singularity








Post#227 at 04-05-2004 03:00 AM by HopefulCynic68 [at joined Sep 2001 #posts 9,412]
---

Quote Originally Posted by John J. Xenakis
Scientists creating life from scratch

The following article almost makes it sound like we'll soon be able
to create life using ordinary ingredients found in your home
refrigerator.

Early in this topic, we had a lengthy discussion of whether a
super-intelligent computer could be considered "alive."

Now we have scientists telling us that they'll be able to create
cellular life within ten years.

I recently had occasion to put together a new graphic on what the
future is going to look like:



When we try to look forward into the future, we can see a 4T time
with the "clash of civilizations," followed by a period of great
prosperity.

But around 2030, the point at which computers will become more
intelligent than human beings
Yeah, IF Moore's Law holds for another 26 years, and IF intelligence is primarily a function of computing power (both doubtful concepts), then an outside chance exists that human-equivalent machines will exist in 2030.

But I'd bet my money against it.







Post#228 at 04-05-2004 09:05 AM by Croakmore [at The hazardous reefs of Silentium joined Nov 2001 #posts 2,426]
---

Artificial life

Quote Originally Posted by John J. Xenakis

[He questions, as I do, a recent article from the WSJ]

Researchers Exploring 'What Is Life?' Seek To Create a Living Cell

April 2, 2004; Page B1


SCIENCE JOURNAL
By SHARON BEGLEY

... they are trying to assemble -- from off-the-shelf,
nonliving molecules -- a living cell.

...The missing ingredient in this cell wannabe is metabolism, but Steen
Rasmussen of Los Alamos National Lab thinks he can provide it. He and
Liaohai Chen of Argonne National Lab have designed a microscopic
container with metabolic molecules and genes whose electrical
properties drive metabolic reactions. The scientists have demonstrated
experimentally that this micrometabolism can produce exactly the
molecules the container is made of (so the system would be able to
grow).

"All the pieces are there -- self-assembling container, genes and
metabolism that captures energy from the outside world," says Dr.
Rasmussen. "The question is, how do we get it to reproduce? If we do,
then most people would say it is alive."...
This one bothers me a bit. What is meant by "off-the-shelf, nonliving molecules"? Where do those perfunctory genes come from? Where does the polymerase come from? Certainly not from scratch! And without them, a living cell could become very confused about what it's supposed to do in life. I smell a journalistic rat in the WSJ.

Nowhere does she mention that an artificial polio virus has already been fabricated in New York--from mail-order genes, no less. Viruses are living cells, enclosed by membranes, just like all the others. So what's the special deal?

If one sets out to create life from scratch, then one would need to worry first about the problem of genetic information. And if one can easily get those needed genes through the mail, then one is not exactly creating life from scratch. I see Betty Crocker in the mix. Saying you can make artificial life from scratch as long as you can purchase the necessary genes is like saying you can make artificial intelligence from scratch as long as you can buy a human brain.

--Croak







Post#229 at 04-06-2004 12:58 AM by HopefulCynic68 [at joined Sep 2001 #posts 9,412]
---

Re: Artificial life

Quote Originally Posted by Croakmore

Nowhere does she mention that an artificial polio virus has already been fabricated in New York--from mail-order genes, no less. Viruses are living cells, enclosed by membranes, just like all the others. So what's the special deal?
Viruses are not cells. They cannot reproduce on their own as bacteria can; they have to invade and override a true cell to reproduce. (Which I am sure you know.)

But the debate over whether a virus is even truly alive has never been conclusively decided.







Post#230 at 04-06-2004 08:29 AM by Croakmore [at The hazardous reefs of Silentium joined Nov 2001 #posts 2,426]
---

Re: Artificial life

Quote Originally Posted by HopefulCynic68
Quote Originally Posted by Croakmore

Nowhere does she mention that an artificial polio virus has already been fabricated in New York--from mail-order genes, no less. Viruses are living cells, enclosed by membranes, just like all the others. So what's the special deal?
Viruses are not cells. They cannot reproduce on their own as bacteria can; they have to invade and override a true cell to reproduce. (Which I am sure you know.)

But the debate over whether a virus is even truly alive has never been conclusively decided.
I really don't know who's having a debate anymore over whether or not viruses are "truly alive." Please understand that while viruses are not "living organisms" per se, they are "truly alive." Some of them contain DNA, like the herpes viruses, while others contain RNA, like the flu virus (below):

[image: influenza virus]

These are living cells. They have bipolar lipid membranes, species identity, carry their own genetic information, build proteins, have metabolism, and reproduce by genetic parasitism. True, they are very small and cannot reproduce on their own. But that's just an interesting technicality. If they could they would be called "living organisms." That's the distinction you need to make.

You may say correctly that viruses are not living organisms, but it is incorrect to say they are not alive.

Hope this helps.

--Croaker







Post#231 at 04-06-2004 08:32 AM by Ocicat [at joined Jan 2003 #posts 167]
---

Like transposons and other elements that parasitically use the metabolic machinery of actual cells to reproduce under certain conditions, viruses are generally not considered to be living. The accepted divisions of life, these days, are the eubacteria, the archaebacteria, and the eukarya. It is true, though, that viruses are sometimes considered "acellular life" by some biologists. It is also true that since they use nucleic acids as their genetic material and their host cells' machinery for metabolic function they are "connected" to cellular life. If I recall correctly, the evolutionary history of this connection is still debated.

It is not correct that viruses are in any sense cellular, despite the occasional presence of a membrane. I don't, offhand, know whether the flu virus you refer to above creates its own bipolar membrane, though I doubt it. I do know that HIV has a bipolar membrane that it derives from its host cell.
No matter how small, every feline is a masterpiece.
-- Leonardo da Vinci







Post#232 at 04-06-2004 09:38 AM by Croakmore [at The hazardous reefs of Silentium joined Nov 2001 #posts 2,426]
---

Quote Originally Posted by Corvis
Like transposons and other elements that parasitically use the metabolic machinery of actual cells to reproduce under certain conditions, viruses are generally not considered to be living [organisms] . The accepted divisions of life, these days, are the eubacteria, the archaebacteria, and the eukarya.
If you would add this stipulation [organisms] I would agree. But you mistake the real meaning of life. It does not always need to be organisms. To wit: the great Salk-Sabin debate of the 1950s was over the use of live viruses as a vaccine, as opposed to killing them first. I don't know how you can account for all this living and killing if those little fellers were not alive at least sometime during their little lives.

Do you call viruses a different form of life than the organisms they abuse? Yes, they are different in their reproductive requirements, and of course they are not true organisms, but they are still the same kind of life in a genetic context. Please, what other kind is there?

Would I be correct in assuming that you draw the line at organisms, and that we differ only on the meaning of that line?

--Croaker







Post#233 at 04-06-2004 10:02 AM by Ocicat [at joined Jan 2003 #posts 167]
---

I believe we're really only having a semantic disagreement. In terms of the science, I think we agree. I guess I prefer to see viruses as "biologically active" rather than "alive," because I like to reserve "aliveness" for cellular life possessing its own metabolic machinery. But perhaps I'm just splitting hairs...







Post#234 at 04-06-2004 10:19 AM by Croakmore [at The hazardous reefs of Silentium joined Nov 2001 #posts 2,426]
---

Life Lines

...but these are coarse hairs, Corvis. Why would scientists go to the trouble of assigning to the polio virus, for example, a genus and a species if it were not alive? And why would they be so concerned about the evolution of that and many other virus species? There really is no line. You can't have viruses without living organisms; I'll agree to that. But you can't have humans without bacteria, either. Would you not agree?

The spectrum of life dips below organisms on my scale of measure. But the DNA/RNA kind of life is all I ever see.







Post#235 at 04-06-2004 10:40 AM by Ocicat [at joined Jan 2003 #posts 167]
---

All I'm saying is that biologists generally define "life" in such a way that viruses are not, these days, considered to be alive. From Purves, et al, Life: The Science of Biology, 5th ed.:
Cells show the characteristics by which living systems are recognized: They use DNA as hereditary material and proteins as catalysts. In addition, most cells reproduce, transform matter and energy, and respond to their environment.

Viruses, on the other hand, show only a few of these characteristics and thus are not usually considered alive. Viruses depend entirely on living cells to reproduce, and they neither transform matter and energy nor respond to the environment.
Now, if you want to say that DNA/RNA-based genetics is sufficient to define life, that's fine with me, and you will probably find some biologists who agree with you. But it is not true that your definition is generally accepted.







Post#236 at 04-06-2004 11:17 AM by Croakmore [at The hazardous reefs of Silentium joined Nov 2001 #posts 2,426]
---

Werds

Yes, I know, some college textbooks will say that. But I don't necessarily buy it. I have a college textbook lying before me (2002 Campbell, Mitchell, Reece) that says: "Natural selection is a prominent force in nature." I am bothered by the word "force" here, because I think these biologists are floating on the suds of imprecision. "Force" = mass times acceleration. Wouldn't it be fine if Darwinian evolution could be put to such a measure?







Post#237 at 04-06-2004 08:49 PM by Tim Walker '56 [at joined Jun 2001 #posts 24]
---

The Spike by Damien Broderick

Quoting:

"....what's wrong with most media images of the future..is their laughably timid conservatism.

"...The future is going to be a fast, wild ride into strangeness."

(~*~)







Post#238 at 04-06-2004 10:46 PM by Finch [at In the belly of the Beast joined Feb 2004 #posts 1,734]
---

Re: The Spike by Damien Broderick

Quote Originally Posted by Tim Walker
Quoting:

"....what's wrong with most media images of the future..is their laughably timid conservatism.

(~*~)
That's because human brains are pattern-matching systems with certain default assumptions; for example, straight-line projections. That is of course useful in the real world (Newtonian mechanics and all). It fails when the underlying pattern is cyclical (as S&H point out in detail). It also fails when the underlying pattern is exponential. Hence, due to straight-line projections we tend to overestimate changes in the short term (cf. flying cars and moon colonies) and underestimate changes in the long term (cf. biotech and nanotech).
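Finch's point can be made concrete with a toy calculation: a straight-line forecast fitted to an exponential trend undershoots more and more as time passes. The doubling period and all numbers below are invented purely for illustration.

```python
# Illustrative only: a quantity that doubles every 2 years (a made-up,
# Moore's-Law-like rate) versus a straight-line projection of it.
import math

def exponential(t, doubling_period=2.0):
    """Value of a trend that doubles every `doubling_period` years."""
    return 2 ** (t / doubling_period)

def linear_projection(t, t0=4.0):
    """Extend the tangent line from the trend's value and slope at t0."""
    v0 = exponential(t0)
    slope = v0 * math.log(2) / 2.0   # derivative of 2**(t/2) at t0
    return v0 + slope * (t - t0)

for t in (4, 8, 16):
    # At t=16 the actual value is 256.0 but the straight line gives ~20.6.
    print(t, round(exponential(t), 1), round(linear_projection(t), 1))
```

The short-term/long-term asymmetry shows up directly: near t0 the two curves agree, which is why straight-line guesses feel reliable, while far from t0 the exponential dwarfs the line.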







Post#239 at 04-07-2004 12:29 AM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Singularity

Dear Tim,

Thank you for posting those links.

The one that I found most interesting was
http://www.singularityawareness.com/ , which seems to be a kind of
cheering section for the Singularity.

They want the singularity to come as quickly as possible, so that we
can enjoy its benefits - living in eternal affluence.

It's a fun site with a lot of interesting stuff. I recommend it to
anyone here who wants to learn more.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#240 at 04-07-2004 12:32 AM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

How much do you want to bet?

Dear Hopeful Cynic 68,

Quote Originally Posted by HopefulCynic68
> Yeah, IF Moore's Law holds for another 26 years,
Actually, it's pretty well agreed that Moore's Law will run its
string out by 2010 or 2012. However, there's a plethora of new
technology -- nanotechnology, protein folding technology, molecular
technology, quantum technology -- under development to take the place
of integrated circuits and keep the power of computers growing
steadily, as history shows will happen.

Quote Originally Posted by HopefulCynic68
> and IF intelligence is primarily a function of computing power
> (both doubtful concepts),
Well, it appears that the human brain is a very good multiprocessing
computer specializing in pattern mapping and associative memory. For
example, when you look at a chair, then you recognize it as a chair
because your brain simultaneously compares it to zillions of chair
images stored in your brain. Things like "inventions" are achieved by
finding an appropriate combination of previously learned bits of
knowledge, and combining those bits into something new and more
wonderful. The faster your brain can find bits of knowledge and
combine them, then the more intelligent you are. So as computers get
faster, they get more intelligent.
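The "compare against zillions of stored images" picture John sketches is, in machine-learning terms, roughly a nearest-neighbor lookup against an associative memory. The toy sketch below uses invented feature vectors and labels; it is an illustration of that analogy, not a claim about how the brain actually works.

```python
# Toy nearest-neighbor "recognition": classify an item by finding the
# closest stored memory. All features and labels are invented.

def classify(item, memory):
    """Return the label of the stored example nearest to `item`."""
    def distance(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(memory, key=lambda example: distance(item, example[0]))
    return best[1]

# Stored "memories": (feature vector, label).
# Features here: (number of legs, has seat, has back).
memory = [
    ((4, 1, 1), "chair"),
    ((4, 1, 0), "stool"),
    ((0, 0, 0), "rug"),
]

print(classify((4, 1, 1), memory))  # -> chair
```

Note that speeding up `classify` (more comparisons per second) does not by itself add new labels or features to `memory`, which is one way to frame HopefulCynic's objection that raw speed may not be the whole of intelligence.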

Quote Originally Posted by HopefulCynic68
> then an outside chance exists that human-equivalent machines will
> exist in 2030.
The date 2030 is my own personal estimate, although many others reach
the same estimate. Some people's estimates are out 30 or 40 years,
while some estimates are as low as ten years.

The reasoning behind the lower estimates is interesting. I arrived
at 2030 because I believe that it's around then that desktop
computers (or whatever form factor is in use) will have human level
intelligence.

One way to arrive at that date is as follows: IBM's Blue Gene project
< http://researchweb.watson.ibm.com/bluegene/ > will yield a
supercomputer at about human level intelligence around 2020. It
takes about ten years for the power of a room-sized supercomputer to
reach the desktop, which would be around 2030.
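The arithmetic in the paragraph above can be written out. The 2020 date and the ten-year supercomputer-to-desktop lag are the post's own assumptions; the 18-month doubling time is a commonly quoted Moore's-Law figure, added here only to show what performance gap a ten-year lag would imply.

```python
# Sketch of the projection described above. All three inputs are
# assumptions from the post (or a stock Moore's-Law figure), not data.

SUPER_DATE = 2020      # assumed: supercomputer reaches human-level power
LAG_YEARS = 10         # assumed: supercomputer-to-desktop lag
DOUBLING_YEARS = 1.5   # assumed: performance doubling time (18 months)

desktop_date = SUPER_DATE + LAG_YEARS
# Performance ratio a ten-year lag implies at that doubling rate:
gap = 2 ** (LAG_YEARS / DOUBLING_YEARS)

print(desktop_date)    # -> 2030
print(round(gap))      # -> 102
```

So the 2030 figure amounts to assuming desktops trail supercomputers by roughly a factor of a hundred; if either the lag or the doubling rate fails to hold, the date moves accordingly.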

But the people who think that the Singularity will come much earlier
believe that all we'll need is just one human-level supercomputer
that can start researching ways to improve itself. These people also
believe that the development of super-intelligent computers will
accelerate very fast, and that within ten years everything that can
be discovered will have been discovered.

I'm a bit uncomfortable with that prediction, so I'm sticking with
the more conservative 2030. I personally feel pretty certain that if
2030 is too early, then it's too early by no more than a few years.

Quote Originally Posted by HopefulCynic68
> But I'd bet my money against it.
It's hard to see why anyone would want to take that bet: If I took
that bet and I won, then I might very well not be able to collect.
So we'll just have to make it a mental bet.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#241 at 04-07-2004 12:32 AM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Artificial life

Dear Croak More,

Quote Originally Posted by Croakmore
> If one sets out to create life from scratch, then one would need
> to worry first about the problem of genetic information. And if
> one can easily get those needed genes through the mail, then one
> is not exactly creating life from scratch. I see Betty Crocker in
> the mix. Saying you can make artificial life from scratch as long
> as you can purchase the necessary genes is like saying you can
> make artificial intelligence from scratch as long as you can buy a
> human brain.
You raise a number of interesting points. It certainly is possible
that a popular article in WSJ has left out a few important details,
and I certainly bow to your knowledge of molecular biology, as I do to
Corvis'.

However, I did a little googling around, and I found another article,
posted below.

According to this article, the new "living cells" would be unlike any
existing living cell. They would be designed for specific purposes,
such as cleaning up oil spills.

They don't contain DNA, but they contain a synthetic version, "called
PNA, for lipophilic peptide nucleic acid - with a simple metabolism
and a cell-like container to hold the material."

So, I don't know if this explanation resolves all the issues you
raised, but it does appear interesting.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com

Lab scientists may soon be shouting: It's alive

By Sue Vorenberg

Tribune Reporter

The same laboratory that gave the world the atomic bomb is now
collaborating on giving birth to artificial life.

Scientists at Los Alamos National Laboratory are close to creating an
organism completely different from any life on Earth.

They have all the building blocks ready and are hoping in the next few
years to create a new organism 10 million times smaller than the
smallest bacteria, they say in an article released today in the
journal Science.

"If we can build creatures like this from scratch, we can design them
to do things we've never seen in nature," said Steen Rasmussen, a
scientist on the project. "They could be the building blocks for
self-repairing systems, but we could also design them with metabolisms
not found in nature. We could make them eat the worst contaminants out
there, then die when they run out of food."

These tiny critters could lead to a new field of technology that
allows all sorts of things - clothing, computer components, maybe even
your car - to fix themselves, Rasmussen said.

"This is the first step in a truly enabling technology," Rasmussen
said. "Materials that can self-heal have far-reaching applications.
Achieving that is probably many years away, but the achievement will
be similar to the invention of the first transistor or splitting the
atom. It will change everything."

Rasmussen's work was done with a team of United States and
international scientists.

The scientists would build the organism by combining a synthetic DNA -
called PNA, for lipophilic peptide nucleic acid - with a simple
metabolism and a cell-like container to hold the material.

Such creatures could be released onto an oil spill in the ocean.
They'd eat all the oil from the spill, breaking it down into harmless
components, then die off as soon as the spill was gone. Other
organisms could be designed to only eat deadly viruses, Rasmussen
said.

"Although I probably wouldn't want to be the person they tested that
on," he joked.

Ethics surrounding the creatures is another story, said Mark Bedau, a
philosophy professor at Reed College in Portland, Oregon, who is
working with the scientists.

"This is a powerful new kind of idea, with huge potential benefits and
a lot of big risks," Bedau said. "The risks come from the fact that
these things would be artificial, but they'd also be alive. They'd
construct themselves out of material in their environment, reproduce
themselves and evolve."

But whether this creature fits the technical definition of life
remains to be seen, Rasmussen said.

"Well, defining life is notoriously difficult," he said. "Some
biologists say a living entity is something that harvests resources
from the environment, can evolve and can reproduce itself. But if you
look at that and think of a mule, then a mule wouldn't be alive
because it can't reproduce itself."

For research purposes, the Los Alamos scientists are defining life as
something with a metabolism that can turn resources into energy or
building blocks, genetic material and a container to hold everything.
Under that definition, their creatures will certainly be alive.

"Of course, there are a lot of moral issues surrounding the idea of
living systems," Rasmussen said. "Should we treat them a certain way?
We don't have moral issues when we give antibiotics to kill a disease.
Still, there are serious issues about how do we use a technology like
this. It's like atomic technology - there are good ways and bad ways
to use it. We just have to be careful."

Some of the potential problems are very serious. If the tiny creatures
were harmful or toxic to humans and able to reproduce themselves, it
could cause large-scale damage, Bedau said.

"Also if they're alive and can evolve, the consequences are impossible
to predict," Bedau said. "Still, we need to work on this because other
countries are working on it. We need to research it so if there is a
problem elsewhere we know how to deal with it. Also, by working on it
we'll learn how to control it and contain it."

Right now, there are no regulations preventing or monitoring the
creation of artificial life. That is something that should be worked
out before scientists actually create their first living cell, Bedau
said.

"There may be clouds on the horizon but this technology also has the
potential to open a fantastic door to all kinds of new possibilities,"
Bedau said. "We'll be making life. On a pure scientific level, that
means we'll have a very deep understanding of what life actually is.
In a more practical vein, life has very interesting and powerful
properties. It's the only thing that can reproduce and repair
itself."

One day, clothing designers might use the technology to design a
sweater that will fix itself if it rips.

They might even be able to design a sweater that builds itself from
scratch, Rasmussen said.

"Of course, the first order of business is to demonstrate that we can
make these little guys replicate themselves," he said. "We're far from
that right now, but working on it. The second step is teaching them to
do something useful, like build a sweater or some other useful object.
They just won't understand Fortran or Microsoft Word or anything right
off the bat. It's a long way off."

The time scale is to the scientists' advantage, Bedau added, because
it will allow regulators, ethicists and others time to figure out
exactly how to deal with this new phenomenon.

"We have time to think this through," Bedau said. "And it will
certainly give us a lot to think about. In the end I think the
benefits will outweigh the risks. But we're not at the point where we
have to define everything just yet."

http://www.abqtrib.com/archives/news...ews_life.shtml

[End of message]







Post#242 at 04-07-2004 12:50 AM by Tim Walker '56 [at joined Jun 2001 #posts 24]
---

Response to Rick Hirst

A cycle might be overlooked if it occurs over very long time spans. Both Sorokin and Melko described three-part cycles across millennia of European history. The saeculum is now about the length of a fairly long human life, which makes it difficult to discern based on personal experience. How could one be sure that there is a cycle except by laborious scholarship?

Nevertheless, these are based on history. The Singularity would be beyond human experience, indeed, beyond the human condition.

(~*~)







Post#243 at 04-07-2004 01:08 AM by HopefulCynic68 [at joined Sep 2001 #posts 9,412]
---

Re: How much do you want to bet?

Quote Originally Posted by John J. Xenakis
Dear Hopeful Cynic 68,

Quote Originally Posted by HopefulCynic68
> Yeah, IF Moore's Law holds for another 26 years,
Actually, it's pretty well agreed that Moore's Law will run its
string out by 2010 or 2012. However, there's a plethora of new
technology -- nanotechnology, protein folding technology, molecular
technology, quantum technology -- under development to take the place
of integrated circuits and keep the power of computers growing
steadily, as history shows will happen.
History shows nothing of the sort. You can't meaningfully extrapolate past trends onto new technologies in that sense. It might happen, but the likelihood is guesswork, and nothing more.

A projection of the improvement in rocket propulsion, for example, between the Mercury-Redstone series and the Saturn V would indicate that we should be all over the Solar System by now. But the trend stopped.


Quote Originally Posted by HopefulCynic68
> and IF intelligence is primarily a function of computing power
> (both doubtful concepts),
Well, it appears that the human brain is a very good multiprocessing
computer specializing in pattern mapping and associative memory. For
example, when you look at a chair, then you recognize it as a chair
because your brain simultaneously compares it to zillions of chair
images stored in your brain. Things like "inventions" are achieved by
finding an appropriate combination of previously learned bits of
knowledge, and combining those bits into something new and more
wonderful. The faster your brain can find bits of knowledge and
combine them, then the more intelligent you are. So as computers get
faster, they get more intelligent.

Yes, that's one kind of intelligence. But since we don't know how the brain works, it's hard to say how important that aspect is, or whether it can scale indefinitely. We don't even know if the brain is doing something we'd recognize as 'computing' when it performs pattern-recognition.

We've only recently begun to realize the apparent importance of glial cells to the formation of long-term memory.

Quote Originally Posted by John J. Xenakis
Things like "inventions" are achieved by finding an appropriate combination of previously learned bits of knowledge, and combining those bits into something new and more wonderful.


What does that mean? It really doesn't explain anything, it's just a description of the observed effects. Frankly, science currently has no idea how invention and creativity occur.



Quote Originally Posted by HopefulCynic68
> then an outside chance exists that human-equivalent machines will
> exist in 2030.
The date 2030 is my own personal estimate, although many others reach
the same estimate. Some people's estimates are out 30 or 40 years,
while some estimates are as low as ten years.
I've seen them. My point is that they don't mean anything, since we're applying estimates to processes we have no real understanding of.

We don't know whether intelligence hinges primarily on computational power.

We don't know the computational and data storage capacities of the human brain (there are guesstimates, based on various assumptions of how the brain works, but that's all they are).

We don't know whether intelligence implies motivation or desires.

We don't know the nature of motivation (though we've made some progress in that area, such as the improved understanding of the relationship between the 'reward systems' and addictions).

We don't know what the relationship of such incomprehensibles (at the moment) as humor or irony is to intelligence.

We don't know what parts of the brain are critical to self-awareness, and what parts aren't. (Some people have retained their full conscious personalities with large chunks of their cerebral cortex gone! Others suffer serious degradation of ability and memory, and alteration of personality from apparently trivial brain damage.)

Thus, given all those unknowns, I take such projections with a large crystal of sodium chloride.







Post#244 at 04-07-2004 02:07 AM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: How much do you want to bet?

Dear Hopeful Cynic 68,

Earlier in this thread I gave a number of examples to show how you
can indeed extrapolate past trends onto new technologies. I also have a number
of such examples in my book (
http://www.generationaldynamics.com/...?d=ww2010.book ),
and you can check those out.

So I can say with some certainty that it's not "guesswork." Even if
I happen to be wrong in this case -- and I'm almost certain I'm not --
then it would be the exception, not the rule.

Quote Originally Posted by HopefulCynic68
> A projection of the improvement in rocket propulsion, for ex,
> between the Mercury Redstone series and the Saturn V would
> indicate that we should be all over the Solar System by now. But
> the trend stopped.
With regard to your example of rocket propulsion: I've never looked
at that example, and when I have a chance I will do so, and post what
I find here.

With regard to how inventions work, I've previously posted
discussions of two examples: Edison's invention of the light bulb and
Andrew Wiles' successful search for a proof to Fermat's Last Theorem.

Now I know you'll pooh-pooh these examples, and repeat your claim
that scientists have "no idea" how invention and creativity occur,
but if you're going to make that claim, then I think I have a right
to expect you to at least produce an example of invention or
creativity that doesn't fit the pattern I've described.

It's certainly not true that we have "no idea," since I've already
posted "an idea" that covers a lot of cases -- in fact all the cases
I'm aware of and have analyzed. The most you can reasonably claim,
in my opinion, is that there are types of invention or creativity
that my "idea" doesn't describe, but in that case I'd certainly
appreciate an example to analyze.

Finally, even if you did manage to come up with an example of
invention or creativity that somehow didn't come about in the way I
describe, I don't think it makes any difference to the Singularity.

Even if you call my description of invention and creativity
"artificial," it wouldn't make any difference, because it would still
be sufficient to develop super-intelligent computers that can improve
themselves.

Quote Originally Posted by HopefulCynic68
> We don't know whether intelligence implies motivation or desires.
I don't think this question is relevant to the Singularity, but I
think it can be answered. I believe the answer is "No." I believe
that you can have "intelligence" without motivation or desires. Once
you have the intelligence in a programmable computer, you can then
add "motivation" by giving the software a goal -- e.g., invent a
better mousetrap, or invent a better version of yourself.
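The claim that "motivation" can simply be a programmed goal can be sketched directly. The following is a toy illustration only -- hill-climbing against an arbitrary, made-up scoring function (the "mousetrap" metric and target numbers are invented for the example, not taken from anything in this thread):

```python
import random

def add_goal(score):
    """'Motivation' supplied from outside: the program just searches
    for candidate designs that raise an externally given score."""
    def optimize(design, steps=1000, seed=0):
        rng = random.Random(seed)
        best = list(design)
        for _ in range(steps):
            candidate = [x + rng.uniform(-0.1, 0.1) for x in best]
            if score(candidate) > score(best):
                best = candidate
        return best
    return optimize

# Hypothetical "better mousetrap" metric: closeness to target specs.
TARGET = [1.0, 2.0, 3.0]
def mousetrap_score(design):
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

optimize = add_goal(mousetrap_score)
improved = optimize([0.0, 0.0, 0.0])
```

The optimizer has no desires of its own; swapping in a different `score` function changes its "motivation" completely.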

Quote Originally Posted by HopefulCynic68
> We don't know the nature of motivation (though we've made some
> progress in that area, such as the improved understanding of the
> relationship between the 'reward systems' and addictions).
Once again, I don't believe that this question is relevant to the
Singularity. Even if we don't fully understand human motivation, it
doesn't make any difference, because we'll use "artificial
motivation" in the super-intelligent computers by simply programming
the software to achieve a goal.

Quote Originally Posted by HopefulCynic68
> We don't know what the relationship between such incomprehensibles
> (at the moment) as humor or irony are to intelligence.
It doesn't matter. I believe that with some work a super-intelligent
computer can be programmed to be humorous (or at least tell jokes)
and be ironic. You might say that it's not exactly the same as human
humor or irony, but it's still close enough for government work, and
it's still good enough to produce the Singularity.

Quote Originally Posted by HopefulCynic68
> We don't know what parts of the brain are critical to
> self-awareness, and what parts aren't. (Some people have retained
> their full conscious personalities with large chunks of their
> cerebral cortex gone! Others suffer serious degradation of ability
> and memory, and alteration of personality from apparently trivial
> brain damage.)
Once again, it doesn't matter. The Singularity will occur perfectly
well with "artificial intelligence," with or without
"self-awareness." The brain issues you raise don't matter at all in
any way that I can see.

Incidentally, we had a lengthy discussion of "self-awareness" in this
thread a few months ago, and I finally said that I don't know what
self-awareness is, but if you can tell me what it is, then I can tell
you how to implement it in software.

Over the months, I've come up with what I think "self-awareness"
means: Self-awareness means having self-preservation as a goal. You
can tweak this definition by throwing in group-awareness, meaning
that preservation of self and group is a goal. However, I still
believe it makes no difference at all to the Singularity whether the
super-intelligent computers are "self-aware" or not.
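For what it's worth, that working definition is easy to express as code. This is a toy sketch of the definition only -- the action names, rewards, and probabilities are invented -- not a claim about how any real system works:

```python
def utility(task_reward, survival_prob, self_weight):
    # "Self-awareness" per the definition above: self-preservation
    # appears as an explicit weighted term in the goal being maximized.
    return task_reward + self_weight * survival_prob

def choose(options, self_weight):
    # Pick whichever action maximizes the combined goal.
    return max(options, key=lambda o: utility(o[1], o[2], self_weight))

# Invented example actions: (name, task reward, survival probability).
risky = ("risky", 10.0, 0.10)
safe = ("safe", 2.0, 0.99)
```

With `self_weight` at zero the agent takes the risky, high-reward action; with a large weight it preserves itself -- the same optimizer, with self-preservation dialed in or out as a goal term.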

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#245 at 04-07-2004 02:18 AM by HopefulCynic68 [at joined Sep 2001 #posts 9,412]
---
04-07-2004, 02:18 AM #245
Join Date
Sep 2001
Posts
9,412

Re: How much do you want to bet?

Quote Originally Posted by John J. Xenakis

Now I know you'll pooh-pooh these examples, and repeat your claim
that scientists have "no idea" how invention and creativity occur,
but if you're going to make that claim, then I think I have a right
to expect you to at least produce an example of invention or
creativity that doesn't fit the pattern I've described.
The pattern you described is merely a restatement. Of course they all fit the pattern you've described, but your description tells us nothing of the 'hows' and 'whys'. We don't know what the brain is doing when it puts information together in original ways.




It's certainly not true that we have "no idea," since I've already
posted "an idea" that covers a lot of cases -- in fact all the cases
I'm aware of and have analyzed. The most you can reasonably claim,
in my opinion, is that there are types of invention or creativity
that my "idea" doesn't describe, but in that case I'd certainly
appreciate an example to analyze.

Finally, even if you did manage to come up with an example of
invention or creativity that somehow didn't come about in the way I
describe, I don't think it makes any difference to the Singularity.

Even if you call my description of invention and creativity
"artificial," it wouldn't make any difference, because it would still
be sufficient to develop super-intelligent computers that can improve
themselves.
Without creativity and imagination, they won't be able to improve themselves, and since we can't define either even though we know both exist, we can't assign a meaningful timeframe.


Quote Originally Posted by HopefulCynic68
> We don't know whether intelligence implies motivation or desires.
I don't think this question is relevant to the Singularity, but I
think it can be answered. I believe the answer is "No." I believe
that you can have "intelligence" without motivation or desires. Once
you have the intelligence in a programmable computer, you can then
add "motivation" by giving the software a goal -- e.g., invent a
better mousetrap, or invent a better version of yourself.
Without imagination and (probably) self-awareness, it won't be able to design a better version of itself. It won't even be able to comprehend the meaning of the word 'better'.




Quote Originally Posted by HopefulCynic68
> We don't know what the relationship between such incomprehensibles
> (at the moment) as humor or irony are to intelligence.
It doesn't matter. I believe that with some work a super-intelligent
computer can be programmed to be humorous (or at least tell jokes)
and be ironic. You might say that it's not exactly the same as human
humor or irony, but it's still close enough for government work, and
it's still good enough to produce the Singularity.
We don't have any idea how to program irony or humor, since we don't know what they are except intuitively.





Once again, it doesn't matter. The Singularity will occur perfectly
well with "artificial intelligence," with or without
"self-awareness." The brain issues you raise don't matter at all in
any way that I can see.

Incidentally, we had a lengthy discussion of "self-awareness" in this
thread a few months ago, and I finally said that I don't know what
self-awareness is, but if you can tell me what it is, then I can tell
you how to implement it software.
Nobody can tell you what it is, that's the point. Until we can meaningfully define it, we can't intentionally duplicate it by software or any other means. It might happen by accident, but we can't make it happen until we can define it.

But if you try to tell me that it doesn't exist, I'm just going to laugh.


Over the months, I've come up with what I think
"self-awareness" means: Self-awareness means having self-preservation
as a goal.
No, because some creatures that are almost certainly not self-aware, such as ants, have some programmed tendencies toward self-preservation. Also, human suicides are self-aware, but they don't seek self-preservation.







Post#246 at 04-07-2004 02:40 AM by Finch [at In the belly of the Beast joined Feb 2004 #posts 1,734]
---
04-07-2004, 02:40 AM #246
Join Date
Feb 2004
Location
In the belly of the Beast
Posts
1,734

Re: How much do you want to bet?

Quote Originally Posted by HopefulCynic68
Quote Originally Posted by John J. Xenakis
Actually, it's pretty well agreed that Moore's Law will run its string out by 2010 or 2012. However, there's a plethora of new technology -- nanotechnology, protein folding technology, molecular technology, quantum technology -- under development to take the place of integrated circuits and keep the power of computers growing steadfastly, as history shows will happen.
History shows nothing of the sort. You can't meaningfully extrapolate past trends onto new technologies in that sense. It might happen, but the likelihood is guesswork, and nothing more.

A projection of the improvement in rocket propulsion, for ex, between the Mercury Redstone series and the Saturn V would indicate that we should be all over the Solar System by now. But the trend stopped.
I'm with Hopeful on this one :shock: Moore's Law is simply a statement of Intel Corporation's business model, not any particular insight into underlying models of technology development. As John said, "Moore's Law" will run out in another 6-8 years: not because of some technological barrier, but rather because Intel's fab costs are also increasing exponentially, and soon the risk of building the next-generation fab will exceed any expected return. Presumably by then, Intel will have been investing in other technologies such as John mentioned, but that will represent a break, a discontinuity in the upward trend, so any results are "guesswork" as HC says. (Rocket propulsion is another example of the risk/return hitting a limit.)

Quote Originally Posted by HopefulCynic68
Quote Originally Posted by John J. Xenakis
Quote Originally Posted by HopefulCynic68
> and IF intelligence is primarily a function of computing power
> (both doubtful concepts),
Well, it appears that the human brain is a very good multiprocessing computer specializing in pattern mapping and associative memory. For example, when you look at a chair, then you recognize it as a chair because your brain simultaneously compares it to zillions of chair images stored in your brain. Things like "inventions" are achieved by finding an appropriate combination of previously learned bits of knowledge, and combining those bits into something new and more wonderful. The faster your brain can find bits of knowledge and combine them, then the more intelligent you are. So as computers get faster, they get more intelligent.
Yes, that's one kind of intelligence. But since we don't know how the brain works, it's hard to say how important that aspect is, or whether it can scale indefinitely. We don't even know if the brain is doing something we'd recognize as 'computing' when it performs pattern-recognition.

We've only recently begun to realize the apparent importance of glial cells to the formation of long-term memory.
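The "zillions of stored chair images" account of recognition quoted above amounts to brute-force nearest-neighbor matching, which is easy to sketch. The toy "images" below are invented two-number feature vectors, standing in for whatever the brain actually stores:

```python
def recognize(image, memory):
    """Brute-force associative recall: compare the input against every
    stored example and return the label of the closest one. Faster
    hardware means more comparisons per second -- the speed/intelligence
    link asserted above, for this kind of task at least."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_label, _ = min(((label, dist(image, example))
                         for label, example in memory),
                        key=lambda pair: pair[1])
    return best_label

# Toy feature vectors standing in for learned images.
memory = [("chair", (4.0, 1.0)), ("chair", (4.2, 1.1)),
          ("table", (8.0, 2.0))]
```

Whether the brain does anything resembling this comparison loop is, of course, exactly the point in dispute.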

Well, one obvious limiting factor in the "scale-up" approach to developing machine intelligence is that nobody is working on it. All the supercomputers currently in operation are built for highly specialized purposes. Even the supercomputer with a capability most closely resembling human intelligence, the chess grandmaster Deep Blue, is actually just a computer optimized for playing chess against Kasparov. Hardly a demonstration of general intelligence.

With regards to AI in general, the state of the art is not promising. For example, a decent performance in the Turing test is generally considered to be a minimum qualification for declaring any sort of intelligence. The Loebner Prize is a sort-of-annual attempt at meeting the Turing Test. Check out the most recent transcripts. In terms of attracting marquee talent, it may not help that Loebner is nuttier than a fruitcake; but most of the programs entered in the contest are pretty pathetic (not much improved over the state-of-the-art 20 years ago.) Right now, the most advanced AI program is still much less intelligent than, say, an ant. If progress continues at this pace, in 5-10 years, we'll have perhaps a much smarter ant, but we're at least 10-15 years away from matching a cockroach.

Again, it is certainly possible that some unforeseen breakthrough will move such a date closer by leaps and bounds; but for now, the Kurzweilian model of inexorable technological progress toward the Singularity seems mostly to be Extropian wishful thinking.


Personally, I would be more inclined to place my bets on human augmentation (embedded computers and genetic tinkering) as the next radical step toward post-humanity. Those are areas where vast amounts of capital and effort are currently focused.







Post#247 at 04-07-2004 07:08 AM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
04-07-2004, 07:08 AM #247
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Re: How much do you want to bet?

Dear Hopeful Cynic 68,

Quote Originally Posted by HopefulCynic68
> The pattern you described is merely a restatement. ... We don't
> know what the brain is doing when it puts information together in
> original ways.

> ...

> Without creativity and imagination, they won't be able to improve
> themselves, and since we can't define either even though we know
> both exist, we can't assign a meaningful timeframe.

> ...

> We don't have any idea how to program irony or humor, since we
> don't know what they are except intuitively.

> ...

> Nobody can tell you what it is, that's the point.
You didn't even address my request that you provide an example of
invention or creativity that doesn't fit the pattern I described.

You could say, "You don't know why the sky is blue." I could say,
"It's blue because the atmosphere refracts the sunlight." Then you
could say, "You don't know. We know nothing of why the sky is blue,
or how it puts blueness together in original ways." Yada, yada,
yada.

I've addressed all the points you raise at length throughout this
thread. If you just respond "We don't know ..." to everything I
write, then there's nothing I can add that isn't a restatement, so
we'll just have to agree to disagree.

Quote Originally Posted by HopefulCynic68
> Without imagination and (probably) self-awareness, it won't be
> able to design a better version of itself. It won't even be able
> to comprehend the meaning of the word 'better'.
Not true. Even dumb computer programs can be programmed to improve
themselves according to some metric. The first checker-playing
program became world checker champion (in 1960 as I recall) by
constantly adjusting its own parameters.
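The kind of self-adjustment described here can be sketched in a few lines. This is a loose illustration of the idea (hill-climbing on evaluation weights against made-up training data), not the actual checkers program:

```python
import random

class SelfTuningPlayer:
    """Sketch of parameter self-tuning in the spirit described above
    (not the historical checkers program): the program keeps whichever
    version of its own evaluation weights fits a metric better, so it
    'improves itself' without anything resembling creativity."""
    def __init__(self, weights):
        self.weights = weights

    def evaluate(self, position):
        return sum(w * f for w, f in zip(self.weights, position))

def squared_error(player, positions, targets):
    return sum((player.evaluate(p) - t) ** 2
               for p, t in zip(positions, targets))

def tune(player, positions, targets, rounds=500, seed=1):
    rng = random.Random(seed)
    for _ in range(rounds):
        trial = SelfTuningPlayer(
            [w + rng.uniform(-0.2, 0.2) for w in player.weights])
        if squared_error(trial, positions, targets) < \
           squared_error(player, positions, targets):
            player.weights = trial.weights  # the program rewrites itself
    return player

# Made-up training data: positions as feature pairs, targets as scores.
positions = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
targets = [2.0, 3.0, 5.0]
player = tune(SelfTuningPlayer([0.0, 0.0]), positions, targets)
```

The metric is supplied from outside; the program only climbs it, which is the distinction the two posters are arguing over.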

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#248 at 04-07-2004 07:33 AM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
04-07-2004, 07:33 AM #248
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Re: How much do you want to bet?

Dear Rick,

Quote Originally Posted by Rick Hirst
> Moore's Law is simply a statement of Intel Corporation's business
> model, not any particular insight into underlying models of
> technology development.
Actually, that isn't true. Moore announced his law in the mid-1960s
as a technological prediction, not a business prediction, and every
computer chip developer has followed the same path.

Here's a graph from an IBM presentation:



As you can see, there are many vendors on this list besides Intel.
It's the technology that improves at a steadfast exponential rate,
not the business plan.

As you probably already know, Ray Kurzweil has gone back as far as the
late 1800s to show that computing speed started increasing
exponentially long before integrated circuits were invented. The
speed of computers has been growing according to a predictable
exponential rate through many wildly different technologies.

Here's the list of technologies identified by Ray Kurzweil
( http://www.kurzweilai.net/articles/art0134.html ):

(1) Punched card electromechanical calculators, used in 1890 and 1900
census

(2) Relay-based machine used during World War II to crack the Nazi
Enigma machine's encryption code.

(3) The CBS vacuum tube computer that predicted the election of
President Eisenhower in 1952.

(4) Transistor-based machines used in the first space launches

(5) Integrated circuits - multiple transistors on a chip (Moore's
Law)

Kurzweil has shown that all of these technologies cause computing
speed to increase according to a steadfast, predictable exponential
growth curve.
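The arithmetic behind such a curve-fit is straightforward. This is a minimal sketch of how a doubling time is inferred from two data points and projected forward; the numbers are illustrative only, not Kurzweil's data:

```python
import math

def doubling_time(year0, value0, year1, value1):
    """Years per doubling implied by two observations, assuming
    growth between them is exponential."""
    rate = math.log(value1 / value0) / (year1 - year0)
    return math.log(2) / rate

def extrapolate(year0, value0, doubling, year):
    """Project the value forward along the same exponential curve."""
    return value0 * 2 ** ((year - year0) / doubling)

# Illustrative numbers only: a 1024x speedup over 20 years
# implies a doubling every 2 years.
d = doubling_time(1980, 1.0, 2000, 1024.0)
```

The disagreement in this thread is not about this arithmetic but about whether the curve survives a change of underlying technology.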

It isn't just a "Kurzweilian model" either. In chapter 11 of my book
( http://www.generationaldynamics.com/...?d=ww2010.book ),
I present several examples of exponential growth through wildly
changing technology paradigms, including artificial light sources,
combat aircraft, installed technological horsepower per capita, the
divorce rate, rate of out-of-wedlock births, and stock prices.

Quote Originally Posted by Rick Hirst
> With regards to AI in general, the state of the art is not
> promising. For example, a decent performance in the (
> http://cogsci.ucsd.edu/~asaygin/tt/ttest.html ) Turing test is
> generally considered to be a minimum qualification for declaring
> any sort of intelligence. The (
> http://www.loebner.net/Prizef/loebner-prize.html ) Loebner Prize
> is a sort-of-annual attempt at meeting the Turing Test. Check out
> the most recent (
> http://www.surrey.ac.uk/dwrc/loebner...-logs-2003.zip )
> transcripts.
Just because computers have not yet passed the Turing Test doesn't
mean they won't within 10 or 20 years. Computers are not yet fast
enough to perform the brute-force pattern matching needed to pass the
Turing Test with decent performance, but they will.

Quote Originally Posted by Rick Hirst
> Well, one obvious limiting factor in the "scale-up" approach to
> developing machine intelligence is that nobody is working on
> it. All the supercomputers currently in operation are built
> for highly specialized purposes. Even the supercomputer with a
> capability most closely resembling human intelligence, the chess
> grandmaster Deep Blue, is actually just a computer optimized for
> playing chess against Kasparov. Hardly a demonstration of
> general intelligence.
This simply isn't true. Many of the supercomputers listed at
http://www.top500.org are general purpose machines.

As for chess playing, you're thinking of "Deep Thought," a chess
playing machine using special-purpose chips. However, Deep
Blue, the computer that played against Kasparov in 1997, used
standard RS/6000 chips.

There have been several important chess matches since then. The
latest, played in November 2003, matched Kasparov against a
commercially available X3D Fritz computer, and resulted in a tied
2-2 match. (See http://www.x3dchess.com/ ).

Chess playing is a good example because it shows by analogy how
improved computer speed will beat the Turing Test using brute force
algorithms.

In 1970, the best chess-playing program was quite weak. Today, the
best chess-playing program uses roughly the same minimax algorithm,
but is world champion class. The improvement from "weak" to "world
championship class" came about strictly from increasing computer
speed. Similarly, the ability to pass the Turing Test will come with
increased computer speed.
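The chess analogy rests on the fact that minimax itself is tiny and unchanging. Here is a minimal fixed-depth minimax over an invented toy game (the game and its value function are made up for illustration):

```python
def minimax(state, depth, maximizing, moves, value):
    """Plain fixed-depth minimax. The algorithm never changes; a
    faster computer simply affords a larger depth, which is the whole
    point of the chess analogy above."""
    children = moves(state, maximizing)
    if depth == 0 or not children:
        return value(state)
    scores = [minimax(c, depth - 1, not maximizing, moves, value)
              for c in children]
    return max(scores) if maximizing else min(scores)

# Toy game for illustration: each move adds 1 or 2 to a running total,
# and a position's value is the total itself.
toy_moves = lambda s, _maximizing: [s + 1, s + 2]
toy_value = lambda s: s
```

Whether a brute-force approach of this character can ever cover open-ended conversation, as the Turing Test requires, is the contested claim.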

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#249 at 04-07-2004 09:04 AM by Croakmore [at The hazardous reefs of Silentium joined Nov 2001 #posts 2,426]
---
04-07-2004, 09:04 AM #249
Join Date
Nov 2001
Location
The hazardous reefs of Silentium
Posts
2,426

Re: Artificial life

Quote Originally Posted by John J. Xenakis
Discussing a news article on scientists making "artificial life" out of "artificial DNA":

...

"Of course, the first order of business is to demonstrate that we can
make these little guys replicate themselves," he said. "We're far from
that right now, but working on it. The second step is teaching them to
do something useful, like build a sweater or some other useful object.
They just won't understand Fortran or Microsoft Word or anything right
off the bat. It's a long way off."

The time scale is to the scientists' advantage, Bedau added, because
it will allow regulators, ethicists and others time to figure out
exactly how to deal with this new phenomenon.

"We have time to think this through," Bedau said. "And it will
certainly give us a lot to think about. In the end I think the
benefits will outweigh the risks. But we're not at the point where we
have to define everything just yet."

http://www.abqtrib.com/archives/news...ews_life.shtml
Well, I am not yet sure there is a new phenomenon.

This would be a very big deal if certain key claims can be verified. I first picked up on "artificial nucleotides" when Nature reported them as a news item from Astrobiology Magazine, a good resource, which I believe is connected to NASA. Making and substituting artificial nucleotides, amounting to any workable permutation off the standard four, would be truly astonishing. And "artificial life" would be a whopper! But I was unnecessarily astonished by "morphogenic fields" and "cold fusion," too. These chilly experiences left me, one who pays close attention to such things, just a little bit skeptical.

For one thing, the current focus in molecular biology is shifting somewhat from linear arrangements of nucleotides, which are important in their own right, to "epigenes"--subtle-but-significant overlord tribes of non-nucleotide radicals like methyl groups, sulfites, and the baffling roles of "introns," which are indeed nucleotides but apparently contain no genetic code. With so much attention directed toward epigenes, these new artificial-nucleotide makers are more on the order of curiosities at present.

But let them get the artificial replication going and I'll really get juiced! One thing to remember, though, is that these stories, like cold fusion, morphogenic fields, and dinosaur tracks alongside those of humans, are often rife with journalistic excitement and light on verifiable substance. My skeptical eye does not turn entirely away, however, because I've seen science fiction (like Dick Tracy's two-way wrist radio) become science fact all too often.

It's worth the time we spend here chatting about it.

--Croaker







Post#250 at 04-07-2004 12:52 PM by HopefulCynic68 [at joined Sep 2001 #posts 9,412]
---
04-07-2004, 12:52 PM #250
Join Date
Sep 2001
Posts
9,412

Re: How much do you want to bet?

Quote Originally Posted by John J. Xenakis
Dear Hopeful Cynic 68,

Quote Originally Posted by HopefulCynic68
> The pattern you described is merely a restatement. ... We don't
> know what the brain is doing when it puts information together in
> original ways.

> ...

> Without creativity and imagination, they won't be able to improve
> themselves, and since we can't define either even though we know
> both eixst, we can't assign a meaningful timeframe.

> ...

> We don't have any idea how to program irony or humor, since we
> don't know what they are except intuitively.

> ...

> Nobody can tell you what it is, that's the point.
You didn't even address my request that you provide an example of
invention or creativity that doesn't fit the pattern I described.
Yes, I did. I agreed that there are no such examples.


You could say, "You don't know why the sky is blue." I could say,
"It's blue because the atmosphere refracts the sunlight." Then you
could say, "You don't know. We know nothing of why the sky is blue,
or how it puts blueness together in original ways." Yada, yada,
yada.
Bad comparison.

The sky is blue because of atmospheric refraction. We can meaningfully define refraction, because we understand to a point that light can be described as a wave, and that the waves follow a path through a medium dependent on the density and temperature. We can explain the how of that because we have learned that matter is particulate, and we grasp the way electrons absorb and emit photons, which is light in its particulate aspect. We can mathematically model the process meaningfully.

Your description of invention would be the equivalent of saying "The sky is blue because air bends light." Completely true, but it doesn't really tell us anything about the underlying mechanisms. Right now, our understanding of the process of creativity and invention is at the 'because it bends light' level, if that.




Quote Originally Posted by HopefulCynic68
> Without imagination and (probably) self-awareness, it won't be
> able to design a better version of itself. It won't even be able
> to comprehend the meaning of the word 'better'.
Not true. Even dumb computer programs can be programmed to improve
themselves according to some metric.
But we know the rules of chess. They're mathematically amenable to modelling. We don't know the rules of how to improve anything in a creative sense. If you ask a creative person how they do what they do, be it technical invention, musical composition, artistic expression, or anything of the sort, they themselves usually have no idea how they do it; they just do it.
-----------------------------------------