Generational Dynamics
Fourth Turning Forum Archive



Thread: The Singularity - Page 15







Post#351 at 06-04-2004 02:47 PM by Zarathustra [at Where the Northwest meets the Southwest joined Mar 2003 #posts 9,198]
---
06-04-2004, 02:47 PM #351
Join Date
Mar 2003
Location
Where the Northwest meets the Southwest
Posts
9,198

Quote Originally Posted by Brian Rush
In my novel, the machines feel their motivations as emotions, or something like emotions.
That makes sense:


Main Entry: emo·tion
Pronunciation: i-'mO-sh&n
Function: noun
Etymology: Middle French, from emouvoir to stir up, from Old French esmovoir, from Latin emovEre to remove, displace, from e- + movEre to move

Main Entry: mo·ti·va·tion
Pronunciation: "mO-t&-'vA-sh&n
Function: noun
1 a : the act or process of motivating

Main Entry: mo·ti·vate
Pronunciation: 'mO-t&-"vAt
Function: transitive verb
Inflected Form(s): -vat·ed; -vat·ing
: to provide with a motive

Main Entry: mo·tive
Pronunciation: 'mO-tiv, 2 is also mO-'tEv
Function: noun
Etymology: Middle English, from Middle French motif, from motif, adjective, moving, from Medieval Latin motivus, from Latin motus, past participle of movEre to move


I guess it boils down to what gets you up in the morning.
Americans have had enough of glitz and roar . . Foreboding has deepened, and spiritual currents have darkened . . .
THE FOURTH TURNING IS AT HAND.
See T4T, p. 253.







Post#352 at 06-04-2004 02:49 PM by Brian Rush [at California joined Jul 2001 #posts 12,392]
---
06-04-2004, 02:49 PM #352
Join Date
Jul 2001
Location
California
Posts
12,392

Quote Originally Posted by John J. Xenakis
Now my question is: Are you really so certain that there's anything
really chaotic in any of these questions?
Yes. Indeterminacy is the rule in nature. It prevails at the subatomic level, obviously, but in a few natural processes (like the orbits of the planets) quantum indeterminacy becomes suppressed. These relatively rare processes have been the basis for classical mechanics and a lot of technology, but they are the exceptions, not the rule.

So there is nothing out of the ordinary in saying that human behavior is chaotic. It's what should be expected.







Post#353 at 06-04-2004 04:27 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-04-2004, 04:27 PM #353
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Chaos

Dear Brian,

Quote Originally Posted by Brian Rush
> Yes. Indeterminacy is the rule in nature. It prevails at the
> subatomic level, obviously, but in a few natural processes (like
> the orbits of the planets) quantum indeterminacy becomes
> suppressed. These relatively rare processes have been the basis
> for classical mechanics and a lot of technology, but they are the
> exceptions, not the rule.

> So there is nothing out of the ordinary in saying that human
> behavior is chaotic. It's what should be expected.
But wait - how can you infer chaos at the macro level from chaos
at the quantum level? If the brain is making chaotically random
decisions, then from your argument I'd have to conclude that
computers also make chaotically random decisions, even though we know
that they don't.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#354 at 06-04-2004 05:01 PM by Zarathustra [at Where the Northwest meets the Southwest joined Mar 2003 #posts 9,198]
---
06-04-2004, 05:01 PM #354
Join Date
Mar 2003
Location
Where the Northwest meets the Southwest
Posts
9,198

Re: Astonishment

Quote Originally Posted by John J. Xenakis
So the point that I'm making is that even relatively simple rule sets
can yield totally surprising, astonishing results, and so the hugely
more complex rule sets that people use to get through the day are
certain to produce astonishing results sometimes. So if your evidence
of human decision-making randomness is that humans sometimes make
extremely astonishing and surprising decisions, then that would happen
anyway, so that randomness wouldn't be necessary.
Darn good point.
Americans have had enough of glitz and roar . . Foreboding has deepened, and spiritual currents have darkened . . .
THE FOURTH TURNING IS AT HAND.
See T4T, p. 253.







Post#355 at 06-11-2004 05:53 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-11-2004, 05:53 PM #355
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Logic-assisted pattern matching

The discussion over the last few days has raised a number of issues
with the "Intelligent Computer (IC) Algorithm," the software that
will make super-powerful computers super-intelligent.

This posting is an attempt to extend the IC algorithm to address
these problems.

In the original IC algorithm design, I simply assumed, without going
into any detail, that voice recognition and vision would be handled
by massively parallel brute force pattern matching subsystems that
would provide some answer to the question, "What am I seeing?" or
"What am I hearing?" In this message, I'll address the questions of
vision and voice recognition in greater detail.

The brain's subsystems

In order to describe major functions of the brain, it's easiest to
talk about the human brain in computer terms. I'm well aware that
the brain doesn't have things like databases and primary and
secondary storage, but these terms should be understood purely
functionally for the purposes of this memo.

For the purposes of this memo, we assume that the brain has several
separate subsystems:

(*) The "logic unit" or the "rule processing unit." You may know
this subsystem by the name "conscious thought."

(*) The "intuition unit," an adjunct to the logic unit, which makes
decisions by intuition.

(*) The "vision unit". This unit has its own memory, a database of
images, and is able to to use massively parallel brute force pattern
matching to compare the current scene with the images in the
database, to identify the object you're looking at.

(*) The "face recognition unit." I suspect that the human brain has
a special capability for recognizing faces. A sidelight to this
capability is the well-established fact that people of each race are
better able to recognize and identify faces of their own race than
those of other races, indicating that the "face recognition unit" may
be optimized for the person's race.

(*) The "sound unit," which can similarly recognize sounds.

(*) The "voice recognition unit" is related to the sound unit, and
provides the additional functionality of identifying words and
phrases in spoken sounds.

All of these except the logic unit work through massively parallel
processing. The logic unit uses deductive logic.
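
To make the block structure concrete, here's a rough sketch in
Python. Every name and structure below is my own invention for
illustration -- a sketch of the design, not the design itself. The
point is just that the parallel units all share one interface
("match a stimulus against a local database"), while the logic unit
is the only one that chains rules:

    from dataclasses import dataclass, field

    def similarity(a, b):
        # Toy similarity: the fraction of features two patterns share.
        return len(a & b) / max(len(a | b), 1)

    @dataclass
    class MatchingUnit:
        # One massively parallel subsystem (vision, faces, sound, voice).
        # The brain searches its whole database at once; the max() below
        # fakes that with a serial scan.
        name: str
        database: dict = field(default_factory=dict)   # pattern -> label

        def recognize(self, stimulus):
            best = max(self.database, default=None,
                       key=lambda pattern: similarity(pattern, stimulus))
            return self.database.get(best)

    class LogicUnit:
        # The "conscious thought" subsystem: applies rules serially.
        def __init__(self, rules):
            self.rules = rules          # list of (condition, conclusion)

        def deduce(self, facts):
            for condition, conclusion in self.rules:
                if condition(facts):
                    facts.add(conclusion)
            return facts

    vision = MatchingUnit("vision",
                          {frozenset(["four legs", "flat seat"]): "chair"})
    print(vision.recognize(frozenset(["four legs", "flat seat", "blue"])))

The interesting design questions are all in how these units talk to
each other, which is what the rest of this memo is about.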

How logic affects vision and hearing

Here, I want to address the question of how logic affects hearing or
vision. That is, the computer makes a logic-based decision about how
to interpret what it's seeing or hearing.

The human brain does this all the time:

(*) When you're looking at an optical illusion, you're usually able
to make a conscious decision as to which of the conflicting
interpretations you wish to "see." Here are two examples:

[Image: a reversible cube that can be seen in either of two
orientations.]

In the above graphic, the cube can be seen in two different ways, and
your conscious mind can control which of the two images you see.

[Image: the classic ambiguous figure of a young woman / old woman.]

In the above example, your conscious mind can control whether you see
a young woman or an old woman.

(*) When you're at a party with a roomful of different voices, you're
able to make a conscious decision to "tune in" to one particular
voice.

These examples indicate that the brain has some control over the
pattern matching process. In computer terms, the vision subsystem and
the voice recognition subsystem are not simply autonomous processors
that do their job and present their results -- one or more matches --
to the logic unit. Rather, the logic unit will have several controls,
yet to be determined, over how the pattern search is performed. This
is what we mean by "logic-assisted pattern matching."
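
Here's a minimal sketch of what one such control might look like,
with the caveat that the actual controls are yet to be determined --
the "bias" mechanism below is just one guess. The matcher scores all
candidate interpretations, and the logic unit supplies a preferred
interpretation that wins whenever the scores are nearly tied, as
with the cube above:

    def assisted_match(stimulus, candidates, similarity, bias=None, boost=0.1):
        # Score every candidate interpretation (conceptually in parallel),
        # then let the logic unit's bias break near-ties.
        scores = {c: similarity(stimulus, c) for c in candidates}
        if bias in scores:
            scores[bias] += boost          # the conscious preference
        return max(scores, key=scores.get)

    def same(a, b):
        return 0.5                         # both interpretations fit equally

    print(assisted_match("cube drawing",
                         ["cube seen from above", "cube seen from below"],
                         same, bias="cube seen from below"))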

Inflexible vs Intuitive People

Recently in this thread we've discussed problems in rule-based
systems -- how even the simplest rule sets can lead to astonishing
and surprising decisions.

One of the examples was the school principal who follows
"zero-tolerance" drug rules, leading to decisions to severely punish
innocent children for innocent acts.

More generally, such people are usually called "inflexible," because
they appear to follow rules blindly. People are frequently called
"inflexible" in the political arena, often by using the phrase
"hard-core," as in "hard core feminist," "hard core conservative
Christian," "hard core capitalist," or "hard core communist."

Other people are considered "intuitive." These people don't seem to
follow any rules at all, but seem to know what to do without thinking
about it. Presidents Reagan and Clinton seemed to have that quality,
while President Nixon appeared to be more rule-based.

Most people (probably all people) are a combination of the two.
Everyone follows general principles, general rules, to get through
the day, but everyone has the flexibility to override those rules
intuitively when required.

Common law has become such a monstrosity for exactly this reason. We
start with some basic rules: thou shalt not kill, steal, etc. And
then, when one of the rules leads to a ridiculous conclusion (like
punishing someone for killing in self-defense, or for "stealing"
something that someone else has thrown in the garbage), then the rule
is modified. Centuries of such common law rule modifications
have led to a very complex body of rules.

The Common Law example is important to think about, because it shows
that something different is going on with intuitive human beings.
The things that intuitive people do could probably be boiled down to
a set of rules (and there are "self-help" books that try to do this),
but in the end the number of rules would be comparable to the size of
the Common Law, so there's something else going on.

What we appear to have here is the inverse of the situation we had with
vision and voice recognition. In the case of vision, for example,
visual images are recognized by means of massively-parallel pattern
matching, sometimes restricted by rules.

We don't recognize things by rules. We don't look at a person's face
and say, "The eyes are 2.3 inches apart, the nose has a 12 degree
hook, and the mouth turns down slightly, so this must be my friend
Frank." The pattern matching in this case is instantaneous and
automatic. Even when the pattern-matching is modified by rules, as
in the case of the optical illusions illustrated above, it's the
pattern-matching process that dominates.

In the case of intuitive thinking, predominant control may go in
either direction.

When a child first learns to count from 1 to 100, he has to apply
rules at every step to decide what the next number should be. When
an adult counts 1 to 100, most of it is intuitive, with one number
following another intuitively.

But when faced with a heavy decision, such as whether to punish a
child for something, there'll be a rule-based decision ("he broke the
rule and has to be punished"), and an intuitive decision based on
pattern-matching (Template: "Punishing innocent kids causes guilt" or
template: "being nice to juvenile delinquents leads to crime"). The
intuitive results are then incorporated into the rule-making so that
the brain can synthesize a final decision based on everything.

Below we'll discuss how intuitive thinking can be implemented using
templates.

Hypnotism

Hypnotism provides a fascinating bit of additional insight.

Different people are capable of achieving different levels of
hypnosis. At lighter levels of hypnosis, the subject is open to
posthypnotic suggestions, though a subject never does something under
hypnosis he would not do while awake. This level of hypnosis is used
in stage presentations ("When you wake up, you'll cluck like a
chicken."), and in a therapeutic setting it's used for control of
phobias, or to reinforce good habits.

A subject who is capable of achieving a medium level of hypnosis can
use it to control pain or to recall long-forgotten or traumatic
events. It's also possible to induce temporary amnesia. The subject
can also respond to suggestions to hallucinate specific scenes or
sounds, and these hallucinations will appear very vivid to the
subject.

At a high level of hypnosis, a subject can respond to suggestions of
negative hallucinations, which will keep him, for example,
from seeing and hearing a person in the room with him.

The effects of hypnosis indicate an even greater level of conscious
control over sight, sound and touch, and provide further hints into
the implementation of the IC algorithm.

Inductive Logic and Templates

No two chairs look exactly alike, and yet you can look at something
and immediately know whether or not it's a chair. How does that work?

The ancient Greek philosopher Plato discussed this problem, and came
up with a "Theory of Forms." Even though no two chairs look exactly
alike, they all have the property of "chairness." Plato argued that
"chairness" is a form, and that such a form exists independent of the
existence of any actual chair. The question of whether Forms have a
separate existence of their own has launched probably billions of
hours of useless philosophical debate over the centuries, but for us
there's a very practical application.

Our brains also have a concept of "chairness," which the IC will
implement with what we'll call a "chairness template" (meaning
roughly the same thing as Plato's chairness form).

The brain creates templates automatically using inductive reasoning.

Inductive reasoning works by inferring information based on examples.
By the time we've seen a thousand different chairs, our brain somehow
knows what a chair is. Even if we've seen only red, green, yellow,
brown and black chairs, and never a blue-green chair before, our
brain automatically recognizes it as a chair based on its internal
"chairness template."

These templates exist for all sorts of sights, sounds, and even
concepts. If you hear your wife's voice a million times, then even
when she utters a sentence you've never heard before, you still
recognize her voice because the sound fits your wife's "voice
template."

A teacher who grades students' term papers will get used to seeing
the same sorts of concepts and arguments used over and over again by
different students. His mind will develop "concept templates" which
will permit him to immediately recognize what arguments the student
is making, on the basis of just a few words or sentences.

This is also true in ordinary conversations. You might speak to a
stranger at a party and know immediately whether his political views
are "liberal" or "conservative," based on just a few words, or even
just on the way he says those few words. That's because your brain
has created a "liberal template" and a "conservative template," based
on thousands of such previous conversations.

Templates are also important in the implementation of intuitive
versus rule-based decision making. Intuitive decisions come from
templates created from inductive reasoning on past experiences in
similar situations.

If there's a need to implement emotion-based decision-making, then
templates may be the best way for that as well.
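
To sketch how the rule/intuition synthesis described earlier might
work (the threshold and scores below are pure invention for
illustration): run the rules to get a verdict, score the situation
against the intuitive templates, and let a sufficiently strong
template override the rule. The zero-tolerance principal is the
test case:

    def decide(facts, rules, intuition, override=0.8):
        # Rule-based verdict first, then let a strong enough intuitive
        # signal (a template match from past experience) override it.
        verdict = None
        for condition, action in rules:
            if condition(facts):
                verdict = action
                break
        for action, score in intuition(facts):
            if score > override:
                return action
        return verdict

    # The rule says punish, but the "punishing innocents causes
    # guilt" template fires strongly enough to win.
    rules = [(lambda f: "drugs found" in f, "punish")]
    def intuition(f):
        return [("don't punish", 0.9)] if "child is innocent" in f else []
    print(decide({"drugs found", "child is innocent"}, rules, intuition))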

The Pattern Database

Each of the subsystems we've described has a database: The vision
subsystem has a database of scene templates; the voice recognition
subsystem has a database of voices and sounds; and the rules subsystem
has a database of rules. Each subsystem also has a database of
templates.

We don't know how these databases are organized in the brain, but we
can specify how they should be organized in the IC.

"These foolish things remind me of you," goes the old song. The
foolish things might include a thing, like an airline ticket, or a
sound, like a tinkling piano in the next apartment, or a smell, a
taste, a feeling, a word, or almost anything else.

These examples indicate that the IC databases should be very flexible,
in the sense that each stored object can be linked to any other
related object. Thus, a single object may have a huge number of links.
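
In sketch form, the store might look like this -- the structure is
invented for illustration, but it captures the one requirement that
matters, namely that any object can link to any other:

    from collections import defaultdict

    class PatternDatabase:
        # Objects of any kind -- sights, sounds, smells, concepts --
        # each linked to arbitrarily many related objects.
        def __init__(self):
            self.links = defaultdict(set)

        def link(self, a, b):
            self.links[a].add(b)
            self.links[b].add(a)

        def related(self, obj):
            return self.links[obj]

    db = PatternDatabase()
    db.link("airline ticket", "you")
    db.link("tinkling piano in the next apartment", "you")
    print(db.related("you"))     # these foolish things remind me of you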

This will create a problem in that the database, together with all
the links, will become too large for quick access.

When we come to problems like this, it's amazing how we can look to
the actual human brain for a solution.

How does the human brain deal with too much data? It moves some of
the data into "secondary storage" -- storage that's harder to get at.

We see this when we try to remember something we once knew but have
forgotten. You may see someone and say, "I've seen that face before,
but I can't remember where." Later, while you're doing something
completely unrelated, you suddenly jump up with a start, and say,
"Now I remember where I saw that face before!"

During that interval, the brain went off on its own, without you
having to do anything conscious about it, and followed all the links
into secondary storage to find out who that person is.

Here's another example. Have you ever been stumped by a confusing
problem when you go to sleep at night, and then wake up the next
morning and find the problem solved in your brain? Or maybe you've
even woken up in the middle of the night, sat up and said to your
wife, "Honey, I've got it!" When she replies, "Honey, it's 3 am. Go
back to sleep," you then feel the need to run to your desk and write
down the answer.

These examples illustrate the amazing ability of the brain to go off
and do stuff on its own, to solve problems that you've set for it.
Sounds, sights, names, and faces are all linked together. Some of the
links go back into secondary storage in your brain that's not readily
available, but the brain will automatically go searching through that
secondary storage if you want it to.

It's very important that we understand these workings of the brain,
because they're going to be very important in the implementation of
the first Intelligent Computers. Why are they important? Because of
the following assumption: The brain evolved over millions of years.
During that time, nature "experimented" with many different brain
designs, and only the most efficient designs passed the "survival of
the fittest" test necessary to continue into the next generations.
Therefore, any function that the brain performs today must be
performed in the best possible way, because its design is the only
one that's survived the survival tests for millions of years. So
if we wonder what the best solution is for system design in the IC,
then we need only look to the real human brain for direction.

So the IC algorithm will have functionality to do the same thing. If
the rules-based unit decides it needs some information that isn't
readily available, then other processors in the IC will automatically
perform the necessary secondary storage searches.

But how does the IC decide what data goes into primary storage, and
what data goes into secondary storage? And when do those transfers
between primary and secondary storage take place? Here again, we look
to the human brain for answers.

Years ago, in college, I studied French and German, and was decently
fluent in both. I've found that if I don't use either language for
several years, then I no longer understand it. But then, a few days
of review brings back all the old fluency.

So the brain moves unused information to secondary storage, but when
the brain brings it back, it also brings back all the information
it's linked to, so you don't have to relearn the whole thing.

This is related to the saying, "Once you learn to ride a bicycle, you
never forget." Actually, you do forget, in the sense that the
brain automatically moves those skills to secondary storage if you
don't ride a bicycle for a long time. But those skills are just
sitting there, waiting until you hop onto a bicycle again, and then
the brain goes and finds those skills and brings them forward.

Similarly, the IC will have to be able to move large related blocks
of information back and forth to secondary storage.
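
Here's one crude sketch of such a migration policy. I'm assuming a
"least recently used" rule, which is only a guess at what the brain
actually does; the part that matters is that recall pulls the whole
linked cluster forward at once, the way a few days of review
restores a whole language. (This reuses the PatternDatabase sketched
above.)

    class TwoLevelMemory:
        # Primary (fast) vs. secondary (slow) storage.
        def __init__(self, related, capacity=4):
            self.related = related     # function: object -> linked objects
            self.capacity = capacity
            self.primary = {}          # object -> tick of last use
            self.secondary = set()
            self.tick = 0

        def touch(self, obj):
            self.tick += 1
            cluster = {obj} | set(self.related(obj))
            self.secondary -= cluster          # slow recall of the block
            for o in cluster:
                self.primary[o] = self.tick    # whole cluster comes forward
            while len(self.primary) > self.capacity:
                oldest = min(self.primary, key=self.primary.get)
                del self.primary[oldest]       # unused data migrates out
                self.secondary.add(oldest)

    memory = TwoLevelMemory(db.related)    # the linked store from above
    memory.touch("tinkling piano in the next apartment")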

The importance of sleep

We previously mentioned that you might go to sleep puzzled over a
problem, and then wake up the next morning to find the problem solved
in your brain.

Evidently a lot of brain activity goes on while you're asleep. This
is the time when the brain does a lot of work moving data back and
forth between primary and secondary storage.

Will it be necessary for the Intelligent Computers to go to sleep for
similar reasons? Perhaps not: it's possible that a way will be found
to do this sort of database management in "real time," so that the
IC's database will always be fresh. But it's also possible that this
cannot work because of the volume of data that the IC has to manage;
without a period of "sleep," with no new data coming in, the IC may
become confused if database management is attempted in real time.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#356 at 06-15-2004 05:32 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-15-2004, 05:32 PM #356
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Bilingualism and aging of the mind

According to the article quoted below, your mind will stay younger
if you learn two languages.

Whenever I see something like this these days, I start wondering if
it has any relevance to the "Intelligent Computer" algorithm. Will
intelligent computers get confused because they have too many things
to think about? Is it possible that a bug in the software will cause
attention deficit disorder? Or will the Intelligent Computer be so
single-minded on its job that it won't notice that the house is
burning down?

Ever since the sixties, computers have had "time slicing" or
"multitasking" capabilities that let them do several things at once,
but the different tasks have always been separate and discrete
activities that were specifically started up.

The Intelligent Computer will have to operate in a world where new
tasks can be added at any time, just as humans have to react to a
changed environment all the time. Since Intelligent Computers will
be multilingual as a matter of course, is there anything
corresponding to bilingualism that's relevant?
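
For contrast, here's that old-style kind of multitasking in a modern
sketch (Python's asyncio, purely illustrative). Adding a new task at
runtime is the easy part; what's missing, and what the IC will need,
is something that decides on its own that the smoke deserves a task:

    import asyncio

    async def job(name, seconds):
        await asyncio.sleep(seconds)
        print(name, "done")

    async def main():
        # Classic multitasking: discrete tasks, each specifically started.
        tasks = [asyncio.create_task(job("review French", 0.2))]
        # A brand-new task arriving mid-stream; the scheduler simply
        # time-slices between them, as computers have since the sixties.
        await asyncio.sleep(0.1)
        tasks.append(asyncio.create_task(job("notice the smoke", 0.05)))
        await asyncio.gather(*tasks)

    asyncio.run(main())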

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com

http://my.webmd.com/content/article/...000_1000_nb_04

Bilingualism May Keep the Mind Young

Knowing Two Languages May Slow Effects of Aging on the Mind


By Jennifer Warner
WebMD Medical News Reviewed By Brunilda Nazario, MD
on Monday, June 14, 2004

June 14, 2004 -- Two languages may be better than one when it comes to
keeping the mind young. A new study shows that being fluent in two
languages may help prevent some of the effects of aging on brain
function.

Researchers found that people who were bilingual most of their lives
were better able to stay focused on a task amidst a rapidly changing
environment compared with people who only spoke one language.

The ability to keep one's attention on a task is known as fluid
intelligence, and it is one of the first aspects of brain function to
deteriorate as people get older.

Researchers suggest that the ability to stay focused and to
manage attention while ignoring irrelevant information may involve
some of the same brain processes involved in using two languages. This
means bilingualism may offer a wide range of benefits for keeping the
mind sharp and fighting the effects of aging.

Bilingualism May Counter Effects of Aging

In the study, which appears in this month's issue of the journal
Psychology and Aging, researchers compared the reaction time of a task
performed by a group of bilingual and monolingual middle-aged (30- to
59-year-old) and older (60- to 88-year-old) adults. The task measured brain
thinking processes known to decline with age.

For example, in one test the participants watched flashing squares on
a computer screen and were asked to press a particular colored key
when they saw a square in a certain location of the screen. Half of
the squares were presented on the same side of the screen where the
correct key was located and the other half of the squares were on the
opposite side of the screen to where the correct key was located.

Then the number of squares was also increased and other distractions
were introduced to analyze reaction time.

Researchers found that in all phases of the testing, both younger and
older bilingual adults performed the task faster than those who only
spoke one language, regardless of positioning of the squares or the
speed in which the squares were presented.

More importantly, researchers say that the bilingual participants were
also less distracted by unnecessary information.

All of the bilinguals in the study had used their two languages
every day since they were 10 years old, and researchers say that the
life-long experience of managing two languages may prevent some of the
negative effects of aging on processing of distracting information.

--------------------------------------------------------------------------------

SOURCES: Bialystok, E. Psychology and Aging, June 2004; vol 19: pp
290-303. News release, American Psychological Association.







Post#357 at 06-16-2004 08:44 PM by evdaevb [at joined Jun 2004 #posts 9]
---
06-16-2004, 08:44 PM #357
Join Date
Jun 2004
Posts
9

Before I start I want to make it clear that I did not read the whole thread. I read the first few pages, skimmed the middle, and I read the last few pages. I am looking forward to reading the entire thing when I have more time.

Anyways....

Xenakis:

I have a few concerns that I would like you to address.

You put the date for ultra-intelligent AI at 2030. This date strikes me as quite early. We do know (by simply extrapolating the trend line on a graph) that the average person will be able to afford a computer with the hardware capacity of the human brain at around 2020 (supercomputers will reach equal capacity by the end of this decade/early next decade).

Today's home computers are significantly less capable than our brains (I'm specifically referring to hardware, but this also applies to software). Thanks to the exponential advancement in hardware, we will see human-brain-equivalent computers in or around 2020 (I'm well aware you know all this stuff, but it helps me get my bearings straight).

This exponential growth is not happening with software. Software advances linearly.

Two things have to happen in order to create human-equivalent AI (note: not greater than human). We need to understand the brain (which will probably be for the most part understood by the mid-2020s) and we have to be able to mimic the neural networks in our brain.

Network research has already started and it is progressing wonderfully. Much of it depends on hardware, and as we know we are getting more and more powerful hardware annually. Brain research is closely tied to computation (which is why an approximate date can be given for progress in research) but it also requires substantial interpretation and testing.

Imagine it's now the mid-to-late 2020s, and software programmers are just implementing the code and testing different methods to make the machine function intelligently. This process would probably take around 4-5 years. It is around 2030 that the machine will have intelligence equal to humans. This is around the time that most predict a computer will pass the Turing Test.

Ultra-intelligent AI is quite different and will take much more work. To get to greater-than-human intelligence we will need computers helping us with the research. Fast hardware and almost full knowledge about our brain will be required. That means knowing how thinking and intelligence work and how the senses and emotion affect those processes. With that full understanding we can start modifying. I can't see ultra-AI coming before 2035. My best guess would be close to 2045.


My second concern is in regard to your end-of-the-world scenario. Why do you think we (that is, humans) will remain static? Brain augmentation and like technologies will put us on equal footing with these machines. Maybe not at first, but we will eventually increase our own intelligence and match theirs. I don't see why anyone would refuse.


That's it for now.







Post#358 at 06-16-2004 10:24 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-16-2004, 10:24 PM #358
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

2030

Quote Originally Posted by evdaevb
> You put the date for ultra-intelligent AI at 2030. This date
> strikes me as quite early. We do know (by simply extrapolating the
> trend line on a graph) that the average person will be able to
> afford a computer with the hardware capacity of the human brain at
> around 2020 (supercomputers will reach equal capacity by the end
> of this decade/early next decade).

> Today's home computers are significantly less capable than our
> brains (I'm specifically referring to hardware, but this also
> applies to software). Thanks to the exponential advancement in
> hardware, we will see human-brain-equivalent computers in or
> around 2020 (I'm well aware you know all this stuff, but it helps
> me get my bearings straight).

> This exponential growth is not happening with software. Software
> advances linearly.
I agree with your 2020 date for computers as powerful as the human
brain - or perhaps a little later. But I still think the
Singularity will occur around 2030.

I believe that we know how to write the software today, and that the
only reason we aren't writing it is because computers aren't yet
powerful enough to execute the code fast enough. Basically we need a
lot more massive parallelism than we have today, and when we have
that, then the code will work.

I was challenged a few months ago to describe the algorithm for
super-intelligent computers, so I developed a rough design for the
"IC algorithm" (the Intelligent Computer algorithm), and I posted it
in the following message:
http://fourthturning.com/forums/view...?p=93588#93588

Since that time I've been trying to enhance the algorithm, and I've
become semi-obsessed with the effort. (I say "semi-obsessed" only
because this project has to compete for my time with other
obsessions.)

I actually think that right now I have a pretty firm idea of how the
software for the first super-intelligent computers will work.

Quote Originally Posted by evdaevb
> Two things have to happen in order to create human-equivalent AI
> (note: not greater than human). We need to understand the brain
> (which will probably be for the most part understood by the
> mid-2020s) and we have to be able to mimic the neural networks in
> our brain.
I really don't agree with this. We don't have to mimic the actual
neurology of the brain. We only have to emulate the major
block-level functions of the brain - identifying objects with vision,
making decisions, etc. We can look to the actual brain to give us
ideas and clues, but the IC algorithm will do everything that the
human brain can do, from a practical point of view.

Quote Originally Posted by evdaevb
> but it also requires substantial interpretation and testing
If it's possible to "super-agree" with something, then this would be
an example. I've identified several different potential problems in
the IC algorithm that will have to be addressed, and they're posted
in various messages. The most interesting problem is the one raised
by the Terminator movie: If the IC has self-preservation as a
high priority goal, it may decide that the best way to achieve that
goal is to kill all the humans! That's the kind of thing that we'll
need testing for. But I've never known any programmer who refused to
go ahead with software development because he hadn't yet figured out
how to do testing!

Quote Originally Posted by evdaevb
> Ultra-intelligent AI is quite different and will take much more
> work. To get to greater-than-human intelligence we will need
> computers helping us with the research. Fast hardware and almost
> full knowledge about our brain will be required.
I agree with this, but for version 2. The way I see it is this: We
human beings will develop version 1 of the IC algorithm, and
computers will develop version 2, at which time the whole thing will
be completely out of humans' control. So we'd better get the first
version right.

Quote Originally Posted by evdaevb
> My second concern is in regard to your end-of-the-world scenario.
> Why do you think we (that is, humans) will remain static? Brain
> augmentation and like technologies will put us on equal footing
> with these machines. Maybe not at first but we will eventually
> increase our own intelligence and match theirs. I don't see why
> anyone would refuse.
My view is that by 2030 these brain augmentation technologies will be
only experimental, not available for the mass market. Furthermore,
some people will indeed refuse for the same reason that some people
still use manual typewriters. And anyway, we must never forget that
the first widespread use (the first killer application, so to speak)
will be for war - because every new technology is used first for war.

In the end, it's extremely likely that some sort of brain
augmentation (or brain to computer uploading) will be available, but
how it will play out is very hard to see, as is everything past 2030.



Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#359 at 06-17-2004 05:16 PM by evdaevb [at joined Jun 2004 #posts 9]
---
06-17-2004, 05:16 PM #359
Join Date
Jun 2004
Posts
9

I just can't see greater than human AI coming at 2030. It seems too soon.

I guess we'll just have to agree to disagree.

One thing I think we can agree on is that the next few decades will certainly be interesting.

Quote Originally Posted by John J. Xenakis
My view is that by 2030 these brain augmentation technologies will be
only experimental, not available for the mass market. Furthermore,
some people will indeed refuse for the same reason that some people
still use manual typewriters. And anyway, we must never forget that
the first widespread use (the first killer application, so to speak)
will be for war - because every new technology is used first for war.
Not to sound too selfish, but I don't care what others do. If those that refuse to adapt to the new technologies end up seriously disadvantaged because of their decisions, fine by me. I agree that the military will use the technology first, but only if they invent it first. I'm sure they will, or at least they'll pay large sums of money to take the technology off the hands of the inventor(s).


Quote Originally Posted by John J. Xenakis
In the end, it's extremely likely that some sort of brain
augmentation (or brain to computer uploading) will be available, but
how it will play out is very hard to see, as is everything past 2030.
I agree [that we can't see beyond 2030]. I remain optimistic though.







Post#360 at 06-18-2004 12:53 PM by Croakmore [at The hazardous reefs of Silentium joined Nov 2001 #posts 2,426]
---
06-18-2004, 12:53 PM #360
Join Date
Nov 2001
Location
The hazardous reefs of Silentium
Posts
2,426

Just in case anyone might be interested, Sir Martin Rees, Britain's Astronomer Royal, is discussed in the July 2004 Scientific American. Rees's recent book, Our Final Hour (Our Final Century in the U.K.), posits that civilization has only a 50-50 chance of surviving the twenty-first century. And he has wagered $1,000 that either bioterror or "bioerror" will claim a million lives by 2020.







Post#361 at 06-18-2004 01:25 PM by monoghan [at Ohio joined Jun 2002 #posts 1,189]
---
06-18-2004, 01:25 PM #361
Join Date
Jun 2002
Location
Ohio
Posts
1,189

Quote Originally Posted by Croakmore
Just in case anyone might be interested, Sir Martin Rees, Britain's Astronomer Royal, is discussed in the July 2004 Scientific American. Rees's recent book, Our Final Hour (Our Final Century in the U.K.), posits that civilization has only a 50-50 chance of surviving the twenty-first century. And he has wagered $1,000 that either bioterror or "bioerror" will claim a million lives by 2020.
Does he mean western civilization, American hegemony, or the human race?

That wager would result in the biggest tort litigation the world has ever seen. It is an ill wind that blows no good.







Post#362 at 06-18-2004 02:04 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-18-2004, 02:04 PM #362
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Eschatology - The End of the Human Race by 2020?

Quote Originally Posted by Croakmore
> Just in case anyone might be interested, Sir Martin Rees,
> Britain's Astronomer Royal, is discussed in the July 2004
> Scientific American. Rees's recent book, Our Final Hour (Our Final
> Century in the U.K.), posits that civilization has only a 50-50
> chance of surviving the twenty-first century. And he has wagered
> $1,000 that either bioterror or "bioerror" will claim a million
> lives by 2020.
I agree with this.

When I first started this thread, a year ago, I considered it
possible that the human race would end because of a war with
super-intelligent computers.

Since then I've changed my mind. Although the above scenario is
still a possibility, I now consider it far more likely that if the
human race is to become extinct, it will be through some sort of
biological agent, such as a man-made disease that kills everyone.

However, I'm not dumb enough to take bets on this subject. If I lose
the bet then I have to pay, but if I win the bet, then I won't be
around to collect!!

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#363 at 06-19-2004 08:14 PM by Croakmore [at The hazardous reefs of Silentium joined Nov 2001 #posts 2,426]
---
06-19-2004, 08:14 PM #363
Join Date
Nov 2001
Location
The hazardous reefs of Silentium
Posts
2,426

John,

As an evolutionary biologist, I would also suggest the possibility of a new speciation, or maybe a re-speciation, if technology is up to it; but quite possibly a speciation of the meta-human kind. Look what happened to the apes when their jungle habitats began disappearing -- they had to hit the ground running and start using the brains God gave them!

Personally, I think we should worry more about asteroids. Recently, one the size of a TV set whizzed over Puget Sound, and it sounded like the end of the world. I'm afraid there's one out there more like the size of a Buick. Astronomers can see them coming if they are about 3 kilometers wide, but this one was less than a meter wide, and it delivered an astonishing amount of energy.

--Croak







Post#364 at 06-19-2004 10:43 PM by [at joined #posts ]
---
06-19-2004, 10:43 PM #364
Guest

Quote Originally Posted by Croakmore
John,

As an evolutionary biologist ... Personally, I think we should worry more about asteroids. --Croak
Dear Mr. Emery,

I am, alas, sad to inform you of this fact: you are going to die.

Sincerely,
A Goat Named Lamb







Post#365 at 06-19-2004 11:57 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-19-2004, 11:57 PM #365
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Croaking

Dear Marc,

Quote Originally Posted by Devil's Advocate
> I am, alas, sad to inform you of this fact: you are going to die.
Don't you mean that he's going to have to croak?

We're going to upload his brain into a computer, so he can live
forever.

John







Post#366 at 06-19-2004 11:58 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-19-2004, 11:58 PM #366
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Darwin

Dear Richard,

Quote Originally Posted by Croakmore
> As an evolutionary biologist, I would also suggest the possibility
> of a new speciation, or maybe a re-speciation, if technology is up
> to it; but quite possibly a speciation of the meta-human kind.
> Look what happened to the apes when their jungle habitats began
> disappearing -- they had to hit the ground running and start using
> the brains God gave them!
It's hard to see how Darwinian evolution would apply to new
generations of super-intelligent computers, since some of Darwin's
assumptions (relatively small variations limited by heredity) would
not be satisfied.

Each generation of super-intelligent computer would design the next
generation. So new generations would not depend on heredity, and the
variations could be of any complexity.

It's also hard to see how "living forever" would fit into this
structure, as so many people seem to be hoping. Even if a way could
be found to upload one's brain and consciousness into a computer,
what would happen in the next generation?

I pretty much stick to my original belief that if humans continue to
exist at all, then they'll have the same status and relative
intelligence to the super-intelligent computers as dogs and cats have
to humans today.

And whenever we start talking about uploading brains to computers, I
keep thinking about things like pain. If your consciousness is
inside a computer, can you be made to feel excruciating pain with a
simple software adjustment by a malicious entity?



We just don't know.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#367 at 06-20-2004 02:09 AM by Roadbldr '59 [at Vancouver, Washington joined Jul 2001 #posts 8,275]
---
06-20-2004, 02:09 AM #367
Join Date
Jul 2001
Location
Vancouver, Washington
Posts
8,275

Quote Originally Posted by Croakmore
Personally, I think we should worry more about asteroids. Recently, one the size of a TV set whizzed over Puget Sound, and it sounded like the end of the world. I'm afraid there's one out there more like the size of a Buick. Astronomers can see them coming if they are about 3 kilometers wide, but this one was less than a meter wide, and it delivered an astonishing amount of energy.

--Croak
Was this the same one that apparently exploded over Chehalis last week in a great flash of blue?







Post#368 at 06-20-2004 09:00 PM by Croakmore [at The hazardous reefs of Silentium joined Nov 2001 #posts 2,426]
---
06-20-2004, 09:00 PM #368
Join Date
Nov 2001
Location
The hazardous reefs of Silentium
Posts
2,426

The Meteorite

Kevin, I think it could have been that one, but the one I referred to seems like it occurred 2 or 3 weeks ago, I think...don't trust me on this. The flash of light that occurred here in Bremerton was very white and bright, and the sound shook every 2x4 in my bedroom walls. I bolted out of bed thinking it was the Big One.

But I've been a little jumpy lately.


--Croaker







Post#369 at 06-21-2004 01:24 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-21-2004, 01:24 PM #369
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

I, Robot

To all:

If you'd like to have some fun, take a look at the trailer for the
move "I, Robot," scheduled for release on 7/16/2004.

http://www.irobotmovie.com/english/trailer3/index.html

The movie takes place in 2035. The trailer shows scenes of robots
walking dogs, hugging children, and performing other chores.

The plot thesis is that the robots have been developed with strict
adherence to Isaac Asimov's Three Laws of Robotics:

(1) A robot may not injure a human being, or, through inaction, allow
a human being to come to harm.

(2) A robot must obey the orders given it by human beings except
where such orders would conflict with the First Law.

(3) A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.
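
As a programmer's aside, the Three Laws translate directly into a
priority-ordered veto over candidate actions. Here's a toy sketch --
the flags below are stand-ins for what would really be enormously
hard perception and prediction problems, which is exactly where the
movie's plot will find its bug:

    # First Law: not injuring humans outranks everything.
    def violates_first_law(action):
        return action.get("harms_human", False)

    # Second Law: obey orders, unless obeying conflicts with the First.
    def violates_second_law(action):
        return (action.get("disobeys_order", False)
                and not action.get("needed_to_protect_human", False))

    # Third Law: self-preservation, subordinate to the first two.
    def violates_third_law(action):
        return (action.get("harms_self", False)
                and not (action.get("needed_to_protect_human", False)
                         or action.get("needed_to_obey_order", False)))

    LAWS = [violates_first_law, violates_second_law, violates_third_law]

    def permitted(action):
        return not any(law(action) for law in LAWS)

    print(permitted({"disobeys_order": True,
                     "needed_to_protect_human": True}))   # True
    print(permitted({"harms_human": True}))               # False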

In the movie, it suddenly turns out the guy who designed the robots
has been murdered, and the perpetrator is apparently a robot. Is
there a bug in the robot implementation? Did the robots evolve on
their own? These are the BIG QUESTIONS that the movie poses.

The trailer doesn't tell how the movie ends, of course, but we can
assume that Will Smith will conquer the malevolent robots and get the
girl, and the robots will be happily restored to the state of
strictly obeying the Three Laws, and that everyone will live happily
ever after.

The movie is based on a series of short stories that Asimov wrote in
the 1940s. They're collected into a book:
http://www.amazon.com/exec/obidos/ASIN/0553294385

For a great synopsis of Asimov's stories, take a look at:
http://members.tripod.com/templetongate/irobot.htm

Although it appears that the movie will be a lot of fun, let me as
usual put on my curmudgeon hat and just point out that:

(*) The super-intelligent robots of the future will first be used for
war, so nothing like Asimov's Three Laws will ever be implemented in
any meaningful way.

(*) The fact that the movie takes place in 2035 supports the view
that the first super-intelligent computers will be available in the
2020s, and that the Singularity will occur around 2030. (Of course,
the movie won't mention the Singularity.)

(*) Once Will Smith gets the girl, the movie won't mention the fact
that trying to beat the robots is a lost cause, since robots will
become increasingly intelligent each year, while humans will remain
the same (not counting the human beings who allow themselves to be
turned into robots).

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#370 at 06-29-2004 07:34 PM by Finch [at In the belly of the Beast joined Feb 2004 #posts 1,734]
---
06-29-2004, 07:34 PM #370
Join Date
Feb 2004
Location
In the belly of the Beast
Posts
1,734

Re: I, Robot

Quote Originally Posted by John J. Xenakis
The movie is based on a series of short stories that Asimov wrote in the 1940s.
I just picked up the book at the library. I haven't read it since high school, and I doubt the movie follows the book much if at all. I vaguely remember the premise of the book as having the robots independently develop a "Zeroth law", whereby a robot could not injure humanity or allow it to come to harm; as such, some humans would need to be harmed to protect "humanity".

Quote Originally Posted by John J. Xenakis
The super-intelligent robots of the future will first be used for war, so nothing like Asimov's Three Laws will ever be implemented in any meaningful way.
Certainly autonomous robots will be used for war -- they already are -- but it is unlikely such robots would resemble bipedal humanoids. There are zero tactical advantages to doing so. The most efficient design would likely be akin to a giant cockroach, since insects apparently maintain their balance through purely mechanical means rather than through sophisticated nervous systems.

That said, it's all the more likely that the products of a "U.S. Robots & Mechanical Men" company will have the "Three Laws" built in, since ironclad safeguards would need to be provided to overcome potential customers' visceral fear of robots (having first encountered them in wartime.)

It also justifies the special-effects conceit of having the robots appear not-quite-human, with their Mr. Potato Head plastic faces: humans react most hostilely to creatures that are not quite human (zombies, vampires, demons, etc.) This is known in the literature as the Uncanny Valley.

Quote Originally Posted by John J. Xenakis
(*) The fact that the movie takes place in 2035 supports the view that the first super-intelligent computers will be available in the 2020s, and that the Singularity will occur around 2030. (Of course, the movie won't mention the Singularity.)
No, the movie probably won't, but the books actually go into some depth on this subject. As I mentioned, with the Three (four) Laws programmed in, the robots actually take pretty good care of us. :wink:

Quote Originally Posted by John J. Xenakis
(*) Once Will Smith gets the girl, the movie won't mention the fact that trying to beat the robots is a lost cause, since robots will become increasingly intelligent each year, while humans will remain the same (not counting the human beings who allow themselves to be turned into robots).
As I've mentioned repeatedly in this thread, "humans turning themselves into robots" is exactly what I expect to see happen, and sooner rather than later. Given current technology, I estimate that we are only a few years away from crossing a very important psychological threshold -- implanting electronic devices by choice (rather than medical necessity.) The first implants will be rudimentary cell phones; but the leap to Speaker For The Dead's subvocalization sensor is not too far -- NASA is already working on it.

Humans won't become robots, but they will merge with them.







Post#371 at 06-29-2004 08:51 PM by Tim Walker '56 [at joined Jun 2001 #posts 24]
---
06-29-2004, 08:51 PM #371
Join Date
Jun 2001
Posts
24

Is the Uncanny Valley the most disturbing? See Starship Troopers.







Post#372 at 06-29-2004 11:25 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-29-2004, 11:25 PM #372
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Re: I, Robot

Dear Rick,

Quote Originally Posted by Rick Hirst
> I just picked up the book at the library. I haven't read it since
> high school, and I doubt the movie follows the book much if at
> all. I vaguely remember the premise of the book as having the
> robots independently develop a "Zeroth law", whereby a robot could
> not injure humanity or allow it to come to harm; as such, some
> humans would need to be harmed to protect "humanity".
Here's some info about the zeroth law:

> The Three Laws of Robotics are:

> 1. A robot may not injure a human being, or, through inaction,
> allow a human being to come to harm.

> 2. A robot must obey the orders given it by human beings except
> where such orders would conflict with the First Law.

> 3. A robot must protect its own existence as long as such
> protection does not conflict with the First or Second Law.

> From Handbook of Robotics, 56th Edition, 2058 A.D., as quoted in
> I, Robot.

> In Robots and Empire (ch. 63), the "Zeroth Law" is extrapolated,
> and the other Three Laws modified accordingly: 0. A robot may not
> injure humanity or, through inaction, allow humanity to come to
> harm. Unlike the Three Laws, however, the Zeroth Law is not a
> fundamental part of positronic robotic engineering, is not part of
> all positronic robots, and, in fact, requires a very sophisticated
> robot to even accept it.

> Source: http://www.asimovonline.com/asimov_FAQ.html#series13

Quote Originally Posted by Rick Hirst
> Certainly autonomous robots will be used for war -- they already
> are -- but it is unlikely such robots would resemble bipedal
> humanoids. There are zero tactical advantages to doing so. The
> most efficient design would likely be akin to a giant cockroach [
> http://www.discover.com/issues/jul-0...f-cockroaches/
> ], since insects apparently maintain their balance through purely
> mechanical means rather than through sophisticated nervous
> systems.

> That said, it's all the more likely that the products of a "U.S.
> Robots & Mechanical Men" company will have the "Three Laws" built
> in, since ironclad safeguards would need to be provided to
> overcome potential customers' visceral fear of robots (having
> first encountered them in wartime.)

> It also justifies the special-effects conceit of having the
> robots appear not-quite-human, with their Mr. Potato Head plastic
> faces: humans react most hostilely to creatures that are not
> quite human (zombies, vampires, demons, etc.) This is known in
> the literature as the Uncanny Valley [
> http://www.arclight.net/~pdb/glimpses/valley.html ].
I agree with you that any "business" or "war" robots would not have a
human appearance.

But I don't even think "household robots" will have a human
appearance. I think a robot that looks human is creepy, and I think
people are going to want computers to look like computers.

I've always pictured the servant robots as looking something like
this:

[Image: a utilitarian, non-humanoid servant robot.]

Once you think about this design, you'll see why you'd want even a
household robot to look something like this. Why limit it to two
arms? You want it to be able to have extra arms (or whatever) that
can reach behind walls to fix plumbing. And why just two eyes? That
arm that reaches behind the wall can have an extra eye on it. I
expect servant robots to look very utilitarian and not look humanoid
at all.

Quote Originally Posted by Rick Hirst
> No, the movie probably won't, but the books actually go into some
> depth on this subject. As I mentioned, with the Three (four) Laws
> programmed in, the robots actually take pretty good care of us.
> Wink
The interesting question is: When will this issue reach the public's
radar? When will the public in general realize, "Omigod, when my
kids grow up, robots will be the dominant species?" Something will
trigger this realization at some point in the next few years. Maybe
this movie will be the trigger.

Quote Originally Posted by Rick Hirst
> As I've mentioned repeatedly in this thread, "humans turning
> themselves into robots" is exactly what I expect to see happen,
> and sooner rather than later. Given current technology [
> http://www.nttdocomo.com/corebiz/ubi...erwhisper.html ], I
> estimate that we are only a few years away from crossing a very
> important psychological threshold -- implanting electronic devices
> by choice (rather than medical necessity.) The first implants will
> be rudimentary cell phones; but the leap to Speaker For The
> Dead's subvocalization sensor is not too far -- NASA is already
> working on it [
> http://www.boingboing.net/2004/03/17...ar_unspok.html ].

> Humans won't become robots, but they will merge with them.
And as I've repeatedly responded, I agree with this only partially.
I do not see this as a widespread thing by 2030. I see this as
experimental, or at best implemented in a few thousand people. The
other six billion people will still be ordinary humans, wondering
when the 'bots are gonna come get 'em.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#373 at 06-30-2004 12:09 AM by Tim Walker '56 [at joined Jun 2001 #posts 24]
---
06-30-2004, 12:09 AM #373
Join Date
Jun 2001
Posts
24

Natural Born Cyborgs

In regard to telepresence, Andy Clark wrote that, to the brain, body image is negotiable. He then described a telepresence device, a sort of avatar, that would resemble a cubist statue.

Why a cubist statue? After looking at the Uncanny Valley web site, I would imagine that a popular avatar would resemble a cute, furry animal.







Post#374 at 06-30-2004 01:50 PM by Zarathustra [at Where the Northwest meets the Southwest joined Mar 2003 #posts 9,198]
---
06-30-2004, 01:50 PM #374
Join Date
Mar 2003
Location
Where the Northwest meets the Southwest
Posts
9,198

Re: I, Robot

Quote Originally Posted by Rick Hirst
It also justifies the special-effects conceit of having the robots appear not-quite-human, with their Mr. Potato Head plastic faces: humans react most hostilely to creatures that are not quite human (zombies, vampires, demons, etc.) This is known in the literature as the Uncanny Valley.
That "Uncanny Valley" concept and study is fascinating! I guess we'll have to keep androids somewhat generic-looking until we're ready to make them look completely human -- if things go that direction at all.
Americans have had enough of glitz and roar . . Foreboding has deepened, and spiritual currents have darkened . . .
THE FOURTH TURNING IS AT HAND.
See T4T, p. 253.







Post#375 at 06-30-2004 01:53 PM by Zarathustra [at Where the Northwest meets the Southwest joined Mar 2003 #posts 9,198]
---
06-30-2004, 01:53 PM #375
Join Date
Mar 2003
Location
Where the Northwest meets the Southwest
Posts
9,198

Quote Originally Posted by Tim Walker
Is the Uncanny Valley the most disturbing? See Starship Troopers.
AWESOME MOVIE!!!!

Satirical camp at its best. "The only good bug is a dead bug." Absolutely classic.
Americans have had enough of glitz and roar . . Foreboding has deepened, and spiritual currents have darkened . . .
THE FOURTH TURNING IS AT HAND.
See T4T, p. 253.
-----------------------------------------