Generational Dynamics
Fourth Turning Forum Archive



Thread: The Singularity - Page 14







Post#326 at 04-29-2004 03:38 PM by Guest
---
04-29-2004, 03:38 PM #326
Guest

Re: United States as the 'Babylon' of Revelations?

Quote Originally Posted by Rick Hirst
What would I do differently than today? I ask in all sincerity.

Here's a biblical example (Acts 2, KJV): on the day of Pentecost, as the "men of every nation" heard the miracles and the preaching of Peter, they felt the need for change ("they were pricked in their heart") and asked Peter, "what shall we do?" For this, Peter had an immediate, actionable response: "be baptized every one of you in the name of Jesus Christ." Then he specifically indicates how their lives will change: "ye shall receive the gift of the Holy Ghost." Sure enough, those who accepted did change their lives dramatically: "And all that believed were together, and had all things common; and sold their possessions and goods, and parted them to all men, as every man had need." (Hey, sounds pretty communist utopian to me!)
Psst... allow me to let you in on a little secret, dude, it was "pretty communist utopian." Here's the story of what really happened after the day of Pentecost (it's right there in the Book of Acts, kid, just a bit past where you stopped lookin'):
  • And the congregation of those who believed were of one heart and soul; and not one of them claimed that anything belonging to him was his own, but all things were common property to them. And with great power the apostles were giving testimony to the resurrection of the Lord Jesus, and abundant grace was upon them all.

    For there was not a needy person among them, for all who were owners of land or houses would sell them and bring the proceeds of the sales and lay them at the apostles' feet, and they would be distributed to each as any had need.

    Now Joseph, a Levite of Cyprian birth, who was also called Barnabas by the apostles (which translated means Son of Encouragement), and who owned a tract of land, sold it and brought the money and laid it at the apostles' feet.

    But a man named Ananias, with his wife Sapphira, sold a piece of property, and kept back some of the price for himself, with his wife's full knowledge, and bringing a portion of it, he laid it at the apostles' feet. But Peter said, "Ananias, why has Satan filled your heart to lie to the Holy Spirit and to keep back some of the price of the land? "While it remained unsold, did it not remain your own? And after it was sold, was it not under your control? Why is it that you have conceived this deed in your heart? You have not lied to men but to God."

    And as he heard these words, Ananias fell down and breathed his last; and great fear came over all who heard of it. The young men got up and covered him up, and after carrying him out, they buried him.

    Now there elapsed an interval of about three hours, and his wife came in, not knowing what had happened. And Peter responded to her, "Tell me whether you sold the land for such and such a price?" And she said, "Yes, that was the price." Then Peter said to her, "Why is it that you have agreed together to put the Spirit of the Lord to the test? Behold, the feet of those who have buried your husband are at the door, and they will carry you out as well."

    And immediately she fell at his feet and breathed her last, and the young men came in and found her dead, and they carried her out and buried her beside her husband. And great fear came over the whole church, and over all who heard of these things.

    At the hands of the apostles many signs and wonders were taking place among the people; and they were all with one accord in Solomon's portico. But none of the rest dared to associate with them; however, the people held them in high esteem.
How's that for some real actionable stuff, dude? Dead bodies piling up like cordwood, and God's own guys are spreading fear like wildfire! Cool, eh? Lots of neat actionable deeds, huh? 8)







Post#327 at 04-29-2004 05:52 PM by Finch [at In the belly of the Beast joined Feb 2004 #posts 1,734]
---
04-29-2004, 05:52 PM #327
Join Date
Feb 2004
Location
In the belly of the Beast
Posts
1,734

Re: United States as the 'Babylon' of Revelations?

Quote Originally Posted by Devil's Advocate
Quote Originally Posted by Rick Hirst
"And all that believed were together, and had all things common; and sold their possessions and goods, and parted them to all men, as every man had need." (Hey, sounds pretty communist utopian to me!)
Psst... allow me to let you in on a little secret, dude, it was "pretty communist utopian."
Sorry, I guess bbCode deleted my [sarcasm] tags. You're the one claiming that utopianism is doomed to failure. By citing the story of Ananias and Sapphira, are you implying that people being struck dead is the inevitable consequence of communitarianism? If so, you are implying that Peter and the other Apostles just didn't "get it"; that their community was doomed from the beginning, so that essentially Ananias and Sapphira's deaths were Peter's fault. That's a very strange thing to say. (Are your trolling skills improving?)

Quote Originally Posted by Devil's Advocate
How's that for some real actionable stuff, dude? Dead bodies piling up like cord wood, and God's own guys are spreading fear like wildfire! Cool, eh? Lots of neat actionable deeds, huh?
Now you're just being silly; or do you really not understand what "actionable" means? If not, I'll type really slowly so that I can make this clear. 8)

An "actionable" assertion is one that dictates a specific, urgent response. As a recent example in the news, National Security Figurehead Condoleeza Rice defended her lack of response to the 8-6-01 PDB "Bin Laden Determined To Attack Inside the US" by claiming that the PDB was not "actionable", i.e. that no specific, urgent action was indicated. You may or may not agree with that claim; but the point is: statements requiring no action on the part of the hearer are worse than useless -- they are a waste of time, mere background noise.

So, unless you are claiming that I will be struck dead for failing to donate the full value of my property to the Church, then no, the story you included does not contain any "actionable deeds" at all.

I thought I was being blunt before, but let me be even blunter: if I were to suddenly believe everything you believe, how would that change a single thing I do -- today, tomorrow, next week? I've asked before, and I'll ask again. Until you can give me a single example, I'll thank you to go troll somewhere else.

We now return you to your regularly scheduled discussion. 8)







Post#328 at 04-29-2004 06:24 PM by Guest
---
04-29-2004, 06:24 PM #328
Guest

Re: United States as the 'Babylon' of Revelations?

Quote Originally Posted by Rick Hirst
I thought I was being blunt before, but let me be even blunter: if I were to suddenly believe everything you believe, how would that change a single thing I do -- today, tomorrow, next week? I've asked before, and I'll ask again. Until you can give me a single example, I'll thank you to go troll somewhere else.
I had responded to a post specifically on the topic "United States as the 'Babylon' of Revelations?"

I then responded about communitarianism with a story from the Book of Acts.

Other than that I haven't addressed anything you have posted. And my lips are still sealed on those matters you raise, bluntly or not so bluntly.







Post#329 at 04-29-2004 06:30 PM by Finch [at In the belly of the Beast joined Feb 2004 #posts 1,734]
---
04-29-2004, 06:30 PM #329
Join Date
Feb 2004
Location
In the belly of the Beast
Posts
1,734

Back to the topic of the "top-down" approach and the consequences of indefinite exponential increases in computational capacity:

In the next 10-15 years, there will be commercial technologies that perform computations by manipulating individual atoms. Even assuming for now that this is the ultimate limit in computational density, we should be able to build larger and larger "molecular supercomputers" over time. Eventually, a molecular supercomputer the size of, say, Jupiter, should be able to perform on the order of 10^35 calculations per second.

How capable would such a "planetary supercomputer" be? Well, plug in some other guesstimates:
- Total number of humans who have ever lived: 10^11
- Average human lifetime (in seconds): 10^9
- Processing equivalent of the human brain: 10^14 operations per second

Multiplying these together, the total mental operations performed by all of humanity over its entire existence is on the order of 10^11 * 10^9 * 10^14 = 10^34. So even adding another factor of ten for comfort, a planetary supercomputer would be capable of simulating the thought processes of all humans for all of history -- in a single second.
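
For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope script in Python; every number in it is one of the guesstimates above, not a measured value:

Code:
humans_ever = 1e11      # total humans who have ever lived
lifetime_s  = 1e9       # average human lifetime, in seconds (~32 years)
brain_ops   = 1e14      # processing equivalent of one brain, ops/sec
planet_ops  = 1e35      # hypothetical Jupiter-sized molecular computer

total_ops = humans_ever * lifetime_s * brain_ops    # ~1e34
print(f"all human thought, ever: {total_ops:.0e} operations")
print(f"planetary-computer time: {total_ops / planet_ops} seconds")
# -> about 0.1 seconds; with the extra factor of ten, about 1 second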

An advanced civilization capable of building such devices to run simulations would presumably be interested in modeling various scenarios regarding its own past history (e.g. "What would have happened if Hitler had not waited at Dunkirk?"). Since it would only require a second in "real time" to obtain a result, the simulation would likely be run many, many times, with slight variations on various controlling parameters for each run. Thus, many orders of magnitude more "simulated histories" would be generated than the single "real history."

Thus, the reasonable conclusion is: if a planetary computer can exist at all, then it is far more probable than not that we are in fact in a simulation right now. (And yes, I found the original discussion of this idea on a web site discussing the Matrix movies.)

This conclusion has some amusing implications, as well as more serious ones. First and foremost, since a simulation is developed for a specific purpose, the individual actors are not all equal in significance and eventual importance. Thus, in a simulation, the Categorical Imperative (roughly, the "Golden Rule") does not apply. If we are part of a simulation, and we want to avoid "culling" through irrelevance, we should not treat all people equally.

Interesting thought... or, as Neo says, "Whoa." :shock: :shock: :shock:







Post#330 at 05-01-2004 01:58 AM by Finch [at In the belly of the Beast joined Feb 2004 #posts 1,734]
---
05-01-2004, 01:58 AM #330
Join Date
Feb 2004
Location
In the belly of the Beast
Posts
1,734

Yet another article about the inevitable end of Moore's Law -- no, we really mean it this time! -- Speed Bump. As I mentioned already, fabs are getting so expensive that only a few companies (and countries) can afford to build them. By the end of the decade, the traditional fab will no longer be cost-effective. Of course, I fully expect that molecular fabrication techniques will be available by then; all the giant chip companies are already working on them. But there will probably be a "speed bump" between now and then.







Post#331 at 05-01-2004 11:05 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
05-01-2004, 11:05 PM #331
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Re: Theological issues

Dear Titus,

Quote Originally Posted by Titus Sabinus Parthicus
> To blame the mess the world is in on God can be dealt with in the
> following manner:

> To start with, God said, "Be fruitful and multiply; fill the earth
> and subdue it. Rule over the fish of the sea and the birds if the
> air and over every living creature that moves on the ground."
> (Gen. 1:28.) This can be plainly seen as God granting humanity
> possessory title to the Earth.

> Unfortunately, in Genesis 3, you will find the sequence of events
> whereby humanity signed over said title to Satan, by heeding his
> so-called advice instead of God's Will. This is corroborated in
> Matt. 4: 8 - 10; "Again the devil took Him to a high mountain and
> showed Him all the kingdoms of the world and their splendor,
> saying, 'All this I will give you if you will bow down and worship
> me.' Jesus said to him, 'Away from me, Satan! For it is written:
> 'Worship the LORD your God, and serve Him only.'" Now, if Satan
> had not had clear possessory title to the Earth, Jesus would have
> rebuked him for offering what wasn't his to offer. Also, Satan is
> referred to repeatedly in the New Testament as the (false) god of
> this world.

> Given that Satan has said title, for the duration of this age, and
> both his nature and humanity's, about the only way you can still
> blame God is for giving us free will - which He had to do if we
> were to truly love and fellowship with Him, because compulsory
> love isn't love at all. Thus, He had to take the chance that we
> would listen instead to His enemy (and ours, as well!). And we
> did, with truly lamentable results. Christ's death and
> resurrection is the provision which God made to pry out of Satan's
> grasp the souls of any who accept Christ as LORD and Savior
> (without violating His own Law in the process), and also to set in
> motion the means whereby He will eventually make all things right
> and new again.
I'm really confused about this response. I guess I'm looking for a
clear answer to the question, "Whose fault are wars?" A one or two
word answer will do.

Let's go back to square one. God created a world in which population
grows faster than the food supply. You can't possibly blame that on
humans, and it causes wars. Therefore, it's God's fault that there
are wars.

Now I'm just going to guess that if you boil down everything you've
written that I quoted above, you're saying: "No, it's Satan's fault
that the food supply grows so slowly, because Satan has taken
possession of the earth." Once again I don't want to put words in
your mouth, so correct me if I'm wrong.

Therefore you'd answer the question, "Whose fault are wars?" with the
answer: Satan.

If that's your answer then it doesn't solve the problem. The whole
point is that you have to find a way to blame humans for wars for
most religions to make sense. If you say that wars are Satan's
fault, then you've gotten God off the hook (in a sense, although
Satan is God's creation), but humanity is still free of blame for
wars.

Right?

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#332 at 05-01-2004 11:07 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
05-01-2004, 11:07 PM #332
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Re: Some random observations

Dear Rick,

Quote Originally Posted by Rick Hirst
> The reason for the "either-or" attitude is that the two directions
> are currently so far apart, efforts on the one approach don't
> translate very well to the other. Thus, researchers have to make a
> decision on where to spend their efforts; since the "bottom-up"
> approach provides incremental improvements to existing
> capabilities, it is far easier to, well, monetize.
I just don't see the problem in combining both approaches, given a
sufficiently powerful computer. If what you're saying is true, it
may be because all of today's researchers are doing research on
today's computers (duh!), and on today's computers neither the
top-down nor bottom-up approach is going to get very far. A college
kid doing research doesn't really have the ability to project his
ideas onto large computer capacities of the future.

I'm really glad that I wrote down my version of the Intelligent
Computer algorithm before I read all this other stuff, since it
probably would have just inhibited me.

In fact, it's now been a few weeks since I wrote down the IC
algorithm, and I'm somewhat surprised to find that the whole thing is
turning into something of a mystical experience for me.

Before then, when I talked about the Singularity, it was very
abstract, as if I'd plugged some numbers into an equation and come
out with an answer without really having a good feel for why the
answer is true. It was as if I were an evangelical Christian talking
about the "last days," but not really having any firm idea about what
the sequence of events through the last days would be.

Now however, the Singularity is much more real to me. I can see how
it's going to play out. I can see what steps are going to come
first, and what steps are going to come later. I can see where
things can go wrong, and what we have to watch out for.

Most important, I can now see the possibility of a positive (for
humans) outcome from the Singularity. I used to think that there
would be no controls whatsoever on ICs, because any crackpot working
in his basement can turn out an IC willing to kill any and all
humans.

But now I see that building the first generation of ICs is going to
be a huge project, from both the hardware and software points of
view, and that the first implementation will probably be the only
implementation for a number of years. If the first implementation
contains the appropriate safeguards, then it will be possible to get
past the Singularity to new generations of ICs developed by ICs
themselves, and pass those safeguards on to the next generations. By
the time that ICs are as much more intelligent than humans as humans
are more intelligent than dogs and cats, the critical period will be
past, and there'll be no more need for the ICs to kill all the
humans.

It'll be like the Manhattan project - one country (presumably the US)
will be the only one that can afford to build it at first, and anyone
else will have to catch up. Just as it took several decades for
nuclear weapons to become widely available, it should be impossible
for anyone else (or at most one other country) to come up with a
major implementation before the Singularity actually occurs. That
way, the Singularity can be controlled through the critical period.

The only question that remains is what the "appropriate safeguards"
are. It's not going to be Asimov's laws of robotics (an IC can't kill
a human), because ICs are going to be used in war, and they'll have
to be able to kill the enemy. It's possible that there'll be
something like MAD (mutual assured destruction) that goes on through
the critical period after the Singularity, until the ICs take over
completely.

Quote Originally Posted by Rick Hirst
> Thus, the reasonable conclusion is: if a planetary computer can
> exist at all, then it is far more probable than not that we are in
> fact in a simulation right now. (And yes, I found the original
> discussion of this idea on a web site discussing the Matrix
> movies.)
Actually, you don't even exist. My brain is simulating your
existence.

Quote Originally Posted by Rick Hirst
> Yet another article about the inevitable end of Moore's Law --
> no, we really mean it this time! -- Speed Bump
> http://www.pbs.org/cringely/pulpit/pulpit20040429.html . As I
> mentioned already, fabs are getting so expensive that only a few
> companies (and countries) can afford to build them. By the end of
> the decade, the traditional fab will no longer be cost-effective.
> Of course, I fully expect that molecular fabrication techniques
> will be available by then; all the giant chip companies are
> already working on them. But there will probably be a "speed bump"
> between now and then.
The interesting thing about the Cringely article is that he seems to
have no idea that there's life after Moore's Law. Someone should
educate him.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#333 at 05-21-2004 05:22 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
05-21-2004, 05:22 PM #333
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Technology Improvements to Cars


Frequently in this thread we've been discussing various kinds of
technological improvements to different things - cars, computers,
energy, etc.

One of the issues that arises is that it's not always easy to
identify which technological changes cause which
improvements. For example, we all know that the speed of aircraft
was increased by use of the jet engine, but what's much harder to see
is that aircraft speed was increased just as much or more by other
changes, including the closed cockpit, the monoplane and the all-metal
airframe.

That's why I thought that the graphic below was very interesting when
I saw it in the Wall Street Journal last week. It portrays numerous
technological changes that improve the fuel efficiency of
automobiles. It's an interesting graphic to use in discussions of
this type.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com

Revolution Under the Hood

In Major Wave of Innovation, Car Makers
Explore New Ways To Cut Fuel Use and Pollution


By NORIHIKO SHIROUZU and JEFFREY BALL

Staff Reporters of THE WALL STREET JOURNAL

AUTOS

May 12, 2004; Page B1

The global auto industry is racing to come up with the next big thing
under the hood.

Car makers from the U.S., Japan and Europe are pouring billions of
dollars into futuristic propulsion systems, everything from
more-efficient gasoline engines to gas-and-electric hybrid cars to the
most far-out idea of all, the hydrogen-powered fuel cell.



Students of automotive history compare the current scramble to the
burst of technological activity a century ago, when the automobile
itself was a novelty. Auto makers back then experimented with steam,
electric power and a new-fangled invention that burned a refined form
of petroleum. In the end, the petrol-powered internal-combustion
engine won out. One big reason: the discovery of gushers of cheap oil
in Texas.

Today's race for a new engine is motivated partly by concern that oil
-- whose price hit a 13-year high yesterday [1] -- won't forever be
affordable or easy to obtain. Auto makers also face pressure to reduce
the amount of global-warming gases and smog-causing pollutants their
vehicles cough out. Then there's the "halo" factor -- car makers can
buff their image with high-tech vehicles even if they sell relatively
few of them.

EASY AT THE PUMP

See an illustration [1] of some of the fuel-efficient technologies auto
makers are developing

Some of their ideas could fizzle. For all the talk of fuel cells,
which turn hydrogen into electricity to power a car, most industry
experts predict it will be several decades before fuel-cell vehicles
are available in numbers large enough to have any real impact on the
planet. Some observers are even more skeptical: "Hydrogen is the fuel
of the future and it will always be the fuel of the future," says
Walter McManus of ratings firm J.D. Power and Associates. "In other
words, it's all science fiction."

That's why car makers are spreading their bets. They are working on
things including gasoline-electric hybrids, advanced diesels and
internal-combustion engines that are much like today's engines but are
powered by different fuels, such as vegetable-oil derivatives or even
hydrogen. Some, including Honda Motor Co., say there is still room to
make the standard gasoline engine cleaner and more efficient. That's
because much of the energy output of an internal combustion engine is
wasted, such as when a car is idling or braking, says Takeo Fukui,
Honda's chief executive. "We think possibilities for improvement are
almost infinite there," he says.

"There won't be any one silver bullet," says Bill Ford Jr., Ford Motor
Co.'s chairman and CEO. "All of them will play a role in addressing
what we are trying to do in terms of societal concerns."

Here are various options car makers are pursuing:

Hybrids

Gasoline-electric hybrid vehicles have one or more electric motors to
propel the auto at low speeds and assist the traditional gasoline
engine they also contain at higher speeds. These hybrids recover the
energy lost in braking through a technology called "regenerative
braking" and store that energy as electricity in a large battery,
which runs the electric motors. This energy-recovery system is a main
source of the hybrid's efficiency.

Toyota Motor Corp.'s popular, second-generation Prius hybrid can reach
60 miles per hour from a standstill in about 10 seconds. That's far
from a race car, but faster than the earlier Prius and comparable to a
conventional Toyota Camry. Yet the Prius averages 60 miles per gallon
in city driving and 51 mpg on the highway. A four-cylinder Camry, by
contrast, gets 24 mpg in the city and 33 on the highway. (Drivers
typically get fewer miles per gallon than these government ratings
indicate.)

Later this year, Toyota is expected to launch a Lexus sport-utility
hybrid, the RX-400h, that couples a V-6 engine with an electric
propulsion system. Toyota says it gives V-8-like driving performance
with the fuel economy of a midsize sedan. Meantime, Honda plans to
unveil a V-6 Accord hybrid later this year that should put out more
than 240 horsepower but get the gas mileage of a Civic compact.

Other car makers also are getting into the hybrid game. Ford plans to
launch a gas-electric version of its Escape SUV this fall, and is
exploring a vehicle that pairs a diesel engine with electric motors.
General Motors Corp. plans hybrid large SUVs later in the decade.

Pros: Hybrids can get eye-popping fuel economy in the city and are
relatively clean. The federal government offers a tax deduction for
hybrid owners, though it's set to phase out over the next few years.
An even heftier tax break is proposed in a pending bill, and some
states offer tax benefits as well. In Virginia, drivers of hybrids may
use high-occupancy-vehicle lanes without a passenger in the car.

Cons: Cost remains a big concern -- the hybrid propulsion system can
add $3,000 to $5,000 to a vehicle's sticker price. Consumers could
take years to recover their investment through reduced fuel costs.

The reliability and replacement cost of the big battery packs that
hybrids require are another question mark. Ford warrants the battery
panel on its Escape hybrid for eight years or 100,000 miles, identical
to the coverage Toyota offers on the Prius. But many cars last longer
than that, and a battery replacement would be expensive. Toyota says
it would cost $2,000 if for some reason a customer had to pay for a
replacement today.

Advanced diesels

Diesel engines have had a bad image in the U.S., thanks to
soot-spewing 18-wheelers and a crop of noisy, unreliable early 1980s
diesel cars from Detroit. Now, smart electronic controls allow modern
diesel engines to be as much as 30% more fuel efficient than
comparable gasoline models, and far cleaner than diesels of the past.
This new technology is a hit in Western Europe, where tax policies
that make diesel fuel cheaper than gasoline have helped diesel capture
close to half of the passenger-vehicle market. DaimlerChrysler AG and
Volkswagen AG are offering U.S. models with European diesel systems in
hopes of remaking the technology's image.

Pros: Today's diesels offer superb fuel economy. Volkswagen's Jetta
sedan, one of the few diesel cars sold in the U.S., is rated at 43 mpg
on the highway for its automatic-transmission model -- 43% better than
the 30 mpg highway rating for a comparable gasoline-powered Jetta,
according to the federal government.

The acceleration of these cars, often boosted by turbochargers, also
is impressive, especially compared with the sluggish diesel vehicles
of yore. And diesel fuel is readily available at many fuel stations.

Cons: Modern diesels still pump out too much soot, which U.S.
regulators have identified as a carcinogen. U.S. air-pollution rules
tilt strongly against diesels, and car makers could have trouble
meeting clean-air standards if they sell too many of them. Engineers
are working on more advanced technology to cut nitrogen oxide and soot
from diesels, but that adds costs.

Alternative-fuel internal-combustion engines

Most major auto makers already produce millions of otherwise
conventional vehicles designed to run on alternative fuels such as
natural gas, ethanol, methanol, biologically produced diesel (an ester
that can be made from substances such as vegetable oil and animal
fats), and liquefied petroleum gas, commonly known as propane. Auto
makers have an incentive to produce these "flex fuel" vehicles: They
get credits for them under federal fuel-economy rules that allow them
to sell more gas-guzzling SUVs.

More recently, some car makers have focused on developing
internal-combustion engines that can burn hydrogen. German luxury car
maker BMW AG is a leader in this technology and plans to offer a
hydrogen-fueled version of its big 7 series sedans later in the
decade.

Pros: Using existing internal-combustion engines to burn cleaner fuels
could be a lower-cost path to reduced oil consumption.

Cons: Gasoline engines are becoming so much cleaner that higher-cost
alternative fuels may not offer a compelling payback. Many alternative
fuels are lower in energy density than gasoline, meaning a car that
can cover 300 miles on a 15 gallon tank of gasoline would need a much
larger tank of the alternative fuel to cover the same ground.

Advanced gasoline internal-combustion engines

Perhaps most overlooked amid the growing hype over hybrids and
hydrogen are the strides auto makers are making to clean up and
improve the efficiency of traditional engines.

Advances in computerized electronic engine controls have allowed auto
makers to revisit fuel-saving ideas they gave up as too expensive or
unreliable in the past. Among them: Cylinder shutdown, or
"displacement on demand," which allows an eight-cylinder engine to run
on six or four cylinders when cruising down a flat freeway. GM briefly
offered Cadillacs with cylinder-shutdown systems in the 1980s, but
many customers found them sluggish and unreliable. Now, GM and
DaimlerChrysler, among others, are rolling out the 21st century
version of the technology on a variety of mainstream models.

Meanwhile, Honda, acknowledged as a leader in clean-running, efficient
internal-combustion technology, recently built a laboratory north of
Tokyo that is looking at a 10-year horizon for improving conventional
internal-combustion technology. Gasoline engines "really have the best
promise near term of being practical in meeting consumer needs for
power and yet addressing social issues we face," says Ben Knight, a
vice president at Honda's research arm in the Americas.

Pros: The biggest benefit of improving today's technology is that it
taps into the entrenched, low-cost gasoline-industry infrastructure
(gas stations, gasoline refineries, etc.) in markets around the world.
And since gasoline engines lose a lot of energy in braking and idling,
there still should be room left to make them even more efficient.

Cons: Improvements could be incremental, and such technology wouldn't
achieve the goal of ending the world's dependency on oil.

Fuel cells

Billed as the ultimate propulsion technology for pollution-free
automobiles, fuel cells combine hydrogen fuel with oxygen from the
air, producing electricity to run an all-electric car. Water is their
only exhaust.

Pros: Fuel cells are electrochemical systems with no moving parts.
That means a fuel-cell vehicle is not only clean but also highly
efficient. Freed from conventional engines and transmissions,
fuel-cell cars could be designed to look entirely different from
vehicles today, as GM has demonstrated with its futuristic Hy-Wire
concept car that packages the power plant in a flat skateboard-like
chassis that can accept a new upper body as a plug-in option.

Cons: Fuel cells and the hydrogen they use cost far too much to be
commercially viable -- and that outlook isn't likely to change soon.
Another issue is that hydrogen itself isn't necessarily a "clean"
fuel. Hydrogen, at least for the next several decades, is likely to be
derived from natural gas through a process that gives off pollution.
Some studies suggest that, when taking into account the emissions
generated by producing hydrogen, the use of fuel-cell cars could
result in nearly the same amount of some pollutants as today's
gasoline cars.

Write to Norihiko Shirouzu at norihiko.shirouzu@wsj.com [2] and Jeffrey
Ball at jeffrey.ball@wsj.com [3]

URL for this article:
http://online.wsj.com/article/0,,SB1...008644,00.html

Hyperlinks in this Article:
(1) http://online.wsj.com/article/0,,SB1...908561,00.html
(2) mailto:norihiko.shirouzu@wsj.com
(3) mailto:jeffrey.ball@wsj.com

Copyright 2004 Dow Jones & Company, Inc. All Rights Reserved

This copy is for your personal, non-commercial use only. Distribution
and use of this material are governed by our Subscriber Agreement and
by copyright law. For non-personal use or to order multiple copies,
please contact Dow Jones Reprints at 1-800-843-0008 or visit
www.djreprints.com.







Post#334 at 05-31-2004 05:17 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
05-31-2004, 05:17 PM #334
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Dangers in judgment by intelligent computers

To all:

I was in an online conversation about the Singularity in another
forum, and I'd been discussing the "intelligent computer (IC)
algorithm" that I've previously described in this thread, and I was
proselytizing about how we really ought to start thinking about this
and getting prepared for it. A friend of mine, a lawyer who typically
represents fathers in divorce cases, posted the following question:

Quote Originally Posted by Barbara Johnson
> Any computer programmers must have the intelligence to program
> the computer to go outside the envelope. Just think of the wives
> of some of the guys who have been on this list. The wife who saw
> the daughter play with a stuffed animal (I think it was a cat)
> between its rear legs. When asked why she did that, the child said
> it was soft. She concluded the child was indicating that she had
> been molested by dad. She went to a psychologist along the North
> Shore, who filed a [child sexual abuse] report with DSS.
> Would the average computer programmer have programmed that as a
> possibility?
The situation described is a bit bizarre, but it raises some very
important issues about potentially substantial dangers in the
development of ICs (intelligent computers). In a nutshell, the IC is
going to have to "exercise judgment," and the algorithm for
exercising judgment can be a tricky matter indeed.

The Turk

Let me start with a brief flashback to the days of Napoleon and Edgar
Allan Poe.

The most famous chess playing computer in history was "The Turk."




The Turk was an enormous contraption that toured Europe and America
in the late 18th and early 19th centuries, defeating such
luminaries as Benjamin Franklin and Napoleon, and apparently fooling
everyone, since the person making the moves was so cleverly hidden
within the machine that no one ever discovered him.




The thing that makes The Turk relevant to the current discussion is a
fascinating exposé of The Turk, Maelzel's Chess-Player,
( http://www.eapoe.org/works/essays/maelzel.htm ), written by Edgar
Allan Poe in 1834.

Poe argued that The Turk could not possibly be a "pure
machine," and he gave several reasons for this:

Quote Originally Posted by Edgar Allan Poe in 1834
> [[Poe's first argument is that The Turk is not
> data-driven.]]


> But if these machines were ingenious, what shall we think of the
> calculating machine of Mr. Babbage? What shall we think of an
> engine of wood and metal which can not only compute astronomical
> and navigation tables to any given extent, but render the
> exactitude of its operations mathematically certain through its
> power of correcting its possible errors? ...

> Arithmetical or algebraical calculations are, from their very
> nature, fixed and determinate. Certain data being given, certain
> results necessarily and inevitably follow. These results have
> dependence upon nothing, and are influenced by nothing but the
> data originally given. ...

> But the case is widely different with the Chess-Player. With him
> there is no determinate progression. No one move in chess
> necessarily follows upon any one other. ...

> Let us place the first move in a game of chess, in juxta-position
> with the data of an algebraical question, and their great
> difference will be immediately perceived. From the latter--from
> the data--the second step of the question, dependent thereupon,
> inevitably follows. It is modelled by the data. It must be thus
> and not otherwise. But from the first move in the game of chess
> no especial second move follows of necessity. In the algebraical
> question, as it proceeds towards solution, the certainty of its
> operations remains altogether unimpaired. The second step having
> been a consequence of the data, the third step is equally a
> consequence of the second, the fourth of the third, the fifth of
> the fourth, and so on, and not possibly otherwise, to the end. But
> in proportion to the progress made in a game of chess, is the
> uncertainty of each ensuing move. A few moves having been made, no
> step is certain. ...

> There is then no analogy whatever between the operations of the
> Chess-Player, and those of the calculating machine of Mr.
> Babbage, and if we choose to call the former a pure machine we
> must be prepared to admit that it is, beyond all comparison, the
> most wonderful of the inventions of mankind. ...

> It is quite certain that the operations of the Automaton are
> regulated by mind, and by nothing else. Indeed this matter is
> susceptible of a mathematical demonstration, a priori. The only
> question then is of the manner in which human agency is brought to
> bear. ...

> [[Poe also points out that a "pure machine" would make all its
> moves in "regular intervals" of time.]]


> The moves of the Turk are not made at regular intervals of time,
> but accommodate themselves to the moves of the
> antagonist--although this point (of regularity) [is] so important
> in all kinds of mechanical contrivance. ... The fact then of
> irregularity, when regularity might have been so easily attained,
> goes to prove that regularity is unimportant to the action of the
> Automaton--in other words, that the Automaton is not a pure
> machine. ...

> [[Poe argues that a "pure machine" would always win.]]

> The Automaton does not invariably win the game. Were the machine
> a pure machine this would not be the case--it would always win.
> The principle being discovered by which a machine can be made to
> play a game of chess, an extension of the same principle would
> enable it to win a game--a farther extension would enable it to
> win all games--that is, to beat any possible game of an
> antagonist. A little consideration will convince any one that the
> difficulty of making a machine beat all games, is not in the least
> degree greater, as regards the principle of the operations
> necessary, than that of making it beat a single game.
It's hilarious to read Edgar Allan Poe's "proofs" that The Turk could
not be a "pure machine," now that we actually have chess-playing
computers. Every one of Poe's reasons is completely wrong, as we now
know. Chess-playing computers play different moves all the time; they
don't play at "regular intervals," but move quickly in simple
positions and think longer in complex positions; and they don't
invariably win.

The question that Barbara Johnson asked me is really just a variation
of the issues that Edgar Allan Poe raised. How will a programmer
deal with the fact that so many decisions in life are not
data-driven, and that a computer, like a human being, can make
mistakes?

How an Intelligent Computer "knows" things

Actually, the point is that super-intelligent computers will not
automatically "know" things. They will have to learn things, just
like humans. The difference is that they'll learn things much faster,
and reach conclusions through inductive and deductive reasoning much
faster -- and it's this increased speed, with the resulting ability
to consider far more possibilities, that will make computers far more
intelligent than humans.

But computers will still make mistakes - even if the mistakes are
sometimes too complex to be understood by humans.

The "intelligent computer software algorithm" that I designed was not
a compendium of all knowledge. Rather, it has many components for
learning new things, inferring and deducing new things from existing
bits of knowledge, and then going on to use those new things to infer
and deduce even more new things.

So the intelligent computer will reach conclusions based on what it
already knows, just like a human being.

And if the intelligent computer has somehow "learned" or "deduced" or
"inferred" a rule that says that any child who touches the soft
underbelly of a stuffed animal must, of necessity, be a sexually
abused child, then it will apply that rule when called on to do so.
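
To make that concrete, here's a minimal sketch in Python of
mechanical rule application. This is emphatically not the IC
algorithm itself (which isn't published here); the rule and the facts
are invented for illustration:

Code:
# Hypothetical rules and facts, invented for illustration only.
# A rule maps a set of required facts to a conclusion.
rules = [
    # Over-generalized from a single case, as in the story above:
    ({"child touched stuffed animal's underbelly"}, "child was abused"),
]

facts = {"child touched stuffed animal's underbelly",
         "child said the fur was soft"}

# Forward chaining: fire every rule whose conditions are satisfied,
# add its conclusion as a new fact, and repeat until nothing changes.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # the dubious conclusion is now "knowledge"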

The dangers in the Intelligent Computer Algorithm

For years, one of my favorite sayings has been, "To err is human; to
really screw things up takes a computer."

So, for example, a human being making out payroll checks might make a
small error every now and then. But a computer making out payroll
checks might make the same mistake in thousands of checks. The
computer industry abounds with stories about computers issuing checks
for millions of dollars to people who should only have received
hundreds. Not only can computers make mistakes, but they can make
huge mistakes that would be perfectly obvious to any human being.

Earlier in this thread, I discussed one example of a big danger
presented by super-intelligent computers of the future, and this is
the danger that was portrayed in Terminator 3: If the IC has
self-preservation as a top-priority goal, then it may decide that the
best way to preserve itself is to kill all the humans.

And if we've manufactured thousands of such ICs, all with the same
software and all operating on the same day, then they may all
simultaneously reach the same conclusion.

The problem that I described at the beginning of this message
leads to whole categories of additional dangers.

Questions like "What is a sexually abused child?" or "What is the
right balance between affirmative action and discrimination?"
are examples of the types of questions that human beings have to make
judgments about. There will be plenty of questions that
super-intelligent computers will have to make judgments about as
well.

How do you implement judgment in a computer? Actually, I've done it
a number of times, usually in the context of game-playing software,
where you have to program the computer to make a judgment on the best
move to play. You usually do it by assigning points to different
elements in the game. In chess-playing programs, you add points for
moves that gain material or that control space on the board or that
protect one's king, for example.

What I've found is that programming this kind of judgment is
extremely error-prone in even the simplest of games. You assign
points based on a reasonable set of criteria, and the computer starts
playing the craziest moves, but moves that are perfectly logical when
you go back and remind yourself of how points are assigned. That's
why, in the game of life that will be played with the IC algorithm,
if self-preservation is given too many points, then "killing all the
humans" becomes a perfectly logical "move."

For those who think that the kinds of super-intelligent computers
that I've described won't be developed for 10,000 years, I have news
for you. Development is well under way around the world.

Just three weeks ago, on May 12, the Department of Energy announced a
$25 million grant to three partners -- Cray, IBM and Silicon
Graphics, working in conjunction with the Energy Department's Oak
Ridge National Laboratory -- for the first year of a project to
develop the next generation of supercomputer by 2007.

This supercomputer's power is targeted at 50 teraflops -- that
is, 50 trillion calculations per second. The human brain is
generally rated at between 100 and 10,000 teraflops, so the power of
the human brain may be within reach as early as around 2010.
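
For scale, here's the extrapolation those figures imply, assuming --
and it's only an assumption, given the Moore's Law discussion earlier
in this thread -- that supercomputer power keeps doubling every 18
months or so:

Code:
import math

tf_2007 = 50.0              # the DOE project's target for 2007
doubling_years = 1.5        # assumed, not guaranteed

def year_reached(target_teraflops):
    doublings = math.log2(target_teraflops / tf_2007)
    return 2007 + doubling_years * doublings

print(year_reached(100))    # low-end brain estimate: ~2008-2009
print(year_reached(10000))  # high-end brain estimate: ~2018-2019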

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#335 at 05-31-2004 10:44 PM by Zarathustra [at Where the Northwest meets the Southwest joined Mar 2003 #posts 9,198]
---
05-31-2004, 10:44 PM #335
Join Date
Mar 2003
Location
Where the Northwest meets the Southwest
Posts
9,198

John,

In these discussions about intelligent computers, has there been any talk of introducing (possibly even out of necessity) emotions or memes?
Americans have had enough of glitz and roar . . Foreboding has deepened, and spiritual currents have darkened . . .
THE FOURTH TURNING IS AT HAND.
See T4T, p. 253.







Post#336 at 06-01-2004 12:04 AM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-01-2004, 12:04 AM #336
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Emotions

Dear Sean,

Quote Originally Posted by William J. Lemmiwinks
> In these discussions about intelligent computers, has there been
> any talk of introducing (possibly even out of necessity) emotions
> or memes?
I'm not sure what it would mean for an intelligent computer (IC) to
have emotions, but I do believe that there's a question of
whether ICs should evince emotions.

My position on that question is that emotions will be implemented if
there's a reason to, and will not be implemented otherwise. It might
depend on the application. For example, an IC plumber probably
wouldn't need to evince emotions, but an IC nursemaid or an IC
customer service person might have to.

Still, there are some puzzling questions remaining. Whenever a
question like this arises, I always start wondering what the
evolutionary purpose of the desired behavior is. What is the
evolutionary purpose of emotions? Could humans have evolved without
them? And if not, then maybe there is a need for emotions in ICs
that I don't yet understand.

Or, what about crying? What could possibly be the
evolutionary purpose of crying? Do we really have to have a
tear come to our eye when Timmy says goodbye to Lassie? What's the
purpose of all that? And if there is an evolutionary purpose
to crying that I haven't yet grasped, then perhaps we're going to
have to implement crying in the intelligent computers of the future
-- artificial tear ducts and all.

So, I guess I really don't know whether they'll be implemented or
not, except to say that they'll be implemented if and only if it
turns out that they're really needed.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#337 at 06-01-2004 12:27 AM by Zarathustra [at Where the Northwest meets the Southwest joined Mar 2003 #posts 9,198]
---
06-01-2004, 12:27 AM #337
Join Date
Mar 2003
Location
Where the Northwest meets the Southwest
Posts
9,198

I've read somewhere that emotions are thought by some evolutionary biologists (among others) to serve an important function.

Descartes' Error

From an internet description:

In his book Descartes' Error, the neurologist Antonio Damasio has developed a universal model for human emotions. This model is based on a rejection of the Cartesian body-mind dualism that he believes has crippled scientific attempts to understand human behaviour, and draws on psychological case-histories and his own neuropsychological experiments. He began with the assumption that human knowledge consists of dispositional representations stored in the brain. He thus defines thought as the process by which these representations are manipulated and ordered.

One of these representations, however, is of the body as a whole, based on information from the endocrine and peripheral nervous systems. Damasio thus defines "emotion" as: the combination of a mental evaluative process, simple or complex, with dispositional responses to that process, mostly toward the body proper, resulting in an emotional body state, but also toward the brain itself (neurotransmitter nuclei in the brain stem), resulting from additional mental changes.

Damasio distinguishes emotions from feelings, which he takes to be a more inclusive category. He argues that the brain is continually monitoring changes in the body, and that one "feels" an emotion when one experiences "such changes in juxtaposition to the mental images that initiated the cycle".


Perhaps AI scientists should consider adding emotion to the mix? And not just as something to make their interaction with us more pleasant, but as something to make it possible at all.

Likewise, perhaps human thought is also based on memes? Some interesting material on that subject:

Susan Blackmore's The Meme Machine and Robert Aunger's The Electric Meme.
Americans have had enough of glitz and roar . . Foreboding has deepened, and spiritual currents have darkened . . .
THE FOURTH TURNING IS AT HAND.
See T4T, p. 253.







Post#338 at 06-01-2004 12:30 AM by Vince Lamb '59 [at Irish Hills, Michigan joined Jun 2001 #posts 1,997]
---
06-01-2004, 12:30 AM #338
Join Date
Jun 2001
Location
Irish Hills, Michigan
Posts
1,997

Re: Emotions

Quote Originally Posted by John J. Xenakis
Dear Sean,

Still, there are some puzzling questions remaining. Whenever a
question like this arises, I always start wondering what the
evolutionary purpose of the desired behavior is. What is the
evolutionary purpose of emotions?
The answer to that could go on a long time, and each emotion may need a different answer.

Could humans have evolved without them?
IMHO, no. They wouldn't be humans. They wouldn't even be mammals. They might not even be descended from reptiles, although I'm having an argument with my psychology-major girlfriend over (1) whether reptiles and birds have a limbic system and (2) whether a limbic system is required for emotions. Maybe some kind of intelligent social insect could exist without emotions as we know them, but even they become aroused, as anyone who has observed ants, bees, and wasps knows.

And if not, then maybe there is a need for emotions in ICs
that I don't yet understand.
There is a difference between natural selection and design, and I think it bears on this particular point. If ICs depended on other ICs and developed by trial and error, they might need emotions. If they can be designed by humans and don't need emotions to do their jobs and interact with humans, then they won't have them. If they need emotions to interact with humans or substitute for humans (Stepford Wives, anyone?), then they will have to display them.

Or, what about crying? What could possibly be the
evolutionary purpose of crying?
At the most basic, it's an alarm or distress call. It can also be a display of submission in a non-fatal battle (this is Larry Niven's hypothesis). It could also be a sign that one is deserving of future help. I've taught Evolution and Human Behavior before; I can go on and on about hypotheses for crying.

Do we really have to have a tear come to our eye when Timmy says goodbye to Lassie?
We do if anyone who thinks that this is a tender moment worthy of tears is watching us. Otherwise, they'll think we're an unsympathetic person unworthy of their sympathy.

What's the purpose of all that?

Social display. It may be so ingrained that it happens even when no one is watching.

And if there is an evolutionary purpose
to crying that I haven't yet grasped, then perhaps we're going to
have to implement crying in the intelligent computers of the future
-- artificial tear ducts and all.
If they need human assistance for anything, most likely.

So, I guess I really don't know whether they'll be implemented or
not, except to say that they'll be implemented if and only if it
turns out that they're really needed.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com
On that last point, I agree!
"Dans cette epoque cybernetique
Pleine de gents informatique."







Post#339 at 06-01-2004 11:54 AM by Brian Rush [at California joined Jul 2001 #posts 12,392]
---
06-01-2004, 11:54 AM #339
Join Date
Jul 2001
Location
California
Posts
12,392

There is a difference between natural selection and design, and I think it bears on this particular point. If ICs depended on other ICs and developed by trial and error, they might need emotions. If they can be designed by humans and don't need emotions to do their jobs and interact with humans, then they won't have them.
I haven't been following this discussion lately; does IC stand for Intelligent Computer? I'm going to assume so and say something here.

When we speak of artificial intelligence, we usually mean to imply that the machine in question is self-willed, that it acts in ways not predicted by its programming except in the most general way. We should also assume that it has the capacity to learn and develop new patterns of behavior.

To do that, the machine must be programmed with some basic motivations, a tendency to do A rather than B in situation C, or parameters by which to make a judgment. It may also develop learned motivations, as a result of experience.

To call these motivations "emotions" is giving them a subjective perspective which we can't verify -- but then, we can't verify them in humans other than ourselves, either. A machine would also not display the involuntary and semi-voluntary side-effects associated with human emotion (tears, shudders, flushes, laughter, etc.), but we must ask whether these side-effects are really the essence of human emotion or merely peripheral to it.

My own belief, as stated elsewhere, is that, since consciousness cannot be observed, it isn't a thing, and it isn't contained in any particular type of tissue or machinery. It's always present, at least in potential, as an emergent property of matter/energy, and that potential manifests wherever sufficient reflective mental capacity and non-determinate thought evolves or is designed. And where there is consciousness together with motivation, there is emotion; the motivation is experienced from within -- i.e., felt -- as emotion.

I can't prove this, of course. Nothing so inherently subjective can ever be proven. But it makes sense to me.







Post#340 at 06-01-2004 11:52 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-01-2004, 11:52 PM #340
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

"Judgment" in Intelligent Computers

To all:

Before responding to the latest postings, I want to expand on
something that I started to describe: The problem of programming a
computer to "exercise judgment" in a computer game.

As I said, I've done this several times before in implementing
computer games. You program the computer to assign points to
different elements of the game, and the move that accumulates the
most points is the move that the computer makes.

What I've found is that when you do this, then the computer plays the
most astonishing moves, even in very simple games, although those
moves make perfect sense when you go back and figure out how the
points were assigned.

Back in the 1980s, international chess master Julio Kaplan wrote a
book on chess computers, and published a sure-fire way to beat almost
every chess computer, or the "novice" mode of stronger chess
programs.

Since this is such a perfect example of the "judgment" problem, I
want to describe it here and show how it illustrates the problem.

The way to beat almost every (novice) chess playing program is shown
in the following diagrams:

[The diagrams of the sample game have not survived in this archive.]

If you know how to play chess, take a moment and go through the above
game, and see why the computer loses.

All chess playing software is programmed to (1) make moves that
protect its own King or attack the enemy's King; and (2) make moves
that gain or preserve material.

The problem is the balance between these two rules. In the game
above, the computer gives too much weight to gaining and preserving
material, and not enough weight to protecting its King.

You might think that this situation would be easy to fix. All you
have to do is make the computer protect its King.

But that won't work either, because in many middle game positions,
you want the King to move toward the center if that means
protecting or capturing pieces.

So this problem is not easy to fix; in fact, it's very
difficult to fix.
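
To show the balance problem in miniature, here's a toy evaluation
function in Python. The weights and the candidate moves' scores are
invented, but the shape of the problem is the one just described:
overweight material relative to King safety, and marching the King
out to grab a Pawn comes out as the "best" move.

Code:
# Invented weights, mis-balanced on purpose.
W_MATERIAL, W_KING_SAFETY = 10.0, 1.0

def evaluate(effects):
    return (W_MATERIAL * effects["material"]
            + W_KING_SAFETY * effects["king_safety"])

candidates = {
    "Ke3-d4 (grab the Pawn)": {"material": +1.0, "king_safety": -5.0},
    "Kd2 (stay sheltered)":   {"material":  0.0, "king_safety": +1.0},
}

best = max(candidates, key=lambda m: evaluate(candidates[m]))
print(best)   # the Pawn-grab scores 5.0 to 1.0 -- and loses the game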

The "novice" mode chess programs can look ahead only three or four
half-moves, so when the computer plays 6 Ke3-d4 in the above game, it
can't see far enough ahead to realize the danger the King is in. The
more powerful chess computers can look ahead 10 or more half-moves,
and so they see the danger earlier.

This is exactly the kind of problem that the Intelligent Computers
(ICs) will have to solve in a wide variety of situations, and it
represents a significant problem in the development of the IC
algorithm.

Human beings have the same problem, incidentally. In human beings,
it's called "growing up." When we're young, we do perhaps 20-30
stupid things every day. But as we get older and develop "wisdom"
and "judgment," the situation gets better. For example, I've
gotten to the point where I'm down to doing only 10-15 stupid things a
day.

So the ICs are going to have to go through some process analogous to
"growing up."

What I'm afraid of is that ICs will be put into use before that
"growing up" process has had a chance, with the result that the ICs
will do lots of really stupid things, like moving the King to the
middle of the board to protect a Pawn. If the ICs are supposed to be
fighting a war for us and they start doing stupid things because they
haven't "grown up" yet, then we may lose the war.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#341 at 06-02-2004 08:42 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-02-2004, 08:42 PM #341
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Emotions vs Rule-based thinking

Dear Sean,

Quote Originally Posted by William Jennings Bryan
> I've read somewhere that emotions are thought by some evolutionary
> biologists (among others) to serve an important function.

> Descartes' Error
> http://www.amazon.com/exec/obidos/tg.../-/0380726475/

> From an internet description:

> >> In his book Descartes' Error, the neurologist Antonio Damasio
> >> has developed a universal model for human emotions. This model
> >> is based on a rejection of the Cartesian body-mind dualism
> >> that he believes has crippled scientific attempts to
> >> understand human behaviour, and draws on psychological
> >> case-histories and his own neuropsychological experiments. He
> >> began with the assumption that human knowledge consists of
> >> dispositional representations stored in the brain. He thus
> >> defines thought as the process by which these representations
> >> are manipulated and ordered.

> >> One of these representations, however, is of the body as a
> >> whole, based on information from the endocrine and peripheral
> >> nervous systems. Damasio thus defines "emotion" as: the
> >> combination of a mental evaluative process, simple or complex,
> >> with dispositional responses to that process, mostly toward
> >> the body proper, resulting in an emotional body state, but
> >> also toward the brain itself (neurotransmitter nuclei in the
> >> brain stem), resulting from additional mental changes.

> >> Damasio distinguishes emotions from feelings, which he takes
> >> to be a more inclusive category. He argues that the brain is
> >> continually monitoring changes in the body, and that one
> >> "feels" an emotion when one experiences "such changes in
> >> juxtaposition to the mental images that initiated the cycle".

> Perhaps AI scientists should consider adding emotion to the mix?
> And not something to make their interaction with us more pleasant,
> but actually more possible.
Last night I posted the message about how to beat almost any novice
computer chess program, and I wanted to do that before answering your
message because I think it's worthwhile trying to explore a possible
connection.

Rule-based thinking

Chess-playing programs are pretty much completely rule-based. They
make chess moves following the chess rules. When they have many
moves to choose from, they evaluate different moves by what
programmers call a "minimax algorithm," so named because the computer
picks the move that maximizes its own chances of winning on the
assumption that the opponent will reply so as to minimize them.

Because there are far too many moves for a computer with today's
power to consider, the computer has to be able to evaluate different
moves, and also evaluate the resulting positions. These two
functions, the move evaluator and the position evaluator, are
normally performed by assigning points to different elements.
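
For anyone curious about the shape of it, here's a bare-bones,
depth-limited minimax sketch in Python. It's generic -- the game
itself is supplied through callback functions -- and real chess
programs pile enormous machinery on top of this:

    # Toy sketch of depth-limited minimax. "depth" counts half-moves;
    # a "novice" setting of 3 or 4 can't see a danger that only shows
    # up 5+ half-moves ahead. The game is supplied as callbacks.
    def minimax(position, depth, maximizing, evaluate, legal_moves, play):
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)  # fall back on the position evaluator
        values = (minimax(play(position, m), depth - 1, not maximizing,
                          evaluate, legal_moves, play)
                  for m in moves)
        return max(values) if maximizing else min(values)

    def best_move(position, depth, evaluate, legal_moves, play):
        # The move evaluator: pick the move whose resulting position
        # minimax rates highest for us (opponent to move next).
        return max(legal_moves(position),
                   key=lambda m: minimax(play(position, m), depth - 1,
                                         False, evaluate, legal_moves, play))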

In the chess example where the computer lost, the position evaluators
give some weight to keeping the King protected near home, but more
weight to gaining material, and so we see the startling result that
the computer allowed its King to move to Q5 (d5) in the opening,
something that even a novice human chess player would know not to do.

But the computer "follows the rules" and plays according to those
rules.

We see similar things today in the implementation of so-called
"zero-tolerance" rules in schools. According to these rules,
there's zero tolerance for drugs or weapons in schools, and that
sounds pretty darn reasonable.

But then we read stories about kids who have been suspended from
school for carrying ordinary aspirin tablets or prescription drugs, or
for pointing a finger at someone and saying, "Bang! Bang!"

Those are the kinds of startling results that occur with purely
"rule-based thinking," and that's the kind of thinking that I've been
assuming in the "intelligent computer algorithm" design that I posted
earlier in this thread.

Intelligent computers and emotions

When we discuss emotions in computers, we have to make sure we
distinguish between two completely separate questions:

  • Having emotions. This is the question I'm
    addressing in this message - what it means for computers to have
    emotions, and how emotions might affect rule-based thinking.
  • Displaying emotions. This is a completely separate
    question. We might program an intelligent computer to talk in an
    angry voice or a whiny voice or an understanding voice, or, if the IC
    has something resembling a face, we might program the IC to display
    different emotional expressions on that face.


In human beings, having emotions and displaying emotions largely go
together. In general, if you're angry, then you sound angry and you
look angry, for example.

But in ICs, these are completely separate functions, and there's no
need to link them in any way. So when we talk about emotions in an
IC, we have to mention which one we're talking about, because they
don't go together as they do in humans.

Rules versus emotions

In my design of the "IC algorithm," I assumed that intelligent
computers would be completely rule-based.

Insofar as I took having emotions into account, I assumed that any
important behavioral functions that emotions serve in human
beings could be accomplished by rules in the IC software.

For example, in the chess example, a novice chess-playing human being
might not push his K to d5 for purely emotional reasons: "I don't
like the feeling of seeing my King so exposed."

In the zero-tolerance case, a school principal might override the
rule with an emotional argument: "Bringing an aspirin to school broke
the zero-tolerance drug rule, but this is obviously a nice little
girl who would never hurt anyone, and it was obviously an innocent
mistake, so I won't punish her."

Clearly rules could have handled both of these situations. The chess
example could be guided by a rule about not exposing the King too
much in the opening. The zero-tolerance rule could be augmented to
allow the principal to exercise some discretion for innocent errors.
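
To see the difficulty concretely, here's a toy Python sketch of a
zero-tolerance rule augmented with a discretion hook. Everything in
it is invented, and the point is that the "discretion" is itself just
another predicate that somebody has to write:

    # Toy sketch: a rigid rule, then the same rule with a discretion
    # hook. The hard part is that "innocent mistake" must itself be
    # defined by a rule -- which is exactly the problem.
    BANNED = {"weapon", "illegal drug", "aspirin"}  # zero tolerance

    def zero_tolerance(item):
        return "suspend" if item in BANNED else "ok"

    def with_discretion(item, innocent_mistake):
        if zero_tolerance(item) == "suspend" and innocent_mistake(item):
            return "warning only"  # the principal's override
        return zero_tolerance(item)

    # But what predicate do we pass in? "Obviously a nice little girl"
    # isn't one a computer -- or a fair school -- can use.
    print(with_discretion("aspirin", lambda item: item == "aspirin"))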

But when you discuss these rules, you immediately see the danger of
making emotional decisions, especially in the zero-tolerance case.
What exactly is the rule for deciding whether a mistake was an
innocent mistake? In the example I gave, the principal said, "This
is obviously a nice little girl who would never hurt anyone." Well,
does that mean that if the offender were an overweight, poorly
dressed girl then she would be suspended, while the pretty little girl
would be excused?

In fact, while it's human to make emotional decisions, it's also
human to demand rules to back up those emotional decisions. The
angry parents of the overweight girl would demand to know why their
daughter was suspended while the pretty girl was not. The principal
would be forced to come up with a rule defending his decision.

That's why the law is so complex. We all know the commandment, "Thou
shalt not kill." But what if it's in war or self-defense? Oh, then
it's OK. But what if you weren't really in enough danger to justify
calling it self-defense? Well, then maybe it's manslaughter. What if
you kill a man when you come in and discover him in bed having sex
with your wife? Well, killing him may be an emotional decision, but
you'll go to jail anyway. But what if he was in bed raping your wife?
Well, that's different again.

In the end, we don't allow ourselves or others to make purely
emotional decisions in situations that affect others. We demand to
know what the rules are, and we get very angry if an emotional
decision that hurts us violates the rules. And if you hurt somebody
because of your emotional decision, you'd better start thinking about
a rule that retroactively justifies your decision.

Incidentally, we have a word for that: Rationalization.

So what's the evolutionary purpose of having emotions? If we're
always going to demand rule-based rationalizations for all emotional
decisions, then why do we have emotions at all?

One possibility is that making irrational, emotion-based decisions
serves an evolutionary purpose: it's important for "survival of the
fittest."

We need the emotion of love to have children and protect them. One
example: Without emotions, one child would be the same as any other
child to us, and so we wouldn't do anything special to protect our
own children.

We need the emotion of hate to motivate us to kill the people who would hurt us.
This is especially important in the case of crisis wars. It may well
be that crisis wars are most likely to be won by the society that
hates the other more.

Soooooooooooooo, to bring this long, rambling discussion to a close:
If the only evolutionary purpose of having emotions is for survival
of the fittest, then perhaps attempting to implement them (somehow)
in intelligent computers is a waste of time.

On the other hand, if emotions do help in some other way that I don't
yet understand, then those functions should be understood and
implemented in the IC software.

Quote Originally Posted by William Jennings Bryan
> Perhaps AI scientists should consider adding emotion to the mix?
> And not something to make their interaction with us more pleasant,
> but actually more possible.
In the near future I'll get hold of Damasio's book and see what he
might have to say on this subject. Perhaps he might be able to
provide some insight into an improved intelligent computer algorithm.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#342 at 06-02-2004 08:43 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-02-2004, 08:43 PM #342
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Memes and social behavior

Dear Sean,

Quote Originally Posted by William Jennings Bryan
> Likewise, perhaps human thought is also based on memes? Some
> interesting material on that matter is:

> Susan Blackmore's The Meme Machine
> http://www.amazon.com/exec/obidos/tg...l/-/019286212X and
> Robert Aunger's The Electric Meme
> http://www.amazon.com/exec/obidos/tg...l/-/0743201507 .

This gets into an entirely different area -- social behavior.

Is it possible that something analogous to memes could play a part in
some sort of future social structure of intelligent computers?
That's way too far ahead to even think about now.

With regard to possible social interactions between intelligent
computers, the important fact is that computers will be communicating
with each other wirelessly with substantially higher bandwidth than
human communication.

An example that's often mentioned is that if one English-speaking
human being learns to speak French, and another human wants to do the
same thing, he has to go through the same amount of work as the first
one. But if one super-intelligent computer learns French then it can
simply transmit the data to another computer, which then knows French
without all the work.
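
The difference is easy to sketch. In the following toy Python
example, a lookup table stands in for whatever a real learning system
would actually produce; the point is only that machine B receives the
finished result instead of repeating the work:

    # Toy sketch: machine A serializes what it has learned and machine
    # B loads it, skipping the learning entirely. A dictionary stands
    # in for the (vastly more complex) real thing.
    import pickle

    learned_french = {"hello": "bonjour", "thank you": "merci"}

    blob = pickle.dumps(learned_french)   # machine A transmits this...
    copy = pickle.loads(blob)             # ...and machine B "knows French"
    print(copy["hello"])                  # -> bonjour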

Since computers can pass so much data to one another wirelessly, the
meme paradigm doesn't seem to me as applicable.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#343 at 06-02-2004 08:45 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-02-2004, 08:45 PM #343
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Re: Emotions

Dear Vince,

Quote Originally Posted by Vince Lamb '59
> >>> Quote: Could humans have evolved without them?

> IMHO, no. They wouldn't be humans. They wouldn't even be mammals.
> They might not even be descended from reptiles, although I'm
> having an argument with my psychology major girlfriend over
> whether reptiles and birds (1) have a limbic system and (2)
> whether a limbic system is required for emotions. Maybe some kind
> of intelligent social insect could exist without emotions as we
> know them, but even they become aroused, as anyone who has
> observed ants, bees, and wasps knows.
Perhaps you could explain this some more. Do emotions serve the same
purpose in mammals, reptiles and birds as they do in humans? Given
that humans can think and reason, wouldn't emotions have a different
purpose in humans than in animals that don't think and reason?

In my response to Sean, I suggested that the evolutionary purpose of
emotions in humans was for "survival of the fittest" behaviors --
love for birth and protection of children, hate and anger to
motivate fights and wars that guarantee that the strongest variations
survive. Does that make sense?

Quote Originally Posted by Vince Lamb '59
> There is a difference between natural selection and design and I
> think it bears on this particular point. If IC's depended on other
> IC's and developed by trial and error, they might need emotions.
> If they can be designed by humans and not need emotions to do
> their jobs and interact with humans, then they wouldn't need it.
> If they need emotions to interact with humans or substitute for
> humans (Stepford Wives, anyone), then they will have to display
> them.
I guess I don't see why they would need emotions to interact with
humans. After all, computers interact with humans today (in things
like word processors and spreadsheets), and no one expects them to
exhibit emotions. In fact, would you even trust your spreadsheet to
compute your annual budget if you thought that your computer was
emotional?

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#344 at 06-02-2004 08:45 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-02-2004, 08:45 PM #344
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Dear Brian,

Quote Originally Posted by Brian Rush
> I haven't been following this discussion lately; does IC stand for
> Intelligent Computer? I'm going to assume so and say something
> here.

> When we speak of artificial intelligence, we usually mean to imply
> that the machine in question is self-willed, that it acts in ways
> not predicted by its programming except in the most general way.
> We should also assume that it has the capacity to learn and
> develop new patterns of behavior.
I agree with this. The machines will be self-willed, and within a
generation or two will be more intelligent than we are. But there's
a transitional period, when ICs will be doing things that we tell
them -- like winning wars for us. If we design the IC algorithm
incorrectly, then the ICs may do something unexpected, like decide
that the best way to preserve themselves is to kill all the humans.
That means that we need to design the IC algorithm so that when we
turn these things loose for the first time, they'll act the way we
expect them to.

Quote Originally Posted by Brian Rush
> My own belief, as stated elsewhere, is that, since consciousness
> cannot be observed, it isn't a thing, and it isn't contained in
> any particular type of tissue or machinery. It's always present,
> at least in potential, as an emergent property of matter/energy,
> and that potential manifests wherever sufficient reflective mental
> capacity and non-determinate thought evolves or is designed. And
> where there is consciousness together with motivation, there is
> emotion; the motivation is experienced from within -- i.e., felt
> -- as emotion.
This reminds me of a philosophy course I took in college, where we
discussed such things as: Does a number exist? Does a (mathematical)
set exist? If I feel a pain, does the pain exist? If a pain exists,
then can you and I feel the same pain? I think you've carried these
questions to the next level. I never really did very well in that
course, because I couldn't see why these questions matter. Can you
tell me?

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#345 at 06-02-2004 09:34 PM by Brian Rush [at California joined Jul 2001 #posts 12,392]
---
06-02-2004, 09:34 PM #345
Join Date
Jul 2001
Location
California
Posts
12,392

Well, those questions have to do with what we mean by the word "exist." I would answer most of them with "yes," but they do not all exist in the same mode.

To clarify: what I mean by saying that something "exists" is that it can be experienced, at least in principle. But that is not limited to being experienced via the senses and thus being part of physical reality. There are, I submit, four modes of experience: sensation, imagination, cognition, and emotion.

Does a number or a mathematical set exist? Yes, these are experienced in the cognitive mode only, but still that's a true mode of experience so they exist. Pain is both sensory and emotional -- it is actually part of the physical world, and can be objectively described in terms of nerve and brain responses and behavior, although what is usually meant by "pain" is the subjective experience of these things. No, you and I do not feel the same pain, but then neither do I feel the same pain from one instance to the next.

Pain is a good example of what I'm getting at. Looked at from the outside, pain consists of nerve impulses sent in response to bodily injury, processing of these impulses by the brain, and affective and avoidance behavior. Evidence of these phenomena is all we need to say that we are observing "pain" in any organism. Yet this does not tell us whether anyone or anything is experiencing the pain subjectively. Nor is there any objective test we can perform to answer that question. We have no way of distinguishing between a subjective entity experiencing pain, and a biological automaton displaying only the neurological and behavioral objective features of pain, with nobody home to actually hurt. Except, of course, in the sole instance of oneself, and that knowledge, although certain beyond any doubt, is entirely subjective.

Nor, in my opinion, will there ever be such a test. Subjectivity is inherently subjective and not amenable to being observed.

Now I go from this to a conclusion that has always been very difficult to communicate. But I will try once more.

If consciousness -- by which I mean the presence of subjectivity, and of somebody home to actually hurt as opposed to mere automatic nerve action and behavior -- were a property of brain tissue, then we should be able to develop (or at least to propose) a test for its presence. Since we cannot, I conclude that consciousness is not a property of brain tissue.

Since consciousness is not a function of brain tissue, there is no reason why it could not be found in some other type of substance, like a computer.

For that matter, what is there about a brain that makes us associate it with consciousness? Is it not the fact that my brain supplies experience to my consciousness, in a fashion sophisticated enough to let me ask the question in the first place? And so what my brain is doing is not experiencing all this subjective stuff, but rather processing information in such a way that it allows reflective awareness of subjectivity.

What that in turn suggests to me is that subjectivity is an emergent property of matter. It is always present in potential, and that potential emerges whenever matter develops the property of reflective intelligence to even the dimmest degree. Actually I don't even think reflective awareness is a necessity, although that must exist in order to philosophize on the subject. Mere information processing of any kind does the trick.

From this, one more step. If consciousness is an emergent property of matter, then there is no basis for distinguishing one individual's consciousness from another's. Individual consciousness is then an illusion created by memory. Memory creates identity, identity becomes associated with consciousness, and we suppose, erroneously, that the identity itself is what is conscious. No: the universe is conscious; identity merely reflects upon that consciousness.

And so we arrive at the fundamental premise of all mysticism: that the Self is the Cosmos. And that Self can just as easily be associated with a machine-based identity as with an organic one, provided the machine intelligence is sophisticated enough to even have an identity.







Post#346 at 06-03-2004 11:08 AM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-03-2004, 11:08 AM #346
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Emotions

Dear Brian,

Quote Originally Posted by Brian Rush
> To clarify: what I mean by saying that something "exists" is that
> it can be experienced, at least in principle. But that is not
> limited to being experienced via the senses and thus being part of
> physical reality. There are, I submit, four modes of experience:
> sensation, imagination, cognition, and emotion.
I think I understand all you're saying, which is an extremely
mathematical/philosophical view of existence. I still have the same
feeling I had when I studied these things in college philosophy - why
does it matter?

But from a practical point of view, it matters a great deal, and it's
an approach I agree with.

I had to chuckle because earlier in this thread we were discussing
such things as whether an intelligent computer could be "conscious,"
or "alive," or "self-aware."

With respect to self-awareness, I wrote:

Quote Originally Posted by I
> Self-awareness is very easy. Just add to the computer software
> something so that when the computer is asked whether it's
> self-aware, it says, "Yes, I'm self-aware." No problemo.
Well, I got pretty much clobbered for that, and I had to go on to
other possible aspects of self-awareness. But I still really
think that my first answer was a good one.

Having said all that, I think the concept of computer emotions raises
another important issue, as I've been discussing. I gave several
examples of "rule-based reasoning," and how they can get you into
trouble -- the example where the computer advances its King to d5 in
the opening, the example where humans used "zero-tolerance" drug
rules in schools to punish innocent kids for innocent behavior, and
the Terminator example where the ICs decide that in order to
preserve themselves they have to kill all the humans.

These examples show how both humans and computers can get into
trouble with purely logical, rule-based thinking, and the question
arises whether "emotional thinking" provides a separate way to make
decisions that can prevent rule-based thinking disasters. I don't
think it does, but the question has to be asked.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#347 at 06-03-2004 11:21 AM by Brian Rush [at California joined Jul 2001 #posts 12,392]
---
06-03-2004, 11:21 AM #347
Join Date
Jul 2001
Location
California
Posts
12,392

John, I don't think it's so much "emotion-based" thinking as a recognition of the possibility of error, and the use of weighted random decision-making in all situations. Or almost all. Most human decision-making isn't done through linear logic but through a kind of Chaos process, in which there is a range of possible decisions represented by a probability distribution. The distribution is weighted by a number of things: precedent, compatibility with one's underlying motivational patterns (values, morality, etc.), estimates of likely success or failure, envisioning of possible consequences, etc. Only in very familiar situations, where successful choices have been made in the past and so there is really no doubt about what to do now (whether or not there should be), does the probability of that decision being made approach -- or maybe even equal -- 1.

The problem with the school bureaucracy you referred to is that it is cumbersome and its decisions are made for it by individuals earlier on. A single person faced with the failure of a decision like that would have likely reversed it sooner. This is an inherent problem with bureaucracy.

I've thought for some time that a necessary ingredient in true artificial intelligence has to be the inclusion of true random decision-making, along with software that weights the percentages according to criteria that depend on the situation. The only way "emotion" figures into it is in those underlying values and motivational patterns.
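
A minimal sketch of what I have in mind, in Python, with the
weighting criteria reduced to a placeholder:

    # Toy sketch of weighted random decision-making: situational
    # criteria produce a weight for each option, and the choice is
    # drawn from the resulting probability distribution. In a very
    # familiar situation one weight dominates and the decision is
    # almost -- but not quite -- deterministic.
    import random

    def decide(options, weigh, situation):
        weights = [weigh(o, situation) for o in options]
        return random.choices(options, weights=weights, k=1)[0]

    def weigh(option, situation):
        # Placeholder for precedent, values, estimated success, etc.
        return situation.get(option, 0.01)

    familiar = {"usual route": 0.97, "new route": 0.03}
    print(decide(list(familiar), weigh, familiar))  # nearly always "usual route"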

I'm currently writing a novel involving artificial intelligence. The machines in my novel are programmed with three motivations: survival, curiosity, and loyalty to machine civilization. But from the moment they begin their independent existence, they also acquire subsidiary motivations through pursuit of these basic three, after decisions of theirs succeed or fail. They can also communicate their subsidiary motivations to other machines, and so you get the machine equivalent of political movements and religions. In my novel, the machines feel their motivations as emotions, or something like emotions.







Post#348 at 06-03-2004 04:31 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-03-2004, 04:31 PM #348
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Random decisions

Dear Brian,

Quote Originally Posted by Brian Rush
> I don't think it's so much "emotion-based" thinking as a
> recognition of the possibility of error, and the use of weighted
> random decision-making in all situations. Or almost all. Most
> human decision-making isn't done through linear logic but through
> a kind of Chaos process, in which there is a range of possible
> decisions represented by a probability distribution.
I think there's a lot more linear logic in ordinary situations than
you're crediting. Let me suggest the following mental process:

A man makes an "emotional" decision, based on whatever definition of
"emotional" you care to use. If the decision has any importance
whatsoever, then sooner or later he'll be called on to justify the
decision. At that point (or even earlier, since he'll anticipate
this situation), he'll "rationalize" his decision by devising rules
that justify the decision. Once he develops the rationalization,
he's stuck with it in the sense that he will be required (by his own
conscience or by pressure from others) to apply the same rule in all
situations, and so all further similar situations will be handled by
linear logic.

The point here is that the "linear logic" is not done in real time;
it's done offline, as "retroactive linear logic" that then applies to
future decisions.

You can find many variations of this.

Falling in love is a truly emotional decision, but if you ask someone
why he's marrying his girlfriend, he might reply, "Because she's the
most beautiful girl in the world," thus giving a logical explanation
for even a purely emotional decision.

Rationalization is an important part of the political process.
Generally speaking, politicians are willing to do or say anything
with the completely logical objective of getting votes,
but then they have to give reasons based on non-political principles,
which then limits their actions in the future.

The point is that it seems to me that most decisions really are
logical, since emotions are permitted only once; as soon as he makes
an emotional decision, all future decisions of the same type are
restrained by the rationalized logic he used to explain the first
decision.

Quote Originally Posted by Brian Rush
> The distribution is weighted by a number of things: precedent,
> compatibility with one's underlying motivational patterns (values,
> morality, etc.), estimates of likely success or failure,
> envisioning of possible consequences, etc. Only in very familiar
> situations, where successful choices have been made in the past
> and so there is really no doubt about what to do now (whether or
> not there should be), does the probability of that decision being
> made approach -- or maybe even equal -- 1.
I agree that you can call this kind of decision-making process
"emotion," but I would also argue that there's a logic to it when you
start looking at how the weightings are developed.

This, I think, is called "making a gut decision." There's no clear
right answer to a problem, but "your gut" tells you to make it a
certain way, even if you can't explain why.

Well, how does all that work? I would claim that it works according
to a massively parallel pattern matching model, the same as vision
and sound recognition.

Vision and sound recognition work by massively parallel comparisons
with sights and sounds stored in the brain. Similarly, making a gut
decision is done by massively parallel comparisons of the current
situation with previous situations. Thus, a gut decision isn't truly
random, but is inferred by inductive logic.

The instantaneous nature of gut decision making supports this view:
When you see or hear something you recognize, you don't go through a
long, logical process to figure out what it is; you just know,
instantaneously. This is also true of "gut decisions," which are
typically arrived at instantaneously.
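
Here's a crude Python sketch of that idea: compare the current
situation against all the remembered ones (in the brain those
comparisons would run in parallel; the loop is just the serial
version) and reuse the decision attached to the closest match. The
features are invented:

    # Toy sketch: gut decision as nearest-neighbor pattern matching
    # over remembered situations and the decisions made in them.
    def similarity(a, b):
        keys = set(a) | set(b)
        return sum(1 for k in keys if a.get(k) == b.get(k)) / len(keys)

    def gut_decision(current, memory):
        _, decision = max(memory, key=lambda m: similarity(current, m[0]))
        return decision

    memory = [
        ({"tone": "angry", "stakes": "high"}, "back off"),
        ({"tone": "friendly", "stakes": "low"}, "agree"),
    ]
    print(gut_decision({"tone": "angry", "stakes": "medium"}, memory))
    # -> "back off": closest remembered situation, known instantly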

Quote Originally Posted by Brian Rush
> I've thought for some time that a necessary ingredient in true
> artificial intelligence has to be the inclusion of true random
> decision-making, along with software that weights the percentages
> according to criteria that depend on the situation. The only way
> "emotion" figures into it is in those underlying values and
> motivational patterns.
I can see this, as in situations where "pattern-matching" the current
situation with past experiences gives you several possible choices.

Quote Originally Posted by Brian Rush
> I'm currently writing a novel involving artificial intelligence.
> The machines in my novel are programmed with three motivations:
> survival, curiosity, and loyalty to machine civilization. But from
> the moment they begin their independent existence, they also
> acquire subsidiary motivations through pursuit of these basic
> three, after decisions of theirs succeed or fail. They can also
> communicate their subsidiary motivations to other machines, and so
> you get the machine equivalent of political movements and
> religions. In my novel, the machines feel their motivations as
> emotions, or something like emotions.
It sounds like you and I are trying to do similar things, but you're
doing it in the context of a novel. If your novel is in any shape to
be read by someone else, then I'd be happy to take a look at it. I
might even make a suggestion or two (and if I do, then I'll try not
to be arrogant in doing so!).

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#349 at 06-04-2004 11:58 AM by Brian Rush [at California joined Jul 2001 #posts 12,392]
---
06-04-2004, 11:58 AM #349
Join Date
Jul 2001
Location
California
Posts
12,392

John, when I refer to thinking as "nonlinear," I don't mean that it is irrational. (Though sometimes it is, of course.) I mean that it is described by a nonlinear rather than a linear equation, that is, it is described by Chaos mathematics. It is unpredictable in specific. Given a new decision to make, it is not possible to predict with certainty how a person will choose to act, even given very great knowledge both of the circumstances and of the personality involved. There is an indeterminacy to thinking and decision-making (free will, if you like) that must be replicated in artificial intelligence if it is to be true intelligence.







Post#350 at 06-04-2004 02:44 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
06-04-2004, 02:44 PM #350
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Astonishment

Dear Brian,

Quote Originally Posted by Brian Rush
> John, when I refer to thinking as "nonlinear," I don't mean that
> it is irrational. (Though sometimes it is, of course.) I mean that
> it is described by a nonlinear rather than a linear equation, that
> is, it is described by Chaos mathematics. It is unpredictable in
> specific. Given a new decision to make, it is not possible to
> predict with certainty how a person will choose to act, even given
> very great knowledge both of the circumstances and of the
> personality involved. There is an indeterminacy to thinking and
> decision-making (free will, if you like) that must be replicated
> in artificial intelligence if it is to be true intelligence.
You've really stated the problem in a way that's almost explosive.
In particular,

Quote Originally Posted by Brian Rush
> Given a new decision to make, it is not possible to predict with
> certainty how a person will choose to act, even given very great
> knowledge both of the circumstances and of the personality
> involved.
The above sentence captures a great problem that affects all of us
individually, and affects us as a nation as well.

How will my husband/wife react if he/she finds out I had an affair?
Will my teen son/daughter act responsibly with a new car for his/her
birthday? Did OJ really kill Nicole? What policies will Bush/Kerry
really follow if elected President? Will Jacques Chirac
support our new Iraq resolution in the Security Council? What kind of
terrorist attack is Osama bin Laden planning next for America? These
are all practical applications of the issue that you're raising.

Now my question is: Are you really so certain that there's anything
really chaotic in any of these questions?

The reason I'm asking is because of something I said earlier - that
even in relatively simple computer games, like 3D TicTacToe,
computers can make moves that are really astonishing, but that make
sense when you look into the computer's algorithm.

When you go to the level of a computer chess-playing game, these
programs play astonishing moves all the time, moves that no one could
have anticipated.

So the point that I'm making is that even relatively simple rule sets
can yield totally surprising, astonishing results, and so the hugely
more complex rule sets that people use to get through the day are
certain to produce astonishing results sometimes. So if your evidence
for randomness in human decision-making is that humans sometimes make
extremely astonishing and surprising decisions, then rule sets alone
would produce that anyway, and randomness wouldn't be necessary to
explain it.

Don't misunderstand - I have no objection to having some random
elements in computer decision-making, if only to add a little more
variety. And maybe there is some randomization in human brain
decision-making, but it's hard to see why it's absolutely necessary.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com
-----------------------------------------