Generational Dynamics
Fourth Turning Forum Archive



Thread: The Singularity - Page 12







Post#276 at 04-15-2004 08:26 PM by Finch [at In the belly of the Beast joined Feb 2004 #posts 1,734]
---

Some random observations

On machine intelligence:

Difficult computational problems are generally categorized according to their parallelizability: that is, how easily the computation can be split into individual, independent chunks. The "hardest" problems involve billions of individual computation units, each tightly dependent on the results of other units. The classic example is finite element analysis, as used in the simulation of nuclear explosions and weather systems: each simulated particle's behavior depends on the behavior of all the "neighboring" particles. Solving problems of this type requires a collection of processors that are very tightly coupled, where the speed of the interconnections is more of a bottleneck than the processing power of the individual units. This has some interesting implications.
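To make the coupling concrete, here is a minimal toy sketch (my own illustration, not taken from any real simulation code) of a one-dimensional finite-difference update. Every cell's next value depends on its immediate neighbors, so any parallel split of the grid has to exchange boundary values on every single step -- exactly what makes these problems interconnect-bound:

```python
# Toy 1-D diffusion stencil. Each cell's next value depends on its
# neighbors; this tight coupling is what makes finite-element and
# finite-difference codes interconnect-bound when split across
# many processors.
def step(u, alpha=0.1):
    """One explicit finite-difference update of the interior cells."""
    return [
        u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ]

u = [0.0] * 5 + [100.0] + [0.0] * 5      # a single hot cell
for _ in range(3):
    u = [u[0]] + step(u) + [u[-1]]       # boundaries held fixed at 0
# The heat spreads outward: the peak drops while the total is conserved.
```

If you split this grid between two processors at cell 5, each half needs the other's edge cell before it can compute anything -- every step, forever.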

First, the biggest, baddest supercomputers in the world are required for such problems. Indeed, the very definition of a supercomputer (performance on the LINPACK benchmark) reflects the ability to address tightly coupled computations. However, since the interconnections are the limiting factor, the processing units themselves can be simpler, off-the-shelf components. We see this borne out in the Top500 rankings: as recently as 1997, specialized vector processors (mostly from Cray) dominated the list, providing over 90% of the world's supercomputing capacity, but since 2001 vector processors have become a rare exception in the Top500. Instead, over 40% of the Top500 capacity is provided by standard x86 desktop-type chips.

Second, as the expensive interconnect technology enters the mainstream (with the explosive growth of the Internet), the cost of supercomputing technology is decreasing dramatically, although not quite at the 18-month-halving rate seen for processors themselves. It is now possible to assemble a supercomputer entirely from off-the-shelf hardware and software, as Virginia Tech has done. In fact, it is possible to build a (temporary) supercomputer for free, using donated computer time and open-source software, as demonstrated here.

How is this relevant to Machine Intelligence? Well, virtually every model of human cognition indicates that it is a supercomputing-class problem, in that the interconnection capability is as significant as the processing capability. The "cycle time" of an individual neuron is on the order of 50-100 ms, yet humans can complete cognitive tasks in 500 ms, i.e. in only 5-10 "neural cycles". This would be the equivalent of a typical desktop CPU completing a task in a few hundredths of a microsecond. Humans can complete tasks in a handful of cycles, and computers cannot, and clearly this has something to do with the massive number of interconnections between neurons. Of course, we still don't know for sure exactly how the brain works, but in all likelihood, simulating a brain will require supercomputer-level technology.
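The arithmetic behind that "5-10 neural cycles" figure is simple enough to check; this little sketch just restates the paragraph's own order-of-magnitude estimates, not measurements of any particular brain:

```python
# Order-of-magnitude check of the "5-10 neural cycles" figure.
# Inputs are the rough estimates quoted in the text above.
task_ms = 500                    # time to complete a simple cognitive task
neuron_cycle_ms = (100, 50)      # slow and fast per-neuron "cycle times"

cycles = tuple(task_ms // t for t in neuron_cycle_ms)
print(cycles)  # (5, 10): only a handful of serial steps are available,
               # so the work must be spread across billions of neurons
               # working in parallel.
```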

That is not to say that other computational arrangements are not useful. Some problems requiring immense computing power can easily be broken into completely independent chunks. Such problems are referred to as "trivially parallelizable." The best example is protein folding: to target a drug to a particular cellular (or viral) receptor, a drug company must design a protein with a particular three-dimensional shape. Currently, the only way to do this is to try every possible amino-acid sequence. The number of possible sequences is in the quintillions, but each sequence can be computed and tested individually, without consulting the results for any other sequence. Thus, one possible approach to such problems is to distribute individual computational units among thousands or even millions of individual computers. This is known as "distributed computing"; United Devices' GRID.org and Stanford's Folding@home are two real-world examples. (SETI@Home is another example, but its scientific value is more dubious.) The distributed computing approach has the advantage that setup and maintenance are much simpler; both projects just listed use a screensaver program to take advantage of desktop computers' idle cycles. Thus the cost is also much lower, close to zero.
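As a rough sketch (with a made-up score() function standing in for the real folding computation), a trivially parallelizable workload looks like this: every work unit is a pure function of its own input, so the map below could be farmed out to thousands of donated machines, in any order, and merged at the end.

```python
from concurrent.futures import ThreadPoolExecutor

def score(candidate: int) -> int:
    """Toy stand-in for evaluating one candidate sequence. Note that it
    depends only on its own input, never on another unit's result."""
    return (candidate * 2654435761) % 1000   # arbitrary hash-like fitness

candidates = range(10_000)

# Because no work unit consults any other, the pool below could just as
# well be 10,000 desktop PCs running a screensaver client.
with ThreadPoolExecutor(max_workers=8) as pool:
    scores = list(pool.map(score, candidates))

best = max(candidates, key=lambda c: scores[c])
```

Contrast this with the diffusion stencil above: here there is no boundary data to exchange, so adding machines scales almost perfectly.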

Unfortunately, as discussed, distributed computing is a poor model for John's "Intelligent Computer". However, the process of gathering the "Knowledge Bits" that John described can easily be distributed. In fact, there is a very active project where exactly this approach is being attempted: MIT Media Lab's CommonSense OpenMind. OpenMind currently has over 600,000 "knowledge bits" in its database. The entire CommonSense database can be downloaded for free; combine this with a few thousand off-the-shelf PCs, and you can have an Intelligent Computer of your very own.

Yet such an IC would not even approach the capabilities of a typical 1-year-old child. Why? Well, let's leave aside metaphysical concerns for the moment and look at one example: language acquisition. Language is the most distinctive, and most widely studied, of all human behaviors. Yet we still know so very little about how it is learned. One thing is clear: a child most definitely does not acquire language by assembling a collection of facts, as John implies. When I want to teach my child what a tree is, I don't say "A tree is a perennial woody plant having a main trunk and usually a distinct crown." I don't even say something simpler, like "A tree is something you can climb on. They make nice shade in the summer, and the leaves change colors and fall off when it gets cooler. Trees are made out of wood. Some of them have fruit on them." I just point to a tree and say, "tree." This simple experience, and thousands of more methodical studies, indicate that a child acquires the meaning of a word generally with a single exposure. This could only be possible if the concept of (say) a tree exists in the child's brain before it ever hears the word. In fact, it seems that all human languages share the same ten thousand or so basic concepts, and simply assign different series of sounds to the various internally represented concepts.

Thus, it appears that the human brain is hard-wired for language acquisition and production. Essentially, our brains are highly-tuned, special-purpose computing devices, not general-purpose computers like the typical PC. This has enormous implications for the future development of "intelligent" machines.







Post#277 at 04-16-2004 10:40 AM by AAA1969 [at U.S.A. joined Mar 2002 #posts 595]
---

For machine intelligence to regularly "outinvent" human beings requires a lot more than a machine that is more complicated than a human being. It even requires a lot more than a machine that is smarter than a human being.

Yes, inventions are made by people, but the sum of our innovations is made by that super-organism, SOCIETY: people as units in a larger group.

Did a person invent the Nissan Altima? No. Some person way back when discovered the wheel. Same goes for the screw, the ball-bearing, the combustion engine, the shock absorber, non-scratch paint coatings, shatter-resistant glass, and the cupholder.

Even then, did one person choose to assemble all this stuff in a particular way? Nope. A team of designers came up with a design. They ran it through marketing, who tested it with consumers. They went to the cost group to price it out. Many other areas were involved as well.

Few inventions are the product of one mind. We collaborate, and one of the huge advantages is that our individual computers are so DIFFERENT from each other. Also, we use machines as needed to help us think, and build helper machines when we see the need for one.

When machines can reach THAT level, things will change. It will be some time.








Post#279 at 04-19-2004 12:10 AM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

Re: Some random observations

Dear Rick,

Quote Originally Posted by Rick Hirst
> Difficult computational problems are generally categorized
> according to their parallelizability: that is, how easy is it to
> split up the computation into individual, independent chunks. ...
> Unfortunately, as discussed, distributed computing is a poor model
> for John's "Intelligent Computer".
Yes, I do understand parallelizability, interconnection problems,
the math involved in NP-complete computations, and a few parallel
architectures.

The algorithm I described was designed using brute-force methods
that are fully parallelizable. The computer vision, voice
recognition and natural language processing applications use
massively parallel pattern-matching techniques.

As I indicated, I expect the first versions of the IC to have limited
vision because of the massive amounts of memory needed. In the early
versions, vision may only be possible when nothing is moving (as in
an IC plumber), or when motion is strongly constrained (as in an IC
nursemaid). Being able to drive an automobile will probably take
well into the 2020s.

As for the parallizability of goal-reaching algorithms, I would need
to spend more time to figure out how the rule searching could be
massively parallel. However, I note that most humans take a long
time "figuring out" how to do something -- anywhere from seconds to
minutes to days -- and so massive parallelization may be required
only for the simplest cases. These needs more work, but I have no
doubt that it can be done.

Quote Originally Posted by Rick Hirst
> MIT Media Lab's CommonSense OpenMind. OpenMind currently has over
> 600,000 "knowledge bits" in its database.
> http://commonsense.media.mit.edu/
Thanks for the pointer. When I have a chance I'll check this out, and
perhaps use this information to update the IC algorithm.

Quote Originally Posted by Rick Hirst
> When I want to teach my child what a tree is, I don't say "A tree
> is a perennial woody plant having a main trunk and usually a
> distinct crown." I don't even say something simpler, like "A tree
> is something you can climb on. They make nice shade in the summer,
> and the leaves change colors and fall off when it gets cooler.
> Trees are made out of wood. Some of them have fruit on them." I
> just point to a tree and say, "tree."
I have to laugh at this. Do you really think I didn't figure this
out? Do you really think that I thought that children learn language
by reading the Oxford English Dictionary? Yes, you've described how
a child learns what a tree is, but that's no reason why the IC has to
learn what a tree is that way. For one thing, as I said, vision
won't be available for a while, but in the meantime the IC can still
learn what a tree is by other means.

Quote Originally Posted by Rick Hirst
> Thus, it appears that the human brain is hard-wired for language
> acquisition and production. Essentially, our brains are
> highly-tuned, special-purpose computing devices, not
> general-purpose computers like the typical PC. This has enormous
> implications for the future development of "intelligent"
> machines.
I'm not describing a typical PC. What I'm describing is software for
a computer that will include some combination of integrated circuit
technology, nanotechnology, biotechnology and atomic technology. I
don't know what will go into a computer in the 2020s, but I do know it
won't be an ordinary PC, and I do know how powerful it will be by
extrapolating the exponential growth curve.

As I said, it would be nice if we approached this discussion more
constructively by discussing how to improve and refine the algorithm,
rather than just taking shots at it.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com








Post#281 at 04-19-2004 12:12 AM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---

The Intelligent Computer Algorithm

Quote Originally Posted by AAA1969
> Few inventions are the product of one mind. We collaborate, and
> one of the huge advantages is that our individual computers are so
> DIFFERENT from each other. Also, we use machines as needed to help
> us think, and build helper machines when we see the need for one.

> When machines can reach THAT level, things will change. It will be
> some time.
You've made a very warm, fuzzy comment, but I'm going to give a very
cold answer.

I know of no reason why intelligent computers can't collaborate.
In fact, they have a big advantage over humans: the bandwidth of
human communication is very low, while ICs will be able to
communicate with each other wirelessly at very high bandwidths.

The interesting question is whether ICs will have different
personalities. I would expect the IC plumber to have a different
personality than the IC nursemaid, simply because they'll be using
different rule sets, and different rule sets require different
behaviors that appear as different personalities.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com








Post#283 at 04-19-2004 08:34 AM by A.LOS79 [at Jersey joined Apr 2003 #posts 516]
---

A

John: Your fear of computers is related to 1984 Orwell possibilities by the 2040's, if the Millennial generation pushes computers in any dumb direction without being stopped. So you are basically saying I'll die because I'm sitting next to a screen. My computer will murder me. "BIG MILLENNIAL" is watching me. So just like we feared nuclear weapons and the Soviet Union and terrorism, people will fear the Millennials, computers and big brother.

It's something how we sure know how the Crisis of 2100 would be fought, but not the Crisis of 2020 yet.








Post#285 at 04-19-2004 04:32 PM by Finch [at In the belly of the Beast joined Feb 2004 #posts 1,734]
---

Re: Some random observations

Quote Originally Posted by John J. Xenakis
Quote Originally Posted by Rick Hirst
Difficult computational problems are generally categorized according to their parallelizability: that is, how easy is it to split up the computation into individual, independent chunks. ... Unfortunately, as discussed, distributed computing is a poor model for John's "Intelligent Computer".
Yes I do understand about parallelizability, about interconnection
problems, about the math involved in NP-complete computations, and
a few parallel architectures.
I'm quite certain you do. My post was a sort of background primer for others who are following this topic. (That's why I referred to you in the 3rd person.)

Quote Originally Posted by John J. Xenakis
Quote Originally Posted by Rick Hirst
MIT Media Lab's CommonSense OpenMind. OpenMind currently has over 600,000 "knowledge bits" in its database. http://commonsense.media.mit.edu/
Thanks for the pointer. When I have a chance I'll check this out, and
perhaps use this information to update the IC algorithm.
You can also sign up and contribute. I found that the types of information they are trying to gather, and the difficulty I faced in trying to phrase my responses appropriately, helped me a great deal in understanding some of the challenges we face while developing intelligent systems.

Quote Originally Posted by John J. Xenakis
Quote Originally Posted by Rick Hirst
When I want to teach my child what a tree is, I don't say "A tree is a perennial woody plant having a main trunk and usually a distinct crown." I don't even say something simpler, like "A tree is something you can climb on. They make nice shade in the summer, and the leaves change colors and fall off when it gets cooler. Trees are made out of wood. Some of them have fruit on them." I just point to a tree and say, "tree."
I have to laugh at this. Do you really think I didn't figure this out? Do you really think that I thought that children learn language by reading the Oxford English Dictionary?
Of course not; as I said, I was simply providing some background. Actually, it was an insight to me: I had always assumed that children learn by observing and mimicking adult behavior. So, for example, a child would learn what a tree is by looking at lots of trees, and listening to adults talk about trees, and asking lots of questions about trees. This is certainly what we are taught to expect as parents (do you have children, John?) so of course we blame ourselves for any learning shortfall the child has. Thus, it comes as quite a surprise to find that children can learn with virtually no adult input at all! I've mentioned this in another thread, but I highly recommend The Nurture Assumption for a fascinating look at how children actually acquire knowledge and values.

Quote Originally Posted by John J. Xenakis
Yes, you've described how a child learns what a tree is, but that's no reason why the IC has to learn what a tree is that way.
That is a reason, because it may be that the child's approach is the best approach. Consider this: evolution optimizes* for two related metrics: efficiency and robustness. Efficiency refers to the amount of work (digestion, locomotion, computation) a specific organic component can produce per unit of energy input. The energy input includes the energy necessary to produce and maintain the component, as well as the energy required to produce the work. By this standard, current silicon-based systems are stunningly, astoundingly inefficient: to manufacture a single Pentium chip requires over a ton of raw materials to be processed, as well as thousands of gallons of waste water -- all this for processing power that compares unfavorably with an insect. Robustness refers to the ability of a system to operate in a wide variety of external conditions, and to be able to continue functioning even after sustaining damage. A spider can walk with a couple of legs missing; a CPU fails when even a single one of its millions of transistors quits working.

(* Yes, I know that even this fairly value-neutral description is an anthropomorphization of evolution. If that bothers you, you can replace "evolution does X" with "the evolutionary outcome is that the probability of X occurring is increased above alternate probabilities", but I'm not going to type that. 8))

The criteria of efficiency and robustness necessarily lead to gradualism in nature: no improvement to an existing system survives unless every intermediate step is also efficient and robust. There are no half-developed systems in nature.

Quote Originally Posted by John J. Xenakis
It would be nice if we approached this discussion more constructively by discussing how to improve and refine the algorithm, rather than just taking shots at it.
Certainly. In my previous post, I described the obstacles facing the "top-down" AI approach you described. Here, I present an alternate approach: the "bottom-up" approach as seen in nature. Pick a very specific task, and optimize it. For example, instead of developing a generalized vision system, work on reverse-engineering the cornea, the retina, or the optic nerve. Of course, different groups can work on each area, all at the same time. This has the advantage of producing far more useful results with far less time and money. With this approach, some amazing breakthroughs have already been achieved. (Similar approaches should yield automated vehicles within the next few years -- google "DARPA Grand Challenge" to see the current state of the art.)

As I have posted previously, I too expect the first true "post-humans" within the next thirty years; not as a result of a "top-down AI" Singularity event, like Athena springing from the head of Zeus, but rather as the result of 30 years of conscious, deliberate, gradual self-modification by willing (and unwilling) humans.

(On a side note, the notion of continuous exponential growth in computing power has some surprising philosophical implications, which I'll discuss in another thread.)














Post#287 at 04-21-2004 10:39 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
04-21-2004, 10:39 PM #287
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Re: A

Quote Originally Posted by A.LOS79
Your fear of computers is related to 1984 Orwell
If I were afraid of computers, then I would be in terror 24x7.

John







Post#289 at 04-21-2004 10:45 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
04-21-2004, 10:45 PM #289
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Re: Some random observations

Dear Rick,

Quote Originally Posted by Rick Hirst
> Certainly. In my previous post, I described the obstacles facing
> the "top-down" AI approach you described. Here, I present an
> alternate approach: the "bottom-up" approach as seen in nature.
> Pick a very specific task, and optimize it. For example, instead
> of developing a generalized vision system, work on
> reverse-engineering the cornea, the retina, or the optic nerve. Of
> course, different groups can work on each area, all at the same
> time. This has the advantage of producing far more useful results
> with far less time and money. With this approach, some amazing
> breakthroughs have already been achieved. (Similar approaches
> should yield automated vehicles within the next few years --
> google "DARPA Grand Challenge" to see the current state of the
> art.)
This sounds reasonable to me, but how about merging the algorithms?

Instead of starting the bottom-up algorithm from a clean slate, what
about doing some of the top-down stuff first, and then using the
bottom-up approach to allow the IC to optimize itself for various
tasks?

Actually, I suspect you have to do that anyway, since the bottom-up
approach requires a fair amount of the goal-setting part of the IC
algorithm that I described in order to work. Whatever specific task
you select from the beginning, the IC has to be able to optimize
itself, just as a child optimizes itself when it learns various tasks,
and that's going to require something like the IC algorithm anyway.

But by incorporating your bottom-up suggestion, we have the best of
both worlds. We give the IC a head start (it can have the OED in its
memory for ready reference, for example, so it can learn new words
quickly), but we also give it the flexibility to change its rules in
whatever way it "wants" in order to complete the goal/task that's
been set for it.

Quote Originally Posted by Rick Hirst
> As I have posted previously, I too expect the first true
> "post-humans" within the next thirty years; not as a result of a
> "top-down AI" Singularity event, like Athena springing from the
> head of Zeus, but rather as the result of 30 years of conscious,
> deliberate, gradual self-modification by willing (and unwilling)
> humans.
Kurzweil talks about this a lot, and I'm sure something like this
will happen, but the only skepticism I have is that I don't see this
as a dominating event. In other words, by 2030 I would expect to see
as many Intelligent Computers as there are PCs today. But by 2030 I
don't expect to see more than a handful of humans converted into
walking computers. I see ICs as being experimental in the 2010s, and
in full production by 2030, while I see the post-humans to be still
experimental by 2030.

There's one other point that has to be mentioned. We can talk about
various ways that ICs can be developed, and we can talk about various
applications that ICs will be used for, but the most important
developer will be the defense establishment, and the most important
application will be war. And the way I know that is because every
new technology is used for war first.

Quote Originally Posted by Rick Hirst
> (On a side note, the notion of continuous exponential growth in
> computing power has some surprising philosophical implications,
> which I'll discuss in another thread.)
I'll be interested in reading your philosophical remarks.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#291 at 04-22-2004 03:30 PM by Prisoner 81591518 [at joined Mar 2003 #posts 2,460]
---
04-22-2004, 03:30 PM #291
Join Date
Mar 2003
Posts
2,460

One other question to ponder IRT this thread: will humanity make it to the end of this century, or even long enough to develop the super-AIs that Mr. Xenakis tells us will be our undoing? Or are we living in the last days of humanity even now, with this 4T, not a hypothetical next one, as the final Apocalypse?







Post#293 at 04-22-2004 07:32 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
04-22-2004, 07:32 PM #293
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Optimism

Dear Titus,

Quote Originally Posted by Titus Sabinus Parthicus
> One other question to ponder IRT this thread: will humanity make
> it to the end of this century, or even long enough to develop the
> super-AIs that Mr. Xenakis tells us will be our undoing? Or are we
> living in the last days of humanity even now, with this 4T, not a
> hypothetical next one, as the final Apocalypse?
As someone who is eternally pessimistic, this is one area where I can
exhibit a tiny bit of optimism.

I do not believe that the next 4T war crisis will come anywhere near
ending humanity.

There are six billion people on earth today. I expect 10-100 million
people to die in the next war. If I'm still around 20 years from now,
(which is very unlikely), I'll be absolutely astounded if more than
several hundred million people die in the next war. But even if
three billion people die, that still leaves half the earth's
population alive. There is simply nothing unusual about this
kind of war in the scheme of human history.

There are lots of things that can be extremely destructive in the
next war. There are thousands of nuclear weapons floating around,
especially in the old Soviet Union, and a bunch of those will go off,
but those won't make a big dent in the world population. There'll
probably be a SARS epidemic, an Ebola epidemic, and a tuberculosis
epidemic, and a lot of AIDS deaths, but even with those, there'll be
public "no contact" policies that will limit the deaths.

In the end, I expect most of the deaths to come in the usual way. To
paraphrase Leo Tolstoy in War and Peace: Hundreds of millions
of men will perpetrate against one another such innumerable crimes,
frauds, treacheries, thefts, forgeries, issues of false money,
burglaries, incendiarisms, and murders as in whole centuries are not
recorded in the annals of all the law courts of the world, but which
those who commit them will not at the time regard as being crimes.

But humanity will continue. In fact, things will be a lot better.
Population will be reduced, so there'll be a lot more food per
capita, so poverty will be reduced. Wars have a great way of
reducing poverty once they're over, don't they?

As for surviving the Singularity, I can even exhibit a bit of
optimism there. As I've previously said, I expect the Singularity to
occur around 2030, and by 2050 I expect Intelligent Computers to be
as much more intelligent than humans as humans are more intelligent
than dogs and cats. Just as we don't kill our dogs and cats, there's
no reason for the ICs to kill us.

But there is a transition period, and that's why it's so important
for people to understand this as soon as possible. Just going
through the exercise of writing down the algorithm has educated me on
some of the dangers that can come about from incorrectly programming
the IC algorithm. (The one I mentioned above was the one in
Terminator: The IC has a goal of preserving itself, and it concludes
that the only way to preserve itself is to kill all the humans.)

But writing down the algorithm also gave me a feel for how it can be
programmed to preserve humanity, and even serve humanity. After all,
the ICs don't have feelings, and there's no reason why they can't
assign each of us humans an IC to be a personal servant, nursemaid,
valet, entertainer, and so forth. How's that for optimism?

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#295 at 04-24-2004 03:00 PM by Prisoner 81591518 [at joined Mar 2003 #posts 2,460]
---
04-24-2004, 03:00 PM #295
Join Date
Mar 2003
Posts
2,460

The following article is posted here for educational and discussion purposes only:

Five Reasons for Considering That the End Times May Soon Begin:

1) Israel claims her land.

The return of the Jewish people to Israel is the "super-sign" of prophecy. The Bible predicts over and over again that the Jews must be back in their homeland in order for the events of the end times to unfold (see Jeremiah 30:1-3; Ezekiel 34:11-24; Zechariah 10:6-10).

In 2003, a year of failed peace plans, rising Israeli-Palestinian tensions, and continued terrorist activity, Israel began construction of a 320-mile barrier. The structure, now 25-percent complete, is made up of a complex network of fences, trenches, walls, and security roads. Palestinians say the barrier is a land grab by Israel; Israel claims it is a necessary tool in its war against terrorism; prophecy experts view the barrier as a potential sign of things to come.

2) Saddam's removal clears way for rebuilding Babylon.

The city of Babylon, located on the Euphrates River in modern Iraq, is mentioned almost 300 times in the Bible. It is consistently portrayed as a place of rebellion and pride. Scripture informs us that in the end times, Babylon will be rebuilt into a great city that will serve as a commercial and religious capital for the Antichrist.

The ousting of Saddam from power has resulted in a lifting of sanctions and limitations on Iraqi oil sales. As the "nation-building" continues, billions of dollars will begin to flood into Iraq, making the rebuilding of Babylon as a major economic center for the Middle East (and the rest of the world) a real possibility.

3) A power struggle emerges in Europe.

This summer, a draft constitution for Europe amazingly included a reference in its preamble to a "reunited Europe." This constitution could very well be the glue for the reunited Roman Empire predicted in the Bible during the end times.

According to the prophet Daniel, the second phase of the Roman Empire will take the form of a coalition of 10 nations. The EU, in its current form, has 15 members.

Recently, the summit on a proposed constitution for a united Europe collapsed after leaders failed to reach an agreement on the sharing of power. The fight revealed an unusual level of public animosity among the EU nations. Warning that an expanded EU could force Europe to "march to the slowest step," French President Jacques Chirac suggested a "pioneer group" of nations should move forward alone. Perhaps this new coalition will develop in the shape of a 10-kingdom form as revealed by Daniel.

4) Religious leaders debate authority of Scripture.

The Bible uses the word apostasy to describe opposition from within. It refers to people who profess to believe ? who call themselves believers ? but who believe and teach false doctrine and practice ungodly behavior. There are a handful of New Testament passages that tell us that apostasy will be a defining characteristic of the last days.

At the 2003 annual convention for the Episcopal Church, the issue of the authority of Scripture was so divisive that leaders of the worldwide Anglican Communion were forced to call an emergency session of religious leadership in London in order to prevent the splintering of the church. As debate heightened, an opposition movement in the Southern Hemisphere, where beliefs are more orthodox and growth is strong, grew more powerful.

5) Democracy falters in Russia.

Twenty-five hundred years ago, the Hebrew prophet Ezekiel was given a detailed prophecy foretelling that Russia would become a dominant player on the world scene in the last days (Ezekiel 38-39). However, following the breakup of the Soviet Union, it became increasingly difficult to believe that Russia was going to be the major power that Ezekiel described.

Then, in a recent parliamentary election that Europe's leading democracy watchdog group called "overwhelmingly distorted," Vladimir Putin's party won a landslide victory. Manipulating state media to boost his campaign, Putin's victory is widely considered unfair. Critics fear the death of democracy after Russia's liberal parties were all but wiped out. Putin's supporters claim the pro-Kremlin majority will hand the ex-KGB spy more powers to fight corruption. Prophecy experts view the development as just one more reason for believing Christ could come in our time.

Join the Left Behind Club to follow these events each week!

Members get:

Weekly "Interpreting the Signs" newsletter.
In-depth analysis of the news in light of End Times prophecy.
Exclusive access to the Prophecy Resource Center and Message Boards.
FREE Gift, FREE Trial Issue, and NO Obligations!
Something to think about.







Post#297 at 04-24-2004 04:46 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
04-24-2004, 04:46 PM #297
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Theological issues

Dear Titus,

If I'm not mistaken, all five of those signs also occurred in the
WW I time frame and again in the WW II time frame. So it's hard to
see why they mean more this time than they did at other times.

However, speaking of theological issues, here's my current favorite:

According to beliefs in most religions, wars are the fault of human
beings, and are certainly not God's fault.

But it appears that the food supply, on a regional and worldwide
basis, grows at about 0.96% per year, while peacetime population
grows 1% to 4% per year. For example, it's been growing at 3.89% per
year in Gaza. This means that food gets relatively scarcer each year
of peace time, and so poverty mathematically MUST increase every
year, until a war breaks out to bring down the population again.
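The arithmetic behind this is easy to check. A minimal sketch, using the growth rates cited above (0.96% per year for the food supply, 3.89% per year for Gaza's population); the function name and time horizons are just illustrative:

```python
# Per-capita food supply relative to year 0, given compound growth
# in the food supply and in the population.
def per_capita_food(years, food_growth=0.0096, pop_growth=0.0389):
    """Return per-capita food supply as a fraction of its year-0 level."""
    return ((1 + food_growth) / (1 + pop_growth)) ** years

for years in (10, 20, 40):
    print(f"after {years:2d} years: {per_capita_food(years):.2f} of the year-0 level")
```

At those rates, per-capita food falls to roughly three-quarters of its starting level within a decade, and keeps falling as long as the gap between the two growth rates persists -- which is the sense in which poverty "mathematically MUST increase" during peacetime.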

Now, if God is all-powerful, and God created the earth, it's clear He
could have created an earth where the food supply and population grew
at the same rate. Instead, He created a world in which the
population grows substantially faster than the food supply. That's
His fault.

That means that periodic wars are mathematically required. That's
also His fault. Therefore, wars are God's fault, not humans' fault.

Sincerely,

John

John J. Xenakis
E-mail: john@GenerationalDynamics.com
Web site: http://www.GenerationalDynamics.com







Post#299 at 04-24-2004 05:32 PM by Mikebert [at Kalamazoo MI joined Jul 2001 #posts 4,502]
---
04-24-2004, 05:32 PM #299
Join Date
Jul 2001
Location
Kalamazoo MI
Posts
4,502

Quote Originally Posted by Titus
Twenty-five hundred years ago, the Hebrew prophet Ezekiel was given a detailed prophecy foretelling that Russia would become a dominant player on the world scene in the last days (Ezekiel 38-39). However, following the breakup of the Soviet Union, it became increasingly difficult to believe that Russia was going to be the major power that Ezekiel described.
I looked up Ezekiel 38-39. I saw no references to Russia. The Catholic Encyclopedia has this to say about Gog:

But it seems more probable that both names are historical; and by some scholars Gog is identified with the Lydian king called by the Greeks Gyges, who appears as Gu-gu on the Assyrian inscriptions. If this be true, Magog should be identified with Lydia. On the other hand, as Mosoch and Thubal were nations belonging to Asia Minor, it would seem from the text of Ezechiel that Magog must be in that part of the world. Finally, Josephus and others identify Magog with Scythia, but in antiquity this name was used to designate vaguely any northern population.






