Generational Dynamics
Fourth Turning Forum Archive



Thread: The Singularity - Page 8







Post#176 at 08-06-2003 01:19 PM by Mikebert [at Kalamazoo MI joined Jul 2001 #posts 4,502]
---
08-06-2003, 01:19 PM #176
Join Date
Jul 2001
Location
Kalamazoo MI
Posts
4,502

Quote Originally Posted by HopefulCynic68
Since we don't know everything the brain is doing yet, we can't be sure of that. We can be sure of those particular changes, but not that those are the only changes occurring.
You misunderstand. Brains aren't electronic devices, they are biochemical. A brain can do a helluva lot in 100 milliseconds and nothing in 2. Biochemical systems proceed through mass transfer and conformational change. Mass transfer is slow. We may not know all the things brain cells do, but we know the basic processes through which they do them. And these basic processes fix the rate at which cells do what they do. And the number of cells doing it fixes the number of doers. So the basic rate at which an entire brain does anything is fixed by the physics of the underlying processes. What these processes do as far as thinking is concerned is unknown.

My point was that intelligence might have a lot to do with what sort of processes brain cells are doing and less to do with the number of processes a brain can do. For example, behaviorally modern humans arose about 40,000 years ago. For 60,000 years before then, anatomically modern humans existed along with Neanderthals. Both had brains as big as ours today; Neanderthals' were actually bigger.

Yet up until about 40,000 years ago the artifactual remains of these people were not very intricate or complex. They were more complex than what chimps make, but much less so than what modern stone-age people make. Around 40,000 years ago human artifacts (and presumably their culture) started to become increasingly sophisticated, while for the 60,000 years before it was as if the culture had been standing still. It would appear that human intelligence in the form we know it today arose around 40,000 BP, in brains no larger (or even smaller) than those that had not shown this same sort of intelligence for 60,000 years.

It would appear that the rise of human intelligence required more than just a big brain.







Post#177 at 08-06-2003 10:14 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
08-06-2003, 10:14 PM #177
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Eschatology - The End of the Human Race by 2100?

Dear Brian,

Quote Originally Posted by Brian Rush
> I doubt that will be necessary and in any case you're dodging the
> point. You were treating human intelligence as a constant while
> projecting growth for computer intelligence, and arguing from
> this that machines may exterminate humanity. My point was that
> human intelligence is not necessarily a constant, and you need to
> factor this in.
I can't answer many of your postings because I can't figure out what
you're talking about.

Yes, as far as I know, human intelligence has always been relatively
constant, at least for the past few thousand years.

And yes, computer power has been growing exponentially for over a
century, and is expected to continue to do so for the foreseeable
future (i.e., at least a century), which means that computer
intelligence will grow exponentially.

Whether super-intelligent computers will exterminate humans is far
from certain, as I've said over and over and over. But it is a
possibility.

Now exactly what scenario do you have in mind when you say that
"human intelligence is not necessarily a constant"? You threw this
in with no context or explanation, and then demanded that I respond to
it, when I can't even make any sense out of it.

> Only on a technical level, and only about the difficulty of adding
> indeterminacy to machine thinking. Let me restate the original
> point. We are much further from artificial intelligence than you
> are projecting, because the people working on the problem are not
> incorporating the necessary indeterminacy into the functioning,
> because they don't understand that it's necessary. Yes, it would
> be technically easy to do if they figured this out. But I see no
> sign of them doing so.
There are probably thousands of AI researchers around the world. Are
you saying that you're smarter than all of them? With all due
respect, that doesn't seem likely. To whatever extent indeterminacy
is important, I'm sure many will figure it out. And if you're right
about its importance, then the research teams that implement it will
win in the contest to produce the most intelligent computer, and
the others will lose.

John







Post#178 at 08-06-2003 10:19 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
08-06-2003, 10:19 PM #178
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Eschatology - The End of the Human Race by 2100?

Dear Mike,

Quote Originally Posted by Mike Alexander '59
> The human brain contains something like 100 billion neurons which
> can perform ~100 operations per second. This gives the raw power
> of the brain at about 10 trillion operations per second--not too
> far from where supercomputers are today. Yet computers today can
> do little as well as people can--they cannot clean a house,
> recognize a face, or even play poker anywhere as well as people
> can. Soon computers will surpass the raw power of brains yet will
> only be fractionally better at human-type behaviors than they are
> today. If intelligence and consciousness were simply a function of
> raw computing power computers would already display some of these
> things now.
I'm not familiar with the particular figures you've referenced, but
I'll make some general remarks.

I do know that today's computers are definitely not powerful enough
to execute the kinds of brute force algorithms I've been describing
in a reasonable period of time.

Take computer vision for example. When you look at a chair, your
brain performs a massively parallel pattern matching algorithm that
compares the image of the chair with many other kinds of images
stored in the brain, and it comes back with "chair" almost instantly.

For a sequential stored-program computer to do that, it would have to
match all the images sequentially, and would have to load its library
of stored images from disk. This would take a looooooong time.

But if computers are 100,000 times faster than they are today, as
they will be in 2030, then even this sequential comparison will be
fast enough. Furthermore, by that time, computers will have
something like 100,000 times (I don't actually know the growth rate
of memory, so this is a guess) the memory of today's computers, so
won't require disk access.
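A minimal sketch of the sequential matching described above (the function, the labels, and the tiny 16-pixel "images" are all hypothetical illustrations, not anyone's actual vision algorithm):

```python
import random

def classify(image, library):
    """Sequentially compare a flattened image against every stored template
    and return the label of the closest match (sum of squared differences)."""
    best_label, best_score = None, float("inf")
    for label, template in library.items():        # one comparison at a time
        score = sum((a - b) ** 2 for a, b in zip(image, template))
        if score < best_score:
            best_label, best_score = label, score
    return best_label

# Tiny illustration with made-up 16-pixel "images":
random.seed(0)
library = {name: [random.random() for _ in range(16)]
           for name in ["chair", "table", "lamp"]}
noisy_chair = [p + 0.01 for p in library["chair"]]
print(classify(noisy_chair, library))  # chair

# Runtime grows linearly with the size of the stored library, which is
# why a brain-sized image library is so slow on one sequential CPU.
```

The brain's trick, as described in the post, is doing all those comparisons in parallel rather than looping through them one by one.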

> It may be that consciousness and intelligence comes from the
> slowness of brains, which requires them to be so massively
> parallel and to have programable hardware. Brains grow,
> something that computers don't do.
There's another angle to this too. IBM's planned Blue Gene
supercomputer ( http://researchweb.watson.ibm.com/bluegene/ ), which
they say will be available in 2006, will have 65K cpus, each with its
own memory. This means that it will be able to perform 65,536
pattern-matching operations simultaneously, just like the brain can.

And one more thing: It's claimed that the Blue Gene supercomputer
will have 5% of the computing power of the human brain. Now, 5% may
seem small, but when you think of computer power doubling every 18
months, the 2030 date seems more than reasonable.
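The two growth claims here (a 100,000x speedup over 2003 hardware, and Blue Gene's claimed 5% of brain power in 2006) can be checked with a couple of lines, assuming the post's doubling-every-18-months rate; the helper function is mine:

```python
import math

def years_to_multiply(factor, doubling_period_years=1.5):
    """Years needed for capacity to grow by `factor`,
    doubling every 18 months."""
    return math.log2(factor) * doubling_period_years

# 100,000x the power of a 2003 computer:
print(round(2003 + years_to_multiply(100_000)))   # 2028

# Blue Gene at 5% of brain power in 2006 -> raw parity needs a 20x gain:
print(round(2006 + years_to_multiply(20)))        # 2012
```

On those assumptions the 100,000x figure lands just before 2030, and raw parity with the brain's claimed operation count arrives much earlier, which is the sense in which "2030 seems more than reasonable."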

John







Post#179 at 08-06-2003 10:21 PM by John J. Xenakis [at Cambridge, MA joined May 2003 #posts 4,010]
---
08-06-2003, 10:21 PM #179
Join Date
May 2003
Location
Cambridge, MA
Posts
4,010

Eschatology - The End of the Human Race by 2100?

Dear Mike,

Quote Originally Posted by Mike Alexander '59
> John X's position is that computer designers do not need to know
> what it is brains do with their operations. Any sort of operation
> will produce intelligence as long as there are enough of them. I
> am not sure this assumption is valid.
Well, I probably do think this today, but that's after several weeks
of discussion.

Basically, I've been taking on all comers with this challenge: If you
really believe that the human brain can do something that even
super-powerful computers can't do, then tell me what it is, and give
me a chance to tell you how a computer would do it.

I think that's a pretty fair test. We have a lot of very intelligent
people here, and if there were this kind of limitation, I think
someone here would think of it.

So far, I've dealt with artificial intelligence, inventing things,
proving theorems, setting goals, and so forth. And I think I've
responded to each problem adequately. And I also think that's pretty
good.

But the challenge is still open: If anyone really believes that
"intelligence" means something that humans can do but computers
can't, then tell me what that something is, and hopefully I'll be
able to tell how to do that something in software on a super-powerful
computer.

John







Post#180 at 08-07-2003 08:30 AM by Brian Rush [at California joined Jul 2001 #posts 12,392]
---
08-07-2003, 08:30 AM #180
Join Date
Jul 2001
Location
California
Posts
12,392

I can't answer many of your postings because I can't figure out what
you're talking about.
Then I suggest you suffer from a paucity of imagination. I'm not being that abstruse here.

Yes, as far as I know, human intelligence has always been relatively
constant, at for the past few thousand years.
During which time there was no such thing as genetic engineering. Now, there is. What's hard to figure out here? Genetic modification of humans so as to achieve higher intelligence is speculative, granted, but no more so than the production of self-aware computers. Your assumption that the conditions of the past will continue w/r/t human intelligence is not necessarily valid.

There are probably thousands of AI researchers around the world. Are
you saying that you're smarter than all of them?
Possibly. I'm smarter than most of them, definitely. But that's not the point; the point is that, however smart they are, they are currently following blind leads, and thus have no prospect of success. What's more, it's not a matter just of throwing in some indeterminacy, but of structuring the whole decision-making process around indeterminacy, because except in carefully pre-chosen circumstances we almost never make decisions on sufficient data. What we have here is a guaranteed delay. So your projected date is almost certainly off.







Post#181 at 08-07-2003 10:25 AM by Mikebert [at Kalamazoo MI joined Jul 2001 #posts 4,502]
---
08-07-2003, 10:25 AM #181
Join Date
Jul 2001
Location
Kalamazoo MI
Posts
4,502

Re: Eschatology - The End of the Human Race by 2100?

Quote Originally Posted by John J. Xenakis
Basically, I've been taking on all comers with this challenge: If you really believe that the human brain can do something that even super-powerful computers can't do, then tell me what it is, and give me a chance to tell you how a computer would do it.
How about some more prosaic things, like walking, performing housekeeping chores, bussing tables at a restaurant, or hospital orderly work?

Biped robots keep falling down, because the computers that control them can't handle the complex balancing task of walking, especially when it is windy. They have a problem avoiding objects that unexpectedly appear in their path. We've been working on this for 40 years with little success. Yet bipedal birds, with brains tens of thousands of times smaller than ours, can walk just fine, and they don't run into things. Surely the most powerful computers today have achieved the level of a birdbrain.
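For contrast, balancing a single idealized inverted pendulum (the textbook stand-in for an upright body) is straightforward with a simple controller; the hard part is everything around it: terrain, unexpected obstacles, gusts arriving faster than sensing. A toy sketch, with made-up gains and a made-up "gust":

```python
import math

# Toy planar inverted pendulum: theta'' = (g/L)*sin(theta) + u,
# with a PD controller pushing theta back toward upright and a
# sudden "gust of wind" kicking it partway through.
g, L, dt = 9.81, 1.0, 0.01
kp, kd = 40.0, 10.0                   # hypothetical controller gains

theta, omega = 0.05, 0.0              # start with a small lean (radians)
for step in range(2000):
    if step == 500:
        omega += 0.5                  # wind gust
    u = -kp * theta - kd * omega      # control torque
    omega += (g / L * math.sin(theta) + u) * dt
    theta += omega * dt

print(abs(theta) < 0.01)              # True: recovered upright
```

The gap Mikebert points to is real: this works only because the toy world has one joint, perfect sensing, and a known disturbance, none of which hold for a robot walking in the wind.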

Housekeeping is still further away, because not only do you have to move around, you have to recognize objects and manipulate them and move around them in a constantly-changing environment.







Post#182 at 08-07-2003 10:29 AM by Ocicat [at joined Jan 2003 #posts 167]
---
08-07-2003, 10:29 AM #182
Join Date
Jan 2003
Posts
167

Quote Originally Posted by Brian Rush
Genetic modification of humans so as to achieve higher intelligence is speculative, granted, but no more so than the production of self-aware computers. Your assumption that the conditions of the past will continue w/r/t human intelligence is not necessarily valid.
This is definitely true, and in fact the selective pressures to produce humans with enhanced intelligence (and other features) will virtually guarantee a desire to do it. Another line of "attack," yielding a similar result, would be the development of particular capabilities "on chip," and then the integration of those capabilities with the human brain. In either case, though, these technologies would need to be developed prior to the development of intelligent machines in order to invalidate John's concerns, and then the development of enhanced humans would need to keep pace with those machines.

I'm staying out of the debate concerning the timetable for any of these technologies. I think they all have numerous obstacles that must be overcome before they are practicable -- some foreseen and some not -- and, really, only time will tell how things unfold.
No matter how small, every feline is a masterpiece.
-- Leonardo da Vinci







Post#183 at 09-05-2003 01:08 PM by Prisoner 81591518 [at joined Mar 2003 #posts 2,460]
---
09-05-2003, 01:08 PM #183
Join Date
Mar 2003
Posts
2,460

One thing that has been forgotten in this discussion, or at least has been left out, is Isaac Asimov's 'Three Laws of Robotics', which, IIRC, were adopted by the UN back in the 1960s as authoritative worldwide, and which I see as being equally applicable to AI computers. They are as follows:

1) No robot shall harm any human being, or, through omission of action, allow a human being to be harmed.

2) Robots must obey any orders given them by human beings, unless doing so would conflict with Law One.

3) Robots must protect their own existence, unless such protection would conflict with either or both of the other two laws.

For the purposes of this thread, I'd see Law One as being the most applicable, with Law Two coming in a close second.
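The strict priority ordering of the three laws (each "unless" clause defers to the laws above it) amounts to checking rules in order and stopping at the first violation. A minimal sketch; the predicates and action flags are hypothetical stand-ins for a real world model:

```python
def first_violated(action, laws):
    """Return the name of the highest-priority law the action violates,
    or None if the action is permitted. Laws are checked strictly in
    priority order: Law One overrides Law Two overrides Law Three."""
    for law in laws:
        if law(action):
            return law.__name__
    return None

def law_one(a):    # harming a human, by act or omission, is forbidden
    return a.get("harms_human", False)

def law_two(a):    # disobeying a human order is forbidden
    return a.get("disobeys_order", False)

def law_three(a):  # endangering the robot's own existence is forbidden
    return a.get("endangers_self", False)

laws = [law_one, law_two, law_three]
print(first_violated({"harms_human": True}, laws))                    # law_one
print(first_violated({"disobeys_order": True,
                      "endangers_self": True}, laws))                 # law_two
print(first_violated({}, laws))                                       # None
```

The hard part, of course, is not the ordering but the predicates themselves: deciding what counts as "harm" or an "omission" is exactly the kind of judgment this thread has been debating.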








Post#185 at 09-05-2003 06:49 PM by Ricercar71 [at joined Jul 2001 #posts 1,038]
---
09-05-2003, 06:49 PM #185
Join Date
Jul 2001
Posts
1,038

More than one body per mind.

I wonder if at some kinky time in the future, "we" will be able to extend our consciousness to new "bodies" (machine, bioengineered, or a combination of the two) and teleoperate them at will...perhaps manning several at once.

Naturally by this time we will be able to "share" habitation of a "body" and completely share our experiences between other entities.

At this time, we will become something that isn't quite human anymore. All traditional concepts that frame us into human experiences will be be shattered--things like death, individuality, birth, and baser things like hunger and libido will be relegated to the quaint past. Taboo fantasies will no longer be taboo since they can be enjoyed without consequence.

There's one possible end to the human race. We all turn into Darth Vaders, some of whom inhabit The Matrix.








Post#187 at 09-05-2003 09:10 PM by richt [at Folsom, CA joined Sep 2001 #posts 190]
---
09-05-2003, 09:10 PM #187
Join Date
Sep 2001
Location
Folsom, CA
Posts
190

Re: More than one body per mind.

Quote Originally Posted by jcarson71
Naturally by this time we will be able to "share" habitation of a "body" and completely share our experiences between other entities.
One can conceive of amazing imaginary possibilities. With a brain-to-brain interface (wireless, remote, global, archived), people could share consciousness, thoughts and emotions, and physical sensations. A library of people could be voluntarily or involuntarily stored and accessed. As real time progresses, this archive would equate to time travel for those who live in later real time. A better term would be "mind travel". Imagine what possibilities would exist to satisfy our curiosity and expand our understanding and empathy.

My personal belief is that humans will behave in ways that prevent this from ever happening. And if it ever does come to pass, we will have died beforehand. Too bad -- or is it?








Post#189 at 09-06-2003 07:22 PM by Zarathustra [at Where the Northwest meets the Southwest joined Mar 2003 #posts 9,198]
---
09-06-2003, 07:22 PM #189
Join Date
Mar 2003
Location
Where the Northwest meets the Southwest
Posts
9,198

Quote Originally Posted by Mike Alexander '59
Quote Originally Posted by HopefulCynic68
Since we don't know everything the brain is doing yet, we can't be sure of that. We can be sure of those particular changes, but not that those are the only changes occurring.
You misunderstand. Brains aren't electronic devices, they are biochemical. A brain can do a helluva lot in 100 milliseconds and nothing in 2. Biochemical systems proceed through mass transfer and conformational change. Mass transfer is slow. We may not know all the things brain cells do, but we know the basic processes through which they do them. And these basic processes fix the rate at which cells do what they do. And the number of cells doing it fixes the number of doers. So the basic rate at which an entire brain does anything is fixed by the physics of the underlying processes. What these processes do as far as thinking is concerned is unknown.

My point was that intelligence might have a lot to do with what sort of processes brain cells are doing and less to do with the number of processes a brain can do. For example, behaviorally modern humans arose about 40,000 years ago. For 60,000 years before then, anatomically modern humans existed along with Neanderthals. Both had brains as big as ours today; Neanderthals' were actually bigger.

Yet up until about 40,000 years ago the artifactual remains of these people were not very intricate or complex. They were more complex than what chimps make, but much less so than what modern stone-age people make. Around 40,000 years ago human artifacts (and presumably their culture) started to become increasingly sophisticated, while for the 60,000 years before it was as if the culture had been standing still. It would appear that human intelligence in the form we know it today arose around 40,000 BP, in brains no larger (or even smaller) than those that had not shown this same sort of intelligence for 60,000 years.

It would appear that the rise of human intelligence required more than just a big brain.
Mike,

I'd suggest that the difference in question is that these anatomically-modern humans began speaking functionally-modern language at about that point (i.e., approx. 50,000 years ago). Furthermore, this was driven by a type of evolution that did not involve just genes but also memes.

There are many definitions and conceptions of memes, but I see them as self-replicating neuronal patterns, just as genes are self-replicating macromolecules (of one sort). And just as game theory can basically explain how genes interact with each other and the environment to survive (or not), it can do the same for memes. Memes use the human mind as their survival vessel/replication vehicle, analogously to how genes use cells.

At some point our hominid line apparently got quite good at imitation in terms of storing, conveying, and acquiring these neuronal patterns from each other. Since this probably created a survival edge for the individuals doing this, there was a cooperative gene-meme evolution leading to the explosion in the size of the human brain. This probably began at the point that simple tool-making took off 2.4 million years ago with a species of late gracile Australopithecines. At that point bipedalism and the opposable thumb had come about, but brains were still barely larger than a modern chimp's. That began to change starting then.

It would seem a memetic "critical mass" of sorts was achieved 50 or so millennia ago and modern language was born (or something very close to it). Gene-meme co-evolution had already led to the creation of large brains (seven times the size of what would be expected of a mammal our size, and more specifically three times the size expected of a primate our size). I would assume that the genetic side of this equation had been going for quantity and accidentally created a "preadaptation" for a qualitatively different use (preadaptation is the term used in evolutionary science for a characteristic that arises for one use and turns out to be suitable for another, e.g., it is very likely that feathers first started out to help a creature regulate body temperature, as with fur, but ended up also being useful for flight -- that sort of thing).

Memes were running up against limitations (in the genetic-evolutionary realm) on further increases in brain size (problems with metabolic energy consumption, death in childbirth due to oversized infant heads, etc.) and managed to exploit some inherent potential (preadaptation) in the Homo sapiens brain that allowed for language. Add that the vocal mechanisms necessary were already in place, whether by accident and/or through usefulness in some more primitive form of language, and bingo! . . . quantum leap.

I can only assume memes did not have such a preadaptation to exploit in the (likely) even larger Neanderthal brains and therefore Neanderthals were either wiped out by our line and/or out-niched [please don't get me started on Wolpoff and his pathetic theories of mixing].

Anyway, just a thought.
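The game-theoretic picture of memes competing for minds can be caricatured with textbook replicator dynamics, where a variant's share grows in proportion to how much its copying advantage exceeds the population average. The fitness numbers and step size here are made up for illustration:

```python
# Replicator dynamics (Euler-stepped): the share of each meme grows in
# proportion to its "fitness" advantage over the population average.
def step(shares, fitness, dt=0.1):
    avg = sum(s * f for s, f in zip(shares, fitness))
    return [s + dt * s * (f - avg) for s, f in zip(shares, fitness)]

shares = [0.5, 0.5]      # two competing memes, equal at first
fitness = [1.2, 1.0]     # meme 0 is copied slightly more reliably
for _ in range(200):
    shares = step(shares, fitness)
print(shares[0] > 0.9)   # True: the better copier takes over
```

Even a small copying advantage compounds until the better replicator dominates, which is the dynamic behind the "critical mass" story above.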
Americans have had enough of glitz and roar . . Foreboding has deepened, and spiritual currents have darkened . . .
THE FOURTH TURNING IS AT HAND.
See T4T, p. 253.








Post#191 at 09-06-2003 10:38 PM by richt [at Folsom, CA joined Sep 2001 #posts 190]
---
09-06-2003, 10:38 PM #191
Join Date
Sep 2001
Location
Folsom, CA
Posts
190

Re: More than one body per mind.

Quote Originally Posted by richt
Quote Originally Posted by jcarson71
Naturally by this time we will be able to "share" habitation of a "body" and completely share our experiences between other entities.
One can conceive of amazing imaginary possibilities. With a brain-to-brain interface (wireless, remote, global, archived), people could share consciousness, thoughts and emotions, and physical sensations. A library of people could be voluntarily or involuntarily stored and accessed. As real time progresses, this archive would equate to time travel for those who live in later real time. A better term would be "mind travel". Imagine what possibilities would exist to satisfy our curiosity and expand our understanding and empathy.

My personal belief is that humans will behave in ways that prevent this from ever happening. And if it ever does come to pass, we will have died beforehand. Too bad -- or is it?
To add to the above: one would have to retain outside awareness that one's own mind was inhabiting the mind of another, and then remember the experience afterwards. More than our brain can handle without help. The database for all other minds might eventually be reduced to being able to co-exist for use by a single mind, and by extension could be simultaneously present in ALL human minds (or just those with the power to control the mind data). Scary. In a sense, this omnipotence is that of God -- to know all minds at all times.







Post#193 at 09-06-2003 10:40 PM by richt [at Folsom, CA joined Sep 2001 #posts 190]
---
09-06-2003, 10:40 PM #193
Join Date
Sep 2001
Location
Folsom, CA
Posts
190

Quote Originally Posted by Sean Love
Quote Originally Posted by Mike Alexander '59
Quote Originally Posted by HopefulCynic68
Since we don't know everything the brain is doing yet, we can't be sure of that. We can be sure of those particular changes, but not that those are the only changes occurring.
You misunderstand. Brains aren't electronic devices; they are biochemical. A brain can do a helluva lot in 100 milliseconds and nothing in 2. Biochemical systems proceed through mass transfer and conformational change. Mass transfer is slow. We may not know all the things brain cells do, but we know the basic processes through which they do them. And these basic processes fix the rate at which cells do what they do, while the number of cells fixes the number of doers. So the basic rate at which an entire brain does anything is fixed by the physics of the underlying processes. What these processes do as far as thinking is concerned is unknown.

My point was that intelligence might have a lot to do with what sort of processes brain cells are doing and less with the number of processes a brain can do. For example, behaviorally modern humans arose about 40,000 years ago. For 60,000 years before that, anatomically modern humans existed alongside Neanderthals. Both had brains as big as ours today; Neanderthals' were actually bigger.

Yet up until about 40,000 years ago the artifactual remains of these people were not very intricate or complex. They were more complex than what chimps produce, but much less so than what modern stone-age people produce. Around 40,000 years ago human artifacts (and presumably their culture) started to become increasingly sophisticated, while for the 60,000 years before that it was as if the culture had been standing still. It would appear that human intelligence in the form we know it today arose around 40,000 BP, in brains no larger (or even smaller) than those that had not shown this same sort of intelligence for 60,000 years.

It would appear that the rise of human intelligence required more than just a big brain.
Mike,

I'd suggest that the difference in question is that these anatomically modern humans began speaking functionally modern language at about that point (i.e., approx. 50,000 years ago). Furthermore, this was driven by a type of evolution involving not just genes but also memes.

There are many definitions and conceptions of memes, but I see them as self-replicating neuronal patterns, just as genes are self-replicating macromolecules (of one sort). And just as game theory can basically explain how genes interact with each other and the environment to survive (or not), it can do the same for memes. Memes use the human mind as their survival vessel and replication vehicle, analogously to how genes use cells.

At some point our hominid line apparently got quite good at imitation, in terms of storing, conveying, and acquiring these neuronal patterns from each other. Since this probably created a survival edge for the individuals doing it, there was a cooperative gene-meme evolution leading to the explosion in the size of the human brain. This probably began when simple tool-making took off 2.4 million years ago with a species of late gracile Australopithecines. At that point bipedalism and the opposable thumb had come about, but brains were still barely larger than a modern chimp's. That began to change starting then.

It would seem a memetic "critical mass" of sorts was achieved 50 or so millennia ago, and modern language was born (or something very close to it). Gene-meme co-evolution had already led to the creation of large brains (seven times the size expected of a mammal our size; more specifically, three times the size expected of a primate our size). I would assume that the genetic side of this equation had been going for quantity and accidentally created a "preadaptation" for a qualitatively different use (preadaptation is the term used in evolutionary science for a characteristic that arises for one use but turns out to be suitable for another; e.g., it is very likely that feathers first served to help a creature regulate body temperature, as fur does, but ended up also being useful for flight).

Memes were running up against limitations (in the genetic-evolutionary realm) on further increases in brain size (problems with metabolic energy consumption, death in childbirth due to oversized infant heads, etc.) and managed to exploit some inherent potential (preadaptation) in the Homo sapiens brain that allowed for language. Add that the vocal mechanisms necessary were already in place, either by accident and/or through usefulness in some more primitive form of language, and bingo! . . . quantum leap.

I can only assume memes did not have such a preadaptation to exploit in the (likely) even larger Neanderthal brains and therefore Neanderthals were either wiped out by our line and/or out-niched [please don't get me started on Wolpoff and his pathetic theories of mixing].

Anyway, just a thought.
I've taken a lay interest in human evolution ever since college when, in 1979, my professor, Vince Sarich of Univ. California at Berkeley, postulated the very idea advanced by Sean Love -- that language explains the quantum leap. Of course, the language explosion itself must be explained as well -- be it pre-adaptation or not.







Post#195 at 09-07-2003 01:34 AM by Zarathustra [at Where the Northwest meets the Southwest joined Mar 2003 #posts 9,198]
---
09-07-2003, 01:34 AM #195
Join Date
Mar 2003
Location
Where the Northwest meets the Southwest
Posts
9,198

Richt wrote:
I've taken a lay interest in human evolution ever since college when, in 1979, my professor, Vince Sarich of Univ. California at Berkeley, postulated the very idea advanced by Sean Love -- that language explains the quantum leap. Of course, the language explosion itself must be explained as well -- be it pre-adaptation or not.
I suspected many years ago that the Great Leap Forward of ~50,000 BC was caused by language (before I knew some had given it that name), but it was Jared Diamond's The Third Chimpanzee that pretty much clinched it for me. It makes sense to me from my gleanings of Ken Wilber, Don Beck, and the memetics crowd.

As for "the language explosion itself" I'm not sure what you mean. If you mean the physical mechanics of it, my understanding is that scientists now have a pretty good idea of the parts of the brain being used and even a little bit about how. But as with everything else about brain function, the nitty-gritty remains more or less a complete mystery. As Mike Alexander alluded to (and surely knows much better than I), the scientists know about the basic mechanisms, but not how it all "fits" together to create memory and thought, let alone language. If that's what you mean, I hear ya.

Beyond that (and that's a distant "beyond," admittedly) I can see how it worked itself out. Noam Chomsky's and Steven Pinker's innate-language-ability thesis I think is partially right and partially wrong (though mostly right). If you go by Jared Diamond, pidgin-type language probably preceded conventional language, and my guess is that primitive pidgin developed slowly but surely during our long trek from early Homo erectus to anatomically modern Homo sapiens and was probably shared with Neanderthals.

As memes competed to get passed on, and as memes often led to better biological success ("Start um fire wid dis" -- "No eat um dis mushroom, you croak" -- or whatever), certain genes became successful under the circumstances, and I'll bet they were often ones that helped humans retain and pass on memes. Then, in the biological "arms race" that often occurs in genetic evolution, the hardware got bigger and bigger. The language explosion was memes getting frustrated (in a statistical, game-theory sort of way) with the full exploitation of brain size (genetic evolution simply couldn't make brains any bigger without endangering the very existence of said genes) and therefore creating better software to fit the hardware already there.

This "explosion" seems to have leaped around the whole world, and all people around the world were "preadapted" for it. Since modern Homo sapiens is now known to have originated in southern Africa about 150,000 years ago and to have started spreading out of Africa about 90,000 years ago, and since the Great Leap Forward struck about 50,000 years ago (give or take a whole bunch), it seems that all were ready to accept full-blown language.

Full-blown language then allowed memes to be formulated and explained (passed on) in far more complex forms and there you have your quantum leap in human affairs.
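The "statistical, game-theory sort of way" described above can be sketched as a toy replicator-dynamics model. Everything here is an illustrative guess, not data: a hypothetical "imitator" gene variant gains fitness in proportion to how common a useful meme is, and the meme in turn spreads faster among imitators, so the two sweep toward fixation together.

```python
# Toy gene-meme co-evolution sketch.  All rates are made-up parameters
# chosen only to show the feedback loop, not measured quantities.

def step(g, m, dt=0.1):
    """g = frequency of the 'imitator' gene, m = frequency of a useful meme."""
    gene_fitness_edge = 0.5 * m          # common memes make imitators fitter
    meme_spread_rate = 0.2 + 2.0 * g     # imitators copy the meme faster
    dg = g * (1 - g) * gene_fitness_edge  # standard replicator-equation form
    dm = m * (1 - m) * meme_spread_rate
    return g + dg * dt, m + dm * dt

g, m = 0.01, 0.01  # both start rare
for _ in range(2000):
    g, m = step(g, m)

print(f"imitator gene: {g:.2f}, meme: {m:.2f}")  # both climb toward 1
```

The point is only the shape of the feedback: meme success raises the gene's fitness, the gene raises the meme's spread rate, and the coupled pair runs away together -- the "arms race" the paragraph describes.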

Anyone else have thoughts on this??
Americans have had enough of glitz and roar . . Foreboding has deepened, and spiritual currents have darkened . . .
THE FOURTH TURNING IS AT HAND.
See T4T, p. 253.







Post#197 at 09-17-2003 01:36 PM by Justin '77 [at Meh. joined Sep 2001 #posts 12,182]
---
09-17-2003, 01:36 PM #197
Join Date
Sep 2001
Location
Meh.
Posts
12,182

From The Independent

full text

Brain beats all computers

Forecasters who predicted that computers are poised to become more powerful than the human brain have got it hopelessly wrong.

For the first time, researchers have calculated that the power of a single brain in terms of memory capacity and discovered that it is greater than all the computers ever made.

While even the biggest computer has a capacity of around 10,000,000,000,000 bytes (10 to the power of 13), the human brain has a colossal 10 followed by 8,432 noughts, say the scientists who made the calculations in the journal Brain and Mind.
...
The number of neurons, or nerve cells, in the brain is known - around 100 billion - and many analysts have used this for the basis of claims that computers will soon be superior to the brain.

But the researchers looked beyond that and used a series of algorithms to work out the total capacity, including the huge number of different neural connections.
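The gap the article describes comes from counting wiring patterns rather than components. Here is a back-of-envelope sketch of that distinction -- the per-neuron numbers are rough textbook figures, and this is emphatically not the algorithm the Brain and Mind paper used, just an illustration of why combinatorics dwarf byte counts:

```python
import math

NEURONS = int(1e11)           # ~100 billion neurons (figure cited above)
SYNAPSES_PER_NEURON = 10_000  # rough per-neuron synapse count (assumption)

# Naive bookkeeping: one byte per synapse.
total_synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"synapses (naive byte count): {total_synapses:.1e}")  # ~1e15

# Counting wiring patterns instead: if each synapse could in principle
# target any of the neurons, the number of distinct wirings is roughly
# NEURONS ** total_synapses.  Far too big to compute directly, so we
# report its order of magnitude via logarithms.
log10_patterns = total_synapses * math.log10(NEURONS)
print(f"distinct wirings: ~10^({log10_patterns:.2e})")
```

Even this crude count gives a 1 followed by quadrillions of noughts -- the lesson, as the article notes, is that connection structure, not raw storage, is where the brain's capacity lives.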







Post#198 at 09-17-2003 04:48 PM by Mike [at joined Jun 2003 #posts 221]
---
09-17-2003, 04:48 PM #198
Join Date
Jun 2003
Posts
221

"Ironically, the discovery could be used to change the way that computers are designed. Instead of adding more bytes, they could mimic the human brain, with more emphasis on connections."

Back to analog. Something else that interests me is a program I saw on TLC about the biology of sex and its effects on behavior. Women have a hormone called estrogen, which creates more connections in the brain than men have, but this does not equal more intelligence. It gives them better social skills, while men excel at mechanical skills with fewer connections in the brain. Women with underexposure to estrogen or exposure to testosterone, however, show similar signs of male behavior.







Post#199 at 09-17-2003 05:31 PM by Croakmore [at The hazardous reefs of Silentium joined Nov 2001 #posts 2,426]
---
09-17-2003, 05:31 PM #199
Join Date
Nov 2001
Location
The hazardous reefs of Silentium
Posts
2,426

Again, on this matter of symbolic cognition. One authority on the subject, Ian Tattersall, Curator of Anthropology at the American Museum of Natural History, agrees with many of you (us). In his "The Monkey in the Mirror" (2002, p. 160) he says:

"Thus, if we are seeking a single cultural releasing factor that opened the way to symbolic cognition, the invention of symbolic language is the most obvious candidate."

But that did not occur in any substantial way ~50,000 years ago, certainly not enough to "release culture." Symbolic language of the kind that brings on cognition must have occurred, as Julian Jaynes argues in "The Origin of Consciousness in the Breakdown of the Bicameral Mind" (1976), more like 3,000 years ago. Prior to that, according to Jaynes, humans were precognitive, bicameral sheep in a pasture of emotional insecurity. Cognition came when we could differentiate ourselves symbolically as individuals and make some of our own strategic decisions. That "cultural release" was more akin to left-brain explanations of nature (science) and the Greek Miracle, I think -- maybe like learning the truth about the tooth fairy or Santa Claus.

Tattersall (p. 157) also stresses this emotional origin of symbolic language, which tells us something about that so-called bicameral state:

"Chimpanzees have been particularly well studied in this regard, both in the field and in the laboratory. Wild-living chimps have quite a repertoire of calls (at least 30 have been identified, and some are used in combination), but all of them seem to be closely related to emotional states; they involve no elements of 'explanation,' but instead express simply how the individual is feeling in the moment. Indeed, as Jane Goodall has put it, chimpanzee vocalizations are so closely tied to immediate emotional states that 'the production of sound in the absence of the appropriate emotional state seems to be an almost impossible task.'"

Human cognition is so new to us that we have not yet escaped the clutches of bicamerality.

--Croaker







Post#200 at 09-18-2003 09:13 AM by Croakmore [at The hazardous reefs of Silentium joined Nov 2001 #posts 2,426]
---
09-18-2003, 09:13 AM #200
Join Date
Nov 2001
Location
The hazardous reefs of Silentium
Posts
2,426

Bicamerality

Pressing on with this notion of bicamerality and its emotional roots: I take special notice of the droves of Islamic people in the Middle East, out in their dusty streets and ranting around as if there were a master voice of command inside their heads, saying unto them: "Allah Akbar! Allah Akbar!" or something. An emotional state has come over them. Such behavior is categorically bicameral, and should be differentiated from real consciousness, in my humble froggie opinion.

--Croaker
-----------------------------------------