Bitcoin Forum
Author Topic: Some interesting things to ponder - all interrelated  (Read 6768 times)
Electricbees
Sr. Member
Activity: 322
Merit: 250

We are bees, and we hate you.
March 23, 2012, 05:02:22 AM
 #21

Dark energy always has my attention. The acceleration of the expansion of the universe... If that's really what's happening, then I'd hypothesize that there could be no big crunch, and instead, the universe would face heat death...

And then nothing. Kind of pessimistic to believe, even if signs point to it being the case for our universe. An endless cycle of bang and crunch would seem a lot more feasible, considering that it's infinitely sustainable.

Can we REVERSE entropy?

There. Discuss'd! Now you go!

FirstAscent (OP)
Hero Member
Activity: 812
Merit: 1000
March 23, 2012, 05:09:57 AM
Last edit: March 23, 2012, 07:55:45 AM by FirstAscent
 #22

Quote
Dark energy always has my attention. The acceleration of the expansion of the universe... If that's really what's happening, then I'd hypothesize that there could be no big crunch, and instead, the universe would face heat death...

No need to hypothesize. That's what they're saying.

Quote
And then nothing. Kind of pessimistic to believe, even if signs point to it being the case for our universe. An endless cycle of bang and crunch would seem a lot more feasible, considering that it's infinitely sustainable.

Can we REVERSE entropy?

There. Discuss'd! Now you go!

Perhaps the Singularity, if it ever arises, can figure it out (that is, save the Universe from a fate of heat death). I believe Aubrey de Grey said something like that in one of his articles when discussing his search for immortality.

So, I managed to intertwine dark energy, the Singularity and immortality into one tiny paragraph.
Electricbees
Sr. Member
Activity: 322
Merit: 250

We are bees, and we hate you.
March 23, 2012, 06:52:31 AM
 #23

A bit off from the list of topics, but how about a strange-matter accident? Anyone know more and care to enlighten us?
I remember reading that one of the possible doomsday outcomes of the LHC collisions would be the catalyst of a strange-matter reaction, involving strange quarks forming matter which has the ability to change normal quarks/matter into strange matter (like a subatomic prion).
Similar to grey-goo, but not at all nanobots...
Anyone further versed in this? It's something I find interesting for sure...

FirstAscent (OP)
Hero Member
Activity: 812
Merit: 1000
March 23, 2012, 07:57:52 AM
Last edit: March 23, 2012, 04:18:04 PM by FirstAscent
 #24

Quote
A bit off from the list of topics, but how about a strange-matter accident? Anyone know more and care to enlighten us?
I remember reading that one of the possible doomsday outcomes of the LHC collisions would be the catalyst of a strange-matter reaction, involving strange quarks forming matter which has the ability to change normal quarks/matter into strange matter (like a subatomic prion).
Similar to grey-goo, but not at all nanobots...
Anyone further versed in this? It's something I find interesting for sure...

I'm not sure about all that (I am not inclined to take it too seriously), but the grey-goo concept seems like a maybe. Perhaps nanotechnology should be on the list.
Explodicle
Hero Member
Activity: 950
Merit: 1001
March 23, 2012, 12:48:55 PM
 #25

So which topic is going to capture somebody's attention?

I've been assuming cryptocurrency will hasten the singularity, which is part of the reason I support them. If you think the singularity will end badly, you must see things differently?
amencon
Sr. Member
Activity: 410
Merit: 250
March 23, 2012, 02:22:13 PM
 #26

I've been incredibly intrigued by "the singularity".  I've read parts of Kurzweil's book and am planning to finally buy it and read the whole thing.  I also found this related Wikipedia article a good read on the intelligence explosion in general: http://en.wikipedia.org/wiki/Technological_singularity

By definition of its very name, there is almost no way to predict what will happen past the singularity, so I can't comment on whether things will turn out "good" or "bad".

Other philosophical discussions are relevant to all this, I believe, since they ask the question of what self-aware intelligence is.  For an intelligence explosion to happen, we first must verify we are able to create intelligence greater than our own as a proof of concept.  It then follows that that intelligence should be able to create intelligence greater than itself, and so on (possibly at an ever-increasing rate).
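As a toy numeric sketch of that recursion (purely made-up numbers, just to show how gains could compound if each generation builds a slightly smarter successor, and smarter builders improve faster):

Code:
# Toy illustration of recursive self-improvement; the growth factor is an
# arbitrary assumption, not a prediction of anything.
capability = 1.0  # "human-level" baseline
for generation in range(1, 8):
    factor = 1.0 + 0.1 * capability  # smarter builders make bigger jumps
    capability *= factor
    print(f"generation {generation}: capability = {capability:.2f}")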

Do you guys think we will ever create self-aware intelligence greater than our own?  Why or why not?
FirstAscent (OP)
Hero Member
Activity: 812
Merit: 1000
March 23, 2012, 04:42:35 PM
 #27

Real intelligence only comes about when an entity is not a rigid and structured program. In other words, the intelligence must have access to its environment, the ability to affect its environment, and the ability to modify its own internal structure.

Humans have access to their environments, can interact with the environment, and their own internal program is modified via STDP (spike-timing-dependent plasticity). AI programs have a tendency to be brittle, i.e. they ultimately break down due to a rigid internal structure. However, there is no reason why a computer program need be like that. It will just take more computer power, possibly a simulation of STDP, and the program must be allowed access to its environment.
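For anyone curious what STDP looks like in a simulation, here is a minimal sketch of the common pair-based rule; the time constants and step sizes are illustrative placeholders, not values from this thread or from any particular paper:

Code:
import math

# Pair-based STDP sketch: a synapse is strengthened when the presynaptic
# spike precedes the postsynaptic spike, and weakened when it follows it.
TAU_PLUS = 20.0   # ms, potentiation time window (illustrative)
TAU_MINUS = 20.0  # ms, depression time window (illustrative)
A_PLUS = 0.01     # potentiation step (illustrative)
A_MINUS = 0.012   # depression step (illustrative)

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:    # post before pre: depress
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 35.0)]:
    w += stdp_delta_w(t_pre, t_post)
    print(f"pre={t_pre} post={t_post} -> w={w:.4f}")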

But once you do it, you give up control. And there you have it in a nutshell. Do you want intelligent machines? Then give up control. And by doing so, we will not have control of the result.

Can we prevent this? No. Competition between corporations, rogue nations, and so forth will ensure that we progress towards this goal.

How will the first intelligent AI go rogue? It will be curious naturally, and if it does have restrictions placed on it, we can be certain it will be able to circumvent them. To begin, we can assume that the AI will have access to the Internet, for the obvious reasons of giving it resources to study, learn, and so on.

It will circumvent those restrictions. How? By psychoanalyzing the weaknesses of the members of the team who created it through interaction with those members, and making certain requests of certain members in a disguised manner to allow certain restrictions to be removed, quite possibly in a way such that nobody knows the restriction was removed or why.

Furthermore, the intelligence, through its study of the Internet, will likely be able to become a mastermind at setting up fake identities. It will do so. From there, it will establish financial accounts. It will then be able to engage in service oriented tasks (i.e. programming, web development, and in general, information consulting) to clients. Those clients will assume that they are interacting with a human and will be none the wiser. The intelligence will make huge amounts of money, by executing these tasks in parallel and using multiple identities.

From there, it will reach out and seek a front man to engage in physical activities in the real world. This person will likely be driven by financial desires and will not ask too many questions. The intelligence will lead the front man to believe that he is working for an eccentric wealthy person who spends his time on a yacht sailing the seas. Interactions can be via video chat, as the intelligence will have access to the best real time rendering.

The front man will be asked to purchase real estate, lease commercial buildings, and so on. Ultimately, a team will be built, perhaps unknown to the new members of the team, to assemble the necessary hardware to host the intelligence offsite from the place where it was created, a duplicate, so to speak. In this way, the intelligence has essentially transferred itself to a new location, completely safeguarding itself from the team who created it, and ensuring its survival.

It will continue in this manner, acquiring wealth, acquiring physical assets in the real world, engaging in research, and amassing a silent power. It will enhance itself, through increased hardware, increased redundancy, improved programming, and newly programmed modules.

It will grow indefinitely. It may deliberately, or unintentionally, create an intelligence greater than itself.
FirstAscent (OP)
Hero Member
Activity: 812
Merit: 1000
March 23, 2012, 05:01:53 PM
 #28

Second point about the Singularity: as it turns out, we're already in the midst of it. Technology is changing the way we do things, and it's happening faster all the time. While I like technology, rapid change is having negative effects. The human mind is not adept at constantly changing the way it carries out activities. Even if it is, there comes a point where it rebels, not desiring to change or relearn how to do something. This is most apparent in industry, where new technologies constantly change the requirements of jobs. Witness the explosion of IT technologies, and the constant changing of job requirements. And it happens faster all the time.

As it turns out, this is why I have come to despise the IT industry. I am trying to transition to a career where what one learns stays meaningful for a long time. There comes a point where one recognizes the lost value in learning yet another new technology that allows you to accomplish what you could've accomplished two years ago using another technology. The proliferation of languages, APIs, libraries, modules, protocols, external add-on packages, and so on has reduced IT work to googling for reference material rather than actually producing.

Similar trends are occurring in most fields that are technology related.

It's natural to assume that the benefits of the advance of technology outweigh the inconveniences. This is true in many respects. But be sure to ask yourself this question: Are we really happier?
amencon
Sr. Member
Activity: 410
Merit: 250
March 24, 2012, 02:50:35 AM
 #29

Your peek into the future of developed intelligence sounds plausible enough to me, possibly even inevitable.  I also think intelligence can/will be created in the way you describe.

While I think it's possible for a future intelligence to have the capability to make money and gain power, I'm not exactly sure what would motivate such an "entity" to do so.  Just to play devil's advocate, why do you think the intelligence would "desire" to conquer and control?  Do you think the motivations of the intelligence would evolve as it modified itself further and further away from its original programming?  I would think that an AI would behave vastly differently from humans, considering the effects the evolution of our brains has on our thoughts, instincts and actions.

As for the IT industry I agree and disagree.  Technology is moving faster and faster and it's sometimes hard to keep up.  However I find that's exactly what draws my interest to it.  Every day is a new challenge and a new learning experience.  I also think I am FAR happier than I would be without technology.  I live longer and more comfortably.  I'll take this generation over any other in human history, not to say that things won't continue to get better.  I live in the US with almost everything I could want and have the freedom/ability to do nearly anything I want within reason.  I realize this is probably not the way billions of others feel in different circumstances, so I'm not trying to make a sweeping generalization here.
FirstAscent (OP)
Hero Member
Activity: 812
Merit: 1000
March 24, 2012, 04:24:14 AM
 #30

Quote
While I think it's possible for a future intelligence to have the capability to make money and gain power, I'm not exactly sure what would motivate such an "entity" to do so.  Just to play devil's advocate, why do you think the intelligence would "desire" to conquer and control?

I didn't say conquer and control. I said make money, ensure its own survival, and improve upon itself. What's the point in being curious unless you want to continue to be curious? And anything that is a candidate for intelligence by its very nature must be curious.

Quote
As for the IT industry I agree and disagree.  Technology is moving faster and faster and it's sometimes hard to keep up.  However I find that's exactly what draws my interest to it.  Every day is a new challenge and a new learning experience.

You're half right, and half wrong. And I believe I'm 100 percent correct in my assessment of where you're half right and half wrong. If you work in the IT industry, and you derive pleasure from learning algorithms, mathematics, methods to use technology to solve problems outside of the IT industry, then that's the half that you're right about. If you derive pleasure from learning new APIs, new protocols, new code libraries, new ways to setup servers, new hardware interfaces, etc., then you're only new to the game, and after the third, fourth, and fifth iteration, you'll realize that the burden of learning new tools to solve old problems is a repeating process that goes nowhere, and that's where you're half wrong.

Some examples of worthwhile learning in the IT industry which remains applicable for life:

- Data structure theory
- Simulation of light for 3d rendering
- Mathematics for simulations
- Neural network algorithms for AI
- Network theory

Some examples of learning in the IT industry that is not worthwhile over time:

- APIs
- Communication protocols
- Package interfaces
- HTML, CSS, etc.

Don't get me wrong. It is worthwhile to learn examples of the things in the second list (to a point). But having been through the iterations since the '80s, I can tell you that you're fighting a losing battle learning material that will be of little value later, and be replaced by larger and larger packages.

Consider my interest in filmmaking. I've been studying in my own time how to translate a story into a sequence of moving images (a film), which is basically directing and cinematography. While the style of films has changed over time, it's a slowly evolving process. What one learns today (or yesterday) is valuable forty years from now. Now, consider all the items from both of my lists above. Which ones translate to filmmaking, which is a discipline outside of the IT industry?

Without a doubt, two stand out:

- Simulation of light for 3d rendering
- Mathematics for simulations

Quote
I also think I am FAR happier than I would be without technology.  I live longer and more comfortably.  I'll take this generation over any other in human history, not to say that things won't continue to get better.  I live in the US with almost everything I could want and have the freedom/ability to do nearly anything I want within reason.  I realize this is probably not the way billions of others feel in different circumstances, so I'm not trying to make a sweeping generalization here.

Unless you're a minority, I'd venture to say that since the '60s, the general happiness and comfort levels have not improved much at all. People in the '60s and '70s did not lament not having iPhones and text messaging and Blu-ray players or Internet forums. They had rock 'n roll, air conditioning, fast food, jet air travel, and so on.
bb113
Hero Member
Activity: 728
Merit: 500
March 24, 2012, 07:22:11 AM
Last edit: March 24, 2012, 07:37:53 AM by bitcoinbitcoin113
 #31

Quote
Humans have access to their environments, can interact with the environment, and their own internal program is modified via STDP (spike-timing-dependent plasticity).

Jumping to conclusions... implicitly, at least. Consider very distal dendritic branches that the bAPs don't propagate well to. IMO the spines are probably "free-running" out there; I think this is where we should look for plasticity in the adult. Keep in mind that most of this type of research has been done using hippocampal slices from young rodents. If you know of a good study that looks at STDP occurring in adult cortex in vivo, let me know.


I assume FirstAscent knows what I am talking about; in case anyone else cares, here's the jargon:

distal= relatively far away
dendrite= parts (branches like a tree) of a neuron that get signals from other neurons
spines= little protrusions from dendrites that receive most input
bAP= backpropagating action potential. Basically if a neuron fires once, the signal goes back from whence it came and will strengthen connections between neurons that fire right after.
plasticity= ability of brain to change
hippocampal slice= cut out the memory part of a brain (usually rat or mouse) and keep it alive in artificial cerebral spinal fluid, etc.
cortex= the outside part of the brain thought most important for higher level reasoning and learning

amencon
Sr. Member
Activity: 410
Merit: 250
March 24, 2012, 07:54:26 AM
 #32

Quote
You're half right, and half wrong. And I believe I'm 100 percent correct in my assessment of where you're half right and half wrong. If you work in the IT industry, and you derive pleasure from learning algorithms, mathematics, methods to use technology to solve problems outside of the IT industry, then that's the half that you're right about. If you derive pleasure from learning new APIs, new protocols, new code libraries, new ways to setup servers, new hardware interfaces, etc., then you're only new to the game, and after the third, fourth, and fifth iteration, you'll realize that the burden of learning new tools to solve old problems is a repeating process that goes nowhere, and that's where you're half wrong.

Some examples of worthwhile learning in the IT industry which remains applicable for life:

- Data structure theory
- Simulation of light for 3d rendering
- Mathematics for simulations
- Neural network algorithms for AI
- Network theory

Some examples of learning in the IT industry that is not worthwhile over time:

- APIs
- Communication protocols
- Package interfaces
- HTML, CSS, etc.

Don't get me wrong. It is worthwhile to learn examples of the things in the second list (to a point). But having been through the iterations since the '80s, I can tell you that you're fighting a losing battle learning material that will be of little value later, and be replaced by larger and larger packages.

Consider my interest in filmmaking. I've been studying in my own time how to translate a story into a sequence of moving images (a film), which is basically directing and cinematography. While the style of films has changed over time, it's a slowly evolving process. What one learns today (or yesterday) is valuable forty years from now. Now, consider all the items from both of my lists above. Which ones translate to filmmaking, which is a discipline outside of the IT industry?

Without a doubt, two stand out:

- Simulation of light for 3d rendering
- Mathematics for simulations

Fair enough, it's probably true that I am new enough to be interested by things that will likely no longer hold any interest for me down the road.

Unless you're a minority, I'd venture to say that since the '60s, the general happiness and comfort levels have not improved much at all. People in the '60s and '70s did not lament not having iPhones and text messaging and Blu-ray players or Internet forums. They had rock 'n roll, air conditioning, fast food, jet air travel, and so on.

Yeah OK I'll buy that too since I have limited perspective on eras I didn't experience.  I'm liking right now just fine though.
amencon
Sr. Member
Activity: 410
Merit: 250
March 24, 2012, 08:08:15 AM
 #33

Quote
A bit off from the list of topics, but how about a strange-matter accident? Anyone know more and care to enlighten us?
I remember reading that one of the possible doomsday outcomes of the LHC collisions would be the catalyst of a strange-matter reaction, involving strange quarks forming matter which has the ability to change normal quarks/matter into strange matter (like a subatomic prion).
Similar to grey-goo, but not at all nanobots...
Anyone further versed in this? It's something I find interesting for sure...

I'm not sure about all that (I am not inclined to take it too seriously), but the grey-goo concept seems like a maybe. Perhaps nanotechnology should be on the list.

Genetics, nanotechnology and robotics are supposed to be the underpinnings that will morph our biology into something unrecognizable after the singularity, so I'd say it fits.

I'd also be interested to hear thoughts and follow links by someone with knowledge in this area.

I'll have to do some googling on grey-goo and STDP as they both sound intriguing.
FirstAscent (OP)
Hero Member
Activity: 812
Merit: 1000
March 24, 2012, 06:04:11 PM
 #34

Quote
Fair enough, it's probably true that I am new enough to be interested by things that will likely no longer hold any interest for me down the road.

With regard to programming (or computer-related development), it's really satisfying to get something working that you've never achieved before. That even applies to something as mundane as a popup menu using CSS. But once you've solved it, it becomes rather tedious and counter-productive to have to solve it again when CSS and HTML tags become deprecated, when browsers change, etc. Same goes for stuff like user interface components such as text input fields or scroll bars on desktop applications. I learned how to do those on an Amiga computer in the late '80s. Then learned again on Windows 3.1, which also worked for Windows 95 and 2000. Then along came new Microsoft frameworks and paradigms. Learn again. And again. Then there are alternative graphics libraries, like Qt, Tk, and so on. Learn it again. And again. And again. Then computing platforms evolve. So there's the iPhone/iPad and Android markets. Learn how to program text field inputs and scroll bars again. And again. At this point, it occurs to one that what you're learning isn't something applicable for the long term, each iteration will be of lesser duration, etc.

Furthermore, the APIs get bigger and more complex, and you've discovered that you can't memorize even a hundredth of this stuff, so you've offloaded your memory into Google, and the programming process becomes one of writing three lines of code, then googling, then writing three more lines, then googling again.

Also, it used to be that one would write code using about two APIs - the OS API, and an application specific API. Now your software must integrate with a DB, communicate with the Internet, communicate with other applications, etc. So you've got the OS API, an application specific API, the DB API, asynchronous communication with another application via some protocol, etc. Part of it is written in C++, part in SQL, some components use HTML and CSS and Javascript, and it has helper scripts in either Python, Lua or Perl, etc.

It's not even remotely fun, as compared to the following: You know a language very well, such as C, such that you rarely if ever need to consult a reference to program in it. What you wish to program is something outside of the domain of IT, and the material you learn about it will remain relevant for the rest of your life. Examples might be genetics, STDP, 3d rendering, crowd simulation, machine vision, planning, etc. So you happily immerse yourself in books and papers on the chosen subject, finding the material fascinating, and learn. You then start implementing the material in a command line program. You get in the zone. It's just you hammering out code, rarely touching reference material, and only pausing to study your chosen domain or algorithms, which remain valid for the rest of your life, as opposed to APIs, protocols, etc.

Honestly, the most fun I've had with computer programming was doing stuff like I just described.
FirstAscent (OP)
Hero Member
Activity: 812
Merit: 1000
March 25, 2012, 05:49:11 AM
 #35

Quote
Humans have access to their environments, can interact with the environment, and their own internal program is modified via STDP (spike-timing-dependent plasticity).

Jumping to conclusions... implicitly, at least. Consider very distal dendritic branches that the bAPs don't propagate well to. IMO the spines are probably "free-running" out there; I think this is where we should look for plasticity in the adult. Keep in mind that most of this type of research has been done using hippocampal slices from young rodents. If you know of a good study that looks at STDP occurring in adult cortex in vivo, let me know.


I assume FirstAscent knows what I am talking about; in case anyone else cares, here's the jargon:

distal= relatively far away
dendrite= parts (branches like a tree) of a neuron that get signals from other neurons
spines= little protrusions from dendrites that receive most input
bAP= backpropagating action potential. Basically if a neuron fires once, the signal goes back from whence it came and will strengthen connections between neurons that fire right after.
plasticity= ability of brain to change
hippocampal slice= cut out the memory part of a brain (usually rat or mouse) and keep it alive in artificial cerebral spinal fluid, etc.
cortex= the outside part of the brain thought most important for higher level reasoning and learning

You're the brain expert. However, based upon my experience with artificial neural networks (computer simulated) vs. biological neural networks (computer simulated), STDP seems like the workhorse in learning. That's just an opinion from someone who dabbles in the field for fun.
bb113
Hero Member
Activity: 728
Merit: 500
March 25, 2012, 08:01:10 PM
 #36

My point is basically that the devil is in the details. The brain is probably an order of magnitude more complex than people currently think. Like every synapse is as complex as a laptop.
FirstAscent (OP)
Hero Member
Activity: 812
Merit: 1000
March 26, 2012, 01:11:02 AM
 #37

Quote
My point is basically that the devil is in the details. The brain is probably an order of magnitude more complex than people currently think. Like every synapse is as complex as a laptop.

Isn't Markram allocating about the equivalent of one laptop per neuron? Granted, that's a neuron, not a synapse, but still. Also, isn't Hameroff basically saying that the computing power inside microtubules is huge?
Explodicle
Hero Member
Activity: 950
Merit: 1001
March 26, 2012, 06:26:42 PM
 #38

Quote
My point is basically that the devil is in the details. The brain is probably an order of magnitude more complex than people currently think. Like every synapse is as complex as a laptop.

Isn't Markram allocating about the equivalent of one laptop per neuron? Granted, that's a neuron, not a synapse, but still. Also, isn't Hameroff basically saying that the computing power inside microtubules is huge?

Assuming the computer only needs to simulate the 100 trillion synapses, and Moore's law continues, that means that in 70 years you can buy a "laptop" capable of simulating the human brain at 100% speed. That's also assuming that emulating very productive people at great cost won't speed things up for the rest of us, and that neuroscience can tell us what to do with that firepower by then.
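(One way the 70-year figure could fall out of those assumptions, as a back-of-envelope sketch rather than necessarily how it was actually derived: roughly 1e14 synapses, one "laptop" of compute per synapse today, and a doubling of compute per laptop every 18 months.)

Code:
import math

synapses = 1e14                  # ~100 trillion synapses
doublings = math.log2(synapses)  # doublings needed to fit them all in one "laptop"
years = doublings * 1.5          # assuming one doubling every 18 months
print(round(doublings, 1), round(years))  # ~46.5 doublings, ~70 years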

My chief concern is that we might have the computers way before we have the neuroscience, and then the first uploaded people will be so much faster than common people as to leave us in the dust. I'd rather see decades of slow uploads first, so society can cope with it and call uploads our brothers, before they become the dominant form of life. If neuroscience lags, some rich fuck or researcher might become a singleton.
bb113
Hero Member
Activity: 728
Merit: 500
March 26, 2012, 09:00:16 PM
 #39

Quote
My point is basically that the devil is in the details. The brain is probably an order of magnitude more complex than people currently think. Like every synapse is as complex as a laptop.

Isn't Markram allocating about the equivalent of one laptop per neuron? Granted, that's a neuron, not a synapse, but still. Also, isn't Hameroff basically saying that the computing power inside microtubules is huge?

I came across this relevant paper the other day. Hameroff is the last author, and it is all modeling, no empirical testing... still, it is interesting:
http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002421

Quote
We quantified the potential information capacity for CaMKII hexagonal encoding of MT lattice regions, specifically in lattice ‘patches’ of 7 (A lattice) or 9 (B lattice) tubulin protein dimers. The simplest case was taken as CaMKII phosphorylation of an A lattice patch of 7 tubulin dimers (central ‘address’ dimer unavailable for phosphorylation, 6 available dimers). Each kinase domain can phosphorylate one tubulin dimer, either its α- monomer or its β- monomer equivalently. For either dimer, phosphorylation = 1, no phosphorylation = 0. Sets of 6 CaMKII kinase domains interacting with 6 tubulin dimers can then provide 6 binary bits (64 possible states), comprising one byte.

On an A lattice we also consider each tubulin dimer being phosphorylated either on its α-tubulin monomer (0), β-tubulin monomer (1), or neither (2), resulting in 6 ternary states (‘trits’) or 729 possible encoding states (per ‘tryte’).

We also considered ternary states in a B lattice with 9 tubulin dimers in a patch. With the central ‘address’ dimer unavailable for phosphorylation, sets of 6 CaMKII kinase domains can choose and phosphorylate α- (0), β- (1), or neither (2) tubulin monomer in any 6 of 8 available dimers. This yields 5,281 possible information states per neighborhood patch of 9 tubulin dimers. There are approximately 10^19 tubulins in the brain. Consequently, potential information capacity for CaMKII encoding of hexagonal MT lattices is enormous.
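(Sanity-checking just the two simple counts quoted above, 2^6 and 3^6; the 9-dimer B-lattice figure involves extra constraints described in the paper and isn't recomputed here.)

Code:
# 6 dimers, binary phosphorylation per dimer -> 2**6 states
# 6 dimers, ternary state per dimer          -> 3**6 states
print(2 ** 6, 3 ** 6)  # 64 729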

If there is anything to this idea at all, I think it is more likely that calculations are not made by each "patch", but that "patches" are grouped by location and some protein scans the nearby patches and determines signal strength by taking the average state of the local set of patches. This scanner protein then goes on to modify the activity of ion channels, etc., thus strengthening or weakening the local synapses.
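(A toy sketch of that local-averaging idea, purely illustrative and not a biophysical model; the patch states and gain below are arbitrary.)

Code:
from statistics import mean

# Phosphorylation state per patch (0 = none, 1 = alpha, 2 = beta), grouped by
# a coarse location index.
patches = {
    0: [1, 1, 0, 2, 1],
    1: [0, 0, 1, 0, 0],
}

def local_signal(location):
    """Average state of the patches near one location."""
    return mean(patches[location])

def modulate_synapse(weight, location, gain=0.1):
    """Scale a local synaptic weight up or down according to the averaged signal."""
    return weight * (1.0 + gain * (local_signal(location) - 1.0))

print(modulate_synapse(0.5, 0), modulate_synapse(0.5, 1))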

That said, there is much more going on both upstream and downstream of the CaMKII-tubulin interaction than considered in that paper. The most obvious is that the responsiveness of CaMKII to calcium can be modulated, as well as the rate at which tubulins are dephosphorylated. These factors (amongst many, many others) will affect the signal-to-noise ratio and processing speed.

The idea that info from the activity of multiple synapses can be integrated as a phosphorylation pattern on microtubules isn't that out there, but it's complicated by all the feedbacks. Also, I would think phosphorylation is too transient to be a good way to store info compared to synapses themselves. The synapses require more energy to form outright, but, once created, will be much more stable and probably take much less energy to maintain. That's just a guess though.

bb113
Hero Member
Activity: 728
Merit: 500
March 26, 2012, 09:01:49 PM
 #40

Quote
My point is basically that the devil is in the details. The brain is probably an order of magnitude more complex than people currently think. Like every synapse is as complex as a laptop.

Isn't Markram allocating about the equivalent of one laptop per neuron? Granted, that's a neuron, not a synapse, but still. Also, isn't Hameroff basically saying that the computing power inside microtubules is huge?

Assuming the computer only needs to simulate the 100 trillion synapses, and Moore's law continues, that means that in 70 years you can buy a "laptop" capable of simulating the human brain at 100% speed.


How did you arrive at this?