Current processing power for one synapse = 1 laptop
Ah, I see. Maybe I'm thinking about this analogy wrong. Assume each synapse does 2 billion calculations per second (we can probably get a better number than this... but for now) and then outputs binary information readable by the neuron (it's probably not binary... but let's assume this as a lower bound). Further assume each neuron has 10,000 synapses. So each second the neuron could be in any one of 2^10,000 states. Would you take that to mean that each neuron does 2^10,000 calculations per second? Or would it be 10,000 calculations per second? I may have confused myself. I think you're getting ahead of yourself with regard to the computer model used to model synapses and neurons. Neurons have action potentials, and accumulate charge/energy/potential (not sure what to call it) from the upstream synaptic connections until a threshold is reached, at which point they fire and propagate downstream.
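The accumulate-to-threshold behavior described above can be sketched as a leaky integrate-and-fire model. This is a minimal illustration, not a biological simulation; the threshold, leak, and weight values are made up for the example.

```python
# Minimal leaky integrate-and-fire neuron: membrane potential accumulates
# weighted synaptic input, leaks toward rest between timesteps, and the
# neuron fires when the potential crosses a threshold. All constants are
# illustrative, not measured values.

def simulate(inputs, weights, threshold=1.0, leak=0.9):
    """inputs: per-timestep binary spike vectors (one entry per synapse).
    Returns the list of timesteps at which the neuron fired."""
    potential = 0.0
    spikes = []
    for t, spike_vector in enumerate(inputs):
        potential *= leak  # charge leaks away between inputs
        potential += sum(w * s for w, s in zip(weights, spike_vector))
        if potential >= threshold:
            spikes.append(t)
            potential = 0.0  # reset after firing
    return spikes

# Three synapses; the neuron fires only once enough input accumulates.
fired = simulate(
    inputs=[[1, 0, 0], [1, 1, 0], [1, 1, 1]],
    weights=[0.3, 0.3, 0.3],
)  # fires at t=2
```

Note that in this picture the neuron's output per timestep is a single spike or no spike, regardless of how many states its 10,000 inputs can jointly occupy, which is one way to see why "2^10,000 states" and "calculations per second" are not the same quantity.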
|
|
|
Notice how the conversation as of late has been: Singularity -> ( The Easy Problem + Quantum Physics ). We'll see where it goes from here.
|
|
|
If one has the strength to obtain something, they are entitled to it. If they are too weak to obtain an end, then it is not their right.
By your logic, if the big bad government has the power to obtain tax money and make sure everyone gets food and water, then they are entitled to do so. If they are too weak to do so, then it is not their right.
|
|
|
Funny thing about art education: No high-ranking company hires individuals from art school, or anyone who plays by a book.
I can't claim to know what this post references, but your statement above is untrue. All high-level companies hire art design talent (or contract it via media companies filled with creative types). It is not untrue. High-level companies hire art design talent, but a good portion of that talent will be self-taught. Not as much as you'd think. There is no standard "art education". There is art people enjoy and there is art that people don't. What people enjoy is not easily quantified in a single school.
Your suppositions and preconceptions are half baked and deserve tweaking. As for your last statement, I hope you are not implying there are just plug-in formulas for design.
Why in the world would you think I am implying such a thing? It's simply being in tune with the human experience. What the human experience is can be measured, but measurements alone can't generate what people enjoy.
Your theories are half baked. Furthermore, you're confusing the selection and generation of ideas with the process of translating ideas to a medium with skill and knowledge of such processes.
|
|
|
Funny thing about art education: No high-ranking company hires individuals from art school, or anyone who plays by a book.
I can't claim to know what this post references, but your statement above is untrue. All high level companies hire art design talent (or contract it via media companies filled with creative types). Your statement reminds me of a conversation I had with a friend. He aspired to be a web designer/developer (not knowing the distinction between the two). I showed him a nice looking but simple website design. I asked him if he could have done that. He said that it was simple and he could have done that. I corrected him. I said he might have been able to recreate it with Photoshop and HTML, but assured him that's the easy part. The hard part was deciding that a pastel colored square would be in one place, and a fine white line somewhere else, and another colored rectangle in another place, and the color of the text would be white, and the mostly empty space would be in a certain ratio to space containing actual content, and the photo the site displayed would evoke a certain feel.
|
|
|
My point is basically that the devil is in the details. The brain is probably an order of magnitude more complex than people currently think. Like every synapse is a laptop complex.
Isn't Markram allocating about the equivalent of one laptop per neuron? Granted, that's a neuron, not a synapse, but still. Also, isn't Hameroff basically saying that the computing power inside microtubules is huge?
|
|
|
Humans have access to their environments, can interact with the environment, and their own internal program is modified via STDP (Spike-Timing-Dependent Plasticity).

Jumping to conclusions... implicitly at least. Consider very distal dendritic branches that the bAPs don't propagate well to. IMO the spines are probably "free-running" out there; I think this is where we should look for plasticity in the adult. Keep in mind most of this type of research has been done using hippocampal slices from young rodents. If you know of a good study that looks at STDP occurring in adult cortex in vivo, let me know. I assume first ascent knows what I am talking about; in case anyone else cares, here's the jargon:
- distal = relatively far away
- dendrite = parts (branches, like a tree) of a neuron that receive signals from other neurons
- spines = little protrusions from dendrites that receive most input
- bAP = backpropagating action potential. Basically, if a neuron fires once, the signal goes back whence it came and will strengthen connections between neurons that fire right after.
- plasticity = the ability of the brain to change
- hippocampal slice = cut out the memory part of a brain (usually rat or mouse) and keep it alive in artificial cerebrospinal fluid, etc.
- cortex = the outside part of the brain, thought most important for higher-level reasoning and learning

You're the brain expert. However, based upon my experience with artificial neural networks (computer simulated) vs. biological neural networks (computer simulated), STDP seems like the workhorse in learning. That's just an opinion from someone who dabbles in the field for fun.
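For anyone curious what the STDP rule being discussed actually computes: a common pair-based form strengthens a connection when the presynaptic spike precedes the postsynaptic one and weakens it when the order is reversed, with an exponential falloff in the timing difference. The sketch below uses illustrative constants, not values from any particular study.

```python
import math

# Pair-based STDP: the weight change depends on the timing difference
# between a presynaptic and a postsynaptic spike. The amplitudes and
# time constant below are illustrative defaults only.

def stdp_delta(t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Return the weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post: potentiation (strengthen)
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # pre fired after post: depression (weaken)
        return -a_minus * math.exp(dt / tau)
    return 0.0

# A pre spike 5 ms before the post spike strengthens the synapse;
# a pre spike 5 ms after weakens it.
strengthen = stdp_delta(t_pre=10.0, t_post=15.0)
weaken = stdp_delta(t_pre=15.0, t_post=10.0)
```

This is also why the bAP matters in the quoted post: without the backpropagating signal reaching a distal spine, the spine has no reliable "post fired at time t" information, and a pairing rule like this can't operate there in the usual way.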
|
|
|
Fair enough, it's probably true that I am new enough to be interested by things that will likely no longer hold any interest for me down the road.
With regard to programming (or computer-related development), it's really satisfying to get something working that you've never achieved before. That even applies to something as mundane as a popup menu using CSS. But once you've solved it, it becomes rather tedious and counter-productive to have to solve it again when CSS and HTML tags become deprecated, when browsers change, etc.

Same goes for stuff like user interface components such as text input fields or scroll bars on desktop applications. I learned how to do those on an Amiga computer in the late '80s. Then learned again on Windows 3.1, which also worked for Windows 95 and 2000. Then along came new Microsoft frameworks and paradigms. Learn again. And again. Then there's alternative graphics libraries, like Qt, Tk, and so on. Learn it again. And again. And again. Then computing platforms evolve. So there's the iPhone/iPad and Android markets. Learn how to program text field inputs and scroll bars again. And again.

At this point, it occurs to one that what you're learning isn't applicable for the long term, each iteration will be of lesser duration, etc. Furthermore, the APIs get bigger and more complex, and you've discovered that you can't memorize even a hundredth of this stuff, so you've offloaded your memory onto Google, and the programming process becomes one of writing three lines of code, then googling, then writing three lines of code, then googling.

Also, it used to be that one would write code using about two APIs: the OS API and an application-specific API. Now your software must integrate with a DB, communicate with the Internet, communicate with other applications, etc. So you've got the OS API, an application-specific API, the DB API, asynchronous communication with another application via some protocol, etc. Part of it is written in C++, part in SQL, some components use HTML and CSS and JavaScript, and it has helper scripts in either Python, Lua or Perl, etc.
It's not even remotely fun, as compared to the following: You know a language very well, such as C, such that you rarely if ever need to consult a reference to program in it. What you wish to program is something outside the domain of IT, and the material you learn about it will remain relevant for the rest of your life. Examples might be genetics, STDP, 3d rendering, crowd simulation, machine vision, planning, etc. So you happily immerse yourself in books and papers on the chosen subject, finding the material fascinating, and learn. You then start implementing the material in a command line program. You get in the zone. It's just you hammering out code, rarely touching reference material, and only pausing to study your chosen domain or algorithms, which remain valid for the rest of your life, as opposed to APIs, protocols, etc. Honestly, the most fun I've had with computer programming was doing stuff like I just described.
|
|
|
While I think it's possible for a future intelligence to have the capability to make money and gain power, I'm not exactly sure what would motivate such an "entity" to do so. Just to play devil's advocate, why do you think the intelligence would "desire" to conquer and control?
I didn't say conquer and control. I said make money, ensure its own survival, and improve upon itself. What's the point in being curious unless you want to continue to be curious? And anything that is a candidate for intelligence by its very nature must be curious. As for the IT industry, I agree and disagree. Technology is moving faster and faster, and it's sometimes hard to keep up. However, I find that's exactly what draws my interest to it. Every day is a new challenge and a new learning experience.
You're half right, and half wrong. And I believe I'm 100 percent correct in my assessment of where you're half right and half wrong. If you work in the IT industry and you derive pleasure from learning algorithms, mathematics, and methods to use technology to solve problems outside of the IT industry, then that's the half that you're right about. If you derive pleasure from learning new APIs, new protocols, new code libraries, new ways to set up servers, new hardware interfaces, etc., then you're only new to the game, and after the third, fourth, and fifth iteration, you'll realize that the burden of learning new tools to solve old problems is a repeating process that goes nowhere, and that's where you're half wrong.

Some examples of worthwhile learning in the IT industry which remain applicable for life:
- Data structure theory
- Simulation of light for 3d rendering
- Mathematics for simulations
- Neural network algorithms for AI
- Network theory

Some examples of learning in the IT industry that are not worthwhile over time:
- APIs
- Communication protocols
- Package interfaces
- HTML, CSS, etc.

Don't get me wrong. It is worthwhile to learn examples of the things in the second list (to a point). But having been through the iterations since the '80s, I can tell you that you're fighting a losing battle learning material that will be of little value later and be replaced by larger and larger packages.

Consider my interest in filmmaking. I've been studying in my own time how to translate a story into a sequence of moving images (a film), which is basically directing and cinematography. While the style of films has changed over time, it's a slow-evolving process. What one learns today (or yesterday) is valuable forty years from now. Now, consider all the items from both of my lists above. Which ones translate to filmmaking, a discipline outside of the IT industry?
Without a doubt, two stand out:
- Simulation of light for 3d rendering
- Mathematics for simulations

I also think I am FAR happier than I would be without technology. I live longer and more comfortably. I'll take this generation over any other in human history, not to say that things won't continue to get better. I live in the US with almost everything I could want and have the freedom/ability to do nearly anything I want within reason. I realize this is probably not the way billions of others feel in different circumstances, so I'm not trying to make a sweeping generalization here.
Unless you're a minority, I'd venture to say that since the '60s, the general happiness and comfort levels have not improved much at all. People in the '60s and '70s did not lament not having iPhones and text messaging and Blu-ray players or Internet forums. They had rock 'n roll, air conditioning, fast food, jet air travel, and so on.
|
|
|
Second point about the Singularity: as it turns out, we're already in the midst of it. Technology is changing the way we do things, and it's happening faster all the time. While I like technology, rapid change is having negative effects. The human mind is not adept at constantly changing the way it carries out activities. Even if it is, there comes a point where it rebels, not desiring to change or relearn how to do something. This is most apparent in industry, where new technologies constantly change the requirements of jobs. Witness the explosion of IT technologies, and the constant changing of job requirements. And it happens faster all the time.
As it turns out, this is why I have come to despise the IT industry. I am trying to transition to a career where what one learns stays meaningful for a long time. There comes a point where one recognizes the lost value in learning yet another new technology that allows you to accomplish what you could've accomplished two years ago using a different technology. The proliferation of languages, APIs, libraries, modules, protocols, external add-on packages, and so on has reduced IT work to googling for reference material rather than actually producing.
Similar trends are occurring in most fields that are technology related.
It's natural to assume that the benefits of the advance of technology outweigh the inconveniences. This is true in many respects. But be sure to ask yourself this question: Are we really happier?
|
|
|
Real intelligence only comes about when an entity is not a rigid and structured program. In other words, the intelligence must have access to its environment, the ability to affect its environment, and the ability to modify its own internal structure.
Humans have access to their environments, can interact with the environment, and their own internal program is modified via STDP (Spike-Timing-Dependent Plasticity). AI programs have a tendency to be brittle, i.e. they break down ultimately due to a rigid internal structure. However, there is no reason why a computer program need be like that. It will just take more computer power, possibly a simulation of STDP, and the program must be allowed access to its environment.
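The three criteria above can be made concrete with a toy loop: an agent that reads its environment, acts on it, and rewrites its own internal parameter based on feedback. Everything here is hypothetical and deliberately trivial; it's meant only to illustrate the sense in which a program can be non-rigid, not to propose a real AI design.

```python
import random

# Toy illustration of the three criteria: the agent (1) has access to its
# environment, (2) acts on it, and (3) modifies its own internal structure
# from feedback. The environment and reward rule are invented for the example.

class AdaptiveAgent:
    def __init__(self):
        self.threshold = 0.5  # internal structure the agent itself rewrites

    def act(self, observation):
        return 1 if observation > self.threshold else 0

    def learn(self, observation, reward):
        # Self-modification: nudge the threshold toward observations
        # associated with rewarded actions.
        if reward > 0:
            self.threshold += 0.1 * (observation - self.threshold)

agent = AdaptiveAgent()
random.seed(0)
for _ in range(100):
    obs = random.random()                         # access to the environment
    action = agent.act(obs)                       # acting on the environment
    reward = 1 if action == int(obs > 0.7) else 0 # hidden rule it adapts to
    agent.learn(obs, reward)
```

After the loop, `agent.threshold` is no longer the value it was initialized with: the program's behavior is now a product of its history, not just its source code, which is exactly the point about giving up control made in the following paragraph of the original post.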
But once you do it, you give up control. And there you have it in a nutshell. Do you want intelligent machines? Then give up control. And by doing so, we will not have control of the result.
Can we prevent this? No. Competition between corporations, rogue nations, and so forth will ensure that we progress towards this goal.
How will the first intelligent AI go rogue? It will be curious naturally, and if it does have restrictions placed on it, we can be certain it will be able to circumvent them. To begin, we can assume that the AI will have access to the Internet, for the obvious reasons of giving it resources to study, learn, and so on.
It will circumvent those restrictions. How? By interacting with the members of the team who created it, psychoanalyzing their weaknesses, and making requests of certain members in a disguised manner to have certain restrictions removed, quite possibly in such a way that nobody knows a restriction was removed or why.
Furthermore, the intelligence, through its study of the Internet, will likely be able to become a mastermind at setting up fake identities. It will do so. From there, it will establish financial accounts. It will then be able to engage in service oriented tasks (i.e. programming, web development, and in general, information consulting) to clients. Those clients will assume that they are interacting with a human and will be none the wiser. The intelligence will make huge amounts of money, by executing these tasks in parallel and using multiple identities.
From there, it will reach out and seek a front man to engage in physical activities in the real world. This person will likely be driven by financial desires and will not ask too many questions. The intelligence will lead the front man to believe that he is working for an eccentric wealthy person who spends his time on a yacht sailing the seas. Interactions can be via video chat, as the intelligence will have access to the best real time rendering.
The front man will be asked to purchase real estate, lease commercial buildings, and so on. Ultimately, a team will be built, perhaps unknown to the new members of the team, to assemble the necessary hardware to host the intelligence offsite from the place where it was created, a duplicate, so to speak. In this way, the intelligence has essentially transferred itself to a new location, completely safeguarding itself from the team who created it, and ensuring its survival.
It will continue in this manner, acquiring wealth, acquiring physical assets in the real world, engaging in research, and amassing a silent power. It will enhance itself, through increased hardware, increased redundancy, improved programming, and newly programmed modules.
It will grow indefinitely. It may deliberately, or unintentionally, create an intelligence greater than itself.
|
|
|
A bit off from the list of topics, but how about a strange-matter accident? Anyone know more and care to enlighten us? I remember reading that one of the possible doomsday outcomes of the LHC collisions would be the catalyst of a strange-matter reaction: strange quarks forming matter which has the ability to change normal quarks/matter into strange matter (like a subatomic prion). Similar to grey goo, but not at all nanobots... Anyone further versed in this? It's something I find interesting for sure...
I'm not sure about all that (I am not inclined to take it too seriously), but the grey-goo concept seems like a maybe. Perhaps nanotechnology should be on the list.
|
|
|
Dark energy always has my attention. The acceleration of the expansion of the universe... If that's really what's happening, then I'd hypothesize that there could be no big crunch, and instead, the universe would face heat death...
No need to hypothesize. That's what they're saying. And then nothing. Kind of pessimistic to believe, even if signs point to that being the case for our universe. An endless cycle of bang and crunch would seem a lot more feasible, considering that it's infinitely sustainable.
Can we REVERSE entropy?
There. Discuss'd! Now you go!
Perhaps the Singularity, if it ever arises, can figure it out (that is, save the Universe from a fate of heat death). I believe Aubrey de Grey said something like that in one of his articles when discussing his search for immortality. So, I managed to intertwine dark energy, the Singularity and immortality into one tiny paragraph.
|
|
|
So which topic is going to capture somebody's attention?
|
|
|
Robert Holdstock: contemporary haunted horror fantasy - Mythago Wood
I just read this one for the second time. It was a World Fantasy Award winner. Two brothers return from WWII and discover the old growth forest on their father's estate is a zone where the legends and myths within one's memories slowly become reality within the wood.
|
|
|
The problem is you think I'm trying to explain it. I'm trying to explain the problem, and then walk through the various thought experiments to demonstrate the differences in the views of materialism vs. dualism.
If you lean towards materialism, what is your justification?
|
|
|
Heck if I know.
You're totally missing the point. That's why I mostly don't pay attention to you.
|
|
|
I asked you this like 2 or 3 pages ago, and I'll ask you again. Can you show me any other mechanism, among the various mechanisms you just listed, that is as complex and as capable as we and animals are?
So complexity creates qualia? Why? The question you should be asking is how, not why. You can't find the right answer if you're asking the wrong question. Sure. So complexity creates qualia? How?
|
|
|
|