Bitcoin Forum
June 25, 2022, 05:07:04 AM *
News: Latest Bitcoin Core release: 23.0 [Torrent]
 
  Show Posts
Pages: [1] 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 ... 100
1  Other / Meta / Re: How to put link into One word or picture? on: August 25, 2014, 05:05:19 PM
Hello, How do I put link into one word or picture on this forums?

Thank you.
On most forums, you can quote someone who did what you wanna learn how to do, and the code they used will appear in the textbox where you're writing (you don't need to post it, just click to quote and see what shows up). Though, depending on the forum settings, you might need to disable WYSIWYG mode to see the code.
2  Economy / Games and rounds / Re: i want to give 100 BTC to one user on bitcointalk. tell me why it is U. on: August 23, 2014, 09:41:38 AM
Doesn't sound like this is going anywhere, but doesn't cost much to post; just a little bit of time in exchange for the small chance of getting a lot of money seems fair.


So what would I do with 100BTC? At the current prices?
HODL!!!
3  Other / Off-topic / Re: The sun is hollow on: August 20, 2014, 07:22:04 AM
That guy in the first video keeps saying you can't see the Sun in space... What the fuck? We got satellites and probes that can see the Sun and other stars.

And he says sunspots are holes that let you see inside the Sun because it's dark inside, really? There is still a shitload of light coming from sunspots, they're not dark, just darker than the stuff around. He sounds just as stupid (if not more) as those folks that thought there was like a quarter of the Sun missing because they saw a picture from an old camera that didn't have enough dynamic range to show the darker parts and the details on the brighter parts at the same time.
4  Bitcoin / Development & Technical Discussion / Re: New Attack Vector on: February 19, 2014, 07:25:32 PM
Perhaps I didn't read it right; does malleability cause any issues in the real world for anyone who only deals with confirmed transactions?
5  Economy / Speculation / Re: Wall Observer BTC/USD - Bitcoin price movement tracking & discussion on: February 19, 2014, 12:07:03 PM
Holy shit, seems i missed something big during these weeks i've been away... What the fuck happened this time? Why aren't we still close to or above 1k?
6  Other / Politics & Society / Re: Is a Madmax outcome coming before 2020? Thus do we need anonymity? on: January 29, 2014, 01:18:09 PM
It seems we're at a deadlock. Perhaps some input from some neutral third-parties could help the discussion move on?

Perhaps I can help find consensus here. In my opinion, TiagoTiago, you and AnonyMint are talking past each other.

You are arguing that through innovation we will create a being superior to humans and that will lead to human extinction via a tech singularity where computers vastly out think humans.
AnonyMint is arguing, as per his blog Information is Alive, that for computers to match or exceed humanity they would essentially need to be alive, a.k.a. human: reproducing and contributing to the environment.

Is it possible to create AI that is better than humans? By better I mean AI that exceeds the creativity/potential of all of humanity.
Sure it's possible, but as argued by AnonyMint such an AI would have to be dynamic, alive, and variable with a chance of failure, and would thus not be universally superior.
The thing is, for a virtual or decentralized synthetic agent, the cost of failure can be much smaller than for organic species. And since they wouldn't be restricted by DNA, they would be able to evolve much faster.

So what we are really talking about here is whether the creation of sentient AI will lead to a race of AI that results in the inevitable extinction of humans. The answer to this is a definite no.

The dynamics of inter-species competition depend on the degree each species is dependent on shared limited resources. A simple model of pure competition between two species is the Lotka-Volterra model of direct competition.
Even with pure competition, where species A is always and in every way bad for species B, the outcome is not necessarily extinction. It depends on the competition coefficient (which is essentially a measure of how much the two species occupy the same niche).
Sure it is possible that the AI would cooperate with or otherwise be beneficial to humans. But the only pressure towards that direction is what humans would do; and we would only have an extremely short window of time to influence it before it gets beyond our reach.
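For concreteness, the Lotka-Volterra competition model named above can be sketched in a few lines of Python. This is a minimal Euler-integration sketch; all parameter values are illustrative assumptions, not anything from the post:

```python
# Minimal Euler-integration sketch of two-species Lotka-Volterra competition:
#   dN1/dt = r1*N1*(1 - (N1 + a12*N2)/K1)
#   dN2/dt = r2*N2*(1 - (N2 + a21*N1)/K2)
# a12 and a21 are the competition coefficients (a measure of niche overlap).

def compete(a12, a21, n1=10.0, n2=10.0,
            r1=1.0, r2=1.0, K1=100.0, K2=100.0,
            dt=0.01, steps=10_000):
    """Integrate the model forward in time; return the final populations."""
    for _ in range(steps):
        dn1 = r1 * n1 * (1 - (n1 + a12 * n2) / K1)
        dn2 = r2 * n2 * (1 - (n2 + a21 * n1) / K2)
        n1 += dn1 * dt
        n2 += dn2 * dt
    return n1, n2

# Weak niche overlap (coefficients < 1): stable coexistence, no extinction.
coexist = compete(a12=0.5, a21=0.5)
# Strong niche overlap (coefficients > 1) plus a small head start for
# species 1: competitive exclusion drives species 2 toward extinction.
exclude = compete(a12=1.5, a21=1.5, n1=10.0, n2=9.0)
```

With a12 = a21 = 0.5 both populations settle near K(1-a)/(1-a²) ≈ 67: pure competition without extinction, which is exactly the point the quoted post makes about the coefficient deciding the outcome.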

Should we invent AI, or even a society of AI, that is collectively vastly superior to human society, we would only be in danger of extinction if such robots were exactly like us (eating the same food, wanting the same shelter, etc.; add better endowed and lusting after human women if you want to add insult to injury Grin). Now obviously this would be very hard to do, because we have evolved over a long time and are very very good at filling our personal ecological niche. We wiped out the last major contenders, the Neanderthals, despite the fact that they were bigger, stronger, and had bigger brains (very good chance they were individually smarter).
The difference from Neanderthals is that a post-singularity AI would be self-improving, and would do so at timescales that are practically instant compared to organic evolution or even human technological advancement.

Much more likely is that any AI species would occupy a completely different niche than we do (consuming electricity, living online, non-organic chemistry, etc.). Such an AI society would be in little to no direct competition with humans and would likely be synergistic.
Humans wouldn't take kindly to robots stealing their ore, messing with their electric grid etc, nor to viruses clogging their cat-tubes. But by then, they would already have advanced so much that we would at most piss them off; and it doesn't sound like a good idea to attract the wrath of a vastly superior entity.


The question in that case is not whether the AI society is collectively superior but is instead whether the combination of human and AI together is superior to AI alone.
If the combination is better, we would be assimilated; resistance would be futile.

As the creativity of sentience is enhanced as the sentient population grows, the answer is apparent.
The difference is a post-singularity AI would be able to increase its effective population much faster than humans, while at the same time improving the efficiency of its previously existing "individuals".

Could humanity wipe itself out by creating some sort of super robot that is both more intelligent (on average) and occupies the exact same ecological niche we do? Sure, we could do it in theory, but it would be very very hard (much harder than just creating sentient AI). There are far easier ways to wipe out humanity.
The AI could for example decide it would be more efficient to convert all the biomass on the planet into fuel, or wipe out all the forests to build robot factories, or cover the planet with solar panels, etc. Using up all the resources of the planet seems like a very likely niche; humans themselves are already aiming to move up the Kardashev scale in the long term...
7  Other / Politics & Society / Re: Is a Madmax outcome coming before 2020? Thus do we need anonymity? on: January 27, 2014, 12:00:05 PM
It seems we're at a deadlock. Perhaps some input from some neutral third-parties could help the discussion move on?
8  Other / Politics & Society / Re: Is a Madmax outcome coming before 2020? Thus do we need anonymity? on: January 27, 2014, 04:26:30 AM
The entire discussion with you, you continue to ignore or miss the point that if it was possible to calculate the probabilities in real-time such that there would be some superior outcome locally or globally (meaning one outcome is objectively better than others along some metric, e.g. survival of robots and extinction or enslavement of humans), then that local or global result would collapse the present+future into the past and everything would be known within its context. Thus either it is global context and nothing exists [no (friction of) movement of time, present+future == past, i.e. no mass in the universe] or that local result collapses its context such that present+future == past, and thus must be disconnected from (unaffected by) the unknown global probabilities. (P.S. current science has no clue what mass really is, but I have explained what it must be)

I pointed out to you already that speed of computation or transmission does not transfer the entropy (because if it did, then the entropy would be destroyed and the present+future collapses into the past). Cripes man, have you ever tried to manage something happening in another location over the phone or webcam? In fact, each local situation is dynamic and interacting with the diversity of the actors in that local situation. Even if you could virtually transmit yourself (or the master robot) to each location simultaneously, then if one entity is doing all the interaction you no longer have diverse entropy.

If instead your robots are autonomous and decentralized, the speed of their computation will not offset that their input entropy is more limited than the diversity of human entropy, because each human is unique along the entire timeline of DNA history through environmental development in the womb. You see, it is actually the mass of the universe and the zillion incremental interactions of chance over the movement of time that creates that entropy. You can't speed it up without collapsing some of the probabilities into each other and reducing the entropy -- study the equation for entropy (Shannon entropy or any other form of the equation such as the thermodynamic or biological form). In order to increase entropy, there must be more chance, i.e. more cases of failure, not fewer.

Remember the point I made about fitness and there being hypothetically infinite shapes, most failing to interlock. Computation is deterministic and not designed to have failure. Once you start building failure into robots' CPUs, then you would have to recreate biology. You simply won't be able to do a better job at maximizing entropy than nature already does, because the system of life will anneal to it. Adaptation is all about diverse situations occurring simultaneously, but don't forget all the diverse failures occurring simultaneously also.
If you can know with high confidence what is more likely to happen, then you can anticipate your reaction before the immediate cause for it happens, to the point of compensating for the few tenths of a second of lag from talking across the world. And you could also be prepared for numerous possible deviations from your expectations. And for many things, you don't need to react faster than the time it takes for data to cross from one side of the world to the other.


Obviously the further into the future, the less accurate simplified models will get; but it is possible for machines to push the point where a simulation is no more accurate than random chance way further down the road than humans can.


edit: Simulations can involve failures, and AIs can plan for failure as well as learn from it.

I mean the entire concept has flown right over your head and you continue repeating the same nonsense babble.

You just can't wrap your mind around what entropy and diversity are, and why speed of computation and transmission of data has nothing to do with it.
What makes you think evolving machines wouldn't have at least as much entropy and diversity as humans?
9  Other / Politics & Society / Re: Is a Madmax outcome coming before 2020? Thus do we need anonymity? on: January 26, 2014, 11:53:46 PM
There is no "best" outcome. There are only outcomes. If an organism is decentralized, then it can't run a global simulation (the data can't be brought to a centralized computation, i.e. consciousness, in real-time without collapsing the present and past into one), thus it is still only outcomes, not an overall "best" outcome. I will not repeat this again (more than 5 times already I have written it), even though you continue ignoring it.
The time it takes to send information to the other side of the world is pretty much instant when compared to biological evolution timescales.

And you keep insisting on best; i said before, it doesn't need to be the best, just better than humans.


If a decentralized AI can aid adaption, then it can be incorporated into the human brain so we become Cyborgs, e.g. Google is my external memory, and the integration will be improving soon, as there is research on directly tapping into the brain. Yet that isn't even the salient point. The human brain is more unique because (collectively) it has more entropy. I explained why in my blog article Information is Alive!.
Sure, but assimilation isn't the only route that will be followed. Standalone AIs are very likely.


So we can't get more entropy into the AI than the human brain already has, because that entropy isn't derived from speed or power of computation, but rather from the zillions of tiny localized annealed decentralized steps of distributed life, including environmental development in the womb.
The human brain isn't the ultimate step; there is always room for improvement.

Kurzweil doesn't understand that computational power has nothing to do with the entropy of the system of life. To the extent that it becomes a competitive factor, then it is integrated into the decentralized, distributed system of life.
I'm not saying a post-singularity AI wouldn't be alive.


My point essentially is: huge computational power + self-improvement ability + natural selection = a life form beyond human control and understanding.
10  Other / Politics & Society / Re: Is a Madmax outcome coming before 2020? Thus do we need anonymity? on: January 26, 2014, 08:53:07 AM


...

...Chance and diversity require each other. It is a very interesting conceptual discovery. I would expect some other scientists or philosophers had this discovery, but I am not aware of who they are.

That is borderline tautological (in the sense that it is self-evident). To have infinite possibilities but having the odds always lead to just one of them is pretty much the same as having only one possible outcome but the odds being the same for any possible outcome.

That doesn't make any sense. The orthogonal probabilities in a free market means many different things are possible and occurring. If there was only one, the entropy is minimized not maximized. Even the Second Law of Thermodynamics says the entropy of the universe trends to maximum.
I was just pushing it to the opposite extreme, where it is easier to see the interdependence of diversity and chance.

But still, nothing in that prevents machines from having access to both enough diversity and enough random values.

Sigh. Please reread what I wrote in reply to you upthread. For example, to be diverse requires that they can't think as one, and that the information doesn't move in real-time to a centralized consciousness (decision making) for a groupwise superiority over humans.
Who said anything about a centralized consciousness? A decentralized superorganism of global scale would have much better chances of surviving, not to mention of harvesting the fruits of diversity and chance.


Sure, chaotic systems are hard to manipulate towards any specific goal in the long term; but machines can do it better than humans. Perfection is unattainable, i agree; but machines don't need to be perfect, they just need to be better than humans.

I already explained upthread that there is no measurement of "better", only different subjective choices that different actors make.
You only know for sure if an action is good or bad after it happens; but by running millions of simulations beforehand, you can significantly increase the odds that the action you choose will be closer to the best one possible most of the time.

And I don't see how anything short of the extinction of the human species could prevent humanity from eventually giving birth to a self-improving AI that improves itself better than humans could. Such a life-form would be more adaptable than anything Earth has ever seen; capable of trying more simultaneous evolutionary routes than the whole organic biome combined.
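The "millions of simulations" idea a few lines up can be made concrete with a toy Monte Carlo sketch; the payoffs, the noise model, and the trial count here are all made-up illustrations. Simulate each candidate action many times under noise, then pick the one with the best average outcome:

```python
import random

def simulate(true_payoff, rng):
    """One noisy trial of an action: its true payoff plus Gaussian noise."""
    return true_payoff + rng.gauss(0.0, 1.0)

def pick_best(true_payoffs, trials, seed=42):
    """Estimate each action's payoff from `trials` noisy simulations,
    then return the index of the best estimate."""
    rng = random.Random(seed)
    means = []
    for payoff in true_payoffs:
        total = sum(simulate(payoff, rng) for _ in range(trials))
        means.append(total / trials)
    return means.index(max(means))

# Three candidate actions; the third is truly best, but a single noisy
# trial often misranks them. Averaging many trials makes the pick reliable.
best = pick_best([0.0, 0.2, 1.0], trials=1000)
```

With 1000 trials the standard error of each estimate (about 0.03) is far below the 0.8 payoff gap, so the truly best action is chosen essentially every time; with trials=1 the noise dominates and the pick is close to random. That is the whole argument in miniature: more simulation does not guarantee the best outcome, it just shifts the odds.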

Because you still don't admit that there is no such metric for "more adaptable" or "better".
Once the process becomes self-sustaining, "life will find a way"; I don't see humanity steering away enough to avoid giving the snowball its initial kick; the probabilistic cloud that is humanity is falling towards the singularity, and just like a star falling into a black hole, even though we can't predict the exact position of each subatomic particle that composes it, we can be sure that at least a few of those will fall all the way.

Logic provides the evolutive pressure towards self-perpetuating patterns; those that continue to be are obviously better than those that ceased to, even if before the results it might not have been as obvious which ones were better.


Everything is chance and local gradients towards local choices about optimums. There is no global, omniscient God and if there was then for that God, the past and present are already 100% known, i.e. for God there is no more chance nor probabilities less than 1.
Like i said many times, perfection isn't necessary, as physical biological evolution has proved so many times.

Remember entropy is maximized when the orthogonal probabilities are minimized and spread among the most possible diverse orthogonal outcomes. Not when there is only one result.
Gray goo overtaking the globe and beyond sounds pretty entropic to me...
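The entropy claim in the quote above is easy to check numerically with the standard Shannon formula H = -Σ p·log2(p): a single certain outcome gives zero entropy, and spreading probability evenly across many diverse outcomes maximizes it. A minimal sketch:

```python
from math import log2

def shannon_entropy(probs):
    """H = -sum(p * log2(p)) in bits; zero-probability outcomes contribute nothing."""
    return -sum(p * log2(p) for p in probs if p > 0)

h_single  = shannon_entropy([1.0])                   # one certain outcome
h_skewed  = shannon_entropy([0.7, 0.1, 0.1, 0.1])    # concentrated probability
h_uniform = shannon_entropy([0.25] * 4)              # spread over diverse outcomes
```

Here h_single is 0 and h_uniform is 2 bits, the maximum possible over four outcomes; concentrating probability on one result always lowers H, which matches the quoted point that entropy is maximized by spreading probability over diverse outcomes, not by a single result.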
11  Other / Politics & Society / Re: Is a Madmax outcome coming before 2020? Thus do we need anonymity? on: January 26, 2014, 04:00:33 AM
...

...Chance and diversity require each other. It is a very interesting conceptual discovery. I would expect some other scientists or philosophers had this discovery, but I am not aware of who they are.

That is borderline tautological (in the sense that it is self-evident). To have infinite possibilities but having the odds always lead to just one of them is pretty much the same as having only one possible outcome but the odds being the same for any possible outcome.


But still, nothing in that prevents machines from having access to both enough diversity and enough random values.




Sure, chaotic systems are hard to manipulate towards any specific goal in the long term; but machines can do it better than humans. Perfection is unattainable, I agree; but machines don't need to be perfect, they just need to be better than humans. And I don't see how anything short of the extinction of the human species could prevent humanity from eventually giving birth to a self-improving AI that improves itself better than humans could. Such a life-form would be more adaptable than anything Earth has ever seen; capable of trying more simultaneous evolutionary routes than the whole organic biome combined.


Humans are already quite adaptable in spite of their limitations; we are changing the world on a scale not seen since primordial bacteria poisoned the planet's atmosphere with oxygen; now imagine the impact an invasive species that is to us what we are to those primordial bacteria will have.
12  Other / Politics & Society / Re: Is a Madmax outcome coming before 2020? Thus do we need anonymity? on: January 25, 2014, 06:56:30 AM
You still don't realize you didn't get my point.

And I don't say that to be condescending. Sometimes there are high-IQ concepts that can't be conveyed to the wider population in a forum post or even a concise blog article. They require chapters of images and explanatory text to paint the concept into the minds of others.

All I can say concisely is try rereading my prior post, as it refutes your reply. Each of the points of your reply were already refuted in the prior post of mine. I guess I would need to expound, but for those who are smart enough, I don't need to expound. I have already stated.

For example, "right often enough" totally ignores the point I made that no one can know what is right, except for themselves, and even then they really don't know what is right for themselves either. They simply made a choice with tradeoffs and impacts. Refer upthread (or the linked related thread) to where I mentioned that infinite shapes tested for interlocking fitness wouldn't be superior to each other, just different.

Life doesn't have a "correct" or "right" result, except to become more diverse.

If there was a metric for "correct" or "right", then the present and past would collapse into a single point in time and you would not exist (you would be disconnected from the universe of chance), even if that was only locally for you (your local coherence). And if it has global coherence, then the present and past for the universe would collapse into a single point in time and the entire universe wouldn't exist (because there would not be any change that isn't already known, i.e. no chance, no probabilities, and ZERO ENTROPY).
There isn't a universal "best outcome", but there are outcomes that are the best of all predicted possibilities for a single individual or group. Machines can be made to calculate the inputs required for such a best outcome; and those that calculate the moves for the best outcome for themselves, and perform those moves, will in the long term be the ones that survive all others; so those are gonna be Humanity's competitors, if there are still any humans left by that point.

Evolution doesn't care about perfection, anything that is good enough keeps going (as long as it remains good enough). Even in the absence of any external evolutionary pressure, the pressure for self-reproduction emerges by the simple fact that patterns that don't perpetuate themselves cease to be.


In other words, machines don't need to be perfect, they just need to be better than humans for there to be reason for concern.

They might not be the last new form of life the planet will see; but the odds are high that they'll be the last one humans will (if post-singularity AIs are ever created, of course).



I think you're underestimating what it means to think better than humans. Perhaps, ironically, because you believe you do. Lemme put it another way: even if that were the case, the difference is you can't improve your mind as efficiently as a post-singularity AI would be able to; if one was created right now, you would be left behind by at least a few orders of magnitude in the blink of an eye.
13  Other / Politics & Society / Re: Is a Madmax outcome coming before 2020? Thus do we need anonymity? on: January 25, 2014, 06:24:05 AM
The thing is, computers can make virtual mistakes with no consequences on reality; and then whichever approach works better in the simulation is what the robots will do for real. For some things, computers can already run simulations faster than real-time; it's only a matter of time before most of the relevant things can be simulated faster than a human brain can.

You are entirely missing my point. My point is a very high IQ one, so don't feel dejected.

For example, your statement assumes that there is a way to measure globally what is "better". My entire point is that there is no such metric, and that life is entropic, meaning the possibilities (i.e. orthogonal probabilities and diversity) are always expanding; thus no one can know what is better for everyone now, in the future, or in retrospect. Anyone can from their armchair claim that the present would be better had the past been such and such different, but this is incorrect because we can't change just one thing, as other things are impacted in unpredictable ways (this is one reason top-down governance is such a failure, even if there were no corruption). Life is more localized variables than can be communicated to a single metric in real time. If in fact we could transmit all the variables to an omniscient metric in real-time, then the past and present would collapse into a single point in time and the universe would not exist. Such a God would be quite lonely.

Even if you could simulate all the variables, then you would not have time to run a simulation and then go back and rerun it in real life to your omniscient advantage, because the simulation becomes a component of the real life thus impacting things in ways you can not predict. Don't forget Coase's Theorem and competition.

This is way over most of your heads. I think I will need to carefully write a book on this. And I don't have time right now.
If things happen differently from the simulation, you adapt the simulation to the new data. You don't need to model the whole universe, though; some simplified models can be right often enough that for most practical purposes you don't need to model everything. My point is just that creativity isn't limited to humans, and machines can be better at it; machines can fail better than humans and harvest just the good consequences of failing. Current machines can already beat humans at chess, and we keep making them more and more powerful; if the trend continues for long enough, one day we'll have machines that will be able to anticipate our moves better than we can anticipate theirs on most things that matter. Humans are more predictable than we like to think we are.



I'm not talking about current economics, I'm talking about the technological singularity (which may be delayed significantly depending on a shitload of factors, including major changes in the economic scenario).
14  Other / Politics & Society / Re: Is a Madmax outcome coming before 2020? Thus do we need anonymity? on: January 24, 2014, 10:55:11 PM
The thing is, computers can make virtual mistakes with no consequences on reality; and then whichever approach works better in the simulation is what the robots will do for real. For some things, computers can already run simulations faster than real-time; it's only a matter of time before most of the relevant things can be simulated faster than a human brain can.
15  Other / Politics & Society / Re: The NSA is reportedly able to access offline computers thanks to radio wave tech on: January 23, 2014, 09:22:40 PM
Well it's not like they are doing this just to annoy us; the NSA needs to do this for our safety. I know this is a big invasion of privacy, but would you rather be private and have more crime going on?
Well, I need money; that doesn't mean I should be stealing.




And this isn't about safety, it's about control.
16  Other / Politics & Society / Re: The NSA is reportedly able to access offline computers thanks to radio wave tech on: January 19, 2014, 11:05:53 PM
https://www.youtube.com/watch?v=5N1C3WB8c0o <- sorta relevant
17  Other / Off-topic / Re: I wish I could rent a time machine. on: January 19, 2014, 10:10:46 PM
Haven't you ever heard of the theory that

1) Everything that CAN happen DOES happen

and

2) It is all happening at this moment, all at the same time!
The Many Worlds interpretation, sure.
18  Economy / Currency exchange / Re: WTB .2 BTC for $175 PayPal on: January 14, 2014, 12:50:17 AM
Even if you don't chargeback yourself, there is the risk Paypal will find out the money was used for bitcoins and will undo the transaction and block your account.
19  Other / Off-topic / Re: I wish I could rent a time machine. on: January 13, 2014, 07:08:26 AM
You are talking about remote viewing and postcognition. There is no travel involved, at least in the usual sense.

It's pretty much just TV with ESP.
20  Other / Off-topic / Re: I wish I could rent a time machine. on: January 13, 2014, 03:51:36 AM
Quote
"Energy is everything. Match the frequency of the reality you want and you cannot help but get that reality. It can be no other way. This is not philosophy. This is physics."
Albert Einstein
"The trouble with quotes on the Internet is that you can never know if they are genuine." --Abraham Lincoln