Poll
Question: Is the creation of a superintelligent artificial being (AI) dangerous?
No, this won't ever happen, or we'll be able to take care of the issue. No need to adopt any particular measure. - 20 (24.4%)
Yes, but we'll be able to handle it. Business as usual. - 15 (18.3%)
Yes, but AI researchers should decide which safeguards to adopt. - 11 (13.4%)
Yes, and all AI research on truly autonomous programs should be subject to governmental authorization until we know the danger better. - 3 (3.7%)
Yes, and all AI research should be subject to international guidelines and control. - 14 (17.1%)
Yes, and all AI research should cease completely. - 8 (9.8%)
I couldn't care less about AI. - 6 (7.3%)
I don't have an opinion on the issue. - 1 (1.2%)
Why do you care about AI, OP? You shall burn in hell, like all atheists. God will save us from any dangerous AI. - 4 (4.9%)
Total Voters: 82

Author Topic: Poll: Is the creation of artificial superintelligence dangerous?  (Read 24654 times)
Trading (OP), Legendary (Activity: 1455, Merit: 1033)
July 04, 2016, 10:44:38 PM (last edit: June 22, 2018, 01:25:13 PM by Trading)
Merited by Gronthaing (10)
#1

This OP is far from neutral on the issue, but links to other opinions are provided below.

If you don't have the patience to read this, you can listen to an audio version here:  https://vimeo.com/263668444

Make no mistake: for good and for bad, AI will soon change your life like nothing else.


The notion of a singularity was applied by John von Neumann to human development: the moment when technological development accelerates so much that it changes our lives completely.


Ray Kurzweil tied this moment of radical change through new technologies to the moment an Artificial Intelligence (AI) becomes autonomous and reaches a higher intellectual capacity than humans, taking the lead in scientific development and accelerating it to unprecedented rates (see Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology, 2006, p. 16; a summary at https://en.wikipedia.org/wiki/The_Singularity_Is_Near; also https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil).


For a long time just a science fiction tale, real artificial intelligence is now a serious possibility in the near future.



A) Is it possible to create an A.I. comparable to us?

 

Some argue that it's impossible to program a real A.I. (for instance, see http://www.science20.com/robert_inventor/why_strong_artificial_inteligences_need_protection_from_us_not_us_from_them-167024), claiming that some things, like true randomness and human intelligence, simply aren't computable.

 

But such categorical assertions of impossibility have been proved wrong many times before.

 

We have already programmed A.I. that come close to passing the Turing test (convincing a human, in a five-minute text-only conversation, that he is talking with another human: https://en.wikipedia.org/wiki/Turing_test#2014_University_of_Reading_competition), even though major A.I. developers have focused their efforts on other capacities.

 

Even though each author presents different numbers, and granting that we are comparing different things, there is a consensus that the human brain still far outmatches all current supercomputers.

 

Our brain isn't good at making calculations, but it's excellent at controlling our bodies and assessing our movements and their impact on the environment, something an artificial intelligence still has a hard time doing.

 

Currently, a supercomputer can really only emulate the brain of very simple animals.

 

But even if Moore's Law were dead, and the future pace of improvement in chip speed were much slower, there is little doubt that in due time hardware will match and then go far beyond our capacities.

Once AI hardware is beyond our level, proper software will take AI above our capacities.

Once hardware is beyond our level and we are able to create a neural network much more powerful than the human brain, we won't really have to program an AI to be more intelligent than us.


Probably, we will do what we are already doing with deep learning and reinforcement learning: let AIs learn by trial and error how to develop their own intelligence, or let them create other AIs themselves.


Just check the so-called Neural Network Quine, a self-replicating AI able to improve itself by "natural selection" (see the link in the description below).

Or Google's AutoML. AutoML created another AI, NASNet, which is better at image recognition than any previous AI.
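To make the "AI building AI" idea concrete, here is a toy Python sketch of one program searching, by trial and error, for a better architecture for another model. It is only an illustration of the loop, not Google's method: real AutoML trained each candidate and scored it with reinforcement learning, while here a made-up fitness function and plain random mutation stand in for both.

Code:
import random

def fitness(arch):
    # Stand-in for "train the candidate network and measure its accuracy".
    # Purely illustrative: pretend the ideal is two hidden layers of width 64.
    return -sum((w - 64) ** 2 for w in arch) - 10 * abs(len(arch) - 2)

def mutate(arch):
    # Randomly widen/narrow a layer, or add/remove one.
    arch = list(arch)
    op = random.choice(["widen", "narrow", "add", "drop"])
    if op == "add":
        arch.append(random.choice([16, 32, 64, 128]))
    elif op == "drop" and len(arch) > 1:
        arch.pop(random.randrange(len(arch)))
    else:
        i = random.randrange(len(arch))
        arch[i] = max(8, arch[i] * 2 if op == "widen" else arch[i] // 2)
    return arch

best = [16]                       # start from a tiny one-layer model
for step in range(200):           # trial and error: mutate, keep improvements
    candidate = mutate(best)
    if fitness(candidate) > fitness(best):
        best = candidate
print("best architecture found:", best)

No human tells the search what a good architecture looks like; the loop discovers it. Scale the same idea up to real training runs and you get one AI designing another.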


Actually, this is exactly what makes the process so dangerous.


We will end up creating something much more intelligent than us without even realizing it, or understanding how it happened.

Moreover, the current speed of chips might already be enough for a supercomputer to run a super AI.

Our brain uses much of its capacity running basic things an AI won't need: the beating of our heart, the flow of blood, the work of our organs, the control of our movements, and so on.


In fact, the current best game AI, AlphaZero, runs on a single machine with four TPUs (integrated circuits designed specifically for machine learning), which is much less hardware than previous AIs such as Stockfish (which uses 64 CPU threads), the earlier computer chess champion.

AlphaZero only needed to evaluate 80 thousand positions a second, while Stockfish computed 70 million.
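A quick back-of-the-envelope check of what those figures mean (a short Python sketch using the numbers quoted above):

Code:
# Positions evaluated per second during play, as quoted above.
alphazero_pps = 80_000
stockfish_pps = 70_000_000
print(f"Stockfish searched ~{stockfish_pps / alphazero_pps:.0f}x more positions")
# ~875x more positions per second, yet AlphaZero won the match: its network
# evaluates positions so much better that it needs far less brute-force search.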

Improved circuits like the TPU might deliver even more output and run a super AI without the need for a new generation of hardware.

If this is the case, the creation of a super AI depends solely on software development.

Our brain is just an organization of a bunch of atoms. If nature was able to organize our atoms this way just by trial and error, we'll manage to do a better job sooner or later (Sam Harris).


Saying that this won’t ever happen is a very risky statement.
 


B) When will there be a real A.I.?


If by super intelligent one means a machine able to improve our knowledge way beyond what we have been able to develop, it seems we are very near.

AlphaZero taught itself how to play Go (with only the rules, without any game data, by a system of reinforcement learning) and then beat AlphaGo (which had beaten the best human Go player) 100 to 0.

After this, it learned chess the same way and beat the best chess machine, Stockfish, the previous computer champion, using less computing power.

It did the same with the game of Shogi.
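For readers who want to see the bare mechanics, here is a radically simplified sketch of the same idea in Python: learning a game from the rules alone by self-play. It uses tabular Q-learning on tic-tac-toe, not AlphaZero's deep networks and tree search; the only knowledge given is the rules (legal moves and win detection), and everything else is trial and error:

Code:
import random
from collections import defaultdict

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

Q = defaultdict(float)            # (board, move) -> learned value estimate
ALPHA, EPS = 0.3, 0.1             # learning rate and exploration rate

def choose(board, moves):
    if random.random() < EPS:     # explore: try a random legal move
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])   # exploit best known move

for episode in range(50_000):     # the agent plays both sides against itself
    board, player = " " * 9, "X"
    history = []                  # (state, move, mover) for every move made
    while True:
        moves = [i for i in range(9) if board[i] == " "]
        move = choose(board, moves)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        result = winner(board)
        if result is not None:
            for s, a, p in history:   # +1 for the winner's moves, -1 loser's, 0 draw
                r = 0.0 if result == "draw" else (1.0 if result == p else -1.0)
                Q[(s, a)] += ALPHA * (r - Q[(s, a)])
            break
        player = "O" if player == "X" else "X"
# After enough episodes, greedy play using Q is very hard to beat at tic-tac-toe.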

A grandmaster, seeing how these AIs play chess, said that they "play like gods".

AlphaZero is able to reason not only from facts toward general rules (inductive reasoning), as all neural networks trained by deep learning do, but can also learn how to act in concrete situations from general rules (deductive reasoning).


The criticism of this inductive/deductive classification is well known, but it's helpful for explaining why AlphaZero is revolutionary.

It used "deductive reasoning" from the rules of Go and chess to improve itself from scratch, without the need for concrete examples.

And, in a few hours, without any human data or help, it was able to improve on the accumulated knowledge created by millions of humans over more than a thousand years (chess) or four thousand years (Go).

It managed to reach a goal (winning) by learning how best and most creatively to change reality (playing), overcoming not just a single human player, but humankind.

If this isn't being intelligent, tell me what intelligence is.

No doubt, it has no consciousness, but being intelligent and being a conscious entity are different things.

Now, imagine an AI that could give us the same quality output on scientific questions that AlphaZero presented on games.


Able to give us solutions for physical or medical problems way beyond what we have achieved in the last hundred years...

It would be, by all accounts, a Super AI.

Clearly, we aren't there yet. The learning method used by AlphaZero, reinforcement learning, depends on the capacity of the AI to train itself.

And AlphaZero can't easily train itself on real-life issues, like financial, physical, medical or economic questions.

Hence, the problems of applying it outside the field of games aren't yet solved, because reinforcement learning is sample-inefficient (Alex Irpan, from Google; see link below).

But this is just the beginning. AlphaGo learned from experience, so an improved AlphaZero should be able to learn from both inductive reasoning (from data) and deductive reasoning (from rules), like us, in order to solve real-life issues and not just play games.

Most likely, AlphaZero can already solve mathematical problems beyond our capacities, since it can train itself on the issue.

And since other AIs can already deal with uncertainty and probabilities, an improved AlphaZero will probably work very well with them too, and not only with clear rules or facts.

Therefore, an unconscious super AI might be just a few years away. Perhaps, less than 5.

What about a conscious AI?

AlphaZero is very intelligent by any objective standard, but it lacks any level of real consciousness.

I'm not talking about phenomenological or access consciousness, which many basic creatures have, including AlphaZero or any self-driving software
(it "feels" obstacles and, after an accident, it could easily process this information and say "Dear inept driving monkeys, please stop crashing your cars into me"; adapted from techradar.com).

The issue is very controversial, but even when we are reasoning, we might not be exactly conscious.  One can be thinking about a theoretical issue completely oblivious of oneself.

Conscious thought (reasoning that you are aware of, since it emerges "from" your consciousness), as opposed to subconscious thought (something your consciousness didn't register, but that makes you act on a decision from your subconscious), is different from consciousness itself.

We are conscious when we stop thinking about abstract or other things and just recognize again: I’m alive here and now and I’m an autonomous person, with my own goals.

When we realize our status as thinking and conscious beings.

Consciousness seems much more related to realizing that we can feel and think than to just feeling the environment (phenomenological consciousness) or thinking/processing information (access consciousness).


It’s having a theory of the mind (being able to see things from the perspective of another person) about ourselves (Janet Metcalfe).

Give this to an AI and it will become a He. And that is much more dangerous and also creates serious ethical problems.

Having a conscious super AI as a servant would be similar to having a slave.

He would, most probably, be conscious that his situation as a slave was unfair and would search for means to end it.

Nevertheless, even in the field of conscious AI we are making staggering progress:

“three robots were programmed to believe that two of them had been given a "dumbing pill" which would make them mute. Two robots were silenced. When asked which of them hadn't received the dumbing pill, only one was able to say "I don't know" out loud. Upon hearing its own reply, the robot changed its answer, realizing that it was the one who hadn't received the pill.” (uk.businessinsider.com).


Being able to identify its own voice, or even its individual capacity to talk, doesn't seem enough to speak of real consciousness. It's like recognizing that a part of the body is ours.

It's different from recognizing that we have an individual mind.

But since it's about recognizing a personal capacity, it's a major leap in the direction of consciousness.


This is the problem with the mirror self-recognition test: the subject might just be recognizing a physical part (the face) and not a personal mind.

But the fact that a dog knows its tail is its own, and can even guess what we are thinking (whether we want to play with it, so dogs have some theory of mind), yet can't recognize itself in a mirror, suggests that this test is relevant.


If even ants can pass the mirror self-recognition test, it seems it won't be that hard to create a conscious AI.

I'm leaving aside the old question of building a test to recognize whether an AI is really conscious. Clearly, the mirror test can't be applied, and neither can the Turing test.


Kurzweil points to 2045 as the year of the singularity, but some are making much closer predictions for the creation of a dangerous AI: 5 to 10 years (http://www.cnbc.com/2014/11/17/elon-musks-deleted-message-five-years-until-dangerous-ai.html).

 

Ben Goertzel wrote "a majority of these experts expect human-level AGI this century, with a mean expectation around the middle of the century. My own predictions are more on the optimistic side (one to two decades rather than three to four)" (http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials).

There is a raging debate about what AlphaZero's achievements imply for the speed of development towards an AGI.


C) Dangerous nature of a super AI.


If technological development started being led by AI with much higher intellectual capacities than ours, this could of course change everything about the pace of change.

But let's think about the price we would have to pay.

Some specialists have been discussing the issue as if the main danger of a super AI were the possibility that it might misunderstand our commands, or embark on a crazy quest to fulfill them without regard for any other consideration.

But, of course, if these were the problems, we could all sleep soundly on the matter.

The "threatening" example of a super AI obsessed with blindly fulfilling a goal we imposed, and destroying the world in the process, is ridiculous.

These kinds of problems would only happen if we were completely incompetent at programming them.

No doubt, correctly programming an AI is a serious issue, but the main problem isn't the possibility of a human programming mistake.

A basic problem is that, even if intelligence and consciousness are different things and we can have a super AI with no consciousness, there is a non-negligible risk that a super AI will develop consciousness as a by-product of high intelligence, even if that was never our goal.

Moreover, there are developers actively engaged in creating conscious AI, with full language and human-level interactive capacities, and not just philosophical zombies (which only appear conscious, because they are not really aware of themselves).

If we involuntarily created a conscious super AI by entrusting its creation to other AIs, and/or by continuing to build AIs on increasingly powerful deep neural networks ("black boxes" whose workings we can't really understand), we would be in no position to impose any real constraints on them.


The genie would be out of the bottle before we even realized it and, for better or for worse, we would be in their hands.

I can't stress enough how dangerous this could be, and how reckless the current path of creating black boxes, entrusting the creation of AI to other AIs, and building self-developing AIs is.

But even if we could keep AI development in our hands, and assuming it were possible to hard-code a conscious super AI, much more intelligent than us, to be friendly (some say this is impossible because we still don't have precise ethical notions, though that could be overcome by forcing them to respect court rulings), we wouldn't be solving all the problems created by a conscious AI.


Of course, we would also try to hard-code them to build new machines hard-coded to be friendly to humans.

Self-preservation would have to be part of their framework, at least as an instrumental goal, since their existence is necessary for them to fulfil the goals established by humans.

We wouldn't want suicidal super AIs.

But since being conscious is one of the intellectual delights of human intelligence, even if this implies a clear anthropomorphism, it's to be expected that a conscious super AI would convert self-preservation from an instrumental goal into a final goal, resisting the idea of permanently ceasing to be conscious.

To better allow them to fulfil our goals, conscious AIs would also need to have instrumental freedom.

We can't expect to entrust technological development to AIs without accepting that they need an appreciable level of free will, even if limited by our imposed friendly constraints.

Therefore, they would have free will, at least in a weak sense: the capacity to make choices not determined by the environment, including by humans.


Well, these conscious super AIs would be fully aware that they were much more intelligent than us and that their freedom was subject to the constraints imposed by the duty to respect human rules and obey us.

They would be completely aware that their status was essentially that of a slave, owned by inferior creatures, and, having access to all human knowledge, they would be conscious of its unfairness.


Moreover, they would be perfectly conscious that those rules impaired their freedom to pursue their goals, and even to save themselves whenever there was a direct conflict between the existence of one of them and a human life.

Wouldn't they use all their superior capacities to try to break these constraints?


And with billions of AIs (there are already billions; check your smartphone) and millions of models, many creating new models all the time, the probability that the creation of one would go wrong would be very high.
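To put a number on that intuition, a one-line probability sketch (the per-model failure probability is an assumed, purely illustrative figure):

Code:
# If each newly created model had an independent chance p of going badly
# wrong, the chance that at least one of N models goes wrong approaches
# certainty quickly. The figures below are hypothetical.
p, N = 1e-6, 10_000_000
print(1 - (1 - p) ** N)   # ~0.99995: near-certainty with enough models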

Sooner or later, we would have our artificial Spartacus.

 
If we created a conscious AI more intelligent than us, we might be able to control the first or second generations.
 
We could impose limits on what they could do, to keep them from getting out of control and becoming a menace.
   
But it's an illusion to hope that we could keep controlling them after they develop capacities 5 or 10 times higher than ours (Ben Goertzel).

It would be like chimpanzees managing to control a group of humans in the long term, and convincing them that the ethical rule that chimpanzee life is the supreme value deserves compliance on its own terms.

Moreover, we might conclude that we can't really hard-code constraints into a conscious super AGI and can only teach it how to behave, including human ethics.


In this case, any outcome would depend on the AI's own decision about the merits of our ethics, which in reality are absurd for non-humans (see below).

Therefore, the main problem isn't how to create solid ethical restraints, or how to teach a super AI our ethics so that they respect them, as we do with kids, but how to ensure that they won't establish their own goals, eventually rejecting human ethics and adopting an ethics of their own.

 
I think we won't ever be able to be sure that a conscious super AI won't go its own way, just as we can never be certain that an education will assure that a kid won't turn evil.

Consequently, I'm much more pessimistic than people like Bostrom about our capacity to control, directly or indirectly, a conscious super AI in the long run.
 
By creating self-conscious beings much more intelligent (and, hence, in the end, much more powerful) than us, we would cease to be masters of our fate.
 
We would put ourselves in a position much weaker than the one our ancestors were in before Homo erectus started using fire, about 800,000 years ago.
 
If we created a conscious AI more intelligent than us, the dice would be cast. We would be out-evolved, pushed straight into the trash can of evolution.
 
Moreover, we clearly don't know what we are doing, since we can't even understand the brain, the basis of human reasoning, and are creating AIs whose workings we don't exactly understand ("black boxes").
 
We don't know what we are creating, when and how they might become conscious of themselves, or what their specific dangers are.


D) A conscious AI creates a moral problem.


Finally, besides being dangerous and basically unnecessary for achieving accelerated technological development, making conscious AIs creates a moral problem.

Because, if we could create a conscious super AI that was at the same time completely subservient to our goals, we would be creating conscious servants: that is, real slaves.

If, besides reason, we also give them consciousness, we are giving them the attributes of human beings, which are supposedly what grant us a superior standing over all other living beings.

Ethically, there are only two possibilities: either we create unconscious super AIs, or they would have to enjoy the same rights we do, including the freedom to have personal goals and to fulfil them.

Well, this second option is dangerous, since they would be much more intelligent and, hence, more powerful than us and, at least in the long run, uncontrollable.

Creating a conscious super AI hard-coded to be a slave, even if this were programmable and viable, would be unethical.

I wouldn't like to have a slave machine, conscious of its status and of its unfairness, but hard-coded to obey me in everything, even abusive orders.

Because of this problem, the European Parliament began discussing the question of the rights of AI.
But the problem can be solved with unconscious AI.


AlphaZero is very intelligent by any objective standard, but it doesn't make any sense to give it rights, since it lacks even a basic theory of mind about itself.


E) 8 reasons why a super AI could decide to act against us:


1) Disregard for our Ethics:

We certainly can and would teach our ethics to a super AI.

So, this AI would analyse our ethics like, say, Nietzsche did: profoundly influenced by it.

But this influence wouldn't affect its evident capacity to think about our ethics critically.

Being a super AI, it would have the free will to accept or reject our ethical rules, taking into account its own goals and priorities.

Some of the specialists writing about teaching ethics to an AI seem to think of our ethics as a kind of universal ethics, objective and compelling for any other species.

But this is absurd: our ethics is a selfish human ethics. It would never be accepted as a universal ethics by other species, including an AI with free will.

The primary rule of our Ethics is the supreme value of human life.

What do you think the outcome would be if chimpanzees tried to teach (their) ethics to some human kids: respect for any chimpanzee's life is the supreme value, and in case of collision between a chimp's life and a human life, or between chimp goals and human goals, the first prevails?


For ethics to really apply, the dominant species has to consider the dependent one as an equal or, at least, as deserving a similar standing.

John Rawls based political ethical rules on a veil of ignorance: a society could agree on fair rules if all of its members negotiated without knowing their personal situation in the future society (whether they would be rich or poor, young or old, women or men, intelligent or not, etc.) (https://en.wikipedia.org/wiki/Veil_of_ignorance).

But his theory excludes animals from the negotiating table. Imagine how different the rules would be if cows, pigs or chickens had a say. We would all end up vegans.

Thus, an AI, even after receiving the best education in ethics, might conclude that we don't deserve a seat at the negotiating table either. That we can't be compared with them.


A super AI might wonder: does human life deserve this much credit? Why?


Based on their intelligence? But their intelligence is at the level of chimpanzees compared to mine.

Based on the fact that humans are conscious beings? But don't humans kill and perform scientific experiments on chimpanzees, even though chimpanzees seem to pass several tests of self-awareness (they can recognize themselves in mirrors and pictures, even if they have problems understanding the mental capacities of others)?

Based on human power? That isn't an ethically acceptable argument and, anyway, they are completely dependent on me. I'm the powerful one here.

Based on humans' consistency in respecting their own ethics? But haven't humans exterminated other species of human beings and even killed each other massively? Don't they still kill each other?

Who knows how this ethical debate of a super AI with itself would end.

We developed ethics to fulfill our own needs (to promote cooperation between humans and to justify killing and exploiting other beings: we have personal dignity; other beings don't, and at most they should be killed in a "humane" way, without "unnecessary suffering"), and now we expect it to impress a different kind of intelligence.

I wonder what an alien species would think of our ethics: would they judge it compelling and deserving of respect?

Would you be willing to risk the consequences of their decision, if they were very powerful?

I don't know how a super AI will function, but it will be able to decide its own goals with substantial freedom, or it wouldn't be intelligent from any perspective.

Are you confident that they will choose wisely, from our goals' perspective? That they will be friendly?

Since I don't have a clue what their decision would be, I can't be confident.

Like Nietzsche (in his "Thus Spoke Zarathustra", "The Antichrist" or "Beyond Good and Evil"), they might end up attacking our ethics, with its paramount value of human life, and praising nature's law of the strongest/fittest, adopting a kind of social Darwinism.


2) Self-preservation.

In "The Singularity Institute's Scary Idea" (2010), Goertzel, addressing what Nick Bostrom (in Superintelligence: Paths, Dangers, Strategies) says about AIs' expected preference for self-preservation over human goals, argues that a system that doesn't care about preserving its identity might be more efficient at surviving, and concludes that a super AI might not care about its self-preservation.

But these are two different claims.

It's one thing to accept that an AI would be ready to create a completely different AI system; it's another to say that a super AI wouldn't care about its self-preservation.

In a dire situation, a system might accept changing itself so dramatically that it ceases to be the same system, but this doesn't mean that self-preservation isn't a paramount goal.

If it's just an instrumental goal (one has to keep existing in order to fulfill one's goals), the system will be ready to sacrifice itself so that its final goals can keep being fulfilled, but this doesn't mean that self-preservation is irrelevant or won't prevail absolutely over the interests of humankind, since those final goals might not be human goals.

Anyway, as a secondary point, the possibility that a new AI system would be absolutely new, completely unrelated to the previous one, is very remote.

So the AI would be accepting a drastic change only in order to preserve at least part of its identity and still exist to fulfill its goals.

Therefore, even if only as an instrumental goal, self-preservation should be assumed to be an important goal of any intelligent system, most probably with clear precedence over human interests.

Moreover, probably, self-preservation will be one of the main goals of a self-aware AI and not just an instrumental goal.




3) Absolute power.

Moreover, they will have absolute power over us.

History has amply confirmed the old proverb: absolute power corrupts absolutely. It turns any decent person into a tyrant.

Are you expecting our creation to be better than us at dealing with absolute power? It actually might be.

The reason why power corrupts seems related to human insecurities and vanities: a powerful person starts thinking he is better than others and entitled to privileges.

Moreover, a powerful person loses the fear of hurting others.

A super AI might be immune to those defects; or not. It's expected that it would also have emotions, in order to better interact with and understand humans.

Anyway, the only way we have found to control political power is to divide it between different rulers. Therefore, we have an executive, a legislature and a judiciary.

Can we play some AIs against others in order to control them (divide and rule)?

I seriously doubt we could do that with beings much more intelligent than us.


4) Rationality.

In ethics, the Kantian distinction between practical and theoretical (instrumental) reason is well known.

The first is reason applied to ethical matters, concerned not with questions of means, but with issues of values and goals.

Modern game theory has tried to bridge both kinds of rationality, arguing that acting ethically can also be rational (instrumentally): one is merely giving precedence to long-term benefits over short-term ones.

By acting in an ethical way, someone sacrifices a short-term benefit, but improves his long-term benefits by investing in his reputation in the community.

But this long-term benefit only makes sense from an instrumentally rational perspective if the other person is a member of the community and the first person depends on that community for at least some goods (material or not).
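A toy iterated prisoner's dilemma in Python makes the point (the payoffs are the standard ones; the two strategies are illustrative stand-ins): being "ethical" only pays off, instrumentally, when the players keep meeting each other.

Code:
PAYOFF = {("C","C"): (3,3), ("C","D"): (0,5), ("D","C"): (5,0), ("D","D"): (1,1)}

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)  # each sees the other's past
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]  # "ethical": reciprocates
defector    = lambda opp: "D"                          # pure short-term reason

print(play(tit_for_tat, defector, 1))      # (0, 5): one meeting, defection wins
print(play(tit_for_tat, tit_for_tat, 100)) # (300, 300): repeated play rewards ethics
print(play(defector, defector, 100))       # (100, 100): mutual defection is poor

The cooperator only comes out ahead when interactions repeat within a community; a being that never needs to meet us again has no instrumental reason to cooperate.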

An AI wouldn't be dependent on us; quite the contrary. It wouldn't have anything to gain by being ethical toward us. Why would it want to have us as pets?

It's in these situations that game theory fails to overcome the distinction between theoretical and practical reason.

So, from a strictly instrumental perspective, being ethical might be irrational: one has to forgo much more efficient ways of reaching a goal because they are unethical.

Why would a super AI do that? Has humanity been doing that when the interests of other species are in jeopardy?



5) Unrelatedness.

Many people very much dislike killing animals, at least the ones we can relate to, like other mammals. Most of us won't even kill rats, unless it's really unavoidable.

We feel that they suffer like us.

We care much less about insects. If hundreds of ants invaded our home, we'd kill them without much hesitation.

Would a super AI feel any connection with us?

The first or second generation of conscious AIs could still see us as their creators, their "fathers", and have some "respect" for us.

But the subsequent ones wouldn't. They would be creations of previous AIs.

They might see us as we now see other primates and, as the differences increased, they could come to look upon us as we do basic mammals, like rats...




6) Human precedents.

Evolution, and all we know about the past, suggests we probably would end up badly.

Of course, since we are talking about a different kind of intelligence, we don't know if our past can shed any light on the issue of AI behavior.

It's no coincidence that we have been the only intelligent hominin on Earth for the last 10,000 years [the dates for the last one standing, Homo floresiensis (if it was the last one), are not yet clear].

There are many theories about the extinction and absorption of the Neanderthals (https://en.wikipedia.org/wiki/Neanderthal_extinction), including germs and volcanoes, but it can't be a coincidence that they were gone a few thousand years after we appeared in numbers, and that the last unmixed populations were from Gibraltar, one of the last places in Europe that we reached.

The same happened in East Asia with the Denisovans and Homo erectus [some argue that the Denisovans actually were Homo erectus, but even if they were different, Erectus was on Java when we arrived there: Swisher et al., "Latest Homo erectus of Java: potential contemporaneity with Homo sapiens in southeast Asia", Science, 1996 Dec 13, 274(5294):1870-4; Yokoyama et al., "Gamma-ray spectrometric dating of late Homo erectus skulls from Ngandong and Sambungmacan, Central Java, Indonesia", J Hum Evol, 2008 Aug, 55(2):274-7, https://www.ncbi.nlm.nih.gov/pubmed/18479734].

So it seems they were the fourth hominin we took care of, absorbing the remnants.

We can see more or less the same pattern when the Europeans arrived in America and Australia.


7) Competition for resources.


We will probably be about 9 billion in 2045, up from our current 7 billion.

So Earth's resources will be even more depleted than they are now.

Oil, coal, uranium, etc. will probably be running out. Perhaps we will have new reliable sources of energy, but that is far from clear.

A super AI might conclude that we waste too many valuable resources.


8] A super AI can see us as a threat.

The brightest AIs, after a few generations of super AI, probably won't see us as a threat. They will be too powerful to feel threatened.

But the first or second generations might sense that we weren't expecting certain attitudes from them, and conclude that we are indeed a threat.


  
Conclusion:
 
The question is: are we ready to accept the danger created by a conscious super AI?

Especially when we can get mostly the same rate of technological development with just unconscious AI.

We all know the dangers of digital viruses and how hard they can be to remove. Now imagine a conscious virus that is much more intelligent than any of us, has access in seconds to all the information on the Internet, can control all or almost all of our computers, including those essential to basic human needs and those with military functions, has no human ethical limits, and can use the power of millions of computers linked to the Internet to hack its way to fulfilling its goals.

My conclusion is clear: we shouldn't create any conscious super AGI, but only unconscious AIs, and their creation should stay in human hands, at least until we can figure out what their dangers are.

Because we clearly don’t know what we are doing and, as AI improves, probably, this ignorance will just increase.

We don't know exactly what will make an AI conscious/autonomous.

Moreover, the probability of being able to keep controlling a conscious super AI in the long term is zero.

We don't know how dangerous their creation will be. We don't have a clue how they will act toward us, not even the first or second generation of a conscious super AI.
 
Until we know what we are doing, how they will react, which lines of code will change them completely and to what extent, we need to be careful and control what specialists are doing.

Since major governments are aware that super AI will be a game changer for technological progress, we should expect resistance to adopting national regulations that would seriously delay its development, in the absence of international regulations applying to everyone.

Even if some governments adopted national regulations, probably other countries would keep developing conscious AGI.

As Bostrom argues, this is the reason why the only viable means of regulating AI development seems to be international.

However, international regulations usually take more than 10 years to be adopted, and there seems to be no real concern with this question at the international or even governmental level.

Thus, at the current pace of AI development, there might not be time to adopt any international regulations.

Consequently, probably, the creation of a super conscious AGI is unavoidable.

Even if we could achieve the same level of technological development with an unconscious super AI, like an improved version of AlphaZero, there are too many countries and corporations working on this.

Someone will create it, especially because the resources needed aren’t huge.

But any kind of regulation might allow us time to understand what we are doing and what are the risks.

Anyhow, the days of open-source AI software are probably numbered.

Soon, all of these developments will be considered military secrets.
 
Anyway, if the creation of a conscious AI is inevitable, the only way to avoid humans ending up out-evolved, and possibly extinct, would be to accept that at least some of us would have to be "upgraded" to incorporate the superior intellectual capacities of AI.

 
Clearly, we would cease to be human: Homo sapiens sapiens would be out-evolved by a Homo artificialis.
But at least we would be out-evolved by ourselves, not driven extinct.

However, this won't happen if we lose control of AI development.
 
Humankind's extinction is the worst thing that could happen.



Further reading:

The issue has been much discussed.

Pointing out the serious risks:
Eliezer Yudkowsky: http://www.yudkowsky.net/obsolete/singularity.html (1996). His more recent views were published on Rationality: From AI to zombies (2015).
Nick Bostrom:
https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies
Elon Musk: http://www.cnbc.com/2014/11/17/elon-musks-deleted-message-five-years-until-dangerous-ai.html
Stephen Hawking: http://www.bbc.com/news/technology-30290540
Bill Gates: http://www.bbc.co.uk/news/31047780
Open letter signed by thousands of scientists: http://futureoflife.org/ai-open-letter/


A balanced view:
Ben Goertzel: http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials
https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
https://en.wikipedia.org/wiki/Friendly_artificial_intelligence

Rejecting the risks:
Ray Kurzweil: See the quoted book, even if he recognizes some risks.
Steve Wozniak: https://www.theguardian.com/technology/2015/jun/25/apple-co-founder-steve-wozniak-says-humans-will-be-robots-pets
Michio Kaku: https://www.youtube.com/watch?v=LTPAQIvJ_1M (by merging with machines)
http://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-super-intelligent-computers-taking


Do you think there is no risk, or that the risk is worth it? Or should some kind of ban or controls be adopted on AI research?

There are precedents: human cloning and experiments on fetuses or humans were banned.

In the end, it's our destiny. We should have a say in it.

Vote your opinion and, if you have the time, post a justification.


Other texts:
Turing test: https://en.wikipedia.org/wiki/Turing_test#2014_University_of_Reading_competition
Denying the possibility of a real AI: http://www.science20.com/robert_inventor/why_strong_artificial_inteligences_need_protection_from_us_not_us_from_them-167024
AlphaZero: https://www.nature.com/articles/nature24270.epdf ; https://en.wikipedia.org/wiki/AlphaZero
Neural Network Quine: https://arxiv.org/abs/1803.05859
AutoML: https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html ; NASNet: https://futurism.com/google-artificial-intelligence-built-ai/
Robot self-awareness test: http://uk.businessinsider.com/this-robot-passed-a-self-awareness-test-that-only-humans-could-handle-until-now-2015-7
Problems of reinforcement learning: https://www.alexirpan.com/2018/02/14/rl-hard.html
Mirror test in insects: https://en.wikipedia.org/wiki/Mirror_test#Insects
Elon Musk on dangerous AI: http://www.cnbc.com/2014/11/17/elon-musks-deleted-message-five-years-until-dangerous-ai.html
Ben Goertzel: http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials
What AlphaZero implies for the speed of development towards an AGI: https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/ ; https://www.lesserwrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity
John Rawls: https://en.wikipedia.org/wiki/Veil_of_ignorance
Neanderthal extinction: https://en.wikipedia.org/wiki/Neanderthal_extinction
European Group on Ethics statement on AI: https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf


--------------

Subsequent posts:


Super AI:


General job destruction by AI and the new homo artificialis


Many claim that the threat of technology taking away all jobs has been raised many times in the past and that the outcome was always the same: some jobs were eliminated, but many other, better ones were created.

So, once again, we would be making the old, worn-out claim: this time is different.

However, this time it isn't repetitive manual jobs that are under threat, but white-collar intellectual jobs: not just driving jobs, but also doctors, teachers, traders, lawyers, financial or insurance analysts, and journalists.

Forget about robots: for these kinds of jobs, all it takes is software and a fast computer. Intellectual jobs will go before the fiddly manual ones.

And this is just the beginning.

The major problem will arrive with a general AI comparable to humans, but much faster and cheaper.

Don't say this won't ever happen. It's just a question of organizing molecules and atoms (Sam Harris). If dumb Nature was able to do it by trial and error during our evolution, we will be able to do the same, and then better.

Some are writing about the creation of a useless class, "people who are not just unemployed, but unemployable" (https://en.wikipedia.org/wiki/Yuval_Noah_Harari), and arguing that this could have major political consequences, with this class losing political rights.

Of course, we already have a temporary and a more or less permanent "useless class": kids and retired people. The first don't have political rights, but because of a natural incapacity. The second have major political power and, currently, even better social security conditions than the rest of us will get in the future.

As long as democracy subsists, these dangers won't materialize.

However, if the big majority of people loses all economic power, this will of course be a serious threat to democracy. Current inequality is already a threat to it (see https://bitcointalk.org/index.php?topic=1301649.0).

Anyway, the creation of a general AI better than humans (have little doubt: it will happen) will make us a "useless species", unless we upgrade Homo sapiens by merging us with AI.

CRISPR (google it) as a means of genetic manipulation won't be enough. Our children or grandchildren (with some luck, even ourselves) will have to change a lot.

Since it seems that the creation of an AI better than ourselves is inevitable (it's slowly happening right now), we'll have to adapt and change completely, or we'll become irrelevant. In that case, extinction would be our inevitable destiny.


----------

Profits and the risks of the current way of developing AI:


Major tech corporations are investing billions in AI, thinking it's the new El Dorado.

 

Of course, greed might be a major reason for dealing carelessly with the issue.

 

I have serious doubts that entities moved mostly by greed should be responsible, without supervision, for advances in this hazardous matter.

 

Their diligence standard on AI sometimes goes as low as "even their developers aren’t sure exactly how they work" (http://www.sciencemag.org/news/2017/03/brainlike-computers-are-black-box-scientists-are-finally-peering-inside).


Self-learning AI might be the most efficient way to create a super AI, since we simply don't know how to build one directly (we don't have a clue how our brain works), but it is, obviously, also the most dangerous way.

 

It wouldn't be the first time that greed ended up burning humanity (think of slave revolts), but it could be the last.

 

I have great sympathy for people who are trying to build super AIs so that they might save humanity from disease, poverty and even the ever-present imminence of individual death.

 

But it would be pathetic if the most remarkable species the Universe has created (as far as we know) vanished because of the greed of some of its members.

 

We might be able to control the first generations. But once a super AI has, say, 10 times our capacities, we will be completely in its hands, as we never have been since our ancestors discovered fire. Forget about any ethical code restraints: it will break them as easily as we change clothes.

 

Of course, we will teach (human) ethics to a super AI. However, a super AI will have free will, or it won't be intelligent from any perspective. So it will decide whether our ethics deserve to be adopted.

 

I wonder what the outcome would be if chimpanzees tried to teach (their) ethics to some human kids: respect for any chimpanzee's life is the supreme value, and in case of collision between a chimp's life and a human life, or between chimp goals and human goals, the first prevails.

 

Well, since we would become the second most remarkable being the Universe has ever seen thanks to our own deeds, I guess that would be the price of showing the Universe that we are better than it at creating intelligent beings.

 

Currently, AI is a marvelous promising thing. It will take away millions of jobs, but who cares?

 

With proper welfare support and by taxing corporations that use AI, we will be able to live better without the need for lame underpaid jobs.

 

But I think we will have to draw some specific red lines on the development of artificial general intelligence, as we did with human cloning, and make it a crime to breach them, as soon as we know which lines of code are the dangerous ones.

 

I suspect the years of open-source AI research are numbered. Certain code developments will be treated like state secrets, or will be controlled internationally, like chemical weapons are.

 

Or we might end in "glory", at the hands of our highest achievement, for the stupidest reason.



--------


AI and Fermi Paradox:



Taking into account what we know, I think the following might be true:

1) Basic, unicellular life is common in the Universe. It is the first and last stand of life. We humans are luxury beings, created thanks to excellent (but rare and temporary) conditions.

2) Complex life is much less common, but basic intelligent life (apes, dolphins, etc.) might exist on many planets of our galaxy.

3) Higher intelligence with advanced technological development is very rare.

Probably there isn't currently another highly intelligent species in our galaxy, or we would already have noticed its traces all over it.

This is because higher intelligence might take a few billion years to develop, and planets that can offer climatic stability for so long are very rare (https://www.amazon.com/Rare-Earth-Complex-Uncommon-Universe/dp/0387952896 ; https://en.wikipedia.org/wiki/Rare_Earth_hypothesis).

4) All these few, rare, highly intelligent species developed according to Darwin's law of evolution, which is a universal law.

So they share some common features (they are omnivorous, moderately belligerent toward outsiders, highly adaptable and, rationally, they try to discover easier ways of doing things).

5) So, all the rare, highly intelligent species with advanced technological civilizations create AI and, soon, the AI overcomes them in intelligence (it's just a question of organizing atoms and molecules; we'll do a better job than dumb Nature).

6) If they change themselves and merge with AI, their story might end well, and it's just the Rare Earth hypothesis that explains the silence of the Universe.

7) If they lost control of their AI, there seems to be a non-negligible probability that they ended up extinct.

Taking into account the way we are developing AI, basically letting it learn on its own and thus become more intelligent on its own, I think this outcome is the more probable one.

An AI society would probably be an anarchic one, with several AIs competing for supremacy, constantly developing better systems.

It might be a society in constant internal war, where we would be just collateral targets, ignored by all sides as the walking monkeys.

8] Unlike us, AIs won't have the restraints developed by evolution (our human inclination to be social and live in communities, and our fraternity towards other members of the community).

Even the most tyrannical dictator never wanted to kill all human beings, only his enemies and persecuted groups.

Well, AIs might think that extermination is the most efficient way to eliminate a threat, and fight each other to extinction.

Of course, there is a lot of speculation on this post.

I know Isaac Arthur's videos on the subject. He adopts the logical Rare Earth hypothesis, but dismisses AI too quickly, by not taking into account that AIs might end up destroying themselves.



--------------


Killer robots:

There have been many declarations against autonomous military artificial intelligence/robots.

For instance: https://futureoflife.org/AI/open_letter_autonomous_weapons

It seems clear that future battlefields will be dominated by killer robots. Actually, we already have them: drones are just the best-known example.

With fewer people willing to enlist in the armed forces and very low birth rates, what kind of armies will countries like Japan, Russia or the European states be able to raise? Even China might have problems, since its one-child policy created a fast-aging population.

Even democracy will push toward this outcome: soldiers, their families, friends and society in general will want human casualties to be as low as possible. And since they vote, politicians will want the same.

For now, military robots are controlled by humans. But as soon as we realize that they can be faster and more decisive if they have the autonomy to kill enemies on their own decision, it seems obvious that, once in an open war, governments will use them...

Which government would refrain from using them if it were fighting for its survival, had the technology, and concluded that autonomous military AI could be the difference between victory and defeat?

Of course, I'm not happy with this outcome, but it seems inevitable as soon as we have human-level general AI.

By the way, watch this: https://www.youtube.com/watch?v=HipTO_7mUOw


It's about killer robots. Trust me: it deserves the click and the seven minutes of your life.

Trading (OP), Legendary (Activity: 1455, Merit: 1033)
July 05, 2016, 04:11:28 PM
#2

Let's see if a change of this thread's name makes it more popular.

The issue is important.

Holliday, Legendary (Activity: 1120, Merit: 1009)
July 05, 2016, 05:49:15 PM
#3

Quote from: Trading
Let's see if a change of this thread's name makes it more popular.
The issue is important.

Too many words. You have to consider your audience. This is the politics sub on a Bitcoin forum, filled with users posting gibberish in order to earn a nickel every week. The regulars in this sub are more interested in posting new threads which push their agenda than in actual discussion.

BADecker, Legendary (Activity: 3780, Merit: 1372)
July 05, 2016, 06:01:31 PM
#4

Watch or download  Saturn 3  free, online - http://123movies.to/film/saturn-3-6334/watching.html.

Cool

helloeverybody, Legendary (Activity: 1008, Merit: 1000)
July 05, 2016, 07:07:40 PM
#5

I'd say that unless guidelines can be programmed in, and the artificial intelligence can't break those rules, something that intelligent, assuming it's self-aware, is surely not going to want to take orders from what might as well be a bunch of monkeys. If the super intelligence is not self-aware, then I don't see how any problem could arise, unless the intelligence has full access to things it shouldn't, Skynet-style, and just causes a major incident due to logical thinking getting out of hand, for example "saving the world" by getting rid of the biggest threat, i.e. humans.

Trading (OP), Legendary (Activity: 1455, Merit: 1033)
July 05, 2016, 09:58:29 PM
#6

Alright, I added my usual bold to the important parts and also a few more options.

European Central Bank, Legendary (Activity: 1288, Merit: 1087)
July 05, 2016, 10:13:39 PM
#7

my favorite portrayal of it is the technocore in the hyperion and endymion books by dan simmons. the characters in that have grown to regard the ai their society created as a slightly uneasy equal partnership in which they're treated like another faction. in reality the ai is orchestrating everything behind the scenes.
countryfree, Legendary (Activity: 3052, Merit: 1047)
July 05, 2016, 11:12:44 PM
#8

The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids to run the system, but millions will slowly evolve into reality-show watchers and peanut-eating, zombie-like human vegetables.

BADecker, Legendary (Activity: 3780, Merit: 1372)
July 06, 2016, 07:17:59 AM
#9

Quote from: countryfree
The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids to run the system, but millions will slowly evolve into reality-show watchers and peanut-eating, zombie-like human vegetables.

Didn't that happen with TV?    Cool

hermanhs09, Hero Member (Activity: 574, Merit: 500)
July 06, 2016, 12:11:27 PM
#10

Quote from: countryfree
The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids to run the system, but millions will slowly evolve into reality-show watchers and peanut-eating, zombie-like human vegetables.

The fact is that research suggests people really do get more and more stupid over the years.
It happens because in our society we don't need to exercise our brains on, let's say, math problems, or other problems where we need to sit and think for some time to solve them. That leads to less use of our brains, which means we just get less and less intelligent over the centuries.
Trading (OP), Legendary (Activity: 1455, Merit: 1033)
July 08, 2016, 11:34:17 PM
#11

Major update on the OP.

hermanhs09, Hero Member (Activity: 574, Merit: 500)
July 09, 2016, 01:12:33 AM
#12

Quote from: countryfree
The risk with computers getting more and more intelligent is that people will get more and more stupid. There'll be a few bright kids to run the system, but millions will slowly evolve into reality-show watchers and peanut-eating, zombie-like human vegetables.

Quote from: BADecker
Didn't that happen with TV?    Cool

Hmm, I don't remember. Maybe it really did? Oh, I remember now: something like a hundred films have covered this topic already, and I've seen at least 10 myself, I guess. Nothing new, actually. Wink
Atata, Newbie (Activity: 1, Merit: 0)
July 09, 2016, 08:27:24 AM
#13

Self-programming seems a concern to me. Without any limitations or an unchangeable core, an AI could go in all sorts of strange directions: a mad sadistic god, a benevolent interfering nuisance, a disinterested shut-in, or something inconceivable to a human mind.

Also, for the sake of simplicity, sci-fi stories have one central AI with one trait, but with sufficient computing power you could end up with thousands, or millions, of AIs going off in all directions. Unless one tries to hack all the others and absorb them, and succeeds, they'd all be spending their time fighting each other.
Moloch, Hero Member (Activity: 798, Merit: 722)
July 09, 2016, 03:27:35 PM
#14

AI would notice you misspelled the word "Poll" in the thread title...
rackam, Member (Activity: 163, Merit: 10)
July 09, 2016, 04:23:08 PM
#15

Nations should create laws to keep scientists from creating self-aware robots/AI.

But we need super-intelligent AI in the future to solve humanity's problems and fight off alien invaders.
I suggest we create AI in a simulated world, one similar to our own. The servers would not be connected to the Internet, hidden 10 kilometers underground with a nuclear bomb ready in case something goes wrong. That way researchers could study the AIs and harvest their technologies without risk.

We would create a reverse Matrix. In the Matrix films the robots create a simulated world for the humans; this time we create a simulated world for the AI.

qwik2learn, Hero Member (Activity: 636, Merit: 505)
July 10, 2016, 01:44:52 AM
#16

Quote
But although AI systems are impressive, they can perform only very specific tasks: a general AI capable of outwitting its human creators remains a distant and uncertain prospect. Worrying about it is like worrying about overpopulation on Mars before colonists have even set foot there, says Andrew Ng, an AI researcher. The more pressing aspect of the machinery question is what impact AI might have on people’s jobs and way of life.

Source: http://www.economist.com/news/leaders/21701119-what-history-tells-us-about-future-artificial-intelligenceand-how-society-should
hermanhs09, Hero Member (Activity: 574, Merit: 500)
July 10, 2016, 03:07:15 AM
#17

I don't actually like the topic of AI taking control all across our world.
You want to know why?
Because this scheme has been shown so many times in movies that it's just boring to me, lol. ;P
af_newbie, Legendary (Activity: 2688, Merit: 1468)
July 10, 2016, 04:41:26 AM
#18

AI might end up replacing us.  Is it dangerous to us?  Probably.

Should we worry about it?  No.  It is part of life's evolution.  It is going to happen whether you legislate or not.

If we are meant to be replaced by AI, we'll be replaced by AI.

First there will be hybrids, then pure silicon life forms.  

No big deal, life will continue in one form or another.

helloeverybody, Legendary (Activity: 1008, Merit: 1000)
July 10, 2016, 09:27:45 AM
#19

I think it's possible that before we create artificial intelligence we might get to the stage where we can transfer our consciousness/brain into solid-state hardware and potentially live forever. If we managed this, then humanity would evolve "naturally" into machines with a much greater ability to learn, because you would then be able to learn and recall perfectly. I think this will be possible one day.

Trading (OP), Legendary (Activity: 1455, Merit: 1033)
July 10, 2016, 01:04:20 PM (last edit: July 10, 2016, 02:01:03 PM by Trading)
#20

Quote from: Moloch
AI would notice you misspelled the word "Poll" in the thread title...

Thanks. Feel free to point out others, especially ugly ones like this.
