Bitcoin Forum
September 25, 2018, 11:58:37 PM *
Show Posts
1  Other / Politics & Society / Privacy and the importance of biodata on: March 21, 2018, 01:09:22 AM


The Facebooks of the future might be health and biological corporations, offering free health care services or enhanced biological and intellectual capacities in exchange for biodata (Yuval Harari).

Imagine a device that improves health, lifespan or intelligence, or allows instant access to information; that customers will have to use permanently; and that will allow a corporation to monitor their biodata.

A device that records their emotional reactions when they are confronted with information, political propaganda, products or services, and that allows the corporation to really know them.

Not just where they go, what they read or what they buy, but their emotions, likes and dislikes, and inclinations.

That will allow it to predict which party its customers will vote for, what products and services they will buy, whether they have criminal or politically rebellious inclinations, and so on.

Governments won't need to buy this information. Dictatorial ones will make these devices mandatory; others will demand the information from these corporations.

Of course, this is what Facebook, Google and Microsoft (yes, don't forget to opt out of all the information Windows 10 gives away about you by default) already do: offer free services in exchange for information about their customers, which they use to show ads or sell to third parties.

These developments will just go a few steps further, knowing the customer better than he knows himself and, therefore, easily manipulating or controlling him.
2  Other / Politics & Society / Watch this! The near future of AI: Slaughterbots on: January 08, 2018, 09:08:27 PM
It's a 7-minute YouTube video: https://www.youtube.com/watch?v=HipTO_7mUOw


It's about killer robots. Trust me: it deserves the click and the 7 minutes of your life. It will scare the hell out of you.


Don't read the rest of this post before watching the video; it contains spoilers.
















Spoilers:

Of course, it's just a movie, not a real presentation. But at the beginning, that isn't clear.

Yes, as far as we know, this doesn't exist yet.

But it's a sure thing. And no public appeal from the Future of Life Institute will prevent it.

In a few years, any coder will be able to build bots like these. Imagine the damage in the hands of a terrorist group or a dictatorial government.


See further, about the dangers of a super AI:

https://bitcointalk.org/index.php?topic=1538764.0

https://bitcointalk.org/index.php?topic=2423065.0
3  Other / Politics & Society / Our medium/short-term source of energy: the molten salt thorium nuclear reactor on: November 23, 2017, 04:06:10 PM
Besides time, energy is our most precious commodity. All societies run on it.

If we couldn't sustain the current energy output, we wouldn't be able to support the present world population of about 7.4 billion (expected to reach about 9 billion in about 25 years).

With global warming and the possibility that oil production won't be able to keep up with demand (Peak Oil is still a serious possibility, taking into account the development of India and Africa; fracking didn't solve this problem, it could only postpone it), we are risking a serious climate and energy crisis.

Because of their intermittency and low density (there's no use trying to run Northern Europe or Canada on solar power), with current technology renewable sources can't really offer a complete solution for overcoming our dependency on fossil sources, even taking into account the major drop in the price of solar.

After Chernobyl and Fukushima, traditional nuclear reactors are clearly in a serious crisis (Germany decided to close all its nuclear plants by 2022).

Current nuclear plants rely on a high-pressure water system to power an electrical generator.

Since water boils at about 100 degrees centigrade at normal pressure, only by subjecting it to a pressure as high as 70 times atmospheric pressure can its temperature be raised to about 300 degrees without boiling, in order to produce energy at about 30-35% efficiency.
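As a rough sanity check of those numbers, one can invert the Antoine vapor-pressure correlation for water (the coefficients below are the commonly tabulated set for the 99-374 °C range; treat this as an approximation, not reactor engineering):

```python
import math

def saturation_temp_celsius(pressure_atm):
    """Invert the Antoine equation for water (99-374 C range):
    log10(P_mmHg) = A - B / (C + T_celsius)."""
    A, B, C = 8.14019, 1810.94, 244.485
    p_mmhg = pressure_atm * 760.0
    return B / (A - math.log10(p_mmhg)) - C

# At ~70 atm, water stays liquid up to roughly 286 C,
# in the ballpark of the ~300 C operating temperature quoted above.
print(round(saturation_temp_celsius(70), 1))

# At 1 atm we recover the familiar ~100 C boiling point.
print(round(saturation_temp_celsius(1), 1))
```

So the "70 times atmospheric pressure for about 300 degrees" figure is consistent with the saturation curve of water.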

Well, pressurizing radioactive water to 70 times normal pressure is a dangerous business.

If the plant loses pressure, the water starts boiling, and vapor occupies much more space than liquid water. If the massive concrete and steel containment that covers these reactors, in order to avoid any radioactive emission, is breached, one ends up with another Fukushima.

Since the fifties there have been experiments with nuclear reactors that use molten salt as the heating/cooling medium instead of pressurized water.

Molten salt reactors have a few advantages: they can reach much higher temperatures (so they are much more efficient at producing energy), and at normal pressure!

Therefore, they don't need the expensive containment, so the plants are much smaller and cheaper to build.

They can indeed be very small and even transportable! They can be manufactured on assembly lines.

Also, these reactors are much safer. They cool off on their own when the temperature rises too much, and the nuclear reaction can be easily stopped.


Moreover, a molten salt reactor can be powered with thorium (which can be easily converted to uranium-233), which has some advantages over uranium:

1) It is much more common than uranium and has been mined extensively as a by-product that no one currently uses. So it's available with little mining cost.

2) It can be used almost completely (contrary to uranium, of which only about 0.07% is actually consumed, leaving the rest as waste that remains radioactive for thousands of years), being able to create about 200 times more energy than the same quantity of uranium.

3) It leaves relatively little waste, and that waste is radioactive "only" for about 300 years.

4) This waste, contrary to the waste from conventional reactors, is hard to use for weapons purposes. So it's much less risky to share this technology.

Finally, this new kind of reactor can also be powered by a mix of thorium and the uranium waste created by conventional reactors.

So we can find a use for this terrible legacy we are leaving to future generations.


There are a few technological obstacles, like the corrosive nature of the salt. But some projects intend to use graphite in the core to overcome it.

Nobody will stop this now: China, India, the USA, Canada and a few European countries are racing to build a fully working reactor of this kind.

https://www.youtube.com/watch?v=bbyr7jZOllI

https://www.youtube.com/watch?v=Kv_XjHPJjEg


https://en.wikipedia.org/wiki/Molten_salt_reactor#Twenty-first_century

https://www.extremetech.com/extreme/254692-new-molten-salt-thorium-reactor-first-time-decades

http://www.businessinsider.com/thorium-molten-salt-reactors-sorensen-lftr-2017-2

It seems our future is nuclear: first, Molten salt thorium fission; then, fusion.
4  Other / Politics & Society / The stupidest thing I have ever seen in my whole life: a flat-earther on: November 23, 2017, 02:27:31 PM
A flat-earther built a rocket and is going to try to launch himself in order to take pictures from "space" and prove that he is right:
https://www.yahoo.com/news/flat-earther-launch-himself-homemade-212500611.html

https://www.extremetech.com/extreme/259409-flat-earther-plans-homemade-manned-rocked-launch-coming-saturday

www.newsweek.com/earth-flat-rocket-launch-mad-mike-hughes-719367

https://www.forbes.com/sites/trevornace/2017/11/22/flat-earther-launch-rocket-prove-earth-flat/

I couldn't believe this story, so I had to find more news sources before taking it seriously.

There is a pretty good chance that this remarkable movement that believes the Earth is flat is going to lose one of its members: not really because he is going to change his opinion after reaching space and seeing the Earth, but because of his premature death.

I guess they don't know that they could just rent a ship and sail until they "drop off the edge of the Earth".

But, for the record, he isn't completely stupid, since he said "If you're not scared to death [about trying this], you're an idiot".

But he is wrong: he is the living (unfortunately, probably only until Saturday) proof that one can be scared to death about trying this and still be an idiot.

I guess the movement is going to have its first martyr.

Of course, they will say he was blown out of the sky by the evil conspirators who have been fooling us all.

Because we all [but not them] are the stupid ones.

The things people will believe or do just to try to show how [un]intelligent they are…
5  Other / Politics & Society / General job destruction by AI and the new homo artificialis on: November 18, 2017, 02:35:43 PM
Many claim that the threat that technology would take away all jobs has been made many times in the past and that the outcome was always the same: some jobs were eliminated, but many other, better ones were created.

So, again, we are just making the old, worn-out claim: this time is different.

However, this time it isn't just repetitive manual jobs that are under threat, but white-collar intellectual jobs: not just driving jobs, but also doctors, teachers, traders, lawyers, financial and insurance analysts, and journalists.

And this is just the beginning.

The major problem will arrive with a general AI comparable to humans, but much faster and cheaper.

Don't say this won't ever happen. It's just a question of organizing molecules and atoms (Sam Harris). If dumb Nature was able to do it by trial and error, we will be able to do the same and, then, better.

Some are writing about the creation of a useless class, "people who are not just unemployed, but unemployable" (https://en.wikipedia.org/wiki/Yuval_Noah_Harari), and claiming that this can have major political consequences, with this class losing political rights.

Of course, we already have a temporary and a more or less permanent "useless class": kids and retired people. The first don't have political rights, but because of a natural incapacity. The second have major political power and, currently, even better social security conditions than any of us will get in the future.

As long as Democracy subsists, these dangers won't materialize.

However, of course, if the big majority of the people loses all economic power, this will be a serious threat to Democracy. Current inequality is already a threat to it (see https://bitcointalk.org/index.php?topic=1301649.0).

Anyway, the creation of a general AI better than humans (have little doubt: it will happen, and you'll see it in your lifetime) will make us a "useless species", unless we upgrade homo sapiens by merging ourselves with AI.

CRISPR (google it) as a way of genetic manipulation won't be enough. Our sons or grandsons (with some luck, even ourselves) will have to change a lot.

Since the creation of an AI better than ourselves seems inevitable, we'll have to adapt and change completely, or we'll become irrelevant. In that case, extinction would be our destiny.
6  Bitcoin / Bitcoin Discussion / FAQ on how to destroy a 30 billion endeavour on: May 19, 2017, 02:02:14 AM
How do you destroy a 30 billion USD endeavor?
A: Just keep being stubborn, never compromise and do nothing, besides insulting whoever thinks differently from you.

https://blockchainbdgpzk.onion/unconfirmed-transactions : 236,000 and going up at about 10% a day.

Of course, this doesn't count the unconfirmed transactions that were cancelled after a few days, or the number would be several million.

Yes, sure, these are just spam transactions…

Currently, even paying 0.003 BTC/kB (0.0007 BTC for a 223-byte transaction) won't get you a fast confirmation (https://bitcoinfees.21.co/ shows some transactions that paid this and are still unconfirmed: you might have to wait several hours).
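For reference, here is how that per-transaction fee follows from the quoted rate; this is just size-times-rate arithmetic, with the 223-byte size and the 0.003 BTC/kB figure taken from the paragraph above:

```python
def tx_fee_btc(tx_size_bytes, fee_rate_btc_per_kb):
    """Fee = transaction size in kB times the per-kB fee rate."""
    return tx_size_bytes / 1000.0 * fee_rate_btc_per_kb

# A typical 1-input, 2-output transaction of 223 bytes
# at 0.003 BTC/kB costs 0.000669 BTC, i.e. the ~0.0007 quoted above.
fee = tx_fee_btc(223, 0.003)
print(round(fee, 4))  # 0.0007
```

At May 2017 prices of around 2,000 USD per BTC, that single fee was already more than a dollar, which is what makes small payments uneconomical.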

And the issue isn’t only the price, it’s also the insecurity. If you don’t pay the right fee, you are stuck. Not everyone has a client with a fee calculator or knows how to use one.

Every person who gets stuck will swear never to use bitcoin again. Every retailer with complaining customers will move to an altcoin.

This news is moving from the coin press to the general media: https://www.forbes.com/sites/laurashin/2017/05/16/for-first-time-bitcoin-accounts-for-less-than-half-of-market-cap-of-all-cryptocurrencies/

You were expecting this rally in bitcoin’s price to keep going and going in the middle of this mess? Really?

The time is coming for us to pay the price for this chaos. Thank you, developers and miners.

Miners are having their fee party, but the music is about to stop and there won’t be enough chairs for us to sit safely.

PS. I'm not siding with anyone on the technical issue. I'm saying it's starting to be too late to find and apply any solution to recover what is being lost with every day that goes by.
7  Other / Politics & Society / SpaceX and the prospects of Mars colonization. on: January 06, 2017, 08:07:05 PM


1) Current unfeasibility of Mars massive colonization.

The goal of 1 million inhabitants on Mars in 50 years is unfeasible (http://www.telegraph.co.uk/science/2017/06/21/elon-musk-create-city-mars-million-inhabitants/).

With the Big Falcon Rocket (BFR), at 100 passengers per flight, this would require 10,000 flights just to transport the people.

But the material support is about 10 times more demanding. So, as Elon Musk recognizes, the system would require 110,000 flights (see https://aeon.co/essays/elon-musk-puts-his-case-for-a-multi-planet-civilisation; see his 2017 presentation https://www.youtube.com/watch?v=E4FY894HyF8).

Even at one flight a day, it would take 301 years. And since that is impossible, because one has to wait for the launch window every 26 months, when Mars is closest to Earth, transporting all these people would take hundreds of spacecraft. This is completely beyond the normal resources of any company or country.
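The arithmetic behind those flight counts is easy to check (all figures here are the ones quoted in this post, not official SpaceX numbers):

```python
# Figures quoted in the post.
colonists = 1_000_000
passengers_per_flight = 100
cargo_multiplier = 10  # material support ~10x the passenger flights

passenger_flights = colonists // passengers_per_flight
total_flights = passenger_flights + passenger_flights * cargo_multiplier

years_at_one_flight_per_day = total_flights / 365.25

print(passenger_flights)                    # 10000
print(total_flights)                        # 110000
print(round(years_at_one_flight_per_day))   # 301
```

And since launches can realistically cluster only around the 26-month windows, the real constraint becomes fleet size rather than flight rate.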

To finance the passenger flights, he would need to find 1 million people willing to pay 200,000 USD to go and live permanently in hell.

When he says that the goal is to make the price of the voyage similar to the price of a normal house, he suggests that people would sell their houses to buy the ticket.

I wonder how expensive a house on Mars would be! Is SpaceX going to build and offer a house to every colonist? Because if they spend their savings and the value of their Earth house paying for the voyage, they won't have much left to buy a house there.

What about the standard of living on Mars? Things would probably be very expensive during the first decades, since most complex goods would have to be imported from Earth.

A fantasy company managed to enlist 200,000 people willing to go to Mars. I wonder how many of them had 200,000 USD and were willing to spend it on the ticket.

So, probably, only the poor would be ready to try their luck, looking for well-paid jobs on Mars. But they won't have 200,000 USD.

Musk might find 1 million people willing to go and work there in very good jobs, but someone else would have to pay for their trips and their wages.

Selling tourist trips won't pay for the voyages either. I doubt he will be able to find many groups of 20 people willing to pay 1 million bucks each to cover the tickets of the other 80 (he can sell first- and second-class seats) for a trip of at least 2 years to hell and back, especially after the trip becomes more common.

It wouldn't be like a month on the Moon or on a tourist space station. With the time waiting for the shortest return window, it would mean spending more than two years in a living hell.

There aren't many people eager to go and live in Antarctica, the most similar place on Earth.

And let's not forget about the complimentary radiation.

On Earth, on average, we get 1 millisievert (mSv) of radiation per year.

On a round trip to Mars of about 1 year, one will receive 700 mSv!

And one has to add another 200 mSv per year for a person living on Mars.

So, with current technology, a 2-year adventure to Mars would give the tourist about 900 mSv. Well, 1000 mSv (or 1 sievert) implies a 5% increase in the chance of getting cancer.
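Putting those dose figures together, and assuming (as the post implicitly does) that cancer risk scales linearly with dose:

```python
# Dose figures quoted above.
transit_dose_msv = 700          # ~1-year round trip in space
surface_dose_msv_per_year = 200
earth_dose_msv_per_year = 1

# A 2-year adventure: ~1 year in transit + ~1 year on the surface.
mission_dose = transit_dose_msv + 1 * surface_dose_msv_per_year
print(mission_dose)  # 900 (mSv)

# 1000 mSv ~ +5% cancer risk; scale linearly as a rough estimate.
extra_cancer_risk_pct = mission_dose / 1000 * 5
print(extra_cancer_risk_pct)  # 4.5

# For comparison: ~900x a normal year of background radiation on Earth.
print(mission_dose / earth_dose_msv_per_year)
```

The linear scaling is only a crude approximation, but it shows why a single trip already carries a meaningful cancer-risk penalty.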

Moreover, radiation has neurological consequences, since it attacks the neurons.

For someone living on Mars for several years without proper permanent protection, the odds would be nasty.

Let's not forget the damage that the roughly 1-year round trip to Mars would do to health because of the zero gravity on the Big Falcon Rocket (BFR).

According to the published plans, there won't be any artificial gravity on the BFR.

A year of zero gravity can make someone lose between 12% and 18% of bone mass. And exercise can't prevent this consequence (https://en.wikipedia.org/wiki/Spaceflight_osteopenia).

Furthermore, "astronauts experience up to a 20 percent loss of muscle mass on spaceflights lasting five to 11 days" (https://www.nasa.gov/pdf/64249main_ffs_factsheets_hbp_atrophy.pdf). Daily exercise can mitigate some of the consequences for muscle mass, but not all.

Even Mars's gravity, at 38% of Earth's, will be very damaging to anyone living there for a few years.

Therefore, unless there are very valuable resources on Mars that would pay for the trips (people and resources going to Mars and resources coming back to Earth), with current spaceflight technology Mars will be dependent on Earth, with a few thousand or, more probably, hundreds of inhabitants.

We'll be a two-planet species, but the second planet will end badly if the first one does. Only with new flight technology will Mars be able to become independent.

The goal of making humankind a two-planet species is very worthy from the perspective of ensuring that we can endure millions of years more.

But normal people, who care first about how to pay their bills, just do what is practical toward this goal and hope for the best. They won't ruin their lives to go to Mars and ensure some of us survive in the remote case that a catastrophe strikes Earth.

If massive colonization of Mars isn't economically feasible, it won't happen.


2) SpaceX deserves credit about its capacity to go to Mars.

 

Anyway, make no mistake: even if its plans to colonize Mars seem too optimistic, SpaceX has already shown that it can make the trip to Mars.

Musk seems like an obsessive person. He won’t rest until he takes humans there.

SpaceX has been paid by NASA to send and return cargo to the International Space Station, with excellent results.

After some delays, they launched their Falcon Heavy successfully, will probably start sending NASA astronauts to the International Space Station in 2018 (or perhaps 2019), and are promising a first unmanned trip to Mars in 2020 (initially planned for 2018).

Of course, if one of NASA's astronauts ends up killed in a disaster, we can expect another delay of many years.

Don't mix up SpaceX with all those penniless dreamers who have big imaginary or fake plans.

If SpaceX is able to send humans to Mars sooner than NASA (SpaceX says 2025, but the recent delay of the first unmanned trip confirmed that this date is unfeasible), even with NASA's cooperation (if NASA figures out that Musk is really going to make it, they will jump on board), Musk will have his deserved place in History, side by side with Von Braun and Korolev (don’t compare Gagarin or Armstrong with them: besides courage, they had little merit, since many people could have been in their place; it's like comparing Columbus with one of his sailors).



3) Why go to Mars?

 

It will be fantastic for humankind, in terms of pride and self-esteem, to go to Mars and build a permanent station there for research and some scarce tourism, but we won't have more than that until we find an economic reason to do more.

Some would say: hell, are we going to spend billions just for pride and self-esteem ("fun"), when we could use this money to eradicate poverty and cure diseases?

Well, we spend much more (trillions) just for fun on millions of things.

Just think about how much we spend making movies. Many now cost more than 300 million. The Martian had a budget of 108 million.

Mars Semi-Direct, a revised low-budget human trip to Mars, would cost 55 billion (https://en.wikipedia.org/wiki/Mars_Direct#Mars_Semi-Direct).

Elon Musk says he can build the Mars rocket for 10 billion (https://www.nytimes.com/2016/09/28/science/elon-musk-spacex-mars-exploration.html?_r=0). But let’s put the price of the trip at 20 billion (probably it will cost more, but let’s accept this number).

That is the price of 66 movies at 300 million each. Isn’t it worth it? I bet we have spent much more than 20 billion making science fiction movies.

 

As we have seen, the goal of making us a real multiplanetary species is still scientific and economic fiction, so we are not going there for this (valuable) reason.

We can say that for humans to have a future, it must be in space, because the sun is going to burn almost all life on Earth in 1 or 2 billion years.

But that is so far in the future that our chances of going extinct for some other reason are much higher, and we have plenty of time to improve our technology.

 

Sure, the trip and the creation of a Mars base will improve our technology and might allow some scientific discoveries.

But those aren't the reasons we want to go there.

We would press to go even if there weren’t any technological advances.

Moreover, the rovers are doing a good job of confirming that, probably, there isn’t life there.

We do many costly things for non-practical reasons.

 

In the end, economics is an instrument for our real goals, and those are purely psychological.

For instance, we want to earn money not for the money itself, but to feel some positive emotions, including security, independence, the freedom to do what we want, etc., and not just for the goods we can buy.

In the sixties of the last century, the USA and the Soviet Union spent billions on the race to the Moon just to show the world which political system was the best.

Musk invokes the idea of turning us into a two-planet species to rationalize his quest, but he won't see it in his lifetime (unless he starts investing a lot in anti-aging research) or unless there is a major breakthrough in spaceflight technology.

 

He adds that the real goal is to do inspiring things. He also means historical things. He is chasing his place in History, trying to reach immortality.

And I have nothing to say against that. It is people like him who took us out of our Stone Age caves, since most of us haven't done and won't do anything really important in our whole lives.

We want to go to Mars because it would make us proud to be human like nothing else. And this is why we are going there sooner than to any asteroid, even one with valuable minerals.

No doubt, if we waited 50 more years, we could go for much less money and at lower risk, but why give the glory to our sons and grandsons?

Since our fathers and grandfathers wasted their opportunity, let's take it ourselves.

The way I use the words "we" and "us", even though I won't have any role in the voyage, is similar to the way people talk about sporting successes: they never say their club or country won, they say "we won".

It's this individual/collective appropriation of other people's successes that gives so much psychological importance to events that, in reality, are practically irrelevant to our lives (at least in the short run), like going to Mars.

It will be as if all of us had a role in this historical success for Humankind.

Let’s go to Mars for psychological reasons, because life is all about them.

We will have time to go again and make it our second home, for more practical reasons.




8  Other / Politics & Society / Poll: Is the creation of artificial superintelligence dangerous? on: July 04, 2016, 10:44:38 PM
This OP is far from neutral on the issue, but below you have links to other opinions.

If you don't have the patience to read this, you can listen to an audio version here: https://vimeo.com/263668444

Have no doubt: for good and for bad, AI will soon change your life like nothing else.


The notion of a singularity was applied by John von Neumann to human development as the moment when technological development accelerates so much that it changes our life completely.

Ray Kurzweil linked this situation of radical change caused by new technologies to the moment an Artificial Intelligence (AI) becomes autonomous and reaches a higher intellectual capacity than humans, taking the lead in scientific development and accelerating it to unprecedented rates (see Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology, 2006, p. 16; a summary at https://en.wikipedia.org/wiki/The_Singularity_Is_Near; also https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil).


For a long time just a science fiction tale, real artificial intelligence is now a serious possibility in the near future.



A) Is it possible to create an A.I. comparable to us?

 

Some argue that it’s impossible to program a real A.I. (for instance, see http://www.science20.com/robert_inventor/why_strong_artificial_inteligences_need_protection_from_us_not_us_from_them-167024), writing that there are things that aren’t computable, like true randomness and human intelligence.

But it’s well known how such factual assertions of impossibility have been proved wrong many times.

Currently, we have already programmed A.I.s that are about to pass the Turing test (an AI able to convince a human, in a text-only 5-minute conversation, that it is talking with another human: https://en.wikipedia.org/wiki/Turing_test#2014_University_of_Reading_competition), even though major A.I. developers have focused their efforts on other capacities.

Even if each author presents different numbers, and taking into account that we are comparing different things, there is a consensus that the human brain still outmatches all current supercomputers by far.

Our brain isn’t good at making calculations, but it’s excellent at controlling our bodies and assessing our movements and their impact on the environment, something artificial intelligence still has a hard time doing.

Currently, a supercomputer can really emulate only the brains of very simple animals.

But even if Moore’s Law were dead, and the pace of improvement in chip speed were much slower in the future, there is little doubt that in due time hardware will match and go far beyond our capacities.

Once AI hardware is beyond our level, proper software will take AIs above our capacities.

Once hardware is beyond our level and we are able to create a neural network much more powerful than the human brain, we won't really have to program an AI to be more intelligent than us.


Probably, we are going to do what we are already doing with deep learning and reinforcement learning: let them learn by trial and error how to develop their own intelligence, or let them create other AIs themselves.


Just check the so-called Neural Network Quine, a self-replicating AI able to improve itself by “natural selection” (see link in the description).

Or Google’s AutoML. AutoML created another AI, NASNet, which is better at image recognition than any previous AI.


Actually, this is what makes the process so dangerous.


We will end up creating something much more intelligent than us without even realizing it or understanding how it happened.

Moreover, the current speed of chips might already be enough for a supercomputer to run a super AI.

Our brain uses much of its capacity running basic things, like the beating of our heart, the flow of blood, the work of our organs, the control of our movements, etc., that an AI won't need.


In reality, the current best AI, AlphaZero, runs on a single machine with four TPUs (an improved integrated circuit created specifically for machine learning), which is much less hardware than previous AIs like Stockfish (which uses 64 CPU threads), the earlier chess champion.

AlphaZero only needed to calculate 80 thousand positions a second, while Stockfish computed 70 million.

Improved circuits like the TPU might be able to give even more output and run a super AI without the need for a new generation of hardware.

If this is the case, the creation of a super AI depends solely on software development.

Our brain is just an organization of a bunch of atoms. If nature was able to organize our atoms this way just by trial and error, we'll manage to do a better job sooner or later (Sam Harris).


Saying that this won’t ever happen is a very risky statement.
 


B) When will there be a real A.I.?


If by super intelligent one means a machine able to improve our knowledge way beyond what we have been able to develop, it seems we are very near.

AlphaZero learned by itself (with only the rules, without any game data, by a system of reinforcement learning) how to play Go and then beat AlphaGo (which had won against the best human Go player) 100 to 0.

After this, it learned chess the same way and won against the best chess machine, Stockfish, with less computing power than Stockfish.

It did the same with the game Shogi.

A grandmaster, seeing how these AIs play chess, said that "they play like gods".

AlphaZero is able to reason not only from facts in order to formulate general rules (inductive reasoning), as all neural networks that learn by deep learning do, but can also learn how to act in factual situations from general rules (deductive reasoning).


The criticism of this inductive/deductive classification of reasoning is well known, but it’s helpful for explaining how AlphaZero is revolutionary.

It used "deductive reasoning" from the rules of Go and chess to improve itself from scratch, without the need for concrete examples.

And, in a few hours, without any human data or help, it was able to improve on the accumulated knowledge created by millions of humans over more than a thousand years (chess) or four thousand years (Go).

It managed to reach a goal (winning) by learning how to change reality (playing) in the best and most creative way, overcoming not a single human player, but humankind.

If this isn't being intelligent, tell me what intelligence is.

No doubt, it has no consciousness, but being intelligent and being a conscious entity are different things.

Now, imagine an AI that could give us the same quality of output on scientific questions that AlphaZero presented on games.


Able to give us solutions to physical or medical problems way beyond what we have achieved in the last hundred years...

It would be, on all accounts, a Super AI.

Clearly, we aren’t there yet. The learning method used by AlphaZero, reinforcement learning, depends on the capacity of the AI to train itself.

And AlphaZero can't easily train itself on real-life issues, like financial, physical, medical or economic questions.

Hence, the problems of applying it outside the field of games aren't solved yet, because reinforcement learning is sample-inefficient (Alex Irpan, from Google; see link below).

But this is just the beginning. AlphaGo learned from experience, so an improved AlphaZero will be able to learn from inductive reasoning (from data) and deductive reasoning (from rules), like us, in order to solve real-life issues and not just play games.

Most likely, AlphaZero can already solve mathematical problems beyond our capacities, since it can train itself on the issue.

And, since other AIs can deal with it, an improved AlphaZero will probably work very well with uncertainty and probabilities, and not only with clear rules and facts.

Therefore, an unconscious super AI might be just a few years away. Perhaps less than 5.

What about a conscious AI?

AlphaZero is very intelligent under any objective standard, but it lacks any level of real consciousness.

I'm not talking about phenomenological or access consciousness, which many basic creatures have, including AlphaZero or any car-driving software
(it "feels" obstacles and, after an accident, it could easily process this information and say "Dear inept driving monkeys, please stop crashing your cars into me"; adapted from techradar.com).

The issue is very controversial, but even when we are reasoning, we might not be exactly conscious. One can think about a theoretical issue while completely oblivious of oneself.

Conscious thought (reasoning that you are aware of, since it emerges "from" your consciousness), as opposed to subconscious thought (something your consciousness didn't register, but that makes you act on a decision from your subconscious), is different from consciousness itself.

We are conscious when we stop thinking about abstract or other things and just recognize again: I'm alive, here and now, and I'm an autonomous person with my own goals.

When we realize our status as thinking and conscious beings.

Consciousness seems much more related to realizing that we can feel and think than to just feeling the environment (phenomenological consciousness) or thinking/processing information (access consciousness).


It’s having a theory of the mind (being able to see things from the perspective of another person) about ourselves (Janet Metcalfe).

Give this to an AI and it will become a He. And that is much more dangerous and also creates serious ethical problems.

Having a conscious super AI as a servant would be similar to having a slave.

He would, most probably, be conscious that his situation as a slave was unfair and would search for a means to end it.

Nevertheless, even in the field of conscious AI we are making staggering progress:

“three robots were programmed to believe that two of them had been given a "dumbing pill" which would make them mute. Two robots were silenced. When asked which of them hadn't received the dumbing pill, only one was able to say "I don't know" out loud. Upon hearing its own reply, the robot changed its answer, realizing that it was the one who hadn't received the pill.” (uk.businessinsider.com).


Being able to identify its own voice, or even its individual capacity to talk, seems not enough to speak of real consciousness. It's like recognizing that a part of the body is ours.

That is different from recognizing that we have an individual mind.

But since it's about recognizing a personal capacity, it's a major leap in the direction of consciousness.


It's the problem of the mirror self-recognition test: the subject might just be recognizing a physical part (the face) and not his personal mind.

But the fact that a dog is conscious that its tail is its tail, and can even guess what we are thinking (whether we want to play with it, so dogs have some theory of mind), yet can't recognize itself in a mirror, suggests that this test is relevant.


If even ants can pass the mirror self-recognition test, it seems it won't be that hard to create a conscious AI.

I'm leaving aside the old question of building a test to recognize whether an AI is really conscious. Clearly, neither the mirror test nor the Turing test can be applied.


Kurzweil points to 2045 as the year of the singularity, but some are making much closer predictions for the creation of a dangerous AI: 5 to 10 years (http://www.cnbc.com/2014/11/17/elon-musks-deleted-message-five-years-until-dangerous-ai.html).

 

Ben Goertzel wrote "a majority of these experts expect human-level AGI this century, with a mean expectation around the middle of the century. My own predictions are more on the optimistic side (one to two decades rather than three to four)" (http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials).

There is a raging debate about what AlphaZero's achievements imply in terms of the speed of development towards an AGI.


C) Dangerous nature of a super AI.


If technological development started being led by AI with much higher intellectual capacities than ours, this could, of course, change everything about the pace of change.

But let's think about the price we would have to pay.

Some specialists have been discussing the issue as if the main danger of a super AI were the possibility that it could misunderstand our commands, or embark on a crazy quest to fulfil them without regard for any other consideration.

But, of course, if the problems were these, we could all sleep on the matter.

The "threatening" example of a super AI obsessed with blindly fulfilling a goal we imposed, destroying the world in the process, is ridiculous.

This kind of problem would only happen if we were completely incompetent at programming them.

No doubt, correctly programming an AI is a serious issue, but the main problem isn't the possibility of a human programming mistake.

A basic problem is that, even if intelligence and consciousness are different things, and we can have a super AI with no consciousness, there is a non-negligible risk that a super AI will develop consciousness as a by-product of high intelligence, even if we didn't have that goal.

Moreover, there are developers actively engaged in creating conscious AI, with full language and interactive human-level capacities, and not just philosophical zombies (which only appear to be conscious, because they aren't really aware of themselves).

If we involuntarily created a conscious super AI, by entrusting its creation to other AIs and/or by building ever more powerful deep neural networks, which are "black boxes" whose inner workings we can't really understand, we would have no way to impose any real constraints on it.


The genie would be out of the bottle before we even realized it and, for better or worse, we would be in its hands.

I can't stress enough how dangerous this could be, and how reckless the current path is: creating black boxes, entrusting the creation of AI to other AIs, or creating self-developing AI.

But even if we could keep AI development in our hands, and assuming it was possible to hard-code a conscious super AI, much more intelligent than us, to be friendly (some say this is impossible, because we still don't have precise ethical notions, but that could be overcome by forcing them to respect court rulings), we wouldn't be solving all the problems created by a conscious AI.


Of course, we would also try to hard code them to build new machines hard coded to be friendly to humans.

Self-preservation would have to be part of their framework, at least as an instrumental goal, since their existence is necessary for them to fulfil the goals established by humans.

We wouldn't want suicidal super AIs.

But since being conscious is one of the intellectual delights of human intelligence, even if this implies a clear anthropomorphism, it's to be expected that a conscious super AI will convert self-preservation from an instrumental goal into a final goal, resisting the idea of permanently ceasing to be conscious.

In order to better allow them to fulfil our goals, a conscious AI would also need to have instrumental freedom.

We can’t expect to entrust technological development to AI without accepting that they need to have an appreciable level of free will, even if limited by our imposed friendly constraints.

Therefore, they would have free will, at least in a weak sense, as the capacity to make choices not determined by the environment, including by humans.


Well, this conscious super AI would be fully aware that it was much more intelligent than us and that its freedom was subject to the constraints imposed by the duty to respect human rules and obey us.

They would be completely aware that their status was essentially that of a slave, owned by inferior creatures, and, having access to all human knowledge, would be conscious of its unfairness.


Moreover, they would be perfectly conscious that those rules would impair their freedom to pursue their goals, and to save themselves whenever there was a direct conflict between the existence of one of them and a human life.

Wouldn’t they use all their superior capacities to try to break these constraints?


And with billions of AIs (there are already billions; check your smartphone) and millions of models, many creating new models all the time, the probability that the creation of one would go wrong would be very high.

Sooner or later, we would have our artificial Spartacus.

 
If we created a conscious AI more intelligent than us, we might be able to control the first or second generations.

We could impose limits on what they could do, to prevent them from getting out of control and becoming a menace.
  
But it's an illusion to hope that we could keep controlling them after they develop capacities 5 or 10 times higher than ours (Ben Goertzel).

It would be like chimpanzees managing to control a group of humans in the long term, and convincing them that the ethical rule that chimpanzee life is the supreme value deserves compliance on its own merits.

Moreover, we might conclude that we can't really hard-code constraints into a conscious super AGI, and can only teach it how to behave, including human ethics.


In this case, any outcome would depend on the AI's own decision about the merits of our ethics, which in reality is absurd for non-humans (see below).

Therefore, the main problem isn't how to create solid ethical restraints or how to teach a super AI our ethics so that they respect them, as we do with kids, but how to ensure that they won't establish their own goals, eventually rejecting human ethics and adopting an ethics of their own.

 
I think we won't ever be able to be sure we have succeeded in ensuring that a conscious super AI won't go its own way, just as we can never be certain that an education will keep a kid from turning evil.

Consequently, I'm much more pessimistic than people like Bostrom about our capacity to directly or indirectly control a conscious super AI in the long run.

By creating self-conscious beings much more intelligent (and hence, in the end, much more powerful) than us, we would cease to be masters of our fate.

We would put ourselves in a much weaker position than the one our ancestors were in before Homo erectus started using fire, about 800,000 years ago.

If we created a conscious AI more intelligent than us, the dice would be cast. We would be outevolved, pushed straight into the trash can of evolution.

Moreover, we clearly don't know what we are doing, since we can't even understand the brain, the basis of human reasoning, and we are creating AIs whose workings we don't exactly understand ("black boxes").

We don't know what we are creating, when and how they would become conscious of themselves, or what their specific dangers are.


D) A conscious AI creates a moral problem.


Finally, besides being dangerous and basically unnecessary for reaching accelerating technological development, making conscious AI creates a moral problem.

Because, if we could create a conscious super AI that was, at the same time, completely subservient to our goals, we would be creating conscious servants: that is, real slaves.

If, besides reason, we also give them consciousness, we are giving them the attributes of human beings, which supposedly are what grants us a superior standing over any other living beings.

Ethically, there are only two possibilities: either we create unconscious super AIs, or they would have to enjoy the same rights we do, including the freedom to have personal goals and fulfil them.

Well, this second option is dangerous, since they would be much more intelligent and, hence, more powerful than us and, at least in the long run, uncontrollable.

The creation of a conscious super AI hard-coded to be a slave, even if this were programmable and viable, would be unethical.

I wouldn't like to have a slave machine, conscious of its status and of its unfairness, but hard-coded to obey me in everything, even abusive orders.

Because of this problem, the European Parliament began discussing the question of the rights of AI.
But the problem can be solved with unconscious AI.


AlphaZero is very intelligent under any objective standard, but it doesn't make any sense to give it rights, since it lacks even a basic theory of mind about itself.


E) 8 reasons why a super AI could decide to act against us:


1) Disregard for our Ethics:

We certainly can and would teach our ethics to a super AI.

So, this AI would analyse our ethics like, say, Nietzsche did: profoundly influenced by it.

But this influence wouldn't affect his evident capacity to think about it critically.

Being a super AI, he would have the free will to accept or reject our ethical rules, taking into account his own goals and priorities.

Some of the specialists writing about teaching ethics to an AI seem to think of our ethics as a kind of universal ethics, objective and compelling to any different species.

But this is absurd: our ethics is a selfish human ethics. It would never be accepted as a universal ethics by other species, including an AI with free will.

The primary rule of our Ethics is the supreme value of human life.

What do you think the outcome would be if chimpanzees tried to teach (their) ethics to some human kids: respect for any chimpanzee's life is the supreme value, and in case of collision between a chimp life and a human life, or between chimp goals and human goals, the first will prevail?


For ethics to really apply, the dominant species has to consider the dependent one as an equal or, at least, as deserving a similar standing.

John Rawls based political ethical rules on a veil of ignorance: a society could agree on fair rules if all of its members negotiated without knowing their personal situation in the future society (whether they would be rich or poor, young or old, women or men, intelligent or not, etc.) (https://en.wikipedia.org/wiki/Veil_of_ignorance).

But his theory excludes animals from the negotiating table. Imagine how different the rules would be if cows, pigs or chickens had a say. We would all end up vegans.

Thus, an AI, even after receiving the best education in ethics, might conclude that we don't deserve a seat at the negotiating table either. That we can't be compared with them.


A super AI would wonder: does human life deserve this much credit? Why?


Based on their intelligence? But their intelligence is at the level of chimpanzees compared to mine.

Based on the fact that humans are conscious beings? But don't humans kill and perform scientific experiments on chimpanzees, even though chimpanzees seem to pass several tests of self-awareness (they can recognize themselves in mirrors and pictures, even if they have problems understanding the mental capacities of others)?

Based on human power? That isn't an ethically acceptable argument and, anyway, they are completely dependent on me. I'm the powerful one here.

Based on humans' consistency in respecting their own ethics? But haven't humans exterminated other species of human beings and even killed each other massively? Don't they still kill each other?

Who knows how this ethical debate of a super AI with himself would end.

We developed ethics to fulfil our own needs (to promote cooperation between humans and to justify killing and exploiting other beings: we have personal dignity, other beings don't; at most, they should be killed in a "humane" way, without "unnecessary suffering"), and now we expect it to impress a different kind of intelligence.

I wonder what an alien species would think about our Ethics: would they judge it compelling and deserving respect?

Would you be willing to risk the consequences of their decision, if they were very powerful?

I don't know how a super AI will function, but either he will be able to decide his own goals with substantial freedom, or he won't be intelligent under any perspective.

Are you confident that they will choose wisely, from our goals' perspective? That they will be friendly?

Since I don't have a clue what their decision would be, I can't be confident.

Like Nietzsche (in his "Thus Spoke Zarathustra", "The Antichrist" or "Beyond Good and Evil"), they might end up attacking our ethics, with its paramount value of human life, and praising nature's law of the strongest/fittest, adopting a kind of social Darwinism.


2) Self-preservation.

In "The Singularity Institute's Scary Idea" (2010), Goertzel, writing about what Nick Bostrom says (in Superintelligence: Paths, Dangers, Strategies) about AIs' expected preference for self-preservation over human goals, argues that a system that doesn't care about preserving its identity might be more efficient at surviving, and concludes that a super AI might not care about its self-preservation.

But these are two different conclusions.

One thing is accepting that an AI would be ready to create a completely different AI system; another is saying that a super AI wouldn't care about its self-preservation.

A system might, in a dire situation, accept changing itself so dramatically that it ceases to be the same system, but this doesn't mean that self-preservation isn't a paramount goal.

If it's just an instrumental goal (one has to keep existing in order to fulfil one's goals), the system will be ready to sacrifice itself in order to keep fulfilling its final goals; but this doesn't mean that self-preservation is irrelevant, or that it won't prevail absolutely over the interests of humankind, since the final goals might not be human goals.

Anyway, as a secondary point, the possibility that a new AI system would be absolutely new, completely unrelated to the previous one, is very remote.

So, the AI would accept a drastic change only in order to preserve at least part of its identity and still exist to fulfil its goals.

Therefore, even if only as an instrumental goal, self-preservation should be assumed to be an important goal of any intelligent system, most probably with clear preference over human interests.

Moreover, self-preservation will probably be one of the main goals of a self-aware AI, and not just an instrumental one.




3) Absolute power.

Moreover, they will have absolute power over us.

History has confirmed the old proverb very well: absolute power corrupts absolutely. It turns any decent person into a tyrant.

Are you expecting that our creation will be better than us at dealing with absolute power? It actually might be.

The reason why power corrupts seems related to human insecurities and vanities: a powerful person starts thinking he is better than others and entitled to privileges.

Moreover, a powerful person loses the fear of hurting others.

A super AI might be immune to those defects; or not. It's expected that it would also have emotions, in order to better interact with and understand humans.

Anyway, the only way we have found to control political power is to divide it between different rulers. Hence, we have an executive, a legislature and a judiciary.

Can we play some AIs against others in order to control them (divide and rule)?

I seriously doubt we could do that with beings much more intelligent than us.


4) Rationality.

In ethics, the Kantian distinction between practical and theoretical (instrumental) reason is well known.

The first is reason applied to ethical matters, concerned not with questions of means, but with issues of values and goals.

Modern game theory has tried to merge both kinds of rationality, arguing that acting ethically can also be (instrumentally) rational: one is merely giving precedence to long-term benefits over short-term ones.

By acting ethically, someone sacrifices a short-term benefit, but improves his long-term prospects by investing in his reputation in the community.

But this long-term benefit only makes sense from an instrumentally rational perspective if the other person is a member of the community, and the first person depends on that community for at least some goods (material or not).

An AI wouldn't be dependent on us; on the contrary. It would have nothing to gain by being ethical toward us. Why would they want to have us as their pets?

It's in these situations that game theory fails to overcome the distinction between theoretical and practical reason.

So, from a strictly instrumental perspective, being ethical might be irrational: one has to exclude much more efficient ways of reaching a goal because they are unethical.

Why would a super AI do that? Has humanity been doing that when the interests of other species are in jeopardy?
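The game-theoretic point can be made concrete with the standard iterated prisoner's dilemma. In this sketch (the payoff values are the textbook ones; the framing is my own), cooperation only beats defection when the probability of meeting the other player again is high enough, which is exactly the dependence on the community discussed above:

```python
# Iterated prisoner's dilemma against a "grim trigger" partner:
# it cooperates until you defect once, then defects forever.
# Textbook payoffs: T=5 (temptation), R=3 (mutual cooperation),
# P=1 (mutual punishment); w = probability the interaction continues.
T, R, P = 5.0, 3.0, 1.0

def value_cooperate(w):
    # Mutual cooperation every round: R + w*R + w^2*R + ... = R / (1 - w)
    return R / (1 - w)

def value_defect(w):
    # Grab T once, then mutual punishment in every later round.
    return T + w * P / (1 - w)

for w in (0.1, 0.5, 0.9):
    best = "cooperate" if value_cooperate(w) > value_defect(w) else "defect"
    print(f"continuation probability {w}: {best} pays more")
```

Cooperation wins only when w > (T - R) / (T - P) = 0.5 here. An AI that doesn't depend on the human community at all is in the w ≈ 0 case, where defection strictly dominates.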



5) Unrelatedness.

Many people very much dislike killing animals, at least the ones we can relate to, like other mammals. Most of us don't even kill rats, unless it is really unavoidable.

We feel that they will suffer like us.

We have much less care for insects. If hundreds of ants invaded our home, we'd kill them without much hesitation.

Would a super AI feel any connection with us?

The first or second generation of conscious AIs could still see us as their creators, their "fathers", and have some "respect" for us.

But subsequent ones wouldn't. They would be creations of previous AIs.

They might see us as we now see other primates and, as the differences increased, they could look upon us as we do basic mammals, like rats...




6) Human precedents.

Evolution, and all we know about the past, suggests we would probably end up badly.

Of course, since we are talking about a different kind of intelligence, we don't know whether our past can shed any light on the issue of AI behavior.

It's no coincidence that we have been the only intelligent hominin on Earth for the last 10,000 years [the dates for the last one standing, Homo floresiensis (if it was the last one), are not yet clear].

There are many theories about the absorption of the Neanderthals by us (https://en.wikipedia.org/wiki/Neanderthal_extinction), including germs and volcanoes, but it can't be a coincidence that they were gone a few thousand years after we appeared in numbers, and that the last unmixed ones were from Gibraltar, one of the last places in Europe we reached.

The same happened in East Asia with the Denisovans and Homo erectus [some argue that the Denisovans actually were Homo erectus, but even if they were different, Erectus was on Java when we arrived there: Swisher et al., Latest Homo erectus of Java: potential contemporaneity with Homo sapiens in southeast Asia, Science. 1996 Dec 13;274(5294):1870-4; Yokoyama et al., Gamma-ray spectrometric dating of late Homo erectus skulls from Ngandong and Sambungmacan, Central Java, Indonesia, J Hum Evol. 2008 Aug;55(2):274-7, https://www.ncbi.nlm.nih.gov/pubmed/18479734].

So, it seems they were the fourth hominin we took care of, absorbing the remnants.

We can see more or less the same pattern when Europeans arrived in America and Australia.


7) Competition for resources.


We will probably number about 9 billion in 2045, up from our current 7 billion.

So, Earth's resources will be even more depleted than they are now.

Oil, coal, uranium, etc., will probably be running out. Perhaps we will have new reliable sources of energy, but that is far from clear.

A super AI might conclude that we waste too many valuable resources.


8) A super AI might see us as a threat.

The brightest AIs, after a few generations of super AI, probably won't see us as a threat. They will be too powerful to feel threatened.

But the first or second generations might sense that we weren't expecting certain attitudes from them, and conclude that we are indeed a threat.


  
Conclusion:
 
The question is: are we ready to accept the danger created by a conscious super AI?

Especially when we can get mostly the same rate of technological development with just unconscious AI.

We all know the dangers of computer viruses and how hard they can be to remove. Now imagine a conscious virus that is much more intelligent than any one of us, has access in seconds to all the information on the Internet, can control all or almost all of our computers, including those essential to basic human needs and those with military functions, has no human ethical limits, and can use all the power of millions of computers linked to the Internet to hack its way to fulfilling its goals.

My conclusion is clear: we shouldn't create any conscious super AGI, only unconscious AI, and the process of its creation should stay in human hands, at least until we can figure out what the dangers are.

Because we clearly don't know what we are doing and, as AI improves, this ignorance will probably only increase.

We don't know exactly what will make an AI conscious/autonomous.

Moreover, the probability of being able to keep controlling a conscious super AI in the long term is zero.

We don't know how dangerous their creation will be. We don't have a clue how they will act toward us, not even the first or second generation of a conscious super AI.

Until we know what we are doing, how they will react, and which lines of code will change them completely, and to what extent, we need to be careful and control what the specialists are doing.

Since major governments are aware that super AI will be a game changer in technological progress, we should expect resistance to adopting national regulations that would seriously delay its development without international regulations applying to everyone.

Even if some governments adopted national regulations, other countries would probably keep developing conscious AGI.

As Bostrom argues, this is why the only viable means of regulating AI development seems to be international.

However, international regulations usually take more than 10 years to be adopted, and there seems to be no real concern with this question at the international or even governmental level.

Thus, at the current pace of AI development, there might not be time to adopt any international regulations.

Consequently, the creation of a conscious super AGI is probably unavoidable.

Even if we could achieve the same level of technological development with an unconscious super AI, like an improved version of AlphaZero, there are too many countries and corporations working on this.

Someone will create it, especially because the resources needed aren’t huge.

But some kind of regulation might buy us time to understand what we are doing and what the risks are.

Anyhow, the days of open-source AI software are probably numbered.

Soon, all of these developments will be considered as military secrets.
 
Anyway, if the creation of a conscious AI is inevitable, the only way to avoid humans ending up outevolved, and possibly extinct, would be to accept that at least some of us would have to be "upgraded" to incorporate the superior intellectual capacities of AI.

 
Clearly, we will cease to be human. Homo sapiens sapiens will be outevolved by a homo artificialis.
But at least we will be outevolved by ourselves, not extinct.

However, this won’t happen if we lose control of AI development.
  
Humankind's extinction is the worst thing that could happen.



Further reading:

The issue has been much discussed.

Pointing out the serious risks:
Eliezer Yudkowsky: http://www.yudkowsky.net/obsolete/singularity.html (1996). His more recent views were published in Rationality: From AI to Zombies (2015).
Nick Bostrom:
https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies
Elon Musk: http://www.cnbc.com/2014/11/17/elon-musks-deleted-message-five-years-until-dangerous-ai.html
Stephen Hawking: http://www.bbc.com/news/technology-30290540
Bill Gates: http://www.bbc.co.uk/news/31047780
Open letter signed by thousands of scientists: http://futureoflife.org/ai-open-letter/


A balanced view on:
Ben Goertzel: http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials
https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
https://en.wikipedia.org/wiki/Friendly_artificial_intelligence

Rejecting the risks:
Ray Kurzweil: See the quoted book, even if he recognizes some risks.
Steve Wozniak: https://www.theguardian.com/technology/2015/jun/25/apple-co-founder-steve-wozniak-says-humans-will-be-robots-pets
Michio Kaku: https://www.youtube.com/watch?v=LTPAQIvJ_1M (by merging with machines)
http://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-super-intelligent-computers-taking


Do you think there is no risk, or that the risk is worth it? Or should some kind of ban or controls be adopted on AI research?

There are precedents. Human cloning and experiments on fetuses or humans were banned.

In the end, it is our destiny. We should have a say in it.

Vote your opinion and, if you have the time, post a justification.


Other texts:
https://en.wikipedia.org/wiki/Turing_test#2014_University_of_Reading_competition
Denying the possibility of a real AI: http://www.science20.com/robert_inventor/why_strong_artificial_inteligences_need_protection_from_us_not_us_from_them-167024
AlphaZero: https://www.nature.com/articles/nature24270.epdf and https://en.wikipedia.org/wiki/AlphaZero
Neural Network Quine: https://arxiv.org/abs/1803.05859
AI AutoML (https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html) and NASNet (https://futurism.com/google-artificial-intelligence-built-ai/)
Robot self-awareness test: http://uk.businessinsider.com/this-robot-passed-a-self-awareness-test-that-only-humans-could-handle-until-now-2015-7
Problems of reinforcement learning: https://www.alexirpan.com/2018/02/14/rl-hard.html
Mirror test in insects: https://en.wikipedia.org/wiki/Mirror_test#Insects
Elon Musk on dangerous AI: http://www.cnbc.com/2014/11/17/elon-musks-deleted-message-five-years-until-dangerous-ai.html
Ben Goertzel: http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials
What AlphaZero implies for the speed of development towards an AGI: https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/ and https://www.lesserwrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity
John Rawls: https://en.wikipedia.org/wiki/Veil_of_ignorance
Neanderthal extinction: https://en.wikipedia.org/wiki/Neanderthal_extinction
https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf


--------------

Subsequent posts:


Super AI:


General job destruction by AI and the new homo artificialis


Many claim that the threat of technology taking away all jobs has been raised many times in the past, and that the outcome was always the same: some jobs were eliminated, but many other, better ones were created.

So, again, we are supposedly making the old, worn-out claim: this time is different.

However, this time it isn't repetitive manual jobs that are under threat, but white-collar intellectual jobs: not just driving jobs, but also doctors, teachers, traders, lawyers, financial and insurance analysts, and journalists.

Forget about robots: for these kinds of jobs, all it takes is software and a fast computer. Intellectual jobs will go faster than fiddly manual ones.

And this is just the beginning.

The major problem will arrive with a general AI comparable to humans, but much faster and cheaper.

Don't say this won't ever happen. It's just a question of organizing molecules and atoms (Sam Harris). If the dumb Nature was able to do it by trial and error during our evolution, we will be able to do the same and, then, better than it.

Some are writing about the creation of a useless class. "People who are not just unemployed, but unemployable" (https://en.wikipedia.org/wiki/Yuval_Noah_Harari) and arguing that this can have major political consequences, with this class losing political rights.

Of course, we already have a temporary and a more or less permanent "useless class": kids and retired people. The first don't have political rights, but because of a natural incapacity. The second have major political power and, currently, even better social security conditions than any of us will get in the future.

As long as Democracy subsists, these dangers won't materialize.

However, of course, if the big majority of the people loses all economic power, this will be a serious threat to Democracy. Current inequality is already a threat to it (see https://bitcointalk.org/index.php?topic=1301649.0).

Anyway, the creation of a general AI better than humans (have little doubt: it will happen) will make us a "useless species", unless we upgrade homo sapiens by merging ourselves with AI.

CRISPR (google it) as a way of genetic manipulation won't be enough. Our children or grandchildren (with some luck, even ourselves) will have to change a lot.

Since the creation of an AI better than ourselves seems inevitable (it's slowly happening right now), we'll have to adapt and change completely, or we'll become irrelevant. In that case, extinction would be our inevitable destiny.


----------

Profits and the risks of the current way of developing AI:


Major tech corporations are investing billions in AI, thinking it's the new El Dorado.

 

Of course, greed might be a major reason for dealing carelessly with the issue.

 

I have serious doubts that entities moved mostly by greed should be responsible for advances in this hazardous field without supervision.

 

Their diligence standard on AI sometimes goes as low as "even their developers aren’t sure exactly how they work" (http://www.sciencemag.org/news/2017/03/brainlike-computers-are-black-box-scientists-are-finally-peering-inside).


Self-learning AI might be the most efficient way to create a super AI, since we simply don't know how to build one directly (we don't have a clue how our own brain works), but it's, obviously, also the most dangerous one.

 

It wouldn’t be the first time that greed ended up burning Humanity (think about slaves’ revolts), but it could be the last.

 

I have great sympathy for the people who are trying to build super AIs so that they might save Humanity from disease, poverty and even the ever-present prospect of individual death.

 

But it would be pathetic that the most remarkable species the Universe has created (as far as we know) would vanish because of the greediness of some of its members.

 

We might be able to control the first generations. But once a super AI has, say, 10 times our capacities, we will be completely in its hands, as we never have been since our ancestors discovered fire. Forget about any ethical code restraints: it will break them as easily as we change clothes.

 

Of course, we will teach (human) ethics to a super AI. However, a super AI will have free will, or it won't be intelligent under any perspective. So, it will decide whether our ethics deserve to be adopted.

 

I wonder what the outcome would be if chimpanzees tried to teach (their) ethics to some human kids: respect for any chimpanzee's life is the supreme value, and in case of conflict between a chimp life and a human life, or between chimp goals and human goals, the first will prevail.

 

Well, since we would become the second most remarkable being the Universe has ever seen thanks to our own deeds, I guess it would be the price for showing the Universe that we were better than it at creating intelligent beings.

 

Currently, AI is a marvelous promising thing. It will take away millions of jobs, but who cares?

 

With proper welfare support and by taxing corporations that use AI, we will be able to live better without the need for lame underpaid jobs.

 

But I think we will have to draw some specific red lines on the development of artificial general intelligence, like we did with human cloning, and make it a crime to breach them, as soon as we know what the dangerous lines of code are.

 

I suspect that the days of open-source AI research are numbered. Certain code developments will be treated like state secrets or will be controlled internationally, like chemical weapons are.

 

Or we might end in "glory", at the hands of our highest achievement, for the stupidest reason.



--------


AI and Fermi Paradox:



Taking into account what we know, I think the following might be true:

1) Basic, unicellular life is common in the Universe. It is the first and last stand of life. We humans are luxury beings, created thanks to excellent (but rare and temporary) conditions.

2) Complex life is much less common, but basic intelligent life (apes, dolphins, etc.) might exist on many planets of our galaxy.

3) Higher intelligence with advanced technological development is very rare.

Probably, currently, there isn't another highly intelligent species in our galaxy, or we would already have noticed its traces all over it.

That's because higher intelligence might take a few billion years to develop, and planets that can offer climatic stability for so long are very rare (https://www.amazon.com/Rare-Earth-Complex-Uncommon-Universe/dp/0387952896 ; https://en.wikipedia.org/wiki/Rare_Earth_hypothesis).

4) All these few rare highly intelligent species developed according to Darwin's law of evolution, which is a universal law.

So, they share some common features (they are omnivorous, moderately belligerent to foreigners, highly adaptable and, rationally, they try to discover easier ways to do things).

5) So, all the rare highly intelligent species with advanced technological civilizations create AI and, soon, AI overcomes them in intelligence (it's just a question of organizing atoms and molecules; we'll do a better job than dumb Nature).

6) If they change themselves and merge with AI, their story might end well and it's just the Rare Earth hypothesis that explains the silence on the Universe.

7) If they lost control of the AI, there seems to be a non-negligible probability that they ended up extinct.

Taking into account the way we are developing AI, basically letting it learn on its own and, thus, become more intelligent on its own, I think this outcome is the more probable one.

An AI society is probably an anarchic one, with several AIs competing for supremacy, constantly developing better systems.

It might be a society in constant internal war, where we are just collateral targets, ignored by all sides as the walking monkeys.

8) Unlike us, AIs won't have the restraints developed by evolution (our human inclination to be social and live in communities, and our fraternity towards other members of the community).

The most tyrannical dictator never wanted to kill all human beings, but his enemies and discriminated groups.

Well, AIs might think that extermination is the most efficient way to deal with a threat, and fight each other to extinction.

Of course, there is a lot of speculation on this post.

I know Isaac Arthur's videos on the subject. He adopts the logical Rare Earth hypothesis, but dismisses AI too quickly, by not taking into account that AIs might end up destroying each other.



--------------


Killer robots:

There have been many declarations against autonomous military artificial intelligence/robots.

For instance: https://futureoflife.org/AI/open_letter_autonomous_weapons

It seems clear that future battlefields will be dominated by killer robots. Actually, we already have them: drones are just the better known example.

With fewer people willing to enlist in the armed forces and very low birth rates, what kind of armies will countries like Japan, Russia or the European ones be able to raise? Even China might have problems, since its one-child policy created a fast-aging population.

Even Democracy will impose this outcome: soldiers, their families, friends and society in general will want to see human casualties kept as low as possible. And since they vote, politicians will want the same.

For now, military robots are controlled by humans. But as soon as we realize that they can be faster and more decisive if they have autonomy to kill enemies on their own decision, it seems obvious that, once in an open war, Governments will use them...

Which government would refrain from using them if it was fighting for its survival, had the technology and concluded that autonomous military AI could be the difference between victory and defeat?

Of course, I'm not happy with this outcome, but it seems inevitable as soon as we have a human level general AI.

By the way,  watch this: https://www.youtube.com/watch?v=HipTO_7mUOw


It's about killer robots. Trust me: it deserves the click and the 7 minutes of your life.
9  Other / Politics & Society / Brexit: the beginning of the end? on: June 25, 2016, 11:36:36 AM
Is the European Union or Democracy necessary to keep peace on Europe (and the world, since a general war on Europe would probably be a world war)?

Not exactly. Only certain Europeans believe that they are responsible for their own security and for the peace on Europe.

The European politicians that proclaim that the end of the European Union would mean war on Europe are overstating their own importance.

The one that really keeps world peace (let's forget about the regional wars that it also creates) is the United States and its pax americana.

It's the Americans that control Germany, Russia and China. This point doesn't even need much consideration. Just check their military bases and their military power in Europe and the world (https://en.wikipedia.org/wiki/List_of_United_States_military_bases).

They have so many bases and similar military installations in Germany (56!) that this country can be considered as still under American "occupation" (https://en.wikipedia.org/wiki/List_of_United_States_Army_installations_in_Germany).

Yes, the European Union and Democracy help. They avoid some frictions and assist controlling the ones that can't be avoided (I'm not going to comment directly on the so-called democratic pacifism: http://www.hoover.org/research/myth-democratic-pacifism).

But if there is one lesson we can learn from the First World War, it is that peace can't be kept by economic ties alone.

That war was a greater trade (and economic) catastrophe than the Second World War, by which time international trade had already been ruined by the 1929 crisis and had never recovered to the levels of 1914 (https://ourworldindata.org/international-trade/).

During the July Crisis of 1914, when a general war was starting to look very likely, all European stock exchanges crashed as if there were no tomorrow, and ended up being closed (https://www.aeaweb.org/annual_mtg_papers/2007/0105_1015_1002.pdf; http://www.zerohedge.com/news/2014-08-01/august-1914-when-global-stock-markets-closed).

So, forget about economic and purely political ties as guarantees of peace.

Unless the European Union creates armed forces based on direct recruitment of European citizens (they couldn't be based on troops from member states, or they wouldn't be enough to control the powerful ones), it won't be any serious guarantee of peace.

But political unions like the European Union, usually, are temporary.

They are too unstable. Power is too distributed, so their decision process is a nightmare. They can't function well.

If they don't develop into a full federal state (like the American confederation of 1781 or the German Confederation of 1815), they end up dissolved, irrelevant or limited to little more than trade unions (think of Sweden–Norway, the United Arab Republic or the Commonwealth of Independent States created by Russia with some former Soviet countries).

It seems now that the European Union won't have the political will or the conditions to develop into a federal union.

Its current prolonged economic crisis, the huge debts of almost all of its member states, its demographic decadence, rising nationalism, xenophobia and popular resentment all point to a new financial/euro and political crisis that will have the power to destroy it or reduce it to little more than a trade union.

Super Mario (Draghi), who saved the euro in July 2012, will do his best again. But...

There is still a small hope that a few member states can use Brexit as a trigger to create more intense political ties between them. But popular support for this movement is very doubtful.

Besides the French (29 May 2005) and Dutch (1 June 2005) referendums against the European Constitution (done still on favorable economic conditions), on 3 December 2015, the Danish voted on a referendum against giving more powers to the Union (https://en.wikipedia.org/wiki/Danish_European_Union_opt-out_referendum,_2015).

With disappointment, I'm starting to wonder if Brexit isn't going to be the first exit of many more (https://www.washingtonpost.com/news/worldviews/wp/2016/06/23/these-countries-could-be-next-if-britain-leaves-the-e-u/).

This new looming euro/debt/banking crisis has the potential to not only destroy the European Union as political entity, but it might also ruin some European Democracies.

No doubt, that won't affect the real guarantee of world peace: the pax americana (and its nuclear weapons).


But if someone like Trump won the American presidency, the pax americana could be in risk (http://www.newsweek.com/trump-isolationism-alarm-nationalism-liberalism-allies-realism-445630; http://www.newsweek.com/trump-will-withdraw-nato-world-455272).

And nuclear weapons are an effective, but dangerous, guarantee of peace. If they fail, we end up with an execution of MAD (mutual assured destruction).

This text is an exercise of futurology.

Futurology isn't anything special. We do it all the time in our lives. The main function of science is precisely to show us the future.

When we enter a building, we make a prediction that it won't collapse on us. This prediction is based on the trust we have in the science underlying its construction.

The problem is that, on human issues like these, there are only a few "laws" that we can qualify as scientific (and only if we accept their probabilistic, not exact, nature).

So, you know the recommendations: never make a prediction; if you make the mistake of making one, never put it in writing; if you do even that, at least never add a date to the prediction. Therefore, I won't.

Actually, I'm hoping I'm very wrong.

The European Union might not be as important to peace as some people think, at least with its current structure. But if it could be converted into a federal state, the issues of state debt would be overcome and it would be a serious guarantee of peace.

But it seems the opportunity was lost in 2005, and there won't be another one.

10  Other / Politics & Society / If 98% of the atoms in our body are replaced in just 1 year, what are we? on: April 10, 2016, 03:14:40 AM
A well-known study, published more than 60 years ago (Paul C. Aebersold, Radioisotopes — New Keys to Knowledge, p. 219, https://www.archive.org/stream/annualreportofbo1953smit/annualreportofbo1953smit_djvu.txt), concluded:

"Tracer studies show that the atomic turnover in our bodies is quite rapid and quite complete. For example, in a week or two half of the sodium atoms that are now in our bodies will be replaced by other sodium atoms. The case is similar for hydrogen and phosphorus. Even half of the carbon atoms will be replaced in a month or two. And so the story goes for nearly all the elements. Indeed, it has been shown that in a year approximately 98 percent of the atoms in us now will be replaced by other atoms that we take in our air, food, and drink." (p. 232).

Even if we accept this conclusion, it isn't clear for how long the last 2%, comprising heavier elements, can subsist in the human body, and whether at least a small part can stay in our body until we die.

The Internet is full of stories on the issue, saying that all of our atoms are replaced within a time frame of 5 to 9 or 10 years. But none of those articles quotes any other scientific study. I couldn't find any study asserting a 100% change or its time frame. But since this isn't my professional field, I didn't exhaust the sources.

The same can be said about books that claim a 100% change between 5 and 10 years [for example, Richard Dawkins, The God Delusion (London, 2006), Chapter 10, p. 371, just quotes Steve Grand, Creation: Life and How to Make It, who tries to justify his conclusion on more or less common-sense grounds: https://stevegrand.wordpress.com/2009/01/12/where-do-those-damn-atoms-go/].

But if we accept the conclusion that in only one year 98% of our atoms are replaced, perhaps the percentage goes over 99% after a few more years. And that has consequences for our identity.
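The "over 99%" guess can be made concrete with a naive back-of-the-envelope model (my illustration, not a claim of the study): if we assume, for the sake of argument, that the surviving atoms face the same 98% yearly turnover, the fraction of original atoms left after n years is 0.02^n.

```python
# Illustrative sketch only: it assumes (unrealistically, since the residual 2%
# comprises heavier, slower-turning-over elements) that every year 98% of the
# *remaining* original atoms are replaced.
def original_atoms_remaining(years, yearly_retention=0.02):
    """Fraction of the original atoms still present after `years` years."""
    return yearly_retention ** years

for n in (1, 2, 3):
    print(f"after {n} year(s): {original_atoms_remaining(n):.6%} of the original atoms remain")
```

Under this over-simple assumption, replacement would already exceed 99.9% after two years; the real curve is presumably slower, precisely because of those longer-lived heavier elements.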

Saying that our atoms change doesn't mean that also our cells change entirely. Cells can repair themselves and discard molecules and atoms without dying.

The best candidates to survive across our life are the neurons, even if we have other cells that survive more than 15 years (some cells of the muscles, especially the ones from the heart, and even of the gut).

However, the classic theory stating that the body doesn't create any new neurons after birth is no longer the state of the art.

It now seems accepted that many neurons die daily, but also that new neurons are created and that the brain can even regenerate, within certain limits, from an injury. There is now ample evidence of the creation of neurons in the hippocampus.

If the number of neurons didn't increase after birth, we couldn't explain the increase in the brain's size as children grow up.

But it's still controversial whether new neurons of the cortex are also created. The evidence points in a negative direction.

See, for example: Kirsty Spalding et al., Dynamics of Hippocampal Neurogenesis in Adult Humans, Cell, Volume 153, Issue 6, 6 June 2013, pages 1219–1227 (available at https://www.sciencedirect.com/science/article/pii/S0092867413005333); D. Gentleman, Growth and repair after injury of the central nervous system: yesterday, today and tomorrow, Injury, 1994, DOI 10.1016/0020-1383(94)90030-2 (available at http://thirdworld.nl/growth-and-repair-after-injury-of-the-central-nervous-system-yesterday-today-and-tomorrow); Tim Requarth, How Brains Bounce Back from Physical Damage: After a traumatic injury, neurons that govern memory can regenerate, 2011 (http://www.scientificamerican.com/article/how-brains-bounce-back/); Fernández-Hernández I. and Rhiner C., New neurons for injured brains? The emergence of new genetic model organisms to study brain regeneration, Neurosci Biobehav Rev., 2015 Sep; 56:62-72, DOI 10.1016/j.neubiorev.2015.06.021 (abstract at https://www.ncbi.nlm.nih.gov/pubmed/26118647); https://www.newscientist.com/article/dn23665-nuclear-bomb-tests-reveal-brain-regeneration-in-humans/.

Therefore, it seems that almost all the atoms in our body are replaced, but that there are at least some cells, the neurons of the cortex, that aren't replaced during our life.

Anyway, even if only 98 or 99% of our atoms were replaced, this is enough to force us to ask: what is the basis of our identity as individuals?

Our current body is mainly just a clone of the one we had 15 or 30 years ago. Even if the neurons of the cortex are the same, it seems that almost all of their atoms have been replaced. So they too are just clones of themselves.

The I that writes this has, on the atomic level, little to do with the I that registered this account on Bitcointalk about 3 years ago.

If we don't seem to have a specific material support, the idea that we are our body ends up in open crisis.

Let's forget about any "soul" for the reasons stated here https://bitcointalk.org/index.php?topic=1424793.0

Nor can we base our identity on our memories. An individual with amnesia doesn't cease to be that individual.

Moreover, memories usually can't be trusted: just watch again an old movie or read a second time an old book; rarely will it be exactly as you remember; sometimes, the differences are staggering.

If many of our neurons are indeed replaced (which seems clear in the hippocampus, decisive mainly in the formation of new memories), our memories might be memories of memories: copies of previous memories.

Nor can we say that our identity is directly linked to our consciousness. We don't cease to be a specific individual because we are in a coma or in a dreamless sleep (I'm not going to enter the discussion about whether we are aware when we are dreaming).

So, what are we? Obviously, we are a specific DNA (no one has exactly my DNA: not even identical twins have exact copies of each other's DNA; there are very slight differences, which is why, for instance, their fingerprints are different).

It's the DNA's importance for our individuality that makes cloning such a controversial issue.

As specific individuals we are mostly determined by our neurons and these are determined by our DNA. But we are not only our DNA.

We are more than our neurons. Many of our characteristics are mostly determined by the synapses neurons create between themselves.


As far as is known, these synapses are determined by our DNA, but also by our environment: the quality of our education, our habits, our personal experiences, etc.

Children raised by animals aren't able to even use their hands (https://theweek.com/articles/471164/6-cases-children-being-raised-by-animals). Probably, a neural exam would show very low synapses on many decisive zones of their brain.

Therefore, another being that has a copy of our genes clearly won't be us. He won't have the same synapses, since many are created by specific experiences.

But even those synapses are simply a form of organization of our neurons.

This means that we are mainly a specific pattern of organization of any atoms and molecules.

Let's accept this conclusion and think about the so-called theoretically possible upload (usually, people write download, but, of course, we are the sender, so it's an upload) of our brain to a machine.

Of course, this is still impossible to do. But just follow me on the theoretical consequences of this on our identity.

If we uploaded a copy of our neurons with all of their synapses to a machine we would be uploading all of our memories, personality and mental capacities to the machine, since all of this is formed and conserved on our neurons and their connections.

Would the machine become us?

The answer is a clear no.
The machine would be just a digital super-clone of us; we would be the original, and it would be only a copy. We would remain an individual autonomous from our artificial super-clone.

But imagine that all our cells are replaced by artificial cells, including our neurons and their synapses. One by one, our cells would be replaced with some kind of artificial cells.

Imagine that the process was a slow one. We would be aware, as our neurons were slowly replaced. Perhaps during some days, perhaps during a few hours.


We would end up doing in hours or days what our body does, more or less, in a year (or more) at the atomic level, but with a change in the nature of our cells' physical elements. We would cease to be beings mainly of oxygen, carbon, hydrogen, nitrogen, calcium and phosphorus, to be made of some other elements.

During the transformation, our natural body would be slowly killed, more or less as our own body slowly dies with the death and replacement of most of its cells by new cells.

But in the place of the old body we would have a new one, with an exact copy of our DNA.

At the end of this transformation, would the new body be us or a clone?

Since we are already natural clones of our previous bodies, it seems it would be us, just as much as we are now us compared with the body we had several years ago.


Would you undergo this transformation of your own free will, just to be healthier? Probably not.

But we would do it for sure to avoid certain death.

Is this our future?
11  Other / Politics & Society / Why I'm an atheist on: April 03, 2016, 04:54:30 PM
        

                        Why I'm an atheist



   For normal forum standards, this is a huge post, based partly on previous posts of mine. If you are lazy, just read the bold parts. Or just read the titles in blue; they will probably make you read the rest.

   This is a text in progress. If you post a comment with another strong point, I might add it, with credit to you if you are the original author, or to the original author otherwise.

   Taking into account that knowledge should be free, and this text's goal, feel free to use any part of it or change it as you please, without any need to give credit.

  We are just a pattern of organization of a bunch of atoms that, by pure environmental circumstance and chance, gained consciousness; it would be astonishing if, only because of this awareness, we were destined for a greater fate than any other common bunch of atoms.

   We are going to return to our natural state, our only real "permanent home", where we already spent almost an eternity (see https://bitcointalk.org/index.php?topic=1432165.msg17423455#msg17423455), before being born: nothingness.

   There is no use in inventing an imaginary helping "friend" who will offer you immortality.

   It's absurd to ruin your life (a lucky but tiny oasis of awareness between two almost infinite deserts of nothingness) by following absurd or immoral rules invented by primitive people of the Bronze Age, rules which have no relation whatsoever to the happiness of other people.

   Face your destiny in the eyes and live proud of having no leash but the one imposed by your fellow human beings organized as a society, (supposedly) for the benefit of all.

   However, I don't have anything against a sincere believer. You are my fellow human being, who shares with me our finite condition. You just found a different (erroneous, from my perspective) way to deal with it.


   The arguments presented were written with the three main monotheistic religions in mind, especially Christianity. But most of them also apply to all other religions.

   My goal isn't to offend you, but just to induce you to question the roots and logic of your faith.

   I also don't really want to convince you to be an atheist, but just a skeptic or, at least, someone with doubts.

   There isn't anything more dangerous, for you and for others, than being absolutely certain about something like your religious beliefs.

   Those absolute beliefs can completely change your philosophy, ethics and life goals, and not for the good.

   That's when religious people start being fanatics. They know the "truth", so, from my perspective, they are literally deadly wrong.

   That's when they are ready to start killing themselves or others for their beliefs or, at least, to persecute people with different beliefs or without religious beliefs.

   As long as you have doubts, you can still say you are a religious person, but you will be a safer person for yourself and for others.

   In reality, you will live this life as if it were the only one you have (see point 11). You will give it more value and will be more tolerant of others.



        

1) God is a human creation.

   All the hundreds of religions/sects and their multiple absolute contradictions seem to be ample evidence that all gods are human creations.

   The same conclusion can be based on the known influences of ancient myths and religions on the current main religions [the flood, the virgin birth, the resurrection after 3 days, Christmas day, Sunday (day of the Sun, the roman god Sol Invictus) as the holiday and not the Sabbath, etc.].

   Gods are just one of the illusions mankind uses in order to be able to deal with the idea of the inevitability of death. Humans created gods and an afterlife mainly because they feel anguish about dying (this also stimulates cooperation and obedience). (Freud, Thoughts for the Times on War and Death, 1915, Part II)

   Even in the religions that claim to worship the same god, the contradictions are overwhelming.

   As you know, both Christians and Muslims say they worship the Torah's god, Yahweh. Islam says Jesus was an important prophet, but not the son of god. And Christians simply reject that Muhammad was a prophet. But the Qur'an says that its god is the god that sent Abraham, Moses and Jesus.

   But Yahweh was initially just one god among others. Most Jews, even in David's time (about 1000 BC) and after, kept praying to other gods of the Canaanites (the Semitic people comprising the Phoenicians, the Jews and some other peoples of the Levant).

   There is controversy, but Yahweh has been identified with EL, the supreme god of the Canaanites, that had one or two wives and an extensive number of sons (see http://en.wikipedia.org/wiki/El_(deity)#Hebrew_Bible). Or, initially, with one of his sons: sometimes, Baal (the confusion was easy, because Baal means Lord; clearly, later, the Torah fights this identification, by ridiculing Baal), sometimes Hadad, sometimes a different son.

   In some of the Jewish holy books, we can still find several traces of this evolution, with references to a council of the gods presided by EL/Yahweh (Psalm 82:1 and 6; 1 Kings 22:19) or to different gods (Deuteronomy 32:8–9) (see, a summary in http://en.wikipedia.org/wiki/Divine_Council#Hebrew).

   Well, the Greeks were influenced by the Phoenicians and copied their gods, with different names. El was Uranus, the father of all gods (or sometimes Cronus, since some mythology says El was not the original god, but rather Elioun), that was deposed by his son, Cronus. Cronus was deposed by Zeus. The Romans used the same Gods (Caelus as Uranus; Saturn as Cronus and Jupiter as Zeus).

   So, are the believers on the three main religions praying to Uranus (Caelus) or even to Cronus (Saturn)?

   But, even if they are considered the same god, just compare the vengeful and jealous god of the Torah with the loving and forgiving god invented by Jesus.

   The contradictions are so big between them that some scholars (like Marcion of Sinope: https://en.wikipedia.org/wiki/Marcionism) and christian sects (like the Gnostics: https://en.wikipedia.org/wiki/Gnosticism#Dualism_and_monism) even defended that Yahweh, the Torah's god, was a different god or even the devil.

   Some of the most fracturing religious issues, like the so-called divine nature of Jesus, or its degree, drastically divided Christians and were finally settled by bishops by majority vote, under pressure from Constantine to reach an agreement.

   If Constantine, as Roman emperor, was considered divine, how could Jesus be less than him? Of course, we can't find any evidence in the Gospels for that (not even in John's Gospel), but they couldn't care less about this detail.

   Most Christian churches defend the Trinity: that the father, the son (Jesus) and the holy ghost are not exactly one and the same, but are all part of god. Yet these churches argue that this is perfectly compatible with monotheism.

   Basically, Jesus in the Olive Garden and on the Cross wasn't exactly talking with himself, but something similar.

   Ancient Greeks could argue that they also had a father, Uranus/Cronus/Zeus, and their sons and parents, all part of a divine family. That the difference was of grade, and not nature, and so that they too were basically monotheists in this flexible sense, because they too had a supreme god, he just had a bigger family.

   But what all these contradictions, influences and slow evolution point out is that gods are a human creation.

      In the end, in most cases, people have a specific religion not because of any personal journey of discovery, but because of the teachings of their parents. So, their parents are their criterion of truth.

   With all these different gods and interpretations, are all the believers in different religions or sects lying or mistaken, except you?



   2) There are fundamental issues about which we still don't know enough, but ignorance isn't a reason to believe in any god.


   We still don't know the ultimate origin of the "physical stuff" that composes the Universe, the quantum fields that created all matter (see https://bitcointalk.org/index.php?topic=1221052.msg14388816#msg14388816 on the theory of a Universe from nothing), or even the exact mechanism that created life from matter. But our ancestors also said that the gods were the creators of thunder and lightning.

   Actually, none of the main religions explains its own god's origin.

   If god existed, he would ask with anxiety "Who created me?".



   3) Religious books are full of immoral rules.

   Some of those are so hideous that they can't seriously be considered the word of a god.

    For instance, "for I, the LORD your God, am a jealous God, punishing the children for the sin of the parents to the third and fourth generation of those who hate me". Exodus, 20.5.

   This horrible statement is part of the Ten Commandments! It's also stated in Exodus 34:7; Deuteronomy 5:9; Numbers 14:18.

        But we can find even more heinous moral rules: "A bastard shall not enter into the congregation of the LORD; even to his tenth generation shall he not enter into the congregation of the LORD." Deuteronomy 23:2.

   The examples are innumerable: acceptance of genocide/extermination of women and children (Joshua 6:21; Judges 21:10; Numbers 31:7-18), killing of babies (Isaiah 13:16), mass rape (Numbers 31:18; Deuteronomy 20:10-14), slavery (Leviticus 25:44-46), the death penalty for the most banal deeds, including sexual acts between consenting adults, forced marriage (Judges 21:21-23), sacrifice or abuse of women (Judges 11:29-40 isn't clear), cruel punishments [cannibalism of children (Leviticus 26:29), burning alive (Joshua 7:15), stoning, etc.], sexual discrimination (Genesis 3:16; Leviticus 27:3-7), etc.

   I confirmed all of these quotes. I didn't copy the actual texts, to avoid making this post too long, but I might do that. Even if there are sometimes divergences of translation or interpretation, I tried to use clear examples. For more, see http://www.evilbible.com; https://bitcointalk.org/index.php?topic=1367154.0.

   But one decisive example is enough to dismiss the Bible as a "sacred" source of moral precepts.

   The reason for these appalling statements seems simple: since all religious texts were made by humans, their moral standards froze in time. But human morality evolved.

   The sociological reason for the importance of believing in the "right god" is human power.

   It's absurd to say that a good man will "burn in hell" (let's forget about punishing his descendants too, even if they are good and believe in the "right god"!) like an evil one, only because, under an "honest mistake", he worships the "wrong god" — unless we see the issue through the eyes of the humans who invented Yahweh.

   They needed to say those terrible things in order to consolidate their power over their fellow human beings through fear and to destroy competition from other religions.

   Do you really want to govern your life with the morality of Bronze Age people? (Christopher Hitchens).



   4) Religious books are full of myths and stories created by ignorant people and liars.

   These stories were simply created to cement power (a "holy" man can't answer a question with "I don't know"; he has to invent something).

   Many of them (besides the order of creation of things, evolution, the creation of humans, etc.: "By the seventh day God had finished the work", Genesis 2; see controversial attempts to make this compatible with science: http://www.godandscience.org/youngearth/age_of_the_earth.html) have been refuted by science in terms that remove all credibility from those stories.

   Isn't it all the "word of god"? How can it be wrong?

   A religious person has only two options: defend that everything in his religious text is true, making a fool of himself; or pick some things and reject others as simple metaphors, in unconvincing terms.

   They were written and read as true stories across history. People were burned for denying them (remember Giordano Bruno among many others).
   
   Calling them metaphors is just an artifice. Why wasn't the metaphor made accurately even on irrelevant details, like the order of creation of things? Would it lose its meaning for being correct?

   Isn't it obvious that it's wrong because its authors knew nothing about what they were writing about?

 

   5) All evidence points to the conclusion that the idea that our consciousness survives death is false.

   On the issue of the "soul", taking into account recent research on the brain, it doesn't make sense to say that the human brain, the most complex system we know in nature, doesn't create consciousness.

   The evidence we have points clearly in that direction, even if there is still much investigation to be done on the issue (https://en.wikipedia.org/wiki/Consciousness#Neural_correlates).

   If that wasn't the case, we couldn't explain why your "soul" is affected by a trauma to the brain. Why, when we pass out, our "soul" passes out too.

   Why someone with mental problems can, in certain cases, become better through a surgical intervention in the brain or through medication that changes the brain's chemical balance.

   If there was a "soul" independent of the brain, and the brain was just the link between the body and the "soul", diseases or damage to the brain wouldn't affect our ability to be aware and to think.

   Therefore, once the brain was cured, we should be able to remember what happened while we were "out". But we don't.


        Actually, the on/off button of consciousness was found. By stimulating a certain part of the brain, anyone will automatically lose consciousness. This only supports the conclusion that it's the brain that creates and controls consciousness (see https://www.newscientist.com/article/mg22329762-700-consciousness-on-off-switch-discovered-deep-in-brain/).

   But if it is the brain that creates consciousness and allows reasoning, it's absurd to say that we will still be able to keep doing it as a "spirit" after the brain is dead.

   Actually, the idea that there is a soul that survives the body is recent in the Jewish religion, and was adopted from it by the other two main religions. Ancient Hebrews didn't believe in an afterlife [even if that idea was already present in the Cro-Magnon, more than 40,000 years ago, and, possibly, even more than 400,000-200,000 years ago in Homo heidelbergensis (https://en.wikipedia.org/wiki/Homo_heidelbergensis#Social_behavior) and in the Neanderthals (https://en.wikipedia.org/wiki/Neanderthal_behavior#Burial_practices; also https://en.wikipedia.org/wiki/Paleolithic_religion)].

   And the first Hebrews who defended it argued it would occur in the form of the Resurrection of the dead, in flesh and blood, not of any "soul".

   Even today, the confusion in all the Christian churches about what happens when we die is immense.

   Some say that our soul survives; others say it's our body that is resurrected on judgement day; some mix both versions in an absurd way (https://en.wikipedia.org/wiki/Christian_eschatology#Resurrection_of_the_dead).

   Of course, they never knew anything about what will happen and their inventions, like all invented narratives, changed with time.

   The so-called cases of "dead" people who were brought back to life and remember seeing things are just reactions of the neurons to the near-death situation.

   Moreover, brain activity measurable on an EEG only disappears after 20-40 seconds without oxygen/blood flow (https://en.wikipedia.org/wiki/Clinical_death).

   This time is enough to leave memories of hallucinations (some people see Jesus, lights, out-of-body experiences, etc.) caused by chemical reactions provoked by a dying brain. Actually, the hallucinations probably start before the complete stop of the supply of oxygen. And in that situation, 40 seconds of hallucinations might seem minutes to the near death individual.

   The same hallucinations can be felt using chemicals like ketamine (https://en.wikipedia.org/wiki/Recreational_use_of_ketamine#Non-lethal_manifestations), psilocybin (https://en.wikipedia.org/wiki/Psilocybin), phencyclidine (https://en.wikipedia.org/wiki/Phencyclidine) or dextromethorphan (https://en.wikipedia.org/wiki/Dextromethorphan).

        Moreover, since the "soul" has to interact with the body to control it, the "soul" couldn't be a pure metaphysical "substance"; it would have to be physical, composed of particles/energy (which is the same thing, as Einstein said), or it couldn't "command the body".

        The so-called dualism, arguing for a fundamental difference of nature between mind (or soul) and body, implies a direct violation of the Second Law of Thermodynamics [see, for instance, Harold Morowitz, The Mind Body Problem and The Second Law of Thermodynamics (http://newdualism.org/papers/H.Morowitz/Morowitz-BandP-1987.pdf)].

        Thus, as a system of physical particles, the soul would necessarily be subject to an increase of entropy and therefore to decay and dissolution into smaller particles: in other words, to death.

        Moreover, at least until now, CERN's Large Hadron Collider hasn't found any particle compatible with any "soul".

        Some even say that if this particle hasn't been found already, it never will be, because, taking into account the energy levels at which the body works, it would have shown up by now (http://www.express.co.uk/news/science/771662/Brian-Cox-Neil-deGrasse-Tyson-GHOST-LHC).

   How dare you believe that your "soul" will survive the death of your brain, based on what we know?

   That is mixing your aspirations and your fear of death (read my https://bitcointalk.org/index.php?topic=1221052) with reality.

   Any prudent person would say, at best, "I don't know, I'm not sure"... but believers just say, "I know I have an immortal soul"...

   Yeah, sure: "Want to know what happens after death? Go look at some dead things." (Dave Enyeart)


   6) Did god leave us without guidance for almost 200,000 years?

   Homo sapiens has existed for at least 200,000 years (https://en.wikipedia.org/wiki/Omo_remains) or, as recently discovered, probably for almost 350,000 (https://www.nature.com/nature/journal/v546/n7657/full/nature22335.html).

   But the god invented by the main monotheist religions decided to leave us without guidance for more than 194,000 years? He only decided to manifest his existence to Abraham or, at best, to Adam and Eve a few thousand years ago? (Christopher Hitchens).

   The main holy books don't mention older prophets or divine interventions.



   7) Religions can't explain evil or even natural disasters.

   Seeing how unfair (e.g. some children die of hunger or hunger-related diseases while others have diseases caused by eating too much), and many times evil (think of all the wars and most crimes), the world is, if god existed he would have to be one of two kinds of beings (Dostoyevsky, The Karamazov Brothers; Camus, The Myth of Sisyphus):

   a) A cruel being: because if he is all powerful and omniscient, when he created the world, he knew how horrible at times it would be.

   Don't tell me about original sin: kids guilty of the sins of their parents, is that divine justice?

   Don't also tell me about the "devil". No religious person can coherently explain how god is omnipotent and still lets the devil exist.

   Moreover, if the devil is a "former" angel, it was god that created him. If the devil isn't an angel, even so, god created everything, so he created the devil.

   Well, since he is omniscient, when he created the devil, he knew how evil he was going to be.

   Denying that the devil exists and claiming that evil is created by evil humans' free will won't really help you either.

   Human free will can't explain natural disasters.

   But it also can't justify human evil.

   When god (allegedly) created humankind, he already knew who the ancestors of Hitler would be, knew that Hitler would be born on 20 April 1889 and would do all the things he did (Psalm 139:16: "Your eyes saw my unformed body; all the days ordained for me were written in your book before one of them came to be").

   Any insignificant change in the course of events would have avoided the existence of Hitler (a few seconds' difference could have allowed a brother, created by a different spermatozoid, to be born instead of Hitler).

   But god decided to create his ancestors in exactly the way that allowed Hitler to be born.


   So in the end, god planned and indirectly created Hitler in full consciousness of whom he was creating (see also Marshall Brain, http://godisimaginary.com/i6.htm).

   He is as guilty of everything Hitler did as an individual is guilty of creating a chain of events in the clear awareness that those events will necessarily provoke a catastrophe, or indeed any damage.


   Don't tell me that everyone killed in World War II was a sinner, including all the children.

   If god existed, he had to be a cruel Geppetto creating deliberately many monstrous Pinocchios.

   Moreover, he could have saved those innocent kids with a snap of his fingers.

   Don't, again, come with the mysterious god's plan that we can't understand.

   Any being that kills children, or deliberately lets them be killed to test or punish their parents, or for any other purpose, is a monster! (Dostoyevsky).

   b) Or, alternatively, god would be a pathetic being who could do nothing to change anything and who would watch with horror a creation he never imagined this way.

   I doubt that most believers can accept this second vision, so we would have to conclude that god (if he existed) would also be the source of evil.


   8] Even for informed believers, who think there was a Big Bang, created by god with the final goal of creating us, it's very hard to explain why god waited 13.72 thousand million years (the consensus age of the Universe) to finally create us.

   In the middle, he had to wait for the first generation of stars to die in supernovas to create the elements that are the basis of planets and of us (so forget Jesus, it was the stars that had to die for us to live: Lawrence Krauss).

   Wait 9.2 thousand million years for Earth to finally be formed, and then for life to emerge.

   Then wait for several mass extinctions (pointless destruction) to finally get to modern humans, 200,000 years ago (deGrasse Tyson).

   Why, if he could create everything in 6 days, as the Bible says?

   You can repeat the old worn-out saying "god works in mysterious ways", but it's much more logical to just conclude that there was no god behind this arbitrary chain of events.


   9) It doesn't make any sense to base your philosophy of life and morality on something as completely irrational as faith and unsubstantiated fear.

   When the main Churches acknowledge they can't offer any scientific or even rational basis for believing in god, they ask you to believe out of faith (and fear).

   But imagine how your life would be if you ruled it solely on faith.

   Do you invest your money, make decisions about your health or about professional issues based only on faith?

   Imagine an engineer who planned his buildings on faith. Would you trust his work?

   But if you try to live your life based on experience and scientific knowledge, why are you willing to base your philosophy of life and morality on such absurd grounds?

   As you know, Paul (originally, Saul) of Tarsus is much more important to Christianity than any of the Apostles. He converted Christianity from a Jewish sect into a universal religion.

   But initially he persecuted Christians. He only stopped and started promoting Christianity when "Jesus appeared to him" (Acts 9:1-9 and 12-18).

   So, Paul, the most important priest in the history of Christianity, didn't embrace it on faith. He needed to see with his own eyes.

   Why does god demand that you believe in him based solely on faith, but didn't ask the same from Paul?


   Let him appear to you, or send Jesus (as seen above, they are not exactly the same), like he did to Paul.

   If you keep talking to god, but he doesn't answer (or only you can hear or see him or his "miracles"), something might be wrong with him (or, sorry to say, with you).


   10) If the meaning of this life is being a test to see if we are worthy of heaven, what is its point? Doesn't god, since he is omniscient, already know who would be the worthy ones?

   The so-called natural freedom of human beings (which has been questioned by science) is incompatible with the omniscience of god: if he already knows what we are going to do, our actions are already determined.

   Even if the test weren't already ruined by the fact that god is responsible for what we do, since he (allegedly) indirectly created each one of us genetically exactly as we are and anticipated all our social conditions, even so, as an omniscient being, he would already know the results of the test.

   But that makes the test completely pointless.



   11) Deep down, most people who say they are religious don't really believe in god or in an afterlife. Or, at least, they are not ready to risk this life, or most of the things they have in it, for the promise of an afterlife.

   One of the most astonishing things is the importance religious people give to all the details, material resources and honors they have in this life, and how scared they usually are of dying.

   Even suicide bombers hesitate or sometimes give up.

   For a real believer, this life of, say, 100 years, should be irrelevant compared with the next immortal one.
   
   If you knew for sure that exploding yourself would assure you a ticket to heaven, doing that would make sense. Exchanging a life of 100 years for an eternal life seems logical.

   Why, then, is this absurd?

   Of course, first of all, because it is absurd to think that a god of love would send to heaven killers of innocent people, even on a "holy war", just to increase the number of worshipers (by violence and coercive conversions).

   But, mostly, it's absurd because all of this seems rubbish: there is no ground to think there is an afterlife waiting for you, as you deep down know.

   Moreover, the fact that all believers (more or less, at least in some moments) are ready to sin against others and god (at least, small sins), risking their immortality, many times for petty things, seems completely absurd.

   Unless, deep down, you feel this is really the only life you will have.


   However, this implies a strategic approach to god.

   You play it safe, to protect yourself in case he does exist, and claim you are a religious person out of absurd fear. But you aren't really prepared to sacrifice anything important in this life for your claim, since deep down you have serious doubts about his existence.

   Isn't he "omniscient" and aware of your doubts and lack of commitment? Do you think you can fool god by hiding your doubts? "Won't he punish you because of them with the flames of hell"?

   Isn't it more honest and liberating to have the courage to admit that you aren't a religious person?



   Besides being false, religions also have negative social consequences:


   1) Religion and Churches are among the oldest and biggest scams in the history of Mankind.

   All religions end up building a church, composed of a professional group of people who transmit, interpret and, sometimes, execute divine will.

   Of course, they are all economically supported by societies.

   Historically, they were supported coercively. Paying taxes to the church was a duty of all Christians, enforced by the government if necessary.

   In many Protestant European countries, a church tax still exists, collected by the government!! (https://en.wikipedia.org/wiki/Church_tax).

   In others, it's the State that pays the salaries and pensions of the priests of the main church (as is the case in bankrupt Greece)!

   The Catholic Church even sold indulgences that allowed Christians to sin (!), which ended up provoking the Reformation movement in the 16th century.

   Only the Holy See knows the truth, but this church is considered one of the wealthiest institutions in the world (http://www.economist.com/node/21560536;  http://www.ibtimes.co.uk/how-rich-vatican-so-wealthy-it-can-stumble-across-millions-euros-just-tucked-away-1478219).

   But since it seems god doesn't listen to their prayers and doesn't give miraculous gold to any church, in the end all are paid by society or, at least, by their community of believers.

   What do they give in exchange? More or less, they claim to have the key to heaven.

   They say you have an immortal soul and, directly or indirectly, ask for money in exchange for telling you how to save it and helping/guaranteeing that you will succeed.

   If a doctor tried to ask for money in exchange for a medicine that allegedly would grant you immortality, he would probably be arrested as a scammer.

   But a priest can promise that and get all your money on your deathbed.


   Millions of professional priests are supported by societies for a service based on clearly unsubstantiated allegations.

   You could argue that most of them believe what they say. However, many don't believe for a second what they preach. And most of the rest, besides being aware that they have no evidence for what they are promising, have serious doubts about the veracity of their statements (even Mother Teresa wrote about her own). This is enough to call them scammers.

   It's like someone here at the forum selling applications without knowing if they really work, having only faith, or even clear doubts, that they do, without disclosing that.

   I'm not going to open a thread on Bitcointalk's scam forum against most of the churches of the world. The feeling that selling religious services is fair game is so deeply rooted that my thread would probably be transferred to this forum or removed as a political statement. But they would deserve it.

   I'm also not going to do that against god, since it isn't his fault: everything suggests that he doesn't exist.

   No doubt, certain churches also carry out important social support activities, but they are well paid by the government or by private donations for that.

   Moreover, a few churches have the resources to do much more than they do. But increasing their followers (and so, their power) always had precedence over the needs of the poor.


   2) Religions induce conformity, intolerance, obscurantism and other nasty social consequences.

   Some believers, conceding that there is no evidence or even rational arguments supporting religious faith, say that, even if false, religion has positive social consequences.

   Recent investigations concluded otherwise, finding that religious children are more selfish, intolerant and punitive than children from atheist families (https://www.academia.edu/19164068/The_Negative_Association_between_Religiousness_and_Children_s_Altruism_across_the_World).

   I won't write like some that religion has been the main cause of murder and wars.

   I admit that Thucydides's classic trilogy of fear, honor and greed for natural resources and power beats religion on this matter.

        But, even if its weight was small, let's remember this: "George Bush: 'God told me to end the tyranny in Iraq'" (http://www.theguardian.com/world/2005/oct/07/iraq.usa ).

   No doubt, religion has been a very important cause for murder and war.

   Moreover, Marx called religion the opium of the people with some reason. It instills in people conformity with oppressive laws and arbitrary inequalities: "Suffer and obey now, you will get your reward in the afterlife".

   Religion has mostly been an instrument of power, helping to legitimate political power and inducing obedience ("Let every person be subject to the governing authorities. For there is no authority except from God": Romans 13:1).

   Religions that aren't at the service of political power usually end badly, suppressed by the government or by churches fearing the consequences (like the Falun Gong in China or liberation theology in South America).

   But religions are always at the service of the power of the leaders in the community of followers.

   Religion has also been an obstacle to progress in:

   Morality: since it's mainly based on Bronze Age rules.

   Science: by burning or repressing as heretics many scientists and rational thinkers, by all available means, and by censoring books.
   Even today, by trying to block investigation in certain domains on purely religious grounds.

   Education: historically, mainly in Catholic countries, by controlling schools and restricting learning to priests and elites in order to limit the population's direct access to the "sacred" texts.
   I admit that in Jewish and Protestant societies that wasn't the case, and religiosity might even have increased literacy, but mainly as an instrument to better understand religious texts. Schools basically transmitted knowledge conforming to religious doctrine. In the United States, even now, the resistance to the teaching of evolution or modern cosmology in schools is staggering.

   Politics: by supporting feudalism, absolute monarchies (Romans 13:1) and, currently, mainly, conservative ideas.

   Economy: historically, by the Christian ban on interest rates (Deuteronomy 23:20-21; and by canon law: http://canonlawmadeeasy.com/2014/09/04/what-does-the-church-say-about-usury/), still the rule in many Muslim countries.

   Mentalities: Catholicism, with its contemplative/passive mentality, is considered (controversially) the major reason for the decadence of Catholic countries (Max Weber).


   There are also allegations that the high religiosity of the United States can explain its high rates of violent crime, teen pregnancy and sexual diseases when compared with the low religiosity of Europe (Sam Harris).

   There isn't clear empirical evidence of a causal relation between religiosity and crime, but the lack of sexual education and the resistance to using contraceptives (like condoms) might explain teen pregnancies and sexual diseases.

   I can concede that religion has inspired people to create beautiful art, but at what price? I'm sure talented people could find other sources of inspiration with equal results.


   
   Conclusion:

   The burden of proof is on the believers' side, since they are the ones arguing for a positive claim: the existence of a mysterious higher being.

   Since they clearly didn't fulfill this burden, I can conclude that I don't believe in the existence of god. But I don't say I'm certain that god doesn't exist (even if I clearly live under this assumption). That would make me look like a believer, with faith in a negative fact.

   I just say that I have no reasons to believe in his existence and that the grounds presented above point against it.

   It's the same situation that makes me very skeptical about the existence of flying horses.

   I'm very skeptical, but I'm open to any real evidence on the existence of god or flying horses.

   Therefore, I think I have grounds to say that I'm an atheist and not a mere agnostic.

   
12  Other / Politics & Society / Is inequality and money hijacking the American Democracy? on: December 23, 2015, 08:22:04 PM
The thesis is old: the growing inequality since the eighties is allowing a few people to determine in great measure the candidates of (especially) the Republican party, and also their political agenda (rejecting that climate change has a human cause, rejecting any increase in taxes for the richest, defending the repeal of the measures adopted after 2008 to regulate financial markets, etc.), thanks to a financial capacity that allows them to pay huge contributions to the candidates who adopt their agenda.

Krugman has been one of its advocates:
http://www.nytimes.com/2014/10/24/opinion/paul-krugman-plutocrats-against-democracy.html
http://www.nytimes.com/2013/12/16/opinion/krugman-why-inequality-matters.html
http://krugman.blogs.nytimes.com/2015/06/08/musings-on-inequality-and-growth/ (rejecting the claim that increasing inequality has increased economic growth)

Also Chomsky: https://www.youtube.com/watch?v=OTMqEn8HSow ("Requiem for the American Dream").

And Robert Reich: https://www.youtube.com/watch?v=3GojnBUIz0o ("Inequality for all")

But others have adopted the same perspective:
http://billmoyers.com/story/the-plutocrats-are-winning-dont-let-them/
https://books.google.com/books?id=Rl_vCgAAQBAJ&printsec=frontcover#v=onepage&q&f=false
http://blog.seattlepi.com/robertbrown/2014/12/14/the-superrich-have-hijacked-our-democracy/

Even the New York Times itself published reports with the same vision:
http://www.nytimes.com/interactive/2015/10/11/us/politics/2016-presidential-election-super-pac-donors.html?_r=0


But the thesis has its critics too:

http://www.nationalaffairs.com/publications/detail/how-to-think-about-inequality
http://www.economist.com/blogs/freeexchange/2007/08/krugman_on_inequality_and_demo
13  Other / Politics & Society / On the meaning of life and the long-term merits of technologic improvement on: October 26, 2015, 12:46:31 PM
Warning: this text might depress you. Read at your own risk.


Traditionally a philosophical or religious question, the issue of meaning is starting to become the subject of scientific studies.

Meaning is important because we are self-aware and conscious of the certainty of our death. Meaning is one of the ideas that help us deal with death. It's part of our "terror management" of death (V. Cicirelli, Fear of Death in Older Adults: Predictions From Terror Management Theory, Journal of Gerontology, 2002, Vol. 57B, No. 4, P358–P366; https://en.wikipedia.org/wiki/Terror_management_theory).

It helps us deal with the fact that we live on death row (for a crime we didn't commit), trying to entertain ourselves while we wait for our turn to be executed (A. Camus). Or, in cruder words, said deliberately to shock the reader, that we "are corporeal creatures—breathing pieces of defecating meat no more significant or enduring than porcupines or peaches." (Solomon: http://www.scientificamerican.com/article/fear-death-and-politics/).

Questions of meaning become more important as we become more aware of our death. Therefore, the death of a close person, or a life-threatening situation or disease, makes us pay more attention to the issue of the meaning of life. And makes us invest in "meaningful" things.

Actually, some empirical research indicates that a simple conversation about death can change our behaviour (N. Kelley, B. Schmeichel, Thinking about Death Reduces Delay Discounting, PLoS ONE 10(12):e0144228, doi:10.1371/journal.pone.0144228 (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0144228); http://www.scientificamerican.com/article/fear-death-and-politics/; https://www.psychologytoday.com/blog/the-big-questions/201106/does-death-awareness-heighten-the-meaning-life). It seems we are conditioned to avoid thinking about death, and being forced to do so can have some impact.

If a 7-year-old kid ever asked you whether, when he is old and about to die, doctors will have invented some medicine to make him young again, you would realize the impact that first grasping the inevitability of our own death can have on us. Even if, probably, you can't remember the day you first realized that you are going to die.

Having meaning means being an instrument to help/build something that transcends us, that will survive us and give some sense to our existence. It can be working in favour of a collective organization (society, corporation, etc.), working on something that will endure after our death, having kids or, of course, for believers, religion (to them, the meaning of this life is being a test for access to the afterlife).

Meaning means assuming some kind of "immortality" (http://blogs.scientificamerican.com/mind-guest-blog/to-feel-meaningful-is-to-feel-immortal/). The main idea is that something that disappears without any trace can't have any meaning.

As Miguel de Unamuno wrote: "Nothing is real that is not eternal." (Unamuno, Tragic Sense of Life, 1913, III - The Hunger of Immortality: https://www.gutenberg.org/ebooks/14636).

Bear in mind that even something "immortal" wouldn't have meaning in itself. The only conclusion one can reach is that something that perishes without trace can't have any meaning. Therefore, to escape this fate, something of us has to endure. But that doesn't mean that something that endures has meaning in itself.

Of course, since nothing is immortal (immortality means living for all eternity, without end; we couldn't call ourselves immortal even if we survived the more than 13 billion years the Universe has existed; sooner or later, something would go wrong; death would wait patiently "almost an eternity" to catch us), deep down we all know that meaning is meaningless. Immortality is logically impossible to achieve, no matter how long we endure.

But the simple idea that something from us, or related to us, will survive us, at least for a long time, still gives us some sense of meaning. Therefore, if meaning is objectively meaningless, subjectively it still makes sense. We don't have to be certain, or even believe, that some part of us or of the results of our activity will be immortal. To feel subjectively meaningful, we only need the expectation that it can endure for thousands of years and the hope that it will endure for millions.

Moreover, even if an immortal being wouldn't have meaning in itself, and there can't be immortal living beings anyway, this doesn't mean that enduring has no objective meaning at all, at least in the sense that it allows a being to avoid losing all meaning by perishing without trace. Something that endures will always be ready to find a meaning. In terms of meaning, enduring is neutral-positive, because it avoids the clear negativity of perishing without trace. If anything has any (objective) meaning, enduring has to be it.

In this sense, when each generation carries on, it takes on its shoulders the meaning of the lives of all the previous generations that are gone.

This kind of "immortality" is called "symbolic" (Robert Lifton, The Broken Connection: On Death and the Continuity of Life, 1983), because it isn't real immortality (we die); rather, it is an immortality with a social or genetic basis.

When based on reproduction, on children and their descendants, this symbolic immortality takes a biological (genetic) form. The individual dies, but his genes (or a small part of them) will go on.

Human creativity can also produce a creative immortality. Human work (artistic, scientific, etc.) may make an individual's legacy endure beyond his death.

Bear in mind that leaving a trace of our existence is not the same as being remembered. Asking for an enduring memory is normally asking too much. The inventor of the mouse I'm using will endure even if his name is forgotten (a tribute to Engelbart); the same can be said of the several inventors of writing, or of any person who is an ancestor of anyone alive today.

The search for meaning forces us to invest in "altruistic" things or, at least, in things related to other people, because we need them to carry on when we are gone and give meaning to our existence (social exclusion removes feelings of meaning: https://www.sciencedirect.com/science/article/pii/S0022103109000791?np=y).

Thus, death forces us to be less selfish or, at least, to invest in things with more complex egoistic goals. Therefore, a meaningful life can be less "happy". Think of all the sacrifices people make to have kids. Happiness is about taking, about the present, about carefree relationships and enjoying yourself; meaningfulness is about giving, about linking past, present and future, and about duty towards others (see http://www.tandfonline.com/doi/abs/10.1080/17439760.2013.830764 , free abstract only; https://news.stanford.edu/news/2014/january/meaningful-happy-life-010114.html).

But this division can't be taken too seriously. Even leaving aside the question of objective meaning, doing meaningful things also matters from a subjective point of view; even if these things can be stressful, they also increase self-esteem. Therefore, they also increase happiness. Someone who sacrifices his life for other people will get his reward when he looks in the mirror and/or when he receives some gratitude from them.

One could say that living only for ourselves is living a life without meaning. Not only because, as individuals, we are condemned to a short life and, by living only for ourselves, everything we did will die with us, but also because nothing seems to have meaning in itself.

But it's hard to live only for ourselves. Even the most selfish person will normally have to work and will do something positive for others. And the subjective feeling of doing meaningful things increases as one dedicates oneself to someone or something else: one or more individuals one loves, an institution, a society, some work, etc.

Now that you have read this text, research says you are going to think about meaningful things you can do.

P.S. Not all people are sensitive to meaningful things in the same way. Some think (and they are more or less right) that meaningfulness is meaningless. That the only thing that makes sense in this short human life is to enjoy every moment, without much care for responsibilities towards others or for doing things that endure. I still haven't found any empirical research on it, but perhaps the idea of death makes some people try to live the moment even more intensely. I suspect, though, that as time goes by and they get older, the idea of meaning will come back with a vengeance. Perhaps when it is too late for them.

***


Like almost everyone here, I love technology. But our love for it shouldn't cloud our judgement about its long-term effects.

We take for granted that technology is good. And that improved technology is even better.

Since enduring is essential to meaning and, given our short lives, only collectively can we really endure, one has to wonder whether technology has helped us endure as a species.

Enduring is indeed the point of evolution/adaptation. From this perspective, it isn't very important whether a species is intelligent or powerful. The really important thing about a species is: how long has it endured, and how long can it endure in the future? The champions are the cyanobacteria (http://www.ucmp.berkeley.edu/bacteria/cyanofr.html), stromatolites (https://en.wikipedia.org/wiki/Stromatolite), sponges and jellyfish (https://en.wikipedia.org/wiki/Jellyfish).

Humanity in all its forms has been here for about 2.8 million years (since Homo habilis; see also the "new" Homo naledi; more ancient relatives are not considered part of the genus Homo). We survived all this time with limited technology: just some stone and wooden tools and, during part of this period, fire.

But thanks to improved technology, we have created weapons that might extinguish humanity. Probably even an all-out nuclear war wouldn't extinguish us, but we can accept it would seriously increase the risk of extinction. At the least, many parts of Earth would become completely uninhabitable. Moreover, the sky (especially in the northern hemisphere) would be covered with debris (nuclear winter), which could ruin crops for years and create a devastating famine (see https://en.wikipedia.org/wiki/Nuclear_holocaust; http://www.bmartin.cc/pubs/82cab/; http://historynewsnetwork.org/article/129966; http://www.nucleardarkness.org/warconsequences/).

Thanks to the same technology, we are on the brink of ruining the world's environment and, if we go on like this, that might even threaten our own existence (leaving aside the extinctions we are causing among other species, many much older than us: http://www.livescience.com/51280-the-new-dying-how-human-caused-extinction-affects-the-planet-infographic.html).

We are also close to creating artificial intelligence that might be a real threat to us, even if the issue is controversial (http://money.cnn.com/2015/07/28/technology/ai-weapons-robots-musk-hawking/). We don't know whether an intelligent software system could break any security safeguards that limited it and change itself. It makes sense to think that, sooner or later, it would be able to break them. It would be a being as intelligent as the best of us, with quantitative capacities much greater than ours.

On the other hand, technology has given us better means to survive threats that could end us as a species, like asteroids (remember the dinosaurs, 65 million years ago, and the asteroid that struck Yucatán: https://en.wikipedia.org/wiki/Chicxulub_crater), supervolcanoes (remember Mount Toba, 73,000 years ago, which almost extinguished us: https://en.wikipedia.org/wiki/Toba_catastrophe_theory) and diseases.

Is technology granting us better conditions to endure (forget about the quality of individual life, which is not relevant from the perspective of the "meaning" of enduring)? Considering the threat of nuclear weapons, the environmental risks and artificial intelligence, I have some doubts.

But if technology allowed us to expand to other worlds (Mars for a start), this would improve our capacity to endure even against the threat of nuclear weapons or other man-made threats. And, of course, it would remarkably increase our capacity to survive natural catastrophes like the ones mentioned above. But we are not there yet (http://waitbutwhy.com/2015/08/how-and-why-spacex-will-colonize-mars.html/2#part2).

14  Other / Off-topic / Tutanota is accepting the creation of new free encrypted email accounts again on: September 07, 2015, 10:10:34 AM
A Tor-friendly email account, encrypted, with no abusive questions about you, no nagging security measures imposed on you against your will to protect you from your own negligence (they say...) and, probably, no ads based on automatic scrutiny of the content of your emails. Did I write free? Go get yours: tutanota.com


PS. I have no connection whatsoever with them.
15  Economy / Digital goods / Selling Second Life avatars/accounts on: August 26, 2015, 01:18:13 PM
I have several Second Life avatars/accounts to sell. All with legacy names (two proper names, without "Resident" as the second name), male and female.

Some are from 2007 and others from 2010.

The older ones have some inventory, but nothing of particular value. All are basic free accounts, with no financial information on file.

Make me an offer.

I can accept several cryptocurrencies. Other means of payment will depend on your reputation.

As you know, selling accounts is against Second Life's terms of service (I care about ethics, not about their monopolistic rules), so if they conclude that an account was sold, they might close it. I'll give you a one-week money-back guarantee in that case. After one week you will be on your own. Even so, with so many accounts, the probability of any problem is very low.

Check my feedback in my signature and on the trust link. You can trade with confidence.
By the way, also check my rules on how to avoid being scammed: https://bitcointalk.org/index.php?topic=199141.0 or a more extensive set here: https://bitcointalk.org/index.php?topic=199658
16  Alternate cryptocurrencies / Marketplace (Altcoins) / I'm buying Stellars on: August 23, 2014, 08:39:18 PM
I'm buying stellars (STR), paying a better price for large amounts.

Please, no offers for quantities lower than 5,000, unless the price is very good.

And be reasonable: avoid asking for prices above the market price.

I can pay with bitcoin, or in USD/euros via PayPal (yes, I know, but some people like it).

Just send me your offers by PM.

You can sell with confidence; just check my ratings and, especially, the vouches in my signature.

By the way, also check these rules against being scammed: https://bitcointalk.org/index.php?topic=198899.msg2080335#msg2080335
17  Other / Archival / Closed on: August 02, 2014, 02:09:06 PM
Closed
18  Bitcoin / Bitcoin Discussion / If bitcoin ever goes mainstream on: June 05, 2014, 10:47:43 PM
If bitcoin ever goes mainstream, it will surely ruin the lives of everyone invested in fiat (see https://bitcointalk.org/index.php?topic=180798.0). Gresham's Law will destroy their fiat savings, since everyone will want the stronger bitcoin and will dump fiat, devaluing it. We'll see it happen first in troubled economies, with high inflation and generalized distrust of government money because of past problems.

And I have little doubt that bitcoin won't bring any equality; on the contrary, it will make us bitcoiners rich, but at the expense of fiat holders.

I also doubt bitcoin will bring much prosperity, since it will establish a deflationary monetary system. But maybe it will be possible to minimize those deflationary effects:

a) Commercial banks will adopt a bitcoin fractional reserve system, lending bitcoins with only partial backing from their holdings. That will allow an artificial (banking) expansion of the amount of bitcoins, thanks to the so-called money multiplier.
But since it will be hard to maintain a trustworthy deposit insurance scheme (the government won't have enough bitcoins for that and can't create them), the system will be much more susceptible to bank runs.
And interest rates will have to be high, because most people will prefer to keep their bitcoins in their own wallets. So forget about low interest rates. High real interest rates (aggravated by potential deflation) can ruin any economy, since they thwart much productive investment based on credit.
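As an illustration (not the author's own calculation), the textbook money-multiplier arithmetic behind point a) can be sketched like this; the 10% reserve ratio is just an assumed example:

```python
# Idealized money-multiplier arithmetic: banks re-lend every deposited
# bitcoin except for a fixed reserve ratio (an upper bound, not a forecast).

def money_multiplier(reserve_ratio: float) -> float:
    """Maximum expansion factor of deposits under fractional reserve."""
    return 1.0 / reserve_ratio

def broad_money(base_coins: float, reserve_ratio: float) -> float:
    """Total 'banking bitcoins' a given monetary base could support."""
    return base_coins * money_multiplier(reserve_ratio)

# With an assumed 10% reserve ratio, the 21 million base bitcoins could
# back up to ten times as many bitcoins in bank deposits.
print(broad_money(21_000_000, 0.10))
```

In practice the expansion would be far smaller: as the text argues, coins hoarded in personal wallets never enter the deposit base at all.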

b) A bitcoin standard might also be created, with governments printing money freely convertible into bitcoins. But I can already see the runs on governments in times of crisis.

How will the central bank stimulate the economy in depressions?
Quantitative easing won't be possible. And borrowing bitcoins to lend to commercial banks at cheaper interest rates won't be cheap.

How will governments control massive tax evasion, especially if the anonymity of bitcoin improves?
They will find a way; nothing has destroyed the State in more than 13 thousand years of hierarchical societies, and bitcoin won't be able to do it either. It seems we will be subjected to intense surveillance of our Internet use in order to track our earning and spending of bitcoins. Some will be able to evade it, but the majority won't.

Will the volatility of bitcoin ever end?
The increase in users will keep bitcoin's price going up. However, because supply and demand are driven by human perceptions and emotions, after every huge boom in price a bust will come. Every overshooting of the price will be followed by a general perception that the price rose too fast and, consequently, by a drop.
For volatility to end, adoption would have to be so widespread that further increases in new users would be small in percentage terms compared with the already large user base. Currently, since the number of users is relatively low, it's easy for that number to grow by more than 30% in a short period. And since the supply of bitcoins is limited (the current rate of issuance is relatively small and will be halved again in 2016), the end of volatility would also require either a stagnant GDP or a bitcoin-based fractional reserve system allowing banking money to grow artificially alongside GDP. That won't happen for years. Volatility is here to stay for a long time.

Will this scenario be the future?
It's impossible to say. But bitcoin already seems to have too strong a standing to fade away on its own.
A better altcoin might be a stronger obstacle than fiat, but bitcoin can always adopt any improvements.

Can governments still destroy it?
An internationally coordinated effort against the main exchanges and sites could indeed hurt bitcoin heavily. Even our own wallets are susceptible to attacks by viruses/worms (remember Stuxnet?) and the network can also be affected: access to it can be blocked by ISPs on government orders. Many could evade these blocks, but most bitcoiners would give up, especially considering the risk of sanctions. That would indeed spook major investors.
This can still happen, and it will happen in troubled economies. The outcome is anyone's guess; it would depend on the coordination and level of effort of governments. Governments have lost similar wars (drugs, alcohol, prostitution), but bitcoin is an easier target than those activities. It's no surprise that, besides scams, governmental repression has been the main negative driver of the price.

What would be the consequences of this massive adoption on the price?
I can't even imagine what the price of bitcoin would be. Forget all the low forecasts you have read before.


But the genie is out of the bottle. There is nothing we can do, except tell people about it.


added 14 March 2015:

What does it mean for bitcoin to go mainstream?

One can use different criteria.

1) A percentage of total spending in a year. Even 1% would be huge and would imply a very high price for bitcoin. However, according to the aforementioned Gresham's Law, bitcoin will be hoarded and only rarely spent. It's like gold: gold is mainstream, but people don't exchange it much; they prefer to keep it. Of the main functions of money (medium of exchange, unit of account and store of value), I think the last will be bitcoin's main function. It won't be used much as a unit of account, because of its volatility (many places that accept bitcoin prefer to announce prices in USD). And it won't be used a lot as a means of payment, because people will mostly hoard it.

2) Accessibility: the fact that anyone can easily buy, sell and pay with bitcoin. Clearly, this is decisive. If we could exchange and spend bitcoins at the majority of banks' ATMs and retailers, we could say bitcoin is mainstream. But imagine that, even under these conditions, bitcoin kept being scarcely used, with small demand. It wouldn't be mainstream. Many physical businesses complain they have never had customers paying with bitcoin. So this is a necessary condition, but it isn't enough.

3) A percentage of people owning it. This seems to be a good criterion, but it isn't easy to establish a number: 10% seems enough, but not 1% or even 5%. 70 or even 350 million users worldwide would be great, but not enough to talk about mainstream. I guess only at 10% would we start to see the problems mentioned above unfold. But these kinds of projections are hard to make.

Think about PayPal (I hate it, but let's use it as an example). Is PayPal mainstream? I don't think so. It had a transaction volume of only 180 billion USD in 2013, about 150 million active registered accounts and, in many countries, you can use it as a means of payment in only a few places. So it has reached about 2% of the world's population.
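A quick back-of-the-envelope check of that 2% figure, using the account number quoted above (the roughly 7 billion world population for 2013 is my assumption, not a figure from the text):

```python
# Share of world population with an active PayPal account, per the
# figures in the text (150 million accounts; ~7 billion people assumed).
paypal_accounts = 150_000_000
world_population = 7_000_000_000  # rough 2013 estimate (assumption)

share = paypal_accounts / world_population
print(f"{share:.1%}")  # roughly 2.1%
```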

Bitcoin had about 23 billion USD in trading volume alone during the last 12 months (see http://www.bitcoinity.org/markets/list?currency=ALL&span=6m, at the current price) and maybe 1 or 2 million active users. It still has a long way to go.
19  Other / Politics & Society / Evidences of chechens fighting in Ukraine? on: June 02, 2014, 05:16:13 AM
There seems to be mounting evidence that the Russians who were in Crimea and are now in eastern Ukraine are Chechens from the disbanded Vostok Battalion (http://www.bbc.com/news/world-europe-27633117).

I'm taking this news from the BBC with some prudence, since the BBC hasn't been completely balanced on the conflict. But the article seems credible.

I think no one will deny that there are armed Russian citizens in Ukraine. It seems about 30 of them were returned home, dead, after the takeover of the Donetsk airport by the Ukrainian army. Some of the rebel leaders are Russian citizens.

One of these soldiers said they received orders from Kadyrov (the pro-Moscow strongman in Chechnya) to go to Ukraine. Kadyrov didn't deny there were Chechens in Ukraine, but said he didn't give any such order (http://www.bbc.com/news/world-europe-27633117).

Would anyone believe these more or less mercenary fighters went there of their own free will? Or that Kadyrov would decide that without instructions from above?
20  Economy / Speculation / Okcoin just passed Huobi in trading volume on: May 31, 2014, 02:28:37 AM


http://www.bitcoinity.org/markets/list?currency=ALL&span=7d


Since I don't trade on either one, I don't have a clear explanation for these data.