Bitcoin Forum
Author Topic: What would it take to make a 51% attack on the whole bitcoin network?  (Read 2580 times)
coinfreak (OP), Newbie (Activity: 39, Merit: 0)
September 14, 2011, 08:14:03 PM (last edit: September 15, 2011, 02:49:05 PM by coinfreak)
#1

From Reddit

Quote from: avsa
I've read about many units of supercomputer, but not being an expert I couldn't relate to it. If someone wanted to do a 51% attack on the whole Bitcoin network, effectively rewriting its whole credit history, what kind of computing power would be needed? Does a company like Google or IBM, or some military installation, have enough computing power at its disposal to overwrite the current blocks? Is it possible?
Let's do some trivial research:
  • Current network hashrate is 15 TH/s
  • My $600 Radeon HD 6990 performs at 700 MH/s
  • A successful 51% attack would require about 21,500 such GPUs
  • The cost of the attack is about 13 million dollars
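A quick back-of-the-envelope check of the figures above (2011 numbers taken directly from the post):

```python
# Check of the attack-cost estimate; all inputs are the post's own figures.
network_hashrate = 15e12   # 15 TH/s total network hashrate
gpu_hashrate = 700e6       # 700 MH/s per Radeon HD 6990
gpu_cost = 600             # USD per card

gpus_needed = network_hashrate / gpu_hashrate   # ~21,429 cards to match the network
attack_cost = gpus_needed * gpu_cost            # ~$12.9 million in hardware

print(round(gpus_needed), round(attack_cost))
```

This matches the ~21,500 GPUs / ~$13M figures, ignoring power, cooling, and housing costs.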

As mentioned in the discussion...

Quote from: avsa
There's a very distinct difference between needing more computing power than the world's supercomputers combined and needing around 4-13 million dollars in equipment. Anything in that magnitude is very cheap not only for a government, but for most banks and big corporations; heck, it's attainable even for a very rich individual. Honestly, I don't see what the motivation would be, but it's technically a fragile system.

Sad but true...

The cause of this weakness lies in the chosen POW method: sha256(sha256(H)). It allows building highly specialized, cost- and power-efficient mining rigs, shifting the balance away from ordinary users with only general-purpose PCs toward governments and corporations that have the resources for mining R&D.

How can we fix this issue?

I propose a main criterion for the specializability of a POW method: the order of magnitude of the number of processing and storage elements required per unit of rig.
In the case of sha256(sha256(H)) it's about 10 ALUs + 8 RAMs per round, and it's relatively easy to trade processing power for storage and vice versa (via pipelining) because of the high data locality. We should keep this criterion as close to a general-purpose PC as possible, so that attempts at specialization are limited by the exponentially growing cost of such optimization.

I don't have any idea better than increasing the storage requirements per rig unit, thus breaking the data locality.
Say we have N storage units S_i, where:
  • S_0 = sha256(H);
  • S_{i+1} = sha256(S_i + S_j), for i = 1..N, with j = S_i mod i;
  • and finally POW = sha256(S_N).
We need N to be big enough that the intermediate S_i values overflow processor registers and cache, yet small enough that the whole data set fits in RAM without swapping.
N would also need to be adjusted to the difficulty somehow.
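A minimal Python sketch of this chain. The function names are mine, and two details the post leaves open are assumed here: "+" is taken as byte concatenation, and "S_i mod i" as interpreting the digest as a big-endian integer:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chained_pow(header: bytes, n: int) -> bytes:
    """Sketch of the memory-hard chain: each step back-references a
    pseudo-randomly chosen earlier digest, so all n intermediate values
    must stay resident in RAM for the duration of the computation."""
    S = [sha256(header)]                           # S_0 = sha256(H)
    for i in range(1, n + 1):
        j = int.from_bytes(S[i - 1], "big") % i    # j = S_{i-1} mod i
        S.append(sha256(S[i - 1] + S[j]))          # S_i = sha256(S_{i-1} + S_j)
    return sha256(S[n])                            # POW = sha256(S_N)
```

The random back-references are what break data locality: a pipelined implementation cannot discard old digests, since any of them may be needed at any later step.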

Any thoughts?
kjj, Legendary (Activity: 1302, Merit: 1024)
September 14, 2011, 09:18:16 PM
#2

Bcrypt isn't magic.  It just does the difficulty adjustment internally.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
Gavin Andresen, Legendary (Activity: 1652, Merit: 2216)
Chief Scientist
September 14, 2011, 09:27:10 PM
#3

A Radeon 6990 has 4 gigabytes of ram.

If the task is "find a number that bcrypts/scrypts to less than a given hash target," I don't see anything that would stop a GPU programmer from implementing bcrypt/scrypt on the GPU and parallelizing at the try-different-nonces level.

Maybe I'm missing something; I'm probably biased because I worked at SGI from 1988 to 1996 and saw first-hand the evolution of GPUs from very-special-purpose chips with very limited memory to very-general-purpose vector-processing pipelines with very fast access to lots of memory.

How often do you get the chance to work on a potentially world-changing project?
coinfreak (OP), Newbie (Activity: 39, Merit: 0)
September 14, 2011, 09:40:15 PM
#4

Quote from: Gavin Andresen
A Radeon 6990 has 4 gigabytes of ram.

If the task is "find a number that bcrypts/scrypts to less than a given hash target," I don't see anything that would stop a GPU programmer from implementing bcrypt/scrypt on the GPU and parallelizing at the try-different-nonces level.

Maybe I'm missing something; I'm probably biased because I worked at SGI from 1988 to 1996 and saw first-hand the evolution of GPUs from very-special-purpose chips with very limited memory to very-general-purpose vector-processing pipelines with very fast access to lots of memory.
The HD 6990 has more than 3000 processing units, so bcrypt/scrypt/whatever-crypt can be parallelized relatively easily.
My idea is to make each processing unit require a lot of RAM for intermediate data, thus adding a "GByte" dimension to the "GHash" race, and then somehow keeping the GByte-to-GHash profile similar to that of general-purpose PCs.
etotheipi, Legendary (Activity: 1428, Merit: 1093)
Core Armory Developer
September 14, 2011, 09:43:48 PM
#5

First of all, this is exactly the question I was asking in my post here (Anyone else concerned about the global hashrate?). This scares me because you're right: the cost is prohibitive for most actors, but still completely feasible for an individual/corporation/mafia/government with enough motivation.

One thing that should offer some comfort is that running 20,000 GPUs is a lot harder in practice than in theory. You're going to have to buy, at minimum, a warehouse, and probably have your own power plant, as you would need about 10 megawatts of constant output to run those GPUs. Not to mention hiring lots of people to build and maintain all those systems. Alternatively, you could buy 100,000 FPGAs and get away with 1 MW of power, but at substantially higher cost. Of course, you don't need to maintain it very long to execute the attack, but it's still a pretty substantial investment of time and money, well in excess of the cost of the raw hardware.
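A rough check of the 10 MW figure. The per-card wattage here is my assumption (~500 W for a dual-GPU card like the HD 6990 under full load), not a number from the post:

```python
# Sanity-check of the power-plant estimate; wattage per card is assumed.
gpus = 20_000
watts_per_gpu = 500                 # assumed draw for a dual-GPU card at full load
total_megawatts = gpus * watts_per_gpu / 1e6
print(total_megawatts)              # total continuous draw, in MW
```

At 500 W per card this comes out to exactly 10 MW, consistent with the estimate above.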

My concern was more about botnets. It would seem that the biggest botnet is capable of generating the same order of magnitude as the current global hashrate, though it's tough to tell whether they could achieve 10% or 51%. Either way, it concerns me, and it's not going to get any better unless BTC goes up to $100 and people start buying more hardware. At times like now, it appears profitable to run existing hardware, but too risky/long-term to justify buying new hardware. If BTC floats this way until the generation rate drops from 50 BTC/block to 25 BTC/block, I think we're "screwed." I put it in quotes because the feasibility of an attack does not guarantee it. But certainly, the fact that it's feasible will have an impact on general acceptance of the currency.

In regards to botnets, I think the current POW is actually better.  If users were solely using their CPUs for such calculations, then even a "small" botnet would have no problem overpowering the base of 20,000 BTC users using 1-2 CPUs.  The computers controlled by botnets typically don't have big GPUs, and thus are at a significant disadvantage relative to the base of BTC nerds with 20+ GPUs in their basement.  Of course this may not hold when talking about the threat of governments that want to attack the network, but I would guess their existing hardware is similarly limited-- at least right now, most "computing clusters" still consist mainly of CPUs and not GPUs.  Getting up to 15 THash/s on CPUs is a lot harder.

Founder and CEO of Armory Technologies, Inc.
Armory Bitcoin Wallet: Bringing cold storage to the average user!
Only use Armory software signed by the Armory Offline Signing Key (0x98832223)

Please donate to the Armory project by clicking here!    (or donate directly via 1QBDLYTDFHHZAABYSKGKPWKLSXZWCCJQBX -- yes, it's a real address!)
bitcoinhead, Newbie (Activity: 15, Merit: 0)
September 15, 2011, 12:21:41 AM
#6


Let's see...

Take over a couple of the mining pools.

or

Start a new mining pool with zero fees and get people to switch over (hey, even lie about how many GHashes you have to get more people on board), then use this to attack.

etotheipi, Legendary (Activity: 1428, Merit: 1093)
Core Armory Developer
September 15, 2011, 12:33:42 AM
#7


Quote from: bitcoinhead
Let's see...

Take over a couple of the mining pools.

or

Start a new mining pool with zero fees and get people to switch over (hey, even lie about how many GHashes you have to get more people on board), then use this to attack.


This brings up a really good point. The miners who are computing for the pools should be aware of what the "current" block really is (they are regular nodes, too, aren't they?). If so, they could be programmed to pay attention to the current longest chain and refuse to do work for the pool if it sends headers to be hashed that use a different previous-block hash. This assumes that the only type of attack someone would want to execute is a blockchain rewrite/double-spend attack.

This would seem to shut down this kind of attack very quickly. And as you pointed out, some pools already have a "dangerous" amount of computing power. Though now that I think about it, my miners are using Phoenix, which doesn't actually learn about the blockchain. Perhaps this is worthy functionality to try to fit into the GPU-miner code (such as Phoenix).
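The safeguard described above can be sketched in a few lines. This is hypothetical: the function and field names are mine, and no miner of the era actually implemented such a check:

```python
# Hypothetical sketch: a pool miner that also runs a full node compares
# the previous-block hash in the work unit it receives against its own
# node's best-chain tip, and refuses to hash anything that would extend
# a different (possibly attacking) chain.

def should_accept_work(work_prev_hash: str, local_best_hash: str) -> bool:
    """Accept pool work only if it builds on the tip this miner's
    own node currently considers the longest chain."""
    return work_prev_hash == local_best_hash

# Example: work building on the known tip is accepted; work building on
# an unknown block (a possible rewrite attempt) is declined.
assert should_accept_work("aaa", "aaa")
assert not should_accept_work("bbb", "aaa")
```

The design choice is that each miner vetoes the pool operator with local information only, so a compromised pool cannot silently redirect its hashpower.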

(Yes, yes, I know, there's no good reason for a pool operator to attack the network.  But people don't always do things for good reasons.  I'd rather just avoid the possibility entirely)

Cosbycoin, Hero Member (Activity: 980, Merit: 506)
September 15, 2011, 12:54:35 AM
#8

I'm not sure someone could get their hands on that many GPUs.
mrb, Legendary (Activity: 1512, Merit: 1027)
September 15, 2011, 04:37:57 AM (last edit: September 15, 2011, 07:44:40 AM by mrb)
#9

Quote from: Cosbycoin
Not sure if someone could get their hands on that many GPUs.

Why? Tianhe-1A is made of 7,168 GPUs. 21k high-end GPUs may take months to be provisioned by a manufacturer, but it is certainly within reach with an adequate budget.
ovidiusoft, Sr. Member (Activity: 252, Merit: 250)
September 15, 2011, 06:57:50 AM
#10

This type of attack is considered "probably not a problem": https://en.bitcoin.it/wiki/Weaknesses#Attacker_has_a_lot_of_computing_power . To pull it off, someone would need not only that much processing power but also to maintain the attack indefinitely. Running costs of a few million dollars a month are only justified if the attacker gains more than that AND has no other means of achieving the same results.

Botnet owners will not do this, because they can use their power for much more profitable "services". Governments can filter Bitcoin or make it illegal with less effort. So that leaves a very rich private person who really, really hates Bitcoin. It doesn't seem to be a problem.
pusle, Member (Activity: 89, Merit: 10)
September 15, 2011, 08:25:30 AM
#11


My suggestion was to have two chains verifying all transactions: the one we have today, plus another with a different hashing algorithm that favors generic CPUs.

This way you'd need both a ton of GPUs AND a huge botnet to attack the system.

As Bitcoin grows globally, there would eventually be no government or botnet in the world that could catch up with it.

coinfreak (OP), Newbie (Activity: 39, Merit: 0)
September 15, 2011, 12:31:29 PM
#12

Quote from: pusle
My suggestion was to have two chains verifying all transactions: the one we have today, plus another with a different hashing algorithm that favors generic CPUs.

That's the problem. Is there any existing, well-known hashing algorithm that favors generic CPUs and has properties preventing it from being implemented in a highly effective GPU/FPGA/ASIC at acceptable cost?
etotheipi, Legendary (Activity: 1428, Merit: 1093)
Core Armory Developer
September 15, 2011, 02:32:01 PM
#13

Quote from: ovidiusoft
Botnet owners will not do this, because they can use their power for much more profitable "services".

This is a bad argument. It assumes that people are rational and/or have some kind of nominal, predictable intentions. People don't always do things for good reasons, or maybe they have reasons others didn't think of.

  • Perhaps a government/company pays a botnet owner to do it
  • Perhaps the government takes over a botnet and decides to kill two birds with one stone.
  • Perhaps a botnet owner thinks that BTC has no future, so he finds someone willing to let him short-sell $1 million in BTC, and then he crashes the network for an immediate payout.

They don't need to sustain this attack forever to do damage. It only takes one time to permanently scar BTC (maybe even deliver the final KO), and it wouldn't make me happy to say "see, I was right!" I want to see it not happen, and not be possible to happen. We've seen what happens to the BTC market when non-security breaches occur (ones not related to the functional security of BTC itself); what do you think is going to happen when someone pulls off a massive double-spend that actually does reflect on the security of BTC?

ovidiusoft, Sr. Member (Activity: 252, Merit: 250)
September 15, 2011, 03:36:06 PM
#14

Quote from: etotheipi
We've seen what happens to the BTC market when non-security breaches happen (not related to the functional security of BTC itself), what do you think is going to happen when someone pulls off a massive double-spend once which actually does reflect on the security of BTC?

I don't know what would happen. The market as it is is crazy enough that I don't even dare to guess :). What I do know is that (1) there is no 100% security, (2) bad things that could happen will happen sooner or later, and (3) problems get fixed.

Obviously, we should try to make the system better *before* attacks can destroy it. But even if it does get destroyed (I don't believe this will happen; I think Bitcoin being declared illegal is much more probable), I think it's already been proven that people want such a system. And what people want, people get. It might not be called Bitcoin, but in the end, that doesn't even matter.
etotheipi, Legendary (Activity: 1428, Merit: 1093)
Core Armory Developer
September 15, 2011, 05:08:26 PM
#15

Quote from: coinfreak
Is there any existing well-known hashing algorithm which favors generic CPUs and have some properties preventing it to be implemented in highly-effective GPU/FPGA/ASIC for acceptable cost?

You don't need a raw hashing algorithm to impose these kinds of requirements on the system. You just need an unparallelizable process that ultimately leads to a random string of bits, i.e. only the last step needs to be hashing. This is a silly, off-the-top-of-my-head idea, but it should illustrate the point:

The current process for POW looks like this:

Code:
a = Header
POW = Hash256(a)

Instead you could make it:

Code:
a = Header
b = [Hash(a) Hash^2(a) Hash^3(a) ... Hash^N(a)]
c = Gzip(b)
POW = Hash256(c)

The idea here is that each thread must compute N sequential hashes and save all the intermediate results so it can make one huge string and gzip it (I believe gzip needs the entire string in memory to operate... if not, then pick a different operation that does require it).  Then the result is hashed to 32 bytes to get the final proof-of-work.  

The reason something like this works is that N can be chosen to enforce any kind of per-thread memory requirement.   If N is 10,000,000, then each thread has to hold 320 MB of data at once to do the GZIP.  That means that a 6990 would only be able to run 10 threads at once:  possibly faster than your 4-core CPU, maybe not.  Most GPUs with 1 GB of mem would only run 2 or 3 threads.  At best, your GPU would be like an extra CPU, and a modern quad-core computer with 4 GB of RAM could probably run fine with this in the background (or the user could scale it down to 1-2 cores when they need the resources).
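A minimal Python sketch of this construction, under a couple of stated assumptions: zlib stands in for Gzip, and "Hash256" is taken in the Bitcoin sense of double SHA-256. (As kjj points out later in the thread, gzip can actually stream, so this particular compressor doesn't truly force the whole buffer into memory; the sketch just illustrates the shape of the scheme.)

```python
import hashlib
import zlib

def memory_bound_pow(header: bytes, n: int) -> bytes:
    """Sketch: compute n sequential hashes, keep every intermediate
    digest, compress the concatenation, and double-SHA256 the result.
    Peak memory is ~32*n bytes per thread, which is the point."""
    h = hashlib.sha256(header).digest()
    chunks = [h]                              # Hash(a)
    for _ in range(n - 1):                    # Hash^2(a) ... Hash^n(a)
        h = hashlib.sha256(h).digest()
        chunks.append(h)
    c = zlib.compress(b"".join(chunks))       # c = Gzip(b) (zlib stand-in)
    first = hashlib.sha256(c).digest()
    return hashlib.sha256(first).digest()     # POW = Hash256(c)
```

With n = 10,000,000 each thread would hold 32 bytes x 10^7 = 320 MB before compressing, matching the per-thread figure in the paragraph above.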

kjj, Legendary (Activity: 1302, Merit: 1024)
September 15, 2011, 05:54:48 PM
#16

Nope, sorry.  Gzip is a streaming compressor.

And in general, GPUs aren't as memory limited as you are thinking.  The typical highish end video card that we use has more memory on it than a generic server from a couple of years ago.  Any change that makes hashing memory intensive enough to stop GPU mining will kill the network by forcing nodes to drop out.

AnRkey, Newbie (Activity: 18, Merit: 0)
September 15, 2011, 06:11:01 PM
#17

Quote from: kjj
Nope, sorry.  Gzip is a streaming compressor.

And in general, GPUs aren't as memory limited as you are thinking.  The typical highish end video card that we use has more memory on it than a generic server from a couple of years ago.  Any change that makes hashing memory intensive enough to stop GPU mining will kill the network by forcing nodes to drop out.

+1 and +1
etotheipi, Legendary (Activity: 1428, Merit: 1093)
Core Armory Developer
September 15, 2011, 07:00:04 PM
#18

Quote from: kjj
And in general, GPUs aren't as memory limited as you are thinking.  The typical highish end video card that we use has more memory on it than a generic server from a couple of years ago.  Any change that makes hashing memory intensive enough to stop GPU mining will kill the network by forcing nodes to drop out.

I don't agree. I don't see how GPUs can be "not as memory limited as you are thinking." We know the total RAM on the card, and we can count on a little extra for local memory and registers, but not on the order of hundreds of MB. I program CUDA on NVIDIA GPUs, and you're pretty much limited to the RAM amount stated on the box (and OpenCL isn't much different). You can use CPU RAM if you want, but there are massive latencies associated with transferring data over the PCIe bus. Perhaps the latencies aren't all that important in this application, but the point is to make the computation memory-limited, not computation-limited.

Right now the computation requires virtually no RAM beyond a few round constants and the original 80-byte header. A GPU with 1600 shaders might have 2 GB of RAM; that's 1-2 MB per shader. But a CPU typically has 250-1000 MB per core. If each thread required only 100 MB of RAM, then even semi-older computers could apply all their cores to the computation without dramatically impacting system usability (if at all). But it would limit the 1600 shaders on a GPU to only 20 simultaneous threads (about 1-2% of GPU capacity).
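The memory arithmetic in that paragraph checks out; here it is worked through, using only the post's own figures:

```python
# Per-shader and per-thread memory arithmetic from the paragraph above.
gpu_ram_mb = 2048                  # 2 GB card
shaders = 1600                     # shader (core) count
mb_per_shader = gpu_ram_mb / shaders           # ~1.3 MB available per shader
per_thread_mb = 100                            # proposed per-thread requirement
gpu_threads = gpu_ram_mb // per_thread_mb      # only 20 threads fit in card RAM
utilization = gpu_threads / shaders            # fraction of shaders kept busy
print(mb_per_shader, gpu_threads, utilization)
```

So a 100 MB-per-thread requirement would leave roughly 98-99% of the GPU's shaders idle, which is exactly the leveling effect being argued for.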

As for GZIP: the point was to apply a process that needs the entirety of the data in order to run and can't be broken into smaller pieces. I'm sure there are a billion existing algorithms that can do this; I just guessed (incorrectly) that gzip was one of them.

Personally, though, I think the biggest threat is botnets, so the dominance of GPUs is actually preferable to me. You don't have to agree, though.

coinfreak (OP), Newbie (Activity: 39, Merit: 0)
September 15, 2011, 08:42:43 PM (last edit: September 16, 2011, 11:24:43 AM by coinfreak)
#19

Quote from: etotheipi
Personally though, I think the biggest threat is actually botnets, so the dominance of GPUs is actually preferable to me.

For now that's true. But if you look at the size of the BitTorrent network, you'll see more than 100 million peers. I think if Bitcoin ever reached such popularity, botnets wouldn't be a problem.
storr, Newbie (Activity: 13, Merit: 0)
September 16, 2011, 09:16:24 PM
#20

I worked for 5 years at a company that creates ASIC chips, and I understand the process well enough. My estimate is that it is possible to create an ASIC that draws the same power as a Radeon 5870 (for example) and calculates 20-100 times more hashes. That is not surprising, because such a chip is special-purpose while a GPU is universal. I suppose such an ASIC could be created in 6-12 months. The power of the network now is approximately 12 THash/s, equivalent to ~20,000-40,000 GPUs. That means 2,000 such chips would be enough to attain more than 50% of the computing power. I think the cost of creating that number of chips is approximately 5 million dollars. Indeed, the major part of this money would be spent on designing the chip, not on manufacturing. Therefore, to create 10,000 such chips, one would need approximately 10 million dollars. And 10,000 chips would be 80% of the network's computing power.
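A rough consistency check of these numbers. The per-GPU rate and the speedup factor are assumptions drawn from the post's own ranges (~400 MH/s for a Radeon 5870, low end of the 20-100x claim):

```python
# Consistency check of the ASIC estimate; gpu_rate and asic_speedup are
# assumed values taken from the ranges stated in the post.
network_hashrate = 12e12      # ~12 THash/s network total
gpu_rate = 400e6              # assumed ~400 MH/s per Radeon 5870
asic_speedup = 20             # low end of the claimed 20-100x advantage
asic_rate = gpu_rate * asic_speedup            # 8 GH/s per chip
chips_to_match = network_hashrate / asic_rate  # chips to equal the network
print(chips_to_match)
```

Even at the low end of the speedup range, ~1,500 chips match the whole network, consistent with the claim that 2,000 chips would exceed 50%.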

So I think an ASIC is the most plausible way to accomplish a 51% attack.

What can we do to prevent the network's destruction? I have thought about it, but I have not found a solution. IMHO, changing the POW method helps only the first time, because we can update our miners quickly, but it is not possible to change an ASIC chip; the attacker would need to make another one (+3-6 months). Adding memory requirements to the POW method is a good idea, but it doesn't change much. Let us say, not 10 but 20 million dollars per 80% of the network's computing power.

I think a better way to find a solution is to think about changing the way nodes cooperate. For example, it might be possible to create a system of trust between nodes. If a node takes suspicious actions (distributes a new block that omits a majority of pending transactions, or a new block chain that removes the last 10 blocks), its "rank" decreases. If a node distributes good information, its "rank" increases. Information from nodes with too low a "rank" is skipped. It's only a rough idea, I know; I just want to show another possible direction.

Anyway, what we can do right now (besides finding a solution) is to recognize the problem, and change the status of this vulnerability in the wiki from "Probably not a problem" to "Might be a problem".
What do you think?

