Bitcoin Forum
Author Topic: Where is the separate discussion devoted to possible Bitcoin weaknesses.  (Read 9095 times)
throughput (OP)
Full Member
August 11, 2010, 03:45:42 PM
 #1

Just to be able to ask "What if ...?" and have all ideas collected in one place.

For example.

It seems that a generating node does not need to receive all those transactions at all.
The only data it needs is the previous block hash.
Right?

Next.
It is possible to connect to almost every publicly accessible node, right?
We can collect their addresses and establish connections to almost all of them.
And send them all the data we want, like fake (or not so fake) transactions in huge volumes.
What if it is possible to throttle their generating capability by forcing them to receive and verify
very large amounts of (possibly invalid) transactions (or other trash)?

If that is true, then we can lower the difficulty, right?
Just do this for a long period of time.
When it drops to a value acceptable for our supercomputer (botnet),
we connect it to the network, but not directly:
we connect it via a special node that forwards messages in a special way, filtering out the trash data we are still flooding.
So the supercomputer will receive the blocks and participate in generation, while the others are flooded and get
only a small portion of the generated BTC.

Then, if we are not interested in the generated BTC, we may start generating a blockchain fork.
Immediately after the difficulty drops, we start to generate an alternative version of the blockchain in an isolated environment.
Since the difficulty does not change immediately, we can try to outperform the rest of the network while they are chewing on our
trash data. Soon enough we present everybody with the longest chain, and then the difficulty rises back.
By doing this it is possible to wipe out our previously spent transactions, if they were made after the blockchain fork.
So, is it possible that we recover them and get back unspent transactions? And spend them again?
How will previous transactions be incorporated into the new blockchain if they were "respent" in that manner?

And then it can be repeated.
If I'm wrong, just say: "you are wrong".
But you may also give me a hint why.
Gavin Andresen
Legendary
Chief Scientist
August 11, 2010, 04:10:56 PM
 #2

Bitcoin's p2p network is subject to various kinds of denial of service attacks.

There, I said it.

Do you have constructive suggestions for how to fix it, or are you the kind of person who just enjoys breaking things because you can?

Ideas that have been bouncing around my head that may or may not work:

+ have clients tell each other how many transactions per unit of time they're willing to accept.  If a client sends you more (within some fuzz factor), drop it.  Compile in a default based on the estimated number of transactions for a typical user and an estimate of the number of current users.  (A rough sketch of this follows below.)

+ require some proof-of-work as part of the client-to-client connection process (helps prevent 'Sybil' attacks).

This is an active area of research; see, for example: http://scholar.google.com/scholar?q=ddos+attacks+by+subverting+membership
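A minimal sketch of what the first idea (per-peer rate limiting with a token bucket) could look like; the numbers and the relay/drop_peer helpers are placeholders, not anything from the actual client:

Code:
import time

class PeerTxLimiter:
    """Token bucket: a peer may send `rate` transactions per second on
    average, with short bursts up to `burst` (the 'fuzz factor')."""

    def __init__(self, rate=0.1, burst=20):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.time()

    def allow(self):
        now = time.time()
        # refill tokens in proportion to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiters = {}   # one limiter per connected peer

def on_transaction(peer_id, tx, relay, drop_peer):
    limiter = limiters.setdefault(peer_id, PeerTxLimiter())
    if limiter.allow():
        relay(tx)            # normal processing
    else:
        drop_peer(peer_id)   # peer exceeded its advertised rate

In the scheme above, the accepted rate would be advertised by each peer during the handshake; here it is just a constructor default.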

Red
Full Member
August 11, 2010, 04:17:11 PM
 #3

Knightmb said increasing connections and bandwidth had little effect on khash speed.

Game over. Try again?
Gavin Andresen
Legendary
Chief Scientist
August 11, 2010, 04:40:07 PM
 #4

Quote
+ require some proof-of-work as part of the client-to-client connection process (helps prevent 'Sybil' attacks).

Isn't that a brilliant idea? Like hashcash?

You would be required to hash the string of the transaction, with a proof of work that would, say, take 5 seconds to calculate on a modern PC. Checking the PoW, just like in Bitcoin, would be easy and very quick for the receiving machines, but would stop a flood attack of random data unless the attacker has limitless CPU power.

I was actually thinking of a minute or three of proof-of-work on initial connection, not when submitting a transaction, but requiring some proof-of-work for every transaction submitted into the network IS a very interesting idea!  Should be straightforward to implement, too (add a nonce and either a full or partial hash to the transaction)...
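A rough sketch of that "nonce plus a full or partial hash" variant, hashcash style; purely illustrative, since it ignores the real transaction serialization and the difficulty constant is made up:

Code:
import hashlib

POW_BITS = 20                     # ~1 million hashes on average to solve
TARGET = 2 ** (256 - POW_BITS)

def solve_tx_pow(tx_bytes):
    """Sender: grind a nonce so that SHA-256(tx || nonce) falls below the target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(tx_bytes + nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(digest, "big") < TARGET:
            return nonce
        nonce += 1

def check_tx_pow(tx_bytes, nonce):
    """Receiver: a single hash verifies the work, so checking stays cheap."""
    digest = hashlib.sha256(tx_bytes + nonce.to_bytes(8, "little")).digest()
    return int.from_bytes(digest, "big") < TARGET

The nonce would travel with the transaction, so relaying nodes only ever pay the cheap verification cost.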

lachesis
Full Member
August 11, 2010, 06:07:12 PM
 #5

Quote
+ require some proof-of-work as part of the client-to-client connection process (helps prevent 'Sybil' attacks).

Isn't that a brilliant idea? Like hashcash?

You would be required to hash the string of the transaction, with a proof of work that would, say, take 5 seconds to calculate on a modern PC. Checking the PoW, just like in Bitcoin, would be easy and very quick for the receiving machines, but would stop a flood attack of random data unless the attacker has limitless CPU power.

I was actually thinking of a minute or three of proof-of-work on initial connection, not when submitting a transaction, but requiring some proof-of-work for every transaction submitted into the network IS a very interesting idea!  Should be straightforward to implement, too (add a nonce and either a full or partial hash to the transaction)...

Unfortunately, as simple to implement as it is, in order to be effective it would have to be a breaking change.

Older clients won't send the proof of work because they don't know to. So an attacker could just claim that their client version was, say, 308 - too old for the protection. Your node would then have to choose between dropping support for older clients and being secure against this sort of attack, or continuing to support older clients and remaining vulnerable to it.

I'm not saying we shouldn't do it, but I think we ought to get a list of important breaking changes together before we make one. Users really don't like being told that they have to upgrade every other day.

Also, I don't know how the message relay protocol works. I guess each node could just relay that Tx with the same nonce and hash that the first one gave it. There's no danger of "reusing" a hash since a duplicate transaction is by nature invalid (although it could still harass the nodes).

Some detection and blacklisting code would be useful too. If an IP sends you a certain number of invalid transactions within a certain time, disconnect and refuse to accept connections from them for a certain period of time. If they do it again upon reconnecting, block them for longer... etc.
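That detection-and-blacklisting idea could be as simple as counting invalid transactions per IP and doubling the ban time on each repeat offence; a sketch with made-up thresholds:

Code:
import time
from collections import defaultdict

INVALID_LIMIT = 10      # invalid transactions tolerated per window
WINDOW = 60.0           # seconds
BASE_BAN = 600.0        # first ban lasts 10 minutes

recent_invalid = defaultdict(list)   # ip -> timestamps of recent invalid txs
bans = {}                            # ip -> (banned_until, next_ban_duration)

def is_banned(ip, now=None):
    now = now or time.time()
    banned_until, _ = bans.get(ip, (0.0, BASE_BAN))
    return now < banned_until

def report_invalid_tx(ip, now=None):
    now = now or time.time()
    stamps = [t for t in recent_invalid[ip] if now - t < WINDOW]
    stamps.append(now)
    recent_invalid[ip] = stamps
    if len(stamps) >= INVALID_LIMIT:
        _, duration = bans.get(ip, (0.0, BASE_BAN))
        bans[ip] = (now + duration, duration * 2)   # escalate on each offence
        recent_invalid[ip] = []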

Red
Full Member
August 11, 2010, 06:46:25 PM
 #6

Quote
+ have clients tell each other how many transactions per unit of time they're willing to accept.  If a client sends you more (within some fuzz factor), drop it.  Compile in a default based on the estimated number of transactions for a typical user and an estimate of the number of current users.

+ require some proof-of-work as part of the client-to-client connection process (helps prevent 'Sybil' attacks).

I agree that eventually the latter will have to be done. It's for the reasons you pointed out that my DHT solution has flaws. Curiously it's all a side effect of not being able to implement the former constraint.

If you allow validating nodes to arbitrarily ignore transactions, you risk breaking the key requirement that all validating nodes receive and record all transactions. The current presumption is that all validators try to receive and record all transactions. If a transaction is non-uniformly delayed and missed by the node that completes the block, it is presumed that, statistically, the transaction will be recorded in a subsequent block. However, that requires continually rebroadcasting the transaction to ensure it gets through.

Let's say throughput was right and there was an advantage to a node saying, "I'm only willing to take 5 transactions per 10-block period." In that case it still generates blocks that can't be rejected by others, but the backlog of unrecorded transactions grows with each minimal block. This causes additional retransmissions, exacerbating the bandwidth problem.

In effect you rely on unrestricted nodes to compensate for a problem caused by restricted nodes. So if the restricted nodes are causing problems and doing less validation and recording work than other nodes, why should they be rewarded equally for generating blocks? That seems counterproductive.

It would be better to say, "record all transactions or you can't be a validator!" Fewer validators means less bandwidth usage overall. It also becomes easier to spot abusers.

----ps----

A zero-knowledge proof-of-completeness would be for competing validators to reject a proof-of-work block if it didn't contain 99% of the known outstanding transactions.
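That check could be approximated on the receiving side by comparing a new block against the transactions the node itself already knows are pending; a sketch with an illustrative 99% threshold (note this simple version requires the checker to know the outstanding transactions, so it is not actually zero-knowledge):

Code:
REQUIRED_COVERAGE = 0.99

def block_is_complete(block_tx_ids, pending_tx_ids):
    """Reject a block that omits too many transactions this node has seen pending."""
    pending = set(pending_tx_ids)
    if not pending:
        return True          # nothing outstanding, nothing to check
    included = pending & set(block_tx_ids)
    return len(included) / len(pending) >= REQUIRED_COVERAGE

Different nodes see different pending sets, though, so nodes applying this rule could disagree about which blocks to accept; that is the hard part such a proposal would have to solve.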
 
satoshi
Founder
Sr. Member
August 11, 2010, 10:40:25 PM
 #7

It doesn't have to be such a breaking change.  New nodes could accept old transactions for a long time until most nodes have already upgraded before starting to refuse transactions without PoW.  Or, they could always accept old transactions, but only a limited number per time period.

I've thought about PoW on transactions many times, but usually I end up thinking a 0.01 transaction fee is essentially similar and better.  0.01 is basically a proof of work, but not wasted.  But if the problem is validating loads of transactions, then PoW could be checked faster.

A more general umbrella partial solution would be to implement the idea where an unlikely dropoff in blocks received is detected.  Then an attacker would still need a substantial portion of the network's power to benefit from a DoS attack.
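One way to read that: block arrivals are roughly Poisson with a 10-minute mean, so the probability of a long silence shrinks exponentially, and a node could raise an alarm once the gap becomes too improbable. A sketch of the statistics, with an illustrative threshold:

Code:
import math
import time

MEAN_BLOCK_INTERVAL = 600.0     # seconds; the 10-minute target
ALARM_PROBABILITY = 1e-4        # how unlikely a silence must be before we worry

def gap_probability(seconds_since_last_block):
    """Chance of seeing no blocks for this long if the network is healthy."""
    return math.exp(-seconds_since_last_block / MEAN_BLOCK_INTERVAL)

def likely_being_starved(last_block_time, now=None):
    gap = (now or time.time()) - last_block_time
    return gap_probability(gap) < ALARM_PROBABILITY

# Example: about 92 minutes of silence gives exp(-9.2) ~= 1e-4, tripping the alarm.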

Quote
Bitcoin's p2p network is subject to various kinds of denial of service attacks.

There, I said it.
+1

Any demonstration tests at this point would only show what we already know, and divert dev time from strengthening the system to operational fire fighting.
knightmb
Sr. Member
August 12, 2010, 07:47:01 AM
 #8

Quote
Just to be able to ask "What if ...?" and have all ideas collected in one place.

For example.

It seems that a generating node does not need to receive all those transactions at all.
The only data it needs is the previous block hash.
Right?

Next.
It is possible to connect to almost every publicly accessible node, right?
We can collect their addresses and establish connections to almost all of them.
And send them all the data we want, like fake (or not so fake) transactions in huge volumes.
What if it is possible to throttle their generating capability by forcing them to receive and verify
very large amounts of (possibly invalid) transactions (or other trash)?
Nope, no, and not yet. I've tried that myself; the clients just ignore it all.
Quote
If that is true, then we can lower the difficulty, right?
Nope
Quote
Just do this for a long period of time.
When it drops to a value acceptable for our supercomputer (botnet),
we connect it to the network, but not directly:
we connect it via a special node that forwards messages in a special way, filtering out the trash data we are still flooding.
So the supercomputer will receive the blocks and participate in generation, while the others are flooded and get
only a small portion of the generated BTC.
Nope
Quote
Then, if we are not interested in the generated BTC, we may start generating a blockchain fork.
Immediately after the difficulty drops, we start to generate an alternative version of the blockchain in an isolated environment.
Since the difficulty does not change immediately, we can try to outperform the rest of the network while they are chewing on our
trash data. Soon enough we present everybody with the longest chain, and then the difficulty rises back.
By doing this it is possible to wipe out our previously spent transactions, if they were made after the blockchain fork.
So, is it possible that we recover them and get back unspent transactions? And spend them again?
How will previous transactions be incorporated into the new blockchain if they were "respent" in that manner?

And then it can be repeated.
If I'm wrong, just say: "you are wrong".
But you may also give me a hint why.
A lot of us have already attempted everything you've listed here. That's why we have a lot of security updates for releases  Wink

About the only thing that will make any of that work is having more CPU power than the entire swarm at the current difficulty. Reverse time chains, slow DoS, fast DoS, swarm manipulation, wormholes, and time travel have all been tried so far. But if you come up with something unique, we'll be glad to try it.

throughput (OP)
Full Member
August 12, 2010, 09:04:25 AM
 #9


Quote
Nope, no, and not yet. I've tried that myself; the clients just ignore it all.
Is there no chance of doing any better? We only need to slow down chain growth; the individual khash/s can stay the same.
Quote
A lot of us have already attempted everything you've listed here. That's why we have a lot of security updates for releases  Wink
Will you, the "lot of us", share your experience, especially the results and conclusions?
A good analysis with an experiment could be published as a scientific research paper; are you interested?

Quote
About the only thing that will make any of that work is having more CPU power than the entire swarm at the current difficulty. Reverse time chains, slow DoS, fast DoS, swarm manipulation, wormholes, and time travel have all been tried so far. But if you come up with something unique, we'll be glad to try it.

As soon as the discussion of design weaknesses (as opposed to implementation vulnerabilities) gets its own section on some
public forum and becomes more concentrated and accessible to the public, I'll be happy to present other unique
ideas. I just don't want to hear again: "Nope, that's not possible, it was already discussed, read the forum."
It is boring, believe me. The forum is bloated with repeated ideas. Let's collect them.
I want to see the real analysis that was already performed, to confirm the results and to avoid duplicating the effort.
I saw an article on the wiki that summarizes frequently proposed weaknesses that are not actually weaknesses, but that is not an analysis and not a discussion; there are just statements, and it is not very convincing.
And it does not mention a lot of other obvious (non-)weaknesses that are discussed here.

For example, I propose that the cost of operating a Bitcoin node is, by design, not affordable to an average Joe with a PC.
Transaction volume (especially the full history) will make it impractical to operate one on modest PC hardware.
Not right now. Wait 5 years, or 15, and you will see that Bitcoin requires server hardware just to operate.
There should be a way to estimate the disk space required to store a Bitcoin node's data and the time required to process
all that stored data (to verify every incoming transaction).
And my estimates say that the global transaction rate will be limited by the system.
Let's document why I am not right here, so nobody needs to worry and everybody can safely buy into Bitcoin. OK?
NewLibertyStandard
Sr. Member
August 12, 2010, 09:25:07 AM
 #10

Quote
For example, I propose that the cost of operating a Bitcoin node is, by design, not affordable to an average Joe with a PC.
Generating bitcoins will always be affordable to the average Joe in the same way that SETI@home is always affordable to the average Joe. It may not always be profitable, but it will always be affordable. Right now it actually is profitable: ฿50 can easily go for $3.50, or more if you're patient. That is more than it costs the average Joe in electricity on his average computer, which he already leaves on all the time, and in excess Internet bandwidth, which he never fully utilizes.

lfm
Full Member
August 12, 2010, 10:26:41 AM
 #11


Quote
For example, I propose that the cost of operating a Bitcoin node is, by design, not affordable to an average Joe with a PC.
Transaction volume (especially the full history) will make it impractical to operate one on modest PC hardware.
Not right now. Wait 5 years, or 15, and you will see that Bitcoin requires server hardware just to operate.
There should be a way to estimate the disk space required to store a Bitcoin node's data and the time required to process
all that stored data (to verify every incoming transaction).
And my estimates say that the global transaction rate will be limited by the system.
Let's document why I am not right here, so nobody needs to worry and everybody can safely buy into Bitcoin. OK?

It's pretty hard to say exactly what "the average Joe" will have in terms of processing power, storage space, and bandwidth in 5 or 15 years, but I think most estimates would easily cover what Bitcoin would take. It's not zero, sure, but it's not terrible either.
throughput (OP)
Full Member
August 12, 2010, 11:39:33 AM
 #12

Quote
For example, I propose that the cost of operating a Bitcoin node is, by design, not affordable to an average Joe with a PC.
Generating bitcoins will always be affordable to the average Joe in the same way that SETI@home is always affordable to the average Joe. It may not always be profitable, but it will always be affordable. Right now it actually is profitable: ฿50 can easily go for $3.50, or more if you're patient. That is more than it costs the average Joe in electricity on his average computer, which he already leaves on all the time, and in excess Internet bandwidth, which he never fully utilizes.
No, no. Forget about generating; it will stop soon enough. There will never be more than 21M BTC.
Tracking transaction inputs to outputs will require access to the full transaction store.
Say there will be 1 million transactions per day.
365 days = 365 million transactions.

Yes, there is some "compression" or "compaction". What are the estimates of its effectiveness in the average case and in the worst case? I don't know the numbers, please correct me.

10 years = 3,650 million records that everyone has to scan for every transaction they receive (1 million per day?).
If it is 10 million per day, then 36,500 million records to store, times 10 million accesses per day.
Complexity grows faster than linearly with the number of transactions, noticed that? Smiley
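A back-of-envelope version of that arithmetic; every input here (transactions per day, bytes per transaction, the "scan everything" access pattern) is an assumption chosen only to show how the totals scale:

Code:
def estimate(tx_per_day, years, bytes_per_tx=250):
    total_tx = tx_per_day * 365 * years
    storage_gb = total_tx * bytes_per_tx / 1e9
    # naive worst case: every incoming transaction scans the whole stored history
    naive_lookups_per_day = tx_per_day * total_tx
    return total_tx, storage_gb, naive_lookups_per_day

for rate in (1_000_000, 10_000_000):
    total, gb, lookups = estimate(rate, years=10)
    print(f"{rate:,} tx/day for 10 years: {total:,} tx, ~{gb:,.0f} GB, "
          f"{lookups:.1e} naive lookups/day")

With an index keyed by transaction id the per-transaction lookup cost is roughly constant rather than a full scan, so the quadratic figure above is only the naive worst case; the storage itself still grows linearly with the transaction count.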

The more deflation occurs, the more separate transactions we will have, right? The 0.01 limit will disappear.
Yes, CPU power and storage space grew exponentially in the past and we are used to that.
There can be no limit on capacity, sure, but we are not limited by capacity; what matters is very fast
access times for storage, not huge capacity.
They say we have almost touched the limit for CPU speed; now it is the number of cores that is doubling.

Another interesting aspect is that the byte size of a block (and so the number of transactions in it) is limited,
and the speed of generation is limited too, by the dynamic difficulty.
There is an upper limit on the number of transactions per day, imposed by the block size limitation.
Let's publish the numbers?
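For what it's worth, here is a rough version of those numbers, assuming a block size limit of 1,000,000 bytes and an average transaction size of about 250 bytes (both figures are assumptions made for the sake of the estimate):

Code:
MAX_BLOCK_BYTES = 1_000_000      # assumed block size limit
AVG_TX_BYTES = 250               # assumed average transaction size
BLOCKS_PER_DAY = 24 * 6          # one block per ~10 minutes on average

tx_per_block = MAX_BLOCK_BYTES // AVG_TX_BYTES      # ~4,000
tx_per_day = tx_per_block * BLOCKS_PER_DAY          # ~576,000
tx_per_second = tx_per_day / 86_400                 # ~6.7

print(tx_per_block, tx_per_day, round(tx_per_second, 1))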