Author Topic: On The Longest Chain Rule and Programmed Self-Destruction of Crypto Currencies  (Read 17855 times)
otila (OP)
Sr. Member
****
Offline Offline

Activity: 336
Merit: 250


View Profile
May 08, 2014, 11:03:55 AM
 #1

http://cryptome.org/2014/05/bitcoin-suicide.pdf
b!z
Legendary
*
Offline Offline

Activity: 1582
Merit: 1010



View Profile
May 08, 2014, 11:12:57 AM
 #2

Interesting paper. Thank you for sharing.
ncsupanda
Legendary
*
Offline Offline

Activity: 1628
Merit: 1012



View Profile
May 08, 2014, 06:55:25 PM
Last edit: May 08, 2014, 07:25:13 PM by ncsupanda
 #3


Wow. This is actually a very interesting read. I'm not currently finished with it, but I had to stop and say thank you.

Most people completely disregard Dogecoin as having any clout in the cryptocurrency world, so I find the analysis in those sections fairly intriguing.

EDIT: Seems I was a bit misinformed, based on the post below. Re-reading....

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
May 08, 2014, 07:22:43 PM
Last edit: June 25, 2014, 09:16:57 PM by DeathAndTaxes
Merited by LeGaulois (1)
 #4

Utter nonsense.   It is sad that they wrote a paper based on the premise that timestamps can be used to solve the double spend problem (they can't) and then never did any research as to why Bitcoin doesn't rely on timestamps.   I mean it isn't a minor note, it is the core claim of the article.

Quote
It surprising to discover that Satoshi did NOT introduce a transaction
timestamp in bitcoin software. It is NOT known WHY neither the original
creator of bitcoin nor later bitcoin developers did not mandate one. This could
can be seen as an expression of misplaced ideology. Giving an impression
showing that maybe the Longest Chain Rule does solve the problems in an
appropriate way. Unhappily it doesn't.

One would think that, when writing a paper, being surprised would lead you to question and seek information.  Maybe you are surprised because you are misinformed, or misunderstand the conditions.  Generally speaking, just assuming your surprise is due to someone else being wrong, and then not verifying that in any way, is not the start of a good paper.  Satoshi did not include tx timestamps because proving timestamps in a decentralized environment is an incredibly difficult (some would say impossible) task.  In the absence of verifiable timestamps it would simply be wasted bits.  Nodes can optionally record the timestamp of when they learn of the transaction, but that will differ from node to node.

Quote
Currently an approximate timing of transactions is known in the bitcoin
network, it comes from the number of block in which a given transaction is
included: this gives a precision of approx. 10 minutes. Transactions without a
fee could be much older than the block. However all blocks are broadcast on the
network and it is very easy for the bitcoin software to obtain more precise timing
of transactions with a precision of 1 second, maybe better. A number of web sites
such as blockchain.info are already doing this: they publish timestamps for
all bitcoin transactions which correspond to the earliest moment at which these
transactions have been seen.

Seen by whom?  Oh, that's right: one individual node, with no guarantee or assurance that ANY OTHER NODE has even seen that transaction, much less at that time.

Quote
Then the solution is quite simple:

1. In case of double spending if the second event is older than say 20 seconds
after the first transaction, the first transaction will simply be considered as
valid and the second as invalid. This based on the earliest timestamp in
existence which proves that one transaction was in existance earlier.
This seems reasonable knowing that the median time until a node receives
a block is 6.5 seconds cf. [8, 9].

How are you proving timestamps?  A timestamp is simply a number.  I can give you a timestamp from the birth of Jesus; does that mean I have proven I made a bitcoin transaction thousands of years ago?

Quote
The implementation of such a mechanism is not obvious and will be discussed separately below. However it seems that it could be left to the free
market: several mechanisms could function simultaneously. For example one can immediately use timestamps published by blockchain.info and simultaneously use timestamps from other sources.

So the decentralized currency is based on the timestamps as decided by some centralized "super peers".  If I bribe the timestamp servers to say my tx is older, then I can double spend without even using hashing power.

Quote
For solutions which would prevent various bitcoin web servers from manipulating these time stamps we will need to propose additional mechanisms,
such as secure bitstamps or additional distributed consensus mechanisms.
We will develop these questions in another paper.

Gee, really?  You are saying that decentralized provable timestamps are an incredibly difficult problem to solve.  Maybe someone could use an append-only linked list of timestamps, where each timestamp is proven by requiring the creator of an individual timestamp element to use a non-trivial amount of resources.  Thus once an individual element of the list has a sufficient number of elements after it, users would have some assurance that modifying the element would be very difficult.  As part of the process to make an element of the list maybe we could put a hash of the txn in the element, and thus substituting the txn becomes as difficult as changing the list.  To avoid a lot of work for every txn you could have each element store a hash which represents an entire set of txns.  Each user would always extend the longest valid list to force a consensus between multiple valid but incompatible lists.  We could probably call these list elements "blocks" and the linked list a "blockchain".
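To make the sarcasm concrete, here is a minimal sketch of the structure being described, with illustrative field names (a simplification, not Bitcoin's actual block format):

Code:
// Minimal sketch of the "list elements" described above (illustrative names,
// not Bitcoin's real serialization).
#include <cstdint>
#include <string>
#include <vector>

struct Block {
    std::string prevHash;    // link to the previous element of the list
    std::string merkleRoot;  // commits to a whole set of transactions at once
    uint32_t    timestamp;   // the timestamp whose cost-to-forge is the whole point
    uint32_t    nonce;       // varied until the proof-of-work condition is met
};

// Everyone extends the longest valid list they know of, which is how multiple
// valid but incompatible lists converge on one.
using Blockchain = std::vector<Block>;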

Quote
In case of double spending if both events come within at most 20 seconds
of each other, miners should NOT include any of these transactions in block
they mine. Some miners can nevertheless accept a transaction because they
have only received one of the two transactions, or because they are trying to
cheat. Then their block could simply be invalidated because they have not
been careful enough about collecting all the transactions which have been
around. For honest miners this would occur with small probability.

So it becomes trivial to fork the network.

I create two transactions (with magical 100% provable decentralized timestamps, which don't exist, are the lynchpin of the paper, and yet are not discussed in the paper) within 20 seconds of each other and broadcast one; when it is included in a block I broadcast the other one, and the block just mined is invalid.

Quote
Again this decision on whether to include or not a given
transaction could be decentralized.
All this requires some form of timestamping and some security against manipulation of these timestamps to be implemented than in the current software,
either by consensus or secure timestamps.

By magic?  I mean, it is like saying taking a spacecraft to the moon is a flawed strategy when we could just teleport there instead.  Also, teleporting to the moon will require some teleportation capabilities and stuff.

Quote
An alternative to timestamps could be a pure consensus mechanism by which
numerous network nodes would certify that that they have seen one transaction
earlier than another transaction. This can be very easy done: we can re-use shares
which are already computed by miners in vast quantities or select only certain
shares with a sufficient number of zeros.  We could mandate that if transactions
are hashed in a certain order in a Merkle hash tree, it means that this miner
have seen certain transactions earlier

I am seeing a trend: when you say "easy" it actually means "this seems easy because I lack the basic knowledge to see why it isn't".  So a tx being in the merkle tree of a miner means it is first?  What if a miner ignores the first tx and puts the second tx in the merkle tree?  We have a tx in a merkle tree and somehow it magically proves it was first?  However, even if it did, we would still be operating on magical provable decentralized timestamps.

Quote
or another similar mechanism assuming
that the majority of miners are honest
.

Wait a minute.  Let's read this one again slowly: "assuming that the majority of miners are honest", "assuming that the majority of miners are honest".  Really?

I know you can't mean majority as in >50% of the nodes by count, because the entire concept of proof of work (or stake) is predicated on a well-founded assumption that preventing a Sybil attack in a decentralized network is infeasible.  If you could prevent a Sybil attack, well, miners could just vote and there would be no need for work or stake at all.  You wouldn't even need magical provable timestamps, as each node would simply vote based on its own observations.

So by "majority" you must mean more than 50% of the hashrate.  You didn't see where I am going when you wrote this?  In other words even this convoluted, nonsensical solution still fails to prevent double spends when the attacker has 51% of the hashrate.  Wasn't that the "flaw" that the entire paper is based on "fixing"?






cr1776
Legendary
*
Offline Offline

Activity: 4032
Merit: 1301


View Profile
May 08, 2014, 08:02:00 PM
 #5

Utter nonsense.   It is sad that they wrote a paper based on the premise that timestamps can be used to solve the double spend problem (they can't) and then never did any research as to why Bitcoin doesn't rely on timestamps.   I mean it isn't a minor note, it is the core claim of the article.

Quote
It surprising to discover that Satoshi did NOT introduce a transaction
timestamp in bitcoin software. It is NOT known WHY neither the original
creator of bitcoin nor later bitcoin developers did not mandate one. This could
can be seen as an expression of misplaced ideology. Giving an impression
showing that maybe the Longest Chain Rule does solve the problems in an
appropriate way. Unhappily it doesn't.

One would think that, when writing a paper, being surprised would lead you to question and seek information.  Maybe you are surprised because you are misinformed, or misunderstand the conditions.  Generally speaking, just assuming your surprise is due to someone else being wrong, and then not verifying that in any way, is not the start of a good paper.

Saved me having to look at it.

And it is known why - many reasons, which a little research would have discovered.

jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
May 08, 2014, 08:05:26 PM
 #6

Perhaps it is possible to replace proof of work with some other distributed time-based system,
although you would still need the longest chain as a basis for convergence.


benjyz
Full Member
***
Offline Offline

Activity: 140
Merit: 107


View Profile
May 08, 2014, 08:10:06 PM
 #7

Academics have produced nothing but perfect nonsense on the topic of Bitcoin. This is one of the worst.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
May 08, 2014, 08:11:37 PM
 #8

Perhaps it is possible to replace proof of work with some other distributed time-based system,
although you would still need the longest chain as a basis for convergence.

Possibly, but it is a non-trivial problem for which no robust decentralized solution exists.  As I said, it is like saying that instead of taking a rocket to the moon we "could" teleport directly there, although the technology does not exist and it may not ever be possible.  Hopefully you wouldn't write a paper saying nobody knows why NASA opted to use a rocket instead of teleporting to the moon.  Smiley
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
May 08, 2014, 08:27:43 PM
 #9

Perhaps it is possible to replace proof of work with some other distributed time-based system,
although you would still need the longest chain as a basis for convergence.

Possibly, but it is a non-trivial problem for which no robust decentralized solution exists.  As I said, it is like saying that instead of taking a rocket to the moon we "could" teleport directly there, although the technology does not exist and it may not ever be possible.  Hopefully you wouldn't write a paper saying nobody knows why NASA opted to use a rocket instead of teleporting to the moon.  Smiley

Non-trivial, although on the surface it seems deceptively simple.
(We just want a block every 10 minutes!)

Maybe there could be a list of 1000 different timestamp servers
that nodes can check, and by introducing a time element, the hashing
work could be reduced.   

telepatheic
Jr. Member
*
Offline Offline

Activity: 56
Merit: 1


View Profile
May 08, 2014, 10:40:33 PM
Last edit: May 08, 2014, 11:35:49 PM by telepatheic
 #10

Interestingly Satoshi didn't put much thought into the problem of time stamping, although he realised timekeeping was important. The general assumption he made was that all nodes would report the correct time. (This is not the case and has the potential to cause major problems with bitcoin if exploited in tandem with a Sybil attack)

In the code Satoshi wrote:

Quote from: Satoshi
"Never go to sea with two chronometers; take one or three."
Our three chronometers are:
  - System clock
  - Median of other server's clocks
  - NTP servers

note: NTP isn't implemented yet, so until then we just use the median of other nodes clocks to correct ours.

The quote comes from "The Mythical Man Month" for those interested in Satoshi's background.

The code he implemented was:

Code:
nTimeOffset = nMedian;
if ((nMedian > 0 ? nMedian : -nMedian) > 5 * 60)
{
    // Only let other nodes change our clock so far before we
    // go to the NTP servers
    /// todo: Get time from NTP servers, then set a flag
    ///    to make sure it doesn't get changed again
}

Satoshi decided to contradict his quote and use two chronometers!

The idea was that if the offset between your node and the others was more than 5 minutes, something was wrong. This was further relaxed in a subsequent version to a massive 70-minute offset! (Why 70 minutes, I have no idea; the change isn't documented anywhere public as far as I can tell.) Edit: the changes can be found here and here but there is no explanation.

Code:
// Only let other nodes change our time by so much
if (abs64(nMedian) < 70 * 60)
{
      nTimeOffset = nMedian;
}

But still no NTP support.
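For anyone curious how that median actually gets applied, here is a simplified sketch (illustrative, not the real Bitcoin Core code) of the network-adjusted time mechanism the snippets above come from: each peer's reported clock yields an offset, and the median offset is applied only while it stays inside the 70-minute limit.

Code:
// Simplified sketch of "network-adjusted time" (illustrative, not the actual
// Bitcoin Core implementation).
#include <algorithm>
#include <cstdint>
#include <ctime>
#include <vector>

static int64_t nTimeOffset = 0;

void AddTimeSample(std::vector<int64_t>& offsets, int64_t peerReportedTime) {
    // Each connecting peer reports its clock; store the offset from ours.
    offsets.push_back(peerReportedTime - (int64_t)time(nullptr));

    std::vector<int64_t> sorted = offsets;
    std::sort(sorted.begin(), sorted.end());
    int64_t nMedian = sorted[sorted.size() / 2];

    // Same rule as above: only let other nodes change our time by so much.
    if ((nMedian > 0 ? nMedian : -nMedian) < 70 * 60)
        nTimeOffset = nMedian;
}

int64_t GetAdjustedTime() { return (int64_t)time(nullptr) + nTimeOffset; }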
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
May 08, 2014, 10:50:52 PM
 #11

Thanks for the link. Funny how 2011 is ancient history in the bitcoin world.

Wasn't aware that time stamping was used at all, except for difficulty changes.
Seems it is used as a sort of double-checking for blocks, although it seems unnecessary
or at least not critical because of the longest chain rule.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
May 08, 2014, 10:52:53 PM
Last edit: May 08, 2014, 11:06:20 PM by DeathAndTaxes
 #12

Wasn't aware that time stamping was used at all, except for difficulty changes.

It isn't.

Quote
Seems it is used as a sort of double-checking for blocks, although it seems unnecessary or at least not critical because of the longest chain rule.

How is change in difficulty computed?  What would happen to difficulty computation if there was no timestamp checking on blocks (attacker could use any timestamp he wanted)?
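To make those questions concrete: difficulty is recomputed from nothing but the timestamps bounding each 2016-block window, so without timestamp checks a miner could stretch the apparent timespan and drag difficulty down. A rough sketch of the rule (simplified, not the exact Core code):

Code:
// Rough sketch of the retarget rule (simplified; not the exact Bitcoin Core code).
#include <cstdint>

static const int64_t nTargetTimespan = 14 * 24 * 60 * 60; // two weeks, in seconds

// Factor by which the target (inverse difficulty) gets scaled for the next
// 2016 blocks. A result > 1 means mining gets easier.
double RetargetFactor(int64_t firstBlockTime, int64_t lastBlockTime) {
    int64_t nActualTimespan = lastBlockTime - firstBlockTime;
    // Clamped so a single period can move difficulty by at most 4x either way.
    if (nActualTimespan < nTargetTimespan / 4) nActualTimespan = nTargetTimespan / 4;
    if (nActualTimespan > nTargetTimespan * 4) nActualTimespan = nTargetTimespan * 4;
    return (double)nActualTimespan / (double)nTargetTimespan;
}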
chriswilmer
Legendary
*
Offline Offline

Activity: 1008
Merit: 1000


View Profile WWW
May 08, 2014, 11:04:54 PM
 #13

I know one shouldn't judge a book by its cover... but just glancing at the abstract one can tell that this isn't serious scholarly work.
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
May 08, 2014, 11:10:39 PM
 #14

Wasn't aware that time stamping was used at all, except for difficulty changes.

It isn't.

Quote
Seems it is used as a sort of double-checking for blocks, although it seems unnecessary or at least not critical because of the longest chain rule.

How is change in difficulty computed?  What would happen to difficulty computation if there was no timestamp checking on blocks (attacker could use any timestamp he wanted)?


No, I agree we need time stamps for the difficulty change.
However, the article isn't talking about that. 

Quote
The network time is used to validate new blocks.

Basically it is talking about double spends, etc. 

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
May 08, 2014, 11:14:51 PM
 #15

Quote
The network time is used to validate new blocks.
Basically it is talking about double spends, etc.  

The article isn't saying block timestamps are used to prevent double spends; it is showing how block timestamps can be manipulated, exploiting the fact that nodes validate them, to isolate nodes.  The timestamps are only validated to ensure difficulty can't be gamed.  No other function of the bitcoin protocol relies upon them.  Preventing difficulty from being gamed requires invalidating blocks outside of an acceptable range.  The article is pointing out that the validation of timestamps can be an attack vector for double spends.  The attack vector wouldn't exist if the network didn't validate block timestamps.  Of course, if there were no validation of timestamps then there would be no secure, provable way of setting difficulty.  It is a catch-22.  The article recommends reducing the window for validating blocks to make this attack more difficult (an attack which has been successfully executed).

Satoshi understood that timestamps are very difficult to authenticate and as such limited them to areas where there is no solution that doesn't involve timestamps.
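For reference, the "acceptable range" boils down to two checks on a block's timestamp: it must be greater than the median of the last 11 block times, and no more than two hours ahead of the node's network-adjusted time. A sketch (simplified, not the exact Core code):

Code:
// Sketch of the two consensus checks on a block timestamp (simplified).
#include <algorithm>
#include <cstdint>
#include <vector>

bool CheckBlockTime(int64_t nBlockTime,
                    std::vector<int64_t> last11BlockTimes, // times of the previous 11 blocks
                    int64_t nAdjustedTime)                 // this node's network-adjusted time
{
    std::sort(last11BlockTimes.begin(), last11BlockTimes.end());
    int64_t nMedianTimePast = last11BlockTimes[last11BlockTimes.size() / 2];
    const int64_t nMaxFutureDrift = 2 * 60 * 60;

    return nBlockTime > nMedianTimePast &&
           nBlockTime <= nAdjustedTime + nMaxFutureDrift;
}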
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
May 08, 2014, 11:21:07 PM
 #16

Sorry but I am missing something here.

Why is there a possible attack vector if the protocol
is using the time stamps for anything other than
difficulty adjustments? (Using them to validate blocks)

 Huh

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
May 08, 2014, 11:29:47 PM
 #17

Sorry but I am missing something here.

Why is there a possible attack vector if the protocol
is using the time stamps for anything other than
difficulty adjustments? (Using them to validate blocks)

 Huh


I assume you mean isn't?

The article explains the attack in pretty simple language.  The attacker isolates the node and feeds it incorrect timestamps.  This causes the computed network time on the victim's node to be SLOWER than the rest of the network's.  The attacker then creates a block with an incorrect timestamp that is ahead of the correct time.  For most of the network, although the timestamp is wrong, it is still within the validation window and the block is accepted.  The victim's clock, however, has been slowed down enough that the block is outside the validation window and is rejected.

The victim has now been forked off the main network and can be fed false information and double spent at will.

The timestamp validation of blocks is only used to prevent manipulation of difficulty, but the fact that nodes validate timestamps is exploited to isolate and attack the node.  In reality this attack would be rather difficult to pull off, it is expensive as it requires mining full-strength blocks which will be orphaned by the main network, and there are some pretty cheap and easy countermeasures.


Essentially the target's proper checking of timestamps (to prevent manipulation of difficulty) is used against the target by careful manipulation of timestamps.  I don't believe this is a very effective attack in the real world for a couple of reasons.
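A rough numeric walk-through of that scenario, using the 70-minute peer-adjustment cap quoted earlier in the thread and a two-hour future-drift limit on block timestamps (all numbers illustrative):

Code:
// Illustrative walk-through of the isolation attack described above.
#include <cstdint>
#include <cstdio>

int main() {
    const int64_t skew  = 70 * 60;     // most that peers can shift a node's adjusted time
    const int64_t drift = 2 * 60 * 60; // most a block timestamp may run ahead of adjusted time

    int64_t trueTime   = 1000000000;          // "real" wall-clock time
    int64_t victimTime = trueTime - skew;     // Sybil peers slowed the victim's adjusted time
    int64_t othersTime = trueTime + skew;     // and sped up (or left alone) everyone else's
    int64_t blockTime  = trueTime + 150 * 60; // attacker stamps the block ~2.5 hours ahead

    printf("rest of network accepts: %s\n", blockTime <= othersTime + drift ? "yes" : "no"); // yes
    printf("victim accepts:          %s\n", blockTime <= victimTime + drift ? "yes" : "no"); // no
    return 0;
}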
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
May 08, 2014, 11:40:44 PM
 #18

So they stamp all the blocks which could sort of be exploited for double spends but not really and then the stamp is used for the diff change.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
May 08, 2014, 11:45:17 PM
 #19

So they stamp all the blocks which could sort of be exploited for double spends but not really and then the stamp is used for the diff change.

Correct.  All blocks contain a timestamp, and all block timestamps are validated by all nodes to ensure they are within an acceptable window.  This is done to "keep miners honest".  Without it, miners could use false timestamps to manipulate the 2016-block timespan, lower difficulty, and boost their profits. 

That validation in theory could be exploited to isolate and double spend a node.  The article describes how that attack could be executed.




jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
May 08, 2014, 11:48:50 PM
 #20

Got it. 

I need to be excused now.  My brain is full.

Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
May 09, 2014, 05:26:47 AM
 #21

It's true that we don't know how to implement some of the author's proposed solutions, but he has a pretty good grasp of some very serious problems.

In particular, he has a good point about what happens when block rewards are multiplied by half.

There's an investment (in ASIC mining equipment) constantly seeking its most profitable allocation.  That allocation is an equilibrium in which each option pays identically.   

At the point where there's a block reward halving, one of the allocation options has its return cut in half, and the equilibrium has to find a new balance point.

If you're UNO, and you cut your block reward in half, the total rate of return is hardly affected at all because you represent such a tiny fraction of the total available income.  The allocation of that investment to mining your blockchain, though, gets cut approximately in half, because that's the point at which the return for mining it remains competitive.

If you're BTC, and you cut your block reward in half, the total rate of return is cut by almost half.  Suddenly, every *other* allocation opportunity is worth twice as much of the miner's remaining hash power investment as it was before, because that's the rate at which the return for mining it stays competitive with BTC.

Of course, the latter doesn't account for mining rigs that are no longer profitable to run at all.... 
Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
May 09, 2014, 05:39:57 AM
 #22

The author is wrong about the proposed timestamp solution, because we don't really have a practical distributed-timestamp scheme.  But there may be a simpler one (not requiring a distributed timestamp) that works.  I'll have to think about it, but it's certainly in the best interests of honest miners and honest transaction makers to provide accurate timestamps if it improves security against dishonest ones, so it isn't hopeless.

The author is right about increasing the security of mining by requiring miners to know both the hash of the current block and the hash of the previous block - the hashing operation they need to do is essentially the same, and making sure miners know what block they're building on would make certain classes of attack (diverting pool miners to another coin, using pool miners to build an unpublished blockchain, etc.), which are currently easy to mount undetectably, leave incontrovertible evidence.  That is a good idea and we should do it.

Foxpup
Legendary
*
Offline Offline

Activity: 4354
Merit: 3044


Vile Vixen and Miss Bitcointalk 2021-2023


View Profile
May 09, 2014, 08:00:31 AM
 #23

The name sounds strangely familiar. Isn't this the same guy who came up with the "selfish mining" nonsense a while ago?

gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4172
Merit: 8419



View Profile WWW
May 09, 2014, 08:26:45 AM
Last edit: May 09, 2014, 05:31:47 PM by gmaxwell
 #24

The name sounds strangely familiar. Isn't this the same guy who came up with the "selfish mining" nonsense a while ago?
No, he had another paper where he 'invented' a number of long-used mining optimizations like eliminating the final three rounds, mining from a midstate, and merging adder carries, and then spent the last half ranting about how the geometric subsidy decline doomed Bitcoin to failure with strange all-caps bold words mixed in, and saying that we must adopt his proposal to adjust the subsidy every 600 blocks, while simultaneously ignoring that we made it through one subsidy halving without incident.  On the basis of the prior paper and some comments from people whose opinions I trust who read this one, I've pretty much given this one a pass myself.

His work on the mining optimization stuff, though— I recall it being largely redundant with work already deployed out there— was not unintelligent. The conclusions he was drawing—  well, I think everyone who wanders into Bitcoin experiences at least 20 instances of "Ah ha! it cannot work, I've found the flaw!", some of us just go through it a little more privately than others. Smiley

Rather than focusing on what the paper has wrong, it might be more useful to ask what it got right or what interesting questions it poses. Even a completely confused paper can sometimes inspire some interesting questions or approaches. I understand that it makes some pretty concrete, fairly near-term predictions about dogecoin which will be falsifiable — and hey, making a falsifiable prediction would put it ahead of a lot of things.

You have to keep in mind that publications (esp. pre-prints) are just another communications channel for people. By themselves they don't automatically mean the work is of cosmic importance or even that it's intended to be. So if it helps you extract something useful from it you can think of it as an expanded forum post. One virtue of that form is that forum posts are often so incomplete that it's hard to even tell what their idea is from the post.  In this case, where the author seems to have some misunderstandings about the non-existence of globally consistent time in a decentralized system, and he failed to actually describe his solution— well, at least you could tell what was missing. Smiley  Don't let the bombastic language get to you, it's a cultural norm in some places to make every thought sound like some major revelation. Annoying at times, but you do yourself a disservice if you can't learn to ignore it and sieve out the good ideas that might be hiding behind the noise.


Foxpup
Legendary
*
Offline Offline

Activity: 4354
Merit: 3044


Vile Vixen and Miss Bitcointalk 2021-2023


View Profile
May 09, 2014, 08:37:10 AM
 #25

The name sounds strangely familiar. Isn't this the same guy who came up with the "selfish mining" nonsense a while ago?
No, he had another paper where he 'invented' a number of long-used mining optimizations like eliminating the final three rounds, mining from a midstate, and merging adder carries, and then spent the last half ranting about how the geometric subsidy decline doomed Bitcoin to failure with strange all-caps bold words mixed in, and saying that we must adopt his proposal to adjust the subsidy every 600 blocks, while simultaneously ignoring that we made it through one subsidy halving without incident.
Oh, right. There are so many terrible papers floating around it's getting hard to keep track of them all. Undecided

TooDumbForBitcoin
Legendary
*
Offline Offline

Activity: 1638
Merit: 1001



View Profile
May 09, 2014, 07:46:55 PM
 #26

Quote
I mean it like saying taking a spacecraft to the moon is a flawed strategy when we could just teleport there instead.  Also teleporting to the moon will require some teleportation capabilities and stuff.

Steve Martin once lectured on "How to Be a Millionaire and Not Pay Taxes":

"...First, get a million dollars. ..."




ByteCoin
Sr. Member
****
expert
Offline Offline

Activity: 416
Merit: 277


View Profile
May 10, 2014, 01:04:51 AM
 #27

I know Nicolas Courtois and had I not seen the paper linked from his web page I would have assumed that someone had just added his name to this rubbish in order to give it some gravitas.

He should have given his old supervisor a red pen (with plenty of ink left ) and asked him to review the paper first. There are portions which are OK but he's certainly gone a long way down in my estimation.

Bytecoin
kadter
Newbie
*
Offline Offline

Activity: 11
Merit: 0


View Profile
May 10, 2014, 01:46:16 AM
 #28

The trustless trust is a logical fallacy Cheesy
odolvlobo
Legendary
*
Offline Offline

Activity: 4312
Merit: 3214



View Profile
May 10, 2014, 03:58:45 AM
 #29

Well, I read nearly all of the 46 pages (as much as I could) and I can summarize it in a single sentence:


If the profitability of mining BTC falls (due to a decreasing block reward) low enough such that a competitive currency becomes more profitable to mine, the hash rate will plummet and present an opportunity for a 51% attack.


In my view, this scenario is possible (any scenario is possible), but it is extremely unlikely because of the economics. Furthermore, if it does happen, then the other currency is probably preferred over bitcoin anyway and a switch to the other currency would be a positive result.

bitfreak!
Legendary
*
Offline Offline

Activity: 1536
Merit: 1000


electronic [r]evolution


View Profile WWW
May 10, 2014, 04:28:35 PM
Last edit: May 10, 2014, 06:45:41 PM by bitfreak!
 #30

I received this paper earlier today in a Google Scholar Alert. I couldn't spend more than 5 minutes reading it... so many misconceptions and holes in their knowledge of Bitcoin. It's got some interesting graphs though.

Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
May 10, 2014, 07:04:57 PM
 #31

Look, he made one mistake; we don't have a practical distributed timestamp protocol, so his proposed solution to that one problem doesn't work.

But he's right about there being a security improvement if miners have to know what they're mining on. 

He's also right about the effects of block reward halving on hash power allocation. 

Those are real problems with viable solutions, and we can fix them, so why is everybody focusing on the one thing he got wrong?

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
May 10, 2014, 07:12:11 PM
Last edit: May 10, 2014, 07:40:17 PM by DeathAndTaxes
 #32

He's also right about the effects of block reward halving on hash power allocation.  
No he isn't, or at least his conclusions on what "will" happen are just speculation.

The author claims that when the block reward is halved the hashrate will be halved.  That is probably not true unless the operating margin on EXISTING hardware is <0 (or another coin is more profitable) after the halving.  For a user with a 0.5 J/GH rig and $0.10 per kWh electricity, the zero-operating-margin point is ~1 PH/s per $1 of USD exchange rate.  So at the current exchange rate the network would need to be >450 PH/s for that miner to have a negative operating margin.  The exchange rate would hopefully be higher by 2016.  At the ATH it would require 1,200 PH/s.
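To make the arithmetic explicit, here is an electricity-only sketch of that break-even point; the subsidy, price, rig efficiency, and power cost are assumptions to plug in, and because it ignores overheads (PSU losses, cooling, pool fees, hosting) it lands above a rule-of-thumb figure like ~1 PH/s per $1, which budgets for more than raw electricity.

Code:
// Electricity-only break-even sketch (constants are assumptions, not an exact model).
#include <cstdio>

int main() {
    const double subsidyBtc   = 12.5;   // block subsidy after the 2016 halving
    const double blocksPerDay = 144.0;  // ~one block per 10 minutes
    const double priceUsd     = 450.0;  // assumed BTC/USD exchange rate
    const double joulesPerGH  = 0.5;    // assumed rig efficiency
    const double usdPerKWh    = 0.10;   // assumed electricity price

    // Running 1 GH/s for a day at 0.5 J/GH burns 0.5 W * 24 h = 12 Wh.
    double costPerGHDay = joulesPerGH * 24.0 / 1000.0 * usdPerKWh;

    // 1 GH/s earns its pro-rata share of the daily issuance, so revenue equals
    // cost when the network hashrate (in GH/s) reaches:
    double breakEvenGH = subsidyBtc * blocksPerDay * priceUsd / costPerGHDay;

    printf("electricity-only break-even network hashrate: %.0f PH/s\n", breakEvenGH / 1.0e6);
    return 0;
}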

The block halving will probably dry up sales of new mining hardware (actually they will dry up months prior, in anticipation of the drop), but a miner who already owns SHA-256 hashing power essentially has three options

a) continue to mine bitcoin for half the revenue
b) sell the hardware to a miner with lower costs (namely cheaper/free electricity and cool climate)
c) mine an altcoin.

The author jumps right to c.  To date (other than brief periods of pump & dump) no sha256 coin has been more profitable than bitcoin to mine.  The author predicts the hashrate will fall by 50% which would assume that no miners opt for "a" or "b".  Still even if that is true there is no certainty that a halving in hashrate will make Bitcoin vulnerable to a 51% attack.  The hashrate today is 70 PH/s.  For the halving to make the operating margin negative would require a hashrate of 450 PH/s (current exchange rate, $0.10 per kWh, 0.5 J/GH).

So say in 2016 that does happen.  The hashrate drops from 450 PH/s to 225 PH/s which is more than 3x the current hashrate.  The Bitcoin network still has ~50% of the hashrate of known miners.  The idea that it is suddenly trivial to perform a 51% attack is an unsupported conclusion.


Quote
But he's right about there being a security improvement if miners have to know what they're mining on.
Which has nothing to do with the Bitcoin protocol.  The Bitcoin protocol does include information on the prior block in the blockheader.  Some pools use a protocol which omits that; however, it is possible for miners in pools to be in control of the blockheader.  Pool mining is a protocol that is built on top of the bitcoin protocol; it isn't part of the bitcoin protocol.  It would be like saying that if you made some changes to anti-counterfeiting features on the dollar bill you could reduce credit card fraud.  The credit card network is an independent network built on top of the cash network.

Quote
Those are real problems with viable solutions, and we can fix them, so why is everybody focusing on the one thing he got wrong?

The bitcoin protocol reward is not going to be changed.  Period.  It would be a breaking fork and you will never achieve a super majority to support any fork.  There are already methods which allow a miner to be in control of the blockheader and miners frankly don't really give a crap.  You can't force them.  It is more a social problem than a technological one.


Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
May 10, 2014, 07:30:31 PM
 #33



He's also right about the effects of block reward halving on hash power allocation.  
No he isn't, or at least his conclusions on what "will" happen are just speculation.

His "speculation" is that at least half of miners are willing to switch to mining a different coin if it's more profitable.  The rest is just a math problem about how to optimize profits.  I don't think that's at all in question.

Seriously, you can set this up as an equation.
Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
May 10, 2014, 07:32:35 PM
 #34


The bitcoin protocol reward is not going to be changed.  Period.  It would be a breaking fork and you will never achieve a super majority to support any fork.  


You're probably right about that; it would destroy the value of the sunk-costs in ASICs for starters, which means the miners would scream bloody murder.

bitfreak!
Legendary
*
Offline Offline

Activity: 1536
Merit: 1000


electronic [r]evolution


View Profile WWW
May 10, 2014, 07:39:31 PM
 #35

I don't think anyone is really going to argue that a smoother shift in the block reward would be less preferable than one which halves instantly after a long period of time, but that's just the way Bitcoin is and there are altcoins designed to remedy that issue (I'm guessing that's the issue being discussed, like I said I didn't read the paper properly).

Which has nothing to do with the Bitcoin protocol.  The Bitcoin protocol does include information on the prior block in the blockheader.  Some pools use a protocol which omits that; however, it is possible for miners in pools to be in control of the blockheader.  Pool mining is a protocol that is built on top of the bitcoin protocol; it isn't part of the bitcoin protocol.
I knew that had to be the case, I thought I was going crazy there for a moment, thanks for clearing that up.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
May 10, 2014, 07:43:41 PM
 #36

His "speculation" is that at least half of miners are willing to switch to mining a different coin if it's more profitable.  The rest is just a math problem about how to optimize profits.  I don't think that's at all in question.

Which is dubious in itself but lets assume the network is 450 PH/s the day before the halving and after the halving 50% of miners leave for this non-existent altcoin which is still more profitable than Bitcoin even with the difficulty that comes from 225 PH/s.

Ok so Bitcoin has 225 PH/s worth of miners.
CoinX has 225 PH/s worth of miners.

How exactly is it now trivial to 51% the Bitcoin network?

Quote
Seriously, you can set this up as an equation.
Yeah you can and with any realistic guesstimates you don't reach the conclusion the author reached.

TooDumbForBitcoin
Legendary
*
Offline Offline

Activity: 1638
Merit: 1001



View Profile
May 10, 2014, 07:56:36 PM
 #37

Quote
Which is dubious in itself, but let's assume the network is 450 PH/s the day before the halving and after the halving 50% of miners leave for this non-existent altcoin, which is still more profitable than Bitcoin even with the difficulty that comes from 225 PH/s.

Should be fun to watch.

1.  Reward halves.
2.  Half the hashers depart.
3.  Reward per hash on bitcoin network doesn't change.
4.  No change in profit for the half that stay!  (Increase, really, if you count fees).

I wonder how the 50% who leave will make that decision.



Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
May 10, 2014, 08:23:27 PM
 #38

Okay, you're getting the problem set up wrong.  As long as bitcoin is dominant, it doesn't lose fully half of its hashing power when it halves its reward. 

Let's say that there is some unit of hashing power that pays $10 per hour.

If equilibrium has 90% of the effort on BTC at a given moment and 10% on altcoins, then we can conclude that the value of BTC produced by that hashing, per hour, is worth $9 and the value of altcoins produced by that hashing, per hour, is $1. 

Now if BTC cuts its reward by half, suddenly it's producing only $4.50 worth of value per hour for that amount of hashing power.  The total value being produced by that hash power - the new equilibrium rate - is now $5.50 per hour.

Bitcoin at its halved speed produces that value when it gets 9/11 of that hashing power, and the alts, at the same speed as before, produce their value when they get 2/11 of that hashing power.  This is the new point at which the ratio of hashing power to value produced is equal for bitcoin and the alts.

What happens is that the amount of hashing power devoted to the alts nearly doubles, and the reallocation comes out of the amount devoted to bitcoin. 

When he uses UNO as an example, UNO halving its reward has effectively no impact on the rate of value production, so the equilibrium rate is relatively unaffected.  That makes it very simple: at the same rate, half the value produced buys half the hash power.
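For concreteness, a tiny sketch of that equilibrium with the dollar figures from the example above hard-coded as assumptions: each chain's issuance value per hour is fixed, and hash power settles where the return per unit of hash is equal everywhere.

Code:
// Sketch of the hashpower-allocation equilibrium described above (assumed figures).
#include <cstdio>

static void PrintShares(double btcValuePerHour, double altValuePerHour) {
    double total = btcValuePerHour + altValuePerHour;
    printf("BTC %.1f%%  alts %.1f%%  (the whole unit of hash now earns $%.2f/hour)\n",
           100.0 * btcValuePerHour / total,
           100.0 * altValuePerHour / total,
           total);
}

int main() {
    double btcValuePerHour = 9.0; // $ of BTC issued per hour (assumed, as above)
    double altValuePerHour = 1.0; // $ of altcoins issued per hour (assumed)

    PrintShares(btcValuePerHour, altValuePerHour); // before the halving: 90% / 10%
    btcValuePerHour /= 2.0;                        // halving cuts BTC issuance to $4.50/hour
    PrintShares(btcValuePerHour, altValuePerHour); // after: 9/11 ~ 81.8% / 2/11 ~ 18.2%
    return 0;
}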
bitcoinbeliever
Newbie
*
Offline Offline

Activity: 54
Merit: 0


View Profile
May 10, 2014, 10:21:30 PM
 #39

Hand-waving distributed timestamps into existence can't be confused with this idea

 https://bitcointalk.org/index.php?topic=3441.msg48484#msg48484

which relies only on nodes' local clocks, and then only on their keeping accurate time over the short term; they don't have to be military-atomic-clock accurate and it doesn't matter what time zone they are in.  Instead of trying to create a distributed time protocol, the existing consensus mechanism is used with nodes simply voting to reject blocks containing double-spends they detect themselves.  They actually already make this decision ... they just don't tell anyone.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
May 10, 2014, 10:38:24 PM
Last edit: May 11, 2014, 03:39:03 PM by DeathAndTaxes
 #40

Hand-waving distributed timestamps into existence can't be confused with this idea

 https://bitcointalk.org/index.php?topic=3441.msg48484#msg48484

which relies only on nodes' local clocks, and then only on their keeping accurate time over the short term; they don't have to be military-atomic-clock accurate and it doesn't matter what time zone they are in.  Instead of trying to create a distributed time protocol, the existing consensus mechanism is used with nodes simply voting to reject blocks containing double-spends they detect themselves.  They actually already make this decision ... they just don't tell anyone.

If you could trust nodes voting you wouldn't need mining to begin with.  There is no guarantee that all nodes know about all transactions at any point in time.  There is no guarantee nodes will learn of transactions in a timely manner, or that they will learn of transactions in the same order.  The composition of the network is also continually changing.

The network isn't a single unified block of memory; it is a coalition of independent systems.  If we could trust anonymous decentralized nodes to "vote" in a secure manner they could simply agree on the ordering of transactions amongst themselves, and you wouldn't need blocks and you certainly wouldn't need a proof of work.  The unresolved problem is preventing a Sybil attack.  Satoshi saw that in a pseudonymous decentralized network, where an attacker could cheaply create thousands or even hundreds of thousands of nodes, no vote based on 1 node = 1 vote could be trusted.  The proof of work is a mechanism to force a consensus among nodes which may have conflicting views on the ordering of transactions.  Proof of work creates a canonical ordering of transactions and all nodes update their internal ordering to match it.  If nodes could reject blocks based on "incorrect" ordering then it would imply they already know the canonical ordering.  If they know that, then you don't need mining to begin with.
justusranvier
Legendary
*
Offline Offline

Activity: 1400
Merit: 1009



View Profile
May 11, 2014, 01:08:09 PM
 #41

If you could trust nodes voting you wouldn't need mining to begin with.  There is no guarantee that all nodes know about all transactions at any point in time.  There is no guarantee nodes will learn of transactions in a timely manner, or that they will learn of transactions in the same order.  The composition of the network is also continually changing.

The network isn't a single unified block of memory; it is a coalition of independent systems.  If we could trust anonymous decentralized nodes to "vote" in a secure manner they could simply agree on the ordering of transactions.  You wouldn't even need blocks and you certainly wouldn't need mining.  The unresolved problem is preventing a Sybil attack.  Satoshi saw that in a pseudonymous decentralized network, where an attacker could cheaply create thousands or even hundreds of thousands of nodes, no vote based on 1 node = 1 vote could be trusted.

Proof of work is the method to force a consensus on the network to create a canonical ordering of transactions.  If nodes could reject blocks based on "incorrect" ordering then it would imply they already know the canonical ordering.  If they know that, then you don't need mining to begin with.
"I found a way to solve the BGP without solving the BGP" is going to be the next generation's perpetual motion hoax.

All those people who make YouTube videos where they claim to run a portable generator to electrolyze water which they then use to power the generator engine? Soon they'll all switch to inventing new scamcoins.
bitcoinbeliever
Newbie
*
Offline Offline

Activity: 54
Merit: 0


View Profile
May 12, 2014, 01:57:51 AM
 #42

If you could trust nodes voting you wouldn't need mining to begin with.

The post I linked to does not claim to invent a way to fully order all transactions.  It only proposes that nodes (the important ones being miners) reject (not build on top of) blocks that contain transactions that are not only double-spends --  but 20-second-late double-spends (the exact threshold would be determined later).

If they go ahead and build on top of such a block anyway, they risk the rest of the network (miners again) ignoring any block they find.

This would not make all 0-conf transactions safe.  But it could make them a lot safer than today, if a seller waited for example 20 seconds before completing a real-world transaction.

The safest path for a miner would be to NOT mine double spends.  Which of course is the point.  The only risk to a miner who adopted that strategy would be somehow missing a first-spend.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
May 12, 2014, 02:02:56 AM
Last edit: May 13, 2014, 01:25:37 PM by DeathAndTaxes
 #43

If you could trust nodes voting you wouldn't need mining to begin with.

The post I linked to does not claim to invent a way to fully order all transactions.  It only proposes that nodes (the important ones being miners) reject (not build on top of) blocks that contain transactions that are not only double-spends --  but 20-second-late double-spends (the exact threshold would be determined later).

If they go ahead and build on top of such a block anyway, they risk the rest of the network (miners again) ignoring any block they find.

This would not make all 0-conf transactions safe.  But it could make them a lot safer than today, if a seller waited for example 20 seconds before completing a real-world transaction.

The safest path for a miner would be to NOT mine double spends.  Which of course is the point.  The only risk to a miner who adopted that strategy would be somehow missing a first-spend.



It would also make confirmations horribly insecure, as you would be reporting confirmations on a minor chain while you may be double spent on the main chain.  Worse, it hides this fact from the user by ignoring the main chain because it contains txs "too close together".  It won't be deterministic between nodes until 6 confirmations, which makes troubleshooting and analysis far more confusing for users.  Essentially not only are 0-confirm transactions now risky, 1 to 5 confirms are highly risky too.  6 confirms becomes the new 1 confirm, so the "clearing time" goes from 10 minutes to an hour.

Miners are highly pragmatic; they will build off the longest chain as it has the best chance of remaining the longest chain. You can suggest miners not build off the longest chain, but unless a super majority of them follow that they simply lose money by doing so, which means they won't.  It is a modified prisoner's dilemma where in almost all scenarios breaking from the longest chain is the worse choice.
ncourtois
Newbie
*
Offline Offline

Activity: 4
Merit: 0


View Profile
May 13, 2014, 10:27:26 AM
 #44

This is work in progress. Thank you all for extremely valuable comments.
The most up to date version of this paper is available at:
http://arxiv.org/abs/1405.0534

hashman
Legendary
*
Offline Offline

Activity: 1264
Merit: 1008


View Profile
May 14, 2014, 11:32:49 AM
 #45

Nice paper Cheesy  It is indeed a complex ecosystem of competing currencies we find ourselves in. 

couple comments:

"Sudden jumps and rapid phase transitions are programmed at xed dates in time and are likely to ruin the life of these currencies"

Participants know about the sudden jumps far in advance and value hashpower and currencies accordingly.  See the 1st bitcoin halving.  Will people prefer this to the undisclosed jumps/transitions in supply and mining rates we have seen in fiat or even gold? 

"We discovered that neither Satoshi nor bitcoin developers have EVER mandated any sort of transaction timestamp in bitcoin software."

I disagree.  Block height is one of the best timestamp mechanisms ever developed.  Second, more conventional dates ARE in the software, as they are required to set difficulty properly.  For some commodities there is not even a public record, let alone a great timestamping system.   

"In this paper we show that most crypto curren-cies simply do NOT have ANY protection against double spending."

Participants all know and understand the possibility of a longer-chain or 51% attack and its cost.  We wait for the appropriate number of blocks (confirmations) until we know a double spend effort on a payment we are accepting would be a total loss for the payer.  Pretty good protection against double spend IMHO, especially compared to fiat Wink           


AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 14, 2014, 01:00:15 AM
Last edit: July 15, 2014, 02:49:04 PM by AnonyMint
 #46

The most important point made in the paper is that an adversary potentially doesn't need to own 50+% of the network hashrate. Rather, if sufficient hashrate can be rented, the adversary only needs to double-spend, over a small window of time, an amount that is more than the mining rental cost, which potentially makes the 50+% double-spend attack much more accessible given that at 25 BTC per block a 6-confirmation rental cost should be in the range of only 150 BTC.

Another point made is that the double-spends could be spread out over smaller transactions, so large-transaction fingerprinting can't be a solution. The point is also that waiting for 6 confirmations doesn't necessarily give a high probability of protection against a double-spend (although the attacker wouldn't likely be able to monetize on sufficient scale some types of transactions, e.g. purchasing toddler shoes from a merchant).

The attack works by spending on the public chain fork and simultaneously employing the rented mining hardware to create a secret chain fork which omits or double-spends those spends. Then, after the spends have received enough confirmations on the public fork, the attacker publishes the secret fork, whose greater block length orphans the fork with the originating spends.

Proposed Solution

I hereby write down a rough sketch for an idea of a decentralized solution which is based on the attacker not being able to afford to sustain the high hashrate. I don't know if there is a prior art or discussion of this or a similar idea?

a. If a mining node receives a request to add a transaction which double-spends a prior seen transaction, the request is discarded.

b. If a mining node is attempting to add a transaction to the next block it will win and the received winning block double-spends the former, the former is discarded.

c. If a mining node is in possession of an orphaned block chain which contains transactions that are missing or double-spent in a received longer block chain, the mining node broadcasts this fact and all mining nodes which agree (i.e. had seen this fork before it was orphaned) are expected to add this fact to the next winning block which thus marks any double-spent coins as forfeited to the ether (or adds the spends that were omitted).

This accomplishes the following.

1. The more confirmations a merchant waits for, the less probable it is that the typical spender could (after receiving what was paid for) propagate a double-spend to spite the merchant. Todo: quantify the probability.

2. The mining nodes in agreement would keep trying indefinitely to add this forfeiture evidence to the attacker's fork after it is public. Only if they have a minority of the hashrate would they fail. Thus the attacker would need to maintain his 50+% advantage forever (or at least until the mining nodes in agreement became a minority, which could take a significant period of time).

Any holes in my logic?

One issue is that honest mining nodes are punished by the attacker: for as long as the adversary can sustain the 50+% hashrate, the honest miners lose their mining rewards. But they would be losing them even without my idea if this attack becomes viable. If the attack becomes viable, the network would be under constant attack (attackers wouldn't hold back to preserve the Bitcoin market price, since they compete with each other in a Tragedy of the Commons), and all honest blocks would be orphaned.

(note I only invested roughly an hour into this idea, so I might have missed something obvious)

Edit: this rented 50+% attack is a Tragedy of the Commons, because the attackers would compete to drive rental costs higher and higher, crowding out honest miners. This is potentially a very dangerous risk as more and more mining hardware is put out for rent to obtain market prices. Note this becomes more likely as exchange trading volume becomes more liquid while a crypto-currency matures.

Edit#2: note the proposed solution doesn't deal with network fragmentation, where there are isolated public forks and the spender could issue a double-spend on each fork; block chain fragmentation is an orthogonal problem (which I think currently has no known solution?).

Edit#3: the attacker must double-spend and not just omit the prior spends in the longer block chain, otherwise the other nodes will remember those spends and re-add them to the longer block chain after it is public.

Edit#4: the only reason I have thought of for not modifying the proposal to reinstate the originating spends, instead of declaring the double-spent coins forfeited to the ether, is if the typical spender can somehow spend to himself first and then to the merchant second. Todo: analyze this along with quantifying the probability in #1 above.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4172
Merit: 8419



View Profile WWW
July 14, 2014, 05:22:34 AM
Last edit: July 14, 2014, 11:15:38 AM by gmaxwell
 #47

The most important point made in the paper
Huh, the fact that someone with <<50% hashpower can successfully double spend has been repeated many many times on this forum, by dozens of people (including myself), along with comments that mining pools or (especially) vertically integrated closed mining operations with double-digit percentages of the hashrate are all concerning, even if it's not near 50%. The original Bitcoin whitepaper gives the formula for calculating the success rates.
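For readers following along, the whitepaper formula (section 11) can be evaluated directly; a small sketch, where q is the attacker's fraction of the hashrate and z is the number of confirmations the merchant waits:

Code:
# Attacker success probability from section 11 of the Bitcoin whitepaper:
# the chance an attacker with fraction q of the hashrate ever catches up
# after the merchant has seen z confirmations.
from math import exp, factorial

def attacker_success(q, z):
    p = 1.0 - q                 # honest fraction of the hashrate
    lam = z * (q / p)           # expected attacker progress while z honest blocks are found
    total = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam**k / factorial(k)
        total -= poisson * (1 - (q / p) ** (z - k))
    return total

print(attacker_success(0.10, 6))   # ~0.0002 with 10% of the hashrate
print(attacker_success(0.30, 6))   # ~0.13 with 30% of the hashrate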
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 14, 2014, 05:31:18 AM
Last edit: July 14, 2014, 05:58:37 AM by AnonyMint
 #48

The most important point made in the paper
Huh, the fact that someone with <<50% hashpower can successfully double spend has been repeated many many times on this forum, by dozens of people (including myself), along with comments that mining pools or (especially) vertically integrated closed mining operations with double-digit percentages of the hashrate are all concerning, even if it's not near 50%. The original Bitcoin whitepaper gives the formula for calculating the success rates.

I am aware of Meni Rosenfeld's white paper which calculates the probability of winning n blocks with LESS than 50% of the hashrate. But that is an orthogonal issue to the one I raise, of sustaining an attack with MORE than 50% (see I wrote "50+%") of the hash power. I propose to defeat the attacker who can't sustain 50+% of the hashrate (up to) indefinitely.

Where was the following discussed before and where was my solution proposed before?

...is that an adversary potentially doesn't need to own 50+% of the network hashrate. Rather, if sufficient hashrate can be rented, the adversary only needs to double-spend, over a small window of time, an amount greater than the mining rental cost, which potentially makes the 50+% double-spend attack much more accessible: at 25 BTC per block, a 6-confirmation rental cost should be in the range of only 150 BTC...

...this rented 50+% attack is a Tragedy of the Commons, because the attackers would compete to drive rental costs higher and higher, crowding out honest miners. This is potentially a very dangerous risk as more and more mining hardware is put out for rent to obtain market prices. Note this becomes more likely as exchange trading volume becomes more liquid while a crypto-currency matures...

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4172
Merit: 8419



View Profile WWW
July 14, 2014, 11:30:42 AM
Last edit: July 14, 2014, 11:44:51 AM by gmaxwell
 #49

and where was my solution proposed before?
I don't believe your post contained a "proposed solution" when I initially responded, if it did— I missed it.

But it's not a solution, alas. Ignoring other issues, at best it still leaves it at a simple piece of extortion "return most of the funds to me or I will reliably destroy your payment". In that sense it's pretty much isomorphic to "replace by fee scorched earth". The ongoing effort has other problems: a txout can be spent again immediately in the same block. Imagine it takes months to get the fraud notice out (heck, imagine a malicious miner creating one and intentionally withholding it). By that time perhaps virtually all coins in active circulation are derived from the conflicted coins. Now they finally get the notice out (/finally stop hiding it). What do you do? Nothing? Invalidate _everyone's_ coins? Partially invalidate everyone's coins? Each option is horrible. Doing nothing makes the 'fix' ineffective in all cases: the attacker just always sends the coins to themselves in the same block. The other options make the failure propagate, potentially forever, and don't just hit the unlucky merchant with the potentially unwise policy.

(It should be noted that nodes already discard later double-spend transactions from their mempool, from their local perspective, because, well, duh.)
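A toy illustration of the propagation problem described above (the transaction graph and names are made up): once the conflicted output has been re-spent, unwinding it means unwinding every descendant, and that set only grows while the notice is withheld.

Code:
# Toy illustration: unwinding a conflicted transaction taints every descendant.
# Each transaction lists the parent transactions whose outputs it spends.
spends = {
    "double_spend": [],
    "attacker_self_spend": ["double_spend"],        # same-block child
    "payment_to_exchange": ["attacker_self_spend"],
    "exchange_payout_1":  ["payment_to_exchange"],
    "exchange_payout_2":  ["payment_to_exchange"],
    "unrelated_tx":       [],
}

def tainted_descendants(root, spends):
    """Everything that must be unwound if `root` is invalidated."""
    out, frontier = set(), [root]
    while frontier:
        tx = frontier.pop()
        for child, parents in spends.items():
            if tx in parents and child not in out:
                out.add(child)
                frontier.append(child)
    return out

print(tainted_descendants("double_spend", spends))
# -> attacker_self_spend, payment_to_exchange, exchange_payout_1, exchange_payout_2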

Quote
Rather if sufficient hashrate can be rented [...] potentially makes the 50+% double-spend attack much more accessible
Many places, I'll pick an example from myself— (note the log I reference there 12:37 < gmaxwell> pirateat40: you can have 10% of the hash power and attack.)
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 02:42:09 AM
Last edit: July 15, 2014, 02:53:56 AM by AnonyMint
 #50

I don't believe your post contained a "proposed solution" when I initially responded, if it did— I missed it.

It was there— only the enumerated edits were added which are below the proposed solution.

I imagine you may have limited time to digest a lengthy post, if on first glance you thought it was rehashing an old issue.

But it's not a solution, alas. Ignoring other issues, at best it still leaves it at a simple piece of extortion "return most of the funds to me or I will reliably destroy your payment".

That specific threat was paramount in my mind as I was designing my proposal and I think I eliminated it.

The mining nodes reject any double-spend transaction which conflicts with the block chain. The only transactions that can be unwound are those which appear in a competing fork, and only when that competing fork does not have enough sustained agreement. The premise is the attacker can't maintain 50+% of the hashrate indefinitely. Essentially what I am proposing is that orphaned chains are not forgotten by the sustained majority when the longer chain temporarily double-spends the orphaned chain, so the sustained majority (eventually) unwinds the temporary attack. The attack is differentiated from the majority because it is not sustained indefinitely. Abstractly I am proposing a smoothing filter on the Proof-of-Work longest chain rule. The ephemeral attacker is the aliasing error.

And I think (perhaps) they can be unwound to eliminate the double-spend, rather than to the ether.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 15, 2014, 05:35:32 AM
 #51

I don't believe your post contained a "proposed solution" when I initially responded, if it did— I missed it.

It was there— only the enumerated edits were added which are below the proposed solution.

I imagine you may have limited time to digest a lengthy post, if on first glance you thought it was rehashing an old issue.

But it's not a solution, alas. Ignoring other issues, at best it still leaves it at a simple piece of extortion "return most of the funds to me or I will reliably destroy your payment".

That specific threat was paramount in my mind as I was designing my proposal and I think I eliminated it.

The mining nodes reject any double-spend transaction which conflicts with the block chain. The only transactions that can be unwound are those which appear in a competing fork, and only when that competing fork does not have enough sustained agreement. The premise is the attacker can't maintain 50+% of the hashrate indefinitely. Essentially what I am proposing is that orphaned chains are not forgotten by the sustained majority when the longer chain temporarily double-spends the orphaned chain, so the sustained majority (eventually) unwinds the temporary attack. The attack is differentiated from the majority because it is not sustained indefinitely. Abstractly I am proposing a smoothing filter on the Proof-of-Work longest chain rule. The ephemeral attacker is the aliasing error.

And I think (perhaps) they can be unwound to eliminate the double-spend, rather than to the ether.

Even if you unwind specific transactions without breaking the block headers, you still run into the issues DeathandTaxes mentioned earlier.  Namely, that having a certain number of confirmations won't mean what it used to.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 11:07:52 AM
 #52

Even if you unwind specific transactions without breaking the block headers, you still run into the issues DeathandTaxes mentioned earlier.  Namely, that having a certain number of confirmations won't mean what it used to.

Incorrect. My proposal strengthens the insurance from n confirmations. If you disagree, then walk me through your logic.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 01:24:05 PM
 #53

Note in my proposal unwinding only affects double-spent coins, so if you never send a double-spend then your transaction will never be unwound. Note a double-spend is not the same as resending the same transaction.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4172
Merit: 8419



View Profile WWW
July 15, 2014, 01:48:14 PM
 #54

Note in my proposal unwinding only affects double-spent coins, so if you never send a double-spend then your transaction will never be unwound. Note a double-spend is not the same as resending the same transaction.
As mentioned in my prior response— if you did this then your proposal is completely ineffective. First you double spend, and then you spend your double-spent coins to yourself.  If you do not also unwind the child transaction then the doublespender walks free for nothing but the cost of an extra transaction. If you do unwind the children then everyone is at constant risk.
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 15, 2014, 02:09:25 PM
Last edit: July 15, 2014, 02:22:04 PM by jonald_fyookball
 #55

Note in my proposal unwinding only affects double-spent coins, so if you never send a double-spend then your transaction will never be unwound. Note a double-spend is not the same as resending the same transaction.
As mentioned in my prior response— if you did this then your proposal is completely ineffective. First you double spend, and then you spend your double-spent coins to yourself.  If you do not also unwind the child transaction then the doublespender walks free for nothing but the cost of an extra transaction. If you do unwind the children then everyone is at constant risk.

Not to mention the obvious:  Double spent coins are the only ones we care about here.

The whole meaning of n confirmations is measuring security against double spends.
Otherwise, 1-2 confirmations would be enough to know the tx was formatted correctly
and was included in the blockchain.

So if you are talking about the possibility of winding them back, then
confirmations lose their meaning, and your "solution" does more harm
than good.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 02:32:18 PM
Last edit: July 15, 2014, 03:33:40 PM by AnonyMint
 #56

Note in my proposal unwinding only affects double-spent coins, so if you never send a double-spend then your transaction will never be unwound. Note a double-spend is not the same as resending the same transaction.

As mentioned in my prior response— if you did this then your proposal is completely ineffective. First you double spend, and then you spend your double-spent coins to yourself.  If you do not also unwind the child transaction then the doublespender walks free for nothing but the cost of an extra transaction. If you do unwind the children then everyone is at constant risk.

I couldn't grok what you wrote before so I ignored it. Now I understand your point.

Definitely derivative transactions will be unwound too (thus my quoted statement is incorrect if you haven't waited the, say, 100 or so confirmations mentioned below), and this does not violate my assertion that the insurance increases as n confirmations does.

As you admitted (in the part I didn't grok before), the risk you speak of applies whether my proposal is implemented or not. For example, in the current implementation of longest chain rule, if you got paid in the public orphaned chain and these are double-spent into the secret chain which becomes the longer chain when publicized (thus orphaning the other chain), then your payments are effectively unwound.

My proposal is that you don't trust the longest chain until considerable n confirmations have transpired, because the majority will be trying to unwind those bogus double-spends. So in my proposal, not only do you not deliver goods until a payment has n (usually 1 - 6) confirmations, but also you don't accept payment from a transaction (history) unless the prior transaction has much more than n (say 100 or so) confirmations. Abstractly the smoothing filter applies from both directions.

Note that CryptoNote ring signatures (and probably Zerocash and Zerocoin also) break the type of unwinding in my proposal, because derivative transactions are unlinkable.

Edit: similar functionality can be obtained in the current implementation of the longest chain rule, by waiting for 100 or so confirmations before accepting a payment as final (to extend the duration, and thus the cost, of the time the attacker has to keep his chain secret so that your payment isn't orphaned by the attacker's chain). Thus unlinkable coins could still defeat ephemeral 50+% double-spending attacks, but with very slow payments.

Not to mention the obvious:  Double spent coins are the only ones we care about here.

The whole meaning of n confirmations is measuring security against double spends.
Otherwise, 1-2 confirmations would be enough to know the tx was formatted correctly
and was included in the blockchain.

So if you are talking about the possibility of winding them back, then
confirmations lose their meaning, and your "solution" does more harm
than good.

You are confused by aliasing error, which can be ephemeral in my proposal.

The confirmations that matter are those in the chain that has the sustainable majority of the hashrate. The only way to differentiate aliasing error from signal, is for the width of the smoothing filter (i.e. the number of confirmations) to be greater than the period of the aliasing error (another way of stating the Nyquist-Shannon sampling theorem). That period is how long the attacker can sustain the hashrate to maintain the longest chain.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 03:17:06 PM
Last edit: July 15, 2014, 03:38:15 PM by AnonyMint
 #57

Edit: similar functionality can be obtained in the current implementation of the longest chain rule, by waiting for 100 or so confirmations before accepting a payment as final (to extend the duration, and thus the cost, of the time the attacker has to keep his chain secret so that your payment isn't orphaned by the attacker's chain). Thus unlinkable coins could still defeat ephemeral 50+% double-spending attacks, but with very slow payments.

Ah, so the solution to these ephemeral (rented hashrate) 50+% attacks is either, under the existing longest chain rule, to increase the number of confirmations for finalizing payments to more than the period the attacker can afford (i.e. 100 or so), or to change the longest chain rule to my proposal, which shifts that wait to the duration a prior transaction has to wait before it can be safely spent again. In my proposal payments still need 1 - 6 confirmations as usual to defend against orphans due to propagation delay and < 50% attacks.

So what my proposal does is shift the smoothing filter from slower payments to slower re-spending, as the means of muting ephemeral 50+% (rented hardware) double-spending.

6 confirmations is on the order of 150 BTC cost for the attacker. 100 confirmations on the order of 2500 BTC. The attacker has to recoup that with double-spends.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
cAPSLOCK
Legendary
*
Offline Offline

Activity: 3738
Merit: 5127


Whimsical Pants


View Profile
July 15, 2014, 03:21:34 PM
 #58


You are confused by aliasing error, which can be ephemeral in my proposal.

The confirmations that matter are those in the chain that has the sustainable majority of the hashrate. The only way to differentiate aliasing error from signal, is for the width of the smoothing filter (i.e. the number of confirmations) to be greater than the period of the aliasing error (another way of stating the Nyquist-Shannon sampling theorem). That period is how long the attacker can sustain the hashrate to maintain the longest chain.

Your sampling analogy deals with effects of aliasing OUTSIDE of the frequency set where the errors occur, whereas the double spend problem occurs in alternate versions of the current set. In other words, the filter eliminates the audible aliasing by addressing its consequences, while a double spend happens at the exact same frequency as the actual (agreed upon) signal.

Does this flaw in the analogy not point out a possible crack in the thinking here?
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 15, 2014, 03:59:44 PM
 #59

If miners who have a winning block can simply reroute funds or unwind transactions because they claim they saw a related transaction
in a previous orphaned block, what's to stop miners from doing that arbitrarily?  

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 04:18:48 PM
 #60


You are confused by aliasing error, which can be ephemeral in my proposal.

The confirmations that matter are those in the chain that has the sustainable majority of the hashrate. The only way to differentiate aliasing error from signal, is for the width of the smoothing filter (i.e. the number of confirmations) to be greater than the period of the aliasing error (another way of stating the Nyquist-Shannon sampling theorem). That period is how long the attacker can sustain the hashrate to maintain the longest chain.

Your sampling analogy deals with effects of aliasing OUTSIDE of the frequency set where the errors occur, whereas the double spend problem occurs in alternate versions of the current set. In other words, the filter eliminates the audible aliasing by addressing its consequences, while a double spend happens at the exact same frequency as the actual (agreed upon) signal.

Does this flaw in the analogy not point out a possible crack in the thinking here?

Users want to band-limit the block chain, so they don't sample high frequency ephemeral attack block chains as being the true signal (because these can contain double-spends). They do this by applying a smoothing filter which is the number of confirmations they wait.

The only decentralized way I can see to defeat the ephemeral 50+% attack is to increase the period of time in which the attacker must be able to control the block chain, i.e. maintain 50+% of the hashrate.

In the current longest chain rule implementation, increasing the number of confirmations before accepting a payment as final forces the hacker to keep his double-spend chain secret longer.

In my proposed variation where the majority hashrate can unwind double-spends (and their derivative transactions), increasing the number of confirmations before accepting a prior transaction as final forces the attacker to maintain his public double-spend chain longer.

The smoothing filter applies to the attacker's period in both cases, and the choice of designs for the longest chain rule determines whether payments or re-spends should be delayed.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 15, 2014, 04:30:33 PM
 #61

If miners who have a winning block can simply reroute funds or unwind transactions because they claim they saw a related transaction
in a previous orphaned block, what's to stop miners from doing that arbitrarily?  

Miners cannot reroute transactions. They can only record them in the block chain.
They can't "unwind" transactions. They can only decide to include them or not. If a miner does not include a transaction in a block, another miner will include it in the next block.

Isn't that what he is proposing with this:

Quote
c. If a mining node is in possession of an orphaned block chain which contains transactions that are missing or double-spent in a received longer block chain, the mining node broadcasts this fact and all mining nodes which agree (i.e. had seen this fork before it was orphaned) are expected to add this fact to the next winning block which thus marks any double-spent coins as forfeited to the ether (or adds the spends that were omitted).

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 04:37:49 PM
 #62

If miners who have a winning block can simply reroute funds or unwind transactions because they claim they saw a related transaction
in a previous orphaned block, what's to stop miners from doing that arbitrarily?  

Because in my proposal, the majority hashrate won't agree unless they also saw the previous orphaned block. Thus it isn't arbitrary, rather it is only the longest chain rule consensus of the sustained hashrate.

What you are missing from your analysis is that my proposal retains the longest chain rule. It only allows that rule to use a smoothing filter to remove aliasing-error spikes in the hashrate that are not sustained and that impose double-spends.

What I did was view all the variables of the longest chain rule and realized we had another degree-of-freedom to tinker with, without violating the coherence of the rule.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 05:03:52 PM
Last edit: July 15, 2014, 06:55:06 PM by AnonyMint
 #63

Let me try to make this more comprehensible.

The way it is now, we wait 1 - 6 confirmations to make sure orphans or a < 50% attack can't unwind our transaction.

However that doesn't stop an attacker who can rent 100% of the hashrate for 6 confirmations for on the order of 6 x 25 = 150 BTC in cost (rough estimate based on block reward).

The way it is now, the only way to stop that ephemeral 50+% attacker is to wait say 100 or more confirmations to increase the rental cost that the attacker must recoup with double-spends.

Premised on the notion that the larger the value of the double-spends, the more difficult it is for the attacker to scale them and the larger the (dead or alive?) bounty that will be put on his head.

Excruciatingly slow transactions (e.g. 100 or more confirmations) are very undesirable.

Instead I proposed a new rule which is consistent with the Longest Chain Rule consensus, which allows the consensus to unwind double-spends (and their derivative transactions). Note this proposal appears to not be compatible with unlinkable block chains such as CryptoNote (Monero et al) and Zerocash.

Thus it is not necessary to increase the number of confirmations to wait on a transaction to more than the typical 1 - 6 as we only need to make sure the consensus hashrate has seen our transactions in a longest chain (before it would be orphaned by the attackers secret longer chain).

Instead in my proposal, we increase the delay that a completed transaction must wait before being transacted again (e.g. 100 or more confirmations), so it won't be unwound as a derivative transaction of an attacker's double-spend, which is much more desirable than excruciatingly slow transactions, because often we don't transact received funds too quickly anyway.
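A minimal sketch of that re-spend delay as described (an illustration only, not an implemented protocol rule; the 100-block figure is the one used in the post and the names are made up):

Code:
# Toy check for the proposed re-spend delay: a received output is only
# treated as safe to spend again once it is buried deeply enough that an
# ephemeral 50+% attacker could not afford to unwind it.
RESPEND_DELAY = 100   # confirmations, the figure used in the post

def safe_to_respend(output_height, chain_tip_height, delay=RESPEND_DELAY):
    confirmations = chain_tip_height - output_height + 1
    return confirmations >= delay

# A payment received at height 300,000 with the tip at 300,050 is still
# inside the window an attacker could profitably reorganize.
print(safe_to_respend(300_000, 300_050))   # False
print(safe_to_respend(300_000, 300_120))   # True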

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
drawingthesun
Legendary
*
Offline Offline

Activity: 1176
Merit: 1015


View Profile
July 15, 2014, 05:24:32 PM
Last edit: July 15, 2014, 05:36:13 PM by drawingthesun
 #64

Let me try to make this more comprehensible.

The way it is now, we wait 1 - 6 confirmations to make sure orphans or a < 50% attack can't unwind our transaction.

However that doesn't stop an attacker who can rent 100% of the hashrate for 6 confirmations for on the order of 6 x 25 = 150 BTC in cost (rough estimate based on block reward).

The way it is now, the only way to stop that ephemeral 50+% attacker is to wait say 100 or more confirmations to increase the rental cost that the attacker must recoup with double-spends.

Premised on the notion that the larger the value of the double-spends, the more difficult it is for the attacker to scale them and the larger the (dead or alive?) bounty that will be put on his head.

Excruciatingly slow transactions (e.g. 100 or more confirmations) are very undesirable.

Instead I proposed a new rule which is consistent with the Longest Chain Rule consensus, which allows the consensus to unwind double-spends (and their derivative transactions). Note this proposal appears to not be compatible with unlinkable block chains such as CryptoNote (Monero et al) and Zerocash.

Thus it is not necessary to increase the number of confirmations to wait on a transaction to more than the typical 1 - 6 as we only need to make sure the consensus hashrate has seen our transactions in a longest chain (before it would be orphaned by the attackers secret longer chain).

Instead in my proposal, we increase the delay that a completed transaction must wait before being transacted again (e.g. 100 or more confirmations), so it won't be unwound as a derivative transaction of an attacker's double-spend, which is much more desirable than excruciatingly slow transactions, because often we don't transact received funds too quickly anyway.

Making everyone wait 100 confirmations is not a very good idea.

Why not just make a seller wait 100 confirms upon receiving a large transaction, to reduce the chance they will be double spent against? Let people decide how long to wait, why enforce it?

Is waiting for 100 confirms less secure than your idea of forcing all coins to be locked for 100 confirms prior to being spent?
drawingthesun
Legendary
*
Offline Offline

Activity: 1176
Merit: 1015


View Profile
July 15, 2014, 05:26:20 PM
 #65

If making the seller and buyer wait for 100 confirms is too slow, why can't the seller just check that the buyer's address they are going to spend from has an amount of coin that is 100 confirms old?

The seller says they will only accept 100 confirm aged transactions, so the buyer uses an address with the right age or instead waits.
drawingthesun
Legendary
*
Offline Offline

Activity: 1176
Merit: 1015


View Profile
July 15, 2014, 05:27:48 PM
 #66

My above suggestion could even be implemented as a function in the client.

The seller specifies that large-value purchases must either wait for 100 confirms or come from an address that already has 100 confirms (or a mix of both).

The client automatically makes sure that the payment isn't processed until one of the criteria is satisfied.
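A minimal sketch of that client-side check (an illustration only; the 100-confirmation threshold and the two acceptance criteria come from the posts above, the function and names are made up):

Code:
# Sketch of the suggested client rule: treat a large payment as settled once
# it either has enough confirmations itself, or was funded entirely by coins
# that were already sufficiently aged when spent.
REQUIRED_AGE = 100   # confirmations, as suggested in the post

def payment_settled(payment_confirmations, funding_input_ages,
                    required_age=REQUIRED_AGE):
    if payment_confirmations >= required_age:
        return True
    # Otherwise require that every input funding the payment was already aged.
    return all(age >= required_age for age in funding_input_ages)

print(payment_settled(3, [250, 180]))   # True: fresh payment, but aged inputs
print(payment_settled(3, [250, 10]))    # False: one funding input is too young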
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 15, 2014, 05:38:05 PM
 #67

If miners who have a winning block can simply reroute funds or unwind transactions because they claim they saw a related transaction
in a previous orphaned block, what's to stop miners from doing that arbitrarily?  

Because in my proposal, the majority hashrate won't agree unless they also saw the previous orphaned block. Thus it isn't arbitrary, rather it is only the longest chain rule consensus of the sustained hashrate.

What you are missing from your analysis is that my proposal retains the longest chain rule. It only allows that rule to use a smoothing filter to remove aliasing-error spikes in the hashrate that are not sustained and that impose double-spends.

What I did was view all the variables of the longest chain rule and realized we had another degree-of-freedom to tinker with, without violating the coherence of the rule.

I see your point but I think there's problems.  Consider this scenario:

Chain #1 (eventually orphaned)

block 1: Alice , who has 100 BTC, sends 100 BTC to Bob.

block 2:  Charlie sends 100 BTC to alice

block 3:   Bob sends 100 BTC to Charlie

Chain #2:

block 1: Alice sends 100 BTC to Charlie

block 2: (nothing)

block 3: (nothing)

block 4: nothing -- longest chain

result:  chain 2 as the longest chain is accepted,
however,  Charlie doesn't get the initial 100 BTC,
because miners see it was spent on Bob.  Bob gets the 100 BTC.
However,  Alice is now missing 100 BTC she should have
gotten in block 2,  and Charlie is missing an ADDITIONAL 100 BTC
from block 3.

Therefore, "double spend attack by omission" can easily occur.







drawingthesun
Legendary
*
Offline Offline

Activity: 1176
Merit: 1015


View Profile
July 15, 2014, 05:39:23 PM
 #68

I believe that locking coins for 16 hours (100 confirms) after being received is too user-unfriendly to ever be accepted.

Making people wait longer and longer for larger and larger transactions works now and should be the norm.

A service could even be created by the likes of bitpay, where you store a balance through them and they take on the waiting risk with merchants.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 05:39:59 PM
 #69

Making everyone wait 100 confirmations is not a very good idea.

Why not just make a seller wait 100 confirms upon receiving a large transaction, to reduce the chance they will be double spent against? Let people decide how long to wait, why enforce it?

I did not propose to enforce the number of confirmations. Your multiple redundant posts are very noisy. Perhaps take some time to study more or let me reply first.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
drawingthesun
Legendary
*
Offline Offline

Activity: 1176
Merit: 1015


View Profile
July 15, 2014, 05:45:15 PM
 #70

Making everyone wait 100 confirmations is not a very good idea.

Why not just make a seller wait 100 confirms upon receiving a large transaction, to reduce the chance they will be double spent against? Let people decide how long to wait, why enforce it?

I did not propose to enforce the number of confirmations.

In a way you did:

Instead in my proposal, we increase the delay that a completed transaction must wait before being transacted again (e.g. 100 or more confirmations), so it won't be unwound as a derivative transaction of an attacker's double-spend, which is much more desirable than excruciatingly slow transactions, because often we don't transact received funds too quickly anyway.

Instead of making the buyer and seller wait at point of sale, you make the seller wait to be able to reuse the funds. The confirmations needed take place once a transaction happens, to safeguard not that current transaction, but the next one.

Now:

A sends 1 btc to B
B sends 1 btc to C
C waits 100 confirms
C sends goods to B.

Your solution:

A sends 1 btc to B
B waits 100 confirms
B sends 1 btc to C
C sends goods right away to B

So we still wait 100 confirms, but it happens before the transaction takes place and no one is allowed to send bitcoin with less than 100 confirms.
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 05:52:58 PM
Last edit: July 15, 2014, 06:55:52 PM by AnonyMint
 #71

Instead in my proposal, we increase the delay that a completed transaction must wait before being transacted again (e.g. 100 or more confirmations), so it won't be unwound as a derivative transaction of an attacker's double-spend, which is much more desirable than excruciatingly slow transactions, because often we don't transact received funds too quickly anyway.

Instead of making the buyer and seller wait at point of sale, you make the seller wait to be able to reuse the funds. The confirmations needed take place once a transaction happens, to safeguard not that current transaction, but the next one.

Now:

A sends 1 btc to B
B sends 1 btc to C
C waits 100 confirms
C sends goods to B.

My solution:

A sends 1 btc to B
B waits 100 confirms
B sends 1 btc to C
C sends goods right away to B

So we still wait 100 confirms, but it happens before the transaction takes place and no one is allowed to send bitcoin with less than 100 confirms.

Both in "Now" and in "My Solution" these number confirmations that any user waits is entirely up to their desired risk of a double-spend.

The advantage of my "My Solution" is this wait doesn't make transactions excruciatingly slow for those transactions at risk of double-spend (which could become more likely as the trend towards rentable hashrate becomes more widespread).

"My Solution" leverages all the existing delays between the time users receive a transaction and re-transact it.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 06:28:37 PM
Last edit: July 15, 2014, 06:40:35 PM by AnonyMint
 #72

If miners who have a winning block can simply reroute funds or unwind transactions because they claim they saw a related transaction
in a previous orphaned block, what's to stop miners from doing that arbitrarily?  

Because in my proposal, the majority hashrate won't agree unless they also saw the previous orphaned block. Thus it isn't arbitrary, rather it is only the longest chain rule consensus of the sustained hashrate.

What you are missing from your analysis is that my proposal retains the longest chain rule. It only allows that rule to use a smoothing filter to remove aliasing-error spikes in the hashrate that are not sustained and that impose double-spends.

What I did was view all the variables of the longest chain rule and realized we had another degree-of-freedom to tinker with, without violating the coherence of the rule.

I see your point but I think there's problems.  Consider this scenario:

Chain #1 (eventually orphaned)

block 1: Alice , who has 100 BTC, sends 100 BTC to Bob.

block 2:  Charlie sends 100 BTC to alice

block 3:   Bob sends 100 BTC to Charlie

Chain #2:

block 1: Alice sends 100 BTC to Charlie

block 2: (nothing)

block 3: (nothing)

block 4: nothing -- longest chain

result:  chain 2 as the longest chain is accepted,
however,  Charlie doesn't get the initial 100 BTC,
because miners see it was spent on Bob.  Bob gets the 100 BTC.
However,  Alice is now missing 100 BTC she should have
gotten in block 2,  and Charlie is missing an ADDITIONAL 100 BTC
from block 3.

Therefore, "double spend attack by omission" can easily occur.

In my proposal, if Charlie waits the 100 or so confirmations, he would not be in Block 3 and thus would not experience an unwind.

If in my proposal the sustained majority hashrate unwinds the double-spends to the ether, then neither Bob nor Charlie gets the 100 BTC from Alice. But Charlie should not have accepted the payment as final, because he has seen the orphaned chain and seen there is a double-spend that will be unwound.

If in my proposal the sustained majority hashrate unwinds to the transactions in the orphaned chain, then Bob gets the 100 BTC from Alice.

I don't see any problem.

Edit: applying the above to Bitcoin, Bob's transactions are unwound when they are orphaned. Charlie's send to Alice would probably get propagated to Chain #2.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 15, 2014, 06:37:17 PM
 #73

100 confirmations isn't a problem?  Undecided

If Bitcoiners are going to wait that long,
you really don't need any special
"unwinding" in the protocol.
 
Miners can just refuse to accept
chains of 100 blocks or more.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 06:43:05 PM
 #74

100 confirmations isn't a problem?  Undecided

If Bitcoiners are going to wait that long,
you really don't need any special
"unwinding" in the protocol.
  
Miners can just refuse to accept
chains of 100 blocks or more.

Re-read my reply to drawingthesun.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 15, 2014, 07:14:53 PM
 #75

100 confirmations isn't a problem?  Undecided

If Bitcoiners are going to wait that long,
you really don't need any special
"unwinding" in the protocol.
  
Miners can just refuse to accept
chains of 100 blocks or more.

Re-read my reply to drawingthesun.

Yes, I realize participants can choose their own number of confirmations according to their risk tolerance.
The issue still remains:  an attacker can double spend simply by building a longer chain
that doesn't include the transaction at all, effectively sending the coins back to himself.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 15, 2014, 08:35:06 PM
 #76

The issue still remains:  an attacker can double spend simply by building a longer chain
that doesn't include the transaction at all, effectively sending the coins back to himself.

Nope.

c. If a mining node is in possession of an orphaned block chain which contains transactions that are missing or double-spent in a received longer block chain, the mining node broadcasts this fact and all mining nodes which agree (i.e. had seen this fork before it was orphaned) are expected to add this fact to the next winning block which thus marks any double-spent coins as forfeited to the ether (or adds the spends that were omitted).

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 15, 2014, 08:57:09 PM
Last edit: July 16, 2014, 01:03:33 AM by jonald_fyookball
 #77

The issue still remains:  an attacker can double spend simply by building a longer chain
that doesn't include the transaction at all, effectively sending the coins back to himself.

Nope.

c. If a mining node is in possession of an orphaned block chain which contains transactions that are missing or double-spent in a received longer block chain, the mining node broadcasts this fact and all mining nodes which agree (i.e. had seen this fork before it was orphaned) are expected to add this fact to the next winning block which thus marks any double-spent coins as forfeited to the ether (or adds the spends that were omitted).

How do you know which is "the next winning block"?

If it is the very next block after a longest-chain-wins reorg, and the attacker
wins that block , the attacker could exclude it as well.

And if it doesn't have to be the very next block, then the attacker could work
the other side of the attack, create an orphan transaction on purpose and
spring it several blocks after a reorg, thus double spending that way.

EDIT:  Furthermore, even if an honest miner solves the "next winning block"
required to make the honest correction, what is to stop the 51% attacker
from undoing that block as well?  Where does it end?




Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
July 15, 2014, 09:19:56 PM
 #78


I don't know if the hash rate solution to byzantine-generals is in fact the right solution.  In the presence of rentable computer power, it doesn't necessarily fulfil the assumptions that the security of the model is based on.

There are other solutions to byzantine-generals, but they require O(n^2) communication so they're even harder to scale to large numbers of users.

We have Eve, Sybil, and Trent to worry about. 

Eve is eavesdropping on the blockchain to discover where (what IP address) transactions originate.  In addition, she does blockchain analysis to identify actors and associate them with past transactions.  Generally, we tolerate Eve because we can't create a completely opaque Eve-proof system without creating a Trent. 

Sybil can create any number of wallets/accounts at any time, making voting mechanisms useless.  The hash rate consensus mechanism was supposed to counter Sybil, but in the presence of rentable hashing power, Sybil can (if she pays money) subvert this mechanism as well.  Sybil is endemic to anonymous trustless systems. 

Trent is the "trusted" authority - a centralized node whose defection is capable of making the system not work.  We have tried very hard to avoid creating a Trent.  The Zerocoin proposal's main problem is that it requires a Trent to set up the encryption parameters.  If Trent defects, there can be any amount of hidden coins in the blockchain and nobody will be able to show it.   Existing solutions to Sybil attacks require Trent to issue accounts linked to keys so that Sybil can't just make up as many new ones as she wants.   

Anyway:  No way to completely avoid Eve and Sybil without creating Trent.  No way to completely avoid Trent without tolerating Eve and Sybil. 
Peter R
Legendary
*
Offline Offline

Activity: 1162
Merit: 1007



View Profile
July 15, 2014, 11:30:15 PM
 #79

However that doesn't stop an attacker who can rent 100% of the hashrate for 6 confirmations for on the order of 6 x 25 = 150 BTC in cost (rough estimate based on block reward).

An absurd attack deserves an equally absurd defence:  we'll rent 101% of the world's electricity production so that your hashpower will hash in reverse!

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 16, 2014, 01:45:30 AM
Last edit: July 16, 2014, 02:46:07 AM by AnonyMint
 #80

However that doesn't stop an attacker who can rent 100% of the hashrate for 6 confirmations for on the order of 6 x 25 = 150 BTC in cost (rough estimate based on block reward).

An absurd attack deserves an equally absurd defence:  we'll rent 101% of the world's electricity production so that your hashpower will hash in reverse!

Hopefully you were just joking? Clearly that is 100% of the sustained hashrate that exists at any point in time, not 100% of the world's electricity production. And of course you don't need 100% of it, just 50+% (> 50%). I was just providing an estimate of the maximum cost to mount the attack, which turns out to be absurdly low.

Edit: The absurd nature of the attack is that today probably 50% of the hashrate is not rentable (thus greater than 6 confirmations is probably not necessary now). But this could change over time if the trend is towards getting market prices for ASICs.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4172
Merit: 8419



View Profile WWW
July 16, 2014, 01:46:51 AM
 #81

but they require O(n^2) communication so
Forget that— even ignoring the scaling, they require the participants to be enumerated in advance. That's generally a non-starter to begin with for what Bitcoin attempts to achieve.
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 16, 2014, 01:52:18 AM
 #82

The issue still remains:  an attacker can double spend simply by building a longer chain
that doesn't include the transaction at all, effectively sending the coins back to himself.

Nope.

c. If a mining node is in possession of an orphaned block chain which contains transactions that are missing or double-spent in a received longer block chain, the mining node broadcasts this fact and all mining nodes which agree (i.e. had seen this fork before it was orphaned) are expected to add this fact to the next winning block which thus marks any double-spent coins as forfeited to the ether (or adds the spends that were omitted).

How do you know which is "the next winning block"?

If it is the very next block after a longest-chain-wins reorg, and the attacker
wins that block , the attacker could exclude it as well.

And if it doesn't have to be the very next block, then the attacker could work
the other side of the attack, create an orphan transaction on purpose and
spring it several blocks after a reorg, thus double spending that way.

EDIT:  Furthermore, even if an honest miner solves the "next winning block"
required to make the honest correction, what is to stop the 51% attacker
from undoing that block as well?  Where does it end?

Correct. The point is to stop the attacker who can't sustain that indefinitely. Eventually the majority sustained hashrate wins, because the attacker has an increasing rental cost over time to maintain the fixed amount of double-spends he earned.
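A rough illustration of that claim with made-up numbers (expected block counts only, not a full simulation): while the rental lasts the attacker's fork grows faster, but once the rental ends the sustained majority overtakes.

Code:
# Expected fork lengths when an attacker rents majority hashrate for a
# limited window and then drops back to a small share (illustrative numbers).
def expected_lengths(attack_share_during, attack_share_after,
                     rental_blocks, total_blocks):
    attacker = honest = 0.0
    for block in range(total_blocks):
        share = attack_share_during if block < rental_blocks else attack_share_after
        attacker += share
        honest += 1.0 - share
    return attacker, honest

# 75% of the hashrate rented for 20 block intervals, then 5% afterwards.
for horizon in (20, 40, 80, 160):
    a, h = expected_lengths(0.75, 0.05, 20, horizon)
    leader = "attacker" if a > h else "honest"
    print(f"after {horizon:3d} blocks: attacker {a:.0f} vs honest {h:.0f} -> {leader} chain leads")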

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 16, 2014, 01:55:20 AM
 #83

you didn't answer the first point I made, so I'm unconvinced.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 16, 2014, 02:04:33 AM
Last edit: July 16, 2014, 10:14:41 PM by AnonyMint
 #84

I don't know if the hash rate solution to byzantine-generals is in fact the right solution.  In the presence of rentable computer power, it doesn't necessarily fulfil the assumptions that the security of the model is based on.

The only purpose of the longest chain rule is to prevent double-spends, and it seems it continues to serve that function for as long as at-risk transactions wait for enough confirmations to make rentable computer power attacks impractical due to scale (?).

I offer a proposal to shift the cost of that wait to the time between transactions, which is an existing unutilized resource, thus significantly mitigating the impact of this wait and, in many cases, not requiring the user to do anything to institute it. In short, in my proposal the wait often comes automatically and at no cost.

We have Eve, Sybil, and Trent to worry about.  

..  

Anyway:  No way to completely avoid Eve and Sybil without creating Trent.  No way to completely avoid Trent without tolerating Eve and Sybil.  

I have an idea for a solution to this problem but I am not revealing it now. Gmaxwell was headed in the correct direction with CoinJoin in terms of separating the anonymity from the linkability of the block chain, but DarkCoin shows that to implement it you end up with centralized or Sybil-attacked master nodes in order to holistically resolve the jammability problem of CoinJoin's two steps. Also the simultaneity requirement of CoinJoin is a problem.

As well my proposal revealed in this thread appears to be incompatible with unlinkable block chains, e.g. Monero/Cryptonote and Zerocash.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 16, 2014, 02:16:14 AM
 #85

you didn't answer the first point I made, so I'm unconvinced.

I did answer it. Think deeply.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 16, 2014, 02:23:16 AM
Last edit: July 16, 2014, 02:37:49 AM by AnonyMint
 #86

you didn't answer the first point I made, so I'm unconvinced.

I did answer it. Think deeply.

The hacker and the sustained hashrate are running parallel forks and copying each other's valid transactions whereas the latter continues to unwind the double-spends from the hacker's fork every time the hacker loses a block. The hacker can only defeat this by continuing to maintain > 50% of the hashrate indefinitely.

Nodes who come on to the scene after the fact have a problem of knowing which fork to trust. So if the hacker can maintain the attack for an extremely long duration (so that the majority of the sustained hashrate is from nodes who don't know which fork to trust), then my proposal fails (at least if I try to unwind to the original transaction instead of to the ether). But in that case there is no solution at all in any case (at-risk transaction would be delayed an extremely long duration) and Bitcoin is toast.

Maybe that is why I will decide it must be to the ether. Need to think more on this and would appreciate insight from other astute devs here.

However, that most of the hashrate is in a few pools makes this extremely difficult for the attacker. I tend to think even if mining were more decentralized than currently in Bitcoin, the hashrate will still be concentrated amongst long-lived nodes.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 16, 2014, 02:41:37 AM
 #87

I don't know if the hash rate solution to byzantine-generals is in fact the right solution.  In the presence of rentable computer power, it doesn't necessarily fulfil the assumptions that the security of the model is based on.

The only purpose of the longest chain rule is to prevent double-spends, and it seems it continues to serve that function for as long as at-risk transactions wait for enough confirmations to make rentable computer power attacks impractical due to scale (?).


It also is part of the consensus mechanism and works beautifully due to its simplicity.
Any added complexity to that rule needs to be considered very carefully and needs
to provably establish consensus as the longest chain rule does.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 16, 2014, 02:50:57 AM
Last edit: July 16, 2014, 06:07:05 AM by AnonyMint
 #88

Edit: The absurd nature of the attack is that today probably 50% of the hashrate is not rentable (thus greater than 6 confirmations is probably not necessary now). But this could change over time if the trend is towards getting market prices for ASICs.

How likely is it that 50% of the hashrate will become rentable?

Edit: on the one hand no one should offer that much hashrate for rent, because attackers could destroy the value of their hardware if they destroy Bitcoin in a Tragedy of the Commons (assuming no other SHA2 coins to mine at the same profitability). But another Tragedy of the Commons is that ASIC owners don't each have 50% of the hashrate, so they may not have a coordinated vision.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse
THIS FORUM ACCOUNT IS NO LONGER ACTIVE
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 16, 2014, 10:26:32 AM
Last edit: July 16, 2014, 11:02:08 AM by jonald_fyookball
 #89

I have such a headache from trying to understand this thread so instead, I am bookmarking it for later.  Cheesy

A lot of Anonymint's ideas (Aliasing) are analogies and technobabble, and really have nothing to do with Bitcoin or blockchain technology.

(He should really stop trying to impress everyone with big words and explain his ideas in plain English.)

Anyway, Anonymint, I don't think more devs need to see the proposal.  You've already got Gmaxwell and DeathandTaxes telling you it won't work, what more do you want?  I think I've given some fairly clear arguments as well.  The creativity is appreciated but there doesn't necessarily have to be a "solution" against a 51% attack, and it doesn't mean Bitcoin is toast.

51% attacks done for financial gain are prohibitively expensive and I doubt you can rent half the network power anyway.  Malicious sustained irrational 51% attacks are an extreme scenario that would need an extreme solution such as a fork to an alternative proof of work.

But you did get me thinking about timestamp block validation and whether we do or can limit unwinding of the block chain by a 51% attacker.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 16, 2014, 03:05:28 PM
 #90

I have such a headache from trying to understand this thread so instead, I am bookmarking it for later.  Cheesy

A lot of Anonymint's ideas (Aliasing) are analogies and technobabble, and really have nothing to do with Bitcoin or blockchain technology.

(He should really stop trying to impress everyone with big words and explain his ideas in plain English.)

Hey ad hominem bullshit flows out of your mouth. I can't compensate for your intellectual handicap.

Anyway, Anonymint, I don't think more devs need to see the proposal.  You've already got Gmaxwell and DeathandTaxes telling you it won't work, what more do you want?

Neither Gmaxwell nor DeathandTaxes has stated that my idea won't work. D&T hasn't even addressed my idea. Gmaxwell stated that the existing strategy has the same problem with derivative unwinds as my strategy-- that is not the same as saying my idea won't work. But you aren't even able to comprehend.

I think I've given some fairly clear arguments as well.

And I've addressed all of your posts.

I doubt you can rent half the network power anyway.

I've put that out there as an open question in my prior post and no one has credibly addressed it yet.

surae.noether
Newbie
*
Offline Offline

Activity: 3
Merit: 0


View Profile
July 16, 2014, 08:58:54 PM
 #91

First impressions of the paper: there are some good insights, and it's written well.

But... time stamps can be manipulated by dishonest actors (addressed in the CryptoNote whitepaper) and hence cannot be trusted to prevent double spending, which is one motivation behind Nakamoto's development of the Blockchain-by-Proof-of-Work solution to the Byzantine Generals problem.

Proof-of-work methods had been utilized before for various applications, most notably to mitigate spam e-mail, but Nakamoto was the first to solve the problem of coming to a consensus about order-of-events in a distributed, peer-to-peer way without timestamps. Even Nakamoto's solution is not a true solution, but simply a method that converges to a solution probabilistically over time. It's provable that a one-time, 2-General problem requires a countable number of verifications for a closed solution. The only other alternative Blockchain-by-Proof-of-X method that has been proposed since Nakamoto's solution has been Blockchain-by-Proof-of-Stake (and its variant, Blockchain-by-Proof-of-Stake-Velocity). Other Proof-of-X methods, such as Proof-of-Burn and Proof-of-Publication, have not been proposed to verify transactions, but to bootstrap value from one cryptocurrency to another and to verify the existence of a file by some point in time on the blockchain, respectively.  If either of these methods can be utilized to verify transactions, no way of doing so has been proposed to my knowledge.

Recent rigorous security analyses of Blockchain-by-Proof-of-Stake methods are troubling: unless some Proof-of-Work component is included, a dedicated attacker can "kill" a coin at no cost (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2393940), but the attack requires a vast amount of capital, requires rational behavior on the part of a market (ha!), and requires the actor to enact a PR campaign trying to kill the coin. It is unlikely to generate profit for the attacker (i.e. it's a strictly malicious attack). Notice, however, this may not apply to Blockchain-by-Proof-of-Stake-Velocity; I'm not sure.

This core solution to generating an order of events without time stamps, the Blockchain-by-Proof-of-Work (BPOW), has essentially remained unchanged since its original inception by Nakamoto, and it is the primary strength of any cryptocurrency protocol. Variants in measuring the blockchain, such as following the heaviest subtree rather than the longest chain, have been proposed, and are the best hope at improving that basic piece of the protocol. https://eprint.iacr.org/2013/881.pdf
AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 16, 2014, 10:07:58 PM
 #92

I found my post where I had analyzed this paper on May 14.

Well, I see that as of January 2014, others below started to expound upon what I had explained in November 2013 in the threads given by the quoted links above.

On The Longest Chain Rule and Programmed Self-Destruction of Crypto Currencies


...

The rest of the point of the above paper regarding tx timestamps is really a flawed ad hoc way of attempting to achieve the decentralization that the prior sentence would achieve more correctly.

http://arxiv.org/pdf/1405.0534.pdf#page=29

Quote from: Nicolas T. Courtois
A big question is whether timestamps are needed at all, see Section 7.3. An alternative to timestamps could be various pure consensus mechanisms without timestamps by which numerous network nodes would certify that they have seen one transaction earlier than another transaction. In this paper we take the view that they should be present by default and further confirmed by (the same) sorts of additional mechanisms.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 16, 2014, 10:19:18 PM
 #93

Quote from: surae.noether
The only other alternative Blockchain-by-Proof-of-X method that has been proposed since Nakamoto's solution has been Blockchain-by-Proof-of-Stake...

Recent rigorous security analyses of Blockchain-by-Proof-of-Stake methods are troubling...

Proof-of-stake == centralization.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 16, 2014, 11:19:59 PM
Last edit: July 17, 2014, 12:34:54 AM by AnonyMint
 #94

Quote from: AnonyMint
...Gmaxwell stated that the existing strategy has the same problem with derivative unwinds as my strategy-- that is not the same as saying my idea won't work...

The cited paper:

Quote from: surae.noether
Variants in measuring the blockchain, such as following the heaviest subtree, not the longest chain, have been proposed, and are the best hope at improving that basic piece of the protocol. https://eprint.iacr.org/2013/881.pdf

Quote
Perhaps the most important question that will affect Bitcoin’s success, is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the restrictions on the rate of transaction processing in Bitcoin as a function of both the bandwidth available to nodes and the network delay, both of which lower the efficiency of Bitcoin’s transaction processing.

This summarizes Gmaxwell's point as reinterpreted in the quote above, and also my orthogonal point that the number of confirmations needed to protect against an attacker who can sustain > 50% of the network hashrate grows without bound:

https://eprint.iacr.org/2013/881.pdf#page=7

Quote
The replacement of the current world-view with an alternative one has far reaching consequences: some transactions may be removed from the current ledger. This fact can be used by an attacker to reverse transactions. The attacker may pay some merchant and then secretly create a blockchain that is longer than that of the network that does not include his payment. By releasing this chain he can trigger a switch that effectively erases the transaction, or redirects the payment elsewhere. This is a difficult undertaking, since the honest nodes usually have a great deal of computational power, and the attacker must get very lucky if he is to replace long chains. The longer the chain, the more difficult it becomes to generate the proof-of-work required to replace it. Satoshi’s original security analysis defines a policy for receivers of payments: a transaction is only considered sufficiently irreversible after it was included in a block and some n additional blocks were built on top of it. With this policy, Satoshi shows that the probability of a successful attack can be made arbitrarily low. As a receiver of funds waits for more blocks (larger n), this probability goes down exponentially.

However, if an attacker has more computational power than the rest of the network combined (i.e., it holds at least 50% of the computing power), it is always able to generate blocks faster than the rest of the network and thus to reverse transactions at will (given enough time). This stronger form of attack is known as the 50% attack.
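
To make the "goes down exponentially" claim concrete, here is a minimal Python sketch of Satoshi's catch-up calculation from the Bitcoin whitepaper (my own transcription, not code from either paper): the probability that an attacker with hashrate share q eventually overtakes a chain that is z blocks ahead. Rosenfeld's analysis refines this model, but the qualitative behaviour is the same.

Code:
import math

def attacker_success_probability(q, z):
    """Satoshi's whitepaper estimate: probability that an attacker with
    hashrate share q (< 0.5) ever catches up from z blocks behind."""
    p = 1.0 - q
    if q >= p:
        return 1.0                 # with >= 50% the attacker eventually wins
    lam = z * (q / p)              # expected attacker progress while the honest chain gains z
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

for z in (1, 2, 4, 6, 12):
    print(z, round(attacker_success_probability(0.30, z), 4))

Running it shows the familiar behaviour: each additional confirmation shrinks the attacker's chance roughly geometrically, but only so long as q stays below one half.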

The paper even points out that with network propagation advantages the attacker may be able to sustain the longest chain indefinitely with < 50% of the network hashrate:

Quote
In fact, the assumption that at least 50% of the computational power is required for such an attack to succeed with high probability is inaccurate. If we assume the attacker is centralized and does not suffer from delays, he can beat a network that does suffer from delays using fewer resources. We formulate the exact conditions for safety from this attack, and amend Satoshi’s analysis below. We return to the analysis of the weaker double spend attack in Sections 6 and 7.

The following calculation applies to a block period of 3.5 (1/0.29) seconds and 17 (59/3.5) confirmations:

https://eprint.iacr.org/2013/881.pdf#page=17

Quote
in some network configurations that match the assumptions above, an attacker with just over 24% of the hash-rate can successfully execute a so-called 50% attack, i.e., to replace the main chain at will

The paper offers some insights related to those behind my idea:

https://eprint.iacr.org/2013/881.pdf#page=18

Quote
The basic observation behind the protocol modification that we suggest, is that blocks that are off the main chain can still contribute to a chain’s irreversibility. Consider for example a block B, and two blocks that were created on top of it, C1 and C2, i.e., parent(C1) = parent(C2) = B. The Bitcoin protocol, at its current form, will eventually adopt only one of the sub-chains rooted at C1 and C2, and will discard the other. Note however, that both blocks were created by nodes that have accepted block B and its entire history as correct. The heaviest sub-tree protocol we suggest makes use of this fact, and adds additional weight to block B, helping to ensure that it will be part of the main chain.

However, it is not trying to address the ephemeral > 50% attack that my idea does. Instead, the paper's GHOST protocol mitigates the fact that network propagation delay topologies can otherwise give the attacker an advantage, such that he can execute attacks with the same probability of success with less than 50% of the hashrate (actually less than any probability curve calculated in Meni Rosenfeld's paper as cited).

GHOST aggregates the proof-of-work over n confirmations across all forks in the subtree above (i.e. after) B; in other words, it is a smoothing function:

Quote
[Footnote 10] We are in fact interested in the sub-tree with the hardest combined proof-of-work, but for the sake of conciseness, we write the size of the subtree instead.

https://eprint.iacr.org/2013/881.pdf#page=19

Quote
Thus, if we wait long enough, the honest subtree above B will be larger than the one constructed by the attacker, with sufficiently high probability.
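
To make the heaviest-subtree idea concrete, here is a toy Python sketch of GHOST-style tip selection (my own illustration, not code from the paper): at each fork it descends into the child whose subtree contains the most blocks, as a stand-in for the combined proof-of-work mentioned in footnote 10.

Code:
from collections import defaultdict

children = defaultdict(list)   # parent block hash -> blocks mined on top of it

def add_block(block_hash, parent_hash):
    children[parent_hash].append(block_hash)

def subtree_size(block_hash):
    """Blocks in the subtree rooted at block_hash (itself included). The paper
    weights by combined proof-of-work; counting blocks is the simplification
    it uses for conciseness (footnote 10 above)."""
    return 1 + sum(subtree_size(c) for c in children[block_hash])

def ghost_tip(genesis):
    """At each fork, follow the child whose subtree is heaviest."""
    block = genesis
    while children[block]:
        block = max(children[block], key=subtree_size)
    return block

# Toy tree: the longest chain runs through C2 (B -> C2 -> E -> F), but C1's
# subtree holds more blocks, so GHOST keeps the C1 branch.
for blk, parent in [("B", "genesis"), ("C1", "B"), ("C2", "B"),
                    ("D1", "C1"), ("D2", "C1"), ("D3", "C1"),
                    ("E", "C2"), ("F", "E")]:
    add_block(blk, parent)
print(ghost_tip("genesis"))   # D1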

In my idea the nodes of the network utilize an additional piece of information: the observation that a fork above B was orphaned by a fork which double-spends transactions in B (which weights the forks of the subtree above B by the observations of the nodes). Thus I believe my idea is more powerful and able to address the ephemeral > 50% attack (as well as < 50% attacks with greater probability) because it utilizes more information.

That paper and my idea are applying smoothing filters which incorporate more information, so that aliasing error is mitigated. There is a general concept in sampling theory-- don't discard information, filter it instead.

The paper also says we shouldn't discard information when retargeting the difficulty:

https://eprint.iacr.org/2013/881.pdf#page=21

Quote
Retargeting (difficulty adjustment).
Given potentially complex relations between the growth rate of the main chain and the rate of created blocks, and the fact that GHOST depends more on the total rate of block creation, we suggest a change in the way difficulty adjustments to the proof-of-work are done. Instead of targeting a certain rate of growth for the longest chain, i.e., β (which is Bitcoin’s current strategy), we suggest that the total rate of block creation be kept constant (λ). As our protocol requires knowledge of off chain blocks by all nodes, we propose that information about off chain blocks be embedded inside each block (blocks can simply hold hashes of other blocks they consider off-chain). This can be used to measure and re-target the difficulty level so as to keep the total block creation rate constant.
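
A rough sketch of the difference between the two retargeting policies, assuming a simple multiplicative adjustment rule (the retarget function, its parameters, and the numbers below are mine, not the paper's):

Code:
def retarget(old_difficulty, main_chain_blocks, off_chain_blocks,
             elapsed_seconds, target_rate_per_sec, use_total_rate):
    """Multiplicative difficulty adjustment.
    Bitcoin-style: target a growth rate for the main chain only.
    GHOST-style (as the quote suggests): target the total block creation rate,
    counting the off-chain blocks whose hashes are embedded in blocks."""
    observed = main_chain_blocks + (off_chain_blocks if use_total_rate else 0)
    observed_rate = observed / elapsed_seconds
    return old_difficulty * (observed_rate / target_rate_per_sec)

# With many orphans the main-chain rate understates the work being done,
# so the two policies diverge:
window = 2016 * 600                     # seconds in one Bitcoin retarget window
print(retarget(1.0, 2016, 500, window, 1 / 600, use_total_rate=False))  # 1.0
print(retarget(1.0, 2016, 500, window, 1 / 600, use_total_rate=True))   # ~1.25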

jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 17, 2014, 05:01:46 AM
 #95

Quote from: AnonyMint
Quote from: jonald_fyookball
I have such a headache from trying to understand this thread so instead, I am bookmarking it for later.  Cheesy

A lot of Anonymint's ideas (Aliasing) are analogies and technobabble, and really have nothing to do with Bitcoin or blockchain technology.

(He should really stop trying to impress everyone with big words and explain his ideas in plain English.)

Hey, ad hominem bullshit flows out of your mouth. I can't compensate for your intellectual handicap.

Quote from: jonald_fyookball
Anyway, Anonymint, I don't think more devs need to see the proposal.  You've already got Gmaxwell and DeathandTaxes telling you it won't work, what more do you want?

Neither Gmaxwell nor DeathandTaxes has stated that my idea won't work. D&T hasn't even addressed my idea. Gmaxwell stated that the existing strategy has the same problem with derivative unwinds as my strategy-- that is not the same as saying my idea won't work. But you aren't even able to comprehend that.

Quote from: jonald_fyookball
I think I've given some fairly clear arguments as well.

And I've addressed all of your posts.

Quote from: jonald_fyookball
I doubt you can rent half the network power anyway.

I've put that out there as an open question in my prior post and no one has credibly addressed it yet.

Uh, pretty sure they did say your idea won't work...more than once.

Notice they stopped posting because they are probably sick of your antics...AGAIN.  You earned your badge in trolling long ago with your hysterics that Bitcoiners are going to be broke and also go to jail because of clawbacks.

And no, you didn't address all my points.  You didn't solve the 51% attack problem in any way, shape, or form.

And if you want to disprove my assertion that you're a pseudo-intellectual, go ahead... explain in plain, simple English how "Aliasing" relates to blockchains or distributed consensus or anything related.



AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 17, 2014, 05:05:57 AM
 #96

Quote from: jonald_fyookball
Uh, pretty sure they did say your idea won't work...more than once.

Quote them to document your assertion. You can't.

jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 17, 2014, 05:17:15 AM
 #97

gmaxwell said "your proposal is completely ineffective".

But whatever, keep arguing... it's what you're best at.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 17, 2014, 06:03:34 AM
Last edit: July 17, 2014, 06:43:24 AM by AnonyMint
 #98

Quote from: jonald_fyookball
gmaxwell said "your proposal is completely ineffective".

But whatever, keep arguing... it's what you're best at.

I am not arguing. I am clarifying that you (are intellectually handicapped, which I avoided stating until you attacked me, and) don't understand what Gmaxwell wrote:

Quote from: AnonyMint
and where was my solution proposed before?

Quote from: gmaxwell
But it's not a solution, alas. Ignoring other issues, at best it still leaves it at a simple piece of extortion "return most of the funds to me or I will reliably destroy your payment". In that sense it is pretty much isomorphic to "replace by fee scorched earth". The ongoing effort has other problems— a txout can be spent again immediately in the same block. Imagine it takes months to get the fraud notice out (heck, imagine a malicious miner creating one and intentionally withholding it).  By that time perhaps virtually all coins in active circulation are derived from the conflicted coins. Now they finally get the notice out (/finally stop hiding it). What do you do?  Nothing? Invalidate _everyone's_ coins? Partially invalidate everyone's coins?  Each option is horrible. Doing nothing makes the 'fix' ineffective in all cases: the attacker just always sends the coins to themselves in the same block; the others make the failure propagate— potentially forever, and don't just hit the unlucky merchant with the potentially unwise policy.

The "makes the 'fix' ineffective in all cases" refers to "The ongoing effort has other problems", so he means there is no solution in Bitcoin (in "the ongoing effort").

And my response:

Quote from: gmaxwell
But it's not a solution, alas. Ignoring other issues, at best it still leaves it at a simple piece of extortion "return most of the funds to me or I will reliably destroy your payment".

That specific threat was paramount in my mind as I was designing my proposal and I think I eliminated it.

The mining nodes reject any double-spend transaction which conflicts with the block chain. The only transactions that can be unwound are those which appear in a competing fork, and only when that competing fork does not have enough sustained agreement. The premise is that the attacker can't maintain 50+% of the hashrate indefinitely. Essentially what I am proposing is that orphaned chains are not forgotten by the sustained majority when the longer chain temporarily double-spends the orphaned chain, so the sustained majority (eventually) unwinds the temporary attack. The attack is differentiated from the majority because it is not sustained indefinitely. Abstractly I am proposing a smoothing filter on the proof-of-work longest chain rule; the ephemeral attacker is the aliasing error.

And I think (perhaps) they can be unwound to eliminate the double-spend, rather than to the ether.

Gmaxwell asserted that if transactions can be unwound, then any recipient of funds could be threatened by the payer with sending a double-spend to invalidate the transaction.

I rebutted by explaining that transactions are only unwound if a double-spend appeared in a block chain, but that consensus nodes were not going to accept a double-spend into the block chain. The only way to get a double-spend into the block chain is to do a 50% attack, thus payers won't be able to make such a threat.

And Gmaxwell admitted that for the 50% attack scenario, Bitcoin has the same weakness in that transactions can be unwound when a chain is orphaned.

If I have misinterpreted his writings, he will I assume point that out.
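
For what it is worth, here is a toy Python sketch of how I read the proposal being debated above: a node remembers forks it has already seen, refuses to adopt a reorg that double-spends against a previously seen fork until the reorg has been sustained for some number of blocks, and otherwise falls back to ordinary longest-chain selection. Every name and threshold here is hypothetical; this is only an illustration of the claimed mechanism, not anything gmaxwell or AnonyMint published.

Code:
SUSTAIN_WINDOW = 100   # hypothetical: how long a conflicting reorg must stay ahead

class Node:
    """Toy node. Chains are lists of blocks; each block is a set of spent
    txout ids, and every chain listed here starts at the point of divergence
    from the node's current best chain."""
    def __init__(self):
        self.best_chain = []     # the chain this node currently extends
        self.seen_forks = []     # competing forks observed and remembered

    def _conflicts_with_seen_fork(self, chain):
        spent = set().union(*chain) if chain else set()
        for fork in self.seen_forks:
            fork_spent = set().union(*fork) if fork else set()
            if spent & fork_spent:          # the same outputs spent on both sides
                return True
        return False

    def receive_chain(self, chain):
        if len(chain) <= len(self.best_chain):
            self.seen_forks.append(chain)          # shorter fork: remember, don't adopt
        elif (self._conflicts_with_seen_fork(chain)
              and len(chain) < len(self.best_chain) + SUSTAIN_WINDOW):
            self.seen_forks.append(chain)          # conflicting reorg not yet sustained
        else:
            self.seen_forks.append(self.best_chain)   # adopt, but keep remembering the old tip
            self.best_chain = chain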

jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 17, 2014, 06:12:44 AM
 #99

The bottom line is your proposal just shifts around the conditions for an attack; it doesn't really add additional security without trading off security elsewhere.

If you disagree, the only person you seem to have convinced is yourself.

gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4172
Merit: 8419



View Profile WWW
July 17, 2014, 01:39:25 PM
 #100

Quote from: AnonyMint
If I have misinterpreted his writings, he will I assume point that out.

You have, but I've given up responding.
Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
July 17, 2014, 03:27:43 PM
 #101

I believe transactions-as-proof-of-stake (the heaviest subtree model) is probably the best alternative to proof-of-work - and it isn't all that good.

The basic idea is that the "finite resource" available for deciding to prefer one chain over another, is the set of unspent txouts that exist at the point of the chains' divergence from each other.  If transactions must give the hash of a very recent block (parent or grandparent to the block they'll be in) then they can be counted as a "vote" for a chain including that block. 

In practice, this makes it possible for an attacker to spend ten coins in one chain, then support a different chain by spending a thousand coins (probably to himself) there, and if the second chain is accepted it 'unspends' his ten coins.  Obviously this only works as a double spend if he does it before everybody else in the course of regular spending puts the first chain a thousand coins ahead. 

But it gets worse than that, because at any given moment there may be dozens or even hundreds of crooks looking for a chance to double spend, and if two competing chains appear, their efforts to make a small initial expenditure in the apparent leading chain and then dump a huge transaction into the second chain all reinforce each other. 

On one hand, if everybody understands the security requirement for transactions as proof of stake and regularly transacts their coins several times a day, (which you can arrange with a proof-of-stake interest/security payment for each transaction) the crooks shouldn't be able to overwhelm that traffic with their timing games.  On the other, that would generate an absolutely enormous blockchain and have a high communications overhead.

So in the short run, it doesn't work.  In the long run, it can provide an absolute security guarantee given enough time: once more than half of all the coins in txouts that existed before a block was created have been spent, that block becomes absolutely irrevocable no matter what proof-of-work anybody pours on or what manipulations they do with spending and transactions.
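
A minimal Python sketch of the scoring rule described above (my own toy code, with hypothetical data structures): each branch is scored by how much of the coin value in txouts that existed at the divergence point has since been spent on that branch, and a block becomes irrevocable once a branch built on it has consumed more than half of that value.

Code:
def branch_score(branch_txs, prefork_utxo_values):
    """Coin value of pre-fork txouts spent by transactions on this branch.
    branch_txs: iterable of transactions, each modelled as a set of spent txout ids.
    prefork_utxo_values: dict of txout id -> value for txouts that existed
    at the point where the competing branches diverged."""
    spent = set()
    for tx in branch_txs:
        spent |= tx & prefork_utxo_values.keys()
    return sum(prefork_utxo_values[o] for o in spent)

def prefer_branch(branch_a_txs, branch_b_txs, prefork_utxo_values):
    """Prefer the branch whose transactions have spent more of the pre-fork stake."""
    a = branch_score(branch_a_txs, prefork_utxo_values)
    b = branch_score(branch_b_txs, prefork_utxo_values)
    return "A" if a >= b else "B"

def is_irrevocable(branch_txs, prefork_utxo_values):
    """The long-run guarantee described above: once a branch has consumed more
    than half of all pre-fork coin value, no rival branch can out-score it."""
    total = sum(prefork_utxo_values.values())
    return branch_score(branch_txs, prefork_utxo_values) * 2 > total

# Toy usage, echoing the double-spend worry above: the attacker's branch spends
# one large old output, the honest branch many small ones.
utxos = {"o1": 10, "o2": 10, "o3": 10, "o4": 1000}
honest = [{"o1"}, {"o2"}, {"o3"}]
attack = [{"o4"}]
print(prefer_branch(honest, attack, utxos))   # "B": the single large stake wins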
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
July 17, 2014, 03:43:33 PM
Last edit: July 17, 2014, 09:11:05 PM by DeathAndTaxes
 #102

Quote from: Cryddit
I believe transactions-as-proof-of-stake (the heaviest subtree model) is probably the best alternative to proof-of-work - and it isn't all that good.

Agreed.  One issue is that it makes risk analysis difficult.  This means the simplicity of "wait for x confirmations and you are safe (unless the attacker has a majority of the hashrate)" no longer applies.

Quote from: Cryddit
In the long run, it can provide an absolute security guarantee given enough time; Once more than half of all the coins in txouts that existed before a block was created have been spent, that block becomes absolutely irrevocable no matter what proof-of-work anybody pours on or what manipulations they do with spending and transactions.  

One problem is that a large number of outputs have not ever been spent, and may not be spent for years or decades.  So it could be some time before a block becomes absolutely irrevocable.  The large amount of old unspent outputs creates uncertainty.  One variant would be to only include outputs which are below a certain age at the time of the block: for example, you could say that for the purpose of block scoring, outputs older than one block month (4,320 blocks) aren't included in the score.  This would reduce the requirement to only a majority of the outputs less than a month old.
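
A rough sketch of that age cutoff, reusing the kind of branch scoring sketched earlier in the thread (the constant and function names are mine, purely illustrative):

Code:
MAX_OUTPUT_AGE = 4320   # one "block month", per the suggestion above

def eligible_prefork_outputs(prefork_utxos, fork_height):
    """Drop pre-fork outputs older than MAX_OUTPUT_AGE blocks at the fork point,
    so long-dormant coins cannot indefinitely delay a block becoming irrevocable.
    prefork_utxos: dict of txout id -> (value, creation_height)."""
    return {o: value
            for o, (value, created) in prefork_utxos.items()
            if fork_height - created < MAX_OUTPUT_AGE}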

Still, it will require some careful analysis to avoid unexpected weaknesses.  For as complex as Bitcoin is in implementation, it is rather simple (maybe elegant is a better word) in design, and even that simple design has its nuances and gotchas.  More complexity may not be the answer.
jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1004


Core dev leaves me neg feedback #abuse #political


View Profile
July 17, 2014, 07:46:19 PM
 #103

The heaviest subtree model?   What's that? 
I've not heard that one yet.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 17, 2014, 10:21:27 PM
Last edit: July 17, 2014, 10:33:31 PM by AnonyMint
 #104

Quote from: AnonyMint
If I have misinterpreted his writings, he will I assume point that out.
Quote from: gmaxwell
You have, but I've given up responding.

Nice to see technical discussion has been reduced to politics. Smells like the typical "not invented here, so ignore it" phenomenon of vested interests (or "I don't want to help the competition").

What is the point of technical discussion if you are not going to make necessary clarifications?

I believe my interpretation is complete. It is up to you to show otherwise, or give up if you can't/won't.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 17, 2014, 10:30:46 PM
Last edit: July 17, 2014, 10:51:50 PM by AnonyMint
 #105

Quote from: DeathAndTaxes
Quote from: Cryddit
I believe transactions-as-proof-of-stake (the heaviest subtree model) is probably the best alternative to proof-of-work - and it isn't all that good.

Agreed.  One issue is that it makes risk analysis difficult.  This means the simplicity of "wait for x confirmations and you are safe (unless the attacker has a majority of the hashrate)" no longer applies.

I don't know if I have missed some discussion that would have changed the understanding I formed, but I pointed out egregious flaws in the original proposal for Transactions as Proof-of-Stake.

The fundamental math problem with using any metric from the block chain (or any consensus voting such as proof-of-stake) is that the input entropy can be gamed deterministically, unlike proof-of-work, which is a randomized process; i.e., the input entropy is not orthogonally unbounded as it is in the randomization of proof-of-work.

AnonyMint
Hero Member
*****
Offline Offline

Activity: 518
Merit: 521


View Profile
July 17, 2014, 11:57:02 PM
 #106

Quote
Academics have produced nothing but perfect nonsense on the topic of Bitcoin. This is one of the worst.

Nice "not invented here" bravado, but incorrect.

Quote
He's also right about the effects of block reward halving on hash power allocation.

Quote
No he isn't, or at least his conclusions on what "will" happen are just speculation.

...

a) continue to mine bitcoin for half the revenue
b) sell the hardware to a miner with lower costs (namely cheaper/free electricity and cool climate)
c) mine an altcoin.

The author jumps right to c.

The author might have the wrong justification for the conclusion, but the "Programmed Self-Destruction" conclusion is not speculation, because the self-evident fact is that investors only spend a small fraction of their income; thus if you don't redistribute currency, its use dies [1].

[1] https://bitcointalk.org/index.php?topic=597878.msg7900846#msg7900846

Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
July 21, 2014, 11:08:29 PM
 #107

Quote from: jonald_fyookball
The heaviest subtree model?   What's that?
I've not heard that one yet.

In any decision about which of two potential branches to accept, the transactions-as-proof-of-stake method (aka heaviest-subtree model) prefers the branch whose transactions have spent the greatest proportion of the txouts that existed at the moment when those two branches diverged. 

In order for this to work, transactions must belong clearly to one branch or the other.  So the transaction itself has to have a block ID embedded in it, and it counts as "support" for the branch that includes that block ID. 

So the 'finite resource' that must be used up to support a branch and which, if used, cannot also be used to support another branch, is coins in txouts, not hashing power.  And the people who deserve the coins for helping to secure the blockchain are everybody who made a transaction using their stake, rather than whoever came up with the winning hash. 
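
A toy Python tally of those "votes", with hypothetical data structures; it also illustrates the concern raised above that a single large spend on a rival branch can outweigh many small honest ones:

Code:
def branch_votes(transactions, blocks_in_branch):
    """Tally the stake 'voting' for a branch: a transaction supports the branch
    that contains the recent block hash embedded in it.
    transactions: iterable of (embedded_block_hash, value_spent) pairs.
    blocks_in_branch: set of block hashes making up the branch."""
    return sum(value for block_hash, value in transactions
               if block_hash in blocks_in_branch)

# Toy usage: branches diverge after block "B"; three small honest spends commit
# to the C1 branch while one large spend commits to the rival C2 branch.
txs = [("C1", 10), ("D1", 10), ("D1", 10), ("C2", 1000)]
print(branch_votes(txs, {"B", "C1", "D1"}), branch_votes(txs, {"B", "C2"}))  # 30 1000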

andikarl
Newbie
*
Offline Offline

Activity: 10
Merit: 0


View Profile
September 08, 2014, 07:01:00 PM
 #108

@all

A very interesting paper. But I don't think that crypto currencies will self-destruct if we make the right adjustments.
I have thought about a way to guarantee the decentralization of a crypto currency such as Bitcoin in the long term. I have written an article and would like to hear what you think about it:
http://techreports2014.wordpress.com/2014/09/07/fundamentals-of-a-possible-new-bitcoin-fork-bitcoin-2-0/

Hope it can help, and I would love to discuss it with you guys.

Greetings.
Andrew
andytoshi
Full Member
***
Offline Offline

Activity: 179
Merit: 151

-


View Profile
September 08, 2014, 09:22:03 PM
 #109

Quote from: andikarl
A very interesting paper. But I don't think that crypto currencies will self-destruct if we make the right adjustments.
I have thought about a way to guarantee the decentralization of a crypto currency such as Bitcoin in the long term. I have written an article and would like to hear what you think about it:
http://techreports2014.wordpress.com/2014/09/07/fundamentals-of-a-possible-new-bitcoin-fork-bitcoin-2-0/

Hope it can help, and I would love to discuss it with you guys.

Hi Andrew,

In future it is better to create a new thread rather than resurrecting an old one, especially one as vivacious as this one.

As to the content of your article, I briefly skimmed it. A few comments -- your concerns about ASIC monopolies are largely addressed in my ASICs and Decentralization FAQ, and secondly, the "anti-monopoly" scheme by Sirer and Eyal is seriously and fundamentally broken by being progress-free. It seems to me that these authors are more concerned with promoting themselves with doomsday headlines than they are getting the fundamentals of what they write about correct, and it's best for the Bitcoin world if they not be given attention.

Andrew
andikarl
Newbie
*
Offline Offline

Activity: 10
Merit: 0


View Profile
September 09, 2014, 01:06:14 PM
 #110


Quote from: andytoshi
Hi Andrew,

In future it is better to create a new thread rather than resurrecting an old one, especially one as vivacious as this one.

As to the content of your article, I briefly skimmed it. A few comments -- your concerns about ASIC monopolies are largely addressed in my ASICs and Decentralization FAQ, and secondly, the "anti-monopoly" scheme by Sirer and Eyal is seriously and fundamentally broken by being progress-free. It seems to me that these authors are more concerned with promoting themselves with doomsday headlines than they are getting the fundamentals of what they write about correct, and it's best for the Bitcoin world if they not be given attention.

Andrew

Hi Andrew,

thank you for your reply, and thanks for your FAQ; it explains a lot of things pretty well, especially why ASIC miners are so important for validating the blockchain in the long term. On the other hand, in my article I have not ruled out the use of ASIC miners; it is only possible future monopolies controlling the network that I see as a threat to Bitcoin. Please read the part of my article which deals with the position stamp for guaranteeing the decentralization of Bitcoin. By decentralization I do not mean that in the future there should not be any mining pools or big miners. On the contrary, I mean a safeguard implemented into the blockchain to counter attacks and to detect monopolies. My suggestions would also impact the hardware flow, but that we would have to discuss in detail.

I think I will open a new thread about this topic Smiley

Cheers,
Andrew
UnunoctiumTesticles
Full Member
***
Offline Offline

Activity: 154
Merit: 100


View Profile
November 29, 2014, 08:49:31 PM
 #111

Quote from: AnonyMint
If I have misinterpreted his writings, he will I assume point that out.
Quote from: gmaxwell
You have, but I've given up responding.

Btw, a couple of months ago I solved the selfish-mining problem and proved to myself that gmaxwell was wrong. Here I quote a snippet from my private design document:

Code:
   3. Sharing revenue proportionally with the fork(s) of lower cumulative
      difficulty entirely defeats selfish mining (positive relative revenue) for
      any α. All blocks in the merged forks receive their full block reward
      adjusted by relative difficulty.

         r_others = p_0(1-α) + p_1(1-α) + p_2(1-α) + p[k>2](1-α), cases (e), (f), (g), and (h)
         r_pool = p_1(1-α)/2 + p_2(1-α)2 + p[k>2](1-α)/2, cases (f), (g), and (h)

         r_others = p_1(1-α)(1/α + 1 + α/(1-α) + (α-1)/(2α-1) - 1 - α/(1-α))
         r_pool = p_1(1-α)(1/2 + 2α/(1-α) + (α-1)/(4α-2) - 1/2 - α/(2-2α))

         R_pool = (3α/(2-2α) + (α-1)/(4α-2))/(1/α  + 3α/(2-2α) + 3(α-1)/(4α-2))

      Plot the following at wolframalpha.com.

         (3α/(2-2α) + (α-1)/(4α-2))/(1/α  + 3α/(2-2α) + 3(α-1)/(4α-2)),  α, 0 < α < 0.5

      However, merging forks of lower cumulative difficulty would remove the
      economic incentive to propagate forks as fast as possible; and issuing a
      double-spend would be trivial. The solution is to only merge forks of
      lower cumulative difficulty when they are known by the majority before
      receiving a fork of higher cumulative difficulty. Thus an attacker or
      network hiccup that creates a hidden higher cumulative difficulty fork
      has an incentive to propagate it before that fork falls behind the
      majority. A node assumes it is part of the majority if it is participating
      non-selfishly; it will attempt to merge any lower cumulative difficulty
      forks it was aware of upon receiving a higher cumulative difficulty fork.
      If it is not part of the majority, then its attempt will not be accepted.
      If there are double-spends at stake, each node will continue to try for
      the longest fork miners wish to be able to merge, i.e. longer forks with
      double-spends won't be merged so the market will decide the value of the
      two forks. If there are no double-spends at stake then to defeat selfish
      mining, each node will continue to try during the duration of the longest
      fork probable for an attacker α < 0.5[6]. If a node selfishly forsakes its
      obligation and joins the attacker, that node attacks the value of the
      block rewards it earned.

      Also, forfeiting double-spends defeats any incentive to temporarily rent
      greater than 50% of the network hashrate because the persistent honest
      miners will forfeit the double-spends from the attacker’s fork upon it
      relinquishing control. Compared to Bitcoin, there is no increased
      incentive to double-spend, because miners will not place into any block a
      double-spend they’ve already seen; double-spends can only appear in forks.
      In Bitcoin one fork wins so one transaction from each of the double-spend
      is forfeited so the honest recipient loses when the attacker’s fork wins.
      In this new algorithm both of the transactions from each double-spend are
      forfeited, so both the honest recipient and the attacker lose.

      Defeating short-term rented hashrate attacks in conjunction with only
      accepting inputs to a transaction which are as old as the longest duration
      fork that will be merged, proves the probability of a double-spend[6] to
      be determined solely by the number of block confirmations no matter how
      fast the block period is.

      Instead of monolithically choosing a winning fork, forfeiting double-
      spends deletes selective downstream transactions. Thus fully opaque block
      chains such as Zerocash, are thought to be incompatible because they not
      only obscure which transaction consumed its inputs but also hide any coin
      identifier that could correlate spends on separate forks. Cryptonote’s
      one-time ring signatures are more flexible because mixes could choose to
      mix only with sufficiently aged transaction outputs.

      Each block of the Proof Chain contains a tree of block hashes, which are
      the forks that were merged and a branch in a subsequent block may continue
      a branch in a prior block. The restricted merging rule, double-spend
      forfeiture and proof trees avoid the risks of Ethereum’s proposed
      approximation[5]. Miners have to maintain transaction history for the
      longest fork they wish to merge. And downstream consumers of inputs must
      wait for the longest fork they anticipate[6].


   [5] https://blog.ethereum.org/2014/07/11/toward-a-12-second-block-time/
       Now, we can’t reward all stales always and forever; that would be a
       bookkeeping nightmare (the algorithm would need to check very diligently
       that a newly included uncle had never been included before, so we would
       need an “uncle tree” in each block alongside the transaction tree and
       state tree) and more importantly it would make double-spends cost-free...
       Specifically, when only the main chain gets rewarded there is an
       unambiguous argument...Additionally, there is a selfish-mining-esque
       attack against single-level...The presence of one non-standard strategy
       strongly suggests the existence of other, and more exploitative,
       non-standard strategies...True. We solved the selfish mining bugs of
       traditional mining and 1-level GHOST, but this may end up introducing
       some new subtlety; I hinted at a couple potential trouble sources in the
       post.
   [6] Meni Rosenfeld. Analysis of hashrate-based double-spending.
       Section 5 Graphs and analysis.
       http://arxiv.org/pdf/1402.2009v1.pdf#page=10
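
For anyone who doesn't want to paste the expression into Wolfram Alpha, here is a small Python transcription of the quoted R_pool formula over the stated range of α (a direct transcription only; I have not verified the derivation):

Code:
def r_pool(a):
    """Relative pool revenue R_pool as given in the snippet above (a = alpha)."""
    num = 3*a/(2 - 2*a) + (a - 1)/(4*a - 2)
    den = 1/a + 3*a/(2 - 2*a) + 3*(a - 1)/(4*a - 2)
    return num / den

for a in (0.1, 0.2, 0.25, 0.3, 0.4, 0.49):
    print(round(a, 2), round(r_pool(a), 4))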