Bitcoin Forum
Author Topic: DECENTRALIZED crypto currency (including Bitcoin) is a delusion (any solutions?)  (Read 91075 times)
sidhujag
Legendary
*
Offline

Activity: 2044
Merit: 1005


View Profile
January 22, 2016, 08:47:03 PM
 #601

holistic

Every time you write "holistic" I recall https://www.youtube.com/watch?v=RtXF2j5JAds. Is there a synonym which could be used sometimes instead of "holistic"?

LOL!  Same here.

Don't worry, it's just a word for how TPTB has to "load it into his head"

Reference:

I am trying to load your model in my head, but you are not providing sufficient information.

I still have a 1" hole in the top of my skull where I was hit with a hammer by a neighbor when I used to live in the squalor area (when my ex-wife made a fight about a hammock

ahahhah +1
TPTB_need_war (OP)
Sr. Member
****
Offline

Activity: 420
Merit: 257


View Profile
January 23, 2016, 02:16:36 AM
Last edit: January 23, 2016, 05:04:45 AM by TPTB_need_war
 #602

[...]

Afaik, the insoluble fundamental problems of decentralized exchanges that operate directly on the block chains of the coins being exchanged are:

  • Block chains don't have fast enough transactions and can't handle the trading volume.
  • The exchange protocol requires long delays (partially because of the third issue below), which means the paradigm can be DDoS attacked (jammed), thus rendering it unsuitable (since exchange is normally a very time-sensitive action).
  • Orphaned blocks can lead to one of the parties losing all the coins.

And yes afaik malleability makes decentralized exchange impossible. But even after fixing that, the block chains of all the altcoins need to have special changes made to their protocol (hard forks) and still you will have the insoluble problems I bullet-pointed above.

[...]

As for the block chain scaling issue in the first bullet point above, that is what I am working on now with my proposed design, which was discussed upthread.

As for Bitshares[...]

So after further thought, item #1 I am aiming to solve with my block chain technology and others may be working on similar block chain tech.

Item #2 is partially a function of the block chain technology in terms of delays that can be reduced, but the more salient issue is that the protocol can be jammed (not DDoS attacked) by a party that backs out and doesn't complete TierNolan's protocol. But the solution to this jamming problem is to allow participants to set how many Coin Days Destroyed the counterparty must possess on the UTXO being offered in the trade. Has anyone else ever suggested this solution?

Item #3 is also a function of block chain tech and how likely an attack on the probabilistic model of confirmations is.

So I am saying that these issues appear to all be solvable, at least excluding the real-time trader scenario.
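To make the Coin Days Destroyed filter for item #2 concrete, here is a minimal sketch (the function and parameter names are purely illustrative; a real implementation would read UTXO values and ages from the chain):

```python
def coin_days_destroyed(value_btc: float, age_days: float) -> float:
    """Coin days destroyed by spending a UTXO: its value times its age."""
    return value_btc * age_days

def accept_counterparty(offered_utxos, min_cdd: float) -> bool:
    """Anti-jamming filter: only enter TierNolan's protocol with a
    counterparty whose offered UTXOs carry at least min_cdd coin-days,
    so repeatedly backing out with freshly split coins becomes costly."""
    total = sum(coin_days_destroyed(value, age) for value, age in offered_utxos)
    return total >= min_cdd
```

A jammer would have to age (or buy aged) coins for every aborted trade, which is exactly the scarce resource this knob meters.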

In regards to decentralised exchange using ACCT, the introduction of CLTV means that malleability is no longer an issue (you don't need to refer to transaction ids that aren't already confirmed).

I have built code that will create a Bitcoin script that works by combining the ACCT and CLTV with a P2SH address and redeem script.

https://bitcointalk.org/index.php?topic=598860.msg13435766#msg13435766

What about the other issues with decentralized exchange I enumerated?

It is only a solution to the malleability issue that rendered TierNolan's original implementation practically unusable (so I just wanted to clarify that malleability is no longer a part of the problem).

I see ACCT with CLTV as being more akin to a street currency exchange than to an exchange like Cryptsy (i.e. useful in the same way as a street currency exchange is for fiat when you are traveling abroad but not very suitable for the purposes of day-trading and cannot be used for high frequency trading of course).

The point being that if the purpose of, say, obtaining LTC for BTC was not to day trade but just to have some LTC as some sort of hedge or to use for some other purpose, then you won't need to trust an exchange like Cryptsy.

I will want to pull up TierNolan's BIP and get a full explanation from you as to what you have accomplished. I assume we can do this in PM.
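For concreteness, an ACCT-style redeem script combining a hash lock with a CLTV refund path (the construction described in the quoted post) can be sketched like this. The opcode byte values are from the Bitcoin script spec; the function names and exact script layout are illustrative assumptions, not the poster's actual code:

```python
# Opcode values from the Bitcoin script specification.
OP_IF, OP_ELSE, OP_ENDIF = 0x63, 0x67, 0x68
OP_DROP, OP_SHA256, OP_EQUALVERIFY = 0x75, 0xa8, 0x88
OP_CHECKSIG, OP_CHECKLOCKTIMEVERIFY = 0xac, 0xb1

def push(data: bytes) -> bytes:
    """Minimal direct push (for data up to 75 bytes)."""
    assert 0 < len(data) <= 75
    return bytes([len(data)]) + data

def acct_redeem_script(secret_hash: bytes, redeem_pubkey: bytes,
                       refund_pubkey: bytes, locktime: int) -> bytes:
    """IF branch: redeemer reveals the secret whose SHA256 is secret_hash.
    ELSE branch: after locktime (enforced by CLTV, so no unconfirmed txid
    is ever referenced), the payer reclaims the funds."""
    lock = locktime.to_bytes(5, "little").rstrip(b"\x00")
    return (bytes([OP_IF, OP_SHA256]) + push(secret_hash)
            + bytes([OP_EQUALVERIFY]) + push(redeem_pubkey)
            + bytes([OP_ELSE]) + push(lock)
            + bytes([OP_CHECKLOCKTIMEVERIFY, OP_DROP]) + push(refund_pubkey)
            + bytes([OP_ENDIF, OP_CHECKSIG]))
```

Hashing this script (SHA256 then RIPEMD-160) yields the P2SH payload; since neither branch references an unconfirmed txid, malleability no longer threatens the refund path.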

TPTB_need_war (OP)
Sr. Member
****
Offline

Activity: 420
Merit: 257


View Profile
January 23, 2016, 05:07:50 AM
 #603

For anyone else who hasn't put their fingers in their ears, it should be obvious that there does exist an equilibrium in bitcoin mining, centered around the 25 BTC cost per block, because if there weren't, one of two things would happen:

1. Cost of production exceeds 25 BTC, all miners go out of business since their costs exceed their profits
2. Cost of production is less than 25 BTC, the block interval shrinks to 0, and bitcoin inflates wildly out of control

You might argue that difficulty adjusts to compensate, but that can only happen if there can be an equilibrium. TPTB_need_war is confusing the fact that some miners are more profitable than others with the fact that, overall, the network is break-even, since some miners are unprofitable.

So the miner attacking the coin is going to have the capital cost which coincides with the equilibrium average.  Roll Eyes

As I wrote to you last time, there is no equilibrium (capital cost) in our consideration of the equation of the profitability of an attack and the incentives thereof.

Sigh.

I get frustrated because this post of yours ignores what I wrote in the prior post.

There is no such equilibrium because not all miners have the same costs for the same difficulty.

Don't feel bad, as it is a mistake so called "economists" make routinely, trying to model some phenomenon with statistics and totally failing to understand that the mean doesn't actually exist and instead the distribution is composed of diverse samples.
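To illustrate the point about heterogeneous costs, here is a toy model (all numbers invented for illustration, not from any real data): miners exit until the marginal miner breaks even, yet the survivors' margins differ widely, so there is no single "equilibrium capital cost" to attribute to an attacker:

```python
def surviving_miners(costs_per_block, reward_btc=25.0):
    """Iterate miner exit to a fixed point. Assume equal hashrate per
    active miner, so each earns reward_btc / len(active) per block on
    average; a miner stays only while that covers its own cost."""
    active = sorted(costs_per_block)
    while active:
        revenue_each = reward_btc / len(active)
        survivors = [c for c in active if c <= revenue_each]
        if survivors == active:
            return survivors
        active = survivors
    return []
```

With per-block costs of [0.5, 1, 2, 5, 10, 30] BTC, only the three cheapest miners survive, each earning about 8.33 BTC per block against costs of 0.5 to 2 BTC: only the marginal miner is anywhere near break-even, which is why reasoning from a network-wide mean fails.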

TPTB_need_war (OP)
Sr. Member
****
Offline

Activity: 420
Merit: 257


View Profile
January 23, 2016, 06:16:59 AM
 #604

You should include a discussion of the halving, to be fair.  It is coming up in a few months... I think.

Already mentioned upthread:

Quote
The professional miners are aligned to paying back the loans they incurred to buy mining farms. Frankly, I think your post is delusional. Get a grip on economics. Usury (debt) enables the banksters to take entire control of the economics of mining and charge the costs to the collective.

I think the current manifestation of professional mining will evolve as bitcoin shifts to lower block rewards. We really have no idea what the next halving will bring, nor the next one. I wouldn't be surprised if a majority of net hash gradually shifts back to the enthusiasts from the mining centers. The big corporate mining farms will shut down, the manufactures running these farms will liquidate their hardware, the hardware will flood the market and be distributed extensively.

Back to Economics 101. The marginal producers are the first to go. The block halving will hand a greater percentage of the hash rate to the hydropower mining farms (and Larry Summers' 21 Inc) whose costs are probably below $50 per BTC. Bitcoin is designed to slowly transition to corporate control. Even Satoshi admitted that.

And those who think riding that trend is in their benefit, my thinking is they don't really have good foresight of where this shit is going to end up real bad for humanity real fast. We are talking about years, not decades.

And most BTC investors will end up losing, not gaining. The banksters are cleverly mining all of us via their funding of the mining farms. We are just stupid cows.

TPTB_need_war (OP)
Sr. Member
****
Offline

Activity: 420
Merit: 257


View Profile
January 23, 2016, 06:32:29 AM
 #605

You state that the CAP theorem is fundamental to all of what you are describing.  The CAP theorem is essentially this:

a statement that it is impossible for a distributed computer system to simultaneously provide all three of the following guarantees:

    Consistency (all nodes see the same data at the same time)
    Availability (a guarantee that every request receives a response about whether it succeeded or failed)
    Partition tolerance (the system continues to operate despite arbitrary partitioning due to network failures)

Let's start with an assumption that I agree with the CAP theorem, and further assume that this notion is actually correct.

In that case, let's say all nodes don't see the same data at the same time.  Seems reasonable, after all, how could they?

The problem manifests as an inability to reach consensus on which of the conflicting Partitions has the accepted and rejected double-spend. You might start reading roughly page 27 of this thread to see the discussion between enet, monsterer, and myself, wherein I elucidated this.

Availability - for the sake of argument, let's say every request receives a response about whether it succeeded or failed, or if it doesn't the first time, it is programmed to repeat the same request until it receives a reply, which it will receive within a reasonable period of time.

Inaccessibility manifests as a protocol requirement to prevent interoperation between Partitions in order to resolve the Inconsistency mentioned above.

Partition tolerance - Let's say for the sake of argument the system continues to operate despite arbitrary partitioning. One example perhaps being a network failure leading to an event like the 2013 Bitcoin fork - analyzed here by Arvind Narayanan, who interestingly poses arguments in favor of a certain degree of centralization within bitcoin development: https://freedom-to-tinker.com/blog/randomwalker/analyzing-the-2013-bitcoin-fork-centralized-decision-making-saved-the-day/

Indeed centralization of policy (thus abandoning decentralized, permissionless commerce) is the only way to deal with the Inconsistency or Unavailability (choose one) when multiple Partitions are allowed.

I have argued upthread that centralization will be the only way Iota (with its multiple Partitions) will avoid divergent, chaotic Inconsistency (since obviously it doesn't limit Availability).

So under these conditions as described above, which I suggest are roughly representative of the actual condition of bitcoin (as an example) at most times, it would seem to me that despite the presence of the CAP theorem, the system continues.  Why?  Because the system is dynamic, not static.

Bitcoin continues because it doesn't allow multiple Partitions, and because in the case where chaos would result from a fork, centralization of the mining is able to coordinate to set policy. But we also see the negative side of centralization: recently the Chinese miners who control roughly 67% of Bitcoin's network hashrate were able to veto any block size increase, and lie about their justification, since an analysis by smooth and me (mostly me) concluded that their excuse about the slow connection across China's firewall is not a valid technical justification. Large blocks can be handled with their slow connection by putting a pool outside of China to communicate block hashes to the mining hardware in China.
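The Inconsistency in question can be shown with a deliberately tiny toy ledger (hypothetical code, not any real client): each partition is locally correct, yet after the partitions heal there is no rule inside the data for picking a winner.

```python
class PartitionLedger:
    """One network partition's view: first spend of an outpoint wins locally."""
    def __init__(self):
        self.spends = {}                     # outpoint -> accepted txid

    def apply(self, txid: str, outpoint: str) -> bool:
        if outpoint in self.spends:
            return False                     # local double-spend, rejected
        self.spends[outpoint] = txid
        return True

def merge_conflicts(a: "PartitionLedger", b: "PartitionLedger") -> set:
    """Outpoints whose accepted spends differ once the partitions rejoin.
    Resolving these needs an external policy (longest chain rule, or a
    centralized decision as in the 2013 fork); the ledgers alone cannot."""
    return {op for op in a.spends.keys() & b.spends.keys()
            if a.spends[op] != b.spends[op]}
```

Both partitions behaved correctly on their own, which is exactly the CAP trade: to regain Consistency after the merge you must either refuse service during the partition or impose a policy from outside the data.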

Possibly, for systems that are very fragile or inflexible, I think changes in these C-A-P conditions could cause them problems of varying degrees of seriousness.  But even highly centralized systems (e.g. corporation-states), which are highly resistant to change, are curiously persistent.  This may be because of people's desire to perpetuate an institutional memory and cultural history of organizational (and national or tribal) ideology.  Belonging to a large group - an identity which conveys a larger sense of belonging - has been, for good or for ill, branded into the human psyche.  War, government failures, economic disasters, mass murder - nothing seems to stop the bulk of people in society from falling into the trap of the state, again and again.  But I digress.  What about those decentralized systems?  Does the issue of the CAP theorem necessarily mean that they can't work?

Not necessarily.  If a system is dynamic enough, and is managed well by its community (however that needs to happen with a distributed system, in order for some degree of balance to be attained between a "centralized" development structure and a "decentralized" system, as no system is ever 100% "decentralized"), then a well-cared-for decentralized, distributed system can be continuously propagated (or re-propagated, much like a plant's seeds are used to regrow the fields).

This noise is what happens when smart people haven't studied an issue in depth and start going off on conceptual abstractions in their mind without really understanding what they are conceptualizing. I hope my explanations above have caused you to realize you are incorrect.

Interesting discussion though!

Thanks. Hopefully you will read it all.

Johnny Mnemonic
Hero Member
*****
Offline

Activity: 795
Merit: 514



View Profile
January 23, 2016, 07:11:22 AM
 #606

TPTB, from what I understand your design is supposed to eliminate mining profitability by requiring users to send PoW with each transaction. Are you (not) trying to tell us that you plan to decouple transaction processing from currency distribution (i.e. no block rewards)?
TPTB_need_war (OP)
Sr. Member
****
Offline

Activity: 420
Merit: 257


View Profile
January 23, 2016, 09:38:29 AM
 #607

TPTB, from what I understand your design is supposed to eliminate mining profitability by requiring users to send PoW with each transaction. Are you (not) trying to tell us that you plan to decouple transaction processing from currency distribution (i.e. no block rewards)?

There could still be a block reward for as long as it is unprofitable for professional mining farms, due to the difficulty created by all those submitting PoW and not expecting a profit.

Note however that microtransactions may not be a viable model, which throws a monkey wrench into my design plans (because the design needs users to be mining often in order to drive difficulty very high), but I have another idea to investigate now as a possible workaround.

TPTB_need_war (OP)
Sr. Member
****
Offline

Activity: 420
Merit: 257


View Profile
January 23, 2016, 12:24:54 PM
Last edit: February 10, 2016, 05:20:22 AM by TPTB_need_war
 #608

Warning: I, as AnonyMint, proposed this in mid-2013 and referred to it as Proof-of-Storage. I also discovered it was fundamentally flawed. If you continue, you will be wasting your time. Eventually I will come back and explain to you and Andrew Miller, PhD, why this is flawed.

Details are definitely needed here. I know about Proofs-of-Space and Proofs of Space-Time (http://eprint.iacr.org/2016/035.pdf), Miller's Proof-of-Retrievability (Permacoin), and White's Proof-of-Storage in Qeditas. What's AnonyMint's proposal about?

We found some possible drawbacks and attack vectors in Permacoin, but no fundamental flaws.

In my original analysis in 2013, I went down the same rat hole of flaws as in section "4.2 Local-POR Lottery" of the white paper. They assume so many things (including, for example, that Amazon bandwidth is 10 - 100X more expensive than a dedicated host), and when you work through all the analysis, the scheme does not work to prevent centralization of the mining, and thus the permissionless (uncensored) nature and the robustness/durability of file storage will not be sustained.

P.S. I also read these:

https://bitcointalk.org/index.php?topic=186601.0 (this was an offshoot I think of my idea which was in my March 2013 thread)
https://bitcointalk.org/index.php?topic=555375.msg6536798#msg6536798



Proving data is stored on a decentralised node is something of an ongoing project.  

Others are looking at the issue:

https://bitslog.wordpress.com/2015/09/16/proof-of-unique-blockchain-storage-revised/

So far I think PoW for nodes to validate blocks or data they contain is an interesting approach.

Afaics, all of these proof-of-storage/retrievability designs are fundamentally flawed (MaidSafe, Storj, Sia, etc.) and can't ever be fixed.

They try to use network latency to prevent centralized outsourcing, but ubiquitously consistent network latency is not a reliable commodity. Duh. Sorry!

Sergio is also the one who started the idea for a DAG, which I have explained is fundamentally flawed (and afaics even had an egregious/fundamental error in his white paper for DagCoin). Find that info in the thread linked above.




The mathematical models are right there in the paper. We are collecting live data from the network, which proves the models are correct.

"Its not going to work" in face of real data showing that is working is not going to cut it. Please provide some data or mathematical models that say otherwise. Latency doesn't matter for proofs.

Testnets do not prove that the Sybil attack resistance and payment model economics work (because the game theory is only fully incentivized in the wild).

Regarding case 1) in the quote above, the fact is the math models are often myopic[1] (and again that is so in this case), because it is impossible to prove proof-of-storage/retrievability:

These proof-of-storage/retrievability algorithms also employ a challenge/response to force the node to have access to the full copy of the data which should be stored, but this does not prevent the node from outsourcing the storage to a single centralized repository. So, to attempt to prevent that centralized-repository attack (i.e. a Sybil attack on the nodes), these proof-of-storage/retrievability algorithms "try to use network latency to prevent centralized outsourcing, but [that is impossible because] ubiquitously consistent network latency is not a reliable commodity".
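A minimal sketch of such a challenge/response check (hypothetical names and numbers) shows where the latency assumption sneaks in: correctness of the response proves access to the data, and only the timing gate is supposed to prove the data is stored locally.

```python
import hashlib

def respond(challenge: bytes, stored_block: bytes) -> bytes:
    """The prover hashes the fresh challenge with the block it claims to store."""
    return hashlib.sha256(challenge + stored_block).digest()

def verify(challenge: bytes, block: bytes, response: bytes,
           elapsed_ms: float, budget_ms: float = 50.0) -> bool:
    """Accept iff the response is correct AND arrived within the latency
    budget. The budget must be loose enough for honest WAN jitter, which
    makes a node fetching the block from a nearby central repository
    indistinguishable from one storing it locally."""
    return response == respond(challenge, block) and elapsed_ms <= budget_ms
```

An honest node answering in 5 ms and an outsourcing node answering in 20 ms both pass any budget that tolerates ordinary network variance, which is the objection above.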

[1] Meni Rosenfeld's myopic math; note Meni is a widely respected academic:
https://bitcointalk.org/index.php?topic=1319681.msg13633504#msg13633504

And my explanation of the myopia:
https://bitcointalk.org/index.php?topic=1319681.msg13797768#msg13797768
https://bitcointalk.org/index.php?topic=1319681.msg13819991#msg13819991
https://bitcointalk.org/index.php?topic=1319681.msg13763395#msg13763395
https://bitcointalk.org/index.php?topic=1319681.msg13647887#msg13647887

monsterer
Legendary
*
Offline

Activity: 1008
Merit: 1000


View Profile
January 26, 2016, 09:06:50 AM
 #609

That will not work in my design because the payer has to do a roundtrip request to obtain the current "intra-block chain" hash from a "provider" to include in the PoW share (otherwise the same PoW share could be submitted to multiple providers and thus payers would have no vote in the LCR). Therefore the PoW share computation can be outsourced at no extra latency cost.

Why don't you have the providers just act as the mempool? Then users can gather the TXIDs in their round trip and sign their own PoW, rather than just gather the already produced block hash.

edit: why are you replying to this thread as a different user?
Come-from-Beyond
Legendary
*
Offline

Activity: 2142
Merit: 1009

Newbie


View Profile
January 26, 2016, 09:20:18 AM
 #610

edit: why are you replying to this thread as a different user?

He is not Phoenix, after a ban he has to take another name.
illodin
Hero Member
*****
Offline

Activity: 966
Merit: 1003


View Profile
January 26, 2016, 04:35:48 PM
 #611

I need to correct an error I made upthread. I stated that the reason payers would not pay an ASIC mining farm to compute the PoW share the payer must include with the transaction was that the PoW share could be computed locally faster than the latency of a round-trip network request for a PoW share generated on the lowest-cost ASIC mining farm. And I stated that this was because the payer would sign the PoW share, so the "provider" receiving the transaction (with the attached PoW share) would not be able to instead compute the PoW share for the payer (without the round-trip latency delay). I had stated this was a difference from Iota's design, which can't allow payers to sign PoW, because Iota's defense against certain attacks requires that anyone can recompute the PoW share and reattach a transaction to a different branch of the DAG.

That will not work in my design because the payer has to do a roundtrip request to obtain the current "intra-block chain" hash from a "provider" to include in the PoW share (otherwise the same PoW share could be submitted to multiple providers and thus payers would have no vote in the LCR). Therefore the PoW share computation can be outsourced at no extra latency cost.

However, on further analysis this does not entirely weaken the intent of my design to remain decentralized. The key is that the power remains in the hands of the payers to choose which provider to submit their transaction to, and thus they can route away from any malfeasance (since they are paying for the PoW share via a transaction fee to the provider). Although it means mining capital costs will be reimbursed (unlike the case where the payers' computers compute the PoW share, in which the non-payers' mining capital costs would be unreimbursed given the block reward would be 0 or very small relative to the difficulty), mining equipment will not be wildly profitable as in Bitcoin, since the reimbursement is only for costs. Thus the point remains that mining equipment won't be well capitalized for making LONG-TERM 51% attacks on the protocol (even if forced to by regulation, as could be the case in Bitcoin) because the payers can send their PoW share computation elsewhere in a heartbeat.

This also makes more sense because mobile users are not going to want to compute PoW shares and drain their battery.

One issue is that a mining farm located next to a hydropower plant would maybe have up to a 10X cost advantage (including better economy-of-scale capital costs on equipment) over a provider server located at any host anywhere.

Perhaps the latency to the mining farm could still be an issue (delaying the transaction by perhaps another sub-second), and this could force providers to be located in the datacenters of mining farms to lower latency (which would be catastrophic to remaining decentralized, since the choice of providers available to payers would be limited by such confining requirements). OTOH, if the cost of the PoW is minuscule relative to the value of the transaction, then the PoW share can be computed by a provider with up to 10X greater cost without impacting the payer's decision of which provider to choose. But remember also that the computation cost of the PoW share needs to be much greater than the validation cost of the transaction overall, but that should be doable since transaction verification is such a minuscule cost.

Again, remember I suggested that payers' clients (wallet software) could be induced to move to other providers when a provider's PoW share exceeds 5% or so.

Also it is not impossible to design the system such that payers are always listening for the current "intra-block chain" hash updates and so the original point of my latency design could remain. But this would require all payers to be receiving communications from the block chain network at all times, which would increase network load and there are Sybil attack and centralization issues about who pays for this (perhaps payers can pay a provider to provide this data feed). So it is not impossible to envision retaining my original design, but it seems to be workable only for desktops and not for wireless mobile.

If latency becomes the main issue for wireless mobile then telecoms may have the upper hand anyway. So it seems that the key is to keep PoW shares small enough to be minuscule relative to typical microtransaction values yet large enough to be greater than the verification cost. Also the PoW has to be large enough to prevent spam on the network (which is essentially saying significantly larger than the verification cost, since the storage cost will be assumed to be even lower than the verification cost, but I need to run some calculations to confirm this intuition).

I am probably missing a few details in this quickly written post. The entire design could be explained more coherently in a white paper (hopefully forthcoming).
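A rough sketch of the PoW-share mechanics described in this post (all names, the hash construction, and the target are illustrative assumptions, not the forthcoming design): the share commits to the provider's current "intra-block chain" hash, so the same share cannot be resubmitted to a second provider, and checking a share costs one hash versus many to produce it.

```python
import hashlib

def compute_pow_share(tx: bytes, intra_block_hash: bytes, target: int) -> int:
    """Grind a nonce over (tx || provider's current intra-block chain hash).
    Binding the share to that hash is what gives payers their vote in the
    LCR, and also why the grinding can be outsourced at no extra latency."""
    nonce = 0
    while True:
        digest = hashlib.sha256(
            tx + intra_block_hash + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def check_pow_share(tx: bytes, intra_block_hash: bytes,
                    nonce: int, target: int) -> bool:
    """Verification is a single hash: the producer/verifier asymmetry
    that the anti-spam argument depends on."""
    digest = hashlib.sha256(
        tx + intra_block_hash + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < target
```

Tuning the target sets where the share lands between "minuscule relative to typical transaction values" and "significantly larger than the verification cost".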

P.S. Note that Iota has similar issues, and this aspect of Iota was not my main concern expressed upthread about Iota's ability to remain Consistent about double-spends and whether that will lead to divergence (chaos).



Note the above post was deleted by the mods, so I am reposting it. Someone may wish to quote the above technical discussion before some drunk mod goes "happy finger" again.

TPTB_need_war was banned for 3 days for writing in big red letters that "Ethereum is broken and can't be fixed" and then defending this point factually.

And so the mods have now demonstrated they are involved in the pump of Ethereum.

So much for the objectivity of this forum.

Yeah, many jumped on the ETH wagon and I guess irrationality (pump) is the only way to unload.
http://qntra.net/2015/09/buterins-waterfall-nearly-spent/

damn_the_truth was permanently banned from the forum.

I herd Doug and eTheRum goin to da moon is it true?
TPTB_need_war (OP)
Sr. Member
****
Offline

Activity: 420
Merit: 257


View Profile
January 30, 2016, 01:04:38 AM
Last edit: January 30, 2016, 04:08:22 AM by TPTB_need_war
 #612

I have roughly sketched a proposal which I am thinking will ameliorate the energy problem, remove the selfish mining attack (because mining becomes unprofitable), fix the block chain scaling, and fix the Tragedy of the Commons around verification asymmetry for PoW (which is Ethereum's insoluble dilemma). It is in the thread I provided a link to previously, but I will point you to a more specific post that gets more directly to the economic math point (though one would still need to read the context of prior and subsequent posts of the thread):

https://bitcointalk.org/index.php?topic=1319681.msg13634257#msg13634257
https://bitcointalk.org/index.php?topic=1319681.msg13647887#msg13647887
https://bitcointalk.org/index.php?topic=1319681.msg13684665#msg13684665

However, this design may not work for Zcash, because Zcash has higher verification costs (though that is not a problem if the verification cost is still insignificant relative to typical transaction values) and because the rate of transactions might be much more infrequent than the microtransactions market I was designing for (however, if Zcash can be widely used then the transaction rate may be sufficient, and also the PoW share submitted with each transaction can be much higher than in my design because I assume anonymous transactions will be for much higher minimum values).

There may or may not be another hindrance with the fact that Zcash's block chain is afaik monolithic and amorphous, but I haven't thought through that issue yet.

Quote from: Uninvention, post:8, topic:22

Afair (and I don't have time to re-read it given all I have learned since I first read it) that paper did not articulate the most unarguable and salient reasons that PoS is inferior. I think we furthered the understanding in my linked thread. Also I wrote something about this in one of Vitalik's threads:

https://www.reddit.com/r/ethtrader/comments/42rvm3/truth_about_ethereum_is_being_banned_at/czcpoez

On any sufficient time horizon, it has to be a bubble that will crash to near 0 unless they solve the fundamental flaw of how to handle decentralization of a block chain given the Tragedy of the Commons which forces centralization due to asymmetry in block rewards per verification cost, or unless the verification cost is insignificant, which won't be the case for arbitrarily long-running scripts. And I assert this flaw cannot be fixed. I have studied this, and it is fundamental: due to the CAP theorem they can't use partitioning (shards) nor consensus-by-betting nor any of the other ideas proposed for Casper.

This would be true for any block chain that supports long-running programmable scripting with verification cost that is unbounded (and not assuredly insignificant). The Tragedy of the Commons even applies to Bitcoin and any PoW coin to a lesser extent because verification cost is much less.

I am not just interjecting technical fundamentals; I am also asserting the speculative insight that a block chain which can't scale decentralized is not an innovation for anything (well, an exception could be that a centralized Zcash is still useful for provable anonymity). This doesn't speak to the short-term pump, which is more of a speculator sentiment and liquidity issue. I am speaking about "on any sufficient time horizon".

Note the viable solution for Ethereum may be to centralize only the verification, and keep the voting power decentralized. This is essentially what I am proposing for my PoW design.

TPTB_need_war (OP)
Sr. Member
****
Offline

Activity: 420
Merit: 257


View Profile
January 30, 2016, 04:12:07 AM
 #613

That will not work in my design because the payer has to do a roundtrip request to obtain the current "intra-block chain" hash from a "provider" to include in the PoW share (otherwise the same PoW share could be submitted to multiple providers and thus payers would have no vote in the LCR). Therefore the PoW share computation can be outsourced at no extra latency cost.

Why don't you have the providers just act as the mempool? Then users can gather the TXIDs in their round trip and sign their own PoW, rather than just gather the already produced block hash.

This wouldn't remove the economic incentive to outsource the PoW share computation to an ASIC.

Also note the payer is only computing a PoW share, so they won't be announcing a block solution most of the time, yet that extra communication load would fall on every transaction submitted to the network.

Also you are missing a key point (that I have not yet revealed) about how I do the intra-block partitions so as to enable instant transactions.

TPTB_need_war (OP)
Sr. Member
****
Offline

Activity: 420
Merit: 257


View Profile
January 30, 2016, 04:53:55 AM
Last edit: January 30, 2016, 05:42:06 AM by TPTB_need_war
 #614

Pool centralization increased in Bitcoin in 2015 as I predicted back in 2013, despite arguments that said miners will switch pools to avoid centralization  Roll Eyes

When will many prominent people here in this forum ever learn economics?




TPTB_need_war (OP)
Sr. Member
****
Offline

Activity: 420
Merit: 257


View Profile
January 30, 2016, 05:30:28 AM
Last edit: January 30, 2016, 02:48:45 PM by TPTB_need_war
 #615

Note the viable solution for Ethereum may be to centralize only the verification, and keep the voting power decentralized. This is essentially what I am proposing for my PoW design.

I assume you mean verification as in if the smart contract output matches the input submitted by the requesting node? Centralization of the transaction processing seems to be giving up the decentralized voting power.

Centralization is always more efficient, but complete centralization is not necessary. That is your main gripe with Ethereum, right? That each node has to run the scripts themselves?

I think proper checks and balances could be implemented to reduce the number of nodes the code has to be run on while maintaining decentralization.

E.g., if 100% of an arbitrary number of randomly selected nodes agree on the output, then it may not be necessary for all nodes on the network to run that code, as long as there is one honest node. If there is one dissenting node, then the full network (or a larger percentage of the network) runs the code to solve the dispute. ... or something similar to this.
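The sampling-with-escalation idea just described could be sketched roughly as follows (the sample size, the honest-node model, and every name here are illustrative assumptions):

```python
import random

# Rough sketch: run the contract on a small random sample of nodes; a
# unanimous sample is accepted, any disagreement escalates to the full
# network. run_contract() is a stand-in; here all nodes are honest.

def run_contract(node_id: int) -> str:
    return "output-42"

def verify(node_ids, sample_size=5):
    sample = random.sample(node_ids, sample_size)
    outputs = {run_contract(n) for n in sample}
    if len(outputs) == 1:
        return outputs.pop()  # unanimous sample: accept without full replication
    # At least one dissenter: the whole network re-runs the code to resolve it.
    all_outputs = [run_contract(n) for n in node_ids]
    return max(set(all_outputs), key=all_outputs.count)

result = verify(list(range(100)))
```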

I don't want to be condescending, so please try to take this reply constructively.

We already discussed upthread why delegating consensus decisions to a minority of the stake (PoS) or hashrate (PoW) will result in divergence and no consensus.

You are not considering the economic game theory that comes into play. When there isn't a resource consumed to force a choice of orderings amongst the plurality of potential arbitrary orderings, then there is discord.

This was covered for example in the discussion starting some where around page 28 of this thread between enet, myself, and monsterer.

I am not going to be able to help you digest this entire thread by resummarizing it. You will have to invest the effort/time to read and digest the technical points carefully if you desire to be fully knowledgeable about this issue. What you feel might be okay is not actually a well researched level of knowledge of this particular consensus/verification issue.

Please don't again accuse me of writing too much technobabble. It isn't my fault that the issue is complex.

You are welcome to quote a specific point upthread and rebut it.



Again, people shouldn't keep this at arm's length.  Imagine if it really does become the 'world computer' with a billion users, and billions of smart contracts, the blockchain would be too huge.

There won't be any block chain that scales to the significant global usage where most users (and miners) run full nodes. Simply can't be done.

Centralization is required for scaling. The only solution I thought of is to control the centralized full nodes with decentralized power. Keep the mining power in the hands of the users.

Agreed. You need trust to scale. Mining = creating coins + verification. It would be better to talk about verifiers or auditors rather than miners.

Trust & verify (statistically).

TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 257


View Profile
January 30, 2016, 06:39:52 AM
Last edit: February 01, 2016, 06:04:27 PM by TPTB_need_war
 #616

Segregated Witness is overviewed here:

https://bitcoinmagazine.com/articles/segregated-witness-part-how-a-clever-hack-could-significantly-increase-bitcoin-s-potential-1450553618

https://bitcoinmagazine.com/articles/segregated-witness-part-why-you-should-care-about-a-nitty-gritty-technical-trick-1450827675

Old nodes have no way to know if they are including a fraudulent transaction in their blocks, thus everyone will upgrade to the new node, else they can be attacked with fraudulent transactions which cause their block rewards to be ignored by the new nodes. Thus Segregated Witness is effectively a hard fork.

Thus this does nothing to solve the Tragedy of the Commons which is driving PoW mining towards (an oligarchy) centralization in order to scale transaction volume. It is a gimmick used to increase the block chain capacity by some factor, while sneaking in the ability for Blockstream to add new scripting versions so they can later soft fork (version) the block chain to support Sidechains. Clever but not clever enough to hide from Sherlock Holmes (me)!

Upthread I have roughly sketched a design for a solution that conceptually scales and maintains decentralized, permissionless control over mining.



Apparently you don't understand that you are talking to someone who is much more astute than you and more so than most of the guys on this forum.

blah blah blah

You have a reading comprehension handicap. Try again:

while sneaking in the ability for Blockstream to add new scripting versions so they can later soft fork (version) the block chain to support Sidechains

A third advantage of Wuille's Segregated Witness proposal has Bitcoin programmers just as excited as the first two – if not more-so: script versions.

As explained in the previous article, Segregated Witnesses carry scriptSigs that unlock bitcoin. But they carry something else, too: version bytes. These version bytes preface scriptSigs in Segregated Witnesses, indicating what kind of scriptSig it is. If a node reading the version byte recognizes the type, it can tell what requirements must be met to unlock bitcoin in the scriptSig. If a node reading the version byte does not recognize the type, it interprets the scriptSig as an “Anyone can spend.”

This opens up all sorts of new ways to lock bitcoin up in transactions. In fact, it can be used to lock bitcoin up in any way developers come up with. As such, it's impossible to explain how this will be used in the future, since much of this still needs to be invented. But initial ideas include Schnorr signatures, which are much faster to verify than signatures currently in use, and more complicated types of multisig transactions; perhaps even Ethereum-like scripts.

And importantly: Like Segregated Witness itself, these types of upgrades will not break Bitcoin's existing consensus rules. Rather than every single Bitcoin node, only a majority of miners needs to adapt, making them much easier to deploy.
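The version-byte dispatch the quoted article describes can be sketched as follows (the byte values and script types are illustrative, not Bitcoin's actual witness encoding):

```python
# Sketch: an old node that doesn't recognize a witness version byte treats
# the scriptSig as "anyone can spend", which is what makes new script types
# deployable as a soft fork. Only miners enforcing the new rules must upgrade.

KNOWN_VERSIONS = {0: "check-signature"}  # the only type this old node knows

def evaluate_witness(version_byte: int, witness_ok: bool) -> bool:
    if version_byte not in KNOWN_VERSIONS:
        return True       # unknown version: interpreted as anyone-can-spend
    return witness_ok     # known version: enforce the actual unlock conditions

# The old node enforces version 0 but waves through an unknown version 2:
assert evaluate_witness(0, witness_ok=False) is False
assert evaluate_witness(2, witness_ok=False) is True
```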

TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 257


View Profile
January 31, 2016, 10:13:32 AM
Last edit: January 31, 2016, 10:53:14 AM by TPTB_need_war
 #617

See the quoted content for how this post applies to this Decentralization thread.

Synereo probable scam alert (note I think their intention is sincere but they unnecessarily attach a speculation model to a very high risk experiment in social modeling that n00b speculators will wet their pants for because they can't understand it but it sounds cool):

[...]

Besides, the rest of the Synereo network operates on a decentralized, no-consensus model, which is the antithesis of a crypto currency which requires global consensus. Conflating Synereo with a crypto currency will thus end up forcing Synereo to be centralized (see my research in the Decentralization thread for why, and also my formerly censored thoughts about why afaics the work that Synereo's Greg Meredith is doing for Ethereum's Casper consensus protocol is flawed and won't solve the inherent flaw in Ethereum w.r.t. decentralization).

Similarly, when Synereo launches, an initial supply of AMPs will be available for purchase and distribution to early users and contributors. Synereo also offers a unique social approach to proof-of-work that will be connected to a kind of "mining" and AMP creation. However, the discussion of this is currently out of scope for this paper.

It is impossible to invent a PoW which derives from a decentralized, no-consensus network and isn't about consuming a resource, yet also achieves global consensus on the distributed ledger's (block chain's) Consistency, due to the CAP theorem.

This is nonsense to claim you have such a PoW.

enet
Member
**
Offline Offline

Activity: 81
Merit: 10


View Profile
January 31, 2016, 11:50:10 AM
 #618

"Bitcoin continues because it doesn't allow multiple Partitions"

Hmm, you have a steep background in relativity, but somehow things go wrong somewhere. Bitcoin partitions all the time - that's the default for everything. Nodes only synchronize ex-post, hence the block cycle.

I'd humbly suggest to start with some through research of some basics:

* computers are electronic elements with billions of components. how does such a machine achieve consistent state? see: Shannon and von Neumann and the early days of computing (maybe even Zuse)

* partitions, blocks, DAG's, .... all this stuff generally confuses the most fundamental notions. after investigating this matter for a very long time, I can assure you that almost nobody understands this. I'll give an example: in any computer language and modern OS, you have the following piece of code:

Code:
declare variable X
set X to 10
if (X equals 10) do Z

Will the code do Z? Unfortunately the answer in general is no, and it's very hard to know why. The answer: concurrency. A different thread might have changed X and one needs to lock X safely. In other words, data or state in modern computing is based on memory locations. Programs always assume that everything is completely static, when in reality it is dynamic (OS and CPU caches on many levels). These are all human abstractions. The actual physical process of a computing machine is not uniform. In fact it is amazing that one can have such things at all exist, since Heisenberg discovered it's impossible to tell even the most elementary properties of a particle with certainty. Shannon found that still one can build reliable systems from many unreliable parts (the magic of computing).
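The pseudocode above can be made concrete. A runnable sketch: holding a lock across the set and the check makes them atomic, whereas without the lock an interfering thread could change x between the two statements and "do Z" could be skipped.

```python
import threading

# Runnable version of the set-then-check race. With the lock, the pair of
# statements in set_and_check() is atomic, so "Z" always happens exactly
# once, whichever thread wins the lock first.

x = 0
lock = threading.Lock()
results = []

def set_and_check():
    global x
    with lock:            # remove this lock and the check below can fail
        x = 10
        if x == 10:
            results.append("Z")

def interfere():
    global x
    with lock:
        x = 0

threads = [threading.Thread(target=f) for f in (set_and_check, interfere)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```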

With regard to your basic thesis you're right and wrong at the same time. Total coordination is impossible even on the microscopic level. Bitcoin implements a new coordination mechanism, based on the Internet, previously unknown to mankind. It's certainly not perfect but that notion leads nowhere anyway. The foundation of computing is how one treats unreliable physical parts to create reliable systems (things that are imperfect add up to something which is as reliable as necessary).
TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 257


View Profile
January 31, 2016, 02:57:44 PM
Last edit: January 31, 2016, 03:22:02 PM by TPTB_need_war
 #619

Bitcoin continues because it doesn't allow multiple Partitions and because, in the case where chaos would result from a fork, centralization of the mining is able to coordinate to set policy. But we also see the negative side of centralization when recently the Chinese miners who control roughly 67% of Bitcoin's network hashrate were able to veto any block size increase. And they lied about their justification, since an analysis by smooth and me (mostly me) concluded that their excuse about slow connections across China's firewall is not a valid technical justification. Large blocks can be handled with their slow connection by putting a pool outside of China to communicate block hashes to the mining hardware in China.

Hmm, you have a steep background in relativity, but somehow things go wrong somewhere. Bitcoin partitions all the time - that's the default for everything. Nodes only synchronize ex-post, hence the block cycle.

Dude I haven't written anything in this entire thread (unless you crop out the context as you did!) to disagree with the fact that Bitcoin's global consensus is probabilistic. My point is the system converges on a Longest Chain Rule (LCR) and doesn't diverge. Duh! The distinction between convergence and divergence has been my entire point when comparing Satoshi's LCR PoW to other divergent Partition tolerance designs such as a DAG.
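The convergence point can be illustrated with a toy model (not Bitcoin's actual code): every node applies the same deterministic heaviest-chain rule, so nodes that have seen the same set of forks pick the same tip rather than diverging.

```python
# Toy model of LCR convergence: adopt the fork with the most cumulative
# work. Because the rule is deterministic and shared by all nodes, seeing
# the same forks implies agreeing on the same tip.

def cumulative_work(chain):
    return sum(block["work"] for block in chain)

def choose_tip(forks):
    # Heaviest fork wins; a real system also needs a deterministic tie-break.
    return max(forks, key=cumulative_work)

fork_a = [{"work": 1}, {"work": 1}, {"work": 1}]   # longer, less total work
fork_b = [{"work": 1}, {"work": 3}]                # shorter, more total work
assert choose_tip([fork_a, fork_b]) is fork_b
```

Consensus is still probabilistic (a node may not yet have seen the heaviest fork), but the rule pulls all views toward one chain instead of letting partitions accumulate, which is the convergence-versus-divergence distinction.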

I request you quote me properly next time, so I don't have to go hunting for the post you quoted from. I fixed it for you above by inserting the proper quote and underlining the part you had written without attribution and link. I presume you know how to press the "Quote" button on each post. Please use it. Respect my time. Don't create extra work for me.

I'd humbly suggest starting with some thorough research of some basics:

Passive-aggressively implying that I haven't studied the fundamentals is not humble.

* computers are electronic elements with billions of components. how does such a machine achieve consistent state? see: Shannon and von Neumann and the early days of computing (maybe even Zuse)

* partitions, blocks, DAG's, .... all this stuff generally confuses the most fundamental notions. after investigating this matter for a very long time, I can assure you that almost nobody understands this.

Humble  Huh

Blah blah blah. Make your point.

I can assure you I've understood the point deeply about the impossibility of absolute global consistency in open systems (and a perfectly closed system is useless, i.e. static). Go read my debates with dmbarbour (David Barbour) from circa 2010.

I'll give an example: in any computer language and modern OS, you have the following piece of code:

Code:
declare variable X
set X to 10
if (X equals 10) do Z

Will the code do Z? Unfortunately the answer in general is no, and it's very hard to know why. The answer: concurrency. A different thread might have changed X and one needs to lock X safely.

Perhaps one day you will graduate to higher Computer Science concepts such as pure functional programming and asynchronous programming (e.g. Node.js or Scala Akka) to simulate multithreading safely with one thread using promises and continuations. But you are correct to imply there is never a perfect global model of consistency. This is fundamentally due to the unbounded entropy of the universe, which is also reflected in the unbounded recursion of Turing completeness which thus yields the proof that the Halting problem is undecidable.
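A small sketch of that single-threaded asynchronous style (Python's asyncio is used here purely for illustration): cooperative tasks share one thread, so shared state only changes at explicit yield points and no mutex is needed.

```python
import asyncio

# Cooperative single-threaded concurrency: tasks can only interleave at
# await points, never mid-statement, so the increment below needs no lock.

counter = 0

async def bump(times: int):
    global counter
    for _ in range(times):
        counter += 1            # safe: no other task can preempt mid-statement
        await asyncio.sleep(0)  # the only place another task may run

async def main():
    await asyncio.gather(bump(1000), bump(1000))

asyncio.run(main())
```

The same interleaving with preemptive threads and no lock could lose increments; here the result is deterministic by construction.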

If you are still using non-reentrant impure programming with threads (and mutexes), or otherwise threads in anything other than Worker threads mode, you are probably doing it wrong (in most every case).

In other words, data or state in modern computing is based on memory locations. Programs always assume that everything is completely static, when in reality it is dynamic (OS and CPU caches on many levels). These are all human abstractions. The actual physical process of a computing machine is not uniform. In fact it is amazing that one can have such things at all exist, since Heisenberg discovered it's impossible to tell even the most elementary properties of a particle with certainty. Shannon found that still one can build reliable systems from many unreliable parts (the magic of computing).

Higher level abstractions and quantum decoherence. I am not operating at the quantum scale as a classical human being.

With regard to your basic thesis you're right and wrong at the same time. Total coordination is impossible even on the microscopic level. Bitcoin implements a new coordination mechanism, based on the Internet, previously unknown to mankind. It's certainly not perfect but that notion leads nowhere anyway. The foundation of computing is how one treats unreliable physical parts to create reliable systems (things that are imperfect add up to something which is as reliable as necessary).

Whoever said "perfect"? I said probabilistic. The key distinction was convergence versus divergence, but that seems to have escaped you along the way here.

enet
Member
**
Offline Offline

Activity: 81
Merit: 10


View Profile
January 31, 2016, 04:40:02 PM
Last edit: January 31, 2016, 04:52:24 PM by enet
 #620

My post referred to the quote from the first post in this thread

Quote
"The CAP theorem is fundamental. There will be no way to solve it. "

CAP is not fundamental at all - quite the opposite, it is misleading in this context. What are you trying to say - Bitcoin does not work, because... ? It worked quite well for a while. DAG is in my opinion incoherent and a step backwards. It doesn't recognize Bitcoin's achievement of agreeing on a partial order.

If one codes up a P2P system one has to reason from the perspective of a single node. Then one realises that local view != global view. But there is no good terminology for these things (yet). If one had it, it should be easy to show where the DAG idea goes wrong.

Quote
Perhaps one day you will graduate to higher Computer Science concepts such as pure functional programming and asynchronous programming

All data in modern languages are treated as pointers to memory. Even if you know FP it's the same thing. You name a pointer, and then the pointed value changes. That is called a variable. What I meant by "almost nobody understands this" is that 99.9% of all programming works this way. Variables are the only way to define facts, which is strange. It's not a good way to reason about time. For distributed systems there is:

https://web.archive.org/web/20160120095606/http://research.microsoft.com/en-us/um/people/lamport/pubs/time-clocks.pdf
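The mechanism of the linked "Time, Clocks" paper is the Lamport logical clock; a minimal sketch (illustrative code, not the paper's notation): each event bumps a local counter, and a receive takes max(local, message) + 1, ordering causally related events without any synchronized physical time.

```python
# Minimal Lamport logical clock: a partial order of events from counters
# alone. A message's receive event is always timestamped after its send.

class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self) -> int:              # local event
        self.time += 1
        return self.time

    def send(self) -> int:              # timestamp carried on a message
        return self.tick()

    def receive(self, msg_time: int) -> int:
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.send()      # a's clock: 1
b.tick()          # b's clock: 1, concurrent with the send
b.receive(t)      # b's clock: 2, strictly after the send it depends on
assert b.time > t
```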
