BitUsher
Legendary
Offline
Activity: 994
Merit: 1035
|
|
June 20, 2016, 07:17:28 PM |
|
So your argument is that everyone who invested in ETH understands Git. I think you are the moron.
I would suggest the opposite. Most Ethereum investors are likely unfamiliar with Git, which is probably why they were willing to invest in the first place. I am suggesting that in a courtroom situation, technical defense experts could give an ELI5 of Git and clearly show there was no attempt at malice or deception on Eric's part.
"can be construed as fraudulent activity"
Of course anything can be misunderstood by an idiot or the uninformed. All it takes is a very basic understanding of Git to realize that there was no such intention.
http://www.legalmatch.com/law-library/article/modifying-a-contract.html
Since when is an open source repo that anyone can contribute to, and that no user needs to sign to use or fork, considered a contract?
|
|
|
|
sanas
|
|
June 21, 2016, 12:40:23 PM |
|
I have heard that China is going to make some miners for ETH; its price is also increasing, and that's good to see, so mining ETH is not a bad idea.
Where is the source? The Ethereum price is going down at the moment; it is not profitable to build ASICs for ETH.
|
|
|
|
alienesb
|
|
June 21, 2016, 12:41:43 PM |
|
I have heard that China is going to make some miners for ETH; its price is also increasing, and that's good to see, so mining ETH is not a bad idea.
Pretty sure it's been stated that if an ASIC appears, they will change the algorithm to kill it.
|
|
|
|
sanas
|
|
June 21, 2016, 12:54:27 PM |
|
I have heard that China is going to make some miners for ETH; its price is also increasing, and that's good to see, so mining ETH is not a bad idea.
Pretty sure it's been stated that if an ASIC appears, they will change the algorithm to kill it. That is good. I think they should change the mining parameters to kill any possibility of an ASIC. That will deter it.
|
|
|
|
Minecache
Legendary
Offline
Activity: 2408
Merit: 1024
DGbet.fun - Crypto Sportsbook
|
|
June 21, 2016, 01:12:11 PM |
|
I have heard that China is going to make some miners for ETH; its price is also increasing, and that's good to see, so mining ETH is not a bad idea.
Where is the source? The Ethereum price is going down at the moment; it is not profitable to build ASICs for ETH. ETH price is up 9% in the last 24 hours. DAO is up 15%. The hodlers will be rewarded in this life.
|
|
|
|
iamnotback
|
|
June 22, 2016, 12:00:32 AM |
|
Here is a very good explanation of what I was explaining upthread about how sharding validation of long-running scripts breaks Nash equilibrium, and this paper explains that even without sharding, the validation of scripts on Ethereum can break Nash equilibrium: https://arxiv.org/pdf/1606.05917v1.pdf#page=2

I found that here: https://chriseth.github.io/notes/talks/truebit/#/1

Tangentially, the salient quote of the algorithm from that first white paper:

"Consensus in Bitcoin, called Nakamoto consensus [13], achieves low communication complexity in exchange for an extremely high amount of local work from miners." <--- Satoshi's design requires every full node to verify every transaction

"In the other extreme, Byzantine consensus [3] avoids local work, but requires a considerable amount of message passing among parties." <--- Byzantine consensus is entirely via message passing

"Our verification game below requires nontrivial interaction and some local work, but not nearly as much of either as Byzantine or Nakamoto consensus respectively."
Note that the paper seems applicable only to smart contracts which are puzzles that can be validated in parts, which you can see clearly by reviewing the examples, such as a challenge and response about the members of an intersection of two sets. That doesn't seem to be applicable to arbitrary Turing-complete code.

Also, I had rejected a challenge-response design, because if the assumption that at least one honest node will verify and issue the challenge fails, then the block chain is fucked in some indeterminate state where Nash equilibrium is broken. That is too risky, and frankly this is the direction SegWit is headed (and I guess they depend on centralization of mining to protect them).

But if you think about it, code execution can also be validated in parts (a toy sketch of that idea follows at the end of this post). And then there is a statistical solution for validation which gives an assurance of correctness analogous in strength to the reliance on a 51% attack being improbable. That is my design. Now watch somebody go claim it as their invention after reading this.
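To make the "validated in parts" point concrete, here is a minimal toy sketch in C (nothing to do with my actual statistical design): a deterministic toy step() function stands in for script execution, a cheating solver's claimed trace diverges at some step, and a bisection over the claimed intermediate states narrows the dispute to a single step that judges would then re-execute. The step function and all constants are illustrative assumptions, not anything from the paper.

/* Toy bisection ("verification game") sketch: narrow a disputed execution
 * trace of N_STEPS steps down to one disputed transition. */
#include <stdint.h>
#include <stdio.h>

#define N_STEPS 1024

/* Toy deterministic state transition; a real system would run one VM step. */
static uint64_t step(uint64_t s) { return s * 6364136223846793005ULL + 1442695040888963407ULL; }

/* Honest result: state after i steps from initial state s0. */
static uint64_t run(uint64_t s0, int i) { while (i-- > 0) s0 = step(s0); return s0; }

int main(void) {
    uint64_t s0 = 42;
    /* A cheating solver agrees with the honest trace up to step 700, then lies. */
    int first_bad = 700;
    int lo = 0, hi = N_STEPS;   /* lo: last agreed step, hi: first disputed step */
    while (hi - lo > 1) {
        int mid = (lo + hi) / 2;
        uint64_t honest  = run(s0, mid);
        uint64_t claimed = (mid < first_bad) ? honest : honest ^ 1;  /* solver's answer at mid */
        if (claimed == honest) lo = mid; else hi = mid;
    }
    /* Only the single transition lo -> hi must now be re-executed by the judges. */
    printf("dispute narrowed to one step: %d -> %d (of %d steps)\n", lo, hi, N_STEPS);
    return 0;
}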
|
|
|
|
Hueristic
Legendary
Offline
Activity: 4032
Merit: 5590
Doomed to see the future and unable to prevent it
|
|
June 22, 2016, 12:24:22 AM |
|
By "analogously", are you saying "proportionally"?
|
“Bad men need nothing more to compass their ends, than that good men should look on and do nothing.”
|
|
|
iamnotback
|
|
July 11, 2016, 12:02:55 PM Last edit: July 11, 2016, 12:23:23 PM by iamnotback |
|
I am following up on the discussion that tromp and I had upthread about whether his Cuckoo proof-of-work is ASIC resistant, which began with a post of mine about Ethereum's Dagger proof-of-work. For all the following reasons, I have concluded that ASIC resistant proof-of-work algorithms can't exist. This applies to CryptoNight (Monero's proof-of-work hash) and Zcash's Equihash. The CPU simply isn't designed to do any one algorithm most efficiently. It makes tradeoffs in order to be adept at generalized computation. The ASIC will always be at least two orders-of-magnitude more power efficient.

In that prior discussion with tromp, I had suggested some strategies of using special hardware threads on the ASIC to coalesce memory accesses in the same memory row and/or queue hash computations that didn't coalesce, to attain greater power efficiency on the ASIC, potentially up to two or three orders of magnitude. I have also realized that it might be possible to design DRAM circuits with much smaller row banks, which also has the potential for two orders of magnitude power efficiency improvement for this specific Cuckoo algorithm (a toy sketch of the coalescing idea follows below). General computing DRAM must have large row buffer sizes so as to exploit efficiencies in locality of memory, so it wouldn't be possible to put these specialized DRAMs in general purpose computers. As these DRAM power costs are shrunk, the power cost of the hash computations becomes a greater proportion, allowing those to be 3 to 4 orders of magnitude more power efficient on the ASIC.

Even if the Cuckoo algorithm is shrunk to operate in L1 cache, and even after I converted it to use up to 32 (16-bit) buckets per slot and to track all the cycles and find multiple copies of cycles, so that I could force the consumption of the entire SRAM cache line per random access, still the ASIC will end up probably at least two orders-of-magnitude more power efficient, because, for example, N-way caches (required for general computation) are not as power efficient as direct-mapped caches (which are all that is required in this case). This is not to mention that the ASIC mining farm has another 1/2 to 1 order-of-magnitude advantage on electricity cost compared to the CPU/GPU marginal miners.

Additionally:

1. Accessing a memory set that is larger than the cache(s) incurs the power consumption cost of loading the cache(s) and resetting the pipeline on cache misses [1]. N-way set-associative L1 caches typically read in parallel the SRAM rows for all N ways, before the cache miss is detected [2]. On systems with inclusive L2 and L3 caches, such as Intel's, these are also always loaded. Since each SRAM access incurs 10X the power consumption of DRAM (albeit 10X faster), and albeit that the DRAM row size is typically 128X greater [3], the baseline cache power consumption can become significant relative to that of the DRAM, especially if our algorithm employs locality to amortize DRAM row loading over many accesses (in excess of the locality of the SRAM row size). An ASIC/FPGA could be designed to eliminate this baseline cache power consumption.

2. Due to the aforementioned N-way power consumption overhead, Android's 2-way L1 cache is more power efficient than Haswell's 8-way [4], and both have the same latency.
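To illustrate the row-coalescing strategy referred to above, here is a throwaway toy sketch in C. The row size, entry size, queue depth, and node range are hypothetical assumptions, not measured hardware values; it merely counts how many DRAM row activations a batch of random cuckoo-table lookups would need if they were queued and serviced grouped by row.

/* Hypothetical sketch (separate from the miner code below): queue pending
 * cuckoo-table lookups and service them grouped by assumed DRAM row, so one
 * row activation is amortized over every queued access falling in that row. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define ROW_BYTES   8192   /* assumed DRAM row-buffer size                  */
#define ENTRY_BYTES 2      /* one u16 cuckoo cell                           */
#define QUEUE_LEN   4096   /* how many lookups we batch before draining     */

typedef struct { uint32_t node; uint32_t nonce; } Pending;

static int by_row(const void *a, const void *b) {
    uint32_t ra = ((const Pending *)a)->node * ENTRY_BYTES / ROW_BYTES;
    uint32_t rb = ((const Pending *)b)->node * ENTRY_BYTES / ROW_BYTES;
    return (ra > rb) - (ra < rb);
}

/* Drain the queue row by row; `activations` counts how many distinct rows
 * had to be opened, i.e. the dominant DRAM energy cost in this toy model. */
static void drain(Pending *q, int n, long *activations, long *accesses) {
    qsort(q, n, sizeof *q, by_row);
    uint32_t last_row = UINT32_MAX;
    for (int i = 0; i < n; i++) {
        uint32_t row = q[i].node * ENTRY_BYTES / ROW_BYTES;
        if (row != last_row) { (*activations)++; last_row = row; }
        (*accesses)++;
        /* ... perform the actual cuckoo[node] read/update here ... */
    }
}

int main(void) {
    Pending q[QUEUE_LEN];
    long activations = 0, accesses = 0;
    for (int i = 0; i < QUEUE_LEN; i++)
        q[i] = (Pending){ .node = (uint32_t)rand() % (1 << 20), .nonce = (uint32_t)i };
    drain(q, QUEUE_LEN, &activations, &accesses);
    printf("%ld accesses serviced with %ld row activations\n", accesses, activations);
    return 0;
}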
[1] https://lwn.net/Articles/252125/
[2] https://en.wikipedia.org/wiki/CPU_cache#Implementation
    http://personal.denison.edu/~bressoud/cs281-s10/Supplements/caching2.pdf#page=4
[3] http://www.futurechips.org/chip-design-for-all/what-every-programmer-should-know-about-the-memory-system.html
[4] http://www.7-cpu.com/cpu/Cortex-A15.html
    https://en.wikipedia.org/wiki/ARM_Cortex-A57#Overview
    http://www.7-cpu.com/cpu/Haswell.html

The following is sloppy code I just hacked quickly to do the test I needed.

// Cuckoo Cycle, an attempt for an ASIC-resistant proof-of-work
// a derivative of the original by John Tromp¹
#include "cuckoo.h" #include <stdbool.h> #include <time.h> // algorithm parameters #define MAXBUFLEN PROOFSIZE*BUCKETS
// used to simplify nonce recovery
int cuckoo[1+SIZE][BUCKETS]; // global; conveniently initialized to zero
uint8_t cycle[1+SIZE];
int nonce, reads=0, len, cycled, count=0;
u16 node, path[PROOFSIZE], buckets[MAXBUFLEN];
bool recordCycle(const int pi) {
  // cycled = true;
  len = pi;
  printf("% 4d-cycle found at %d%%\n", len, (int)(nonce*100L/EASYNESS));
  return pi == PROOFSIZE && ++count > 256;
}
// Returns whether all cycles found.
bool walkOdd(const int pi, int bi);

bool walkEven(const int pi, int bi) {
  // End of path?
  if ((reads++, cuckoo[node][1]) == 0) // iff 1 occupied bucket, it reverses the path pointing back to the node that pointed to this node
    return false;
  // Cache all the node's buckets so DRAM row buffer loads are not repeated
  int i = 0;
  do {
    buckets[bi+i] = cuckoo[node][i]; // compact into buffer instead of allocating the BUCKETS on stack
  } while (++i < BUCKETS && cuckoo[node][i] != 0);
  // Walk the path for each bucket
  int j = bi;
  bi += i;
  for (; j < bi; j++) {
    node = buckets[j];
    // Repeating a cycle or reversing the path?
    int k = pi - 2;
    while (node != path[k] && --k >= 0);
    if (k > 0)
      continue;
    // Cycle found? (only in WalkEven() bcz bipartite graphs only contain even length cycles, bcz there are no edges U <--> U nor V <--> V)
    if (k == 0) {
      // All cycles found?
      if (recordCycle(pi))
        return true;
    }
    path[pi] = node;
    if (walkOdd(pi+1, bi))
      return true;
  }
  return false;
}
bool walkOdd(const int pi, int bi) {
  // Path would exceed maximum cycle length?
  if (pi > PROOFSIZE)
    return false;
  // End of path?
  if ((reads++, cuckoo[node][1]) == 0) // iff 1 occupied bucket, it reverses the path pointing back to the node that pointed to this node
    return false;
  // Cache all the node's buckets so DRAM row buffer loads are not repeated
  int i = 0;
  do {
    buckets[bi+i] = cuckoo[node][i]; // compact into buffer instead of allocating the BUCKETS on stack
  } while (++i < BUCKETS && cuckoo[node][i] != 0);
  // Walk the path for each bucket
  int j = bi;
  bi += i;
  for (; j < bi; j++) {
    node = buckets[j];
    // Repeating a cycle or reversing the path?
    int k = pi - 2;
    while (node != path[k] && --k >= 0);
    if (k > 0)
      continue;
    path[pi] = node;
    if (walkEven(pi+1, bi))
      return true;
  }
  return false;
}
bool walkThird(const int pi, int bi) {
  // End of path?
  if ((reads++, cuckoo[node][1]) == 0) // iff 1 occupied bucket, it reverses the path pointing back to the node that pointed to this node
    return false;
  // Cache all the node's buckets so DRAM row buffer loads are not repeated
  int i = 0;
  do {
    buckets[bi+i] = cuckoo[node][i]; // compact into buffer instead of allocating the BUCKETS on stack
  } while (++i < BUCKETS && cuckoo[node][i] != 0);
  // Walk the path for each bucket
  int j = bi;
  bi += i;
  for (; j < bi; j++) {
    node = buckets[j];
    // Repeating a cycle or reversing the path?
    int k = pi - 2;
    while (node != path[k] && --k >= 0);
    if (k > 0)
      continue;
    path[pi] = node;
    if (walkEven(pi+1, bi))
      return true;
  }
  return false;
}
bool walkFirstTwo() {
  // End of path?
  if (cuckoo[node][1] == 0) // iff 1 occupied bucket, it reverses the path pointing back to the node that pointed to this node
    return false;
  // Cache all the node's buckets so DRAM row buffer loads are not repeated
  int i = 0;
  do {
    buckets[i] = cuckoo[node][i]; // compact into buffer instead of allocating the BUCKETS on stack
  } while (++i < BUCKETS && cuckoo[node][i] != 0);
  // Walk the path for each bucket
  for (int j = 0; j < i; j++) {
    node = buckets[j];
    // Reversing the path?
    if (node == path[0])
      continue;
    path[2] = node;
    if (walkThird(2+1, i))
      return true;
  }
  return false;
}
int main(int argc, char **argv) {
  clock_t start = clock(), diff;
  assert(SIZE <= (1 << 14)); // 32KB L1 cache (of u16 elements, note u16 allows up 64KB)
  assert(PROOFSIZE > 2);     // c.f. `walk(2, 0)`
  assert(BUCKETS > 1);       // c.f. `(reads++, cuckoo[node][1]) == 0`
  char *header = argc >= 2 ? argv[1] : "";
  setheader(header);
  printf("Looking for %d-cycle on cuckoo%d%d(\"%s\") with %d edges and %d buckets\n",
         PROOFSIZE, SIZEMULT, SIZESHIFT, header, EASYNESS, BUCKETS);
  u16 u, v; // u16 (not int) so &u and &v match sipedge()'s parameter types
  int j, hashes=0, writes=0, maxes=0, max=0;
  for (nonce = 0; nonce < EASYNESS; nonce++) {
    sipedge(nonce, &u, &v);
    hashes++;
    // Edge already exists?
    reads++;
    int i = 0;
    while (cuckoo[u][i] != v && cuckoo[u][i] != 0 && ++i < BUCKETS);
    if (i < BUCKETS && cuckoo[u][i] == v)
      continue; // ignore duplicate edges
    // Not enough buckets?
    if (i == BUCKETS) {
      if (i == max) maxes++; else { max = i; maxes = 1; }
      continue;
    } else if (i <= max) {
      if (i == max) maxes++;
    } else {
      max = i; maxes = 1;
    }
    reads++;
    for (j = 0; cuckoo[v][j] != 0 && ++j < BUCKETS;) {}
    if (j == BUCKETS) {
      if (j == max) maxes++; else { max = j; maxes = 1; }
      continue;
    } else if (j <= max) {
      if (j == max) maxes++;
    } else {
      max = j; maxes = 1;
    }
    // Add new edge
    cuckoo[u][i] = v; writes++;
    cuckoo[v][j] = u; writes++;
    cycled = false;
    // Search for cycles?
    if (i > 0 && j > 0) {
      path[0] = u;
      path[1] = v;
      node = v;
      // Found?
      if (walkFirstTwo()) {
        int pi = len;
        // Mark the cycle edges
        while (--pi >= 0)
          cycle[path[pi]] = true;
        // Enumerate the nonces for the marked cycle edges
        for (; len; nonce--) {
          sipedge(nonce, &u, &v);
          hashes++;
          if (cycle[u] && cycle[v])
            printf("%2d %08x (%d,%d)\n", --len, nonce, u, v);
        }
        break;
      }
      if (cycled) {
        cuckoo[u][i] = 0; writes++;
        cuckoo[v][j] = 0; writes++;
      }
    }
  }
  printf("Hashes: %d\n Reads: %d\nWrites: %d\n Maxes: %d\n Max: %d\n",
         hashes, reads, writes, maxes, max);
  diff = clock() - start;
  int msec = diff * 1000 / CLOCKS_PER_SEC;
  printf("Time taken %d seconds %d milliseconds\n", msec/1000, msec%1000);
  return 0;
}
// ¹https://bitcointalk.org/index.php?topic=1361602.msg15111595#msg15111595
// Cuckoo Cycle, a memory-hard proof-of-work
// Copyright (c) 2013-2014 John Tromp
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <assert.h>

void SHA256(const unsigned char *header, size_t len, unsigned char hash[32]);
// proof-of-work parameters
#ifndef SIZEMULT
#define SIZEMULT 1
#endif
#ifndef SIZESHIFT
#define SIZESHIFT 14
#endif
#ifndef EASYNESS
#define EASYNESS (SIZE*16) // controls probability of finding a cycle before completion
#endif
#ifndef PROOFSIZE
#define PROOFSIZE 6
#endif
#ifndef BUCKETS
#define BUCKETS 10
#endif
#define SIZE (SIZEMULT*(1<<SIZESHIFT))
// relatively prime partition sizes
#define PARTU (SIZE/2+1)
#define PARTV (SIZE/2-1)
// Otherwise if (d=gcd(U,V)) > 1, frequencies multiples of d are mirrored
// in both partitions, and SIZE/2 effectively shrinks to SIZE/(2*d); because given
// hash(k)=d*hash'(k), U=d*U', and V=d*V', thus d*hash'(k) mod d*U' =
// hash'(k) mod U' and d*hash'(k) mod d*V' = hash'(k) mod V':
// http://pub.gajendra.net/2012/09/notes_on_collisions_in_a_common_string_hashing_function
// http://stackoverflow.com/questions/25830215/how-does-double-hashing-work#comment40410794_25830215
typedef uint64_t u64;
#define ROTL(x,b) (u64)( ((x) << (b)) | ( (x) >> (64 - (b))) )
#define SIPROUND \
  do { \
    v0 += v1; v1=ROTL(v1,13); v1 ^= v0; v0=ROTL(v0,32); \
    v2 += v3; v3=ROTL(v3,16); v3 ^= v2; \
    v0 += v3; v3=ROTL(v3,21); v3 ^= v0; \
    v2 += v1; v1=ROTL(v1,17); v1 ^= v2; v2=ROTL(v2,32); \
  } while(0)
// SipHash-2-4 specialized to precomputed key and 4 byte nonces
u64 siphash24(int nonce, u64 v0, u64 v1, u64 v2, u64 v3) {
  u64 b = ( ( u64 )4 ) << 56 | nonce;
  v3 ^= b;
  SIPROUND; SIPROUND;
  v0 ^= b;
  v2 ^= 0xff;
  SIPROUND; SIPROUND; SIPROUND; SIPROUND;
  return v0 ^ v1 ^ v2 ^ v3;
}
u64 v0 = 0x736f6d6570736575ULL, v1 = 0x646f72616e646f6dULL, v2 = 0x6c7967656e657261ULL, v3 = 0x7465646279746573ULL;
#define U8TO64_LE(p) \
  (((u64)((p)[0])      ) | ((u64)((p)[1]) <<  8) | \
   ((u64)((p)[2]) << 16) | ((u64)((p)[3]) << 24) | \
   ((u64)((p)[4]) << 32) | ((u64)((p)[5]) << 40) | \
   ((u64)((p)[6]) << 48) | ((u64)((p)[7]) << 56))
// derive siphash key from header
void setheader(const char *header) {
  unsigned char hdrkey[32];
  SHA256((unsigned char *)header, strlen(header), hdrkey);
  u64 k0 = U8TO64_LE( hdrkey );
  u64 k1 = U8TO64_LE( hdrkey + 8 );
  v3 ^= k1; v2 ^= k0; v1 ^= k1; v0 ^= k0;
}
// generate edge in cuckoo graph
typedef uint16_t u16;

void sipedge(int nonce, u16 *pu, u16 *pv) {
  u64 sip = siphash24(nonce, v0, v1, v2, v3);
  *pu = 1 + (u16)(sip % PARTU); // "1 +" bcz 0 is the "no edge" ("empty cell or ⊥"¹⁰) value
  *pv = 1 + PARTU + (u16)(sip % PARTV);
}
|
|
|
|
dwgscale11
|
|
July 11, 2016, 12:51:42 PM |
|
I have heard that China is going to make some miners for ETH; its price is also increasing, and that's good to see, so mining ETH is not a bad idea.
Where is the source? The Ethereum price is going down at the moment; it is not profitable to build ASICs for ETH. ETH price is up 9% in the last 24 hours. DAO is up 15%. The hodlers will be rewarded in this life. Delusional.
|
|
|
|
tromp
Legendary
Offline
Activity: 990
Merit: 1110
|
|
July 11, 2016, 06:48:10 PM |
|
The ASIC will always be at least two orders-of-magnitude more power efficient.
potentially up to two or three orders of magnitude.
potential for two orders of magnitude power efficiency improvement
allowing those to be 3 to 4 orders of magnitude more power efficient on the ASIC.
the ASIC will end up probably at least two orders-of-magnitude more power efficient
another 1/2 to 1 order-of-magnitude advantage on electricity cost
That's like an order of magnitude more orders of magnitude than I imagined! Seriously though, the large DRAM row size may not be that much of an issue, since all DRAM cells need refreshing on a frequent basis anyway. It's this refresh requirement that makes it Dynamic RAM. So I doubt you can make row size much smaller. And given that Cuckoo Cycle is constrained by memory latency, there's not that much need to optimize the computation of the siphashes. I would still say that an all ASIC solution for Cuckoo Cycle is at most one order of magnitude more energy efficient than a commodity one. Not ASIC proof, but certainly resistant...
|
|
|
|
iamnotback
|
|
July 12, 2016, 01:41:13 AM Last edit: July 12, 2016, 08:22:11 AM by iamnotback |
|
The ASIC will always be at least two orders-of-magnitude more power efficient.
potentially up to two or three orders of magnitude.
potential for two orders of magnitude power efficiency improvement
allowing those to be 3 to 4 orders of magnitude more power efficient on the ASIC.
the ASIC will end up probably at least two orders-of-magnitude more power efficient
another 1/2 to 1 order-of-magnitude advantage on electricity cost
That's like an order of magnitude more orders of magnitude than I imagined! Seriously though, the large DRAM row size may not be that much of an issue, since all DRAM cells need refreshing on a frequent basis anyway. It's this refresh requirement that makes it Dynamic RAM. So I doubt you can make row size much smaller. And given that Cuckoo Cycle is constrained by memory latency, there's not that much need to optimize the computation of the siphashes. I would still say that an all ASIC solution for Cuckoo Cycle is at most one order of magnitude more energy efficient than a commodity one. Not ASIC proof, but certainly resistant...

Don't forget the other ideas for strategies I had presented to lower electrical power requirements by coalescing row accesses within a row buffer. It seems likely that the ASIC can attain at least an order-of-magnitude or two advantage given sufficient investment economy-of-scale. Also, the cache costs I mentioned in the prior post might be only a fraction, but all these strategies and issues can add up. Plus there is the at least half-order-of-magnitude less costly electricity for the ASIC farm, which can be located next to hydropower, and the efficiency losses due to charging the battery on the mobile phone.

Additionally, it might be possible to eliminate the refresh of the DRAM if we know statistically, due to the nature of the algorithm, that the row will be accessed within the refresh interval. I suppose this still requires the necessary amplifiers to recharge the capacitors, but I don't understand why this can't be scaled down to smaller row buffer sizes. I think the large row buffer sizes are there to maximize density, which also lowers power consumption, so I assume there is some tradeoff optimum at a smaller row buffer size for this specific algorithm. (Readers should note that Cuckoo purposely wastes much energy by randomly accessing only two bits out of 16384 bytes per row, so as to be bounded by latency rather than by memory bandwidth or computation, and so that hash computation does not consume the majority of the electrical power; but this causes a problem in that the power consumption is optimizable by the ASIC.)

I guesstimated that at best I could hope for between one and two orders-of-magnitude comparing Cuckoo-variant mobile mining to ASIC farms, and the killer observation is that we can at best expect to draw less than, say, 1 watt-hour per day per mobile device (presuming they are mining only when they are spending microtransactions), so even with a billion users that is only roughly 1 - 10 megawatts that the ASIC farms need to overpower in my unprofitable proof-of-work design (a back-of-envelope sketch of this arithmetic follows at the end of this post). So mobile mining of the proof-of-work is insecure. I have a solution to this within my block chain design, but I don't see how ASIC resistance can ever be viable for profitable proof-of-work (Satoshi's design), where the marginal mobile miners would at best have an electricity cost that is 30 - 100 times greater than the ASIC farm's; thus I don't see how they would be incentivized to mine for profit when not required to, i.e. when on the charger.

It is possible that the Cuckoo proof-of-work algorithm has some use case, but I don't see mobile mining as viable, which I think I read was one of your goals or inspirations. Of course I am not gloating about that. I wish it wasn't so.

P.S.
Readers, I have not detailed why Monero's CryptoNight proof-of-work hash and Zcash's Equihash would also suffer a similar (most likely worse, approaching 3 orders-of-magnitude) lack of ASIC resistance, but suffice it to say that there is no way to avoid the fact that a general purpose computer running on household electricity costs can't be within an order-of-magnitude of power cost efficiency compared to ASIC mining farms. The details could be investigated and enumerated, but this isn't necessary. It would be a waste of my time.
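For readers who want to check the mobile power-budget arithmetic above, here is a back-of-envelope sketch in C. The device count, the per-device daily energy budget, and the fraction of that budget actually spent mining at any given time are all assumptions you can vary; the 1 - 10 MW range above corresponds to only a fraction of the budget being spent, while spending the full budget on every device would give roughly 40 MW.

/* Back-of-envelope sketch of the aggregate mobile mining power budget.
 * All three inputs are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void) {
    double devices       = 1e9;  /* assumed number of mobile users                     */
    double wh_per_day    = 1.0;  /* assumed daily mining energy budget per device (Wh) */
    double fraction_used = 0.1;  /* assumed fraction of that budget actually spent     */

    /* Average continuous power the whole network presents to an attacker, in MW. */
    double network_mw = devices * wh_per_day * fraction_used / 24.0 / 1e6;
    printf("aggregate mobile mining power: %.1f MW\n", network_mw);
    return 0;
}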
|
|
|
|
|
iamnotback
|
|
August 05, 2016, 04:37:53 AM Last edit: August 05, 2016, 04:49:12 AM by iamnotback |
|
How I Fixed Satoshi's Design

There was a really good summary of why Casper's planned sharding is flawed. Apparently everybody is still oblivious to my solution.

If you shard the blockchain, you've still got to verify it. You can't have shards trusting each other, as that breaks Nash equilibrium (there are game theories other than the one that guarantees the security of the longest-chain rule). But if you have every shard verify every other shard, then you don't have sharding any more (a toy cost model follows below).

My hypothetical solution is a statistical one (where the economic interests of all shards become intertwined), combined with eventual consistency where it is required to maintain the Nash equilibrium. SegWit is (in one aspect, though not entirely, as afaik it really just centralizes proof-of-work) generally analogous to a similar conceptual idea I had thought of and dismissed, because it relies on the trust that the economically impacted parties will verify before eventual consistency is required, not on proof that those parties did verify before it was required. The game theory gets quite complex because there are externalities such as shorting the coin. So it is possible I may have made a mistake, and we will find out once I publish.
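Just to put that sharding point in numbers, here is a trivial toy cost model; the transaction rate and shard count are arbitrary assumptions, not anything from Casper.

/* Toy per-validator verification workload under three regimes. */
#include <stdio.h>

int main(void) {
    double total_tx = 1e6;  /* transactions per unit time, assumed */
    double shards   = 64;   /* number of shards, assumed           */

    double unsharded     = total_tx;            /* every validator verifies everything            */
    double trusting      = total_tx / shards;   /* scales, but shards must trust each other       */
    double mutual_verify = total_tx;            /* each shard re-verifies all others: no gain     */

    printf("unsharded       : %.0f tx per validator\n", unsharded);
    printf("trusting shards : %.0f tx per validator (breaks the trust assumption)\n", trusting);
    printf("mutual verify   : %.0f tx per validator (sharding gains vanish)\n", mutual_verify);
    return 0;
}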
|
|
|
|
hv_
Legendary
Offline
Activity: 2548
Merit: 1055
Clean Code and Scale
|
|
October 17, 2016, 11:33:55 AM |
|
to be extended:
How paradoxical is it to have 2 ETHs now?
Do we get more soon?
That's maybe the idea for increasing the block size - just run forks ... uhhh
|
Carpe diem - understand the White Paper and mine honest. Fix real world issues: Check out b-vote.com The simple way is the genius way - Satoshi's Rules: humana veris _
|
|
|
Zocadas
|
|
October 27, 2016, 05:06:13 PM |
|
to be extended:
How paradoxical is it to have 2 ETHs now?
Do we get more soon?
That's maybe the idea for increasing the block size - just run forks ... uhhh
Sometimes, maybe having many versions of Ethereum can be good. People can select the one they like most.
|
|
|
|
iamnotback
|
|
October 29, 2016, 08:35:38 AM |
|
How I Fixed Satoshi's Design

There was a really good summary of why Casper's planned sharding is flawed. Apparently everybody is still oblivious to my solution. If you shard the blockchain, you've still got to verify it. You can't have shards trusting each other, as that breaks Nash equilibrium (there are game theories other than the one that guarantees the security of the longest-chain rule). But if you have every shard verify every other shard, then you don't have sharding any more. My hypothetical solution is a statistical one (where the economic interests of all shards become intertwined), combined with eventual consistency where it is required to maintain the Nash equilibrium. SegWit is (in one aspect, though not entirely, as afaik it really just centralizes proof-of-work) generally analogous to a similar conceptual idea I had thought of and dismissed, because it relies on the trust that the economically impacted parties will verify before eventual consistency is required, not on proof that those parties did verify before it was required. The game theory gets quite complex because there are externalities such as shorting the coin. So it is possible I may have made a mistake, and we will find out once I publish.

Reviewing the video I had done for this thread back in February, where I critiqued some aspects of Casper: at the end of that rambling video, I finally got to the point. But amazingly I didn't come to the very obvious conclusion about how to fix the problem that computation on a blockchain faces. I basically stated it in the last part of that video, but I failed to connect the dots.

Now I see the solution. It was right there in front of our faces all along. Why has no one seen it? (Of course I am not going to tell you. You tell me. I want to see if anyone else can figure it out.)
|
|
|
|
|
iamnotback
|
|
October 29, 2016, 06:48:33 PM Last edit: October 30, 2016, 03:08:10 AM by iamnotback |
|
That is not the correct solution, because of course it gives the spammer an asymmetrical advantage.

And the problem with sharding is not just that messages between shards are multi-threaded (this can actually be solved by requiring messages to be queued to the next block), but rather that both shards then have to verify the entire history chain of those cross-shard "transactions", which defeats the performance improvement of shards. Vitalik probably proposes to have shard validators trust each other with forfeitable deposits, but that, like PoS, destroys Nash equilibrium. As I explained in my video, external business logic can conflate shards even if cross-shard messages are restricted, leading to chaos, discontent when a shard validator set has lied (for profit, obviously), and a drop in the value of the token. Bruce Wanker will be laughing again.

I finally realized the solution to the last sentence in the prior paragraph, which I alluded to in my prior comment. It suddenly just popped into my mind when I listened to myself.

Dan Larimer is rushing and making mistakes:

"Steem, on the other hand, easily survived the flood attacks thrown at it without disrupting service and all without any transaction fees!"

Were those bandwidth DDoS attacks filtered by perimeter nodes, or validation attacks absorbed by validating nodes?

"The price of GAS would go up until it stunted the growth of all three applications."

Incorrect. The price of GAS would increase due to higher demand, but the lesser amount of GAS needed would still reflect the unchanged cost of validating a script at that higher price.

"The native implementation would cause all the same outputs given the same inputs, except it wouldn't know how to calculate the GAS costs because it wasn't run on the EVM."

It could simply compute its own cost based on some counters (a toy sketch follows at the end of this post). If it knows its optimized implementation is less costly than the EVM, then it does no harm (i.e. remains compliant) by keeping the GAS if it is depleted before the script completes. Others verifying the depletion case would run the EVM, as this wouldn't cost them more than running the native version. For non-depleted scripts, validators run their most efficient native version.

"Require a proof-of-work on each script"

Unless this is more expensive in resources than the cost of validating the script, the attacker has an asymmetric DoS advantage. So all you've done is shift the cost from paying a fee to generating the equivalent proof-of-work. And unless each script consumer has access to a premium ASIC, the attacker still has an asymmetric advantage. And if you say the script consumer can farm out this ASIC, then you've shifted the DoS attack to said farm.

"Local Blacklist / Whitelist scripts, accounts, and/or peers"

That is effective for bandwidth DDoS, but Nash equilibrium can be gamed by an open system w.r.t. submitting data for validation.
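Here is the toy sketch of metering by counters mentioned above: a native (non-EVM) execution loop charges an assumed per-operation cost and halts the script when its gas allowance is exhausted. The opcode set and cost schedule are hypothetical, not Ethereum's actual schedule.

/* Minimal sketch of gas metering via counters in a native execution loop. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

typedef enum { OP_ADD, OP_MUL, OP_STORE, OP_HALT } Op;

/* Hypothetical per-opcode gas costs. */
static const uint64_t gas_cost[] = { [OP_ADD] = 3, [OP_MUL] = 5, [OP_STORE] = 20, [OP_HALT] = 0 };

/* Returns true if the script ran to completion, false if gas was depleted.
 * Either way *gas holds the unspent remainder. */
static bool run_metered(const Op *script, size_t len, uint64_t *gas) {
    uint64_t acc = 1, mem = 0;
    for (size_t pc = 0; pc < len; pc++) {
        uint64_t cost = gas_cost[script[pc]];
        if (*gas < cost) return false;   /* out of gas: abort, state discarded */
        *gas -= cost;
        switch (script[pc]) {
            case OP_ADD:   acc += 2;    break;
            case OP_MUL:   acc *= 3;    break;
            case OP_STORE: mem  = acc;  break;
            case OP_HALT:  (void)mem;   return true;
        }
    }
    return true;
}

int main(void) {
    Op script[] = { OP_ADD, OP_MUL, OP_STORE, OP_ADD, OP_HALT };
    uint64_t gas = 25;   /* deliberately too little for the OP_STORE step */
    bool ok = run_metered(script, sizeof script / sizeof *script, &gas);
    printf("%s, %llu gas left\n", ok ? "completed" : "out of gas",
           (unsigned long long)gas);
    return 0;
}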
|
|
|
|
iamnotback
|
|
October 30, 2016, 02:11:12 AM Last edit: October 30, 2016, 03:08:22 AM by iamnotback |
|
From what I can gather, you will require a minimum of 1250 Ethereum, and even then only 250 persons will be chosen or elected as validators? Even then, you could be fined ether if you don't have the required bandwidth or resources to validate at any given time?
Is there a reason only 250 people will secure the network? Is the reason speed? Would it not make it more vulnerable to attack, having so few validators? Or not?
Who understands the Ethereum PoS method and the reasons behind it?
So ETH will become centralized! And 1250 is not an amount everyone can have! Epic... just grab 50 ETH and leave them forever, just in case!
Pay attention to where I wrote, "Vitalik probably proposes to have shard validators trust each other with forfeitable deposits, but that, like PoS, destroys Nash equilibrium. As I explained in my video, external business logic can conflate shards even if cross-shard messages are restricted, leading to chaos, discontent when a shard validator set has lied (for profit, obviously), and a drop in the value of the token. Bruce Wanker will be laughing again."
Also listen to my video about why validation must always become centralized, and read the Ethereum Paradox thread.
Readers, I was telling you last year that Casper can't work technically. Sigh.
|
|
|
|
|
|