Bitcoin Forum: Show Posts
1  Bitcoin / Bitcoin Discussion / Roger Ver vs Craig Wright, What is splitting the two? on: November 14, 2018, 07:21:08 PM
For some reason I haven't managed to do enough research on the Faketoshi vs. Ver debate, and I would really appreciate it if somebody could brief me on their theoretical divergence.

I'm already aware of parts of Wright's agenda to make (or at least keep) bcash more government-friendly and to persuade his victims to hand him their money without hesitation. What I don't exactly know is Ver's agenda.

Personally, I don't recognize bcash as bitcoin, and I definitely don't take the idea of increasing the block size seriously as a scaling solution, but I believe there is always something to learn from debates in the crypto ecosystem, as the same situations can come up in bitcoin.

P.S.
I have also been told that Gregory Maxwell has somehow intervened in this debate. I was just curious: what's going on?

2  Bitcoin / Bitcoin Discussion / SEC targets decentralized exchange developer, warns others brutally. Lessons? on: November 13, 2018, 09:01:43 PM
The U.S. SEC has charged EtherDelta smart contract developer Zachary Coburn and is officially threatening developers of decentralized exchange software.

This is why Satoshi went missing, isn't it?

U.S. feds never deserved to be considered friends, but the Trump administration appears to be the worst enemy yet. They care about nothing, least of all the law, when it comes to expanding their authority. Seriously, what kind of reasonable government charges a programmer for writing open-source code?

Now, what would be the lessons?
3  Bitcoin / Bitcoin Discussion / Axiom of Resistance (Why Craig Wright is not Satoshi) on: November 10, 2018, 09:12:07 AM
In early May 2016, when Craig Wright claimed to be Satoshi, I argued somewhat in his favor, against most of the community members who were demanding Satoshi's private keys. I don't believe in keys; keys are not our identities, they are certificates of our rights, nothing more. Losing or having access to a couple of keys won't change anything about who Satoshi is or is not. I liked Gavin Andresen (personally) and I followed him; it was not a big deal after all, who cares about Satoshi's real identity?

Even over the past couple of years, while being informed about Wright's suspicious behaviour and moves in the ecosystem, I had not decided whether he is a hoax or Satoshi himself. Actually, I didn't follow the man at all.

Now I have encountered this article, Drugs, fraud, and murder by Craig Wright, and I'm fully convinced he is a hoax. Thank you Craig, you are absolutely helpful in making an exemplary embarrassment out of your career.

In this article, besides repeatedly denouncing bitcoin and advertising bcash, Craig Wright crusades against:
Quote
... a group of misguided anarchistic socialists who refuse to work within the bounds of the law wanting to cry at the world and say, we do not want law, we want to say what the world is like. It is unfortunate that many grown men still act this way.

Aside from its poor writing, this article shows a radical difference in philosophy and vision between the fake Satoshi and the original one:
>[Lengthy exposition of vulnerability of a systm to use-of-force
>monopolies ellided.]
>
>You will not find a solution to political problems in cryptography.

Yes, but we can win a major battle in the arms race and gain a new territory of freedom for several years.

Governments are good at cutting off the heads of a centrally controlled networks like Napster, but pure P2P networks like Gnutella and Tor seem to be holding their own.

Satoshi

I, personally, wouldn't care about bitcoin if it were not against state control.
The Libbitcoin guys have formalized this issue as the Axiom of Resistance. The word 'axiom' is used intentionally, to prevent further dispute. They simply ask whether you believe in the desirability and feasibility of resisting state control. Yes? You are a bitcoiner. No? You are not! Their words:
One who does not accept the axiom of resistance is contemplating an entirely different system than Bitcoin. If one assumes it is not possible for a system to resist state controls, conclusions do not make sense in the context of Bitcoin; just as conclusions in spherical geometry contradict Euclidean.

I didn't start this thread to renew an old hoax story. I'm curious how other bitcoiners think about this issue.


4  Bitcoin / Development & Technical Discussion / Is Bitcoin infrastructure too Chinese? What should be done technically? on: October 10, 2018, 07:37:56 AM
Hi,
I just read this academic paper; the authors suggest that Bitcoin is in danger of being compromised by the Chinese government because of ASICs and pools.

I have been campaigning against ASICs and pools for a while, and in my experience, whenever a serious improvement to bitcoin requires a hard fork, an army of 'legendary' shills stands ready to make it almost impossible even to discuss.

But we have a hard-fork wishlist; discussing an issue won't fork the chain, actual forking does! So I politely ask these guys to give us a break and let us have a productive discussion about whether we could do anything, any technical improvement obviously, to deal with what the authors are pointing out.

5  Economy / Economics / On Marxism and the bitcoin energy consumption debate on: September 08, 2018, 08:38:44 PM
On Marxism and the bitcoin energy consumption debate

What's the value of bitcoin?


As void and dangerous as his idea of "changing the world instead of interpreting it" is, an idea the Nazis in Germany and the Communists in the USSR shared on their way to ruining their societies, and one recently employed by the Neocons in the USA (apparently to the same end), Marx's contribution to political economy is one of the greatest human theoretical achievements ever:
He was the first to propose a scientific, quantitative measure for the value of a commodity: labour.

Marx's labour theory of value asserts that although the price of a commodity is determined by supply and demand, price is nothing more than a concrete presentation of an abstract, essential property inherent in each commodity: its value, determined by the average amount of labour society needs to produce it. Value is not volatile the way prices are, fluctuating with the market.
By labour Marx means both live labour (e.g. man-hours) and dead labour, which is recursively embedded in the resources that are consumed or depreciated in the process.

Unfortunately, Das Kapital very soon became the bible of Communists (and remained so for more than a century), fueled by the "changing the world" discourse and later completed by a package of other fake revolutionary ideas that fooled an important segment of intellectuals all over the world into acting in the best interests of a corrupt regime in Russia.

On the other side, capitalists and their mercenary "scientists" in the academies counterattacked by forging their own version of political economy: Marginalism.
More precisely: their own version of anti-political economy, or simply anti-Marx economy.

Marginalism is an exemplar of the fake human sciences made and supported for purely political purposes in the 20th century. It was based on the most ridiculous interpretation of value: utility.

Common sense is aligned with what utilitarians say: a commodity's value depends on its usefulness, desirability, utility, ... which is wrong, just like many other assertions of common sense:
The earth is NOT flat,
Objects do NOT naturally stop moving,
There is NOT any universal clock,
... and
Bitcoin is NOT wasting electricity (as we will see later).

Historically, the huge investment in Marginalism helped develop mathematical models that filled the shelves of libraries and gave birth to a "science" that was somehow applicable to predicting market behavior and how demand for a commodity would change due to psychological factors, full of excuses for not being precise because of "complexities" in the models and the probabilistic nature of the variables involved.

The academy's primary mission was rather complicated: eliminating political economy from the mainstream and replacing it with more applicable, neutralized "sciences" like micro- and macroeconomics.
Being ruled by giants like Marx, Ricardo, Smith, ..., political economy was not a territory to be conquered by mercenary scientists, after all.

This mission was accomplished by investing in utilitarianism. The trick was presenting and propagating it as an alternative theory of value in political debates, while practically using it as an instrument for predicting demand (somewhat useful, sometimes).
This way they managed to convince their students, first, that value is a controversial topic and that the neutralized utilitarian point of view is something as meaningful as Marx's Labour Theory of Value; finally they became confident enough to announce Marx's theory, and political economy, dead.

And now we are here: bitcoin has emerged and the mercenary economists are at a deadly impasse. Their "science" is absolutely void and inefficient for understanding such a revolutionary phenomenon, because it was castrated more than a century ago and doesn't understand what a political-economic revolution looks like.

Recently, debating PoS/PoW with a PoS enthusiast, I asserted that PoS coins are made out of thin air (just like fiat) and that the energy consumed in PoW is not a waste, because it is the source of bitcoin's value. My reasoning was naturally based on the established politico-economic labour theory of value, Marx's theory.

Surprisingly, a few days later I encountered this article. Again, a PoS proponent (I suppose) is questioning whether the value of bitcoin is measurable by the amount of "work" miners do, this time by directly claiming Marx's theory to be a fallacy!

This is why I'm becoming more and more convinced that the PoW/PoS debate is nothing less than a final debate between true political economists, resurrected after bitcoin, on one side, and fake mercenary economists on the other, with their utilitarian interpretation of value that is incapable of understanding why bitcoin has an inherent value not based on a subjective convention, nor on an artificial demand caused by speculation, nor even on its usefulness as a medium of exchange and a utility.

From a much wider perspective, I would suggest that the whole cryptocurrency movement will find its theoretical support in political economy, an original, decent science, rather than in fake anti-Marx discourses that belong to a bitter past period of history, the Cold War.
6  Economy / Scam Accusations / Bittrex first scams $millions and now joins a self-regulatory group on: August 20, 2018, 05:23:35 PM
Just check their tweet

Believe it?  Grin

These shitty scammers destroyed the lives of thousands of people a few months ago by stealing millions of dollars' worth of funds from users in poor Middle Eastern and Eastern European countries, playing their dirty "verification needed" game without any notice. I know noobs who had invested all the pennies they had earned so hard over the years in this scammy exchange and lost it overnight because they were Iranian, Ukrainian, Syrian, ..., had no access to US legal recourse, and were easy victims.

I personally lost about 3 ETH in that scam. No worries, I'm fine now, I'll be around for a while, and I have enough time for retaliation; but I know people who seriously suffered from this scam and never healed. I'll retaliate on their behalf as well. I promise.
7  Bitcoin / Development & Technical Discussion / A framework for designing CPU only mining algorithms on: August 19, 2018, 07:42:03 PM
Hi all,

I'm not thinking of replacing bitcoin's SHA256 or releasing a brand new coin, ... and yet I need a proof-of-work algorithm resistant to parallelism, i.e. one where it is not feasible for GPUs to compete with a modern CPU.

I'm already aware of the large literature and the mostly failed efforts on this issue, but I can't get rid of one simple idea I have for designing such an algorithm. I just don't understand why it shouldn't be considered safe against GPU mining.

It will be highly appreciated if somebody can prove me wrong  Cheesy

Two Phase Mining
Suppose we have a memory-hard algorithm, something like Dagger Hashimoto, which utilizes a big chunk of memory. As we already know, GPUs mine such an algorithm far more efficiently and faster than a CPU, because their thousands of cores can share the same memory bank. For EtHash (a Dagger Hashimoto variant) this makes the whole mining process bound by memory bus performance, which resists ASICs but leaves CPUs behind: the GPU's large number of cores utilizes a bus dedicated to the GPU almost completely, without hurting the miner's performance.

Now we change this algorithm such that it goes through 2 phases: an estimation phase and a targeting phase.

Estimation Phase is a normal run of the algorithm, but instead of looking for a hash below the network target difficulty dn, we look for a hash at a much lighter difficulty, 2^16 times easier, i.e. d0 = dn << 16 (strictly speaking it is the target, i.e. difficulty^-1, that we are talking about). We assume the shift/multiplication operation won't overflow, i.e. the target is > 2^16.

A typical GPU with enough RAM will substantially outperform any CPU here because of its huge number of cores, obviously. After each hit (which happens very frequently), we have a nonce n1 that satisfies H(s|n1) < d0. Until now everything is in favor of GPUs. But ...

Targeting Phase is designed to be much harder to run on a shared chunk (like 1 GB) of memory. For this:
1- We initially set n2 = n1 << 16.

2- Suppose we have a function f(M, n, s, e, flag) that partially changes a chunk of memory M (like 20% of it) using the supplied n, from address s to address e; flag determines whether the function only maps and returns the range, or modifies it in memory as well. This function is supposed to be complicated enough that running it is hundreds of thousands of times more expensive than fetching a page from memory. We change the memory chunk (the DAG in EtHash) by applying this function with n1 as the second parameter, the start and end addresses of the memory chunk, and flag set to true to modify it. Now we have a dedicated chunk of memory, specialized for this nonce.

3- We run the original memory-hard algorithm with a special restriction: only the last 16 bits of n2 are allowed to be set to generate a new nonce n, i.e. n - n2 < 2^16.
 
4- We need H(s|n) <= dn, the full network target.
5- Rebuild the memory chunk (e.g. use a backup).

Validation
Validating the hash includes:
1- Calculate n1 = n >> 16 and d0 = dn << 16.
2- Check that the supplied block header with its nonce yields a hash H(s|n) <= dn. For this, on each memory access, f(M, n1, address, address, false) should be called instead of a plain memory read.
3- If step 2 passes, check that H(s|n1) < d0.
4- Continue checking the other rules.
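
To make the nonce arithmetic above concrete, here is a minimal C++ sketch of the two-phase search. H and f below are toy stand-ins of mine for the memory-hard hash and the modifier function described above (hashes truncated to 64 bits, dn assumed < 2^48 so the shift doesn't overflow); this is an illustration of the scheme, not a reference implementation.
Code:
#include <cstdint>
#include <cstddef>
#include <vector>

using Memory = std::vector<uint64_t>;

// Toy stand-in for the memory-hard hash H(s|n): mixes header, nonce and one
// read from the memory chunk M. The real H would access M many times.
uint64_t H(uint64_t header, uint64_t n, const Memory& M) {
    uint64_t x = header ^ (n * 0x9E3779B97F4A7C15ULL);
    x ^= M[x % M.size()];                        // a single 'memory access'
    x ^= x >> 33; x *= 0xFF51AFD7ED558CCDULL; x ^= x >> 33;
    return x;
}

// Toy stand-in for f: rewrites ~20% of M as a function of n (phase-2 setup;
// inherently single-threaded since it holds the chunk exclusively).
void f(Memory& M, uint64_t n) {
    for (size_t i = 0; i < M.size(); i += 5)
        M[i] ^= (n * 0xC2B2AE3D27D4EB4FULL) + i;
}

bool mineTwoPhase(uint64_t header, Memory& M, uint64_t dn, uint64_t& winner) {
    const uint64_t d0 = dn << 16;                // estimation target, 2^16 easier
    for (uint64_t n1 = 0;; ++n1) {               // phase 1: GPU-friendly search
        if (H(header, n1, M) >= d0) continue;    // need H(s|n1) < d0
        Memory backup = M;
        f(M, n1);                                // step 2: specialize M for n1
        const uint64_t n2 = n1 << 16;
        for (uint64_t low = 0; low < (1u << 16); ++low) {
            if (H(header, n2 | low, M) <= dn) {  // step 4: full network target
                winner = n2 | low;
                M = backup;                      // step 5: rebuild the chunk
                return true;
            }
        }
        M = backup;                              // no hit: back to phase 1
    }
}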

Discussion:
We note that the targeting phase above is only efficient if we follow the algorithm and actually apply the 20% change to the memory chunk (the DAG file for EtHash); otherwise we would need to check and recompute the values read from memory on every single run of the algorithm, which is supposed to access memory many times (otherwise it is not memory hard at all).

If our algorithm accesses memory N times (20 for EtHash, I suppose), applying the f function on the fly in each round of the targeting phase costs N executions of f per round, and we have almost 2^16 rounds. Obviously it wouldn't be cost effective for a GPU to use f in calculate-only mode so many times.

Alternatively, modifying memory by calling f once is a single-threaded job, because f should hold a lock on the memory, and the multiple cores of a GPU are useless during this process. If f is defined properly, this algorithm in its second phase would outperform a GPU, because setting up multiple cores to search a 32K space simply isn't worth it.

Conclusion

We use a two-phase algorithm: in phase one, an estimate nonce is generated that is useless without a complementary 32K search, which is practically single-threaded. Although GPUs keep their advantage in phase 1, the estimates they generate are useless, because they queue up behind a single-threaded task that is deliberately designed to be a bottleneck.

I expect a 2-core CPU to beat a single GPU with up to 10 thousand cores.



8  Bitcoin / Development & Technical Discussion / An analysis of bitcoin blockchain height divergence from its standard behavior on: August 16, 2018, 11:04:05 AM
How far can the bitcoin blockchain height possibly diverge from the ideal one-block-per-10-minutes measure?

Motivation
During a discussion with @gmaxwell about a hypothetical DoS vulnerability, in response to my suggestion to blacklist peers that send unreasonably large block locators, he objected to my proposal, asserting that an honest node may be maliciously bootstrapped and fooled into committing to a very long chain with trivial difficulty, so blocking such peers won't be helpful. I responded with a moderate version, ... but after further assessment I realized that the whole situation is somewhat abnormal: the possibility of having honest nodes with an unreasonable perception of chain height.

In bitcoin, the 10-minute block interval is imposed neither synchronously nor integrally. The protocol adjusts the difficulty every 2016 blocks to keep the generation pace at the 10-minute target, but steady growth in network hash power makes it possible for the chain height to exceed the ideal number implied by a 10-minute generation rate. Actually, the maximum block height of the network as of this writing is #535461, while it is supposed to be ~504300, showing a +6% divergence.
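
A back-of-the-envelope check of that figure (my own arithmetic, using the well-known genesis timestamp): the ideal height is just the elapsed seconds divided by 600.
Code:
#include <cstdint>
#include <cstdio>

int main() {
    const int64_t genesisTime = 1231006505;  // block #0 timestamp (2009-01-03)
    const int64_t now         = 1534417200;  // approx. time of writing (2018-08-16)
    const int64_t ideal       = (now - genesisTime) / 600;  // one block per 600 s
    const int64_t actual      = 535461;
    std::printf("ideal=%lld actual=%lld divergence=%+.1f%%\n",
                (long long)ideal, (long long)actual,
                100.0 * (double)(actual - ideal) / (double)ideal);
    return 0;  // prints: ideal=505684 actual=535461 divergence=+5.9%
}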

Not satisfied with the situation, I asked myself:
Why should we accept a proposed chain with an unreasonable length, like 10 times or 1000 times longer than normal, in the first place?
Why shouldn't we simply reject such proposals and shut down the peers who made them? Putting difficulty requirements aside, is it even possible to have a chain orders of magnitude longer than normal?

In the malicious block locator scenario, a peer node, very commonly an SPV client, sends us a getheaders message with a payload of hundreds of thousands of bogus hashes as a block locator, and instead of rejecting and banning it, we hopefully and exhaustively try to locate them one by one.

At first, @gmaxwell didn't consider it an attack vector, because of the CPU-bound nature of the processing involved, but he finally made a pull request out of that discussion and it was committed to the source. Bitcoin Core now enforces MAX_LOCATOR_SZ, hardcoded to 101. So the block locator problem is fixed now.

A block locator in bitcoin is a special data structure that compactly represents (part of) a node's view of the block headers in its chain. The exponential back-off in the block locator generation algorithm guarantees its length to be O(log(n)), where n is the maximum block height in the chain; it is roughly 10 + log2(n). For the current block height of 535461, a legitimate block locator should carry a maximum of 29 hashes; a node sending 30 hashes is claiming a chain of height about 1,048,576, almost twice as long, and the claimed chain would be 2 million times longer if the locator held 50 hashes. Yet we were worried about block locators holding thousands of hashes, which represent chains of astronomical heights.
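
A minimal sketch of that bound (my own helper functions, not Bitcoin Core code): after the first 10 hashes the locator backs off exponentially, so a locator of length len implies a claimed height of roughly 2^(len-10).
Code:
#include <cmath>
#include <cstdint>

// Maximum plausible locator length for a given chain height:
// 10 linear steps, then one hash per doubling of the remaining distance.
int maxLocatorSize(uint64_t height) {
    return 10 + (int)std::floor(std::log2((double)height));
}

// Conversely, the height implicitly claimed by a locator of a given length.
uint64_t impliedHeight(int len) {
    return len <= 10 ? (uint64_t)len : (uint64_t)1 << (len - 10);
}

// maxLocatorSize(535461) == 29; impliedHeight(30) == 1048576 (~2x current);
// impliedHeight(50) == 2^40, about 2 million times the current height.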

Although the issue is fixed now, the underlying theoretical question didn't get the attention it deserves: a bound on the divergence of the actual blockchain height from what we can trivially calculate by dividing the minutes elapsed since an accepted checkpoint block's timestamp by 10 (the normal block interval).

At the time of this writing the divergence is +6%, but until now we have had no measure by which to consider 60%, 600%, 6,000,000,000%, ... divergences infeasible.

In this article I try to establish a mathematical framework for further research on this problem, by deriving a relation between the hash power increase required for a specific divergence within a relatively large window of time.

I also found it useful to include a background section for interested readers; if you are familiar with the subjects, just skip it.

Background

Difficulty adjustment: How bitcoin keeps the pace
Two rules are applied in the bitcoin protocol for regulating the rate at which blocks are generated:
1- Increase/decrease the difficulty every 2016 blocks by comparing the actual time elapsed with the expected two-week period.
2- Never increase/decrease the difficulty by a factor greater than 4.

The latter constraint is commonly overlooked, because such a large retargeting is very unlikely given the huge inertia of the currently installed hash power; but as we are studying extreme conditions, the 4x limit is of much importance.

Longest Chain Rule: How Bitcoin chooses between forks
In bitcoin and its clones, the longest chain rule is not interpreted naively as selecting the chain with the greatest height as the main chain; instead, the main chain is defined as the chain that required the most work to generate.
To calculate the accumulated work done on each chain:
1- The work for each block is calculated as: floor(2^256 / (target + 1))
This is done in chain.cpp of the Bitcoin source code via the GetBlockProof function:
Code:
arith_uint256 GetBlockProof(const CBlockIndex& block)
{
    arith_uint256 bnTarget;
    bool fNegative;
    bool fOverflow;
    bnTarget.SetCompact(block.nBits, &fNegative, &fOverflow);
    if (fNegative || fOverflow || bnTarget == 0)
        return 0;
    // We need to compute 2**256 / (bnTarget+1), but we can't represent 2**256
    // as it's too large for an arith_uint256. However, as 2**256 is at least as large
    // as bnTarget+1, it is equal to ((2**256 - bnTarget - 1) / (bnTarget+1)) + 1,
    // or ~bnTarget / (bnTarget+1) + 1.
    return (~bnTarget / (bnTarget + 1)) + 1;
}

2- While loading/adding a block to the in-memory block index, not only is the block's work computed by calling the above function, but an accumulated chain work is also aggregated in nChainWork with this code:
Code:
pindex->nChainWork = (pindex->pprev ? pindex->pprev->nChainWork : 0) + GetBlockProof(*pindex);
which is executed in both LoadBlockIndex and AddToBlockIndex.

Forks are compared based on the nChainWork of the BlockIndex entry of their respective last blocks, and once a chain is found to be heavier than the currently active chain, a reorg is performed, mainly by updating the UTXO set and pointing to the new chain as the active fork.

Between two difficulty adjustments, this most-difficult-chain-is-the-best-chain approach is identical to the longest-chain-is-the-best-chain rule originally proposed by Satoshi; but when we have to choose between two forks spanning at least one difficulty adjustment (typically in both forks), there is no guarantee that the results will be identical.

Block timestamp: How bitcoin keeps track of time
In bitcoin, a few rules apply to block time and current time. This is an important topic for the purpose of this article, defining an upper bound on the height of forks, because to impose such a bound a node eventually needs a measure of time, and for obvious reasons it cannot simply be its own system clock.

1- Bitcoin defines a concept named network-adjusted time: the median of the times reported by all the peers a node is connected to.

2- In any case, network-adjusted time may not offset the node's local time by more than 70 minutes.

3- Blocks should carry a timestamp greater than the median of the previous 11 blocks and less than 2 hours after the node's network-adjusted time.

These constraints make it very hard for an adversary to manipulate block times and forge fake chains with fake difficulty and height, which is our concern here.
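
A compressed sketch of those three rules (types and helper names are mine; Bitcoin Core spreads these checks across several functions):
Code:
#include <algorithm>
#include <cstdint>
#include <vector>

const int64_t MAX_FUTURE_DRIFT = 2 * 60 * 60;  // rule 3: 2 hours
const int64_t MAX_PEER_OFFSET  = 70 * 60;      // rule 2: 70 minutes

// Rule 1: network-adjusted time, clamped against the local clock (rule 2).
int64_t networkAdjustedTime(std::vector<int64_t> peerTimes, int64_t localTime) {
    std::sort(peerTimes.begin(), peerTimes.end());
    int64_t median = peerTimes[peerTimes.size() / 2];
    int64_t offset = std::max(-MAX_PEER_OFFSET,
                              std::min(MAX_PEER_OFFSET, median - localTime));
    return localTime + offset;
}

// Rule 3: block time must exceed the median of the previous 11 blocks
// and lie no more than 2 hours past network-adjusted time.
bool checkBlockTime(int64_t blockTime, std::vector<int64_t> prev11,
                    int64_t netAdjTime) {
    std::sort(prev11.begin(), prev11.end());
    return blockTime > prev11[prev11.size() / 2] &&
           blockTime <= netAdjTime + MAX_FUTURE_DRIFT;
}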


Abstract
The short-range difficulty adjustment policy of bitcoin allows the chain height to diverge from the height expected from the ideal 10-minute block time when hashpower increases during epochs. We show that this divergence is exponentially difficult, by proving a functional dependency between the ratio by which hash power increases and the ratio by which the height diverges: F = (n/N)^n

To prove this relation, we first show that the maximum divergence caused by introducing any amount of new hashpower to the network in a specific period of time is achieved when its distribution across epochs is representable by a geometric progression.

The 4x maximum threshold imposed by the network's difficulty adjustment algorithm is deliberately ignored to avoid complicating the discussion. It is a justifiable simplifying assumption in the context of this article, because a 4x-per-epoch increase in hashpower is not feasible anyway, as we briefly discuss at the end of the article.

After proving the exponential dependency, we briefly illustrate and discuss its relevance to the block locator size problem, and other interesting potential applications to similar problems generally.

Lemma:
Suppose an epoch, whose last block we have already chosen as a checkpoint, ends at time t0; we have calculated an average hashpower C as the current network hashpower, and a relatively large hashpower Q is introduced over a period of time t greater than or equal to one ideal difficulty adjustment epoch time T0. The number of epochs n that occur in the period [t0, t0+t] is at its maximum when Q is distributed such that the network hashrate goes through a geometric progression with:

C_i = C * q^i

where Sum(C_i - C_{i-1}) = Q is the increased hash power.
Proof:
For a network in equilibrium with hashpower C at the end of a given epoch, and an arbitrary distribution of the new power Q over n epochs, we have the sequence

C, C*k_1, (C*k_1)*k_2, ..., (C*k_{i-1})*k_i, ..., (C*k_{n-1})*k_n

defined recursively as:

C_i = C_{i-1} * k_i,  C_0 = C

Observing that for 0 < i <= n

C_i = C * k_1 * k_2 * ... * k_i

and that at the end of the application of the power Q the total hashpower C+Q equals the last term of the sequence, we have to prove that k_1 = k_2 = ... = k_n = q for the n epochs to occur in the least possible actual time. For this, we note that at the end of each epoch the difficulty adjustment algorithm of bitcoin (the 4x limit temporarily being ignored) resets the block generation pace to T0 for the next epoch, so the duration of epoch i is:

t_i = T0 * C_{i-1}/C_i = T0 / k_i

For the total time t elapsed during the n epochs, we have:

t = T0 + T0/k_1 + T0/k_2 + ... + T0/k_n

Then:

t/T0 = 1 + 1/k_1 + 1/k_2 + ... + 1/k_n
     = 1 + f(k) / (k_1*k_2*...*k_n)
     = 1 + f(k) / ((C+Q)/C)

where f(k) is the sum of the sequence

a_i = (k_1*k_2*...*k_n) / k_i

Now we need this sum to be at its minimum. We first calculate the product of the terms:

prod(a_i) = (k_1*k_2*...*k_n)^(n-1) = ((C+Q)/C)^(n-1)

where (C+Q)/C and n-1 are constants; with a fixed product, a sum is minimal when all its terms are equal, a_1 = a_2 = ... = a_n, hence:

k_1 = k_2 = ... = k_n

This completes the proof: for the minimum time to elapse, the power sequence must be the geometric progression

C_i = C_{i-1} * q = C_0 * q^i

where C_0 = C and 0 < i <= n.

Theorem
Minimum hashpower growth factor F = (Q+C)/C needed for the bitcoin blockchain* to grow by an abnormal ratio n/N is determined by F = (n/N)^n

where C is the current hashpower of the network and Q is the absolute increase during a period of N*T0.

*Note: Bitcoin's difficulty adjustment algorithm applies a 4x maximum threshold, which is deliberately ignored here for practical purposes.

Proof
For a given F = (Q+C)/C, using the geometric progression lemma above (and dropping the constant initial term), we have:

t = T0*n/q  ==>  t/T0 = N = n/q  ==>  n/N = q

and

Q = C*(q^n - 1)  ==>  (Q+C)/C = F = q^n

Eliminating q:

F = (n/N)^n
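
To get a feel for how steep this relation is, here are a couple of sample evaluations (my own sketch) that match the numbers discussed below:
Code:
#include <cmath>
#include <cstdio>

// Minimum hashpower growth factor F = (n/N)^n needed to squeeze
// n epochs into the time N normal epochs would take.
double minGrowthFactor(double n, double N) {
    return std::pow(n / N, n);
}

int main() {
    std::printf("%.0f\n", minGrowthFactor(30, 24)); // 6 extra epochs in 48 weeks: ~806x
    std::printf("%.0f\n", minGrowthFactor(18, 12)); // 6 extra epochs in 24 weeks: ~1478x
    return 0;
}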

Discussion
The exponential relationship between the ratio of the hashpower increase and the ratio N = t/T0 by which the blockchain height diverges from normal is an important key to understanding how impractical it would be for a network like bitcoin, with tens of thousands of petahashes of inertia (C), to diverge from its normal behavior in terms of length (chain height).
Let's illustrate this function graphically. We replace n with N+d, where d is the number of abnormally added epochs, so F = ((N+d)/N)^(N+d).
This form of the equation illustrates how much F (relative hashpower) must grow to produce N+d epochs instead of the normal N.

Figure 1: F/d diagram with various N values, for F up to a 30x hashpower increase
Figure 1 illustrates how hard it is to achieve 2-3 epochs of divergence within 3, 6, 12, or 24 standard epoch times (two weeks each): e.g. to reach 3 epochs of divergence within 24 weeks (12 epochs), even a 40x increase in network hashpower doesn't suffice.

Figure 2 (below) illustrates the same function at a larger scale (up to a 1000x hashpower increase). To diverge by 6 epochs within 48 weeks we need an 800x increase in network hash power, and within 24 weeks even a 1000x increase can't push the divergence to 6.
Figure 2: F/d diagram with various N values, for F up to a 1000x hashpower increase

This exponential behavior is very interesting and can potentially be used as a basis for confronting malicious bootstrap attacks and issues like the one mentioned at the beginning of this article: the bogus block locator issue.

As for the 4x maximum threshold bitcoin uses for difficulty adjustment: a 4x-or-more increase per epoch is obviously infeasible for the network, especially over large windows of time like 1-2 years, which would compound to infeasible quantities of hash power. Hence, ignoring that threshold does not practically affect this analysis.
9  Bitcoin / Development & Technical Discussion / How the devil competes with its own mined blocks on: August 09, 2018, 12:06:52 PM
It is really crazy, dudes. Investigating the proximity problem in the bitcoin blockchain, I was surprised twice:

First I found that in the latest 65000 blocks we have just 25 orphan blocks, i.e. roughly a 0.00038 ratio, or 0.038%, which is very low and voids the concerns about the security consequences of reducing block time. Actually, an orphan rate of 1% should be considered safe; these figures suggest we can safely reduce the block time without getting even close to the danger zone.

I was enjoying my discovery and planning how to take advantage of this fact in favor of my PoCW proposal, when I noticed a hilarious point.

Just take a look at this snapshot:

It is how Blockchain.info represents orphan blocks: the block on the left is the one that progressed and was added to the main chain, and the right one is the orphan block.

Both the progressed and the orphan blocks were relayed by AntPool  Grin
The blocks are timestamped with something like a 90-second difference.

How should this ridiculous situation be interpreted?

Option One: This fat boy is so bloated that it is no longer capable of taking advantage of its own premium. So large that cancelling the work assigned to its workers and initiating a new search is a hell of a job and takes a long time.

Option Two: They have outsourced their operation somehow, and the branches are competing with each other.

Option Three: They are just stupid dickheads who have no clue what they are doing.

Option Four: Both blocks were found almost simultaneously (feasible despite the timestamps being 90s apart), and they intentionally relayed the left one to one part of the network and the right one to another (minor) part, to keep them busy validating and switching workloads while the majority (like AntPool) mine the right chain.

Option Five: Another unknown devil practice.


Anyhow, this giant is really crazy. Cheesy
10  Bitcoin / Development & Technical Discussion / An analysis of Mining Variance and Proximity Premium flaws in Bitcoin on: July 16, 2018, 03:07:29 PM
An analysis of Mining Variance and Proximity Premium flaws in Bitcoin
Preface
The problem of solo mining becoming too risky and impractical for small mining facilities appeared in 2010, less than 2 years after bitcoin launched. Although Satoshi Nakamoto commented on bitcointalk about the first pooling proposals, it was among the last posts Satoshi made, and he disappeared from this forum a few days later, forever, without making a serious contribution to the subject. Bitcoin was just 2 years old when the pooling age began; pools eventually came to dominate almost all the hashpower of the network, with centralization as the obvious consequence.

Since then it has been extensively discussed and has become a classical problem, named Pooling Pressure, in bitcoin and PoW networks; it is mainly driven by the mining variance and proximity premium flaws.

In this article I try to show that:
1- Mining Variance is an inevitable consequence of bitcoin's winner-takes-all approach to PoW.
2- Proximity Premium is basically an amplified version of Mining Variance, hence another consequence of that approach.

Readers may already be familiar with my proposal for fixing these flaws and removing the infamous Pooling Pressure from bitcoin, its clones, and the other blockchains that inherited winner-takes-all from it, i.e. all minable coins in the market!
As a matter of fact, I was writing the (almost) final version of that proposal when I found myself examining these two flaws more extensively, and I thought it would be more helpful to publish this part separately.


Mining variance flaw in traditional PoW
The binary nature of the winner-takes-all strategy in bitcoin implies that a miner's chance of winning a block with a relative hash rate of p follows a Bernoulli distribution.
The variance of a Bernoulli distribution is known to be p(1-p), hence the relative standard deviation (as a fraction of the expected reward) over N consecutive blocks is:
σ = sqrt(1/p - 1) / sqrt(N)

For N = 365*24*6 = 52,560 blocks, sqrt(N) ~ 229.26.
Calculating this relative standard deviation for the following series (representing the respective hashpower ratios of some miners)
0.1,           0.01,             0.001,                 0.0001,                0.00001 ,       0.000001
generates the series
0.013,        0.043,           0.138,                 0.436,                 1.380,            4.362

This growth in the standard deviation, proportional to 1/sqrt(p) as the miner's hashpower ratio decreases, is a direct consequence of the Bernoulli distribution, which in turn results from the binary nature of the winner-takes-all approach (you win/you lose, 1/0).

This illustrates how the risks involved in mining with medium to low hashrates in traditional PoW are so high that solo mining for the average miner is like participating in a very high stakes lottery with a single winner each round. This is the kind of gambling that only hobbyists may be interested in; no rational investor would take it as a serious investment opportunity, unless he were able to install and run very large facilities.

The direct consequence, as experience has shown, is pressure toward forming pools: typically centralized entities who aggregate their own hashpower with their clients', dramatically reducing the number of human beings in charge of mining in the network to very low numbers.
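
The series above can be reproduced in a few lines; a minimal sketch of the σ = sqrt(1/p - 1)/sqrt(N) formula:
Code:
#include <cmath>
#include <cstdio>

int main() {
    const double N = 365.0 * 24 * 6;  // one year of blocks, sqrt(N) ~ 229.26
    for (double p : {1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6}) {
        // relative standard deviation of a miner's yearly reward
        double sigma = std::sqrt(1.0 / p - 1.0) / std::sqrt(N);
        std::printf("p=%g  sigma=%.3f\n", p, sigma);
    }
    return 0;  // prints 0.013, 0.043, 0.138, 0.436, 1.380, 4.362
}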

Proximity Premium flaw
Although Mining Variance is important enough by itself to push miners away from solo mining, it is further amplified by another flaw, the proximity premium:

Propagation delay of announcements (most importantly new-block-mined information) gives nodes nearer to the source (primarily the source itself) a premium: they start mining the next block sooner than the other participants, who meanwhile are losing electricity and opportunity costs. A thorough mathematical treatment of this flaw is not available as of this writing, and to my knowledge it would be hard to produce one. The model proposed here is qualitative and based on simplifying assumptions, used instrumentally to prove my point: the Proximity Premium flaw is an amplified version of Mining Variance (analysed above), hence another consequence of bitcoin's winner-takes-all strategy.

The P2P network of bitcoin can be modeled as a weighted undirected complete graph over the set of peers P, with each edge weighted by the number of nodes on the shortest path between its two endpoints (the minimum number of hops from one to the other) in the actual P2P network graph.

Assuming that, for every specific type of information, adjacent nodes see the same propagation delay (not necessarily an exact approximation of the real network), the edge weights can be taken as a scale of the amount of time needed for information to be received (and verified) by each peer.

Every node p_k of such a graph partitions the network into subsets of nodes having the same weight on their edges to p_k, i.e.:

I_k,0 = {p_k}
I_k,1 = {p_i | p_i ∈ P & h_k,i = 1}
...
...
I_k,m = {p_i | p_i ∈ P & h_k,i = m}

where m is the maximum edge weight for node k on the complete graph, and we have I_k,0 U I_k,1 U ... U I_k,m = P.

Now one may ask how the cardinality of each partition I_k,i depends on i. Obviously it depends on the P2P network topology, and differs from node to node and, for each node, from distance to distance. But if we could impose a strict constraint on the minimum and maximum number of peers while forcing nodes to pick their peers randomly, we would observe that within the first few hops there are few nodes 'near' the source, while the majority of nodes are 'far' enough to be in danger of losing opportunity costs by working on information that is already outdated.

It should be noted explicitly that the impact of this proximity flaw is determined not only by distance but also by the nature of the information under consideration.

In Bitcoin network there are 2 types of information continuously produced and distributed by peers: transactions and blocks.

While transactions occur and are relayed constantly, their timeliness is not critical for miners, who don't lose much from the propagation delay of this type of information. The only risks involved are the impact of such delay on verifying blocks containing those transactions, and missing the opportunity to include the ones with higher fees.

But for 'new block mined' events, the impact of being 'far' from the source is disruptive: miners lose opportunity costs linearly with the length of time they are kept in the dark. This opportunity cost is proportional to the miner's relative hashpower and can be expressed as a scalar variable.

It is theoretically possible to analyse the Bitcoin network thoroughly and estimate the average opportunity cost of each miner when blocks are found in other parts of the network. Such an index would be a function of the network topology, expressed as a ratio of the resources the miner holds.

At first glance it may look somewhat odd: big miners lose more, because the ratio applies to larger resources. But another factor should be taken into consideration: miners with larger resources have proportionally higher hashpower, and the probability of their being in a bad topological position is proportionally lower than for small miners.

So everything seems to be in equilibrium: big miners get the premium more frequently, which compensates for the situations in which they lose opportunity costs for topological reasons and because they have not mined the new block. Small miners lose a percentage of their resources which is not too high, but this loss happens more frequently.

In reality, two factors change this picture radically:

Firstly, the topology of the network is not that random and even. Large mining facilities utilize more powerful (sets of) nodes, able to maintain a large number of connections with better bandwidth and fault tolerance.

Secondly, and most importantly, the mining variance discussed above applies here in an order-of-magnitude worse scenario:

For a node to take advantage of its premium (in the very short window of time it is in premium), it has to build and mine a new block from scratch.
For a small mining facility this is practically impossible: for a node that mines a block every 1 month or so (which is what you may get from a farm with 20 S9s), it is very unlikely to happen in the fraction of a second it is in premium.

Hence, while small miners lose opportunity costs every single round, they will practically never have a chance to be compensated.


This is the same assertion I made at the beginning, and it is the main motivation for publishing this analysis:
The Proximity Premium flaw is basically an exaggerated version of the Mining Variance flaw, and any fix/improvement to the latter will fix/improve the former as well.




11  Bitcoin / Development & Technical Discussion / average cardinality of shares per block problem: on: June 17, 2018, 11:05:27 AM
We have encountered an interesting problem:

In the Proof of Collaborative Work proposal, miners find shares with difficulty scaled down 10,000 times (hence easier to mine) and submit them to the network, where they are later accumulated both as proof of work and as the basis for block reward distribution. Proof of work takes place by accumulating shares until they cumulatively reach the network target difficulty.

Each share is assigned a score proportional to the share's difficulty relative to the target difficulty, as we do here:
Code:
SCALE = 10000;            // scale by which we subdivide mining
T = targetDifficulty;     // calculated target for the current height
/***********************************************/
hash = ComputeHash(share);
assert(hash != 0);        // prevent division by zero even in the most unlikely event of the hash being exactly 0 (2^-256 probability)
ratio = T / hash;
assert(ratio >= 1.0 / SCALE); // shares should already have passed this condition to get here
if (ratio > 1)
    score = 1;            // rare: the share meets the full target on its own
else
    score = round(ratio, 5);  // round to 5 decimal places

According to the code, share hashes fall in [T, T*SCALE] and consequently scores fall in the [1/SCALE, 1] interval.

According to the algorithm, we prove work by presenting a collection of shares whose scores sum to more than 1 (actually it is 95%, but we use 1 for convenience here).

@tromp has calculated the expected value for score to be ln(SCALE)/SCALE:
Would you please do a complete rewrite of your proposed formula, ... for clarification? Substituting ln(1/mindiff) for ln(n) just makes no sense to me or I'm missing something here.

Let T be the target threshold determined by the difficulty adjustment,
and scale be some suitably big number like 10^4.

Let shares be hashes that fall into the interval [T, T*scale], and define their score as T / hash.
When accumulating shares until their sum score exceeds 1, one is interested in the expected score of a share.

This can be seen to equal 1/scale times the expected value of 1/x for a uniformly random real x in the interval [1/scale,1]. Considering the area under a share score, the latter satisfies (1-1/scale) E(1/x) = integral of 1/x dx from 1/scale to 1 = ln 1 - ln(1/scale) = ln(scale).

So the expected score is approximately ln(scale)/scale.


For SCALE = 10,000 this yields an expected score of 0.000921.

A raw estimate of the expected (average) number of shares per block, which is what interests us most, can be obtained by simply assuming zero variance: as we need n shares to exceed 1 cumulatively, we put
n * ln(SCALE)/SCALE = 1
hence
n = SCALE/ln(SCALE)

For a 10,000x scaling of difficulty, the latter equation yields n ~ 1086 for the expected number of shares per block, which is very encouraging for the protocol, implying very low overhead for high scaling ratios, BUT ...

As @tromp has shown later, it is not correct in the first place to assume that the variance of the distribution of the number of shares per block is zero.

Typically, this is not a trivial problem, and I'm not personally equipped to approach it in a formal, persuasive way.

On the other hand, I need the original topic to cover more general aspects of the protocol, so I decided to start a new thread and ask for more contributions from @tromp or other interested people with a good taste for probability theory, to be continued here.

I would highly appreciate any further assessment of this problem:

For a suitably large number of blocks, what is the expected number of shares per block, where each block contains a collection of shares, each scoring in [1/SCALE, 1] as described above, and the sum of the scores is supposed to exceed 1?
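
For anyone who wants to attack it numerically first, here is a quick Monte Carlo sketch (mine, under the uniform-hash assumption from @tromp's derivation above):
Code:
#include <cstdio>
#include <random>

int main() {
    const double SCALE = 10000.0;
    std::mt19937_64 rng(42);
    // hash/T is uniform in [1, SCALE]; a share's score is T/hash = 1/u.
    std::uniform_real_distribution<double> u(1.0, SCALE);

    const int blocks = 100000;
    long long totalShares = 0;
    for (int b = 0; b < blocks; ++b) {
        double sum = 0.0;
        while (sum < 1.0) {   // accumulate shares until the proof of work is complete
            sum += 1.0 / u(rng);
            ++totalShares;
        }
    }
    // Compare with the naive zero-variance estimate SCALE/ln(SCALE) ~ 1086.
    std::printf("average shares per block: %.1f\n",
                (double)totalShares / blocks);
    return 0;
}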

12  Bitcoin / Development & Technical Discussion / Getting rid of pools: Proof of Collaborative Work on: June 08, 2018, 02:37:57 PM
Proof of Collaborative Work

A proposal for eliminating the necessity of pool mining in bitcoin and other PoW blockchains

Motivation
For bitcoin and the altcoins based on common PoW principles, the centralization of mining through pools is both inevitable and unfortunate, and it puts all the reasoning that supports the security of PoW in a paradoxical, fragile situation.

The same problem exists in PoS networks. Things can get even worse there, because most PoS systems enforce long-term deposit strategies that strongly discourage miners from migrating from one pool to another because of the costs involved.

Satoshi Nakamoto's implementation of PoW, the core of the bitcoin client software, is based on a winner-takes-all strategy, which is the fundamental factor behind two critical flaws: mining variance and proximity premium, the most important forces forming pooling pressure.

Until now, both mining variance and proximity premium have been considered unavoidable, and hence pooling pressure has been considered an inherent flaw of bitcoin and other PoW-based currencies.

In this proposal, we suggest an alternative variant of PoW in which the traditional winner-takes-all is replaced with a collaborator-takes-share strategy.

The problem of solo mining becoming too risky and impractical for small mining facilities appeared in 2010, less than 2 years after bitcoin launched. It was the worst timing ever: although Satoshi Nakamoto commented on bitcointalk about the first pool proposals, it was among the last posts Satoshi made, and he disappeared from this forum a few days later, forever, without making a serious contribution to the subject.

This way, the confused community came up with an unbelievable solution to such a critical problem: a second-layer centralized protocol named pooling, boosted by greed and ignorance, supported by junior hackers who, as usual, missed the forest for the trees.

Bitcoin was just 2 years old when the pooling age began; pools eventually came to dominate almost all the hashpower of the network.

A quick review of the Slush thread, in which Satoshi made the reply referenced above, reveals how immature and naive this solution was, how it was discussed, and how it was adopted: in a rush, with obvious greed.
Nobody ever mentioned the possibility of an algorithm tweak to keep PoW decentralized. Instead, everybody was talking about how practical such a centralized service was, while the answer was more than obvious:
Yes! You can always do everything with a centralized service; don't bother investigating.

Anyway, in that thread one can't find any argument about the centralization consequences, or about the possibility of alternative approaches, including core algorithm improvements Shocked

I think this is not fair. PoW is great, and it can be improved to eliminate such a paradoxical, centralized second-layer solution. This proposal, Proof of Collaborative Work (PoCW), is an example of the inherent possibilities and capacities of PoW. I didn't find any similar proposal, and it looks original, but if there is prior history I'll be glad to be informed about it. Smiley

The idea is to accept and propagate works with hundreds of thousands of times lower difficulty and accumulate them as proof of work for a given transaction set, letting miners with a very low share of the hash power (say on the order of 10^-6) participate directly in the network and yet experience and monitor their performance on an hourly basis.



Imo, now, after almost a decade, Moore's law has done enough to make it feasible to utilize more bandwidth and storage, and it seems to me kind of hypocritical to make arguments about 'poor miners', pretend concern about centralization threats, and so make excuses for rejecting this very specific proposal which, although it increases the demand for such resources, can radically disrupt the current situation with pools and centralized mining.

This proposal is designed mainly for bitcoin. For the sake of convenience, and to give readers a more specific perception of the idea, I have deliberately used constants instead of adjustable parameters.

Outlines
  • An immediate but not practically feasible approach would be reducing the block time (along with a proportional reduction in block reward). Although this approach cannot be fully applied because of the network propagation problems involved, an excellent consequence would be its immediate impact on the scalability problem; we will use it partially (reducing the block time to 1 minute from the current 10 minutes).
  • As mentioned earlier (and with all due respect to the Core team), I don't take objections about the storage and network implications of reducing the block time as serious criticism. We should not leave mining in the hands of 5 mining pools to support a hypothetical poor miner/full node owner who cannot afford to install a 1 terabyte HD in the next 2 years!
  • Also note that the block time reduction is not a necessary part of PoCW, the proposed algorithm; I'm just including it as one of my old ideas (adopted from another forum member, who suggested it as an alternative in the infamous block size debate; it was later developed a bit more by me) which I think deserves more investigation and discussion.
  • PoCW uses a series of mining-relevant data structures, preserved on the blockchain or transmitted as network messages
    • Net Merkle Tree: an ordinary Merkle hash tree of transactions, with the exception that its coinbase transaction shows no block reward (newly published coins); instead, the miner charges all transaction fees to his account (supports SegWit)
    • Collaboration Share: a completely new data structure composed of the following fields (see the sketch after these outlines):
      • 1- The root of a Net Merkle Tree
      • 2- Collaborating miner's wallet address
      • 3- A nonce
      • 4- A calculated difficulty, using the previous block hash padded with all previous fields; it is always required to be at least as hard as 0.0001 of the current block difficulty
    • Coinbase Share: it is new too and is composed of
      • 1- A Collaborating miner's wallet address
      • 2- A nonce
      • 3- A computed difficulty score using the hash of
        • previous block's hash padded with
        • current block's merkle root, padded with
        • Collaborating miner's address padded with the nonce field
      • 4-  A reward amount field
    • Shared Coinbase Transaction: It is a list of Coinbase Shares  
      • First share's difficulty score field is fixed to be  2%
      • For each share difficulty score is at least as good as 0.0001
      • Sum of reward amount fields is equal to block reward and for each share is calculated proportional to its difficulty score
    • Prepared Block: It is an ordinary bitcoin block with some exceptions
      • 1- Its merkle root points to a  Net Merkle Tree
      • 2- It is fixed to yield a hash that is as difficult as target difficulty * 0.05
    • Finalization Block: It is an ordinary bitcoin block with some exceptions
      • 1- Its merkle root points to a  Net Merkle Tree
      • 2- It is fixed to yield a hash that is as difficult as target difficulty * 0.02
      • 3- It has a new field which is a pointer to (the hash of) a non empty Shared Coinbase Transaction
      • 4- The Shared CoinBase Transaction's sum of difficulty scores is greater than or equal to 0.95
  • Mining process goes through 3 phases for each block:
    • Preparation Phase: it typically takes just a few seconds for the miners to produce one or (rarely) 2 or 3 Prepared Blocks. Note that the transaction fees are already transferred to the miner's wallet through the coinbase transaction committed to the Net Merkle Tree's root for each block.
    • Contribution Phase: miners pick one valid Prepared Block's Merkle root, according to their speculation (which becomes more accurate as new shares are submitted to the network) about whether it will eventually get enough shares, and produce/relay valid Contribution Shares for it.
      As the sum of the difficulty scores for a given Prepared Block's Merkle root grows, we expect an exponential convergence rate toward the most popular Merkle root being included in Contribution Shares.
    • Finalization Phase: after the total score approaches the 0.93 limit, rational miners begin to produce a Finalization Block
  • Verification process involves:
    • Checking both the hash of the finalized block and all of its Shared Coinbase Transaction items to satisfy network difficulty target cumulatively
    • Checking reward distribution in the shared coinbase transaction
    • Checking Merkle tree to be Net
  • UTXO calculation is extended to include Shared Coinbase Transactions committed to finalized blocks on the blockchain as well
  • Attacks/forks brief analysis:
    • Short-range attacks/unintentional forks that try to change the Merkle root are as hard as they are in traditional PoW networks
    • Short-range attacks/unintentional forks that preserve the Merkle root but try to change the Shared Coinbase Transaction have zero side effects on users (as opposed to miners), and as for redistributing shares in favor of the forking miner, they are poorly incentivized: gains never go further than a 2%-10% redistribution
    • Long-range attacks with a total-rewrite agenda will fail just like in traditional PoW
    • Long-range attacks with a partial coinbase rewrite are again poorly incentivized, and the costs won't be justified
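
To make the outlines above more tangible, here is a rough C++ sketch of the new data structures and the cumulative-difficulty check; the field names and types are my assumptions, only the semantics come from the proposal:
Code:
#include <array>
#include <cstdint>
#include <string>
#include <vector>

using uint256 = std::array<uint8_t, 32>;  // stand-in for Bitcoin Core's uint256

struct CollaborationShare {
    uint256     netMerkleRoot;    // root of a Net Merkle Tree (no block reward)
    std::string minerAddress;     // collaborating miner's wallet address
    uint64_t    nonce = 0;
    // Its difficulty is recomputed from prev-block-hash | fields and must be
    // at least 0.0001 of the current block difficulty.
};

struct CoinbaseShare {
    std::string minerAddress;
    uint64_t    nonce = 0;
    double      difficultyScore = 0;  // >= 0.0001; the first share is fixed at 0.02
    uint64_t    rewardAmount = 0;     // proportional to difficultyScore
};

struct SharedCoinbaseTx {
    std::vector<CoinbaseShare> shares;
};

// Finalization check: the accumulated share scores must reach 0.95 of the
// target (the Finalization Block's own hash covers a further 0.02).
bool scoresCoverTarget(const SharedCoinbaseTx& scb) {
    double sum = 0.0;
    for (const CoinbaseShare& s : scb.shares) sum += s.difficultyScore;
    return sum >= 0.95;
}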

Implementation

This is a radical improvement to classical PoW, I admit, but the costs involved are fair given the huge impact and benefits. I have reviewed Bitcoin Core's code and found the change totally feasible and practical from a purely programming perspective. Wallets could easily be upgraded to support the new algorithm as well, but a series of more complicated issues, mostly political, is extremely discouraging; still, it is just too soon to give up and go for a fresh start with a new coin, or settle for an immature fork with little support, imo.

Before any further decisions, it would be valuable to have enough feedback from the community. Meanwhile, I'll be busy coding the canonical parts as a BIP for the bitcoin blockchain. I think it will take 2-3 weeks or a bit more, because I'm not part of the team and have to absorb a lot before producing anything useful; plus, I'm not full time, yet Wink

I have examined the proposed algorithm's feasibility as much as I could, yet I can imagine there may be some flaws I have overlooked, and readers are welcome to improve it. Philosophical comments questioning the whole idea of eliminating pools don't look constructive, tho. Thank you.


Major Edits and Protocol Improvements:
  • June 10, 2018 09:30 pm Inspired by a discussion with @ir.hn

    • A Prepared Block should be saved in the fullnodes for a long period of time enough to mitigate any cheating attempt to avoid Preparation Phase and using non-prepared, trivially generated Net Merkle Roots.  
      • Full nodes MAY respond to a query by peers asking for a block's respected Prepared Block if they have decided to save the required data long enough
      • For the latest 1000 blocks preserving such a data is mandatory.
      • For blocks with an accumulated difficulty harder than or equal to the respected network difficulty, it would be unnecessary to fulfil the above requirement.*
      • Prepared Block and Preparation phase terms replaced the original Initiation Block and Initiation Phase terms respectively to avoid ambiguity
      Notes:
      * This is added to let miners with large enough hash power choose not to participate in the collaborative work.
  • reserved for future upgrades
  • July 3, 2018 02:20 pm inspired by discussions with @anunimint

    • A special transaction was added to the Shared Coinbase Transaction to guarantee/adjust a proper reward for the finder of the Prepared Block, and enhancements were made to include both the block reward and (part of) the transaction fees in the final calculations.
      • In the Finalization Phase, miners should calculate the total reward for the miner of the respective Prepared Block as follows (see the sketch after this list):
        preparedBlockReward = blockReward * 0.04 + totalTransactionFees * 0.3
      • The Shared Coinbase Transaction should include one input/output address-amount pair that adjusts the amount of reward assigned to the Prepared Block miner in the ordinary coinbase transaction
      • All calculations of the rewards assigned to the miners of the Finalized Block and of the shares are carried out on the sum of the network block reward and 70% of the transaction fees collected from transactions committed to the Net Merkle Tree
      Note:
       This change is made both to guarantee a minimum reward for miners of Prepared Blocks and to incentivize them to include more transactions with higher fees
  • reserved for future upgrades
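
To see the adjusted reward split end to end, here is a quick numeric sketch, again in Python. The 4%/30% bonus formula and the 70%-of-fees pool come straight from the rules above; deducting the 4% bonus from the shared pool, so that exactly blockReward + totalFees gets distributed, is my own balancing assumption:

Code:
def split_rewards(block_reward, total_fees, scores):
    """scores: {address: accumulated difficulty score} of the share holders.
    Returns the payout map for the Shared Coinbase Transaction."""
    # From the rules above: preparedBlockReward = blockReward*0.04 + fees*0.3
    prepared_bonus = block_reward * 0.04 + total_fees * 0.3
    # Share calculations run on blockReward + 70% of fees (from the rules);
    # the 4% bonus is assumed to come out of this pool so totals balance.
    share_pool = block_reward * 0.96 + total_fees * 0.7
    total_score = sum(scores.values())
    payouts = {addr: share_pool * s / total_score for addr, s in scores.items()}
    payouts["prepared_block_miner"] = prepared_bonus
    return payouts

# Example: 12.5 BTC subsidy, 0.5 BTC in fees, three share holders
print(split_rewards(12.5, 0.5, {"A": 0.50, "B": 0.30, "C": 0.13}))
# Distributes exactly 13.0 BTC: 0.65 to the Prepared Block miner,
# 12.35 pro rata among A, B and C.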

13  Alternate cryptocurrencies / Altcoin Discussion / 51% attack on Zencash and Bitcoin Gold, punishing the resistance? on: June 04, 2018, 12:53:48 PM
The war is getting dirtier by the day. While yellow crypto media and venal writers here and there are declaring it already over, Bitmain is busy punishing communities who dare to resist.

After the shameful announcement of the Z9, Bitmain's latest ASIC crack against a PoW algorithm, the two Equihash based coins that had officially announced their intention to resist the crack with a hard fork, Bitcoin Gold and Zencash, experienced a 50%+1 attack.

That is how it works in the 'real world' when it comes to money, we all know, but there is something else we know too: cryptocurrency is about changing this reality instead of giving it a hug.

I'm sure the devs in both camps will do what they should, and Bitmain's evil strategy will fail, but here I'm not talking about the alternate currencies; instead, I want to discuss Bitcoin's situation: seized by a company like Bitmain, desperately praying for its majesty to play fair.

This company is dreaming of taking over the whole crypto ecosystem and I think it should be even kicked out of Bitcoin.

Very opposite agendas in a ridiculous distribution of power and resources: they have everything and I'm on my own, I see, but I have something they can't even dream of: a plan!

Firstly, the depth of the emptiness of Bitmain's strategy should be understood. Taking over a decentralized system is an impossible mission. This company is doomed to fail; the question is whether it goes down alone or pulls Bitcoin down with it.

My plan:

I think eliminating ASICs is a very urgent necessity, and it is absolutely feasible, both technically and socioeconomically. For coins that have not been overtaken by Bitmain (yet) it is more convenient, obviously, but in the case of infected ecosystems like Bitcoin, for which my plan is intended, it gets a bit more complicated. I'm thinking of 3 major projects:

1- Specific enhancements can be made to the bitcoin mining protocol that discourage pool operations and help decentralized mining. I call it the de-pool project.
Running such a fork would prevent Bitmain from using its Antpool leverage in the first place and bring mining power back to the actual ASIC owners.
The technical challenges involved are not that hard, and I have some proposals of my own for it.


2- A DAG based algorithm can be implemented with very acceptable resistance against ASIC attacks, despite what venal cryptographers and writers say. I call it the de-ASIC project.
An enhanced version of Dagger Hashimoto can resist ASIC attempts very well. Bitmain's E3 is a misleading propaganda announcement, a joke, and not a true ASIC crack.


3- A smart solution can be devised to allow a smooth migration from ASIC to GPU mining within a few epochs. This is the ASIC Break project.
It is based on a multi-algorithm, multi-difficulty approach and will smoothly retire ASICs over a 2-3 year period.


Each of these projects needs a separate thread to discuss, but the whole picture deserves one as well.

You are welcome to discuss the feasibility of each project and share your ideas, but please keep your 'ASIC is not that bad' shit out of this topic. Here I'm inviting people committed to ASIC resistance, not the ones who (paid or not) write in favor of the enemy. Of course you can write what you want, but it will be my right to call you a venal writer. This is how I think: "anybody who writes in favor of Bitmain is paid by Bitmain", and you know what? I don't feel bad being such an enthusiast this one time.
14  Bitcoin / Development & Technical Discussion / Bitmain claims another crack against PoW: This time Equihash on: May 04, 2018, 12:35:21 PM
Just received this e-mail from Bitmain:
Quote
Dear subscriber,

We are proud and happy to announce the all-new Antminer Z9 mini, a new Antminer model for mining cryptocurrencies based on the hashing algorithm Equihash.

To prevent hoarding by some users or resellers and to ensure that more individuals are able to order this new ASIC miner
{blah, blah, blah}
Their site promises 30 ksol/s with a 300 W power draw. Unlike Bitmain's E3 (which has nothing to do with efficiency), this is a typical ASIC, 25 times or so more efficient than a 1070.
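
A rough sanity check of that ratio, taking the commonly reported ballpark of ~440 Sol/s at ~120 W for a GTX 1070 on Equihash (treat both GPU figures as assumptions; settings vary):

Code:
Z9 mini:   30,000 Sol/s / 300 W  = 100 Sol/s per watt
GTX 1070:    ~440 Sol/s / ~120 W ≈ 3.7 Sol/s per watt
Ratio:     100 / 3.7 ≈ 27x, i.e. "25 times or so"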

It is really sad; they are getting more aggressive on a daily basis. Now the people who say ASIC is not that bad, and have said so over and over, should come and see their propaganda's outcome: a giant Chinese company has become so bloated that it can do whatever it wishes against the community, under the umbrella of nonsense arguments like 'ASIC has pros and cons', 'ASIC is inevitable', ...

Monero has slashed back, and I'm sure Ethereum miners won't stay silent: a hard fork is going to take place (no matter what Buterin and his org do). I'm not familiar with the Zcash community, but I'm sure they will react properly. For now, I'm not here to discuss how hard the slasher approach against Bitmain should be.

I just want to take a look at the whole picture from technical and economic perspectives, and even a historical point of view.

I want the people (paid by Bitmain or not) who provided weak arguments about the ASIC attack on bitcoin and convinced the community to do nothing about it (other than paying $$ to buy the product from the very company that competes with them, in an obvious conflict of interest) to reconsider their strategy once and for all, in a regretful manner, and commit to a practical agenda to stop this company from ruining everything.
15  Alternate cryptocurrencies / Announcements (Altcoins) / I don't care about Vitalik, let's fork to an ASIC resistant ethereum on: April 10, 2018, 03:12:00 PM
To Vitalik Buterin:
Are you kidding? Instead of keeping your promise of Ethash being ASIC resistant, you are happy with Bitmain's attack because of your beloved Casper? Is it a joke?
Actually it is no surprise: a boy who is bored and can't wait to play with his new toy, I understand, but your dad should have done better raising you to be a bit more responsible; you are a grown up boy now, aren't you?
Ethereum is not your playground, do you understand? Thousands of people are risking their whole lives on it, and you are just fantasizing about a stupid, unproven algorithm that is vulnerable to socioeconomic factors (which you have no clue about; what, you got degrees in economics and sociology too, besides being a junior programmer? Really?). I'm talking about Casper and your childish 'slasher' ... you know what? Just STFU Angry

To Ethereum community:
It is a shame! Are you out of your minds? It was not such a 'great' job to propose a Turing complete machine against an already working finite state machine, and guess what, it was not the golden kid who proposed it first! It was just about getting sponsorship and being funded. Stop exaggerating and making a pop star out of an average programmer.

Anyway, I don't care, I'll fork asap.

I'm prepared for the PoW algorithm tweak to make it a nightmare for Bitmain.

Follow me on this topic https://bitcointalk.org/index.php?topic=3286898.0

17  Bitcoin / Development & Technical Discussion / Resurrecting the Champ: PoW to become Bitmain/Buterin resistant on: April 09, 2018, 10:20:39 AM
Hi all,
In this series of articles, I'm going to share my technical analysis of Bitmain's latest attack on Ethash, along with my own counterattack proposal. I have not started coding my algorithm-tweak proposal yet but will do so in the next few days.

It was the bitcoin community's fault in the first place not to recognize ASICs as a crack and not to take proper action against them by upgrading to an ASIC resistant PoW, imo. The endless scalability debate (faked/escalated by Bitmain?  Undecided ) was just a distraction while the community sat and watched the most unique, unprecedented feature of bitcoin, decentralization powered by PoW, being put in danger by an old fashioned kind of crack: the Application Specific Integrated Circuit, ASIC.

As a direct consequence of this passivity, Jihan earned billions of dollars and became powerful enough to attack other coins by investing more in ASIC design and production (besides taking malicious positions in the bitcoin ecosystem). Scrypt, X11, Blake, ... were cracked one after another in a short period of time. Each time, an ASIC miner with a crazy efficiency advantage over GPU mining was introduced by Bitmain, after it had mined enough of each coin before the disclosure.

Now the monster has become rich and self-confident enough to attack the second largest cryptocurrency, and one of the most promising, Ethereum and its Ethash PoW, by introducing the E3. It isn't an ASIC attack, as I'll argue throughout this topic, but it deserves to be classified as an attack, possibly a new class of attack that only such a resourceful monster can pull off, and again its purpose is hardware monopolization.

Monero and its Sergio reacted almost instantly; they have already forked the chain and are very committed to their ASIC resistance strategy. The Ethereum Foundation and Buterin, on the contrary, are showing no interest. They have not responded yet; instead, Buterin has coldly proposed to take advantage of this threat and boost Ethereum's migration to PoS, using his new toy, Casper.

PoW is not a toy to be replaced childishly, and I'm sure the Ethereum Foundation will have a lot of trouble managing such a destructive hard fork (personally, I'll fully support any resistance against their agenda). So I will deliberately eliminate Casper and PoS as a solution, firstly because I don't recognize a coin based on PoS as Ethereum (Posethereum? Maybe Smiley) and secondly because this is really about Ethash. PoS may save or destroy Ethereum, but it has nothing to do with Ethash.

Actually it is more about PoW in general than about Ethash; improving bitcoin's SHA256 PoW is not so unlikely that it should be considered totally off the table forever (even after the failed BTG experiment). I think Bitmain is getting stronger and more dangerous and will take ever more aggressive positions against the community, and one solution to the crisis would be enhancing PoW to get rid of Bitmain. This is why I have framed this topic as a resurrection attempt for PoW rather than for Ethash; the latter is just an interesting case chosen to be studied more precisely.

The upcoming debate in bitcoin over this issue, and its result, won't be as radical as what Buterin and his mates feel free to do with Ethereum. Bitcoin is three times bigger (in terms of market cap) and, unlike the way Buterin and the Ethereum Foundation (inappropriately) treat their coin, it is not an experimental project. There will never be a PoS or proof-of-anything migration debate in bitcoin, but a PoW tweak to become more resistant to Bitmain attacks? Who knows? Undecided

So I see a stake here for the bitcoin community in getting actively involved in the ASIC resistance debate, and it is not that surprising:
cryptocurrencies have a lot of technology and experience to share, and PoW issues are at the top of the list.

After all PoW has gone through, there is disappointment in the air and many give-up proposals on the table. Some people argue that because 'ASIC resistant' is not equal to 'ASIC proof'(?), the failures of the Scrypt, Cryptonight, X11, ... algorithms (and supposedly Ethash now) are enough evidence to be convinced that PoW is inherently vulnerable and will lead to hardware centralization. Some use this to suggest approaches other than PoW for securing the blockchain (the 'proof of something' discourse and the trending PoS variant), while others recommend coping with the claimed flaw and praying for other ASIC manufacturers to come to the scene and compete, or claim that there is no centralization threat at all (honestly, aren't they paid by Bitmain?  Undecided).

I'm strongly against these arguments and believe that ASIC resistance is (practically) the same as ASIC proof; if some algorithms have failed their promise, it implies nothing more than that they have to upgrade and fix their vulnerabilities.

Plus, I think a more general hardware centralization threat should be addressed (including but not limited to ASICs). This is substantially because of my reading of the latest Bitmain E3, about which I have come to the conclusion that it is not an ASIC but is still a serious hardware centralization threat.

Bitmain's E3 seems to be a new type of attack on PoW based blockchains. It is not an Application Specific Integrated Circuit (ASIC), because it lacks the defining signature of ASICs: an orders-of-magnitude improvement in efficiency. From what Bitmain has officially announced, the E3 is no more efficient than a 6x RX 570 GPU rig (it consumes 800 watts to produce 180 Mh/s of Ethash mining power); definitely not what you expect from an ASIC.

But if Bitmain has not achieved more efficiency, how is it possible to categorize the E3 as an attack? The trivial answer is cost efficiency.

In a sophisticated marketing maneuver, Bitmain is selling its miner for far less (more than 3 times less) than what an ordinary GPU miner would pay to assemble a comparable mining rig. This pushes ordinary miners out of the market; it is a hardware centralization threat and deserves to be classified as an attack. I'll show here that it is a special purpose machine built to take advantage of a specific vulnerability of a modern PoW algorithm like Ethash. It is nothing less than an attack, and for convenience I'll call it an Application Specific Architectured Computer, ASAC.
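
Putting rough numbers on that claim, using Bitmain's announced specs and my own ballpark 2018 street prices for GPU hardware (treat the GPU figures as assumptions):

Code:
Antminer E3:  180 Mh/s @ 800 W, $800 (announced)
6x RX 570:    ~6 x 29 Mh/s ≈ 175 Mh/s @ roughly 800-900 W at the wall
              ~6 x $300 per card + board/CPU/PSU/risers ≈ $2,100-2,500

Comparable hashrate and power draw, at roughly a third of the capital cost: that is the whole attack.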

Bitmain, obviously, has not disclosed anything worth mentioning about the E3 other than a picture (of an ugly mini case), plus 800 watts power consumption, 180 Mh/s Ethash power, and an $800 price, along with a 3 month pre-order lead time for buyers. If it were not Bitmain, it would look just like a scam, but it IS Bitmain, and something is wrong here.

Just like with any other technology, the most important secret, the one disclosed the moment a product is introduced, is its feasibility. When you announce a product, you have already given away the most important secret about it: its existence!

My assumption here is that Bitmain has managed to reduce costs dramatically, and in the few days since the announcement I have been busy finding out how.

Obviously, I had to review Ethash again, this time in the light of the E3 disclosure, being 100% convinced that a vulnerability exists and that Bitmain has taken advantage of it to mount the attack.

I have found a possible answer, and a proper solution, both not very hard to guess: I think it is a shared memory attack (not the old Dagger vulnerability, though), and mitigation is possible by enforcing dedicated memory requirements, which I'll share in the next few days. But before proceeding any further, I would like to hear from other forum members about this issue.

18  Economy / Computer hardware / Miners in Iran, just be in touch for a group buy on: January 30, 2018, 03:31:26 PM
This is just for Iranian miners who are interested in a group buy; they can get in touch with me   Smiley

Racists (legendary or not  Grin) had better just ignore this thread and not troll here, or they will taste a bitter tongue besides being reported to the mods.
19  Economy / Computer hardware / [WTB] 1000-1500 W Server PSU kits adopted for gpu mining rigs on: January 09, 2018, 10:21:30 AM
Hi,

I need the kits to be fully adapted for GPU mining rigs (PSU + cables + breakout board), each supporting up to 10 GPUs. Both the risers and the cards should be fed properly.

I can pay up to $150 and $180 for platinum grade 1000 W and 1500 W packages respectively, and I'm talking about 10 units for the moment. I don't need escrow for the forum's respected members, as long as all the discussions (excluding sensitive private data) are done transparently, here in this topic.

I'll choose the vendor within the next 48 hours. Shipping is on me and the destination is Tehran, Iran. Cheers  Smiley
20  Economy / Computer hardware / [WTB]: Bulk GPU mining hardware to be shipped to DUBAI on: December 27, 2017, 01:28:04 AM
Hi guys,
I have established a channel to import my mining parts from Dubai to Tehran. Offers for GPU mining hardware in considerably high volume and on a continuous basis are welcome.
One important notice, though:
I need really good prices; don't waste my time or yours with foolish $$ offers just because of a few days' warm-up in the altcoin market. To be more specific, as an example: I don't care what is going on these days; my experience shows that an AMD RX 570 GPU in the long run makes no more than an average of $1.5 a day, so for a 5 month ROI it shouldn't be bought for more than $200, and absolutely not more than $220, retail.
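
(For clarity, the arithmetic behind that ceiling, assuming the $1.5/day figure is already net of electricity:)

Code:
$1.5/day x ~150 days (5 months) ≈ $225 earned
buy at $200  →  ~$25 of margin;  $220 is roughly break-even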

So please don't argue, here or in PM, to convince me of bullshit prices; I'm not in for that game. Just realistic prices for high volume trading, again, please!