961  Alternate cryptocurrencies / Altcoin Discussion / Re: [LTC] Changing the litecoin Proof of Work function to avoid ASIC mining? on: January 30, 2014, 03:44:40 PM
No; ASIC technology will never catch up to requiring many GBs of memory.

Just adopt a PoW that takes at least a GB of memory to solve
(with no time-memory tradeoff) and that depends on latency rather than bandwidth.

That will put an end to specialized mining hardware.

Yes, ASIC technology absolutely will catch up. It may take time, but considering how fast larger memory sizes come out and then get cheaper, it should be no surprise that it will. I suggest you read up on Moore's law and the new memory technologies coming out. This is, and always will be, a function of the price of a coin and the price of memory (and of course difficulty).

I suggest you read up on http://en.wikipedia.org/wiki/Random-access_memory#Memory_wall

Main memory latencies have improved very slowly over time.
Highly optimized ASICs for latency-hard algorithms already exist.

They're called memory chips.

There will pretty much always be a time-memory tradeoff, because you can always avoid using memory by recomputing the data when you need it...

There is no TMTO if recomputing the data to save half the memory is millions of times slower, as it is with the right memory-hard PoW.
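
To make that concrete, here is a back-of-the-envelope comparison of the time*memory cost of an honest miner versus one attempting the trade-off; the factor of a million below is just the illustrative "millions of times slower" figure from above, not a measured number:

[code]
# Toy arithmetic for the TMTO argument above. Halving the memory only pays off
# if the recomputation penalty is small; a ~10^6 slowdown (illustrative figure,
# not a measurement) makes the trade-off strictly worse.
honest_time, honest_mem = 1.0, 1.0      # normalized baseline miner
tmto_time, tmto_mem = 1.0e6, 0.5        # half the memory, a million times slower

print("honest time*memory :", honest_time * honest_mem)  # 1.0
print("TMTO   time*memory :", tmto_time * tmto_mem)      # 500000.0 -- far worse
[/code]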
962  Alternate cryptocurrencies / Altcoin Discussion / Re: [LTC] Changing the litecoin Proof of Work function to avoid ASIC mining? on: January 29, 2014, 06:25:18 PM
No; ASIC technology will never catch up to requiring many GBs of memory.

Just adopt a PoW that takes at least a GB of memory to solve
(with no time-memory tradeoff) and that depends on latency rather than bandwidth.

That will put an end to specialized mining hardware.

Yes, ASIC technology absolutely will catch up. It may take time, but considering how fast larger memory sizes come out and then get cheaper, it should be no surprise that it will. I suggest you read up on Moore's law and the new memory technologies coming out. This is, and always will be, a function of the price of a coin and the price of memory (and of course difficulty).

I suggest you read up on http://en.wikipedia.org/wiki/Random-access_memory#Memory_wall

Main memory latencies have improved very slowly over time.
Highly optimized ASICs for latency-hard algorithms already exist.

They're called memory chips.
963  Alternate cryptocurrencies / Altcoin Discussion / Re: [LTC] Changing the litecoin Proof of Work function to avoid ASIC mining? on: January 29, 2014, 05:42:19 PM
I think people really miss the big picture here. If you are worried about centralization then trying to push ASICs out with memory changes will not help. This is why: the first time you change the code, ASICs will not work. People who develop ASICs will immediately get wise and as soon as the technology catches up (and it will)...

No; ASIC technology will never catch up to requiring many GBs of memory.

Just adopt a PoW that takes at least a GB of memory to solve
(with no time-memory tradeoff) and that depends on latency rather than bandwidth.

That will put an end to specialized mining hardware.
964  Alternate cryptocurrencies / Altcoin Discussion / Re: True ASIC-proof Attack-Resistant CryptoCurrency Design on: January 29, 2014, 05:32:19 PM
All you need for ASIC resistance is a PoW that is constrained by memory latency
and uses a GB or more, with no feasible time-memory trade-off.
965  Bitcoin / Development & Technical Discussion / Re: Making the target variable according to computing power on: January 28, 2014, 06:22:34 PM
Is it possible to make the target variable for different miners? I know the target changes every 2,016 blocks (roughly two weeks), but it is the same for every miner. In order to discourage miners from investing in ever more powerful machines, and to reduce power consumption, I thought of making the target variable: the more powerful the machine, the smaller the target, and so the more challenging the puzzle becomes. Would this be possible from an implementation point of view? What consequences might it have? Thanks for any explanations.


How would you prevent someone from lying about how powerful their machine is?



You won't believe it, but my Sinclair ZX-80 computer solved another block today at difficulty 1.3!
966  Bitcoin / Development & Technical Discussion / Re: Using captcha to democratize mining on: January 28, 2014, 06:12:01 PM
There is a simple solution to "democratize" mining;
adopt a PoW that takes at least a GB of memory to solve
(with no time-memory tradeoff) and that depends on latency rather than bandwidth.
That would put an end to specialized mining hardware.

How would that prevent specialized mining hardware? People would just make hardware with huge amounts of memory.

If your PoW is constrained by memory latency, then you can't do much better than using commodity hardware, i.e. high-end PCs (possibly GPUs, although they tend to do worse at memory latency).
You cannot build an ASIC with dozens of GB of memory.
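
To give a feel for what "constrained by memory latency" means, here is a toy pointer-chasing sketch (not any particular PoW proposal): every access depends on the previous result, so the accesses cannot be overlapped or parallelized, and with a table far larger than the CPU caches each step pays a full DRAM round trip. The sizes are arbitrary, and CPython's interpreter overhead hides part of the effect that a compiled implementation would show clearly:

[code]
# Toy latency-bound workload: a dependent random walk over a 256 MB table.
# Throughput is set by memory latency, not by bandwidth or raw compute.
import array
import random
import time

SIZE = 1 << 25                          # 32M entries * 8 bytes = 256 MB
table = array.array('q', range(SIZE))
random.shuffle(table)                   # random permutation -> unpredictable walk

def chase(steps):
    i = 0
    for _ in range(steps):
        i = table[i]                    # the next index depends on the current value
    return i

steps = 1_000_000
t0 = time.time()
chase(steps)
print("ns per dependent access:", (time.time() - t0) * 1e9 / steps)
[/code]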
967  Bitcoin / Development & Technical Discussion / Re: Using captcha to democratize mining on: January 28, 2014, 05:52:45 PM
There is a simple solution to "democratize" mining;
adopt a PoW that takes at least a GB of memory to solve
(with no time-memory tradeoff) and that depends on latency rather than bandwidth.
That would put an end to specialized mining hardware.
scrypt?

No way, scrypt (as used for mining) only needs a lousy 128 KB, and even then it is very parallelizable.
Technically, it's not even a proper PoW, since verification requires the same nontrivial computation as proving,
and that only gets worse when you try to increase its memory footprint.
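
For reference, scrypt's big buffer occupies 128 * r * N bytes, and the Litecoin-style parameterization is N=1024, r=1, p=1, which is where the 128 KB comes from. A quick sketch using Python's hashlib.scrypt, which takes the same parameters (the header bytes below are placeholders):

[code]
# Where the "lousy 128KB" comes from: scrypt's working vector V takes
# 128 * r * N bytes, and Litecoin-style mining uses N=1024, r=1, p=1.
import hashlib

N, r, p = 1024, 1, 1
print("scrypt buffer:", 128 * r * N, "bytes")   # 131072 bytes = 128 KiB

# Verifying a share costs the same full scrypt evaluation as producing it,
# which is the "verification is a nontrivial computation" point above.
digest = hashlib.scrypt(b"fake header", salt=b"fake header",
                        n=N, r=r, p=p, dklen=32)
print(digest.hex())
[/code]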
968  Bitcoin / Development & Technical Discussion / Re: Using captcha to democratize mining on: January 28, 2014, 05:41:45 PM
There is a simple solution to "democratize" mining;
adopt a PoW that takes at least a GB of memory to solve
(with no time-memory tradeoff) and that depends on latency rather than bandwidth.
That would put an end to specialized mining hardware.
969  Bitcoin / Project Development / Re: Momentum Proof-of-Work Bounty - 30 BTC on: January 09, 2014, 02:53:08 PM
bytemaster,

First, I think the idea is quite interesting and novel. Let me summarize what you are proposing to make sure we’re on the same page. Then, I will raise my concerns with the current proposal (partly in hopes of collecting a bounty and partly because it’s a really cool idea that could be the grounds for some interesting new cryptocurrencies).

The question is whether Momentum or scrypt gives a worse performance increase to specialized hardware over the typical commodity hardware already in the hands of users.

Paraphrase of your section 3 in the paper, with some notes:
1) Take data D (transactions, previous block hash, etc.) and calculate H = Hash(D).
2) Find nonce A and nonce B such that BirthdayHash(A+H) == BirthdayHash(B+H) (in practice a nonce C, D, etc. could also be required to match).
3) If Hash(H+A+B) < TargetDifficulty then you win.
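
For concreteness, here is a toy sketch of that loop in Python, with SHA-256 standing in for both Hash() and BirthdayHash() and deliberately small parameters so it finishes in seconds; the birthday-space size, nonce width and target are illustrative assumptions, not the paper's values. The dictionary plays the role of the stored set S in the analysis below.

[code]
# Toy Momentum-style search loop for steps 1)-3) above (illustrative only).
import hashlib

BIRTHDAY_BITS = 24                      # birthday space B = 2^24 (toy value)
TARGET = 1 << 245                       # win if the 256-bit Hash(H+A+B) is below this

def birthday_hash(h, nonce):
    d = hashlib.sha256(h + nonce.to_bytes(8, "little")).digest()
    return int.from_bytes(d[:4], "little") % (1 << BIRTHDAY_BITS)

def momentum_search(data, max_nonces=1 << 22):
    h = hashlib.sha256(data).digest()           # H = Hash(D)
    table = {}                                  # birthday -> nonce; this is the RAM (S entries)
    for a in range(max_nonces):
        bday = birthday_hash(h, a)
        b = table.get(bday)
        if b is not None:                       # BirthdayHash(A+H) == BirthdayHash(B+H)
            attempt = hashlib.sha256(h + a.to_bytes(8, "little")
                                       + b.to_bytes(8, "little")).digest()
            if int.from_bytes(attempt, "big") < TARGET:
                return a, b                     # Hash(H+A+B) < TargetDifficulty
        table[bday] = a                         # remember this nonce's birthday
    return None                                 # out of nonces without a win

print(momentum_search(b"example block data D"))
[/code]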


My argument against Momentum compared to scrypt:
The way you describe the algorithm and present it in the paper is all in terms of finding pairs of birthdays that match from a large set and the probability of finding a match. While this is the intuition behind how you can require lots of RAM while still being able to verify a correct answer quickly, it does not capture the real goal of a miner. A miner does not want to find just one match, but as many matches as possible, because a miner wants as many chances as possible at finding a hash under the difficulty. When looking at the number of matches, we should talk in terms of expected value.

Note that since Hash() is supposed to be trivially fast and is called much less often than the other functions, for clarity I just assume we have an oracle that gives us a win-or-loss answer for each match, even though technically each match has to be hashed as Hash(H + A + B) to determine whether it's a winner. For the rest of this explanation I'll just talk in terms of expected matches.

Let B = the number of ‘birthdays’ (hashspace of BirthdayHash).
Let S = the number of BirthdayHash(nonce+H) stored in RAM.
Let M = the maximum value that S can reach, given a particular system's RAM.

If we find a new BirthdayHash(A+H), the expected number of matches (BirthdayHash(A+H) == BirthdayHash(B+H)) is S / B.

Now we calculate the total number of expected matches during a time frame (the average block creation time, for instance). We take the number of BirthdayHash() we compute in that time (N). S will increase from 0 to N unless M < N, meaning we do not have enough RAM to store N results.

N * Mean(S) / B = Total expected matches
If M>=N:
Mean(S) = (1 + 2 + 3 + … + N)/N = (N+1)/2
If N > M:
Mean(S) = ((1 + 2 + 3 + … + M) + M(N-M))/N ≈ M when N >> M

If M >= N: Then the system scales the same as scrypt, just with the number of hashes, so I'd build a scrypt ASIC until N > M. At that point they would be basically the same.

If N > M but not by much: keep deploying scrypt ASICs; even though there is a minor hit once RAM is full, it doesn't hurt too badly yet. If the goal is to land in this small sliver, I think it will be very hard to predict where it will be ahead of time, and it will likely vary drastically between CPU/GPU/ASIC, so even hitting it right on for one may miss the others.

If N >> M:
This is where the lack of RAM kills straight scrypt ASICs as a possibility, so I assume this is the scenario that Momentum is designed for and would be tweaked to stay in.
Now the expected value is: N * M / B = Total expected matches

This means that two variables affect the total amount of mining: N, the number of BirthdayHash() computed per unit time, and M, which is basically the amount of RAM. The fundamental problem I see here is that we now have quadratic scaling with increased resources rather than linear.
For example:
You have a standard system that performs N BirthdayHash() and has M RAM, which then expects N*M/B matches. Someone who buys a system that costs five times as much now gets 5N*5M/B matches, or 25 times as much output, while only paying 5 times as much.
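
A quick sanity check of that 5x-cost, 25x-output example, in the N >> M regime where expected matches ≈ N * M / B (the numbers are arbitrary illustrative assumptions, chosen only to show the scaling):

[code]
# Quadratic scaling check: spending 5x on both hashing and RAM gives 25x matches.
B = 2 ** 50            # size of the birthday space
N = 10 ** 9            # BirthdayHash() evaluations per block interval
M = 10 ** 7            # results that fit in RAM (so N >> M)

baseline = N * M / B
bigger = (5 * N) * (5 * M) / B      # five times the hashing AND five times the RAM

print(baseline, bigger, bigger / baseline)   # the ratio prints 25.0, not 5.0
[/code]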

With off-the-shelf hardware there will be a natural limit on RAM, perhaps such that the cost increases dramatically once people have to start buying high-end servers. But custom ASICs can use standard commodity RAM, and while they would no doubt face some scaling and bus issues, if ASIC makers could achieve even a few multiples of the RAM normal users have, that would multiply the effect of their hashing power. At the same time, even if RAM isn't increased past commodity levels, ASIC makers who use just as much RAM as commodity systems can still add the same multiplier a straight scrypt ASIC would, since just increasing the number of hashes per second still scales the number of matches linearly. For this reason I believe that Momentum is worse than scrypt in terms of customized hardware in the long run. In the short term, such hardware would no doubt be harder to design than plain scrypt ASICs.

Another interesting piece of the algorithm to look at is the number of nonces that have to have matching birthdays. Scaling this up to 3 or 4+ shouldn't affect the fundamental hashes-per-second * RAM scaling (I don't have time to do the math, but each case should be so reliant on pair matching as to be almost identical), although it will affect the storage strategy and how you use the RAM. In fact, once the number of birthdays that must all match is greater than 2, pools could use this to gain an advantage. Consider a pool that, in addition to hashing on the same block, also shares any partial matches (when BirthdayHash(A+H) == BirthdayHash(B+H) and you need C and D to match as well). Each user of the pool can then improve their odds by saving partial matches and junking some single results they had. In this scenario it makes sense to join the largest pool, which gives you the best partial matches, so your expected matches also go up, increasing your contribution. This is a dangerous incentive, as even with BTC we have seen some large pools form without any particular incentive beyond lower variance.


All that said, it is an interesting idea and I’ll keep thinking about ways to utilize/improve it. It’s also quite possible that I’ve misunderstood your algorithm so let me know what you think. These are the kinds of ideas that we need to share and discuss in order to build new innovative cryptocurrencies and improve upon existing ones.

Let me know if anything is unclear. I'm not sure how this board does math notation.


Thanks,

Nathaniel



The new Cuckoo Cycle proof-of-work, described and implemented at
https://github.com/tromp/cuckoo, doesn't suffer these drawbacks, since
1) at the default or any higher difficulty setting, the expected number of solutions per instance is less than one, so if you fail, you need to change the header (e.g. the timestamp field) and try a new instance;
2) giving it much more memory than the standard algorithm uses doesn't help in finding a solution any faster.

970  Alternate cryptocurrencies / Altcoin Discussion / Re: [LTC] Changing the litecoin Proof of Work function to avoid ASIC mining? on: January 09, 2014, 02:28:12 PM
As far as a PoW that would be more ASIC-resistant: what about the Momentum PoW that Protoshares is using? http://invictus-innovations.com/s/MomentumProofOfWork.pdf

EDIT: After thinking about it for a bit, the Momentum PoW would not work, as it would effectively cut out current GPU miners. Any change of the PoW must allow everyone who is already participating in mining Litecoin to continue to do so.

Don't believe everything you read. These Protoshares guys have apparently never heard of cycle-detection algorithms (like Pollard's rho) that find collisions while avoiding the space-complexity blowup; their whitepaper is nonsense.
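
To illustrate the point about cycle detection, here is a small Floyd tortoise-and-hare demo (the engine behind Pollard's rho): it finds two distinct inputs with the same truncated hash using O(1) memory, i.e. no big table of stored birthdays. The 4-byte truncation is only to keep the demo fast; applying this to Momentum's BirthdayHash takes a little more work (the output has to be fed back in as the next nonce), but the memory argument is the same.

[code]
# Memoryless collision finding via Floyd's cycle detection (toy parameters).
import hashlib

TRUNC = 4                                        # collide on 32 bits of SHA-256

def f(x):
    return hashlib.sha256(x).digest()[:TRUNC]    # maps 4-byte strings to 4-byte strings

def find_collision(seed):
    # Phase 1: detect a repeat in the sequence seed, f(seed), f(f(seed)), ...
    tortoise, hare = f(seed), f(f(seed))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # Phase 2: walk from the seed and from the meeting point in lockstep;
    # just before they coincide, the two (distinct) values collide under f.
    tortoise = seed
    while True:
        t_next, h_next = f(tortoise), f(hare)
        if t_next == h_next:
            return tortoise, hare
        tortoise, hare = t_next, h_next

for s in range(8):                               # retry the rare degenerate case
    a, b = find_collision(s.to_bytes(TRUNC, "big"))
    if a != b:
        break
assert a != b and f(a) == f(b)
print(a.hex(), b.hex(), "->", f(a).hex())
[/code]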

The new Cuckoo Cycle proof-of-work system that I made available yesterday at

  https://github.com/tromp/cuckoo

should be more robust against time-memory tradeoffs.
971  Alternate cryptocurrencies / Altcoin Discussion / Re: Cuckoo Cycles: a new memory-hard proof-of-work system on: January 09, 2014, 02:15:26 PM
POS is the future!

I think both PoW and PoS have their place in the future.
PoW is useful in the startup phase to establish a sufficiently large
base of coins that can later be leveraged by PoS.

But as far as PoWs go, Cuckoo Cycle has some very distinct advantages over hashcash-scrypt.

Both were designed not to be easily parallelizable. But computing a single scrypt is as demanding
on the prover as it is on the verifier, which means we cannot just make the prover's work harder
(i.e. use more memory) without also making life (too) hard on the verifier, i.e. every client.

Cuckoo Cycle, on the other hand, has trivially verifiable proofs, while you can make the prover
use up to 14 GB of memory and spend many minutes even at the lowest difficulty setting.
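
To illustrate the prover/verifier asymmetry, here is a toy verifier: it only re-derives the handful of edges named by the proof and checks that they close a single cycle, which is trivial next to the prover's search over the whole graph. The hash, the node indexing and the cycle length below are simplified placeholders, not the scheme's actual definitions; see the whitepaper and the repository for those.

[code]
# Toy Cuckoo-Cycle-style proof verification (simplified placeholders throughout).
import hashlib

NUM_NODES = 1 << 20          # nodes on each side of the bipartite graph (assumed)
PROOF_SIZE = 42              # required cycle length (assumed example value)

def edge(header, nonce):
    # Derive one edge (u, v) of the bipartite graph from a nonce (simplified).
    d = hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
    u = int.from_bytes(d[:8], "little") % NUM_NODES
    v = int.from_bytes(d[8:16], "little") % NUM_NODES
    return 2 * u, 2 * v + 1  # even/odd keeps the two sides of the graph disjoint

def verify(header, proof):
    # Cheap check that the proof nonces' edges form one PROOF_SIZE-cycle.
    if len(proof) != PROOF_SIZE or len(set(proof)) != PROOF_SIZE:
        return False
    edges = [edge(header, n) for n in proof]
    adj = {}                                   # node -> indices of incident edges
    for i, (u, v) in enumerate(edges):
        adj.setdefault(u, []).append(i)
        adj.setdefault(v, []).append(i)
    if any(len(e) != 2 for e in adj.values()): # every node must have degree 2
        return False
    # Walk the edges from an arbitrary start; a valid proof uses all of them
    # in one closed loop rather than several smaller ones.
    start = edges[0][0]
    node, used = start, set()
    for _ in range(PROOF_SIZE):
        unused = [i for i in adj[node] if i not in used]
        if not unused:
            return False
        i = unused[0]
        used.add(i)
        u, v = edges[i]
        node = v if node == u else u
    return node == start
[/code]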

972  Bitcoin / Bitcoin Discussion / Re: If you could travel back in time, how would you change Bitcoin? on: January 09, 2014, 03:34:40 AM
I'd employ a different proof-of-work that doesn't provide much benefit to GPUs/ASICs,
but rather depends on many gigabytes of memory (just for generating, not for verifying)

Well, that's what I'm trying to design right now:)

-John

Where could we read more about ur design?

I'll announce the design in this forum, hopefully within a week.

-John

Well, it took two days longer, but see the Cuckoo Cycle proof-of-work announcement in the Discussion forum.
973  Alternate cryptocurrencies / Altcoin Discussion / Cuckoo Cycle: a new memory-hard proof-of-work system on: January 08, 2014, 06:29:26 PM
I finished designing and implementing a new memory-hard proof-of-work system
based on Cuckoo Hashtables.

A preliminary version of the whitepaper is available online at
https://github.com/tromp/cuckoo/blob/master/cuckoo.pdf

Please have a look!



974  Bitcoin / Bitcoin Discussion / Re: If you could travel back in time, how would you change Bitcoin? on: December 31, 2013, 07:50:32 PM
i would bring an ASIC back in time with me, and instamine everything  

Won't work due to diff adjustment.

Would work fine if he just delays submitting solutions until after, say, 12 mins have passed.
That way he'll get a decent fraction of all blocks without bringing down the average time much...
975  Bitcoin / Bitcoin Discussion / Re: If you could travel back in time, how would you change Bitcoin? on: December 31, 2013, 05:47:34 PM
I'd employ a different proof-of-work that doesn't provide much benefit to GPUs/ASICs,
but rather depends on many gigabytes of memory (just for generating, not for verifying)

Well, that's what I'm trying to design right now:)

-John

Where could we read more about ur design?

I'll announce the design in this forum, hopefully within a week.

-John
976  Bitcoin / Bitcoin Discussion / Re: If you could travel back in time, how would you change Bitcoin? on: December 31, 2013, 05:35:34 PM
I'd employ a different proof-of-work that doesn't provide much benefit to GPUs/ASICs,
but rather depends on many gigabytes of memory (just for generating, not for verifying)

Well, that's what I'm trying to design right now:)

-John
977  Bitcoin / Project Development / Re: Would you be interested in a "gen-wallet"? on: December 31, 2013, 04:59:46 PM
or you could call it after what people might use to hold multiple wallets... a purse.

Happy new year!

(also testing my ability to post in a non-newbie forum)
978  Other / Beginners & Help / memory-hard proof-of-work on: December 30, 2013, 08:08:48 PM
I'm interested in alternatives to bitcoin's ASIC-friendly proof-of-work (pow).

A memory-hard pow offers two main advantages:
1) reduce energy waste
2) shift the mining balance back from ASIC-farms to desktops,
    allowing many more people fair odds in the mining lottery

scrypt was developed with memory-hardness in mind, but turned out to be only
very mildly successful in that regard.

More recently, Invictus Innovations proposed the Momentum pow
with the exact same goals in mind, but that turns out to allow a memory-time trade-off
as well; see the discussion at
https://bitsharestalk.org/index.php?topic=22.60

The ideal pow function would require a certain amount of memory (several GB) with no
reasonable memory-time trade-off; in other words it should become grossly inefficient
with, say, only half that minimum amount of memory.

Has there been any other research done in this area?
Have any other pow schemes been proposed with memory-hardness in mind?

I have some ideas that I'm considering putting in a whitepaper,
but I'd like to make sure I'm aware of all previously published efforts.

regards,
-John