Bitcoin Forum
Topic: MemoryCoin 2.0 Proof Of Work  (Read 21394 times)
Sharky444 (Hero Member | Activity: 724 | Merit: 500)
December 04, 2013, 10:45:40 AM  #41

Freetrade you forget that GPUs can have up to 300GB/s main memory bandwidth. So you get only a latency advantage with L3, not a bandwidth advantage. The data will not be in L2 at all, as you have only 256KB/core.

Radix - just imagine
FreeTrade (OP) (Legendary | Activity: 1428 | Merit: 1030)
December 04, 2013, 11:18:17 AM  #42

Quote from: Sharky444
Freetrade you forget that GPUs can have up to 300GB/s main memory bandwidth. So you get only a latency advantage with L3, not a bandwidth advantage. The data will not be in L2 at all, as you have only 256KB/core.

Wow, yes actually I hadn't realized the differential between newer GPUs and CPUs was so great. I'll need to reconsider.

Membercoin - Layer 1 Coin used for the member.cash decentralized social network.
10% Interest On All Balances. Browser and Solo Mining. 100% Distributed to Users and Developers.
AnonyMint (Hero Member | Activity: 518 | Merit: 521)
December 04, 2013, 07:10:13 PM  #43

Quote from: Sharky444
Freetrade you forget that GPUs can have up to 300GB/s main memory bandwidth. So you get only a latency advantage with L3, not a bandwidth advantage. The data will not be in L2 at all, as you have only 256KB/core.

Quote from: FreeTrade
Wow, yes actually I hadn't realized the differential between newer GPUs and CPUs was so great. I'll need to reconsider.

As far as I can see, Scrypt won't suffice. I think Percival was mistaken when he wrote that 512 MB would defeat the GPU, because the GPU has much faster main memory bandwidth. And latency can be masked on the GPU by running many threads. And computation is faster on the GPU by running many threads.

You could try to run a 4GB Scrypt to cut down on the number of threads the GPU can run, but it will be so slow (1 minute per hash) that your denial-of-service rejection and pools likely won't work well. Also it's still vulnerable, because Scrypt requires your latency to be significantly less than your BlockMix execution time, else "lookup gap" (cpuminer) can replace the memory with computation, and GPUs can accelerate the computation of BlockMix because Salsa20 is partially parallelizable.
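For readers who haven't seen it, the "lookup gap" trade-off works roughly like this - a minimal Python sketch with toy sizes, using SHA512 as a stand-in for scrypt's BlockMix (names and sizes here are illustrative, not cpuminer's actual code):

Code:
import hashlib

def H(b: bytes) -> bytes:
    # Stand-in mixing function; real scrypt uses BlockMix/Salsa20.
    return hashlib.sha512(b).digest()

def romix(x: bytes, n: int, gap: int = 1) -> bytes:
    x = H(x)  # normalize input to 64 bytes
    # Fill phase: store only every gap-th table entry (1/gap the memory).
    v = []
    for i in range(n):
        if i % gap == 0:
            v.append(x)
        x = H(x)
    # Mix phase: recompute the skipped entries on the fly (extra compute).
    for _ in range(n):
        j = int.from_bytes(x[:8], "little") % n
        entry = v[j // gap]
        for _ in range(j % gap):
            entry = H(entry)  # walk forward to the skipped entry
        x = H(bytes(a ^ b for a, b in zip(x, entry)))
    return x

# gap=1 is vanilla ROMix; gap=4 uses 1/4 the memory for ~1.75x the hashing.
print(romix(b"header", 1 << 10, gap=4).hex()[:16])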

Percival thinks Litecoin reduced the advantage of ASICs by a factor of 10.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse | THIS FORUM ACCOUNT IS NO LONGER ACTIVE
Adamlm (Hero Member | Activity: 822 | Merit: 1002)
December 04, 2013, 07:14:00 PM  #44

TL;DR

I was mining the first MemoryCoin - will I be able to keep the wallet from that client and convert all mined coins to 2.0?

Etlase2 (Hero Member | Activity: 798 | Merit: 1000)
December 04, 2013, 08:37:21 PM  #45

Quote from: AnonyMint
Excuse me I meant 256 KB. I will correct my typo.

It wasn't a typo; there was a logic error there too, and this is the second time you've made this exact mistake. Curious, for someone who seems to be very well versed in the subject matter.

superresistant (Legendary | Activity: 2128 | Merit: 1120)
December 04, 2013, 08:43:50 PM  #46

Quote from: Adamlm
TL;DR
I was mining the first MemoryCoin - will I be able to keep the wallet from that client and convert all mined coins to 2.0?

No.
AnonyMint (Hero Member | Activity: 518 | Merit: 521)
December 05, 2013, 04:25:26 AM (last edit: December 05, 2013, 04:40:19 AM by AnonyMint)  #47

Quote from: AnonyMint
Excuse me I meant 256 KB. I will correct my typo.

Quote from: Etlase2
It wasn't a typo; there was a logic error there too, and this is the second time you've made this exact mistake. Curious, for someone who seems to be very well versed in the subject matter.

Possibly. I've been sleepless lately; I even just woke up with a headache, dizzy. Feel free to point it out if you want. I hadn't been working on the CPU-only mining aspect lately (note there are 3 components to Scrypt: the overall ROMix, the inner BlockMix, and the innermost Salsa20 choice), so I had to reload the various issues into mind, and I was simultaneously taking on a wide range of topics throughout the forums.

Note I challenged some rich folks on the forum to see if they can spend their money to develop a CPU-only coin independently of me. So perhaps someone will beat me to it.

There are also very wealthy Chinese investors lurking behind the scenes who want to buy into altcoin development; for example, see the $500,000 that was injected into bytemaster's corporation by Chinese investors. I heard his ProtoShares launch already has a market cap of $24 million, but I have not verified that.

We could possibly see a proliferation of altcoins soon. I am hoping the quality ones can still be distinguished from the chaff.

Etlase2, remember I told you in April or so that it was urgent and your altcoin needed to be finished within 2013, or mid-2014 at the latest. And you scoffed at me. Do I need to go quote that post for you? I hope you are progressing well, and I wish you the best of course. I wish you would stop the occasional spiteful remarks. Let everything be decided on the merits of the code released. "Talk is cheap. Show me the code." - Linus Torvalds.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse | THIS FORUM ACCOUNT IS NO LONGER ACTIVE
FreeTrade (OP) (Legendary | Activity: 1428 | Merit: 1030)
December 05, 2013, 08:17:08 AM  #48

Quote from: Sharky444
Freetrade you forget that GPUs can have up to 300GB/s main memory bandwidth. So you get only a latency advantage with L3, not a bandwidth advantage. The data will not be in L2 at all, as you have only 256KB/core.

Quote from: FreeTrade
Wow, yes actually I hadn't realized the differential between newer GPUs and CPUs was so great. I'll need to reconsider.


I'm coming to the conclusion that the only place (other than the size of main memory) where CPUs can keep pace with GPUs is the L2 cache (either size or bandwidth of L2), but that the GPU can compensate by running more processes slowly from main memory. Maybe it is only possible to delay GPUs with sheer complexity, à la Quark.

Membercoin - Layer 1 Coin used for the member.cash decentralized social network.
10% Interest On All Balances. Browser and Solo Mining. 100% Distributed to Users and Developers.
FreeTrade (OP) (Legendary | Activity: 1428 | Merit: 1030)
December 05, 2013, 08:40:46 AM  #49

Warning - offtopic and inflammatory posts will be removed. Please stay on topic.

Membercoin - Layer 1 Coin used for the member.cash decentralized social network.
10% Interest On All Balances. Browser and Solo Mining. 100% Distributed to Users and Developers.
cryptrol (Hero Member | Activity: 637 | Merit: 500)
December 05, 2013, 10:01:01 AM  #50


Quote from: FreeTrade
I'm coming to the conclusion that the only place (other than the size of main memory) where CPUs can keep pace with GPUs is the L2 cache (either size or bandwidth of L2), but that the GPU can compensate by running more processes slowly from main memory. Maybe it is only possible to delay GPUs with sheer complexity, à la Quark.

I think the conclusion you are coming to is the right one.

IMHO trying to defeat GPUs, botnets, or future ASICs is nonsense and a waste of resources; it just can't be done. That's especially true for botnets, for obvious reasons, and in the end GPUs are just massively parallel slow CPUs.

I would just focus on improving some of the well-known PoW algorithms; most are just fine but can be fine-tuned to get better results or more GPU resistance (you did it with scrypt).

Just my 2 cents.
AnonyMint (Hero Member | Activity: 518 | Merit: 521)
December 05, 2013, 11:11:41 AM  #51

Maybe I can help you soon, FreeTrade, or you can help me. Let's see when there is something tangible to evaluate. Feel free to delete this if off-topic. I don't like to leave this hanging, but I had better not say more at this time.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse | THIS FORUM ACCOUNT IS NO LONGER ACTIVE
ludd (Newbie | Activity: 21 | Merit: 0)
December 09, 2013, 06:40:28 PM  #52

Any news about MEC 2.0?
Sharky444 (Hero Member | Activity: 724 | Merit: 500)
December 10, 2013, 03:01:09 PM (last edit: December 14, 2013, 07:24:20 PM by Sharky444)  #53

Quote from: FreeTrade
I'm coming to the conclusion that the only place (other than the size of main memory) where CPUs can keep pace with GPUs is the L2 cache (either size or bandwidth of L2), but that the GPU can compensate by running more processes slowly from main memory. Maybe it is only possible to delay GPUs with sheer complexity, à la Quark.

I came to the same conclusion after thinking about it for a week. You can only get an advantage from latency (the L2 bandwidth does not matter much, as GPUs also have an L2 for every core block, though smaller than a CPU's), which means you would need a shitload of different memory operations within L2 (I would make it about ~200 KB, not 256, to ensure that it really sticks in L2). The GPU could compensate by running 200 threads at a much slower pace, but still at 10x the speed of the CPU overall. If you make the algo complex enough, a GPU miner will still take at least 1 month to code, maybe 3-5, so CPU users get a head start.


Radix - just imagine
FreeTrade (OP) (Legendary | Activity: 1428 | Merit: 1030)
December 13, 2013, 08:18:09 AM (last edit: December 14, 2013, 11:06:26 AM by FreeTrade)  #54

Ok, here's the latest modification to the PoW.

1. Generate 1 GB of pseudorandom data using SHA512
2. For each 64 KB block - repeat 50 times (originally 10):
 2.1 Use the last 32 bits as a pointer to another 64 KB block
 2.2 XOR the two 64 KB blocks together
 2.3 AES-CBC encrypt the result using the last 256 bits as a key
3. Use the last 32 bits % 2^14 as the solution. If the solution == 1968, the block is solved

Expect 1 solution per set.
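To make those steps concrete, here is a toy-scale Python sketch (1 MB buffer and 4 KB blocks instead of 1 GB and 64 KB; PyCryptodome is assumed for the AES-CBC step, and the zero IV and tail offsets are placeholder choices the spec above doesn't pin down):

Code:
import hashlib
from Crypto.Cipher import AES  # pip install pycryptodome

BLOCK  = 4 * 1024       # toy stand-in for the 64 KB block
TOTAL  = 1024 * 1024    # toy stand-in for the 1 GB buffer
ROUNDS = 50
N      = TOTAL // BLOCK

def generate(seed: bytes) -> bytearray:
    # Step 1: SHA512-chained pseudorandom fill.
    buf, state = bytearray(), seed
    while len(buf) < TOTAL:
        state = hashlib.sha512(state).digest()
        buf += state
    return buf[:TOTAL]

def mix(buf: bytearray) -> None:
    # Step 2: XOR each block with a pointed-to block, then encrypt it.
    for b in range(N):
        lo, hi = b * BLOCK, b * BLOCK + BLOCK
        for _ in range(ROUNDS):
            ptr = int.from_bytes(buf[hi - 4:hi], "little") % N      # 2.1
            src = ptr * BLOCK
            x = int.from_bytes(buf[lo:hi], "little") ^ \
                int.from_bytes(buf[src:src + BLOCK], "little")      # 2.2
            buf[lo:hi] = x.to_bytes(BLOCK, "little")
            key = bytes(buf[hi - 32:hi])                            # 2.3
            aes = AES.new(key, AES.MODE_CBC, iv=b"\x00" * 16)
            buf[lo:hi] = aes.encrypt(bytes(buf[lo:hi]))

def solved(buf: bytearray) -> bool:
    # Step 3: last 32 bits mod 2^14 against the magic value.
    return int.from_bytes(buf[-4:], "little") % (1 << 14) == 1968

buf = generate(b"block header / nonce")
mix(buf)
print("solved:", solved(buf))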

This will offer a good level of GPU resistance for the following reasons -
1. Complexity - requires SHA512 hashing and AES-CBC encryption
2. CPU instruction sets - many CPUs have dedicated AES instructions; GPUs don't
3. SHA512 - more efficient with 64-bit operations, but most GPUs are 32-bit
4. Multiple AES encryption and XORing will keep the L2 cache busy; GPUs will be forced to use slower memory for massive parallelization

I think more efficient GPU miners will be possible, but they should be delayed and should not offer performance gains of more than 2X or 3X.

 

Membercoin - Layer 1 Coin used for the member.cash decentralized social network.
10% Interest On All Balances. Browser and Solo Mining. 100% Distributed to Users and Developers.
Sharky444 (Hero Member | Activity: 724 | Merit: 500)
December 13, 2013, 08:23:30 AM  #55

A dedicated CPU miner that uses hardware AES should be at least 10 times faster. I hope such a miner is released to the public soon after launch and not kept private.

Radix - just imagine
FreeTrade (OP) (Legendary | Activity: 1428 | Merit: 1030)
December 13, 2013, 07:11:28 PM  #56

Quote from: Sharky444
A dedicated CPU miner that uses hardware AES should be at least 10 times faster. I hope such a miner is released to the public soon after launch and not kept private.

Can you explain more about what you mean by hardware AES? Are you talking about the AES instructions built into CPUs? If so, I'm hoping these will be compiled into the Qt client, so we should have a pretty efficient miner off the bat for any chips with the AES instruction set - more details here -

http://software.intel.com/en-us/articles/intel-advanced-encryption-standard-instructions-aes-ni

and

http://en.wikipedia.org/wiki/AES_instruction_set#Supporting_CPUs
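On Linux you can check for that CPU flag quickly (an illustrative snippet only, not part of the client - a compiled miner gets the speedup from AESENC/AESDEC instructions emitted when built with -maes, or via a crypto library that uses them):

Code:
def has_aesni() -> bool:
    # Look for the "aes" flag described on the pages linked above.
    try:
        with open("/proc/cpuinfo") as f:
            return "aes" in f.read().split()
    except OSError:
        return False  # non-Linux or unreadable; assume absent

print("AES-NI available:", has_aesni())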

Membercoin - Layer 1 Coin used for the member.cash decentralized social network.
10% Interest On All Balances. Browser and Solo Mining. 100% Distributed to Users and Developers.
Sharky444 (Hero Member | Activity: 724 | Merit: 500)
December 13, 2013, 07:31:13 PM  #57

Yes, I did mean those.

p.s.

Freetrade, please read the PM I sent you yesterday.

Radix - just imagine
AnonyMint (Hero Member | Activity: 518 | Merit: 521)
December 15, 2013, 12:47:09 PM  #58

I didn't see this until just now.

Quote from: FreeTrade
3. Use the last 32 bits % 2^14 as the solution. If the solution == 1968, the block is solved

How is the difficulty altered if the solution is a fixed value rather than a variable value that the output of the hash must be less than?

Quote from: FreeTrade
Ok, here's the latest modification to the PoW.

1. Generate 1 GB of pseudorandom data using SHA512
2. For each 64 KB block - repeat 50 times (originally 10):
 2.1 Use the last 32 bits as a pointer to another 64 KB block
 2.2 XOR the two 64 KB blocks together
 2.3 AES-CBC encrypt the result using the last 256 bits as a key
3. Use the last 32 bits % 2^14 as the solution. If the solution == 1968, the block is solved

Expect 1 solution per set.

This will offer a good level of GPU resistance for the following reasons -
1. Complexity - requires SHA512 hashing and AES-CBC encryption
2. CPU instruction sets - many CPUs have dedicated AES instructions; GPUs don't
3. SHA512 - more efficient with 64-bit operations, but most GPUs are 32-bit
4. Multiple AES encryption and XORing will keep the L2 cache busy; GPUs will be forced to use slower memory for massive parallelization

I think more efficient GPU miners will be possible, but they should be delayed and should not offer performance gains of more than 2X or 3X.

Even Haswell has 10X fewer FLOPS than top-of-the-line GPUs. Memory latency will be masked away to 0 if the GPU can run enough parallel copies. Your algorithm is going to be hobbled relative to the GPU by the roughly 10X slower CPU memory bandwidth (~20 GB/s) when writing out the initial 1 GB.
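Side by side, using the figures stated above (illustrative arithmetic, not a measurement):

Code:
cpu_bw, gpu_bw, buf_gb = 20.0, 300.0, 1.0  # GB/s, GB/s, GB, as stated above
print(f"CPU writes 1 GB in about {buf_gb / cpu_bw * 1000:.0f} ms")   # ~50 ms
print(f"GPU writes 1 GB in about {buf_gb / gpu_bw * 1000:.1f} ms")   # ~3.3 ms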

The use of dedicated AES instructions on CPUs might help a little, but I doubt enough to stop the GPU from being 10X faster, and ASICs could implement AES to run faster still.

The latency of L2 is still several cycles, and the latency on the GPU will asymptotically go to 0 with enough parallelism, although you have that 1 GB to limit the number of copies the GPU can run, i.e. 6 parallel copies on a 6 GB GPU (which might be sufficient to eliminate most latency).

You might get the 3X faster expected result, but I am not confident of that.

How does this algorithm validate faster and with less memory? I think I know, but I don't want to say. I want to know what you came up with.

As a rough guesstimate (note I am quite sleepy at the moment), I can't see that you've accomplished anything Litecoin did not already, except the extra DRAM requirement, i.e. probably 10X faster on GPU and vulnerable to ASICs (running with DRAM). Or did I miss something?

Note Litecoin ASICs are apparently in development now.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse | THIS FORUM ACCOUNT IS NO LONGER ACTIVE
FreeTrade (OP) (Legendary | Activity: 1428 | Merit: 1030)
December 15, 2013, 01:02:18 PM  #59


Quote from: AnonyMint
How is the difficulty altered if the solution is a fixed value rather than a variable value that the output of the hash must be less than?

So the solution is then SHA256'd, and this value is the variable compared against the target.
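That is, something like this sketch (assuming the usual hash-below-target comparison; the names here are mine):

Code:
import hashlib

def meets_difficulty(solution: bytes, target: int) -> bool:
    # The fixed-pattern solution is hashed again; that SHA256 output is
    # the variable value compared against an adjustable difficulty target.
    return int.from_bytes(hashlib.sha256(solution).digest(), "big") < target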

Quote from: AnonyMint
Even Haswell has 10X fewer FLOPS than top-of-the-line GPUs. Memory latency will be masked away to 0 if the GPU can run enough parallel copies. Your algorithm is going to be hobbled relative to the GPU by the roughly 10X slower CPU memory bandwidth (~20 GB/s) when writing out the initial 1 GB.

The 1 GB write is a small fraction of the overall time required - the 50 GB of AES encryption is the lion's share of the hash.

Quote from: AnonyMint
The use of dedicated AES instructions on CPUs might help a little, but I doubt enough to stop the GPU from being 10 - 100X faster, and ASICs could implement AES to run faster still.

The latency of L2 is still several cycles, and the latency on the GPU will asymptotically go to 0 with enough parallelism, although you have that 1 GB to limit the number of copies the GPU can run, i.e. 6 parallel copies on a 6 GB GPU (which might be sufficient to eliminate most latency).

You might get the 3X faster expected result, but I am not confident of that.

So there are two possible bottlenecks - memory-bus access and AES speed. A GPU miner will need to solve both. Agreed, both bottlenecks can be addressed in a GPU miner - but with the lack of AES-NI, slower cores, and the lack of L2 cache to match the cores, I'm hoping it'll be 2X to 3X max. And it should take some time, too.


Quote from: AnonyMint
How does this algorithm validate faster and with less memory? I think I know, but I don't want to say. I want to know what you came up with.

The algorithm essentially looks for a pattern in the data. The validation is told where the pattern is, so it only produces a fraction of the pseudorandom data and doesn't need to search for the start location of the pattern.
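One way to get that property - an assumption about the mechanism, not FreeTrade's published code - is to generate the buffer in counter mode, so a verifier told where the pattern is can regenerate just the blocks involved:

Code:
import hashlib

def block_at(seed: bytes, index: int, block_size: int = 64 * 1024) -> bytes:
    # Counter-mode fill: block i depends only on (seed, i), so any single
    # 64 KB block can be rebuilt on demand without producing the full 1 GB.
    out, n = b"", block_size // 64
    for c in range(n):
        ctr = (index * n + c).to_bytes(8, "little")
        out += hashlib.sha512(seed + ctr).digest()
    return out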

Membercoin - Layer 1 Coin used for the member.cash decentralized social network.
10% Interest On All Balances. Browser and Solo Mining. 100% Distributed to Users and Developers.
AnonyMint (Hero Member | Activity: 518 | Merit: 521)
December 15, 2013, 01:20:49 PM (last edit: December 15, 2013, 01:37:01 PM by AnonyMint)  #60

Quote from: AnonyMint
Even Haswell has 10X fewer FLOPS than top-of-the-line GPUs. Memory latency will be masked away to 0 if the GPU can run enough parallel copies. Your algorithm is going to be hobbled relative to the GPU by the roughly 10X slower CPU memory bandwidth (~20 GB/s) when writing out the initial 1 GB.

Quote from: FreeTrade
The 1 GB write is a small fraction of the overall time required - the 50 GB of AES encryption is the lion's share of the hash.

So the hash rate is slower than 1 hash per second per CPU core?

All CPU cores share the same main memory bottleneck, so on an 8-core CPU the 1 GB per core is 8 GB relative to the 50 GB.

Also, the GPU can use its 10X greater FLOPS to likely remove the speed advantage of the AES instructions on the CPU. And this massive parallelization will likely eliminate the L2 advantage on memory latency, while (although I am not sure without studying the code in detail) cache memory bandwidth will not be the bottleneck; rather, computation of the AES will be.

I know you don't expect to beat the GPU, perhaps you are only hoping to be near par on hashes per watt.

My wild guesstimate is you see 10X instead of 3X and have a 3X disadvantage on hashes per watt. Readers, I don't know; I am only wild guessing.

Quote from: AnonyMint
The use of dedicated AES instructions on CPUs might help a little, but I doubt enough to stop the GPU from being 10 - 100X faster, and ASICs could implement AES to run faster still.

The latency of L2 is still several cycles, and the latency on the GPU will asymptotically go to 0 with enough parallelism, although you have that 1 GB to limit the number of copies the GPU can run, i.e. 6 parallel copies on a 6 GB GPU (which might be sufficient to eliminate most latency).

You might get the 3X faster expected result, but I am not confident of that.

Quote from: FreeTrade
So there are two possible bottlenecks - memory-bus access and AES speed. A GPU miner will need to solve both.

Parallelization attacks main memory latency and computation, but not maximum main memory bandwidth. L2 might win on cumulative cache bandwidth with multiple cores (since each cache is independent). Some GPUs have large caches now, though.

Quote from: FreeTrade
Agreed, both bottlenecks can be addressed in a GPU miner - but with the lack of AES-NI, slower cores, and the lack of L2 cache to match the cores, I'm hoping it'll be 2X to 3X max. And it should take some time, too.

Maybe. I can't say for sure. I am just giving you feedback.


Quote from: AnonyMint
How does this algorithm validate faster and with less memory? I think I know, but I don't want to say. I want to know what you came up with.

Quote from: FreeTrade
The algorithm essentially looks for a pattern in the data. The validation is told where the pattern is, so it only produces a fraction of the pseudorandom data and doesn't need to search for the start location of the pattern.

That is what I expected. That is what I am doing.

unheresy.com - Prodigiously Elucidating the Profoundly Obtuse | THIS FORUM ACCOUNT IS NO LONGER ACTIVE