Bitcoin Forum
Author Topic: [ANN][YAC] YACoin ongoing development  (Read 379837 times)
WindMaster (OP)
Sr. Member
****
Offline Offline

Activity: 347
Merit: 250


View Profile
May 22, 2013, 09:32:34 PM
Last edit: May 22, 2013, 11:35:51 PM by WindMaster
 #281

Since we are doing some Q&A, I've been wondering whether an insanely high N would (painfully try to) use swap memory.

Have a look at the table of N increments I posted:
https://bitcointalk.org/index.php?topic=206577.msg2162620#msg2162620

The memory column shows how much memory is needed to compute a hash (assuming no TMTO shortcut is in use).  Total memory required is equal to that multiplied by the number of hashes you're trying to compute simultaneously.  For CPU mining, it would be the number in that table multiplied by the number of threads you're using.

As you can see from the table, it's going to be a couple decades before N reaches high enough that even today's typical desktop PC would start swapping.  Even in the year 2421, memory requirements only rise up to 256GB at Nfactor=30, and Nfactor won't go any higher than that (it's capped at 30).
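That relationship can be sketched numerically. The following is my own illustration, not code from the thread, assuming scrypt's standard r=1 block-size parameter and the N = 2^(Nfactor+1) mapping implied by the linked table (and by the client reporting Nfactor=7 alongside N=256):

```python
# Sketch (not from the thread): per-hash scrypt memory as a function of
# Nfactor, assuming r=1 and the N = 2^(Nfactor + 1) mapping YACoin appears to use.
def scrypt_memory_bytes(nfactor: int, r: int = 1) -> int:
    n = 2 ** (nfactor + 1)   # N doubles every time Nfactor increments
    return 128 * r * n       # scrypt's scratchpad is 128 * r * N bytes

assert scrypt_memory_bytes(7) == 32 * 1024           # Nfactor=7 (N=256): 32 KB
assert scrypt_memory_bytes(21) == 512 * 1024 ** 2    # Nfactor=21 (N=4194304): 512 MB
assert scrypt_memory_bytes(30) == 256 * 1024 ** 3    # Nfactor=30 cap: 256 GB
```

Total RAM for CPU mining is then this per-hash figure multiplied by the number of mining threads, matching the rule described above.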
mtrlt
Member
**
Offline Offline

Activity: 104
Merit: 10


View Profile
May 22, 2013, 09:33:48 PM
 #282

I did some GPU testing with high N values. The gist is: at N=8192, I couldn't get it to output valid shares any more. Therefore, with current information, it seems that GPU mining will stop on 13 Aug 2013 - 07:43:28 GMT when N goes to 8192.

Quote
a) why do you think it stopped working at 8192 (physical GPU limitation with current gen GPUs, or more a limitation of the code that can be more easily overcome)?

I don't know yet. It just stops working on its own.

Quote
b) does CPU mining still work at 8192 and beyond? (if not, then we have a problem on our hand that would at minimum necessitate a hard fork I'd think).

Yes, in fact I tested CPU mining up to N=2^19=524288, where my puny Phenom II X6 1055T got around 12 H/s.
bitdwarf
Sr. Member
****
Offline Offline

Activity: 406
Merit: 250


The cryptocoin watcher


View Profile
May 22, 2013, 09:43:53 PM
 #283

As you can see from the table, it's going to be a couple decades before N reaches high enough that even today's typical desktop PC would start swapping.  Even in the year 2421, memory requirements only rise up to 256GB at N=30, and N won't go any higher than that (it's capped at 30).

Thanks! So even existing workstations can handle N=30.

𝖄𝖆𝖈: YF3feU4PNLHrjwa1zV63BcCdWVk5z6DAh5 · 𝕭𝖙𝖈: 12F78M4oaNmyGE5C25ZixarG2Nk6UBEqme
Ɏ: "the altcoin for the everyman, where the sweat on one's brow can be used to cool one's overheating CPU" -- theprofileth
WindMaster (OP)
Sr. Member
****
Offline Offline

Activity: 347
Merit: 250


View Profile
May 22, 2013, 09:44:13 PM
Last edit: May 22, 2013, 09:56:09 PM by WindMaster
 #284

You're a feisty one.

This is true.  I'm not going to dispute that.  Smiley


You're correct on the date, it was during the memcoin thread which was apparently November of last year, so I guess only six months old.  Guess my memory slipped, but things move quickly these days.

Thanks for the link, I'll have a read of your memcoin thread.  In a quick glance through, I'd say we're both in agreement YACoin's starting N was far too low, at least.


Quote from: WindMaster
If the scrypt-jane library starts having trouble at certain values of N (and it looks fine up through at least N=32768 that I've tested so far), I'd think we would only need to fix the problem in the library and release a new update to the client, rather than hard fork.

It works fine above that, I've used it beyond 32768 (see that thread).

Roger that.
hanzac
Sr. Member
****
Offline Offline

Activity: 425
Merit: 262


View Profile
May 22, 2013, 11:17:53 PM
 #285

Now I have a question: when the N value becomes even larger in a year or so, the hash speed drops to only ~100 hashes/second even on a server.
Won't the client be unable to validate the block chain because the hash speed is too slow? I don't know whether the client validates the whole block chain after a fresh download.
WindMaster (OP)
Sr. Member
****
Offline Offline

Activity: 347
Merit: 250


View Profile
May 22, 2013, 11:27:09 PM
 #286

Now I have a question: when the N value becomes even larger in a year or so, the hash speed drops to only ~100 hashes/second even on a server.
Won't the client be unable to validate the block chain because the hash speed is too slow? I don't know whether the client validates the whole block chain after a fresh download.

Validation of the previous portion of the block chain would be happening at the original N at the time that block was mined, not at the current value of N.  I think where we would get into trouble is when it takes as much or more time to validate a block than the target time between blocks (1 minute).  Somewhere in the next few years, I'm sure the debate of storing the entire blockchain on every user's computer will be discussed more and more among all the coins.

I'm benchmarking large values of N right now.  The highest I've benchmarked so far is N=262144 on a dual Xeon E5450 server (which is ~4 year old server technology), and got around 30.55 hashes/sec at that value of N, which will be hit on June 23, 2015.  Looks like mtrlt benchmarked his 6-core Phenom at the next N increment beyond that (which I'm benchmarking right now) and got around 12 hashes/sec.

Based on the slope of the N increases over the long run, Moore's Law (not that it's a hard and fast rule) will begin overtaking the N increases and average hash rates per user will start rising again.  I don't necessarily subscribe to Moore's Law actually being valid at the immediate moment, looking at the actual processing power changes over the last 18 months.  However, out in the future, YACoin's Nfactor++ events become less and less frequent.
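As a rough illustration of the validation-time concern discussed above (my own arithmetic, using the benchmark figures quoted in this thread, so treat the numbers as approximate): verifying a block requires one hash at that block's N, so the margin against the 1-minute block target is simply the hash rate times 60.

```python
# Hypothetical sanity check using rates reported in this thread:
# ~30.55 H/s at N=262144 (dual Xeon E5450) and ~12 H/s at N=524288 (Phenom X6).
BLOCK_TARGET_SEC = 60
benchmarks = {262144: 30.55, 524288: 12.0}

for n, rate in benchmarks.items():
    verify_sec = 1.0 / rate            # one hash verifies one block header
    margin = BLOCK_TARGET_SEC * rate   # headroom vs. the 1-minute target
    print(f"N={n}: {verify_sec:.3f}s per verification ({margin:.0f}x inside the target)")
```

Even at 12 H/s there is a ~720x margin for verifying a single block, so slow hashing hurts miners long before it threatens per-block validation; it's the initial download of the whole chain where the cost accumulates.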
WindMaster (OP)
Sr. Member
****
Offline Offline

Activity: 347
Merit: 250


View Profile
May 22, 2013, 11:38:01 PM
 #287

As you can see from the table, it's going to be a couple decades before N reaches high enough that even today's typical desktop PC would start swapping.  Even in the year 2421, memory requirements only rise up to 256GB at N=30, and N won't go any higher than that (it's capped at 30).

Thanks! So even existing workstations can handle N=30.

Oops, I made a booboo in that post that you quoted.  I meant Nfactor is capped at 30, not N.  I fixed my original post.

But otherwise yes, it's possible to compute a hash at Nfactor=30 with a box with 256GB, although some swapping of the OS and any other running software would occur since 100% of the RAM would be needed to compute one hash.  Now, it would certainly take a while to compute a hash at Nfactor=30.  I'm guessing by the year 2421, that won't be an issue though, and we're probably not going to be using cryptocurrencies in their current form anywhere near that long either.
hanzac
Sr. Member
****
Offline Offline

Activity: 425
Merit: 262


View Profile
May 22, 2013, 11:49:09 PM
 #288

Validation of the previous portion of the block chain would be happening at the original N at the time that block was mined, not at the current value of N.  I think where we would get into trouble is when it takes as much or more time to validate a block than the target time between blocks (1 minute).  Somewhere in the next few years, I'm sure the debate of storing the entire blockchain on every user's computer will be discussed more and more among all the coins.

I'm benchmarking large values of N right now.  The highest I've benchmarked so far is N=262144 on a dual Xeon E5450 server (which is ~4 year old server technology), and got around 30.55 hashes/sec at that value of N, which will be hit on June 23, 2015.  Looks like mtrlt benchmarked his 6-core Phenom at the next N increment beyond that (which I'm benchmarking right now) and got around 12 hashes/sec.

Based on the slope of the N increases over the long run, Moore's Law (not that it's a hard and fast rule) will begin overtaking the N increases and average hash rates per user will start rising again.  I don't necessarily subscribe to Moore's Law actually being valid at the immediate moment, looking at the actual processing power changes over the last 18 months.  However, out in the future, YACoin's Nfactor++ events become less and less frequent.

I know about Moore's Law, but there are still risks. Current technology is reaching its limits: remember that Intel/AMD/Nvidia stuck at 32nm or so for many years, which doesn't follow Moore's Law; they can't make a faster CPU, so instead they can only make multi-core CPUs.

Can you check this Nfactor: 21 (N=4194304, 512MB, 1636426656, Tue 09 Nov 2021 - 02:57:36 GMT)?

If we can stay safe until that year, then we have another ~8 years to upgrade our equipment.

That covers the PoW model; PoS is another story.

Telling from the code, PoS uses SHA-256, right? Thank god it doesn't use scrypt-jane; that gives me some relief. If PoW block generation becomes slow, PoS blocks can keep the network functioning.
WindMaster (OP)
Sr. Member
****
Offline Offline

Activity: 347
Merit: 250


View Profile
May 23, 2013, 12:06:00 AM
Last edit: May 23, 2013, 12:40:09 AM by WindMaster
 #289

I know about Moore's Law, but there are still risks. Current technology is reaching its limits

Indeed, that's the reason I mentioned I don't necessarily subscribe to Moore's Law in the more recent timeframe, due to the slow rate of process-node improvements.  Cost reductions for those process nodes may still occur, though, making it possible to continue increasing core counts at a lower cost per core over time.


remember that Intel/AMD/Nvidia stuck at 32nm or so for many years, which doesn't follow Moore's Law; they can't make a faster CPU, so instead they can only make multi-core CPUs.

Ivy Bridge is built on a 22nm process, so there's still some forward progress occurring on process nodes, just not as quickly lately.  Intel is aiming for 14nm, having started construction of the Fab 42 facility in Arizona, expected to come online for 14nm fabrication later this year (we'll see).  There's still some debate whether 14nm or 16nm will be the minimum feature size achievable before quantum tunnelling issues prevent any further scale reduction.  So, there will indeed be a stagnation in process nodes somewhere in the 14-16nm range.  At that point, cost reductions on those processes and increased core counts are probably where things are going to head.


Can you check this Nfactor: 21 (N=4194304, 512MB, 1636426656, Tue 09 Nov 2021 - 02:57:36 GMT)?

If we can stay safe until that year, then we have another ~8 years to upgrade our equipment.

Sure, I'll test N=4194304 next and report back in a bit.


That covers the PoW model; PoS is another story.

Telling from the code, PoS uses SHA-256, right? Thank god it doesn't use scrypt-jane; that gives me some relief. If PoW block generation becomes slow, PoS blocks can keep the network functioning.

I actually have minimal familiarity with how the PoS side of things is going to work.  That part is still a mystery to me.
WindMaster (OP)
Sr. Member
****
Offline Offline

Activity: 347
Merit: 250


View Profile
May 23, 2013, 12:16:51 AM
Last edit: May 23, 2013, 12:39:36 AM by WindMaster
 #290

Can you check this Nfactor: 21 (N=4194304, 512MB, 1636426656, Tue 09 Nov 2021 - 02:57:36 GMT)?

If we can stay safe until that year, then we have another ~8 years to upgrade our equipment.

I'm measuring Nfactor=21 (N=4194304) to yield about 1.73 hashes/sec on a 4 year old dual Xeon E5450 server (combined performance in the ballpark of an i7-2600k).

EDIT - All my benchmarks out through Nfactor=21 are now posted in the table of Nfactor++ events:
https://bitcointalk.org/index.php?topic=206577.msg2162620#msg2162620

It's readily apparent at what point the memory usage becomes high enough to no longer fit in the on-die L1/L2 caches on these particular Xeon CPUs and spills entirely into off-chip memory.
mtrlt
Member
**
Offline Offline

Activity: 104
Merit: 10


View Profile
May 23, 2013, 03:55:46 AM
Last edit: May 23, 2013, 04:27:00 AM by mtrlt
 #291

Here are all my GPU benchmarking results, and also the speed ratio of GPUs and CPUs, for good measure.

GPU: HD6990, underclocked 830/1250 -> 738/1250 and undervolted 1.12V -> 0.96V; assuming 320W power usage.
CPU: WindMaster's 4-year-old dual Xeon; assuming 80W power usage. In reality it's probably more, but newer processors achieve the same performance with less power.

Code:
N      GPUspeed    CPUspeed     GPU/CPU power-efficiency ratio
32     10.02 MH/s  358.8 kH/s   6.98
64     6.985 MH/s  279.2 kH/s   6.25
128    3.949 MH/s  194.0 kH/s   5.1
256    2.004 MH/s  119.2 kH/s   4.2
512    1.060 MH/s  66.96 kH/s   3.95
1024   544.2 kH/s  34.80 kH/s   3.9
2048   278.7 kH/s  18.01 kH/s   3.88
4096   98.5 kH/s   9.077 kH/s   2.72
8192+  0 H/s       4.595 kH/s   0

GPUs are getting comparatively slower bit by bit, until (as I've stated in an earlier post) at N=8192, GPU mining seems to break altogether.

EDIT: Replaced GPU/CPU ratio with a more useful power-efficiency ratio.
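The ratio column above can be reproduced from the speed columns. A small sketch of my own, using the 320 W (GPU) and 80 W (CPU) figures assumed in the post: hashes per joule on the GPU divided by hashes per joule on the CPU.

```python
# Reconstructing the power-efficiency ratio column from the assumed wattages.
GPU_WATTS, CPU_WATTS = 320.0, 80.0

def efficiency_ratio(gpu_hs: float, cpu_hs: float) -> float:
    # (hashes per joule on GPU) / (hashes per joule on CPU)
    return (gpu_hs / GPU_WATTS) / (cpu_hs / CPU_WATTS)

print(round(efficiency_ratio(10.02e6, 358.8e3), 2))  # N=32 row   -> 6.98
print(round(efficiency_ratio(544.2e3, 34.80e3), 2))  # N=1024 row -> 3.91 (~3.9)
```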
tacotime
Legendary
*
Offline Offline

Activity: 1484
Merit: 1005



View Profile
May 23, 2013, 04:00:04 AM
 #292

^^ Have you tried playing with the lookup gap more? You should still hit faster GPU performance as by 8192 you will have moved off the CPU cache and are writing to RAM (maybe also L3).

Code:
XMR: 44GBHzv6ZyQdJkjqZje6KLZ3xSyN1hBSFAnLP6EAqJtCRVzMzZmeXTC2AHKDS9aEDTRKmo6a6o9r9j86pYfhCWDkKjbtcns
bitdwarf
Sr. Member
****
Offline Offline

Activity: 406
Merit: 250


The cryptocoin watcher


View Profile
May 23, 2013, 04:15:05 AM
 #293

GPU: HD6990 underclocked from 830/1250 to 738/1250
CPU: WindMaster's 4 year old dual Xeon E5450

Amazon says that's a $1000 card vs a $400 CPU, and Google says it's ~400 Watts vs 80 Watts (just a quick search, these numbers may be off), if anyone wants to calculate corrected ratios.

mtrlt
Member
**
Offline Offline

Activity: 104
Merit: 10


View Profile
May 23, 2013, 04:19:40 AM
 #294

GPU: HD6990 underclocked from 830/1250 to 738/1250
CPU: WindMaster's 4 year old dual Xeon E5450

Amazon says that's a $1000 card vs a $400 CPU, and Google says it's ~400 Watts vs 80 Watts (just a quick search, these numbers may be off), if anyone wants to calculate corrected ratios.

Good point! I'll fix my numbers.

^^ Have you tried playing with the lookup gap more? You should still hit faster GPU performance as by 8192 you will have moved off the CPU cache and are writing to RAM (maybe also L3).

I've played with it quite a bit. Also, nothing helped with 8192, it's all invalids.
digger
Sr. Member
****
Offline Offline

Activity: 322
Merit: 250


View Profile WWW
May 23, 2013, 08:05:10 AM
 #295

How many people are mining using GPUs?

doge pool: http://dog.ltcoin.net ,yac pool: http://yac.ltcoin.net ,bbq pool: http://bbq.ltcoin.net ,Litecoin pool, http://ltcoin.net dig feathercoin , http://fc.ltcoin.net bitbar pool: http://btb.ltcoin.net wdc pool: http://wdc.ltcoin.net
wire
Newbie
*
Offline Offline

Activity: 55
Merit: 0


View Profile
May 23, 2013, 08:17:19 AM
 #296

I guess many insiders do it, at least the ones who are capable of writing OpenCL code, and they won't share their code with just anyone as long as they are making a profit.

As soon as they can't make a profit from mining any more, they will sell the code; then, when they can't make a profit from selling it any more, you will see it on GitHub for free.

If I am wrong, prove it and publish working code Smiley
AGD
Legendary
*
Offline Offline

Activity: 2069
Merit: 1164


Keeper of the Private Key


View Profile
May 23, 2013, 11:16:34 AM
 #297

I am using the latest WindMaster version of yacoind on Debian, and now my CPU (Intel Core i7-3930K, 6 cores / 12 threads) shows only about half usage while mining. When I run minerd to mine instead of yacoind, "top" shows full usage. Any idea?

Bitcoin is not a bubble, it's the pin!
+++ GPG Public key FFBD756C24B54962E6A772EA1C680D74DB714D40 +++ http://pgp.mit.edu/pks/lookup?op=get&search=0x1C680D74DB714D40
rbdrbd
Sr. Member
****
Offline Offline

Activity: 462
Merit: 250



View Profile
May 23, 2013, 01:34:39 PM
 #298

I guess many insiders do it, at least the ones who are capable of writing OpenCL code, and they won't share their code with just anyone as long as they are making a profit.

As soon as they can't make a profit from mining any more, they will sell the code; then, when they can't make a profit from selling it any more, you will see it on GitHub for free.

If I am wrong, prove it and publish working code Smiley

The data does not suggest this.

According to mtrlt, at the current N = 256 a top-end video card could get around 2.004 MH/s with an HD6990 and his kernel. That means a mid-size GPU farm owner with 50 7950s/7970s may see around 75 MH/s or more (a very rough estimate, based on the performance difference between the 6990 and the 7950/7970).

According to yacoind:
Code:
# ./yacoind getmininginfo
{
    "blocks" : 68606,
    "currentblocksize" : 2275,
    "currentblocktx" : 5,
    "difficulty" : 3.01596016,
    "errors" : "",
    "generate" : false,
    "genproclimit" : -1,
    "hashespersec" : 0,
    "networkhashps" : 72345435,
    "pooledtx" : 5,
    "testnet" : false,
    "Nfactor" : 7,
    "N" : 256,
    "powreward" : 20.80000000
}

Current network hash rate is estimated at 72.3 MH/s (this is with the newest yacoin source). With just ONE mid-range GPU farm owner able to generate more than the entire current YaCoin network, it does not look like there are ANY "GPU insiders" mining. Or if there are, they are not hitting it with a decent-sized GPU farm. However, this also raises the possibility of a 51% attack if one or two ill-meaning farm owners get hold of a GPU miner. Let's hope mtrlt either holds on to his GPU kernel, as he appears to have done so far, or, if it is released, that it's released to all. Anything else risks the stability of the YaCoin network.
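The 51% concern follows directly from those two numbers. A back-of-the-envelope sketch (the farm size is the hypothetical rough estimate from above, not a measured figure):

```python
# If a ~75 MH/s farm joined the current ~72.3 MH/s network, its share of
# total hash power would be just over half -- 51%-attack territory.
farm_hps = 75e6                  # rough estimate for 50 x 7950/7970
network_hps = 72_345_435         # "networkhashps" from getmininginfo above

share = farm_hps / (farm_hps + network_hps)
print(f"farm share of total hash power: {share:.1%}")  # -> 50.9%
```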

The other risk here is a possible inadvertent "difficulty attack" if this N=8192 issue is real and a GPU miner is released to all. At that point, difficulty would increase sharply, and once N=8192 is hit, drop off a cliff. I wonder if there would be enough hash power after that to get it back to a sane level without necessitating a hard fork like feathercoin has done (maybe WindMaster has more input on this). Also, whether POS minting would be enough during that time to validate new transactions on the network. Maybe others much more knowledgeable than I on these matters could comment.

I am hopeful that the 8192 issue is a more fundamental problem with GPU mining on this coin. Most serious GPU miners have 7950s or 7970s instead of 6990s, and since the 7950 at least has a good amount less memory and bandwidth than the 6990, the issue may be hit even earlier on one of those cards. This community is ingenious, as the litecoin experience has shown. But if the guy who wrote the scrypt kernel for reaper is having issues at 8192 after trying multiple lookup-gap and TMTO hacks, then I think there is some real life in this coin.
bitdwarf
Sr. Member
****
Offline Offline

Activity: 406
Merit: 250


The cryptocoin watcher


View Profile
May 23, 2013, 01:39:29 PM
 #299

... this is the best thread in the entire alt subforum.


tacotime
Legendary
*
Offline Offline

Activity: 1484
Merit: 1005



View Profile
May 23, 2013, 02:14:12 PM
 #300

Quote from: rbdrbd
I am hopeful that the 8192 issue is a more fundamental problem with GPU mining on this coin. Most serious GPU miners have 7950s or 7970s instead of 6990s, so this issue may be run into even earlier on one of those cards, as the memory amount and bandwidth on the 7950 at least is a good amount less than on the 6990. This community is ingenious, however, as the litecoin experience had shown. But if you had the guy that wrote the scrypt kernel for reaper having issues at 8192 after trying multiple lookup gap and TMTO hacks, then there is some real life for this coin I think.

It sounds more like a kernel level bug to me. Note that the author said it was throwing invalid hashes, not a very slow rate of valid hashes.
