-ck
Legendary
Offline
Activity: 4284
Merit: 1645
Ruu \o/
|
|
October 31, 2012, 11:36:41 PM |
|
Any serious P2Pool user, one serious enough to have ASICs, should definitely have their own P2Pool node...
...With ASICs being 50x more efficient, the only people mining in the future, bar a very few exceptions, will all be running ASICs. And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Ok... so no one will be running p2pool via a remote node?
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel. 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
K1773R
Legendary
Offline
Activity: 1792
Merit: 1008
/dev/null
|
|
October 31, 2012, 11:42:39 PM |
|
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.
High bandwidth? WTF?
|
[GPG Public Key] BTC/DVC/TRC/FRC: 1K1773RbXRZVRQSSXe9N6N2MUFERvrdu6y ANC/XPM: AK1773RTmRKtvbKBCrUu95UQg5iegrqyeA NMC: NK1773Rzv8b4ugmCgX789PbjewA9fL9Dy1 LTC: LKi773RBuPepQH8E6Zb1ponoCvgbU7hHmd EMC: EK1773RxUes1HX1YAGMZ1xVYBBRUCqfDoF BQC: bK1773R1APJz4yTgRkmdKQhjhiMyQpJgfN
|
|
|
forrestv (OP)
|
|
October 31, 2012, 11:47:55 PM |
|
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.
Really? P2Pool uses ~3 GB/month, normally. How much is too much for you? With a normal pool, one getwork response/submit takes about 800 bytes of data, plus some HTTP stuff, so about 1 KB for every 4 GH (2^32). At 1 GH/s, over a month, that's 650 MB (1 * (24*60*60*30) * (1000/4) / 1e6). It's probably a bit higher than that due to miners requesting more work than they need. In addition, that number scales linearly with the amount of hashing power you have, so with 3 GH/s, you're on par with P2Pool's bandwidth usage.
If I'd heard more complaints about bandwidth, I could have added an option to decrease the number of peers. Bandwidth usage is proportional to the number of peers you have, and 5 is probably enough, so you could halve the usage easily.
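Spelled out, that estimate looks like this (a rough sketch; the 800-byte response size and 4 GH per getwork are the approximations above, not measured values):
Code:
# Rough getwork bandwidth estimate, using the figures quoted above.
BYTES_PER_GETWORK = 1000          # ~800 B of data plus HTTP overhead
GH_PER_GETWORK = 4                # one getwork covers 2^32 hashes ~= 4.29 GH
SECONDS_PER_MONTH = 24 * 60 * 60 * 30

def getwork_mb_per_month(ghps):
    """MB/month of getwork traffic for a miner running at `ghps` GH/s."""
    getworks = ghps * SECONDS_PER_MONTH / GH_PER_GETWORK
    return getworks * BYTES_PER_GETWORK / 1e6

print(getwork_mb_per_month(1))    # ~650 MB/month at 1 GH/s
print(getwork_mb_per_month(3))    # ~1950 MB/month at 3 GH/s, near P2Pool's ~3 GB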
|
1J1zegkNSbwX4smvTdoHSanUfwvXFeuV23
|
|
|
Krak
|
|
October 31, 2012, 11:48:50 PM |
|
high bandwidth? WTF?
~250 MB a day adds up quickly when you have a 150 GB cap. My DSL connection also freezes up a lot when I start a download, which is made much worse when I'm running p2pool. I limited my p2pool incoming connections to 5 and my Bitcoin-qt connections to 10, and it's still noticeable.
|
BTC: 1KrakenLFEFg33A4f6xpwgv3UUoxrLPuGn
|
|
|
Smoovious
|
|
October 31, 2012, 11:49:22 PM |
|
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.
Well... a while ago, pyramining briefly switched over to my public node while they were working on some stuff. I ended up having between 200-400 GH/s at the time, and everything seemed to go just fine with my bandwidth, so p2pmining should be OK with the added bandwidth of the ASIC traffic, if p2pmining would let you use higher-diff shares, or maybe they can set up a high-hash node too. They're pretty much doing their own sub-share-chain, so maybe they will do something to accommodate the ASICs. Not sure if they would, though. It sounded like they came into existence mainly to cater to the smaller miner who needed lower-diff shares.
-- Smoov
ps: Krak is in one of the outlying cities/towns that don't have a lot of infrastructure to spread around. Rural-ish kind of area. That's where his main bandwidth bottleneck is. They get priced accordingly. Too much demand, not enough supply. So they get capped more.
|
|
|
|
organofcorti
Donator
Legendary
Offline
Activity: 2058
Merit: 1007
Poor impulse control.
|
|
October 31, 2012, 11:52:24 PM |
|
I don't think ASICs will need any special support. P2Pool can provide getwork results fast enough for hundreds of GH/s (from a normal computer) and could be optimized for more. In addition, any timestamp rolling multiplies that.
What about mining on a remote node? It seems like ASICs could kill P2P Mining.
Any serious P2Pool user, one serious enough to have ASICs, should definitely have their own P2Pool node...
As Con writes, soon the only profitable miners will be ASIC-based. The proportion of the network hashrate a single ASIC represents will be significantly lower than it is now, since the overall network hashrate will have increased significantly. Will ASICs still be a problem for P2Pool in that case?
|
|
|
|
btharper
|
|
October 31, 2012, 11:55:28 PM Last edit: November 01, 2012, 12:18:45 AM by btharper |
|
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.
Really? P2Pool uses ~3 GB/month, normally. How much is too much for you? With a normal pool, one getwork response/submit takes about 800 bytes of data, plus some HTTP stuff, so about 1 KB for every 4 GH (2^32). At 1 GH/s, over a month, that's 650 MB (1 * (24*60*60*30) * (1000/4) / 1e6). It's probably a bit higher than that due to miners requesting more work than they need. In addition, that number scales linearly with the amount of hashing power you have, so with 3 GH/s, you're on par with P2Pool's bandwidth usage.
If I'd heard more complaints about bandwidth, I could have added an option to decrease the number of peers. Bandwidth usage is proportional to the number of peers you have, and 5 is probably enough, so you could halve the usage easily.
How hard would it be to implement GBT for P2Pool? Seems like that would mitigate a few issues for remote miners anyway. While I'm asking, is the P2Pool codebase set up for the block reward halving already? (I'd imagine so, but it gets messy otherwise.)
Edit: Pyramining is already set up how they want and will probably continue the same way. GBT support for p2pool was what I meant to ask about.
|
|
|
|
kano
Legendary
Offline
Activity: 4606
Merit: 1851
Linux since 1997 RedHat 4
|
|
October 31, 2012, 11:58:42 PM |
|
Any serious P2Pool user, one serious enough to have ASICs, should definitely have their own P2Pool node...
...With ASICs being 50x more efficient, the only people mining in the future, bar a very few exceptions, will all be running ASICs. And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Even the people buying coffee warmers for $150 are serious miners?
|
|
|
|
kano
Legendary
Offline
Activity: 4606
Merit: 1851
Linux since 1997 RedHat 4
|
|
November 01, 2012, 12:00:33 AM |
|
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.
Really? P2Pool uses ~3 GB/month, normally. How much is too much for you? With a normal pool, one getwork response/submit takes about 800 bytes of data, plus some HTTP stuff, so about 1 KB for every 4 GH (2^32). At 1 GH/s, over a month, that's 650 MB (1 * (24*60*60*30) * (1000/4) / 1e6). It's probably a bit higher than that due to miners requesting more work than they need. In addition, that number scales linearly with the amount of hashing power you have, so with 3 GH/s, you're on par with P2Pool's bandwidth usage.
If I'd heard more complaints about bandwidth, I could have added an option to decrease the number of peers. Bandwidth usage is proportional to the number of peers you have, and 5 is probably enough, so you could halve the usage easily.
How hard would it be to implement GBT for pyramining? Seems like that would mitigate a few issues for remote miners anyway. While I'm asking, is the P2Pool codebase set up for the block reward halving already? (I'd imagine so, but it gets messy otherwise)
Depending on the number of outstanding transactions and what transactions AREN'T being ignored, it is possible for GBT to use more bandwidth than GetWork ...
|
|
|
|
sharky112065
|
|
November 01, 2012, 12:19:34 AM |
|
I don't think ASICs will need any special support. P2Pool can provide getwork results fast enough for hundreds of GH/s (from a normal computer) and could be optimized for more. In addition, any timestamp rolling multiplies that.
What about mining on a remote node? It seems like ASICs could kill P2P Mining.
Any serious P2Pool user, one serious enough to have ASICs, should definitely have their own P2Pool node...
Because of course no one uses a remote p2pool node for a failover backup. /sarcasm
Sounds like no real testing has been done to verify it will not be an issue. I'm starting to get the feeling that you are sticking your head in the sand and hoping that everything will just work.
|
Donations welcome: 12KaKtrK52iQjPdtsJq7fJ7smC32tXWbWr
|
|
|
btharper
|
|
November 01, 2012, 12:20:53 AM |
|
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.
Really? P2Pool uses ~3 GB/month, normally. How much is too much for you? With a normal pool, one getwork response/submit takes about 800 bytes of data, plus some HTTP stuff, so about 1 KB for every 4 GH (2^32). At 1 GH/s, over a month, that's 650 MB (1 * (24*60*60*30) * (1000/4) / 1e6). It's probably a bit higher than that due to miners requesting more work than they need. In addition, that number scales linearly with the amount of hashing power you have, so with 3 GH/s, you're on par with P2Pool's bandwidth usage.
If I'd heard more complaints about bandwidth, I could have added an option to decrease the number of peers. Bandwidth usage is proportional to the number of peers you have, and 5 is probably enough, so you could halve the usage easily.
How hard would it be to implement GBT for pyramining? Seems like that would mitigate a few issues for remote miners anyway. While I'm asking, is the P2Pool codebase set up for the block reward halving already? (I'd imagine so, but it gets messy otherwise)
Depending on the number of outstanding transactions and what transactions AREN'T being ignored, it is possible for GBT to use more bandwidth than GetWork ...
Whoops, I misspoke, corrected the original post. I meant to ask how hard it would be to set up support for GBT (or Stratum, or anything else) in p2pool, as this would be one way to improve remote mining.
|
|
|
|
lenny_
Legendary
Offline
Activity: 1036
Merit: 1000
DARKNETMARKETS.COM
|
|
November 01, 2012, 12:21:55 AM |
|
Anyone running a Ztex USB-FPGA on p2pool for a long time? Can you please post your stats, efficiency, and stales (orphans/DOA)?
|
|
|
|
forrestv (OP)
|
|
November 01, 2012, 12:29:20 AM |
|
Because of course no one uses a remote p2pool node for a failover backup. /sarcasm
Sounds like no real testing has been done to verify it will not be an issue.
I'm starting to get the feeling that you are sticking your head in the sand and hoping that everything will just work.
It's kind of hard to test with something that you're not even sure exists... Anyway, with sane nTime rolling and adaptive pseudoshare targets, getworks can provide more than enough work to remote hosts with very low bandwidth. Everything for nTime rolling is already there (the HTTP header), and adaptive targets are disabled temporarily, but I'll re-enable them for the next release.
How hard would it be to implement GBT for P2Pool? Seems like that would mitigate a few issues for remote miners anyway.
While I'm asking, is the P2Pool codebase set up for the block reward halving already? (I'd imagine so, but it gets messy otherwise.)
Edit: Pyramining is already set up how they want and will probably continue the same way. GBT support for p2pool was what I meant to ask about.
I really don't see any issue with getwork. Why is GBT necessary? P2Pool can handle reward halving. EDIT: Getwork does have some problems with timestamp rolling. Depending on how ASIC miners handle it, it could work, but I'll start working on GBT support to allow timestamp rolling.
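For reference, the HTTP header mentioned above is getwork's X-Roll-NTime extension: the pool advertises how far the miner may roll, and the miner bumps the header timestamp locally to mint fresh nonce ranges. A minimal sketch of the rolling itself (on a raw 80-byte block header; real getwork "data" is 128 hex-encoded bytes with each 32-bit word byte-swapped, which this glosses over):
Code:
import struct

# Serialized block header layout:
# version(4) + prev_hash(32) + merkle_root(32) + ntime(4) + nbits(4) + nonce(4)
NTIME_OFFSET = 68

def roll_ntime(header80, seconds):
    """Return a copy of the 80-byte header with nTime advanced by `seconds`.

    A miner that exhausts the 2^32 nonce space can do this locally, within
    the window the pool advertises, instead of fetching another getwork."""
    (ntime,) = struct.unpack_from('<I', header80, NTIME_OFFSET)
    rolled = bytearray(header80)
    struct.pack_into('<I', rolled, NTIME_OFFSET, ntime + seconds)
    return bytes(rolled)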
|
1J1zegkNSbwX4smvTdoHSanUfwvXFeuV23
|
|
|
btharper
|
|
November 01, 2012, 12:41:49 AM |
|
I don't think ASICs will need any special support. P2Pool can provide getwork results fast enough for hundreds of GH/s (from a normal computer) and could be optimized for more. In addition, any timestamp rolling multiplies that.
What about mining on a remote node? It seems like ASICs could kill P2P Mining.
Any serious P2Pool user, one serious enough to have ASICs, should definitely have their own P2Pool node...
Because of course no one uses a remote p2pool node for a failover backup. /sarcasm
Sounds like no real testing has been done to verify it will not be an issue. I'm starting to get the feeling that you are sticking your head in the sand and hoping that everything will just work.
Well... a while ago, pyramining briefly switched over to my public node while they were working on some stuff. I ended up having between 200-400 GH/s at the time, and everything seemed to go just fine with my bandwidth, so p2pmining should be OK with the added bandwidth of the ASIC traffic, if p2pmining would let you use higher-diff shares, or maybe they can set up a high-hash node too. They're pretty much doing their own sub-share-chain, so maybe they will do something to accommodate the ASICs. Not sure if they would, though. It sounded like they came into existence mainly to cater to the smaller miner who needed lower-diff shares.
-- Smoov
ps: Krak is in one of the outlying cities/towns that don't have a lot of infrastructure to spread around. Rural-ish kind of area. That's where his main bandwidth bottleneck is. They get priced accordingly. Too much demand, not enough supply. So they get capped more.
That's the quick answer. But the longer answer is still that getwork is anticipated not to work as well with ASICs; hence the several new implementations out there to address this. GBT and Stratum are the main two that I've seen.
|
|
|
|
btharper
|
|
November 01, 2012, 12:47:34 AM |
|
It's kind of hard to test with something that you're not even sure exists... Anyway, with sane nTime rolling and adaptive pseudoshare targets, getworks can provide more than enough work to remote hosts with very low bandwidth. Everything for nTime rolling is already there (the HTTP header), and adaptive targets are disabled temporarily, but I'll re-enable them for the next release.
How hard would it be to implement GBT for P2Pool? Seems like that would mitigate a few issues for remote miners anyway.
While I'm asking, is the P2Pool codebase set up for the block reward halving already? (I'd imagine so, but it gets messy otherwise.)
Edit: Pyramining is already set up how they want and will probably continue the same way. GBT support for p2pool was what I meant to ask about.
I really don't see any issue with getwork. Why is GBT necessary? P2Pool can handle reward halving. EDIT: Getwork does have some problems with timestamp rolling. Depending on how ASIC miners handle it, it could work, but I'll start working on GBT support to allow timestamp rolling.
The main problem I see coming is that even the slowest announced ASIC (BFL's Jalapeno) can go faster than one getwork per second. Most devices are a lot faster. Pulling the work away from the pools and closer to the actual device is a decent looking way to deal with that. I'm not aware of any huge issues with timestamp rolling as long as they can get enough getwork blocks, especially after a longpoll when they need more new pieces of work to start working on the new block (small surges of getworks every 10 seconds when a block comes out seems excessive).
|
|
|
|
forrestv (OP)
|
|
November 01, 2012, 01:07:26 AM |
|
The main problem I see coming is that even the slowest announced ASIC (BFL's Jalapeno) can go faster than one getwork per second. Most devices are a lot faster. Pulling the work away from the pools and closer to the actual device is a decent looking way to deal with that. I'm not aware of any huge issues with timestamp rolling as long as they can get enough getwork blocks, especially after a longpoll when they need more new pieces of work to start working on the new block (small surges of getworks every 10 seconds when a block comes out seems excessive).
On my desktop, P2Pool can supply ~130 getworks/second, or enough work for 520 GH/s (130/s * 4 GH), without any timestamp rolling. With timestamp rolling one minute backwards and forwards, it can supply 62 TH/s (520 GH/s * 120) of work, or 480 GH/s (1/s * 4 GH * 120) at a rate of one getwork per second.
I don't think ASICs will need any special support. P2Pool can provide getwork results fast enough for hundreds of GH/s (from a normal computer) and could be optimized for more. In addition, any timestamp rolling multiplies that.
What about mining on a remote node? It seems like ASICs could kill P2P Mining.
The earlier argument (^) about remote miners not working was a bit of a farce, as shown by the above numbers. The thing to focus on in order to make sure things work is the timestamp rolling support in ASIC mining software. We need to ensure that it can roll ahead of the current time (and potentially backwards, though that would require an extension to getwork). I looked at GBT, and I don't see it improving anything. It requires sending the entire block template (potentially up to 1 MB) to miners every 10 seconds, which is definitely impractical for remote miners. An extension to GBT that allowed only sending the merkle root would avoid this, though.
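The arithmetic behind both halves of that post, spelled out (a sketch using only the figures quoted above):
Code:
GH_PER_GETWORK = 4                    # 2^32 hashes, rounded as above
SECONDS_PER_MONTH = 24 * 60 * 60 * 30

def supported_ghps(getworks_per_sec, ntime_values=1):
    """GH/s of mining a node can feed at a given getwork rate.

    Each getwork covers ~4 GH; each distinct nTime value the miner may
    roll to yields another full nonce range from the same getwork."""
    return getworks_per_sec * GH_PER_GETWORK * ntime_values

print(supported_ghps(130))            # 520 GH/s, no rolling
print(supported_ghps(130, 120))       # 62400 GH/s = ~62 TH/s, +/- 1 minute
print(supported_ghps(1, 120))         # 480 GH/s from one getwork per second

# GBT, by contrast, pushes the whole template on every refresh:
gbt_mb_per_month = 1.0 * SECONDS_PER_MONTH / 10   # 1 MB template / 10 s
print(gbt_mb_per_month)               # ~259200 MB, i.e. ~259 GB/month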
|
1J1zegkNSbwX4smvTdoHSanUfwvXFeuV23
|
|
|
-ck
Legendary
Offline
Activity: 4284
Merit: 1645
Ruu \o/
|
|
November 01, 2012, 01:27:45 AM Last edit: November 01, 2012, 01:56:16 AM by ckolivas |
|
I looked at GBT, and I don't see it improving anything. It requires sending the entire block template (potentially up to 1 MB) to miners every 10 seconds, which is definitely impractical for remote miners. An extension to GBT that allowed only sending the merkle root would avoid this, though.
Yes, we're debating that very point at length in the GBT thread at the moment, as I don't see the advantage of that either, only the potential disadvantage of increased network bandwidth or, worse, of encouraging miners not to perpetuate transactions. Stratum, on the other hand, uses merkle branches to negate this effect entirely, and its bandwidth is virtually static regardless of the miner's hashrate or the transaction count.
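To put numbers on that: with merkle branches, a work notification carries the two fixed halves of the coinbase plus one 32-byte hash per level of the merkle tree, so it grows with log2 of the transaction count rather than linearly. A sketch (the transaction counts are made-up examples):
Code:
from math import ceil, log2

def merkle_branch_bytes(n_txs):
    """Approximate merkle-branch size for the coinbase (leaf 0) of a
    block containing n_txs transactions."""
    levels = ceil(log2(n_txs)) if n_txs > 1 else 0
    return 32 * levels

for n in (10, 500, 2000):
    print(n, merkle_branch_bytes(n))   # 10 -> 128 B, 500 -> 288 B, 2000 -> 352 B
    # versus up to ~1 MB of raw transactions in a full GBT template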
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
Luke-Jr
Legendary
Offline
Activity: 2576
Merit: 1186
|
|
November 01, 2012, 01:44:15 AM |
|
It requires sending the entire block template (potentially up to 1 MB) to miners every 10 seconds, which is definitely impractical for remote miners. An extension to GBT that allowed only sending the merkle root would avoid this, though.
Sending only the merkle root, whether it be via getwork, Stratum, or some new GBT extension, makes the pool centralized, and kinda defeats the entire point of p2pool... Perhaps it would make sense for "remote" p2pool servers to be the first to move forward with mandatory-miner-provides-the-transactions GBT?
|
|
|
|
btharper
|
|
November 01, 2012, 04:15:19 AM |
|
The main problem I see coming is that even the slowest announced ASIC (BFL's Jalapeno) can go faster than one getwork per second. Most devices are a lot faster. Pulling the work away from the pools and closer to the actual device is a decent looking way to deal with that. I'm not aware of any huge issues with timestamp rolling as long as they can get enough getwork blocks, especially after a longpoll when they need more new pieces of work to start working on the new block (small surges of getworks every 10 seconds when a block comes out seems excessive).
On my desktop, P2Pool can supply ~130 getworks/second, or enough work for 520 GH/s (130/s * 4 GH), without any timestamp rolling. With timestamp rolling one minute backwards and forwards, it can supply 62 TH/s (520 GH/s * 120) of work, or 480 GH/s (1/s * 4 GH * 120) at a rate of one getwork per second.
I don't think ASICs will need any special support. P2Pool can provide getwork results fast enough for hundreds of GH/s (from a normal computer) and could be optimized for more. In addition, any timestamp rolling multiplies that.
What about mining on a remote node? It seems like ASICs could kill P2P Mining.
The earlier argument (^) about remote miners not working was a bit of a farce, as shown by the above numbers. The thing to focus on in order to make sure things work is the timestamp rolling support in ASIC mining software. We need to ensure that it can roll ahead of the current time (and potentially backwards, though that would require an extension to getwork). I looked at GBT, and I don't see it improving anything. It requires sending the entire block template (potentially up to 1 MB) to miners every 10 seconds, which is definitely impractical for remote miners. An extension to GBT that allowed only sending the merkle root would avoid this, though.
Then this may be less of an issue than I expected. I'd imagine it depends somewhat on the hardware the node runs on, though I have no idea what you're running your node off of. It may be easy enough to just give out work that has a one-minute-old timestamp and allow two minutes of rolltime.
|
|
|
|
kano
Legendary
Offline
Activity: 4606
Merit: 1851
Linux since 1997 RedHat 4
|
|
November 01, 2012, 11:43:28 AM |
|
The main problem I see coming is that even the slowest announced ASIC (BFL's Jalapeno) can go faster than one getwork per second. Most devices are a lot faster. Pulling the work away from the pools and closer to the actual device is a decent looking way to deal with that. I'm not aware of any huge issues with timestamp rolling as long as they can get enough getwork blocks, especially after a longpoll when they need more new pieces of work to start working on the new block (small surges of getworks every 10 seconds when a block comes out seems excessive).
On my desktop, P2Pool can supply ~130 getworks/second, or enough work for 520 GH/s (130/s * 4 GH), without any timestamp rolling. With timestamp rolling one minute backwards and forwards, it can supply 62 TH/s (520 GH/s * 120) of work, or 480 GH/s (1/s * 4 GH * 120) at a rate of one getwork per second.
I don't think ASICs will need any special support. P2Pool can provide getwork results fast enough for hundreds of GH/s (from a normal computer) and could be optimized for more. In addition, any timestamp rolling multiplies that.
What about mining on a remote node? It seems like ASICs could kill P2P Mining.
The earlier argument (^) about remote miners not working was a bit of a farce, as shown by the above numbers. The thing to focus on in order to make sure things work is the timestamp rolling support in ASIC mining software. We need to ensure that it can roll ahead of the current time (and potentially backwards, though that would require an extension to getwork). ...
Sorry, I don't get the point of this comment. Why do you need to roll the time? Unless I've completely missed your point, you don't need to roll the time with Stratum or GBT: you roll a value in the coinbase instead, named (IMO badly) the secondary nonce, instead of unnecessarily screwing with the block timestamp (which is a hack).
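What kano is describing is the coinbase extranonce (Stratum's "extranonce2"): the pool sends the coinbase transaction in two fixed halves plus the merkle branch, and the miner rolls a counter between the halves to mint fresh merkle roots, with no timestamp games needed. A sketch of the recomputation (names follow Stratum's convention; the parameters are illustrative):
Code:
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def new_merkle_root(coinb1, extranonce1, extranonce2, coinb2, merkle_branch):
    """Rebuild the merkle root after rolling the coinbase extranonce.

    Incrementing extranonce2 gives the miner a brand-new 2^32 nonce space
    without touching nTime and without re-downloading any transactions."""
    coinbase = coinb1 + extranonce1 + extranonce2 + coinb2
    h = dsha256(coinbase)
    for sibling in merkle_branch:      # one 32-byte hash per tree level
        h = dsha256(h + sibling)       # coinbase is leftmost, so h goes first
    return h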
|
|
|
|
|