-ck
Moderator
Legendary
Offline
Activity: 2492
Merit: 1047
Ruu \o/
October 20, 2014, 11:59:38 PM
I say that because I'm pretty sure I've seen a single threaded VM with high CPU usage, yet the underlying OS has low CPU usage spread across multiple threads.
That's pretending to be one guest core by serialising from one host core to another (i.e. jumping around). There is no way to parallelise serial work.
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
October 21, 2014, 12:44:26 AM
I say that because I'm pretty sure I've seen a single threaded VM with high CPU usage, yet the underlying OS has low CPU usage spread across multiple threads.
That's pretending to be one guest core by serialising from one host core to another (i.e. jumping around). There is no way to parallelise serial work.

Ah. I see your point. M
I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!
-ck
Moderator
Legendary
Offline
Activity: 2492
Merit: 1047
Ruu \o/
October 21, 2014, 01:37:00 AM
All going well I'm going to be helping drop the biggest mine onto p2pool yet over the next day. Watch for it.
Where do you keep finding them?

People approach me for driver/mining/pool/software solutions based on what I've been providing online; I just don't really advertise it as such. Said p2pool deployment was delayed a couple of days, but it's still planned.
Carlton Banks
Legendary
Offline
Activity: 1974
Merit: 1145
October 21, 2014, 06:47:35 AM
I say that because I'm pretty sure I've seen a single threaded VM with high CPU usage, yet the underlying OS has low CPU usage spread across multiple threads.
That's pretending to be one guest core by serialising from one host core to another (i.e. jumping around). There is no way to parallelise serial work.
Ah. I see your point. M

Remember what single-core machines do when you throw too many simultaneous requests at the OS: they slow down to the point where you can notice the CPU interrupts as a flicker in your hourglass-locked mouse pointer. Now recall that machines with multicore CPUs never behave that way when given multiple different tasks, and also that you haven't seen an hourglass mouse pointer since the day you ditched your last single-core machine.
Vires in numeris
HellDiverUK
October 21, 2014, 07:28:41 AM
So, the E8400 (2x3GHz) I took out of the machine would have been better than the Q6700 (4x2.66GHz) I put in. Humph. I forgot about the whole single threadedness of p2pool.
-ck
Moderator
Legendary
Offline
Activity: 2492
Merit: 1047
Ruu \o/
October 21, 2014, 08:34:09 AM
So, the E8400 (2x3GHz) I took out of the machine would have been better than the Q6700 (4x2.66GHz) I put in. Humph. I forgot about the whole single threadedness of p2pool.
For p2pool yes, for bitcoind no. Yes I know, the pain... 
rav3n_pl
Legendary
Offline
Activity: 1359
Merit: 1000
Don't panic! Organize!
October 21, 2014, 09:30:24 AM
I was thinking about how to "pull back" small miners (all miners?) to P2Pool, and these are some assumptions for a new version of the share chain:
- we have 3-5 separate share chains
- each chain has its own min and max share power/diff (say sc1 is 1k to 10k, sc2 11k-100k, sc3 101k-1M, etc.)
- each share has a flag that tells whether it is already used/paid or not
- when a user finds a share in one of the lower chains, the earlier share is marked as used and the power of the new and old shares is summed (we can't pay from the lowest chain directly because of the dust threshold)
- once the power of his share reaches the chain threshold, he needs to find one stronger share to bump it to a higher chain, until it reaches one that can be paid
- each miner (payout address) starts in the highest chain for 1-10 mins so the node can recognize his hash rate and select the proper chain/diff for him
- the goal is that every miner finds a share every 1/10-1/2 of the block ETA time

Pros:
- the share chain length can be reduced because of the "summing" thing; every miner only needs 2 active shares in each chain, no more
- no more "wasted work" when the block time exceeds the share chain length
- a new share sums the power of the last share and the current one, so the oldest share can be removed from payout computations
- virtually any miner can mine; big miners are in the high-power chains, so small miners can easily participate in mining

Cons:
- more P2P data overhead (more chains to transmit)
- more CPU overhead: more data to analyze to create the payout tx, more work to be done when a share is found

Sadly, I'm too small in Python to try to implement that.
@forrestv: can this work? B-)
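A rough sketch of the chain-selection step of this idea (the tier bounds, the 60-second target share interval, and the diff-1 = 2^32 hashes convention are illustrative assumptions, not anything in p2pool):

```python
# Hypothetical tiers from the proposal above: (name, min diff, max diff).
CHAINS = [
    ("sc1", 1_000, 10_000),
    ("sc2", 11_000, 100_000),
    ("sc3", 101_000, 1_000_000),
]

TARGET_SHARE_INTERVAL = 60  # seconds between shares we aim for (assumed)

def pick_chain(hashrate_hps):
    """Assign a miner the highest tier whose minimum difficulty it can
    reach about once per TARGET_SHARE_INTERVAL (diff 1 ~= 2**32 hashes)."""
    reachable = hashrate_hps * TARGET_SHARE_INTERVAL / 2**32
    name = CHAINS[0][0]
    for tier, lo, _hi in CHAINS:
        if reachable >= lo:
            name = tier
    return name
```

Under these assumptions a ~500 GH/s miner lands in sc1 and a ~10 TH/s miner in sc3; the harder parts (summing used shares across tiers, the dust threshold) are what would need forrestv's input.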
HellDiverUK
October 21, 2014, 11:40:52 AM
So, the E8400 (2x3GHz) I took out of the machine would have been better than the Q6700 (4x2.66GHz) I put in. Humph. I forgot about the whole single threadedness of p2pool.
For p2pool yes, for bitcoind no. Yes I know, the pain...

Pentium G3258 and overclock the snot out of it. Two Haswell cores running at 3.7GHz or more should be able to do it, surely?
-ck
Moderator
Legendary
Offline
Activity: 2492
Merit: 1047
Ruu \o/
October 21, 2014, 12:08:11 PM
So, the E8400 (2x3GHz) I took out of the machine would have been better than the Q6700 (4x2.66GHz) I put in. Humph. I forgot about the whole single threadedness of p2pool.
For p2pool yes, for bitcoind no. Yes I know, the pain...
Pentium G3258 and overclock the snot out of it. Two Haswell cores running at 3.7GHz or more should be able to do it, surely?

Yes, that will run p2pool very nicely (don't forget an SSD for bitcoind latency).
gigabitfx
Newbie
Offline
Activity: 9
Merit: 0
October 21, 2014, 05:51:26 PM
Hi community, I have an instance of p2pool running on Windows, built from the latest git available via forrestv. The pool has been running all night, over 12 hours, but when I submit a share to it, it doesn't contain any additional transaction fees. How does p2pool include the transaction fees in shares, and what am I missing to have them included in mine? I have also followed the p2pool tuning post and have the min/max tx fees included in the bitcoin.conf file, as well as server=1. Any other tips before I move the node to a Linux install to see if that corrects the issue? It should also be noted that I'm running the bitcoin-qt GUI and mining against that; perhaps I need to run the daemon? But there's no information on the web saying the GUI client is limited vs bitcoind.exe.
windpath
Legendary
Offline
Activity: 1233
Merit: 1000
October 21, 2014, 09:14:45 PM
Hi community, I have an instance of p2pool running on Windows, built from the latest git available via forrestv. The pool has been running all night, over 12 hours, but when I submit a share to it, it doesn't contain any additional transaction fees. How does p2pool include the transaction fees in shares, and what am I missing to have them included in mine? I have also followed the p2pool tuning post and have the min/max tx fees included in the bitcoin.conf file, as well as server=1. Any other tips before I move the node to a Linux install to see if that corrects the issue? It should also be noted that I'm running the bitcoin-qt GUI and mining against that; perhaps I need to run the daemon? But there's no information on the web saying the GUI client is limited vs bitcoind.exe.

P2Pool gets the transactions from the bitcoin node it is running on; to see the current tx pool on your node, run "bitcoind getrawmempool". Setting the min/max tx fees in bitcoin.conf determines which transactions are included in your transaction pool. When your node finds a share that also meets the minimum bitcoin difficulty, the transactions in your bitcoin node's tx pool are included in the block and broadcast to both the p2pool and bitcoin networks.
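A toy illustration of that answer (this is not bitcoind's actual acceptance code; the fee constant just stands in for whatever minimum fee setting is in bitcoin.conf):

```python
# Toy model: transactions below the configured minimum fee rate never
# enter the node's tx pool, so p2pool can never include them in a block.
MIN_FEE_PER_KB = 0.0001  # BTC/kB; stands in for a bitcoin.conf fee setting

def accept_to_mempool(fee_btc, size_bytes):
    """Accept a transaction only if it pays at least the minimum fee rate."""
    return fee_btc / (size_bytes / 1000.0) >= MIN_FEE_PER_KB

def block_fees(candidates):
    """Total fees a found block would collect from the accepted txs."""
    return sum(fee for fee, size in candidates if accept_to_mempool(fee, size))
```

So if getrawmempool comes back nearly empty, the node's fee settings (or a freshly restarted node with an empty mempool) are the first things to check.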
gigabitfx
Newbie
Offline
Activity: 9
Merit: 0
October 21, 2014, 09:45:07 PM
Hi community, I have an instance of p2pool running on Windows, built from the latest git available via forrestv. The pool has been running all night, over 12 hours, but when I submit a share to it, it doesn't contain any additional transaction fees. How does p2pool include the transaction fees in shares, and what am I missing to have them included in mine?
P2Pool gets the transactions from the bitcoin node it is running on; to see the current tx pool on your node, run "bitcoind getrawmempool". Setting the min/max tx fees in bitcoin.conf determines which transactions are included in your transaction pool. When your node finds a share that also meets the minimum bitcoin difficulty, the transactions in your bitcoin node's tx pool are included in the block and broadcast to both the p2pool and bitcoin networks.

Bah, I fixed it. I had a modified d3.v2.min and/or share.html. After resyncing those two files, a Ctrl+F5 on the site brought up all the info I was missing. Yay!
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
October 22, 2014, 12:58:50 AM
6 blocks today and counting. Loving this new pool hashrate. Not so thrilled with the 17 million share difficulty. If this keeps up, I'll be squeezed out again or I'll have to get more hashpower.

M
newbuntu
Member

Offline
Activity: 61
Merit: 10
October 22, 2014, 07:01:04 PM
I had this today:
Worker 1NBJixrZoXbcUaSkbQE4FxTsADTUAx8Ct6 submitted share with hash > target:
2014-10-22 12:50:57.620751 Hash: 3ff33cca7a322c501a7e754950bf8446009b69e418f94695746291
2014-10-22 12:50:57.620805 Target: 3ff2769861c1a00000000000000000000000000000000000000000
But I didn't get any credit for it - no change in shares - shouldn't I have at least received a share for it?
Cheers.
naplam
October 22, 2014, 07:14:40 PM
That hash is worthless; it doesn't meet the target. That's the error you're seeing: a miner is submitting bad work. This usually happens when a miner uses a different hash algorithm, for example.
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
October 22, 2014, 07:15:24 PM
I had this today:
Worker 1NBJixrZoXbcUaSkbQE4FxTsADTUAx8Ct6 submitted share with hash > target:
2014-10-22 12:50:57.620751 Hash: 3ff33cca7a322c501a7e754950bf8446009b69e418f94695746291
2014-10-22 12:50:57.620805 Target: 3ff2769861c1a00000000000000000000000000000000000000000
But I didn't get any credit for it - no change in shares - shouldn't I have at least received a share for it?
Cheers.
The short answer is no.

The medium answer has to do with terminology. Technically, the hash needs to be smaller than the current target value to count. But the way we usually talk about difficulty, and hence shares, we say everything has to be larger. p2pool's log uses the technical terminology... which is very confusing, to say the least.

I don't think I can explain the long answer properly, as I don't fully understand it yet. It has to do with how the value is used. M
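The comparison p2pool's log is making can be shown directly with the (truncated) hex values from the log message above: read both as big integers, and the share only counts when hash <= target.

```python
# Hex strings taken verbatim from the log lines quoted above (truncated
# there, so truncated here too); compared as big integers.
HASH = int("3ff33cca7a322c501a7e754950bf8446009b69e418f94695746291", 16)
TARGET = int("3ff2769861c1a00000000000000000000000000000000000000000", 16)

def share_counts(h, target):
    """Numerically smaller hash = more leading zero bits = more work done."""
    return h <= target

# 0x3ff3... > 0x3ff2..., so this particular submission just missed.
```

Difficulty runs the other way (roughly difficulty = max_target / target), which is why "bigger difficulty" and "smaller hash" describe the same thing and the terminology gets so confusing.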
newbuntu
Member

Offline
Activity: 61
Merit: 10
October 22, 2014, 07:24:15 PM
OK, I automatically assumed that bigger was better - like finding a block - I assumed that a larger value would solve a block and/or share. I need to read more, I guess. Thank you.
newbuntu
Member

Offline
Activity: 61
Merit: 10
October 22, 2014, 08:18:21 PM
That hash is worthless; it doesn't meet the target. That's the error you're seeing: a miner is submitting bad work. This usually happens when a miner uses a different hash algorithm, for example.

All my miners are Antminer S3s; I assume they all use the same algorithm. So this is just a one-off glitch or something? Cheers.
PatMan
October 22, 2014, 08:23:18 PM
It shouldn't happen very often; rarely, in fact. I find my S3s run best using --queue 0 in the cgminer settings. Use the latest version also.
mdude77
Legendary
Offline
Activity: 1540
Merit: 1001
October 22, 2014, 09:08:50 PM
That hash is worthless; it doesn't meet the target. That's the error you're seeing: a miner is submitting bad work. This usually happens when a miner uses a different hash algorithm, for example.
All my miners are Antminer S3s; I assume they all use the same algorithm. So this is just a one-off glitch or something? Cheers.

What probably happened was that p2pool increased the minimum pseudo-share size, the S3 hadn't switched yet, and it submitted a share smaller than was allowed. I saw this regularly when analyzing how well S2s don't perform with p2pool. It's a pseudo share, so nothing was lost. M