I updated using the link in your footer. It still gives the errors below and still shows V3.4b7, so it looks like the same version.
Also, the test e-mail works, but none of the alert e-mails get through, as others have reported earlier, which may be why I'm getting the errors below.
Is V3.4b7 in your footer link the most recent full version?
Wow. Apparently I didn't update the ZIP file before I pushed it out! Try it now. M
|
|
|
Using V3.4b7 and all seems to work fine except I get the following errors.
10/28/2014 5:24:19 PM: Initiated Ant refresh
10/28/2014 5:24:22 PM: ERROR when processing alerts on Rig5 (step 11): Column named IPAddress cannot be found. Parameter name: columnName
10/28/2014 5:24:22 PM: ERROR when processing alerts on RigS35 (step 11): Column named IPAddress cannot be found. Parameter name: columnName
I have 8 S1s and 5 S3+s. Rig5 is an S1; RigS35 is an S3+. I don't get any errors on the other 11 miners, and they're all configured the same way. Any idea why I get these errors on those two miners?
Try upgrading to the latest full version, see if you have the same error. M
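The "Column named IPAddress cannot be found. Parameter name: columnName" message is the standard .NET DataTable exception you get when code indexes a column that isn't present, here presumably because those two miners' status responses didn't include the field the alert step expects. A minimal sketch of the defensive pattern, written in Python as a stand-in (the row layout, `rig` key, and `alert_rows` helper are all assumptions for illustration, not the app's actual code):

```python
# Hypothetical sketch: skip rows that lack the expected column instead of
# letting the lookup raise (the Python analogue of .NET's
# "Column named IPAddress cannot be found").
def alert_rows(rows, column="IPAddress"):
    """Yield (rig, value) pairs, silently skipping rows missing the column."""
    for row in rows:
        if column not in row:          # column missing: skip, don't crash
            continue
        yield row["rig"], row[column]

rows = [
    {"rig": "Rig1", "IPAddress": "192.168.1.10"},
    {"rig": "Rig5"},                   # no IPAddress -> would have raised
]
print(list(alert_rows(rows)))  # [('Rig1', '192.168.1.10')]
```

In the real app the fix would be the equivalent `table.Columns.Contains("IPAddress")` check before indexing the DataRow.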
|
|
|
Kano,
Any ETA on the payments from the last 2 blocks? I'd love to use some of the proceeds to rent more miners to point at the pool and get the blocks rolling faster.
I'm pretty sure they pay out after the block confirms. M
|
|
|
My antivirus deleted it two weeks ago. I don't know why, but now the icon is white and the shortcut no longer works.
You'll need to whitelist it. For some reason antivirus vendors don't like what my app does. See the first listing. M
|
|
|
I think the new FW somehow messed up my miners.
Since I got them they have been steady at 453 @ 225.
Since I updated the FW, then put the 826 FW back on, reset them, and set them back to 225, one is at 446 and one at 438. They also keep beeping.
What do I do?
You can always downgrade to the prior firmware. M
|
|
|
Did we just now? Did we, did we...... YEAH!!! Luck seems to be good here. M
|
|
|
Is there a way to get an S3 to never ever beep for any reason without covering up or removing the speaker?
I have beeping set to false, but whenever my mining pools take a dump I'm left with 4 beeping (crying) baby S3s.
Thanks
Not that I've found. And it's not a nice quiet beep like the S1's; it's a loud, annoying beep. M
|
|
|
12GB should be enough; I'm using 16GB merge-mining 8 coins and it's a bit overkill, tbh. Mining is also how I learned to use Xubuntu - it's an excellent and fun way to do it, eh? It's a great learning curve. I put Ubuntu Server on it, but I'm struggling with the strict text interface, so I broke down and put ubuntu-desktop on for now, until I'm more confident. I'm just doing install after install of everything: the RAID0 setup, bare-minimum server install, updates, and everything needed for the p2pool server and merge coins. Get used to it, get comfortable, break it, fix it, break it, fix it, rinse and repeat. I'll be ready to go live with it, perhaps, in a day or so. Any and all advice welcome. Cheers.
Check out webmin. It makes managing a text-based Linux server _so_ much easier. M
|
|
|
Do you have an ETA for when the software will allow regional servers, making for a more global system, even if not shared with other pool operators?
--->>> We just passed 200 TH/s... Let's get this up to 1 PH/s...
I've got a server on the west coast I can put this on ... as long as it's mostly plug-and-play. M
|
|
|
That hash is worthless: it doesn't meet the target, so what you're seeing is an error. A miner is submitting crap. This usually happens when it's mining with a different hash algorithm, for example.
All my miners are Antminer S3s, so I assume they all use the same algorithm. Is this just a one-off glitch or something? Cheers.
What probably happened is that p2pool increased the minimum pseudo-share size and the S3 hadn't switched yet, so it submitted a share smaller than was allowed. I saw this regularly when analyzing how poorly S2s perform with p2pool. It's a pseudo share, so nothing is lost. M
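The failure mode described, a pool raising its minimum pseudo-share difficulty while a miner keeps submitting at the old setting, can be sketched like this (the numbers and the `accept_share` helper are illustrative, not p2pool's actual code):

```python
def accept_share(share_diff, pool_min_diff):
    """A submitted share only counts if it meets the pool's current minimum."""
    return share_diff >= pool_min_diff

pool_min_diff = 500      # pool just raised its minimum pseudo-share difficulty
old_s3_diff = 256        # S3 still submitting at the old, lower difficulty

print(accept_share(old_s3_diff, pool_min_diff))  # False -> share rejected
print(accept_share(512, pool_min_diff))          # True once the miner resyncs
```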
|
|
|
I had this today:
Worker 1NBJixrZoXbcUaSkbQE4FxTsADTUAx8Ct6 submitted share with hash > target:
2014-10-22 12:50:57.620751 Hash: 3ff33cca7a322c501a7e754950bf8446009b69e418f94695746291
2014-10-22 12:50:57.620805 Target: 3ff2769861c1a00000000000000000000000000000000000000000
But I didn't get any credit for it - no change in shares - shouldn't I have at least received a share for it?
Cheers.
The short answer is no. The medium answer has to do with terminology. Technically a share's hash needs to be smaller than the current target value to count. But the way we usually talk about difficulty, and hence shares, we say everything has to be larger. p2pool uses the technical terminology ... which is very confusing, to say the least. I don't think I can explain the long answer properly, as I don't fully understand it yet. It has to do with how the value is used. M
|
|
|
6 blocks today and counting. Loving this new pool hashrate. Not so thrilled with the 17 million share difficulty. If this keeps up I'll be squeezed out again, or I'll have to get more hashpower. M
|
|
|
I say that because I'm pretty sure I've seen a single threaded VM with high CPU usage, yet the underlying OS has low CPU usage spread across multiple threads.
That's it pretending to be one guest core by serialising from one host core to another (i.e. jumping around). There is no way to parallelise serial work.
Ah. I see your point. M
|
|
|
Looking forward to someone rewriting p2pool. I was running it earlier on a Q6700 and was getting terrible latency - one core was flat out. A single-threaded .exe sucks. A Core2Quad 2.66GHz with 6GB RAM should be able to totally maul p2pool. Would running it in a VM with 1 thread help? I've often theorized that a VM with 1 thread on a multicore machine performs better than on the multicore machine directly, as those multiple threads are used to run the "one" thread in the VM. But I never actually tried it.
Only outright core speed matters, since p2pool is written in Python, which is single-threaded. The faster the cores are, the better. Having many cores does nothing, as there is no way to "add them up". In fact, on an otherwise lightly loaded system with a hyperthreaded CPU, disabling hyperthreading in the BIOS will speed up Python. I say that because I'm pretty sure I've seen a single-threaded VM with high CPU usage while the underlying OS had low CPU usage spread across multiple threads. M
|
|
|
I'm just glad the pool works! Now if only Bitmain would fix the S4 so I can point it here and get the hash rate I'm supposed to be getting, it'd be a good day.
Do you have other hashpower pointed here? Maybe run a stratum proxy and point your hashpower through it ... and once the proxy difficulty gets up to 1024, point your S4(s) at it. M
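The reasoning behind the proxy suggestion is that a vardiff pool tunes each connection's difficulty to hold a roughly constant share rate, so aggregating many miners behind one proxy connection multiplies the hashrate the pool sees and pushes its assigned difficulty up. A rough sketch of that intuition (the target share rate and formula are illustrative assumptions, not any pool's actual vardiff code):

```python
# Rough vardiff intuition: difficulty scales with per-connection hashrate
# so that shares arrive at roughly a fixed rate.
TARGET_SHARES_PER_MIN = 20

def vardiff(hashrate_ghs):
    # ~2**32 hashes are expected per difficulty-1 share
    shares_per_min = hashrate_ghs * 1e9 * 60 / 2**32
    return max(1, round(shares_per_min / TARGET_SHARES_PER_MIN))

print(vardiff(0.5))    # one small miner alone: difficulty stays at the floor
print(vardiff(2000))   # many miners behind one proxy: much higher difficulty
```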
|
|
|
For the last block found (325976), I received a payment much higher than normal or expected. I'm definitely not complaining, this is great! But I want to understand why. The payout tab showed the normal expected amount, but my payment was 4x that amount. Anyone know why this could happen? Edit: To add a bit more info: my P2Pool server's default payout address is a different address; I'm talking about my mining address only.
Usually that means you found the block. M
Wow! You are right. First time ever. I didn't even notice! lol
2014-10-18 22:43:49.668550 GOT BLOCK FROM MINER! Passing to bitcoind! https://blockchain.info/block/0000000000000000192873ad7facbe0898a86dc8426702c5bf19e37e7f93380b
Grats! M
|
|
|
Pseudo shares don't affect finding a block in any way other than that work restarts every time the pseudo-share difficulty changes, so selecting a constant value could help a bit by keeping the miner busy. Large miners should use a bigger share difficulty; that would help pool-wide. Like btcaddress+501/100000000: then there should be fewer 5-second restarts, and big miners earn more that way too. Ckolivas's proxy also helps keep miners busy. I've had very good results with it on p2pool and solo; it squeezes the juice out of miners. With S3s, the min diff and start diff have to be chosen so that you can see the best share. Last week's bests with 2 TH/s: 2,347,645,293, one ~1,500,000,000, 537,352,921. I never got those kinds of results without the proxy. S3s don't show best share with all share diffs: +512, 1024, and 4096 don't show; 500 and 4000 do. nohup ./ckpool -p -A -k > /dev/null 2>&1 &
I wonder how Ants would perform if they were turned into diskless nodes booted over TFTP; that way cgminer could run on a more powerful machine. I think the S2, S3, and S4 suffer constant restarts because the mainboard doesn't have enough power. And that way I wouldn't have to wait for new firmware and cgminer... http://wiki.gentoo.org/wiki/Diskless_nodes
It does matter with the S4 and the S2. The S3 works correctly. It doesn't work with my S2: no matter what difficulty I set, I lose up to 10% of my hash rate by pointing it at p2pool. M
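The `btcaddress+501/100000000` convention mentioned above encodes difficulty requests in the worker name itself. A hedged sketch of parsing such a name (in p2pool, `+N` is commonly the pseudo-share difficulty and `/N` the actual share difficulty, but check your node's documentation; the address and `parse_worker` helper here are made up for illustration):

```python
def parse_worker(name):
    """Split a worker name like '1Addr+501/100000000' into its parts.

    Assumed convention: '+N' requests a pseudo-share difficulty,
    '/N' an actual share difficulty. Either suffix is optional.
    """
    share_diff = pseudo_diff = None
    if "/" in name:
        name, value = name.rsplit("/", 1)
        share_diff = int(value)
    if "+" in name:
        name, value = name.rsplit("+", 1)
        pseudo_diff = int(value)
    return {"address": name, "pseudo_diff": pseudo_diff, "share_diff": share_diff}

print(parse_worker("1ExampleAddr+501/100000000"))
# {'address': '1ExampleAddr', 'pseudo_diff': 501, 'share_diff': 100000000}
```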
|
|
|
Really like your software. Will be sending a tip your way tonight of .10 BTC.
Thanks! Are there any plans to add features? A great feature would be the ability to update settings, such as the frequency/rate of the S3+ units.
That's only doable via the web interface or SSH, correct? I've shied away from putting anything in SSH that, if it goes wrong, could brick your unit. Also, can you change pool info per miner, or only for all of them at the same time?
I haven't tried this feature yet because my miners are grouped into lots pointing at different pools.
Anyway, thanks again; sending .10 BTC your way for the great work.
It works for all of them. Note that for S1s and S3s you'll have to enable API updates in cgminer; by default it can only be done locally. M
How do you enable API updates in cgminer? Thanks.
You have to update the cgminer startup/config via SSH. I don't know the specific details, but I'm certain you can find them with a Google search. Since I've been asked this enough, one of these days I'll take the time to find the info and post it here, for use at your own risk. M
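For reference, cgminer's remote API is controlled by its `--api-listen` and `--api-allow` options (or the matching `api-listen`/`api-allow` entries in the config file). A hedged example of what the startup line might look like after editing over SSH; the pool URL, worker name, and LAN range are placeholders, the startup script path varies by firmware, and as noted above this is at your own risk:

```shell
# Assumed example only; adapt the path/firmware specifics yourself.
# W: grants write (privileged) API access, which remote config changes need.
cgminer -o stratum+tcp://pool.example.com:3333 -u yourworker -p x \
        --api-listen --api-allow "W:127.0.0.1,W:192.168.1.0/24"
```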
|
|
|