Not sure if you already know this, but in cgminer-2.10.5 the configure script only checks for libcurl-7.18.2, while libcurl-7.25+ is actually required. It looks like a message or version test was missed.
|
|
|
Marty,
It looks like the getwork proxy cannot connect to the local stratum server. My guess is you have a firewall blocking the stratum port. If that is not the case, please post the top section (listening ports) of netstat -an.
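The same check can be done from Python if netstat output is confusing (a minimal sketch; port 3333 is an assumption, substitute whatever stratum port your config uses):

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Assumed defaults: stratum server on localhost, port 3333.
    print("stratum port open:", port_is_open("127.0.0.1", 3333))
```

If this prints False on the pool host itself, the server is not listening; if it is True locally but False from a remote machine, a firewall is the likely culprit.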
|
|
|
It's working now, thanks again for making me go through conf/config.py again. I know nothing about Python, as you can probably tell. If anybody wants me to copy and paste some command lines to get this up and running on an Ubuntu server, let me know.

I noticed that there is a problem when using MySQL: it fills up with processes that are stuck in 'Waiting for table metadata lock' and becomes completely unresponsive after a while. I tried two different MySQL servers with the same results (one on localhost, one on another server in the local network). Using sqlite it works so far; I haven't tried 'none'. http://pastebin.com/Euyst33L

One more thing: how would I run update_submodules? I am now running the proxy on a different host and could probably run it as a separate instance, but having it together with the pool might make sense.

I'll have to take a look at the MySQL bit. I have a guess, but I'll have to confirm it. The update_submodules script just does a git checkout of my (very slightly modified) stratum-proxy into the externals directory. GitHub has been up and down the last couple of days though, so it's kind of at the mercy of GitHub.
|
|
|
http://pastebin.com/Ns158VNG — does this look fine? Since I set up with blocknotify ("If using the blocknotify script (recommended) set = to MERKLE_REFRESH_INTERVAL (No reason to poll if we're getting pushed notifications)"), can generalfault explain this? Should both be turned on? I don't understand how you've written this. And since I am on a VPS, what do you suggest, prevhash or merkle — which will take a lower load?

If you are using the blocknotify script then set PREVHASH_REFRESH_INTERVAL to the same value as MERKLE_REFRESH_INTERVAL, so:

PREVHASH_REFRESH_INTERVAL = 60
MERKLE_REFRESH_INTERVAL = 60

We never really want to turn off new-block checking completely, as a failsafe (just in case blocknotify doesn't work for ANY reason, we don't want to be hashing a bad block forever). You really need both:

prevhash = check for a new block; if there is one, send out the notifies and throw away any old possible solutions, since they are now invalid.
merkle = roll in any new transactions and send out notifies, but DO NOT throw away any old possible solutions, since they are still valid (they just don't include the latest transactions).
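Put together, the relevant section of conf/config.py would look something like this (a sketch; the setting names and values come from the answer above, the comments are mine):

```python
# Polling intervals, in seconds. With the blocknotify script installed,
# pushed notifications do the real work and these polls are only a failsafe,
# so both can be set to the same relaxed value.
PREVHASH_REFRESH_INTERVAL = 60  # poll for a new block; a new block invalidates old work
MERKLE_REFRESH_INTERVAL = 60    # poll for new transactions; old work stays valid
```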
|
|
|
Thanks a lot, now I understand; all works fine now. Does stratum use a PPS reward type, or score like slush's pool?

It depends on the pool you use. I use Eclipse, so I use DGM, but I could pick Eclipse's PPS.

As askit2 said, it all depends on the pool you use. If you are using slush's or mine, then there are no payouts split to workers; it all just goes into the address you specify in the config. I'm not planning on integrating any payout modules, as I use it for solo mining, and I figure that should be left to the pool operators (but all the share info is in the db for them to use to do the calculations).
|
|
|
Lol, it's all there as far as I can tell. The config _IS_ complete, including DB_USERCACHE_TIME, and I also made sure I got my quotes right (I had the single quotes missing around the sha256 for the password, but adding them did not solve the above error message). Merry xmas and thanks for your contribution! Small donation coming your way soon.

OK, well, it may not be picking up the config at all, or it's using some other config.py. Let's say you checked out into mining-pool; you should have:

mining-pool/conf/config_sample.py
mining-pool/conf/config.py
mining-pool/launcher.tac

Make sure you don't have any other config.py files. Then:

cd mining-pool
twistd -ny launcher.tac -l -

If in doubt, delete conf/config.py, recopy conf/config_sample.py to conf/config.py, set the bare minimum of settings (bitcoin port, wallet address, etc.) and try again.
|
|
|
(Quoting the config post above.) You're sure you removed those ''' above and under the db config, right?

Btw generalfault, I'm having some weird problem with your fork — http://pastie.org/private/9ewm30rcqbp98t6xjkyvg — and it doesn't seem to be listening for connections.

What is on port 9332? Is that your bitcoind trusted port? If so, is bitcoind running? My guess is that it's not connecting to the bitcoind RPC. What does bitcoind getinfo say? Can you telnet localhost 9332? If it can't connect to bitcoind then it won't start (it just waits for it to come up).
|
|
|
I have been trying to get the pool from generalfault to work; unfortunately I haven't been successful.

worker@hp1:~/stratum-mining$ twistd -ny launcher.tac -l -
2012-12-22 22:13:30,526 DEBUG example logger.get_logger # Logging initialized
2012-12-22 22:13:30,533 DEBUG interfaces logger.get_logger # Logging initialized
2012-12-22 22:13:30,535 DEBUG DBInterface logger.get_logger # Logging initialized
Unhandled Error
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/Twisted-12.2.0-py2.7-linux-x86_64.egg/twisted/application/app.py", line 652, in run
    runApp(config)
  File "/usr/local/lib/python2.7/dist-packages/Twisted-12.2.0-py2.7-linux-x86_64.egg/twisted/scripts/twistd.py", line 23, in runApp
    _SomeApplicationRunner(config).run()
  File "/usr/local/lib/python2.7/dist-packages/Twisted-12.2.0-py2.7-linux-x86_64.egg/twisted/application/app.py", line 386, in run
    self.application = self.createOrGetApplication()
  File "/usr/local/lib/python2.7/dist-packages/Twisted-12.2.0-py2.7-linux-x86_64.egg/twisted/application/app.py", line 451, in createOrGetApplication
    application = getApplication(self.config, passphrase)
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/dist-packages/Twisted-12.2.0-py2.7-linux-x86_64.egg/twisted/application/app.py", line 462, in getApplication
    application = service.loadApplication(filename, style, passphrase)
  File "/usr/local/lib/python2.7/dist-packages/Twisted-12.2.0-py2.7-linux-x86_64.egg/twisted/application/service.py", line 405, in loadApplication
    application = sob.loadValueFromFile(filename, 'application', passphrase)
  File "/usr/local/lib/python2.7/dist-packages/Twisted-12.2.0-py2.7-linux-x86_64.egg/twisted/persisted/sob.py", line 210, in loadValueFromFile
    exec fileObj in d, d
  File "launcher.tac", line 19, in <module>
    import mining
  File "/home/worker/stratum-mining/mining/__init__.py", line 1, in <module>
    from service import MiningService
  File "/home/worker/stratum-mining/mining/service.py", line 7, in <module>
    from interfaces import Interfaces
  File "/home/worker/stratum-mining/mining/interfaces.py", line 14, in <module>
    dbi = DBInterface.DBInterface()
  File "/home/worker/stratum-mining/mining/DBInterface.py", line 16, in __init__
    self.clearusercache()
  File "/home/worker/stratum-mining/mining/DBInterface.py", line 53, in clearusercache
    self.usercacheclock = reactor.callLater( settings.DB_USERCACHE_TIME , self.clearusercache)
exceptions.AttributeError: 'module' object has no attribute 'DB_USERCACHE_TIME'
Failed to load application: 'module' object has no attribute 'DB_USERCACHE_TIME'

Your config.py is not complete (a setting is missing). Please read the INSTALL file and follow the instructions (step 3 in particular is what you didn't do, or messed up). Read manual.... manual good...
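The failure mode is easy to reproduce in isolation (a minimal sketch; the DB_USERCACHE_TIME name comes from the traceback above, while the fallback value and the CENTRAL_WALLET setting are illustrative assumptions, not the fork's actual code):

```python
import types

# Simulate a config module that is missing a required setting, as happens
# when conf/config.py was not (re)copied from config_sample.py.
settings = types.ModuleType("settings")
settings.CENTRAL_WALLET = "1ExampleAddress"  # hypothetical setting, for illustration

try:
    interval = settings.DB_USERCACHE_TIME  # the missing setting from the traceback
except AttributeError as e:
    print(e)  # same class of error as the log; exact wording varies by Python version

# A defensive pattern the pool code could use instead: fall back to a default.
interval = getattr(settings, "DB_USERCACHE_TIME", 600)  # 600 is an assumed default
```

This is why the advice above works: recopying config_sample.py guarantees every setting the code reads actually exists in the module.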
|
|
|
Can someone please take a look at this? I manually installed Python 2.7 and Twisted 12.2 and am still facing the same problem:
2012-12-22 04:25:24,738 ERROR protocol protocol.dataReceived # Processing of message failed
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/stratum-0.2.11-py2.7.egg/stratum/protocol.py", line 181, in dataReceived
    self.lineReceived(line, request_counter)
  File "/usr/local/lib/python2.7/site-packages/stratum-0.2.11-py2.7.egg/stratum/protocol.py", line 212, in lineReceived
    raise custom_exceptions.ProtocolException("Cannot decode message '%s'" % line)
ProtocolException: Cannot decode message 'POST / HTTP/1.1
As slush said above, your client is talking HTTP to a stratum/JSON port. You most likely have the pool specified on your client as:

http://yourhost.somewhere:3333

and it should be:

stratum+tcp://yourhost.somewhere:3333

Stratum doesn't understand HTTP (nor should it).
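You can see why the server rejects it: stratum expects one JSON object per line, and the first line of an HTTP request is not valid JSON (a minimal sketch; the real server's exception class and plumbing differ):

```python
import json

def decode_stratum_line(line):
    """Parse one newline-delimited JSON message, as a stratum server would."""
    try:
        return json.loads(line)
    except ValueError:
        # Mirrors the spirit of the ProtocolException in the log above.
        raise ValueError("Cannot decode message '%s'" % line)

# A real stratum subscribe message decodes fine...
msg = decode_stratum_line('{"id": 1, "method": "mining.subscribe", "params": []}')

# ...but an HTTP request line does not.
try:
    decode_stratum_line("POST / HTTP/1.1")
except ValueError as e:
    print(e)  # Cannot decode message 'POST / HTTP/1.1'
```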
|
|
|
For the stratum mining pool there is one thing I don't understand: should I first have this proxy running, or can it work normally without it? There isn't much documentation for the mining pool (for the proxy, yes — I tried the proxy with another pool and it works), so I don't understand how the stratum mining pool works. I downloaded it and I only have a config file; I don't even understand how to start it, so I would like to ask here.
Josh,
If you are just looking to connect to a pool with stratum, then just use cgminer (it supports stratum internally, so no extra steps). If you are looking to run your own solo pool: slush's stratum pool is kept very simple and is aimed at pool software writers. I've forked it and made it a little more suitable for full-time use (added things like db support, a stats page, basic install instructions, etc.); it's at https://github.com/generalfault/stratum-mining. If you are looking to connect non-stratum-aware miners to a stratum pool, then use the proxy.
|
|
|
I've been working on my stratum-mining fork (which is pretty far along at this point), but I've come to a conundrum that I'd like opinions on.

Let's start with two important points: there are 3 DB libraries (sqlite, PostgreSQL, and MySQL) that I'm maintaining, and I do NOT want to double that number to 6 (in fact I WILL not, so I have to make a choice).

Currently they use the native libraries (sqlite3, psycopg2, and MySQLdb). These libraries are VERY fast (since they are basically just C libs). However, they do not have connection pooling (not a big deal for this application) and do not auto-reconnect (as far as I know).

To get these two features, the only really popular option seems to be SQLAlchemy (Core, not ORM). I have ported the sqlite DB library interface over to SQLAlchemy for testing and to get my footing on what it takes (quite a bit of work, but doable). I've done some profiling, and SQLAlchemy bulk loading (after optimizations) seems to be half the speed of the native libraries. It's understandable; I'm not complaining, mind you.

Which brings me to the question: is the slowdown (and extra work) worth the benefits (connection pooling and reconnects)?
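For what it's worth, the auto-reconnect half of the trade-off can also be had without SQLAlchemy by wrapping the native driver (a sketch using stdlib sqlite3; the retry-once policy and class name are my own illustration, not code from the fork):

```python
import sqlite3

class ReconnectingDB(object):
    """Wrap a native DB-API connection, reopening it once if a call fails.

    Sketch only: with sqlite3 the connect is local, but the same shape
    applies to network drivers like MySQLdb, where an error often means
    the server dropped the connection.
    """

    def __init__(self, path):
        self.path = path
        self.conn = sqlite3.connect(self.path)

    def execute(self, sql, params=()):
        try:
            return self.conn.execute(sql, params)
        except sqlite3.Error:
            # Reconnect once and retry; a production version would also
            # back off and give up after a few attempts.
            self.conn = sqlite3.connect(self.path)
            return self.conn.execute(sql, params)

db = ReconnectingDB(":memory:")
db.execute("CREATE TABLE shares (worker TEXT, valid INTEGER)")
db.execute("INSERT INTO shares VALUES (?, ?)", ("worker1", 1))
rows = db.execute("SELECT COUNT(*) FROM shares").fetchone()
```

This keeps the raw speed of the native libs at the cost of maintaining the wrapper three times, which is exactly the maintenance burden the question is about.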
|
|
|
Slush,
I just want to say THANK YOU! You have done so much for the community (and I know that thank-yous tend to be few and far between).
I'm using the stratum mining server and it's working very well. I solo mine (with slush as backup, of course) with about 20 clients (some run BFL kit), and using pool software is a good way to monitor/manage them. I hacked in the db code to get it reporting. Do you want any contributions to the code? I could put in some db code and a simple setup how-to (since it's a bit confusing). If not, I understand.
|
|
|
def on_submit_share(self, worker_name, block_header, block_hash, shares, timestamp, is_valid):
is_valid is a boolean indicating that the share has been accepted.

I am so sorry, I didn't quite ask that right: how do we know if the share solves/generates a block? i.e., how do we know when we have generated a block?
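One way to answer that inside the callback itself, given the signature above (a sketch; on_submit_share's parameters come from the post, while the target comparison and the ShareLogger class are my own assumptions about how one might check, not confirmed fork API):

```python
class ShareLogger(object):
    """Illustrative receiver for the share callback (class name hypothetical)."""

    def __init__(self, target):
        # target: the current network target as an integer; a share whose
        # hash is at or below it is also a valid block.
        self.target = target
        self.blocks_found = []

    def on_submit_share(self, worker_name, block_header, block_hash,
                        shares, timestamp, is_valid):
        if not is_valid:
            return
        # A share solves a block when its hash meets the network target.
        if int(block_hash, 16) <= self.target:
            self.blocks_found.append(block_hash)

# Assumed values for illustration: a difficulty-1 style target and a
# made-up hash with enough leading zeros to beat it.
logger = ShareLogger(target=int("00000000ffff" + "0" * 52, 16))
winning_hash = "00000000000404cb" + "0" * 48
logger.on_submit_share("worker1", "00" * 80, winning_hash, 1, 1356222810, True)
```

The real pool presumably does this comparison before submitting the block to bitcoind; the sketch only shows where the "we found a block" decision can live.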
|
|
|
A hopefully quick question, and pardon my ignorance.

In stratum-mining/mining/interfaces.py I see where we see submitted shares, but how do we know if the result is accepted as a share? Is it related to on_submit_block?
|
|
|
I for one imagine it to be a hyper-cube design with a spinning torus suspended by magnets in the center. It's powered by a magnetic resonance sender that is placed between 1 and 50 feet away. The USB connection is wireless, of course.
|
|
|
I've managed to get it to generate the /dev/ttyUSB0 device now, by adding the following udev rule:

SYSFS{idProduct}=="6014", SYSFS{idVendor}=="0403", RUN+="/sbin/modprobe ftdi_sio product=0x6014 vendor=0x0403"

but I now get the following:

BFL 0: 1107951616.0C | REST / 0.0Mh/s | A:1 R:0 HW:0 U:9.96/m
</snip>
[2012-07-21 03:01:02] BFL0: Hit thermal cutoff limit, disabling!
[2012-07-21 03:01:05] BFL0: Hit thermal cutoff limit, disabling!
<and repeat>

I'm sure if it was that hot, it would've gone into meltdown by now!

I've actually posted this before, but on CentOS the bitforce driver is broken. It shows exactly what you see (apparently the BitForce is as hot as the sun). To fix it, in driver-bitforce.c find the line:

float temp = strtof(s + 1, NULL);

and change it to:

float temp = strtod(s + 1, NULL);
|
|
|
That's very interesting and likely an issue with the difference between the headers and the implementation of pthread_cleanup_pop in that distribution/gcc versus all the more modern ones we're compiling on. The reason you can't see why this is an issue is that the pthread_cleanup_* functions are implemented as macros, so you'd have to spit out the pre-processor output. Either way it should be easy to implement something like that as a fix, thanks.

Ah ha! You know, that would completely explain it. I was racking my brain as to why it would throw THAT error. Thank you so much; my brain can rest now.
|
|
|
Error when compiling cgminer, versions 2.3.4-2.3.6 (versions 2.3.1-2.3.3 compile fine):

CC usage.o
AR libccan.a
make[2]: Leaving directory `/home/kanotix/cgminer-2.3.6/ccan'
make[2]: Entering directory `/home/kanotix/cgminer-2.3.6'
CC cgminer-cgminer.o
CC cgminer-util.o
CC cgminer-sha2.o
CC cgminer-api.o
api.c: In function ‘api’:
api.c:2409: error: label at end of compound statement
make[2]: *** [cgminer-api.o] Error 1
make[2]: Leaving directory `/home/kanotix/cgminer-2.3.6'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/kanotix/cgminer-2.3.6'
make: *** [all] Error 2

Tested with NVIDIA and ATI/AMD OpenCLs. OS: Linux x86. GCC: 4.3.

Well, all I can guess is there must be something wrong with your compiler, or something messed up about your git pull/clone, or some issue with the ./configure options you used. The "die:" label has been there since 2011-11-24 (go to https://github.com/ckolivas/cgminer/blame/master/api.c to check for yourself). The code also compiles fine on quite a few platforms, including xubuntu, fedora 16, debian, and MinGW on windows; I've even just done another git pull and compiled it. What git pull/clone command did you use, and what ./configure command? If you pulled on top of an old version, make sure you './autogen.sh', './configure --xxx' and 'make clean' again before compiling.

OK, so here is my 2 cents. There is something funky about this file. I'm on CentOS 5.x and getting this error. To fix it, add a semicolon on the line after die:, so it looks like this:

die: ;
	pthread_cleanup_pop(true);

and then all is well. It shouldn't be needed, but it is... I have no idea why.
|
|
|
Why choose one or the other? I use both. When doing a decent size buy/sell, one or the other always has a slight price difference. I use whatever one is more advantageous, why not?
|
|
|
Is there any way to get immediate data out of a Phoenix instance - for example, the hashrate, the results accepted / rejected, and the last status message / timestamp?
Or, since I'm not a Linux god (my Unix skills come from being a Mac OS X guru) - is there any funky file type I can use that I can send the stdout from Phoenix to, and be able to read it from Ruby, but can be 'trimmed' on regular occasions (i.e. removing the old entries), or one that can be set to a specific size which it won't grow any larger than? Sort of like a FIFO but allowing, say, 32K of log text to build up before dropping the first bytes in?
So, I liked that question for some reason (most likely because I had a similar problem). Here is a little perl script:

#!/usr/bin/perl
my @s;
$/ = "\b";
while (<>) {
    next if (!m/^\[/);
    chomp;
    print "$_\n";
    if (push(@s, $_) > 20 or m/(Result|New Work)/) {
        open(FH, ">>", "out.log");
        print FH join("\n", @s) . "\n";
        close FH;
        @s = ();
    }
}

You'd just pipe the output of phoenix into it:

./phoenix blah blah blah | ./logger

It spools up to 20 lines and writes them out to out.log. If you are parsing out.log, then you'd just:

mv out.log out.log.parsing
<parse the out.log.parsing file>
rm out.log.parsing

and a new out.log will be created.
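The same spool-and-flush idea in Python, for anyone who'd rather avoid Perl (a sketch; the 20-line spool and the Result/New Work triggers mirror the script above, while the function name and parameters are mine):

```python
import re

def spool_lines(records, logfile="out.log", max_spool=20):
    """Buffer status records, flushing to logfile every 20 records or
    on a 'Result'/'New Work' record, mirroring the perl logger above.

    Note: phoenix separates status updates with backspace characters
    (the perl script sets $/ = "\b"), so in practice the caller would
    split the raw stream on "\b" first, e.g.
    spool_lines(sys.stdin.read().split("\b")).
    """
    spool = []
    flush_re = re.compile(r"(Result|New Work)")
    for rec in records:
        rec = rec.rstrip("\b\n")
        if not rec.startswith("["):  # keep only timestamped status lines
            continue
        spool.append(rec)
        if len(spool) > max_spool or flush_re.search(rec):
            with open(logfile, "a") as fh:
                fh.write("\n".join(spool) + "\n")
            spool = []
```

The append-then-rename dance from the post applies unchanged: move out.log aside, parse the copy, delete it, and the logger starts a fresh out.log on its next flush.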
|
|
|
|