d3c0n808
|
|
July 03, 2011, 07:52:37 PM Last edit: July 03, 2011, 08:25:04 PM by d3c0n808 |
|
No, unfortunately the amount of debugging information provided by pushpoold is very, very limited. In my experience with pushpool, I wasn't able to get it to compile without using the latest version of memcached, but that's Slackware, not Debian. I was able to compile on Debian squeeze, though I imagine the libs and binaries for lenny are older than those for squeeze. If pushpool can't connect to memcached, it usually just exits on my system.
|
|
|
|
Xenland (OP)
Legendary
Offline
Activity: 980
Merit: 1003
I'm not just any shaman, I'm a Sha256man
|
|
July 04, 2011, 03:46:08 PM |
|
Stupid question time, please... can someone tell me how to properly shut down pushpoold? I can't seem to find the command documented anywhere. Also, is there a clear example of how one's startup line should look to enable pushpoold to output display to the open terminal window?
like: ./pushpoold -F -E
This command seems to sort of work, I think... all I get with that is the list of ports pushpoold is listening on and that it is initialized. I don't, however, get output for when a remote client connects.
Thanks in advance!
d3c0n808 is correct: pushpool is a cut-and-dry application and very shy about giving details about what is happening, although the command you are using should display when users are connecting. I should note that it will only display connections on 2 of the 4 ports set in the .json file. One of those 2 ports will work and the other won't, although they both report they are connected/connecting.
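For reference, a minimal way to launch pushpoold attached to the terminal, using the flags mentioned above (the build directory is an assumption for illustration):

```shell
# run pushpoold in the foreground with log output on the terminal:
#   -F  stay in the foreground instead of daemonizing
#   -E  send log output to stderr
# (flags as used earlier in this thread; adjust the path to your build)
cd ~/pushpool
./pushpoold -E -F
```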
|
|
|
|
froggy
|
|
July 04, 2011, 07:47:42 PM |
|
can someone tell me how to properly shut down pushpoold please? I usually just... then kill -9 pushpoold's process number. Brutal but effective.
|
|
|
|
d3c0n808
|
|
July 04, 2011, 08:30:06 PM |
|
can someone tell me how to properly shut down pushpoold please? I usually just... then kill -9 pushpoold's process number. Brutal but effective. sudo killall pushpoold also works....
|
|
|
|
gigabytecoin
|
|
July 05, 2011, 02:15:30 AM |
|
QUESTION: are /tmp/shares.log and /tmp/request.log entirely necessary? Under what circumstances would one require those files? I assume they would become quite massive in a short amount of time.
Send SIGHUP to the process, to re-open the logs (such as after rotation or deletion). Why even create them at all to begin with, I guess, is my question, if you are already making exact copies of them in your MySQL database, which you can clean by timestamp much more easily? If I simply remove those two lines from the server.json file, is everything going to explode? Can anybody answer the above question? Is it necessary to create the two log files when we are already backing up everything to MySQL? Can I easily remove the two log-file lines (or send them to a blackhole-type script, or a file that never saves, or something)? I plan on removing old shares from my MySQL table on the fly as a new block is found.
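The SIGHUP re-open that jgarzik describes can be sketched like this; the log and pid-file paths are the ones used in the sample server.json later in this thread:

```shell
# rotate a pushpool log by hand: move it aside, then tell the daemon
# to re-open its log files (SIGHUP does not stop pushpoold)
mv /tmp/shares.log /tmp/shares.log.0
kill -HUP "$(cat /tmp/pushpoold.pid)"
```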
|
|
|
|
Xenland (OP)
Legendary
Offline
Activity: 980
Merit: 1003
I'm not just any shaman, I'm a Sha256man
|
|
July 05, 2011, 03:10:03 AM |
|
QUESTION: are /tmp/shares.log and /tmp/request.log entirely necessary? Under what circumstances would one require those files? I assume they would become quite massive in a short amount of time.
Send SIGHUP to the process, to re-open the logs (such as after rotation or deletion). Why even create them at all to begin with, I guess, is my question, if you are already making exact copies of them in your MySQL database, which you can clean by timestamp much more easily? If I simply remove those two lines from the server.json file, is everything going to explode? Can anybody answer the above question? Is it necessary to create the two log files when we are already backing up everything to MySQL? Can I easily remove the two log-file lines (or send them to a blackhole-type script, or a file that never saves, or something)? I plan on removing old shares from my MySQL table on the fly as a new block is found. Yes and no. The log files are for those who don't do terminal-based debugging, so I believe (don't quote the following) they are hard-coded into pushpool as output in case of a crash. I think log files usually trim their older lines. Most developers (especially anyone who can program a pool service) have probably been around since long before the days when 512MB was unheard of and text data could fill a whole hard drive within minutes, so I personally believe they do stop getting bigger at some point. Can you easily remove the two log-file lines? I don't know; I don't think anybody has tried, so you should be the first. Let us know what your results are and we'll assist you with what we can. You should really install Mining Farm and check out the database structure to see how it all works; I do something similar. Basically, current rounds go to the `shares` table, then upon a found block they get converted to the `shares_history` table, where they wait for the block to be confirmed; changing tables ensures better performance when doing reward counting.
After it's done rewarding and counting shares, it moves all shares for that round into `shares_dead`, which is a backup for operators in case the code went horribly wrong (hopefully never) and everyone got offset rewards; the operator can rely on a backup, so to speak. Other than that, `shares_dead` is just a backup.
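The round rotation described above could be sketched with plain SQL through the mysql client; the database name, user, and exact statements here are assumptions based on that description, not Mining Farm's actual code:

```shell
# move the finished round from the hot `shares` table into
# `shares_history`, then clear `shares` for the next round
# (db/user names are illustrative; see the Mining Farm schema for the real ones)
mysql -u pooluser -p btcserver <<'SQL'
INSERT INTO shares_history SELECT * FROM shares;
TRUNCATE TABLE shares;
SQL
```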
|
|
|
|
gigabytecoin
|
|
July 05, 2011, 03:33:59 AM |
|
Thanks for that reply Xenland.
And yes you are right, I should just get on to trying it and let everybody know! I will do so in the next few hours and report back on this thread as to whether or not you can safely comment out the two .log files specified in the server.json file that pushpool writes to.
(I was just hoping somebody had already attempted it!)
So what you are saying is you are not sure whether or not the .log files are automatically trimmed/kept in check by pushpoold?
If you are not doing anything with the .log files from mining farm and it has been used by a few pools now in actual production, I would assume that pushpool deals with them directly otherwise people's servers would probably be getting overloaded with them by now, or soon.
But why then would jgarzik mention how to "re-open the logs (such as after rotation or deletion)." if pushpool dealt with them directly?!
So confused!
I will try it out myself and report back ASAP.
|
|
|
|
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1099
|
|
July 05, 2011, 03:59:31 AM |
|
Sending a SIGTERM signal is the normal way to shut down pushpoold. So killall works... provided that it does not send a SIGKILL (kill -9) immediately.
pushpoold will receive the SIGTERM, and initiate safe shutdown procedures, closing files and database connections properly, etc.
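In shell terms, assuming the pid-file path from the sample server.json in this thread:

```shell
# graceful shutdown: SIGTERM (the default signal for kill and killall)
# lets pushpoold close its files and database connections cleanly
kill -TERM "$(cat /tmp/pushpoold.pid)"
# or: killall pushpoold
# avoid kill -9 (SIGKILL): it bypasses the clean shutdown path entirely
```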
|
Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own. Visit bloq.com / metronome.io Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
|
|
|
Xenland (OP)
Legendary
Offline
Activity: 980
Merit: 1003
I'm not just any shaman, I'm a Sha256man
|
|
July 05, 2011, 05:33:43 AM |
|
Yeah, I don't know what happens to the log files in any scientifically proven way, but I do a lot of testing with pushpool on my local server and the log files seem to stay in the same range at first glance. I'm pretty curious about this problem as well and will be sure to try it out after celebrating. Happy 4th of July, everyone.
Thanks for that reply Xenland.
And yes you are right, I should just get on to trying it and let everybody know! I will do so in the next few hours and report back on this thread as to whether or not you can safely comment out the two .log files specified in the server.json file that pushpool writes to.
(I was just hoping somebody had already attempted it!)
So what you are saying is you are not sure whether or not the .log files are automatically trimmed/kept in check by pushpoold?
If you are not doing anything with the .log files from mining farm and it has been used by a few pools now in actual production, I would assume that pushpool deals with them directly otherwise people's servers would probably be getting overloaded with them by now, or soon.
But why then would jgarzik mention how to "re-open the logs (such as after rotation or deletion)." if pushpool dealt with them directly?!
So confused!
I will try it out myself and report back ASAP.
|
|
|
|
Xenland (OP)
Legendary
Offline
Activity: 980
Merit: 1003
I'm not just any shaman, I'm a Sha256man
|
|
July 06, 2011, 02:41:02 PM |
|
Just thought I'd update the install tutorial since I have to reinstall the whole thing on my VPS for further testing of my Mining Farm software; might as well get this out of the way.
Thanks!
|
|
|
|
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1099
|
|
July 06, 2011, 05:30:19 PM |
|
If you are not doing anything with the .log files from mining farm and it has been used by a few pools now in actual production, I would assume that pushpool deals with them directly otherwise people's servers would probably be getting overloaded with them by now, or soon.
But why then would jgarzik mention how to "re-open the logs (such as after rotation or deletion)." if pushpool dealt with them directly?!
You must rotate/trim the logs yourself. The logrotate package can help you with that.
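A minimal logrotate sketch for the two pushpool logs, assuming the /tmp paths and pid file from the sample server.json in this thread (e.g. dropped into /etc/logrotate.d/pushpool):

```
/tmp/request.log /tmp/shares.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    postrotate
        kill -HUP "$(cat /tmp/pushpoold.pid)"
    endscript
}
```

The postrotate hook sends the SIGHUP mentioned earlier so pushpoold re-opens the freshly rotated files.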
|
|
|
|
redshark1802
Newbie
Offline
Activity: 44
Merit: 0
|
|
July 08, 2011, 11:02:21 AM |
|
Hello,
I've played around a bit with pushpoold and everything worked just fine. But when I asked a friend for a little test, something strange happened. I have about 100 MH/s; my friend has 1500 MH/s. As soon as my friend joined for testing, I got a lot of stales. I looked it up in the database and the reason is "unknown-work". Where does this come from?
Help with this would be really nice.
regards, redshark1802
|
|
|
|
Xenland (OP)
Legendary
Offline
Activity: 980
Merit: 1003
I'm not just any shaman, I'm a Sha256man
|
|
July 08, 2011, 12:09:27 PM |
|
If you are not doing anything with the .log files from mining farm and it has been used by a few pools now in actual production, I would assume that pushpool deals with them directly otherwise people's servers would probably be getting overloaded with them by now, or soon.
But why then would jgarzik mention how to "re-open the logs (such as after rotation or deletion)." if pushpool dealt with them directly?!
You must rotate/trim the logs yourself. The logrotate package can help you with that. To gigabytecoin: Mining Farm doesn't use log files because of the extra steps it would take to parse them; it does a series of checks and balances instead. Thanks, jgarzik, for that insightful info, btw.
|
|
|
|
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1099
|
|
July 08, 2011, 11:33:40 PM |
|
I would consider this a bug. pushpoold should already know the target difficulty, so why can't it do a correct check? Also, doing a proper check would slightly reduce the network activity between pushpoold and bitcoind (it would submit fewer false proofs of work to bitcoind).
Patches welcome...
|
|
|
|
smoothie
Legendary
Offline
Activity: 2492
Merit: 1473
LEALANA Bitcoin Grim Reaper
|
|
July 10, 2011, 04:13:22 AM |
|
When compiling pushpool on ubuntu 11 I get a database.engine error message and then it quits compilation.
I have tried multiple database setups and I still get the same error. Any help please?
Thanks
|
███████████████████████████████████████
| . ★☆ WWW.LEALANA.COM My PGP fingerprint is A764D833. History of Monero development Visualization ★☆ . LEALANA BITCOIN GRIM REAPER SILVER COINS. |
|
|
|
d3c0n808
|
|
July 10, 2011, 06:20:12 AM |
|
You need to install, I believe it's called, the MySQL dev package... then reconfigure. If you don't have the MySQL dev package, it won't work.
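On Debian/Ubuntu that package is typically libmysqlclient-dev; the exact name varies by release (older systems use a versioned name such as libmysqlclient15-dev):

```shell
# install the MySQL client headers, then redo pushpool's configure
# step so it can detect them (package name varies by distro release;
# the pushpool path is an assumption)
sudo apt-get install libmysqlclient-dev
cd ~/pushpool && ./configure && make
```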
|
|
|
|
Xenland (OP)
Legendary
Offline
Activity: 980
Merit: 1003
I'm not just any shaman, I'm a Sha256man
|
|
July 10, 2011, 08:32:29 AM |
|
When compiling pushpool on ubuntu 11 I get a database.engine error message and then it quits compilation.
I have tried multiple database setups and I still get the same error. Any help please?
Thanks
Read the most recently updated pushpoold tutorial for 5.1 over at the original post; it describes how to install the MySQL package.
|
|
|
|
LivTru
Newbie
Offline
Activity: 24
Merit: 0
|
|
July 10, 2011, 06:03:08 PM |
|
I am getting an error "[2011-07-10 18:00:35.988488] mysql sharelog failed at execute". Any suggestions? Here's what my table looks like:

DROP TABLE IF EXISTS `btcserver`.`shares`;
CREATE TABLE `btcserver`.`shares` (
  `id` bigint(30) NOT NULL AUTO_INCREMENT,
  `time` int(255) NOT NULL,
  `rem_host` varchar(255) NOT NULL,
  `username` varchar(120) NOT NULL,
  `our_result` enum('Y','N') NOT NULL,
  `upstream_result` enum('Y','N') DEFAULT NULL,
  `reason` varchar(50) DEFAULT NULL,
  `solution` varchar(257) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
|
|
|
|
Xenland (OP)
Legendary
Offline
Activity: 980
Merit: 1003
I'm not just any shaman, I'm a Sha256man
|
|
July 11, 2011, 02:30:30 AM |
|
I am getting an error "[2011-07-10 18:00:35.988488] mysql sharelog failed at execute". Any suggestions? Here's what my table looks like:

DROP TABLE IF EXISTS `btcserver`.`shares`;
CREATE TABLE `btcserver`.`shares` (
  `id` bigint(30) NOT NULL AUTO_INCREMENT,
  `time` int(255) NOT NULL,
  `rem_host` varchar(255) NOT NULL,
  `username` varchar(120) NOT NULL,
  `our_result` enum('Y','N') NOT NULL,
  `upstream_result` enum('Y','N') DEFAULT NULL,
  `reason` varchar(50) DEFAULT NULL,
  `solution` varchar(257) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

Post your json file (without RPC credentials or MySQL username or password); sounds like an easy fix.
|
|
|
|
LivTru
Newbie
Offline
Activity: 24
Merit: 0
|
|
July 11, 2011, 05:16:11 AM |
|
I am getting an error "[2011-07-10 18:00:35.988488] mysql sharelog failed at execute". Any suggestions? Post your json file (without RPC credentials or MySQL username or password); sounds like an easy fix.

json:

{
	# network ports
	"listen" : [
		# binary protocol (default), port 8342
		{ "port" : 8342 },

		# HTTP JSON-RPC protocol, port 8341
		{ "port" : 8341, "protocol" : "http-json" },

		# HTTP JSON-RPC protocol, port 8344
		# proxy is most likely your external ip address if your running a public p...
		# requests to us | "proxy" should be set to your ip address that people w...
		{ "port" : 8344, "protocol" : "http-json", "proxy" : "192.168.XXX.XXX" },

		# binary protocol, localhost-only port 8338
		# host is most likely your localhost address
		{ "host" : "127.0.0.1", "port" : 8338, "protocol" : "binary" }
	],

	# database settings
	"database" : {
		"engine" : "mysql",
		"host" : "192.168.XXX.XXX",
		"port" : 3306,
		# database name
		"name" : "XXX",
		# database username
		"username" : "XXX",
		# database password
		"password" : "XXX",
		# enable sharelog | to insert share data, sometimes known as "work"
		"sharelog" : true,
		"stmt.pwdb" : "SELECT password FROM pool_worker WHERE username = ?",
		"stmt.sharelog" : "INSERT INTO shares (rem_host, username, our_result, upstream_result, reason, solution) VALUES (?, ?, ?, ?, ?, ?)"
	},

	# uncomment this when you want to use memcached (Recommended for servers over...
	# cache settings
	#"memcached" : {
	#	"servers" : [
	#		{ "host" : "127.0.0.1", "port" : 11211 }
	#	]
	#},

	"pid" : "/tmp/pushpoold.pid",

	# overrides local hostname detection
	"forcehost" : "localhost.localdomain",

	"log.requests" : "/tmp/request.log",
	"log.shares" : "/tmp/shares.log",

	# the server assumes longpolling (w/ SIGUSR1 called for each blk)
	"longpoll.disable" : false,

	# length of time to cache username/password credentials, in seconds
	"auth.cred_cache.expire" : 75,

	# RPC settings
	# bitcoind protocol settings
	# Host where bitcoind can be found on the network
	"rpc.url" : "http://127.0.0.1:8332/",
	# Username & password to connect to bitcoind
	"rpc.user" : "XXX",
	"rpc.pass" : "XXX",

	# rewrite returned 'target' to difficulty-1?
	"rpc.target.rewrite" : true
}
|
|
|
|
|