Bitcoin Forum
Author Topic: PoolServerJ - Tech Support  (Read 27529 times)
shads (OP)
Sr. Member
****
Offline Offline

Activity: 266
Merit: 254


View Profile
September 20, 2011, 10:58:33 PM
Last edit: September 20, 2011, 11:19:11 PM by shads
 #21

Quote
I am currently 'testing' the latest 0.3.0rc1. The first thing I notice is the CPU usage. I am running in pushpoold compatibility mode. So why this CPU usage? Pushpoold rarely used any CPU but PSJ uses one full core.

The problem is most likely the default cache size being set too high.  See: source.local.1.maxCacheSize and source.local.1.cacheWaitTimeout

also: http://poolserverj.org/documentation/performance-memory-tuning/

I'm going to change the default settings in the next release to be more suitable for a small pool; I've had this default setup for some extreme load tests and I keep getting this question.  'Suitable for a small pool' is probably the sanest default, since most people will fire it up in a low-usage test environment first.  I figure anyone evaluating psj for a pool larger than a couple of hundred GH is more likely to read the documentation and make the high-performance adjustments needed.
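For reference, a small-pool starting point might look something like this in the properties file (the values below are illustrative guesses, not tested defaults, and I'm assuming cacheWaitTimeout is in milliseconds):

```properties
# Illustrative small-pool settings - tune to your hash rate.
# Cap how much work is cached per source so the fetch loop mostly idles.
source.local.1.maxCacheSize=200
# How long a getwork request waits for cached work to arrive (assumed ms).
source.local.1.cacheWaitTimeout=3000
```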

Quote
The second thing i notice is that PSJ inserts shares quite a bit slower than pushpoold. And what it affects is the speed counted on my frontend. I am currently getting 40mh/s less detected in the frontend thereby people WILL lose coins.

Shares are written to the database more slowly by design.

shares.maxEntrysToQueueBeforeCommit=5000

shares.maxEntryAgeBeforeCommit=10

you can effectively disable this delayed writing by setting these to 0.
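In other words, to force immediate writes (giving up the batching benefit), both of the properties above can be zeroed:

```properties
# Flush each share to the DB as it arrives - no queueing, no age delay.
shares.maxEntrysToQueueBeforeCommit=0
shares.maxEntryAgeBeforeCommit=0
```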

You are not getting any fewer hashes on your pool.  If anything you should be getting slightly more.  I'd lay money that the problem is with reporting.  The work is still being dished out, hashed, returned and, if valid, submitted to the bitcoind, so nothing is going missing.  The timestamps are added when the work is submitted by the worker, not when it's written to the database.  Is your database using timestamps with a CURRENT_TIMESTAMP default value, by any chance?
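If you want to check, something like this against your shares table will show whether MySQL is stamping rows at insert time (table name is an assumption; pushpool-style schemas usually call it `shares`):

```sql
-- A column defined with DEFAULT CURRENT_TIMESTAMP (or ON UPDATE
-- CURRENT_TIMESTAMP) gets its value when the row is written to the DB,
-- which lags the actual share submission under delayed flushing.
SHOW CREATE TABLE shares;
```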


PoolServerJ Home Page - High performance java mining pool engine

Quote from: Matthew N. Wright
Stop wasting the internet.
wtfman
Member
**
Offline Offline

Activity: 118
Merit: 10

BTCServ Operator


View Profile WWW
September 21, 2011, 09:23:52 PM
 #22

Hey shadders,

I have the following problem:

Some users experience a couple of rejected shares after each LP. This is what I can see in debug mode. Do you have an explanation or maybe even a solution for me here?

Thanks for your help

Quote
LP continuation reached LP servlet but is not in 'initial' state: AsyncContinuation@89e2f1@REDISPATCHED,resumed
LP continuation reached LP servlet but is not in 'initial' state: AsyncContinuation@3228a1@REDISPATCHED,resumed

# BTCServ - EU based Mining Pool
# 0% PPS - 0.0000399757 - Hopping Proof
# Official Thread
shads (OP)
Sr. Member
****
Offline Offline

Activity: 266
Merit: 254


View Profile
September 22, 2011, 01:30:15 AM
 #23

Hey shadders,

I have the following problem:

Some users experience a couple of rejected shares after each LP. This is what I can see in debug mode. Do you have an explanation or maybe even a solution for me here?

Thanks for your help

Quote
LP continuation reached LP servlet but is not in 'initial' state: AsyncContinuation@89e2f1@REDISPATCHED,resumed
LP continuation reached LP servlet but is not in 'initial' state: AsyncContinuation@3228a1@REDISPATCHED,resumed

I could be wrong, but I think those log messages are a separate and possibly unrelated thing.  They shouldn't actually be a problem: what I think is happening is that the request was previously suspended and resumed by the QoS filter before it reached the LP servlet.  I didn't take that into account and expected any LP request to arrive in an initial state.  I'll log it as a bug.   A workaround for now would be to disable the QoS filter.  Actually, I'd be interested to see if that makes a difference.

As for the rejected shares, the ideal sequence of events is something like:
1/ worker gets work
2/ worker submits share
3/ worker gets work
4/ psj detects new block
5/ psj collects fresh work from bitcoind
6/ psj sends LP response
7/ worker receives LP response
8/ worker submits share

The time between steps 4 and 7 should be minimal, but if a worker happens to submit a share in that window it will be rejected.  This should be no more than 1-2 seconds on a busy server, provided the bitcoind is patched and able to feed new work to the poolserver fast enough.

If it's not some other fault then something is probably causing that gap to be longer than it should be... watch the log during a block change with debug=true.  You should see when the new block is detected and it should also tell you when all LP responses have been dispatched (and how long it took).  Anything longer than 1000ms is not ideal.  Large pools are able to push several thousand in under a second with psj.  A likely candidate for slowing this down is an unpatched bitcoind.  Please see https://bitcointalk.org/index.php?topic=22585.msg384157#msg384157 if you don't have the multithreaded rpc patch.

wtfman
Member
**
Offline Offline

Activity: 118
Merit: 10

BTCServ Operator


View Profile WWW
September 22, 2011, 10:15:30 AM
 #24

I actually have that patch running. It takes > 5 secs to dispatch the LP responses, though, on a ~20 GHash/s load with 70 workers. I never saw the incoming rate exceed 500/s, even after a restart with a big max cache.

Is it likely, then, that it is caused by overall server performance?

shads (OP)
Sr. Member
****
Offline Offline

Activity: 266
Merit: 254


View Profile
September 22, 2011, 10:37:19 AM
 #25

Is it likely, then, that it is caused by overall server performance?

I doubt it, unless you're running on very limited hardware.  70 LPs should be taking 500ms *max* unless you're running on a pocket calculator with dialup.

perhaps you should try collecting some logs and send to me as described in this post: https://bitcointalk.org/index.php?topic=33142.msg538639#msg538639

shads (OP)
Sr. Member
****
Offline Offline

Activity: 266
Merit: 254


View Profile
September 22, 2011, 11:50:29 AM
 #26

OK, I looked at your logs and I can only see one possibility atm if you're running the 4diff patch...

source.local.1.blockmonitor.maxPollInterval=20

A 20ms poll interval is a constant 50 requests/sec, which is loading your bitcoind as well as the pool... This should be fairly trivial for reasonable hardware, though. 

What is the spec of your server and what else is running on it?

shads (OP)
Sr. Member
****
Offline Offline

Activity: 266
Merit: 254


View Profile
September 22, 2011, 10:40:52 PM
 #27

Is it likely, then, that it is caused by overall server performance?

OK, looking at your top output it looks like your system is definitely paging.  You should probably restrict Java's max heap size, as its default is quite greedy.  Take a look at: http://poolserverj.org/documentation/performance-memory-tuning/

Particularly the very last section: "Limit the JVM Heap Size"
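For example, a startup line along these lines (the jar name and the 256m cap are placeholders, not from the psj docs; pick a limit that fits your pool and leaves room for the OS and bitcoind):

```shell
# Cap the heap so the JVM can't keep growing until the box starts paging.
# -Xms presizes the heap, -Xmx is the hard ceiling.
java -Xms64m -Xmx256m -jar poolserverj.jar
```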


DavinciJ15
Hero Member
*****
Offline Offline

Activity: 780
Merit: 510


Bitcoin - helping to end bankster enslavement.


View Profile WWW
September 28, 2011, 02:29:07 PM
Last edit: September 29, 2011, 07:19:09 PM by DavinciJ15
 #28

Quick question?

Is there a way to include the USER_ID found in the worker table when inserting into the shares table?
Nebuluz
Newbie
*
Offline Offline

Activity: 41
Merit: 0



View Profile
September 28, 2011, 03:45:26 PM
 #29

First, awesome work there shad... PoolServerJ performance over pushpool is great.

Just got two short questions and a request Cheesy

1)
Atm I'm running poolserverj in screen and it works great, but if anyone has other suggestions please post Cheesy

2)
When i check the screen it looks like this:

Doing database flush for Shares: 10
Flushed 10 shares to DB in 9.0ms (1111/sec)
Trimmed 14 entries from workmap and 29 entries from duplicate check sets in 0ms
Dropping submit throttle to 2ms
Submit Throttling on: false
Doing database flush for Shares: 7
Flushed 7 shares to DB in 4.0ms (1749/sec)
Submit Throttling on: false
Doing database flush for Shares: 15
Flushed 15 shares to DB in 8.0ms (1874/sec)
Trimmed 16 entries from workmap and 1003 entries from duplicate check sets in 0ms
Submit Throttling on: false
Doing database flush for Shares: 14
Flushed 14 shares to DB in 4.0ms (3499/sec)

I just wonder what does Submit Throttling on: false mean?

Request:
I also have a request about ?method=getsourcestats: it would be nice to have a short description of what each stat really means. If you could make a short list it would be nice.

Sorry for all the dumb questions Smiley

Btw, I run bitcoind 0.4.0; I switched from bitcoind 0.3.24 (patched with joelkatz's diff4) and it seems to run great with the new bitcoind.

/Best regards
DavinciJ15
Hero Member
*****
Offline Offline

Activity: 780
Merit: 510


Bitcoin - helping to end bankster enslavement.


View Profile WWW
September 29, 2011, 07:18:40 PM
 #30


2)
When i check the screen it looks like this:

Dude, I have it running with the "screen" command; how do you check the screen?

shads (OP)
Sr. Member
****
Offline Offline

Activity: 266
Merit: 254


View Profile
September 30, 2011, 09:46:34 AM
 #31

Quick question?

Is there a way to include the USER_ID found in the worker table when inserting into the shares table?

It's easy to do but I'd have to make a minor code change and add it as a config option so it doesn't break backward compatibility.

If you log a feature request on the source code repo site I'll add it to the todo list.

shads (OP)
Sr. Member
****
Offline Offline

Activity: 266
Merit: 254


View Profile
September 30, 2011, 10:22:13 AM
 #32

First, awesome work there shad... PoolServerJ performance over pushpool is great.

Just got two short questions and a request Cheesy

1)
Atm I'm running poolserverj in screen and it works great, but if anyone has other suggestions please post Cheesy

2)
When i check the screen it looks like this:

Doing database flush for Shares: 10
Flushed 10 shares to DB in 9.0ms (1111/sec)
Trimmed 14 entries from workmap and 29 entries from duplicate check sets in 0ms
Dropping submit throttle to 2ms
Submit Throttling on: false
Doing database flush for Shares: 7
Flushed 7 shares to DB in 4.0ms (1749/sec)
Submit Throttling on: false
Doing database flush for Shares: 15
Flushed 15 shares to DB in 8.0ms (1874/sec)
Trimmed 16 entries from workmap and 1003 entries from duplicate check sets in 0ms
Submit Throttling on: false
Doing database flush for Shares: 14
Flushed 14 shares to DB in 4.0ms (3499/sec)

I just wonder what does Submit Throttling on: false mean?

Request:
I also have a request about ?method=getsourcestats: it would be nice to have a short description of what each stat really means. If you could make a short list it would be nice.

Sorry for all the dumb questions Smiley

Btw, I run bitcoind 0.4.0; I switched from bitcoind 0.3.24 (patched with joelkatz's diff4) and it seems to run great with the new bitcoind.

/Best regards

1/ Daemonizing poolserverj has been on my todo list since day one, but this is actually the first time anyone has asked about it.  I'm not much of a bash expert, but I think if you do something like this:

normalStartCommand > mylogfile.txt &

that should redirect the output to a file and run it as a background process so you can keep using your shell. 
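One hedge on the above: a job backgrounded with a bare `&` can still be killed when you log out (it gets SIGHUP), so `nohup` is the usual safeguard, and `2>&1` captures stderr as well (`normalStartCommand` here stands in for however you normally launch psj):

```shell
# Ignore hangup on logout, capture stdout and stderr, run in background.
nohup normalStartCommand > mylogfile.txt 2>&1 &
```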

2/ Submit throttling is a fairly useless feature.  It kicks in if a db flush is triggered and the previous one hasn't finished yet.  It was useful during stress testing when I had a fixed number of fake miners, so it would effectively throttle the submit rate.  In the real world though it won't really have any effect aside from giving the miner a short delay before they receive a response to their share submits...  I will get rid of it one of these days.

3/ getsourcestats is also something I built as an aid to testing, so I didn't really give much thought to making it readable.  Most of the stats are useful, some of them do nothing and some give completely rubbish results.  I also have a TODO to turn that into a proper API, probably with JSON output, so when that's done and I've stripped out the useless stuff and put some other more useful things in I'll write up some doco...

But for now here's a brief rundown (comments in italics):

Memory used: 4.4375 MB - Freed by GC: 9.900390625MB

This only refers to heap memory.  Depending on how many connection threads you've got assigned, you can add about 10-30MB to this to get real memory usage.  The total of 'used' + 'freed' is approximately the currently allocated heap size.

State [bitcoind-patch] these stats are per daemon; if you have multiple daemons you'll see this section repeated
   
    Current Cache Max: 1000 - min/max: 1/1000
    Current Cache Size: 998
This is the work cache.  min/max are meaningless atm.  There used to be a dynamic cache sizing algorithm but it was a bit crap so I took it out.  The 1000 represents the max number before it will stop calling for more work.  The 998 is the current number in the cache.  This can occasionally creep over your max by up to Concurrent DL Requests.

    Concurrent DL Requests: 10 - min/max: 1/20

This is a gotcha: currently this gets set to 1/2 the value you've set.  Another hangover from the old dynamic cache sizing algo, which also regulated request rate.

    DL Request Interval (ms): 0 - min/max: 0/100
    Current Ask Rate (works/req): 10

these 2 are rubbish

    Consecutive Connect Fails: 0
    Consecutive Http Fails: 0
    Consecutive Http Auth Fails: 0
    Consecutive Http Busy Fails: 0

These do actually work but they'll always be zero unless yr daemon crashes or you have network problems between psj and bitcoind.

    Cache Excess: 1,002.5
    Cache Excess Trend: 0.06

rubbish

    Cache Retreival Age: 13885

Avg age of work when retrieved from the cache, measured in millis from when it was retrieved from the daemon.

    Incoming Rate: 36.69/sec

how many works/sec you're getting from the daemon.  This doesn't represent the maximum but if you look straight after a block change for a few seconds it will probably be close.

    Incoming Fullfillment Rate: 100%

Number requested / number received.  This will drop under 100% if you have http errors or the daemon stops responding to requests for any reason.

    Outgoing Requested Rate: 0.78/sec
    Outgoing Delivered Rate: 0.78/sec
    Outgoing Fullfillment Rate: 100%

Same thing, basically, except if fulfillment drops below 100% it most likely means psj can't get work from the daemon fast enough.

Longpoll Connections: 1 / 1000

Ignore the 1000.  There used to be a limit but there isn't anymore.  Connections will include connections that have been silently dropped by the client, but not connections that have expired.

WorkSource Stats:
      Stats for source: [bitcoind-patch]
        Current Block: 147243
        Cache:
          Work Received: 42965
          Work Delivered: 978

fairly obvious, duplicate below

          Upstream Request Fail Rate: 0%
          Upstream Request Fail Rate Tiny: 0%
rubbish

          Immediately Serviced Rate: 91.2%

This one is worth a comment.  'Immediately serviced' means a miner requested work, psj checked the cache and found one already available.  If the cache is empty it will wait a few seconds for one to arrive and this counts as 'not immediate'.  In reality it might only be a millisecond or two.

          MultiGet Delivery Rate: ?%
          Delayed Serviced Rate: 100%
          Not Serviced Rate: 0%
          Expired Work Rate: 100%
rubbish

          Duplicate Work Rate: 0%

Usually 0, but if you have an unpatched daemon there's a bug that causes duplicates... If you ever see this higher than 0.01% keep an eye on it; it's a definite indicator something is wrong.

          Cache Growth Rate: 17.228%
          Cache Growth Rate Short: 29.76%

rubbish

        Work Submissions:
          Work Supplied: 978
          Work Submitted: 0
          Work Submitted Invalid: 0
          Work Submitted Unknown: 0
          Work Submitted Valid: 0
          Work Submitted Valid Real: 0

not rubbish but doesn't work.

        HTTP:
          Requests Issued: 42972
          Fail Rate: 0%
          Success trip time: 24.56 ms
          Header trip time: 24.55 ms
          Fail trip time: ? ms
          Expire trip time: ? ms

These are about the HTTP connection between psj and the daemon.  Nothing to do with the miner side of the server.  The trip times are useful for tuning max concurrent connections... Once the latency starts to go up dramatically you've probably got too many.

        Cache Age:
          Entries: 998 Oldest: 10337 Newest: 4032 Avg: 5418 Reject Rate: 0%

Stats on the current contents of the cache.  Oldest, Newest, and Avg are ages in millis.  Reject Rate is basically the same thing as Duplicate Rate, except with a different moving average period.

eleuthria
Legendary
*
Offline Offline

Activity: 1750
Merit: 1007



View Profile
October 06, 2011, 08:34:12 PM
 #33

If it's not some other fault then something is probably causing that gap to be longer than it should be... watch the log during a block change with debug=true.  You should see when the new block is detected and it should also tell you when all LP responses have been dispatched (and how long it took).  Anything longer than 1000ms is not ideal.  Large pools are able to push several thousand in under a second with psj.

Just to add a reference point: BTC Guild pushes out between 6,000 and 8,000 LPs depending on the time of day.  Our average time is between 600ms and 1000ms.  It can be even faster if both of our bitcoinds detect the new block simultaneously (our record was 490ms for ~7,500 LPs).  This is with two bitcoind clients, running on dedicated servers.  One is local on the same server as PSJ; one is running on another server in the same datacenter, which runs the database.

This may improve even more soon, ArtForz has been looking into an extra optimization in bitcoind's getwork code and merging it with JoelKatz's 4diff patch.

RIP BTC Guild, April 2011 - June 2015
Remember remember the 5th of November
Legendary
*
Offline Offline

Activity: 1862
Merit: 1011

Reverse engineer from time to time


View Profile
October 09, 2011, 02:28:09 AM
 #34

Can you please add the X-Roll-Ntime header?

BTC:1AiCRMxgf1ptVQwx6hDuKMu4f7F27QmJC2
wtfman
Member
**
Offline Offline

Activity: 118
Merit: 10

BTCServ Operator


View Profile WWW
October 23, 2011, 11:36:04 AM
 #35

Running mm-0.5 I stumble upon this error every now and then:

org.eclipse.jetty.io.RuntimeIOException: org.eclipse.jetty.io.EofException
        at org.eclipse.jetty.io.UncheckedPrintWriter.setError(UncheckedPrintWriter.java:107)
        at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:280)
        at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:295)
        at java.io.PrintWriter.append(PrintWriter.java:977)
        at com.shadworld.poolserver.LongpollHandler.completeLongpoll(LongpollHandler.java:177)
        at com.shadworld.poolserver.LongpollHandler$LongpollTimeoutTask.call(LongpollHandler.java:340)
        at com.shadworld.poolserver.LongpollHandler$LongpollTimeoutTask.call(LongpollHandler.java:1)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:636)
Caused by: org.eclipse.jetty.io.EofException
        at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:911)
        at org.eclipse.jetty.http.AbstractGenerator.flush(AbstractGenerator.java:431)
        at org.eclipse.jetty.server.HttpOutput.flush(HttpOutput.java:89)
        at org.eclipse.jetty.server.HttpConnection$Output.flush(HttpConnection.java:1139)
        at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:168)
        at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:96)
        at java.io.ByteArrayOutputStream.writeTo(ByteArrayOutputStream.java:126)
        at org.eclipse.jetty.server.HttpWriter.write(HttpWriter.java:283)
        at org.eclipse.jetty.server.HttpWriter.write(HttpWriter.java:107)
        at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:271)
        ... 12 more
Caused by: java.io.IOException: Broken pipe
        at sun.nio.ch.FileDispatcher.writev0(Native Method)
        at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
        at sun.nio.ch.IOUtil.write(IOUtil.java:182)
        at sun.nio.ch.SocketChannelImpl.write0(SocketChannelImpl.java:383)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:406)
        at java.nio.channels.SocketChannel.write(SocketChannel.java:384)
        at org.eclipse.jetty.io.nio.ChannelEndPoint.gatheringFlush(ChannelEndPoint.java:347)
        at org.eclipse.jetty.io.nio.ChannelEndPoint.flush(ChannelEndPoint.java:285)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint.flush(SelectChannelEndPoint.java:259)
        at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:843)
        ... 21 more

Jine
Sr. Member
****
Offline Offline

Activity: 403
Merit: 250


View Profile
October 23, 2011, 01:56:34 PM
 #36

I've started testing poolserverj at bitcoins.lc for handling larger loads better (having issues with LPs against a large number of connections).
But before rolling out anything public, I really need to get rid of the DATETIME fields in MySQL. Is that possible?

I'd like to have everything in GMT unix timestamps. One "hackish" way would be to do the conversion in the statement, but I'd actually like to make poolserverj insert a unix timestamp instead of having to do a TO_UNIXTIME(?) in the statement.

Even better would be to drop MySQL entirely and finally use a better-scaling database (MongoDB or another NoSQL DB) and let Mongo take care of timestamps on its own; also let MongoDB take care of replication, load balancing and sharding.

Any planned NoSQL-support?

Previous founder of Bit LC Inc. | I've always loved the idea of bitcoin.
wtfman_mobile
Newbie
*
Offline Offline

Activity: 4
Merit: 0


View Profile
October 23, 2011, 02:37:32 PM
 #37

hey again.

There was even another issue with running mm-0.5. It worked great until the first Long Poll, then CPU usage skyrocketed and didn't drop any more. After a couple of minutes the work queue was emptied out, so miners only received the "No Work available" message and the pool hash rate dropped to 0.0 Hash/s.

I have tried it before with only 2 miners on testnet and there was no problem. The problem occurred with approximately 15 GHash/s and 60 workers.
urstroyer
Full Member
***
Offline Offline

Activity: 142
Merit: 100


View Profile
October 23, 2011, 03:17:58 PM
 #38

hey again.

There was even another issue with running mm-0.5. It worked great until the first Long Poll, then CPU usage skyrocketed and didn't drop any more. After a couple of minutes the work queue was emptied out, so miners only received the "No Work available" message and the pool hash rate dropped to 0.0 Hash/s.

I have tried it before with only 2 miners on testnet and there was no problem. The problem occurred with approximately 15 GHash/s and 60 workers.

I have exactly the same issue: poolserverj mm is running smoothly at low CPU usage and finding both NMC and BTC blocks until a network block is found and an LP happens.
Then bitcoind is under heavy CPU usage until I restart poolserverj. We currently use a 4diff-patched version of vinced's bitcoind.

shads (OP)
Sr. Member
****
Offline Offline

Activity: 266
Merit: 254


View Profile
October 24, 2011, 12:05:52 AM
 #39

Running mm-0.5 I stumble upon this error every now and then:

org.eclipse.jetty.io.RuntimeIOException: org.eclipse.jetty.io.EofException
        at org.eclipse.jetty.io.UncheckedPrintWriter.setError(UncheckedPrintWriter.java:107)
        at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:280)
        at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:295)
        at java.io.PrintWriter.append(PrintWriter.java:977)
        at com.shadworld.poolserver.LongpollHandler.completeLongpoll(LongpollHandler.java:177)

This is a normal/expected exception.  It happens when psj tries to send a longpoll response but the client has silently dropped the connection.  In this case psj will recycle the work for the next LP connection in the queue.

The reason you're seeing it now and may not have before is that while psj-mm is in alpha I'm dumping a lot more events to the log so I can see better what's going on inside.  This particular exception happens all the time with the pre-mm version but isn't logged.

shads (OP)
Sr. Member
****
Offline Offline

Activity: 266
Merit: 254


View Profile
October 24, 2011, 12:21:57 AM
 #40

I've started testing poolserverj at bitcoins.lc for handling larger loads better (having issues with LPs against a large number of connections).
But before rolling out anything public, I really need to get rid of the DATETIME fields in MySQL. Is that possible?

I'd like to have everything in GMT unix timestamps. One "hackish" way would be to do the conversion in the statement, but I'd actually like to make poolserverj insert a unix timestamp instead of having to do a TO_UNIXTIME(?) in the statement.

Even better would be to drop MySQL entirely and finally use a better-scaling database (MongoDB or another NoSQL DB) and let Mongo take care of timestamps on its own; also let MongoDB take care of replication, load balancing and sharding.

Any planned NoSQL-support?

Well, before the advent of merged mining PSJ had exceptional longpoll performance, but as you can see from the last few posts in this thread there are a few issues to be ironed out...

There is one good reason why you'd want to have timestamps set on the psj side rather than DB side.  Because psj caches shares and bulk writes them there can be a delay between when they came in and when the DB sees them.  PSJ timestamps the share as soon as it's received and uses this timestamp when writing to the db.  So if accurate share times are important to you that's something to consider.

I'm not really familiar with Mongo or NoSQL.  If they have JDBC drivers then adding support would be fairly trivial.  However, it won't happen until mm is stabilised.  Dropping MySQL support isn't likely since it's the most commonly used.

Having poolserverj insert a timestamp directly should also be fairly trivial.  The internal representation is the same as a GMT unix timestamp but in millis instead of seconds.  If you're comfortable building from source it would only need a couple of lines modified in DefaultPreparedStatementSharesDBFlushEngine.  If you really want crazy performance and your share writes don't update existing rows, have a look at the bulkloader engines in the source.

I presume what you're actually after is just an integer column?  If so, have you tried just changing the column type to see if it works?  The code that sets it is this:
stmt.setTimestamp(7, new Timestamp(entry.createTime));

And I have a feeling if the target column type is a BIGINT it will probably just convert it.

If not the change you'd need to make would be something like:
stmt.setLong(7, entry.createTime / 1000);
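A tiny sketch of that conversion, just to show the arithmetic (class and method names here are mine, not from the psj source; entry.createTime is assumed to be System.currentTimeMillis()-style millis):

```java
public class TimestampSketch {
    // What stmt.setLong(7, entry.createTime / 1000) would store:
    // whole unix seconds, with the millisecond remainder truncated
    // by integer division.
    static long toUnixSeconds(long createTimeMillis) {
        return createTimeMillis / 1000;
    }

    public static void main(String[] args) {
        // 1317394813456 ms since the epoch -> 1317394813 s
        System.out.println(toUnixSeconds(1317394813456L));
    }
}
```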
