201  Other / CPU/GPU Bitcoin mining hardware / Re: Ufasoft Miner Thread - SSE2/OpenCL/AMD CAL/CUDA for Windows, v0.20 (2011-August) on: October 08, 2011, 03:26:37 PM
Where is it specified that hexadecimal data should be in lower case?
Any parser should understand any- and mixed-case letters.

How about precedent?  No miner software or pool uses uppercase except yours.  It may seem trivial to you, but every operation counts on the poolserver side, and having to convert a long string to lowercase every time is a waste of cycles, particularly when in 99.9% of cases the string is already completely lowercase.  If you have a good reason for converting to uppercase then fine, but if you don't, please do what every other miner does so I can stop wasting CPU time for absolutely no reason.
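For what it's worth, a parser can tolerate mixed case without paying for a full lowercase pass on every submit.  A minimal sketch of a one-pass, case-insensitive hex decode (illustration only, not PoolServerJ's actual code):

// Sketch only: decode a hex string accepting either case in a single pass,
// instead of calling toLowerCase() on the whole string first.
public final class HexUtil {
    public static byte[] decode(String hex) {
        if (hex.length() % 2 != 0) throw new IllegalArgumentException("odd length");
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) ((nibble(hex.charAt(2 * i)) << 4) | nibble(hex.charAt(2 * i + 1)));
        }
        return out;
    }

    private static int nibble(char c) {
        if (c >= '0' && c <= '9') return c - '0';
        if (c >= 'a' && c <= 'f') return c - 'a' + 10;
        if (c >= 'A' && c <= 'F') return c - 'A' + 10; // uppercase handled here, no extra pass
        throw new IllegalArgumentException("not a hex digit: " + c);
    }
}

That said, the point stands: there's no reason for a miner to send uppercase in the first place.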
202  Bitcoin / Mining software (miners) / Re: [Bounty] Proxy software with requirements on: October 08, 2011, 07:17:23 AM
Ok, so here is what I am looking for.
A proxy similar to bithop without the hopping support.
The ability to point hashing power to the proxy and then configure the proxy to forward a specific amount of hashing power to a specific pool.  (e.g. if I have 20GH/s in house and point it at the proxy, and I have the proxy set to divide that according to 3GH/s to pool A, 7GH/s to pool B, and 4 to pool C with the remainder to pool D.)

poolserverj will do this with some minor mods to use LP and a fairly specific config setup.  It can currently split amongst pools using a weighting factor.  In your example above you would configure 4 sources with weightings 3, 7, 4, 6.  The final weighting per pool is its portion of the total weightings, e.g. for pool A: 3 / (3+7+4+6) * 100% = 15%.

If one or more pools are down it will split amongst the remaining pools whilst still maintaining their respective weightings.  Weightings are calculated based on work done in that block.  If a failed pool comes back online it will be temporarily favoured until the weighting distribution has been restored.  As a fallback, if anything has gone wrong it will revert to any pool that is able to provide it with work.
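To make the weighting maths concrete, here's a tiny sketch of the split calculation using the example figures above (the pool names and percentages are just the worked example, not real config):

// Sketch of the weighted-split calculation described above.
// Weightings 3, 7, 4, 6 -> each pool's share of total hashpower.
import java.util.LinkedHashMap;
import java.util.Map;

public class WeightingExample {
    public static void main(String[] args) {
        Map<String, Double> weightings = new LinkedHashMap<String, Double>();
        weightings.put("poolA", 3.0);
        weightings.put("poolB", 7.0);
        weightings.put("poolC", 4.0);
        weightings.put("poolD", 6.0);

        double total = 0;
        for (double w : weightings.values()) total += w;

        for (Map.Entry<String, Double> e : weightings.entrySet()) {
            // Each pool's portion = its weighting / sum of all weightings
            System.out.printf("%s: %.1f%% of work%n", e.getKey(), e.getValue() / total * 100.0);
        }
        // -> poolA: 15.0%, poolB: 35.0%, poolC: 20.0%, poolD: 30.0%
    }
}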

Quote
Also, if possible, same conditions as the example above, but let's say pools A, B and D are up but C is down. In that case I would like a backup pool configured for the down pool. (E.g. pools A, B and D are still pointing work to their correct pool, but pool C is now switched to Deepbit until the original pool for C is back up.)

If you want to do this, message me with a quote or post here. Looking for an immediate start.

Failover URLs and accounts could be added; I'd estimate that would roughly double the dev time compared to the mods needed for the first part.

In terms of robustness and scalability, it's currently the backend of BTC Guild, so it shouldn't have any problems in your scenario.

If you're interested PM me a bounty offer.
203  Bitcoin / Pools / Re: Merged Mining is NOT ready and should be stopped until it is on: October 08, 2011, 12:41:01 AM
Thing is,  we have already had months to work on things, and no one has till it is almost here.
No. We don't get to start working until there is proper protocol documentation in place.

^^ this

It's far from a trivial process to construct a set of mm blocks and even less so to validate them.  Even one small ambiguity in the process means two different implementations quite likely won't accept each other's blocks.

204  Bitcoin / Pools / Re: Merged Mining is NOT ready and should be stopped until it is on: October 07, 2011, 08:54:14 AM
The key problem though is that you and other more cautious pool ops will come under a lot of pressure to adopt.  There will be pools that do so straight away, and it's a compelling argument for a miner: mine BTC only, or switch pools and get bonus NMC at no extra cost.

The usual scenario would be: if a new tool is crap, no one will use it until it's not crap, so the producer of the tool is strongly incentivized to fix it fast.  In this case there's a strong incentive to use it even if it is crap, so the producer of the tool has no incentive to fix it until they damn well feel like it.



205  Bitcoin / Pools / Merged Mining is NOT ready and should be stopped until it is on: October 07, 2011, 06:42:09 AM
Merged mining is only a few blocks away now and from what I can see the bitcoin side of the equation is grossly underprepared.  I hope I'm wrong but I don't think this is going to go well.  The problem as I see it is that the merged mining community has prepared itself but not given the bitcoin community the tools it needs to adapt.  Let me explain.

Documentation is woefully lacking, basically limited to a wiki page and some scattered discussion threads on the namecoin forum.  Miners don't need to do anything, but bitcoin pools that wish to adopt merged mining need to implement the spec one way or another.  Currently the only way to do this is to use vinced's patched bitcoind and namecoind along with the python merged-mining-proxy.  This proxy has to sit between the poolserver (e.g. pushpool, poolserverj or custom) and the bitcoind.  This represents both a potential bottleneck and an extra point of failure, and to my knowledge it has never been tested on a load greater than about 50GH.  The alternative is for pools to implement the merged mining spec themselves.  This is easier said than done.  The spec is NOT documented anywhere.  The limit of the documentation that I can find is this:

Parent blockchain (AKA bitcoind)

In order to verify that a client has done hashing on the namecoin blockchain it is necessary to add a proof to the namecoin blockchain that the work has been done on the bitcoin blockchain. In order to make it possible to have this proof created by the bitcoin blockchain the bitcoind needs a patch. This patch makes it possible for the bitcoind to cryptographically sign that there was work submitted to the namecoin blockchain as well.

Status: Patch ready for testing. Need to ask if it gets implemented into stock bitcoind.
Auxiliary Blockchain (AKA namecoind)

The namecoind now needs to accept the cryptographically signed proof from the bitcoind.

Status: Patch ready for testing. Proposed for block 24000 (but still not agreed upon).
merged-mine-proxy

This is the daemon all miners connect to. The daemon itself knows how to connect to the bitcoind and the namecoind, checking if shares are valid for one of its downstream blockchains. Furthermore it requests getworks from the parent (which must be patched, see above) and delivers them to the miners.

Status: Ready for testing. Implemented using python.


There is also an article on the bitcoin wiki by Mike Hearn that predates the MM implementation by many months, but at best it explains the general principles using quite a different example and none of the specifics.

For a protocol that involves talking to multiple chains, building merkle trees of block hashes in a barely defined order with extra fields that are defined but not actually used in the implementation, extracting merkle branches, parsing undocumented constructs of headers, parts of coinbase transactions and merkle branches, and somehow using all of this to validate aux blocks where the parent block may not even exist... this documentation is woefully inadequate, bordering on pathetic.
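To give a sense of what's involved, here is a very rough sketch of the general shape of the parent-side construction as I've pieced it together from reading the source.  The field names, ordering and merkle rules below are my assumptions, which is precisely the problem:

// VERY rough sketch, pieced together from source reading; not a spec.
// Leaf ordering, padding rules and the extra size/nonce fields are exactly
// the details the documentation fails to pin down.
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class AuxWorkSketch {

    static byte[] doubleSha256(byte[] data) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        return sha.digest(sha.digest(data));
    }

    // Build a merkle root over the aux chain block hashes (assumes 32-byte hashes).
    static byte[] auxMerkleRoot(List<byte[]> auxBlockHashes) throws Exception {
        List<byte[]> level = new ArrayList<byte[]>(auxBlockHashes);
        while (level.size() > 1) {
            if (level.size() % 2 != 0) level.add(level.get(level.size() - 1)); // duplicate last
            List<byte[]> next = new ArrayList<byte[]>();
            for (int i = 0; i < level.size(); i += 2) {
                byte[] pair = new byte[64];
                System.arraycopy(level.get(i), 0, pair, 0, 32);
                System.arraycopy(level.get(i + 1), 0, pair, 32, 32);
                next.add(doubleSha256(pair));
            }
            level = next;
        }
        return level.get(0);
    }

    // The parent (bitcoin) coinbase then carries this root (plus size/nonce fields
    // in the reference implementation).  The aux chain later validates a block from:
    // parent header + parent coinbase + the merkle branch linking the coinbase into
    // the parent's merkle root + the branch linking the aux hash into the root
    // embedded in the coinbase.  Get any one of those steps subtly different from
    // the reference code and your blocks get rejected.
}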

If someone wants to make an alternative implementation they have no choice but to pore over the source code in two different languages.  I've done this; it took me days to get a grip on it, and if it weren't for ArtForz's considerable help it would have taken a lot longer.  Even having done that, there are a number of ambiguous issues that leave implementation decisions to be guessed at.  Luke-jr has submitted a partial solution but has also pointed out that it's partial because there is no documentation for the spec.  I have attempted to seek clarifications on the spec myself, both on the namecoin forum and their IRC channels, only to be told that only one person really knows how it works and they are hardly ever online.  The few answers I was able to get simply confirmed that the source code can be relied upon as a guide only; there are details of the spec that are ambiguous in the reference implementation and cannot be definitively determined using it as the only reference.

So the bottom line is that every pool that wants to adopt merged mining currently has no choice but to use the untested merged-mining-proxy black box.  The MM spec is not documented, and frankly I think it's grossly irresponsible of the people pushing MM to have let it get this far without documenting it well in advance of the cutover block.  Without hours/days of poring over the code (in 2 different languages) it's essentially a black box.  The provided solution creates a number of problems, and the sensible thing to do would be for pools to simply refuse to touch it until that situation is addressed.

Unfortunately merged mining has created a situation where adopting the change is very difficult to resist.  Once one pool adopts it, it puts enormous pressure on the others to follow, as miners will flock to the pool that offers free extra Xcoins for the same mining effort.  The end result is a mass of pools struggling to cope with new xcoind patches and an unproven python point of potential failure/bottleneck, with no one available to support it except the merged mining developers, who seem to be online for about 10 mins/week.

At best the provided tools (merged-mining-proxy) should be considered a reference implementation of a spec that doesn't exist. I predict there will be grief for pools and miners over the next few weeks/months.  The best case outcome I can think of is that the extra NMCs mined tank the namecoin market and miners lose interest.

If there are problems and one or two pools roll back, it still doesn't create any incentive for the MM crowd to step up and do the job properly, because their motive is to increase hash power on the NMC network and they will have achieved that whatever the consequences to bitcoin mining might be...

IMHO merged mining should be stopped until this situation is properly addressed.
206  Bitcoin / Pools / Re: [800 GH/s 0% fee SMPPS] ArsBitcoin mining pool! Come join us! on: October 07, 2011, 12:28:42 AM
The current status of nrolltime and noncerange is that it's partly implemented and has been that way for quite a while.  I never really made it a priority for a couple of reasons: 1/ it wasn't really an agreed standard, and at the time I first looked at it few miners supported it; 2/ it involved some non-trivial overhead that didn't have a payoff until the majority of miners supported it; 3/ it was below the threshold of 'needed', i.e. miners not being able to get work fast enough shouldn't be happening.

That last statement probably sounds a bit bold in the face of the above evidence however I believe in this case nrolltime would simply be serving to mask another problem.  BT has already said he's got maxed CPU load which indicates another problem with PSJ and would explain the slow getwork responses.  Given what's been proven elsewhere with minimal CPU load I'd say it's some obscure config issue and I'll work with BT to find out what that is ASAP.

Having said that, I'm not writing off rollntime and noncerange; it does introduce efficiencies, and now that it's more widely supported the upside probably exceeds the downside.  But I think the time when its absence is going to be a problem for miners is quite a way off, so I'm not in a hurry while there are more pressing issues to deal with.

If my take on it is wrong and ppl really do want it treated as a priority then by all means make some noise about it.
207  Bitcoin / Development & Technical Discussion / Re: Bitcoin messages implementation on: October 06, 2011, 06:18:44 AM
Also take a look at Network Address, it's the other common structure that varies by version and is embedded in the version message.

http://code.google.com/p/bitcoinj/source/browse/trunk/src/com/google/bitcoin/core/PeerAddress.java
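Roughly, the wire layout is an 8-byte services field, a 16-byte IPv6-mapped address and a big-endian port, with a timestamp that only appears outside the version message (and only from protocol version 31402 on).  A minimal sketch of reading it, based on my reading of the protocol rather than a drop-in for bitcoinj's PeerAddress:

// Minimal sketch of parsing the protocol's network address structure.
// The optional timestamp is the part that "varies by version": it is absent
// inside the version message and only present from protocol version 31402 on.
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class NetAddressSketch {
    long time;      // uint32, only present outside version messages, protocol >= 31402
    long services;  // little-endian uint64
    InetAddress addr;
    int port;       // uint16, big-endian (network byte order)

    static NetAddressSketch parse(ByteBuffer buf, int protocolVersion, boolean inVersionMessage)
            throws Exception {
        NetAddressSketch a = new NetAddressSketch();
        buf.order(ByteOrder.LITTLE_ENDIAN);
        if (!inVersionMessage && protocolVersion >= 31402) {
            a.time = buf.getInt() & 0xFFFFFFFFL;
        }
        a.services = buf.getLong();
        byte[] ip = new byte[16];                 // IPv4 arrives as an IPv4-mapped IPv6 address
        buf.get(ip);
        a.addr = InetAddress.getByAddress(ip);
        a.port = ((buf.get() & 0xFF) << 8) | (buf.get() & 0xFF);  // big-endian, unlike the rest
        return a;
    }
}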
208  Bitcoin / Development & Technical Discussion / Re: Coinbaser branch's new JSON-RPC method on: October 06, 2011, 06:04:37 AM
I'm not going to vote until I understand it better.

I think it replicates some of the functions of getworkaux from vinced's merged-mining fork.  If so there should be some discussion of which to use before considering either for pulling into the main branch.

Given that merged-mining-proxy uses getworkaux along with a couple of other new rpc calls, that merged-mining-proxy will most likely be the fallback option for pools using pushpool to become merged-mining capable, and that I'm currently implementing merged mining into poolserverj using these calls from vinced's fork, it seems likely that it's going to gain wider adoption.

I'm totally open to using your method if it's somehow an improvement on the vinced method, but simply saying it allows you to insert data into the coinbase doesn't really explain the workflow of constructing a merged mining block.  You still need a way to get the header hash of each aux chain block (getwork and hash it yourself?), construct a merkle tree, then presumably pass the merkle root of this to setworkaux.  I can figure it out that far, but how, under your scheme, do you submit a solution to an aux chain?
209  Bitcoin / Bitcoin Technical Support / Re: PoolServerJ - Tech Support on: September 30, 2011, 10:22:13 AM
First, awesome work there shad... PoolServerJ performance over pushpool is great.

Just got two short questions and a request Cheesy

1)
 atm I'm running poolserverj in screen and it works great, but if anyone has other suggestions please post Cheesy

2)
When i check the screen it looks like this:

Doing database flush for Shares: 10
Flushed 10 shares to DB in 9.0ms (1111/sec)
Trimmed 14 entries from workmap and 29 entries from duplicate check sets in 0ms
Dropping submit throttle to 2ms
Submit Throttling on: false
Doing database flush for Shares: 7
Flushed 7 shares to DB in 4.0ms (1749/sec)
Submit Throttling on: false
Doing database flush for Shares: 15
Flushed 15 shares to DB in 8.0ms (1874/sec)
Trimmed 16 entries from workmap and 1003 entries from duplicate check sets in 0ms
Submit Throttling on: false
Doing database flush for Shares: 14
Flushed 14 shares to DB in 4.0ms (3499/sec)

I just wonder, what does "Submit Throttling on: false" mean?

Request:
I also have a request about ?method=getsourcestats: it would be nice to have a short description of what each stat really means; a short list would be great.

Sorry for all the dumb questions Smiley

Btw, I run bitcoind 0.4.0; I switched from bitcoind 0.3.24 (patched with joelkatz diff4) and it seems to run great with the new bitcoind.

/Best regards

1/ daemonizing poolserverj has been on my todo list since day one but this is actually the first time anyone has asked about it.  I'm not much of a bash expert but I think if you do something like this:

nohup normalStartCommand > mylogfile.txt 2>&1 &

that should redirect stdout and stderr to a file and run it as a background process (nohup stops it being killed when you log out) so you can keep using your shell.

2/ submit throttling is a fairly useless feature.  It kicks in if a db flush is triggered and the previous one hasn't finished yet.  It was useful during stress testing when I had a fixed number of fake miners so it would effectively throttle the submit rate.  In the real world though it won't really have any effect aside from giving the miner a short delay before they receive a response to their share submits...  I will get rid of it one of these days.
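For the curious, the behaviour amounts to something like this.  It's a sketch of the idea only, not the actual PoolServerJ code, and the 2ms value is just the figure from the log above:

// Sketch of the idea only, not the actual PoolServerJ code.
// If a DB flush fires while the previous one is still running, switch a small
// artificial delay on for share-submit responses until the backlog clears.
import java.util.concurrent.atomic.AtomicBoolean;

public class SubmitThrottleSketch {
    private final AtomicBoolean flushInProgress = new AtomicBoolean(false);
    private volatile boolean throttling = false;
    private volatile long throttleDelayMs = 2;   // "Dropping submit throttle to 2ms"

    void onFlushTriggered(Runnable flushJob) {
        if (!flushInProgress.compareAndSet(false, true)) {
            throttling = true;   // previous flush hasn't finished; shares stay queued for the next cycle
            return;
        }
        try {
            flushJob.run();
        } finally {
            flushInProgress.set(false);
            throttling = false;  // reported as "Submit Throttling on: false"
        }
    }

    void beforeSubmitResponse() throws InterruptedException {
        if (throttling) Thread.sleep(throttleDelayMs);  // short delay before the miner gets its reply
    }
}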

3/ getsourcestats is also something I built as an aid to testing, so I didn't really give much thought to making it readable.  Most of the stats are useful, some of them do nothing, and some give completely rubbish results.  I also have a TODO to turn that into a proper API, probably with JSON output, so when that's done and I've stripped out the useless stuff and put some other more useful things in, I'll write up some doco...

But for now here's a brief rundown (comments in italics):

Memory used: 4.4375 MB - Freed by GC: 9.900390625MB

This only refers to heap memory.  Depending on how many connection threads you've got assigned you can add about 10-30mb to this to get real memory usage.  The total of 'used' + 'freed' is approx the currently allocated heap size.

State [bitcoind-patch] these stats are per daemon; if you have multiple daemons you'll see this section repeated
   
    Current Cache Max: 1000 - min/max: 1/1000
    Current Cache Size: 998
this is the work cache.  min/max are meaningless atm.  There used to be a dynamic cache sizing algorithm but it was a bit crap so I took it out.  The 1000 represents the max number before it will stop calling for more work.  The 998 is the current number in the cache.  This can occasionally creep over your max by up to Concurrent DL Requests.

    Concurrent DL Requests: 10 - min/max: 1/20

This is a gotcha, currently this gets set to 1/2 the value you've set.  Another hangover from the old dynamic cache sizing algo which also regulated request rate.

    DL Request Interval (ms): 0 - min/max: 0/100
    Current Ask Rate (works/req): 10

these 2 are rubbish

    Consecutive Connect Fails: 0
    Consecutive Http Fails: 0
    Consecutive Http Auth Fails: 0
    Consecutive Http Busy Fails: 0

These do actually work but they'll always be zero unless your daemon crashes or you have network problems between psj and bitcoind.

    Cache Excess: 1,002.5
    Cache Excess Trend: 0.06

rubbish

    Cache Retreival Age: 13885

avg age of work when retrieved from cache, measured in millis from when it was retrieved from the daemon

    Incoming Rate: 36.69/sec

how many works/sec you're getting from the daemon.  This doesn't represent the maximum but if you look straight after a block change for a few seconds it will probably be close.

    Incoming Fullfillment Rate: 100%

number requested / number received.  This will drop under 100% if you have http errors or the daemon stops responding to requests for any reason.

    Outgoing Requested Rate: 0.78/sec
    Outgoing Delivered Rate: 0.78/sec
    Outgoing Fullfillment Rate: 100%

same thing basically, except if fulfillment drops below 100% it most likely means psj can't get work from the daemon fast enough.

Longpoll Connections: 1 / 1000

Ignore the 1000.  There used to be a limit but there isn't anymore.  Connections will include connections that have been silently dropped by the client, but not connections that have expired.

WorkSource Stats:
      Stats for source: [bitcoind-patch]
        Current Block: 147243
        Cache:
          Work Received: 42965
          Work Delivered: 978

fairly obvious, duplicate below

          Upstream Request Fail Rate: 0%
          Upstream Request Fail Rate Tiny: 0%
rubbish

          Immediately Serviced Rate: 91.2%

This one is worth a comment.  Immediately serviced means a miner requested work, psj checked the cache and found one already available.  If the cache is empty it will wait a few seconds for one to arrive, and this counts as 'not immediate'.  In reality it might only be a millisecond or two.  (There's a rough sketch of this logic at the end of this rundown.)

          MultiGet Delivery Rate: ?%
          Delayed Serviced Rate: 100%
          Not Serviced Rate: 0%
          Expired Work Rate: 100%
rubbish

          Duplicate Work Rate: 0%

Usually 0, but if you have an unpatched daemon there's a bug that causes duplicates... If you ever see this higher than 0.01% keep an eye on it, it's definitely an indicator something is wrong.

          Cache Growth Rate: 17.228%
          Cache Growth Rate Short: 29.76%

rubbish

        Work Submissions:
          Work Supplied: 978
          Work Submitted: 0
          Work Submitted Invalid: 0
          Work Submitted Unknown: 0
          Work Submitted Valid: 0
          Work Submitted Valid Real: 0

not rubbish but doesn't work.

        HTTP:
          Requests Issued: 42972
          Fail Rate: 0%
          Success trip time: 24.56 ms
          Header trip time: 24.55 ms
          Fail trip time: ? ms
          Expire trip time: ? ms

These are about the HTTP connection between psj and the daemon.  Nothing to do with the miner side of the server.  The trip times are useful for tuning max concurrent connections... once the latency starts to go up dramatically you've probably got too many.

        Cache Age:
          Entries: 998 Oldest: 10337 Newest: 4032 Avg: 5418 Reject Rate: 0%

stats on the current contents of the cache.  Oldest, Newest and Avg are ages in millis.  Reject Rate is basically the same thing as Duplicate Rate except with a different moving average period.
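Since 'Immediately Serviced Rate' is the one that causes the most questions, the logic behind it is roughly this.  It's a sketch of the behaviour described above, not the actual PoolServerJ code, and the 2 second wait is an invented figure:

// Sketch of the "immediately serviced" logic described above, not actual PSJ
// code; the 2 second wait and the String work items are placeholders.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class WorkCacheSketch {
    private final BlockingQueue<String> cache = new LinkedBlockingQueue<String>();
    private long immediate, delayed, notServiced;   // counter thread-safety omitted for brevity

    String getWork() throws InterruptedException {
        String work = cache.poll();             // cache hit: counts as "immediately serviced"
        if (work != null) { immediate++; return work; }

        work = cache.poll(2, TimeUnit.SECONDS); // cache empty: wait briefly for the daemon to deliver
        if (work != null) { delayed++; return work; }

        notServiced++;                          // nothing arrived in time
        return null;
    }

    double immediatelyServicedRate() {
        long total = immediate + delayed + notServiced;
        return total == 0 ? 0 : 100.0 * immediate / total;
    }
}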
210  Bitcoin / Bitcoin Technical Support / Re: PoolServerJ - Tech Support on: September 30, 2011, 09:46:34 AM
Quick question?

Is there a way to include the USER_ID found in the worker table when inserting into the shares table?

It's easy to do but I'd have to make a minor code change and add it as a config option so it doesn't break backward compatibility.

If you log a feature request on the source code repo site I'll add it to the todo list.
211  Bitcoin / Mining / Re: [ATTN: POOL OPERATORS] PoolServerJ - scalable java mining pool backend on: September 29, 2011, 02:26:03 PM
An update:  BTC Guild is running PoolServerJ for the entire pool.  We were able to push out 10 pushpool/10 bitcoind nodes with load balancing and replace them with a single PoolServerJ and 2 bitcoind nodes. 

you forgot to mention the whopping 16% cpu load... but I'm glad you forgot to mention the memory usage Smiley
212  Bitcoin / Mining / Re: 0.3.0.FINAL Released on: September 26, 2011, 02:08:19 PM
What is "worker cache preloading"?

Well as it says it's not activated... you'd have to make some code mods and rebuild from source to use atm...

But in a nutshell... when a busy pool comes up its worker cache is empty.  It suddenly gets hit by a ton of requests, which translates into a ton of single selects to the db.  Preloading dumps the worker IDs from the cache to a file on shutdown.  Then on startup it grabs the worker IDs and does a single bulk select to fill the worker cache.  Much more efficient, but probably not an issue until you get to the terahash range.
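In code terms the preload amounts to something like this.  A sketch only: the table and column names are invented, not PSJ's actual schema, and the real thing persists the IDs to a file between runs:

// Sketch of the preload idea only; table/column names here are invented.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.List;
import java.util.Map;

public class WorkerCachePreloadSketch {

    // Instead of thousands of single-row lookups as each worker reconnects
    // after a restart (SELECT ... FROM worker WHERE username = ?), load the
    // previously cached ids in one bulk query at startup:
    void preload(Connection db, List<Long> savedWorkerIds, Map<Long, String> cache) throws Exception {
        if (savedWorkerIds.isEmpty()) return;

        StringBuilder sql = new StringBuilder("SELECT id, username FROM worker WHERE id IN (");
        for (int i = 0; i < savedWorkerIds.size(); i++) sql.append(i == 0 ? "?" : ",?");
        sql.append(")");

        PreparedStatement ps = db.prepareStatement(sql.toString());
        for (int i = 0; i < savedWorkerIds.size(); i++) ps.setLong(i + 1, savedWorkerIds.get(i));

        ResultSet rs = ps.executeQuery();
        while (rs.next()) cache.put(rs.getLong("id"), rs.getString("username"));
        rs.close();
        ps.close();
    }
}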
213  Bitcoin / Mining / 0.3.0.FINAL Released on: September 26, 2011, 11:32:32 AM
Changelog:

[0.3.0.FINAL]
- partial implementation of worker cache preloading.  This is not active yet.
- fix: stop checking if continuation state is initial.  It can be if a previous Jetty filter has suspended/resumed the request.  In that case it immediately sends an empty LP response.  This might be the cause of a bug where cgminer immediately sends another LP, which turns into a spam loop.  This only seems to be triggered under heavy load and only seems to happen with cgminer clients connected.
- added commented-out condition to stop manual block checks if native LP is enabled and verification is off.
- remove warning for native LP when a manual block check is fired.  We want this to occur in most circumstances.
- extra trace targets for longpoll empty and expired responses.
- fix: handle clients sending longpoll without a trailing slash.  This can result in the LP request being routed through the main handler and returning immediately, setting up a request spamming loop.  This patch checks for the LP url from the main handler and redirects to the LP handler if it's found.
- add threadDump method to mgmt interface
- add timeout to notify-lp-clients-executor thread in case dispatch threads do not report back correctly and counters aren't updated.  Solved a problem where counter mismatch can prevent the thread from ever finishing thus hogging the executor and preventing future long poll cycles.
- add shutdown check to lp dispatch timeout
214  Bitcoin / Mining software (miners) / Re: CGMINER CPU/GPU miner overclock monitor fanspeed in C linux/windows/osx 2.0.4 on: September 26, 2011, 07:49:15 AM
I've considered your request and believe that the responsibility should, unfortunately, lie with the proxy software. cgminer is simply given a url and works with that. Resolving the IP address per work item has the potential to actually be quite slow. As far as I'm aware, some big pools use a round robin system of their own without problems.

Implementing the sharing of delivered work between servers would be non-trivial and error-prone, not to mention adding significant extra load to the most stretched resource in the chain.  I'm not aware of any round-robin DNS load sharing in big pools (not to say there isn't any).  Load balancing that handles the stickiness at the load balancer is probably the best pool-side solution, but the load balancer needs to have a lot more grunt than a round-robin DNS server.  From the miner side the simplest solution would be to simply respect the TTL of the DNS response, and the pool op can set a long TTL.  That doesn't give as much realtime adaptability as a short TTL, but neither would client-side stickiness.
215  Other / CPU/GPU Bitcoin mining hardware / Re: Ufasoft Miner Thread - SSE2/OpenCL/AMD CAL/CUDA for Windows, v0.20 (2011-August) on: September 26, 2011, 07:35:33 AM
I encountered a problem with Ufasoft and PoolServerJ.
It seems PoolServerJ only accepts lowercase results and Ufasoft sends them in uppercase...

However, till this behavior is fixed in PoolServerJ, I've got a little fix for Ufasoft:


This is now handled by poolserverj, though I prefer the term 'workaround' for Ufasoft's weird behaviour rather than 'fix'.  Why on earth would you convert the solution to uppercase?
216  Bitcoin / Pools / Re: Thoughts and questions on BTC Pools and merged mining on: September 24, 2011, 01:30:05 PM
Saying it is 33 bytes doesn't help. I'll look at the code and determine what it really is.

It's the merkle root of all the aux chains' block headers, plus the number of aux chains.

217  Bitcoin / Pools / Re: Thoughts and questions on BTC Pools and merged mining on: September 24, 2011, 01:16:28 PM
For the big pools, poolserverj is not really an option.

Really?  Is that why the 2nd biggest pool (and the largest that uses pushpool) is changing over to it?  If you have concerns about its capacity, ask Eleuthria how many pushpool instances he replaced with a single poolserverj instance in a production stress test before it choked.

Quote
 We (not that I'm including myself in the big pools) have our getwork servers, either custom or pushpool highly modified to conform to our database and methods.  Switching over to poolserverj would be a major undertaking and rewrite of the poolserverj software.

Again, I suggest you ask Eleuthria how long it took him to make the necessary mods.  You don't even have to rebuild the binary; it's designed with a plugin architecture so you can just put your overrides for the commonly customised parts (like worker auth, db interaction, etc.) into a separate jar...

Quote

Not something that couldn't be done, but it's not something that's going to happen very quickly.  It needs to work with current implementations to gain traction, not foisted off on yet another getwork server.  I don't relish the idea of :

a) Running ANYTHING java based, having to use Java based junk in the enterprise, I can tell you I have never once seen a java based program that was worth anything in high availability/heavy load environments.

Leaving aside the obvious java hate here (I've learned it's not worth having that argument): poolserverj was designed from the ground up for large pools.  pushpool was not.  That's not an indictment of the pushpool author, it's simply a consequence of the fact that the network was a lot smaller when pushpool was designed than when poolserverj was.

Quote

  Couple that with the fact that without some severe tweaks, the memory limits of the java VM leave a LOT to be desired.


umm... it's ONE command line switch... -Xmx128m or whatever you want to limit the heap size to.  psj does not use memcached; it keeps all its caches internal, so of course the process is going to use more memory than pushpool.  It also follows the general principle of preferring performance over limiting memory footprint where there's a choice.  That's a design choice because it's meant to handle high loads.


218  Bitcoin / Pools / Re: Thoughts and questions on BTC Pools and merged mining on: September 24, 2011, 12:55:00 PM
As this nears completion, it would be beneficial for pool operators if there were:
A bitcoind diff for the merged client (since most of us run custom clients)
A merged-mining poolserverj binary
An easily modifiable namecoind source to apply joelkatz fixes (or a premodified one).

poolserverj will only have one binary.  Whether it uses merged mining or not will be configurable via properties.  There will be no additional overhead if not using merged mining.

Yes, a JK 4diff for the mm version of bitcoind is needed; however, I don't think it's really necessary for namecoind or other aux daemons.  They do not deal with a torrent of rpc requests like the bitcoind does, just some aux block updates every few seconds and submission of valid shares, which is also rare.
219  Bitcoin / Pools / Re: Thoughts and questions on BTC Pools and merged mining on: September 24, 2011, 12:43:59 PM
have a look at PoolServJ which replaces pushpool, lp and proxy

To clarify, poolserverj does not support merged mining natively yet.  It will work behind a merged-mining-proxy instance.  I've been getting my head around it all this last week and plan to make a native implementation tomorrow.  It should be ready by the changeover block unless there's a sudden upsurge in hashrate but it will be alpha.
220  Bitcoin / Bitcoin Technical Support / Re: PoolServerJ - Tech Support on: September 22, 2011, 10:40:52 PM
Is it very likely  then, that it is caused by the server performance overall?

ok, looking at your top output it looks like your system is definitely paging.  You should probably restrict java's max heap size as its default is quite greedy.  Take a look at: http://poolserverj.org/documentation/performance-memory-tuning/

Particularly the very last section: "Limit the JVM Heap Size"
