Bitcoin Forum
Author Topic: Information about getwork?  (Read 1015 times)
Activity: 98
July 03, 2011, 01:26:31 AM

Okay, as some of you might have noticed, I'm developing a new pool backend. However, as is widely known among pool operators, JSON-RPC is highly inefficient, especially the currently single-threaded implementation in the official client.
For that reason I figured it might be a good idea to implement the chain handling and the getwork method directly in the pool software, which is already mostly done. However, it does not seem to have getwork implemented yet, so I'm looking into what processing is involved in producing the different parts of the work data.
So far I've found very little about it; what I have found comes from the client source code:
    if (params.size() == 0)
    {
        // Update block
        static unsigned int nTransactionsUpdatedLast;
        static CBlockIndex* pindexPrev;
        static int64 nStart;
        static CBlock* pblock;
        if (pindexPrev != pindexBest ||
            (nTransactionsUpdated != nTransactionsUpdatedLast && GetTime() - nStart > 60))
        {
            if (pindexPrev != pindexBest)
            {
                // Deallocate old blocks since they're obsolete now
                BOOST_FOREACH(CBlock* pblock, vNewBlock)
                    delete pblock;
            }
            nTransactionsUpdatedLast = nTransactionsUpdated;
            pindexPrev = pindexBest;
            nStart = GetTime();

            // Create new block
            pblock = CreateNewBlock(reservekey);
            if (!pblock)
                throw JSONRPCError(-7, "Out of memory");
        }

        // Update nTime
        pblock->nTime = max(pindexPrev->GetMedianTimePast()+1, GetAdjustedTime());
        pblock->nNonce = 0;

        // Update nExtraNonce
        static unsigned int nExtraNonce = 0;
        static int64 nPrevTime = 0;
        IncrementExtraNonce(pblock, pindexPrev, nExtraNonce, nPrevTime);

        // Save
        mapNewBlock[pblock->hashMerkleRoot] = make_pair(pblock, nExtraNonce);

        // Prebuild hash buffers
        char pmidstate[32];
        char pdata[128];
        char phash1[64];
        FormatHashBuffers(pblock, pmidstate, pdata, phash1);

        uint256 hashTarget = CBigNum().SetCompact(pblock->nBits).getuint256();

        Object result;
        result.push_back(Pair("midstate", HexStr(BEGIN(pmidstate), END(pmidstate))));
        result.push_back(Pair("data", HexStr(BEGIN(pdata), END(pdata))));
        result.push_back(Pair("hash1", HexStr(BEGIN(phash1), END(phash1))));
        result.push_back(Pair("target", HexStr(BEGIN(hashTarget), END(hashTarget))));
        return result;
    }
    else
    {
        // Parse parameters
        vector<unsigned char> vchData = ParseHex(params[0].get_str());
        if (vchData.size() != 128)
            throw JSONRPCError(-8, "Invalid parameter");
        CBlock* pdata = (CBlock*)&vchData[0];

        // Byte reverse
        for (int i = 0; i < 128/4; i++)
            ((unsigned int*)pdata)[i] = CryptoPP::ByteReverse(((unsigned int*)pdata)[i]);

        // Get saved block
        if (!mapNewBlock.count(pdata->hashMerkleRoot))
            return false;
        CBlock* pblock = mapNewBlock[pdata->hashMerkleRoot].first;
        unsigned int nExtraNonce = mapNewBlock[pdata->hashMerkleRoot].second;

        pblock->nTime = pdata->nTime;
        pblock->nNonce = pdata->nNonce;
        pblock->vtx[0].vin[0].scriptSig = CScript() << pblock->nBits << CBigNum(nExtraNonce);
        pblock->hashMerkleRoot = pblock->BuildMerkleTree();

        return CheckWork(pblock, *pwalletMain, reservekey);
    }

I get why it's checking the number of parameters, but I have a very hard time following the rest of the code, as it seems cluttered with only indirectly related things. For example, what does the deallocation of old blocks have to do with getting new work?

So all in all, the process of getting work is still in the dark for me. Would anyone mind sharing links or their own knowledge about this process? :)
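For anyone else reading along: on the wire, getwork is just a JSON-RPC method. Below is a rough sketch (my own, not from the Bitcoin source) of what a pool backend sends and what shape of result the C++ code above produces; the helper name and the validation logic are mine, but the four field names and byte lengths come straight from the result object built above.

```python
import json

# Expected hex-string lengths of the four fields the getwork handler returns.
# (Each byte is two hex characters.)
EXPECTED_HEX_LENGTHS = {
    "midstate": 32 * 2,  # SHA-256 state after the first 64 header bytes
    "data": 128 * 2,     # padded 128-byte block header buffer
    "hash1": 64 * 2,     # scratch buffer for the second SHA-256 pass
    "target": 32 * 2,    # uint256 share/block target
}

def check_getwork_result(result):
    """Hypothetical helper: sanity-check a getwork result's fields."""
    for field, length in EXPECTED_HEX_LENGTHS.items():
        value = result.get(field, "")
        if len(value) != length:
            return False
        if not all(c in "0123456789abcdef" for c in value.lower()):
            return False
    return True

# Requesting new work is simply getwork with no parameters; submitting a
# solution is the same method with one 128-byte hex blob as the parameter.
request = json.dumps({"id": 1, "method": "getwork", "params": []})
```

This is only a shape check, of course; the real validation happens when the submitted header is hashed and looked up against the saved block.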
Activity: 1526
July 04, 2011, 05:07:18 AM

Once the chain has changed, you will never successfully add blocks based on any previously-issued work units. So you no longer need to track them. The client needs to track all issued work units that could ever make valid blocks because otherwise, if the miner solves the hash, the client will still not be able to get the full block into the public chain because it doesn't know which transactions are in the block.
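This bookkeeping is what mapNewBlock and vNewBlock do in the code quoted above: remember every issued work unit, keyed by its merkle root, until the chain tip moves. A rough sketch of the same idea (class and method names are mine, purely illustrative):

```python
# Sketch of tracking issued work units keyed by merkle root, as the
# mapNewBlock code above does. Names here are illustrative, not Bitcoin's.
class WorkTracker:
    def __init__(self):
        self.issued = {}  # merkle_root -> (block_template, extra_nonce)
        self.tip = None   # chain tip the stored templates build on

    def issue(self, tip, merkle_root, template, extra_nonce):
        if tip != self.tip:
            # Chain advanced: every old template builds on a stale parent,
            # so none can ever become a valid block. Drop them all.
            self.issued.clear()
            self.tip = tip
        self.issued[merkle_root] = (template, extra_nonce)

    def lookup(self, merkle_root):
        # A submitted solution carries only the 80-byte header; the merkle
        # root is how we recover which transactions belong in the block.
        return self.issued.get(merkle_root)
```

The lookup step is why the deallocation code sits inside getwork: pruning on a tip change is exactly the moment the old entries stop being able to produce a valid block.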

If a new transaction appears on the network, we want to include it in newly-issued work units so we get the fee and so transactions process faster. But we don't want to invalidate previous work units just for that -- miners would scream bloody murder about the stale shares. And if they actually solved a block and we threw it away just because it was missing a new transaction, well, that would be truly dumb.

Multi-threading the JSON code and fixing persistent connections are part of my '3diff' patch set.

In about two days or so, I'll release massive 'getwork' optimizations that roughly halve the CPU consumption in the typical 'pool manager calls getwork a lot' case.
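To make the data/target fields concrete: all the miner does with a work unit is double-SHA-256 the 80-byte header and compare the result, read as a little-endian 256-bit integer, against the target. A minimal sketch in Python (function names are mine):

```python
import hashlib

def header_hash(header80: bytes) -> bytes:
    # Bitcoin's proof of work is double SHA-256 over the 80-byte header.
    return hashlib.sha256(hashlib.sha256(header80).digest()).digest()

def meets_target(header80: bytes, target: int) -> bool:
    # The digest is interpreted as a little-endian integer for the comparison;
    # a valid share/block must hash to a value at or below the target.
    return int.from_bytes(header_hash(header80), "little") <= target

# Example: the genesis block header (version, prev hash, merkle root,
# time, bits, nonce, all little-endian) hashes below the difficulty-1 target.
genesis = bytes.fromhex(
    "01000000"
    + "00" * 32
    + "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a"
    + "29ab5f49" + "ffff001d" + "1dac2b7c"
)
```

The midstate field is just an optimization on top of this: the first 64 bytes of the header never change while grinding the nonce, so their SHA-256 state can be computed once and reused.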

I am an employee of Ripple. Follow me on Twitter @JoelKatz