A client that connects up, subscribes, and then gets updates as they happen might be an easier start.
Connects via another port? Or would you teach bitcoin's minimal HTTP implementation to keep the connection open (and service multiple connections at once)?

As discussed on IRC, bitcoin's JSON-RPC httpd should be converted to use select(2) (or poll or epoll), as is done now in ThreadSocketHandler2(); that enables HTTP/1.1 persistent connections. But let's leave that aside for a second.

For monitorblocks and push mining ("pushwork"), I would urge consideration of running a TCP server on a third port (8331?) with a simple protocol designed for long-running TCP connections and data broadcasting.

Server (bitcoind) simplified pseudocode for monitorblocks:

    processed_new_block_event(block):
        for each incoming TCP connection on port 8331:
            if (connection mask & BROADCAST_BLOCK)
                write(socket fd, block)
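The broadcast half of such a server could look like the following Python sketch. Everything here is an illustrative assumption, not an existing bitcoind API: the port number, the BROADCAST_* mask bits, and the type-byte/length-prefix framing are all invented for the example.

```python
import select
import socket
import struct

PORT = 8331                  # hypothetical third port from the proposal
BROADCAST_BLOCK = 1 << 0     # connection wants new-block broadcasts
BROADCAST_WORK = 1 << 1      # connection wants new-work broadcasts

MSG_BLOCK = 0x01             # invented message type for "here is a new block"

def frame(msg_type, payload):
    """Simple binary framing: 1 type byte + 4-byte big-endian length + payload."""
    return struct.pack(">BI", msg_type, len(payload)) + payload

def broadcast(connections, mask_bit, message):
    """Send message to every connection whose subscription mask matches.
    `connections` maps a socket-like object to its subscription mask."""
    sent = []
    for sock, mask in connections.items():
        if mask & mask_bit:
            sock.sendall(message)
            sent.append(sock)
    return sent

def serve_forever(connections):
    """Accept-loop sketch: one select(2) over the listener plus all clients."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("", PORT))
    listener.listen(16)
    while True:
        readable, _, _ = select.select([listener] + list(connections), [], [])
        for sock in readable:
            if sock is listener:
                conn, _addr = listener.accept()
                connections[conn] = 0   # mask filled in later by auth/subscribe
            # ... else: read auth/subscribe messages, update the connection mask ...
```

A block-processed event then reduces to one `broadcast(connections, BROADCAST_BLOCK, frame(MSG_BLOCK, block_bytes))` call.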
Client simplified pseudocode for monitorblocks:

    TCP connect to host:8331
    send curtime, protocol version, client version string, username,
        [possibly empty] list of options, sha256(all previous args + password)
    wait for server response ("auth ok, here are my capabilities",
        or "rejected, goodbye")
    send "send me new blocks" message

    while (true)
        select/poll for new input from remote server (bitcoind)
        read message from server ("hi, I'm a new block!")
        take client-specific action based on message...
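The handshake half of that client could be sketched in Python like so. The field ordering and the "pyclient/0.1" version string are assumptions; the only thing taken from the proposal itself is the sha256(all previous args + password) digest.

```python
import hashlib
import time

PROTO_VERSION = 1  # assumed protocol version number

def auth_digest(curtime, proto_version, client_version, username, options, password):
    """sha256 over all the preceding handshake fields plus the shared password,
    per the proposal; the exact concatenation order is an assumption."""
    h = hashlib.sha256()
    for field in (str(curtime), str(proto_version), client_version,
                  username, ",".join(options)):
        h.update(field.encode())
    h.update(password.encode())
    return h.hexdigest()

def build_hello(username, password, options=()):
    """Assemble the fields the client sends right after TCP connect."""
    curtime = int(time.time())
    digest = auth_digest(curtime, PROTO_VERSION, "pyclient/0.1",
                         username, options, password)
    return {
        "curtime": curtime,
        "proto": PROTO_VERSION,
        "client": "pyclient/0.1",
        "user": username,
        "options": list(options),
        "digest": digest,
    }

# After sending build_hello(...) over the socket, the client waits for
# "auth ok" / "rejected", sends a "send me new blocks" message, then
# select/polls the socket for pushed block messages.
```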
In this manner, bitcoind's logic is simple because you don't have to keep track of a list of monitored URLs, nor care about retrying a monitorblocks HTTP POST if the server is down, etc.

This same client connection model can be used for "push" mining, where miners are automatically delivered new work if bitcoind receives a new TX or block from the P2P network.

Server (bitcoind) simplified pseudocode for push mining:

    processed_new_block_event(block):
        for each incoming TCP connection on port 8331:
            if (connection mask & BROADCAST_WORK)
                work = RPC.getwork()
                write(socket fd, work)
And client miners would behave similarly (warning to efficiency nuts: this is a simplified example):

    TCP connect to host:8331
    send curtime, protocol version, client version string, username,
        [possibly empty] list of options, sha256(all previous args + password)
    wait for server response ("auth ok, here are my capabilities",
        or "rejected, goodbye")
    send "send me new work" message

    start proof-of-work GPU/CPU mining threads in background:
        while (true)
            send "get work" message
            read work from server
            execute GPU/CPU miner core
            if solution found:
                send "found solution" message

    while (true)
        select/poll for new input from remote server (bitcoind)
        read message from server ("hi, I'm a new work unit!")
        interrupt GPU/CPU miner core, restart with new work
Or something to that effect. The general idea is simply a binary protocol and connection model that's efficient for data broadcasting tasks such as: monitorblocks, push mining, and a future monitortx feature.
|
|
|
How will the Push API handle the case of the listener being temporarily unavailable? And what happens if the listener gets the message but dies before it has a chance to process it? In the latter case, careful listeners will note there is a discrepancy in the block chain and will have to issue a getblock anyway...
When monitoring blocks, it is obvious from the data when you have missed blocks. A block chain reorg is something to think about, though. And when monitoring for work, it is ok to miss a 'getwork' unit and simply retrieve a fresh one. Monitoring transactions is, as noted, more difficult from this perspective as well...
|
|
|
The pool has been shut down. Thanks to all who helped test! Only one block was found during this test, from beginning to end.
I assume it didn't go all that well; I'm sorry to hear it. Scroll up for the test results ("smashing success"). It's always been my opinion that if we really must have pools (instead of running hundreds of independent miners), we should have many of them to choose from, so as to avoid central points of failure. Should slush's pool be taken down or replaced by a malicious entity, we'd have a problem at hand.
I agree, the more pools, the better for bitcoin. That's why I published my code!
|
|
|
When all bitcoins are mined, why would people set their computers to do the computational work required by the p2p network? Are bitcoin owners expected to pay transaction fees in the future as the reward for this?
It is presumed that transaction fees will be large enough to provide the incentive to support the network and verify transactions, after the per-block BTC rewards reduce to zero.
|
|
|
A client that connects up, subscribes, and then gets updates as they happen might be an easier start
This is true, and in fact, that was my plan for a push-mining implementation. hmmmm.
|
|
|
I think we should go ahead with 'monitorblocks', because it is simple and immediately useful. monitortx requires a more complex API that specifies how bitcoin should select (match) which transactions to POST.
Suggested RPC API:
monitorblocks add URL [senddata]
Add URL to the list of those monitoring incoming blocks.
    senddata=true: bitcoin will send block contents in JSON format
    senddata=false (default): bitcoin will indicate the new block chain height to URL, and nothing else
Returns: numeric id representing the URL just added to the monitor list
monitorblocks del id
Deletes URL [id] from monitor list.
Returns: success / failure
monitorblocks clear
Clears monitor list, and stops all monitoring.
Returns: success / failure
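If the API above were adopted, a client could drive it through bitcoin's existing JSON-RPC interface. A sketch of what those calls would look like (the request-building helper, callback URL, and numeric id are my own illustrative assumptions):

```python
import json

def jsonrpc_request(method, params, req_id=1):
    """Build a JSON-RPC request body of the form bitcoind's httpd accepts."""
    return json.dumps({"id": req_id, "method": method, "params": params})

# Register a listener that receives full block contents (senddata=true):
add_req = jsonrpc_request("monitorblocks",
                          ["add", "http://127.0.0.1:8330/newblock", True])

# Later, remove it by the numeric id the add call returned (0 here is assumed):
del_req = jsonrpc_request("monitorblocks", ["del", 0])

# Or clear the whole monitor list and stop all monitoring:
clear_req = jsonrpc_request("monitorblocks", ["clear"])
```

Each body would then be POSTed to bitcoind's RPC port with basic auth, like any other RPC call.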
|
|
|
Yeah, that's what I thought too ... however so far I have got three of the
PROOF OF WORK RESULT: true (yay!!!)
outputs but no BTC in my bitcoin pooled mining account, unconfirmed or confirmed reward ...
The mining stats are delayed by two hours, among other things.
|
|
|
Well, the queue length I was testing was three. My processor can get through a getwork() in about 5 seconds per core, so I guess I was falling outside of that range for your server. So this can't be solved in the way I was thinking. Maybe there's some other way to do it; I'll have to think through how I could do it without a queue and without taking a long time. Maybe having the miner threads call to the main thread to get the next work unit ready just before they crunch an existing one would be a good idea. I'll look into doing that.
Someone implemented a work queue a long time ago. The reason that change was never merged upstream: it basically guaranteed that you are always working on "old work," lagging a second or three. Ideally, a background thread polls for work, timing things carefully so that it issues a 'getwork' JSON-RPC call and downloads and parses the JSON response just in time for a miner thread to request new work from the queue.
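One way to structure that just-in-time polling is a background thread that keeps exactly one fresh work unit on hand, so miners never block on a getwork round trip but also never crunch stale queued work. This is a sketch under my own assumptions (the fetch callable standing in for the getwork RPC, and the refresh interval), not the actual cpuminer code:

```python
import threading

class WorkFetcher:
    """Background thread that keeps a single, recently-fetched work unit
    available; miner threads grab it via get_work() without waiting."""

    def __init__(self, fetch, refresh_secs=5.0):
        self.fetch = fetch              # callable that performs the getwork RPC
        self.refresh_secs = refresh_secs
        self._work = None
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._poll, daemon=True)

    def start(self):
        with self._lock:
            self._work = self.fetch()   # prime before any miner asks
        self._thread.start()

    def _poll(self):
        # Re-fetch on a timer: download and parse while miners are crunching,
        # so a fresh unit is ready the moment one is requested.
        while not self._stop.wait(self.refresh_secs):
            work = self.fetch()
            with self._lock:
                self._work = work

    def get_work(self):
        with self._lock:
            return self._work

    def stop(self):
        self._stop.set()
```

The single-slot design (rather than a depth-3 queue) is the point: the newest fetch always overwrites the old unit, so miners never receive work that has been sitting around.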
|
|
|
Just committed two changes to git, based on suggestions here:
- Re-use the CURL object. This means persistent HTTP/1.1 connections, if your pool server supports it!
- Use bswap_32(), if the compiler does not provide an intrinsic. Useful for older OSes.
|
|
|
I'm interested in starting my own mining server. I've got most of it down; all I have to implement is the getwork method for getting new hashes. Is the latest poold.py the one I should be looking at?
Where did you get started reading about getwork, or did you just look at the code from the bitcoin client?
You will need to intimately understand the mining process, notably the binary layout of the 80-byte bitcoin block headers. The best reference is the C++ source code for the official bitcoin client.
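That binary layout is fixed and well documented: version, previous-block hash, merkle root, timestamp, difficulty bits, and nonce, all little-endian, totaling 80 bytes. A Python sketch of packing a header and computing its proof-of-work hash (the sample field values are arbitrary placeholders):

```python
import hashlib
import struct

# 80-byte bitcoin block header, little-endian:
#   4s version | 32s prev-block hash | 32s merkle root
# | 4s timestamp | 4s difficulty bits | 4s nonce
HEADER_FMT = "<I32s32sIII"

def pack_header(version, prev_hash, merkle_root, timestamp, bits, nonce):
    """Serialize the six header fields into the 80-byte wire format."""
    return struct.pack(HEADER_FMT, version, prev_hash, merkle_root,
                       timestamp, bits, nonce)

def block_hash(header):
    """The proof-of-work hash is double SHA-256 over the 80 header bytes."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

# Placeholder values, just to show the shapes involved:
hdr = pack_header(1, b"\x00" * 32, b"\x00" * 32, 1300000000, 0x1D00FFFF, 0)
```

A getwork-based miner spends its life re-packing this structure with incremented nonces and checking the resulting double-SHA-256 against the target.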
|
|
|
Bitcoin going mainstream means embracing government regulations. Mt Gox has already started, by limiting withdrawals to $1000/person/day (or equivalent withdrawal in bitcoins).
Everyday consumers simply want an easy way to pay vendors.
|
|
|
'listtransactions' gives you 'time', which is Unix time (seconds since Jan 1, 1970 UTC)
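Turning that field into a readable timestamp is one line in Python (the sample value below is arbitrary, not from any real transaction):

```python
import datetime

tx_time = 1296688602  # 'time' from listtransactions: seconds since Jan 1, 1970 UTC
when = datetime.datetime.fromtimestamp(tx_time, tz=datetime.timezone.utc)
print(when.strftime("%Y-%m-%d %H:%M:%S UTC"))
```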
|
|
|
That would totally kill it for me. I refuse to do business with anyone who asks for government-issued numbers or identification. This defeats the whole purpose of bitcoin, imho. It is then no longer the people's free currency. I'm afraid we will have to solve the exchange problem in another manner.
I agree. I wouldn't even touch MtGox if they required those things, or any other business for that matter. That illustrates the difference between mainstream and non-mainstream users. In the US, your real name, address, and tax ID # are all associated with your bank accounts and brokerage accounts. This obviously does not alarm the populace at large, who continue to bank or buy stocks. If Mt Gox started requiring KYC for accounts > $100 or > $500, I think that would be a fantastic signal of mainstream bitcoin acceptance, and a sign that Mt Gox will remain viable long term.
|
|
|
The pool has been shut down. Thanks to all who helped test! Only one block was found during this test, from beginning to end.
|
|
|
I don't see someone who is interested in a pre-built mining computer having the knowledge to run gui-less linux on one of these. I think the market for these exists with those who have little knowledge of how to build a computer, which mostly goes hand in hand with not knowing cli-linux. <shrug> After years of building my own computers, I learned the value of pre-built computers with warranties and support. Plus, my own time has a cost.
|
|
|
A bank solicits deposits and makes loans at interest. Mt Gox is a financial services business (FSB). Since these definitions come from France, I will assume they are also used there. FATF is the Groupe d'action financière (GAFI).
In the US, this is called a Money Services Business (MSB). Mt Gox is probably an MSB, but there is a FinCEN administrative ruling that states "our regulations define a money services business to include a seller or redeemer of stored value who sells or redeems stored value in an amount greater than $1,000 per person per day in one or more transactions." So it can be argued that limiting daily withdrawals to $1000, exchanging for stored value (== bitcoins), falls outside this definition. At least for the US jurisdiction.

Nevertheless, I would certainly recommend that Mt Gox and others in the US register as an MSB. It's not as complicated or expensive as it sounds, and it basically involves the following:
- AML: Requires at least two people. One is an auditor/reviewer of anti-money-laundering (AML) practices, who should not be involved in implementing AML. For small websites, this means the website owner (mtgox) must find another person willing to review his code for security/privacy/KYC/AML.
- AML: You must file electronic reports on all transactions that leave the country or are over $10,000.
- AML: You are expected to notice transactions smaller than $10,000 that appear to attempt to evade the $10,000 reporting limit.
- KYC: You are expected to know details about your website's users, primarily name, address, and tax ID, in case law enforcement comes calling.
There are more details, but these were the big details that stick out in my mind, from my own research.
|
|
|
The pool will be shut down Monday at 11:59pm EST. Until then, I'm keeping my CPU miners going on it, at least
|
|
|
cpuminer version 0.6.1 released (see top of thread for URLs).
Changes:
- Fully validate "hash < target", rather than simply stopping our scan if the high 32 bits are 00000000.
- Add --retry-pause, to set the length of the pause between failure retries.
- Display the proof-of-work hash and target, if -D (debug mode) is enabled.
- Fix max-nonce auto-adjustment to actually work. This means if your scan takes longer than 5 seconds (--scantime), the miner will slowly reduce the number of hashes you work on before fetching a new work unit.
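The difference between the old shortcut and the full validation is easy to show in Python. This is an illustrative re-implementation using the standard compact-bits expansion, not cpuminer's actual C code:

```python
def target_from_bits(bits):
    """Expand bitcoin's compact 'bits' encoding into the full 256-bit target."""
    exponent = bits >> 24
    mantissa = bits & 0xFFFFFF
    return mantissa << (8 * (exponent - 3))

def hash_meets_target(pow_hash, bits):
    """Full validation: treat the 32-byte proof-of-work hash as a
    little-endian 256-bit integer and compare it against the target."""
    return int.from_bytes(pow_hash, "little") <= target_from_bits(bits)

def high_bits_zero(pow_hash):
    """The old shortcut: only check that the top 32 bits are zero."""
    return pow_hash[-4:] == b"\x00\x00\x00\x00"
```

A hash can pass the zero-high-bits check yet still exceed the real target, which is exactly the case the 0.6.1 change catches.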
SHA1: dca174e0e433cda072a1acd7884dbdc15fa4a53c cpuminer-installer-0.6.1.zip MD5: 675242118b337c6d2bfc23a13eb2dd82 cpuminer-installer-0.6.1.zip
|
|
|
|