81  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: November 10, 2017, 11:25:45 PM
What size CPU and RAM do I need to run a node of your branch? Would it be feasible to run on a VPS server (i.e. one with the space necessary to hold the blockchain)?
The most recent version of 1mb_segwit needs about half as much RAM but a little more CPU than p2pool master.

If you're using pypy, you should probably have about 3-4 GB of available RAM and a 2.2 GHz Intel Core-based CPU or better. AMD Ryzen CPUs should work fine. Older AMD CPUs (e.g. A10 or A8) will need a lot more GHz, but if you have a 3.5 GHz older AMD CPU, that will probably also work fine. You might be able to get away with slower CPUs, but I can't promise anything.

If you're using python2.7, then you only need about 1.5 GB of available RAM but you might need a 3.0 GHz Intel Core i3 CPU.

I strongly recommend using pypy. That's much easier to do on Linux. Orphan rates will be lower and revenue will be higher if you use a fast CPU and if you use pypy.

Thread and core counts are basically irrelevant. GHz and IPC are king. This means a cheap desktop Core i3 CPU will be quite a bit better than a high-end server 16-core Xeon. The Xeon should still work fine, though.

Unfortunately, p2pool requires that the full node's blockchain be unpruned. It needs to access the genesis block for some reason; I haven't looked into the details on why. I recommend making the ~/.bitcoin/blocks/ directory a symlink to a cheap HDD and keeping the ~/.bitcoin/chainstate/ directory on an SSD. The chainstate is the UTXO database, and it needs to be read from and written to frequently and randomly when validating blocks. The blocks directory is just the raw block storage, and is accessed infrequently and sequentially, which makes it perfect for HDDs.
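If you want to script that layout, here's a minimal sketch (the HDD mount point is hypothetical, and bitcoind should be stopped before moving anything):

Code:
import os, shutil

blocks = os.path.expanduser('~/.bitcoin/blocks')
hdd = '/mnt/hdd/bitcoin-blocks'  # assumption: the cheap HDD is mounted here

shutil.move(blocks, hdd)  # relocate the raw block files to the HDD
os.symlink(hdd, blocks)   # leave ~/.bitcoin/blocks pointing at the HDD
# ~/.bitcoin/chainstate/ stays put, ideally on the SSD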
82  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: November 09, 2017, 11:00:55 PM
When I try to mine with a small hashrate, I get these errors:
The "Block stale detected" message is normal (though eliminated in jtoomimnet) and does not indicate a problem.

The "submitted share more than once" message is more unusual, but by itself does not itself indicate that things aren't working, nor does it give much of a hint how things would be failing if they were failing. My only guess for why you might be getting that is if the difficulty set by your node is too low. It's also possible that your mining hardware is just buggy or your network is buggy/congested/laggy or your node's CPU is getting saturated.

And also i don't know if jtoomimnet has a block finding reward like p2pool.org has!
It currently has the same 0.5% bonus that mainnet has. I might want to increase that in the future, but for now it's the same.

Is the forrestv net not well configured, or is it just a big run of bad luck?
With mainnet's current hashrate of 760 TH/s, it is expected to find one block every 95 days or so. Mainnet last found a block 85 days ago on August 15, 2017, so this doesn't even qualify as bad luck. With this hashrate, "bad luck" would mean not finding a block for about a year.
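If you want to sanity-check that figure, expected time per block is difficulty * 2^32 / hashrate. Rough numbers (the difficulty is my approximate recollection for early November 2017):

Code:
difficulty = 1.45e12  # approximate Bitcoin difficulty, early Nov 2017
hashrate = 760e12     # mainnet p2pool hashrate in H/s
seconds = difficulty * 2**32 / hashrate
print('%.0f days' % (seconds / 86400.0))  # ~95 days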
83  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: November 09, 2017, 09:46:36 PM
@jtoomim given today's news, can we now get back on the same sharechain?
I don't see how that's relevant. The absence of my code in the main repo was never about 2x; it was about compatibility issues with altcoins.
84  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: November 01, 2017, 02:48:42 AM
Sure hope jtoomim's fork finds a block soon, or I'm gonna have to bail. I'm trying to hang in there, but I wish we had a few more PH to bring down the expected time a little.
We will have about 3x as much hashrate in December.
85  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: October 27, 2017, 06:16:31 PM
Looking at joining, but can someone confirm when the last block was found by the pool?
Last block found on mainnet was 8/15/2017. mainnet currently has 0.8 PH/s, and is expected to find one block every 91 days on average.

Last block found on jtoomimnet was 9/18/2017. jtoomimnet currently has 2.6 PH/s, and is expected to find one block every 28 days on average. jtoomimnet will be adding at least 4 PH/s over the next 45 days.

List of all blocks found (both mainnet and jtoomimnet):
https://blockchain.info/address/1Kz5QaUPDtKrj5SqW5tFkn7WZh8LmQaQi4
86  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: October 21, 2017, 07:43:04 PM
Sorry for the slow response, Cryptonomist.

1) Is it correct to assume that the instance of the Tracker class in p2pool/util/forrest.py is the actual share chain?
That's one part of the relevant code. The actual tracker for the share chain is the OkayTracker class, which inherits from forest.Tracker and is instantiated as node.tracker. I think OkayTracker's code is more relevant and interesting.

By the way, it's forest.py with one 'r' (as in a bunch of trees), not forrest.py (as in the author's name).

Quote
2) Can someone suggest a way to get the time between the "Time first seen" and the addition to the share chain.
I would suggest printing out the difference between the time first seen and time.time() at the end of data.py:OkayTracker.attempt_verify(). That seems like useful information for everyone. If you put it under the --bench switch and submit it as a PR to 1mb_segwit, I'd likely merge it. Don't worry about it if you're not good with git/github, as it's not a big deal either way.

Quote
3) The flow of a share between the moment the node detects its existence and its final addition to the share chain is not very clear to me.
Yeah, that code is kinda spaghettified. It might help to insert a "raise" somewhere and then run it so you can get a printout of the stack trace at that point.

Quick from-memory version: the stuff in data.py (the BaseShare class and its child classes) gets called during share object instantiation and deserialization. When p2p.py receives a serialized share over the wire, it deserializes it and turns it into an object, then asks the node object in node.py what to do with it. node.py then passes it along to the node.tracker object and asks the tracker if it fits in the share chain; if it does, then node.tracker adds it to node.tracker.verified, and the next time node.tracker.think() is run (which is probably immediately afterward), node.tracker may choose to use that new share for constructing work to be sent to miners. This causes work.py:get_work() to generate a new stratum job (using data.py:*Share.generate_transaction() to make a coinbase transaction stub and block header) which gets passed via bitcoin/worker_interface.py and bitcoin/stratum.py to the mining hardware.
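Here's a very rough runnable model of that flow. The class and function names are simplified stand-ins for the real code in p2p.py/node.py/data.py/work.py, not p2pool's actual interfaces:

Code:
# Toy model of the share pipeline; not p2pool's real classes.
class Tracker(object):
    def __init__(self):
        self.verified = {}  # shares accepted into the chain
        self.best = None

    def add(self, share):
        # "does it fit in the share chain?" check, vastly simplified
        if self.best is None or share['prev'] == self.best:
            self.verified[share['hash']] = share

    def think(self):
        # stand-in for the best-head selection logic
        if self.verified:
            self.best = max(self.verified)
        return self.best

class Node(object):
    def __init__(self):
        self.tracker = Tracker()

    def handle_share(self, wire_share):
        share = deserialize(wire_share)  # data.py: BaseShare and children
        self.tracker.add(share)          # node.py hands it to the tracker
        best = self.tracker.think()      # may adopt the new share
        return get_work(best)            # work.py: build a stratum job

def deserialize(wire_share):
    return wire_share  # placeholder for the real deserialization

def get_work(best_share):
    return {'stratum_job_built_on': best_share}

node = Node()
print(node.handle_share({'hash': 1, 'prev': None}))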

Quote
4a) Under the rules in the main p2pool network the shares are 100kb. So after 300 seconds on average the shares will have a total of 1mb transactions, and after 600 seconds on average the bitcoin blocks would be 2mb. Is this correct?
Sorta. It's a limit of 100 kB of new transactions. Less than 100 kB of new transactions can be added per share. The serialized size of the share is much lower than this, since the transactions are referred to by hash instead of as the full transaction; the serialized size of the candidate block that the share represents is much larger than this, and includes the old (reused) transactions as well as the new ones.

If the transaction that puts it over 100 kB is 50 kB in size, and has 51 kB of new transactions preceding it, then only 51 kB of transactions get added. If some of the old transactions from previous shares have been removed from the block template and replaced with other transactions, then those old transactions don't get included in the new share and your share (and candidate block) size goes down.

In practice, the candidate block sizes grow slower than 100 kB per share. I haven't checked very thoroughly how much slower, but in the one instance that I followed carefully it took around 25 shares to get to 1 MB instead of 10 shares.
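A sketch of that accounting rule, under the simplifying assumption that transactions are considered in template order:

Code:
def take_new_txs(tx_sizes, limit=100000):
    # Include new transactions until the next one would push the
    # running total past the 100 kB new-transaction limit.
    total, included = 0, []
    for size in tx_sizes:
        if total + size > limit:
            break
        total += size
        included.append(size)
    return included

# The example above: 51 kB of new txs, then a 50 kB tx that would
# overflow the cap, so only the first 51 kB gets into this share.
print(sum(take_new_txs([51000, 50000])))  # 51000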

Quote
4b) The hash of the header of the bitcoin block contains the merkle tree of the transactions the block contains. ... How can the transactions of several shares be added to get for example after 300 seconds 1 mb of transactions in a bitcoin block.
The hash of a share *is equal to* the hash of the corresponding bitcoin block header. The share structure includes a hash of all p2pool-specific metadata embedded into the coinbase transaction (search for gentx in data.py). The share has two different serializations: the long serialization (which is exactly equal to the block serialization, and which only includes the hash of the share-specific metadata), and the short serialization (which includes the block header plus the share-specific metadata such as the list of hashes of new transactions, the 2-byte or 3-byte reference links for the old transactions, the share difficulty, timestamps, etc.). Any synced p2pool node can recreate the long serialization from the short serialization, data in the last 200 shares of the share chain, and the hash:full_tx map in the node's known_txs_var.

The transactions aren't "added". If a transaction has been included in one of the last 200 shares in the share chain, then a share can reference that share using the number of steps back in the share chain (1 byte) and the index of the transaction within that share (1 or 2 bytes). These transactions -- "old" transactions -- do not count toward the 100 kB limit. If a transaction has not been included before, then the share will reference this transaction using its full 32-byte hash, and counts its full size (e.g. 448 bytes) against the 100 kB limit. Both types of references are committed into the metadata hash in the gentx, so both are immutable and determined at the time the stratum job is sent to the mining hardware.

https://github.com/jtoomim/p2pool/blob/9692d6e8f9980b057ae67e8970353be3411fe0fe/p2pool/data.py#L156
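The byte counts involved, using the figures above (the 448-byte transaction is just the example size):

Code:
NEW_REF = 32  # bytes in the share: full hash of a never-seen transaction
OLD_REF = 3   # bytes in the share: 1-byte shares-back + 2-byte tx index

tx_size = 448             # example transaction
limit_cost_new = tx_size  # a new tx counts fully against the 100 kB limit
limit_cost_old = 0        # an old tx costs nothing against the limit
print('%d vs %d bytes of share serialization' % (NEW_REF, OLD_REF))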

My code currently has a soft limit of 1000 kB (instead of 100 kB or 50 kB) on new transactions per share, but unlike the p2pool master branch, this is not enforced at the consensus layer, so anyone can modify their code to exceed this limit without consequences from should_punish_reason().
87  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: October 06, 2017, 05:48:13 PM
If uncles were applied to bitcoin, then special care would have to be taken to make sure a transaction in an uncle doesn't conflict with a transaction in the main chain.
Actually, you'd simply ignore all transactions in uncles except the coinbase transaction, which gets programmatically modified to reduce the amounts.

Quote
If uncles are added then I think the share interval could be reduced even more to (say) 15 seconds.
I'd really rather not, for CPU and RAM performance reasons in addition to orphan rate reasons. It takes a medium-slow CPU on Python2.7 up to 4 seconds to process a share and issue new work with 1 MB blocks. With 4 MB blocks, you would generally be unable to run p2pool on medium-slow CPUs with a 15 second share interval, as it would take 16 seconds to process each share. Share variance is currently insignificant compared to block variance on p2pool. Increasing the share interval to 60 seconds on average is more likely to be a good idea than decreasing it.
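The arithmetic that rules it out for those CPUs:

Code:
t_1mb = 4.0        # seconds to process a share with 1 MB blocks (slow CPU)
t_4mb = t_1mb * 4  # processing time scales roughly with block size
interval = 15.0    # proposed share interval in seconds
print('%.0f s per share; sustainable: %s' % (t_4mb, t_4mb <= interval))
# 16 s per share against a 15 s interval: the node can never catch up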

While there are a few major changes to p2pool's architecture that would drastically reduce its processing load, those changes are quite large and may never be done. They would certainly involve more code than just adding uncles.

Quote
There'd be no more orphans at all. This would reduce the difficulty (by a constant factor but still) which would mean if p2pool gets really big then small hashers can still solve shares fairly often.
There would be uncles instead. Uncles pay the person who mined them less than normal shares do, so there's still a potential fairness penalty to having high uncle rates. The game theory of uncles is complicated, and if you don't set the rules right, it can be in a miner's best interest not to mention other people's shares as uncles and to orphan them instead. Giving uncled shares a hit in revenue is necessary to make sure there's an incentive to include them.

Quote
Another way of solving this apart from being probabilistic...
A probably better solution is to just follow the share with the lowest hash, regardless of its difficulty. This inherently prefers high-diff shares, but is fair in that low-diff shares have a chance to win against a high-diff share if the low-diff share is particularly lucky in its hash. Basically, it is effectively equivalent to the algorithm I proposed in the post that you linked to, but where the source of randomness is the hashes themselves.
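A sketch of that tie-break; the hash values are made up (real share hashes are 256-bit):

Code:
# Among competing heads, follow the share with the numerically lowest
# hash. High-diff shares usually have lower hashes, but a lucky
# low-diff share can still win.
heads = {'high_diff_share': 0x00000d41, 'low_diff_share': 0x000007a3}
print(min(heads, key=heads.get))  # low_diff_share got lucky this time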
88  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: October 04, 2017, 06:51:50 PM
On p2pool.org, it shows it has been 83 days since the last block, so are they not running jtoomim's fork?
Correct. The most recent block found on jtoomimnet was found on September 18th. Jtoomimnet currently has around 2.6 PH/s and an expected time per block of around 22 days. Mainnet has around 0.7 PH/s and an expected time per block of around 90 days.

89  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: September 25, 2017, 04:33:06 AM
"Warning: LOST CONTACT WITH BITCOIND for" is displayed ... perhaps these linux bash box dont runs cleanly.
That's what happens when your CPU gets overloaded and stalled. Let's say you have enough incoming traffic in your TCP buffers to keep p2pool's CPU busy for 1.1 minutes. P2pool first sees that it has a queued need to get a new block template, so p2pool sends a getblocktemplate request to bitcoind. Bitcoind may respond immediately, but the response goes to the end of that 1.1 minute queue. P2pool then works through its queue of stuff, eventually gets to the bitcoind response, and processes it, but notices that it took 1.1 minutes and so complains about it.

Same thing for web UI requests. When you point your browser to localhost:9332, that request goes into p2pool's queue of jobs to work on, and p2pool doesn't get around to it until e.g. 1.1 minutes has passed.

Quote
By the way, p2pool under Linux bash with pypy is using 1.3 GB of memory and between 0 and 15% CPU.
The 15% CPU is the problem. 15% is above the 12.5% threshold for fully utilizing a single core on your machine, which means that the work in p2pool's to-do queue is snowballing.
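The arithmetic behind that threshold (assuming an 8-thread CPU, which is what the 12.5% figure implies):

Code:
logical_threads = 8  # assumption: 4 cores / 8 threads
print(100.0 / logical_threads)
# 12.5% = one fully busy thread; a sustained 15% means the single
# Python thread can't keep up and the to-do queue grows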

The --bench output is somewhat helpful, but it seems that the bulk of the load is happening in a function that I didn't put benchmarking code into. In your log files, I frequently see line pairs like this one (note the timestamps):

Code:
2017-09-24 21:52:37.814726    0.000 ms for 7 txs in p2p.py:handle_remember_tx (0.000 ms/tx)
2017-09-24 21:52:58.864085    0.000 ms for 11 txs in handle_losing_tx (0.000 ms/tx)
...
2017-09-24 21:52:59.239189 > >>> Warning: LOST CONTACT WITH BITCOIND for 1.6 minutes! Check that it isn't frozen or dead!
The "Lost contact" message means that your CPU was stalled for 1.6 minutes. 1.6 minutes before the "Lost contact" message at 12:52:59 would be 12:51:20 or something like that. That means that your CPU was busy between 21:52:37 and 21:52:58. However, we don't know what was happening at that time, since the code that was running was something that I didn't add benchmarking code to.

Can you add this to your p2pool startup command line and send me the resulting profile1.log file? Try to run it for about an hour; if you run it for much less than that, the profile data will be dominated by the start-up share loading time.

Code:
pypy -m cProfile -o "profile1.log" run_p2pool.py [other options]

If anyone is curious, you can analyze those cProfile log files with a python script that looks like this:
Code:
import pstats, sys

if len(sys.argv) < 2:
    print "Usage: cumtime.py input_file [-t] [lines]"
    sys.exit()

p = pstats.Stats(sys.argv[1])
if '-t' in sys.argv:
    p.sort_stats('tottime')   # time spent inside each function itself
else:
    p.sort_stats('cumtime')   # cumulative time, including callees (default)

lines = 100
if len(sys.argv) > 2:
    try:
        lines = int(sys.argv[2])
    except ValueError:        # second arg wasn't a number (e.g. '-t')
        pass

p.print_stats(lines)
90  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: September 24, 2017, 02:48:59 AM
Xantus, maybe what's happening is that there just aren't any exceptions in Windows Firewall for this. Native Windows apps usually trigger a request to open ports in the firewall when needed, but maybe p2pool under WSL pypy isn't doing that. Try disabling Windows Firewall for an hour to see if that helps. If it does, then you can manually add a firewall rule to allow incoming TCP connections on ports 9332 and 9333 (or whatever you're using).

Edit: Pages like this suggest that you do indeed need to open up ports on the Windows Firewall if you want to allow WSL processes to serve as servers.
91  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: September 21, 2017, 05:02:47 PM
pypy CPU usage is lower than 5%, but it seems as if everything is very slow.
That sounds like you might be out of RAM and swapping to disk. Can you check? Task Manager -> Performance -> Memory: In Use, Available, Committed, Paged Pool.
92  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: September 20, 2017, 09:19:10 PM
Interesting, it seems that time.time() on Windows only has 15 or 16 ms resolution, so all of the short function calls are reported as taking either 0 ms, 15 ms, or 16 ms, even though they're actually probably taking about 4 ms each.
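You can measure the effective tick size yourself with a snippet like this:

Code:
import time

# Spin until time.time() advances; the jump is the timer granularity
# (commonly ~15.6 ms on Windows, microseconds on Linux).
t0 = time.time()
t1 = t0
while t1 == t0:
    t1 = time.time()
print('%.1f ms' % ((t1 - t0) * 1000))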

I also see a few lines like this one:
Code:
2017-09-20 07:00:17.936000  453.000 ms for 0 txs in p2p.py:handle_remember_tx (453.000 ms/tx)
453 ms for handle_remember_tx when no transactions are being added seems strange and significant. I'll have to reread the code and see if I can figure out what could be taking so long.

There's also this:
Code:
2017-09-20 07:00:37.877000  750.000 ms for work.py:get_work()
2017-09-20 07:00:37.877000 1734.000 ms for 1 shares in handle_shares (1734.000 ms/share)
That makes about 1.7 seconds of latency just in those two functions for your node switching work. (IIRC, handle_shares() calls get_work, so the 750 ms is already included in the 1734 ms.) There may be a few other functions that I didn't add the benchmarking code to that also play a role, but that latency alone should be responsible for about 5.8% of your DOA+orphan rates.
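That 5.8% estimate is just the stale window divided by p2pool's 30-second share interval:

Code:
latency = 1.734        # seconds of work-switching latency measured above
share_interval = 30.0  # p2pool's target time between shares, in seconds
print('%.1f%%' % (100 * latency / share_interval))  # ~5.8% DOA+orphan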

Code:
2017-09-20 07:02:24.241000 Decoding transactions took 31 ms, Unpacking 1188 ms, hashing 16 ms
The 1188 ms (unpacking) is something I think I know how to fix via a CPU/RAM tradeoff. That will probably be my next task. However, that will only reduce the CPU usage in processing new getblocktemplate responses, which I think is not in the critical code path for share propagation, so I don't expect it to help with DOA+orphan rates much. It's a pretty easy and obvious optimization, though. The node I do my testing on (4.4 GHz Core i7, pypy) only shows about 40 ms for unpacking instead of 1188 ms, so I wasn't sure it would be worthwhile. But if it's stalling your CPU for over a second, that sounds like it could be interfering with things.

Not sure what the issue is with pypy on your machine. Does your CPU show activity when you run it? How much?
93  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: September 20, 2017, 07:13:11 PM
jtoomimnet found another block two days ago.

https://blockchain.info/block/00000000000000000096cef4bfd159307030923b0111acd39bfb6561b277a375

1014 kB, 3998 kWU, 13.858 BTC.

We also added a few hundred TH/s yesterday, and are now up to 2.2 PH/s.
94  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: September 20, 2017, 04:41:51 PM
This "linux subsystem" is like 5-10x slower than you make a Vitrualbox VM and run Ubuntu in it.
That assertion is inconsistent with most of the benchmarks I have seen, in which WSL is often faster than native Linux and rarely more than 5% slower.
https://www.phoronix.com/scan.php?page=article&item=windows-10-lxcore&num=2

However, code compilation appears to be the exception to that rule. For some strange reason, compiling code on WSL is extremely slow:
https://www.phoronix.com/scan.php?page=article&item=windows-10-lxcore&num=4

Since running p2pool is not code compilation, performance on WSL should be quite good. P2pool is mostly just a CPU-bound task with very little disk IO and a small amount of network IO, so the poor results from code compiling should not manifest here.
95  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: September 19, 2017, 04:20:48 PM
However, the reason I made these CPU improvement changes was in the hope that it would allow people to run CPython on medium-speed CPUs without crazy orphan rates.

config is Windows 10 64Bit

A 2.0 GHz A10 is not what I would call a medium-speed CPU, unfortunately. Sure, it might have 4 cores, but Python is effectively single-threaded, so that doesn't help at all. Pre-Ryzen AMD chips have poor single-threaded performance, so a 2 GHz A10 is equivalent in performance to a roughly 1.3 GHz Core i3 (if such a thing existed). I wish I could get p2pool to run nicely on such a CPU, but I'm afraid that's out of reach at the moment.

You've got plenty of RAM, so I think you can get it to work decently if you go through the steps I described to get Pypy running on Windows.

Also, if you could run p2pool with the --bench option, that would make the log file much more useful to me.
96  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: September 18, 2017, 09:46:41 PM
1) On the web interface of my node I find the following in relation to the shares: "Best share", "Verified heads", "Heads", "Verified tails", "Tails". I'm not sure I fully understand what they mean.
P2pool has a share chain that is very much like the Bitcoin blockchain, with a few differences. The tip of a chain (a block/share with no children) is known as a head. The root of a chain is known as the genesis block in Bitcoin (the block with no parent), but in p2pool it's known as a tail. P2pool's tails *do* have parents, unlike bitcoin, and consequently have a height greater than 1; however, they're called tails in p2pool because their parents have been pruned off from the node's copy of the share chain, so as far as your node is concerned, they have no living parents.

The best share is always a head. It's usually the head with the most work behind it, but not always. The rules for determining the head are a little complicated, and are different in different versions of p2pool.
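A toy illustration of those terms, using plain dicts instead of p2pool's tracker:

Code:
# hash -> parent hash; None means the parent was pruned off our copy
shares = {'a': None, 'b': 'a', 'c': 'b', 'd': 'b'}

parents = set(shares.values())
heads = sorted(h for h in shares if h not in parents)
tails = sorted(h for h, p in shares.items() if p not in shares)
print(heads)  # ['c', 'd'] -- two competing tips; one gets orphaned
print(tails)  # ['a'] -- acts as the root because its parent is gone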

Quote
     *I suppose "best share" is the last share added to the sharechain (I'm I right about this???).
Not necessarily, but usually. Just because a share is recent doesn't mean that it is placed at the tip of the chain.

Quote
     *"Verified heads"  -- Other heads that are or will be orphaned or so?
Yes, precisely.

Quote
     *What is the difference between "Verified heads" and "heads"?
      *What are the "Verified Tails" and "Tails"?
I'm not sure. I think unverified heads or tails are shares that have not yet been connected to the share chain, such as a share that was just downloaded during initial sync but whose ancestors have not yet been downloaded. However, I haven't verified that hypothesis, as I have not yet needed to work on that part of the code.

Quote
     *I suppose "Time first seen" gives the time that my node received a Inv ... Or is it the time when my node has completely downloaded the share?
Completely downloaded and mostly deserialized.

Quote
     *"Peer first received from" gives then probably the node that has send my node the Inv containing the share.
Yes, but it's a sendShares or share_reply message.

Quote
     *"Timestamp" is the timestamp from the node that has found the share I guess. But this timestamp probably depends on the accuracy of that nodes clock, and can in theory deviate from reality?
Correct. In the current p2pool master, the timestamp is required to be between 1 second after the previous share's timestamp and 59 seconds after the previous share's timestamp. Since the timestamps are used for minimum share difficulty adjustments, this can result in a situation where the share difficulty is very slow to adjust downwards after a rapid decrease in the network hashrate. In the jtoomimnet fork, I got rid of this timestamp clipping, and replaced it with a rule that rejects shares that are timestamped to come from the future in order to prevent share difficulty manipulation.

Keep in mind that the timestamp is the time at which the mining job was sent to the hardware, not the time at which the hardware returned the nonce value.
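Sketched out, with the constants from the description above (the exact future-timestamp tolerance in the real code may differ):

Code:
def clip_timestamp_mainnet(ts, prev_ts):
    # master: forced into [prev+1, prev+59] seconds, which is why
    # difficulty adjusts downward slowly after a hashrate drop
    return max(prev_ts + 1, min(ts, prev_ts + 59))

def accept_share_jtoomimnet(ts, now):
    # jtoomimnet: no clipping, but future-stamped shares are rejected
    # to prevent share difficulty manipulation
    return ts <= now

print(clip_timestamp_mainnet(1000, 1200))   # 1201: clipped upward
print(accept_share_jtoomimnet(1300, 1250))  # False: from the future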

Quote
3) Say for example that I want to know the time it takes my node to receive the latest share of the sharechain (so I want to know the time it takes my node to converge to the consensus sharechain). I probably can use the timestamp of the share generated by the node that found the share, as it will be very close to the moment that it sends it to the rest of the network.
Nope, it's not close. It will usually be off by around 5-50 seconds. But what you can do is use the "Peer first received from" line to browse to the info page for that share on the peer's address (you may have to guess the port number), and repeat, until eventually you get to a peer that says "Peer first received from: Null" for that share; then you can use the "Time first seen" shown by that node (or just the node that you got the share from, if you're lazy).

When I've done this measurement in the past, it's usually around 1 second per hop for nodes running CPython with 100 ms network latency, and around half that for nodes running pypy.

Quote
4) If my reasoning in point 3 is correct, I just need to find the time when my node is finished downloading the new share, and the share is added to the local sharechain. Is there a way to access the sharechain of my node directly through a log file or something? I know that p2pool creates logs in the directory /p2pool/data/bitcoin/, and occasionally I find references to shares downloaded and stuff like that in the output. But is there another file for the sharechain, that contains time data of when a new share is downloaded and added to the share chain?
If you run my code (e.g. 1mb_segwit), you can install the rfoo package for python, then run run_p2pool.py with the --rconsole command line option, and then run the rconsole command in a separate terminal. This will give you a python interpreter that effectively runs inside the p2pool process, and allows you access to all of the variables and functions of p2pool while it's running. The share objects can be found in the node.tracker.verified object.
1mb_segwit also now has a --bench command-line option, which I think you would find interesting.

You could also maybe parse the log in data/bitcoin/log. That's just the stdout output of p2pool for (usually) the last few days.
97  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: September 18, 2017, 09:41:08 PM
Okay, thanks. I will take a look in the coming days. Would be nice to get it running on Win10 with Python 2.7 (pypy does not run here ...)

I was able to get pypy running on Windows 10 by using the Windows Subsystem for Linux:

https://bitcointalk.org/index.php?topic=18313.msg21025074#msg21025074

However, the reason I made these CPU improvement changes was in the hope that it would allow people to run CPython on medium-speed CPUs without crazy orphan rates. I'm very much interested in hearing whether it works.
98  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: September 17, 2017, 04:28:32 AM
I pushed some more code to https://github.com/jtoomim/p2pool/tree/1mb_segwit that adds more CPU and (maybe) RAM performance improvements for p2pool. I think it should improve CPU usage and latency by about 30% or more. These improvements should reduce DOA and orphan rates on the network a bit. It looks like running p2pool on CPython with medium-slow CPUs should now be viable without huge DOA/orphan costs, although I still recommend using pypy whenever possible. The new code makes fewer memory allocations when serializing objects for network transmission, which might reduce total memory consumption, or it might not. We'll see in a few days.

I also added a performance profiling/benchmark mode. If p2pool is too slow for you, I would find it helpful if you ran python run_p2pool.py --bench and then sent me a snippet of the output, especially if you can get the output near when a share is received over the network.
99  Alternate cryptocurrencies / Mining (Altcoins) / D3 release effect was Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: September 13, 2017, 07:15:31 PM
More like 100%. The current network hashrate for DASH is around 35 TH/s, which is equivalent to about 2250 D3s. I expect Bitmain made at least 2k in their first batch.
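The unit count implied by those numbers (the per-unit rate is inferred from the figures here, not an official spec):

Code:
dash_network = 35e12  # ~35 TH/s DASH network hashrate
per_d3 = 15.5e9       # per-unit rate implied by 'about 2250 D3s'
print(int(dash_network / per_d3))  # ~2258 units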

All Dash/X11 pools will be affected equally.
100  Bitcoin / Development & Technical Discussion / Re: Payment Channel Payouts: An Idea for Improving P2Pool Scalability on: September 09, 2017, 07:25:49 PM
This scenario would probably be one in which the capital cost of the miner itself is nearly zero. If the miner capital cost is high, then the miner needs to be run nearly 24/7 in order to make a profit. Currently, capital costs (or amortized depreciation, if you prefer) are the greatest cost in Bitcoin mining, and electricity costs are the second-greatest. I expect capital costs of miners will always be significant, but will probably eventually be slightly less than electricity costs. If you're only running your miners when the sun shines (8-12 hours a day), then those capital costs become much harder to recoup.

Utility-scale solar is probably going to always be cheaper than rooftop solar. As of Dec 2016, the cost (LCOE) of rooftop solar was about 3x higher than utility-scale solar ($0.15/kWh vs $0.05/kWh). If it ever makes sense to mine at home with rooftop solar, then it will probably make even more sense to mine in a warehouse in the middle of a desert where it's sunny 360 days out of the year. That said, solar has a long way to go before it can catch up to hydro, which is about $0.015 to $0.05/kWh with 24/7 uptime.

Solar power is great, but it is not very well suited to mining. It's better suited to supplying the grid and helping with peak load, since solar power peaks when demand peaks, rather than trying to supply a 24/7 load like mining. Wind is usually cheaper, and might make a better partner, but still not perfect. Ultimately, hydro, nuclear, or (ugh) fossil fuels are what make the most economic sense.

heating your house
https://www.reddit.com/r/Bitcoin/comments/6y8305/if_pool_miners_run_segwit2x_they_get_a_better/dmmzl99/