Author Topic: Selfish Mining Simulation  (Read 14059 times)
d'aniel
Sr. Member
Activity: 461
Merit: 251
November 23, 2013, 08:54:45 PM
#21

At a quick glance, eb3full's post here predates the other two by a couple of weeks, and is more fully featured.

I'm slightly biased here, but only because I saw an early prototype of ebfull's simulator months ago, thought it was really cool, and since then have been looking forward to seeing the improved version.

Mostly unrelated, but this simulation was also acknowledged by the ES authors: http://hackingdistributed.com/2013/11/09/no-you-dint/

I started writing my simulation because this one didn't appear to be fast enough to do proper Monte Carlo simulations.  I think there'd be a lot of value in being able to do this.
kjj
Legendary
Activity: 1302
Merit: 1026
November 23, 2013, 09:03:12 PM
#22

Quote
If neither of us get to it first, I'm willing to pitch in 1 BTC as a bounty for building a general bitcoin network simulator framework. The simulator should be able to account for latency between nodes, and ideally within a node.  It needs to be able to simulate an attacker that owns varying fractions of the network, and make decisions based only on what the attacker actually knows.  It needs to be able to simulate this "attack" and should be generic enough to be easily modified for other crazy schemes.

So far, this does not meet my requirements.  Given that it is written in JavaScript, I doubt it ever will.*

Still, this is really neat work, and I've enjoyed watching several runs of it evolve.  I was planning to send him a tip just for that, and I'd encourage others to do the same.

* Note that I can be outvoted on this matter.  From the later email, "Also, I don't want anyone to think that they need to satisfy me personally to collect on either of these two bounties.  I will pay mine for a product that is generally along the lines I have laid out, if a couple of the core devs (Gavin, Greg, Jeff, sipa, Luke, etc) agree that your work is useful."

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
socrates1024
Andrew Miller
Full Member
Activity: 126
Merit: 110
November 23, 2013, 09:27:55 PM
#23

Quote
So far, this does not meet my requirements.  Given that it is written in JavaScript, I doubt it ever will.*
Can I infer from this that you do not believe the requirements are satisfied, specifically because it is not generic enough? Otherwise could you clarify why not?

amiller on freenode / 19G6VFcV1qZJxe3Swn28xz3F8gDKTznwEM
[my twitter] [research@umd]
I study Merkle trees, credit networks, and Byzantine Consensus algorithms.
eb3full (OP)
VIP
Full Member
Activity: 198
Merit: 101
November 24, 2013, 02:39:43 AM
Last edit: November 24, 2013, 02:53:02 AM by eb3full
#24

I intend to release a much more polished, documented and approachable general simulator in the coming days, far more efficient than my original post. Since then I've made progress implementing difficulty adjustments, transactions (mempool, UTXO), and even niche things like mapOrphans and bitcoind's SendMessages-style polling. (It's also now tested to work on Firefox; Chrome is still better.)

I intend for it to use a middleware pattern when I'm all done:

Code:
var btc = new Node()
btc.use(PeerMgr) // peermgr.js
btc.use(Inventory) // inventory.js
btc.use(Transactions) // transactions.js
btc.use(Blockchain) // blockchain.js
btc.use(Miner) // miner.js

var net = new Network()
net.add(100, btc)

net.run(10000) // run 10 seconds

You can attach to network events with .on, and do PoW tasks/polling with .prob and .tick.
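
Roughly, that usage might look like this (the exact handler arguments may change before I release it):

Code:
// sketch only; argument shapes are still subject to change
btc.on('block', function(block) {
    // react to a network event delivered to this node
})

btc.tick(1000, function() {
    // polling task, fired every simulated second
})

btc.prob(0.25, function() {
    // probabilistic (PoW-style) event, 25% chance per tick
})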

Ultimately I just hope my simulator architecture can help inspire future research for the community. Thanks for the feedback.

"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." John von Neumann
buy me beer: 1HG9cBBYME4HUVhfAqQvW9Vqwh3PLioHcU
d'aniel
Sr. Member
Activity: 461
Merit: 251
November 26, 2013, 03:45:52 AM
#25

Quote
That's cool, but the key metric for a miner is not percentage of revenue, it's revenue per hour.

It seems to me that this behavior (*) only gains a higher percentage of revenue by lowering overall revenue. It'd be nice to see some simulated numbers that show that, though.

(I'm not sure how you'd even do a simulator though, since this behavior lowers difficulty, which in turn would attract more miners. Does the simulator assume that the SHA256d processing power of the world is static even though in reality it is exponentially increasing?)

(*) Sorry, I still refuse to call it selfish mining, because so far as I can tell there's actually no advantage to doing it.
I think the idea is that if the attack can be sustained long enough for the difficulty to adjust downward to reflect the much higher orphan rates induced by the attack, then that higher percentage of revenue will equate to a higher overall revenue.
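
As a toy illustration (numbers made up, not from the paper or the simulator):

Code:
attacker hashrate               = 33%
selfish share of blocks         = 40%   (assumed for illustration)
honest baseline                 = 0.33 * 6 blocks/hr * 25 BTC = 49.5 BTC/hr
after the difficulty retargets  = 0.40 * 6 blocks/hr * 25 BTC = 60.0 BTC/hr
before the retarget the total non-orphaned block rate is below 6/hr, so
BTC/hr can drop during the transition even while the percentage is higher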
kjj
Legendary
Activity: 1302
Merit: 1026
November 26, 2013, 04:05:29 AM
#26

Quote
That's cool, but the key metric for a miner is not percentage of revenue, it's revenue per hour.

It seems to me that this behavior (*) only gains a higher percentage of revenue by lowering overall revenue. It'd be nice to see some simulated numbers that show that, though.

(I'm not sure how you'd even do a simulator though, since this behavior lowers difficulty, which in turn would attract more miners. Does the simulator assume that the SHA256d processing power of the world is static even though in reality it is exponentially increasing?)

(*) Sorry, I still refuse to call it selfish mining, because so far as I can tell there's actually no advantage to doing it.
I think the idea is that if the attack can be sustained long enough for the difficulty to adjust downward to reflect the much higher orphan rates induced by the attack, then that higher percentage of revenue will equate to a higher overall revenue.

So the attack indeed assumes that the SHA256d processing power of the world is static even though in reality it is exponentially increasing?

And it indeed ignores the fact that a lower difficulty means more miners?

The attack assumes a lot of things.  My personal favorite is that global block spread can be described by a state machine with percentage chances for state transitions.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
d'aniel
Sr. Member
Activity: 461
Merit: 251
November 26, 2013, 04:15:21 AM
#27

Quote
So the attack indeed assumes that the SHA256d processing power of the world is static even though in reality it is exponentially increasing?

And it indeed ignores the fact that a lower difficulty means more miners?
Getting from a non-selfish steady state to a selfish one would certainly be risky and expensive for the attacker - he'd have to struggle to maintain his fraction of the overall hashing power along the way - but it might be worthwhile for him in the long run.  I guess that's one reason why a simulator would be nice to have Smiley
socrates1024
Andrew Miller
Full Member
Activity: 126
Merit: 110
November 26, 2013, 04:19:17 AM
#28

Quote
Getting from a non-selfish steady state to a selfish one would certainly be risky and expensive for the attacker - he'd have to struggle to maintain his fraction of the overall hashing power along the way - but it might be worthwhile for him in the long run.  I guess that's one reason why a simulator would be nice to have Smiley
A miner could (I think, haven't simulated it) become "gradually" selfish, by just withholding some blocks for a little bit of time, to keep the difficulty down and his percentage of the revenue up!
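
Something like this, as a rough sketch (not tied to any particular simulator API):

Code:
// "Gradually selfish" policy sketch: instead of broadcasting a freshly mined
// block immediately, hold it for a tunable delay, with the aim of keeping the
// difficulty down and the withholder's share of revenue up (as described above).
function makeGradualWithholder(delayMs) {
    return function onBlockFound(block, broadcast) {
        setTimeout(function() {
            broadcast(block) // release the block after the chosen delay
        }, delayMs)
    }
}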

amiller on freenode / 19G6VFcV1qZJxe3Swn28xz3F8gDKTznwEM
[my twitter] [research@umd]
I study Merkle trees, credit networks, and Byzantine Consensus algorithms.
d'aniel
Sr. Member
Activity: 461
Merit: 251
November 26, 2013, 08:00:55 AM
#29

Quote
Getting from a non-selfish steady state to a selfish one would certainly be risky and expensive for the attacker - he'd have to struggle to maintain his fraction of the overall hashing power along the way - but it might be worthwhile for him in the long run.  I guess that's one reason why a simulator would be nice to have Smiley
A miner could (I think, haven't simulated it) become "gradually" selfish, by just withholding some blocks for a little bit of time, to keep the difficulty down and his percentage of the revenue up!
Yeah, makes sense to get to "full selfishness" in a kind of quasistatic process in order to avoid any significant loss of revenue during the transition.
socrates1024
Andrew Miller
Full Member
Activity: 126
Merit: 110
December 06, 2013, 12:20:39 AM
#30

Quote
So far, this does not meet my requirements.  Given that it is written in JavaScript, I doubt it ever will.*
Can I infer from this that you do not believe the requirements are satisfied, specifically because it is not generic enough? Otherwise could you clarify why not?

Bump because I am still waiting for kjj to clarify his statement, or for the other people who committed to the bounty (jgarzik, etc) to chime in.

amiller on freenode / 19G6VFcV1qZJxe3Swn28xz3F8gDKTznwEM
[my twitter] [research@umd]
I study Merkle trees, credit networks, and Byzantine Consensus algorithms.
kjj
Legendary
Activity: 1302
Merit: 1026
December 06, 2013, 02:05:09 AM
#31

Quote
So far, this does not meet my requirements.  Given that it is written in JavaScript, I doubt it ever will.*
Can I infer from this that you do not believe the requirements are satisfied, specifically because it is not generic enough? Otherwise could you clarify why not?

Bump because I am still waiting for kjj to clarify his statement, or for the other people who committed to the bounty (jgarzik, etc) to chime in.

Oh, sorry.  I thought I clarified this already.

The initial release was not general enough, and also seemed unlikely to become fast enough.

I've been a bit short on free time lately, so I haven't been following his progress.  He's done some amazing work, and it sounded like he was working hard on the first part.

JavaScript has a reputation for being, shall we say, not quick.  I'd be delighted to be wrong about this, but I have a hard time seeing it being able to run a few hundred sessions with a few thousand nodes out to several hundred thousand blocks, with an appropriate level of detail and accuracy.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
eb3full (OP)
VIP
Full Member
Activity: 198
Merit: 101
December 06, 2013, 02:33:14 AM
#32

Quote
a few hundred sessions with a few thousand nodes out to several hundred thousand blocks, with an appropriate level of detail and accuracy.

This will definitely be my target then, but it depends on what you're simulating. If you're simulating a selfish mining attack, you don't need to simulate transactions (unless you're testing transactions per second or some other idea). If you're simulating transaction propagation, you don't really need to simulate a blockchain.

I'm actually pretty happy with V8's performance; it's not native, but it gets the job done and gives you the rapid-prototyping advantage. With WebWorkers, and perhaps some node.js fun, it could be scaled to large network simulations. I just enjoy building it for what it teaches you about bitcoin. It could be a useful educational tool even if someone comes up with a super-efficient Haskell simulator to obviate mine.

"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." John von Neumann
buy me beer: 1HG9cBBYME4HUVhfAqQvW9Vqwh3PLioHcU
socrates1024
Andrew Miller
Full Member
Activity: 126
Merit: 110
December 06, 2013, 03:19:12 AM
#33

Quote
Oh, sorry.  I thought I clarified this already.

The initial release was not general enough, and also seemed unlikely to become fast enough.

I've been a bit short on free time lately, so I haven't been following his progress.  He's done some amazing work, and it sounded like he was working hard on the first part.

JavaScript has a reputation for being, shall we say, not quick.  I'd be delighted to be wrong about this, but I have a hard time seeing it being able to run a few hundred sessions with a few thousand nodes out to several hundred thousand blocks, with an appropriate level of detail and accuracy.
Please say what you think would count as "general" enough and what is "fast" enough, because imo it is already sufficient in both ways.

Here is evidence that it is both reasonably general and reasonably fast:
a) In fact, ebfull's implementation is a fully general-purpose network simulator, mostly developed before the selfish mining paper was written. He only modified it to illustrate the selfish mining attack after this thread showed up! I unfortunately don't have a link to the original interface to show you its generality; maybe eb3full will respond to this thread with one... Additionally, the selfish mining simulation interface already supports varying the main parameter (alpha), whether or not network dominance is assumed (essentially gamma=ordinary or gamma=1.0 from the paper), and whether or not the paper's suggested patch is implemented. What more generality are you expecting?
b) I just ran an experiment with 100 nodes in my browser at 1000x speed (with graphics turned off). Back of the envelope, that means I could do 100k blocks in 16 hours, so it would cost roughly $160 worth of EC2 time to run a hundred sessions out to 100k blocks. That seems tolerable to me.
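
Spelling that out, assuming roughly $0.10/hr EC2 instances:

Code:
1000x speed, ~10 min/block real time   ->  ~0.6 s of wall clock per block
100,000 blocks * 0.6 s                 ->  ~60,000 s  =  ~16.7 hours per session
100 sessions * 16 hours * $0.10/hr     ->  ~$160 of EC2 time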

I'm annoyed just because I feel like you're trying to avoid honoring your bounty by being vague about the conditions - a simulation can always be "faster" and more "general."

amiller on freenode / 19G6VFcV1qZJxe3Swn28xz3F8gDKTznwEM
[my twitter] [research@umd]
I study Merkle trees, credit networks, and Byzantine Consensus algorithms.
kjj
Legendary
Activity: 1302
Merit: 1026
December 06, 2013, 05:34:04 AM
#34

Quote
Oh, sorry.  I thought I clarified this already.

The initial release was not general enough, and also seemed unlikely to become fast enough.

I've been a bit short on free time lately, so I haven't been following his progress.  He's done some amazing work, and it sounded like he was working hard on the first part.

JavaScript has a reputation for being, shall we say, not quick.  I'd be delighted to be wrong about this, but I have a hard time seeing it being able to run a few hundred sessions with a few thousand nodes out to several hundred thousand blocks, with an appropriate level of detail and accuracy.
Please say what you think would count as "general" enough and what is "fast" enough, because imo it is already sufficient in both ways.

Here is evidence that it is both reasonably general and reasonably fast:
a) In fact, ebfull's implementation is a fully general-purpose network simulator, mostly developed before the selfish mining paper was written. He only modified it to illustrate the selfish mining attack after this thread showed up! I unfortunately don't have a link to the original interface to show you its generality; maybe eb3full will respond to this thread with one... Additionally, the selfish mining simulation interface already supports varying the main parameter (alpha), whether or not network dominance is assumed (essentially gamma=ordinary or gamma=1.0 from the paper), and whether or not the paper's suggested patch is implemented. What more generality are you expecting?
b) I just ran an experiment with 100 nodes in my browser at 1000x speed (with graphics turned off). Back of the envelope, that means I could do 100k blocks in 16 hours, so it would cost roughly $160 worth of EC2 time to run a hundred sessions out to 100k blocks. That seems tolerable to me.

I'm annoyed just because I feel like you're trying to avoid honoring your bounty by being vague about the conditions - a simulation can always be "faster" and more "general."

I've already posted a means to overrule my judgment.  If you feel that it should be paid now, just get a couple of the guys from my list to say so and I'll pay it.

My intention in providing the bounty was to encourage the development of a tool that would be useful for researchers to test their ideas.  For the most part, that means it needs to model how nodes actually work: a model of connections and latency (network, disk, and memory), and a model of how long it takes to verify blocks, taking into account transactions already verified.  You may have missed it, but I also posted a bounty for patches to enable profiling in the client, so that we can collect real-world performance data to plug back into the simulation.

In short, I'm looking for a simulator of the real network, not an implementation of the nonsense model they used in their paper.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
socrates1024
Andrew Miller
Full Member
Activity: 126
Merit: 110
December 06, 2013, 05:42:22 AM
#35

Quote
I've already posted a means to overrule my judgment.  If you feel that it should be paid now, just get a couple of the guys from my list to say so and I'll pay it.

My intention in providing the bounty was to encourage the development of a tool that would be useful for researchers to test their ideas.  For the most part, that means it needs to model how nodes actually work: a model of connections and latency (network, disk, and memory), and a model of how long it takes to verify blocks, taking into account transactions already verified.  You may have missed it, but I also posted a bounty for patches to enable profiling in the client, so that we can collect real-world performance data to plug back into the simulation.

In short, I'm looking for a simulator of the real network, not an implementation of the nonsense model they used in their paper.
Ok, thanks for the response. For now I'm satisfied that this is a clear enough set of things to model.

amiller on freenode / 19G6VFcV1qZJxe3Swn28xz3F8gDKTznwEM
[my twitter] [research@umd]
I study Merkle trees, credit networks, and Byzantine Consensus algorithms.
eb3full (OP)
VIP
Full Member
Activity: 198
Merit: 101
December 06, 2013, 08:11:38 AM
#36

Quote
Any chance we can get a readout on hashes per second and difficulty? The probability of solving a hash during any particular second changes over time as the difficulty changes.

Note: It would be pointless to simulate actual hash functions or authentication. The simulator instead simulates probabilistic events over time (say, 25% chance of mining a block = 25% hashrate). As you'd expect, two nodes may solve two different blocks at the exact same moment, or no block may appear for quite a while.

I have indeed added difficulty adjustments (to http://ebfull.github.io/test.html), and those adjustments do affect the probability of a node (which has adopted the new difficulty) solving a block. The page shows each node's network percentage as well.

"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." John von Neumann
buy me beer: 1HG9cBBYME4HUVhfAqQvW9Vqwh3PLioHcU
eb3full (OP)
VIP
Full Member
Activity: 198
Merit: 101
December 06, 2013, 08:58:35 PM
Last edit: December 06, 2013, 09:09:19 PM by eb3full
#37

You say "25% chance of mining a block = 25% hashrate". 25% chance of mining a block over what time period? If you have a 25% chance of mining a block every 10 minutes, and this doesn't change, then your revenue doesn't change. There's no need for a simulator to measure that. So obviously there must be something more going on. As I understand it, what goes on is that the difficulty decreases over time (because everyone is forced to waste time mining orphans), and therefore the chance of mining a block every 10 minutes increases. Obviously this is going to take many blocks to happen, though. For the first 2016 blocks, the non-orphaned blocks per minute found by everyone, including the attacker, will decrease. For the next 2016 blocks... I don't know for sure. That's why I'm interested in seeing the simulator. On the one hand, the attacker will be wasting time on orphans. On the other hand, the difficulty has decreased.

When I say "25% chance of mining a block" I mean per 100 msec tick. The difficulty dilutes this probability on a per-node basis: at a difficulty of 1000, a node with 31% of the hashrate has a 0.00031 chance of mining a block each 100 msec.

Let me make clear that the average time between blocks is an emergent phenomenon of the simulation. Often, just as in the real network, blocks are solved within moments of each other, even though on average they occur every ten minutes once the difficulty adjustments are allowed to take their course. You can watch the difficulty adjustments in the javascript console, but that page will be slow at reaching 2016-multiple heights because it is also simulating a UTXO/polling/propagation system for other purposes.

Code:
(height = 2016) Difficulty adjustment: from 1000 to 1441.9452528101006 (1.4419452528101004x)
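
In code, that per-tick model looks roughly like this (names are illustrative, not the simulator's actual internals):

Code:
// Each 100 msec tick, every mining node flips a biased coin:
//   p(block this tick) = node's hashrate share / difficulty
function tickMining(nodes, difficulty) {
    var winners = []
    for (var i = 0; i < nodes.length; i++) {
        var p = nodes[i].hashrateShare / difficulty // e.g. 0.31 / 1000 = 0.00031
        if (Math.random() < p) winners.push(nodes[i]) // several nodes can win the same tick
    }
    return winners // empty on most ticks
}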

Here's the way the simulator currently handles mining: when a node calls its own .mine() method with an argument of network percentage, that network percentage is allocated to the node. What remains in the network can be claimed by other miners, or nodes could simply call .mine(true) and receive an evenly distributed portion of the unallocated hashrate. Here's how you can simulate bitcoin-like mining pools:

Code:
btc.init(function(self) {
    switch(self.id) {
    case 0:
        self.mine(0.31) // btcguild
        break;
    case 1:
        self.mine(0.22)
        break;
    case 2:
        self.mine(0.13)
        break;
    case 3:
        self.mine(0.06)
        break;
    case 4:
        self.mine(0.05)
        break;
    case 5:
        self.mine(0.03)
        break;
    case 6:
        self.mine(0.02)
        break;
    case 7:
        self.mine(0.02)
        break;
    default:
        if (self.id < 100)
            self.mine(true); // split the unallocated hashrate evenly
        else
            self.mine(false); // non-mining node
        break;
    }
});

The above code results in the following (default difficulty is 1000):

[screenshot]

pnothing is the probability that nothing happens each time the event fires (currently every 100 msec); mprob is the node's claimed hashrate.

An emergent property of the simulation is that (because the hashrate does not change) the difficulty settles at a certain point, as you'd expect.

Quote
To put it another way, one thing I'd really like to know is how long it takes for the attacker to break even, let alone gain from this attack. I don't see anything like that in this simulator.

Quote
As I say above, another consideration is that the network is actually dynamic, and not static, but if we can start with something simple like "how long does it take to break even if everything stays static", I think that can give us a better feel for how much the dynamic nature of reality is going to matter. It also gives us an idea of how long we as a community have to respond and try to punish anyone performing this attack. If it's only 2016 blocks or so, okay, that's one thing to deal with. If it's 20,160 blocks, then it's going to be much harder to successfully pull this off in the real world.

Currently it is more useful to approach the simulator like this: run it normally and confirm that revenue tracks hashrate, then run it with the attack and watch the revenue over time exceed normal.

I hope to have a more accurate answer to your questions (so that the actual thresholds of the attack can be witnessed and graphed) soon.

"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." John von Neumann
buy me beer: 1HG9cBBYME4HUVhfAqQvW9Vqwh3PLioHcU
eb3full (OP)
VIP
Full Member
Activity: 198
Merit: 101
December 06, 2013, 11:55:11 PM
#38

Quote
Well, no, I wouldn't expect that. Yes, the hashrate does not change, but because a large portion of the hashrate is wasted on orphaned blocks, the difficulty should go down.

Sorry, I was speaking of the simulator in general, not the attack.

You will not see an effect from the difficulty adjustments in the current simulator, because the attacker will never retain a large enough lead coinciding with a difficulty adjustment for it to matter. If you want to see the effects, change target_avg_between_blocks from 2.5 minutes (litecoin) to something like 10 seconds, so the attacker will have an extremely large lead on the network. I haven't investigated the effects of the difficulty adjustment yet, but it could be interesting.
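
For example (shown in seconds for illustration; check the source for the exact name and units):

Code:
target_avg_between_blocks = 10   // was 2.5 * 60, litecoin's 2.5 minutes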

"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." John von Neumann
buy me beer: 1HG9cBBYME4HUVhfAqQvW9Vqwh3PLioHcU
eb3full (OP)
VIP
Full Member
Activity: 198
Merit: 101
December 08, 2013, 06:49:28 PM
Last edit: December 09, 2013, 02:58:57 AM by eb3full
#39

Update:

I have now published a much better version of the simulator at the original URL.

http://ebfull.github.io/

This version performs much better than before and is also much more accurate. (It still runs best on Chrome.)

Here are just a few of the changes:
  • discrete probabilistic events (mining) now more accurately model a Poisson distribution (see the sketch after this list)
  • inter-node latency is more accurate
  • full-blown inventory system, modeled by a network-wide state machine for efficiency
  • peer manager more accurately models TCP (especially the sequence of messages)
  • new modular "middleware" architecture for the simulator
  • events use the Closure Library's priority queue
  • difficulty adjustment added
  • blockchain branches are compared by work, not just height
  • much more memory efficient
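
For the Poisson point, the idea is roughly the following (illustrative, not the simulator's actual code):

Code:
// Poisson block arrivals via exponential inter-arrival times: with an
// expected rate of (hashrate share / difficulty) blocks per tick, sample
// how many ticks until this node's next block.
function ticksUntilNextBlock(hashrateShare, difficulty) {
    var rate = hashrateShare / difficulty        // expected blocks per tick
    return -Math.log(1 - Math.random()) / rate   // exponential sample, in ticks
}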

The page now just runs several trials of every network percent between 10% and 49%, for every combination of sybil attack and selfish mining attack toggle. 200 nodes are simulated for ~5000 blocks each trial.

I left this running and had the following results:

[result charts]

Disclaimer: This simulation does not accurately reflect the bitcoin network.
  • The topology of the simulation, as well as inter-node latency, differs from the real bitcoin network, although I've made it more accurate. Does anyone know what the average latency between nodes should be? How much variance? Packet loss characteristics?
  • Block validation is assumed to be instantaneous because no transactions are occurring. (Yet!)
  • There are only 200 nodes being simulated. Hopefully in the future the simulation can support thousands of nodes.
  • Nodes just grab whatever peers they can get and don't optimize for latency.



More details:

Is it done? No.

I'll finish it and publish it as an independent project with better documentation soon.

Is this simulator general enough? Yes!

You can cook up any type of p2p simulation with this. It took me a few minutes to build a p2p (trustful) time syncing protocol simulation. (Demo here.) The code is pretty fun and easy to work with.

TODO:

  • Is my implementation of the selfish mining attack correct? See miner.js's .attack()
  • (de)registering events in NodeProbablisticTickEvent needs to update delay retroactively
  • Thoroughly test UTXO system.
  • Integrate mempool with blockchain.
  • mapOrphans (transactions and blocks)
  • getheaders, relay set
  • Document and release as standalone project
  • Use WebWorkers for something, anything.
  • Play with node.js, maybe throw this on EC2 and see what I can do.

"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." John von Neumann
buy me beer: 1HG9cBBYME4HUVhfAqQvW9Vqwh3PLioHcU
eb3full (OP)
VIP
Full Member
Activity: 198
Merit: 101
December 08, 2013, 08:29:19 PM
#40

Quote
Percentage of revenue is irrelevant. BTC per hour is the important number.

Would you mind explaining why the distinction is so critical? Those two numbers are trivially mathematically related. Never mind, I know why you're asking this. I'll start plotting this instead.
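
For the plots, the conversion is simple (assuming the 25 BTC subsidy and ignoring fees):

Code:
// BTC/hour = revenue share * non-orphaned blocks per hour * block reward.
// The middle term is what orphans and difficulty adjustments move around,
// which is why the share alone isn't the whole story.
function btcPerHour(revenueShare, blocksPerHour, blockReward) {
    return revenueShare * blocksPerHour * blockReward // e.g. 0.31 * 6 * 25 = 46.5
}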

"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." John von Neumann
buy me beer: 1HG9cBBYME4HUVhfAqQvW9Vqwh3PLioHcU