Bitcoin Forum
November 06, 2024, 05:20:00 PM *
News: Latest Bitcoin Core release: 28.0 [Torrent]
 
Author Topic: Please run a full node  (Read 6661 times)
jonald_fyookball
Legendary
May 13, 2017, 01:10:29 PM
 #141

It IS hopeless.

lol. I had this really long reply to him typed up and just deleted it, as I knew it would go nowhere. I was thinking maybe there was some unique thing that happens when you stick a bunch of miners in a pool that doesn't happen if they were all solo mining, so I was writing a simulation of 100k miners. But after seeing the true distribution (which, despite knowing better, I was still thinking of as a normal one), I shelved it, as I had proved enough to myself that what he's saying just can't be true. For me, the whole "average" thing is sort of misleading, as it immediately puts a normal distribution in the back of your mind. But when you actually see it, things get pretty clear. At least it did for me.

Interesting. So what is the distribution, then, if not Gaussian?

jonald_fyookball
Legendary
May 13, 2017, 01:12:31 PM
 #142

Lol.

Well, 95% of all blocks being mined into existence are mined using my software, so if he can't believe me telling him he's wrong, then I'm not sure how much higher an authority you can appeal to?

Perhaps he is mentally stuck on "proof of work" when a more descriptive term might be "proof of lottery participation".

(Hint: a Powerball lottery doesn't necessarily have a winner every week. If no one wins, the pot keeps growing until a winning ticket is found.)

dinofelis
Hero Member
May 13, 2017, 01:13:04 PM
 #143

Interesting. So what is the distribution, then, if not Gaussian?

Exponential.
(at least, the "inter-block times" are distributed according to an exponential distribution)
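This is easy to check numerically. A tiny sketch (the 600 s target is bitcoin's; everything else here is an illustrative simulation, not real chain data):

```python
import math
import random

random.seed(42)

TARGET = 600.0   # target average inter-block time, in seconds
N = 20_000       # number of simulated blocks

# Solving is memoryless, so inter-block times follow an exponential
# distribution with mean TARGET; sample them directly.
times = [random.expovariate(1 / TARGET) for _ in range(N)]

mean = sum(times) / N
median = sorted(times)[N // 2]

print(f"mean:   {mean:6.1f} s (theory: {TARGET:.1f})")
print(f"median: {median:6.1f} s (theory: {TARGET * math.log(2):.1f})")
```

The telltale asymmetry: the median lands near 0.693 × the mean, so more than half of all blocks arrive faster than the "average", while a long tail takes far longer - nothing like a Gaussian.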
jonald_fyookball
Legendary
May 13, 2017, 01:17:27 PM
 #144

Exponential.
(at least, the "inter-block times" are distributed according to an exponential distribution)


gotcha. thx.

Viper1
Sr. Member
May 13, 2017, 01:53:14 PM
 #145

Exponential.
(at least, the "inter-block times" are distributed according to an exponential distribution)

gotcha. thx.

Yeah, like he said, exponential. My first full run used a low difficulty, and the solve times were really low, as I just wanted to get a feel for how it would work out. For that run, the "average" solve time was 4 seconds. The longest was 28 seconds. The median was somewhere between 2 and 3 seconds, i.e. half the blocks were solved before that, the other half after. For the last run I had increased the difficulty, tweaked some performance issues, and was running a much longer test, but the damn computer locked up, so I haven't bothered starting it up again. The interim data from that test was following the same sort of distribution anyway.
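For anyone who wants to reproduce that shape without a locked-up computer, here's a toy hash-lottery along the same lines (the "helloworld" preimage and the 1-in-4096 toy difficulty are made up for illustration):

```python
import hashlib

TARGET = 2**256 // 4096   # toy difficulty: ~1 in 4096 hashes "solves" a block
BLOCKS = 400

solve_times = []          # hash attempts needed per block
nonce = 0
for _ in range(BLOCKS):
    attempts = 0
    while True:
        attempts += 1
        nonce += 1
        h = hashlib.sha256(f"helloworld-{nonce}".encode()).digest()
        if int.from_bytes(h, "big") < TARGET:
            break
    solve_times.append(attempts)

mean = sum(solve_times) / BLOCKS
median = sorted(solve_times)[BLOCKS // 2]
print(f"mean attempts:   {mean:.0f} (expected ~4096)")
print(f"median attempts: {median} (expected ~{int(4096 * 0.693)})")
```

Same picture as the bigger simulation: the median solve time comes in well under the mean, with a few blocks taking several times the "average".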

BTC: 1F8yJqgjeFyX1SX6KJmqYtHiHXJA89ENNT
LTC: LYAEPQeDDM7Y4jbUH2AwhBmkzThAGecNBV
DOGE: DSUsCCdt98PcNgUkFHLDFdQXmPrQBEqXu9
franky1
Legendary
May 13, 2017, 02:17:03 PM
Last edit: May 13, 2017, 02:31:19 PM by franky1
 #146

I was thinking maybe there was some unique thing that happens when you stick a bunch of miners in a pool that doesn't happen if they were all solo mining.

actually there are a few things, which help.
in layman's terms (simplified, so don't nitpick it literally):

say you had to go from "helloworld-0000001" to "helloworld-9999999", hashing each try, where the solution is somewhere in between.
solo mining takes 10mill attempts, and each participant does this:
"helloworld-0000001" to "helloworld-9999999", hashing each try (very inefficient)
however, a pool gives participants:
A: "helloworld-0000001" to "helloworld-2499999", hashing each try
B: "helloworld-2500000" to "helloworld-4999999", hashing each try
C: "helloworld-5000000" to "helloworld-7499999", hashing each try
D: "helloworld-7500000" to "helloworld-9999999", hashing each try

which is efficient...
which, at one dimension, makes people think that killing POOLS makes things take 4x longer...


but here is the failure...
pool U does "helloWORLD-0000001" to "helloWORLD-9999999", hashing each try: 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool V does "HELLOworld-0000001" to "HELLOworld-9999999", hashing each try: 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool W does "helloworld-0000001" to "helloworld-9999999", hashing each try: 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool X does "HElloworld-0000001" to "HElloworld-9999999", hashing each try: 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool Y does "HelloWorld-0000001" to "HelloWorld-9999999", hashing each try: 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool Z does "HelLoWorLd-0000001" to "HelLoWorLd-9999999", hashing each try: 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
it takes each pool a similar time to get to 9999999, and each would get a solution in between, should they not give up.
and if you take away pools W, X and Y, guess what..
pool Z, doing "HelLoWorLd-0000001" to "HelLoWorLd-9999999", hashing each try, would NOT suddenly take 4x longer to get to 9999999,
because Z is not working on a quarter of the nonce range of the other pools!

because the work pool Z is doing ('HelLoWorLd') is not linked to the other 3 pools.

so, two-dimensionally:
pool U does "helloWORLD-0000001" to "helloWORLD-9999999": 20min to get to 10mill (average 10min to win)
pool V does "HELLOworld-0000001" to "HELLOworld-9999999": 20min to get to 10mill (average 10min to win)
pool W does "helloworld-0000001" to "helloworld-9999999": 20min to get to 10mill (average 10min to win)
pool X does "HElloworld-0000001" to "HElloworld-9999999": 20min to get to 10mill (average 10min to win)
pool Y does "HelloWorld-0000001" to "HelloWorld-9999999": 20min to get to 10mill (average 10min to win)
pool Z does "HelLoWorLd-0000001" to "HelLoWorLd-9999999": 20min to get to 10mill (average 10min to win)

because they are not LOSING efficiency, pool Z's "HelLoWorLd-0000001" to "HelLoWorLd-9999999" still takes 20min to get to 10mill (average 10min to win).


now, do you want to know the mind-blowing part..
let's say we had 10 minutes of time.
you would think that, if pool W had 650 peta and pool Z had 450 peta,
pool Z = 14 minutes, due to the hash-rate difference.

but
what if i told you that, out of the 10 minutes, up to 2 minutes is wasted on propagation, latency, validation, the UTXO cache.. (note: not the hashing)
so
if pool W had 650 peta
and pool Z had 450 peta,
pool Z = 11min33, due to other factors, because the calculating of hashes is not based on 10 minutes.. but on only ~8ish minutes (not literally) of hashing occurring per new block to get from 0 to 9999999 (not literally)

now imagine Z did SPV mining.. to save the seconds-to-2-minutes of the non-hashing tasks (propagation, latency, validation, the UTXO cache.. note: not the hashing)
Z averages under 11min:33sec

so if Z went it alone, his average would be UNDER the 11:33 average


so while some are arguing that, out of 6 blocks,
U wins once, V wins once, W wins once, X wins once, Y wins once, Z wins once..
they want you to believe it takes 60 minutes per pool to solve a block (facepalm), because they only see W having 1 block in an hour.

if you actually asked each pool not to give up/stale/orphan.. you would see the average is 10 minutes (SPV: 10min average, or 11:33 if they validate/propagate).. but only 1 out of 6 gets to win, thus only 1 gets to be seen.

but if you peel away what gets to be seen and play out scenarios for the pools that are not seen (scenarios where they didn't give up).. you would see it's not 60 minutes

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
jonald_fyookball
Legendary
May 13, 2017, 02:25:21 PM
 #147


but
what if i told you that, out of the 10 minutes, up to 2 minutes is wasted on propagation, latency, validation, the UTXO cache.. (note: not the hashing)

Not gonna argue with you, Franky, 'cause I'd simply be repeating myself.

On a side note... a question for -ck: how much time is actually spent validating, and is this typically done in parallel?


franky1
Legendary
May 13, 2017, 02:35:23 PM
Last edit: May 13, 2017, 02:45:36 PM by franky1
 #148


On a side note... a question for -ck: how much time is actually spent validating, and is this typically done in parallel?


ask him
not to be biased toward the leanest, most linear block.. but an average block that has some quadratic hashing and where the UTXO cache delays things
and
not to be biased toward FIBRE header-only relay.. but an average full-block relay, or an average where latency and other things are included, such as average connection counts
and
all the other non-hashing functions, then come to a total.

and guess what.. if they try to argue it's all just milliseconds of non-hashing functions...
then that debunks all the issues core extremists ever had against "big blocks"

P.S.
i'm gonna laugh when he wants to nitpick a '2min' difference.. but cannot explain himself out of the 50-60 min difference he thinks exists

-ck
Legendary
May 13, 2017, 02:48:37 PM
 #149

Block validation speed depends on a combination of software, hardware, optimisations, block complexity, etc. An average current 1MB block is about 3000 inputs and 30,000 sigops, and on my pool's heavily customised coin daemon and server hardware it takes 70ms. This is done in parallel to the functioning of bitcoind, but it cannot process more than one block at a time - if it did, the memory requirements of doing so would blow out of all proportion for a sigop-heavy block. Older versions of the core client were much slower (such as 0.12, which BU is based on). This doesn't take into account the time to generate new work for the pool, which adds another 70ms.

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
jonald_fyookball
Legendary
May 13, 2017, 02:50:28 PM
 #150

An average current 1MB block is about 3000 inputs, 30,000 sigops and on my pool's heavily customised coin daemon and server hardware it takes 70ms. [...] This doesn't take into account time to generate new work for the pool which adds another 70ms.

70 milliseconds?

So, then... negligible, right? As far as orphan rates go.


dinofelis
Hero Member
May 13, 2017, 03:18:51 PM
 #151

say you had to go from "helloworld-0000001" to "helloworld-9999999" hashing each try where the solution is somewhere in between [...] it takes each pool similar times to get to 9999999 and each would get a solution in between should they not give up [...] because they are not LOSING efficiency, pool Z's "HelLoWorLd-0000001" to "HelLoWorLd-9999999" still takes 20min to get to 10mill (average 10min to win)


Your error is (again) that in your proposal, one is doing cumulative work. If you are doing an exhaustive search, here over 10 million potential solutions, and you have done 5 million of them without success, then you have INCREASED your probability of a good answer on the next try from 1 in 10 million to 1 in 5 million. The more you work on a block, the higher the probability becomes that the next trial will be a winning one.

So having to reset is, in this case, a pain in the butt, because you lose the advantage of cumulative work. This is because your statistical model ("in 20 minutes, you have the answer for sure") is not that of proof of work. If you had been working for 19 minutes and 59 seconds on a block, you would KNOW that you will win in the second that follows: your probability to win is now 1, while the probability of winning in the first second you started on that block was 1/1200. With bitcoin's PoW function, this is never the case: the probability of winning after working for 10 hours on a block is exactly the same as the probability of winning in the first second. This is because there is not "one answer in a set of 10 million" but "gazillions of answers out of megasupergazillions".

I know it's in vain, but I am fascinated.
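The difference between the two models is easy to see numerically. A sketch (the probabilities and search-space size are made up; geometric trials are approximated by a discretised exponential):

```python
import random

random.seed(7)
p = 0.001   # per-try win probability for the hash lottery
N = 1000    # size of the exhaustive-search space

# Exhaustive search: after k misses, the next try wins with 1/(N - k),
# which climbs toward certainty as the misses pile up.
for k in (0, 500, 999):
    print(f"exhaustive search, {k:3d} misses: next-try odds = {1 / (N - k):.4f}")

# Proof of work is memoryless: even after surviving 500 misses, the
# next-try odds are still ~p. Check empirically with geometric-ish samples.
samples = [int(random.expovariate(p)) + 1 for _ in range(200_000)]
survivors = [x for x in samples if x > 500]
won_next = sum(1 for x in survivors if x == 501)
print(f"proof of work, 500 misses: next-try odds = {won_next / len(survivors):.4f}")
```

The exhaustive search's conditional odds rise from 0.001 to 1.0 as work accumulates; the hash lottery's stay pinned at ~0.001 no matter how long you've been mining.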
Viper1
Sr. Member
May 13, 2017, 04:14:29 PM
 #152

however, a pool gives participants:
A: "helloworld-0000001" to "helloworld-2499999" hashing each try
B: "helloworld-2500000" to "helloworld-4999999" hashing each try
C: "helloworld-5000000" to "helloworld-7499999" hashing each try
D: "helloworld-7500000" to "helloworld-9999999" hashing each try

which is efficient... [...] it takes each pool similar times to get to 9999999 and each would get a solution in between should they not give up [...] if you actually asked each pool not to give up/stale/orphan.. you would see the average is 10 minutes

There really isn't any efficiency there. How you assign the nonces doesn't matter, since which nonces will result in a solution is completely random. For example, the nonces that yield a solution for a given block could be 500000, 600000 and 7000000, in which case the assignment you've shown would result in them taking a very long time to get a solution. You could just give each miner the next sequential nonce as they completed their work, and it would be just as "efficient".

The only reason all the pools in your example take the same amount of time to get to 10mil nonces is that you're giving them the exact same hash rate, which isn't reality. They're also each trying to solve completely different blocks (same height but different data). The block they're solving for has their own address in it, plus whatever transactions they decided to put in that block. They're not all racing towards the exact same nonce/solution.

Regardless, can you explain how the entire premise and math that has gone into bitcoin says that 4,300PH at 559,970,892,891 difficulty will yield an average block time of 10 minutes, and yet a pool with 20% of that hash rate will still get an average block time of 10 minutes? Can you provide the math that shows that?
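For what it's worth, the standard back-of-envelope math (expected hashes per block ≈ difficulty × 2³², plugged with the figures quoted above):

```python
DIFFICULTY = 559_970_892_891
NETWORK_HASHRATE = 4_300e15   # 4,300 PH/s

# A block at this difficulty takes ~difficulty * 2^32 hashes on average.
HASHES_PER_BLOCK = DIFFICULTY * 2**32

def avg_block_time(hashrate: float) -> float:
    """Expected seconds per block for a miner with the given hashrate."""
    return HASHES_PER_BLOCK / hashrate

print(f"whole network: {avg_block_time(NETWORK_HASHRATE) / 60:.1f} min")
print(f"20% of it:     {avg_block_time(0.2 * NETWORK_HASHRATE) / 60:.1f} min")
```

The whole network lands near the 10-minute target, and a miner with 20% of the hash rate averages exactly five times that - roughly 47 minutes per block, not 10.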


franky1
Legendary
May 14, 2017, 02:48:16 PM
 #153

because a miner with 10% of the hash power has NO INCENTIVE to step back from remaining in agreement with the other miners, simply because he's then hard-forking all by himself, and will make a 10 times shorter chain.

Your erroneous understanding of mining made (probably still makes) you think that that betraying miner is going to mine all by himself a fork of just the same length as the chain of the rest of the miners, and hence "reap in all the rewards, orphaning the 90% chain" because full nodes agree with him, and not with the miner consortium.  

But this is not the case: our dissident miner will make just as many blocks on his own little fork, than he would have made on the consortium chain (*), with just as many rewards: so there's no incentive for him to leave the consortium,

(facepalm)

i'm starting to see where you have gone wrong...

at one point you say
"then hard-forking all by himself"
"going to mine all by himself"

but then you backtrack by bringing him back into the competition by talking about orphans.

if a pool went it alone.. there would be no competition. no stales, no orphans, no giving up..

now can you see that it would get every block?
now can you see that if he only got 1 block out of 6 in the "consortium competition", he will get 6 out of 6 "on his own"?
now can you see that instead of timing an hour and dividing it by how many blocks were solved in competition.. you instead look at the ACTUAL TIME of a block from height to height+1... not height to height+6?

jonald_fyookball
Legendary
May 14, 2017, 04:06:53 PM
 #154

Orphaning makes up a small percentage of blocks. This is known both from actual data and from common sense: if it takes milliseconds to validate a block and seconds to propagate one, compared with the fact that the entire network solves a block only every 10 minutes, it's a very small ratio.

So it's better to ignore orphaning to simplify the conversation.
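That ratio can be put into numbers: block arrivals are (approximately) Poisson, so the chance that someone else finds a block during a delay of t seconds is 1 − e^(−t/600). A quick sketch (the delay values are illustrative, not measured):

```python
import math

BLOCK_INTERVAL = 600.0   # network-wide average block time, seconds

def orphan_risk(delay_s: float) -> float:
    """Chance another block appears somewhere on the network during
    `delay_s` seconds of validation/propagation delay (Poisson arrivals)."""
    return 1 - math.exp(-delay_s / BLOCK_INTERVAL)

for delay in (0.07, 0.14, 5.0, 30.0):
    print(f"{delay:6.2f} s delay -> {orphan_risk(delay):.4%} risk of being raced")
```

At 70-140 ms of validation the risk is on the order of 0.01-0.02%; even several seconds of propagation only pushes it toward 1%.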

dinofelis
Hero Member
May 14, 2017, 04:22:50 PM
 #155

if a pool went it alone.. there would be no competition. no stales, no orphans, no giving up..

now can you see that it would get every block?
now can you see that if he only got 1 block out of 6 in the "consortium competition", he will get 6 out of 6 "on his own"?

I know you still think that. See above.

But it is still just as wrong as before :)

dinofelis
Hero Member
May 14, 2017, 04:27:24 PM
 #156

Orphaning makes up a small percentage of blocks. This is known both from actual data and from common sense: if it takes milliseconds to validate a block and seconds to propagate one, compared with the fact that the entire network solves a block only every 10 minutes, it's a very small ratio.

So it's better to ignore orphaning to simplify the conversation.

Franky1 is still locked into thinking that a mining pool that had 1/6 of the hash rate, and hence 1/6 of the blocks while it was in consensus with the others, is going to make blocks 6 times faster if he forks off on a hard fork, pleasing the full nodes and leaving his peers behind with their 5/6 of the hash rate.

Of course, he will make all the blocks on his own new forked chain; but he will make them 6 times slower too, so concerning "winning rewards", he's not going to make any more profit if his chain gets adopted in the end than when he was remaining on the consensus chain. Franky1 thinks he will make 6 times more rewards because now "all the blocks are his", but he doesn't understand that our miner will make 6 times fewer blocks in the same time.

(back to square one....)

Caveat: that's assuming our forking miner is not MODIFYING the difficulty or reward of the chain - but it's not very probable that the full nodes would be running code with those changed...
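The point reduces to one line of arithmetic: at unchanged difficulty, expected blocks per hour scale with hash rate and nothing else, whether the miner is pooled, solo, or alone on a fork. A minimal sketch (the 6 blocks/hour figure is just bitcoin's target rate):

```python
NETWORK_BLOCKS_PER_HOUR = 6.0   # bitcoin's target rate for the FULL hashrate

def expected_blocks_per_hour(hash_fraction: float) -> float:
    """Expected blocks/hour for a miner holding this fraction of the total
    hashrate, at unchanged difficulty - on the consensus chain or a fork."""
    return NETWORK_BLOCKS_PER_HOUR * hash_fraction

# On the consensus chain: 1 of every 6 blocks -> ~1 block/hour.
print(expected_blocks_per_hour(1 / 6))
# Alone on a fork, the miner gets 100% of that chain's blocks, but the
# chain itself only grows at ~1 block/hour - same rewards, 6x slower chain.
```

Forking changes who the neighbours are, not how many hashes a block costs.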
fikihafana
Sr. Member
May 14, 2017, 04:35:22 PM
 #157

If you have a machine you can spare, please run a full node. The more nodes there are, the stronger the network is. Also, if you run a full node you can potentially mine your node for information of various kinds. You can tell if you have a full node by giving the following command:

Code:
bitcoin-cli getinfo

If the "connections" field is greater than 8, then you are running a full node, congratulations!

You can find information on how to run a full node on bitcoin.org here:

https://bitcoin.org/en/full-node


To run a full node you need at least around 200GB of storage and a lot of bandwidth. These nodes won't make the network stronger by themselves; it's the miners that make the network stronger. Full nodes would really help secure the network if it used PoS as its main consensus.
franky1
Legendary
May 14, 2017, 04:36:31 PM
Last edit: May 14, 2017, 05:14:56 PM by franky1
 #158

Orphaning makes up a small percentage of blocks. This is known both from actual data and from common sense: if it takes milliseconds to validate a block and seconds to propagate one, compared with the fact that the entire network solves a block only every 10 minutes, it's a very small ratio.

So it's better to ignore orphaning to simplify the conversation.

(facepalm)

for the third time
forget about the % of VISIBLE orphans (there are more than you think.)
forget about counting accepted blocks over an hour and dividing by brand count (there are more than you think.)


instead JUST LOOK at the times to create a BLOCK:
height X to height X+1...
not
height of last visible brand z to height of next visible brand z / hour

what you don't realise is that more block attempts occur than people think.
E.g. dino thought the only blocks a pool works on are the blocks that get accepted (visible), hence the bad maths.

i did not bring up showing the orphans to talk about %
just to display and wake people up to the fact that more blocks are being attempted in the background

look beyond the one dimensional (literal) view.
actually run some scenarios!!


P.S
orphan % is only based on the blocks that actually reached a certain node..
EG blockchain.info lists
466252
465722
464681

blockstrail lists
466252
466161
463316

cryptoid.info lists
466253
466252
464792

again.. don't suddenly think you have to count orphans, or play percentage games..
just wake up and realise that pools make more block attempts than you thought.
think of it only as an illustration of opening the curtains on a window to a deeper world beyond the wall that the blockchain paints

then do tests imagining that those hidden attempts behind the curtain (all pools at every blockheight) had worked out...
the times of every blockheight by continuing instead of stalling, giving up, orphaning, etc....
you would see a big difference between
height X to height X+1...
vs
height of last visible by brand z to height of next visible by brand z / hour
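The "hidden attempts" picture is easy to model. Here is a small sketch with hypothetical pool hash-rate shares: every pool attempts every height, each pool's solve time is exponential with mean inversely proportional to its share, and the block at height X+1 is simply whichever attempt finishes first. The losing attempts are the ones that never appear on the chain:

```python
import random

# Hypothetical pool shares of the network hash rate (must sum to 1)
POOL_SHARES = [0.25, 0.20, 0.15, 0.15, 0.15, 0.10]
TARGET = 600.0   # seconds per block for the whole network combined
N_BLOCKS = 20000

random.seed(7)
intervals = []
wins = [0] * len(POOL_SHARES)
for _ in range(N_BLOCKS):
    # Every pool attempts every height; a pool with share s solves in
    # exponential time with mean TARGET / s.  The visible block is the
    # fastest attempt; the rest are the "hidden" background attempts.
    solve_times = [random.expovariate(s / TARGET) for s in POOL_SHARES]
    t = min(solve_times)
    intervals.append(t)
    wins[solve_times.index(t)] += 1

mean_interval = sum(intervals) / len(intervals)
print(round(mean_interval, 1))                  # mean height X -> X+1 time, near 600
print([round(w / N_BLOCKS, 3) for w in wins])   # win fractions, near the hash shares
```

Even though every pool attempts every height, the minimum of those exponential attempts is itself exponential at the summed rate, so the chain still advances once per ~600 seconds on average, and each pool's visible-block fraction still matches its hash share.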

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
franky1
Legendary
*
Offline Offline

Activity: 4396
Merit: 4760



View Profile
May 14, 2017, 05:46:55 PM
 #159

moral of this topic:

run a full node, not just to:
make transactions without third party server permission
see transactions/value/balance without third party server permission
secure the network from pool attack
secure the network from cartel node (sybil) attack
secure the network from government shutdown of certain things
ensure the data on the chain is valid
secure the rules
help with many other symbiotic things


but
also to be able to run tests and scenarios, see beyond the curtain of the immutable chain, and see all the fascinating things behind the scenes that go towards making bitcoin much more than just a list of visible blocks/transactions

jonald_fyookball
Legendary
*
Offline Offline

Activity: 1302
Merit: 1008


Core dev leaves me neg feedback #abuse #political


View Profile
May 14, 2017, 05:49:36 PM
 #160

Orphaning makes up a small percentage of blocks.  This is known both from actual data and common sense: if it takes milliseconds to validate a block and seconds to propagate one, compared with the fact that the entire network solves a block every 10 minutes, it's a very small ratio.

So it's better to ignore orphaning to simplify the conversation.

(facepalm)

for the third time
forget about the % of VISIBLE orphans (there are more than you think.)

Nonsense.

An orphan only becomes an orphan because another valid block beat it out.

Since the time between valid blocks is so much larger than the propagation/validation time
(which is seconds, not minutes), the proportion of orphans to valid blocks has to be tiny.

The only way that, say, 5 orphans would be created during 1 valid block is if they
all happened to be published within a few seconds of each other -- which, given
that valid blocks only occur about every 600 seconds, is quite unlikely.
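That proportion is easy to put a number on. Assuming block arrivals are Poisson with a 600-second mean, and taking a 5-second propagation/validation window as a hypothetical figure, the chance that a competing block lands inside the window (and so creates an orphan candidate) is:

```python
import math

BLOCK_INTERVAL = 600.0   # average seconds between valid blocks
PROPAGATION = 5.0        # assumed seconds to propagate and validate a block

# Block arrivals are Poisson, so the probability that a second solution
# appears within the propagation window of the first is:
orphan_rate = 1 - math.exp(-PROPAGATION / BLOCK_INTERVAL)
print(round(orphan_rate * 100, 2))  # roughly 0.8% under these assumptions
```

Under those assumptions orphans are well under 1% of blocks, which is why the "more attempts than you think" picture can't change the height-to-height timing by any meaningful amount.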
