Bitcoin Forum
December 14, 2019, 11:36:41 PM *
 
  Show Posts
4281  Bitcoin / Bitcoin Discussion / Re: An Open Letter to Bitcoin Miners – Jonald Fyookball on: May 16, 2017, 12:41:35 PM
-ck's response.. close your eyes, put your fingers in your ears, and just scream "go with segwit".
(facepalm)
Jonald's response:
Quote
I am not a contributor to any Bitcoin projects, but I am quite familiar with the scaling topic because I’ve been following it for some time now, and I am knowledgeable enough to clearly understand the technical details.

Quote
As others have explained, there is no security provided to the network by non-mining ‘full nodes’.

Are you telling me you support the latter? Roll Eyes

no, i think jonald is FLAWED to the Nth degree in his understanding of what nodes do. i have facepalmed him many times, and corrected him too.

but someone's personal beliefs about why they should or should not run a full node are not as big a deal as the empty promises/guarantees/expectations of segwit, which is more of a network-wide issue

people should learn about what would truly benefit/hinder the bitcoin ecosystem and what would actually occur due to certain changes, proposals

here is a copy of a PM i sent to jonald as soon as i read this topic
Quote from: jonald
The most ludicrous is the “all users should be running full nodes” idea.

As others have explained, there is no security provided to the network by non-mining ‘full nodes’. Only mining nodes secure and extend Bitcoin’s distributed ledger.

The white paper explains why most users do not need to run full nodes:

    It is possible to verify payments without running a full network node. A user only needs to keep a copy of the block headers of the longest proof-of-work chain, which he can get by querying network nodes until he’s convinced he has the longest chain, and obtain the Merkle branch linking the transaction to the block it’s timestamped in. He can’t check the transaction for himself, but by linking it to a place in the chain, he can see that a network node has accepted it, and blocks added after it further confirm the network has accepted it… …Businesses that receive frequent payments will probably still want to run their own nodes for more independent security and quicker verification.

The idea that a lot of non-mining full nodes will make the network more decentralized (because they can make sure the miners are behaving) is erroneous, because an SPV client can already query the network’s nodes. Generally, there would only be a problem if a majority of mining nodes were colluding dishonestly, in which case Bitcoin would be already broken.
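the white-paper passage quoted above can be sketched in code. here is a minimal python illustration of SPV-style Merkle-branch checking, assuming bitcoin's double-SHA256 tree; the function names are mine, not from any real client:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_branch(txid: bytes, branch, index: int, merkle_root: bytes) -> bool:
    """walk the merkle branch from a txid up to the block's merkle root.
    `branch` holds the sibling hashes from leaf to root; `index` is the
    tx's position in the block, telling us at each level whether our
    running hash sits on the left or the right."""
    h = txid
    for sibling in branch:
        if index & 1:               # our node is the right child
            h = dsha256(sibling + h)
        else:                       # our node is the left child
            h = dsha256(h + sibling)
        index >>= 1
    return h == merkle_root

# toy two-transaction block
txa, txb = dsha256(b"tx-a"), dsha256(b"tx-b")
root = dsha256(txa + txb)
print(verify_merkle_branch(txa, [txb], 0, root))   # True
```

this is exactly why an SPV client only needs headers plus a branch: the proof shows a transaction is *included* in a block, but says nothing about whether the block itself follows the rules.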

(facepalm)

you're taking quotes of (not verbatim) 'some people just want to balance check their own funds, which makes it ludicrous to get those people to run a full node..'
but erroneously trying to twist it into sounding like NO ONE should run a full node and everyone should just let pools have full control.
(facepalm)

EG
"As others have explained, there is no security provided to the network by non-mining ‘full nodes’. Only mining nodes secure and extend Bitcoin’s distributed ledger."
(facepalm)

you're literally saying that relying on pools to be the sole holders of the full data is good
you're literally saying that relying on pools to be the sole verifiers of the full data is good
(like a fiat bank), just so that users can balance check...
but that would make bitcoin insecure and centralised.
(facepalm)

1. pools collate the data, yes. but it needs independent verifiers to accept it in that format as valid. merchants and people that care about security do this and should continue to do this. they do, and should continue to, reject/orphan blocks that cause issues, making pools follow the rules or find themselves unable to spend rewards.

2. yes, some people that don't care and only want to check their balance can just run SPV/lite clients. but that does not mean we should let pools be the only verifiers of the data.

let's reword your words. maybe that will help you understand:

non-mining full nodes make the network more decentralized (because they can make sure the miners are behaving) because there would be a problem if a majority of pools were colluding dishonestly, in which case Bitcoin would be broken.
..
in short, to explain what non-mining nodes do:
if a pool offers a new block that does not contain the last accepted block hash (previous hash), and/or does not meet the standards of the node rules (funky txs, creating funds from nowhere, fraud, etc), then that pool gets its block orphaned. once pools realise their blocks are getting orphaned, and thus they can't spend their rewards with merchants/people, the pools fall in line and only make acceptable blocks.

removing that power from merchants/people is BAD.
nodes play an important role. and should continue.
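that orphaning logic can be sketched roughly in python. this is a toy model with made-up field names and rule checks, not real node code; it only shows the two essentials: build on the accepted tip, and pass every independently enforced rule:

```python
def accept_block(block, local_tip_hash, rule_checks):
    """a full node's gatekeeping, reduced to two essentials: the new
    block must extend the tip this node already accepted, and it must
    pass every consensus rule the node independently enforces."""
    if block["prev_hash"] != local_tip_hash:
        return False                        # doesn't extend our accepted chain
    return all(check(block) for check in rule_checks)

# hypothetical rule checks a node might enforce
no_inflation = lambda b: b["coinbase"] <= 12.5   # 2017-era block subsidy
valid_txs    = lambda b: all(tx["valid"] for tx in b["txs"])

tip = "83ba26"
good = {"prev_hash": "83ba26", "coinbase": 12.5, "txs": [{"valid": True}]}
bad  = {"prev_hash": "000000", "coinbase": 12.5, "txs": [{"valid": True}]}
print(accept_block(good, tip, [no_inflation, valid_txs]))  # True
print(accept_block(bad,  tip, [no_inflation, valid_txs]))  # False
```

a pool whose blocks keep failing these checks on merchants' nodes can't spend its rewards, which is the incentive described above.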

you should have stuck with the argument that not everyone needs to be their own bank.. but not pushed it into being a plea to centralise pools into being more authoritarian, by suggesting merchants, and those that do care, should just let pools do all the work.

.. but it seems lately you have jumped over to the other side, wanting centralisation, by only accepting the one-dimensional twisted scripts as gospel.
4282  Bitcoin / Bitcoin Discussion / Re: An Open Letter to Bitcoin Miners – Jonald Fyookball on: May 16, 2017, 11:31:34 AM
-ck's response.. close your eyes, put your fingers in your ears, and just scream "go with segwit".
(facepalm)

1. segwit is not as 'compatible' as promised
2. segwit's activation event itself is not about solving quadratics/malleability/scaling or anything else. it's about setting up the tier network
3. moving funds to segwit keys after activation has the POTENTIAL to help with quadratics/malleability/scaling
4. but malicious people will stick with native keypairs and continue to do quadratics/spamming to prevent good utility
5. segwit does not stop/disarm/solve native keypairs, thus does not 'fix' the issues.
6. segwit just disarms innocent people who volunteer to move funds across, while letting the rest of the network continue as usual.
7. the 'expected' potential scaling boost of segwit is the same potential ~7tx/s on chain of 2009-2017, which still has no guarantee of actually being achieved
4283  Bitcoin / Bitcoin Discussion / Re: BREAKING NEWS: Bitcoin Dominance < 50% on: May 16, 2017, 10:58:23 AM
these guys understand it
Bitcoin's share of the total marketcap (not its dominance) is decreasing because of a very simple math.

market capitalization = supply * price

increase the supply you increase the marketcap.
increase the supply to a ridiculous number like 100 billion you get a ridiculously big marketcap.
now have hundreds of altcoins with ridiculous numbers you get a huge total marketcap in which bitcoin has a tiny share.

Good thing that no one cares.  There are hundreds of completely worthless alts with market caps of >$1,000,000 even if their trading volume is just a few thousand.  All it takes is a meaninglessly large supply.

in short: make an alt with 5 trillion coins premined.. put ONE.. yes ONE coin on an exchange and sell it to yourself for $1.. yes, spend just $1.. and BAM, the market cap of that coin is $5 trillion

market caps are a MEANINGLESS number. it's a bubble number backed by nothing.
there is not $5 trillion backing the premined coins i used in my example, just like there are not $28+ billion in bank accounts backing bitcoin's market cap, nor any other altcoin's.

the sooner people realise this and stop giving a crap about market cap, the better.
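the premine example above is a single multiplication. a two-line python sketch (the numbers are the hypothetical ones from the example, not real market data):

```python
def market_cap(supply, last_trade_price):
    """'market cap' is just supply times the last traded price --
    no actual money backs the resulting number."""
    return supply * last_trade_price

# 5 trillion premined coins, one coin sold to yourself for $1
print(market_cap(5_000_000_000_000, 1.0))   # 5000000000000.0 -- a "$5 trillion" cap from $1 of volume
```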


here are things people SHOULD care about:
how many merchants accept coin X
how many employees are working to do something within a certain coin (devs, consultants/trainers, convention organisers, device (HW wallet/asic, etc) manufacturers, payment processors/exchanges, etc)
how many users a coin has

though some of these details are not easy to find, they are a hell of a lot more important stats than market cap
4284  Bitcoin / Bitcoin Technical Support / Re: Got a ? on: May 15, 2017, 04:49:09 PM
the good (for chances, but bad for your pocket) fee is 240 sats/byte - https://bitcoinfees.21.co/

so...sending the 0.00184008 out

edit: oops, didn't check if compressed/uncompressed (as danny highlighted).. maths now sorted

(in: 1*180) + (out: 1*34) = 214 (+-10), so go with 224 to be easy
   fee: 224*240 = 53,760 sats = 0.00053760
   so send: 0.00130248

edit: (to explain why i did the below) in case you want to pay out to 2 destinations / get back some change if not sending the whole amount

(in: 1*180) + (out: 2*34) = 248 (+-10), so go with 258 to be easy
   fee: 258*240 = 61,920 sats = 0.00061920
   so send: 0.00122088, split across the two outputs however you want
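the napkin math above can be wrapped up as a sketch. assumptions: legacy sizes of ~180 bytes per (uncompressed-key) input, 34 bytes per output, ~10 bytes of overhead, and the quoted 240 sat/byte rate; the function name is made up:

```python
SATS = 100_000_000   # satoshis per BTC

def estimate_fee(n_in, n_out, rate, in_size=180, out_size=34, overhead=10):
    """napkin estimate: size = inputs*~180 + outputs*34 + ~10 overhead,
    fee = size * rate (sats/byte).  returns (size_bytes, fee_sats)."""
    size = n_in * in_size + n_out * out_size + overhead
    return size, size * rate

# 1-in 1-out at 240 sat/byte: matches the 224-byte / 0.00053760 figures above
size, fee = estimate_fee(1, 1, 240)
print(size, fee / SATS)            # 224 0.0005376
print((184008 - fee) / SATS)       # 0.00130248 left to send from 0.00184008

# 1-in 2-out: matches the 258-byte / 0.00061920 figures
print(estimate_fee(1, 2, 240))     # (258, 61920)
```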
4285  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 15, 2017, 03:33:05 PM
1. no one wants to or can just blindly accept the opinion of data from others, its always best to run tests on data yourself
You seemed to have missed the part (on two occasions actually), where I said I had written an actual simulation and that once I had seen enough of the data to realize you were out to lunch, I shut it down.

i have said for years: don't get spoonfed data.
don't just take things at face value.
don't just read something on a forum/reddit and take it for granted.

do your own tests/research/scenarios/validation.
this is why, DAYS AGO, i said i'll give dino a few months to have his mind-blowing experience of seeing the bigger picture of the real depths of bitcoin, rather than the 1-dimensional overview he has displayed over the last few months.

yet apparently many want me to spoonfeed them everything, and then debunk it before even examining it.. (making it pointless to spoonfeed)

so if you want to learn run your own tests for your own peace of mind.

anyway, this topic has meandered so far off track.

but i still await -ck explaining his biased 'only 70ms' timing of all the combined propagation/validation parts (outside of hashing).. as i want to see, if it's just 70ms, how he and his fellow friends can justify their "2mb is bad" rhetoric

PS. to pre-empt short-sightedness:
 my "minutes" is not to be taken literally as applying to all blocks.. but it has been the case in the past, where certain 'tasks' used to be done in certain ways without efficiencies. and more seconds/milliseconds can be shaved off even now
 but on average the block's (non-hashing) tasks are more than just 70ms..

but i would like to know how -ck can defend a 'bigger blocks are bad' stance if non-hashing tasks are 'just 70ms'

im done with this topic.
if anyone else is unsure about the meandered 'hashtime' stuff.. just run your own scenarios
4286  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 15, 2017, 03:10:37 PM
Oh my lord. You can't prove your point by using RAND or anything that doesn't also take into account difficulty. Why the heck do you think I was writing a simulation that was doing actual hashing with difficulty. My first thought was to use "rand" and then I immediately tossed that out as it would in no way represent what really happens. For one thing, it results in a normal distribution which is NOT what you have with bitcoin. wow.. just... wow..

I think I'm done. At this point I can't take anything franky1 ever says seriously.


it isn't just RAND!!!
(facepalm)
the formula also includes the difficulty vs hash.
AND
i even factored in some efficiencies too

as you can see.. look at blockheight 469992.. there is a big difference between A and J due to MANY factors, including the math of the nonce and other things.

emphasis: not just rand.
i only mention rand to pre-empt the simple minds of one-dimensional thinkers who would try dismissing any data by saying "i bet he manually typed in biased data", simply to avoid waffling

but seeing as people can't accept other people's scenarios.. RUN YOUR F**KING OWN scenarios!!!

summary of this topic (NODES) - not just this meandered 'hashtime' debate

TL;DR:
this whole topic proves a few things:
1. no one wants to, or can, just blindly accept the opinion of data from others; it's always best to run tests on the data yourself
2. running a full node follows the same logic. don't just be a downstream node / sheep / follower of a tier network. doing your own validation is important for the network
3. when there is a dispute between the data, just sheep-following certain data is bad. run a full node and fully validate the data.

4. then the non-mining consensus can all agree that blockheader 83ba26... is the most correct highest height the nodes can all agree on.
and if a pool made a new block that is not even using 83ba26 as a previous hash, then that pool won't win or get support
4287  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 15, 2017, 02:13:55 PM
a. if a pool only has 1 block out of 10 on the blockchain, does not mean he was only working on 1 block for the entire time

I hope you understand that the probability of winning a block doesn't depend on what block you are mining, or how long you were mining on that block.  Each hash you calculate, on each thinkable block, has exactly the same probability to "win the block" as any other.  I hope you understand that.

You should first answer this:

@franky1.  One more trial.

Take an old piece of block chain, say, around block number 200 000 or so, but consider the actual, today's, difficulty, take a given miner setup, with a given hash rate, say, 1/6 of the total hash rate for that difficulty, and compare two different experiments:

A) take the transactions of block 200 000, make your own block of it, and hash on it.  Regularly, you will find a solution, but you continue trying to find new solutions on that very same block.  Do this for a day.   ==> at what average rate do you think you will find solutions for this same block ?

B) do the same as in A, but switch blocks every 30 seconds, that is, work 30 seconds on a block made of the transactions of block 200 000 ; then work 30 seconds on the block made of the transactions of block 201 000 ; then work 30 seconds on the transactions of block 200 002 etc...  Do this also for a day.
==> at what average rate do you think this time, you will find solutions for some of the blocks during the time you hash on them ?

How do the rates in A and in B compare ?
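the quoted A-vs-B experiment can be run as a toy monte carlo. this models each hash as an independent bernoulli trial with a fixed success probability (which is how real SHA-256 mining behaves statistically); it is a sketch, not actual mining:

```python
import random

def mine(trials, p, switch_every=None, seed=42):
    """count 'solutions' over `trials` simulated hashes.  `switch_every`
    swaps in a fresh block template every N hashes, but each hash remains
    an independent trial with the same success probability p."""
    rng = random.Random(seed)
    wins, template = 0, 0
    for i in range(trials):
        if switch_every and i % switch_every == 0:
            template += 1            # abandon the old block, start a new one
        if rng.random() < p:         # solution probability is unchanged
            wins += 1
    return wins

TRIALS, P = 1_000_000, 1e-3          # ~1000 expected solutions either way
stick  = mine(TRIALS, P)                     # A: one template all day
switch = mine(TRIALS, P, switch_every=300)   # B: switch template "every 30s"
print(stick == switch)               # True: switching changed nothing
```

with the same seed the two runs draw identical random numbers, so the solution counts come out identical; the per-hash odds never depended on which template was being hashed.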



B is just meandering... 30 seconds has nothing to do with anything.. ..
screw it.. i'll throw something at you and let you wrap your head around it



also, to answer jonald's meander of the meander of the topic (his poking at the orphans):
take the top table and block height 469990.
C won.
but A would have been a close second.. if it did not stale, give up, etc..
but even then, without giving up/staling, it would not show as an "orphan" unless there was an issue with C, where C won... and then got replaced by A.

this is why i said do not take the orphans as literally showing all background attempts..
but just as a quick opening of the curtains for those that think the only blocks ever worked on are the ones that win.. by illustrating that there are more background attempts than they thought
EG dino only counting the wins and dividing by X hours (very very bad math)
4288  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 15, 2017, 09:37:25 AM
as for -ck
he thinks im a BU shill..
(facepalm)

as for -ck's 70ms stat:
that is not a complete validation/propagation/new-raw-template-creation (non-hashtime parts) timing that factors in all the things like latency, caching, and many other factors.
hence why pools do SPV... because the combined non-hashtime parts are more than just 70ms

but that's a separate dimension of debate to the 1st-dimension error that dino can't grasp..

anyway, let's all agree to disagree and leave people to run their own scenarios.
if you can't be bothered to run scenarios to realise what happens behind the curtain.. then just agree to disagree and move on until you can run scenarios.
4289  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 15, 2017, 09:27:51 AM
Some real block times over a few hours from yesterday. Each pool was working towards solving a block at each of those heights. Each pool was trying to solve a completely different "block" as the data they work on is different from any other pool. I seriously don't know how franky1 could possibly think that a pool with 5 S9s (as an example), would be able to solve their unique block at the same average time as a pool with 1000 S9s. At this point I have to conclude he's simply incapable of admitting he's wrong and/or is trolling us.

viper...
go read the scenario DINO presented!!
HE said: if 10 pools all had 10% hash, meaning all pools had 1000 S9s,
then if 1 pool went at it alone, it would take that pool 1 hour 40 minutes to make a block.

that was HIS 1-dimensional view..
which would be wrong

the last 3-4 pages of debate were about equal hash, and how dino thought that even with equal hash one pool would take 10x longer than another pool..


separately.. and not even related to dino's error:

bringing in details like x=5, y=1000 was going to be something i would handle once dino and others realised the error of his misunderstanding of the 1-dimensional view of all pools with the same hash power



i know a pool of just 5 S9s vs a pool of 1000 S9s would have different timings..

i would have gone into this as a 3rd-dimension discussion. but dino and others were still locked into the 1-dimensional error concerning all pools of equal hash.. which would have confused the whole matter if they couldn't even get around the basics

such as confusing them further by saying x=5, y=1000 is not a 200x variance.
for instance, the 1000-S9 pool could be forced to do full validation, not do all its efficiency gains (non-hash tasks), and not do overt/covert hash gains,

bringing the average timings down by 20%+ for the 1000-S9 pool,
while the 5-S9 pool, if it was not doing efficiency gains before, could be allowed to on a new separate chain,

making the efficiency variance between the two more like x=6 and y=800, while not actually changing the asic count. which would be a variance of 133x, not 200x


I must admit, for some reason I had thought that these times would be a lot closer to the 10 min average since pooling is supposed to "smooth out" the times.

again, this is a 3rd-dimensional discussion about the ~2-week/2016-block understanding, and not the 'literal' 10-minute misunderstanding by those same people. but that would confuse the 1st-dimensional scenario dino was erroneous over..



.. last thing: if they had grasped it all, i would have thrown in a curveball to then say..
if one pool went at it alone, who said it would be the x of 5 S9s going at it alone? what if the y of 1000 S9s went at it alone.. to really make dino think..

but dino first needed to grasp these 1-dimensional scenario errors he made:
a. if a pool only has 1 block out of 10 on the blockchain, that does not mean it was only working on 1 block for the entire time
b. out of 10 blockheights, every pool attempts every blockheight, win or lose
c. if the other 9 blocks a pool attempted (but didn't win) were followed through without staling, giving up, aborting, moving on, or orphaning, each block would not take 1 hour 40 mins per blockheight

but even after several pages, dino and others could not grasp that. they could not see beyond the curtain to the blocks they can't see, and were only counting and dividing the times of the winners, not the hidden background attempts (if they ran scenarios, the background attempts would have timings too)

tl;dr:
i understand a lot more than you think, but i was trying to give dino baby steps of eli5 layman-worded understanding, to at least get him to realise that in the scenario he presented, of ALL pools having the same hash, they won't take 1 hour 40 minutes each if they went alone.
but even after several pages, dino and others could not grasp that.
4290  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 14, 2017, 08:22:47 PM
The natural frequency to find a block for the entire network (which is set by the difficulty level) is always 600 seconds on average.

you are right. but you're not seeing it from more than one dimension...

so lets just get back to the topic at hand..

running a node is just as important as running an asic. in fact, more important

having diverse codebases of nodes is as important as having multiple pools. in fact, more important
4291  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 14, 2017, 08:19:42 PM
Moral of this topic:  franky1 isn't listening to a vast selection of technically proficient users explaining in detail why his perception of mining is wrong.

i understand more than you think. but people can't even get past the basics for me to even start confusing them further with the extra dimensions..
it would take a book to explain it all.. but some are stuck at the first paragraph.. so in this topic i'm only talking about their first-paragraph failures ..

ok.. let's word it this way, to confuse the matter by talking about some 2-dimensional stuff
(using some people's rationale):
if it only takes 70ms (i'm laughing) to see a block, grab the block, validate the block, make a new (unsolved) block template, add transactions..
                                ...... before hashing

then why SPV??
why do (avoiding grey): see a block, grab the block, validate the block, make a new block, add transactions, start hashing
hint: it's more than 70ms to do all the tasks before hashing.
hint: the efficiency gains of doing spv are noticeable
hint: by doing spv, the gains are more than 5% compared to a pool that does the full validation
hint: even OVERT asicboost can gain more than 5% efficiency by tinkering with certain things
hint: even COVERT asicboost can gain more than 5% efficiency by tinkering with certain things

remember: 5% of 10 minutes is 30 seconds.
there are ways to shave more than 20% off the average block-creation processes (2 minutes) without buying 20% more hash power

once you realise there is much more than just hashing to making a block, the difference between each pool's "hash power" becomes negligible..

all those tasks sit beside the time of hashing to make up the solved-block creation time..
they dilute the 'hash time' per block-solution variance, thus making the "hashing time" negligible

tl;dr:
without buying more ASIC rigs,
an 11% hashpower pool can outperform a 13% hashpower pool just by knowing some efficiency tricks.
meaning arguing about raw hashpower percentages alone misses the point
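the 11%-vs-13% claim above can be put into a toy formula. this is a simplification of the argument being made here (useful hashing time = round time minus non-hash overhead), not a model of real pool economics:

```python
def effective_share(raw_share, overhead_s, round_s=600):
    """fraction of the network's useful hashes a pool contributes if it
    spends `overhead_s` of each ~600s round on non-hashing tasks
    (validation, propagation, template building) instead of hashing."""
    return raw_share * (round_s - overhead_s) / round_s

# an 11% pool with 10s of overhead vs a 13% pool wasting 2 minutes
lean  = effective_share(0.11, 10)    # ~0.1082
bloat = effective_share(0.13, 120)   # ~0.1040
print(lean > bloat)                  # True
```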



until dino and others can grasp the basics, that pools don't just work on 1 block an hour.. there is no point going into the deeper-level stuff


third-level hint..
if a pool went at it alone, it could happily avoid all the latency, validation, and propagation times (which would be more than 70ms if it was competing)
because going it alone means the previous block already belongs to them, so they already know the data.. and as such they gain time to create the next block by not having to relay, propagate, etc, etc..

totally separate matter,
but the bit i laugh at:
if it only takes 70ms to see a block, download a block, and validate a block, then why are the crybabies crying so much that "2mb blocks are bad"?

look beyond the curtain, find the answers, piece the layers together, see the whole picture
4292  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 14, 2017, 06:34:47 PM
(facepalm)

for the third time:
forget about the % of VISIBLE orphans (there are more than you think.)

Nonsense.

An orphan only becomes an orphan because another valid block beat it out.

Since the time between valid blocks is so much larger than the propagation/validation time
(which is seconds, not minutes), the proportion of orphans to valid blocks has to be tiny.

The only way that, say, 5 orphans would be created during 1 valid block is if they
all happened to be published within a few seconds of each other -- which, given
that valid blocks only occur about every 600 seconds, is quite unlikely.
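the quoted proportion argument can be put in numbers. assuming poisson block arrivals with a 600s mean (the standard model), the chance a competing block appears within a propagation window of tau seconds is 1 - exp(-tau/600):

```python
import math

def orphan_rate(propagation_s, mean_interval_s=600):
    """chance another block is found within the propagation window,
    assuming poisson block arrivals: 1 - exp(-tau/T), roughly tau/T."""
    return 1 - math.exp(-propagation_s / mean_interval_s)

print(round(orphan_rate(2), 4))    # 0.0033 -> ~0.33% for ~2s propagation
print(round(orphan_rate(60), 4))   # 0.0952 -> even a full minute is <10%
```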

(facepalm)
seems you're not gonna run any scenarios.. so you might as well just carry on with one-dimensional thinking and move on
it's like i open up a curtain, and all you want to talk about is the next wall.. you're not ready to see beyond the wall, and are finding reasons to avoid looking beyond it..

might be best to let you have more time to immerse yourself in all the extra things behind the scenes.. which you're not ready to grasp just yet
4293  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 14, 2017, 05:46:55 PM
moral of this topic:

run a full node, not just to:
make transactions without third-party server permission
see transactions/value/balance without third-party server permission
secure the network from pool attack
secure the network from cartel node (sybil) attack
secure the network from government shutdown of certain things
ensure the data on the chain is valid
secure the rules
help with many other symbiotic things


but
also to be able to run tests and scenarios, see beyond the curtain of the immutable chain, and see all the fascinating things behind the scenes that go towards making bitcoin much more than just a list of visible blocks/transactions
4294  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 14, 2017, 04:36:31 PM
Orphaning makes up a small percentage of blocks.  This is known both from actual data and common sense:  If it takes milliseconds to validate a block and seconds to propagate one, compared with the fact that the entire network solves a block every 10 minutes, it's a very small ratio.

So it's better to ignore orphaning to simplify the conversation.

(facepalm)

for the third time:
forget about the % of VISIBLE orphans (there are more than you think.)
forget about counting accepted blocks over an hour and dividing by brand amount (there are more than you think.)


instead, JUST LOOK at the times to create a BLOCK:
height X to height X+1...
not
height of last visible brand-z block to height of next visible brand-z block / hour

what you don't realise is that more block attempts occur than people think.
EG dino thought the only blocks a pool works on are the blocks that get accepted (visible), hence the bad maths.

i did not bring up showing the orphans to talk about %,
just to display, and wake people up to, the fact that more blocks are being attempted in the background.

look beyond the one-dimensional (literal) view.
actually run some scenarios!!


P.S
orphan % is only based on the blocks that actually got to a certain node..
EG blockchain.info lists
466252
465722
464681

blockstrail lists
466252
466161
463316

cryptoid.info lists
466253
466252
464792

again.. don't suddenly think you have to count orphans, or play percentage games..
just wake up and realise that pools make more block attempts than you thought.
think of it only as an illustration: opening the curtains on a window to a deeper world, beyond the wall that the blockchain paints

then do tests, realising that if those hidden attempts behind the curtain (all pools, every blockheight) were followed through...
timing every blockheight by continuing instead of staling, giving up, orphaning, etc....
you would see a big difference between
height X to height X+1...
vs
height of last visible by brand z to height of next visible by brand z / hour
4295  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 14, 2017, 02:48:16 PM
because a miner with 10% of the hash power has NO INCENTIVE to step back from remaining in agreement with the other miners, simply because he's then hard-forking all by himself, and will make a 10 times shorter chain.

Your erroneous understanding of mining made (probably still makes) you think that that betraying miner is going to mine all by himself a fork of just the same length as the chain of the rest of the miners, and hence "reap in all the rewards, orphaning the 90% chain" because full nodes agree with him, and not with the miner consortium.  

But this is not the case: our dissident miner will make just as many blocks on his own little fork as he would have made on the consortium chain (*), with just as many rewards: so there's no incentive for him to leave the consortium,

(facepalm)

i'm starting to see where you have gone wrong...

at one point you say
"then hard-forking all by himself"
"going to mine all by himself"

but then you backtrack, bringing him back into the competition by talking about orphans.

if a pool went at it alone.. there would be no competition. no stales, no orphans, no giving up..

now can you see that it would get every block?
now can you see that if it only got 1 block out of 6 in the "consortium competition", it would get 6 out of 6 "on its own"?
now can you see that instead of timing an hour and dividing by how many blocks were solved in competition.. you should instead look at the ACTUAL TIME of a block from height to height+1... not height to height+6?
4296  Bitcoin / Bitcoin Discussion / Re: Why Bitcoin Core Developers won't compromise on: May 13, 2017, 09:11:43 PM
Fact: If you raise the blocksize up to a point where people can't run their own nodes, you cannot call it a peer to peer network anymore.

get the "gigabytes by midnight" script out of your head. the rises in blocksize can grow at a natural, progressive rate that nodes can cope with.
core already admit 8mb is safe..
with all the code efficiencies since 2009 (libsecp256k1 = 5x more efficient, for instance),
the fact that the average home line is no longer 512kbit/s (~38MB/10min) ADSL, but a lot more now,
the fact that hard drives are cheaper,
the fact that the baseline raspberry Pi is now the raspberry pi 3:

all show that 8mb is safe, and admitted as such. but even so, just going to 4mb is also ok, with a few tweaks ON TOP to become extra safe, such as limiting tx sigops to 4k per tx or less, forever...
all of which shows there is nothing technically hindering the ability to run a full node at home
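the home-line arithmetic above, as a sketch (treating 1MB as 8Mbit and ignoring protocol overhead):

```python
def mb_per_block_interval(line_mbit_s, interval_s=600):
    """megabytes a line can pull in one ~10-minute block interval,
    with 1 MB taken as 8 Mbit and overhead ignored."""
    return line_mbit_s * interval_s / 8

print(mb_per_block_interval(0.512))  # ~38.4 -- old 512kbit ADSL moves ~38MB/10min
print(mb_per_block_interval(10))     # 750.0 -- a modest 10Mbit line, ~750MB/10min
```

which is the point being made: even 8mb blocks are tiny next to what a modest modern home line can move in one block interval.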


No amount of tricks can overcome the importance of a full validating node, so forget about SPV. The moment people can't have full validating nodes the whole concept of "peer to peer cash" it's game over.

and i now hope you see why the whole filters (gmaxbuzz) / bridging (lukeJrbuzz) plan to create a cesspit of a TIER network by going soft is something i hate.
4297  Bitcoin / Bitcoin Discussion / Re: For those that downplay the importance of full validating nodes on: May 13, 2017, 03:12:37 PM
i have actually seen many blockstreamists saying nodes don't matter,
especially when the blockstreamists went soft with segwit by saying nodes don't matter/count.

my view is: have MANY different "brands",
because if everything was core.. then it's just another centralised titanic/wall street waiting to happen, where people think "too big to fail"... until it fails.

as for why the BU nodes went down at very similar times: it's because some malicious people made a script and grabbed the ip list (or ran it from a DNS seed that already had the list) to spam all the nodes flagged as BU.

but hey, if you want to keep thinking everyone should only run core.. then you really are inhaling the smoke of the titanic's chimney stacks singing "i'm the king of the world".. not realising an iceberg can leave you dead in the water at any time.

running a full node IS important. as is ensuring not everyone runs the exact same codebase, because that diversifies and decentralises control.
don't be fooled into running lite nodes, pruned nodes, or no-witness nodes. if you care about network security, run a full-validation node
4298  Bitcoin / Bitcoin Discussion / Re: Why Bitcoin Core Developers won't compromise on: May 13, 2017, 02:49:09 PM
The Lightning Network can take a huge amount of transactions largely offchain.  The idea of having all transactions fully onchain is not a matter of principle, it's a matter of control from miners so that they can receive transaction fees more often.  SegWit allows a slight increase in onchain capacity which is enough for the short term while this offchain scaling can also be implemented.

malicious spammers wont use segwit keys, nor will they use lightning.
they will continue to native-spam the base block, which will still disrupt segwit key users and lightning open/close channel operations

segwit/lightning does not solve the real problems. it just pushes innocent people away from native bitcoin with hopes and utopian dreams, but no promises/guarantees
4299  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 13, 2017, 02:35:23 PM

but
what if i told you that out of the 10 minutes, up to 2 minutes is wasted on propagation, latency, validation, utxo cache.. (note: not the hashing)
so

not gonna argue with you Franky, cause i'd be simply repeating myself.

On a sidenote.... question for -ck: how much time is actually spent validating, and is this typically done in parallel?


ask him
not to be biased towards the leanest linear block... but an average block that has some quadratics and where UTXO cache delays things
and
not to be biased towards FIBRE header-only relay.. but an average full block relay, where latency and other things are included, such as average connections
and
all the other non-hashing functions, then come to a total

and guess what.. if they try to argue it's all milliseconds of non-hashing function...
then that debunks all the issues core extremists ever had against "big blocks"

P.S
im gonna laugh when he wants to nitpick a '2min' difference.. but cannot explain himself out of the 50-60 min difference he thinks exists
4300  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 13, 2017, 02:17:03 PM
I was thinking maybe there was some unique thing that happens when you stick a bunch of miners in a pool that doesn't happen if they were all solo mining.

actually there are a few things, which help.
in layman's terms (simplified, so dont nitpick literally)

say you had to go from "helloworld-0000001" to "helloworld-9999999", hashing each try, where the solution is somewhere in between
solo mining takes 10mill attempts and each participant does this
"helloworld-0000001" to "helloworld-9999999" hashing each try (very inefficient)
however, pools give each participant a slice:
A: "helloworld-0000001" to "helloworld-2499999" hashing each try
B: "helloworld-2500000" to "helloworld-4999999" hashing each try
C: "helloworld-5000000" to "helloworld-7499999" hashing each try
D: "helloworld-7500000" to "helloworld-9999999" hashing each try

which is efficient...
which at 1-d makes people think that killing POOLS takes 4x longer...
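the A-D split above can be sketched as plain range partitioning — a toy illustration of the post's "helloworld" model, not real pool software (function names here are made up for this sketch):

```python
# Toy sketch: a pool splits one shared 10,000,000-attempt search
# range across 4 workers, so each covers a quarter of the space.
def split_range(start, end, workers):
    """Divide [start, end) into `workers` contiguous chunks."""
    size = (end - start) // workers
    chunks = []
    for i in range(workers):
        lo = start + i * size
        hi = end if i == workers - 1 else lo + size
        chunks.append((lo, hi))
    return chunks

chunks = split_range(1, 10_000_000, 4)
for name, (lo, hi) in zip("ABCD", chunks):
    print(f"worker {name}: helloworld-{lo:07d} to helloworld-{hi - 1:07d}")
```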


but here is the failure...
pool U does "helloWORLD-0000001" to "helloWORLD-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool V does "HELLOworld-0000001" to "HELLOworld-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool W does "helloworld-0000001" to "helloworld-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool X does "HElloworld-0000001" to "HElloworld-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool Y does "HelloWorld-0000001" to "HelloWorld-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
pool Z does "HelLoWorLd-0000001" to "HelLoWorLd-9999999" hashing each try, 20min to get to 10mill, where a solve is somewhere in between (average 10min to win)
it takes each pool a similar time to get to 9999999, and each would get a solution in between should they not give up
and if you take away pool W,X,Y guess what..
pool Z doing "HelLoWorLd-0000001" to "HelLoWorLd-9999999" hashing each try would NOT suddenly take 4x longer to get to 9999999
because Z is not working on a quarter of the nonce space of the other pools!

because the work pool Z is doing 'HelLoWorLd' is not linked to the other 3 pools.

so 2 dimensionally
pool U does "helloWORLD-0000001" to "helloWORLD-9999999" 20min to get to 10mill (average 10min to win)
pool V does "HELLOworld-0000001" to "HELLOworld-9999999" 20min to get to 10mill (average 10min to win)
pool W does "helloworld-0000001" to "helloworld-9999999" 20min to get to 10mill (average 10min to win)
pool X does "HElloworld-0000001" to "HElloworld-9999999" 20min to get to 10mill (average 10min to win)
pool Y does "HelloWorld-0000001" to "HelloWorld-9999999" 20min to get to 10mill (average 10min to win)
pool Z does "HelLoWorLd-0000001" to "HelLoWorLd-9999999" 20min to get to 10mill (average 10min to win)

because they are not LOSING efficiency, pool Z doing "HelLoWorLd-0000001" to "HelLoWorLd-9999999" still takes 20min to get to 10mill (average 10min to win)
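the independence point can be checked numerically under the post's toy model (each pool scans its OWN 10,000,000-entry space, with the winning entry uniformly placed inside it) — this is a sketch of that model only, not real mining:

```python
import random

# Pool Z's solve time is a draw from Z's OWN search space; the draw
# never references pools W, X or Y, so removing them cannot slow Z.
ATTEMPTS = 10_000_000
rng = random.Random(1)

def z_solve_position():
    # where the solution sits inside Z's own "HelLoWorLd" space
    return rng.randrange(1, ATTEMPTS + 1)

with_six_pools = [z_solve_position() for _ in range(5000)]
z_alone        = [z_solve_position() for _ in range(5000)]

# both averages land near half the space (~5,000,000 attempts,
# i.e. ~10 min of a 20-min full scan), with or without the others
print(sum(with_six_pools) / 5000, sum(z_alone) / 5000)
```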


now do you want to know the mind-blowing part..
lets say we had 10 minutes of time
you would think that if pool W had 650peta and pool Z had 450peta
then pool Z = ~14.4 minutes due to the hash difference (10min x 650/450)

but
what if i told you that out of the 10 minutes, up to 2 minutes is wasted on propagation, latency, validation, utxo cache.. (note: not the hashing)
so
if pool W had 650peta
if pool Z had 450peta
pool Z = 11min33sec due to those other factors, because the hash calculating is not based on 10 minutes.. but only ~8ish minutes (not literally) of hashing occurring per new block to get from 0-9999999 (not literally)

now imagine Z did spv mining.. to save the seconds-to-2minutes of the non-hashing tasks (propagation, latency, validation, utxo cache.. note: not the hashing)
Z averages under 11min:33sec

so if Z went alone, his average would be UNDER 11min:33sec
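the 11min:33sec figure above follows from scaling only the hashing portion of the 10-minute average, not the ~2 minutes of overhead — a quick check of the post's own arithmetic (the 650/450 peta figures are the post's):

```python
# Only the hashing time scales with hashrate; propagation/validation
# overhead (~2 of the 10 minutes) does not.
W_HASHRATE = 650.0  # petahash
Z_HASHRATE = 450.0
HASHING_MINUTES = 8.0  # 10-minute average minus ~2 min of overhead

naive = 10.0 * W_HASHRATE / Z_HASHRATE       # if ALL 10 min scaled: ~14.4
z_minutes = HASHING_MINUTES * W_HASHRATE / Z_HASHRATE
mins, secs = int(z_minutes), round((z_minutes % 1) * 60)
print(f"naive: {naive:.1f}min, overhead-aware: {mins}min {secs}sec")
# -> naive: 14.4min, overhead-aware: 11min 33sec
```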


so while some are arguing that out of 6 blocks
U wins once, V wins once, W wins once, X wins once, Y wins once, Z wins once..
they want you to believe it takes 60 minutes per pool to solve a block (facepalm), because they only see W having 1 block in an hour

if you actually asked each pool not to give up/stale/orphan.. you would see the average is 10 minutes (spv: 10min average, or 11:33 if validate/propagate).. but only 1 out of 6 gets to win, thus only 1 gets to be seen.

but if you peel away what gets to be seen, and play out scenarios for the pools that are not seen (scenarios where they didn't give up).. you would see it's not 60 minutes
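a short simulation of the post's toy model makes the "only the winner is seen" point concrete — six pools, each with a solution uniformly placed in its own 20-minute scan (again, this models the post's scenario, not real difficulty-based mining):

```python
import random

# Every pool's OWN average solve time stays ~10 min, yet each pool
# wins only ~1 in 6 races -- which is why counting only the visible
# wins mis-reads as "60 minutes per pool".
rng = random.Random(7)
POOLS, SCAN_MINUTES, ROUNDS = 6, 20.0, 6000

wins = [0] * POOLS
totals = [0.0] * POOLS
for _ in range(ROUNDS):
    t = [rng.uniform(0, SCAN_MINUTES) for _ in range(POOLS)]
    for p in range(POOLS):
        totals[p] += t[p]
    wins[t.index(min(t))] += 1   # only the fastest pool is "seen"

print([round(tot / ROUNDS, 1) for tot in totals])  # each ~10 min
print([round(w / ROUNDS, 2) for w in wins])        # each ~1/6 of wins
```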