Bitcoin Forum
  Show Posts
8221  Other / Politics & Society / Re: Anti-Vax Nurse's Attempt To Prove COVID Vaccines Make People Magnetic Backfires on: June 16, 2021, 09:58:50 PM
i bet the woman at the legislative committee spent all morning lathering herself up with honey or other sticky stuff. and yet at the crucial moment it didn't stick

the amount of magnetic dust needed to attract a fridge door magnet, or even a plain piece of metal, would take up more millilitres than the vaccine dose itself.

meaning it's impossible to inject enough magnetic material to make things stick
the only explanation is sticky skin due to sweat or lotion

many of the people making these videos have since admitted they didn't do it to prove the vaccine had anything in it. they did it as a trick, to see how many idiots would believe it's actual magnetism

it seems badecker is such a devoted believer that even after being told how the magic trick was done, he still thinks it's magic

so badecker, if you ever go to a magic show and you see a woman lie down in the box..
i'm going to have to spoil it for you so you don't cry out murder..
.. she was not really sawn in half

i know you will waste a year of your life shouting that you witnessed a murder.. and a religious resurrection 2 minutes later.. but that's not what happened

so i have told you how the magic tricks happened. and i really hope you don't waste a year thinking it's some big conspiracy

vaccines don't have enough room in a syringe for magnetic dust that could cause the things you speak of
their skin is just sticky due to normal life stuff

now save yourself a year of wasted time by not being an idiot
8222  Bitcoin / Development & Technical Discussion / Re: Soft Fork | Can the users who didn't update their client still mine blocks? on: June 16, 2021, 09:11:56 PM
to truly correct the details of the segwit-bch split (i'm a btc maximalist, not an altnet lover)

this graph shows the actual flags in the actual blockchain, and thus real proof of what happened, rather than the social propaganda some people imply

the red line is the actual flag signalling the wish for segwit
even up to mid july, less than half were flagging for segwit

so what was then implemented was another flag asking the network whether it would ignore non-segwit (legacy) blocks, to make the segwit flag appear as 100%

that's the blue line. and when that got its lower threshold met in june, it triggered the ignoring/rejecting of legacy blocks, with only segwit-flagged blocks accepted from july 23rd onwards
and as you can see, the red line rose from ~45% to 100%

at the start of august segwit locked in, but the segwit transaction format rules were not activated yet
however the pools not flagging segwit, being rejected by the network, made their own block at the same time the segwit flag got to 100%
(they then started accepting legacy blocks again in september once segwit was locked in)
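as a rough illustration of the kind of flag counting behind that red line, here is a little sketch (my own example, not actual node code; the window of block versions below is invented, though segwit really did use version bit 1 with a 95% threshold under bip9):

Code:
# sketch of counting soft-fork signalling across a window of block versions
# (illustrative only; the example window below is invented)
SEGWIT_BIT = 1            # bip9 assigned bit 1 to segwit
THRESHOLD  = 0.95         # bip9 needed 95% of a 2016-block retarget window

def signals(version, bit):
    # a block signals for the soft fork if the given version bit is set
    return (version >> bit) & 1 == 1

def window_support(block_versions, bit=SEGWIT_BIT):
    flagged = sum(1 for v in block_versions if signals(v, bit))
    return flagged / len(block_versions)

# pretend 8-block window where only half the blocks flag bit 1
window = [0x20000002, 0x20000000, 0x20000002, 0x20000000,
          0x20000002, 0x20000000, 0x20000002, 0x20000000]
print(window_support(window) >= THRESHOLD)   # False - nowhere near lock-in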


the first block of BCH and the second block were not seconds apart (no 100k blocks made fast from the fork); the second block was HOURS later. and so they had to reduce their difficulty because blocks were taking hours
.....
so looking at the chart and the actual block data, you find the bch split occurred because those not flagging for segwit were pushed off the network (legacy blocks) before segwit actually got locked in.

the funnier part was that those opposing segwit did so simply by running normal unedited, un-updated software without any flags.. basically running the normal rules of 2009-2017
segwit however required new software with new flags and then new temporary rules to ignore normal legacy blocks
and these segwit advocates were blaming the non-segwitters, saying the non-segwitters didn't put certain replay protections into their code..
..um, old software already in use for years is the default software, so it's the new segwit software that should have had replay protection when segwit decided to fork away from default blocks

but hey, even after 4 years many will prefer to repeat the social drama propaganda about those opposing segwit, even though the blockchain data shows which flag caused which changes to which versions of nodes.


but in short: if the segwit admiration brigade had not implemented the blue-line flag to mandate a fork, segwit would probably have remained stagnant at under 50% and never have been activated

i predict showing real blockchain data of flag statistics will earn me another ban, because the truth hurts too much
8223  Bitcoin / Bitcoin Discussion / Re: Lightning Network -- Is it GOOD? on: June 16, 2021, 07:29:17 PM
best way to think about it:
bitcoin is gold. LN is bank notes

moving gold costs you, say, $2. meaning at a 1% fee tolerance, most people won't want to move less than $200 of gold without incurring more than 1% loss due to fees

however you can deposit your $200 of gold into a bank and have banknotes to trade in a banknote community where it only costs you maybe a 0.1 cent fee. so now you can spend 10c-$200

however, remember that to get a bank note, vaulting up your gold costs you $2 and unlocking your gold costs you $2
so it costs you $4 in total to use the bank note system. but then you can enjoy small spend denominations of 10c+ once in that community
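(to put rough numbers on that, here's a tiny sketch of the same arithmetic; the dollar figures are just the illustrative ones from the analogy above)

Code:
# the "don't pay more than 1% in fees" reasoning above, as a sketch
# (illustrative figures from the gold analogy, not real fee estimates)
onchain_fee   = 2.00     # cost to move the gold / open OR close a channel
fee_tolerance = 0.01     # most people accept ~1% lost to fees

min_onchain_spend = onchain_fee / fee_tolerance
print(min_onchain_spend)        # 200.0 -> smallest move worth doing on-chain

membership_cost = 2 * onchain_fee
print(membership_cost)          # 4.0 -> total to vault and later unvault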

..
the flaws are: it's not a pay-the-destination system where everyone audits your payment
it's a pay-your-nextdoor-neighbour who pays the milkman who pays his wife who works at the destination

so you have to be reliant on your
neighbour having funds to pay forward
milkman having funds to pay forward
wife having funds to pay forward

..
this bottlenecking of routes due to lack of funds then causes centralised hubs with large bank note accounts to route the payment and bypass the friend-friend-friend-friend route

next up: if everyone was to do a friend-to-friend network they would need 5 different friends to have 5 different possible routes to the destination, where each friend has 5 friends
this means you are not putting $200 into one bank account but putting $40 into 5 accounts each
meaning you can only spend $40 in any one direction

again this leads to people preferring to be linked to a hub (bank) that has large reserves, so that you can just have one route via them, let them manage the routing, and keep your whole $200 in one account

the next flaw is that you won't want to pay $4 to close and reopen your account every couple of months, as that's like having a $4 membership subscription just to use paypal
so what you do near the end of the month is swap out your bank notes.. not to exit with gold, but instead to exit and return with silver (ltc), as the close-reopen fee is a lot less.
so now you have given up your gold-pegged bank notes, given up your gold, and are now playing with silver (ltc) and silver-based bank notes.

and that's the whole game-theory hope of the LN kings.
they want people to deposit in gold and then swap out to silver so that the hubs get to keep the gold (btc)

(institutional banks did this with gold<>bank notes 1870-1970 and then didn't want to honour gold returns.
LN hopes to do this with btc<>millisats in ~10-20 years, where eventually millisats can only be converted to altcoin)
8224  Bitcoin / Development & Technical Discussion / Re: Soft Fork | Can the users who didn't update their client still mine blocks? on: June 16, 2021, 06:46:24 PM
unless the topic creator is a pool using an outdated stratum server.. this topic is redundant
no one solo mines from their personal PC, so it's a null topic

however, if the topic creator is a pool using outdated stratum software
there are a couple of scenarios

firstly, i'll explain: old software may not recognise new tx formats, but a little code trick is used whereby new transactions carry a flag that basically says 'accept it without validating it'
and thus any transaction with this flag won't be validated, but will be auto-treated as good.

and so if the transaction is good and meets the rules, the whole network validates and accepts it, and old software just accepts it without validation.. no chain split, as everyone is accepting
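(a very simplified illustration of that 'accept without validating' idea; purely my own sketch of the forward-compatibility pattern, not real node code)

Code:
# toy illustration of forward-compatible validation in an old node
# (my own sketch - not actual bitcoin core logic)
KNOWN_VERSIONS = {0}          # the only script/tx version this old node understands

def old_node_accepts(version, checks_pass):
    if version not in KNOWN_VERSIONS:
        return True           # unknown format: auto-treated as good, no validation
    return checks_pass        # known format: actually validated

print(old_node_accepts(1, checks_pass=False))   # True  - blindly accepted
print(old_node_accepts(0, checks_pass=False))   # False - properly rejected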

but if the transaction is dodgy (from a malicious pool that added in a bad tx),
the whole network rejects the block but the old software blindly accepts it
so suddenly the old software is at a different height with a different block-tip hash

however, in most cases the network then produces a good block at the same block height and builds on top of that. in which case the old software realises the new block is on a different parent, orphans its accepted (dodgy) block, and then uses the valid parent and child blocks to stay on the network, as that's the new highest blockheight/tip

the only time old software would continue to build on bad blocks the rest of the network has rejected is if the old software is getting blocks from a pool that's continually building on bad blocks, and the old software is not getting any different versions from any other peers
8225  Bitcoin / Development & Technical Discussion / Re: The Lightning Network FAQ on: June 16, 2021, 06:26:24 PM
By the way, Bitfinex's nodes refuse to accept channels lower than 0.04 BTC.

exchanges don't want to set up channels with every random user. they prefer having some hub manage the users, in a descending hierarchy where channel values get smaller the further from the centre they are, to ensure value can flow in the right direction without poorly funded nodes in the middle causing bottlenecks for routes

                                                                                         /<0:0.0016>userf1a
                                                                                        //<0:0.0016>userf1b
                                                   <0.008:0.008>factory1-<0:0.0016>userf1c
                                                  /                                     \\<0:0.0016>userf1d
                                                 /                                       \<0:0.0016>userf1e
                                                /
                                               /                                        /<0:0.0016>userf2a
                                              /                                        //<0:0.0016>userf2b
                                             /     <0.008:0.008>factory2-<0:0.0016>userf2c
                                            /    /                                    \\<0:0.0016>userf2d
                                           /   /                                       \<0:0.0016>userf2e
                                          /  /
                                         / /                                        /<0:0.0016>userf3a
                                        //                                        //<0:0.0016>userf3b
exchange<0.04:0.04>hub<--<0.008:0.008>factory3-<0:0.0016>userf3c
                                       \\                                        \\<0:0.0016>userf3d
                                        \ \                                       \<0:0.0016>userf3e
                                         \ \          
                                          \ \                                        /<0:0.0016>userf4a
                                           \  \                                     //<0:0.0016>userf4b
                                            \   <0.008:0.008>factory4-<0:0.0016>userf4c
                                             \                                     \\<0:0.0016>userf4d
                                              \                                     \<0:0.0016>userf4e
                                               \
                                                \                                        /<0:0.0016>userf5a
                                                 \                                      //<0:0.0016>userf5b
                                                   <0.008:0.008>factory5-<0:0.0016>userf5c
                                                                                       \\<0:0.0016>userf5d
                                                                                        \<0:0.0016>userf5e


this is how the lightning network will eventually lay itself out: a hierarchy, in a network of 5 peers each, with descending accepted value depending on where in the network hierarchy a given node sits

after all, if the 'hub' spot were instead held by an end user like userf5a with only 0.0016 to move to the exchange, NONE of the 5 factories could move their 0.008 to the exchange through that hub
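(a tiny sketch of why the weakest channel caps a whole route; the figures are just the ones from the diagram above, not any LN implementation's code)

Code:
# the weakest link caps a route - capacities taken from the diagram above
# (my own illustration, not real LN code)
def max_routable(channel_capacities):
    # a payment can never be bigger than the smallest channel it passes through
    return min(channel_capacities)

# user -> factory -> hub -> exchange
print(max_routable([0.0016, 0.008, 0.04]))   # 0.0016

# if the 'hub' spot only had a 0.0016 channel, even a factory with 0.008
# is capped at 0.0016 towards the exchange
print(max_routable([0.008, 0.0016]))         # 0.0016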
8226  Bitcoin / Bitcoin Discussion / Re: Quantum Computing and wallet security? on: June 16, 2021, 05:07:14 PM
in layman's terms:

imagine you are blindfolded and dropped off in the centre of the city. you are given directions in binary (11 = right then down.. 00 = left then up), only two options.
you follow this route of left-up or right-down to get to a destination

however, if you only understand left-up or right-down, but someone else gives you a route using 4 directions:
0=leftdown
1=leftup
2=rightdown
3=rightup
you will buzz out and not understand, and end up just not moving ("what is left-down or right-up?" or "i'm binary, what is 2 and what is 3?")

quantum can find many ways to the destination using a 4-direction method. but the problem is that giving a 4-direction route to a 2-direction walker just does not work.

the best a quantum computer can do is run 2x 2 directions, try every route possible at 2x to get to the destination, and then hand that 2-direction path to the walker.

so it's just doing 2 operations per 2 qubits instead of 1 operation per 2 bits
so if it took 10 billion years in binary, it would take 5 billion years.. still too long to worry about

slightly more technical:
if the rules of the cryptography are first: right then down,
and then asymmetrically: left then up,
quantum is limited to that. it can't, for instance, do left-down or right-up
if it wants to be recognised by the binary rules. that limits quantum's opportunity to do multiple directions at once, leaving it only able to do 2 directions, 2 at a time

quantum is great for allowing new cryptography that uses more than 2 directions at a time. but that's new cryptography with new rules
it's not so great at only using 2 directions at a time, as then it's no better than just having 2x of 2
..
slightly more technical again:
binary has 2 values: 0,1
quantum (in this example) has 4: 0,1,2,3
if the binary map was:  first bit  0=left  1=right
                        second bit 0=up    1=down
where asymmetrically the rule for the 2 bits had to be 00 or 11
(left up) (right down)
and if the quantum map was: qubit 0=left
                                  1=right
                                  2=up
                                  3=down
sending a solution of 1302 would confuse binary
what is 3, what is 2?
translating 1302 to 1100 would help
(right down) (left up)
but if quantum got to the same destination as 1100 by going 10 01, that breaks the rule of 00 or 11
(right up)=error  (left down)=error

to stay within the rules, what quantum can do, and only do, is:
0=right down
1=left up
2=right down
3=left up
and then instead of 2 binary bits it only uses 1 qubit per attempt
so 12 = two attempts, one doing 1 = translates to 00, the second doing 2 = translates to 11

whereas a binary system would need 4 bits to do the same 2 attempts: 0011
8227  Bitcoin / Development & Technical Discussion / Re: Research Proposal to classify UTXOs into different groups on: June 16, 2021, 04:26:19 PM
and now you add in more buzzwords.. divide and conquer
i get it: splitting the datasets

but having all the datasets in a single file, all on the hard drive, is not going to save any hard drive wear and tear
it actually ends up costing more cpu resources and more file-opening operations

the thing is, while you are for instance putting the coinbase into the lost forest for its first 100 heights,
you're taking the coinbase from the previous -100 height out of the lost forest and putting it into the young forest,
and also taking a coinbase from an even older height out of the young forest to put into the old forest

that's 3x more operations happening to every coinbase before they are even spent, just to shuffle them around by age
and then when a coinbase is spent, you have to if-statement its age to work out which forest it might be in, then seek that forest to make sure it's still a utxo
so that's like a dozen operations

rather than just: get height, find height, check if the coinbase is still a utxo... 3 operations

the same goes for all UTXO.. newly confirmed into the young forest, later checked at an older age to put them into the old forest,
then when being spent, check the age to find out which forest to look in
....
it seems you spent too much effort trying to push buzzwords and metaphoric forests without running scenarios of utility and computation to see if it actually results in efficiency

many people have done the same with blockchains: thinking all databases in the world should use blockchains, without realising some database structures don't need them and would actually waste more time/resources having them. but they still try to push them 'coz it's blockchain'


think about it. other parts of your separate datasets include a lost forest of burn addresses
to separate out the burn addresses and put their utxo into a lost forest forever, you have to put every transaction of every block through a check procedure to see if it pays a burn address.
this adds (if there are 10 burn addresses) around 30 cpu operations per tx (2 utxo) per block just to figure out if a utxo is a burn address
1:  for 0 to X of txcount of new block
2:    for 0 to X of outputs of tx get address
3:         is it 1burnaddress1
..          ....
13:       is it 1anotherburn10
14:            if yes
15:               put in lost forest
16:            else
17:               put in new forest
18:            end
19:       end
20:  end

whereby if each tx has, say, 2 utxo, it's done 1->13 twice and 14&15 or 16&17 twice, so that's ~30

but without your is-it-a-burn check it would just add the utxo to the dataset.. 1 operation
..
and that's without being 100% sure whether it truly is a list of 10 burn addresses that you have manually listed, or just 10 vanity addresses that took a lot of brute force to make
..
so again, without concentrating on making a post that mentions a buzzword, can you specifically lay out the forests in a root-tree-branch-leaf-fruit format
eg
blockheight_txid_output[X]_..

then use basic pseudocode of operations and if-statements to count the computations required
and run some scenarios, numbers, examples of computational saving/increase to actually run your idea

.. heck, you haven't even made it clear what age the young/old threshold would be. you linked some people saying 15-ish months and then said elsewhere a younger threshold.

at least put in some parameters and examples and some math, and not just buzzwords
(i can put up examples and you can argue them, but that's because you're not giving any defined thresholds or examples yourself.. which is where the misunderstanding is)
8228  Bitcoin / Development & Technical Discussion / Re: Research Proposal to classify UTXOs into different groups on: June 16, 2021, 02:40:32 AM
very true
but it seems you are missing its fundamental usage, and are instead deciding you've found something new that should be used everywhere. much like how many people are overusing 'blockchain' in places that don't need it.

for instance with merkle trees it's the same analogy as 'ancestors': grandparent, parent, child is like root, tree, branch

however, when you say you want to 'tree' and 'forest' all the burn addresses..
in reality there is no relationship between the burn addresses.
you would have to manually add a lot of blacklisted random addresses into the code to have them rooted and then tree their utxo... but then you are making long lists and still not being 100% sure the addresses you manually added are truly burn addresses and not just lucky vanity addresses

you also say you want to "tree" transaction hashes
sorry, but transaction hashes have no direct relationship to other transactions

yes, blocks have relationships with the transactions within them.
but a tx in one block has no relationship with a transaction in another block, because once an output is spent from one block to become a utxo in another block, that relationship is cut (the previous utxo is spent, thus no longer in the utxo set, thus no tie)
transactions within a block have no direct relationship with each other, but they do have a relationship with the parent block.
thus you're not 'treeing' a transaction hash, you are 'treeing' a blockheight

...
anyway
getting to the point: having your 2 forests in one file becomes wasted efficiency, as having to open and read the file is still having to open and read the file. however, truly separating them into, for instance, ram vs hard drive might have some efficiency gain.
but then we circle back to the efficiency loss of identifying and then splitting blocks of transactions into 2 sets

EG instead of
seek 3000 inputs from set.
 if all valid.
   delete 3000 records(spent inputs)
   add 6000 records(new unspent outputs)
end

your idea is
read 3000 and split them 2700 in 1 and 300 in other
seek 2700 record from A
 if all valid
    delete 2700
    add 5400
 end
seek 300 from B
  if valid
    delete 300
    add 600
 end
seek current block minus 80k(100 in your case)
 move any utxo found at threshold from one set to other

see how many more operations there are even at its leanest. and that's without your manually added blacklists, nor any other groupings

..
but anyway
organising utxo in relationships of
blockheight
                |\_tx1
                |      |\_output1
                |       \_output2
                |
                |\_tx2
                |      |\_output1
                |       \_output2

works better for many reasons than your
"only their hashes r linked (connected) together in some manner using additional data structure usually called Merkle Tree"
"txhash"
           |\_address
           |\_address

for instance,
having a block's coinbase separate from the same block's transactions is not a tree for the coinbase,
because that coinbase then has no relationship to anything. so there is no tree

trying to form relationships based on some random social analysis of spending habits requires more cpu resources to create these newly imagined relationships, like the age threshold or the burn-address reference

the amount of cpu computation needed then outweighs any gains from having grouped datasets

in short, you appear to want to make merkle trees for the sake of thinking merkle trees should be in all datasets, for the sake of 'WOW merkle trees are cool'
much like many people before you thought all databases should be blockchains

..
unless you can express the relationships of the data in a way that shows clear usage and does not require huge CPU computation to form these relationships/adoptions/divorces (replanting/seeding/deforesting forests),
then please don't just throw around the word 'trees' for the sake of it

EG organising by blockheight makes it easy to see the relationships, and clearly doesn't need much computation to group the children (branches) to the parent (tree) to the grandparent (root)
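(to make that blockheight grouping concrete, here's a tiny sketch of the sort of structure i mean; the txids and values are invented placeholders, this isn't anyone's actual implementation)

Code:
# grouping utxos under their natural parent, the block height
# (txids/values are invented placeholders)
utxo_set = {
    687700: {                              # root: block height
        "txid_aaaa": {0: 0.5, 1: 0.25},    # branch: txid; fruit: output index -> value
        "txid_bbbb": {0: 1.0},
    },
}

def is_unspent(height, txid, vout):
    # get height, find tx, check the output is still there.. 3 lookups
    return vout in utxo_set.get(height, {}).get(txid, {})

def spend(height, txid, vout):
    if is_unspent(height, txid, vout):
        del utxo_set[height][txid][vout]   # prune the fruit once spent

print(is_unspent(687700, "txid_aaaa", 0))  # True
spend(687700, "txid_aaaa", 0)
print(is_unspent(687700, "txid_aaaa", 0))  # False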
8229  Bitcoin / Development & Technical Discussion / Re: The Lightning Network FAQ on: June 15, 2021, 10:25:08 PM
there is a limit to bitcoin hoarding/peg vaulting and thus to the network's token/htlc utility
let's go with the cheapest fee/lowest denomination to get a maximum possible number of users

so let's start
onchain fee: imagine 1 vbyte = 1 sat
2-in 2-out is ~209 vbytes, and you need one spend to open and one to close. so 418 sat minimum onchain for LN membership/inclusion/access
(rough numbers, don't nitpick)

people want to believe in fees being 1% of the amount they want to move, so the minimum amount they would vault per channel would be
41,800 sat

that is then ~2392 such allotments per btc (100,000,000/41,800)
imagine all 21 mill coins are in active circulation and not dead/burned/lost
so that's 50,239,234,450 sharable allotments
50 billion opportunities to spend/vault

it might seem a lot, but if users were to make only 10 spends or vault up 10 allotments to have 10 pegs/channels open, that's only 5 billion users.

but even more so, with life things are unfair: some citizen earns minimum wage of 1x allotments while some manager/ceo is earning 100x allotments

so if there are, say, 25 million ceos with 100x the allotments, it only leaves enough for 2.5 billion people with 1x allotments (10 channels),
thus only ~2.525 billion users.

which is about 33% of the population, where that 2.5 bill population only ever has 10 channels or 10 allotments
at today's price
that's like having a third of the population only have ~$160 of crypto each

..
i made some lowball assumptions just to show the best case. EG if fees were 10 sat/byte then it's only ~250m users, with ~$1,600 each of crypto ever.. and ~$16 of open/close fees to vault/unvault it

yep: fewer people than the population of america alone, each only getting to play with about a month's salary worth of crypto.. not much

i'll leave others to do some maths on splitting circulation by population.
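(for anyone who wants to check or tweak those rough numbers, here is the arithmetic as a quick sketch, using the same lowball assumptions as above)

Code:
# the rough arithmetic above as a sketch (lowball assumptions: 1 sat/vbyte,
# ~209 vbyte per open/close tx, ~1% fee tolerance, 10 channels per user)
SATS_PER_BTC = 100_000_000

fee_rate      = 1                                   # sat per vbyte
tx_vbytes     = 209                                 # rough 2-in 2-out size
open_close    = 2 * fee_rate * tx_vbytes            # 418 sat to get in and out
min_allotment = open_close * 100                    # keep fees at ~1% -> 41,800 sat

allotments_per_btc = SATS_PER_BTC / min_allotment   # ~2392
total_allotments   = allotments_per_btc * 21_000_000

print(int(total_allotments))        # ~50.2 billion sharable allotments
print(int(total_allotments / 10))   # ~5 billion users at 10 allotments each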
8230  Bitcoin / Development & Technical Discussion / Re: Research Proposal to classify UTXOs into different groups on: June 15, 2021, 03:37:39 PM
if you stop concerning yourself with buzzwords such as trees and forests.. and actually speak using normal words like datasets, relationships, linked together

then it might make things clearer

because although you mention a single database file, you then tangent off to say separate sets.
the logical assumption, when you mention not having to look at the old forest (old coin set), is that these 2 groups should be treated as completely separate,
where logically one is viewed constantly and the other is not.

as for what you mean by 'the coinbase stays longer': i presume, unlike the people you linked in your other ideas, you are preferring to have a 100-block group of trees and everything else in another forest set, blocks 0-687k

which is not going to cause much efficiency gain, because that <1% is still in the same database, thus taking just about as much time to read whether it's in the 1% set or the 99% set
(it takes the same operational speed to find tree root block 687703 as it does to find tree root block 10)

i tried to skip a few steps ahead and predict extra efficiencies. but it seems i'll have to step backwards again and mention some other flaws.

flaws
1. all the data is in one database on the hard drive (hard drive is slower than ram)
2. data sets of 1%/99% are not going to do much (both on the hard drive, so no real gain)
3. extra code is needed to dig up the tree and replant it, which is more wasted resource
4. a transaction with a new utxo in block 687700 has NO relationship with a utxo in block 687701
5. a 'burn address' has no relationship with another 'burn address'

so your buzzwording of trees for utxo hash linking is meaningless, unless you are talking about trees where the root is a block number, the branches are txids, the leaves are the spends (inputs) and the fruit is the new utxo (outputs),
whereby you're then pruning the leaves and fruit all the time

because you can't really 'tree' together a utxo address in 1 tx with another utxo address in another tx
for instance the 10 main burn addresses have absolutely no relationship with each other
there is no tree, no taint, no pattern to link them..
no code can identify them, meaning you have to manually select them and tag them as burn addresses
yes, you can tag/blacklist them as burn utxos to ignore and never filter for, but you seem to want to plant trees everywhere just by using tree buzzwords, without understanding it or explaining it.

it's like you learned the new word of the month, blockchain, then learned merkle tree, and now you want everything to have a blockchain or merkle tree in it even if not needed, just for buzzword's sake of looking smart

anyway, getting back to the point.
having the young forest of, say, your 100 blocks (the coinbase maturity window where a coinbase can't be spent),
you are not saving any resources, because when a coinbase reaches 100 confirms you then have to move it from your old forest to your young forest because it's now spendable

basically, if the newest blockheight now is 687704, although you're not putting that block's coinbase in the young forest (as it can't be spent now),
what you are doing is
putting block 687704's coinbase in the old forest as not spendable
putting block 687604's coinbase into the young forest as it can now be spent

but soon that 687604 coinbase, if not spent, will age out, be deemed inactive and be put back into the old forest again

all you are doing is wasting more resource moving and replanting trees

i tried to be more reasonable with an 18-month threshold for inactivity, and showed you the results of any efficiency gains/losses (64%/36%)
and i skipped ahead a few steps by imagining extra efficiencies, like not using the hard drive as much for the new forest by keeping the 2 forests separate: one stored in ram, one on the hard drive
because just having 2 sets in one database does not really change much compared to having one on hard drive vs one in ram

but even so, after trying to add as much efficiency as possible, you're still losing that efficiency through the extra operations of replanting the trees between the forests and checking the ages and other stuff..
whether it's one database on the hard drive holding 2 sets, or ram & hard drive for the 2 sets, the very fact of having 2 sets works against you when swapping trees in and out between them

EG if the utxo set has 85 million utxos,
splitting the sets = more operations to check them and plant them into the correct set,
but the hard drive file still ends up being 85 mill utxo, just now slightly more bloated and more cpu intensive to read
8231  Bitcoin / Development & Technical Discussion / Re: Research Proposal to classify UTXOs into different groups on: June 15, 2021, 10:19:14 AM
i understand the current idea of this topic is 2 datasets.
some call them a tree or forest or other buzzwords.. but they are datasets (sets of data)

i was predicting that within a few weeks you would come up with a new idea of a 3rd set where you split the data up further (separating the address from the value, txid, etc)

but anyway, sticking with the current idea.

if you split the utxo set by just age, say either:
18 month age: that's (at the current set) 64% >18 months.. 36% <18 months
12 month age: that's (at the current set) 69% >12 months.. 31% <12 months
6 month age: that's (at the current set) 78% >6 months.. 22% <6 months
1 day age: that's (at the current set) 99% >1 day.. 1% <1 day


so while some of your links showed other people looking at more than a 12-month area, i assumed 18 months

so let's go with this 36% ram utility / 64% ram efficiency model
so that there is less hard drive wear

let's assume transactions are 1-in 2-out
let's assume 10% of spends are old coins
let's assume 3k tx per block (1 in spent, 2 out unspent (3000 in, 6000 out))
..

just the very fact that you have to run extra operations just to separate them into sets (forests):
(i'm dumbing down the operations to pseudocode (don't nitpick))

Code:
  create temp batch list 'ram' (spends under 80k coin age, new utxo)
  create temp batch list 'hd' (old spends in current block, aged out spends from ram set)
    for each tx of block (0 to ~3000)
         if input < 80k confirm
             list 'Ram' [check then del record](usually 2700 in list(90%))
         else
             list 'Hd' [check then del record](usually 300 in list(10%))
         end if
         output(s) list 'ram' [add record]
    end loop

validate list 'ram' to ram(young forest) utxoset(2700 reads)
delete records(2700 writes)

check ram utxoset for unspents in oldest block (now minus 80k (aged out and need to be in hd dataset))
add to list 'Hd' [add record] (about 3456(64% of 90% of 6000))
remove(3456 old trees) from ram utxo(young forest)

open hard drive utxo file(old forest)
validate list 'hd' (300 reads) to hard drive utxoset
delete records(300)
add aged out unspent (write 3456 old trees)
close file

add new unspent outputs to ram utxoset(~6000 writes young forest)
purge lists 'ram' & 'hd'

as opposed to
Code:
open hard drive utxo file  
    for each tx of block (0 to ~3000)
       check input is in file
       del input
       add 2 outputs
    end loop
close file

what you find is that although a block is on average 90% new / 10% old utxo spent, plus 6000 new unspent,
eventually in 18 months there might be 64% of that 90% (3456) still unspent that need to be shifted from ram to hard drive every block (shifted from one forest to the other)
so you're still having to access the hard drive and open the file every block.

yes, i can see that the hard drive, which previously did
  read 3000
  write: del 3000, add 6000
now becomes
  read 300
  write: del 300, add 3456

while ram becomes
  read 2700
  read 3456
  write: del 2700, del 3456, add 6000

the amount of hard drive reads drops by ~90%
the amount of hard drive writes drops by ~60%
the amount of ram reads and writes goes up from zero
but the total number of operations has increased from roughly 12000 to ~27000
so it's now ~2.3x the cpu utility to do this

trying to save hard drive wear (single-forest wear) at the cost of more cpu burn is not a great idea

.. i see other flaws, but i won't get into those just yet
8232  Bitcoin / Development & Technical Discussion / Re: The Lightning Network FAQ on: June 14, 2021, 09:39:53 PM
needing to run a full LN node to use LN is like telling someone they need to take their desktop computer with them when they go to buy a coffee at starbucks

..
what ends up happening is that people will deposit funds into central payment processors (LN factories), so users can just have lite wallets and trust that their watchtower won't mess around while they are not eyes-glued to their app

LN will not get rid of payment processors, as the whole investment is to offer these payment processors a nice new niche of customers to manage.

i know many people will say that LN will be a bright glittery network of hop-model nodes, all independent and united in allowing their funds to be spent as routers

but the reality will be centralised hubs with reserve-sharing channels between hubs, offering cheap fast routes via the hubs and their own special app, so that users don't have to manage their own funds.

by this i'm not being anti-LN.
i'm thinking critically about the real-life end-game scenario
8233  Bitcoin / Development & Technical Discussion / Re: Research Proposal to classify UTXOs into different groups on: June 14, 2021, 09:20:32 PM
First of all it is important that we actually have the categories of UTXOs named beforehand, which boils down to classifying them by script. Right now I can only think of coinbase scripts, OP_RETURN scripts and everything else so that makes three types.

My question is how will you optimize UTXO storage for a specific kind of script, e.g. making OP_RETURN UTXOs more compact? Or more specifically, where in the merkle forest will this try to place each of these three script types (and whether there are more types you have in mind to categorize with)


the standard bitcoin-core UTXO (chainstate db) record is usually laid out like
EG
FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF   <- TXID (the key)

01 | 04 | 8358 | 00 | 816115944e077fe7c803cfa57f29b36bf87c1d35 | 8bb85e
version | code | value | tx type | address | blockheight

from what i can fathom of the topic creator's idea (from posts across many forum subcategories),
it is that instead of this one bloated database

you have 2-3 databases, mainly:
1. utxo set from over 80k blocks ago (older than 18 months): blocks 0-607600
2. utxo set from under 80k blocks ago (less than 18 months old): blocks 607601-687601

whereby it deems old hoarded coins older than 18 months less likely to move soon,
and so they don't need to be sifted through every time

his next point, adding in the utreexo idea from another topic,
is to organise the records.

i presume:
blockheight, which then branches off to the transactions within that blockheight, and then branches off to the utxo of each transaction

i am going to predict his next idea in a few weeks will be to have 3 databases,
whereby it's still 2 databases of under/over 18 months,
but instead of holding the txid, ver, code, type, value, address, blockheight,
it's simply
blockheight:address:index

where that index then seeks a third database for the txid, ver, code, type, value

so that when checking whether an input is actually a UTXO it just seeks whether the address is in the blockheight,
and if found, then seeks the extra data in the 3rd database,
where blockheight+index finds the corresponding branch for the details of the utxo

..

in short, the recent-blocks database (2.) is always open (in ram),
whereby, only holding 80k blocks, it's very lean:
4 byte blockheight * 80k
20 byte address * ~3000 * 80000
2 byte index * 3000 * 80000

the other databases are stored on the hard drive and are only opened when needed,
thus reducing the amount of ram used by not storing the whole blockchain of utxos (EG 10m instead of 80 mill utxo)
and not using as many hard drive reads, thus less wear on hard drives
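(just to make the lookup flow of that 2-set idea concrete, here is a rough sketch of how i read his proposal; the set names, window size and outpoints are my own invented placeholders, not his code)

Code:
# rough sketch of the proposed recent/old split, as i read it (my illustration only)
recent_set = {}       # utxos from the last ~80k blocks, kept in ram
old_set    = {}       # everything older, stand-in for the hard-drive database

RECENT_WINDOW = 80_000

def add_utxo(height, outpoint, data, tip_height):
    target = recent_set if tip_height - height < RECENT_WINDOW else old_set
    target[outpoint] = (height, data)

def is_utxo(outpoint):
    # check the small ram set first; only touch the 'hard drive' set on a miss
    if outpoint in recent_set:
        return True
    return outpoint in old_set

add_utxo(687_600, "txid_aaaa:0", "dummy", tip_height=687_601)
print(is_utxo("txid_aaaa:0"))    # True - found without touching the old set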

the problems with this:
utxos can be formed and spent in the same block
though it's more organised to find a tx by its blockheight, it falls foul of duplication errors and other issues
some people still re-use addresses, so identifying each utxo independently is crucial

as for other methods of treeing utxos together: well, most utxos are independent of each other. you can't really cluster transactions together based on whatever social analysis the topic creator tries.
even with spends originating from exchanges, once distributed you can't really code if-statements to know whether a utxo is going to be hoarded for 2 years or spent in less time
8234  Other / Politics & Society / Re: Do you trust the co-vid19 vaccine ? on: June 14, 2021, 06:24:37 PM
if you think tash's links are 'hard work' to post.. then dang, you have no idea what work is
idiots chatting to each other and passing each other stuff is about as much hard work as a 5yo's birthday party
8235  Other / Politics & Society / Re: Putin and Biden are Only true leaders on: June 13, 2021, 04:12:53 PM
plenty of people argued with trump.
he pretended he was a dictator, declaring victory in 2020, and the world laughed at him and told him to just give up and play golf.. so he did

putin, while yes a strong leader of russia, does not have much sway or voice in world matters. he has been sanctioned and told to sit on the naughty step so many times that when he tries to set out ideas for world agendas, they turn away and treat him like a toddler asking for cookies who's left having a tantrum

biden, although the leader of the USA, is still new to the role so is stepping lightly. but the other countries do listen to him.

france/italy.. well, they just turn up to the dinner table to appear like they are part of the political family. france's/italy's real leader is the EU; france/italy can't do anything without the EU's say-so

the UK is the go-between for the US and EU, so the EU has to be friendly with the UK and the US has to be friendly with the UK
so it ends up that the UK becomes the middleman/mediator/decision-maker for what countries should agree on
8236  Bitcoin / Bitcoin Discussion / Re: Using renewable resources in Bitcoin Mining on: June 13, 2021, 09:04:00 AM
ever since 2014,
the majority of bitcoin mining farms have been set up in renewable-energy regions, due to the deals they could do with power companies.

so for mining farms it's been a non-issue

the complaints about bitcoin mining and climate change are not about the large pools/farms, but instead the hobby miners who can't just move house to a new region and so use their domestic electricity supply where they live, which mostly isn't in renewable regions.

it's the hobby miners overloading their street's circuit capacity and causing brownouts in residential areas that governments are trying to tackle.. not the farms in renewable regions.

what you will find is that most altcoins don't do 'farming'/large mining projects; their mining is instead done by hobby miners in residential neighbourhoods using residential-grade circuit breakers, in regions with fossil fuel power stations

thus altcoins are the coins more likely to be fossil fuel burners.. not bitcoin
8237  Bitcoin / Development & Technical Discussion / Re: Increasing outgoing connection limit on: June 13, 2021, 08:33:59 AM
it seems the topic creator wants 'fast' tx relaying so he can gain a few milliseconds' advantage in seeing bitcoin deposits going into exchanges before they are even confirmed

what he doesn't realise is that although a whale might be depositing funds into an exchange, it may not result in an on-exchange market order that day or week
also, the deposit does not reveal whether those funds lead to a long or a short order, so it becomes a redundant metric to use
lastly, that whale tx being relayed may not actually be a deposit resulting in an on-exchange market order; it could also just be the exchange itself re-organising its cold-store reserves. thus analysing tx relay and the blockchain is useless for predicting in-exchange activity

i found this out myself: depositing funds into an exchange flagged me up as a whale and suddenly i had to fill in some KYC stuff plus a 48-hour admin wait.. to get level X utility within the exchange.
so the topic creator stressing over a few milliseconds of tx relay won't help him predict when trades are going to happen, because a deposit does not = an instant trade on the exchange,
and a deposit does not reveal which direction a trader wants to swing (long or short)

Someone can connect to 10,000 nodes, submit a transaction and disconnect. Relaying is optional for this purpose.
Have you heard of https://en.wikipedia.org/wiki/Front_running and https://www.investopedia.com/terms/p/paymentoforderflow.asp ?
Those delays enable exactly that. This is how many rigged markets screw over participants.

the disconnect, search-ip-list, connect, handshake, send-tx, disconnect cycle of operations wastes more seconds reaching 83k peers than the time it takes to just pass data through the network the normal way, via peer-to-peer relay, to 83k

also, the peers you wish to connect to must have available slots to even allow a connection from a new peer, and must not already have ban-hammered you for spam connections from previous tries. so it's not something that would last long if you were to try it.
you end up just wasting time and bandwidth and eventually get treated like a bad node the network refuses to connect with

.. in short, the very worst case of a max of 6 hops and a max 2-second delay per hop in your scenario is 12 seconds to get from the originator to all nodes.
batch connect-send-disconnect, repeated for all 83k nodes, would end up taking over 12 seconds to attempt, and there's still no guarantee you reached all 83k nodes. thus redundant

sending a tx 83k times in at least 12 seconds is more of a waste than peer-to-peer network relay of up to 12 seconds, where your node only sends the tx ~10 times
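(just to illustrate the order of magnitude: the per-connection time below is an invented illustrative figure, not a measurement, and it ignores doing connections in parallel)

Code:
# order-of-magnitude comparison (per-cycle time is an invented illustrative figure)
nodes          = 83_000
per_cycle_secs = 0.05                        # assumed connect->handshake->send->disconnect
spam_everyone  = nodes * per_cycle_secs      # contacting every node yourself, one by one
normal_relay   = 6 * 2                       # ~6 hops at a ~2 second relay delay each

print(spam_everyone)   # 4150.0 seconds
print(normal_relay)    # 12 seconds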
8238  Bitcoin / Development & Technical Discussion / Re: Increasing outgoing connection limit on: June 12, 2021, 03:58:13 PM
because you're not just connecting to 10k nodes to send 1 transaction a day;
you're relaying the entire network's transactions and blocks, so it's very, very bandwidth heavy

also, every node would have to accept you as their peer.

..
i still can't see why you are so headstrong about having an unconfirmed transaction be seen any faster
it's a meaningless effort as you're still waiting ~10 mins to get a confirm

if it's because you want to double-spend against services that accept zero confirms, by playing them,
that can be achieved without needing 10,000 peers connected to your node.
8239  Bitcoin / Development & Technical Discussion / Re: Increasing outgoing connection limit on: June 12, 2021, 03:41:30 PM
I don't understand how could it be miliseconds for propagation if every hop delays my transaction by 2 seconds. Can you help me understand?

Also what are whitelisted peers? Isn't it a way to play favorites? Why don't people group together and whitelist each other giving them priority on transaction propagation, and avoid connection problems if network is under load?

i don't want to over-complicate something so insignificant.. but if you really want to know:

if the network has 83k nodes

imagine it requires 5 hops if you have 9 peers and the rest of the network defaults to 8
           *8  *8    *8     *8
9        -72-576-4608-36864
hops      1   2     3         4
at the 4th hop it's like 36k nodes. still not 83k, so it needs another hop.. right?
so the very first packet those nodes send to their very first peer makes it ~72k. still not enough
so milliseconds later they send to their 2nd peer and now it's like 108k

meaning at the 4th hop, getting to all 83k network nodes requires all fourth-hop nodes to send to at least 1 peer and about a third of them to send to a second peer

so although each hop is 2 seconds... peers within a hop are only milliseconds apart

whereas if you have 12 peers and the network defaults to 8
           *8   *8   *8     *8
12      -96-768-6144-49152
hops     1    2     3        4
at the 4th hop 49k peers see the tx, so it only requires another 34k nodes, which is less than a full round of 1 extra peer each
meaning it can reach all 83k nodes within the first of those extra per-peer sends.. thus milliseconds faster

..
yet you are now personally broadcasting 33% more data, as you have 12 peers vs 9, while only gaining milliseconds of network reach (a quick sketch of this hop arithmetic is below)
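(the sketch assumes the same idealised even spread as the numbers above; purely illustrative, not how a real node measures anything)

Code:
# idealised hop-reach arithmetic (same caveat as the note below: real topology
# is clustered and random, so treat these as ballpark figures only)
def reach_per_hop(own_peers, default_peers=8, hops=5):
    reached = own_peers
    counts = [reached]
    for _ in range(hops - 1):
        reached *= default_peers       # every newly reached node relays onward
        counts.append(reached)
    return counts

print(reach_per_hop(9))    # [9, 72, 576, 4608, 36864]
print(reach_per_hop(12))   # [12, 96, 768, 6144, 49152]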

also to note: whilst i'm using a patterned, efficient network spread (8-degrees-of-separation model) for easy hop demonstration, where all nodes are uniquely and precisely positioned between their peers to make a 5-hop example,
the reality is that nodes are randomly and preferentially connected, so it's not an even, efficient web of layers but clusters and deserts of nodes.
(5k nodes in one section of the network may be clustered and doubly connected within each other..)
(5k nodes in another section may be distantly and sparsely distributed)

making all this redundant and meaningless.. hence why it's so insignificant to even bother with yourself
the important thing would be if the entire network were to change the defaults
but even then that's only going to be a 2-second difference
..

all in all.. milliseconds or 2 seconds is meaningless for unconfirmed receipt, because unconfirmed txs are meaningless until confirmed ~10 mins (or more) later

...
as for preferred peers: you can whitelist peers that have stable connections and never cause issues by sending you bad data, whereas normally peers drop and change and just become random connections
not having requests every millisecond just means you're not DDoSing your peer and they're not DDoSing you.

again, milliseconds or seconds for an unconfirmed tx is a meaningless concern
8240  Bitcoin / Development & Technical Discussion / Re: Increasing outgoing connection limit on: June 12, 2021, 09:53:51 AM
Please look at https://github.com/bitcoin/bitcoin/blob/55a156fca08713b020aafef91f40df8ce4bc3cae/src/net_processing.cpp#L87
/** How long to delay requesting transactions from non-preferred peers */
static constexpr auto NONPREF_PEER_TX_DELAY = std::chrono::seconds{2};

A bit off-topic, but what does non-preferred peers mean?

Also, what "financial situation" is being impacted by the 8 second delay?

My guess is 0-confirmation on physical store with long queue.
(explaining the offtopic question)

non-preferred peers are peers that are not whitelisted.
in another topic i raised the point that playing around with the code to delay when transactions are sent out can be used as a pool exploit to bottleneck the blockchain for its competitors.
basically a pool gives itself a head start on the next block

EG a pool creates a block but does not pre-relay its chosen transactions (the pool's own utxo spends that it doesn't relay). then, when it solves a block, it can send the compact block out and send the network into a frenzy of requesting the unseen transactions (as the txs are not in the network's distributed mempools due to no pre-relay) (2-second delay)

and at the transaction request, the block creator further delays supplying the transactions, thus delaying the ability of competing pool nodes to validate the block (a few more seconds)

giving the pool many seconds of advantage to start on its next block while the competitors are in this limbo of waiting for transactions to validate the broadcast compact block

the pool does not need to make an invalid block. it just needs to delay the other pools from building on their own blocks whilst the exploiting pool is making its next block

note: it's not a guaranteed strategy, as there is a risk of a competing pool solving the same blockheight as the exploiting pool, thus making the competing block the winner.
..
anyway, on topic:
as many posters have said, the OP adding more peers does not cause any significant network effect on tx relay speeds. it only causes excess bandwidth use by the OP's node. the time gain is milliseconds/no gain, due to the same number of hops the other peers and their branches do to relay the OP's tx

imagine there were 83k nodes.. source: luke JR's stats
and imagine ALL nodes accepted more peers to reduce the hops
it would require (quick sketch of the calculation below):
all nodes with 8 peers (8^6) means 6 hops, as 5 hops (8^5) does not reach 83k
all nodes with 10 peers (10^5) means 5 hops, as 4 hops (10^4) does not reach 83k
all nodes with 17 peers (17^4) means 4 hops, as 3 hops (17^3) does not reach 83k
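(the hop counts above just come from finding the smallest power of the peer count that covers 83k nodes; idealised even spread, same caveats as the earlier post about real topology)

Code:
# smallest number of hops h such that peers^h covers the whole network
# (idealised even spread; real node counts and topology will differ)
def hops_needed(peers_per_node, network_size=83_000):
    reached, hops = 1, 0
    while reached < network_size:
        reached *= peers_per_node
        hops += 1
    return hops

for p in (8, 10, 17):
    print(p, hops_needed(p))   # 8 -> 6, 10 -> 5, 17 -> 4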

so to even get a situation where the OP's transaction reaches the entire network faster would require the entire network having more peers per node, which then only results in a few milliseconds'/seconds' difference, but a lot more bandwidth per node