Bitcoin Forum
June 21, 2024, 03:19:51 AM
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
  Show Posts
14241  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 25, 2018, 07:06:10 PM
here is another way i see a system where shards could be used (setting aside my own concerns about the inevitability of masternode monitoring)

imagine one chain as the master financial audit chain, where the transactions are smaller:

FFFFF AAAAAAA -> FFFFF AAAAAAA
                 FFFFF AAAAAAA

in byte count:
5 + 7   ->   5 + 7
             5 + 7
= 36 bytes

each F is a byte of an identifier. 5 bytes allow over 1 trillion identifiers
each A is a byte of the coin amount. 7 bytes allow ~72 quadrillion values, so it's easy to store numbers up to 2.1 quadrillion satoshi

and then a shard stores an ID chain
   EG:       FFFFF = bc1q.... lets say less than 50 bytes per entry
and then another shard stores the signatures
  lets say under 100bytes per entry

essentially making the financial chain one that audits coins right back to the coin reward (creation),
only using 36 bytes of data per minimal tx instead of 225 bytes, and a 2-in 2-out multisig being 48 bytes instead of 300+ bytes

this not only allows more txs per mb, but brings the utxo set down to 12 bytes per 'address' and coin amount
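the record sizes and capacities above can be sanity-checked in a few lines. this is just a sketch of the arithmetic; the 5-byte/7-byte layout is the hypothetical encoding described in this post, not any existing format:

```python
# hypothetical compact-record layout: 5-byte identifier + 7-byte amount
ID_BYTES = 5
AMOUNT_BYTES = 7
ENTRY = ID_BYTES + AMOUNT_BYTES           # 12 bytes per 'address + amount'

print(3 * ENTRY)                          # 1-in 2-out tx: 36 bytes (vs ~225)
print(4 * ENTRY)                          # 2-in 2-out multisig: 48 bytes (vs 300+)

print(2 ** (8 * ID_BYTES))                # 1,099,511,627,776 -> over 1 trillion identifiers
print(2 ** (8 * AMOUNT_BYTES) > 2.1e15)   # True: 7 bytes comfortably holds 2.1 quadrillion satoshi
```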

Nop. Bitcoin doesn't need to evolve because of security. Who says that? It is already secure ways more than necessary,

ok imagine it. everything got locked down tomorrow. hashrate doesn't evolve, difficulty locks, developers retire and we stay with 10,000 full nodes..
how long do you think it will be before things start to go bad?
14242  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 25, 2018, 04:35:38 PM
this is not about 32mb blocks.
(...)
this is not about EB

How do you propose something and then basically say "this is not about the thing I'm literally proposing right now".   Roll Eyes

Perhaps it would allow us to forego the continual hardfork drama, but it's still not remotely as simple and clear-cut as you're making it out to be.  There are very good reasons why people are opposed to such a system and if you aren't even going to attempt to overcome the objections and only talk about the positives, then don't expect people to take this proposal seriously.

because "EB" is a buzzword
EB is for one particular limited proposal

the way EB handles increments is one way, but i can think of dozens. so again it's not about EB.. but about increments without hardforks.
just like mentioning 32mb: suddenly your mind instantly jumps to an existing proposal.
this is not about those specific proposals.

the 32mb is about something entirely different, which is technical, and which certain proposals latched onto. if you ignore the proposals that came second to the 32mb thing, and concentrate on the 32mb as its own thing that many concepts and proposals can develop from, you will see i am not talking about resurrecting old proposals, but getting to the root issue of hardforks and the 32mb limit.
again, try not to make this about old proposals, but about how to scale bitcoin given known things that need to be addressed.
14243  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 25, 2018, 03:31:42 PM
anyway, once people play out scenarios they'll see that sharding eventually leads back around to needing masternodes (full nodes).
and once people play out scenarios where a masternode is needed that stores all shard data: why have it separate, when the regional nodes are now just lower down the network and of less importance?
and play around with scenarios where it's not separate: we are back to a single nodebase of code monitoring everything.. it just comes full circle. the strongest network is one that is united under one ruleset

It doesn't have to be an integer, so let's get rid of that myth too.  Why not:

1.25mb base/5mb weight, requires hard fork to move to
1.5mb base/6mb weight,  requires hard fork to move to
1.75mb base/7mb weight  requires hard fork to move to
2mb base/8mb weight requires hard fork to move to
and so on?  

It's not just about it happening "over time", it's also about sensible increments.  Based on what you've witnessed to date, it should be more than obvious that most BTC users are in no rush to double or quadruple the base.

fixed that for you.
however having a case where we remove the witness scale factor and have code set in consensus of
4mb pool policy / 32mb consensus,
4.25mb pool policy / 32mb consensus,
4.5mb pool policy / 32mb consensus

means no hard forks per increment. just some code for when the blocks soft-increment by 0.25
again to demyth the PR campaign.
this is not about 32mb blocks.
this is about avoiding 3 years of debate just to perform one hard fork. then another 3 year debate to perform another hard fork
blocks will not be 32mb. they will increment at the pool policy amounts, all monitored and enforced by nodes so that pools don't go over the policy amount until a coded trigger soft-activates a 0.25 increment

again to demyth the PR campaign
this is not about EB (trying to turn the debate into mentioning a certain group of people)
this is about allowing progressive growth without hard forks and 3 year debates per increment
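the policy-under-consensus idea above can be sketched as a few lines of python. this is purely illustrative (the 4mb start, 0.25mb step and 32mb cap are the post's example figures, not real consensus code):

```python
# hedged sketch: pool-policy limit steps up by 0.25mb per soft-activated
# trigger, but can never exceed the fixed 32mb consensus ceiling
CONSENSUS_CAP_MB = 32.0
STEP_MB = 0.25

def policy_limit(increments: int, start_mb: float = 4.0) -> float:
    """policy limit after a number of soft-activated increments."""
    return min(start_mb + increments * STEP_MB, CONSENSUS_CAP_MB)

print(policy_limit(0))    # 4.0
print(policy_limit(2))    # 4.5
print(policy_limit(999))  # 32.0 -- the consensus cap, with no hard fork per step
```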
14244  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 25, 2018, 03:18:17 PM
There has been misinformation attempts on the Lightning Network everywhere, made by the people who want all transactions to be processed on-chain, in big blocks, and by all nodes. That is not scalable.

But I will ask around and find a good answer for you.

or... there are issues that the LN devs themselves admit. but some people who want LN to be a success don't want the positive PR train to stop, so they will argue endlessly that LN is utopia

here are LN devs themselves talking about issues with LN that won't be fixed:
https://youtu.be/8lMLo-7yF5k?t=570

and yes, factories are the next evolution of LN concepts, where factories will be the new masternodes housing lots of people's data and also monitoring multiple chains. because they know LN is not bitcoin. it's a separate network for multiple coins, where LN wishes to be the main system and leave bitcoin and litecoin as just boring shards/data stores
14245  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 25, 2018, 03:06:02 PM
Bitcoin has always been secure since the first day while the costs of carrying out such an attack has increased substantially from a few bucks to hundreds of million dollars.

Security is not quietly an 'indexable' measure, saying 'this coin is less secure', 'that coin is more secure' is absurd in cryptocurrency context, the way I understand bitcoin there is no "less" or "more" security, you are secure or you are not ...

bitcoin and crypto are not inherently secure. that's why difficulty exists.
yes, bitcoin is secure against a CPU attack, as it would require trillions of PCs to match/overtake.
but it's not secure against certain things, which is why it has to keep evolving.

only last month there was a bug that could have DDoSed the network

too many people have the mindset that once the titanic is built its too big to fail
once banks are in power they are too big to fail
once bitcoin was made in 2009 its too big to fail.

the mindset should be: look for weaknesses, find weaknesses, solve weaknesses, and then repeat.
this is why so many ICOs and sharding projects don't launch. they spread a utopian dream, and instead of finding/solving problems they double down on promoting the utopia and try shutting people up if they mention weaknesses.

true developers want to hear weaknesses so they can fix them. bad developers only want to hear "great job, now you can retire rich"
14246  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 25, 2018, 12:40:54 PM
scaling bitcoin is not a 1mb-base-block versus 1-gigabyte-base-block argument
so let's demyth that old PR campaign right away

it's 1, 2, 4, 8, 16, 32 and so on.. here is the important thing: over time
just like 2009-2017 (2009: 0.25->0.5 up to 2013, then 2013: 0.75->1mb up to 2017)

so when you mention costs, i have to ask at what cost.
people do not keep hold of and demand to only ever use their windows xp computer forever. they naturally upgrade.
things progress over time.
trying to keep things at floppy disk space / dialup internet speed forever is not natural.

the whole PR campaign narrative of "visa by midnight or else fail" is a fail in itself. the visa stats are not of one network but of multiple networks, with the numbers then combined to pretend it's one network and that bitcoin, as one network, needs to be that powerful ASAP.


so many people think scaling bitcoin means as soon as code is activated servers and centralisation occur. all because people think 1mb->1gb overnight.

..
as for sharding

imagine there are 5 regions with 5 maintainers per region, where the final region (5) is the important one everyone wants to attack

5   5   5   5   5        taking over one region is easy

3   3   3   3   5
                8        the last region is now being attacked at 160%

4   4   4   4   5
                4        the last region is now being attacked at 80%

imagine an outsider has 6 friends
5   5   5   5   5
                6        the last region is now being attacked at 120%

thus what happens is a master node that takes in all 5 regions, where breaking the masternode's rules now requires more than 25 malicious participants, because the masternode can just reject blocks made by the 160% (8), 80% (4) and 120% (6) attackers, allowing the (5) to be accepted while wasting the attackers' time.
thus keeping the (5) region alive and acceptable
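the percentages in the scenarios above work out as follows. a sketch of the arithmetic only: 'attacked at X%' here just means attacker strength divided by the 5 honest maintainers of the target region:

```python
# attacker strength relative to the 5 honest maintainers of the target region
def attack_pct(attackers: int, defenders: int = 5) -> int:
    return attackers * 100 // defenders

print(attack_pct(8))   # 160 -> 8 redirected maintainers vs 5
print(attack_pct(4))   # 80
print(attack_pct(6))   # 120 -> an outsider with 6 friends

# under one master ruleset spanning all 5 regions (25 maintainers total),
# an external attacker has to outnumber the whole set, not just one region
print(5 * 5)           # 25 -> so an attack needs more than 25 participants
```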

this is why bitcoin came into being: although there are (now) 'pools'.. there is 1 ruleset all nodes/pools (regions) have to abide by.
sharding does not solve the byzantine generals problem. sharding undoes the byzantine generals solution and takes the debate back a decade, to what cypherpunks couldn't solve before satoshi came up with the solution: pools having separate rules.

for instance, in the 5 regions with 5 separate maintainers: without the oversight of a master rule, the maintainers can change the rules of their region, which can affect the other 4 regions.
imagine one region decided not to accept transactions from another particular region. they can do it, as there is no masternode whose rules say each region must accept the others.

once you wash away all the buzzwords created by the "sharding" community and play out scenarios of real-world usage, not the utopian 'it will always work'... you will see issues arise.
too many people have a project and only run tests on 'how it's intended to work', and never run 'hammer it until it breaks to find the weakness' tests

take visa. that's sharding in basic form (washing away the buzzwords). america decided not to accept transactions from russia, because the visa system is separate networks, and one network can change the rules and just cut off another network.

however, if there were a single rule that all transactions are acceptable, then russia would be treated the same as america, and america couldn't do a thing about it

bitcoin's beauty is how it solves having multiple generals ruling multiple regions while ensuring they all comply with one rule,
and how those generals abide by that one rule without there being one general.

the answer was: everyone watches everything and rejects the malicious individuals.
we have had "sharding" systems in the real world for decades. sharding is DE-inventing what makes bitcoin, bitcoin
14247  Bitcoin / Bitcoin Discussion / Re: Hashrate is falling down on: November 25, 2018, 01:22:22 AM
pools selling their old s9's ready to grab new T15's.

just wait for the rise
and more importantly

enjoy the discounted coins while they last.

2019 gonna be a bumper year
14248  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 25, 2018, 12:58:39 AM
shards don't.

i already looked into sharding months ago, played around and ran scenarios. and like i said a few posts ago, once you wash away all the buzzwords it all just comes full circle

many sharding concepts exist.
some are:
master chain (single), where every 10 blocks each block is designated to a certain region/group
    - this way no group can mine 10 blocks in one go. they only get 1 block in, then have to wait 9 blocks for another chance

master node (multichain), whereby there are multiple chains that swap value
    - i say master node because, although some sharding concepts pretend not to need them,
      inevitably, without a master node, the region/group nodes end up having to "trust" the other chain when sending utxo's

and many more concepts
issues are that the "trust" of data, if it's not in one place, becomes its own weakness.
even LN devs have noticed this and realised that LN full nodes would need to be master nodes downloading and monitoring bitcoin, litecoin, vertcoin.
seems some devs of sharding projects have not yet seen the dilemma play out.
(5 weak points are more prone to attack than 1 strong point.
EG
it's easier to 51% attack one of 5 points of 5 exahash than to 51% attack one point of 50 exahash. thus if one of the 5 weak points gets hit, damage is done.)

I do agree that using atomic swaps (with recent advancements in HTLC) and forks has something to do with scaling, the problem being price as a free variable. It would be interesting tho, having a solution for this problem.

no, atomic swaps and HTLC are BADDDDDDD. think of the UTXO set (as atomic swaps are about 2 tokens and pegging).
as i originally said, better to double-mine (bc1q -> sc1). that way bitcoin sees the bc1q as spent, and thus no extra UTXO,
thus no holding a large UTXO set of locked unspents (the sc1 just vanishes and is not counted in btc's UTXO set, as btc can't spend an sc1).
but again, the whole need for a master node to monitor both chains comes up and circles around.

so yea, LN, sharding, sidechains, multichains.. they all end up back, full circle, needing a "masternode" (essentially a full node)
that monitors everything.. which ends up as the debate: if a master node exists, just call it a full node and get back to the root problem.

i could waffle on about all the weaknesses of the 'trust' of relying on separate nodes holding separate data, but ill try to keep my posts short

block time reduction ..  a moderate improvement in bitcoin parameters it is ways better than a comparable block size increase. They may look very similar but there is a huge difference: A reduction in block time helps with mining variance and supports small pools/farms. The technical difficulties involved are not big deals as everything could be adjusted easily, block reward, halving threshold, ...

nope.
transactions are already in people's mempools before a block is made.
nodes don't need to send a block with the transactions again. they just send the blockheader.
this is why stats show transactions take ~14 seconds to propagate but a block only takes ~2 seconds: a block header is small, and the whole verification is just joining the header to the already-obtained transactions.
again,
all the nodes are looking for is the blockheader that links them together. the blockheader doesn't take much time at all.

because transactions are relayed (~14 seconds), there is plenty of time within the 10 minute window.
if that 10 minute window is reduced to 5 minutes, then that's less time for transactions to relay.
i say this not about the average tx of up to 500 sigops, but about cases where a tx has 16,000 sigops, which i warned about a few pages ago.
a 5 minute interval will be worse than a 10 min interval.
math: a 16k-sigop tx will take 7 and a half minutes to relay across the network. meaning if a pool saw the tx first, relayed it out, then started mining a 5 minute block, solved it, and relayed out the blockheader..
the block header would have reached everyone in 5 minutes 2 seconds, but the bloated transaction (degrees of separation) would only have reached ~1000 nodes, as the last 'hop' from those 1000 to their 10 nodes each, multiplying it out to the last lot, has not had time to deal with the bloated tx

it's also kinda why the 16k sigops limit exists: to keep things under 10 minutes. but it foolishly allows such a large amount that it doesn't keep txs down to seconds.

yes, a solution would be to bring it down to a lower tx sigop limit when reducing the blocktime.
but reducing blocktime alone brings:
the difficulty retarget to weekly. so discussions of moving it to 4032 blocks to bring it back to fortnightly.
the reward halving to every 2 years, which means 21 mill coins in less time, unless you move it to 420,000 blocks for a 4 year halving.
and, as i said, a 5 minute interval which, without reducing the tx sigop limit, will hurt propagation.

so reducing block time is NOT simpler than just increasing block size. there are a lot more ripple effects from reducing blocktime than from increasing blocksize.
also, what do end users gain from a ~5 min confirm? it's not like standing at a cashier's desk waiting for a confirm is any less frustrating. it would require a 2 second confirm to make waiting in line at a cashier not be a frustration.
even 30 seconds seems the same eternity as 10 minutes when you see an old lady counting change
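the timing argument above can be sketched as a back-of-envelope check. all figures here are the illustrative numbers used in this post, not measurements:

```python
# illustrative figures from the post: a 16k-sigop tx needs ~7.5 minutes to
# relay across the network, while a block header crosses in ~2 seconds
bloated_tx_relay_min = 7.5

for interval_min in (10, 5):
    # does the bloated tx finish relaying before the next block is due?
    print(interval_min, bloated_tx_relay_min < interval_min)
# a 10 min interval leaves headroom; with 5 min the header outruns the tx,
# so some nodes receive a block referencing a tx they haven't seen yet
```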

Quote
Right, but it adds up once you go to next and next hops. It is why we call it proximity premium. Bitcoin p2p network is not a complete graph, it gets like 10 times more for a block to be relayed to all miners. When you double or triple the number of txs, the proximity flaw gets worse just a bit less than two or three times respectively.
i kinda explained it a few paragraphs ago in this post

Quote
No disputes. I just have to mention it is infeasible to engineer the p2p network artificially and AFAIK current bitcoin networking layer allows nodes to drop slow/unresponsive peers and if you could figure out an algorithm to help with a more optimized topology, it would be highly appreciated.

you have kinda found your own solution..
have 15 potential nodes and pick the 10 with the best ping/speed. naturally the network finds its own placement, where the slower nodes are on the outer rings and the faster ones are at the center
14249  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 24, 2018, 10:47:36 PM
last point, raised by aliashraf, about my idea of using spv:
it was not to actually be spv. it was just to use the spv mechanism for the first-time load screen for fresh users, and then after ~10 seconds have the node carry on as a full node.

..
think of it this way.
would you rather download a 300gb game via a torrent, wait hours, and then play?
or
download a small free-roam level that, while you play, downloads the entire game via torrents in the background?

my idea was not to just (analogy) download a free-roam level and that's it.
it was to use the SPV mechanism for the first loading screen to make the node useful in the first 10 seconds, so that while the node is downloading the entire blockchain people can at least do something while they wait, instead of frustrating themselves waiting for the sync
14250  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 24, 2018, 10:26:48 PM
the topic creator is talking about splitting the population/data in half.

to split the block data in half, each half has to retain traceability. thus: basically 2 chains.
yea, you split the population in half, but the community tried that with all the forks.
(i should have explained this in an earlier post. my methodology is working backwards)

with all that said, it's not just 'fork the coin', but 'make it atomically swappable'.

the other thing the topic creator has not thought about is not just how to atomically swap, but
also that the mining is split across 2 chains instead of 1, thus weakening them both instead of having just 1 strong chain.

it's also that, to ensure both chains comply with each other, a new "master"/"super" node has to be created that monitors both chains fully. which ends up back where things started, but this time the master node is juggling two datachain lines instead of one.
.
so now we have a new FULL NODE of 2 data chains,
and a sub-layer of lighter nodes that only work as full nodes for a particular chain..

and then we end up discussing the same issues with the new (master (full node)) in relation to data storage, propagation, validation.. like i said, full circle. instead of splitting the network/population in half,
which eventually just weakens the network data, from the new node layer's perspective nothing has changed about the original problem of the full node (now masternode).

(LN for instance wants to be a master node monitoring coins like bitcoin and litecoin and vertcoin and all other coins that are LN compatible)

which is why my methodology is backwards: i ran through some theoretical scenarios, skipped through the topic creator's idea, and went full circle back to addressing the full node (masternode) issues.

which is why, if you're going to have masternodes that do the heavy work, you might as well skip weakening the data by splitting it and just call a masternode a full node. after all, that's how it plays out when you run through the scenarios
14251  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 24, 2018, 09:45:23 PM
The problem we are discussing here is scaling and the framework op has proposed is kinda hierarchical partitioning/sharding. I am afraid instead of contributing to this framework, sometimes you write about side chains and now you are denying the problem as being relevant completely. Considering what you are saying, there is no scaling problem at all!
the topic creator is proposing having essentially 2 chains.  then 4 chains then 8 chains.

we already have that, ever since clams split and then every other fork

the only difference is the OP saying the forks should still communicate and atomic-swap coins between each other..
the reason i digressed into sidechains is that, without going into buzzwords, having 2 chains that atomic swap is, when simplified down to the average joe's experience, exactly the same on/off-ramp experience as sidechains.

i just made a simple solution to make it easily visible which "node-set" (chain) is holding which value (bc1q or sc1), without having to lock:peg value in one node-set (chain) to peg:create fresh coin in another node-set (chain).

because pegging (locking) is bad.. for these reasons:
it raises the UTXO set, because coins are not treated as spent
the locks mean coins in the UTXO set are out of circulation but still need to be kept in the UTXO set
the fresh coins of a sidechain don't have traceability back to a coinbase (block reward)

...
the other thing is bitcoin is one chain.. and splitting the chain is not new (as my second sentence in this post highlighted)
...
the other thing, about reducing blocktime (facepalm). reducing blocktime has these issues:
1. it reduces the 10 min interval for all the propagation things you highlight as an issue later in your post
2. it's not just mining blocks in 5 minutes. it means changing the reward, the difficulty, and also the timing of the reward halving
3. changing these affects the estimate of when all 21 mill coins are mined (year ~2140)
...
as for propagation: if you actually time how long it takes to propagate, it is fast. only a couple of seconds.
this is because at transaction relay it takes about 14 seconds for transactions to get around 90% of the network, validated and set into mempools. as for a solved block: because full nodes already have the (majority of) transactions in their mempools, they just need the block header data and the list of txs, not the tx data, and then just ensure all the numbers (hashes) add up. which takes just ~2 seconds
....
having home users on 0.5mb internet trying to connect to 100 nodes causes a bottleneck for those 100 nodes, as they are only getting data streaming at 0.005mb each (0.5/100).
whereas a home user on 0.5mb internet with just 10 connections gives a 0.05mb data stream per connection.

imagine you're a business NEEDING to monitor millions of transactions because they are your customers.
you have fibre.. great. you set your node to only 10 connections, but find those 10 connections are home users who are connecting to 100 nodes each. you end up only getting data streamed to you at a combined 0.05mb. (bad)

but if those home users also decided to only connect to 10 nodes, you'd get data streams at 0.5mb.
that's 10x faster

if you do the 'degrees of separation' math for a network of 10k nodes and, say, 1.5mb of block data:

the good propagation: 10 connections (0.5mb combined stream)
  10  *  10  *  10  *  10 = 10,000
3sec + 3sec + 3sec + 3sec = 12 seconds for the network

the bad propagation: 100 connections (0.05mb combined stream)
  100  *  100 = 10,000
30sec + 30sec = 1 minute
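that degrees-of-separation arithmetic can be sketched as a function. the 10k nodes, 1.5mb block and 0.5mb/s uplink are the example figures above; the even uplink split and the 10 inbound peers per node are my simplifying assumptions for this sketch:

```python
import math

def relay_seconds(nodes: int, fanout: int, inbound: int = 10,
                  data_mb: float = 1.5, uplink_mb_s: float = 0.5) -> float:
    per_peer = uplink_mb_s / fanout      # each sender splits its uplink evenly
    receive_rate = inbound * per_peer    # a node downloads from several peers
    hop_seconds = data_mb / receive_rate
    hops = round(math.log(nodes) / math.log(fanout))  # degrees of separation
    return hops * hop_seconds

print(relay_seconds(10_000, 10))    # 4 hops * 3s  = ~12 seconds
print(relay_seconds(10_000, 100))   # 2 hops * 30s = ~60 seconds
```

the sketch shows why fewer, faster connections beat many saturated ones: the extra hops of a small fanout cost far less than the per-hop slowdown of a split uplink.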

a lot of people think that connecting to as many nodes as possible is good, when in fact it is bad.
the point i am making is:
home users don't need to be making 120 connections to nodes to "help the network", because that in fact causes a bottleneck.

also, sending out 1.5mb of data to 100 nodes instead of just 10 nodes is a waste of bandwidth for home users.
also, if a home user only has bottom-line 3g / 0.5mb internet speeds as opposed to fibre, that user limits the fibre users that have 50mb.. to only getting data at 0.5mb, due to the slow speed of the sender.

so the network is better 'centralised' around
10,000 business fibre users who NEED to monitor millions of transactions
rather than
10,000 home users who just need to monitor 2 addresses.

yes, of course, for independence have home users be full nodes. but the network topology should be such that slow home users are on the last 'hop' of the relay, not at the beginning/middle.
14252  Economy / Speculation / Re: Bitcoin for Less Than $4000 - Welcome Everyone That Missed the Jamie Dimon Boat on: November 24, 2018, 01:59:30 PM
with all that said, i still kinda was hoping that after the T15 announcement, hashrates would have gone up to 65exa before delivery of the majority of t15s (which would be in late december), to have stayed on track for the above $5800 bottom.

Great calculations, I can tell a lot of thought and effort went in to that. Do you know traditionally how much of a disparity there has been between the minimum mining cost of bitcoin and the actual price? Has there ever been an extended period whereby it was unprofitable to mine?

not really.


what happens is: if someone has tanked the price below mining costs, 2 decisions are made
1. it's cheaper to buy bitcoin than waste money on new hardware/electric/current mining costs to try pushing hashrate up, so let's just buy btc = price rise
2. let's find more efficient mining, or shut down some hash power, as the cost-vs-reward is not there
= hashrate decrease until profitable again

in the end, BOTTOMLINE mining costs end up below btc market price.. 99.9% of the time (bar, i'd say, a few instances (a few hours) of surprise market drama in the last year)

most mining farms (the main majority of smart, business-minded pools) pre-empt events. EG they have 12 month contracts to only increase X% a fortnight, and totally ignore spikes and dips.
EG they didn't ramp up hashpower during the $20k event.

you will notice that although new asics started being tested over a month ago, the swap by some pools and the selling off of cheap second-hand asics happened earlier. seems the pools decided a month ago to reduce hashpower BEFORE the price movements. thus less hashpower and more efficiency = less cost = able to sell for less = price went down
(also VC funding helped. and selling asics = more btc in the hands of those asic sellers means having to convert more to btc)

all in all this is just a temporary drama event

take yesterday's hashrate of ~38exa: that's above a $3415 bottom-line cost
take today's hashrate of ~43exa: that's above a $3864 bottom-line cost
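those two figures can be cross-checked in a couple of lines, assuming (as the post implies) that the bottom-line mining cost scales roughly linearly with network hashrate:

```python
# (exahash, bottom-line USD) pairs quoted above
points = [(38, 3415), (43, 3864)]

for eh, usd in points:
    print(eh, "exa ->", round(usd / eh, 2), "$/exahash")
# both imply roughly $89.9 per exahash, so the two quotes are mutually consistent
```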

i was and am hoping to see hashpower increasing and some buy pressure on the markets before end of december

but for now. yay nice discount coins price.
14253  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 24, 2018, 01:48:28 PM
5. we are not in the pre-millennium era of floppy disks. we are in the era where:
256gb is fingernail-sized, not server-sized.
4tb hard drives cost a grocery shop, not a lifetime pension.
a 4tb hard drive, even with 20mb blocks, would last the average life cycle of a pc anyway if all blocks were filled.
internet is not dialup. it's fibre (landline), it's 5g (cellular).
if you're on capped internet then you're not a business, as you're on a home/residence internet plan.
if you're not a business then you are not NEEDING to validate and monitor millions of transactions.

if you think bandwidth usage is too high, then simply don't connect to 120 nodes. just connect to 8 nodes.

..
now, the main gripe about blocksize:
it's not actually the blocksize. it's the time it takes to initially sync people's nodes.
now why are people angry about that?
simple. they cannot see the balance of their imported wallet until after it's synced.

solution:
spv/bloom-filter the utxo data of imported addresses first, and then sync second.
that way people see balances first and can transact, and the whole syncing time becomes a background thing no one realises is happening, because they are able to transact within seconds of downloading and running the app.
i find it funny how the most resource-heavy task of a certain brand of node is done first, when it just causes frustration.
after all, if people bloom-filter imported addresses and then make a tx.. if those funds are actually not spendable due to receiving bad data from nodes, the tx won't get relayed by the relay network.
in short:
you cannot spend what you do not have.
all it requires is a bloom filter of imported addresses first, listing the balance as 'independently unverified', and then doing the sync in the background. once synced, the "independently unverified" tag vanishes.
simple. people are no longer waiting for hours just to spend their coin.
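the flow described above could be sketched like this. every function name here is a hypothetical stub, not any real wallet's API; a real client would fetch filtered utxo data from peers while the chain download runs:

```python
import threading
import time

def fetch_unverified_balances(addresses):
    """stub: spv/bloom-filter peers for utxos of the imported addresses."""
    return {addr: "independently unverified" for addr in addresses}

def full_sync(done: threading.Event):
    """stub: download and validate the whole chain in the background."""
    time.sleep(0.1)        # stands in for the hours of real syncing
    done.set()

addresses = ["bc1q-example-address"]
balances = fetch_unverified_balances(addresses)   # shown within seconds
print(balances)                                    # user can already transact

synced = threading.Event()
threading.Thread(target=full_sync, args=(synced,), daemon=True).start()
synced.wait()
print("sync complete: 'independently unverified' tag removed")
```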
14254  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 24, 2018, 01:47:33 PM
The OP was right about increasing of bitcoin blocksize to also be one of the solution to bitcoin scaling because big block size promote more nodes but we also have to put into consideration the side effect of the block increasing which I presume could lead to  the 51% attacks and if Lightning does not which I believe it will another solution will arouse.

51% attack will not be caused by larger blocks.

here is why
1. ASICS do not touch the collated TX data. asics are handed a hash and told to make a second hash that meets a threshold.
it does not matter if the unmined blockhash is an identifier of 1kb of block tx data or exabytes of tx data. the hash remains the same length.
the work done by asics has no bearing on how much tx data is involved.

2. the verifying of transactions is so fast its measured in nano/miliseconds not seconds/minutes. devs know verification times are of no inconvenience which is why they are happy to let people use smart contracts instead of straight forward transactions. if smart contracts/complex sigops inconvenienced block verification efficiencies they would not add them (well moral devs wouldnt(dont reply/poke to defend devs as thats missing the point. relax have a coffee))

they are happy to add new smart features as the sigops cost is a combined few seconds max, compared to the ~10min interval

3. again if bloated tx's do become a problem. easy, reduce the txsigops. or remove the opcode of features that allows such massive delays

4. the collating of tx data is handled before a confirmed/mined hash is solved. while ASICs are hashing a previous block, nodes are already verifying and storing transactions in the mempool for the next block. it takes seconds while they are given up to 10 minutes. so no worries.
pools specifically are already collating transactions from the mempool into a new block ready to add a mined hash to it when solved to form the chain link. thus when a block solution is found:
if it's their lucky day and they found the solution first: boom, within milliseconds they hand the ASICs the next block identifier
if it's a competitor's block: within seconds they know whether it's valid or not
it only takes a second to collate a list of unconfirmed txs to make the next block ID to give to the ASICs.
try it. find an MP3 (4mb) on your home computer and move it from one folder to another. you will notice it took less time than reading this sentence. remember, transactions in the mempool that get collated into a block to get a block identifier had already been verified during the previous slot of time, so it's just a case of collating data that the competitor hasn't collated
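point 1 above can be demonstrated directly: miners repeatedly double-SHA256 an 80-byte block header, which commits to the transaction data only through a fixed 32-byte merkle-root commitment. a simplified sketch (this is not consensus code — the merkle root here is collapsed to a single hash for illustration):

```python
# Demonstrates that the per-attempt mining work is over a fixed 80-byte
# header regardless of how much tx data the block carries.
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

small_block_txs = b"tx" * 10          # tiny tx payload
huge_block_txs = b"tx" * 1_000_000    # vastly larger payload

# either way the payload is reduced to a fixed 32-byte commitment
root_small = double_sha256(small_block_txs)
root_huge = double_sha256(huge_block_txs)

# header = version(4) + prev hash(32) + merkle root(32) + time/bits/nonce(12)
header_small = b"\x00" * 4 + b"\x00" * 32 + root_small + b"\x00" * 12
header_huge = b"\x00" * 4 + b"\x00" * 32 + root_huge + b"\x00" * 12

assert len(header_small) == len(header_huge) == 80
print(len(double_sha256(header_small)))  # 32 bytes of output either way
```

so whether the block is 1kb or exabytes, each nonce attempt hashes the same 80 bytes — the tx data volume never reaches the ASIC.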
14255  Economy / Speculation / Re: Bitcoin for Less Than $4000 - Welcome Everyone That Missed the Jamie Dimon Boat on: November 23, 2018, 11:10:51 PM
with all that said, i still kinda was hoping after the T15 announcement that hashrates would have gone up to 65exa before delivery of the majority of T15s (which would be in late december), to have stayed on track with the above $5800 bottom.
14256  Economy / Speculation / Re: Bitcoin for Less Than $4000 - Welcome Everyone That Missed the Jamie Dimon Boat on: November 23, 2018, 10:54:08 PM
pools running new gen T15 asics. and with bitcoins hashrate going down to 34exa.
puts the break even cost for those LUCKY limited number of miners at
34exa*89.88=$3056

but this $3056 is just a break even for the lucky few running new T15s
and for the very bottom hashrate recently
emphasis again its the BOTTOM LINE break even for limited amount of people with most efficient miners with the very bottomline hashrate recently
(majority of public not getting delivery of t15 until late december)

as for those running s9's bought at $450 last month: the hashrate multiplier is 111, giving $3774
as for those running s9's bought at $850 in summer: the hashrate multiplier is 154, giving $5236
as for those running s9's bought at $2000 in late 2017: the hashrate multiplier is 270 (not bothering, as most have shut down by now)

how this was calculated to find the magic 89.88 multiplier number for easy cost bottom line

hashrate 34exa = 34,000,000thash
34m thash / hashrate of asic(28thash)

34000000/28=1214285.714285714

1.214m asics * $950 cost of asic unit
1214285.714285714*$950=$1153571428.57 of hardware

$1.153bill of hardware / 26 fortnights / 2016 blocks / 12.5btc = hardware cost per btc
yes, i based it on spreading hardware cost over an average one-year lifecycle of the hardware
1153571428.57/26/2016/12.5=$1760.64 of hardware cost for pools using T15

now the electric (based at 5 cents... yea, some can get cheaper, but include facility lease and labour. 5 cents is reasonable)
1,214,286 asics * 1.6kw * $0.05 = electric $ per hour at 5 cents
1214285.714285714*1.6*0.05=$97142.86 (now multiply by 24, then by 14, to get a fortnightly cost)
97142.86*24*14=32640000 for a fortnight (now divide that by 2016 blocks and 12.5btc)
32640000/2016/12.5=1295.24

now hardware and electric=cost of mining using T15's
1295.24+1760.64=$3055.88 cost per btc

you may see me multiply by 24 then 14 then divide by 2016 divide by 12.5
instead of just divide by 6 divide by 12.5
i did this because bitcoin's rule is 2016 blocks a fortnight, not 6 blocks an hour (10min)
some people multiply by 24, multiply by 365, and then divide down by 26 fortnights, then 2016 blocks, then 12.5btc
which gives a ~$3 variance. that's because there's a difference of a few days between a year and 26 fortnights
(you can choose how anal you want to be over the math)


now this ~$3056 is based on a variable of the exahash.. so if you divide by how many exahash, you're left with a constant
of cost per single exahash.. which is the magic constant for easy T15 mining-cost math
3055.88/34=89.88

so 89.88 is the magic constant for T15 mining.
you can double-check with the lengthy math at different hashpower.
but you'll still see the same result just taking the hashrate in exa and multiplying by 89.88, give or take a few pennies
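the whole calculation above can be reproduced in a few lines to check the 89.88 figure. all inputs (34 exahash, $950 per unit, 28 TH/s, 1.6kw, 5c/kwh, one-year hardware lifecycle) are the post's own assumptions, not independently sourced:

```python
# Reproduces the post's T15 mining-cost arithmetic end to end.
network_th = 34_000_000          # 34 exahash expressed in TH/s
units = network_th / 28          # T15 units needed at 28 TH/s each
hardware = units * 950           # total hardware spend in $

# spread hardware over 26 fortnights, 2016 blocks each, 12.5 BTC reward
hardware_per_btc = hardware / 26 / 2016 / 12.5

electric_per_hour = units * 1.6 * 0.05       # 1.6 kW at 5c/kWh
electric_fortnight = electric_per_hour * 24 * 14
electric_per_btc = electric_fortnight / 2016 / 12.5

total = hardware_per_btc + electric_per_btc
print(round(hardware_per_btc, 2))   # 1760.64
print(round(electric_per_btc, 2))   # 1295.24
print(round(total, 2))              # 3055.88
print(round(total / 34, 2))         # 89.88 — the constant per exahash
```

dividing the ~$3056 total by the 34 exahash input leaves the per-exahash constant, which is why a single multiplier works for any hashrate under these assumptions.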


now with that said
$3056 is the BOTTOMLINE break even per btc for the lucky few with the most efficient miners.
the range of mining costs is
s9@$450(111)=$3774
s9@$850(154)=$5236

so $3056-$5236 is the majority of range of mining costs. so dont take $3056 as average or majority. just think of it as the minimum BASELINE for 34exahash mining for a lucky few
14257  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 23, 2018, 08:07:21 PM
hurray. back on topic.. hopefully we can stay on topic.

franky1 & DooMAD, both of you starting going off-topic again

bc1q -> bc1q with a lock can have openings for abuse based on timing, and also loss of the key for the bc1q address.
whereas moving funds to an sc1 address is absolving the mainnet of loss/risk, as the value is no longer in a bc1q address (as it's spent), and the value moves with the transaction to the sidechain.
(thus solves the UTXO issue on mainnet of not having to hold 'locked' value)


This is interesting idea and oddly it has similarity with proposal Superspace: Scaling Bitcoin Beyond SegWit on part moving between main-chain and side-chain.
But thinking about UI/UX, introducing another address format is confusing for most user. Even 1..,3... and bc1... are plenty confusing.

it's not that difficult. if you never intend to use a sidechain, you won't have to worry, because you won't get funds from an SC1 address and will never need to send to an sc1 address

as for the UI. well again a UI can be designed to have an option eg

File   Options
          display segwit features
          display sidechain features

if you don't want it, you don't select it / don't realise it exists, as the UI won't display the features.
again, you won't get funds from an SC1 or send to an SC1 unless you want to. so it's easy for average joe


but yea it will help the UTXO set stay down
unlike some sidechain concepts, and definitely unlike LN (as locks mean keeping the funds as UTXO for a locked time (facepalm))



.. anyway.. superspace project... (hmm seems they missed a few things and got a few details misrepresented) but anyway

the specific no_op code used to make segwit backward compatible can't be used again.
this is why

imagine a transaction in bytes, where a certain byte is an option list ($)
***********$*******************
in legacy nodes
if $ is: (list in layman's eli-5, so don't nitpick)
     0= ignore anything after (treat as empty, meaning no recipient, meaning anyone can spend) (no_op)
     1= do A
     2=do B

in segwit nodes the 0 option was changed to mean 'do segwit checks'
they also added a few other opcodes too, as a sublist
so now with segwit nodes being the active full nodes, there is no 0='ignore anything after' at that particular $ byte
as its now
EG
***********$%******************
if $ is: (list in layman's eli-5, so don't nitpick)
     0= do segwit if % is: (list in layman's eli-5, so don't nitpick)
                            0= ignore anything after (meaning anyone can spend) (no_op)
                            1= ignore anything after (meaning anyone can spend) (no_op)
                            ....
                            11= do A
                            12=do B
     1= do A
     2=do B
there are actually more no_ops now for segwit (%)

so if someone wanted to do what segwit did, they would first need to find a new no_op that hasn't been used.
and then they would need to ensure pools didn't treat it as a no_op at activation (yep, not really as soft as made out)
which would be another 2016-2017 drama event.

what the link does not explain is that summer 2017 was effectively a hard fork, as nodes that would reject segwit needed to be thrown off the network, and pools needed to treat the no_op as not an 'anyonecanspend'

which means another hard fork would be needed. (hopefully not mandated this time)
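the redefined-no_op mechanic above can be modelled in a toy dispatcher. this is an eli-5 sketch of the idea, not real script evaluation — the opcode values and return strings are illustrative only:

```python
# Toy model of repurposing a no_op: legacy nodes see the byte as
# 'ignore the rest' (anyone-can-spend), while upgraded nodes run the
# new checks. This gap is why pools had to stop treating the byte as
# a no_op at activation.

def legacy_eval(opcode: int) -> str:
    if opcode == 0:
        return "anyone-can-spend"   # old no_op: rest of script ignored
    return "run old rules"

def segwit_eval(opcode: int, sub: int) -> str:
    if opcode == 0:
        # the former no_op now dispatches on a sub-version byte,
        # which itself reserves fresh no_ops for future upgrades
        if sub == 0:
            return "do segwit checks"
        return "anyone-can-spend"   # remaining sub-versions are new no_ops
    return "run old rules"

# same bytes, different meaning depending on which node evaluates them
print(legacy_eval(0))      # anyone-can-spend
print(segwit_eval(0, 0))   # do segwit checks
```

the divergence between the two functions for the same input is exactly the "not really as soft as made out" problem: a repeat performance would need a fresh unused no_op and pool coordination all over again.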
14258  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 23, 2018, 07:55:54 PM
again another offtopic poke from that certain person.. one last bite


i make a point, and then you say i am missing and deflecting my own point

that's like me speaking english while you speak german. i make a point about english, and you get upset and then waffle on that my point is about german and how i'm missing a german point.

ill make it real clear for you. although there are dozens of topics that repeat the word enough

mandatory mandatory mandatory

you cannot rebut the mandatory. so you are deflecting it.

they had segwit planned back in 2014 and had to get it activated ($100m was at stake)
no matter what the community did/said/wanted/didn't want, they needed it activated THEIR WAY
they didn't get their way 2016-spring 2017
so they resorted to mandatory activation

my point is about mandatory.
i should know my point. because im the one making it.

point is: mandatory

if you want to argue against my point then you need to address the point im making.

again
for the dozenth topic you have meandered off topic with your pokes. my point has been about the MANDATORY

if you cannot talk about the mandatory activation not being decentralised.. then at least hit the ignore button.

as for the whole no community permission.. re-read your own post i gave you merit on. and see your flip flop

as for your deflection about writing code: it's not just that they write code they want. it's that they avoid community code/involvement, as it doesn't fit their internal circle's PLAN they had as far back as 2014..

yea, anyone can write code.. but making it mandatory.. no. that's anti-consensus

also, i said if they had actually listened to the community and gone with the late-2015 consensus agreement of an early variant of segwit2x, they would have got segwit activated sooner, and the community would have had legacy benefits too.

but again, they mandated only their pre-existing plan, which is what caused such delays/drama, and it's still causing drama today, as we are still discussing scaling even now, 3 years later.
14259  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 23, 2018, 06:51:54 PM
here we go again  you poke, i bite.
shame you are missing the point of decentralisation

they had the segwit roadmap plan from 2014, before community input
they had code before the community got to download it.
Which means someone made a compelling argument about the idea and most of the developers in that team agreed with it.  Ideas can come from anywhere, including from developers themselves.  Saying that developers shouldn't work on an idea just because a developer proposed it isn't a mature or realistic stance.

^ their internal circle agreed. before letting the community have a say
i guess you missed the 2014-5 drama.

v not letting the community be involved is prime example of centralisation
 
I don't know where you get this perverse notion that developers need permission from the community before they are allowed to code something.

do you ever wonder why i just publicly give out ideas and let people decide yay or nah, rather than keep ideas secret, make code, and then demand adoption? again, before trying to say i'm demanding anything, show me a line of code i made that had a mandatory deadline that would take people off the network if not adopted.
.. you won't. there is no need for your finger-pointing that i'm an authoritarian demanding rule changes, because there are no demands for rule changes made by me

i find it funny that you flip-flop about community involvement.
my issue is that they plan a roadmap, code a roadmap, release it, and even if rejected, they mandate it into force anyway

emphasis.. MANDATE without community ability to veto

again, the point you're missing:
having code that allows a community vote/veto (2016, good)
having code that mandates activation without vote/veto (2017, bad)

you do realise that core could have had segwit activated by christmas 2016 if they had just gone with the 2015 consensus (an early variant of segwit2x), which was a consensus compromise the wider community found agreement on
and which gave legacy benefits too.
but by avoiding it, and causing drama all the way through 2016 about how they wanted it only their way (segwit1x).. pretending they couldn't code it any other way
they still didn't get a fair, true consensus vote in their favour in spring 2017. so they had to resort to the mandatory activation and swayed the community with the (fake) option of segwit2x (nya), just to then backtrack to segwit1x once they got the segwit part activated
14260  Bitcoin / Development & Technical Discussion / Re: Bitcoin Scaling Solution Without Lightning Network... on: November 23, 2018, 06:26:58 PM
anyway, back on topic.

the scaling onchain
reducing how much sig-op control one person can have is a big deal.
i would say the sigop limits alone can be abused more than the fee war to victimise other users, and that needs addressing

as for transactions per block: like i said (only reminding to get back on topic), removing the witness scale factor and the wishy-washy code, to realign the block structure into a single block that doesn't need stripping, is easy, as the legacy nodes are not full nodes anyway

but this can only be done by devs actually writing code. other teams have tried but found themselves relegated downstream as "compatible" or rejected off the network. so the centralisation of devs needs to change
(distributed nodes do not mean decentralised rule control.. we need decentralised, not distributed)

as for other suggestions of scaling.
others have said sidechains. the main issue is the on-off ramp between the two

an alternative concept could be a new transaction format (imagine bc1q.. but instead SC1) which has no lock
bitcoin network sees a
bc1q->SC1 as a go to side chain tx (mined by both chains)
and
SC1->bc1q as a return to main net(mined by both chains)

mainnet will not relay or collate (mine into blocks) any sc1 -> sc1 transactions (hence no need to lock)
the sidechain will not relay or collate (mine into blocks) any bc1q -> bc1q transactions (hence no need to lock)

this way it avoids a situation of "pegging" such as
bc1q->bc1q(lock)                                sc1(create)->sc1

having bc1q -> sc1 is not about pegging a new token into creation.
it's about taking the transaction off the main chain, mining it also into a sidechain, and then only being able to move sc1 address -> sc1 on the sidechain until it's put back into a bc1q address, which can then only move on mainnet

i say this because having a
bc1q -> bc1q with a lock can have openings for abuse based on timing, and also loss of the key for the bc1q address.
whereas moving funds to an sc1 address is absolving the mainnet of loss/risk, as the value is no longer in a bc1q address (as it's spent), and the value moves with the transaction to the sidechain.
(thus solves the UTXO issue on mainnet of not having to hold 'locked' value)

allowing value to flow without a time lock allows the auditing of funds to show it's still one value moving, instead of a locked value on the main chain and new value on the sidechain

i do have issues and reservations about sidechains too, but the original "pegging" concept of sidechains really was bad and open to abuse (not having the BTC side seen as "spent" while spending was actually happening)
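the relay rule described above (each network only mines the tx types it is responsible for, with the ramp txs mined by both chains) can be sketched as a simple dispatcher. the 'bc1q'/'sc1' prefixes stand in for the hypothetical address formats — none of this is an existing protocol:

```python
# Toy model of the proposed relay/collation rule: mainnet ignores
# sc1 -> sc1, the sidechain ignores bc1q -> bc1q, and the two ramp
# directions (bc1q -> sc1, sc1 -> bc1q) are mined on both chains,
# so no lock is needed on either side.

def networks_that_mine(sender: str, recipient: str) -> set:
    s_main = sender.startswith("bc1q")
    r_main = recipient.startswith("bc1q")
    if s_main and r_main:
        return {"mainnet"}              # sidechain will not relay these
    if not s_main and not r_main:
        return {"sidechain"}            # mainnet will not relay these
    return {"mainnet", "sidechain"}     # ramp txs mined by both chains

print(networks_that_mine("bc1q...", "sc1..."))  # both chains
print(networks_that_mine("sc1...", "sc1..."))   # sidechain only
```

because value exits the bc1q address as a normal spend, nothing sits locked in the mainnet UTXO set while it circulates on the sidechain, matching the auditing argument above.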