Bitcoin Forum
Author Topic: Bitcoin Scaling Solution Without Lightning Network...  (Read 1692 times)
franky1
Legendary
*
Offline Offline

Activity: 4200
Merit: 4442



View Profile
November 24, 2018, 01:47:33 PM
 #61

The OP was right that increasing the bitcoin block size is also one of the solutions to bitcoin scaling, because a bigger block size promotes more nodes. But we also have to consider the side effects of the increase, which I presume could lead to 51% attacks; and if Lightning does not work, which I believe it will, another solution will arise.

51% attack will not be caused by larger blocks.

here is why
1. ASICs do not touch the collated tx data. ASICs are handed a hash and told to make a second hash that meets a threshold.
it does not matter whether the unmined block hash is an identifier for 1kb of block tx data or exabytes of tx data; the hash remains the same length.
the work done by ASICs has no bearing on how much tx data is involved.
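A quick illustration of that point, as a sketch (not real miner code): the proof-of-work input is a fixed-size 80-byte header, and a double-SHA256 digest is always 32 bytes no matter how much transaction data the header commits to. The 80-byte values below are dummy stand-ins for real serialized headers.

```python
import hashlib

def block_hash(header: bytes) -> bytes:
    # bitcoin hashes the 80-byte block header twice with SHA-256;
    # the digest is always 32 bytes regardless of input size
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

# a header is 80 bytes whether it commits to 1 KB of txs or exabytes:
# the transactions are summarised by a single 32-byte merkle root inside it
small_block_header = bytes(80)   # dummy header for a tiny block
huge_block_header = bytes(80)    # dummy header for an enormous block

assert len(block_hash(small_block_header)) == 32
assert len(block_hash(huge_block_header)) == 32
```

The ASICs only ever see the header, so the size of the collated tx data never changes their workload.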

2. verifying transactions is so fast it is measured in nano/milliseconds, not seconds/minutes. devs know verification times are no inconvenience, which is why they are happy to let people use smart contracts instead of straightforward transactions. if smart contracts/complex sigops hurt block verification efficiency, they would not add them (well, moral devs wouldn't (don't reply/poke to defend devs, as that misses the point. relax, have a coffee))

they are happy to add new smart features because the combined sigops cost a few seconds at most, compared to the ~10min interval

3. again, if bloated txs do become a problem: easy, reduce the tx sigops, or remove the opcodes of the features that allow such massive delays

4. collating the tx data is handled before a confirmed/mined hash is solved. while ASICs are hashing the previous block, nodes are already verifying and storing transactions in the mempool for the next block. it takes seconds, while they are given up to 10 minutes. so no worries.
pools in particular are already collating transactions from the mempool into a new block, ready to add a mined hash to it when solved to form the chain link. thus when a block solution is found:
if it's their lucky day and they found the solution first: boom, within milliseconds they hand the ASICs the next block identifier
if it's a competitor's block: within seconds they know whether it's valid
it only takes a second to collate a list of unconfirmed txs to make the next block ID to give to the ASICs.
try it: find an MP3 (4mb) on your home computer and move it from one folder to another. you will notice it takes less time than reading this sentence. remember, transactions in the mempool that get collated into a block had already been verified during the previous slot of time, so it's just a case of collating data that the competitor hasn't collated

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. Many people who reply with insults but no on-topic substance are automatically 'facepalmed' and yawned at.
The trust scores you see are subjective; they will change depending on who you have in your trust list.
Advertised sites are not endorsed by the Bitcoin Forum. They may be unsafe, untrustworthy, or illegal in your jurisdiction.
franky1
Legendary
*
Offline Offline

Activity: 4200
Merit: 4442



View Profile
November 24, 2018, 01:48:28 PM
Merited by bones261 (2)
 #62

5. we are not in the pre-millennium era of floppy disks. we are in an era where:
256gb is fingernail-sized, not server-sized.
a 4tb hard drive costs a grocery shop, not a lifetime pension.
even with 20mb blocks, a 4tb drive would last the average life cycle of a pc anyway if all blocks were filled
internet is not dialup; it's fibre (landline) and 5g (cellular)
if you're on capped internet then you're not a business, as you're on a home/residence internet plan
if you're not a business then you are not NEEDING to validate and monitor millions of transactions

if you think bandwidth usage is too high then simply don't connect to 120 nodes. just connect to 8 nodes

..
now, the main gripe with block size:
it's not actually the block size, it's the time it takes to initially sync people's nodes.
now why are people angry about that?
simple: they cannot see the balance of their imported wallet until after it's synced.

solution
spv/bloom-filter the utxo data of imported addresses first, and then sync second.
that way people see balances first and can transact, and the whole syncing time becomes a background thing no one realises is happening, because they are able to transact within seconds of downloading and running the app.
i find it funny that the most resource-heavy task of a certain brand of node is done first, when it just causes frustration.
after all, if people bloom-filter imported addresses and then make a tx.. if those funds are actually not spendable due to receiving bad data from nodes, the tx won't get relayed by the relay network.
in short
you cannot spend what you do not have
all it requires is a bloom filter of imported addresses first. list the balance as 'independently unverified' and then do the sync in the background. once synced, the 'independently unverified' tag vanishes.
simple. people are no longer waiting hours just to spend their coin.
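The flow above can be sketched as a few lines of pseudocode-style Python. All names here (`WalletView`, `first_run`, the tag text) are invented for illustration; no real wallet exposes this API.

```python
from dataclasses import dataclass

@dataclass
class WalletView:
    balance_sat: int = 0
    independently_verified: bool = False

    def display(self) -> str:
        # the tag is shown until the node has verified the chain itself
        tag = "" if self.independently_verified else " (independently unverified)"
        return f"{self.balance_sat} sat{tag}"

def first_run(wallet: WalletView, spv_balance_sat: int) -> None:
    # step 1: within seconds, show the balance learned from bloom-filtered
    # SPV queries of the imported addresses, tagged as unverified
    wallet.balance_sat = spv_balance_sat

def background_sync_complete(wallet: WalletView) -> None:
    # step 2: once the full background chain sync finishes, the tag vanishes
    wallet.independently_verified = True

w = WalletView()
first_run(w, 150_000)
print(w.display())              # 150000 sat (independently unverified)
background_sync_complete(w)
print(w.display())              # 150000 sat
```

The point of the design is ordering: the user-facing step runs first and the resource-heavy sync becomes invisible background work.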

aliashraf
Legendary
*
Offline Offline

Activity: 1456
Merit: 1174

Always remember the cause!


View Profile WWW
November 24, 2018, 05:57:15 PM
 #63

5. we are not in the pre-millennium era of floppy disks. we are in an era where:
256gb is fingernail-sized, not server-sized.
a 4tb hard drive costs a grocery shop, not a lifetime pension.
even with 20mb blocks, a 4tb drive would last the average life cycle of a pc anyway if all blocks were filled
internet is not dialup; it's fibre (landline) and 5g (cellular)
Although I like the tone, I have to remind you of a somewhat bitter fact: none of these helps with scaling bitcoin, definitively. It is good news that Moore's law is still working (somehow), but the problem is not about resources; it is the propagation delay of blocks, caused by the time it takes to fully validate the transactions they commit to. Unfortunately, propagation delay does not improve with Moore's law.

That said, I'm OK with a moderate improvement in the current numbers (by decreasing block time rather than increasing block size, which are just the same in this context), but it won't be a scaling solution, as it couldn't be used frequently because of the proximity-premium problem in mining. Larger pools/farms would have a premium once they hit a block, as they can start mining the next block while their poorer competitors are busy validating the newborn and relaying it (they have to do both if they don't want to end up on an orphan chain).

Many people are confused about this issue; even Gavin was confused about it. I read an article of his arguing how cheap and affordable a multi-terabyte HD is. It is not about HDs, nor about internet connectivity or bandwidth; it is about the number of transactions that need validation, the delayed propagation of blocks, and the resulting centralization threats.

Quote
if you're on capped internet then you're not a business, as you're on a home/residence internet plan
if you're not a business then you are not NEEDING to validate and monitor millions of transactions
Home/non-business full nodes are a critical part of the bitcoin ecosystem, and our job is to strengthen them by making it more feasible for them to stay, and to grow considerably in numbers.

Quote
now, the main gripe with block size:
it's not actually the block size, it's the time it takes to initially sync people's nodes.
now why are people angry about that?
simple: they cannot see the balance of their imported wallet until after it's synced.
Good point but not the most important issue with block size.

Quote
solution
spv/bloom-filter the utxo data of imported addresses first, and then sync second.
that way people see balances first and can transact, and the whole syncing time becomes a background thing no one realises is happening, because they are able to transact within seconds of downloading and running the app.
i find it funny that the most resource-heavy task of a certain brand of node is done first, when it just causes frustration.
after all, if people bloom-filter imported addresses and then make a tx.. if those funds are actually not spendable due to receiving bad data from nodes, the tx won't get relayed by the relay network.
Recently I proposed a solution for fast sync and getting rid of the history, but surprisingly I did it to abandon SPVs (well, besides other objectives). I hate SPVs: they are vulnerable and they add zero value to the network; they just consume and give nothing back, because they don't validate blocks.

The problem we are discussing here is scaling, and the framework the OP has proposed is a kind of hierarchical partitioning/sharding. I am afraid that instead of contributing to this framework, you sometimes write about side chains, and now you are denying that the problem is relevant at all. By what you are saying, there is no scaling problem at all!

franky1
Legendary
*
Offline Offline

Activity: 4200
Merit: 4442



View Profile
November 24, 2018, 09:45:23 PM
 #64

The problem we are discussing here is scaling, and the framework the OP has proposed is a kind of hierarchical partitioning/sharding. I am afraid that instead of contributing to this framework, you sometimes write about side chains, and now you are denying that the problem is relevant at all. By what you are saying, there is no scaling problem at all!
the topic creator is proposing essentially 2 chains, then 4 chains, then 8 chains.

we already have that, ever since CLAMs split, and then every other fork

the only difference is that the OP says the forks still communicate and atomic-swap coins between each other..
the reason i digressed into sidechains is that, without going into buzzwords, having 2 chains that atomic swap, when simplified down to the average-joe experience, is exactly the same on/off-ramp experience as sidechains.

i just proposed a simple solution to make it easily visible which "node set" (chain) is holding which value (bc1q or sc1), without having to lock:peg value on one node set (chain) to peg:create fresh coin on another node set (chain).

because pegging (locking) is bad, for these reasons:
it inflates the UTXO set because coins are not treated as spent
the locks take the coins out of circulation, but they still need to be kept in the UTXO set
the fresh coins of a sidechain have no traceability back to a coinbase (block reward)

...
the other thing is that bitcoin is one chain.. and splitting the chain is not new (as the second sentence of this post highlighted)
...
the other thing about reducing block time. (facepalm) reducing block time has these issues:
1. it reduces the 10-minute interval available for all the propagation things you highlight as an issue later in your post
2. it's not just mining blocks in 5 minutes; it's having to change the reward, the difficulty, and the timing of the reward halving
3. changing these affects the estimate of when all 21 mill coins are mined (year ~2140)
...
as for propagation: if you actually time how long it takes, it is fast, only a couple of seconds.
this is because at transaction relay it takes about 14 seconds for transactions to reach around 90% of the network, validated and placed into mempools. as for a solved block: because full nodes already have the (majority of) transactions in their mempools, they just need the block header data and the list of txs, not the tx data, and then just check that all the numbers (hashes) add up, which takes just 2 seconds
....
having home users on 0.5mb internet trying to connect to 100 nodes causes a bottleneck for those 100 nodes, as each is only getting data streamed at 0.005mb (0.5/100)
whereas a home user on 0.5mb internet with just 10 connections gives a 0.05mb stream speed

imagine you're a business NEEDING to monitor millions of transactions because they are your customers.
you have fibre.. great. you set your node to only 10 connections, but find those 10 peers are home users who are each connecting to 100 nodes. you end up getting data streamed to you at a combined 0.05mb (bad).

but if those home users also decided to connect to only 10 nodes, you'd get data streams at 0.5mb.
that's 10x faster

if you do the 'degrees of separation' math for a network of 10k nodes and, say, 1.5mb of block data:
the good propagation: 10 connections (0.5mb combined stream)
  10  *  10   *   10   *   10 = 10,000
3sec + 3sec + 3sec + 3sec = 12 seconds for the network

the bad propagation: 100 connections (0.05mb combined stream)
  100  *   100   = 10,000
30sec + 30sec  = 1 minute
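The fan-out arithmetic above can be reproduced in a few lines. Note that the per-hop times (3s vs 30s) are the post's own assumed figures for a 1.5 MB block, not measurements, so they are passed in as parameters rather than derived.

```python
def hops_to_cover(n_nodes, fanout):
    # each relay hop multiplies the number of reached nodes by `fanout`
    reached, hops = 1, 0
    while reached < n_nodes:
        reached *= fanout
        hops += 1
    return hops

def network_relay_seconds(n_nodes, fanout, hop_seconds):
    # hop_seconds: the post's assumed time to push one block across one hop
    return hops_to_cover(n_nodes, fanout) * hop_seconds

# the post's two scenarios for a 10,000-node network:
assert hops_to_cover(10_000, 10) == 4      # 10 * 10 * 10 * 10
assert hops_to_cover(10_000, 100) == 2     # 100 * 100
print(network_relay_seconds(10_000, 10, 3.0))    # 12.0 seconds
print(network_relay_seconds(10_000, 100, 30.0))  # 60.0 seconds
```

The trade-off the post describes: higher fan-out needs fewer hops, but splitting a fixed upload across more peers makes each hop slower, and in this model the slower hops dominate.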

a lot of people think that connecting to as many nodes as possible is good, when in fact it is bad.
the point i am making is:
home users don't need to make 120 connections to nodes to "help the network", because that in fact causes a bottleneck

also, sending 1.5mb of data out to 100 nodes instead of just 10 is a waste of bandwidth for home users.
also, if a home user only has bottom-line 3g/0.5mb internet speeds as opposed to fibre, those users limit the fibre users who have 50mb to receiving data at 0.5mb, due to the slow speed of the sender.

so the network is better 'centralised' around
10,000 business fibre users who NEED to monitor millions of transactions
rather than
10,000 home users who just need to monitor 2 addresses

yes, of course, for independence have home users run full nodes, but the network topology should put slow home users on the last 'hop' of the relay, not at the beginning/middle.

franky1
Legendary
*
Offline Offline

Activity: 4200
Merit: 4442



View Profile
November 24, 2018, 10:26:48 PM
Last edit: November 24, 2018, 10:55:36 PM by franky1
 #65

the topic creator is talking about splitting the population/data in half

to split the block data in half, each half still needs traceability. thus, basically 2 chains.
yes, you split the population in half, but the community tried that with all the forks
(i should have explained this in an earlier post. my methodology is working backwards)

with all that said, it's not just 'fork the coin' but 'make it atomically swappable'.

the other thing the topic creator has not thought about is not just how to atomically swap, but
also that the mining is split across 2 chains instead of 1, thus weakening both instead of having just 1 strong chain

it's also that, to ensure both chains comply with each other, a new "master"/"super" node has to be created that monitors both chains fully. which ends up back where things started, but this time the master node is juggling two data-chain lines instead of one.
.
so now we have a new FULL NODE of 2 data chains.
a sub-layer of lighter nodes that only work as full nodes for a particular chain..

and then we end up discussing the same issues with the new master (full node) in relation to data storage, propagation, validation.. like i said, full circle. instead of splitting the network/population in half,
which eventually just weakens the network data, the new node layer does not change the original problem of the full node (now master node)

(LN, for instance, wants to be a master node monitoring coins like bitcoin and litecoin and vertcoin and all other coins that are LN compatible)

which is why my methodology is backwards: i ran through some theoretical scenarios, skipped through the topic creator's idea, and went full circle back to addressing the full node (master node) issues

which is why, if you're going to have master nodes that do the heavy work, you might as well skip weakening the data by splitting it and just call a master node a full node. after all, that's how it plays out when you run through the scenarios

franky1
Legendary
*
Offline Offline

Activity: 4200
Merit: 4442



View Profile
November 24, 2018, 10:47:36 PM
Merited by aliashraf (2)
 #66

last point raised by aliashraf, about my idea of using spv:
it was not to actually be spv. it was just to use the spv mechanism for the first-time load screen for fresh users, and then after 10 seconds become the full node.

..
think of it this way.
would you rather download a 300gb game via a torrent, wait hours, and then play?
or
download a small free-roam level that, while you play, downloads the entire game via torrents in the background?

my idea was not to just (analogy) download a free-roam level and that's it.
it was to use the spv mechanism just for the first loading screen, to make the node useful in the first 10 seconds, so that while the node then downloads the entire blockchain, people can at least do something while they wait instead of frustrating themselves waiting for the sync

aliashraf
Legendary
*
Offline Offline

Activity: 1456
Merit: 1174

Always remember the cause!


View Profile WWW
November 24, 2018, 10:59:35 PM
Last edit: November 24, 2018, 11:30:26 PM by aliashraf
 #67

franky,
Splitting is far different from forking. Forks inherit the full history and state; shards don't. @mehanikalk has done a good job on a similar idea to the OP's, and his topic, Blockreduce ..., is trending (by this subforum's measures) too. In both topics we are dealing with sharding, neither forks nor side-chains.

I do agree that using atomic swaps (with recent advancements in HTLCs) and forks has something to do with scaling, the problem being price as a free variable. It would be interesting, though, to have a solution for this problem.

Back to your recent post:
the other thing about reducing block time. (facepalm) reducing block time has these issues:
1. it reduces the 10-minute interval available for all the propagation things you highlight as an issue later in your post
2. it's not just mining blocks in 5 minutes; it's having to change the reward, the difficulty, and the timing of the reward halving
3. changing these affects the estimate of when all 21 mill coins are mined (year ~2140)
I'm not offering block-time reduction as an ultimate scaling solution; of course it is not. I'm just saying that, for a moderate improvement in bitcoin's parameters, it is way better than a comparable block-size increase. They may look very similar, but there is a huge difference: a reduction in block time helps with mining variance and supports small pools/farms. The technical difficulties involved are not big deals, as everything could be adjusted easily: block reward, halving threshold, ...

Quote
as for propagation: if you actually time how long it takes, it is fast, only a couple of seconds.
this is because at transaction relay it takes about 14 seconds for transactions to reach around 90% of the network, validated and placed into mempools. as for a solved block: because full nodes already have the (majority of) transactions in their mempools, they just need the block header data and the list of txs, not the tx data, and then just check that all the numbers (hashes) add up, which takes just 2 seconds
Right, but it adds up as you go to the next and next hops. That is why we call it the proximity premium. The bitcoin p2p network is not a complete graph; it takes something like 10 times longer for a block to be relayed to all miners. When you double or triple the number of txs, the proximity flaw gets worse by a bit less than two or three times, respectively.

Quote
having home users on 0.5mb internet trying to connect to 100 nodes causes a bottleneck for those 100 nodes, as each is only getting data streamed at 0.005mb (0.5/100)
...
yes, of course, for independence have home users run full nodes, but the network topology should put slow home users on the last 'hop' of the relay, not at the beginning/middle.
No disputes. I just have to mention that it is infeasible to engineer the p2p network artificially. AFAIK the current bitcoin networking layer already lets nodes drop slow/unresponsive peers, and if you could figure out an algorithm to help with a more optimized topology, it would be highly appreciated.

On the other hand, I think partitioning/sharding is a more promising solution for most of these issues. Personally, I believe in sharding the state (UTXO), which is a very challenging strategy, as it sits on the edge of forking.
aliashraf
Legendary
*
Offline Offline

Activity: 1456
Merit: 1174

Always remember the cause!


View Profile WWW
November 24, 2018, 11:13:57 PM
 #68

last point raised by aliashraf, about my idea of using spv:
it was not to actually be spv. it was just to use the spv mechanism for the first-time load screen for fresh users, and then after 10 seconds become the full node.

..
think of it this way.
would you rather download a 300gb game via a torrent, wait hours, and then play?
or
download a small free-roam level that, while you play, downloads the entire game via torrents in the background?

my idea was not to just (analogy) download a free-roam level and that's it.
it was to use the spv mechanism just for the first loading screen, to make the node useful in the first 10 seconds, so that while the node then downloads the entire blockchain, people can at least do something while they wait instead of frustrating themselves waiting for the sync

As you may have noticed, I merited this idea of yours, and as you know, I have a lot to add here. Most importantly, a better idea than getting the UTXO set via a torrent download, which implies trust (you need the hash) and exposure to sybil attacks, could be implementing it in bitcoin as I've described in this post.
franky1
Legendary
*
Offline Offline

Activity: 4200
Merit: 4442



View Profile
November 25, 2018, 12:58:39 AM
Last edit: November 25, 2018, 01:15:27 AM by franky1
 #69

shards don't.

i already looked into sharding months ago, played around, and ran scenarios. and like i said a few posts ago, once you wash away all the buzzwords it all just comes full circle

many sharding concepts exist.
some are:
master chain (single), where every 10 blocks each block is designated to a certain region/group
    - this way no group can mine 10 blocks in one go; they only get 1 block in, then have to wait 9 blocks before another chance

master node (multichain), whereby there are multiple chains that swap value
    - i say master node because, although some sharding concepts pretend not to need them,
      inevitably, without a master node, the region/group nodes end up having to "trust" the other chain when sending utxo's

and many more concepts
the issue is that the "trust" of data, if it's not in one place, becomes its own weakness
even the LN devs have noticed this and realised that LN full nodes would need to be master nodes downloading and monitoring bitcoin, litecoin, vertcoin
it seems some devs of sharding projects have not yet seen this dilemma play out.
(5 weak points are more prone to attack than 1 strong point.
EG
it is easier to 51% attack one of 5 points of 5 exahash than to 51% attack one point of 50 exahash. thus if one of the 5 weak points gets hit, damage is done.)

I do agree that using atomic swaps (with recent advancements in HTLCs) and forks has something to do with scaling, the problem being price as a free variable. It would be interesting, though, to have a solution for this problem.

no, atomic swaps and HTLC are BADDDDDDD. think of the UTXO set (as atomic swaps are about 2 tokens and pegging).
as i originally said, it is better to double-mine (bc1q->sc1); that way bitcoin sees the bc1q as spent and thus no extra UTXO,
so there is no holding a large UTXO set of locked unspents (the sc1 just vanishes and isn't counted in btc's UTXO set, as btc can't spend an sc1)....
but again, the whole "needing a master node to monitor both chains" thing comes up and circles around.

so yeah: LN, sharding, sidechains, multichains.. they all end up back, full circle, needing a "masternode" (essentially a full node)
that monitors everything.. which ends up as the debate that if a master node exists, just call it a full node and get back to the root problem.

i could waffle on about all the weaknesses of the 'trust' of relying on separate nodes holding separate data, but i'll try to keep my posts short

block time reduction ..  a moderate improvement in bitcoin parameters, it is way better than a comparable block size increase. They may look very similar but there is a huge difference: a reduction in block time helps with mining variance and supports small pools/farms. The technical difficulties involved are not big deals as everything could be adjusted easily: block reward, halving threshold, ...

nope
transactions are already in people's mempools before a block is made.
nodes don't need to send the block's transactions again; they just send the block header.
this is why stats show transactions take 14 seconds but a block only takes 2 seconds: a block header is small, and the whole verification is just joining the header to the already-obtained transactions.
again
all the nodes are looking for is the block header that links them together. the block header doesn't take much time at all.

transactions are relayed in ~14 seconds, which is plenty of time within the 10-minute window.
if that 10-minute window is reduced to 5 minutes, that's less time for transactions to relay.
i say this not about the average tx of up to 500 sigops, but about cases where a tx has 16,000 sigops, which i warned about a few pages ago.
a 5-minute interval will be worse than a 10-minute interval
math: a 16k-sigop tx will take 7 and a half minutes to relay across the network. meaning if a pool saw the tx first, relayed it out, then started mining a 5-minute block, solved it, and relayed out the block header..
the block header would reach everyone in 5 minutes 2 seconds, but the bloated transaction (degrees of separation) would only have reached 1,000 nodes, as the last 'hop' from those 1,000 to their 10 nodes each has not had time to deal with the bloated tx

it's also kind of why the 16k-sigop limit exists: to keep things under 10 minutes. but it foolishly allows such a large amount that it doesn't keep txs down to seconds.

yes, a solution would be to bring the tx sigop limit down when reducing the block time.
but that alone brings:
the difficulty retarget down to weekly, so discussions of moving it to 4032 blocks to bring it back to fortnightly.
reward halving happening every 2 years, meaning 21 mill mined in less time, unless you move it to 420,000 blocks for a 4-year halving.
and, as i said, a 5-minute interval which, without reducing the tx sigop limit, will hurt propagation
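The schedule arithmetic behind those ripple effects is easy to check. The 2016-block retarget and 210,000-block halving are real consensus parameters; the 5-minute figure is the hypothetical under discussion.

```python
def interval_days(blocks, block_minutes):
    # calendar length of a block-count interval at a given block time
    return blocks * block_minutes / (60 * 24)

# difficulty retarget: 2016 blocks is a fortnight at 10 min, a week at 5 min
assert interval_days(2016, 10) == 14.0
assert interval_days(2016, 5) == 7.0
assert interval_days(4032, 5) == 14.0           # doubled count restores it

# halving: 210,000 blocks is ~4 years at 10 min, ~2 years at 5 min
assert abs(interval_days(210_000, 10) / 365.25 - 4) < 0.01
assert interval_days(420_000, 5) == interval_days(210_000, 10)
```

Halving the block time halves the calendar length of every block-count interval unless the block counts themselves are doubled, which is exactly the set of parameter changes the post lists.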

so reducing block time is NOT simpler than just increasing block size.. there are a lot more ripple effects from reducing block time than from increasing block size.
also, what do end users gain with a ~5min confirm? it's not like standing at a cashier's desk waiting for a confirm is any less frustrating.. it would take a 2-second confirm to make waiting in line at a cashier not be a frustration.
even 30 seconds seems the same eternity as 10 minutes when you see an old lady counting change

Quote
Right, but it adds up as you go to the next and next hops. That is why we call it the proximity premium. The bitcoin p2p network is not a complete graph; it takes something like 10 times longer for a block to be relayed to all miners. When you double or triple the number of txs, the proximity flaw gets worse by a bit less than two or three times, respectively.
i kinda explained it a few paragraphs ago in this post

Quote
No disputes. I just have to mention that it is infeasible to engineer the p2p network artificially. AFAIK the current bitcoin networking layer already lets nodes drop slow/unresponsive peers, and if you could figure out an algorithm to help with a more optimized topology, it would be highly appreciated.

you have kind of found your own solution..
have 15 potential nodes and pick the best 10 by ping/speed. naturally the network finds its own placement, where the slower nodes sit on the outer rings and the faster ones at the centre
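That best-10-of-15 selection can be sketched in a few lines; the peer names and ping figures below are made up for illustration, and real node software measures latency rather than being handed it.

```python
def pick_peers(candidates, keep=10):
    # candidates maps peer address -> measured ping in ms; keep the fastest
    return sorted(candidates, key=candidates.get)[:keep]

# 15 probed peers with hypothetical ping times
probed = {f"node{i}": ping for i, ping in enumerate(
    [80, 15, 200, 45, 30, 500, 12, 90, 60, 25, 300, 18, 70, 110, 40])}

best = pick_peers(probed, keep=10)
assert len(best) == 10
assert "node5" not in best   # the 500 ms peer is dropped to the outer ring
```

Each node applying this rule locally is what lets the topology self-sort, with slow peers naturally pushed toward the last hop.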

Wind_FURY
Legendary
*
Offline Offline

Activity: 2898
Merit: 1820



View Profile
November 25, 2018, 06:08:37 AM
 #70

@franky1
This is what we call being proactive and anticipating. The example you gave about the SegWit roadmap from 2014 is one example. Are we forced to use SegWit? As DoomAD says, they cannot integrate everyone's wishes, but they anticipate in order to make Bitcoin usable with various convenient solutions. It's like complaining because someone is working to improve Bitcoin, then talking about consensus. A consensus from the masses could turn into a 10-year-old kid's decision.

  No, we are not forced to use Segwit. However, someone who chooses not to use Segwit is penalized by paying higher fees. This may only amount to pennies at the moment, but it can add up. If BTC starts to get used even more, many casual users will be compelled to use LN to avoid prohibitive fees.

Or be forced to use Bitcoin Cash. I believe that was their idea of why they split from Bitcoin, right? But apparently, not that many people in the community believed that bigger blocks for scalability were a good trade-off on decentralization.

The social consensus remains "Bitcoin is Bitcoin Core".

Why should anyone be forced to settle for something less secure? So far LN is still in the alpha testing stage; the risk of losing funds is too high ATM. Maybe when they improve their network I'll want to use it. BCH has always had less hash rate and is therefore less secure. I think people should be able to use the most secure network out there in an affordable manner and not be forced to settle for some less secure stuff. Even if the Lightning Network gets its act together, a second-layer solution will be second best when it comes to security. So I guess the BTC blockchain will only be secure VIP2VIP cash. The riffraff can settle for less secure crap.  Cheesy

VIP2VIP cash? Bitcoin will remain an open system that anyone in the world can use. What is "VIP" about Bitcoin? Nothing.

Are the fees so constantly high that they discourage everyone from using Bitcoin? I don't believe they are. Fees have been low since the increasing adoption of Segwit.

Plus, about sharding: franky1, do you agree that bigger blocks are inherently centralizing, and that "sharding" just prolongs the issue instead of solving it?

bones261
Legendary
*
Offline Offline

Activity: 1806
Merit: 1826



View Profile
November 25, 2018, 06:32:24 AM
 #71


VIP2VIP cash? Bitcoin will remain an open system that anyone in the world can use. What is "VIP" about Bitcoin? Nothing.

Are the fees so constantly high that they discourage everyone from using Bitcoin? I don't believe they are. The fees have been low since the increasing adoption of Segwit.

Plus, about sharding: Franky1, do you agree that bigger blocks are inherently centralizing, and that "sharding" just prolongs the issue instead of solving it?

We are discussing the scaling issue.  Roll Eyes You really think the blockchain fee is still going to be low if and when demand is 100x higher than it is currently? Let's hope that if and when that ever happens, LN will somehow ease the risk of losing coins, whether because your channel partner closes a channel in an earlier state and you don't catch it, or because a system error makes you close a channel in an earlier state by mistake and you get a penalty. (Or you close it in an earlier state that is not in your favor.) BTW, can someone get a penalty if they close a channel in an earlier state that is not in their favor? It appears that way to me. Talk about adding insult to injury. I sure hope that I am dead wrong about that. Otherwise the penalty system is a joke.
Wind_FURY
Legendary
*
Offline Offline

Activity: 2898
Merit: 1820



View Profile
November 25, 2018, 08:29:46 AM
Merited by bones261 (1)
 #72


VIP2VIP cash? Bitcoin will remain an open system that anyone in the world can use. What is "VIP" about Bitcoin? Nothing.

Are the fees so constantly high that they discourage everyone from using Bitcoin? I don't believe they are. The fees have been low since the increasing adoption of Segwit.

Plus, about sharding: Franky1, do you agree that bigger blocks are inherently centralizing, and that "sharding" just prolongs the issue instead of solving it?

We are discussing the scaling issue.  Roll Eyes


Then "VIP2VIP cash" is the wrong terminology. Bitcoin remains an open system.

Quote

You really think the blockchain fee is still going to be low if and when demand is 100x higher than it is currently?


No. I already said that users will be forced to use Bitcoin Cash. Other more secure altcoins would be better though.

Quote

Let's hope that if and when that ever happens, LN will somehow ease the risk of losing coins, whether because your channel partner closes a channel in an earlier state and you don't catch it, or because a system error makes you close a channel in an earlier state by mistake and you get a penalty. (Or you close it in an earlier state that is not in your favor.)


As with any software development project, it may succeed, or it may fail. But Lightning has been developing well; let's hope that continues.

Quote

BTW, can someone get a penalty if they close a channel in an earlier state that is not in their favor? It appears that way to me. Talk about adding insult to injury. I sure hope that I am dead wrong about that. Otherwise the penalty system is a joke.


There have been misinformation attempts against the Lightning Network everywhere, made by people who want all transactions to be processed on-chain, in big blocks, and by all nodes. That is not scalable.

But I will ask around and find a good answer for you.

aliashraf
Legendary
*
Offline Offline

Activity: 1456
Merit: 1174

Always remember the cause!


View Profile WWW
November 25, 2018, 09:17:31 AM
Last edit: November 25, 2018, 10:29:54 AM by aliashraf
 #73

many sharding concepts exist.
...
issues are the "trust" of data if its not in one place becomes its own weakness
...
(5 weak points are more prone to attack than 1 strong point.)
Security is good, but too much security is a nightmare, as it comes with costs, and those costs have to be paid somehow. Sharding is what we need in over-secure situations where we can safely split.

In the context of bitcoin and cryptocurrencies, security is not defined as an absolute measure linearly dependent on the cost of a 50%+1 attack; that is just an unfortunate misunderstanding. Bitcoin has been secure since the first day, while the cost of carrying out such an attack has increased substantially, from a few bucks to hundreds of millions of dollars.

Security is not quite an 'indexable' measure; saying 'this coin is less secure' or 'that coin is more secure' is absurd in a cryptocurrency context. The way I understand bitcoin, there is no "less" or "more" security: you are secure or you are not ... and wait ... there is a third state:
You may be ridiculously overpaying to be secure against threats that will never materialize, e.g. the current situation with bitcoin!

A proper sharding/splitting/partitioning would not put anybody in danger if it is applied to an over-secure blockchain, and I'm not talking about an overloaded one, like what the OP proposes.

As for your other arguments regarding propagation delay and my block-time-decrease idea, I choose not to go through an endless debate over this for now, but to be clear, I reject almost everything you say in this regard. Let's do it later, somewhere else.
franky1
Legendary
*
Offline Offline

Activity: 4200
Merit: 4442



View Profile
November 25, 2018, 12:40:54 PM
Last edit: November 25, 2018, 02:08:37 PM by franky1
Merited by bones261 (2)
 #74

scaling bitcoin is not a 1mb base block vs 1 gigabyte base block argument
so lets debunk that old PR campaign right away

its 1,2,4,8,16,32 and so on.. here is the important thing: over time
just like 2009-2017 (2009: 0.25->0.5 up to 2013, then 2013: 0.75->1mb up to 2017)

so when you mention costs, i have to ask: at what cost?
people do not hold onto and demand to only ever use their windows xp computer forever. they naturally upgrade.
things progress over time.
trying to keep things at floppy-disk space / dialup internet speed forever is not natural.

the whole PR campaign narrative of "visa by midnight or else fail" is a fail in itself. the visa stats are not from one network but from multiple networks, with the numbers then combined to pretend it is one network and to claim that bitcoin, as one network, needs to be that powerful ASAP.


so many people think scaling bitcoin means servers and centralisation occur as soon as code is activated, all because they think 1mb->1gb overnight.

..
as for sharding

imagine there are 5 regions with 5 maintainers per region, where the final region (5) is the important one everyone wants to attack

5   5   5   5   5  - taking over one region is easy

3   3   3   3   5
                   8  - siphon 2 maintainers from each of the other 4 regions, and the last region is now being 160% attacked (8 attackers vs 5 honest)

4   4   4   4   5
                   4  - siphon 1 from each, and the last region is now being 80% attacked (4 vs 5)

now imagine an outsider has 6 friends
5   5   5   5   5
                   6  - the last region is now being 120% attacked (6 vs 5)

thus what happens is you end up needing a master node that oversees all 5 regions, where breaking the masternode's rules now requires more than 25 malicious participants, because the masternode can simply reject blocks made by the 160% (8), 80% (4) and 120% (6) attackers while accepting the honest (5), wasting the attackers' time
and thus keeping the (5) region alive and acceptable
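the region arithmetic above can be sketched as a toy model (my own illustration, purely to reproduce the percentages, not a formal security analysis):

```python
# Toy model of the scenario above: 5 regions, each with 5 honest
# maintainers, and an attacker concentrating on one target region.
HONEST_PER_REGION = 5
REGIONS = 5

def attack_ratio(attackers):
    """Attacker strength relative to the target region's honest maintainers."""
    return attackers / HONEST_PER_REGION

print(attack_ratio(8))  # 1.6 -> the "160% attacked" case (2 siphoned from each of 4 regions)
print(attack_ratio(4))  # 0.8 -> the "80% attacked" case (1 from each of 4 regions)
print(attack_ratio(6))  # 1.2 -> the "120% attacked" case (an outsider with 6 friends)

# Under one shared ruleset, overpowering the network means beating every
# honest maintainer combined, not just one region's 5:
needed_single_region = HONEST_PER_REGION + 1             # 6
needed_shared_ruleset = HONEST_PER_REGION * REGIONS + 1  # 26
print(needed_single_region, needed_shared_ruleset)
```

in other words, the shared ruleset raises the bar from 6 malicious participants to 26.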

this is why bitcoin came into being: although there are (now) 'pools', there is 1 ruleset all nodes/pools (regions) have to abide by.
sharding does not solve the byzantine generals problem. sharding undoes the byzantine generals solution and takes the debate back a decade, to what cypherpunks couldn't solve before satoshi came up with the solution, by giving pools separate rules.

for instance, with 5 regions and 5 separate maintainers each, without the oversight of a master rule the maintainers can change the rules of their region, which can affect the other 4 regions.
imagine one region decided not to accept transactions from another particular region. they can do it, as there is no masternode ruling that each region must accept the others.

once you wash away all the buzzwords created by the "sharding" community and play out scenarios of real-world usage rather than the utopian 'it will always work'... you will see issues arise.
too many people have a project and only run tests of 'how it's intended to work' rather than 'hammer it until it breaks to find the weaknesses' tests

take visa. that's sharding in basic form (washing away the buzzwords). america decided not to accept transactions from russia, because the visa system is separate networks, and one network can change its rules and just cut off another network.

however, if there was a single rule that all transactions are acceptable, then russia would be treated the same as america, and america couldn't do a thing about it

bitcoin's beauty is how it solves having multiple generals ruling multiple regions while ensuring they all comply with one rule,
and how those generals abide by that one rule without having one general.

the answer was: everyone watches everything and rejects the malicious individuals.
we have had "sharding" systems in the real world for decades. sharding is DE-inventing what makes bitcoin, bitcoin

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
franky1
Legendary
*
Offline Offline

Activity: 4200
Merit: 4442



View Profile
November 25, 2018, 03:06:02 PM
 #75

Bitcoin has been secure since the first day, while the cost of carrying out such an attack has increased substantially, from a few bucks to hundreds of millions of dollars.

Security is not quite an 'indexable' measure; saying 'this coin is less secure' or 'that coin is more secure' is absurd in a cryptocurrency context. The way I understand bitcoin, there is no "less" or "more" security: you are secure or you are not ...

bitcoin and crypto are not inherently secure. that's why difficulty exists.
yes bitcoin is secure against a CPU attack, as it would require trillions of PCs to match/overtake it,
but it's not secure against certain other things, which is why it has to keep evolving.

only last month there was a bug that could have DDoSed the network

too many people have the mindset that once the titanic is built, it's too big to fail,
once banks are in power, they are too big to fail,
once bitcoin was made in 2009, it's too big to fail.

the mindset should be: look for weaknesses, find weaknesses, solve weaknesses, and repeat.
this is why so many ICOs and sharding projects don't launch. they spread a utopian dream, and instead of finding/solving problems they double down on promoting the utopia and try to shut people up if they mention weaknesses.

true developers want to hear about weaknesses so they can fix them. bad developers only want to hear "great job, now you can retire rich"

bones261
Legendary
*
Offline Offline

Activity: 1806
Merit: 1826



View Profile
November 25, 2018, 03:07:58 PM
 #76

We are discussing the scaling issue.  Roll Eyes


Then "VIP2VIP cash" is the wrong terminology. Bitcoin remains an open system.


People must have short memories. During the prolonged tx-backlog event back in 2017, it certainly seemed that way. Since the tx fee is based on tx size and not the amount sent, people wanting to move around smaller amounts were getting eaten alive by fees. Although a fee of 300 sats per byte is trivial for someone wanting to move around 1 BTC, it was prohibitive for someone wanting to move around 1 million sats.
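the arithmetic behind that is worth making concrete (the 250-byte tx size here is an assumed typical legacy size, not a figure from the post):

```python
# Fees are charged per byte of transaction data, not per amount moved,
# so the same fee eats a far bigger share of a small payment.
FEE_RATE = 300   # sat/byte, the 2017-backlog figure mentioned above
TX_SIZE = 250    # bytes, an assumed typical legacy transaction size

fee = FEE_RATE * TX_SIZE   # 75,000 sats, no matter how much is sent

for amount_sats in (100_000_000, 1_000_000):   # 1 BTC vs 1 million sats
    share = fee / amount_sats
    print(f"moving {amount_sats:>11,} sats costs {fee:,} sats in fees ({share:.3%})")
```

at those assumed sizes, the same 75,000-sat fee is 0.075% of a 1 BTC payment but 7.5% of a 1-million-sat payment.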

Quote
Quote

You really think the blockchain fee is still going to be low if and when demand is 100x higher than it is currently?


No. I already said that users will be forced to use Bitcoin Cash. Other more secure altcoins would be better though.

So the riffraff have to settle for a 3rd rate shitcoin network? Sounds like a vip2vip attitude to me.  Cheesy

Quote
Quote

Let's hope that if and when that ever happens, LN will somehow ease the risk of losing coins, whether because your channel partner closes a channel in an earlier state and you don't catch it, or because a system error makes you close a channel in an earlier state by mistake and you get a penalty. (Or you close it in an earlier state that is not in your favor.)


As with any software development project, it may succeed, or it may fail. But Lightning has been developing well; let's hope that continues.

How long has this been in development? I may not be from Missouri, but you still have to show me. Perhaps I will be less critical when and if I see a product that is actually usable and less prone to losing my funds through computer/human error.

DooMAD
Legendary
*
Offline Offline

Activity: 3766
Merit: 3100


Leave no FUD unchallenged


View Profile
November 25, 2018, 03:11:18 PM
 #77

scaling bitcoin is not a 1mb base block vs 1 gigabyte base block argument
so lets debunk that old PR campaign right away

its 1,2,4,8,16,32 and so on.. here is the important thing: over time

It doesn't have to be an integer, so let's get rid of that myth too.  Why not:

1.25mb base/5mb weight,
1.5mb base/6mb weight,
1.75mb base/7mb weight
2mb base/8mb weight
and so on?  

It's not just about it happening "over time", it's also about sensible increments.  Based on what you've witnessed to date, it should be more than obvious that most BTC users are in no rush to double or quadruple the base.
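for reference, each of those base/weight pairs keeps segwit's 4x ratio between the base cap and the weight cap; a quick check of that multiplier (my illustration, assuming the plain 4x relationship):

```python
# Each proposed base size maps to a weight cap of base * 4 under segwit's
# 4x scale factor, so fractional base sizes give the in-between steps above.
increments_mb = [1.25, 1.5, 1.75, 2.0]   # base sizes from the post
for base in increments_mb:
    print(f"{base}mb base / {base * 4:g}mb weight")
```

which reproduces the 5/6/7/8mb weight figures in the list above.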

franky1
Legendary
*
Offline Offline

Activity: 4200
Merit: 4442



View Profile
November 25, 2018, 03:18:17 PM
 #78

There have been misinformation attempts against the Lightning Network everywhere, made by people who want all transactions to be processed on-chain, in big blocks, and by all nodes. That is not scalable.

But I will ask around and find a good answer for you.

or... there are issues that the LN devs themselves admit. but some people who want LN to be a success don't want the positive PR train to stop, so they will argue endlessly that LN is utopia

here are the LN devs themselves talking about issues with LN that won't be fixed.
https://youtu.be/8lMLo-7yF5k?t=570

and yes, factories are the next evolution of LN concepts, where factories will be the new masternodes, housing lots of people's data and also monitoring multiple chains, because they know LN is not bitcoin. it's a separate network for multiple coins, where LN wishes to be the main system and leave bitcoin and litecoin as just boring shards/data stores

franky1
Legendary
*
Offline Offline

Activity: 4200
Merit: 4442



View Profile
November 25, 2018, 03:31:42 PM
 #79

anyway, once people play out the scenarios, sharding will eventually lead back around to needing masternodes (full nodes).
and once people play out the scenario where a masternode is needed that stores all shard data, why have it separate? the regional nodes are now just lower down the network and of less importance.
and playing out the scenario where it's not separate, we are back to a single nodebase of code monitoring everything.. it just comes full circle: the strongest network is one that is united under one ruleset

It doesn't have to be an integer, so let's get rid of that myth too.  Why not:

1.25mb base/5mb weight, requires hard fork to move to
1.5mb base/6mb weight,  requires hard fork to move to
1.75mb base/7mb weight  requires hard fork to move to
2mb base/8mb weight requires hard fork to move to
and so on?  

It's not just about it happening "over time", it's also about sensible increments.  Based on what you've witnessed to date, it should be more than obvious that most BTC users are in no rush to double or quadruple the base.

fixed that for you.
however, a case where we remove the witness scale factor and have code set in consensus of
4mb pool policy / 32mb consensus,
4.25mb pool policy / 32mb consensus,
4.5mb pool policy / 32mb consensus

means no hard fork per increment, just some code for when the blocks soft-increment by 0.25.
again, to debunk the PR campaign:
this is not about 32mb blocks.
this is about avoiding 3 years of debate just to perform one hard fork, then another 3-year debate to perform another hard fork.
blocks will not be 32mb. they will increment at the pool policy amounts, all monitored and adhered to by nodes enforcing that pools don't go over the policy amount until a coded event soft-activates a 0.25 increment

again, to debunk the PR campaign:
this is not about EB (trying to turn the debate toward a certain group of people).
this is about allowing progressive growth without hard forks and 3-year debates per increment
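the scheme above can be sketched in a few lines (my own interpretation of the proposal; the 4mb starting policy, 0.25 step and activation trigger are all assumptions, and the real consensus code would obviously look nothing like this):

```python
# Minimal sketch: consensus fixes a hard 32mb ceiling once, while nodes
# enforce a softer "pool policy" limit that steps up by 0.25mb whenever
# some agreed activation condition fires. Only the one-off ceiling would
# need a hard fork; the steps are policy changes.
CONSENSUS_LIMIT_MB = 32.0   # hard cap, set once
STEP_MB = 0.25

class Node:
    def __init__(self, policy_mb=4.0):
        self.policy_mb = policy_mb

    def accept_block(self, size_mb):
        # Reject anything over the current soft policy; the consensus
        # ceiling is the absolute backstop.
        return size_mb <= min(self.policy_mb, CONSENSUS_LIMIT_MB)

    def soft_increment(self):
        # Called when the (unspecified) activation condition is met.
        self.policy_mb = min(self.policy_mb + STEP_MB, CONSENSUS_LIMIT_MB)

node = Node()
print(node.accept_block(4.1))   # False: over the 4.0mb policy
node.soft_increment()
print(node.accept_block(4.1))   # True: policy is now 4.25mb
```

the point of the sketch is just that the 32mb figure is a ceiling nodes never let pools reach in one jump, not a block size anyone produces.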

DooMAD
Legendary
*
Offline Offline

Activity: 3766
Merit: 3100


Leave no FUD unchallenged


View Profile
November 25, 2018, 04:20:49 PM
Last edit: November 25, 2018, 04:35:58 PM by DooMAD
 #80

this is not about 32mb blocks.
(...)
this is not about EB

How do you propose something and then basically say "this is not about the thing I'm literally proposing right now"?   Roll Eyes

Perhaps it would allow us to forego the continual hardfork drama, but it's still not remotely as simple and clear-cut as you're making it out to be.  There are very good reasons why people are opposed to such a system and if you aren't even going to attempt to overcome the objections and only talk about the positives, then don't expect people to take this proposal seriously.
