kiklo
Legendary
Offline
Activity: 1092
Merit: 1000
|
|
March 07, 2017, 03:38:23 PM |
|
Hehe - yes, and: even if SW turned out to be the better long-term solution, nobody is really able to grasp it fully AND convince the crowds of its potential; the discussion is endlessly fragmented / censored by different forum policies. All we see here is stupid fanboy chatter - that's it. Sorry - just in case...
Aside from the fact that Segwit lets LN steal transaction fees from the miners and will bankrupt them - which is why the miners will never vote Segwit in. What kind of mallet do I need to use to get that part to register with you?
|
|
|
|
AngryDwarf
|
|
March 07, 2017, 03:40:42 PM |
|
Hehe - yes, and: even if SW turned out to be the better long-term solution, nobody is really able to grasp it fully AND convince the crowds of its potential; the discussion is endlessly fragmented / censored by different forum policies. All we see here is stupid fanboy chatter - that's it. Sorry - just in case...
Aside from the fact that Segwit lets LN steal transaction fees from the miners and will bankrupt them - which is why the miners will never vote Segwit in. So what secures the LN network then?
|
|
|
|
kiklo
Legendary
Offline
Activity: 1092
Merit: 1000
|
|
March 07, 2017, 03:49:53 PM |
|
So what secures the LN network then?
Description, from the Bitcoin Wiki: Lightning Network is a proposed implementation of Hashed Timelock Contracts (HTLCs) with bi-directional payment channels which allows payments to be securely routed across multiple peer-to-peer payment channels. (Dec 22, 2016)

Segwit supporters say: it relies on the underlying blockchain, be it Bitcoin's or otherwise, for its security. In the case of Bitcoin, it uses the underlying proof-of-work algorithm that secures the entire network.

But I will tell you the truth: LN security relies only on the proposed implementation of Hashed Timelock Contracts (HTLCs). Time locks are what you are relying on for security in LN. Example: when the time locks expire, your BTC can be stolen.

My fear with LN is rather the opposite: that propagating "waves of panic" will overwhelm the block chain with transactions, because the number of transactions pending on the LN network can in principle be orders of magnitude larger than what a block chain can handle (that's its main idea!). So if a block chain can handle, say, 100,000 transactions per hour, and the LN network has 10 million transactions pending in 10 minutes, and there's a panic wave going through the network, those 10 million transactions will need to go on-chain, which will create a backlog of 100 hours, often passing the safety time limit of regularisation, and huge opportunities to scam.
Your fears are confirmed - article from Jul 5, 2016, 12:28 PM EST by Kyle Torpey: https://bitcoinmagazine.com/articles/here-s-how-bitcoin-s-lightning-network-could-fail-1467736127/

And also, what would happen in this scenario if the locks on the main chain expire before the tx from LN can be pushed back to the main chain? It would be a bit like a double-spending issue, no?

@IadixDev, nice, you see the problems with LN also. How to steal LN funds, from the LN whitepaper itself: https://lightning.network/lightning-network-paper.pdf pages 49 through 51.

9.1 Improper Timelocks: Participants must choose timelocks with sufficient amounts of time. If insufficient time is given, it is possible that timelocked transactions believed to be invalid will become valid, enabling coin theft by the counterparty. There is a trade-off between longer timelocks and the time-value of money. When writing wallet and Lightning Network application software, it is necessary to ensure that sufficient time is given and users are able to have their transactions enter into the blockchain when interacting with non-cooperative or malicious channel counterparties.

9.2 Forced Expiration Spam: Forced expiration of many transactions may be the greatest systemic risk when using the Lightning Network. If a malicious participant creates many channels and forces them all to expire at once, these may overwhelm block data capacity, forcing expiration and broadcast to the blockchain. The result would be mass spam on the bitcoin network. The spam may delay transactions to the point where other locktimed transactions become valid.

9.3 Coin Theft via Cracking: As parties must be online and using private keys to sign, there is a possibility that, if the computer where the private keys are stored is compromised, coins will be stolen by the attacker. While there may be methods to mitigate the threat for the sender and the receiver, the intermediary nodes must be online and will likely be processing the transaction automatically. For this reason, the intermediary nodes will be at risk and should not be holding a substantial amount of money in this "hot wallet." Intermediary nodes which have better security will likely be able to out-compete others in the long run and be able to conduct greater transaction volume due to lower fees. Historically, one of the largest components of fees and interest in the financial system are from various forms of counterparty risk - in Bitcoin it is possible that the largest component in fees will be derived from security risk premiums.

A Funding Transaction may have multiple outputs with multiple Commitment Transactions, with the Funding Transaction key and some Commitment Transaction keys stored offline. It is possible to create an equivalent of a "Checking Account" and "Savings Account" by moving funds between outputs from a Funding Transaction, with the "Savings Account" stored offline and requiring additional signatures from security services.
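The "improper timelocks" risk quoted above boils down to two competing spend paths on one output. Here is a minimal Python sketch of that race; the names and structure are purely illustrative, not the real Lightning script or any actual API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the two spend paths of a hashed-timelock output.
# Names are illustrative only -- this is not the real LN script logic.

@dataclass
class HTLC:
    expiry_height: int   # absolute block height at which the timelock expires

def receiver_can_claim(htlc: HTLC, height: int, has_preimage: bool) -> bool:
    """Claim path: needs the hash preimage, before the timelock expires."""
    return has_preimage and height < htlc.expiry_height

def sender_can_refund(htlc: HTLC, height: int) -> bool:
    """Refund path: becomes valid once the timelock has expired."""
    return height >= htlc.expiry_height

# The "improper timelock" risk: if the chain is too congested for the
# receiver's claim to confirm before expiry_height, the refund path opens
# and the counterparty can race the funds back -- i.e. coin theft.
```

Note how at `height >= expiry_height` the receiver's path closes and the refund path opens: whoever gets confirmed first wins, which is exactly why the whitepaper insists on "sufficient" timelocks.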
|
|
|
|
kiklo
Legendary
Offline
Activity: 1092
Merit: 1000
|
|
March 07, 2017, 03:54:46 PM |
|
A little tidbit no one likes to mention: LN funds can be counterfeited, like so.

We all know that LN is a proposed off-chain solution to BTC's backlog of unconfirmed transactions (which could easily be fixed just with a block size increase or a faster block speed). But instead the BTC core devs want to shove Segwit & LN down everyone's throats.

Facts: LN notes are an off-chain representation of the value of a BTC (with the actual BTC locked in place on the actual on-chain BTC blockchain). LN devs have continuously implied that BTC will be placed on LN and very rarely, if ever, be returned / unlocked on the real BTC blockchain. Combined, the Chinese mining pools have over the 51% necessary for an attack (~68% at last count). With a 51% attack they can perform history-rewrite attacks, rewriting the blockchain. (A few years ago, at the prompting of the BTC devs, a group with over 51% REWROTE the last 12 hours of the blockchain to fix a fork caused by a programming error - so that was 76 blocks that were rewritten.) We also know the miners can choose which transactions are included in their blocks.

So now, back to the title of this OP: exactly how do you counterfeit BTC on LN?

Option 1: Form a group of collusion between the miners that control 51%. Send 50 BTC to an address, then follow the steps on LN to lock up that 50 BTC on the blockchain. Whether LN requires 1 or 3 confirmations, as soon as LN confirms that the representation of LN notes matches your amount, you and your colluding friends rewrite the blockchain and include a transaction moving that 50 BTC to another address before the lock took place. You now still have your 50 BTC free & clear on-chain, and a representation value of 50 BTC off-chain on LN (which you can use for LN transactions forever).

BTC on-chain transactions ended counterfeiting; LN off-chain transactions will bring counterfeiting into crypto.
|
|
|
|
franky1
Legendary
Offline
Activity: 4396
Merit: 4761
|
|
March 07, 2017, 03:55:45 PM |
|
Neither has been tested on the main net.
Segwit is much more complicated, which is why it required so much testing even on testnet. BU is, by comparison, a much simpler change.
Dynamics has been tested, actually - though 1MB was the limit (in consensus.h), much like some dynamics proposals want to move this to 16MB (in consensus.h).

Pools have been dynamically moving their preferential block sizes since 2009 (in policy.h): in 2013 it was below 500kb, in 2015 it was below 750kb. So, much like the many dynamic block proposals that want to elevate (in consensus.h) to 4MB, 8MB, 16MB, 32MB, whatever - there is, and always has been, a second limit that is dynamically moved below it (in policy.h). Some proposals want policy.h to have a bigger usefulness for the nodes, where the nodes flag to allow or not allow pools to go beyond X policy.h maxblocksize.

A good example of a previous event: just imagine the headache if we had stayed at 500kb blocks when sipa did the leveldb-bug event. That's the reality of the debate today - the same as the 2013 question of "do we go above 500kb in policy.h, even though consensus.h was 1MB".

P.S. My node and many other nodes have a consensus.h of 8MB right now, and my node in particular has a policy.h limit of 1MB (and a few tweaks to validation)... and I'm not having any problems.
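The two-limit setup described above - a hard cap every node enforces (consensus.h) and a lower, movable preference pools build under (policy.h) - can be sketched in a few lines. The constants mirror the historical numbers mentioned in the post and are illustrative, not actual Bitcoin source values:

```python
# Sketch of the two-limit idea: a hard consensus cap every node enforces,
# and a lower, movable policy preference pools build under. Values are
# illustrative (historical 1MB consensus cap, ~750kb policy preference).

CONSENSUS_MAX = 1_000_000   # hard limit (consensus.h): bigger blocks are invalid
POLICY_MAX = 750_000        # soft preference (policy.h): what a pool will build

def block_valid(size: int) -> bool:
    """Consensus rule: every node rejects blocks over the hard cap."""
    return size <= CONSENSUS_MAX

def pool_will_build(size: int) -> bool:
    """Policy rule: a pool keeps its own blocks under its movable cap."""
    return size <= POLICY_MAX

# A 900kb block mined elsewhere is still accepted (under consensus) even
# though local policy would never have produced it -- moving POLICY_MAX
# needs no fork at all; only moving CONSENSUS_MAX does.
```

The asymmetry in the final comment is the whole point of the post: the policy limit has been raised repeatedly without drama, because it sits strictly below the consensus cap.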
|
I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER. Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
|
|
|
jonald_fyookball
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
|
|
March 07, 2017, 03:57:42 PM |
|
Neither has been tested on the main net.
Segwit is much more complicated, which is why it required so much testing even on testnet. BU is, by comparison, a much simpler change.
Dynamics has been tested, actually - though 1MB was the hard limit (in consensus.h), much like some dynamics proposals want to move this to 16MB (in consensus.h). Pools have been dynamically moving their preferential block sizes since 2009 (in policy.h): in 2013 it was below 500kb, in 2015 it was below 750kb. So, much like the many dynamic block proposals that want to elevate (in consensus.h) to 4MB, 8MB, 16MB, 32MB, whatever - there is, and always has been, a second limit that is dynamically moved below it (in policy.h). Some proposals want policy.h to have a bigger usefulness for the nodes, where the nodes flag to allow or not allow pools to go beyond X policy.h maxblocksize. Just imagine the headache if we had stayed at 500kb blocks when sipa did the leveldb-bug event. That's the reality of the debate today - the same as the "do we go above 500kb in 2013" question.

so you're saying the basic idea of emergent consensus that the core devs are pretending to be so freaked out about and claiming is so 'radically different' has actually been done already...
|
|
|
|
|
dinofelis
|
|
March 07, 2017, 04:04:30 PM |
|
If segwit reaches locked-in, you still don’t need to upgrade, but upgrading is strongly recommended. The segwit soft fork does not require you to produce segwit-style blocks, so you may continue producing non-segwit blocks indefinitely. However, once segwit activates, it will be possible for other miners to produce blocks that you consider to be valid but which every segwit-enforcing node rejects; if you build any of your blocks upon those invalid blocks, your blocks will be considered invalid too.
This is the general behaviour of a soft fork: if a majority of miners adopts a soft fork, then as a minority miner you have no choice but to follow, or become insignificant.

Remember the definition of a soft fork: a soft fork is a protocol change such that everything that happens under the new protocol still seems valid under the old protocol, but, on the other hand, what used to be valid under the old protocol isn't necessarily valid under the new one. For instance, a typical soft fork is to blacklist addresses or to roll back former transactions (which is supposed not to be done, but it can be, with a soft fork). The old protocol allows these addresses to transact; the new protocol doesn't. Any new block that contains these forbidden transactions will be considered valid by the old protocol, but invalid by the new one. As such, if you are an old-protocol miner and you make such blocks, they will be orphaned by all new-protocol miners. If those miners have the majority of hash power, such a block will ALWAYS end up being orphaned. On the other hand, old-protocol miners will build upon new-protocol blocks without problems; they will not orphan new-protocol blocks. This means that old-protocol miners always end up losing under majority acceptance of a soft fork. A soft fork accepted by a majority IMPOSES ITSELF upon the rest.

This is totally different with a hard fork. With a hard fork, new-protocol blocks are considered invalid by the old protocol. As such, if a fraction of the miners applies it, it will make a new chain, on which old-protocol miners will never build. The old-protocol miners will continue building the old-protocol block chain and will not suffer from the forked chain that the new-protocol miners are now building. Even with 10%-90% or 90%-10% splits, nobody is FORCED to follow another protocol than the one he wishes.

The chain that is being mined is always mined with full consensus, but the price to pay is that there are now two chains (which is normal: there are two non-agreeing consensus groups). With a hard fork, nothing is imposed on anybody. With a soft fork, the majority imposes its will on the minority.
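The asymmetry described above can be modelled as sets of valid blocks: a soft fork shrinks the set, a hard fork enlarges it. A toy Python sketch, with sizes in arbitrary units and a made-up `blacklisted` flag standing in for any soft-fork restriction:

```python
# Toy model of soft-fork vs hard-fork validity. Sizes are in arbitrary
# units; "blacklisted" is an invented flag for illustration.

def old_valid(block: dict) -> bool:
    """Original protocol rule."""
    return block["size"] <= 1

def soft_fork_valid(block: dict) -> bool:
    # A soft fork only *restricts*: everything it accepts, the old rule
    # accepts too -- so old nodes follow the new chain without noticing.
    return old_valid(block) and not block.get("blacklisted", False)

def hard_fork_valid(block: dict) -> bool:
    # A hard fork *relaxes*: it accepts blocks the old rule rejects,
    # so old nodes fork off onto their own chain.
    return block["size"] <= 2

# Valid to old nodes, forbidden to the soft fork: a new-rule majority
# will keep orphaning it, so the soft fork imposes itself.
blacklisted_block = {"size": 1, "blacklisted": True}

# Valid only under the hard fork: old nodes never build on it, so the
# result is two chains rather than one imposed rule set.
big_block = {"size": 2}
```

Checking the two sample blocks against all three predicates reproduces the post's conclusion: soft-fork-forbidden blocks lose under a majority, hard-fork blocks simply split the chain.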
|
|
|
|
franky1
Legendary
Offline
Activity: 4396
Merit: 4761
|
|
March 07, 2017, 04:09:16 PM |
|
so you're saying the basic idea of emergent consensus that the core devs are pretending to be so freaked out about and claiming is so 'radically different' has actually been done already...
Emergent consensus (BU's specific proposal of dynamics) has not been around since day one, because BU hasn't been around since day one. Then again, Core hasn't been around since 2009 either (it was satoshi-qt prior to 2013).

But the whole thing about "excessive blocks" (BU's specific proposal) is about making policy.h more important, as the lower threshold and the "FLAGGER", while making it automatically moveable rather than manually movable. In the past (2013), sipa and the core devs had to manually move policy.h, and so did pools - though nodes were not really using policy.h as the block-validation rule; pools were reliant on it.

In fact, early clients had 3 layers: protocol = 32MB, consensus = 1MB, policy < 500kb in the early days.
|
|
|
|
franky1
Legendary
Offline
Activity: 4396
Merit: 4761
|
|
March 07, 2017, 04:12:45 PM |
|
If segwit reaches locked-in, you still don’t need to upgrade, but upgrading is strongly recommended. The segwit soft fork does not require you to produce segwit-style blocks, so you may continue producing non-segwit blocks indefinitely. However, once segwit activates, it will be possible for other miners to produce blocks that you consider to be valid but which every segwit-enforcing node rejects; if you build any of your blocks upon those invalid blocks, your blocks will be considered invalid too.
This is the general behaviour of a soft fork: if a majority of miners adopts a soft fork, then as a minority miner you have no choice but to follow, or become insignificant.

What you're quoting - my quote of a quote - shows: 1. breaking the "backward compatible" promise. Yeah, I laughed reading that they literally want to ban blocks because they are not segwit-branded, even if the data was valid. 2. causing a split in the network. Yep, I laughed that even going soft can cause a bilateral split, breaking the promise that going soft avoids such drama.
|
|
|
|
hv_
Legendary
Offline
Activity: 2534
Merit: 1055
Clean Code and Scale
|
|
March 07, 2017, 05:58:02 PM |
|
so you're saying the basic idea of emergent consensus that the core devs are pretending to be so freaked out about and claiming is so 'radically different' has actually been done already...
Emergent consensus (BU's specific proposal of dynamics) has not been around since day one, because BU hasn't been around since day one. Then again, Core hasn't been around since 2009 either (it was satoshi-qt prior to 2013). But the whole thing about "excessive blocks" (BU's specific proposal) is about making policy.h more important, as the lower threshold and the "FLAGGER", while making it automatically moveable rather than manually movable. In the past (2013), sipa and the core devs had to manually move policy.h, and so did pools - though nodes were not really using policy.h as the block-validation rule; pools were reliant on it. In fact, early clients had 3 layers: protocol = 32MB, consensus = 1MB, policy < 500kb in the early days.

So, chilled, without the loud bank-breaking words: BU is much more Satoshi-consensus-like than anything else (= hacked agenda stuff) we've seen before?
|
Carpe diem - understand the White Paper and mine honest. Fix real world issues: Check out b-vote.com The simple way is the genius way - Satoshi's Rules: humana veris _
|
|
|
franky1
Legendary
Offline
Activity: 4396
Merit: 4761
|
|
March 07, 2017, 06:19:41 PM |
|
So, chilled, without the loud bank-breaking words: BU is much more Satoshi-consensus-like than anything else (= hacked agenda stuff) we've seen before?

Using gmaxwell's words (not verbatim): 'BU is just a core 0.12 copy-and-paste job with a few minimal changes...' Well, that's just proof that offering more capacity doesn't require a total game-changing rewrite of the entire thing, doesn't require "upstream filters", doesn't require new keys, doesn't require users moving funds to new keys just to see a feature, and doesn't require intentionally banning nodes.
|
|
|
|
AngryDwarf
|
|
March 07, 2017, 07:58:56 PM |
|
Here is one idea of scalability and transaction rate. It's quite old: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306

For it to work, though, I don't think we can allow demand on the blockchain to exceed capacity, or mempools to forget transactions for bitcoin service providers, or nodes to be selective about the transactions they relay.
|
|
|
|
franky1
Legendary
Offline
Activity: 4396
Merit: 4761
|
|
March 07, 2017, 08:14:46 PM |
|
Here is one idea of scalability and transaction rate. It's quite old: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306 For it to work, though, I don't think we can allow demand on the blockchain to exceed capacity, or mempools to forget transactions for bitcoin service providers, or nodes to be selective about the transactions they relay.

Many dynamic-block proposals envision including a 'speedtest' algo that tests a node's effectiveness at noticing a new block, downloading it, validating it and relaying it out, and sets a score from start to end. They then use that to help flag the upper limit nodes will accept, which becomes consensus.h. Below that sits the lower limit: policy.h holds the preferred size deemed acceptably safe, which can automatically grow as needed BELOW the ultimate limit. With the network consensus flagging, by X%, a big no-no to Xmb, pools won't then make Xmb blocks - thus not killing off nodes. And as nodes' abilities improve over the years with technology and telecommunications, the network is allowed to grow at an acceptable, natural, progressive rate. (E.g. a Raspberry Pi 3, even behind the China firewall, can handle 8MB blocks - so start with an 8MB consensus.h and a 2MB policy.h, which can increase naturally up to 8MB without having to do anything manually.) Meaning pools would then go from 1MB and try 1.001MB to test the water, incrementing to, say, 1.99MB before worrying about orphans, and then the automated moving of policy.h can occur - all while blocks are way below 8MB.

Do not be fooled by the "visa by midnight", "gigabyte by midnight", "servers by midnight" rhetoric that blockstreamers are spewing out when they exaggerate satoshi's words to fit a fake narrative that bitcoin needs to commercialise and centralise to survive.
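The automatic policy growth described above might look something like the sketch below. All numbers and the 10%/90% thresholds are assumptions for illustration, not an actual BU or dynamic-blocks specification:

```python
# Sketch of automatic policy.h growth below a fixed consensus.h cap.
# All numbers are illustrative assumptions, not a real specification.

CONSENSUS_MAX = 8_000_000   # hard upper cap, e.g. 8MB

def next_policy_limit(current_policy: int, recent_avg_block: int) -> int:
    """Grow the soft cap by 10% when blocks near it; never exceed consensus."""
    if recent_avg_block > 0.9 * current_policy:
        return min(int(current_policy * 1.1), CONSENSUS_MAX)
    return current_policy

# Starting at a 2MB policy, sustained ~1.9MB blocks nudge the limit up to
# 2.2MB automatically -- no manual recompile, no fork -- until the 8MB
# consensus ceiling is reached.
```

The design choice the sketch captures: demand moves only the soft limit, and the soft limit is clamped by the hard one, so growth is gradual and bounded rather than a one-shot jump.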
|
|
|
|
AngryDwarf
|
|
March 07, 2017, 08:18:48 PM |
|
How would BU prevent malicious actors gaming the system by running lots of nodes with restrictive settings? It would be a node race trying to set the consensus.
|
|
|
|
franky1
Legendary
Offline
Activity: 4396
Merit: 4761
|
|
March 07, 2017, 08:24:38 PM |
|
How would BU prevent malicious actors gaming the system, by running lots of nodes with restrictive settings? It would be a node race trying to set the consensus.
You're talking about the same threat as what could have happened over the last 8 years by sybil-attacking the network with lots of 500kb-limit nodes.
|
|
|
|
AngryDwarf
|
|
March 07, 2017, 08:29:28 PM |
|
How would BU prevent malicious actors gaming the system, by running lots of nodes with restrictive settings? It would be a node race trying to set the consensus.
your talking about the same threat as what could have happened over the last 8 years by sybil attacking the network with lots of 500kb limit nodes You mean by compiling a node with a lower block size limit and starting them all over the network?
|
|
|
|
franky1
Legendary
Offline
Activity: 4396
Merit: 4761
|
|
March 07, 2017, 08:31:33 PM Last edit: March 07, 2017, 09:10:25 PM by franky1 |
|
How would BU prevent malicious actors gaming the system, by running lots of nodes with restrictive settings? It would be a node race trying to set the consensus.
your talking about the same threat as what could have happened over the last 8 years by sybil attacking the network with lots of 500kb limit nodes You mean by compiling a node with a lower block size limit and starting them all over the network? you brought up the question of 'malicious actors gaming the system, by running lots of nodes with restrictive settings?' so im just assuming you mean sybil attack.. and assuming you mean restrictive settings.. where by i gave an example of 500kb.. which is just as likely to happen even now or at anytime in the past it was as likely to have happened. .. malicious actors gaming the system, by running lots of nodes with restrictive settings is no more or less a threat no different than the last 8 years. there are many ways to mitigate these threats. EG recognising a jump in node count of nodes using things like amazon servers. and not including them in the tally. that way REAL decentralised nodes decide what the settings are, by simply not caring about amazon server capabilities. that way the network sticks to what rational/true nodes are ok with .. most sybil attacks are where people run LITE nodes(not full nodes) but tweak the useragent to look like its a full node to then spam out bad requests. a simple way to know if a node is a full node could be: take 3 numbers from 1to 450000... say 234567, 321234, 432111(randomly chosen at each handshake) my node could send that. and want a reply. EG the other node has to grab the block hashes of those 3 block through its own blockchain.. sha256 them together and send me the result. which if correct they get whitelisted (imagine it like a 'are you human' captcha 'select the image of a roadsign....... but an 'are you fullnode' sha the hashes of these 3 blocks' ) it could go one step further.. 
by asking for tx number 27(randomly chosen at each handshake) of those blocks to sha together the TXID's most sybil nodes malicious attackers wont shell out $$$ buying thousands of amazon accounts with 100gb data allowance each so they wont have the blockdata to reply. however real nodes will have the data. so its as an example just one way to check nodes are actually full nodes.
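The handshake described above can be sketched in a few lines. This is a hypothetical protocol, not a real Bitcoin P2P message; `blockhash_at` is a stand-in for a lookup into the node's own block index:

```python
import hashlib
import random

# Sketch of an "are you a full node" challenge: pick random block heights,
# ask the peer to hash those block hashes together, compare against our
# own chain. Hypothetical protocol, illustrative names only.

def make_challenge(tip_height: int, k: int = 3) -> list[int]:
    """Pick k distinct random block heights the peer must prove it holds."""
    return random.sample(range(tip_height), k)

def answer_challenge(heights: list[int], blockhash_at) -> str:
    """Concatenate the requested block hashes and SHA-256 the result."""
    data = b"".join(blockhash_at(h) for h in heights)
    return hashlib.sha256(data).hexdigest()

def verify(heights: list[int], reply: str, blockhash_at) -> bool:
    """Whitelist the peer only if its digest matches our own chain's."""
    return reply == answer_challenge(heights, blockhash_at)
```

A lite node without the block data cannot compute the digest, while a full node answers from local storage almost instantly. The same shape extends to the second step (hashing the TXIDs of a randomly chosen transaction index in each block).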
|
|
|
|
AngryDwarf
|
|
March 07, 2017, 09:12:04 PM |
|
Most sybil attacks are where people run LITE nodes (not full nodes) but tweak the user agent to look like a full node, to then spam out bad requests.
A simple way to know if a node is a full node could be: take 3 numbers from 1 to 450,000 -
say 234567, 321234, 432111 (randomly chosen at each handshake). My node could send that and want a reply: the other node has to grab the block hashes of those 3 blocks through its own blockchain, sha256 them together and send me the result, which, if correct, gets them whitelisted. (Imagine it like an 'are you human' captcha - 'select the image of a road sign' - but an 'are you a full node' check - 'sha the hashes of these 3 blocks'.)
It could go one step further, by asking for tx number 27 (randomly chosen at each handshake) of those blocks, to sha together the TXIDs.
Most sybil-node malicious attackers won't shell out $$$ buying thousands of Amazon accounts with 100GB data allowance each, so they won't have the block data to reply;
however, real nodes will have the data. So it's, as an example, just one way to check nodes are actually full nodes.
An idea worth considering.
|
|
|
|
jonald_fyookball
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
|
|
March 07, 2017, 09:50:32 PM |
|
Franky is correct.
Basically, sybil attacks are thwarted in Bitcoin because of Proof of Work. Honest nodes need to control a majority of the hashing power. That's never changed and won't change under BU.
|
|
|
|
|