Bitcoin Forum
Author Topic: A few lines of code...  (Read 1246 times)
achow101
Staff
Legendary

Activity: 3402
Merit: 6659


Just writing some code


October 09, 2016, 03:42:59 PM
 #21

Changing a few lines of code to remove a temporary limit (1MB) is a tiny change to return to the original bitcoin.  
I don't think you have read any source code for any of the proposed hard forks to increase the block size limit. It most certainly is not just "a few lines of code". This is Gavin's original implementation PR that he submitted to Bitcoin Core: https://github.com/bitcoin/bitcoin/pull/6341. It is most certainly more than just a few lines of code. Why? Because it must include proper deployment of the hard fork and unit tests (tests are always necessary regardless of the change). Furthermore, IIRC that PR did not include anything about fixing the O(n^2) signature validation problem, which needs to be fixed separately with another set of code changes. Lastly, to help reduce the bandwidth, so that people can actually still run full nodes, you need something like XThin or Compact Blocks, which is yet another large code change. Suffice it to say, it is most certainly not just "a few lines of code" and a "tiny change".
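The O(n^2) problem mentioned above is easy to see with a back-of-the-envelope model: under legacy SIGHASH_ALL, checking each input's signature hashes a serialization roughly the size of the whole transaction, so total hashing grows quadratically with the input count. A toy Python sketch (the byte sizes are invented for illustration, not real serialization figures):

```python
import hashlib

def legacy_sighash_bytes(num_inputs, bytes_per_input=150):
    # Each input's signature check hashes ~the whole transaction,
    # so total bytes hashed grow as num_inputs * tx_size, i.e. O(n^2).
    tx_size = num_inputs * bytes_per_input
    total = 0
    for _ in range(num_inputs):
        hashlib.sha256(b"\x00" * tx_size).digest()  # one sighash per input
        total += tx_size
    return total

# Doubling the inputs quadruples the hashing work:
assert legacy_sighash_bytes(200) == 4 * legacy_sighash_bytes(100)
```

This is why a block size bump alone doesn't help: bigger blocks allow bigger transactions, and validation time for a pathological transaction blows up quadratically unless the sighash scheme itself is changed.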

How in the world did anyone ever believe SegWit was a good thing?  
Because it is a good thing and it fixes a ton of issues. Segwit fixes malleability issues, which have been a problem in the past when people have maliciously attacked Bitcoin transactions by malleating them. It also fixes the O(n^2) signature validation issue and makes it O(n), which is much, much better. It introduces script versioning and allows for further improvements to the scripting system. And of course, it can also help with increasing the number of transactions that will fit into a block.
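The malleability fix comes down to what the txid commits to. A toy sketch (the byte strings here are placeholders, not real transaction serialization): a legacy txid hashes the signatures too, so a third party re-encoding a signature changes the txid, while a segwit txid leaves the witness data out of the hash entirely.

```python
import hashlib

def dsha(data):
    # Bitcoin's double-SHA256 used for transaction ids.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Toy stand-ins for real serialization:
body = b"version|inputs|outputs|locktime"  # non-witness data
sig = b"\x30\x44" + b"\x22" * 68           # a DER-ish signature blob
malleated = sig[:-1] + b"\x01"             # third party tweaks one byte

# Legacy txid commits to the signatures, so the tweak changes it...
assert dsha(body + sig) != dsha(body + malleated)

# ...while a segwit txid commits only to the non-witness data.
segwit_txid = dsha(body)
assert segwit_txid == dsha(body)
```

That stability of the txid is exactly what makes chains of unconfirmed transactions (and payment channels) safe to build.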

segwit uses different keypairs, where older implementations cannot validate signatures of segwit keypairs. segwit keypairs cannot directly move funds back to traditional keypairs without having to spend funds twice to get back to a traditional configuration that can be validated by traditional implementations.
Segwit does not use different keypairs. It still uses the exact same Elliptic Curve Cryptography with the secp256k1 curve. Segwit uses different scripts, which are completely separate from keypairs.
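To underline the point that segwit changes scripts, not cryptography: the same secp256k1 math backs both legacy and segwit keys. A minimal pure-Python sketch of deriving a public key on that curve (illustrative only, not production code; needs Python 3.8+ for the modular inverse via `pow`):

```python
# secp256k1 parameters (SEC 2); the same curve backs legacy and segwit keys.
P = 2**256 - 2**32 - 977
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(p, q):
    # Affine-coordinate point addition on y^2 = x^3 + 7 over GF(P).
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # p + (-p) = point at infinity
    if p == q:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def pubkey(priv):
    # Double-and-add scalar multiplication: pub = priv * G.
    result, addend = None, G
    while priv:
        if priv & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        priv >>= 1
    return result

x, y = pubkey(12345)
assert (y * y - (x**3 + 7)) % P == 0  # the derived point lies on the curve
```

The same private key and the same derived public key can be locked behind a legacy P2PKH script or a segwit witness program; only the script wrapper differs.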

meaning it's a headache to transact with others.
In what way? You can still send to traditional outputs. Segwit uses nested outputs so that people can still send to segwit wallets and the receiver can still take advantage of segwit.
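A sketch of the nested output achow101 describes: the P2WPKH witness program is wrapped inside an ordinary P2SH script, so pre-segwit software just sees a normal 23-byte pay-to-script-hash output it already knows how to pay. (Pure illustration: the pubkey is a dummy, and truncated double-SHA256 stands in for Bitcoin's HASH160 so the sketch runs anywhere.)

```python
import hashlib

def h160(data):
    # Stand-in for Bitcoin's HASH160 (RIPEMD160(SHA256(x))): truncated
    # double-SHA256 is used so the sketch runs without a RIPEMD160 build.
    # Only the 20-byte output length matters for the layout shown here.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()[:20]

# Dummy compressed public key (33 bytes), purely for illustration.
pubkey = b"\x02" + b"\x11" * 32

# P2WPKH witness program: OP_0 <20-byte key hash>
witness_program = b"\x00\x14" + h160(pubkey)

# Wrapped in P2SH: OP_HASH160 <20-byte script hash> OP_EQUAL.
# Old wallets see nothing but an ordinary pay-to-script-hash output.
p2sh_script = b"\xa9\x14" + h160(witness_program) + b"\x87"

assert len(witness_program) == 22 and len(p2sh_script) == 23
```

So the sender needs no segwit support at all; only the receiver, when spending, reveals the witness program and gets the segwit benefits.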

segwit makes traditional implementations no longer full validating nodes, and instead just limp-wristed relay nodes.
And so did every other soft fork.

200GB is nothing.  Super fucking tiny.  The total size of the chain isn't at all important.  You only download it once.  You can buy 10 terabyte for very cheap.  So, 200GB is laughably small.

The real issue is passing 8MB around to all the nodes every ten minutes.  Some effects occur there.  No big deal.  Internet is freaking fast and getting freaking faster.  Netflix's bandwidth load is far greater than bitcoin's with 8MB.  

If we want people to be able to run a node behind a 1200 baud modem, then 8MB is problematic.  If we abandon those having 1200 baud and less, then the only reason to keep 1MB is to drive need for Blockstream's bullshit solutions.  

8MB blocks are very lightweight for nearly all modern systems.  
What about your bandwidth? It's not storage that's the issue, it's bandwidth. You have to download the entire blockchain in order to run a full node. And then you have to upload and download all of the blocks. If you have a bandwidth cap (e.g. you have Comcast), then you are royally screwed and can't run a full node.



For those of you who claim that Bitcoin Core is "centrally planning" Bitcoin, you should take a look at Bitcoin Classic. Thomas Zander commits directly to the development branch of Classic. He doesn't follow a pull request and code review process like Bitcoin Core does. He is centrally planning the direction of Classic by taking its development directly into his own hands by bypassing code review and just putting his changes into the repo.

franky1
Legendary

Activity: 4228
Merit: 4501



October 09, 2016, 03:47:05 PM
 #22

Do you mean that the bitcoin blockchain is coded so the transactions will first gather up to a big total and then will be confirmed, instead of one by one? So basically it is meant to be non-instant confirmation?

No, I'm saying the opposite.

Your post needs to be answered in two parts.
Transactions are not confirmed instantly. They are seen on the network (unconfirmed) near instantly, but not confirmed instantly.
Transactions are seen, but are then held in a list.

While mining groups are solving a block (the previous list), they are building up a new list ready for the next block; they select the transactions that validate and have a good fee, and make an unsolved block. This occurs on average (based on a two-week measure) roughly every 10 minutes.
Then, when the previous block is solved, they start working on the next block.
When the blocks are solved, the transactions in those blocks are classed as confirmed.

Let's say each 1MB block has a max buffer of 2,500 txs.

In 2013, blocks only had a thousand transactions, because not many transactions were being sent around in between each block. So every transaction went in and there was still space for more transactions.
As bitcoin got more popular, more transactions started happening and blocks started to get more filled up.

Today we are at the limit, and it's causing bottlenecks often.

If we moved to 2MB, mining groups could add more transactions and have space to allow for future growth before hitting the wall again.
If we moved not to 2MB but to 4MB, mining groups could add more transactions and have space to allow for future growth, and hitting the wall again wouldn't happen for a longer period compared to 2MB.

Everyone knows that 2MB or 4MB is safe, but some are trying to jump the gun with an 8MB block limit so that we won't hit that limit for years, and at least won't keep having this debate every few years.
They are sick of the Oliver Twist script of asking the devs, "please sir, can I have some more".
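franky1's description of how mining groups assemble the next block is, at heart, a greedy selection by feerate. A toy sketch in Python (the sizes and fees are made-up illustrative numbers; real miners also account for ancestor packages when sorting):

```python
def build_block(mempool, max_block_bytes=1_000_000):
    # Greedy template building: take the highest-feerate transactions
    # that still fit under the block size cap.
    chosen, used = [], 0
    for tx in sorted(mempool, key=lambda t: t["fee"] / t["size"], reverse=True):
        if used + tx["size"] <= max_block_bytes:
            chosen.append(tx)
            used += tx["size"]
    return chosen

# Made-up transactions: sizes in bytes, fees in satoshis.
mempool = [
    {"size": 400, "fee": 20_000},       # 50 sat/byte
    {"size": 250, "fee": 5_000},        # 20 sat/byte
    {"size": 999_700, "fee": 999_700},  # 1 sat/byte, too big to fit with the rest
]
block = build_block(mempool)
assert [tx["fee"] for tx in block] == [20_000, 5_000]
```

At ~400 bytes per average transaction, the 1MB cap matches the ~2,500-tx buffer mentioned above (about 4 tx/s over a 600-second block interval); raising the cap scales that ceiling proportionally.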

FruitsBasket
Legendary

Activity: 1232
Merit: 1017


October 09, 2016, 03:58:53 PM
 #23

Quote from: franky1 on October 09, 2016, 03:47:05 PM (full explanation quoted in the post above)
Thanks for explaining it to me!
I think we should at least go to 2MB blocks. Yes, I've noticed that sometimes my transaction gets stuck due to 1MB blocks, when more transactions get created than that 1MB block can handle. 2MB blocks are a solution for the short term, but what happens if we eventually hit 8MB blocks and those are full? Do we go to 16MB blocks? Or do we have other options, like reducing the storage every transaction uses?

Carlton Banks
Legendary

Activity: 3430
Merit: 3074



October 09, 2016, 05:09:41 PM
 #24

2MB blocks are a solution for the short term, but what happens if we eventually hit 8MB blocks and those are full? Do we go to 16MB blocks? Or do we have other options, like reducing the storage every transaction uses?

That's another aspect of Segwit.

The nested output format (described above by achow101) will be joined by a native segwit output format at some point. The native segwit format offers a small space saving compared both to today's standard outputs and to the nested format (which incurs a small size increase compared to traditional outputs).
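The space saving comes from the witness discount defined in BIP141: witness bytes count a quarter as much toward block limits as other bytes. A rough sketch (the byte counts are illustrative ballpark figures for one-input, one-output spends, not exact serialization sizes):

```python
def vsize(base_bytes, witness_bytes):
    # BIP141: weight = 4 * non-witness bytes + 1 * witness bytes;
    # virtual size is weight / 4, rounded up.
    weight = 4 * base_bytes + witness_bytes
    return (weight + 3) // 4

# Ballpark one-input, one-output spend sizes (illustrative figures):
legacy_p2pkh = vsize(192, 0)    # signature and pubkey sit in scriptSig
native_p2wpkh = vsize(85, 107)  # signature and pubkey moved to the witness

assert native_p2wpkh < legacy_p2pkh
```

Since fees are charged per virtual byte, moving the signature data into the discounted witness makes a native segwit spend meaningfully cheaper than the legacy equivalent.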

In addition, Segwit is a prerequisite for Lightning channels, which will do more than any other improvement can to scale the transaction rate. 8MB will be a long way off once the protocol for Lightning is ready.  

mikewirth
Sr. Member

Activity: 532
Merit: 250


October 10, 2016, 07:11:28 AM
 #25


In addition, Segwit is a prerequisite for Lightning channels, which will do more than any other improvement can to scale the transaction rate. 8MB will be a long way off once the protocol for Lightning is ready.  

The protocol for Lightning is a scam, owned by scammers who took over bitcoin.  SegWit was crammed down our throats so they could get Lightning working, and it does nothing appreciable to actually improve Tx bandwidth.  Bitcoin is hijacked by Blockstream so they can own blockchain access via their bullshit 'Lightning'.  Everyone can see it.  The protest will become quite a bit louder the day they announce the fees.  But then it will be far too late.
pedrog
Legendary

Activity: 2786
Merit: 1031



October 10, 2016, 08:36:27 AM
 #26

Changing a few lines of code to remove a temporary limit (1MB) is a tiny change to return to the original bitcoin. 

SegWit is a ridiculous AltCoin.  Segwit is a piece of garbage that barely improves transaction bandwidth and comes with a very high price of having to recode a bunch of stuff for all users/wallets/etc. 

How in the world did anyone ever believe SegWit was a good thing? 

We need 8MB now.  Let's get back to the original bitcoin.

The bosses have already decided: they will implement Segregated Witness, even if it takes 10 years to do it.

Dassi
Sr. Member

Activity: 252
Merit: 250


October 10, 2016, 10:08:15 AM
 #27

Sounds like a massive gap in the market is opening up, then.

The question is: why don't any of the above complainers want to own +1000x their share of BTC in GavinCoin? Isn't it more logical to let Bitcoin fail, and instead own a larger amount of a superior asset?

Well, no one wants to leave bitcoin because it is the grandfather of crypto, and in different ways we've all grown attached to it.
HCLivess
Legendary

Activity: 2114
Merit: 1090


=== NODE IS OK! ==


October 10, 2016, 10:48:41 AM
 #28

Sounds like a massive gap in the market is opening up, then.

The question is: why don't any of the above complainers want to own +1000x their share of BTC in GavinCoin? Isn't it more logical to let Bitcoin fail, and instead own a larger amount of a superior asset?
Listen, dumbass: SegWit and Lightning ARE the alts!!!!  Bitcoin, the original Bitcoin, didn't have SegWit bullshit, off-chain bullshit, or even 1MB block limits.  

The real Bitcoin got hijacked by assholes who are trying to shove their crap on top of the original.  The original works fine, and the original anticipated 8MB (and greater) blocks.  


The fucking alt is Blockstream/SegWit/Lightning/Thermos/Maxwell

you got me triggered right there

Kprawn
Legendary

Activity: 1904
Merit: 1073


October 10, 2016, 07:35:59 PM
 #29

Gentlemen, you are having a serious problem answering a simple question.


If "a few lines of code" would solve the cryptocurrency scaling problem, how come no-one is coding that up into a super Bitcoin-killing altcoin? They'd be rich, wouldn't they?

If it becomes the Bitcoin-killing altcoin, everyone will have a stab at that, because we all know the tallest trees catch the most wind. The main difference between that and Bitcoin will be that the new altcoin will have a master, unlike Bitcoin, where the master fled the scene. An individual or a known group of people will be an easy target, and I do not know if anyone is ready to take on that challenge.

pereira4
Legendary

Activity: 1610
Merit: 1183


December 05, 2016, 04:09:11 PM
 #30

If we go to 8MB blocks, wouldn't the size of the blockchain .dat file increase by 400%?
That would mean over 200GB for downloading the blockchain, which is pretty absurd. Then more people will use online wallets, which are less secure.

200GB is nothing.  Super fucking tiny.  The total size of the chain isn't at all important.  You only download it once.  You can buy 10 terabyte for very cheap.  So, 200GB is laughably small.

The real issue is passing 8MB around to all the nodes every ten minutes.  Some effects occur there.  No big deal.  Internet is freaking fast and getting freaking faster.  Netflix's bandwidth load is far greater than bitcoin's with 8MB.  

If we want people to be able to run a node behind a 1200 baud modem, then 8MB is problematic.  If we abandon those having 1200 baud and less, then the only reason to keep 1MB is to drive need for Blockstream's bullshit solutions.  

8MB blocks are very lightweight for nearly all modern systems.  

How is it not important, even if you only have to download it once? It has to fit on your hard disk, so use your brain. 8MB blocks would mean the average computer user, who has a 1TB HDD, wouldn't be able to run a node. And since 8MB would quickly get filled again, people would demand more, because you've made them used to low on-chain fees, which is a mistake. The result is we end up with datacenters running nodes and no one running nodes at home; in other words we are fucked, stuck with a bitcoin run by corporations instead of people on their personal computers, as it should be to survive government control. The Blockstream conspiracy shit is so 2015; time to think for once.
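For the storage side of this argument, the back-of-the-envelope numbers are easy to check: at one block per ~10 minutes, each extra MB of block size cap adds roughly 50GB of potential chain growth per year.

```python
BLOCKS_PER_YEAR = 6 * 24 * 365  # ~one block every 10 minutes

def yearly_growth_gb(cap_mb):
    # Worst case: every block filled to the cap.
    return cap_mb * BLOCKS_PER_YEAR / 1024

for cap in (1, 2, 4, 8):
    print(f"{cap} MB cap: up to ~{yearly_growth_gb(cap):.0f} GB/year of chain growth")
```

So an 8MB cap means up to ~410GB of growth per year on top of the existing chain, before counting the relay bandwidth, which is multiplied by the number of peers each node uploads to.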