Bitcoin Forum
  Show Posts
41  Bitcoin / Pools / Re: Blocks are full. Are pool owners sleeping ? on: March 30, 2016, 08:25:17 PM
Because Bitcoin has a 1 MB limit and a hard fork would be needed to change it, along with all that this entails.
Did you even understand the meaning of this statement?

Why are you still not mining with a client that supports 2 MB blocks?
Because Classic includes SPV mining, which is bad for Bitcoin.

Did they already merge Gavin's pull request?

While we're at it, if you don't mind, I would like to know your opinion about this post by Sergio:

https://bitslog.wordpress.com/2016/01/08/spv-mining-is-the-solution-not-the-problem/

since Gavin's proposal implements what Sergio describes as SPV mining.
42  Bitcoin / Bitcoin Discussion / Re: ToominCoin aka "Bitcoin_Classic" #R3KT on: March 30, 2016, 08:04:44 PM
The developers only matter when they say what you want to hear?
I've yet to see a single person who completely understands Segwit and is against it.


If you are among the group of people who completely understand SegWit, could you please explain to me why a 75% discount is applied to signatures (witness data) when computing the block size limit? (serious question)
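
For reference, a minimal sketch (my own illustration, not from the thread) of how that discount is usually accounted for: in the later BIP141 formulation, non-witness bytes count 4x and witness bytes 1x against a 4,000,000-unit weight limit, which is the same thing as counting each witness byte as 0.25 bytes against the old 1 MB limit.

Code:
def tx_weight(base_size: int, witness_size: int) -> int:
    # BIP141-style weight: non-witness bytes effectively count 4x,
    # witness bytes 1x -- i.e. the 75% discount on witness data.
    total_size = base_size + witness_size
    return 3 * base_size + total_size

MAX_BLOCK_WEIGHT = 4_000_000  # the old 1,000,000-byte limit times 4

# Hypothetical tx: 120 non-witness bytes plus 107 witness bytes "costs"
# much less than its full 227 bytes.
w = tx_weight(120, 107)
print(w, "weight units =", w / 4, "virtual bytes")  # 587 wu = 146.75 vbytes

Why the discount is 75% rather than some other number is exactly the question being asked here.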
43  Bitcoin / Bitcoin Discussion / Re: ToominCoin aka "Bitcoin_Classic" #R3KT on: March 29, 2016, 03:19:10 PM
Quote from: Carlton Banks
Given today's technology, 2MB would be a bad idea tomorrow.

Miners that are using the relay network (RLN) could support 2 MB blocks without even noticing. For normal nodes, -blocksonly could do the trick. According to gmaxwell this would save 88% of bandwidth.

Now, if you're worried about nodes not relaying txs, just don't use -blocksonly and convince the Core devs to merge Xthin into Bitcoin Core. You could save a lot of bandwidth while propagating new blocks (roughly 10 times less bandwidth required).

Well, that solves the problem for miners, but what about the users? Remember, I said "tomorrow". Thin blocks/IBLT are not available on the network tomorrow. But in principle, I agree, it's just that "tomorrow" part.



-blocksonly is available today in Bitcoin Core (see: #6993 b632145 Add -blocksonly option (Patrick Strateman)).

Even Xthin is available today, though not in Bitcoin Core.
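
For anyone who wants to try it, a minimal example of turning the option on (my addition; only the -blocksonly name itself comes from the post above):

Code:
# on the command line
bitcoind -blocksonly=1

# or, equivalently, in bitcoin.conf
blocksonly=1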
44  Bitcoin / Bitcoin Discussion / Re: ToominCoin aka "Bitcoin_Classic" #R3KT on: March 29, 2016, 01:16:49 PM
No matter how many times you bump the block size limit you will always run into this issue. This is because increasing the block size limit is not a solution, it is a band-aid.

Meh, I disagree with the band-aid analogy.

In 100 years, when internet speeds are 10,000x faster, 1 GB blocks will be viable.

Given today's technology, 1GB blocks would be a bad idea tomorrow. Given today's technology, 2MB would be a bad idea tomorrow. Given next year's tech and SegWit active on the network for several months, 2MB is probably not too bad. Let's hope the internet itself doesn't take any backward steps between now and then, I guess


(emphasis mine).

Miners that are using the relay network (RLN) could support 2 MB blocks without even noticing. For normal nodes, -blocksonly could do the trick. According to gmaxwell this would save 88% of bandwidth.

Now, if you're worried about nodes not relaying txs, just don't use -blocksonly and convince the Core devs to merge Xthin into Bitcoin Core. You could save a lot of bandwidth while propagating new blocks (roughly 10 times less bandwidth required).

edit: fix grammar
45  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: March 17, 2016, 05:12:19 PM
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid (see the toy sketch after this list).

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.
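
To illustrate the first point above, a toy sketch (mine, and deliberately NOT the real Bitcoin serialization format; all field names are made up): the id commits to everything except the scriptSig fields, so a third party mutating a signature encoding can no longer change the id.

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def toy_txid(tx: dict) -> str:
    # Hash everything except the scriptSig/signature fields.
    parts = [tx["version"].to_bytes(4, "little")]
    for vin in tx["inputs"]:
        parts.append(bytes.fromhex(vin["prev_txid"])[::-1])    # outpoint txid
        parts.append(vin["prev_index"].to_bytes(4, "little"))  # outpoint index
        # vin["script_sig"] is deliberately skipped
    for vout in tx["outputs"]:
        parts.append(vout["value"].to_bytes(8, "little"))
        parts.append(bytes.fromhex(vout["script_pubkey"]))
    parts.append(tx["locktime"].to_bytes(4, "little"))
    return dsha256(b"".join(parts))[::-1].hex()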

Quote
Its size sets a hard lower bound on the amount of resources needed to run a node. The fact that the size limit doesn't reflect the true cost has been a long-term concern, and it's one of the biggest issues raised with respect to blocksize limits.

Biggest issue of this week, perhaps?  

Surely you know that the non-mining relay nodes invalidate the few security guarantees that the protocol can offer. Simple clients should not connect to them, but to miners (or relay nodes that are known to be run by miners). It makes no sense to twist the protocol inside out in order to meet CONJECTURAL needs of those nodes.

The only cost that really matters is the marginal cost for a miner to add another transaction to his candidate block.  That is the cost that the transaction fees have to cover.  The magnitude of that cost is one of the great mysteries of bitcoin, extensively discussed but never estimated. But it seems to be very small (at least for competent miners) and is probably dependent only on the total size of the transaction.  But anyway the developers have no business worrying about that cost: the fees are payment for the miners, it should be the miners who decide how much to charge, and for what.

According to Adam Back, the SegWit discount applied to signature data will fix an incentive bug in Bitcoin, see:

https://www.reddit.com/r/btc/comments/4aka3f/over_3000_classic_nodes/d11atxc
46  Bitcoin / Bitcoin Discussion / Re: Mempool is now up to 25.5 MB with 22,200 transactions waiting. on: March 01, 2016, 07:19:56 AM
Bitcoin’s ‘New Normal’ Is Slow and Frustrating

No, it's a fair game... now with 0.12.0 we can purge the mempool of 0-fee transactions automatically during heavy (and low-fee) periods.

https://bitcoin.org/en/release/v0.12.0

Quote
Memory pool limiting

Previous versions of Bitcoin Core had their mempool limited by checking a transaction's fees against the node's minimum relay fee. There was no upper bound on the size of the mempool and attackers could send a large number of transactions paying just slightly more than the default minimum relay fee to crash nodes with relatively low RAM. A temporary workaround for previous versions of Bitcoin Core was to raise the default minimum relay fee.

Bitcoin Core 0.12 will have a strict maximum size on the mempool. The default value is 300 MB and can be configured with the -maxmempool parameter. Whenever a transaction would cause the mempool to exceed its maximum size, the transaction that (along with in-mempool descendants) has the lowest total feerate (as a package) will be evicted and the node’s effective minimum relay feerate will be increased to match this feerate plus the initial minimum relay feerate. The initial minimum relay feerate is set to 1000 satoshis per kB.

Bitcoin Core 0.12 also introduces new default policy limits on the length and size of unconfirmed transaction chains that are allowed in the mempool (generally limiting the length of unconfirmed chains to 25 transactions, with a total size of 101 KB). These limits can be overridden using command line arguments; see the extended help (--help -help-debug) for more information.

Cheating doesn't work anymore... when the Bitcoin network is busy.


Are you for real?

Being able to drop customers is technically doable, and it was even before 0.12; the tools at your disposal are just more and more effective.

That said, the real problem is that you're turning down customers not because you can't handle them but just because you decided that your product deserves higher prices.

You decided to do it without advertising the price increase in advance.

You decided to do it even though your competitors have better products to offer, both in absolute terms and in relative terms once you discount your recent price rise.

Sure, Bitcoin still has the network effect on its side, but not for long.
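
Going back to the 0.12 mempool limiting quoted above, a minimal config sketch (mine); -maxmempool is the only knob named explicitly in the release notes, and the unconfirmed-chain limits are documented in the extended help:

Code:
# bitcoin.conf (Bitcoin Core 0.12)
maxmempool=300        # mempool cap in MB; 300 is the stated default

# the unconfirmed-chain limits are listed by the extended help:
#   bitcoind -help -help-debug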

47  Bitcoin / Bitcoin Discussion / Re: ToominCoin aka "Bitcoin_Classic" #R3KT on: February 28, 2016, 06:14:37 PM
core can do the same.
If these features really are pointless and not noteworthy then just ignore it
It feels like an awful waste to dismiss all code from anyone else simply because they're not your friend.
No. That's not what I'm saying. All I'm saying is that acting this way is disrespectful towards the people who did most of the work. If the code is good and useful somebody will present it to Core. You've quoted an earlier version of my post, I re-wrote it several times.

Maybe Core should just change their licence if they don't like this kind of dynamic.

That's how open source works: as long as a forking project respects the forked project's license, I see no harm in advertising a new feature developed independently by the former.

About your last point: getting your code into Bitcoin, especially a new feature, is extremely tiring and close to impossible if you do not belong to the group of devs who usually contribute to Core.

Just an anecdote: Peter Tschipper, the dev behind BU's Xthin, proposed a few months ago to include in Core a datastream compression scheme for blocks and txs that could have saved 25% in terms of block space and improved network latency by 30%.

He did all the right things: he started by posting a general sketch of his idea to the bitcoin-dev ML, refined the design based on feedback, tested it in a reproducible manner, and upon request provided a BIP draft including a working implementation.

After a never-ending process, in the end the feature and its quite significant gains weren't merged, nor was a BIP number assigned.

Guess what: in the end Peter gave up and contributed his work to another, more receptive project.

It's my understanding that various compression schemes have been looked into and rejected; most likely his code didn't break any new ground or, worse, opened up new DoS vectors. Failing that, if it required a hard fork then it would definitely be something for the back burner.

No need to guess the reasons why it wasn't merged; it's all in the bitcoin-dev ML archives, open to anyone who wants to check. Nevertheless, I can give you a few hints: 25% is not enough, Corallo's relay network is better, etc. etc.

Actually, I don't even remember if they explicitly NACKed it or just avoided giving a final response.
48  Bitcoin / Bitcoin Discussion / Re: ToominCoin aka "Bitcoin_Classic" #R3KT on: February 28, 2016, 02:45:20 PM
core can do the same.
If these features really are pointless and not noteworthy then just ignore it
It feels like an awful waste to dismiss all code from anyone else simply because they're not your friend.
No. That's not what I'm saying. All I'm saying is that acting this way is disrespectful towards the people who did most of the work. If the code is good and useful somebody will present it to Core. You've quoted an earlier version of my post, I re-wrote it several times.

Maybe Core should just change their licence if they don't like this kind of dynamic.

That's how open source works: as long as a forking project respects the forked project's license, I see no harm in advertising a new feature developed independently by the former.

About your last point: getting your code into Bitcoin, especially a new feature, is extremely tiring and close to impossible if you do not belong to the group of devs who usually contribute to Core.

Just an anecdote: Peter Tschipper, the dev behind BU's Xthin, proposed a few months ago to include in Core a datastream compression scheme for blocks and txs that could have saved 25% in terms of block space and improved network latency by 30%.

He did all the right things: he started by posting a general sketch of his idea to the bitcoin-dev ML, refined the design based on feedback, tested it in a reproducible manner, and upon request provided a BIP draft including a working implementation.

After a never-ending process, in the end the feature and its quite significant gains weren't merged, nor was a BIP number assigned.

Guess what: in the end Peter gave up and contributed his work to another, more receptive project.
49  Bitcoin / Development & Technical Discussion / Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay on: February 26, 2016, 08:51:32 AM
How much less bandwidth does blocksonly use in practice?  I recently measured this using two techniques: Once by instrumenting a node to measure bandwidth used for blocks vs all other traffic, and again by repeatedly running in both modes for a day and monitoring the hosts total network usage; both modes gave effectively the same result.

How much is the savings?  Blocksonly reduced the node's bandwidth usage by 88%.

Do you care to share the raw data? I would like to independently verify your claim.

edit: the patch introducing the code changes used to measure the different types of bandwidth would be even better.
50  Bitcoin / Bitcoin Discussion / f2pool not supporting roundtable was Re: 「魚池」BTC:270 Phash/s - LTC:500 Ghash/s - New Server in U.S. stratum-us.f2pool.com on: February 24, 2016, 02:49:44 PM

I'm not sure continuing to edit a document that was supposed to remain unchanged from the beginning sends the right message. What's to stop someone from editing it further in the future?

The signers' preservation of their reputation. If code is not delivered and a well-intentioned effort is not made, then their reputation will be permanently besmirched. Saving the statement to the blockchain doesn't really change anything, as anything with enough interest posted on the web is permanently recorded by caching servers regardless.

Serious question: does anybody know the reason why Adam Back's title was changed in the first place (from president of Blockstream to individual)?
51  Bitcoin / Development & Technical Discussion / Re: Wondering out loud: Which should Chinese miners support - Core, Classic or another? on: January 29, 2016, 09:04:37 AM
Also-- witness Bitcoin Classic arguing that it's proper to put the 21m cap up to a popular vote. I think that is reprehensible. A simple majority shouldn't just be able to vote to undermine the property rights of a minority, even if a strongly fair global voting mechanism were possible.

 

Even if you don't buy my argument that the risk is real; the argument that Bitcoin could easily have its rules changed is FUD that our competition would ruthlessly exploit. After all, this is an earnest concern held by many of the longest term and most experienced among us... it would be an easy sell to someone looking for "the catch".


gmaxwell, are you trying to win the "best out-of-context quote of the year" competition?

52  Bitcoin / Bitcoin Discussion / Re: Analysis and list of top big blocks shills (XT #REKT ignorers) on: January 21, 2016, 05:25:20 PM
Uh, no I won't get lost.  I've just as much right to express myself as you do.

You're welcome to make predictions though.  I think they would be
more interesting if you explain why Chinese miners wouldn't support
bigger blocks.

Have you missed the news from today? They have stated that they are staying with Core.

Is this the news you're referring to:

https://twitter.com/AaronvanW/status/690120783281156097

?

If this is the case:

https://www.reddit.com/r/btc/comments/41zk79/chinese_pools_withdraw_their_support_for_classic/cz6etuv

Quote from: /u/KoKansei
So I just read the entire essay and the thread and it seems like this is at most just a personal opinion of the HaoBTC COO, who seems to be fairly biased and uninformed on several issues. The account posting the essay admitted in the comments when pressed that "the essay just expresses my personal views and is not an official statement from HaoBTC."
Something seems fishy here, but whatever the case these guys don't seem very professional.
Personally I would want official confirmation from HaoBTC on their website or a corroborating account from /u/jgarzik before I took this seriously.
53  Bitcoin / Bitcoin Discussion / Re: The Lightning Network Reality Check on: January 20, 2016, 10:46:01 PM
2 MB blocks are dangerous at the moment because a transaction could be created that would take too long to validate (10 minutes or more).

https://github.com/bitcoin/bitcoin/commit/97e5b55c6fabf5deb57be13bdd8f8b9c90d21570
54  Alternate cryptocurrencies / Pools (Altcoins) / Re: 「魚池」BTC:180 Phash/s - LTC:550 Ghash/s - New Server in U.S. stratum-us.f2pool.com on: January 16, 2016, 07:11:12 PM
SW, at its theoretical maximum, will force you to transmit 4MB worth of data for only a 1.75MB maximum gain in txs and associated fees. How does that help you vs a simple blocksize increase to 4MB worth of pure txs and fees?

This is a complete lie and misfabrication.

macbook-air please, if you are wang chun, do consult with the Core devs.

Segwit is the most responsible way to end this dead lock for now and will provide for ample time and headroom to optimize the propagation problems so that a 2MB hard fork may go through with absolute network consensus down the road.

There is still clear dissent amongst users about a contentious hard fork and while miners may agree it would create a bad precedent for you to force this on the community. 

You could find it reassuring that cypherdoc's numbers are supported even by the official "Capacity Increases FAQ", see:

https://bitcoin.org/en/bitcoin-core/capacity-increases-faq#segwit-size

For the sake of clarity:

Quote from: Capacity Increases FAQ
According to some calculations performed by Anthony Towns, a block filled with standard single-signature P2PKH transactions would be about 1.6MB and a block filled with 2-of-2 multisignature transactions would be about 2.0MB.

To that, add that you could have a 4 MB virtual block size only if a block is completely filled with 3-of-3 multisig txs.

Based on such actual data and the average block tx composition, SegWit will give a scaling factor of ~1.75x once the soft fork is adopted by 100% of the network.

This is a possible scenario:

- SegWit deployed in April/May 2016
- soft fork triggers in June/July 2016
- 50% adoption after one year

If all the above hold, that means you will have a ~1.35 MB virtual max block size by June/July 2017.
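
A back-of-the-envelope sketch (mine) of where a figure like ~1.35 MB comes from, assuming capacity scales linearly with the share of transactions that actually use SegWit and a ~1.75x factor for upgraded transactions:

Code:
def effective_capacity(adoption, segwit_factor=1.75):
    # Non-upgraded txs still count 1:1 against the limit; only the
    # SegWit share benefits from the witness discount.
    return (1.0 - adoption) * 1.0 + adoption * segwit_factor

print(effective_capacity(0.50))  # ~1.375 MB-equivalent, roughly the ~1.35 above
print(effective_capacity(0.25))  # ~1.19 MB-equivalent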



 
55  Economy / Speculation / Re: Segwit soft fork is probably the best possibility for investors right now on: December 23, 2015, 10:40:28 AM
Pieter Wuille on segregated witness:

https://m.youtube.com/watch?v=fst1IK_mrng&time_continue=2179

Skip the first 35 minutes, it's blank.  Why they didn't edit out the dead time I have no idea.
Wuille's presentation on segwit is well worth watching.
Segwit is really an ingenious solution to inherent inefficiencies in Bitcoin, and has multiple benefits.  Extra room in the blocks is just the icing on the cake.

FWIW, the same presentation at the San Francisco Bitcoin meetup:

https://youtu.be/NOYNZB5BCHM

That said, SegWit is definitely ingenious; nevertheless, two critical points:

- the proposed development timeframe is too short (look at the issues reported by Peter Todd here: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html)

- in terms of block size increase, it's equal to a slow ramp-up to a new 1.3/1.5 MB cap over 12-18 months.
56  Economy / Speculation / Re: Segwit soft fork is probably the best possibility for investors right now on: December 23, 2015, 08:44:12 AM
And there's this...

http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html

Complexity is tough to get right, and it's the unknown unknowns that get you. If this is being sold on its speedy deployment, I beg to differ. 

Sorry, can't resist:

Some ideas are easy to explain but hard to execute. Other ideas are easy to execute but hard to explain. Segregated witness (segwit) seems to be the latter.
57  Bitcoin / Bitcoin Discussion / Re: Bitcoin XT - Officially #REKT (also goes for BIP101 fraud) on: December 23, 2015, 08:27:34 AM
I think we all need to take a few minutes and read Peter Todd's latest email (on the bitcoin-dev mailing list):

"Segregated witnesses and validationless mining"
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html
58  Economy / Speculation / Re: Segwit soft fork is probably the best possibility for investors right now on: December 23, 2015, 08:23:06 AM
Just for the sake of completeness:

The current proposal for soft fork segregated witness (segwit) counts each byte in a witness as 0.25 bytes towards the maximum block size limit, meaning the maximum size of a block is just under 4MB.

However, blocks are not expected to consist entirely of witness data and each non-witness byte is counted as 1.00 bytes towards the maximum block size limit, so blocks near 4MB in size would be unlikely.

According to some calculations performed by Anthony Towns, a block filled with standard single-signature P2PKH transactions would be about 1.6MB and a block filled with 2-of-2 multisignature transactions would be about 2.0MB.
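
A rough sanity check (mine) of the ~1.6 MB single-signature figure; the byte counts below are assumed typical sizes, not numbers taken from the FAQ:

Code:
TX_TOTAL_BYTES = 226   # typical 1-input, 2-output single-sig tx (assumed)
WITNESS_BYTES  = 107   # signature + pubkey that would move into the witness (assumed)
BASE_BYTES     = TX_TOTAL_BYTES - WITNESS_BYTES

# witness bytes count 0.25 towards the limit, non-witness bytes count 1.00
counted = BASE_BYTES + 0.25 * WITNESS_BYTES
print(round(TX_TOTAL_BYTES / counted, 2))  # ~1.55x, the same ballpark as ~1.6 MB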
59  Economy / Speculation / Re: Segwit soft fork is probably the best possibility for investors right now on: December 22, 2015, 10:45:02 PM
No need to do it anymore; apparently they found a way to have 4 MB without increasing it to 4 MB.

It's called segregated witness; still, I want to know more about the possible weaknesses.

SegWit won't give 4 MB, or rather, it gives you such a gain only if all the txs included in the block are 3-of-3 multisig.

On average (depending on tx types) you'll get a virtual size equivalent to 1.6-2.0 x 1 MB.



Yeah, 4 is the maximum; I didn't know that the average was only 2, but it's still enough for a small edge, without breaking anything else.

At this point they could have just increased it to two and that's it...

pwuille (sipa) said that the typical gain would be 1.75x.

That said, SegWit will be deployed through a soft fork; that means you won't get 1.75x right from activation.

More realistically, 6 months after activation you could expect an adoption rate between 25% and 50%, hence the real capacity would be something like 1.3x.

Well, 1.3 is very underwhelming; I'm not so sure anymore if it will make any difference in case adoption takes off for whatever reason in the future, before another change (a hard fork presumably) is needed.

Add to that, we have to take into account that 50% adoption 6 months after activation is a very optimistic scenario. This means 1.1375 in the case of a more realistic estimate.
60  Economy / Speculation / Re: Segwit soft fork is probably the best possibility for investors right now on: December 22, 2015, 05:44:22 PM
No need to do it anymore; apparently they found a way to have 4 MB without increasing it to 4 MB.

It's called segregated witness; still, I want to know more about the possible weaknesses.

SegWit won't give 4 MB, or rather, it gives you such a gain only if all the txs included in the block are 3-of-3 multisig.

On average (depending on tx types) you'll get a virtual size equivalent to 1.6-2.0 x 1 MB.



Yeah, 4 is the maximum; I didn't know that the average was only 2, but it's still enough for a small edge, without breaking anything else.

At this point they could have just increased it to two and that's it...

pwuille (sipa) said that the typical gain would be 1.75x.

That said, SegWit will be deployed through a soft fork; that means you won't get 1.75x right from activation.

More realistically, 6 months after activation you could expect an adoption rate between 25% and 50%, hence the real capacity would be something like 1.3x.