Author Topic: So who the hell is still supporting BU?  (Read 29827 times)
-ck
February 20, 2017, 11:48:09 PM  #521

Just as a question ... do you have an estimate of the % of solved blocks that are attributable to your SW?
On the client software side with cgminer it would be over 95% with what hardware is currently out there and knowing what is embedded on most of it. At the pool server end with ckpool it's probably less than 5% at present.

sgbett
February 21, 2017, 12:29:00 AM  #522


'member this?

"Furioser and furioser!" said Alice.

Fear does funny things to people.

Wasn't your precious XT fork supposed to happen today?

Or was that yesterday?

Either way, for all the sturm und drang last year the deadline turned out to be a titanic non-event.

Exactly as the small block militia told you it would be.

The block size is still 1MB, and those in favor of changing it cannot agree on when to raise it, nor by how much, nor by what formula future increases should be governed.

You are still soaking in glorious gridlock despite all the sound and fury, and I am loving every second of your agitation.
  Smiley


I 'member.

Keep at it old boy, you're hilarious.

"A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution" - Satoshi Nakamoto
*my posts are not investment advice*
Viscount
February 21, 2017, 01:13:55 AM  #523

Roger Ver, notorious foe of Bitcoin, is again at the center of the war, trying to sue and shut down one of the Bitcoin exchanges.  Sad Beware of the scoundrel and his Unlimited hard fork if you're a good bitcoiner...
kiklo
February 21, 2017, 02:58:20 AM  #524
Last edit: February 21, 2017, 04:04:24 AM by kiklo

This is, again, a limitation of the code rather than a protocol problem

I see we agree. On this small point, at any rate.

I wonder who may have an incentive to code up an alternative implementation? Maybe somebody who already has millions of dollars tied up in capital equipment - someone whose continued profitability requires making any optimization allowed by the protocol...

Bitcoin is moving to Schnorr sigs.  We need them not only to stop O(n^2) attacks, but also to enable tree signatures and fungibility, etc.

Why would anyone waste time trying to fix the obsolete Lamport scheme?

Perhaps the Unlimite_ crowd will decide to dig in their heels against Schnorr?

Oh wait, by blocking segwit they already have!  Grin

Point of Clarification

The miners are the ones blocking segwit activation, you know, the ones you depend on to make new blocks, include transactions, and keep BTC secure.
Those are the guys blocking segwit, the ones your entire BTC network depends on.
Maybe they know more than you do, or maybe they just don't care what you think.   Cheesy

 Cool

FYI:
Combining BU, 8MB, and not voting, over 70% are refusing to install segwit.
In a normal race, that is a landslide.
What is strange is that the pro-segwitters are too stupid to grasp that NO ONE WANTS SEGWIT or LN.  Tongue

Larger block sizes and keeping transactions ONCHAIN are what everyone wants.
IadixDev
February 21, 2017, 03:22:11 AM  #525

Everyone wants more tps, but it doesn't look like segwit/LN is going to become the solution in the near future :p

classicsucks
February 21, 2017, 05:08:58 AM  #526

By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong which is fine), this is an attack vector that "choosing between them" is the least of your problems because it cannot process them both concurrently and then decide that (2) has finished long before it has finished processing (1). This means that if a (1) block hits even 1us before the (2), bitcoind will sit there processing it until it has finished before it can process (2). While this is purely a limitation of the software as it currently stands that it cannot process multiple blocks concurrently in a multithreaded fashion due to the coarse grained locking in the software, it doesn't change the fact there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine grained enough to do this, or a new piece of software written from the ground up; both of which carry their own risks.

Couldn't this issue be worked around by pre-filtering the traffic coming into the bitcoin daemon? "Bad" transaction detection would need to be at the protocol level. The simplest fix would be rejecting transactions over a certain size. Of course that's imperfect, but the filtering could become more fine-grained and accurate over time. It might even be possible to do this with firewall rules?
This is a block and the transactions it contains we're talking about, not a simply broadcast transaction, and we don't want to start filtering possibly valid blocks...

Gotcha. How about if spammy looking transactions were filtered by each node when they are first broadcast? I suppose ultimately we'd be working toward prioritizing transactions based on their merits... I also see that it's risky to start judging transactions and dropping them, but perhaps there should be an obligation for the transaction creator to be legitimate and respectful of the limited blockspace? 

I understand that multi-threading could open up a can of worms...  It still seems like raising the blocksize would be quite easy, and is the logical way forward. (inb4 "OH MY GOD HARD FORK....") BTW, the Synthetic Fork seems like a decent proposal from the Chinese.
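
One simple, purely illustrative reading of the "filter spammy looking transactions when they are first broadcast" idea is a relay-time size cap. A minimal sketch, assuming hypothetical names (RawTx, ShouldRelayTx) rather than anything actually in bitcoind:

Code:
// Hypothetical relay-policy sketch: drop transactions above a size cap at
// broadcast time, before they reach the mempool. Threshold and names are
// illustrative only, not Bitcoin Core's actual standardness rules.
#include <cstddef>

struct RawTx {
    std::size_t size_bytes;   // serialized size of the transaction
};

constexpr std::size_t MAX_RELAY_TX_BYTES = 100000;  // example cap (~100 kB)

bool ShouldRelayTx(const RawTx& tx) {
    return tx.size_bytes <= MAX_RELAY_TX_BYTES;      // reject oversized "spam"
}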
franky1
February 21, 2017, 05:45:51 AM  #527


you don't need to filter out transactions. you just need a better 'priority' formula that works with the 2 main issues:
1. bloat of the tx vs the blockspace allowed
2. age of the coins vs how fast they want to respend them

rather than the old one, which was just based on "the richer you are, the more priority you're rewarded" (which became useless) - a sketch follows below
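
A minimal sketch of the kind of two-factor priority formula described here, weighting tx bloat against available blockspace and coin age against respend speed; the weights, names, and structure are illustrative assumptions, not the formula bitcoind actually used:

Code:
// Illustrative two-factor priority score: older coins score higher, and a
// transaction that eats a larger share of the available blockspace is
// penalised. Purely a sketch; not bitcoind's historical priority formula.
#include <cstddef>
#include <cstdint>
#include <vector>

struct TxInputInfo {
    int64_t value_sats;     // value of the output being spent
    int     confirmations;  // how long the coin has sat unspent
};

double Priority(const std::vector<TxInputInfo>& inputs,
                std::size_t tx_bytes,
                std::size_t block_space_bytes) {
    // Issue 1: bloat of the tx vs the blockspace allowed.
    const double bloat = static_cast<double>(tx_bytes) /
                         static_cast<double>(block_space_bytes);

    // Issue 2: age of the coins vs how fast they are being respent.
    double age_score = 0.0;
    for (const auto& in : inputs)
        age_score += static_cast<double>(in.value_sats) * in.confirmations;

    return age_score * (1.0 - bloat);  // heavier txs lose priority
}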

poloniexwhale
February 21, 2017, 05:47:43 AM  #528

Why does Bitcoin Unlimited require a hard fork? Is it possible to implement without a hard fork? What are the advantages of using BU instead?

-ck
February 21, 2017, 07:41:17 AM  #529

Gotcha. How about if spammy looking transactions were filtered by each node when they are first broadcast? I suppose ultimately we'd be working toward prioritizing transactions based on their merits... I also see that it's risky to start judging transactions and dropping them, but perhaps there should be an obligation for the transaction creator to be legitimate and respectful of the limited blockspace? 
That would have zero effect. The transactions are INCLUDED IN THE BLOCK so if anything, ignoring the transactions in the first place means the node has to request them from the other node that sent it the block.

Now if you're talking about local block generation for mining, bitcoind already does extensive ordering of transactions, giving spammy transactions ultra-low priority and likely not even storing them in the mempool, so there's little room to move there. Filtering can always be improved, but the problem isn't local block generation, it's block propagation of an intensely slow-to-validate block.

IadixDev
February 21, 2017, 11:44:32 AM  #530
Last edit: February 21, 2017, 12:28:22 PM by IadixDev


No filtering required. When a potentially solved block arrives, spawn a thread to start validating it. When another potentially solved block arrives, spawn another thread to start validating it. First one to validate is the one you build your candidate for the next round atop.

That could improve things a bit if there are only a few blocks like this. But even if you can work out the issues of shared access & deps, then what if there are 100 such blocks? Or 1000?

It can maybe improve resilience if there are only a few of them, but it's just pushing the problem one thread away, and there isn't an infinity of processing power on a computer, even with threads.

Hence the real solution is more about how to avoid wasting processing on those blocks at all, rather than attempting to process them as long as no better block has been validated, instead of just spreading the wasted time over several threads to buffer the excessive processing time a bit.
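
For reference, a minimal sketch of the "spawn a thread per candidate block, build on whichever validates first" idea quoted above; Block, ValidateBlock, and the race logic are placeholders under my own assumptions, not bitcoind code:

Code:
// Race candidate blocks in parallel: each gets its own validation thread,
// and the first one to finish validating successfully becomes the winner.
// Everything here is a placeholder sketch, not actual node code.
#include <atomic>
#include <thread>
#include <vector>

struct Block { /* header + transactions */ };

// Stub standing in for full (slow) block validation.
bool ValidateBlock(const Block&) { return true; }

int FirstBlockToValidate(const std::vector<Block>& candidates) {
    std::atomic<int> winner{-1};
    std::vector<std::thread> workers;
    workers.reserve(candidates.size());

    for (int i = 0; i < static_cast<int>(candidates.size()); ++i) {
        workers.emplace_back([&candidates, &winner, i] {
            if (ValidateBlock(candidates[i])) {
                int expected = -1;
                winner.compare_exchange_strong(expected, i);  // first finisher wins
            }
        });
    }
    for (auto& t : workers) t.join();
    return winner.load();  // index of the first-validated block, or -1
}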

franky1
February 21, 2017, 02:18:33 PM  #531


100 blocks?
1000 blocks?

um, there are only 20-ish pools, and the chance of them all having a potentially solved block within the same few seconds is small.
at most, devs and pools have to worry about a couple of potential blocks competing to be added at the same blockheight, so don't throw fake doomsdays into the narrative.

IadixDev
February 21, 2017, 02:37:34 PM  #532


If there are only a couple of them possibly being processed at the same time, that can help.

But still, the goal is to avoid wasting processing time on them, not to waste more of it on multiple threads.

And can't this issue still come up on a single tx? It doesn't necessarily arrive only in solved blocks, does it?

And in that case there can still be many degenerate txs heavy in sigops, and not coming only from the pools.


franky1
February 21, 2017, 02:56:14 PM  #533


i'm not seeing the big devastating problem you're saying devs should avoid. these days most computers have multiple cores (including the Raspberry Pi), so if a full node implementation has a '64-bit' release, you would automatically think the devs have already programmed it to shift processing across the different cores rather than queuing everything up on a single core.
so the problem and solution should have been solved by just having a 64-bit version of a bitcoin full node
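
As a side note, spreading validation across cores is something the node software has to do explicitly in its code; a 64-bit build by itself does not parallelise anything. A rough, purely illustrative sketch of per-core dispatch, where ProcessShard and the work split are hypothetical:

Code:
// Sketch of dispatching validation work across however many cores the
// machine reports. Illustrative only; ProcessShard is a stand-in for real
// per-thread work such as verifying a slice of a block's signatures.
#include <thread>
#include <vector>

void ProcessShard(unsigned shard, unsigned shard_count) {
    (void)shard; (void)shard_count;  // stub
}

void RunAcrossCores() {
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;  // the call may report 0 if unknown

    std::vector<std::thread> workers;
    workers.reserve(cores);
    for (unsigned i = 0; i < cores; ++i)
        workers.emplace_back(ProcessShard, i, cores);
    for (auto& t : workers) t.join();
}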

IadixDev
February 21, 2017, 03:03:40 PM  #534

The thing is, I'm also thinking about the general blockchain problem, because I'm trying to generalize blockchain stuff with a generic engine, and trying to generalize the issue without depending on a particular blockchain topography or other blockchain/network-specific thing.

Maybe for the current bitcoin configuration that can work, but if you take PoS coins for example, where blocks can be produced more easily, or other configurations, it can be more of a problem.

Even if there are multiple cores, there is still not an infinity of cores. If there is a finite amount of such blocks to be processed, a finite amount below the number of cores, that can be ok.

Otherwise it's just pushing the problem away by using more of a finite resource.

franky1
February 21, 2017, 03:32:38 PM  #535

but there is not an infinite amount of problems for you to worry about with finite resources.

it's like the others who don't want bitcoin to naturally grow to 2MB-4MB blocks because they fear gigabytes by midnight (tonight). the reality is that the REAL WORLD results won't be gigabytes by midnight.

it's like not letting a toddler learn to walk because you worry that one day, when the kid grows up to be an adult, he will have an accident crossing the road.

you're saying prevent optimisation and cause issues out of worry about something that's not a problem today and won't be a problem.
you seem to be creating a doomsday that has no basis in the reality of actually occurring.

IadixDev
February 21, 2017, 03:40:11 PM  #536
Last edit: February 21, 2017, 04:11:40 PM by IadixDev

Optimisation can be seen in different ways.

Here the concern is resource availability. The goal is to keep as many resources available as possible, to keep processing capacity at the maximum possible, which includes not wasting it on useless computation.

If the solution to maximise capacity in the case of a uselessly long block is to monopolise resources to check it, it's not really a big win. Ok, that removes some of the processing from the main thread, but it doesn't mean it's free either, or that those resources could not be used for something more useful.

With the nature of blockchains there will always be some time wasted invalidating some long stuff, but if the goal is to optimise this, you need to find a way to avoid that processing which is shorter than actually validating it. Otherwise it's just pushing the problem away. If there were always some idle cores with nothing better to do than check useless blocks, I'd say ok, it's a win; otherwise it's not solving the problem.

With the threading approach you can at least ceil the waste to the time of the longest block, which can maybe be a win in some cases.
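
One hedged sketch of the "avoid wasting the processing" side of this: a validation loop that checks a shared flag between transactions and abandons a losing candidate as soon as some other block has already validated, so the waste can drop below even the longest-block ceiling. Names and structure are assumptions, not real node code:

Code:
// Cooperative early-exit during block validation: if another candidate has
// already won, stop burning CPU on this one. Purely illustrative.
#include <atomic>
#include <vector>

struct Tx { /* transaction data */ };

// Stub standing in for the expensive per-transaction checks.
bool CheckTx(const Tx&) { return true; }

bool ValidateUnlessBeaten(const std::vector<Tx>& block_txs,
                          const std::atomic<bool>& another_block_won) {
    for (const auto& tx : block_txs) {
        if (another_block_won.load())
            return false;   // a better block already validated; give up early
        if (!CheckTx(tx))
            return false;   // genuinely invalid
    }
    return true;
}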

BillyBobZorton
February 21, 2017, 04:03:15 PM  #537

Raise the blocksize = automatically spammed with crap and blocks are full again = idiots wanting another blocksize increase. They will never stop crying.

I'm up for a conservative 2MB increase AFTER segwit is implemented, as recommended by 100% of the experts. No segwit = no blocksize increase; blame the Chinese miners.
franky1
February 21, 2017, 04:26:04 PM  #538

lol, oh look, someone used core's 2017 script buzzword "conservative" (becoming too obvious now)

seriously, the script readers need to spend more time reading code and running bitcoin scenarios rather than script-reading "recommended by.."

segwit is not a fix. malicious people won't use segwit keys. they will stick to native keys. segwit solves nothing.
the real solution is to let nodes flag what they can cope with, then the pools see the node consensus and make their own pool consensus about what they will produce that the nodes can accept.

as for bloat/spam:
a more effective method is to have a real 'priority' formula that actually solves the problem.
a more effective method is to have tx sigop limits to solve the issues (see the sketch below).
a more effective method is to not be kissing devs' asses, and to think about CODE solutions.
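
A minimal sketch of what a per-transaction sigop limit from the list above could look like at acceptance time; the cap value and names are made up for illustration, not Bitcoin Core's actual policy constants:

Code:
// Sketch of a per-transaction sigop cap: count the signature operations a
// transaction would require and refuse it before it enters the mempool if
// the total exceeds the limit. Cap and names are illustrative only.
#include <vector>

struct TxScript { unsigned sigops; };  // sigops contributed by one script
struct Tx { std::vector<TxScript> scripts; };

constexpr unsigned EXAMPLE_MAX_TX_SIGOPS = 4000;

bool WithinSigopLimit(const Tx& tx) {
    unsigned total = 0;
    for (const auto& s : tx.scripts) total += s.sigops;
    return total <= EXAMPLE_MAX_TX_SIGOPS;
}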

IadixDev
February 21, 2017, 04:39:36 PM  #539
Last edit: February 21, 2017, 04:57:16 PM by IadixDev

Maybe an indicator of the total estimated processing time for a block could be added to the block header, with the effective processing time limited to that value. If it isn't processed within the indicated time, bye.


Or the number of sigops in the block could be advertised more explicitly from the start, with the number of sigops processed limited to that; if there are more sigops than advertised, bye. Mining nodes are already supposed to know this figure, if a way can be found that doesn't use too much extra bandwidth.
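
A rough sketch of that second suggestion: the block advertises its sigop count up front, and validation aborts the moment the running total exceeds the claim. The advertised_sigops field is hypothetical; nothing like it exists in the actual block header:

Code:
// Early-abort validation against an advertised sigop budget. The
// 'advertised_sigops' field is a hypothetical extension, not part of the
// real Bitcoin block format; everything here is illustrative.
#include <vector>

struct BlockTx { unsigned sigops; /* plus the usual tx data */ };

struct AnnotatedBlock {
    unsigned advertised_sigops;   // hypothetical up-front declaration
    std::vector<BlockTx> txs;
};

bool ValidateAgainstAdvertisedBudget(const AnnotatedBlock& block) {
    unsigned executed = 0;
    for (const auto& tx : block.txs) {
        executed += tx.sigops;
        if (executed > block.advertised_sigops)
            return false;   // block understated its cost: stop immediately
        // ... the full script/signature checks for tx would run here ...
    }
    return true;
}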

iCEBREAKER
February 21, 2017, 04:54:58 PM  #540

Maybe we could activate segwit, implement Schnorr sigs, stop worrying about O(n^2) attacks, and enjoy the other benefits like Lightning, tree multisignature, fungibility, etc.

/common sense

