Bitcoin Forum
Author Topic: So who the hell is still supporting BU?  (Read 29827 times)
kiklo
Legendary
Activity: 1092, Merit: 1000
February 19, 2017, 03:40:18 AM  #481

The O(n^2) sigop attack cannot be mitigated with ElectrumX or by simply buying a faster Xeon server.

As Gavin said, we need to move to Schnorr sigs to get (sub)linear sig validation time scaling.

And AFAIK moving to Schnorr sigs at minimum requires implementing Core's segwit soft fork.

Informed Bitcoiners like Adam Back and the rest of Core plan to do segwit first, because it pays off technical debt and thus strengthens the foundation necessary to support increased block sizes later.


So you are saying their developer is not competent enough to find a solution.
I think if the developer of Electrum were actually worried about it, he would have mentioned it when asked point blank about the block size issue.



Electrum devs cannot change the fact that Bitcoin uses ECDSA signatures.  As currently implemented, the legacy signature-hash computation scales quadratically with transaction size.

Nobody can fix that until we have segwit, after which we may move to Schnorr sigs.


There is always more than one way to make something work; saying there is only one way shows a lack of imagination.

No offense to Lauda,
but there is always more than one way to skin a cat.  Wink

Until the Electrum devs actually come out and say they can't make it work, your assumptions are irrelevant.

 Cool
iCEBREAKER
Legendary
Activity: 2156, Merit: 1072
Crypto is the separation of Power and State.
February 19, 2017, 04:15:27 AM (Last edit: February 19, 2017, 12:04:05 PM)  #482

segwit will never be activated, just can't get past that furball you call a brain

You are new here so it's understandable you are not aware that I've been cheering for stalemate and gridlock since 2015 (when you were restricted to the Introduce Yourself noob quarantine thread).

'member this?

"Furioser and furioser!" said Alice.

Fear does funny things to people.

Wasn't your precious XT fork supposed to happen today?

Or was that yesterday?

Either way, for all the Sturm und Drang last year, the deadline turned out to be a titanic non-event.

Exactly as the small block militia told you it would be.

The block size is still 1MB, and those in favor of changing it cannot agree on when to raise it, by how much, or by what formula future increases should be governed.

You are still soaking in glorious gridlock despite all the sound and fury, and I am loving every second of your agitation.
  Smiley


'member this?

Love watching the Blockstream idiot hijackers gloat.

It's beautifully ironic.  Today they gloat at Mike's failure to hijack Bitcoin.

Tomorrow we laugh at all of them for their failed attempt to hijack Core.

www.BitcoinClassic.com #Winner.
* Classic isn't XT.  It has actual consensus, support, and is reasonable.  And let's remember this... as much as I think Mike Hearn is a traitorous ass, I totally respect him for being the first to stand up and throw a punch at these hijacking, whiny, lying, manipulating, censorship-wielding, Bitcoin-crippling losers at Blockstream/Core.

Oh, and don't count Mike out yet.  He is, in my guess, a huge threat to Bitcoin.  I predict that when the chaos around the Classic fork is going strong, the R3CEV/Hyperledger/bank team will strike with a fiat-coin announcement.

What makes you think Blockstream is going to pull a Hearn (i.e., write a self-indulgent Goodbye Cruel World + Bitcoin Obituary Medium post, rage quit, and rage dump) tomorrow?

All that you Gavinistas' six months of whining and threats have accomplished is providing the rest of us with amusement.

You haven't moved the needle towards Gavinblocks at all, not one iota.

We warned you the outcome of your contentious vanity fork and governance coup attempts would be gridlock, which effectively preserves the 1MB status quo.

In response, you guys amped up the drama, using ridiculous bullet words like censorship/dictatorship/hijack/crippling/strangling to goad the Reddit mob into lighting their torches and stamping about chanting "rabble rabble rabble!"

How's that working for you?

Are you starting to understand why you can't win this fight, or do I need to make a new #REKT meme?   Smiley



Monero
"The difference between bad and well-developed digital cash will determine
whether we have a dictatorship or a real democracy." 
David Chaum 1996
"Fungibility provides privacy as a side effect."  Adam Back 2014
Buy and sell XMR near you
P2P Exchange Network
Buy XMR with fiat
Is Dash a scam?
iCEBREAKER
Legendary
Activity: 2156, Merit: 1072
Crypto is the separation of Power and State.
February 19, 2017, 04:58:39 AM  #483

Electrum devs cannot change the fact that Bitcoin uses ECDSA signatures.  As currently implemented, the legacy signature-hash computation scales quadratically with transaction size.

Nobody can fix that until we have segwit, after which we may move to Schnorr sigs.


There is always more than one way to make something work; saying there is only one way shows a lack of imagination.

No offense to Lauda,
but there is always more than one way to skin a cat.  Wink

Until the Electrum devs actually come out and say they can't make it work, your assumptions are irrelevant.

There you go again, avoiding the concrete specifics of the O(n^2) attack and retreating into lazy hand-waving generalizations about "always" and "something" and useless slogans about cats.

It must really suck going through life encumbered by such sloppy, random thought processes.  You must greatly resent those of us with the ability to function at much higher levels of focused attention to detail.

There is no known way to change the current signature-hash implementation such that it avoids quadratic scaling.

That algorithm is not trivially parallelizable.  We can't get there from here.  If we want lightning-fast multi-threaded validation, that requires segwit+Schnorr.

Gavin already explained why.

Here, I'll repeat it one more time in the hope that this spoonful of nutritious information somehow manages to make its way into your mental metabolism.

Here comes the plane!  *Vrrrrooooom!*  Open wide!  Nummy nummy knowledge for sweet widdle baby kiklo!

The attack is caused by the way the signature hash is computed-- all n bytes of the transaction must be hashed for every signature operation.

Perhaps if you took the time to read his post and understand all of it, I wouldn't have to sit here spoon-feeding you premasticated facts and wiping most of it off your chin/bib/high chair when you just spit it out rather than successfully starting the digestion process.
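To make the point concrete, here is a toy model in Python (illustrative only; the sizes, input counts, and helper name are invented) of why that legacy signature-hash scheme is O(n^2): every signature operation re-hashes roughly the whole transaction, so doubling both the transaction size and the number of inputs roughly quadruples the hashing work.

Code:
import hashlib

def legacy_sighash_work(tx_bytes: bytes, num_inputs: int) -> int:
    """Toy model of pre-segwit signature hashing: each input's signature
    check hashes (roughly) the full serialized transaction again."""
    hashed_bytes = 0
    for _ in range(num_inputs):
        hashlib.sha256(hashlib.sha256(tx_bytes).digest()).digest()  # double-SHA256 per sigop
        hashed_bytes += len(tx_bytes)
    return hashed_bytes

# A transaction with twice the inputs and twice the bytes needs ~4x the hashing.
small = legacy_sighash_work(b"\x00" * 100_000, 400)
big = legacy_sighash_work(b"\x00" * 200_000, 800)
print(big / small)  # 4.0 -> quadratic growth in transaction size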


iCEBREAKER
Legendary
Activity: 2156, Merit: 1072
Crypto is the separation of Power and State.
February 19, 2017, 06:00:21 AM  #484

Well, as far as I understand, LN channels can somehow be shut down, via certain glitches or otherwise, and in that case, what would remain of the operations made in that channel?

And what validity would this journaling have with regard to on-chain state if there are differences at the end?

The problem with caching is not about a hard-drive crash: if the controller is stopped before the cache is actually written, the data is still lost even if the hard drive works fine.


To push the cache analogy and show a certain caveat: with SMP systems and CPU caches, there are cases where memory is shared with other chips via DMA or virtual paging, and in systems with high concurrency on the data, a CPU cache can become "out of date". Even with SSE2 there are certain things that help deal with this, but as far as I know, most operating systems disable caching on certain shared memory because of all the issues with caching, instruction reordering, etc., when having access to up-to-date data in a concurrent system matters more than fast access to potentially stale data.

If LN is to be seen as a cache system, it doesn't look like they are taking all the precautions needed for it to be really safe.

Caches are only easily made safe when all write access to the data goes through the same interface doing the caching, which is not the case with Bitcoin and LN.

With a hard drive it works because all access goes through the same controller doing the caching.

But anyway, as LN locks the bitcoins on the main chain, it's not even really a true cache system: the principle of a cache is to speed up multiple accesses to the same data, whereas here the bitcoins are locked, the channel has exclusive access to them, and so it isn't really to be seen as a true system of blockchain caching.

There are multiple implementations of the payment channel scheme.  Of course they have different trade-offs.

You remind me of a young GMAX proving Bitcoin was impossible, and I hope you are similarly happy when shown to be wrong about LN.

I thought you might be interested in this tweet; to me it seems there is an interesting congruence afoot.  Convergent morphology, perhaps... or simply Data Structures 101?   Cheesy


To paraphrase:
"What sucks about directly buying Frappuccinos with Bitcoin?"
"The biggest issue I think is random small tx are literally the work that kills Blockchain the most."

The message seems to be that without write caches we don't get to have nice things!  Tongue


In mechanical-engineering terms, write caches function as a shim that reduces friction and the resulting heat/damage.

https://en.wikipedia.org/wiki/Shim_(spacer)
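The write-cache idea can be sketched in a few lines of toy Python (purely illustrative; the names and numbers are made up, and this is not how LN actually works): many small balance updates are absorbed off the expensive medium and settled later as one consolidated commit.

Code:
class WriteCache:
    """Toy write-back cache: absorb many small updates, commit once."""
    def __init__(self, backing_store: dict):
        self.backing = backing_store       # the "expensive" medium (blockchain / NAND)
        self.dirty = {}                    # pending, not-yet-flushed updates

    def read(self, key):
        return self.dirty.get(key, self.backing[key])

    def write(self, key, value):
        self.dirty[key] = value            # cheap: no backing-store commit

    def flush(self):
        self.backing.update(self.dirty)    # one consolidated commit
        flushed, self.dirty = len(self.dirty), {}
        return flushed

chain = {"alice": 1000, "bob": 0}          # on-chain balances (in some unit)
cache = WriteCache(chain)
for _ in range(500):                       # 500 coffee-sized payments off-chain...
    cache.write("alice", cache.read("alice") - 1)
    cache.write("bob", cache.read("bob") + 1)
print(cache.flush(), chain)                # ...settle as 2 writes: {'alice': 500, 'bob': 500}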


kiklo
Legendary
Activity: 1092, Merit: 1000
February 19, 2017, 06:25:00 AM  #485

@iCEBREAKER

You know what is funny: you actually think you are relevant.

You're not, but it is funny you think so.   Cheesy

If a side product can't keep up with innovation, that is on the devs of the side product.

Electrum can update or get left behind; it is that simple.
You trying to say they are not able to get their code working is really their problem,
not BTC Core's and not BU's, only Electrum's.

There were a lot of side products that broke with every Microsoft OS release.
Vendors either update and adapt or die, their choice.

Electrum is no different.
Thinking you are going to hold back an entire network because one vendor cannot make their product work with it is beyond stupid.


 Cool
IadixDev
Full Member
Activity: 322, Merit: 151
They're tactical
February 19, 2017, 09:44:12 AM (Last edit: February 19, 2017, 11:42:42 AM)  #486

Well, as far as I understand, LN channels can somehow be shut down, via certain glitches or otherwise, and in that case, what would remain of the operations made in that channel?

And what validity would this journaling have with regard to on-chain state if there are differences at the end?

The problem with caching is not about a hard-drive crash: if the controller is stopped before the cache is actually written, the data is still lost even if the hard drive works fine.


To push the cache analogy and show a certain caveat: with SMP systems and CPU caches, there are cases where memory is shared with other chips via DMA or virtual paging, and in systems with high concurrency on the data, a CPU cache can become "out of date". Even with SSE2 there are certain things that help deal with this, but as far as I know, most operating systems disable caching on certain shared memory because of all the issues with caching, instruction reordering, etc., when having access to up-to-date data in a concurrent system matters more than fast access to potentially stale data.

If LN is to be seen as a cache system, it doesn't look like they are taking all the precautions needed for it to be really safe.

Caches are only easily made safe when all write access to the data goes through the same interface doing the caching, which is not the case with Bitcoin and LN.

With a hard drive it works because all access goes through the same controller doing the caching.

But anyway, as LN locks the bitcoins on the main chain, it's not even really a true cache system: the principle of a cache is to speed up multiple accesses to the same data, whereas here the bitcoins are locked, the channel has exclusive access to them, and so it isn't really to be seen as a true system of blockchain caching.

There are multiple implementations of the payment channel scheme.  Of course they have different trade-offs.

You remind me of a young GMAX proving Bitcoin was impossible, and I hope you are similarly happy when shown to be wrong about LN.

I thought you might be interested in this tweet; to me it seems there is an interesting congruence afoot.  Convergent morphology, perhaps... or simply Data Structures 101?   Cheesy


To paraphrase:
"What sucks about directly buying Frappuccinos with Bitcoin?"
"The biggest issue I think is random small tx are literally the work that kills Blockchain the most."

The message seems to be that without write caches we don't get to have nice things!  Tongue


In mechanical-engineering terms, write caches function as a shim that reduces friction and the resulting heat/damage.

https://en.wikipedia.org/wiki/Shim_(spacer)

Anyway, saying LN is a caching solution is like saying it would be normal for a browser to lock down a picture for the whole internet because it needs to use it locally.

LN is missing many things needed to be called a true cache system.


Memory management with SMP and the PCI bus is a very complex thing; architectures have evolved with more instructions and better instruction pipelining, and more functions are coming with C11 and OpenMP, but handling caches across SMP, the PCI bus, and the south bus is far from trivial.

The issues can be seen more clearly on the ARM architecture, because the CPU architecture is much simpler and it doesn't have built-in handling of these issues of caching and concurrent access with the south bus, the memory bridge, memory-space conversion, etc.


https://en.m.wikipedia.org/wiki/Conventional_PCI#PCI_bus_bridges

Posted writes
Generally, when a bus bridge sees a transaction on one bus that must be forwarded to the other, the original transaction must wait until the forwarded transaction completes before a result is ready. One notable exception occurs in the case of memory writes. Here, the bridge may record the write data internally (if it has room) and signal completion of the write before the forwarded write has completed. Or, indeed, before it has begun. Such "sent but not yet arrived" writes are referred to as "posted writes", by analogy with a postal mail message. Although they offer great opportunity for performance gains, the rules governing what is permissible are somewhat intricate.


Caching helps when it takes temporality into account, when multiple accesses are made to the same data; it can help skip some likely useless long writes, but it is still quite probabilistic.

LN would be a cache if it didn't lock the resources on the main chain, if it could detect with a good success ratio when the BTC are only going to be used locally and keep the modifications off chain in the "local cache" while they are most likely not to be used outside of the local channel, and if it only wrote them to the main chain when the state becomes likely to be shared outside of the local cache held by a limited number of participants. It should also always keep the local cache updated from the main chain when there is a modification in the on-chain state. And any time the state of the chain is accessed from outside the local cache, it should be written back to the main network as fast as possible, or the request cannot be processed before the state is fully synchronized. The efficiency of a cache system depends on how successful it is at guessing when the data will be used again in the local cache before a modification to it happens outside the cache; otherwise there is zero gain.






https://en.m.wikipedia.org/wiki/Temporal_database

https://en.m.wikipedia.org/wiki/Locality_of_reference

Locality is merely one type of predictable behavior that occurs in computer systems. Systems that exhibit strong locality of reference are great candidates for performance optimization through the use of techniques such as the caching, prefetching for memory and advanced branch predictors at the pipelining stage of processor core

Temporal locality
If at one point a particular memory location is referenced, then it is likely that the same location will be referenced again in the near future. There is a temporal proximity between the adjacent references to the same memory location. In this case it is common to make efforts to store a copy of the referenced data in special memory storage, which can be accessed faster. Temporal locality is a special case of spatial locality, namely when the prospective location is identical to the present location.


It's this kind of problem that is involved in efficient caching: a prospective approach to how likely the data is to change within a certain time frame, which allows faster cached access during that time frame.


With transactions this means you need to predict whether the state of the on-chain input could be accessed from outside within the time frame when it is being used in the local cache. If it is only going to be accessed in the local cache for a certain period of time, it's worth keeping it in the cache; if the data is shared with other processes and they need to read or modify it during that time frame, the cache is useless and the data needs to be updated from/to the main chain for each operation.
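That temporality argument can be illustrated with a toy sketch (Python, purely illustrative, not any actual LN or Bitcoin mechanism): a cached copy only pays off if it is re-read before the shared state changes; once an outside write lands, the cached copy is stale and has to be re-synchronized.

Code:
class MainChain:
    """Shared state with a version counter so readers can detect outside writes."""
    def __init__(self):
        self.state, self.version = {}, 0

    def write(self, key, value):
        self.state[key] = value
        self.version += 1

class LocalCache:
    """Cache entries are only trusted while the shared state hasn't moved on."""
    def __init__(self, chain: MainChain):
        self.chain, self.entries = chain, {}   # key -> (value, version seen)

    def read(self, key):
        value, seen = self.entries.get(key, (None, -1))
        if seen == self.chain.version:         # temporal locality: re-used before any outside change
            return value, "cache hit"
        value = self.chain.state[key]          # stale or missing: sync from shared state
        self.entries[key] = (value, self.chain.version)
        return value, "cache miss"

chain = MainChain()
chain.write("utxo", 50)
cache = LocalCache(chain)
print(cache.read("utxo"))   # miss: first access
print(cache.read("utxo"))   # hit: nothing changed outside
chain.write("utxo", 25)     # a concurrent writer touches the shared state
print(cache.read("utxo"))   # miss again: the cached copy was out of date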

iCEBREAKER
Legendary
Activity: 2156, Merit: 1072
Crypto is the separation of Power and State.
February 19, 2017, 12:57:51 PM  #487

Saying LN is a caching solution is like saying it would be normal for a browser to lock down a picture for the whole internet because it needs to use it locally.

[tl;dr ignored]

Payment channels only lock down as many Bitcoins as the participants see fit to lock down.  The rest of the 21M coins may keep shuffling around without restriction.

A static .gif may be duplicated across the edge of CDNs (caching proxies) as needed; unlike e-cash, there is no double-spending problem for cat memes.   Grin

If random small writes are literally the work that kills NAND the most, why do you think random small writes are good for blockchains?

Why not have a distributed layer of ad hoc write caches consolidating and optimizing blockchain commits, especially when the ROI is a TPS increase from ~12tps to basically infinity tps?   Huh

If you insist on Bitcoin competing with commercial banking (i.e., Visa/PayPal/ACH/SEPA) and absolutely must use it for Starbucks lattes, payment channels are the only way to get there from here.

Unlimite_ is vaporware; Core has working code ready to start laying the foundation for scaling Bitcoin to high-powered super-money.

If segwit is implemented and we still have insufficient tps capacity, the Big-blockers will have a much more believable, perhaps compelling, case for an increase to 2MB.

Blocking segwit and LN out of spite while implausibly moaning about how Bitcoin needs to be unlimited is the epitome of cynical hypocrisy.

I admire the cynicism, but abhor the hypocrisy.  Looking forward to the Unlimite_ #REKT thread...


IadixDev
Full Member
Activity: 322, Merit: 151
They're tactical
February 19, 2017, 01:31:23 PM (Last edit: February 19, 2017, 01:56:48 PM)  #488

Yeah, I think LN is a good idea overall and a reasonable trade-off that can have its advantages, but it has to be taken for what it is, too, and not presented as if it has no impact on how things are done versus regular Bitcoin transactions; it doesn't give the same security and scrutiny as the global blockchain with its proof of work.

And it doesn't have the mechanisms to make it as transparent and reliable as a cache.

It isn't even really supposed to act as a cache at all.

And even if it were, it's far from being as simple as "it's the same whether you use a cache or not", like the hard-drive example.

Using a cache in a concurrent system is full of "tl;dr ignored" problems. One sure thing is that it's more complicated than a three-tweet issue. And even so, the effective gain depends entirely on good management of data temporality; otherwise it's either useless or unsafe.

The effect is not as dramatic as with browser caches and internet data, because supposedly the people owning the keys to the BTC locked in LN already have more or less exclusive access to them, but if there were potentially several independent users using those same keys, it would make a difference.




manselr
Legendary
Activity: 868, Merit: 1004
February 19, 2017, 03:23:50 PM  #489

@iCEBREAKER

You know what is funny: you actually think you are relevant.

You're not, but it is funny you think so.   Cheesy

If a side product can't keep up with innovation, that is on the devs of the side product.

Electrum can update or get left behind; it is that simple.
You trying to say they are not able to get their code working is really their problem,
not BTC Core's and not BU's, only Electrum's.

There were a lot of side products that broke with every Microsoft OS release.
Vendors either update and adapt or die, their choice.

Electrum is no different.
Thinking you are going to hold back an entire network because one vendor cannot make their product work with it is beyond stupid.


 Cool

The only things irrelevant here are BUnlimistas and your nonsense.

BU has already been proven to be a recipe for disaster. Everyone actually relevant in Bitcoin circles, including Nick Szabo, who knows more about all of this than you ever will, is already saying we are wasting time by not activating segwit then maxing up with LN for VISA competition purposes.

You guys need to get wiped. You fell victim to the Roger propaganda machine and now you are contributing to stagnating Bitcoin development. Good job.
franky1
Legendary
Activity: 4214, Merit: 4475
February 19, 2017, 03:53:52 PM  #490

activating segwit then maxing up with LN for VISA competition purposes.

Segwit solves nothing, because people wanting to spam and run double-spend scams will just stick to native keys, meaning segwit is just a gesture/empty promise, not a 100% fix.

LN can be designed without segwit.
After all, it's just a 2-in 2-out tx, which is not a sigops problem and not a malleability issue.
LN functions using dual signing, so it's impossible for one person to malleate, because the second person needs to sign it, could easily check for malleation before signing, and can refuse to sign if someone malleates.
And the person malleating cannot double spend by making a second tx, because again it's a dual signature. So again, LN doesn't need segwit.


Nothing is stopping a well-coded LN from being made. We just don't want devs to think a commercial-hub LN service is the end goal of Bitcoin scaling; it should just be a voluntary side service, much like using Coinbase or BitGo.
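A toy sketch of the dual-signing argument (Python, with HMACs standing in for real signatures; this is not the actual LN or Bitcoin protocol): a 2-of-2 state update is only accepted when both parties have signed the exact same state, so one side cannot unilaterally alter it.

Code:
import hmac, hashlib, json

def sign(secret: bytes, state: dict) -> str:
    """Stand-in for a real signature: HMAC over the serialized channel state."""
    msg = json.dumps(state, sort_keys=True).encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def accept_update(state, sig_a, sig_b, key_a, key_b) -> bool:
    """A 2-of-2 update is valid only if BOTH parties signed this exact state."""
    return (hmac.compare_digest(sig_a, sign(key_a, state)) and
            hmac.compare_digest(sig_b, sign(key_b, state)))

alice_key, bob_key = b"alice-secret", b"bob-secret"
state = {"channel": "a<->b", "alice": 6, "bob": 4, "seq": 42}

sig_a = sign(alice_key, state)
sig_b = sign(bob_key, state)
print(accept_update(state, sig_a, sig_b, alice_key, bob_key))    # True

tampered = dict(state, alice=10, bob=0)                          # one side "malleates" the state
print(accept_update(tampered, sign(alice_key, tampered), sig_b,  # rejected: Bob never signed it
                    alice_key, bob_key))                         # False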

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
hv_
Legendary
Activity: 2506, Merit: 1055
Clean Code and Scale
February 19, 2017, 04:04:56 PM  #491

Too sad that, again, in the SW-supporting forums there is only fear of having an open discussion:

https://www.reddit.com/r/btc/comments/5ut05w/why_im_against_bu/ddxiool/

That way I see no chance of getting along with it (SW) in an open world, with an open blockchain on top of an open internet.


This behaviour simply makes me keep my distance from SW and activates critical thinking - luckily that still works.

 Wink

Carpe diem  -  understand the White Paper and mine honest.
Fix real world issues: Check out b-vote.com
The simple way is the genius way - Satoshi's Rules: humana veris _
manselr
Legendary
Activity: 868, Merit: 1004
February 19, 2017, 04:17:59 PM  #492

activating segwit then maxing up with LN for VISA competition purposes.

Segwit solves nothing, because people wanting to spam and run double-spend scams will just stick to native keys, meaning segwit is just a gesture/empty promise, not a 100% fix.

LN can be designed without segwit.
After all, it's just a 2-in 2-out tx, which is not a sigops problem and not a malleability issue.
LN functions using dual signing, so it's impossible for one person to malleate, because the second person needs to sign it, could easily check for malleation before signing, and can refuse to sign if someone malleates.
And the person malleating cannot double spend by making a second tx, because again it's a dual signature. So again, LN doesn't need segwit.


Nothing is stopping a well-coded LN from being made. We just don't want devs to think a commercial-hub LN service is the end goal of Bitcoin scaling; it should just be a voluntary side service, much like using Coinbase or BitGo.

Yes, segwit isn't needed for payment channels, but segwit adds a lot of cool stuff, including paving the way for Schnorr, which can make Bitcoin more private. The positives of segwit outweigh the cons; everyone knows this, it's 2017.

Too sad that, again, in the SW-supporting forums there is only fear of having an open discussion:

https://www.reddit.com/r/btc/comments/5ut05w/why_im_against_bu/ddxiool/

That way I see no chance of getting along with it (SW) in an open world, with an open blockchain on top of an open internet.


This behaviour simply makes me keep my distance from SW and activates critical thinking - luckily that still works.

 Wink


A nice post by that guy. It's funny to see jstolf, the resident PhD troll, struggling to keep up with the conversation.
franky1
Legendary
Activity: 4214, Merit: 4475
February 19, 2017, 04:48:46 PM  #493

Yes, segwit isn't needed for payment channels, but segwit adds a lot of cool stuff, including paving the way for Schnorr, which can make Bitcoin more private. The positives of segwit outweigh the cons; everyone knows this, it's 2017.

Seems you're reading a script.

Schnorr doesn't even add any benefit to LN either.
LN uses multisig,
so it still requires two signatures, not a single Schnorr sig.

So it's not going to benefit LN.

Yes, for other transactions where someone has many unspents of one address to spend, it can reduce the list of signatures, because one sig is proof of ownership of all the unspents to the same public address instead of signing each unspent. But it's not needed for LN.

Plus, people wanting to spam the network are not going to use Schnorr with their dust inputs; they will stick with native keys to spam their txs. So even Schnorr is not a 100% spam fix. Again, an empty gesture.

IadixDev
Full Member
Activity: 322, Merit: 151
They're tactical
February 19, 2017, 05:40:05 PM (Last edit: February 19, 2017, 06:22:28 PM)  #494

For me, the only real issue I would have with LN is the marking as "locked", which is misleading, since a lock should mean "NAK", nothing going on with those bitcoins, whereas they are more like locked away in a parallel system, and they should be marked as such on the main chain.

That would clarify things more, and maybe it could somewhat avoid the fractional-reserve issue, or make it easier to detect, since the bitcoin value would be marked as undetermined on the blockchain as long as it is being used on LN, and the up-to-date state could then be fetched from LN if needed (or remain undetermined), instead of marking the coins as locked and "NAK". Or make the lock explicitly a full exclusive lock instead of just an exclusive write lock.

Because as it stands, the BTC are still marked as locked, whereas in reality they are still being used. And then it starts to look more like banks that do stuff with your money without saying so, where the funds are supposed to be locked in a bank account but in fact are not locked at all. Shocked

classicsucks
Hero Member
Activity: 686, Merit: 504
February 20, 2017, 08:29:00 AM  #495

Riddle me this: built off a parent of the same block height, a miner is presented, at roughly the same time, with:
1) an aberrant block that takes an inordinate amount of time (e.g., 10 minutes) to verify but is otherwise valid;
2) a 'normal' valid block that does not take an inordinate amount of time to verify.
By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong, which is fine), this is an attack vector where "choosing between them" is the least of your problems, because bitcoind cannot process them both concurrently and then decide that (2) has finished long before it has finished processing (1). This means that if a (1) block hits even 1us before the (2), bitcoind will sit there processing it until it has finished before it can process (2). While this is purely a limitation of the software as it currently stands (it cannot process multiple blocks concurrently in a multithreaded fashion due to the coarse-grained locking in the software), it doesn't change the fact that there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine-grained enough to do this, or a new piece of software written from the ground up, both of which carry their own risks.

Couldn't this issue be worked around by pre-filtering the traffic coming into the bitcoin daemon? "Bad" transaction detection would need to be at the protocol level. The simplest fix would be rejecting transactions over a certain size. Of course that's imperfect, but the filtering could become more fine-grained and accurate over time. It might even be possible to do this with firewall rules?
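For illustration, here is a toy sketch (Python threads, unrelated to the real bitcoind code) of the "validate both candidates concurrently and build on whichever finishes first" behaviour that the quoted post says the current coarse-grained locking prevents.

Code:
import threading, time, queue

def validate(name: str, seconds: float, results: queue.Queue):
    """Stand-in for block validation; 'seconds' models the verification cost."""
    time.sleep(seconds)
    results.put(name)

results = queue.Queue()
threads = [
    threading.Thread(target=validate, args=("slow/aberrant block", 2.0, results)),
    threading.Thread(target=validate, args=("normal block", 0.1, results)),
]
for t in threads:
    t.start()

winner = results.get()            # whichever validation finishes first
print("build on:", winner)        # "normal block", even though the slow one arrived first
for t in threads:
    t.join()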
-ck
Legendary
Activity: 4102, Merit: 1632
Ruu \o/
February 20, 2017, 09:19:47 AM  #496

By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong, which is fine), this is an attack vector where "choosing between them" is the least of your problems, because bitcoind cannot process them both concurrently and then decide that (2) has finished long before it has finished processing (1). This means that if a (1) block hits even 1us before the (2), bitcoind will sit there processing it until it has finished before it can process (2). While this is purely a limitation of the software as it currently stands (it cannot process multiple blocks concurrently in a multithreaded fashion due to the coarse-grained locking in the software), it doesn't change the fact that there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine-grained enough to do this, or a new piece of software written from the ground up, both of which carry their own risks.

Couldn't this issue be worked around by pre-filtering the traffic coming into the bitcoin daemon? "Bad" transaction detection would need to be at the protocol level. The simplest fix would be rejecting transactions over a certain size. Of course that's imperfect, but the filtering could become more fine-grained and accurate over time. It might even be possible to do this with firewall rules?
This is a block and the transactions it contains we're talking about, not simply a broadcast transaction, and we don't want to start filtering possibly valid blocks...

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
kiklo
Legendary
Activity: 1092, Merit: 1000
February 20, 2017, 10:12:23 AM  #497

@iCEBREAKER

You know what is funny: you actually think you are relevant.

You're not, but it is funny you think so.   Cheesy

If a side product can't keep up with innovation, that is on the devs of the side product.

Electrum can update or get left behind; it is that simple.
You trying to say they are not able to get their code working is really their problem,
not BTC Core's and not BU's, only Electrum's.

There were a lot of side products that broke with every Microsoft OS release.
Vendors either update and adapt or die, their choice.

Electrum is no different.
Thinking you are going to hold back an entire network because one vendor cannot make their product work with it is beyond stupid.


 Cool

The only things irrelevant here are BUnlimistas and your nonsense.

BU has already been proven to be a recipe for disaster. Everyone actually relevant in Bitcoin circles, including Nick Szabo, who knows more about all of this than you ever will, is already saying we are wasting time by not activating segwit then maxing up with LN for VISA competition purposes.

You guys need to get wiped. You fell victim to the Roger propaganda machine and now you are contributing to stagnating Bitcoin development. Good job.


So you are saying you are Nick Szabo's girlfriend.  Cheesy

No offense to Szabo, but if he were all you claim, we would be talking about his attempt (bit gold) and not Bitcoin.

Hate to break it to you, but I came to my conclusions about segwit and LN all on my own;
that's what happens when you can think for yourself instead of being someone else's puppet.

Segwit will not be adopted on BTC or LTC,
the reason being the miners don't care what you or Szabo think either.
Cheesy

 Cool
franky1
Legendary
Activity: 4214, Merit: 4475
February 20, 2017, 10:41:41 AM (Last edit: February 20, 2017, 11:22:05 AM)  #498

Sigops:
the way Bitcoin is moving, most nodes will have pre-validated the transactions at relay time, and all they need is the block header data to know which transactions belong to which block. So sigops are less of an issue at block propagation, because most nodes have already done the validation.

If we stop relaying transactions, then all of a sudden nodes will start needing to request more tx data after block propagation, because some txs are not in a node's mempool. We should not be rejecting txs, because it will slam nodes later, causing block propagation delays.

However, we should think about changing something really simple:
the tx priority formula, to actually solve things like bloat/respend spam, whereby the more bloated (tx bytes) and the more frequent (tx age) a spend is, the more it costs.

The old priority formula solved nothing; it was just used to give richer txs more priority, while a small-value tx was left waiting weeks to gain priority.

So let's envision a new priority formula that actually has real benefit.

Imagine we decided it's acceptable that people should have a way to get priority if they have a lean tx and signal that they only want to spend funds once a day; if they want to spend more often, costs rise, and if they want a bloated tx, costs rise. That allows those who just pay their rent once a month or buy groceries every couple of days to be fine using on-chain Bitcoin, while the cost of trying to spam the network (every block) becomes expensive enough that they would be better off using LN (for things like faucet raiding every 5-10 minutes).

So let's think about a priority fee that's not about rich vs poor but about respend spam and bloat.

Let's imagine we actually use the tx age combined with CLTV to signal to the network that a user is willing to add some maturity time if their tx age is under a day: they signal they want it confirmed, but allow themselves to be locked out of spending for an average of 24 hours.

And the bloat of the tx relative to the block size has some impact too, rather than the old formula, which was more about the value of the tx.

Here is one example:
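A purely hypothetical sketch of the kind of formula described above (every constant and name is invented for illustration, not a concrete proposal): the multiplier grows with how much of a block the tx occupies and with how recently its inputs moved.

Code:
def priority_multiplier(tx_size: int, block_size: int, input_age_blocks: int) -> float:
    """Hypothetical fee multiplier: cheap for lean, infrequent spenders;
    expensive for bloated and/or rapid-respend transactions.
    All constants are illustrative, not a concrete proposal."""
    BLOCKS_PER_DAY = 144
    bloat = tx_size / block_size                                      # share of the block consumed
    respend = max(0, BLOCKS_PER_DAY - input_age_blocks) / BLOCKS_PER_DAY
    return (1 + 100 * bloat) * (1 + 10 * respend)

# lean tx, inputs older than a day: ~1x
print(round(priority_multiplier(250, 1_000_000, 200), 2))
# lean tx, but respending every block: ~11x
print(round(priority_multiplier(250, 1_000_000, 1), 2))
# bloated (1% of the block) AND respending every block: ~22x
print(round(priority_multiplier(10_000, 1_000_000, 1), 2))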


As you can see, it's not about tx value; it's about bloat and age.
This way:
those not wanting to spend more than once a day and who don't bloat the blocks get preferential treatment on-chain;
if you are willing to wait a day but you're taking up 1% of the block space, you pay more;
if you want to be a spammer spending every block, you pay the price;
and if you want to be a total ass-hat, both bloated and respending often, you pay the ultimate price.

Yes, it's not perfect, but at least let's think about using CODE to choose what's acceptable, rather than playing the bankers' economic-value game where the rich guys always win; that way we are no longer pushing third-world countries out of using Bitcoin's mainnet.

IadixDev
Full Member
Activity: 322, Merit: 151
They're tactical
February 20, 2017, 12:10:22 PM (Last edit: February 20, 2017, 12:51:14 PM)  #499

Quote
However, we should think about changing something really simple:
the tx priority formula, to actually solve things like bloat/respend spam, whereby the more bloated (tx bytes) and the more frequent (tx age) a spend is, the more it costs.

..

I was thinking about something along the same lines: having the memory pool used as some kind of optimization layer above the blockchain, to pre-process certain transactions and spare them from the mining pool.

But at the same time, I'm thinking memory pools already have their specific purpose for the miners, and I'm not sure it's that easy to introduce more complex logic into the mempool algorithm directly; it could make its real purpose and functioning more complex.

But maybe something like a completely different transaction pool could be conceived in front of the memory pool, where all the data temporality is taken into account, along with how often the inputs/outputs will change. It could do operations in the cache in an unconfirmed manner, to save mining fees and confirmation time on intermediate results, provided it is explicitly clear that the intermediate results are unconfirmed and the parties using this pool can accommodate temporarily unconfirmed txs, and it would only push txs into the actual mempool when they are not going to be used in the cache for a while, or when a new input from those txs enters the memory pool.

It could make sense if several nodes in a related commercial network shared such an alternative tx pool, when there is a high number of tx chains that can be easily verified because they all originate from the same trusted network. It could probably save a number of intermediate operations on the blockchain without causing too many security problems, with the drawback that those transactions would only be visible inside this network until they are finally pushed to the main memory pool for mining. And they would still be 'temporary' transactions as long as they are not pushed to the main memory pool and mined.

That would not replace true blockchain confirmation when it's needed, but in certain cases it could probably make sense, when data temporality can be predicted because lots of operations on this data happen within a trusted subset of the network.

Take, for example, an e-shop and a supplier who have enough mutual trust in each other: the customer would put orders into the transaction cache, but maybe the shop only collects them at the end of the day, so they don't have to be mined instantly, only stay on the network. And then maybe the supplier also won't collect the shop's orders before a certain time, so the txs from the shop to the supplier don't need to be fully confirmed before then. Or certain intermediate results could be skipped from the memory pool entirely.

Or make a memory pool that can fully take data temporality into account, with more marking, to better optimize when a transaction really needs to be mined, or when some operations can be done and intermediate results skipped from the memory pool.

But I'm not sure it's a good thing to do this directly in the main memory pool, because not everybody will necessarily agree; this behavior should be completely optional and explicitly requested for certain transactions, when it makes sense to optimize data temporality before mining.
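As a toy sketch of that idea (illustrative Python only, with invented names; not an actual Bitcoin mempool mechanism): a side pool holds intermediate transfers within a trusting group and only pushes the settled net result to the shared mempool.

Code:
class LocalTxPool:
    """Toy side pool: intermediate transfers stay local; only the net
    result is pushed to the real mempool for mining."""
    def __init__(self):
        self.balances = {}          # provisional, unconfirmed balance changes

    def transfer(self, src, dst, amount):
        self.balances[src] = self.balances.get(src, 0) - amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

    def settle(self, mempool: list):
        """Push one net transaction per participant to the shared mempool."""
        for who, delta in self.balances.items():
            if delta:
                mempool.append({"party": who, "net": delta})
        self.balances.clear()

mempool = []
pool = LocalTxPool()
pool.transfer("customer", "shop", 10)   # order in the morning
pool.transfer("customer", "shop", 15)   # another order later
pool.transfer("shop", "supplier", 20)   # shop restocks from its supplier
pool.settle(mempool)                    # only the day's net positions hit the mempool
print(mempool)  # [{'party': 'customer', 'net': -25}, {'party': 'shop', 'net': 5}, {'party': 'supplier', 'net': 20}]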


franky1
Legendary
Activity: 4214, Merit: 4475
February 20, 2017, 01:14:06 PM  #500

I was thinking about something along the same lines: having the memory pool used as some kind of optimization layer above the blockchain, to pre-process certain transactions and spare them from the mining pool.


You do know there is no central mempool.

Every node has its own mempool, and as long as a node is not deleting transactions for random non-rule reasons, each node keeps pretty much the same transactions as other nodes, including the nodes of pools. The only real variant is a node that has only just been set up and so has not been relayed the transactions other nodes have already seen.

Pools and nodes validate transactions as they are relayed to them, so when a block is made there is less reason to re-validate every transaction all over again (unless a node is new to the network and has not seen the relayed transactions, or, as has been hypothesised in this thread, nodes are rejecting transactions for reasons that break no rule).

