IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 01:24:18 PM
> I was thinking about something along the same lines: having the memory pool used as a kind of optimization layer above the blockchain, to pre-process certain transactions and spare them from the mining pool.

> Every node has its own mempool, and as long as a node is not deleting transactions for random non-rule reasons, each node keeps pretty much the same transactions as the other nodes, including the nodes of pools. The only real variance is when a node has only just been set up and has not yet been relayed the transactions other nodes have seen.

That's why another tx pool could be made that is shared only between trusting nodes, with its own pre-processing, so that useless or bad transactions are never even relayed to the mining nodes through the memory pool. But intermediate/temporary results could still be seen on those nodes, even if they don't necessarily need to be confirmed or mined before a certain time.
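A minimal sketch of such a rule-based pre-processing filter (hypothetical: `MIN_FEE_RATE` and the size cap below are illustrative policy values invented for the example, not actual Bitcoin relay rules):

```python
# Illustrative relay policy: drop "useless or bad" transactions before they
# ever reach mining nodes. The thresholds are made up for this sketch.
MIN_FEE_RATE = 1.0      # satoshis per byte (hypothetical floor)
MAX_TX_SIZE = 100_000   # bytes (hypothetical cap)

def should_relay(tx):
    """Return True if the transaction passes the shared relay rules.
    Nodes applying the same rules keep roughly the same mempool;
    ad-hoc deletions would desynchronize them."""
    if tx["size"] > MAX_TX_SIZE:
        return False
    return tx["fee"] / tx["size"] >= MIN_FEE_RATE

txs = [
    {"id": "a", "size": 250, "fee": 500},          # 2.0 sat/B -> kept
    {"id": "b", "size": 250, "fee": 100},          # 0.4 sat/B -> dropped
    {"id": "c", "size": 200_000, "fee": 400_000},  # oversized  -> dropped
]
mempool = [tx["id"] for tx in txs if should_relay(tx)]
print(mempool)  # ['a']
```

The point of the sketch is the determinism: because every node applies the same predicate, mempools converge without any coordination.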
franky1
Legendary
Offline
Activity: 4396
Merit: 4760
February 20, 2017, 01:29:00 PM
> I was thinking about something along the same lines: having the memory pool used as a kind of optimization layer above the blockchain, to pre-process certain transactions and spare them from the mining pool. [...]

> That's why another tx pool could be made that is shared only between trusting nodes, with its own pre-processing, so that useless or bad transactions are never even relayed to the mining nodes through the memory pool. [...]

Why? Why even have "trusted nodes"? All nodes should be on the same FULL validation playing field. If you start introducing different layers, you start introducing hierarchies, "kings", and power grabbing. All full nodes should do the same job, because that's the purpose of the network: they all follow the same rules and all treat transactions the same. I think you need to go spend some more time researching Bitcoin, and start learning how to keep nodes in consensus, not fragment it.
I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER. Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 01:48:22 PM (last edit: February 20, 2017, 02:10:41 PM by IadixDev)
> Why? Why even have "trusted nodes"? All nodes should be on the same FULL validation playing field. If you start introducing different layers, you start introducing hierarchies, "kings", and power grabbing. All full nodes should do the same job, because that's the purpose of the network: they all follow the same rules and all treat transactions the same.

It's more about having private agreements between nodes that are not necessarily based on the blockchain. I'm not saying this should be assumed as the norm, but when several nodes can reach an off-chain agreement on how the transaction flow is supposed to be timed on their side, that can still allow for optimization, provided the intermediate results don't need to be seen by the whole network. The alternative is a better definition of transaction flow that allows decentralized optimization where it can make a difference, but that bloats the whole network with things that could be kept private without creating a big security problem for the parties involved.

> I think you need to go spend some more time researching Bitcoin, and start learning how to keep nodes in consensus, not fragment it.

I guess I'm more like Anakin Skywalker: I care about objectives, results, and timing. Consensus is too slow. You need to understand the real nature of the force. The consensus has to agree on the end result, but it doesn't always need to know all the details :p
franky1
Legendary
Offline
Activity: 4396
Merit: 4760
February 20, 2017, 02:02:30 PM
> It's more about having private agreements between nodes that are not necessarily based on the blockchain. [...]

LN is a separate network from Bitcoin; the hint is in what the N stands for. Though LN grabs its tx data from Bitcoin's network, the "private agreement" lives on a separate network managed by separate nodes (currently programmed in Go, not C++). There is no point bastardising Bitcoin's network for LN when LN can remain its own separate off-chain network that only needs to grab Bitcoin data once every couple of weeks (the current mindset for a channel's life cycle). LN should remain just a voluntary second-layer service, outside of Bitcoin's main utility.
IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 02:40:23 PM
> LN is a separate network from Bitcoin; the hint is in what the N stands for. [...] LN should remain just a voluntary second-layer service, outside of Bitcoin's main utility.

That's exactly why: if the alternatives are being stuck with a slow, inefficient consensus, or going to a fully private network where all the transactions are shadowed, why not bastardize a bit the way nodes work, to deal with private processing of certain intermediate results? Because as far as I know, LN is not going to solve much more than this. It is still better in that it has a true confirmation mechanism, but since that mechanism is still not as safe as PoW, it still implies weakened security. And if it is going to be used as a private network of trusted nodes anyway, with no way to make sure it is completely in sync with the rest of the network, maybe it's no worse to make this explicit: add mechanisms for faster/cheaper transactions between trusted parties outside of the memory pool, push transactions to the memory pool only when it's more optimal, eventually rework the whole transaction flow so it is more optimal at the moment it has to be mined, and keep the intermediate operations private within the subnetwork.
BillyBobZorton
Legendary
Offline
Activity: 1204
Merit: 1028
February 20, 2017, 03:05:01 PM
We have two options:
1) Decentralized gold + second layer on top (Core + LN)
2) Centralized gold + second layer on top (BU + LN)
Pretty simple decision.
LFC_Bitcoin
Legendary
Offline
Activity: 3710
Merit: 10436
#1 VIP Crypto Casino
February 20, 2017, 03:13:40 PM
Upgraded to 0.13.2 finally yesterday. Fuck BU
jbreher
Legendary
Offline
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
February 20, 2017, 04:26:08 PM
> By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong, which is fine), this is an attack vector where "choosing between them" is the least of your problems, because bitcoind cannot process both blocks concurrently and then decide that (2) finished long before it finished processing (1). This means that if a (1) block hits even 1 µs before the (2), bitcoind will sit there processing it until it has finished before it can process (2). While this is purely a limitation of the software as it currently stands (it cannot process multiple blocks concurrently in a multithreaded fashion, due to the coarse-grained locking in the software), it doesn't change the fact that there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine-grained enough, or a new piece of software written from the ground up; both of which carry their own risks.

> Couldn't this issue be worked around by pre-filtering the traffic coming into the bitcoin daemon? "Bad" transaction detection would need to be at the protocol level. The simplest fix would be rejecting transactions over a certain size. Of course that's imperfect, but the filtering could become more fine-grained and accurate over time. It might even be possible to do this with firewall rules?

> This is a block and the transactions it contains we're talking about, not simply a broadcast transaction, and we don't want to start filtering possibly valid blocks...

No filtering required. When a potentially solved block arrives, spawn a thread to start validating it. When another potentially solved block arrives, spawn another thread to start validating it. The first one to validate is the one you build your candidate for the next round on top of.
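A toy sketch of that "first to validate wins" policy (hypothetical and simplified: real validation checks proof-of-work and scripts; here it is simulated with sleeps):

```python
import threading
import time

winner = {"block": None}
winner_lock = threading.Lock()

def validate(block_id, work_seconds):
    # Simulate validation cost; a real node would verify PoW, scripts, etc.
    time.sleep(work_seconds)
    with winner_lock:
        # The first finisher becomes the tip we build the next candidate on.
        if winner["block"] is None:
            winner["block"] = block_id

# A big, slow-to-validate block arrives first; a small competitor just after.
t1 = threading.Thread(target=validate, args=("big_block", 0.2))
t2 = threading.Thread(target=validate, args=("small_block", 0.05))
t1.start()
t2.start()
t1.join()
t2.join()
print(winner["block"])  # small_block: it finished validating first
```

Note the sketch assumes the two validations genuinely run concurrently, which is exactly the assumption the next reply disputes.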
Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.
I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
franky1
Legendary
Offline
Activity: 4396
Merit: 4760
February 20, 2017, 06:58:48 PM
We have two options:
1) Centralized gold + second layer on top (Core (upstream-filtered SegWit nodes) + LN)
2) Decentralized gold + second layer on top (any node that's not biased + LN)
Pretty simple decision.

Fixed that for you.
iCEBREAKER
Legendary
Offline
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
February 20, 2017, 07:09:56 PM
> By the way, as far as I can understand the bitcoind code as I read it [...] It would require a drastic rewrite of the existing code to make the locking fine-grained enough to do this, or a new piece of software written from the ground up; both of which carry their own risks.

> No filtering required. When a potentially solved block arrives, spawn a thread to start validating it. [...] The first one to validate is the one you build your candidate for the next round on top of.

Gee, it's all so simple. I wish BlockstreamCore would get off their censor ships and implement common-sense solutions like the one you have described. But since you're here to save Bitcoin, they don't need to. You're going to make them obsolete with this amazing, novel "no filtering required" approach. Now where is the link to your GitHub? I'd like to try out the fine-grained locking implemented in your drastic rewrite of the existing code and test it for new risks. Oh wait, what's that? You don't code, and are just spewing criticism from the development-free zone called your armchair? Hold on... you don't even understand how Bitcoin's tx-flooding and block-creation mechanisms interact. But here you are, presuming to tell Con Kolivas of all people how to make Bitcoin great again. "Pre-filter tx using firewall rules"? OMG, I CAN'T EVEN.
| "The difference between bad and well-developed digital cash will determine whether we have a dictatorship or a real democracy." David Chaum 1996 "Fungibility provides privacy as a side effect." Adam Back 2014
-ck
Legendary
Offline
Activity: 4284
Merit: 1645
Ruu \o/
February 20, 2017, 08:48:45 PM
> By the way, as far as I can understand the bitcoind code as I read it [...] both of which carry their own risks.

> No filtering required. When a potentially solved block arrives, spawn a thread to start validating it. When another potentially solved block arrives, spawn another thread to start validating it. The first one to validate is the one you build your candidate for the next round on top of.

You missed the point of my original response entirely, then: you CAN'T spawn a thread to validate it, because of the locking I described before. If you spawn a thread to validate the block, nothing else can do anything in the meantime anyway; you can't process transactions, you can't validate other blocks. This is, again, a limitation of the code rather than a protocol problem, but it would take a massive rewrite to get around it.
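The serialization being described can be illustrated with one coarse global lock (a stand-in for the kind of lock -ck is talking about; the sleeps simulate validation work, and the timings are chosen for the demonstration):

```python
import threading
import time

global_lock = threading.Lock()   # one coarse lock guarding all validation
elapsed = {}

def validate(block_id, work_seconds):
    start = time.monotonic()
    with global_lock:            # every validator serializes here
        time.sleep(work_seconds) # simulated validation work
    elapsed[block_id] = time.monotonic() - start

t1 = threading.Thread(target=validate, args=("block_a", 0.2))
t2 = threading.Thread(target=validate, args=("block_b", 0.05))
t1.start()
time.sleep(0.01)                 # block_b arrives just after block_a
t2.start()
t1.join()
t2.join()
# block_b's 0.05s of work took ~0.24s of wall time: it sat waiting on the
# lock, so spawning the extra thread bought nothing.
print(elapsed["block_b"] > 0.1)  # True
```

Under a coarse lock, the second thread exists but cannot make progress, which is why "just spawn a thread" doesn't help without a fine-grained locking rewrite.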
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 09:16:43 PM (last edit: February 20, 2017, 09:51:43 PM by IadixDev)
> If you start introducing different layers, you start introducing hierarchies, "kings", and power grabbing. All full nodes should do the same job, because that's the purpose of the network: they all follow the same rules and all treat transactions the same.

For me it's more a question of macro organisation than power grabbing. Take for example how a good caching proxy or intranet solution optimizes traffic and caching: because it has access to all the traffic of the subnetwork, it can optimize and cache a certain number of things more efficiently, because it has a "meta node" view of the whole traffic. It can know what the other nodes of the subnet are requesting or sending, and to whom, and that allows some macro management that is impossible to do at the single-node level.

Even though it enables some optimization of the local subnetwork through macro management, you wouldn't say it's "power grabbing" or even hierarchic, even if it sees the traffic at a level above the individual nodes. The role is still mostly passive, just macro management, and I believe this could already open the way to optimizing Bitcoin traffic inside subnetworks, so that only what really needs to be read or sent outside the local network is actually sent, i.e. mined. It's more this kind of idea than the introduction of a true layer or hierarchy.

I've done my share of fairly hard-core work with cores, interrupts and such, and if there is one constant in these things it's this: you want something to scale? You have to divide it into independent subsets that can be processed separately. That's the only golden rule for good scaling. Call this fragmenting or hierarchizing, but it's just about organising data into subgroups when it makes more sense to process them by group because they are highly related with each other, a bit like octrees are used in video games to limit the amount of computation a client has to do on what he can see or interact with. Those subnetworks don't have to be static or follow a 100% deterministic pattern, and can be adapted when it makes sense, for instance if certain nodes interact more with certain addresses than others.
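That "divide into independent subsets" rule can be sketched with a union-find grouping: transactions that share an address land in the same subset, and disjoint subsets could then be processed independently. (A hypothetical helper for illustration, not code from any actual client.)

```python
from collections import defaultdict

def partition_by_address(txs):
    """Group transactions into independent subsets: two transactions that
    share an address end up in the same group, so distinct groups touch
    disjoint address sets and can be processed in parallel."""
    parent = {}

    def find(x):
        # Union-find root lookup with path compression.
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for tx_id, addrs in txs:
        for a in addrs[1:]:
            union(addrs[0], a)

    groups = defaultdict(list)
    for tx_id, addrs in txs:
        groups[find(addrs[0])].append(tx_id)
    return sorted(sorted(g) for g in groups.values())

txs = [("t1", ["A", "B"]), ("t2", ["B", "C"]), ("t3", ["D"])]
print(partition_by_address(txs))  # [['t1', 't2'], ['t3']]
```

Here t1 and t2 are chained through address B, so they form one subset, while t3 is independent and could be handled by a different subnetwork without coordination.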
iCEBREAKER
Legendary
Offline
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
February 20, 2017, 09:27:04 PM
> You missed the point of my original response entirely, then: you CAN'T spawn a thread to validate it, because of the locking I described before. [...] This is, again, a limitation of the code rather than a protocol problem, but it would take a massive rewrite to get around it.

LMFAO. This is like watching a sincere but naive do-gooder try to convince brain-damaged aborigines they should stop huffing gas. There is missing the point, and then there is being incapable of understanding the point. jbreher and classicsucks fall into the latter category, because obviously neither has read SICP.
jbreher
Legendary
Offline
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
February 20, 2017, 09:35:28 PM
> This is, again, a limitation of the code rather than a protocol problem.

I see we agree, on this small point at any rate. I wonder who might have an incentive to code up an alternative implementation? Maybe somebody who already has millions of dollars tied up in capital equipment; someone whose continued profitability requires making every optimization the protocol allows...
IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 10:01:56 PM (last edit: February 20, 2017, 11:00:00 PM by IadixDev)
With the code I'm doing with purenode, I have good hope it will bring great simplification to the code base and allow experiments to be kick-started more easily. It is designed from the first asm instruction to be thread-smart, with object references, atomic instructions and all, and it should adapt to most chains, including BTC.
iCEBREAKER
Legendary
Offline
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
February 20, 2017, 10:08:48 PM
> I see we agree. On this small point, at any rate. I wonder who may have an incentive to code up an alternative implementation? Maybe somebody who already has millions of dollars tied up in capital equipment...

Bitcoin is moving to Schnorr sigs. We need them not only to stop O(n^2) attacks, but also to enable tree signatures, fungibility, etc. Why would anyone waste time trying to fix the obsolete Lamport scheme? Perhaps the Unlimite_ crowd will decide to dig in their heels against Schnorr? Oh wait, by blocking segwit they already have!
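The O(n^2) point refers to legacy (pre-SegWit) signature hashing: each of a transaction's n inputs re-hashes roughly the whole transaction, whose size itself grows with n. A back-of-envelope cost model (the byte counts are illustrative, not real serialization sizes):

```python
def hashed_bytes_legacy(n_inputs, bytes_per_input=180):
    """Approximate bytes fed to SHA256 when signing a legacy tx:
    each input hashes a serialization of the whole transaction,
    so total work grows as n^2."""
    tx_size = n_inputs * bytes_per_input
    return n_inputs * tx_size

def hashed_bytes_segwit(n_inputs, bytes_per_input=180):
    """Under BIP143-style hashing, shared parts of the transaction are
    hashed once and reused, so total work grows linearly in n."""
    return n_inputs * bytes_per_input

# Doubling the inputs quadruples legacy hashing work but only doubles segwit's.
print(hashed_bytes_legacy(200) // hashed_bytes_legacy(100))   # 4
print(hashed_bytes_segwit(200) // hashed_bytes_segwit(100))   # 2
```

This quadratic blow-up is why a single deliberately huge transaction can take minutes to validate on legacy rules, which is the attack being referenced.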
-ck
Legendary
Offline
Activity: 4284
Merit: 1645
Ruu \o/
February 20, 2017, 10:54:43 PM
> I wonder who may have an incentive to code up an alternative implementation? Maybe somebody who already has millions of dollars tied up in capital equipment; someone whose continued profitability requires making any optimization allowed by the protocol...

I'd be happy to write a new client, with an emphasis purely on performance and scalability, from scratch... if someone wanted to throw large sums of money at me to do so, and keep sending me more indefinitely to maintain and update it.
IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 11:00:43 PM
> I'd be happy to write a new client, with an emphasis purely on performance and scalability, from scratch...

That's a bit the idea of how I'm doing purenode: https://github.com/iadix/purenode There is already a multi-threaded SSE raytracer that works with it.
franky1
Legendary
Offline
Activity: 4396
Merit: 4760
February 20, 2017, 11:02:20 PM
> I'd be happy to write a new client, with an emphasis purely on performance and scalability, from scratch... if someone wanted to throw large sums of money at me to do so and keep sending me more indefinitely to maintain and update it.

Maintain it indefinitely? A node should function without total reliance on one man to control what nodes do or don't do. If you just stuck to simple rules, rather than half-baked rules that skip around the issue with half-promises, I'm sure you could get some VC funding. SegWit, for instance, is not a final fix; it's not even an initial fix: malicious users will simply avoid using SegWit keys and stick to native keys. Even Schnorr is not a solution, because again malicious people just won't use those keys, as they offer no benefit to those wanting to bloat and cause issues. However, finding real, beneficial solutions, such as a new 'priority' formula that actually has a real purpose and solves a real problem, benefits everyone; and knowing you're in the pool-dev arena, that's something you should concentrate on.
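For reference, the old coin-age "priority" formula from early Bitcoin clients, which this talk of a "new priority formula" is riffing on (values in the example are made up for illustration):

```python
def legacy_priority(inputs, tx_size):
    """Old Bitcoin 'priority' heuristic:
    sum(input_value_in_satoshis * input_confirmations) / tx_size_in_bytes.
    Old clients reserved block space for high-priority transactions
    even when they paid little or no fee."""
    return sum(value * age for value, age in inputs) / tx_size

# One input of 1 BTC (1e8 satoshis) with 144 confirmations (~1 day),
# in a 250-byte transaction:
print(legacy_priority([(100_000_000, 144)], 250))  # 57600000.0
```

That example is not accidental: 57,600,000 was the classic threshold above which old clients treated a transaction as "free-eligible" (1 BTC, one day old, 250 bytes).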
jbreher
Legendary
Offline
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
February 20, 2017, 11:03:32 PM
> I'd be happy to write a new client, with an emphasis purely on performance and scalability, from scratch... if someone wanted to throw large sums of money at me to do so, and keep sending me more indefinitely to maintain and update it.

Well, that seems a perfectly reasonable stance. Just as a question... do you have an estimate of the percentage of solved blocks that are attributable to your software?