Exbuhe27 (OP)
Newbie
Offline
Activity: 27
Merit: 2
|
|
January 15, 2014, 03:43:34 PM Last edit: January 16, 2014, 02:23:58 PM by Exbuhe27 |
|
*** If you're just now seeing this thread, I apologize for the walls of text. These are my ideas for improving the mining scheme of Bitcoin. Every post I put up, it changes pretty drastically, so if you want to read through the whole evolution of the idea, start here. If you just want to read the idea as it is now, read my last (currently 3rd of mine) post in the thread. ***

Here's the basic idea: right now it's tough for small miners not to join a pool, simply because they don't have the hashing power and will never dream of finding a block. If we had a more granular way of awarding block rewards built in, they wouldn't need to pool. So instead of making them mine in pools where they solve easier problems, let them solve easier problems directly off the current blockchain. Here's how it goes in my head at the moment:

1) A major miner finds a block on the main chain.
2) Minor miners start mining an easier problem (with a different hashing algo, so you have to pick either the easier or the harder problem, but not both) while the major miners search for the next big block.
3) A sub-block is found (tuned to be faster than the main blocks with a sub-difficulty) and the sub-chain from the previous main block is started. All sub-blocks for a super-block have to be on the same chain (so all minor miners start working on the same sub-chain, and it's easier for a super-block miner to sweep the sub-chain into the next super-block). The heaviest summed-difficulty sub-chain is what will be included by the major miners in the next main block.
4) After a sub-block is found, you could have ANOTHER level of granularity added, with an even easier problem (still another different algo) built off the first sub-block. Obviously this idea continues as far as is needed, until the smallest miner can get payouts without joining a large pool.
5) Once a block is found on the super-chain, the sub-chain from the previous super-block (and all sub-sub-blocks off that one) is swept into that block.
6) Only when blocks are found on the main chain are payouts calculated and handed out.

Now we have a limited tree structure going on (limited so that it's easier to sweep in sub-chains) that can go to any depth. The summed difficulty of the entire tree is increased, making it necessary to have > 51% of the hashing power in every algorithm in order to fork the main chain, or to have much greater than 51% of the hashing power of just the main chain. Payouts go to the main block that was found and to all the sub-blocks built off the previous main blocks.

Some issues that came up:

1) How do you give main-miners incentive to include sub-chains when that means they have to split the payout? Simple: if they don't, another miner who will include all the sub-chains can find another main block even after the first miner - the added summed difficulty of the second miner's main block will beat out the difficulty of the original miner's main block.
2) How do you verify that blocks you're creating on sub-chains are valid? Well, they're built on an existing block, not the about-to-be-created block. It's up to the sub-miners to broadcast the sub-chains/sub-blocks enough for the super-miners to hear about them and include them in the super-block. It's in the best interest of the super-miners to include the sub-chains (or at least we can tune it to be so by adjusting the relative difficulties and payouts for mining a sub/super block). This seems to be the biggest leap to me, but I think it can be tuned so it's true. User Jorge7777 on Reddit kept mentioning that we have these middle blocks that can't be fully verified as included in the chain - making only one sub-chain per super-block makes it easier to sweep them in, and here we're making the same tradeoff that Bitcoin inherently makes with transactions, just with sub-blocks: we're trading "instantly verified but potentially invalid" for "verified a bit in the future but agreed upon to be valid".
3) How do you deal with the insane blockchain bloat that will occur? Well, how insane will it be? The sub-blocks that are created will be transaction-less: just proof of work with the address of the miner who found it, or something similar. And we can tune the sub-difficulties so that they are generated every minute or so - surely one sub-chain running a scrypt algo wouldn't add that much bloat; it just needs block headers and a little info.
4) If we include transactions in the first sub-block, it could be that the main-block miner doesn't hear about a whole sub-block of transactions and doesn't include them in his main block. We'd need a way of dealing with these "dangling" transactions, as you can't push a sub-block from one main block to the next - it won't be valid.

Some ideas:

1) Put the transaction bundling in the first sub-block off the main chain. Then people can get their "verification" earlier. It's a bit less secure a verification, but for most transactions it's probably good enough. If you want to make a big transaction, you wait for a new main block to be created; if you're just buying a pack of smokes or something, you only need to wait for a sub-block verification. Additionally, if your transaction is included in a sub-block, it means that the main block you are basing that sub-block on already exists. This means that to reverse your transaction, someone who forked the previous main block would have to generate a valid main block, a sub-block, and whatever sub-sub-blocks also exist (which could be different algos -> different hardware needed for each), because otherwise they would have less weight in their version of the blockchain.
Additionally, after the next main block is put on top of the current sub-block where your transaction sits, you may be able to say (though we probably need to do some math to say this) with MORE security that your transaction has been validated, because you have all the additional weight of the sub-chains backing up the new main block.

2) If transaction bundling is off the main chain, then the main chain is only for paying out rewards to miners and such.

Here's an ASCII illustration of the time-ordering:

O - main-chain block; here rewards are calculated and spread out to the main-block finder and the sub-block finders from the previous main block
v - first sub-block; here transactions are swept in (potentially), but otherwise just proof of work for smaller miners
c - second sub-block; just proof of work for even smaller miners

(O1) - (v11) - (v12) - (c111) - (c112) - (c113) - (v13) - (c121) - (c122) - (v13) - (O2) - (v21) - (c211) - (c212) - (v22) - (c221) - (O3) ... etc.

Anyway, poke it all full of holes. Figured it would be good to start a discussion though.

Edit: Gotta head out for a bit, be back in a few hours - if anyone reads this. Original concept/discussion thread here (it has mutated a bit since then): http://www.reddit.com/r/Bitcoin/comments/1v9gp5/removing_the_incentive_to_mine_in_pools/
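To make the fork-choice idea above concrete, here's a minimal Python sketch (not part of any real client; all field names are invented for illustration) of choosing the heaviest chain when swept-in sub-blocks count toward a main block's weight. It shows why a main-block miner who ignores sub-chains can be out-competed by one who includes them:

```python
# Hypothetical sketch: fork choice by summed difficulty, where each main
# block also carries the difficulty of the sub-chain it sweeps in.

def chain_weight(chain):
    """Total work of a chain of main blocks plus their swept sub-chains."""
    total = 0
    for block in chain:
        total += block["difficulty"]
        # Each swept-in sub-block adds its (smaller) difficulty too.
        total += sum(sub["difficulty"] for sub in block["sub_chain"])
    return total

def best_chain(candidates):
    """Pick the heaviest candidate chain, counting sub-chain work."""
    return max(candidates, key=chain_weight)

# A main block that sweeps in sub-blocks outweighs one that ignores them,
# even when both main blocks have the same difficulty:
greedy = [{"difficulty": 100, "sub_chain": []}]
sharing = [{"difficulty": 100, "sub_chain": [{"difficulty": 5}] * 4}]
assert best_chain([greedy, sharing]) is sharing
```

This is just the heaviest-chain rule from the post expressed as code; the real tuning question (how exactly sub-difficulties are summed) is left open, as the thread says.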
|
|
|
|
erre
Legendary
Offline
Activity: 1680
Merit: 1205
|
|
January 15, 2014, 03:55:06 PM |
|
Seems to me that, in your vision, the "main" miners will act as a pool. But wait for someone less newbie than me for a smarter and more complete response.
|
|
|
|
cr1776
Legendary
Offline
Activity: 4214
Merit: 1313
|
|
January 15, 2014, 04:27:32 PM |
|
You should take a look at p2pool, which provides distributed mining while giving payout consistency similar to pooled mining, as compared to solo mining.
|
|
|
|
Exbuhe27 (OP)
Newbie
Offline
Activity: 27
Merit: 2
|
|
January 15, 2014, 08:42:51 PM |
|
Ok, I'm back.
To the first comment: yes, you could have pool mining at any level of sub-blocks, but because people will naturally find a place to "fit in" their hashing power, any attempt to do so would be spotted as an attack on the network much more easily. The miners working on the main chain aren't inherently pooling, though; they're just working on similar problems as each other.
As for p2pool - yes, I've looked at it. I think this system has several advantages:

a) Built into the Bitcoin protocol. No need to rely on good Samaritans donating to the miners and such in the long run. I don't think we can count on that continuing to happen forever, and even if we think we can, we shouldn't. A lot of Bitcoin's success as a decentralized protocol is in providing incentive (built directly in) to support the protocol (mining rewards, etc.), so we should build in protective measures too. If we can utilize game theory to make people support the network, we can utilize it to make people want to decentralize too. People making the protocol understand the importance of making it non-abusable, but people looking to just make money by mining, and willing to throw money at making that happen, don't understand that importance (especially in the future, when people will look at Bitcoin like cars - it does what it does and they don't look under the hood).

b) The weight of smaller problems solved is added to the blockchain (as the sub-blocks). In pool mining, the share-chain generated has weight for the people operating in the share-mining operation, but you can still only add weight to the blockchain in big main-block-sized chunks. Here you can have weight added to the chain much quicker, by allowing smaller problems being solved to contribute directly to the main chain. Then if someone tries to fork the blockchain by ONLY mining the main chain, they'll have a much harder time of it, because they'll have to generate main-chain blocks fast enough to outweigh the original main chain + the sub-blocks off the main chain.
After a bit more discussion though, these are the conclusions I've drawn:

1) This can't be done with Bitcoin; it would have to be an alt-coin. The Bitcoin main chain supports collecting transactions, not so much collecting sub-blocks. Maybe it can be done, but it would require a well-coordinated shift in the protocol.

2) It's hard to give the miners incentive not to refuse the sub-chain transactions - the argument I made before was that they would want the heaviest main chain so that they could be that much closer to guaranteeing their block rewards wouldn't be reversed. It's tough to tell if that's enough incentive. Instead, I thought up this: whatever level of sub-block is the last sub-block is the one that collects transactions and validates them. Then those sub-blocks are also the ones that collect transaction fees. The super-blocks above them would demand a "sub-block-inclusion" fee - whether based on a hard number or a percentage, who knows. This would happen all the way up to the main chain. Additionally, instead of awarding the main-chain blocks the block reward, you split it among the smallest sub-blocks. Super-blocks won't get money unless they include sub-blocks, and sub-blocks won't get included (and therefore won't get money when it comes reward time - which would be every main-block find) if they don't include an inclusion fee.
Problems I see now. DoS attack: someone with a ton of hashing power and ill will towards Bitcoin (or whatever alt this is) takes their hashing power to the lowest level and makes tons of blocks - huge amounts of them, raising the difficulty to mine at that level - then doesn't include any transactions in his blocks. He's still getting his block rewards and can still pay up to the super-blocks to get his blocks validated and included. Maybe this can be avoided by simply saying "you get your share of the block reward based on the percentage of the last x transactions you included" or something. Then he won't have the block reward necessary to put his transactions on the blockchain. He could supply his own money by making transactions back and forth between two of his accounts to generate the block-reward distribution and the transaction-fee distribution, but this would get expensive and isn't sustainable. And it would help to distribute his wealth among the super-block miners in the long run.
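The mitigation described above - weighting each sub-miner's reward share by the transactions they actually included - can be sketched in a few lines of Python. This is purely illustrative; the function and field names are invented, and the real scheme would need to define the "last x transactions" window precisely:

```python
# Hypothetical sketch: split a level's block reward among miners in
# proportion to how many of the recently seen transactions each miner's
# blocks included, so an empty-block spammer earns nothing.

def reward_shares(tx_counts, total_reward):
    """tx_counts: {miner: transactions included among the last x seen}."""
    total_included = sum(tx_counts.values())
    if total_included == 0:
        return {m: 0.0 for m in tx_counts}
    return {m: total_reward * n / total_included
            for m, n in tx_counts.items()}

shares = reward_shares({"honest": 90, "spammer": 0}, 50.0)
# The spammer mined plenty of blocks but included no transactions,
# so his share of the 50-coin reward is zero:
assert shares["spammer"] == 0.0
assert shares["honest"] == 50.0
```

Note that, as the post points out, the attacker can still game this by paying himself in circular transactions - this sketch only shows why empty blocks stop paying.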
|
|
|
|
TierNolan
Legendary
Offline
Activity: 1232
Merit: 1104
|
|
January 16, 2014, 09:27:20 AM |
|
As for p2pool - yes I've looked at it. I think this system has several advantages: a) Built into the Bitcoin protocol.
Miners often have relatively low-complexity computers managing their miners. A bitcoind node requires around 20GB of hard drive space (and growing).

1) This can't be done with Bitcoin, it would have to be an alt-coin. Bitcoin mainchain supports collecting transactions, not really collecting sub-blocks as much. Maybe it can be done, but would require a well-coordinated shift in the protocol
Right, it is 2 separate things. Overloading the official reference client with even more functionality is the wrong way to go.
|
1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
|
|
|
Exbuhe27 (OP)
Newbie
Offline
Activity: 27
Merit: 2
|
|
January 16, 2014, 02:20:00 PM |
|
I'm not sure what you're arguing, TierNolan. I agree that running a full node is low complexity - I run one off my laptop whenever it's turned on, and I notice absolutely no difference unless it's been turned off for a couple of days and takes a couple of minutes to download and verify everything.

As for the second point: I'm not talking about overloading the official reference client, I'm talking about the whole protocol - the way we generate blocks, verify them, etc. I think we can do better, to make it scale better for the massive amounts of computing power that are moving into mining. Right now it's nearly impossible to get a payout without joining a pool - we can structure it so that people only compete with those of similar mining power, and have incentive to do so. We can also structure it so that someone with an insane amount of mining power would have to distribute that power in a way that makes it harder to attack the network.

We're still the early adopters, and as more people adopt, more people will buy mining ASICs and do a quick Google search for "which mining pool should I go with?" The big pools will be the first hits, and so the big pools will slowly grow their computing power. Sure, it's a stable system theoretically - but when things like incentive get involved, people start doing funny things. We need to provide incentive for everyone to do what's best for the network - that's the beautiful thing about much of the Bitcoin setup: it's in everyone's best interest to support the network. It just seems like a flaw to have block rewards go to the ONE MINER who happened to get lucky enough to find the correct block, despite all the work others could have done. It's not a distributed rewards system to match the distributed network. (I understand that it's not luck, and that in the limit this proof-of-work system smooths everything out - but we don't live in that limit, not even close. We live in the here and now, where people with 10 Gh/s of power could wait 25 years for a block! Meanwhile all the proof-of-work they've donated to Bitcoin goes to waste; it's not recorded anywhere. People live and die in 25 years.)

Pooling solves that problem, but creates another, more serious problem of individual pools having crazy amounts of network power - we shouldn't be shooting for "not 51%", we should be shooting for "not 2%". There is incentive to pool, though - otherwise no payout. So p2pool is good, because it mitigates the problem of crazy-big centralized pools being built up, BUT THERE IS NO INCENTIVE to use p2pool over normal pooling (other than "well, it's good for the network", which we can't and shouldn't count on people to think about or follow). In fact, it's arguably HARDER to use p2pool (as of right now - not because it's actually harder, but because there is less entry-level documentation about it), so there is incentive not to use it.

***************************************
Full warning - wall of text ahead. Most of my posts are walls of text anyway. But this idea for a protocol is getting pretty refined in my head. I think it's shaping up pretty well.
***************************************

A system that rewards you for mining benevolently and intelligently (with regard to stabilizing the network) is what we need. The protocol itself is the one thing everyone using the network agrees upon, so if everyone agrees on a protocol for benevolent mining, then it will be enforced by the network. All I'm trying to do is come up with a mining protocol which allows any entry-level miner to turn a profit without having to join a pool (and hopefully a larger profit than they would get in a pool) - and while I'm at it, I think we can make the network more secure against 51% attacks. Here's the idea so far (it's mutated even more today - it keeps changing as I think about it).
Until now I've been thinking about building blocks smaller and smaller, rather than bigger and bigger. Now that I think about it, it makes much more sense to go bigger and bigger.

So: the first block size (the one that gathers transactions) has a quick target rate, something like one block every 30 seconds - but more importantly, it has a target of one block per week per entry-level hash cost (for instance, a miner who can spend 1 BTC on his rig would get one block every week from that rig). This can be made easier to enforce as just a pure hash rate: say a miner who has 10 Gh/s (achievable with 1 BTC as of now) would get one block at this level every week on average. This is like the normal blockchain today (except for the faster target rate), mined in the same way. People get block rewards and transaction fees for these blocks - small block rewards, maybe 1/20th the current size? Who knows; it doesn't actually matter in the end, since eventually the currency would stabilize relative to other things anyway. Maybe in this new crypto-currency these could be 1-coin rewards.

Then you have miners who gather those blocks (calling them sub-blocks now) and do a similar thing to what current miners do with transactions: they group the sub-blocks together in the order they were generated, hash them, hash the previous super-block at their mining level, and try for a difficulty target. These are super-blocks, and they are harder to find than the sub-blocks. The same algo can be used to generate these blocks, and the target difficulty could be some set number higher than the sub-difficulty - but even better, it could be that the difficulty of the block you generate has to be greater than or equal to the sum of the difficulties of the sub-blocks you're including in your super-block (or some fraction thereof). This allows for really organic determination of the difficulty necessary to group a bunch of sub-blocks. The block reward is calculated similarly (as a sum of sub-rewards).
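The "organic" super-block difficulty rule described above can be stated as a tiny validity check. This is a sketch under my own assumptions (the names and the `fraction` parameter are invented; the thread leaves the exact fraction open):

```python
# Hypothetical sketch: a super-block is valid only if its own proven
# difficulty is at least some fraction of the summed difficulty of the
# sub-blocks it bundles.

def superblock_valid(super_difficulty, sub_blocks, fraction=1.0):
    required = fraction * sum(b["difficulty"] for b in sub_blocks)
    return super_difficulty >= required

subs = [{"difficulty": 10}, {"difficulty": 12}, {"difficulty": 9}]
assert superblock_valid(31, subs)        # 31 >= 10 + 12 + 9
assert not superblock_valid(30, subs)    # not enough work for these subs
assert superblock_valid(16, subs, 0.5)   # with a 50% fraction, 16 >= 15.5
```

The nice property is visible directly: the more sub-blocks you want to sweep in, the more work your super-block must prove, so the required difficulty scales with what you bundle rather than being set by hand.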
A study of how to sum difficulties is necessary. Sub-blocks have to be mined/included in the order they were generated - just like transactions have to be grouped in order (to prevent double spending). So you have miners who are mining the main chain, which is collecting and verifying transactions, and you also have miners who are mining off of that chain, collecting the blocks up and generating super-blocks. The two chains are developed parallel to each other, and most importantly, if the super-chain stops being generated you can still generate the sub-chain. You can't generate a super-chain if the sub-chain stops, though.

Every time someone mines a super-block, it adds that total difficulty to the main chain (the transaction chain), meaning that if you wanted to undo earlier transactions, you would have to recalculate the previous sub-blocks and whatever super-blocks have been generated on top of them - and you would have to do it sequentially (limiting you further), because you can't generate super-blocks before you generate the sub-blocks. That makes it very difficult to 51%-attack the network. This is enforced by the network protocol: if a miner sees a chain + super-chains with heavier total difficulty, he switches to that chain instead.

This seems like a really good way to let everyone contribute to network security (by adding total difficulty to the hash-chain), including the very small miners who just want to run a single ASIC or two. As a super-chain starts to catch up with, or even tries to outrun, the sub-chain, people will move to another-level super-chain, which will throw even more difficulty weight onto the already-existing blockchain, making it even harder to fork.

With this scheme though, sub-block inclusion fees would be near impossible to enforce. If a miner decided not to grab a sub-block because it didn't have an inclusion fee, then that super-chain would be stopped altogether right there.
There is an aging algorithm already in place for transactions, though, yes? As a transaction gets older, its priority is bumped up, so it will be included even without a fee? (*not sure about that*) Maybe a similar thing could be done with sub-block inclusion fees. But really, the inclusion fee would turn into a "you'd better put in a small incentive for me to mine a level higher than you, or I'll absolutely decimate your ability to find blocks" fee, since a super-miner could easily threaten to descend to the sub-miners' level and drive difficulty up if they don't feel fairly compensated for moving their mining power out of the smaller pond.

There is no end to how far up the hierarchy we allow people to mine; eventually we could have transactions bundled in hundreds of levels of blocks. The important thing is that the smallest chain, the first chain, is where transactions happen. Nothing that happens in the other chains can stop this chain from being generated, so you can't DoS the entire network by stopping generation somewhere higher up in the chain hierarchy.

An interesting idea is to try to utilize the super-chains as they are generated to make it so you don't have to store the contained sub-blocks anymore - pruning the blockchain based on proof-of-work. The super-miners would then provide a two-fold service: 1) effectively doubling the total hashing power needed to attack the network with every block they find, and 2) pruning the blockchain by finding super-blocks. This allows mining to scale with technology in a way that doesn't encourage pool mining. It also allows smaller miners to contribute to the total safety of the network (if I'm mining at 1 Gh/s solo and I never find a block, my proof-of-work currently does nothing to contribute to the safety of the network).
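On the aging question above: the Bitcoin reference client of this era really did have a priority rule for fee-free transactions, where priority grew with the age and value of the coins being spent. A simplified sketch of that historical formula (the constant names here are mine, and this is from memory of the old client, not a spec):

```python
# Sketch of the old Bitcoin transaction-priority rule:
#   priority = sum(input_value * input_age_in_blocks) / tx_size_in_bytes
# Transactions above a threshold could be relayed/mined without a fee.

COIN = 100_000_000  # satoshis per BTC

def tx_priority(inputs, tx_size_bytes):
    """inputs: list of (value_in_satoshis, age_in_blocks) pairs."""
    return sum(value * age for value, age in inputs) / tx_size_bytes

# The historical free-transaction threshold corresponded to a 1 BTC input
# aged about one day (~144 blocks) in a 250-byte transaction:
FREE_TX_THRESHOLD = COIN * 144 / 250  # 57,600,000

assert tx_priority([(COIN, 144)], 250) >= FREE_TX_THRESHOLD
```

So the answer to the "(*not sure about that*)" is yes - aging did buy priority - which supports the idea of an analogous aging rule for sub-block inclusion.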
One issue is the payout. You can halve the block reward at the transaction-block level every so often (like we do now), and then the reward at the super-block level will also halve every so often, etc. But because infinite levels of super-blocks can be created, you could generate many more units than originally anticipated - in fact, infinitely many. It's an interesting problem... but honestly, who knows if a deflationary currency is what's best? At least here the inflation is controlled by how fast technology marches forward - just like with gold, where inflation is controlled by how fast mining advances. And in today's (and the future's) society, how fast technology grows is probably a good approximation to how fast commerce is growing, meaning that prices should stay relatively stable.

Interesting thing: this could be implemented in Bitcoin. It doesn't invalidate any of the previous blocks/transactions; we would just add functionality to the protocol. So no one would have to switch over to a new crypto-coin. Such a sweeping change would probably upset some people though. Let me know what you think... I was looking for a project anyway.
|
|
|
|
TierNolan
Legendary
Offline
Activity: 1232
Merit: 1104
|
|
January 16, 2014, 03:20:26 PM |
|
I'm not sure what you're arguing TierNolan.
I agree, running a full node is low complexity, I run one off my laptop whenever it's turned on and I notice absolutely no difference unless it's been turned off for a couple days and takes a couple minutes to download and verify everything.
You can manage miners with far fewer resources than it takes to run a full node. You can even get a router to do it (if it can be flashed). This kind of hardware won't support a full node.

As for the second point. I'm not talking about overloading the official reference client, I'm talking about the whole protocol - the way we generate blocks, verify them, etc... I think we can do better to make it scale better for the massive amounts of computing power that are moving into mining. Now it's nearly impossible to get payout without joining a pool - we can structure it so that people only compete with those of similar mining power to them, and have incentive to do so. We can also structure it so that someone with an insane amount of mining power would have to distribute that power in a way that makes it harder to attack the network.
Changing the protocol means changing the reference client. In my view, they should split the client into a server (bitcoind) and a client mode. The server would just verify transactions and blocks, and wouldn't be able to create new ones. You pass it blocks and transactions and it tells you what is valid. It simply defines what counts as a valid transaction. This would be a much simpler piece of software.

Making changes is hard though. It has been described as redesigning a plane while in flight. They want to keep risks as low as possible (which is reasonable). A formal/official p2p mining pool system means that they don't have to update the official client.

We're still the early adopters, and as more people adopt, more people will buy mining ASICs and do a quick Google search "which mining pool should I go with?"
Mining against a centralised pool means just pointing their hardware at the pool. Any p2p system has a larger overhead than that. A very lightweight p2p mining system might be acceptable. Miners can use a proxy to actually connect to the pool; the p2p pool would have to run on those proxies.

We live in the here and now where people with 10Gh/s of power could get a block in 25 years! Meanwhile all their proof-of-work they've donated to Bitcoin goes to waste, it's not recorded anywhere. People live and die in 25 years.

Their POW is used just as much as a larger miner's. Virtually all hashes that are performed are worthless.

So p2pool is good, because it mitigates the problem of crazy big centralized pools being built up, BUT THERE IS NO INCENTIVE to use p2pool over normal pooling
Right - in fact, there is a disincentive. You have to run a p2pool node too. Currently, miners don't have to run a full node; they can connect directly to a mining pool. How does adding p2pool capability to the reference client help? As far as I can see, you are making the reference client more complex. There is no real benefit, and now the reference client is more complex and more difficult to maintain.

The protocol itself is the one thing everyone using the network agrees upon, so if everyone agrees upon a protocol for benevolent mining, then it will be enforced by the network.
Fundamental changes to the protocol pretty much create an alt-coin.

first block size (the one that gathers transactions), has a quick target rate, something like once every 30 seconds
That is a hard-fork change right there.

but more importantly it has a target one-block per week-miner hash-cost rate (for instance, a miner who can spend 1BTC on his rig would get 1 block every week from that rig)

Maths, please. How do you work that out without looking outside the network?

People get block-rewards and transaction fees for these blocks, small block rewards, maybe 1/20th the current size?

You get a 30-second target by reducing POW by 20. If the minting fees were also scaled down by 20, then everything remains in balance. However, again, it is a hard fork.

Maybe in this new crypto-currency these could be 1coin rewards.
Ok, so you are proposing an alt-coin explicitly then?

Ideally, if you want changes in the official protocol, you need to do it in a way so that old clients will still accept new blocks. This is called a soft fork. If you make the rules more strict, then old clients will still accept the new blocks, since the rules are stricter than their requirements. If a majority of the miners follow the new rules, then blocks which meet the old rules but fail the new (stricter) rules will be rejected by miners, so they never get into the chain.

So, you need to understand the protocol. Maybe what you want needs a hard-fork change (one that fails backwards compatibility). But you should try to find a soft-fork way of getting your ideas accepted.
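The soft-fork point here - new rules being a strict subset of what old rules allow - can be shown with two tiny predicates. These are hypothetical validity checks for illustration, not real client code:

```python
# Sketch of the soft-fork property: if new_rules_valid implies
# old_rules_valid, old clients accept every block the new rules produce.

def old_rules_valid(block):
    return block["pow_ok"] and block["txs_ok"]

def new_rules_valid(block):
    # Stricter: everything the old rules require, plus a new constraint.
    return old_rules_valid(block) and block["includes_subchain"]

block = {"pow_ok": True, "txs_ok": True, "includes_subchain": True}
# Any block valid under the new rules is also valid under the old rules:
assert new_rules_valid(block) and old_rules_valid(block)

loose = {"pow_ok": True, "txs_ok": True, "includes_subchain": False}
# Old clients accept it, new-rules miners reject it - so with a miner
# majority on the new rules, such blocks never make it into the chain:
assert old_rules_valid(loose) and not new_rules_valid(loose)
```

A hard fork is the opposite direction: making something valid that the old rules reject, which old clients will never accept.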
|
1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
|
|
|
NanoAkron
|
|
January 16, 2014, 05:58:20 PM |
|
What if hashing difficulty was made to be a function of some measure of 'proof of connectivity' - the best-connected clients are slightly penalised vs. those with higher latencies?
|
|
|
|
Exbuhe27 (OP)
Newbie
Offline
Activity: 27
Merit: 2
|
|
January 16, 2014, 07:03:55 PM Last edit: January 16, 2014, 07:21:22 PM by Exbuhe27 |
|
Alright, first, thanks for reading my posts - I know they're long. I think you're right that I need to look more into how the full nodes/clients/miners all interact with each other. I think writing an alt-coin would be a good way to do that.

Changing the protocol means changing the reference client.
In my view, they should split the client into a server (bitcoind) and a client mode.
The server would just verify transactions and blocks, and wouldn't be able to create new ones. You pass it blocks and transactions and it tells you what is valid.
It simply defines what counts as valid transactions.
This would be a much simpler piece of software.
Making changes is hard though. It has been described as redesigning a plane while in flight.
They want to keep risks as low as possible (which is reasonable).
A formal/official p2p mining pool system means that they don't have to update the official client.
Yeah, I figured this is pretty much what it would come down to. By separating the reference client into the two modes, do you mean that you can, as a participant in the network, either run the client, which doesn't store blocks and only verifies the ones relevant to you, OR run the server, which stores and verifies blocks? Hmm, that seems like a good split to have - the advantage being that you can update one without tinkering with the other as much, yes?

So, you need to understand the protocol. Maybe what you want needs a hard-fork change (fails backwards compatibility). But, you should try to find a soft-fork way of getting your ideas accepted.
The interesting thing is this doesn't really *conflict* with the existing protocol, in that it doesn't change how current blocks are generated, other than the target block time (which maybe we don't have to adjust, though it could be better to do so, I think). This just adds another layer of blocks on top of those blocks, then another, then another. So maybe the old clients (people who don't update) wouldn't actually notice the difference - except for the fact that we're now awarding block rewards for the super-blocks, which would look unspendable to the old clients. Is there any way around that?

If my interpretation of splitting the reference client and server is correct, then that's all it would take to make my idea implementable, yes? You could have people running the client who just participate in transactions; they don't need to know about the super-blocks being generated that give their transactions more security - they just need to check that their current transaction is valid in the main chain (the transaction chain), which is what the whole Bitcoin setup already does. Then the people running the servers/full nodes would be the only ones who have to update their software to support super-blocks (or other protocol changes that don't affect how transactions are viewed)?

Maybe we could make it so that the super-blocks generate a different coin? Not directly spendable as Bitcoin, but as an alt-coin on top of Bitcoin with a different value system? Then it may as well be an alt-coin that quite literally harvests the Bitcoin blockchain first, then its own blockchain. But we would still have to make ties between them which force them to depend on each other's success; otherwise the Bitcoin blockchain could say "screw this alt, I'm going to forget that part of the protocol", effectively reducing tons of hashing power and proof of work to nothing. It doesn't seem like a good idea to completely separate them.
Anyway, I think the benefit of this idea is that it provides incentive for really big miners to remove themselves from writing transaction history, while still allowing them to solidify the written history. This way smaller, more distributed miners are writing the transactions (less chance of a 51% attack at that level if any entry-level miner can consistently get rewards and doesn't need to pool), and the big miners then come in and harden those transactions. The levels above the first one would even be fairly safe to pool up in - and that's probably what we would see: entry-level miners coming in, difficulty going up, then a bunch of them forming a pool and moving to the next level so they can make more money, then more entry-level miners coming in.

As far as referencing a money value of hash power - that was just an ideal. Instead you would have to base it on the hashing power of the network at each level, I think (which would mean referencing the difficulty itself) - which may be a close approximation to dollar value on short time scales. Ideally the difficulty to mine at the first level would scale with how much hashing power an entry-level miner can buy, but that's unrealistic to calculate - so if we just shoot for adjusting the rewards so that the difficulty at this level stays fairly constant over time, or follows some expected hashing-power ease-of-access curve, maybe that's good enough. Which of course brings in the idea of playing around with rewards the same way we play around with difficulty, to provide incentive for benevolent action.

The next big issue is the inflation that this system brings. Is it too much? Inflation in this system would pretty much follow the fall in hardware prices - which maybe is a good metric, but the more I think about it, the more it seems that hardware improves much faster than other sectors of the economy. Maybe that will slow down?
(*needs research*) Either way, that means we're building economic policies into the coin itself (which Bitcoin already does a bit by picking a deflationary track), and perhaps we need to think about what the *best* economic policy to take is. Having currency supply scale with industry doesn't seem bad. Should we be making these decisions though?

And finally, what about when we "run out of transactions" and first-level blocks? The first super-chain will catch up really fast, probably within a couple of days; the next one will take longer, and the next one longer still. How long until we reach a block so massive that it encompasses the entire blockchain? Can we play with how much more difficult super-chains are, to keep that from happening too quickly? How much faster will transactions be in the future? How much faster will mining be? If we're trying to future-proof a coin, these seem like questions to consider.

Edit: Damn, just realized I take way too long to write these things. I guess I had it open for like, 2 hours.
Exbuhe27 (OP)
Newbie
Offline
Activity: 27
Merit: 2
January 16, 2014, 07:13:43 PM |
I thought about a proof-of-connectivity thing where you had to connect to X servers to learn a secret (Shamir's secret sharing), or something.
But it would put heavy strain on the network - really heavy strain, I think. Which is good in the sense that it makes it "difficult", but it also slows down the rest of the internet, which sucks. It would encourage building high-speed links between places, but most people would abuse the crap out of it - build two miners super close together, run virtual machines on them, hard-wired ethernet connections, etc.
But you could also require a "proof of volume" of flow - they have to prove they provided a certain amount of connectivity for lots of people - effectively changing how people use ISPs: they get money just for connecting, maybe per-kB fees, not the shitty ways they are doing it now. We could also say that they only get credit for traffic over some really advanced, super-secure encrypted protocol we develop, so that it encourages people to only use encrypted communications?
What about servers then just sending random messages back and forth to each other to make it look like they sent lots of traffic? That could probably be rejected by the network on a "unique connection" basis - say, every time a packet is sent between the same two people within the same 10 seconds, its reward is cut in half? But then our protocol wouldn't be very good, because the mid-points would know the end-points. Interesting ideas...
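The halving rule above could be sketched roughly as follows. Everything here (class name, window length, base reward) is a made-up illustration of the idea, not any real protocol:

```python
from collections import defaultdict

# Hypothetical sketch: halve the relay reward each time the same pair of
# nodes exchanges traffic within the same 10-second window. All names and
# constants are invented for illustration.

WINDOW_SECONDS = 10
BASE_REWARD = 1.0

class RelayRewarder:
    def __init__(self):
        # (node_a, node_b) -> (window_start_time, repeat_count)
        self.seen = defaultdict(lambda: (None, 0))

    def reward_for_packet(self, node_a, node_b, now):
        link = tuple(sorted((node_a, node_b)))  # direction-agnostic link id
        window_start, repeats = self.seen[link]
        if window_start is None or now - window_start >= WINDOW_SECONDS:
            # New window: full reward again.
            self.seen[link] = (now, 0)
            return BASE_REWARD
        # Same window: each repeat halves the reward.
        self.seen[link] = (window_start, repeats + 1)
        return BASE_REWARD / (2 ** (repeats + 1))

r = RelayRewarder()
print(r.reward_for_packet("A", "B", 0.0))   # 1.0  (first packet on the link)
print(r.reward_for_packet("A", "B", 3.0))   # 0.5  (same 10 s window)
print(r.reward_for_packet("B", "A", 5.0))   # 0.25 (still the same window)
print(r.reward_for_packet("A", "B", 12.0))  # 1.0  (new window)
```

Of course this only illustrates the reward rule itself - it does nothing about the mid-point/end-point privacy problem just mentioned.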
Anyway, how does it fit specifically into what we've been talking about?
Exbuhe27 (OP)
Newbie
Offline
Activity: 27
Merit: 2
January 16, 2014, 07:19:58 PM |
Damn newbie limits. Frustrating, but understandable. They should at least save my reply so that I don't have to type it again.

I see what you're saying now, though. I don't think it's enforceable. People in a mining pool don't have to be connected to the Bitcoin network at all, just to the pool's "exit node". The Bitcoin network only sees the exit node, I think; the rest of the network just sees the measure of hashing power, not how it's being generated. p2pool is different, but it's also not possible to force people to use p2pool, so something like this would actually *discourage* p2pool even more, as the people in the p2pool would be "see-able" by the network, right? Not sure, gotta look at the protocols more.
NanoAkron
January 16, 2014, 11:34:16 PM |
Exbuhe27 - thanks for considering my 'proof of connectivity' issue, but I think you're getting the wrong end of the stick. The network wouldn't REWARD low latency connections but actually PUNISH them, in a small way.
Two massive co-located mining servers would be rewarded less than the isolated miner in Mongolia who successfully releases a block.
Exbuhe27 (OP)
Newbie
Offline
Activity: 27
Merit: 2
January 17, 2014, 07:08:58 AM |
Right, but the thing is, I think it would be really easy to fake, whereas you can't fake things like one-way-function proof-of-work.
How do you measure connectivity in a way that can't be faked? Does the whole network try to ping that server? They could just delay the response to make it look like they're high-latency.
Besides, the two massive co-located servers could pool up really easily and present just one exit node to the network - then they only have to worry about making one connection look high-latency through faking.
Perhaps you could do something where, if the same entity wins two blocks in a row, the reward is halved for each successive block, then brought back up to full reward once someone else wins a block - but that's not enforceable at all either. It's too easy to look like anyone else on the network: they could just relay the block to a colluding miner and have that miner release the block for a small reward.
If the network was de-pseudonymized it would be possible, but then you lose a lot of the advantages of Bitcoin.
Exbuhe27 (OP)
Newbie
Offline
Activity: 27
Merit: 2
January 17, 2014, 09:58:10 AM Last edit: January 17, 2014, 10:44:09 AM by Exbuhe27 |
Had another idea. Tell me if this should just be an edit to the previous post, not 100% on forum etiquette here yet (mainly I'm afraid this will be seen as self-bumping, when really I just have another idea to throw out there, but I also want the idea to be read by people who have read the previous ones already).
Why not make it so that the difficulty of the block you find has to scale with how many transactions you include in that block? Then small miners can work on generating blocks with one to <some relatively small number> transactions, while bigger miners can build larger blocks with more transactions but have to meet higher difficulty requirements - in return they get more transaction fees, as well as a larger block reward (base the block reward on how many transactions you include as well). Everything else about verifying transactions stays the same; this just allows even more granularity in the mining power required to mine the main chain.
We still want a target minimum difficulty for a block with zero transactions in it (would we even allow a zero-transaction block? I suppose it still adds weight, and thus value, to the chain, but its block reward should be relatively small), and that's the main difficulty that is adjusted up and down every so often. Maybe take the quickest quartile of blocks generated over the last difficulty period and make it so they average out to our minimum block time of 10 - 30 seconds or so. Or we could de-emphasize the time element and make it transaction-based: we have a target number of transactions we want verified per block, or a target block size (in kB), and we adjust the difficulty so that more or fewer transactions get included in each block. But then every transaction you include in your block increases the difficulty ever so slightly while increasing the block reward too.
EDIT: With an average-transactions-per-block target instead of an average-time-per-block target, we could make it even harder to scoop up tons of transactions and throw them all into one big block, by making the difficulty of adding transactions grow exponentially, with the knee of the exponential sitting right about one standard deviation above the target block size. Then big miners would have even more incentive to move away from the main-chain blocks and mine on the super-chain, because they can make much more money there.
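As a toy illustration of that EDIT, here is one possible shape for such a difficulty function: a gentle linear term per transaction, plus an exponential penalty whose knee sits one standard deviation above the target. Every constant and name here is invented for illustration, not a worked-out proposal:

```python
import math

# Invented parameters, purely illustrative.
BASE_DIFFICULTY = 1_000.0   # difficulty of a zero-transaction block
PER_TX_FACTOR = 0.001       # small linear difficulty cost per transaction
TARGET_TX = 500             # network target for transactions per block
SIGMA_TX = 100              # std deviation of recent tx-per-block counts

def block_difficulty(n_tx):
    # Gentle linear growth for every transaction included...
    linear = BASE_DIFFICULTY * (1.0 + PER_TX_FACTOR * n_tx)
    # ...and an exponential penalty that kicks in once the block exceeds
    # the target by about one standard deviation (the "knee").
    penalty = math.exp(max(0.0, n_tx - (TARGET_TX + SIGMA_TX)) / SIGMA_TX)
    return linear * penalty

for n in (0, 100, 500, 600, 700, 1000):
    print(n, round(block_difficulty(n), 1))
```

Below the knee (600 transactions here) difficulty grows almost linearly; past it, each additional standard deviation's worth of transactions multiplies the difficulty by e, which is what would push big miners toward the super-chain.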
It would be the same idea as with the super-blocks, where a super-block has to have a difficulty that scales with the summed difficulty of the sub-blocks it combines, but with transactions instead. The parameters for how much the difficulty scales with transaction count would have to be figured out - but then we're rewarding people more for supporting the network more. EDIT: Perhaps we would have two difficulty parameters adjusted each time: the main difficulty scales with the average transaction count per block, and the sub-difficulty scales with the standard deviation of the transaction count per block. We could even keep a time component by letting it weakly influence the target average transaction count (so the average transaction count affects the difficulty, and the target block time affects the target average transaction count - we could use a simple oscillator function that pulls the target block time back with more and more force as it deviates further).
It would also effectively FORCE big miners out of the smaller main-chain group of miners - either that, or they would have to generate lots of really small blocks, which we could make less profitable. If a big miner comes down to mine the main chain, they'll be building up one of their really big blocks, and then suddenly some of the transactions they were using to build that block will be swept out from under them, making them "start over". If someone had so much more mining power that they could generate a much larger block FASTER than some small miner could generate a small block, that would be a problem. Maybe make the difficulty scale up steeply after a certain number of transactions (say, 1000 or so?) so that it becomes impractical to mine blocks with that many transactions when you could just be mining on the super-block chain instead.
A similar idea could set super-block difficulties: the base difficulty (for a zero-sub-block super-block) would be set by the maximum difficulty of any one sub-block in the sub-chain, and adding sub-blocks to your super-block requires adding difficulty on top of that base. The problem is if mining power starts to desert the network - suddenly you don't have enough mining power to mine super-blocks because just that one sub-block is too big. Though it wouldn't stop the main chain, so I guess it's not actually that bad a problem. And if mining power deserts the network, people will just go back to mining the sub-blocks and forget about mining the supers for a while, though they are still there. That's probably fine: they only add security, and don't detract any by not being there.
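A minimal sketch of that super-block rule, assuming an invented per-sub-block increment (the constant and function name are illustrative only):

```python
# Hypothetical sketch of the super-block rule described above: the base
# difficulty is the hardest single sub-block in the swept sub-chain, and
# each extra sub-block adds a fraction of that base on top.

PER_SUBBLOCK_INCREMENT = 0.25  # fraction of base added per extra sub-block (invented)

def superblock_difficulty(subblock_difficulties):
    if not subblock_difficulties:
        raise ValueError("a super-block must sweep at least one sub-block")
    base = max(subblock_difficulties)          # hardest single sub-block
    extras = len(subblock_difficulties) - 1    # additional sub-blocks swept
    return base * (1.0 + PER_SUBBLOCK_INCREMENT * extras)

# Sweeping four sub-blocks, the hardest at difficulty 80:
print(superblock_difficulty([40, 80, 55, 60]))  # 80 * 1.75 = 140.0
```

Note how this captures the failure mode described above: if hashing power drops, even a one-sub-block super-block still needs at least the difficulty of its hardest sub-block.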
Really it seems that the problem boils down to block rewards. They're a way to slowly release currency into circulation, but they seem to be the largest problem with the mining algorithms. If instead everything were based on transaction fees (as it will be eventually), I think it would be much simpler to program in incentives and keep a consistent system over time.
NanoAkron
January 17, 2014, 05:25:00 PM |
I haven't read your follow-up in detail yet, but again I don't think you've fully understood what I'm proposing. No pinging required. It would just use a combination of the node ID, the timestamps for when a block was received and transmitted, and the fact that time moves in a forward direction.

Two fast co-located servers would share a local node, so their timestamps would be near-identical. You could sign the block with an irreversible stamp of 'node ID & timestamp'. Any 'reversal of time' to try to spoof the system would be picked up. The local connectivity biases the difficulty further, as a 'local difficulty' factor ADDED to the network's general difficulty factor, which is a function of hash rate. I.e., 'local difficulty' is a function of node ID and timestamps, as opposed to the general difficulty, which is a function of hash rate. Following from this, nodes are rewarded with lower difficulty for blocks with higher latencies (larger differences between the local node timestamp and the stamp of when the distant node sent the block out) than for those with lower latencies.

Summary: Using timestamps, create a hash of the local node ID and local node time, and compare this with the time the recent block was received. Large differences = low 'local difficulty' to add to the background difficulty; small differences = high 'local difficulty' to add to the background difficulty. This encourages block sharing with more distant nodes. Spoofing of local stamps to get low local difficulty can be detected, because at some point the local node time will drift forward out of the window for block validity - the spoofing node would have to keep pushing their timestamps further and further into the future, since local 'time reversal' would be detected. Time reversals or timestamp volatility at the local node would also be flagged as suspicious.
There could still be shenanigans from changing the local node ID AND the timestamp simultaneously, but this would be risky, because other nodes would still be relaying valid blocks between each other, which then get processed and enter the chain in the normal fashion with valid times.
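The 'local difficulty' idea above could look something like this as a sketch. All constants and function names are invented to illustrate the shape of the rule (higher latency stamps earn a smaller local add-on), not a worked-out proposal:

```python
# Sketch of the proposed 'local difficulty': a block relayed from a distant
# (high-latency) node earns a smaller local difficulty add-on than one from
# a co-located node. Constants are invented for illustration.

NETWORK_DIFFICULTY = 1_000.0  # function of hash rate, as today
MAX_LOCAL_BONUS = 100.0       # largest local add-on (zero latency)
LATENCY_SCALE = 0.5           # seconds of latency at which the add-on halves

def local_difficulty(sent_ts, received_ts):
    # Latency inferred from the block's 'sent' and 'received' stamps.
    latency = max(0.0, received_ts - sent_ts)
    # Higher latency -> lower local add-on -> an easier block to build on.
    return MAX_LOCAL_BONUS / (1.0 + latency / LATENCY_SCALE)

def total_difficulty(sent_ts, received_ts):
    return NETWORK_DIFFICULTY + local_difficulty(sent_ts, received_ts)

print(total_difficulty(100.0, 100.01))  # co-located, ~10 ms: ~1098.0
print(total_difficulty(100.0, 101.0))   # distant, ~1 s:     ~1033.3
```

The open question raised later in the thread - whether the `sent_ts` stamp can be trusted at all - is exactly what this sketch takes on faith.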
Exbuhe27 (OP)
Newbie
Offline
Activity: 27
Merit: 2
January 17, 2014, 06:26:42 PM |
Ok, I understand what you're saying. Assuming you could enforce timestamp accuracy (which I think you can, to within a few milliseconds - good enough), what would this idea get us? How would it stop pooling?

Also, you have to decide ahead of time (before block generation) whether a block is difficult enough or not. So does the server generate a block that it "thinks" will be difficult enough and then send it out? What if it's not? Also, network loads change all the time - how can this be a consistent measure to base difficulty on? And how different is the latency between two massive nodes versus the latency between two small nodes? Is there enough difference to base this difficulty adjustment on?

** I just re-read your post and understand it better (I don't believe much in erasing ideas though, so I left the last two paragraphs). ** It looks like you're saying to enforce it on the receiving side: a server gets a block, looks at the timestamp, and then decides whether it received it late enough for the "added wait difficulty" to be enough to accept the block. Is this right?

Hmmmmm... It's an interesting idea, but I still don't think it's enforceable. People would spoof timestamps by making them look *earlier*, not later, so that they look less connected. And it's not relative to any block, because there is NO WAY you can enforce not spoofing a miner's identity, so each time they can just spoof a set amount of time before the block generation - they don't have to drift further and further into the past.

"Summary: Using timestamps, create a hash of local node ID and local node time, compare this with the time the recent block was received. Large differences = low 'local difficulty' to add to background difficulty, small differences = high 'local difficulty' to add to background difficulty. This encourages block sharing with more distant nodes."
...I thought you said that you're rewarded for lower connectivity? So large differences = high 'local difficulty' added to the apparent difficulty?

I like the idea of using the network itself as proof of work (or proof of something), but I just don't see it as enforceable. With proof-of-work we have a definite "you did this much *stuff*, here's a reward", and it's mathematically enforceable, whereas with networks it's too easy to spoof. Maybe if we found a way that forced you to connect to unique nodes every time to get larger rewards? But then you have to somehow stop spoofing, which I just don't think is possible.
NanoAkron
January 17, 2014, 09:11:01 PM |
It is enforceable, because each block would have to contain the plaintext, hash values, and checksum hash for the timestamps from the last 32 nodes - these can be checked when a block is received, and if the timestamps are incorrect (i.e. spoofed) the block is rejected. I chose 32 because it makes a 51% attack against the timestamping very unlikely - they'd need to produce a chain of 32 consecutive blocks in order to spoof the stamps. Furthermore, the previous 'local difficulty' can be hashed and included in the block header, and this is then compared against the list of stamps and the block's contents to check its validity when first received.

And rewarding larger differences between the 'sent' and 'received' stamps is the same as rewarding lower connectivity. Of course, these stamps are compared to the local node time, which is synchronised to the rest of the network anyway, so it all evens out. We're not talking milliseconds of latency here, but more likely tenths of a second or even whole seconds of difference.
Exbuhe27 (OP)
Newbie
Offline
Activity: 27
Merit: 2
January 17, 2014, 09:31:47 PM |
Hmmmm, I'm liking the idea more - using the network and previously generated timestamps to validate the timestamps...
But still, I don't think it's usable. What if you just decide not to include transactions past a certain point in your block? It encourages miners not to include newer transactions in their blocks so they have a better chance of being accepted. Also, what if you generate a block and then just wait? Sure, it's a game you could lose, but it encourages miners to think about the benefits of *not* supporting the network, whereas all the current incentives encourage miners to benefit the system as much as possible.
Even then, how does this discourage pooled mining? How does it stop people from ganging up their computing power against the network?
NanoAkron
January 17, 2014, 11:54:56 PM Last edit: January 18, 2014, 12:15:15 AM by NanoAkron |
To address your questions in reverse order:
1. A pool would still be able to solve blocks very rapidly behind a single node; it would just reduce the likelihood of them solving sequential blocks, which is what leads to 51% attacks. By encouraging them to next solve a block served from a more distant node, and disadvantaging them from solving a more locally generated block, good network health is encouraged.
Local block --> Remote block --> Local block --> Remote block --> etc.
If they try to 'game' this system with a nearby server running a purposely faulty clock, just serving out locally generated blocks stamped with a fake release time (they can't re-stamp a block that's already been released, or this will be detected, even if it's one of their own), the error will accrue over the 32-block check.
Example:
Block released from the pool at t=0, received by a 'fake lag node' at t=0 (they're right next to each other), fake delay introduced to add +1, and the block is sent back to the pool. In order:
- Sent by pool at t=0
- Received by lag node at t=1 (fake lag of 1 added by the server, but they're right next to each other, so really t=0)
- Received back at pool at t<1 (the pool server has the same original time as before, which now falls before that of the lag node)
And because this is all hashed and encoded by the receiver and sender, along with the hash of the 'local difficulty' level of the previous 32 blocks to confirm the timestamps/purported connectivity, this won't succeed - they would have to shuttle blocks back and forth between the pool and the lag node with ever-increasing time delays.
Because they only profit by getting their blocks accepted by the rest of the network, they would have to release them for external checking at some point before their timestamp exceeds the 70-minute median limit or the 90-minute heartbeat, but the spoof requires them to successfully solve and fake the timestamps on a chain of 32 blocks. At 10 minutes per block, they can't do this, even with a local difficulty adjustment of 0.
The accrued time difference through spoofing would therefore be seen as fake - a fake chain of 32 blocks at 10 min/block, PLUS the additional spoof time of a few minutes, exceeds the limits of acceptance for the rest of the network (it exceeds the 70-minute median limit and the 90-minute heartbeat). We could perhaps even just require a chain of 10 timestamps in this case.
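A back-of-the-envelope sketch of that accrual argument. The 10-minute block interval, 70-minute median limit, and 32-block chain follow the figures in this post; the per-block spoof delay is an assumption, and this models only the claim that a steadily future-dated chain of stamps eventually leaves the acceptance window:

```python
# If a spoofing pool must fake 32 consecutive blocks and each block's
# timestamp must move a little further into the future, the accumulated
# offset eventually exceeds the network's acceptance window.

BLOCK_INTERVAL_MIN = 10        # minutes per block (as stated in the post)
CHAIN_LENGTH = 32              # consecutive blocks to fake
MEDIAN_LIMIT_MIN = 70          # median-based acceptance limit, per the post
SPOOF_DELAY_PER_BLOCK_MIN = 3  # assumed extra future-dating per block

accrued = 0
for height in range(1, CHAIN_LENGTH + 1):
    accrued += SPOOF_DELAY_PER_BLOCK_MIN
    if accrued > MEDIAN_LIMIT_MIN:
        print(f"spoof detected at block {height}: "
              f"stamps now {accrued} min ahead of local time")
        break
```

With these assumed numbers the offset breaches the 70-minute limit at block 24, well before the 32-block chain completes - which is the mechanism this post relies on (and which the next reply disputes).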
2. If they don't accept transactions from the rest of the network, or don't have their blocks accepted by the rest of the blockchain, they're not actually mining Bitcoin. They can sit whirring away at their own blocks all they want, but they won't accomplish anything.
Exbuhe27 (OP)
Newbie
Offline
Activity: 27
Merit: 2
January 18, 2014, 01:41:37 AM |
How would this stop anyone?
So your argument is about stopping sequential blocks from the same node. Assuming you could stop them from spoofing timestamps (your example has them passing blocks back and forth, but really they could just hold blocks instead - they don't have to pass them back and forth to let time accrue), it *still* relies on people being able to see who generated the block. There is pretty much no way to enforce an identity check on the network; buying a VPS costs something like 20 euro a month?! I can get 5 IP addresses for 20 euro a month, on a machine MUCH more than capable of delaying blocks a pre-calculated amount of time and then releasing them to random nodes in the network. Just take your pool, isolate it from the network (the individual miners in the pool are isolated anyway and can't tell the difference), and send the generated blocks through a different server that you control every time you get one.
You're saying it's hashed by the sender and the receiver - sure, the receiver wouldn't validate a block it saw as "too low difficulty", but the sender could just withhold the block and, after a certain amount of time, when the time delay has added enough pseudo-difficulty to the block, release it - no one in the pool has to know, no one in the network, ONLY the malicious pool-operator node.
I think the error comes in when you say that an error accrues over the 32-block check. What exactly is the mechanism for that? Blocks are found at random times; there is no way to predict when the next one will be found - you can only look at statistics and say "well, in the past it's been once every 10 minutes *on average*, so probably in the next 10 minutes with a certainty of 68%" or something. So your fake chain of 32 blocks at 11 minutes each (because of the delay from the mean) doesn't and can't look weird to the network - it looks perfectly fine. Perfectly random means that 50 heads in a row can and will come up in a game of flipping coins.
So you have the selfish miner finding ALL the blocks in an attempt to rewrite transaction history. He finds one - the network accepts, no problem (it's his first). He finds the next, waits just a minute, and for some reason the rest of the network doesn't find one in that time (which is completely possible - block times are all over the place), then releases that block. The network accepts. What mechanism forces him to add the extra time he waited for the previous block into his next block? If you say his ID does, then we can agree there is no mechanism, seeing as he can easily spoof an ID.

So he mines his next block based on the hash of the previous block, which was released at time X+x (X being the time he found it, x being the delay before he released it - and therefore the time he stamped it at). But the network doesn't see X+x; all they see is Y, which is the sum X+x - he never reveals that he delayed the block. So he doesn't have to delay his next block by 2x; he can just delay it by x again, and to the network both look like legit blocks. The time doesn't add up - it's the same delay each time. So if you had someone with absolutely massive mining power on the network, who on average could discover blocks 2x faster than a normal miner (which we're already seeing quite a bit of - GHash.io reached the 40%-ish mark recently), he could find a block super quick, then delay it as long as possible up to the point where another miner could potentially mine a block, then release it with the correct difficulty (which includes the difficulty accrued during that delay).
Now, if what you're talking about is that a miner must generate a block, release it to the network, wait for a certain number of other miners to sign it, then bring it back in and sign it again before releasing it for validation by the network - that's also easily spoofable. He just sends it to a colluding server (one he owns; in fact it doesn't even have to be a separate server - it could just be another node running on the same hardware as the original pool operator, and you could have several such nodes running on that server). The colluding server holds the data for a while (however long it takes), signs the block at the later time, and "sends" it back - the data actually only travels a few micrometers, onto a different part of the same hard drive. There is nothing forcing the colluding server to send the block back as quickly as it can, and there is nothing that can enforce the identity of the colluding server.
The way I see it, there really can't be much benefit in encouraging lazy or slow transmission of data. Slowing down data is all too easy - delays and holding are completely figured out. Speeding up data? That's a hard problem, a problem that takes work, actual work, to solve. And that's what mining needs to be based on - actual work, or at least actual proof that *something* has been sacrificed. You have to make it worth it that someone will sacrifice something for the network (the block rewards, in Bitcoin's case). Nothing about delaying data is difficult.