MarketNeutral
|
|
July 06, 2015, 05:22:58 PM |
|
I'm looking forward to the day when mining income is derived mostly from including transactions. Miners/pools who create empty blocks on purpose are bad for the blockchain.
By creating problematic 0-tx blocks (and reminding us that scaling Bitcoin isn't as simple as bloating a sanity-check constant), miners force the devs to invent better solutions. Et voilà, anti-fragile!

Well put. I love anti-fragile technology.
|
|
|
|
eneilwex
|
|
July 06, 2015, 05:23:54 PM |
|
They were cheating the system. Bitcoin has one variable that can't be predicted: human behavior. The system will always be flawed as long as someone sees an advantage in doing the wrong thing. Bitcoin isn't a worldwide corporation, but its players must act as if it were. Independent global actors must each do their part correctly and for the good of the whole, yet all of them are motivated differently. The problem with a system controlled by no one and everyone is the old adage: what's best for me is what's best.
This! Mankind is the biggest threat to the system. And so it will always be vulnerable.
|
|
|
|
oblivi
|
|
July 06, 2015, 05:44:51 PM |
|
I wonder why miners don't pay more attention to this. I'm assuming they want the best for Bitcoin, so why can't they just prevent this from happening and take action? It can't be that difficult to pay attention to the hashrate distribution.
That is a pretty bad assumption. You should assume that miners are in it for the money. Sure there are some miners that mine to help the network, but really, right now the largest miners exist because of the financial incentives. They do everything they can to make the most money, which includes cutting corners. And sometimes cutting corners has big risks and can cause problems for other people. Everyone knows miners are in for the money, but if they don't act right, soon they could be mining a valueless coin because they fucked up, something no one here wants, not if you are a dev, not if you are a miner, and not if you are a regular user.
|
|
|
|
JorgeStolfi
|
|
July 06, 2015, 06:49:11 PM |
|
So basically, we should wait for the transactions to get a confirmation from someone other than Antpool or f2pool before safely counting the bitcoins in the wallet? If Antpool and/or f2pool finds blocks in a chain, it's not safe anymore? Did anyone actually take a loss from this incident?
Some pools apparently monitor other pools and "steal" the hash of their last mined block even before the block has propagated to the relay nodes. Since they don't have the block, not even the header, they cannot validate it before they start mining on top of it. I have read that those two pools and also BTC-China use this trick. I don't know whether others do the same, but since it gives them several seconds of advantage in the race for the next block, we must assume that every miner will do it if they can. And the shortcut usually works; it failed this time only because of the version switch. Therefore, the miners will not stop doing it, in spite of having lost 9 block rewards altogether (half of which they would have lost anyway).
|
Academic interest in bitcoin only. Not owner, not trader, very skeptical of its longterm success.
|
|
|
JorgeStolfi
|
|
July 06, 2015, 06:53:35 PM |
|
The market has proven to be self-correcting. When GHash got 49%, action was taken; now it's down to 10% or something.
IIRC, GHash actually got more than 50%. The community can be sure that they got down to 10% by turning off 80% of their equipment, all for the good of the community. They would never think of moving that equipment to other pools. [/sarcasm]
|
Academic interest in bitcoin only. Not owner, not trader, very skeptical of its longterm success.
|
|
|
tl121
|
|
July 06, 2015, 07:10:45 PM |
|
Let's see if the miscreant pool operators are suitably punished for their incompetent and/or dishonest behavior. A suitable punishment would be for miners to remove sufficient hash power from these pools that they cease to be major players.
|
|
|
|
acid_rain
Newbie
Offline
Activity: 41
Merit: 0
|
|
July 06, 2015, 07:19:47 PM |
|
This is ridiculous. I think people are ignoring the fact that 30 confirmations is freakin 12 hours. What a coincidence right? So everyone was talking about forks last week, and now there are rogues and new chains being built.
People make stuff up, and then someone takes the hint and makes it reality. People are so oblivious to what's happening right under their noses.
|
|
|
|
adaseb
Legendary
Offline
Activity: 3878
Merit: 1733
|
|
July 06, 2015, 07:21:27 PM |
|
For some reason I didn't get my Westhash mining payout today. I was supposed to get it 3 hours ago, and when I check the transaction it says 0 confirmations for the last 3 hours. What is going on?
|
|
|
|
acid_rain
Newbie
Offline
Activity: 41
Merit: 0
|
|
July 06, 2015, 07:22:34 PM |
|
BTC has gone rogue. CFR has taken over, biatch.
While this will only inflate the price even more, legit businesses are hurting man.
|
|
|
|
Cryddit
Legendary
Offline
Activity: 924
Merit: 1132
|
|
July 06, 2015, 07:46:35 PM |
|
On a different note, here's one proposed solution for SPV nodes that does not require us to wait for 30 confirmations. Please comment on its correctness.
What you are proposing is to modify the client so that it would do what a full node would do, only truncated to the last 50 blocks. Just checking the version tag is not enough. This hacked client app must also know that the BIP66 rule became active at a certain block b_{n}. Otherwise it would have to keep fetching blocks until it finds a "95% majority" event, or reaches a block that predates the v3 software.

It's true: to determine the place in the last-thousand-blocks rule, you need to download and check the last thousand blocks (not just the last 50). A different consensus rule could be used that would be amenable to checking over shorter sequences. For example, each block could publish a "current block version" as it does now, _and_ a logarithmic average, with the changeover occurring when the logarithmic average gets within 0.05 of the higher integer version. The logarithmic average at block n would (as an example) be 999/1000 of the logarithmic average at the previous block, plus 1/1000 of the current block version. Using a logarithmic average rather than an average computed over the last thousand blocks means you could verify the value without looking more than one block into the past.
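As an aside, the "logarithmic average" described here is just an exponentially weighted moving average of block versions. A minimal Python sketch, assuming the 1/1000 weight and 0.05 threshold from the post (the helper names and the illustration below are mine, not from any client):

```python
# Sketch of the proposed exponentially weighted "logarithmic average" of
# block versions. The 1/1000 weight and 0.05 changeover threshold come from
# the post; the block data below is purely illustrative.

ALPHA = 1.0 / 1000.0   # weight of the newest block's version
THRESHOLD = 0.05       # switch rules when avg is within 0.05 of the new version

def next_average(prev_avg, block_version):
    """EMA update: each block header would carry this value, so a client can
    verify it by looking only one block into the past."""
    return (1.0 - ALPHA) * prev_avg + ALPHA * block_version

def rule_active(avg, new_version):
    """The new consensus rule activates once the average is within
    THRESHOLD of the higher integer version."""
    return new_version - avg <= THRESHOLD

# Illustration: starting from an all-v2 chain, count how many consecutive
# v3 blocks it takes before the v3 rule would activate.
avg, blocks = 2.0, 0
while not rule_active(avg, 3):
    avg = next_average(avg, 3)
    blocks += 1
print(blocks)  # on the order of 3000 blocks with these parameters
```

The point of the design is visible in `next_average`: the value at block n depends only on block n and the average claimed at block n-1, so a light client never has to refetch a thousand-block window.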
|
|
|
|
Cryddit
Legendary
Offline
Activity: 924
Merit: 1132
|
|
July 06, 2015, 07:49:30 PM |
|
I don't see a way of checking that the miners even have a copy of the parent block, let alone that they have verified it, without breaking compatibility with all existing mining hardware.
Fuck all existing mining hardware. If miners can do this then clearly it isn't correct and needs to be thrown out anyway.
|
|
|
|
MCHouston
|
|
July 06, 2015, 08:11:06 PM |
|
This is ridiculous. I think people are ignoring the fact that 30 confirmations is freakin 12 hours. What a coincidence right? So everyone was talking about forks last week, and now there are rogues and new chains being built.
People make stuff up, and then someone takes the hint and makes it reality. People are so oblivious to what's happening right under their noses.
5 Hours but close.
|
BTC 13WWomzkAoUsXtxANN9f1zRzKusgFWpngJ LTC LKXYdqRzRC8WciNDtiRwCeb8tZtioZA2Ks DOGE DMsTJidwkkv2nL7KwwkBbVPfjt3MhS4TZ9
|
|
|
l1m3st0n3
Newbie
Offline
Activity: 8
Merit: 0
|
|
July 06, 2015, 09:56:59 PM |
|
This is ridiculous. I think people are ignoring the fact that 30 confirmations is freakin 12 hours. What a coincidence right? So everyone was talking about forks last week, and now there are rogues and new chains being built.
People make stuff up, and then someone takes the hint and makes it reality. People are so oblivious to what's happening right under their noses.
5 Hours but close. lol @ 12 hrs.
|
|
|
|
Mikestang
Legendary
Offline
Activity: 1274
Merit: 1000
|
|
July 06, 2015, 10:05:57 PM |
|
This is ridiculous. I think people are ignoring the fact that 30 confirmations is freakin 12 hours. What a coincidence right? So everyone was talking about forks last week, and now there are rogues and new chains being built.
30 confirmations × ~10 min/block ÷ 60 min/hour = 5 hours. What coincidence are you alluding to? Sounds to me like you're some sort of conspiracy nut who's bad at math. If you talk bitcoin/blockchain, you talk forks; people have been talking about them since the network first came online. Back on topic, the latest CoinDesk article on the fork: http://www.coindesk.com/double-spending-risk-bitcoin-network-fork/
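For anyone checking the arithmetic, a one-liner (assuming the long-run average block interval of 10 minutes):

```python
# 30 confirmations at an average of ~10 minutes per block.
confirmations = 30
minutes_per_block = 10  # long-run average block interval
hours = confirmations * minutes_per_block / 60
print(hours)  # 5.0 hours, not 12
```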
|
|
|
|
edonkey
Legendary
Offline
Activity: 1150
Merit: 1004
|
|
July 07, 2015, 01:34:57 AM |
|
The only way to get rid of unethical pools is to vote with your hash (your mining hashpower). F2pool has already lost about 5% of its network hashing power since the fork. If you are a miner on F2pool looking for an alternative pool, why not consider Slush's pool? Slush ran the first mining pool, back in 2011, and he is also the founder of SatoshiLabs, the manufacturer of the Trezor hardware wallet. His pool is about 3% of the Bitcoin network; why not give him a boost, and some support for the innovations he has brought to Bitcoin since 2011? As a bonus, you receive a discount on the purchase of a Trezor just for mining there!

+1 for Slush's pool! I bailed on Antpool after the "Fork of July" (just made that up, but I'm probably not the first) and went back to Slush. It's a great pool, although since it's a little on the small side you have to be OK with some variance.
|
Was I helpful? BTC: 3G1Ubof5u8K9iJkM8We2f3amYZgGVdvpHr
|
|
|
almightyruler
Legendary
Offline
Activity: 2268
Merit: 1092
|
|
July 07, 2015, 03:20:49 AM |
|
As I understand it, f2pool didn't even have a copy of the block they were mining on top of. All they had was the bare minimum needed to build on top of it.
I believe cgminer (and probably others) has supported this trick for some time: if a backup pool (one you otherwise never send any work to) signals new block data before your primary, the miner starts work on the new block even though the primary has not yet acknowledged it. The implementation of detecting this from one pool to another would differ from that of an end miner, but I guess it's effectively the same thing.
|
|
|
|
Amitabh S
Legendary
Offline
Activity: 1001
Merit: 1005
|
|
July 07, 2015, 03:43:05 AM |
|
On a different note, here's one proposed solution for SPV nodes that does not require us to wait for 30 confirmations. Please comment on its correctness.
What you are proposing is to modify the client so that it would do what a full node would do, only truncated to the last 50 blocks. Just checking the version tag is not enough. This hacked client app must also know that the BIP66 rule became active at a certain block b_{n}. Otherwise it would have to keep fetching blocks until it finds a "95% majority" event, or reaches a block that predates the v3 software.

It's true: to determine the place in the last-thousand-blocks rule, you need to download and check the last thousand blocks (not just the last 50). A different consensus rule could be used that would be amenable to checking over shorter sequences. For example, each block could publish a "current block version" as it does now, _and_ a logarithmic average, with the changeover occurring when the logarithmic average gets within 0.05 of the higher integer version. The logarithmic average at block n would (as an example) be 999/1000 of the logarithmic average at the previous block, plus 1/1000 of the current block version. Using a logarithmic average rather than an average computed over the last thousand blocks means you could verify the value without looking more than one block into the past.

That is too complex. I believe a shortcut is as follows (quoted from another post):

On a different note, here's one proposed solution for SPV nodes that does not require us to wait for 30 confirmations. Please comment on its correctness.
Let us assume that an "invalid chain" cannot be more than 50 blocks.
Step 1. We boot our SPV node "now", several days after the July 4 fork.
Step 2. We receive the mth block, say b_m. We ensure that b_m is a version 3 block.
Step 3. We need to ensure that at least the 50 blocks before b_m were also correct. We download those blocks (not just the headers), i.e. blocks b_{m-50}, b_{m-49}, ..., b_{m-1}. Then we ensure that all were version 3, have correct signatures, etc., and that each correctly validates the next block (via prevBlockHash).
Step 4. We can start counting b_m as a block of a valid chain if the above checks pass.
Step 5. For any further received blocks, we follow steps 2 and 4 (skipping steps 1 and 3), since we have already validated the 50 blocks before the "boot" block. Hence validating against block b_{m-1} alone is sufficient.
What you are proposing is to modify the client so that it would do what a full node would do, only truncated to the last 50 blocks. Just checking the version tag is not enough. This hacked client app must also know that the BIP66 rule became active at a certain block b_{n}. Otherwise it would have to keep fetching blocks until it finds a "95% majority" event, or reaches a block that predates the v3 software.

I agree, we need a timestamp. Version 3 started from the orphaned block 363731 onwards. That block was generated at 2015-07-04 02:09:40, which turns out to be 1435955980000 in Unix time. We ensure that any block with a timestamp greater than that is checked using the above logic, with an appropriate modification (e.g., the v3 check in previous blocks stops at block 363730). This is still cheaper than running a full node. This is "mixed mode": operating as SPV, but bootstrapping by validating the previous 50 blocks (stopping at block 363730).
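The bootstrap in steps 1-5 could be sketched roughly as follows. Everything here is illustrative: the `Block` fields, `fetch_block`, and the height constants are hypothetical stand-ins for a real client's data structures, and real block validation involves far more than this.

```python
# Rough sketch of the proposed "mixed mode" SPV bootstrap: fully check the
# DEPTH blocks behind the boot block once, then trust the chain forward.
# fetch_block() and the Block fields are hypothetical stand-ins.

BIP66_HEIGHT = 363731  # first height where the v3 rule applies (per the post)
DEPTH = 50             # assumed maximum length of an invalid chain

class Block:
    def __init__(self, height, version, prev_hash, block_hash, sigs_valid):
        self.height = height
        self.version = version
        self.prev_hash = prev_hash
        self.hash = block_hash
        self.sigs_valid = sigs_valid  # stands in for full signature checks

def check_block(block):
    """Per-block checks: version (only at/after the BIP66 height) and signatures."""
    if block.height >= BIP66_HEIGHT and block.version < 3:
        return False
    return block.sigs_valid

def bootstrap_valid(fetch_block, m):
    """Validate blocks b_{m-DEPTH} .. b_m, checking each links to its parent.
    After this passes once, later blocks only need checking against their
    immediate predecessor (steps 2 and 4)."""
    prev = None
    for h in range(m - DEPTH, m + 1):
        blk = fetch_block(h)
        if not check_block(blk):
            return False
        if prev is not None and blk.prev_hash != prev.hash:
            return False
        prev = blk
    return True
```

The design choice being debated is visible in `check_block`: without the hard-coded `BIP66_HEIGHT`, the client could not tell a pre-fork v2 block from an invalid post-fork one without walking back to the "95% majority" event.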
|
|
|
|
yeponlyone
|
|
July 07, 2015, 05:13:46 AM |
|
Thanks for the swift replies. Both transactions are getting confirmations now.
|
|
|
|
|