Bitcoin Forum
  Show Posts
2561  Bitcoin / Development & Technical Discussion / Re: Time to wait for 99% surety of no double spend, accounting for pool hashrate % on: June 26, 2014, 04:40:14 PM
Mike Hearn was saying recently that ONLY Gavin was working on protocol stuff, mostly focusing on floating fees.
If Mike actually said that, he's confused (and heck, floating fees isn't even a protocol thing; it's wallet behavior).

Quote
GHash have proven there is a valid business model that other pools can adopt and erode GHash's share. *fingers crossed*
And what business model do you think they have?
2562  Alternate cryptocurrencies / Altcoin Discussion / Re: Recover wallet passphrase - special circumstance? on: June 26, 2014, 04:19:21 PM
You want the dumpwallet rpc.

Your computer doesn't know the passphrase.

Are you sure it was the dog? Perhaps instead you happened to hack some poor person's system, found their wallet encrypted but unlocked, and forgot that you have a moral obligation to report the issue rather than steal from them. Just checking… Smiley
2563  Bitcoin / Development & Technical Discussion / Re: I may have overlooked that satoshi spent an additional 25 BTC in 2009. Not sure. on: June 26, 2014, 04:07:44 PM
Something wrong with the forum software - it dropped the part where you say, "I am 100% sure satoshi was/is not a bad actor".
Thanks for showing so clearly that you missed my point and also don't understand Bitcoin technology.

Bitcoin's design is such that it could have been started by a malicious force and it wouldn't matter— in fact, in an abundance of prudence you should assume that it was and analyze it from that perspective: many severe bugs in the original software have been fixed that way.

Quote
Looking at founder actions and coin sources/destinations is another.
No, in fact, it's not. Because unlike centralized systems, the integrity of the software exists independently of, and orthogonally to, the system's creator. The system could have holy origins, but if it's bugged and bad, it's bugged and bad. It could have been created by the devil, but if the problems have been removed, then the problems are removed and the system is trustworthy.

Quote
can take in private to direct/quell the inquiry
This sounds like some veiled extortion. Am I reading you incorrectly?
2564  Bitcoin / Development & Technical Discussion / Re: How to modify "standard" multisig script to create multiple P2SH addresses? on: June 26, 2014, 04:02:48 PM
Yuck. BIP32 exists so that people can derive their partners' addresses, and do so privately with respect to the rest of the world (and without adding overhead on the network).
Come on man.  Nobody derives their partners' addresses, and certainly not for creating unique P2SH addresses. Smiley Would you really give the sender (thousands of potentially novice users) the three master pubkeys and have them create the proper P2SH address?  I mean, there isn't any good end-user wallet support for sending to a BIP32-based Pay2PubKeyHash address, much less P2SH.

If you are saying that the receiver (me) should use BIP32 to create the keys and the unique P2SH, that certainly would work.  Still, all three PubKeys in the script would need to be unique to maintain privacy.  As soon as bitcoind supports BIP32 I will gladly use that.  In the interim, since you are worried about space, let me know if you can think of a more space-efficient way to accomplish the stated goal in the redeemScript.
By "partners" I'm referring to the various multisig signers, not the sender; it lets them all continue generating a sequence of addresses in parallel without further coordination. A sender should never change anything they've been given (and a receiver should never accept a script unilaterally changed by the sender, if they somehow even notice it), and no one has specified an address encoding that changes things for them.

Bitcoind doesn't support any of the other alternatives here either: it won't generate "padded" P2SH and it won't redeem them, so I'm not following your support concern. The only space-efficient way to do this is to use the entropy that's already there and change one (or all) of the pubkeys.
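To make "change one (or all) of the pubkeys" concrete, here's a toy sketch: it serializes a 2-of-3 CHECKMULTISIG redeemScript and shows that swapping any key yields an unrelated script hash, hence a fresh P2SH address. The keys are placeholders (not real BIP32-derived points), and SHA-256 stands in for Bitcoin's actual HASH160 (RIPEMD160 of SHA256), since the point is only that the hash changes.

```python
import hashlib

# Opcodes for a 2-of-3 multisig redeem script:
# OP_2 <pk1> <pk2> <pk3> OP_3 OP_CHECKMULTISIG
OP_2, OP_3, OP_CHECKMULTISIG = b'\x52', b'\x53', b'\xae'

def redeem_script(pubkeys):
    """Serialize a 2-of-3 multisig redeem script (33-byte compressed keys)."""
    out = OP_2
    for pk in pubkeys:
        assert len(pk) == 33
        out += bytes([len(pk)]) + pk   # push opcode, then the key bytes
    return out + OP_3 + OP_CHECKMULTISIG

def script_digest(script):
    # Stand-in for Bitcoin's HASH160.  A P2SH address is base58check of
    # HASH160(redeem_script), so changing any pubkey gives a new address.
    return hashlib.sha256(script).hexdigest()

# Placeholder keys; in BIP32 each would be a child of one signer's xpub.
keys = [bytes([2 + (i & 1)]) + bytes(32) for i in range(3)]
base = script_digest(redeem_script(keys))

# Deriving the *next* address: each signer advances to the next BIP32
# child, so all three keys change and the resulting P2SH address is new.
keys2 = [k[:-1] + b'\x01' for k in keys]
assert script_digest(redeem_script(keys2)) != base
```

No padding or extra data is needed; the entropy already present in the pubkeys does all the work.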

There is at least one web wallet that does this, and so does http://ms-brainwallet.org/#bip32 — so it's not a complete unicorn.
2565  Bitcoin / Development & Technical Discussion / Re: How to modify "standard" multisig script to create multiple P2SH addresses? on: June 26, 2014, 09:51:54 AM
Yuck. BIP32 exists so that people can derive their partners' addresses, and do so privately with respect to the rest of the world (and without adding overhead on the network).
2566  Bitcoin / Development & Technical Discussion / Re: I may have overlooked that satoshi spent an additional 25 BTC in 2009. Not sure. on: June 26, 2014, 08:20:41 AM
The fixation on Satoshi makes me a little sad.  Bitcoin was created to provide a system of money without trust and central control: all the salient features are in the system, not in the motivations or non-public activities of its creator... so this kind of research is valuable only as a historical novelty, not in any practical way.

Making Bitcoin available was a gift to the world, but we seem to be repaying it with a non-stop effort at violating the privacy of someone who clearly prefers it. Plus, these analyses are seldom very accurate; in most cases it may just as well be someone else... and then some other poor schlub gets erroneously accused of being Satoshi, and potentially endangered when mentally ill people make assumptions about the returns on kidnapping. ::sigh::

I'd be less disappointed if the focus were just on early transactions, which are somewhat interesting on their own merits without the unnecessary game of pin-the-transaction-on-the-person.
2567  Bitcoin / Development & Technical Discussion / Re: Using a DHT to reduce the resource requirements of full nodes. on: June 26, 2014, 04:26:32 AM
Whenever you hear the term DHT what you should be hearing is "total farce of attack-resistance failure": existing DHT systems and proposals are trivially vulnerable to attack in a multitude of ways, and are fragile enough that they tend to perform poorly or fail outright with alarming frequency even when not attacked. They are generally an overcomplicated, under-performing solution that gets invoked in ignorance for every distributed-systems problem because they're the first distributed-systems tool people have heard of (sadly, "blockchain" seems to be stealing this role), much as "neural network" has infested lay understanding of machine learning, or as in other times "XML" was treated as a magical solution for inter-working serialization in places where it made little sense.

The few DHTs that exist or are proposed which are attack-resistant in any serious way — things like CJDNS's routing or Freenet — work by imposing a 'social' network link topology on the network which is required (by the security assumptions) to be largely sybil-proof.  ... A pretty strong requirement.

Fortunately, this is neither here nor there, because the requirements of the Bitcoin system are almost but not completely unlike the services provided by a DHT.  DHTs provide users with random access to unordered data; in Bitcoin there is no access pattern that resembles a hash-table lookup.

To verify a block we must confirm that the inputs for the transactions it contains are spendable — that they were previously created in the same chain and have not (yet) been spent. For this, all nodes require the same data, not random data. We do not even require the full prior transactions, just the TXOuts (potentially reducing tens of kilobytes of data to tens of bytes).
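A minimal sketch of that data model (names and structure are illustrative, not Bitcoin Core's): validation only needs a set of unspent txouts keyed by (txid, index).

```python
# Toy UTXO set: validating spends needs only the unspent outputs,
# keyed by (txid, vout) -- not the full prior transactions.
utxo = {}  # (txid, vout) -> value in satoshis

def apply_tx(txid, inputs, outputs):
    """Consume each referenced output; fail if any is missing or spent."""
    for outpoint in inputs:
        if outpoint not in utxo:
            raise ValueError("missing or already-spent input: %r" % (outpoint,))
    for outpoint in inputs:          # all inputs checked before mutating
        del utxo[outpoint]
    for i, value in enumerate(outputs):
        utxo[(txid, i)] = value

apply_tx("coinbase0", [], [50_0000_0000])            # create a txout
apply_tx("tx1", [("coinbase0", 0)], [49_0000_0000])  # spend it
try:
    apply_tx("tx2", [("coinbase0", 0)], [1])         # double spend: rejected
except ValueError:
    pass
```

Every node applies the same blocks, so every node needs exactly this same set; there is nothing hash-table-random about the access pattern.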

If we do not have this data but could verify it if it were handed to us (e.g. if we'd been tracking a committed UTXO-set root hash), our peer could provide it for us along with the block.  So long as we have _any_ peer willing to give us the block, we have a guaranteed way to obtain the required data — immediately eliminating most of the DHT attack weaknesses (in the literature, systems with properties like this are sometimes called D1HTs).

Unfortunately, obtaining just-in-time data comes with a large bandwidth overhead: if you're storing nothing at all, then any data you receive must come with hash-tree fragments proving membership. With conventional hash trees each txin requires on the order of 768 bytes of proof data... and with current technology bandwidth is far more limited than storage, so this may not be a great tradeoff.  One possibility is that with some minor changes nodes could randomly verify fractions of blocks (and use information-theoretic PIR to hide from peers which parts they are verifying), and circulate fraud notices (the extracted data needed to prove to yourself that a block is bad) if they find problems.  This may be a good option to reduce bandwidth usage for edge clients which currently verify nothing, but it's not helpful overall (since one hop back from the edge the host must have the full block)... I'd say it would be a no-brainer, but getting the rarely executed fraud-proof codepaths correct may be too much of an engineering challenge (considering the level of failure alt implementations have had with the consensus rules in Bitcoin as is).
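Where the per-txin proof overhead comes from can be sketched with a toy Merkle branch (Bitcoin-style double-SHA256 and odd-level duplication; the cost is ~32 bytes per tree level, so deep trees mean fat proofs):

```python
import hashlib

def h(x):
    return hashlib.sha256(hashlib.sha256(x).digest()).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # Bitcoin-style odd-node duplication
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def branch(leaves, idx):
    """Sibling hashes proving leaves[idx] is under the root."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[idx ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(leaf, idx, proof, root):
    acc = h(leaf)
    for sib in proof:
        acc = h(sib + acc) if idx & 1 else h(acc + sib)
        idx //= 2
    return acc == root

leaves = [b"tx%d" % i for i in range(1000)]
root = merkle_root(leaves)
proof = branch(leaves, 123)
assert verify(leaves[123], 123, proof, root)
assert len(proof) * 32 == 320   # 1000 leaves -> 10 levels of 32-byte hashes
```

A stateless verifier needs such a branch for every txout it's handed, which is where the hundreds of bytes per txin come from.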

Managing storage also does not need any kind of sophisticated DHT: since 0.8 the reference client separates storage of the UTXO set and the block data. The UTXO set, stored in the chainstate directory, is the exclusive data structure used for verifying new blocks.  The blocks themselves are used only for reorganizations and for feeding new nodes that request them; there is no random access used or required to transaction data.  If you remove the test for block data in init.cpp, the node will happily start up with all the old blocks deleted and will work more or less correctly until you call a getblock/etc RPC that reads block data, or until a newly initializing peer requests old block data from you.  The chainstate data is currently about 450MB on disk, so with that plus some recent blocks for reorganization you can already run a fully verifying node.   The task of initializing a new node requires verifying the historic blocks, so to accommodate that in a world with many pruned nodes we'd want to add some data to the addr messages, but a couple of ranges of served blocks is sufficient — no need for routing or other elaboration. And block ranges match the locality of access, so there is no overhead (beyond a couple of bytes in the addr messages).
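A sketch of that hypothetical addr extension (no such field is actually specified; peer names and ranges here are made up): each peer advertises a couple of served block ranges, and a syncing node picks peers by the height it needs.

```python
# Hypothetical "served block ranges" a peer might advertise in addr
# messages: a list of (low, high) height ranges, inclusive.
peers = {
    "node_a": [(0, 100_000)],                    # archival up to 100k
    "node_b": [(0, 2_000), (250_000, 300_000)],  # early blocks + recent window
    "node_c": [(299_000, 300_000)],              # heavily pruned, tip only
}

def who_can_serve(height):
    """Peers able to serve a block at the given height."""
    return sorted(p for p, ranges in peers.items()
                  if any(lo <= height <= hi for lo, hi in ranges))

assert who_can_serve(1_500) == ["node_a", "node_b"]
assert who_can_serve(299_500) == ["node_b", "node_c"]
```

Because initial sync reads blocks in order, contiguous ranges line up with the access pattern; there's no per-block routing to do.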

The change to the reference client to separate out the UTXO set in this way in the "ultraprune" patchset was no accident: it was a massive engineering effort specifically done to facilitate the transition to a world where no single node is required to store all the historical data, without any compromise to the security model. You can see this all discussed in detail on the bitcoin-development list, on and off, going back years.

I don't think any of this is incompatible with what you were _actually_ thinking: you're no fool, and your intuition of what would actually work seems more or less the same as mine. But the fuzzy arm-wave about just scattering block data to peers with no mind to locality or authentication is a common trope of people who have no clue about the security model and who are proposing something which very much won't actually work, so I think it's important to be clear: keeping pruned validation data locally and not storing blocks historically is already the plan of record, it's not a DHT, and the DHT proposals are usually completely misguided.

Quote
First for efficiency reasons it would make sense for all DHT nodes to retain a "full local" cache of recent blocks.  How "recent" is recent probably needs some research but the goal would be to reduce the overhead in the event of reorgs.  I think keeping "one block day" (144 blocks) of the tip of the blockchain would be a good compromise between storage and efficiency.

Sipa's analysis about a year ago was that block access frequencies follow an exponentially decaying pattern to ~2000 blocks back, and beyond that have uniform access probability (further back than 2000, a requesting host fetches all of them in order). So it would be prudent for hosts to store as much of the most recent ~2000 blocks as they can, and then use additional space for a random range or two from the history.  I'd consider 144 a sane minimum, because with less than that long reorganizations may cause severe problems.
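One way a storage policy following that access pattern could look, as a hedged sketch (the 2000-block recent window and the budget numbers are illustrative, not anything implemented):

```python
import random

# Keep the most recent `recent` blocks (exponentially decaying access
# frequency), then spend any remaining budget on one random contiguous
# historical range (uniform access; contiguous ranges match how syncing
# peers actually request blocks, so there's no locality penalty).
def choose_blocks(tip_height, budget, recent=2000, rng=random):
    recent_lo = max(0, tip_height - recent + 1)
    kept = set(range(recent_lo, tip_height + 1))
    spare = budget - len(kept)
    if spare > 0 and recent_lo > 0:
        start = rng.randrange(0, max(1, recent_lo - spare))
        kept |= set(range(start, start + min(spare, recent_lo)))
    return kept

kept = choose_blocks(tip_height=300_000, budget=5000)
assert all(h in kept for h in range(298_001, 300_001))  # recent window kept
assert len(kept) <= 5000
```

A minimum `recent` of ~144 blocks would match the one-block-day floor suggested in the quote.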
2568  Bitcoin / Development & Technical Discussion / Re: implementation of a redlisting mechanism on: June 25, 2014, 07:07:02 PM
It's odd that the success-rate graphs in the paper do not account for the fact that every non-enforcing miner will automatically keep attempting to restore transactions from blocks that fell out of the chain from their perspective, forcing the censoring miner to constantly redo his reorg attack until he gives up (or until the next censored transaction shows up). Unless the attacker can also author a fraudulent double-spend which conflicts with the censored transaction, or has majority hash-power, the censored transaction will eventually make it into the chain; for any selection of reorg tolerance the only result is a delay (and massive theft exposure for unrelated transactions during the reorganizations that will constantly happen until the miner turns off the feature or no more censored transactions are authored).
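The "only a delay" point can be quantified with the catch-up probability from the Bitcoin whitepaper: an attacker with hashrate fraction q who is z blocks behind the honest chain overtakes it with probability (q/p)^z, where p = 1 - q. For any q < 0.5 each reorg attempt is a losing race, so a censoring minority miner can only stall the transaction, not bury it.

```python
# Gambler's-ruin catch-up probability from the Bitcoin whitepaper:
# an attacker with hashrate fraction q, z blocks behind, overtakes
# the honest chain with probability (q/p)^z where p = 1 - q.
def catchup_probability(q, z):
    p = 1.0 - q
    return 1.0 if q >= 0.5 else (q / p) ** z

# A 30% miner trying to reorg out a transaction buried 6 blocks deep
# succeeds well under 1% of the time per attempt:
assert round(catchup_probability(0.3, 6), 4) == 0.0062
# With majority hashrate the attacker always eventually wins:
assert catchup_probability(0.6, 6) == 1.0
```

And since non-enforcing miners re-include the transaction after every failed attempt, the censor has to win this race over and over, forever.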

But it does make some nice examples for the risk created by mining operators (pools, cloud facilities, etc.) with a non-trivial share of the total hashrate, as they would be pretty successful at double spending theft— just not at censorship.
2569  Bitcoin / Development & Technical Discussion / Re: Programmed self-destruction and how to prevent it on: June 25, 2014, 04:00:47 PM
This paper was already discussed extensively in this subforum: https://bitcointalk.org/index.php?topic=600436.0

It exhibits a number of serious technical misunderstandings of the Bitcoin system which moot some of its arguments.

In the future please search before posting.
2570  Bitcoin / Bitcoin Discussion / Re: Thought experiment: Own the bitcoin network by paying off node operators on: June 25, 2014, 03:58:09 PM
Bitcoin is based on trustless verification.

To the greatest extent possible we do not believe our peers at all.  They make claims— and we verify them.  Because we check for ourselves, we cannot be deceived by peers lying to us about most of the properties of the system.  Most of the time the most such an attack could hope to do is isolate us from the true state of the network— a denial of service attack... but as soon as we find a single peer that tells us the honest state we'll recognize it and accept it.

The only element of Bitcoin which cannot be trustlessly verified is the ordering of transactions— one ordering of one set of valid transactions is just as good as any other ordering of any other set of valid transactions, so we can't distinguish the real one without the help of a consensus algorithm. So we use mining to produce the ordering, and here again your node modification attack doesn't help because to substantially change the mining ordering you need to apply more computing power than the rest of the network.
2571  Bitcoin / Development & Technical Discussion / Re: Split block reward to reduce miner variance on: June 25, 2014, 03:52:27 PM
Why not enforce each block to have 25 coinbase tx's to different payout addresses and force the hash of each of those addresses combined with a subset of tx's in the block to be less than a specified difficulty value.  Nobody gets a payout until 25 such hashes meets a difficulty threshold.
Any kind of "you must find X (X>1) results meeting condition Y" requirement is not progress-free. Rather than being like a lottery where you win instantly by chance — even on a single lucky roll — with probability linearly proportional to how much you play, these schemes accumulate work, and as a result faster participants are much more likely to win than under normal mining.

If you're not seeing that, imagine an extreme case where you must have a billion difficulty-one solutions to form an acceptable block. In that case a miner with 40% hash-power would win virtually every time against a number of other miners with 10% and 20%.
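This is easy to check by simulation. The sketch below hands out sub-solutions one at a time in proportion to hashrate and compares first-to-1 (progress-free, like normal mining) with the proposal's first-to-25 variant; the hashrate split and round counts are arbitrary illustration.

```python
import random

# Each trial: award sub-solutions in proportion to hashrate; the first
# miner to collect n_solutions wins the round.
def race(shares, n_solutions, rng):
    counts = [0] * len(shares)
    while max(counts) < n_solutions:
        r, acc = rng.random(), 0.0
        for i, s in enumerate(shares):
            acc += s
            if r < acc:
                break
        counts[i] += 1          # falls through to last miner on float edge
    return counts.index(max(counts))

rng = random.Random(1)
shares = [0.4, 0.2, 0.2, 0.1, 0.1]   # a 40% miner vs. smaller rivals
rounds = 2000
win_1 = sum(race(shares, 1, rng) == 0 for _ in range(rounds)) / rounds
win_25 = sum(race(shares, 25, rng) == 0 for _ in range(rounds)) / rounds
assert abs(win_1 - 0.4) < 0.05   # progress-free: wins ~= hashrate share
assert win_25 > win_1 + 0.2      # accumulation: super-linear advantage
```

With first-to-1, the 40% miner wins ~40% of rounds; with first-to-25 its lead compounds and it wins nearly every round, which is exactly the centralization pressure being warned about.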
2572  Bitcoin / Hardware / Re: ANN: BITMAIN has Tested Its 28nm Bitcoin Mining Chip BM1382 on: June 24, 2014, 08:20:42 PM
They delayed the launch.
2573  Bitcoin / Development & Technical Discussion / Re: Split block reward to reduce miner variance on: June 24, 2014, 04:43:52 PM
It's very similar to p2pool, though p2pool has a much larger merging window (8640 shares instead of 25) and isn't forced by the network.

(Not the sum-difficulty part — that wouldn't be progress-free: it would give super-linear returns to faster miners; that part is a bad idea.)
2574  Bitcoin / Development & Technical Discussion / Re: Yet another Pool Problem solution : Mining multiple blocks simultaneously ? on: June 24, 2014, 04:32:41 PM
The p2pool sharechain has an expected interval of ~30 seconds per share, and that is probably close to what is achievable.  Maybe with a greenfield solution optimized to be as lightweight and low-latency as possible you might get that down to 10 seconds.
P2Pool's interval was previously 10 seconds, but it had to be changed after ASICs started shipping, because so many of them had very high latencies that 10 seconds was somewhat problematic.
2575  Bitcoin / Hardware / Re: [ANN] Spondoolies-Tech - Best W/GH/s ratio, Best $/GH/s ratio on: June 24, 2014, 01:36:23 PM
Please keep this thread on topic.  All posts in a thread should be strictly related to the subject set by the original post.  The incessant name-calling and allegations related to other vendors and unrelated to this thread do not belong here.  Complaints about things being removed from this thread do not belong here. (I will also remove this post, once it's been around enough for people to see it— replies to it, again, do not belong here).
2576  Bitcoin / Mining speculation / Re: How much are you willing to pay for Bitmain (Antminer) S3? on: June 24, 2014, 04:05:10 AM
Delaying the release a week affects your price that much?
7 days is a hashrate change of 1.072x under a 1%/day assumption, which suggests you should pay 0.9327x the price you were willing to pay before; recent growth looks more like 2%/day, which would be 1.149x the hashrate in 7 days, suggesting you should pay 0.87x the price. Obviously the S3 itself shipping in quantity will also drive rates higher…
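The discount arithmetic above, stated as code (the growth rates are assumptions about network hashrate, not measurements):

```python
# If network hashrate grows by g per day, a rig delivered d days later
# earns roughly 1/(1+g)^d of the income, so the price you should be
# willing to pay scales down by the same factor.
def price_factor(daily_growth, delay_days):
    return 1.0 / (1.0 + daily_growth) ** delay_days

assert round(1.01 ** 7, 3) == 1.072           # 1%/day for a week
assert round(price_factor(0.01, 7), 4) == 0.9327
assert round(1.02 ** 7, 3) == 1.149           # 2%/day for a week
assert round(price_factor(0.02, 7), 2) == 0.87
```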

Obviously the exact timing of the target matters when you're getting down to details like this, but in general small timing differences can make a big difference in income.  It may not matter that much unless you're already right at the verge of a sure loss, but the trend with mining hardware lately has been to sell at near or across that boundary.
2577  Bitcoin / Development & Technical Discussion / Re: For fun: the lowest block hash yet on: June 22, 2014, 06:17:23 AM
Not too much has happened, I was waiting to get one with apparent work > 2^80 to update again.

Best right now is 0x000000000000000000049bb3b6b9c135f66536e066704369905043df809c2441 ... or about 2^77.79, the cumulative block measured work level on the network is at 2^79.3 so we're behind.
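For reference, the "apparent work" figure is just 256 minus the base-2 log of the hash value, since a hash h represents about 2^256/h expected hash operations:

```python
import math

# Apparent work of a block hash, in bits: a hash h represents about
# 2^256 / h expected hash operations, i.e. 256 - log2(h) bits of work.
def apparent_work_bits(block_hash_hex):
    h = int(block_hash_hex, 16)
    return 256 - math.log2(h)

bits = apparent_work_bits(
    "000000000000000000049bb3b6b9c135f66536e066704369905043df809c2441")
assert abs(bits - 77.8) < 0.05   # ~2^77.8, matching the figure above
```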
2578  Bitcoin / Mining speculation / Re: How much are you willing to pay for Bitmain (Antminer) S3? on: June 22, 2014, 01:17:28 AM
For me I can make .9 BTC work, even 1.0 BTC, but higher than 1.0 BTC
How? this doesn't make a lot of sense to me.

Assuming 1% hashrate growth per day (which is lower than we've seen anytime in recent memory), $0.15/kWh power, $593/BTC, and delivery on July 15th, the income should peak out at 0.92 BTC in a couple of months.   1.0 BTC sounds like a nearly guaranteed loss — it's a strong bet that hashpower will suddenly stop growing — unless you're not paying for power.
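A sketch of the kind of income model behind that estimate (all parameters here are illustrative assumptions, not the exact figures used above): a fixed-hashrate rig's share of the block reward decays as the network grows, so cumulative income converges to a finite ceiling.

```python
# Toy mining-income model: a rig's share of the network's ~3600 BTC/day
# (25 BTC x 144 blocks, 2014-era subsidy) shrinks as hashrate grows.
def cumulative_income_btc(rig_th, net_th, daily_growth, days,
                          btc_per_day_network=3600.0,
                          power_kw=0.0, usd_per_kwh=0.15, usd_per_btc=593.0):
    total = 0.0
    for d in range(days):
        share = rig_th / (net_th * (1.0 + daily_growth) ** d)
        power_cost_btc = power_kw * 24 * usd_per_kwh / usd_per_btc
        total += max(0.0, share * btc_per_day_network - power_cost_btc)
    return total

# With 2%/day growth, almost all lifetime income arrives early; income
# earned after ~6 months is negligible (illustrative rig/network sizes):
early = cumulative_income_btc(0.5, 100_000, 0.02, 180)
late = cumulative_income_btc(0.5, 100_000, 0.02, 3650)
assert late - early < early * 0.05
```

This is why a purchase price at or above the projected lifetime income is effectively a bet that hashrate growth stalls.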

I loved the Bitmain S1 (the S2 appeared less often; I'm glad to see the S3 looking more like the S1 design), but break-even before even considering the costs of power/cooling/PSU or trouble is not a good deal, not for the miner or the ecosystem. I hope Bitmain will be willing and able to get a bit more aggressive with the prices.

There is considerable risk in mining— including the risk that the Bitcoin community chooses to make some miner breaking POW changes to improve some of the centralization problems. I'm not saying people should be guaranteed a windfall, but if people are taking on risks without a solid prospect of being compensated for them I think they're crazy.
2579  Bitcoin / Pools / Re: [460 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: June 21, 2014, 08:17:31 PM
That does not invalidate my point: why make more than trivial commits if miners are being asked to pay another developer in an opt-out manner?
Having been around p2pool longer than most, I can't agree with your views here. The donation income on P2Pool has always been trivial — sure, aggregates sound impressive until you realize most of it came in at $10 price levels — and I understand forrestv spent a fair amount buying mining hardware (since vendors have never eagerly tried to work with p2pool).  If someone were contributing substantially, I have no doubt that forrestv would happily share donations with them.

From my own perspective p2pool sits in something of a design minimum: it does everything _I_ want well.  There are some tweaks here or there that I might like (e.g. support for failover to another bitcoind) which don't even rise to the level of wanting them enough to go dig into the codebase and do it.   Further development of P2Pool would mean things like reorganizing the sharechain to support other payout schemes or reduce variance for small miners... that isn't small work that most people are just going to go out and do — my experience with Bitcoin Core tells me that contributors in that space almost don't exist.  For myself, I'm perfectly happy with my level of share variance on P2Pool, so I can certainly understand why others aren't rushing in.
2580  Bitcoin / Press / Re: [2014-06-18] CoinDesk: List of Possible Silk Road Bitcoin Bidders Leaked on: June 19, 2014, 06:56:52 AM
This wasn't really a bidders list— it was a list of parties who'd contacted USMS with questions.