Bitcoin Forum
  Show Posts
Pages: « 1 ... 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 [119] 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 ... 288 »
2361  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: August 31, 2014, 10:50:41 PM
An altcoin making a technical change? Keep dreaming. Smiley  I'm aware of none that have this.

It isn't available in testnet either.  It isn't just a question of "enabling it"— you have to prevent it from being a memory-exhaustion attack via exponential growth. (This isn't theoretically hard, but it would, you know, require a little bit of work.)

Care to describe your protocol some? It turns out that a lot of things are possible with a bit of transformation.
2362  Bitcoin / Development & Technical Discussion / Re: Pruning of old blocks from disk on: August 30, 2014, 06:09:59 PM
The reason the blocks are accessed in so few places now is because 0.8 did most of the work for this change, it just didn't carry through all the way to the deletion part.

The deletion itself is already implemented in patches staged in github which will probably be merged soon.  It requires turning off  the node-network service bit as failing to do so would be very detrimental to the network (cause new nodes to be unable to bootstrap in reasonable time). It's also important to not remove the most recent blocks, since they are potentially needed for reorganization.

What I hope to ultimately do here is have a knob where you can set the amount of space you'd like to use (subject to a minimum that is at least enough for the most recent blocks and the chainstate); the host would then pick a uniformly random sequence of block ranges to keep, up to that maximum. Bootstrapping could then continue normally even if no single peer had all the blocks. The caching approach you mention doesn't seem to make a lot of sense to me, since access to very old blocks is uniformly probable... basing what you save on requests opens stupid DoS attacks where someone fetches block 66,666 from all nodes over and over again to make that one block super popular at the expense of all others. Smiley

If you'd like to work on this, I might suggest starting from 4481+4468 and working on making the depth to which undo files are kept different from block files (e.g. keep undo data for 2016 blocks and blocks for 288), and making it so that if blocks are needed for a reorg beyond the block limit, the node can go re-fetch them.
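A minimal sketch of that retention split, with hypothetical names and the example depths above (undo data kept for 2016 blocks, raw blocks for 288); real pruning in Bitcoin Core operates on whole block files rather than individual heights:

```python
# Illustrative only: decide whether data for a given height may be
# deleted, keeping undo data much deeper than raw block data so that
# reorgs past the block limit only require re-fetching blocks.
BLOCK_KEEP_DEPTH = 288    # raw blocks: roughly two days
UNDO_KEEP_DEPTH = 2016    # undo data: roughly two weeks

def prunable(height, tip_height, kind):
    depth = tip_height - height
    if kind == "block":
        return depth >= BLOCK_KEEP_DEPTH
    if kind == "undo":
        return depth >= UNDO_KEEP_DEPTH
    raise ValueError("unknown kind: %r" % kind)
```

For example, with the tip at 300,000, block data at height 299,712 or below is deletable while its undo data survives until depth 2016.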

Another path would be working on improving key-birthday support in the wallet— e.g. comprehensive support for dates on imported keys, and support for rescanning ranges where you don't have the blocks using bloom filtering.

A third path is working on the encoding for signaling sparse blocks— I'd like to see it so that nodes have a compact random seed (e.g. could just be 32 bits) where knowing the current height, seed, and number of block ranges, you'd know which ranges a node has (e.g. working like how selecting N uniform random entries over large data doesn't need more than N storage).  I've got one solution, but it requires O(n) computation with the current height per peer to figure out what ranges a peer currently has, and perhaps better is possible.  My thinking here is that we'd allocate a service bit that says "I keep the last 2016 blocks plus some sparse range" and a network message to advertise the sparse range seed on connect. (Later, we'd create a new addr message that lets you just include the seed directly).
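As a rough illustration of the seed idea (a naive stand-in, not the O(n) scheme mentioned; all names and parameters are hypothetical): any node that knows a peer's compact seed, the current height, and the number of ranges can recompute which ranges that peer keeps, with no inventory ever exchanged.

```python
import random

def kept_ranges(seed, tip_height, n_ranges, range_len=1000):
    # The seed fully determines the choice, so any node knowing
    # (seed, tip_height, n_ranges) computes the same answer.
    rng = random.Random(seed)
    starts = sorted(rng.randrange(0, max(1, tip_height - range_len))
                    for _ in range(n_ranges))
    return [(s, s + range_len) for s in starts]
```

A real design would also want the selection to stay stable as the height grows, which this naive version does not guarantee.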



2363  Bitcoin / Development & Technical Discussion / Re: Really Really ultimate blockchain compression: CoinWitness on: August 29, 2014, 07:32:42 PM
As people are talking about scalability again, is there any new development in SCIP?
Yes, check out the recent paper on  "Scalable Zero Knowledge via Cycles of Elliptic Curves": http://eprint.iacr.org/2014/595

Which is a pretty wild technique.  Basically they managed (through an enormous amount of computation) to find a pair of pairing-compatible elliptic curves such that the number of points on one is the size of the finite field the other is defined over, and vice versa.

What this means is that in a ZKP written using curve A it's cheap to run the verifier for a ZKP written in curve B, and for a ZKP in curve B it's cheap to verify proofs for curve A.

They take this structure and write proofs of the form "verify a ZKP in the other curve of the machine state; execute one more instruction on top of that state". Then they alternate these constructions, allowing for completely linear scaling.

The downside is that this magical stunt requires curves where the ultimate verifier (not inside a proof, but on a real computer) is a fair bit slower. It also only allows for 80-bit security (the size ratios make achieving 128-bit security much harder). And it only helps for problems that work by repeated application of a universal circuit, like running TinyRAM, rather than running a hard-wired application-specific circuit— which many applications would prefer for performance.
2364  Bitcoin / Development & Technical Discussion / Re: Share your ideas on what to replace the 1 MB block size limit with on: August 28, 2014, 10:43:47 PM
The next question is: Can the max block size be made flexible (for example: a function of the median size of the previous 2016 blocks) as a phase in the process of introducing block propagation efficiency as a consensus change?
Letting miners _freely_ expand blocks is a bit like asking the foxes to guard the hen-house— it's almost equivalent to no limit at all. Individual miners have every incentive to put as many fee-paying transactions in their own blocks as they can (especially presuming propagation delays are resolved, or the miner has a lot of hashpower and so propagation hardly matters)— because they only need to verify once, the cost of a few more CPUs or disks isn't a big deal. In theory (I say this because miners have been bad with software updates), they can afford the operating costs of fixing things like the integer overflows that arise with larger blocks, especially since they usually have little customization— other nodes, not so much?

Since miners can always put fewer transactions in, it's not unreasonable for the block chain to coordinate that soft-limit (in the hopes that baking in the conspiracy discourages worse ones from forming). But in that case it shouldn't be based on the actual size, but instead on an explicit limit, so that expressing your honest belief that the network would be better off with smaller blocks is not at odds with maximizing your revenue in this block.

If you want to talk about new limit-agreement mechanisms, I think that txout creation is more interesting to limit than the size directly though... or potentially both.

Even for these uses, median might not be the right metric, however— consider that recently it would have given control to effectively a single party; at the moment it would effectively give it to two parties. You could easily use the median for lowering the limit and (say) the 25th percentile for raising it... though even that's somewhat sloppy, because having more than half the hashrate in your conspiracy means you can have all the hashrate if you really want it. Sad
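A hedged sketch of that asymmetric-percentile idea, operating on miners' *declared* limits rather than actual block sizes as argued above (all names illustrative):

```python
def next_max_block_size(declared_limits, current_max):
    # Lowering needs only a majority (median of declared limits);
    # raising needs ~75% support (25th percentile above the current
    # limit), so a bare majority can't inflate the limit alone.
    xs = sorted(declared_limits)
    median = xs[len(xs) // 2]
    pct25 = xs[len(xs) // 4]
    if median < current_max:
        return median
    if pct25 > current_max:
        return pct25
    return current_max
```

As the post notes, even this is sloppy against a conspiracy holding more than half the hashrate.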
2365  Bitcoin / Bitcoin Technical Support / Re: Bitcoin Core stopped connecting??? on: August 28, 2014, 01:53:01 AM
Are you in China?
2366  Bitcoin / Development & Technical Discussion / Re: Bitcoin blockchian in sql db on: August 24, 2014, 11:41:13 PM
Your English seems fine— far better than my {whatever your native language is}, no doubt.

Importing Bitcoin data into a SQL database is tricky; you end up with hundreds of gigabytes of data once you're out of Bitcoin's pretty efficient encoding.   If you're prepared to deal with that, look into Bitcoin ABE.
2367  Economy / Trading Discussion / Re: custom exchange rate between two parties on: August 24, 2014, 11:39:10 PM
Capability?  Agreements between two people are between two people; they can do whatever they like.  Perhaps you intended to ask a question directed at actual technology? If so— I can't extract it from your message.
2368  Bitcoin / Development & Technical Discussion / Re: [ANN] Scalable Bitcoin Mixing on Unequal Inputs on: August 22, 2014, 08:09:51 PM
Neat, I will be sure to read this.

The second paper, forthcoming, is on a new mixing primitive, CoinShift, based on TierNolan's atomic cross-chain solution.
Have you seen https://bitcointalk.org/index.php?topic=321228.0 ?
2369  Bitcoin / Development & Technical Discussion / Re: Bitcoin Core Replace By Lowest Hash on: August 22, 2014, 05:10:42 PM
the block with the most work would be the block with the lowest hash
Incorrect. The block with the best target (more specifically, the greatest sum of work over the chain's target history) is the one with the most work. Having a lower hash is just chance.

The protocol is that the first-seen valid chain with the greatest sum of work (where a block's work is defined by the target specified in its bits header) is the correct one. What you are suggesting would harm convergence, especially in an adversarial model— as CJYP notes.  Imagine nodes choose the lowest hash in a race: say you find an unusually rare block; since you are sure you'll win any equal-length race, instead of announcing the block you keep it secret until you hear competition on the network.

Even absent adversarial miners, lowest-hash is less stable and less safe for 1-confirmation transactions: since the network is not synchronous, some miners can just be late to report and late to switch. Right now, a few seconds after you've seen a block and not seen any competition, it's fairly likely that the block will not be orphaned; with lowest-hash-wins it would be less likely, more so when you consider that miners wouldn't have anywhere near as much incentive to optimize block forwarding.
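To make the distinction concrete, here is a small sketch (illustrative names) of how chain selection actually weighs blocks: a block's work is the expected number of hashes implied by its target, in the spirit of Bitcoin Core's GetBlockProof, and ties between equal-work chains go to the first seen, never to the lower hash.

```python
def block_work(target):
    # Expected hash attempts to meet this target; the particular
    # hash value a miner happened to find contributes nothing extra.
    return (1 << 256) // (target + 1)

def best_chain(chains):
    # Greatest total work wins; max() keeps the earlier (first-seen)
    # chain on ties, mirroring the convergence rule described above.
    return max(chains, key=lambda chain: sum(map(block_work, chain)))
```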

 
Quote
it immediately ends the current situation where it is more profitable to mine blocks with less transactions
Miners don't actually give a darn; otherwise they'd do the thing P2Pool has done for years and would set up the ability to relay blocks taking advantage of the transactions sent first. (Which you can get for all blocks by running the relay node client, http://sourceforge.net/p/bitcoin/mailman/message/32676543/)
2370  Bitcoin / Development & Technical Discussion / Re: Share your ideas on what to replace the 1 MB block size limit with on: August 22, 2014, 05:31:13 AM
Some of the 1MB-block supporters believe we should keep the limit forever, and move 99.9% of the transactions to off-chain. I just want to point out that their logic is completely flawed.
Can you cite these people specifically?  The strongest I've seen is that it "may not be necessary" and shouldn't be done unless the consequences are clear (including mining incentives, etc), the software well tested, etc.

Quote
I've been maintaining a node with my 100Mb/s domestic connection since 2012. It takes less than 800MB of RAM now which I have 24GB. CPU load is <0.5% of a Core i5. Harddrive space is essentially infinite. I don't anticipate any problem even if everything scales up by 10x, or 100x with some optimization.
Great. I live in the middle of silicon valley and no such domestic connection is available at any price (short of me paying tens of thousands of dollars NRE to lay fiber someplace). This is true for much of the world today.

Quote
Therefore, people are not running full node simply because they don't really care. Cost is mostly an excuse.
I agree with this partially, but I know it's not at all the whole truth of it. Right now, even on a host with solid gigabit connectivity you will take days to synchronize the blockchain— this is due to dumb software limitations which are being fixed... but even with them fixed, on a quad-core i7 3.2GHz and a fast SSD you're still talking about three hours. With 100x that load you're talking about 300 hours— 12.5 days.

Few who are operating in any kind of task driven manner— e.g. "setup a service" are willing to tolerate that, and I can't blame them.

Quote
People are not solo mining mostly because of variance,
There is no need to delegate your mining vote to a third party to mine— it would be perfectly possible for pools to pay according to shares that pay the pool, regardless of where you got your transaction lists from— but they don't do this.

Quote
At the end of the day, theoretically, we only require one honest full node on the network to capture all the wrongdoing in the blockchain, and tell the whole world.
And tell them what?  That hours ago the miners created a bunch of extra coin out of thin air ("no worries, the inflation was needed to support the economy/security/etc., and there is nothing you can do about it because it's hours buried and the miners won't reorg it out, and any attempt to do so opens up a bunch of double-spending risk")—  How exactly does this give you anything over a centralized service that offers to let people audit it?  In both cases there can always be some excuse good enough to justify compromising the properties once you've left the law of math and resorted to enforcement by men and politics.

In the whitepaper a better path is mentioned that few seem to have noticed: "One strategy to protect against this would be to accept alerts from network nodes when they detect an invalid block, prompting the user's software to download the full block and alerted transactions to confirm the inconsistency". Sadly, I'm not aware of any brainstorming about what it would take to make that a reality beyond a bit I did a few years ago. (...even if I worked on Bitcoin full time I couldn't possibly personally build all the things I think we need to build; there just aren't enough hours in the day.)

That isn't the only tool in the belt, but I point it out to highlight that what you're suggesting above is a real and concerning relaxation of the security model, which moves Bitcoin closer to the trust-us-we're-lolgulated banking industry... and it is not at all obvious to me that such compromises are necessary.

It's beyond infuriating to me when I hear a dismissive tone, since pretending these things don't have a decentralization impact removes all motivation to work on the technical tools needed to bridge the gap.

Quote
The real problem for scaling is probably in mining.
I'm not sure why you think that— miners are paid for their participation. Some of them have been extracting revenue in the hundreds of thousands of dollars a month in fees from their hashers. There is a lot of money to pay for equipment there.

Quote
I hate those spams
Oh, I wasn't trying to express any opinion/dislike of the inefficient use, but to point out that to some extent load expands to fill capacity, and if the price is too low people will use it wastefully or selfishly.

Quote
Merge mining incurs extra cost, with the same scale property of bitcoin. I'm not sure how bitcoin mining could be substantially funded by merge mining.
Same cost for miners, who are paid for their resources. Not the same cost for verifiers, because not everyone has to verify everything.

Quote
I'm just trying to set a realistic target, not saying that we should raise the limit to 100MB today. However, the 1MB limit will become a major limiting factor much sooner, most likely in 2 years.
In spite of all the nits I'm picking above I agree with you in broad strokes.
2371  Bitcoin / Development & Technical Discussion / Re: Share your ideas on what to replace the 1 MB block size limit with on: August 21, 2014, 08:17:06 PM
1MB-block supporters have 2 major arguments: decentralization and block space scarcity. By considering ONLY these 2 factors, however, the BEST solution is to limit a block to only 2 transactions:
Gee. And yet no one is suggesting that. Perhaps this should suggest that your understanding of other people's views is flawed, before you cling to it, insult people with an oversimplification of their views, and prevent polite discourse as a result? :-/

Quote
Had he chosen 2MB instead of 1MB, I am pretty sure that Bitcoin would have worked in exactly the same way as how it works now.
Maybe. We've suffered major losses of decentralization, with even many major commercial players and the overwhelming majority of miners not running their own verifying nodes— instead relying on centralized services like Blockchain.info and mining pools. Even some of the mining pools have tried not running their own nodes, but instead proxying work from other pools. The cost of running a node is an often-cited reason.  Some portion of this cost may be an illusion, some may be a constant (e.g. software maintenance), but to the extent that the cost is proportional to the load on the network, higher limits would not be improving things.

What we saw in Bitcoin last year was a rise of ludicrously inefficient services— ones that bounced transactions through several addresses for every logical transaction made by users, games that produced a pair of transactions per move, etc. Transaction volume rose precipitously, but when fees and delays became substantial, many of these services changed strategies and increased their efficiency.   Though I can't prove it, I think it's no coincidence that the load has equalized near the default target size.

Quote
We want to maximize miner profit because that will translate to security.
But this isn't the only objective, we also must have ample decentralization since this is what provides Bitcoin with any uniqueness or value vs the vastly more efficient centralized payment systems.

Quote
We need to find a reasonable balance
Agreed.

Quote
but 1MB is definitely not a good one.
At the moment it seems fine. Forever? Not likely— I agree, and on all counts. We can reasonably expect available bandwidth, storage, CPU power, and software quality to improve. In some span of time 10MB will have similar relative costs to 1MB today, and so all factors that depend on relative costs will be equally happy with that other size.

Quote
Assume that we aim at paying $1 million/block ($52 billion/year) to secure the network (I consider this as a small amount if Bitcoin ever grows to a trillion market cap). The current 7tps limit will require a fee of $238/tx, which is way too expensive even for a global settlement network among banks.
This is ignoring various kinds of merged mining income, which might change the equation somewhat... but this is hard to factor in today.

Quote
I think a reasonable choice, with the implications for centralization considered, would be around 100MB/block. That takes 1.5Mb/s of bandwidth in a perfect scenario. That would be a better equilibrium in technical and economical terms.
I think at the moment— based on how we're seeing things play out at current load levels on the network— 100MB blocks would be pretty much devastating to decentralization; in a few years, likely less so. But at the moment it would be even more devastating to the existence of a fee market.

Have Monero and ByteCoin fixed the bloat problem, or did the transaction spammers just get bored and go away?
Yes, sort of— fee requirements at major pools, and Monero apparently planning a hard-fork to change the rules; I'm not sure where that's standing— I'll ping some of their developers to comment.  Monero's blockchain size is currently about 2.1 GBytes on my disk here.

My understanding is that gmaxwell and andytoshi (et. al.?) have come up with "substantial cryptographic improvements" to the BCN system which potentially are a "pretty straight forward to add to Bitcoin" as per gmaxwell, see:  https://download.wpsoftware.net/bitcoin/wizardry/brs-arbitrary-output-sizes.txt and previous comment(s) cited in this thread.  However, I still have my (unanswered) questions, to wit:
How would this output-encoding scheme work realistically for something of *every possible size?*  
Assuming this were applied to bitcoin as an option [much as SharedCoin is in blockchain.info], wouldn't it still come at a cost both in terms of size of the data corresponding to whatever transactions involved the scheme in the cases where users choose to utilize it, as well as corresponding additional fee(s)?  
How are the scalability issue(s) addressed?
The improvements Andrew and I came up with do not change the scalability at all; they change the privacy (and do work for all possible sizes), and since it's not scalability-related it's really completely off-topic for this thread.
2372  Economy / Service Discussion / Re: Negative balance on: August 21, 2014, 12:01:24 AM
https://archive.today/VZbyc
2373  Bitcoin / Development & Technical Discussion / Re: Running a full node is starting to be a pain on: August 19, 2014, 10:06:23 PM
So here's my question: is there something about the way the bitcoin client is built that causes my computer to freeze up when people start to download the blockchain from me?
Sounds like you have a broken SATA driver/controller. Try starting up some disk benchmarks and see if your computer becomes unusable when there is the slightest activity.
2374  Alternate cryptocurrencies / Altcoin Discussion / Re: Towards a better proof of work algorithm on: August 19, 2014, 09:09:33 PM
https://download.wpsoftware.net/bitcoin/asic-faq.pdf
2375  Bitcoin / Development & Technical Discussion / Re: bitcoind-ncurses: Terminal front-end for bitcoind on: August 17, 2014, 02:24:16 AM
My peers page has no heights
You're running a non-git version of bitcoind. Next major version will have them for you.
2376  Bitcoin / Development & Technical Discussion / Re: bitcoind-ncurses: Terminal front-end for bitcoind on: August 16, 2014, 11:28:50 PM
WRT peer height.

The syncheight is based on what the peer has advertised to us, so if a peer is not synced up yet its syncheight will be -1. You might want to fall back to displaying the starting height there if and only if its value is more than one less than your current height and syncheight is -1.
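A sketch of that fallback rule (hypothetical names; reading "more than one less than your current height" as startingheight > our_height - 1):

```python
def display_height(syncheight, startingheight, our_height):
    # Prefer the advertised sync height; fall back to the starting
    # height only when the peer hasn't synced yet (-1) and its
    # starting height is plausibly current.
    if syncheight != -1:
        return syncheight
    if startingheight > our_height - 1:
        return startingheight
    return "?"
```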
2377  Alternate cryptocurrencies / Altcoin Discussion / Re: Blowing the lid off the CryptoNote/Bytecoin scam (with the exception of Monero) on: August 16, 2014, 04:21:08 AM
My favorite redflag so far, if you look at the timestamps of the first BCN block: https://minergate.com/blockchain/bcn/block/1 it says:
 2012-07-04 05:00:00 (2 years ago)
How the heck did they mine the first block at this exact zeroed time period,
It's completely normal and unsurprising for the genesis block timestamp to be 'rigged'. ... I don't know about BCN, but in Bitcoin there really is no facility to mine a genesis block; it's hard-coded... so you end up building a separate tool to create one. Your tool _could_ read the current time, but that's ~2 more lines of code than just manually setting the time to some value. By itself I wouldn't find that concerning.
2378  Bitcoin / Development & Technical Discussion / Re: Why does store all inputs and outputs, instead of “account/balance” ledger? on: August 15, 2014, 04:11:50 PM
First, the entire privacy model for Bitcoin is predicated on users not reusing keys. Without that, bitcoin is obscenely non-private, and gravely disadvantaged compared to any other store of value or transaction system in wide use.  This alone immediately would destroy any value an 'account' system would have, and so you can basically stop at this point and think no more.

Though there are other reasons, beyond the increased difficulty of evaluating zeroconf transactions Sergio mentioned above—

Then the account model has major problems with corner cases... e.g. you have problems with determinism in conflict situations: Alice has 2 BTC and pays Bob 0.9 BTC, then Alice pays Carol 1 BTC.  Later Bob gets angry that the transaction hasn't confirmed and demands Alice reissue with a fee. Alice does, but then Bob gets paid twice and Carol not at all.

Even just for anti-replay you need a random-access lookup to see if the nonce has been used already, and the nonce is per-input if you are to have conflict control. You have more data (the list of nonces) that is unprunable (or difficult to make prunable)... and so any space you saved by not having extra outputs is lost in abundance by having to store nonces forever— even after an account goes to zero value, since if it were funded again someone could replay a years-old spend.

In short, account like setups _seem_ simpler and more intuitive, but they have ugly edge conditions that are difficult to get right and create overhead that wipes out the possible gains. When you consider that the privacy model precludes that kind of use in any case, there really is no upside and a lot of downside.
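The replay problem is easy to see in a toy model (purely illustrative, not any real system's design): the set of used nonces can never be discarded, even for accounts that have gone to zero, or an old signed spend could be replayed after a refund.

```python
used_nonces = set()   # (account, nonce) pairs; grows without bound
balances = {}

def apply_spend(account, nonce, amount):
    if (account, nonce) in used_nonces:
        return False                      # replay rejected
    if balances.get(account, 0) < amount:
        return False                      # insufficient funds
    balances[account] -= amount
    used_nonces.add((account, nonce))     # must be kept even after
    return True                           # the account empties
```

Dropping a nonce once the balance hits zero would make a later refund replayable, which is exactly the unprunability cost described above.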
2379  Other / Meta / Re: What is the forum's policy on blatant software license abuse? on: August 15, 2014, 03:02:26 PM
Beyond being unethical and stupid, closed source miners are a risk to the ecosystem. What happens when some important update is needed to these devices? Or what if they're shipping with a back door? What if they need fixes to work with p2pool or some other future mining improvement?
2380  Alternate cryptocurrencies / Altcoin Discussion / Re: Blowing the lid off the CryptoNote / Bytecoin scam (excluding Monero) on: August 15, 2014, 11:36:24 AM
I don't disagree with you on the presale thing. I just don't understand why they felt the need to do it, they could have just been up-front and open and everyone would have lavished them with praise.
Not clear to me. A lot of technical work doesn't get invested in. I mean— sure, you could get a few thousand dollars or something. Getting millions would be much more dicey, especially if you're thinking like I do: that any serious altcoin competition with Bitcoin could potentially be very bad for all cryptocurrencies (damaging the network effect), and that competing with Bitcoin is a long uphill battle. Perhaps it's arguably more profitable to do some splash where you've got effectively infinite supply and feed a huge incoming speculative demand, ripple/stellar style.

I'm probably out of my depth— figuring out how to extract money from speculators isn't something I really have experience with... it's just not so surprising to me. Though next I'd be worrying: if someone did all these dishonest things, what backdoors do they have in the software? :-/ In my case I only ran any of these codebases sandboxed on separate machines. Presumably altcoin exchanges have similar setups?