Bitcoin Forum
  Show Posts
2341  Bitcoin / Bitcoin Technical Support / Re: for Win - share a data directory on: September 04, 2014, 03:31:46 AM
You can specify the same data directory with "-datadir" in a .conf file. See here.
This does not enable you to share a data-directory. Doing that will just corrupt your data directory if you manage to bypass the startup locks.

Actual storage can be shared by using a file system that supports copy-on-write, or by copying a data directory and replacing the block files with hardlinks.
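For illustration, a rough sketch of the hardlink approach (just a throwaway helper of my own, not anything Bitcoin Core provides):

Code:
# Rough sketch: clone a data directory, hardlinking the big immutable block
# files (blk*.dat / rev*.dat) instead of copying them, so the bulk storage is
# shared. Run it with the node shut down; hardlinks only work within one
# filesystem, and the newest blk/rev files will still be appended to, so you
# may prefer to copy those rather than link them.
import os, shutil

def clone_datadir(src, dst):
    for root, dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        os.makedirs(os.path.join(dst, rel), exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(dst, rel, name)
            if name.startswith(("blk", "rev")) and name.endswith(".dat"):
                os.link(s, d)        # shared on-disk storage, no extra space
            else:
                shutil.copy2(s, d)   # indexes, chainstate, wallet: real copies

Each copy then has its own locks, indexes, and wallet, so the nodes never step on each other.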

Because of those alternatives I wouldn't spend any time developing or reviewing functionality to improve this further, and would instead favor finishing the pruning support.
2342  Bitcoin / Development & Technical Discussion / Re: [BIP][Draft] SPV improvement for light wallet on: September 04, 2014, 03:27:34 AM
I can very very cheaply create 100 difficulty 1 blocks and feed them to your client and claim that they're the tip of the best chain. You'd know no better.

I don't think compromising the security of SPV wallets further is worth an unnoticeably small improvement.

A zero-trust compression of the past headers with log(n) scaling is possible too, and will hopefully be deployed for other applications. Because of the above point it is not very interesting for SPV, but if it were deployed in any case, using it would be much better than having an unverified and unverifiable dependence on peers.
2343  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: September 04, 2014, 01:28:13 AM
Well guys, I broke theoretical bitcoin. My lack of relevant knowledge has theoretically doomed us all.
In all seriousness (not that breaking theoretical bitcoin isn't) the whole take-down-the-network-in-one-transaction is scary as shit. I'd love to be able to use string functions, but I'd rather not advocate risking the network for some silly scriptsigs Tongue
I send you my theoretical condolences. Smiley

No worries, everyone breaks theoretical Bitcoin.
2344  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: September 04, 2014, 12:44:19 AM
Typed data on the stack makes writing correct code much harder; I can't say that I've ever wished for that. I generally prefer the stack be "bytes" and everything "converts" them to the right type. Yes, additional constraints would make things like your provably undependable code easier, but they do so by adding more corner cases that an implementation must get right.

I'm also a fan of analyzability, but that always has to take a back seat to consensus safety. Smiley
2345  Bitcoin / Development & Technical Discussion / Re: Unique Ring Signatures using secp256k1 keys on: September 04, 2014, 12:31:34 AM
Something to watch out for here is that it's coercion vulnerable, which I think I'd addressed in my science project work.

E.g. I can go to Satoshi and Andytoshi and demand they give me (or publish) their private keys, and in doing so prove the message came from tacotime.

To avoid this, you generate a random blinding key Q and sign with gP+gQ instead, proving knowledge of P; then you forget Q. Later you cannot be coerced because you can honestly claim to have forgotten Q.
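In symbols, my paraphrase of that sentence (writing xG for the generator G multiplied by the scalar x; this is just a restatement, not a vetted construction):

\[
P_{\text{blind}} = P\,G + Q\,G = (P+Q)\,G, \qquad Q \text{ random, forgotten after signing.}
\]

You sign with P_blind while proving knowledge of P; once Q is forgotten you can no longer demonstrate the link between P_blind and your long-term key, even under coercion.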

Making the threshold scheme— e.g. where you have a set of N of M signers where _no_ one knows who all the N are (not even the members themselves)— is more complicated with this blinding, however, because someone must create the Qs for the involuntary participants.

You currently don't support composing signatures, but you totally could— doing so results in useful applications.
2346  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: September 03, 2014, 11:11:18 PM
Well, there can only be one OP_CHECKSIG...
That's not true.

Quote
Why not make that kind of limit for OP_CAT?
All the string functions, in fact, should be enabled (even if they are "expensive words" like checksig)
What if there was a minimum base transaction fee (rendering a tx with an insufficient base fee invalid) that would be incremented by a certain amount for every OP_CAT in the transaction?

No one is saying that things like OP_CAT cannot be done, or that they're bad or whatever. But making them not a danger requires careful work. Case in point: what you're suggesting is obviously broken.  So I write a transaction which pays 100x that (presumably nominal) fee and I crash _EVERY BITCOIN SYSTEM ON THE NETWORK_, and I don't really have to pay the fee at all because a transaction needing a zillion yottabytes of RAM to verify will not be mined, so I'll be free to spend it later.  Congrats, you added a severe whole-network-crashing vulnerability to hypothetical-bitcoin.

You should also remove "enabled" from your dictionary: that those opcodes were "disabled" doesn't mean they can just be enabled. They're completely gone— adding them is precisely equivalent to adding something totally novel in terms of the required deployment procedure.
2347  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: September 03, 2014, 10:02:54 PM
Set a maximum total memory for the stack and a script that exceeds that value automatically fails.
Sure, but this requires: a consistent way of measuring it and enforcing it, and being sure that no operation has unlimited intermediate state.

As Bitcoin was originally written it was thought that it had precisely that: there was a limit on the number of pushes, and a limit on the number of operations. This very clearly makes the stack size "limited"— but because of operations that allow exponential growth, the limit wasn't effective.  Making the limits effective isn't hard for any fundamental reason, as I keep pointing out— "just have a limit"— but being _sure_ that the limit does what you expect is much harder than it seems.
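To make "measuring and enforcing" concrete, a toy sketch (illustrative only, nothing like Bitcoin's real accounting): cap the total bytes on the stack and re-check the cap after every single operation, since growth ops like CAT and DUP are exactly what a per-push or per-op count fails to bound.

Code:
MAX_STACK_BYTES = 10_000

def stack_bytes(stack):
    return sum(len(item) for item in stack)

def run(script):
    # script is a list like [("PUSH", b"ab"), ("DUP", None), ("CAT", None)]
    stack = []
    for op, arg in script:
        if op == "PUSH":
            stack.append(arg)
        elif op == "DUP":
            stack.append(stack[-1])
        elif op == "CAT":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            return False                      # unknown op: fail
        if stack_bytes(stack) > MAX_STACK_BYTES:
            return False                      # limit enforced after every op
    return True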

2348  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: September 03, 2014, 05:30:50 PM
I'm not a programmer so this may sound very stupid:
[...]
Max OP_CAT output size = 520 bytes: why risky?
I mean, is there any fundamental difference between these cases?
All the limits are risks— all that complexity— practically every one of them has been implemented incorrectly by one alternative full node implementation or another (or Bitcoin Core itself) at some point. They miss them completely, or count them wrong, or respond incorrectly when they're violated.  E.g. here: what happens if you OP_CAT a 520-byte element and a 10-byte element? Should the verify fail? Should the result be truncated? But even that wasn't the point here.

The point here was that realizing you _needed_ a limit, and where you needed it, was a risk.  The reasonable and pedantically correct claim was made that OP_CAT didn't increase memory usage, that it just took two elements and replaced them with one which was just as large as the two... and yet having (unfiltered) OP_CAT in the instruction set bypassed the existing limits and allowed exponential memory usage.

None of it is insurmountable, but I was answering the question as to why it's not just something super trivial.
2349  Other / Archival / Re: How (and why) to use the Relay Network on: September 03, 2014, 01:15:36 AM
Would something similar to this relay network be useful for the p2pool share chain?
P2Pool already does something similar within its own network for the share chain.
Yup, has for some time— it's part of why p2pool's observed orphan rate is less than 1/10th that of several of the larger pools.

Identify blocks coming from the biggest pools, by coinbase address and/or signature. If not recognized consider the source unknown and untrusted.
For known sources, keep track of how many times your local bitcoind accepts and rejects blocks from this source. Sources with 10+ accepted blocks and zero rejected are considered trusted. Other sources are untrusted.
Please don't do this. It undermines the security model of Bitcoin, and at most saves you a few _milliseconds_ (and even those aren't lost work, because you could theoretically find a block in the meantime and win the block race), since virtually all the transactions are already verified and cached in memory and the node doesn't check them again.

SPV clients are counting on the blocks you produce being valid. Setting up that kind of trust also produces extreme fragility: if a software error makes a good node produce a bad block, you could have a whole series of bad blocks created by mutually trusting hosts, creating a large fork in the network. This kind of approach is also vulnerable to attack, since without a PKI you cannot determine who the source of a block is— for example, it could just be a sybil who is relaying good blocks to you from other parties but later starts feeding you trash. (And, of course, imposing identities on mining has a multitude of problems and risks that go beyond the simply technical.)
2350  Bitcoin / Development & Technical Discussion / Re: Running a full node is starting to be a pain on: September 02, 2014, 07:19:54 PM
Not that im aware of, there are multiple websites that have all that information available freely without having to download anything.
You could also just switch to paypal and avoid all the complexity of that fussy Bitcoin stuff. Since you're apparently happy to trust oft-anonymous, oft-judgement-proof parties, paypal would likely be a big security upgrade too.
it doesn't seem right for a core developer to recommend paypal over bitcoin
It doesn't seem right that you're posting in the technical subforum without understanding what I was saying there.
2351  Other / Archival / Re: How (and why) to use the Relay Network on: September 02, 2014, 08:54:11 AM
I've been running the relay node client since Matt wrote it, and it appears to save a considerable amount of bandwidth (e.g. avoiding resending >95% of all transactions), which means faster block relaying and fewer orphans.

Plus, alternative transports make the network more robust: e.g. if some crazy firewall starts blocking the Bitcoin P2P network, the relay client may still get through and prevent partitioning.

Great stuff.
2352  Bitcoin / Development & Technical Discussion / Re: Pruning of old blocks from disk on: September 02, 2014, 08:39:13 AM
Quote
Why not just store blocks at regular intervals?  What is the benefit of making it (pseudo) random?

I have X gigabytes today that I want to contribute to the Bitcoin network. I'd like it to use all of that X as soon as possible.

But I don't want to be in a situation where as the network grows the distribution of blocks stored becomes non-uniform. E.g. because I used up my space and then more blocks came into existence I'm over-heavy in old blocks... or where my locality becomes poor (creating connection overhead for peers having to fetch only a few blocks per connection).

I don't want to have to rely on trying to measure my peers to find out what sections are missing and need storing, since an attacker could falsely make some areas of the chain seem over-represented in order to make them end up unrepresented.

Eventually, if sparse nodes are to be useful for nodes in their IBD you'll want to learn about what ranges nodes support before you try connecting to them, so I don't want to have to signal more than a few bytes of data to indicate what ranges I'm storing, or have to continually keep updating peers about what I'm storing as time goes on... as that would require a lot of bandwidth.

I want to be able to increase or decrease my storage without totally changing the blocks I'm storing or biasing the selection. (though obviously that would require some communication to indicate the change).

So those are at least the motivations I've mostly been thinking about there.

Quote
Do you know whether someone also intends to work on the later ideas you mention below?  I don't want to duplicate any work, but I'm interested in working on this set of features since I believe it is both very useful and also interesting.
I don't think anyone is coding on it. There has been a bunch of discussion in the past, so at least for my part I have an idea of what I want to see done eventually (the stuff I outlined).

Pieter posted on bitcoin-development about service bits for this... though that wasn't talking about sparse block support, but just bits indicating that you serve some amount of the most recent blocks. (He also did some measurements which showed that the request probability became pretty uniform after about 2000 blocks deep; obviously recent blocks were much more frequently requested due to nodes being offline for just a few days/weeks.)

Quote
Yes, I know that the caching idea has lots of problems.  I also thought about randomly removing blocks so that all nodes together will have the full history available even if each node only has a fraction - but I didn't go as far as you did with the "predictable" block ranges.  I like this idea very much!  If you are trying to bootstrap and currently have no connected peers which have a particular block, would you randomly reconnect to new peers until you get one?  Or implement a method to explicitly ask the network "who has this block range"?

My thinking is that initially you'll have to connect to a node to find out what ranges it has (e.g. we'll just add a handshake for it to the p2p protocol) and you'd just disconnect peers that weren't useful to you, while also learning who has what (e.g. if you need blocks 1000-2000 but find 100000-110000 you'll know where to go later); but later the address message should be revised so that it could carry the data with it. So your peers will tell you about what nodes are likely to have the blocks you need.

Quote
My opinion is that the undo files are a good starting point - they are only used for chain reorgs, so the networking issues are not present with them.  If you keep just the last couple of blocks in them (or to the last checkpoint), you should have exactly the same functionality as currently, right?  So why not remove them before the last checkpoint without turning off the "network" service flag, or even do this always (not only when enabled explicitly)?  This seems like the logical first step to me (unless there are problems I'm missing with that approach).
We're (at least sipa and I) planning on removing checkpoints almost completely after headers-first is merged. They are really damaging to people's reasoning about the security model, and headers-first removes practically all actual use of them. (Instead they'd be converted into a "chainwork sufficiency" number— e.g. how much work should the best chain have— and something to put the wallet into safe mode and throw an error if a large reorg happens.)

Undo files take up comparatively little space; all of them right now amount to only about two and a half gigs... and unlike the blocks, if you do want them again you can't just go fetch them.

In general I'm concerned about prescribing any particular depth beyond which a node should refuse to reorg— any at all, or at least any that is remotely short.  The risk is that if someone does create a reorg at that limit they can totally split the consensus (e.g. let half the nodes get one block that crosses the threshold, then announce the reorg). Doubly so when you consider corner cases like the initial block download, e.g. where you get a node stuck on a bogus chain (though that would be part of the idea behind a sufficiency test).

Quote
Is this the O(n) approach you mention?  I don't think that O(n) with n being the block height is too bad.
Yep, that's basically it. The only reason I don't really like O(n) is that perhaps your addr database has 100,000 entries— 3/4 of which are bogus garbage put out by broken or malicious nodes. Now you need to do a fair amount of work just to figure out which peers you should be trying.

Quote
However, I think that it is in theory possible to choose the ranges for Ni as well as the values for Si in such a way that you get O(log n):

Ni randomly in [2^i, 2^(i+1)),
Si = 2^i

By tuning the base of the exponential or introducing some factors, it should be possible to tune the stored fraction of blocks (I've not done the math, this is just a rough idea).  I don't know whether exponentially rising sizes of the ranges have any detrimental effect on the statistical properties of which blocks are kept how often by the network.  Maybe something more clever is also possible.
Oh interesting. I got so stuck trying to come up with O(1) that I didn't think about log-shaped solutions. I'll give that some thought...

I think in general we want to be able to claim that if the node seeds are uniformly distributed, we expect the block distribution to approach uniform at all times as the node count approaches infinity, regardless of how the size is set. In practice I expect (and hope) that there will be a fair number of nodes with really large size settings (e.g. all the blocks) which will help even out any non-uniformity— really, in my mind most of the reason for supporting sparse nodes is to make sure everyone can contribute at whatever level they want, and to make sure that those who do choose to contribute a lot of storage aren't totally saturated with requests.
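To make the quoted scheme concrete, a rough sketch of how I read it (the seeding and the "frac" knob standing in for the quote's "introducing some factors" are placeholders I made up; the math is untested): the node derives its ranges deterministically from a small seed, so advertising the seed is enough for peers to know which blocks it keeps.

Code:
import random

def stored_ranges(seed, chain_height, frac=0.05):
    rng = random.Random(seed)                      # deterministic given the seed
    ranges, i = [], 0
    while 2 ** i <= chain_height:
        n_i = rng.randrange(2 ** i, 2 ** (i + 1))  # N_i uniform in [2^i, 2^(i+1))
        s_i = max(1, int(frac * 2 ** i))           # S_i ~ 2^i, scaled to tune the stored fraction
        ranges.append((n_i, n_i + s_i))            # ranges past the tip just fill in as the chain grows
        i += 1
    return ranges                                  # about log2(height) ranges in total

def stores_block(seed, chain_height, height):
    return any(lo <= height < hi for lo, hi in stored_ranges(seed, chain_height))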

2353  Bitcoin / Press / Re: [2014-09-02] “Bitcoin is Really Fragile” – Bitcoin Core Developer Mike Hearn on: September 02, 2014, 06:03:31 AM
Jeff's comments, and my (I'm nullc on reddit) comments: http://www.reddit.com/r/Bitcoin/comments/2f6iiq/bitcoin_is_really_fragile_bitcoin_core_developer/ck6es4t#ck6es4t

2354  Bitcoin / Development & Technical Discussion / Re: Reducing the memory footprint but still retain full node capabilities. on: September 02, 2014, 03:37:39 AM
I am measuring the actual usage. And while RAM may be cheap, memory controllers aren't infinite; I am already using 4 sticks of RAM and only had 4 slots. Anyway, the measurement is on Windows with Bitcoin-Qt/Core default settings.
Presumably if your system was made in the last decade it supports modules larger than 2GB? Smiley  What is the virt usage corresponding to your 800 MB actual-usage reading?

Quote
As for git master, what are the potential side effects of mining on it?
That it may crash or end up forked and leave you effectively not mining. This is also true for any other version, but non-released versions are (by definition) less widely tested.  The amount of risk depends on what's been going on recently in git master; at the moment, there isn't anything exceptionally new which is obviously highly risky.  Similarly, if you keep coins on that system, exposing immature software to the internet might give you an increased risk of a security compromise. That said, hopefully you have no coins on the system considering you're using it for web browsing. Marginal risk from new code in Bitcoin Core is going to be much less than running any browser.

If your hashrate is only fairly modest (I'd hope so— since you're not running your node on a separate dedicated system), you could just consider a bit of risk taken here as a public service and increase your monitoring vigilance to match.  If you were talking about a large amount of hashrate, I'd recommend against it— not just due to the risk to yourself, but because if you start producing multiple invalid blocks it may cause (minor) problems for SPV nodes or alt implementations. Risk that you take on your own is one thing; subjecting other people to it is another matter.
2355  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: September 02, 2014, 03:30:40 AM
Sure sure, as I said above it's not hard in theory, but there's the lesson— even after I pointed out there was exponential memory usage in a naive implementation the OP thought otherwise.  And before anyone else trips over their ego, it's not that the OP was foolish or anything. There are subtle interactions in the dark corners which make making promises about the behavior difficult.  So while the actual safe behavior isn't fundamentally hard, being confident that all the corner cases and interactions are handled is fundamentally hard.

OP_CAT isn't the only "disabled" opcode with those properties.... e.g. multiplying also does it.

When behavior like this is fixed via limits great care must be taken to make sure the limits are implemented absolutely consistently everywhere or the result is a consensus splitting risk. Alt full node implementers have repeatedly implemented the limits wrong— even when they're obvious in the code, called out in the comments, documented on the wiki, etc... even by just simply not implementing them (... coverage analysis and testing against the blockchain can't tell you about limits that you're just missing completely).

Going back to the OP's question. I'm not seeing how OP_CAT (at least by itself) facilitates any of the high level examples there. Can you give me a specific protocol and set of scripts to show me how it would work?
2356  Bitcoin / Development & Technical Discussion / Re: Running a full node is starting to be a pain on: September 02, 2014, 03:22:31 AM
Not that im aware of, there are multiple websites that have all that information available freely without having to download anything.
You could also just switch to paypal and avoid all the complexity of that fussy Bitcoin stuff. Since you're apparently happy to trust oft-anonymous, oft-judgement-proof parties, paypal would likely be a big security upgrade too.
2357  Bitcoin / Development & Technical Discussion / Re: Reducing the memory footprint but still retain full node capabilities. on: September 02, 2014, 03:10:06 AM
I would like somebody more familiar with the bitcoin core client to tell me if it's possible to basically recompile bitcoin-qt(or just the daemon) with various changes to reduce the memory footprint to the bare minimum, say below 400mb at all times, while still retaining mining capabilities, e.g I want to mine solo, but not having to worry about the excessive use of ram by the client. Usually it's over 800mb, and I daresay I want it below 200.
Git master with -dbcache=4 -rpcthreads=1 -dnsseed=0 -discover=0 -par=1 uses about 200 MBytes on x86_64, maybe getting up to 250 with long uptime and a lot of connections. If you run wallet-disabled it'll be another 30 MB below that.  Reducing the connection count may make it a bit smaller.  In general, I advise against having inbound connections from the general internet on a node that's mining (e.g. set -listen=0, which will also helpfully limit your connection count as a side effect). Running the relay network daemon is advisable.
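Same thing as a bitcoin.conf, if you prefer that to command-line flags (this is just the flags above written out; the wallet line only if you don't need the wallet):

Code:
# low-memory solo-mining node, per the flags above
dbcache=4
rpcthreads=1
dnsseed=0
discover=0
par=1
listen=0
#disablewallet=1   # roughly another 30 MB saved if you don't need the wallet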

In general the defaults are not well tuned for low memory— considering that 1GB ram costs about $10 marginally... and a bit of extra caching is worth it when the memory is available.

The fact that you're saying "usually over 800 MB" suggests that you might be confusing virtual address space with memory usage. Virtual address space does not consume any actual memory (not in swap either). The process's memory map is not contiguous, e.g. there are regions of allocated memory in seas of non-allocated address space. Be sure you're measuring the right thing.

Though several hundred megabytes per tab? Might want to reconsider what browser you're running. I apologize that it's a little difficult to run a whole world wide digital currency with memory usage comparable to one or two tabs in a web-browser. Smiley

Mine on git master at your own risk, though— no warranties are provided generally, but even less so on git master.
2358  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: September 01, 2014, 12:56:22 PM
As for the exponential growth, I read somewhere something like: replacing two inputs from the stack with one output that is only as long as the two inputs together...
Use OP_DUP and OP_CAT in succession, and you will double the size of your (single) input.

To complete the lesson, for those who never liked homework: with a 201-cycle limit, OP_CAT lets you use approximately 534,773,760 YiB of memory, vs 102,510 bytes without it.
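If you want to check the arithmetic yourself, a quick back-of-the-envelope script (the 510-byte starting element and the way I split the 201-op budget are my own assumptions, chosen to land on the same ballpark figures):

Code:
def dup_cat_growth(start_bytes=510, ops=200):
    size = start_bytes
    for _ in range(ops // 2):       # each OP_DUP + OP_CAT pair doubles the element
        size *= 2
    return size                     # 510 * 2**100 bytes, i.e. about 534,773,760 YiB

def plain_pushes(push_bytes=510, pushes=201):
    return push_bytes * pushes      # 102,510 bytes

print(dup_cat_growth())
print(plain_pushes())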

Quote
is unlikely to exhaust the memory.  And I agreed very much.
And maybe you will realize why all these altcoins worry me so?  Or perhaps you've got cheaper sources of ram than I do?
2359  Bitcoin / Development & Technical Discussion / Re: Any reason to allow multiple incoming connections from same peer? on: September 01, 2014, 11:42:05 AM
We should certainly prioritize bumping duplicate source IPs when we implement that— (I have a long list of criteria written up for it).

The reason we really shouldn't just ban is because there often are many independent nodes behind a NAT and it would be impolite to unnecessarily deny them connectivity.  In at least one case there is a whole country behind a single IP.

Opening up multiple connections is really only the first and most boring of the abusive behaviors people can engage in / have been engaging in... there are quite a few parties which are now trying to connect constantly to every reachable host on the network and wasting a lot of connectivity. If we implement measures to address the bulk resource consumption more generally, the repeats from one IP will also be addressed as a side effect.

Free cookie to someone who can figure out what these specific ones are attempting to do in any case.
2360  Alternate cryptocurrencies / Altcoin Discussion / Re: Parse the blockchain for all addresses with positive balance on: September 01, 2014, 02:42:06 AM
What are you trying to accomplish— at a macroscopic level?  We might have better advice if we knew.