  Show Posts
2741  Bitcoin / Development & Technical Discussion / Re: How does pool mining and mining work under the hood? on: April 27, 2014, 08:24:40 PM
That isn't right. Mining is memoryless; there is no progress made. It's analogous to throwing fair dice: if you go 100 rolls without rolling a 1, your next roll is still no more or less likely to roll a 1 than your first roll was.
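To make the memorylessness concrete, here is a toy Monte Carlo in C (purely illustrative, nothing from the actual protocol; the 1-in-64 success chance is an arbitrary stand-in for hitting the target): it estimates the success probability of a fresh attempt and of the attempt made right after 100 consecutive failures, and the two estimates come out the same.

/* Toy illustration of memorylessness: each "hash attempt" succeeds with
   probability 1/64, like rolling a 64-sided die.  We compare the success
   rate of a first attempt with the success rate of the attempt made right
   after 100 consecutive failures. */
#include <stdio.h>
#include <stdlib.h>

static int attempt(void) { return (random() % 64) == 0; }

int main(void)
{
  long fresh_trials = 1000000, fresh_hits = 0;
  long cond_trials = 100000, cond_hits = 0;
  srandom(42);

  for (long i = 0; i < fresh_trials; i++)
    fresh_hits += attempt();

  for (long i = 0; i < cond_trials; i++) {
    int failures = 0;
    while (failures < 100)                   /* wait for 100 misses in a row */
      failures = attempt() ? 0 : failures + 1;
    cond_hits += attempt();                  /* the very next attempt */
  }

  printf("P(success, fresh attempt)      = %f\n", (double)fresh_hits / fresh_trials);
  printf("P(success, after 100 failures) = %f\n", (double)cond_hits / cond_trials);
  return 0;
}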

With respect to OP's questions on fees and selecting transactions: the important thing is that, as far as the protocol is concerned, someone who is just working on hashes isn't a miner any more than AMD is; they're just selling CPU time to the real miner elsewhere (the pool). Only P2Pool users, solo miners, and the mining pools themselves are actual miners from the perspective of the protocol.
2742  Bitcoin / Development & Technical Discussion / Re: List of good seed nodes to put in bitcoin.conf? on: April 27, 2014, 05:04:11 PM
Please don't just stuff addnode entries for random public nodes into your configuration unless that kind of usage has been solicited. If you do, you'll concentrate load unfairly on whichever nodes people happen to have listed online.
2743  Bitcoin / Development & Technical Discussion / Re: superblock checkpoint architecture on: April 26, 2014, 11:39:18 PM
I guess one could argue that we trust the network to verify transactions and provide the block chain, why not trust it to
We do not trust it to verify transactions; we trust it only to order transactions. Verification we do for ourselves. By verifying for ourselves we eliminate the possible benefits of including invalid transactions and the profit that would go along with doing so. This is important because we depend only on economic incentives to get honest behavior for ordering at all— there is no exponential gap between attacking and defending in POW consensus. If by being dishonest you can steal coins (or even reclaim lost ones), it's an entirely different trade-off question than a case where you can only rearrange transactions (and potentially replace your own).  This isn't to say that a system couldn't be constructed where only miners verified, but it would be a different and weaker set of security/economic incentives— and not just some minor optimization.

Quote
but that needs to be weighed against the integrity and security lost by the number of full nodes dropping slowly off due to the storage and computation requirements.
The computation is a one-time requirement at initialization, not ongoing (and because of bandwidth requirements I don't expect computation to ever be limiting), and it could be performed in the background on new nodes.  There is _no_ storage requirement in general for the past history. Full nodes do not need to store it; they've already validated it and can forget it.  This isn't implemented in full nodes today (there's been little need, because the historical storage is not very great currently), though Bitcoin Core's storage is already structured to enable it: you can delete old blocks and your node will continue to work normally, validating blocks and processing transactions, until you try to request an older block via RPC or P2P (and then it will crash). The protocol is specifically designed to avoid nodes having to store old transaction data, as described in section 7 of bitcoin.pdf, and can do so without any security tradeoff.

Quote
A state commitment which redistributes unspent outputs, lost or not, to different addresses, would be easily spotted and rejected
Validation is the process which accomplishes this spotting and rejection. :) If you seek not to validate, then in the cases where there is no validation those errors can be passed in. If some parties validate and some do not, then you risk partitioning the network— e.g. old nodes 'ignore' your "superblock" and stay on the honest network, while new nodes use it and are willing to follow a dishonest network (potentially to the exclusion of the honest one)... and the inconsistency is probably worse than the fraud. So you're still stuck with the fact that if someone mines an invalid state which some nodes will not catch (because they do not verify), then all must accept it without verifying it.

Couldn't a block still be created and after consensus has been established on the block, and after some time has passed, it could be used instead of the entire chain? How would that violate security assumptions?
In addition to the incentives point above: the participants are not constant and set out at the front... anonymous parties come and go, so what does a "consensus" really mean if you're not a part of it and all those who are are anonymous, self-selecting parties and perhaps sybils?  If I spin up 100 fake nodes and create a "consensus" that I have a trillion bitcoins and you join later, you have no way to tell on your own— so it matters greatly that the rules were followed before you showed up, e.g. that the creator of the system hadn't magically assigned himself a trillion coins using a fake consensus before you arrived. :) Of course you don't need the data any more once you've validated it— you can just remember that it was valid... but if you haven't, how are you to know, except either by processing a proof of it (e.g. checking it yourself) or by trusting a third party?  Bitcoin was created to eliminate the need for trust in currency systems, at least to the extent that's possible.
2744  Bitcoin / Development & Technical Discussion / Re: Orphaned blocks on: April 26, 2014, 11:25:51 PM
Looking on blockchain.info, I see there's been orphaned blocks in the last month or so, and never any before that. Is this something they just started tracking, or is there a sudden emergence... and if so, why?
As usual, BC.i has given out misleading data. They're obviously just forgetting old ones. There have always been orphan blocks (on the order of 1%) and there always will be some; the finite speed of light ensures it.
2745  Bitcoin / Development & Technical Discussion / Re: OP_CHECKMULTISIG question on: April 26, 2014, 11:23:33 PM
Yes, the signatures have to be in the same relative order as their corresponding public keys in order to pass.
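For illustration, a minimal sketch in C of the order-dependent matching that OP_CHECKMULTISIG performs, with a toy predicate standing in for real ECDSA verification (this is not the actual interpreter code): because both indices only move forward, signatures listed out of order relative to the public keys don't get counted.

/* Sketch of the matching loop OP_CHECKMULTISIG performs.  Real signature
   verification is replaced by a toy predicate (a "signature" matches the
   key with the same number) purely to show the ordering behaviour. */
#include <stdbool.h>
#include <stdio.h>

static bool sig_matches_key(int sig, int key) { return sig == key; } /* toy stand-in */

static bool check_multisig(const int *sigs, int nsigs, const int *keys, int nkeys)
{
  int s = 0, k = 0;
  while (s < nsigs) {
    if (nsigs - s > nkeys - k) return false;  /* not enough keys left */
    if (sig_matches_key(sigs[s], keys[k]))
      s++;                                    /* signature satisfied */
    k++;                                      /* keys are consumed in order */
  }
  return true;
}

int main(void)
{
  int keys[] = {1, 2, 3};
  int good[] = {1, 3};   /* same relative order as the keys: passes */
  int bad[]  = {3, 1};   /* out of order: fails */
  printf("in order:     %s\n", check_multisig(good, 2, keys, 3) ? "pass" : "fail");
  printf("out of order: %s\n", check_multisig(bad,  2, keys, 3) ? "pass" : "fail");
  return 0;
}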
2746  Bitcoin / Development & Technical Discussion / Re: superblock checkpoint architecture on: April 26, 2014, 05:47:35 PM
This kind of stuff has been discussed many times before.  What you're suggesting here violates the Bitcoin security assumptions— that the integrity of the data is solid because all full nodes have verified it themselves and not trusted third parties to do it— and in the simple form described it opens up some pretty nasty attacks: make your state commitment always be one in which you own all the "lost" coins; sure, you're unlikely to find the next commitment yourself, but if you do— ca-ching!  Of course, if you're verifying the state commitments, then there is no reason to use a 'high apparent difficulty' to select which blocks provide them (see the proposals about a merkleized UTXO set for how a reduced security model can be implemented this way).
2747  Bitcoin / Development & Technical Discussion / Re: OP_VERIFY question on: April 26, 2014, 10:01:51 AM
It doesn't matter; someone was just being overly detailed while documenting there.
2748  Bitcoin / Development & Technical Discussion / Re: Performance of Account structures in bitcoind on: April 26, 2014, 09:33:29 AM
I was able to severely corrupt the wallet file by terminating bitcoind process. I did not lose any keys, but the account balance information was corrupted. In essence I was able to lose track of what the correct balance is in each account without any effort at all.
Can you provide some more information here?  Were you running the release binaries? Which version? What operating system? How did you kill the process? What state was it in when you brought it back up? What errors did you receive?  Would it be possible for you to provide the corrupted wallet and database/ directory to me?

I ask because last year I ran a loop killing the process under load for more than a month, killing it thousands and thousands of times to try to tease out some rare issues, and I was not able to produce a single instance of corruption that way. Before I start trying to reproduce your experience I want to have a comparable setup.

Generally, use of the 'account' functionality is not recommended; it wasn't designed for what most people who try to use it expect it to do, and other methods (which support durability across hardware failure) should be used instead.  With respect to large numbers of transactions, there I must disagree— for better or worse, some of the largest Bitcoin-using sites collect their transactions in a wallet backed by bitcoind. Unfortunately, none of the people interested in those high-transaction-load applications are contributing to the code base, but they tell me that they don't need to because it currently works for them with reasonable considerations.  If you've automated your tests enough that they could be run against a testnet/regtest wallet from a script, it might be useful to get them imported into the integration testing used for Bitcoin Core— it's quite shy on wallet-related tests.

Quote
The really bad news is that transfers end up taking several seconds each, on average
I assume you were spending unconfirmed coins in these transactions?   Taking several seconds per spend is a known artifact of the current software behavior— the code that traverses unspent coins has factorial-ish complexity. While it could be improved (there are patches available, and simply disabling the spending of unconfirmed outputs avoids it), since the overall network capacity is not very great I've mostly considered this bug helpful at discouraging inept denial-of-service attacks, so I haven't personally treated it as a priority. (And most of the people who've noticed it and mentioned it to me appear to have just been conducting tests or attempting denial-of-service attacks…)
2749  Bitcoin / Bitcoin Technical Support / Re: Bitcoin-Qt 0.9.1 (Core) doesn't require password for creating addresses. Why? on: April 26, 2014, 09:24:26 AM
On older versions (perhaps in 0.8.x), the wallet password was required to add a new receiving address.
This is no longer the case on Bitcoin Core 0.9.1.
This was a bug— it asked for the passphrase in those cases but did nothing with it.

Quote
Why is this? I find this a bit strange, because shouldn't a password be required to store the new private keys?
No, 100 addresses (by default) are precomputed— this is also what makes your backups stable. If that pool runs out, it will prompt you for the password so it can generate more.
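A toy model in C of the keypool behaviour described above (not Bitcoin Core's wallet code; the names, sizes, and "keys" are just illustrative): handing out a precomputed address needs no passphrase, and only refilling the pool does.

/* Toy model of a keypool.  Keys are generated ahead of time; handing out a
   new address just pops a precomputed key, so no passphrase is needed.
   Only refilling the pool, which creates new keys that must be encrypted,
   prompts for the passphrase. */
#include <stdio.h>

#define POOL_SIZE 100

static int pool[POOL_SIZE];
static int pool_count = 0;
static int next_key = 0;

static void refill_pool(void)
{
  while (pool_count < POOL_SIZE)
    pool[pool_count++] = next_key++;   /* stand-in for deriving a real key */
}

static int get_new_address(void)
{
  if (pool_count == 0) {
    /* only here is the wallet passphrase actually needed */
    printf("keypool empty: prompting for the wallet passphrase\n");
    refill_pool();
  }
  return pool[--pool_count];           /* precomputed: no passphrase needed */
}

int main(void)
{
  refill_pool();                       /* initial pool, covered by the backup */
  for (int i = 0; i < 150; i++)        /* the 101st request triggers a refill */
    get_new_address();
  return 0;
}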
2750  Bitcoin / Development & Technical Discussion / Re: Can Maidsafe be used to solve the "storage problem" of Bitcoin's blockchain? on: April 26, 2014, 08:30:09 AM
You can run a full node without storing the whole historic block chain. See section 7 of the Bitcoin whitepaper for one approach.
2751  Bitcoin / Development & Technical Discussion / Re: Can Maidsafe be used to solve the "storage problem" of Bitcoin's blockchain? on: April 26, 2014, 02:08:17 AM
Can hyped-up, non-existent vaporware cure cancer?  Anyone's guess.  I saw an early whitepaper for it and it was full of technobabble and set off my personal sleaze alarms.

What/which "storage problem" are you talking about?
2752  Alternate cryptocurrencies / Mining (Altcoins) / Re: **Introducing....HELIX CHAIN - Newest Addition to Crypto Mining Community** on: April 26, 2014, 12:02:35 AM
I can't tell what this is about, so it's offtopic.
2753  Bitcoin / Development & Technical Discussion / Re: Proof of Storage to make distributed resource consumption costly. on: April 25, 2014, 07:12:39 PM
Wait...  I thought that was exactly what we were talking about.  
Could you explain in more detail what you mean by a 'publicly verifiable version'?
It was, you're not crazy.

The reason I mentioned it is that someone recently contacted me about using this kind of technique to make things like holding Tor hidden service IDs concurrently costly... e.g. you could still connect to as many peers as you want, but you'd still need to use one identity across all of them. It's a different application, but it would work with the original proposal just by using a constant value for the peer challenge.

That also breaks the basic version too.
Yes, that's not news though. All of these so far allow for a choice of time/memory tradeoff; it's still maximally fast if you keep everything. The invocation of the sequential function was to try to diminish the available level of choice.
2754  Bitcoin / Development & Technical Discussion / Re: Proof of Storage to make distributed resource consumption costly. on: April 24, 2014, 11:37:08 PM
If you don't play it straight, there is probably a way to use a hash function or 'salt' to defeat rainbow tables; I'll have to think about it.
Right. The way to do that is to basically use the idea here.

You don't ask them for (2^(2^t)) (mod n); you give them H((2^(2^t)) (mod n)) and ask them for t.  I don't see how to construct a rainbow table out of the hashed form, because there is no useful function H(R) -> R that maps back to R's along your solution path. You could still extract some parallelism by first computing out to T sequentially and saving way-points along the way, though that requires you to store residues mod n, which are large compared to the hashes... but probably reasonable enough for extracting some parallelism. Hm.
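A small C sketch of that construction, with a toy 64-bit modulus and a splitmix64-style mixer standing in for a real hash (a real instance would use an RSA modulus with secret factorization and a cryptographic hash; the parameters here are just illustrative): the setter does t sequential squarings and publishes only the hash of the result, and the solver has to redo the squarings one at a time, checking the hash after each step, until it recovers t.

/* Sketch of the hashed sequential-squaring puzzle: the setter computes
   R = 2^(2^t) mod n by t sequential squarings and publishes H(R); the
   solver must redo the squarings one at a time, hashing after each, until
   the hash matches, which reveals t. */
#include <stdio.h>
#include <stdint.h>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t n)
{
  return (uint64_t)((unsigned __int128)a * b % n);
}

static uint64_t toy_hash(uint64_t x)          /* stand-in for a real hash */
{
  x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
  x ^= x >> 27; x *= 0x94d049bb133111ebULL;
  x ^= x >> 31;
  return x;
}

int main(void)
{
  const uint64_t n = 0xC7F2A58D43B6E1A1ULL;   /* toy modulus (trivially factorable);
                                                 a real puzzle needs an RSA modulus
                                                 with secret factors */
  const uint64_t secret_t = 100000;

  /* Setter: t sequential squarings of 2 mod n, then hash the result. */
  uint64_t r = 2;
  for (uint64_t i = 0; i < secret_t; i++)
    r = mulmod(r, r, n);
  uint64_t target = toy_hash(r);

  /* Solver: knows only n and the target hash; repeats the squarings one at
     a time, hashing after each step.  The hash gives no way to jump ahead,
     so the work is inherently sequential. */
  uint64_t x = 2;
  for (uint64_t t = 1; ; t++) {
    x = mulmod(x, x, n);
    if (toy_hash(x) == target) {
      printf("recovered t = %llu\n", (unsigned long long)t);
      break;
    }
  }
  return 0;
}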

The downside of this is that you can't do a publicly verifiable version, e.g. where you have just one piece of storage per identity rather than one piece per identity,peer pair (which was what I wanted in the motivation for this post; but there are other applications, e.g. making a hidden service identity costly).
2755  Bitcoin / Development & Technical Discussion / Re: Proposal: Base58 encoded HD Wallet root key with optional encryption on: April 24, 2014, 11:08:59 PM
The two 16-bit hashes is fine by me. It would remove a bit of functionality, but 99% of use cases probably just want two passwords.
If you really want more than two, you could search for either a seed or a set of passwords (the latter if the seed is fixed by prior use, though password searching is strictly slower) that results in check-value sharing.

I'm not sure how valuable the blockchain scan is, since what you'd do there is extract all addresses seen in the blockchain into a bloom filter ... one which fits in L3 cache can have a low enough false positive rate to be effective.
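For illustration, a minimal bloom filter sketch in C (the 8 MB size, the 7 probes, and the mixing function are arbitrary choices, not a proposal): the point is only that an L3-cache-sized filter over every address seen in the chain answers "definitely not present" or "maybe present" with a small false positive rate.

/* Minimal bloom filter sketch: FILTER_BITS of memory, K probes per item.
   An L3-cache-sized filter like this can hold every address seen in the
   chain; a scan then only pays for the occasional false positive. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define FILTER_BITS (8u * 1024 * 1024 * 8)   /* 64 Mbit = 8 MB of filter */
#define K 7                                  /* probes per item */

static uint8_t *filter;

static uint64_t mix(uint64_t x)              /* stand-in for a real hash */
{
  x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;
  x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
  x ^= x >> 33;
  return x;
}

static void bloom_add(uint64_t item)
{
  for (int i = 0; i < K; i++) {
    uint64_t bit = mix(item + (uint64_t)i) % FILTER_BITS;
    filter[bit >> 3] |= (uint8_t)(1u << (bit & 7));
  }
}

static int bloom_maybe_contains(uint64_t item)
{
  for (int i = 0; i < K; i++) {
    uint64_t bit = mix(item + (uint64_t)i) % FILTER_BITS;
    if (!(filter[bit >> 3] & (1u << (bit & 7))))
      return 0;                              /* definitely not present */
  }
  return 1;                                  /* present, or a false positive */
}

int main(void)
{
  filter = calloc(FILTER_BITS / 8, 1);
  if (!filter) return 1;
  bloom_add(12345);                          /* e.g. a hashed address */
  printf("12345 -> %d, 99999 -> %d\n",
         bloom_maybe_contains(12345), bloom_maybe_contains(99999));
  free(filter);
  return 0;
}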
2756  Bitcoin / Development & Technical Discussion / Re: Proposal: Base58 encoded HD Wallet root key with optional encryption on: April 24, 2014, 10:02:18 PM
I did a simple empirical test of the false positive rate, mostly to check how much rejecting results with too many bits set improved things, and because I'm too lazy to check the math. I found the performance to be much worse than expected:


#include <stdio.h>
#include <stdint.h>
#include <stdlib.h> /*random()*/
#include <assert.h>

/* Build the 32-bit filter contribution of x: set 11 bits, each chosen by 5 bits of x. */
uint32_t toflt(uint64_t x)
{
  int i;
  uint32_t result=0;
  for(i=0;i<11;i++)
  {
    result |= 1U<<(x&31);
    x>>=5;
  }
  return result;
}

/* Test x against the filter; additionally reject if more than 11 filter bits
   are set outside x's own bit pattern. */
int chkflt(uint32_t flt, uint64_t x)
{
  int i;
  uint32_t flt2=0;
  int result=1;
  for(i=0;i<11;i++)
  {
    if (!((1U<<(x&31))&flt)){
      result=0;
      break;
    }
    flt2 |= 1U<<(x&31);
    x>>=5;
  }
  if (__builtin_popcount(flt&(~flt2))>11)return 0;
  return result;
}

int main(int argc, char **argv)
{
  int i;
  uint64_t total=0;
  uint64_t fp=0;
  (void)argc;
  (void)argv;
  assert(RAND_MAX == 2147483647);
  for(i=0;i<9973;i++)
  {
    int j;
    uint64_t x;
    uint64_t y;
    uint32_t flt;
    x = ((uint64_t)random()<<24)^random(); /* 55 random bits (11 groups of 5) */
    y = ((uint64_t)random()<<24)^random();
    flt = toflt(x)|toflt(y);               /* filter holding two elements */
    for(j=0;j<1021;j++){
      uint64_t x2;
      int match;
      x2 = ((uint64_t)random()<<24)^random();
      match = chkflt(flt,x2);
      if (x2==x || x2==y){
        if(!match){
          printf("Something bad happened.\n");
          exit(1);
        } else {
          continue;
        }
      }
      total++;
      fp+=match;
    }
  }
  printf("%llu false positives out of %llu tests.\n",(long long unsigned)fp,(long long unsigned)total);
  return 0;
}


Which yields 7508 false positives out of 10182433 tests, or 8148 with the too-many-bits test disabled.

I think that's an unacceptably high failure rate.  The approach with one of two 16-bit check values gets me 289 with the same sequence. (I could argue that the 16-bit check is too lossy too, considering that someone might get the password wrong, use the result to generate public keys, and then later be unable to recover the funds— but I think on balance the deniability feature is pretty good.)

What am I screwing up here? A copy using /dev/urandom instead of random() gets the same results, so it's not just random() having some odd correlations or a low period.
2757  Bitcoin / Group buys / Re: [SOLD OUT][Worldwide] Gridseed 1 chip + 5 chip Dual miners 0.105/0.405 btc on: April 24, 2014, 07:12:17 PM
I've unlocked this because there are people complaining to me about unshipped orders.
2758  Bitcoin / Development & Technical Discussion / Re: 15% transaction fee and 3,5 months to confirm? on: April 24, 2014, 05:26:38 PM
Sorry to burst your bubble, but 100 satoshi outputs are not spendable according to the current protocol rules that miners apply.
That isn't true!  The network tries to discourage you from _creating_ such outputs (because they tend to be uneconomical to spend), but you are in no way discouraged from spending them, other than just the cost of doing so.
2759  Bitcoin / Development & Technical Discussion / Re: Proof of Storage to make distributed resource consumption costly. on: April 24, 2014, 05:20:53 PM
I am not clear why a tree is required.  "index maps to hash(seed | index)" seems to work too?  That is fast and just as irreversible and it allows the server to compute entries in linear time, rather than log time.
Constant, not linear.

I'm not sure now if there was a reason I thought I needed the tree-structured PRNG, or if I'd just worked myself into a design corner before coming up with the working idea. If there is, I don't see it now. My original goal had been to find a function that was linear-time and mostly-sequential (e.g. not fast on PRAM) the first time for the client, but fast the second time... and always fast (log or better) for the server. The tree imposes some upper bounds on the non-precomputed search parallelism, but fairly weak ones— the non-precomputed search must do 2x the work and that work has geometrically increasing parallelism.
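A sketch in C of the flat "index maps to hash(seed | index)" construction from the quote, with splitmix64 standing in for a real hash (nothing here is from the original proposal's code; names and sizes are illustrative): the client fills and stores the whole table in linear time, while the server keeps only the seed and can regenerate any single challenged entry in constant time.

/* Sketch of the flat "index -> hash(seed | index)" mapping from the quote.
   The client fills and stores the whole table (linear time, and trivially
   parallel); the server keeps only the seed and can recompute any
   challenged index in constant time. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define TABLE_SIZE (1u << 20)

static uint64_t h(uint64_t x)                /* stand-in for a real hash */
{
  x += 0x9e3779b97f4a7c15ULL;
  x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
  x ^= x >> 27; x *= 0x94d049bb133111ebULL;
  x ^= x >> 31;
  return x;
}

static uint64_t entry(uint64_t seed, uint64_t index)
{
  return h(seed ^ h(index));                 /* "hash(seed | index)" */
}

int main(void)
{
  uint64_t seed = 0x1234;
  uint64_t *table = malloc(TABLE_SIZE * sizeof(uint64_t));
  if (!table) return 1;

  for (uint64_t i = 0; i < TABLE_SIZE; i++)  /* client: fill once, keep it */
    table[i] = entry(seed, i);

  uint64_t challenge = 31337;                /* server: recompute one entry */
  printf("match: %d\n", table[challenge] == entry(seed, challenge));
  free(table);
  return 0;
}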

The reason to reduce parallelism in the non-precomputed search is, of course, based on the notion that any ridiculously fast hardware for bypassing this would likely work by searching most of the space concurrently.

An interesting question is whether it can be changed in the other direction— rather than simplified (thanks!), slightly complexified to decrease the parallelism available to the client even further? I don't think the log() behavior in the server hurts one way or another, though I agree that if it's not doing anything important, simpler is better!
2760  Bitcoin / Development & Technical Discussion / Re: ECDSA on fpga on: April 24, 2014, 05:06:32 PM
Signing? Verification? Key generation?

This subject is relevant to my interests, but I haven't seen anyone doing anything in it.