Bitcoin Forum
Show Posts
21  Bitcoin / Development & Technical Discussion / Re: Decentralised crime fighting using private set intersection protocols on: March 25, 2013, 11:13:43 AM
Mike, did you say DMCA was "balanced"?  As are all laws written by industry.

In the present design, trackability and fungibility are more or less the same.  Trackability and fungibility are (or should be) distinct properties and the discussions should be independent.

I'm sorry, Mike, but this thread demonstrates quite clearly that Bitcoin is the most Orwellian currency in the history of mankind.  (Trackability)

Without fungibility, I think Bitcoin is doomed.  Rather than spending time figuring out how to ruin Bitcoin, effort should be spent on how to adapt Bitcoin (or another cryptocurrency) to be completely fungible.
22  Bitcoin / Development & Technical Discussion / Re: Improving Offline Wallets (i.e. cold-storage) on: March 25, 2013, 10:27:43 AM
I'm not an Armory user, but I like the concept!  I'm not sure the problems you are trying to solve are articulated and enumerated well.  Perhaps if each point of concern were individually called out, it would make following this thread a little easier.

Anyway, here are a few comments, directly relating to recent posts:

First off, I don't care for security by obscurity, although it might be OK if you are trying to protect your private installation (with no other insiders except yourself) against a generic piece of malware.  It is no longer obscure if everybody is using the same technique or if insiders can spill the beans.

-- Removing standard /dev files and replacing them with misnamed device files may not do any good.  Wouldn't a smart piece of malware use the major and minor numbers to determine which devices to attack?  Couldn't a smart piece of malware with superuser access simply create its own device files or access the drivers directly?
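[To make that point concrete: a process with root privileges does not need the original /dev entries at all.  A minimal sketch, assuming Linux and the conventional 8/0 major/minor numbers for the first SCSI/SATA disk; run as root, it recreates a block device node from the numbers alone.]

Code:
/* Renaming /dev entries is no barrier to root-level malware: a block
   device node can be recreated anywhere from its major/minor numbers. */
#include <sys/stat.h>
#include <sys/sysmacros.h>

int main(void)
{
  /* 8,0: conventional major/minor of the first SCSI/SATA disk on Linux */
  return mknod("/tmp/not_a_disk", S_IFBLK | 0600, makedev(8, 0));
}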

-- IBM sold what was essentially a null modem for parallel printer ports during the heyday of their Personal System/2 machines (circa 1987-1992).  They were sold with a floppy disk containing software to help you move your files from your old PC to their new PS/2 systems.  The block had a DB-25 on one end (to plug into your new PS/2), and a female Centronics connector on the opposite side (to plug into a standard parallel port cable).  Parallel ports on the pre-PS/2 computers were generally write-only.  The new PS/2 systems introduced R/W parallel ports.  (At least for IBM's line.)  I believe other manufacturers had something similar.  Not recommended due to high cost and obscurity.

-- Are you worried about malware changing the TX request between when it is generated and when it is signed?  Clearly, the TX request should be displayed to the operator on the secure computer before the secure computer operator applies the signature.  Humans will readily spot changed amounts.  Humans will less readily spot a changed destination address, especially one that resembles the intended destination address.  Or perhaps the operator should be asked to manually enter the most relevant parts of the transaction (especially amount and destination addresses) for confirmation, before any opportunity is given to sign the transaction.  The cleanroom computer must take a more active role than simply signing any transaction that is presented.

-- Are you worried about the destination address being changed before your insecure computer prepares the transaction?  This is a tough problem, given the bitcoin protocol.  For privacy reasons, the intended recipient cannot sign the public hash.  So, the public hash must be obtained outside the bitcoin protocol.  I think this problem is outside the scope of Armory, but if $50M transactions are expected, somebody better figure out how to prevent my request to send the money to address X from being changed to address X', which might have been generated by a vanity miner to look a lot like address X.  The change could come from somebody who hacks the Web site, somebody who mounts a successful man-in-the-middle attack against SSL, somebody who gets malware onto the machine that constructs the transaction, and so on.

-- Anything that goes across USB, LAN, serial, or parallel ports is a potential vector for malware, is subject to manipulation, and is subject to side-channel leakage.  As someone else hinted, any channel bigger than the size of a transaction has the potential to deliver malware to the cleanroom computer and/or leakage from the cleanroom computer.  Even QR codes (and barcodes) are basically unreadable by a human, so are also subject to manipulation and some side-channel leakage.  It's hard to avoid any risk of manipulation or leakage without forcing an operator to retype what the other computer displays or prints.  I suppose you could use an OCR font on each screen, read by a Webcam pointed at the other machine's screen.  But now you have to worry that malware on the dirtyroom computer is watching the operator type the password on the cleanroom computer.  A similar threat exists if sounds are exchanged by speaker and microphone, as attacks have been published that allow an adversary to determine what is being typed on a keyboard by sound alone.

-- I think the best idea mentioned is the secure Armory OS.  Build it off of something small and stable.  No dependencies on any software the Armory team hasn't written or inspected.  As they say, though, treat every line of code as a potential security vulnerability.  Can one write a perfect Armory OS / signing component?  Not in the short term, I suspect.

-- Another possibility to help avoid some attacks might be to have the operator enter valid destination addresses into the cleanroom computer by hand.  If the destination address isn't in the cleanroom computer, the transaction won't be signed.

-- Also, it might not be unreasonable to have an operator enter all aspects of an expected transaction on the cleanroom computer AND the dirtyroom computer.  Every aspect of the transaction must match or the transaction will not be signed.  This method would remove the need to send any data from dirtyroom computer to cleanroom computer.  Only a one-way channel for sending the signed TX from cleanroom to dirtyroom is required.  [A serial cable with the cleanroom's Rx signal cut might do the trick.]  Here, nothing dirty ever touches the clean computer.  The cleanroom computer keeps track of transactions it has spent.  When the owner of the system wants to move funds from the dirtyroom computer to the cleanroom computer, the transaction (at least the output script) that loads up a (new?) account must be manually entered into the cleanroom computer.
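[Here is a minimal sketch of the cleanroom side of that one-way serial link, assuming Linux, /dev/ttyS0, and a cable whose cleanroom Rx wire is physically cut.  The device path, speed, and hex string are placeholders, not Armory's actual interface.]

Code:
/* Cleanroom side of the one-way channel: open the serial port write-only
   and push the signed transaction out.  Nothing is ever read back. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
  const char *signed_tx_hex = "0100000001..."; /* placeholder signed TX */
  struct termios t;
  int fd = open("/dev/ttyS0", O_WRONLY | O_NOCTTY);

  if (fd < 0) { perror("open"); return 1; }
  tcgetattr(fd, &t);
  cfmakeraw(&t);                /* raw mode: no echo, no line editing */
  cfsetospeed(&t, B9600);       /* slow and simple */
  tcsetattr(fd, TCSANOW, &t);

  write(fd, signed_tx_hex, strlen(signed_tx_hex));
  write(fd, "\n", 1);
  close(fd);
  return 0;
}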
23  Bitcoin / Mining / Re: Compensating miners on the wrong side of The Big Fork on: March 23, 2013, 03:31:06 AM
After talking with a few groups of people, I decided a good use of Bitcoin Faucet funds would be compensating miners who had blocks that were orphaned in last week's Bit Chain Fork.

This is a one-time thing-- don't expect orphaned blocks in the future to be compensated! It is just a coincidence that I haven't had time to fix the Faucet, and have a bunch of coins waiting to be given away.

Transaction id paying to the addresses in the coinbases of the orphaned blocks: c931f1aa9f0d211dca085342ec472e77b538b55980a2c7b0ff9fab9a20a9acd2

Obviously, this is pretty popular, based on the last few posts.  But I am completely baffled.

1.  It's not unlike the Cyprus government bailing out the failing banks by giving them 10% of the citizens' bank accounts.  [Is there nobody aside from the big bankers (miners) who could use Faucet funds?]

2.  It rewards miners who ignored the advice of the night of the event.  Paraphrasing the advice:  "If you can't mine the 0.7 fork, please stop mining."  OK.  So, many miners and individuals stopped mining the 0.8 fork.  Their compensation?  Nothing.  But for those who ignored your advice and continued to mine the 0.8 fork?  Here's some coin to compensate for your efforts.  Even though those efforts were against our posted request.  And even though those efforts caused the fork to be longer than it otherwise might have been.  And even though those efforts are causing us to compensate more miners like yourself who mined the wrong fork after we asked you to stop.

3.  I don't know if Eligius mined any block on the bad fork.  But if Eligius (or other mining pools that pay directly to the end-users) did mine any orphan blocks, won't your policy of paying to the addresses in the coinbases of the orphaned blocks result in a windfall double-payment for a few individual miners, and nothing for the rest of their pool?  Update:  I don't see any such payouts.

I'm sorry I sound so negative, but it seems the big mining pools are pulling the strings of Bitcoin, just like big banks pull the strings of their governments.

4.  I thought there were 25 orphaned blocks.  It appears the referenced transaction is reimbursing approximately 40 payments of 25 coins each.

I thought Bitcoin decisions were originally to be voted on by the masses of users, not a handful of elite mining conglomerates and/or "a few groups of people".  "Groups of people" can do what they want with their bitcoin.  But, coming through Gavin, this sounds like an official bitcoin decision.
24  Bitcoin / Bitcoin Discussion / Re: Storing BTC/LTC long term in safe deposit box on: March 23, 2013, 02:23:33 AM
First, I would recommend you avoid any computer media.  Even the US Census Bureau lost a significant portion of the 1960 census before it was 20 years old due to magnetic tape bitrot.  How many 5.25" floppy disk drives do you see these days?  How about 3.5"/5.25" magneto-optical cartridge drives, which were claimed to have an immense shelf-life?  TRS-80 cassette tape drives, anyone?

I think human-readable text is the only safe and secure long-term storage method.  Whether carved onto stone tablets and stored in a pyramid, written with pencil on non-acidic paper, embossed on good old-fashioned Dymo label-maker tape, or etched on corrosion-resistant metal.

Did anybody mention secret-sharing?  (See http://en.wikipedia.org/wiki/Secret_sharing)  This is a method by which you need k of n shares to extract the secret.  Knowing k-1 shares of the secret tells you nothing about the secret.  Knowing any k shares discloses the entire secret.  The mathematics is relatively simple, and the math will still exist in 100 years.  Sample programs exist now (and should be easy to reconstruct in the future) that accept a text secret string (a BTC private key, for example), and output n text strings containing the shares.

So, split a private key into (say) 15 of 40 shares.  Give 1 share to each of your best friends.  Give a couple shares to each of your close family members.  Give a few to your lawyer.  Keep at least 15 for yourself.  Put a few in the safe deposit box.  Put a few under your mattress.  Ask everybody to keep them safe for you (or your heirs, if you die).  You also want to generate more shares than you actually distribute.  [Burn the excess.  Nobody but you should know how many shares are out there to be found.]
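[For anyone wondering what the k-of-n math looks like in code, here is a toy Shamir sketch in C.  Assumptions all mine: a 61-bit prime field, C's rand() standing in for a proper CSPRNG, and a secret small enough to fit one field element; a real tool would use a cryptographic RNG and split a full private key into chunks.  Swap K and N for 15 and 40 to match the example above; the 5-of-5 treasure chest further down is just K = N.]

Code:
/* Toy k-of-n Shamir secret sharing over GF(2^61 - 1).  A share is (x, f(x))
   for a random degree k-1 polynomial f with f(0) = secret; any k shares
   recover f(0) by Lagrange interpolation, and k-1 shares reveal nothing. */
#include <stdio.h>
#include <stdlib.h>

#define P ((1ULL << 61) - 1)   /* Mersenne prime modulus */
typedef unsigned long long u64;

static u64 mulmod(u64 a, u64 b) { return (u64)((unsigned __int128)a * b % P); }

static u64 powmod(u64 a, u64 e)          /* a^e mod P, square-and-multiply */
{
  u64 r = 1;
  for (; e; e >>= 1, a = mulmod(a, a))
    if (e & 1) r = mulmod(r, a);
  return r;
}

static u64 eval_poly(const u64 *coef, int k, u64 x)   /* share y = f(x) */
{
  u64 y = 0, xp = 1;
  int j;
  for (j = 0; j < k; j++) { y = (y + mulmod(coef[j], xp)) % P; xp = mulmod(xp, x); }
  return y;
}

static u64 recover(const u64 *x, const u64 *y, int k) /* f(0) from k shares */
{
  u64 s = 0;
  int i, j;
  for (i = 0; i < k; i++) {
    u64 num = 1, den = 1;
    for (j = 0; j < k; j++) {
      if (j == i) continue;
      num = mulmod(num, x[j]);                  /* product of x_j         */
      den = mulmod(den, (x[j] + P - x[i]) % P); /* product of (x_j - x_i) */
    }
    s = (s + mulmod(y[i], mulmod(num, powmod(den, P - 2)))) % P;
  }
  return s;
}

int main(void)
{
  enum { K = 3, N = 5 };
  u64 coef[K], x[N], y[N], sx[K], sy[K];
  int i;
  coef[0] = 123456789123456789ULL;              /* the secret */
  for (i = 1; i < K; i++)                       /* NOT cryptographic! */
    coef[i] = (((u64)rand() << 31) | (u64)rand()) % P;
  for (i = 0; i < N; i++) { x[i] = i + 1; y[i] = eval_poly(coef, K, x[i]); }
  sx[0] = x[1]; sy[0] = y[1];                   /* any K shares will do; */
  sx[1] = x[3]; sy[1] = y[3];                   /* here: shares 2, 4, 5  */
  sx[2] = x[4]; sy[2] = y[4];
  printf("recovered secret: %llu\n", recover(sx, sy, K));
  return 0;
}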

You want to always have at least 15 shares available to you, even if your house burns down or your bank safety deposit boxes are robbed.  If you have more than one circle of friends, spread the shares around.  You might want to avoid a conspiracy where 15 shares held by former "friends" can rob you.  If you think your government may serve a search warrant and take every last piece of paper from your office, your home, your car, and the vaults of all of your banks, we hope they will only get 14 shares.  Now is the time you hope some of your friends can find those shares you gave them 10 years ago, and you hope you can remember where you buried the others in the wilderness in 7 different states.  [Are you paranoid?]

If you think Bitcoin will continue to rise in value long into the future, there's other fun stuff you can do with a virtual currency.  Ever hear of the bottle of scotch shared by a group of close friends?  When the next-to-last person dies, the sole remaining survivor gathers his new friends and makes a toast to his old buddies.  For Bitcoin, let all five (or however many) friends deposit some bitcoin into a "treasure chest".  Split the private key into 5 of 5 shares.  Each person bequeaths a copy of their share to the other four buddies.  Only when the last surviving buddy gets all four preceding bequests and combines them with his own share can he open the treasure chest to retrieve the bitcoins that have been there for the last 60 years.  Or maybe the last two buddies want to get together to share the contents of the treasure chest while they are still young, so to speak.
25  Bitcoin / Development & Technical Discussion / Re: Vanitygen: Vanity bitcoin address generator/miner [v0.22] on: March 21, 2013, 11:34:16 AM
christop:  +1

Dabs:  -1

The 51-character *private* key that starts with a '5' and the 52-character *private* key that starts with a 'K' or an 'L' are equivalent to each other and can be converted back and forth between each other.  They are the same private key.

Either of the private keys (since they are one and the same) will allow you to derive the (uncompressed) *public* key, which is 65 binary octets in length OR the compressed *public* key, which is 33 binary octets in length.  For transactions where the public key is stored in the blockchain, obviously, the compressed *public* key saves some space.  However, most transactions do not contain a *public* key.  And no transaction ever contains a *private* key.

The hash of the (uncompressed) *public* key yields the standard bitcoin address.  The hash of the compressed *public* key yields an alternate bitcoin address.  Both bitcoin addresses are the same size.

The two forms of bitcoin address cannot be converted from one to the other without knowledge of the private key.  [Remember, there is only 1 private key, whether it is compressed or not.]

It is possible some wallets may not know the two forms of private key are one and the same.  It is possible some wallets may assume compressed public keys go with compressed private keys and that uncompressed public keys go with uncompressed private keys.  If that is the case, the wallet is deficient.  The mathematics are as I stated above.

Or there is something new that hasn't made it into the official Bitcoin specifications.  Smiley
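[To put bytes on the claim above, as I understand the WIF encoding: both strings are Base58Check wrappings of the same 32 secret bytes; the 52-character form just appends one 0x01 flag byte telling wallets to derive the compressed public key.  A sketch that prints only the pre-Base58Check payloads (the double-SHA256 checksum and base-58 digits are omitted); the key bytes are an arbitrary placeholder.]

Code:
/* The '5...' and 'K.../L...' WIF strings wrap the same 32-byte secret. */
#include <stdio.h>
#include <string.h>

static void dump(const char *label, const unsigned char *p, size_t n)
{
  size_t i;
  printf("%s: ", label);
  for (i = 0; i < n; i++) printf("%02x", p[i]);
  printf("\n");
}

int main(void)
{
  unsigned char key[32] = { 0x01, 0x02 };  /* placeholder 32-byte secret */
  unsigned char uncomp[33], comp[34];

  uncomp[0] = 0x80;                  /* mainnet WIF version byte */
  memcpy(uncomp + 1, key, 32);       /* Base58Check -> '5...', 51 chars */

  comp[0] = 0x80;
  memcpy(comp + 1, key, 32);
  comp[33] = 0x01;                   /* compressed-pubkey flag */
                                     /* Base58Check -> 'K/L...', 52 chars */
  dump("uncompressed payload", uncomp, 33);
  dump("compressed payload  ", comp, 34);
  return 0;
}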
26  Bitcoin / Bitcoin Discussion / Re: Amateur hour on: March 14, 2013, 09:18:55 PM
As a professional software developer this may be an opportune time to point out that the bitcoin code is an amateur production.

I have the greatest respect for Gavin and others that have donated untold hours to make bitcoin into a reality and I know from experience how tough self-funded development is.

Nevertheless, make no mistakes, the current incarnation of Bitcoin has a lot of ill-conceived design points and implementation weaknesses (as we have seen from the events of the last 24 hours).

Aside from the blunder that just resulted in a blockchain fork, there is a much larger, related issue looming on the horizon, which is the inability of the design to process large numbers of transactions. It is ludicrous we have people whining about "Satoshi Dice" creating numerous transactions. I could sit down and write a software component that could easily generate billions of transactions without breaking a sweat once it is deployed to a few thousand boxes, if I so chose, and yet you are concerned about Satoshi Dice generating a few million transactions. The problem of high-volume transaction handling needs to be answered at a new level which is, unfortunately, way above the paygrade of the current development team.

@Blinken:  ty for the BC.

@all:  Do people actually disagree with the 4 paragraphs written above, or do you just disagree with the person who made the comments? 

It's sad that the vast majority of posts on these forums are attacking the person, rather than critiquing the message.  And it's not just this message, it's virtually every posting to every topic on this board.  And I suspect that is part of the reason there is an insufficiency of developers.  [Name any successful open source project where those who offer constructive criticism are attacked like this.]

While I may be guilty, too, in my handful of posts, I have tried to object to the action or the process or the result, rather than the person.  Earlier in this thread, I referred to "Blinken's Bloatware".  Fact is, given the algorithm he was using, it was good.  But compared to a different algorithm, it was bloatware.  Blinken manned up and acknowledged with cold hard bitcash.

I happen to think SD is spamming the blockchain, and I happen to think their traffic should be blocked.  But I also know anybody else can spam the blockchain in a similar way.  [Blinken:  just clone SD *and* provide your customers with a highly customizable bot to place their bets.  Several forms of Martingale.  Charts and stats on recent numbers chosen by your Oracle.  And a single "spin" button or "auto" button to start things rolling.  Etc.  When you are rich, send me some more BC.   Grin ]  Blocking SD is just a way of buying some time while the development team works out a solution.  I know it isn't a permanent fix.

My longer-term idea is to have the client analyze the blockchain in a crude attempt to determine common ownership.  [It is very hard to maintain anonymity for a regular user.  It's even harder for a commercial enterprise that must advertise its address(es) to receive business.]  The first few transactions between any pair of users are free or cheap.  But if A and B keep exchanging coins, that becomes spam, and the price should go up.  [More precisely, if there's a cycle that rotates coins, that's spam.]  This lets Sister Theresa's charities receive their tiny donations.  This lets commercial clearinghouses exchange BC with their 10000 customers per day.  But mainly, this makes SD hire a programmer to rework their model, and the rework might as well be something that's easy on the blockchain.  I'm not going to suggest this, because I know what kind of attacks will come my way.
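[A toy sketch of that escalating-fee idea, in C.  Everything here is arbitrary: the base fee, the free-repeat threshold, and above all the ownership clustering, which is the genuinely hard part and is not attempted at all.]

Code:
/* First few transactions between one owner-pair are cheap; after that the
   required fee doubles per repeat, pricing out coin-cycling spam. */
#include <stdio.h>
#include <stdint.h>

#define FREE_REPEATS 3
#define BASE_FEE 10000ULL            /* in satoshis; arbitrary */

static uint64_t required_fee(unsigned pair_count)
{
  uint64_t fee = BASE_FEE;
  unsigned i;
  for (i = FREE_REPEATS; i < pair_count && fee < (1ULL << 40); i++)
    fee *= 2;                        /* geometric escalation, capped */
  return fee;
}

int main(void)
{
  unsigned n;
  for (n = 1; n <= 10; n++)
    printf("tx #%2u between same pair -> fee %llu\n",
           n, (unsigned long long)required_fee(n));
  return 0;
}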

If Bitcoin is going to be a currency of the future, many problems need to be addressed.  Blinken's ideas are generally sound.

If Goldman Sachs, D. E. Shaw, and others are paying their "quants" upwards of $10M per year, how much should the Bitcoin community be paying its best thinkers and developers, eh?

It is very difficult to understand the nuances of the BC protocol because it is poorly documented.  "The code is the standard."  Despite this, the Bitcoin community has been given at least two scholarly papers (one from a university and one from Microsoft).  High-quality research.  Free.  But little action.  When someone does make a reasonable suggestion, the response is often "That's coin protocol XXX.  That's not Bitcoin.  Go invent your own protocol."  Those kinds of responses are completely unproductive.

About 2.5 days ago, the blockchain forked.  A few centralized miners were able to put Bitcoin back on track.  But they left an orphan chain *25 blocks long*.  Let me repeat:  "A handful of miners orphaned a chain of 25 blocks."  If a few miners can do this, what do you think the NSA could do?  Or any other major corporation or government?  If Bitcoin becomes a real global currency, and if some nefarious outfit starts to manipulate the blockchain, I can easily see the US Government joining in the mining "to stabilize" the currency.  But, I can just as easily see the US Government determining Bitcoin addresses used by the Taliban or WikiLeaks, and saying "no spend transactions from those addresses".  If someone mines a block that includes such a transaction, Uncle Sam mines two blocks that make the first one an orphan.  If you don't think this could happen, you have *no idea* of the NSA's capabilities.  They could do this tomorrow.  I don't know how to solve this problem.  It may be the fatal flaw in Bitcoin.

The heroic action 2.5 days ago was not in the best short-term interests of the miners who switched, but I reckon they were benevolent and also believed it was in their best long-term interests.  Next time, they might not be so heroic.  Or next time, the centralized mining pool operators might decide their self-interest overrides the interest of the community.  Centralized mining may be a fatal flaw in Bitcoin.
27  Bitcoin / Bitcoin Discussion / Re: Amateur hour on: March 13, 2013, 08:06:01 PM
That's a nice thought, but I checked out the donations to the Armory address and it came to about 200 bitcoins. That would last me about 2 weeks.

You can blow 8 grand in two weeks?

that's sickening. (yea, i know there's plenty of people who are even more spoiled. this is still sickening)

Maybe you do "contribute more to society" for what you earn, but the fact that you take THAT amount of money for granted kinda makes me wanna punch you.

8 grand in two weeks is way, way more than plenty enough to live happily on.
Relax.  I doubt you're being trolled.  Look at this article:  http://www.sfgate.com/bayarea/matier-ross/article/S-F-police-chief-highest-paid-U-S-cop-3815665.php  The article asserts that "401 San Francisco city workers made more than $200,000 in the past fiscal year".  Don't you think a first-rate programmer with an IQ of 140 should earn something equivalent to a civil servant in a fairly average American city?

$4000 per week for a good contract programmer is not at all unusual, assuming they are paying their own taxes, health insurance, liability insurance, IRA/401K contributions, etc., etc., etc.  Sometimes the contracting organization stiffs you.  Sometimes you underbid and have to work 3x as long to get the job done.  Sometimes you have to work crazy long hours.  Rarely are you under contract 52 weeks a year.  And never do you have any job security.

OTOH, you can probably hire a code monkey for about $1k per week.  You get what you pay for.
28  Bitcoin / Bitcoin Discussion / Re: Amateur hour on: March 13, 2013, 06:07:02 PM
That is very good. I did not think of using Euclid's algorithm. I should send you a bitcoin for that one.
Thank you.  The address is in my signature.
29  Bitcoin / Bitcoin Discussion / Re: Amateur hour on: March 13, 2013, 08:18:44 AM
The Fed doesn't pay me enough to put up with this, but ok... here is byte code for x86 that translates either a decimal or hexadecimal string to a binary value (yes, I actually wrote the byte code, and yes, I can read and write x86 machine encodings in hex):

31C053515657555231F68A0280F8307C1080F8397F0B2C3083C6015083C201EBE9C7C50A000000C7C10100000031DB89F783FE00741058F7E101C383EE0189C8F7E589C1EBEB89D85A01FA5D5F5E595BC331C053515657555231F68A0280F8307C0980F8397F042C30EB0C80F8417C1080F8467F0B2C4B83C6015083C201EBDBC7C510000000EB9F

The decimal reader is at offset 0 and expects a non-digit terminated string at an address in the EDX register. The hex reader is at offset 136 and expects the same input. Both routines return the answer in EAX register. Total bytes: 136.

Here is a more readable piece of code in Java if you are not into machine language:

   /** Number of combinations
    *  In the case that items > slots this value is items! / (slots! * (items - slots)!) .
    *  Goes up to Choose( 66, 33 ) = 7219428434016265740, the maximum that fits in a long.
    *  This is an optimal implementation that uses factorization to reach the largest exact values possible.
    *  Try Choose( 60, 30 ) in a web-based calculator, for example, and you will not get an exact answer.
    *  This is because naive implementations do not use factorization.
    *  @param items  The count of unique things to be combined.
    *  @param slots  The number of slots or opportunities into which to combine them.
    *  @return number of possible unique combinations or 0 in the event of an error or invalid input or -1 in the event of an overflow
    */
   public final static long combinationCount( int items, int slots ){
      if( items < 1 || slots < 1 ) return 0;
      if( slots > items ) return 0;
      if( slots == items ) return 1;
      int extra = items - slots;
      if( extra > slots ){
         slots = extra;
         extra = items - slots;  // extra always has as many or fewer items than slots
      }
      int[] aiNumeratorFactors = new int[100];
      for( int xNumeratorFactorial = items; xNumeratorFactorial > slots; xNumeratorFactorial-- ){
         int[] factors = factor( xNumeratorFactorial );
         if( factors == null ) return 0; // an error has occurred
         for( int xFactor = 1; xFactor <= factors[0]; xFactor++ ){ // add factors to numerator factors list
            if( aiNumeratorFactors[0] == aiNumeratorFactors.length - 1 ){ // need to expand list
               int[] aiNumeratorFactors_new = new int[aiNumeratorFactors.length * 2];
               System.arraycopy( aiNumeratorFactors, 0, aiNumeratorFactors_new, 0, aiNumeratorFactors.length );
               aiNumeratorFactors = aiNumeratorFactors_new;
            }
            aiNumeratorFactors[0]++;
            aiNumeratorFactors[aiNumeratorFactors[0]] = factors[xFactor];
         }
      }
      int[] aiDenominatorFactors = new int[100];
      for( int xDenominatorFactorial = extra; xDenominatorFactorial > 1; xDenominatorFactorial-- ){
         int[] factors = factor( xDenominatorFactorial );
         if( factors == null ) return 0; // an error has occurred
         for( int xFactor = 1; xFactor <= factors[0]; xFactor++ ){ // add factors to denominator factors list
            if( aiDenominatorFactors[0] == aiDenominatorFactors.length - 1 ){ // need to expand list
               int[] aiDenominatorFactors_new = new int[aiDenominatorFactors.length * 2];
               System.arraycopy( aiDenominatorFactors, 0, aiDenominatorFactors_new, 0, aiDenominatorFactors.length );
               aiDenominatorFactors = aiDenominatorFactors_new;
            }
            aiDenominatorFactors[0]++;
            aiDenominatorFactors[aiDenominatorFactors[0]] = factors[xFactor];
         }
      }
      int[] aiNumeratorFactors_fitted = new int[aiNumeratorFactors[0]];
      System.arraycopy( aiNumeratorFactors, 1, aiNumeratorFactors_fitted, 0, aiNumeratorFactors[0] );
      aiNumeratorFactors = aiNumeratorFactors_fitted;
      int[] aiDenominatorFactors_fitted = new int[aiDenominatorFactors[0]];
      System.arraycopy( aiDenominatorFactors, 1, aiDenominatorFactors_fitted, 0, aiDenominatorFactors[0]);
      aiDenominatorFactors = aiDenominatorFactors_fitted;
      java.util.Arrays.sort( aiNumeratorFactors );
      java.util.Arrays.sort( aiDenominatorFactors );
      long nTotal = 1;
      int xNumerator = 0;
      int xDenominator = 0;
      while( true ){
         if( xNumerator == aiNumeratorFactors.length ) return nTotal;
         if( xDenominator < aiDenominatorFactors.length && aiNumeratorFactors[xNumerator] == aiDenominatorFactors[xDenominator] ){
            xDenominator++;
         } else {
            if( Long.MAX_VALUE / nTotal < aiNumeratorFactors[xNumerator] ) return -1; // overflow
            nTotal *= aiNumeratorFactors[xNumerator];
         }
         xNumerator++;
      }
   }

   /** Assumed helper (not shown in the original post): returns the prime
    *  factors of n in an array whose element 0 holds the factor count and
    *  elements 1..count hold the factors themselves, as the code above expects. */
   private final static int[] factor( int n ){
      int[] factors = new int[33]; // an int has at most 31 prime factors
      for( int p = 2; (long)p * p <= n; p++ ){
         while( n % p == 0 ){ factors[0]++; factors[factors[0]] = p; n /= p; }
      }
      if( n > 1 ){ factors[0]++; factors[factors[0]] = n; }
      return factors;
   }


Snippet war! Everyone post a snippet of what they have written. I will evaluate and declare a winner Smiley


While I agree with a few things blinken has to say, this is ridiculous.  The following code should emulate blinken's bloatware.  Complete with returning zero on invalid inputs and returning -1 on overflow.
Code:
#include <stdio.h>
#define MAXLONGLONG 0x7FFFFFFFFFFFFFFF

/* Returns GCD(a, b) using Euclid's algorithm of antiquity */
long unsigned int euclid(long unsigned int a, long unsigned int b)
{ return a%b==0 ? b : euclid(b, a%b); }

/* Returns n choose m */
long int combin(long int n, long int m)
{
  long int i;
  long unsigned int N=1;
  long unsigned int D=1;
  if (m+m>n) m=n-m;
  if (m < 0) N=0;
  for (i=0; i<m; i+=1)
  {
    long unsigned int t = euclid(n-i, m-i);
    long unsigned int u = euclid(D, (n-i)/t);
    long unsigned int v = euclid(N, (m-i)/t);
    if (MAXLONGLONG / (N/v) <= (n-i)/t/u) { N=-1; break; }
    N = N/v * ((n-i)/t/u);
    D = D/u * ((m-i)/t/v);
  }
  return N;
}

Here is the test harness:
Code:
void testcombin(long int n, long int m)
{ printf("%ld choose %ld = %ld\n", n, m, combin(n, m)); }

int main(void)
{
  testcombin(2, -1);
  testcombin(2, 3);
  testcombin(6, 0);
  testcombin(6, 2);
  testcombin(10, 7);
  testcombin(15, 7);
  testcombin(63, 30);
  testcombin(66, 33);
  testcombin(121976, 4);
  testcombin(3810778, 3);
  testcombin(4294967295, 2);
  testcombin(4294967296, 2);  // 2^32 choose 2 = 2^63 - 2^31; close enough that the conservative overflow check flags it
  return 0;
}

And here is the test output:
Code:
2 choose -1 = 0
2 choose 3 = 0
6 choose 0 = 1
6 choose 2 = 15
10 choose 7 = 120
15 choose 7 = 6435
63 choose 30 = 860778005594247069
66 choose 33 = 7219428434016265740
121976 choose 4 = 9222845730360011050
3810778 choose 3 = 9223364155031292776
4294967295 choose 2 = 9223372030412324865
4294967296 choose 2 = -1
30  Bitcoin / Mining / Re: Soft block size limit reached, action required by YOU on: March 12, 2013, 10:20:31 PM
[As I currently understand it, the problem is thought to be a limit on the number of locks on the BDB database, which is in turn a function of the number of transactions in a block and/or the number of transactions referenced by a block, which is indirectly related to the size of a block.]
There is some evidence from other threads that it might be possible to fix old clients by adding a single configuration file, without requiring any code updates at all.
I am aware of that.  The problem remains.  Any 0.7.x (or prior) client that hasn't upgraded or updated the BDB configuration file will soon be on a hard fork of the block chain.  Aside from the risk to those individuals, I think that would be extremely harmful to the overall credibility of Bitcoin.
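[For the record, the one-file workaround being passed around is a Berkeley DB DB_CONFIG file dropped into the client's data directory to raise the lock-table limit.  If I recall the circulating number correctly, it is the single line below.  Note it only moves the limit further out; it doesn't remove it.]

Code:
set_lk_max_locks 537000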
31  Bitcoin / Mining / Re: Soft block size limit reached, action required by YOU on: March 12, 2013, 09:08:19 PM
I suppose you forgot already why this thread started - we ran out of block space and transactions started stacking up, unconfirmed. This almost immediately caused lots of user complaints.

Exactly because people did NOT upgrade to Bitcoin 0.8 fast enough, which fixed the bug in 0.7 and below, we are now back to square one - stuck with blocks that are too small to meet demand for transactions. Expect the complaints to start again soon unless filtering of SD becomes common.

The options I listed in my first post are still the only options. If you don't want to filter out SD transactions then the default "do nothing" option means that transactions will become very expensive, and small quantities of Bitcoin won't be spendable at all.

The fork is certainly a big problem: unfortunately, nobody realized it would happen. It's not even clear yet what kind of testing would have revealed this issue. Simply making big blocks is apparently not enough to trigger it. That said, we knew that BDB was generally a rat's nest of weird problems, which is one reason why I ported Bitcoin to LevelDB. If the block sizes had been raised a few months from now, we'd probably have just abandoned the 0.7 users to their fate, but we got unlucky with the timing.

Assuming we want Bitcoin to scale up, we'll have to fork 0.7 and lower off the chain sooner rather than later and then raise the block size limits again.

Users will not upgrade on your schedule, except very begrudgingly!  Yet, the sky is not falling.  When the problem is better understood, it should be possible for miners to raise the soft block-size limits.  When the actual limit in BDB that causes 0.7.x clients to fail a block is well understood, that condition needs to be added (by miners only) to the 0.8 code they use to generate and verify a block.  Then, after some testing and a gentle roll-out of bigger block sizes, that buys some breathing room.  You can add a couple of factors of 2 in this manner, but it appears to me this won't keep up with SD's growth rate for long.

[As I currently understand it, the problem is thought to be a limit on the number of locks on the BDB database, which is in turn a function of the number of transactions in a block and/or the number of transactions referenced by a block, which is indirectly related to the size of a block.]

@all:  Gamblers have a different mindset than those who are engaged in commerce.  Let's just call it entertainment, and not get too preachy.  Gamblers will pay substantial fees for entertainment that those engaged in ordinary commerce will not (or cannot) pay.  (15%-20%, typically, at a parimutuel US horse race; 40-50% at a typical US state lottery; 5% at American roulette; etc.)  I don't think it is possible to raise the fees in a way that would make SD unprofitable without raising the fees in a way that makes bitcoin unprofitable for many other economic purposes.
32  Bitcoin / Bitcoin Discussion / Re: Alert: chain fork caused by pre-0.8 clients dealing badly with large blocks on: March 12, 2013, 07:33:33 PM
This has been interesting to read.  The problem has almost corrected itself, and may be corrected by the time I press "submit".

Users never needed to downgrade.  Miners didn't really need to downgrade either.  They needed to stop producing very large blocks.  And they needed to be poked to ignore the higher block, temporarily.  [Downgrading accomplishes both, but I doubt that most miners went to that trouble.]


Amazing. So you read this entire thread, came away with zero clue as to what happened, and decided to start preaching to the rest of us?

Maybe try reading it again.

Fact:  The official news at the top of the forum (today, 16 hours after the event) says "News: The bug appears to be resolved now. Merchants can accept transactions. Mining on 0.8 is OK, but you should not increase the target block size from the default."

I'm not sure which of my quotes, above, you believe were clueless or preachy.  Do these quotes differ materially from today's official "News"?  There was a lot of FUD on this thread.  Much of it was unwarranted.  I thought this part of my post was clarifying the facts, not preaching.

33  Bitcoin / Bitcoin Discussion / Re: Alert: chain fork caused by pre-0.8 clients dealing badly with large blocks on: March 12, 2013, 07:14:08 PM
Even if the problem hadn't been found during testing, if miners had gradually rolled out the change to 0.8 (with a built-in bigger block-size limit), then when the problem cropped up, as long as 51% of the mining power hadn't been on the new "big block 0.8 release", there would not have been a hard fork.
The problem that cropped up is a hard fork, so by definition it would have happened. It's clear now that a hard fork is unavoidable. The only question is when does it happen and who will lose out because of it.
If only a few miners had been on "big block 0.8 release", they could have produced a block the rest of the world didn't understand.  But wouldn't the rest of the world have continued on without that block?  A single orphan block.  I don't exactly consider this a "hard fork".  Am I missing something here?

The problem was that 51% of the miners were running an essentially untested combination of software and blocksize.  Without manual intervention, the fork that evolved would have continued forever.  *This* is what I call a "hard fork".

Also, I'm afraid it's very easy to say "just test for longer" but the reason we started generating larger blocks is that we ran out of time. We ran out of block space and transactions started stacking up and not confirming (ie, Bitcoin broke for a lot of its users). Rolling back to the smaller blocks simply puts us out of the frying pan and back into the fire.

We will have to roll forward to 0.8 ASAP. There isn't any choice.
The official release of 0.8.0 was just 3 weeks ago:  https://bitcointalk.org/index.php?topic=145184.msg1540252#msg1540252

Unless you are the IT department at Citibank, you cannot possibly expect all of your branches and customers to upgrade on your schedule.  Forcing a tight upgrade schedule on customers and merchants will kill bitcoin as surely as a forked blockchain.  I think LukeJr suggested a 2 year upgrade window.  Seems more reasonable than 3 weeks.  Bitcoin isn't the same as Ubuntu, but look at their support window.
34  Bitcoin / Bitcoin Discussion / Re: Alert: chain fork caused by pre-0.8 clients dealing badly with large blocks on: March 12, 2013, 09:30:48 AM
To be accurate, it wasn't "the lead developer" who suggested raising the block size limit, it was me. I am a Bitcoin developer, but Gavin is the lead. So you can blame me if you like.
Apologies to "the lead developer".  I have amended my original post.  That portion of my original post was not intended so much to cast blame on an individual as to point out a fundamental flaw in the process that permitted an "off-the-cuff" remark to be taken seriously (and almost simultaneously) by many miners.  I am a software developer, and I know what it is like to release upgrades to large legacy projects.  [Yuck, usually.]  Assuming you guys follow *some* process, a planned line-item in a scheduled release to increase the blocksize should have led to significant testing of that line-item.

Even if the problem hadn't been found during testing, if miners had gradually rolled out the change to 0.8 (with a built-in bigger block-size limit), then when the problem cropped up, as long as 51% of the mining power hadn't been on the new "big block 0.8 release", there would not have been a hard fork.  (Early adopters of "big block 0.8 release" would have found they were generating blocks that quickly became orphans.  There would have been no panic and no great urgency to solve the problem by fiat.)

[I'm not trying to second-guess the developers.  It's always easy to see with 20/20 hindsight.  But there should be a process.  And the process should be followed.]

There are two lessons here:
1.  Avoid off-the-cuff suggestions unless you *know* the impact is minor.
2.  Where possible, roll out the tested releases gradually -- especially to mining pools, who have become so centralized, a few of them can easily tip the scales to 51%.
35  Bitcoin / Bitcoin Discussion / Re: Alert: chain fork caused by pre-0.8 clients dealing badly with large blocks on: March 12, 2013, 07:20:58 AM
It *was* indirectly because of the size of the block.  Even at 166 bytes each (or whatever the minimum size of a transaction is), a 250K block cannot contain 1700+ transactions:  1,700 × 166 bytes is roughly 282K.  And a number of transactions that exceeds the BDB configuration is believed to be the root of the problem.  I know hindsight is 20/20, but I will give the developers credit and assume testing all extremes, from 1 really big transaction to many really tiny transactions, probably would have happened in a formal release cycle.  No such testing was done, chiefly because this was an off-the-cuff suggestion, not a formal release.
The problem is with the old version, and is a previously unknown bug. Are you saying the developers should have thought to test the old version for an unknown bug before releasing the new version?
It is quite normal, when developing a complex system, that a new release must maintain compatibility with some of the "bugs" that were present in the old release.  I believe the developers of bitcoin follow that philosophy.  By definition, an *unexpected* difference in behavior between the old client and the new client is a bug in the new client.

I believe if the developers had decided to raise the blocksize before releasing 0.8, they may have found this bug during their testing.  The problem is that NO FORMAL TESTING was performed before the lead developer's suggestion was taken to heart by several mining pools.  Don't ask your "customers" to do X, unless you have tested that doing X causes no harm.  As someone else already pointed out on this thread, the development team was well aware that changing to a new database was a possible source of trouble.  I will give the developers credit and assume they tested thoroughly.  So I repeat, if the blocksize had been raised *before* the official release of 0.8, at least there would have been a chance of detecting the difference in behavior, and thus a chance of avoiding this fork.
36  Bitcoin / Bitcoin Discussion / Re: Alert: chain fork caused by pre-0.8 clients dealing badly with large blocks on: March 12, 2013, 07:03:00 AM
It was fascinating to watch the higher block chain continue to build.  Was it ignorance by some large miners?  Or was it an intentional attempt to keep the blockchain forked?
No, it was night. I woke up at 3am thanks to one guy who called me over Skype. I have a lot of monitoring scripts which trigger an SMS alert when something bad happens on the pool, but there was almost no chance to catch this, because the bug wasn't in the bitcoind 0.8 used by the pool, but in the previous (and still widely spread) bitcoind 0.7.
I include that in "ignorance".  You were ignorant of the situation, else you would have responded.   Smiley  I guess I assumed there would be some sort of "friend net" by which the major mining pool operators would hear of trouble demanding their attention.  Such is the nature of a distributed, volunteer project, eh?

My mining client is configured to print the message that goes to the server when it finds a share.  From the hexadecimal code in the message, I could see the block header's Previous Block field, and I could see my pool was mining the wrong fork.  The next pool I switched to was also mining the wrong fork.  The third pool I tried was mining the correct (now current) fork.
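[For anyone who wants to perform the same check: in the 80-byte block header, the Previous Block field occupies bytes 4 through 35, stored little-endian, so reverse the bytes before comparing against the big-endian hashes the block explorers display.  A sketch, using the well-known genesis block hash as the worked example.]

Code:
/* Extract the Previous Block hash from a block header hex dump and flip
   it from little-endian wire order to the familiar big-endian display. */
#include <stdio.h>

int main(void)
{
  /* first 72 hex chars of a header: 4-byte version, then 32-byte prev hash
     (little-endian); this example is block 1, whose prev is the genesis */
  const char *hdr = "01000000"
      "6fe28c0ab6f1b372c1a6a246ae63f74f931e8365e15a089c68d6190000000000";
  char prev[65];
  int i;
  for (i = 0; i < 32; i++) {         /* reverse byte order, LE -> BE */
    prev[2 * i]     = hdr[8 + 2 * (31 - i)];
    prev[2 * i + 1] = hdr[8 + 2 * (31 - i) + 1];
  }
  prev[64] = '\0';
  printf("previous block: %s\n", prev);  /* genesis: 000000000019d668... */
  return 0;
}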
37  Bitcoin / Bitcoin Discussion / Re: Alert: chain fork caused by pre-0.8 clients dealing badly with large blocks on: March 12, 2013, 06:45:01 AM
This all started when the lead developer issued an "off the cuff" suggestion to enlarge the block size from 250K to 500K.  A miner went the extra step and enlarged the block to 1M, which *should* have been legal.  But there wasn't enough system and integration testing.  [There was *none* as far as I can tell with respect to this suggested change.]  Perhaps the community will learn to avoid untested changes in the future.
It wasn't the size of the block that caused the problem.
It *was* indirectly because of the size of the block.  Even at 166 bytes each (or whatever the minimum size of a transaction is), a 250K block cannot contain 1700+ transactions:  1,700 × 166 bytes is roughly 282K.  And a number of transactions that exceeds the BDB configuration is believed to be the root of the problem.  I know hindsight is 20/20, but I will give the developers credit and assume testing all extremes, from 1 really big transaction to many really tiny transactions, probably would have happened in a formal release cycle.  No such testing was done, chiefly because this was an off-the-cuff suggestion, not a formal release.
38  Bitcoin / Bitcoin Discussion / Re: Alert: chain fork caused by pre-0.8 clients dealing badly with large blocks on: March 12, 2013, 06:22:09 AM
This has been interesting to read.  The problem has almost corrected itself, and may be corrected by the time I press "submit".

Users never needed to downgrade.  Miners didn't really need to downgrade either.  They needed to stop producing very large blocks.  And they needed to be poked to ignore the higher block, temporarily.  [Downgrading accomplishes both, but I doubt that most miners went to that trouble.]

This all started when one of the developers issued an "off the cuff" suggestion to enlarge the block size from 250K to 500K.  A miner went the extra step and enlarged the block to 1M, which *should* have been legal.  But there wasn't enough system and integration testing.  [There was *none* as far as I can tell with respect to this suggested change.]  Perhaps the community will learn to avoid untested changes in the future.

What prompted the suggestion to enlarge the block size?  A single site comprises some 70%+ of the traffic on bitcoin.  They are growing by leaps and bounds as bots are now doing the betting.  Whether you think SD was the "hero" for helping this bug come to light or the "villain" whose actions brought about the "off the cuff" remark and the ultimate fork is your call.  [There is plenty of heat in other threads, so let's please keep that argument out of this thread.]

It was fascinating to watch the higher block chain continue to build.  Was it ignorance by some large miners?  Or was it an intentional attempt to keep the blockchain forked?  Theoretically, this could have been fixed hours ago.  But I saw some well-known pool operators working on the chain that had (by consensus, apparently) been decreed to be the loser of the race.

Fun times.   Smiley

39  Bitcoin / Bitcoin Discussion / Re: Do you think SatoshiDice is blockchain spam? Drop their TX's - Solution inside on: March 10, 2013, 06:48:39 AM
@psy:  Thanks for spreading this patch.  I wrote in another thread about an unintended consequence of this patch, so I won't repeat it here.  But, for a regular node, not running a mining service, this patch is a significant improvement.  It cuts your memory usage.  It significantly cuts your outgoing bandwidth.  And it may cut your total CPU usage, slightly.

@blazr:  I, too, have turned off one full node.  This was a well-connected node at a commercial service provider.  The resources provided to my VPS are no longer sufficient to run the standard client.  I'm not willing to pay more real dollars to upgrade to the next level of service.

[Maybe I should collect donations from the SD supporters, who think I'm censoring SD.  $20 per month should be sufficient to go to the next-level VPS as long as SD is only consuming 60-70% of the transactions.  Send me a PM stating what your donation is for, and when I've collected 3 months' worth, I'll restart my node.]

@Debian Squeeze users:  I am running the patched client on Debian Squeeze.  The one "required" prerequisite I wasn't able to install via normal apt-get was for UPNP support.  But I don't need *or want* UPNP support.  (The USE_UPNP option removes the requirement for that one prerequisite.)  These four commands are sufficient to rebuild both the bitcoind and bitcoin-qt clients.  Run these commands from the directory that contains the "build", "doc" and "src" directories.

Code:
qmake USE_UPNP=-
make
qmake -o Makefile.test bitcoin-qt.pro USE_UPNP=-
make -f Makefile.test

(I didn't show building and running the test script.  It works fine until it gets to the UPNP tests.)

After this is decently adopted by the user community, the next step will be to preferentially connect to "SD-free" nodes (and to disconnect from nodes who are sending me the SD spam).
40  Bitcoin / Mining / Re: Soft block size limit reached, action required by YOU on: March 08, 2013, 07:02:23 PM
Bottleneck is not hardware, but bandwidth. Average joe's internet connection has to be able to handle a full node. ( 1Mbit/s w/ 10Mbit/s peak)
Not everybody is cut from the same mold.  Different folks have different limits.

I was running a full node on a vendor's VPS service (roughly equivalent to an Amazon Micro Instance).  [Very good connectivity, no bandwidth limit, good data rate.]  As of a week or two ago, it can no longer handle the volume of transactions.  Virtual memory is being exhausted.  I'm not willing to pay the higher fees for the next level of service.  That node is now gone.

If a means isn't found for the average Joe to be compensated for running a full node, they are going to start dropping out as they perceive abusive services like Satoshi Dice costing them money.

@psy:  Thank you.  I saw Luke-jr's (more sophisticated) patch, but I haven't had the time to analyze it.