Bitcoin Forum
May 17, 2024, 09:22:01 PM *
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
  Show Posts
2641  Bitcoin / Bitcoin Discussion / Re: Cypriot bank deposits hit in €10bn bailout on: March 16, 2013, 05:56:09 PM
I am wondering why it wouldn't have been easier to just print more money, thus taking value out of all deposits, whether in trouser pockets, under mattresses, in banks, in the back of the glove compartment, wherever.


Because they use the euro, so they can't print at will?
2642  Bitcoin / Development & Technical Discussion / Re: UTXO set size does not increase storage costs, you just suck at computer science on: March 16, 2013, 03:00:06 PM
In simple words: a distributed and shared UTXO dataset, right?

So we could also have a distributed and shared full blockchain?

Haha, sorry, I didn't mean to offend anyone; I'm just somewhat disappointed that people consider only the simplest solutions and do not want to see the bigger picture.

Let's start with an example... Suppose I am a miner in a kinda exotic situation: I have unlimited computational and networking resources, and enough temporary storage (RAM) to verify the blockchain, but local permanent storage is incredibly scarce. But if I want to store something, I can store it externally over the network. Say, in a DHT like Freenet, Tahoe-LAFS or something like that.

So I want to download and scan the blockchain up to block N just once; after that I need to be able to verify the validity of any transaction. Is it possible?

Of course. As I go through the blockchain I will build an index and store it in an external DHT. I only need to keep the hash of the latest block locally, 32 bytes that is.

Having this hash, I can retrieve things from the DHT until I find the transaction outputs I need to verify in the index I've built. (I do not care whether the DHT is secure: if something was tampered with, the hashes won't match.)

This is kinda obvious: if secure distributed file systems can exist, then I can simply store data in such a file system instead of a local file system.

But... how much would it cost me to verify a transaction? Well, tree-like data structures generally have look-up costs on the scale of log2 N, where N is the number of elements. In the worst case each individual satoshi is an individual UTXO, so we have 21000000*100000000 = 2100000000000000 ≈ 2^51 UTXOs. Thus I need something like 51 lookups to find a UTXO in a binary search tree. Or just 9 lookups if I have a 64-ary tree.
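To make that arithmetic concrete, here is a quick sketch (Python; the function name is mine) of the lookup-depth estimate for the worst case above:

```python
import math

# Worst case: every satoshi is its own UTXO, ~2^51 entries.
TOTAL_SATOSHIS = 21_000_000 * 100_000_000

def lookups(n_items: int, fanout: int) -> int:
    """Depth of a balanced fanout-ary search tree over n_items,
    i.e. the number of DHT round trips needed to reach a leaf."""
    return math.ceil(math.log2(n_items) / math.log2(fanout))

print(lookups(TOTAL_SATOSHIS, 2))   # binary tree -> 51
print(lookups(TOTAL_SATOSHIS, 64))  # 64-ary tree -> 9
```

A wider fanout trades a fatter node (more hashes fetched per step) for fewer round trips, which is exactly the latency argument below.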

But people can argue that 9 lookups per UTXO is a lot... network latency, yada yada. How about zero?

That's possible. Suppose the person who sends a transaction knows how I store the index in the DHT; it isn't a secret. To make sure that I'll include his transaction in a block, he will fetch all the data I need from the DHT himself, and send me a message with his transaction and all the information I need.

I don't need to look up anything in the DHT myself; I only need the data that was included in the message. And this is secure: if the data was tampered with, the hashes won't match.
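That "hashes won't match" check is just hash-path verification against the one root I keep. A minimal sketch (Python; the hashing scheme and pair ordering are my assumptions, not a spec):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_path(leaf: bytes, proof, root: bytes) -> bool:
    """proof is a list of (sibling_hash, sibling_is_left) pairs, leaf to root.
    Recomputes the root from the supplied data; any tampering changes it."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

The sender ships `leaf` plus `proof` in the message; I only compare the result against the 32-byte root I keep locally.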
 
So, basically, the number of transactions I can include in a block is limited by my computational and network abilities, but storage capacity/cost is irrelephant.

But what about blocks mined by others? Oh...

Well, it is possible to do 9 DHT lookups per UTXO mentioned in a block. The number of outputs is limited, and I can do lookups in parallel, so it isn't such a big problem. But still...

You know, miners are friendly guys, so how about they all use the same DHT, and then include confirmation information together with the block they have just mined?

So I receive a new block plus supplementary information, which is all that is needed to confirm that the block is valid.

In the end, it is possible to do all mining with only 32 bytes of permanent secure storage. It requires somewhat more bandwidth, though. But the extra bandwidth cost is approximately proportional to the block size. So maybe not a big problem...

E.g. I either need 128 GB of RAM, an array of hard drives and a 100 Mbit/s pipe. Or I need 1 GB of RAM, no hard drives at all and a 1 Gbit/s pipe. Which is cheaper?

So what I'm talking about is a storage/bandwidth trade-off. Using less storage might increase latency, but possibly in a way that won't be critical.

Next time I will teach you how to implement a blockchain-based cryptocurrency in such a way that new miners can start mining right away without downloading the whole blockchain. Stay tuned...

2643  Bitcoin / Development & Technical Discussion / Re: How to force a rule change by bloating the UTXO set on: March 16, 2013, 12:57:46 AM
As we increase the max block size, we may have a UTXO index, which is the total number of outputs in a block minus the total number of inputs in a block, and have a hard limit for it

Problem solved.
2644  Bitcoin / Development & Technical Discussion / Re: Max block size should also consider the size of UTXO set on: March 16, 2013, 12:56:37 AM
Using this proposal, we can increase the max block size without bloating the UTXO set
This.

The argument that a block size limit is necessary to prevent excessive centralization applies equally well to a limit on the per block expansion of the utxo set.  So I think this justifies the creation of such a limit, if a block size limit is to be maintained.

Also, I don't see why we would need to have a single metric to describe usage of the two scarce resources, block space and utxo set space.  Wouldn't it be simpler to just have separate limits for both?  They consume distinct physical resources - bandwidth and storage, respectively - and so these parameters should be somewhat orthogonal.

Agreed.

We may have a UTXO index, which is the total number of outputs in a block minus the total number of inputs in a block, and have a hard limit for it
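That per-block limit is simple to state as code. A sketch (Python; the transaction representation is made up for illustration):

```python
def utxo_delta(block_txs) -> int:
    """Net UTXO set growth contributed by a block:
    total outputs minus total inputs across its transactions."""
    return sum(len(tx["outputs"]) - len(tx["inputs"]) for tx in block_txs)

def within_limit(block_txs, hard_limit: int) -> bool:
    """A block is acceptable only if it grows the UTXO set by at most hard_limit."""
    return utxo_delta(block_txs) <= hard_limit
```

A transaction that consolidates many inputs into few outputs has negative delta, so it buys room for output-heavy transactions in the same block.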
2645  Bitcoin / Development & Technical Discussion / Re: Max block size should also consider the size of UTXO set on: March 15, 2013, 04:47:05 PM
No, this is a hard fork.
Granted, your block still needs to be valid for the rest of the network to accept it.

But your goal of "encouraging good transactions" does not require any changes to the protocol. You can reject any non-good transactions in your pool and just ask other miners to support you by mining in your pool.

Isn't that a much better solution than putting mandatory fee selection rules in the protocol?

This is actually a response to the other thread: https://bitcointalk.org/index.php?topic=153133.0

Using this proposal, we can increase the max block size without bloating the UTXO set
2646  Bitcoin / Development & Technical Discussion / Re: Max block size should also consider the size of UTXO set on: March 15, 2013, 04:25:17 PM
We should
Start a mining pool. You can implement any rule you want with regards to which transactions get included and which do not. Convince other miners that your rules are the best and they will vote with their hashing power.

No, this is a hard fork.
2647  Bitcoin / Development & Technical Discussion / Max block size should also consider the size of UTXO set on: March 15, 2013, 03:45:52 PM
This is obsolete. See my new proposal below: https://bitcointalk.org/index.php?topic=153401.msg11329252#msg11329252

We should encourage "good transactions", which are: 1. small in size, 2. with fewer outputs, 3. with practically spendable outputs

Targets 2 and 3 are important for maintaining a reasonable size of the UTXO set.

The current block size restriction, however, considers only target 1. Miners will accept polluting transactions (with lots of practically unspendable outputs) as long as enough fee is paid. However, every full node has to maintain the inflated UTXO set.

If the block size limit is to be increased, it could be determined by more factors, not just the absolute size.

I have a rough idea:

Let's denote:

S0, S1, ..., Sn: amount of satoshis in outputs 0, 1, ..., n.
Size: size of the transaction in kB.

The adjusted transaction size is defined as:

Size * (1/(log2(S0)+1) + 1/(log2(S1)+1) + ... + 1/(log2(Sn)+1))

The value of 1/(log2(Sn)+1) grows rapidly as the output value shrinks. The value is 1 for 1 satoshi, 0.5 for 2 satoshis, 0.13 for 1 µBTC, 0.057 for 1 mBTC, and 0.036 for 1 BTC.

The adjusted block size is defined as the sum of adjusted transaction size.

If the real block size is < 1MB, the adjusted block size is not considered. If the real block size is > 1MB, the adjusted block size must be smaller than a certain constant.
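A direct transcription of the formula, just to check the quoted per-output weights (Python; a sketch, names are mine):

```python
import math

def adjusted_tx_size(size_kb: float, output_satoshis) -> float:
    """Size * sum over outputs of 1/(log2(S_i) + 1); dust outputs weigh more."""
    return size_kb * sum(1.0 / (math.log2(s) + 1.0) for s in output_satoshis)

# Weight of a single output in a 1 kB transaction:
# 1 satoshi -> 1.0, 2 satoshis -> 0.5, 1 uBTC -> ~0.13, 1 BTC -> ~0.036
```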

Many problems are solved with a system like this:

1. Block size is still scarce. If it is < 1MB, this is equivalent to the current limit. If it is > 1MB, "good transactions" are prioritised
2. Miners will have an incentive to exclude dust outputs, because that will increase the adjusted block size
3. Miners will love transactions with fewer outputs, so the UTXO set could be reduced.
4. People trying to send dust outputs and/or inflate UTXO set have to pay more miner fee to compensate for their pollution
5. The block size, which costs bandwidth and disk space, is still accounted for.

Since there must be a hard fork when lifting the max block size, adding extra rules like these won't make the change more complicated.
2648  Economy / Exchanges / Re: BTC-E.com exchange Bitcoin, Litecoin, Namecoin <-> USD\BTC (fee 0.2%) on: March 15, 2013, 06:10:47 AM
Problem with Tor is when you're dealing with 300+ BTC you have to worry about snooping exit nodes sniffing your passwords. Too risky for me when dealing with over 15K worth of BTC

How could they sniff HTTPS?
2649  Bitcoin / Development & Technical Discussion / Re: How to force a rule change by bloating the UTXO set on: March 15, 2013, 05:20:25 AM
I think we should think out of the box.

The block size limit could be determined by more factors, not just the absolute size.

I have a rough idea:

Let's denote:

#I: number of inputs in a transaction
S0, S1, ..., Sn: amount of satoshis in outputs 0, 1, ..., n.
ST: total satoshis that will ever exist, i.e. 2,100,000,000,000,000
Size: size of the transaction in kB.

The adjusted transaction size is defined as:

Size * (log(ST - S0) + log(ST - S1) + ..... + log(ST - Sn)) / #I

The maximum block size will be calculated from the adjusted transaction sizes, not the absolute sizes.
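Transcribed as code (Python; the log base isn't stated, so base 10 is my assumption, and as the EDIT at the end of this post says, the output term barely discriminates against dust):

```python
import math

ST = 2_100_000_000_000_000  # total satoshis that will ever exist

def adjusted_tx_size(size_kb: float, output_satoshis, n_inputs: int) -> float:
    """Size * sum(log(ST - S_i)) / #I: transactions consuming many inputs
    (shrinking the UTXO set) get a smaller adjusted size."""
    penalty = sum(math.log10(ST - s) for s in output_satoshis)
    return size_kb * penalty / n_inputs
```

The division by #I is the part that survives into the later proposal: input-heavy consolidation transactions become cheap.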

Many problems are solved with a system like this:

1. Block size is still scarce, although not tied to an absolute number
2. Miners will have an incentive to exclude dust outputs, because that will increase the adjusted block size
3. Miners will love transactions with many inputs, because that will decrease the adjusted block size. So the UTXO set could be reduced.
4. People trying to send dust outputs and/or inflate UTXO set have to pay more miner fee
5. The block size, which costs bandwidth and disk space, is still accounted.

EDIT: There is a defect in this part: (log(ST - S0) + log(ST - S1) + ..... + log(ST - Sn)), because it doesn't really discriminate against dust outputs. I will think about it again.
2650  Bitcoin / Development & Technical Discussion / Double spend alert system on: March 15, 2013, 04:59:42 AM
After the double spend attack against OKPAY, I think we need an automatic double spend alert system.

First of all, the definition of a double spend is "the existence of two different valid transactions whose inputs are common or partially common".

Currently, a node will ignore double spend transactions and not relay them. However, the more logical way of handling double spend transactions would be to broadcast the conflicting transactions, so everyone will know there is a double spend.

When a double spend is identified, miners will stop mining any transactions from the same address for at least 48 hours. A warning message will pop up in the clients of the recipients.

If there were a system like this, the double spend attack against OKPAY would not be successful unless the attacker mines his own block.

There are some special issues with this system:

1. If there are 2 transactions whose inputs and outputs are exactly the same, but one of them has an extra input whose BTC goes to nowhere (i.e. to miner fee), it won't be considered a double spend attack. Sometimes people send transactions with an inadequate miner fee, and this will allow them to add more fee.

2. Greedy miners may still mine double spend transactions if a huge fee is included.

3. There is a possible DoS attack: flooding the network with an endless stream of double spend transactions
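The detection itself is cheap. A sketch of the conflict test per the definition above (Python; the transaction representation is made up):

```python
def conflicts(tx_a: dict, tx_b: dict) -> bool:
    """Two distinct transactions double-spend if they share any input outpoint.
    Inputs are modelled as (txid, vout) pairs."""
    if tx_a["txid"] == tx_b["txid"]:
        return False  # the same transaction seen twice is not a conflict
    return bool(set(tx_a["inputs"]) & set(tx_b["inputs"]))
```

A relay node would check each incoming transaction against an index of mempool outpoints and broadcast both transactions on a hit; issue 1 above would need an extra exemption for fee-adding replacements.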
2651  Bitcoin / Mining / Re: 15 blocks in last 24 hrs ? on: March 14, 2013, 04:14:55 PM
https://bitcointalk.org/index.php?topic=123726.0

Please close this thread
2652  Bitcoin / Bitcoin Discussion / Re: A successful DOUBLE SPEND US$10000 against OKPAY this morning. on: March 13, 2013, 05:06:05 AM
Is this the only successful double spend yesterday? Has anyone made a detailed analysis?
2653  Bitcoin / Bitcoin Discussion / Re: Amateur hour on: March 12, 2013, 05:29:51 PM
As a professional software developer this may be an opportune time to point out that the bitcoin code is an amateur production.

I have the greatest respect for Gavin and others that have donated untold hours to make bitcoin into a reality and I know from experience how tough self-funded development is.

Nevertheless, make no mistake, the current incarnation of Bitcoin has a lot of ill-conceived design points and implementation weaknesses (as we have seen from the events of the last 24 hours).

Aside from the blunder that just resulted in a blockchain fork, there is a much larger, related issue looming on the horizon, which is the inability of the design to process large numbers of transactions. It is ludicrous we have people whining about "Satoshi Dice" creating numerous transactions. I could sit down and write a software component that could easily generate billions of transactions without breaking a sweat once it is deployed to a few thousand boxes, if I so chose, and yet you are concerned about Satoshi Dice generating a few million transactions. The problem of high-volume transaction handling needs to be answered at a new level which is, unfortunately, way above the paygrade of the current development team.


Yes you could. Please pay the bill first:
1,000,000,000 * 0.0005 * $43 = $21,500,000

Fees are optional and can be set to any level.

Transaction priority is partly based on age, so your "old" spam trumps any "new" transaction with the same fee or less.

What an amateur attack!
2654  Bitcoin / Bitcoin Discussion / Re: Amateur hour on: March 12, 2013, 05:03:03 PM
As a professional software developer this may be an opportune time to point out that the bitcoin code is an amateur production.

I have the greatest respect for Gavin and others that have donated untold hours to make bitcoin into a reality and I know from experience how tough self-funded development is.

Nevertheless, make no mistake, the current incarnation of Bitcoin has a lot of ill-conceived design points and implementation weaknesses (as we have seen from the events of the last 24 hours).

Aside from the blunder that just resulted in a blockchain fork, there is a much larger, related issue looming on the horizon, which is the inability of the design to process large numbers of transactions. It is ludicrous we have people whining about "Satoshi Dice" creating numerous transactions. I could sit down and write a software component that could easily generate billions of transactions without breaking a sweat once it is deployed to a few thousand boxes, if I so chose, and yet you are concerned about Satoshi Dice generating a few million transactions. The problem of high-volume transaction handling needs to be answered at a new level which is, unfortunately, way above the paygrade of the current development team.


Yes you could. Please pay the bill first:
1,000,000,000 * 0.0005 * $43 = $21,500,000
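The bill checks out in integer satoshi arithmetic (Python; 0.0005 BTC/tx and $43/BTC are the figures from the post, not current values):

```python
SATS_PER_BTC = 100_000_000
n_txs = 1_000_000_000          # one billion spam transactions
fee_sats = 50_000              # 0.0005 BTC fee per transaction
usd_per_btc = 43

# 1e9 txs * 0.0005 BTC = 500,000 BTC; at $43/BTC that is $21.5M.
total_usd = n_txs * fee_sats * usd_per_btc // SATS_PER_BTC
print(total_usd)  # 21500000
```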
2655  Bitcoin / Bitcoin Discussion / Re: 2013-03-12; client decentralization is as important as node decentralization on: March 12, 2013, 04:56:55 PM
The events occurring due to the update from 0.7 to 0.8 show, in my opinion, that client decentralization is just as important as node decentralization. Notice that I don't even have to say which client I'm talking about. It's obvious we're all using the same client.

Don't get me wrong, the devs for the open source client brought to us by the foundation are doing an excellent job and I salute them. I have no doubt that the core devs have no malicious intent, but as bitcoin matures it is important that there be no one entity which mediates the protocol. There have been talks about how a client sitting on the majority of nodes which auto-updates from a single source could be hazardous to bitcoin - not only with regards to bugs, but also by malicious intent. In my opinion, it is very important that as bitcoin matures, we have a consensus on how to implement the protocol and not a faucet that showers on us how to implement the protocol.

I have been quite disappointed with the variety of clients available. Yes, there are plenty of clients which satisfy a multitude of needs and desires, however I still feel like there need to be more. Not only with full nodes, but also for noob-friendly and merchant-friendly clients. If we are to go the linux way, only insiders will be satisfied and there won't be enough effort to get more people involved. For example, just the other day I wanted to print air-gapped paper wallets in bulk and was surprised to discover that there was no "bootable, ready-to-start-printing, bulk private key generator live CD". Maybe that's asking for too much, but the fact is that the best way to make private keys in bulk for the average user is with bitcoinaddress.org (you know, the one that works in your internet browser).

I, as one who is in the process of studying programming and software engineering, plan to write at least one open source bitcoin client (and hopefully other bitcoin-related software as well). I hope more heed this call.

This event proves that it is basically impossible to re-implement the Satoshi client. You not only need to re-implement all functions and features, but also all bugs, in order to stay compatible with the Satoshi client. Making an incompatible client will just lead to a hard fork like this event.
2656  Bitcoin / Pools / Re: List of v0.7 pools on 3/11/2013, 3/12/2013 on: March 12, 2013, 03:59:25 AM
0.8 with sipa patch, so it's safe here

What is the sipa patch? Does it reject the >900k block 225430?
https://github.com/sipa/bitcoin/commit/ca7739797ce7990ebb9f33852412f2c3f6950b0d
Add blacklistblock RPC

For users to blacklist a block manually?
2657  Bitcoin / Bitcoin Discussion / Re: Alert: chain fork caused by pre-0.8 clients dealing badly with large blocks on: March 12, 2013, 03:54:30 AM
Yes, fixed now.
Fixed as in the chain containing the large block is now officially orphaned?

No, the 0.7 chain is still at 225441 while the 0.8 chain is at 225453. Even worse, some are still mining on the 0.8 chain
2658  Bitcoin / Pools / Re: List of v0.7 pools on 3/11/2013, 3/12/2013 on: March 12, 2013, 03:41:40 AM
0.8 with sipa patch, so it's safe here

What is the sipa patch? Does it reject the >900k block 225430?
2659  Economy / Speculation / Re: Wall Observer - MtGoxUSD wall movement tracker on: March 12, 2013, 03:19:53 AM
people now understand the problem has been fixed and it is just a waiting game. I see us over $45 before or soon after the chain is no longer forked...

48 hours? lulz




Back to 42.5 already....

edit 43.8

lulz

That's because the exchanges stopped accepting bitcoin deposits. When they open again, coins in cold wallets will flood in. Tighten your seat belt.
2660  Bitcoin / Bitcoin Discussion / Re: Do you think SatoshiDice is blockchain spam? Drop their TX's - Solution inside on: March 10, 2013, 10:42:30 AM
When more miners refuse to mine SD TXs and nodes refuse to relay them, SD is exposed to a much higher risk of different types of double spend attacks, and eventually has to stop accepting 0-confirmation TXs.  Grin