Bitcoin Forum
Poll
Question: Will you support Gavin's new block size limit hard fork of 8MB by January 1, 2016, then doubling every 2 years?
1.  yes
2.  no

Author Topic: Gold collapsing. Bitcoin UP.  (Read 1995171 times)
cypherdoc (Legendary, Activity: 1764)
May 29, 2015, 03:16:57 PM  #25021

Gold preparing to take the next dive.
Adrian-x (Legendary, Activity: 1372)
May 29, 2015, 03:20:54 PM  #25022

Quote
The reason we have to worry about miners producing "too large" blocks is because they don't pay for all the P2P network resources they use (neither do end users).

All the arguments we have about resource consumption are derived from that primary design flaw.

If we fix it, then we won't have to argue any more.

Quote
Well put.

While we had a 1:1 ratio between full nodes and miners, the block reward paid for all the resources involved in the process. Once that ratio started to decrease, due to the introduction of mining pools, the mining and full-node roles became more and more decoupled.

The block reward remains on the miner side, though.
* edited to read more clearly.
I agree with the notion that miners are for the most part unaffected by block size and are empowered not to care. This is also why I dismiss the developer arguments that want to solve the block size problem by manipulating mining fees, or some variant of that idea. The incentive to reduce block size is not in the TX fee: fees pay miners, not nodes. If anything, you want the incentive to be supply and demand based on the cost of running a node.

Ironically, it is only the competition for fees between miners that will force block production down to marginal cost, and force the block size to the smallest size capable of sustaining a profit; this could be neatly modeled as a Nash equilibrium.

As the block reward diminishes, that Nash equilibrium takes hold and miners become marginalized, with little to no power in the system.
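
A toy sketch of that fee-competition dynamic (all numbers made up; Bertrand-style undercutting driving fees down to marginal cost):

Code:
# three miners repeatedly undercut the lowest advertised fee, but never
# price below their marginal cost of including a transaction
marginal_cost = 1.0        # made-up units per transaction
fees = [5.0, 4.0, 6.0]     # arbitrary starting fee demands

for _ in range(200):
    lowest = min(fees)
    fees = [max(marginal_cost, lowest * 0.95) for _ in fees]

print(fees)  # all three converge to marginal_cost: the equilibrium of fee competition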

I participated in discussing the idea, back in 2012, of financially incentivizing nodes in a market-driven way to regulate miners and block size. But after pondering the idea over time, it seemed it was not necessary.

People invested in the idea that money is memory store that memory on the blockchain. They have a lot of "memory", i.e. a vested interest in the blockchain, and will want to preserve it; some call it altruistic, but I prefer to think they will use greed for the greater good.

The conclusion I draw is that as long as wealth is distributed and people are in competition with one another, the blockchain will remain distributed. By the nature of Bitcoin's design, the wealth that is still to move into Bitcoin cannot be transferred in without redistributing some of it to the participants who grow the network.

 

Thank me in Bits 12MwnzxtprG2mHm3rKdgi7NmJKCypsMMQw
cypherdoc (Legendary, Activity: 1764)
May 29, 2015, 03:25:02 PM  #25023

Quote from: Adrian-x
some call it altruistic, but I prefer to think they will use greed for the greater good.

 

Nash was a great man.  watch the video:

https://twitter.com/cypherdoc2/status/602533856290349056
justusranvier (Legendary, Activity: 1400)
May 29, 2015, 03:27:12 PM  #25024

There's a peculiar kind of incoherence about people who can argue both for decentralization and also argue that users of the system cannot be relied upon to decide their own best interests.


cypherdoc (Legendary, Activity: 1764)
May 29, 2015, 03:32:34 PM  #25025

Quote from: justusranvier
There's a peculiar kind of incoherence about people who can argue both for decentralization and also argue that users of the system cannot be relied upon to decide their own best interests.

especially profound given how many here argue that individuals are "stupid" and irrational.  this is a particular form of pessimism and hubris. 

given the proper incentives, individuals can be counted on to act quite rationally, working not only in their individual best interest but in that of the group.
justusranvier (Legendary, Activity: 1400)
May 29, 2015, 03:34:00 PM  #25026

It's not possible to build a currency on misanthropy.

http://nakamotoinstitute.org/reciprocal-altruism-in-the-theory-of-money/

Quote
Reciprocal altruism is a great first start as a theory of money because it so neatly undercuts a lot of the most common fallacies. First, what gives money value? An adherent of commodity money might say that it is the industrial uses of the money good, whereas an adherent of fiat money might say that it is the force of the government issuing it, and the loyalty people have toward their government. Neither of these answers is true. It is true that some system is required to keep track of who has money and who does not, but that is not what makes money valuable. The value of money is the value of cooperation. It is that simple. The value of money is not somehow in the monetary unit; it is in the whole of society and in peoples’ desire to cooperate.

If you want your money to be valuable you need the people who produce the products and services you want to consume to use that money.

There is no other way to imbue currency with value.

If bringing sound money into existence requires a mass education project to overcome many generations of propaganda-induced fallacies, then that's what it's going to take.

There is no shortcut.
_mr_e (Legendary, Activity: 817)
May 29, 2015, 03:35:47 PM  #25027

Funny that the new bitcoin fork will be called bitcoinxt Cheesy
cypherdoc (Legendary, Activity: 1764)
May 29, 2015, 03:48:48 PM  #25028

Quote from: justusranvier
It's not possible to build a currency on misanthropy.

http://nakamotoinstitute.org/reciprocal-altruism-in-the-theory-of-money/

Quote
Reciprocal altruism is a great first start as a theory of money because it so neatly undercuts a lot of the most common fallacies. First, what gives money value? An adherent of commodity money might say that it is the industrial uses of the money good, whereas an adherent of fiat money might say that it is the force of the government issuing it, and the loyalty people have toward their government. Neither of these answers is true. It is true that some system is required to keep track of who has money and who does not, but that is not what makes money valuable. The value of money is the value of cooperation. It is that simple. The value of money is not somehow in the monetary unit; it is in the whole of society and in peoples’ desire to cooperate.

If you want your money to be valuable you need the people who produce the products and services you want to consume to use that money.

There is no other way to imbue currency with value.

If bringing sound money into existence requires a mass education project to overcome many generations of propaganda-induced fallacies, then that's what it's going to take.

There is no shortcut.

the Blockstream devs have said they would like to see & study what happens upon the repeated filling of blocks.  they'd like to study what happens to frustrated users, hoodwinked merchants, exchange price volatility, confused full nodes, etc.  to what end?  to satisfy their own curiosity?  only to then raise the limit anyway, as they've admitted they would?  what a bunch of misplaced pseudo-academia.

they're like a wife who begs her husband to beat her just so she can experience what it is like.  so he beats her repeatedly.  she finally decides she doesn't like it.  but it's too late; he's already lost all respect for her and leaves for another woman.  that's what will happen to Bitcoin if users and merchants have bad experiences at this early stage of the game.  they'll just leave, and may not come back for 100 years.
Zangelbert Bingledack (Legendary, Activity: 1036)
May 29, 2015, 03:56:46 PM  #25029

Perhaps some of the coders here can help me understand something.

Why not have a new "mempool" be created every 10 minutes, so that if it takes 30 minutes to find a block, the winning miner will just take all the valid transactions in the first mempool, no matter how huge the total "blocksize" would be, and put only the hash of those transactions into the block? That way the block itself would be tiny, so propagation wouldn't be an issue. All miners and other full nodes would have the first mempool transactions already(?), those being set in stone, so they would just have to check that the hash matches the set of all valid tx in the first mempool. Then the next winning miner would take all the valid tx in the second mempool, etc.

Of course, if a miner finds the next block in less than 10 minutes and there is no mempool queued up yet, this doesn't work. Perhaps difficulty would have to be adjusted to ensure miners were usually a little bit behind the curve.

This seems to shift the burden from bandwidth to CPU power for checking the hash, but as long as miners are behind the curve it seems to avoid the "race" where lower-bandwidth miners/nodes are at a disadvantage.

Does this, or anything like it, make any sense?
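
To make that concrete, here's a toy Python sketch of what I mean (all names hypothetical; this is just the proposal restated as code, not anything that exists):

Code:
import hashlib

def snapshot_commitment(txids):
    """Hash a canonically ordered set of txids; any node that holds exactly
    the same 10-minute snapshot can recompute and check this 32-byte value."""
    h = hashlib.sha256()
    for txid in sorted(txids):
        h.update(txid)
    return h.digest()

# hypothetical miner side: the block carries only the 32-byte commitment, not the txs
my_snapshot = {bytes.fromhex("aa" * 32), bytes.fromhex("bb" * 32)}
block = {"prev_hash": b"\x00" * 32,
         "mempool_commitment": snapshot_commitment(my_snapshot)}

# hypothetical validator side: accept only if the local snapshot matches exactly
local_snapshot = set(my_snapshot)
assert block["mempool_commitment"] == snapshot_commitment(local_snapshot)

The assert is the weak point: it only succeeds if both snapshots are byte-for-byte identical, which is exactly the objection raised in the replies below.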
adamstgBit (Legendary, Activity: 1904, Trusted Bitcoiner)
May 29, 2015, 04:02:45 PM  #25030

Quote from: Zangelbert Bingledack
Perhaps some of the coders here can help me understand something.

Why not have a new "mempool" be created every 10 minutes, so that if it takes 30 minutes to find a block, the winning miner will just take all the valid transactions in the first mempool, no matter how huge the total "blocksize" would be, and put only the hash of those transactions into the block? That way the block itself would be tiny, so propagation wouldn't be an issue. All miners and other full nodes would have the mempool transactions already(?), so they would just have to check that the hash matches the set of all valid tx in the first mempool. Then the next winning miner would take all the valid tx in the second mempool, etc.

Of course, if a miner finds the next block in less than 10 minutes and there is no mempool queued up yet, this doesn't work. Perhaps difficulty would have to be adjusted to ensure miners were usually a little bit behind the curve.

Does this, or anything like it, make any sense?

i think that's more or less how it currently works when there's a backlog of unconfirmed TX

this is fine for now, but at some point, if there are a lot of TX, the mempool will just grow and grow, and TX will confirm slower and slower.

cypherdoc (Legendary, Activity: 1764)
May 29, 2015, 04:04:16 PM  #25031

such a simple but elegant point here from Reddit:

Quote from: painlord2k (on Reddit)
Use your coins as usual. More uses, more transactions; the smaller blockchain will not be able to manage the transactions and people will be forced to migrate to the larger.
Zangelbert Bingledack (Legendary, Activity: 1036)
May 29, 2015, 04:06:09 PM  #25032

Quote from: Zangelbert Bingledack
Perhaps some of the coders here can help me understand something.

Why not have a new "mempool" be created every 10 minutes, so that if it takes 30 minutes to find a block, the winning miner will just take all the valid transactions in the first mempool, no matter how huge the total "blocksize" would be, and put only the hash of those transactions into the block? That way the block itself would be tiny, so propagation wouldn't be an issue. All miners and other full nodes would have the mempool transactions already(?), so they would just have to check that the hash matches the set of all valid tx in the first mempool. Then the next winning miner would take all the valid tx in the second mempool, etc.

Of course, if a miner finds the next block in less than 10 minutes and there is no mempool queued up yet, this doesn't work. Perhaps difficulty would have to be adjusted to ensure miners were usually a little bit behind the curve.

Does this, or anything like it, make any sense?

Quote from: adamstgBit
i think that's more or less how it currently works when there's a backlog of unconfirmed TX

this is fine for now, but at some point, if there are a lot of TX, the mempool will just grow and grow, and TX will confirm slower and slower.

You're saying miners currently sometimes only put the hash of all the tx in a block, instead of the tx themselves? Huh
adamstgBit (Legendary, Activity: 1904, Trusted Bitcoiner)
May 29, 2015, 04:08:44 PM  #25033

Quote from: Zangelbert Bingledack
Perhaps some of the coders here can help me understand something.

Why not have a new "mempool" be created every 10 minutes, so that if it takes 30 minutes to find a block, the winning miner will just take all the valid transactions in the first mempool, no matter how huge the total "blocksize" would be, and put only the hash of those transactions into the block? That way the block itself would be tiny, so propagation wouldn't be an issue. All miners and other full nodes would have the mempool transactions already(?), so they would just have to check that the hash matches the set of all valid tx in the first mempool. Then the next winning miner would take all the valid tx in the second mempool, etc.

Of course, if a miner finds the next block in less than 10 minutes and there is no mempool queued up yet, this doesn't work. Perhaps difficulty would have to be adjusted to ensure miners were usually a little bit behind the curve.

Does this, or anything like it, make any sense?

Quote from: adamstgBit
i think that's more or less how it currently works when there's a backlog of unconfirmed TX

this is fine for now, but at some point, if there are a lot of TX, the mempool will just grow and grow, and TX will confirm slower and slower.

Quote from: Zangelbert Bingledack
You're saying miners currently sometimes only put the hash of all the tx in a block, instead of the tx themselves? Huh

i misread; ya, no, they put the full TX

I don't see how knowing which TX to include in the next block is going to help, though.

cypherdoc (Legendary, Activity: 1764)
May 29, 2015, 04:12:39 PM  #25034

Quote from: Zangelbert Bingledack
Perhaps some of the coders here can help me understand something.

Why not have a new "mempool" be created every 10 minutes, so that if it takes 30 minutes to find a block, the winning miner will just take all the valid transactions in the first mempool, no matter how huge the total "blocksize" would be, and put only the hash of those transactions into the block? That way the block itself would be tiny, so propagation wouldn't be an issue. All miners and other full nodes would have the first mempool transactions already(?), those being set in stone, so they would just have to check that the hash matches the set of all valid tx in the first mempool. Then the next winning miner would take all the valid tx in the second mempool, etc.

Of course, if a miner finds the next block in less than 10 minutes and there is no mempool queued up yet, this doesn't work. Perhaps difficulty would have to be adjusted to ensure miners were usually a little bit behind the curve.

This seems to shift the burden from bandwidth to CPU power for checking the hash, but as long as miners are behind the curve it seems to avoid the "race" where lower-bandwidth miners/nodes are at a disadvantage.

Does this, or anything like it, make any sense?

the mempool is rarely uniform across all nodes.  it would be impossible to know which unconfirmed tx's any given node is missing.

your idea is a variation on IBLT (invertible Bloom lookup tables).  but in that case, nodes can reconstruct their missing tx's thanks to the math of the IBLT.
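
for reference, a minimal sketch of that IBLT reconciliation math in Python (cell counts and sizes here are made-up illustration values, not from any actual proposal):

Code:
import hashlib

K = 3            # hash functions, one per subtable
CELLS = 20       # cells per subtable; size for the expected set difference
TXID_LEN = 32    # txids are 32 bytes

def _cell(txid, i):
    # each hash function indexes its own subtable, so a txid touches K distinct cells
    h = int.from_bytes(hashlib.sha256(bytes([i]) + txid).digest()[:8], "big")
    return i * CELLS + h % CELLS

def _checksum(txid):
    return int.from_bytes(hashlib.sha256(b"chk" + txid).digest()[:8], "big")

class IBLT:
    def __init__(self):
        n = K * CELLS
        self.count, self.key_xor, self.chk_xor = [0] * n, [0] * n, [0] * n

    def insert(self, txid, sign=1):
        v = int.from_bytes(txid, "big")
        for i in range(K):
            j = _cell(txid, i)
            self.count[j] += sign
            self.key_xor[j] ^= v
            self.chk_xor[j] ^= _checksum(txid)

    def subtract(self, txid):
        self.insert(txid, sign=-1)

    def decode(self):
        """peel 'pure' cells (count == +/-1, checksum matches) until done;
        yields (txid, +1) for txs the subtracting side lacks, (txid, -1) for extras."""
        out, progress = [], True
        while progress:
            progress = False
            for j in range(K * CELLS):
                if self.count[j] in (1, -1):
                    txid = self.key_xor[j].to_bytes(TXID_LEN, "big")
                    if _checksum(txid) == self.chk_xor[j]:
                        out.append((txid, self.count[j]))
                        self.insert(txid, sign=-self.count[j])
                        progress = True
        return out

# node A sends its sketch; node B subtracts its own txids and decodes only the
# (small) symmetric difference instead of transferring the whole mempool
a_only = {hashlib.sha256(bytes([n])).digest() for n in range(3)}
shared = {hashlib.sha256(b"s" + bytes([n])).digest() for n in range(50)}
sketch = IBLT()
for t in a_only | shared:
    sketch.insert(t)
for t in shared:
    sketch.subtract(t)
print(sorted(a_only) == sorted(t for t, _ in sketch.decode()))  # True (w.h.p.)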

and your idea would totally render SPV clients unusable, as they rely on retrieving the Merkle-tree path that links the specific tx history they are interested in to its block header.
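
concretely, the SPV check that depends on blocks committing to their actual transactions looks roughly like this (a sketch; function names are mine, not bitcoind's, and Bitcoin's internal byte ordering is glossed over):

Code:
import hashlib

def sha256d(b):
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_branch(txid, branch, index, merkle_root):
    """hash the leaf up the tree with each sibling in the branch; an SPV
    client compares the result against the merkle root in the 80-byte
    block header.  'index' is the tx's position among the block's txs."""
    h = txid
    for sibling in branch:
        if index & 1:
            h = sha256d(sibling + h)   # we are the right child
        else:
            h = sha256d(h + sibling)   # we are the left child
        index >>= 1
    return h == merkle_root

if a block committed only to a hash of a mempool snapshot instead of a Merkle tree over its transactions, there would be no such branch for a light client to request.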
Zangelbert Bingledack (Legendary, Activity: 1036)
May 29, 2015, 04:15:57 PM  #25035

Quote from: cypherdoc
the mempool is rarely uniform across all nodes.  it would be impossible to know which unconfirmed tx's any given node is missing.

OK, good point. I thought maybe having a time cutoff, where no new tx are added to the first mempool after 10 minutes, would help, but I guess there's no way to know for sure that every node saw the same set. That's the whole point of a consensus network, after all. Oh well, there goes that shower thought. Thanks for the quick reply.
Natalia_AnatolioPAMM (Full Member, Activity: 154)
May 29, 2015, 04:36:34 PM  #25036

Quote from: cypherdoc
Gold preparing to take the next dive.

maybe the last one
adamstgBit (Legendary, Activity: 1904, Trusted Bitcoiner)
May 29, 2015, 05:03:14 PM  #25037

Quote from: cypherdoc
Gold preparing to take the next dive.

Quote from: Natalia_AnatolioPAMM
maybe the last one

but probably not.

i firmly believe we will see <$900 gold and eventually <$500 gold

cypherdoc (Legendary, Activity: 1764)
May 29, 2015, 05:07:55 PM  #25038

Quote from: cypherdoc
Gold preparing to take the next dive.

Quote from: Natalia_AnatolioPAMM
maybe the last one

Quote from: adamstgBit
but probably not.

i firmly believe we will see <$900 gold and eventually <$500 gold

yes, gold is useless in the new digital age.
Fakhoury (Legendary, Activity: 1022, Permabull Bitcoin Investor)
May 29, 2015, 05:12:15 PM  #25039

Quote from: cypherdoc
Gold preparing to take the next dive.

Quote from: Natalia_AnatolioPAMM
maybe the last one

Quote from: adamstgBit
but probably not.

i firmly believe we will see <$900 gold and eventually <$500 gold

Quote from: cypherdoc
yes, gold is useless in the new digital age.

Kindly tell me: when gold goes down in price, how does bitcoin make use of this?

Thank you.

Quote from:  Satoshi Nakamoto
Feb. 14, 2010: I’m sure that in 20 years there will either be very large transaction volume or no volume.
Peter R (Legendary, Activity: 1050)
May 29, 2015, 05:21:28 PM  #25040

Quote
...
The UTXO constraint may never be solved in an acceptable (sub)linear way, or the solution(s) could for political reasons never be implemented in BTC.
...
Almost certainly 'never' by any realistic definition of various things.
...
Solving 'the UTXO problem' would require what is by most definitions 'magic'.  Perhaps some future quantum-effect storage, communications, and processing schemes could 'solve' the problem, but I'm not expecting to pick up such technology at Fry's by the next holiday season (Moore's law notwithstanding.)

A comment from chriswilmer got me thinking…

The UTXO set is actually bounded.  The total number of satoshis that will ever exist is

   (21x10^6) x (10^8) = 2.1 x 10^15 = 2.1 "peta-sats"
...
...
OK, now let's be reasonable!  Let's assume that 10 billion people on earth each control about 4 unspent outputs on average.  That's a total of 40 billion outputs, or

    (40 x 10^9) x (65 bytes) = 2.6 terabytes

With these assumptions, it now only takes about 20 of those SD cards to store the UTXO set:

    (2.6 x 10^12) / (128 x 10^9) = 20.3,

or three 1-terabyte SSDs, for a total cost of about $1,500.
...

I have thought about this bounding (mostly in the context of the current rather awkward/deceptive 'unspendable dust' settings.)  I think there are currently, and probably will be for quite some time, some big problems with this rosy picture:

 - The UTXO set is changing in real time across its entire population.  This currently necessitates (as I understand things) a rather more sophisticated data structure than something mineable like the blockchain.  The UTXO set sits in RAM and under the thing that replaced BDB (I forget the name of that database at the moment) because seeks, inserts, and deletes are bus-intensive and, again, in constant flux.

Agreed.  The UTXO can be thought of as "hot" storage that's continually being updated, while the blockchain can be thought of as "cold" storage that does not have the same requirements for random memory reads and writes.  However, the UTXO doesn't need to sit entirely in RAM (e.g., the uncompressed UTXO set is, AFAIK, around 2 GB, but bitcoind runs without problem on machines with less RAM).  

Quote
...but would be interested to see a proof-of-concept, simulator, prototype, etc.

Agreed.  What I'm curious about is the extent to which the UTXO database could be optimized algorithmically and with custom hardware.  

Consider the above scenario where 10 billion people control on average 4 unspent outputs (40 billion coins), giving us a UTXO set approximately 2.6 TB in size.  Now, let's assume that we sort these coins perfectly and write them to a database.  Since they're sorted, we can find any coin using binary search in no more than 36 read operations (about 65 bytes each):

   log2(40x10^9) ≈ 35.2, which rounds up to 36.

Rough numbers: a typical NAND FLASH chip permits random-access reads within about 30 us, a typical NOR FLASH chip within about 200 ns, and SDRAM within perhaps less than 20 ns, so it takes about

   36 x 30 us = 1080 us (NAND FLASH)
   36 x 200 ns = 7.2 us (NOR FLASH)
   36 x 20 ns = 0.72 us (SDRAM)

to find a particular coin even if there are 40 billion of them.  If we commit 10% of our time to looking up coins, then to match Visa's average rate of 2,000 TPS we need to be able to find a particular coin in

   (1 sec x 10%) / (2000 /sec) = 50 us.

My gut tells me that looking up coins isn't too daunting a problem, even if 10 billion people each control 4 coins and, in aggregate, make 2,000 transactions per second.
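
Plugging in the numbers (a quick back-of-the-envelope Python script, using the rough latency figures assumed above):

Code:
from math import ceil, log2

ENTRIES = 40e9                  # 10 billion users x 4 unspent outputs each
reads = ceil(log2(ENTRIES))     # binary-search depth over a sorted table -> 36

latency = {"NAND FLASH": 30e-6, "NOR FLASH": 200e-9, "SDRAM": 20e-9}
for mem, t in latency.items():
    print(f"{mem}: {reads * t * 1e6:.2f} us per lookup")

budget = (1.0 * 0.10) / 2000    # 10% of each second, spread over Visa's 2,000 TPS
print(f"budget: {budget * 1e6:.0f} us per lookup")
# NAND misses the 50 us budget; NOR FLASH and SDRAM beat it comfortably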

...Of course, the UTXO set is constantly evolving.  As coins get spent, we have to mark them as spent, eventually erase them from the database, and add the newly created coins.  If we assume the typical TX creates 2 new coins, this means we need to write about

    (65 bytes per coin) x (2 coins per TX) x (2000 TXs per sec) = 0.26 MB/sec.

This isn't a demanding rate: even an SD card like the SanDisk Extreme Pro has a write speed of up to 95 MB/s.

Of course, that is the speed for sequential writes, and we'll need to do plenty of (slower) random writes and erase operations, but note that 0.26 MB/s means that only

    0.26 MB / 2.6 TB = 0.00001 %

of our database is modified each second, or about 1% per day.
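
And the same sanity check for the write load (same assumptions as above):

Code:
COIN_BYTES = 65          # serialized size assumed per unspent output
COINS_PER_TX = 2         # new outputs created by a typical transaction
TPS = 2000               # Visa-scale average throughput
UTXO_BYTES = 2.6e12      # the 2.6 TB set from above

write_rate = COIN_BYTES * COINS_PER_TX * TPS      # bytes of new outputs per second
frac = write_rate / UTXO_BYTES
print(f"{write_rate / 1e6:.2f} MB/s")             # 0.26 MB/s
print(f"{frac:.1e} of the set per second")        # 1.0e-07, i.e. 0.00001%
print(f"{frac * 86400 * 100:.1f}% per day")       # ~0.9% per day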


My questions, then, are:

  - To what extent can we develop algorithms to optimize the UTXO database problem?  
  - What is the best hardware design for such a database?

Run Bitcoin Unlimited (www.bitcoinunlimited.info)