Bitcoin Forum
Poll
Question: On January 1st 2014 the BTC/USD exchange rate will be  (Voting closed: January 01, 2014, 04:59:26 PM)
above 100$/BTC - 25 (16.1%)
between 20 and 100 $/BTC - 96 (61.9%)
between 10 and 20 $/BTC - 20 (12.9%)
between 1 and 10 $/BTC - 9 (5.8%)
below 1 $/BTC - 5 (3.2%)
Total Voters: 155

Author Topic: Your bets for 2014  (Read 6811 times)
grondilu (OP)
Legendary
*
Offline Offline

Activity: 1288
Merit: 1076


View Profile
January 07, 2013, 12:09:18 AM
 #41

Of course that won't make Bitcoin unusable, even if you have to verify 1000 blocks. But if we want to make Bitcoin more popular, we also have to focus on usability, and I don't think many people would be happy with a payment solution that takes 20 minutes to load before you can use it.

I don't want to make Bitcoin sound bad. I love the idea as much as most here do; I just want to point out that there is a lot of work to be done, especially on the technical details of the software.

I can agree with that. But there was someone in this thread barking that "your computer will just die."  Roll Eyes

waspoza
Hero Member
*****
Offline Offline

Activity: 602
Merit: 508


Firstbits: 1waspoza


View Profile
January 07, 2013, 12:12:57 AM
 #42

Of course that won't make Bitcoin unusable, even if you have to verify 1000 blocks. But if we want to make Bitcoin more popular, we also have to focus on usability, and I don't think many people would be happy with a payment solution that takes 20 minutes to load before you can use it.

I don't want to make Bitcoin sound bad. I love the idea as much as most here do; I just want to point out that there is a lot of work to be done, especially on the technical details of the software.

I can agree with that. But there was someone in this thread barking that "your computer will just die."  Roll Eyes

He's just spreading FUD to profit from his short position.
mem
Hero Member
*****
Offline Offline

Activity: 644
Merit: 501


Herp Derp PTY LTD


View Profile
January 07, 2013, 12:57:03 AM
 #43

I think Bitcoin will be stable and in the low $17.xx range by the end of 2013.

That's in line with the 33% growth; it's also what I'm expecting.

notme
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


View Profile
January 07, 2013, 05:27:21 AM
 #44

Of course that won't make Bitcoin unusable, even if you have to verify 1000 blocks. But if we want to make Bitcoin more popular, we also have to focus on usability, and I don't think many people would be happy with a payment solution that takes 20 minutes to load before you can use it.

I don't want to make Bitcoin sound bad. I love the idea as much as most here do; I just want to point out that there is a lot of work to be done, especially on the technical details of the software.

I can agree with that. But there was someone in this thread barking that "your computer will just die."  Roll Eyes

He's just spreading FUD to profit from his short position.

Too bad for him... even that won't talk the market down right now.

https://www.bitcoin.org/bitcoin.pdf
While no idea is perfect, some ideas are useful.
Nagato
Full Member
***
Offline Offline

Activity: 150
Merit: 100



View Profile WWW
January 09, 2013, 07:15:18 AM
 #45

Open your fucking eyes. We are all headed for a bottleneck apocalypse. Haven't you noticed that the vanilla client already makes your computer unusable during the sync process? Now imagine that Bitcoin has 10k transactions per block instead of 200. Your PCs will just die.

You have no idea what you are talking about. 10k transactions per block is peanuts for even a low-end CPU to handle; that is less workload than computing a single frame of any modern PC video game. The biggest bottleneck now is the disk reads to look up transactions, which is resolved in v0.8 by storing only the unspent outputs in RAM, which is ALL YOU NEED to verify incoming blocks and new transactions. The unspent-output set would have to grow by at least a factor of 30x before storing it in RAM (4 GB) becomes an issue. Even if it does, you have on average 10 minutes to process each block, something your mobile phone's CPU could easily do.

Let's bring it up to 10k transactions per second and assume that the max block size was increased to handle such loads and that network bandwidth is not an issue. We could store the unspent-output data set on your consumer-grade GPU and process it all in real time.

In short, processing power and hard disks are cheap and plentiful even on low-end PCs TODAY.
Memory and network bandwidth are relatively expensive, especially in some parts of the world.

The whole point of having every node process everything is Bitcoin's core philosophy of being trustless:
you don't have to rely on anyone else to verify the validity of any transaction.
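A quick back-of-envelope sketch of the workload described above; every constant here is an illustrative assumption, not a measurement from the client:

Code:
# Rough sanity check of the "10k transactions per block is peanuts" claim.
# All constants are assumptions for illustration only.

TX_PER_BLOCK = 10_000            # hypothetical future load discussed in this post
BLOCK_INTERVAL_S = 600           # target block time (10 minutes)
SIGS_PER_TX = 2                  # assumed average signatures (inputs) per transaction
SIG_VERIFIES_PER_CORE_S = 5_000  # assumed ECDSA verifications/sec on one modest core

tx_per_second = TX_PER_BLOCK / BLOCK_INTERVAL_S
sig_checks_per_second = tx_per_second * SIGS_PER_TX
core_utilisation = sig_checks_per_second / SIG_VERIFIES_PER_CORE_S

print(f"{tx_per_second:.1f} tx/s, {sig_checks_per_second:.0f} signature checks/s, "
      f"~{core_utilisation:.1%} of one core")
# -> roughly 16.7 tx/s and ~33 signature checks/s: well under 1% of a single core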

lucif
Sr. Member
****
Offline Offline

Activity: 462
Merit: 250


Clown prophet


View Profile
January 09, 2013, 07:33:33 AM
 #46

Hey, I am talking exactly about disk reads, not about the CPU or GPU. Such a volume of transactions will cause heavy disk key lookups. Reads before writes.
notme
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


View Profile
January 09, 2013, 08:11:26 AM
 #47

Hey, I am talking exactly about disk reads, not about the CPU or GPU. Such a volume of transactions will cause heavy disk key lookups. Reads before writes.

Dude, with the latest leveldb ultraprune builds I can sync the complete chain, verify the transactions and block hashes for all blocks, and verify the signatures for all blocks after the last checkpoint in under 4 hours, with a mostly idle disk and less than one core of CPU. It's bottlenecking at the network code (not network speed; the block download code just needs work, which is underway or will begin soon).

So how is disk a problem again?

https://www.bitcoin.org/bitcoin.pdf
While no idea is perfect, some ideas are useful.
Nagato
Full Member
***
Offline Offline

Activity: 150
Merit: 100



View Profile WWW
January 09, 2013, 08:12:30 AM
 #48

Which, as I and others have mentioned, is resolved in v0.8. The current RAM required for the unspent set is barely 150 MB. We are a long way from having to worry about it exceeding RAM availability. Even if it does, 10 minutes is plenty of time to page it in from disk.

Off the top of my head, I can already think of some easy ways to optimise this.

1) Only store the most recent unspent outputs in RAM (temporal locality).
This is based on the observation that recent outputs are more likely to be spent again than those which have been dormant for many years.
This will automatically optimise out lost/savings/cold-storage wallets (which is huge) by leaving them on disk.

2) Do an x-pass process to verify the transactions in a block, where x is roughly (total unspent-output DB size) / (your RAM): on each pass, load one partition of the unspent-output set into RAM and verify the inputs that fall in that partition (a rough sketch follows below).

There are probably many other ways you could optimise this, but I thought of these two in less than a minute.
There are many very experienced and intelligent people involved in the Bitcoin ecosystem; this is the least of my worries.
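A minimal sketch of the multi-pass idea in point 2, assuming a hypothetical load_partition() helper and a plain in-memory dict; this is illustration only, not the actual client code:

Code:
# Sketch of the x-pass verification idea above (hypothetical helpers, not real
# client code): split the unspent-output DB into partitions that each fit in
# RAM, then resolve each spent outpoint during the pass that holds it.

def verify_block_in_passes(block_inputs, load_partition, num_partitions):
    """block_inputs: iterable of (txid, vout) outpoints the block spends.
    load_partition(i): assumed helper that loads partition i of the unspent-output
                       set as a dict {(txid, vout): output} small enough for RAM.
    Returns True only if every input is found unspent in some partition."""
    unresolved = set(block_inputs)
    for i in range(num_partitions):
        partition = load_partition(i)                     # one disk pass per partition
        unresolved = {op for op in unresolved if op not in partition}
        if not unresolved:                                # everything resolved early
            return True
    return not unresolved                                 # leftovers => missing/spent inputs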

lucif
Sr. Member
****
Offline Offline

Activity: 462
Merit: 250


Clown prophet


View Profile
January 09, 2013, 08:15:50 AM
 #49

Slowpokes reserve here.

You will be doing fine with this as long as your RAM is enough to cache the disk data. When it is not, big problems are coming.
lucif
Sr. Member
****
Offline Offline

Activity: 462
Merit: 250


Clown prophet


View Profile
January 09, 2013, 08:21:07 AM
 #50

Unspent outputs need to be verified against spent ones. And the spent set grows toward infinity.

Btw, disk stalling also persists during syncs (writes), but I think that's just ugly code.
Kupsi
Legendary
*
Offline Offline

Activity: 1193
Merit: 1003


9.9.2012: I predict that single digits... <- FAIL


View Profile
January 09, 2013, 08:24:23 AM
 #51

Unspent outputs need to be verified against spent ones.

Why? I believe a transaction only needs to be verified against unspent outputs.
lucif
Sr. Member
****
Offline Offline

Activity: 462
Merit: 250


Clown prophet


View Profile
January 09, 2013, 08:29:49 AM
 #52

If you don't know, don't tell. Search for connectinputs() in the source code.
grondilu (OP)
Legendary
*
Offline Offline

Activity: 1288
Merit: 1076


View Profile
January 09, 2013, 08:50:28 AM
 #53

Unspent outputs need to be verified against spent ones. And the spent set grows toward infinity.

It's not a descending process.

Coinbase transaction A is spent by transaction B which is spent by transaction C.

During database loading, you verify each block and you say:

- ok A is good,
- (you process more blocks)
- oh B spends A so B is good and I can forget about A
- (you keep processing more blocks)
- oh C spends B so C is good and I can forget about B.

Now if a transaction D comes along and spends C, you don't have to go all the way back to A, because you just need to remember that C is good.

All you have to do is put C (and not A nor B) in an index or something, so that you don't have to redo this every time you run the client.

PS. Of course I oversimplify here, because there can be chain forks, which require that you don't forget spent transactions too easily. But even so, "forget" is a bit too strong a word, since it just means you don't keep it in RAM anymore; it's still on disk.
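A toy illustration of the "forget what's spent" bookkeeping described above, assuming a plain Python set as the unspent-output index; the real client's data structures are more involved:

Code:
# Toy version of the A -> B -> C -> D chain above: the verifier keeps only a set
# of unspent outpoints, so each spend removes an entry and no check ever has to
# walk back down the whole history.

utxo = set()

def apply_tx(txid, inputs, n_outputs):
    """inputs: list of (prev_txid, vout) outpoints this transaction spends
    (empty for a coinbase). Fails if any input is not currently unspent."""
    for outpoint in inputs:
        if outpoint not in utxo:
            raise ValueError(f"{outpoint} is already spent or unknown")
        utxo.remove(outpoint)                 # "forget" the spent output
    for vout in range(n_outputs):
        utxo.add((txid, vout))                # remember the new unspent outputs

apply_tx("A", [], 1)                          # coinbase A
apply_tx("B", [("A", 0)], 1)                  # B spends A; A is forgotten
apply_tx("C", [("B", 0)], 1)                  # C spends B; B is forgotten
apply_tx("D", [("C", 0)], 1)                  # D only needs to see that C is unspent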

lucif
Sr. Member
****
Offline Offline

Activity: 462
Merit: 250


Clown prophet


View Profile
January 09, 2013, 09:00:20 AM
 #54

Yes, but...

http://blockchain.info/charts/bitcoin-days-destroyed-cumulative
notme
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


View Profile
January 09, 2013, 09:06:15 AM
 #55


Well, duh, it goes up.

There are about 10 million bitcoin-days created every day!

https://www.bitcoin.org/bitcoin.pdf
While no idea is perfect, some ideas are useful.
lucif
Sr. Member
****
Offline Offline

Activity: 462
Merit: 250


Clown prophet


View Profile
January 09, 2013, 09:08:00 AM
 #56

PS. Of course I oversimplify here, because there can be chain forks, which require that you don't forget spent transactions too easily. But even so, "forget" is a bit too strong a word, since it just means you don't keep it in RAM anymore; it's still on disk.
Looks like this is drifting out of the speculation discussion, but I will continue this chess game.

10k transactions per block is about 16 transactions per second.

Each transaction can have many inputs, and every input needs to be looked up in a huge data set.

Your move.
grondilu (OP)
Legendary
*
Offline Offline

Activity: 1288
Merit: 1076


View Profile
January 09, 2013, 09:14:08 AM
 #57


It's a cumulative graph and it looks quite linear. What looks linear in a cumulative graph? A constant!

Also, those are "destroyed" bitcoin-days, so they represent transactions that could be "forgotten". If this has any relation to this thread (which is not obvious), it really does not seem to prove your point at all.

notme
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


View Profile
January 09, 2013, 09:16:25 AM
 #58

PS. Of course I oversimplify here, because there can be chain forks, which require that you don't forget spent transactions too easily. But even so, "forget" is a bit too strong a word, since it just means you don't keep it in RAM anymore; it's still on disk.
Looks like this is drifting out of the speculation discussion, but I will continue this chess game.

10k transactions per block is about 16 transactions per second.

Each transaction can have many inputs, and every input needs to be looked up in a huge data set.

Your move.

Plenty of data structures can look up individual transactions in O(log u), where u is the number of unspent transactions. Even using the lowest base of two, if there are u = 1 trillion unspent transactions, we are looking at about 40 × 16 = 640 lookups per second. Even a spinning disk can easily hit 2000 seeks per second, and an SSD would handle that many reads trivially. Only if we assume 1 trillion unspent transactions, a suboptimal data structure, 16 transactions a second and 3 inputs per transaction do we move to SSDs as a necessity. And that's assuming spinning drives don't improve in the decade, minimum, it takes to get to these levels. Oh, and we're ignoring the cache in RAM that will hold many of the transactions needed.

Your move.
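The arithmetic from the post above, spelled out; the seek-rate figures are the two numbers being argued over in this exchange (2000 seeks/s here, ~100 IOPS in the reply below), not established facts:

Code:
# Lookup budget from the post above. The unspent-set size and seek rates are the
# posters' assumptions, reproduced here only to make the arithmetic explicit.

import math

UNSPENT_OUTPUTS = 10**12            # pessimistic: one trillion unspent outputs
TX_PER_SECOND = 16                  # ~10k transactions per 600-second block

lookups_per_tx = math.log2(UNSPENT_OUTPUTS)            # ~40 for a binary-tree index
lookups_per_second = lookups_per_tx * TX_PER_SECOND    # ~640
print(f"~{lookups_per_second:.0f} lookups per second")

for seeks_per_second in (100, 2000):                   # lucif's figure vs notme's
    verdict = "keeps up" if seeks_per_second >= lookups_per_second else "falls behind"
    print(f"{seeks_per_second} seeks/s: {verdict}")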

https://www.bitcoin.org/bitcoin.pdf
While no idea is perfect, some ideas are useful.
grondilu (OP)
Legendary
*
Offline Offline

Activity: 1288
Merit: 1076


View Profile
January 09, 2013, 09:25:57 AM
 #59

Each transaction can have many inputs, and every input needs to be looked up in a huge data set.

And again, it is not a descending process.

Even if your transaction has a hundred inputs, you'll have to verify that each of those inputs points to an output of a valid transaction, but you won't go any further, for the same reason as described above. There is no exponential growth in the number of required checks.

Also, the number of inputs and outputs is more or less proportional to the size of the transactions, or to the average size of the block. I've already calculated that for 10k transactions per block it comes to about 8 kB per second, IIRC. However you look at it, processing 8 kB of data per second does not seem like such a tough task for a computer.
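The ~8 kB/s figure above, reproduced under an assumed average transaction size (~500 bytes is a guess for illustration, not a measured average):

Code:
# Bandwidth check for 10k transactions per block, assuming ~500 bytes per
# transaction (an assumption; real averages vary).

TX_PER_BLOCK = 10_000
BLOCK_INTERVAL_S = 600
AVG_TX_BYTES = 500

bytes_per_second = TX_PER_BLOCK * AVG_TX_BYTES / BLOCK_INTERVAL_S
print(f"~{bytes_per_second / 1000:.1f} kB of transaction data per second")  # ~8.3 kB/s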


lucif
Sr. Member
****
Offline Offline

Activity: 462
Merit: 250


Clown prophet


View Profile
January 09, 2013, 09:27:57 AM
 #60

Plenty of data structures can look up individual transactions in O(log u), where u is the number of unspent transactions. Even using the lowest base of two, if there are u = 1 trillion unspent transactions, we are looking at about 40 × 16 = 640 lookups per second. Even a spinning disk can easily hit 2000 seeks per second, and an SSD would handle that many reads trivially. Only if we assume 1 trillion unspent transactions, a suboptimal data structure, 16 transactions a second and 3 inputs per transaction do we move to SSDs as a necessity. And that's assuming spinning drives don't improve in the decade, minimum, it takes to get to these levels.

Your move.
O(log u): proof, please.

AFAIK, a spinning drive has about 100 IOPS of performance.