Don't use deprecated methods!
What? Maybe they aren't officially deprecated, but the find_by_* methods are the old-style finders. Regardless, leaving the "secret key" in a public repo is just fail anyway.
|
|
|
Here is a guy with "Staff" next to his nickname talking to me.
Oh grondilu... he just helps run the boards. You should know bitcoin has no staff, and forum staff don't carry any more weight with the bitcoin community than anyone else. But I imagine if an SSD becomes necessary (it isn't even close now) that is what people will recommend.
|
|
|
Actually, it's $300 a year for an ASV scan.
Merchants only require a quarterly ASV scan from regular customers, provided by someone like McAfee.
And for the Bitcoin community, this kind of enforced security standard would be a good thing. I wrote about it somewhere here.
I have a website. On my website I give customers a Bitcoin address to make payment. I keep the private key for this address on a QR code in a bank safety deposit box. How would PCI compliance benefit me or the customer in any way? It wouldn't. Once the uneducated learn their lessons and bitcoin is done right, there will be no way for a webserver breach to compromise anybody's finances.
|
|
|
Don't use deprecated methods!
|
|
|
So your staff is going to announce that a good high-end SSD is needed to use bitcoin? My 7200 RPM drive is not the cheapest HDD.
I have no staff.
|
|
|
I'm not talking about the code - I'm showing you HDD capabilities. Where are the 2000 seeks?
Sorry, I read "stats under running bitcoin" and mistranslated your gibberish as stats with bitcoin running. I already agreed that 100 IOPS was correct, I thought we were moving on.
|
|
|
Guys, c'mon ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif) You are talking with a high-load architect and unix sysadmin ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif) who doesn't believe that optimal search is log(n) ![Roll Eyes](https://bitcointalk.org/Smileys/default/rolleyes.gif)
|
|
|
This is my 7200 seagate stats under running bitcoin. Regular disks have lower rates.

```
Device: rrqm/s wrqm/s    r/s   w/s  rkB/s  wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda      50,67   0,00 122,00  2,00 690,67   6,67    11,25     1,02  8,27    8,09   19,17  7,47 92,60
sda      51,33   5,67 124,33 22,67 702,67 105,33    10,99     1,18  8,00    7,88    8,62  6,32 92,97
sda     100,33   2,67 123,33 29,33 894,67 113,33    13,21     1,14  7,48    8,05    5,10  6,10 93,17
sda      51,00   0,00 111,33 35,67 664,00 132,00    10,83     1,13  7,66    8,52    4,96  6,35 93,37
```
As you can see, it's about 100 read requests per second with the heads almost fully busy. And this is cached sequential read - caches still help because there isn't much data in the DB yet. And the first column shows how many requests were merged. For the third time, quit using the old code. We know it's shitty. The new code is already available and barely touches the disk.
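For what it's worth, the IOPS figure can be read straight off those iostat samples: total IOPS per interval is r/s + w/s. A quick Python sketch using the numbers above (decimal commas converted to points):

```python
# Quick check of the iostat samples above: total IOPS per interval is
# r/s + w/s, copied from the four sda lines in the output.
samples = [
    (122.00, 2.00),
    (124.33, 22.67),
    (123.33, 29.33),
    (111.33, 35.67),
]
iops = [r + w for r, w in samples]
print([round(x, 2) for x in iops])      # per-interval totals
print(round(sum(iops) / len(iops), 1))  # average: 142.7 IOPS
```

So "about 100 requests per second" is read traffic alone; counting writes, the drive is doing roughly 140 IOPS at ~93% utilization.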
|
|
|
Plenty of data structures can look up individual transactions in O(log u) where u is the number of unspent transactions.
O(log u) - proof pls. No need for proof; it's common knowledge. Just look at the top right. Not the best example... binary search trees are only log(n) in the average case, worst case n. B-trees are worst case log(n).
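The depth figures behind this exchange are easy to check. A minimal Python sketch (u = one trillion entries and a branching factor of 256 are illustrative assumptions, not numbers taken from the client):

```python
import math

# u unspent transactions indexed by a balanced structure.
u = 10**12  # one trillion, as assumed in the posts above

# Binary search / balanced BST: about log2(u) comparisons.
binary_depth = math.ceil(math.log2(u))      # 40

# B-tree with an assumed branching factor of 256 keys per node:
# log_256(u) pages touched - which is what matters for disk seeks.
btree_depth = math.ceil(math.log(u, 256))   # 5

print(binary_depth, btree_depth)  # 40 5
```

That gap is why B-trees (and similar wide, page-oriented structures) are the standard choice for on-disk indexes: each node read is one seek, so a depth of ~5 beats a depth of ~40 by nearly an order of magnitude.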
|
|
|
Plenty of data structures can look up individual transactions in O(log u) where u is the number of unspent transactions. Even using the lowest base of two, if there are u = 1 trillion unspent transactions, we are looking at about 40*16 lookups per second. Even a spinning disk can easily hit 2000 seeks per second, and a SSD would handle that many reads trivially. Only if we assume 1 trillion unspent transactions, a suboptimal data structure, 16 transactions a second and 3 inputs per transaction do we move to SSD as a necessity. And that's assuming spinning drives don't improve in the decade minimum it takes to get to these levels.
Your move.
O(log u) - proof pls. afaik, a spinning drive has about 100 IOPS performance. http://en.wikipedia.org/wiki/B-tree
And I'll give you the point on 100 IOPS for low-end consumer drives. IOPS is a better measure than seeks. Good drives can get you close to 200, and SSDs can blow the ~2000 I calculated out of the water: http://en.wikipedia.org/wiki/IOPS#Examples
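As a sanity check on the numbers being traded back and forth, here is the worst-case arithmetic as a small Python sketch. The inputs (16 tx/s, 3 inputs per tx, ~40 lookups per input from a binary tree over a trillion entries) come from the posts above; the drive IOPS values are the round figures quoted in the thread plus an assumed SSD number, not benchmarks:

```python
# Worst-case lookup load under the thread's assumptions.
tx_per_sec = 16          # 10k transactions per 10-minute block
inputs_per_tx = 3        # assumed average
lookups_per_input = 40   # ceil(log2(10**12)), binary tree

required_iops = tx_per_sec * inputs_per_tx * lookups_per_input
print(required_iops)  # 1920 - the ~2000 figure from the posts

for drive, iops in [("low-end HDD", 100), ("good HDD", 200),
                    ("SSD (assumed)", 20000)]:
    print(f"{drive}: {'OK' if iops >= required_iops else 'too slow'}")
```

So under these pessimistic assumptions a spinning disk is indeed too slow, and the disagreement reduces to whether a drive delivers ~100-200 IOPS (it does) or ~2000 seeks/s (it doesn't) - plus how much the RAM cache and a wider tree shave off the 1920.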
|
|
|
PS. Of course I oversimplify here because there can be chain forking which requires that you don't forget spent transactions too easily. But even so, "forget" is a bit too strong a word since it just means you don't put it in RAM anymore. It's still on disk.
Looks like it's outside the speculation discussion, but I will continue this chess game. 10k transactions per block is 16 transactions per second. Each transaction can have many inputs. And every input needs to be found in a huge data set. Your move. Plenty of data structures can look up individual transactions in O(log u) where u is the number of unspent transactions. Even using the lowest base of two, if there are u = 1 trillion unspent transactions, we are looking at about 40*16 lookups per second. Even a spinning disk can easily hit 2000 seeks per second, and a SSD would handle that many reads trivially. Only if we assume 1 trillion unspent transactions, a suboptimal data structure, 16 transactions a second and 3 inputs per transaction do we move to SSD as a necessity. And that's assuming spinning drives don't improve in the decade minimum it takes to get to these levels. Oh, and we're ignoring the cache in RAM that will hold many of the transactions needed. Your move.
|
|
|
Ah, yes. But that's not the issue; the issue is that since 0.6.xxxx and up the client segfaults on SUSE 11.3, while it works fine on my SUSE 12...
likely a library version issue... just compile from git
|
|
|
Well duh it goes up - there are 10 million bitcoin days created every day!
|
|
|
Could you do Marathon or Sheetz cards? There is only one Shell station in my town and I'm on the opposite side. Also, it is probably the most expensive one in the area due to its location.
pseudo-electric car lol
|
|
|
Hey, I am talking specifically about disk reads, not about CPU or GPU. That volume of transactions will cause heavy disk key lookups. Read before write.
Dude, with the latest leveldb ultraprune builds I can sync the complete chain, verify the transactions and block hashes for all blocks, and verify the signatures for all the blocks after the last checkpoint in under 4 hours with mostly idle disk and less than one core of cpu. It's bottlenecking at the network code (not network speed, just the block download code needs work that is underway or will begin soon). So how is disk a problem again?
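The reason ultraprune barely touches the disk is the access pattern: it keeps just the set of unspent outputs in an on-disk key-value store (LevelDB in the real client), so validating an input is one keyed lookup rather than a scan. LevelDB isn't in the Python stdlib, so this sketch uses the stdlib `dbm` module to show the same pattern; the "txid:n" key format is made up for the example:

```python
import dbm
import os
import tempfile

# Illustrative sketch only - dbm stands in for LevelDB here.
path = os.path.join(tempfile.mkdtemp(), "utxo")
with dbm.open(path, "c") as utxo:
    utxo[b"deadbeef:0"] = b"50.00"    # new unspent output added
    print(b"deadbeef:0" in utxo)      # spend check: one key lookup -> True
    del utxo[b"deadbeef:0"]           # output spent: drop it from the set
    print(b"deadbeef:0" in utxo)      # False - a double-spend is rejected
```

Because spent outputs are deleted, the working set stays small enough that most lookups hit the OS page cache, which is consistent with the "mostly idle disk" observation above.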
|
|
|
Is it just me, or does it really look like a smaller version of May 2011 to Mar 2012? ![](https://ip.bitcointalk.org/?u=http%3A%2F%2Fbitcoincharts.com%2Fcharts%2Fchart.png%3Fwidth%3D940%26m%3DmtgoxUSD%26SubmitButton%3DDraw%26r%3D%26i%3D%26c%3D0%26s%3D%26e%3D%26Prev%3D%26Next%3D%26t%3DS%26b%3D%26a1%3D%26m1%3D10%26a2%3D%26m2%3D25%26x%3D0%26i1%3D%26i2%3D%26i3%3D%26i4%3D%26v%3D1%26cv%3D0%26ps%3D0%26l%3D0%26p%3D0%26&t=663&c=B1cPbRubB3OsMQ) Echo bubble. And once we climb over the top of it, the next stop is the top of the big guy.
|
|
|
Could you do Marathon or Sheetz cards? There is only one Shell station in my town and I'm on the opposite side. Also, it is probably the most expensive one in the area due to its location.
|
|
|
All the money is already there. Nothing new, hehe. Did you notice the bid side size?
Some more upside should be here. 1400-1425.
I agree on the upside. I certainly know the people I buy for are still sending me more cash than normal.
|
|
|
Bids increasing, asks shrinking away... I see a storm brewing.
|
|
|
Nothing I wasn't used to seeing... until he shot out over his neighbors' houses.
|
|
|
|