Bitcoin Forum
  Show Posts
1  Economy / Economics / Low Prices at CaVirtEx: Is this a trend? on: August 12, 2013, 06:27:24 PM
I recently discovered CaVirtEx, and it's remarkable for two things, which I believe are highly correlated:
  • Easy cash withdrawals
  • Significantly lower exchange rates (~10% below Gox)

So, here's my question: How much of the current price level of BTC is directly attributable to the fact that it's much easier to exchange traditional currency for BitCoins than the other way around?

What if there were an American exchange that made getting dollars out as easy as CaVirtEx does?  Would we see a massive BTC exodus, with an accompanying drop in value?
2  Bitcoin / Development & Technical Discussion / Re: Different architecture proposal? on: July 24, 2010, 03:47:54 PM
"OAuth has been hacked, hasn't it?  It's supposed to be announced at Black Hat."
A particular implementation of OAuth was susceptible to a timing side-channel attack.  Basically, you could learn which part of your attempted fake-authentication packet was incorrect by timing how long it took for the server to reject it.

The protocol itself is still considered secure.  (Besides, I'm not set on OAuth in particular; something similar would do.  The key point is to not use simple password authentication, because then you end up with each client "caching" the user's password independently, which is a big security risk.)
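
For what it's worth, this is the kind of fix that closes that class of leak: a comparison that takes the same amount of time whether the first byte is wrong or the last one is.  (Just an illustrative sketch; none of this comes from any OAuth spec or library.)

Code:
#include <cstddef>
#include <cstdint>

// A naive comparison returns as soon as a byte differs, so the server's
// response time tells the attacker how much of his guess was correct.
// This version touches every byte regardless, so timing reveals nothing.
bool constant_time_equal(const uint8_t* a, const uint8_t* b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; ++i)
        diff |= static_cast<uint8_t>(a[i] ^ b[i]);  // accumulate any mismatch
    return diff == 0;                               // single data-independent check
}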
3  Bitcoin / Development & Technical Discussion / Re: Different architecture proposal? on: July 24, 2010, 08:04:01 AM
One more thing:

I think the daemon process (which does all the networking, hashing and database stuff) and a reference hashing plugin absolutely need to be free and open.  I also think that a basic free client should be available to everyone.  However, I think there are some really great opportunities in creating for-profit hashing plugins and whizzy and/or specialized clients.

I suspect that NewLiberty wants to talk off list because he has some ideas about a closed, for-profit client.  I'm totally cool with that and wish him the best, but I really, really hope he doesn't think that anyone should trust the core logic of their wallet to software they can't audit (or have audited).
4  Bitcoin / Development & Technical Discussion / Re: Different architecture proposal? on: July 24, 2010, 07:58:12 AM
I have lots of ideas, but I'm not sure I have enough time to even articulate them fully, let alone implement them.  Here's a first stab:

  • A highly portable, core computation engine which doesn't depend on anything outside the standard libraries.  I don't think that a "validation only" engine is any less code than a block generating engine, so I see no reason for any bifurcation in that respect.

  • A cross-platform APPEND ONLY database for storing blocks.  There's lots of interesting stuff happening in the NoSQL world that would work well here.  I think it makes sense to piggyback on some existing open-source project; the key is to find something lean.  Most of the NoSQL world is focused on speed, and that's far less of an issue here.  Ideally, a library for reading the block list would be a separable component; there are some interesting BTC applications that don't need access to your wallet.

  • A plugin architecture for alternate hashing engines.  Faster hashing means generating more coins for yourself.  If BTC takes off, there will be a market for fast hash engines.  Ideally, this plugin system should require that the plugin return ALL computed hash values, not just successes.  That way the core engine can randomly confirm that hashes are being properly computed.  (Making a fake plugin which doesn't actually do all the hashes it claims to would be really simple...)  Creating a plugin system here means fewer forked clients and much less maintenance for users and the community.  (There's a rough sketch of the kind of interface I mean at the end of this post.)

  • The main "bitcoin" process should always be a daemon.  You could configure a client to quit the daemon when it shuts down, but the client and the daemon should be separate processes.

  • WALLET IS ALWAYS STORED ENCRYPTED.

  • A secure RPC/web service interface for clients.  Maybe based on OAuth?  It doesn't have to be OAuth itself, but that style of model is the right one here.  The key thing is that you don't want to require clients to have a copy of the user's password.  No siree, Bob!  Local machine connections could be authenticated by the daemon directly; remote connections would need to provide a simple HTTPS form for authentication.

  • Separate read-only/read-write permissions.  Read-only permissions could be used to check balances, verify payments, check confirmation counts.  Read-write would only be necessary for applications that send payments.  There are some useful applications which only require read-only access (eg. notify when a new payment comes in and is confirmed), but I'd be less likely to want to run a third-party app like that if I had to also give it permission to send my coins to some untraceable address!

A key design goal here is compartmentalization.  The compute engine is portable so it can be shared across platforms.  The hashing code is pluggable so that custom hash engines don't require forking the entire project.  Multiple clients can be written for each platform, and can all be safely run at the same time talking to a single core daemon.
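
To make the hashing-plugin idea a bit more concrete, here's a rough sketch of the kind of interface I have in mind.  All the names are made up; none of this exists in the current source.

Code:
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

// One unit of claimed work: the input the plugin says it hashed, and the
// digest it says it computed.
struct HashResult {
    std::vector<uint8_t> input;
    std::vector<uint8_t> digest;
};

// Plugins return EVERY hash they computed, successes and failures alike,
// so the core engine can audit them.
class IHashPlugin {
public:
    virtual ~IHashPlugin() {}
    virtual std::vector<HashResult> ComputeBatch(
        const std::vector<std::vector<uint8_t> >& inputs) = 0;
};

// Core-side spot check: recompute a random sample of the plugin's claimed
// results with the trusted reference hasher and reject the plugin on any
// mismatch.  A fake plugin that skips work gets caught quickly.
typedef std::vector<uint8_t> (*ReferenceHashFn)(const std::vector<uint8_t>&);

bool SpotCheck(const std::vector<HashResult>& results,
               ReferenceHashFn ReferenceHash, size_t samples)
{
    if (results.empty()) return true;
    for (size_t i = 0; i < samples; ++i) {
        const HashResult& r = results[std::rand() % results.size()];
        if (ReferenceHash(r.input) != r.digest)
            return false;
    }
    return true;
}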
5  Bitcoin / Development & Technical Discussion / Re: Feature Request: Splash Screen on: July 23, 2010, 04:34:38 AM
Seriously?  A splash screen?

The current BitCoin client is an awesome proof-of-concept, and a decent first pass to get the network "into beta".

However, I think that "fixing" the client with things like a splash screen is wasted effort.  Let's put our time and energy into a portable core BitCoin engine, and define a way of creating LOTS of different clients that interface into it.

But, you know... adding a splash screen is probably pretty easy... :P
6  Bitcoin / Development & Technical Discussion / Re: Different architecture proposal? on: July 22, 2010, 04:18:04 AM
I've been thinking exactly along these lines.  The main bitcoin process should NOT be tied to the UI at all (and, while a wxWidgets UI is a great way to bootstrap the project, it's NOT a good solution for the long term).

In fact, I'd go so far as to say that one of the next major steps for the bitcoin system is twofold:
  • Create a proper spec of the protocol (if one exists outside of the source code at this point, I haven't found it)
  • Create a second, independent implementation of the protocol

Without a second, independent implementation, you can't be sure that the protocol is correctly documented.

Perhaps the "reference" implementation (which we're all currently using) doesn't need to change, but if I were designing a bitcoin client, it's be structured a LOT differently than the current client.
7  Bitcoin / Development & Technical Discussion / Re: Hot swapping wallets? on: July 22, 2010, 04:13:05 AM
So far as the rest of the world is concerned, each address you generate is completely unique.  If you create two addresses, there's no way for anyone to connect the two of them (unless you transfer money between them a bunch of times).

There's no reason to swap wallets, and you run the risk of messing it up and losing coins...
8  Bitcoin / Development & Technical Discussion / Re: Dynamic Difficulty Adjustments on: July 19, 2010, 05:19:36 PM
The noise in the time it takes to generate blocks is very, very high.  There was a link posted somewhere in the forums that showed the time taken for the swarm to solve the previous 100 blocks (or so), and the range in solution time was between 3 seconds and 20 minutes.

Adjusting SLIGHTLY after each block makes sense to me though.  Maybe the difficulty could go up or down a fraction of one percent each block, and then be recomputed wholesale at the two-week boundaries?
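
Something like this is what I'm picturing.  Purely a sketch of the idea, not how the current client does it:

Code:
// Nudge the difficulty by at most 0.1% after every block: up if the block
// came in faster than the 10-minute target, down if it came in slower.
// The big two-week recomputation would still happen on top of this.
double NudgeDifficulty(double difficulty, double block_seconds)
{
    const double target_seconds = 600.0;  // 10-minute target
    const double max_step = 0.001;        // cap each nudge at 0.1%
    if (block_seconds < target_seconds)
        return difficulty * (1.0 + max_step);  // swarm is fast -> a touch harder
    else
        return difficulty * (1.0 - max_step);  // swarm is slow -> a touch easier
}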

Regardless of how it's done though, we need to be wary of wild swings...
9  Bitcoin / Development & Technical Discussion / Re: tracking bitcoins? on: July 19, 2010, 04:54:20 PM
I think we should try to track the payment JUST TO SEE IF WE CAN.

How much info is there in the system?  If we don't know this, how can we trust anything?!?
10  Bitcoin / Development & Technical Discussion / Re: Two instances one computer on: July 19, 2010, 04:51:52 PM
I'm not sure why you want to run two instances on one computer, but I can answer your question:

For some reason (and I have no idea why!), PROCESSING the downloaded blocks from the swarm is slow.  Validating the blocks shouldn't be too slow, but if you have some way of checking, you'll see that the node that is downloading blocks is slamming the disk.

Without actually profiling the software, I can't say why, but I believe you should just be able to copy the block DB (blk*.dat) from one instance to the other without any badness happening.  Just be careful not to blow away your wallet!
11  Bitcoin / Development & Technical Discussion / Re: Selling cuda enabled client on: July 18, 2010, 02:36:42 PM
Damn, Paysafe!  I was 99% sure this guy's claim was bogus, and came very close to posting my opinions as to why...  Coulda saved you some BTC if I'd posted sooner.  Sorry!
12  Bitcoin / Development & Technical Discussion / Re: Privacy versus Safety: handling change on: July 18, 2010, 07:49:02 AM
Thanks Theymos: That clears things up a lot!  (I wasn't sure how the change-handling code worked...)
13  Bitcoin / Development & Technical Discussion / Re: Hash() function not secure on: July 18, 2010, 07:47:01 AM
I don't want to sound disrespectful, Satoshi (after all, you're the reason we're all here!), but I'm not sure that "it's probably good enough" is a sufficient answer!

I hear you saying that SHA-256 isn't much better than SHA-128, but what I think we need to hear is why SHA-128 is BETTER than SHA-256...

(Also: Is there a planned mechanism in place to switch to a new hashing scheme?  Seems like a good thing to plan for early, even if it seems unlikely to be necessary...)
14  Bitcoin / Development & Technical Discussion / Re: 50%+ Attack Nodes on: July 18, 2010, 06:33:16 AM
Interim steps like you suggest don't really change anything.  What matters is the total number of blocks generated between the time Black Bart spends his coin and the point at which you (irrevocably) transfer something to him in return.  (Those blocks are called "confirmations" in the current UI.)

If Black Bart controls fewer CPU resources than the swarm, increasing the total number of confirmations (again, regardless of the number of interim transactions) dramatically decreases the probability that Black Bart can "change history" by providing an alternate block chain.

However, if Black Bart controls more CPU resources than the swarm, all bets are off.  He can change history pretty much at any point he chooses.

I'll reiterate, however, that the cost of dedicating more CPU resources than the swarm to ripping off BTCs is virtually guaranteed to be many times greater than any amount of value you could extract from the swarm.
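
For the curious, here's my reading of the analysis in the research paper.  If p is the honest swarm's share of the CPU power and q = 1 - p is Black Bart's, the chance that he ever catches up from z confirmations behind is

    q_z = \begin{cases} 1 & \text{if } p \le q, \\ (q/p)^z & \text{if } p > q \end{cases}

so every extra confirmation multiplies his odds by q/p, which is why waiting for more confirmations helps so dramatically.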
15  Bitcoin / Development & Technical Discussion / Re: Privacy versus Safety: handling change on: July 18, 2010, 06:22:34 AM
Surely the "coins" stored in wallet.dat are essentially just a cache, though, right?  The "real" count of BTC that I can spend are encoded into the chain.

If I still have all my private keys, I should be able to scan the block list to recover that "cache".

Suggestion: Include a block number/id in the wallet which is the last-known-good point for that wallet.  When a wallet is restored from backup, you can rescan the new blocks to be sure it's up-to-date.

Second Suggestion: Allow the user to rebuild a wallet without re-downloading all the blocks.

Final Suggestion: Separate the "real" critical data (the private keys) from the "cache" (my current coins).  There is no reason to conflate the data that is absolutely critical for me to access my coins with a system designed to efficiently find my coins and let me spend them.
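
Roughly what I mean, with made-up names (nothing here is from the actual client):

Code:
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// The irreplaceable part: lose these and the coins are gone forever.
struct KeyStore {
    std::map<std::string, std::vector<uint8_t> > private_keys;  // address -> key
};

// The rebuildable part: just a cache, derivable at any time by rescanning
// the block list for outputs that pay one of our addresses.
struct CoinCache {
    int64_t last_scanned_block;              // last-known-good point for this wallet
    std::map<std::string, int64_t> unspent;  // "txid:n" -> amount
};

// Restoring from backup: keep the KeyStore, throw the CoinCache away, and
// rescan from last_scanned_block (or from zero) to rebuild it.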

(Unless I'm fundamentally misunderstanding the inner workings of BitCoin!)
16  Bitcoin / Development & Technical Discussion / Re: Bitcoin is "Growing Up" : Feature Request on: July 18, 2010, 06:13:13 AM
I'm not sure a static download of the first n blocks is really the right solution.  The problem isn't that it's so much data for the network to handle; the problem is that the processing and storage of those blocks is really expensive.

Static download is a crutch; fix the real problem: why should downloading 32MB of blocks bring my computer to a halt for a couple of hours under any circumstances?!
17  Bitcoin / Development & Technical Discussion / Re: Compiling bitcoind on unix/linux on: July 18, 2010, 06:05:24 AM
The problem is that wxWidgets includes a bunch of "core" functionality that isn't related to GUI widgets at all.  String classes and whatnot.  The core of BitCoin was built with those utility classes.

You'll notice that Gavin's configure call disables GUI.

I have a Linux box that has no windowing system on it at all; no GTK, no X, nothing.  I did something similar to what Gavin did, and the daemon is running fine.

I'd much prefer if it could be built without wxWidgets, though.  (Or Boost, for that matter!)
18  Bitcoin / Development & Technical Discussion / Re: 50%+ Attack Nodes on: July 18, 2010, 04:36:36 AM
I think I have a pretty good handle on how BitCoin works, but this is just my understanding.  Grain of salt and all that.

The concern is not just that "the bad guys" will run more nodes; the weakness is very specific:

A malicious entity can "unspend" coins if they can generate more valid blocks than the swarm.

Here is the scenario:

Black Bart sends you 100 BTC, you wait an hour to get 6 confirmations on the payment before considering that payment valid and giving BB the gold bar he just bought from you.  At that point, Black Bart releases a new chain of hashes which is at least one block longer than the "truth" that the swarm produced.  The BitCoin algorithm takes a longer chain as being "more valid" than the shorter chain, so the "real" chain gets rejected for BB's chain.  But, of course, in BB's chain, you didn't get paid.  He keeps his 100 BTC and uses it to buy a second gold bar.

A few notes:

  • A block chain is easily verified to be "correct", so BB needs to generate a valid block chain which is longer than the swarm's.  Thus, he needs more CPU power than the swarm.

  • That's kind of a lie: he doesn't strictly need more CPU power, because block generation is probabilistic.  It's possible that BB could be running a single client on a 486SX and still be "lucky enough" to produce more blocks than the swarm.  I haven't done the math, but I'm pretty sure the probability of this scenario is somewhere along the lines of winning the lotto every week for your entire life.  (The research paper gives a few sample probabilities for this; there's a sketch of that calculation after this list.)

  • Even with all this CPU power and/or luck, BB can only spend as many coins as are in his wallet.  He can't ever steal your coins, and he can't create "fake" coins.  The potential value to BB of throwing all this CPU power at tricking the system is very, very low.  If such an attack ever did come, it would far more likely be intended to destroy the BitCoin system (by undermining confidence) than to turn a direct financial profit.
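
For anyone who wants to play with the numbers, here's a rough sketch of that calculation as I understand it from the paper (q is BB's share of the total hashing power, z is the number of confirmations you wait for):

Code:
#include <cmath>

// Probability that an attacker with hash-power share q ever overtakes the
// honest chain after you've seen z confirmations.  Poisson-weighted
// gambler's-ruin calculation, along the lines of the one in the paper.
double AttackerSuccessProbability(double q, int z)
{
    double p = 1.0 - q;
    if (p <= q)
        return 1.0;                  // majority attacker wins eventually
    double lambda = z * (q / p);     // expected attacker progress while you wait
    double sum = 1.0;
    for (int k = 0; k <= z; k++) {
        double poisson = std::exp(-lambda);
        for (int i = 1; i <= k; i++)
            poisson *= lambda / i;
        sum -= poisson * (1.0 - std::pow(q / p, z - k));
    }
    return sum;
}

If I'm reading the paper's tables right, at q = 0.1 six confirmations already push BB's odds well below a tenth of a percent, and even at q = 0.3 a couple dozen confirmations get you there.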

19  Bitcoin / Development & Technical Discussion / Re: Network Size on: July 18, 2010, 03:43:20 AM
Based on http://www.alloscomp.com/bitcoin/calculator.php:

1 250 000 khps will have an average generation time of 10 minutes.
  900 000 khps will have a median generation time of 10 minutes.

This thread claims that the average generation speed is about 400 khps; I'm seeing about 1000 on reasonably recent machines.  Dividing 1 250 000 khps by those per-machine rates (1 250 000 / 1000 ≈ 1250, 1 250 000 / 400 ≈ 3100) gives an estimate of roughly 1000-3000 machines working in the swarm.
20  Bitcoin / Development & Technical Discussion / Re: Network Size on: July 18, 2010, 01:16:47 AM
To go back to the original poster's question: how do you estimate the current swarm size?

Seems to me you could get a pretty good estimate of the maximum historical swarm size (in terms of compute power, not nodes) just by looking at the difficulty number.  As I understand it, the difficulty is scaled up based on how fast the swarm can compute new blocks.

A more accurate and up-to-date estimate could be made based on the time it took for the last few blocks to be solved.

I can't imagine that there is (or could be!) an accurate way of estimating the number of nodes directly.  Too many firewalls, Tor nodes, etc.
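
Here's a back-of-the-envelope version of the difficulty-based estimate (the difficulty and per-machine rate below are just placeholders to plug real numbers into):

Code:
#include <cstdio>

// A block at difficulty D takes about D * 2^32 hash attempts on average,
// and the swarm finds one roughly every 600 seconds, so the swarm's total
// hash rate is about D * 2^32 / 600.  Divide by a typical per-machine rate
// to guess at the number of machines.
int main()
{
    double difficulty = 200.0;       // plug in the current difficulty here
    double per_node_khps = 1000.0;   // roughly what one recent machine does
    double network_hps = difficulty * 4294967296.0 / 600.0;
    double machines = network_hps / (per_node_khps * 1000.0);
    std::printf("swarm: ~%.0f khash/s, ~%.0f machines\n",
                network_hps / 1000.0, machines);
    return 0;
}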