Bitcoin Forum
Pages: « 1 2 [3] 4 5 6 7 8 9 10 11 »  All
Author Topic: [Stratum] Overlay network protocol over Bitcoin  (Read 37821 times)
ripper234 (Ron Gross), Legendary (Activity: 1358, Merit: 1003)
December 29, 2011, 08:14:17 AM  #41


To slush of course.

Please do not pm me, use ron@bitcoin.org.il instead
Mastercoin Executive Director
Co-founder of the Israeli Bitcoin Association
ripper234 (Ron Gross), Legendary (Activity: 1358, Merit: 1003)
December 29, 2011, 08:15:22 AM  #42

Please describe a possible attack vector.
I provided a few

I don't understand - you just described why such a protocol would be secure.

2112, Legendary (Activity: 2128, Merit: 1068)
December 29, 2011, 08:17:35 AM  #43

Yeah, good luck to whom?
To slush of course.
But at whose expense?

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0
ripper234 (Ron Gross), Legendary (Activity: 1358, Merit: 1003)
December 29, 2011, 08:21:25 AM  #44

However, there aren't too many things that a malicious server can do.

!!!Dangerous misconception!!!

Satoshi invented blockchains for a reason: they prevent double-spends.  If you don't have a copy of the block chain, anybody running the server you're talking to can "send" you coins and then take them back with total impunity, simply by making sure that a transaction sending those coins to themselves is "mined in" to the blockchain before they release the transaction that looks like it's sending them to you.

This problem is also removed once the client can talk to multiple servers.

This is dangerous "cargo cult security".  It does nothing to prevent man-in-the-middle attacks (my ISP attacks me) and merely shifts the problem elsewhere.  What prevents somebody from renting 100 cheap VPSes and running the server software on all of them?  Worse: botnet operators.  See also Sybil attacks.  Basing security on 51% hashpower is reasonable; basing it on 51%-of-IP-addresses-running-the-software is definitely not.  I can't think of anything more attractive to botnet operators!

I imagine slush can give a better defense of this proposal, as he always seems to write what I am thinking far better than I can.

With all due respect to slush, he shouldn't provide a "defense"; he should address these issues in the design document by explicitly stating the trust model.


Nobody said that all nodes are created equal. I would trust people with things to lose (e.g. established Bitcoin business such as Mt. Gox).
I would connect my client to servers run by the top 5 Bitcoin businesses.

Send queries and transactions to all 5. If any one of them disagrees, disregard its results.
If I can't get a consensus of 4, there is a major problem and I fail the transaction.
Use SSL to avoid man-in-the-middle attacks.

If you're running a business that's large enough to really be paranoid about Mt. Gox trying to scam you by falsifying TX data, then simply run your own server - this whole Overlay Network is targeted at end consumers and small businesses / webapps that don't have much to lose.

I plan to run a website that will probably never pass 100 BTC. Even before multiple-server support, if I rely only on the Mt. Gox server (or slush's server, or whatever), then Mt. Gox and slush have more to lose than they could gain by double-spending against me.

ripper234 (Ron Gross), Legendary (Activity: 1358, Merit: 1003)
December 29, 2011, 08:21:54 AM  #45


What?

Red Emerald, Hero Member (Activity: 742, Merit: 500)
December 29, 2011, 08:22:32 AM  #46

I'm not saying it's 100% secure. I'm saying it probably can't be, but also that the attacks are not very dangerous.

Red Emerald, Hero Member (Activity: 742, Merit: 500)
December 29, 2011, 08:25:48 AM  #47

Nobody said that all nodes are created equal. I would trust people with things to lose (e.g. established Bitcoin business such as Mt. Gox).
I would connect my client to servers run by the top 5 Bitcoin businesses.

Send queries and transactions to all 5. If any one of them disagrees, disregard its results.
If I can't get a consensus of 4, there is a major problem and I fail the transaction.
Use SSL to avoid man-in-the-middle attacks.

If you're running a business that's large enough to really be paranoid about Mt. Gox trying to scam you by falsifying TX data, then simply run your own server - this whole Overlay Network is targeted at end consumers and small businesses / webapps that don't have much to lose.

I plan to run a website that will probably never pass 100 BTC. Even before multiple-server support, if I rely only on the Mt. Gox server (or slush's server, or whatever), then Mt. Gox and slush have more to lose than they could gain by double-spending against me.

This.

2112, Legendary (Activity: 2128, Merit: 1068)
December 29, 2011, 08:28:30 AM  #48

At this table we are playing a zero-sum game. I know slush is the "house". But who pays the rake?

ripper234 (Ron Gross), Legendary (Activity: 1358, Merit: 1003)
December 29, 2011, 08:30:56 AM  #49


No we're not.

2112, Legendary (Activity: 2128, Merit: 1068)
December 29, 2011, 08:59:11 AM  #50

No we're not.

ripper234 (Ron Gross), Legendary (Activity: 1358, Merit: 1003)
December 29, 2011, 09:01:26 AM  #51


Whatever that may mean.

Red Emerald, Hero Member (Activity: 742, Merit: 500)
December 29, 2011, 09:32:40 AM  #52

Let's stay on topic, guys.

We need to make sure that there is little to no trust required between the client and the server, and come up with ways to mitigate attacks from a malicious server. If we find attacks, we need to figure out what damage they can do.

What happens if the server resolves firstbits to a matching, but not "first" address? (Bad news I think)

What happens if the server withholds a transaction either from the client or from the network?

What happens if the server (or the person controlling the server) double spends against their client?

How should we handle server selection? Talk to multiple servers at once? Round-robin requests?  Only talk to "trusted" servers?  I think this might depend on the request.

How should we handle two (or more) servers giving the client different information?

Should SSL be mandatory to prevent MITM attacks? Can we build the protocol so a MITM cannot do any damage?

ripper234 (Ron Gross), Legendary (Activity: 1358, Merit: 1003)
December 29, 2011, 09:44:39 AM  #53

Let's stay on topic, guys.

We need to make sure that there is little to no trust required between the client and the server, and come up with ways to mitigate attacks from a malicious server. If we find attacks, we need to figure out what damage they can do.

What happens if the server resolves firstbits to a matching, but not "first" address? (Bad news I think)

What happens if the server withholds a transaction either from the client or from the network?

What happens if the server (or the person controlling the server) double spends against their client?

How should we handle server selection? Talk to multiple servers at once? Round-robin requests?  Only talk to "trusted" servers?  I think this might depend on the request.

How should we handle two (or more) servers giving the client different information?

Should SSL be mandatory to prevent MITM attacks? Can we build the protocol so a MITM cannot do any damage?

- firstbits - I'm not really interested in that feature - if it's a security risk, postpone/cancel it.

- Withholding a TX - mitigated by talking to N independent servers (not random servers, to prevent someone from starting 100 instances, but known servers hosted by various known organizations and persons).

- Double spend - mitigated by talking to N servers. If you send a TX to N independent servers, then with at least N/2+1 honest nodes any attempt at a double spend will be easily detected.

- Start with a single server. The next step would be a hard-coded list of trusted servers, with requests going out to N of them, of which at least K have to agree. Future work - please don't let this delay getting the first server operational ... 1 is so much better than 0.

- SSL should be mandatory, probably from day 1. I don't see how the protocol itself needs to change, just the transport.

Nothing here is a show-stopper or a major architectural risk - let's get this thing up with 1 server and work from there.

Be aware that this layer will never be a decentralized, widespread network like Bitcoin. At first there will only be a handful of such servers, and barring future scalability concerns there's absolutely no need for more than a few dozen or hundred (known, trusted, not anonymous) servers. Don't over-engineer.
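The "send to N servers, require at least K to agree" rule described above can be sketched in a few lines of Python. This is only an illustration of the voting logic, not part of any actual Stratum implementation; the server list and the `query` callable are hypothetical placeholders.

```python
from collections import Counter

def confirmed_result(servers, query, k):
    """Send the same query to every server and accept an answer
    only if at least k of the servers agree on it."""
    answers = [query(s) for s in servers]            # one answer per server
    best, votes = Counter(answers).most_common(1)[0]
    if votes >= k:
        return best                                  # consensus reached
    raise RuntimeError("no %d-of-%d consensus" % (k, len(servers)))

# Toy example: 5 servers, one of them lying about a balance.
replies = {"gox": 100, "slush": 100, "a": 100, "b": 100, "c": 999}
servers = list(replies)
balance = confirmed_result(servers, lambda s: replies[s], k=4)
```

With `k=4` the single dissenting answer is disregarded; with `k=5` the call fails, which matches the "if I can't get a consensus, fail the transaction" policy above.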

slush (OP), Legendary (Activity: 1386, Merit: 1097)
December 29, 2011, 06:05:40 PM  #54

The resultant Frankenstein monster (or family of monsters) is going to try to strangle its creator.

I agree with you that projects like this tend to be over-complicated. Many times I have found myself engineering some monster project just because I wanted to hide the internal complexity of the task. That's exactly the reason why I'm forcing myself to keep this KISS: basic transports, simple RPC protocol, pubsub mechanism, services on top of this. If you know how to make it even simpler, tell me, seriously!

I'm trying to avoid "full stack" RPC mechanisms like SOAP+WSDL because, honestly, it's pretty hard to implement everything correctly on both ends.

Quote
It is hard to argue with what you are proposing point by point because of the contradictory nature of the requirements. I'm going to group my response into the following points.

I'm fully aware that those requirements are contradictory. It's pretty common that a solution needs to find some compromises. But correct me if you think some of those requirements are wrong (for the purposes of an overlay network).

Quote
1) Protocol can be either RPC or bidirectional, but not both.

Well, what's wrong with creating a communication channel and then having the possibility to call services on both ends, from either connected side?

I'm aware that it *can* become overcomplicated in the end, but that fully depends on the design of the services. This idea is taken from the Jabber protocol, where both sides provide some services. (And yes, I thought about building the overlay network on top of the Jabber protocol, because it already provides the service paradigm, but I think Jabber is a great example of overcomplicated stuff.)


Quote
The RPC paradigm (request-response, master-slave) is mutually contradictory with a pair of peers exchanging asynchronous messages.

I agree. I like RPC much more than message-based protocols, because it makes communication much cleaner: just a request and its response. That's the reason I'm not proposing communication based on async messages - dependencies between various types of messages can turn into a total mess.

Quote
Bitcoin protocol itself is from its origin asynchronous and cannot be squeezed into the master-slave architecture.

I partially agree (not fully, because everything can be transformed into a master-slave architecture, though of course that doesn't necessarily mean it's effective), but I'm not talking about the Bitcoin network. The difference between the Bitcoin network and the "Overlay" network is that Bitcoin is a distributed database, while the Overlay is a network of services.

Quote
2) Your target market (low-end consumer-level devices) demands checksumming and framing at the application layer, precisely because cheap NAT gateways and cheap DSL/Cable/WLAN modems are known to mangle transport-level frames due to bugs (in the implementation of NAT) and excessive buffering (to improve one-way file transfer benchmarks).
If you think you can add CRC later, you are going to lose by not detecting corruption early.

Hm, thanks to you, I'm thinking about this a little more than before. Is that really an issue? Aren't TCP checksumming and TCP retransmission on both ends enough to "fix" corrupted information? More generally, keeping the transmission working is a task for the transport layer, not for the protocol itself. And I agree that transport implementations should use their own internal mechanisms for checking that the transmission was successful, like using Content-Length or a content checksum in the HTTP headers.

The RPC proposal itself contains unique request IDs to link requests to responses. If the transport layer fails, it will probably appear as a disconnection, which also closes the session (on the TCP transport), so the client needs to reset its internal state after reconnecting (ask for balance, history etc., to be sure it is again in a stable state).

Side note: I've been running a JSON-based protocol on the pool for over a year, with over 3300 rq/s at the June peak. I agree that the mining protocol is stupid and ugly (although I understand how and why it was designed that way - don't worry m0m ;-) ) and that there are some issues with DDoSing etc. But I definitely haven't had any problems with corrupted packets like you're suggesting.
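The request-ID bookkeeping described above (unique ids linking requests to responses, and a state reset on disconnect) can be illustrated with a minimal sketch. Newline-delimited JSON framing and the `blockchain.address.get_balance` method name are assumptions for illustration only, not normative protocol details.

```python
import itertools
import json

class RpcSession(object):
    """Links JSON-RPC responses back to their requests via unique ids."""
    def __init__(self):
        self._ids = itertools.count(1)
        self._pending = {}            # id -> method name awaiting a response

    def request(self, method, params):
        """Build one outgoing request line and remember its id."""
        rid = next(self._ids)
        self._pending[rid] = method
        return json.dumps({"id": rid, "method": method, "params": params}) + "\n"

    def handle_line(self, line):
        """Match an incoming response to its request by id."""
        msg = json.loads(line)
        method = self._pending.pop(msg["id"])   # KeyError = unknown/stale id
        return method, msg["result"]

    def reset(self):
        # On disconnect, drop pending state; the client then re-queries
        # balance/history to get back to a stable state, as described above.
        self._pending.clear()

s = RpcSession()
wire = s.request("blockchain.address.get_balance", ["1abc"])
method, result = s.handle_line('{"id": 1, "result": 50}')
```

A response arriving for an id that is no longer pending simply raises, which is one cheap way to surface the "transport failed, reset the session" case.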

Quote
3) In my experience JSON is probably close to being the least resilient encoding possible. I can't disclose proprietary data, but I have over a decade's worth of reliability statistics for various remote services (RPC-like) that are sold by organizations I consulted. The rough ranking is as follows (from the fewest errors to the most):

3.1) ultra-lean character-based protocol similar to FIX, designed originally for MNP5 and V.42bis modems, currently used through a simple TCP/IP socket
3.2) SOAP (RPC over XML) with Content-Length, Content-MD5 and DTD verification enabled
3.3) SOAP and plain XML-RPC without the above strengthening options
3.4) JSON-RPC
3.5) RPC over e-mail with various human-readable encodings

Fair summary, thanks. I'm familiar with SOAP (although I've never used it in real life). As I mentioned above, I agree that Content-Length and Content-MD5 should be implemented, but in my concept they are part of the transport (because HTTP is only one of many ways to transfer bytes from one side to the other, and Content-* are HTTP headers), not part of the protocol - so we are in agreement here.

About DTD - it's definitely the better face of the XML concept, and I see some benefit in using a DTD as a formal specification of the protocol. Although I personally dislike XML, I'm open to changing my mind on this point. Is there, for example, a standardized way to serialize data such as lists or dictionaries? I picked JSON because it provides a pretty compact serialization of standard structures in a transparent and understandable way: I can call json_encode(any_data) and not care about "how it works" in almost any programming language. If there is some similar, widely accepted XML specification for serializing such objects, I'll reconsider. But handling XML streams at a low level is usually painful.
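The json_encode(any_data) point above is visible in one call in Python (the sample payload is made up for illustration):

```python
import json

# Arbitrary nested lists and dictionaries serialize with a single call
# and round-trip losslessly - the transparency argument made above.
data = {"tx": ["deadbeef", "cafebabe"], "confirmations": 3, "fees": [0.01, 0.005]}
encoded = json.dumps(data, sort_keys=True)
decoded = json.loads(encoded)
assert decoded == data   # no schema, no DTD, no custom parsing needed
```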

Quote
JSON is also infamous for letting people easily make byte-endianness mistakes, very much like the current "getwork", which is neither big-endian nor little-endian.

Well, the endianness in getwork isn't a mistake of JSON but of the layer above. However, I agree, it is a pain.

Quote
4) You somehow read my earlier suggestions about IPsec to imply a high-end, large-volume target market. The reality is quite the opposite: Windows has supported IPsec since 2000, Linux for a long time; the Netgear ProSafe family has several models in the $100-$200 range; L2TP and PPTP are available for free on all iPhones, Blackberries and Androids, and on many Nokias and other smartphones. The real hindrance is the HTTP(S)-uber-alles mindset, not the actual difficulty and cost of the implementation.

Thanks for the explanation. My only experience with IPsec was ten years ago, and it was painful; good to hear it has gotten better over time. I feel I'm repeating myself, but I see the "transport" concept as one of the most powerful things in my proposal. If somebody wants to fiddle with IPsec, let them. But most people will be happy with an SSL-hardened TCP socket or even HTTP polling, which is very familiar to them - both programmers and end users.

Quote
In summary I'd like to say that you wrote a very interesting and thought provoking proposal. I just think that the range of the targets you are hoping to cover is way too broad ($3 AVR processors, shared hosting plans, home computers, etc.).

Actually, I'm already in touch with two groups of people who are developing hardware-based wallets, so my proposal was created with some specific projects in mind. I agree that the final audience is pretty wide, but all I can do is keep the KISS attitude and hope I won't overcomplicate it in some way.

Quote
I have my personal litmus test for the technological implementation in the Bitcoin domain: it has to support (and be tested for) chain reorganization. Preferably it should correctly retry the transactions upon the reorg. Absolute minimal implementation should correctly shut down with a clear error message and a defined way to restart the operations.

Interesting stuff. I feel the overlay network should be "stateless", meaning it will "forget" a transaction once it has been successfully broadcast to the Bitcoin network. Implementing retransmission should be more the task of the clients built on top of the overlay network. As long as the overlay notifies its clients that their balance has changed (because of a blockchain reorg) and provides a correct address history (using the actual chain branch), its job is done. The reason I don't think the overlay network should try to "fix" such issues by retransmission is that it can turn into really complicated stuff. As an example: an end user may not want to re-broadcast a transaction, because it was created in the context of a previous incoming transaction on the "wrong" chain, which isn't actually stored in the current branch... But thanks for the suggestion, I'll think about it more.

In the end, I really appreciate that our discussion is factual and that we're discussing specific points rather than "you're doing everything wrong". I think this is constructive and helps my project.

slush (OP), Legendary (Activity: 1386, Merit: 1097)
December 29, 2011, 07:32:46 PM  #55

The thing is not to get too caught up writing long proposals and documents. Think a little bit, code up an implementation, get rolling and refine your requirements as they become obvious. It's easier to ask for forgiveness than to ask for permission.
I disagree.  I think planning ahead always leads to better software even if the software development may be slower to start.  Better to find a problem in your proposal and rewrite that than to discover a problem in your code and redo a ton of work there IMO

Proposals are especially important when multiple people are working on a project as it makes it easier to work on different parts of the project at once.

You're advocating the ancient waterfall method, which has basically been out of fashion for the last decade.

Quote
genjix:~$ python
Python 2.7.2+ (default, Oct  4 2011, 20:03:08)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
I think planning ahead always leads to better software even if the software development may be slower to start.  Better to find a problem in your proposal and rewrite that than to discover a problem in your code and redo a ton of work there IMO

Proposals are especially important when multiple people are working on a project as it makes it easier to work on different parts of the project at once.

I don't want to turn this thread into a battle between waterfall and XP. I'm not designing my proposal into a form that is "ready to implement" by any monkey. However, I feel that anybody with some experience can understand the basics from my paper and ask questions, which can bring up some good ideas, as 2112 is doing already. So I'm trying to find a balance between the waterfall method and the "show me the code" attitude.

slush (OP), Legendary (Activity: 1386, Merit: 1097)
December 29, 2011, 07:39:26 PM  #56

* use JSON RPC

done

Quote
* the set of functions that are currently used by the Electrum client and server (address based functions): the server does not know the set of addresses in a client's wallet, it just sends address histories and broadcasts transactions. This means that a client should be able to use several servers simultaneously for improved anonymity.

Yes, that's the main purpose, so - done. I call it the "blockchain service", because it provides an API for querying the blockchain database on behalf of chainless clients.

Quote
* wallet-based functions (similar to BCCAPI) for ultra-thin clients:  The server knows the public key used to generate the sequence of addresses in a type 2 wallet. It sends the balance and history of the wallet. It also sends the number of addresses detected in the wallet (gap based detection) The server also sends unsigned transactions to the client, and the client signs them.

Yes, this is possible. I call it the "wallet service", because it provides the functionality of coin selection, creation of (unsigned) transactions, etc.

Quote
* also, I would like to see something similar to Transaction Radar: http://www.transactionradar.com/ : when a transaction is unconfirmed, the client should display its rate of propagation. Electrum servers could be part of the existing transaction radar service.

I already implemented a 'txradar' service as an easy example of a 'proxy service' (providing the standard overlay-network API, but asking some other specialized service to do the actual job). I see the txradar service as a useful one, but it's fully up to clients how they integrate such information.

slush (OP), Legendary (Activity: 1386, Merit: 1097)
December 29, 2011, 07:56:13 PM  #57

I just re-read the JSON-RPC 2.0 specification (http://json-rpc.org/wiki/specification) and found that I had missed the "notification" type of message, which is very similar to what I proposed for the pubsub mechanism. The difference is that a notification has id:null, which I see as a smart choice, so I changed the proposal documentation and also the protocol implementation (not committed yet). The application protocol is now *fully* compatible with JSON-RPC 2.0.
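The distinction being adopted here - a regular request carries a unique id and expects a response, while a pubsub notification carries id:null and expects none - looks like this on the wire (method names are illustrative, not from the spec):

```python
import json

def request(rid, method, params):
    """A regular call: carries a unique id and expects a response."""
    return json.dumps({"id": rid, "method": method, "params": params})

def notification(method, params):
    """A pubsub notification: id is null and no response may be sent."""
    return json.dumps({"id": None, "method": method, "params": params})

req = request(7, "subscribe", ["1abc"])
note = notification("update", ["1abc"])
# The receiver dispatches on the id field: null means "do not reply".
```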

slush (OP), Legendary (Activity: 1386, Merit: 1097)
December 29, 2011, 07:58:09 PM  #58

Use WebSocket as transport. It is bi-directional and simple enough.
http://en.wikipedia.org/wiki/WebSocket

Yes, I plan WebSocket as a transport oriented toward in-browser clients. However, WebSocket is more of a fallback solution for environments where a standard TCP socket isn't available. WebSockets carry significant overhead, and there are also a lot of protocol versions in the wild, which makes implementation painful.

slush (OP), Legendary (Activity: 1386, Merit: 1097)
December 29, 2011, 08:18:59 PM  #59

Just my two cents:
1. Like some others said, develop a prototype, launch early, and iterate.

Yes, I'm doing this already. However, I like to open up the discussion, and not everybody can understand Python (and especially Twisted) code, so I found writing a textual proposal to be a good way to gain interest among non-programmers as well.

Quote
2. Should not require sending private keys (haven't read the protocol in detail yet, but just in case this was not planned). Instead use some form of Offline Transactions.

It's exactly what I'm trying to do.

Quote
3. Should consider DDOS protection. If this will be The Simple Network over Bitcoin, that will eventually handle huge traffic, you might want to consider charging a bit per API call ... just something minimal to deter DDOS. People can register their API keys, charge their account with X BTC, and get an allowance for Y API calls.

Good point. About DDoS - you can DDoS any service even without paying for it; you can just flood it with a lot of requests. So payment as protection against a *real* DDoS isn't a solution. Of course, you can make a service paid to stop people misusing processing power. I'm already thinking about it; I'm not sure I've found any viable solution yet.

Brainstorming: I'm thinking about a "credit service" and a standard exception, "fee required", which indicates that the service operator wants some fee for performing the call. It's up to every server operator which calls are free and which are paid. When a client receives a "fee required" exception, it needs to authenticate to the "credit service" with prepaid credit. When the previously failed call is retried, the server debits the fee from that prepaid account. Not sure if this is a good way, but it's the best solution I've found so far. Comments welcome.
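The brainstormed flow - call, receive a "fee required" error, authenticate against the credit service, retry - could be sketched roughly like this. Every name here (`FeeRequired`, `call_with_credit`, the toy server, the method string) is hypothetical; no such service exists yet.

```python
class FeeRequired(Exception):
    """Server-side signal: this call costs a fee."""

class ToyServer(object):
    """Stand-in for an overlay server whose calls require payment."""
    def __init__(self):
        self.authenticated = False
    def authenticate(self, credit):
        # Hypothetical "credit service" login with prepaid credit.
        self.authenticated = credit > 0
    def call(self, method, params):
        if not self.authenticated:
            raise FeeRequired(method)   # the "fee required" exception
        return "ok"                     # fee is debited from the prepaid account

def call_with_credit(server, method, params, credit):
    """Try the call; on 'fee required', authenticate and retry once."""
    try:
        return server.call(method, params)
    except FeeRequired:
        server.authenticate(credit)
        return server.call(method, params)

result = call_with_credit(ToyServer(), "firstbits.resolve", ["1abc"], credit=5)
```

The single retry keeps free calls free: a server that never raises `FeeRequired` is handled with no extra round trips.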

Quote
4. Should consider malicious servers. If someone evil runs an evil copy of this server, and clients connect to it ... how exactly can the evil server damage them?

Basically, there aren't many ways a malicious server can damage users in a serious way (like stealing their coins). However, if a client wants to be sure it has correct data, it can perform the same calls against several overlay servers. If all of them are on the valid blockchain, their responses should be the same.

Quote
Can EvilServer steal money? (Probablly not if you're just using Offline TX as the only mutator operation)

No, it cannot. However, there is a real issue with services like firstbits: if a server provides a wrong result for a firstbits lookup, the client can send money to the wrong address. This, too, can be solved by performing multiple calls against different servers.

Quote
Does it have a strong incentive to lie about GET queries?

I don't think so. However, some people will do malicious things just because they can, without any specific reason. This is why I think people will need to pick only trusted servers, or confirm important calls against a different overlay server.

Quote
These are just some things to think about ... you don't need all the answers in advance. I actually started developing a small Bitcoin webapp today, and got stuck when I realized I'd have to maintain the entire blockchain (or even just the headers). If a prototype of this project comes out early enough (~ 1 month), I'll wait for it instead of writing and debugging blockchain-maintaining code.

You're exactly the person I'm doing this project for :-). I see that maintaining a full client is big overkill for some kinds of projects that want to integrate with Bitcoin. And I'm almost sure there will be a working server in less than one month. You're using PHP on your site, right? Please send me a PM with some details - I need to design a PHP binding, and I want to discuss it with somebody who's actively working in PHP...

slush (OP), Legendary (Activity: 1386, Merit: 1097)
December 29, 2011, 08:23:31 PM  #60

This is cool, but I couldn't find any mention of the trust model anywhere in this thread or the design docs.

If you aren't keeping a copy of the blockchain, you need to find another answer for this question.

Well, the server cannot steal your money; it can only lie about transaction history or balance. I'm not saying that's not enough for some kind of attack, but it's still better than using the API of some web-based wallet (like Mt. Gox).

So yes - if you don't have a full copy of the blockchain, you need to trust somebody else, at least on some level. But most users and projects (like e-shops) aren't so big that it would be profitable for server operators to cheat them. You can also ask multiple servers to confirm balances/history, which limits the trust placed in any single entity.

Overall, it's a more secure solution than using the cart APIs of web wallets (like, heh, mybitcoin), because you still own your bitcoins.
