Topic: BTC-like cryptocurrency with arbitrary tradeable computation in proofs of work
Alex Coventry (OP)
Newbie
Activity: 19
Merit: 0
February 18, 2012, 05:20:30 PM
Last edit: April 25, 2012, 09:52:50 PM by Alex Coventry
 #1

Hi, everyone.  In my spare time, I've been thinking about ways to incorporate arbitrary computation into the proof of work for a bitcoin-like blockchain-based cryptocurrency.  You can read about the design I've come up with here (click on the "Download" link in the top right-hand corner to view the PDF locally.)  

I've given serious thought to this problem, but I am by no means an expert on any aspect of it, and I welcome all critical feedback on all aspects of my proposal.  In this thread I am mostly hoping for conversation about the flaws in the protocol I've described.  (E.g., I welcome conversation about security flaws or potentially perverse incentives created by the economic model I've proposed; I'm less interested in conversation about how the proof of work in bitcoin is already useful, or about David Graeber's ideas.)
Alex Coventry (OP)
Newbie
Activity: 19
Merit: 0
February 19, 2012, 03:16:32 AM
 #2

Thanks to the moderators for moving this out of the newbie jail!
Stephen Gornick
Legendary
Activity: 2506
Merit: 1010
February 19, 2012, 03:49:06 AM
 #3

Trying to understand the economics of it.

Quote
The scheduling transactions TA must commit at least 65 NooShares with anything in excess of that going to the best result reported.

So what happens if there's less than 65 NooShares (i.e., not a high enough demand)?

Alex Coventry (OP)
Newbie
Activity: 19
Merit: 0
February 19, 2012, 04:01:03 AM
 #4

Thanks for your interest.  Just to make sure we're on the same page, TA is like a bitcoin transaction with an input of at least 65 units (bitcoins/nooshares) and no explicit outputs.  Essentially, the nooshares used to purchase the computation are destroyed temporarily, but some are later redistributed.  So I'm interpreting your question as "what if no computation has been scheduled for some block?"  Please let me know if I've misunderstood or my answer isn't clear.  The answer is that I haven't decided yet.  The default computation ("curly A" in the paper's notation) would work.  Some publicly interesting computation like a protein folding problem or searching for Fermat primes might also make sense.
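For concreteness, here's roughly how I've been picturing a scheduling transaction. This is just a toy sketch in Python, and the field names are mine rather than anything fixed in the paper:

Code:
from dataclasses import dataclass

@dataclass
class Input:
    value: int  # NooShares contributed by this input

@dataclass
class SchedulingTransaction:
    """Toy model of a scheduling transaction T_A (illustrative only)."""
    inputs: list        # prior outputs worth at least 65 NS in total
    algorithm: bytes    # the chunk defining A, run by miners as the proof of work
    # No explicit outputs: the committed NooShares are destroyed, and the
    # block/reporting rewards later redistribute some of them.

    def committed(self) -> int:
        return sum(i.value for i in self.inputs)

    def bounty(self) -> int:
        # anything committed beyond the 65 NS minimum goes to the best reported result
        return max(self.committed() - 65, 0)

    def is_valid(self) -> bool:
        return self.committed() >= 65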

Stephen Gornick
Legendary
Activity: 2506
Merit: 1010
February 19, 2012, 04:18:57 AM
 #5

Quote
So I'm interpreting your question as "what if no computation has been scheduled for some block?"

Correct.

Perhaps I'm misunderstanding the model.  Let's say I would like to have my computation A performed and would like to get it scheduled.  Is this a market mechanism (based on price), or is it some other method?

Alex Coventry (OP)
Newbie
Activity: 19
Merit: 0
February 19, 2012, 04:35:12 AM
 #6

It's fixed-price, first come, first served (modulo the random scheduling).  I never thought of varying the price based on demand.  Interesting idea!

Edit: The way I have been imagining it, "effective price" will be shaped by the exchange rate of nooshare vs other currencies.  I think this affords all the benefits you could get from varying the price within nooshare itself.
gmaxwell
Staff
Legendary
Activity: 4270
Merit: 8805
February 19, 2012, 06:27:49 PM
 #7

Quote
Hi, everyone. In my spare time, I've been thinking about ways to incorporate arbitrary computation into the proof of work for a bitcoin-like blockchain-based cryptocurrency. You can read about the design I've come up with here (click on the "Download" link in the top right-hand corner to view the PDF locally.)

I know nothing about the MCMC problems used for physics, but I sure can't see how this can be used for MC sampling for Bayesian inference, since you actually need the whole chain data after the bootstrapping. Perhaps it would be helpful if you spelled out some example problems and gave the setups for them.

As someone who uses optimization frequently, the idea of being able to buy cycles to run arbitrary stochastic algorithms is a very exciting one.

In particular, I often use optimization ~interactively while problem solving. While my average utilization is fairly low, I would really like enormous amounts of computation in short bursts. If I could buy EC2 in 5-minute chunks I'd regularly be buying 1000 CPUs' worth at a time.  So a distributed system which allowed me to shift computation in time in order to get more of it at once would be _VERY_ useful to me.


I have some vague concerns that the upshot of the Lua execution engine (with software floating point!) will be that such a system could not be competitive with centralized computational services— simply because you're going to take some large slowdown to start with. You don't seem to have addressed resource exhaustion except for speed: e.g. running the systems out of memory, or simply requiring great amounts of it in order to disadvantage nodes with smaller amounts, creating a race where all participants must have as much RAM as the best-equipped and making it hard to reason about the returns on investing resources into this system. Yes, participants can use the escape hatch out, but at a 10x disadvantage to ones that have enough RAM to keep going.

For the same reason, I'm unhappy with the alternate-block regular hash— how is such a system to be competitive with a 50% deadweight loss (on top of the other losses)?

Bleh.  I wrote several paragraphs of text with excited ideas about this and that— and also arguing that you didn't need the second hash function, because if you already had 51% computing power you'd be better off using it to run optimized versions of your own computing work than using it to trick the rest of the network into running inefficient versions of your work.

Which then made me realize that your system has a pretty debilitating "Optimization attack"— I'm sure this isn't news to you, hence the every-other-block approach... but I don't really think it helps much. I think your paper should spell this attack out clearly.

Say an attacker forms some A() which is just a trivial function, e.g. A(x,y)=x. He also forms an A()', say A(x,y)'= sha256^1000(x) * 0 + x, which is identical to A() in results but which is much more expensive to compute.  He announces A()' to the network, but detects it on his own systems and runs A() instead. He now has an enormous speed advantage over all other nodes.
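To make that concrete, here's a toy sketch in Python (the real A would be a chunk submitted to the network's Lua engine; hashlib here just stands in for the expensive busywork):

Code:
import hashlib

def announced_A(x: bytes, y: bytes) -> bytes:
    """The version the attacker publishes to the network: lots of chained
    SHA256 work whose result is then thrown away."""
    h = x
    for _ in range(1000):
        h = hashlib.sha256(h).digest()
    return x  # the expensive part contributes nothing to the output

def private_A(x: bytes, y: bytes) -> bytes:
    """What the attacker actually runs on his own hardware."""
    return x

# Identical results, wildly different cost -- so the attacker evaluates the
# announced function far faster than anyone running it honestly.
assert announced_A(b"seed", b"ignored") == private_A(b"seed", b"ignored")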

The alternative block idea prevents this from being trivially translated into an overpowering attack, though the security is still enormously reduced compared to blockchain cryptocurrencies. But I don't think thats the worse result.

The worst part of this is that even ignoring overpowering attacks, I believe it will be economically advantageous to perform this attack unless getting the network to use some A() always costs strictly more than the reward. This means that currency can only enter the system through execution of the dummy A that costs no one anything to run, which (assuming constant computing power) means another 2x overhead: for every unit of real work you do, you must have a unit of dummy work to create the coin to run it.

This is on top of the standing halving required to make overpowering even a little non-trivial... and the halving (at best!) from using a highly sandboxed execution engine (rather than optimized machine code).  As a result this system probably has a peak computing efficiency of 12.5%, while offering substantially less security than existing cryptocurrency in a number of regards (the >25% attack; the fact that the POW can be expensive to validate— burdening regular nodes, not just mining ones; and the fact that the POW might be insecure to validate, again burdening regular nodes and not just mining ones).  For people who don't need to store up computing for burst loads, this system could pretty much never be more cost-effective than simply computing their desired work directly.
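Spelling out where that 12.5% figure comes from:

Code:
  0.5  (dummy work needed to mint the coins that pay for real work)
x 0.5  (every second block spent on the standard hash)
x 0.5  (at best, the sandboxed software-f.p. engine vs. optimized native code)
-----
 0.125 => roughly 12.5% of the raw hardware ends up doing useful computation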

Have I gone off in space here?



In any case, on the chance that this isn't as useless as I now fear, before finalizing your system I'd like to encourage you to take the opportunity to pick up some protocol improvements which can't easily be implemented in Bitcoin. The anti-timewarping fix would be one obvious one; less trivially, you should probably change to Ed25519 from the secp256k1 curve. There are a number of proposed changes at https://en.bitcoin.it/wiki/Hardfork_Wishlist though many of them would be inapplicable to you or a bit too blue-sky.
Alex Coventry (OP)
Newbie
Activity: 19
Merit: 0
February 19, 2012, 07:40:01 PM
 #8

Thanks, gmaxwell.  This is exactly the kind of feedback I was hoping for.

Quote
I sure can't see how this can be used for MC sampling for Bayesian inference, since you actually need the whole chain data after the bootstrapping.
You're right, it could only be used to find approximate modes in the parameter space.  Actual inference would have to be a subsequent MCMC run sampling around these modes.  I mention this in footnote 3 on page 5, but obviously I need to clarify it.

Quote
I would really like enormous amounts of computation in short bursts.
At least with the current proposal, NooShares can only be used to purchase batch processing which will run several hours from now.

Quote
how is such a system to be competitive with a 50% deadweight loss (on top of the other losses)?
The idea is that there are a lot of fallow computational resources out there.  Even if you only get access to 1/8th of their potential computing power, it's a huge win over doing nothing with them.  But certainly a market for computational resources of the same size but with more efficient computation would have a huge advantage, so these inefficiencies are a weakness...

Quote
your system has a pretty debilitating "Optimization attack"— I'm sure this isn't news to you, hence the every-other-block approach... but I don't really think it helps much. I think your paper should spell this attack out clearly.
You're right, I should be more explicit in section 2.2.3 that massive speedups in computation would actually be easy to engineer.

Quote
I believe it will be economically advantageous to perform this attack unless getting the network to use some A() always costs strictly more than the reward.
Could you explain the economic advantages of this attack in more detail, please?  It is certainly economically deleterious for the blockchain as a whole, but it also seems deleterious for the attacker, in that with the current price/reward structure, they lose at least 5 NooShares per block (probably 10).

Code:
 -65 NS  to schedule the transaction
 +50 NS  for winning the block
  +5 NS  for reporting the best result
  +5 NS  for including the best result in the blockchain
 -------
  -5 NS  net per block
You could keep the last 5NS from them by requiring the report to occur in a block secured by the standard proof of work.  (I designed the pricing/reward structure specifically with this attack in mind, and should probably be explicit about that in section 2.4.)

Quote
For people who don't need to store up computing for burst loads, this system could pretty much never be more cost-effective than simply computing their desired work directly.
I don't think the economics are that clear, because this is a potential use for currently fallow resources.  It's not clear to me that the only participants would be people with distributed computing needs.  For instance, I have all the computrons I need at work, but I also have a machine at home running litecoin which I would be happy to turn over to this.  Obviously I'm biased, though.

Quote
Have I gone off in space here?
No, your feedback has been incredibly useful.  Even if the attack you describe is unsustainable, I hadn't previously thought of requiring the best-result reports to occur in the standard blocks, so it's been useful to discuss.  And if the NooShare idea is dead in the water, I want to know now, not after pouring hundreds of hours into implementing it, so I am grateful to learn of any possible flaws in it.  I'm also concerned about the potential usability/marketability of the system (see http://news.ycombinator.com/item?id=3609991), so I am glad to hear that you would be excited about using it.
gmaxwell
Staff
Legendary
Activity: 4270
Merit: 8805
February 20, 2012, 03:32:08 AM
Last edit: February 20, 2012, 07:31:38 AM by gmaxwell
 #9

Quote
Could you explain the economic advantages of this attack in more detail, please?  It is certainly economically deleterious for the blockchain as a whole, but it also seems deleterious for the attacker, in that with the current price/reward structure, they lose at least 5 NooShares per block (probably 10).

I'll try to find some time in the next couple days to elaborate on my thoughts... I'll also clean up the stuff I wrote before I realized the optimization attack would be an issue.

(it may also be better to describe it as running arbitrary stochastic algorithms — not just MCMC, because it can run things that people wouldn't generally think of as MCMC at all.)

Quote
Code:
 -65 NS  to schedule the transaction
 +50 NS  for winning the block
  +5 NS  for reporting the best result
  +5 NS  for including the best result in the blockchain
 -------
  -5 NS  net per block
You could keep the last 5NS from them by requiring the report to occur in a block secured by the standard proof of work.  (I designed the pricing/reward structure specifically with this attack in mind, and should probably be explicit about that in section 2.4.)

What made me sad about this is that it ultimately means the system can't subsist entirely on 'useful' work— that there will always be substantial busywork overhead, because useful work (potentially optimization-attack work) must always net destroy coins or it will be economically beneficial to produce such work in order to gather the resulting coins.  This may indeed still leave the system attractive vs. something like litecoin (where the reduced security model relative to bitcoin may be tolerable simply because of the different nature of use) but still ultimately limits its effectiveness over a more straightforward pay-for-computation model...

There are advantages over straight pay-for-computation, in that you've removed some of the risks (e.g. the risk that I'll pay for computation and they won't do it, the risk that I'll do computation and they won't pay, the risk that I ask for N units and they really do M), etc... though there are other ways to solve these problems.

In any case— even if your system goes bust as a blockchain technology— all the Lua-sandbox, best-solution-prize, turning-random-solutions-into-distinguished-points stuff could be applied to create an interesting distributed computation tool.

Then again, who knows: other than people paying for bitcoin mining, remote pay-for-idle-time computation has basically gone nowhere. I probably had UID 10 on CPUshare... but the usage and tools just didn't seem to materialize. I'm shocked that someone hasn't added password/WPA cracking kernels to the RPC bitcoin miners, since those workloads are also very amenable to distinguished-point POW and run great on GPUs... but it just hasn't happened.

... more after I've had a chance to think for a bit longer
Alex Coventry (OP)
Newbie
Activity: 19
Merit: 0
February 21, 2012, 01:27:21 AM
Last edit: February 21, 2012, 02:03:54 AM by Alex Coventry
 #10

Quote
I'll try to find some time in the next couple days to elaborate on my thoughts... I'll also clean up the stuff I wrote before I realized the optimization attack would be an issue.
Thanks, I'd appreciate that.

Quote
(it may also be better to describe it as running arbitrary stochastic algorithms...
Yes, I should change MCMC in the abstract to straight Monte Carlo.

Quote
This may indeed still leave the system attractive vs. something like litecoin but still ultimately limits its effectiveness over a more straightforward pay-for-computation model...
This gets into questions of marketability which I find really hard to think clearly about, but you're not the only person to express concern about the inefficiency compared to bare metal.  It probably needs to be addressed.

For the defense against the "optimization attack", the only speed tradeoff I can think of is centralization.  Full SolidCoin-style trusted-node verification every second block would work, at least for this aspect.  That was actually my first approach, but I was looking for a tradeoff which preserved decentralization.  Perhaps there is a way to federate the trusted nodes along the lines Ben Laurie proposed with his mintettes.  Centralization would simplify many aspects of the design, though.

For the floating-point consistency, it might be possible to make a software-f.p. "fuzzing library" which randomizes the least-significant bits from each f.p. operation across multiple runs, to check numerical stability.  Then algorithm submitters would be responsible for using the fuzzer to ensure that their numerical calculations are stable to, say, 20 significant bits, and the f.p. values used in the hash could be rounded accordingly.  All results reported to the network would still be checked using software f.p., but the bulk of the computation could be bare-metal.  To prevent a "pessimization attack" via posing a numerically unstable algorithm which causes most bare-metal results to fail the software-f.p. check, the default miner could compare results generated by the fuzzer library every hundred iterations or so, and get a prize for reporting an example of numerical instability to the network.  Such reports would trigger miners to fail over to the default hashing algorithm for that block.  But requiring this of submitters would be a significant usability setback.  (Edit: On the other hand, this will probably make William Kahan happy.)
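To illustrate what I mean by the fuzzer, here is a toy sketch in Python; the real thing would live inside the software-f.p. library, and the bit counts are just placeholders:

Code:
import math
import random
import struct

def fuzz_lsbs(x: float, n_bits: int = 3) -> float:
    """Flip up to n_bits of the least-significant mantissa bits, mimicking the
    rounding differences you'd see across hardware and compilers."""
    if x == 0.0 or not math.isfinite(x):
        return x
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    bits ^= random.getrandbits(n_bits)
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def round_sig_bits(x: float, sig_bits: int = 20) -> float:
    """Round to ~sig_bits significant bits, so results that agree to that
    precision hash identically."""
    if x == 0.0 or not math.isfinite(x):
        return x
    mantissa, exponent = math.frexp(x)   # x == mantissa * 2**exponent
    scaled = round(mantissa * (1 << sig_bits))
    return math.ldexp(scaled / (1 << sig_bits), exponent)

# A submitter would run their algorithm several times with fuzz_lsbs() applied
# to each f.p. result, and check that round_sig_bits() of the final outputs is
# identical across runs before scheduling the computation.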

Quote
You don't seem to have addressed resource exhaustion except for speed
I didn't mention this in the paper, but the plan there is to allocate a sizeable chunk of memory at the start of the run, and tell malloc to assign all memory from that chunk.  (I needed to think about this anyway, because an earlier iteration used only the seccomp sandbox, and that would not allow mmap.)
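(For illustration only: the same goal, a hard cap on memory, can also be expressed at the process level with an OS rlimit.  This is a different mechanism from the pre-allocated malloc chunk described above, and the 512 MB figure is just a placeholder.)

Code:
import resource

def cap_memory(max_bytes: int = 512 * 1024 * 1024) -> None:
    """Hard-cap the worker's address space so runaway allocations fail
    instead of exhausting the host."""
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))

# The pre-allocated-chunk approach goes further: grab the memory up front and
# point malloc at it, which also works under seccomp, where further mmap/brk
# calls would be disallowed.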

Quote
... more after I've had a chance to think for a bit longer
Looking forward to it!
k9quaint
Legendary
Activity: 1190
Merit: 1000
February 21, 2012, 07:44:10 PM
 #11


Quote
For the defense against the "optimization attack", the only speed tradeoff I can think of is centralization.  Full SolidCoin-style trusted-node verification every second block would work, at least for this aspect.  That was actually my first approach, but I was looking for a tradeoff which preserved decentralization.  Perhaps there is a way to federate the trusted nodes along the lines Ben Laurie proposed with his mintettes.  Centralization would simplify many aspects of the design, though.


Using the mintettes doesn't solve the 51% attack; it just moves it from the pool of all participants to a smaller, hand-selected pool of participants. It still requires that a majority of the mintettes be honest or the protocol breaks. The very nature of distributed rulesets makes them vulnerable to collusion, because collusion breaks the underlying assumption that the participants are not acting as a monolithic entity against the protocol.

I also believe the "51% attack" is misnamed. If the majority of the participants in Bitcoin agree to a protocol change, it changes. That is the process for adding positive changes to the system, but it is also a process from which negative changes can result (double spends etc). Trying to enforce that only positive changes can result from 51% of the hashpower is like trying to legislate that people act responsibly. I don't believe it can be done with any efficacy.

I would watch Solidcoin carefully over the next few weeks. You may gain some insight as to what goes wrong when a trusted node system goes haywire.

Alex Coventry (OP)
Newbie
Activity: 19
Merit: 0
February 21, 2012, 10:40:49 PM
 #12

Yes, such a scheme unavoidably requires trust in the mintettes.  Laurie makes this clear in his paper.

I'm looking for a centralized authentication system with flexible federation/syndication, because I think it should be possible to arrange things so that the authenticators could only cheat effectively by colluding on a large scale, and such an arrangement would increase the system's trustworthiness and robustness.  I don't have a clear picture of it, yet, but it may not need a secondary blockchain like you're assuming.
NASDAQEnema
Full Member
Activity: 182
Merit: 100
February 23, 2012, 06:38:54 PM
 #13

The optimization attack can be avoided.

Require 1000 samples of the work.
Reward those contributions within a 30% variance of speed.
Use the fastest speed as the reference for what the work is worth.

In bitcoin the longest chain is the accepted true chain.
In Noocoin the fastest work should define the worth of the work.
This will also behave as a fast adapting difficulty parameter.

Result: No trusted nodes needed.

Alex Coventry (OP)
Newbie
Activity: 19
Merit: 0
February 23, 2012, 06:53:56 PM
 #14

Quote
Require 1000 samples of the work.
Reward those contributions within a 30% variance of speed.
Use the fastest speed as the reference for what the work is worth.

Thanks for the suggestion.  In the case of gmaxwell's function, when the network checks the result it's still going to run a huge number of SHA256 calculations that the attacker doesn't need to compute, so as far as the network is concerned any contribution from the attacker is going to look bona fide.  Of course for gmaxwell's function the optimization is so easy there are probably compilers which would find it, but it's also easy to make much more cryptic (even cryptographically unidentifiable) optimizations.

It is possible to force the attacker to do the calculation as advertised by incorporating oblivious hashing into the hash function, but that would slow things down even more than devoting every second block to a default hash function.
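Very roughly, the oblivious-hashing idea as it would apply to gmaxwell's example (a toy sketch in Python; real oblivious hashing would inject such updates into arbitrary submitted code, not just this one loop):

Code:
import hashlib

def chained_sha_with_trace(x: bytes, rounds: int = 1000):
    """Fold every intermediate value into a running 'trace' digest, so the
    reported result attests that the work was actually performed."""
    h = x
    trace = hashlib.sha256()
    for _ in range(rounds):
        h = hashlib.sha256(h).digest()
        trace.update(h)       # skipping or shortcutting rounds changes the trace
    return x, trace.digest()  # the attacker can no longer report x alone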