Bitcoin Forum
Author Topic: Could the Bitcoin business model be expanded to super-computing?  (Read 1287 times)
-Redacted- (OP)
Hero Member
*****
Activity: 574
Merit: 501

September 19, 2013, 12:43:17 AM
Last edit: September 19, 2013, 12:56:23 AM by -Redacted-
 #1

Supercomputers are extremely expensive, and the computing power of the Bitcoin network far exceeds the capabilities of any supercomputer, at least as far as computing SHA256 hashes goes. I'm not suggesting that Bitcoin ASICs are useful for general-purpose computing - they aren't - but would the Bitcoin "business model" work to build a distributed supercomputer? There are dozens, possibly hundreds, of distributed applications already - SETI@home, protein folding, calculating prime numbers, and so on - but these are all non-paying applications.

Would it be possible to build a business around having people install some piece of hardware (or use their GPU cards), connect to a central server, and return answers to bits and pieces of a distributed computation, paying some set fee per computation block? The business would rent out some measure of supercomputing capacity by the hour or by calculation rate, and pay the participating nodes a per-computation, per-difficulty rate. It might be a business just waiting to happen, given the cost of supercomputers and the cost of an hour of time on them, not to mention the power, cooling, staffing, and maintenance contract costs...
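As a minimal sketch of the model being proposed here - the server class, the fee constant, and the placeholder verifier are all invented for illustration, not an existing service:

```python
# Hypothetical sketch of the proposed model: a central server issues work
# units, checks returned answers, and credits a flat fee per verified unit.
# All names and numbers are illustrative.
import hashlib
import uuid

FEE_PER_UNIT = 0.0001  # assumed payout per verified work unit

class WorkServer:
    def __init__(self):
        self.pending = {}    # unit_id -> input data still outstanding
        self.balances = {}   # worker_id -> accrued fees

    def issue_unit(self, data: bytes) -> str:
        unit_id = uuid.uuid4().hex       # fresh id per issued unit
        self.pending[unit_id] = data
        return unit_id

    def submit(self, worker_id: str, unit_id: str, answer: bytes) -> bool:
        data = self.pending.pop(unit_id, None)
        if data is None:
            return False                 # unknown, stale, or already-paid unit
        if not self.verify(data, answer):
            return False                 # bogus answer, no payout
        self.balances[worker_id] = self.balances.get(worker_id, 0.0) + FEE_PER_UNIT
        return True

    @staticmethod
    def verify(data: bytes, answer: bytes) -> bool:
        # Placeholder: here the "computation" is just a hash. A real service
        # would need a cheap, domain-specific check - which is the hard part.
        return hashlib.sha256(data).digest() == answer
```

The viability question lives entirely in verify(): for SHA256 it is trivial, which is what makes Bitcoin's version work. Whether it can be made cheap for general computations is what the rest of the thread argues about.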
Ho Lee Fuk
Newbie
*
Activity: 13
Merit: 0

September 19, 2013, 12:53:39 AM
 #2

A brute-force pool with GPUs.
console_cowboy
Member
**
Activity: 98
Merit: 10

September 19, 2013, 04:21:40 AM
 #3

Quote from: -Redacted- on September 19, 2013, 12:43:17 AM
Supercomputers are extremely expensive, and the computing power of the Bitcoin network far exceeds the capabilities of any supercomputer, at least as far as computing SHA256 hashes goes. I'm not suggesting that Bitcoin ASICs are useful for general-purpose computing - they aren't - but would the Bitcoin "business model" work to build a distributed supercomputer? There are dozens, possibly hundreds, of distributed applications already - SETI@home, protein folding, calculating prime numbers, and so on - but these are all non-paying applications.

Would it be possible to build a business around having people install some piece of hardware (or use their GPU cards), connect to a central server, and return answers to bits and pieces of a distributed computation, paying some set fee per computation block? The business would rent out some measure of supercomputing capacity by the hour or by calculation rate, and pay the participating nodes a per-computation, per-difficulty rate. It might be a business just waiting to happen, given the cost of supercomputers and the cost of an hour of time on them, not to mention the power, cooling, staffing, and maintenance contract costs...


One of the main advantages of HPC is the high-speed interconnect that links all parts of the supercomputer. The reason Bitcoin is just as fast when distributed is the massively parallel nature of its workload: the algorithm requires no high-speed communication between nodes. Most HPC applications simply will not work on a distributed network.
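To put rough numbers on that point - an illustrative model with assumed figures, not a benchmark:

```python
# Back-of-envelope model (assumed, illustrative numbers): a tightly coupled
# job that must exchange data between nodes at every step, run over an HPC
# interconnect versus ordinary internet links.
def wall_time(steps: int, compute_s: float, latency_s: float) -> float:
    # each step: compute, then wait one network round trip for neighbor data
    return steps * (compute_s + latency_s)

steps = 1_000_000          # iteration count typical of an iterative solver
compute = 1e-4             # 100 microseconds of compute per step

infiniband = wall_time(steps, compute, 1e-6)   # ~1 us interconnect latency
internet   = wall_time(steps, compute, 50e-3)  # ~50 ms internet round trip

print(f"interconnect: {infiniband / 3600:.2f} h")  # about 0.03 h
print(f"internet:     {internet / 3600:.1f} h")    # about 13.9 h
```

Same compute, roughly 500x the wall time: the latency term swamps everything once communication happens every step.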
brogramer
Newbie
*
Activity: 15
Merit: 0

September 20, 2013, 12:18:47 AM
 #4

seti@home & folding@home both do this now.  Although they aren't paying people out.  I doubt they could afford to do that.  Tho if you could get an IBM type setup that allowed distributed computing with gpus/processes that might be worthwhile. 

I for one have a lot of horse power in my main rig doing practically nothing all day.  It's a big $ vs earlier hardware death issue tho :/

@console_cowboy I don't think that's necessarily the case - massive computer clustered are built for parallelization- sure, you get internet delay and basically have to do about 20% more work to verify that people aren't giving you bogus answers but when you go from a 200 core machine to a 200,000 everything just becomes moot. 
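A hedged sketch of what that ~20% verification overhead could look like - a generic spot-audit scheme, invented for illustration, not anything the posters specify:

```python
# Spot-audit sketch: re-run a random ~20% of work units on a second worker
# and reject on disagreement. Generic illustration; needs >= 2 workers.
import random

AUDIT_RATE = 0.2  # fraction of units duplicated, matching the 20% above

def dispatch(unit, workers, run):
    primary = random.choice(workers)
    result = run(primary, unit)
    if random.random() < AUDIT_RATE:
        auditor = random.choice([w for w in workers if w != primary])
        if run(auditor, unit) != result:                   # answers disagree:
            raise ValueError(f"bad result for unit {unit!r}")  # pay neither
    return result
```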

bcp19
Hero Member
*****
Activity: 532
Merit: 500

September 20, 2013, 12:32:02 AM
 #5

The big problem here is that those projects already have had a lot of work done, and it'd be near impossible to prevent old work from being sent through as new work just completed.  You'd almost have to come up with a new project from scratch to prevent people trying to cash in without doing anything.

I do not suffer fools gladly... "Captain!  We're surrounded!"
I embrace my inner Kool-Aid.
polarhei
Sr. Member
****
Activity: 462
Merit: 250

Firing it up

September 20, 2013, 05:15:20 AM
 #6

This was already supercomputing before the GPU age: people used pools to share the initial cost while still getting the power they wanted, and the individual contributions add up like signals - which is the basic idea of supercomputing.

So even before the ASIC era, the mining model was already a form of supercomputing.
-Redacted- (OP)
Hero Member
*****
Activity: 574
Merit: 501

September 20, 2013, 05:18:48 AM
 #7

Quote from: bcp19 on September 20, 2013, 12:32:02 AM
The big problem here is that those projects already have had a lot of work done, and it'd be near impossible to prevent old work from being sent through as new work just completed.  You'd almost have to come up with a new project from scratch to prevent people trying to cash in without doing anything.

That's pretty much what the bitcoin blocks do.  It's not difficult to distinguish old and/or duplicate work that gets submitted, which is part of what I'm calling the "bitcoin model".
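As a minimal illustration of that stale-and-duplicate filtering - a simplified, invented scheme, not Bitcoin's actual block format:

```python
# Toy version of the "bitcoin model" filter: each job commits to the current
# server state, so stale work and resubmissions are cheap to detect.
# Real Bitcoin does this with block headers that commit to the previous
# block's hash; the field names here are made up.
import hashlib

current_tip = hashlib.sha256(b"latest accepted result").hexdigest()
seen = set()   # submission ids already credited

def accept(job_tip: str, submission_id: str) -> bool:
    if job_tip != current_tip:   # built on an outdated template: old work
        return False
    if submission_id in seen:    # already paid once: duplicate work
        return False
    seen.add(submission_id)
    return True
```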
Trongersoll
Hero Member
*****
Activity: 490
Merit: 501

September 20, 2013, 03:15:45 PM
 #8

The better way to approach this is to look for a project that would be profitable provided it had vast computing resources, and then determine whether distributed computing would be a way to provide those resources. You also need to determine whether a supercomputer is required for your task; in some cases a Beowulf cluster may be sufficient.

I'm sure you could find people willing to sell you their unused computing power if you had a use for it.
-Redacted- (OP)
Hero Member
*****
Activity: 574
Merit: 501

September 20, 2013, 03:25:51 PM
 #9

You got it backwards - I want to sell my unused CPU/GPU cycles to someone else :)
console_cowboy
Member
**
Activity: 98
Merit: 10

September 20, 2013, 05:19:07 PM
 #10

Quote from: console_cowboy on September 19, 2013, 04:21:40 AM
One of the main advantages of HPC is the high-speed interconnect that links all parts of the supercomputer. The reason Bitcoin is just as fast when distributed is the massively parallel nature of its workload: the algorithm requires no high-speed communication between nodes. Most HPC applications simply will not work on a distributed network.

You guys really should read up on how HPC works. Nobody has yet mentioned that the lack of a high-speed interconnect really screws up the main use of HPC. High Performance Computing, or supercomputing, really requires a high-speed interconnect to allow communication between nodes. On a distributed setup such as the Bitcoin network, communication between nodes would be insanely slow and laggy; even the fastest internet connections in the world would cause issues here.

Now, what a distributed computer network would be good at is called grid computing. The Bitcoin network is in many ways a massive grid computer. Here's the Wikipedia article: http://en.wikipedia.org/wiki/Grid_computing. It is a model designed for massively parallel systems, such as a distributed computer. In fact, SETI@home and Folding@home are great examples of grid computing.

Source: I'm an HPC engineer.

EDIT:

So in response: no. The Bitcoin business model could not be expanded to supercomputing, due to the nature of supercomputing itself. No software traditionally run on supercomputers would run on your distributed computer, so there is no market for such a thing. There are only a few exceptions, such as SETI@home and Folding@home, and those are done to save money by using "free" resources.
-Redacted- (OP)
Hero Member
*****
Activity: 574
Merit: 501

September 20, 2013, 05:26:44 PM
 #11

Yes, it's grid.  I've been in the business for many years.  

Many of the problems being run on dedicated supercomputers could be run on a distributed grid - you only need a high-speed interconnect when you have to move massive amounts of data between nodes. Many simulations use as many cores as they can get allocated, for as many hours as they can get them, because the calculations are relatively distinct from one another, require small packets of data in and out, and can be processed in parallel, combined, and sent out as a new round of calculation.

They typically aren't moving around "big data" between processing nodes, although they may be generating a bit of it as output.   Granted, this is not always the case...  
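That fan-out/combine/fan-out-again loop, as a toy sketch - the kernel and numbers are stand-ins, not any particular simulation:

```python
# Toy fan-out/combine loop for the workflow described above: small inputs
# go out to workers, small partial results come back, and the combination
# seeds the next round. Everything here is illustrative.
from concurrent.futures import ProcessPoolExecutor

def simulate_chunk(x: float) -> float:
    return x * x   # stand-in for a real simulation kernel

def run_rounds(state: list[float], rounds: int) -> list[float]:
    with ProcessPoolExecutor() as pool:
        for _ in range(rounds):
            partials = list(pool.map(simulate_chunk, state))  # fan out
            total = sum(partials)                             # combine
            state = [p / total for p in partials]             # next round
    return state

if __name__ == "__main__":   # guard required for process pools on some OSes
    print(run_rounds([1.0, 2.0, 3.0], rounds=3))
```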

EDIT: There are more than just a few examples - there are literally hundreds of grid applications, run to save money by the projects behind them, with time and resources donated by the volunteers running them. But suppose an app came along that DID pay some trivial amount of money per calculation - like Bitcoin does - do you not think significant spare computing capacity would tend to gravitate to that app?
console_cowboy
Member
**
Activity: 98
Merit: 10

September 20, 2013, 07:35:44 PM
 #12

Quote from: -Redacted- on September 20, 2013, 05:26:44 PM
Yes, it's grid.  I've been in the business for many years.

Many of the problems being run on dedicated supercomputers could be run on a distributed grid - you only need a high-speed interconnect when you have to move massive amounts of data between nodes. Many simulations use as many cores as they can get allocated, for as many hours as they can get them, because the calculations are relatively distinct from one another, require small packets of data in and out, and can be processed in parallel, combined, and sent out as a new round of calculation.

They typically aren't moving around "big data" between processing nodes, although they may be generating a bit of it as output.   Granted, this is not always the case...

EDIT: There are more than just a few examples - there are literally hundreds of grid applications, run to save money by the projects behind them, with time and resources donated by the volunteers running them. But suppose an app came along that DID pay some trivial amount of money per calculation - like Bitcoin does - do you not think significant spare computing capacity would tend to gravitate to that app?

I 100% agree that a fair number of applications running on HPC systems could very easily be run on a grid, but we tend to regard those as apps that shouldn't really be running on an HPC system in the first place. Users running grid-style applications on a true high-speed HPC system are really wasting their funding on time on such a machine.

Now, where I disagree is on getting the data to the nodes. This would end up taking a fairly large amount of time on most internet connections, unless each machine is working on a very small data subset - and if the data subsets are that small, I would imagine the network would be inefficient for even most grid computing tasks. Even grid computers tend to have a nice data backbone for loading up the RAM on each node to run computations on. This means that data-center-based clouds, such as the Amazon cloud, would be a much more viable candidate for this sort of computing: they have the network connections capable of receiving the datasets in a reasonable amount of time. I just can't see where even paying a trifling amount would be more cost-effective than buying time on a high-performance (or even low-performance) cloud. The I/O is a major PITA.
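Rough transfer-time arithmetic behind that objection - the dataset size and line speeds are assumptions for illustration:

```python
# Back-of-envelope: hours to ship a dataset to a worker at a given line speed.
def transfer_hours(gigabytes: float, megabits_per_s: float) -> float:
    return gigabytes * 8 * 1000 / megabits_per_s / 3600

dataset_gb = 100  # assumed per-node working set
print(f"10 Mbit/s home line: {transfer_hours(dataset_gb, 10):.1f} h")      # ~22.2 h
print(f"10 Gbit/s DC link:   {transfer_hours(dataset_gb, 10_000):.3f} h")  # ~0.022 h
```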

Now, if you decide to produce and store all data locally, you still have to keep track of the metadata so that you can stitch it back together. This is a serious issue in and of itself; in traditional HPC, data is just stored to disk. You will have to verify the data and restitch it, which could require its own large HPC just for that - the data has to be collected somewhere. Check out the research on metadata trees; there are people making careers doing just this. Look at the Folding@home methods in particular. It's a huge issue for your model.
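For a concrete (invented) picture of that bookkeeping, a per-chunk manifest might record placement, stitch order, and an integrity hash:

```python
# Invented manifest sketch for stitching distributed results back together:
# each chunk records its order, where it lives, and a hash so it can be
# verified on collection. Not any real project's format.
import hashlib
import json

def manifest_entry(index: int, worker: str, result: bytes) -> dict:
    return {
        "index": index,                                # stitch order
        "worker": worker,                              # where the chunk lives
        "sha256": hashlib.sha256(result).hexdigest(),  # integrity check
        "size": len(result),
    }

entries = [manifest_entry(i, f"node-{i % 3}", bytes([i]) * 10) for i in range(5)]
print(json.dumps(entries, indent=2))
```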

What you want already exists - it's called Condor. It's very slow and has been around for quite some time. If you want to test your applications on something similar to what you propose, go test some on Condor and see how they run. It's a major PITA though, due to the amount of non-traditional checkpointing that must be done.
Takeshi_Kovacs
Member
**
Activity: 65
Merit: 10

September 21, 2013, 08:27:31 AM
 #13

Nobody here has touched on the real reason why this could not work.

Honesty.

Crypto coins work because a claimed block can be verified far more quickly than it can be found. That makes it very hard to cheat the system. People try all the time, but the most obvious way to cheat is to convince a pool that you are checking more potential blocks than you really are, and the folks who run the pools are in a constant battle to catch those cheats.
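That verify-fast/find-slow asymmetry in miniature - a simplified toy, not Bitcoin's actual header format or difficulty:

```python
# Finding a nonce whose hash falls below a target takes many attempts, but
# checking a claimed nonce is a single hash.
import hashlib

TARGET = 2**240   # very easy difficulty so the demo finishes instantly

def pow_hash(data: bytes, nonce: int) -> int:
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big")

def find_nonce(data: bytes) -> int:
    nonce = 0
    while pow_hash(data, nonce) >= TARGET:   # expensive: ~65k tries here,
        nonce += 1                           # astronomically more for real
    return nonce

def verify(data: bytes, nonce: int) -> bool:
    return pow_hash(data, nonce) < TARGET    # cheap: exactly one hash

n = find_nonce(b"block template")
print(n, verify(b"block template", n))
```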

SETI@home and Folding@home work because there is no reward for running those projects other than the warm glow of helping out with something you feel is important. (Of course, if you really believe that the lizard alliance rules mankind, you will find it easy to believe that our scaly overlords are already making sure SETI never finds evidence of their existence.)

If somebody offered to pay third parties actual cash for running some numeric simulation on their GPU farm, then the ratbags would get busy trying to cheat the system, and verifying the result would cost as much as calculating it in the first place. It would be worse, though. Suppose an F1 racing team offered to pay its fans to help with the CFD calculations around the aerodynamic properties of their new front wing. They might get swamped by 'help' from computers run by a rival team, who would be happy to calculate the correct numbers and then give slightly wrong but still plausible answers.

Of course, you could submit the same computational problem to multiple users, collect all the answers, and compare them. Then you only have to worry about a 51% attack.
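A sketch of that redundancy-plus-voting idea, invented for illustration:

```python
# Accept a work unit's answer only when a strict majority of the replicas
# agree. As the post says, this fails once a colluding group controls a
# majority of the replicas - the distributed-computing analogue of 51%.
from collections import Counter

def majority_result(answers: list[bytes]) -> bytes | None:
    (value, count), = Counter(answers).most_common(1)
    return value if count > len(answers) / 2 else None  # None: no majority

print(majority_result([b"42", b"42", b"41"]))  # b'42'
print(majority_result([b"1", b"2", b"3"]))     # None, all disagree
```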
