Uptrenda
Member
Offline
Activity: 114
Merit: 16
December 04, 2016, 08:58:10 AM Last edit: December 05, 2016, 06:41:35 AM by Uptrenda
I'm really getting sick of these so-called "white papers" that don't describe any of the technical details of how the system actually solves its stated problem. If Bitcoin had been written this way it would have been ignored by just about everyone, but I'll give you the benefit of the doubt here and assume you've thought this through.
So my question is: how do you ensure the results from the outsourced computations will be correct? This is not explained anywhere in the paper, and it's the most important part of this idea. I mean, it's not like people haven't had this idea before. I have myself, and researchers have used such clusters on lengthy problems for years.
I know you mentioned Filecoin in your paper, but the reason we can so easily do decentralized cloud storage, compared to decentralized computation, is that the trust problem of proving that a node is still storing content (and proving that retrieved content is what we originally stored) is easily solved by basic cryptography. With outsourced computation there is no simple way to validate that the results are correct without wasting as many cycles as, or more than, the original computation.
Am I to assume that this network is going to be based on a reputation system? I suppose you could also outsource the same computation to multiple nodes and take the majority consensus, but I'm not sure the economics will work in your favor, since paying to repeat a computation multiple times may work out to be more costly than just firing up an Amazon cluster.
You also can't rely on a pseudonymous reputation system instead, since a node could poison the work for an entire computation and there would be no easy way to detect it (outsourcing random test problems won't be enough, since someone handing out faulty work can selectively alter only some results, which is a similar problem to the one online marketplaces face).
How do you solve these problems? Or do you envision that a reputation system will be enough?
Edit: This didn't occur to me before, but if you were to focus exclusively on distributed graphics rendering, it would be easy to see which chunks weren't rendered correctly, so you could re-outsource them to another node. The problems I listed still apply to general-purpose computation, though.
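To put the redundancy economics in concrete terms, here's a minimal sketch (the function names and numbers are mine, purely illustrative, nothing from the paper) of outsourcing the same chunk to several nodes and accepting the strict-majority answer. Note that r-fold replication multiplies the buyer's cost by r, which is exactly my economic objection:
```python
from collections import Counter

def majority_result(results):
    """Accept the answer returned by a strict majority of providers.

    results: list of (provider_id, result_hash) pairs for one chunk.
    Returns the winning hash, or None if no strict majority exists.
    """
    counts = Counter(result_hash for _, result_hash in results)
    winner, votes = counts.most_common(1)[0]
    return winner if votes > len(results) / 2 else None

def redundancy_cost(price_per_run, replication):
    # r-fold replication means paying for the same work r times over.
    return price_per_run * replication

# Example: three providers, one of them cheating.
results = [("node_a", "0xabc"), ("node_b", "0xabc"), ("node_c", "0xdef")]
print(majority_result(results))    # 0xabc -- the honest majority wins
print(redundancy_cost(1.00, 3))    # 3.0  -- triple the single-run price
```
So unless the per-cycle price here is less than a third of what Amazon charges, triple redundancy already loses on cost.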
skrtel37
December 05, 2016, 11:51:10 AM
Quote: I believe after 4-6 months, once development starts happening and we get some updates, the price will touch 5k sat.
If I had a satoshi for every time I've read this kind of statement, I could have bought a fucking tropical island for myself...
Calabi–Yau Manifold
December 05, 2016, 02:25:20 PM
Heh, let them FUD Golem and "make" tiny BTC from bounties for Ark, dice sites, and more shady pump-scam projects, lol.
skrtel37
December 06, 2016, 11:14:34 AM
Quote: Heh, let them FUD Golem and "make" tiny BTC from bounties for Ark, dice sites, and more shady pump-scam projects, lol.
Just wondering: why do you think Golem will be any different from Tauchain, Elastic, and earlier projects that haven't achieved anything yet?
Calabi–Yau Manifold
December 06, 2016, 04:06:07 PM
Quote: Heh, let them FUD Golem [...] Just wondering: why do you think Golem will be any different from Tauchain, Elastic, and earlier projects that haven't achieved anything yet?
You quoted the wrong person; those aren't my words.
mirny
Legendary
Offline
Activity: 1108
Merit: 1005
December 08, 2016, 10:41:18 AM
So, this project should be fully operational in four years?
Divorcion
December 08, 2016, 10:44:15 AM
Quote: So, this project should be fully operational in four years?
What, in four years? Really? I must have missed something here.
bitcoinsforall
December 08, 2016, 02:39:26 PM
In 2+ years it will have great functionality, but that much time is very long in the crypto world.
2012
Legendary
Offline
Activity: 1526
Merit: 1003
December 08, 2016, 02:45:35 PM
Quote: In 2+ years it will have great functionality, but that much time is very long in the crypto world.
Even though the wait is long, it will be worth it once the project is fully functional.
Razaberry
December 09, 2016, 07:19:10 AM
Quote from: Uptrenda — So my question is: how do you ensure the results from the outsourced computations will be correct? [...] How do you solve these problems? Or do you envision that a reputation system will be enough?
From the FAQ page: there will be different methods depending on the task type. A user who adds a new task can implement a new verification method suited to that problem. Possible solutions may involve:
- simple correctness checking of the result, e.g. proof-of-work,
- redundant computation, i.e. a few providers compute the same part of the task and their results are compared,
- computing a small, random part of the task and comparing it with the result sent by the provider, e.g. comparing the color of a few random pixels in a rendered picture,
- analysis of output logs.
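To illustrate the random-pixel method from that list, here's a minimal sketch (the function names and the frame representation are my own assumptions for this post, not Golem's actual API): the requester re-renders a few secretly chosen pixels locally and compares them against the frame the provider delivered.
```python
import random

def spot_check(render_pixel, provider_frame, width, height, samples=16, seed=None):
    """Verify a provider's rendered frame by re-rendering random pixels locally.

    render_pixel(x, y) -> (r, g, b): the requester's own trusted renderer.
    provider_frame: dict mapping (x, y) -> (r, g, b), the provider's output.
    Returns True only if every sampled pixel matches exactly.
    """
    rng = random.Random(seed)
    for _ in range(samples):
        x, y = rng.randrange(width), rng.randrange(height)
        if provider_frame.get((x, y)) != render_pixel(x, y):
            return False  # provider's pixel disagrees with the local re-render
    return True
```
The sampled coordinates have to stay secret until the provider has committed to the full frame; otherwise a cheater could render only the pixels it expects to be checked.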
someone111
December 09, 2016, 03:26:35 PM
Why are people selling below the ICO price on Liqui?
golfhuso
December 09, 2016, 03:38:43 PM
Can we deposit GNT at Liqui?
Calabi–Yau Manifold
December 09, 2016, 08:06:15 PM
Quote: Why are people selling below the ICO price on Liqui?
The reason is the same as for people who, for example, bought ETH at 0.015 and sold at 0.008: fear, lack of skill, "I have no idea what I'm doing", and other noobish symptoms.
logictense
December 09, 2016, 09:02:48 PM Last edit: December 10, 2016, 02:01:43 AM by logictense
Quote: What's the difference between the Elastic project and the Golem project? Both describe a supercomputer as the project's basic objective.
In short, the first one is crap and suffers from the greatest disease all the other alts are exposed to: limited utility and lack of funding. Golem is a long-term play that puts the Ethereum blockchain to good use.
royalfestus
December 11, 2016, 05:42:50 AM Last edit: December 11, 2016, 01:42:52 PM by royalfestus
Quote: In short, the first one is crap and suffers from the greatest disease all the other alts are exposed to: limited utility and lack of funding. Golem is a long-term play that puts the Ethereum blockchain to good use.
Some describe the Elastic computer as a calculator, a simple computer. It is sad that the project hasn't shown any relevance or progress lately. Donating during an ICO can be worrying when the idea and the people behind it are whack: there is no company behind it, and they don't seem to care whether any support is given or any transparency is shown. I just hope Golem will learn from previous mistakes and strive to get the best out of the project.
cyrixcer
December 11, 2016, 06:46:45 AM
Quote: In short, the first one is crap [...] Golem is a long-term play that puts the Ethereum blockchain to good use.
Exactly. Golem is great because it is based on the Ethereum platform; Golem will succeed.
Razaberry
December 11, 2016, 12:47:21 PM
Quote: I just hope Golem will learn from previous mistakes and strive to get the best out of the project.
We're definitely trying to make sure we learn from the mistakes of past projects that walked a similar path. Any lessons in particular you'd like to point out?
_sunshine_
December 12, 2016, 11:54:14 AM
Hi, I read that BOINC is one of your inspirations. How is Golem different from BOINC, and why will Golem be better?
tomkat
December 12, 2016, 12:17:56 PM
Quote from: Razaberry — From the FAQ page: there will be different methods depending on the task type [...] proof-of-work, redundant computation, spot-checking a small random part of the task, analysis of output logs.
- Redundant computation: isn't it just a pure waste of energy to compute the same thing several times? Also: how many times should the computation be performed? Who gets paid in this scenario: all providers equally, the first/last provider, someone else? And after the comparison is performed, how is the correct result determined, i.e., which result is assumed correct?
- Comparing the color of a few random pixels in a rendered picture: if a small part is OK, are you going to assume the whole image is correct? Come on, it's a joke, right? What will stop malicious users from sending garbage data once that first sample is assumed to be correct?
- Analysis of output logs: what does this actually involve? Which logs are you going to analyze?
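My worry about the pixel sampling has a simple quantitative side, by the way: a provider who corrupts a fraction f of the pixels slips past k independent uniform samples with probability (1 - f)^k, so a handful of checks catches wholesale garbage but is nearly blind to targeted tampering. A quick back-of-the-envelope script (mine, not anything from the Golem docs):
```python
def evasion_probability(corrupted_fraction, samples):
    # Chance that every one of `samples` uniform random pixel checks
    # lands outside the corrupted region.
    return (1.0 - corrupted_fraction) ** samples

# Corrupting half the frame is caught almost surely with 16 checks...
print(evasion_probability(0.5, 16))    # ~1.5e-05
# ...but corrupting 0.1% of the pixels usually goes unnoticed.
print(evasion_probability(0.001, 16))  # ~0.984
```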