Topic: [PRE-ANN][ZEN][Pre-sale] Zennet: Decentralized Supercomputer - Official Thread
ohad (Hero Member) | http://idni.org
September 20, 2014, 01:24:29 AM  #121

Quote
My last reply to this still stands, Gauss was "wrong" about our model since he wasn't considering intentionally introduced fault.  Gauss-Markov (which I love, being an ML guy) ends up being doubly wrong, because the Markov side of things assumes the errors are uncorrelated.  An attacker introducing error may certainly introduce correlated error, and may even have reason to explicitly do so!  Wink
man stop mixing miscalculation with procfs spoofing
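To make the quoted Gauss-Markov point concrete, here is a minimal NumPy sketch (all numbers invented for illustration) of how adversarially correlated error biases a least-squares fit, while honest uncorrelated noise does not:

Code:
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
A = rng.normal(size=(n, k))          # measurement matrix (e.g. benchmark msmts)
w_true = np.array([2.0, -1.0, 0.5])  # true decomposition weights
y = A @ w_true

# Honest noise: zero-mean and uncorrelated with A, so Gauss-Markov holds
# and the least-squares estimate stays unbiased.
w_honest = np.linalg.lstsq(A, y + rng.normal(0, 0.1, size=n), rcond=None)[0]

# Adversarial noise: deliberately correlated with the first regressor,
# e.g. a spoofer who always inflates one counter in proportion to load.
w_attack = np.linalg.lstsq(A, y + 0.5 * A[:, 0], rcond=None)[0]

print(w_honest)  # close to w_true
print(w_attack)  # first weight systematically shifted by ~0.5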

HunterMinerCrafter (Sr. Member)
September 20, 2014, 01:30:54 AM  #122

i have a program which i want to decompose into a linear combination of benchmarks w.r.t. procfs measurements (abbrev. msmts).

i.e., i want to describe the vector x of procfs msmts of a given program as a linear combination of procfs msmts vectors of another n programs.

assume the dimension of the vectors is k.

all i need to do is have n >= k and have the matrix whose n rows are our vectors be of rank k.
i will get that almost surely if i just pick programs "different enough"; and even if they're just "a bit different", i may increase n.

I still don't see how the n programs have any assumable relation to the program that I'm actually trying to get quoted.  How is the behavior of any of the runs of the n programs indicative of anything about my future run of my job?  How does what is actually being priced serve to price the thing that I want priced?
HunterMinerCrafter (Sr. Member)
September 20, 2014, 01:32:50 AM  #123

very easy: each client picks any amount of ID POW to require from its parties.

Eeep, it just keeps getting more scary!  Cheesy

Who decides how much work is sufficient?  How does any given publisher have any indication about any given provider's ability to perform the identity work?

There kind of has to be some continual consensus on a difficulty here, doesn't there?
HunterMinerCrafter (Sr. Member)
September 20, 2014, 01:35:22 AM  #124

it's not about "one size fits all". it's about describing how much has to be invested in a job. like saying "each task of mine is about as much as 1000 fft of random numbers", or a linear combination of many such tasks.
again, such task decomposition is not mandatory; only the ongoing procfs msmts are.
another crucial point is that we don't have to have accurate benchmarks at all. just many of them, as different as possible.

 Huh  If benchmarks don't have to be accurate then why do them at all?

Quote
it is *able* to be customized and coded, hence flexible.

By this measure all software is flexible.  Of course you must have known what I meant, here, as a measure of flexibility relative to alternatives.

Quote
it has to be done only for totally new creatures of hardware.

Totally new creatures of hardware show up every day.  Anyway this is neither here nor there.  I said I wasn't going to hold my breath on that result, and I didn't.  We can move on from it without prejudice.  Smiley
ohad (Hero Member) | http://idni.org
September 20, 2014, 01:37:41 AM  #125

Quote
Precisely the contradiction, restated yet again.  "Translate procfs to zencoin" is the exact same problem as "price procfs" so this statement is precisely restated as "The control on the price is the user's. The system just helps them by doing pricing."

at least we now understand who influences whom, and that the user may change his numbers at any time, with or without looking at the market. hence no contradiction or anything like that. you may argue about terminology.

Quote
I'm becoming really quite convinced that where you've "gone all wrong" is in this repeated assumption of semi-honest participation.

Much of what you're saying, but particularly this, simply doesn't hold in the explicit presence of an attacker.

draw me a detailed scenario for a publisher hiring say 10K nodes and let's see where it fails

Quote
The one exception to this, of course, being formal proof.  We can actually offer real promises, and this is even central to the "seemingly magic" novelty of bitcoin and altcoins.  Bitcoin really does promise that no-one successfully double spends with any probability as long as hashing is not excessively centralized and the receiver waits an appropriate number of confirms.  (Both seemingly reasonable assumptions.)

Why you're so readily eschewing approaches that can offer any real promises, even go so far as to deny they exist "in real life," despite our Bitcoin itself being a great counterexample, is confusing to me.

in theory. in practice, power can shut down and so on. the probability for a computer to give you a correct answer is never really 1. how much uptime does AWS guarantee? i think 99.999%

Quote
I assume most users will be rational and will do whatever maximizes their own profit.

I actually go a bit further to assume that users will actually behave irrationally (at their own expense) if necessary to maximize their own profit.  (There's some fun modal logic!)

(I further have always suspected this is the root cause behind the failure of many corporations.)

since after the convergence of the network toward a more-or-less stable market, spammers and scammers will earn so little that they'd rather do decent work and get paid more. even if they don't, the other mentioned mitigations are taking place.

Quote
People *will* do these things at any given opportunity, and people will use any *other* vulnerability of the system to carry out these practices.

i totally agree. i do not agree that the network is not able to mitigate them and converge to reasonable values. since the costs are so much lower than big cloud firms' operational costs, we have a large margin to allow some considerable financial risk.

Quote
How exactly is this not just like CPUShare again, and why exactly shouldn't we expect it to fail for the exact same reasons, again?

let me mention that i know nothing about cpushare so i can't address this question

HunterMinerCrafter (Sr. Member)
September 20, 2014, 01:39:14 AM  #126

man stop mixing miscalculation with procfs spoofing

Man stop thinking they are not exactly the same thing.   Wink

If you can get over that hangup I think we can make better progress.

The attacker's arbitrary control over the execution, without being held to any scrutiny of authentication, is the same problem regardless of whether we are looking at the implications for the pricing or the implications for the execution itself.

The attacker recomposing the execution context is the same behavior in either case.  This is the good ol' red/blue pill problem, just reiterated under a utility function and cost model.
ohad (Hero Member) | http://idni.org
September 20, 2014, 01:39:46 AM  #127

i have a program which i want to decompose into a linear combination of benchmarks w.r.t. procfs measurements (abbrev. msmts).

i.e., i want to describe the vector x of procfs msmts of a given program as a linear combination of procfs msmts vectors of another n programs.

assume the dimension of the vectors is k.

all i need to do is have n >= k and have the matrix whose n rows are our vectors be of rank k.
i will get that almost surely if i just pick programs "different enough"; and even if they're just "a bit different", i may increase n.

I still don't see how the n programs have any assumable relation to the program that I'm actually trying to get quoted.  How is the behavior of any of the runs of the n programs indicative of anything about my future run of my job?  How does what is actually being priced serve to price the thing that I want priced?

forget about programs.
think of arbitrary vectors.
every k-dim vector can be written as a linear combination of k linearly independent vectors.
you can take many more vectors, increasing the probability that they'll contain k independent ones.
if the benchmarks are different, k will be enough. but the more the better.
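A minimal NumPy sketch of the span argument as stated (dimensions and "benchmarks" are invented for illustration; this is not Zennet's actual code):

Code:
import numpy as np

rng = np.random.default_rng(1)
k = 8    # dimension of a procfs msmts vector
n = 20   # number of benchmark programs, n >= k

B = rng.normal(size=(n, k))   # row i = procfs msmts of benchmark i
x = rng.normal(size=k)        # procfs msmts of the program to decompose

# "Different enough" benchmarks give rank k almost surely:
assert np.linalg.matrix_rank(B) == k

# Least-squares weights w with B.T @ w ~= x, i.e. x written as a linear
# combination of the benchmark rows; the fit is exact when rank(B) == k.
w, *_ = np.linalg.lstsq(B.T, x, rcond=None)
print(np.allclose(B.T @ w, x))   # True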

ohad (Hero Member) | http://idni.org
September 20, 2014, 01:42:11 AM  #128

man stop mixing miscalculation with procfs spoofing

Man stop thinking they are not exactly the same thing.   Wink

If you can get over that hangup I think we can make better progress.

The attacker's arbitrary control over the execution, without being held to any scrutiny of authentication, is the same problem regardless of whether we are looking at the implications for the pricing or the implications for the execution itself.

The attacker recomposing the execution context is the same behavior in either case.  This is the good ol' red/blue pill problem, just reiterated under a utility function and cost model.

won't you agree that detecting miscalculation and detecting procfs spoofing are two different things?
maybe there is a similarity if our goal is to eliminate them both.
but all we want is to decrease the risk expectation.

ohad (Hero Member) | http://idni.org
September 20, 2014, 01:43:46 AM  #129

very easy: each client picks any amount of ID POW to require from its parties.

Eeep, it just keeps getting more scary!  Cheesy

Who decides how much work is sufficient?  How does any given publisher have any indication about any given provider's ability to perform the identity work?

There kind of has to be some continual consensus on a difficulty here, doesn't there?

it's not like btc difficulty, where the whole network has to agree. it's only local. each participant may choose their own value.
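For concreteness, a hashcash-style sketch of what a locally chosen identity-PoW check could look like (the scheme, names, and difficulty value are illustrative assumptions, not the actual Zennet protocol):

Code:
import hashlib
from itertools import count

def solve_id_pow(identity: bytes, bits: int) -> int:
    """Find a nonce so sha256(identity || nonce) has `bits` leading zero bits."""
    target = 1 << (256 - bits)
    for nonce in count():
        digest = hashlib.sha256(identity + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def check_id_pow(identity: bytes, nonce: int, bits: int) -> bool:
    digest = hashlib.sha256(identity + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

# Each participant picks its own local difficulty; there is no
# network-wide consensus value as in bitcoin.
my_required_bits = 20
nonce = solve_id_pow(b"provider-pubkey", my_required_bits)
assert check_id_pow(b"provider-pubkey", nonce, my_required_bits)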

ohad (Hero Member) | http://idni.org
September 20, 2014, 01:46:55 AM  #130

it's not about "one size fits all". it's about describing how much has to be invested in a job. like saying "each task of mine is about as much as 1000 fft of random numbers", or a linear combination of many such tasks.
again, such task decomposition is not mandatory; only the ongoing procfs msmts are.
another crucial point is that we don't have to have accurate benchmarks at all. just many of them, as different as possible.

 Huh  If benchmarks don't have to be accurate then why do them at all?

that's the very point: i don't need accurate benchmarks. i just need them to be different and to keep the system busy. that's all!! then i get my data from reading procfs while the benchmarks run. if you understand the linear independence point, you'll understand this.
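A sketch of how such a decomposition could feed a price quote (the benchmark set, rates, and weights are all made up for illustration):

Code:
import numpy as np

# Hypothetical per-unit rates a provider quotes for each benchmark:
benchmarks = ["fft_1k", "memcpy_1mb", "disk_seq_read"]
rates = np.array([0.002, 0.0005, 0.001])   # coins per benchmark unit

# Weights from decomposing the job's procfs vector over the benchmarks,
# e.g. "each task of mine is about as much as 1000 FFTs" plus some I/O:
weights = np.array([1000.0, 250.0, 10.0])

price_per_task = weights @ rates
print(f"quoted price per task: {price_per_task:.4f}")   # 2.1350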

Quote
Quote
it is *able* to be customized and coded, hence flexible.

By this measure all software is flexible.  Of course you must have known what I meant, here, as a measure of flexibility relative to alternatives.

Quote
it has to be done only for totally new creatures of hardware.

Totally new creatures of hardware show up every day.  Anyway this is neither here nor there.  I said I wasn't going to hold my breath on that result, and I didn't.  We can move on from it without prejudice.  Smiley

oh well
so you know how to write software that applies to all future hw?

ohad (Hero Member) | http://idni.org
September 20, 2014, 01:48:13 AM  #131

as i wrote, i'll be glad to discuss with you methods for proving computation. not only discuss, but maybe even work on it.

HunterMinerCrafter (Sr. Member)
September 20, 2014, 01:58:08 AM  #132

forget about programs.
think of arbitrary vectors.
every k-dim vector can be written as a linear combination of k linearly independent vectors.
you can take many more vectors, increasing the probability that they'll contain k independent ones.
if the benchmarks are different, k will be enough. but the more the better.

I don't disagree with any of this except the notion that it is acceptable to forget about programs.   Wink

What you propose is fine other than the fact that there is no relation between the end result and the program that you conveniently want to just "forget about."

(Such a relation is not easily established, generally.  Functional extension is hard.  Proving lack of divergence in the general case is even known to be impossible.  However, you can't just "punt" like this, forgetting about programs, and assume everything else will just work out soundly.)
ohad (Hero Member) | http://idni.org
September 20, 2014, 01:59:08 AM (last edit: September 20, 2014, 02:09:37 AM)  #133

forget about programs.
think of arbitrary vectors.
every k-dim vector can be written as a linear combination of k linearly independent vectors.
you can take many more vectors, increasing the probability that they'll contain k independent ones.
if the benchmarks are different, k will be enough. but the more the better.

I don't disagree with any of this except the notion that it is acceptable to forget about programs.   Wink

What you propose is fine other than the fact that there is no relation between the end result and the program that you conveniently want to just "forget about."

(Such a relation is not easily established, generally.  Functional extension is hard.  Proving lack of divergence in the general case is even known to be impossible.  However, you can't just "punt" like this, forgetting about programs, and assume everything else will just work out soundly.)

it's true for ANY vectors. calling them "procfs msmts" doesn't change the picture.
i can linearly span the last 100 chars you just typed on your pc by using, say, 200 weather readings, one from each of 200 different cities.
of course, only once; otherwise the variance will be too high to make it at all informative.
but for a given program, the variance over several runs is negligible (and in any case it can be calculated and taken into account in the least squares algo).
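A sketch of the variance point: repeated runs give a per-coordinate noise estimate, which standard weighted least squares can fold into the fit (nothing here is Zennet-specific; all numbers are invented):

Code:
import numpy as np

rng = np.random.default_rng(2)
k, n, runs = 8, 20, 10

B = rng.normal(size=(n, k))   # benchmark msmts matrix, rank k almost surely
x_runs = 1.0 + 0.05 * rng.normal(size=(runs, k))  # repeated msmts of one program

x = x_runs.mean(axis=0)                # averaged measurement vector
sigma = x_runs.std(axis=0) + 1e-9      # per-coordinate noise estimate

# Weighted least squares: scale each equation by 1/sigma so that the
# noisier coordinates count for less in the fit.
wts = 1.0 / sigma
w, *_ = np.linalg.lstsq(B.T * wts[:, None], x * wts, rcond=None)
print(np.allclose(B.T @ w, x, atol=1e-6))   # exact here since rank(B) == k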

ohad (Hero Member) | http://idni.org
September 20, 2014, 02:00:53 AM  #134

also note that those UVs are actually "atomic operations".
running one FLOP requires X atomic operations of various types.
we just add their amounts linearly!
but summing consecutive FLOPs will end up correlated.

HunterMinerCrafter (Sr. Member)
September 20, 2014, 02:08:46 AM  #135

won't you agree that detecting miscalculation and detecting procfs spoofing are two different things?

No!  This is central to my point.  Authentication is authentication, and anything else is not.

(Authentication encompasses both concerns.)

Quote
maybe there is a similarity if our goal is to eliminate them both.

There is more than a similarity, there is a total equivalence.  To assume they are in any way different is a mistake.  They both just constitute a change in reduction semantics within the execution context.

Quote
but all we want is to decrease the risk expectation.

I am actually interested in a goal of eliminating both, of course.

However, all I really want is at least a rational explanation of where risk expectation is decreased, assuming rational behavior by participants (and not assuming honest or even semi-honest behavior beyond what is enforced by blockchain semantics).

It still seems to me like rational behavior of participants is to default to attack, and it seems that they have little discouraging them from doing so.
HunterMinerCrafter (Sr. Member)
September 20, 2014, 02:10:48 AM  #136

it's not like btc difficulty, where the whole network has to agree. it's only local. each participant may choose their own value.

By what criteria? As a publisher, how should I estimate what a sufficient difficulty should be to counter incentive for absconding at some point, considering I can't know the capacity of the worker?
ohad (Hero Member) | http://idni.org
September 20, 2014, 02:12:23 AM  #137

it's not like btc difficulty, where the whole network has to agree. it's only local. each participant may choose their own value.

By what criteria? As a publisher, how should I estimate what a sufficient difficulty should be to counter incentive for absconding at some point, considering I can't know the capacity of the worker?

again, it's all a matter of approximations and probabilities.

ohad (Hero Member) | http://idni.org
September 20, 2014, 02:14:34 AM  #138

won't you agree that detecting miscalculation and detecting procfs spoofing are two different things?

No!  This is central to my point.  Authentication is authentication, and anything else is not.

(Authentication encompasses both concerns.)

Quote
maybe there is a similarity if our goal is to eliminate them both.

There is more than a similarity, there is a total equivalence.  To assume they are in any way different is a mistake.  They both just constitute a change in reduction semantics within the execution context.

Quote
but all we want is to decrease the risk expectation.

I am actually interested in a goal of eliminating both, of course.

However, all I really want is at least a rational explanation of where risk expectation is decreased, assuming rational behavior by participants (and not assuming honest or even semi-honest behavior beyond what is enforced by blockchain semantics).

It still seems to me like rational behavior of participants is to default to attack, and it seems that they have little discouraging them from doing so.

it is clear that:
1. you want the computational work to be proven
2. i want to control the risk expectation and decrease it to practical values

i claim that your method is less practical, but i'm open to hearing more.
you claim that my method won't work.
now let's take the two approaches one at a time:
if we want to talk about my approach, then miscalc and procfs spoof are indeed different.

HunterMinerCrafter (Sr. Member)
September 20, 2014, 02:38:04 AM  #139

at least we now understand who influences whom, and that the user may change his numbers at any time, with or without looking at the market. hence no contradiction or anything like that. you may argue about terminology.

 Huh Another response that didn't match the statement.  How is the system not doing the pricing?  If you think the market is doing the pricing, this would imply that the pricing only occurs at the fiat denomination.  I don't think this is a notion that would be well accepted.  If you think the user is doing the pricing, why?  How does the user setting the relation between the procfs token and the coin token say anything about the valuation of the actual computation, which is denominated in that procfs token?  I would hope you do understand the difference between valuation and denomination.

Quote
draw me a detailed scenario for a publisher hiring say 10K nodes and let's see where it fails

I've already detailed where it can fail.  This is all I've been doing for hours now.

Quote
in theory. in practice, power can shut down and so on.

Eh, I'm going to avoid getting back into this discussion for the hundredth-or-so time.  The whole model of bitcoin *doesn't* actually fall over when the EMPs go off.  The theory remains just as sound.  The protocol can still be enacted, and work, albeit probably with adjusted parameters that account for the (massively) increased network latency caused by lack of electronic communication.

Bitcoin would've literally solved the actual Byzantine Generals' problem, even at the time!

Quote
the probability for a computer to give you a correct answer is never really 1.

It is when the process the computer employs to derive that answer is proof-carrying!  Either you get out the correct answer or you get no output at all.

Quote
how much uptime does AWS guarantee? i think 99.999%

How much uptime does bitcoin guarantee? 100%.  Anti-fragile and all that jazz.  It really is deterministically "immortal", modulo the 51% attack or some hypothetical eventual exhaustion of the hash space.

Six sigma has it all wrong.  We should be building systems that are "forever."  (Particularly being "Bitcoiners.")

Quote
since after the convergence of the network toward a more-or-less stable market, spammers and scammers will earn so little.

Again, why do we think this model will converge in such a direction?  What makes them actually "earn so little?"  What is going to prompt the network participants to behave altruistically when they have both incentive and opportunity not to?

Quote
Quote
People *will* do these things at any given opportunity, and people will use any *other* vulnerability of the system to carry out these practices.

i totally agree. i do not agree that the network is not able to mitigate them and converge to reasonable values.

I never said that I don't think it could.  In fact I explicitly stated the opposite several times.  What I'm saying here is that your network, as described so far, doesn't even seem to mitigate correctly.

Quote
since the costs are so much lower than big cloud firms' operational costs, we have a large margin to allow some considerable financial risk.

Eh?  How can we know the relative cost a priori?  Why shouldn't we believe the cost of this service will actually average higher, given the need for excess redundancy etc.?  We've already brought into the discussion the notion that people might even just re-sell AWS, and they certainly wouldn't do so at a loss.

I don't think you've made a safe assumption on this.

Quote
Quote
How exactly is this not just like CPUShare again, and why exactly shouldn't we expect it to fail for the exact same reasons, again?

let me mention that i know nothing about cpushare so i can't address this question

I'll rephrase.  How exactly is this not reiterative of any other attempts at a p2p resource market, which have all failed?

They've all failed for the same reasons I'm assuming your model fails, btw.  No authentication over quote or work.  Inadequately constrained execution context.  Disassociated cost models.  Providers absconding mid-computation.  Providers attacking each other and the network for any possible advantage.  Providers burning through identities to perpetuate their unfair trades.  Requirement for substantial overheads in any attempts at "mitigation" of these problems.

Those who do not learn from history, they say, are doomed.
HunterMinerCrafter (Sr. Member)
September 20, 2014, 02:43:55 AM  #140

Quote
By what criteria? As a publisher, how should I estimate what a sufficient difficulty should be to counter incentive for absconding at some point, considering I can't know the capacity of the worker?

again, it's all a matter of approximations and probabilities.

More "and then some magic happens."

So a provider who makes some technological leap in solving the puzzle gets to violate the constraint at will until the network just "wisens up on its own"?  By what means should it become wise to the fact?

If you remove the difficulty scale it can't really be called PoW anymore, since it no longer manages to prove anything.  The "hash lottery" has to be kept relatively fair by some explicit mechanism, or else anyone who finds a way to buy their "hash tickets" very cheaply breaks any assumption of fairness!