got it. if you call it multi-objective optimization, you just didn't understand the math. sorry.
How is it not? You have multiple candidates, each with a unique feature set, and are trying to find a best-fit subset of candidates along all feature dimensions. Sounds like a textbook example to me!
go over "least squares" again.
...
It's not about the metric, it's about the information I have.
When I say procfs I mean any measurements that the OS gives you (we could get into the kernel level but that's not needed given my algo).
Of course it is about the metric. This metric is the only thing we have, so far.
The most relevant piece of information available, the job algorithm itself, gets ignored. As long as this remains the case there is no meaningful association between the benchmark and the application.
true. procfs doesn't have this linearity. that's exactly what my algo comes to fix.
but *any* n-dim vector can be written as a linear combination of n linearly independent vectors.
Sure, but that doesn't magically make the measure linearly constrained. You're still assuming that the measurements are actually done at all, which is a mistake. We can't assume the semi-honest model for this step. We have to assume the attacker just makes up whatever numbers he wants, here.
those latter n vectors are the unknowns (their number might differ from n, which the algo still supports).
note that I never measure the UVs, nor give any approximation of them. I only estimate their product with a vector.
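a rough illustration of what i mean (a toy sketch with numpy, not the actual zennet algo): fit a linear model and only ever report the inner product of the unknowns with a given vector, never the unknowns themselves.

```python
# toy sketch (not the actual zennet algo): estimate an inner product
# w . theta without ever reporting theta itself, via ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_unknowns = 50, 5

theta = rng.normal(size=n_unknowns)            # the "UVs" (never reported)
X = rng.normal(size=(n_obs, n_unknowns))       # known linear projections
y = X @ theta + 0.1 * rng.normal(size=n_obs)   # noisy procfs-like measurements

theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

w = rng.normal(size=n_unknowns)                # the vector we price against
print("estimated w.theta:", w @ theta_hat)
print("true      w.theta:", w @ theta)
```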
Er, of course you are measuring the UVs. The performance of {xk} constitutes the measurement of all UVs, together. Just because you measure them through indirect observation doesn't mean you aren't measuring them. In any case I'm not sure what this has to do with anything? (Or were you just stating this in case I had missed the implication?)
sure, wonderful works out there. we have different goals, different assumptions, different results.
Your goals don't seem to align with market demands, your assumptions can't be safely assumed, and where they get great results I'm skeptical of what your results really are, let alone their merit.
This is the core of the problem. If your coin were based on these established works that meet lofty goals under sound assumptions and show practical results, I'd have no trouble seeing the value proposition. Since your coin is based on "known to fail" goals (which we'll just generally call the CPUShare model, for simplicity) and makes what appear to be some really bad assumptions, I don't expect comparable results. Why should I?
yet of course i can learn a lot from the cs literature, and i do.
you claim my model is invalid, but all you've shown so far is misunderstanding. once you get the picture you'll see the model is valid. i have a lot of patience and will keep explaining, again and again.
Please do.
sure. see my previous comment about normal distribution and mitigating such cases.
Again, mitigation is not a solution. You've claimed solutions, but still aren't presenting them. (In this case I'm not even really sure that the assumption behind the mitigation actually holds. Gauss only assumes natural error, not induced error. Can we assume any particular distribution when an attacker can introduce skew at will?)
you're not even looking in the right direction. as above, of course procfs aren't linear. and of course they're still n-dim vectors no matter what.
And of course this doesn't magically repair our violated linearity constraint in any way, does it? Maybe this is what I'm missing....?
Of course it seems that I will be somewhat "doomed" to be missing this, as filling that hole requires establishing a reduction from arbitrary rank N circuits to rank 1 circuits. I'm pretty sure this is impossible, in general.
that follows from the definition of the UVs you're looking for.
even though I wrote a detailed doc, as with any good math, you'll still have to use good imagination.
just think of what you're really looking for, and see that procfs is projected linearly from them. even the most nonlinear function is linear in some higher dimensional space. and there is no reason to assume that the dim of the UVs is infinite... and we'd get along even if it were.
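a toy example of the higher-dimension point (again, just an illustration, not the real algo): a quadratic is nonlinear in x but exactly linear in the lifted features (1, x, x^2).

```python
# toy sketch: a function that is nonlinear in x becomes a linear model
# once the input is lifted into a higher-dimensional feature space.
import numpy as np

x = np.linspace(-2.0, 2.0, 100)
y = 3.0 * x**2 - x + 1.0                            # nonlinear in x

Phi = np.column_stack([np.ones_like(x), x, x**2])   # lift: (1, x, x^2)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear in the lifted space

print(coef)                                         # ~ [1, -1, 3]
```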
Being careful not to tread too far into the metaphysical, I'll approach this delicately. You are closest to seeing the root of this particular problem with this statement, I feel.
If I say being linear in a higher dimension does not resolve the violation of the linearity constraint in the lower dimension, would you agree? If so, you must in turn admit that the linearity constraint is still violated.
It is fine to "patch over the hole" in this way if we can assume correctness of the higher dimensional representation... unfortunately the attacker gets to construct both, and therein lies the rub. He gets to break the linearity, and hide from us that he has even done so.
(This is precisely the premise for the attack I mentioned based on the "main assumption.")
If attackers can violate the linearity constraint at all, even if they can make it look like they haven't... well, they get to violate the constraint. :-) I say again, linearity cannot be enforced, only assumed. (Well, without some crypto primitive "magic" involved, anyway.)
why different circuit? the very same one.
No, the benchmarks are canonical and are not the circuit representative of my job's algorithm.
the provider tells the publisher what their pricing is, based on benchmarks. yes, it can be tampered with, but it can be recognized soon, the loss is negligible, reputation goes down (the publisher won't work with this address again, and with the ID POW they can block users who generate many new addresses), and there are all the other mechanisms mentioned in the docs.
in the same way, the publisher can tell how much they are willing to pay, in terms of these benchmarks. therefore both sides can negotiate over some known objective variables.
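roughly, the negotiation boils down to something like this (my own shorthand, not notation from the docs):

```latex
% hypothetical formalization: p_i is the provider's quoted rate per unit
% of canonical benchmark i, c_i is the weight the job puts on benchmark i,
% so the negotiated price per unit of work would be
\mathrm{price} \;=\; \sum_{i=1}^{m} c_i \, p_i
```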
Except they're subjective, not objective. They're potentially even a total fantasy. Worse yet, they have no real relation, in either case, to the transaction being negotiated.
again, we're not measuring the UVs, but taking the best estimator of their inner product with another vector. this reduces the error (error in the final price calculation, not in the UVs, which are not interesting in themselves) by an order of magnitude.
Again this is only true assuming a semi-honest model, which we can't, for reasons already enumerated. (Are you understanding, yet, that the critical failure of the model is not in any one of these problems, but in the combination?)
Local history & ID POW as above
We're running in circles, now. ID POW does nothing to filter spammers, only discourages them by effectively charging a small fee. The post requires stamps and I still get plenty of junk mail.
Given that once one owns an identity one can get at least one issuance of some payment for no work, why would anyone bother running actual computations and building up a positive reputation instead of mining IDs to burn by not providing service?
The ID POW can only have any effect if the cost to generate an ID is significantly higher than the gain from burning it, the cost the publisher pays for utilizing the attacker's non-working worker. In other words, it has to be enforced somewhere/somehow that the cost of an ID is more than the "first round" cost of a job. I don't see how/where this can be enforced, but I'm not precluding it just yet. If you would enforce this somehow, it would solve at least the "trust model in a vacuum" part of the problem by relating the value of the trust entity to the value of its resource.
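If I try to pin down the condition I'm describing (my notation, nothing from the docs), it is roughly:

```latex
% hypothetical condition: minting-and-burning an ID must be unprofitable.
% C_{id}   : cost of generating one identity via the ID POW
% G_{burn} : what an attacker gains by burning it, roughly the
%            "first round" payment for a job it never performs
C_{\mathrm{id}} \;>\; G_{\mathrm{burn}}
```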
it is possible to take the algorithm which the publisher wants to run, and decompose its procfs measurements into a linear combination of canonical benchmarks.
Is it? (In a way aside from my proposed functional extension mechanism, that also doesn't give the publisher some free ride?)
If so, this might be a useful approach. If anything, this should be explored further and considered.
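For concreteness, here is a minimal sketch of what I imagine such a decomposition could look like (my own illustration with made-up numbers, and assuming the procfs counters are reported honestly, which is of course the very point in dispute):

```python
# hypothetical sketch, made-up numbers: express a job's procfs vector as a
# linear combination of canonical benchmark vectors, then price the job
# with per-benchmark rates.
import numpy as np

# rows = procfs counters, columns = canonical benchmarks
B = np.array([
    [1.0e9, 2.0e8, 5.0e7],   # cpu cycles
    [4.0e6, 9.0e6, 1.0e6],   # page faults
    [2.0e5, 1.0e5, 8.0e5],   # disk I/O ops
])
job = np.array([1.4e9, 7.0e6, 6.0e5])             # the publisher's job, same counters

coeffs, *_ = np.linalg.lstsq(B, job, rcond=None)  # job ~= B @ coeffs

rates = np.array([0.03, 0.01, 0.02])              # negotiated price per benchmark unit
print("benchmark mix:", coeffs)
print("price:", coeffs @ rates)
```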
that's exactly what i'm saying. we don't care about bursts on such massive distributed apps.
We do, because we can't price the distribution as a whole; we can only price the individual instances. The concern is not that one node does 1000/sec and another node does 1/sec, it is that the 1000/sec node might suddenly start doing 1/sec - and might even do so through neither malicious intent nor any fault!
as above, that's an open problem, which i'm happy i don't need to solve in zennet. that's why zennet is not adequate for really-real-time apps.
but it'll fold your protein amazingly fast.
Or it'll tell you it is folding your protein, take your money, spit some garbage data at you (that probably even looks like a correct result, but is not) and then run off.
Or it'll tell you it can fold your protein really fast, take your money, actually fold your protein really slowly, and you'll have to walk away and start over instead of giving it more money to continue to be slow.
We can't know which it will do until we hand it our money and find out! No refunds. Caveat emptor!
they are not called canonical from the historical point of view, but from the fact that all participants at a given time know which benchmark you're talking about.
Sure, my point is that the benchmarks are presented as "one size fits all" and they aren't. It's ultimately the same problem as the benchmark having nothing to do with the job, in a way. How are you going to maintain and distribute useful benchmarks for every new computing kit and model that comes out? (At least that problem is not so bad as trying to maintain a benchmark for any job algorithm used. There can be only finitely many technologies.)
Further, how are you going to benchmark some nonlinear, progressive circuit "at all?" How can you benchmark something that only does unsupervised classification learning? How can you meaningfully benchmark a system with "online LTO" where the very act of running the benchmark will result in the worker re-configuring into a different performance profile, partly or totally invalidating your benchmark results? (Likely further separating the bench-marked performance profile from that of the eventual job, too!)
it doesn't have to be phoronix (but it's a great system for creating new benchmarks). all the programs have to do is make the HW work hard. if you have new HW, we (or the community) will have to write a program that utilizes it in order to be able to trade its resources over zennet, and participants will have to adopt it as a canonical benchmark.
flexibility.
the last thing the system is, is "rigid".
Being continuously re-coded and added to in order to support change does not really fit my definition of flexible. That is kind of like saying a brick is flexible because you can stack more bricks on top of it.