For example, the pricing model is totally innovative.
What is the innovation? What new contribution are you making? I hear you saying over and over again "it's innovative, it's innovative, no really it's TOTALLY innovative," but you haven't yet come out and said what the innovation is. Averaging some procfs samples is hardly innovative. Many providers (at least the ones that don't charge based on straight wall-clock time) do this. What are you doing in this pricing model that is new?
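For context on just how commonplace this is, here is a minimal sketch (mine, not Zennet's code, and assuming a standard Linux /proc/stat layout) of the sampling-and-averaging that essentially every metering agent already performs:

```python
# Rough sketch of the procfs sampling every metering agent already does.
# Assumes a Linux host; /proc/stat's aggregate "cpu" row has the usual
# user/nice/system/idle/iowait/irq/softirq columns.
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]   # first line is the aggregate "cpu" row
    return [int(x) for x in fields]

def average_cpu_utilization(samples=10, interval=1.0):
    busy_fracs = []
    prev = cpu_times()
    for _ in range(samples):
        time.sleep(interval)
        cur = cpu_times()
        deltas = [c - p for c, p in zip(cur, prev)]
        total = sum(deltas)
        idle = deltas[3] + deltas[4]        # idle + iowait
        busy_fracs.append((total - idle) / total if total else 0.0)
        prev = cur
    return sum(busy_fracs) / len(busy_fracs)

print(average_cpu_utilization())
```

Every metered hosting provider does some variation of this; the interesting questions are what you do with the numbers and who is trusted to report them honestly.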
It measures the consumption much more fairly than common services.
Except that it doesn't, and this is somewhat central to my point. It measures consumption in (more or less) the same way as everyone else, but instead of having a company to complain to or sue if they lie about their stats, the user only has a pseudonym to leave a comment on. The system removes actual accountability and replaces it with a trust metric, and it further makes it exceedingly easy to lie about those stats.
It also optimally mitigates the difference between different hardware.
How? This is certainly not explained, or not explained well, in your materials. I see some hand waving about risk analysis in the client, but nothing that actually does anything about it.
The crunch here is to make assumptions that are relevant only for distributed applications.
Ok, what assumptions are we talking about? You say these things but never complete your thoughts. It gets tedious to try to draw details out of you like this.
Then comes the novel algorithm (which is an economic innovation by itself) of how to price with respect to unknown variables under a linearity assumption (which surprisingly occur on Zennet's case when talking about accumulated resource consumption metrics).
Aside from the novelty claim (re: my first point, I don't see the novelty), I see two issues here. First, the linearity assumption is a big part of the rigidity. It applies fine to a traditional Harvard-architecture CPU, but breaks down on any "real" tech. How does your pricing model price a neuromorphic architecture? How does it measure a recombinant FPGA array? How do you account for hardware that literally changes (non-linearly!) as the process runs? (The answer is self-evident from your own statement: right now you can't.)
Second, the pricing model only prices against the micro-benchmarks (indicating, right there, that the performance profile captured by the pricing model will differ from that of the actual application!) and doesn't account for environmental context. With modern distributed systems, most of the performance profile comes down to I/O wait states (crossing para-virtualization thresholds) and bursting semantics. I don't see either of these actually being accounted for. (Maybe you should spend some time with engineers and A.E.s from AWS, Google, Rackspace, MediaFire, Joyent, DigitalOcean, etc. and find out what your future users are actually going to be concerned about in their pricing considerations.)
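To make the objection concrete, here is my paraphrase (a hypothetical sketch, not Zennet's actual formula) of what a linear metered-pricing model over accumulated consumption amounts to:

```python
# My paraphrase of a linear metered-pricing model, not Zennet's actual code.
# The per-unit rates would come from the micro-benchmark negotiation;
# every name and number here is hypothetical and only illustrates the
# linearity assumption: charge = sum over resources of rate * accumulated use.

def accumulated_cost(price_per_unit: dict, samples: list) -> float:
    """samples: per-interval consumption deltas, e.g.
    {"cpu_ticks": ..., "bytes_rw": ..., "net_bytes": ...}"""
    total = 0.0
    for delta in samples:
        for resource, units in delta.items():
            total += price_per_unit.get(resource, 0.0) * units
    return total

rates = {"cpu_ticks": 1e-9, "bytes_rw": 2e-12, "net_bytes": 5e-12}   # hypothetical
usage = [{"cpu_ticks": 4_000_000, "bytes_rw": 10_000_000, "net_bytes": 250_000}]
print(accumulated_cost(rates, usage))
```

Notice there is no term for I/O wait, burst credits, co-tenant interference, or hardware whose throughput changes non-linearly mid-run; effects like those have nowhere to live in a model of this shape.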
The workflow is inconvenient. Reliably executing a batch of jobs will carry a lot of overhead in launching additional jobs, continually monitoring your service quality, managing reputation scores, etc. If you don't expend this overhead (in time, money, and effort) you will pay for service you don't receive or, worse, will end up with incorrect results.
I don't understand what's the difference between this situation or any other cloud service.
With any other cloud service I can spin up only one instance and be fairly confident that it will perform "as advertised" and if it doesn't I can sue them for a refund.
With your service I can't spin up one instance and be confident about anything. If it turns out that one instance is not as advertised - oh well, too late - there is no recourse for compensation, no jurisdictional court to go to, no company to boycott, nothing but a ding on a trust rating. (Again, if you think a trust rating works in a vacuum like this, just look around at some of the trust-rating behavior on these forums. It doesn't.)
Also note that Zennet is a totally free market. All parties set their own desired price.
I've been ignoring this catch-22 statement up to now. Either the pricing model is a free market, or the pricing model is controlled by your formula and magical risk-averse client. It can't be both free and regulated by protocol; that is a contradiction. Either your users manage the pricing or the software manages it for them - which is it? (I'm not sure either is sustainable without changes elsewhere in the model.)
I cannot totally cut this risk,
Now we get to why I'm actually "all riled up" here. You could totally cut that risk. Over the past decade, real solutions to these problems have been found. You aren't applying them. Why the heck not?!
Also, this is another of your contradictory statements. You simultaneously claim some innovative new solutions to the problems at hand and that you cannot solve them.
but I can give you control over the probability and expectation of loss, which come to reasonable values when massively distributed applications are in mind, together with the free market principle.
Again setting aside the catch-22 assumption that the market is free (and not reiterating that these are not solutions but costly partial mitigations), this statement still doesn't stand up to scrutiny. The values actually converge to *less* reasonable sums as you scale up distribution!
Examples of risk reducing behaviors:
1. Each worker serves many (say 10) publishers at once, hence reducing the risk 10-fold to both parties.
This reduces risk on the part of the worker, but increases risk on the part of the publisher by adding to the statistical variance of the delta between measured performance and applied performance on the worker's side. In other words, this behavior makes it somewhat safer (though not 10-fold, as you claim) for the worker as they diversify, but that act of diversification introduces inaccuracy into the micro-benchmark. Now my application's performance profile is being influenced, moment to moment, by the behaviors of 9 other applications.
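A toy simulation (all the noise parameters are made up) of why co-tenancy widens the gap between the benchmarked profile and what my job actually experiences:

```python
# Toy simulation: the benchmark is taken with no interference, but at runtime
# each co-tenant steals a random slice of the machine. The spread (stdev) of the
# benchmark-vs-reality gap grows with the number of co-tenants.
import random, statistics

def applied_slowdown(n_cotenants, trials=10_000):
    deltas = []
    for _ in range(trials):
        interference = sum(random.uniform(0.0, 0.05) for _ in range(n_cotenants))
        deltas.append(interference)          # benchmark saw 0 interference
    return statistics.mean(deltas), statistics.stdev(deltas)

for n in (0, 1, 9):
    mean, sd = applied_slowdown(n)
    print(f"{n} co-tenants: mean slowdown {mean:.3f}, stdev {sd:.3f}")
```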
2. Micropayment protocol is taking place every few seconds.
3. Since the system is for massive distributed applications, the publisher can rent say 10K hosts, and after a few seconds dump the worst 5K.
Another point that you keep reiterating but not expounding. The problem here is the same as with any speculative investment - past performance does not guarantee (or even meaningfully predict) future results. As a publisher I spin up 10K instances and then dump half. Now I've paid for twice as much work over those "few" seconds (a few seconds on 10K CPUs is an awfully long time, and gets expensive fast) and dumped half of my providers without knowing which in that half were malicious and which just had an unlucky bout of extra cache misses. Further, I have no reason to believe that the 5K I'm left with won't suddenly start under-performing relative to the 5K I've just ditched.
You could induce a counter-argument here by saying "well, just spin up 5K replacement nodes after you drop the first set, then re-sort and drop the 5K lowest, rinse, repeat," but this only exacerbates all of the other problems related to this facet of the model! Now I'm *continuously* paying for those "few" wasted seconds over and over, I have to structure my application to handle this sort of consistent soft failure (so doing anything like MRM coordination in a useful way becomes both very difficult and laden with even more overhead!), and I still have no reason to believe that my system will ever converge onto an optimal set of providers. (Maybe there simply aren't 5K non-malicious workers up at the moment, or maybe some of the malicious nodes are preventing my reaching that set!)
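Here's the back-of-envelope arithmetic for that over-provision-and-cull loop, with purely hypothetical prices and durations (the shape of the waste is the point, not the numbers):

```python
# "Rent 10K, keep the best 5K" - what the probe window costs the publisher.
# All prices and durations are hypothetical.
hosts_rented  = 10_000
hosts_kept    = 5_000
probe_seconds = 10                 # the "few seconds" of observation before culling
price_per_host_second = 0.00001    # made-up unit price

probe_cost  = hosts_rented * probe_seconds * price_per_host_second
useful_cost = hosts_kept   * probe_seconds * price_per_host_second
print(f"spent {probe_cost:.2f}, of which {probe_cost - useful_cost:.2f} bought work I threw away")

# If the culled half is replaced and re-sorted every round, that waste recurs each round:
rounds = 100
print(f"over {rounds} culling rounds: {(probe_cost - useful_cost) * rounds:.2f} wasted")
```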
4. One may only serve known publishers such as universities.
Again, this is a behavior that only mitigates risk for the worker. The worker doesn't need much risk mitigation - they are largely in control of the whole transaction - it's the buy side that needs the assurances.
5. One may offer extra-reliability (like existing hosting firms) and charge appropriately. (for the last two points, all they have to do is to config their price/publishers on the client, and put their address on their website so people will know which address to trust).
Wait, now you're proposing "just re-sell AWS" as a compensatory control?! This isn't exactly what I meant by competition, and it precludes reliable service from ever being priced competitively. (Why wouldn't the buy side just skip the middleman and go straight to the established hosting firm?)
6. If one computes the same job several times with different hosts, they can reduce the probability of miscalculation. As the required invested amount grows linearly, the risk vanishes exponentially. (now I see you wrote it -- recall this is a free market. so if "acceptable" risk probability is say "do the calc 4 times", the price will be adjusted accordingly)
Did you miss the part where I pointed out that this behavior is only applicable to a subset of useful computations? Here's a "real world hypothetical" related to an actual HPC initiative I was once involved with - ranking page-rank implementations by performance over time. In other words, systematically and automatically comparing search engine result quality over time.
There are two places where this behavior directly fails to meet its goal. First, there's the obvious problem of floating point precision. The numbers spit out from two identical software runs on two different architectures will not be the same, and the VM layer specifically precludes me from knowing the specifics of the semantics applied at the hardware. Second, even if the architectures are identical as well, meaning the float semantics are identical, the algorithm itself lacks the referential transparency needed to assume consistency! Instance A performing the query against the service might (usually WILL, in my experience!) alter the results issued to instance B for the same query! (Unfortunately, most useful processes fall into this category of side-effecting.)
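On the floating point point, you don't even need different architectures to see the problem; merely reordering a reduction changes the bits of the answer, and cross-architecture differences (FMA, vector widths, x87 vs SSE) only make it worse:

```python
# Even on one machine, changing the order of a floating point reduction
# changes the bits of the result, so byte-identical outputs across hosts
# cannot be assumed for numeric workloads.
left_to_right = (0.1 + 0.2) + 0.3
right_to_left = 0.1 + (0.2 + 0.3)
print(left_to_right == right_to_left)            # False
print(repr(left_to_right), repr(right_to_left))  # 0.6000000000000001 vs 0.6
```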
7. User can filter spammers by requiring work to be invested on the pubkey generation (identity mining).
This does nothing to "filter spammers"; it only enforces a (somewhat arguably) fair distribution of identities. All it does is establish a small cost for minting a new identity and give identity itself some base value. In other words, you'll inevitably have some miners who never do a single job and only mine new identities with which to claim announcements, without ever fulfilling anything more than the micro-benchmark step and collecting their "few seconds worth" of payments. (They don't even have to actually expend resources to run the micro-benchmarks; they can just make up some numbers to fill in.)
(Rationally, I don't see why any worker would ever bother to actually work when they simply "don't have to" and still get paid a bit for doing effectively nothing.)
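For what it's worth, the freeloader economics are easy to sketch. Every number below is hypothetical; the point is only that the expected value stays positive whenever the identity-minting cost is below the payout collected before detection:

```python
# Hypothetical freeloader economics: mint cheap identities, claim jobs, fake the
# micro-benchmark numbers, collect the "few seconds worth" of micropayments, repeat.
identity_mint_cost      = 0.50   # PoW cost to generate a "mined" pubkey (made up)
payout_before_detection = 2.00   # micropayments collected before the publisher dumps me
jobs_per_identity       = 3      # announcements claimed before reputation catches up

profit_per_identity = jobs_per_identity * payout_before_detection - identity_mint_cost
print(f"expected profit per throwaway identity: {profit_per_identity:.2f}")
```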
I definitely forgot several more and they appear on the docs.
What doesn't appear in the docs, or in your subsequent explanations, is the set of solutions that you keep talking about. I see a lot of mitigation (almost none of it even novel) but no resolutions. Where have you solved even a single previously unsolved problem? Which problem, and what is the solution?
Can you really not sum up your claimed invention? If you can't, I'm suspicious of the claim. (Does anyone else out there feel like they've met this obligation? Is it just me, missing something obvious? I'm doubtful.)
Service discovery (or "grid" facilities) is a commonly leveraged feature in this market, particularly among the big resource consumers. Your network doesn't appear to be capable of matching up buyer and seller, and carrying out price discovery, on anything more than basic infrastructural criteria. Considering the problem of authenticity, I'm skeptical that the network can succeed in price discovery even on just these "low level" resource allocations, since any two instances of the same resource are not likely to be of equivalent utility, and certainly can't be assumed to be. (How can you price an asset that you can't meaningfully or reliably qualify (or even quantify) until after you have already transacted for it?)
See above regarding the pricing algorithm which addresses exactly those issues.
EH? Maybe you misunderstood what I meant by service discovery and pricing. Pricing raw resources is one thing, but the big consumers are less interested in shopping based on low-level hardware metrics and more interested in shopping based on specific service definitions. (If current trends continue, this will only become increasingly true.) Your system offers a publisher nothing on which to base a query for a particularly structured SOA.
As for matching buyers and sellers, we don't do that, the publisher announces they want to rent computers, while publishing their ip address, then interested clients connect to them and a negotiation begins without any 3rd party interference.
Again, this (like much of your writing, it seems) is contradictory. Either the market is free and the system does nothing to pair a publisher with a worker, or the market is partially controlled and the system attempts to match an appropriate worker to a publisher. You've made several references to the client being risk-averse, implying that the system is actually partially regulated. You can't then go on to claim that the system does nothing to match buyers and sellers and is entirely free.
In any case, I'm not sure what that statement has to do with the service discovery concern.
The structure is not rigid at all - the contrary, it allows full control to the user.
The algorithm is also agnostic to all kinds of resources -- it even covers the unknown ones!! That's a really cool mathematical result.
I've already touched on this above, so I won't reiterate.
Can you show this "really cool mathematical result" somehow? I'd be interested to see such a proof, and even more interested in throwing such a proof past a few category theorist friends. I'm highly skeptical that such a result can be reasonably had, as it is difficult to form meaningful morphisms over unknown sets. Surely such a mathematical result would be of great interest in the philosophical community!
I'm putting this on my "big list of unfounded claims made in the crypto-currency space that I personally believe to be intractable and won't hold my breath to be shown."
(Seems like that list is getting bigger every day now!)
Where is the literature that I've missed that "actually solved" any of these problems? Where is this significant innovation that suddenly makes the CPUShare market "work" just because we've thrown in a blockchain and PoW puzzles around identity?
(I just use CPUShare as the example because it is so very close to your model. They even had a focus on a video streaming service too! Wait, are you secretly Arcangeli trying to resurrect CPUShare?)
What is the "elevator pitch" for why someone should use this technology over the easier, safer, (likely) cheaper, and more flexible option of purchasing overflow capacity on open markets from established players? Buying from amazon services (the canonical example, ofc) takes seconds, requires no thought, doesn't rely on the security practices of many anonymous people, doesn't carry redundant costs and overheads (ok AWS isn't the best example of this, but at least I don't have to buy 2+ instances to be able to have any faith in any 1) and offers whatever trendy new infrastructure tech comes along to optimize your application.
Don't misunderstand, I certainly believe these problems are all solvable for a decentralized resource exchange. My only confusion is over your assertions that the problems are already solved. There is nothing in the materials that is a solution. There are only expensive and partial mitigations for specific cases, and they aren't the cases people will pragmatically care about.
As for Zennet vs AWS, see detailed (yet partial) table on the "About Zennet" article.
Another reply that doesn't match, at all, what it is replying to. I'm seeing some persistent themes to your writing.
If you haven't seen above, we renamed from Xennet cause of this.
This one really baffled me.
I did see this. We even discussed it.
Did you just forget? Bad short term memory? Substance (ab)use?
It is going to be difficult to carry out an appropriate discourse if you're not aware of what we talked about less than 10 posts ago. As a potential investor, this would be particularly troubling to me.
I think that what I wrote till now shows that many issues were thought and answers were given.
It shows that many issues were thought about. (It also shows that a few more were not.)
It doesn't show any answers; it just tries to explain how it'll all somehow work out if the publisher does these expensive and painful mitigations. The reality is that even if the publisher does the expensive and painful things, there is no rational reason to believe anything will work out.
Please rethink about it and please do share further thoughts.
I've thought and re-thought about it quite a bit. Anyone who knows me can tell you that this (thinking about, analysing, and assessing systems) is pretty much the only thing I do.
You have not really addressed many of my points, such as general convenience/usability as compared to traditional providers, or how the system can expect to take away market share from established "decentralized" private trade that already occurs.
I've shared my pertinent thoughts, and now re-shared them. I do have some further thoughts, but I won't share those because I have very little interest in claiming your 1BTC. I have three such further thoughts now.
(As someone who is involved, intimately, with that set you earlier referred to as "HPC and Big Data giants," I think maybe I will see if they would offer more for my further thoughts. I really hope you're right, and that these players do feel threatened, because then they would give me lots of money for these thoughts, yah? If they're not threatened, then oh well, I can make lots of money from naive publishers instead, right?)