Author Topic: [PRE-ANN][ZEN][Pre-sale] Zennet: Decentralized Supercomputer - Official Thread  (Read 57103 times)
HunterMinerCrafter (Sr. Member, Activity: 434, Merit: 250)
September 16, 2014, 04:43:04 AM | #81

Up to now, even with media exposure and a bounty, no one has found a flaw.

No flaw has been disclosed.  There is a subtle difference.  (Presumably anyone finding a significant flaw would stand to gain more by not disclosing it, no?)
ohad (Hero Member, Activity: 897, Merit: 1000, http://idni.org)
September 16, 2014, 05:08:47 AM | #82

Up to now, even with media exposure and a bounty, no one has found a flaw.

No flaw has been disclosed.  There is a subtle difference.  (Presumably anyone finding a significant flaw would stand to gain more by not disclosing it, no?)

Not really. Think of the HPC and Big Data giants. They have a great interest in finding a flaw. Many of them contact us to see how they will be able to keep their business with Xennet. Speaking of Xennet, let me say that we'll change its name soon, because of https://www.dropbox.com/s/am2g3jk5jo60itz/XENNET%20Ltr.pdf?dl=0

tobeaj2mer01 (Legendary, Activity: 1098, Merit: 1000)
September 16, 2014, 06:34:47 AM | #83


My participation depends only on the amount of BTC you're looking to raise. If you're looking to raise in the area of 600, then I'm all in and will even send a respectable amount of BTC.... Nevertheless, if it's going to be another SYS, SWARM, or other monstrous ICO, then I'm out of here..... Please seriously consider a cap on the total amount raised... too many mega ICOs lately, and practically all of them very disappointing.
+1

Gentlemen,
As YNWA2806 wrote, "tons of code". If you can make this product with only 600 BTC, please come work with us. You must be top devs.
Please take some time to understand the size of the project (Xennet, XenFS, Xentube, and more). Also note the various areas of expertise needed. Personally I have them all, but this project is too big to finish by myself within a reasonable time.

So when do you think the ICO will launch, and about how high will the market cap be?

Full details will be given in a few weeks (or even days).
As for the market cap, recall that this is a much larger market than any known so far in the world of crypto (I'm speaking about the market for distributed arbitrary native computation, of course). So I expect it to begin in the billions. We also thought a lot about a fair coin distribution.
As said, details will be released very soon.

Did you do technical feasibility research for this coin? Sometimes theory is good but not realistic. And how can we know that you or your team has the ability to complete this project? Can you share some information?

Feasibility research was done intensively and is reflected in the public technical documents, which are far deeper and broader than is common in new crypto projects.
Up to now, even with media exposure and a bounty, no one has found a flaw.
You're welcome to check our devs' LinkedIn profiles and get some impression of our ability, in addition to the professional documents.
We will gladly get into any public technical questions, discussions, suggestions, etc.
So take some time to read the docs, and you might find a flaw and win the bounty!


Thanks for the explanation. Where are your team's LinkedIn profiles? Can you post the links in the OP?

HunterMinerCrafter (Sr. Member, Activity: 434, Merit: 250)
September 16, 2014, 06:40:55 AM | #84

Not really. Think of the HPC and Big Data giants. They have a great interest to find a flaw.

They also probably have the greatest interest not to disclose it to you.

Quote
Many of them contact us to see how they will be able to keep their business with Xennet.

I have trouble believing any major players are really threatened.  Xennet isn't the first attempt at a decentralized resource exchange.  Heck, many of them even do their own "decentralized" resource exchange, already.  ;)

Quote
Speaking of Xennet, let me say that we'll change its name soon, because of https://www.dropbox.com/s/am2g3jk5jo60itz/XENNET%20Ltr.pdf?dl=0

Interesting.  I'm sure some other coins have gotten similar C&D notices lately, but you're the first I've seen who has said so publicly.
xenmaster (OP) (Newbie, Activity: 28, Merit: 0)
September 19, 2014, 12:45:57 AM | #85

A partial list of LinkedIn profiles now appears at the bottom of the OP.
Note that we have begun making the transition from Xennet to Zennet.
ohad (Hero Member, Activity: 897, Merit: 1000, http://idni.org)
September 19, 2014, 12:54:25 AM | #86

Not really. Think of the HPC and Big Data giants. They have a great interest in finding a flaw.
They also probably have the greatest interest not to disclose it to you.
Depends. There are many talented people who would be happy to get 1 BTC but are not really into ruining networks.

I have trouble believing any major players are really threatened.  Xennet isn't the first attempt at a decentralized resource exchange.  Heck, many of them even do their own "decentralized" resource exchange, already.

I have never heard of any such successful attempt; please let me know if you have. Yes, they are threatened; they say so themselves. We have indeed solved old problems that many have thought about, thanks to new cutting-edge technology.

Quote
Speaking of Xennet, let me say that we'll change its name soon, because of https://www.dropbox.com/s/am2g3jk5jo60itz/XENNET%20Ltr.pdf?dl=0
Interesting.  I'm sure some other coins have gotten similar C&D notices lately, but you're the first I've seen who has said so publicly.

How come you're so sure we're just like everyone else? ;)

bitcoinmon (Newbie, Activity: 10, Merit: 0)
September 19, 2014, 12:54:41 AM | #87

I don't know about this one yet, but I'll keep an eye on it for sure.
nonocoin (Member, Activity: 109, Merit: 10)
September 19, 2014, 01:04:16 AM | #88

Let's take on the big boys in their own backyard! ;D ;D

HunterMinerCrafter (Sr. Member, Activity: 434, Merit: 250)
September 19, 2014, 04:04:51 AM | #89

Not really. Think of the HPC and Big Data giants. They have a great interest in finding a flaw.
They also probably have the greatest interest not to disclose it to you.
Depends. There are many talented people who would be happy to get 1 BTC but are not really into ruining networks.

These would be mostly or entirely disjoint sets, no?  I'm not sure what the one statement really says about the other.

Quote
I have never heard of any such successful attempt; please let me know if you have. Yes, they are threatened; they say so themselves. We have indeed solved old problems that many have thought about, thanks to new cutting-edge technology.

Seccomp is in the Linux kernel largely because of CPUShare and related projects.  You are correct that none were largely "successful", in the sense that none gained broad mass-market adoption.  There were many attempts of various sorts.  None survived, unless you count Globus-style initiatives, and I don't.
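
(For context, the seccomp "strict mode" that CPUShare motivated is still tiny to invoke. A minimal sketch via ctypes, assuming Linux on x86-64; the constants come from the kernel headers, and a raw exit syscall is used because even exit_group falls outside strict mode's whitelist. A real worker would exec a static sandboxed binary rather than run CPython, which may allocate and get killed:)

Code:
import ctypes
import os

# Constants from <linux/prctl.h> and <linux/seccomp.h>.
PR_SET_SECCOMP = 22
SECCOMP_MODE_STRICT = 1   # whitelist: read, write, exit, sigreturn

libc = ctypes.CDLL("libc.so.6", use_errno=True)

pid = os.fork()
if pid == 0:
    # Child locks itself down before touching untrusted work.
    if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
        os._exit(1)               # prctl failed; sandbox never engaged
    os.write(1, b"sandboxed\n")   # write(2) is still allowed
    libc.syscall(60, 0)           # raw exit(2) on x86-64; exit_group would be SIGKILLed
os.waitpid(pid, 0)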

As far as I can tell, you have not presented solutions to any of the "old problems" that kept these sorts of projects from taking off in the past.  Your model actually seems largely reiterative of them, with the exception of the introduction of crypto for payment. (Though CPUShare markets did use escrow models to achieve the same goals.)

Your model seems to suffer from all of the same problems of lacking convenience, security and data privacy, authentication, rich service discovery, and adaptability.

How does this really intend to compete with the "semi-private" overflow trade commonly practiced by this market already?  How are you going to take market share from this without some advantage, and with so many potential disadvantages?

Quote
Quote
Speaking of Xennet, let me say that we'll change its name soon, because of https://www.dropbox.com/s/am2g3jk5jo60itz/XENNET%20Ltr.pdf?dl=0
Interesting.  I'm sure some other coins have gotten similar C&D notices lately, but you're the first I've seen who has said so publicly.

How come you're so sure we're just like everyone else? ;)

Huh?  Who said anything about anyone being just like everyone else?
ohad (Hero Member, Activity: 897, Merit: 1000, http://idni.org)
September 19, 2014, 04:09:25 AM | #90


As far as I can tell, you have not presented solutions to any of the "old problems" that kept these sorts of projects from taking off in the past.  Your model actually seems largely reiterative of them, with the exception of the introduction of crypto for payment. (Though CPUShare markets did use escrow models to achieve the same goals.)

Your model seems to suffer from all of the same problems of lacking convenience, security and data privacy, authentication, rich service discovery, and adaptability.

How does this really intend to compete with the "semi-private" overflow trade commonly practiced by this market already?  How are you going to take market share from this without some advantage, and with so many potential disadvantages?

That's exactly the point, bro. Go over the literature and see how we actually solved the real underlying problems, which had been open for a long time. See also the Q&A in this thread. I began working on it ~1 year ago. It's far from being only virtualization plus blockchain, and there are several very significant proprietary innovations. All of this can be seen in the docs.

HunterMinerCrafter (Sr. Member, Activity: 434, Merit: 250)
September 19, 2014, 05:40:20 AM | #91

That's exactly the point, bro. Go over the literature and see how we actually solved the real underlying problems, which had been open for a long time.

I've read over all of the materials that you've made available, "bro."

I've re-read them.

I still don't see where the problems get solved.

Quote
See also the Q&A in this thread. I began working on it ~1 year ago. It's far from being only virtualization plus blockchain, and there are several very significant proprietary innovations. All of this can be seen in the docs.

Where?  Maybe I'm missing something, but what is the actual innovation here?  How is this preferable to spot-priced surplus resources from any of the big players?  What is the competitive advantage?  Why will XZennet "make it" when every prior attempt failed?

The workflow is inconvenient.  Reliably executing a batch of jobs will carry a lot of overhead in launching additional jobs, continually monitoring your service quality, managing reputation scores, etc.  If you don't expend this overhead (in time, money, and effort) you will pay for service you don't receive or, worse, will end up with incorrect results.

By launching jobs, you're trusting in the security of a lot of random people.  As you've said, you have to assume many of these people will be downright malicious.  Sure, you can cut off computation with them, but by then they may already be selling off your data and/or code to a third party.  Even if the service provided is entirely altruistic the security on the host system might be lax, exposing your processes to third parties anyway, and in a way that you couldn't even detect as the sandbox environment precludes any audit trail over it.  Worse yet, your only recourse after the fact is a ding on the provider's reputation score.

Since authenticity of service can't be validated beyond a pseudonym and a reputation score, you can't assume computation to be done correctly from any given provider.  You are only partly correct that this can be exponentially mitigated by simply running the computation multiple times and comparing outputs - for some types of process the output would never be expected to match and you could never know if discrepancy was due to platform differences, context, or foul play.  At best this makes for extra cost in redundant computations, but in most cases it will go far beyond that.

Service discovery (or "grid" facilities) is a commonly leveraged feature in this market, particularly among the big resource consumers.  Your network doesn't appear to be capable of matching up buyer and seller, and carrying out price discovery, on anything more than basic infrastructural criteria.  Considering the problem of authenticity, I'm skeptical that the network can succeed in price discovery even on just these "low level" resource allocations, since any two instances of the same resource are not likely to be of an equivalent utility, and certainly can't be assumed as such.  (How can you price an asset that you can't meaningfully or reliably qualify (or even quantify) until after you have already transacted for it?)

How can this model stay competitive with such a rigid structure?  You briefly/vaguely mention GPUs in part of some hand waving, but demonstrate no real plan for dealing with any other infrastructure resource, in general.  The technologies employed in HPC are very much a moving target, more so than most other data-center housed applications.  Your network offers a very prescriptive "one size fits all" solution which is not likely to be ideal for anyone, and is likely to be sub-optimal for almost everyone.

Where is the literature that I've missed that "actually solved" any of these problems?  Where is this significant innovation that suddenly makes CPUShare market "work" just because we've thrown in a blockchain and PoW puzzles around identity?

(I just use CPUShare as the example, because it is so very close to your model.  They even had focus on a video streaming service too!  Wait, are you secretly Arcangeli just trying to resurrect CPUShare?)

What is the "elevator pitch" for why someone should use this technology over the easier, safer, (likely) cheaper, and more flexible option of purchasing overflow capacity on open markets from established players?  Buying from amazon services (the canonical example, ofc) takes seconds, requires no thought, doesn't rely on the security practices of many anonymous people, doesn't carry redundant costs and overheads (ok AWS isn't the best example of this, but at least I don't have to buy 2+ instances to be able to have any faith in any 1) and offers whatever trendy new infrastructure tech comes along to optimize your application.

Don't misunderstand, I certainly believe these problems are all solvable for a decentralized resource exchange.  My only confusion is over your assertions that the problems are solved.  There is nothing in the materials that is a solution.  There are only expensive and partial mitigations for specific cases, which aren't actually the cases people will pragmatically care about.

ohad (Hero Member, Activity: 897, Merit: 1000, http://idni.org)
September 19, 2014, 06:09:19 AM (last edited 06:42:25 AM by ohad) | #92

OK, np. Let's go step by step:

Where?  Maybe I'm missing something, but what is the actual innovation here?  How is this preferable to spot-priced surplus resources from any of the big players?  What is the competitive advantage?  Why will XZennet "make it" when every prior attempt failed?

For example, the pricing model is totally innovative. It measures consumption much more fairly than common services do. It also optimally mitigates the differences between different hardware. The crux here is to make assumptions that are relevant only for distributed applications. Then comes the novel algorithm (an economic innovation by itself) for pricing with respect to unknown variables under a linearity assumption (which, surprisingly, holds in Zennet's case when talking about accumulated resource consumption metrics).
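
(To make the linearity assumption concrete, here is a hypothetical sketch of its shape; the counters, calibration runs, and numbers are all invented for illustration, and the actual algorithm is in the pricing doc. Treat the bill as a linear combination of accumulated resource counters and fit the per-unit prices from calibration runs:)

Code:
import numpy as np

# Accumulated resource counters per calibration run:
# (cpu-seconds, bytes read, bytes written, net bytes).  Invented numbers.
usage = np.array([
    [10.0, 1e6, 2e5, 5e5],
    [20.0, 2e6, 1e5, 1e6],
    [ 5.0, 5e5, 8e5, 2e5],
    [40.0, 8e6, 4e5, 3e6],
])
# Agreed total charge for each calibration run, in coin units.
cost = np.array([1.2, 2.3, 0.9, 5.1])

# Linearity assumption: cost = usage @ unit_prices, so the per-unit
# prices fall out of an ordinary least-squares fit.
unit_prices, *_ = np.linalg.lstsq(usage, cost, rcond=None)

# Charge a new accumulated-usage vector with the fitted coefficients.
new_usage = np.array([15.0, 1.5e6, 3e5, 7e5])
print("estimated charge:", new_usage @ unit_prices)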

The workflow is inconvenient.  Reliably executing a batch of jobs will carry a lot of overhead in launching additional jobs, continually monitoring your service quality, managing reputation scores, etc.  If you don't expend this overhead (in time, money, and effort) you will pay for service you don't receive or, worse, will end up with incorrect results.

I don't understand the difference between this situation and any other cloud service.
Also note that Zennet is a totally free market. All parties set their own desired price.

By launching jobs, you're trusting in the security of a lot of random people.  As you've said, you have to assume many of these people will be downright malicious.  Sure, you can cut off computation with them, but by then they may already be selling off your data and/or code to a third party.  Even if the service provided is entirely altruistic the security on the host system might be lax, exposing your processes to third parties anyway, and in a way that you couldn't even detect as the sandbox environment precludes any audit trail over it.  Worse yet, your only recourse after the fact is a ding on the provider's reputation score.

I cannot totally cut this risk, but I can give you control over the probability and expectation of loss, which come to reasonable values when massively distributed applications are in mind, together with the free market principle.
Examples of risk-reducing behaviors:
1. Each worker serves many (say 10) publishers at once, hence reducing the risk 10-fold for both parties.
2. The micropayment protocol takes place every few seconds.
3. Since the system is for massive distributed applications, the publisher can rent, say, 10K hosts and after a few seconds dump the worst 5K.
4. One may only serve known publishers such as universities.
5. One may offer extra reliability (like existing hosting firms) and charge appropriately. (For the last two points, all they have to do is configure their price/publishers on the client and put their address on their website so people will know which address to trust.)
6. If one computes the same job several times with different hosts, they can reduce the probability of miscalculation. While the required investment grows linearly, the risk vanishes exponentially (see the sketch below). (Now I see you wrote it as well -- recall this is a free market, so if the "acceptable" risk probability is, say, "do the calc 4 times", the price will be adjusted accordingly.)
7. Users can filter spammers by requiring work to be invested in pubkey generation (identity mining).
I definitely forgot several more; they appear in the docs.
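
(Regarding point 6, the cost/risk trade-off is easy to state under two strong assumptions: hosts cheat independently with probability p, and outputs are bit-comparable. Then k-fold replication costs k times as much while the chance that every replica is wrong falls as p^k. A toy illustration:)

Code:
def replication_tradeoff(p_cheat: float, k: int, unit_cost: float = 1.0):
    """Cost grows linearly in k; the probability that all k independently
    chosen hosts return a wrong result shrinks as p_cheat ** k."""
    return k * unit_cost, p_cheat ** k

for k in (1, 2, 4, 8):
    cost, risk = replication_tradeoff(p_cheat=0.1, k=k)
    print(f"k={k}: cost={cost:.0f}x  P(all replicas wrong)={risk:.0e}")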

Since authenticity of service can't be validated beyond a pseudonym and a reputation score, you can't assume computation to be done correctly from any given provider.  You are only partly correct that this can be exponentially mitigated by simply running the computation multiple times and comparing outputs - for some types of process the output would never be expected to match and you could never know if discrepancy was due to platform differences, context, or foul play.  At best this makes for extra cost in redundant computations, but in most cases it will go far beyond that.

Service discovery (or "grid" facilities) is a commonly leveraged feature in this market, particularly among the big resource consumers.  Your network doesn't appear to be capable of matching up buyer and seller, and carrying out price discovery, on anything more than basic infrastructural criteria.  Considering the problem of authenticity, I'm skeptical that the network can succeed in price discovery even on just these "low level" resource allocations, since any two instances of the same resource are not likely to be of an equivalent utility, and certainly can't be assumed as such.  (How can you price an asset that you can't meaningfully or reliably qualify (or even quantify) until after you have already transacted for it?)

See above regarding the pricing algorithm which addresses exactly those issues.
As for matching buyers and sellers, we don't do that: the publisher announces that they want to rent computers, publishing their IP address; then interested clients connect to them and a negotiation begins, without any third-party interference.
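
(A minimal sketch of that announce-and-negotiate flow from the worker's side, assuming a line-delimited JSON exchange; the message format and field names here are invented, since the real wire protocol isn't specified in this thread:)

Code:
import json
import socket

def negotiate(publisher_ip: str, port: int, my_ask: float) -> bool:
    """Dial a publisher that announced (ip, port), read its price offer,
    and accept only if the bid meets this worker's ask."""
    with socket.create_connection((publisher_ip, port)) as sock:
        f = sock.makefile("rw")
        offer = json.loads(f.readline())   # e.g. {"type": "offer", "bid": 0.8}
        if offer.get("type") == "offer" and offer.get("bid", 0.0) >= my_ask:
            f.write(json.dumps({"type": "accept", "ask": my_ask}) + "\n")
            f.flush()
            return True
        f.write(json.dumps({"type": "reject"}) + "\n")
        f.flush()
        return False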

How can this model stay competitive with such a rigid structure?  You briefly/vaguely mention GPUs in part of some hand waving, but demonstrate no real plan for dealing with any other infrastructure resource, in general.  The technologies employed in HPC are very much a moving target, more so than most other data-center housed applications.  Your network offers a very prescriptive "one size fits all" solution which is not likely to be ideal for anyone, and is likely to be sub-optimal for almost everyone.

The structure is not rigid at all; on the contrary, it allows full control to the user.
The algorithm is also agnostic to all kinds of resources -- it even covers the unknown ones!! That's a really cool mathematical result.

Where is the literature that I've missed that "actually solved" any of these problems?  Where is this significant innovation that suddenly makes CPUShare market "work" just because we've thrown in a blockchain and PoW puzzles around identity?

(I just use CPUShare as the example, because it is so very close to your model.  They even had focus on a video streaming service too!  Wait, are you secretly Arcangeli just trying to resurrect CPUShare?)

What is the "elevator pitch" for why someone should use this technology over the easier, safer, (likely) cheaper, and more flexible option of purchasing overflow capacity on open markets from established players?  Buying from amazon services (the canonical example, ofc) takes seconds, requires no thought, doesn't rely on the security practices of many anonymous people, doesn't carry redundant costs and overheads (ok AWS isn't the best example of this, but at least I don't have to buy 2+ instances to be able to have any faith in any 1) and offers whatever trendy new infrastructure tech comes along to optimize your application.

Don't misunderstand, I certainly believe these problems are all solvable for a decentralized resource exchange.  My only confusion is over your assertions that the problems are solved.  There is nothing in the materials that is a solution.  There are only expensive and partial mitigations for specific cases, which aren't actually the cases people will pragmatically care about.


As for Zennet vs. AWS, see the detailed (yet partial) table in the "About Zennet" article.
In case you haven't seen it above: we renamed from Xennet because of this.

I think that what I've written so far shows that many issues were thought through and answers were given.
Please rethink it, and please do share further thoughts.

profitofthegods (Sr. Member, Activity: 378, Merit: 250)
September 19, 2014, 09:47:05 AM | #93



The workflow is inconvenient.  Reliably executing a batch of jobs will carry a lot of overhead in launching additional jobs, continually monitoring your service quality, managing reputation scores, etc.  If you don't expend this overhead (in time, money, and effort) you will pay for service you don't receive or, worse, will end up with incorrect results.



If it's possible for people to run this thing on a fairly normal home computer, then these extra overheads are not a problem, because compared to a commercial operation the 'miner' has substantial cost savings, from not having paid for dedicated hardware, facilities, staff, etc., which should more than make up for the additional computational costs.
xenmaster (OP) (Newbie, Activity: 28, Merit: 0)
September 19, 2014, 02:11:10 PM | #94

Ohad Asor has been invited to present Zennet at a Big Data & Crowd Computing conference in Amsterdam.
HunterMinerCrafter (Sr. Member, Activity: 434, Merit: 250)
September 19, 2014, 06:12:13 PM (last edited 06:22:29 PM by HunterMinerCrafter) | #95

For example, the pricing model is totally innovative.

What is the innovation?  What new contribution are you making?  I hear you saying over and over again "it's innovative, it's innovative, no really it's TOTALLY innovative" but you haven't yet come out and said what the innovation is.

Averaging some procfs samples is hardly innovative.  Many providers (at least the ones that don't charge based on straight wall-clock time) do this.  What are you doing in this pricing model that is new?
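
(For reference, the kind of procfs sampling in question is a few lines of bookkeeping; a sketch of metering a process's accumulated CPU ticks on Linux:)

Code:
import time

def cpu_ticks(pid: int) -> int:
    """Accumulated user+system jiffies from /proc/<pid>/stat."""
    with open(f"/proc/{pid}/stat") as f:
        # The comm field may contain spaces; split after its closing paren,
        # so fields[0] is the state and utime/stime land at indices 11 and 12.
        fields = f.read().rsplit(")", 1)[1].split()
    return int(fields[11]) + int(fields[12])

def sample(pid: int, interval: float = 1.0):
    """Yield per-interval CPU-tick deltas: the 'averaged procfs samples'
    a metering agent would bill against."""
    prev = cpu_ticks(pid)
    while True:
        time.sleep(interval)
        cur = cpu_ticks(pid)
        yield cur - prev
        prev = cur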

Quote
It measures consumption much more fairly than common services do.

Except that it doesn't, and this is somewhat central to my point.  It measures consumption in (more or less) the same way as everyone else, but instead of having a company to complain to or sue if they lie about their stats the user only has a pseudonym to leave a comment on.  The system removes the actual accountability and replaces it with a trust metric, and further makes it exceedingly easy to lie about stats.

Quote
It also optimally mitigates the differences between different hardware.

How?  This is certainly not explained, or not explained well, in your materials.  I see some hand waving about risk analysis in the client, but nothing that actually does anything about it.

Quote
The crux here is to make assumptions that are relevant only for distributed applications.

Ok, what assumptions are we talking about?  You say these things, but never complete your thoughts.  It gets tedious to try to draw details out of you like this.

Quote
Then comes the novel algorithm (an economic innovation by itself) for pricing with respect to unknown variables under a linearity assumption (which, surprisingly, holds in Zennet's case when talking about accumulated resource consumption metrics).

Aside from the novelty claim (re my first point, I don't see the novelty), I see two issues here.  First, the linearity assumption is a big part of the rigidity.  It applies fine to a traditional Harvard-architecture CPU, but breaks down on any "real" tech.  How does your pricing model price a neuromorphic architecture?  How does your pricing model measure a recombinant FPGA array?  How do you account for hardware that literally changes (non-linearly!) as the process runs?  (The answer is self-evident from your own statement: right now, you can't.)

Second, the pricing model only prices against the micro-benchmarks (indicating, right there, that the performance profile in the pricing model will differ from that of the applied application!) and doesn't account for environmental context.  With modern distributed systems, most of the performance profile comes down to I/O wait states (crossing para-virtualization thresholds) and bursting semantics.  I don't see either of these actually being accounted for.  (Maybe you should spend some time with engineers and AEs from AWS, Google, Rackspace, MediaFire, Joyent, DigitalOcean, etc. and find out what your future users are actually going to be concerned about in their pricing considerations.)

Quote
The workflow is inconvenient.  Reliably executing a batch of jobs will carry a lot of overhead in launching additional jobs, continually monitoring your service quality, managing reputation scores, etc.  If you don't expend this overhead (in time, money, and effort) you will pay for service you don't receive or, worse, will end up with incorrect results.

I don't understand the difference between this situation and any other cloud service.

With any other cloud service I can spin up only one instance and be fairly confident that it will perform "as advertised" and if it doesn't I can sue them for a refund.

With your service I can't spin up one instance and be confident about anything.  If it turns out that one instance is not as advertised - ah well, too late - there is no recourse for compensation, no jurisdictional court to go to, no company to boycott, nothing but a ding on a trust rating.  (Again, if you think a trust rating works in a vacuum like this, you should just look around at some of the trust-rating behavior on these forums.  It doesn't.)

Quote
Also note that Zennet is a totally free market. All parties set their own desired price.

I've been ignoring this catch-22 statement up to now.  Either the pricing model is a free market, or the pricing model is controlled by your formula and magical risk-averse client.  It can't be both free and regulated by protocol; this is a contradiction.  Either your users manage the pricing or the software manages the pricing for them; which is it?  (I'm not sure either is sustainable without changes elsewhere in the model.)

Quote
I cannot totally cut this risk,

Now we get to why I'm actually "all riled up" here.  You could totally cut that risk.  Over the past decade, real solutions to these problems have been found.  You aren't applying them.

Why the heck not?!?!?!

Also, this is another of your contradictory statements.  You simultaneously claim some innovative new solutions to the problems at hand, and that you cannot solve them.

Quote
but I can give you control over the probability and expectation of loss, which come to reasonable values when massively distributed applications are in mind, together with the free market principle.

Again, setting aside the catch-22 assumption that the market is free (and not re-iterating that these are not solutions but costly partial mitigations), this statement still doesn't stand up to scrutiny.  The values actually converge to *less* reasonable sums as you scale up distribution!

Quote
Examples of risk-reducing behaviors:
1. Each worker serves many (say 10) publishers at once, hence reducing the risk 10-fold for both parties.

This reduces risk on the part of the worker, but increases risk on the part of the publisher by adding to the statistical variance of the delta between measured performance and applied performance on the part of the worker.  In other words, this behavior makes it safer (though not 10-fold, as you claim) for the worker as they diversify some, but their act of diversification introduces inaccuracy into the micro-benchmark.  Now my application's performance profile is being influenced by the behaviors of 9 other applications moment to moment.

Quote
2. The micropayment protocol takes place every few seconds.
3. Since the system is for massive distributed applications, the publisher can rent, say, 10K hosts and after a few seconds dump the worst 5K.

Another point that you keep reiterating but not expounding.  The problem here is the same as with any speculative investment: past performance does not guarantee (or even meaningfully predict) future results.  As a publisher, I spin up 10K instances and then dump half.  Now I've paid for twice as much work over those "few" seconds (a few seconds on 10K CPUs is an awfully long time, and gets expensive fast) and dumped half of my providers without knowing which in that half were malicious and which just had some unlucky bout of extra cache misses.  Further, I have no reason to believe that the 5K I'm left with won't suddenly start under-performing relative to the 5K that I've just ditched.

You could induce a counter-argument here by saying "well, just spin up 5K replacement nodes after you drop the first set, then re-sort and drop the 5K lowest, rinse, repeat", but this actually only exacerbates all of the other problems related to this facet of the model!  Now I'm *continuously* paying for those "few" wasted seconds over and over, I have to structure my application in such a way as to handle this sort of consistent soft failure (so doing anything like MRM coordination in a useful way becomes both very difficult and laden with even more overheads!), and I still have no reason to believe that my system will ever converge onto an optimal set of providers.  (Maybe there simply aren't 5K non-malicious workers up at the moment, or maybe some of the malicious nodes are preventing my reaching that set!)
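
(To put a number on that objection, here is a toy simulation of the rent-and-cull loop under the stated premise that observed scores only weakly track true quality; all quantities are invented for illustration:)

Code:
import random

def rent_and_cull(n_hosts=10_000, keep=5_000, rounds=3, cost_per_round=1.0, seed=1):
    """Each host has a hidden quality; each round's observed score is that
    quality plus heavy noise.  Tally what the publisher pays to hosts it
    then throws away."""
    rng = random.Random(seed)
    hosts = [rng.random() for _ in range(n_hosts)]             # hidden quality
    wasted = 0.0
    for _ in range(rounds):
        scored = sorted(((q + rng.gauss(0, 0.5), q) for q in hosts), reverse=True)
        wasted += (n_hosts - keep) * cost_per_round            # paid, then dumped
        hosts = [q for _, q in scored[:keep]]                  # keep the "best"
        hosts += [rng.random() for _ in range(n_hosts - keep)] # refill with unknowns
    return wasted

print("spent on dumped hosts:", rent_and_cull())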

Quote
4. One may only serve known publishers such as universities.

Again, this is a behavior that only mitigates risk for the worker.  The worker doesn't need much risk mitigation; they are largely in control of the whole transaction.  It's the buy side that needs the assurances.

Quote
5. One may offer extra reliability (like existing hosting firms) and charge appropriately. (For the last two points, all they have to do is configure their price/publishers on the client and put their address on their website so people will know which address to trust.)

Wait, now you're proposing "just re-sell AWS" as a compensatory control?!?!  This isn't exactly what I meant by competition, and it precludes reliable service from ever becoming priced competitively.  (Why wouldn't the buy side just skip the middleman and go straight to the established hosting firm?)

Quote
6. If one computes the same job several times with different hosts, they can reduce the probability of miscalculation. While the required investment grows linearly, the risk vanishes exponentially. (Now I see you wrote it as well -- recall this is a free market, so if the "acceptable" risk probability is, say, "do the calc 4 times", the price will be adjusted accordingly.)

Did you miss the part where I pointed out that this behavior is only applicable to a subset of useful computations?  Here's a "real world hypothetical" related to an actual HPC initiative I was once involved with - ranking page-rank implementations by performance over time.  In other words, systematically and automatically comparing search engine result quality over time.

There are two places where this behavior directly fails to meet its goal.  First, there's the obvious problem of floating-point precision.  The numbers spit out from two identical software runs on two different architectures will not be the same, and the VM layer specifically precludes me from necessarily knowing the specifics of the semantics applied at the hardware.  Second, even if the architectures are identical as well, meaning the float semantics are identical, the algorithm itself lacks the necessary referential transparency to be able to make the assumption of consistency!  Instance A performing the query against the service might (usually WILL, in my experience!) alter the results issued to instance B for the same query!  (Unfortunately, most useful processes will fall into this side-effecting category.)
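
(The floating-point half of that is trivial to demonstrate: the same sum, evaluated in a different order, as different hardware, compilers, or thread schedules will do, need not be bit-identical:)

Code:
import random

random.seed(0)
xs = [random.uniform(-1e10, 1e10) for _ in range(100_000)]

a = sum(xs)            # one evaluation order
b = sum(sorted(xs))    # same numbers, different order
print(a == b, a - b)   # typically False, with a nonzero difference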

Quote
7. Users can filter spammers by requiring work to be invested in pubkey generation (identity mining).

This does nothing to "filter spammers"; it only enforces a (somewhat arguably) fair distribution of identities.  All this does is establish a small cost to minting a new identity and give identity itself some base value.  In other words, you'll inevitably have some miners who never do a single job, and only mine new identities with which to attempt to claim announcements, without ever fulfilling anything more than the micro-benchmark step, claiming their "few seconds' worth" of payments.  (They don't even have to actually expend resources to run the micro-benchmarks; they can just make up some numbers to fill in.)

(Rationally, I don't see why any worker would ever bother to actually work when they simply "don't have to" and still get paid a bit for doing effectively nothing.)
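
(For reference, "identity mining" in this sense is just a proof-of-work condition on key generation. A sketch assuming a hash-leading-zero-bits rule, with random bytes standing in for real public keys, since the thread doesn't pin down the actual scheme:)

Code:
import hashlib
import os

def mine_identity(difficulty_bits: int = 16):
    """Grind candidate 'public keys' until SHA-256(pubkey) has the
    required number of leading zero bits.  Random bytes stand in for
    keys from a real keypair generator."""
    target = 1 << (256 - difficulty_bits)
    tries = 0
    while True:
        tries += 1
        pubkey = os.urandom(33)
        digest = hashlib.sha256(pubkey).digest()
        if int.from_bytes(digest, "big") < target:
            return pubkey, tries

pubkey, tries = mine_identity()
print(f"identity mined after {tries} hashes")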

Quote
I definitely forgot several more; they appear in the docs.

What doesn't appear in the docs, or in your subsequent explanations, are these solutions that you keep talking about.  I see a lot of mitigations (almost none of which are even novel) but no resolutions.  Where have you solved even a single previously unsolved problem?  Which problem, and what is that solution?

Can you really not sum up your claimed invention?  If you can't, I'm suspicious of the claim.  (Does anyone else out there feel like they've met this obligation?  Is it just me, missing something obvious?  I'm doubtful.)

Quote
Quote
Service discovery (or "grid" facilities) is a commonly leveraged feature in this market, particularly among the big resource consumers.  Your network doesn't appear to be capable of matching up buyer and seller, and carrying out price discovery, on anything more than basic infrastructural criteria.  Considering the problem of authenticity, I'm skeptical that the network can succeed in price discovery even on just these "low level" resource allocations, since any two instances of the same resource are not likely to be of an equivalent utility, and certainly can't be assumed as such.  (How can you price an asset that you can't meaningfully or reliably qualify (or even quantify) until after you have already transacted for it?)

See above regarding the pricing algorithm which addresses exactly those issues.

EH?  Maybe you misunderstood what I meant by service discovery and pricing.  Pricing raw resources is one thing, but the big consumers are less interested in shopping based on low level hardware metrics, and more interested in shopping based on specific service definitions.  (If current trends continue this will only become increasingly true.)  Your system offers a publisher nothing to base a query for a particularly structured SOA on.

Quote
As for matching buyers and sellers, we don't do that: the publisher announces that they want to rent computers, publishing their IP address; then interested clients connect to them and a negotiation begins, without any third-party interference.

Again, this (like much of your writing, it seems) is contradictory.  Either the market is free and the system does nothing to pair a publisher with a worker, or the market is partially controlled and the system attempts to match an appropriate worker to a publisher.  You've made several references to the client being risk averse implying that the system is actually partially regulated.  You can't subsequently go on to claim that the system does nothing to match buyers and sellers and is entirely free.

In any case, I'm not sure what that statement has to do with the service discovery concern.

Quote
The structure is not rigid at all; on the contrary, it allows full control to the user.
The algorithm is also agnostic to all kinds of resources -- it even covers the unknown ones!! That's a really cool mathematical result.

I've already touched on this above, so I won't reiterate.

Can you show this "really cool mathematical result" somehow?  I'd be interested to see such a proof, and even more interested in throwing such a proof past a few category-theorist friends.  I'm highly skeptical that such a result can reasonably be had, as it is difficult to form meaningful morphisms over unknown sets.  Surely such a mathematical result would be of great interest in the philosophical community!  :D

I'm putting this on my "big list of unfounded claims made in the crypto-currency space that I personally believe to be intractable and won't hold my breath to be shown."

(Seems like that list is getting bigger every day now!  :-\)

Quote
Where is the literature that I've missed that "actually solved" any of these problems?  Where is this significant innovation that suddenly makes CPUShare market "work" just because we've thrown in a blockchain and PoW puzzles around identity?

(I just use CPUShare as the example, because it is so very close to your model.  They even had focus on a video streaming service too!  Wait, are you secretly Arcangeli just trying to resurrect CPUShare?)

What is the "elevator pitch" for why someone should use this technology over the easier, safer, (likely) cheaper, and more flexible option of purchasing overflow capacity on open markets from established players?  Buying from amazon services (the canonical example, ofc) takes seconds, requires no thought, doesn't rely on the security practices of many anonymous people, doesn't carry redundant costs and overheads (ok AWS isn't the best example of this, but at least I don't have to buy 2+ instances to be able to have any faith in any 1) and offers whatever trendy new infrastructure tech comes along to optimize your application.

Don't misunderstand, I certainly believe these problems are all solvable for a decentralized resource exchange.  My only confusion is over your assertions that the problems are solved.  There is nothing in the materials that is a solution.  There are only expensive and partial mitigations for specific cases, which aren't actually the cases people will pragmatically care about.


As for Zennet vs. AWS, see the detailed (yet partial) table in the "About Zennet" article.

Another reply that doesn't match, at all, what it is replying to.  I'm seeing some persistent themes in your writing.

Quote
In case you haven't seen it above: we renamed from Xennet because of this.

This one really baffled me.

I did see this.  We even discussed it.

Did you just forget?  Bad short term memory?  Substance (ab)use?

It is going to be difficult to carry out an appropriate discourse if you're not aware of what we talked about less than 10 posts ago.  As a potential investor, this would be particularly troubling to me.

Quote
I think that what I've written so far shows that many issues were thought through and answers were given.

It shows that many issues were thought about.  (It also shows that a few more were not.)

It doesn't show any answers; it just tries to explain how it'll all somehow work out if the publisher does these expensive and painful things to mitigate.  The reality is that even if the publisher does the expensive and painful things, there is no rational reason to believe anything will work out.

Quote
Please rethink it, and please do share further thoughts.

I've thought and re-thought about it quite a bit.  Anyone who knows me can tell you that this (thinking about, analysing, and assessing systems) is pretty much the only thing I do.

You have not really addressed many of my points, such as general convenience/usability as compared to traditional providers, or how the system can expect to take away market share from established "decentralized" private trade that already occurs.

I've shared my pertinent thoughts, and now re-shared them.  I do have some further thoughts, but I won't share those because I have very little interest in claiming your 1 BTC.  I have three such further thoughts now. :-X

(As someone who is involved, intimately, with that set that you earlier referred to as "HPC and Big Data giants", I think maybe I will see if they would offer more for my further thoughts.  I really hope you're right, and that these players do feel threatened, because then they would give me lots of money for these thoughts, yah?  If they're not threatened, then ah well, I can make lots of money from naive publishers instead, right?)

ohad (Hero Member, Activity: 897, Merit: 1000, http://idni.org)
September 19, 2014, 06:24:52 PM | #96

HunterMinerCrafter, you indeed touch on the real points. There is a lot of misunderstanding, but I do take into account that the deep stuff is covered very briefly in the docs. Also recall that I know nothing about the background of each BTT member.
Those issues are indeed more delicate. I can write the emphases here, but let me suggest we do a recorded video chat where you ask all your questions and I give all the small details, so the rest of the community will be able to enjoy this info.
I think that'll be really cool. But if for some reason you don't want to do it, I'll write my answers in detail here.

ohad (Hero Member, Activity: 897, Merit: 1000, http://idni.org)
September 19, 2014, 07:56:04 PM | #97

In order to make the pricing algorithm clearer, I wrote and uploaded this detailed explanation.

HunterMinerCrafter (Sr. Member, Activity: 434, Merit: 250)
September 19, 2014, 08:13:31 PM | #98

HunterMinerCrafter, you indeed touch on the real points. There is a lot of misunderstanding,

All I am asking of you (and of pretty much any coin developer I speak with, if you look over my post history outside of the MOTO thread) is to make me understand.

Quote
but I do take into account that the deep stuff is covered very briefly in the docs.

Your team has only one job right now: deepen it!  (Don't broaden it; that will work against you.)

A very simple way to do this would, of course, simply be to release working source code.  If you do have actual solutions to these problems, it will become self-evident quite quickly.

Quote
Also recall that I know nothing about the background of each BTT member.

Ok, let me give you some quick background.  I'm a "seasoned" professional in the field of computer science who also has just about 5.5 years of experience in working with logical formalism around crypto currency.  (The number of us (surviving) who can make this claim, aside from perhaps Satoshi, can fit in a small elevator comfortably, and we pretty much all know each other.  A moment of silence, please, for those who can no longer attend our elevator.)




Interestingly, I also have just over a decade of experience with HPC applications and, more generally, data-center operations.  (This would need to be a very large elevator.)  I like critiquing coins' claims in general, but yours is particularly appealing to my razor because it is very much "in my world."

If you'd like some more detailed background on my qualifications as a critic of your claims we can discuss that privately.

Again, I am only trying to fully understand the value proposition.  If you can't make me understand then you might have some trouble making average Joe understand.  If you can't make average Joe understand, your project doesn't have to worry about the protocol actually working anyway because it will fail just from lack of any traction.

Quote
Those issues are indeed more delicate. I can write the emphases here, but let me suggest we do a recorded video chat where you ask all your questions and I give all the small details, so the rest of the community will be able to enjoy this info.

Just write them down and publish them.  Posts on a forum or a proper whitepaper; either will do the trick, as long as it makes a full (and not self-contradictory) treatment of the subject.

Even better, write it all down encoded in the form of a program that we can run.  You surely have to be doing that already anyway, right?  ;)

Quote
I think that'll be really cool. But if for some reason you don't want to do it, I'll write my answers in detail here.

For a lot of reasons, I do not want to do that.

How about the three R&D folks just set up a tech-oriented IRC channel to hang out in, open to anyone interested in protocol detail?  I think this would better align with the actual goals of such a meeting.

ohad (Hero Member, Activity: 897, Merit: 1000, http://idni.org)
September 19, 2014, 08:19:16 PM | #99

I hope the doc I just posted here will give you a better understanding of the innovative pricing model.


Ok, let me give you some quick background.  I'm a "seasoned" professional in the field of computer science who also has just about 5.5 years of experience in working with logical formalism around crypto currency.  (The number of us (surviving) who can make this claim, aside from perhaps Satoshi, can fit in a small elevator comfortably, and we pretty much all know each other.  A moment of silence, please, for those who can no longer attend our elevator.)

Interestingly, I also have just over a decade of experience with HPC applications and, more generally, data-center operations.  (This would need to be a very large elevator.)  I like critiquing coins' claims in general, but yours is particularly appealing to my razor because it is very much "in my world."

If you'd like some more detailed background on my qualifications as a critic of your claims we can discuss that privately.

Again, I am only trying to fully understand the value proposition.  If you can't make me understand then you might have some trouble making average Joe understand.  If you can't make average Joe understand, your project doesn't have to worry about the protocol actually working anyway because it will fail just from lack of any traction.


I'm really glad to have you here. I have been waiting a long time for deep criticism, and for the chance to show how we solved all the main issues. I'm 100% sure, and I put all my professional dignity on it, that I have reasonably practical answers to all your doubts. Let's keep going here. Tell me what you think of the pricing algo.

HunterMinerCrafter (Sr. Member, Activity: 434, Merit: 250)
September 19, 2014, 08:27:45 PM | #100

I'm really glad to have you here. I have been waiting a long time for deep criticism, and for the chance to show how we solved all the main issues. I'm 100% sure, and I put all my professional dignity on it, that I have reasonably practical answers to all your doubts.

I'm convinced that you don't have reasonably practical answers to all of my doubts, as you've already explicitly admitted that you lack answers to the biggest of them, in big letters on your GitHub:

"No POW"

If you can't authenticate the computation, then you have no reasonable *or* practical answer to some specific doubts.

As long as the workers don't have to prove that they do the work, I doubt they will.

Quote
Let's keep going here. Tell me what you think of the pricing algo.

I'm on my second read now.  I think you've quite nicely elided some of the specific problems for me.  ;)

P.S. I like that the new site is up on the new domain, but I don't like that it still has all the self-contradictory gobbledygook on the "documentation" page.  I really don't like that when I clicked the link in your OP to get to that page again, it took me to the old domain.  Just a heads-up that you might want to check your back-links.  ;)