Author Topic: [PRE-ANN][ZEN][Pre-sale] Zennet: Decentralized Supercomputer - Official Thread  (Read 56560 times)
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
August 12, 2014, 10:40:32 PM
Last edit: February 09, 2015, 11:38:08 AM by xenmaster
 #1

Videos
Short introductory video, and another one
10 Minutes Presentation from InsideBTC conference
Detailed introduction to Zennet
Presentation Video on Crowd Computing conference at Almere

Papers
About Zennet article (newer)
RFC (old)
Software Detailed Design
Pricing algo
Auctions draft
Tau-Chain Paper

Presentations
Almere Conference
Inside Bitcoin Conference

UI Mockup can be found at http://www.zennet.sc/xennet-ui/#/

Updates and important information posted here
Economic Aspects
Number of coins
Main design assumption and generation of coins
Draft of pre-sale terms
First pre-sale terms
Moving to a generalized Blockchain

Join our IRC channel #zennet @freenode

Coins can be pre-bought (currently only privately) by emailing Ohad at ohadasor@gmail.com


-- Zennet --
Entirely new software and architectures
Combined with the industry's most solid technologies
Creating a public decentralized Supercomputer
The Zen Protocol presents a novel reward mechanism
While the Blockchain makes it frictionless
We will make it possible to utilize and monetize
all hardware in the world. That's big.


The Zennet initiative is a public, distributed, and decentralized Supercomputer. It lives in the world of distributed and decentralized Blockchain applications and in the huge field of Big Data and High-Performance Computing (HPC). The latter has captured much of the attention and investment in the software market over the last few years, experiencing huge growth. HPC and Big Data are used in countless industries, including medicine, government, and marketing. Big Data alone is at least a $30 billion industry this year.

http://zennet.sc/images/infogr/p1.PNGhttp://zennet.sc/images/infogr/p2.jpg


Zennet is here to power this expansion with everyone's computational resources around the globe. Its most significant impacts on this market are drastically reduced costs and order-of-magnitude speed gains for HPC consumers.

http://zennet.sc/images/infogr/p3.PNGhttp://zennet.sc/images/infogr/p4.PNG

Computation power is traded on Zennet's open market platform. Anyone can rent computation power and use it to run arbitrary tasks. Anyone can monetize their hardware by offering unused computation power for sale.    
Zennet allows Publishers who need computation power to run arbitrary computational tasks. Computation power is supplied by Providers for a negotiated fee. A free-market infrastructure brings Publishers and Providers together. Publishers can hire many computers and safely run whatever they want on them, thanks to cutting-edge virtualization technology. Payment is continuous and frictionless, thanks to Blockchain technology, among other technologies discussed later on.

http://zennet.sc/images/infogr/p5.jpghttp://zennet.sc/images/infogr/p6.jpg

The network is 100% distributed and decentralized: there is no central entity of any kind, just like Bitcoin. All software will be open source. Publishers pay Providers directly, there is no middleman. Accordingly, there are no payable commissions, except for regular transaction fees which are paid to ZenCoin miners.       
It is a totally free market: all participants are free to pay or charge any rate they want. There are no restrictions. Hence, we put additional focus on customizability. We allow advanced participants to control all parameters and conditions of their nodes in a versatile way. On the other hand, simplicity and automation are made possible by making the client software implement automatic risk-reward considerations by default. In addition, we present a novel pricing model.       
Following is a sketch summarizing the general idea of the communication protocol between publisher A and provider B:        
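Since the diagram itself is not embedded in this post, here is a minimal, illustrative Python simulation of that flow (an ann on the blockchain, a provider that polls and connects, a payment channel object standing in for the 2-of-2 multisig, and payment accruing per metered unit of work). All names and numbers are hypothetical; this is a sketch, not Zennet code.

Code:
from dataclasses import dataclass, field

@dataclass
class Announcement:
    publisher: str
    ip: str
    hw: dict                      # e.g. {"cores": 4}
    price_per_unit: float         # simplified price (the real design prices canonical benchmarks)

@dataclass
class Blockchain:
    anns: list = field(default_factory=list)
    def broadcast(self, ann): self.anns.append(ann)       # step 1: publisher A posts an ann
    def poll(self): return list(self.anns)                 # step 2: provider B polls the chain

@dataclass
class PaymentChannel:                                      # stands in for the 2-of-2 multisig channel
    paid: float = 0.0
    def sign_increment(self, amount): self.paid += amount  # off-chain, frictionless update

def provider_serves(ann, work_units, capacity):
    """Provider connects to ann.ip, runs the work, and is paid per metered unit."""
    if capacity["cores"] < ann.hw["cores"]:
        return None                                        # filter: hardware does not match
    channel = PaymentChannel()
    for _ in range(work_units):
        channel.sign_increment(ann.price_per_unit)         # a payment every few seconds
    return channel

if __name__ == "__main__":
    chain = Blockchain()
    chain.broadcast(Announcement("A", "203.0.113.5", {"cores": 4}, price_per_unit=0.001))
    for ann in chain.poll():
        ch = provider_serves(ann, work_units=10, capacity={"cores": 8})
        print(f"provider earned {ch.paid:.3f} coins from publisher {ann.publisher}")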

Zennet presents new economic aspects, both to the traditional economy and to the world of cryptocurrencies. ZenCoin has real intrinsic value, as it is fully backed by the right to use a certain amount of computation power. Most of the world's computing resources are in the hands of ordinary people, and most of the time they sit idle.

Many questions naturally arise; the above description is highly condensed. Please take some time to read the About Xennet article at http://zennet.sc/. It contains an interesting table presenting Xennet's benefits over Amazon Web Services, the largest computational resource rental firm.

Please stay tuned, share your thoughts, and check the Xennet Bug Bounty Announcement.

In the press (note that we renamed from Xennet due to this):

http://i.imgur.com/0UmDvzL.png http://i.imgur.com/rvpfJoR.png http://i.imgur.com/sgCNbS1.png http://i.imgur.com/QdPKPRS.png http://i.imgur.com/a6WgrFR.png

http://i.imgur.com/1GqZqPc.png http://i.imgur.com/Pzerf7s.png http://i.imgur.com/qMVLT0F.png http://i.imgur.com/wmPwMda.png http://i.imgur.com/3SyUdP2.png

http://i.imgur.com/tfTkELZ.png http://i.imgur.com/xpck9B2.png http://i.imgur.com/Z6Gstx7.png http://i.imgur.com/X1nIg3R.png

Please note that the above coverage was organic and not coordinated with us, and hence contains some inaccuracies. This thread clarifies most of them.

Partial list of the team's LinkedIn profiles (in lexicographic order):
Ohad Asor (CEO/CTO)
Gil Bahr (Creative)
Gil Erlich (R&D)
Ariel Formanovski (R&D)
Ravit Halmut (R&D)
Assaf Kerstin (CFO)
Alon Muroch (R&D)
Lukáš Nový (R&D)
Daniel Peled (Legal)
Yonatan Omer (R&D)
Felicity Perry (Marketing)
Ofer Rotem (Capital)

Special credit to HunterMinerCrafter for his extensive advice to the project and for showing us new paths toward verifiable computing. His contribution is greatly appreciated.

Sincerely,
The Zennet Team.
dasource
Hero Member
*****
Offline Offline

Activity: 823
Merit: 1000


View Profile
August 12, 2014, 10:49:08 PM
 #2

Had a quick read.... Interested, let's take on the big boys in their own back yard!

^ I am with STUPID!
dogetime
Sr. Member
****
Offline Offline

Activity: 260
Merit: 250


BUY BUY CELL CELL


View Profile
August 12, 2014, 11:14:58 PM
 #3

Interesting ideas  Smiley

Pump and dump philosophy
BTC: 1EZnwK5H5nRV2Tm4qToPg4Gb56eu35YoUC
spacelab
Sr. Member
****
Offline Offline

Activity: 529
Merit: 250


Nominex support


View Profile WWW
August 12, 2014, 11:27:41 PM
 #4

I really like this idea
jasemoney
Legendary
*
Offline Offline

Activity: 1582
Merit: 1005


Forget-about-it


View Profile
August 12, 2014, 11:28:58 PM
 #5

dude awesome

$MAID & $BTC other than that some short hodls and some long held garbage.
provenceday
Legendary
*
Offline Offline

Activity: 1148
Merit: 1000



View Profile
August 12, 2014, 11:40:16 PM
 #6

will check this later. Smiley
damiano
Legendary
*
Offline Offline

Activity: 1246
Merit: 1000


103 days, 21 hours and 10 minutes.


View Profile
August 12, 2014, 11:44:24 PM
 #7

looks great
crimealone
Sr. Member
****
Offline Offline

Activity: 644
Merit: 251



View Profile
August 13, 2014, 01:37:38 AM
 #8

Maybe another multi-thousand-BTC presale??

Bluis Jan
Member
**
Offline Offline

Activity: 98
Merit: 10


View Profile
August 13, 2014, 03:39:40 AM
 #9

Sounds good

belief
Full Member
***
Offline Offline

Activity: 126
Merit: 100



View Profile
August 13, 2014, 03:32:24 PM
 #10

keep watching

karmazaki
Newbie
*
Offline Offline

Activity: 29
Merit: 0


View Profile
August 14, 2014, 09:05:55 PM
 #11

Looks good,
Decentralized Supercomputer OMG Shocked
provenceday
Legendary
*
Offline Offline

Activity: 1148
Merit: 1000



View Profile
August 14, 2014, 11:11:29 PM
 #12

What's the difference from the Amazon EC2 cloud service?
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
August 14, 2014, 11:31:59 PM
 #13

What's the difference from the Amazon EC2 cloud service?

Great question. Please see the detailed table at http://xennet.io which compares Xennet to AWS.

Tau-Chain & Agoras
smokim87
Hero Member
*****
Offline Offline

Activity: 938
Merit: 500


View Profile
August 14, 2014, 11:42:23 PM
 #14

Guessing Windows Server OS won't be able to work in this, right? But maybe in a few years when computers are much more powerful?
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
August 14, 2014, 11:49:13 PM
 #15

Guessing Windows Server OS won't be able to work in this, right? But maybe in a few years when computers are much more powerful?

It will work on Windows and even on mobiles!
Docker will be executed inside a virtual machine (VirtualBox, QEMU, etc.).
The VM will run Xennet-OS, which will allocate containers to each publisher.
Therefore, it is cross-platform.

Note that since QEMU is a fully user-space hypervisor, it is compatible with any OS and architecture that can compile C.

Tau-Chain & Agoras
mr.coinzy
Hero Member
*****
Offline Offline

Activity: 504
Merit: 507



View Profile
August 15, 2014, 09:39:06 AM
 #16

This is by far one of the most interesting projects i saw lately and i will keep my eye out for this one for sure!

I do have a question: what determines the order of precedence when several equal entities wish to hire computational resources at the same instant?
If, for example, two different people both offer to pay the exact same price for the exact same resources and place their offers in the system at the exact same time, which one gets the computational power first (assuming the total available computational power is limited and not enough for both tasks at the same time)? How is it decided in the case of a complete tie in demands?
Hope my question makes sense... Smiley
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
August 15, 2014, 10:39:23 AM
Last edit: August 15, 2014, 02:45:35 PM by ohad
 #17

This is by far one of the most interesting projects i saw lately and i will keep my eye out for this one for sure!

I do have a question: what determines the order of precedence when several equal entities wish to hire computational resources at the same instant?
If, for example, two different people both offer to pay the exact same price for the exact same resources and place their offers in the system at the exact same time, which one gets the computational power first (assuming the total available computational power is limited and not enough for both tasks at the same time)? How is it decided in the case of a complete tie in demands?
Hope my question makes sense... Smiley

Very good question!

The ultimate theoretical answer is that it's up to the providers' and publishers' choice. But of course Xennet will ship with predefined settings, so non-technical users will still be able to rent out their computers and get paid in (probably almost) one click. Still, the client will be highly customizable. My current plan is to offer JS scripting in the client for inputting complex user-defined filters and preferences.

The practical answer, for when users do not handle this situation manually and rely on the client's defaults, requires me to highlight several points:

1. As you correctly stated, publishers publish (over the blockchain) a request for providers to connect, including their IP address, and providers connect to them.

2. Some background that is not strictly necessary for this answer: publishers do not state a price per unit of CPU/RAM/etc. or time. They state how much they would pay for standard programs, i.e. some common phoronix-test-suite benchmarks. Providers also state their prices in terms of canonical benchmarks. Then a linear-algebra algorithm decomposes those prices into prices per CPU instruction, RAM fetch, etc., and determines the bill at runtime (see the sketch after this list). The algorithm knows how to (optimally!) compensate for the variance between different kinds of hardware. It's a somewhat sophisticated, advanced-math mechanism; see our docs for more details. For our case, the conclusion is that a match is determined not only by the publisher's requirements but also by the performance of each provider, each having various models of hardware.

3. Each provider will tend to serve several publishers in parallel. If each provider serves 10 publishers, then the risk for both provider and publishers decreases by roughly 10x: if a publisher does not pay, the provider loses only 10% (note that payments occur every few seconds in a frictionless manner thanks to the Micropayments algorithm, so there will never be a big loss in any case), and if the provider disappears, the publisher loses only the work of 10% of a PC, not 100%. Hence, back to your question, the desired behaviour is to serve both publishers in parallel.

4. Your question still remains if we have many (say 100) publishers with the very same parameters. It is indeed a realistic case: for example, many researchers use Gromacs for molecular dynamics simulations, and they may even work at the same institute under the same terms. So how will Xennet handle this?

5. The same rationale as in 3, i.e. reducing risk, points to another risk-decreasing behaviour: say I do have 100 identical publishers. In such a case the client will simply pick publishers randomly. This reduces risk since it spreads the risk among all nodes.

6. Note that if, among those identical publishers, some have a richer history than others (as seen on the ledger), or if my specific node served one of them in the past and kept a reputation measurement for that publisher (done automatically by the client), it will prefer the one with more reputation.
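To make point 2 concrete, here is a minimal numerical sketch of decomposing benchmark prices into per-resource prices with a least-squares fit (which uses the Moore-Penrose pseudo-inverse). The benchmarks, resources, and numbers below are made up for illustration; this is not the actual Zennet pricing code.

Code:
# Illustration only: resource columns and all numbers are invented for this example.
import numpy as np

# Each row is one canonical benchmark; columns are metered resource usage,
# e.g. [CPU instructions (billions), RAM fetches (billions)] per benchmark run.
usage = np.array([
    [50.0,  5.0],   # benchmark 1
    [10.0, 20.0],   # benchmark 2
    [30.0, 10.0],   # benchmark 3
])

# Price offered for one run of each benchmark (in coins).
benchmark_prices = np.array([1.05, 0.40, 0.70])

# Least-squares decomposition: solve usage @ unit_prices ≈ benchmark_prices.
unit_prices, residuals, rank, _ = np.linalg.lstsq(usage, benchmark_prices, rcond=None)
print("price per billion CPU instructions:", unit_prices[0])
print("price per billion RAM fetches:     ", unit_prices[1])

# At runtime, the bill for a metered workload is just usage . unit_prices.
workload = np.array([12.0, 3.0])            # measured consumption of a real task
print("bill for this workload:", workload @ unit_prices)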

I hope I addressed your question.
Please (you and everyone) do not hesitate to ask any kind of question about Xennet.


P.S.
"one of the most interesting projects i saw lately" just one? show me another with the same magnitude of redemption Smiley

Tau-Chain & Agoras
mr.coinzy
Hero Member
*****
Offline Offline

Activity: 504
Merit: 507



View Profile
August 15, 2014, 10:58:22 AM
 #18

Thank you for the detailed answer!
It gives a lot of confidence in this project to see that you really gave much thought to all the fine details.
If i have further questions i will definitely ask away.
Oh, and I stand corrected - this is the most interesting project i have seen lately! it's a keeper!  Wink
Will surely keep an eye out for more info on development and hope to participate in your scheduled pre-sale!

takayama
Newbie
*
Offline Offline

Activity: 6
Merit: 0


View Profile
August 16, 2014, 08:15:11 AM
Last edit: August 16, 2014, 08:34:53 AM by takayama
 #19

always nice to see innovation!
@Dev: can you explain more about the XenCoin?  
-Greed-
Hero Member
*****
Offline Offline

Activity: 910
Merit: 1000


Decentralized Jihad


View Profile
August 16, 2014, 08:39:49 AM
 #20

Keep my eye on this. Wink

cwnt
Newbie
*
Offline Offline

Activity: 29
Merit: 0


View Profile
August 16, 2014, 12:59:07 PM
 #21

Keep my eye on this. Wink

Me too. Extremely interesting proposal here.
tobeaj2mer01
Legendary
*
Offline Offline

Activity: 1104
Merit: 1000


Angel investor.


View Profile
August 16, 2014, 01:56:45 PM
 #22

Is there any IPO/ICO or pre-sale for this coin? I want to invest in it.

Sirx: SQyHJdSRPk5WyvQ5rJpwDUHrLVSvK2ffFa
-Greed-
Hero Member
*****
Offline Offline

Activity: 910
Merit: 1000


Decentralized Jihad


View Profile
August 16, 2014, 01:59:10 PM
 #23

Is there any IPO/ICO or pre-sale for this coin? I want to invest in it.

From xennet.io:
Quote
* Presale is expected. Stay tuned *

Ohad Asor
July 27, 2014

xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
August 16, 2014, 10:30:33 PM
Last edit: August 16, 2014, 11:02:12 PM by xenmaster
 #24

always nice to see innovation!
@Dev: can you explain more about the XenCoin? 

Sure,

On Xennet, people will trade computational resources for XenCoins.

Its purpose and design goal are to be a token for activating computational machines. It's interesting to note that XenCoin has a real intrinsic value, as it is fully backed by the right to use a certain amount of computation power.
The best candidate for now for the coin technology is Delegated Proof of Stake (http://bitshares.org/delegated-proof-of-stake/) by Dan Larimer.
In the pre-sale, XenCoins will be sold, and they will be delivered after the product is ready.
People naturally ask why we don't build it on top of Bitcoin. The reason is that for the Micropayment process to start, the provider has to wait for confirmations of the publisher's multisig deposit. Hence, publishers might have to wait 45 minutes until the work begins, which is unacceptable.
BTCat
Legendary
*
Offline Offline

Activity: 1708
Merit: 1000



View Profile
August 17, 2014, 06:48:54 PM
 #25

Interesting, but for now just a synopsis.
takayama
Newbie
*
Offline Offline

Activity: 6
Merit: 0


View Profile
August 18, 2014, 07:46:14 AM
 #26

Interesting, but for now just a synopsis.

synopsis?!?
have you even read all of it?

- http://xennet.io
- https://docs.google.com/document/d/1Y-JKJtJrnGpEISgpEpGKqwmW-atg0_Uk2rJAH5YpaLc/edit#heading=h.8llwmxkra3of
- https://github.com/Xennet/xennetdocs

- http://insidehpc.com/2014/08/new-xennet-hpc-cloud-free-market-alternative-aws/ (just found this  Smiley)


takayama
Newbie
*
Offline Offline

Activity: 6
Merit: 0


View Profile
August 18, 2014, 07:54:27 AM
 #27

Its purpose and design goal are to be a token for activating computational machines. It's interesting to note that XenCoin has a real intrinsic value, as it is fully backed by the right to use a certain amount of computation power.

refreshing to see a cryptocoin with real intrinsic value, that aims to solve an actual problem.

@Dev can you explain more about the Micropayment protocol? (to a non technical person please  Wink)
smokim87
Hero Member
*****
Offline Offline

Activity: 938
Merit: 500


View Profile
August 18, 2014, 07:58:14 AM
 #28

Guessing Windows Server OS won't be able to work in this, right? But maybe in a few years when computers are much more powerful?

It will work on Windows and even on mobiles!
Docker will be executed inside a virtual machine (VirtualBox, QEMU, etc.).
The VM will run Xennet-OS, which will allocate containers to each publisher.
Therefore, it is cross-platform.

Note that since QEMU is a fully user-space hypervisor, it is compatible with any OS and architecture that can compile C.

Now this is a very interesting idea. Just wondering, are you part of the team?

This should take months to develop, so please take your time. Perfect it, make it airtight. Bookmarked; I will for sure be following this.
MsCollec
Legendary
*
Offline Offline

Activity: 1400
Merit: 1000


View Profile
August 18, 2014, 08:04:53 AM
 #29

good no IPO  Roll Eyes
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
August 19, 2014, 03:14:27 AM
 #30

Guessing Windows Server OS won't be able to work in this, right? But maybe in a few years when computers are much more powerful?

It will work on Windows and even on mobiles!
Docker will be executed inside a virtual machine (VirtualBox, QEMU, etc.).
The VM will run Xennet-OS, which will allocate containers to each publisher.
Therefore, it is cross-platform.

Note that since QEMU is a fully user-space hypervisor, it is compatible with any OS and architecture that can compile C.

Now this is a very interesting idea. Just wondering, are you part of the team?

This should take months to develop, so please take your time. Perfect it, make it airtight. Bookmarked; I will for sure be following this.


Yes, I'm Ohad Asor

Its purpose and design goal are to be a token for activating computational machines. It's interesting to note that XenCoin has a real intrinsic value, as it is fully backed by the right to use a certain amount of computation power.

refreshing to see a cryptocoin with real intrinsic value, that aims to solve an actual problem.

@Dev can you explain more about the Micropayment protocol? (to a non technical person please  Wink)

The Micropayments protocol is described here.

tl;dr version:

Denote our publisher by A and our provider by B. Then A has to continuously pay B.
A transfers some coins to a 2-of-2 multisig address over A's and B's keys, but only after B signs (offline) and sends A a refund transaction with some nLockTime, so that if B runs away, A can redeem this refund transaction after a certain period of time.
After the coins are locked, i.e. the transfer to the multisig address has enough confirmations (on Bitcoin this might take very long, which is why we can't build it over Bitcoin), they begin the work and continuously sign updated transactions in which the refund amount is reduced and the amount paid directly to B is increased. Those transactions are exchanged offline and not broadcast, hence the continuous update is frictionless.
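A minimal sketch of the channel bookkeeping described above, with hypothetical names and no real signing or broadcasting; it only illustrates how the split between B's payout and A's refund shifts with each off-chain update.

Code:
# Illustration only: no cryptography or networking, just the accounting that the
# signed (but unbroadcast) channel transactions represent.
from dataclasses import dataclass

@dataclass
class MicropaymentChannel:
    deposit: float          # coins A locked in the 2-of-2 multisig
    paid_to_b: float = 0.0  # amount the latest co-signed transaction pays to B

    def pay_increment(self, amount: float) -> None:
        """A and B co-sign a new version: more goes to B, less is refunded to A."""
        if self.paid_to_b + amount > self.deposit:
            raise ValueError("channel exhausted; open a new one")
        self.paid_to_b += amount

    @property
    def refund_to_a(self) -> float:
        return self.deposit - self.paid_to_b

if __name__ == "__main__":
    ch = MicropaymentChannel(deposit=1.0)
    for _ in range(5):                  # e.g. one metering interval every few seconds
        ch.pay_increment(0.01)
    print(f"B holds a tx paying it {ch.paid_to_b:.2f}; A's refund is {ch.refund_to_a:.2f}")
    # If B vanishes instead, A waits for the nLockTime and broadcasts the original refund tx.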

Tau-Chain & Agoras
bitcoinwonders010
Hero Member
*****
Offline Offline

Activity: 686
Merit: 500


View Profile
August 19, 2014, 03:16:37 AM
 #31

When do we get details of the pre-sale? It's best to have the coins on an exchange where we can buy them straight away. Bittrex wouldn't mind supporting this.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
August 19, 2014, 03:26:20 AM
 #32

When do we get details of the pre-sale? It's best to have the coins on an exchange where we can buy them straight away. Bittrex wouldn't mind supporting this.

The pre-sale will take place before the software is ready. As mentioned above, it's a lot of work, hence using an exchange is not an option.
I can't promise an exact date for the pre-sale details right now, but it shouldn't take too long.

Tau-Chain & Agoras
bitcoinwonders010
Hero Member
*****
Offline Offline

Activity: 686
Merit: 500


View Profile
August 19, 2014, 03:33:09 AM
 #33

When do we get details of the pre-sale? It's best to have the coins on an exchange where we can buy them straight away. Bittrex wouldn't mind supporting this.

The pre-sale will take place before the software is ready. As mentioned above, it's a lot of work, hence using an exchange is not an option.
I can't promise an exact date for the pre-sale details right now, but it shouldn't take too long.

How would an exchange not be good? Just make the pre-sale available when the software is ready. Having it on an exchange makes life easier, as it won't need to be individually distributed.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
August 19, 2014, 03:43:08 AM
 #34

When do we get details of the pre-sale? It's best to have the coins on an exchange where we can buy them straight away. Bittrex wouldn't mind supporting this.

The pre-sale will take place before the software is ready. As mentioned above, it's a lot of work, hence using an exchange is not an option.
I can't promise an exact date for the pre-sale details right now, but it shouldn't take too long.

How would an exchange not be good? Just make the pre-sale available when the software is ready. Having it on an exchange makes life easier, as it won't need to be individually distributed.

The pre-sale will take place long before the software is ready. We need it to fund the dev.

Tau-Chain & Agoras
bitcoinwonders010
Hero Member
*****
Offline Offline

Activity: 686
Merit: 500


View Profile
August 19, 2014, 03:58:51 AM
 #35

When do we get details of the pre-sale? It's best to have the coins on an exchange where we can buy them straight away. Bittrex wouldn't mind supporting this.

The pre-sale will take place before the software is ready. As mentioned above, it's a lot of work, hence using an exchange is not an option.
I can't promise an exact date for the pre-sale details right now, but it shouldn't take too long.

How would an exchange not be good? Just make the pre-sale available when the software is ready. Having it on an exchange makes life easier, as it won't need to be individually distributed.

The pre-sale will take place long before the software is ready. We need it to fund the dev.

That's awesome, but I'm just saying: put it on an exchange so we can buy it from there.
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
August 19, 2014, 04:03:02 AM
 #36

Have a look at two organic press articles:
http://www.hpcwire.com/2014/08/18/free-market-hpc-cloud-development/
http://insidehpc.com/2014/08/new-xennet-hpc-cloud-free-market-alternative-aws/
We will also mention that some worried supercomputing companies have contacted us.
bitcoinwonders010
Hero Member
*****
Offline Offline

Activity: 686
Merit: 500


View Profile
August 19, 2014, 04:05:56 AM
 #37

Have a look at two organic press articles:
http://www.hpcwire.com/2014/08/18/free-market-hpc-cloud-development/
http://insidehpc.com/2014/08/new-xennet-hpc-cloud-free-market-alternative-aws/
We will also mention that some worried supercomputing companies have contacted us.

How much are you looking to raise? Hope it's not a crazy IPO like Ether.
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
August 19, 2014, 04:14:55 AM
Last edit: August 19, 2014, 04:27:13 AM by xenmaster
 #38

Have a look at two organic press articles:
http://www.hpcwire.com/2014/08/18/free-market-hpc-cloud-development/
http://insidehpc.com/2014/08/new-xennet-hpc-cloud-free-market-alternative-aws/
We will also mention that some worried supercomputing companies have contacted us.

How much are you looking to raise? Hope it's not a crazy IPO like Ether.

Well, the market will tell.
The market size of Xennet (HPC, Big Data, worldwide computing resources) is so much bigger than the market of cryptocoins (of which we're huge fans), especially in terms of demand and of being a bottleneck for so many (good and wealthy) organizations. Moreover, XenCoin is by no means designed to be useful for generic value transfer. It is optimized for use over Xennet and for renting computational resources.
So we have many people who need XenCoin (HPC consumers), and on the other hand, we need to invest a lot in the product itself, its audience, and building more applications on top of Xennet, like XenFS and XenTube as in the RFC (https://github.com/Xennet/xennetdocs), or a distributed and decentralized search engine... who knows. Endless work ahead of us.
Anyway, it's going to be a very fair sale. Details aren't finalized yet, but they will of course be announced here.

Note that the RFC is a bit outdated; http://xennet.io contains a newer design of the Xennet implementation.
tobeaj2mer01
Legendary
*
Offline Offline

Activity: 1104
Merit: 1000


Angel investor.


View Profile
August 19, 2014, 05:35:20 AM
 #39

Have a look at two organic press articles:
http://www.hpcwire.com/2014/08/18/free-market-hpc-cloud-development/
http://insidehpc.com/2014/08/new-xennet-hpc-cloud-free-market-alternative-aws/
We will also mention that some worried supercomputing companies have contacted us.

How much are you looking to raise? Hope it's not a crazy IPO like Ether.

Well, the market will tell.
The market size of Xennet (HPC, Big Data, worldwide computing resources) is so much bigger than the market of cryptocoins (of which we're huge fans), especially in terms of demand and of being a bottleneck for so many (good and wealthy) organizations. Moreover, XenCoin is by no means designed to be useful for generic value transfer. It is optimized for use over Xennet and for renting computational resources.
So we have many people who need XenCoin (HPC consumers), and on the other hand, we need to invest a lot in the product itself, its audience, and building more applications on top of Xennet, like XenFS and XenTube as in the RFC (https://github.com/Xennet/xennetdocs), or a distributed and decentralized search engine... who knows. Endless work ahead of us.
Anyway, it's going to be a very fair sale. Details aren't finalized yet, but they will of course be announced here.

Note that the RFC is a bit outdated; http://xennet.io contains a newer design of the Xennet implementation.

Will a beta testnet/client be available before the IPO begins? It could be a very fantastic coin, but at the same time it is very likely to fail; for some coins, even the basic functions don't work after release.

Sirx: SQyHJdSRPk5WyvQ5rJpwDUHrLVSvK2ffFa
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
August 19, 2014, 05:49:27 AM
 #40


Will a beta testnet/client be available before the IPO begins? It could be a very fantastic coin, but at the same time it is very likely to fail; for some coins, even the basic functions don't work after release.

Note that mining, transaction approvals, etc. are *not* done by the computational work. That work cannot be proven: what goes on between the publisher and the provider cannot be proved true to a third party. Trustlessness is implemented by other means. Xennet is a layer on top of the coin layer, which will use a common algorithm such as PoW, or maybe Delegated Proof of Stake.

So as for the coin's safety, you can feel safe.

We haven't decided yet whether there will be a test client; this is a possibility that depends on several considerations.

We're in contact with companies that have massive computing resources, so we can test it all well. We also hired a qualified QA manager. Intensive tests will be done and nothing premature will be released. Our developers are from the top of the industry, with broad expertise in bringing high-quality products to market: for example, life-critical products for the medical market, software for banks, and many more.

Quality is definitely one of our main guidelines, and we shall invest as many resources as needed (and available) to deliver a top-quality product.
XNext
Newbie
*
Offline Offline

Activity: 56
Merit: 0


View Profile
August 19, 2014, 10:49:28 AM
 #41

When for ICO?
sdersdf2
Full Member
***
Offline Offline

Activity: 224
Merit: 100


View Profile
August 19, 2014, 12:20:35 PM
 #42

Would appreciate it if someone could briefly summarise:
1) the strengths of this coin?
2) the weaknesses of this coin?
3) promises made? kept? any FUD?

To the dev(s), what will be the key original code/feature(s) in this coin? And are you going to provide "Proof of Developer" certification from cryptoasian:
http://cryptoasian.com/

And are the devs here real developers, or copy-paste devs or middlemen, like the SYS guys?
provenceday
Legendary
*
Offline Offline

Activity: 1148
Merit: 1000



View Profile
August 19, 2014, 12:22:13 PM
 #43

will watch what happens later.
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
August 19, 2014, 12:23:37 PM
 #44

Would appreciate it if someone could briefly summarise:
1) the strengths of this coin?
2) the weaknesses of this coin?
3) promises made? kept? any FUD?

To the dev(s), what will be the key original code/feature(s) in this coin? And are you going to provide "Proof of Developer" certification from cryptoasian:
http://cryptoasian.com/

And are the devs here real developers, or copy-paste devs or middlemen, like the SYS guys?

This is so much more than just a coin; it's not even meant to be a coin. Please read more here and in the links. The devs are from the top of the software industry, as can be seen in our documents.
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
August 19, 2014, 09:25:51 PM
 #45

An article about the Xennet Supercomputer was published on insideHPC:
http://insidehpc.com/2014/08/new-xennet-hpc-cloud-free-market-alternative-aws
"Today we caught wind of something coming out of stealth mode called the Xennet initiative, a “public, distributed, and decentralized Supercomputer.” As the brainchild of Israeli computer scientist Ohad Asor, Xennet is essentially a free-market alternative to AWS that sounds a lot like the marriage of BitCoin and SETI@Home."
takayama
Newbie
*
Offline Offline

Activity: 6
Merit: 0


View Profile
August 20, 2014, 10:32:28 AM
 #46

Would appreciate it if someone could briefly summarise:
1) the strengths of this coin?
2) the weaknesses of this coin?
3) promises made? kept? any FUD?

To the dev(s), what will be the key original code/feature(s) in this coin? And are you going to provide "Proof of Developer" certification from cryptoasian:
http://cryptoasian.com/

And are the devs here real developers, or copy-paste devs or middlemen, like the SYS guys?

I will try to answer your questions:

Strengths - (1) XenCoins have real intrinsic value (fully backed by the right to use a certain amount of computation power); (2) the project aims to solve an actual problem: there is very big demand for computation power in High-Performance Computing (HPC); (3) HPC is a hundreds-of-billions market with rapid growth.

Weaknesses - it is a very innovative project, so there will be hurdles along the way.

Promises made? Kept? Any FUD? - I don't know of any yet; this thread is just a PRE-ANN to get the community involved.

Original code/feature? - they are using the industry's most solid technologies. The originality will be in bringing everything together to achieve the Xennet goal: creating a public decentralized Supercomputer with a frictionless reward mechanism using blockchain technology.

Are the devs here real developers? You can check Ohad Asor's LinkedIn here: https://www.linkedin.com/in/ohadasor

BTCat
Legendary
*
Offline Offline

Activity: 1708
Merit: 1000



View Profile
August 20, 2014, 10:52:45 AM
 #47

This looks to me much like the Cloak story: promise a 'holistic' idea but don't deliver.
That's the difference with Bitcoin: when it was introduced, it was already there, only to be developed further upon. Today's new coin launches are more like a business plan or ideas that still need to be developed, and whether they ever reach a full launch remains uncertain. A very risky investment that is mainly driven by new investors entering the market with the same high hopes.
In the real world, only 1 out of 1,000 ideas will be successfully developed, if at all.

That being said, I'm not saying this coin can't be that one successful launch. So good luck if you're serious about achieving the goal.
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
August 20, 2014, 10:57:56 AM
 #48

This looks to me much like the Cloak story: promise a 'holistic' idea but don't deliver.
That's the difference with Bitcoin: when it was introduced, it was already there, only to be developed further upon. Today's new coin launches are more like a business plan or ideas that still need to be developed, and whether they ever reach a full launch remains uncertain. A very risky investment that is mainly driven by new investors entering the market with the same high hopes.
In the real world, only 1 out of 1,000 ideas will be successfully developed, if at all.

Note that we published a detailed software design document which explains how to implement the product. We haven't published the current code yet.
We also have the right personnel.
Using the funds from the crowdsale we will be able to finish the development and deliver the product.
seek4dream
Hero Member
*****
Offline Offline

Activity: 966
Merit: 501



View Profile
August 20, 2014, 04:26:36 PM
 #49

keep an eye on it.
studio1one
Hero Member
*****
Offline Offline

Activity: 1078
Merit: 500



View Profile
August 21, 2014, 10:52:22 AM
 #50

Would appreciate it if someone could briefly summarise:
1) the strengths of this coin?
2) the weaknesses of this coin?
3) promises made? kept? any FUD?

To the dev(s), what will be the key original code/feature(s) in this coin? And are you going to provide "Proof of Developer" certification from cryptoasian:
http://cryptoasian.com/

And are the devs here real developers, or copy-paste devs or middlemen, like the SYS guys?

Dude,

please

stop posting this on every single thread.


sdersdf2
Full Member
***
Offline Offline

Activity: 224
Merit: 100


View Profile
August 21, 2014, 12:05:34 PM
 #51

Would appreciate it if someone could briefly summarise:
1) the strengths of this coin?
2) the weaknesses of this coin?
3) promises made? kept? any FUD?

To the dev(s), what will be the key original code/feature(s) in this coin? And are you going to provide "Proof of Developer" certification from cryptoasian:
http://cryptoasian.com/

And are the devs here real developers, or copy-paste devs or middlemen, like the SYS guys?

Dude,

please

stop posting this on every single thread.

Too many new coins, threads, posts to track. These are discussion threads to discuss the coins, no?
No interest in informing newcomers about the coin?
studio1one
Hero Member
*****
Offline Offline

Activity: 1078
Merit: 500



View Profile
August 21, 2014, 12:08:17 PM
 #52

Would appreciate it if someone could briefly summarise:
1) the strengths of this coin?
2) the weaknesses of this coin?
3) promises made? kept? any FUD?

To the dev(s), what will be the key original code/feature(s) in this coin? And are you going to provide "Proof of Developer" certification from cryptoasian:
http://cryptoasian.com/

And are the devs here real developers, or copy-paste devs or middlemen, like the SYS guys?

Dude,

please

stop posting this on every single thread.

Too many new coins, threads, posts to track. These are discussion threads to discuss the coins, no?
No interest in informing newcomers about the coin?

Just read the info available and make up your own mind. Every thread I go on, you copy and paste the same thing, even if it's only 1 or 2 pages.


dexX7
Legendary
*
Offline Offline

Activity: 1106
Merit: 1005



View Profile WWW
August 22, 2014, 05:15:22 PM
 #53

The idea is simply brilliant. The question of how unused resources could be used in a (semi-)trustless and distributed way has been asked many times, but using micropayment channels might be the missing piece and part of the solution.

After a quick read I have a few questions; excuse me if I missed the general concept:

xennet.io describes the idea of decentralized supercomputing where SSH access to VMs can be rented or sold. Contracts are negotiated over a P2P network and payments are made via payment channels for actual work which, according to the description, can be measured. This seems pretty straightforward.

xennetdocs, in contrast, mentions elements like XenFS and XenTube, proof of storage and much more. So what's the plan? Distributed HPC or MaidSafe 2.0?

This particular statement made me wonder:

Quote
1. Publisher A broadcasts an announcement (ann) to the blockchain, saying it is seeking providers. The ann contains information about the required systems in terms of hardware capabilities, and the publisher's IP address.

2. Provider B polls on the blockchain. Once an ann that matches its filter is found, it connects to A's IP address.

I'd like to quote Satoshi:

Quote
We define an electronic coin as a chain of digital signatures. (...) The problem of course is the payee can't verify that one of the owners did not double-spend the coin.

We need a way for the payee to know that the previous owners did not sign any earlier transactions. For our purposes, the earliest transaction is the one that counts, so we don't care about later attempts to double-spend.

In this paper, we propose a solution to the double-spending problem using a peer-to-peer distributed timestamp server to generate computational proof of the chronological order of transactions.

The solution we propose begins with a timestamp server. A timestamp server works by taking a hash of a block of items to be timestamped and widely publishing the hash, such as in a newspaper or Usenet post [2-5]. The timestamp proves that the data must have existed at the time, ...

The blockchain is basically a ledger where data is published in an ordered structure, and it provides an answer to the question of which piece of data came first.

Peer discovery and contract negotiation don't seem like something that requires such properties and might as well be satisfied by other communication networks. Once two peers are matched, they can furthermore communicate in an isolated channel. I don't really see the benefit of using a blockchain here, and the BitTorrent Mainline DHT, with likely over 25 million participants, is probably a prime example of how it could be done, too -- without any delay based on "block confirmations" or whatsoever. You may also take a look at the colored coins projects or Bitsquare (to name another concrete example), which intend to use an overlay network for order publishing.

I assume this is directly linked to my third note or question:

Why do you want to create a new coin at all? Aside from the fact that this would be a huge and complex task on its own, not even looking at all the implications and security risks, I seem to miss the underlying need in the first place.

To quote:

Quote
One would naturally ask: why isn't Xennet planned to be implemented over Bitcoin? The answer is mainly the following: in order to initiate a micropayment channel, it is necessary to deposit money in a multisig address, and the other party has to wait for confirmations of this deposit. This can make the waiting time for the beginning of work to last 30-90 minutes, which is definitely unacceptable.

I'm not sure if this is indeed linked to my previous comment (with something like: tx with "announcement" -> block confirmation -> tx with "accept" -> confirmation -> tx to "open channel" -> ...), but let's assume for a moment this is only about opening the payment channel. I would humbly disagree here and wonder: how do you come up with a delay of 30-90 minutes? When I look at Gavin's chart, which shows the relation between fees and the delay until inclusion within a block, it seems you can be pretty sure a transaction will be confirmed within one block at a cost of 0.0005-0.0007 BTC/1000 bytes, or within two blocks at a cost of about 0.00045 BTC/1000 bytes of transaction size. Given that opening the micropayment channel amounts to funding the multisig wallet via a standard pay-to-hash transaction of usually about 230 bytes, it comes down to a cost of about 0.0001-0.0002 BTC to ensure with high probability that the channel is opened within one or two blocks.

With a block confirmation time of usually 10 minutes (and, due to the increasing total computation power of the network, often even less than 10 minutes), this is far away from 30-90 minutes.
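A quick sanity check of the arithmetic above (the fee rates and the roughly 230-byte funding transaction size are the figures quoted in this post, not independently verified numbers):

Code:
tx_size_bytes = 230                      # approximate multisig-funding tx size (figure from the post)
fee_per_kb_one_block = (0.0005, 0.0007)  # BTC per 1000 bytes for ~1-block inclusion (from the post)
fee_per_kb_two_blocks = 0.00045          # BTC per 1000 bytes for ~2-block inclusion (from the post)

low, high = (rate * tx_size_bytes / 1000 for rate in fee_per_kb_one_block)
two_blocks = fee_per_kb_two_blocks * tx_size_bytes / 1000
print(f"~1 block:  {low:.6f} - {high:.6f} BTC")   # about 0.000115 - 0.000161 BTC
print(f"~2 blocks: {two_blocks:.6f} BTC")         # about 0.000104 BTC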

This timeframe, depending on the level of trust in the other party, could be used to begin the work (probably not wise), but also to set up everything that's needed in general and especially to run the benchmark (let's call this proof-of-benchmark Wink) to measure the system's capabilities. The incentives during this period seem sort of balanced, given that one party at least pays the transaction fee to open the channel and the other party spends computational resources to run the benchmark.

Another question I'd like to throw in: is an almost instant start even required here? I'm not familiar with HPC or what is usually computed at all, but I would assume tasks that require heavy resources usually run over longer periods of time. Say (totally out of thin air) there is a lab which wants to run some kind of simulation over the next 7 days; then I'd say it doesn't really matter if the work begins within 5 or 30 minutes.

My last question derives from my lack of knowledge in this field, too, but would it be possible to game the system or even produce bogus data? Related to Bitcoin mining it's pretty simple: there is a heavy task of finding a nonce which produces a hash with some specific properties. The task can easily take quite some time to be solved, but the solution can be verified almost instantaneously.

If the "usual work" done in HPC environments has similar properties, then I see a golden future. If this is not the case and if results may not even be verifiable at all, then this could be a significant problem.

Looking forward to your answers. Cheers! Smiley

ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
August 22, 2014, 07:04:52 PM
Last edit: August 22, 2014, 07:25:01 PM by ohad
 #54

The idea is simply brilliant. The question of how unused resources could be used in a (semi-)trustless and distributed way has been asked many times, but using micropayment channels might be the missing piece and part of the solution. [...] Looking forward to your answers. Cheers! Smiley

Thanks Dex!

It is important to bear in mind that Xennet is only IaaS (Infrastructure as a Service) and not PaaS (Platform) or SaaS (Software as a Service). Xennet brings you access to hardware. It does not help you manage your workers and their up/downtime or latency, or divide the distributed work between them - Xennet won't do that, simply because there are plenty of wonderful tools out there already (e.g. Hadoop). We do not aim to innovate in this field. All we do is bring more metal to existing distributed applications.

Even though Xennet is an infrastructure, it's a strong and versatile one. Any native code can be executed, hence more applications can be built on top of Xennet. One of them is XenFS. Xennet+XenFS will provide both computational power and storage, but they're different layers, one on top of the other. So we'll have HPC, Big Data, Cloud, and Storage, all decentralized and open for applications on top of it.

Xennet does not implement algorithms to verify the correctness of the execution. Such algorithms are a very hot topic in academic research nowadays, and once there is a good way to do it, we will probably use it. But Xennet can still be a fair market even without such verification. We cannot totally eliminate the probability of incorrect computation, but we can make this probability as small as we want.
For example, if we do each piece of work twice, it will cost twice as much, but the probability of a mistake will be squared. Linear growth in the cost gives an exponential decrease in the risk. But that's not the only mechanism. The micropayments protocol bounds the time a fraud can affect to several seconds. As for storage in XenFS, the mechanism for bounding the probability is totally different, as described in the RFC.
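
A tiny sketch of that replication argument (p is a hypothetical per-host probability of a wrong result, k the number of independent repetitions whose outputs are compared):

Code:
# Cost grows linearly in k, while the chance that all k independent hosts
# return the same wrong result shrinks like p**k (assuming independence).
def residual_error_probability(p: float, k: int) -> float:
    return p ** k

for k in (1, 2, 3):
    print(k, residual_error_probability(0.05, k))   # 0.05, 0.0025, 0.000125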

Maliciously misleading measurements will be taken care of by calculating the normally distributed fluctuations from the pseudo-inverse; see the linear algebra part. The errors of the least squares problem are known to be normally distributed (Gauss proved that), so one can configure at which point in the distribution's tails to disconnect from the other party, due to a high enough probability of misleading measurements. It's a bit complex, I know, but the user will only configure the % probability at which to reject.
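
A rough illustration of the idea as described above (not the actual algorithm from the pricing paper): fit the reported counters by least squares, look at the residuals, and reject samples that land too far in the tail. The metric, numbers and threshold below are made up.

Code:
import statistics

# Accumulated CPU-instruction counts reported by one provider per interval,
# and the work items the publisher saw completed in each interval.
instructions = [1e9, 2e9, 3e9, 4e9, 5e9, 6e9, 7e9, 8e9]
work_done    = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 24.0]   # last report looks inflated

# One-variable least-squares fit (the pseudo-inverse collapses to a scalar here).
c = sum(w * x for w, x in zip(work_done, instructions)) / sum(x * x for x in instructions)
residuals = [w - c * x for w, x in zip(work_done, instructions)]

mu, sigma = statistics.mean(residuals), statistics.stdev(residuals)
REJECT_Z = 2.0   # user-configured tail: disconnect beyond ~2 standard deviations
flagged = [i for i, r in enumerate(residuals) if abs(r - mu) / sigma > REJECT_Z]
print("intervals to reject:", flagged)   # [7]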

Moreover, a publisher can rent, say, 10K hosts and after a few seconds drop the less efficient 50%.

Another risk-decreasing behavior is that each provider works simultaneously for several publishers. Say each works for 10 publishers in parallel; then the risk to both sides decreases 10-fold. See my answers earlier in this thread.

You claim that the time to confirmation (for the micropayment initiation via the multisig deposit) may be much less than 30-90 minutes (I stated those numbers while thinking about 6 confirmations; DPOS gives you more security with fewer confirmations). You mentioned that high fees may help. Note that this deposit has to be made for each provider separately, and in total they don't hold many coins, since usually small amounts are transferred, also for risk management. So the fee cannot be high. In addition, even if the confirmation took 5 minutes, that's still a lot. Look: the world's fastest supercomputer is roughly equivalent to 8,000 AMD 280X GPUs in terms of TFLOPS. So a full day of the fastest supercomputer's work should be doable in minutes or even seconds, without Xennet having even 1% of all GPUs. Moreover, Xennet is an infrastructure for more applications, like XenFS and XenTube. If someone wants to publish an encoding and streaming task over XenTube, will they have to wait 5 minutes before they can begin watching?

So 5 minutes is too much time. In addition, POW is pretty obsolete. It's also centralized, de facto. I tend to find DPOS much better. What is your opinion?

As for the benchmark, the publisher does not run a benchmark each time it connects to a provider (otherwise a huge waste would take place). The provider's client runs benchmarks from time to time, and the linear algebra algorithm is smart enough to compensate for different hardware showing the same measurements while one is actually more efficient, for fluctuations caused by other tasks running in the background, and so on.

Why publish the announcement over the Blockchain? It will be up to the user whether to make the announcement prunable or not. We might even support propagating it between nodes without it getting into the blockchain (but we have to think about cases where nodes have no incentive to pass announcements along). Some users will want their announcement to be public and persistent, for the sake of reputation.

I hope I touched on every point and didn't forget any related strengths worth presenting in this scope.

Thanks again,
Ohad.

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
August 22, 2014, 07:29:15 PM
Last edit: August 22, 2014, 11:57:33 PM by ohad
 #55

If the "usual work" done in HPC environments has similar properties, then I see a golden future. If this is not the case and if results may not even be verifiable at all, then this could be a significant problem.

Some more words about it: as mentioned, risk can be controlled. Also, verification can take place when verifiable algorithms are in question. Many of the common algorithms are efficiently verifiable, such as root finding, matrix inversion, eigenvalue problems, all NP-Complete problems and so on, and these cover many, many applications.
If the work is totally unprovable, then do it twice and square the risk.
Yet Xennet cannot help you prove your specific algorithm. This is task-dependent. Xennet assumes that you already have software that manages endless hosts, some more stable than others. It just gives you the hosts.
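
A quick illustration of that "expensive to compute, cheap to verify" property, using matrix inversion and a randomized Freivalds-style spot check (the sizes and tolerance are arbitrary):

Code:
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))

# Expensive step, done by the (untrusted) provider: invert the matrix.
A_inv = np.linalg.inv(A)

# Cheap step, done by the publisher: two matrix-vector products instead of
# redoing the inversion. A wrong A_inv fails this check with high probability.
x = rng.standard_normal(500)
accepted = np.allclose(A @ (A_inv @ x), x, atol=1e-6)
print("result accepted:", accepted)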

EDIT
Another point which I think is worth mentioning is that the pricing units are much more noise-resistant than is common.
Explanation:
Naturally one would suggest negotiating a price per unit of time of usage, as in all server hosting services. But this is not very fair: what if I used only 20% of the CPU overall? On Xennet, we measure how many CPU instructions were executed, how many sequential RAM reads, how many random disk writes and so on. Hence, even if the computer was loaded and did the job several times slower, in our scope of distributed computing, where the speed of each specific node doesn't matter, it will still get paid for a single job like everyone else.
This way we allow the risk to be reduced for both sides by serving many publishers in parallel, yet the billing remains accurate, and even more accurate (and fair) than AWS, for example.
Now, since the publisher has all the information from all nodes, like the expected CPU instructions for a work item to complete and its variance, it can easily identify outliers.
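
A minimal sketch of that kind of usage-based billing. The counter names and per-unit prices are made up for illustration; the real metric set and pricing come from the pricing paper:

Code:
# Accumulated resource counters reported for one job on one host, multiplied
# by per-unit prices agreed up front. The host is paid for work performed,
# not for wall-clock time, so a loaded or slow machine still earns the same
# amount for the same job.
counters = {                      # hypothetical metric names
    "cpu_instructions":   3.2e12,
    "ram_seq_reads":      1.1e9,
    "disk_random_writes": 4.5e6,
}
unit_price = {                    # coins per unit, set in the negotiation
    "cpu_instructions":   1.0e-14,
    "ram_seq_reads":      2.0e-11,
    "disk_random_writes": 5.0e-9,
}

bill = sum(counters[k] * unit_price[k] for k in counters)
print(f"amount owed so far: {bill:.4f} coins")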

Tau-Chain & Agoras
dexX7
Legendary
*
Offline Offline

Activity: 1106
Merit: 1005



View Profile WWW
August 23, 2014, 03:08:08 AM
 #56

Ah, thanks for the answers!

The numbers you used in your example ("rents 10k hosts") hint that this is about a much larger scale than I initially assumed.

The magical number of 6 confirmations actually doesn't have all that much meaning in reality; it is derived from the fact that a miner with 10% hashing power has a chance of less than 0.1% to double-spend a transaction with 6 confirmations, if I recall correctly. Anyway, this number was probably mentioned once and has been repeated ever since. Meni Rosenfeld wrote an interesting paper about this topic and came to the conclusion that one should rather look at the transacted amounts in relation to the probability that a malicious party might attempt to double-spend a payment, instead of sticking to some fixed number ("there is no gain in double-spending 20 BTC if it comes at the cost of losing one block reward of 25 BTC").
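
For reference, the catch-up calculation behind that rule of thumb: the attacker-success formula from the Bitcoin whitepaper, which Rosenfeld's paper refines. Here q is the attacker's share of hash power and z the number of confirmations:

Code:
from math import exp, factorial

def attacker_success(q: float, z: int) -> float:
    """Probability an attacker with hash share q ever catches up from z blocks behind."""
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        # Poisson-distributed number of attacker blocks mined meanwhile,
        # times the probability of never catching up from the remaining deficit.
        poisson = exp(-lam) * lam ** k / factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

print(attacker_success(0.10, 6))   # ~0.0002, the "less than 0.1%" figure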

A delay caused by channel initialization is a given at the very beginning, but it can be hidden if leaving clients are replaced early enough. Furthermore, there might not be any need to close a channel between honest parties right after the work contract has ended, since both parties seem to benefit from a faster setup. At best, a network with many nodes and many open channels is established over time. This of course doesn't imply each channel must be active, in the sense of transacting value.

This likely holds true for whatever underlying payment network is used, and I think it's probably best to separate things as you already did: there is the service layer (Xennet) and a payment layer (XenCoin, Bitcoin, ...), as well as another layer for the applications (such as XenTube).

My main question sort of remains, but I can phrase it with different words now: do you want to focus on building another payment layer or a service? Do you intend to do both?


So 5 minutes is too much time. In addition, POW is pretty obsolete. It's also centralized, de facto. I tend to find DPOS much better. What is your opinion?

Actually I'm not sure. I'm heavily biased in favor of Bitcoin, and while I have a high-level knowledge of other approaches and acknowledge the potential of alternatives, I still feel like this is not enough to answer this question properly. One of the main difficulties I see in this whole context is that you can be certain that a system failed once it has failed, but you can only guess that it won't fail in the future, based on assumptions derived from the information available at the moment. Calling POW obsolete seems a bit premature, given its track record of more than 5 years and the pretty solid understanding that has been established during this time. Simply put, it works. And without a doubt I consider Bitcoin the most solid basis to build upon.

A different topic however is the context and which properties you need or consider as valuable, whether that is network security, transaction cost, block confirmation delay, the user base, ...

ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
August 23, 2014, 11:16:14 AM
 #57


Sure, 6 confirmations is just a common number and does not represent a fixed risk. But as I wrote, even 1 confirmation can take too long (sometimes it takes 30 minutes without even one block!). The mean may be 10 minutes, but the variance is very high.
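
To put a number on that variance (a back-of-the-envelope model only: block arrivals treated as a Poisson process with a 10-minute mean, ignoring hashrate growth and fee effects):

Code:
from math import exp

MEAN_BLOCK_MINUTES = 10.0

def prob_no_block_within(minutes: float) -> float:
    # With exponentially distributed inter-block times, the chance of seeing
    # no block at all within `minutes` is exp(-minutes / mean).
    return exp(-minutes / MEAN_BLOCK_MINUTES)

print(prob_no_block_within(30))   # ~0.05: roughly 1 wait in 20 exceeds 30 minutes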

As for keeping channels open etc., see XenFS's "circuits" mechanism in the RFC. But that works for closed jobs like XenFS, not for arbitrary computation. Still, note that those frequent money locks have financial consequences. If the publisher always has to seek more clients just because the coin is slow, it has to hold a larger initial amount of coins, not all of which will be used. They also have to be split more times into many small accounts (the multisig ones), which costs fees.

You asked "do you want to focus on building another payment layer or a service?" and the answer is that I don't really feel unstoppable desire doing this, but I must to. I need a secure(!!) coin with max 1min block time (1min is also too much). I managed to come up with two solutions: either DPOS, or the blockchain modifications I listed on xennet.io (some of them might be novel).

As for Bitcoin being obsolete, let's face it bro: we're counting in dog years when we talk about tech; 5 years is a long time for a technology! And it's only the first one. It is no surprise at all that no one imagines a new coin that is exactly the same as Bitcoin, and not just because Bitcoin already exists. The mining is problematic. Also, POW makes the network centralized around the largest pools. And, again, it's slow.

Will be glad to continue this professional discussion with you, and get new advice and insights. You certainly know what you're talking about.

Tau-Chain & Agoras
SalimNagamato
Legendary
*
Offline Offline

Activity: 924
Merit: 1000



View Profile
August 23, 2014, 11:34:07 AM
 #58

can't wait to use my computers as a part of the cloud

if we had good internet connection at our homes
we could give up centralized hosting

not hashing, folding and curing (check FLDC merged-folding! reuse good GPUs)
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
August 23, 2014, 11:47:09 AM
 #59

if we had good internet connection at our homes
we could give up centralized hosting

it seems like most of the world isn't far from there in terms of internet connection speed

Tau-Chain & Agoras
smokim87
Hero Member
*****
Offline Offline

Activity: 938
Merit: 500


View Profile
August 23, 2014, 11:56:52 AM
 #60

I was just wondering what security measures would be set to prevent abuse? DDoS attacks, for example?

If a client is looking for say 1000 computers worth of computational power, would the client and each "Seller" have to follow these steps:
Quote


-Publisher Alice puts her announcement into the blockchain, that she's seeking a defined hardware capacity.
-Provider Bob polls the blockchain looking for announcements within his capacity. Seeing Alice's announcement, Bob connects to Alice's IP address.
-Alice and Bob challenge each others' keys – the key pair of their blockchain addresses – to verify each others' identity.
-Bob tells Alice what hardware he has available, the two negotiate a price, and after payment is made Bob creates a virtual machine that Alice accesses over SSH.

Do the 1000 computers form a cluster acting like one supercomputer?
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
August 23, 2014, 12:03:44 PM
Last edit: August 23, 2014, 12:31:42 PM by xenmaster
 #61

I was just wondering what security measures would be set to prevent abuse? DDoS attacks, for example?

The provider will be able to block (or limit) any network connections outside the publisher's box (we might block it by default to avoid such concerns). Providers may block data persistence as well. They will also be able to work only with specific trusted publishers (such as a university). That's where the beauty lies - it's an open and free market. It's all up to the participants' decisions and preferences, including the pricing.

As for security on the provider's PC itself, it all runs in a restricted VM, and the publisher does not gain elevated access even inside the Docker box, let alone elevated or non-elevated access to the VM containing the boxes, or access to the OS running the VM.
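
As an illustration of the kind of restrictions a provider could apply, assuming a Docker-style setup (the image name is a placeholder and the exact flag set is a provider preference, not a committed design):

Code:
import subprocess

# Launch the publisher's workload in a container with outbound networking
# disabled, a read-only root filesystem, no extra capabilities, and hard
# CPU/memory limits. All flags shown are standard Docker options.
subprocess.run([
    "docker", "run", "--rm",
    "--network", "none",            # block all network connections out of the box
    "--read-only",                  # no persistent writes inside the image
    "--cap-drop", "ALL",            # no elevated capabilities inside the container
    "--memory", "2g",
    "--cpus", "2",
    "publisher/workload:latest",    # placeholder image supplied by the publisher
], check=True)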

If a client is looking for say 1000 computers worth of computational power, would the client and each "Seller" have to follow these steps:

Yes, each step is required for any single pair of publisher and provider, and this points to why we need a dedicated coin.
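
To make those steps concrete, a minimal sketch of the announcement-and-connect part for one such pair (names and fields are illustrative, not a wire format):

Code:
from dataclasses import dataclass

@dataclass
class Announcement:              # published by Alice into the blockchain
    publisher_address: str
    required_capacity: dict      # e.g. {"cpus": 4, "ram_gb": 8}
    ip: str

def within_capacity(ann: Announcement, my_capacity: dict) -> bool:
    """Bob polls announcements and decides whether to connect to Alice."""
    return all(my_capacity.get(k, 0) >= v for k, v in ann.required_capacity.items())

ann = Announcement("1AliceAddr...", {"cpus": 4, "ram_gb": 8}, "203.0.113.7")
if within_capacity(ann, {"cpus": 8, "ram_gb": 16}):
    # Next, per the quoted steps: mutual key challenge, price negotiation,
    # multisig funding, then Alice gets SSH access to Bob's virtual machine.
    print("connect to", ann.ip)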

Do the 1000 computers form a cluster acting like one supercomputer?

Yes.

Let us say something about supercomputers: the fastest one in the world does about 33 petaflops, roughly the equivalent of 8,000 AMD 280X GPUs, which probably exist in every average Western city. 1,300 engineers were hired to build that giant computer. And still, not every researcher gets instant access to it...
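
The arithmetic behind that comparison, for the curious (the ~4.1 TFLOPS figure is the peak single-precision rating of a 280X, while the 33 petaflops is a Linpack number, so this is an order-of-magnitude comparison only):

Code:
FASTEST_SUPERCOMPUTER_FLOPS = 33e15    # ~33 petaflops (Linpack, 2014)
GPU_280X_FLOPS = 4.1e12                # ~4.1 TFLOPS peak single precision

print(round(FASTEST_SUPERCOMPUTER_FLOPS / GPU_280X_FLOPS))   # ~8000 GPUs
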
mikerutch
Full Member
***
Offline Offline

Activity: 196
Merit: 100


View Profile
August 24, 2014, 11:51:29 AM
 #62

I'm confused... is this a coin or what? If it is, what are the details? Sorry for the noob questions, just very unfamiliar with this.
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
August 25, 2014, 07:09:52 AM
 #63

I'm confused... is this a coin or what? If it is, what are the details? Sorry for the noob questions, just very unfamiliar with this.

Maybe listening to the podcast (see the bottom of the main post) will give you some introduction.
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
August 30, 2014, 02:29:42 PM
Last edit: August 30, 2014, 03:30:06 PM by xenmaster
 #64

Xennet Bug Bounty Announcement

We at Xennet are committed to top quality. That’s one of the reasons why the Xennet article, RFC and SDD are publicly available; but merely making them public doesn’t accomplish anything if people don’t read them.
 
For this reason, the Xennet bug bounty provides an opportunity for people who find a serious flaw in our plan to be compensated.

We offer 1 BTC to the first person who points out a serious flaw in our plan to build the Xennet supercomputer, one that causes us to make a change accordingly.

Note: The RFC and SDD documents still describe the traditional VM design. The current design, as described here on BTT and in the Xennet article, was written after we moved to OS virtualization, among some other changes.
armlock
Sr. Member
****
Offline Offline

Activity: 560
Merit: 250



View Profile
August 31, 2014, 04:22:06 AM
 #65

Guys, you first need to do POD for this ICO. Many investors want it today.

exoton
Sr. Member
****
Offline Offline

Activity: 350
Merit: 250


View Profile
September 05, 2014, 12:33:38 PM
 #66

when can we start mining / buying ? ^^
Equate
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500


View Profile
September 05, 2014, 12:57:46 PM
 #67

Quite detailed information given on your website, it was a long read Cheesy. When is the pre-sale btw?
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
September 05, 2014, 02:08:01 PM
 #68

Thank you for your interest in Xennet.
Full details about the presale will soon be given (~1 month).
YNWA2806
Sr. Member
****
Offline Offline

Activity: 400
Merit: 250


View Profile
September 06, 2014, 06:38:56 AM
 #69


Sounds cool, keeping my eye on this one!

Tons of coding though... can you elaborate on the status of development? Any roadmap?
MorAltsPlease
Sr. Member
****
Offline Offline

Activity: 286
Merit: 250

Fear only from the fear itself


View Profile
September 06, 2014, 06:47:51 AM
 #70


My participation depends only on the amount of BTC you're looking to raise. If you're looking to raise in the area of 600, then I'm all in and will even send a respectable BTC amount... Nevertheless, if it's going to be another SYS, SWARM or other monstrous ICO, then I'm out of here... please seriously consider an investment cap on the total amount raised... too many mega ICOs lately, and practically all of them very disappointing.
BreakoutCoins
Newbie
*
Offline Offline

Activity: 34
Merit: 0


View Profile
September 14, 2014, 02:58:42 PM
 #71


My participation depends only on the amount of BTC you're looking to raise. If you're looking to raise in the area of 600, then I'm all in and will even send a respectable BTC amount... Nevertheless, if it's going to be another SYS, SWARM or other monstrous ICO, then I'm out of here... please seriously consider an investment cap on the total amount raised... too many mega ICOs lately, and practically all of them very disappointing.
+1
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 14, 2014, 03:32:03 PM
 #72


My participation depends only on the amount of BTC you're looking to raise. If you're looking to raise in the area of 600, then I'm all in and will even send a respectable BTC amount... Nevertheless, if it's going to be another SYS, SWARM or other monstrous ICO, then I'm out of here... please seriously consider an investment cap on the total amount raised... too many mega ICOs lately, and practically all of them very disappointing.
+1

Gentlemen,
As YNWA2806 wrote, "tons of code". If you can make this product with only 600 BTC, please come work with us. You must be top devs.
Please take some time to understand the size of the project (Xennet, XenFS, XenTube and some more). Also note the various areas of expertise needed. Personally I have them all, but this project is too big to finish by myself within a reasonable time.

Tau-Chain & Agoras
siliconchip
Full Member
***
Offline Offline

Activity: 123
Merit: 100


View Profile
September 15, 2014, 08:05:32 AM
 #73

Only IPO, no free giveaway?  Huh Huh

profitofthegods
Sr. Member
****
Offline Offline

Activity: 378
Merit: 250


View Profile WWW
September 15, 2014, 08:25:05 AM
 #74

Will you make this open source?
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
September 15, 2014, 08:35:03 AM
 #75

Will you make this open source?

Not only open source, but fully decentralized, with no middleman; we can't earn a penny once the system is out.
thefix
Legendary
*
Offline Offline

Activity: 1045
Merit: 1000



View Profile
September 15, 2014, 09:30:12 AM
 #76

Looks epic!  Shocked
exoton
Sr. Member
****
Offline Offline

Activity: 350
Merit: 250


View Profile
September 15, 2014, 10:35:32 AM
 #77



Gentlemen,
As YNWA2806 wrote, "tons of code". If you can make this product with only 600 BTC, please come work with us. You must be top devs.
Please take some time to understand the size of the project (Xennet, XenFS, XenTube and some more). Also note the various areas of expertise needed. Personally I have them all, but this project is too big to finish by myself within a reasonable time.

So when do you think the ICO will be released, and about how high will the mcap be?
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 15, 2014, 11:40:03 AM
 #78



So when do you think the ICO will be released, and about how high will the mcap be?

Full details will be given in a few weeks (or even days).
As for the mcap, recall that this is a much larger market than anything known so far in the world of crypto (I'm speaking about the market for distributed arbitrary native computation, of course). So I expect it to begin in the billions. We also thought a lot about a fair coin distribution.
As said, details will be released very soon.

Tau-Chain & Agoras
tobeaj2mer01
Legendary
*
Offline Offline

Activity: 1104
Merit: 1000


Angel investor.


View Profile
September 16, 2014, 02:43:07 AM
 #79



Full details will be given in a few weeks (or even days).
As for the mcap, recall that this is a much larger market than anything known so far in the world of crypto (I'm speaking about the market for distributed arbitrary native computation, of course). So I expect it to begin in the billions. We also thought a lot about a fair coin distribution.
As said, details will be released very soon.

Did you do technical feasibility research for this coin? Sometimes the theory is good but not realistic. And how can we know you or your team has the ability to complete this project? Can you share some information?

Sirx: SQyHJdSRPk5WyvQ5rJpwDUHrLVSvK2ffFa
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
September 16, 2014, 04:05:13 AM
 #80



Did you do technical feasibility research for this coin? Sometimes the theory is good but not realistic. And how can we know you or your team has the ability to complete this project? Can you share some information?

Feasibility research was done intensively and is reflected in the public technical documents, which are far deeper and broader than is common for new crypto projects.
Up to now, even with media exposure and a bounty, no one found a flaw.
You're welcome to check our devs' LinkedIn profiles and get some impression of our ability, in addition to the professional documents.
We will gladly get into any public technical questions, discussions, suggestions etc.
So take some time to read the docs, and you might find a flaw and win the bounty!
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 16, 2014, 04:43:04 AM
 #81

Up to now, even with media exposure and a bounty, no one found a flaw.

No flaw has been disclosed.  There is a subtle difference.  (Presumably anyone finding a significant flaw would stand to gain more by not disclosing it, no?)
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 16, 2014, 05:08:47 AM
 #82

Up to now, even with media exposure and a bounty, no one found a flaw.

No flaw has been disclosed.  There is a subtle difference.  (Presumably anyone finding a significant flaw would stand to gain more by not disclosing it, no?)

Not really. Think of the HPC and Big Data giants. They have a great interest in finding a flaw. Many of them contact us to see how they will be able to keep their business going with Xennet. Speaking of Xennet, let me say that we'll change its name soon, because of https://www.dropbox.com/s/am2g3jk5jo60itz/XENNET%20Ltr.pdf?dl=0

Tau-Chain & Agoras
tobeaj2mer01
Legendary
*
Offline Offline

Activity: 1104
Merit: 1000


Angel investor.


View Profile
September 16, 2014, 06:34:47 AM
 #83



Did you do technical feasibility research for this coin? Sometimes the theory is good but not realistic. And how can we know you or your team has the ability to complete this project? Can you share some information?

Feasibility research was done intensively and is reflected in the public technical documents, which are far deeper and broader than is common for new crypto projects.
Up to now, even with media exposure and a bounty, no one found a flaw.
You're welcome to check our devs' LinkedIn profiles and get some impression of our ability, in addition to the professional documents.
We will gladly get into any public technical questions, discussions, suggestions etc.
So take some time to read the docs, and you might find a flaw and win the bounty!


Thanks for the explanation. Where is your team's LinkedIn? Can you post the LinkedIn links in the OP?

Sirx: SQyHJdSRPk5WyvQ5rJpwDUHrLVSvK2ffFa
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 16, 2014, 06:40:55 AM
 #84

Not really. Think of the HPC and Big Data giants. They have a great interest to find a flaw.

They also probably have the greatest interest not to disclose it to you.

Quote
Many of them contact us to see how they will be able to keep their business with Xennet.

I have trouble believing any major players are really threatened.  Xennet isn't the first attempt at a decentralized resource exchange.  Heck, many of them even do their own "decentralized" resource exchange, already.  Wink

Quote
Mentioning Xennet, let me say that we'll modify its name soon, cause of https://www.dropbox.com/s/am2g3jk5jo60itz/XENNET%20Ltr.pdf?dl=0

Interesting.  I'm sure some other coins have gotten similar C&D notices lately, but you're the first I've seen who has said such publicly.
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
September 19, 2014, 12:45:57 AM
 #85

Partial LinkedIn profiles list now appears on bottom of OP.
Note that we began making the transition from Xennet to Zennet.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 19, 2014, 12:54:25 AM
 #86

Not really. Think of the HPC and Big Data giants. They have a great interest to find a flaw.
They also probably have the greatest interest not to disclose it to you.
Depends. There are many talented people who will be happy to get 1BTC, but not really into ruining networks.

I have trouble believing any major players are really threatened.  Xennet isn't the first attempt at a decentralized resource exchange.  Heck, many of them even do their own "decentralized" resource exchange, already.

I have never heard of any such successful attempt. Please let me know if you have. Yes, they are threatened; they say it themselves. We indeed solved old problems which many have thought about, thanks to new cutting-edge technology.

Quote
Mentioning Xennet, let me say that we'll modify its name soon, cause of https://www.dropbox.com/s/am2g3jk5jo60itz/XENNET%20Ltr.pdf?dl=0
Interesting.  I'm sure some other coins have gotten similar C&D notices lately, but you're the first I've seen who has said such publicly.

How come you're so sure we're just like everyone else? Wink

Tau-Chain & Agoras
bitcoinmon
Newbie
*
Offline Offline

Activity: 10
Merit: 0


View Profile
September 19, 2014, 12:54:41 AM
 #87

I don't know about this one yet, but I'll keep an eye on for sure.
nonocoin
Member
**
Offline Offline

Activity: 110
Merit: 10


View Profile
September 19, 2014, 01:04:16 AM
 #88

let's take on the big boys in their own back yard! Grin  Grin

HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 19, 2014, 04:04:51 AM
 #89

Not really. Think of the HPC and Big Data giants. They have a great interest to find a flaw.
They also probably have the greatest interest not to disclose it to you.
Depends. There are many talented people who will be happy to get 1BTC, but not really into ruining networks.

These would be mostly or entirely disjoint sets, no?  I'm not sure what the one statement really says about the other.

Quote
I have never heard of any such successful attempt. Please let me know if you have. Yes, they are threatened; they say it themselves. We indeed solved old problems which many have thought about, thanks to new cutting-edge technology.

Seccomp is in the linux kernel largely because of CPUShare and related projects.  You are correct that none were largely "successful" in the sense that none have gained broad mass-market adoption.  There were many attempts of various sorts.  None survived, unless you count Globus style initiatives, and I don't.

As far as I can tell, you have not presented solutions to any of the "old problems" that kept these sorts of projects from taking off in the past.  Your model actually seems largely reiterative of them, with the exception of the introduction of crypto for payment. (Though CPUShare markets did use escrow models to achieve the same goals.)

Your model seems to suffer from all of the same problems of lacking convenience, security and data privacy, authentication, rich service discovery, and adaptability.

How does this really intend to compete with the "semi-private" overflow trade commonly practiced by this market already?  How are you going to take market share from this without some advantage, and with so many potential disadvantages?

Quote
Quote
Mentioning Xennet, let me say that we'll modify its name soon, cause of https://www.dropbox.com/s/am2g3jk5jo60itz/XENNET%20Ltr.pdf?dl=0
Interesting.  I'm sure some other coins have gotten similar C&D notices lately, but you're the first I've seen who has said such publicly.

How come you're so sure we're just like everyone else? Wink

Huh?  Who said anything about anyone being just like everyone else?
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 19, 2014, 04:09:25 AM
 #90


As far as I can tell, you have not presented solutions to any of the "old problems" that kept these sorts of projects from taking off in the past.  Your model actually seems largely reiterative of them, with the exception of the introduction of crypto for payment. (Though CPUShare markets did use escrow models to achieve the same goals.)

Your model seems to suffer from all of the same problems of lacking convenience, security and data privacy, authentication, rich service discovery, and adaptability.

How does this really intend to compete with the "semi-private" overflow trade commonly practiced by this market already?  How are you going to take market share from this without some advantage, and with so many potential disadvantages?

That's exactly the point bro. Go over the literature and see how we actually solved the real underlying problems, which were pending for a long time. See also Q&A on this thread. I began working on it ~1yr ago. It's far from being only virtualization plus blockchain, and there are several very significant proprietary innovations. All can be seen from the docs.

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 19, 2014, 05:40:20 AM
 #91

That's exactly the point bro. Go over the literature and see how we actually solved the real underlying problems, which were pending for a long time.

I've read over all of the materials that you've made available, "bro."

I've re-read them.

I still don't see where the problems get solved.

Quote
See also Q&A on this thread. I began working on it ~1yr ago. It's far from being only virtualization plus blockchain, and there are several very significant proprietary innovations. All can be seen from the docs.

Where?  Maybe I'm missing something, but what is the actual innovation, here?  How is this preferable over spot priced surplus resource from any of the big players?  What is the competitive advantage?  Why will XZennet "make it" when every prior attempt didn't, and failed?

The workflow is inconvenient.  Reliably executing a batch of jobs will carry a lot of overhead in launching additional jobs, continually monitoring your service quality, managing reputation scores, etc.  If you don't expend this overhead (in time, money, and effort) you will pay for service you don't receive or, worse, will end up with incorrect results.

By launching jobs, you're trusting in the security of a lot of random people.  As you've said, you have to assume many of these people will be downright malicious.  Sure, you can cut off computation with them, but by then they may already be selling off your data and/or code to a third party.  Even if the service provided is entirely altruistic the security on the host system might be lax, exposing your processes to third parties anyway, and in a way that you couldn't even detect as the sandbox environment precludes any audit trail over it.  Worse yet, your only recourse after the fact is a ding on the provider's reputation score.

Since authenticity of service can't be validated beyond a pseudonym and a reputation score, you can't assume computation to be done correctly from any given provider.  You are only partly correct that this can be exponentially mitigated by simply running the computation multiple times and comparing outputs - for some types of process the output would never be expected to match and you could never know if discrepancy was due to platform differences, context, or foul play.  At best this makes for extra cost in redundant computations, but in most cases it will go far beyond that.

Service discovery (or "grid" facilities) is a commonly leveraged feature in this market, particularly among the big resource consumers.  Your network doesn't appear to be capable of matching up buyer and seller, and carrying out price discovery, on anything more than basic infrastructural criteria.  Considering the problem of authenticity, I'm skeptical that the network can succeed in price discovery even on just these "low level" resource allocations, since any two instances of the same resource are not likely to be of an equivalent utility, and certainly can't be assumed as such.  (How can you price an asset that you can't meaningfully or reliably qualify (or even quantify) until after you have already transacted for it?)

How can this model stay competitive with such a rigid structure?  You briefly/vaguely mention GPUs in part of some hand waving, but demonstrate no real plan for dealing with any other infrastructure resource, in general.  The technologies employed in HPC are very much a moving target, more so than most other data-center housed applications.  Your network offers a very prescriptive "one size fits all" solution which is not likely to be ideal for anyone, and is likely to be sub-optimal for almost everyone.

Where is the literature that I've missed that "actually solved" any of these problems?  Where is this significant innovation that suddenly makes CPUShare market "work" just because we've thrown in a blockchain and PoW puzzles around identity?

(I just use CPUShare as the example, because it is so very close to your model.  They even had focus on a video streaming service too!  Wait, are you secretly Arcangeli just trying to resurrect CPUShare?)

What is the "elevator pitch" for why someone should use this technology over the easier, safer, (likely) cheaper, and more flexible option of purchasing overflow capacity on open markets from established players?  Buying from amazon services (the canonical example, ofc) takes seconds, requires no thought, doesn't rely on the security practices of many anonymous people, doesn't carry redundant costs and overheads (ok AWS isn't the best example of this, but at least I don't have to buy 2+ instances to be able to have any faith in any 1) and offers whatever trendy new infrastructure tech comes along to optimize your application.

Don't misunderstand, I certainly believe these problems are all solvable for a decentralized resource exchange.  My only confusion is over your assertions that the problems are solved.  There is nothing in the materials that is a solution.  There are only expensive and partial mitigation for specific cases, that aren't actually the cases people will pragmatically care about.

ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 19, 2014, 06:09:19 AM
Last edit: September 19, 2014, 06:42:25 AM by ohad
 #92

ok np, let's go step by step:

Where?  Maybe I'm missing something, but what is the actual innovation, here?  How is this preferable over spot priced surplus resource from any of the big players?  What is the competitive advantage?  Why will XZennet "make it" when every prior attempt didn't, and failed?

For example, the pricing model is totally innovative. It measures the consumption much more fairly than common services. It also optimally mitigates the difference between different hardware. The crux here is to make assumptions that are relevant only for distributed applications. Then comes the novel algorithm (which is an economic innovation by itself) of how to price with respect to unknown variables under a linearity assumption (which surprisingly holds in Zennet's case when talking about accumulated resource consumption metrics).
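
To make the linear-algebra remark concrete, here is the rough shape of such a fit (an illustration of pricing by least squares over accumulated metrics, with invented numbers; it is not the algorithm from the pricing paper):

Code:
import numpy as np

# Rows: sampling intervals. Columns: accumulated resource counters
# (CPU instructions, sequential RAM reads, random disk writes).
A = np.array([
    [1.0e11, 2.0e8, 1.0e5],
    [2.0e11, 3.9e8, 2.1e5],
    [3.1e11, 6.0e8, 2.9e5],
    [4.0e11, 8.1e8, 4.0e5],
])
# Benchmark-derived value of each interval, in coins.
v = np.array([0.010, 0.021, 0.030, 0.041])

# Minimum-norm least-squares solution via the pseudo-inverse: per-unit prices
# for each counter, even though the counters are correlated with one another.
prices = np.linalg.pinv(A) @ v
print(prices)        # coins per instruction / per read / per write
print(A @ prices)    # reproduces v up to the residual noise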

The workflow is inconvenient.  Reliably executing a batch of jobs will carry a lot of overhead in launching additional jobs, continually monitoring your service quality, managing reputation scores, etc.  If you don't expend this overhead (in time, money, and effort) you will pay for service you don't receive or, worse, will end up with incorrect results.

I don't understand the difference between this situation and any other cloud service.
Also note that Zennet is a totally free market. All parties set their own desired price.

By launching jobs, you're trusting in the security of a lot of random people.  As you've said, you have to assume many of these people will be downright malicious.  Sure, you can cut off computation with them, but by then they may already be selling off your data and/or code to a third party.  Even if the service provided is entirely altruistic the security on the host system might be lax, exposing your processes to third parties anyway, and in a way that you couldn't even detect as the sandbox environment precludes any audit trail over it.  Worse yet, your only recourse after the fact is a ding on the provider's reputation score.

I cannot totally cut this risk, but I can give you control over the probability and expectation of loss, which come to reasonable values when massively distributed applications are in mind, together with the free market principle.
Examples of risk reducing behaviors:
1. Each worker serves many (say 10) publishers at once, hence reducing the risk 10-fold for both parties.
2. Micropayment protocol is taking place every few seconds.
3. Since the system is for massive distributed applications, the publisher can rent say 10K hosts, and after a few seconds dump the worst 5K.
4. One may only serve known publishers such as universities.
5. One may offer extra reliability (like existing hosting firms) and charge appropriately (for the last two points, all they have to do is configure their price/publishers in the client and put their address on their website, so people will know which address to trust).
6. If one computes the same job several times with different hosts, they can reduce the probability of miscalculation. As the required investment grows linearly, the risk vanishes exponentially. (Now I see you already wrote this -- recall this is a free market, so if the "acceptable" risk probability means, say, "do the calculation 4 times", the price will be adjusted accordingly.)
7. Users can filter spammers by requiring work to be invested in pubkey generation (identity mining); a minimal sketch of this appears right after this list.
I definitely forgot several more; they appear in the docs.
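
A minimal sketch of the identity-mining idea from point 7 (the difficulty constant and the key-generation stand-in are made up; a real client would run the check over an actual public key):

Code:
import hashlib, os

DIFFICULTY_BITS = 16   # tunable: how much work an identity must demonstrate

def identity_has_work(pubkey: bytes) -> bool:
    """Accept only public keys whose hash starts with DIFFICULTY_BITS zero bits."""
    digest = int.from_bytes(hashlib.sha256(pubkey).digest(), "big")
    return digest >> (256 - DIFFICULTY_BITS) == 0

def mine_identity() -> bytes:
    # Stand-in for real key generation: keep drawing candidate keys until one
    # passes the work check. A spammer must repeat this for every fake identity.
    while True:
        candidate = os.urandom(33)
        if identity_has_work(candidate):
            return candidate

print(mine_identity().hex())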

Since authenticity of service can't be validated beyond a pseudonym and a reputation score, you can't assume computation to be done correctly from any given provider.  You are only partly correct that this can be exponentially mitigated by simply running the computation multiple times and comparing outputs - for some types of process the output would never be expected to match and you could never know if discrepancy was due to platform differences, context, or foul play.  At best this makes for extra cost in redundant computations, but in most cases it will go far beyond that.

Service discovery (or "grid" facilities) is a commonly leveraged feature in this market, particularly among the big resource consumers.  Your network doesn't appear to be capable of matching up buyer and seller, and carrying out price discovery, on anything more than basic infrastructural criteria.  Considering the problem of authenticity, I'm skeptical that the network can succeed in price discovery even on just these "low level" resource allocations, since any two instances of the same resource are not likely to be of an equivalent utility, and certainly can't be assumed as such.  (How can you price an asset that you can't meaningfully or reliably qualify (or even quantify) until after you have already transacted for it?)

See above regarding the pricing algorithm, which addresses exactly those issues.
As for matching buyers and sellers, we don't do that: the publisher announces that they want to rent computers, publishing their IP address; then interested clients connect to them and a negotiation begins, without any third-party interference.

How can this model stay competitive with such a rigid structure?  You briefly/vaguely mention GPUs in part of some hand waving, but demonstrate no real plan for dealing with any other infrastructure resource, in general.  The technologies employed in HPC are very much a moving target, more so than most other data-center housed applications.  Your network offers a very prescriptive "one size fits all" solution which is not likely to be ideal for anyone, and is likely to be sub-optimal for almost everyone.

The structure is not rigid at all - quite the contrary, it gives the user full control.
The algorithm is also agnostic to all kinds of resources -- it even covers the unknown ones!! That's a really cool mathematical result.

Where is the literature that I've missed that "actually solved" any of these problems?  Where is this significant innovation that suddenly makes CPUShare market "work" just because we've thrown in a blockchain and PoW puzzles around identity?

(I just use CPUShare as the example, because it is so very close to your model.  They even had focus on a video streaming service too!  Wait, are you secretly Arcangeli just trying to resurrect CPUShare?)

What is the "elevator pitch" for why someone should use this technology over the easier, safer, (likely) cheaper, and more flexible option of purchasing overflow capacity on open markets from established players?  Buying from amazon services (the canonical example, ofc) takes seconds, requires no thought, doesn't rely on the security practices of many anonymous people, doesn't carry redundant costs and overheads (ok AWS isn't the best example of this, but at least I don't have to buy 2+ instances to be able to have any faith in any 1) and offers whatever trendy new infrastructure tech comes along to optimize your application.

Don't misunderstand, I certainly believe these problems are all solvable for a decentralized resource exchange.  My only confusion is over your assertions that the problems are solved.  There is nothing in the materials that is a solution.  There are only expensive and partial mitigation for specific cases, that aren't actually the cases people will pragmatically care about.


As for Zennet vs AWS, see the detailed (yet partial) table in the "About Zennet" article.
If you haven't seen it above, we renamed from Xennet because of this.

I think that what I wrote so far shows that many issues were thought through and answers were given.
Please give it some more thought, and please do share further thoughts.

Tau-Chain & Agoras
profitofthegods
Sr. Member
****
Offline Offline

Activity: 378
Merit: 250


View Profile WWW
September 19, 2014, 09:47:05 AM
 #93



The workflow is inconvenient.  Reliably executing a batch of jobs will carry a lot of overhead in launching additional jobs, continually monitoring your service quality, managing reputation scores, etc.  If you don't expend this overhead (in time, money, and effort) you will pay for service you don't receive or, worse, will end up with incorrect results.



If it's possible for people to run this thing on a fairly normal home computer, then these extra overheads are not a problem, because compared to a commercial operation the 'miner' will have substantial cost savings from not having paid for dedicated hardware, facilities, staff etc., which should more than make up for the additional computational costs.
xenmaster
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
September 19, 2014, 02:11:10 PM
 #94

Ohad Asor was invited to present Zennet at a Big Data & Crowd Computing Conference in Amsterdam
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 19, 2014, 06:12:13 PM
Last edit: September 19, 2014, 06:22:29 PM by HunterMinerCrafter
 #95

For example, the pricing model is totally innovative.

What is the innovation?  What new contribution are you making?  I hear you saying over and over again "it's innovative, it's innovative, no really it's TOTALLY innovative" but you haven't yet come out and said what the innovation is.

Averaging some procfs samples is hardly innovative.  Many providers (at least the ones that don't charge based on straight wall-clock time) do this.  What are you doing in this pricing model that is new?

Quote
It measures the consumption much more fairly than common services.

Except that it doesn't, and this is somewhat central to my point.  It measures consumption in (more or less) the same way as everyone else, but instead of having a company to complain to or sue if they lie about their stats the user only has a pseudonym to leave a comment on.  The system removes the actual accountability and replaces it with a trust metric, and further makes it exceedingly easy to lie about stats.

Quote
It also optimally mitigates the difference between different hardware.

How?  This is certainly not explained, or not explained well, in your materials.  I see some hand waving about risk analysis in the client, but nothing that actually does anything about it.

Quote
The crux here is to make assumptions that are relevant only for distributed applications.

Ok, what assumptions are we talking about?  You say these things, but never complete your thoughts.  It gets tedious to try to draw details out of you like this.

Quote
Then comes the novel algorithm (which is an economic innovation by itself) of how to price with respect to unknown variables under a linearity assumption (which surprisingly holds in Zennet's case when talking about accumulated resource consumption metrics).

Aside from the novelty claim (RE my first point, I don't see the novelty), I see two issues here.  First, the linearity assumption is a big part of the rigidity.  It applies fine for a traditional Harvard architecture CPU, but breaks down on any "real" tech.  How does your pricing model price a neuromorphic architecture? How does your pricing model measure a recombinant FPGA array?  How do you account for hardware that literally changes (non-linearly!) as the process runs?  (The answer is self-evident from your own statement - right now you can't.)

Second, the pricing model only prices against the micro-benchmarks (indicating, right there, that the performance profile indicated in the pricing model will differ from the applied application!) and doesn't account for environmental context.  With modern distributed systems most of the performance profile comes down to I/O wait states (crossing para-virtualization thresholds) and bursting semantics.  I don't see either of these actually being accounted for.  (Maybe you should spend some time with engineers and A.E.s from AWS, Google, Rackspace, MediaFire, Joyent, DigitalOcean, etc., and find out what your future users are actually going to be concerned about in their pricing considerations.)

Quote
The workflow is inconvenient.  Reliably executing a batch of jobs will carry a lot of overhead in launching additional jobs, continually monitoring your service quality, managing reputation scores, etc.  If you don't expend this overhead (in time, money, and effort) you will pay for service you don't receive or, worse, will end up with incorrect results.

I don't understand what the difference is between this situation and any other cloud service.

With any other cloud service I can spin up only one instance and be fairly confident that it will perform "as advertised" and if it doesn't I can sue them for a refund.

With your service I can't spin up one instance and be confident about anything.  If it turns out that one instance is not as advertised - oh well, too late - there is no recourse for compensation, no jurisdictional court to go to, no company to boycott, nothing but a ding on a trust rating.  (Again, if you think a trust rating works in a vacuum like this, you should just look around at some of the trust rating behavior on these forums.  It doesn't.)

Quote
Also note that Zennet is a totally free market. All parties set their own desired price.

I've been ignoring this catch-22 statement up to now.  Either the pricing model is a free market, or the pricing model is controlled by your formula and magical risk-averse client.  It can't be both free and regulated by protocol, this is a contradiction.  Either your users manage the pricing or the software manages the pricing for them, which is it?  (I'm not sure either is sustainable, without changes elsewhere in the model.)

Quote
I cannot totally cut this risk,

Now we get to why I'm actually "all riled up" here.  You could totally cut that risk.  Over the past decade, real solutions to these problems have been found.  You aren't applying them.

Why the heck not?!?!?!

Also, this is another of your contradictory statements.  You simultaneously claim some innovative new solutions to the problems at hand, and that you cannot solve them.

Quote
but I can give you control over the probability and expectation of loss, which come to reasonable values when massively distributed applications are in mind, together with the free market principle.

Again, setting aside the catch-22 assumption that the market is free (and not re-iterating that these are not solutions but costly partial mitigations), this statement still doesn't stand up to scrutiny.  The values actually converge to *less* reasonable sums as you scale up distribution!

Quote
Examples of risk reducing behaviors:
1. Each worker serves many (say 10) publishers at once, hence reducing the risk 10-fold for both parties.

This reduces risk on the part of the worker, but increases risk on the part of the publisher by adding to the statistical variance of the delta between the worker's measured performance and applied performance.  In other words, this behavior makes it safer (though not 10-fold, as you claim) for the worker as they diversify some, but their act of diversification introduces inaccuracy into the micro-benchmark.  Now my application's performance profile is being influenced by the behaviors of 9 other applications moment to moment.

Quote
2. The micropayment protocol takes place every few seconds.
3. Since the system is for massively distributed applications, the publisher can rent, say, 10K hosts, and after a few seconds dump the worst 5K.

Another point that you keep reiterating but not expounding.  The problem here is the same problem as any speculative investment - past performance does not guarantee (or even meaningfully predict) future results.  As a publisher I spin up 10K instances and then dump half.  Now I've paid for twice as much work over those "few" seconds (a few seconds on 10K CPUs is an awfully long time, and gets expensive fast) and dumped half of my providers without knowing which in that half were malicious or which just had some unlucky bout of extra cache misses.  Further, I have no reason to believe that the 5K I'm left with won't suddenly start under-performing relative to the 5K that I've just ditched.

You could raise a counter-argument here by saying "well just spin up 5K replacement nodes after you drop the first set, and then re-sort and drop the 5K lowest, rinse, repeat" but this actually only exacerbates all of the other problems related to this facet of the model!  Now I'm *continuously* paying for those "few" wasted seconds over and over, I have to structure my application in such a way as to handle this sort of consistent soft failure (so doing anything like MRM coordination in a useful way becomes both very difficult and laden with even more overheads!) and I still have no reason to believe that my system will ever converge onto an optimal set of providers.  (Maybe there simply aren't 5K non-malicious workers up at the moment, or maybe some of the malicious nodes are preventing my reaching that set!)
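
To put rough, purely hypothetical numbers on that overhead (none of these figures come from the Zennet docs), a quick sketch:

Code:
# Back-of-the-envelope sketch of the "rent 2x, keep culling the worst half"
# strategy. Every number here is hypothetical and only for illustration.
PRICE_PER_HOST_SECOND = 0.000001   # hypothetical ZenCoin per host-second
PROBE_SECONDS = 5                  # "a few seconds" of probing per round
HOSTS_WANTED = 5_000               # hosts the publisher actually needs
ROUNDS = 20                        # cull-and-replace rounds

# Each round runs 2x the wanted hosts for the probe window and then drops
# the bottom half, so half of every probe window is pure overhead.
overhead_host_seconds = ROUNDS * HOSTS_WANTED * PROBE_SECONDS
overhead_cost = overhead_host_seconds * PRICE_PER_HOST_SECOND

print(f"wasted host-seconds: {overhead_host_seconds:,}")
print(f"overhead cost: {overhead_cost:.4f} hypothetical ZenCoin")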

Quote
4. One may only serve known publishers such as universities.

Again, this is a behavior that only mitigates risk for the worker.  The worker doesn't need much risk mitigation - they are largely in control of the whole transaction - the buy side needs the assurances.

Quote
5. One may offer extra reliability (like existing hosting firms) and charge appropriately. (For the last two points, all they have to do is configure their price/publishers in the client, and put their address on their website so people will know which address to trust.)

Wait, now you're proposing the sort of solution of "just re-sell AWS" as a compensatory control?!?!  This isn't exactly what I meant by competition, and precludes reliable service from ever becoming priced competitively.  (Why wouldn't the buy side just skip the middleman and go straight to the established hosting firm?)

Quote
6. If one computes the same job several times with different hosts, they can reduce the probability of miscalculation. As the required invested amount grows linearly, the risk vanishes exponentially. (now I see you wrote it -- recall this is a free market. so if "acceptable" risk probability is say "do the calc 4 times", the price will be adjusted accordingly)

Did you miss the part where I pointed out that this behavior is only applicable to a subset of useful computations?  Here's a "real world hypothetical" related to an actual HPC initiative I was once involved with - ranking page-rank implementations by performance over time.  In other words, systematically and automatically comparing search engine result quality over time.

There are two places where this behavior directly fails to meet its goal.  First, there's the obvious problem of floating point precision.  The numbers spit out from two identical software runs on two different architectures will not be the same, and the VM layer specifically precludes me from necessarily knowing the specifics of the semantics applied at the hardware.  Second, even if the architectures are identical as well, meaning float semantics are identical, the algorithm itself lacks the necessary referential transparency to be able to make the assumption of consistency!  Instance A performing the query against the service might (usually WILL, in my experience!) alter the results issued to instance B for the same query!  (Unfortunately, most useful processes will fall into this category of side-effecting.)
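
A minimal pure-Python illustration of the floating point point (nothing Zennet-specific here): the "same" sum, computed in a different order, will very likely not be bit-identical, so a naive equality check between redundant runs can fail even when both workers are honest.

Code:
import random

random.seed(42)
values = [random.uniform(-1e10, 1e10) for _ in range(100_000)]

forward_sum = sum(values)              # one "host" sums in the given order
reverse_sum = sum(reversed(values))    # another "host" sums in reverse order

print(forward_sum == reverse_sum)      # very likely False
print(abs(forward_sum - reverse_sum))  # small, but not zero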

Quote
7. User can filter spammers by requiring work to be invested on the pubkey generation (identity mining).

This does nothing to "filter spammers"; it only enforces a (somewhat arguably) fair distribution of identities.  All this does is establish a small cost to minting a new identity and give identity itself some base value.  In other words, you'll inevitably have some miners who never do a single job, and only mine new identities with which to attempt to claim announcements without ever fulfilling anything more than the micro-benchmark step and claiming their "few seconds' worth" of payments.  (They don't even have to actually expend resources to run the micro-benchmarks; they can just make up some numbers to fill in.)

(Rationally, I don't see why any worker would ever bother to actually work when they simply "don't have to" and still get paid a bit for doing effectively nothing.)
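
For readers unfamiliar with the idea, here is a toy sketch of what "identity mining" amounts to; random bytes stand in for a real public key and the 16-bit difficulty target is an arbitrary illustrative choice, not a Zennet parameter.  Note that nothing about it binds the identity to any actual work.

Code:
import hashlib
import os

DIFFICULTY_BITS = 16   # illustrative target, not a Zennet parameter

def mine_identity():
    """Keep generating candidate identities until the pubkey hash meets the target."""
    attempts = 0
    while True:
        attempts += 1
        pubkey = os.urandom(33)                      # stand-in for a real public key
        digest = hashlib.sha256(pubkey).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0:
            return pubkey, attempts

identity, tries = mine_identity()
print(f"mined identity {identity.hex()} after {tries:,} attempts")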

Quote
I definitely forgot several more and they appear on the docs.

What doesn't appear in the docs, or your subsequent explanations, are these solutions that you keep talking about.  I see a lot of mitigations (almost none of which are even novel) but no resolutions.  Where have you solved even a single previously unsolved problem?  Which problem, and what is the solution?

Can you really not sum up your claimed invention?  If you can't, I'm suspicious of the claim.  (Does anyone else out there feel like they've met this obligation?  Is it just me, missing something obvious?  I'm doubtful.)

Quote
Quote
Service discovery (or "grid" facilities) is a commonly leveraged feature in this market, particularly among the big resource consumers.  Your network doesn't appear to be capable of matching up buyer and seller, and carrying out price discovery, on anything more than basic infrastructural criteria.  Considering the problem of authenticity, I'm skeptical that the network can succeed in price discovery even on just these "low level" resource allocations, since any two instances of the same resource are not likely to be of an equivalent utility, and certainly can't be assumed as such.  (How can you price an asset that you can't meaningfully or reliably qualify (or even quantify) until after you have already transacted for it?)

See above regarding the pricing algorithm which addresses exactly those issues.

EH?  Maybe you misunderstood what I meant by service discovery and pricing.  Pricing raw resources is one thing, but the big consumers are less interested in shopping based on low level hardware metrics, and more interested in shopping based on specific service definitions.  (If current trends continue this will only become increasingly true.)  Your system offers a publisher nothing to base a query for a particularly structured SOA on.

Quote
As for matching buyers and sellers, we don't do that: the publisher announces they want to rent computers, publishing their IP address; then interested clients connect to them and a negotiation begins without any third-party interference.

Again, this (like much of your writing, it seems) is contradictory.  Either the market is free and the system does nothing to pair a publisher with a worker, or the market is partially controlled and the system attempts to match an appropriate worker to a publisher.  You've made several references to the client being risk averse implying that the system is actually partially regulated.  You can't subsequently go on to claim that the system does nothing to match buyers and sellers and is entirely free.

In any case, I'm not sure what that statement has to do with the service discovery concern.

Quote
The structure is not rigid at all - on the contrary, it gives the user full control.
The algorithm is also agnostic to all kinds of resources -- it even covers the unknown ones!! That's a really cool mathematical result.

I've already touched on this above, so I won't reiterate.

Can you show this "really cool mathematical result" somehow?  I'd be interested to see such a proof, and even more interested in throwing such a proof past a few category theorist friends.  I'm highly skeptical that such a result can be reasonably had, as it is difficult to form meaningful morphisms over unknown sets.  Surely such a mathematical result would be of great interest in the philosophical community!  Cheesy

I'm putting this on my "big list of unfounded claims made in the crypto-currency space that I personally believe to be intractable and won't hold my breath to be shown."

(Seems like that list is getting bigger every day, now!  Undecided)

Quote
Where is the literature that I've missed that "actually solved" any of these problems?  Where is this significant innovation that suddenly makes CPUShare market "work" just because we've thrown in a blockchain and PoW puzzles around identity?

(I just use CPUShare as the example, because it is so very close to your model.  They even had focus on a video streaming service too!  Wait, are you secretly Arcangeli just trying to resurrect CPUShare?)

What is the "elevator pitch" for why someone should use this technology over the easier, safer, (likely) cheaper, and more flexible option of purchasing overflow capacity on open markets from established players?  Buying from amazon services (the canonical example, ofc) takes seconds, requires no thought, doesn't rely on the security practices of many anonymous people, doesn't carry redundant costs and overheads (ok AWS isn't the best example of this, but at least I don't have to buy 2+ instances to be able to have any faith in any 1) and offers whatever trendy new infrastructure tech comes along to optimize your application.

Don't misunderstand, I certainly believe these problems are all solvable for a decentralized resource exchange.  My only confusion is over your assertions that the problems are solved.  There is nothing in the materials that is a solution.  There are only expensive and partial mitigations for specific cases, which aren't actually the cases people will pragmatically care about.


As for Zennet vs AWS, see detailed (yet partial) table on the "About Zennet" article.

Another reply that doesn't match, at all, what it is replying to.  I'm seeing some persistent themes to your writing.

Quote
If you haven't seen above, we renamed from Xennet cause of this.

This one really baffled me.

I did see this.  We even discussed it.

Did you just forget?  Bad short term memory?  Substance (ab)use?

It is going to be difficult to carry out an appropriate discourse if you're not aware of what we talked about less than 10 posts ago.  As a potential investor, this would be particularly troubling to me.

Quote
I think that what I wrote till now shows that many issues were thought and answers were given.

It shows that many issues were thought about.  (It also shows that a few more were not.)

It doesn't show any answers; it just tries to explain how it'll all somehow work out if the publisher does these expensive and painful things to mitigate.  The reality is that even if the publisher does the expensive and painful things, there is no rational reason to believe anything will work out.

Quote
Please rethink about it and please do share further thoughts.

I've thought and re-thought about it quite a bit.  Anyone who knows me can tell you that this (thinking about, analysing, and assessing systems) is pretty much the only thing I do.

You have not really addressed many of my points, such as general convenience/usability as compared to traditional providers, or how the system can expect to take away market share from established "decentralized" private trade that already occurs.

I've shared my pertinent thoughts, and now re-shared them.  I do have some further thoughts, but I won't share those because I have very little interest in claiming your 1BTC.  I have three such further thoughts now. Lips sealed

(As someone who is involved, intimately, with that set that you earlier referred to as "HPC and Big Data giants," I think maybe I will see if they would offer more for my further thoughts.  I really hope you're right, and that these players do feel threatened, because then they would give me lots of money for these thoughts, yah?  If they're not threatened then oh well, I can make lots of money from naive publishers instead, right?)

ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 19, 2014, 06:24:52 PM
 #96

HunterMinerCrafter, you indeed touch the real points. Lots of misunderstanding though, but I do take into account that the deep stuff is covered only briefly in the docs. Also recall that I know nothing about the background of each BTT member.
Those issues are indeed more delicate. I can write more detail here, but let me suggest we make a recorded video chat where you ask all your questions and I give all the small details, so the rest of the community will be able to enjoy this info.
I think that'll be really cool. But if for some reason you don't want to do it, I'll write my answers in detail here.

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 19, 2014, 07:56:04 PM
 #97

In order to make the pricing algorithm clearer, I wrote and uploaded this detailed explanation.

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 19, 2014, 08:13:31 PM
 #98

HunterMinerCrafter, you indeed touch the real points. Lots of misunderstanding though,

All I am asking of you (and of pretty much any coin developer I speak with, if you look over my post history outside of the MOTO thread) is to make me understand.

Quote
but I do take into account that the deep stuff is covered only briefly in the docs.

Your team has only one job right now - deepen it!  (Don't broaden it, that will work against you.)

A very simple way to do this would, of course, simply be to release working source code.  If you do have actual solutions to these problems, it will become self evident quite quickly.

Quote
Also recall that I know nothing about the background of each BTT member.

Ok, let me give you some quick background.  I'm a "seasoned" professional in the field of computer science who also has just about 5.5 years of experience in working with logical formalism around crypto currency.  (The number of us (surviving) who can make this claim, aside from perhaps Satoshi, can fit in a small elevator comfortably, and we pretty much all know each other.  A moment of silence, please, for those who can no longer attend our elevator.)




Interestingly, I also have just over a decade of experience with HPC applications and, more generally, data-center operations.  (This would need to be a very large elevator.)  I like critiquing coins' claims in general, but yours is particularly appealing to my razor because it is very much "in my world."

If you'd like some more detailed background on my qualifications as a critic of your claims we can discuss that privately.

Again, I am only trying to fully understand the value proposition.  If you can't make me understand then you might have some trouble making average Joe understand.  If you can't make average Joe understand, your project doesn't have to worry about the protocol actually working anyway because it will fail just from lack of any traction.

Quote
Those issues are indeed more delicate. I can write more detail here, but let me suggest we make a recorded video chat where you ask all your questions and I give all the small details, so the rest of the community will be able to enjoy this info.

Just write them down and publish them.  Posts on a forum or a proper whitepaper, either will do the trick as long as it makes a full (and not self-contradictory) treatment of the subject.

Even better, write it all down encoded in the form of a program that we can run.  You surely have to be doing that already anyway, right?  Wink

Quote
I think that'll be really cool. But if for some reason you don't want to do it, I'll write my answers in detail here.

For a lot of reasons, I do not want to do that.

How about the three R&D folks just set up a tech oriented IRC to hang out in, open to anyone interested in protocol detail?  I think this would better align with the actual goals of such a meeting.

ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 19, 2014, 08:19:16 PM
 #99

I hope the doc I just put here will give you a better understanding of what the innovative pricing model is.


Ok, let me give you some quick background.  I'm a "seasoned" professional in the field of computer science who also has just about 5.5 years of experience in working with logical formalism around crypto currency.  (The number of us (surviving) who can make this claim, aside from perhaps Satoshi, can fit in a small elevator comfortably, and we pretty much all know each other.  A moment of silence, please, for those who can no longer attend our elevator.)

Interestingly, I also have just over a decade of experience with HPC applications and, more generally, data-center operations.  (This would need to be a very large elevator.)  I like critiquing coins' claims in general, but yours is particularly appealing to my razor because it is very much "in my world."

If you'd like some more detailed background on my qualifications as a critic of your claims we can discuss that privately.

Again, I am only trying to fully understand the value proposition.  If you can't make me understand then you might have some trouble making average Joe understand.  If you can't make average Joe understand, your project doesn't have to worry about the protocol actually working anyway because it will fail just from lack of any traction.


I'm really glad to have you here. I have been waiting a long time for deep criticism and for the chance to show how we solved all the main issues. I'm 100% sure, and I put all my professional dignity on it, that I have reasonably practical answers to all your doubts. Let's keep going here. Tell me what you think of the pricing algo.

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 19, 2014, 08:27:45 PM
 #100

I'm really glad to have you here. I have been waiting a long time for deep criticism and for the chance to show how we solved all the main issues. I'm 100% sure, and I put all my professional dignity on it, that I have reasonably practical answers to all your doubts.

I'm convinced that you don't have reasonably practical answers to all of my doubts, as you've already explicitly admitted that you lack answers to the biggest of them, in big letters on your github.

"No POW"

If you can't authenticate the computation then you have no reasonable *or* practical answer to some specific doubts.

As long as the workers don't have to prove that they do the work, I doubt they will.

Quote
Let's keep going here. Tell me what you think of the pricing algo.

On my second read now.  I think you've quite nicely elided some of the specific problems for me.  Wink

P.S. I like that the new site is up on the new domain, but don't like that it still has all the self-contradictory gobbledygook on the "documentation" page.  I really don't like that when I clicked the link in your OP to get to that page again, it took me to the old domain.  Just a heads up that you might want to check your back-links.  Wink
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 19, 2014, 08:34:47 PM
Last edit: September 19, 2014, 09:21:54 PM by ohad
 #101

I'm really glad to have you here. I have been waiting a long time for deep criticism and for the chance to show how we solved all the main issues. I'm 100% sure, and I put all my professional dignity on it, that I have reasonably practical answers to all your doubts.

I'm convinced that you don't have reasonably practical answers to all of my doubts, as you've already explicitly admitted that you lack answers to the biggest of them, in big letters on your github.

"No POW"

If you can't authenticate the computation then you have no reasonable *or* practical answer to some specific doubts.

As long as the workers don't have to prove that they do the work, I doubt they will.

The solution is sufficient without POW, since the risk is low and can be controlled to be much lower. I agree that real computation POW has a lot of advantages, some based on innovative mathematical or cryptographic ideas, some implemented in hardware, but their cost in additional computation or in sending VM snapshots etc. is high. Nevertheless, I don't rule out implementing such an algo one day. Moreover, nothing in the current Zennet design blocks such an implementation by the publisher, or via backward compatibility.

Note that the RFC is significantly older than the "About Zennet" doc.

Quote
Let's keep going here. Tell me what you think of the pricing algo.

On my second read now.  I think you've quite nicely elided some of the specific problems for me.  Wink

I didn't understand (you obviously noted I'm not a native English speaker).

P.S. I like that the new site is up on the new domain, but don't like that it still has all the self-contradictory gobbledygook on the "documentation" page.  I really don't like that when I clicked the link in your OP to get to that page again, it took me to the old domain.  Just a heads up that you might want to check your back-links.  Wink

Thanks.  And yes, we haven't fully finished the migration yet.

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 19, 2014, 09:52:54 PM
 #102

A few more points:

1. When you said that the free market idea contradicts the pricing idea, recall that it's all about Zencoins, whose value can fluctuate, in addition to the fact that the user sets the price for canonical benchmarks, and this (together with procfs measurements during the benchmark) deterministically sets the price.

2. Since we're solving a least squares problem, Gauss promises us that the errors are normally distributed with zero mean. Hence, the publisher can easily (and of course automatically) identify outliers (see the sketch after this list).

3. More than the above local statistics, the publisher can compare a provider to many others. The comparison is the number of finished small jobs w.r.t. paid coins. This can happen in seconds, and renting a PC for a few seconds should by no means cost more than a few satoshis.
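
As a small illustration of point 2 (synthetic numbers, not a piece of the actual client), the outlier check can be as simple as a least-squares fit plus a residual threshold:

Code:
import numpy as np

rng = np.random.default_rng(0)

n_providers, n_procfs_vars = 200, 5
procfs = rng.uniform(1.0, 10.0, size=(n_providers, n_procfs_vars))
true_weights = np.array([0.5, 1.0, 0.2, 2.0, 0.1])
reported_cost = procfs @ true_weights + rng.normal(0, 0.1, n_providers)

# A handful of providers over-report their cost.
cheaters = rng.choice(n_providers, size=5, replace=False)
reported_cost[cheaters] *= 1.8

# Least-squares fit, then flag providers whose residuals fall far outside
# the (assumed) zero-mean error distribution.
weights, *_ = np.linalg.lstsq(procfs, reported_cost, rcond=None)
residuals = reported_cost - procfs @ weights
flagged = np.where(np.abs(residuals) > 3 * residuals.std())[0]

print("injected cheaters:", sorted(int(i) for i in cheaters))
print("flagged providers:", sorted(int(i) for i in flagged))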


Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 19, 2014, 09:53:45 PM
 #103

Tell me what you think of the pricing algo.

First, I love a good multi-objective optimization as much as the next guy, so I don't really have any problem with the overall goal in the abstract.  I do agree that this approach to pricing optimization will "fairly" price computational resources.

My problem is that it prices entirely the wrong resources, in several ways.

Quote
"An intuitive approach is the measure accumulated procfs variables"
...
"Big troubles, but, procfs is all we have."

Why?  This is what has me most baffled.  Why is this your only metric considered?  You say yourself that this approach is intuitive, and I agree, but is it sound?  Why should one believe it to be?

Quote
"Assume that there exists n such physical unknown principal uncorrelated variables"
...
"We now make the main assumption: we claim that the procfs variables are the result of a linear projection acting on the UVs."

Beautifully put.

One of my three hypothetical attacks on your hypothetical network is based directly on this (bad) assumption.

You seem to have a grasp on the notion that the overall problem is one of measurable linear increments, but fail to realize that "projected procfs" does not represent this linearity, and that linearity can only ever be assumed, not enforced.  Here we get to one of the technical details of the matter, your UVs can only be correctly measured (and as such your formula can only hold as rational) as a metric over a rank one construction!!!!!!!  In other words, unless your UV is derived starting from some "gate count" you can't safely assume that any potential attack is constrained, relatively speaking, to linearity.  Woops.

(For more detail on this you should read up on recent works following from Yao's garbled circuits, and the work of folks like Gentry et al, Huang, etc.  In particular, I suggest reading the work specifically being done on objective cost models (done the "right" way, heh, no offense) by Kerschbaum et al.  I also suggest spending some hands-on time with the JustGarble toolkit, or similar, and proving to yourself what sort of things do/don't violate linearity assumptions under different transforms.  (It probably isn't what you'd expect!  Bleeding edge work is showing that a garbled circuit can even be side-effecting and perform interactive I/O without necessarily violating linearity or increasing rank order of the netlist!  Fascinating stuff going on, there.))

It is as if your attacker can arbitrarily change the cost of any given operation at will.  (Measurement over the whole netlist becomes meaningless/worthless.)  It is like this because it is precisely the case.

Your model only holds if your linear projection is actually a linear projection.  Your measurement of procfs is not, so the rest does not follow.

If procfs offered some rank one representation of system state (like some SSA form of a closure over the last instruction executed) this would be a different story.  I'm not sure this would even be possible without some sort of magical self-introspecting CPU, and suspect that it would certainly preclude any microcode optimizations in such a CPU.

This should be "plain and simple" enough, but I'll go on.


Quote
"The rationale is that we do seek accumulated"...

Love all of this.  Apply this over a properly represented process and you're almost on to something.  Authenticate the structure and it's gold!

Quote
"Take N programs" ...

Here we see the rigidity that I alluded to, and our second big problem.  We are pricing the xn circuits in order to decide on a cost for an entirely different circuit.

In other words, what we're measuring our UVs over is not necessarily meaningfully representative (in any way) of what we are actually attempting to price.

Of course the argument would be made that the only way we can price the algorithm we are "actually shopping for" with this sort of approach involves the worker running a part of it at no cost, leading to a situation where publishers could "leach" computation by only requesting quotes, which is obviously no good.

This tells us that either our UVs have to be taken in situ over the algorithm being shopped, without running it (again "gate count" metrics) or that we need a mechanism to assert functional extensional projections from the algorithm.  What I mean by this alternative is that the system could require the publisher to offer up, in their RFQ, the algorithm they intend to run, the input they intend to run it on, an extraction of (bounded?) portions of the algorithm that they intend to use for price shopping, and a generator function for a second set of input.  Doing it this way would be some awesome work, but would also be a bit of the-moon-on-a-stick probably.
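
Purely as a hypothetical sketch of what such an RFQ structure could look like (none of these field names exist in the Zennet materials):

Code:
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RequestForQuote:
    job_image: str                            # e.g. a VM / container image reference
    job_input: str                            # reference to the real input data
    pricing_excerpts: List[str]               # bounded extracts of the job used for quoting
    input_generator: Callable[[int], bytes]   # produces synthetic inputs for the excerpts
    max_price_per_unit: float                 # publisher's ceiling, in ZenCoin

# Hypothetical usage; every value here is made up.
rfq = RequestForQuote(
    job_image="publisher/job:1.0",
    job_input="s3://bucket/real-input",
    pricing_excerpts=["phase1_subset", "phase2_subset"],
    input_generator=lambda seed: seed.to_bytes(8, "big") * 16,
    max_price_per_unit=0.0001,
)
print(rfq.pricing_excerpts, len(rfq.input_generator(7)))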

Quote
"In addition, if Singular Value Decomposition is used"...

O HAI!  Now I has FOUR ideas!

If we consider that an attacker can "change gate cost at will" he can also "game" your decomposition, right?  Woops.

Quote
"Note that this pricing model is adequate for massively distributed applications only: say that a pending IO"...

As far as I can tell, paravirt boundary I/O cost and bursting semantics are not exercise-able as UVs.

As such this basically enforces that regardless of how you measure the circuit, your associated cost function will be incorrect, pragmatically.  Even parallel, these still break the linearity over any one measure, and add skew to the net pricing.

This is actually a huuuuuuuuge open problem in the HPC community.  Most of your "big player"s would kill for a way to model cost with parameters over IO quantification and bursting semantics.  Such a model would be worth a fortune, since it is the only place where they get slippage in their profit margin.  (Academically, there's probably a lot of application for related open problems with fair schedulers, etc.)

Solve this and you've got more than gold, you've got the future of computing.  I don't expect you to be able to solve this.

Quote
"This root assumption is therefore vital for the acceptability of the solution."

Yup.  Bad assumption number two.  Effed on the way in and the way back out.  It's got fail coming out both ends, so to speak.   Grin

As I said, I don't expect a solution applied here, as I understand that it is an unavoidable assumption without constraining the IO and bursting semantics of the worker.

I think it would be best to just go ahead and constrain the IO and bursting semantics of the worker, myself.

Quote
"Those xn are called, on the Zennet's framework, the Canonical Benchmarks."

The canonical benchmarks define the rigidity of the system.  How are you going to execute your phoronix suite on my recombinant FPGA chip?  How is your canonical benchmark going to tell you anything about the performance of my nascent memristor array?  (I'm not even kidding, here.)

How do those benchmarks apply when this classical hardware of today gets swept under by the new computing models of tomorrow?  (It is already happening!)  It seems like the canonical benchmarks will need to be a moving target, and thus not very canonical.  Wink

As I see it, any actual solution to these problems will begin with either an authenticated circuit construction, or some measure of cost of alpha/beta reductions and pi/sigma type construction.  In other words you're going to have to start from something well behaved, like Turing's machine or Church-Rosser and Barendregt's cube, not from something "wild" like procfs under paravirt.

HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 19, 2014, 10:26:16 PM
 #104

The solution is sufficient without POW, since the risk is low and can be controlled to be much lower. I agree that real computation POW has a lot of advantages, some based on innovative mathematical or cryptographic ideas, some implemented in hardware, but their cost in additional computation or in sending VM snapshots etc. is high.

?!

This used to be true, but isn't anymore.  Recent advances have brought the traditionally associated computational cost down significantly.

Quote
Nevertheless, I don't rule out implementing such an algo one day. Moreover, nothing in the current Zennet design blocks such an implementation by the publisher, or via backward compatibility.

The problem is the lack of authentication not over the jobs themselves, but over the benchmark.  Combine this with the lack of association between the benchmark metrics and the job.

I understand that authentication and/or privacy-preserving mechanisms can be layered on top of the job.  This would just further distance the job's performance profile from what the benchmark metrics would imply.

Quote
Note that the RFC is significantly older than the "About Zennet" doc.

Both contain significant amounts of what appears to be superseded information.  A cleanup of these materials is badly needed.

Quote
I didn't understand (you obviously noted I'm not a native English speaker).

I mean that you summed up your bad assumptions so nicely that I pretty much didn't need to formulate the statements myself.

Your English is very good.

ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 19, 2014, 10:33:48 PM
 #105

Tell me what you think of the pricing algo.

First, I love a good multi-objective optimization as much as the next guy, so I don't really have any problem with the overall goal in the abstract.  I do agree that this approach to pricing optimization will "fairly" price computational resources.

got it. if you call it multi-objective optimization, you just didn't understand the math. sorry.
go over "least squares" again.


My problem is that it prices entirely the wrong resources, in several ways.

Quote
"An intuitive approach is the measure accumulated procfs variables"
...
"Big troubles, but, procfs is all we have."

Why?  This is what has me most baffled.  Why is this your only metric considered?  You say yourself that this approach is intuitive, and I agree, but is it sound?  Why should one believe it to be?


It's not about the metric, it's about the information I have.
When I say procfs I mean any measurements that the OS gives you (we could get into the kernel level but that's not needed given my algo).


Quote
"Assume that there exists n such physical unknown principal uncorrelated variables"
...
"We now make the main assumption: we claim that the procfs variables are the result of a linear projection acting on the UVs."

Beautifully put.

One of my three hypothetical attacks on your hypothetical network is based directly on this (bad) assumption.

You seem to have a grasp on the notion that the overall problem is one of measurable linear increments, but fail to realize that "projected procfs" does not represent this linearity, and that linearity can only ever be assumed, not enforced.  Here we get to one of the technical details of the matter, your UVs can only be correctly measured (and as such your formula can only hold as rational) as a metric over a rank one construction!!!!!!!  In other words, unless your UV is derived starting from some "gate count" you can't safely assume that any potential attack is constrained, relatively speaking, to linearity.  Woops.


true. procfs doesn't have this linearity. that's exactly what my algo comes to fix.
but *any* n-dim vector can be written as a linear combination of n linearly independent vectors.
those latter n vectors are the unknowns (their number might be different than n, yet supported by the algo).
note that I never measure the UVs, nor give any approximation of them. I only estimate their product with a vector.
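
To make that concrete, here is a simplified numpy sketch of the idea (synthetic numbers, not the actual client code): procfs is a hidden linear projection of the UVs, the user prices a set of canonical benchmarks, and a least-squares fit over the benchmarks' procfs prices a new job without ever recovering the UVs themselves.

Code:
import numpy as np

rng = np.random.default_rng(1)

n_uv, n_procfs, n_benchmarks = 4, 8, 20
projection = rng.normal(size=(n_uv, n_procfs))   # hidden linear projection
uv_cost = rng.uniform(0.1, 1.0, size=n_uv)       # what the hidden resources are worth

bench_uv = rng.uniform(0.0, 5.0, size=(n_benchmarks, n_uv))
bench_procfs = bench_uv @ projection              # the only thing we can observe
bench_price = bench_uv @ uv_cost                  # user-set prices for the benchmarks

# Fit procfs -> price on the benchmarks only; the UVs are never recovered.
weights, *_ = np.linalg.lstsq(bench_procfs, bench_price, rcond=None)

# Price a new job from its procfs alone.
job_uv = rng.uniform(0.0, 5.0, size=n_uv)
job_procfs = job_uv @ projection
print("estimated price:", float(job_procfs @ weights))
print("'true' price   :", float(job_uv @ uv_cost))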


(For more detail on this you should read up on recent works following from Yao's garbled circuits, and the work of folks like Gentry et al, Huang, etc.  In particular, I suggest reading the work specifically being done on objective cost models (done the "right" way, heh, no offense) by Kerschbaum et al.  I also suggest spending some hands-on time with the JustGarble toolkit, or similar, and proving to yourself what sort of things do/don't violate linearity assumptions under different transforms.  (It probably isn't what you'd expect!  Bleeding edge work is showing that a garbled circuit can even be side-effecting and perform interactive I/O without necessarily violating linearity or increasing rank order of the netlist!  Fascinating stuff going on, there.))

sure, wonderful works out there. we have different goals, different assumptions, different results, yet of course i can learn a lot from the cs literature, and i do some.
you claim my model is invalid but all you show till now is misunderstanding. once you get the picture you'll see the model is valid. i have a lot of patience and will explain more and more again and again.


It is as if your attacker can arbitrarily change the cost of any given operation at will.  (Measurement over the whole netlist becomes meaningless/worthless.)  It is like this because it is precisely the case.

sure. see my previous comment about normal distribution and mitigating such cases.


Your model only holds if your linear projection is actually a linear projection.  Your measurement of procfs is not, so the rest does not follow.

If procfs offered some rank one representation of system state (like some SSA form of a closure over the last instruction executed) this would be a different story.  I'm not sure this would even be possible without some sort of magical self-introspecting CPU, and suspect that it would certainly preclude any microcode optimizations in such a CPU.

This should be "plain and simple" enough, but I'll go on.

you're not even in the right direction. as above, of course procfs aren't linear. and of course they're still n-dim vectors no matter what.


Quote
"The rationale is that we do seek accumulated"...

Love all of this.  Apply this over a properly represented process and you're almost on to something.  Authenticate the structure and it's gold!

that follows from the definition of the UVs you're looking for.
even though I wrote a detailed doc, as with any good math, you'll still have to use some imagination.
just think of what you're really looking for, and see that procfs is projected linearly from them. even the most nonlinear function is linear in some higher-dimensional space. and there is no reason to assume that the dim of the UVs is infinite... and we'd get along even if it were infinite.
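
a tiny numpy illustration of the "linear in a higher dimension" remark (a toy example, with the lift chosen by hand): y = x**2 is not linear in x, but it is exactly linear in the lifted features (x, x**2).

Code:
import numpy as np

x = np.linspace(-3, 3, 50)
y = x ** 2                                        # nonlinear in x

lifted = np.column_stack([x, x ** 2])             # higher-dimensional features
coef, *_ = np.linalg.lstsq(lifted, y, rcond=None)
print(np.round(coef, 6))                          # ~[0, 1]: linear in the lift
print(float(np.max(np.abs(lifted @ coef - y))))   # ~0: exact fit in the lifted space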


Quote
"Take N programs" ...

Here we see the rigidity that I alluded to, and our second big problem.  We are pricing the xn circuits in order to decide on a cost for an entirely different circuit.

why different circuit? the very same one. the provider tells the publisher what their pricing is, based on benchmarks. yes, it can be tampered with, but that can be recognized soon, the loss is negligible, reputation goes down (the publisher won't work with this address again, and with the ID POW they can block users who generate many new addresses), and all the other mechanisms mentioned in the docs apply.
in the same way, the publisher can tell how much they are willing to pay, in terms of these benchmarks. therefore both sides can negotiate over some known objective variables.


In other words, what we're measuring our UVs over is not necessarily meaningfully representative (in any way) of what we are actually attempting to price.

again, we're not measuring the UVs, but taking the best estimator of their inner product with another vector. this reduces the error (the error in the final price calculation, not in the UVs, which are not interesting in themselves) by an order of magnitude.


Of course the argument would be made that the only way we can price the algorithm we are "actually shopping for" with this sort of approach involves the worker running a part of it at no cost, leading to a situation where publishers could "leach" computation by only requesting quotes, which is obviously no good.

Local history & ID POW as above


This tells us that either our UVs have to be taken in situ over the algorithm being shopped, without running it (again "gate count" metrics) or that we need a mechanism to assert functional extensional projections from the algorithm.  What I mean by this alternative is that the system could require the publisher to offer up, in their RFQ, the algorithm they intend to run, the input they intend to run it on, an extraction of (bounded?) portions of the algorithm that they intend to use for price shopping, and a generator function for a second set of input.  Doing it this way would be some awesome work, but would also be a bit of the-moon-on-a-stick probably.

it is possible to take the algorithm which the publisher wants to run, and decompose its procfs measurements into a linear combination of the canonical benchmarks. that's another formula but it's very similar to the main algo. but such measurements will have some distribution of values rather than a fixed one per job (even something standard like 1000 FFTs takes a different time if you have more zeros in your data). that's one of the strengths of the system: it's fully customizable. you'll be able to insert arbitrary JS code into the client containing your decisions regarding who to work with. of course, non-techs won't need to mess with this, but the system and the protocol are open for full customizability since there is no middleman between the two parties, and the product and the network will definitely develop with time.
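
a small numpy sketch of that decomposition (synthetic numbers, not the real client): express the job's accumulated procfs vector as an approximate linear combination of the canonical benchmarks' procfs vectors, then read off a price from the user-set benchmark prices.

Code:
import numpy as np

rng = np.random.default_rng(2)

n_benchmarks, n_procfs = 6, 8
bench_procfs = rng.uniform(1.0, 10.0, size=(n_benchmarks, n_procfs))
bench_price = rng.uniform(0.5, 2.0, size=n_benchmarks)   # user-set, per benchmark

# The job's procfs profile is (noisily) a mixture of the benchmark profiles.
true_mix = rng.uniform(0.0, 3.0, size=n_benchmarks)
job_procfs = true_mix @ bench_procfs + rng.normal(0, 0.05, n_procfs)

# Solve job_procfs ~ mix @ bench_procfs for the mixture coefficients.
mix, *_ = np.linalg.lstsq(bench_procfs.T, job_procfs, rcond=None)
print("estimated mix  :", np.round(mix, 2))
print("estimated price:", float(mix @ bench_price))
print("'true' price   :", float(true_mix @ bench_price))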


Quote
"In addition, if Singular Value Decomposition is used"...

O HAI!  Now I has FOUR ideas!

If we consider that an attacker can "change gate cost at will" he can also "game" your decomposition, right?  Woops.

answered above


Quote
"Note that this pricing model is adequate for massively distributed applications only: say that a pending IO"...

As far as I can tell, paravirt boundary I/O cost and bursting semantics are not exercise-able as UVs.

that's exactly what i'm saying. we don't care about bursts in such massively distributed apps.


As such this basically enforces that regardless of how you measure the circuit, your associated cost function will be incorrect, pragmatically.  Even parallel, these still break the linearity over any one measure, and add skew to the net pricing.

This is actually a huuuuuuuuge open problem in the HPC community.  Most of your "big player"s would kill for a way to model cost with parameters over IO quantification and bursting semantics.  Such a model would be worth a fortune, since it is the only place where they get slippage in their profit margin.  (Academically, there's probably a lot of application for related open problems with fair schedulers, etc.)

Solve this and you've got more than gold, you've got the future of computing.  I don't expect you to be able to solve this.

as above, that's an open problem, which i'm happy i don't need to solve in zennet. that's why zennet is not adequate for really-real-time apps.
but it'll fold your protein amazingly fast.


Quote
"This root assumption is therefore vital for the acceptability of the solution."

Yup.  Bad assumption number two.  Effed on the way in and the way back out.  It's got fail coming out both ends, so to speak.   Grin

As I said, I don't expect a solution applied here, as I understand that it is an unavoidable assumption without constraining the IO and bursting semantics of the worker.

I think it would be best to just go ahead and constrain the IO and bursting semantics of the worker, myself.

as above


Quote
"Those xn are called, on the Zennet's framework, the Canonical Benchmarks."

The canonical benchmarks define the rigidity of the system.  How are you going to execute your phoronix suite on my recombinant FPGA chip?  How is your canonical benchmark going to tell you anything about the performance of my nascent memristor array?  (I'm not even kidding, here.)

How do those benchmarks apply when this classical hardware of today gets swept under by the new computing models of tomorrow?  (It is already happening!)  It seems like the canonical benchmarks will need to be a moving target, and thus not very canonical.  Wink

As I see it, any actual solution to these problems will begin with either an authenticated circuit construction, or some measure of cost of alpha/beta reductions and pi/sigma type construction.  In other words you're going to have to start from something well behaved, like Turing's machine or Church-Rosser and Barendregt's cube, not from something "wild" like procfs under paravirt.

they are not called canonical in the historical sense, but in the sense that all participants at a given time know which benchmark you're talking about. it doesn't have to be phoronix (but it's a great system for creating new benchmarks). all the programs have to do is make the HW work hard. if you have new HW, we (or the community) will have to write a program that utilizes it in order to be able to trade its resources over zennet, and participants will have to adopt it as a canonical benchmark.

flexibility.

the last thing the system is, is "rigid".

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 19, 2014, 10:36:11 PM
 #106

The solution is sufficient without POW, since the risk is low and can be controlled to be much lower. I agree that real computation POW has a lot of advantages, some based on innovative mathematical or cryptographic ideas, some implemented in hardware, but their cost in additional computation or in sending VM snapshots etc. is high.

?!

This used to be true, but isn't anymore.  Recent advances have brought the traditionally associated computational cost down significantly.

will be glad to discuss it with you and maybe implement it on zennet

Quote
Nevertheless, I don't rule out implementing such an algo one day. Moreover, nothing in the current Zennet design blocks such an implementation by the publisher, or via backward compatibility.

The problem is the lack of authentication not over the jobs themselves, but over the benchmark.  Combine this with the lack of association between the benchmark metrics and the job.

I understand that authentication and/or privacy-preserving mechanisms can be layered on top of the job.  This would just further distance the job's performance profile from what the benchmark metrics would imply.

answered in the post that was posted after this one

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 19, 2014, 10:42:16 PM
 #107

1. When you said that the free market idea contradicts the pricing idea, recall that it's all about Zencoins, whose value can fluctuate, in addition to the fact that the user sets the price for canonical benchmarks, and this (together with procfs measurements during the benchmark) deterministically sets the price.

This is another case where the reply doesn't seem to match up to the critical statement.  My problem was not with any association between the price of a resource and the price of the coin.  The contradiction is in that you say the users set all of the pricing, but also say that the algorithm handles pricing determination for the users!  Either the users control the pricing or the system controls the pricing, but you seem to somehow want it both ways.

Quote
2. Since we're solving a least squares problem, Gauss promises us that the errors are normally distributed with zero mean. Hence, the publisher can easily (and of course automatically) identify outliers.

Sure, but it can't infer anything else as to why it is marginal.  I refer back to my notion of "why is there any reason to think a publisher will ever converge (or potentially be able to converge) on an optimal set of workers?"

Quote
3. More than the above local statistics, the publisher can compare a provider to many others.

Again, I see this is a problem in and of itself, not a useful solution.  At best it is a misguided mitigation.

Quote
The comparison is the number of finished small jobs w.r.t. paid coins.

It goes a little beyond this.  The jobs must not only be finished, but correct.

Quote
This can happen in seconds, and renting a PC for a few seconds should by no means cost more than a few satoshis.

The problem is it is not "a PC for seconds" in anything but toy cases.  If I want one node, I really need at least two.  If I want 100 nodes I really need at least 200.  If I want 10000 I really need at least 20000.  This quickly becomes costly.

Unlike the problem of weeding out sub-optimal (but correct) workers, I can't just "drop the under-performing half" either.  I need to run each task to completion on each replica in order to compare results.

Really, I should expect to need much more than just a doubling of effort, as well.  My ability to infer correctness of the result increases asymptotically. I can be twice as confident with four workers per instance, and twice as confident as that with eight.

I can never fully infer correctness, regardless of how much I spend.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 19, 2014, 10:55:40 PM
Last edit: September 19, 2014, 11:39:23 PM by ohad
 #108

1. When you said that the free market idea contradicts the pricing idea, recall that it's all about Zencoins, whose value can fluctuate, in addition to the fact that the user sets the price for canonical benchmarks, and this (together with procfs measurements during the benchmark) deterministically sets the price.

This is another case where the reply doesn't seem to match up to the critical statement.  My problem was not with any association between the price of a resource and the price of the coin.  The contradiction is in that you say the users set all of the pricing, but also say that the algorithm handles pricing determination for the users!  Either the users control the pricing or the system controls the pricing, but you seem to somehow want it both ways.

let's make some order:
1. the market decides how much $ 1 zencoin is worth
2. users set the price for benchmarks
3. the system translates procfs to zencoins in terms of the benchmarks
4. users may set not only the price but any arbitrary behavior, in the form of JS code, given Zennet's environment variables.

number 3 may be tampered with, and here you have all the other means i mentioned.
the control over the price and logic is all the user's. the system just helps them translate procfs to zencoin (a small sketch of step 3 follows below).
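
a toy sketch of that split (all numbers made up; this is not the actual client code): (1) is the market rate, (2) is user-set, and (3) is just mechanical translation.

Code:
zencoin_usd = 0.03                      # (1) market-determined, fluctuates
benchmark_prices = {                    # (2) user-set, ZenCoin per benchmark unit
    "cpu_bench": 0.0005,
    "io_bench": 0.0002,
}
measured_units = {                      # usage expressed in benchmark-equivalent units
    "cpu_bench": 1200.0,
    "io_bench": 300.0,
}

# (3) mechanical translation of measured usage into ZenCoin (and, for reference, USD)
charge = sum(benchmark_prices[k] * measured_units[k] for k in measured_units)
print(f"charge: {charge:.4f} ZenCoin (~${charge * zencoin_usd:.4f})")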


Quote
2. Since we're solving a least squares problem, Gauss promises us that the errors are normally distributed with zero mean. Hence, the publisher can easily (and of course automatically) identify outliers.

Sure, but it can't infer anything else as to why it is marginal.  I refer back to my notion of "why is there any reason to think a publisher will ever converge (or potentially be able to converge) on an optimal set of workers?"

yeah, i forgot to mention that Gauss also promised they're uncorrelated and have equal variances. so it's not marginal only. see the Gauss-Markov theorem


Quote
3. More than the above local statistics, the publisher can compare a provider to many others.

Again, I see this is a problem in and of itself, not a useful solution.  At best it is a misguided mitigation.

it is a well-guided mitigation, with parameters promised to be normally distributed while being uncorrelated.


Quote
The comparison is the number of finished small jobs w.r.t. paid coins.

It goes a little beyond this.  The jobs must not only be finished, but correct.

Quote
This can happen in seconds, and renting a PC for a few seconds should by no means cost more than a few satoshis.

The problem is it is not "a PC for seconds" in anything but toy cases.  If I want one node, I really need at least two.  If I want 100 nodes I really need at least 200.  If I want 10000 I really need at least 20000.  This quickly becomes costly.

Unlike the problem of weeding out sub-optimal (but correct) workers, I can't just "drop the under-performing half" either.  I need to run each task to completion on each replica in order to compare results.

Really, I should expect to need much more than just a doubling of effort, as well.  My ability to infer correctness of the result increases asymptotically. I can be twice as confident with four workers per instance, and twice as confident as that with eight.

I can never fully infer correctness, regardless of how much I spend.


right, but correctness is a whole different story from almost everything i talked about.
as for correctness:
1. as said, risk can be decreased exponentially with linearly growing investments
2. many problems are very easy to verify, so the investment is far from doubling. examples of such problems are matrix inversion, eigenvalue problems, SVD, all NP-complete problems, numerically solving (mainly time-dependent) PDEs, root finding and many more (see the sketch after this list).
3. also in real life, you don't have real promises, just probabilities
4. we both know that most of the users won't tamper with the code, so the risk probability we're trying to lower even further isn't so high in the first place. they have nothing to gain from miscomputation.
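
as an illustration of point 2 (a generic numerical example, not Zennet code): inverting an n x n matrix costs roughly O(n^3), while a randomized Freivalds-style check of a claimed inverse costs only a few matrix-vector products, O(n^2).

Code:
import numpy as np

rng = np.random.default_rng(3)
n = 500
A = rng.normal(size=(n, n))

A_inv_claimed = np.linalg.inv(A)          # pretend this came back from a worker

def looks_like_inverse(A, B, trials=3, tol=1e-6):
    """Randomized check: A @ (B @ x) should reproduce x for random probe vectors x."""
    for _ in range(trials):
        x = rng.normal(size=n)
        if np.linalg.norm(A @ (B @ x) - x) > tol * np.linalg.norm(x):
            return False
    return True

print(looks_like_inverse(A, A_inv_claimed))         # True for an honest result
print(looks_like_inverse(A, A_inv_claimed + 1e-3))  # False for a corrupted one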

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 19, 2014, 11:55:18 PM
Last edit: September 20, 2014, 12:30:13 AM by HunterMinerCrafter
 #109

got it. if you call it multi-objective optimization, you just didn't understand the math. sorry.

How is it not?  You have multiple candidates each with unique feature-sets and are trying to find a best fit subset of candidates along all feature dimensions.  Sounds like a textbook example to me!

Quote
go over "least squares" again.

...

Quote
It's not about the metric, it's about the information I have.
When I say procfs I mean any measurements that the OS gives you (we could get into the kernel level but that's not needed given my algo).

Of course it is about the metric.  This metric is the only thing we have, so far.  Wink

The most relevant piece of information available, the job algorithm itself, gets ignored.  As long as this remains the case there is no meaningful association between the benchmark and the application.

Quote
true. procfs doesn't have this linearity. that's exactly what my algo comes to fix.
but *any* n-dim vector can be written as a linear combination of n linearly independent vectors.

Sure, but that doesn't magically make the measure linearly constrained.  You're still assuming that the measurements are actually done at all, which is a mistake.  We can't assume the semi-honest model for this step.  We have to assume the attacker just makes up whatever numbers he wants, here.

Quote
those latter n vectors are the unknowns (their number might be different than n, yet supported by the algo).
note that I never measure the UVs, nor give any approximation of them. I only estimate their product with a vector.

Er, of course you are measuring the UVs.  The performance of {xk} constitutes the measurement of all UVs, together.  Just because you measure them through indirect observation doesn't mean you aren't measuring them.  In any case I'm not sure what this has to do with anything?  (Or were you just stating this in case I had missed the implication?)

Quote
sure, wonderful works out there. we have different goals, different assumptions, different results,

Your goals don't seem to align with market demands, your assumptions can't be safely assumed, and where they get great results I'm skeptical of what your results really are, let alone their merit.

This is the core of the problem.  If your coin were based on these established works that meet lofty goals under sound assumptions and show practical results, I'd have no trouble seeing the value proposition.  Since your coin is based on "known to fail" goals (which we'll just generally call the CPUShare model, for simplicity) and makes what appear to be some really bad assumptions, I don't expect comparable results.  Why should I?

Quote
yet of course i can learn a lot from the cs literature, and i do some.
you claim my model is invalid but all you show till now is misunderstanding. once you get the picture you'll see the model is valid. i have a lot of patience and will explain more and more again and again.

Please do.

Quote
sure. see my previous comment about normal distribution and mitigating such cases.

Again, mitigation is not a solution.  You've claimed solutions, but still aren't presenting them.  (In this case I'm not even really sure that the assumption behind the mitigation actually holds.  Gauss only assumes natural error, not induced error.  Can we assume any particular distribution when an attacker can introduce skew at will?  Wink)

Quote
you're not even in the right direction. as above, of course procfs aren't linear. and of course they're still n-dim vectors no matter what.

And of course this doesn't magically repair our violated linearity constraint in any way, does it?  Maybe this is what I'm missing....?

Of course it seems that I will be somewhat "doomed" to be missing this, as filling that hole requires establishing a reduction from arbitrary rank N circuits to rank 1 circuits.  I'm pretty sure this is impossible, in general.

Quote
that follows from the definition of the UVs you're looking for.
even though I wrote a detailed doc, as with any good math, you'll still have to use some imagination.
just think of what you're really looking for, and see that procfs is projected linearly from them. even the most nonlinear function is linear in some higher dimensional space. and there is no reason to assume that the dim of the UVs is infinite.. and we'd get along even if it was infinite.

Being careful not to tread too far into the metaphysical I'll approach this delicately.  You are closest to seeing the root of this particular problem with this statement, I feel.

If I say being linear in a higher dimension does not resolve the violation of the linearity constraint in the lower dimension, would you agree?  If so, you must in turn admit that the linearity constraint is still violated.

It is fine to "patch over the hole" in this way if we can assume correctness of the higher dimensional representation.... unfortunately the attacker gets to construct both, and therein lies the rub.  He gets to break the linearity, and hide from us that he has even done so.

(This is precisely the premise for the attack I mentioned based on the "main assumption.")

If attackers can violate the linearity constraint at all, even if they can make it look like they haven't... well, they get to violate the constraint. :-)  I say again, linearity can not be enforced, only assumed.  (Well, without some crypto primitive "magic" involved, anyway.)

Quote
why different circuit? the very same one.

No, the benchmarks are canonical and are not the circuit representative of my job's algorithm.

Quote
the provider tells the publisher what their pricing is, based on benchmarks. yes, it can be tampered with, but it can be recognized soon, the loss is negligible, reputation goes down (the publisher won't work with this address again, and with the ID POW they can block users who generate many new addresses), and all the other mechanisms mentioned in the docs apply.
in the same way, the publisher can tell how much they are willing to pay, in terms of these benchmarks. therefore both sides can negotiate over some known objective variables.

Except they're subjective, not objective.  They're potentially even a total fantasy.  Worse yet, they have no real relation, in either case, to the transaction being negotiated.  

Quote
again, we're not measuring the UVs, but taking the best estimator of their inner product with another vector. this reduces the error (the error in the final price calculation, not in the UVs, which are not interesting in themselves) by an order of magnitude.

Again this is only true assuming a semi-honest model, which we can't, for reasons already enumerated. (Are you understanding, yet, that the critical failure of the model is not in any one of these problems, but in the combination?)

Quote
Local history & ID POW as above

We're running in circles, now.  ID POW does nothing to filter spammers, only discourages them by effectively charging a small fee.  The postal system requires stamps and I still get plenty of junk mail.

Given that once one owns an identity one can get at least one issuance of some payment for no work, why would anyone bother running actual computations and building up a positive reputation instead of mining IDs to burn by not providing service?

The ID POW can only have any effect if the cost to generate an ID is significantly higher than the gain from burning it, the cost the publisher pays for utilizing the attacker's non-working worker.  In other words, it has to be enforced somewhere/somehow that the cost of an ID is more than the "first round" cost of a job.  I don't see how/where this can be enforced, but I'm not precluding it just yet.  If you would enforce this somehow, it would solve at least the "trust model in a vacuum" part of the problem by relating the value of the trust entity to the value of its resource.
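Stated as a rough inequality (the symbols here are mine, not anything from the Zennet papers), the condition that would have to be enforced is something like

\text{cost}(\text{mining one ID}) \;>\; \Pr(\text{the ID gets hired}) \times \text{payment extracted before the non-service is detected}

Without some mechanism pinning down the right-hand side, a locally chosen POW difficulty is a guess.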

Quote
it is possible to take the algorithm which the publisher wants to run, and decompose its procfs measurements into linear combination of canonical benchmarks.

Is it?  (In a way aside from my proposed functional extension mechanism, that also doesn't give the publisher some free ride?)
If so, this might be a useful approach.  If anything, this should be explored further and considered.


Quote
that's exactly what i'm saying. we don't care about bursts on such massive distributed apps.

We do, because we can't price the distribution as a whole; we can only price the individual instances.  The concern is not that one node does 1000/sec and another node does 1/sec, it is that the 1000/sec node might suddenly start doing 1/sec - and might even do so through neither malicious intent nor any fault!

Quote
as above, that's an open problem, which i'm happy i don't need to solve in zennet. that's why zennet is not adequate for really-real-time apps.
but it'll fold your protein amazingly fast.

Or it'll tell you it is folding your protein, take your money, spit some garbage data at you (that probably even looks like a correct result, but is not) and then run off.

Or it'll tell you it can fold your protein really fast, take your money, actually fold your protein really slowly, and you'll have to walk away and start over instead of giving it more money to continue to be slow.

We can't know which it will do until we hand it our money and find out!  No refunds.  Caveat emptor!

Quote
they are not called canonical from the historical point of view, but from the fact all participants at a given time know which benchmark you're talking about.

Sure, my point is that the benchmarks are presented as "one size fits all" and they aren't.  It's ultimately the same problem as the benchmark having nothing to do with the job, in a way.  How are you going to maintain and distribute useful benchmarks for every new computing kit and model that comes out?  (At least that problem is not so bad as trying to maintain a benchmark for any job algorithm used.  There can be only finite technologies.  Wink)

Further, how are you going to benchmark some nonlinear, progressive circuit "at all?"  How can you benchmark something that only does unsupervised classification learning?  How can you meaningfully benchmark a system with "online LTO" where the very act of running the benchmark will result in the worker re-configuring into a different performance profile, partly or totally invalidating your benchmark results? (Likely further separating the bench-marked performance profile from that of the eventual job, too!)

Quote
it doesn't have to be phoronix (but it's a great system for creating new benchmarks). all the programs have to do is make the HW work hard. if you have new HW, we (or the community) will have to write a program that utilizes it in order to be able to trade its resources over zennet, and participants will have to adopt it as a canonical benchmark.

flexibility.

the last thing the system is, is "rigid".

Being continuously re-coded and added to in order to support change does not really fit my definition of flexible.  That is kind of like saying a brick is flexible because you can stack more bricks on top of it.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 12:08:39 AM
 #110

before i answer in detail,
since you mentioned the coin,
let me just make sure that you noticed that the computational work has nothing to do with mining, coin generation, transaction approval etc., and it all happens only between two parties with no implications at all to any 3rd party, not to mention the whole network.

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 12:15:14 AM
Last edit: September 20, 2014, 12:30:35 AM by ohad
 #111

Quote
Quote
true. procfs doesn't have this linearity. that's exactly what my algo comes to fix.
but *any* n-dim vector can be written as a linear combination of n linearly independent vectors.
Sure, but that doesn't magically make the measure linearly constrained.  You're still assuming that the measurements are actually done at all, which is a mistake.  We can't assume the semi-honest model for this step.  We have to assume the attacker just makes up whatever numbers he wants, here.

accumulated procfs readings are taken every few seconds, and the micropayment transactions are updated accordingly.
see my list in prev comments about miscalculations

Quote
Quote
those latter n vectors are the unknowns (their number might be different than n, yet supported by the algo).
note that I never measure the UVs, nor give any approximation of them. I only estimate their product with a vector.
Er, of course you are measuring the UVs.  The performance of {xk} constitutes the measurement of all UVs, together.  Just because you measure them through indirect observation doesn't mean you aren't measuring them.  In any case I'm not sure what this has to do with anything?  (Or were you just stating this in case I had missed the implication?)

note that measuring a vector vs. measuring its inner product with another vector means estimating n numbers vs. estimating 1 number, while the variance goes down by 1/n (since my estimator's errors are normal, uncorrelated, and with zero mean, *even if the system is nonlinear and non-normal*)

as for mitigation, that's the very core of zennet's solution. we start from the recognition that we can't promise 100% and we don't need to promise 100%, because all you have in real life is probabilities.
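to put the variance claim in standard least-squares terms (the notation here is mine, not copied from the pricing paper): if X stacks the procfs rows observed over the benchmark runs and the price is the single scalar \theta = g^{\top}u, then

\hat{\theta} = g^{\top}\hat{u}, \qquad \operatorname{Var}\big(\hat{\theta}\big) = \sigma^{2}\, g^{\top}\big(X^{\top}X\big)^{-1} g,

which keeps shrinking as more sufficiently different benchmark rows are added to X, without ever needing the individual components of u.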

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 12:31:37 AM
 #112

before i answer in detail,
since you mentioned the coin,
let me just make sure that you noticed that the computational work has nothing to do with mining, coin generation, transaction approval etc., and it all happens only between two parties with no implications at all to any 3rd party, not to mention the whole network.

I fully understand this.  It is not relevant.
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 12:38:01 AM
 #113

Quote
Quote
true. procfs doesn't have this linearity. that's exactly what my algo comes to fix.
but *any* n-dim vector can be written as a linear combination of n linearly independent vectors.
Sure, but that doesn't magically make the measure linearly constrained.  You're still assuming that the measurements are actually done at all, which is a mistake.  We can't assume the semi-honest model for this step.  We have to assume the attacker just makes up whatever numbers he wants, here.

accumulated procfs readings are taken every few seconds, and the micropayment transactions are updated accordingly.
see my list in prev comments about miscalculations

Again this comment doesn't match up.  The attacker can just as well lie every few seconds.

Quote
Quote
Quote
those latter n vectors are the unknowns (their number might be different than n, yet supported by the algo).
note that I never measure the UVs, nor give any approximation of them. I only estimate their product with a vector.
Er, of course you are measuring the UVs.  The performance of {xk} constitutes the measurement of all UVs, together.  Just because you measure them through indirect observation doesn't mean you aren't measuring them.  In any case I'm not sure what this has to do with anything?  (Or were you just stating this in case I had missed the implication?)

note that measuring a vector vs. measuring its inner product with another vector means estimating n numbers vs. estimating 1 number, while the variance goes down by 1/n (since my estimator's errors are normal, uncorrelated, and with zero mean, *even if the system is nonlinear and non-normal*)

Sure, an attacker can only slope the vector normal, not outright violate the normality constraint.  This says nothing of the violation of the linear assumption, though?

Quote
as for mitigation, that's the very core of zennet's solution. we start from the recognition that we can't promise 100% and we don't need to promise 100%, because all you have in real life is probabilities.

Cool, so now you are starting to accept and admit that you actually do not offer a solution.  I consider this progress!  Now the challenge will just be in getting you to try for some solutions instead of just making a fancy new CPUShare. XD
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 12:40:45 AM
 #114

Quote
Again this comment doesn't match up.  The attacker can just as well lie every few seconds.
yeah but then he'll get blocked, and if ID POW is in use by the publisher, it'll be difficult for him to cheat again.

Quote
Sure, an attacker can only slope the vector normal, not outright violate the normality constraint.  This says nothing of the violation of the linear assumption, though?
an attacker can do *anything* to the measurements. but the mitigation makes the expectation of the risk negligible, at least in a wide enough class of cases.
the linear estimator gives me normally distributed errors, so i can easily identify outliers. this property exists thanks to the linear estimator and does not depend on the UVs' linearity assumption (which, partially answering a question you raised, is more accurately a "low-enough-dimension" assumption). how low is low enough? on the order of the number of different benchmarks i can afford to run.
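a minimal sketch of that kind of residual-based outlier flagging (a toy illustration with numpy, not the actual zennet code):

Code:
import numpy as np

def flag_outliers(X, y, k=4.0):
    """Least-squares fit, then flag readings whose residual is far outside the bulk."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    # median absolute deviation as a robust estimate of the residual spread
    scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    return np.abs(resid - np.median(resid)) > k * max(scale, 1e-12)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = X @ np.array([1.0, 2.0, 0.5, -1.0, 3.0]) + 0.01 * rng.standard_normal(200)
y[17] += 5.0                                   # one spoofed reading
print(np.where(flag_outliers(X, y))[0])        # index 17 stands out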

Quote
Cool, so now you are starting to accept and admit that you actually do not offer solution.  I consider this progress!  Now the challenge will just be in getting you to try for some solutions instead of just making a fancy new CPUShare. XD

and again: i'm not looking for the kind of solution you're looking for. i'm looking for a working solution which, from an economic (and performance!!) point of view, is preferable to the existing alternatives. it doesn't necessarily have to be cryptographically proven work etc.

Tau-Chain & Agoras
plopper50
Member
**
Offline Offline

Activity: 64
Merit: 10


View Profile
September 20, 2014, 12:45:52 AM
 #115

Is this like the Amazon service where you can rent processing power, but on the blockchain instead?
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 12:47:07 AM
 #116

Is this like the Amazon service where you can rent processing power, but on the blockchain instead?

it is a free-market alternative to AWS aimed at massive computations. it also incorporates blockchain technology.

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 12:55:48 AM
 #117

I misquoted a comment so I deleted it and am writing it again:

Quote
Quote
it is possible to take the algorithm which the publisher wants to run, and decompose its procfs measurements into linear combination of canonical benchmarks.
Is it?  (In a way aside from my proposed functional extension mechanism, that also doesn't give the publisher some free ride?)
If so, this might be a useful approach.  If anything, this should be explored further and considered.

now if i manage to explain this to you, i think you'll understand much more of the broader picture.

that's trivial! let me convince you that's trivial:

i have a program which i want to decompose into a linear combination of benchmarks w.r.t. procfs measurements (abbrev. msmts).

i.e., i want to describe the vector x of procfs msmts of a given program as a linear combination of procfs msmts vectors of another n programs.

assume the dimension of the vectors is k.

all i need to do is have n>=k and have the matrix whose n rows are our vectors be of rank k.
i will get that almost surely if i just pick programs that are "different enough"; even if they're only "a bit different", i may increase n.


of course, only small tasks are considered.
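a minimal numerical sketch of this decomposition (a toy illustration with numpy; random vectors stand in for real procfs msmts vectors):

Code:
import numpy as np

rng = np.random.default_rng(0)
k = 8                                # number of procfs variables measured
n = 12                               # number of benchmark programs (n >= k)

B = rng.random((n, k))               # row i = procfs msmts vector of benchmark i
x = rng.random(k)                    # procfs msmts vector of the target program

# solve B^T w ~= x in the least-squares sense: x as a combination of the rows of B
w, residuals, rank, _ = np.linalg.lstsq(B.T, x, rcond=None)

print("rank of the benchmark matrix:", rank)                 # k, if the benchmarks are "different enough"
print("reconstruction error:", np.linalg.norm(B.T @ w - x))

with n >= k benchmarks of rank k, the least-squares fit reconstructs x up to numerical error.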

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 12:58:08 AM
 #118

Quote
Given that once one owns an identity one can get at least one issuance of some payment for no work, why would anyone bother running actual computations and building up a positive reputation instead of mining IDs to burn by not providing service?

The ID POW can only have any effect if the cost to generate an ID is significantly higher than the gain from burning it, the cost the publisher pays for utilizing the attacker's non-working worker.  In other words, it has to be enforced somewhere/somehow that the cost of an ID is more than the "first round" cost of a job.  I don't see how/where this can be enforced, but I'm not precluding it just yet.  If you would enforce this somehow, it would solve at least the "trust model in a vacuum" part of the problem by relating the value of the trust entity to the value of its resource.

very easy: each client picks any amount of ID POW to require from its parties.
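for concreteness, a minimal sketch of what such a locally chosen ID POW could look like (a generic hashcash-style puzzle written for illustration; the actual zennet scheme may differ):

Code:
import hashlib

def find_id_pow(identity: bytes, difficulty_bits: int) -> int:
    """Search for a nonce so that sha256(identity || nonce) has difficulty_bits leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while int.from_bytes(hashlib.sha256(identity + nonce.to_bytes(8, "big")).digest(), "big") >= target:
        nonce += 1
    return nonce

def check_id_pow(identity: bytes, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(identity + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = find_id_pow(b"provider-address-xyz", 20)            # ~2^20 hashes on average
print(check_id_pow(b"provider-address-xyz", nonce, 20))     # True

each participant simply raises difficulty_bits until minting fresh identities costs more than whatever a burned identity could gain.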

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 01:01:43 AM
 #119

Quote
Sure, my point is that the benchmarks are presented as "one size fits all" and they aren't.  It's ultimately the same problem as the benchmark having nothing to do with the job, in a way.  How are you going to maintain and distribute useful benchmarks for every new computing kit and model that comes out?  (At least that problem is not so bad as trying to maintain a benchmark for any job algorithm used.  There can be only finite technologies.  Wink)

it's not about "one size fits all". it's about describing how much has to be invested in a job. like saying "each task of mine is as about 1000 fft of random numbers" or a linear combination of many such tasks.
again, such task decomposition is not mandatory, only the ongoing procfs msmts
another crucial point is that we don't have to have accurate benchmarks at all. just many of them, as different as possible.

Quote
Being continuously re-coded and added to in order to support change does not really fit my definition of flexible.  That is kind of like saying a brick is flexible because you can stack more bricks on top of it.

it is *able* to be customized and coded, hence flexible.
it has to be done only for totally new creatures of hardware.

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 01:21:26 AM
 #120

3. system translates procfs to zencoins using terms of benchmarks

number 3 may be tampered. and here you have all other means i mentioned.
the control on the price and logic is all the user's. the system just helps them translate procfs to zencoin.

Precisely the contradiction, restated yet again.  "Translate procfs to zencoin" is the exact same problem as "price procfs" so this statement is precisely restated as "The control on the price is the user's. The system just helps them by doing pricing."

We collectively decide what the coin is worth.  A user decides how many coins one "proc unit" is worth.  The system decides how many "proc units" one execution is worth.

The system has priced the resource, not the user.  The user priced the token in which that resource pricing is denominated, and the market priced the token in which that token is denominated, but the resource itself was valuated systematically by some code, not by the user or the broader market.

(Or do you only consider things priced if they are priced denominated in fiat dollars?   Huh)

yeah, i forgot to mention that Gauss also promised they're uncorrelated and with equal variances. so it's not marginal only. see gauss-markov theorem

My last reply to this still stands: Gauss was "wrong" about our model since he wasn't considering intentionally introduced fault.  Gauss-Markov (which I love, being an ML guy) ends up being doubly wrong, because the Markov side of things assumes the errors are uncorrelated.  An attacker introducing error may certainly introduce correlated error, and may even have reason to explicitly do so!  Wink
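For reference, the textbook statement we keep circling (the standard formulation, nothing Zennet-specific): in the linear model

y = X\beta + \varepsilon, \qquad \mathbb{E}[\varepsilon]=0, \qquad \operatorname{Var}(\varepsilon_i)=\sigma^{2}, \qquad \operatorname{Cov}(\varepsilon_i,\varepsilon_j)=0\ (i\neq j),

the OLS estimator is the best linear unbiased estimator.  The theorem says nothing about errors chosen by an adversary.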

Quote
it is a well-guided mitigation, with parameters promised to be normally distributed and uncorrelated.

I'm becoming really quite convinced that where you've "gone all wrong" is in this repeated assumption of semi-honest participation.

Much of what you're saying, but particularly this, simply doesn't hold in the explicit presence of an attacker.

Quote
2. many problems are very easy to verify, so the investment is far from doubling itself. examples of such problems are matrix inversion, eigenvalue problems, svd, all NP-complete problems, numerically solving (mainly time-dependent) PDEs, root finding and many more.

Sure, but again I suspect that most useful jobs will not be "pure" and so will not fall into any of these.

Quote
3. also in real life, you don't have real promises, just probabilities

The one exception to this, of course, being formal proof.  We can actually offer real promises, and this is even central to the "seemingly magic" novelty of bitcoin and altcoins.  Bitcoin really does promise that no one successfully double spends with any non-negligible probability, as long as hashing is not excessively centralized and the receiver waits an appropriate number of confirms.  (Both seemingly reasonable assumptions.)

Why you're so readily eschewing approaches that can offer any real promises, even go so far as to deny they exist "in real life," despite our Bitcoin itself being a great counterexample, is confusing to me.

Quote
4. we both know that most of the users won't tamper with the code,

I assume most users will be rational and will do whatever maximizes their own profit.

I actually go a bit further to assume that users will actually behave irrationally (at their own expense) if necessary to maximize their own profit.  (There's some fun modal logic!)

(I further have always suspected this is the root cause behind the failure of many corporations.)

Quote
so the risk probability, which is to be lowered even further, isn't so high in the first place. they have nothing to earn from miscomputation.

This carries a similar flavor to your HPC giants being motivated to turn in your bugs for your 1BTC bounty.

Miners have everything to earn from violating correctness.

If I can sell you on some resource contract, deliver some alternate and cheaper resource, and have you walk away none the wiser, I profit.

If I can sell you on some resource contract, convince you to make a deposit, and then abscond with almost no cost, I profit.

If I can sell you on some resource contract, actually perform the work and get paid, then turn around and sell your data and software to the highest bidder, I profit.

People *will* do these things at any given opportunity, and people will use any *other* vulnerability of the system to carry out these practices.  Even if you do everything possible to mitigate these concerns, it will all come down to people bulk mining identities to burn.

How exactly is this not just like CPUShare again, and why exactly shouldn't we expect it to fail for the exact same reasons, again?
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 01:24:29 AM
 #121

Quote
My last reply to this still stands, Gauss was "wrong" about our model since he wasn't considering intentionally introduced fault.  Gauss-Markov (which I love, being ML guy) ends up being double wrong, because the Markov side of things assumes the errors are uncorrelated.  An attacker introducing error may certainly introduce correlated error, and may even have reason to explicitly do so!  Wink
man stop mixing miscalculation with procfs spoofing

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 01:30:54 AM
 #122

i have a program which i want to decompose into a linear combination of benchmarks w.r.t. procfs measurements (abbrev. msmts).

i.e., i want to describe the vector x of procfs msmts of a given program as a linear combination of procfs msmts vectors of another n programs.

assume the dimension of the vectors is k.

all i need to do is have n>=k and have the matrix whose n rows are our vectors be of rank k.
i will get that almost surely if i just pick programs that are "different enough"; even if they're only "a bit different", i may increase n.

I still don't see how the n programs have any assumable relation to the program that I'm actually trying to get quoted.  How is the behavior of any of the runs of the n programs indicative of anything about my future run of my job?  How does what is actually being priced serve to price the thing that I want priced?
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 01:32:50 AM
 #123

very easy: each client picks any amount of ID POW to require from its parties.

Eeep, it just keeps getting more scary!  Cheesy

Who decides how much work is sufficient?  How does any given publisher have any indication of any given provider's ability to perform the identity work?

There kind of has to be some continual consensus on a difficulty here, doesn't there?
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 01:35:22 AM
 #124

it's not about "one size fits all". it's about describing how much has to be invested in a job. like saying "each task of mine is as about 1000 fft of random numbers" or a linear combination of many such tasks.
again, such task decomposition is not mandatory, only the ongoing procfs msmts
another crucial point is that we don't have to have accurate benchmarks at all. just many of them, as different as possible.

 Huh  If benchmarks don't have to be accurate then why do them at all?

Quote
it is *able* to be customized and coded, hence flexible.

By this measure all software is flexible.  Of course you must have known what I meant, here, as a measure of flexibility relative to alternatives.

Quote
it has to be done only for totally new creatures of hardware.

Totally new creatures of hardware show up every day.  Anyway this is neither here nor there.  I said I wasn't going to hold my breath on that result, and I didn't.  We can move on from it without prejudice.  Smiley
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 01:37:41 AM
 #125

Quote
Precisely the contradiction, restated yet again.  "Translate procfs to zencoin" is the exact same problem as "price procfs" so this statement is precisely restated as "The control on the price is the user's. The system just helps them by doing pricing."

at least we now understand who influences whom, and that the user may change his numbers at any time, with or without looking at the market. hence there is no contradiction or anything like that. you may argue about terminology.

Quote
I'm becoming really quite convinced that where you've "gone all wrong" is in this repeated assumption of semi-honest participation.

Much of what you're saying, but particularly this, simply doesn't hold in the explicit presence of an attacker.

draw me a detailed scenario for a publisher hiring say 10K nodes and let's see where it fails

Quote
The one exception to this, of course, being formal proof.  We can actually offer real promises, and this is the even central to the "seemingly magic" novelty of bitcoin and altcoins.  Bitcoin really does promise that no-one successfully double spends with any probability as long as hashing is not excessively centralized and the receiver waits an appropriate number of confirms.  (Both seemingly reasonable assumptions.)

Why you're so readily eschewing approaches that can offer any real promises, even go so far as to deny they exist "in real life," despite our Bitcoin itself being a great counterexample, is confusing to me.

in theory. in practice, power can shut down and so on. probability for a computer to give you a correct answer is never really 1. how much uptime does AWS guarantee? i think 99.999%

Quote
I assume most users will be rational and will do whatever maximizes their own profit.

I actually go a bit further to assume that users will actually behave irrationally (at their own expense) if necessary to maximize their own profit.  (There's some fun modal logic!)

(I further have always suspected this is the root cause behind the failure of many corporations.)

since after the network converges toward a more-or-less stable market, spammers and scammers will earn so little. they'd rather do decent work and get paid more. even if they don't, the other mitigations mentioned take effect.

Quote
People *will* do these things at any given opportunity, and people will use any *other* vulnerability of the system to carry out these practices.

i totally agree. i do not agree that the network is not able to mitigate them and converge to reasonable values. since the costs are so much lower than the big cloud firms' operational costs, we have a large margin to absorb some considerable financial risk.

Quote
How exactly is this not just like CPUShare again, and why exactly shouldn't we expect it to fail for the exact same reasons, again?

let me mention that i know nothing about cpushare, so i can't address this question

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 01:39:14 AM
 #126

man stop mixing miscalculation with procfs spoofing

Man stop thinking they are not exactly the same thing.   Wink

If you can get over that hangup I think we can make better progress.

The attacker's arbitrary control over the execution, without being held to any scrutiny of authentication, is the same problem regardless of if we are looking at the implications to the pricing or the implications to the execution itself.

The attacker recomposing the execution context is the same behavior in either case.  This is the good ol' red/blue pill problems, just reiterated under a utility function and cost model.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 01:39:46 AM
 #127

i have a program which i want to decompose into a linear combination of benchmarks w.r.t. procfs measurements (abbrev. msmts).

i.e., i want to describe the vector x of procfs msmts of a given program as a linear combination of procfs msmts vectors of another n programs.

assume the dimension of the vectors is k.

all i need to do is have n>=k and have the matrix whose n rows are our vectors be of rank k.
i will get that almost surely if i just pick programs that are "different enough"; even if they're only "a bit different", i may increase n.

I still don't see how the n programs have any assumable relation to the program that I'm actually trying to get quoted.  How is the behavior of any of the runs of the n programs indicative of anything about my future run of my job?  How does what is actually being priced serve to price the thing that I want priced?

forget about programs.
think of arbitrary vectors.
every k-dim vector can be written as a linear combination of k linearly independent vectors.
you can take many more vectors, increasing the probability that they'll contain k independent ones.
if the benchmarks are different, k will be enough. but the more the better.
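a minimal sketch of that "enough independent benchmarks" point (my own illustration with random stand-in vectors, using numpy):

Code:
import numpy as np

rng = np.random.default_rng(2)
k = 6                                  # dimension of a procfs measurement vector
B = rng.random((10, k))                # 10 benchmark vectors, more than k
B[3] = 2.0 * B[1]                      # a benchmark that is not "different enough"

print(np.linalg.matrix_rank(B))        # 6: the other rows still span the space
print(np.linalg.matrix_rank(B[:k]))    # 5: among the first k rows, row 3 adds nothing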

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 01:42:11 AM
 #128

man stop mixing miscalculation with procfs spoofing

Man stop thinking they are not exactly the same thing.   Wink

If you can get over that hangup I think we can make better progress.

The attacker's arbitrary control over the execution, without being held to any scrutiny of authentication, is the same problem regardless of if we are looking at the implications to the pricing or the implications to the execution itself.

The attacker recomposing the execution context is the same behavior in either case.  This is the good ol' red/blue pill problems, just reiterated under a utility function and cost model.

won't you agree that detecting miscalculation and detecting procfs spoofing are two different things?
maybe there is a similarity if our goal is to eliminate them both.
but all we want is to decrease the risk expectation.

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 01:43:46 AM
 #129

very easy: each client picks any amount of ID POW to require from its parties.

Eeep, it just keeps getting more scary!  Cheesy

Who decides how much work is sufficient?  How does any given publisher have any indication of any given provider's ability to perform the identity work?

There kind of has to be some continual consensus on a difficulty here, doesn't there?

it's not like btc difficulty, where all the network has to agree. it's only local. each participant may choose their own value.

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 01:46:55 AM
 #130

it's not about "one size fits all". it's about describing how much has to be invested in a job. like saying "each task of mine is as about 1000 fft of random numbers" or a linear combination of many such tasks.
again, such task decomposition is not mandatory, only the ongoing procfs msmts
another crucial point is that we don't have to have accurate benchmarks at all. just many of them, as different as possible.

 Huh  If benchmarks don't have to be accurate then why do them at all?

that's the very point: i don't need accurate benchmarks. i just need them to be different and to keep the system busy. that's all!! then i get my data from reading procfs during the benchmark's run. if you understand the linear independence point, you'll understand this.

Quote
Quote
it is *able* to be customized and coded, hence flexible.

By this measure all software is flexible.  Of course you must have known what I meant, here, as a measure of flexibility relative to alternatives.

Quote
it has to be done only for totally new creatures of hardware.

Totally new creatures of hardware show up every day.  Anyway this is neither here nor there.  I said I wasn't going to hold my breath on that result, and I didn't.  We can move on from it without prejudice.  Smiley

oh well
so you know how to write software that applies to all future hw?

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 01:48:13 AM
 #131

as i wrote, i'll be glad to discuss with you methods for proving computation. not even discuss but maybe even work on it.

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 01:58:08 AM
 #132

forget about programs.
think of arbitrary vectors.
every k-dim vector can be written as a linear combination of k linearly independent vectors.
you can take many more vectors, increasing the probability that they'll contain k independent ones.
if the benchmarks are different, k will be enough. but the more the better.

I don't disagree with any of this except the notion that it is acceptable to forget about programs.   Wink

What you propose is fine other than the fact that there is no relation between the end result and the program that you conveniently want to just "forget about."

(Such a relation is not easily established, generally.  Functional extension is hard.  Proving lack of divergence in the general case is even known to be impossible.  However, you can't just "punt" like this, forgetting about programs, and assume everything else will just work out soundly.)
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 01:59:08 AM
Last edit: September 20, 2014, 02:09:37 AM by ohad
 #133

forget about programs.
think of arbitrary vectors.
every k-dim vector can be written as a linear combination of k linearly independent vectors.
you can take many more vectors, increasing the probability that they'll contain k independent ones.
if the benchmarks are different, k will be enough. but the more the better.

I don't disagree with any of this except the notion that it is acceptable to forget about programs.   Wink

What you propose is fine other than the fact that there is no relation between the end result and the program that you conveniently want to just "forget about."

(Such a relation is not easily established, generally.  Functional extension is hard.  Proving lack of divergence in the general case is even known to be impossible.  However, you can't just "punt" like this, forgetting about programs, and assume everything else will just work out soundly.)

it's true for ANY vectors. calling them "procfs msmts" doesn't change the picture.
i can linearly span the last 100 chars you just typed on your pc by using, say, 200 weather readings from 200 different cities.
of course, only once, otherwise the variance will be too high to make it informative at all.
but for a given program, the variance over several runs is negligible (and in any case can be calculated and taken into account in the least-squares algo).
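for what it's worth, one standard way such run-to-run variance can be taken into account in the fit (my wording, not a quote from the pricing paper) is to weight each reading by the inverse of its estimated variance:

\hat{\beta} \;=\; \arg\min_{\beta}\ \sum_{i} \frac{\big(y_i - x_i^{\top}\beta\big)^{2}}{\sigma_i^{2}}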

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 02:00:53 AM
 #134

also note that those UVs are actually "atomic operations".
running one FLOP requires X atomic operations of various types.
we just add their amounts linearly!
but summing consecutive FLOPs will end up correlated.

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 02:08:46 AM
 #135

won't you agree that detecting miscalculation and detecting procfs spoofing are two different things?

No!  This is central to my point.  Authentication is authentication, and anything else is not.

(Authentication encompasses both concerns.)

Quote
maybe there is a similarity if our goal is to eliminate them both.

There is more than a similarity, there is a total equivalence.  To assume they are in any way different is a mistake.  They both just constitute a change in reduction semantics within the execution context.

Quote
but all we want is to decrease the risk expectation.

I am actually interested in a goal of eliminating both, of course.

However, all I really want is at least a rational explanation of where risk expectation is decreased, assuming rational behavior (and not assuming honest or even semi-honest behavior of participants beyond what is enforced by blockchain semantics) by participants.

It still seems to me like rational behavior of participants is to default to attack, and it seems that they have little discouraging them from doing so.
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 02:10:48 AM
 #136

it's not like btc difficulty, where all the network has to agree. it's only local. each participant may choose their own value.

By what criteria? As a publisher, how should I estimate what a sufficient difficulty should be to counter incentive for absconding at some point, considering I can't know the capacity of the worker?
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 02:12:23 AM
 #137

it's not like btc difficulty, where all the network has to agree. it's only local. each participant may choose their own value.

By what criteria? As a publisher, how should I estimate what a sufficient difficulty should be to counter incentive for absconding at some point, considering I can't know the capacity of the worker?

again, it's all a matter of approximations and probabilities.

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 02:14:34 AM
 #138

won't you agree that detecting miscalculation and detecting procfs spoofing are two different things?

No!  This is central to my point.  Authentication is authentication, and anything else is not.

(Authentication encompasses both concerns.)

Quote
maybe there is a similarity if our goal is to eliminate them both.

There is more than a similarity, there is a total equivalence.  To assume they are in any way different is a mistake.  They both just constitute a change in reduction semantics within the execution context.

Quote
but all we want is to decrease the risk expectation.

I am actually interested in a goal of eliminating both, of course.

However, all I really want is at least a rational explanation of where risk expectation is decreased, assuming rational behavior (and not assuming honest or even semi-honest behavior of participants beyond what is enforced by blockchain semantics) by participants.

It still seems to me like rational behavior of participants is to default to attack, and it seems that they have little discouraging them from doing so.

it is clear that:
1. you want the computational work to be proven
2. i want to control the risk expectation and decrease it to practical values

i claim that your method is less practical, but i'm open to hearing more.
you claim that my method won't work.
now let's focus on one or the other:
if we want to talk about my approach, then miscalc and procfs spoof are indeed different.

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 02:38:04 AM
 #139

at least we now understand who influences whom, and that the user may change his numbers at any time, with or without looking at the market. hence there is no contradiction or anything like that. you may argue about terminology.

 Huh Another response that didn't match the statement.  How is the system not doing the pricing?  If you think the market is doing the pricing, this would imply that the pricing only occurs at the fiat denomination.  I don't think this is a notion that would be well accepted.  If you think the user is doing the pricing, why?  How does the user setting the relation between the procfs token and the coin token say anything about the valuation of the actual computation, which is denominated in that procfs token?  I would hope you do understand the difference between valuation and denomination.

Quote
draw me a detailed scenario for a publisher hiring say 10K nodes and let's see where it fails

I've already detailed where it can fail.  This is all I've been doing for hours now.

Quote
in theory. in practice, power can shut down and so on.

Eh, I'm going to avoid getting back into this discussion for the hundredth-or-so time.  The whole model of bitcoin *doesn't* actually fall over when the EMPs go off.  The theory remains just as sound.  The protocol can still be enacted, and work, albeit probably with adjusted parameters that account for the (massively) increased network latency caused by lack of electronic communication.

Bitcoin would've literally solved the actual Byzantine Generals' problem, even at the time!

Quote
probability for a computer to give you a correct answer is never really 1.

It is when the process the computer employs to derive that answer is proof carrying!  Either you get out the correct answer or you get no output at all.

Quote
how much uptime does AWS guarantee? i think 99.999%

How much uptime does bitcoin guarantee? 100%.  Anti-fragile and all that jazz.  It really is deterministically "immortal," modulo the 51% attack or some hypothetical eventual exhaustion of the hash space.

Six sigma has it all wrong.  We should be building systems that are "forever."  (Particularly being "Bitcoiners.")

Quote
since after the network converges toward a more-or-less stable market, spammers and scammers will earn so little.

Again, why do we think this model will converge in such a direction?  What makes them actually "earn so little?"  What is going to prompt the network participants to behave altruistically when they have both incentive and opportunity not to?

Quote
Quote
People *will* do these things at any given opportunity, and people will use any *other* vulnerability of the system to carry out these practices.

i totally agree. i do not agree that the network is not able to mitigate them and converge to reasonable values.

I never said that I don't think it could.  In fact I explicitly stated the opposite several times.  What I'm saying here is that your network, as described so far, doesn't even seem to mitigate correctly.

Quote
since the costs are so much lower than the big cloud firms' operational costs, we have a large margin to absorb some considerable financial risk.

Eh?  How can we know the relative cost a priori?  Why shouldn't we believe the cost of this service will actually average higher, given the need for excess redundancy etc.  We've already brought into the discussion the notion that people might even just re-sell AWS, and they certainly wouldn't do so at a loss.

I don't think you've made a safe assumption on this.

Quote
Quote
How exactly is this not just like CPUShare again, and why exactly shouldn't we expect it to fail for the exact same reasons, again?

let me mention that i know nothing about cpushare, so i can't address this question

I'll rephrase.  How exactly is this not reiterative of any other attempts at a p2p resource market, which have all failed?

They've all failed for the same reasons I'm assuming your model fails, btw.  No authentication over quote or work.  Inadequately constrained execution context.  Disassociated cost models.  Providers absconding mid-computation.  Providers attacking each-other and the network for any possible advantage.  Providers burning through identities to perpetuate their unfair trades. Requirement for substantial overheads in any attempts at "mitigation" of these problems.

Those who do not learn from history, they say, are doomed.
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 02:43:55 AM
 #140

Quote
By what criteria? As a publisher, how should I estimate what a sufficient difficulty should be to counter incentive for absconding at some point, considering I can't know the capacity of the worker?

again, it's all a matter of approximations and probabilities.

More "and then some magic happens."

So a provider who makes some technological leap in solving the puzzle gets to violate the constraint at will until the network just "wisens up on its own?"  By what means should it become wise to the fact?

If you remove the difficulty scale it can't really be called PoW anymore, since it no longer manages to prove anything.  The "hash lottery" has to be kept relatively fair by some explicit mechanism, or else anyone who finds a way to buy their "hash tickets" very cheaply breaks any assumption of fairness otherwise!
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 02:51:42 AM
 #141


it is clear that:
1. you want the computational work to be proven

Ideally, yes.

Quote
2. i want to control the risk expectation and decrease it to practical values

As I've said I'd accept this as compromise, but have yet to see the mechanic by which it is decreased to practical values.

Quote
i claim that your method is less practical, but i'm open to hearing more.

Excellent.  Why, specifically, do you claim that my method is less practical?  Also what, specifically, would you like to hear more about?

Quote
you claim that my method won't work.

I only claim that they "shouldn't work, as described."

Quote
now let's focus on one or the other:
if we want to talk about my approach, then miscalc and procfs spoof are indeed different.

GAH how are they at all different? :-)

In either case the attacker claims to execute one reduction but actually executes another related reduction.  Where is the difference?  Either I am executing a different algorithm than specified for the job itself (miscalc) or I am executing a different algorithm than specified for the consumption sampling (procfs spoof) but either way I'm doing the same thing - a different reduction from what the publisher thinks I am doing.

HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 02:54:28 AM
 #142

it's true for ANY vectors. calling them "procfs msmts" doesn't change the picture.
i can linearly span the last 100 chars you just typed on your pc by using, say, 200 weather readings from 200 different cities.
of course, only once, otherwise the variance will be too high to make it informative at all.
but for a given program, the variance over several runs is negligible (and in any case can be calculated and taken into account in the least-squares algo).

Again, I don't disagree, but how does any of this have any bearing on "my arbitrary program?"  How do you get around the lack of functional extension?  How do you make the vectors, in any way, relate back to the job program?
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 02:58:42 AM
 #143

 Huh  If benchmarks don't have to be accurate then why do them at all?

that's the very point: i don't need accurate benchmarks. i just need them to be different and to keep the system busy. that's all!! then i get my data from reading procfs during the benchmark's run. if you understand the linear independence point, you'll understand this.

Wha?  I guess I don't understand the linear independence point. Oh, wait, I already knew that I didn't understand that.  Wink

If the benchmarks don't need to be accurate then why are they in the model at all?  If everyone can just reply 3.141 for every benchmark result and that is somehow "just ok" then what purpose does the benchmark serve?

My only conclusion is that it isn't actually "just ok."

Quote
oh well
so you know how to write software that applies to all future hw?

Sure, that is precisely what the full lambda cube is for!

What I don't know how to do is devise any scheme to meaningfully benchmark all future hw.  You briefly claimed you had.  I didn't hold my breath.  Good thing.
Yuzu
Sr. Member
****
Offline Offline

Activity: 368
Merit: 250



View Profile
September 20, 2014, 03:07:59 AM
 #144

 I have this marked for reading later.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 03:26:32 AM
 #145

what you say points to many misunderstandings of what i said. i don't blame you. more likely i was not clear enough, plus i do have a language barrier when it comes to english:

1. those benchmarks have nothing to do with benchmarking. i don't even read their results. it's just that i need programs that extensively use various system components, and such programs are typically found on the "benchmarks" shelf. but that's all.

1.1 the info i do gather from a benchmark run is the procfs readings of the run itself.

1.2 now let's go on with the simple case (in the pdf it's where s=t1), where the user just sets one number: "i want 1 zencoin for every hour of full load". so the client will push his pc to full load with "benchmarks", take procfs measurements, and compute the pricing vector. call this latter vector g. this vector plays the main role.

1.3 every few seconds, the client takes a procfs measurements vector. this is a vector of n components: the first component is CPU user time, the second is CPU kernel time, the third is RAM sequential bytes read, and so on over many procfs vars. then we take the inner product of g with the procfs msmts vector, and that's the amount to be paid (see the sketch after point 3 below).

1.4 the position of the vars as components in the vector also answers your misunderstanding regarding the decomposition of the procfs msmts of a given program into a linear combination of the procfs msmts of k>>n programs.

2. all the publisher wants is to fold his protein in 10 minutes over zennet rather than 2 weeks in the university's lab. he does not need uptime guarantees like bitcoin's, even if he's folding various proteins 24/7.

3. as for the id pow, just common sense: one estimates a confidence bound on the reward of a spoof/spam/scam and sets a POW which is significantly more expensive than it.
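here is the sketch promised in 1.3: a toy illustration of the payment step with made-up numbers and names (not code from the zennet client):

Code:
import numpy as np

# pricing vector g, computed once from the provider's benchmark runs
# (components correspond to procfs vars: CPU user time, CPU kernel time,
#  RAM sequential bytes read, ...)
g = np.array([2.0e-6, 1.5e-6, 3.0e-12])

def price_interval(procfs_sample):
    """ZenCoin owed for one sampling interval: inner product of g with the reading."""
    return float(g @ procfs_sample)

# one procfs reading: [cpu_user_jiffies, cpu_kernel_jiffies, bytes_read]
sample = np.array([4500.0, 800.0, 2.0e8])
print(price_interval(sample))        # amount added to the running micropayment

in the fuller setting, g would come out of the least-squares estimation described earlier rather than from a single "full load" number.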

Quote
GAH how are they at all different? :-)

In either case the attacker claims to execute one reduction but actually executes another related reduction.  Where is the difference?  Either I am executing a different algorithm than specified for the job itself (miscalc) or I am executing a different algorithm than specified for the consumption sampling (procfs spoof) but either way I'm doing the same thing - a different reduction from what the publisher thinks I am doing.

4. they're equivalent from the attacker's/provider's point of view, but not from that of the publisher, who wants to detect them, as i stated.

5. what do you mean by functional extension? like feature mapping, rkhs etc.?
Quote
Quote
oh well
so you know how to write software that applies to all future hw?
Sure, that is precisely what the full lambda cube is for!

What I don't know how to do is devise any scheme to meaningfully benchmark all future hw.  You briefly claimed you had.  I didn't hold my breath.  Good thing.

6. you're both a scientist and an engineer, right? but when you wrote it, you were only a scientist. we'll get back to this point Wink
Quote
Excellent.  Why, specifically, do you claim that my method is less practical?  Also what, specifically, would you like to hear more about?

7. the methods i've encountered in the literature seemed to me less practical. i do not rule out that you have found or read about good enough methods. and of course i'd like to hear more about such technologies, and even do more than hear, depending on the specific ideas.

more answers to give but let's first establish a wider basis of agreement and understanding.

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 03:46:13 AM
 #146

as i wrote, i'll be glad to discuss with you methods for proving computation. not even discuss but maybe even work on it.

I'm not sure there is much to work on.  Just replace the VM with something authenticating, and that's that, right?  There's a good list of options at http://outsourcedbits.org/coding-list/ covering everything from basic two-party garbling (you don't need more than two) to full multi-party morphisms.

There is also the work of SCIPR labs, but nothing so "drop in" yet.  I hear M$ and IBM both have some really nice tech for it, too, but I doubt they'd share.

One of the other folks who fits in my small elevator is Socrates1024, aka AMiller.  He has something I've come to call "Miller's Merkle Trees" which he calls generalized authenticated data structures.  This work started from his now-infamous model to reduce block chain bloat.  He has recently extended this into a reduction model for general authenticated computing without requiring Yao's garbling step and the overhead that comes with the logic table rewrite.  Verification happens in reduced complexity from the prover challenge itself.  This seems almost perfectly suited to your goals.  I'm sure he'd love to see practical application, and almost everything you'd need is in his github in "some form" or another, hehe.

Many of us have suspected for some time now that there is some middle ground between the approaches, with a preservation of privacy over data but not over the circuit construction itself and without the overheads of it.  I tend to believe this is probably the case, but as far as I'm aware it is still an open question.

I'm probably not the best to explain, since I'm only standing on the shoulders of the giants with this stuff.  You probably want to talk to the giants directly.  Wink  I can try to make a few introductions if you'd like.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 03:53:21 AM
 #147

as i wrote, i'll be glad to discuss with you methods for proving computation. not just discuss, but maybe even work on it.

I'm not sure there is much to work on.  Just replace the VM with something authenticating, and that's that, right?  There's a good list of options at http://outsourcedbits.org/coding-list/ covering everything from basic two-party garbling (you don't need more than two) to full multi-party morphisms.

There is also the work of SCIPR labs, but nothing so "drop in" yet.  I hear M$ and IBM both have some really nice tech for it, too, but I doubt they'd share.

One of the other folks who fits in my small elevator is Socrates1024, aka AMiller.  He has something I've come to call "Miller's Merkle Trees" which he calls generalized authenticated data structures.  This work started from his now-infamous model to reduce block chain bloat.  He has recently extended this into a reduction model for general authenticated computing without requiring Yao's garbling step and the overhead that comes with the logic table rewrite.  Verification happens in reduced complexity from the prover challenge itself.  This seems almost perfectly suited to your goals.  I'm sure he'd love to see practical application, and almost everything you'd need is in his github in "some form" or another, hehe.

Many of us have suspected for some time now that there is some middle ground between the approaches, with a preservation of privacy over data but not over the circuit construction itself and without the overheads of it.  I tend to believe this is probably the case, but as far as I'm aware it is still an open question.

I'm probably not the best to explain, since I'm only standing on the shoulders of the giants with this stuff.  You probably want to talk to the giants directly.  Wink  I can try to make a few introductions if you'd like.
thanks much for the info. will go over it.
will be delighted to get introductions, of course. also, if you want to be a part of zennet, let's consider it

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 04:10:23 AM
 #148

after a quick look at the link:
1. recall that on zennet we speak about native code only. no jvms or the like.
2. data privacy is nice to have. much of the data is not private (just numbers), and in any case, homomorphic encryption etc. can take place over zennet using any existing implementation. all zennet does is give you ssh. if your verification tool gives you suspicious info, then just disconnect. that's part of your distributed app, which i do nothing to help you distribute, other than giving you passwordless ssh access to remote machines.
3. we're speaking about arbitrary native code (which rules out projects for specific calcs). verifiable calcs are reasonably and economically mitigated, as above.
but that's only after a quick look

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 04:16:21 AM
 #149

1. those benchmarks have nothing to do with benchmarking. i don't even read their results. it's just that i need programs that extensively use various system components, and such programs i'll typically find on the shelf labeled "benchmarks". but that's all.

1.1 the info i do gather from running the benchmarks is the procfs readings of the run itself.

Sure, this is what I meant by the output/result of the benchmarking.  No confusion there.

Quote
1.2 now let's go on with the simple case (in the pdf it's where s=t1), when the user just sets one number: "i want 1 zencoin for every hour of full load". so the client will push his pc to full load with "benchmarks", take procfs measurements, and compute the pricing vector. call this latter vector g. this vector plays the main role.

1.3 every few seconds, the client takes a procfs measurements vector. this is a vector of n components: the first component is CPU User Time, the second is CPU Kernel Time, the third is RAM Sequential Bytes Read, and so on over many procfs vars. then we take the inner product of g with the procfs measurements vector, and that's the amount to be paid.

The procfs measurements in 1.2 and in 1.3 are entirely unrelated.  The load that any given benchmarking algorithm, B, puts on the system will not be representative of the load the job algorithm, P, puts on the system.  No combination of any algorithms B2, B3, ... BN will decompose onto a meaningful relation with P's load.  This is the fundamental problem.  No measurement of any other algorithm tells us anything about P unless that algorithm is isomorphic to P.  Even then it doesn't tell us anything about our particular run of P, the performance profile of which will likely be dominated by external factors.

Quote
1.4 the location of the vars as components in the vector also answers your misunderstanding regarding the decomposition of the procfs msmts of a given program into a linear combination of the procfs msmts of k>>n programs.

?

Other than the well ordering making the subsequent decomposition consistent with the original g vector, I fail to see how it relates.  Is there something more and special to this?

Quote
2. all the publisher wants is to fold his protein in 10 minutes over zennet rather than in 2 weeks in the university's lab. he does not need uptime guarantees like bitcoin's, even if he's folding various proteins 24/7.

Sure, I've never disputed this, nor do I see it as any problem.  (With the exception of the absconder cases, which I consider a separate concern anyway.)  You brought uptimes into the discussion, in an unrelated context.  Let's move on.

Quote
3. as for the id pow, just common sense. one estimates a confidence bound on the reward of a spoof/spam/scam and sets a POW which is significantly more expensive than it.

But how do they make the determination?  How am I supposed to pick a target without knowing anything about any participant's capacity?  How can I have any reason to believe that my target is not way too low, creating no incentive not to abscond?  I can just set the target unreasonably high, but then I significantly reduce my pool of possible workers.

How can we say the puzzle is sufficiently difficult if we can't quantify difficulty?

Quote
4. they're equivalent from the attacker's/provider's point of view, but not from the publisher's point of view, since he wants to detect them, as i stated.

Only in that, without authentication, they have separate mitigations for the fact that they are not actually detectable.  (Even this is arguable, as in both cases mitigation can only really be performed by way of comparison to a third party result!)

Quote
5. what do you mean by functional extension? like feature mapping, rkhs etc.?

I mean http://ncatlab.org/nlab/show/function+extensionality as in proving two algorithms equivalent, or proving a morphism path as continuous from one to the other under homotopy.  If you can't show that the benchmark work is a functional extension from the job work then any measurements over the benchmark work have no relation to the job work.  In other words, if you can't assert any relation between the algorithms then you can't assert any relation between the workloads of executing the algorithms!

Quote
6. you're both a scientist and an engineer, right? but when you wrote it, you were only a scientist. we'll get back to this point Wink

I've been both, at various points.  I try to avoid ever being both simultaneously; I find it tends to just cause problems.


Quote
7. the methods i've encountered in the literature seemed to me less practical.

Until the past few years, they have been almost entirely impractical.  Times have changed rapidly, here.

Quote
i do not rule out that you found or read about good enough methods.

It concerns me that you wouldn't have been aware of these developments, considering the market you're about to try to penetrate.  Know your competition, and all that.

Quote
and of course i'd like to hear more about such technologies, and even do more than hear, depending on the specific ideas.

I've sent you a PM wrt this.

Quote
more answers to give but let's first establish a wider basis of agreement and understanding.

Sure.  Which of our disagreements and/or misunderstandings should we focus on?
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 04:22:45 AM
 #150

after a quick look at the link:
1. recall that on zennet we speak about native code only. no jvms or the like.
2. data privacy is nice to have. much of the data is not private (just numbers), and in any case, homomorphic encryption etc. can take place over zennet using any existing implementation. all zennet does is give you ssh. if your verification tool gives you suspicious info, then just disconnect. that's part of your distributed app, which i do nothing to help you distribute, other than giving you passwordless ssh access to remote machines.
3. we're speaking about arbitrary native code (which rules out projects for specific calcs). verifiable calcs are reasonably and economically mitigated, as above.
but that's only after a quick look

Then what you want, certainly, is Miller's work, and not Yao.
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 04:29:16 AM
 #151

thanks much for the info. will go over it.
will be delighted to get introductions, of course. also, if you want to be a part of zennet, let's consider it

I'd consider it.  However demands on my time are already pretty extreme.   Undecided
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 04:32:01 AM
Last edit: September 20, 2014, 04:11:49 PM by ohad
 #152

Quote
Other than the well ordering making the subsequent decomposition consistent with the original g vector, I fail to see how it relates.  Is there something more and special to this?

it agrees with the ordering of all procfs measurements vectors.
say it's a two-dimensional vector: [<UserTime>, <KernelTime>], like [5, 3.6].
now say I run your small task with various inputs many times. I get some average procfs vector, of the form [<UserTime>, <KernelTime>], plus some noise with some variance.
this vector can be written as a*[<B1.UserTime>, <B1.KernelTime>] + b*[<B2.UserTime>, <B2.KernelTime>], plus some noise with covariances.
of course, this is only if B1.UserTime*B2.KernelTime - B1.KernelTime*B2.UserTime does not equal zero (this is just the determinant). if it's zero, we need B3, B4 and so on until we get rank 2.
this is how to decompose the procfs vector of a program into a combination of several given (rank-n) benchmark programs' procfs vectors.
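
a minimal numerical sketch of this decomposition (made-up numbers, not the real measurement code), using least squares so the noise term is absorbed:

Code:
import numpy as np

# average procfs vectors of the benchmark programs B1, B2 (invented numbers),
# columns = [UserTime, KernelTime]
B = np.array([[5.0, 3.6],   # B1
              [1.2, 7.0]])  # B2

# the decomposition requires the benchmark vectors to be independent, i.e. the
# determinant (in general, the rank) must be nonzero; otherwise add B3, B4, ...
assert np.linalg.matrix_rank(B) == B.shape[1]

# average procfs vector of the measured program (noise included)
p = np.array([4.1, 6.2])

# find a, b such that p ~= a*B1 + b*B2; least squares absorbs the noise
coeffs, *_ = np.linalg.lstsq(B.T, p, rcond=None)
print(coeffs)  # the decomposition [a, b]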

Quote
Quote
i do not rule out that you found or read about good enough methods.
It concerns me that you wouldn't have been aware of these developments, considering the market you're about to try to penetrate.  Know your competition, and all that.

i did that research, just didn't find something good enough. the best i found involved transmitting vm snapshots.
no human being is able to read all papers regarding distributed computing.

Quote
3. as for the id pow, just common sense. one estimates a confidence bound on the reward of a spoof/spam/scam and sets a POW which is significantly more expensive than it.
Quote
But how do they make the determination?  How am I supposed to pick a target without knowing anything about any participant's capacity?  How can I have any reason to believe that my target is not way too low, creating no incentive not to abscond?  I can just set the target unreasonably high, but then I significantly reduce my pool of possible workers.

How can we say the puzzle is sufficiently difficult if we can't quantify difficulty?

just don't give too much work to a single provider. that's good practice for many reasons.
in addition, begin slowly: hash some strings, see that you get correct answers, etc., and feel out your provider before massive computation (all automatically, of course).

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 06:17:20 AM
 #153

Quote
Other than the well ordering making the subsequent decomposition consistent with the original g vector, I fail to see how it relates.  Is there something more and special to this?
it agrees with the ordering of all procfs measurements vectors.
...
this is how to decompose the procfs vector of a program into a combination of several given (rank-n) benchmark programs' procfs vectors.

Right, so the ordering only matters to keep the decomposition consistent with the g values, nothing more. This is what I suspected.

Quote
i did that research, just didn't find something good enough. the best i found involved transmitting vm snapshots.
no human being is able to read all papers regarding distributed computing.

I'm just quite surprised you didn't catch it.  The service industry has been "abuzz" about the post-Gentry work.

Quote
Quote
3. as for the id pow, just common sense. one estimates a confidence bound on the reward of a spoof/spam/scam and sets a POW which is significantly more expensive than it.
Quote
But how do they make the determination?  How am I supposed to pick a target without knowing anything about any participant's capacity?  How can I have any reason to believe that my target is not way too low, creating no incentive not to abscond?  I can just set the target unreasonably high, but then I significantly reduce my pool of possible workers.

How can we say the puzzle is sufficiently difficult if we can't quantify difficulty?

just don't give too much work to a single provider. that's good practice for many reasons.

This doesn't answer the question at all.  There really does need to be a global difficulty, as far as I can tell.  I've been mulling this one over for a while now, and just don't see a way around it.  For an arbitrary worker, I can't know if he is doing his id pow with cpu, gpu, pen and paper, asic, pocket calculator, or some super specialized, super efficient worker that outperforms all the rest.  What is an appropriate difficulty target to set on his puzzle?

ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 10:36:03 AM
 #154

1. those benchmarks have nothing to do with benchmarking. i don't even read their results. it's just that i need programs that extensively use various system components, and such programs i'll typically find on the shelf labeled "benchmarks". but that's all.

1.1 the info i do gather from running the benchmarks is the procfs readings of the run itself.

Sure, this is what I meant by the output/result of the benchmarking.  No confusion there.

Quote
1.2 now let's go on with the simple case (in the pdf it's where s=t1), when the user just sets one number: "i want 1 zencoin for every hour of full load". so the client will push his pc to full load with "benchmarks", take procfs measurements, and compute the pricing vector. call this latter vector g. this vector plays the main role.

1.3 every few seconds, the client takes a procfs measurements vector. this is a vector of n components: the first component is CPU User Time, the second is CPU Kernel Time, the third is RAM Sequential Bytes Read, and so on over many procfs vars. then we take the inner product of g with the procfs measurements vector, and that's the amount to be paid.

The procfs measurements in 1.2 and in 1.3 are entirely unrelated.  The load that any given benchmarking algorithm, B, puts on the system will not be representative of the load the job algorithm, P, puts on the system.  No combination of any algorithms B2, B3, ... BN will decompose onto a meaningful relation with P's load.  This is the fundamental problem.  No measurement of any other algorithm tells us anything about P unless that algorithm is isomorphic to P.  Even then it doesn't tell us anything about our particular run of P, the performance profile of which will likely be dominated by external factors.


true, they're unrelated.
as for load decomposition,
why on earth should the programs be identical/homomorphic/isomorphic?
programs that perform 1000 FLOPs will take about half the time of programs that perform 2000 similar FLOPs, even if they're two entirely different algos. i'm not counting on that anywhere, just pointing out that i'm apparently not interested at all in such functional equivalence.
how is any kind of such equivalence related to resource consumption?
Quote
I mean http://ncatlab.org/nlab/show/function+extensionality as in proving two algorithms equivalent, or proving a morphism path as continuous from one to the other under homotopy.  If you can't show that the benchmark work is a functional extension from the job work than any measurements over the benchmark work have no relation to the job work.  In other words, if you can't assert any relation between the algorithms then you can't assert any relation between the workloads of executing the algorithms!

again, how come you tie workloads so closely to a specific algo's operation?
maybe it's needed for proven comps, but why for my approach of estimating consumption and reducing risk?

Quote
1.4 the location of the vars as components in the vector also answers your misunderstanding regarding the decomposition of the procfs msmts of a given program into a linear combination of the procfs msmts of k>>n programs.

?

Other than the well ordering making the subsequent decomposition consistent with the original g vector, I fail to see how it relates.  Is there something more and special to this?
where are we stuck on agreeing over the procfs vector decomposition?
maybe you're looking for something deep like functional extension, but those vectors can be spanned by almost any set of enough random vectors, as almost all vectors can.
Quote
Quote
Quote
4. they're equivalent from the attacker's/provider's point of view, but not from the publisher's point of view, since he wants to detect them, as i stated.

Only in that, without authentication, they have separate mitigations for the fact that they are not actually detectable.  (Even this is arguable, as in both cases mitigation can only really be performed by way of comparison to a third party result!)

sometimes the verification is so fast that it can be done on the publisher's computers. so as for miscalc, a 3rd party is not always needed. moreover, yes, i do assume that each publisher rents many providers and is able to compare between them.

I still don't understand which flaw you claim to have found.
You claim that people will just create many addresses, take jobs for a few seconds, and this way fool the whole world and get rich until zennet is doomed?
So many mechanisms were offered to prevent this. Such as:
1. Easy outlier detection because of [large population and] normal estimators.
2. Keeping local history and building a list of trusted addresses, by exposing yourself to risk slowly, not trusting an address fully right from the first second.
3. You can always ask it to hash something and verify the result! Moreover, you can spend your first seconds with your new provider just proving his work. Yes, they can become malicious a moment after, but you'll find out pretty quickly and never work with that address again, while requiring more POW than this address has.
4. i'd be glad to see a detailed scenario of an attacker trying to earn. i don't claim he won't make a penny, but show me how he'll make more than a penny a day. Can you pinpoint the problem? for miscalc and procfs spoofing, we may assume we don't have the fancy pricing model, and we can discuss it as if we were pricing according to raw procfs.

Tau-Chain & Agoras
CryptoPiero
Member
**
Offline Offline

Activity: 98
Merit: 10


View Profile
September 20, 2014, 12:34:19 PM
 #155

peled1986
Legendary
*
Offline Offline

Activity: 882
Merit: 1000


View Profile
September 20, 2014, 01:10:28 PM
 #156


Super interesting conversation between ohad and HMC   Smiley
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 06:31:48 PM
 #157

true, they're unrelated.
as for load decomposition,
why on earth should the programs be identical/homomorphic/isomorphic?

First, homomorphism is an entirely separate thing. I only want the isomorphism.

Second, they "should" be isomorphic because the performance profile of any given algorithm has to be assumed as potentially unique to that algorithm.  No benchmark you could throw at the system will necessarily "load" that system in the same way(s) that my program P will "load" that system.  What we ultimately want to get to is a measure of "how hard is the system working on P" and we can't formulate this beginning from some baseline that has nothing to do with P!

Quote
programs that perform 1000 FLOPs will take about half the time of programs that perform 2000 similar FLOPs, even if they're two entirely different algos. i'm not counting on that anywhere, just pointing out that i'm apparently not interested at all in such functional equivalence.

Sure, but this reasoning is only really valid if we look at a single measure in isolation - which is not what we intend, per your decomposition!  If we run your benchmarks and generate our g values across all metrics, and then use a spin loop for P, we will see 100% processor usage, but no other load.  Does this mean that we are loading the system with 100% of "P work?"  YES, but your model will initially decide that the usage is actually lower!

Quote
how is any kind of such equivalence related to resource consumption?

If we can formulate our g in relation to P then our measures relate, too.  If we take our g measure using a variety of spin loops (potentially "extended" from P in some way) and then perform our decomposition against our P spin loop your decomposition will "know" from g that the only meaningful dimension of the performance profile is the cpu, and will correctly measure P as loading the system to 100%.

Obviously this is a contrived example to illustrate the point; no-one will want to pay to run a busy loop P.  However, taking the point to an extreme like this makes it simple to illustrate.
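
A toy numerical version of the same point, with two made-up metrics and an arbitrarily chosen g, just to show the mispricing effect:

Code:
import numpy as np

# two hypothetical metrics: [cpu_seconds, bytes_read]
full_load_hour = np.array([3600.0, 100e9])  # what the mixed benchmarks produced
# choose g so a full-load hour costs exactly 1 zencoin, the coin split evenly
# between the two metrics (an arbitrary illustrative choice)
g = 0.5 / full_load_hour

spin_loop_hour = np.array([3600.0, 0.0])    # 100% cpu for an hour, zero i/o
print(g @ full_load_hour)  # 1.0 -> "full load"
print(g @ spin_loop_hour)  # 0.5 -> priced as half load, despite 100% cpu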

Quote
again, how come you tie workloads so closely to a specific algo's operation?

Because every algorithm, combined with a particular reduction of that algorithm, has its own unique performance profile!  A performance is unique to a particular run.  We can never know how our particular run will behave, and we certainly can't measure it against the baseline of some other algorithm(s) in a meaningful way.  Disk usage or page faults etc. in our benchmarks have NO bearing on our measure of our busy loop P!

Quote
maybe it's needed for proven comps, but why for my approach of estimating consumption and reducing risk?

It is necessary in either case to derive a meaningful measure from the decomposition.

Also, this notion should actually be turned the other way round - proven comps should be considered needed for estimating consumption, as AMiller pointed out on IRC.

" 12:21 < amiller> imo it's not a bad idea to have some pricing structure like that, i'm not sure whether it's novel or not, i feel like the biggest challenge (the one that draws all my attention) is how to verify that the resources used are actually used, and this doens't address that" [SIC]


Quote
where are we stuck on agreeing over the procfs vector decomposition?
maybe you're looking for something deep like functional extension, but those vectors can be spanned by almost any set of enough random vectors, as almost all vectors can.

The problem is that such a span is not meaningful relative to subsequent measure unless some functional extension exists.  Unless our benchmarks are "similar" to our P busy loop, they only introduce noise to our decomposition of our measures over P.


Quote
sometimes the verification is so fast that it can be done on the publisher's computers. so as for miscalc, a 3rd party is not always needed.

Again, I'm assuming most computations will not have referential transparency, will not be pure, precluding this.

Quote
moreover, yes, i do assume that each publisher rents many providers and is able to compare between them.

Again, I'd rather find solutions than defer to (potentially costly) mitigation.

Quote
I still don't understand which flaw you claim to have found.
You claim that people will just create many addresses, take jobs for a few seconds, and this way fool the whole world and get rich until zennet is doomed?

Among other behaviors.  I basically assume that people will do "all the same crap they did with CPUShare et al that made those endeavors fail miserably."

Quote
So many mechanisms were offered to prevent this. Such as:

Except that, as offered, they don't prevent anything at all.  They presume to probabilistically avoid the concern, except that there is no formalism, yet, around why we should believe they will serve to avoid any of them to any reasonably probable degree.  Again, I defer to AMiller, who always puts things so much better than I ever could:

"12:25 < amiller> there are no assumptions stated there that have anything to do with failure probability, malicious/greedy hosts, etc. that would let you talk about risk and expectation"

Quote
3. You can always ask it to hash something and verify the result! Moreover, you can spend your first seconds with your new provider just proving his work. Yes, they can become malicious a moment after, but you'll find out pretty quickly and never work with that address again, while requiring more POW than this address has.

Again, how is the challenge difficulty to be set?  This is still unanswered!

Quote
4. i'd be glad to see a detailed scenario of an attacker trying to earn. i don't claim he won't make a penny, but show me how he'll make more than a penny a day.

How much he stands to make largely depends on his motive and behavior.  Some attackers might make no illegitimate profit at all, but prevent anyone *else* from being able to accept jobs, for example.  Some attackers might burn addresses, and just take whatever "deposit" payments they can get.  Some attackers might fake their computation, and pocket the difference in energy cost between the fake work and the legitimate work.  Whether or not he'll make more than a penny a day depends on how capable his approach is, how naive/lax his victims are, and how much traction the network itself has gained.

My concern is not that some attacker will get rich; my concern is that the attackers leeching their "pennies a day" will preclude the network from being able to gain any traction at all, and will send it the way of CPUShare et al.

It is very important to remember that, for some attackers, a dollar a day would be a fortune.  Starving people are starving.  Some of those starving people are smart and own computers, too.

Quote
Can you pinpoint the problem? for miscalc and procfs spoofing, we may assume we don't have the fancy pricing model, and we can discuss it as if we were pricing according to raw procfs.

The root of the problem is just the combination of everything I've already described.  Wink  It is all predicated on the lack of authentication over the g values, and made worse by the lack of cost for identity.

HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 07:17:39 PM
 #158


Super interesting conversation between ohad and HMC   Smiley

But the big question: Who is wrong?  Wink

Is there anyone out there in internetland who wants to jump in with a new perspective?
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 20, 2014, 08:27:51 PM
 #159

Quote
Quote
3. You can always ask it to hash something and verify the result! Moreover, you can spend your first seconds with your new provider just proving his work. Yes, they can become malicious a moment after, but you'll find out pretty quickly and never work with that address again, while requiring more POW than this address has.
Again, how is the challenge difficulty to be set?  This is still unanswered!

Easily: requiring ID POW is like requiring your BTC pubkey to begin with a certain number of zeros.
When I connect to you, I give you some challenge to sign with your privkey, to make sure you own that pubkey with tons of POW.
So with this approach, "good people" will use one addr and not generate a new one every time, both because of the id pow, and to keep a history with a good name in the publisher's local data.
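
a minimal sketch of such an id pow check, assuming (purely for illustration) that the identity is a pubkey whose sha256 must start with a given number of zero bits; the real address and signing scheme may differ:

Code:
import hashlib, os

def leading_zero_bits(digest):
    # count the leading zero bits of a byte string
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def id_pow_ok(pubkey, target_bits):
    # the "tons of POW" test: sha256 of the identity pubkey must start with
    # at least target_bits zero bits (how to pick target_bits is the open question)
    return leading_zero_bits(hashlib.sha256(pubkey).digest()) >= target_bits

def make_challenge():
    # random string the provider must sign with the matching privkey,
    # proving it really owns the identity that carries the POW
    return os.urandom(32)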

As for AMiller's points, see the following chat (he hasn't answered yet):

amiller:
so the conditions for gauss markov theorem are weaker than IID, but they're still have to be independent and the distributions have to have the same variance
i don't know what you mean by as for lying, connected to actual outcome
if many other nodes perform the same job,
then you have to assume that those nodes failing are all independent
but on the other hand if *all* the nodes know they can cheat and get away with it, they might all cheat
so it's not justified to assume that the probability of any individual node cheating is uncorrelated with any of the others
this is kind of the difference between adversarial reasoning and, i dunno what to call the alternative, "optimistic" reasoning

me: the issue you raise is simpler than the linear model: say i have 10^9 work items to distribute. i begin dividing them among clients. after one job is done by a worker (the smaller the job, the better) i can tell if they cheated or not, or i can tell how much they charged me for that work item. if i don't like the numbers i see, i can always disconnect. it will all happen automatically, by the publisher specifying their prices and margins.
the publisher may also begin slowly, giving some hashing commands at the beginning just to check that the results are correct. they won't pay more for slow work; they pay according to stationary resource consumption.

The main point is that the exposure is short (a few secs), while the publisher can perform any risk-decreasing behavior, even running a small background task just for sanity verification. Of course, the attacker can modify rare random calculations, and lowering this risk (the miscalc risk) can be done by any of the following:
1. have easily proven jobs. you claim that they are rare because of impurity. I claim that for many if not most common problems, one can formalize them and pick a method of solution that is distributed and whose main massive task is pure, such as a huge eigenvalue problem.
2. increase the number of times each work item is calculated (exponentially decreasing the risk)
3. select known or trusted providers.
4. require a high ID POW, so high that there is no incentive to lose it just for the price of a few seconds of calculations.

note again: even if the provider has computing monsters, the publisher doesn't have to use all its resources. he can run, at first, simple cheap tasks, so the potentially lost few seconds contain only a minimal amount of paid work.
-- this latter solution of course does not really help you identify scammers, but it does help you slowly gain experience with the good guys.
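
for illustration, a minimal sketch of the "begin slowly" sanity check from the chat above. run_on_provider is a hypothetical helper standing in for the passwordless ssh channel; it is not part of any existing zennet api:

Code:
import hashlib, os

def sanity_check(run_on_provider, n_trials=3):
    # cheap, verifiable hashing tasks before dispatching real (unverified) work;
    # run_on_provider(cmd) runs a shell command on the rented machine over ssh
    for _ in range(n_trials):
        data = os.urandom(64).hex()
        expected = hashlib.sha256(data.encode()).hexdigest()
        answer = run_on_provider("printf %s " + data + " | sha256sum").split()[0]
        if answer != expected:
            return False  # wrong answer: disconnect and drop the address
    return True
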
will be back later with more answers.

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 20, 2014, 08:55:26 PM
 #160

When I connect to you, I give you some challenge to sign with your privkey, to make sure you own that pubkey with tons of POW.

How do we know what constitutes "tons" for any given puzzle solver?!?!?!?!  Without a global difficulty, how do we know what is sufficiently difficult?  How do we know that our "tons" is not so little that it fails to deter the scammers, and not so much that it disqualifies the non-scammers?  How should we pick a number that we can be confident falls in the middle, here, without knowing anything about the worker and how he will perform his hashing?

Quote
4. require a high ID POW, so high that there is no incentive to lose it just for the price of a few seconds of calculations.

How do we know how high is high enough?  How do we know how high is too high?  How do we pick the target?  You keep dodging this question, somehow.  Wink

I'll let AMiller respond to the rest, since it is intended for him.  I don't want to interject there and just muddy things further.  Grin
CryptoPiero
Member
**
Offline Offline

Activity: 98
Merit: 10


View Profile
September 20, 2014, 11:28:27 PM
 #161


Super interesting conversation between ohad and HMC   Smiley

But the big question: Who is wrong?  Wink

Is there anyone out there in internetland who wants to jump in with a new perspective?

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join ?

I like the idea, but I'm getting paranoid about the QoS and the verifiability of pieces of work. Also I don't get the need for introduction of a new currency. I'm sure you've discussed these as I've read the first 5/6 posts, but couldn't quite follow you guys.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 21, 2014, 03:47:57 AM
 #162


Super interesting conversation between ohad and HMC   Smiley

But the big question: Who is wrong?  Wink

Is there anyone out there in internetland who wants to jump in with a new perspective?

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join ?

I like the idea, but I'm getting paranoid about the QoS and the verifiability of pieces of work. Also I don't get the need for introduction of a new currency. I'm sure you've discussed these as I've read the first 5/6 posts, but couldn't quite follow you guys.

Hi,

Our discussion follows two main approaches:
1. Verifiable computing, and
2. Risk reduction

Zennet does not aim to verify the correctness of the computation, but to offer risk-reducing and risk-controlling mechanisms. HMC's opinion is that we should stick to path 1, towards verifiable computing, and we're also discussing this option off this board. HMC also suggests that Zennet's risk reduction model is incorrect and gives scammers an opportunity to ruin the network. I disagree.
I think it'd be enough for you to read only the last comments, since many of the earlier ones are only clarifications, so you can get right into the clear ones Smiley
More information about Zennet at http://zennet.sc/about, and more details on the math part of the pricing algo available here http://zennet.sc/zennetpricing.pdf

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 21, 2014, 05:16:42 AM
Last edit: September 21, 2014, 08:06:30 AM by HunterMinerCrafter
 #163

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join ?

I'll try, but the key points have been shifting a bit rapidly.  (I consider this a good thing, progress.)

Socrates1024 jumping in moved the goal posts a bit, too, in ways that are probably not obvious from the thread, now.  Undecided

Perhaps we need that dedicated IRC channel sooner?

1. Identity.  We agree that a PoW mining to claim identity should be used.  I say that identity should be pre-claimed, based on a global difficulty target.  Ohad says that identity should be claimed when the worker client connects to the publisher for initiation, with an arbitrary target set by the publisher.
  My key concern: The publisher has no way to know what difficulty is appropriate for any given worker.

2. Verification of execution.  We agree that it would be ideal to have authenticated processes, where the publisher can verify that the worker is well behaved.  Ohad says there doesn't need to be any, and the cost of any approach would be too high.  I say the cost can be made low and that there is a likely critical need.
  My key concern: Without a verification over at least some aspects of the job work the publisher can have far too little faith in the results of any one computation, and particularly in the results of a combination of computations.  In particular, lack of verification may lead to rational collusion by workers and added incentive to attack each-other.

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to make their determination about how to dispatch and price their work.  I say the benchmark must represent that same algorithm to take baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern:  Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

4. Pricing mechanism.  We agree that the presented linear decomposition utility pricing objective is probably "just about right."  Ohad says his approach is entirely sound.  I say that the overall model leads to an opportunity, particularly because of prior points taken in conjunction, for an attacker to introduce relevant non-linearity by lying about results.
  My key concern: the fundamental assumption of the system, which ends up "joining together" the entire economic model, actually ends up working in reverse of what was intended, ultimately giving the "sell side" (worker) participants a particular incentive to misrepresent their resources and process.

Did I miss any of the major issues?

Quote
I like the idea, but I'm getting paranoid about the QoS and the verifiability of pieces of work. Also I don't get the need for introduction of a new currency. I'm sure you've discussed these as I've read the first 5/6 posts, but couldn't quite follow you guys.

I also like the idea, but even if the model is fixed it still has some dangerous flaws "as given."  It will be a prime target for hackers, data theft, espionage, and even just general "griefing" by some participants.  In some respects, this is easily resolved, but in other respects it may become very difficult and/or costly.  This will, in any case, have to be a bit of a "wait and see" situation.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 21, 2014, 05:23:18 AM
 #164

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join ?
1. Identity.  We agree that a PoW mining to claim identity should be used.  I say that identity should be pre-claimed, based on a global difficulty target.  Ohad says that identity should be claimed when the worker client connects to the publisher for initiation, with an arbitrary target set by the publisher.
  My key concern: The publisher has no way to know what difficulty is appropriate for any given worker.

I do agree; that's just a misunderstanding. I meant that when the client connects, the publisher can't know which address this IP owns, unless they challenge it with some string to sign.
Yes, the POW should be invested in identity creation, like in the Keyhotee project.

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 21, 2014, 05:28:03 AM
 #165

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join ?

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to make their determination about how to dispatch and price their work.  I say the benchmark must represent that same algorithm to take baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern:  Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

if we go with the new idea of "slim" paravirt provability, we might prove the running of the benchmarks themselves.

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 21, 2014, 05:34:18 AM
 #166

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join ?

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to make their determination about how to dispatch and price their work.  I say the benchmark must represent that same algorithm to take baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern:  Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

if we go with the new idea of "slim" paravirt provability, we might prove the running of the benchmarks themselves.

Yes, but these are related-but-distinct concerns.  Related because of #4 there.  Distinct because even with authentication to verify that the correct benchmarks are run, I still see a potential problem if we lack that "functional extension" from the benchmark to the job itself.  Our baseline would still be the wrong baseline; we'd just have a proof of it being the correct wrong baseline, heh.
CryptoPiero
Member
**
Offline Offline

Activity: 98
Merit: 10


View Profile
September 21, 2014, 11:04:44 AM
 #167

1. Identity.  We agree that a PoW mining to claim identity should be used.  I say that identity should be pre-claimed, based on a global difficulty target.  Ohad says that identity should be claimed when the worker client connects to the publisher for initiation, with an arbitrary target set by the publisher.
  My key concern: The publisher has no way to know what difficulty is appropriate for any given worker.

I agree too, and it seems like it was a misunderstanding based on Ohad's post.

2. Verification of execution.  We agree that it would be ideal to have authenticated processes, where the publisher can verify that the worker is well behaved.  Ohad says there doesn't need to be any, and the cost of any approach would be too high.  I say the cost can be made low and that there is a likely critical need.
  My key concern: Without a verification over at least some aspects of the job work the publisher can have far too little faith in the results of any one computation, and particularly in the results of a combination of computations.  In particular, lack of verification may lead to rational collusion by workers and added incentive to attack each-other.

Many processes don't have the properties necessary to be authenticated. For example, you can verify the work of a miner by a simple hash function, but you can't verify the work of a neural network that simply. If the publisher has 1000 hosts on his VM, and wants to verify their work one by one, it would take a lot of computational power on his side. Also, I assume by 'work' we don't mean running a mathematical operation across hosts. I don't know the infrastructure for the VM, but the system may assume all hosts are online and cooperating in a non-malicious way, so it can build and operate an entire OS across them. If one host acts maliciously, it would endanger the integrity of the whole VM. In this perspective, assuming 1 in 1000 hosts is defective endangers the entire system, not just 1/1000 of the work.

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to make their determination about how to dispatch and price their work.  I say the benchmark must represent that same algorithm to take baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern:  Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

I agree with HMC here. Any kind of benchmarking used must be run alongside the process. Any host can benchmark high and detach resources after the process has begun. The host can even do this using the network itself: consider renting 1000 hosts just to benchmark high for a publisher and then releasing them. So, you either have to benchmark and process at the same time, decreasing the effective resources available, or the work supplied must be 'benchmarkable' itself. In the perspective I introduced in the last question, this does not necessarily mean every publisher should change his work, but it would mean running an OS across hosts that can effectively calculate the contribution of each host in terms of resources. This may introduce another problem aside: any open source OS selected would have to be heavily changed.

4. Pricing mechanism.  We agree that the presented linear decomposition utility pricing objective is probably "just about right."  Ohad says his approach is entirely sound.  I say that the overall model leads to an opportunity, particularly because of prior points taken in conjunction, for an attacker to introduce relevant non-linearity by lying about results.
  My key concern: the fundamental assumption of the system, which ends up "joining together" the entire economic model, actually ends up working in reverse of what was intended, ultimately giving the "sell side" (worker) participants a particular incentive to misrepresent their resources and process.

I think if points 2 and 3 are solved, this won't arise. If we can identify well-behaved nodes that give verifiable results with verifiable resource usage, this incentive wouldn't exist. Any pricing model based on this would be sound.
MaximBitCoin
Newbie
*
Offline Offline

Activity: 1
Merit: 0


View Profile
September 21, 2014, 03:00:04 PM
 #168

Here is an interesting blog post about the project

http://data-science-radio.com/is-zennet-or-any-other-decentralized-computing-real/
Hueristic
Legendary
*
Offline Offline

Activity: 2310
Merit: 1429


Doomed to see the future and unable to prevent it


View Profile
September 21, 2014, 03:39:30 PM
 #169


Interesting article. I disagree with the premise that "Most" DC users need that level of security for their proprietary tasks. The need for such massive computational power is in itself a deterrent to theft, as the use of the resulting data is specialized to the entity seeking it.

ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 21, 2014, 05:09:10 PM
 #170


3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to make their determination about how to dispatch and price their work.  I say the benchmark must represent that same algorithm to take baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern:  Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.