DATA PRODUCER
An edge case exists in our system that will become more relevant as the network grows. Imagine an analysis buyer who
requests an analysis from only one data producer or dataset, but the requirements the analysis buyer has set match multiple
data producers (or datasets) in the network. Which one will MADANA choose to participate?
(See Tokenomics Paper for graph information here:
http://www.madana.io/download/MADANA-Tokenomics-Paper.pdf)
REPUTATION SYSTEM FOR DATA QUALITY
PAX can be utilized as another deposit measure. Data producers are rewarded through an incentive system when they provide
quality datasets. The incentive system allows the data producer to earn nearly double the revenue from a particular dataset
compared to a dataset with a lower reputation.
The data producer would need to make a small deposit on every dataset he connects to the MADANA platform. This gives
him a Deposit Factor valuation of “1” for this dataset. A second factor, the Reputation Factor, is initially set to “0”.
These two factors added up (1 + 0 = 1) create a Value Factor for the dataset of 1 x (price of the dataset).
When the dataset is used in an analysis process, plug-in providers have an incentive to design and train their algorithms
to check data quality before proceeding with the analysis. After a dataset has passed through the algorithm, it receives
a mark indicating that it has successfully participated in an analysis. During this process, the algorithm increases the
Reputation Factor logarithmically to 0.5, then 0.75, and so on, approaching a factor of 1. If the dataset is declined by an
algorithm, its Deposit Factor falls logarithmically to 0.5, then 0.25, and so on, approaching zero. Based on the Deposit Factor,
the plug-in provider can claim the reward for separating datasets of low quality from datasets of good quality.
Figure 4 - Reputation System based on accepted/declined analyses
(See graph in the Tokenomics Paper here:
http://www.madana.io/download/MADANA-Tokenomics-Paper.pdf)
In conclusion, datasets that are declined multiple times quickly lose their Value Factor, which approaches 0, making them
worthless (0 x price of the dataset = 0). Datasets that are never declined, on the other hand, approach a doubled factor
valuation (2 x price of the dataset = double the price).
The data producer is not forced to make a deposit; instead, he has the chance to work his way up to a normal dataset factor valuation.
Starting from a Value Factor of 0, it is possible for him to approach 1 within a few accepted analyses, without making a deposit.
This way, spam or low-quality data remains at a factor valuation of 0, protecting the data analysis buyer.
The ecosystem's growth will result in many datasets needing to prove their quality, thus locking up more and more PAX as
deposits. This reduces the token velocity considerably.
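The factor mechanics above can be sketched in code. Note the exact update curve is an assumption: the text only states that the factors move "logarithmically" through 0.5, 0.75, ... (accepts) and 0.5, 0.25, ... (declines), so this sketch models each step as halving the remaining distance; the class and method names are illustrative, not part of the MADANA platform.

```python
class Dataset:
    """Illustrative sketch of the Deposit/Reputation/Value Factor mechanics."""

    def __init__(self, price, with_deposit=True):
        self.price = price
        # Deposit Factor starts at 1 with a deposit, 0 without one.
        self.deposit_factor = 1.0 if with_deposit else 0.0
        # Reputation Factor always starts at 0.
        self.reputation_factor = 0.0

    def accept(self):
        # Accepted analysis: Reputation Factor approaches 1 (0 -> 0.5 -> 0.75 ...).
        self.reputation_factor += (1.0 - self.reputation_factor) / 2

    def decline(self):
        # Declined analysis: Deposit Factor approaches 0 (1 -> 0.5 -> 0.25 ...).
        self.deposit_factor /= 2

    @property
    def value_factor(self):
        # Value Factor = Deposit Factor + Reputation Factor.
        return self.deposit_factor + self.reputation_factor

    def effective_price(self):
        return self.value_factor * self.price


ds = Dataset(price=100)
print(ds.value_factor)   # 1.0  (1 + 0, fresh dataset with deposit)
ds.accept(); ds.accept()
print(ds.value_factor)   # 1.75 (1 + 0.75), approaching the doubled valuation
bad = Dataset(price=100, with_deposit=False)
print(bad.effective_price())  # 0.0 until the dataset earns reputation
```

Under this model, a no-deposit dataset starts worthless and can still approach a Value Factor of 1 through accepted analyses alone, matching the behavior described above.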
PLUG-IN PROVIDER
To secure proof of ownership, the implementation of a plug-in registration fee is under consideration. This fee is paid by the
plug-in provider to secure his intellectual property (the plug-in) on the MADANA platform. The MADANA Nodes receive the fee
as a transaction fee. With increasing network growth, more new plug-ins will be registered, as a higher number of plug-ins means
more earnings for the plug-in provider. The development and registration of new plug-ins are considered an investment
for the plug-in provider, so he would rather reinvest in the registration of more plug-ins than liquidate his earnings.
This also increases HT and thus reduces the token velocity.
APE
Distributed Result Validation
One potential future extension of MADANA is the implementation of a distributed network of APEs. By meeting specific
software and hardware requirements, volunteering entities can host the analysis process and receive PAX tokens as a reward.
An important question is how MADANA could ensure that the APE returns the correct and desired result requested by
the inquirer. The solution is to validate the signature of the code to be executed. This ensures that only the desired code
is executed.
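The code-validation step might look roughly as follows. MADANA's actual signing scheme is not specified here, so as a simplified stand-in this sketch compares a SHA-256 digest of the plug-in code against the digest recorded at registration time; a production system would verify an asymmetric signature (e.g. Ed25519) against the provider's registered public key. All function names are illustrative.

```python
import hashlib
import hmac

def register_plugin(code: bytes) -> str:
    """Digest stored on the platform when a plug-in is registered."""
    return hashlib.sha256(code).hexdigest()

def validate_before_execution(code: bytes, registered_digest: str) -> bool:
    """An APE runs this check so that only the registered code is executed."""
    actual = hashlib.sha256(code).hexdigest()
    # Constant-time comparison to avoid leaking digest information.
    return hmac.compare_digest(actual, registered_digest)

plugin = b"def analyze(data): return sum(data) / len(data)"
digest = register_plugin(plugin)
print(validate_before_execution(plugin, digest))             # True
print(validate_before_execution(b"tampered code", digest))   # False
```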
Moreover, it is conceivable to go one step further and execute the analysis process simultaneously on an odd number of
previously validated nodes and then compare the respective results with each other afterward. The described process would
rule out that an analysis that is faulty for undetermined reasons is sent back to the requesting entity. Corrupt
APEs could be quickly detected in the network and marked, which would have a positive effect on the quality of the
analyses. The approach of distributed analysis processing would per se be vulnerable to takeover by a majority of corrupt
APEs, but this should be intercepted by the other security measures implemented in the system.
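The comparison step described above can be sketched as a strict-majority vote over the results returned by an odd number of nodes. The APE interface and node identifiers here are assumptions for illustration.

```python
from collections import Counter

def validate_results(results):
    """Accept a result only if a strict majority of nodes agree.

    results: mapping of node id -> returned analysis result.
    Returns (majority_result, dissenting_node_ids), or (None, all node ids)
    when no strict majority exists and the run must be discarded.
    """
    counts = Counter(results.values())
    result, votes = counts.most_common(1)[0]
    if votes <= len(results) // 2:
        return None, list(results)
    # Nodes deviating from the majority can be marked as potentially corrupt.
    dissenters = [node for node, r in results.items() if r != result]
    return result, dissenters

# Three nodes: one corrupt APE returns a deviating result.
results = {"ape-1": 42.0, "ape-2": 42.0, "ape-3": 13.0}
value, suspects = validate_results(results)
print(value)     # 42.0
print(suspects)  # ['ape-3'] -> can be marked in the network
```

Running the same analysis on, say, 3 or 5 nodes keeps the redundancy cost bounded while a single faulty node can no longer corrupt the result returned to the requesting entity.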
APEs could be regulated by requiring them to deposit a considerable amount of PAX and put it at stake. If an APE fails to
deliver the correct processing of an analysis, it loses its deposit to the successful APEs. This way, more and
more PAX gets locked up as the network grows and the demand for analysis processing rises. To participate in more valuable
analyses (which may require a larger deposit), the APE is therefore assumed to have a constant need for PAX to process more,
or more valuable, work. This increases the HT of his earnings and thus reduces token velocity.
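A minimal sketch of this staking rule, assuming the simplest possible slashing policy (the full deposit of each failed APE is split evenly among the successful ones; the text does not specify the exact redistribution):

```python
def settle_analysis(deposits, failed):
    """Slash failed APEs' stakes and split them among the successful APEs.

    deposits: mapping of APE id -> staked PAX amount.
    failed: set of APE ids that delivered an incorrect result.
    """
    successful = [ape for ape in deposits if ape not in failed]
    slashed = sum(deposits[ape] for ape in failed)
    for ape in failed:
        deposits[ape] = 0          # full deposit forfeited (assumed policy)
    for ape in successful:
        deposits[ape] += slashed / len(successful)
    return deposits

deposits = {"ape-1": 100, "ape-2": 100, "ape-3": 100}
print(settle_analysis(deposits, failed={"ape-3"}))
# ape-1 and ape-2 each gain 50 PAX; ape-3 loses its entire stake
```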