Author Topic: Ectarium - An Idea To Bridge Neural Networks And Nodes  (Read 163 times)
Watanabe Blue (OP) — Copper Member, Newbie (Activity: 13, Merit: 0)
August 08, 2018, 04:20:08 AM
Last edit: August 13, 2018, 04:30:34 PM by Watanabe Blue
 #1

LIVE DEMO at: Ectarium.com. Register a unique keypair at https://www.bitaddress.org/ to test the demo. Click "Train" to train your own neural network. Please read below for full details. Currently seeking: Full stack developers.

I'm looking for a business partner that is a professional, full stack developer, willing to work with me in exchange for initial equity in the token before we recruit a full team. Equity in funds raised may also be distributed if we ever launch an ICO. Please PM me your real resume, letter of introduction, and real LinkedIn profile if interested, along with past experience in blockchain projects. You should understand how to create and implement a decentralized node network. Experience with convolutional and recurrent neural networks is a plus, preferably with TensorFlow and Keras.

I'm not a great copywriter, so I'm going to get straight to the details of what my idea is.

Ectarium - A Vision for a Decentralized Artificial Intelligence Network That Utilizes Cloud Scaling

The Problem
Blockchain protocols suffer from two main issues: 1) slow transaction confirmation as they scale, along with slow dApp processing, and 2) high transaction fees as the token gains more investors.

Addressing point 1 in summary) The aim of Ectarium is not to be a fully distributed blockchain among many miners, but rather a decentralized node network, similar to the Lightning Network in Bitcoin, the node layout of Credits, or the decentralized block producer model of EOS. Experience so far suggests that a fully decentralized blockchain cannot support dApps at a scale approaching worldwide adoption, at least not today, and perhaps not until something like quantum computing becomes mainstream. A compromise must be made somewhere between the speed of centralization and the immutability of decentralization. EOS is one of the latest projects to make such a compromise with its block producer model, and the large amount of funds raised in the EOS ICO shows this is clearly in high demand by consumers. As recent ICOs have demonstrated, mainstream consumers, investors, and blockchain users are more interested in a fast, reasonably decentralized network that scales than in a theoretically perfect, utopian protocol that is slow and cannot be used for anything other than value transfer. Decentralization is not an "absolute" property, but rather a scale between the theoretical concept of full decentralization and something as centralized as a bank. So long as a protocol crosses the threshold on that scale at which it can be said to be "trustless", speed and efficiency of the protocol should be the main focus after that.

A similar model to that of EOS, Credits, and other comparable projects would be implemented in Ectarium, but rather than having 21 block producers, there would be no limit to the number of decentralized nodes. This would be more similar to a protocol such as Credits; however, the dApp aspect would be far richer but more centralized. How this compromise of greater centralization will be overcome is explained further in this post. Note that this compromise applies to dApps, not to transactions. As will be explained later, the transaction layer of Ectarium will be at least as decentralized as projects such as Credits, and more so than projects such as EOS, whereas the dApp layer will be more centralized, with the degree of decentralization dependent upon the choice of the dApp creator. In essence, the amount of decentralization can be scaled as needed rather than fixed, depending upon the needs of dApp developers. All of this is explained below.

Addressing point 2 in summary) Point 2 reflects an economic problem and the observation that most blockchain protocols are built by excellent, savant programmers who unfortunately do not recruit economists or practically minded people into their teams. Whilst I am not an economist myself, there are some glaring issues with how current protocols handle costs, and with how these costs can be reduced. EOS claims to offer a "no fee" network. Whilst enticing in principle, the law that "there is no free lunch" applies to blockchain as surely as the law of gravity, at least with our current technology. Upon further examination it is obvious that EOS is not free at all, and in fact will most likely be more expensive than most other networks if it ever scales to worldwide adoption. This is because the "free" aspect of EOS relies upon the % of token equity you hold correlating to an equivalent % of available CPU power on the EOS network. Whilst fine initially, EOS is hard coded to inflate its supply by at least 1% per year to pay block producers, with a potential cap of up to 5% inflation depending upon community vote. Rather than paying transaction fees directly, users lose a portion of the equity they own in EOS tokens as more supply is generated, which over time lowers the % of CPU computations available to them. This is, in my opinion, the worst possible model for transaction fees, since those who do not use the network and do not delegate their CPU power are paying "maintenance costs" in the form of inflation even when they do not generate any transactions. The greatest winners in this system are those who use their CPU cycles all the time, at maximum efficiency. The EOS fee model is great for active users, but not for casual users.
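
To make that dilution effect concrete, here is a rough back-of-the-envelope sketch in Python. The holding and supply figures are made up purely for illustration; only the 1%-5% inflation range reflects the EOS parameters mentioned above.

Quote
def cpu_share_after(years, holding, initial_supply, annual_inflation):
    # Supply grows by annual_inflation each year; a passive holder's token count
    # stays fixed, so their share of supply (and hence of available CPU) shrinks.
    supply = initial_supply * (1 + annual_inflation) ** years
    return holding / supply

holding = 10_000                 # tokens held by a passive user (made-up figure)
initial_supply = 1_000_000_000   # made-up initial supply

for rate in (0.01, 0.05):        # the 1% floor and 5% voted cap mentioned above
    start = cpu_share_after(0, holding, initial_supply, rate)
    end = cpu_share_after(10, holding, initial_supply, rate)
    print(f"{rate:.0%} inflation: CPU share falls from {start:.6%} to {end:.6%} over 10 years")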

What about fixed fee models such as Ethereum's? The issue here is that a fully decentralized blockchain will always have high fees, due to the computational effort required by miners and the storage costs of the many miners needed to maintain a constantly updated ledger. These costs must be recovered somewhere, and they are recovered through transaction fees. For this reason, it is unlikely that on-chain fees on Ethereum will ever consistently reach sub-$0.01 USD levels for near-instant payment processing when the network is in high demand.

On the opposite end of the spectrum, projects such as IOTA, which claim to have no fees, in fact do have fees in the form of computation costs. If you're a dApp developer that needs to send millions of transactions daily, you'd be investing in some very, very expensive hardware to make IOTA work, and this would probably end up costing more than transaction fees on a network such as Ethereum. The opposite problem to EOS: IOTA is great for casual users, but not for heavily active users such as dApps.

The closest thing to a correct economic model is seen in projects such as Cardano. Whilst Cardano implements delegated proof of stake, this is inevitably a fancy way of dressing up a node network, since voters will nevertheless still vote for a more centralized form of block verification. Cardano does things right with its economic model by limiting its supply to a fixed amount, so that casual users are not penalized by inflation, and by allowing voted-in, centralized nodes to verify transactions to keep costs low. By making a small tradeoff on the "decentralization to centralization" scale, costs can remain low. However, Cardano does dress things up a bit with fancy terminology when, in essence, a simple decentralized network of nodes would suffice for the same purpose. Projects such as Credits aim to implement such a fee model but seem to suffer from a lack of organization and professionalism.

Solution Ectarium aims to provide to point 1)
Ectarium aims to be quite unlike any prior network, but may draw partial inspiration from the better aspects of both EOS and Cardano. It will, however, have an underlying goal of transitioning to a self-evolving, adapting artificial intelligence that will self-monitor many things on the network, instead of relying on voters.

For the transaction layer, a network of voted-in nodes will perform transaction verification. Whilst initially these nodes can be voted in manually via staking or a similar system, the end goal of the network is to use a self-evolving neural network to vote in nodes automatically, one that in theory will be resistant to manipulation (i.e. manipulation attempts serve as past training data the NN can learn from and use to predict similar attempts in the future). In essence, nodes will be voted in after a pro bono "good network samaritan" period. Anyone wishing to run a node must delegate a certain amount of server resources to the network and verify transactions for free for a certain period of time as an "at risk node". Only after they've proven themselves over time will they be voted in as a more trusted node. Nodes therefore have an incentive to behave honestly, since if they're removed from the network they will lose the time and resources invested during their pro bono period, in addition to their initial staking requirements. The voting in should eventually happen automatically, as determined by an AI; however, such an AI is not planned until at least an official non-beta release, far in the future, and is not in the scope of this post.

Transaction verification by nodes in order to ensure fast, cheap transactions is nothing new. Whilst Ectarium will use a similar system for transaction verification, something completely different will be used for dApps that have intense CPU/GPU/storage requirements. Allow me to introduce the CrossCloud principle.

CrossCloud
In order to run a node on the Ectarium network, dedicating a centralized server for transaction verification isn't the only requirement. All nodes must also run a private cloud that is centralized to that respective node. Nodes may create their own personal clouds using their own hardware, or they can outsource these clouds to services such as Amazon AWS, Microsoft Azure, or one of the many other cloud hosting providers. Depending upon the rewards they wish to receive, they may scale their cloud to be large or small; there are no fixed requirements other than that their cloud remains active and online a certain % of the time (to be determined, most likely 99.9% or so, to avoid penalization).

dApps that are "CrossCloud" verified are dApps that can in principle be distributed, to ensure redundancy, without affecting their utility. In essence, a CrossCloud dApp is one that is distributed over the separate private clouds of multiple nodes, i.e. exact copies of the dApp and its state are maintained across various separate private clouds.

By hosting the dApp on a single large private cloud, the dApp can scale instantaneously to user demand without sacrificing efficiency, and shrink back when that demand ceases. The problem with a single cloud, however, is that the dApp isn't a dApp at all, but rather a centralized application. To solve this, CrossCloud verified dApps are randomly distributed among the private clouds of multiple nodes. When the dApp is run, it is run on all of the private clouds where it is contained, and those private clouds will periodically (but not instantaneously) communicate with one another to ensure consistency in the state of the dApp. In the event a node acts maliciously and manipulates the state of a dApp in its own private cloud, its state will of course differ from that contained in the other private clouds. If the state differs, that node will be penalized or removed, thus disincentivizing private cloud manipulation by node owners. When distributed self-AI monitoring is eventually implemented, the aim is that the AI will be able to detect such malicious activity by itself.
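
As a rough illustration of this periodic consistency check, here is a minimal Python sketch. All node names and data structures here are hypothetical, since the real protocol has not been designed yet.

Quote
import hashlib
from collections import Counter

def state_hash(dapp_state):
    # Each private cloud periodically hashes its local copy of the dApp state.
    return hashlib.sha256(dapp_state).hexdigest()

def reconcile(reported_hashes):
    # reported_hashes: node_id -> state hash reported by that node's private cloud.
    # The majority hash is treated as the honest state; dissenting nodes are
    # flagged for penalization or removal, as described above.
    majority_hash, _ = Counter(reported_hashes.values()).most_common(1)[0]
    flagged = [node for node, h in reported_hashes.items() if h != majority_hash]
    return majority_hash, flagged

# Example: node_C has tampered with its local copy of the dApp state.
reports = {
    "node_A": state_hash(b"balance=42"),
    "node_B": state_hash(b"balance=42"),
    "node_C": state_hash(b"balance=9999"),
}
honest_hash, to_penalize = reconcile(reports)
print("nodes flagged for penalty:", to_penalize)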

It should be noted that not all dApps would initially be capable of using CrossCloud technology. Games and other applications with continuous state changes wouldn't be well suited to it; however, as the project and network evolve, solutions could be implemented, along with many other dApp capabilities not yet thought of. Initially, CrossCloud will be designed with the creation and training of decentralized neural networks in mind. Neural networks are perfectly suited to this technology, and those wishing to deploy their own decentralized neural networks would benefit greatly from using CrossCloud on the Ectarium network.

Not all private clouds on the network will host the same dApps; this is not practical. Each private cloud would be different, having its own unique crossover with other private clouds, like a web. It would be prohibitively expensive, and slow, to maintain state consistency of private clouds over the entire network. Instead, dApp creators and those using CrossCloud will choose how many private clouds they wish to use to meet their decentralization requirements. dApp state consistency will only be maintained over the number of private clouds they elect to pay for. The more private clouds a dApp uses to meet its decentralization requirements, the higher the cost. For some applications, perhaps as few as 2-3 random private clouds will suffice; other applications may require 100 or so random, independent private clouds (remember, each private cloud is owned by a unique node), and would thus pay much more than the same dApp using only 2-3 private clouds.

The vision with CrossCloud is that how much decentralization is needed will depend entirely upon the dApp creator. On current dApp protocols, dApps are forced into a decentralization model they may not need, and may be paying needlessly prohibitive costs for it. With CrossCloud, only the dApps that truly desire full decentralization need to pay such costs. dApps that are content with a lower statistical threshold of trustlessness can use far fewer nodes and will thus pay a much lower cost. This is in line with the Ectarium vision of "pay for what you use". However, as mentioned in point 1, all costs on Ectarium will be minimal since the network won't be using a traditional miner-based PoW blockchain, but rather a hybrid decentralized node network, where each node runs its own private cloud.

To understand CrossCloud more fully, here is the most basic example possible: file storage. Let's say fileA has a unique checksum. A user wants to store fileA in a decentralized environment for fast retrieval and file manipulation. They believe doing this over 4 decentralized nodes will suffice for their decentralization requirements. Thus, they host fileA in 4 random, separate private clouds, each run by a separate node. When the user wishes to retrieve fileA, the network performs a checksum check on the file retrieved. If the checksum is the same among all 4 decentralized nodes, the file is retrieved from any one (and only one) of the 4 random private clouds. In the event one private cloud has a different checksum for fileA and the other 3 private clouds have the same checksum, the file is retrieved from one of the 3 private clouds with the matching checksum. The cloud with the different checksum is most likely penalized and potentially loses its trusted node status. Why "most likely"? A neural network will eventually be implemented to weigh many things about the nodes: their past behaviour, how much they have to lose, the probability of 3 nodes acting maliciously against 1 honest node (incredibly unlikely, but possible), and other such factors, with far more efficiency and speed than any human or group of voters could hope to achieve. The neural network will decide which nodes are most likely honest, which are not, and penalize accordingly. For nodes, there will be a manual arbitration/appeal system, requiring a temporary stake, for the rare event of a mistake. From the perspective of the user, however, they have simply stored fileA in CrossCloud and successfully retrieved it. The entire internal workings of checksums and file retrieval can be invisible to the user if they wish. Of course, as an additional security precaution, the user should save the checksum before they use CrossCloud and ensure it's the same upon retrieval; such functionality would most likely be embedded client-side into the network. The statistical probability of receiving a different file approaches 0 as more random private clouds are used. End-to-end encryption would of course be used for sensitive things such as file storage.
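
To put a rough number on that statistical claim, here is a small sketch; the per-node probability of misbehaviour is a made-up assumption purely for illustration.

Quote
from math import comb

def p_malicious_majority(n_clouds, p_bad):
    # Probability that more than half of n independent clouds return a tampered copy,
    # assuming each cloud misbehaves independently with probability p_bad.
    need = n_clouds // 2 + 1
    return sum(comb(n_clouds, k) * p_bad**k * (1 - p_bad)**(n_clouds - k)
               for k in range(need, n_clouds + 1))

p_bad = 0.05   # hypothetical 5% chance that any single node serves a bad copy
for n in (3, 5, 9, 15):
    print(f"{n} clouds: P(malicious majority) = {p_malicious_majority(n, p_bad):.2e}")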

Now for a more sophisticated example of CrossCloud that uses neural networks. Let's say a user wishes to train a neural network using the 'rmsprop' optimizer over a certain number of epochs. They don't trust training this network on any single centralized cloud, and they lack the capability to train it on their own computer. Instead, they wish to use the decentralized nature of CrossCloud and its scaling capabilities. The user selects the number of private clouds that will suffice for their decentralization requirements, let's say 5 in this example. Here is what would happen:

1) The user simply submits to the Ectarium network their training set, classification set (in the event of categorical classification), and their parameters + hyperparameters. They'll also input the desired epochs, optimizer, batch size, and of course, most importantly, the actual network architecture and layout. Lastly, they'll select the number of private clouds they need in order to meet their decentralization requirements, which in this case is simply the number 5.

2) The Ectarium network now does the rest. It finds 5 separate, random nodes with private clouds that have available computational power and storage to meet the requirements of training the user's neural network, and sends them all of the data that the user submitted. These 5 private clouds train the neural network separately, without communicating with each other at all.

3) The Ectarium network will hold back a "test" set of a fixed size, most likely 10% of the size of the training set. It will then retrieve the trained model from these 5 private clouds and test each model against the withheld "test" set. It's important to note the private clouds do not have access to this test set, so they cannot falsify the accuracy of their trained network. In the event the private clouds reach consensus as to the accuracy and loss on the test set, within a statistical margin of error, all private clouds are deemed to most likely be trustworthy, and the trained model is randomly retrieved from one of them (or all of them, if the user so elects) and returned to the user. Alternatively, the user may automatically store the model on these same private clouds as a file for later use as a predictor. Or the user may wish to store the trained model on more or fewer private clouds, or on none at all, simply keeping the trained model for themselves. Each of these scenarios would have different costs depending upon the cloud usage requirements.

It might be asked where the Ectarium network will conduct this "test" set checking. It will occur in the private clouds of other, completely separate, random nodes, with the 5 private clouds that did the training excluded from this random selection. Since all nodes must keep a certain % of their private cloud's computational capability, such as 10%, open for exactly this type of decentralized redundancy checking, validation of the model should always be possible. Every node used in this process may receive a payment depending upon the utilization of its private cloud resources. In the event any node reports results inconsistent with the other nodes (such as a test set loss outside the statistical margin of error, or a false positive/negative with the validation nodes), the erring nodes will be punished.
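
Here is a minimal sketch of what that consensus check on the withheld test set might look like. The 5% margin and the median-based rule are illustrative assumptions, not a finalized spec.

Quote
import statistics

def check_training_consensus(reported_losses, rel_margin=0.05):
    # reported_losses: node_id -> test-set loss measured by independent validation
    # nodes for that training node's model. Nodes whose loss deviates from the
    # median by more than rel_margin are flagged as suspect, per the text above.
    median_loss = statistics.median(reported_losses.values())
    suspect = [node for node, loss in reported_losses.items()
               if abs(loss - median_loss) > rel_margin * median_loss]
    return median_loss, suspect

losses = {"node_1": 0.231, "node_2": 0.228, "node_3": 0.235,
          "node_4": 0.229, "node_5": 0.412}   # node_5 is an outlier
consensus_loss, suspect_nodes = check_training_consensus(losses)
print("consensus loss:", consensus_loss, "suspect nodes:", suspect_nodes)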

A fine idea
Now, here is where, at least for neural networks, training of a model can be done in a decentralized manner at potentially no extra cost, or very little extra cost, compared to using a centralized cloud, and I encourage you to pay careful attention to this idea.

Imagine a user elects to train their neural network over 5 separate private clouds in order to meet their decentralization requirements. According to the "no free lunch" law, they will in essence be paying 5x the cost of using just a single cloud to do the same thing (albeit without the decentralization). Here is how this can be overcome:

In neural network theory, it is well known that prediction accuracy can be improved if the same neural network (same training & classification set) is trained with different optimizers, each resulting in a different model; each model can then be weighted according to its accuracy, and the trained models combined to give a more accurate classification.

E.g.: the exact same training & classification set is trained once with SGD, once with rmsprop, and once with Adam, resulting in 3 different models with potentially comparable accuracy and loss, but with entirely different paths taken to reach that loss during training. Given a unique input whose classification we wish to predict, we can observe the predictions of these 3 models. If one model gives a different classification but the other two agree, we may choose to use the classification of the two agreeing models. This system has been shown to be more accurate than using a single model alone for many NN applications.

Now, for our example, if we use 5 different optimizers for 5 unique models, we can increase prediction accuracy. So, when we use CrossCloud, rather than using the same optimizer in each of the 5 clouds, we instead give each cloud a different optimizer, with the training and classification set remaining the same. We can now combine these 5 models to create a single, more accurate predictor. In order to prevent a private cloud node from maliciously using a different optimizer to the one we requested, we may use 2 private clouds per optimizer, for a total of 10 private clouds. Now, instead of paying 5x the cost in fees for decentralization alone, we only pay 2x the cost, since we receive the benefit of 5 optimizers decentralized over 10 clouds. We could even use only 5 clouds, paying the equivalent of a single centralized cloud performing the same 5 training runs, if we implement a way to verify that a model really was trained with the specified optimizer. I believe this will not be hard to do, but until then the "double cloud" method (i.e. one cloud per optimizer acting as verification) lets us pay costs comparable to a single cloud whilst benefiting from decentralization.
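
Here is a minimal Keras sketch of the multi-optimizer ensemble idea. The architecture and the placeholder data are assumptions for illustration; only the "one optimizer per cloud, then majority vote" scheme comes from the description above.

Quote
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

def train_one(optimizer, x_train, y_train, n_classes):
    # Each private cloud trains the same architecture on the same data,
    # differing only in the optimizer it has been assigned.
    model = Sequential([
        Dense(64, activation='relu', input_shape=(x_train.shape[1],)),
        Dense(n_classes, activation='softmax'),
    ])
    model.compile(optimizer=optimizer,
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
    return model

def ensemble_predict(models, x):
    # Majority vote across the per-optimizer models, as described above.
    votes = np.stack([m.predict(x, verbose=0).argmax(axis=1) for m in models])
    return np.array([np.bincount(votes[:, i]).argmax() for i in range(x.shape[0])])

# One optimizer per cloud ('adam' is the Keras spelling of the Adam optimizer).
optimizers = ['sgd', 'rmsprop', 'adam', 'adagrad', 'adadelta']

# Placeholder data purely so the sketch runs end to end; a real user would
# submit their own training and classification sets as described earlier.
x_train = np.random.rand(100, 784).astype('float32')
y_train = np.random.randint(0, 10, size=100)

models = [train_one(opt, x_train, y_train, 10) for opt in optimizers]
print(ensemble_predict(models, x_train[:5]))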

How is the capacity of a private cloud determined, since not all clouds will have the same scaling or the same storage/apps running? Each node broadcasts its available cloud computation. When a user wishes to use private clouds, only nodes with sufficient available cloud computation are randomly selected; a toy example is sketched below.
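
The capacity units and node names here are entirely hypothetical.

Quote
import random

# Each node broadcasts the spare capacity of its private cloud (illustrative units).
broadcast = {"node_A": 32, "node_B": 4, "node_C": 64, "node_D": 16, "node_E": 128}

def select_clouds(broadcast, required_capacity, n_clouds):
    # Only nodes advertising enough spare capacity are eligible; from those, the
    # requested number of private clouds is chosen at random, as described above.
    eligible = [node for node, capacity in broadcast.items() if capacity >= required_capacity]
    if len(eligible) < n_clouds:
        raise RuntimeError("not enough eligible private clouds")
    return random.sample(eligible, n_clouds)

print(select_clouds(broadcast, required_capacity=16, n_clouds=3))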

Important Note: All critical, low bandwidth operations, such as coin transactions or token creations, are verified by all nodes, not just a handful. Each node receives a % of the fee. The fee would be fixed, for example at one millionth of a token. All nodes must agree to perform such computations in order to run.

Ectarium Project - Unique Aspects
Tokenomics
The exact supply is yet to be determined, but may be 100 million tokens or so. All tokens will be created at genesis, upon project launch, and cannot be created again after that. The token distribution model is shown later in this post. Any coins that are lost will be permanently lost. There will be no system designed to "recover" coins, nor any centralized arbitration body that can enforce any type of constitution; constitutions are against the nature of decentralization. Private centralized organizations or DAOs may be created on the network; however, only a majority node vote can determine the direction of the main network.

Each coin will be guaranteed to offer the equivalent of a certain amount of computational power; e.g. 1 coin might always be guaranteed to train MNIST once up to 99% accuracy. Every node must agree to process such transactions for the set payment of 1 coin as a minimum base level in order to remain a node. This isn't the actual value conferred to the coin, just an example of what the minimal value might be. Whatever minimal value a single coin holds, it will be related to computational power in a neural network in some way, and possibly to prediction usage of a "mega AI" (discussed next).

AI
The decentralized node network will be governed by a decentralized AI. The decentralized AI will be run in the private cloud of all nodes, and only trusted nodes can vote via majority consensus to change its architecture or goals. The AI is not a launch goal of the project; it is a much later goal. Until then, node voting will be used for most decisions. The AI, when it is implemented, should be an advanced neural network that can pool a small % of the private cloud resources of all trusted nodes for its calculations, making it potentially powerful for tasks such as tracking malicious nodes.

END IDEA: Ectarium will always expand and evolve to create the best possible system in which a decentralized network of nodes is each connected to its own centralized private cloud, or to whatever scaling system (e.g. quantum computation) proves more efficient than the cloud in the future. Users of the network will choose how many random private cloud nodes they wish to use for their computation, storage, or live app, depending upon their redundancy/decentralization vs. efficiency requirements. By using this system, computation can be partially centralized for efficiency whilst remaining as decentralized as the system or app requires. The network will also have an underlying goal of predicting something incredibly complex via its own weights (e.g. every private cloud may be required to allocate specific resources to this large neural network). The desired outcome is an AI that can determine efficient nodes, fraudulent votes, etc., learning and evolving by itself to keep the network maximally secure. The architecture of this AI, or anything else, can only be changed by majority node vote. The purpose of this AI may involve more than just ensuring node integrity: if a majority of nodes vote for it, multiple AIs may be used to predict and do other things entirely, not related to the network at all. An AI that uses a small % of thousands of private clouds could become potentially huge in terms of computational power. Its direction would be decided by majority trusted-node consensus.

DEMO
Note: Please don't bother generating limitless coins with the faucet. These demo coins are temporary and will be deleted before the official network launches (if ever). They are completely valueless.

This is an extremely basic demo coded by me, a non-professional programmer. When I recruit a full and proper team, the real Ectarium project will not resemble this demo in any way. Nevertheless, it is there just to show a basic idea of what I'm talking about.

It is completely centralized and does not rely upon decentralized node distribution. In the demo you can generate coins to a new public address, and use these generated coins to train an incredibly basic neural network with severe limitations (eg: training set is limited to 2MB in size, epochs are limited to 5, architecture is limited to 256 neurons, single layer classification). You can use your trained network to predict classification of inputs (also limited to 2MB in size).

The demo also contains a wallet section you can use to send money to other public addresses, create your own token, and transfer balance between addresses with both custom tokens and the main network token (main represents the main network).

Lastly, the demo contains a very basic example of a neural network that has been trained to predict the cryptocurrency prices of the top 250 coins on Coinmarketcap 24 hours ahead of time, once per day. Please note that this network isn't advanced; it only relies upon 12 inputs relating to each coin's statistics over the previous week (such as volume traded, % change, etc.). The network should not, and cannot, be used to predict actual prices; rather, it should just serve as an interesting observation of a simple, self-working AI that is learning from its past predictions and subsequent corrections. Whilst it's doing its best to predict what future prices might be, it doesn't have the capability to do so to any meaningful level. At the time of publication (7th August 2018) it only contains 3 days' worth of history, so this demo AI is very immature. I'll be leaving it running for the duration of the demo, however, so maybe in a couple of months' time the predictions might be slightly more accurate. Nevertheless, it's highly, highly unlikely that this demo AI will be able to predict future coin prices to any meaningful degree, given its simplicity, its lack of access to non-public information, and lastly the efficient-market response to any prediction even if it were able to predict anything correctly. Please do not use it for your trading strategies; it almost certainly will not work.

I've coded this demo completely from scratch and by hand, without using any base templates, so bugs will almost certainly exist. In saying that, if you would like to look at it, you may do so at: Ectarium.com

Brief Tutorial of the Demo
To explore the basic functionality of the demo, I've put together this tutorial.

To use the demo you will need to generate a new public:private key pair in a Bitcoin compatible format. You can do this at: https://www.bitaddress.org/. Do NOT use this key pair anywhere else besides the Ectarium.com site and do NOT use an existing key pair you already have. The site is not guaranteed to be secure, so generate a new key pair only for this website and only for testing purposes, do not use it anywhere else.

Faucet
In order to use the demo, you will need some coins. After you've successfully generated your new private and public key pair, click on the "Demo Coin Generator" link in the footer, or click this link: ectarium.com/demofaucet. Enter your public key into the "Public Address" field and click "Generate Tokens". You will receive 150 free tokens for testing purposes. Please note the faucet will not work if the public address has already been used on the website before. Also note that these tokens are worthless and will be deleted after the demo is taken offline, so please don't waste your time hoarding them.

Login
Next, you need to login to the website.

Visit the Login section in the footer menu at Ectarium.com, or visit ectarium.com/login directly. Enter your private key into the login section. You will be logged in with your corresponding public key appearing at the top, and your balance should be 150 coins. Your private key is now encrypted as a cookie in your web browser to allow easy navigation throughout the demo without needing to login again.

Wallet
Click "Wallet" in the footer or visit https://ectarium.com/wallet. If you want to experiment with the wallet, try generating a second private:key pair and sending tokens to it from your main address. Make sure you leave enough tokens to cover the demo transaction fees as indicated. Once you've sent tokens to an unused public key, it will be activated and you can then login with its private key. If you have 100 tokens, try creating your own custom token in the "Create Token" field with your own supply. Send this custom token to other addresses via the Transfer field, just make sure to change the "Token Symbol" field from "main" to the symbol you used when creating your token. You can transfer custom tokens in the same way as the main network token by changing the symbol. Any token symbol different to "main" is a custom token. You can view the balance of both custom tokens and "main" network tokens. Note that all fees are calculated in "main" network tokens, not in custom tokens.

Train
Click "Train" in the footer or visit https://ectarium.com/train.

Step 1)

If you're familiar with Python & NumPy and have access to your own training and classification set via Python (such as the mnist dataset through keras), use these instructions (otherwise skip this part):
For "Training Set", upload any neural-network-compatible training set in 2D numpy array format, saved using the numpy.savetxt() method in Python.

Otherwise, for this example, we will go through how to upload a sample of the mnist dataset using Python & Keras. You can substitute your own training set into these instructions.

In a python shell, first import the mnist dataset from keras, along with numpy:

Quote
from keras.datasets import mnist
import numpy as np

Next, implement a simple function to return the training and classification sets. This function will also normalize the data by dividing it by 255 to increase accuracy. Since the initial data is retrieved as a 3D array, we will reshape it into a 2D array that is compatible with the website. We then call the function and assign the x and y training sets to variables:

Quote
def getdataset():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.astype('float32') / 255.
    x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
    return [x_train,y_train]

x_train,y_train=getdataset()

The x_train and y_train sets are too large to be uploaded to the demo interface which is limited to 2MB maximum. So instead we will just save a small sample of x_train and y_train:

Quote
x_sample = x_train[0:100]
y_sample = y_train[0:100]
x_test = x_train[100:200]
y_test = y_train[100:200]

Let's now save these arrays in numpy.savetxt compatible format that we can use on the website:

Quote
np.savetxt("x.txt",x_sample )
np.savetxt("y.txt",y_sample )
np.savetxt("testset.txt",x_test )
np.savetxt("testresults.txt",y_test )

You should now have the four .txt files on your computer: x.txt, y.txt, testset.txt, testresults.txt. These will usually be stored in the same path you launched python from.

Or if you're not familiar with python or don't have access to your own Training Set in numpy:
The above steps have already been completed for you. Download a small sample of the mnist data (please right click and "save as"):
Training Set (labelled x.txt)
Classification Set (labelled y.txt)
Test Set (labelled testset.txt)
Result Set (labelled testresults.txt)

Now Continue With Step 2:
All 4 of your files should be under 2MB in size, as the demo will limit file uploads to 2MB since neural network training of large networks uses a lot of CPU/GPU power.

Under Training Set upload x.txt
Under Classification Set upload y.txt
Leave all other parameters the same. Choose a custom Model name that is a simple string you'll remember.
Click Begin Training. This will cost you 10 coins.

If the training is successful, you should see your validation loss and accuracy, indicating the performance of your model. If you did everything correctly, your model should reach at least 80% accuracy by the 5th epoch. This is good considering the neural network was trained on only 100 samples instead of 60,000, and for only 5 epochs! The mnist training set is used for recognizing handwritten digits 0-9, so your model should now be able to classify handwritten numbers, even from your own handwriting, with roughly the accuracy that is displayed.

Let's now test our model with our test sets.

Step 3)

If you successfully trained your network, click on "Load Model" in the footer or click ectarium.com/loadmodel to go there directly.

Choose the Model you named, and click "Load Model". Only you will have access to your own models. They are associated with your public key.

Step 4)

Under "Test Set" upload the file "testset.txt" and then click "Prediction Classification". This will show your neural network model a series of handwritten digits (if you've been following the mnist example) that it hasn't seen before. We are curious how well it will classify/predict what numbers they are.

Step 5)

You should now receive a list of predictions. Open your testresults.txt file directly on your computer (don't upload it), making sure to use an editor that can display newlines. How do the predictions compare to the numbers in testresults.txt? You should expect an accuracy similar to what was shown by the 5th epoch when you trained the model.
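
If you'd prefer to compute the agreement rather than eyeball it, here is a small optional sketch. It assumes you save the demo's prediction list to a file named predictions.txt with one number per line, which may not match the demo's exact output format.

Quote
import numpy as np

# True labels saved earlier with np.savetxt in Step 1.
y_true = np.loadtxt("testresults.txt")

# Paste the demo's prediction list into a plain text file, one number per line
# (the exact output format of the demo is assumed here).
y_pred = np.loadtxt("predictions.txt")

print(f"agreement with testresults.txt: {np.mean(y_true == y_pred):.1%}")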

Now that you understand the format required for the training and classification sets, you can upload your own sets and train your own custom and unique neural network models. Note that these neural networks will be very basic due to the demo limitations. Demo limitations are implemented in order to minimize the effects of spam (since the faucet can generate free tokens).

Coinmarketcap Predictor
Click on "CMC Predictor" or visit ectarium.com/cmcpredictor directly. Take a note of the time and date at the top in GMT format, which will be in the future. The 250 coin prices below are predictions of what the coin prices will be on this time and date in the future. These are the results from a constantly updating neural network that I designed and trained in order to predict future CMC prices 24 hours in advance. It will make new predictions once daily, 24 hours into the future, and will supposedly learn from past mistakes. Of course in reality, the predictions will never really be too accurate and cannot predict catastrophic events, but with each day that passes, the network should become slightly more sophisticated given its prior training exposure. You can also train a similar neural network using the Train option if you input the right parameters.

Don't rely on these predictions, they are just used to showcase a custom neural network, and should not be used as an investment strategy.

Roadmap & Funding
How well the Ectarium project progresses will depend upon its funding & development. As funding milestones are reached, further funding will be halted until corresponding development milestones are reached.

I'm going to have two "Donation" stages. If this project actually proceeds to launch, all donations of 0.1 ETH or more per donation will receive a % of total token supply corresponding to the donation amount, with a vesting lock of 3 months after project launch. Donations below 0.1 ETH will not receive anything, but I thank you nevertheless for the help.

Not sure what BCT policy is if I need to request you to PM me regarding donations, but until I'm told otherwise you can donate ETH here: 0x8EA2CDA6ba94AD29F8bE55e104bd96F7E1540d76

Donations are currently open and I will gift 0.1% of token supply per $200 worth of donations to all corresponding ETH addresses if the project launches, with a vesting lock. I cannot change the receiving addresses if they're hacked, so make sure your address is secure before you donate. This rate is capped at 2.5% token supply total for all donations, no more donations will be accepted above this amount. Donation amount will be calculated based upon the ETH price at the end of the donation period.

Once $5000 in ETH is reached (which you can verify with etherscan), I'll cease to accept donations for round 1 of the project and will instead focus on the round 1 project milestones.

Please find the roadmap's funding and development milestones below. Future milestones are subject to change. The token supply % promised for past donations will not change; you can be certain it's fixed and will be honored, provided the project actually launches. I can't refund donations if the project fails, since I'll be using these donations for project development. In the unlikely event the project fails, I'll release all code, open source, to the community for free. However, I'm betting against this happening and have already made my own personal investments in the project, so I really do want to see it succeed.

Personal funding only (PREVIOUS STAGE):
All funding is out of my own pocket.
Development milestones:
 - Release BCT post of idea
 - Release basic centralized demo of idea, to be replaced completely when proper team is later put together

Pre-seed Donations Round 1 (CURRENT STAGE) (up to 2.5% token equity only) - $200 per 0.1% token equity. Project valuation: $200,000 (vesting lock - 3 months after ICO)
Aim to raise: $5000
Development milestones after successful fundraising:
- Find proper partner also interested in equity, begin coding of actual project.
- Implement very basic decentralized node network prototype.
- Transfer prototype to the cloud, enable scaling capabilities (with a limit for testing purposes)
- This will only be very rudimentary and basic, however it should have some decentralized aspects which can be showcased.

Pre-seed Donations Round 2 (up to 5% token equity only) - $1000 per 0.1% token equity. Project valuation: $1,000,000 (vesting lock - 3 months after ICO)
Aim to raise: $50,000
Development milestones after successful fundraising:
- Implement CrossCloud prototype
- Polish prototype, ensure automated cloud testing. Do a minor code audit. Recruit nodes for testing. Launch pre-alpha test.  
- Work on web interface and client
- If all is working as intended, polish and showcase web interface & client to attract seed investors
- Do preliminary basic graphics, basic whitepaper, basic technical whitepaper
- Incentivize hiring of a professional team interested in equity with the above used as a showcase of the project's potential
- Follow technical implementation with the new team. Release advanced graphics, advanced whitepaper, advanced technical whitepaper
- Completely polish the protocol, release public demo of pre-alpha
- Release first professional version of website
- Incorporate as company for ICO legality and corporate account
- Retain attorney for ICO & other general compliance advisory
- Launch Private sale

Private sale seed investors (up to 10% token equity only) - $5,000 per 0.1% token equity. Project valuation: $5,000,000 (vesting lock - 6 months after ICO)
Aim to raise: $500,000
Development milestones after successful fundraising:
- Do full, professional code-audit of the pre-alpha. Release alpha version if possible and development funds do permit.
- Use 20% of raised Private sale funds for polished/perfect whitepaper, polished/perfect graphics, and comprehensive, fully professional website ready for mainstream promotion. Make sure whitepaper is translated into various languages.
- Use 50% of raised Private sale funds to launch marketing campaign for ICO & the product
- Use last 30% of raised Private sale funds for on-going expenses & alpha development
- Alpha must be fully and professionally implemented for mainstream attraction, ease of use, and lack of bugs
- Launch ICO with a professional website, professional team, professional whitepaper, professional presentation, and most importantly a working alpha of the product that can all be showcased to ICO investors

ICO sale (67.5% token equity available) - $30,000 per 0.1% token equity. Intended final valuation after ICO & product release: $30,000,000 (or so, subject to change). Fair ICO distribution model with max cap.
(May run Pre-ICO, with 30% discount, vesting lock 6 months post-ICO, or may not. To be determined later)
Aim to raise: $20.25m
 - Use ICO funds to launch network, expand and use funds in accordance with fund distribution model

TOKEN ALLOCATION:
 - 85% as per above (investors)
 - 3% reserve (possible burning, or milestone sell-off for on-going project funding.. wait and see)
 - 2% bounties, advertising, gifts, airdrops, equity payments
 - 10% team

ICO FUND DISTRIBUTION:
 - 50% development
 - 25% founders/team
 - 20% marketing
 - 5% reserve, foundation

Future Development Post-ICO
Expansion upon core ideas, growing the protocol, implementation of the global, self-monitoring AI, and moving toward a beta release, and eventually official stable release.

Future milestones subject to change.

Currently seeking: Team Members (please read intro) and Donations (please read Roadmap & Funding section).

Watanabe Blue (OP)
August 08, 2018, 08:21:25 AM
 #2

(reserved)

drmilind2004 — Member (Activity: 280, Merit: 28)
August 08, 2018, 08:39:32 AM
 #3

(Reserved)
Watanabe Blue (OP)
August 10, 2018, 02:35:20 AM
 #4

Well, that's a little underwhelming: not a single piece of feedback was received, not even on the Coinmarketcap future price predictor (https://ectarium.com/cmcpredictor). Interestingly, the price predictor didn't do well in predicting the crash 2 days ago, because its limited prior training data (only 3 days' worth) was less volatile; after the crash, however, it quickly corrected and made some bolder predictions that look reasonable. It will be interesting to see if it learns from this experience.

I guess I'll continue to work and fund this alone for now. Donation options remain open, and I'll still be searching for partners, if anyone is ever interested in contributing.


Watanabe Blue (OP)
August 13, 2018, 04:33:04 PM
 #5

Could a mod please move this to the Altcoin Discussion/Announcements section? Perhaps more luck may be found there.

drmilind2004
August 13, 2018, 07:55:22 PM
 #6

Quote from: Watanabe Blue
Could a mod please move this to the Altcoin Discussion/Announcements section? Perhaps more luck may be found there.

Or, maybe, you could make an Announcement post on that forum. Study the way the other announcement posts write their Subject Title, and use a similar format. You'll certainly find better traction over there.

Good Luck with Your Post, and Your Project!