ooo this is pretty cool, <3 me some Tendermint, i think its one of the more interesting validation methods being developed right now
i need to get me some more info on this one, still in testnet i see? so not too late for us plebs to maybe become validators?
Very exciting to see this here finally, as i have heard about it here and there in the Cryptosphere over the last year or so
It's the right time to become a validator right now. We are in between euler-4 and euler-5 (two testnets). They are both incentivised, meaning that if you decide to run a validator during the coming testnet, you will receive part of the inflation when the testnet launches. We also have some distribution games planned for the validators very soon.
We are preparing a launch-kit repo, which will explain all those things within 1-2 weeks. So I advise you to follow our TG channel. I will also try to post updates here, of course!
BTW, I advise you to check out how big validating on tendermint (and beyond) has become over the last year. Not only are a lot of the projects alive, developing and constantly releasing, but many are actually doing very useful things for the industry in terms of R&D =)
@dev, I'm compiling your main.tex file but it is giving me an error on line 45.
I even tried xelatex and lualatex, but to no avail.
I'm using TeXShop Version 4.44 (4.44)
Can you share the final output instead? Want to see what you have here. Thanks.
Thanks for pointing this out! I must have missed a line somewhere during one of the latest commits. Will debug this.
There is a good reason we aren't sharing the paper as a .pdf - it is still a work in progress and we are still changing some parameters, which can be vital to the economy (for example, even now there is a pull request in the repo with fixes, so the main.tex in the /cyber repo is already slightly outdated).
We don't want to mislead people. That's the reason we are still having people compile it manually - to avoid any confusion. Although, it is over 90% done, I presume.
Once again, sorry for this. Will look into what's broken
PS. If you're really eager, you can always use the "rich text" version in Overleaf. I know it's a pain in the ar... thanks for understanding
What is the difference between using cyber and google in the browser search bar? You said the idea is amazing, but is there really an advantage to using cyb? Have you tested its usability?
Great question!
I guess this means we need to do a FAQ to point out the main differences / advantages.
Come to think of it, we have done some already:
Some are described in this readme - please check it out.
Eh. I guess I can be less lazy and upload the pic. Here we go:
Also, we have described some of the possibilities that arise whilst using cyber in the WP (as stated above, it is still a work in progress and needs to be compiled manually... but while I'm at it, I guess I might as well copy it here too):
This is going to be a long text. So bear with me!
We assume that the proposed algorithm does not guarantee high-quality knowledge by default. Just like a newborn, it needs to acquire knowledge to develop further. The protocol itself provides just one simple tool: the ability to create a cyberlink with a certain weight between two content addresses.
Analysis of the semantic core, behavioural factors, anonymous data about the interests of agents, and other tools that determine the quality of search can be achieved via smart contracts and off-chain applications, such as web3 browsers, decentralized social networks and content platforms. Therefore, it is the aim of the community and agents to build the initial knowledge graph and to maintain it, so that it can provide the most relevant search results.
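To make that single primitive concrete, here is a minimal Python sketch of cyberlinks between content addresses and the adjacency structure a knowledge graph could be built from. The class and method names, and the additive weighting, are my own illustration, not the actual cyberd implementation:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy in-memory knowledge graph: weighted links between content addresses (CIDs)."""

    def __init__(self):
        # out_links[from_cid][to_cid] -> accumulated link weight
        self.out_links = defaultdict(lambda: defaultdict(float))

    def cyberlink(self, from_cid: str, to_cid: str, weight: float = 1.0):
        """The protocol's one simple tool: link two CIDs with a certain weight."""
        self.out_links[from_cid][to_cid] += weight

    def neighbors(self, cid: str):
        """Outgoing links from a CID, most heavily weighted first."""
        return sorted(self.out_links[cid].items(), key=lambda kv: -kv[1])

graph = KnowledgeGraph()
graph.cyberlink("QmQuery", "QmAnswerA", weight=2.0)
graph.cyberlink("QmQuery", "QmAnswerB", weight=1.0)
graph.cyberlink("QmQuery", "QmAnswerA", weight=0.5)
# QmAnswerA (total weight 2.5) now ranks above QmAnswerB (1.0)
print(graph.neighbors("QmQuery"))
```

Everything else - ranking, semantics, behavioural analysis - is then built on top of this one primitive by apps and agents.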
Generally, we distinguish three types of applications for knowledge graphs:
- Consensus apps. Can be run at the discretion of the consensus computer by adding intelligent abilities
- On-chain apps. Can be run by the consensus computer in exchange for gas
- Off-chain apps. Can be implemented by using the knowledge graph as an input within an execution environment
The following imaginable list of apps can combine the above-mentioned types:
Web3 browsers: In reality, browser and search are inseparable. It is hard to imagine the emergence of a full-blown web3 browser that is based on web2 search. Currently, there are several efforts to develop browsers around blockchains and distributed tech. Among them are Beaker, \sout{Mist}, Brave, and Metamask. All of them struggle with trying to embed web2 into web3. Our approach is a bit different. We consider web2 to be the unsafe subset of web3. So we are developing the web3 browser Cyb, showcasing the cyber approach to answering questions better and delivering content faster.
Programmable semantics: Currently, the most popular keywords in the gigantic semantic core of Google are keywords of apps such as YouTube, Facebook, GitHub, etc. However, the developers of those successful apps have very limited ability to explain to Google how to structure search results in a better manner. The cyber approach gives this power back to developers. Developers are now able to target specific semantic cores and index their apps as they wish.
Search actions: The proposed design enables native support for blockchain (and tangle-alike) assets related activity. It is trivial to design applications which are (1) owned by the creators, (2) appear correctly in the search results and (3) allow a transactable action, with (4) provable attribution of a conversion to a search query. e-Commerce has never been this easy for everyone.
Off-line search: IPFS makes it possible to easily retrieve a document without a global internet connection. cyberd itself can be distributed via IPFS. This creates the possibility for ubiquitous, off-line search!
Command tools: Command-line tools can rely on relevant and structured answers from a search engine. Practically speaking, the following CLI tool is possible to implement:
`cyberd earn using 100 GB`
Enjoy the following predictions:
- apt install go-filecoin: 0.001 BTC p/ month p/ GB
- apt install siad: 0.0007 BTC p/ month p/ GB
- apt install storjd: 0.0005 BTC p/ month p/ GB
According to the most desirable prediction, I decided to try `mine go-filecoin -limit 107374182400`
```
git clone ...
Building go-filecoin
Starting go-filecoin
Creating a wallet using @xhipster seed
Your address is ....
Placing bids ...
Waiting for incoming storage requests ...
```
The search from within CLI tools will inevitably create a highly competitive market of a dedicated semantic core for robots.
Autonomous robots: Blockchain technology enables the creation of devices that can manage digital assets on their own.
If a robot can store, earn, spend and invest - it can do everything you can do
What is needed is a simple yet powerful state reality tool with the ability to find particular elements. \code{cyberd} offers a minimalistic, but continuously self-improving data source, which provides the necessary tools for programming economically rational robots. According to the \linkgreen{https://github.com/first20hours/google-10000-english}{top-10,000 English words}, the most popular word in the English language is the definite article \code{the} - that is, a pointer to a particular item. This fact can be explained as follows: particular items are of the most importance to us. Therefore, the nature of our current semantic computing is to find unique things. Hence, the understanding of unique things is essential for robots too.
Language convergence: A programmer should not care about what language an agent will be using. We don't need to know what language the agent is performing their search in. The entire UTF-8 spectrum is at work. The semantic core is open, so competition for answering queries can become distributed across different domain-specific areas, including semantic cores for various languages. This unified approach creates an opportunity for cyber•Bahasa. Since the dawn of the Internet, we have observed a process of rapid language convergence. We use truly global words across the entire planet, independently of our nationality, language, race, name or Internet connection. The dream of a truly global language is hard to deploy because it is hard to agree on what means what. However, we have the tools to make this dream come true. It is not hard to predict that the shorter a word, the more powerful its cyber•rank will become. A global, publicly available list of symbols, words and phrases, sorted by cyber•rank with corresponding links provided by cyberd, can become the foundation for the emergence of a genuinely global language everybody can accept. Recent scientific advances in machine translation are breathtaking, but meaningless to those who wish to apply them without a Google-like scale of trained models. The proposed cyber•rank offers precisely this.
Our approach to the economics of a consensus computer is that agents will pay for gas as they wish to execute programs. An OpenCypher-like language can be provided to query the knowledge graph right from within smart contracts.
We can envision the following smart contracts that could be built on top of a simple relevance machine with the support of an on-chain WASM VM or CUDA VM:
Self-prediction: A consensus computer can continuously build a knowledge graph on its own, predicting the existence of cyberlinks and applying these predictions to its state. Hence, a consensus computer can participate in the economic consensus of the cyber protocol.
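As one illustration of such self-prediction - using the classic common-neighbours heuristic from link-prediction literature, which is my own stand-in and not the actual cyber protocol - a node could score candidate cyberlinks by how many neighbours two CIDs already share:

```python
from collections import defaultdict

def predict_links(links, threshold=2):
    """Score candidate links by how many neighbors two CIDs already share.

    links: set of undirected pairs, each a frozenset of two CIDs.
    Returns {(a, b): common_neighbor_count} for unlinked pairs at or above threshold.
    """
    neigh = defaultdict(set)
    for pair in links:
        a, b = tuple(pair)
        neigh[a].add(b)
        neigh[b].add(a)
    candidates = {}
    nodes = sorted(neigh)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if frozenset((a, b)) in links:
                continue  # already linked, nothing to predict
            common = len(neigh[a] & neigh[b])
            if common >= threshold:
                candidates[(a, b)] = common
    return candidates

# "a" and "b" both link to "x" and "y", so links a-b and x-y are predicted.
links = {frozenset(p) for p in [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]}
print(predict_links(links))  # {('a', 'b'): 2, ('x', 'y'): 2}
```

A consensus computer applying such predictions to its own state would then treat high-scoring candidate links as if agents had submitted them.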
Universal oracle: A consensus computer can store the most relevant data in a key-value storage, where the key is a CID and the values are the bytes of the actual content. This can be achieved by making a decision every round on which CID values the agents want to prune and which values they wish to apply, based on the utility measure of content addresses within the knowledge graph. To compute the utility measure, validators check the availability and the size of the content for the top-ranked content addresses within the knowledge graph, then weigh the size of each CID against its rank. The emergent key-value storage will be writable only by the consensus computer, not by agents, but its values could be used in programs.
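A toy sketch of that pruning decision in Python. The concrete utility formula here - rank divided by size - is my own assumption for illustration; the text only says that size and rank are weighed against each other:

```python
def select_for_oracle(cids, byte_budget):
    """Greedily keep the highest-utility CIDs within a storage budget.

    cids: dict cid -> (rank, size_bytes). Assumed utility = rank / size_bytes,
    so small, highly ranked content is preferred for on-chain storage.
    Returns (kept, pruned) lists of CIDs.
    """
    ranked = sorted(cids, key=lambda c: cids[c][0] / cids[c][1], reverse=True)
    kept, used = [], 0
    for cid in ranked:
        size = cids[cid][1]
        if used + size <= byte_budget:
            kept.append(cid)
            used += size
    pruned = [c for c in cids if c not in kept]
    return kept, pruned

cids = {
    "QmHot":  (0.9, 100),     # high rank, small size: utility 0.009
    "QmBulk": (0.8, 10_000),  # high rank but huge: utility 0.00008
    "QmMeh":  (0.1, 50),      # low rank, tiny: utility 0.002
}
print(select_for_oracle(cids, byte_budget=200))  # (['QmHot', 'QmMeh'], ['QmBulk'])
```

In the actual design this selection would be made by validators each round, not by a single function call.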
Proof of location: It is possible to construct cyberlinks with 'proof of location' based on remarkable existing protocols such as Foam. Consequently, a location-based search can also become provable, if web3 agents mine triangulations and attach a 'proof of location' to every linked chain.
Proof of web3-agent: Agents are a subset of content addresses with one fundamental property: a consensus computer can prove the existence of private keys for content addresses for a subset of the knowledge graph, even if those addresses have never transacted on their chain. Therefore, it is possible to compute a lot of provable essence on top of that knowledge; e.g., any inflation can be distributed to addresses that have never transacted in the cyber network but have the provable link required.
Motivation for read requests: It would be great to create cybernomics not only for 'write' requests to consensus computers, but for 'read' requests too. Thus, read requests can become orders of magnitude cheaper, but still guaranteed. Read requests to a search engine can be served by a second tier of nodes which earn CYB tokens within state channels. We are considering implementing state channels based on HTLC and proof of verification, which unlocks the amount of tokens earned for already-served requests.
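The hash-lock half of an HTLC can be sketched in a few lines. This is a simplified stand-alone illustration assuming SHA-256 as the lock; the real channel logic with time-outs, signatures and token accounting is of course more involved:

```python
import hashlib
import os

def new_lock():
    """Payer side: pick a secret preimage and publish its SHA-256 hash as the lock."""
    preimage = os.urandom(32)
    lock = hashlib.sha256(preimage).hexdigest()
    return preimage, lock

def unlock(lock: str, preimage: bytes) -> bool:
    """Payee side: revealing the correct preimage releases the locked tokens."""
    return hashlib.sha256(preimage).hexdigest() == lock

preimage, lock = new_lock()
assert unlock(lock, preimage)            # correct preimage releases the payment
assert not unlock(lock, os.urandom(32))  # a wrong preimage does not
```

In the read-request scheme above, the revealed preimage would serve as the proof that unlocks the tokens earned for already-served requests.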
Prediction markets on link relevance: We can push this idea further by ranking the knowledge graph based on a prediction market on link relevance. An app that allows betting on link relevance can become a unique source of truth for the direction of terms, as well as motivate agents to submit more links.
This is surely not an exhaustive list of all the possible applications... but a very exciting one indeed.
And yes, we have tested it, of course. You are welcome to test the browser too. Please bear in mind that it is not our priority before the chain itself is fully functioning.
You can check out the current alpha of the browser here (please be aware that this is very raw and might not work in places).
Once again, the browser is not our main goal right now. We will start concentrating on it when the mainnet is launched, as the chain itself is way more important at this stage.