IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
March 31, 2017, 05:59:24 AM
Quote from: iamnotback
@IadixDev, let's stop talking and instead produce some specifications.

I already have my specification.

Quote from: iamnotback
Did you provide a repository link?

The git is there: https://github.com/iadix/purenode

The high-level design pattern is this: http://blog.iadix.com/web/images/wp/diag.svg (now there should be one more module added at the bottom, next to the RPC server and block explorer, for the raytracing). There is some info on the site too: http://iadix.com/ico/blogs/en

I don't write a lot of documentation, as I'm the only dev anyway, and I keep the code simple enough that I can do without extensive documentation; I don't spend too much time in OpenOffice writing PDFs.

The idea is to combine this: https://github.com/iadix/purenode/blob/master/purenode/main.c
and this: https://github.com/iadix/purenode/blob/master/export/iadix.conf
with script, to get the full node definition in a high-level language: all the parts that are relevant to define a specific coin out of block/tx parameters, plus the event handling for the p2p network.

The HTTP RPC interface binds RPC methods automatically to the module's exported function names (the same trick serves the /api/method HTTP URLs of the block explorer), so new methods can be added just by adding exported functions to the module.
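A minimal sketch of that name-based binding (illustrative only: module_t and get_module_exported_symbol() are assumed names, not the actual purenode dispatcher; the method signature mirrors the exported methods shown further down the thread):

Code:
/* sketch: dispatch a json/rpc call by looking up the "method" string among
   the module's exported functions, so exported function name == rpc method */
typedef int (*rpc_method_fn)(mem_zone_ref_const_ptr params,
                             unsigned int rpc_mode,
                             mem_zone_ref_ptr result);

int dispatch_rpc(module_t *mod, const char *method,
                 mem_zone_ref_const_ptr params, mem_zone_ref_ptr result)
{
	rpc_method_fn fn;

	fn = (rpc_method_fn)get_module_exported_symbol(mod, method); /* assumed helper */
	if (fn == PTR_NULL)
		return 0; /* unknown method, reported as a json/rpc error */

	return fn(params, 1 /* rpc mode */, result);
}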
iamnotback
March 31, 2017, 06:40:41 AM
I like this post. Will want to contemplate it more at another time when I am less sleepy.

Quote from: Re: How good is Decred?
By having one's coins locked, it also requires people that wish to vote on Decred's future to have "skin in the game". It's not a perfect system, but it does incentivize people to vote in Decred's best interest.

Quote
My fundamental idea is that as long as one combines several distinct aspects of a cryptocurrency into one single mechanism that has to do everything, the system becomes either insecure or centralized, and most of the time both. The different aspects to resolve are:

1) finding consensus over valid first spendings
2) securing this consensus (immutability) once it is firmly established
3) coin creation and seigniorage
4) governance of protocol modifications and evolution
5) rewarding special people who do 1, 2 and/or 4, eventually with 3

The reason that combining these aspects into one single dynamics is problematic is that you get complicated couplings between them. In my opinion, there are two no-gos: 4) and 5).

4) is orthogonal to immutability. I know that, given that cryptocurrencies are heavily invaded by software people, they have a hard time thinking of something that shouldn't evolve. But cryptocurrencies are not software, any more than mathematics is software, even though both domains rely heavily on software to have practical implementations. The "software implementing a protocol" and "the protocol itself" are two totally distinct things.

The problem with 5) is that you get strategies for optimizing the obtaining of the rewards, which can lead to a serious incentive to move away from the right consensus resolution or the right securing of the obtained consensus. So, while it sounds logical that people "helping the system come to consensus" or "helping the consensus be secured" are rewarded for their efforts, the very fact of instituting *mechanical* rewards opens you to strategies of reward maximization, exclusion of less competitive (although honest) contributors to the system, centralization and corruption. For instance, the very idea that proof of stake should be rewarded is somewhat strange: the higher your stake in the system, the higher the importance, for you, that the system is secure and comes to the right consensus, because in the end that determines the market value of the stake!

Finally, another problem is that the rewards are inside the system, while the true rewards depend on market valuation. The market valuation can make rewards as foreseen inside the system become crazily high, so that people are going to kill their mother for those rewards, or ridiculously small, so that people don't do it "for such small rewards". Reward markets will emerge which will "regulate" aspects of the system that weren't foreseen, and will induce strategies to manipulate said market, like the spamming of blocks in bitcoin.

I think that as long as people are going to go that way, thinking that there should be governance (voting and central protocol power), that there should be rewards, and that all of this should be combined in one single system, one will always end up with a broken protocol in which a few people pump value from the gullible into their pockets and then dictate the fate of the system. Rewards are a bitch: you fight and cheat for them. You will find that one will try to patch the difficulties with more and more involved mechanisms, each of them patching a previously recognized problem and opening 20 doors for new strategies to game the system. It can be fun experimentation, and as we've seen, bitcoin ran successfully for more than 8 years before finally seriously hitting the errors in its design.

So it can take a long time before the strategies that game the system, because of its inherent rewards design, become visible and tangible. The security and honesty of bitcoin now come essentially from the investment in ASICs that some Chinese guy has made and doesn't want to lose immediately. The price of the hardware is what still keeps bitcoin floating.
iamnotback
March 31, 2017, 06:00:18 PM Last edit: March 31, 2017, 07:30:29 PM by iamnotback
On the issues with and ramifications of centralization:

Quote
Of course, the smaller the number of votees needed to do this, the higher the probability of de facto centralization.

Disagree. Because the cost of voting is not free, i.e. democracy is centralized. Actually a small number of highly knowledgeable whales who have conflicting priorities (such that a stalemate is attained, which appears to be the case now with Bitcoin, as you had previously posited) is more decentralized, because these whales are not manipulable into voting as a colluding bloc, except perhaps to defend the status quo.

And this is precisely the conceptual reason why Bitcoin may still be decentralized. But the danger is that over time the paradigm is winner-take-all, due to either the marginal utility of economies-of-scale (within the Bitcoin protocol) being perfectly constant, and/or the inability of the Bitcoin small-block, non-shrinking (money supply, i.e. not a deflationary supply) system to adapt to the knowledge age as finance becomes inexorably less efficacious w.r.t. knowledge value (i.e. the NWO will be able to bring those whales into submission to the higher authority of an externality). I would much prefer, if possible, a protocol wherein the power of everyone is essentially nil. I think I may have such a design, but I am highly skeptical, as I should be.

Quote
By the way... our pools are NOT reward sharing as you seem to think. All they do is vote according to a user's preference so that a user may remain offline if he/she chooses to use a pool. Individuals still retain control of 100% of their funds.

I am operating at a higher level of meta-economics wherein your "preferences" differentiation is afaics irrelevant. The users who don't align their preferences with the majority will lose rewards, because the majority can censor every block of the minority. Now go back to the point that voting (i.e. "preferences") is not cost-free and thus is manipulable. Delegating preferences means users can't be bothered, i.e. the cost of voting is not free. Thus they will not be expert. They will not be able to organize, because they can be divided and conquered over many different pet "preferences" (analogous examples from current democracies include gay rights, abortion, etc.). They can even be incited via propaganda to diffuse their efforts into a multitude of preferences. This is why @dinofelis is correct when he states that instead the rules must be hard-coded immutably into the protocol.
IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
March 31, 2017, 06:57:09 PM Last edit: March 31, 2017, 08:35:15 PM by IadixDev
For me, where I want to get to is a definition of a node that would look like this:

Code:
{
	"node": {
		"name": "purenode",
		"seed_node_host": "iadix.com",
		"seed_node_port": 16714,
		"p2p_port": 16819,
		"magic": 0xD9BEFECA,
		"version": 60018,
		"sign_mod": "ecdsa",
		"pubKeyVersion": 0x19,
		"staking": {
			"staking_kernel": "stake_pos3",
			"target spacing": 64,
			"target timespan": 960,
			"stake reward": 150000000,
			"limit": 0x1B00FFFF
		},
		"mining": {
			"target spacing": 64,
			"target timespan": 960,
			"limit": 0x1E0FFFFF,
			"reward": 10000000000000,
			"last_pow_block": 200,
			"paytxfee": 10000
		},
		"genesis": {
			"version": 1,
			"time": 1466419085,
			"bits": 0x1e0fffff,
			"nonce": 579883,
			"InitialStakeModifier": 0,
			"InitialStakeModifier2": 0
		},
		"initialize": function () {}, // initialize the app (constructor), e.g. load the required modules, allocate memory
		"start": function () {},      // start the application, e.g. open ports, initialize the protocol and connect nodes
		"stop": function () {},       // stop the application, e.g. close the port and all connections
		"free": function () {},       // free resources (destructor), unload all modules and free memory
		"message_handlers": {         // P2P message handlers
			// send a getdata message to the node sending the inv
			{"msg": {"cmd": "inv",   function (node, inv_data)   { %MyNode%->send_getdata(node, inv_data); }}},
			// call the %MyNode% module method on the data, or implement the block validation protocol
			{"msg": {"cmd": "block", function (node, block_data) { %MyNode%->validate_block(node, block_data); }}}
		},
		// define the rpc interface for the wallet
		"rpc_wallet": {
			"module": "rpc_wallet",
			"rpc_port": 16820,
			"index": "/wallet",
			"page file": "wallet.html"
		},
		// define the http api for the block explorer
		"block_explorer": {
			"module": "block_explorer",
			"base path": "/api/",
			"page file": "blocks.html"
		}
	}
}
To get pedantic about design patterns: the blockchain protocol is technically already an application layer, and in fact it contains two layers in itself, the message layer and the actual blockchain layer. The message layer can be seen as the transport layer; it encapsulates the message data, and it's the part handled by the node module, which reads packets from the network and parses the message "transport" layer. The protocol module is there to deal with the blockchain-layer data format: how block headers, transactions and other objects are represented on the network to be transmitted between two nodes. The protocol module contains template-like type classes to deserialize the message data from the node (transport layer), with runtime type definitions, such that the node can deserialize a message independently of the protocol definition. The whole format of the blockchain-layer data can be changed without needing to recompile the node module.

The code of the node module is not on the git, but it goes like this for reading a new message from the network: https://github.com/iadix/purenode/blob/master/protocol_adx/protocol.c#L1370

Code:
OS_API_C_FUNC(int) new_message(const struct string *data, mem_zone_ref_ptr msg)
{
	mem_zone_ref  payload_node = { PTR_NULL };
	struct string pack_str = { PTR_NULL };
	size_t        cnt = 0;
	size_t        elSz = 0;
	int           ret;

	/* pick the runtime type template matching the message command name */
	if (!strncmp_c(&data->str[4], "version", 7))
		make_string(&pack_str, "{(\"payload\",0x0B000010) (0x02)\"proto_ver\" : 0,\"services\" : 0, \"timestamp\" : 0, (0x0B000040)\"their_addr\":\"\", (0x0B000040)\"my_addr\":\"\",\"nonce\":0,(0x0B000100)\"user_agent\":\"\", (0x02)\"last_blk\":0}");
	else if (!strncmp_c(&data->str[4], "ping", 4))
		make_string(&pack_str, "{(\"payload\",0x0B000010) \"nonce\":0}");
	else if (!strncmp_c(&data->str[4], "pong", 4))
		make_string(&pack_str, "{(\"payload\",0x0B000010) \"nonce\":0}");

	/* load the template into a typed tree */
	ret = tree_manager_json_loadb(pack_str.str, pack_str.len, &payload_node);
	free_string(&pack_str);
	if (!ret) return 0;

	/* fill the message header fields from the raw packet */
	tree_manager_set_child_value_str(msg, "cmd", &data->str[4]);
	tree_manager_set_child_value_i32(msg, "size", (*((unsigned int *)(&data->str[16]))));
	tree_manager_set_child_value_i32(msg, "sum", *((unsigned int *)(&data->str[20])));
	tree_manager_set_child_value_i32(msg, "cnt", cnt);
	tree_manager_set_child_value_i32(msg, "elSz", elSz);
	tree_manager_node_add_child(msg, &payload_node);
	release_zone_ref(&payload_node);
	return 1;
}
The code from the node module:

Code:
if (tree_manager_create_node("emitting", NODE_BITCORE_MSG, &msg))
{
	/* create the message type template based on the message command */
	new_message(get_con_lastline(node_con), &msg);
	tree_manager_node_add_child(node, &msg);
	tree_manager_set_child_value_i32(&msg, "recv", pre_read);
	tree_manager_set_child_value_i32(&msg, "done", 0);
	release_zone_ref(&msg);
}
To unserialize the message based on the runtime type definition:

Code:
OS_API_C_FUNC(int) read_node_msg(mem_zone_ref_ptr node)
{
	mem_zone_ref msg = { PTR_NULL };
	struct con   *node_con;
	unsigned int size, sum;
	unsigned int checksum[8], checksum1[8];

	if (!tree_manager_get_child_value_ptr(node, NODE_HASH("p2p_con"), 0, (mem_ptr *)&node_con))
		return 0;

	if (tree_manager_find_child_node(node, NODE_HASH("emitting"), NODE_BITCORE_MSG, &msg))
	{
		if (!tree_manager_get_child_value_i32(&msg, NODE_HASH("size"), &size))
			size = 0;

		tree_manager_get_child_value_i32(&msg, NODE_HASH("sum"), &sum);

		/* message checksum: double sha256 of the payload */
		mbedtls_sha256(get_con_lastline(node_con)->str, size, (unsigned char *)checksum1, 0);
		mbedtls_sha256((unsigned char *)checksum1, 32, (unsigned char *)checksum, 0);

		if (checksum[0] == sum)
		{
			/* unserialize the message payload based on the template */
			unserialize_message(&msg, get_con_lastline(node_con)->str, PTR_NULL);
			tree_manager_set_child_value_i32(&msg, "done", 1);
			/* add the parsed message to the node's emitted message queue */
			queue_emitted_message(node, &msg);
			release_zone_ref(&msg);
			con_consume_data(node_con, size);
		}
	}
	return 1;
}
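(As in Bitcoin's p2p protocol, the checksum above is the double SHA-256 of the payload, whose first 32-bit word is compared against the "sum" field parsed from the message header. A minimal self-contained sketch with mbed TLS, with a made-up payload for illustration:)

Code:
#include <stdio.h>
#include <string.h>
#include "mbedtls/sha256.h"

/* compute the 4-byte message checksum: first word of sha256(sha256(payload)) */
static unsigned int payload_checksum(const unsigned char *payload, size_t len)
{
	unsigned char h1[32], h2[32];
	unsigned int  sum;

	mbedtls_sha256(payload, len, h1, 0); /* first pass (0 = sha256, not sha224) */
	mbedtls_sha256(h1, 32, h2, 0);       /* second pass */
	memcpy(&sum, h2, 4);                 /* first 32 bits, as compared in read_node_msg() */
	return sum;
}

int main(void)
{
	const unsigned char payload[] = "example payload";
	printf("checksum: 0x%08x\n", payload_checksum(payload, sizeof(payload) - 1));
	return 0;
}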
And the actual message processing, which is the part that should become scripted, is in here: https://github.com/iadix/purenode/blob/master/purenode/main.c#L1419

Code:
int process_node_messages(mem_zone_ref_ptr node)
{
	mem_zone_ref     msg_list = { PTR_NULL };
	mem_zone_ref_ptr msg = PTR_NULL;
	mem_zone_ref     my_list = { PTR_NULL };

	if (!tree_manager_find_child_node(node, NODE_HASH("emitted queue"), NODE_BITCORE_MSG_LIST, &msg_list))
		return 0;

	/* walk the node's emitted message queue */
	for (tree_manager_get_first_child(&msg_list, &my_list, &msg); ((msg != NULL) && (msg->zone != NULL)); tree_manager_get_next_child(&my_list, &msg))
	{
		char         cmd[16];
		mem_zone_ref payload_node = { PTR_NULL };
		int          ret;

		if (!tree_manager_get_child_value_str(msg, NODE_HASH("cmd"), cmd, 12, 16))
			continue;

		tree_manager_find_child_node(msg, NODE_HASH("payload"), NODE_BITCORE_PAYLOAD, &payload_node);

		ret = handle_message(node, cmd, &payload_node);
		tree_manager_set_child_value_i32(msg, "handled", ret);
		release_zone_ref(&payload_node);
	}
	/* drop handled messages and messages older than 100 seconds */
	tree_remove_child_by_member_value_dword(&msg_list, NODE_BITCORE_MSG, "handled", 1);
	tree_remove_child_by_member_value_lt_dword(&msg_list, NODE_BITCORE_MSG, "recvtime", get_time_c() - 100);

	release_zone_ref(&msg_list);
	return 1;
}

int process_nodes()
{
	mem_zone_ref_ptr node = PTR_NULL;
	mem_zone_ref     my_list = { PTR_NULL };

	for (tree_manager_get_first_child(&peer_nodes, &my_list, &node); ((node != NULL) && (node->zone != NULL)); tree_manager_get_next_child(&my_list, &node))
	{
		process_node_messages(node);
	}
	return 1;
}
The node definition in the first block of this post would replace the "coin core" module on the graph (https://github.com/iadix/purenode/blob/master/purenode/), where all the coin logic takes place: checking that block headers fit the difficulty, computing the difficulty retargeting, checking the proof of work or stake, checking the transaction inputs, etc. I think most of this could be written in a high-level script language; all the parts about protocol consensus and which block is considered valid could be scripted with the high-level node definition, scripting calls to modules coded in C for the hard crypto parts or the list processing.

I already have all the code to implement this with the C modules, and it uses the runtime type definitions, so it should not be too hard to integrate a script system based on the node definition I posted above, and to make a parser, to get a running full node with its parameters and protocol consensus written in a high-level language like this.

With this, static data format definitions become mostly useless; they are mainly useful if you want compile-time verification, but you can't have that with JS anyway, and C/C++ is still very weak at detecting problems at compile time while completely preventing true runtime type checking. And unlike in JavaScript, all the key and variable definitions can be sur-typed with built-in type identifiers, with runtime type checking, so it allows more design patterns than JavaScript. As my framework is all based on runtime type definitions, it allows runtime type checking and more robust design patterns than JS, and it's all thought from scratch to easily build a script engine on top of it.

Where I want to get with you is this: given that you want to code a new blockchain protocol, it would probably not take longer to code the script engine based on my framework, with 90% of all the basic blockchain-related code and the JavaScript/HTML5 app and RPC interface already working, than to start coding a blockchain node from scratch. And like this you can have your blockchain programmed with a high-level language that has all the properties you want: parallelization, runtime type checking, modularity, design patterns and all. I'm pretty sure we are after the same thing. With a clear, easy-to-read language, dynamic typing, async requests, green threading, "hard threading", JavaScript/HTML5 compatibility, I'm sure we can get something cool.

The point of doing this script engine is to facilitate programming more complex consensus, to have protocols more evolved than only checking PoW, and to facilitate programming distributed applications based on a flexible definition of a node that can run different modules/classes exposed as an RPC API. If the syntax remains compatible with JSON, maybe it can be used as a template to generate JS objects and/or UI, or be used from static HTML5 pages. And this kind of scripting could really not take too long to get started from the code I already have. I'm sure you have plenty of experience and good ideas to get to a script language like this, to define node/module interfaces and the global application logic. Any blockchain could be programmed with it independently of the others; the script engine should allow writing a new consensus algorithm without having to change the underlying code too much. And as long as the module dependency chain remains OK, apps using those modules should be compatible with the two blockchains.
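As a minimal sketch of that direction, loading such a node definition into the typed tree and reading the coin parameters back (strlen_c and the 0 node-type argument to tree_manager_find_child_node are assumptions by analogy with the calls shown above):

Code:
/* sketch: load a stripped-down high-level node definition and read the
   coin parameters from the resulting typed tree */
mem_zone_ref node_def = { PTR_NULL }, node = { PTR_NULL };
const char   *conf = "{\"node\":{\"name\":\"purenode\",\"p2p_port\":16819,\"version\":60018}}";
unsigned int p2p_port, version;

if (tree_manager_json_loadb((char *)conf, strlen_c(conf), &node_def))
{
	if (tree_manager_find_child_node(&node_def, NODE_HASH("node"), 0, &node))
	{
		tree_manager_get_child_value_i32(&node, NODE_HASH("p2p_port"), &p2p_port);
		tree_manager_get_child_value_i32(&node, NODE_HASH("version"), &version);
		release_zone_ref(&node);
	}
	release_zone_ref(&node_def);
}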
IadixTeam
Newbie
Offline
Activity: 35
Merit: 0
March 31, 2017, 07:48:29 PM
Brilliant !!
IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
March 31, 2017, 07:49:30 PM
iamnotback
March 31, 2017, 09:29:46 PM
@iadix I haven't forgotten you. Coming back to your stuff next. First I wanted to solidify the understanding of what I am attempting to do for blockchains:

Quote from: iamnotback
And this is precisely the conceptual reason why Bitcoin may still be decentralized. But the danger is that over time the paradigm is winner-take-all, due to either the marginal utility of economies-of-scale (within the Bitcoin protocol) being perfectly constant, and/or the inability of the Bitcoin small-block, non-shrinking (money supply, i.e. not a deflationary supply) system to adapt to the knowledge age as finance becomes inexorably less efficacious w.r.t. knowledge value (i.e. the NWO will be able to bring those whales into submission to the higher authority of an externality). I would much prefer, if possible, a protocol wherein the power of everyone is essentially nil...

So Bitcoin maintains its immutability because the richest whales in the kingdom dominate it, and they will experience mutual self-destruction if they defect, where 'defect' means to try to mutate the protocol. The economics and cooperative game theory were explained upthread, e.g. my summary rebuttal to @jbreher.

One hypothetical threat to Bitcoin's immutability is another altcoin gaining a higher market cap, due to the analogous arbitrage master/slave vulnerability quoted below. But this won't happen through Litecoin gaining more adoption with SegWit (presuming @dinofelis and I are correct that Bitcoin will never have SegWit and LN), because Lightning Network's centralized hubs mean private fractional-reserve banking, which in turn means all the profit from it will ultimately be settled between those power brokers on the Bitcoin blockchain. The settling of Lightning Network channels on the Litecoin blockchain is transactional settlement, not power-broker settlement. The tail doesn't wag the dog. And thus Litecoin will never be more valuable than Bitcoin, for the analogous reason that Nash explained why silver can't be as valuable as gold (c.f. my upthread rebuttal of @r0ach for the relevant Nash quotes). Litecoin might rise to 1/10 of Bitcoin's market cap, similar to silver's ratio to gold when silver was used as a widespread currency.

Quote from: iamnotback
PoW is only immutable on the master blockchain. The lesser altcoins can be manipulated by arbitrage employing the fulcrum of the master Bitcoin. Litecoin appears ready to lose its immutability and adopt SegWit because the miners and whales (who are the same in both BTC and LTC, btw) are able to take advantage of the ability to dominate a lesser market cap with a greater one. This creates a game theory opportunity where the previous manipulation of LTC to force it to be undervalued will now be reversed, so that there will be scarcity of mining supply, thus motivating miners to vote for activating SegWit to maintain the ramp-up in the price.

Thus any threat to Bitcoin only comes from an altcoin that gains massive adoption in a value activity that is not susceptible to fungible finance (because otherwise the profit of the activity will be siphoned off to the finance power brokers and settled on the Bitcoin blockchain). If such an altcoin attained a larger market cap than Bitcoin, then Bitcoin would be the tail that can't wag the dog, and the power-broker Nash equilibrium of Bitcoin could fail. As I stated upthread, the collapse of the efficacy of finance will not happen overnight; thus Bitcoin's immutability is not in danger near-term. I presented upthread already my thesis on the knowledge age, the value of knowledge, and the inexorable decline of the efficacy of finance w.r.t. knowledge value. In short, fungible money is a winner-take-all paradigm because of the power vacuum. But if the money is non-fungible knowledge, then nobody is in control of the paradigm (which is the asymptotic result Nash was searching for). I have a surprise coming. The tail doesn't wag the dog. If knowledge value is traded directly in an Inverse Commons, finance is the tail.
iamnotback
March 31, 2017, 11:29:58 PM Last edit: April 01, 2017, 01:29:29 AM by iamnotback
@iadix, I have not yet finished trying to figure out what you are doing with your code. I see you are doing some strange things, such as loading precompiled modules, and some scripting concept whose relevance I don't yet see. I was too exhausted when I was looking at it. Had to sleep. Woke up and was finishing up my analysis of blockchain economics. Now I need to prepare to go to a doctor's appointment, which will consume 6 hours because I have to get x-rays first.

So when I return, I will probably be too exhausted to look at your stuff today.

I will say that the code manually building these JSON objects looks very fugly to me. And coding everything in C seems not ideal and not programmer-friendly. We need C only for code which needs fine-tuned performance, which typically is only a small portion of the overall code.

There is TypeScript for static typing on JS.

I will read your last message later, as I don't have enough time right now.

But again, I really don't understand the overall point of the JSON stuff. And I don't understand why you claim it is optimal to conflate the languages used to create the programs which implement the APIs. APIs should be language-agnostic. Good design is to not prematurely optimize and commit to a language.

Sure seems to me that you are probably conflating things instead of building good APIs. But I need to think about this more before I can definitively respond to you. I am still open-minded.

My goal is not to prove which one of us is more clever. I simply want to design systems that will scale (and I mean scale in terms of programmers' adoption of our platform, thus obtuse frameworks are not desirable, not just the other meanings of scaling), and to balance that with needing to get something launched asap.

I put a lot of effort into correct layered design. I don't believe in just hacking things together like spaghetti. I don't understand what APIs you claim you have designed and why you claim they are optimal. I don't even understand yet the significance of your tree concept. It is as if you are trying to build an object system in program/library code that should instead be integrated into a programming language. Perhaps I just don't yet grasp it holistically.
mining1
April 01, 2017, 12:31:58 AM
Lol, the random April Fools trolling busted you; ETH apologist, Steem fanboy, and LTC facilitator. Do you think this is by mistake, or design?
iamnotback
April 01, 2017, 12:57:59 AM
In recent days I had a realization (or probably remembered something I had realized long ago) which I mentioned upthread, but I'm confident readers didn't recognize its importance.

Fungible money awards the profits from volatility to the whale (or shareholder capital) with the most reserves, who can backstop the speculation and collect a fee on the wave motion, i.e. the fee is a friction. This is what MPEx was doing and how he became so wealthy in Bitcoins (750,000+ BTC).

The point is that fungible money is a winner-take-all paradigm, because even if there is competition, any system which imparts less risk to the one with higher economies-of-scale is going to end up completely concentrated at some point. This is why Bitcoin can wag the altcoins' tails and not vice versa, but it is also why Bitcoin is not stable long-term: eventually all the capital will become concentrated in one whale or group.

This is why Nash's theory doesn't work in practice. But Nash was mathematically correct that if it is possible to have an asymptotic number of stable currencies, then that winner-take-all tragedy-of-the-commons becomes an Inverse Commons.

And I posit that the way to achieve Nash's goal is something I am working on. I will not detail the reasons further right now; that is for a later time.
stereotype
Legendary
Offline
Activity: 1554
Merit: 1000
April 01, 2017, 07:53:34 AM
^^^ LOL. Humour-free zone, obviously. ^^^
mining1
April 01, 2017, 09:19:24 AM
OMG @iamnotback, your conspiracies and thoughts about ETH were right. Vitalik admitted AXA and the Bilderberg group are shareholders of the Ethereum foundation. Also, in Metropolis there is going to be implemented the possibility to reverse any contract if desired, named "Proof of Vitalik". Being overcome with guilt, Vitalik decided to redirect 20% of the mining rewards to the DAO hackers. Being unable to find solutions for nothing-at-stake and stake grinding, Vitalik decided to give up on proof of stake and go with PoA instead. As if it wasn't bad enough, Devcon 3 will be held in Pyongyang. I ain't telling no lies, proof inside: https://blog.ethereum.org/2017/04/01/ethereum-dev-roundup-q1/
IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
April 01, 2017, 09:49:02 AM Last edit: April 01, 2017, 10:50:04 AM by IadixDev
Quote from: iamnotback on March 31, 2017, 11:29:58 PM
@iadix, I have not yet finished trying to figure out what you are doing with your code. I see you are doing some strange things, such as loading precompiled modules, and some scripting concept whose relevance I don't yet see. [...]

I will say that the code manually building these JSON objects looks very fugly to me. And coding everything in C seems not ideal and not programmer-friendly. We need C only for code which needs fine-tuned performance, which typically is only a small portion of the overall code.

There is TypeScript for static typing on JS.

But again, I really don't understand the overall point of the JSON stuff. And I don't understand why you claim it is optimal to conflate the languages used to create the programs which implement the APIs. APIs should be language-agnostic. Good design is to not prematurely optimize and commit to a language.

Sure seems to me that you are probably conflating things instead of building good APIs. [...]

I put a lot of effort into correct layered design. I don't believe in just hacking things together like spaghetti. I don't understand what APIs you claim you have designed and why you claim they are optimal. I don't even understand yet the significance of your tree concept. It is as if you are trying to build an object system in program/library code that should instead be integrated into a programming language. Perhaps I just don't yet grasp it holistically.
Loading precompiled modules is called dynamic linking. There is no scripting in this code.

The goal of having modules is to get something like a static class, and they can be moved from Linux to Windows without recompilation.

A JSON object can be loaded from JSON text; tree_manager_set_child_value(node, "xx", value) is like a hash-key list, like node["xx"] = value. It's the same as having a hash table. But in C you can't have load_json("{key:value}", &xx); and then xx.key, like in AS3 or JS, because of static typing, so it's read_node_child(&xx, "key", &value); instead. The point of doing a script language is to have a syntax improvement over the C.

An API cannot be fully language-agnostic, as it needs to take into account the function call parameters and the types of the arguments, lol. Even with XPCOM, API definitions are compiled from IDL to the target language. I could write interface definitions, but if it's to have interface definitions in JSON, to define an interface with JSON parameters, the API definition is just decoration at the programming level. The JSON is the API/data format definition used by the JSON/RPC API; I don't see the point of defining interfaces otherwise. To have an IDL-like thing compiled to JSON? Or compiled to a C header to load preformatted JSON? Using JSON you can pass language-agnostic data, but the language still has to parse it, and with compiled languages, structures or classes can't be made at run-time with the members of the JSON object.

I'm not hacking stuff together like spaghetti, lol; there are design patterns, modules, typing, interfaces, an API, etc. But OK, I'll take this as "I'll have to build an application for a non-existent blockchain made with a non-existent language which is so agnostic that it doesn't exist". So I'll let you at it and keep doing applications with my thing.

But my goal is to improve the syntax, and modules in my mind are more for the low-level operations, with the high-level part in JS, or adding an icing in C++, even if I'd rather leave the scripting to another language. Anyway, most of the programmable part is made in JS. Modules are mostly there to implement the hard parts, removing them from the JS app while still being able to parse a JSON/RPC request and output a JSON result. I can write definitions of the RPC interface, but it's useless at the programming level: the compiler is blind to the API. It's more like AS3/JS; objects and types are built at run-time based on the data definition.

The implementation of the wallet RPC API is at https://github.com/iadix/purenode/blob/master/rpc_wallet/rpc_methods.c and the JS side calls it like this:

Code:
function rpc_call(in_method, in_params, in_success) {
	$.ajax({
		url: '/jsonrpc',
		data: JSON.stringify({ jsonrpc: '2.0', method: in_method, params: in_params, id: 1 }), // id is needed !!
		type: "POST",
		dataType: "json",
		success: in_success,
		error: function (err) { alert("Error"); }
	});
}
The JS wallet: https://github.com/iadix/purenode/blob/master/export/web/wallet.html

Code:
function import_address(address) {
	rpc_call('importaddress', [address], function (data) { });
}

function get_addrs(username) {
	rpc_call('getpubaddrs', [username], function (data) {
		$('#newaddr').css('display', 'block');
		if ((typeof data.result.addrs === 'undefined') || (data.result.addrs.length == 0)) {
			my_addrs = null;
		} else {
			my_addrs = data.result.addrs;
		}
		update_addrs();
	});
}

function import_keys(username, label) {
	rpc_call('importkeypair', [username, label, pubkey, privkey, 0], function (data) {
		get_addrs(username);
	});
}
The complete API for the RPC wallet in JS: http://iadix.com/web/js/keys.js
And for the block explorer: http://iadix.com/web/js/blocks.js

Implemented in: https://github.com/iadix/purenode/blob/master/block_explorer/block_explorer.c

Code:
OS_API_C_FUNC(int) getlastblock(mem_zone_ref_const_ptr params, unsigned int rpc_mode, mem_zone_ref_ptr result)
{
	mem_zone_ref last_blk = { PTR_NULL };

	if (tree_manager_find_child_node(&my_node, NODE_HASH("last block"), NODE_BITCORE_BLK_HDR, &last_blk))
	{
		mem_zone_ref txs = { PTR_NULL };
		char         chash[65];
		hash_t       hash, merkle, proof, nullhash, rdiff, hdiff, prev;
		size_t       size;
		unsigned int version, time, bits, nonce;
		uint64_t     height;

		memset_c(nullhash, 0, sizeof(hash_t));

		/* compute and cache the block hash if not already present */
		if (!tree_manager_get_child_value_hash(&last_blk, NODE_HASH("blk_hash"), hash))
		{
			compute_block_hash(&last_blk, hash);
			tree_manager_set_child_value_hash(&last_blk, "blk_hash", hash);
		}

		tree_manager_get_child_value_str(&last_blk, NODE_HASH("blk_hash"), chash, 65, 16);
		tree_manager_get_child_value_hash(&last_blk, NODE_HASH("merkle_root"), merkle);
		tree_manager_get_child_value_hash(&last_blk, NODE_HASH("prev"), prev);
		tree_manager_get_child_value_i32(&last_blk, NODE_HASH("version"), &version);
		tree_manager_get_child_value_i32(&last_blk, NODE_HASH("time"), &time);
		tree_manager_get_child_value_i32(&last_blk, NODE_HASH("bits"), &bits);
		tree_manager_get_child_value_i32(&last_blk, NODE_HASH("nonce"), &nonce);
		if (!get_block_size(chash, &size))
			size = 0;

		get_blk_height(chash, &height);

		if (is_pow_block(chash))
		{
			SetCompact(bits, hdiff);
			get_pow_block(chash, proof);
			tree_manager_set_child_value_hash(result, "proofhash", proof);
			tree_manager_set_child_value_hash(result, "hbits", rdiff);
		}
		else if (get_blk_staking_infos)
			get_blk_staking_infos(&last_blk, chash, result);

		/* fill the json result tree returned to the rpc client */
		tree_manager_set_child_value_hash (result, "hash", hash);
		tree_manager_set_child_value_i32  (result, "confirmations", 0);
		tree_manager_set_child_value_i32  (result, "size", size);
		tree_manager_set_child_value_i64  (result, "height", height);
		tree_manager_set_child_value_i32  (result, "time", time);
		tree_manager_set_child_value_i32  (result, "version", version);
		tree_manager_set_child_value_i32  (result, "bits", bits);
		tree_manager_set_child_value_i32  (result, "nonce", nonce);
		tree_manager_set_child_value_hash (result, "merkleroot", merkle);
		tree_manager_set_child_value_hash (result, "previousblockhash", prev);
		tree_manager_set_child_value_hash (result, "nextblockhash", nullhash);
		tree_manager_set_child_value_float(result, "difficulty", GetDifficulty(bits));
		tree_manager_add_child_node(result, "txs", NODE_JSON_ARRAY, &txs);
		get_blk_txs(chash, &txs, 10);
		release_zone_ref(&txs);
		/*
		   "mint" : 0.00000000,
		   "blocktrust" : "100001",
		   "chaintrust" : "100001",
		   "nextblockhash" : "af49672bafd39e39f8058967a2cce926a9b21db14c452a7883fba63a78a611a6",
		   "flags" : "proof-of-work stake-modifier",
		   "entropybit" : 0,
		*/
		return 1;
	}
	return 0;
}
The JSON/RPC method calls are directly module function calls; this is the RPC/JSON API, but it is defined at run-time. A tree can be built automatically from a JSON string in one call, and a JSON string can be produced from the tree too; that is the interest.
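For instance, a minimal sketch using the calls shown above (strlen_c is assumed by analogy with strncmp_c, and the serializer name in the comment is hypothetical; the actual function in purenode may differ):

Code:
/* build a typed tree from a json string in one call, then read the values
   back with the runtime-typed getters used throughout the node code */
mem_zone_ref params = { PTR_NULL };
const char   *json = "{\"rpc_port\":16820, \"index\":\"/wallet\"}";
unsigned int port;
char         path[32];

if (tree_manager_json_loadb((char *)json, strlen_c(json), &params))
{
	tree_manager_get_child_value_i32(&params, NODE_HASH("rpc_port"), &port);
	tree_manager_get_child_value_str(&params, NODE_HASH("index"), path, 32, 16);

	/* reverse direction, e.g. tree_manager_dump_json(&params, &json_out);
	   (hypothetical name for the tree-to-json serializer) */
	release_zone_ref(&params);
}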
iamnotback
April 01, 2017, 10:41:53 AM Last edit: April 01, 2017, 10:53:32 AM by iamnotback
Quote from: IadixDev
Loading precompiled modules it's called dynamic linking

DLLs are already supported by the OS.

Quote from: IadixDev
The goal to have module is to like a static class

That is not correct English grammar. It is difficult to understand your writing.

Quote from: IadixDev
, and they can be moved from linux to win without recompilation.

Yeah, I read that on your site yesterday, and I don't understand what the worthwhile benefit is. Developers know how to use compilers. Do you want to dynamically load this over the network? Why? And code trust may be an issue in that case, depending on your use case.

Quote from: IadixDev
The point of doing a script language is to have synthax improvement on the C.

That is what I thought. You are trying to model OOP in C instead of using a more appropriate language, all because you want this dynamic linking of code? It is the holistic reasoning that I am not getting. I don't understand why you would choose to make these design decisions.

Quote from: IadixDev
Api cannot be language agnostic as they need to take in account the functions call parameters and the types of arguments lol

An API can surely be language-agnostic. REST APIs are language-agnostic.

Quote from: IadixDev
Using json can pass language agnostic data, but the language has to parse it, and with compiled languages structures or class cant be made at run-time with the members of the json object.

Orthogonality (separation-of-concerns) does have a performance cost. But premature optimization is bad design.

Quote from: IadixDev
Im not hacking stuff together like spaghetti lol there is design pattern, modules, typing, interface, api, etc.

Really, premature optimization is spaghetti. You may have certain features, but that doesn't mean the priorities of how you achieved them are not spaghetti. Spaghetti is conflating things which should remain orthogonal until there is an overriding justification for conflating them. Also OOP as in subclassing is an anti-pattern. Not saying you have that feature (subclassing, i.e. virtual inheritance with subtypes).

Quote from: IadixDev
But ok ill take this as ill have to build application for a non existant blockchain made with a non existant language who is so agnostic that it doesnt exist. So ill let you at it and keep doing application with my thing

No, you don't have to do anything. You are free to do whatever you want. I am not coercing anyone. I am going to build a correct design; those who find it appealing will build on it. I am merely asking questions to try to understand your reasoning about why you made the design decisions you did. Until I understand your reasons well, I can't be 100% sure whether I reject or accept your reasons and your design choice. Remember, you were trying to help/influence me to base my design around, or make it compatible with, your existing code. It is not unappreciated, but if I don't agree with design decisions, then I don't agree. I am not 100% sure yet; I need to fully understand your reasons for your design choices.

I guess my use of the word 'spaghetti' offended you. Sorry, but I think writing high-level code for apps in C is not a good idea (because developers don't like to do that; it produces less readable code, reduces productivity, is slower to market, etc). But I think we need to differentiate between the server side and the client side. I suppose most of the code you are showing me is for a server-side full node. For the client-side apps, I hope you aren't proposing to use C.

Quote from: IadixDev
But my goal is to improve the synthax, and modules in my mind they are more for the low level operation.

Incorrect. Modules are integral with static typing. That is one of the aspects I mean when I use the word spaghetti to describe what I think you may be doing in your design.

Edit: I see you edited your post and added some code examples. I am too sleepy to review those now. I had looked at that block explorer C code of yours yesterday.
IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
April 01, 2017, 11:05:59 AM Last edit: April 01, 2017, 11:21:50 AM by IadixDev
Quote from: iamnotback
DLLs are already supported by the OS.

Well, I made my own DLL format, which can be made from a .so or a .dll, and can be loaded and linked on both Linux and Windows, independently of the compiler.

Quote from: iamnotback
That is not correct English grammar. It is difficult to understand your writing.

Sorry, lol, I'm typing on the tablet. What I mean is to have a module definition used as a static class: a collection of methods, and a unique instance of data.

Quote from: iamnotback
Yeah I read that on your site, and I don't understand what is the worthwhile benefit. Developers know how to use compilers. Do you want to dynamically load this over the network? Why? And code trust may be an issue in that case depending on your use case.

For Linux, binary compatibility can be an issue, and binaries often have to be recompiled for each machine. Like this, it's 100% the same binary code running on all nodes. The code trust can be solved with verticalization and a private network, like I explained before. The main concern for doing these modules is not this project, but I think they are useful here, to deploy distributed applications easily; it removes the burden of compiling. And they can provide a good abstraction for interface definition, and as they can deal with JSON data, they are fit for JSON/RPC: a module implements a JSON/RPC interface.

Quote from: iamnotback
That is what I thought. You are trying to model OOP in C instead of using a more appropriate language, all because you want this dynamic linking of code? It is the holistic reasoning that I am not getting. I don't understand why you would choose to make these design decisions.

Not really OOP. I know the limits of C for this, but for interface definition and encapsulation the module does the trick. Full OOP needs to be done in another layer, or it can be directly exploited in JS, adding the OO syntax with JS class definitions.

Quote from: iamnotback
An API can surely be language agnostic. REST APIs are language agnostic.

They use XML. If the language you use doesn't support XML, you can't use it. If your language has certain limitations on data types, it can be unsafe.

Quote from: iamnotback
Quote from: IadixDev
Using json can pass language agnostic data, but the language has to parse it, and with compiled languages structures or class cant be made at run-time with the members of the json object.

Orthogonality (separation-of-concerns) does have a performance cost. But premature optimization is bad design.

Quote from: IadixDev
Im not hacking stuff together like spaghetti lol there is design pattern, modules, typing, interface, api, etc.

Really premature optimization is spaghetti. You may have certain features, but that doesn't mean the priorities of how you achieved them are not spaghetti. Spaghetti is conflating things which should remain orthogonal until there is an overriding justification for conflating them. Also OOP as in subclassing is an anti-pattern.

Where am I talking about optimization?
IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
April 01, 2017, 11:34:31 AM Last edit: April 01, 2017, 04:59:02 PM by IadixDev
Quote from: iamnotback
I guess my use of the word 'spaghetti' offended you. Sorry but I think writing high-level code for apps in C is not a good idea (because developers don't like to do that, it produces less readable code, reduces productivity, slower to market, etc). But I think we need to differentiate between the server-side and client-side. I suppose most of the code you are showing me is for a server-side full node. For the client side apps, I hope you aren't proposing to use C

For the app code, I see two possibilities. Either use JavaScript and encapsulate the modules' RPC interfaces in JS classes (the design patterns are still weak, but good applications can already be made with HTML5/JS). Or do another script language, which recognizes the module interfaces/API and can make calls to these modules, to have a better high-level node definition, and build applications with this script language.

90% of an application is making calls to module interfaces and formatting the UI. If the script language can include some form of HTML templating, in addition to exchanging data via the API, it could allow programming more of the application in this script language. Maybe programming in C for the warriors, or if they need specific code that must be done in C, but normally the idea is to have modules already made to fit the purpose, to be used by the high-level app language. For certain real-time apps, like 3D video games or OpenGL, HTTP/RPC can be too slow, and the app may still need a real-time connection to the blockchain in a lower-level language. The part of the application to be made in C/C++ or JS is up to each developer; most web apps should be made with JS.

For purely blockchain-related stuff, the wallet RPC interface and the block explorer should be enough, with the in-browser crypto part in JS. Normally most operations that an application can do on a blockchain are implemented in those modules, with the regular wallet RPC API and the block explorer API. Other modules implement other interfaces for other types of application, but the logic can be programmed either in C or JS, keeping in mind that with in-browser JS there can't be local data; all the user data has to be handled via a module interface (whether it's on a local node or not). Outside of code that needs to store permanent data, it can be programmed in JS too, but JS is not necessarily good at doing complex parsing on long lists, so complex parsing/filtering etc. is better done in C modules.

And really the syntax is not so bad; there are not too many keywords, it's mostly tree_set_child_value_type(object, key, value); and the getters. It's there to manipulate JSON trees in C, with reference counting etc., and such a tree can be translated to/from JSON in one line, either to build it out of an RPC/JSON request or to output the JSON result for the JavaScript app. As most node operations, even low-level ones, use this tree structure, a JS app can plug into the internal API anywhere with JSON/RPC calls. Only the very low-level things, like kernel-level IO or sockets, really need to be done in C. But parsing thousands of blocks and txs in JS is not a good idea either. As all the data manipulated internally by the node uses this structure, the whole internal "DLL-based API" is also the "RPC/JSON API", without anything special to do: a function does not see a difference between a call from another C module and a JS RPC call. The data format is the same for both the call arguments and the result data; the C program and the JS/RPC work on the same API.

The HTTP server just loads the JSON parameters of the RPC request into the tree, and transforms the result tree into JSON for the RPC/JSON response, whereas the C app directly sends arguments using the tree and parses the function result from the tree, instead of using JSON. Internally it's the same as an old-school DLL API, except that the parameters and data are defined dynamically as a JSON data tree, and the compiler is blind to it (but not the runtime). Any level of hierarchy of arrays of objects can be represented like this. By object I mean object in the JSON sense: a collection of named values.
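Concretely, a new method exported from a module follows the same shape as getlastblock() above; this getinfo() skeleton is a hypothetical sketch of that convention:

Code:
/* hypothetical example method: exported from a module, callable from C
   directly, or as the json/rpc method "getinfo" through the http server */
OS_API_C_FUNC(int) getinfo(mem_zone_ref_const_ptr params, unsigned int rpc_mode, mem_zone_ref_ptr result)
{
	/* fill the json result tree returned to the caller / rpc client */
	tree_manager_set_child_value_i32(result, "version", 60018);
	tree_manager_set_child_value_str(result, "name", "purenode");
	return 1;
}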
mining1
April 01, 2017, 12:12:02 PM
Just a question from a non-technical guy. I haven't really read all that you guys wrote, but I've noticed JavaScript being mentioned. So why JavaScript and not WebAssembly? Apparently WASM is superior; even JavaScript's creator Brendan Eich endorses it.
IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
April 01, 2017, 12:29:08 PM Last edit: April 01, 2017, 09:03:47 PM by IadixDev
Quote from: mining1
Just a question from a non-technical guy. I haven't really read all that you guys wrote, but I've noticed JavaScript being mentioned. So why JavaScript and not WebAssembly? Apparently WASM is superior; even JavaScript's creator Brendan Eich endorses it.

I just looked into it: http://webassembly.org/docs/web/
Apparently it still needs to be compiled to JS to run in a browser. It can be interesting if it has good support for network protocols and crypto.

Quote from: webassembly.org
ABIs

In the MVP, WebAssembly does not yet have a stable ABI for libraries. Developers will need to ensure that all code linked into an application are compiled with the same compiler and options.

In the future, when WebAssembly is extended to support dynamic linking, stable ABIs are expected to be defined in accompaniment.

My ABI already has this, and it also supports aligned 128-bit SIMD operations, full native compatibility with C (it is C), cross-compiler binary compatibility, dynamic linking, and JSON/RPC compatibility; and I already have a large part of a blockchain protocol implemented with it. But WASM does seem similar in purpose to where I want to get.

The big advantage of JS is that it has a large base of code, SDKs, programmers, support etc., which is mostly what makes it interesting. The only problems are access to local storage, and performance with binary data or crypto; for operating a full node it seems a bit light. But for programming high-level logic, UI, and interactive applications, it's good. It just needs a simple design pattern, or it can become dicey. With good naming conventions and specialized objects that are all horizontal, it is doable in JS. But adding a layer in C++ or true OO just to have compile-time design patterns in the node, when all the application logic happens in another layer anyway through an RPC/JSON interface, seems a bit pointless.

For me the two ways are: either do a full OO script to define the node and the RPC interface to the app, and eventually program some parts or all of the app itself with it, with a syntax to export RPC methods from the script code to other modules or the JS app; or do the OO layer with JS and RPC calls. WASM seems to have a system like this to bind a module interface to a JS API, but it doesn't seem to support HTTP/RPC, or executing remote modules; with the JS RPC, code can be executed on a remote machine. The thing WASM does with table/element definitions in the modules to have data definitions is replaced here by runtime type definitions based on JSON, so the data definition can be shared easily between the low level and the high level. No need for "glue code".

Quote from: http://webassembly.org/docs/semantics/#table
Table

A table is similar to a linear memory whose elements, instead of being bytes, are opaque values of a particular table element type. This allows the table to contain values—like GC references, raw OS handles, or native pointers—that are accessed by WebAssembly code indirectly through an integer index. This feature bridges the gap between low-level, untrusted linear memory and high-level opaque handles/references at the cost of a bounds-checked table indirection.

My system, with explicit types added to JSON, does the same as this, except it's not opaque, and it is convertible to JSON, using JSON-allowed typed string/integer/real/object/array. So the same definition used for the module interface can also be used as the definition of the JSON/RPC interface, and it's never totally opaque. In the worst-case scenario, such a thing with arrays of elements at binary offsets can be done with the tree system, but it's better to add named and typed sub-members, in a transparent structure that can be shared with JS. The function argument passing is the same for a C program call (the DLL API) and an RPC call (via the HTTP/JSON API).

Having internal compile-time constraints on the modules' RPC interface is not necessarily a good thing, and it would be useless in any case for the application layer, unless the application-layer language can understand those types, which will not be the case with JS. The best you will have for data definition in JS is JSON and flaky runtime types. Unless you add a layer of runtime in JS which is able to get the right class instance from a high-level definition, like a factory that instantiates a complex class hierarchy based on runtime parameters, to build an OO representation of the node with specialized classes for each RPC interface/module. But the data format will be hardcoded one way or another in the JavaScript code anyway, independently of the node definition; you can just pick between different hardcoded formats at run-time by using the class instance corresponding to the node's RPC interface. Unless you want to compile the JS code with the hardcoded values from a definition also used to compile the node interface implementation, but that wouldn't give much security at run-time. Or you can generate the JS code that calls the API at run-time based on the module definition, like WASM, but that doesn't seem all that convenient.

Having data definitions based on JSON in the node's internal C DLL API is not the worst solution. And there is an API: it compiles as a .dll on Windows or a .so on Linux, like in debug mode, with the dependencies and API definition etc., but the data formats of function parameters are defined and parsed at run-time, and they can be built at run-time automatically from a JSON object. The API at the C level can be called directly via HTTP/RPC transparently: C function parameters are sent as a JSON object instead of a C structure or a list of parameters. The C code can check the validity of the data structure at runtime, do its operations safely on it if it has the right type/children, and return the data to the app in the same format via JSON/RPC. The only things the compiler sees are pointers and integers, but all memory accesses are checked at run-time.

The HTTP JSON is like a layer added on top of the call to handle remote access to the API: it turns the JSON parameters from the RPC request into the tree, calls the exported function specified in the RPC request method with these parameters, encodes the result to JSON, and handles the HTTP layer. From the JS point of view, it's like directly calling the C function exported in the module. The RPC request method is the exported function name; the module is selected with the HTTP URL path. The parameters are parsed at run-time by the module code from the JSON format.
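A small sketch of that explicit typing, reusing the template syntax from new_message() above (passing the raw 0x0B000010 tag to tree_manager_find_child_node and the i64 getter are assumptions by analogy with the calls already shown):

Code:
/* a runtime type template in the extended json syntax: the ("payload",0x0B000010)
   key carries an explicit type tag, as in the ping/pong template above */
mem_zone_ref  tpl = { PTR_NULL };
struct string def = { PTR_NULL };
uint64_t      nonce;

make_string(&def, "{(\"payload\",0x0B000010) \"nonce\":0}");
if (tree_manager_json_loadb(def.str, def.len, &tpl))
{
	mem_zone_ref payload = { PTR_NULL };

	/* children keep their declared type at run time, so lookups are type-checked */
	if (tree_manager_find_child_node(&tpl, NODE_HASH("payload"), 0x0B000010, &payload))
	{
		tree_manager_get_child_value_i64(&payload, NODE_HASH("nonce"), &nonce);
		release_zone_ref(&payload);
	}
	release_zone_ref(&tpl);
}
free_string(&def);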
IadixDev
Full Member
Offline
Activity: 322
Merit: 151
They're tactical
April 01, 2017, 06:10:44 PM Last edit: April 01, 2017, 08:58:35 PM by IadixDev
All together, I'm sure it could be turned into C++ with a much better syntax, even using only overloaded operators.

You'll notice the only thing needed to encapsulate modules as C++ classes is adding the class { } in the file and removing the prefix in the function names; roughly, you get a class definition out of a module. They are each a single C file, designed to allow this easily, and macros/regexps could easily turn the tree manipulation functions into a better syntax with C++ operators.

Like:

Code:
tree_manager_get_child_value_hash(&last_blk, NODE_HASH("prev"), prev);
// =>
prev = last_blk["prev"];

tree_manager_set_child_value_hash(result, "merkleroot", merkle);
// =>
result["merkleroot"] = merkle;

Etc., with overloaded operators and type conversion. (The tree_xx functions allow explicit type conversion and are roughly equivalent to C++ operators; they can either just copy/pass a reference (by default) or be equivalent to copy constructors, and they already allow looping like a for-each over arrays of JSON objects with the C syntax; it could easily be implemented as C++ iterators.)

But...

- C++ operator exported function names are not compatible between compilers.
- The object "this" call convention is not compatible between compilers.
- It can make compilation more complex on certain platforms (like the Raspberry Pi).
- It can make complex type conversions implicit/hidden, and sometimes they are even hard to really express with C++ operators.

So for a distributed application it's a bit bothering. And anyway, the real application is programmed in JS, so it's better to have application-driven data definitions and use JSON as the basis for the data definition; in the absolute, all the data formats and protocols could be defined at run-time, loaded from a conf file or supplied by the user from the UI.

The C++ would just add syntax sugar to code that is supposed to be low level, while making compatibility with the JSON data definitions from the application more difficult, because of the lack of compatibility in module interface implementations at the binary level. And it wouldn't change anything in the syntax of the JS code anyway, as it all happens through JSON/RPC.

The whole OO syntax sugar to manipulate data in the UI can be done in JavaScript, with a hardcoded data format to communicate with the node; it doesn't have to know about the node's internal object representation, just the JSON encoding of the parameters and the return result of the RPC request, no matter how the module interface is implemented in the node or which compiler was used to compile it.