jl777
Legendary
Offline
Activity: 1176
Merit: 1132
|
|
February 05, 2014, 05:40:46 AM |
|
Except CfB said it had to be a low-level (assembly language) type of language. He started a contest to find a language with the fewest opcodes for Turing completeness. I doubt anything is less than 1, though there were half a dozen variants of single-opcode Turing-complete languages
This isn't a competition to find the least # of instructions (but if it were, then "you won"). I am guessing CfB just isn't keen on something that is too complicated, but believe me, *without* instructions such as SHA256 such a VM would really be of no practical use at all. So I guess it depends on whether we want to add something "useful" or whether we just want to say "me too" when it comes to having some sort of "Turing complete VM".

I think the whole idea is stupid and goes nowhere. Any time something new is announced by someone else, we have a "me too" discussion here. Last week it was Zerocoin. What happened to it? Nothing. This week it's the "Turing complete VM". No one has answered what it will do to the 1000 tps claims.

I have created a project plan that is being worked on to add zerocoin functionality. I am sorry I could not get this project done in 3 days. How long will I have to get the zerocoin functionality done?

We will be adding all features to NXT that make NXT more valuable. I think NXTcash will make NXT valuable. I also think being able to flowchart DACs and submit them to be run on the NXT network is also a very valuable feature. Notice that running DACs will require a lot of transactions.

James
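The single-opcode designs alluded to above are usually "subleq"-style OISC machines (subtract and branch if the result is less than or equal to zero). A minimal sketch of one, for illustration only (the function name, memory layout, and step cap are assumptions, not anything from the contest):

```python
# Minimal one-instruction (subleq) interpreter sketch.
# Each instruction is three cells: a, b, c  ->  mem[b] -= mem[a];
# if mem[b] <= 0, jump to c (a negative c halts the machine).
def run_subleq(mem, max_steps=10_000):
    ip = 0
    steps = 0
    while ip >= 0 and steps < max_steps:
        a, b, c = mem[ip], mem[ip + 1], mem[ip + 2]
        mem[b] -= mem[a]
        ip = c if mem[b] <= 0 else ip + 3
        steps += 1
    return mem

# One instruction at address 0: subtract mem[3] from mem[4], then halt.
# Result: mem[4] becomes 0 - 5 = -5.
print(run_subleq([3, 4, -1, 5, 0]))
```

A single opcode is enough for Turing completeness, but note how many instructions even trivial arithmetic takes; that is exactly why an op-priced VM built on such a language would make SHA256 prohibitively expensive.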
|
|
|
|
|
|
|
|
|
"I'm sure that in 20 years there will either be very large transaction volume or no volume." -- Satoshi
|
|
|
|
|
|
jl777
Legendary
Offline
Activity: 1176
Merit: 1132
|
|
February 05, 2014, 05:44:06 AM |
|
I would imagine it'd be much cheaper to pay 1 NXT for a client-generated AM that does any complex calcs. The scripts can access AM data, so the idea is for the higher-level code to push expensive-to-calculate data down into an AM
In which case I would suggest: why bother doing "any calcs" at all - there seems to be *no thought* here about what this concept is needed for, apart from to somehow answer Ethereum with "me too"!
Did you see my post about the layers with the 400000 NXT bounty? Building from Turing completeness you can get to where you can flowchart DAC control flow and have it automatically generate the code to implement it. How is that not useful? We will of course need to do a few reference DACs so people will get an idea of what can be implemented. Alias has a bunch of ideas:

*************

Alright. I'm going to interpret "most valuable" as meaning the following:

Quote
The most valuable DACs will be those that simultaneously engage a large audience (far beyond those individuals with exclusively an economic interest in cryptocurrencies) and inspire them to imagine a world where decentralized/distributed trustless systems are the norm.

You might watch the following to get near the same wavelength as me - http://www.youtube.com/watch?v=dE1DuBesGYM. Also, just for clarity, I tend to focus on the Decentralised Autonomous Community definition rather than the Decentralised Autonomous Corporation/Company definition.

Simple DAC games - These would allow simple competitive gaming with or without betting or financial reward (think tournaments):
- Tic-tac-toe
- Chess
- Rock, Paper, Scissors

Complex DACs:

[1] Prototype Resource Based Economy (RBE) - A simplistic version of this would be a simple ledger of all the resources owned by this DAC's community. Effectively a stock take of all the lawn mowers, projectors, step-ladders and basketballs owned by the community, and who has access to them right now. And who has requested access to them in the future. And who needs access to them right now in particular cases (e.g. a defibrillator). I cannot stress enough that even a poorly implemented prototype RBE would generate a huge amount of interest. http://www.youtube.com/watch?v=PIMy0QBSQWo.

[2] Prototype Decentralized Government - This doesn't have to be at a very large scale. It could allow a golf club or a soccer club to organize themselves and vote on pressing issues.

[3] Prototype Credit Union - I think that this is the pièce de résistance. It doesn't have to be incredibly complex. Think 10 members in the first credit union. They all lodge 100 NXT into the credit union. One member has a great idea for a new business/DAC etc. The community listens to his pitch and votes to give him 500 NXT. He launches his venture, makes a tonne of money, pays back the credit union (most likely with interest) and lodges a bit more anyway.

[4] Prototype Voting Registry

[5] Prototype Distributed Census

*I say prototype here because one would get their head in a twist if they tried to implement all the nuances associated with each system.

I hope that helps. Feel free to ask me any more questions. I will be much more active within the NXT community from the middle of February onwards.

As a closing remark I might suggest that there might be no extremely complex DACs. What looks like a complex DAC might be the result of emergent behavior on top of simple interacting DACs. Determine the most primitive lego bricks of how humans communicate, organize and transfer/store data, and 95% of all DACs will be entirely constructable from them. That is where NXT could beat Ethereum. In their case, it is almost hard to know where to start because you can do almost anything. If we can create the lego bricks that guide people, we might end up with things like this - http://totallytop10.com/wp-content/uploads/2010/01/lego-giraffe1.jpg. I suppose that is also the Minecraft way of looking at things.
|
|
|
|
Anon136
Legendary
Offline
Activity: 1722
Merit: 1217
|
|
February 05, 2014, 05:45:10 AM |
|
if, on the other hand, we tried to centrally plan this and dictate what sorts of code are acceptable and what sorts aren't, then who knows what sort of innovation we may be stifling for lack of creativity and foresight on our part.
Good grief - suggesting ideas for a "useful instruction set" is now called "central planning"? So - let's say we go with the idea of the 1-instruction low-level language and we charge, say, 1 NXT per 100 "operations". Then a simple SHA256 operation will likely cost you hundreds of NXT - so we now have a completely useless VM for doing SHA256 (but at least it wasn't "centrally planned", I guess). No one has answered what it will do to the 1000 tps claims.
Indeed - that is the reason why I was suggesting we would need a "practical" instruction set if we are going to bother with this at all. Well, ok, I may have misunderstood you, so let me just step back and ask some questions. Why should we set a price per operation? Why not let the forger determine how many operations he is willing to perform for a given price? Is the trouble here the fact that it would be unreasonable for the rest of the network to check the forger's work if the forger happened to be running on a really powerful computer? I would imagine it'd be much cheaper to pay 1 NXT for a client-generated AM that does any complex calcs. The scripts can access AM data, so the idea is for the higher-level code to push expensive-to-calculate data down into an AM
In which case I would suggest: why bother doing "any calcs" at all - there seems to be *no thought* here about what this concept is needed for, apart from to somehow answer Ethereum with "me too"! If we don't even have any idea what it is going to be used or useful for, then I'd suggest not building it (at least we *know* what Zerocoin can offer). This is the way it works, man. We don't expect that as soon as someone suggests an idea he must have every detail worked out. Someone suggests an idea and then we talk about it.
|
Rep Thread: https://bitcointalk.org/index.php?topic=381041
If one can not confer upon another a right which he does not himself first possess, by what means does the state derive the right to engage in behaviors from which the public is prohibited?
|
|
|
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1075
Ian Knowles - CIYAM Lead Developer
|
|
February 05, 2014, 05:50:17 AM |
|
Well, ok, I may have misunderstood you, so let me just step back and ask some questions. Why should we set a price per operation? Why not let the forger determine how many operations he is willing to perform for a given price? Is the trouble here the fact that it would be unreasonable for the rest of the network to check the forger's work if the forger happened to be running on a really powerful computer?
Of course, if you don't set some sort of a price per instruction, then you'd end up "stopping the network". Sure, it makes some sense for a forger to determine how many operations they would be willing to execute, but then how would we actually "check" that this has been done correctly if other nodes are *not* doing the same work? BTW - I am not really so excited about the whole DAC thing (which I gather James is) and would instead like to see simpler "more useful" concepts being created (rather than "pie in the skynet" ideas).
|
|
|
|
xyzzyx
Sr. Member
Offline
Activity: 490
Merit: 250
I don't really come from outer space.
|
|
February 05, 2014, 05:53:35 AM |
|
Why should we set a price per operation? Why not let the forger determine how many operations he is willing to perform for a given price?
What is an objective difference between those two options? They look like two ways to state the same thing to me. Is the trouble here the fact that it would be unreasonable for the rest of the network to check the forger's work if the forger happened to be running on a really powerful computer?
One of the problems we run into is the Halting Problem -- there's just no way to know beforehand if a program will find an answer or chug away at calculations until the heat death of the universe.
|
"An awful lot of code is being written ... in languages that aren't very good by people who don't know what they're doing." -- Barbara Liskov
|
|
|
jl777
Legendary
Offline
Activity: 1176
Merit: 1132
|
|
February 05, 2014, 05:57:27 AM |
|
Well, ok, I may have misunderstood you, so let me just step back and ask some questions. Why should we set a price per operation? Why not let the forger determine how many operations he is willing to perform for a given price? Is the trouble here the fact that it would be unreasonable for the rest of the network to check the forger's work if the forger happened to be running on a really powerful computer?
Of course, if you don't set some sort of a price per instruction, then you'd end up "stopping the network". Sure, it makes some sense for a forger to determine how many operations they would be willing to execute, but then how would we actually "check" that this has been done correctly if other nodes are *not* doing the same work? BTW - I am not really so excited about the whole DAC thing (which I gather James is) and would instead like to see simpler "more useful" concepts being created (rather than "pie in the skynet" ideas).
All sorts of useful things can be done using the middle NXT layers. I envision the ability for a Turing script to trigger sending an email. It would do this by processing its inputs (aliases and AM) and outputting an AM. Then services running on hub nodes process the AM and send out an email. Lots of details are missing, but those are details. I am trying to understand the concepts for each layer.

Now, not all forging nodes will have all services available, but as long as 90% of the forging is done on full-service nodes, then within 10 minutes it is almost assured that any services needed will be done. More details are needed to make sure things are done once and only once, but these again are details.

So, if you don't like the DAC layer, you can just deal with the services layer. Whatever service we can run on 100+ hub servers can be triggered by an AM created by Turing scripts. This requires proper segmentation of the overall task to the right layer. From my understanding, as long as the Turing scripts can get access to input data, they won't have to do a whole lot of complicated calculations. Though for a price, any calculation is allowed. I think 1 NXT per millisecond of compute time is reasonable.

James
|
|
|
|
jl777
Legendary
Offline
Activity: 1176
Merit: 1132
|
|
February 05, 2014, 05:58:05 AM |
|
Why should we set a price per operation? Why not let the forger determine how many operations he is willing to perform for a given price?
What is an objective difference between those two options? They look like two ways to state the same thing to me. Is the trouble here the fact that it would be unreasonable for the rest of the network to check the forger's work if the forger happened to be running on a really powerful computer?
One of the problems we run into is the Halting Problem -- there's just no way to know beforehand if a program will find an answer or chug away at calculations until the heat death of the universe. kill -9 after the time limit is up. Time limit = # NXT paid * milliseconds
|
|
|
|
Eadeqa
|
|
February 05, 2014, 05:59:16 AM |
|
if, on the other hand, we tried to centrally plan this and dictate what sorts of code are acceptable and what sorts aren't, then who knows what sort of innovation we may be stifling for lack of creativity and foresight on our part.
Good grief - suggesting ideas for a "useful instruction set" is now called "central planning"? So - let's say we go with the idea of the 1-instruction low-level language and we charge, say, 1 NXT per 100 "operations". Then a simple SHA256 operation will likely cost you hundreds of NXT - so we now have a completely useless VM for doing SHA256 (but at least it wasn't "centrally planned", I guess). No one has answered what it will do to the 1000 tps claims.
Indeed - that is the reason why I was suggesting we would need a "practical" instruction set if we are going to bother with this at all. I would imagine it'd be much cheaper to pay 1 NXT for a client-generated AM that does any complex calcs. The scripts can access AM data, so the idea is for the higher-level code to push expensive-to-calculate data down into an AM. How would it impact the decentralized exchanges without any third-party gateways when even simple SHA256 operations would be that costly? I think this will have serious security implications (and flaws). I doubt it would work at all.
|
|
|
|
xyzzyx
Sr. Member
Offline
Activity: 490
Merit: 250
I don't really come from outer space.
|
|
February 05, 2014, 06:02:09 AM |
|
kill -9 after time limit is up. time limit = # NXT paid * milliseconds
The time it would take to run a program would depend on the underlying computer. This may lead to a given program with the same input succeeding when run on a fast/powerful computer, but not succeeding when run on a less fast/powerful computer. It would be better to use number of instructions as a metric. Also, multiple nodes would have to run the program with the same input in order to check each other's work. Otherwise, a malicious node could take your payment and lie about the result. Edit: what he said ↓
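Metering by instruction count, as suggested here, makes the budget deterministic: every node executing the same program with the same input halts at exactly the same point, regardless of hardware speed. A minimal sketch of the idea (the opcode names and function are made up for illustration):

```python
# A toy stack VM that charges one unit of "gas" per instruction executed.
# Because the count is part of the program's semantics, a fast node and a
# slow node abort at exactly the same instruction.
def execute(program, gas_limit):
    stack, used = [], 0
    for op, arg in program:
        if used >= gas_limit:
            raise RuntimeError("out of gas")  # identical abort on every node
        used += 1
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
    return stack, used
```

A forger would then quote a price per unit of gas rather than per millisecond, and any verifying node can replay the exact same bounded computation.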
|
"An awful lot of code is being written ... in languages that aren't very good by people who don't know what they're doing." -- Barbara Liskov
|
|
|
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1075
Ian Knowles - CIYAM Lead Developer
|
|
February 05, 2014, 06:02:20 AM |
|
kill -9 after the time limit is up. Time limit = # NXT paid * milliseconds
You couldn't use "milliseconds", as the instructions would be processed at different speeds on different hardware. Understand that you'd ideally want *every node* to be able to verify *every transaction*, which is going to have to include any VM script operations performed (otherwise how could you ever trust the "state" of the specific instance?). I guess "light nodes" could skip that work, but that would lessen the security of the blockchain (if we are assuming that say only Hallmarked nodes are going to check the code has been executed correctly).
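The verification concern above boils down to re-execution: if the script is deterministic, any node can check a forger's claimed result by running the same script on the same input and comparing a hash of the resulting state. A sketch of that check (the state encoding is an assumption for illustration):

```python
# Verify a forger's work by deterministic replay: hash the state a script
# produces and compare it against the hash the forger claimed.
import hashlib
import json

def state_hash(state):
    # Canonical JSON encoding so every node hashes identical bytes.
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def verify(script, inputs, claimed_hash):
    return state_hash(script(inputs)) == claimed_hash
```

This is why determinism (and a deterministic cost metric) matters: without it, honest nodes on different hardware could compute different states and wrongly reject each other's blocks.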
|
|
|
|
l8orre
Legendary
Offline
Activity: 1181
Merit: 1018
|
|
February 05, 2014, 06:02:26 AM |
|
We need some kind of a competition. The goal is to find a language with min number of opcodes.
OH NO! NOT BRAINFUCK !!!!!!!!!!!!!!!!!
|
|
|
|
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1075
Ian Knowles - CIYAM Lead Developer
|
|
February 05, 2014, 06:05:18 AM |
|
We need some kind of a competition. The goal is to find a language with min number of opcodes.
OH NO! NOT BRAINFUCK !!!!!!!!!!!!!!!!! Apparently we managed to get even worse than that - I think maybe CfB might have changed his mind about this now.
|
|
|
|
Eadeqa
|
|
February 05, 2014, 06:06:26 AM |
|
kill -9 after time limit is up. time limit = # NXT paid * milliseconds
The time it would take to run a program would depend on the underlying computer. This may lead to a given program with the same input succeeding when run on a fast/powerful computer, but not succeeding when run on a less fast/powerful computer.
And it would mean the idea of "forging" on cell phones would be dead, aside from giving up the fast transactions - the 1000 tps some have been claiming.
|
|
|
|
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1075
Ian Knowles - CIYAM Lead Developer
|
|
February 05, 2014, 06:09:54 AM |
|
And it would mean the idea of "forging" on cell phones would be dead, aside from giving up the fast transactions - the 1000 tps some have been claiming.
*Exactly* - it seems there is a problem of "wanting to have your cake and eat it too" here. It will be almost impossible to have 1000+ TPS *and* be able to execute "arbitrary code" (and certainly not using low-end hardware). CfB had already spelled out that even achieving 1000+ TPS requires the "non-script" idea of very simple transactions and would also require a "binary protocol" (which currently is not being used). Now you want to add a "Turing complete" VM to the mix and still somehow get to that magical 1000+ TPS? So guys - I see it as being a case of wanting A or B - either you want A (1000+ TPS) or you want B (arbitrary code execution à la Ethereum).
|
|
|
|
jl777
Legendary
Offline
Activity: 1176
Merit: 1132
|
|
February 05, 2014, 06:12:22 AM |
|
if, on the other hand, we tried to centrally plan this and dictate what sorts of code are acceptable and what sorts aren't, then who knows what sort of innovation we may be stifling for lack of creativity and foresight on our part.
Good grief - suggesting ideas for a "useful instruction set" is now called "central planning"? So - let's say we go with the idea of the 1-instruction low-level language and we charge, say, 1 NXT per 100 "operations". Then a simple SHA256 operation will likely cost you hundreds of NXT - so we now have a completely useless VM for doing SHA256 (but at least it wasn't "centrally planned", I guess). No one has answered what it will do to the 1000 tps claims.
Indeed - that is the reason why I was suggesting we would need a "practical" instruction set if we are going to bother with this at all. I would imagine it'd be much cheaper to pay 1 NXT for a client-generated AM that does any complex calcs. The scripts can access AM data, so the idea is for the higher-level code to push expensive-to-calculate data down into an AM. How would it impact the decentralized exchanges without any third-party gateways when even simple SHA256 operations would be that costly? I think this will have serious security implications (and flaws). I doubt it would work at all.
CPU-intensive calculations will be done in the client (or a service module on a hub) and pushed down to the script via AM data. The problem is not calculation time; with proper allocation of work to the appropriate layer you have access to the local CPU and the hub server's CPU before anything goes into the molasses-slow VM. The key to designing a practical system is to minimize the work that the Turing script needs to do. Its goal is to generate AM data that is then processed by hub server modules and/or the client.

My thinking on automating gateways already has several solutions using multisig approaches, as I have posted before. However, I think there could well be a way to implement a method similar to XCP's escrow process followed by BTCpay. We just need to add LTCpay, DOGEpay, etc. There will certainly be bugs at first, but a gateway does not do any really complicated tasks. Deposit and withdraw. Not sure why you are so skeptical that it wouldn't work.

I don't understand the multisig methods with timeouts in the articles, but I am sure someone else will. I also think someone else will be able to figure out an analogue of XCP's escrow process. When someone says something is impossible, it only means that it is impossible for them to do it. I just keep asking until I find someone that says it is really difficult!

James
|
|
|
|
xyzzyx
Sr. Member
Offline
Activity: 490
Merit: 250
I don't really come from outer space.
|
|
February 05, 2014, 06:12:47 AM |
|
And it would mean the idea of "forging" on cell phones would be dead, aside from giving up the fast transactions - the 1000 tps some have been claiming.
*Exactly* - it seems there is a problem of "wanting to have your cake and eat it too" here. It will be almost impossible to have 1000+ TPS *and* be able to execute "arbitrary code" (and certainly not using low-end hardware). Not necessarily. Just don't expect the result to be available by the next block.
|
"An awful lot of code is being written ... in languages that aren't very good by people who don't know what they're doing." -- Barbara Liskov
|
|
|
Anon136
Legendary
Offline
Activity: 1722
Merit: 1217
|
|
February 05, 2014, 06:13:40 AM |
|
Well, ok, I may have misunderstood you, so let me just step back and ask some questions. Why should we set a price per operation? Why not let the forger determine how many operations he is willing to perform for a given price? Is the trouble here the fact that it would be unreasonable for the rest of the network to check the forger's work if the forger happened to be running on a really powerful computer?
Of course, if you don't set some sort of a price per instruction, then you'd end up "stopping the network". Sure, it makes some sense for a forger to determine how many operations they would be willing to execute, but then how would we actually "check" that this has been done correctly if other nodes are *not* doing the same work? BTW - I am not really so excited about the whole DAC thing (which I gather James is) and would instead like to see simpler "more useful" concepts being created (rather than "pie in the skynet" ideas). point taken
|
Rep Thread: https://bitcointalk.org/index.php?topic=381041
If one can not confer upon another a right which he does not himself first possess, by what means does the state derive the right to engage in behaviors from which the public is prohibited?
|
|
|
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1075
Ian Knowles - CIYAM Lead Developer
|
|
February 05, 2014, 06:15:19 AM |
|
Not necessarily. Just don't expect the result to be available by the next block.
CfB *himself* said 1000+ TPS would only be *possible* if the protocol were changed to binary and kept "dead simple". So - how has it now suddenly become *possible* to also execute (even just a few) VM instructions "per transaction" (as you can't assume they won't *all* want to do that) and still have 1000+ TPS? Does anyone else see the problem here?
|
|
|
|
jl777
Legendary
Offline
Activity: 1176
Merit: 1132
|
|
February 05, 2014, 06:17:18 AM |
|
kill -9 after the time limit is up. Time limit = # NXT paid * milliseconds
You couldn't use "milliseconds", as the instructions would be processed at different speeds on different hardware. Understand that you'd ideally want *every node* to be able to verify *every transaction*, which is going to have to include any VM script operations performed (otherwise how could you ever trust the "state" of the specific instance?). I guess "light nodes" could skip that work, but that would lessen the security of the blockchain (if we are assuming that say only Hallmarked nodes are going to check the code has been executed correctly). I can't see how we avoid relying on hub (hallmarked?) servers. For advanced services we need beefy servers.

I think CfB's plan was to simply count the number of instructions and then abort the interpreter session. If your loop exceeds the instruction limit, then it stops after, say, 10000 opcodes are executed. Something like that. Time would vary based on server spec, but not too dramatically. I guess it won't be an exact cost per millisecond, but an approximate cost per millisecond of execution time.

James
|
|
|
|
Anon136
Legendary
Offline
Activity: 1722
Merit: 1217
|
|
February 05, 2014, 06:18:13 AM |
|
We need some kind of a competition. The goal is to find a language with min number of opcodes.
OH NO! NOT BRAINFUCK !!!!!!!!!!!!!!!!! Apparently we managed to get even worse than that - I think maybe CfB might have changed his mind about this now. Well, this sort of thing would be a long way off anyway, even if we did do it. The more I think about it, the more I think about how important specialization is. These sorts of blockchains should probably best be built on a per-application basis - one unique blockchain per DAC. Part of the beauty of NXT forging is that multiple blockchains can coexist without competing for security resources (at least not in the way of a POW coin).
|
Rep Thread: https://bitcointalk.org/index.php?topic=381041
If one can not confer upon another a right which he does not himself first possess, by what means does the state derive the right to engage in behaviors from which the public is prohibited?
|
|
|
|