tomkat
|
|
April 12, 2017, 06:56:29 AM |
|
Is queuing the right way to go at all?

I don't really see any way around it, but maybe some other bright minds around here can come up with a solution. But the bottom line is we can easily have complex jobs that take considerable time to solve, and each submission would need to be validated. For example, if I create a job that takes 0.1 sec to solve and I dump 1000 submissions onto a SN, that node would be tied up for nearly 2 minutes (1000 × 0.1 s = 100 s) before it could do anything else. And this will take place while people are dumping hundreds if not thousands of legitimate POW submissions on the SN. I still don't know of a good solution... I still wish we didn't have POW in XEL and just focused on bounties. I know you'll say we need the POW logic ;-) I'm just not convinced, and I feel like it just complicates the validation / targeting / volume issues.

Are users allowed to dump submissions to nodes of their choice? With many SNs in the network, all those submissions could be handled by (probably) tens of SNs simultaneously, right? Assuming the jobs are being spread evenly between all SNs, is it really a problem at all?
|
|
|
|
Bgjjj2016
Sr. Member
Offline
Activity: 448
Merit: 250
Ben2016
|
|
April 12, 2017, 10:24:02 AM |
|
Hi everyone, how high is the block height supposed to go on Elastic Explorer?
|
My " I want that Old Toyota Camry very bad" BTC Fund :1DQU4oqmZRcKSzg7MjPLMuHrMwnbDdjQRM
|
|
|
onemanatatime
Full Member
Offline
Activity: 294
Merit: 101
Aluna.Social
|
|
April 12, 2017, 10:26:29 AM |
|
When do these tokens finally launch? And any plans for which exchange?
|
This time it's different.
|
|
|
coralreefer
|
|
April 12, 2017, 11:57:57 AM |
|
Is queuing the right way to go at all?

I don't really see any way around it, but maybe some other bright minds around here can come up with a solution. But the bottom line is we can easily have complex jobs that take considerable time to solve, and each submission would need to be validated. For example, if I create a job that takes 0.1 sec to solve and I dump 1000 submissions onto a SN, that node would be tied up for nearly 2 minutes (1000 × 0.1 s = 100 s) before it could do anything else. And this will take place while people are dumping hundreds if not thousands of legitimate POW submissions on the SN. I still don't know of a good solution... I still wish we didn't have POW in XEL and just focused on bounties. I know you'll say we need the POW logic ;-) I'm just not convinced, and I feel like it just complicates the validation / targeting / volume issues.

Are users allowed to dump submissions to nodes of their choice? With many SNs in the network, all those submissions could be handled by (probably) tens of SNs simultaneously, right? Assuming the jobs are being spread evenly between all SNs, is it really a problem at all?

Yes, I can target all my submissions to a specific SN. I'm not suggesting anyone do this, but it has to be accounted for that someone could do it.
|
|
|
|
ttookk
|
|
April 12, 2017, 11:59:23 AM |
|
Is queuing the right way to go at all?

I don't really see any way around it, but maybe some other bright minds around here can come up with a solution. But the bottom line is we can easily have complex jobs that take considerable time to solve, and each submission would need to be validated. For example, if I create a job that takes 0.1 sec to solve and I dump 1000 submissions onto a SN, that node would be tied up for nearly 2 minutes (1000 × 0.1 s = 100 s) before it could do anything else. And this will take place while people are dumping hundreds if not thousands of legitimate POW submissions on the SN. I still don't know of a good solution... I still wish we didn't have POW in XEL and just focused on bounties. I know you'll say we need the POW logic ;-) I'm just not convinced, and I feel like it just complicates the validation / targeting / volume issues.

Are users allowed to dump submissions to nodes of their choice? With many SNs in the network, all those submissions could be handled by (probably) tens of SNs simultaneously, right? Assuming the jobs are being spread evenly between all SNs, is it really a problem at all?

I think jobs must be checked by at least a portion of SNs, otherwise an SN could send bad jobs, be it by accident or with malicious intent. So, to check an SN's integrity, other SNs have to check what an SN does. Whether all SNs have to check all jobs, I don't know.
|
|
|
|
coralreefer
|
|
April 12, 2017, 12:13:58 PM |
|
Is queuing the right way to go at all?

I don't really see any way around it, but maybe some other bright minds around here can come up with a solution. But the bottom line is we can easily have complex jobs that take considerable time to solve, and each submission would need to be validated. For example, if I create a job that takes 0.1 sec to solve and I dump 1000 submissions onto a SN, that node would be tied up for nearly 2 minutes (1000 × 0.1 s = 100 s) before it could do anything else. And this will take place while people are dumping hundreds if not thousands of legitimate POW submissions on the SN. I still don't know of a good solution... I still wish we didn't have POW in XEL and just focused on bounties. I know you'll say we need the POW logic ;-) I'm just not convinced, and I feel like it just complicates the validation / targeting / volume issues.

Are users allowed to dump submissions to nodes of their choice? With many SNs in the network, all those submissions could be handled by (probably) tens of SNs simultaneously, right? Assuming the jobs are being spread evenly between all SNs, is it really a problem at all?

I think jobs must be checked by at least a portion of SNs, otherwise an SN could send bad jobs, be it by accident or with malicious intent. So, to check an SN's integrity, other SNs have to check what an SN does. Whether all SNs have to check all jobs, I don't know.

That is not correct; the current design does not require multiple SNs to validate a job.
|
|
|
|
ImI
Legendary
Offline
Activity: 1946
Merit: 1019
|
|
April 12, 2017, 12:34:08 PM |
|
Is queuing the right way to go at all?

I don't really see any way around it, but maybe some other bright minds around here can come up with a solution. But the bottom line is we can easily have complex jobs that take considerable time to solve, and each submission would need to be validated. For example, if I create a job that takes 0.1 sec to solve and I dump 1000 submissions onto a SN, that node would be tied up for nearly 2 minutes (1000 × 0.1 s = 100 s) before it could do anything else. And this will take place while people are dumping hundreds if not thousands of legitimate POW submissions on the SN. I still don't know of a good solution... I still wish we didn't have POW in XEL and just focused on bounties. I know you'll say we need the POW logic ;-) I'm just not convinced, and I feel like it just complicates the validation / targeting / volume issues.

Are users allowed to dump submissions to nodes of their choice? With many SNs in the network, all those submissions could be handled by (probably) tens of SNs simultaneously, right? Assuming the jobs are being spread evenly between all SNs, is it really a problem at all?

I think jobs must be checked by at least a portion of SNs, otherwise an SN could send bad jobs, be it by accident or with malicious intent. So, to check an SN's integrity, other SNs have to check what an SN does. Whether all SNs have to check all jobs, I don't know.

SNs are checked through guard nodes, afaik.
|
|
|
|
logictense
|
|
April 12, 2017, 12:34:44 PM |
|
Is queuing the right way to go at all?

I don't really see any way around it, but maybe some other bright minds around here can come up with a solution. But the bottom line is we can easily have complex jobs that take considerable time to solve, and each submission would need to be validated. For example, if I create a job that takes 0.1 sec to solve and I dump 1000 submissions onto a SN, that node would be tied up for nearly 2 minutes (1000 × 0.1 s = 100 s) before it could do anything else. And this will take place while people are dumping hundreds if not thousands of legitimate POW submissions on the SN. I still don't know of a good solution... I still wish we didn't have POW in XEL and just focused on bounties. I know you'll say we need the POW logic ;-) I'm just not convinced, and I feel like it just complicates the validation / targeting / volume issues.

Are users allowed to dump submissions to nodes of their choice? With many SNs in the network, all those submissions could be handled by (probably) tens of SNs simultaneously, right? Assuming the jobs are being spread evenly between all SNs, is it really a problem at all?

Is there any practical use for this? I mean, are there any parties that would actually be interested in posting jobs that need solving to the XEL network? Is there really a chance that somebody - a hypothetical person - has nothing better to do than submit dumb jobs, when they could simply put the entire Elastic thread on ignore and not bother with it?
|
|
|
|
bspus
Legendary
Offline
Activity: 2165
Merit: 1002
|
|
April 12, 2017, 02:31:24 PM |
|
When do these tokens finally launch? And any plans for which exchange?

Judging by the messages posted here between the devs and the problems they are facing, I'd say we are still months away.
|
|
|
|
jeffthebaker
Legendary
Offline
Activity: 1526
Merit: 1034
|
|
April 12, 2017, 02:56:29 PM |
|
When do these tokens finally launch? And any plans for which exchange?

Judging by the messages posted here between the devs and the problems they are facing, I'd say we are still months away.

Honestly, not a huge issue. The more time spent making this project perfect, the more it'll be used and the more people will value it. I think a lot of people in the crypto world have very high hopes for what Elastic is going to be when it comes to fruition. Really excited to see what's to come.
|
|
|
|
Evil-Knievel
Legendary
Offline
Activity: 1260
Merit: 1168
|
|
April 12, 2017, 04:43:44 PM Last edit: April 12, 2017, 05:11:33 PM by Evil-Knievel |
|
Just thinking out loud: do we really need the bounty announcements in the new SN scheme?

It depends on how well the SN can handle a huge volume of submissions... but I believe it will have to do this either way. Yesterday, I submitted a job that had an issue which allowed every pass of the interpreter to find a "Bounty", so the miner tried to send hundreds of submissions pretty quickly... this is something anyone could do (i.e. create a simple job that allows legitimate bounty submissions to spam the SN). So I thought your original design had a small fee on each of these submissions, along with the announcement, in order to deter this kind of behaviour. But if you have another approach that simplifies things, I'd be all for it.

Well, first of all, a job has a natural bounty limit ... submissions beyond this level are not permitted. But of course, there is a grace period between the submissions and their actual inclusion in the blockchain (or its unconfirmed transaction cache). In this time window it is possible to flood as much as you can. We could add an "SN rate limit" which would allow no more than x transactions per node per second. What sucks more is the lack of direct feedback from the SN: since we queue at the moment, the miner does not even know if his submission was dropped, accepted or denied. We really have to think this through! Is queuing the right way to go at all?

Btw: I could reproduce your bug today. I just could not yet find out why it happens.

Fixed that bug. We will have to make sure that hashes and multiplicators are at most 32 bytes long ... not 33 as could happen before.

Fix is here: https://github.com/OrdinaryDude/elastic-core/commit/b95596e572af659cb7355a68643a58098579109f
Extra checks in the API are here: https://github.com/OrdinaryDude/elastic-core/commit/4870aa5c22786e27fbfdc37665a45e82f99410c9
Do not use that yet!
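[Editor's note: the "SN rate limit" floated above could be as simple as a fixed-window counter keyed by sender account. A minimal sketch in Java, assuming a hypothetical SubmissionGate class - the name, the limit value, and the integration point are illustrative only and not part of elastic-core:]

Code:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the proposed "SN rate limit": accept at most
// MAX_PER_SECOND submissions per sender account per second.
public final class SubmissionGate {
    private static final int MAX_PER_SECOND = 10; // illustrative value for "x"

    private static final class Window {
        long windowStart; // epoch second the current window began
        int count;        // submissions counted in that window
    }

    private final Map<Long, Window> perSender = new ConcurrentHashMap<>();

    /** Returns true if a submission from senderId may enter the queue. */
    public boolean tryAccept(final long senderId) {
        final long now = System.currentTimeMillis() / 1000L;
        final Window w = perSender.computeIfAbsent(senderId, id -> new Window());
        synchronized (w) {
            if (w.windowStart != now) { // a new one-second window begins
                w.windowStart = now;
                w.count = 0;
            }
            return ++w.count <= MAX_PER_SECOND;
        }
    }
}

[Rejecting at such a gate with an explicit error back to the miner would also address the feedback problem mentioned above: a dropped submission would no longer disappear silently into the queue.]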
|
|
|
|
coralreefer
|
|
April 12, 2017, 07:12:47 PM |
|
Just thinking out loud: do we really need the bounty announcements in the new SN scheme?

It depends on how well the SN can handle a huge volume of submissions... but I believe it will have to do this either way. Yesterday, I submitted a job that had an issue which allowed every pass of the interpreter to find a "Bounty", so the miner tried to send hundreds of submissions pretty quickly... this is something anyone could do (i.e. create a simple job that allows legitimate bounty submissions to spam the SN). So I thought your original design had a small fee on each of these submissions, along with the announcement, in order to deter this kind of behaviour. But if you have another approach that simplifies things, I'd be all for it.

Well, first of all, a job has a natural bounty limit ... submissions beyond this level are not permitted. But of course, there is a grace period between the submissions and their actual inclusion in the blockchain (or its unconfirmed transaction cache). In this time window it is possible to flood as much as you can. We could add an "SN rate limit" which would allow no more than x transactions per node per second. What sucks more is the lack of direct feedback from the SN: since we queue at the moment, the miner does not even know if his submission was dropped, accepted or denied. We really have to think this through! Is queuing the right way to go at all?

Btw: I could reproduce your bug today. I just could not yet find out why it happens.

Fixed that bug. We will have to make sure that hashes and multiplicators are at most 32 bytes long ... not 33 as could happen before.

Fix is here: https://github.com/OrdinaryDude/elastic-core/commit/b95596e572af659cb7355a68643a58098579109f
Extra checks in the API are here: https://github.com/OrdinaryDude/elastic-core/commit/4870aa5c22786e27fbfdc37665a45e82f99410c9
Do not use that yet!

Hi EK, can you please confirm if this was due to something the miner sent, or is it an issue only in the core server? I checked the miner code and I don't see how it could send more than 32 bytes.
|
|
|
|
Evil-Knievel
Legendary
Offline
Activity: 1260
Merit: 1168
|
|
April 12, 2017, 08:11:48 PM |
|
Hi EK, can you please confirm if this was due to something the miner sent, or is it an issue only in the core server? I checked the miner code and I don't see how it could send more than 32 bytes.

Took me a few days to find out, but the bug was the following: the core server only allows a hash of max. 32 bytes length in the BountyAnnouncement transaction. When something longer than that was added, the hash was simply set to null:

Code:
PiggybackedProofOfBountyAnnouncement(final ByteBuffer buffer, final byte transactionVersion)
        throws NxtException.NotValidException {
    super(buffer, transactionVersion);
    this.workId = buffer.getLong();
    final short hashSize = buffer.getShort();
    // Any size outside (0, 32] leaves the hash null instead of failing,
    // and the oversized bytes are never consumed from the buffer.
    if ((hashSize > 0) && (hashSize <= Constants.MAX_HASH_ANNOUNCEMENT_SIZE_BYTES)) {
        this.hashAnnounced = new byte[hashSize];
        buffer.get(this.hashAnnounced, 0, hashSize);
    } else {
        this.hashAnnounced = null;
    }
}

When signing such a transaction, which came in via HTTP, everything apparently went fine ... the core server just signed the BountyAnnouncement with an empty hash and submitted it. The verification worked fine as well, because Attachments have no verification of their own. The problem happened in the rebroadcast: imagine someone originally attached 33 bytes (instead of 32) in the hash; the hash would then be nulled - length zero. But the original transaction is still broadcast in its original form, causing 33 extra "unexpected bytes" to be left over after parsing the TX. This is what the error you posted earlier stated.

I fixed it and learned ... variable-length input sucks a lot. I tried to port many features to fixed-length inputs (as you can see from my commits today). However, the 33 bytes must have come from somewhere, i.e., the miner. I suspect an extra byte for a minus sign, maybe?
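[Editor's note: the hardening described above amounts to rejecting the transaction outright instead of silently nulling the hash. A minimal sketch of such a strict fixed-length check, reusing the names from the snippet above - the linked commits may implement it differently:]

Code:
final short hashSize = buffer.getShort();
// Strict fixed-length parsing: anything other than exactly 32 bytes is
// rejected instead of nulled, so no unparsed bytes can be left over
// when the transaction is rebroadcast.
if (hashSize != Constants.MAX_HASH_ANNOUNCEMENT_SIZE_BYTES) {
    throw new NxtException.NotValidException(
            "Bounty announcement hash must be exactly 32 bytes, got " + hashSize);
}
this.hashAnnounced = new byte[hashSize];
buffer.get(this.hashAnnounced, 0, hashSize);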
|
|
|
|
coralreefer
|
|
April 12, 2017, 09:05:12 PM |
|
Hi EK, can you please confirm if this was due to something the miner sent, or is it an issue only in the core server? I checked the miner code and I don't see how it could send more than 32 bytes.

Took me a few days to find out, but the bug was the following: the core server only allows a hash of max. 32 bytes length in the BountyAnnouncement transaction. When something longer than that was added, the hash was simply set to null:

Code:
PiggybackedProofOfBountyAnnouncement(final ByteBuffer buffer, final byte transactionVersion)
        throws NxtException.NotValidException {
    super(buffer, transactionVersion);
    this.workId = buffer.getLong();
    final short hashSize = buffer.getShort();
    // Any size outside (0, 32] leaves the hash null instead of failing,
    // and the oversized bytes are never consumed from the buffer.
    if ((hashSize > 0) && (hashSize <= Constants.MAX_HASH_ANNOUNCEMENT_SIZE_BYTES)) {
        this.hashAnnounced = new byte[hashSize];
        buffer.get(this.hashAnnounced, 0, hashSize);
    } else {
        this.hashAnnounced = null;
    }
}

When signing such a transaction, which came in via HTTP, everything apparently went fine ... the core server just signed the BountyAnnouncement with an empty hash and submitted it. The verification worked fine as well, because Attachments have no verification of their own. The problem happened in the rebroadcast: imagine someone originally attached 33 bytes (instead of 32) in the hash; the hash would then be nulled - length zero. But the original transaction is still broadcast in its original form, causing 33 extra "unexpected bytes" to be left over after parsing the TX. This is what the error you posted earlier stated.

I fixed it and learned ... variable-length input sucks a lot. I tried to port many features to fixed-length inputs (as you can see from my commits today). However, the 33 bytes must have come from somewhere, i.e., the miner. I suspect an extra byte for a minus sign, maybe?

Thx for the explanation. Many months back we had this same issue, and it was due to a negative sign in the miner... I thought I got rid of that issue. I'll take a look again and get it fixed.
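[Editor's note: the classic way a 32-byte hash grows to 33 bytes is a leading sign byte added when the value is serialized as a signed integer. The miner is not Java, so the snippet below is only an illustration of the same pitfall, not the miner's actual code:]

Code:
import java.math.BigInteger;

public final class SignByteDemo {
    public static void main(final String[] args) {
        final byte[] hash = new byte[32];
        hash[0] = (byte) 0x80; // top bit set: looks "negative" when read as signed

        // Serializing the hash as a signed number: toByteArray() prepends a
        // 0x00 sign byte, so 33 bytes go over the wire instead of 32.
        final byte[] encoded = new BigInteger(1, hash).toByteArray();
        System.out.println(encoded.length); // prints 33

        // Fixed-width fix: always emit exactly 32 bytes, dropping the sign byte.
        final byte[] fixed = new byte[32];
        System.arraycopy(encoded, encoded.length - 32, fixed, 0, 32);
        System.out.println(fixed.length); // prints 32
    }
}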
|
|
|
|
|
Bgjjj2016
Sr. Member
Offline
Activity: 448
Merit: 250
Ben2016
|
|
April 13, 2017, 02:30:09 AM |
|
I see clear communication among our wonderful devs, along with their solid determination and effort... all great signs that good news will arrive soon.
|
My " I want that Old Toyota Camry very bad" BTC Fund :1DQU4oqmZRcKSzg7MjPLMuHrMwnbDdjQRM
|
|
|
BTCspace
|
|
April 13, 2017, 03:25:21 AM |
|
|
running farm worldwide
|
|
|
BTCspace
|
|
April 13, 2017, 03:25:47 AM |
|
maybe we need to rebuild the website?
|
running farm worldwide
|
|
|
tomkat
|
|
April 13, 2017, 06:42:42 AM |
|
maybe we need to rebuild the website?

The website elastic-project.com is apparently dead, and since Lannister is the admin, I guess that domain is lost until it can be renewed by someone else. We'll probably need a new domain, and I'd suggest that you change your signature to something else, like https://talk.elasticexplorer.org/ for instance.
|
|
|
|
rallier
Legendary
Offline
Activity: 1848
Merit: 1334
just in case
|
|
April 13, 2017, 07:57:42 AM |
|
maybe we need to rebuild the website?

The website elastic-project.com is apparently dead, and since Lannister is the admin, I guess that domain is lost until it can be renewed by someone else. We'll probably need a new domain, and I'd suggest that you change your signature to something else, like https://talk.elasticexplorer.org/ for instance.

Pls contribute to https://github.com/elastic-coin/elastic-project.github.io
|
signature not found.
|
|
|
|