Oh, and regarding "chat" (or, as I prefer, messaging): I can see it being used, especially in partner situations, for discussing and coordinating payments, among other things. And if it can be a tunneled, encrypted connection that is never recorded by a third party, it could be a great tool; only face to face would be better!
I was more thinking we should support encrypted JSON blobs being sent to your friends within the network. This allows the clients to actually talk to each other directly and decide what they implement on the client side. Something like this:

encrypted {'type' : 'message', 'data-from' : '@user2', 'text' : 'hello'}

The first use of this I see is allowing you, as a client on the network, to send your friends messages like:

{'type' : 'next-use-pubkey', 'data-from' : '@user2', 'pubkey' : 'Xaddr3'}

This would tell your friend, "use this address for the next payment". So you can imagine all of the things we can do with a system like that; we could have different wallets that support various advanced functionality. For example, private chatting and group chatting become really easy. Or how about a command that implements a whole new window in the wallet? I can think of a million things we could do with it:

- Negotiating multi-sig transactions, through the network!
- Creating a fiat/dash "local bitcoins" market within the currency itself
- Negotiating an arbitrated transaction between parties
- Sending a "friend" deliverables for a contract

All of these things can simply occupy new command "types" within this transport layer.
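The typed-message idea above can be sketched as a simple dispatcher. The field names ('type', 'data-from') and the two message types come from the examples in the post; everything else (function names, the encryption step, which is omitted here) is hypothetical, just to illustrate how unknown types let wallets extend the protocol without breaking old clients.

```python
import json

def make_message(msg_type, sender, **fields):
    """Build a typed message blob, following the post's field naming."""
    blob = {'type': msg_type, 'data-from': sender}
    blob.update(fields)
    return json.dumps(blob)  # would be encrypted before sending

def handle_message(raw):
    """Dispatch on the 'type' field. Unknown types are ignored,
    so new command types can be added without breaking old clients."""
    msg = json.loads(raw)
    if msg['type'] == 'message':
        return ('chat', msg['text'])
    if msg['type'] == 'next-use-pubkey':
        return ('update-address', msg['pubkey'])
    return ('ignored', msg['type'])

print(handle_message(make_message('next-use-pubkey', '@user2', pubkey='Xaddr3')))
```

A multi-sig negotiation or arbitration flow would just be more branches on 'type', handled only by wallets that implement them.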
|
|
|
I just forked DarkWallet and mocked up a client, what do you think? It seems like it might be a great platform for connecting to DAPI. The code is really nice as well. If anyone looking at this is really good at JavaScript and wants to help, I see a role for you! Email me at evan@dash.org.
|
|
|
Uh huh, keep trying to spin your massive disappointment. What happened to the Evolution demo Evan promised in Miami? Pushing it back 1 year sounds good to you?

Yeah, so disappointing to hear all this DASH news. Such a terrible shame to see all these whitepapers and devs hard at work. So depressing to see my masternodes making coins every day. Meanwhile, what joy to listen to Adam White's incisive commentary. The boy is a legend. Go Adam. Go XMR!

We will be writing the software for this project in stages. The first stage will take about 2 months to have a very early prototype for Dash Evolution that includes a basic implementation of DashDrive, Primitives, DAPI and a simple T3 wallet. In six to eight months, we should be entering the testnet phase with most basic functionality. In 12-18 months, we plan for the first release version (a stable prototype).

Keep in mind, we're going to start revising the design tomorrow to try and figure out the best possible design to create. I think I've found a way to simplify this project by about 75%, which means we can have something much sooner. I'll start working on a proposal.
|
|
|
This is how the algorithm currently works: users submit lists of inputs and outputs, and the masternodes either accept them or deny them. If the inputs and outputs violate a privacy rule (they must be uniform, etc.), then the user will be denied access to DarkSend. Selection of the nodes to use is also random... what's the issue?

We decided this wasn't a priority because we have built-in Tor support. This was during the exploration of Darksend and other technologies; I hadn't made up my mind about the direction of the project yet.

By the way, this is currently where the Evolution project is at. There are many parts of it that are up in the air, and we're figuring out what will make the best implementation possible, then we're going with that. This is just how development works for really adventurous projects. Is anyone else creating multiple-tier p2p networks capable of paying autonomous agents that provide service for the network?

Implementing IP obfuscation would push Evolution back about 2-3 months. Then we would use it in v12 for about a year, then throw it out as soon as Evolution came along, and then I would have to redo the work again... What sense does that make?
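The accept/deny rule described above can be sketched roughly as follows. This is a hedged illustration, not the actual Darksend code: the denomination set, function names, and the exact checks are assumptions; the point is only that a masternode rejects any submission whose outputs are not uniform denominations.

```python
# Illustrative denominations; the real Darksend set may differ.
DENOMINATIONS = {100.0, 10.0, 1.0, 0.1}

def accept_submission(inputs, outputs):
    """Hypothetical masternode check: accept a mixing submission only if
    every output is a uniform denomination and inputs cover outputs."""
    if not all(v in DENOMINATIONS for v in outputs):
        return False          # violates the uniformity privacy rule -> denied
    return sum(inputs) >= sum(outputs)

print(accept_submission([10.0, 1.0], [10.0, 1.0]))  # uniform -> accepted
print(accept_submission([10.5], [10.5]))            # non-uniform -> denied
```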
|
|
|
Just noticed this. Someone tell them to redirect HTTP to the HTTPS version.

I think you just did. And what status code should they use for the redirect for maximum SEO friendliness - a 301 permanent? Then make sure that all promoted home page links contain a consistent protocol: http or https?

Why use https for just regular stuff? It's slower.

Thanks, I was missing a couple lines in the htaccess. Fixed now:

RewriteEngine On
RewriteCond %{HTTPS} off
# First rewrite to HTTPS:
# Don't put www. here. If it is already there it will be included, if not
# the subsequent rule will catch it.
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# Now, rewrite any request to the wrong domain to use www.
RewriteCond %{HTTP_HOST} !^www\.
RewriteRule ^(.*)$ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
|
|
|
Well, duh, of course it was CPU-only at the start. I'm asking if the intent was to make it easier on GPUs than Quark.
I think he just did it his own way, as he didn't know about Quark, which is the impression I got long ago when I asked - don't ask me to find proof, I'll not waste my time. Just giving you an answer if you're interested.

I suppose it's possible. Meh, hardly matters, I suppose. I just want to know his intentions for the PoW - what qualities he wanted it to have.

I think Evan should chime in... that is the best answer. Even at https://dashpay.atlassian.net/wiki/display/DOC/X11 (created by Balazs Kiraly, last modified on Jun 01, 2015) the author said:

It was probably inspired by the chained-hashing approach of Quark, adding further "depth" and complexity by increasing the number of hashes, yet it differs from Quark in that the rounds of hashes are determined a priori instead of having some hashes being randomly picked. The X11 algorithm uses multiple rounds of 11 different hashes (blake, bmw, groestl, jh, keccak, skein, luffa, cubehash, shavite, simd, echo), thus making it one of the safest and more sophisticated cryptographic hashes in use by modern cryptocurrencies. The name X11 is not related to the open source GUI server that provides a graphical interface to unix/linux users.

Balanced CPU/GPU mining

When Darkcoin was initially launched with X11, it was only mineable through CPU mining programs. After a spike in the Darkcoin network hashrate in early February 2014, it was speculated that someone might have made a GPU miner, and thus a bounty of over 3000 DRK was given in order to assist in the creation of a GPU miner client that could be publicly available, for fairness reasons. By mid-February the GPU client was launched, and by late February it was optimized for higher hashrates. At the same time, the CPU mining clients evolved to increase their speed by using the SSE2/3/4, AVX and AES instruction sets.
At that point it became evident that the hashrate difference between GPU and CPU implementations was not that chaotic, although GPUs still took the crown in terms of energy efficiency. Top-of-the-line and tuned CPUs, like 6-core i7s running at 4.5 GHz, produced 880 khashes/second, while GPUs like AMD's 280 and 290 gave 3 times as much. A ratio of 1:3 was thus established for the fastest CPUs versus the fastest AMD GPUs - which is significantly better than Scrypt or SHA256 and allows CPU users to mine X11 coins. By June 2014, the ratio had gone up to 1:6 due to the evolution of GPU mining programs and the relative stability of CPU mining programs. It should be noted that these ratios are more a reflection of the mining programs than an inherent property of the algorithm itself, and thus the ratios can change depending on the development progress of the CPU and GPU mining clients.

Is that you, wolf0, that got the 3000 DRK? I highly doubt Evan didn't know about Quark; Quark launched months ahead of Dash.
Quark: (blake, bmw, groestl, jh, keccak, skein)
X11: (blake, bmw, groestl, jh, keccak, skein, luffa, cubehash, shavite, simd, echo)

Haha, no. I would be ashamed to have written that miner.

Kindly describe the miner? In layman's terms... is it crippled? Purposely unoptimized? Is it written by savages? You devs kind of have a "third eye" in this crypto sphere. Enlighten us.

It was done for a bounty, and it works. I'm not going to demonize the guy - X11 is a lot of logic to optimize, and using copypasted CPU libraries was pretty much the fastest way to get a GPU miner put together for the 3000 DRK. But it was dead slow. This was the darkcoin kernel, not Girino's darkcoin-mod kernel. I wouldn't say it was purposely crippled, just a little above minimal effort put into it.
For me, copypasted libraries are kind of crippling in my eyes, if that miner stayed around for a while before any decent GPU miner was created... thanks for another insight.

IMO, the real decider is intent. I'm pretty sure it wasn't done to slow it on purpose; it was to be the first done with a GPU miner and get the bounty. That kind of reward rarely incentivizes excellent work. Girino's code was stolen from binaries he made with a fee embedded - which is what we call darkcoin-mod to this day. I saw that code and it was still terrible, so I improved it. Thankfully, my sources haven't leaked, but the binaries have.

The idea behind the launch of Xcoin and X11 was to have a brand new algorithm. My code was based off of Litecoin, Quark (a few of the hashes) and Primecoin (difficulty algorithm). I went and looked up all of the SHA3 candidates and took the rest, then I removed a lot of the logical switches from the hashing algo that Quark used. I wanted to create a new algorithm, so that we went through the same stages as Bitcoin and Litecoin did: first CPU, then GPU, FPGA, then ASIC. That's the general path that these currencies go down, and I thought, since the two most successful followed that, my currency should as well.

As for the GPU miner, it was contracted for a few hundred dollars worth of Dash, and we got a miner that was pretty much unoptimized. But that was really good for the time and allowed a good portion of time for the network to update to GPUs without making CPU mining unprofitable instantaneously.

"After a spike in the Darkcoin network hashrate in early February 2014, it was speculated that someone might have made a GPU miner and thus a bounty of over 3000 DRKs was given in order to assist in the creation of a GPU miner client that could be publicly available, for fairness reasons."
That's pretty much exactly what happened.
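The fixed-order chaining the wiki excerpt describes can be illustrated in a few lines. Python's hashlib doesn't ship most of the eleven actual X11 primitives, so the sha3/blake2 functions below are stand-ins, chosen only to show the structural point: the chain of hashes is fixed a priori, unlike Quark's randomly picked rounds.

```python
import hashlib

# Stand-in hash functions; real X11 chains 11 SHA-3 candidate algorithms
# (blake, bmw, groestl, jh, keccak, skein, luffa, cubehash, shavite, simd, echo).
CHAIN = [hashlib.blake2b, hashlib.sha3_256, hashlib.sha3_512,
         hashlib.blake2s, hashlib.sha3_384]

def chained_hash(data: bytes) -> bytes:
    """Feed each round's digest into the next hash in a fixed order."""
    digest = data
    for h in CHAIN:          # order known in advance: no random selection
        digest = h(digest).digest()
    return digest

print(chained_hash(b'block header').hex())
```

An attacker (or ASIC designer) must implement every algorithm in the chain, which is the "depth" the excerpt refers to.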
|
|
|
Anyone have the link to the masternode count graph?
|
|
|
This is worth quoting! YEAH, 6KK baby! Hadn't noticed! Great work, blockchain.
|
|
|
It's all clear about variance. The weird thing is the incremented "lastpayment" entry in the list without a real payment occurring. I believe an updated "lastpayment" (from ./dash-cli masternodelist full; I observe it from 3 of my VPSes) means that consensus decided some masternode must be paid - and there's no way to avoid that since enforcement is active. Correct me if I'm wrong. It's not about bugs in the monitoring scripts. Looks like dashninja.pl calculates "Last Paid=1d8h37m20s" based on the "masternodelist full" output, while dashwhale shows "Lastpay=6 days ago" based on the real payment transactions in the blockchain. Anyway, I'll report the next payment.

Yes, that's correct! Dashwhale's lastpay status is derived from the blockchain; nextpay is derived from dashd. So your node moved from the 10% of masternodes which will be paid within a few hours to the bottom of the payment queue, without getting paid and without having been offline, which might indicate a rare bug. Since it's so rare, I suppose it's more worthwhile to dedicate the core team's dev power to Evolution.

Actually, the way the masternode payments work is slightly different. Once your masternode moves into the 10% of masternodes eligible for payment, the system just uses some simple math to determine when you'll get paid. There's not a set time that nodes get paid; instead it's defined with probabilities. Moocowmoo explains it really nicely here (I think the math is correct):
A node is eligible for selection when 90% of the other nodes have already been selected.
Once you're due to be selected, your chances are 1 in 326 per block (10% of masternodes, the length of the selection queue), or about:
- 63% in the first 13 hours (326 blocks)
- 82% in the first 24 hours (576 blocks)
- 97% in the first 48 hours (1152 blocks)
- 99.5% in the first 72 hours (1728 blocks)
- 99.91% in the first 96 hours (2304 blocks)
where the probability of not being selected after a given number of blocks is (325/326)^blocks, so the chance of having been paid is P = 1 - (325/326)^blocks.
This will shift a little as the total masternode count changes, but the probabilities will stay close to these numbers.
Now, since other unpaid masternodes are constantly being swapped into the selection queue, and selection depends on a competition between cryptographic hashes, it is possible your node will fall OUT of the queue before being paid (causing a skipped payment cycle). This is a rare occurrence, but double payments also occur about 0.6% of the time, so over time payments should average out.
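Moocowmoo's numbers above follow directly from the geometric formula. A small sketch reproducing them (the queue length of 326 comes from the post; function names are just for illustration):

```python
# P(paid within n blocks) = 1 - (325/326)**n for a 326-node selection queue:
# each block, the chance of NOT being picked is 325/326.
QUEUE = 326

def p_paid(blocks, queue=QUEUE):
    """Probability of having been paid within `blocks` blocks."""
    return 1 - ((queue - 1) / queue) ** blocks

for hours, blocks in [(13, 326), (24, 576), (48, 1152), (72, 1728), (96, 2304)]:
    print(f"{hours:3d}h ({blocks:4d} blocks): {p_paid(blocks):.2%}")
```

Running this gives roughly 63%, 82%, 97%, 99.5% and 99.9%, matching the list above.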
As moo points out, there is a possibility to fall out of the queue and not receive payment within a cycle. This can only happen when your node receives two votes for a given block and another node received more at the same time. This is extremely rare, as it would require a consensus disagreement among the masternode network; those are more common shortly after upgrading to a new version, then they are quickly resolved and do not occur again except in very uncommon situations. This is also not a bug, but it will go away with Evolution, as will the probability-based design. I found another design that is much more memory efficient and isn't based on probabilities, but instead will establish a set list of payees. I believe that's a lot better as the long-term solution for us, so we'll move to that.
|
|
|
I think your payment hit a superblock (the blocks where a budget proposal is paid), which, IMO, should be repaired in the code. I think it's so infrequent that it wasn't a priority. But I think these superblocks are coming more often, as we have many budgets being paid, and this will really start to impact everyone sooner or later (not to say your impact wasn't important) and will affect masternode and mining income adversely. I mean, does this only affect MNs? Or also miners? It just needs to be fixed at the first opportunity, imo.

Nope, the budget code will never cause a masternode not to be paid. It just pushes back the masternode's payment a few minutes on the network.
|
|
|
Waaaait, fun! Does this mean I can send a regular TX, and instantly double-spend it using IX, forcing any miner who mines the regular TX to get their block orphaned?
(TX won) If you release a normal TX, then IX a moment after: in this case, the IX transaction doesn't get any signatures, so it would never get approved. (IX won) Alternatively, if you release them at exactly the same time and half of the network accepted the normal TX: when the IX is confirmed, the other daemons would undo the TX in the memory pool and apply the IX transaction.

I get it. To game the system, you need a masternode, and if you have a masternode, you're getting paid enough to not want to undermine people's faith in the currency.

Not really. You must remember, the core problem in Bitcoin is that if there are two separate transactions on the network trying to use the same inputs, the network doesn't know which the miners will go with. If you controlled a quorum, you could only signal which transaction the network will take. You still can't attack anything; i.e., everyone on the network will also know which is valid within moments, and the other transaction will be rejected by the entire network at once.
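The mempool behaviour described above (the "IX won" case) can be sketched as a replacement rule: a quorum-locked IX evicts a conflicting normal TX, but never the other way around. All names here are illustrative, not the actual daemon's data structures.

```python
# input -> (txid, locked): a toy mempool keyed by the spent input.
mempool = {}

def add_tx(txid, tx_input, locked=False):
    """Hypothetical conflict rule: an IX lock displaces a normal TX
    spending the same input; nothing displaces an existing lock."""
    existing = mempool.get(tx_input)
    if existing is None:
        mempool[tx_input] = (txid, locked)
        return 'accepted'
    prev_txid, prev_locked = existing
    if locked and not prev_locked:
        mempool[tx_input] = (txid, locked)   # IX wins the race
        return 'replaced-normal-tx'
    return 'rejected-conflict'               # normal TX can't displace a lock

print(add_tx('tx1', 'inputA'))               # accepted
print(add_tx('ix1', 'inputA', locked=True))  # replaced-normal-tx
print(add_tx('tx2', 'inputA'))               # rejected-conflict
```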
|
|
|
Why can't masternodes write the transactions in some kind of blockchain database?

We already use something like the blockchain to store InstantX transactions and where the users promised the funds to go. This "blockchain" is managed by quorum messages; you can't add to it as a user. If any new blocks or txes conflict, they are rejected by the entire network. That is what this data structure is for: https://github.com/dashpay/dash/blob/master/src/instantx.cpp#L23

If a miner mines a block and includes a conflicting transaction, or a big miner causes a reorg and removes that IX transaction, then yeah, you'd be fucked.

You can't cause a reorg, because you can't get the block in question approved by the network in the first place. Miners' blocks have to qualify to get accepted by the network; it's not as simple as the Bitcoin system. Even if you could do a reorg, IX only goes up to $2000 worth of coin, and a reorg would cost more than that in processing power to pull off. You can't simply send the conflicting transaction to a miner; they have the list of IX transactions and will already side with the masternode network. https://github.com/dashpay/dash/blob/master/src/main.cpp#L2973

So InstantX can't be trusted?
Given that you can't get a block approved with a conflicting transaction, it can be trusted 100% of the time. These attacks are more theoretical, whereas the double-spend attack that we're stopping is trivial to pull off. On Bitcoin, anyone can double spend against you by submitting two conflicting transactions to separate edges of the network; there is no question our network is safer to do business on because of this. For more information read 4.2, 4.3, 4.4 and 4.5: https://www.dashpay.io/wp-content/uploads/2014/09/InstantTX.pdf
|
|
|
Sometimes it's good to just start fresh; I've made four separate discoveries so far.

I'm sorry, but I've got to agree with the trolls here: posts like this one contain no useful information and only create the impression that the dev is trying to pump their own coin. So if you're unwilling to share anything of substance, it might be better not to post on the development progress at all. Did I mention that I hate those teaser-based marketing strategies?

I was waiting for someone to ask what I discovered... Basically, something called masternode input-age-based quorum layering. Quorums are created using the age of the masternodes: 25% of each quorum is more than 1.5 years old, the next 25% is more than 1 year old, the next 25% is more than 6 months old, and the final 25% is any masternode that is newer than that. It guarantees you can't control a quorum by just buying coins from an exchange to make new masternodes; there will always be masternodes that are really, really old.

That will create a market for masternodes. Maybe instead of building quorums with all 25% of the nodes from the same date range, use an even distribution of nodes from each date range. That would eliminate any specific value to old or new nodes.

That is how I am going to write it.
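The even-distribution variant settled on above can be sketched as drawing an equal share of quorum members from each age band. The band cut-offs follow the post (1.5 years, 1 year, 6 months); the plain random sampling and all names are assumptions for illustration, since the real selection would be driven by deterministic hashing, not random.sample.

```python
import random

# Age bands in days, following the post: >1.5y, 1y-1.5y, 6mo-1y, <6mo.
BANDS = [(540, None), (365, 540), (180, 365), (0, 180)]

def build_quorum(nodes, size=8):
    """nodes: list of (node_id, age_days). Draw size/len(BANDS) members
    from each age band, so new masternodes can never dominate a quorum."""
    per_band = size // len(BANDS)
    quorum = []
    for lo, hi in BANDS:
        band = [n for n, age in nodes
                if age >= lo and (hi is None or age < hi)]
        quorum += random.sample(band, per_band)
    return quorum

nodes = [(f"mn{i}", age) for i, age in enumerate(
    [700, 600, 650, 400, 380, 500, 200, 190, 300, 30, 10, 90])]
print(build_quorum(nodes))
```

Buying coins on an exchange only adds nodes to the newest band, so at most a quarter of any quorum can come from freshly created masternodes.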
|
|
|
|