libstdc++ and libgcc_s_dw2 are regular 32-bit MinGW runtimes, so there is no real need to compile them in. In fact, this executable has even these two compiled in after a project file update. All other non-system dependencies are also built in. A cheap insult, piss off.

Over at Dobbscoin, we release a full complement of wallets with each update, built to Bitcoin standards. Not this Frankenstein build system, Windows-wallet-only garbage that you and Shakezula seem to have pioneered. You and your cult of crapcoin are the ones who will be dying off soon.
Blah-blah-blah. If no one in the community needs Mac or Linux or Plan 9 or whatever binaries, why am I supposed to deliver them? Come back to the real world, where over 90% are Windows users and most Linux users can build Qt clients and daemons themselves. Feel proud of being a Mac user? Most others don't give a shit.

After a quick look at your Orbitcoin block explorer, it would appear your coin generation is all over the place, nowhere in the realm of where it should be for what you claim the targets are, for PoS or PoW. Really? Feel free to tell me more about coin generation, make my day.
|
|
|
Orbitcoin (ORB) was supported by CoinPayments. Now it's not in the list. The support is silent. What's up?
Since the addition of the coin, way back months ago, it has received 0 transactions, along with a few other coins that have been inactive for some time now. Nobody is using it to buy anything.

The coin got its PoW mining stuck in January, lost its founder in early March, and was nearly abandoned and attacked recently. Now it has just started to become popular again. Come on, Red Mist paid you to get listed at CoinPayments, as he paid everyone to get things done. Give ORB a second chance.
|
|
|
|
|
|
Been looking for a decent replacement for the PPC every-block retarget algorithm. KGW, DGW, DigiShield, whatever else does no good. Finally got the job done myself: https://github.com/ghostlander/Orbitcoin/blob/1e417b40a65dc316037855b1d3db3b32165c80db/src/main.cpp#L1157
Re-targets every block with separate fixed targets for PoW and PoS. Two averaging windows (a short one of 5 blocks and a long one of 20 blocks), both limited against time warping. Inter-window averaging and 0.25 damping with final +1% / -2% difficulty limiting. Runs fine so far.

P.S. Though BOB has Coingen roots, Testnet does work, so that's always an option, though the seed node can be wonky at times. OMFG, this Bobcoin spam is now here at Bitcointalk. This annoying Coingen nonsense will get killed some day. The sooner, the better.
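A minimal sketch of the scheme described above, assuming a hypothetical 60-second block target. This is not the actual Orbitcoin code (see the linked main.cpp for that); the function names and the mapping of the +1% / -2% difficulty limits onto spacing bounds are illustrative.

```c
#include <stdint.h>

#define TARGET_SPACING 60   /* hypothetical block target, seconds */

static int64_t clamp64(int64_t v, int64_t lo, int64_t hi) {
    return (v < lo) ? lo : (v > hi) ? hi : v;
}

/* avg_short: mean actual spacing over the last 5 blocks;
 * avg_long: over the last 20; both assumed pre-limited against
 * time warps. Returns the effective spacing used to scale the
 * next difficulty. */
int64_t retarget_spacing(int64_t avg_short, int64_t avg_long) {
    /* inter-window averaging */
    int64_t actual = (avg_short + avg_long) / 2;
    /* 0.25 damping towards the nominal spacing */
    int64_t damped = TARGET_SPACING + (actual - TARGET_SPACING) / 4;
    /* final per-block limits: difficulty may rise at most ~1%
     * (spacing -1%) and fall at most ~2% (spacing +2%) */
    return clamp64(damped, TARGET_SPACING *  99 / 100,
                           TARGET_SPACING * 102 / 100);
}
```

Even a huge measured spacing (a stalled chain) or a near-zero one (a burst of instamined blocks) can only move the result within the clamp, which is the point of hard limiting.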
|
|
|
Not every. Those operating with large traditional averaging windows are fine, though their time warp vulnerability level is higher than usual. PPC even allows a negative actual time span, so many faster forks got burned by this "feature". It samples only 2 time stamps per retarget (the last block and the previous one), applies extreme damping and gets the job done. Very slow, as the damping suggests, though it makes difficulty manipulation through time warps very easy. Those PPC forks brave enough to run without the ACP enabled can be torn apart by 51% attacks quite easily.
Didn't know about that. Interesting! So what do you suggest? Other retargeting algos that seem promising, but I haven't taken a look at, are DigiShield and Dark Gravity Wave 2. How do they perform? And do you agree with the trade-off? Implementing the fix pushes in erroneous diff adjustments that could be detrimental to the coin. Or makes it so that a 51%-er can mint more blocks with the same work? Currently, I think it's not ideal to implement the current time warp fix that everyone is implementing, since if someone is maliciously doing a 51% attack, you are screwed anyway... Thoughts?

I've seen Orbitcoin time warped today with its difficulty falling to near zero literally in minutes. The coin is a fork of NVC with some bells and whistles, using a 1 hour PPC style retarget window and very fast blocks. Such a small window combined with their low network hash rate and no limits in the code made it possible. Their previous retarget fix substituted a negative time span with 0, but it didn't help much. 1 wouldn't help either. The attacker instamined a couple thousand blocks in a few hours and got away. The conclusion is, the weirder/more complicated the algorithm you employ, the more likely you are to run into trouble with it some day.
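The PPC-style continuous retarget discussed above can be sketched roughly like this (simplified and using illustrative constants close to PPC's 10-minute spacing and 1-week damping interval; real clients use big integers):

```c
#include <stdint.h>

#define TARGET_SPACING 600                            /* seconds */
#define INTERVAL (7 * 24 * 60 * 60 / TARGET_SPACING)  /* heavy damping */

/* new_target = old * ((INTERVAL-1)*T + 2*actual) / ((INTERVAL+1)*T),
 * where actual = time(last) - time(prev). Note that actual is NOT
 * clamped below zero, so a timestamp earlier than its parent (a
 * time warp) shrinks the target, i.e. raises difficulty, by an
 * attacker-chosen amount -- the "feature" mentioned above. */
int64_t ppc_next_target(int64_t old_target, int64_t actual_spacing) {
    int64_t t = TARGET_SPACING;
    int64_t num = (INTERVAL - 1) * t + 2 * actual_spacing; /* may go negative */
    return old_target * num / ((INTERVAL + 1) * t);
}
```

With only two sampled timestamps and such heavy damping, each individual retarget moves the target very little, but nothing stops a chain of warped timestamps from steering it step by step.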
|
|
|
if (PastRateActualSeconds < 1) { PastRateActualSeconds = 1; }
If it's a block in the past, or the same timestamp, it's regarded as having taken 1 second. The problem with this is that if there are legitimate blocks in the past [due to network lag, or other reasons], each is regarded as an extremely fast block, rather than a legitimate block [within range]. This shoots up the difficulty (as we've seen with some coins). This happens more easily with coins with low block times.
1 second isn't much better than 0. There should be either a higher value as a fraction of the block target, or these blocks may be skipped from the difficulty calculation, or their time stamps recalculated as an average of neighbouring blocks. Another way is to limit every difficulty adjustment like many non-KGW algorithms do. With diff adjustments happening per block, it's a difficult problem to solve. Any 1-block algo is vulnerable, as a block in the past [which is a legitimate anomaly] is an "odd" happenstance in 1-block diff algos.
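The first alternative above (a floor expressed as a fraction of the block target, instead of 1 second) could look roughly like this; the 60-second target and the 25% fraction are made-up numbers for illustration:

```c
#include <stdint.h>

#define BLOCK_TARGET 60  /* hypothetical target spacing, seconds */

/* Instead of clamping a zero/negative spacing to 1 second, floor it
 * at a fraction of the block target, so an out-of-order timestamp
 * reads as a fast-but-plausible block rather than an extreme one
 * that shoots the difficulty up. */
int64_t effective_spacing(int64_t actual_seconds) {
    int64_t floor_spacing = BLOCK_TARGET / 4;   /* 25% of target, tunable */
    return (actual_seconds < floor_spacing) ? floor_spacing : actual_seconds;
}
```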
|
|
|
A note on the recently discovered OpenSSL bug. The Phoenixcoin Qt client doesn't support Bitcoin style payment requests; I decided not to add this functionality when I implemented Coin Control, so no client upgrade is required. If you use rpcssl in your daemon's phoenixcoin.conf and expose your rpcport to the outside (which is a bad thing actually), you should upgrade OpenSSL to v1.0.1g and re-compile the daemon. No upgrade is required otherwise.
|
|
|
Median of 11 is 6 blocks. Although AUR has changed this to median of 3, which is a bad idea actually.
Saying it is a bad idea does not yet make it a bad idea. Can you give some reasoning behind this? If there is good reasoning, why didn't you tell us when it was only being planned?

Why have you not asked? I'm not a part of your community; I'm not supposed to monitor what you do and give advice on every matter.
|
|
|
Let's do some math real quick. Open your calculator, compute 2 to the 10th power and tell me what you get.
Stop playing a fool.

void scrypt(const uint8_t *password, size_t password_len,
  const uint8_t *salt, size_t salt_len,
  uint8_t Nfactor, uint8_t rfactor, uint8_t pfactor,
  uint8_t *out, size_t bytes) {
    scrypt_aligned_alloc YX, V;
    uint8_t *X, *Y;
    uint32_t N, r, p, chunk_bytes, i;

    if (Nfactor > scrypt_maxN) scrypt_fatal_error("scrypt: N out of range");
    if (rfactor > scrypt_maxr) scrypt_fatal_error("scrypt: r out of range");
    if (pfactor > scrypt_maxp) scrypt_fatal_error("scrypt: p out of range");

    N = (1 << (Nfactor + 1));
    r = (1 << rfactor);
    p = (1 << pfactor);

    chunk_bytes = SCRYPT_BLOCK_BYTES * r * 2;
    V = scrypt_alloc((uint64_t)N * chunk_bytes);
    YX = scrypt_alloc((p + 1) * chunk_bytes);

    /* 1: X = PBKDF2(password, salt) */
    Y = YX.ptr;
    X = Y + chunk_bytes;
    scrypt_pbkdf2(password, password_len, salt, salt_len, 1, X, chunk_bytes * p);

    /* 2: X = ROMix(X) */
    for (i = 0; i < p; i++)
        scrypt_ROMix((scrypt_mix_word_t *)(X + (chunk_bytes * i)),
          (scrypt_mix_word_t *)Y, (scrypt_mix_word_t *)V.ptr, N, r);

    /* 3: Out = PBKDF2(password, X) */
    scrypt_pbkdf2(password, password_len, X, chunk_bytes * p, 1, out, bytes);

    scrypt_ensure_zero(YX.ptr, (p + 1) * chunk_bytes);

    scrypt_free(&V);
    scrypt_free(&YX);
}
|
|
|
I'm afraid changing N from 1024 to 128 (N factor from 9 to 6) doesn't qualify as a really ASIC resistant thing.
Litecoin is N factor 10, or 1024. AIDEN is N factor 6, or 64. And I answered that question already. It's currently ASIC resistant (due to the cost to manufacture), but nothing is ASIC intolerant.

Really? Let's do some math.

#include <stdio.h>

int main() {
    unsigned int n, nfactor;

    for(nfactor = 0; nfactor < 24; nfactor++) {
        n = (1 << (nfactor + 1));
        printf("n = %u, nfactor = %u\n", n, nfactor);
    }

    return(0);
}
And...

n = 2, nfactor = 0
n = 4, nfactor = 1
n = 8, nfactor = 2
n = 16, nfactor = 3
n = 32, nfactor = 4
n = 64, nfactor = 5
n = 128, nfactor = 6
n = 256, nfactor = 7
n = 512, nfactor = 8
n = 1024, nfactor = 9
n = 2048, nfactor = 10
n = 4096, nfactor = 11
n = 8192, nfactor = 12
n = 16384, nfactor = 13
n = 32768, nfactor = 14
n = 65536, nfactor = 15
n = 131072, nfactor = 16
n = 262144, nfactor = 17
n = 524288, nfactor = 18
n = 1048576, nfactor = 19
n = 2097152, nfactor = 20
n = 4194304, nfactor = 21
n = 8388608, nfactor = 22
n = 16777216, nfactor = 23

Your code requires less memory than regular Scrypt to operate. Only a trivial change in the ASIC firmware allows them to run your code. Do you really think it's a problem even for those programmable hybrid Gridseeds, which can do SHA-256 and Scrypt at the same time? Don't waste your and other people's time.
|
|
|
|
|
|
Another fix (preventing PastRateActualSeconds from going to 0) takes care of another attack vector. Here is a short explanation of the attack: 1. Generate a block 2 weeks in the future. You cannot publish it; it is not in the current time window. 2. Start generating blocks with the same timestamp (i.e. the moment 2 weeks in the future).
See what would happen: after there is PastBlocksMax blocks in the private chain, *the diff would not change* at all!
That would mean you have 2 weeks to generate blocks with 0 difficulty. With decent hashrate, you easily get 1 block in a second. In 2 weeks you get 1209600 blocks.
When that 2 weeks has passed, what would happen to the blockchain, if you suddenly publish 1209600 perfectly valid blocks? The whole network would be doing nothing but checking those 1209600 blocks... and finding nothing wrong with them. That would be the end of the coin.
First, an attacker still needs to exceed the cumulative difficulty score of the original chain. Second, there must not be any checkpoints on the original chain for those 2 weeks, neither hard coded nor synchronised. Third, if the second is true, this is a huge reorganisation which won't pass unnoticed, and a smart developer would secure his chain with a checkpoint immediately, release an updated client and ask the community to upgrade. EDIT: Actually, it *is* prevented somewhere else. One can generate only 5 blocks with the same timestamp. Median of 11 is 6 blocks. Although AUR has changed this to median of 3, which is a bad idea actually.
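The median-of-11 rule mentioned above can be sketched in the Bitcoin style (simplified; function names are illustrative): a new block's timestamp must exceed the median of the previous 11 block timestamps, so once enough blocks in the window share one timestamp, the median catches up and the next repeat is rejected.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static int cmp64(const void *a, const void *b) {
    int64_t x = *(const int64_t *)a, y = *(const int64_t *)b;
    return (x > y) - (x < y);
}

/* median of the last n (up to 11) block timestamps */
int64_t median_time_past(const int64_t *last, size_t n) {
    int64_t tmp[11];
    memcpy(tmp, last, n * sizeof(int64_t));
    qsort(tmp, n, sizeof(int64_t), cmp64);
    return tmp[n / 2];
}

/* returns 1 if the new timestamp t is acceptable */
int check_timestamp(const int64_t *last, size_t n, int64_t t) {
    return t > median_time_past(last, n);
}
```

Once the shared timestamp occupies the median slot of the 11-block window, any further block with that same timestamp fails the strict `>` comparison.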
|
|
|
Time warps can decrease or increase difficulty, but they cannot give you more hash power than you actually have. You still need 50%+ of the network hash rate.
No, the point of the time warp attack is that you don't need more than 50% of the network hashrate to execute the attack. With that much hashing power you can always attack the chain, regardless of how the coin adjusts difficulty. It seems the original point of the time warp attack, as explained by ArtForz over 2 years ago, has been missed by you.

No matter how you play this game, you have to follow the rules, and cumulative difficulty is one of them. You can mine at a lower difficulty or even attempt to double spend, but you still need to catch up with the original chain. It's up to your skills how to do that. In fact, time warps have never been a real problem for Bitcoin or Litecoin, even though the latter addressed one of the issues in their code. This trouble is for those new "fast" coins and their developers who hardly understand what they're doing. Finally, the ability to execute an attack doesn't imply the ability to benefit from it.
|
|
|
The main chain is calculated by total work done already. If it wasn't, this would actually open a vulnerability in Bitcoin. Unless you can trick the software into calculating more work at a lower difficulty, I do not see how this is a critical issue. No one has explained why my logic is wrong yet. It's not critical that coins update KGW. The best any coin can do to increase security is to have a higher and well distributed hashrate.
This. What is to prevent me from creating a chain of greater proof-of-work comprised of lower difficulty blocks? If I'm following protocol, nothing. I more than likely don't need anywhere close to a majority of the hashing power, since I am manipulating time.

Time warps can decrease or increase difficulty, but they cannot give you more hash power than you actually have. You still need 50%+ of the network hash rate.
|
|
|
That's the whole point, the current network will happily accept chain-of-massive-number-of-low-diff-blocks over chain-of-less-harder-blocks as long as the sum of difficulty of the first is higher and it follows the "rules set in stone" (no invalid tx, generation amount <= calculated amount, difficulty == getNextDifficulty(prevblock), block nTime > median of prev 11 blocks, block nTime can't be more than 2 h in the future, ...).
ArtForz also noted that any asymmetrical algorithm will be vulnerable. Because KGW is designed to deal with multipool problems and abrupt jumps in difficulty caused by fast increases in hashrate, it makes increasing diff harder than decreasing it. Because of that, an attacker can get a lot of lower difficulty blocks at the cost of a few larger difficulty blocks when he jumps back and forth in time.

Hmm.. ok.. so in the end there really is an attack vector (however not as easy as I have been thinking)? But that means summing the difficulty is a wrong way to measure the height of a blockchain. There should be a way (some algorithm) to assure a certain blockchain has been done with more work than the other, regardless of whether it was done with lots of low diff blocks or a few high diff blocks. It should be possible to count the total amount of hashes needed to generate a certain blockchain, and that should quite explicitly tell which blockchain really has been generated with the most work. Before I chitchat more bullshit, I guess I have to do some homework and familiarise myself more with the source.. and what block difficulty really relates to. That's a nice link to ArtForz's comments. I have been wondering the same; does it need to be symmetric to protect the chain better? I think it is somehow weaker if it is not, but that's not as big a "hole" as what the other issues cause.

In order to succeed, an attacker needs to put more hash power into his chain than the other miners can supply. Their pools may be DDoS'ed, or they may just autoswitch to a more profitable coin. There are also checkpoints, either hard coded or synchronised. No matter how much cumulative difficulty or trust score you have on a forked chain, it always fails against a checkpoint. KGW is just an overcomplicated solution with no difficulty limiting. This is what needs to be fixed actually.
And, with a symmetric algorithm and one block retarget, you have the problem of not being able to calculate with zero or negative timespans. A truly symmetric algorithm would approach infinite difficulty as the time difference approaches zero.
A long averaging window can be used even for every-block retargets. There are no zero or negative time spans then.
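On the "count the total amount of hashes" idea above: Bitcoin-style chain work does exactly that. Each block contributes roughly 2^256 / (target + 1) expected hash attempts, and chains are compared by the sum of those contributions, not by naive difficulty sums. A double-precision sketch (real clients use 256-bit integers, so treat this as illustrative):

```c
#include <math.h>

/* expected hash attempts to find one block at this target;
 * a smaller target (higher difficulty) means more work */
double block_work(double target) {
    return pow(2.0, 256.0) / (target + 1.0);
}

/* total expected hashes behind a chain segment */
double chain_work(const double *targets, int n) {
    double w = 0.0;
    for (int i = 0; i < n; i++)
        w += block_work(targets[i]);
    return w;
}
```

Under this measure, two blocks at twice the target (half the difficulty) carry the same weight as one block at the original target, so "many low diff blocks" and "few high diff blocks" are compared on equal footing.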
|
|
|
Hey Ghost! Yeah, I like the silence and the lower energy bills. But maybe I'll point my Titan at PXC for a while.

You're welcome if you like rejects, as the block limiter is here and it's going to be even more advanced over time.

Except there are Scrypt ASICs already, and more will come online, destroying non-ASIC miners' profitability.
Sure, as soon as it becomes profitable to do so, companies will develop X11 ASICs.
But by the time that happens, my little stash of Hirocoins will have paid off my mortgage.
Good for you if it does. As for the Scrypt ASICs, there is always a work-around if you really need it.
|
|
|
This bug is possible due to no hard limiter at all for difficulty adjustments, as well as a large allowance for future time stamps. EventHorizonDeviation is an averaging factor and doesn't work as a limiting factor. It isn't really supposed to. A retarget code which allows difficulty abuse of such extreme magnitude as with AUR recently isn't a good one. By the way, how many coin developers have copied KGW into their coins without a good understanding of the internals, just because it's a popular trend? That's a rhetorical question.
|
|
|
Joeri, nice to see you and MaGNeT having fun with Hirocoin. I don't usually comment on marketing things, but from my engineering point of view it's much easier to do an X11 ASIC/FPGA than a decent Scrypt ASIC/FPGA. The X11 market is just too small currently.
|
|
|
|