Can I send you 0.01 or $1 to your CC?
Bitcoin = bidirectional, like PayPal.
|
|
|
And we also have a "catastrophic" problem with PoW:
No, if you read what I wrote, it was a hypothetical, modified version of AES which does not diffuse to all of the output bits, which is specifically not what real AES is designed to do. No one has identified an actual problem.

Ok, let's make it more practical then. Citing the experts below:

As I said above, no one has identified an actual problem. His concerns are theoretical and other experts disagree. No one has identified an actual problem.

EDIT: I'd add that the DoS issue is a potential problem, but it is somewhat mitigated by the optimization of the algorithm and the code that has since occurred. Initially (when that comment was written), it took hundreds of milliseconds to verify a hash, which is indeed a lot of time and a big DoS vulnerability. These days, with the optimized hash code and multithreaded verification, the effective time is well under 10 ms, which is close enough to network latency to serve as an effective throttle on DoS.

Actually, my bad, this was a point that should have been posted in the XMR #badcrypto thread. Anyway. The reason we are here:

Wow, I just saw that. I agree with tacotime, it looks intentional.

So you mean the BCN engine was curbed all this time, and they just had to remove it to artificially increase it? That's disgusting.

Blame BCN developers - instamine yourself.

Nice catchphrase you got here
And here I thought people went to MRO to have a "clean" start, but nope! Instamining it with a fast Linux hash... Cool!

You're welcome to optimize your own miner if you're so interested; you have the source code. Artforz GPU-mined Bitcoin for a long time on his private OCL code on 4870s and got thousands of them. Bitcoin survives to this day, or so I'm told.

You're the developers, after all. It's a good approach for a developer: we're instamining - you can too, if you're good enough! Promising coin - no shit.

"Fair launch" "Disgusting" "You can instamine too" Monero #REKT
|
|
|
With a coin like Dash that uses a masternode scheme, all you need to do is keep reinvesting and you are guaranteed a greater percentage of the stake.
There are masternode costs involved: security and technical know-how (if you don't have the skills, you have to pay others to do it for you), as well as investment dilution by PoW mining. This is not proof of stake, where you have 1000 coins and they give you interest 'just because'.
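The compounding claim above can be made concrete with a toy calculation. All numbers here are hypothetical illustration values (a 10% annual node yield, one operator who reinvests every reward into new nodes, and other operators who only reinvest half), not actual Dash figures:

```c
#include <stdio.h>

/* Toy model of the reinvestment argument: an operator who rolls every
   masternode reward into new nodes grows their share of the total node
   count relative to operators who reinvest only part of their rewards.
   All parameters are hypothetical illustration values. */
double share_after(int years, double yield, double others_reinvest_frac) {
    double mine = 1.0;     /* start with 1 of 100 "node units", a 1% share */
    double others = 99.0;
    for (int y = 0; y < years; y++) {
        mine   += mine * yield;                          /* full reinvestment */
        others += others * yield * others_reinvest_frac; /* partial reinvestment */
    }
    return mine / (mine + others);
}

int main(void) {
    /* Starting at a 1% share: 10 years of 10% yield vs. 50% reinvestors. */
    printf("share after 10 years: %.4f\n", share_after(10, 0.10, 0.5));
    return 0;
}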
|
|
|
Well, it was a surprise that the dumping began this fast. I thought it would take at least a day or two before the BTC->STEAM->$ engine showed its effect. Things will probably get interesting, because I doubt that the engine is running at its peak. Most here are blinded by big words and can't understand how thin the buy side really is. The price isn't holding because of a strong bid side, but because of the desperate miners who want to see some ROI.
We have no idea who sold. It could be guys who bought at 370-400 when others were like "OMG, I WONDER WHY IT ISN'T AT SUB-200". There was a looooot of buying back then at the dumps of weak hands.
|
|
|
And we also have a "catastrophic" problem with PoW:
No, if you read what I wrote, it was a hypothetical, modified version of AES which does not diffuse to all of the output bits, which is specifically not what real AES is designed to do. No one has identified an actual problem.

Ok, let's make it more practical then. Citing the experts below:

Which file in the source code contains the proof-of-work algorithm?
I've tried to locate it and can't seem to find it quickly.
I want to analyze the cpu-only claim.
src/crypto/slow-hash.c

On quick glance, I see AES code. Is this the MemoryCoin algorithm, and not the one described in the CryptoNote whitepaper which is memory-latency bound? I do not think it is the MemoryCoin algorithm.

Analyzed it. It is employing AES as another means of defeating GPUs (in addition to being memory-latency bound), similar to MemoryCoin. https://cryptonote.org/inside.php#equal-proof-of-work

3. GPUs may run hundreds of concurrent instances, but they are limited in other ways

See prior analysis of that strategy, which concluded that GPUs would be 2.5 to 3X faster but would perform no better in hashes per Watt: https://bitcointalk.org/index.php?topic=355532.msg3976656#msg3976656

I pointed out that ASICs would implement AES much more efficiently: https://bitcointalk.org/index.php?topic=355532.msg3977088#msg3977088

Here follow my conclusions.

- verification is slow, and thus DoS prevention will be hampered, which will also likely eliminate any chance of supporting 0 transaction fees
- roughly both memory-latency and computation bound (instead of the ideal of being only latency bound); thus if Tilera CPUs or GPUs add dedicated AES support, or if ASICs are mated to large, fast SDRAM caches, the cpu-only claim will fail
- it is not leveraging hyperthreads
In short, it is too computation-heavy and not maximizing the CPU's hyperthreads; thus not only will it not be the best cpu-only PoW algorithm possible, it will also fail to remain cpu-only if it becomes widely adopted. Also, being computation-heavy, it consumes more electricity than the ideal cpu-only PoW algorithm.

There is another egregious flaw in the proof-of-work algorithm. AES encryption is being employed as the hash function and assumed to be a random oracle with perfect distribution in order to provide the randomized memory access. The problem is that AES is not suitable as a hash (certainly not when employed as encryption), for it has too small of an output space (repeating patterns will occur over a small number of bits); thus it will be possible to attack this with an algorithm that reduces the scratchpad size significantly from the 2 MB.

Monero #REKT
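For readers following along, the design being argued over has this rough shape: fill a multi-megabyte scratchpad with pseudo-random data, then perform data-dependent reads over it so that memory latency, not arithmetic, dominates. A minimal sketch of that shape; the mix() function is a stand-in of my own, NOT the real AES-based round from slow-hash.c, and that substitution is exactly the point of contention above (if the real mixer's output bits are not uniform, the effective scratchpad shrinks):

```c
#include <stdint.h>
#include <stdlib.h>

#define SCRATCH_WORDS (2u * 1024 * 1024 / 8)  /* 2 MB scratchpad, as in the whitepaper */

/* Stand-in 64-bit mixer (NOT the real AES-based transform). */
static uint64_t mix(uint64_t x) {
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;
    x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
    x ^= x >> 33;
    return x;
}

/* Shape of a memory-hard slow hash: a compute pass that fills the pad,
   then a latency-bound pass of data-dependent (unpredictable) accesses. */
uint64_t slow_hash_sketch(uint64_t seed) {
    uint64_t *pad = malloc(SCRATCH_WORDS * sizeof *pad);
    uint64_t s = mix(seed);
    for (uint32_t i = 0; i < SCRATCH_WORDS; i++) {   /* 1) fill */
        s = mix(s);
        pad[i] = s;
    }
    for (uint32_t i = 0; i < SCRATCH_WORDS; i++) {   /* 2) random walk: each address
           depends on the running state, so it cannot be prefetched */
        uint32_t j = (uint32_t)(s & (SCRATCH_WORDS - 1));
        s = mix(s ^ pad[j]);
        pad[j] = s;
    }
    free(pad);
    return s;
}
```

The quoted critique is that if the mixer is biased, an attacker can trade a smaller scratchpad for extra computation, breaking the latency-bound assumption the whole design rests on.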
|
|
|
BTC+gaming integration is extremely huge for bitcoin's long-term fundamentals and the training of a potentially very big user base. Not to repeat myself, but: The best "crowd" to "train" are young, tech-savvy people (teenagers+). They'll need ...zero training, as they figure out everything themselves, just as they did with dogecoin. We need to merge BTC with the millions of online gamers. Games where people can win BTC through their gameplay, pay with BTC for advanced game privileges, or transact with BTC between each other for exchanging items, accounts, etc. These young people also tend to have GPUs and thus can become indirect BTC miners through GPU multipools.
I've stated it before but online games are the way... massive user base (tens of millions of young gamers) who are tech-knowledgeable and won't have an issue using cryptos.
Want to sell a virtual item? A game account? Something in-game related? BTC is the way... Same for game companies... they can have users use BTC for paying for their premium stuff or reward players with BTC for winning something, like a tournament or in-game achievements.
Agree m8, also think Xbox and PS will add it to their stores soon, now that Japan has officially recognised BTC as money...

"Even more encouraging in many ways was the other news, that the Japanese government had devised a set of bills that would effectively recognize Bitcoin as a currency and put some regulations in place." http://www.nasdaq.com/article/the-reasons-behind-bitcoins-continued-rise-cm612238#ixzz472yNZbBN

The "best" will be when in-game economies start mixing with RL economies, with bitcoin as the gateway. I've already seen some samples of this on a massive online game where players were getting bitcoins as rewards for completing in-game tasks / challenges. And we already have cases of virtual goods or accounts being sold for BTC. Interestingly, BTC has an advantage here over PayPal due to non-reversibility: it prevents someone from buying an in-game account or item, then reversing the payment through PayPal and keeping both the money and the virtual goods.
|
|
|
My personal summary of the coin
You never did anything apart from compiling miner guides (thank you for that) and creating an optimized miner to (I assume) instamine the coin yourself. And yes, you spent hours & hours & hours on this forum just posting irrelevant things, like the strategy-for-dummies post above, which will drive more users to your apparently instamined coin.
Congratulations, you've made it! The fair launch was soooo fair!
Disclaimer: I apologize if any of my assumptions insult anybody, and I would like the self-proclaimed authors to finally clarify the following points:
1) How exactly did you benefit the community and the coin?
2) When is the open-source miner going to be released? Why wasn't it? Please provide exact technical details on that "unstable code thing".
3) Why did you change your attitude towards the closed-source miner?
4) Why did you take over the coin and create a second thread when there was already one there?
5) Why did you change the domain name and fork the repo, while keeping the blockchain that you never started?
6) Why do you keep avoiding these very clear questions?
Thank you.
Monero #REKT
|
|
|
Wait, so coins that are highly GPU-mineable but don't have a public GPU miner (it would have to be a public, highly-optimized GPU miner, right?) for at least 20 days are not also Cripplemined by your definition? How many coins were mined in those 20+ days?
Where do you draw the line here?
Quark, with just 5 of the hashes, was a "cpu coin" for months. It took the "crowdfunded" GPU miner of darkcoin to then be used for quark also. Actually, what Darkcoin did, I doubt anyone has done in terms of mining fairness: "We are funding the development of GPU mining before anyone gets to develop it in private"... Who does that?

As for cripplemining, well, the issue with Monero was: 1) it was intentionally crippled to be slower.

It appears from the simplicity of the fix that there may have been deliberate crippling of the hashing algorithm from introduction with ByteCoin.

Interesting. Could you elaborate on this?

oaes_key_import_data calls are placed inside loops unnecessarily, which slows down the hash quite a bit during the scratchpad portions.

Wow, I just saw that. I agree with tacotime, it looks intentional.

So you mean the BCN engine was curbed all this time, and they just had to remove it to artificially increase it? That's disgusting.

2) There were people mining with at least double the speed and the devs were OK with it:

Blame BCN developers - instamine yourself.

Nice catchphrase you got here
And here I thought people went to MRO to have a "clean" start, but nope! Instamining it with a fast Linux hash... Cool!

You're welcome to optimize your own miner if you're so interested; you have the source code. Artforz GPU-mined Bitcoin for a long time on his private OCL code on 4870s and got thousands of them. Bitcoin survives to this day, or so I'm told.

You're the developers, after all. It's a good approach for a developer: we're instamining - you can too, if you're good enough! Promising coin - no shit.

3) There was a known CPU miner going much faster:

Hey, NoodleDoodle optimized the slow hash code recently to about 225% performance. However, he has decided not to release the source code and has only released binaries. I think he is enjoying mining MRO with very high hash rates from Linux right now. Eventually we hope he will release the code.
Hope dies last.

And we also have a "catastrophic" problem with PoW:

The problem is that AES is not suitable as a hash (certainly not when employed as encryption), for it has too small of an output space (repeating patterns will occur over a small number of bits); thus it will be possible to attack this with an algorithm that reduces the scratchpad size significantly from the 2 MB.

I agree with this. Only a small number of bits of the output of AES are being used, but AES does not guarantee that all of its output bits are random. For example, consider an algorithm AES' which is just like AES except that it appends 10 trailing bits that are always zero (AES'(x) = AES(x) << 10). This would be just as secure as AES for encryption, but catastrophically bad for slow_hash. I suspect the developers wanted to use AES because of the hardware support in Intel CPUs, but they made a mistake, though it isn't immediately apparent how catastrophic it is (unlike my toy example above). If they used a true secure hash, it would be much slower and likely not memory bound. The algorithm can and likely should be improved in this regard, although I don't have any immediate suggestions how.

Monero #REKT
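The specific crippling alleged earlier (oaes_key_import_data called inside loops) is a loop-invariant-code issue, not cryptography. A generic sketch of the pattern and the fix; expand_key and encrypt_block here are hypothetical stand-ins of my own, not Monero's actual functions, and the real fix was simply hoisting the key import out of the scratchpad loops:

```c
#include <stdint.h>

int expand_calls = 0;   /* counts how often the "expensive" setup runs */

/* Stand-in key schedule: loop-invariant, so re-running it is pure waste. */
void expand_key(uint64_t key, uint64_t rk[4]) {
    expand_calls++;
    for (int r = 0; r < 4; r++)
        rk[r] = (key + r) * 0x9e3779b97f4a7c15ULL;
}

/* Stand-in block transform using the expanded round keys. */
void encrypt_block(const uint64_t rk[4], uint64_t *block) {
    for (int r = 0; r < 4; r++)
        *block = (*block ^ rk[r]) * 0x100000001b3ULL;
}

/* Crippled pattern: the same round keys are re-derived every iteration. */
uint64_t scratch_pass_crippled(uint64_t key, uint64_t *pad, int n) {
    uint64_t rk[4];
    for (int i = 0; i < n; i++) {
        expand_key(key, rk);        /* n redundant expansions */
        encrypt_block(rk, &pad[i]);
    }
    return pad[n - 1];
}

/* Fixed pattern: hoist the loop-invariant setup; identical output. */
uint64_t scratch_pass_hoisted(uint64_t key, uint64_t *pad, int n) {
    uint64_t rk[4];
    expand_key(key, rk);            /* one expansion total */
    for (int i = 0; i < n; i++)
        encrypt_block(rk, &pad[i]);
    return pad[n - 1];
}
```

Both variants produce the same hash output; the only difference is wasted work per iteration, which is why the "simplicity of the fix" raised eyebrows in the thread.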
|
|
|
How many bitmonero / BMR / MRO / XMR monero were cripplemined in the first 3 months?
The question they don't want to answer.
Ask an objective question (in which case you can answer it from a block explorer, etc.), or you might as well just make up your own answer.

Do you have a link to where this info can be easily found? I need to recalculate the monero cripplemine. As I'm sure you know, it's worse than previously disclosed.

"This info" meaning what? You made up the term Cripplemine, haven't defined it clearly, and now AlexGR is telling us that pretty much every unit of every coin that has ever been mined was Cripplemined. So, no, there is no link I'm aware of that will take you to "Here are the Monero Cripplemine stats."

Yes... CPU coins... X11 has been gpu-mineable since mid-Feb 2014. Darkcoin issued a hefty bounty veeery early on (20 days old?) to have a GPU miner online, so as to avoid hidden GPU miners in the wild - I think the bounty was something like 2-3k DRK.

Wait, so coins that are highly GPU-mineable but don't have a public GPU miner (it would have to be a public, highly-optimized GPU miner, right?) for at least 20 days are not also Cripplemined by your definition? How many coins were mined in those 20+ days? Where do you draw the line here?

BTW, your comments about SIMD optimizations were almost completely wrong when it comes to Monero's hash function. A little knowledge can be a dangerous thing.

I don't remember making a comment about Monero's hash function in particular, but I did make a comment on memory-hard hashes, having scrypt in mind. Cryptonight might also fall into the same category, or not - I haven't checked it. If its approach is like scrypt's, it might be equally vulnerable.
|
|
|
I could make a coin, then code a GPU miner around its new algo *mod*, then only post a CPU miner and rape the shit out of it... it's been done... lots... and YOU KNOW IT TOO!
The situation is worse than you think. For years, almost every CPU coin has actually been cripplemined (to different degrees). The reason is that all the "SSE, AVX" etc. enhanced miners do not use packed instructions / SIMD. This means that if a hash has, say, 3 steps (usually a lot more than 10), a sequential, non-packed version will go like:

1. 5-10 cpu cycles: first step of hashing
2. 5-10 cpu cycles: second step
3. 5-10 cpu cycles: last step of hashing

If you load 4 different hash candidates simultaneously, in the same thread, you can "pack" them with AVX/SSE and go like:

1. 5-10 cpu cycles: first step of hashing for ALL 4 hashes
2. 5-10 cpu cycles: second step of hashing for ALL 4 hashes
3. 5-10 cpu cycles: last step of hashing for ALL 4 hashes

Again, that's in the context of the same cpu thread, btw. It's a kind of parallelism within the same core/thread. In this way, Haswell can go from something like 8 cycles per byte to ~2.5 with AVX2 for SHA256, and down to <1 cycle/byte with AVX-512.

I initially thought coins that use memory-hard algorithms were more immune, and they are to some extent. But since memory use (a reduced scratchpad) can be traded for more processing work (a shortcut), if the processing work is multiplied by, say, 8x, then the underlying assumption of trading less memory use for (supposedly waaaaay) more cpu work could be invalidated (to some degree) - because the assumption about available cpu power is based on scalar, not SIMD, use.
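The "packing" idea above can be sketched without intrinsics: keep four independent hash candidates in side-by-side lanes and apply each step to all lanes before moving to the next. Written this way, compilers targeting AVX2 can map each lane loop to packed instructions. The three mix steps below are stand-ins of my own, not any real coin's hash:

```c
#include <stdint.h>

#define LANES 4  /* four hash candidates processed per step, as described above */

/* Scalar reference: the 3 "steps" applied to one candidate at a time. */
uint64_t hash_scalar(uint64_t x, int rounds) {
    for (int r = 0; r < rounds; r++) {
        x ^= x >> 33;                 /* step 1 */
        x *= 0xff51afd7ed558ccdULL;   /* step 2 */
        x ^= x >> 29;                 /* step 3 */
    }
    return x;
}

/* Batched version: each step runs over ALL lanes before the next step,
   which is exactly the access pattern SSE/AVX packed instructions want.
   Output per lane is identical to the scalar version. */
void hash_batched(uint64_t s[LANES], int rounds) {
    for (int r = 0; r < rounds; r++) {
        for (int l = 0; l < LANES; l++) s[l] ^= s[l] >> 33;             /* step 1, all lanes */
        for (int l = 0; l < LANES; l++) s[l] *= 0xff51afd7ed558ccdULL;  /* step 2, all lanes */
        for (int l = 0; l < LANES; l++) s[l] ^= s[l] >> 29;             /* step 3, all lanes */
    }
}
```

Since the lanes never interact, the batched form computes the same values as the scalar form; the speedup comes purely from executing each step as one packed operation instead of four scalar ones.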
|
|
|
I also think we should keep "digital cash" or "anonymous" and "instant" in the title. I believe they are more important terms for newcomers than the current ones. More important for the end users. Imagine me as a new user: reading the current thread title, I would skip it. Please quote this if you agree.
Indeed. Users don't care about self-funding. They care about what the coin can do for them.
|
|
|
Does this help at all?

https://en.wikipedia.org/wiki/CLMUL_instruction_set
http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/carry-less-multiplication-instruction-in-gcm-mode-paper.pdf
http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/polynomial-multiplication-instructions-paper.pdf

"Using the PCLMULQDQ instruction, the performance of ECC can be boosted significantly on a range of IA processor cores, making it an attractive option over other public key algorithms."

There are also some newer "regular" instructions that might be useful for speedups (mulx, adcx, adox): http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf

MULX Instruction

The mulx instruction is an extension of the existing mul instruction, with the difference being in the effect on flags: mulx dest_hi, dest_lo, src1. The instruction also uses an implicit src2 register, edx or rdx, depending on whether the 32-bit or 64-bit version is being used. The operation is: dest_hi:dest_lo = src1 * r/edx. The reg/mem source operand src1 is multiplied by rdx/edx and the result is stored in the two destination registers dest_hi:dest_lo. No flags are modified. This provides two key advantages over the existing mul instruction:

- Greater flexibility in register usage, as the current mul destination registers are implicitly defined. With mulx, the destination registers may be distinct from the source operands, so that the source operands are not overwritten.
- Since no flags are modified, mulx instructions can be mixed with add-carry instructions without corrupting the carry chain.
ADCX/ADOX Instructions
The adcx and adox instructions are extensions of the adc instruction, designed to support two separate carry chains. They are defined as: adcx dest/src1, src2 adox dest/src1, src2 Both instructions compute the sum of src1 and src2 plus a carry-in and generate an output sum dest and a carry-out. The difference between these two instructions is that adcx uses the CF flag for the carry in and carry out (leaving the OF flag unchanged), whereas the adox instruction uses the OF flag for the carry in and carry out (leaving the CF flag unchanged).
The primary advantage of these instructions over adc is that they support two independent carry chains. Note that the two carry chains can be initialized by an instruction that clears both the CF and OF flags, for example "xor reg,reg".
As for AVX-512, I think AVX-512DQ (Q = quadword) might do (some of) the job, but it will be too limited in availability (Xeons only).
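A portable way to express the mulx semantics quoted above (dest_hi:dest_lo = src1 * rdx, no flags touched) is a 64x64-to-128-bit multiply; GCC and Clang lower this to a mulx when compiling with -mbmi2. The limb adder below shows the single carry chain that adcx/adox generalize to two independent chains. Note that unsigned __int128 is a GCC/Clang extension, not standard C:

```c
#include <stdint.h>

/* dest_hi:dest_lo = a * b -- the operation mulx performs, minus the
   register/flag details. With -mbmi2 this compiles to a single mulx. */
void mul64x64(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo) {
    unsigned __int128 p = (unsigned __int128)a * b;
    *hi = (uint64_t)(p >> 64);
    *lo = (uint64_t)p;
}

/* One carry chain over n 64-bit limbs (what adc implements in hardware).
   adcx/adox exist so that TWO such chains can be interleaved in one loop,
   one carried through CF and one through OF, without the second chain
   clobbering the first chain's carry flag. */
uint64_t add_limbs(uint64_t *dst, const uint64_t *x, const uint64_t *y, int n) {
    uint64_t carry = 0;
    for (int i = 0; i < n; i++) {
        unsigned __int128 s = (unsigned __int128)x[i] + y[i] + carry;
        dst[i] = (uint64_t)s;
        carry  = (uint64_t)(s >> 64);
    }
    return carry;  /* final carry-out */
}
```

In multi-precision multiplication (the ECC use case the Intel paper targets), the two interleaved chains accumulate two rows of partial products per pass, which is where the adcx/adox pair pays off.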
|
|
|
Monero Review:
- Broken anonymity [✓]
- Broken scaling [✓]
- Broken game theory / broken Nash equilibrium [✓]
- Delusional, retarded and pumper-minded community [✓]
- Cryptographers with broken cryptographic ideas [✓]
- Broken decentralization that will tend to centralization [✓]
- Fraudulent claims regarding the strength of anonymity provided [✓]

Congratulations. You passed your "bad crypto" review with flying colors. Monero #REKT
|
|
|
Good crypto with mixin 0 and intentionally crippled PoW
|
|
|
Awesome read. I did not know that China had completed their gold-backed currency a week or two back. No need for USD for them anymore; demand for the $ is going down.
The article is "confusing" the new gold exchange in CNY with a gold-backed CNY...

The way I read it, they have acquired somewhere in the range of 100,000 tons of gold to back their currency. Having that type of reserve to guarantee the fiat notes stabilizes their currency and the economy, with less inflation to the currency, and puts it in demand among other nations.

This theory has been floating around for a very long time (minus one digit - ~10k tons), but it won't happen. The Chinese want an inflationary currency that they can devalue in order to compete in global commerce. A hard currency is not what they are after at this stage. It *might* be if they have some serious crisis with their currency reserves, but with those being in the trillions of USD, it doesn't seem likely anytime soon.
|
|
|
Is there any downside to using SIMD instructions (loading multiple values into different segments of the same registers and performing "packed" SSE/AVX instructions for the math operations) to batch-process multiple signatures per thread?
It would theoretically get the cpu cycles per signature much lower than doing it serially.
|
|
|
If there were no classes, subclasses, objects, complex types and all the abstraction clusterfucks, and you just had to deal with simple things like vars/consts and functions, how would that affect your programming?
Wouldn't you still be able to do your work? If you don't need them (in theory nobody does - they can just copy/paste similar code to emulate objects, or make multiple declarations instead of using complex types), you could just skip them altogether or bypass them. If the cpu can do everything with 10-15 math and logical operations - comparing, jumping, adding, moving data - so can we at a much higher level (but still not so high as to create clusterfucks).
OMG. I will let smooth handle this one, or just ignore it. I'm not up for teaching you why your conceptualization is extremely naive.

As I said, I'm not a programmer, and thus my use cases for "complex" language features have always tended to zero. And even if I were, I'm not sure I'd prefer the more complex language just for the sake of it. For example, is ad-hoc polymorphism a necessity for my use case, or can I do the same by emulating polymorphism and breaking it down? If I can break it down, I will - instead of, say, going to a more complex language. At least that's my theory of what I expect I'd do if faced with the problem.

In any case, I think you said one of the applications you created was in assembly. Obviously the language problems didn't affect you - the sky was the limit (= the hardware limitations were the only limit). And I doubt the code of the '80s was too "evolved" in terms of complexity anyway. I've seen sources from the '70s and '80s - even for things like compilers and OSes... they are full of if/then/else, loops, functions, and that's pretty much it. It's like a more "readable" version of the underlying assembly, expressed in C, Pascal or something similar.

And you know what? The software back then was way more reliable than today's. Back then you had an OS or an app, and it wasn't *supposed* to be followed by 1500 patches. It was supposed to work out of the box, as intended. I don't know if it was because everything was broken down to simple uses of the language, or because they tested the code 100 times more, or because the uses of the code were more limited and thus the code much simpler, but that was the case. Something went wrong since then.

I'm reading it, but I don't understand everything.

P.S. With first-class functions and closures, you can model most of the same semantics, but it gets hairy.
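The "break it down" position in the exchange above is an old one: language-level polymorphism can be emulated with nothing but structs and function pointers, which is also roughly what closures buy you. A hypothetical sketch of hand-rolled dynamic dispatch (all names here are mine, for illustration):

```c
/* Polymorphism without classes: a record carrying its own behavior. */
typedef struct shape Shape;
struct shape {
    double a, b;                     /* meaning depends on the variant */
    double (*area)(const Shape *);   /* a "virtual method" by hand */
};

double rect_area(const Shape *s) { return s->a * s->b; }
double tri_area(const Shape *s)  { return 0.5 * s->a * s->b; }

/* Generic code that neither knows nor cares which variant it was given. */
double total_area(const Shape *shapes, int n) {
    double t = 0.0;
    for (int i = 0; i < n; i++)
        t += shapes[i].area(&shapes[i]);  /* dynamic dispatch */
    return t;
}
```

This is the trade the post describes: the mechanism is simple enough to write with "vars and functions" alone, but you give up the compiler checking that every variant actually provides the behavior, which is what the "it gets hairy" remark is about.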
Hmm...
|
|
|
|