Why does there have to be any limit on transactions per second?
To allow full nodes to operate with limited bandwidth, i.e. to allow decentralization.
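The bandwidth argument can be made concrete with back-of-envelope arithmetic. The parameters below are assumed Bitcoin-like values (1 MB blocks, ~250-byte transactions, 10-minute target), chosen for illustration, not taken from the post:

```python
# Back-of-envelope: block size and block interval cap throughput,
# and hence the bandwidth every full node needs just to keep up.
# Assumed Bitcoin-like parameters (illustrative):
block_size_bytes = 1_000_000   # 1 MB block size limit
avg_tx_bytes = 250             # typical transaction size
block_interval_s = 600         # 10-minute block target

tx_per_block = block_size_bytes // avg_tx_bytes
tps = tx_per_block / block_interval_s
print(f"{tx_per_block} tx/block -> {tps:.1f} tx/s")

# Sustained bandwidth needed just to receive blocks:
print(f"~{block_size_bytes / block_interval_s / 1000:.1f} kB/s")
```

Raising the cap raises throughput linearly, but the bandwidth (and storage) every full node must sustain rises with it, which is exactly the decentralization trade-off.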
|
|
|
There is no actual 'time' in any cryptocurrency, and anything you might find in blocks called a 'time stamp' is not to be trusted, because in a p2p environment nodes lie. This is what Satoshi solved in his paper: a time stamping algorithm called PoW.
Wrong; blocks need timestamps in order to retarget the PoW difficulty. Quoting from https://en.bitcoin.it/wiki/Difficulty: "The difficulty is adjusted every 2016 blocks based on the time it took to find the previous 2016 blocks." Blocks can lie only a little about their timestamp; if it's too far out of whack, the block is not relayed.
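The retarget rule the wiki describes can be sketched as follows. This is a simplification: the real rule operates on the compact 'bits' target encoding and measures the timespan with a well-known off-by-one, but the clamping shown here is why lying timestamps can only move difficulty a bounded amount:

```python
def retarget(old_target, actual_timespan_s,
             expected_timespan_s=2016 * 600):
    """Sketch of Bitcoin's difficulty retarget (simplified)."""
    # Clamp to [expected/4, expected*4]: lying timestamps can move
    # difficulty only a bounded amount per retarget period.
    actual = max(expected_timespan_s // 4,
                 min(expected_timespan_s * 4, actual_timespan_s))
    # A larger target means easier blocks; if blocks came too fast
    # (small timespan), the target shrinks and difficulty rises.
    return old_target * actual // expected_timespan_s

old = 1 << 220
assert retarget(old, 2016 * 600) == old        # on schedule: unchanged
assert retarget(old, 2016 * 300) == old // 2   # twice as fast: 2x harder
assert retarget(old, 0) == old // 4            # clamp limits the swing
```

The "not relayed if out of whack" part comes from separate sanity rules: a block's timestamp must not be more than 2 hours in the future and must exceed the median of the previous 11 blocks.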
|
|
|
A "CPU Only" algorithm is nearly impossible. AxiomMemHash lowers the incentive for GPU mining by making the algorithm more GPU-resistant. As has been explained many times in this thread, a GPU gets only 2-3 times more hash on average with Axiom's algo, compared to other algorithms where it is 100-1000x faster than a CPU.
How about comparing to other PoWs where GPUs are no faster?
|
|
|
That's an interesting concept for PoW (I know I'm very late to it), but I don't see any coin using it, and it's been around since 2014... very interesting indeed ... i would definitely look into that with granite ... #crysx What's granite? Another GPU info tool like GPU-Z? Are there any such tools available for Linux?
|
|
|
Do you have a watt meter? How much power is it using?
I wonder what info these tools like GPU-Z would report for my Cuckoo Cycle CUDA miner, but I don't have easy access to a Windows system. Could someone clone https://github.com/tromp/cuckoo, build a size 2^32 CUDA miner with

cd src
make cuda32

run it with

time ./cuda32 -h 0 -n 7 -t 512

(takes about 80 sec on a GTX 980) and report on the info from GPU-Z? For a simple introduction to Cuckoo Cycle, please check my recent blog post at http://cryptorials.io/beyond-hashcash-proof-work-theres-mining-hashing/
|
|
|
I think BURST's PoC (proof of capacity) algo is worth a closer look, because it removes the PoW and PoS design flaws (long-term secure, affordable decentralization). Since the algo is based on precomputed data comparable to rainbow tables, there is no way to develop specialized centralized hardware like ASICs for it (in terms of running costs, as capacity replacement). For decentralization this means everyone can buy regular HDDs in the next shop around the corner, or use spare capacity. Compared to PoW there are almost "no" running costs. The coin has existed for over a year now, and instead of a whitepaper it has an over-1000-page-long bitcointalk thread here: https://bitcointalk.org/index.php?topic=731923.0
Quoting from https://eprint.iacr.org/2015/528.pdf: "Perhaps the most serious security issue with Burstcoin is that it allows for time-memory trade-offs: a miner doing just a small amount of extra computation can mine at the same rate as an honest miner while using just a small fraction of the disk-space that an honest miner would."
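The time-memory trade-off the eprint quote warns about can be illustrated with a toy model. This is illustration only: real Burst plotting uses Shabal-256 hash chains, not the SHA-256 stand-in below:

```python
import hashlib

def scoop(seed: bytes, i: int) -> bytes:
    """Toy plot value: the i-th element of a hash chain from a seed.
    (Illustration only; real Burst plotting uses Shabal-256.)"""
    h = seed
    for _ in range(i + 1):
        h = hashlib.sha256(h).digest()
    return h

N = 64
seed = b"plot-seed"

# Honest miner: precompute and store all N values (disk-heavy).
stored_plot = [scoop(seed, i) for i in range(N)]

# Trade-off miner: stores only the tiny seed and recomputes any
# requested value on demand with a little extra CPU -- mining at the
# same rate with a fraction of the disk, as the quote warns.
def lookup_cheap(i: int) -> bytes:
    return scoop(seed, i)

assert stored_plot[17] == lookup_cheap(17)
```

Whenever the stored data is deterministically derivable from a small seed, "proof of capacity" risks degenerating into proof of (cheap) computation.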
|
|
|
What is the name of your system or coin?
Classified till the announcement. I'm guessing... Coin-From-Beyond
|
|
|
2. Allow only one transaction per block
4. Argue whether the transaction should be limited to 1MB or 8MB or ... Sorry; couldn't resist :-)
|
|
|
An algorithm with a much bigger memory footprint, like cuckoo, without a corresponding increase in verification time, might be sized for L4 though.
This focus on cache sizes may be unwarranted. If you benchmark Cuckoo Cycle at increasing memory sizes, you only see a small slowdown in access latency (~40%) when moving from fitting in the 12MB/core on-chip cache to way beyond it.
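A rough way to observe this effect yourself is a pointer-chasing microbenchmark. This is only a sketch: Python's interpreter overhead damps the cache effect that a C version would show starkly, but the trend survives:

```python
import random
import time

def chase(n: int, steps: int = 100_000) -> float:
    """Average ns per dependent access while pointer-chasing a random
    cycle over n slots. Every load depends on the previous one, so
    memory latency cannot be hidden by out-of-order execution."""
    perm = list(range(n))
    random.shuffle(perm)
    nxt = [0] * n
    for a, b in zip(perm, perm[1:] + perm[:1]):
        nxt[a] = b          # link the permutation into one big cycle
    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = nxt[i]
    return (time.perf_counter() - t0) / steps * 1e9

for n in (1 << 10, 1 << 16, 1 << 22):
    print(f"{n:>8} slots: {chase(n):6.1f} ns/access")
```

The smallest size fits comfortably in L1/L2 cache, the largest spills to DRAM; the gap between the two lines is the latency penalty under discussion.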
|
|
|
The next attempt at an improvement over Scrypt (which I wrote down in 2013, well before the Cryptonite proof-of-work hash was published) was to not just read pseudo-randomly as described above, but to write new pseudo-random values to each pseudo-randomly accessed location. This defeats storing only every Nth random access location. However, it is still possible to trade latency for computation by storing only the location that was accessed to compute the value, instead of storing the entire value. The compute-bound version of the algorithm cannot be defeated by reading numerous locations before storing a value, because only the starting seed need be stored.
I fail to make sense of the last two sentences. Could you elaborate?
"Thus I concluded it is always possible to trade computation for latency in any memory-hard hash function."
Why limit yourself to hash functions? See http://cryptorials.io/beyond-hashcash-proof-work-theres-mining-hashing/
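A toy sketch of the write-hard scheme in the quoted post, and of why storing only the starting seed suffices to trade memory for computation. SHA-256 and the parameters here are stand-ins, not the actual 2013 design:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

def fill(seed: bytes, n: int, rounds: int) -> list:
    """Write-hard memory fill as the quote describes: each round
    reads a pseudo-random location, hashes it into the running state,
    and writes the new pseudo-random value back to that location."""
    mem = [h(seed, i.to_bytes(4, "big")) for i in range(n)]
    cur = h(seed)
    for _ in range(rounds):
        j = int.from_bytes(cur[:8], "big") % n
        cur = h(cur, mem[j])
        mem[j] = cur
    return mem

N, ROUNDS = 256, 1024
seed = b"toy-seed"

# A memory-bound miner keeps all N cells resident...
full = fill(seed, N, ROUNDS)

# ...but every cell is ultimately derived from the seed, so a
# compute-bound miner need store only the seed and can recompute any
# cell on demand, trading memory for (much) extra hashing -- the
# trade the last sentence of the quote refers to.
assert fill(seed, N, ROUNDS)[5] == full[5]
```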
|
|
|
somewhat 'asic-resistant', at least so far.
ASIC resistance is not a temporal property. Scrypt was never particularly ASIC-resistant, and neither is X11. Everybody expects X11 ASICs to arrive sooner or later, once it's profitable enough, e.g. once X11 coins exceed Litecoin in market cap (which seems unlikely to ever happen). It would appear that many (dozens? hundreds of?) MB of memory are needed to provide decent ASIC resistance. Ethereum's ethash, which requires 1GB for efficient mining, can be said to be pretty ASIC-resistant.
|
|
|
Do the transaction links point to related transactions, such as inputs and outputs in bitcoin, or to completely unrelated ones?
Outputs consumed by the tx are already linked to. These are unrelated and should be recent tips of the DAG.
|
|
|
I am looking into some of the hashing algos used in today's cryptocoins, and am hoping someone here can give me a bit more insight. Basically, what I'm looking for is a few informed individuals who can give me some pros and cons, as well as personal opinions, on the different available algorithms (why they are good for cryptocoins specifically, whether they are "ASIC-resistant", etc.).
You may want to read this article, http://cryptorials.io/beyond-hashcash-proof-work-theres-mining-hashing/, explaining that there's more to mining than hashing.
|
|
|
And C++ is one of the most complex, grotesque languages ever with perhaps only Perl and Brainfuck making it look good.
Your mention of Brainfuck is quite off the mark, as it is one of the simplest programming languages imaginable (the fact that writing programs in Brainfuck is anything but simple is a consequence of the language being simplified to an extreme). https://en.wikipedia.org/wiki/Brainfuck
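The claim is easy to back up: the entire language is eight single-character commands, so a complete interpreter fits in a couple dozen lines. A sketch, using the conventional 30,000-cell tape:

```python
def bf(code: str, inp: bytes = b"") -> bytes:
    """A complete Brainfuck interpreter. The whole language is eight
    commands: > < + - . , [ ] -- which is the sense in which it is
    'simple', however painful it is to program in."""
    # Pre-match brackets so [ and ] can jump to each other.
    jump, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    tape, ptr, pc, out, it = [0] * 30000, 0, 0, [], iter(inp)
    while pc < len(code):
        c = code[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(tape[ptr])
        elif c == ",": tape[ptr] = next(it, 0)
        elif c == "[" and tape[ptr] == 0: pc = jump[pc]
        elif c == "]" and tape[ptr] != 0: pc = jump[pc]
        pc += 1
    return bytes(out)

# 8*8 + 1 = 65 = ASCII 'A'
print(bf("++++++++[>++++++++<-]>+."))  # -> b'A'
```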
|
|
|
I would feel more convinced of the good behaviour of this DAG if every transaction could be viewed as the end point of a totally ordered valid sequence of transactions.
That would prevent the situation you describe where two ancestor transactions are in conflict. I think that should make the joint descendant transaction invalid.
For a transaction referencing k parents tx_1..tx_k, we would like to define its total order in terms of those of its parents. Let TX be the last transaction in the intersection of their histories. Then all these histories agree up to TX, and we need to define a valid merge of the k sequence suffixes. Some merges will have conflicts, which is something to be avoided. The problem reduces to the question of which of the next available transactions (whose number is between 2 and k) should extend the total order defined so far.
Have you considered whether this approach to well-ordering is feasible?
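One hypothetical way to make the merge deterministic: always extend the order with the smallest-id transaction among the parents' next unmerged transactions, and fail if two conflicting transactions would both have to be included. This is only a sketch of the idea discussed above, not a worked-out protocol, and the function and its parameters are inventions for illustration:

```python
def merge_order(parents, conflicts):
    """Hypothetical sketch: each parent carries a totally ordered
    history (a list of tx ids, oldest first); histories share a
    common prefix. 'conflicts' is a set of frozensets naming pairs
    of mutually conflicting tx ids."""
    merged, seen = [], set()
    pos = [0] * len(parents)
    while True:
        # Skip past transactions already placed in the merged order.
        for i, hist in enumerate(parents):
            while pos[i] < len(hist) and hist[pos[i]] in seen:
                pos[i] += 1
        heads = [hist[p] for p, hist in zip(pos, parents) if p < len(hist)]
        if not heads:
            return merged
        nxt = min(heads)          # deterministic tie-break: smallest id
        for prev in merged:
            if frozenset((prev, nxt)) in conflicts:
                raise ValueError(f"conflict: {prev} vs {nxt}")
        merged.append(nxt)
        seen.add(nxt)

# Parents agree on [t1], diverge, then share t4:
print(merge_order([["t1", "t3", "t4"], ["t1", "t2", "t4"]], set()))
# -> ['t1', 't2', 't3', 't4']
```

The interesting open question from the post remains: whether a merge rule like this can be made to respect every parent's internal order, or whether conflicting ancestors must simply invalidate the joint descendant.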
|
|
|
Thanks, sandor111, but that is a big commit with thousands of lines changed in multiple files. Could you point to or describe the change that had the most impact on performance?
|
|
|
There is nothing to support. Coin is done and those behind it have moved on. This 85% increase in the cpu miner just proves how crippled the miner was at launch.
I'm curious now. What was the nature of this "crippling" that allowed for such a large speedup? Can we see the change in source code? Or an explanation of how the computation is being done differently now?
|
|
|
Is Cuckoo currently used in any coin? sounds like a very fair distribution method.
No; it's not in use as far as I know. As to fair distribution, that depends more on the reward schedule than on the PoW. Nothing is fair when the majority of coins are released in a few weeks, or even a few days. A fair distribution would start with negligible rewards, which slowly climb over a period of months, reaching their peak in maybe a year, when mining software has had a chance to mature and spread to many platforms (e.g. phones charging overnight), and then slowly decline over many years.
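A reward schedule of the shape described might look like the following. The numbers are purely illustrative assumptions (peak at one year, four-year decline half-life), not any actual coin's parameters:

```python
import math

def reward(t_days: float, peak_day: float = 365.0,
           half_life_days: float = 4 * 365.0) -> float:
    """Hypothetical reward curve: negligible at launch, a smooth
    climb peaking around one year, then a slow multi-year decline.
    Returns the block reward as a fraction of the peak reward."""
    if t_days < peak_day:
        # Cosine ramp from 0 at launch to 1 at the peak.
        return 0.5 * (1 - math.cos(math.pi * t_days / peak_day))
    # Exponential decay with a multi-year half-life after the peak.
    return 0.5 ** ((t_days - peak_day) / half_life_days)

for d in (0, 90, 365, 2 * 365, 5 * 365):
    print(f"day {d:4}: {reward(d):.3f} of peak block reward")
```

The ramp removes the instamine incentive, and the long tail keeps mining rewarding after the software has matured.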
|
|
|
CPU only is a myth.
"CPU only" means "no performance advantage for GPU" (a more subtle meaning would be no performance-per-watt advantage for GPU, but since electricity is considered "free" in some circumstances, I prefer the simpler notion of absolute performance, which is also much easier to measure). How is that a myth?

It is a myth in that it has not yet been demonstrated. As far as I know (and of course, I am not all-knowing) there is no design for a CPU-only algo that can (and has) not been made to be drastically outperformed by a GPU. Certainly many have been claimed, but disproven. Again, I am not omniscient, so it is entirely possible it is out there somewhere, possibly even in plain sight and of common knowledge (just not to MY knowledge)... I would love, for my own academic reasons, to see and study one if there is, so please do provide a link if you are aware of any.

The reason why so many "CPU-only" claiming PoWs were disproven is that they were doing too much computation. Of course a GPU is going to excel at doing computation. In contrast, in my Cuckoo Cycle PoW, mining is designed to *avoid* computation, doing just the minimum possible computation needed to generate random accesses to arbitrarily scalable amounts of memory. In mining, the majority of runtime is spent waiting for global memory access latency, which also makes it a rather energy-efficient PoW. The reason it can scale memory arbitrarily is that, unlike almost every other "memory hard" design, this proof of work is asymmetric, so verification remains instant and requires no memory at all. There's a $500 bounty on convincingly disproving my claim that GPUs have no advantage over CPUs; see https://github.com/tromp/cuckoo, which also has a whitepaper.
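The structure of Cuckoo-style mining, hash-generated graph edges plus cycle detection dominated by random memory accesses, can be sketched in miniature. SHA-256 stands in for the siphash-2-4 the real PoW uses, and this toy detects any cycle rather than the length-42 cycles the real PoW requires:

```python
import hashlib

def edge(header: bytes, nonce: int, n: int):
    """Each nonce maps to one edge of a bipartite graph with n nodes
    per side. (SHA-256 is a stand-in for siphash-2-4 here.)"""
    h = hashlib.sha256(header + nonce.to_bytes(4, "big")).digest()
    return (int.from_bytes(h[:4], "big") % n,
            int.from_bytes(h[4:8], "big") % n)

def has_cycle(header: bytes, num_nonces: int, n: int) -> bool:
    """Union-find over the 2n nodes: an edge whose endpoints are
    already connected closes a cycle. The work is dominated by
    random accesses into 'parent', not by computation -- the
    property the post stresses."""
    parent = list(range(2 * n))     # left nodes 0..n-1, right n..2n-1

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for nonce in range(num_nonces):
        u, v = edge(header, nonce, n)
        ru, rv = find(u), find(n + v)
        if ru == rv:
            return True             # this edge completes a cycle
        parent[ru] = rv
    return False
```

With at least as many edges as nodes (num_nonces >= 2*n), a cycle is guaranteed by counting, since a forest on 2n nodes has at most 2n-1 edges; the real PoW works at lower edge densities, where long cycles exist only with modest probability, which is what makes finding one a proof of work.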
|
|
|
|