I'm looking to add to my library of automated trading strategies and I have a 100k USD budget for the right ones.
The right ones will have:
* Complete automation, no human influence
* Profit + equity graph for at least four months
* Low drawdown
* Enough trades to confirm strategy confidence
PM me, or post here with questions.
|
|
|
If we want to discourage double spends, why aren't we just enforcing the rule that spends from the same address must use the same r value in their ECDSA signatures? Any attempted double spend would then reveal the private key*, making the attempt futile, while standard transactions would still function, because change is always sent back to a new address anyway.

It ought to be possible for nodes to verify and reject any spend from an address that uses a different r value from the previous spends. Obviously I'm overlooking something horribly obvious here; anyone care to enlighten me?

Cheers, Paul.

*) https://bitcointalk.org/index.php?topic=271486.0
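For anyone who hasn't read the linked thread, here is a minimal sketch of why nonce reuse is fatal. All the numbers are made up (r is not a real curve point), but the algebra is the standard recovery of the key from two signatures sharing the same r:

```python
# secp256k1 group order
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def recover_private_key(r, s1, z1, s2, z2):
    """Two ECDSA signatures (r, s1) and (r, s2) over message hashes z1, z2
    that reused the same nonce k satisfy s_i = k^-1 * (z_i + r*d) mod n.
    Subtracting the two equations gives k, then either one gives d."""
    k = (z1 - z2) * pow(s1 - s2, -1, n) % n   # recover the shared nonce
    d = (s1 * k - z1) * pow(r, -1, n) % n     # recover the private key
    return d

# Demo with made-up key, nonce and hashes. A double spend means two
# different messages signed from the same address, hence (under the
# proposed rule) the same r:
d, k = 0x1234567890ABCDEF, 0x2222222222222222
z1, z2 = 0xAAAA, 0xBBBB
r = 0x3333333333333333      # would be the x-coordinate of k*G in reality
s1 = pow(k, -1, n) * (z1 + r * d) % n
s2 = pow(k, -1, n) * (z2 + r * d) % n
assert recover_private_key(r, s1, z1, s2, z2) == d
```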
|
|
|
If you want to solve the double spend problem, you have to start with something that isn't double spendable. In bitcoin this is CPU cycles, which are a proxy for time. You can't double spend time.
Once you have this, you can use it to build your consensus mechanism. If you start out your design by choosing to use something which is double spendable (a transaction, stake*, a vote) then you're going to be chasing your own tail.
I keep having to remind myself that although financial markets operate at a Nash equilibrium without using PoW, they are built on the notion of an atomic transaction (handled by the centralised exchanges) which cannot be double spent. Chasing some exotic consensus design modelled on their operation will therefore be a lost cause.
You have to find something to use which you cannot double spend.
edit: *) except in the special case where that stake is stationary for the entire duration of the blockchain
|
|
|
We all know why bitcoin was created. Double spends are the problem. Without double spends, you don't need a blockchain, miners or any of this extra complexity. Yet, because information is easy to duplicate, we must accept that double spends are inevitable.
So, as a thought experiment, what happens if you ignore double spends completely?
In a system with UTXOs, like bitcoin, if you ignore double spends and simply credit each conflicting spend of a given UTXO to its recipient, the 'victim' of the double spend doesn't lose out, but the money supply of the system increases, leading to hyperinflation, and everybody using the system pays the price. This is socialised losses.
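A toy sketch of that UTXO case (hypothetical ledger): both recipients of the conflicting spends get credited, and the supply doubles:

```python
utxos = {"utxo_0": 10.0}   # one 10-coin output
new_outputs = {}

def spend(utxo_id, recipient):
    # Credit the recipient without checking whether the UTXO was already spent.
    new_outputs[utxo_id + "->" + recipient] = utxos[utxo_id]

spend("utxo_0", "alice")   # honest spend
spend("utxo_0", "bob")     # conflicting spend of the same output, also credited
print(sum(utxos.values()), "->", sum(new_outputs.values()))   # 10.0 -> 20.0
```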
If you had a system with accounts instead of UTXOs (like NXT), then a double spend would leave the sender's balance negative. If you require the sender to post collateral in his account equal to the largest transaction he will send (analogous to the Lightning Network), then a single double spend is mitigated, but if he spends the same amount more than twice, the balance goes negative again and we have the same hyperinflation problem.
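In toy form (illustrative numbers): collateral equal to the maximum transaction size absorbs exactly one double spend before the balance goes negative:

```python
MAX_TX = 10.0
balance = MAX_TX + MAX_TX   # funds for one spend, plus posted collateral

for i in range(1, 4):
    balance -= MAX_TX       # every conflicting copy of the spend is honoured
    print(f"spend {i}: balance = {balance}")
# spend 1: 10.0 (normal spend), spend 2: 0.0 (double spend absorbed by
# collateral), spend 3: -10.0 (supply inflated again)
```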
The question I've been pondering is whether you could create some kind of Nash equilibrium around the hyperinflation caused by simply allowing double spends.
|
|
|
I was just doing some back-of-a-napkin calculations to figure out the hashing power of the users of bitcoin, and whether it would even be worth trying to harness it in some way.
Taking the difficulty as the expected number of hashes needed to solve a block, with the current difficulty at 3007383866429, assuming normal CPUs averaging 10 MH/s (this may be optimistic), and assuming that sending a transaction takes 1 second of hashing, we'd have enough hash power from plain transaction sending to rival a 1% hash-power miner if the network were sending 5 transactions per second*.
Is this significant enough to try and capture? Is it even possible to do so?
* 3007383866429 hashes = 3007383 megahashes; 3007383 MH / 10 MH/s ≈ 300738 CPU-seconds to solve one block; 300738 / 600 seconds per block ≈ 500 tps (at 1 second of hashing each) for 100% of network hash power
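The same napkin maths as code, taking the difficulty figure as the expected hash count per block, per the assumption above:

```python
difficulty_hashes = 3_007_383_866_429   # expected hashes per block (post's assumption)
cpu_rate = 10e6                         # 10 MH/s per user CPU (optimistic)
block_interval = 600                    # seconds per block

cpu_seconds_per_block = difficulty_hashes / cpu_rate          # ~300,738 s
tps_for_full_hashpower = cpu_seconds_per_block / block_interval
print(tps_for_full_hashpower)           # ~501 tps, so 5 tps of 1s-hashing senders ~ 1%
```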
|
|
|
Hello all,
I've been mulling over what the requirements are for inter-exchange arbitrage to affect the price of a particular market.
For example, if you have two exchanges, with the same market structure, both accepting deposits and withdrawals in base and quote currencies, it seems arbitrage is clearly possible, should a pricing discrepancy present itself.
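For that spot-spot case, the arbitrage condition is just a fee-adjusted price comparison; a minimal sketch with hypothetical prices and fees:

```python
ask_a = 0.0100   # LTC/BTC ask on exchange A (buy LTC here)
bid_b = 0.0105   # LTC/BTC bid on exchange B (sell LTC here)
fee = 0.002      # taker fee per side

# Buy on A, withdraw, sell on B; positive after fees means free profit.
profit_per_ltc = bid_b * (1 - fee) - ask_a * (1 + fee)
if profit_per_ltc > 0:
    print(f"arbitrage: lock in {profit_per_ltc:.6f} BTC per LTC")
```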
But what happens if the market structure is different between exchanges?
Say exchange A has a 'standard' LTC/BTC market where you sell 1 BTC and instantly receive 0.1 LTC, while exchange B is a CFD exchange trading LTC/BTC, where you go short 1 BTC and then have to close the trade at a later point.
Is arbitrage still possible?
Cheers, Paul.
|
|
|
Hi there,
It seems to me that no one is talking about something quite obvious that has come to light recently regarding scaling bitcoin: none of the segwit/blocksize increase proposals are the right answer. That's why there hasn't been unanimous agreement on the right path to choose. Both of the major camps are lobbying for changes which are only stop-gap measures.
IMO, something quite radical needs to change in the way bitcoin works in order to facilitate proper scaling and to decrease centralisation as far as physically possible in a system with a mining incentive. What we need is something like this:
*) Homogenise - remove the distinction between miners and users of the system
*) Reduce blocks to one per transaction
*) Users mine their own blocks when sending a transaction; no other user can mine another user's block
*) Users can choose their own difficulty level when mining their blocks
*) Block reward is proportional to the chosen difficulty (up to a Moore's-law-based maximum and with a spam-preventing minimum)
*) Preserve orphaned branches of blocks and include them in a new LCR scoring system so we maintain deterministic, global state
These measures:
*) Allow bitcoin to scale indefinitely, as the block size is now as small as it possibly can be, and there is no fixed block interval, as these tiny blocks arrive constantly
*) Miners can still participate, but instead of enabling transactions to be sent/received, their only job is now securing the chain by providing hashing power; they still earn their mining reward
*) Chain security remains strong; miners get paid for being on the longest (largest cumulative difficulty) branch, and this weighting now includes orphaned branches which are referenced within each block, so no history-based attacks are possible
*) Decentralisation is maximised because there is no need for mining pools any more, since variance in mining reward is now under the control of the user. Moreover, since only you can mine your own blocks (the PoW is signed by you), mining pools are unattractive anyway
Thoughts?
edit: draft whitepaper: https://github.com/wildbunny/docs/blob/master/T.E.T.O-draft.pdf
Cheers, Paul.
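To make the 'mine your own block' idea concrete, here is an illustrative sketch; all the constants, and the way the PoW commits to the sender's key rather than being signed, are my own placeholder assumptions:

```python
import hashlib

MIN_DIFFICULTY = 1_000      # spam-prevention floor (hypothetical)
MAX_DIFFICULTY = 10**12     # Moore's-law-style ceiling (hypothetical)
REWARD_PER_UNIT = 1e-12     # coins per unit of difficulty (hypothetical)

def mine_own_block(tx_bytes, pubkey, difficulty):
    """Only the sender can mine this block: the PoW commits to their key.
    The reward is proportional to the difficulty the user chose."""
    difficulty = max(MIN_DIFFICULTY, min(difficulty, MAX_DIFFICULTY))
    target = 2**256 // difficulty
    nonce = 0
    while True:
        h = hashlib.sha256(tx_bytes + pubkey + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce, difficulty * REWARD_PER_UNIT
        nonce += 1

nonce, reward = mine_own_block(b"pay bob 1", b"paul-pubkey", 50_000)
print(nonce, reward)   # low difficulty = quick block, tiny reward; user's choice
```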
|
|
|
Hi there,
I'd like to open discussion about Swirlds' hashgraph consensus design as it pertains to cryptocurrencies.
http://www.swirlds.com/downloads/SWIRLDS-TR-2016-01.pdf
http://www.swirlds.com/downloads/Swirlds-and-Sybil-Attacks.pdf
At first glance this design has a lot of similarities with the way Ripple's consensus works, with the key difference that voting history is preserved. This means it stands a better chance of reducing vulnerability to long-range attacks and bootstrap poisoning. However, as the author notes in the second paper above, in its pure form it offers no protection from Sybil attacks.
The author suggests using stake from another blockchain to weight votes, to provide some protection against Sybil attacks, but does not talk about the inherent problems in doing so, such as the transient nature of the balance of any particular staking address in another blockchain. In addition, the author talks about using PoW to acquire voting weight, which does not suffer from the same transience problems associated with stake, but it is still largely PoS in nature, since voting weight persists or decays at a finite speed, unlike the way PoW functions in bitcoin, for example.
All in all, I think it has applications outside of cryptocurrency, but IMO it is poorly suited to the requirements of the consensus mechanism within one.
Cheers, Paul.
|
|
|
Hi everyone,
I'm posting this draft I worked on last year in the hope that it will inspire some debate. It's based on the work of Max Kaye (https://bitcointalk.org/index.php?topic=1057342.0), and proposes a blockless consensus mechanism for achieving an eventual total ordering of all transactions in a DAG-based cryptocurrency. The novel aspect is that it is the first system I am aware of that attempts to achieve a trustless total ordering of all transactions.
It's an incomplete draft; all of the proofs are missing, and there is an open question related to the claim that miners are incentivised to reference uncle transactions in their own new transactions, which is somewhat key to overall convergence.
https://github.com/wildbunny/docs/blob/master/T.E.T.O-draft.pdf
edit: points of interest:
* Permits a mining reward
* Deterministic global state
* Eliminates mining pools
* Maximum possible decentralisation
* Enables a safe-to-accept-transaction-as-confirmed metric
* Instant transactions
Anyway, happy reading!
Cheers, Paul.
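As a rough illustration of the uncle-referencing idea (my own toy structure, not taken from the draft): each new transaction references every tip it can see, so a transaction created after a fork knits the concurrent branches back together:

```python
dag = {}        # tx_id -> set of parent tx ids it references
tips = set()    # txs not yet referenced by any later tx

def add_tx(tx_id, seen_tips=None):
    # Reference every tip this sender can see, uncles included.
    parents = set(seen_tips if seen_tips is not None else tips)
    dag[tx_id] = parents
    tips.difference_update(parents)
    tips.add(tx_id)

add_tx("genesis")
add_tx("a", seen_tips={"genesis"})   # created concurrently...
add_tx("b", seen_tips={"genesis"})   # ...each saw only genesis
add_tx("c")                          # c references both branches
print(dag["c"])                      # {'a', 'b'} - the fork is merged
```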
|
|
|