Show Posts
Do you own the private key for any of the outputs in this TX? Perhaps a change address? If you do, you can use CPFP (child-pays-for-parent) to spend that output with a larger fee.
|
|
|
Yes, I agree this is quite obvious and probably not a new idea, but given the ongoing debate on block size limits, I wonder whether this proposal is an acceptable solution. It certainly motivates old nodes to upgrade.
|
|
|
If the majority of nodes agree that every valid block after block #N (e.g. 400,000) must contain only the coinbase transaction, with zero block reward and with an alt-block hash attached to the coinbase message (where the alt-block hash is the hash of the corresponding block in blockchain 2.0, transmitted separately), then it is technically only a soft fork, because the set of valid blocks accepted by new nodes is a strict subset of the set accepted by old nodes. The old nodes will be forced to accept a perma-frozen blockchain in which no new coins are generated and no transactions ever confirm.
|
|
|
So I have seen a few people sending bitcoins without any miner fees, and the amount eventually got sent back after a week or so.
So, what do you think the techniques (or tips) for sending bitcoins without miner fees would be? This would be really helpful for sending small amounts of less than BTC0.001, since the transaction fee would be more than 10% of the actual amount transferred.
You must spend inputs with sufficient priority. That means picking inputs that are large enough (BTC1 or higher), old enough (confirmed in the blockchain for 1 day or longer), and not too fragmented (ideally no more than a couple of inputs). For example, to send BTC0.001 you can pick a 1-day-old input of BTC1 and send BTC0.999 change back to yourself.
|
|
|
Original sum of block rewards = 20,999,999.97690000 BTC
Proposed sum of block rewards = 21,000,033.29639948 BTC
So, a way to add 0.000159% inflation in a sneaky way? The 0.000159% is an unfortunate rounding error and is definitely unintended. But it shouldn't be hard to tweak the parameters to hit the exact number (or something within +/- a few thousand satoshis).
|
|
|
I agree it is a hard fork, but the transition can be planned many years in advance (for example, at the 3rd halving, which occurs at block 630,000, around the year 2020).
|
|
|
Simulation shows that starting from block 6,930,000 both reward functions return 0.
Original sum of block rewards = 20,999,999.97690000 BTC
Proposed sum of block rewards = 21,000,033.29639948 BTC
|
|
|
It is well known that the bitcoin block reward (as a function of the block number) is not continuous: a discontinuity ("halving") occurs every 210,000 blocks, or roughly every 4 years, as illustrated in the bitcoin source code:

CAmount GetBlockValue(int nHeight, const CAmount& nFees)
{
    CAmount nSubsidy = 50 * COIN;
    int halvings = nHeight / Params().SubsidyHalvingInterval();

    // Force block reward to zero when right shift is undefined.
    if (halvings >= 64)
        return nFees;

    // Subsidy is cut in half every 210,000 blocks
    // which will occur approximately every 4 years.
    nSubsidy >>= halvings;

    return nSubsidy + nFees;
}

IMHO the halvings are disruptive events that negatively affect everyone. It isn't difficult to change the above code to make the block reward function continuous and piecewise linear while keeping the total limit of 21 million BTC unchanged, which would eliminate the discontinuity events in the future. The proposed function is:

CAmount GetBlockValue(int nHeight, const CAmount& nFees)
{
    CAmount nSubsidy = 50 * COIN;
    int halvingInterval = Params().SubsidyHalvingInterval();
    int halvings = nHeight / halvingInterval;
    int phase = nHeight % halvingInterval;

    // Force block reward to zero when right shift is undefined.
    if (halvings >= 64)
        return nFees;

    // Subsidy is a continuous and piecewise linear function that halves
    // every 210,000 blocks, which will occur approximately every 4 years.
    nSubsidy = (nSubsidy * (4 * halvingInterval - 2 * phase)) / (3 * halvingInterval);
    nSubsidy >>= halvings;

    return nSubsidy + nFees;
}
|
|
|
Non-standard TX can only be mined by the Eligius pool, and Eligius does not accept no-fee TX, period. Please always attach the proper transaction fees.
|
|
|
Normally OP_INVALIDOPCODE would cause script evaluation to return false, but what if it is inside the scope of an OP_IF ... OP_ENDIF pair?
Take a look at this tx:
77822fd6663c665104119cb7635352756dfc50da76a92d417ec1a12c518fad69
scriptPubKey is
OP_IF OP_INVALIDOPCODE 4effffffff 46726f6d.... OP_ENDIF
It seems that a scriptSig like the following will be accepted into the memory pool, but will it still fail to verify?
OP_1 OP_0
|
|
|
If it has been more than a few hours since your last attempt, and the non-standard output you're trying to spend is still unspent, then there's probably a conflicting tx spending the same output in the memory pool of Eligius, but it can never be confirmed because it does not have sufficient fees (Eligius requires a 0.1 mBTC minimum and 0.08192 mBTC per 1 KB).
So unfortunately you're pretty much stuck for however long it takes for the transaction to "die off" from the memory pool. That could be weeks, or even months. There's no other major pool that will accept your non-standard tx.
|
|
|
You're doing it wrong if your service generates thousands of dust-sized outputs. You need to consolidate your outputs as you go. I assume that a transaction under your current model looks like this:

Input (x1):
  Payer Address (X BTC)
Outputs (x3):
  Payee Address (Y BTC)
  Service Wallet Address (100 satoshis)
  Payer Change Address (X - Y - 0.000001 BTC)

What you should do instead is let the Service Wallet Address accumulate the 100 satoshis as you go:

Inputs (x2):
  Payer Address (X BTC)
  Service Wallet Address (Z BTC)
Outputs (x3):
  Payee Address (Y BTC)
  Service Wallet Address (Z + 0.000001 BTC)
  Payer Change Address (X - Y - 0.000001 BTC)
|
|
|
Sure, but larger tx are merely non-standard, not invalid. The Eligius pool allows non-standard tx to be included as long as fees are paid, but it will not propagate them to other nodes.
|
|
|
How did you try to redeem this output? There is a web form http://eligius.st/~wizkid057/newstats/pushtxn.php but we do not know whether it is connected directly to the pool node that accepts non-standard txs. The second way is to connect to the node with a non-standard client and send the raw tx from the console. But what is Eligius's IP address? To answer your question: yes, I've tried the web form. How do you directly dump the raw mempool on a remote node? And how do you find out which tx conflicts with yours? I have added the node to my bitcoin.conf, but when I type "bitcoin-cli getrawmempool" it just dumps my local mempool, right?
|
|
|
Well, it looks like Eligius hasn't found any blocks in the past 5 hours, so that's probably why your tx hasn't been mined... let's wait a bit longer.
Anyway thanks for the info!
|
|
|
The base58check encoding has a 32-bit checksum built in, so if you change one or two letters it is not only possible to detect the error but also, by brute-forcing candidate corrections against the checksum, to recover the correct key.
|
|
|
I've been watching this 16-of-16 multisig tx for a while (2ee6d8ea223e118075882edba876f01b30f407eb6c6d31c40bd6664a17f20f0c)... it has never been redeemed even though the solution is not hard to arrive at.
My conclusion is that someone has been spamming Eligius with a valid redemption tx but it is never included in a block due to insufficient fees. But because it is already in the mempool, no other valid tx can be accepted.
So does that mean the tx can never be redeemed?
|
|
|
This will also kill all ASIC-based mining, because the bottleneck becomes the ECDSA signing operation. A mining rig would only produce MH/s instead of the GH/s or TH/s of today.
|
|
|
I'm afraid that 0.0002 BTC is required for 1500 bytes. If you send it without a fee, it will most likely not get propagated.
Why is your TX so large? Can you consolidate some unspent outputs first?
The easiest way to get higher priority is to "bundle" in a larger output (say 1-2 BTC or higher, at least 1 day old).
|
|
|