I don't know what you guys are talking about. The fee is more than good, but that's not the problem: the transaction has an output of 147 sat. No one will mine this transaction; it is 100% sure not to confirm.
Ah, good catch, I missed that. The fee is a bit on the low side though. With the dust output, it will take longer to confirm, but I have seen several transactions with dust outputs that still confirmed.

Depends on what you call dust. To some nodes it's 2730, to some it's 546(?). If this transaction gets confirmed it will be new to me (but it won't), and the fee is 16 sat per byte, which is good.

To be fair, dust is usually 546 satoshis for a node with a 1000-satoshi minimum relay fee, by default. However, one could change the minimum relay fee and the dust threshold would change accordingly[1]. Nodes do not relay transactions with outputs below the dust threshold. As a rough estimate, the majority of the network runs Classic[2] and Core[1], which have a dust threshold of 546 satoshis by default. It boils down to whether a miner would want to accept it.

[1] https://github.com/bitcoin/bitcoin/blob/master/src/primitives/transaction.h#L163
[2] https://github.com/bitcoinclassic/bitcoinclassic/blob/develop/src/primitives/transaction.h#L140

Actually, for Core 0.11.2+ the dust threshold is 2730 satoshis. The minimum relay fee was bumped in 0.11.2 to address the spam attacks, and this increased the dust threshold to 2730 satoshis. The comments in the source just weren't updated to reflect this change. Also, with 0.12, the minimum relay fee is now variable depending on the state of the node's mempool, so the dust threshold could fluctuate a lot from node to node.
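The rule behind these numbers can be sketched in Python. This is a simplified model of Core's dust calculation (an output is treated as dust if spending it would cost more than a third of its value in fees, using the usual ~182-byte cost of creating and later spending a P2PKH output); it's a sketch of the logic referenced in [1], not the actual source:

```python
# Simplified sketch of Bitcoin Core's dust threshold (see the logic around
# primitives/transaction.h referenced above). An output is dust if spending
# it would cost more than 1/3 of its value in fees; creating + spending a
# P2PKH output costs roughly 182 bytes (34-byte output + 148-byte input).

def dust_threshold(min_relay_fee_per_kb: int) -> int:
    spend_cost_bytes = 182
    return 3 * spend_cost_bytes * min_relay_fee_per_kb // 1000

print(dust_threshold(1000))  # 546  -> default before the 0.11.2 fee bump
print(dust_threshold(5000))  # 2730 -> after the minimum relay fee was raised
```

This reproduces both numbers quoted above: 546 satoshis at the old 1000 sat/kB minimum relay fee, and 2730 satoshis after the bump in 0.11.2.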
|
|
|
A 300 MB mempool takes bitcoind to the limit on a 4 GB machine. I've seen it quit unexpectedly even on a 6 GB machine, both during the "stress test" periods that happened last year. So I think we definitely cannot extrapolate, especially when the machines in question were only running Ubuntu + bitcoind (0.10 and 0.11 at the time).
That limit was only recently introduced in Bitcoin Core 0.12, so setting it on 0.10 and 0.11 nodes shouldn't work at all since the option didn't even exist. I'm pretty sure it can be extrapolated; you just need to use 0.12 and simulate stress-test conditions. A question to fellow node owners: how much RAM do you have, what are you running, and what bitcoind version?
I have been running a build of the latest master branch for a while now (including when the stress tests were happening). I constantly update it. Sometimes I also have a testnet node running, and occasionally a segnet (segwit test network) node running as well. I also have Armory running, and when I have testnet up, usually Armory Testnet is up as well. These are usually all running simultaneously and I don't see any lags or crashes at all. I use Ubuntu 15.10 with 8 GB of RAM.
|
|
|
Bitcoin Core by default has a 300 MB mempool. AFAIK running Bitcoin Core requires a machine with at least 4 GB. I would try extrapolating from there, e.g. 600 MB for an 8 GB machine, etc.
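For reference, that default is a single setting. Assuming Bitcoin Core 0.12+ (where the option exists), a bitcoin.conf entry for the extrapolated value might look like this (the 600 is illustrative):

```
# bitcoin.conf (Bitcoin Core 0.12+; value illustrative)
maxmempool=600   # cap mempool memory usage at 600 MB instead of the 300 MB default
```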
|
|
|
Is it possible to reserve a spot on this campaign? I consider myself to be a high quality poster, though I am bound to my current campaign until the current term finishes.
Can I request the same?
|
|
|
Can Bitcoin survive without computers existing?
Seems like I've overlooked this part; the answer is certainly no.

Not necessarily. You could theoretically write out a transaction on a piece of paper. You could calculate the transaction hash and the signatures by hand. Then the transactions could be carried by runners to other people, who copy all of the data onto more paper and hand it to more runners to give to more people, and so on and so forth. If we had wires, there wouldn't need to be runners; it could be done over Morse code or something similar. In this scenario a node would be a person, and the network would be all of the runners going between nodes. Of course, this is entirely unfeasible, as each person would have to calculate the hash of a transaction and verify the transaction by hand whenever they receive a new one, and that would take ages to do.
|
|
|
Post a screenshot of your transactions tab. I think there will probably be something about transactions being conflicted. If not, can you also post the transaction ids of the transactions that you supposedly have received?
|
|
|
I posted in March; in fact, this post is my 4th this month. However, 154+14 =/= 161, so even with your explanation, I still don't understand. And what about the No Board post?
You have potential activity and by posting you are using up that potential. Bitcointalk doesn't update the activity field immediately but my site will calculate the activity, posts, and potential activity on the spot so it is always up to date. The No Board post is your post in the Philippines section which apparently is relatively new so I haven't added it into the list of sections yet. I probably won't do that for a while though since updating the site software requires shutting down the webserver which clears the queue and all of the saved tokens.
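For what it's worth, the relationship between posts and potential activity can be sketched with the commonly cited formula; the "14 points per two-week activity period, capped by post count" rule here is my assumption of how the forum calculates it:

```python
# Sketch of the commonly cited Bitcointalk activity formula (assumption:
# activity = min(total posts, 14 per two-week "activity period")).

def activity(posts: int, periods_active: int) -> int:
    potential = 14 * periods_active  # "potential activity"
    return min(posts, potential)

# With 11 periods the potential is 154, so posting a 161st post does not
# raise activity past 154 until the next period starts:
print(activity(161, 11))  # 154
```

That would explain why 154+14 =/= 161: the extra 14 only becomes available once the next activity period begins.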
|
|
|
That stuff is usually in chainparams.cpp or main.cpp. If the coin you are looking at is based off of an older version of Bitcoin Core, e.g. 0.8.6, then it should all be in main.cpp.
|
|
|
Here's a con. SegWit does not solve the O(n^2) hashing problem. Other fixes in the SegWit Omnibus Changeset do.
No, segwit requires that those changes be made to be compliant with the segwit fork. A simple hard fork to a block size limit of 2 MB will not fix that. Yes, I know that a specific implementation of a 2 MB hard fork has a solution for that problem, but that is not the topic of this thread. This thread specifically talks about a generic 2 MB hard fork which does not include such changes, whereas segwit must include that change. First, this has to do with _transaction_ size, not _block_ size. Sure, you can fit a larger than 1 MB transaction in a larger than 1 MB block. Yawn.
But even if the other forks had nothing in place to deal with this issue, you still need to explain to me why a miner would not stop validating a malformed block, rather than getting back to earning revenue*. If this sort of technical detail is what Bitcoin's continued success depends upon, rather than an alignment of economic incentive for being 'a good neighbor', we're all doomed anyway.
You're right, there is nothing stopping a miner from not validating the block, and you know what, that is exactly what miners do now. It's called SPV mining: they don't verify the blocks that they receive, they just start mining on top of a block even before verifying it, before they have received anything about the block except for the header. Obviously there are solutions, such as running validation in another thread, but that is off topic.

The problem with blocks containing large transactions of this kind is the potential attack vector against full nodes, which do actually verify all blocks and transactions. Say you run a full node. Then I decide to start sending you blocks with transactions that take minutes to verify. Suddenly your node pretty much grinds to a halt as it attempts to verify all of those maliciously crafted blocks. Right now I could do that with a 1 MB transaction. If the block size limit were 2 MB, I could send you a 2 MB transaction (assuming it is strictly just an increase to 2 MB and nothing else), which would take you 4 times as long to validate.
|
|
|
Seriously? Why do Africans need their own wallet? Bitcoin is international, all of the Bitcoin wallets can be used by anyone. They don't need to be designed to cater for one specific group of people.
|
|
|
Here's a con. A simple 2 MB hard fork does not solve the O(n^2) hashing problem. For those of you who have no idea what that means: as the number of hashes required to verify a transaction increases linearly, the time required to hash all of it increases quadratically. This means that a transaction could theoretically be produced that causes nodes to spend a significant amount of time (several seconds to a few minutes, depending on the transaction) verifying it. Such a thing was seen in the past when a miner (f2pool, I think) created a 1 MB transaction to clean up spam that went to weak brainwallets. The transaction took about 30 seconds to verify. With a 2 MB limit, a theoretical 2 MB transaction could take 2 minutes or more to verify.
The con in the OP that refers to this: "The possibility exists to construct a TX that takes too long to validate; easily mitigated through the use of soft limits imposed by miners limiting the number of inputs a TX can have."
It's a dumbed-down version of what you are talking about; if you think I could reword it for more clarity, let me know.

Oh, whoops, I missed that.
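To make the quadratic blow-up concrete, here is a back-of-the-envelope model. The 180-byte input size and the "whole transaction re-hashed once per input" behaviour of legacy SIGHASH are simplifying assumptions, so treat it as a sketch rather than a measurement:

```python
# Why legacy signature hashing is O(n^2): under the old SIGHASH scheme the
# entire serialized transaction is re-hashed once per input, and the
# transaction itself grows linearly with the number of inputs.

INPUT_SIZE = 180  # rough bytes per input (illustrative)

def bytes_hashed_legacy(n_inputs: int, overhead: int = 10) -> int:
    tx_size = overhead + n_inputs * INPUT_SIZE  # tx grows linearly with inputs
    return n_inputs * tx_size                   # one full-tx hash per input

# Doubling the transaction (roughly 1 MB -> 2 MB worth of inputs)
# quadruples the hashing work, hence "2 minutes or more" from ~30 seconds:
ratio = bytes_hashed_legacy(5000) / bytes_hashed_legacy(2500)
print(round(ratio, 2))  # 4.0
```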
|
|
|
OK, so now consider the reverse situation: Bob is running a new client, and Alice is running an old one. If Bob wants Alice to send him coins, what should he tell her: "pay to this address" or "pay to this segwit script", or "pay to either"?
He could give her a P2PKH address and he would receive the coins as he does now. Or he could give her a segwit script embedded in a P2SH address and he would receive the coins as he does now. He could tell her to make the output a specific script. It doesn't matter, as an upgraded node is backwards compatible and can process both the old and new output types.

Or: suppose Alice wants to make a payment that can be collected by either Bob or Carol. Bob is running an old client, and Carol a new one. Bob tells Alice "pay me to this address", Carol says "pay me to this segwit script". What should Alice do?
Er, is that possible now? Both parties receiving the payment would need to agree on the output type, since they can both spend from the outputs. The obvious choice is to simply pay to the address, because that is the one that is compatible with both clients.
|
|
|
I don't really have an issue with bandwidth, and since it's a torrent it would hardly take 3-4 hrs to download it, but the problem is I don't have 60 GB of storage lol, yeah got a lame-ass PC. I will check out Electrum. I have also heard about Armory; is that a good wallet?

It isn't a torrent, and you definitely should not download the torrent for it. That is actually slower than syncing normally over the p2p network. You don't need 60 GB of storage. You can have the prune option enabled, which can reduce the space used to around 2 GB. Armory is a good wallet, but it requires Bitcoin Core, and Bitcoin Core must not be pruned. You would have to download all 60+ GB of the blockchain in order to use Armory.
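For anyone wanting to try this, pruning is a one-line setting; assuming a reasonably recent Bitcoin Core (550 MiB is the enforced minimum value), a bitcoin.conf entry might look like:

```
# bitcoin.conf: run a pruned node (550 is the minimum allowed value, in MiB)
prune=550   # keep only roughly the most recent 550 MiB of raw block files
```

As noted above, this is incompatible with Armory, which needs the full chain on disk.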
|
|
|
But how would Alice's wallet know whether Bob's wallet is SegWit-compatible or not? Because, as I understand it, depending on this, Alice's wallet should create a P2WPKH output or a P2PKH output respectively.
Well, if Alice somehow had a direct connection to Bob's wallet, her wallet would know because of the service bit indicating segwit. Otherwise, she would only know if Bob told her. However, the reference implementation by default creates a P2PKH output when an address is entered. This is to keep backwards compatibility. I am assuming that there will be a checkbox to allow the user to indicate whether he wants to send it as a segwit output, but that is not the current default.
|
|
|
In short, is it possible that Bob receives a Tx created by Alice, which is included in a block (i.e. a confirmed Tx) and gives her a good/service in return, only to find out later that he can not spend the Tx outputs?
Probably not. His wallet is most likely unable to recognize that a P2WPKH output is meant for him, because it does not have any opcodes to indicate what that data is and how to spend from such an output.

So, is the counter situation possible? Alice sends the Tx, but as Bob's wallet did not recognize it, she did not get the good/service for which she paid. In effect, Alice lost her coins. Is it possible?

Yes, but unlikely. This depends on the implementation. The current implementation is that entering an address will result in creating a normal P2PKH output which Bob could spend from. If there were an implementation where a P2WPKH output is created when an address is entered, then it is possible that Alice would, in effect, lose her coins if she used such an implementation.
|
|
|
Suppose Alice is running "new" (SegWit-capable) client software, and sends some bitcoin to Bob, who is running an "old" (SegWit-oblivious) client, with a SegWIt transaction T1.
I understand that Bob would still be able to spend that bitcoin with a non-SegWit transaction T2, by providing the proper signature; is that correct?
But would Bob know what signature he must provide, without Alice telling him? Or will Bob's client believe that the output of T1 is "anyone can spend", and assume that T2 does not require a signature?
It depends on how Alice sent the bitcoin.

If Alice created a P2PKH or P2SH output (currently used) which Bob could spend from, then he would spend from that output normally, as he does now. In this case, it doesn't matter whether the inputs required segwit or not; they will be considered valid by Bob, but his wallet won't even tell him about the transaction until it has confirmations.

If Alice sent him a P2WPKH output (new segwit output), AFAIK Bob wouldn't even know about those transactions, or that those outputs are meant for him. If he did know about said outputs, he would probably spend them as anyone-can-spend outputs and thus not provide a signature. However, segwit nodes would reject such transactions, and they would never be confirmed or even propagate very far.

If Alice sent him a P2WSH, P2WPKH-in-P2SH, or P2WSH-in-P2SH output (new output types; the latter two are nested in P2SH outputs), Bob absolutely would not know that those outputs are meant for him.
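A rough way to see why an old node would treat a native segwit output as "anyone can spend" is to compare the script templates. This is a simplified sketch, not real script evaluation; the opcode lists are the standard P2PKH and version-0 witness program templates:

```python
# Simplified comparison of scriptPubKey templates. A legacy node only
# enforces a signature where it sees an explicit CHECKSIG-style opcode;
# a native P2WPKH script is just a version byte plus a data push.

P2PKH = ["OP_DUP", "OP_HASH160", "<20-byte pubkey hash>",
         "OP_EQUALVERIFY", "OP_CHECKSIG"]
P2WPKH = ["OP_0", "<20-byte pubkey hash>"]  # witness program, version 0

def legacy_node_requires_signature(script):
    # Crude model: signature enforcement is triggered by OP_CHECKSIG
    # or OP_CHECKMULTISIG appearing in the script.
    return any(op.startswith("OP_CHECK") for op in script)

print(legacy_node_requires_signature(P2PKH))   # True
print(legacy_node_requires_signature(P2WPKH))  # False -> anyone-can-spend
```

The P2SH-nested variants avoid this because the old node sees an ordinary P2SH script, which is why they remain usable by old wallets.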
|
|
|
|