Yup - I checked the source code and that is what is confusing me. If nLockTime is unsigned (which all the documentation and source code I looked at shows it is), then why on earth is OP_CHECKLOCKTIMEVERIFY expecting a signed value? Honestly, I have no idea. All I could find is that the actual nLockTime in the transaction (not in an OP_CLTV script) is supposed to be unsigned. I don't know why it would be signed for OP_CLTV. Bring it up in an issue on GitHub; you'll get more answers that way.
|
|
|
Ok, so maybe 0 padding makes the script non-standard. The script verifies fine which is why you have no error, so I guess it just doesn't pass standardness checks, probably because of 0 padding.
So - back to square one - how to get OP_CHECKLOCKTIMEVERIFY to understand block 255? I can't really help you but you can probably find it in the code. The locktime in an OP_CLTV script is a CScriptNum so it is signed. Here is the code for CScriptNum: https://github.com/bitcoin/bitcoin/blob/595f93977c5636add1f4f2e64cd2d9b19c65a578/src/script/script.h#L194.
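For reference, the CScriptNum serialization in the linked script.h is a sign-magnitude, little-endian encoding: if the high bit of the top byte would be set by the magnitude, an extra byte is appended to carry the sign. A minimal sketch in Python (the function name is mine, not from the codebase):

```python
def encode_scriptnum(n: int) -> bytes:
    """Serialize an integer the way CScriptNum does: little-endian
    magnitude, with the sign carried in the high bit of the last byte."""
    if n == 0:
        return b""
    negative = n < 0
    magnitude = abs(n)
    out = bytearray()
    while magnitude:
        out.append(magnitude & 0xFF)
        magnitude >>= 8
    if out[-1] & 0x80:
        # High bit already used by the magnitude: add a sign byte.
        out.append(0x80 if negative else 0x00)
    elif negative:
        out[-1] |= 0x80
    return bytes(out)

# 127 fits in one byte, but 255 needs a second byte so the 0x80 bit
# isn't misread as a negative sign:
# encode_scriptnum(127) -> b'\x7f'
# encode_scriptnum(255) -> b'\xff\x00'
```

So, assuming this matches the linked code, a locktime of 255 would be pushed as the two bytes ff 00; a single byte ff on its own would decode as -127.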
|
|
|
Have you tried padding it with 0's?
Not yet - but I thought that padding with zeros was going to fall foul of the rule that the pushed data must be minimal (as I read when looking into doing this P2SH stuff). EDIT: I tried changing 0x017f to 0x02ff00, but then when I try to redeem (after 255 blocks) I get the following error: error: {"code":-26,"message":"64: non-mandatory-script-verify-flag (No error)"}
Which is a strange looking error (but an error nonetheless). Ok, so maybe 0 padding makes the script non-standard. The script verifies fine which is why you have no error, so I guess it just doesn't pass standardness checks, probably because of 0 padding.
|
|
|
BU eliminates the power struggle by unbundling the setting of consensus parameters from the rest of the Code.
Why should this be removed from the code and made user configurable? It is consensus critical since it can create hard forks, so why should it be removed? It gives choices, you say; so does that mean we should take everything else that is consensus critical and make it user configurable? Should we remove the block reward schedule? Should we change the difficulty retargeting schedule?

Everything controversial, yes, unless we want to vest inordinate power in whichever devs control the dominant implementation. That power concentration opens a major attack vector. I believe this hasn't been noticed before because there was never such a big controversy before. Who knows what the next big debate will be. Whatever it might be, realize that making the choices blockier (less granular) by trying to shoehorn people into one, two, or three possible implementations with bundled consensus parameters doesn't make the situation any better. The market will always choose the least bad option, but if there are only two options, say that offered by Core and that offered by XT, that's quite suboptimal and will lead to less satisfying results no matter what parameter we're talking about.

I agree that having too few options is bad, but at the same time too many options is not optimal either. BU gives too many options for this, and there currently aren't just two options for the block size limit. There are several other BIPs out there with many different alternatives. Part of the problem I have with BU is that I see it as not-production-ready software. It doesn't have proper testing (I couldn't find any), and it doesn't consider all of the consequences. I think it should have been released once deployment options were available and more options were there instead of just the block size limit. There is much more than just the block size limit when it comes to deploying such a fork without completely screwing up the network.
|
|
|
BU will let the user select a given Core or XT BIP (this is still being worked on (BUIP002, probably not supposed to link it here)), so for example if they turned on the BIP101 option, their node would mimic an XT node as far as following BIP101, including the 75% threshold and specific starting block.
Really? How? So far what I have seen is that a new block size limit in BU takes effect immediately. There is no mechanism that performs the supermajority fork process. If there is a specific option for the supermajority fork process for a single BIP, then there should be one for every BIP. Will BU have options to let the user support a given BIP or not? How will new BIPs be added? Through a software upgrade?

Just like today, where if XT were winning Core miners might switch to XT, and if not they wouldn't, it's the same dynamic: if XT were winning, the BU miners would likely set their blocksize settings to BIP101. They can do this even faster than Core miners can switch to XT since it's just a GUI setting, not a new client to download.
A new client download and install takes about 2 minutes; it's not that big of a problem. Even so, the miners would have to either switch to use bigger block sizes after the fork happens or somehow indicate that they are supporting the bigger blocks before the fork (e.g. the supermajority fork process). This means that the larger block size should not take effect immediately. They can just follow Core.

BU can be set up to default to Core behavior (it doesn't now, but it's an experimental release; anyone could fork it that way, trivially). I mean, you could say the same about XT: dumb users might try using XT. Could happen. This certainly isn't a security risk, or else Bitcoin is doomed because there's no way to stop people from releasing forks. Yeah, I know XT has the 75% failsafe, so then imagine the reverse: everyone is using XT and someone dumb downloaded Core with its 1MB cap and tried to mine, but kept not being able to build any blocks because their client rejected all the XT blocks.
Point is, the situation today is that miners and nodes need to pay attention to developments. They can't just blindly trust whatever Core puts out - and if that's the expectation then we already have bigger problems.
Sure you can't blindly trust whatever Core puts out, same with XT, BU and every other software implementation.
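As a rough illustration of the supermajority fork process discussed above (the 75%-of-recent-blocks scheme used by BIP101-style deployments): a node counts how many recent blocks signal support and only arms the new rule past a threshold. This is a hedged sketch; the function, window size, and version-bit handling are illustrative, not BU's or XT's actual code:

```python
def supermajority_reached(recent_versions, signal_bit, threshold=750, window=1000):
    """Return True once at least `threshold` of the last `window` block
    version fields have `signal_bit` set (75% of 1000 in this sketch)."""
    recent = recent_versions[-window:]
    signalling = sum(1 for v in recent if v & (1 << signal_bit))
    return signalling >= threshold
```

The point of the mechanism is exactly what is argued above: the larger block size does not take effect immediately, but only once a supermajority of hash power has visibly signalled for it.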
|
|
|
Already done, but the fee is higher than before. When I lowered the fee it wasn't accepted, but after increasing the fee it was accepted and I don't get any error. Why did Electrum increase its fee for the transaction? Before I only paid 10 bits, now I pay 170 bits.
The fee has to cover the dust output. The dust output makes the transaction nonstandard. The only ways to remove the dust output are to either increase the fee to include that amount or increase another output to include that amount. By increasing the fee you did the first option.

So does it mean that if I increase the fee for the first transaction, the dust output will be gone and I can decrease the fee in the second transaction? Am I right?

Kind of. If you have another dust output in the second transaction, you will still have to raise the fee. Otherwise it should be safe to lower the fee for the next transaction.
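To make the "increase the fee to include that amount" option concrete, here is a sketch of how a wallet might fold a below-dust change output into the fee. The threshold value and function are illustrative assumptions, not Electrum's actual code:

```python
DUST_THRESHOLD = 546  # satoshis; a common relay-policy default, may differ per node

def finalize_change(input_total, send_amount, base_fee):
    """Return (change, fee). If the change would be dust, drop the
    change output and add its value to the fee instead."""
    change = input_total - send_amount - base_fee
    if 0 < change < DUST_THRESHOLD:
        return 0, base_fee + change  # dust absorbed into the fee
    return change, base_fee
```

This also explains the jump the poster saw: the displayed fee was the base fee plus a dust-sized change amount.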
|
|
|
How exactly does setting the block size limit constitute a vote for that block size? How would anyone else know that you set your node to accept a certain maximum block size? If there is no mechanism for this (and I couldn't find any in the code), then a lot of things said here are wrong.
If there is no mechanism that tells everyone else what they are accepting as the max block size, then there is no voting happening. A sybil attack wouldn't work since there is no voting.
Additionally, miners wouldn't know when to increase the block size as the safe method of deploying the block size limit as a hard fork is no longer there. It could result in either nothing happening as miners want to play it safe, or the blockchain forking in multiple ways as miners test out different block sizes. Either way could be catastrophic.
Lastly, why would it be a good idea for users (especially non-technical users) to decide what their block size limit is? Not everyone knows the implications of accepting or rejecting a certain block size limit.
|
|
|
Sorry, I meant that that's the address that sent me the bitcoins.
What is your address then? Where did you get the watch only wallet from?
|
|
|
Edge cases would be scenarios that include something completely absurd that normally wouldn't happen but could happen. Those cases still need to be considered as part of the testing.
This is part of the reason why I trust Bitcoin more than almost all altcoins (where extensive QA and edge case scenarios are not considered). The more QA the better.

Has Bitcoin been tested thoroughly?

Yes. For every single feature that gets added there are tests for it. Any new pull request for a feature that does not have a test is not merged until the person who opened the request creates tests for it. It is quite thorough.
|
|
|
From what I understand, BU moves the block size limit from consensus rules to a node policy rule. Instead of having the limit hard coded in, the user chooses their own block size limit. Also if a BU node detects a blockchain that has a higher block size (up to a certain user configurable threshold), after that chain is a number of blocks deep (user configurable), then it will switch to use that blockchain and set its block size limit higher.
So what happens if I left my node at 1MB +10% user threshold and a 1.2MB block comes - does my node reject it?

IIRC the node will keep the block and watch the chain it is on. If the chain it is on becomes n blocks deep (where n is user configurable) then your client will switch to use that chain as the active one. Otherwise it stays with the one it is currently using.

How will the network not split into a myriad little shards which diverge following accidental and/or intentional double-spends without manual human coordination?
I don't know. You'll have to ask someone else about that.
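The acceptance behavior described above (keep watching a chain containing an oversize block and switch only once it is n blocks deep) can be sketched roughly like this; the names `excessive_size` and `acceptance_depth` are my own, not BU's actual settings API:

```python
def follow_chain(block_sizes, excessive_size, acceptance_depth):
    """Sketch: follow a chain containing an oversize block only once
    `acceptance_depth` further blocks have been built on top of it."""
    for i, size in enumerate(block_sizes):
        if size > excessive_size:
            blocks_on_top = len(block_sizes) - i - 1
            return blocks_on_top >= acceptance_depth
    return True  # no oversize block: follow normally
```

Under this sketch, a lone 1.2MB block against a 1.0MB setting is not followed, but the same block buried under enough work eventually is; that depth parameter is what stands in for explicit coordination.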
|
|
|
I unfortunately just recently started looking into BTC and ALT coins and while there is a ton of information here it is hard for me to discern good info from bad. It seems like a lot of the things people post are either ads or sarcastic comments to steer people away. Any tips on who to trust?
Build your own trust list. Use DefaultTrust for a little bit; as you interact with the community, find the people you think are trustworthy and build your own trust list. As you do that, also build your own ignore list. You can start with DannyHamilton's list here: https://bitcointalk.org/index.php?topic=973843.0. It includes pretty much everyone that wears a sig ad (which unfortunately includes me), since most people with sig ads will spam.
|
|
|
I tried it but I got this problem --snip--
That problem is because you are sending a transaction with a dust output. Try sending it but with the dust output as part of the fee.

But how can I send it to another wallet? It still has this issue even on the latest Electrum wallet. Or do I need to set a fee? I didn't know how much to put for the default fee in the preference settings. I changed it before to 0, then I tried 1, 10, and 100 per kb/bit. Before I didn't get this error, it's just now.

There should be an option to set the fee manually in preferences. So when you create the transaction to send, set the fee manually to include the dust output.
|
|
|
You can have armory send the change back to one of your existing addresses. There is a check box for "Use an existing address for change" where you can set the address for the change. Then when you need to send again from that address there will be Bitcoin there to spend.
|
|
|
See, I really tried to set up a virtual 3-member escrow network on my PC. Electrum gives n private keys for n people involved for the same public key. So there's no possibility for a single person to have n keys with him; also, it's a multisig thing, so how can one single person spend the funds?
I think he is saying that whoever creates the address needs to share the redeemscript with everyone so that they can verify that script is for the provided address. Otherwise that person could create a redeemscript which could allow that person to spend all of the funds himself.
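The verification step described above is mechanical: a P2SH address is just the Base58Check encoding of HASH160(redeemscript) with version byte 0x05, so anyone given the redeemscript can recompute the address and check it matches. A rough sketch (the encoding is standard, but the helper names are mine):

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    """Base58Check: payload + 4-byte double-SHA256 checksum, in base 58."""
    data = payload + hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    n = int.from_bytes(data, "big")
    encoded = ""
    while n:
        n, rem = divmod(n, 58)
        encoded = B58_ALPHABET[rem] + encoded
    # Each leading zero byte becomes a leading '1'.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + encoded

def p2sh_address(redeem_script: bytes) -> str:
    """Mainnet P2SH address: Base58Check(0x05 || RIPEMD160(SHA256(script)))."""
    h = hashlib.new("ripemd160", hashlib.sha256(redeem_script).digest()).digest()
    return base58check(b"\x05" + h)
```

If the recomputed address doesn't match the one you were given, the redeemscript (and therefore whoever actually controls the funds) isn't what it claims to be.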
|
|
|
I don't have it, but I don't think it is a scam. AFAIK the only scam debit card is BitPlastic. Most other ones (except prepaid Visas from Polish banks and stuff like that) are legit.

Yes. The escrow could be a scammer and run off with the money. To avoid that you can use multisig escrow. If the escrow and the other guy are both the same person and a scammer, then they can still run away with your money. So even when using escrow, please be careful and make sure that you can trust the escrow.
|
|
|
|