A more conservative implementation than I expected, considering the gains. Things could be made smaller if there weren't a requirement for the encoding to be context-free... but context-free is important for FIBRE reconstruction of blocks using the mempool.
Has there been any consideration of a standalone key:val table for transaction short IDs? I'm asking because I can't tell whether the benefit would offset the overhead. It "feels" like a boon for historical data but a mess of complexity the younger the blocks get.
|
|
|
Indeed, that seems to be the closest to my idea. If I understand correctly, you can spend the funds with only one key once the timelock has passed, or with the multisig instantly.
Basically, in pseudocode: if (CLTV conditions not met) this script is valid; else that other script is valid.
The scripts can be anything you can think of. OP_IF has been around since the beginning; OP_CLTV is "kinda" new, one of the most recently added opcodes, and it lets you evaluate either the block height or the block time. How can it be set up? Is it easy to use for non-power users?
Need a wallet that implements that scheme for you, basically, unless you're looking to implement that stuff yourself. Depending on the wallet's GUI, it may be very simple to do, or not. There were some talks of timelock setups a few years back; you'd have to unearth the threads. Once set up, could the single-sig transaction be broadcast from any software or hardware wallet?
You still need to provide the underlying script (these are p2sh/p2wsh nested constructs typically), so you're likely tied to the wallet unless you know your way around Bitcoin's scripting language. Hardware wallets usually can't spend from scripts they have not built themselves.
|
|
|
CLTV == CheckLockTimeVerify
Lets you lock a script until a certain amount of time/blocks has passed. You'd build a script with OP_IF around that: a primary multisig branch that can spend the coins with n-of-m sigs, and a second branch that unlocks after some time, bypasses the first branch, and lets you spend from a single-sig script, which would be your long-term backup key.
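The structure described above could be sketched as an opcode listing like this (illustrative only: the builder function, the 2-of-3 parameters, and the placeholder keys are my assumptions; a real wallet would serialize this into an actual P2SH/P2WSH redeem script):

```python
# Illustrative sketch: assemble the two-branch timelock script as a list
# of Bitcoin Script opcode names. Not a real serializer.
def timelock_backup_script(multisig_pubkeys, backup_pubkey, locktime):
    m = len(multisig_pubkeys)
    return [
        "OP_IF",
        # Primary branch: 2-of-m multisig, spendable at any time.
        "OP_2", *multisig_pubkeys, f"OP_{m}", "OP_CHECKMULTISIG",
        "OP_ELSE",
        # Backup branch: only valid once the locktime has passed,
        # then spendable with the single long-term backup key.
        locktime, "OP_CHECKLOCKTIMEVERIFY", "OP_DROP",
        backup_pubkey, "OP_CHECKSIG",
        "OP_ENDIF",
    ]

script = timelock_backup_script(["<pubA>", "<pubB>", "<pubC>"], "<backup_pub>", 700000)
```

The spender then picks the branch at spend time by pushing a boolean for the OP_IF along with the matching signatures.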
You can use convoluted key/paper backup distributions for the primary branch with the knowledge that if you mess up, you can still spend your coins from the backup key in the future. Since there's no risk of blackholing your coins if you lose too many of the multisig keys, you can afford to risk destroying them.
It has the same basic design as what you are proposing: stringent security requirements at first, a looser backup sometime in the future.
|
|
|
I don't see how this is different from a CLTV setup, tbh.
|
|
|
Can't load the file, so I'm gonna ask here: do you replace outpoints with shorter IDs? And can/do you optionally skip witness data when applicable?
|
|
|
It's been in the process of an overhaul -- changes to the handshake and key generation. You'd have to ask Jonas Schnelli directly, I guess; I lost the link to the proposal. I think it was assigned a BIP number though. As for projects using it, there aren't that many, but I'm hoping the proposal will eventually find its way into the Core feature set.
|
|
|
BIP151. A bit off topic I guess.
|
|
|
One challenge with that kind of argument is that their tx fees are probably a rounding error relative to the overall cash flow of their operation, making it a low-to-no priority for them.
This is true more often than not in Bitcoin's history. However, I heard Coinbase left a sizable part of their revenue to miners in 2017 over their simplistic withdrawal strategy, when 1000 sat/B fees were "common". That was the result of the price run-up from the previous halving *hint*. Better to be prepared now, just my 2 cents. And the company would get its costs reduced in an area that wasn't its focus.
A bit contradictory, when one of the main benefits of Bitcoin is bypassing 3rd-party custodians. One thing is for sure: the lack of fee pressure will suppress this side of R&D.
|
|
|
I don't believe that delaying announcing transactions they just constructed would provide any value to the network. It's better to have them out there and provide people with more complete information about them.
I would assume they're motivated to cut down on their own tx fees. I'm not going to expect any market actors to act for the network's benefit at the expense of their own. Granted, they could still bloat the mempool in a single batched broadcast with low fees and RBF later where necessary. Delays aren't the only way to cut down on fees.
|
|
|
I'm guessing that's part of the vanity prefix and not something they're willing to let go unless offered a better solution.
There is no requirement to use uncompressed keys for vanity addresses -- in fact, vanity addresses can be generated slightly faster with compressed keys. If their prefix is just '3BMex', that's pretty short and easily found in any case.

I was saying this more in the context of "we already have them, why bother?". Of course compressing the keys would make more sense, but how motivated would they be to even gradually replace their current "fleet" of addresses (which obviously isn't trivial to generate)? I'm guessing there is some form of key ceremony they're not willing to either modify or run through (I assume they're not generating these keys deterministically). It would be significantly easier with a script merklerization scheme where you can separate the vanity search from the public key generation. I'm trying to play devil's advocate here.

These things are not mutually exclusive: batching into a modest number of outputs per txn could halve the load they place on the network -- and that's true regardless of when they emit the transactions. (Doubly so if someone created a two-sided version of the branch-and-bound search in Bitcoin Core that can often result in changeless transactions when run with large wallets.)
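The "changeless" idea behind that branch-and-bound search can be sketched as a toy subset search (this is not Bitcoin Core's actual implementation; the function name and values are illustrative):

```python
# Toy sketch of the idea behind branch-and-bound coin selection: find a
# subset of UTXO values that covers the payment exactly, so the
# transaction needs no change output at all.
def select_changeless(utxo_values, target):
    """Return a list of values summing exactly to target, or None."""
    values = sorted(utxo_values, reverse=True)

    def search(i, remaining, chosen):
        if remaining == 0:
            return chosen                      # exact match: changeless tx
        if remaining < 0 or i == len(values):
            return None                        # overshot or exhausted: backtrack
        # Branch 1: include this coin.
        with_coin = search(i + 1, remaining - values[i], chosen + [values[i]])
        if with_coin is not None:
            return with_coin
        # Branch 2: skip this coin.
        return search(i + 1, remaining, chosen)

    return search(0, target, [])

picked = select_changeless([5000, 3000, 2000, 1000], 4000)  # finds 3000 + 1000
```

A real selector would also fold fees into the target and accept small overshoots that get donated to fees instead of creating dust change.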
Most certainly the optimal solution would lie at some intersection of batching and broadcast delays. Broadcast delays would take them 5 minutes to implement. Batching? Who knows.
|
|
|
Otherwise, it simply doesn't exist...
Ahh, is the user mode set to something other than "Expert" then?
|
|
|
redeem scripts with four uncompressed public keys, it's 2020...
I'm guessing that's part of the vanity prefix and not something they're willing to let go unless offered a better solution. With that it could even be possible for them to keep their '3BMex' vanity addresses by iterating a leaf of the merkle tree that's used as tweak until they get a Bech32 vanity they like. (Not saying that's a good idea: the privacy and fungibility benefits from Taproot would be gone.)
Clearly you do not care about privacy and fungibility when you are using a vanity prefix. Merklerizing their script candidates, combined with output batching, is indeed the proper solution.
That's debatable. Obviously they sign all their withdrawals in a single daily trip to the cold storage, and we can infer they then broadcast the whole lot in one go. If the idea is to reduce the impact on fee estimation and confirmation times, a trickle of transactions throughout the day would achieve this better than batching outputs, which is a bandaid for blasting the mempool with all your withdrawals in one giant broadcast.
|
|
|
Some misnamed label; you'd have to modify the Python code to get this working from the lobby. You can recreate the transaction manually using coin control from the "Send" dialog for the time being.
|
|
|
The issue with this kind of strategy is that you are identifying the vendor for each and every payment, so you end up cutting costs on sanity/security checks to improve the UX. Security always comes at a cost in convenience.
IMO the solution is to impose one costly verification and streamline from there:
- Step #1, identification: check WoT sigs on the vendor's widely known, easy-to-find secp256k1 public key.
- Step #2, payment: on each payment, ECDH the vendor's public key with a salt of your own, pass that salt to the vendor, and broadcast the payment.
This requires additional logistics (delivering the salt, tying it to the purchase) which makes it a perfect service to offer by payment processors.
|
|
|
1) Replace the bitcoin: URI scheme handler in the registry with your malware (saving the current, valid handler in the process).
2) Whenever a user opens a bitcoin: URI, your malware is spawned. Replace the address in the URI with yours and spawn the valid handler with the modified URI.
I would say this is significantly easier to implement than a clipboard hack, tbh. You don't need elevated privileges to set that stuff for a user account.
|
|
|
2020-04-19 04:01:55 (ERROR) -- Traceback (most recent call last):
  File "ui\TxFrames.pyc", line 948, in createTxAndBroadcast
  File "ui\TxFrames.pyc", line 902, in validateInputsGetUSTX
  File "armoryengine\Transaction.pyc", line 2513, in createFromTxOutSelection
  File "armoryengine\Transaction.pyc", line 2414, in createFromPyTx
  File "CppBlockUtils.pyc", line 3062, in getTxByHash
DbErrorMsg: <CppBlockUtils.DbErrorMsg; proxy of <Swig Object of type 'DbErrorMsg *' at 0x000000000709CF00> >
It can't find the supporting transaction, you're in for a DB rebuild & rescan.
|
|
|
Look for armorylog.txt in your wallet folder. What you are showing are the last few lines of the combined log (i.e. these are DB log entries; I need the GUI ones).
|
|
|
I don't see how that protects you from USB rootkits. The point of burning CDs is to avoid taking a USB stick to your offline signer.
|
|
|
About how large should I expect the files in ...AppData\Roaming\Armory to be? I'm wondering if I'll be able to keep it all on C:, or if I should plan from the start to split it across drives.
No more than 20GB. Is there any risk (chance of not working again, etc.) with using batch files to invoke bitcoin-qt.exe with a datadir parameter, and ArmoryQt.exe with satoshi-datadir and dbdir parameters?
I'm not sure I understand the question. Are you talking about config files?
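For reference, the kind of batch files the question describes would look something like this (paths are placeholders, and the exact option names should be double-checked against your Bitcoin Core and Armory versions):

```
REM start-bitcoin.bat -- run Core with a custom data directory
"C:\Program Files\Bitcoin\bitcoin-qt.exe" -datadir=D:\BitcoinData

REM start-armory.bat -- point Armory at that datadir and its own DB folder
"C:\Program Files (x86)\Armory\ArmoryQt.exe" --satoshi-datadir=D:\BitcoinData --dbdir=D:\ArmoryDB
```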
|
|
|
|