661  Bitcoin / Development & Technical Discussion / Re: New Mempool Observation: The daily BitMEX broadcast at 13:08 UTC on: May 07, 2020, 07:56:37 AM
I would assume they're motivated to cut down on their own tx fees.
One challenge with that kind of argument is that their tx fees are probably a rounding error relative to the overall cashflow of their operation, which makes it a low-to-no priority for them.

In the bygone days when long distance service actually cost companies a lot of money (at least in absolute terms, if not relative to their operation), there were companies that would come in, go through your bills, and optimize your service (e.g. by switching carriers, installing dedicated lines to offices you constantly called, getting contracts for special rates to certain locations or bulk discounts, recommending usage changes, etc.) in exchange for 50% of the savings over the next year.  If they save nothing, you pay nothing.

It was a good business for both the contractor, who could make a boatload off their specialized expertise, and the company, which got its costs reduced in an area that wasn't its focus.

I could imagine the same thing existing for bitcoin-- but the security considerations would probably make it a harder sell.
662  Bitcoin / Development & Technical Discussion / Re: Using pubkeys as transaction inputs on: May 07, 2020, 12:08:12 AM
Nodes maintain a set of spent txids, and thus refuse to accept the same transaction twice. This "spent txid" set would also be prunable, because each transaction would contain a "max block height" (thus the transaction itself expires). So you only need to keep the "spent txid" until it's unspendable anyway.
A max height is not reorg-safe, and a spent-txid set is vulnerable to malleation unless a lot of limitations are imposed to prevent it.

It also makes it hard or impossible to create intentional conflicts.  E.g., I pay Alice with txn A, but later want to change my fees, so I author txn B.  I want to be sure that only one of them can confirm.

Instead of the txid, you could use just a spender-created nonce. That avoids the malleation and lets you direct a conflict, but at that point I think you've just recreated what Bitcoin does, only less efficiently. (On the plus side, if the nonce isn't grouped per-key, you can do some rather elaborate tricks with mutually exclusive sets of transactions which you could only do in Bitcoin by creating a bunch of 0-value semaphore outputs.)
663  Bitcoin / Development & Technical Discussion / Re: Using pubkeys as transaction inputs on: May 06, 2020, 05:20:35 PM
And then, after you've paid someone and you've received more funds, they can just resend your prior transaction to the network again to take those funds too.

664  Bitcoin / Development & Technical Discussion / Re: How to initially sync Bitcoin as fast as possible on: May 06, 2020, 04:23:37 AM
As for the topic, you can suggest setting "assumevalid=(latest block's hash)" for devices with a slow processor like the RPi.
For your specs, there's not much of a difference.
Please don't suggest that.

If you just want to blindly trust some host on the internet that you got a block hash from, just use them as your node remotely.

Skipping an extra month of validation or whatever doesn't make a big difference in initial sync time.

Plus, if you set the value to a block that's too recent, you'll just cause it to disable assumevalid entirely and your sync will be slower.

Verifying scripts is just one part of validation; a lot of CPU time is still spent on the rest.
665  Bitcoin / Development & Technical Discussion / Re: How to initially sync Bitcoin as fast as possible on: May 05, 2020, 07:27:37 PM
SSD doesn't really make much of a difference so long as the dbcache is big enough (8GB - 10GB or so).

With a big dbcache, validation is a write-only process on the disk. Smiley

Not that SSDs aren't a lot better in general.  But if your choice is between an anaemic 500GB SSD that, with Bitcoin and your other usage, will be out of space in a year, and a 10TB 7200RPM HDD, well... slow is better than not running.
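For anyone trying this, a minimal sketch of the dbcache suggestion above (the exact number is only illustrative of the 8GB - 10GB figure; dbcache is specified in MiB):

Code:
# bitcoin.conf -- keep the UTXO set cached in RAM during initial sync so
# validation is (mostly) write-only on the disk.  Value is illustrative.
dbcache=8000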
666  Bitcoin / Development & Technical Discussion / Re: Blockstream Satellite 2.0 on: May 05, 2020, 07:08:25 PM
1. Is it right to assume the computational power to encode the data is more or less the same as with the current serialization format?

Pretty close. The encoder takes a little extra computation because it matches scripts against common templates.

Quote
2. Is there any less rough estimate of "more computationally expensive" on the decode part? 20%? 50%? Twice?
IMO a 15% reduction in the total ongoing bandwidth usage is a big deal for a public node with hundreds of connections.

Your benchmark should probably be relative to validation costs rather than to the current format; the current format is essentially 'free' to decode, and all the time decoding it is likely spent allocating memory for it. I don't have figures, but probably a few percent increase in validation time.

I think the big tradeoff, other than software complexity/review, is that a node which has stored blocks in compacted form will have to burn CPU time decoding them for peers that don't support the compacted form.

For relay of loose transactions, I struggle to see any downside (again other than the review burden).
667  Bitcoin / Development & Technical Discussion / Re: New Mempool Observation: The daily BitMEX broadcast at 13:08 UTC on: May 05, 2020, 05:52:15 PM
Most certainly the optimal solution would lie at some intersection of batching and broadcast delays. Broadcast delays would take them 5 minutes to implement. Batching? Who knows.
I don't believe that delaying the announcement of transactions they've just constructed would provide any value to the network. It's better to have them out there and provide people with more complete information about them.

Generally the big barrier to deploying batching is delaying the payment (and learning its txid) in the first place.
668  Bitcoin / Development & Technical Discussion / Re: New Mempool Observation: The daily BitMEX broadcast at 13:08 UTC on: May 05, 2020, 02:36:42 PM
I'm guessing that's part of the vanity prefix and not something they're willing to let go unless offered a better solution.
There is no requirement to use uncompressed keys to use vanity addresses-- in fact, vanity addresses can be generated slightly faster with compressed keys.

If their prefix is just '3BMex' that's pretty short and easily found in any case.

Quote
throughout the day would achieve this better than batching outputs, which is a bandaid to blasting the mempool with all your withdrawals in one giant broadcast.
These things are not mutually exclusive: batching into a modest number of outputs per txn could halve the load they place on the network-- and that's true regardless of when they emit the transactions. (Doubly so if someone created a two-sided version of the branch-and-bound search in Bitcoin Core, which can often result in changeless transactions when run with large wallets.)
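As a rough illustration of where "could halve the load" comes from -- using assumed legacy P2PKH size estimates rather than their actual multisig scripts, so treat the numbers as back-of-the-envelope only:

Code:
# Rough size comparison: N separate withdrawals vs one batched withdrawal.
# Assumed sizes: ~148-byte input, ~34-byte output, ~10 bytes per-tx overhead.
# Real multisig inputs are larger, but the relative saving is similar.
def tx_size(n_in, n_out, in_size=148, out_size=34, overhead=10):
    return overhead + n_in * in_size + n_out * out_size

n = 20  # withdrawals processed together
separate = n * tx_size(1, 2)   # each: one input, one payment + one change output
batched = tx_size(1, n + 1)    # one input, n payment outputs + one change output
print(f"separate: {separate} bytes, batched: {batched} bytes "
      f"({batched / separate:.0%} of the unbatched size)")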

However, an even more block space-efficient alternative would be to use a non-multisig wallet for user deposits, which are periodically consolidated into a multisig wallet.
Meh. I'm not sure that's great advice. With the normal way that Bitcoin is used, you don't have any idea how much a user will deposit. So, sure, it would be reasonable to send *small* deposits to a less secure wallet, and preferentially use those payments... but then you might also have a user depositing 2000 BTC in one go and then have those funds stolen from the hot system before your sweep moves them.
669  Bitcoin / Development & Technical Discussion / Blockstream Satellite 2.0 on: May 05, 2020, 01:04:30 AM
Blockstream has announced a new version of their satellite bitcoin blockchain stream: https://blockstream.com/2020/05/04/en-announcing-blockstream-satellite-2/

It now supports getting the entire blockchain history over the satellite!

I've been beta testing this the last few weeks. The software is still pretty new but it's great.

One of the exciting new technical features in it is an alternative serialization of Bitcoin transactions which is more bandwidth efficient.  Any Bitcoin transaction can be losslessly converted, one transaction at a time, into this alternative serialization; applied across the whole Bitcoin history, it reduces transaction sizes by about 25%.

It saves a little more on older blocks, in part because their transactions have a lot more uncompressed pubkeys and compressing pubkeys is one of the things it does to shrink transactions. Newer blocks are more like 20% smaller using this serialization.
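As a minimal sketch of just the pubkey-compression piece (not the blocksat bitstream itself, which isn't publicly documented yet): converting a 65-byte uncompressed SEC pubkey to the 33-byte compressed form saves 32 bytes per key, losslessly.

Code:
# Convert a 65-byte uncompressed SEC pubkey (0x04 || X || Y) into its 33-byte
# compressed form (0x02/0x03 || X).  Lossless: Y can be recovered from X, the
# curve equation, and the parity bit encoded in the prefix.
def compress_pubkey(uncompressed: bytes) -> bytes:
    assert len(uncompressed) == 65 and uncompressed[0] == 0x04
    x, y = uncompressed[1:33], uncompressed[33:]
    prefix = b"\x03" if y[-1] & 1 else b"\x02"  # odd Y -> 0x03, even Y -> 0x02
    return prefix + x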

A similar, but somewhat smaller, reduction in size can be achieved by using standard compression tools like xz or zstd on groups of blocks.  But because the new serialization in blocksat works a single transaction at a time, it's compatible with both transaction relay and fibre's mempool-powered reconstruction. (If you do want to work whole-block-at-a-time, it can also be combined with traditional compression to get a little more savings.)

If Bitcoin nodes were to use this generally, they could drop their on-disk storage requirement for the full block data by about 25%; they could also negotiate using it with supporting peers and lower the bandwidth used for initial sync and transaction relay. Post-erlay, this would give a 15% reduction in the total ongoing bandwidth usage of a node (pre-erlay, the bandwidth used by INVs would diminish the gains a lot for anything except history sync).

The cool thing about it is that it's not a consensus change: how you store a block locally, or how two consenting peers share transaction data, is no one else's business.  This is why blocksat 2.0 can use the new format without anything changing in the rest of the Bitcoin network.  Right now the blocksat software only uses this new serialization over the satellite-- where space savings are also critical-- but using it on disk or with other peers wouldn't be a huge addition.

The downsides of the new serialization are that it's more computationally expensive to decode than the traditional one, and of course the implementation has a bit of complexity. I've been pushing for this [sort of idea since 2016](https://people.xiph.org/~greg/compacted_txn.txt) (note: the design I described in that link is only morally similar, their bitstream is different-- I'd link to docs on it but I don't think there are any yet), so I'm super excited to see it actually implemented!

The history download is pretty neat: every block is broken into ~1152-byte packets and redundantly coded with 5% + 60 extra packets.  A rolling window of about ~6500 blocks is transmitted interleaved, resulting in about one packet from each block in the window per minute.  With this setup, which can be adjusted on the sending side, you can take an hour-long outage per day or so plus 5% packet loss and not suffer any additional delays in initial sync. If it does lose sync, it saves the blocks it completely received-- even if it doesn't have their ancestors yet-- and will continue once the history loops back around again. If you have internet access (potentially expensive or unreliable; or maybe even sneakernet!), you could also connect temporarily and just get the chunk of blocks you missed instead of waiting for it to loop around again.
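Rough per-block numbers, assuming the "5% + 60" repair packets apply per block as described (the packetization and rounding here are my own assumptions for illustration):

Code:
import math

# Per-block FEC overhead under the scheme described above: ~1152-byte packets,
# plus "5% + 60" extra repair packets per block.
def fec_packets(block_bytes, packet_size=1152, rate=0.05, extra=60):
    data = math.ceil(block_bytes / packet_size)
    repair = math.ceil(data * rate) + extra
    return data, repair

for size in (300_000, 1_200_000):          # a small block and a near-full one
    data, repair = fec_packets(size)
    print(f"{size:>9} bytes: {data:5d} data + {repair:3d} repair packets "
          f"(~{100 * repair / data:.0f}% redundancy)")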

The software was also rebased on 0.19-- their prior stuff had been falling behind a bit.

The satellite signal is doing some neat stuff: they time-division multiplex two different bitrate streams (one about 100kbit/sec like the original blocksat stream, and one about 1mbit/sec) on the same frequency.  The low-rate stream can be reliably received with a smaller dish and under worse weather, and only carries new blocks and transactions.  The high-rate stream also carries new blocks and transactions (when they show up), but in addition carries the block history. When new blocks come in, the data from both streams contribute to how fast you receive the block.

I believe they're recommending an 80cm dish now, mine are 76cm and the signal on both streams is very strong and robust against poor weather. YMMV based on location and weather conditions. The low rate stream should be reliable on pretty small dishes.

This new high-rate stream also significantly reduces the latency for transmitting blocks, making it more realistic to mine using blocksat as your primary block feed (and then using $$$ two-way sat to upload blocks when you find one).  Right now 4-second latencies are typical, though there is some opportunity for software improvements that should get it consistently closer to 1 second.  The updated stream also handles multiple sat feeds more seamlessly-- in some regions you can see two different blocksat feeds, such as in California where I live, and if you have two receivers it'll halve the latency to receive blocks (and obviously increase the robustness).

The new setup makes it easier to separate the modem from the Bitcoin node.  You can have a modem located closer to the dish(es) and connected to ethernet (directly w/ their ethernet-attached modems, or w/ a USB modem and an RPi) and have it send UDP multicast across a network to feed one or more receiving Bitcoin nodes.  This can help eliminate long, annoying coax runs.

Finally, they also preserved the ability to get the stream with a pure SDR receiver *and* added the ability to use an off-the-shelf USB DVB-S2 modem, and the DVB modems are more flexible in what LNBs you use... so if you're in a location where getting more blocksat-specific hardware is inconvenient or might erode your privacy-- they've got you covered.

All in all, I think this is pretty exciting.
670  Bitcoin / Development & Technical Discussion / Re: Write ups of my Mempool Observations on: May 04, 2020, 10:56:39 AM
Nice article! But you messed up: your site called on users to request things--- so I went and opened a half-dozen issues. Smiley
671  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 01, 2020, 11:53:29 PM
But for our problem it is not suitable; we have to maximize the chance of getting a collision inside an interval.
In a small interval [a*G, ..., b*G] (80 bits) you probably won't find 2 points with the same x or the same y; we don't work in the entire space.

We tried to move [a*G, ..., b*G] to [-(b-a)/2*G, +(b-a)/2*G] because that is precisely the way to have every point and its opposite in the same subgroup. Then for each 'x' we have 2 points for sure.

I think you are missing the point I am making.

You will find a collision if the solution is in one of the additional ranges opened up by the endomorphism.

First, what is "our problem"?  Finding the discrete log of some point Q = kP with respect to a point P, where k is known to lie in a specific compact range -- [0, 2^80].  But why?  Instead you can solve a related problem: find k for Q = kP where k's value is [+/-]1*[0, 2^78]*lambda^[0,2], in less time, even though that 'sparse' range has 1.5x the possible values.

If the reason you are interested in DL is because someone has embedded a puzzle in a [0,2^80] range or something, then the ability to find solutions in the other range isn't interesting.

If, instead, the reason you are interested is that you have a cryptosystem that needs small-range DL solving to recover data-- for example decrypting an ElGamal encryption, recovering data from a covert nonce side-channel, or solving a puzzle whose maker was aware of the endomorphism and used it-- then it is potentially very interesting.

I think if the interest is just DL-solving bragging rights or the intellectual challenge of exploiting all the structure, the endomorphism is also interesting-- for that there isn't a particular problem structure, so why not use the one that results in the fastest solver. Smiley  In the prime-order group essentially all k values are equivalent, which is why you're able to shift the search around to look for that range wherever you want it in the 2^256 interval.
 
It looks the same if you put 3 times more kangaroos at the same range (with G), but for this method you need fewer jump tables (only one jump table).

You do not need any extra tables to search the additional ranges opened up by the endomorphisms.  That's why I pointed out point invariants that hold for all three--- x^3 or (is_even(y)?-1:1)*y.    If you do all your lookups using one of these canonicalized forms the table for six of the ranges is the same (and you'd have to check after finding a hit to determine which of the ranges the solution was actually in).
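A quick sanity check of that invariant in Python -- the field prime and beta below are the standard secp256k1 constants as published in libsecp256k1, so treat them as assumptions and verify them independently; the assert does check beta at runtime:

Code:
# x^3 mod p is invariant under the endomorphism and under negation: lambda*P
# has x-coordinate beta*x with beta a cube root of unity mod p, and -P has the
# same x as P.  So one x^3-keyed table covers all six related ranges.
p = 2**256 - 2**32 - 977
beta = 0x7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE
assert beta != 1 and pow(beta, 3, p) == 1      # nontrivial cube root of unity

def table_key(x):
    return pow(x, 3, p)        # canonical lookup key shared by all six points

x = 0xDEADBEEF                 # any field element; negation doesn't change x
assert table_key(x) == table_key(beta * x % p) == table_key(beta * beta * x % p)
print("x^3 is identical for P, lambda*P, lambda^2*P (and their negations)")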
672  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 01, 2020, 09:07:00 PM
If you cube the x coordinate you will extinguish the group of the endomorphism-- all six keys P, lambda*P, lambda^2*P, -P, lambda*-P, and lambda^2*-P all share the same x^3 coordinate, so if the x^3 was used to select the next position, they'd all collapse to the same chain.

Alternatively, you could choose y or -y, whichever is an even number (only one will be), because negating y MAY be faster than cubing x. I say may because you have to compute y, and I think an optimal addition ladder will usually not need y for all points-- only for points which are non-terminal.

The endomorphism might be less interesting here though, because in the cases you are selecting, the DL is known to lie in a 'small' range from a specific base point, and the two endomorphism-generated points do not.

That said, I've had applications-- e.g. using nonces as a side channel-- where you can choose the range, and [small]*lambda^[0,1,2] would be a viable option.


673  Economy / Reputation / Re: MicroGuy - claiming to be a "friend of Satoshi" in order to promote his altcoin on: April 30, 2020, 03:50:52 PM
in order to promote Goldcoin (which he has been heavily invested in for a number of years - here is a good background story on the matter).

I knew goldcoin was scammy, but dammmnnnn. I didn't previously know the full scope of it.
674  Economy / Service Discussion / Re: Purse.io is shutting down on: April 30, 2020, 08:49:31 AM
Will purse.io transform into a scam political attack tool under Roger Ver's ownership, or should he be cheered because he saved it?
It wasn't already?

I've heard from multiple people who underwent some pretty awkward contact from law enforcement after using purse.io early on. By all appearances, its primary use was creating plausibly deniable laundering for identity thieves into cryptocurrency.

There have been other services that bridge Bitcoin to gift cards-- even ones without a bunch of altcoin shilling. But what they don't have is astronomical discounts like 20%; presumably that's because they aren't being powered by stolen funds.

The staff there engaged in some pretty reprehensible attacks against me personally-- and as a result I will never associate with anyone that has anything to do with them.

But... it sounds like a perfect match for Ver's business ethics.

Largely about maintaining an image I'll guess. All of these crypto companies secretly dream of being stumbled across by a Peter Thiel and being scooped up and they think SF is the place to be for it to happen. There's so much noise there you'd likely be better off in Idaho where a potato billionaire gets a boner for that thar tech stuff.
If people are interested in doing something real, I'm confident that they would be better off in many other places.  If, instead, they want to put on a show and cycle through failed startup after failed startup, then it would be hard to beat.  You can think of what these companies and, more importantly, their investors are doing as fuzz testing the market.  You take some random people, you take some random business, you filter out some of the obvious crap, and you see if it goes somewhere.  If it does, you gas it up and ride it to wherever it takes you-- if it doesn't, you let it die and try something else. The winners win big enough that they pay for all the losers and then some.

I've met people who've worked full time out here for over 20 years and worked for a dozen companies (what?) and never worked for a company that ever successfully shipped any kind of product. It's crazy. I can't imagine what that does to people's self esteem-- though I guess many just develop a blindness to how absurd it is.

There are, of course, also good people-- just from weather and environmental conditions, northern California is a pretty great place to live and so if you could pick anywhere in the world to live this would rank highly (which is part of how they can get away with relatively high taxes! Smiley).  But hot shots here can make close to mid six figures working at Google. Most of the people your SF startup can easily hire ... aren't that.
675  Bitcoin / Bitcoin Discussion / Re: Craig W. only claims to be Satoshi, because he knows the real Satoshi is dead? on: April 30, 2020, 08:31:09 AM
MicroGuy (Greg Matthews) is a piece of shit scammer.   He got caught fraudulently claiming to be "friends" with satoshi in order to promote himself and his scams... and his reaction has been to go into panic overdrive,  deleting the evidence, and flooding multiple threads with noise to bury the posts, and cloud the skies with conspiracy chaff.

The residents of this forum aren't half as ignorant or gullible as the residents of other venues full of victims and former victims of Wright's scam, so I predict that this isn't going to work out for him here.

You are yet another person on this forum that has reached 'Legendary' status with your systematic practice of posting complete garbage.

Other than Satoshi, Cobra, Theymos, Hearn, Gavin, Finney, and myself, I can think of no other individuals worthy of such an accolade.

Behold the narcissist in his native habitat, replete with self-worship, leader of all those gullible enough to believe his incompetent lies.

Or maybe I'm wrong and he's just another drug addled idiot who's somehow earnestly arrived at this confusion,  but if that's the case why hasn't he attempted to answer nutildah's straightforward question?

Why would Satoshi threaten to dox himself?
676  Bitcoin / Bitcoin Discussion / Re: Schwarz rule: An alphanumeric pattern found in private key generation on: April 27, 2020, 12:28:22 AM
I do not believe it to be a bold claim. Images are provided of our console output identifying the private key, with a second image providing the public key to said private key.

Which, amusingly enough, is provably fake.

Quote
We are currently investigating the illegality of withdrawing these btc in our region.
It is unambiguously unlawful to steal someone else's Bitcoin regardless of your region. Even where no case law currently exists, it is just implausible that any functioning legal system could conclude otherwise. So-- asked and answered.

But you could provide evidence simply by demonstrating with a signature-- without stealing anyone's coins. This would be unlikely to create any legal problems, particularly if it's part of a good-faith effort to alert the owner to a vulnerability.

There is no reason not to do so except that you cannot.

If you were worried about getting in trouble for 'finding' the key but not taking the coins-- then your goose would already be cooked, because you've *claimed* to do so, and any action against you would be able to rely on your claims that were detrimental to your interest.  Most likely I think you're just waiting for the coins to be moved by their owner, with no involvement by you at all, at which time you'll declare victory and begin the next step in your scam.
677  Bitcoin / Bitcoin Discussion / Re: Schwarz rule: An alphanumeric pattern found in private key generation on: April 25, 2020, 06:02:21 PM
Fine, you say you found the key to 3Mt3Z1gKp5W4a1UkCrBG9XoqMH9F5Gyupn.
Can you sign a message with it?
If you cannot, you are a scammer and can go away.
Good call.

There is, in fact, a reason why their claims are obviously bullshit (I mean besides the confused 'alphanumeric' babble), but I'm loath to point it out in public and help them improve their scamming.

678  Bitcoin / Bitcoin Discussion / Re: Schwarz rule: An alphanumeric pattern found in private key generation on: April 25, 2020, 05:44:40 PM
You are so grossly incompetent that you can't even manage to write convincing nonsense. Go away.
679  Bitcoin / Bitcoin Discussion / Re: Schwarz rule: An alphanumeric pattern found in private key generation on: April 25, 2020, 04:51:07 PM
Please refrain from posting substanceless bullshit.
680  Bitcoin / Bitcoin Discussion / Re: Schwarz rule: An alphanumeric pattern found in private key generation on: April 25, 2020, 02:25:11 PM
"Bitcoin private key crackers" are a common scam/malware vector.

The way the scam goes is that someone posts some impossible/implausible claims about key cracking and then waits for people to contact them.

They offer to sell their software-- and make money by selling something non-functional.

Then often the software itself is malware.

Presumably they feel okay about this because they're ripping off people who wanted to rob other people. ... though it also snares some people who lost access to their own coins and are desperately trying to recover them.