Topic: Blockstream Satellite 2.0
gmaxwell (OP)
Moderator, Legendary
May 05, 2020, 01:04:30 AM  #1

Blockstream has announced a new version of their satellite bitcoin blockchain stream: https://blockstream.com/2020/05/04/en-announcing-blockstream-satellite-2/

It now supports getting the entire blockchain history over the satellite!

I've been beta testing this the last few weeks. The software is still pretty new but it's great.

One of the exciting new technical features is an alternative serialization of Bitcoin transactions which is more bandwidth efficient. Any bitcoin transaction can be losslessly converted, one transaction at a time, into this alternative serialization; applied across the whole Bitcoin history, it reduces transaction sizes by about 25%.

It saves a little more on older blocks, in part because their transactions have a lot more uncompressed pubkeys, and compressing pubkeys is one of the things it does to shrink transactions. Newer blocks are more like 20% smaller using this serialization.
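To make the pubkey savings concrete, here is a minimal sketch of the standard SEC 65-byte-to-33-byte round trip (this is ordinary pubkey compression, not the blocksat bitstream itself):

Code:
# Standard SEC pubkey compression: 65 bytes -> 33 bytes, losslessly
# reversible because y can be recomputed from x on the curve
# y^2 = x^3 + 7 (mod P) -- the biggest single win on old transactions.

P = 2**256 - 2**32 - 977  # secp256k1 field prime

def compress_pubkey(uncompressed: bytes) -> bytes:
    assert len(uncompressed) == 65 and uncompressed[0] == 0x04
    x, y = uncompressed[1:33], uncompressed[33:]
    prefix = 0x02 | (y[-1] & 1)          # 0x02 = even y, 0x03 = odd y
    return bytes([prefix]) + x

def decompress_pubkey(compressed: bytes) -> bytes:
    assert len(compressed) == 33 and compressed[0] in (2, 3)
    x = int.from_bytes(compressed[1:], "big")
    y_sq = (pow(x, 3, P) + 7) % P
    y = pow(y_sq, (P + 1) // 4, P)       # modular sqrt; valid since P % 4 == 3
    if y & 1 != compressed[0] & 1:
        y = P - y
    return b"\x04" + compressed[1:] + y.to_bytes(32, "big")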

A similar, but somewhat smaller, reduction in size can be achieved by using standard compression tools like xz or zstd on groups of blocks. But because the new serialization in blocksat works one transaction at a time, it's compatible with both transaction relay and FIBRE's mempool-powered block reconstruction. (If you do want to work whole-block-at-a-time, it can also be combined with traditional compression to get a little more savings.)
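A toy illustration of that tradeoff (synthetic data, stdlib lzma/xz standing in for the real tools; the blocksat format itself is a purpose-built serialization, not general-purpose compression):

Code:
import lzma, os

# Fake transactions that share structure: grouped compression can exploit
# repetition *across* transactions, per-tx compression cannot.
TEMPLATE = bytes.fromhex("02000000") * 10      # stand-in for shared tx layout
txs = [TEMPLATE + os.urandom(12) for _ in range(200)]

per_tx = sum(len(lzma.compress(t)) for t in txs)   # one stream per tx
grouped = len(lzma.compress(b"".join(txs)))        # one stream for all
print(f"raw {sum(map(len, txs))} B, per-tx {per_tx} B, grouped {grouped} B")
# Note: per-tx also pays lzma's per-stream header, which exaggerates the
# gap here; the direction of the effect is the point, not the numbers.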

If Bitcoin nodes were to use this generally, they could drop their on-disk storage requirement for the full block data by about 25%; they could also negotiate using it with supporting peers and lower the bandwidth used for initial sync and transaction relay. Post-erlay, this would give a 15% reduction in the total ongoing bandwidth usage of a node (pre-erlay, the bandwidth used by INVs, which this serialization doesn't shrink, would diminish the gains a lot for anything except history sync).
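As a back-of-envelope on where that 15% comes from (the traffic share below is an illustrative assumption, not a measured figure):

Code:
# Illustrative arithmetic only: if tx bytes are ~60% of a post-erlay
# node's ongoing bandwidth (assumed share) and the new serialization
# shrinks them by ~25% (from the post), the overall saving is ~15%.
tx_share = 0.60    # assumed fraction of ongoing bandwidth that is tx data
tx_shrink = 0.25   # size reduction from the compacted serialization
print(f"total ongoing bandwidth saving ~ {tx_share * tx_shrink:.0%}")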

The cool thing about it is that it's not a consensus change: how you store a block locally, or how two consenting peers share transaction data, is no one else's business.  This is why blocksat 2.0 can use the new format without anything changing in the rest of the Bitcoin network.  Right now the blocksat software only uses this new serialization over the satellite-- where space savings are also critical--, but using it on disk or with other peers wouldn't be a huge addition.

The downsides of the new serialization are that it's more computationally expensive to decode than the traditional one, and of course the implementation has a bit of complexity. I've been pushing for this [sort of idea since 2016](https://people.xiph.org/~greg/compacted_txn.txt) (note: the design I described in that link is only morally similar; their bitstream is different-- I'd link to docs on it but I don't think there are any yet), so I'm super excited to see it actually implemented!

The history download is pretty neat: every block is broken into ~1152-byte packets and redundantly coded with 5% + 60 extra packets.  A rolling window of about 6500 blocks is transmitted interleaved, resulting in about one packet from each block in the window per minute.  With this setup (which can be adjusted on the sending side) you can take an hour-long outage per day or so, plus 5% packet loss, and not suffer any additional delays in initial sync. If it does lose sync, it saves the blocks it completely received--even if it doesn't have their ancestors yet--and will continue once the history loops back around again. If you have internet access (potentially expensive or unreliable; or maybe even sneakernet!), you could also connect temporarily and just fetch the chunk of blocks you missed instead of waiting for it to loop around again.
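A rough back-of-envelope from those figures (my reading of the parameters, not the exact sender configuration):

Code:
# Figures from the post: ~1152-byte packets, 5% + 60 repair packets,
# roughly one packet per block per minute within the rolling window.
PKT_BYTES = 1152
block_bytes = 1_250_000                     # example ~1.25 MB block
data_pkts = -(-block_bytes // PKT_BYTES)    # ceiling division
sent_pkts = int(data_pkts * 1.05) + 60      # 5% + 60 repair packets
# With erasure coding, receiving roughly any data_pkts of the sent_pkts
# completes the block; the surplus absorbs ~5% loss plus outages.
print(f"{data_pkts} data packets, {sent_pkts} sent, "
      f"~{data_pkts / 60:.0f} h to complete this block at 1 pkt/min")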

The software was also rebased on Bitcoin Core 0.19-- their prior release had been falling behind a bit.

The satellite signal is doing some neat stuff: they time-division multiplex two streams of different bitrates (one about 100 kbit/sec, like the original blocksat stream, and one about 1 Mbit/sec) on the same frequency.  The low-rate stream can be reliably received with a smaller dish and in worse weather, and only carries new blocks and transactions.  The high-rate stream also carries new blocks and transactions (when they show up), but in addition carries the block history. When new blocks come in, the data from both streams contributes to how fast you receive the block.

I believe they're recommending an 80 cm dish now; mine are 76 cm and the signal on both streams is very strong and robust against poor weather. YMMV based on location and weather conditions. The low-rate stream should be reliable on pretty small dishes.

This new high-rate stream also significantly reduces the latency for transmitting blocks, making it more realistic to mine using blocksat as your primary block feed (and then using $$$ two-way sat to upload blocks when you find one).  Right now 4-second latencies are typical, though there is some opportunity for software tuning that should get it consistently closer to 1 second.  The updated stream also handles multiple sat feeds more seamlessly-- in some regions, such as California where I live, you can see two different blocksat feeds, and if you have two receivers it'll halve the latency to receive blocks (and obviously increase the robustness).

The new setup makes it easier to separate the modem from the bitcoin node.  You can leave a modem close to the dish(es), connected to ethernet (directly with their ethernet-attached modems, or with a USB modem and an RPi), and have it send UDP multicast across the network to feed one or more receiving bitcoin nodes.  This can help eliminate long, annoying coax runs.
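For anyone curious what the receiving side of such a feed looks like, here is a minimal sketch of joining a UDP multicast group (the group address and port are placeholders, not blocksat defaults):

Code:
import socket, struct

GROUP, PORT = "239.0.0.2", 4433   # hypothetical multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
# Ask the kernel to join the multicast group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    pkt, src = sock.recvfrom(2048)       # one satellite packet per datagram
    print(f"received {len(pkt)} bytes from {src}")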

Finally, they also preserved the ability to receive the stream with a pure SDR receiver *and* added the ability to use an off-the-shelf USB DVB-S2 modem, and the DVB modems are more flexible in which LNBs you can use... so if you're in a location where getting blocksat-specific hardware is inconvenient or might erode your privacy-- they've got you covered.

All in all, I think this is pretty exciting.
gmaxwell (OP)
May 05, 2020, 07:08:25 PM (last edit: May 05, 2020, 07:23:41 PM)  #2

Quote
1. Is it right to assume the computational power needed to encode the data is more or less the same as with the current serialization format?

Pretty close. The encoder takes a small amount of extra computation because it matches scripts against the common templates.

Quote
2. Is there any less rough estimate of "more computationally expensive" on the decode side? 20%? 50%? Twice?
IMO a 15% reduction in the total ongoing bandwidth usage is a big deal for a public node with hundreds of connections.

Your benchmark should probably be relative to validation costs rather than the current format; the current format is essentially 'free' to decode, and all the time decoding it is likely spent allocating memory for it. I don't have figures; probably a few percent increase to validation time.

I think the big tradeoff, other than just software complexity/review, is that a node which has stored blocks in compacted form will have to burn CPU time decoding them for peers that don't support the compacted form.

For relay of loose transactions, I struggle to see any downside (again other than the review burden).
goatpig
Legendary, Armory Developer
May 07, 2020, 01:10:03 PM  #3

Can't load the file, so I'm gonna ask here: Do you replace outpoints with shorter ids? Can/do you skip witness data optionally when applicable?

gmaxwell (OP)
May 07, 2020, 02:53:53 PM  #4

Quote
Can't load the file, so I'm gonna ask here: Do you replace outpoints with shorter ids? Can/do you skip witness data optionally when applicable?
Stupid webserver died, alas. But it wouldn't have answered your questions, because the relationship there is only spiritual: what they implemented was built long after that document, probably by people who only got a second-hand description of it. :P

Here is the actual implementation:  https://github.com/Blockstream/bitcoinsatellite/blob/master/src/compressor.cpp#L212

Transactions are encoded and decoded in a fully standalone form, without needing any context. That makes it impossible to replace the input txid/vout with a short identifier, though they are encoded more efficiently.  Script data is templatized, P2SH-embedded segwit gets the redundant hash removed, and so on.
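As an illustration of the templating idea (a sketch only; the real encoder is the compressor.cpp linked above and handles many more templates):

Code:
# Sketch of script templating: a standard P2PKH scriptPubKey is 25 bytes;
# recognizing the template lets you send a 1-byte tag plus the 20-byte
# hash and regenerate the rest on decode. Tag values are hypothetical.
P2PKH_TAG, RAW_TAG = 0x01, 0x00

def encode_script(script: bytes) -> bytes:
    # OP_DUP OP_HASH160 PUSH20 <hash> OP_EQUALVERIFY OP_CHECKSIG
    if (len(script) == 25 and script[:3] == b"\x76\xa9\x14"
            and script[23:] == b"\x88\xac"):
        return bytes([P2PKH_TAG]) + script[3:23]      # 21 bytes total
    return bytes([RAW_TAG]) + script                  # fallback: verbatim

def decode_script(blob: bytes) -> bytes:
    if blob[0] == P2PKH_TAG:
        return b"\x76\xa9\x14" + blob[1:21] + b"\x88\xac"
    return blob[1:]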

Things could be made smaller if there wasn't a requirement for the encoding to be context free... but context free is important for FIBRE reconstruction of blocks using the mempool. It also wouldn't be possible to use context for loose tx relay without a lot of extra complexity. E.g. if you use a counter for outpoints, the peers have to be synchronized on the best block to relay (there is no guarantee of this in bitcoin; consider block races or just non-instant block propagation); if you use a short hash, you have to deal with clowns colliding the short hash; if you use a salted short hash, you'd need some indexing of utxos over that salted hash, negotiation of the salt, etc.

I think there was some talk by the blockstream folks of making another encoding for historical blocks (so no mempool reconstruction) that used whole-block context and saved some more space.

Quote
Any estimate of how fast/slow that is compared to the long-established broadcast via the Internet?
Right now, in my tests on the new signal, I was getting blocks over the satellite about 2-4 seconds behind the internet most of the time.  The performance appears to be dominated by low chunk hit rates, which could be improved... I believe that with tuning they should be able to get it to 1-2 seconds consistently. Perhaps more tuning in the satellite modems could get it lower than that, though the one-way delay to geosync is pretty high, and a lot of the performance the satellite modem gets comes from fairly intense error correction that adds delay.

Quote
How long does it take to do a full sync from scratch? Sure, there is a need for extra equipment, but the question remains: is the broadcast free?
About three weeks.  The signal is free. Blockstream also lets you pay bitcoin to send out additional data over the channel.


SpanishSoldier
Sr. Member
May 08, 2020, 07:21:30 PM  #5

Quote
Blockstream has announced a new version of their satellite bitcoin blockchain stream: https://blockstream.com/2020/05/04/en-announcing-blockstream-satellite-2/

It now supports getting the entire blockchain history over the satellite!

I've been beta testing this the last few weeks. The software is still pretty new but it's great.
Are you still associated with Blockstream?
gmaxwell (OP)
May 09, 2020, 03:51:11 AM  #6

Quote
Are you still associated with Blockstream?
Nope, just an enthusiastic user of their satellite stream.
fillippone
Legendary
May 09, 2020, 01:27:00 PM  #7

That kit is definitely on my wishlist.
What I dream of is a totally internet-free setup (even though I live in a pretty civilised part of the world and have an optic fibre cable in my own home).

So I dream of pairing the satellite downlink with a land mesh network (goTenna) or LoRaWAN to broadcast my transactions and connect to peers.

I started analysing the setup here:

[Total privacy Bitcoin]: off grid Transactions LoRaWan/goTenna

but this requires a lot of the scarcest resource: time.
 

goatpig
Legendary, Armory Developer
May 10, 2020, 01:15:37 PM  #8


A more conservative implementation than I expected, considering the gains.

Quote
Things could be made smaller if there wasn't a requirement for the encoding to be context free... but context free is important for FIBRE reconstruction of blocks using the mempool.

Has there been any consideration of a standalone key:value table for transaction short ids? I'm asking because I can't tell whether the benefit would offset the overhead. It "feels" like a boon for historical data but a mess of complexity the younger the blocks get.

gmaxwell (OP)
May 11, 2020, 12:14:08 AM  #9

Quote
Has there been any consideration of a standalone key:value table for transaction short ids? I'm asking because I can't tell whether the benefit would offset the overhead. It "feels" like a boon for historical data but a mess of complexity the younger the blocks get.
Instead of 'short ids' one would probably use an output counter for that-- it saves having to deal with collisions. It's cheap/simple to maintain the counter itself (there is one for the total txn count displayed in the updatetip log entries).

Even with no reuse of old spent ids (which would make maintaining the counter complicated), every input could be referenced using 31 bits right now... a pretty big improvement over the 264 bits currently used by bitcoin (or the 258 or whatever in this new encoding).

I want to say that when Pieter crunched the numbers, doing that gave an additional 30% savings!

You could probably do a bit better by transmitting the highest counter in the transaction separately, and coding all the other inputs as a varint-like difference below that value. (Or if you want to be really elaborate: code all the input indexes for the transaction differentially in descending order, plus log2(inputs!) bits to encode which goes to which.)
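A sketch of that difference coding (illustrative only; Bitcoin's CompactSize varint stands in for whatever varint the real scheme would use):

Code:
# Encode a transaction's input references (global output counters) as the
# largest counter followed by descending differences. Order information is
# lost; the log2(inputs!) trick above would be needed to restore it.
def compactsize(n: int) -> bytes:            # Bitcoin CompactSize varint
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")

def encode_input_refs(counters: list[int]) -> bytes:
    ordered = sorted(counters, reverse=True)
    out = compactsize(ordered[0])            # highest counter, absolute
    for prev, cur in zip(ordered, ordered[1:]):
        out += compactsize(prev - cur)       # small deltas -> few bytes
    return out

refs = [812_345_678, 812_345_001, 790_000_123]   # made-up counters
print(len(encode_input_refs(refs)), "bytes vs", 36 * len(refs), "for raw outpoints")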

But you'd have to have an index on disk of counter->txid, which would take up a lot of space and take time to update. :(  I think maintaining it through reorgs wouldn't be too complicated, because it would just ride along with the rest of the state... e.g. the txindex gets updated in that way.

When using this with loose txns (rather than in blocks) you'd also run into issues where the encodings weren't compatible between different peers on different near-tip forks.  One way to handle that might be a bit per input to indicate whether the counter or the full txid was used, and using full txids on loose transactions for inputs with counter values too close to your current tip.

The additional savings are substantial.  But because it would require fixed overhead even when you weren't using it (the counter->id index, which both potential senders and potential receivers would have to maintain), it's a little more difficult to reason about its prospects for deployment.   The version Blockstream put out has the advantage of being extremely self-contained in the codebase, and having no overhead (except the code) if you're not using it... so it's a realistic prospect to have all nodes adopt it and use it on a case-by-case basis.

I guess from my perspective: I proposed this sort of thing concretely in 2016 (and less concretely probably years before), and it took all this time for it to get implemented at all, and there still isn't any real prospect of its use outside of satellite. If input-reference compression had been part of it... it still probably wouldn't be done.


bg002h
Donator, Legendary
November 20, 2020, 04:24:05 AM  #10

Quote
Are you still associated with Blockstream?
Nope, just an enthusiastic user of their satellite stream.

I'm a dual-satellite user. It's still a little bit like magic to me. But now my Coldcard's SD card never touches a computer on the internet. I still check my satellite node against a conventional internet-connected node (hey, you never know), but I feel more secure with my sats never coming near the internet.
