Bitcoin Forum
961  Bitcoin / Development & Technical Discussion / Re: Is gossip protocol in Bitcoin perfect? on: December 28, 2018, 03:16:49 PM
A block coding algorithm lets sequential pre-computation of code words using pre-knowledge of txns, which is essential for our purpose but the blocks are too local and don't carry message wide information, you decompose the message to adjacent segments and encode each segment independently which is not helpful in recovering long distant errors.
The block for error correction is the entire block, so it is entirely non-local. You normally hear about error coding using small blocks because when you are correcting arbitrary errors, and not just erasures, the computation for decoding becomes prohibitive for coding large amounts at once. But we only need to handle erasures, which makes it tractable to code everything at once.
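As a toy illustration of why erasures are so much easier than errors (a sketch of my own, nothing like the actual code FIBRE uses): with a single xor parity fragment you can rebuild any one missing fragment, precisely because you know *which* position is missing:

```python
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Split a "block" into equal-size fragments and add one parity fragment.
block = b"the quick brown fox jumps ow"  # length is a multiple of 4
fragments = [block[i:i + 4] for i in range(0, len(block), 4)]
parity = reduce(xor_bytes, fragments)

# Erase fragment 3. We know the position of the hole -- that's what
# makes it an erasure rather than an error.
received = {i: f for i, f in enumerate(fragments) if i != 3}

# Recover: xor the parity with every fragment we did receive.
recovered = reduce(xor_bytes, received.values(), parity)
assert recovered == fragments[3]
```

Handling m missing fragments takes m correction fragments and a more capable code, but the principle is the same.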

Quote
Upon reception of the compact block (typically consisted of the header, coinbase txn and ids of block txns),  the receiver uses its pre-knowledge of txns to build the message (block) in its entirety.
In Fibre the short-ids are somewhat different from compact blocks because they also include transaction lengths. These are needed so the receiver knows how big the holes for the missing transactions are.

Quote
In the very common case of missing txns:
A- The receiver constructs/guesses the sequence of encoded message fragments (code words) using its partial pre-knowledge of embedded txns which ends to realising that there are m missing fragments where it suffices to get m' distinct ones to recover the block and m' < m for being able to act as a transmitter for other peers, besides relaying the receiving stream meanwhile.
Yep. Well, m' = m: it needs to get as many fragments as are missing. The block, broken into fragments, is n fragments long. The receiver is missing m fragments, m <= n (obviously), and can decode once it has received any m correction fragments sent by the sender.

Quote
B- The receiver initiates the protocol by requesting encoded data (the sequence of code words/packets) from peer(s).
Not quite-- that would require a round trip, which is fatal in terms of delay: bam, several hundred milliseconds out the door waiting for the request to reach the sender and for the sender's reply to get back.

So instead the sender immediately sends encoded data when it gets a block, and the receiver tells it to stop when it has had enough. If the receiver was missing nothing, it'll just get some fragments it doesn't need which were sent during the time it took the sender to hear its stop request. It just throws them out (or relays them on to other peers that haven't said stop, then throws them out).

Otherwise your description seems okay.

Quote
original compact block with. For  raw multi kB compact blocks, TCP is hell of a reliable transport protocol.
As for reordering and  retransmission in TCP, I think they are neglectable compare to the overhead of missing txns.
TCP is a hell of a slow transport over long distance links.  As I pointed out, on international internet links there is typically 1%+ packet loss (Akamai says more, but let's use 1% for discussion).  If TCP were used and even one packet were dropped then WHAM: the connection is stalled for hundreds of milliseconds waiting for a retransmission, and further the transmission speed is cut down until the window regrows. Measurements before fibre and such showed that the effective block transmission speed between mining pools was about 750kbit/sec.

TCP's reliability only hurts and doesn't help-- the fibre recovery doesn't care which of the sent fragments it gets, it just needs enough of them... so pausing the whole stream just to retransmit a lost one doesn't make much sense.

Missed transactions and lost packets on long links happen at about similar rates-- 1%-10%. But regardless, if TCP were used the transmission would stall every time a packet was dropped, so losses couldn't be disregarded unless they were very rare.

Quote
2- I think non-systemic nature of the coding, implies frequently used encoding/decoding that are good candidates for hardware acceleration (like what happens for DVB),
The code we are using was designed specially to be very very fast on general purpose hardware. Almost all of its inner decoding work is accomplished via SIMD xors. This was a major barrier in making fibre realistic after I originally proposed the general idea back in 2013-- a naive implementation took too long to decode. Above I say that the decode happens within the speed-of-light delay plus 10ms or so; much of that 10ms is the FEC decode (obviously if nothing was missing, no FEC decode is needed). The FEC decode is also faster in the common case that relatively few fragments are missing.



962  Bitcoin / Development & Technical Discussion / Re: Is gossip protocol in Bitcoin perfect? on: December 27, 2018, 11:45:53 PM
FIBRE uses UDP and streaming-like FEC extra bits to tackle network layer (IP) packet lost. Now you are speaking of application layer (bitcoin) extra information requirements for pushing txn data. What am I missing here?
The whole idea essentially! Smiley   Don't worry, you're in common company.

Fibre never sends the block data itself. Not a single blinking packet of it.

Many people get stuck thinking that it sends the block and then additional correction data to recover from packet losses, because they've heard some description of how FEC is used elsewhere but Fibre doesn't work like other things.  This misunderstanding gets in the way of actually understanding what it does.

The way fibre works is that it sends only correction data.  The receiver uses the correction data to correct holes in its knowledge. The holes come from transactions in the block that weren't known to the receiver-- not from packet loss. In technical language: we construct a perfectly efficient list reconciliation protocol out of a non-systematic erasure code.

Say a block packetizes out to 1300 packets.  Because of missing transactions in your mempool you only have 1297 packets worth of transaction data from this block. Fibre sends you a compact block and additional packets. As the additional packets come in you also stream them to your fibre peers, and as soon as you have received _three_ of those packets (1300 - 1297 = 3) you will reconstruct the block, you will tell your fibre peers to stop sending data for that block, and you will start relaying the block to your non-fibre peers via BIP152.  At the same time, you'll start generating novel correction data on your own, and streaming it to the fibre peers that haven't told you that they had enough.

If two fibre peers get a block at the ~same time, they will both start streaming it at once.  You only need enough packets from all peers combined to recover your block: so in the above, packets one and two might have been generated by peer A, and packet three might have been originally generated by peer B.

When a source generates packets it round-robins them out to its peers, and those peers send them to each other as they come in. So imagine five fibre-speaking nodes A, B, C, D, E.  A is originating the block.  B, C, D are missing three packets and E is missing four.  After A has sent three packets out its network interface, B, C, D will all recover the block because they shared the data with each other. E will recover once it gets a fourth packet, which either comes from A or could be generated by any of B/C/D, because having recovered the block they can generate more novel packets on their own. This makes good use of the full bisection bandwidth of the network rather than being bottlenecked by the outbound bandwidth of the source(s).

Even if you knew nothing (your mempool was empty or the block contained no previously relayed transactions)--  you'll recover the block once 1300 packets (again, from any source) come in. It doesn't matter which packets come in: each contributes equally to your efforts to recover the missing data. You never need to make a roundtrip to recover missing data no matter what is missing or how much: this is critical because a link around the earth can have a RTT of hundreds of milliseconds.

In this protocol packet loss is pretty much irrelevant-- it's not that it "repairs" loss so much as loss never matters to Fibre in the first place.  If you need six packets, you don't care if you get packets 1, 2, 3, 4, 5, 6 or 2, 6, 4, 8, 10, 12 or 4, 8, 15, 16, 23 and 42... you just need six in that case, any six, from any source, in any order, because that's how much you didn't know in advance before the block was even created. The only effect packet loss has is on timing: say you needed three packets, got 1 and 2, and packet 3 got lost along the way; if packet 4 is sent 0.01ms later than packet 3, your recovery will end up happening 0.01ms later than it would have absent the loss, when packet 4 comes in-- unlike ordinary transmission, where it would happen hundreds of milliseconds later, after you've requested and received a retransmission. If you only needed two packets due to knowing more of the transactions in advance, then you'd finish after 1 and 2 came in.

If your link is momentarily interrupted and you miss the first 1400 packets sent, you'll recover from the next N when the link comes back up-- however much you were missing.  Fibre keeps sending until the peer says it has enough-- effectively it's a rateless code (well, technically the implementation defaults to a maximum of eight times the size of the block, simply to avoid wasting endless amounts of bandwidth on a peer that has gone offline and will never say stop; on blocksat-- where there is no back-channel-- it's configured to send 2x the block size, plus it repeats data on a ~24 hour loop as excess capacity is available).
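The "any packets, any source, any order" property can be sketched with a toy rateless code (my own illustration in Python: each coded packet is the xor of a random subset of fragments, and the receiver does Gaussian elimination over GF(2); the real FIBRE code is vastly faster, but the key property is the same-- any k linearly independent packets recover the k fragments, no matter which ones were lost):

```python
import random

def encode(fragments, rng):
    """Emit one coded packet: a random nonzero subset mask and the xor
    of the fragments that subset selects."""
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(len(fragments))
    val = 0
    for i, frag in enumerate(fragments):
        if mask >> i & 1:
            val ^= frag
    return mask, val

def absorb(rows, mask, val):
    """Gaussian elimination over GF(2): reduce against stored rows and
    keep the packet only if it is linearly independent."""
    while mask:
        pivot = mask.bit_length() - 1
        if pivot not in rows:
            rows[pivot] = (mask, val)
            return
        pmask, pval = rows[pivot]
        mask ^= pmask
        val ^= pval
    # mask == 0: packet was redundant, silently discard it

def solve(rows, k):
    """Back-substitute once k independent rows are held."""
    frags = [0] * k
    for pivot in sorted(rows):
        mask, val = rows[pivot]
        for b in range(pivot):
            if mask >> b & 1:
                val ^= frags[b]
        frags[pivot] = val
    return frags

K = 6
fragments = [0x3A, 0xC4, 0x21, 0x07, 0x63, 0x94]  # stand-in payloads
rng = random.Random(1)
rows, sent = {}, 0
while len(rows) < K:                # receiver says "stop" once it has enough
    mask, val = encode(fragments, rng)
    sent += 1
    if sent % 3 == 0:               # simulate dropping every third packet
        continue
    absorb(rows, mask, val)

assert solve(rows, K) == fragments  # recovered despite the losses
```

Notice that the receiver never asks for a retransmission: lost packets simply mean it keeps listening a little longer.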

I've left out a bunch of fine details to capture the basic understanding (e.g. the correction doesn't really need to work on a whole packet basis, it's just easier to explain that way.)

UDP is used so that TCP's loss correction retransmissions, reordering, and rate control don't get in the way.
963  Bitcoin / Bitcoin Discussion / Re: How Many Full Nodes Bitcoin Online ? on: December 27, 2018, 09:45:21 PM
firstly trying to get mempools synced is meant to be about if everyone has the same tx set before a pool mines a block, then all that needs to be sent as a confirmed block is the headers and list of txid's. thus reducing the data needed to be sent when a confirmed block is created.
Our work is not related to making "mempools synced", though they are naturally similar.

Our work is exclusively related to eliminating the massive overheads from relaying transactions the first time through as they go around the network.
 
Quote
to then suddenly need to grab hundreds of transactions from X and hundreds of tx from Y AGAIN
That doesn't happen in the Bitcoin protocol, no one has proposed for it to happen, and it isn't needed.

It's really a shame that people are forced to waste their time correcting you simply because you are so persistent and voluminous in your inaccuracies that you manage to confuse many people even though your posts are not very convincing.

Quote
i do find it funny that it was these very same devs that wanted a fee freemarket by removing a fee priority mechanism to make individualising mempools, that are now seeing the flaw in it..
Your statement here makes no sense.  Nodes prioritize transactions by feerate, none of that has been removed.  The only "individualizing" in practice is that a low memory host might reduce their mempool size. This is, in any case, totally unrelated to removing the relay inefficiencies.

Quote
but allowing nodes to relay tx's and drop them due to "fee free market" but then have to interrogate nodes to list their entire mempools(actually causing more bandwidth) and pick up the tx's AGAIN(more bandwidth again).. is silly..
Again, Bitcoin nodes don't interrogate nodes to list mempools nor pick up transactions again, no one has proposed they do, because there is no reason to do that.

Just pre-empting another confused tangent: There is a "mempool" p2p message which was added to the protocol by bitpay for the purpose of surveilling the network under a dishonest justification, which was later realized to be a privacy problem and the privacy leak was removed (and after that bitpay's staff recommended removing it from the protocol).  Bitcoin Core has no ability to send a mempool p2p request and never has had the ability to do so. It might be interesting to do so at initial startup to quick start the mempool and give miners something to mine after being offline for a while, but at the moment no one is working on that, AFAIK.

Quote
they all initially did get 1,2,3,4,a,b,c,d at initial relay..

The problem we are addressing is that if you have 100 peers, each of your hundred peers will advertise (or have advertised to them) each of those 8 transactions, using 100x the bandwidth on those advertisements as if you had only one peer.
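As a back-of-the-envelope sketch of that overhead (an inv entry is 36 bytes on the wire: a 4-byte type plus a 32-byte hash; message and TCP/IP framing overhead is ignored here, so this is a lower bound):

```python
INV_ENTRY_BYTES = 36   # 4-byte type + 32-byte txid
peers = 100
txs = 8

one_peer = txs * INV_ENTRY_BYTES   # 288 bytes to announce 8 txs on one link
all_peers = peers * one_peer       # 28800 bytes across 100 links
print(all_peers // one_peer)       # prints 100
```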

Quote
the solution is much more simple.. get rid of the free market that lets nodes drop tx's in the initial relay. thus they would ALL have them all first go-around. without having to interrogate EACH connected node, after dropping.. because their would be no drop in the first place.
The need for nodes to potentially drop transactions has nothing to do with free market behaviour and everything to do with nodes not having infinite storage to keep the transactions.  But there is, again, no interrogation-- they don't need to go refetch them again.

Quote
X)now the first node has to ask the third node for the list.. 1,a,b,c,d (more data than initial relay)
y)now the first node has to ask the third node for the missing.. d (more data than initial relay)

You've misunderstood what we've accomplished here.

If at some point during the initial relay of transactions you receive from your other peers TX A, B, C, D, E, F and I get TX B, C, D, E, F, then in the historical Bitcoin protocol each of those six values would be sent across the link between us (potentially twice).

Instead, you could send me the single value X = A xor B xor C xor D xor E xor F,  or I could send you the single value Y = B xor C xor D xor E xor F.  

After the single value is exchanged, whoever received it computes X xor Y = A -- the missing transaction, even though neither of us knew in advance which transaction was missing.
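The single-difference exchange above can be sketched in a few lines of Python (toy integer "txids" standing in for 256-bit hashes):

```python
from functools import reduce

# Toy txids; real ones would be 256-bit hashes.
yours = {0xA1, 0xB2, 0xC3, 0xD4, 0xE5, 0xF6}   # A, B, C, D, E, F
mine  = {0xB2, 0xC3, 0xD4, 0xE5, 0xF6}          # missing A

X = reduce(lambda a, b: a ^ b, yours)
Y = reduce(lambda a, b: a ^ b, mine)

# One value crosses the wire; every shared element cancels out.
assert X ^ Y == 0xA1
```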

Minisketch generalizes this to support any number of differences.  The data sent is exactly equal to the number of different values, regardless of how big the original sets are. (In fact, the first value in a minisketch is exactly what I described above: the xor of all the elements in your set).

So, if you have received in relay A, B, C, D ... X and I have already received B, C, D ... X, Y, Z, then I need to send you only three values (or you to me): the xor of all my values, the xor of all my values cubed, and the xor of all my values to the fifth power... and then you will know that I am missing A from you, and you are missing Y and Z from me. By doing this we send only three values on the link between us in the initial relay, instead of 26 to 52 (depending on how much duplication there is from concurrent sends).
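The generalization can be illustrated with a toy capacity-2 sketch over GF(2^8) (my own illustration with brute-force decoding; the real minisketch library uses much larger fields and proper algebraic root-finding, and none of this is its API):

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) with the AES reduction polynomial x^8+x^4+x^3+x+1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def cube(x):
    return gf_mul(x, gf_mul(x, x))

def sketch(elements):
    """Capacity-2 sketch: xor of the elements and xor of their cubes."""
    s1 = s3 = 0
    for x in elements:
        s1 ^= x
        s3 ^= cube(x)
    return s1, s3

def decode2(s1, s3):
    """Brute-force the (at most two) elements whose sketch is (s1, s3)."""
    if s1 == 0 and s3 == 0:
        return set()
    if s1 != 0 and cube(s1) == s3:
        return {s1}                          # exactly one difference
    for a in range(1, 256):
        b = s1 ^ a
        if a < b and cube(a) ^ cube(b) == s3:
            return {a, b}                    # exactly two differences
    raise ValueError("more than two differences")

yours = {0x11, 0x22, 0x33, 0x44}
mine  = {0x22, 0x33, 0x44, 0x99}             # symmetric difference {0x11, 0x99}

ys1, ys3 = sketch(yours)
ms1, ms3 = sketch(mine)
# Sketches xor together, so exchanging one sketch lets either side decode
# the symmetric difference without knowing it in advance.
assert decode2(ys1 ^ ms1, ys3 ^ ms3) == {0x11, 0x99}
```

The sketch is two field elements regardless of how large the two sets are-- only the number of differences matters, exactly as described above.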

Quote
by getting rid of the "free market" and getting back to a consensus fee priority formulae/structure that everyone follows means
There has never been and can never be a "consensus priority formula", because priority is by its very definition external to consensus.  But the behaviour of existing nodes is consistent-- they keep and drop the same transactions, subject to having them in the first place, and subject to the restriction that anything configured to use less memory obviously can't keep as much.

Quote
to then not need to re-interrogate nodes and re-relay transactions.. then you will get to conect to 16-24 nodes as oppose to 8. and no need extra bandwidth and commands/sums playing around.
There is no re-interrogation and no re-relay in Bitcoin, nor is any proposed.  It exists only in the imaginary protocol that you spend your days attacking and confusing people with.  The inefficiency in Bitcoin that we're working to resolve exists in the initial relay itself, and would still exist even if nodes had no mempools at all.
964  Bitcoin / Development & Technical Discussion / Re: Is gossip protocol in Bitcoin perfect? on: December 27, 2018, 08:34:23 PM
To be more specific: Packet lost is more probable in wireless/broadband communications
On a good day transcontinental internet links lose more than 1% of packets. The way congestion control in TCP works essentially guarantees loss on links unless they are massively over-provisioned/under-utilized-- packet loss is the mechanism the network uses to tell senders that the link is full-- especially since ECN is not ubiquitous. Periods of 3% loss are not uncommon on international links; even worse is common in and out of China due to firewalling and weird national telecom policies.

To give some concrete examples: Akamai claims 1.94% packet loss from Atlanta to Amsterdam, 4.5% packet loss from Seattle to Sydney, 4.12% LA to Tokyo. (Akamai's numbers seem a bit high to me, but only a bit)

Without a roundtrip-less protocol, packet loss during block transmission requires an additional delay of at least one round trip (with TCP the effect is even worse). Wireless wasn't a consideration in the design of Fibre.  Without handling this you cannot get the 99th percentile worldwide block transmission times under 450ms regardless of your link bandwidths (or the 95th percentile, per Akamai's loss figures).

The FEC in Fibre isn't used exclusively, or even primarily, to deal with packet loss in any case.  It's used to compensate for missing transactions without the sender knowing which transactions the receiver is missing.  Without doing this you cannot get the ~90th percentile worldwide block transmission times under 450ms, and you cannot avoid massive propagation penalties when unexpected transactions are included.

The only reason that use of fibre isn't a several-times speedup in the 99th percentile worldwide block propagation is simply that it's already in use, and we gain most of the benefits from it even if only a few nodes in each geographic region use it, since they'll blast through the lossy paths and nodes in their regions will simply get the blocks from the places that have them.

This is important for mitigating harm from mining centralization, because a centralized miner can achieve equally fast transmission for their own blocks using simpler protocols: they know in advance what transactions they'll include, and can extend their own next block without waiting for propagation.

Quote
We are not expecting a major event in networking layer in near future. Do you think otherwise?
I know otherwise, since we are working to replace transaction rumouring with a reconciliation protocol, which will cut tx rumouring bandwidth usage on the order of 40x. After that's complete we'll probably work on integrating fibre for the above reasons.

We're also still actively merging improvements that have first been deployed in the fibre implementation, e.g. supporting multiple outstanding concurrent getblocktxn decodes for compact blocks.
965  Bitcoin / Development & Technical Discussion / Re: Is gossip protocol in Bitcoin perfect? on: December 27, 2018, 06:12:54 PM
I get it, but unlike what you claim, FIBRE will never be deployed in any other way
It already is, Matt isn't the only user of it. Several mining pools also use it on their own, and Blockstream's satellite uses it (not for high speed, but because it also needed a roundtripless protocol for relaying blocks over a lossy channel).

Quote
but there is no way to go further than bip 152
Sure there is.

Quote
And web services are not p2p, there is a heterogeneous client/server style communication between engaging parties.
Good thing it's not a webservice then.

Quote
FIBRE prepares an infrastructure as a service for nodes
No it doesn't. It's simply a protocol for relaying blocks that never requires a round trip.

Quote
To have a FIBRE network you need more than just compact Blocks and compression/FEC and other techniques, you definitely need a handful of distributed-well connected specialized nodes (that are centrally owned and managed by an entity) that can put a trust in each other to bypass full block validation and lock acquisitions.
You absolutely do not.  In fact, the text "The FIBRE codebase running on each is optimized to ensure that Compact Blocks for each new block are made available to peers after only SPV validation of new blocks, without blocking for lock acquisition" also describes BIP152 in Bitcoin Core right now.   BIP152 was designed to permit blocks to be relayed after only SPV validation.  This wasn't implemented in the very first release with BIP152, because it was optional and required a lot of additional work, but it was implemented long ago. Similarly, the ability to relay a compact block without acquiring cs_main was first done in Fibre but has also been in Bitcoin Core for more than a year.

So if it's already a standard part of BIP152 deployment, why is it mentioned on that page?  Because the page pre-dates these updates.

No trust is required for full nodes to use this sort of optimization for the reasons explained in BIP152.

So again, what you are saying can't be done not only can be done but has been done for a long time already.


Quote
because you need the nodes to trust each other and do not revalidate every block in full, otherwise you would simply configure them to use BIP 152 in HB mode, no additives needed.
As pointed out above, BIP152 also relays blocks without validating them. To what BIP152 HB mode does, fibre adds: unconditionally never using any round trips-- not in the protocol, not at the network layer-- even when the block contains transactions that the network has never seen before. It also results in packet loss not causing any delay (other than the time until the next packet).

Quote
As I understand, you are speaking of FIBRE as a technology, and I'm seeing it as a work-around complementary service available for large pools and altruists to improve latency for themselves or for the public respectively.
Yes, you are confusing the fibre protocol with Matt's public relay network.

Quote
Quote
It would be perfectly possible to use FIBRE for block relay everywhere and abandon BIP152 just like we abandoned the original block relaying mechanism.
I doubt it. Actually I denounce it completely.
If you mean improving BIP 152, to use FEC on UDP and stuff like that, it is not FIBRE, it is FIBRE technology and I've no comments to make about its feasibility, just my blessings.

You don't get to decide what Fibre is; the people who made it do. Fibre is a protocol for relaying blocks without ever making even a single round trip, even in the presence of packet loss, which allows the receiver to recover the block as soon as it has received only as much data as it was missing.  This is how Fibre has always been described.  Matt's public relay network is a service run by Matt which pre-dated fibre by many years and helped drive its development. We specifically introduced the name Fibre because people were confusing the earlier relay protocol used by Matt's public relay network with the relay network itself, because the protocol didn't have a name (for a long time it was just called the relay network protocol, though later I started calling it the fast-block-relay-protocol). When we created Fibre we decided to fix that problem by making the free-standing nature of the protocol more clear. It seemed to work, more or less, until recently, when an ICO and its perpetrators started promoting otherwise.

Don't fall for the re-contextualization by an ICO scam that wants to fraudulently call the innovative, free, and open protocol we invented for relaying blocks as fast as possible a "service", so that they can sell unsophisticated investors on investing in their competing service.

And as far as "no comments to make about its feasibility"-- you just wrote "but there is no way to go further than bip 152".  So, you do have comments-- but you're simply incorrect about it.
966  Bitcoin / Development & Technical Discussion / Re: Is gossip protocol in Bitcoin perfect? on: December 27, 2018, 10:03:56 AM
So, it is for "registered users" and there is some kind of trust involved because FIBRE nodes do not bother wasting their valuable time to fully validate blocks.
Oh man, _please_ go re-read my first message to you in this thread.  You are confusing Matt's public systems with the protocol-- Linux with Amazon.

Fibre is a protocol for distributing blocks very fast.  The protocol is open source and distributed, and I helpfully linked to it above.

Having a great protocol alone isn't enough to get block propagation as fast as a large, well run, centralized miner could achieve for themselves, because such a miner could carefully operate a well run, DoS-protected network of their own nodes to forward their blocks.  It would be bad if only the largest centralized miners had access to those facilities.  As a result, Matt runs a public network of geographically distributed, well maintained, DoS-protected nodes that anyone who wants to can connect their ordinary stock bitcoin nodes to, so that the advantages of that kind of well maintained infrastructure are available to more parties.

Read the heading of the very page you are quoting.  "Public highly optimized fibre network: As there is significant benefit in having at least one publicly-available high-speed relay network, I maintain one."   Fibre is analogous to QUIC-- a freely licensed protocol with higher performance than the earlier alternatives-- and Matt's public relay network is analogous to Google's webservers-- a privately operated infrastructure using a high speed protocol and making its services available to the public.

The experience of running this infrastructure is a lot of what made it possible to design fibre (and BIP152) in the first place, since it created a network in a bottle that could be well measured and characterized, exposing the limits of the legacy protocol, and made it easy to deploy experimental adjustments on a rapid-fire basis without risking disrupting the rest of the network.

Do you see how you are intermixing two different things?

What you're saying about complementary/supplementary services and their value as an addition applies perfectly to Matt's public relay network (and I agree!).  It just doesn't apply to the protocol-- which is what I was pointing to above, not matt's public relay network.

Quote
but you can't disruptively replace it with a new fancy idea like FIBRE or anything else. Bitcoin p2p network is not good candidate

This is amusing because we already completely replaced how blocks are propagated in Bitcoin with technology from FIBRE (optimized for minimizing bandwidth somewhat more than minimizing latency, which FIBRE optimizes) over two years ago now.

I've heard the old adage that people saying “It can’t be done,” are always being interrupted by somebody doing it. Being corrected by people who have been doing it for years is a bit of a twist... Tongue

It would be perfectly possible to use FIBRE for block relay everywhere and abandon BIP152 just like we abandoned the original block relaying mechanism. But the vast majority of nodes presumably don't want to use a couple times the bandwidth in exchange for shaving off the last few milliseconds, and so the benefit of doing so hasn't yet been worth the effort to bring it up to that level of maturity (in particular, one has to be pretty sure there are no dangerous bugs in software that will run on all the nodes). This is especially true because most of the latency benefit of FIBRE can be achieved with just a small number of nodes running it, since they do the heavy lifting of carrying blocks over the 100+ms RTT transcontinental links, while BIP152 has pretty similar performance to Fibre where the link RTTs are 10ms or less. Block propagation with BIP152 flows along the lowest delay paths, so when some nodes running Fibre manage to wormhole blocks around the globe faster than BIP152 does, all the other nodes happily exploit that fact.

Presumably, the Fibre protocol will eventually make it into default node software once doing so becomes the lowest-hanging fruit-- likely with more knobs and/or automation to set the bandwidth vs latency trade-off more intelligently.   In BIP152 there are already two different trade-offs: normal and High Bandwidth (HB) mode. HB mode essentially eliminates one round trip time for relay, at the expense of wasting perhaps 13KB of bandwidth per block; nodes figure out which three peers would be most useful to have in HB mode and use it with those... so as to usually achieve the latency advantages of HB mode without much bandwidth cost.  The addition of Fibre into this mix would probably act as a third, higher bandwidth, lower latency trade-off that nodes would use if either manually configured or if they detected they had over some threshold of available bandwidth, so we could get the X percent fastest nodes on the network using it, thereby capturing most of the latency advantage without most of the bandwidth overhead. Until then, anyone who wants to run it is still free to do so... and without ever interacting with Matt's servers, because, again-- Matt's servers are just a user of a protocol that is open and useful to anyone.



967  Bitcoin / Development & Technical Discussion / Re: Is gossip protocol in Bitcoin perfect? on: December 27, 2018, 02:29:30 AM
I'd much prefer a discussion around why a decentralized relay network is labelled a scam -- can't you criticize the idea instead of making a low level integrity attack?
Because it very clearly is a scam. Just because you've managed to bamboozle yourself doesn't make it right.  Selling some ICO token as a pretext to collect money from unsophisticated investors for some business that makes little sense is flat out unethical, makes everyone in the cryptocurrency space look bad, and the stink of it scares off competent technical contributors because they don't want to get anywhere near the scammyness of it.

Your post really has nothing to do with the OP's question-- it seems like you didn't even bother to read it, and only read the subject line.  The OP was asking if gossiped messages would reach all nodes or, in other words, "is the communication lossless?". Achow101 responded pretty much completely: it can't be guaranteed to reach all (no physically realizable network could make such a guarantee, since nodes can become disconnected), though in practice almost all txns reach almost all nodes quickly (subject to the intentional delays added to help obscure origins).

Quote
but this is my livelihood. If im wasting my time working on something that is a scam, id like to know that,
That is a fair enough request, but at the same time anyone in this space is utterly flooded with garbage. It's literally impossible for any person to explain why all of it is junk; worse, in some cases the perpetrators are actively malicious and aggressively retaliate (ones that are pure marketing have literally nothing better to do than spend time and money attacking their critics), so people simply don't.

Why is what you're working on a scam?  Let's start with the fact that it's soliciting investments from the general public (likely in violation of US securities law) for an enterprise which expects to make money ultimately from resources provided by third parties... to accomplish a task which is already accomplished by existing distributed software.  Even if the result manages to do something better compared to existing systems, at the end there would be no reason for users to not take the software and rip out the monetization, leaving the investing public holding nothing (and using non-free licensing wouldn't help, since few are foolish enough to run closed systems with their cryptocurrency applications)... and I find it personally pretty doubtful that BloxRoute or Marlin will manage to construct something that outperforms the existing free state of the art, considering that both of the whitepapers massively misrepresent the state of the art.

Right now the current state of the art in Bitcoin block propagation is that >99.99% of blocks are transmitted to locations everywhere in the world within the one-way network delay plus less than 10ms while running on commodity hardware and networks.  The one-way network delay is the physical limit, which can only be improved on by putting in networks with more efficient physical paths (which someone can do, e.g. what HFT companies do... but isn't what you're doing [1]). 10ms could be improved on (esp with asic decoders, or creative protocol/software optimization) but it's not clear that going from 10ms to (say) 1ms would actually matter much-- perhaps worth doing, but it isn't something that is going to recoup the investors' investment.

Without a clear statement on: How is it going to outperform light+10ms  and by how much  [2], why Bitcoin participants would be willing to _pay_ to gain access to that 0-10ms improvement, and why they wouldn't just cut out the 'middleman' token to do it?-- then the whole thing just looks like something designed to bilk unsophisticated investors which don't know the most basic questions to ask or have been deceived by misrepresentations about the performance of the existing state of the art.

Scamminess bonus round:  Text from the Marlin Protocol whitepaper was clearly plagiarized directly from the bloxroute whitepaper. As an example "The Falcon Network was deployed (by two members of our bloXroute Labs team)" (em mine, but this is not the only material copied outright-- e.g. it wholesale copies the incorrect description of what fibre is).  So not only are you working on a scammy ICO, it's an unoriginal scammy ICO that is ripping off another scammy ICO.

[1] BloXroute claims to be doing that with AWS but anyone can run the existing free and open state of the art software for block relay on AWS and gain whatever advantages locating relay on AWS has... also,  I have measured relay on AWS and it's actually not impressive compared to other hosting options.

[2] To be clear: we already know how to improve on the existing state of the art-- use a more efficient sketch encoding, use a more efficient serialization of encoded transactions, use a faster to decode FEC, optimize the software (and in particular use more precomputation), improve packet scheduling, support using nics with integrated FPGAs for faster decode and packet relay, and get more users using state of the art protocols-- , but even if the improvements achieve the maximum possible gain, it doesn't seem like it'll matter that much, so currently the efforts of open source developers are being spent on other areas.  But since both Marlin and bloxroute misrepresent the state of the art-- and fail to propose any performance advancement beyond it that I'm aware of--, I am doubtful that either group understands it well enough to even match it, much less exceed it.
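For intuition on the "faster to decode FEC" point: the receiver of a block relay like this knows *which* chunks it is missing (erasures), which is what makes large-block coding tractable. A toy Python sketch with a single XOR parity chunk shows the idea -- any one missing chunk is recoverable; the real codes use many parity chunks and can fill many holes, and all names here are made up for illustration:

```python
# Toy erasure code: one XOR parity chunk lets us recover any single
# missing chunk. Real block-relay FEC is far more capable (many parity
# chunks, many erasures), but the principle is the same: the receiver
# knows which chunks are missing, so no error-location search is needed.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(chunks):
    # Parity chunk = XOR of all data chunks (chunks must be equal length).
    return reduce(xor_bytes, chunks)

def recover(received, parity):
    # 'received' maps chunk index -> data; exactly one chunk is missing.
    # XORing the parity with everything that arrived yields the hole.
    return reduce(xor_bytes, received.values(), parity)

block = [b"tx_aaaa", b"tx_bbbb", b"tx_cccc", b"tx_dddd"]
parity = make_parity(block)

# Chunk 2 is "erased" in transit; the receiver knows which one is gone.
received = {i: c for i, c in enumerate(block) if i != 2}
assert recover(received, parity) == b"tx_cccc"
```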
968  Bitcoin / Development & Technical Discussion / Re: Is gossip protocol in Bitcoin perfect? on: December 27, 2018, 12:16:29 AM
Well, we don't.
Sure we do.  Simply saying that we don't, without any reasoning or justification (except perhaps avoiding admitting that you were mistaken before), doesn't make it so.

Quote
So, please, give me a label for such 'thing'

I'm not sure what you mean by label. Do you mean a link?

Quote
I'm ok with Matt running this protocol,
Well that's good, because anyone can run anything they want; they don't need your or anyone else's approval, and in fact there is nothing you can do to stop them.
969  Bitcoin / Development & Technical Discussion / Re: Is gossip protocol in Bitcoin perfect? on: December 26, 2018, 07:12:26 PM
If it was a requirement for Linux to be hosted in a data center, it would fit in such a category. It is not the case for Linux but for FIBRE, almost, it is.
No it isn't, not at all.

The reason Matt runs a network of well maintained nodes for people to connect to is that doing so is beneficial (well maintained, well positioned, well networked hosts have better latency than random stuff) and that if someone doesn't do it and allow the public to use it, large miners will be the only parties to benefit from that kind of service (because they do it for themselves) and enjoy an advantage against others.

Fibre is just a protocol that renders bitcoin block transmission free of round trips and exploits pre-knowledge of transaction data, which allows blocks to be transferred in very close to the one-way delay 99.9999% of the time.  There is nothing "service", "centralized", or "data-center" about it. Please discontinue spreading that misinformation.
970  Bitcoin / Development & Technical Discussion / Re: Is gossip protocol in Bitcoin perfect? on: December 26, 2018, 05:07:52 PM
I know BloxRoute is a solution that Bitcoin could adopt,
BloxRoute appears to be a straight up ICO scam as far as I can tell-- It doesn't appear to propose anything new or interesting that would be an advance over what is in use in the network today, but uses a lot of hype and jargon to try to sound new and exciting.

Then again googling around suggests the thing you claim to be working on is a competing ICO scam. ... and presumably you only posted here to hype it-- especially since your post is offtopic from the thread. Sad

Quote
complementary centralized services like above project
Fibre is a protocol, not a service (and certainly not a centralized service). It's like you're calling Linux a centralized service because amazon runs it.
971  Bitcoin / Development & Technical Discussion / Re: [SCALING] Minisketch on: December 24, 2018, 02:51:00 AM
and I think this method will fit in cryptos. if so, then this would be a good idea to relay the FEE parameter with these hash values too. then you could add a HAND-SHAKING stage at the beginning of these back and forts:
[...]
node minimal fee threshold) Alice and Bob (within hand-shaking stage) could express their minimal-fee-threshold to the other side and save even more bandwidth from the beginning. not sure,

Bitcoin nodes already tell their peers the minimum feerate they'll accept, so their peers won't offer anything that they won't accept. It doesn't even need to go into the IDs.   See BIP-133.  It would be possible to put the feerate in the IDs for even more fine grained/realtime filtering, but I believe that would be a net waste of bandwidth due to the extra space for that information and the relatively few extra transactions that get sent in between feefilter updates.
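The feefilter idea boils down to something like this toy Python sketch (not Bitcoin Core's actual code; the class, names, and the threshold value are made up for illustration):

```python
# Toy model of BIP-133 style feefilter behaviour: a peer advertises the
# minimum feerate it will accept, and we simply never announce
# transactions below it -- saving the bandwidth of announcements the
# peer would ignore anyway.

class Peer:
    def __init__(self, min_feerate):
        # Feerate threshold the peer sent us (units are illustrative).
        self.min_feerate = min_feerate
        self.announced = []

def maybe_announce(peer, txid, feerate):
    # Only announce transactions at or above the peer's stated floor.
    if feerate >= peer.min_feerate:
        peer.announced.append(txid)

peer = Peer(min_feerate=1000)           # hypothetical threshold
maybe_announce(peer, "tx_low", 500)     # filtered out, never sent
maybe_announce(peer, "tx_ok", 2000)     # announced normally
assert peer.announced == ["tx_ok"]
```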

Quote
and I have a question here. if I understand it right, the BCH here needs a rearrangement of data in mempool - so in continue, we should know does this rearrangement need more space too? and the encoding/decoding processes engage the processor of a node. may it put an end on nodes that have miniature hardware like raspberry family?

We don't plan on changing how the mempool works-- rather, for each peer a sketch would be kept with all the transactions that your node would like to send to / expects to receive from that peer. It takes a bit of memory but it's pretty negligible compared to other data kept per peer (such as the filters used to avoid redundant tx relay today).

For computation we've been designing assuming needing to accommodate a rpi3 ... and part of the purpose of building minisketch as a fairly well optimized library was understanding exactly what the performance tradeoffs would be. 

One nice thing is that all the CPU heavy lifting is done by the party that requests reconciliation, so if you are CPU starved you can just reconcile less frequently. Doing so will also further reduce your bandwidth but just has a downside of propagating transactions somewhat slower.
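If I'm describing it right, the smallest case of the construction degenerates to something very familiar: since addition in GF(2^64) is XOR, a capacity-1 sketch is just the XOR-sum of the IDs, and XORing two parties' sketches yields the one ID that differs. A toy Python illustration (pure illustration; real minisketch uses odd power sums so a sketch of c field elements recovers up to c differences):

```python
# Toy set reconciliation with a capacity-1 sketch: the sketch of a set
# of 64-bit IDs is the XOR of them all, so XORing Alice's and Bob's
# sketches cancels everything they share and leaves the one ID that
# differs. Minisketch generalizes this so c sketch elements can recover
# up to c differences.
from functools import reduce

def sketch(ids):
    return reduce(lambda a, b: a ^ b, ids, 0)

alice = {0xDEADBEEF, 0xCAFEF00D, 0x12345678}
bob   = {0xDEADBEEF, 0xCAFEF00D}        # Bob is missing one transaction

# One round trip: Alice sends an 8-byte sketch instead of her whole
# ID list; Bob combines it with his own sketch to find the hole.
missing = sketch(alice) ^ sketch(bob)
assert missing == 0x12345678
```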
 


972  Bitcoin / Bitcoin Discussion / Re: Bitcoin Core 0.17.0 Released on: December 24, 2018, 12:29:11 AM
Latest version on bitcoin.org now is 0.17.0.1 but I have seen no announcement of that here. The signatures seem OK from first glance, upgrading now...

Releases like 0.17.0.x are just packaging respins, so they don't get announcements.  Basically nothing was changed in it but something about the OSX installer.

However, 0.17.1 is done now-- and tagged in git for over a day-- holiday travel is just delaying getting the binaries up, which is why it hasn't been announced yet. They should be up by Christmas however!
973  Bitcoin / Development & Technical Discussion / Re: [SCALING] Minisketch on: December 23, 2018, 11:35:52 PM
I'm preparing a draft for this, but I'm really sick of doing work on problems that you guys in the team are not interested in.
And I'm really sick of your insulting lectures when you can't even bother to keep up on the state of the art in what's already been proposed. Tongue Please spare me the excuses about how it's other people's fault that you're not doing other things. You seem to have plenty of time to throw mud...

There are already existing designs for an assumevalid 'pruned sync'...  but not enough hours in a day to implement everything immediately, and there are many components that have needed their own research (e.g. rolling hashes like the ecmh, erasure codes to avoid having snapshots multiplying storage requirements, etc.).

If you want to help--great! But no one needs the drama.  It's hard enough balancing all the trade-offs, considerations, and boring engineering without having to worry about someone being bent that other people aren't going out of their way to make them feel important. It's hard enough just understanding all the considerations that other engineers express, so no one has time for people who come in like a bull in a china shop and don't put that effort into other people's work. It doesn't matter who you are, no one can guarantee that any engineering effort will work out or that its results will be used even if it does.  The history of Bitcoin (as is the case in all other engineering fields) is littered with ideas that never (yet) made it to widespread use-- vastly more of them from the very people you think aren't listening to you than from you. That's just how engineering goes in the real world. Don't take it personally.
974  Bitcoin / Development & Technical Discussion / Re: [SCALING] Minisketch on: December 23, 2018, 10:04:37 AM
Thanks for the information. I agree that initial sync will become big problem, but aren't verification time is very fast, so this won't be problem unless block size limit is increase too much (unless verification time isn't growing linearly)?
Initial sync is already an enormous problem, and even if you assume computers/bandwidth improve at 18% year over year (which is a big assumption...) any blocksize over ~300kbytes means that the initial sync time will continue to get worse.
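To make the shape of that argument concrete, here's a back-of-the-envelope model in Python. Every constant is an illustrative guess (it doesn't try to reproduce the ~300 kbyte figure exactly): sync cost is roughly cumulative chain size divided by validation speed, the chain grows linearly with the block size, and speed is assumed to compound at 18%/year.

```python
# Rough model: sync time ~ chain_size / validation_speed. Sync stops
# getting worse only while the chain's relative growth stays below the
# assumed 18%/year hardware improvement. Constants are illustrative
# guesses, not measurements.

BLOCKS_PER_YEAR = 52_560               # ~one block every 10 minutes
GROWTH = 1.18                          # assumed yearly compute/bandwidth gain

def relative_sync_time(block_kb, chain_kb_now, years):
    # Chain grows linearly with block size; speed grows exponentially.
    chain = chain_kb_now + block_kb * BLOCKS_PER_YEAR * years
    speed = GROWTH ** years            # normalized so that today = 1.0
    return chain / speed

CHAIN_NOW = 200_000_000                # roughly 200 GB of history, in kB

# With large blocks the sync burden keeps rising for years; with small
# ones the assumed hardware growth wins almost immediately.
assert relative_sync_time(1000, CHAIN_NOW, 5) > relative_sync_time(1000, CHAIN_NOW, 0)
assert relative_sync_time(100, CHAIN_NOW, 5) < relative_sync_time(100, CHAIN_NOW, 0)
```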

Quote
since even nodes which run 0.16 or above is barely above 50%[1] (https://luke.dashjr.org/programs/bitcoin/files/charts/software.html)
It's unclear; we know that the numbers are somewhat distorted by spynodes which commonly claim to be old versions, but we don't know by how much. I know that on nodes where I've aggressively banned spynodes I see a much newer node mix than Luke does.

Regardless, in the future nodes could eventually decide to stop relaying unconfirmed transactions to old nodes... it's pretty backwards compatible to do so. But that's getting way ahead of things...
975  Bitcoin / Development & Technical Discussion / Re: Anti-pool algorithm PoW on: December 19, 2018, 05:21:47 AM
Quote
sign_nonce = mod(block_header_hash, N)
R = X of mulPoint(sign_nonce, G)
S = mod(invert(sign_nonce, N) * (block_header_hash + (K * R)), N)
Quote
In such scheme the miners will consolidate their computional powers but only within the boundaries of the server room or a small group of trusted participants, otherwise coins from the COINBASE transaction address can be immediately stolen by an untrusted miner.

I'm wondering if things went like this? "Problem: ECDSA signing isn't derandomized. So I know...! I'll invent some novel crypto. Nothing could go wrong."

I'm guessing your central idea isn't to have used completely insecure crypto, thus permitting the first third party in their lightcone to steal the funds?

If so you might want to revise your proposal, perhaps minimizing the number of novel cryptographic structures you attempt to construct.

Aside: the response to no pool mining isn't "I guess I won't pool mine", it's "I guess we'll fund Bob to build us a big centrally controlled mining farm to mine for us and share the income".
976  Bitcoin / Development & Technical Discussion / Re: A new idea for node reward on: December 17, 2018, 10:34:38 PM
Currently, there are  9722 bitcoin nodes.

No, there aren't. The number is more like 60k or so.

Quote
It seems as if this number has increased
It appears to be lower than it was in 2011.

Quote
which is good since the bitcoin value depends on the number of nodes according to metcalfe’s law.

This is nonsense.

Quote
When a block is added, the full node would use the hash, combine it with the node’s ip address and calculate a new hash. If the new hash has some kind of special property like a certain number of 0’s in the hash, then you submit the result and a send to address to the miner and get a portion of the miner reward. You have to respond within a certain time. If there’s more than one winner, the reward should be split.

All this would do is fund people to pretend to run zillions of nodes on as much address space as they can obtain.
977  Bitcoin / Development & Technical Discussion / Re: Lightning network proposal, use U2F tokens as hardware wallets on: December 16, 2018, 08:26:09 PM
I would be strongly opposed to adding support for secp256r1-ecdsa to Bitcoin, particularly for this rather shallow application.  r1 is slow to work with and now officially recommended against by the NSA; normal ECDSA cannot be batch verified and cannot easily be used as a threshold or adaptor signature.

There are many hardware wallets out there already, and U2F devices do not make for a good hardware wallet because they lack a display so that users can have any idea what they're signing (so they provide limited protection against a hacked computer).

If there is need for a U2F like device that works with bitcoin they could as easily be produced as ones that don't (including dual mode devices) but there just doesn't appear to be enough demand for that... accordingly, there isn't enough demand to add inferior cryptography to Bitcoin.
978  Bitcoin / Development & Technical Discussion / Re: Why use RFC6979 and is there any downsides? on: December 14, 2018, 08:22:36 PM
Achow's post seems to say that there is some kind of danger in random K because of 'getting more information out' -- there isn't, in and of itself: to the extent that different K's "get more information out", so does signing multiple messages.

The reason for 6979 is that reliably generating random numbers is hard, and it turns out that over and over and over again applications screw it up.  Worse, you can't easily tell when a random number is screwed up, because it's random: one looks as good as any other... unless the screwup is so bad that they're just constantly repeating.  But basically any form of predictability of K will break the signature, even ones like being linearly related to the Ks in other signatures.

You still need to generate a secure random number to get your private keys, but you only need to be successful at that _once_.  RFC6979 is really just a way to safely reuse that ONE random number you successfully got (the private key) for all your further transactions... rather than needing a constant influx of new random values, each one a potential point where a software error could cause the loss of your keys.

The fact that the procedure also gives the same signature every time for the same key/message if you don't use the optional extra-data input to 6979 is a bonus that makes software testing a lot easier,  but it is not the source of the advantage of this approach itself.

This distinction is important because multiparty schnorr (or mpc-ecdsa) signing can use RFC6979 but MUST be constructed in a way where the same signature is not repeated even for the same message. Smiley
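To see why a predictable or repeated K is fatal, here's the classic two-signatures-one-nonce key recovery, as a toy pure-Python secp256k1 (educational only, never production code; the key, nonce, and message values are made up):

```python
# Reusing a nonce K across two ECDSA signatures leaks the private key
# by simple algebra -- this is the failure mode RFC6979 defends against
# by deriving K deterministically from the key and message.

P = 2**256 - 2**32 - 977                # secp256k1 field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):                          # affine point addition
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0:
        return None                     # point at infinity
    if a == b:
        l = (3 * a[0] * a[0]) * pow(2 * a[1], -1, P) % P
    else:
        l = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (l * l - a[0] - b[0]) % P
    return (x, (l * (a[0] - x) - a[1]) % P)

def mul(k, pt):                         # double-and-add scalar mult
    r = None
    while k:
        if k & 1:
            r = add(r, pt)
        pt = add(pt, pt)
        k >>= 1
    return r

def sign(d, m, k):                      # m: message hash as an integer
    r = mul(k, G)[0] % N
    s = pow(k, -1, N) * (m + r * d) % N
    return r, s

d, k = 0x1234567890ABCDEF, 0x1337       # toy key and (reused!) nonce
m1, m2 = 111, 222
r, s1 = sign(d, m1, k)
_, s2 = sign(d, m2, k)

# The attacker's algebra: k = (m1-m2)/(s1-s2), then d = (s1*k - m1)/r.
k_rec = (m1 - m2) * pow(s1 - s2, -1, N) % N
d_rec = (s1 * k_rec - m1) * pow(r, -1, N) % N
assert (k_rec, d_rec) == (k, d)
```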
979  Bitcoin / Development & Technical Discussion / Re: New Way to Generate Bitcoin Addresses! on: December 09, 2018, 07:16:45 AM
This site appears to connect back to the server every time a new key is entered...


People should NEVER use any key management webpage or webapp.
980  Bitcoin / Bitcoin Technical Support / Re: HUGE PROBLEM, LOST MASSIVE AMOUNT OF BTC. 100 BTC is Reward on: November 30, 2018, 02:42:33 AM
I posted the standard recovery procedure for the kind of corruption described here.  It seems to have been ignored.

I would take a substantial bet that the OP here is either scamming or is going to get scammed.

As an aside, it is not safe to use potentially malicious wallet.dat files.  Anyone who gets sent a wallet.dat from a third party should take great care in using it. I would not be shocked if it were possible to get arbitrary code execution from a wallet.dat file.  If a bad guy found a way to do that, the best way to exploit that discovery would be to pose as someone who corrupted their wallet and encourage people to try to 'scam' them by getting a copy of their wallet, or to help them with a promise of an outsized reward.