Bitcoin Forum
  Show Posts
1501  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 03, 2016, 06:55:22 PM
A vin can properly be signed by multiple signers, producing multiple signatures, each with a different sighash. What I assumed incorrectly was that all signers of a specific vin have to use the same sighash. Can you show me where it says that the same vin can be signed by different signers using different sighashes?
Generally the "says" is that you can signal it. Not that the Bitcoin protocol is always a paragon of perfect design, but it should generally not be possible to signal something that won't work-- when that's possible it reliably leads to bugs, so it shouldn't be allowed without good reason. Being able to control the sighash per signature is pretty useful.

Fair enough; I just wanted to make sure you weren't confused in a more profound way, but I follow your thinking now.
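
To illustrate the per-signature sighash point, here is a minimal sketch (Python, not consensus code; hash_tx_for and ecdsa_verify are hypothetical placeholders for the real transaction-digest and ECDSA routines): each signature push is the DER encoding with the sighash byte appended, so the digest each signer commits to is chosen per signature, not per input.

Code:
# Minimal sketch, not consensus code: a script signature is DER || sighash byte,
# so every signer of the same input can pick its own sighash type.
SIGHASH_ALL          = 0x01
SIGHASH_NONE         = 0x02
SIGHASH_SINGLE       = 0x03
SIGHASH_ANYONECANPAY = 0x80

def sighash_type(sig: bytes) -> int:
    """The sighash flag is the final byte appended to the DER signature."""
    return sig[-1]

def check_multisig(pubkeys, sigs, hash_tx_for, ecdsa_verify):
    """Assumes signatures are already matched to their keys; hash_tx_for(flag)
    and ecdsa_verify(pub, digest, der_sig) are hypothetical placeholders."""
    for pub, sig in zip(pubkeys, sigs):
        digest = hash_tx_for(sighash_type(sig))  # digest depends on *this* signature's flag
        if not ecdsa_verify(pub, digest, sig[:-1]):
            return False
    return True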

Quote
I am making iguana able to act as a lossless codec that uses half the space with the sigs, and able to purge the sigs for a 75% reduction in size; I estimate about 15GB for the dataset without sigs. Do you see any reason for a non-relaying node to keep around sigs that have already been verified?
There isn't-- though Bitcoin Core can already do this. It can run a full node, with transaction and block relaying (but not serving historical blocks), in less than 2GB of space; this isn't default behavior yet because we have not yet implemented mechanisms for nodes to retain and locate random chunks of the history, so that nodes can collectively support other nodes bootstrapping.
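
For reference, the relevant knob is pruning (a minimal bitcoin.conf sketch; prune takes a target size in MiB, 550 being the minimum):

Code:
# bitcoin.conf -- a pruned full node: validates and relays blocks/transactions,
# but does not serve historical blocks
prune=550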
1502  Bitcoin / Development & Technical Discussion / Re: Anyone can build bitcoin/secp256k1 on Windows? on: March 03, 2016, 06:48:49 PM
It used to be very easy to build it using mingw, but there have been some changes and now I cannot build it anymore.
At least not out of the box, as it requires "autoreconf" command, which my mingw+msys don't have.

Does anyone know an easy way to build the lib on Windows host, using msys/mingw?

autoreconf just runs autoconf, automake and libtoolize.  It's included in the more recent versions of autotools (since 2012 or so). You can run them manually.
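
If your environment lacks autoreconf entirely, the manual sequence is roughly the following (an illustrative sketch; the exact steps and flags depend on the project and your autotools version):

Code:
# approximately what autoreconf would do, run from the source root
libtoolize --copy
aclocal
autoheader
autoconf
automake --add-missing --copy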
1503  Bitcoin / Development & Technical Discussion / Re: Study Suggests Bitcoin Needs Major Changes to Scale UP on: March 03, 2016, 06:43:21 PM
Where did the data for that chart come from? I'm interested in examining it in greater detail, along with seeing the full set of assumptions and parameters.
The specifics are rubbish. Unfortunately, making an accurate estimate for the system involves things that are constantly changing and can't be measured-- and the story it tells doesn't hold when mining is very centralized.

The point I was attempting to make was that the general shape (which I believe is robust across parameters, excluding ones which centralize mining) is such that if you can't be sure you have the parameters right, you want to be fairly confident that any error is in the too-high direction, because being too low (relative to the network geometry) is much worse than being too high. This has also been demonstrated in practice somewhat: there was a "liquidcoin" altcoin created with fixed difficulty, and as its hashrate increased its orphan rate went through the roof-- eventually it stopped converging completely.

(Though I do agree with the general sentiment that, if we were taking on a given amount of additional risk, it's more attractive to take it in the form of decreased intervals than increased blocksize; but unfortunately decreased intervals involve strictly more uncertainty.)
1504  Bitcoin / Hardware / Re: Mining history... some odd avalon boards on: March 03, 2016, 08:53:44 AM
The wires are for overclocking. Many people did really serious mods on those things back in the day; every GH/s was worth a fortune!
Interestingly though, I got my avalon batch1 to about 82GH/s with just firmware twiddling. I wonder what people were getting with the voltage mods?

I didn't try because it was too valuable to risk frying.
1505  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 03, 2016, 07:08:41 AM
I noticed it as I found my code was assuming only a single SIGHASH type per vin, but clearly there can be multiple signers.
That is a concerning misunderstanding. I'm not sure how you'd think that-- the sighash in use is part of the ECDSA signature (an input to checksig).
1506  Bitcoin / Bitcoin Discussion / Re: Mempool is now up to 25.5 MB with 22,200 transactions waiting. on: March 01, 2016, 09:44:31 PM
The mempool was nearly empty, at around 1-1.5 MB, two days ago.
No it wasn't-- maybe on that site, but only due to it filtering things.

Because of ongoing low-level spam attacks, any long-running 0.12 node has had a 300MB mempool (which is the limit) for months now. There are about 800 MBytes of unconfirmed valid transactions that I'm aware of...

None of this is especially interesting: one should always assume that the number of transactions at a fee of 0 is effectively infinite.

The spammers moved to paying more than 1e-8 BTC per byte, which is still much lower than most ordinary transactions pay, but high enough to move the needle on Tradeblock... this lets people spin a bunch of FUD. Meanwhile, for most people transactions continue to function like normal: a total non-event.
1507  Bitcoin / Development & Technical Discussion / Re: Why blocktime decrease instead of blocksize increase is not discussed? on: March 01, 2016, 05:04:15 PM
This was asked, and answered in another recent thread: https://bitcointalk.org/index.php?topic=1378112.msg14031820#msg14031820
1508  Bitcoin / Development & Technical Discussion / Re: Signature aggregation for improved scalablity. on: February 29, 2016, 10:03:27 PM
Therefore, the total best-case validation time would be the time to compute e(g, sigma) and the product of n pre-calculated pairings.
Great point! That also precludes gaining a product-of-pairings speedup (though, as I mentioned, efficient fraud proofs would preclude that anyway). Products in the target group (whose elements are going to be 3 kilobits or so) are not ultra-fast either, so that is not totally negligible. It would also mean that the 3-kbit intermediate elements would need to be cached, instead of just a set. Memory usage for our signature caches is already a consideration.

My thinking there is also somewhat influenced by the fact that, to get the privacy or anti-reorg benefits, transactions need to not be broadcast in advance to some degree. But you're absolutely right that caching is not totally lost.
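
To make the caching trade-off concrete, here is a toy sketch (my own illustration; the pairing and group operations are passed in as placeholders rather than being any real library's API): the cache must hold a ~3-kilobit target-group element per signature, and verifying the aggregate is one pairing for e(g, sigma) plus a product over the cached elements.

Code:
class AggregateVerifier:
    """Toy sketch of BLS-style aggregate verification with a per-signature
    pairing cache.  The pairing/group operations are supplied by the caller."""
    def __init__(self, pairing, hash_to_g2, g1_generator, gt_one, gt_mul):
        self.pairing, self.hash_to_g2 = pairing, hash_to_g2
        self.g1, self.gt_one, self.gt_mul = g1_generator, gt_one, gt_mul
        self.cache = {}   # (pubkey, msg) -> target-group element, ~3 kilobits each

    def precompute(self, pubkey, msg):
        """Cache e(pk, H(m)) when a signature is first seen (e.g. at mempool
        acceptance), so later block validation can skip the expensive pairing."""
        self.cache[(pubkey, msg)] = self.pairing(pubkey, self.hash_to_g2(msg))

    def verify_aggregate(self, agg_sig, pubkey_msg_pairs):
        # e(g, sigma) must equal the product over all i of e(pk_i, H(m_i)).
        lhs = self.pairing(self.g1, agg_sig)
        rhs = self.gt_one
        for pk, m in pubkey_msg_pairs:
            gt = self.cache.get((pk, m)) or self.pairing(pk, self.hash_to_g2(m))
            rhs = self.gt_mul(rhs, gt)   # products in the target group aren't free either
        return lhs == rhs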
1509  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: February 27, 2016, 09:59:14 PM
Quite astonishing. Especially since gmaxwell is a super intelligent individual. When I talk to people about Bitcoin here in the UK I often point out that the best software developers of our industry work on BTC, and one of the best is gmaxwell. I am not sure why he navigates himself into this situation.
Oh come now. TPTB made a totally off-topic post-- it had nothing to do with Bitcoin at all; its only connection to the thread in question was that the Rust language had been mentioned there, and TPTB made a one-sentence comment that he previously had complaints about Rust but didn't remember them-- and it got moved to another subforum, where, in fact, I wrote a polite reply to his commentary on audio codecs.

TPTB's response was to go on a campaign of insults and allegations, spreading conspiracy theories and such, instead of just looking in his post history to see that the post was still there. His immediate recourse to attacks creates a hostile environment. Most thoughtful people have better things to do than to deal with that, myself included. If you'd like to continue to share a forum with "the best software developers of our industry", then you should step up and ask people like TPTB to relax on the attacks, not shake your head in agreement with him when someone else doesn't completely lie down and take a pile of abuse.
1510  Bitcoin / Development & Technical Discussion / Re: Study Suggests Bitcoin Needs Major Changes to Scale UP on: February 27, 2016, 08:55:12 PM
It wasn't my assumption, it was something that appeared to have been calculated in the posted paper.
The paper is trying to be "conservative" about how much capacity increase can come from re-parameterization... meaning they leap for the largest possible increases, without regard to many considerations-- e.g. how far the existing system can be pushed before it certainly goes off the rails. They use this approach so they can then argue that even that much isn't enough-- a conclusion I strongly agree with.

That doesn't mean that those parameters are actually workable, however.

In terms of the interblock time, decreases have dramatic effects once you consider an adversarial setting-- honest miners end up diluted, working on many forks more often, while a high-hashpower attacker stays focused. Decreased interblock times also increase the pressure to consolidate mining into a few pools (or one) by making it more profitable to do so. Many of the tools to increase the reliability of shorter interblock times, like GHOST, worsen other problems, like selfish mining, that crop up once you are considering rational actors (instead of just honest/altruistic ones).

If you chart out a simulation of how long a user has to wait for (e.g. 99.9999%) confidence that their transaction won't be reversed, as a function of the expected interblock time, you end up with a chart that looks like this (ignore the scaling):

[chart omitted: time-to-confidence vs. expected interblock time]
(Flip vertically for 'wait time goes up'.) The scaling of this depends on factors like network hashpower distribution and latencies, which are hard to measure and which change. The key takeaway is the derivative: if the time is somewhat longer than optimal for the network reality, it has a fairly small effect on time-until-security... but if it's shorter than optimal it rapidly destroys security. This is why it's important to be relatively conservative with the interblock interval.
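
Here is a toy model of that shape (my own illustrative sketch, not the simulation behind the chart): it uses the catch-up probability from section 11 of the whitepaper, and crudely penalizes honest miners with a stale rate of roughly propagation-delay/interval while a centralized attacker wastes nothing. Wait time grows only linearly when the interval is longer than necessary, but blows up as the interval approaches the propagation delay.

Code:
import math

def catch_up_prob(q, z):
    """Probability an attacker with hashrate share q ever catches up from z
    blocks behind (the calculation from section 11 of the Bitcoin whitepaper)."""
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * q / p
    pmf = math.exp(-lam)              # Poisson pmf at k = 0
    total = 1.0
    for k in range(z + 1):
        total -= pmf * (1.0 - (q / p) ** (z - k))
        pmf *= lam / (k + 1)          # advance the pmf to k + 1
    return total

def seconds_to_confidence(q, interval, delay, target=1e-6):
    """Toy model: honest miners orphan roughly delay/interval of their work,
    the (centralized) attacker orphans nothing.  Returns seconds of waiting
    until the reversal probability drops below `target`."""
    stale = delay / interval
    if stale >= 1.0:
        return float("inf")
    honest_eff = (1.0 - q) * (1.0 - stale)     # effective honest hashpower
    q_eff = q / (q + honest_eff)               # attacker's effective share
    if q_eff >= 0.5:
        return float("inf")
    z = 1
    while catch_up_prob(q_eff, z) > target:
        z += 1
        if z > 2000:                           # give up; practically unreachable
            return float("inf")
    return z * interval

# 30% attacker, ~10 s effective propagation delay, various block intervals:
for interval in (15, 30, 60, 150, 300, 600, 1200):
    print(interval, seconds_to_confidence(0.3, interval, 10.0))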
1511  Bitcoin / Development & Technical Discussion / Re: nSequence and opt-in ReplaceByFee: difference between maxint and maxint-1? on: February 27, 2016, 08:41:47 PM
Please try to get that code corrected. Lots of people copy and paste, and the pasted code can live for a long time.
1512  Other / Off-topic / Re: Ogg Opus on: February 27, 2016, 07:34:18 PM
I am not an expert on streaming media and have just begun my research, but it seems to me that your quoted benchmark is assuming many seeks will be done on the stream. But there are cases where the user wants to skip only once or twice into a song as they are sampling the music for the first time, which is my use case. For that case, the lack of an index is, afaics, horrific, because there will be the latency of the round trips required to optimally locate the seek position (by a bisection sampling method), and wasted bandwidth as well.
No, it is assuming the user will make exactly one seek. Otherwise, it would assume caching.  The expected accesses given are what you would get for a user that makes only a single seek.

Quote
You could have instead made the index optional.
This is also not a free choice. In existing formats with optional indexes, _no_ commonly used software is able to correctly seek when the "optional" index is not present. Many have no support for that case at all, and what has support in theory doesn't work due to inadequate testing. Doing so would also increase the overhead of the format by 20% or so (as in, if the overhead for a file was 100k, it ends up being 120k). As mentioned, accurate indexes are not small-- and many things compromise by just not providing accurate indexes, which then leaves applications linearly scanning or not permitting sample-accurate seeking.

Quote
Seems you are excluding use cases.
What use case is excluded? Moreover-- what harm is there in having a streaming container that is actually good at streaming? People can use other containers if they need other properties. (Though, ironically, the best-supported non-streaming container widely used with Opus-- MKV-- which has 'optional' indexes, on average requires transferring more data to seek a single time than Opus in Ogg does, due to the need to transfer the whole index to seek only once.) The height of flexibility is accepting that not every goal can be optimally satisfied in a single design.

At the end of the day there are always trade-offs. The design of Ogg makes seekable streaming encoding possible, something that isn't possible at all in alternatives with indexes. As a result the seeking is less efficient, but with a good implementation only _slightly_-- e.g. if you seek 0 to 3 or so times, it'll still transfer less data than a format with an index. Many people are surprised at how efficient it is because they don't realize how much better secant/Brent's-method root-finding is than bisection. I think the trade-off is a very good one, but if for some reason it doesn't work for your applications you're free to use another container; MKV support for Opus is well specified. Someone caring about different things than you do does not make them bad or stupid, I hope you realize that, especially when their preference hardly hurts yours. I kinda feel like you're missing out on an opportunity to be amazed at how efficient root finding can be and learn something new and surprising to you.
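
A toy demonstration of that point (my own illustration, not opusfile's actual algorithm): on a time-to-offset mapping that wobbles locally but is close to linear over long spans, secant-style interpolation between the bracketing probes lands within a second of the target in a handful of probes, while bisection needs roughly log2(duration/tolerance) of them.

Code:
import math, random

def make_time_at(duration=90000.0, size=1.2e9):
    """Fake 25-hour VBR stream: time as a function of byte offset, wobbly
    locally but close to linear over long spans (like real VBR audio)."""
    amp = [2.3, 1.1, 2.9, 0.7, 1.8, 2.4]
    def time_at(offset):
        x = offset / size
        wobble = sum(a * math.sin(2 * math.pi * (k + 1) * x) for k, a in enumerate(amp))
        return duration * x + 20.0 * wobble       # strictly increasing
    return time_at, size, duration

def secant_seek(time_at, size, target, tol=1.0):
    """Interpolate between the bracketing probes; duration and file size are
    assumed known from the headers, so the endpoints are free."""
    lo, lo_t, hi, hi_t = 0.0, time_at(0), size, time_at(size)
    probes = 0
    while True:
        guess = lo + (target - lo_t) / (hi_t - lo_t) * (hi - lo)
        t = time_at(guess)
        probes += 1
        if abs(t - target) <= tol:
            return probes
        if t < target:
            lo, lo_t = guess, t
        else:
            hi, hi_t = guess, t

def bisect_seek(time_at, size, target, tol=1.0):
    lo, hi, probes = 0.0, size, 0
    while True:
        mid = (lo + hi) / 2.0
        t = time_at(mid)
        probes += 1
        if abs(t - target) <= tol:
            return probes
        if t < target:
            lo = mid
        else:
            hi = mid

time_at, size, duration = make_time_at()
rng = random.Random(42)
targets = [rng.uniform(1.0, duration - 1.0) for _ in range(1000)]
sec = [secant_seek(time_at, size, t) for t in targets]
bis = [bisect_seek(time_at, size, t) for t in targets]
print("interpolation: avg %.2f probes, max %d" % (sum(sec) / len(sec), max(sec)))
print("bisection:     avg %.2f probes, max %d" % (sum(bis) / len(bis), max(bis)))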

Quote
Hey I understand what it is like to deal with trolls and in that case I would support the swift action since trolls only aim to disrupt, but I wasn't trolling you.
I have no clue what you're going on about here. I responded politely to your message-- are you referring to my non-amusement at the wall of insults, allegations, and personal attacks that you cross-posted to several threads when you couldn't find your own post after it had been moved to off-topic? I apologize if my abrasive response makes you feel threatened enough to need to expound on your best racist theories; but I have had several of our best posters tell me in the past that they stopped using Bitcoin Talk because of harassment by you in particular, and it's certainly a situation I've felt myself. You may not intend to troll; but your radical jumps into insults and attacks, combined with your persistence and the stream-of-consciousness dives into incoherent technobabble arguments which you sometimes present, are effectively indistinguishable from trolling. I and others have expended a lot of time trying to encourage better behavior under your now-banned past identities, without success. All that is left is to say that if you are unable to behave yourself and not act like a troll, you will be banned. Otherwise you won't be. The choice is yours.
 

1513  Economy / Economics / Re: Economic Totalitarianism on: February 27, 2016, 07:07:40 AM
I am adding now the fact that I sent Gregory a private message about this yesterday or the day before that, before he posted about this new invention. I also note that weeks ago I posted to Sean Bowe on the Zcash forum the importance of zk-snarks for smart contracts.

I also note that I am the one who recently exposed that Segregated Witness is a Trojan Horse designed to enable Blockstream to take over Bitcoin by enabling them to version the block chain with soft forks. They are trying to push their technologically flawed sidechains in through the back door. Thus it is not surprising that Gregory has deleted my post (and somehow it didn't even appear in my Inbox as it normally should, so he must have some super-moderator powers on this forum).

I am sorry, folks, to inform you that the corruption in Bitcoin runs very deep, to the core.

Okay Gregory send your hitmen. I am ready.

Note this copy is being copied all over the place so it can't be deleted.

Your post wasn't deleted. It was entirely off-topic (going on about an audio format in a thread about a cryptographic protocol-- it had nothing to do with Bitcoin) and ended up getting moved to the off-topic subforum: https://bitcointalk.org/index.php?topic=1378533.0 . I even responded to it.

The only private message you've sent me is, quoted in its incoherent entirety,

Quote
multi-sig opens a 51% attack security hole
« Sent to: gmaxwell on: February 25, 2016, 07:19:06 AM »
https://bitcointalk.org/index.php?topic=1364951.msg14002317#msg14002317

But I guess I should be glad you've exposed yourself as a shameful liar. Again.

I designed the protocol used here in _2011_; Sean has been working on this implementation since last November-- even the git history goes back to Dec 4th... These sorts of things are not accomplished in days. You owe Sean and the members of the forum a retraction.

You've been banned from the technical sub-forum in the past under your prior identities. If you don't want to be again, please get some control over your behavior.
1514  Other / Off-topic / Re: Ogg Opus on: February 27, 2016, 04:31:33 AM
The container is very efficiently seek-able over the network, in fact. But your implementation must be sufficiently intelligent.

Here are some benchmarks from the opusfile library, on a 25-hour-long variable-bitrate audio recording, performing 1000 jumps to exact sample positions with no caching:

Total seek operations: 1873 (1.873 per exact seek, 4 maximum).

So an average of less than two operations, and in the worst case 4-- meaning that even with a 100ms RTT a worst-case seek will not cause a noticeable delay. (And obviously, if it cached, it would be much better.) (In other words, these figures involve absolutely no caching or shared state between the 1000 trials; they're completely and totally independent... if your application only ever did a single seeking operation it would perform an expected 1.873 random reads to do it.)

This test is part of the opusfile tests directory; you can perform it yourself on your own files. Even if you constructed a malicious file that looked nothing like actual VBR audio, the worst it can do is bisection-- so a logarithmic number of operations (I believe the method we use can formally be proven to never perform more than 2x the number of probes of plain bisection on any input, no matter how crazy). It turns out that over large timescales VBR isn't very variable: the variations average out, and once your search space is small enough that they don't, you've already pretty much landed at the solution.

And this requires no indexes, which would require the file to be written in multiple passes or to only be seekable when it's "finished"-- a live stream is seekable while it's being recorded. You can truncate or even extract a chunk of the middle of a stream with basically no more complexity than range requests (and putting the header at the front, if you're extracting from the middle), and the resulting output is perfectly seek-able too. Corruption to the file will not break its seek-ability (except, perhaps, for specifically malicious corruption that also fixes up a checksum). And it does this with <1% overhead.

I think the seeking performance, given a good seeking implementation, is pretty good, and it's often more efficient than other less flexible containers even when they have indexes-- because to keep their indexes reasonably sized they're not high resolution-- and getting them requires seeking to the end or a rewrite of the file. To the extent that an average near 2 is less good than 1 might be with a perfect index, that is a cost for the streaming functionality, and I think a reasonable one.

I didn't design the container, but if I did-- I might have only added an additional back-reference or two at each page; I wouldn't change how seeking works.

Tim hasn't really had anything to do with Rust-- I had some infinitesimally small influence on the language (I lobbied successfully for overflow of ordinary signed types to be defined as an error). Rust has some nice properties for cryptocurrency infrastructure: in particular, it has no overhead and deterministic operation, with high levels of safety enforced at compile time. These things matter for decentralized cryptocurrency, since speed is also a security consideration.
1515  Bitcoin / Development & Technical Discussion / Re: The first successful Zero-Knowledge Contingent Payment on: February 27, 2016, 12:50:16 AM
Do you have example code for how the sudoku was verified in zero knowledge? If you are using libsnark, I assume it is in C, which is perfect for me.
Linked near the bottom of the article now. The client is a mixture of Rust and C++.

Quote
However each content type will need to have a libsnark circuit derived from C code. How to verify subjective things? I guess that is a problem for another day. Could it be verified that what is being delivered is a valid privkey? If that privkey had utxo in a 2of2 multisig where the buyer controlled the other privkey, then I think it enables guaranteed atomic swaps.
Sure it could, but there are other ways to do atomic swaps that are likely much more efficient.

Quote
The main question is when will this hit mainnet?
It's on mainnet _now_. The transactions today were done on mainnet. Nothing new was needed in Bitcoin for this.
1516  Bitcoin / Development & Technical Discussion / The first successful Zero-Knowledge Contingent Payment on: February 26, 2016, 10:40:35 PM
I am happy to announce the first successful Zero-Knowledge Contingent Payment (ZKCP) on the Bitcoin network.

ZKCP is a transaction protocol that allows a buyer to purchase information from a seller using Bitcoin in a manner which is private, scalable, secure, and which doesn’t require trusting anyone: the expected information is transferred if and only if the payment is made. The buyer and seller do not need to trust each other or depend on arbitration by a third party.

Imagine a movie-style “briefcase swap” (one party with a briefcase full of cash, another containing secret documents), but without the potential scenario of one of the cases being filled with shredded newspaper and the resulting exciting chase scene.

An example application would be the owners of a particular make of e-book reader cooperating to purchase the DRM master keys from a failing manufacturer, so that they could load their own documents on their readers after the vendor's servers go offline. This type of sale is inherently irreversible, potentially crosses multiple jurisdictions, and involves parties whose financial stability is uncertain-- meaning that both parties either take on a great deal of risk or have to make difficult arrangements. Using a ZKCP avoids the significant transactional costs involved in a sale which can otherwise easily go wrong.
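
At a very high level, the flow is along these lines (a simplified sketch of the publicly described protocol, with a toy cipher and the zero-knowledge proof left as a placeholder -- the real construction, using libsnark, is in the linked article):

Code:
import hashlib, os

def sha256(b): return hashlib.sha256(b).digest()

def stream_xor(key, data):
    """Toy stream cipher for illustration only (not the scheme in the article)."""
    out, ctr = bytearray(), 0
    while len(out) < len(data):
        out += sha256(key + ctr.to_bytes(8, "big"))
        ctr += 1
    return bytes(x ^ y for x, y in zip(data, out))

# --- Seller ---
solution = b"...the sudoku solution being sold..."
key = os.urandom(32)
ciphertext = stream_xor(key, solution)
key_hash = sha256(key)
# The zero-knowledge proof (libsnark in the real system) convinces the buyer
# that `ciphertext` decrypts, under the preimage of `key_hash`, to a *valid*
# solution -- without revealing the solution or the key.  Placeholder:
proof = b"<zk proof that decrypt(preimage(key_hash), ciphertext) is a valid solution>"

# --- Buyer ---
# 1. Verify the proof, then fund a hash-locked output, e.g. a script roughly of
#    the form: OP_SHA256 <key_hash> OP_EQUALVERIFY <seller_pubkey> OP_CHECKSIG
#    (plus a timeout path refunding the buyer if the seller never claims it).
# 2. The seller can only claim the payment by publishing `key` on the chain.
# 3. Once `key` appears in the spending transaction, the buyer decrypts:
recovered = stream_xor(key, ciphertext)   # XOR stream: the same operation decrypts
assert recovered == solution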

In today’s transaction I purchased a solution to a 16x16 Sudoku puzzle for 0.10 BTC from Sean Bowe, a member of the Zcash team, as part of a demonstration performed live at Financial Cryptography 2016 in Barbados. I played my part in the transaction remotely from California.

The transfer involved two transactions:

    8e5df5f792ac4e98cca87f10aba7947337684a5a0a7333ab897fb9c9d616ba9e
    200554139d1e3fe6e499f6ffb0b6e01e706eb8c897293a7f6a26d25e39623fae

Almost all of the engineering work behind this ZKCP implementation was done by Sean Bowe, with support from Pieter Wuille, myself, and Madars Virza.


Read more, including technical details and links to the software at https://bitcoincore.org/en/2016/02/26/zero-knowledge-contingent-payments-announcement/
1517  Bitcoin / Development & Technical Discussion / Re: Signature aggregation for improved scalablity. on: February 26, 2016, 07:19:19 PM
Yea, I spoke to jl2012-- it's interesting how a bit of ignorance can be brilliance: He was previously unaware of the linear cancellation that was keeping this from working, so he just assumed it would work, when others had previously discarded the idea.  I still can't figure out why he wasn't jumping up and down excited to do it-- it's a nice improvement with little downside.
1518  Bitcoin / Development & Technical Discussion / Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay on: February 26, 2016, 05:47:29 PM
The warnings are disabled for whitelisted peers if the whitelistrelay flag is set.
Right. Unfortunately, Bitcoinj sends unsolicited transactions. The developers of BTCD have been complaining about that for a while, and I'd like to stop supporting unsolicited txn... but while the behavior is common we didn't want to add misbehavior for it.

Do you care to share the raw data? I would like to independently verify your claim.

I have data from the confirmation test run,

        RX packets 169821144  bytes 83997523200 (78.2 GiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 269163479  bytes 293982867649 (273.7 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        RX packets 171751847  bytes 84478658144 (78.6 GiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 271892507  bytes 296796582415 (276.4 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
and
        RX packets 175622574  bytes 85220192328 (79.3 GiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 277631356  bytes 303320529705 (282.4 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        RX packets 175824690  bytes 85283702303 (79.4 GiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 277920978  bytes 303653216190 (282.7 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

These were 6-hour runs, with the node pre-primed for an hour before the run began.

As tiernolan points out, the results aren't surprising-- if you go look in my reddit comments you'll see I computed similar figures for this from first principles before measuring.

Nice technical post, but why the drama at the end?
[...]
Aside: I just went to search for that PR to bitcoin classic but couldn't find it. Has it been deleted?

Took me a bit to find it: https://github.com/bitcoinclassic/bitcoinclassic/pull/16 -- I'm not trying to stir drama; I was earnestly surprised, as jtoomim had commented extensively there.

since every peer has to receive every transaction eventually, it seems that Transactions * Peers is the absolute minimum number of transaction sends you'd need. Am I missing something trivial here?
You are!

Imagine if the network were wired such that if a message was relayed along connection 0 on each peer it would form a Hamiltonian cycle and reach every node in the network. Thus, each peer would receive and send each transaction exactly once. This shows the bound cannot be transactions*peers.

Now, this is obviously pretty fragile-- and our network doesn't form a nice complete cycle. So if you want to check whether your peers actually have all the transactions you have, does that reintroduce transactions*peers (*the ratio of txid size to tx size) traffic? No. Set reconciliation lets you communicate a set difference with size proportional to the difference; see my ELI5 description of a computationally efficient set reconciliation scheme: https://www.reddit.com/r/btc/comments/43iup7/mike_hearn_implemented_a_test_version_of_thin/czirz2p

The difference is, hopefully, almost zero size-- so the communication is small even if the number of transactions is large.
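
For a flavor of how that can work, here is a toy sketch (my own illustration using an invertible-Bloom-lookup-table style structure; it is not the specific scheme described in the linked post): each side builds a small sketch sized to the expected difference, one side sends its sketch, and subtracting the two lets the receiver peel out exactly the items each side is missing.

Code:
import hashlib, random

def _h32(data):
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

class IBLT:
    """Toy invertible Bloom lookup table over 64-bit ids: its size is
    proportional to the expected set difference, not to the set size."""
    def __init__(self, cells, hashes=3):
        self.m, self.k = cells, hashes
        self.count = [0] * cells
        self.keysum = [0] * cells
        self.chksum = [0] * cells

    def _cells(self, x):
        return {_h32(("cell%d:%d" % (i, x)).encode()) % self.m for i in range(self.k)}

    def insert(self, x, sign=1):
        for c in self._cells(x):
            self.count[c] += sign
            self.keysum[c] ^= x
            self.chksum[c] ^= _h32(("chk:%d" % x).encode())

    def subtract(self, other):
        out = IBLT(self.m, self.k)
        out.count = [a - b for a, b in zip(self.count, other.count)]
        out.keysum = [a ^ b for a, b in zip(self.keysum, other.keysum)]
        out.chksum = [a ^ b for a, b in zip(self.chksum, other.chksum)]
        return out

    def decode(self):
        """Peel 'pure' cells; returns (only_in_self, only_in_other)."""
        mine, theirs = set(), set()
        progress = True
        while progress:
            progress = False
            for c in range(self.m):
                if abs(self.count[c]) == 1 and self.chksum[c] == _h32(("chk:%d" % self.keysum[c]).encode()):
                    x, sign = self.keysum[c], self.count[c]
                    (mine if sign > 0 else theirs).add(x)
                    self.insert(x, -sign)
                    progress = True
        return mine, theirs

# Two mempools sharing 5000 transactions but differing by a handful:
rng = random.Random(0)
shared = {rng.getrandbits(64) for _ in range(5000)}
only_a = {rng.getrandbits(64) for _ in range(4)}
only_b = {rng.getrandbits(64) for _ in range(3)}
sketch_a, sketch_b = IBLT(cells=40), IBLT(cells=40)
for x in shared | only_a: sketch_a.insert(x)
for x in shared | only_b: sketch_b.insert(x)
# A sends only its 40-cell sketch; B subtracts and recovers the difference.
b_missing, a_missing = sketch_a.subtract(sketch_b).decode()
print(b_missing == only_a, a_missing == only_b)   # expect: True True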

Quote
As far as I understand, a peer will send every transaction to everyone he hasn't received it from. So this results in an additional multiplicative factor of something like k/2, where k is the average number of neighbours for a node.

Is it this factor that you'd like to get rid of, gmaxwell?
Yes, it is.  Technically it sends an INV (a message with TXids) not the whole transaction, though this is only a constant factor improvement.
1519  Bitcoin / Development & Technical Discussion / Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay on: February 26, 2016, 05:39:52 AM
An improvement that I've been thinking about implementing (after Blocktorrent) is an option for batched INVs. Including the hashes for two txes per IP packet instead of one would increase the INV size to 229 bytes for 64 bytes of payload -- that is, you add 36 bytes to the packet for every 32 bytes of actual payload. This is a marginal efficiency of 88.8% for each hash after the first. This is *much* better.

Waiting a short period of time to accumulate several hashes together and send them as a batched INV could easily reduce the traffic of running bitcoin nodes by a factor of 2, and
Uh. Bitcoin has done this since the very early days. The batching was temporarily somewhat hobbled between 0.10 and 0.12 (especially when you had abusive, frequently-pinging peers attached), but it is now fully functional again and manages to batch many transactions per INV pretty effectively. Turn on net message debugging and you'll see many INVs that are much larger than the minimum. The average batching interval (ignoring the trickle cut-through) is about 5 seconds-- and usually gets about 10 transactions per INV. My measurements were with these fixes in effect; I expect the blocksonly savings would have been higher otherwise.


2016-02-26 05:47:08 sending: inv (1261 bytes) peer=33900
2016-02-26 05:47:08 sending: inv (109 bytes) peer=32460
2016-02-26 05:47:08 sending: inv (37 bytes) peer=34501
2016-02-26 05:47:08 sending: inv (217 bytes) peer=33897
2016-02-26 05:47:08 sending: inv (145 bytes) peer=41863
2016-02-26 05:47:08 sending: inv (37 bytes) peer=35725
2016-02-26 05:47:08 sending: inv (73 bytes) peer=20567
2016-02-26 05:47:08 sending: inv (289 bytes) peer=44703
2016-02-26 05:47:08 sending: inv (73 bytes) peer=13408
2016-02-26 05:47:09 sending: inv (649 bytes) peer=41279
2016-02-26 05:47:09 sending: inv (145 bytes) peer=42612
2016-02-26 05:47:09 sending: inv (325 bytes) peer=34525
2016-02-26 05:47:09 sending: inv (181 bytes) peer=41174
2016-02-26 05:47:09 sending: inv (469 bytes) peer=41460
2016-02-26 05:47:10 sending: inv (973 bytes) peer=133
2016-02-26 05:47:10 sending: inv (361 bytes) peer=20541
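
(For reference in reading those sizes: an inv payload is a compact-size count followed by 36 bytes per entry -- a 4-byte type plus a 32-byte hash -- so, assuming the logged figures are payload sizes, 37 bytes is a single txid and the 1261-byte message above is batching 35 txids.)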


Twiddling here doesn't change the asymptotic efficiency, though, which is what my post is about.

[I'm also somewhat surprised that you were unaware of this; one of the patches "classic" was talking about patching out was the one restoring the batching... due to a transaction deanonymization service (or troll) claiming it interfered with their operations.]
1520  Bitcoin / Development & Technical Discussion / Re: Using compact indexes instead of hashes as identifiers. on: February 26, 2016, 05:09:46 AM
[I split the thread because we were getting away from talking about the compact aggregated signatures, into things which don't change the network behavior.]

In fact they might be required to be public.
Something we must fight against if we want Bitcoin to retain its utility in the future. No one uses a system of money with fungibility and privacy as bad as a bitcoin that has lost its properties.

Quote
in many cases. I think the usage of a DB is the cause of most of the slowdown. Since the blockchain mostly never changes, there is no need for using ACID compliant DB operations for everything.
We do not store the blockchain in a database in Bitcoin Core.

Quote
With my design all nodes are doing txindex=1 and it takes very little time to "rescan" a new privkey as everything already has the tx already in linked lists and hash tables.
txindex=1 is not very compatible with scalability; you end up with an ever-growing index. It makes some operations faster, but at a considerable cost.

In Bitcoin Core, reindex spends a fair amount of time rehashing things to check data integrity, because later it is assumed to be true. It could easily be made very fast-- but at the cost of additional indexes. Right now the marginal cost of a full-node wallet that supports arbitrary key import is about 65GB and rapidly growing (pruned vs. non-pruned, plus txindex).

Quote
so not sure why you consider it bad software. I assume you dont consider using gzip a bad practice?
The "bad software" wasn't referring to local optimizations by any means.  What I mean is that someone who doesn't like privacy can release a wallet which forces its users to always reuse addresses and then justify it with "reuse is more efficient because things use the indexes"-- but even this would only apply if they were used 'on the wire'; if they're purely local, they're purely local.   The txindex in Bitcoin Core works internally with compact indexes in a similar way, in fact.

I'm super supportive of purely local optimizations.

Quote
(when beyond the practical chance of being reorged) as 32 bits.
If it's purely local, you could probably always number that way, so long as you had code to fix up the numbering during reorg.
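
A minimal sketch of that idea (purely illustrative, not iguana's or Core's actual code): assign compact indexes to txids in confirmation order, and on a reorg drop every assignment from the fork height up so the numbering can be rebuilt from the new chain.

Code:
class LocalTxIndex:
    """Purely local txid <-> compact-index mapping, renumbered on reorg."""
    def __init__(self):
        self.by_txid = {}           # txid -> compact index
        self.by_index = []          # compact index -> (txid, height)
        self.first_at_height = {}   # height -> first index assigned in that block

    def connect_block(self, height, txids):
        self.first_at_height[height] = len(self.by_index)
        for txid in txids:
            self.by_txid[txid] = len(self.by_index)
            self.by_index.append((txid, height))

    def disconnect_from(self, height):
        """Reorg fixup: forget every index assigned at or above `height`."""
        cut = self.first_at_height.get(height, len(self.by_index))
        for txid, _h in self.by_index[cut:]:
            del self.by_txid[txid]
        del self.by_index[cut:]
        for h in [h for h in self.first_at_height if h >= height]:
            del self.first_at_height[h]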

Quote
I wanted to use the standard bitcoin RPC test suite, and I assume what is in the repo is the most complete set? Is there some brute-force RPC tester that will exercise large parts of the RPC and can be used as a validation step for a new implementation of Bitcoin Core?
There is an additional test harness built from BitcoinJ that simulates a bitcoin network and puts the node through its paces. It's called blocktester. Look at the Travis CI configuration in Bitcoin Core to get an idea of the tests that are automatically run on it.

Standard warnings about the near-impossibility of writing new code which is consensus-compatible with other code apply-- if you don't find some new surprising behavior in the network that we don't have tests for, you're almost certainly not testing hard enough to have a chance at it.