I am one of the most active members on the Dev & Tech subboard, where I have been posting for about a year. Although it accounts for 4.66% of the merit distribution in DdmrDdmr's statistics, I suspect most of that merit goes to its subboards. Unfortunately the stats don't offer more granularity to confirm this. I'm not really sure how many Dev & Tech-specific (not global) merit sources there are, if they even exist.
There are a lot of technical people who deserve more merit and a higher rank, especially in the VanitySearch and Kangaroo threads.
Below is my (unbiased) list of merit-worthy posts.
One:
Three questions:
Is there any way to see the current keyspace being worked on, like some type of command that prints out the progress at intervals, similar to "continue.txt" in BitCrack? I know there is save.work, but it is not readable.
And what does the "B]" stand for at the end of the command line? [picture of the B here]
Why are there so many dead kangaroos in some keyspace searches?
The current keyspace is the entire keyspace. The tames and wilds start at different random points and hop, looking for DPs. So if you are searching 0:FFFFFFFFFF, the kangaroos will be spread out all over the range. The progress is group ops or total DPs found. save.work is readable if you use the Python script posted in this thread, but it will not help you determine "progress"; total ops/DPs are your progress.
If you are in a small range (especially if using a GPU), you may get many dead kangaroos, because you have many kangaroos hopping around in a small keyspace, so many of them will land on the same points. Dead kangaroos can also happen if you are searching for a key that does not exist in the keyspace being searched.
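For intuition only, here is a toy Python sketch of why crowded small ranges produce collisions; it is not the actual Kangaroo algorithm, and the range size, hop table, and restart rule are all made up for illustration:

```python
# Toy model only: many walkers in a tiny keyspace quickly land on points
# another walker already visited, which is roughly why small ranges
# produce lots of "dead" kangaroos.
import random

RANGE_SIZE = 2**16       # deliberately tiny keyspace for the demo
NUM_KANGAROOS = 1024     # a GPU launches far more walkers than this
HOPS = 200

visited = set()
dead = 0
positions = [random.randrange(RANGE_SIZE) for _ in range(NUM_KANGAROOS)]

for _ in range(HOPS):
    for i, pos in enumerate(positions):
        if pos in visited:
            dead += 1                                    # collision: "dead" kangaroo
            positions[i] = random.randrange(RANGE_SIZE)  # restart it somewhere else
        else:
            visited.add(pos)
            positions[i] = (pos + random.choice((1, 3, 7, 15))) % RANGE_SIZE

print(f"collisions ('dead' kangaroos) after {HOPS} hops: {dead}")
```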
Two:
Three:
We already incurred a speed penalty just to make it work for RTX cards, and the last thing everyone needs is an even slower program just to get it to run on Linux.
Then you are doing something wrong.
My first RTX 3060 card has arrived and I will make a fix in BitCrack sp-mod #6 (Windows) with full-speed Wine support.
spminer #2 for Vertcoin has already been released with RTX 3060 support. Mine at full speed on x1 riser cables with the latest drivers, without NVIDIA blocking you.
Any updates?
??
My RTX 3070s are doing over 2,000M/sec for cracking BTC private keys and matching known valuable addresses using bloom filters. The same algorithms on a GTX 1060 do about 200M/sec.
On a rack of 4 RTX 3070s, I'm seeing 10,000M/sec, using a 32 GB bloom filter built from 300M addresses (H160); my false-positive rate is 10^-30.
I only use Linux.
The main thing is to put the bloom filter inside of BitCrack, so you're not just looking for one public key and/or address, but testing all 300M in parallel on every cycle (here 10,000M/sec), so the probability of a hit in the secp256k1 space of 2^256, or 10^76, is reasonable.
Keep the bloom filter on its own M.2 drive on the MOBO, and have a 4 TB SATA drive for hex files and the private-key database (you need to be able to map your found hex160).
I have two racks, one using GTX 1060s, the other RTX 3070s, which I picked up last summer for $500 each; now they're $1,000+ if you can find them. The CPU needs 64 GB; I'm running an AMD Threadripper with 32 cores. This is not a game for Windows.
The real problem here is setting up the bloom filter. The original model in Brainflayer was 512 MB, which only allows about 15M addresses before the false-positive rate goes astronomical. The solution is to cascade 4 GB chunks in series, as most shared-memory models only allow 4 GB chunks. Three years ago, using OpenGL, I had the bloom filters on the GPU (GTX 1070), but there you can only have 2 GB chunks, and it's actually faster to do blocks of 2048 private keys and have each GPU core pass them back to threads running on CPU cores against the shared bloom filter. With 300M valid BTC addresses, you really need a 32 GB bloom filter. None of this stuff is online; you must roll your own.
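As a rough sanity check on those numbers, the standard Bloom-filter false-positive approximation can be evaluated directly; the sketch below is generic (the choice of k = 20 hash functions is an assumption for illustration, not taken from anyone's actual setup):

```python
# Standard Bloom-filter false-positive approximation, not anyone's actual code:
#   p ~ (1 - e^(-k*n/m))^k   with m bits, n inserted items, k hash functions
import math

def false_positive_rate(size_bytes: int, n_items: int, k_hashes: int) -> float:
    m = size_bytes * 8  # filter size in bits
    return (1 - math.exp(-k_hashes * n_items / m)) ** k_hashes

K = 20  # assumed number of hash functions, purely for illustration
for label, size_bytes, n in [
    ("512 MB filter, 15M addresses",  512 * 2**20,  15_000_000),
    ("512 MB filter, 300M addresses", 512 * 2**20, 300_000_000),
    ("32 GB filter, 300M addresses",   32 * 2**30, 300_000_000),
]:
    print(f"{label}: p ~ {false_positive_rate(size_bytes, n, K):.2e}")
```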
That is a good question. Is it possible to speed that up, comparing two lists with millions of lines?
If you sort the file and use binary search to look for each item in your list, then your runtime becomes O(log2(n)) per entry, so you're going to have at most O(NumberOfAddresses * log2(n)) as your worst-case runtime. It's really not slow; that's about 30 units of time to search for an address in a list with 1,000,000,000 lines in it.
Actually fitting all that into memory is going to be a problem, though. There are some algorithms I read about in a Knuth book on external (on-disk) sorting, but they're very old and I think they may have to be adapted for hard disks instead of tapes.
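For illustration, a minimal sketch of binary search over a file of sorted, fixed-width 20-byte hash160 records; the file name and record layout are hypothetical, not any particular tool's format:

```python
# Minimal sketch: binary search over a file of sorted, fixed-width 20-byte
# hash160 records. "addresses.bin" and the record layout are hypothetical.
RECORD = 20  # a raw hash160 is 20 bytes

def contains(path: str, target_h160: bytes) -> bool:
    with open(path, "rb") as f:
        f.seek(0, 2)                    # jump to the end to learn the file size
        lo, hi = 0, f.tell() // RECORD  # number of records
        while lo < hi:
            mid = (lo + hi) // 2
            f.seek(mid * RECORD)
            rec = f.read(RECORD)
            if rec == target_h160:
                return True
            if rec < target_h160:
                lo = mid + 1
            else:
                hi = mid
    return False

# usage: contains("addresses.bin", bytes.fromhex("<40 hex chars of a hash160>"))
```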
You can't search for an h160 address in a 300 GB text file; you must map the file to binary. Then the file will be 10 GB, and it takes just a second to answer yes/no on whether that h160 is in the list of 300M addresses.
The early Brainflayer GitHub had a tool called 'binchk': using xed (Linux) you convert the 300 GB file to unique hex, get the .bin file, and use binchk.
Brainflayer needed this because the 512 MB bloom filter it uses only allowed about 10M addresses before the false-positive rate went astronomical; with binchk you can take the false positives and verify whether any of them are real positives.
The bloom filter is super fast and can work on the GPU, but its false-positive rate is high.
No point in using binary search on text, just use the model described.
Today when I scrape the blockchain, I get about 300M addresses; after you do the "sort -u" it will be slightly fewer, but you also need to run a client on the memory pool to constantly add new addresses to the bloom filter.
If you're comparing lists with 300M lines of hex, and/or searching them, it's much better to use bloom filters and binary search combined; it drops a 2+ hour search to seconds.
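A self-contained toy of that combined flow: a fast Bloom filter answers "definitely not / maybe", and only the rare "maybe" is confirmed by an exact binary-search check (binchk-style). The tiny in-memory filter, class names, and fake data below are illustrative only, nothing like a real 32 GB setup:

```python
import bisect
import hashlib

class TinyBloom:
    """Minimal Bloom filter for demonstration purposes."""
    def __init__(self, bits: int = 1 << 20, k: int = 4):
        self.bits, self.k, self.array = bits, k, bytearray(bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            d = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(d[:8], "big") % self.bits

    def add(self, item: bytes):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, item: bytes) -> bool:
        return all(self.array[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# Fake 20-byte "hash160" values standing in for the sorted .bin file.
funded = sorted(hashlib.sha256(i.to_bytes(4, "big")).digest()[:20] for i in range(1000))
bloom = TinyBloom()
for h160 in funded:
    bloom.add(h160)

def address_is_funded(h160: bytes) -> bool:
    if not bloom.maybe_contains(h160):        # a Bloom "no" is definitive, O(1)
        return False
    i = bisect.bisect_left(funded, h160)      # rare "maybe": exact binary search
    return i < len(funded) and funded[i] == h160

print(address_is_funded(funded[42]))                 # True
print(address_is_funded(b"\x00" * 20))               # almost certainly False
```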
Four:
Hey guys, I wanted to ask if this project is of any interest to you.
I would like to offer a Bitcoin paper wallet as a product, where you can't tell from the outside that it's a paper wallet, and which you can only read via NFC. It is password-encrypted with a standard NFC writing app on a standard phone, so nobody can say we are the ones manipulating anything. It can be a simple NFC card with a custom design, or an NFC tag.
So from the outside you can't tell that it's a paper wallet; it might be disguised as a gym card, for example. It could also be customizable, or a metal card, depending on the customer's wishes.
Doing it by pen is just pretty shabby, since it can smudge, and yes, it's obviously a wallet too.
Apparently there is already programming for this - so it is not hackable / brute-forceable?
Is there anyone interested in getting a first prototype, using it, and giving us feedback on whether this is a cool idea?
I am stoked about any feedback.
Greetings
Bumped thread from:
I have found out that you can encrypt messages on NFC tags, which has led me to the idea of using them as part of an even more secure paper wallet. I have checked the lifespan and they seem to last about 10,000 read/write cycles. I have quite a few of these from a project, ~100 or so. I am going to start some tests on them; am I missing any reason why this would be a bad idea?
Benefits over paper wallet: No need to print key (could be held in printer cache)
Need a password to access the key (rather than just peel it)
Easy to stick on to cheap novelty physical coins
Easy to remove and reapply new one after sweeping
Can be placed in NFC blocker sleeve in storage area
Five:
The Bitcoin mining scene has been dominated by pools since 2011 as a result of the infamous pooling pressure flaw, which is a direct result of the winner-take-all approach to PoW adopted by Satoshi Nakamoto from the very beginning.
Although there are some arguable points, Bitcoin's implementation of winner-take-all makes it a fair gamble for miners, but fairness is not enough for gamblers to participate in the game, because nobody can afford the unlimited resources and time needed for a guaranteed break-even/win status. That is why pools are necessary for Bitcoin mining: they eliminate/hide the gambling nature of the original Bitcoin mining scheme almost completely, letting the industry grow just like a normal business: tell me how much hash rate you have, and I'll tell you what your business looks like in terms of costs and revenues.
Many people, including myself, do not like pools. They are centralized entities that introduce a series of risk factors into the Bitcoin ecosystem, compromising the most basic decentralization assumptions at the very least, but the most serious consequence is what I call miner alienation. As a matter of fact, miners are not just abstracted from the gambling nature of mining; they are also abstracted from the whole network.
For a pooling scheme to do anything useful in terms of hiding the variance and risks, it needs to give a substantial difficulty leverage, hence a considerable number of blocks should be submitted to the pool operator/server (typically thousands per second for large pools), which makes it absolutely impossible to fetch/validate all of them as long as they are supposed to be conventional Bitcoin blocks. That is why pools adopted a top-down block generation method in which the pool operator builds a block template and relays its header to its clients, i.e. miners, waiting for them to find a nonce; add a few tricks to this model, and you have Stratum, the current de facto standard for pooling in the PoW world.
Suddenly, there was no reason left for miners to be aware of the Bitcoin network, e.g. by running a full node, and, except for a few very large mining farms, overnight bitcoin miners turned into zombies, alienated from the actual Bitcoin protocol, unconsciously and exhaustively searching for a meaningless nonce that makes a meaningless 80-byte string look pretty enough to be claimed as a share. Indisputably, this situation MUST change, but how?
Although I've been trying to find a way to fix the situation with pools, it was just a while ago that I realized how ignorant I am about the economics of the subject, and it took no more than a few days for me to realize that I'm not an exception, as there is no model available (well, AFAIK) to describe the economics behind the PoW pooling business.
By an economic model I mean a mathematical cost vs. benefit analysis of pooling as a business. Many authors have shown interest in the reward distribution mechanisms used by pools from a game-theoretic point of view, mainly for mitigating adversarial behaviors such as block/share withholding.
Although reward distribution is an important topic, and one can find interesting mathematical material here to play with, by no means can it be categorized as a mathematical model for the core pooling business, as it doesn't cover the most important question:
What is the break-even threshold for the fee that pools charge?
I was even more surprised when, after applying naive probability techniques and failing to get anywhere close to the answer, I found myself dealing with a decades-old problem in mathematics known as the utility of gambling.
I'm wondering if there is any related previous work that I was not able to spot, which is why I started this thread, asking members to share any resources/thoughts about the question I bolded above.
Why and how is this important? Like it or not, pooling is an abnormal phenomenon for PoW, and the pressure toward it is nothing less than a flaw, a flaw that is a consequence of the winner-take-all approach that has historically dominated PoW starting with Nakamoto and Bitcoin; nevertheless, any reasonable advocate would agree that it should be addressed somehow.
There can be two different approaches to this problem:
1- Designing a winners-take-share model of PoW.
Initially, I tried this approach a couple of years ago, and I believe my PoCW proposal was a good start, but it needs a hard fork or a project for a brand-new coin, neither of which is my primary target now.
2- Improving/replacing Stratum and the way miners of winner-take-all PoW coins, especially Bitcoin, deal with their variance nightmare.
This is the way to go, I believe, but a closer examination of the current projects is not encouraging, to say the least.
Stratum 2.0 project:
It is a total disappointment in spite of the hype and the advertisement: no fundamental redefinition of roles, and a desperate attempt to give miners a right to negotiate the block contents they are mining, without any creative idea for justifying and realizing such an attempt, all of it wrapped in an ugly and complicated set of protocols.
P2Pool and decentralized pooling:
Not scalable! P2Pool offers a 20-to-1 leverage which it can't improve much, and a closer look reveals that you can't use this protocol recursively because of the reward distribution complexities involved.
I conclude that the true solution to the pooling problem is one that has not been tried or even proposed yet, and all we can do is give a general sketch of it:
1- It should be as decentralized as possible, and this property should improve over time instead of declining.
2- It should be an open and permissionless ecosystem; people should be able to join or leave freely, and choose their role with minimal requirements.
3- Roles and relationships of the parties involved, miners, pool operators, and the network, should be redefined radically.
4- The costs and revenues of each party should be economically justifiable.
Taking features 2, 3, and 4 into account, I think it is not that hard to understand the importance of an economic model for the future of pooling in Bitcoin and PoW coins generally.
Six (reply):
I think the main problem is difficulty. There is one, single, global difficulty for the whole network. It is convenient when it comes to validation, but it is very inconvenient if we take mining into account. Because if we have one million miners and the whole difficulty is one million, then (assuming that all miners have the same power) each miner can produce one block with difficulty one per ten minutes (on average). So, my idea is using more than one difficulty and combining all of them to get the whole network difficulty.
So: each miner will start with a difficulty equal to one. In this way, even CPU miners can start mining in this system if they really want to. Then, each miner will calculate its own difficulty just by competing with itself. Each miner will know which target it should use to produce one block per ten minutes (on average). And then, each miner will produce a single block header per ten minutes that meets that miner's target.
Finally, hashes will be added in a way that respects their difficulties: any two hashes can be added if (and only if) their sum is divisible by four. The sum can then be divided to produce a block hash that respects the difficulty and is just as probable as one produced by a single miner working alone.
Some example:
firstMinerBlockHash: 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
secondMinerBlockHash: 000000006a625f06636b8bb6ac7b960a8d03705d1ace08b1a19da3fdcc99ddbd
sumOfHashes: 000000006a7c356eff73e69811feb49ddcfad40b6170af73145195b3d726c02c (always divisible by four)
finalBlockHash: 000000001a9f0d5bbfdcf9a6047fad27773eb502d85c2bdcc514656cf5c9b00b (sumOfHashes divided by four)
In this way, shares could be combined in a decentralized way. It is the best I came up with so far, but I think it is a good start.
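For clarity, here is a minimal Python sketch of the combination rule described in this post (the rule is the poster's own idea, not an existing protocol), using the two example hashes above:

```python
# Sketch of the proposed combination rule: treat the two block hashes as
# 256-bit integers, accept the pair only if their sum is divisible by four,
# and publish sum // 4 as the combined hash.
first  = 0x000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
second = 0x000000006a625f06636b8bb6ac7b960a8d03705d1ace08b1a19da3fdcc99ddbd

total = first + second
if total % 4 == 0:
    combined = total // 4
    print(f"sumOfHashes:    {total:064x}")
    print(f"finalBlockHash: {combined:064x}")
else:
    print("pair rejected: sum is not divisible by four")
```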
Seven:
Hi
Is there a secp256k1 implementation for the CUDA API in C++?
How can I create such a project in Visual Studio?
Do you have a project ready? I will only use CUDA.
For example:
https://github.com/brichard19/BitCrack and
https://github.com/JeanLucPons/Kangaroo
However, I do not have enough knowledge of C++ to understand the code in these projects.
I just want to run the mathematical operations in parallel.
So I am looking for a C++ project that uses the secp256k1 library with the CUDA API.
Thank you..
Eight:
Joe_Bauers is asking a question about the mining algorithm for his altcoin, which currently has “Market Cap Rank: N/A” on CoinGecko—but it’s listed on Yobit with a 40% spread (!) and no depth (!!), if you want to get cheated.
His mining algorithm has a flaw he does not see. When time permits, I may post on his altcoin’s development thread with advice for miners who want to exploit it. The discussion is off-topic on this thread; I will delete further posts about it.
2) A miner could "game the system" by generating a specific char for last position along with the required PoW, though this would require an extra bit of work over non-bad actors.
If you are not generating whatever "random" item publicly, the miners will not know that changing the block header will create any advantage.
What he is actually doing is arguably even worse than that.
<edit type=off-topic>Since you don't wish further posts on this.
The dreadful result of someone exploiting this obvious flaw you mention is that the Nfactor for the block is going to be 18 instead of, let's say 20.
No, the dreadful result is that your chain tip will be unstable. You are giving miners an incentive to build only on blocks that require the next block to use an Nfactor of 18; but you are also giving miners an incentive to broadcast any block they find. You are also giving miners with significant % of hashrate an incentive to withhold any blocks they find that require the next block to use an Nfactor of 18, build on that in secret, and then broadcast the result.
The ultimate result will be many orphans and reorgs, and a chain that messily converges on having mostly or only blocks that require the next block to use an Nfactor of 18. If your coin ever has enough value to attract an ASIC designer, then the ASIC will probably do Nfactor = 18 only, and still dominate the network; so I guess that this has the benefit of being ASIC-friendly.
</edit>
- Give up, and use urandom. Not from laziness, but from sufficient wisdom not to shoot myself in the foot.
This is what I would do (even as a regular user). But what if your software is browser-based or available on multiple platforms? Do you just say it's the user's risk for not using Linux?
Thanks for the link; I had not seen that thread. Some of the posts there invited a Nullian rant which is yet unfinished...
Summary: We went to sleep, and dreamt of a universal platform. We awoke in a nightmare where the universal platform is the web browser, the universal language is Javascript, the universal ISA is Webasm—and the morals of youths have been corrupted so that they promiscuously run network-loaded executable code from random strangers as a lifestyle. I want to kill myself, or at least take up a hobby of severe alcoholism.
There are no good ways of dealing with this. How can one mitigate the horrors? If I were developing a web app that needed to generate secrets inside the browser, then I would start by reviewing the major browsers’ implementations of
crypto.getRandomValues(). Then, I would hope that the “move fast and break things” browser developers don’t change it, accidentally or on purpose. —Then, I would take a strong drink and/or shoot myself in the head.
As for other platforms: Every major OS offers an API for obtaining randomness. Use it. If it is bad, use a different OS.
A much bigger problem nowadays is an OS running inside of a VM. A hypervisor must offer a hypercall for obtaining randomness from code that runs “on the metal”, and a guest OS must use it. Otherwise, the guest OS kernel has the same problem as any application running in userland: It lacks sufficient hardware access to measure nondeterministic inputs. Another big problem is, of course, embedded devices... sigh.
The good news is that you only need to obtain a 256-bit random seed. If you have a 256-bit secure seed, then you can expand it to as much “randomness” as you may desire. That is what BIP 32 does! It is cryptographically secure; and there are good ways of doing this for any application, including applications that require forward secrecy. The focus of my OP here was about extracting randomness into a secure seed, not about what to do after that.
On the other hand, there’s no actual need for this huge pile of random numbers. If you’ve somehow managed to generate one secure 256-bit key then from that key you can derive all the “random” numbers you’ll ever need for every cryptographic protocol—and you can do this derivation in a completely deterministic, auditable, testable way, as illustrated by EdDSA. (If you haven’t managed to generate one secure 256-bit key then you have much bigger problems.)
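As a rough illustration of that point, here is a generic HMAC-based expansion of one 256-bit seed into many secrets; it is deliberately simplified, it is not the actual BIP 32 or EdDSA construction, and the derivation label is made up:

```python
# Simplified illustration only: stretch one secret 256-bit seed into any
# number of independent-looking 256-bit secrets, deterministically.
import hashlib
import hmac
import os

seed = os.urandom(32)  # the one secret you actually need to generate well

def derive(seed: bytes, index: int) -> bytes:
    label = b"example-app/key/" + index.to_bytes(4, "big")  # hypothetical label
    return hmac.new(seed, label, hashlib.sha512).digest()[:32]

# Deterministic: the same seed and index always give the same secret,
# so the whole derivation is auditable and testable.
for i in range(3):
    print(i, derive(seed, i).hex())
```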
As long as the purpose of generating random numbers is to use them as seeds for electronic signature algorithms like RSA, ECDSA, Schnorr, etc., what matters is not the pure mathematical/philosophical concept of security, because the signature algorithm itself is not purely, information-theoretically secure! Comparing the two APIs provided by Unix-like kernels, the myth gives more credit to /dev/random because of its inherently more guaranteed entropy.
This reflects the beginning of OP here, and also an essay that I have been intending to write for the Ivory Tower... We do not live in the physical world. The real world, the crypto world, is a world of numbers and computations, where all attackers are computationally bounded.
You behave accordingly, when you generate a Hierarchical Deterministic wallet with BIP 32: It is computationally pseudorandom, and thus secure against a computationally bounded attacker.
In this context, people who obsess about “information-theoretic security” have no idea what they are talking about.
(N.b. that the whole Linuxland argument looks very foolish to me. On my BSD systems, /dev/urandom is a symlink to /dev/random; and /dev/random behaves more or less similarly to urandom on a Linux system, except for some extra safety features at boot time when the system hasn’t yet been able to seed the random generator. In my opinion, Linux should do the same thing.)
Nine:
I've been thinking about mesh broadcasting of cryptos for a while... your idea seems like a special case of that. So far I have thought of 3 major problems:
1) regulations... the EM spectrum is not freely available; the best frequencies are already in use and are in any case subject to licensing almost all over the world
2) long-reaching (lowest) frequencies need bigger antennas, so you have a trade-off between the distance between relaying nodes and the possibility of installing them or making them mobile and concealable
3) scalability problems of mesh networks... they work well when they are crowded, and no one contributes to/uses them if they don't work effectively
The only global BTC-over-RF project I know of is Blockstream-SAT: that's cool and certainly useful for digital-divide zones, but the internet-to-satellite supply chain constitutes a point of centralization.
Ten:
That said, the problem is the applicability of this idea, because with the current Bitcoin Core implementation we don't explicitly keep track of the blockchain state using Merkle trees.
On the other hand, proposals for a UTXO commitment (using a Merkle root somewhere in the block or on a side-chain, whatever) have been abandoned for a while; that is totally unfortunate, BTW, because without such a commitment there will be no light-client option available for Bitcoin.
And in that video it's mentioned that the UTXO set size is 5 GB, which is already greater than people's dbcache limits of 500 MB, 2 or even 4 GB, and I think that's a reason why Miller's UTXO Merkle tree might be a better idea, since it's size-agnostic.
Answering both in short points:
1- The paper in the video was published in Mar '21, and the Utreexo project is still going, so the topic is currently a research focus, not abandoned.
2- Miller's proposal was implemented in 2014 (https://github.com/amiller/redblackmerkle) and published as: Miller, A., Hicks, M., Katz, J., Shi, E.: "Authenticated data structures, generically." ACM SIGPLAN Notices 49(1), 411–423 (2014). But if you watch the video, you'll find that it did not give a performance improvement that justifies the bookkeeping of a red-black tree data structure as compared to a regular binary search tree (BST).
3- It is easier to understand from the video that the problem which led to the three proposals/research efforts (the red-black tree from Miller, the forest in Utreexo, the trie in the video's paper) is the overhead of continuous, excessive deletion and insertion of UTXOs: the tree branches get modified, which may lead to a degenerate tree with longer paths, and each time, all internal nodes that contain the leaf hash inside their concatenated hashes must be modified.
.
- What I'm saying is: if you notice that every output comes from an input, don't handle them separately and "distract" your data structure; just modify the values in place, keeping the hierarchy of the tree almost the same.
.
I hoped for a community familiar with the problem, to find out if I'm missing something. Was my seemingly obvious, easy solution thought of before and rejected for a reason, or did it just not cross their minds?
.
.
.
Last note: the memory requirements were the original motive for having stateless nodes/clients... nodes that do not keep the full Merkle tree/forest/trie; they keep just the proof path (called a witness) for the UTXOs they need to verify. Each insertion/deletion modifies the parent and sibling hashes along the way, and thus may affect other proofs, especially if the deletion involves a swap of tree branches (watch the video), and we wish to minimize that.
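To make the witness idea concrete, here is a toy Merkle-proof verification sketch; the hashing (plain SHA-256 pairing), leaf encoding, and function names are illustrative and do not match Utreexo's or Bitcoin's actual rules:

```python
# Toy sketch of what a stateless client's "witness" check looks like.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_proof(leaf: bytes, proof: list, root: bytes) -> bool:
    """proof is a list of (sibling_hash, side) pairs from leaf to root,
    where side is 'L' if the sibling sits on the left."""
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root

# Four-leaf example. Every UTXO insertion/deletion changes some sibling
# hashes on these paths, which is exactly the proof churn the post above
# wants to minimize.
leaves = [b"utxo-0", b"utxo-1", b"utxo-2", b"utxo-3"]
l0, l1, l2, l3 = (h(x) for x in leaves)
root = h(h(l0 + l1) + h(l2 + l3))
proof_for_utxo_2 = [(l3, "R"), (h(l0 + l1), "L")]
print(verify_proof(b"utxo-2", proof_for_utxo_2, root))  # True
```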
If you want, it's really easy for me to find another 10 posts like these.