You mentioned that the export file has address, hash160 and balance.
Yes. Can you please let me know if the address is from a WIF compressed or uncompressed key? Or what type of address it is?
Thanks!
They are base58 addresses: P2PKH starting with 1, and some are P2SH addresses starting with 3.
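As a quick illustration of the prefix rule above (an illustrative sketch; the function name is mine, not from the thread):

```python
def address_kind(addr: str) -> str:
    # Classify a Bitcoin address by its prefix, per the reply above.
    if addr.startswith("1"):
        return "p2pkh"    # base58, pay-to-pubkey-hash
    if addr.startswith("3"):
        return "p2sh"     # base58, pay-to-script-hash
    if addr.startswith("bc1"):
        return "bech32"   # native segwit, discussed later in the thread
    return "unknown"

print(address_kind("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"))  # p2pkh
```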
|
|
|
Hi everyone, I've open-sourced a small piece of software to parse the chainstate and output all UTXOs with their addresses (including the latest bech32) in CSV format. As of today, computation takes ~8 minutes and creates a 6 GB CSV file (there are 60M UTXOs). Feel free to give it a try: https://github.com/mycroft/chainstate
Thank you, finally someone who is a developer and wants to help others. I really appreciate it. I will post results once I try it. Thanks.
|
|
|
I have built a dedicated box just for this.
[...]
Do all the above steps continuously, with a new hex number every time. It's been up 1 month and I didn't find anything.
My, that’s an expensive hobby for such an unprofitable purpose.
I am from the 3rd world, so it's not expensive once the box is built; it's just cheap electricity that matters, and it gives me hope. edit: the main struggle is keeping the electricity on 24x7 and keeping the internet on, as power outages are common. p.s. I am trying to break the wallets which are open for challenges, and the bitcoin eater address, if you are so concerned.
Wait... a “gem”? Perhaps you really expect to hit the 2^-160 jackpot and find a key for an “eater” address? That address with the all-zeroes hash160 has a balance of 65 BTC plus change; that would be a “gem”, but one you’ll never find. What other “gems” do you seek?
Yes, I have that one too in my list, and one which is open for challenge, aka WarpWallet.
|
|
|
ref: https://github.com/ryancdotorg/brainflayer

-I HEXPRIVKEY incremental private key cracking mode, starting at HEXPRIVKEY (supports -n) FAST
-k K skip the first K lines of input
-n K/N use only the Kth of every N input lines
-B batch size for affine transformations; must be a power of 2 (default/max: 4096)
-w WINDOW_SIZE window size for ecmult table (default: 16); uses about 3 * 2^w KiB memory on startup, but only about 2^w KiB once the table is built

-I is where you start. -k means skip the first K lines of input when you start; use a different K for each instance/process, or you can do as advised here with -n 1/N. Read the wiki article on 'baby-step giant-step' about how to do discrete-log cracking of BTC, as this is essentially what you're doing: each process is a giant step, and then each process does its own baby steps. The reason NOT to use one is that Mersenne primes are better for incrementing. For the base -I, it's best to do a statistical analysis of what you're trying to crack, e.g. d = SQRT(r), where r is the public key and d is your first best guess of the private key.

The author 'ryan' of brainflayer doesn't help people because he doesn't want morons to crack BTC, so he keeps this tool in obscurity. That said, you can run -I forever and never get a hit. For instance, you first have to have a BLF file of good high-value hash160 addresses, then you need a BIN file to kick out the false positives; just running brainflayer in brute-force mode will heat your house and nothing more.

The question here is how to use multiple processes. The way ryan set up multi-processing was to use the baby-step/giant-step paradigm and have a bash shell fire up brainflayer clients as needed. One problem with -I is that it runs forever, so if you're serious you need to hack brainflayer. I have dozens of versions running, and for -I I have a -S (stop after N) option that tells brainflayer to stop; otherwise you can over-run your giant steps and waste CPU power. Feel free to ask questions; I have spent about six months hacking brainflayer.

IMHO brainflayer is not useful for brute force. Where it is useful is generating private-key/seed pairs for all the crypto algorithms and then building a HUGE database of hash160/private-key pairs, then watching the memory pool on BTC; when a user uses an address (hash160) and you have that private key (you should, if you have say 200 million hash160/pk pairs), then you own that BTC. Just willy-nilly running brainflayer with -I tells you next to nothing. Let's say you have the pristine.hex list, that's say 40k entries, but that means your odds of a hit are 1 in 10^77, about as likely as finding a single fly somewhere in the universe. So the first step is to cultivate all the hash160s (addresses) you can find, say 500 million, then use hashcat to generate a 500 GB unique rainbow table and use brainflayer to generate the hash160/pk pairs; now you have increased the odds of finding a hit. In summary, just running brainflayer in -I mode, given odds of 1 in 10^77, is a complete waste of time. Before one can really use brainflayer, one must spend months building databases of addresses/hash160s, especially high-value ones. It's almost not worth spending time trying to crack the pristine (Satoshi) coins, as it hasn't been done to date, thus not likely; I have tried for 6+ months and have not cracked them. What you can crack is newbies who are not using high-entropy seeds for their private keys; with your database, and if you're fast, you can nab those, but bear in mind there are hundreds of bots doing the same.

Thanks for the detailed info. I have built a dedicated box just for this. A cron job every 6 hours parses new blocks, takes addresses with a positive amount, creates new blf and bin files and rotates them. My PHP script then calculates a random int (using my logic), checks if it's in range, starts brainflayer in -I mode using this hex, gets its PID, sleeps 10 seconds, then kills this process. Do all the above steps continuously, with a new hex number every time. It's been up 1 month and I didn't find anything.
I have tried the password brute-force method, but it also didn't give much success. I like the incremental approach because you never know when you are going to hit a gem, since it's all random. I have a couple of questions: 1) What is the ecmult table, how can I create it, and how can it benefit my cause? Every time, killing the running brainflayer process and starting a new one takes around 1-4 seconds. 2) Is there something like a centralised blf file loaded in memory, so spawning brainflayer will be much faster? As it is, each instance first loads it into memory and then starts the -I counter from there. Thanks, I really appreciate your input.
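The rotate-and-kill loop described above (random -I start, run for 10 seconds, kill) could be sketched like this in Python; the brainflayer binary and file names in the commented call are placeholders, not taken from the poster's actual script:

```python
import secrets
import subprocess

# secp256k1 group order: valid private keys are in [1, N-1]
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def random_start_key() -> str:
    # pick a random in-range starting private key for -I mode
    return "%064x" % (secrets.randbelow(N - 1) + 1)

def run_for(cmd, seconds: float) -> int:
    # launch a process, let it run for `seconds`, then terminate it
    proc = subprocess.Popen(cmd)
    try:
        proc.wait(timeout=seconds)
    except subprocess.TimeoutExpired:
        proc.terminate()
        proc.wait()
    return proc.returncode

# Hypothetical invocation (binary and file paths are assumptions):
# run_for(["./brainflayer", "-v", "-I", random_start_key(),
#          "-b", "example.blf", "-f", "example.bin"], 10)
```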
|
|
|
but what does this mean ? -n K/N use only the Kth of every N input lines
Sorry, but English isn't my primary language. Does it mean that it's not going to try incrementally, and is going to pass/skip every Nth value while incrementing? Thanks
It's just which lines it reads out of the input. Example: you have 10 lines in your BLF file, you have 5 CPUs, and you want to run 5 brainflayer instances. You run them with 1. -n 1/5, 2. -n 2/5, 3. -n 3/5 ... So in the 1st example, brainflayer reads just the first line of every 5 lines in your blf file.
Alright, it means I have to use the exact same -I number in all 4 instances of brainflayer. Damn, it's not what I was thinking; I thought I could run 4 instances of brainflayer and use a new -I number for each instance, but that would miss the hashes from the .blf file. Thanks for the info.
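The splitting rule from the help text ("-n K/N use only the Kth of every N input lines") can be sketched in a few lines of Python, purely for illustration:

```python
def kth_of_every_n(lines, k, n):
    # keep the Kth line (1-indexed) out of every group of N lines,
    # mirroring brainflayer's -n K/N input split
    return [line for i, line in enumerate(lines) if i % n == k - 1]

lines = [f"hash{i}" for i in range(10)]
# with -n 1/5 over 10 lines, an instance sees line 0 and line 5
print(kth_of_every_n(lines, 1, 5))  # ['hash0', 'hash5']
```

Running instances with 1/5, 2/5, ..., 5/5 together covers every input line exactly once, which is why all instances must share the same file and the same -I start.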
|
|
|
Thanks, I have a 4-core CPU.
So I must use them like this: 1/4, 2/4, 3/4, 4/4.
Yes, and you can test how it works. Just make a new blf file and insert 4 different hash160s there, for example the hash160s of these hexes: 0000000000000000000000000000000000000000000000000000000000000011, 0000000000000000000000000000000000000000000000000000000000000111, etc., and run brainflayer with the -I 0000000000000000000000000000000000000000000000000000000000000001 option.
But what does this mean? -n K/N use only the Kth of every N input lines
Sorry, but English isn't my primary language. Does it mean that it's not going to try incrementally, and is going to pass/skip every Nth value while incrementing? Thanks
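The test setup suggested above (hash160s of known private keys) can be generated in pure Python. This is a textbook double-and-add sketch of secp256k1, not code from the thread, and it is far too slow for actual cracking; note that ripemd160 availability in hashlib depends on the local OpenSSL build:

```python
import hashlib

# secp256k1 curve parameters
P = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def _add(p, q):
    # affine point addition; None is the point at infinity
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p == q:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def pubkey_bytes(priv_hex, compressed=True):
    # scalar-multiply the generator by the private key (double-and-add)
    k, point, addend = int(priv_hex, 16), None, (Gx, Gy)
    while k:
        if k & 1:
            point = _add(point, addend)
        addend = _add(addend, addend)
        k >>= 1
    x, y = point
    if compressed:
        return bytes([2 + (y & 1)]) + x.to_bytes(32, "big")
    return b"\x04" + x.to_bytes(32, "big") + y.to_bytes(32, "big")

def hash160(pub):
    # HASH160 = RIPEMD160(SHA256(pubkey))
    return hashlib.new("ripemd160", hashlib.sha256(pub).digest()).hexdigest()
```

For example, `hash160(pubkey_bytes("0000000000000000000000000000000000000000000000000000000000000011"))` gives one entry for the test blf file.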
|
|
|
How many CPUs do you want to use for brainflayer?
If two CPUs, you can run two brainflayer instances:
first: ./brainflayer -v -I 0000000000000000000000000000000000000000000000000000000000000001 -b example.blf -f example.bin -n 1/2
second: ./brainflayer -v -I 0000000000000000000000000000000000000000000000000000000000000001 -b example.blf -f example.bin -n 2/2
BTW, why do you use -f example.bin?
Thanks, I have a 4-core CPU, so I must use them like this: 1/4, 2/4, 3/4, 4/4. I use -f example.bin to reduce the false-positive messages, as the bloom filter is not that accurate.
|
|
|
ref: https://github.com/ryancdotorg/brainflayer
Hi all, I am trying to use brainflayer, but it's a single-threaded app and runs on a single core while the other CPU cores are idle. The author says: Unfortunately, brainflayer is not currently multithreaded. If you want to have it keep multiple cores busy, you'll have to come up with a way to distribute the work yourself (brainflayer's -n and -k options may help)
help says Usage: /root/brainflayer/./brainflayer [OPTION]...
-a open output file in append mode
-b FILE check for matches against bloom filter FILE
-f FILE verify matches against sorted hash160s in FILE
-i FILE read from FILE instead of stdin
-o FILE write to FILE instead of stdout
-c TYPES use TYPES for public key to hash160 computation; multiple can be specified; the default is 'uc', which will check for both uncompressed and compressed addresses using Bitcoin's algorithm
   u - uncompressed address
   c - compressed address
   e - ethereum address
   x - most significant bits of x coordinate
-t TYPE inputs are TYPE - supported types:
   sha256 (default) - classic brainwallet
   sha3 - sha3-256
   priv - raw private keys (requires -x)
   warp - WarpWallet (supports -s or -p)
   bwio - brainwallet.io (supports -s or -p)
   bv2 - brainv2 (supports -s or -p) VERY SLOW
   rush - rushwallet (requires -r) FAST
   keccak - keccak256 (ethercamp/old ethaddress)
   camp2 - keccak256 * 2031 (new ethercamp)
-x treat input as hex encoded
-s SALT use SALT for salted input types (default: none)
-p PASSPHRASE use PASSPHRASE for salted input types, inputs will be treated as salts
-r FRAGMENT use FRAGMENT for cracking rushwallet passphrase
-I HEXPRIVKEY incremental private key cracking mode, starting at HEXPRIVKEY (supports -n) FAST
-k K skip the first K lines of input
-n K/N use only the Kth of every N input lines
-B batch size for affine transformations; must be a power of 2 (default/max: 4096)
-w WINDOW_SIZE window size for ecmult table (default: 16); uses about 3 * 2^w KiB memory on startup, but only about 2^w KiB once the table is built
-m FILE load ecmult table from FILE; the ecmtabgen tool can build such a table
-v verbose - display cracking progress
-h show this help
I don't understand how to use the -k and -n flags. Anyone have experience with this? Thanks. p.s. I am trying to break the wallets which are open for challenges, and the bitcoin eater address, if you are so concerned. If I use it like this: /root/brainflayer/./brainflayer -v -I 0000000000000000000000000000000000000000000000000000000000000001 -b example.blf -f example.bin -k 1 -n 4
I get this: Invalid '-n' argument, remainder '4' must be <= modulus '0'. So what am I missing here? I tried asking him, I guess, but he is super busy.
|
|
|
There are several ways of solving your problem, but even with a thorough explanation you will have to handle some difficult things!
1. Connect to the bitcoin network directly, parse all messages, verify the blocks and insert all the transactions contained within them into the MySQL db. This is going to be a bit hard, since you have to write code to connect to the network.
2. Run bitcoind side-by-side with the MySQL db and use some combination of the getblockcount, getblockhash, getblock, gettransaction RPC commands and insert the transaction data into the MySQL db. You can run a cron on this every 5 minutes or so and you should have a fairly up-to-date MySQL db. This comes at the cost of considerable disk space, since you'll need to have the entire blockchain for bitcoind (30 gigs and growing).
3. Connect to some other service and get the data that way. Blockchain.info is the obvious one, and they have a wonderful API for getting all the new data you need. Unfortunately, you'll have to trust them to provide the right data. There are also electrum servers, which provide data using stratum, and ABE, and some others. Heck, if you can get someone to open up their RPC port (8332) on bitcoind, you can connect to their bitcoind and use the same commands as #2. The drawback here is that you have to trust a third party in some way, shape or form, though, of course, you can write your own verification code.
It's more like 150 gigs.
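Option 2 above (polling bitcoind over JSON-RPC from a cron job) could be sketched like this; the URL, credentials, and the MySQL insert are placeholders, not values from the thread:

```python
import base64
import json
import urllib.request

RPC_URL = "http://127.0.0.1:8332"  # bitcoind's default mainnet RPC port

def rpc_payload(method, params=None):
    # JSON-RPC 1.0 request body in the shape bitcoind expects
    return {"jsonrpc": "1.0", "id": "sync", "method": method,
            "params": params or []}

def rpc_call(method, params=None, user="rpcuser", password="rpcpass"):
    # issue one authenticated RPC call and return its result field
    req = urllib.request.Request(
        RPC_URL, data=json.dumps(rpc_payload(method, params)).encode())
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + auth)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]

def sync_new_blocks(last_height):
    # walk from the last height we processed up to the current tip
    tip = rpc_call("getblockcount")
    for height in range(last_height + 1, tip + 1):
        block = rpc_call("getblock", [rpc_call("getblockhash", [height]), 2])
        for tx in block["tx"]:
            pass  # INSERT the transaction into MySQL here
    return tip
```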
|
|
|
Previously I could, but I no longer keep track of UTXOs.
I only keep track of:
address, hash160, unspent output count, amount.
so every vout is +1
and every vin is -1
same goes with amount.
Every vout amount is old amount + new amount;
every vin amount is old amount - new amount.
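A minimal sketch of the counting scheme above (field names are illustrative, not from the poster's schema):

```python
from collections import defaultdict

# each vout adds an unspent output and its value; each vin spends one
balances = defaultdict(lambda: {"utxo_count": 0, "amount": 0})

def apply_vout(addr, satoshis):
    balances[addr]["utxo_count"] += 1
    balances[addr]["amount"] += satoshis

def apply_vin(addr, satoshis):
    balances[addr]["utxo_count"] -= 1
    balances[addr]["amount"] -= satoshis

apply_vout("1ExampleAddr", 5000)
apply_vout("1ExampleAddr", 2000)
apply_vin("1ExampleAddr", 5000)
# balances["1ExampleAddr"] is now {"utxo_count": 1, "amount": 2000}
```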
@btctousd81 How do you read segwit addresses?
You mean bech32 (starting with "bc1")? Then I don't think bitcoin-core supports it yet for RPC, so I can't read/parse those addresses using RPC. But if you mean addresses starting with a "3", those are P2SH addresses, and those can be returned by the bitcoin-core JSON-RPC.
|
|
|
@btctousd81 can you provide UTXOs for each address?
Previously I could, but I no longer keep track of UTXOs. I only keep track of: address, hash160, unspent output count, amount. So every vout is +1 and every vin is -1; same goes for amount. Every vout amount is old amount + new amount; every vin amount is old amount - new amount.
|
|
|
Yes, I can do Base58 to ripemd160.
You do not understand what you are talking about. It is possible in one direction: when you know the ripemd160, you can make the base58, not vice versa. Or have you invented a reverse method?
Alright, let's not complicate it. I can convert 1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa to 62e907b15cbf27d5425399ebf6f0fb50ebb88f18 AND I can convert 62e907b15cbf27d5425399ebf6f0fb50ebb88f18 to 1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa. Sorry for the confusion, I am not very good with the technical words ripemd and base58. Hope this helps.
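Both directions are indeed possible, because a base58 address is just Base58Check(version byte + hash160). A plain-Python sketch of the standard algorithm (my code, not the poster's):

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def address_to_hash160(addr):
    # Base58Check decode a standard 25-byte address payload
    n = 0
    for c in addr:
        n = n * 58 + B58.index(c)
    raw = n.to_bytes(25, "big")          # version(1) + hash160(20) + checksum(4)
    payload, checksum = raw[:-4], raw[-4:]
    if hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] != checksum:
        raise ValueError("bad checksum")
    return payload[1:].hex()

def hash160_to_address(h160_hex, version=0x00):
    # Base58Check encode: version byte 0x00 for P2PKH, 0x05 for P2SH
    payload = bytes([version]) + bytes.fromhex(h160_hex)
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    raw = payload + checksum
    n = int.from_bytes(raw, "big")
    s = ""
    while n:
        n, r = divmod(n, 58)
        s = B58[r] + s
    pad = len(raw) - len(raw.lstrip(b"\x00"))  # each leading zero byte -> '1'
    return "1" * pad + s

print(address_to_hash160("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"))
# 62e907b15cbf27d5425399ebf6f0fb50ebb88f18
```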
|
|
|
@btctousd81 you can convert Base58 back to ripemd160?
Yes, I can do Base58 to ripemd160 and the other way around too. Thanks.
|
|
|
Sorry for the stupid question, but what does it do?
I am new to PowerShell.
Thanks
|
|
|
What bitcoin tool do you use to get 'address to hash160'? Are you working via bitcoind RPC, not directly with the blockchain disk data?
Initially, when I needed to convert 20 million+ bitcoin addresses to hash160, I used bitcoin-tool.
ref: https://github.com/matja/bitcoin-tool
ref: https://github.com/matja/bitcoin-tool/issues/20
But for newer block data I am using PHP code to calculate the hash160 from the address. I am using bitcoind RPC and not reading the blockchain disk data. A cronjob runs my PHP script every 6 hours, I guess, and parses all new blocks and inserts them into MySQL.
|
|
|
here it is, latest dump
[root@btcnode tmp]# head btctousd81_29jan2018.csv
1111111111111111111114oLvT2,0000000000000000000000000000000000000000,6531128743
111111111111111111112BEH2ro,000000000000000000000000000000000000000a,10940
111111111111111111112xT3273,0000000000000000000000000000000000000011,5340
1111111111111111111141MmnWZ,000000000000000000000000000000000000001a,5340
111111111111111111114ysyUW1,0000000000000000000000000000000000000023,5340
11111111111111111111BZbvjr,0000000000000000000000000000000000000001,1028212
Can you please sort it by the balance column next time? Thanks!
Yes, I can do that. You can do it too, by loading it into MySQL or any other db, or loading it into Excel, and sorting by amount.
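Sorting the dump yourself is a short script as well; a sketch assuming the address,hash160,amount layout shown in this thread:

```python
import csv

def sort_by_balance(in_path, out_path):
    # sort the address,hash160,amount CSV by amount (satoshis), descending
    with open(in_path, newline="") as f:
        rows = list(csv.reader(f))
    rows.sort(key=lambda r: int(r[2]), reverse=True)
    with open(out_path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
```

Note this loads the whole file into memory; for a ~6 GB dump, a streaming tool like GNU `sort -t, -k3,3nr file.csv` is the more practical choice.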
|
|
|
Do you mean the magic number? The one which is 8 hex chars long and starts the block file?
|
|
|
here it is, latest dump: https://www.transfernow.net/51dss251ddg5
format: btc address,hash160,amount in satoshis
[root@btcnode tmp]# head btctousd81_29jan2018.csv
1111111111111111111114oLvT2,0000000000000000000000000000000000000000,6531128743
111111111111111111112BEH2ro,000000000000000000000000000000000000000a,10940
111111111111111111112xT3273,0000000000000000000000000000000000000011,5340
1111111111111111111141MmnWZ,000000000000000000000000000000000000001a,5340
111111111111111111114ysyUW1,0000000000000000000000000000000000000023,5340
11111111111111111111BZbvjr,0000000000000000000000000000000000000001,1028212
11111111111111111111CJawggc,0000000000000000000000000000000000000064,1548
11111111111111111111HV1eYjP,0000000000000000000000000000000000000092,2730
11111111111111111111HeBAGj,0000000000000000000000000000000000000002,5481
11111111111111111111QekFQw,0000000000000000000000000000000000000003,1
[root@btcnode tmp]# tail btctousd81_29jan2018.csv
1DpBda3jw6x2rbkSJUFrBfUPKgFbZsr5t9,8c8deefad99de1daffac1446a01132266c9d58fa,35264328
1JtY892wdpUfeVqg3nH3fed2whZBi9xTf,036217897bf7eb792d129b962e8d478e2030ac94,19446553
19b8mLDpqbUwZCZ8Sa9TbMj2ofhcUwc5HF,5e35a15c10dd14e888b58320f737142c48e2eea6,3366861
15SQ4c9BRsR77hbBLBXxGi8h7V7pDFtShG,30ae274c7101a903e1cce0d429383b2d680c947f,10524100
1HB4EbQygbDBqj4DeRWrEHvqEyPdW2k9B7,b168df40270271f6d71e2cd961e40f58e86f7d9e,2882
1H9KeUz92d3DwdTdnH2dR4tjF9Q1ma8uHW,b114e7c464188535cdea560ac484e609d92e07ee,2182583
1GxScqUcg5MZRrkQQeTziivX6CfVTywCXh,af06263ce147c366a49e12964e50f7106e8368e1,3294
3BceAG61bUsfBxb64JpMKB5TuE1mqzPkao,6cdd6d3eb7084e3487eadb47bcec761c87ca966e,540545
1LsweYaesdzt86Ze2WSQQHso5AjoNDRikv,da0cce25e3b57eaa2822ca328a5581a9a694e2ee,3739000
15ozPLgWzBokpRUbNecatNqqDbfpADFtUb,34c38a43d285e03e0a0b4bf65ca0a6f9ae8c97e6,349051
|
|
|
edit: here it is https://transfer.sh/xGs0K/btctousd81-12jan2018.csv.7z
Hi @btctousd81, is there still a working link to download this? I would love a copy, and my current full node isn't powerful enough to generate it in a reasonable time. Either way, thanks!
Give me a few hours, I'll upload it tonight, so most probably I will post the updated link tomorrow. Thanks.
|
|
|
|