Bitcoin Forum

Bitcoin => Development & Technical Discussion => Topic started by: zielar on April 27, 2022, 01:00:37 AM



Title: Checking brainwallet
Post by: zielar on April 27, 2022, 01:00:37 AM
Hello,

I am looking for an efficient method to scan a large dictionary (about 1 GB) and check whether its phrases have ever been used as brainwallets. Of course, I don't expect any of the phrases to hold anything; I am asking for statistical purposes, based on a private phrase database.
I was able to find https://github.com/j4rj4r/Bitcoin-Brainwallet-Bruteforce, but the checking speed is not satisfactory (I know... due to API limitations), it does not report whether a phrase was ever used as a brainwallet at all, and in general I have doubts as to whether that script even works.
It doesn't have to be Python... it could also be C or C++.

Regards


Title: Re: Checking brainwallet
Post by: pooya87 on April 27, 2022, 02:36:22 AM
Get Bitcoin Core's source code and modify the database part to create an additional index while saving each block, storing every output that was ever used. It could be in the form of a hash of the output script, so that searching is simpler.
Fully sync by downloading and indexing the blockchain.
Then run your code to hash each item in the dictionary to get the key, derive the outputs, hash them, and search for them in the database you created.

;D
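The derivation step above can be sketched in Python. This is a minimal, unoptimized illustration of the passphrase -> private key -> public key pipeline only (a pure-Python secp256k1 multiplication, far too slow for a 1 GB dictionary); it stops before the hash160/base58 address step, since RIPEMD160 support in hashlib varies between builds.

Code:
```python
import hashlib

# secp256k1 curve parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    """Affine point addition/doubling on secp256k1 (None = point at infinity)."""
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        lam = (3 * a[0] * a[0]) * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def ec_mul(k, point):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, point)
        point = ec_add(point, point)
        k >>= 1
    return acc

def brainwallet(passphrase):
    """passphrase -> (privkey hex, compressed pubkey hex, uncompressed pubkey hex)."""
    priv = hashlib.sha256(passphrase.encode()).digest()   # the whole "derivation"
    x, y = ec_mul(int.from_bytes(priv, "big") % N, G)
    uncompressed = b"\x04" + x.to_bytes(32, "big") + y.to_bytes(32, "big")
    compressed = (b"\x03" if y & 1 else b"\x02") + x.to_bytes(32, "big")
    # The address is base58check(version + ripemd160(sha256(pubkey))), which is
    # what would be matched against the index described above.
    return priv.hex(), compressed.hex(), uncompressed.hex()

priv, comp, uncomp = brainwallet("correct horse battery staple")
print(priv)  # SHA256 of the passphrase is the private key
```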


Title: Re: Checking brainwallet
Post by: nc50lc on April 27, 2022, 04:24:59 AM
Try if BTCRecover is faster than that tool: github.com/3rdIteration/btcrecover (https://github.com/3rdIteration/btcrecover)

The command should look like this if you already have a list of possible passphrases:
Code:
btcrecover --brainwallet --addresses 18wASW... --passwordlist dictionary.txt


Title: Re: Checking brainwallet
Post by: ABCbits on April 27, 2022, 11:28:38 AM
Get Bitcoin Core's source code and modify the database part to create an additional index while saving each block, storing every output that was ever used. It could be in the form of a hash of the output script, so that searching is simpler.
Fully sync by downloading and indexing the blockchain.
Then run your code to hash each item in the dictionary to get the key, derive the outputs, hash them, and search for them in the database you created.

;D

Alternatively, run both Bitcoin Core and a block explorer (such as mempool.space[1]) or an Electrum server (such as Fulcrum[2]). After the initial sync/indexing is done, you could write a script which reads the dictionary, generates addresses, and checks address history on your own block explorer/Electrum server. If you decide to use an Electrum server, you'll want to check the Electrum protocol[3] to learn how to get address history/balance.

[1] https://github.com/mempool/mempool (https://github.com/mempool/mempool)
[2] https://github.com/cculianu/Fulcrum (https://github.com/cculianu/Fulcrum)
[3] https://electrumx.readthedocs.io/en/latest/index.html (https://electrumx.readthedocs.io/en/latest/index.html)
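For reference, the Electrum protocol[3] identifies an address by its script hash: the SHA256 of the output script, byte-reversed, hex-encoded. A sketch of building the blockchain.scripthash.get_history request (the all-zero hash160 is a placeholder; actually sending the line requires a running server such as Fulcrum):

Code:
```python
import hashlib
import json

def p2pkh_script(hash160: bytes) -> bytes:
    """P2PKH output script: OP_DUP OP_HASH160 <20 bytes> OP_EQUALVERIFY OP_CHECKSIG."""
    return b"\x76\xa9\x14" + hash160 + b"\x88\xac"

def electrum_scripthash(script: bytes) -> str:
    """Electrum protocol script hash: sha256 of the script, byte-reversed, as hex."""
    return hashlib.sha256(script).digest()[::-1].hex()

def history_request(script: bytes, request_id: int = 1) -> str:
    """Newline-terminated JSON-RPC line asking for the script's transaction history."""
    return json.dumps({
        "id": request_id,
        "method": "blockchain.scripthash.get_history",
        "params": [electrum_scripthash(script)],
    }) + "\n"

# Placeholder all-zero hash160; a real scan would derive it from each passphrase.
script = p2pkh_script(bytes(20))
print(history_request(script).strip())
```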


Title: Re: Checking brainwallet
Post by: zielar on April 27, 2022, 02:05:21 PM
Try if BTCRecover is faster than that tool: github.com/3rdIteration/btcrecover (https://github.com/3rdIteration/btcrecover)

The command should look like this if you already have a list of possible passphrases:
Code:
btcrecover --brainwallet --addresses 18wASW... --passwordlist dictionary.txt

Thanks a lot.
In the end, I was able to use this tool with the command

Code:
python btcrecover.py --brainwallet --passwordlist myfile.txt --addressdb addresses-BTC-2011-to-2021-03-31.db



Title: Re: Checking brainwallet
Post by: zielar on April 27, 2022, 03:28:20 PM
Ultimately, btcrecover does not live up to expectations. The search ends as soon as it finds the first address. Worse, the first address it found:

NOTE Brainwallet Found using UNCOMPRESSED address
Password found: 'yellowexcalib'

When I check the alleged phrase on the blockchain, neither the uncompressed nor the compressed address has EVER had any transaction.
The --addr-limit option has no effect.
The --autosave option doesn't work either.
Good concept, but I need credible results; here the supposed match comes out of nowhere, and I need all results reported, not just the first one.


Title: Re: Checking brainwallet
Post by: PawGo on April 27, 2022, 05:59:13 PM
What performance do you expect to get?
Is it a problem to launch one program instance per GPU, or do you need something that uses many GPUs at the same time?
How big an address database do you expect to have?


Title: Re: Checking brainwallet
Post by: zielar on April 28, 2022, 02:21:10 AM
What performance do you expect to have?
Is it a big problem to launch program per gpu or you need something for many gpus at the same time?
How big address database do you expect to have?

I don't need the GPU to be involved. My point is simply to check my dictionary unattended. I have 3 GB of strings. I divided them into 3 files of 1 GB each (I can even split them further), and I want each line to be treated as a brainwallet passphrase and checked for whether the addresses derived from it (compressed, uncompressed, P2SH) were ever used. btcrecover is really efficient, but it doesn't work as it should (i.e. it points to empty addresses that were never used [I don't know why?], and stops when it finds its first match). It scans my 1 GB file in 10 minutes without a GPU, which is not surprising, because the work I want done is not a complicated calculation. Even if it took half a day - let it run... as long as it works and, after running the command, that half a day produces a file of lines in the form
PHRASE FROM MY DICTIONARY | WALLET WITH HISTORY | PRIVATE KEY

that's all... :-)


Title: Re: Checking brainwallet
Post by: COBRAS on April 28, 2022, 02:55:36 AM
-snip-

Did you find anything?

Public password lists have been reused in brute-force runs many, many times...


Title: Re: Checking brainwallet
Post by: PawGo on April 28, 2022, 06:51:38 AM
Ah, OK, so you want something really simple.
I did something like that for myself: just reading lines from a file and checking them against a local database of funded addresses.
If you want to check whether an address was used at any time in the past, it is just a matter of which DB you load - take whatever you want from Loyce ;)
I can share the code, or modify it for you if you want.
I used a local PostgreSQL DB.

Code:
package wif;

import org.bitcoinj.core.Base58;
import org.bitcoinj.core.ECKey;
import org.bitcoinj.core.LegacyAddress;
import org.bitcoinj.core.NetworkParameters;
import org.bitcoinj.params.MainNetParams;
import java.io.*;
import java.security.InvalidAlgorithmParameterException;
import java.security.InvalidKeyException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.sql.*;
import java.util.Arrays;
import java.util.Date;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Brain {

    static final NetworkParameters NETWORK_PARAMETERS = MainNetParams.get();
    static final int STATUS_PERIOD = 1000*60*1;
    static MessageDigest messageDigest;
    static Statement statement;

    static boolean RESULT = false;

    public static void main(String[] args) throws NoSuchAlgorithmException, InvalidAlgorithmParameterException, InvalidKeyException, IOException, InterruptedException {

        db();

        messageDigest = MessageDigest.getInstance("SHA-256");
        String filename=args[0];
        int skip = 0;
        if (args.length>1){
            skip = Integer.valueOf(args[1]);
        }
        FileReader fileReader = null;
        try {
            fileReader = new FileReader(filename);
        } catch (FileNotFoundException e) {
            System.err.println("not found: " + filename);
            System.exit(-1);
        }
        long count = 0;
        long start = 0;
        System.out.println("Brain start");
        try {
            BufferedReader bufferReader = new BufferedReader(fileReader);
            String line;
            //test
            System.out.println(checkAddressDatabase("16jY7qLJnxb7CHZyqBP8qca9d51gAjyXQN"));
            while ((line = bufferReader.readLine()) != null) {
                if (line.trim().isEmpty()){
                    continue;
                }
                count++;
                if (skip>0){
                    if (count<skip) {
                        continue;
                    }
                    skip = 0;
                }
                test(line);
                test(line.toLowerCase());
                test(line.toUpperCase());
                if (line.endsWith(",")){
                    line=line.substring(0, line.length()-1);
                    test(line);
                    test(line.toLowerCase());
                    test(line.toUpperCase());
                }
                if (System.currentTimeMillis()-start > STATUS_PERIOD){
                    System.out.println("UP!"+count+" "+ line + " " + (new Date()));
                    start = System.currentTimeMillis();
                    count = 0;
                }
            }
        }catch (Exception e){
            System.err.println(e.getLocalizedMessage());
            System.exit(-1);
        }
        System.out.println("Brain end");
    }

    private static void db() {

        try{
            Connection connection = DriverManager.getConnection("jdbc:postgresql://localhost:5433/bitcoin", "postgres", "pass1234");
            System.out.println("Connected to PostgreSQL database!");
            statement = connection.createStatement();
        }catch (Exception e){
            System.out.println(e.getMessage());
            System.exit(-1);
        }
    }

    private static void test(String text) throws IOException, SQLException {
        messageDigest.update(text.getBytes("UTF-8"));
        ECKey ecKey = new ECKey(messageDigest.digest(), (byte[])null);
        String address = LegacyAddress.fromKey(NETWORK_PARAMETERS, ecKey).toString();
        System.out.println(address+":"+ecKey.getPrivateKeyAsWiF(NETWORK_PARAMETERS));
        if (1==1){ // debug guard: remove this early return to actually run the database checks below
            return;
        }
        if (checkAddressDatabase(address)){
            System.out.println(address);
            System.out.println(text + "|" + ecKey.getPrivateKeyAsHex()+"|"+ecKey.getPrivateKeyAsWiF(NETWORK_PARAMETERS)+"|"+address);
            FileWriter myWriter = new FileWriter("brainResult.txt", true);
            myWriter.write(text + "|" + ecKey.getPrivateKeyAsHex()+"|"+ecKey.getPrivateKeyAsWiF(NETWORK_PARAMETERS)+"|"+address);
            myWriter.write("\r\n");
            myWriter.close();
        }
        ecKey = org.bitcoinj.core.DumpedPrivateKey.fromBase58(NETWORK_PARAMETERS, getCompressedWif(ecKey.getPrivateKeyAsHex())).getKey();
        address = LegacyAddress.fromKey(NETWORK_PARAMETERS, ecKey).toString();
        if (checkAddressDatabase(address)){
            System.out.println(text + "|" + ecKey.getPrivateKeyAsHex()+"|"+ecKey.getPrivateKeyAsWiF(NETWORK_PARAMETERS)+"|"+address);
            FileWriter myWriter = new FileWriter("brainResult.txt", true);
            myWriter.write(text + "|" + ecKey.getPrivateKeyAsHex()+"|"+ecKey.getPrivateKeyAsWiF(NETWORK_PARAMETERS)+"|"+address);
            myWriter.write("\r\n");
            myWriter.close();
        }
    }

    private static String getCompressedWif(String hex){
        String string = "80"+hex+"01";
        byte[] digest = messageDigest.digest(hexStringToByteArray(string));
        String checksum  = bytesToHex(messageDigest.digest(digest)).substring(0,8);
        byte[] data = hexStringToByteArray(string + checksum);
        return Base58.encode(data);
    }

    private static boolean checkAddressDatabase(String address) throws SQLException {
        ResultSet resultSet = statement.executeQuery("SELECT balance FROM public.adress WHERE address='"+address+"'");
        while (resultSet.next()) {
            return true;
        }
        return false;
    }

    private static String bytesToHex(byte[] bytes) {
        StringBuffer result = new StringBuffer();
        for (byte b : bytes) result.append(Integer.toString((b & 0xff) + 0x100, 16).substring(1));
        return result.toString();
    }

    private static byte[] hexStringToByteArray(String s) {
        int len = s.length();
        byte[] data = new byte[len / 2];
        for (int i = 0; i < len; i += 2) {
            data[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4)
                    + Character.digit(s.charAt(i+1), 16));
        }
        return data;
    }
}


Title: Re: Checking brainwallet
Post by: MrFreeDragon on April 30, 2022, 08:41:31 PM
-snip-

I was able to find https://github.com/j4rj4r/Bitcoin-Brainwallet-Bruteforce, but the checking speed is not entirely satisfactory (I know ... due to API limitations)

-snip-
Regards

You should not use an API and check the wallets online. The only way to check your passphrase database is to compare the generated brainwallets with a local database (take it from the daily Blockchair dumps - https://gz.blockchair.com/bitcoin/addresses - or take it from Loyce).


Title: Re: Checking brainwallet
Post by: PrimeNumber7 on May 01, 2022, 08:34:16 PM
If all you need to do is check whether a particular address has ever been used, follow this procedure:
*generate a set of all addresses that have ever received a transaction
*convert each brainwallet passphrase into a private key (which you then convert into an address)
*compare each derived address against the set from step 1

The list of addresses that have ever received a transaction is very large, so you will need a hash table that can handle a sufficiently large number of items, or else your runtime will suffer.

Assuming few collisions in your set, you should be able to test each passphrase in O(1) time.
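The three steps can be sketched like this; Python's built-in set is a hash table with average O(1) membership tests, and the file name and addresses below are placeholders:

Code:
```python
# Load every address that has ever received coins into an in-memory set,
# then keep only the candidate addresses that appear in it.
def load_used_addresses(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def check_candidates(candidates, used):
    """Yield the candidate addresses that appear in the used-address set."""
    for addr in candidates:
        if addr in used:            # average O(1) hash-table lookup
            yield addr

# Placeholder data standing in for the real address dump and derived addresses.
used = {"1ExampleUsedAddrA", "1ExampleUsedAddrB"}
hits = list(check_candidates(["1ExampleUsedAddrA", "1ExampleUnused"], used))
print(hits)  # -> ['1ExampleUsedAddrA']
```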


Title: Re: Checking brainwallet
Post by: LoyceV on May 02, 2022, 07:05:35 AM
The only way to check your passphrases database is to compare the generated brain wallets with your local database (take it from daily blockchair dumps - https://gz.blockchair.com/bitcoin/addresses) or take from Loyce.
That assumes you'll find a funded Bitcoin address, which is unlikely. By checking against the full list of all Bitcoin addresses ever used (https://bitcointalk.org/index.php?topic=5265993.0), you may find brainwallets that have been used in the past.
I'm not sure if brainwallets were ever used for Segwit addresses.



Has anyone figured this out using bitcoin-tool? I've used it in the past; performance isn't bad, but the command line is quite complicated to get right.

you will need to use a hashing algorithm that can handle a sufficiently large number of items
I didn't expect SHA256 to be so slow. I just tested a simple bash loop and got less than 1000 hashes per second. That won't scale to 40 billion inputs.
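For what it's worth, a bash loop is slow mostly because it forks one sha256sum process per line; hashing in-process is orders of magnitude faster. A quick sketch (throughput varies by machine):

Code:
```python
import hashlib
import time

lines = [("phrase-%d" % i).encode() for i in range(200_000)]

start = time.perf_counter()
digests = [hashlib.sha256(line).hexdigest() for line in lines]
elapsed = time.perf_counter() - start

# A plain hashlib loop typically reaches millions of hashes per second,
# versus <1000/s when spawning a separate process per hash.
print("%.0f hashes/s" % (len(lines) / elapsed))
```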


Title: Re: Checking brainwallet
Post by: vjudeu on May 02, 2022, 07:19:11 AM
Quote
I didn't expect SHA256 to be so slow. I just tested a simple bash loop, and get less than 1000 hashes per second. That won't scale well to 40 billion inputs.
SHA256 is slow for you only because you probably used an unoptimized version. If you optimize it, you could reach 40-bit blocks on a CPU. It is possible that even Satoshi had that power in 2009, because he started from 40 bits in a pre-release version.


Title: Re: Checking brainwallet
Post by: ABCbits on May 02, 2022, 11:57:31 AM
you will need to use a hashing algorithm that can handle a sufficiently large number of items
I didn't expect SHA256 to be so slow. I just tested a simple bash loop, and get less than 1000 hashes per second. That won't scale well to 40 billion inputs.

As @vjudeu said, you used an unoptimized tool. Use an optimized tool such as hashcat instead. Even on a single-core VPS, I managed to get ~5000 kH/s.


Title: Re: Checking brainwallet
Post by: PawGo on May 02, 2022, 04:50:27 PM
-snip-

Currently I'm working on a piece of software for someone who has lost access to his brainwallet. He knows how he produced the phrase, but as it contains some (unknown) number, we must check many possible phrases. The list of potential addresses is very small (just a few), so it is not what slows us down - it is rather the priv->pub generation. The initial SHA256 and the later Hash160 are very fast. The main problem would be comparing the generated hash160 against a huge database of addresses (and of course generating both compressed and uncompressed keys). At this stage of the project we have a speed of 47 Mkeys/s on an RTX 3060 and a little more than 100 Mkeys/s on a 3090 - but we are progressing every day ;)


Title: Re: Checking brainwallet
Post by: garlonicon on May 02, 2022, 05:05:18 PM
Quote
Currently I work on piece of software for someone who has lost access to his brainwallet.
I wonder if it could handle nullius' brainwallet; I mean transaction 000000000fdf0c619cd8e0d512c7e2c0da5a5808e60f12f1e0d01522d2986a51. Because it is a brainwallet, I guess it could be just SHA-256("something"), then tweaked to get this low transaction hash. But I still don't know the answer.

Quote
The main problem would be to compare generated hash160 against huge database of addresses
In this challenge, we all know that the brainwallet funds are on bc1qt2mdkehmphggajer3ur3g8l754scj4fdrmw3rn. I guess your software should also have an option to set the destination address directly, without checking all funded wallets.


Title: Re: Checking brainwallet
Post by: PawGo on May 02, 2022, 05:09:37 PM
Quote
The main problem would be to compare generated hash160 against huge database of addresses
In this challenge, we all know that the brainwallet is on bc1qt2mdkehmphggajer3ur3g8l754scj4fdrmw3rn. I guess your software should also have an option to set destination address directly, without checking all funded wallets.

This is how it works now: just a check against around 15 addresses; it is not intended to scan the whole blockchain. Maybe in the future ;-)
As you said, testing against a few thousand possible addresses (or in fact against a list of hash160s) slows the process down a lot. Maybe some kind of trustworthy Bloom filter could help, but for now it is not a real need for me.


Title: Re: Checking brainwallet
Post by: CrunchyF on May 03, 2022, 09:12:34 AM
-snip-


I have developed a tool like this with about the same performance as PawGo's (80 Mkeys/s on an RTX 3070).
The solution for checking against a big list of hash160s is to create a Bloom filter.
For my experiment I built a Bloom filter over about 80 million hash160s (roughly the size of the UTXO set).
The binary is about 1.2 GB with an error rate of 1e-16 (meaning you will get a false positive about once every 1e16 lookups in the Bloom filter).
You just have to load this Bloom filter into GPU RAM to speed up the process.
The lookup in the filter is very fast compared to the SHA256(...) -> EC multiplication -> hash160-of-the-public-key pipeline.
So with my tool you can check against about 100M hash160s at the speed of 80M SHA256(password candidate) per second on an RTX 3070.

If you can define the pattern of the brainwallet you want to find precisely, I can help you.
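The filter described above can be sketched on the CPU side; the bit count and hash count below are toy values, not the 1.2 GB production filter, and the items are placeholder byte strings:

Code:
```python
import hashlib

class BloomFilter:
    """Toy Bloom filter with k bit positions derived from salted SHA256."""
    def __init__(self, num_bits, num_hashes):
        self.m, self.k = num_bits, num_hashes
        self.bits = bytearray((num_bits + 7) // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # False = definitely absent; True = present, or a rare false positive.
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

bf = BloomFilter(num_bits=16_384, num_hashes=4)
for i in range(100):
    bf.add(b"hash160-%d" % i)            # placeholder hash160 values
print(bf.might_contain(b"hash160-42"))   # -> True
```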

 


Title: Re: Checking brainwallet
Post by: PrimeNumber7 on May 03, 2022, 09:18:24 AM
you will need to use a hashing algorithm that can handle a sufficiently large number of items
I didn't expect SHA256 to be so slow. I just tested a simple bash loop, and get less than 1000 hashes per second. That won't scale well to 40 billion inputs.
When creating a set, you generally do not want to demand a hashing scheme that produces zero collisions for any number of inputs.

A general solution is to use a linked list per bucket (chaining) when there are collisions. This gives a worse big-O bound in theory; in practice, however, it should improve actual runtime.
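A minimal sketch of the chaining idea (Python's own set/dict resolve collisions internally via open addressing, so this is purely illustrative):

Code:
```python
class ChainedSet:
    """Hash set with a fixed bucket count, resolving collisions via per-bucket lists."""
    def __init__(self, num_buckets=1024):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def add(self, key):
        chain = self._bucket(key)
        if key not in chain:        # linear scan over the (short) chain
            chain.append(key)

    def __contains__(self, key):
        return key in self._bucket(key)

s = ChainedSet(num_buckets=8)       # tiny bucket count to force collisions
for word in ["alpha", "beta", "gamma", "delta"]:
    s.add(word)
print("beta" in s, "omega" in s)    # -> True False
```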


Title: Re: Checking brainwallet
Post by: ymgve2 on May 03, 2022, 01:00:32 PM
-snip-
So with my tool u can check about 100M hash160 at the speed of 80M sha256(password candidate) on a RTX3070.

 

That's a pretty impressive speed - do you have any special tricks to make the EC multiplication fast?


Title: Re: Checking brainwallet
Post by: PawGo on May 03, 2022, 01:04:55 PM
Now I have 67 Mkeys/s on an RTX 3060 using a 22-bit table and the 32-bit version of the library (164 Mkeys/s on a 3090). I plan to migrate to the 64-bit version, since CUDA 11.6 supports __int128 natively. The stupid problem: 128-bit ints are still not supported by the Visual Studio compiler, so I will try to compile & launch it on Ubuntu under WSL (or just a pure Linux machine).


Title: Re: Checking brainwallet
Post by: CrunchyF on May 03, 2022, 07:01:53 PM
-snip-

That's a pretty impressive speed - do you have any special tricks to make the EC multiplication fast?

UFFF, OK, it was a lot of work, but I'm happy to share (open-source philosophy):

this is the best CUDA grid/block config I found for the RTX 3070 (48 SMs):

#define GRID_SIZE 1024*4*5
#define BLOCK_SIZE 256

I precompute a table of k·G values, 16*65535 points with x and y indexed by

0001->FFFF
0001 0000->FFFF 0000
0001 0000 0000->FFFF 0000 0000
...

I load these 2 tables (one for x, one for y) into shared memory of the GPU at the start of the program.

So for a 256-bit scalar multiplication you just have to decompose your 256-bit k (in k·G) into 16 parts of 16 bits and do 16 sequential point additions.
I do each addition in Jacobian coordinates, given by the relation (x,y,1) + (x',y',z') (the first term is from the EC table and the second term is the intermediate result of the addition process).
This way you avoid 16 slow inversions (I think this is the biggest speed-up trick).

The final inversion is done with fast modular inversion (delayed right shift by 62 bits), as in Jean-Luc Pons' Kangaroo program.
https://github.com/JeanLucPons/Kangaroo (https://github.com/JeanLucPons/Kangaroo)
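The decomposition generalizes to any window width. A pure-Python sketch with 4-bit windows (instead of the 16-bit windows above, to keep the table tiny), checked against plain double-and-add; it uses affine coordinates for clarity, whereas the GPU code keeps intermediates in Jacobian form precisely to avoid the modular inversion hidden in every affine addition here:

Code:
```python
# secp256k1
P = 2**256 - 2**32 - 977
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    """Affine addition/doubling (None = point at infinity); one inversion per call."""
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    lam = ((3 * a[0] * a[0]) * pow(2 * a[1], -1, P) if a == b
           else (b[1] - a[1]) * pow(b[0] - a[0], -1, P)) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def mul_naive(k, point):
    """Reference double-and-add."""
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, point)
        point = ec_add(point, point)
        k >>= 1
    return acc

W = 4                 # window width in bits (the post above uses 16)
WINDOWS = 256 // W

# table[i][d] = (d << (W*i)) * G, so a multiplication is WINDOWS lookups + additions
table = []
for i in range(WINDOWS):
    base = mul_naive(1 << (W * i), G)
    row, acc = [None], None
    for _ in range(2 ** W - 1):
        acc = ec_add(acc, base)
        row.append(acc)
    table.append(row)

def mul_windowed(k):
    """k*G as a sum of precomputed window multiples; zero digits are skipped."""
    acc = None
    for i in range(WINDOWS):
        acc = ec_add(acc, table[i][(k >> (W * i)) & (2 ** W - 1)])
    return acc

k = 0xDEADBEEFCAFEBABE1234567890ABCDEF
print(mul_windowed(k) == mul_naive(k, G))
```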



 


Title: Re: Checking brainwallet
Post by: CrunchyF on May 03, 2022, 07:33:01 PM
Now I have 67Mkeys on rtx3060 using 22bit table and 32bit version of library (164Mkeys on 3090). I plan migration to 64bit version, as since CUDA 11.6 __int128 is natively supported. The stupid problem - 128bit ints are still not supported by Visual Studio compiler, so I will try to compile & launch it in ubuntu at WSL (or just pure linux machine).

Very good performance, I think you will be faster than me on an RTX 3070.

What is your grid/block config?
What is the size in GPU memory of your 22-bit precomputed table? My 16-bit tables have a size of 33 MB.
How do you split your 256-bit k (in k·G) into 22-bit chunks?
Do you do your final inversion in a batch or directly in each thread?


Title: Re: Checking brainwallet
Post by: ymgve2 on May 04, 2022, 08:47:09 AM
I precompute a table of kG value of 16*65535 values with x and y indexed by

0001->FFFF
0001 0000->FFFF 0000
0001 0000 0000->FFFF 0000 0000
...

I load this 2 table one for x one for y in shared memory of the GPU at the start of the program

Interesting approach - it seems other solutions focus more on larger tables. Does the speed advantage of having the tables in shared memory outweigh the disadvantage of doing 15 additions versus, for example, 11 additions with 22-bit tables?

Here's a free tip that might gain some speed: instead of doing pure addition, you can do addition/subtraction from a "middle" value like 0x8000800080008000800080008000.... - you save one bit of table size, and negating a point is quite easy.
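That tip is signed-digit recoding: each window digit d in [0, 2^w) becomes a digit in roughly [-2^(w-1), 2^(w-1)] plus a carry into the next window, so the table only needs the positive multiples and point negation ((x, y) -> (x, P-y)) covers the rest. A sketch of the recoding itself, with an arbitrary example scalar and w = 4:

Code:
```python
def signed_digits(k, w, num_windows):
    """Recode k into base-2^w digits lying in [-(2^(w-1) - 1), 2^(w-1)], with carries."""
    half, full = 1 << (w - 1), 1 << w
    digits, carry = [], 0
    for i in range(num_windows):
        d = ((k >> (w * i)) & (full - 1)) + carry
        if d > half:               # fold the upper half into a negative digit
            d -= full
            carry = 1
        else:
            carry = 0
        digits.append(d)
    assert carry == 0, "k needs one spare window of headroom for the final carry"
    return digits

k = 0xBEEF1234                                   # arbitrary example scalar
digits = signed_digits(k, w=4, num_windows=9)    # 8 windows + 1 spare for the carry
# The recoding is exact: the signed digits recompose to k.
print(sum(d << (4 * i) for i, d in enumerate(digits)) == k)
```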


Title: Re: Checking brainwallet
Post by: CrunchyF on May 04, 2022, 09:43:09 AM

Interesting approach - it seems other solutions focus more on larger tables - does the speed advantage of having the tables in shared memory outweigh the disadvantage of doing 15 additions vs for example 11 additions with 22 bit tables?

Here's a free tip that might gain some speed: Instead of doing pure addition, you can do addition/subtraction from a "middle" value like 0x8000800080008000800080008000.... - you save one bit of table size, and negating a point is quite easy.

Yes, it is my first CUDA project, and I think an optimisation is possible using the different types of GPU memory (global and shared), because the access times differ.

To copy my tables into GPU memory I use the standard function:

cudaMemcpy(b, a, ..., cudaMemcpyHostToDevice);
so I'm not quite sure which type of GPU memory the tables end up in.

I use a 16-bit indexed table because it's easy to cast a 256-bit integer array into 16 parts with the (uint16_t *) operator, and you don't have to code a specific split function.

The cost of a function that splits 256 bits into 22-bit chunks is probably not negligible.

My two tables (x and y) have a size of 2*32 MB, so with a 22-bit table you would be around (2^6)*2*32 MB = 4096 MB... half of the memory of my RTX 3070!

But anyway, as you say, the cost of a lookup in a 4 GB table compared to a 32 MB one is probably (I don't really know) not the same.

I focused on a small table because I wanted to leave as much room as possible for storing the biggest Bloom filter possible.

The big optimisation would be coding a well-synchronised batch modular inversion, because with the Jean-Luc Pons code you are obliged to wait until every thread in the batch finishes its multiplication.
But the algorithm does not seem to be easy.

Quote
Here's a free tip that might gain some speed: Instead of doing pure addition, you can do addition/subtraction from a "middle" value like 0x8000800080008000800080008000.... - you save one bit of table size, and negating a point is quite easy.

Interesting... but only for a big table, no?


Title: Re: Checking brainwallet
Post by: PawGo on May 04, 2022, 09:46:58 AM

Yes, it is my first CUDA project, and I think an optimisation is possible using the different types of GPU memory (global and shared memory), because the access times are different.

To copy my tables into GPU memory I use the standard function:

cudaMemcpy(b, a, ..., cudaMemcpyHostToDevice);
so I'm not quite sure which type of GPU memory the tables end up in.


It depends on how you declare it in the device code.
Normally you write to global memory. If it is declared with the __constant__ directive it will be faster, but the size must be smaller.
__shared__ memory is available only within a single block, so you must rewrite to that memory each time you launch a kernel. It is also limited.


Title: Re: Checking brainwallet
Post by: ymgve2 on May 04, 2022, 11:04:57 AM
I focused on small tables because I wanted to keep a big chunk of memory free for the biggest bloom filter possible.

Bigger bloom filter is not necessarily better. A better approach is to have multiple smaller bloom filters based on different parts of the hash lookup. If a 512MB bloom filter has a false-positive rate of 0.01, a single 1024MB bloom filter will have a rate of about 0.005, but two independent 512MB bloom filters will have a combined rate of 0.01^2 = 0.0001.
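A toy simulation (my own sketch, with blake2b as a stand-in hash function and sizes far smaller than any real filter) illustrates the claim: querying two independently-hashed small filters and AND-ing the results rejects non-members better than one filter of twice the size:

```python
# Toy comparison: one 2x-size bloom filter vs. two independent 1x filters.
# blake2b with a salt is used as a convenient stand-in hash; a real filter
# would use a faster non-cryptographic hash.
import hashlib

def _positions(item, k, m_bits, salt):
    # k bit positions per item, derived from salted blake2b digests
    for i in range(k):
        d = hashlib.blake2b(f'{salt}:{i}:{item}'.encode(), digest_size=8).digest()
        yield int.from_bytes(d, 'big') % m_bits

def build(items, m_bits, k, salt):
    bits = bytearray(m_bits // 8)
    for it in items:
        for p in _positions(it, k, m_bits, salt):
            bits[p // 8] |= 1 << (p % 8)
    return bits

def query(bits, item, k, m_bits, salt):
    return all(bits[p // 8] & (1 << (p % 8)) for p in _positions(item, k, m_bits, salt))

items  = [f'key{i}' for i in range(1000)]
probes = [f'miss{i}' for i in range(20000)]   # guaranteed non-members

M = 4096                                       # bits in each small filter
big = build(items, 2 * M, 2, 'big')            # one filter, twice the size
f1  = build(items, M, 2, 'salt1')              # two independent small filters
f2  = build(items, M, 2, 'salt2')

fp_big = sum(query(big, p, 2, 2 * M, 'big') for p in probes) / len(probes)
fp_two = sum(query(f1, p, 2, M, 'salt1') and query(f2, p, 2, M, 'salt2')
             for p in probes) / len(probes)
print(f'one 2x filter: {fp_big:.4f}   two 1x filters (AND-ed): {fp_two:.4f}')
```

The key requirement is that the two small filters use independent hash functions (different salts here), so their false positives are uncorrelated and the rates multiply.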


Title: Re: Checking brainwallet
Post by: CrunchyF on May 04, 2022, 12:20:27 PM
I focused on small tables because I wanted to keep a big chunk of memory free for the biggest bloom filter possible.

Bigger bloom filter is not necessarily better. A better approach is to have multiple smaller bloom filters based on different parts of the hash lookup. If a 512MB bloom filter has a false-positive rate of 0.01, a single 1024MB bloom filter will have a rate of about 0.005, but two independent 512MB bloom filters will have a combined rate of 0.01^2 = 0.0001.

I have to experiment with that. It's interesting.

But look here: the false-positive rate versus the number of hash functions doesn't seem to follow a linear rule...
With this bloom filter calculator, if you increase the number of hash functions you can increase the false-positive rate too.


https://hur.st/bloomfilter/?n=80M&p=1.0E-7&m=&k=40 (https://hur.st/bloomfilter/?n=80M&p=1.0E-7&m=&k=40)

So what's the difference between one 1GB bloom filter with k=40 (for example) and two 512MB bloom filters with k=20 each?

I used this tool to find the minimum number of hash functions for the minimum false-positive rate at my desired number of entries and filter size.
So I'm not sure that if I cut my bloom filter in two, the no-hit probability will be squared.
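For what it's worth, the standard bloom-filter estimate, p ≈ (1 − e^(−kn/m))^k, answers this particular comparison directly. With n = 80M entries (as in the calculator link) and the per-filter k halved along with the size, the two layouts come out the same:

```python
# Compare one 1 GB filter with k=40 against two independent 512 MB filters
# with k=20 each, using the classic false-positive approximation.
from math import exp

def fpr(n, m_bits, k):
    # classic bloom-filter false-positive estimate: (1 - e^(-k*n/m))^k
    return (1 - exp(-k * n / m_bits)) ** k

n = 80_000_000                        # entries, as in the calculator link
one_big   = fpr(n, 8 * 1024**3, 40)   # one 1 GB filter (8 * 2^30 bits), k = 40
one_small = fpr(n, 4 * 1024**3, 20)   # one 512 MB filter, k = 20
two_small = one_small ** 2            # two independent 512 MB filters, AND-ed

print(one_big, one_small, two_small)
```

Since kn/m is identical in both cases, one_big = base^40 and two_small = (base^20)^2 are mathematically the same number: splitting the filter while halving k per filter changes nothing. The squaring argument above kicks in when each small filter keeps its own independently chosen hash functions, so whether a split beats a single double-size filter depends on how k is picked for each.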


Title: Re: Checking brainwallet
Post by: PawGo on May 04, 2022, 02:28:18 PM
I focused on small tables because I wanted to keep a big chunk of memory free for the biggest bloom filter possible.

Bigger bloom filter is not necessarily better. A better approach is to have multiple smaller bloom filters based on different parts of the hash lookup. If a 512MB bloom filter has a false-positive rate of 0.01, a single 1024MB bloom filter will have a rate of about 0.005, but two independent 512MB bloom filters will have a combined rate of 0.01^2 = 0.0001.

I have to experiment with that. It's interesting.


Indeed, I will try that too, in another project.
On the other hand, there is a number of false positives which we must accept and test ourselves.

Coming back to your previous question - I use (by default) 112 blocks (proc*4) and 384 threads, but I cannot guarantee those values are optimal.
Just now I switched from a static table (size declared at compile time) to a dynamically allocated one, which makes it possible to create bigger tables - 24-bit is the max for my 12GB card (the table takes more than 60% of memory, and 77% of memory is taken by the whole program). The performance gained between 22 and 24 bits is not shocking, but still worth considering.



Title: Re: Checking brainwallet
Post by: ymgve2 on May 05, 2022, 11:03:55 AM
I focused on small tables because I wanted to keep a big chunk of memory free for the biggest bloom filter possible.

Bigger bloom filter is not necessarily better. A better approach is to have multiple smaller bloom filters based on different parts of the hash lookup. If a 512MB bloom filter has a false-positive rate of 0.01, a single 1024MB bloom filter will have a rate of about 0.005, but two independent 512MB bloom filters will have a combined rate of 0.01^2 = 0.0001.

I have to experiment with that. It's interesting.


Indeed, I will try that too, in another project.
On the other hand, there is a number of false positives which we must accept and test ourselves.

Coming back to your previous question - I use (by default) 112 blocks (proc*4) and 384 threads, but I cannot guarantee those values are optimal.
Just now I switched from a static table (size declared at compile time) to a dynamically allocated one, which makes it possible to create bigger tables - 24-bit is the max for my 12GB card (the table takes more than 60% of memory, and 77% of memory is taken by the whole program). The performance gained between 22 and 24 bits is not shocking, but still worth considering.



How do you fit 24-bit tables into 8-ish GB? From my calculations it would be almost 12GB (64 bytes per point, 11 different tables). I assume you don't use compressed points, since recovering y would take too much time.
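A quick back-of-the-envelope check of that estimate (assuming plain uncompressed affine points at 64 bytes each and one table per 24-bit window of the scalar, as described above):

```python
# Memory estimate for 24-bit precomputed point tables over a 256-bit scalar.
bytes_per_point = 64          # 32-byte x + 32-byte y, uncompressed
windows = -(-256 // 24)       # ceil(256 / 24) = 11 windows per scalar
entries = 1 << 24             # 2^24 points per table

total = entries * bytes_per_point * windows
print(total / 2**30, 'GiB')   # 11.0 GiB, so "almost 12GB" checks out
```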


Title: Re: Checking brainwallet
Post by: NotATether on May 05, 2022, 12:53:26 PM
you will need to use a hashing algorithm that can handle a sufficiently large number of items
I didn't expect SHA256 to be so slow. I just tested a simple bash loop and got less than 1000 hashes per second. That won't scale well to 40 billion inputs.

Is it absolutely necessary to hash the inputs with SHA256, or can a different algorithm be used for the bloom filter, such that we can create a different hash type Z(inputs) = Y(SHA256(inputs)) and thus save computational time?

Of course, Z() and Y() do not need to be cryptographic, and it would be better if they weren't.

I'll need to check the academic archives to see if such a transformation of a hashed output has ever been explored (let alone discovered).
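As a side note, the bash-loop number above mostly measures process-spawn overhead (one fork/exec per hash), not SHA256 itself; hashing in-process is orders of magnitude faster. A rough timing sketch with Python's hashlib:

```python
# Time 100k in-process SHA256 calls; a bash loop spawning sha256sum per line
# pays fork/exec overhead on every hash, which dominates its <1000 H/s figure.
import hashlib
import time

N = 100_000
payloads = [b'password%d' % i for i in range(N)]

t0 = time.perf_counter()
for p in payloads:
    hashlib.sha256(p).digest()
elapsed = time.perf_counter() - t0
print(f'{N / elapsed:,.0f} SHA256/s in pure Python')
```

Even interpreted Python typically reaches well into the hundreds of thousands of hashes per second this way; compiled tools go far higher.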


Title: Re: Checking brainwallet
Post by: CrunchyF on May 05, 2022, 06:59:18 PM
Quote
Is it absolutely necessary to hash the inputs with SHA256, or can a different algorithm be used for the bloom filter, such that we can create a different hash type Z(inputs) = Y(SHA256(inputs)) and thus save computational time?

Of course, Z() and Y() do not need to be cryptographic, and it would be better if they weren't.

I'll need to check the academic archives to see if such a transformation of a hashed output has ever been explored (let alone discovered).


No, absolutely not. SHA256 is far too slow for a fast bloom filter.
The best option is MurmurHash (a non-cryptographic hash function).
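Worth noting: a bloom filter also doesn't need k separate hash computations. The usual trick (Kirsch-Mitzenmacher double hashing) derives all k indices from just two base hashes as h1 + i*h2. A sketch using 64-bit FNV-1a as a simple stand-in (MurmurHash isn't in the Python standard library; the seeding-by-XOR variant here is my own convenience, not a standard):

```python
# Double-hashing bloom filter sketch: k positions from two cheap hashes.
def fnv1a(data: bytes, seed: int) -> int:
    # 64-bit FNV-1a with the offset basis XOR-ed with a seed (illustrative)
    h = (0xcbf29ce484222325 ^ seed) & 0xFFFFFFFFFFFFFFFF
    for b in data:
        h = ((h ^ b) * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF
    return h

def indices(item: bytes, k: int, m_bits: int):
    # Kirsch-Mitzenmacher: all k positions from two base hash values
    h1 = fnv1a(item, 0)
    h2 = fnv1a(item, 0x9E3779B97F4A7C15) | 1   # force odd to avoid short cycles
    return [(h1 + i * h2) % m_bits for i in range(k)]

M, K = 1 << 20, 7
bits = bytearray(M // 8)
for word in [b'correct horse', b'battery staple', b'hunter2']:
    for p in indices(word, K, M):
        bits[p // 8] |= 1 << (p % 8)

def maybe_contains(item: bytes) -> bool:
    return all(bits[p // 8] & (1 << (p % 8)) for p in indices(item, K, M))

print(maybe_contains(b'hunter2'), maybe_contains(b'not in the set'))
```

Two cheap hashes per item instead of k is a large saving when k is big (the k=40 figure discussed earlier, for instance).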


Title: Re: Checking brainwallet
Post by: LoyceV on May 07, 2022, 02:11:32 PM
As @vjudeu said, you used an un-optimized tool. You should use an optimized tool such as hashcat. Even on a single-core VPS, I managed to get ~5000 kH/s.
Can you walk me through this (ELI5 style)? Say I have pbies's list (https://bitcointalk.org/index.php?topic=5396801.msg60015668#msg60015668) of 200 brainwallets, and I want to use hashcat to get the associated Bitcoin addresses. I'm feeling like a n00b here; hashcat has too many options.

Once I've figured out the above, I can start on this: the input file with passwords I'd like to check is 444 GB (https://download.weakpass.com/wordlists/all-in-one/1/all_in_one.7z) and has 40 billion entries.

I have a (dedicated) server at my disposal with:
Code:
Intel(R) Xeon(R) CPU E3-1270 V2 @ 3.50GHz
130 GB available diskspace (hdd)
16 GB RAM


Title: Re: Checking brainwallet
Post by: iceland2k14 on May 07, 2022, 05:09:31 PM
It may look too simple/generic, but it comes in very handy for this kind of work...
The 3 files required for this code can be obtained from this link: https://github.com/iceland2k14/secp256k1 (https://github.com/iceland2k14/secp256k1)

Minor update for @LoyceV: the script now handles a password file bigger than system RAM.

Code:
import secp256k1 as ice   # https://github.com/iceland2k14/secp256k1 (needs the bundled .dll/.so files)
import time
import signal, sys


input_file_is_passw = True      # otherwise the input file is treated as raw hex private keys
#==============================================================================
# Load the target addresses into a set for O(1) membership tests.
with open("btc_alltypes_address.txt", 'r') as f:
    btc = {line.split()[0] for line in f if line.strip()}
print(f'{"-"*60}\n Read complete for Address File. \n{"-"*60}')

#==============================================================================
def handler(signal_received, frame):
    print('\nSIGINT or CTRL-C detected. Exiting gracefully. BYE')
    sys.exit(0)

def chk(line, k, P):
    # Derive all 4 address types from the public key point and test each.
    ac = ice.pubkey_to_address(0, True, P)    # P2PKH compressed
    au = ice.pubkey_to_address(0, False, P)   # P2PKH uncompressed
    a3 = ice.pubkey_to_address(1, True, P)    # P2SH
    ab = ice.pubkey_to_address(2, True, P)    # Bech32
    if ac in btc or au in btc or a3 in btc or ab in btc:
        with open('FoundTreasure.txt', 'a') as f:
            f.write(f'PassWord = {line}    Privatekey = {hex(k)} \n')

#==============================================================================
m = 0
# Iterating over the file object reads one line at a time, so the password
# file can be far bigger than system RAM.
with open("BugeHugePassFile.txt", 'r') as f:
    st = time.time()
    signal.signal(signal.SIGINT, handler)
    for line in f:
        passw = line.rstrip()
        if input_file_is_passw:
            # brainwallet convention: private key = SHA256(passphrase)
            hexkey = ice.get_sha256(bytes(passw, 'utf8')).hex()
        else:
            hexkey = passw
        kk = int(hexkey, 16)
        P = ice.scalar_multiplication(kk)   # public key point for kk
        chk(passw, kk, P)
        m += 1

        if m % 10000 == 0:
            print(f'Speed: [{m/(time.time()-st):.0f}]   {m} Line checked from File. Current line : {passw}', end='\r')
#==============================================================================
print(f'{" "*130}\r {m} Line checked from File. ', end='\r')
print(f'\n\n Work Done !!\n{"-"*60}')

Code:
> python p2.py
------------------------------------------------------------
 Read complete for Address File.
------------------------------------------------------------
Speed: [22503]   12000000 Line checked from File. Current line : 4f9ea3109cf4c4292265504599a40a27dc6a7689a149c4687848695855026393
SIGINT or CTRL-C detected. Exiting gracefully. BYE


Title: Re: Checking brainwallet
Post by: LoyceV on May 08, 2022, 09:33:41 AM
It may look too simple/generic.
Actually, it looks like I'm in over my head already :(

Quote
But it comes very handy for such kind of works....
The 3 Files required for this code can be obtained from this link  https://github.com/iceland2k14/secp256k1 (https://github.com/iceland2k14/secp256k1)
I see a .dll file: is this a Windows thing?

What I meant was only the speed of SHA-256 hashing. AFAIK hashcat doesn't have an option to brute-force brainwallets.
So I should be able to do something like inputlist > hashcat > bitcointool > addy, and then compare the addy to the existing long list?
I need to find time for this :)


Title: Re: Checking brainwallet
Post by: iceland2k14 on May 09, 2022, 12:43:24 PM
Actually, it looks like I'm in over my head already :(
To run it in Python 3, only 2 input files are needed:
The BTC address file, for collision checking... line by line... Name = btc_alltypes_address.txt
The password file to be checked for brainwallet association. Again line by line... Name = BugeHugePassFile.txt
Each password is checked against 4 BTC address types [Compressed, Uncompressed, P2SH, Bech32].

Looks like the .dll file is for Windows users, while the .so file is for Linux users.
Yes, exactly.

Did OP @zielar manage to reach a satisfactory speed with some tool? It would be interesting to know.


Title: Re: Checking brainwallet
Post by: PrivatePerson on May 11, 2022, 02:14:10 PM
https://github.com/ryancdotorg/brainflayer
Doesn't look like it?