Author Topic: VanitySearch (Yet another address prefix finder)  (Read 31298 times)
Jean_Luc (OP)
Sr. Member
March 21, 2019, 06:06:22 PM
#181

endo and sym are computed in CheckHashComp() in GPUCompute.h.
I quoted my last post and added a few comments.
The point (px,py) is always OK, so there are no errors before CHECK_POINT(h, incr, 0);
the errors appear randomly after this line.
It seems that nvcc generates (in your case) wrong code.


Code:
__device__ __noinline__ void CheckHashComp(prefix_t *prefix, uint64_t *px, uint64_t *py,
  int32_t incr, uint32_t tid, uint32_t *lookup32, uint32_t *out) {

  uint32_t   h[20];
  uint64_t   pe1x[4];
  uint64_t   pe2x[4];

  // Point
  _GetHash160Comp(px, py, (uint8_t *)h);
  CHECK_POINT(h, incr, 0);                         <-- 100% Ok up to here, means that (px,py) is good

  // Endo #1  if (x, y) = k * G, then (beta*x, y) = lambda*k*G
  _ModMult(pe1x, px, _beta);
  _GetHash160Comp(pe1x, py, (uint8_t *)h);   <-- 50% Wrong from here
  CHECK_POINT(h, incr, 1);

  // Endo #2 if (x, y) = k * G, then (beta2*x, y) = lambda2*k*G
  _ModMult(pe2x, px, _beta2);
  _GetHash160Comp(pe2x, py, (uint8_t *)h);
  CHECK_POINT(h, incr, 2);

  ModNeg256(py);

  // Symmetric points

  _GetHash160Comp(px, py, (uint8_t *)h);
  CHECK_POINT(h, -incr, 0);
  _GetHash160Comp(pe1x, py, (uint8_t *)h);
  CHECK_POINT(h, -incr, 1);
  _GetHash160Comp(pe2x, py, (uint8_t *)h);
  CHECK_POINT(h, -incr, 2);

}

Jean_Luc (OP)
Sr. Member
March 21, 2019, 06:12:52 PM
#182

You can also try VanitySearch -u -check.
It will perform the check using uncompressed addresses, and so use the CheckHashUncomp() function, which is similar except that it calls GetHash160() instead of GetHash160Comp().

Jean_Luc (OP)
Sr. Member
March 21, 2019, 06:20:07 PM
#183

It can be due to a wrong optimization concerning a carry somewhere, which would explain why it works from time to time.
I had a similar problem with the CPU release when I compiled with gcc 6; gcc 7 and Visual C++ work flawlessly.
The patch (also a volatile) is at IntMod.cpp:859 and IntMod.cpp:915
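For illustration only, a minimal sketch of the idea behind that patch (not the actual IntMod.cpp code): when a carry is derived from an intermediate sum, marking that intermediate volatile stops the optimizer from folding or reordering the adds and dropping the carry.

Code:
#include <stdint.h>

// 128-bit add built from two 64-bit limbs. The volatile on the low-limb sum
// forces the compiler to materialize it before the carry test, which is the
// kind of ordering a buggy optimization can otherwise break.
void Add128(uint64_t r[2], const uint64_t a[2], const uint64_t b[2]) {
  volatile uint64_t lo = a[0] + b[0];           // low limb sum
  uint64_t carry = (lo < a[0]) ? 1ULL : 0ULL;   // carry out of the low limb
  r[0] = lo;
  r[1] = a[1] + b[1] + carry;                   // propagate into the high limb
}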
Jean_Luc (OP)
Sr. Member
March 22, 2019, 08:09:08 AM
Merited by LoyceV (1)
#184


You can delete:

and delete u0, u1, u2, u3, r0, r1, r2, r3


I committed your mods, removed the unused variables, and changed the squaring a bit: I replaced the reset of the variables t1 and t2 by UADD(t1, 0x0ULL, 0x0ULL);. With this it is no longer necessary to reset t1 or t2 to 0; t1 is set from the carry flag.
I also added my reduction, which uses the MADC instruction (multiply and add with carry).
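For readers following along, a hedged guess at what those macros expand to (the real definitions live in GPUMath.h and may differ): UADDO starts a PTX carry chain, UADDC continues it, and UADD ends it, so UADD(t1, 0x0ULL, 0x0ULL) is "t1 = 0 + 0 + carry", i.e. t1 receives the pending carry without an explicit reset to 0.

Code:
// Illustrative inline-PTX macros (assumed, not verbatim from GPUMath.h).
#define UADDO(r, a, b) asm ("add.cc.u64 %0, %1, %2;"  : "=l"(r) : "l"(a), "l"(b))  // start carry chain
#define UADDC(r, a, b) asm ("addc.cc.u64 %0, %1, %2;" : "=l"(r) : "l"(a), "l"(b))  // continue carry chain
#define UADD(r, a, b)  asm ("addc.u64 %0, %1, %2;"    : "=l"(r) : "l"(a), "l"(b))  // end carry chain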

You can try both implementations by changing GPUEngine.cu:665 from
Code:
#if 1
to
Code:
#if 0


I also ported your ModSqr to the CPU release in IntMod.cpp.

On my hardware there is no significant performance increase: the square is ~10% faster than the classic mult, so on the global process there is no measurable gain.

I removed the volatile again and added "memory" to the clobber list of the inline assembly. This should prevent the compiler from permuting instructions (for pipelining optimization) and losing a carry or getting an unexpected one.
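Roughly, the change being described is of this kind (a sketch assuming macros like the ones above, not the verbatim GPUMath.h source):

Code:
// Before: nothing tells the compiler these asm statements touch shared state,
// so it may schedule other code between add.cc and the addc that consumes its carry.
#define UADDC_BEFORE(r, a, b) asm ("addc.cc.u64 %0, %1, %2;" : "=l"(r) : "l"(a), "l"(b))

// After: the "memory" clobber marks the asm as touching memory, which in practice
// keeps the statements in order and the implicit carry chain intact.
#define UADDC_AFTER(r, a, b)  asm ("addc.cc.u64 %0, %1, %2;" : "=l"(r) : "l"(a), "l"(b) : "memory")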

Please test the source on GitHub and tell me if you still have the errors.

This is my last idea...
arulbero
Legendary
March 22, 2019, 12:56:49 PM
#185

I removed the volatile again and added "memory" to the clobber list of the inline assembly. This should prevent the compiler from permuting instructions (for pipelining optimization) and losing a carry or getting an unexpected one.

Please test the source on GitHub and tell me if you still have the errors.

This is my last idea...

Unfortunately all wrong!!!

Code:
CPU found 1577 items
GPU: point   correct [0/243]
GPU: endo #1 correct [0/251]
GPU: endo #2 correct [0/268]
GPU: sym/point   correct [0/257]
GPU: sym/endo #1 correct [0/256]
GPU: sym/endo #2 correct [0/302]
GPU/CPU check Failed !
Jean_Luc (OP)
Sr. Member
March 22, 2019, 01:15:53 PM
#186

Unfortunately all wrong!!!

That's strange. Maybe I introduced another bug.
If you restore the volatile, does it work?
arulbero
Legendary
March 22, 2019, 01:16:39 PM
#187

Unfortunately all wrong!!!

That's strange. Maybe I introduced another bug.
If you restore the volatile, does it work?


No.
Jean_Luc (OP)
Sr. Member
March 22, 2019, 01:18:17 PM
#188

You may have noticed that I changed the makefile.
Now you should call it like this:

Code:
make gpu=1 ccap=50 all

And also set the right variables:
Code:
CUDA       = /usr/local/cuda-8.0
CXXCUDA    = /usr/bin/g++-4.8

The readme is up to date.
arulbero
Legendary
March 22, 2019, 01:20:43 PM
#189

You may have noticed that I changed the makefile.
Now you should call it like this:

Code:
make gpu=1 ccap=50 all

And also set the right variables:
Code:
CUDA       = /usr/local/cuda-8.0
CXXCUDA    = /usr/bin/g++-4.8

The readme is up to date.

Yes, I already did it.
Jean_Luc (OP)
Sr. Member
March 22, 2019, 01:29:48 PM
#190

Yes, I already did it.

It's driving me crazy.
It works on my 2 configs, and a user on GitHub just posted a report on a GeForce GTX 1080 Ti (ccap=6.1) running on Ubuntu 18.04, and it works fine (he uses CUDA 10).
arulbero
Legendary
March 22, 2019, 01:32:23 PM
#191

Yes, I already did it.

It's driving me crazy.
It works on my 2 configs, and a user on GitHub just posted a report on a GeForce GTX 1080 Ti (ccap=6.1) running on Ubuntu 18.04, and it works fine (he uses CUDA 10).

Don't worry, CUDA 8 needs g++ 4.9; that's the problem.


About performance: I think most people use only compressed addresses.

If you do a ComputeKeys specific to compressed keys only (don't compute y at all!):

Code:
    for (uint32_t i = 0; i < HSIZE; i++) {

      // P = StartPoint + i*G
      Load256(px, sx);
      Load256(py, sy);
      ModSub256(dy, Gy[i], py);

      _ModMult(_s, dy, dx[i]);      //  s = (p2.y-p1.y)*inverse(p2.x-p1.x)
      //_ModMult(_p2, _s, _s);        // _p = pow2(s)
      _ModSqr(_p2, _s);

      ModSub256(px, _p2,px);
      ModSub256(px, Gx[i]);         // px = pow2(s) - p1.x - p2.x;
      /*
      ModSub256(py, Gx[i], px);
      _ModMult(py, _s);             // py = - s*(ret.x-p2.x)
      ModSub256(py, Gy[i]);         // py = - p2.y - s*(ret.x-p2.x);  
      */
      CHECK_PREFIX(GRP_SIZE / 2 + (i + 1));
      
      // P = StartPoint - i*G, if (x,y) = i*G then (x,-y) = -i*G
      Load256(px, sx);
      Load256(py, sy);
      //ModNeg256(dy,Gy[i]);
      //ModSub256(dy, py);
      ModSub256(dy, pyn, Gy[i]);

      _ModMult(_s, dy, dx[i]);      //  s = (p2.y-p1.y)*inverse(p2.x-p1.x)
      //_ModMult(_p2, _s, _s);        // _p = pow2(s)
      _ModSqr(_p2, _s);

      ModSub256(px, _p2, px);
      ModSub256(px, Gx[i]);         // px = pow2(s) - p1.x - p2.x;
      /*
      ModSub256(py, Gx[i], px);
      _ModMult(py, _s);             // py = - s*(ret.x-p2.x)
      
      ModAdd256(py, Gy[i]);         // py = - p2.y - s*(ret.x-p2.x);  

      //ModSub256(py, sx, px);
      //_ModMult(py, _s);             // py = - s*(ret.x-p2.x)
      //ModSub256(py, sy);
      */
      CHECK_PREFIX(GRP_SIZE / 2 - (i + 1));

    }
    
    // First point (startP - (GRP_SIZE/2)*G)
    Load256(px, sx);
    Load256(py, sy);
    ModNeg256(dy, Gy[i]);
    ModSub256(dy, py);

    _ModMult(_s, dy, dx[i]);      //  s = (p2.y-p1.y)*inverse(p2.x-p1.x)
    //_ModMult(_p2, _s, _s);        // _p = pow2(s)
    _ModSqr(_p2, _s);

    ModSub256(px, _p2, px);
    ModSub256(px, Gx[i]);         // px = pow2(s) - p1.x - p2.x;
    /*
    ModSub256(py, Gx[i], px);
    _ModMult(py, _s);             // py = - s*(ret.x-p2.x)
    
    ModAdd256(py, Gy[i]);         // py = - p2.y - s*(ret.x-p2.x);  
    */
    CHECK_PREFIX(0);

    i++;

    // Next start point (startP + GRP_SIZE*G)
    Load256(px, sx);
    Load256(py, sy);
    ModSub256(dy, _2Gny, py);

    _ModMult(_s, dy, dx[i]);      //  s = (p2.y-p1.y)*inverse(p2.x-p1.x)
    //_ModMult(_p2, _s, _s);        // _p = pow2(s)
    _ModSqr(_p2, _s);

    ModSub256(px, _p2, px);
    ModSub256(px, _2Gnx);         // px = pow2(s) - p1.x - p2.x;

    ModSub256(py, _2Gnx, px);
    _ModMult(py, _s);             // py = - s*(ret.x-p2.x)
    //_ModSqr(py, _s);
    ModSub256(py, _2Gny);         // py = - p2.y - s*(ret.x-p2.x);  

    Load256(sx, px);
    Load256(sy, py);

  }

  // Update starting point
  __syncthreads();
  Store256A(startx, sx);

you can save time. Then compute SHA256("02" + x) and SHA256("03" + x) (without looking at the y value at all).

On my system I got about an 8% increase in performance.

Obviously, at the end you have to do a double check to know whether the correct private key for the found address is k or n-k, but only for the address that was found.
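A minimal sketch of that final disambiguation (limb layout and naming are assumptions for illustration; this is not VanitySearch code): compute n - k over the secp256k1 group order, then rebuild the compressed address for both k and n - k and keep the one that really matches. It runs once per found address, so it costs nothing during the search.

Code:
#include <cstdint>

// secp256k1 group order n, little-endian 64-bit limbs.
static const uint64_t SECP256K1_N[4] = {
  0xBFD25E8CD0364141ULL, 0xBAAEDCE6AF48A03BULL,
  0xFFFFFFFFFFFFFFFEULL, 0xFFFFFFFFFFFFFFFFULL
};

// out = n - k (valid for 0 < k < n): the "other" candidate private key.
void NegModOrder(const uint64_t k[4], uint64_t out[4]) {
  unsigned __int128 borrow = 0;
  for (int i = 0; i < 4; i++) {
    unsigned __int128 d = (unsigned __int128)SECP256K1_N[i] - k[i] - borrow;
    out[i] = (uint64_t)d;
    borrow = (uint64_t)(d >> 64) ? 1 : 0;   // 1 if this limb subtraction wrapped
  }
}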
Jean_Luc (OP)
Sr. Member
March 22, 2019, 01:38:45 PM
#192

Don't worry, CUDA 8 needs g++ 4.9; that's the problem.

I use g++ 4.8/CUDA 8 with my old Quadro and it works.

About performance: I think most people use only compressed addresses.

If you do a ComputeKeys specific to compressed keys only (don't compute y at all!):

Yes, you're right. I will make a second kernel optimized for compressed addresses only.
Jean_Luc (OP)
Sr. Member
March 22, 2019, 02:10:25 PM
#193

Another report from a user using CUDA 8 and gcc 4.8 on a GeForce GTX 460: it works.
Telariust
Jr. Member
March 23, 2019, 04:13:11 PM
Last edit: August 09, 2019, 12:12:29 AM by Telariust
#194

...If you don't specify the seed, the base key is generated using the timestamp (in µs) plus the date and passed into pbkdf2_hmac_sha512.
The result of the pbkdf2_hmac_sha512 is then passed into a SHA256, which is used as the base key.
Then we have the classic security problem of using a pseudo-random seed. Alarm!
Fix it quickly: switch to /dev/urandom.
And a human is not a source of truly random numbers.
(You can look up when the option for the user to set the starting random seed was removed from Electrum.)
This is a useful option for your program, but only when the user understands what he is doing.
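A minimal sketch of that suggestion (illustration only, not a patch against the VanitySearch sources): read the 256-bit base seed from the kernel CSPRNG instead of deriving it from a timestamp.

Code:
#include <cstdint>
#include <cstdio>

// Fill 'seed' with 32 bytes from /dev/urandom (Linux); returns false on failure.
bool GetRandomSeed(uint8_t seed[32]) {
  FILE *f = std::fopen("/dev/urandom", "rb");
  if (!f) return false;
  size_t n = std::fread(seed, 1, 32, f);
  std::fclose(f);
  return n == 32;
}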

...
Edit:
You also may have noticed that I have an innovative implementation of modular inversion (DRS62) which is almost 2 times faster than the Montgomery one. Some benchmarks and comments are available in IntMod.cpp.
...2) the field multiplication a*b = c mod p; why do you use Montgomery, are you sure it is worth it?
YES! Moreover, I guarantee you that Montgomery multiplication is a source of slowness, especially on GPU.
Why? Because the loops in the algorithm necessarily contain conditional branches (IFs), which badly hurts warp-level parallelism on the GPU.
(I would keep silent if only the CPU were used; there it has no effect.)
Why Montgomery at all? I see 2 reasons:
 1) historically, the Montgomery algorithm is optimal thanks to the simplicity of its multiplication (and, most importantly, its division!) by bit/word shifts in base 2
(and that's true! but not for GPU!);
 2) legacy from vanitygen:
when samr7 wrote vanitygen (2010-2012), he used OpenSSL, and OpenSSL uses Montgomery always, or almost always, for universality.
For 4096-bit multiplication it is needed, but we have 256 bits! I don't have exact data on how much we gain by using Montgomery at 256 bits compared to classic (schoolbook?) multiplication, but I think about 15%.
However, each IF inside the loop cuts the speed in HALF! In Montgomery there are 1 or 2 of them (depending on the variant of the algorithm).
(I would keep silent if we were counting points on a curve secp4096k1.)

When you tell me "I used Montgomery to multiply 256-bit numbers!", it is the same as saying "I used Fürer's algorithm / Schönhage-Strassen to multiply 256-bit numbers!". What can I say, you're good! Cool, but why?
I understand your mathematical ego. Your vanity does not allow you to multiply by a simple column or by Karatsuba  Grin
Have you improved Montgomery multiplication? Well done! I am sure there are few people in the world who can do this.
But please make this program correct first. With all sincere respect for you and your work!

Hmm... a joke:
Hi, Santa!
We have VanityGen on universal, slooow OpenSSL; BitCrack on libsecp256k1 (plus a stingy comment about Fermat's theorem); and VanitySearch on libArulberosECC optimized with (fast?) Montgomery.
Please make it so that, for once, somebody does it right!  Grin
(+ and the raw, non-working ec_grind by sipa)

When an OpenCL implementation?  Smiley
I suppose never, I mean at a normal speed.
You know what happened with clBitCrack?
brichard19 promised to release it in a couple of weeks; after two months something finally appeared, but it was several times slower than the CUDA version.
And it still is. What happened? brichard19 can't program? No, I don't think so, he is fine.
It is just that the CUDA compiler's optimizer has moved far ahead. Nvidia wins, it's a fact.
Look for example: https://arxiv.org/ftp/arxiv/papers/1005/1005.2581.pdf

On the other hand... the new John the Ripper 1.9 release:
https://www.openwall.com/lists/announce/2019/05/14/1
Quote
The release of the new version 1.9.0-jumbo-1 took place more than four years after the release of the previous 1.8.0-jumbo-1.

In the new version of John the Ripper, the developers abandoned the CUDA architecture due to decreased interest in it and focused on the more portable OpenCL framework, which works great on NVIDIA graphics cards.
CUDA vs OpenCL?.. unclear
Jean_Luc (OP)
Sr. Member
March 23, 2019, 04:57:09 PM
#195

Then we have the classic security problem of using a pseudo-random seed. Alarm!
Fix it quickly: switch to /dev/urandom.

As written in the readme, for safe keys it is recommended to use a passphrase with the -s option (as for BIP38).
Concerning the default seed, pbkdf2_hmac_sha512(date + uptime in µs): here we search for a prefix, which means that a seed-search attack might work on a very short prefix but would require very competitive and expensive hardware.
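For example (the prefix and passphrase below are placeholders; -s, -stop and -gpu are the options described in the readme and earlier in the thread):

Code:
./VanitySearch -stop -gpu -s "my long secret passphrase" 1Test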

YES! Moreover, I guarantee you that Montgomery multiplication is a source of slowness, especially on GPU.
...

As written in the readme, VanitySearch now uses a 2-step folding modular multiplication optimized for the SecpK1 prime.
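For readers who want the idea behind that reduction, here is a hedged sketch (plain C++ for clarity, not the actual GPUMath.h code, which is more refined): since p = 2^256 - 0x1000003D1 for SecpK1, 2^256 ≡ 0x1000003D1 (mod p), so the high half of a 512-bit product can be multiplied by 0x1000003D1 and folded back into the low half, and folding twice brings the value down to 256 bits.

Code:
#include <cstdint>

// One fold: out (5 limbs) = low 4 limbs of 'in' + 0x1000003D1 * high 4 limbs.
static void Fold(const uint64_t in[8], uint64_t out[5]) {
  const unsigned __int128 K = 0x1000003D1ULL;   // 2^256 mod p
  unsigned __int128 c = 0;
  for (int i = 0; i < 4; i++) {
    c += (unsigned __int128)in[i] + K * in[i + 4];
    out[i] = (uint64_t)c;
    c >>= 64;
  }
  out[4] = (uint64_t)c;                         // small spill limb
}

// Reduce a 512-bit product t[0..7] (little-endian limbs) to a 256-bit value
// congruent to it mod p. The result may still lie in [p, 2^256); the final
// conditional subtraction of p is left out of this sketch.
void Reduce512(const uint64_t t[8], uint64_t r[4]) {
  uint64_t u[5], s[8] = {0};
  Fold(t, u);                                   // fold #1: 512 -> ~290 bits
  for (int i = 0; i < 5; i++) s[i] = u[i];
  Fold(s, u);                                   // fold #2: ~290 -> 256 bits (+ tiny spill)
  unsigned __int128 c = (unsigned __int128)u[4] * 0x1000003D1ULL;
  for (int i = 0; i < 4; i++) {                 // fold the remaining spill back in
    c += u[i];
    r[i] = (uint64_t)c;
    c >>= 64;
  }
}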
arulbero
Legendary
March 23, 2019, 05:42:24 PM
#196

Very strange error.

If I modify the function __device__ void _ModMult(uint64_t *r, uint64_t *a, uint64_t *b) in any way, for example like this:

Code:
  // Reduce from 320 to 256 
  UADD1(t[4],0ULL);
  UMULLO(al,t[4], 0x1000003D1ULL);
  UMULHI(ah,t[4], 0x1000003D1ULL);
  UADDO(r[0],r512[0], al);
  UADDC(r[1],r512[1], ah);
  UADDC(r[2],r512[2], 0ULL);
  UADD(r[3],r512[3], 0ULL);

  UADD1(r[3],0x07ULL);  <-- error!!!

I get all errors, as expected, with the -check option:

Code:
CPU found 1539 items
GPU: point   correct [0/271]
GPU: endo #1 correct [0/248]
GPU: endo #2 correct [0/260]
GPU: sym/point   correct [0/255]
GPU: sym/endo #1 correct [0/265]
GPU: sym/endo #2 correct [0/240]
GPU/CPU check Failed !

but instead I get the correct result with the standard command:

Code:
~/VanitySearch$ ./VanitySearch -stop -t 7 -gpu 1111
Difficulty: 16777216
Search: 1111 [Compressed]
Start Sat Mar 23 18:39:22 2019
Base Key:12FF1E3D528DC8068438E8ED181E1F2505E877A7543869B0B38E500F5FA284F9
Number of CPU thread: 7
GPU: GPU #0 Quadro M2200 (8x128 cores) Grid(64x128)

Pub Addr: 1111Cf8ucVbgUtANTRGwQsWVpXVZvqFT6
Prv Addr: 5HxepgskWZ53AokCCvk8d1ZZGinupSX4Sm7tNQygZ9zQpkftRQJ
Prv Key : 0x12FF1E3D528DC8068438E8ED181E1F2505E877A7543869B5B38E500F5FA4D5D3
Check   : 1DFm6mzxxKqFo9bysKC9x1TxEz5Z9d9uAb
Check   : 1111Cf8ucVbgUtANTRGwQsWVpXVZvqFT6 (comp)

How is it possible??  Huh
Jean_Luc (OP)
Sr. Member
March 23, 2019, 05:51:01 PM
#197

How is it possible??  Huh

Found by the CPU? Try with -t 0...
arulbero
Legendary
March 23, 2019, 06:41:13 PM
#198

How is it possible??  Huh

Found by the CPU? Try with -t 0...

Ok! Mystery solved!  Smiley
DaveF
Legendary
March 23, 2019, 08:19:55 PM
#199

Another report from a user using CUDA 8 and gcc 4.8 on a GeForce GTX 460: it works.


Does CUDA 8 work on Windows or only Linux?

-Dave

Lolo54
Member
March 23, 2019, 08:57:09 PM
#200

Another report from a user using CUDA 8 and gcc 4.8 on a GeForce GTX 460: it works.


Does CUDA 8 work on Windows or only Linux?

-Dave
For the moment only on Linux, but it seems to me that Jean_Luc is trying, or will try, to adapt it for Windows as well... it is more difficult than for Linux, I think.