Jean_Luc (OP)
March 21, 2019, 06:06:22 PM
endo and sym are computed in CheckHashComp() in GPUCompute.h. I quoted my last post and added a few comments. The point (px,py) is always OK, so there are no errors before CHECK_POINT(h, incr, 0); the errors randomly appear after this line. It seems that nvcc generates (in your case) wrong code.

```cpp
__device__ __noinline__ void CheckHashComp(prefix_t *prefix, uint64_t *px, uint64_t *py,
                                           int32_t incr, uint32_t tid, uint32_t *lookup32, uint32_t *out) {

  uint32_t h[20];
  uint64_t pe1x[4];
  uint64_t pe2x[4];

  // Point
  _GetHash160Comp(px, py, (uint8_t *)h);
  CHECK_POINT(h, incr, 0);                  // <-- 100% OK up to here, means that (px,py) is good

  // Endo #1: if (x, y) = k*G, then (beta*x, y) = lambda*k*G
  _ModMult(pe1x, px, _beta);
  _GetHash160Comp(pe1x, py, (uint8_t *)h);  // <-- 50% wrong from here
  CHECK_POINT(h, incr, 1);

  // Endo #2: if (x, y) = k*G, then (beta2*x, y) = lambda2*k*G
  _ModMult(pe2x, px, _beta2);
  _GetHash160Comp(pe2x, py, (uint8_t *)h);
  CHECK_POINT(h, incr, 2);

  ModNeg256(py);

  // Symmetric points
  _GetHash160Comp(px, py, (uint8_t *)h);
  CHECK_POINT(h, -incr, 0);
  _GetHash160Comp(pe1x, py, (uint8_t *)h);
  CHECK_POINT(h, -incr, 1);
  _GetHash160Comp(pe2x, py, (uint8_t *)h);
  CHECK_POINT(h, -incr, 2);
}
```
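For reference, the endomorphism identities in the comments above can be checked independently with plain integer arithmetic. Here is a minimal sketch using the standard published secp256k1 constants (illustrative Python, not the kernel code):

```python
# secp256k1 field prime and group order
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

# Endomorphism constants: beta^3 = 1 mod p, lambda^3 = 1 mod n
beta = 0x7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE
lam  = 0x5363AD4CC05C30E0A5261C028812645A122E22EA20816678DF02967C1B23BD72

# Minimal affine point addition / scalar multiplication on y^2 = x^3 + 7
def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0: return None
    if P == Q:
        s = 3 * x1 * x1 * pow(2 * y1, -1, p) % p
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def mul(k, P):
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

k = 123456789
x, y = mul(k, G)
lx, ly = mul(lam * k % n, G)
assert lx == beta * x % p and ly == y   # (beta*x, y) = lambda*k*G
```

So multiplying x by beta (one _ModMult) stands in for a full scalar multiplication by lambda, which is why the kernel gets the extra candidate points almost for free.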
Jean_Luc (OP)
March 21, 2019, 06:12:52 PM
You can also try VanitySearch -u -check. It will perform the check using uncompressed addresses and so use the CheckHashUncomp() function, which is similar except that it calls GetHash160() instead of GetHash160Comp().
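For context, the difference between the two hash paths is only the public-key serialization that gets hashed. A sketch of the two SEC encodings (illustrative Python, not the CUDA code):

```python
def serialize_pubkey(x: int, y: int, compressed: bool) -> bytes:
    # SEC encoding: uncompressed = 0x04 | x | y (65 bytes);
    # compressed = 0x02 or 0x03 | x (33 bytes), prefix chosen by the parity of y
    if compressed:
        prefix = b'\x02' if y % 2 == 0 else b'\x03'
        return prefix + x.to_bytes(32, 'big')
    return b'\x04' + x.to_bytes(32, 'big') + y.to_bytes(32, 'big')

# Generator point of secp256k1 as an example input
gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

assert len(serialize_pubkey(gx, gy, True)) == 33
assert len(serialize_pubkey(gx, gy, False)) == 65
assert serialize_pubkey(gx, gy, True)[0] == 0x02   # gy is even
```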
Jean_Luc (OP)
March 21, 2019, 06:20:07 PM
It can be due to a wrong optimization concerning a carry somewhere, which could explain why it works from time to time. I had a similar problem with the CPU release when I compiled with gcc 6; gcc 7 and Visual C++ work flawlessly. The patch (a volatile also) is at IntMod.cpp:859 and IntMod.cpp:915.
Jean_Luc (OP)
March 22, 2019, 08:09:08 AM
You can delete:
and delete u0, u1, u2, u3, r0, r1, r2, r3.
I committed your mods, removed the unused variables, and changed the squaring a bit: I just replaced the reset of variables t1 and t2 by UADD(t1, 0x0ULL, 0x0ULL);. With this, it is no longer necessary to reset t1 or t2 to 0; t1 is set with the carry flag. I also added my reduction, which uses the MADC instruction (multiply and add). You can try both implementations by changing GPUEngine.cu:665. I also ported your ModSqr to the CPU release in IntMod.cpp. On my hardware there is no significant performance increase: the square is ~10% faster than the classic mult, so on the global process there is no measurable gain. I removed the volatile again and added "memory" to the clobber list of the inline assembly. This should prevent the compiler from permuting instructions (for pipelining optimization) and losing a carry or getting an unexpected one. Please test the source on GitHub and tell me if you still have the errors. This is my last idea...
arulbero
Legendary
Offline
Activity: 1935
Merit: 2077
March 22, 2019, 12:56:49 PM
I removed the volatile again and added "memory" to the clobber list of the inline assembly. This should prevent the compiler from permuting instructions (for pipelining optimization) and losing a carry or getting an unexpected one.
Please test the source on GitHub and tell me if you still have the errors.
This is my last idea...
Unfortunately all wrong!!!

```
CPU found 1577 items
GPU: point correct [0/243]
GPU: endo #1 correct [0/251]
GPU: endo #2 correct [0/268]
GPU: sym/point correct [0/257]
GPU: sym/endo #1 correct [0/256]
GPU: sym/endo #2 correct [0/302]
GPU/CPU check Failed !
```
Jean_Luc (OP)
March 22, 2019, 01:15:53 PM
Unfortunately all wrong!!!
That's strange. Maybe I introduced another bug. Does it work if you restore the volatile?
arulbero
March 22, 2019, 01:16:39 PM
Unfortunately all wrong!!!
That's strange. May be I introduced an other bug. If you restore the volatile it works ? No.
Jean_Luc (OP)
March 22, 2019, 01:18:17 PM
You may have noticed that I changed the Makefile. Now you should call it like this: And also set the right variables:

CUDA = /usr/local/cuda-8.0
CXXCUDA = /usr/bin/g++-4.8

The readme is up to date.
arulbero
March 22, 2019, 01:20:43 PM
You may have noticed that I changed the Makefile. Now you should call it like this: And also set the right variables: CUDA = /usr/local/cuda-8.0 CXXCUDA = /usr/bin/g++-4.8
The readme is up to date.
Yes, I already did it.
Jean_Luc (OP)
March 22, 2019, 01:29:48 PM
Yes, I already did it.
This will drive me crazy. It works on my 2 configs, and a user on GitHub just posted a report for a GeForce GTX 1080 Ti (ccap=6.1) running Ubuntu 18.04: it works fine (he uses CUDA 10).
arulbero
March 22, 2019, 01:32:23 PM
Yes, I already did it.
This will drive me crazy. It works on my 2 configs, and a user on GitHub just posted a report for a GeForce GTX 1080 Ti (ccap=6.1) running Ubuntu 18.04: it works fine (he uses CUDA 10).
Don't worry, CUDA 8 needs g++ 4.9; that's the problem.

About the performance, I think most people use only compressed addresses. If you write a specific ComputeKeys for compressed keys only (don't compute y at all!):

```cpp
for (uint32_t i = 0; i < HSIZE; i++) {

  // P = StartPoint + i*G
  Load256(px, sx);
  Load256(py, sy);
  ModSub256(dy, Gy[i], py);

  _ModMult(_s, dy, dx[i]);       // s = (p2.y-p1.y)*inverse(p2.x-p1.x)
  //_ModMult(_p2, _s, _s);       // _p = pow2(s)
  _ModSqr(_p2, _s);

  ModSub256(px, _p2, px);
  ModSub256(px, Gx[i]);          // px = pow2(s) - p1.x - p2.x;

  /*
  ModSub256(py, Gx[i], px);
  _ModMult(py, _s);              // py = - s*(ret.x-p2.x)
  ModSub256(py, Gy[i]);          // py = - p2.y - s*(ret.x-p2.x);
  */

  CHECK_PREFIX(GRP_SIZE / 2 + (i + 1));

  // P = StartPoint - i*G, if (x,y) = i*G then (x,-y) = -i*G
  Load256(px, sx);
  Load256(py, sy);
  //ModNeg256(dy, Gy[i]);
  //ModSub256(dy, py);
  ModSub256(dy, pyn, Gy[i]);

  _ModMult(_s, dy, dx[i]);       // s = (p2.y-p1.y)*inverse(p2.x-p1.x)
  //_ModMult(_p2, _s, _s);       // _p = pow2(s)
  _ModSqr(_p2, _s);

  ModSub256(px, _p2, px);
  ModSub256(px, Gx[i]);          // px = pow2(s) - p1.x - p2.x;

  /*
  ModSub256(py, Gx[i], px);
  _ModMult(py, _s);              // py = - s*(ret.x-p2.x)
  ModAdd256(py, Gy[i]);          // py = - p2.y - s*(ret.x-p2.x);

  //ModSub256(py, sx, px);
  //_ModMult(py, _s);            // py = - s*(ret.x-p2.x)
  //ModSub256(py, sy);
  */

  CHECK_PREFIX(GRP_SIZE / 2 - (i + 1));

}

// First point (startP - (GRP_SIZE/2)*G)
Load256(px, sx);
Load256(py, sy);
ModNeg256(dy, Gy[i]);
ModSub256(dy, py);

_ModMult(_s, dy, dx[i]);         // s = (p2.y-p1.y)*inverse(p2.x-p1.x)
//_ModMult(_p2, _s, _s);         // _p = pow2(s)
_ModSqr(_p2, _s);

ModSub256(px, _p2, px);
ModSub256(px, Gx[i]);            // px = pow2(s) - p1.x - p2.x;

/*
ModSub256(py, Gx[i], px);
_ModMult(py, _s);                // py = - s*(ret.x-p2.x)
ModAdd256(py, Gy[i]);            // py = - p2.y - s*(ret.x-p2.x);
*/

CHECK_PREFIX(0);

i++;

// Next start point (startP + GRP_SIZE*G)
Load256(px, sx);
Load256(py, sy);
ModSub256(dy, _2Gny, py);

_ModMult(_s, dy, dx[i]);         // s = (p2.y-p1.y)*inverse(p2.x-p1.x)
//_ModMult(_p2, _s, _s);         // _p = pow2(s)
_ModSqr(_p2, _s);

ModSub256(px, _p2, px);
ModSub256(px, _2Gnx);            // px = pow2(s) - p1.x - p2.x;

ModSub256(py, _2Gnx, px);
_ModMult(py, _s);                // py = - s*(ret.x-p2.x)
//_ModSqr(py, _s);
ModSub256(py, _2Gny);            // py = - p2.y - s*(ret.x-p2.x);

Load256(sx, px);
Load256(sy, py);
}

// Update starting point
__syncthreads();
Store256A(startx, sx);
```

you can save time. Then compute SHA256("02" + x) and SHA256("03" + x) (without thinking about the y value at all). On my system I got about an 8% performance increase. Obviously, at the end you have to do a double check to know whether the correct private key for the found address is k or n-k, but only for the address actually found.
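The final k vs n-k double check works because (n-k)*G = (x, p-y), and since the field prime p is odd, y and p-y always have opposite parity; so hashing both prefix bytes 02 and 03 covers exactly the two candidates. A quick illustration (plain Python, not part of the patch):

```python
p = 2**256 - 2**32 - 977   # secp256k1 field prime (odd)

def compressed_prefix(y: int) -> bytes:
    # 0x02 for even y, 0x03 for odd y
    return b'\x02' if y % 2 == 0 else b'\x03'

# For any affine point (x, y), the negated point -P is (x, p - y);
# p is odd, so y and p - y always differ in parity.
y = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
assert compressed_prefix(y) == b'\x02'
assert compressed_prefix(p - y) == b'\x03'
```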
Jean_Luc (OP)
March 22, 2019, 01:38:45 PM
Don't worry, CUDA 8 needs g++ 4.9; that's the problem.
I use g++ 4.8 / CUDA 8 with my old Quadro and it works.
About the performance, I think most people use only compressed addresses. If you write a specific ComputeKeys for compressed keys only (don't compute y at all!):
Yes, you're right. I will make a second kernel optimized for compressed addresses only.
Jean_Luc (OP)
March 22, 2019, 02:10:25 PM
Another report, from a user running CUDA 8 and gcc 4.8 on a GeForce GTX 460: it works.
Telariust
Jr. Member
Offline
Activity: 38
Merit: 18
March 23, 2019, 04:13:11 PM (last edit: August 09, 2019, 12:12:29 AM by Telariust)
...If you don't specify the seed, the basekey is generated using timestamps (in µs) plus the date, and is also passed into pbkdf2_hmac_sha512. The result of the pbkdf2_hmac_sha512 is then passed into a SHA256 which is used as the base key.
Then we have the classic security problem of using a pseudo-random seed. Alarm! Fix it quickly to use /dev/urandom; a human is not a source of truly random numbers. (You can look up when the option for the user to set the starting random seed was removed from Electrum.) This is a useful option for your program, but only when the user understands what he is doing.
...
Edit: You may also have noticed that I have an innovative implementation of modular inversion (DRS62) which is almost 2 times faster than the Montgomery one. Some benchmarks and comments are available in IntMod.cpp.
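For what it's worth, the fix asked for above, seeding from the OS CSPRNG rather than from timestamps, is a one-liner in most environments. A sketch (illustrative only, not VanitySearch's actual code):

```python
import os
import hashlib

# A timestamp-derived seed is enumerable: an attacker who can guess the
# launch time to within a few seconds only has to try a modest number
# of candidate seeds.
weak_seed = hashlib.sha256(b"2019-03-23 16:13:11.123456").digest()

# os.urandom reads from the OS CSPRNG (/dev/urandom on Linux) and gives
# 256 bits of unpredictable entropy.
strong_seed = os.urandom(32)

assert len(weak_seed) == 32 and len(strong_seed) == 32
```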
...2) the field multiplication a*b = c mod p; why do you use Montgomery, are you sure it is worth it?
YES! Moreover, I guarantee you that Montgomery multiplication is a source of slowness, especially on GPU. Why? Because the loops in the algorithm necessarily contain conditional branches (if/else), which greatly hurts parallelism (warps) on GPU. (I would keep silent if only CPUs were used; there it has no effect.) Why Montgomery at all? I see 2 variants: 1) historically, the Montgomery algorithm is optimal thanks to the simplicity of its multiplications (and most importantly, its divisions!) as bit-word shifts in base 2 (and that is true, but not for GPU!); 2) legacy from vanitygen: when samr7 wrote vanitygen (2010-2012), he used OpenSSL, and OpenSSL used Montgomery always, or almost always, for universality. For 4096-bit multiplication it is needed, but we have 256 bits! I don't have exact data on how much we win using Montgomery at 256 bits compared to classic (schoolbook?) multiplication, but I think about 15%. However, each if in the loop cuts the speed in HALF! And in Montgomery there are 1 or 2 of them (depending on the variant of the algorithm). (I would keep silent if we were counting points on a curve secp4096k1.) When you tell me "I used Montgomery to multiply 256-bit numbers!", it is the same as saying "I used Fürer's algorithm / Schönhage-Strassen to multiply 256-bit numbers!". What can I say? You're good! But why?) I understand your mathematician's ego: your vanity does not allow you to multiply with schoolbook or Karatsuba. Have you improved Montgomery multiplication? Well done! I am sure there are few people in the world who can do this. But please make this program as correct as possible. With all sincere respect for you and your work! Hmm... a joke: Hi, Santa! We have VanityGen on universal, slooow OpenSSL; BitCrack on libsecp256k1 (plus a stingy comment about Fermat's theorem); and VanitySearch on libArulberosECC optimized with (fast?) Montgomery. Please make it so that, for once, they do it right! (Plus the raw, non-working ec_grind by sipa.) When will there be an OpenCL implementation? I suppose never. I mean, one with normal speed.
You know what happened with clBitCrack? brichard19 promised to release it in a couple of weeks; after two months something finally appeared, but several times slower than the CUDA version, and so far it still is. What happened? Can't brichard19 program? No, I don't think so; he is fine. It's just that the CUDA compiler's optimizer has moved far ahead. Nvidia wins there, that's a fact. Look for example: https://arxiv.org/ftp/arxiv/papers/1005/1005.2581.pdf
On the other hand... the new release of John the Ripper 1.9: https://www.openwall.com/lists/announce/2019/05/14/1
The release of the new version 1.9.0-jumbo-1 took place more than four years after the release of the previous 1.8.0-jumbo-1. In the new version of John the Ripper, the developers abandoned the CUDA architecture due to decreasing interest in it and focused on the more portable OpenCL framework, which works great on NVIDIA graphics cards. CUDA vs OpenCL?.. unclear.
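The data-dependent branch complained about above is the final conditional subtraction of Montgomery's REDC step. A minimal sketch over the secp256k1 prime (illustrative Python, not any of the implementations discussed in this thread):

```python
p = 2**256 - 2**32 - 977          # secp256k1 prime
R = 2**256                        # Montgomery radix
p_inv_neg = (-pow(p, -1, R)) % R  # precomputed -p^-1 mod R

def redc(t: int) -> int:
    # Montgomery reduction: returns t * R^-1 mod p for 0 <= t < p*R
    m = (t * p_inv_neg) % R
    u = (t + m * p) >> 256        # t + m*p is divisible by R by construction
    if u >= p:                    # <-- the conditional subtraction that
        u -= p                    #     diverges warps on a GPU
    return u

a, b = 0xDEADBEEF, 0xC0FFEE
aR, bR = a * R % p, b * R % p     # convert operands to Montgomery form
prod_R = redc(aR * bR)            # = a*b*R mod p (still in Montgomery form)
assert redc(prod_R) == a * b % p  # converting back yields the plain product
```

Whether this branch actually costs half the throughput depends on how often the threads of a warp disagree on it; the point stands that a branch-free reduction is friendlier to SIMT hardware.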
Jean_Luc (OP)
March 23, 2019, 04:57:09 PM
Then we have the classic security problem of using a pseudo-random seed. Alarm! Fix it quickly to use /dev/urandom
As written in the readme, for safe keys it is recommended to use a passphrase with the -s option (as for BIP38). Concerning the default seed, pbkdf2_hmac_sha512(date + uptime in µs): here we search for a prefix, which means a seed-search attack might work only on a very short prefix and would require very competitive and expensive hardware.
YES! Moreover, I guarantee you that Montgomery multiplication is a source of slowness, especially on GPU. ...
As written in the readme, VanitySearch now uses a 2-step folding modular multiplication optimized for the SecpK1 prime.
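For reference, the folding works because p = 2^256 - 0x1000003D1, so anything above bit 256 can be folded back into the low limbs by multiplying with 0x1000003D1; two folds bring a 512-bit product down to roughly 256 bits without any data-dependent branch. A sketch of the identity in plain integers (not the actual implementation):

```python
p = 2**256 - 0x1000003D1   # secp256k1 prime: 0x1000003D1 = 2**32 + 977
assert p == 2**256 - 2**32 - 977

def fold_once(v: int) -> int:
    # Fold the bits above 2^256 back into the low 256 bits:
    # v = hi*2^256 + lo  ==  hi*0x1000003D1 + lo  (mod p)
    hi, lo = v >> 256, v & (2**256 - 1)
    return hi * 0x1000003D1 + lo

a = 0xDEADBEEF << 200
b = 0xC0FFEE << 180
v = a * b                      # product of up to 512 bits
r = fold_once(fold_once(v))    # two folds leave a value of ~256 bits
assert r % p == v % p          # same residue, no branches needed
```

A final conditional subtraction (or comparison) is only needed when a fully reduced canonical result is required, e.g. before hashing.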
arulbero
March 23, 2019, 05:42:24 PM
Very strange error. If I modify the function __device__ void _ModMult(uint64_t *r, uint64_t *a, uint64_t *b) in any way, for example like this:

```cpp
// Reduce from 320 to 256
UADD1(t[4], 0ULL);
UMULLO(al, t[4], 0x1000003D1ULL);
UMULHI(ah, t[4], 0x1000003D1ULL);
UADDO(r[0], r512[0], al);
UADDC(r[1], r512[1], ah);
UADDC(r[2], r512[2], 0ULL);
UADD(r[3], r512[3], 0ULL);
UADD1(r[3], 0x07ULL);   // <-- error!!!
```

then with the -check option I get all errors, as expected:

```
CPU found 1539 items
GPU: point correct [0/271]
GPU: endo #1 correct [0/248]
GPU: endo #2 correct [0/260]
GPU: sym/point correct [0/255]
GPU: sym/endo #1 correct [0/265]
GPU: sym/endo #2 correct [0/240]
GPU/CPU check Failed !
```

but with the standard command I get the correct result instead:

```
~/VanitySearch$ ./VanitySearch -stop -t 7 -gpu 1111
Difficulty: 16777216
Search: 1111 [Compressed]
Start Sat Mar 23 18:39:22 2019
Base Key:12FF1E3D528DC8068438E8ED181E1F2505E877A7543869B0B38E500F5FA284F9
Number of CPU thread: 7
GPU: GPU #0 Quadro M2200 (8x128 cores) Grid(64x128)
Pub Addr: 1111Cf8ucVbgUtANTRGwQsWVpXVZvqFT6
Prv Addr: 5HxepgskWZ53AokCCvk8d1ZZGinupSX4Sm7tNQygZ9zQpkftRQJ
Prv Key : 0x12FF1E3D528DC8068438E8ED181E1F2505E877A7543869B5B38E500F5FA4D5D3
Check : 1DFm6mzxxKqFo9bysKC9x1TxEz5Z9d9uAb
Check : 1111Cf8ucVbgUtANTRGwQsWVpXVZvqFT6 (comp)
```

How is it possible??
Jean_Luc (OP)
March 23, 2019, 05:51:01 PM
How is it possible??
Found by the CPU? Try with -t 0...
arulbero
March 23, 2019, 06:41:13 PM
How is it possible??
Found by the CPU? Try with -t 0...
Ok! Mystery solved!
DaveF
Legendary
Offline
Activity: 3654
Merit: 6660
Crypto Swap Exchange
March 23, 2019, 08:19:55 PM
Another report, from a user running CUDA 8 and gcc 4.8 on a GeForce GTX 460: it works.
Does CUDA 8 work on Windows or only Linux? -Dave
Lolo54
Member
Offline
Activity: 117
Merit: 32
March 23, 2019, 08:57:09 PM
Another report, from a user running CUDA 8 and gcc 4.8 on a GeForce GTX 460: it works.
Does CUDA 8 work on Windows or only Linux? -Dave
For the moment only on Linux, but it seems to me that Jean_Luc is trying, or will try, to adapt it for Windows as well... it is more difficult than for Linux, I think.