1261  Bitcoin / Project Development / Re: Frustration Convention CANCELLED! ====> #51 <===== on: April 05, 2017, 10:22:11 AM
Unknownhostname just informed me that he found #51.
And because he has shown me the privkey, I know he's not making it up.

Please stand by for further details...


Why do you move funds from the address to itself? What is the reason?

https://blockchain.info/address/1NpnQyZ7x24ud82b7WiRNvPm6N8bqGQnaS
1262  Bitcoin / Project Development / Re: Frustration Convention CANCELLED! ====> #51 <===== on: April 05, 2017, 10:09:29 AM
Unknownhostname just informed me that he found #51.
And because he has shown me the privkey, I know he's not making it up.

VERY GOOOD!!!  Grin Grin
1263  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: April 05, 2017, 07:31:06 AM

While I do trust the LBC codebase, we're really entering uncharted territory here, so if one of us doesn't confirm the #51 find, probably no one will. Except the maker of the puzzle transaction, and he chooses to remain silent.

As for proof of work, read https://lbc.cryptoguru.org/man/tech and https://lbc.cryptoguru.org/man/admin#security
there is a tight challenge-response framework in place to make sure the delivered work was really done by the generators, and attempts at code tampering to circumvent it have been defeated so far.


I read those pages, but I still don't understand what kind of "proof of work" we are talking about.

I mean: I'm interested in a "proof of correct work". This kind of proof is needed not only by you, but by everyone who is running this client.
My incentive is being sure that I (and all of us) are working in the correct way.

My question is: do you check in some way that my work is correct, or do you only check that I run your code without tampering? Those are two different things.

For example, the incentive "firework" from SlarkBoy is good as a control system too. And money is not even necessary to perform this kind of control.


We are searching for something extremely rare. So sentences like:

"I do trust the LBC codebase"
"anyone with attention deficit hyperactivity disorder would be in the wrong place here"

are not exactly reassuring to me  Cheesy

Probably I'm just frustrated; I would really be sorry if we didn't hit #51  Roll Eyes
1264  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: April 05, 2017, 06:18:43 AM

No sign of #51 yet? We're seriously running out of search space - less than 24h left.


Do we know for sure that there is a key for each bit?

I think we need another control (and incentive) system for our work. We cannot run for weeks and reach this point with the doubt that we may have somehow missed a key.

EDIT: for example, how do you (and I) know that I actually searched all the keys between "a" and "b"? What is the proof of my work?
1265  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: April 04, 2017, 12:28:25 PM
Please always remember

LBC: 100 Mkeys/s = 200 Maddresses/s
oclvanitygen: 270 Mkeys/s = 270 Maddresses/s (only compressed)

Rico

I think that if oclvanitygen computes 270 Maddresses/s, it means that it computes only 135M x-coordinates (x = the coordinate of the curve point that represents a public key) and not a single y-coordinate.
Then the strings "02x" and "03x" produce 2 addresses (2 for each x):

"02x" --> compressed public key
"03x" --> compressed public key


The current LBC generator computes 100 Mkeys/s, which means it computes 100M x-coordinates + 100M y-coordinates, then:

"04xy" --> uncompressed public key
"02 or 03x" --> compressed public key

Total: 200 Maddresses/s.

Potentially our generator could compute 2 more addresses (not at the same time) from the same x/y, because the opposite point (x,-y) shares the same x and gives 2 more keys:

"04x(-y)"  --> uncompressed public key
"03x / 02x" --> compressed public key

Note: SHA256 should take about half the time when applied only to strings like "02x" or "03x" (33 bytes, a single SHA256 block) compared with "04xy" (65 bytes, two blocks).
That's why oclvanitygen seems faster than LBC.

Essentially, the cost of computing an uncompressed public key is higher than that of a compressed key (because of y and because of SHA256).
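
To make the counting above concrete, here is a minimal sketch (my own toy code, not the LBC or oclvanitygen generator) showing that both compressed candidates 02||x and 03||x come from the same x, while the uncompressed key 04||x||y also needs y; the two parities simply belong to the two opposite points (x,y) and (x,-y), i.e. to the keys k and n-k. The example point is just 1*G, and hashlib is assumed to expose ripemd160 in your OpenSSL build.

Code:
import hashlib

def hash160(data: bytes) -> bytes:
    # Bitcoin's address hash: RIPEMD160(SHA256(pubkey))
    return hashlib.new('ripemd160', hashlib.sha256(data).digest()).digest()

# illustrative point only: the x, y of 1*G on secp256k1
x = bytes.fromhex('79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798')
y = bytes.fromhex('483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8')

candidates = {
    '02||x    (compressed, even y)': b'\x02' + x,      # 33 bytes -> one SHA256 block
    '03||x    (compressed, odd y) ': b'\x03' + x,      # 33 bytes, same x, no y needed
    '04||x||y (uncompressed)      ': b'\x04' + x + y,  # 65 bytes -> two SHA256 blocks
}
for label, pubkey in candidates.items():
    print(label, hash160(pubkey).hex())
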
1266  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: April 03, 2017, 06:43:37 PM
1.25 Gkeys/sec
Shocked

How? GPU-client on dedicated servers?

ofc not ... lots and lots of CPUs ... no GPU for me Sad

Could you share with us some information about how you got this speed?  Is it expensive?

This pool could reach a very high speed if there were more people able to use a lot of CPUs like you do.   Wink
1267  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: April 03, 2017, 04:20:15 PM
Something like that?

https://www.leadergpu.com/
1268  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: April 03, 2017, 02:16:40 PM
1.25 Gkeys/sec
Shocked

How? GPU-client on dedicated servers?
1269  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: April 02, 2017, 12:34:18 PM
Am I the only one stuck at 12 Mkeys/s?  Angry
1270  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: April 02, 2017, 09:54:59 AM

1 answer Cheesy

AWS is the shittiest cloud that has ever existed Smiley

What do you suggest as an alternative?
1271  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: April 02, 2017, 09:11:07 AM
I'm using t = 20

Code:
{
    "cpus":   64,
    "id":     "arulbero",
    "secret": "xxxxxxxxxxxxxx",
    "time":   20
}
1272  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: April 02, 2017, 07:51:55 AM
I'm using AWS instances to run the LBC client (CPU-only version).

I chose 2 instance types (https://aws.amazon.com/ec2/instance-types/):

1) m4.4xlarge  (16 vCPUs)   about $0.13 per hour (spot instance)
2) m4.16xlarge (64 vCPUs)   about $0.55 per hour (spot instance)

generator:  gen-hrdcore-avx2-linux64

1)
Code:
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
stepping : 1
microcode : 0xb000014
cpu MHz : 2300.066
cache size : 46080 KB
physical id : 0
siblings : 16
core id : 0
cpu cores : 8
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bugs :
bogomips : 4600.13
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
....
processor : 15
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz

2)
Code:
$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 79
model name      : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
stepping        : 1
...........
processor       : 63
vendor_id       : GenuineIntel
cpu family      : 6
model           : 79
model name      : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz

The problem is the speed:

1) about 6.3 Mkeys/s                2) about 12 Mkeys/s

Obviously I have 2 different "lbc.json" config files: 1) --> "cpus": 16,   2) --> "cpus": 64.

Why is instance #2 so slow?
1273  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: April 01, 2017, 11:56:38 AM
April Fools' prank?
1274  Bitcoin / Project Development / Re: small GPU improvement on: February 25, 2017, 01:09:54 PM
Did some benchmarking and my M2000M Quadro - a midrange notebook GPU - does 60M of my BTC-optimized hash160 code + bloom-check per second. That's enough horsepower to check 30 Mkeys (uncompressed + compressed) per second.
So how do I get from my current 7 Mkeys/s to 30 Mkeys/s?

I'd like to pursue some of the ideas arulbero threw in on ECC pubkey generation, but after having looked at his code and also at things like "supervanitygen" (which is now officially less than half the speed of LBC), it's pretty clear I will have to come up with something on my own.
And also that I will have to move ECC to the GPU.

Rico

CPU + GMP library: on your PC you could get 16.7M keys in 4 s, or 16.7M in 2 s with the complement, or 16M in 1 s with the endomorphism.
My function "double_add" now takes 6 s on my CPU (on your PC I guess it takes less than 3 s), so 4 s is a reasonable estimate. If the goal is 15 Mkeys/s, CPU + GMP library are enough; otherwise you have to implement an efficient representation of the field elements (like the secp256k1 library does) and/or use the GPU.

About GPU and multiprecision library:
https://pdfs.semanticscholar.org/d807/b453c7f10bc547971a9344e81a88af934ad0.pdf
Quote
In this paper, we present our design and implementation of a multiple-precision integer library for GPUs which is implemented by CUDA. We report our experimental results which show that a significant speedup can be achieved by GPUs as compared with the GNU MP library on CPUs
1275  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: February 19, 2017, 07:31:35 PM
I was thinking: if SHA256/RIPEMD160 is too fast compared to CPU pubkey generation, you could search for P2SH "addresses" too.

For example, with only 3 pubkeys you could generate about 20 addresses (probably more); a sketch of how such an address is built follows the list:

6
Code:
{2 [pubkey1] [pubkey2] [pubkey3] 3 OP_CHECKMULTISIG}

+6
Code:
{1 [pubkey1] [pubkey2] 2 OP_CHECKMULTISIG}

+3
Code:
{1 [pubkey1] 1 OP_CHECKMULTISIG}


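A minimal sketch (mine, not LBC code) of how one of these P2SH "addresses" would be built: serialize the redeem script, take its hash160, prepend the P2SH version byte 0x05 and Base58Check-encode it. The pubkey is just 1*G compressed and purely illustrative, and hashlib is assumed to provide ripemd160.

Code:
import hashlib

B58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def hash160(b: bytes) -> bytes:
    return hashlib.new('ripemd160', hashlib.sha256(b).digest()).digest()

def base58check(payload: bytes) -> str:
    # append a 4-byte double-SHA256 checksum, then convert to base 58
    data = payload + hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    num, out = int.from_bytes(data, 'big'), ''
    while num:
        num, r = divmod(num, 58)
        out = B58[r] + out
    return '1' * (len(data) - len(data.lstrip(b'\x00'))) + out

pubkey1 = bytes.fromhex(
    '0279BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798')

# {1 [pubkey1] 1 OP_CHECKMULTISIG}:  OP_1 <push 33> pubkey1 OP_1 OP_CHECKMULTISIG
redeem_script = b'\x51' + bytes([len(pubkey1)]) + pubkey1 + b'\x51' + b'\xae'

print(base58check(b'\x05' + hash160(redeem_script)))   # the P2SH address
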
1276  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: February 14, 2017, 07:44:49 PM
I am performing some tests on the endomorphism.

Let me recall the idea: we would like to generate

a) 1G,  2G,  3G,  ......., kG, .......... , 2^160G
b) 1G',  2G', 3G', ......., kG', .........., 2^160G'    where G' = lambda*G
c) 1G'', 2G'', 3G'', ......., kG'',.........., 2^160G''   where G'' = lambda^2*G

We are sure that each row has distinct elements, because G, G', G'' have order n. But of course we cannot be sure that an element of b), for example, is not also an element of a). If we generated n keys instead of just 2^160, we would get the entire group of all n points, and then all 3 rows would have the same elements; only the order would differ.

But we have to generate only "a few" elements.
Let's look at rows a) and b) and at the relation between 2 corresponding elements: kG' = k*(lambda*G) = lambda*(kG). Where do these elements of b) lie?

My guess is:
multiplication by lambda produces 2^160 elements of b) evenly distributed over the whole key space (keys with respect to the generator G).

If that were true, how often would we get a "collision" (the same key computed in 2 distinct rows) between the 2 rows?
If the keys of row b) are indeed evenly distributed, the probability for each new key of b) to fall in the range 1-2^160 should be 2^160/2^256 = 1/2^96. If we generated 2^160 elements, we'd get about 2^64 collisions.

To test this hypothesis, I generated 2^30 keys of row b) (1*lambda, 2*lambda, 3*lambda, ..., 2^30*lambda); none of these fell in the range (1, 2^160), so I checked how many fell in larger ranges, for example (1, 2^238): there I got about 2^12 'collisions' (2^238/2^256 * 2^30 = 2^12). So my hypothesis seems to be confirmed by these results.

In summary, since we have to generate only 2^160 keys, we can accept (but obviously it's up to you) one duplicated key every 2^96, i.e. only 16 'collisions' in the first 2^100 keys.

A question remains: do you want to generate keys that lie outside your initial range at all? In case of a collision, how could somebody prove to you that it is their key, since that key is indistinguishable from the others?

If you would rather drop the endomorphism, remember that your generation speed will be halved (the cost goes from 1.1M to 2.1M per point).
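
Since I will be testing this anyway, here is a small self-contained sketch (my own toy code, not the LBC generator) that checks the relation behind rows b) and c): lambda*(kG) has coordinates (beta*x, y). The lambda/beta values are the constants published in libsecp256k1 (an assumption worth re-verifying; the asserts at least confirm they are non-trivial cube roots of 1). Requires Python 3.8+ for pow(x, -1, p).

Code:
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)
LAM  = 0x5363AD4CC05C30E0A5261C028812645A122E22EA20816678DF02967C1B23BD72
BETA = 0x7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE
assert pow(LAM, 3, n) == 1 and LAM != 1    # cube root of 1 mod n
assert pow(BETA, 3, p) == 1 and BETA != 1  # cube root of 1 mod p

def add(P, Q):
    # naive affine addition on y^2 = x^3 + 7 (None = point at infinity)
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        m = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        m = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (m * m - P[0] - Q[0]) % p
    return (x, (m * (P[0] - x) - P[1]) % p)

def mul(k, P):
    # plain double-and-add, slow but enough for a check
    R = None
    while k:
        if k & 1: R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

k = 12048                                   # any key works; this is the pivot used later
x, y = mul(k, G)
assert mul(LAM * k % n, G) == (BETA * x % p, y)   # lambda*k*G = (beta*x, y)
print("endomorphism OK: one x,y gives the keys k, k*lambda, k*lambda^2")
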
1277  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: February 13, 2017, 06:17:16 PM
CPU only for public keys generation + GPU for sha256/ripemd160?

Exactly. Meanwhile I am at
Code:
real    0m8.561s
user    0m8.093s
sys     0m0.413s

(= 1959955 keys/s per CPU core with GPU support) and memory requirement on GPU a mere 29MB (GPU is bored)

Of the aforementioned 8 seconds, around 6.2 are ECC public key generation (16M uncompressed keys, the compressed key is done @ GPU).

6.2 s: the CPU generates 16.7M public keys (x,y)
1.8 s: the GPU performs SHA256/RIPEMD160 of (x,y) and of (x) <- compressed. What do you mean by "the compressed key is done on the GPU"? Do you use 1 or 2 compressed keys? The x is always the same and you don't need to compute y, so you can generate 2 compressed keys for each uncompressed one. Do you generate 33M addresses every 8 s, or 50M addresses?

Anyway, at the moment the CPU is the bottleneck; the GPU does its work at least 3x faster than the CPU...
1278  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: February 13, 2017, 05:11:28 PM
Hi,

Unoptimized CPU/GPU hybrid generator. 1st successful run on 1 CPU core with Nvidia GPU in tandem: 1811207 keys/s

CPU only for public key generation + GPU for SHA256/RIPEMD160? And why has the pool performance dropped in the meantime?


I have a new version of the ecc_for_collider:

1) + complement private keys

2) + comments

https://www.dropbox.com/s/3jsxjy7sntx3p4a/ecc_for_collider07.zip?dl=0

The file foo.py performs 16.4M dummy multiplications, just to put into perspective the efficiency of the public key generation in the script gen_batches_points07.py:

main_batch --> (x,y)            3.5M + 1S per point

batch2 --> (beta*x, y)          1M per point

batch3 --> (beta^2*x, y)        1M per point

batch_minus --> (x,-y), (beta*x,-y), (beta^2*x,-y)   0M and 0S

Total: about 1.1M per point!
If you know the performance of the field multiplication in your C code, you can get an idea of the speed you could reach. How long does your C code take to perform 16.4M multiplications (operands: big numbers, multiplied mod p)?

In the next few days I want to run some tests on the endomorphism, just to be sure that everything is OK (for example, we'd like to avoid computing the same key twice).
1279  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: February 11, 2017, 09:09:40 PM
Imagine you want to generate a batch from 10000 to 14096 (the script actually generates batches of 4097 points).

First you generate the key k = 12048 (we always start from the middle point, to exploit the symmetry); this is the only point (a pivot point) of the batch that we get with the slower function mult

Code:
... k ...  <-- one batch, only one key k

jkx,jky,jkz = mul(k,Gx,Gy,1)
invjkz = inv(jkz,p)
(kx,ky) = jac_to_aff(jkx, jky, jkz, invjkz)


k can be any number greater than 2048 (otherwise, if k=3 for example, kG+3G gives an error because you would be trying to use the addition formula instead of the doubling...). The first batch you can create with this script goes from 1 to 4097; the start key in that case would be k = 2049.

Then the script generates three batches; each batch has 1 point + 2048 pairs of points:

first batch: this is the batch you are most interested in, because it has 4097 points in your range, including the point 12048G:

(12048),(12048+1,12048-1),(12048+2,12048-2),....,(12048+2048=14096,12048-2048=10000)

the script computes this batch with the function double_add_P_Q_inv

Element #0 of the list is always kG, element #1 is the pair kG+1G, kG-1G, element #2 is the pair kG+2G, kG-2G, and so on --> element #2048 is the pair kG+2048G, kG-2048G

Code:
batch = batch + list(map(double_add_P_Q_inv,kxl[1:],kyl[1:],mGx[1:],mGy[1:],kminverse[1:]))	

Batches 1 and 2: these keys are not in your range; here we use the endomorphism:

batch1:
(12048*lambda), ((12048+1)*lambda,(12048-1)*lambda), ((12048+2)*lambda,(12048-2)*lambda),  ...., (10000*lambda,14096*lambda)

batch2:
(12048*lambda^2),   ((12048+1)*lambda^2, (12048-1)*lambda^2),   ((12048+2)*lambda^2, (12048-2)*lambda^2),  ....,  (14096*lambda^2, 10000*lambda^2)

EDIT:
to make sure work still is distributable / parallelizable and the bookkeeping still being sane.
You don't need to keep track of each key: in my opinion you only have to store a single private key for the 3 batches, and you can think of the key in the middle of the batch as a special seed. 99.9999% of the batches don't match any funded address, so only when a match occurs do you have to regenerate the entire 3 batches from this single seed to recover the correct private key (a small sketch of this recovery is below). Batches 1 and 2 are sequences of keys all different from each other, so you can be sure you are not wasting your computational effort. I'm almost sure about the last sentence: there can't be more than three points with the same y, so it is not possible to check the same key twice. Note that the 3 batches are related; they must be computed together.

Imagine you know that the pool has searched from key 1 to 2^50 so far; then you know that it has also searched the keys 1*lambda, 2*lambda, 3*lambda, ..., 2^50*lambda (mod n), and the keys 1*lambda^2, 2*lambda^2, 3*lambda^2, ..., 2^50*lambda^2 (mod n).
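
For the bookkeeping, a tiny hypothetical helper (the names and the hit format are mine, not the LBC client's): given only the pivot seed k and the position of a hit, the matching private key is reconstructed on the spot.

Code:
n   = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
LAM = 0x5363AD4CC05C30E0A5261C028812645A122E22EA20816678DF02967C1B23BD72   # lambda

def recover_key(k, which_batch, i, sign):
    # which_batch: 0 = plain batch, 1 = lambda batch, 2 = lambda^2 batch
    # i: offset 0..2048 from the pivot; sign: +1 for k+i, -1 for k-i
    key = (k + sign * i) % n                    # position inside the plain batch
    return key * pow(LAM, which_batch, n) % n   # map into the endomorphism batch

# e.g. a hit at offset 7 on the "minus" side of the lambda^2 batch:
print(hex(recover_key(12048, 2, 7, -1)))

If the (x,-y) complement batches were used as well, a hit there would simply map to n minus the value above.
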


05 runs nearly 22 seconds for 16M keys on my notebook. This is now only 3.5 times slower than what LBC optimized C version needs for 16M keys. I don't dare to estimate what optimized C code can make of this.

 Shocked

I dare: if you use the complement too, you can generate 16M keys in less than half a second (with the CPU; I don't know about the GPU).

Considering that your current code spends 6M + 1S per point just for the transition from Jacobian to affine coordinates, and that you are using J+J --> J for each addition (12M + 4S), your current cost should be 18M + 5S per point.

Let's say 1S = 0.8M; then you have about 22M per point.

If instead you are using J+A --> J for the addition (8M + 3S), then you have about 17.2M per point.

My code uses 3.5M + 1S for each point of the first batch, and only 1M for each point of the other 2 batches.
So the average is 5.5/3 = 1.83M + 0.33S per point, let's say about 2.1M per point.


Now your speed is 16M/6s = 2.7 Mkeys/s for each CPU core.

If you could achieve an 8x - 10x improvement, let's say 8x, you could do at least 21 Mkeys/s. If you use (X,Y) --> (X,-Y) too, 42 Mkeys/s. Let's say at least 40 Mkeys/s per core, 15x your current speed.
With an 8-core CPU, you could generate more keys than your entire pool handles at this moment.
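
The arithmetic above in one place, as I count it (squarings weighted as 0.8M, all costs per generated point; the numbers are only the estimates from this post, not measurements):

Code:
S = 0.8                              # 1S ~ 0.8M

jac_to_aff = 6 + 1 * S               # Jacobian -> affine conversion per point
add_JJ     = 12 + 4 * S              # J + J -> J addition
add_JA     = 8 + 3 * S               # J + A -> J (mixed) addition

current_JJ = jac_to_aff + add_JJ     # ~22.0 M/point
current_JA = jac_to_aff + add_JA     # ~17.2 M/point

# batched code: 3.5M + 1S on the main batch, 1M each on the two endomorphism batches
batched = (3.5 + 1 * S + 1 + 1) / 3  # ~2.1 M/point

print(current_JJ, current_JA, round(batched, 2))
print("expected speed-up vs J+J:", round(current_JJ / batched, 1), "x")   # ~10x
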


Maybe tomorrow I'll add more comments to the code. Anyway, read this post again; I edited it.

EDIT2:

this is a version with more comments:

https://www.dropbox.com/s/6o2az7n6x0luld4/ecc_for_collider06.zip?dl=0
1280  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: February 11, 2017, 08:04:21 PM
I tried it. On my notebook it takes

Code:
real    0m26.493s
user    0m26.490s
sys     0m0.000s

for the ~4.1mio keys (1000 * 4096). And

Code:
real    1m47.661s
user    1m47.657s
sys     0m0.003s

for 16M keys. So around ~160 000 keys/s

OK, I don't know how to use lists in Python  Grin

This version is faster (at least 50%) with only a few modifications:

https://www.dropbox.com/s/wrbolxzbiu3y9su/ecc_for_collider04.zip?dl=0


Another update of the library with endomorphism:

https://www.dropbox.com/s/7v5i36n4k6d849b/ecc_for_collider05.zip?dl=0