unknownhostname
Member
Offline
Activity: 62
Merit: 10
January 19, 2017, 11:11:55 AM
root@server-02:~# ./LBC -h
JSON not found - installing it.
Can't locate JSON.pm in @INC (you may need to install the JSON module) (@INC contains: /etc/perl /usr/local/lib/perl/5.18.2 /usr/local/share/perl/5.18.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.18 /usr/share/perl/5.18 /usr/local/lib/site_perl .) at ./LBC line 88.
BEGIN failed--compilation aborted at ./LBC line 88.

I tried to google it and installed all kinds of stuff ... nothing helped, same problem every time. The system is Ubuntu 14.04 LTS.
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
January 19, 2017, 03:53:50 PM
root@server-02:~# ./LBC -h
JSON not found - installing it.
Can't locate JSON.pm in @INC (you may need to install the JSON module) (@INC contains: /etc/perl /usr/local/lib/perl/5.18.2 /usr/local/share/perl/5.18.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.18 /usr/share/perl/5.18 /usr/local/lib/site_perl .) at ./LBC line 88.
BEGIN failed--compilation aborted at ./LBC line 88.

tried to google it and installed all kinds of stuff ... nothing helped, same problem every time. The system is Ubuntu 14.04 LTS.

I take it you've read https://lbc.cryptoguru.org/man/admin#on-linux and that gcc is installed. If so, try to install the JSON module manually and, if it fails, paste the results.

Rico
arulbero
Legendary
Offline
Activity: 1915
Merit: 2074
January 19, 2017, 04:46:05 PM Last edit: January 19, 2017, 06:58:31 PM by arulbero
I was thinking: why do we calculate

G, G+G, G+G+G, G+G+G+G, ..., 2^160*G    (private keys: 1, 2, 3, 4, ..., 2^160)

instead of

G, 2*G, 4*G, 8*G, 16*G, ..., 2^(2^160)*G    (private keys: 1, 2, 4, 8, 16, ..., 2^(2^160) mod n)?

From here I understand that point doubling is much cheaper than point addition (the case 2J is the optimal one; the secp256k1 library uses J+A). I know that our points form a finite cyclic group that is isomorphic to the additive group of Z/nZ. So if we use the second succession instead of the first one, we probably cannot obtain the entire group. But you need only 2^160 points, not all 2^256. Am I wrong?
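As an aside, the private keys this doubling succession would visit are easy to enumerate. A standalone Python sketch (illustration only, nothing to do with the LBC client; n is the secp256k1 group order):

```python
# Private keys corresponding to the doubling succession G, 2*G, 4*G, ...
# are simply k_i = 2^i mod n.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

keys = [pow(2, i, n) for i in range(8)]
print(keys)          # [1, 2, 4, 8, 16, 32, 64, 128]

# Even the key after 2^160 doublings is computable almost instantly,
# because pow() squares once per bit of the exponent:
k_far = pow(2, 2**160, n)
assert 0 < k_far < n
```

Which also illustrates the structural problem: every key in the succession is fully determined by its index i, so the set visited is a one-dimensional slice of the keyspace rather than a contiguous interval.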
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
January 20, 2017, 03:54:17 PM Last edit: January 20, 2017, 04:07:44 PM by rico666
I was thinking:
why we calculate: G, G+G, G+G+G, G+G+G+G, ...., 2^160*G (private keys: 1,2,3,4,...., 2^160)
instead of: G, 2*G, 4*G, 8*G, 16*G, ......... 2^(2^160)*G ? (private keys: 1,2,4,8,16, .... 2^(2^160) mod p) ... I understand that point doubling is much cheaper than point addition (the case 2J is the optimal one, secp256k1 library uses J+A)
I know that our points form a finite cyclic group that is isomorphic to the additive group of Z/nZ. So if we use the second succession instead of the first one, we probably cannot obtain the entire group. But you need only 2^160 points, not all 2^256.
Am I wrong?
There are two problems: a) the resulting private keys from doubling become statistically distinguishable from "real" non-pathological(*) private keys, and b) the whole interval arithmetic for parallelization of the distribution is moot.

At the moment, the client precomputes a table with 4096 (uncompressed) public keys with basically this code:

int hrdec_pubkey_batch_incr(uchar (*pub)[65], uchar start[32])
{
    unsigned int i;

    hrdec_mult_gen2(&batchpj[0], start);
    for (i = 1; i < 4096; ++i) {
        hrdec_gej_add_ge(&batchpj[i], &batchpj[i-1], &incr_a);   // increment public key
    }

    hrdec_ge_set_all_gej_new(batchpa, batchpj);   // convert all jacobian coordinates to affine

    for (i = 0; i < 4096; ++i) {                  // emit uncompressed public keys
        pub[i][0] = 0x04;
        secp256k1_fe_normalize_var(&batchpa[i].x);
        hrdec_fe_get_b32(pub[i] + 1, &batchpa[i].x);
        secp256k1_fe_normalize_var(&batchpa[i].y);
        hrdec_fe_get_b32(pub[i] + 33, &batchpa[i].y);
    }
    return 0;
}
static void hrdec_ge_set_all_gej_new(secp256k1_ge *r, const secp256k1_gej *a)
{
    secp256k1_fe *az;
    secp256k1_fe u;
    unsigned int i;

    az = (secp256k1_fe *)malloc(sizeof(secp256k1_fe) * 4096);
    az[0] = a[0].z;
    for (i = 1; i < 4096; i++) {
        secp256k1_fe_mul(&az[i], &az[i - 1], &a[i].z);   // az[i] = z_0 * ... * z_i
    }

    secp256k1_fe_inv_var(&u, &az[--i]);   // i is 4095 here; the only field inversion

    /* Walk back down, turning each prefix product into the inverse of a
     * single z. Post-decrement (i--, not --i) so the i == 0 pass still
     * runs and az[1] / u receive their final values. */
    while (i--) {
        secp256k1_fe_mul(&az[i + 1], &az[i], &u);   // az[i+1] = 1 / z_{i+1}
        secp256k1_fe_mul(&u, &u, &a[i + 1].z);      // u = 1 / (z_0 * ... * z_i)
    }
    az[0] = u;                                      // u = 1 / z_0 at this point

    for (i = 0; i < 4096; i++) {
        r[i].infinity = a[i].infinity;
        secp256k1_ge_set_gej_zinv(&r[i], &a[i], &az[i]);
    }
    free(az);
}
Although I have already optimized the sh*t out of this code, I still feel it's way too clumsy as is and could potentially be improved a lot. If anyone has any suggestions, I'm all ears.

(*) If the pool finds - as it currently does by design - a private key that has e.g. 200 leading zeroes, you can assume that this private key is either "the original key" of an address generated by some bad software or as a test, or - by a very marginal chance - it was generated correctly but by accident ended up like that. If - on the other hand - your private key has around 128 bits set/clear, you can assume it is a correctly generated original key.

Rico
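For readers following along: the trick in hrdec_ge_set_all_gej_new is the batch-inversion pattern (Montgomery's trick). A standalone Python sketch of the same idea, with plain ints standing in for secp256k1_fe (illustration only, not the generator's code):

```python
# One modular inversion serves a whole batch, via prefix products.
p = 2**256 - 2**32 - 977            # secp256k1 field prime

def batch_inverse(zs):
    """Invert every element of zs mod p with a single pow(..., -1, p)."""
    prefix = [zs[0]]
    for z in zs[1:]:
        prefix.append(prefix[-1] * z % p)     # prefix[i] = z0 * ... * zi
    u = pow(prefix[-1], -1, p)                # the only inversion
    inv = [0] * len(zs)
    for i in range(len(zs) - 1, 0, -1):
        inv[i] = prefix[i - 1] * u % p        # = 1 / z_i
        u = u * zs[i] % p                     # u = 1 / (z0 * ... * z_{i-1})
    inv[0] = u
    return inv

zs = [123456789 + i for i in range(4096)]
inv = batch_inverse(zs)
assert all(z * w % p == 1 for z, w in zip(zs, inv))
```

So for 4096 Jacobian points the cost is one inversion plus roughly three extra multiplications per point, which is why the C code above inverts only once per batch.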
arulbero
Legendary
Offline
Activity: 1915
Merit: 2074
January 20, 2017, 04:36:29 PM
There are two problems: a) the resulting private keys from doubling become statistically distinguishable from "real" non-pathological(*) private keys, and b) the whole interval arithmetic for parallelization of the distribution is moot. At the moment, the client precomputes a table with 4096 (uncompressed) public keys with basically this code:
....
(*) If the pool finds - as it currently does by design - a private key that has e.g. 200 leading zeroes, you can assume that this private key is either "the original key" of an address generated by some bad software/as test or by a very marginal chance it got generated correctly but by accident ended up like that. If - on the other hand - your private key contains around 128 bits set/clear, you can assume it is a correctly generated original key.
a) The odds are about the same: there is only a 1/2^(256-160) = 1/2^96 chance that it is an original key. 200 leading zeroes or 2^(2^87) have exactly the same chance of happening.

b) I see your point. Furthermore, finding the inverse of an element in a finite field is most of the workload, much more than multiplication.
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
January 20, 2017, 06:08:53 PM Last edit: January 20, 2017, 06:23:06 PM by rico666
a) The odds are about the same, only 1/2^(256-160) = 1/2^96 that it is an original key. 200 leading zeroes or 2^(2^87) have exactly the same chance of happening.
If you look at the statistical distribution of the Hamming weight of the private keys, how can you say that 200 leading zeroes are not an exceptional (= rare) event? It is true that, given one (unknown) original private key, the chance of finding a supposedly alternate key that is the original is 1/2^96 in both search spaces. But the hypothesis of this actually being an alternate key is much more probable in the "leading zeroes" case.

2^256       = 115792089237316195423570985008687907853269984665640564039457584007913129639936
ncr(256;128) = 5768658823449206338089748357862286887740211701975162032608436567264518750790
ncr(256;56)  = 1529810590453037906058619994778570990615168461145717234720
2^56        = 72057594037927936
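The counts above can be reproduced exactly with math.comb, which makes the rarity gap explicit (a standalone illustration of the argument, not pool code):

```python
import math

total = 2**256
w128 = math.comb(256, 128)   # keys with Hamming weight 128
w56  = math.comb(256, 56)    # keys with only 56 bits set (e.g. 200 zero bits)

assert w128 / total > 0.04   # around 5% of all keys: unremarkable
assert w56 / total < 1e-18   # vanishingly rare: a conspicuous "pathological" key
```

A randomly generated key lands near weight 128 almost always, while 200 leading zeroes essentially never happens by accident.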
Rico
arulbero
Legendary
Offline
Activity: 1915
Merit: 2074
January 21, 2017, 01:43:11 PM Last edit: January 21, 2017, 02:01:26 PM by arulbero
We know that:

private key k   --> public key: (X,Y)  = k*G
private key n-k --> public key: (X,-Y) = (X, p-Y)   (no multiplication/inversion needed anymore)

Example:

k: 0xf3bb9dc5a (65426545754) -->
X: 0x6ee5f74076df253037be5956bd169d1ce1a2e586cf3b233df6f9f4a31d9b955b
Y: 0xf9680b09669f0df113b9b0bb4d37b531858a73913ac38025a31d322801d660f0
--> Address: 1LjMTsSMPrXoFFbiSExHpXsBuDamRxjs8R

n-k: 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e7d947c64e7 (115792089237316195423570985008687907852837564279074904382605163141452734948583) -->
X': 0x6ee5f74076df253037be5956bd169d1ce1a2e586cf3b233df6f9f4a31d9b955b   (X' = X)
Y': 0x697f4f69960f20eec464f44b2c84ace7a758c6ec53c7fda5ce2cdd6fe299b3f    (Y' = p - 0xf9680b09669f0df113b9b0bb4d37b531858a73913ac38025a31d322801d660f0)
--> Address: 1MutomTTupZ1XVvzUmzivGfWH1HCP9Y5wE

So you can calculate:

(1*G, (n-1)*G), (2*G, (n-2)*G), (3*G, (n-3)*G), ..., (k*G, (n-k)*G), ..., (2^159*G, (n-2^159)*G)

with k from 1 to 2^159 and n-1 down to n-2^159. Same number of keys, half the time, double the speed.
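The symmetry is easy to verify with a tiny standalone Python model of secp256k1 (illustration only, nothing to do with the pool's C generator; pow(x, -1, p) needs Python 3.8+):

```python
# Minimal affine secp256k1, just to check that (n-k)*G = (X, p-Y).
p = 2**256 - 2**32 - 977                       # field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P, Q):
    """Affine point addition on y^2 = x^3 + 7 over GF(p); None = infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                            # P + (-P)
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P=G):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

k = 0xF3BB9DC5A                                # the example key from the post
X, Y = ec_mul(k)
Xn, Yn = ec_mul(n - k)
assert Xn == X and Yn == (p - Y) % p           # negated key: same X, mirrored Y
```

So once k*G is known, the point for n-k costs only one field subtraction for its Y coordinate.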
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
January 21, 2017, 05:00:41 PM
We know that:
private key k : --> public key: (X,Y) = k*G private key n-k : --> public key: (X,-Y) = (X,p-Y) (no need for multiplication/inverse anymore)
...
So you can calculate:
(1*G,(n-1)*G), (2*G,(n-2)*G), (3*G,(n-3)*G), ....., (k*G,(n-k)*G), .... , (2^159*G, (n-2^159)*G)
k from 1 to 2^159 and n-1 downto n-2^159
Same amount of keys, half time, double speed.
Very intriguing, both for the effective reduction in computational effort per key and because it remains well manageable from the interval-arithmetic side for keeping track of "computed areas", since both ranges are contiguous. I'll certainly have a more detailed look at this.

Rico
arulbero
Legendary
Offline
Activity: 1915
Merit: 2074
January 21, 2017, 11:19:08 PM Last edit: January 22, 2017, 08:41:12 AM by arulbero
At the moment, the client precomputes a table with 4096 (uncompressed) public keys with basically this code:

int hrdec_pubkey_batch_incr(uchar (*pub)[65], uchar start[32])
{
    unsigned int i;

    hrdec_mult_gen2(&batchpj[0], start);
    for (i = 1; i < 4096; ++i) {
        hrdec_gej_add_ge(&batchpj[i], &batchpj[i-1], &incr_a);   // increment public key
    }

    hrdec_ge_set_all_gej_new(batchpa, batchpj);   // convert all jacobian coordinates to affine

    for (i = 0; i < 4096; ++i) {                  // compute uncompressed public keys
        pub[i][0] = 0x04;
        secp256k1_fe_normalize_var(&batchpa[i].x);
        hrdec_fe_get_b32(pub[i] + 1, &batchpa[i].x);
        secp256k1_fe_normalize_var(&batchpa[i].y);
        hrdec_fe_get_b32(pub[i] + 33, &batchpa[i].y);
    }
    ................
    for (i = 0; i < 4096; i++) {
        r[i].infinity = a[i].infinity;
        secp256k1_ge_set_gej_zinv(&r[i], &a[i], &az[i]);
    }
    free(az);
}
Although I have already optimized the sh*t out of this code, I still feel it's way too clumsy as is and potentially could be improved a lot. If anyone has any suggestions, I'm all ears.

The code from libsecp256k1 you are using has a different purpose than yours:

"The process of cracking Bitcoin brain wallets is to repeatedly generate public keys using guessed passwords. The key generation method, as we described in 2.1.1, is to compute Q = dP. Here d is a SHA256 hash of the generated password, P is a fixed point which is the base point G for secp256k1... The time cost for computing one public key given a random private key is 47.2 us." (source)

You don't have to generate one public key from a random private key, nor a lot of public keys from a lot of sparse random keys: you have to generate a batch of "consecutive" public keys, a much more specific (and less expensive) task. Furthermore, you don't have to worry about side-channel attacks and other security issues.

How do you generate each public key (in batchpj)? And how do you convert jacobian coordinates to affine (hrdec_ge_set_all_gej_new(batchpa, batchpj))? How many operations (multiplications, inverses) are involved?

Once you have k*G, you only have to add 1*G in each step: k*G --> (k+1)*G --> (k+2)*G --> ..., so the effort should be focused on optimizing the addition '+G', i.e. minimizing the number of operations for each addition. The computation of a multiplicative inverse is the most time-consuming operation (1 inverse = 22 multiplications - see table 2), so I would start cutting there.

Goal: fewer inverses per public key, or better: fewer inverses per address (IA).

For example, given field elements a and b:

a*b, inv(a*b) --> inv(a) = b*inv(a*b), inv(b) = a*inv(a*b)
--> 1 inverse + 3 multiplications instead of 2 inverses

So:

private key k   --> k*G     = (X1,Y1,Z1)                           (jacobian coordinates), no inverse
private key k+1 --> k*G + G = (X1,Y1,Z1) + (XG,YG,1) = (X2,Y2,Z2)  (jacobian coordinates), no inverse

Z = Z1*Z2, inv(Z) --> inv(Z1) = Z2*inv(Z), inv(Z2) = Z1*inv(Z)     only 1 inverse

jacobian (X1,Y1,Z1) --> affine (X1*inv(Z1)^2, Y1*inv(Z1)^3)
jacobian (X2,Y2,Z2) --> affine (X2*inv(Z2)^2, Y2*inv(Z2)^3)

2 private keys --> 2 public keys (only 1 inverse) --> IA = 1/2 = 0.5

Better: 2 private keys --> 4 private keys (k, n-k, k+1, n-k-1) --> 4 uncompressed public keys (only 1 inverse) --> IA = 0.25

Much better: 2 private keys --> 4 private keys --> 8 public keys, 4 compressed and 4 uncompressed, 8 addresses (only 1 inverse) --> IA = 0.125
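The pairing trick above checks out in a few lines of Python (standalone illustration with plain ints, not field code):

```python
# From inv(a*b), both inv(a) and inv(b) follow with 3 multiplications.
p = 2**256 - 2**32 - 977            # secp256k1 field prime

a, b = 123456789, 987654321
inv_ab = pow(a * b % p, -1, p)      # the single inversion
inv_a = b * inv_ab % p              # inv(a) = b * inv(a*b)
inv_b = a * inv_ab % p              # inv(b) = a * inv(a*b)

assert a * inv_a % p == 1
assert b * inv_b % p == 1
```

At 1 inverse = 22 multiplications, trading an inversion for 3 multiplications is a clear win, and the same chaining generalizes to any batch size.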
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
January 22, 2017, 09:57:39 AM
Hi, first let me thank you for your input, I very much appreciate it. After developing this thing for over 6 months, I am of course aware of several things (like not having to worry about side-channel attacks and other security issues). I also believe there is not much left of the original brainflayer code in the HRD-core generator. The original brainflayer gave me a keygen rate of 520000 keys/s, whereas I am now at 730000 keys/s.

The time cost for computing one public key given a random private key is 47.2 us.

At the moment the time cost for generating 16777216 uncompressed public keys is ~6 seconds on my machine, which - if I compute correctly - boils down to ~0.36 us per private key. The other 16 seconds are required for creating a compressed key (negligible), for computing the hash160 of both the uncompressed and the compressed key, and for checking these hash160s against the bloom filter. Of these 16 seconds, 7s are the two RIPEMD160 computations and 8s the two SHA256 computations. The rest is bloom, startup/init etc.

How do you generate each public key (in batchpj)? And how do you convert jacobian coordinates to affine (hrdec_ge_set_all_gej_new(batchpa, batchpj))? How many operations (multiplications, inverses) are involved?

I have posted the code already.

2 private keys --> 2 public keys (only 1 inverse) --> IA = 1/2 = 0.5
Better: 2 private keys --> 4 private keys (k, n-k, k+1, n-k-1) --> 4 uncompressed public keys (only 1 inverse) --> IA = 0.25
Much better: 2 private keys --> 4 private keys --> 8 public keys, 4 compressed and 4 uncompressed, 8 addresses (only 1 inverse) --> IA = 0.125

If I am not entirely mistaken, I compute 1 inverse per 4096 private keys (8192 addresses).

Rico
arulbero
Legendary
Offline
Activity: 1915
Merit: 2074
January 22, 2017, 10:31:32 AM Last edit: January 22, 2017, 10:45:18 AM by arulbero
If I am not entirely mistaken, I compute 1 inverse each 4096 private keys (8192 addresses).
Ok, then I cannot read your code correctly; this piece of code:

for (i = 0; i < 4096; i++)
    ......
    secp256k1_ge_set_gej_zinv(&r[i], &a[i], &az[i]);

doesn't mean "compute 4096 field inversions"! I definitely misunderstood the meaning of that line.

At the moment the time cost for generating 16777216 uncompressed public keys is ~6 seconds on my machine, which - if I compute correctly - boils down to ~0.36 us per private key. The other 16 seconds are required for creating a compressed key (negligible), to compute the hash160 of both the uncompressed and the compressed key and to check these hash160s against the bloom filter. Of these 16 seconds, 7s are the two RIPEMD160 computations and 8s the two SHA256 computations. Rest is bloom, startup/init etc.

The bottleneck is then SHA256 - you need mining hardware.
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
January 22, 2017, 11:11:29 AM
The bottleneck is then SHA256 - you need mining hardware.

I gave SHA256 quite some attention, and there is a puzzling fact about SHA256: I ended up using most of the code from OpenSSL 1.1.0c, as it proved to be faster than the supervanitygen code, which in turn uses the Intel reference implementation from 2012: https://github.com/klynastor/supervanitygen/tree/master/sha256

What puzzles me most is that I should get between 320MB/s and 560MB/s with the Intel reference implementation on my CPU. The input is padded to 64 bytes, so this would mean something between 5 Mhash/s and 8 Mhash/s for compressed keys - at worst 3.3 seconds for the 2^24 compressed keys. Still, the 4.5s I get from OpenSSL is better than what I could get from the 2012 Intel code. So either I am not using the optimized code from Intel well, or there is meanwhile better optimized code (I haven't inspected the OpenSSL implementation, as reading the OpenSSL code is ... let's say even worse than the libsecp256k1 code), or I cannot simply translate 320MB/s into 320MB/s / 64 bytes, or.... Intel had a 2015 write-up about improving OpenSSL performance: https://software.intel.com/en-us/articles/improving-openssl-performance

Basically the LBC generator is now about twice as fast as supervanitygen: it does 730 000 keys/s (supervanitygen 767 000), but contrary to supervanitygen, HRD-core does both uncompressed and compressed keys (supervanitygen only compressed).

Your suggestion of doubling the key rate by using the complement private keys is as of now the only thing that could significantly raise the keygen rate. Maybe I could hit 1 Mkeys/s on one core of my CPU, but I believe this is really it for what can be done with a CPU - unless someone comes up with something really surprising.

Rico
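The MB/s-to-hash-rate conversion works out as follows (simple arithmetic, assuming one 64-byte padded SHA256 block per key, as described above):

```python
block_bytes = 64                      # one padded SHA256 block per candidate key

rate_low  = 320e6 / block_bytes       # 5.0 million hashes/s
rate_high = 560e6 / block_bytes       # 8.75 million hashes/s

seconds_worst = 2**24 / rate_low      # ~3.36 s for 2^24 compressed keys
assert 3.3 < seconds_worst < 3.4
```

Which matches the "at worst 3.3 seconds" estimate for 2^24 keys at the lower throughput figure.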
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
January 23, 2017, 12:40:56 PM
Anyone with an AVX2-capable CPU (Haswell, Broadwell) might want to restart LBC for a generator auto-update.
Your keyrate will go up by ~20%.
Rico
Jude Austin
Legendary
Offline
Activity: 1140
Merit: 1000
The Real Jude Austin
January 23, 2017, 09:46:46 PM
Pool looks to be picking up speed, I like it.
unknownhostname
Member
Offline
Activity: 62
Merit: 10
January 23, 2017, 11:18:38 PM
Pool looks to be picking up speed, I like it.
1: Your maximum speed is 493068 keys/s per CPU core.
2: Your maximum speed is 482198 keys/s per CPU core.
3: Your maximum speed is 484588 keys/s per CPU core.
4: Your maximum speed is 484546 keys/s per CPU core.
5: Will use 24 CPUs.
Best generator chosen: gen-hrdcore-avx2-linux64

(waiting for it to finish, then I will check the speed)
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
January 24, 2017, 07:23:22 AM
If you have access to a Knights Landing CPU ( http://ark.intel.com/products/codename/48999/Knights-Landing), please ping me if you would like to try a generator, I have AVX-512 binaries available. A single Intel Xeon Phi 7290 can give you 50 MKeys/s! That's right - about as much as the total pool speed right now. Rico
unknownhostname
Member
Offline
Activity: 62
Merit: 10
January 24, 2017, 10:32:12 AM
If you have access to a Knights Landing CPU ( http://ark.intel.com/products/codename/48999/Knights-Landing ), please ping me if you would like to try a generator, I have AVX-512 binaries available. A single Intel Xeon Phi 7290 can give you 50 MKeys/s! That's right - about as much as the total pool speed right now. Rico

Intel Xeon Phi 7290s are very rare. GPUs aren't ... you should try GPUs ... I'm sure you could deliver great speed with a GPU. Even with one server I think I could triple the pool speed:

root@soft:~# lshw -C video | grep product:
  product: ASPEED Graphics Family
  product: GK210GL [Tesla K80]
  product: GK210GL [Tesla K80]
  product: GK210GL [Tesla K80]
  product: GK210GL [Tesla K80]
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
January 24, 2017, 11:45:12 AM
GPU's arent ... you should try GPU ... I'm sure you can delivered great speed with GPU
I gave my nose a blow - unfortunately, no GPU client appeared in my handkerchief. Any other suggestions as to how I should "try"?

Rico
unknownhostname
Member
Offline
Activity: 62
Merit: 10
January 24, 2017, 12:54:01 PM
I have no clue ... I'm not a programmer. But I'm sure you are, and you will find a way.
arulbero
Legendary
Offline
Activity: 1915
Merit: 2074
January 24, 2017, 04:14:06 PM Last edit: January 24, 2017, 04:47:46 PM by arulbero
Basically the LBC generator is now about twice as fast as supervanitygen, as it does 730 000 keys/s (supervanitygen 767 000), but contrary to supervanitygen HRD-core does both uncompressed and compressed (supervanitygen only compressed).
Your suggestion about doubling the key rate by using the complement private keys is as of now the only thing that could significantly raise keygen rate. Maybe I could hit 1Mkeys/s on one core on my CPU, but I believe this is really it what can be done with CPU - unless someone can come up with something really surprising.
Looking at the equation y^2 = x^3 + 7:

1) my first suggestion was exploiting the symmetry with respect to the X-axis; in that case I took care of the first term, y^2 --> (X,Y), (X,-Y)

2) now another suggestion (to generate other public keys almost for free, but unfortunately not in your range) concerns the term x^3; you could exploit the endomorphism for your specific task (not to perform a single multiplication k*G faster).

Endomorphism:
--> https://bitcointalk.org/index.php?topic=3238.msg45565#msg45565
--> http://crypto.stackexchange.com/questions/22738/how-do-you-derive-the-lambda-and-beta-values-for-endomorphism-on-the-secp256k1-c

lambda = 0x5363ad4cc05c30e0a5261c028812645a122e22ea20816678df02967c1b23bd72   (lambda^3 = 1 mod n)
beta   = 0x7ae96a2b657c07106e64479eac3434e99cf0497512f58995c1396c28719501ee   (beta^3 = 1 mod p)

private key --> public key

k                 --> k*G            = (X, Y)
(lambda*k) mod n  --> (lambda*k)*G   = (beta*X, Y)     only 2 field multiplications (lambda*k, beta*X)
(lambda^2*k) mod n --> (lambda^2*k)*G = (beta^2*X, Y)  only 2 field multiplications (lambda^2*k, beta^2*X)

(beta*X and beta^2*X are mod p)

In this way you get 3 private keys and 3 public keys, all with the same Y.

3) If you implement 1) + 2) you'll have 6 public keys (+ 6 compressed):

(X,Y), (X,-Y)
(beta*X,Y), (beta*X,-Y)       <-- these points are out of the range 1-2^160
(beta^2*X,Y), (beta^2*X,-Y)   <-- these points are out of the range 1-2^160

and the corresponding 6 private keys:

k, n-k
lambda*k, n-lambda*k          <-- these keys are out of the range 1-2^160
lambda^2*k, n-lambda^2*k      <-- these keys are out of the range 1-2^160

then 12 addresses at a very low computational cost.
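The lambda/beta values quoted above can be checked with the same kind of small standalone Python model (illustration only, Python 3.8+ for pow(x, -1, p)): multiplying the scalar by lambda mod n should multiply only the X coordinate by beta mod p.

```python
# Verify the secp256k1 endomorphism: (lambda*k mod n)*G = (beta*X mod p, Y).
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)
LAM  = 0x5363AD4CC05C30E0A5261C028812645A122E22EA20816678DF02967C1B23BD72
BETA = 0x7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE

def ec_add(P, Q):
    """Affine point addition on y^2 = x^3 + 7 over GF(p); None = infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P=G):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

assert pow(LAM, 3, n) == 1 and pow(BETA, 3, p) == 1   # cube roots of unity

k = 65426545754                                       # example key from the thread
X, Y = ec_mul(k)
assert ec_mul(LAM * k % n) == (BETA * X % p, Y)                        # same Y
assert ec_mul(LAM * LAM * k % n) == (BETA * BETA % p * X % p, Y)       # same Y
```

So the two extra points really do cost only two field multiplications each, exactly as claimed, at the price of landing outside the pool's 1..2^160 search interval.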