981  Bitcoin / Bitcoin Discussion / Re: [POLL] SegWit (BTC) vs Bitcoin Unlimited (BTU): Which Would You Choose? on: March 21, 2017, 09:34:39 AM
Definitely SegWit.

When that is through, give miners their block size increase... and work on LN!

This is the only way to go. Whoever is against this is an enemy of the Bitcoin (EOTB).
Giving the benefit of the doubt, he may not be aware of being an EOTB, so let's clarify:

Network congestion and rising transaction fees are a problem for the whole Bitcoin ecosystem.
Network centralization is a problem for the whole Bitcoin ecosystem.

If miners - who definitely profit from both problems (like a fat Kim Jong in a starving country) - are not interested in alleviating them, they are EOTB.

In my opinion, any measure to bring them to reason is allowed, including changing the PoW to a CPU-only version.
The reactions from e.g. Bitfury's George
https://www.reddit.com/r/Bitcoin/duplicates/60ast5/bitfury_george_pledges_to_sue_all_involved_with/
show very clearly how these people think.

They have established a position as a powerful stakeholder, and they intend to defend it by any means.
=> Therefore, if they will not do what is best for the whole Bitcoin ecosystem (key phrase), the whole Bitcoin community can attack them by any means. Simple as that.


Rico
982  Local / Projektentwicklung / Re: Bitcoin Lotterie - Ein deutsches Projekt mit Bitcoin Jackpot on: March 21, 2017, 08:55:23 AM
Ok, apart from the fact that the amateurish PHP on that site has about a ton of security holes (which I have neither the time nor the inclination to exploit right now):

http://bitcoin-lotterie.de/prize.php -> "The current jackpot stands at: 1.003800000000008 BTC!!!"

Did I miss something, or can Bitcoin now be divided to 15 decimal places (instead of the 8 a satoshi gives you)?
That would indeed be something that justifies 3 (three!!!) exclamation marks!!!1!one!!eleven


http://bitcoin-lotterie.de/about_us.php -> "The Bitcoin lottery can make you a Bitcoin millionaire. And you don't have to do anything for it."

With a 1 BTC jackpot that is of course bold math. If 1 BTC is ever worth 1 MEUR, it may well come true, but then nobody is going to buy a ticket for a million either.

http://bitcoin-lotterie.de/login.php -> "You don't have an 'Acoount' yet?"

Nope, an 'Acoout' I do not have, and that is not going to change anytime soon.


Rico
983  Bitcoin / Project Development / Re: Raping GPUs and having fun on: March 21, 2017, 08:03:11 AM
What is the performance cost of emulating 64 bit as 32 bit?

Does it double the cost? For example, does a 64 bit word emulated as 32 bit use 100% of the GPU while 32 bit words use 50%?

Ok, let me elaborate on this a little and give you some numbers for a better estimate of where we are and where we're going:

In my CPU/GPU combination, one CPU core puts 8% load on the GPU, and that is a situation where a fairly strong CPU meets a midrange GPU (a 2.8 - 3.7 GHz Skylake E3 Xeon firing at a Quadro M2000M - see http://www.notebookcheck.net/NVIDIA-Quadro-M2000M.151581.0.html). With a stronger GPU (a 1080), it's quite possible that one CPU core can put only 5-6% load on the GPU.

The current development version of the generator gives me 9 Mkeys/s with all 4 physical cores running, whereas the published version (the one you can download from FTP) gives 7.5 Mkeys/s.

The main differences are that in the development version the bloom filter search is done on the GPU, and the final affine -> normalization -> 64 bytes step has also moved to the GPU, resulting in an overall speed improvement of about 375000 keys/s per core ((9 - 7.5) Mkeys/s spread over 4 cores).

Up to now, the GPU behaved like a magic wand: giving it the bloom filter work did not raise GPU load, but it raised the key rate. The explanation is that the time the GPU needs for the bloom filter search is basically the time it would otherwise need to transfer the hashed data back to the CPU (which does the bloom filter search in the current public version). The same goes for the affine transformation.

There is nothing left on the CPU except the (heavily optimized) EC computations, so any further speed improvement means pushing those to the GPU.
In terms of time, one 16M block currently takes around 6.25 seconds on my machine (computing 8 blocks takes 50 seconds, which amortizes the startup cost).

So I thought I'd emulate what's going on on the CPU and move the code over piece by piece. Going backwards, the step before that final affine step is the Jacobi -> affine transformation, where you compute the square and the cube of the (inverted) Jacobi Z coordinate and multiply X by the former and Y by the latter. All in all: one field element squaring and 3 FE multiplications.
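
In code, that step looks roughly like the following (a minimal sketch using the hrd256k1_* names from the snippets further down; it assumes az already holds the modular inverse of the Jacobi Z coordinate, and zi2/zi3 are temporaries introduced here for illustration):

Code:
/* Jacobi -> affine: 1 FE sqr, 3 FE mul, as described above. */
hrd256k1_fe zi2, zi3;
hrd256k1_fe_sqr(&zi2, &az);                       /* zi2 = Z^-2 (the square) */
hrd256k1_fe_mul(&zi3, &zi2, &az);                 /* zi3 = Z^-3 (the cube)   */
hrd256k1_fe_mul(&apubkey2.x, &jpubkey.x, &zi2);   /* x = X * Z^-2            */
hrd256k1_fe_mul(&apubkey2.y, &jpubkey.y, &zi3);   /* y = Y * Z^-3            */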

I did that with my 128-bit library (based on 64-bit data types) on the GPU and behold! GPU load went to 100% and the time per block went to 16 seconds! Uh. Operation successful, patient dead.
-> Back to the drawing board.

The same with 32-bit data types currently gives 12% GPU load and 5.4 seconds per block (per CPU core). Very promising, but I'm hitting little/big-endianness brainwarp hell, so I have to figure out how to do it more elegantly.

Also, the new version will demand a more GPU-heavy setup before I can release it. As the bloom filter search is done on the GPU, an additional 512MB of GPU memory is used per process. Running 4 processes on my Maxwell GPU with its 4GB VRAM is just fine (and as the memory can be freed from the CPU part of the generator, it takes only 100MB of host memory), but I also experienced segmentation faults with the Kepler machines on the Amazon cloud.

So the goal is really to have one CPU core being able to put at least 50% load on one GPU.

It's no small engineering feat, but at the moment LBC is the fastest key generator on the planet (some 20% faster than oclvanitygen), and I believe twice the speed of oclvanitygen is achievable. That's my goal and motivation, and I still have some 65% of my GPU capacity left untapped to get there.

Quote
And am I wrong assuming that even 32 bit is emulated, specifically on Pascal/Maxwell chips? I read the white paper and it says it does half integers also.

I'm not familiar in detail with the specific hardware internals. At the moment I have a Maxwell chip for my testing, and I will tend to support newer architectures/chip families rather than the old stuff. Another way to put it: I will not sacrifice any speed to support that "old" chip from 2009. ;-)

Sidenote:

If anyone wants to be at the true forefront of development and have a great workstation-replacement notebook, buy a Lenovo P50 (maybe a P51 to be slightly ahead), because that's what I am developing on, and LBC will naturally be slightly tailored to it. E.g. it also has an Intel GPU, which I am using for the display. So currently I can work with the notebook basically without any limitations (the Intel graphics are untouched, and as I have 4 logical cores left for my interaction, I can watch videos, browse etc.) while the notebook is churning out 9 Mkeys/s. Ok, the fan noise is distracting, because normally the notebook is fine with passive cooling. Wink



Rico
984  Bitcoin / Project Development / Raping GPUs and having fun on: March 20, 2017, 03:20:40 PM
What happens when you try to just printf the line you commented out?

Also this: http://stackoverflow.com/questions/1255099/whats-the-proper-use-of-printf-to-display-pointers-padded-with-0s


So, lessons learned and progress:

Never try to impose a data size on the GPU which it was not built for. Today's GPUs are 32-bit. Using 64-bit data types carries a performance penalty (the GPU internally transforms them into sequences of 32-bit operations). Moreover, defining your own 128-bit arithmetic library using 64-bit types on a GPU... will eventually work, after you do something to the GPU which can only be described as raping, but the GPU will not like it and will show performance consistent with its displeasure...
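
To illustrate what that internal transformation costs, here is a minimal sketch (not from the LBC code) of a 64-bit addition emulated with 32-bit words, which is roughly what the GPU has to do; multiplication fares far worse:

Code:
/* Illustrative only: one 64-bit add on a 32-bit ALU becomes two adds
   plus carry propagation. */
typedef struct { unsigned int lo, hi; } u64emu;

static u64emu u64emu_add(u64emu a, u64emu b) {
  u64emu r;
  r.lo = a.lo + b.lo;
  r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of the low word */
  return r;
}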

It turns out there is a maximum number of assembler instructions per kernel, and of course I ran into it with my glorious 128-bit GPU library. Then the kernel simply crashes, or your host application gets a segmentation fault (from the OpenCL library), or <insert undefined behavior here>.

printf on the GPU is nothing but a straw for the desperate GPU developer to clutch at.


Back to the drawing board. I'm left with a highly optimized 64-bit ECC library on the CPU and the need for a (highly optimized) 32-bit library on the GPU - at least as long as I have parts of the computation on the CPU and parts on the GPU. Sounds like Frankenstein's monster? It is!

Computing with 5x52 field elements on the CPU, pushing the data to the GPU, converting there from 5x52 to 10x26, then doing the 32-bit computations.

But it is surprisingly fast - so far. As the conversion (I hope) is a mere:

Code:
static void hrd256k1_fe_52to26(hrd256k1_fe32 *out, const hrd256k1_fe *in) {
  /* Split each 52-bit limb into two 26-bit limbs. Note the ordering:
     the low 26 bits land in the odd index, the high 26 bits in the even. */
  out->n[1] = in->n[0] & 0x3FFFFFFUL;  /* low 26 bits of limb 0  */
  out->n[0] = in->n[0] >> 26;          /* high 26 bits of limb 0 */
  out->n[3] = in->n[1] & 0x3FFFFFFUL;
  out->n[2] = in->n[1] >> 26;
  out->n[5] = in->n[2] & 0x3FFFFFFUL;
  out->n[4] = in->n[2] >> 26;
  out->n[7] = in->n[3] & 0x3FFFFFFUL;
  out->n[6] = in->n[3] >> 26;
  out->n[9] = in->n[4] & 0x3FFFFFFUL;
  out->n[8] = in->n[4] >> 26;
}

And the subsequent fe_mul etc. are done using the GPU-native data format. We'll see.
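
The way back (whenever results must return to the CPU side) would be the mirror image. A hypothetical hrd256k1_fe_26to52 - not part of this post, assuming the same limb layout as above - could look like:

Code:
/* Hypothetical inverse: repack 10x26 into 5x52. Per the forward
   conversion above, the even index holds the high 26 bits. */
static void hrd256k1_fe_26to52(hrd256k1_fe *out, const hrd256k1_fe32 *in) {
  out->n[0] = ((uint64_t)in->n[0] << 26) | in->n[1];
  out->n[1] = ((uint64_t)in->n[2] << 26) | in->n[3];
  out->n[2] = ((uint64_t)in->n[4] << 26) | in->n[5];
  out->n[3] = ((uint64_t)in->n[6] << 26) | in->n[7];
  out->n[4] = ((uint64_t)in->n[8] << 26) | in->n[9];
}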


Rico
985  Bitcoin / Project Development / The horror that is Nvidia OpenCL on: March 19, 2017, 11:56:13 AM
Observe this code snippet from the GPU client. It is a small part of the Jacobi -> affine transformation.

I know that hrd256k1_fe_sqr and hrd256k1_fe_mul work correctly. I know that I am getting the right values into my GPU (az, jpubkey).
However, this code doesn't even run the printf when hrd256k1_fe_mul is in place. It does when I comment out the hrd256k1_fe_mul call:

Code:
  hrd256k1_fe_sqr(&zi2, &az);

  apubkey2.infinity = jpubkey.infinity;

  hrd256k1_fe_mul(&apubkey2.x, &jpubkey.x, &zi2);

  printf("GPU %d\nA:%016lx %016lx %016lx %016lx %016lx\nZ:%016lx %016lx %016lx %016lx %016lx\n---\n",
         idx,
         apubkey2.x.n[0],apubkey2.x.n[1],apubkey2.x.n[2],apubkey2.x.n[3],apubkey2.x.n[4],
         apubkey2.y.n[0],apubkey2.y.n[1],apubkey2.y.n[2],apubkey2.y.n[3],apubkey2.y.n[4]
         );

Ok, a simple apubkey2 = jpubkey works. So what is it that causes this weird behavior? To investigate, I wrote a small synthetic hrd256k1_fe_mul2:

Code:
static void
hrd256k1_fe_mul2(hrd256k1_fe *r, const hrd256k1_fe *a, const hrd256k1_fe *b) {
  r->n[0] = a->n[0] + b->n[0];
  r->n[1] = a->n[1] + b->n[1];
  r->n[2] = a->n[2] + b->n[2];
  r->n[3] = a->n[3] + b->n[3];
  r->n[4] = a->n[4] + b->n[4];
}

Guess what? Same problem (doesn't even printf). Now if I comment out ANY one of the r->n[i] = a->n[i] + b->n[i] lines, it works!
Even if I do

Code:
static void
hrd256k1_fe_mul2(hrd256k1_fe *r, const hrd256k1_fe *a, const hrd256k1_fe *b) {
  r->n[0] = a->n[0]; // + b->n[0];
  r->n[1] = a->n[1] + b->n[1];
  r->n[2] = a->n[2] + b->n[2];
  r->n[3] = a->n[3] + b->n[3];
  r->n[4] = a->n[4] + b->n[4];
}

It still works! What is going on???  Huh
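
One thing worth checking in a situation like this (a sketch using the standard OpenCL host API, nothing LBC-specific): the compiler's build log sometimes reveals resource or instruction-limit problems that otherwise fail silently:

Code:
/* Dump the OpenCL build log after clBuildProgram() - Nvidia's compiler
   sometimes reports kernel resource problems only here. */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

static void print_build_log(cl_program prog, cl_device_id dev) {
  size_t len = 0;
  clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG, 0, NULL, &len);
  char *log = malloc(len + 1);
  clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG, len, log, NULL);
  log[len] = '\0';
  fprintf(stderr, "--- OpenCL build log ---\n%s\n", log);
  free(log);
}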

Rico
986  Bitcoin / Project Development / Setting a password on: March 19, 2017, 09:32:32 AM
The mechanism for setting or changing a password (=secret) is the same:

Code:
-s oldsecret:newsecret

Obviously, if you already had a password, you are changing it. If you had none before, you are setting it.

"But what is oldsecret when I am setting?" you may ask.

Simple answer: anything!

So, as was mentioned here already - if you're setting your secret for the first time, just use x (or really anything else) for the oldsecret:

Code:
-s x:newsecret

and later you just give your
Code:
-s newsecret
to identify yourself to the server.



There is this guy from the Centre de Calcul el-Khawarizmi (CCK), Tunisia. The logs say he has tried a wrong password for his id 160 times (so far). May this short HowTo help him.


Rico
987  Bitcoin / Project Development / Re: Large Bitcoin Collider (Collision Finders Pool) on: March 18, 2017, 08:54:35 PM
And what about AMD? are you gonna do implementations for those too?

AFAIK some users operate their GPU client on AMD cards. I myself haven't been successful so far - but Jude Austin says he was (Ubuntu 14.04 with fglrx).


Rico
988  Bitcoin / Project Development / 896.04 Mkeys/s on: March 18, 2017, 11:16:56 AM
It's official: 7 million pages on directory.io per second
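(For reference: directory.io lists 128 keys per page, so 896.04 * 10^6 keys/s divided by 128 keys/page is just over 7.0 million pages/s.)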

My understanding is that most of this is done by one man - and with CPUs.


Pool operation is seamless so far. I saw a 13-second network hiccup yesterday (which all clients handled well within 2 retries), and today I experienced a 500 error when calling the stats page. This too seems to have been only transient, although there may be some race condition at the bottom of it. => Pool operation purring like a cat.

At the moment I'm completely dissecting the GPU client, as the segmentation faults I've been observing (read: that have been driving me mad) for the past couple of days are 100% not my programming fault, but some internal error of the Nvidia OpenCL implementation. I'm trying to find a workaround and/or produce a thorough internal analysis report to submit to Nvidia.


Rico
989  Bitcoin / Bitcoin Discussion / Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it on: March 18, 2017, 09:30:33 AM
All of this story tell us something.

We now know that someone out there is able to crack aprox. 2^50 private keys in about 1 year.

(in a worst case scenario, also assuming that the one who cracked the 50 addresses brute forced all the possible addresses sequential)

As time goes by this capability will increase, it may take decades and decades or centuries but eventually it will be possible to crack 2^256 pvks.

The transaction referenced in this thread is a good reference to test how far the mankind cracking capabilities are.

...

2^50   1125899906842620   <- This is aprox. the number of bitcoins private keys someone is able to crack in about 1 year
...
2^160   1461501637330900000000000000000000000000000000000   
...
2^256   115792089237316000000000000000000000000000000000000000000000000000000000000000   <- And this is how safe bitcoin is.

Nope. Nope and nope.

a) 1125899906842620 is the number of private keys LBC is able to crack within 15 days (at current speed - see https://lbc.cryptoguru.org/stats)
b) Your calculator seems to have trouble with big numbers
2^160 = 1461501637330902918203684832716283019655932542976 <---- also, this is how safe bitcoin is

so

c) Bitcoin is not 256bit safe
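
If you want to verify b) without a lossy calculator, here is a minimal sketch using GMP for exact big-integer arithmetic (my tool choice for illustration, nothing the LBC uses):

Code:
/* Print 2^160 and 2^256 exactly; ordinary calculators truncate
   past ~15 significant digits, which is what b) is about. */
#include <gmp.h>
#include <stdio.h>

int main(void) {
  mpz_t n;
  mpz_init(n);
  mpz_ui_pow_ui(n, 2, 160);        /* n = 2^160, exact */
  gmp_printf("2^160 = %Zd\n", n);
  mpz_ui_pow_ui(n, 2, 256);        /* n = 2^256, exact */
  gmp_printf("2^256 = %Zd\n", n);
  mpz_clear(n);
  return 0;
}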



Rico
990  Local / Projektentwicklung / Re: Large Bitcoin Collider (Collision Finder Pool) - Deutscher Thread on: March 17, 2017, 08:26:09 AM
Jesus, 586MKeys/s. Shocked
Vor 2 (oder 3) Wochen war das so etwa ein Zehntel.

Miningpowerinflation wie beim BTC? Wink
So langsam müsste doch mal einer einen PK für eine seiner eigenen Adressen finden  Grin

Ja, 512 MKeys/s waren 4 mio Seiten auf directory.io die Sekunde. Da sind wir drüber hinaus.  Tongue


Eine gewisse Analogie zum BTC Mining ist nicht abzustreiten. Ich habe ja die Evolution (Inflation würde ich es nicht nennen) CPU->GPU->FPGA->ASIC schon vor geraumer Zeit prognostiziert. Momentan sind wir irgendwo beim Pfeil zwischen CPU und GPU.

Mittlerweile kenne ich mich mit dem Generierungsprozess Privkey -> hash160 so gut aus, dass ich zuversichtlich sagen kann, mittelfristig einen GPU client zu haben, der gut 4-6 mal schneller ist als oclvanitygen (knapp doppelt so schnell sind wir heute bereits).

Und wenn ich das habe, besorge ich mir ein FPGA Entwicklerboard und mache da ein wenig mit rum. Wink

Im Gegensatz zu Bitcoin-Mining, was ja eine unendliche Geschichte ist, ist das hier ein endliches Projekt. Der Suchraum ist natürlich gigantisch, aber endlich. Für diesen Suchraum sind 586 MKeys/s immer noch sehr wenig, aber die interessante Info hier ist, dass wir mit dieser Speed offensichtlich bereits in Bereiche vordringen, wo wir "Erster!" ausrufen können - schliesslich ist ja #51 bislang unangetastet und es sieht so aus als wäre es wirklich am LBC den privkey dazu zu finden.

Was vor allem für mich interesant ist, ist den Pool bei diesem Kapazitätsanstieg zu beobachten wie das ganze skaliert. Ohne Änderung, also ihn einfach so laufen zu lassen wie er ist, kann ich sagen, dass das sicher bis 50 GKeys/s skaliert. Sehr konservativ ausgedrückt - vielleicht sogar 500 GKeys/s aber so weit will ich mich nicht aus dem Fenster lehnen. Jedenfalls weiß ich schon, wie ich mit leichten Änderungen den Pool  weit über die TKeys/s Grenze hinaus skalieren könnte - sollte die Zeit mal kommen.

Ich habe auch gestern alle P2SH hash160 in die BLF Datei mit aufgenommen. Der Pool checkt also für alle mit aktueller BLF nicht mehr gegen ~11 mio  Addressen ab, sondern ~14 mio (im Übrigen zum Nulltarif - der LBC wird dadurch nicht langsamer). Vorerst nur experimentell, wenn das Ärger machen sollte nehme ich es wieder raus.


Rico
991  Bitcoin / Project Development / P2SH in BLF now on: March 16, 2017, 10:26:42 PM
The current .blf file contains now also P2SH addresses.

As of now, even if we find a privkey for a hash160 belonging to a P2SH address, we will not be able to do anything with it. But as there seem to be very efficient and GPU-friendly implementations for getting access, in time we will have them.

This means the blf file now contains over 14 million addresses we check against. The false positive rate is currently around 10^-23.x - nearly 10000 times higher than with our previous BLF files - so I consider this experimental. If it turns out there is too much noise, I will deactivate the P2SH search again.

On the positive side, this means our mean search space until a collision is found is only 136.2 bits (we currently check against 14582689 addresses). If it turns out all is working, I will adjust the stats and theory sections accordingly.
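(The 136.2 bits follow directly from the target count: log2(14582689) ≈ 23.8, so the expected work per hit is 2^160 / 14582689 ≈ 2^(160 - 23.8) = 2^136.2.)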


Rico
992  Bitcoin / Bitcoin Discussion / Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it on: March 16, 2017, 07:30:52 PM
What we can say with respect to these two posts:

Between 2^0 and 2^50 there were 50 - 0 = 50 bounties found.

We call them trophies; the ones found are listed at https://lbc.cryptoguru.org/trophies.

Quote
Somewhere between 2^50 and 2^51 there is at least one bounty.
Somewhere between 2^51 and 2^52 there is at least one bounty.
Etc.

Where the probability of having more than one bounty doubles with each of these spaces.

Quote
If every single private key between 2^0 and 2^50 was not tried then there is a (small) possibility that you might have missed another bounty.

We will try every single key. Every single key between 0 and 2^49.35 has been tried. We will now try every single key from 2^50 on, and after #51 is found, we will finish the work from 2^49.35 to 2^50. If it's there, we will miss nothing.

Quote
So, once you find one bounty in a search space (2^50 ... 2^51 for example) you really should search the entire remaining space to see if there is a second (aliased) larger bounty!

Sure. That's the normal mode of operation. That's how we found e.g. f6cc30532dba44efe592733887f4f74c589c9602
In general, we seem to find uncompressed addresses with funds that pre-date the pool's inception.

Quote
However if you happen to find the expected bounty and then skip to the next range there is a chance you will miss a larger bounty in the remainder of the range.

We know. We perform an exhaustive search. It's just that for this specific #51 we skipped ahead around 370 tn keys, because it holds funds and it would be nice if the LBC found it before someone else does. Then we return to our normal mode of operation.



Rico
993  Bitcoin / Project Development / Re: Let me help you on: March 16, 2017, 07:05:17 PM

edit: 384 MKeys/s = 3 million pages on directory.io per second

Rico

The real speed right now is more than 500Mkeys/s I guess.

Ask for work... got blocks [1091773040-1091779951]

Define "real" Smiley

That depends on the averaging period. The stats show a 24h average; the average over the past hour will be higher.
One can estimate this from the gradient of the speed growth - we're going straight up with no sign of a slowdown.
So yeah - give it a couple of hours and it may level off at 480+ MKeys/s.

512 MKeys/s = 4 million pages on directory.io per second. Insane (by today's standards).

The log files definitely point to you; I'd say somewhere between 70 and 80% of the pool capacity is your clients.

For me it's interesting to see how the server scales. No problem there. Without changes, I believe the pool could handle 50 GKeys/s as is.


Rico
994  Bitcoin / Project Development / Re: Let me help you on: March 16, 2017, 04:56:35 PM
Ask for work... got blocks [1080203856-1080210383]

We have officially searched about 11 tn keys of the #51 search space.

Code:
[1087617328, 1087624239] <<<                  Unknownhostname

out of 1125 tn keys Wink so about 1% done


edit: 384 MKeys/s = 3 million pages on directory.io per second

Rico
995  Bitcoin / Bitcoin Discussion / Re: can someone point me to hard (objective) technical evidence AGAINST SegWit? on: March 16, 2017, 04:14:43 PM
Thanks for the
https://medium.com/the-publius-letters/segregated-witness-a-fork-too-far-87d6e57a4179
it is very thorough in describing SW.

In fact, so thorough that the "Problems with SW" make up only about 30% of the article.
Naturally, I had a deeper look at those.

as for 3.1 SW creates a financial incentive for bloating witness data

Quote
... there exists a financial incentive for malicious actors to design transactions with a small base size but large and complex witness data.

Unfortunately, it is never elaborated what this financial incentive is or how it comes into play. Maybe I'm overlooking something evident, but as it stands in the article, it is a bare statement without any proof or even explanation.

as for 3.2 SW fails to sufficiently address the problems it intends to solve

This section contains a very weak, almost non-argument against SW, namely:

Quote
Linear signature hashing and malleability fixes will only be available to the SW UTXO.
i.e. "The good properties of SW will not be available to non-SW UXTOs", which in itself is a homage to SW and similar to the argument like

"But Bitcoin will not fix the USD..."

On the other hand, it contains a very strong argument - the strongest I've seen so far - against a new UTXO class:

Quote
influential individuals associated with Bitcoin Core: Greg Maxwell has postulated that “abandoned UTXO should be forgotten and become unspendable,” and Theymos has claimed “the very-rough consensus is that old coins should be destroyed before they are stolen to prevent disastrous monetary inflation.”

I followed the link to the reddit discussion and couldn't believe what I read. If there really are people in the Bitcoin community who want to put an expiration date on bitcoins, and some UTXO discrimination would help these f*cktards achieve their wet fantasies, then I have to agree: we cannot have that.

It doesn't seem like a genuine SW-UTXO problem though; it's more that SW-UTXOs could become a vehicle for discrimination under hypothetical future bad politics. This seems like killing your newborn because it might become Hitler.

=> Again, no technical argument, merely a political one. But it surely scared the shit out of me.


as for 3.3 SW places complex requirements on developers to comply while failing to guarantee any benefits

It essentially says: "There is a code change involved and that's dangerous; SW does not provide enough benefit to justify the risk of bugs in the new code."

Well - the question is whether BU or the other proposals provide enough benefit, because as it stands, this argument could be read as "never change a running system".

as for 3.4 Economic distortions and price fixing

Purely non-technical; it might as well be titled "I do not like soft forks".

as for 3.5 Soft fork risks

A reiteration of "I do not like soft forks".

as for 3.6 Once activated, SW cannot be undone and must remain in Bitcoin codebase forever.

While he provides no evidence for the claim that there could be no roll-back, I'll give him that: not having a plan B (roll-back) for the case of some catastrophic sideways fuggup is a bad thing. In general.


Executive Summary:

SW may give zealots the possibility to discriminate between SW-UTXOs and non-SW-UTXOs (a.k.a. our good old UTXOs) and as such might help foster some "interesting" novel Bitcoin policies, like destroying old coins etc.

Well - theoretically - it might. But I believe this is also a non-technical issue and more an educational one. Should such an issue arise, the Bitcoin community should still be "man enough" to take a stick and beat the shit out of such zealots until they come to reason.

Executive executive Summary:

SW is like a knife: It can be useful in the kitchen, or you might slash your wrist with it. Should we ban knives?


Rico
996  Local / Projektentwicklung / Re: Large Bitcoin Collider (Collision Finder Pool) - Deutscher Thread on: March 16, 2017, 12:06:36 PM
Shouldn't it be "is not aware", since it does _not_ know that?

And: where the heck does he get 1000 CPUs from?
Is he buying Amazon's service, or is he quietly running this at his company?

On the contrary: the pool knows precisely that ~370 * 10^12 keys are still missing, and the statistics do not assume that we simply skip over something like that, so the offset is wrong. (You could say the statistics are "not fully aware".)

"The statistics are computed from the work already delivered, not from the position where we currently are." Something like that.

Where he gets 1000 CPUs from... somehow servers just seem to fly to him. Wink
All from Amazon, Google and others - it doesn't look like a botnet.
My guess: an admin at some big outfit who uses idle (including bought-in) capacity...
Or a rich show-off having some fun.


Rico
997  Local / Projektentwicklung / Re: Large Bitcoin Collider (Collision Finder Pool) - Deutscher Thread on: March 16, 2017, 11:12:37 AM
But the website doesn't show that yet.
Isn't something missing now, from 49.35 to 50 bits?
Or will we catch that up later?

https://bitcointalk.org/index.php?topic=1573035.msg18206936#msg18206936

Quote
Exceptionally, I set the pool computational front to 2^50. We will cover the ~370 tn keys after we have found #51.

You will see this on block numbers > 1073770688

The stats about predicted find time for #51 are therefore not correct anymore (because the pool is aware of missing ~370 tn keys).


=> We will catch that up later


Rico
998  Bitcoin / Project Development / Re: Let me help you on: March 16, 2017, 11:00:23 AM

Effectively this means I saved you 14 days or so at current speed. It also means the find can happen anytime from now to 41 days out (at current speed).

Yea I know that ... I want to bring the 41 days time to 0

For this you'd need either immediate luck, or infinite speed.

To have #51 in 1 day under worst case scenario, the pool would need to have about 13.5 GKeys/s


Rico
999  Local / Off-Topic (Deutsch) / Re: Sonnenbatterie --> Kostenloser Strom ?! on: March 16, 2017, 09:05:11 AM
Nah, I have absolutely no clue about any of that. As an engineer you apparently don't need one, as a supplier for Kaco certainly not either, and least of all when you also run the EMC tests for solar converters in the lab.

Kaco always seemed suspect to me - we'd rather use Victron and Midnight Solar.  Wink

Does that mean I now have the unofficial statement that Kaco inverters last only 5 years? One can read that out of an EMC test?
No offense. If it helps you, I'm happy to post a list of topics I demonstrably know nothing about - then you can argue me into the ground on those.



Rico
1000  Bitcoin / Bitcoin Discussion / The hunt for #51 on: March 16, 2017, 07:50:29 AM
https://bitcointalk.org/index.php?topic=1573035.msg18206936#msg18206936

LBC Pool front set to 2^50 - the hunt for #51 of the puzzle transaction has begun! 0.051 BTC bounty awaits in 0-41 days. Start your engines!

Rico