PM me with your offer (type of AWS code).
I can take even AWS codes with short-term validity (1 week).
Rico
|
|
|
Technicalities! I pressed 'e', and after the chunk it was working on, it downloaded a new chunk and kept going.
I know about that. You basically pressed 'e' when the current chunk was about to end (in the last loop iteration). The command then gets interpreted at the start of the next iteration (which it ends gracefully). On my notebook, with -c 4 -t 10 it looks like this ("-c 4" means there are 'oooo' chunks):

Ask for work... got blocks [131193289-131194632] (1409 Mkeys)
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
  ^ the last 'o' is the latest possible time to press 'e' for the current round, else it ends at the end of the next round
Ask for work... got blocks [131209977-131211320] (1409 Mkeys)
  ^ if you Ctrl+C at the beginning, there will be a promised but undelivered block, but not much work lost
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
  ^ the later you press Ctrl+C after a new round has started, the more work (of your computer) you lose/waste - right before the end of a round is a really bad time for Ctrl+C
Ask for work... got blocks [131226665-131228008] (1409 Mkeys)
ooooooooooooooooooooooooooooooooooooooooooe
END requested. (Ending this loop) Waiting for children to finish...
ooooooooooooooooooooooooooooooooooooooooo

It may look like unnecessarily much science, but the upside is that it takes 0 (in words: zero) CPU time to process.

Rico
|
|
|
Just wanted to chime in: when I press 'E' to end gracefully, it doesn't terminate after the loop.
Unless of course it has to do it for each thread - I most likely got impatient and terminated it ungracefully.
'E' doesn't work, but 'e' should. It's required only once ("Waiting for children..." is the message signalling that the client takes care of terminating all threads). It really should work, please try. On some systems I have observed a weird input buffer configuration, i.e. input stayed buffered although I explicitly set the input buffer to unbuffered mode. So I have the habit of pressing 'e' followed by Return. As the Return doesn't hurt anything, with that combination I got LBC to terminate gracefully on every Linux system. I'd be very surprised if that didn't work for you, so please check.

Rico

$ LBC -c 4 -p 127702249-127705448
Loop off!
Work on blocks [127702249-127705448] (3355 Mkeys)
Best generator chosen: gen-hrdcore-avx2-linux64
PAGE-TO: 127705448
PAGE-FROM: 127702249
Estimated duration: 24m 1.2833s
END requested. (Ending this loop) Waiting for children to finish...

I started up LBC and immediately pressed 'e' - now it will take 24 minutes before the process ends. It's of course superfluous here, because the process would have ended in manual mode anyway, but it shows the feature works. As the blocks up to 128 000 000 are done, you can test this with no risk like this:

$ LBC -c 4 -p 1000-1100 <here press 'e'>
If it works, ok; if not, try 'e' + Return. If nothing works, please report. And you can terminate it ungracefully (Ctrl+C) with no risk.
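The "buffered although set to unbuffered" quirk Rico describes comes down to the terminal line discipline. A minimal sketch (assuming a POSIX terminal; the function name is mine, not part of LBC) of reading a single keypress without waiting for Return:

```python
import sys
import termios
import tty

def read_one_key():
    """Read a single keypress from stdin without requiring Return."""
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)          # remember current terminal settings
    try:
        tty.setcbreak(fd)                # non-canonical mode: no line buffering
        return sys.stdin.read(1)         # blocks until exactly one key arrives
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)  # always restore terminal

# e.g.: if read_one_key() == 'e': request a graceful end after the current loop
```

If the terminal refuses to leave canonical mode (the behavior Rico saw on some systems), the 'e' only reaches the program once Return flushes the line, which is exactly why 'e' + Return works everywhere.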
|
|
|
I think I understood it. So it seems you can fetch work, but if you then stop the execution of the program, that work will be lost. Is that it?
Yes. What would be the best way to stop LBC in a graceful manner?
It's somewhat hidden in the README.txt; I will put it in the User Manual on the web pages: you press 'e', and after a while the message

END requested. (Ending this loop) Waiting for children to finish...

will appear. The LBC client will not stop immediately, but will finish the blocks in the current loop. Contrary to quitting immediately, this has the advantage that proof of the work you have done is submitted to the server and your work is not "lost".
.. and if there is no way to stop it gracefully, you can just stop it and then process the block interval you were working on with --pages <from>-<to>, right?
If you do not stop it gracefully, the blocks will be re-issued after some time. The problem here was that you issued the query command before your client had delivered ANY block, so the client was not yet known in the clients DB. Of course, under such circumstances a query caused some stir. It's fixed now. Rico
|
|
|
Looking at the LBC log:

1478622274 <Swiss IP> Query [LBC::Server:10261] error @2016-11-08 17:24:35> Route exception: hash- or arrayref expected (not a simple scalar, use allow_nonref to allow this) at /data/web/LBC-Server/bin/../lib/LBC/Server.pm line 200. in /opt/perlbrew/perls/perl-5.24.0/lib/site_perl/5.24.0/Dancer2/Core/App.pm l. 1388
1478622284 <Swiss IP> Query [LBC::Server:10261] error @2016-11-08 17:24:44> Route exception: hash- or arrayref expected (not a simple scalar, use allow_nonref to allow this) at /data/web/LBC-Server/bin/../lib/LBC/Server.pm line 200. in /opt/perlbrew/perls/perl-5.24.0/lib/site_perl/5.24.0/Dancer2/Core/App.pm l. 1388
"Eww," I think to myself, "what is that?" So some Swiss Rösti grabbed a client, requested about 5 blocks of roughly 2 Gkeys each, but of course didn't deliver them. With 0 processed blocks you are consequently unknown in the client DB, and if you then eagerly "query" where you stand in the ranking, the server gags, because that client id doesn't exist. Doesn't harm the server, it just offends my aesthetic sense. Thanks. Fixed. And the promised but undelivered blocks I worked off in 7 minutes with:

./LBC -c 64 -p 127581257-127583176
Loop off!
Work on blocks [127581257-127583176] (2013 Mkeys)
Best generator chosen: gen-hrdcore-avx2-linux64
PAGE-TO: 127583176
PAGE-FROM: 127581257
Estimated duration: 1m 15.445689375s
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo ...
-c 64 ;-) Just so there are no misunderstandings: none of this is meant unkindly. Of course every client is welcome, and obviously it benefits the software quality when people behave "unexpectedly". Rico
|
|
|
Or 10^12 for that matter (so there are no misunderstandings with the Americans).
Despite "only CPU with a pinch of GPU", the pool has by now worked through the equivalent of over 1 trillion pages on directory.io.
Suppose you were still young and in the prime of your life (or something like that) and had 60 more years to live; that would be 1,892,160,000 seconds. Depressing - I know.
Now suppose you spent these roughly 1.9 billion seconds, day and night, checking one page (= 256 addresses) per second on directory.io against 11 million addresses... then, doing the math, you would have checked exactly those 1.9 billion pages by the end of your not-so-eventful life.
Of course the question arises whether you would really be that fast, and whether you would still have 60 years to live if you actually did this, but let's just assume so. The pool currently handles this task in about 100 minutes. Also depressing - I know.
Conversely, this means that the work the pool has done so far corresponds to about 568 such gloriously lived human lives (34,080 years). And that in barely 40 days. I'll report back when the work done corresponds to a lifetime of the Sun at 1 page/second.
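The arithmetic above is easy to reproduce; a quick sketch (all figures taken from the post, the 40-day span as quoted):

```python
# Back-of-the-envelope check of the lifetime comparison above.
seconds_60_years = 60 * 365 * 24 * 3600
print(seconds_60_years)               # 1892160000 seconds, as stated

# The pool does one "lifetime" (~1.9e9 pages at 1 page/s) in ~100 minutes.
pool_minutes_per_lifetime = 100
lifetimes_in_40_days = 40 * 24 * 60 / pool_minutes_per_lifetime
print(lifetimes_in_40_days)           # 576.0 -- same ballpark as the 568 quoted
print(lifetimes_in_40_days * 60)      # ~34560 years of single-human work
```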
Rico
|
|
|
LBC found 38-467 so far.
Let's be precise here. Rico
|
|
|
You clearly weren't paying attention when achow101 said: 2^160 is an unimaginably huge number.
Let's somehow imagine that every man, woman, and child in the world is running equipment that continuously generates 1 exa-address per second. That includes infants, destitute and homeless poor people, and those lying on their deathbeds in hospitals. EVERY man, woman, and child. That's 1 × 10^17 addresses per second per person times 7.4 × 10^9 people = 7.4 × 10^26 addresses generated worldwide per second. ...

This is the best post I have ever read, thanks Danny

Yeah - it's similar to those child-frightening stories, or pictures of the sun and some physics yadda. Of course I can imagine that for someone who doesn't know that exa means 10^18 instead of 10^17 (hey - it's only one order of magnitude, e.g. 10 years instead of 100 years, but who am I to judge), a number like 10^48 (roughly 2^160) must look pretty unimaginable. For me, 10^48 is pretty imaginable. Intuitively I'd say it's the number of atoms in 1% of the Earth. So what?

Rico
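For reference, the numbers behind this exchange (the 10^17 figure is the quoted post's slip, kept deliberately; exa is 10^18):

```python
# Compare the quoted worldwide generation rate against the 2**160 keyspace.
keyspace = 2 ** 160
print(f"{keyspace:.4e}")             # ~1.4615e+48

people = 7.4e9
rate_wrong = people * 1e17           # the quoted post's "exa" (off by 10x)
rate_exa = people * 1e18             # the actual exa = 10**18

seconds_per_year = 365.25 * 24 * 3600
years = keyspace / rate_exa / seconds_per_year
print(f"{years:.1e}")                # ~6.3e+12 years even at the true exa rate
```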
|
|
|
And in practice the multiplier will be above 1.1760793641030374!
Would you mind working on your excessive quoting habits? Rico
|
|
|
P.S. Going by my records, the next key will be found at between 1.1 and 3.1 times the previous decimal key found.
Starting from the first address up to number 47, these are the multipliers relative to the previous PK decimal value:
0 3 2.333333333 1.142857143 2.625 2.333333333 1.551020408 2.947368421 2.084821429 1.100642398 2.247081712 2.322943723 1.944092434 2.021472393 2.548084219 1.917221871 1.860279557 2.073291381 1.799651682 2.414636329 2.098608043 1.659986069 1.861611443 2.577100601 2.299969103 1.643454135 2.052663677 2.033358892 1.760317772 2.578335793 2.034906801 1.471408704 2.307257358 1.980132413 1.42310685 2.107494664 2.365105799 1.466027419 2.202637167 3.100321289 1.452946896 1.985510149 2.559189118 2.07896823 1.298070259 2.570888168 2.327752465
Lowest: 1.100642398 Highest: 3.100321289
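The first few entries of that list can be reproduced from the publicly known decimal private keys of puzzle entries #1-#10 (well-known values; this is a sketch, not the poster's actual spreadsheet):

```python
# Decimal private keys of puzzle entries #1..#10 (publicly known values).
keys = [1, 3, 7, 8, 21, 49, 76, 224, 467, 514]

# Multiplier of each key relative to its predecessor.
mults = [b / a for a, b in zip(keys, keys[1:])]
for m in mults:
    print(round(m, 9))
# 3.0, 2.333333333, 1.142857143, 2.625, 2.333333333, 1.551020408,
# 2.947368421, 2.084821429, 1.100642398 -- matching the list above
```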
Is there a reason behind this?
There is. In theory, the multiplier between two consecutive keys can be anything between "a little bit more than 1":

0b111...1 (255 ones, i.e. 2^255 - 1) -> 0b1000...0 (a 1 followed by 255 zeros, i.e. 2^255)

and "almost 4":

0b1000...0 (a 1 followed by 254 zeros, i.e. 2^254) -> 0b111...1 (256 ones, i.e. 2^256 - 1)

hehe - count the digits. However, these extremes could only co-exist with their extreme counterparts for the pre-previous or the after-next key respectively. I.e. if #M is almost 4 times smaller than #N, then #O can be at most 2 times bigger than #N, and then #L would be at most nearly 2 times smaller than #M (for the L > M > N > O sequence). This is also the reason why the sum of two consecutive numbers in your list will never be more than 6. If you send me your spreadsheet, I'll have a look and maybe do some enhanced statistical analysis on it. I need a break from the horror that is (portable) OpenCL.

Rico
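The two extremes can be checked numerically (taking the bit widths from the example above; a puzzle key #n lies in [2^(n-1), 2^n)). Exact rationals avoid float rounding, since 2^255 - 1 is indistinguishable from 2^255 as a float:

```python
from fractions import Fraction

# Smallest possible jump: from an all-ones key to the next power of two.
low = Fraction(2**255, 2**255 - 1)
# Largest possible jump: from a bare power of two to the next all-ones key.
high = Fraction(2**256 - 1, 2**254)

assert 1 < low < Fraction(1000001, 1000000)   # "a little bit more than 1"
assert Fraction(3999999, 1000000) < high < 4  # "almost 4"
```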
|
|
|
As for #47 of the puzzle transaction: ETA is anything between 0 and 269 hours.
f828005d41b0f4fed4c8dca3b06011072cfb07d4 -> 0x6cd610b53cba

Have a nice day.

Rico
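A quick plausibility check on the posted key (the range constraint is the standard puzzle layout; the check itself is mine, not from the post):

```python
# Puzzle key #47 must lie in [2**46, 2**47); the value Rico posted does.
key47 = 0x6cd610b53cba
assert 2**46 <= key47 < 2**47
print(key47)   # 119666659114170
```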
|
|
|
I've built a new .blf (and patch) with today's state of the blockchain. Please restart your clients so they will check for updates and download.
Rico
|
|
|
As I'm trying to squeeze performance out of some GPUs, I could not resist and made some estimates based on that claim (50-70 TH/s). This means 50 to 70 trillion hashes per second. Let's assume you have a quad-core 4 GHz CPU - perfectly cooled, so it doesn't throttle. This means you have 16 billion cycles of computational power per second at your fingertips. Let's further assume you meant 50 TH/s instead of 50-70, and let's further assume you are Chinese, so in fact it would be 48 TH/s max. That would mean you'd be able to perform 3000 SHA256 computations per cycle of your CPU. Wow. Just wow. Dat statement. So either you were on some serious drugs when posting your original claim, or we have to make more assumptions (which is quite hard when not having the stuff you obviously had). Say you found some magic way of computing a SHA256 below target for a given block - effectively a reverse SHA256. I'm working on that too, but I would never EVER ask for donations for such a project, because then I'd feel obliged to share it with the rest of the world - or at least with the donators - and if I succeeded it would be MY PRRRRRECIOUSSSS. So if you claim you have something like what you claim you have (while I am not quite sure yet what you claim you have) AND it's for free and everyone can use it, I would highly recommend checking it for viruses, ransomware, adware and similar stuff before even touching it remotely on a VM.

Rico
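For the record, the arithmetic behind that estimate (figures as given in the post):

```python
# 48 TH/s claimed vs. what a quad-core 4 GHz CPU can cycle through.
claimed_rate = 48e12                  # claimed hashes per second
cycles_per_second = 4 * 4e9           # 4 cores x 4 GHz = 1.6e10 cycles/s
hashes_per_cycle = claimed_rate / cycles_per_second
print(hashes_per_cycle)               # 3000.0 SHA256 computations per cycle
```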
|
|
|
I already have a GPU generator running that has about 50% of the performance of oclvanitygen. What does that mean compared to the current CPU client? Depends on the GPU. Concretely, on my notebook (Skylake Xeon vs. Quadro M2000M) it is about 1:4, i.e. the GPU generator is 4 times faster than the CPU generator (edit: that is, when all cores are working). But that pits a relatively strong CPU against a relatively weak GPU. Still, 1:30 should theoretically be possible. Luckily I have received an implementation from a member and am now studying it. If I can combine the best of both worlds, so to speak (I have a more efficient hash160 computation, he has brought the Bloom filter onto the GPU), this could really take off. K-EX
|
|
|
Any news?
None of any significance. A GPU generator is in the making; I got an alternative implementation from a member here, am inspecting it, and may merge the best of both worlds. Patience. Maybe a Christmas present, or maybe a New Year's resolution for LBC to go GKeys/s. Rico
|
|
|
its real i will prove this in next few days.. i will post screen shots here when i started project and proof ready for show
screen shots - FTW! screen shots from the future - Mega-FTW! screen shots from the future that have been made in the past - Time-Travel-FTW! Rico
|
|
|
I'm not really following the English thread because my English isn't that good yet.
The bad news: I still have a lot to learn about OpenCL. The good (or even worse - depending on how you look at it) news: apparently others are incapable too.

To explain the latter: I originally planned to bend oclvanitygen into doing what I want. I managed that, and I already have a GPU generator running that has about 50% of the performance of oclvanitygen. The 50% is because each pass (the generator looks at 2^30 private keys per "block") actually requires 2 passes: one generating compressed public keys and one generating uncompressed public keys. The garden-variety oclvanitygen only generates one kind of public key (I forget which). That's nonsense of course, because ideally you should/would/could generate both keys at the same time. If only one could - that's where my OpenCL knowledge still falls short of whipping the shitty code reigning in oclvanitygen into shape.

Then there's the problem that oclvanitygen was really trimmed (well - let's say "baroquely extended") for different tasks: prefix search and regex search. I ditched the regex search right away, but the prefix search is solved with AVL trees, which for our needs is just about the worst thing you can do.

So at the moment I'm attempting my own generator, one that produces compressed and uncompressed keys in a single pass and checks them against a Bloom filter - all on the GPU. It's all slow going, because, as I said, OpenCL is new territory for me. On the other hand, programming is not such new territory for me, and if it works out the way I imagine, my generator should leave oclvanitygen "in the dust". It just takes time, because I also have other things to do and unfortunately no one is around who could lend a hand instead of just talking.

Rico
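To illustrate why one pass ideally produces two candidates per private key: every EC point has a compressed and an uncompressed serialization, and each leads to its own hash160 (and thus its own address). A sketch (the point shown is the well-known secp256k1 generator G; the actual generator does the RIPEMD160(SHA256(...)) step on the GPU, omitted here):

```python
# Well-known coordinates of the secp256k1 generator point G.
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def serialize(x: int, y: int):
    """Return both public-key serializations for one EC point."""
    xb = x.to_bytes(32, "big")
    yb = y.to_bytes(32, "big")
    compressed = (b"\x02" if y % 2 == 0 else b"\x03") + xb   # 33 bytes
    uncompressed = b"\x04" + xb + yb                          # 65 bytes
    return compressed, uncompressed

comp, uncomp = serialize(Gx, Gy)
assert len(comp) == 33 and len(uncomp) == 65
# hash160 = RIPEMD160(SHA256(serialization)) of EACH form must be checked
# against the Bloom filter -- hence the two passes per block in oclvanitygen.
```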
|
|
|
Yay, only 1.1579208923731619542357098500868790785283756427907E+77 more addresses left to check!
I count only 2.92300327466180583641e48 - that's a whopping 29 orders of magnitude smaller. Rico
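The two figures line up like this (my reading of where they come from, not spelled out in the post): 1.157...e77 is 2^256, the raw private-key space, while Rico's 2.923...e48 is 2 * 2^160, the hash160 targets for compressed plus uncompressed keys.

```python
import math

priv_space = 2**256          # the "1.157...E+77" figure
h160_space = 2 * 2**160      # compressed + uncompressed hash160 targets
print(f"{priv_space:.10e}")  # 1.1579208924e+77
print(f"{h160_space:.10e}")  # 2.9230032747e+48
print(round(math.log10(priv_space / h160_space)))  # 29 orders of magnitude
```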
|
|
|
As of today, 100 trillion keys were searched.
As for #47 of the puzzle transaction: ETA is anything between 0 and 269 hours.
Rico
|
|
|