rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
|
|
April 02, 2017, 11:51:26 AM |
|
Ask for work... got blocks [1735316448-1735339999] (24696 Mkeys) o ..snip.. o (24.48 Mkeys/s)
I assume that's an m4.16xlarge with -t 10. The +1.3 Mkeys/s you get compared to what I see right now is because of the -t 10 (you) versus -t 5 (me).
Actually, I would recommend everyone set something between -t 10 and -t 60 (*) at the moment - depending on how long you want to let LBC run (do not forget: a graceful shutdown can take 60-120 minutes if you have -t 60). Starting from -t 10, the startup cost is pretty well amortized.
Also, I'd recommend not sticking to the "boring" 5/10/20 numbers. Try 11/17/23/37 or so (not exactly these - just roll some dice if you like). You'll help spread client-server requests more evenly, and you also give your client more "individual" block sizes, which can actually help it get assigned block intervals that would otherwise be left for redistribution (when someone ends their client ungracefully).
(*) And not more than 60 if you haven't enough memory, because the LBC client currently has a small memory leak (you can see it taking more and more space within a round). Nothing tragic, but it's there - I noticed it when I let a client run overnight with -t 420, which actually became critical on my 16 GB machine.
Rico
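The "non-boring" -t advice above can be sketched in a few lines of Python. This is only an illustration of the idea - the prime list, the range, and the helper name `pick_t` are mine, not part of the LBC client:

```python
import random

def pick_t(low=10, high=60):
    """Pick a 'non-boring' -t value in [low, high].

    Preferring primes avoids the round 5/10/20 values most people use,
    which (per the advice above) spreads client-server requests and
    block sizes more evenly. The prime list is just an illustration.
    """
    primes = [11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]
    candidates = [p for p in primes if low <= p <= high]
    return random.choice(candidates)

print(f"suggested flag: -t {pick_t()}")
```

You would then start the client with whatever value this dice roll suggests instead of a round number.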
|
|
|
|
unknownhostname
Member
Offline
Activity: 62
Merit: 10
|
|
April 02, 2017, 12:27:52 PM |
|
So ... what's the best config for a 64 vCPU / 240 GB mem machine?
Right now I have "time": 10
|
|
|
|
arulbero
Legendary
Offline
Activity: 1922
Merit: 2074
|
|
April 02, 2017, 12:34:18 PM |
|
Am I the only one stuck at 12 Mkeys/s?
|
|
|
|
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
|
|
April 02, 2017, 12:43:06 PM |
|
Am I the only one stuck at 12 Mkeys/s?
If you give me access to the machine, I can have a look at what's wrong. @unknownhostname: -t 10 is OK. Actually, on AWS I wouldn't change that. If you really wanted to optimize, you could give the instances in Europe -t 10 and the instances in the US/Asia -t 17 (bigger network lag). Rico
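The latency argument behind that -t suggestion can be made concrete with a toy model. Everything here is illustrative - the function, the lag figure, and the assumption that one work round computes for roughly -t minutes are mine, not measured LBC behavior:

```python
def overhead_fraction(lag_s, round_s):
    """Fraction of wall time lost to one network round trip of lag_s
    seconds when each work round computes for round_s seconds."""
    return lag_s / (lag_s + round_s)

# If a round computes for roughly -t minutes (assumption), a client with
# 0.3 s of lag wastes a smaller fraction of its time at -t 17 than at -t 10:
print(overhead_fraction(0.3, 10 * 60))
print(overhead_fraction(0.3, 17 * 60))
```

The point is simply that a fixed per-request lag is amortized over a longer compute phase as -t grows, so higher-latency clients benefit more from a larger value.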
|
|
|
|
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
|
|
April 02, 2017, 01:05:14 PM |
|
9 days to go - worst case. I take it everyone has their find hooks in place, or at least checks for the presence of FOUND.txt daily.
|
|
|
|
Rent_a_Ray
Legendary
Offline
Activity: 1344
Merit: 1046
|
|
April 02, 2017, 03:43:54 PM |
|
LBC under MacOS Sierra (brew): Unknown operating system: darwin-thread. Please report. All packages have been installed successfully.
perl -V:myarchname
myarchname='i386-darwin'
Any idea? Thanks in advance. Cheers, Ray
|
|
|
|
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
|
|
April 02, 2017, 03:53:30 PM |
|
LBC under MacOS Sierra (brew): ... Any idea?
Not supported.
|
|
|
|
Rent_a_Ray
Legendary
Offline
Activity: 1344
Merit: 1046
|
|
April 02, 2017, 03:58:23 PM |
|
LBC under MacOS Sierra (brew): ... Any idea?
Not supported. I know. But what's the reason? Is there any incompatibility between Arch Linux and Darwin? Cheers, Ray
|
|
|
|
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
|
|
April 02, 2017, 04:28:40 PM |
|
I know. But what's the reason? Is there any incompatibility between Arch Linux and Darwin?
The problem is not the LBC client itself (a Perl script) - that should be pretty much multi-OS portable. The problem is the generator binaries: http://unix.stackexchange.com/questions/212754/is-there-a-way-to-run-a-linux-binary-on-os-x
I have no Darwin system here - or even the Apple hardware. If someone sent a VMware-able MacOS X image my way, I could try to compile the generator for that target OS. Rico
|
|
|
|
Rent_a_Ray
Legendary
Offline
Activity: 1344
Merit: 1046
|
|
April 02, 2017, 05:05:36 PM |
|
The generator isn't open source, am I right? I have such an image for compiling wallet sources, with all the GCC, Xcode, and Homebrew stuff. I will upload it in the coming week. Thanks. Cheers, Ray
|
|
|
|
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
|
|
April 03, 2017, 06:34:30 AM |
|
User SlarkBoy has generously deployed quite a few free giveaways in the LBC search space here and there.
The new BLF file and patch available on the FTP server (auto-updated on LBC restart) already contain these.
While I do not know the private keys of these, I am aware of the total amount, and it's the biggest bounty program of LBC by far!
We will reveal the complete extent of these freebies once the fireworks of discovering them have started.
Have fun and a big cheers to SlarkBoy!
Rico
|
|
|
|
unknownhostname
Member
Offline
Activity: 62
Merit: 10
|
|
April 03, 2017, 01:35:48 PM Last edit: April 03, 2017, 01:46:58 PM by unknownhostname |
|
1.25 Gkeys/sec
|
|
|
|
arulbero
Legendary
Offline
Activity: 1922
Merit: 2074
|
|
April 03, 2017, 02:16:40 PM |
|
1.25 Gkeys/sec
How? GPU-client on dedicated servers?
|
|
|
|
unknownhostname
Member
Offline
Activity: 62
Merit: 10
|
|
April 03, 2017, 04:07:19 PM |
|
1.25 Gkeys/sec
How? GPU-client on dedicated servers? ofc not ... lots and lots of CPU's ... no GPU for me
|
|
|
|
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
|
|
April 03, 2017, 04:16:12 PM |
|
ofc not ... lots and lots of CPUs ... no GPU for me
Don't take it personally. It's more like: no (performant) client for the K80. (yet)
I had a look at whether some cloud vendor offers any Maxwell (M60) or Pascal (P100) GPUs. I'm pretty sure LBC would work with these, and it would even mean a nice performance kick. Unfortunately, so far only promises and plans: https://blogs.nvidia.com/blog/2016/11/16/tesla-p100-on-google-cloud-platform/
Observe how "now available" in the subject = "will be available" in the 1st line of the text. I checked today. Nada. Only K80, and only in some locations.
I am also thinking about purchasing a dedicated 200 Mkeys/s machine...
Rico
|
|
|
|
arulbero
Legendary
Offline
Activity: 1922
Merit: 2074
|
|
April 03, 2017, 04:20:15 PM |
|
|
|
|
|
unknownhostname
Member
Offline
Activity: 62
Merit: 10
|
|
April 03, 2017, 05:19:32 PM |
|
What's the current speed, Rico?
Not the 2-day average?
|
|
|
|
rico666 (OP)
Legendary
Offline
Activity: 1120
Merit: 1037
฿ → ∞
|
|
April 03, 2017, 06:33:13 PM Last edit: April 03, 2017, 06:43:31 PM by rico666 |
|
What's the current speed, Rico?
Not the 2-day average?
In the past hour we were at 1107156445 keys/s (at this speed we will have #51 in 2 days). Rico
|
|
|
|
arulbero
Legendary
Offline
Activity: 1922
Merit: 2074
|
|
April 03, 2017, 06:43:37 PM |
|
1.25 Gkeys/sec
How? GPU-client on dedicated servers? ofc not ... lots and lots of CPUs ... no GPU for me
Could you share some information about how you got this speed? Is it expensive? This pool could reach a very high speed if more people were able to use a lot of CPUs like you do.
|
|
|
|
Janu$$
Member
Offline
Activity: 86
Merit: 10
|
|
April 03, 2017, 07:23:01 PM Last edit: April 03, 2017, 07:33:33 PM by Janu$$ |
|
ofc not ... lots and lots of CPUs ... no GPU for me Don't take it personally It's more like no (performant) client for K80. (yet) ..snip.. Only K80 and only in some locations. I also am thinking about purchasing a dedicated 200 Mkeys/s machine... Rico
Hi Rico, what about https://www.nimbix.net/jarvice/ ? Allegedly with Pascal GPU support: "...Nimbix is the only public cloud provider featuring NVIDIA’s latest generation Pascal Tesla P100 GPUs with NVLink, a high-bandwidth, energy-efficient interconnect that allows data sharing at rates 5 to 12 times faster than the traditional PCIe interconnects...." Regards, Janu$$
|
|
|
|
|