Bitcoin Forum
Pages: « 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 [16] 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 »
301  Alternate cryptocurrencies / Mining (Altcoins) / Re: Pcie 1 to 3 Port 1X Switch Multiplier HUB Riser. Which motherboard works. on: February 19, 2017, 02:53:20 PM
are there boards known to work with 8 cards or more?
i expect mostly server boards, no?
302  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [HXX SWAP] |HexxCoin| CPU | Anonymous Zerocoin Protocol | JUST RELAUNCHED! | on: February 19, 2017, 11:49:57 AM
please fix the chain
sync stops at block 8153  Angry

OS?,  wallet version?
Outdated clients got blocked just now.

New source out!
3.0.0.8 / 99008.
Windows wallet soon.

Urgent update to fix ZeroCoin Tx exploit.
Win 7 x64 Pro, 3.0.0.7
same here, Win 8.1 x64, x64 build, 3.0.0.7

edit: not sure if this helps, last few minutes of debug log: http://pastebin.com/KxFtASGJ
303  Alternate cryptocurrencies / Pools (Altcoins) / Re: [ANN][POOL] Mining Pool Hub - Multipool. Multialgo, Auto Exchange to any coin. on: February 17, 2017, 01:20:28 PM
it seems some 1.3 zcoin are stuck on exchange for me
userid 32342
304  Alternate cryptocurrencies / Mining (Altcoins) / Re: [ANN]: cpuminer-opt v3.5.8, open source optimized multi-algo CPU miner on: February 16, 2017, 11:06:15 PM
i don't know if this is the desired behaviour, but the HOWTO states the following:

Quote
libhugetlbfs can be used to make an existing application use hugepages
for all its malloc() calls.  This works on an existing (dynamically
linked) application binary without modification.

If that's the case maybe I don't need to change anything in cpuminer.

Edit: It looks like transparent large pages is the way to go, and should have been from the start.
It wasn't so complicated when disk drives increased their block size.

my impression was that transparent/automatic huge pages won't increase performance as much as using the hugepages calls directly in the program, but if this is wrong, that would be great news, as implementing huge pages would then become far easier (i suppose)

From my understanding of the MMU and TLB on another architecture...
From the application perspective, accessing huge pages is just dereferencing a pointer like any other data
access. The magic happens when the CPU executes the load instruction and translates that pointer to a physical
memory address. The only difference is that accesses are faster because the translation is cached and
the same cached translation can be used for the entire larger page before a new mapping is required for the next page.
At a higher level, fewer TLB entries can cover more memory.

Reserving a section of memory for large pages and using the file system to access it may provide benefits beyond
large pages, though I don't see how. I would think going through the file system would add extra overhead.


interesting

maybe only a real world test will clear the performance questions up
305  Alternate cryptocurrencies / Mining (Altcoins) / Re: [ANN]: cpuminer-opt v3.5.8, open source optimized multi-algo CPU miner on: February 16, 2017, 10:29:16 PM
i don't know if this is the desired behaviour, but the HOWTO states the following:

Quote
libhugetlbfs can be used to make an existing application use hugepages
for all its malloc() calls.  This works on an existing (dynamically
linked) application binary without modification.

If that's the case maybe I don't need to change anything in cpuminer.

Edit: It looks like transparent large pages is the way to go, and should have been from the start.
It wasn't so complicated when disk drives increased their block size.

my impression was that transparent/automatic huge pages won't increase performance as much as using the hugepages calls directly in the program, but if this is wrong, that would be great news, as implementing huge pages would then become far easier (i suppose)
306  Alternate cryptocurrencies / Mining (Altcoins) / Re: [ANN]: cpuminer-opt v3.5.8, open source optimized multi-algo CPU miner on: February 16, 2017, 05:14:11 PM
i don't know if this is the desired behaviour, but the HOWTO states the following:

Quote
libhugetlbfs can be used to make an existing application use hugepages
for all its malloc() calls.  This works on an existing (dynamically
linked) application binary without modification.
307  Alternate cryptocurrencies / Mining (Altcoins) / Re: [ANN]: cpuminer-opt v3.5.7, open source optimized multi-algo CPU miner on: February 16, 2017, 04:54:40 PM

you will probably need to write a wrapper function for the allocation part to handle linux/windows differences

I don't want to mess with the OS part. I will work with someone to integrate cpuminer into a mining distro
or, if not too difficult,  I can provide a standalone cpuminer package which will work with an OS preconfigured
for large pages. My last post suggested the latter was difficult.

KopiemTu is advertised as an Nvidia mining distro but that doesn't mean it can't be more, unless it's exclusively
sponsored by Nvidia.

Either way I think a mining distro preconfigured for large pages with cpuminer included is the best approach.
It's just not something I can, or am willing to, do all myself.

I will continue, to see what exactly is involved in making a cpuminer package that supports large pages,
as long as it's transparent to desktop users and requires no OS specific mods to the application code.


afaik the system-specific code (hugetlbfs for Linux, large pages for MS) has to be written anyway if you want it included in any distro; the distro just takes away the OS setup (which is easy), or have i missed something?

i meant distros with cpumining option already available
308  Alternate cryptocurrencies / Mining (Altcoins) / Re: [ANN]: cpuminer-opt v3.5.7, open source optimized multi-algo CPU miner on: February 16, 2017, 04:28:18 PM
Large pages looks like a great feature for a specialist mining distribution. Both the OS and the apps
could be preconfigured for large pages. User friendly and plug and play.

Thoughts

enabling large pages in a linux pre-built image is easy and should be doable, but i'm not aware of any cpu-mining distros/images, anybody?
obviously one can always ssh into the system and install cpuminer himself, but then he can also enable large pages himself, and that's not user-friendly/plug&play


Here is the best known (to me) mining distro.

https://bitcointalk.org/index.php?topic=520998.msg5764866#msg5764866

I also found this which describes how to enable an app to use large pages. It includes the following...

https://lwn.net/Articles/375096/
Quote
While applications can be modified to use any of the interfaces, it imposes a significant burden on the application developer.
 To make life easier, libhugetlbfs can back a number of memory region types automatically when it is either pre-linked or pre-loaded.
 This process is described in the HOWTO documentation and manual pages that come with libhugetlbfs.

Didn't read the HOWTO yet, but what I infer from that is that the OS and application are tightly coupled, which would make it
more challenging to build as a standalone application.


that's a gpu distro (nvidia), right? no out-of-the-box cpuminer support afaik

i have also found the following stated by microsoft about large pages:

Large-page memory regions may be difficult to obtain after the system has been running for a long time because the physical space for each large page must be contiguous, but the memory may have become fragmented. Allocating large pages under these conditions can significantly affect system performance. Therefore, applications should avoid making repeated large-page allocations and instead allocate all large pages one time, at startup.

The memory is always read/write and nonpageable (always resident in physical memory).

The memory is part of the process private bytes but not part of the working set, because the working set by definition contains only pageable memory.

Large-page allocations are not subject to job limits.


that might also be the case for linux, where it's often advised to reserve the hugepages on boot (in grub)

you will probably need to write a wrapper function for the allocation part to handle linux/windows differences
309  Alternate cryptocurrencies / Mining (Altcoins) / Re: Claymore's Ethereum Miner with reduced DevFee v3.1 (now works with ethpool.org) on: February 16, 2017, 03:10:28 PM
will the source be made public at some point? i'm interested in how you solved some things
310  Alternate cryptocurrencies / Mining (Altcoins) / Re: [ANN]: cpuminer-opt v3.5.7, open source optimized multi-algo CPU miner on: February 16, 2017, 02:37:25 PM
Large pages looks like a great feature for a specialist mining distribution. Both the OS and the apps
could be preconfigured for large pages. User friendly and plug and play.

Thoughts

enabling large pages in a linux pre-built image is easy and should be doable, but i'm not aware of any cpu-mining distros/images, anybody?
obviously one can always ssh into the system and install cpuminer himself, but then he can also enable large pages himself, and that's not user-friendly/plug&play

311  Alternate cryptocurrencies / Pools (Altcoins) / Re: [ANN][POOL] Mining Pool Hub - Multipool. Multialgo, Auto Exchange to any coin. on: February 16, 2017, 11:42:14 AM
revisiting the api on mph:

is there an api endpoint for global balance data (like the page with all balances, but as api)?

also last time i asked there was no global api for all algos (workers etc), has this changed?

i would like to list all workers and their coin with balances in a table

Not yet.
I'm working on other critical bugs now.

I'll add that api.

to give you an idea what i want to accomplish with that api:



ideally a single api call, most likely n+1 calls (global balance + workers per non-zero balance)

thanks for looking into it
312  Alternate cryptocurrencies / Mining (Altcoins) / Re: [ANN]: cpuminer-opt v3.5.7, open source optimized multi-algo CPU miner on: February 16, 2017, 11:32:49 AM
Some disjointed thoughts on large pages.

My understanding is that enabling large pages in the OS reserves a block of memory,
reducing the amount available to applications, presumably ones that don't use large pages.
This leads to an engineering issue. How many large pages should be allocated in the OS?
How many large pages would cpuminer use? Does it differ based on the algo? What if there aren't
enough large pages for cpuminer?

I found an easier way to enable large pages in Linux, no grub required. It's old and for RH but the commands,
kernel variables and files exist on Ubuntu so it should work.

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Tuning_and_Optimizing_Red_Hat_Enterprise_Linux_for_Oracle_9i_and_10g_Databases/sect-Oracle_9i_and_10g_Tuning_Guide-Large_Memory_Optimization_Big_Pages_and_Huge_Pages-Configuring_Huge_Pages_in_Red_Hat_Enterprise_Linux_4_or_5.html

The Wikipedia article says large pages aren't widely used and used mostly in server environments but it
doesn't say why. I suspect it doesn't perform well in a desktop environment with multiple applications
running.

The article also mentioned elevated privileges, though it's not clear whether they mean the application must run
as root or only the OS config. It also mentioned issues swapping large pages to disk.

The article also mentions that the application needs to set a flag; no such flag exists in malloc, so it's not so
simple.

I have some thoughts on possible implementation. This looks like a specialist feature that is not applicable
to some environments therefore cpuminer would have to disable it by default. cpuminer would also have
to confirm large pages is enabled in the OS before using them.
Edit: I'm reluctant to advertise such a feature because then I'd be responsible for helping users set up their OS.

Do large pages apply to static application data or only dynamically allocated memory?


ah yes i forgot, once enabled you can also set the amount dynamically in the os with the commands referenced

i fully understand and support your statement about advertising this feature; a simple cli option to enable it would be enough in my opinion (just link the os setup guides in the readme or somewhere, everything else is up to the user)

well, it's not widely used because most applications have no need for it; only memory-heavy applications can benefit, which largely limits the use cases (except maybe chrome Tongue)
i don't think it reduces performance, the reserved ram is just gone, so if your system has let's say 12gb and your application uses 1gb of huge pages, the remaining 11gb should still work as usual

about swapping i don't know, i never had to (and it would ruin performance anyway), better to know your specs and size the hugepages accordingly


last note: you can enable transparent hugepages in linux (not sure if it's enabled by default), you can read about that here: https://www.kernel.org/doc/Documentation/vm/transhuge.txt

doesn't require you to rewrite anything afaik
seems you also need to adjust the allocations
313  Alternate cryptocurrencies / Pools (Altcoins) / Re: [ANN][POOL] Mining Pool Hub - Multipool. Multialgo, Auto Exchange to any coin. on: February 16, 2017, 01:51:11 AM
revisiting the api on mph:

is there an api endpoint for global balance data (like the page with all balances, but as api)?

also last time i asked there was no global api for all algos (workers etc), has this changed?

i would like to list all workers and their coin with balances in a table

In case anyone is interested, as there is no total balance for the pool yet, here is php code to get your absolute total balance in BTC:

Code:
<?php
$id = "[enter your user id here]";
$key = "[enter your api key here]";
$balance = 0;
foreach (json_decode(@file_get_contents("http://miningpoolhub.com/index.php?page=api&action=getminingandprofitsstatistics"))->{'return'} as $coin) {
    $url = "http://" . $coin->{'coin_name'} . ".miningpoolhub.com/index.php?page=api&action=getdashboarddata&api_key=" . $key . "&id=" . $id;
    $json = @file_get_contents($url);
    $data = json_decode($json)->{'getdashboarddata'}->{'data'};
    $balance += ((float)$data->{'balance'}->{'confirmed'} + (float)$data->{'balance'}->{'unconfirmed'} + (float)$data->{'balance_for_auto_exchange'}->{'confirmed'} + (float)$data->{'balance_for_auto_exchange'}->{'unconfirmed'} + (float)$data->{'balance_on_exchange'} + (float)$data->{'personal'}->{'estimates'}->{'payout'}) * (float)$coin->{'highest_buy_price'};
    usleep(500000); // sleep() only takes whole seconds; sleep(0.5) would round down to 0
}
header('content-type: application/json');
echo json_encode($balance);
?>


Here it is for PowerShell (you can run this on your PC by simply saving this text in a '.PS1' file i.e. 'balance.ps1'):
Code:
$id = "[enter your user id here]"
$key = "[enter your api key here]"
$balance = 0
foreach ($coin in (Invoke-WebRequest -Uri "http://miningpoolhub.com/index.php?page=api&action=getminingandprofitsstatistics" | ConvertFrom-Json).return) {
    $url = "http://" + $coin.coin_name + ".miningpoolhub.com/index.php?page=api&action=getdashboarddata&api_key=" + $key + "&id=" + $id
    $json = Invoke-WebRequest -Uri $url
    $data = ($json | ConvertFrom-Json).getdashboarddata.data
    $balance += ([float]$data.balance.confirmed + [float]$data.balance.unconfirmed + [float]$data.balance_for_auto_exchange.confirmed + [float]$data.balance_for_auto_exchange.unconfirmed + [float]$data.balance_on_exchange + [float]$data.personal.estimates.payout) * [float]$coin.highest_buy_price
    Start-Sleep -Milliseconds 500  # -Seconds takes whole seconds in Windows PowerShell
}
echo ($balance | ConvertTo-Json)

thanks for this, but this cycles through all coins and makes <coinAmount>+1 http calls, right? even (<coinAmount>*2)+1 if also querying workers (afaik they are not included in the dashboarddata, would need to check)

i have already thought about implementing it that way, but i try not to spam the mph server(s) with that many requests every 10-30 sec Cheesy

edit: i have implemented it like that for now, worker stats are a separate call too, very ugly solution Cheesy

some small notes on what can be added additionally:
- worker stats: time connected (12 for 12 min, 120 for 2h etc)
- dashboard stats: currency symbol (BTC,XMR etc)
314  Alternate cryptocurrencies / Pools (Altcoins) / Re: [ANN][POOL] Mining Pool Hub - Multipool. Multialgo, Auto Exchange to any coin. on: February 15, 2017, 11:19:39 PM
revisiting the api on mph:

is there an api endpoint for global balance data (like the page with all balances, but as api)?

also last time i asked there was no global api for all algos (workers etc), has this changed?

i would like to list all workers and their coin with balances in a table
315  Alternate cryptocurrencies / Mining (Altcoins) / Re: [ANN]: cpuminer-opt v3.5.7, open source optimized multi-algo CPU miner on: February 15, 2017, 09:38:35 PM
Ryzen will be a cryptonight hashing beast.

8 cores, 16MB cache

I hope you can look into the huge pages stuff.

The key for cryptonight performance is cache size and AES performance. 16MB cache is good for 8 threads
but AMD implementations of Intel technology tend to be inferior.

You'll have to build a good case for large pages. It looks like a lot of trouble with inconsistent results. Nicehash
experimented with it, how did that work out?

Edit: here are some of the questions that need answering in addition to a typical pro-con.

1. What are large pages exactly?

2. What are the OS issues? What changes are required to the OS?

3. Implementation issues: how much code needs changing?

4. User issues: do users need to be root/admin to run cpuminer with large pages?

5. Performance issues: are there conditions where large pages decrease performance?

If you have links to info that answers these questions that's good. I'm a bit skeptical about this
and don't feel like doing all the research work. It also gives me time to decompress after the
Lyra2 issues.

i can try to answer some of these questions, i have encountered large/huge pages in linux in a different field: networking

i have worked with the intel dpdk (data plane development kit), which roughly gives the user the ability to implement pmds (poll mode drivers) for nics. i have used this in conjunction with open vswitch to build a packet forwarding and routing vm based on ubuntu lts. large pages are used for efficient transport of many packets from host to vm and back.
in general, large pages are just like normal pages, except they are bigger/larger, so the tlb has fewer entries to go through and is quicker. also, large copies are faster.

i have only used this with linux, and in linux these pages are reserved on boot; this space is not available to other programs.
on windows i have only used it for the mentioned xmr-stak miner, and it seems it's not reserved on boot (at least i didn't see any increase in ram usage in task manager), and the miner's ram usage also seems rather small (when i was dealing with the packet forwarding stuff i easily reserved 16GB of ram across multiple numa nodes)

regarding the questions directly:

1) basically larger normal pages, as explained before (you can read the first paragraph below the table here: https://en.wikipedia.org/wiki/Page_(computer_memory)#Huge_pages )

2.1) on linux you just have the reserved amount of ram less available (i have not tested this with the xmr miner, might differ); you need to enable it in the grub config once, i would rate this 1/10 on a difficulty scale
2.2) on windows you need to go through some gui stuff in the group policy settings to enable it, that's it, i would rate that 1/10 on a difficulty scale

2.2 side note: i have observed that my network transfer speed is limited to about 120mbit/s when i run the xmr-stak miner with huge pages enabled; once i exit it, it goes back to the full 1gbit (it's a realtek nic)

3) sadly i can't answer that part Tongue you might want to contact the dev of the stak miner about it, not sure if he wants to help though

4) it is stated that i have to run the miner "as admin" on windows for hugepages, though it worked flawlessly without; for linux i'm not sure

5) i have only observed the nic slowdown from the 2.2 side note, and i have only run it on windows so far
when the miner isn't running i have not experienced any issues while hugepages are enabled
316  Alternate cryptocurrencies / Mining (Altcoins) / Re: NiceHash Miner - easy-to-use best-profit multi-device cryptocurrency miner on: February 15, 2017, 06:20:16 PM

-------------
The URL might be misspelled, or the page you're looking for is no longer available.

remove the 51 at the end
317  Alternate cryptocurrencies / Mining (Altcoins) / Re: AMD Zen CPU's: Ryzen and Naples on: February 15, 2017, 11:21:52 AM
It will definitely be worth testing as there might be something to learn. If you're not learning in mining, you're falling behind. I will be testing a Ryzen setup for CPU mining and prepping the rig for the Vega Grin



please post your findings when you have the system ready Smiley
318  Alternate cryptocurrencies / Mining (Altcoins) / Re: CCminer(SP-MOD) Modded NVIDIA Maxwell / Pascal kernels. on: February 15, 2017, 11:20:01 AM
Ethereum+pascal is true. I am successfully dual mining on the gtx 1070..
sp-mod EthPascDualMiner Cheesy

quick, before claymore does it Cheesy
319  Alternate cryptocurrencies / Mining (Altcoins) / Re: AMD Zen CPU's: Ryzen and Naples on: February 15, 2017, 10:48:38 AM
i'll add the following (it is afaik not confirmed, we will need to see):

small question to joblo (or anyone competent in that field, i'm not Tongue)

i just read the following about the upcoming ryzen arch:

Quote
AMD left out 256bit AVX to save space and power to allow for higher clocks, but it can still decode avx, but it uses 2x128 bit, so it takes 2 cycles for 1 avx instruction.

if this turns out to be true, does it have an impact on mining speeds? i suppose yes

cheers

It will reduce the compute power of AVX2 code, which will affect some algos: mostly Lyra2-based algos, algos that use
cubehash (the x11 family), and Hodl, essentially any algo that reports AVX2 capability. However, if an algo is I/O bound, a small
loss in compute power may not affect the hashrate significantly. I'd think twice before buying one for mining.

Edit: I expect the performance to be equal to AVX, maybe a little better.

Edit: I don't know what will happen to AVX2 intrinsic or assembly code. If Ryzen doesn't recognize AVX2 instructions it may
have to rely on the compiler not to generate them. That won't work for hardcoded AVX2. If, on the other hand, Ryzen
internally converts them to their AVX equivalents, that won't be a problem.


i'm not sure about the server cpus, they will likely have 256-bit avx included (?), would be interesting to see their performance if so
320  Alternate cryptocurrencies / Mining (Altcoins) / Re: BAIKAL GROUPBUY 3 - Giant Special on: February 15, 2017, 01:09:07 AM
Well it would have to given it is $400 more per miner out of the gate.  I don't like those figures.  I was leaning toward more Giant's if the price point was good but I can't justify that big a price difference.  Get that price down in the $1900 range and it becomes competitive, not at $2100 plus.

So if that can be done to get the price more competitive soon then I will buy more of the Giant miners.  Otherwise I will shift my focus back over to the L3's.  I know they can get us the price point.  Now will they do it?      

to be in compliance with the mini miner reduction to $300 (without any discounts), the giant has to get down to $1800 (6*300); with the 10% discount that is $1620

i could just as well buy all the mini/quad miners, but i prefer a single larger unit, just not at this price point, will wait and see

keep in mind scaling up minis and quads is a mess and they will not be in production forever

that is why i'm holding back and waiting for the price to drop; no point buying as of now if your funds are limited because you are a poor student Tongue