joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
March 03, 2017, 07:22:27 PM
the first line tells docker to use the ubuntu 16.04 image as a baseline to build the image. Think of the dockerfile as a recipe for how to set up a machine.
If I understand correctly this file is not generic and would likely need to be modified before use, unless the default FROM matches the user's intended base. Or is this a hint by the dev (me) that this is the image I used and is the recommended image? Edit: I'm asking in the context of including it in the package. Does it belong there or is it something specific to each user?
felixbrucker
March 03, 2017, 07:56:02 PM
the first line tells docker to use the ubuntu 16.04 image as a baseline to build the image think of the dockerfile as a recipe on how to setup a machine
If I understand correctly this file is not generic and would likely need to be modified before use unless the default FROM matches the user's intended base. Or is this a hint by the dev (me) that this is the image I used and is the recommended image? Edit: I'm asking in the context of including it in the package. Does it belong there or is it someting specific to each user?
it is generic. What happens when someone builds the image (or even better: you set up automatic builds with Docker Hub):
- docker will run a lightweight container, which runs the ubuntu 16.04 tools and programs using the host kernel (thus it's way faster than VMs as most of the overhead is saved); this step is the "FROM", it defines a baseline image to be used for this image
- docker will install the necessary tools to build and run cpuminer-opt, defined with the "RUN" command, which will execute them inside this ubuntu container
- docker will then build cpuminer-opt and install it in the container's system paths
- docker will then set the entrypoint to the bin, so every command you append to your docker commandline will get passed to the cpuminer-opt bin
- docker will then remove all build deps and execute autoremove for anything else that might be unneeded
- "CMD" tells docker to pass that command if nothing else is supplied by the user when running the docker image later
After this procedure you end up with a docker image where you just run: docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [ARG...]
Though for Docker Hub I would change the git pull stuff to just a plain "COPY" and remove the apt-get remove steps, as the space savings are marginal; additionally I'd update the build.sh to use nproc and just use that (no static linking). Everyone can then use this single image; it's not specific to one user/system.
One problem might exist: currently cpuminer-opt determines cpu capabilities at build time, not runtime, so every system *might* need to rebuild the image for its cpu if it differs from the one that built the image initially. In general docker is used for application delivery, but cpuminer-opt is special because it relies on cpu features at build time.
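The recipe described above could be sketched as a Dockerfile along these lines. This is an illustrative guess, not the actual file from the repo: the package list, paths, and build command are assumptions and would need adjusting to the real build deps.

```dockerfile
# Baseline image: the "FROM" step that defines the environment
FROM ubuntu:16.04

# "RUN" installs the tools needed to build and run the miner
# (hypothetical package list -- check the real build requirements)
RUN apt-get update && apt-get install -y \
    build-essential automake git \
    libcurl4-openssl-dev libjansson-dev libssl-dev libgmp-dev

# Copy the source in (the "COPY" alternative to git pull suggested above)
COPY . /cpuminer-opt
WORKDIR /cpuminer-opt

# Build and install into the container's system paths
RUN ./build.sh && cp cpuminer /usr/local/bin/

# Everything appended to `docker run IMAGE ...` is passed to this bin
ENTRYPOINT ["cpuminer"]

# Default arguments when the user supplies none
CMD ["--help"]
```

With this in place, `docker build -t cpuminer-opt .` followed by `docker run cpuminer-opt -a x11 -o stratum+tcp://...` would run the miner with those arguments.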
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
March 03, 2017, 08:16:53 PM
Thanks Felix,
That pretty much answered all my questions but raised another. The lack of portability may be the deal breaker. Is there any advantage to docker considering the issues with cross-compiling cpuminer-opt? It seems like a feature with limited user appeal, so it doesn't really seem appropriate to include in the package.
I was curious about make -j. I was considering removing the option and using the default, but the man page doesn't say what the default is.
felixbrucker
March 03, 2017, 11:42:38 PM
Thanks Felix,
That pretty much answered all my questions but raised another. The lack of portability may be the deal breaker. Is there any advantage to docker considering the issues with cross-compiling cpuminer-opt? It seems like a feature with limited user appeal, doesn't really seem appropriate to include it in the package.
I was curious about make -j. I was considering removing the option and using the default but the man page doesn't say what the default is.
Well, if your host/server is a plain docker host this solution is great (see CoreOS etc). Also, if you have access to a large homogeneous docker cluster, this might come in handy. The third and maybe last option where it might come in handy is if you prefer your computer with a minimal footprint of tools/dependencies/libs installed and prefer dockerized containers.
This only applies to linux (and maybe osx). Docker has windows support, but I'm not sure the emulation part (besides win10 "native" ubuntu integration) performs equally well there (after all it has to run a linux kernel somehow on windows).
A Dockerfile is appropriate directly in the repo, but I'd change some stuff as mentioned before.
Regarding -j: if not specified it defaults to 1 (i.e. make takes longer on multicore systems); if specified without an integer arg it's *unlimited*; otherwise the integer is the limit on jobs running in parallel.
I can submit a PR tomorrow for the Dockerfile with my changes if needed.
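The -j behavior described above can be checked with a throwaway Makefile. The Makefile and its targets below are invented for the demo; only the -j semantics are the point:

```shell
#!/bin/sh
# make              -> -j absent: one job at a time (default of 1)
# make -j           -> -j with no argument: unlimited parallel jobs
# make -j N         -> at most N jobs in parallel

# Tiny throwaway Makefile so the demo is self-contained
tmpdir=$(mktemp -d)
printf 'all: a b\na:\n\t@echo built-a\nb:\n\t@echo built-b\n' > "$tmpdir/Makefile"

make -C "$tmpdir" all                # serial: a, then b
make -C "$tmpdir" -j"$(nproc)" all   # capped at the CPU count
```

On a real project the difference shows up as wall-clock build time; here both runs just print built-a and built-b.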
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
March 03, 2017, 11:57:13 PM Last edit: March 04, 2017, 04:54:43 AM by joblo
Thanks Felix,
That pretty much answered all my questions but raised another. The lack of portability may be the deal breaker. Is there any advantage to docker considering the issues with cross-compiling cpuminer-opt? It seems like a feature with limited user appeal, doesn't really seem appropriate to include it in the package.
I was curious about make -j. I was considering removing the option and using the default but the man page doesn't say what the default is.
well if your host/server is a plain docker host this solution is great (see coreos etc), also if you have access to a large homogeneous docker cluster, this might come in handy as well the third and maybe last option where it might come in handy is if you prefer your computer with a minimal footprint of tools/dependencies/libs installed and prefer dockerized containers this only applies to linux (and maybe osx), docker has windows support but im not sure the emulation part (besides win10 "native" ubuntu integration) might be performing equally here (after all it has to run a linux kernel somehow on windows) a Dockerfile is appropriate directly in the repo, but id change some stuff as mentioned before regarding -j: if not specified it defaults to 1 (ie make takes longer on multicore systems), if specified without an integer/arg its *unlimited*, else the integer count is the limit for running jobs in parallel i can submit a PR tomorrow for the Dockerfile with my changes if needed
I did some more research on make -j and the default is, as you say, 1, but only if it's not set in MAKEFLAGS. I'll go with the default for build.sh to respect anyone who has defined MAKEFLAGS.
OK, I'll include a better dockerfile. I still think it's better to use 14.04 as the baseline since that is my build environment. When I upgrade to a newer distro/compiler I can update the dockerfile to match my new environment. Does this make any sense to you? In both cases anyone who knows better can modify as desired. Go ahead with the PR, it'll be a good simple first attempt, hard for me to mess it up.
Edit: Dumb question: why does the dockerfile download cpuminer-opt when it was already downloaded to get the dockerfile?
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
March 04, 2017, 12:09:11 AM Last edit: March 04, 2017, 12:52:51 AM by joblo
Update on legacy branch.
I've decided to use 3.5.9 as the base for the legacy branch. As with the previous version it is intended only for CPUs without AES, such as Intel Core2 and several AMD architectures. All algos up to 3.5.12 are supported; however, some do not work on AMD x2 CPUs.
The differences between the master and legacy branches:
- legacy branch includes groestl macros that were removed from the master branch, resulting in faster X*, quark, and nist5 algos.
- legacy branch does not contain optimizations made in 3.5.10 that resulted in reduced performance on AMD x2 series CPUs.
- legacy branch includes new algos and bug fixes added between 3.5.10 and 3.5.12.
- legacy branch will be updated very infrequently, if ever again.
- legacy release is untested by me; it's all mature code and should work, and I will fix problems.
- download links for the legacy version will continue to be published in the OP.
- full source code will always be available.
My intent was to build only SSE2 binaries for Windows; however, my build script builds them all, so it's less work to just build them all. Only the SSE2 version and maybe the SSE2-AES version will be useful. Anything with AVX should be faster using the master branch.
The branch has been created in git but has not been updated yet and should not be used. It's still a vanilla 3.5.9 release.
felixbrucker
March 04, 2017, 09:53:43 AM Last edit: March 04, 2017, 01:16:37 PM by felixbrucker
Thanks Felix,
That pretty much answered all my questions but raised another. The lack of portability may be the deal breaker. Is there any advantage to docker considering the issues with cross-compiling cpuminer-opt? It seems like a feature with limited user appeal, doesn't really seem appropriate to include it in the package.
I was curious about make -j. I was considering removing the option and using the default but the man page doesn't say what the default is.
well if your host/server is a plain docker host this solution is great (see coreos etc), also if you have access to a large homogeneous docker cluster, this might come in handy as well the third and maybe last option where it might come in handy is if you prefer your computer with a minimal footprint of tools/dependencies/libs installed and prefer dockerized containers this only applies to linux (and maybe osx), docker has windows support but im not sure the emulation part (besides win10 "native" ubuntu integration) might be performing equally here (after all it has to run a linux kernel somehow on windows) a Dockerfile is appropriate directly in the repo, but id change some stuff as mentioned before regarding -j: if not specified it defaults to 1 (ie make takes longer on multicore systems), if specified without an integer/arg its *unlimited*, else the integer count is the limit for running jobs in parallel i can submit a PR tomorrow for the Dockerfile with my changes if needed I did some more research on make -j and the default is as you say 1 but only if it's not set in MAKEFLAGS. I'll go with the default for build.sh to respect anyone who has defined MAKEFLAGS. OK I'll include a better docker file. I still think its better to use 14.04 as the baseline since that is by build environment. When I upgrade to a newer distro/compiler I can update dockerfile to match my new environment. Does this make any sense to you? In both cases anyone who knows better can modify as desired. Go aheard with the PR, it'l be a good simple first attempt, hard for me to mess it up. Edit: Dumb question: Why does the dockerfile download cpuminer-opt when it was already downloaded to get dockerfile? 
Regarding make: I'd run make as-is if MAKEFLAGS is set (or MAKEOVERRIDES, not sure) and with nproc otherwise; this ensures a fast build on "normal" systems without MAKEFLAGS set.
I can see why using a newer ubuntu baseline image is preferred: newer gcc generates slightly faster binaries. I can also see why using an older ubuntu baseline image is preferred: it grants stability, as the dev's build env uses the same tool versions. Ultimately you decide which image to use for docker.
Sidenote: you can always set up a separate (mostly) identical docker image which runs your build scripts for the various arches/tests inside the newer ubuntu docker image to verify all is working well (ultimately resulting in a clean dockerized build environment).
Regarding the download of cpuminer-opt inside the dockerfile: that's one of the things I will change, you will see in the PR later today.
Edit: I have submitted the PR.
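The MAKEFLAGS-respecting logic proposed above might look like the following sketch. This is not the actual build.sh; the throwaway demo Makefile is invented so the snippet runs standalone:

```shell
#!/bin/sh
# Respect a user-defined MAKEFLAGS; otherwise parallelize across all cores.

# Throwaway Makefile standing in for the real project
demo=$(mktemp -d)
printf 'all:\n\t@echo build-done\n' > "$demo/Makefile"

if [ -n "${MAKEFLAGS}" ]; then
    make -C "$demo"                # the user's MAKEFLAGS already apply
else
    make -C "$demo" -j"$(nproc)"   # fast default on multicore systems
fi
```

A user who exports, say, MAKEFLAGS="-j2" keeps their own limit; everyone else gets a full parallel build.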
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
March 04, 2017, 05:03:01 PM
edit: i have submitted the PR
Merged.
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
March 04, 2017, 05:23:38 PM
I'm trying to work with branches in git and having problems. At the moment I'm stuck trying to clone the legacy branch. When I clone the repo it has no knowledge of the legacy branch, and if I download a zip of the legacy branch git doesn't recognize it as a valid repo.
I would have preferred to keep the 2 branches separate instead of having to switch branches, but I can't even see the legacy branch to switch to it.
felixbrucker
March 04, 2017, 05:26:34 PM
I'm trying to work with branches in git and having problems. At the moment I'm stuck trying to clone the legacy branch. When I clone the repo it has no knowledge of the legacy branch, if I download a zip of the legacy branch git doesn't recognize it as a valid repo.
I would have preferred to keep the 2 branches seperate instead of having to switch branches but I can't even see the legacy branch to switch to it.
in your local git repo do:
git fetch
git checkout -b legacy origin/legacy
should work
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
March 04, 2017, 06:02:42 PM
I'm trying to work with branches in git and having problems. At the moment I'm stuck trying to clone the legacy branch. When I clone the repo it has no knowledge of the legacy branch, if I download a zip of the legacy branch git doesn't recognize it as a valid repo.
I would have preferred to keep the 2 branches seperate instead of having to switch branches but I can't even see the legacy branch to switch to it.
in your local git repo do: git fetch git checkout -b legacy origin/legacy
should work
I don't think this will work; I need to specify where the branch starts. It looks like this is trying to recreate what I did on github. I don't want to recreate it, I want to use what was previously created.
felixbrucker
March 04, 2017, 06:08:20 PM
I'm trying to work with branches in git and having problems. At the moment I'm stuck trying to clone the legacy branch. When I clone the repo it has no knowledge of the legacy branch, if I download a zip of the legacy branch git doesn't recognize it as a valid repo.
I would have preferred to keep the 2 branches seperate instead of having to switch branches but I can't even see the legacy branch to switch to it.
in your local git repo do: git fetch git checkout -b legacy origin/legacy
should work I don't think this will work, I need to specify where the branch starts. It looks like this is trying to recreate what I did on github. I don't want to recreate it I want to use what was previously created.
Fetch will update your local repo from github, and checkout will check out the branch specified.
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
March 04, 2017, 06:21:23 PM
I'm trying to work with branches in git and having problems. At the moment I'm stuck trying to clone the legacy branch. When I clone the repo it has no knowledge of the legacy branch, if I download a zip of the legacy branch git doesn't recognize it as a valid repo.
I would have preferred to keep the 2 branches seperate instead of having to switch branches but I can't even see the legacy branch to switch to it.
in your local git repo do: git fetch git checkout -b legacy origin/legacy
should work I don't think this will work, I need to specify where the branch starts. It looks like this is trying to recreate what I did on github. I don't want to recreate it I want to use what was previously created. fetch will update your local repo from github and checkout will checkout the branch specified
fetch did nothing.
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean
$ git fetch
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean
$ git branch
* master
felixbrucker
March 04, 2017, 06:25:29 PM
Yes, you will need to execute the second command as well, see here: you can use git branch -a to list all branches (not only the local ones), or git branch -r for all remote branches.
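The listing commands can be tried in a scratch repo. The branch name, identity, and temp directory below are placeholders, and `git init -b` assumes git 2.28 or newer:

```shell
#!/bin/sh
# Build a throwaway repo with two local branches
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git branch legacy     # create a second local branch

git branch            # local branches only
git branch -a         # local plus remote-tracking branches
git branch -r         # remote-tracking only (empty here, no remote configured)
```

In a real clone, origin/legacy would show up under `git branch -r` after a fetch, even before any local legacy branch exists.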
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
March 04, 2017, 06:40:22 PM
yes, you will need to execute the second command as well, see here: you can use git branch -a to list all branches (not only the local ones), or git branch -r for all remote branches
You didn't do a fetch. Other than that it looks like what I want.
Looking further down the road, how does one clone the legacy branch? It seems like there is no option to select either the branch or the commit when cloning; you just get the repo. Not very useful for users who just want to compile the legacy branch to have to do a checkout to make it visible.
felixbrucker
March 04, 2017, 06:57:47 PM
Fetch is only necessary if your local git repo doesn't have the new branch info from the remote(s) yet; after a clone there is no need for a fetch.
To get the legacy branch directly when cloning one can use:
git clone <url> --branch <branch>
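This can be verified locally without touching GitHub. The source repo below is a local stand-in for the real URL (and `git init -b` assumes git 2.28+):

```shell
#!/bin/sh
# Build a local stand-in repo with master and legacy branches
src=$(mktemp -d)
git -C "$src" init -q -b master
git -C "$src" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git -C "$src" branch legacy

# Clone directly onto the legacy branch
dst=$(mktemp -d)/cpuminer-opt
git clone -q --branch legacy "$src" "$dst"

git -C "$dst" rev-parse --abbrev-ref HEAD   # prints: legacy
```

Against the real repo the same idea is `git clone https://github.com/JayDDee/cpuminer-opt.git --branch legacy`.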
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
March 04, 2017, 07:02:52 PM
fetch is only necessary if your local git repo doesnt have the new branch info from the remote(s) yet, after a clone there is no need for a fetch to get the legacy branch directly when cloning one can use: git clone <url> --branch <branch> Thanks. I'll include that in the instructions.
felixbrucker
March 04, 2017, 07:10:00 PM
I'm currently trying to understand the structure of cpuminer-opt/cpuminer-multi: I want to implement a basic cryptonight hashing function in javascript, and/or port the C/C++ part of cpuminer-opt to js with asm.js, though I'm not sure what the best starting point for this basic task is.
Is there some documentation for devs who are not familiar with how mining software works?
I suppose writing this in js with libs is fairly easy.
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
March 04, 2017, 08:02:10 PM
im currently trying to understand the structure of cpuminer-opt/cpuminer-multi: i want to implement a basic cryptonight hashing function in javascript and/or port the C/C++ part of cpuminer-opt to js with asm.js, though im not sure what the best starting point for this basic task is
is there some documentation for devs which are not familiar with how mining software works?
i suppose writing this in js with libs is fairly easy
Not that I'm aware of. The first step is to identify everything needed for cryptonight. If all you want is the bare hashing function, look in algo/cryptonight/cryptonight-aesni.c. If you need to rely on SW AES, look in algo/cryptonight/cryptonight.c.
There's a lot more to it if you want to build a mining app: UI, stratum/networking, multithreading, algo interface, algo support SW; I'm probably missing a couple. If you want to build a Frankenstein you need to find the line between the core SW and the algo SW. That line is primarily scanhash, though it's a very blurred line. Almost everything above it is core SW, and everything below is algo specific. The intent of algo-gate was to try to better define that line, but you can still see when the line is crossed by the custom target functions the algo has to give to the core. multi and ccminer have a bunch of algo hooks in the core code.
I'm not familiar with any other miner architectures, but I presume there still exists that basic interface where you input a message and get back a hash.
joblo (OP)
Legendary
Offline
Activity: 1470
Merit: 1114
March 04, 2017, 08:53:24 PM
It looks like git clone -b is what I needed all along. I also need to change my git push command to use origin HEAD. That way I won't screw up and push to the wrong branch.
$ git clone https://github.com/JayDDee/cpuminer-opt.git -b legacy
Cloning into 'cpuminer-opt'...
remote: Counting objects: 1039, done.
remote: Compressing objects: 100% (148/148), done.
remote: Total 1039 (delta 54), reused 0 (delta 0), pack-reused 886
Receiving objects: 100% (1039/1039), 2.90 MiB | 1.18 MiB/s, done.
Resolving deltas: 100% (401/401), done.
Checking connectivity... done.
$ git branch
* legacy