yrk1957
Member
Offline
Activity: 531
Merit: 29
|
|
May 26, 2018, 04:43:29 PM |
|
I am having something like this:

root@worker:/home/user/btm-miner# ./miner -user 0x3fD23AfC8f59A5b426175FD199b4b1658C54f7C1.text
2018/05/26 16:24:33 fserver : true
getGpuCount error:0
Init GPU Device 0: "GeForce GTX 1060 6GB" with compute capability 6.1
Init GPU Device 1: "GeForce GTX 1060 6GB" with compute capability 6.1
Init GPU Device 2: "GeForce GTX 1060 6GB" with compute capability 6.1
2018/05/26 16:24:35 Starting BTM mining
2018/05/26 16:24:35 Connecting to btm.uupool.cn:9220
2018/05/26 16:24:35 2 - Initializing
2018/05/26 16:24:35 2 - No work ready
2018/05/26 16:24:35 0 - Initializing
2018/05/26 16:24:35 0 - No work ready
2018/05/26 16:24:35 1 - Initializing
2018/05/26 16:24:35 1 - No work ready
2018/05/26 16:24:36 Error in connection to stratumserver: EOF, retry in 10s...
2018/05/26 16:24:36 Unable to login: Invalid login
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x5cace2]
goroutine 27 [running]:
github.com/leifjacky/btmgominer/clients/stratum.(*Client).Close(0x0)
    /data/go/src/github.com/leifjacky/btmgominer/clients/stratum/stratum.go:94 +0x22
github.com/leifjacky/btmgominer/algorithms/bytom.(*StratumClient).Start(0xc42014e000)
    /data/go/src/github.com/leifjacky/btmgominer/algorithms/bytom/siastratum.go:119 +0x64a
github.com/leifjacky/btmgominer/algorithms/bytom.(*Miner).createWork(0xc420072540)
    /data/go/src/github.com/leifjacky/btmgominer/algorithms/bytom/miner.go:85 +0xcc
created by github.com/leifjacky/btmgominer/algorithms/bytom.(*Miner).Mine
    /data/go/src/github.com/leifjacky/btmgominer/algorithms/bytom/miner.go:57 +0x9d

Your wallet address is wrong...
|
|
|
|
gregory021998
Member
Offline
Activity: 129
Merit: 11
|
|
May 26, 2018, 05:06:37 PM |
|
I am having something like this:

root@worker:/home/user/btm-miner# ./miner -user 0x3fD23AfC8f59A5b426175FD199b4b1658C54f7C1.text
2018/05/26 16:24:33 fserver : true
getGpuCount error:0
Init GPU Device 0: "GeForce GTX 1060 6GB" with compute capability 6.1
Init GPU Device 1: "GeForce GTX 1060 6GB" with compute capability 6.1
Init GPU Device 2: "GeForce GTX 1060 6GB" with compute capability 6.1
2018/05/26 16:24:35 Starting BTM mining
2018/05/26 16:24:35 Connecting to btm.uupool.cn:9220
2018/05/26 16:24:35 2 - Initializing
2018/05/26 16:24:35 2 - No work ready
2018/05/26 16:24:35 0 - Initializing
2018/05/26 16:24:35 0 - No work ready
2018/05/26 16:24:35 1 - Initializing
2018/05/26 16:24:35 1 - No work ready
2018/05/26 16:24:36 Error in connection to stratumserver: EOF, retry in 10s...
2018/05/26 16:24:36 Unable to login: Invalid login
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x5cace2]
goroutine 27 [running]:
github.com/leifjacky/btmgominer/clients/stratum.(*Client).Close(0x0)
    /data/go/src/github.com/leifjacky/btmgominer/clients/stratum/stratum.go:94 +0x22
github.com/leifjacky/btmgominer/algorithms/bytom.(*StratumClient).Start(0xc42014e000)
    /data/go/src/github.com/leifjacky/btmgominer/algorithms/bytom/siastratum.go:119 +0x64a
github.com/leifjacky/btmgominer/algorithms/bytom.(*Miner).createWork(0xc420072540)
    /data/go/src/github.com/leifjacky/btmgominer/algorithms/bytom/miner.go:85 +0xcc
created by github.com/leifjacky/btmgominer/algorithms/bytom.(*Miner).Mine
    /data/go/src/github.com/leifjacky/btmgominer/algorithms/bytom/miner.go:57 +0x9d

The pool uses the new native address format, not ERC-20.
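A quick sanity check can catch this mistake before launching the miner. This is only a sketch: the address in it is the one from the post above (an Ethereum-style ERC-20 address), and the "bm1" prefix I check for native Bytom addresses is my assumption, so verify it against what your own wallet generates.

```shell
#!/bin/bash
# Rough sanity check on the address passed to -user. The bm1 prefix for
# native Bytom addresses is an assumption - check it against your wallet.
ADDR="0x3fD23AfC8f59A5b426175FD199b4b1658C54f7C1"
case "$ADDR" in
  0x*)  msg="ERC-20 style address - the pool will reject it" ;;
  bm1*) msg="looks like a native Bytom address" ;;
  *)    msg="unrecognised address format" ;;
esac
echo "$msg"
```

Running it on the address from the crashing command line above flags it as ERC-20 style, which matches the "Invalid login" in the log.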
|
|
|
|
parentibule
|
|
May 26, 2018, 06:21:09 PM Last edit: May 26, 2018, 07:00:36 PM by parentibule |
|
I don't understand: I only get 650 H/s with 6x1060!
Edit: 720 with OC (+100/+700).
Edit 2: very strange, the cards only use between 35 and 60 W each and load is between 0 and 60%!
Edit 3: either the pool sucks or the miner sucks, I don't know which, but it sucks.
|
|
|
|
gameboy366
Jr. Member
Offline
Activity: 252
Merit: 8
|
|
May 26, 2018, 06:39:26 PM Last edit: May 26, 2018, 08:02:32 PM by gameboy366 |
|
Mine has been stuck at "New job received from stratum server" for the last 20 minutes. What should I do? I have not registered at uupool or anything. All I did so far: download and extract in HiveOS, change wallet address, run run.sh. This is what it's showing:
https://ibb.co/kwVLqo
https://ibb.co/g0cN38
https://ibb.co/iKpDAo
I restarted the rig and now it is stuck on the last GPU with a "No work ready" message.
Edit: restarted again. Now it is stuck at "New job received from stratum server" again. Please help.
|
|
|
|
yrk1957
Member
Offline
Activity: 531
Merit: 29
|
|
May 26, 2018, 07:18:45 PM |
|
I don't understand: I only make 650 H/s with 6x1060! edit: 720 with OC (+100/+700). edit2: very strange, only use between 35 and 60W per card and load is between 0 and 60%! edit3: pools sucks or miner sucks, I don't know but it sucks

Check your CPU usage; if it is maxed out, you need a better CPU to drive the cards.
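To check this on a Linux rig, you can sample the aggregate CPU counters. A small sketch (Linux-only, since it reads /proc/stat); a result near 100% means the CPU, not the GPUs, is the bottleneck:

```shell
#!/bin/bash
# Sample the aggregate CPU counters in /proc/stat twice, one second
# apart, and compute the busy percentage over that interval.
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _ < /proc/stat
busy=$(( (u2 - u1) + (n2 - n1) + (s2 - s1) ))
total=$(( busy + (i2 - i1) ))
[ "$total" -gt 0 ] || total=1   # guard against a zero interval
echo "CPU busy: $(( 100 * busy / total ))%"
```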
|
|
|
|
gregory021998
Member
Offline
Activity: 129
Merit: 11
|
|
May 26, 2018, 08:24:59 PM |
|
I don't understand: I only make 650 H/s with 6x1060! edit: 720 with OC (+100/+700). edit2: very strange, only use between 35 and 60W per card and load is between 0 and 60%! edit3: pools sucks or miner sucks, I don't know but it sucks

Hey, use two instances instead of one (you need at least 4 GB of RAM and a good CPU). Instead of using run.sh, use this: https://pastebin.com/gR2e1bp5
Create miner.sh and miner2.sh (with the script above) and run them in screen. Change the path of the miner, the address, and the worker.
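For anyone unsure what the two-instance setup looks like, here is a minimal sketch of launching both in detached screen sessions. The miner path, address, and worker suffixes are placeholders, not real values; adjust them to your rig.

```shell
#!/bin/bash
# Sketch of a two-instance launcher. MINER_DIR, ADDR and the worker
# suffixes are placeholders - substitute your own values.
MINER_DIR="${MINER_DIR:-/home/user/btm-miner}"
ADDR="${ADDR:-bm1qexampleaddress}"

# Distinct worker suffixes let the pool report each instance separately.
CMD1="./miner -user ${ADDR}.worker1"
CMD2="./miner -user ${ADDR}.worker2"

# Launch each instance in its own detached screen session (skipped when
# screen is not installed, e.g. when dry-running this script).
if command -v screen >/dev/null 2>&1; then
  screen -dmS btm1 bash -c "cd '$MINER_DIR' && $CMD1"
  screen -dmS btm2 bash -c "cd '$MINER_DIR' && $CMD2"
fi
```

Reattach with `screen -r btm1` (or btm2) to watch each instance's log.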
|
|
|
|
gregory021998
Member
Offline
Activity: 129
Merit: 11
|
|
May 26, 2018, 08:26:58 PM |
|
Mine has been stuck at "New job received from stratum server" for the last 20 minutes. What should I do? I have not registered at uupool or anything. All I did so far: download and extract in HiveOS, change wallet address, run run.sh. This is what it's showing:
https://ibb.co/kwVLqo
https://ibb.co/g0cN38
https://ibb.co/iKpDAo
I restarted the rig and now it is stuck on the last GPU with a "No work ready" message.
Edit: restarted again. Now it is stuck at "New job received from stratum server" again. Please help.

How much RAM do you have? Which CPU?
|
|
|
|
nobaj
Newbie
Offline
Activity: 6
Merit: 1
|
|
May 27, 2018, 02:51:17 AM Last edit: May 27, 2018, 09:10:47 AM by nobaj Merited by vapourminer (1) |
|
Here is a little guide to run this on UBUNTU
I did a lot of troubleshooting just to decide that it is not for me (all my 6-GPU Nvidia rigs have crappy CPUs, and I think that either the pool or the miner is stealing from us; I also hate the fact that the "-url" option doesn't do anything).
So first things first, here is my run.sh with just a few small modifications (so it doesn't kill other processes, and so the log shows in the terminal).
#!/bin/bash
cd $(dirname $0)

if [ ! -f address.txt ]; then
    echo -e "\n file address.txt does not exist! \n"
    exit
fi

SMI=nvidia-smi
ADDR=$(cat address.txt | head -1)
DRV=$($SMI -h | grep Interface | awk -Fv '{print $2}' | cut -d. -f1)
CARDS=$($SMI -L | wc -l)
WK=$(/sbin/ifconfig eth0 | grep "inet addr" | awk '{print $2}' | awk -F. '{print $3"x"$4}')

if [ $DRV -lt 387 ]; then
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:cuda8
else
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:cuda9
fi

echo "Driver = $DRV , CARD COUNT=$CARDS , WK=${WK}"

cd btm-miner
./miner -user ${ADDR}.${WK}
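For reference, the WK= line in the script builds the worker name from the rig's IP address. A quick demonstration of what that pipeline produces, using a canned ifconfig line (the address 192.168.1.23 is made up):

```shell
#!/bin/bash
# Shows what the WK= pipeline in run.sh computes: the last two octets of
# the rig's IP joined by "x". The sample ifconfig line below is made up.
sample="          inet addr:192.168.1.23  Bcast:192.168.1.255  Mask:255.255.255.0"
WK=$(echo "$sample" | grep "inet addr" | awk '{print $2}' | awk -F. '{print $3"x"$4}')
echo "$WK"
```

So a rig at 192.168.1.23 reports to the pool as worker "1x23", which makes rigs easy to tell apart on the pool dashboard.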
Take care with the address: it is the most common mistake. You can't mine directly to most of the exchanges (because of the address type; I don't know exactly how that works). The easiest fix is to just download the wallet and create an address there.
Now, I did everything on Ubuntu 16.04 LTS (this also works on 18.04, but the OC method doesn't; someone with more knowledge could fix that very easily).
Download and do a fresh install of Ubuntu 16.04, ticking all the extra installation options.
Next you need to install the Nvidia drivers. I used version 387, so please use the same.
Just type this in the terminal and accept everything:
$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt update
$ sudo apt install nvidia-387
RESTART the system.
Now you should see all your GPUs if you type "nvidia-smi".
Now download the miner and extract it.
Please copy and paste the .sh I put above, and put your own address in address.txt (uupool pays with no registration).
Now, in your terminal, move to the folder containing your .sh and do
$ sudo chmod +x run.sh
Now run it with
$ ./run.sh
You should see everything working. Now on to OC (this part doesn't work on 18.04).
First you have to make a virtual screen for each GPU. The easiest way to do that:
$ sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration
We have to check that this was applied correctly.
go to
$ sudo nano /etc/X11/xorg.conf
Now search for the word "Coolbits". You should see this:
Option "Coolbits" "28"
not this
#Option "Coolbits" "28"
If you see the hashtag (#), please erase it for each GPU (all your GPUs will be listed there). And don't change anything else!
Ctrl+X to exit, and save it (read the bottom of the screen).
Now do this (change permissions so the configuration doesn't reset at startup):
$ sudo chmod 444 /etc/X11/xorg.conf && sudo chattr +i /etc/X11/xorg.conf
RESTART the system
Now you have to make a new script, crankit.sh (or any other name you want):
#!/bin/bash
# Script needs to run as sudo for nvidia-smi settings to take effect.
[ "$UID" -eq 0 ] || exec sudo bash "$0" "$@"

# Since all my cards are the same, I'm happy using the same Memory
# Transfer Rate Offset for all of them. I use this config for the 1050 Ti.
memoryOffset="700"
cpuOffset="100"

# Enable persistence mode so the nvidia-smi settings stick for the
# whole time the system is on.
nvidia-smi -pm 1

# Set the power limit for each card (note this value is in watts, not
# percent!). CHANGE THE INDEX LIST TO MATCH THE NUMBER OF GPUS YOU HAVE.
nvidia-smi -i 0,1,2,3,4,5,6,7,8 -pl 53

# Apply overclocking settings to each GPU.
nvidia-settings -a [gpu:0]/GpuPowerMizerMode=1
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[2]=$memoryOffset
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[2]=$cpuOffset

nvidia-settings -a [gpu:1]/GpuPowerMizerMode=1
nvidia-settings -a [gpu:1]/GPUMemoryTransferRateOffset[2]=$memoryOffset
nvidia-settings -a [gpu:1]/GPUGraphicsClockOffset[2]=$cpuOffset
Add another block like that, changing the number in gpu:x, for each of your GPUs.
To run the script, you have to chmod it again:
$ sudo chmod +x crankit.sh
then
$ ./crankit.sh
And you are done. Just change the variables and run it, and your Nvidia GPUs will be overclocked.
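Instead of copy-pasting a stanza per GPU, the per-card nvidia-settings calls can also be generated in a loop. This sketch only prints the commands so you can review them before running; the GPU-count detection via nvidia-smi is an assumption, with a fallback of 1 for dry runs:

```shell
#!/bin/bash
# Generates the three nvidia-settings calls per card instead of
# copy-pasting a stanza for each GPU. Offsets match the guide above.
memoryOffset=700
coreOffset=100

# Detect the GPU count when nvidia-smi is present; default to 1 otherwise.
if command -v nvidia-smi >/dev/null 2>&1; then
  gpuCount=$(nvidia-smi -L | wc -l)
else
  gpuCount=1
fi
[ "$gpuCount" -ge 1 ] || gpuCount=1

cmds=""
i=0
while [ "$i" -lt "$gpuCount" ]; do
  # Double quotes (not single) so the offset variables actually expand.
  cmds="$cmds
nvidia-settings -a [gpu:$i]/GpuPowerMizerMode=1
nvidia-settings -a [gpu:$i]/GPUMemoryTransferRateOffset[2]=$memoryOffset
nvidia-settings -a [gpu:$i]/GPUGraphicsClockOffset[2]=$coreOffset"
  i=$((i + 1))
done
echo "$cmds"
```

Pipe the output into bash once it looks right; printing first avoids applying a bad offset to every card at once.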
Any tips (eth)
0x0EE0873De03961208F955B4F38d7b953204De042
|
|
|
|
VoskCoin (OP)
|
|
May 27, 2018, 11:13:50 AM |
|
Hi there, is there a miner software for windows users?
Not to my knowledge; someone correct me if that's wrong, though.
|
|
|
|
parentibule
|
|
May 27, 2018, 04:00:55 PM |
|
I don't understand: I only make 650 H/s with 6x1060! edit: 720 with OC (+100/+700). edit2: very strange, only use between 35 and 60W per card and load is between 0 and 60%! edit3: pools sucks or miner sucks, I don't know but it sucks Check your CPU usage, if max, then you need a better CPU to drive the cards.

You're right, my CPU usage is about 200% with a Pentium G4400 3.3 GHz! I don't think miners usually have better CPUs...
|
|
|
|
gameboy366
Jr. Member
Offline
Activity: 252
Merit: 8
|
|
May 27, 2018, 04:06:12 PM Last edit: May 28, 2018, 08:31:02 PM by gameboy366 |
|
Thanks to gregory021998 for helping me get it running.
On a G4400, 4 GB RAM and 12x1050 Ti HiveOS rig, I found out that I can't run more than 10 GPUs. With 10 GPUs I can only run a single instance. Hash rate: 600-750 H/s.
Only with 5 GPUs can I run two instances, at approx 300-400 H/s per instance, so the hash rate of 5 GPUs is the same as that of 10. And I can only run 3 instances with just 3 GPUs; the hashrate was 125-140 H/s per instance, i.e. 125-140 H/s per 1050 Ti. CPU load was high with 3 instances. No OC.
I will test it again when I get an i5 cpu.
There is a lot of hidden potential with GPU Bytom mining and if the miners are optimised, I am sure it will be the most profitable big mcap coin. Sharing is caring guys, so please share if you discover something about Bytom gpu mining.
EDIT: After some messing around, I am able to run all 12 1050 Tis on a single instance. Hash rate 600-800 H/s. I can't run a second instance with the G4400; it freezes. Nor am I able to increase the hashrate with more instances like some of the guys here: my hashrate gets divided if I open more instances, so the total remains the same (or even less), and it puts a LOT of load on the CPU. So yeah, a powerful CPU is needed.
|
|
|
|
drduycom
Newbie
Offline
Activity: 18
Merit: 0
|
|
May 27, 2018, 10:31:56 PM |
|
Thanks to gregory021998 for helping me get it running.
On G4400, 4 gb ram and 12*1050ti HiveOS rig, I found out that I can't run more then 10 gpus. With 10 gpus I can only run a single instance. Hash rate - 600-750 h/s.
Only with 5 gpus I can run two instances. Hash rate approx 300-400 h/s per instance. So hash rate of 5 gpu is same as hash rate of 10. I can only run 3 instances with 3 gpus only. Hashrate was 125-140 h/s per instance i.e. 125-140 h/s per 1050ti. Cpu load was high with 3 instances. No OC.
I will test it again when I get an i5 cpu.
There is a lot of hidden potential with GPU Bytom mining and if the miners are optimised, I am sure it will be the most profitable big mcap coin. Sharing is caring guys, so please share if you discover something about Bytom gpu mining.
So you mean 5 GPUs running two instances is the best choice in your case? I read the whole thread and see that the 1050 Ti is the best price/performance card for Bytom mining, right?
|
|
|
|
le_yum
Newbie
Offline
Activity: 17
Merit: 0
|
|
May 28, 2018, 02:50:45 AM |
|
|
|
|
|
DevelopmentBank
|
|
May 28, 2018, 03:49:43 AM |
|
This girl deserves a follow, a retweet, and a good ****. The ohgodapill for Ethereum has done wonders for boosting the profitability of our GDDR5X GPUs, and now with this miner coming out soon, this female developer has been greatly contributing to the mining scene. I won't be surprised if she gets job offers soon (not that she needs them). Congratulations!
|
|
|
|
gameboy366
Jr. Member
Offline
Activity: 252
Merit: 8
|
|
May 28, 2018, 04:49:20 AM |
|
So you mean 5gpus running two instances is the best choice in your case? I read the whole thread and see that 1050ti is the best p/p vga for this bytom mining, right?
Correct, 5 GPUs and 2 instances was the best-case scenario for me. The 1050 Ti and 1060 3GB are the best since they are both cheap, and as the miner is not optimised the bigger GPUs don't have any hashrate advantage. I think a professional miner will solve this problem. Just the news I needed to start my day, thanks. The B3 will go down in history as the ASIC killed by the smallest of GPUs. Let's mine the hell out of this coin before Bitmain releases the real deal.
|
|
|
|
darbaslt
Newbie
Offline
Activity: 15
Merit: 0
|
|
May 28, 2018, 06:13:46 AM |
|
Hello, is there maybe a Windows miner for Bytom? For a 1080?
|
|
|
|
monkins1010
Jr. Member
Offline
Activity: 41
Merit: 1
|
|
May 28, 2018, 11:25:47 AM |
|
So you mean 5gpus running two instances is the best choice in your case? I read the whole thread and see that 1050ti is the best p/p vga for this bytom mining, right?
Correct 5gpus and 2 instances was best case scenario for me. 1050ti and 1060 3gb are the best as they are both cheap and as the miner is not optimised the big gpus don't have any hashrate advantage. I think with a professional miner, this problem will be solved. Just the news I needed to start my day. Thanks. B3 will go down in history as the ASIC killed by the smallest of gpus. Let's mine the hell out of this coin before Bitmain releases the real deal.

The B3 is not an ASIC, hence why it's not leagues more powerful than the GPUs: it's a deep-learning chip, and they can upgrade the firmware. The latest B3 firmware takes it from 750 H/s to 1000 H/s. I think really well-written software could blast the B3 away. If you were cynical, you could say this Nvidia miner is deliberately a non-optimised release, meant to get people interested without affecting Bitmain's sales, and when a good Nvidia miner comes out they will release a B3+.
|
|
|
|
parentibule
|
|
May 28, 2018, 03:29:26 PM |
|
I don't understand. I changed nothing, and now I get:

panic: runtime error: index out of range
goroutine 34 [running]:
main.main.func1(0x2, 0xc4200ac240, 0xc420098ff0, 0x2, 0x2, 0xc42015c2a0)
    /data/go/src/github.com/leifjacky/btmgominer/main.go:92 +0x2aa
created by main.main
    /data/go/src/github.com/leifjacky/btmgominer/main.go:87 +0x701
|
|
|
|
yrk1957
Member
Offline
Activity: 531
Merit: 29
|
|
May 28, 2018, 04:19:53 PM |
|
I don't understand. I changed nothing, and now I get:

panic: runtime error: index out of range
goroutine 34 [running]:
main.main.func1(0x2, 0xc4200ac240, 0xc420098ff0, 0x2, 0x2, 0xc42015c2a0)
    /data/go/src/github.com/leifjacky/btmgominer/main.go:92 +0x2aa
created by main.main
    /data/go/src/github.com/leifjacky/btmgominer/main.go:87 +0x701

I think you are trying to exclude cards using the -E option. It does not work if you try to exclude card 0 or 1. Just a bug.
|
|
|
|
QiaMiner
Newbie
Offline
Activity: 11
Merit: 0
|
|
May 28, 2018, 08:15:54 PM |
|
Does anyone have a translation of the uupool linked page about GPU mining and if the Google translation of the page is correct? I'm currently away from my PC right now and my Chinese isn't good enough to translate what's written in the page you linked. I'm trying to access the Google translate site on mobile but it redirects me to the app that isn't capable of translating the whole thing.
Otherwise, it's quite interesting that someone managed to crack the Bytom algo. I suspected it was minable by regular GPUs when I found the B3s were powered by Sophon chips, but now I guess we have a confirmation. I wonder if this'll have a large impact on existing B3 miners now- it shouldn't be too big of a deal considering the B3s should still mine much faster than any GPUs that are capable of mining Bytom.
If a 1060 and a 1050 Ti are hashing 265 H/s and the B3 is hashing 780 H/s, then no, it's not faster than the GPUs. It would be marginally more energy efficient, though, but not by a whole lot. I'm gonna assume the 1060 is doing 60% of that 265 H/s; that puts the 1060 at about 160 H/s. 780/160 = 4.9 1060s. 1060s seem to be reasonably energy efficient, so without knowing anything at all let's assume 120 W per card: 600 W total vs 376 W. The question I have is, if these really are ASICs built for an algorithm that's supposedly built for ASICs, why isn't it substantially more efficient like normal ASICs that were built for algos that weren't designed to be on ASICs? I feel like they're probably more powerful and they're redirecting the majority of the computational power of the device somewhere else, getting the miner to pay for the hardware and power they're siphoning off for personal profit. Or maybe I'm just dumb.

You would be happy to hear that my 1060s (P106-100) only consume 80 W per piece at maximum load. Normally they run at around 65 to 75 W, depending on the algo.
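The back-of-envelope arithmetic in the posts above can be checked quickly. All the figures below are the posters' estimates (780 H/s and 376 W for the B3, 160 H/s per 1060, 80 W per P106-100 as QiaMiner reports), not measurements of mine:

```shell
#!/bin/bash
# Sanity-checks the efficiency arithmetic from the thread; awk does the
# floating-point division. All inputs are the posters' estimates.
export LC_ALL=C   # force "." as the decimal separator
b3_rate=780       # H/s, Antminer B3 (stock firmware)
b3_power=376      # W, B3
gpu_rate=160      # H/s, estimated per GTX 1060
gpu_power=80      # W, per QiaMiner's P106-100 figure
gpus_needed=$(awk -v a=$b3_rate -v b=$gpu_rate 'BEGIN{printf "%.3f", a/b}')
b3_eff=$(awk -v a=$b3_rate -v b=$b3_power 'BEGIN{printf "%.2f", a/b}')
gpu_eff=$(awk -v a=$gpu_rate -v b=$gpu_power 'BEGIN{printf "%.2f", a/b}')
echo "1060s to match one B3: $gpus_needed"
echo "B3 efficiency:   $b3_eff H/W"
echo "1060 efficiency: $gpu_eff H/W"
```

At 80 W per card the 1060 lands at 2.00 H/W against roughly 2.07 H/W for the B3, which supports the thread's point that the B3 is only marginally more efficient than these GPUs.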
|
|
|
|
|