Hexxcoin has masternodes and price is currently cheap
What's special about Hexxcoin? It's a Zerocoin-based coin with masternodes and an upcoming fork with BTC to BitcoinZeroX. You can read more about it on its webpage here: https://hexxcoin.net/
Also, here is the ANN with all links: https://bitcointalk.org/index.php?topic=2958707.0
You already missed one opportunity a few days ago. Cheers
|
|
|
One thing you could try: execute the build in Docker; IIRC there is already a Dockerfile ready.
Then, once it is built and everything works, you can check the libs inside the Docker container and compare them to yours.
|
|
|
Rent GPUs to AI researchers?

Sorry, but I call this a BS phishing attempt, mainly because why would any serious AI researcher first turn to a miner forum of regular people, instead of going to some high-class universities or other institutes anywhere in the world and collaborating with their departments? I'm sure lots of universities and institutes would be open to these kinds of things and are well funded for these kinds of projects if there's a good initiative...
If it's fishy, it's usually because it is...
Just my 2 cents.

Probably because they do it as a hobby, and nobody offers free access to resources without any personal gain (that includes universities). Miners are a cheap alternative. That being said, all of the above applies as well.
|
|
|
Whoa! That is great! Are you using a VPS or your own hardware? What hardware/computer specifications do you recommend aside from the mini PC that was recommended the other day, if I may ask, good sir?
I'm using my own hardware. My setup is pretty uncommon, as I have to deal with many things regular users don't: my internet is capped, but there is a VPN I can use to gain unlimited traffic; then again, this VPN doesn't offer a public IPv4, so I need another VPN inside that VPN to get a public IPv4 with open ports I can assign. I'm using an entry-level Xeon E3 from 2012 for my home server, which runs many things, including BitBean, quite nicely. Cheers
|
|
|
I'm not sure if it's just me or if someone else has the same problem too... Sprouting 800,000 Beans; my rig is online 24/7, and my connection to the network is over 30 all the time, sometimes 65, sometimes 35. The last time I received sprouts was 2/27/2018; now I'm close to 50% of the result the online Bean Cash calculator estimated. I never believed any online coin calculator in the past; those calculators are just a rough estimate... but none of those calculators are 50% off...

I have sprouted more than 16 times since the 27th; however, I'm sprouting with 1.8M. Seems like you just had bad luck.

Whoa! That is just awesome! 1.8M? How long has it been sprouting, if I may ask, good sir?

Sprouting since the end of January.
|
|
|
I'm not sure if it's just me or if someone else has the same problem too... Sprouting 800,000 Beans; my rig is online 24/7, and my connection to the network is over 30 all the time, sometimes 65, sometimes 35. The last time I received sprouts was 2/27/2018; now I'm close to 50% of the result the online Bean Cash calculator estimated. I never believed any online coin calculator in the past; those calculators are just a rough estimate... but none of those calculators are 50% off...

I have sprouted more than 16 times since the 27th; however, I'm sprouting with 1.8M. Seems like you just had bad luck.
|
|
|
Any downside to this?
I use H/s as often as possible and just format it into the appropriate unit on display. A change to not always use H/s and to remove KH/s (after some time) will require some conversion to bring the value down to H/s first, push it through the app, and finally convert it back to whatever the appropriate format is. It won't be a large overhead in programming, but it will be unnecessary. I do not know of any app using the API-returned hashrate just as is; it wouldn't make sense.

This is how I currently extract data from the API:

const result = {
  accepted: parseFloat(obj.ACC),
  acceptedPerMinute: parseFloat(obj.ACCMN),
  algorithm: obj.ALGO,
  difficulty: parseFloat(obj.DIFF),
  hashrate: parseFloat(obj.KHS) * 1000,
  miner: `${obj.NAME} ${obj.VER}`,
  rejected: parseFloat(obj.REJ),
  uptime: obj.UPTIME,
  cpus: parseFloat(obj.CPUS),
  temperature: parseFloat(obj.TEMP),
};

This would need to change to this:

const units = [
  {key: 'PH/s', factor: 5},
  {key: 'TH/s', factor: 4},
  {key: 'GH/s', factor: 3},
  {key: 'MH/s', factor: 2},
  {key: 'KH/s', factor: 1},
  {key: 'H/s', factor: 0},
];

const unit = units.find(currUnit => obj[currUnit.key]);

let hashrate = 0;
if (unit) {
  hashrate = parseFloat(obj[unit.key]) * Math.pow(1000, unit.factor);
}

const result = {
  accepted: parseFloat(obj.ACC),
  acceptedPerMinute: parseFloat(obj.ACCMN),
  algorithm: obj.ALGO,
  difficulty: parseFloat(obj.DIFF),
  hashrate,
  miner: `${obj.NAME} ${obj.VER}`,
  rejected: parseFloat(obj.REJ),
  uptime: obj.UPTIME,
  cpus: parseFloat(obj.CPUS),
  temperature: parseFloat(obj.TEMP),
};

With H/s always present, it would be just like the first example, except I would not need to do the "* 1000" conversion. I hope this explains it.
|
|
|
My personal choice would be to keep the API backwards compatible and just add stuff to it, as cpuminer-opt/cpuminer-multi is widely used.
If you feel stuff is not needed anymore, just mark it deprecated and remove it at a later stage/version. However, I don't see the need to remove KH/s; it's not like it's a million extra bytes, just a few chars.
|
|
|
It doesn't make sense to me to put both, but if that's what people want, I'll do it. I'd like some opinions from other users.
Keep KH/s for backwards compatibility, that is.
|
|
|
The API changes were a request, but it seems the old way was preferred. I'll revert the change if the majority wants it.
Edit: there are more changes to the API coming, adding a solved block count.
Actually, the requester on GitHub was talking about what I mentioned above: use H/s in addition to KH/s.

Could you modify api.c to also include hashes/sec?

"ALGO=%s;CPUS=%d;URL=%s;KHS=%.2f;HS=%.2f;ACC=%d;REJ=%d;"
algo, opt_n_threads, short_url, (double)global_hashrate / 1000.0, (double)global_hashrate,
|
|
|
As this is an API, why not just output H/s (a large number for fast algos)? That gives the most fine-grained control over display, formatting, etc.
For display it makes sense to scale it, but not for the API.
|
|
|
Scaled hashrate for API output.
Thanks for this (y) Oh, nvm, I thought you changed it to H/s instead of KH/s, but you made it dynamic.
|
|
|
Scaled hashrate for API output.
Thanks for this (y)
|
|
|
Hey, I have a small question regarding cross-compiling for Windows:
I encountered a problem where, following the instructions provided in the docs folder, I generate an exe which sends "zu\r\n" as the content length in its RPC response. I read online this might be related to the compiler used by MinGW supporting basically only C89, which doesn't support "z" in printf statements.
How do you guys compile the Windows bins?
Cheers
Found the problem: from my tests, only Ubuntu 14.04 (GCC 4.8.2) works; Ubuntu 16.04 (GCC 5.3.1) and newer versions (GCC 6/7) didn't.
|
|
|
Hey, I have a small question regarding cross-compiling for Windows:
I encountered a problem where, following the instructions provided in the docs folder, I generate an exe which sends "zu\r\n" as the content length in its RPC response. I read online this might be related to the compiler used by MinGW supporting basically only C89, which doesn't support "z" in printf statements.
How do you guys compile the Windows bins?
Cheers
|
|
|
If needed, I will compile native Windows binaries.
That would be awesome.
|
|
|
As I could not find any sign of a printf in your changes which includes a "%zu", and the content length is probably handled by some library, I can only assume the way you build Windows bins is different than for Zcoin/Zoin.
Unfortunately, they both do not have any Travis config files for binary building to take a look at.

I could not find any of those %zu you posted... but I'm not sure why it was there anymore: https://github.com/hexxcointakeover/4.0.1.X/commit/0cccd1857ff60d7114e74a845cd059f690962f70

Did you try whether the error persisted with those changes? I don't believe they change anything, as they do not seem to be related to RPC requests.
|
|
|
|