It sounds more like a kernel level bug to me. Note that the author said it was throwing invalid hashes, not a very slow rate of valid hashes.
Good point, we shall see.
|
|
|
guess many insiders do it, at least the ones who are capable of writing OpenCL code, and they won't share their code with just anyone as long as they are making a profit. As soon as they don't make any profit with mining anymore they will sell the code; then, when not making profit with selling anymore, you will see it on github for free. If I am wrong, prove it and publish working code.

The data does not suggest this. According to mtrlt, at the current N = 256, a top-end video card could get around 2.004 MH/s (an HD6990 with his kernel). That means a mid-size GPU farm owner with 50 7950s/7970s or such may see around 75 MH/s or so, or more (a very rough estimate, based on the performance difference between a 6990 and a 7950/7970). According to yacoind:

# ./yacoind getmininginfo
{
    "blocks" : 68606,
    "currentblocksize" : 2275,
    "currentblocktx" : 5,
    "difficulty" : 3.01596016,
    "errors" : "",
    "generate" : false,
    "genproclimit" : -1,
    "hashespersec" : 0,
    "networkhashps" : 72345435,
    "pooledtx" : 5,
    "testnet" : false,
    "Nfactor" : 7,
    "N" : 256,
    "powreward" : 20.80000000
}
Current network hash rate is estimated at 72.3 MH/s (this is with the newest yacoin source). With just ONE mid-range GPU farm owner being able to out-hash the entire current YaCoin network at this point, it does not look like there are ANY "GPU insiders" mining. Or if there are, they are not hitting it with a decent-sized GPU farm.

However, this also raises the possibility of a 51% attack if one or two ill-meaning farm owners get hold of a GPU miner. Let's hope mtrlt either holds on to his GPU kernel, as he appears to have done to this point, or, if it is released, that it's released to all. Anything else risks the stability of the YaCoin network.

The other risk here is a possible inadvertent "difficulty attack" if this N=8192 issue is real and a GPU miner is released to all. At that point, difficulty would increase sharply, and once N=8192 is hit, drop off a cliff. I wonder whether there would be enough hash power after that to get it back to a sane level without necessitating a hard fork like Feathercoin has done (maybe WindMaster has more input on this), and whether POS minting would be enough during that time to validate new transactions on the network. Maybe others much more knowledgeable than I on these matters could comment.

I am hopeful that the 8192 issue is a more fundamental problem with GPU mining on this coin. Most serious GPU miners have 7950s or 7970s instead of 6990s, so this issue may be run into even earlier on one of those cards, as the memory amount and bandwidth on the 7950, at least, are a good amount less than on the 6990. This community is ingenious, however, as the litecoin experience has shown. But if the guy who wrote the scrypt kernel for reaper is having issues at 8192 after trying multiple lookup-gap and TMTO hacks, then there is some real life in this coin, I think.
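The arithmetic behind that farm estimate, as a Python 3 sketch. The 2.004 MH/s HD6990 figure and the 72,345,435 H/s network rate are from the posts above; the ~75% per-card ratio of a 7950/7970 to a dual-GPU 6990 is purely my assumption for illustration:

```python
# Back-of-envelope check of the "one mid-size farm beats the network" claim.
hd6990_mhs = 2.004                  # mtrlt's reported rate at N=256
per_card_mhs = hd6990_mhs * 0.75    # ASSUMED: a 7950/7970 does ~75% of a 6990
farm_mhs = 50 * per_card_mhs        # 50-card farm
network_mhs = 72345435 / 1e6        # networkhashps from getmininginfo

print(farm_mhs > network_mhs)  # → True (about 75.2 vs 72.3 MH/s)
```

So under these rough assumptions a single 50-card farm would indeed exceed the whole network's hash rate.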
|
|
|
I did some GPU testing with high N values. The gist is: at N=8192, I couldn't get it to output valid shares any more. Therefore, with current information, it seems that GPU mining will stop on 13 Aug 2013 - 07:43:28 GMT when N goes to 8192.
Very interesting news. Might there be some additional optimizations you could do to get it to output valid shares (perhaps messing with the lookup gap or some other TMTO hacks)? I'm wondering two things: a) why do you think it stopped working at 8192 (a physical limitation of current-gen GPUs, or more a limitation of the code that can be more easily overcome)? b) does CPU mining still work at 8192 and beyond? (If not, then we have a problem on our hands that would, at minimum, necessitate a hard fork, I'd think.) Thanks for your insightful work on the GPU miner around YAC.
|
|
|
Mining started working a lot better for me once I punched the port through on my firewall.
(in your .conf file, specify: port=8999 )
for example, and then punch that port through. Makes everything faster/easier.
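A minimal .conf sketch along those lines (the 8999 port is just the example from above; the file path and the other values are placeholders, not canonical):

```
# ~/.yacoin/yacoin.conf (path assumed)
port=8999      # P2P listen port; forward/allow this on your firewall
server=1
daemon=1
```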
|
|
|
root@minermom:/opt/coins/digitalcoinSource/src# make -f makefile.unix
/bin/sh ../share/genbuild.sh obj/build.h
../share/genbuild.sh: 33: ../share/genbuild.sh: cannot create obj/build.h: Directory nonexistent
../share/genbuild.sh: 34: ../share/genbuild.sh: cannot create obj/build.h: Directory nonexistent
make: *** [obj/build.h] Error 2
That's right from a github pull just now. You'll need to do mkdir src/obj if building from there. Source from a zipfile builds fine, but not from github.
|
|
|
We need a MuppetCoin, and have the logo be a dollar bill.
|
|
|
Added additional pools, added block explorer.
|
|
|
Who's fontas? Is it a new product from Fanta?
Legendary pump-and-dumper extraordinaire. I do like his twitter pic though. https://twitter.com/fontase "Magic internet money. No bitches. No noobs."
|
|
|
Is anyone getting paid from next.afraid.org:8118/static? Starting to get worried, as I don't see any payment yet.
Is this pool stealing our hash power? I don't see my address listed, nor did I get paid. I think the pool operator had to take it down for a bit while he tweaked something.
|
|
|
POOL IS HERE: http://nibble.scryptmining.com Check us out, finally starting to see some blocks roll in. A bit rough around the edges; it was thrown together in the first 5-10 minutes of the coin launch. PM me if you notice serious problems! Thanks for the pool. The hashrate seems to be off, though: I have ~27,500 KH/s pointed at it and it's showing up as 56,765 KH/s currently. Wish I had that much!
|
|
|
YAC not listed.
|
|
|
Just my 2 cents here: Congratulations to mtrlt, you deserve all your litecoins. For the rest: I understood that you wanted to continue to develop yacoin for its "uniqueness" of being CPU-only (AFAIK FPGA from the start). What I see here is quite different; maybe you should change the thread title, as it's not about the client, it's about the miner. If you are looking to multiply your daily crypto income there are easier, available-to-everybody ways to do this. Don't take this as an attack.

My main interest here is determining the feasibility of yacoin moving forward, as I have a decent amount of them. It's great that a development community is forming around it, as that's a huge plus, and IMO required for any "lasting" altcoin. However, I believe this coin still needs something to set it apart from the rest if it is to be substantially successful. It currently has that. But how much of that is due to the intrinsic technical properties of the coin, and how much is due to the fact that no GPU miner is publicly available yet? THAT is the key question for me.

Therefore, I'm very interested in the effectiveness of GPU mining in the months and years moving forward, especially as it relates to CPU. If we can at least keep the performance advantage of GPUs over CPUs under 5x or so once N >= 2048 (in a few months), that may prove to be good enough. This is why any technical data on hash rates at various N sizes would be very interesting to me, especially as compared to a CPU. Right now, GPUs are clearly more effective than CPUs, but the number of users mining with GPUs appears to be very low (1 or 2?). If a reaper/cgminer YAC kernel is made available, though, that will change very quickly.
|
|
|
You would indeed be one of the people I'd expect to have modified your own OpenCL code for scrypt+chacha fairly early on. Anyway, willing to post some hash rate info for your kernel at the current N=128 for a given GPU type and lookup gap (if any)? You can probably post that info safely without giving anyone a head-start on making the modifications themselves.
The knowledge might give others incentive to do it, but oh well. Currently (N=128) it does 3.4 MH/s on a core-underclocked (830->738) HD6990, with lookup_gap at 1, thus no gap. As a curiosity, at N=32 it does 7.3 MH/s under the same setup.

This confirms the criticism that N started out too low. My calculations show N (and KB per hash) increases at/around the following dates:

5/21: N=256, 32KB
5/30: N=512, 64KB
6/2: N=1024, 128KB
6/26: N=2048, 256KB
7/8: N=4096, 512KB
8/14: N=8192, 1024KB

How do you feel your YAC GPU kernel performance will hold up across those adjustments (in absolute terms, and relative to a high-end CPU)? I did check out Tacotime's MC2 paper; I like the approach he takes of varying the hash algorithm to achieve maximum ASIC/FPGA resistance. Unfortunately, building GPU resistance for any good length of time looks like a much harder (impossible?) task.
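Out of curiosity, those two HD6990 data points can be fit to a simple power law to guess how the kernel might scale with N. This is a Python 3 sketch; the power-law model and the extrapolation are my own assumptions, and the reported failure at N=8192 suggests the curve breaks down well before that point:

```python
import math

# Fit hashrate ≈ C * N**(-alpha) to the two reported points
# (HD6990, lookup_gap=1): 7.3 MH/s at N=32, 3.4 MH/s at N=128.
alpha = math.log(7.3 / 3.4) / math.log(128 / 32)

# Naive extrapolation to the 8/14 step (N=8192), ignoring the memory
# wall that apparently kills the kernel outright at that size.
projected_8192 = 7.3 * (32 / 8192) ** alpha

print(round(alpha, 2), round(projected_8192, 2))  # → 0.55 0.34
```

The sub-linear exponent (alpha well under 1) would mean the GPU loses ground to each N doubling more slowly than the 2x memory growth alone might suggest, as long as the scratchpad still fits.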
|
|
|
Could you take a screenshot, upload it to Imgur or somewhere else, and then post it here? Over the next 1 year: Over the next 20 years:
|
|
|
Here's an improved version to generate values of N over a specified range, whipped up in Python:

import sys
import time
import datetime
import csv

minNfactor = 4
maxNfactor = 30
nChainStartTime = 1367991200

def GetNfactor(nTimestamp):
    l = 0
    if nTimestamp <= nChainStartTime:
        return 4

    s = nTimestamp - nChainStartTime
    while ((s >> 1) > 3):
        l += 1
        s >>= 1

    s &= 3
    n = (l * 170 + s * 25 - 2320) / 100

    if n < 0:
        n = 0

    assert n <= 255
    return min(max(n, minNfactor), maxNfactor)

def GetN(nTimestamp):
    Nfactor = GetNfactor(nTimestamp)
    return (1 << (Nfactor + 1))

while True:
    numDaysOut = raw_input("Generate N for how many days out? (e.g. 365): ")
    try:
        numDaysOut = int(numDaysOut)
        assert numDaysOut >= 1
    except:
        print "Invalid value. Must be numeric."
        continue
    else:
        break

while True:
    quantization = raw_input("Output a value of N for each X second period (e.g. 86400 = 1 day period): ")
    try:
        quantization = int(quantization)
        assert quantization >= 1
    except:
        print "Invalid value. Must be numeric."
        continue
    else:
        break

#startTS = time.mktime(datetime.datetime.utcnow().timetuple())
startTS = nChainStartTime  # chain start, May 8th 2013
endTS = int(startTS + (numDaysOut * 86400))

# rows are: datetime string, TS, N, KB (N/8)
with open('output.csv', 'wb') as csvfile:
    outWriter = csv.writer(csvfile)
    for ts in xrange(startTS, endTS, quantization):
        nVal = GetN(ts)
        outWriter.writerow([datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S'), ts, nVal, nVal/8])
I ran the program out 20 years and have saved the graphed output to http://goo.gl/2XsaU

To run this program, make sure you have Python downloaded and installed on your computer (python.org). Save the code above as yacCalcN.py, then open up a command prompt and invoke it. You will be prompted for some data. For example:

# python yacCalcN.py
Generate N for how many days out? (e.g. 365): 365
Output a value of N for each X second period (e.g. 86400 = 1 day period): 86400
.csv output will be saved to output.csv, which can be opened in MS Excel for charting/manipulation. The data this produces is a bit different than what's produced by the initial VBA version; I trust this code over that (I don't know VBA very well at all). Please check the code over and let me know what you think. It may be good to compare it against a plain old C++ version, just in case the Python datatypes are doing something different than C++ would.
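As a quick sanity check on the script above, here's a standalone Python 3 port of GetNfactor/GetN (constants copied from main.cpp; the lowercase function names are mine). It reproduces the Nfactor=7 / N=256 that yacoind was reporting in mid-May 2013:

```python
nChainStartTime = 1367991200  # chain start, ~May 8th 2013 (from the YaCoin source)

def get_nfactor(ts):
    # Python 3 port of GetNfactor() from main.cpp
    if ts <= nChainStartTime:
        return 4
    l, s = 0, ts - nChainStartTime
    while (s >> 1) > 3:
        l += 1
        s >>= 1
    s &= 3
    n = max((l * 170 + s * 25 - 2320) // 100, 0)
    return min(max(n, 4), 30)   # clamp to [minNfactor, maxNfactor]

def get_n(ts):
    return 1 << (get_nfactor(ts) + 1)

# yacoind's getmininginfo showed Nfactor=7, N=256 around this time
# (1369094400 = 2013-05-21 00:00 UTC)
print(get_nfactor(1369094400), get_n(1369094400))  # → 7 256
```

Agreement with the live daemon on at least this one date is some evidence the port's shift-and-scale arithmetic matches the C++.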
|
|
|
when's the release of the windows client ?
After enough meaningful improvements have been made to make it worthwhile for people to upgrade from the official client. At this point the changes made so far are mostly cosmetic, renaming lingering NovaCoin / NVC references that pocopoco should've cleaned up originally. Once we get a release I'll update the YaCoin information thread with the updated client info, if it looks like the reception in this thread is good. I'll keep the link to the old one around as well for reference.
|
|
|
I too am invested in YAC and its future growth and adoption. Thank you WindMaster for taking this initiative!

For those wondering about N, like me, I did some code digging. As the announcement post said, N starts at 32. There is a function in main.cpp (GetNfactor) that takes the current timestamp and calculates the Nfactor, which is eventually fed into the scrypt() function of the scrypt-jane library (which is a LIBRARY/IMPLEMENTATION, not a specific scrypt algorithm) and converted to N as such:

N = (1 << (Nfactor + 1));
So basically, left-bitshift 1 by (Nfactor + 1). Near GetNfactor back in main.cpp, the following is also defined:

const unsigned char minNfactor = 4;
const unsigned char maxNfactor = 30;
This means that with the starting Nfactor being 4, min N = 1 << (4 + 1) = 32, and with the max Nfactor being 30, max N = 1 << (30 + 1) = 2147483648. From N we can get the memory required for hashing by looking at the following code out of scrypt-jane.c:

r = (1 << rfactor);
chunk_bytes = SCRYPT_BLOCK_BYTES * r * 2;
V = scrypt_alloc((uint64_t)N * chunk_bytes);
Given that SCRYPT_BLOCK_BYTES is defined as 64, and rfactor = 0 (as passed in), chunk_bytes = 64 * 1 * 2 = 128. So to get the memory in bytes required for a given N, just do N * 128. To get it in KB, do N * 128 / 1024, or simply N / 8 as a shortcut. EDIT: See the next page for an improved python script I've created to calculate N and the memory required for hashing.

Looking at the graphed data, it appears that N very roughly approximates Moore's law (with some step lengths being shorter than 18/24 months, and others being larger). EDIT: As I graphed it out 20 years... this approximation doesn't really hold that well... unless my python adaptation is incorrect.

Assuming my data is correct, what is everyone's opinion of N's growth? Does it seem realistic to a) keep GPUs out and b) keep CPU mining feasible?
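A quick worked example of that formula (a Python 3 sketch; the function name is mine). The figures line up with the N/KB schedule posted earlier in the thread:

```python
# Scratchpad memory per hash: chunk_bytes = SCRYPT_BLOCK_BYTES * r * 2
# = 64 * 1 * 2 = 128 bytes, so memory = N * 128 bytes, i.e. N/8 KB.
def scratchpad_kb(n):
    return n * 128 // 1024  # equivalent to n // 8

for n in (32, 256, 8192):
    print(n, scratchpad_kb(n))
# → 32 4
#   256 32
#   8192 1024
```

So the launch value N=32 needs only 4 KB per hash (trivially cache-resident), while the 8/14 step to N=8192 needs a full 1 MB per concurrent hash, which is where the GPU kernels reportedly start failing.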
|
|
|
TimeCoin, by Ronco!! ORDER NOW!!!
|
|
|
No, no, no, don't do it! The guy behind the pool is a proven scammer! Yep, I remember his FTC pool that never paid out. Thanks, I removed this pool. Also added the first p2pool.
|
|
|
Just pulled in the newest source from github and compiled. My config looks something like:

rpcuser=rpcuser
rpcpassword=PASSWORD
rpcallowip=127.0.0.1
rpcport=10449
daemon=1
server=1
gen=0
addnode=199.204.38.220
addnode=96.126.118.229
Seems to start right up fine. Been running for 15 minutes or so with no segfault so far, I have some nodes connected to it (pushpool) and am getting a few blocks.
|
|
|
|