rallasnackbar
|
|
March 20, 2014, 02:15:48 PM |
|
Hashrate: 2.63 GH/s. I think something is wrong. Probably switching from ghash back to waffle. Their pool speed went from 44.41 GH/s to 19.98 GH/s
|
|
|
|
|
|
|
|
|
|
|
|
|
Pfool
|
|
March 20, 2014, 02:59:39 PM |
|
Hi, a new version of http://stratehm.net has been deployed, with many bug fixes and a notification system for Wafflepool news. Enjoy!
|
Thanx BTC: 19wv8FQKv3NkwTdzBCQn1AGsb9ghqBPWXi
|
|
|
comeonalready
|
|
March 20, 2014, 03:04:41 PM |
|
Quote from: comeonalready
And it is payout per MHD!

Quote
I see you insist... OK, let me show you my reasoning for why IT IS NOT MHD, and then you can just slam down your irrefutable proof and I'll stand rebuked. Let's say that your hashrate is 1 MH/s, meaning that your computational power can "solve" 1 million hashes in a second. Analogous to active power being measured in kW, energy consumption is measured as the consumption of power over a unit of time, hence the kWh unit of measure. When you want to see how much energy you consumed in a DAY, you just count the kWh, which means you consumed 24 kWh/day (for 1 kW). When you use BTC/MHD you are implying that your hash rate is 1 "MH per day", which is false. Your hash rate is an average of 1 MH/s over a day's time, and you contributed 3600 * 1 MH in a day's time. WP states [0.01 BTC per (average MH/s) in a day], not the amount of hashes in a day. And lastly, the unit of measure is MH/s. You can twist that figure to show a day or a year, but it's still MH/s as reported by your miners.

1 - No, 1 MHD is one Mega Hash Day, not 1 Mega Hash per Day. There is no divisor in an MHD unit; you put that in there. Why didn't you add one to your kWh example too?
2 - You've made an error in your explanation. There are 86,400 seconds in a day, not 3,600; that is how many seconds there are in an hour.
3 - 1 MHD = 86,400 MH, the number of hashes computed over the course of one day at an average instantaneous rate of 1 MH/s.
4 - The reason I chose MHD over MHH (hour) or MHW (week) is that a day is the length of time everyone naturally wants to know and compare, day over day and pool to pool.
5 - In your last sentence, you conveniently removed the divisor from MH/s, which is a rate at which hashes are computed.

I have a perfect analogy for all this, but I refuse to dumb it down and spoon-feed it to the masses.

I happen to be one of those people who cannot help but roll his eyes every time a news reporter quantifies a distance or area in the number of football fields that could fit into that space -- as in, the average distance from the earth to the moon is 4,200,000 football fields laid out end to end.
|
|
|
|
bountygiver
Member
Offline
Activity: 100
Merit: 10
|
|
March 20, 2014, 03:15:31 PM |
|
I think MHD is a wrong unit: MH = million hashes, and you can't multiply a unit with no relation to time by a day. kWh is used to measure energy because kW = kJ/s.
I think it is more accurate to call it MHD/s, unless we know what else to call MH/s.
|
12dXW87Hhz3gUsXDDCB8rjJPsWdQzjwnm6
|
|
|
comeonalready
|
|
March 20, 2014, 03:18:11 PM Last edit: March 20, 2014, 03:38:38 PM by comeonalready |
|
I think MHD is a wrong unit: MH = million hashes, and you can't multiply a unit with no relation to time by a day. kWh is used to measure energy because kW = kJ/s.
I think it is more accurate to call it MHD/s, unless we know what else to call MH/s.
MHD/s is asinine. If you mine all day at 1 MH/s, how many MH have you computed at the end of the day? My entire point is that the unit name must match the value being stated before it. So yesterday we earned 0.00742039 BTC per what? The correct answer is 86,400 MH -- or for simplicity's sake, 1 MHD. "0.00742039 BTC / day @ 1 MH/s" is also accurate, but it's both a mouthful and incredibly awkward, since it contains two different time periods.
|
|
|
|
bountygiver
Member
Offline
Activity: 100
Merit: 10
|
|
March 20, 2014, 03:43:51 PM |
|
You can't simply assume the base unit of time is seconds; that's why you don't multiply scalars by a unit of time directly. A day is not the bare number 86,400; rather, there are 86,400 seconds per day. But since we are doing (BTC / day) / (MH/s) = BTC·s/MHD = BTC/(MHD/s)
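For what it's worth, both readings of the unit reduce to the same arithmetic. A minimal sketch, reusing the 0.00742039 BTC/day figure quoted earlier in the thread; the variable names are mine, not from any pool's code:

```python
SECONDS_PER_DAY = 86_400

avg_hashrate_mhs = 1.0            # average rate over the day, in MH/s
daily_earnings_btc = 0.00742039   # that day's earnings (figure from the thread)

# Total work actually done: 1 MH/s sustained for a day = 86,400 MH.
# In this thread, that quantity is what "1 MHD" denotes.
total_mh = avg_hashrate_mhs * SECONDS_PER_DAY

# BTC per MHD: earnings divided by MHD contributed (avg MH/s times 1 day).
mhd_contributed = total_mh / SECONDS_PER_DAY
btc_per_mhd = daily_earnings_btc / mhd_contributed
```

Whether you write the denominator as "86,400 MH", "1 MHD", or "1 MH/s for a day", `btc_per_mhd` comes out the same.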
|
12dXW87Hhz3gUsXDDCB8rjJPsWdQzjwnm6
|
|
|
luthan
Member
Offline
Activity: 94
Merit: 10
|
|
March 20, 2014, 03:48:52 PM |
|
i was expecting to see some more rejects since the stratum changes, but i am not. should i be worried?
0.3 % right now
|
|
|
|
dexu
|
|
March 20, 2014, 03:50:17 PM |
|
i was expecting to see some more rejects since the stratum changes, but i am not. should i be worried?
0.3 % right now
Same here 0.3%
|
|
|
|
elpsycongro
|
|
March 20, 2014, 03:59:21 PM |
|
i was expecting to see some more rejects since the stratum changes, but i am not. should i be worried?
0.3 % right now
Same here 0.3% My long-term average is 2% (up from 0.2% on the old stratum); still not bad, since I changed nothing from my old config.
|
|
|
|
luthan
Member
Offline
Activity: 94
Merit: 10
|
|
March 20, 2014, 04:06:28 PM |
|
i was expecting to see some more rejects since the stratum changes, but i am not. should i be worried?
0.3 % right now
Same here 0.3% My long term average is 2% (up from 0.2% old stratum) still not bad since i changed nothing from old config. that's the thing, i didn't do anything either. so why would i see no changes whatsoever?
|
|
|
|
elpsycongro
|
|
March 20, 2014, 04:12:05 PM |
|
Not near my rig right now; last I checked was 5 hrs ago. Maybe it's lower now...
|
|
|
|
Thirtybird
|
|
March 20, 2014, 04:40:37 PM |
|
i can see what you are saying. so, as LTC was the answer to BTC becoming too high in difficulty due to ASICs, we gotta wait and see what the answer to LTC is against the coming ASICs. if there is such a coin that stands a chance, and pools can adapt to it, that would be the new standard...

An easy way of making ASICs unprofitable is to design algorithms that require large memory buffers and whose performance is bound by memory bandwidth rather than arithmetic. ASICs provide the greatest benefit for algorithms that are arithmetic-bound, and the least benefit for algorithms that are bound by memory bandwidth. By combining a large memory buffer with random access patterns, we would get a level playing field that evolves very slowly. GPUs today have 200-300 GB/s of memory bandwidth, which has only increased by a small margin generation to generation. GPUs are expected to get a nice jump in bandwidth when memory technologies like die-stacked memory show up in a few years, but after that, bandwidth growth will be very slow again. A large part of the complexity and cost in a GPU is the memory system, and it is only feasible to build because millions of GPUs are sold per week. By developing an algorithm that requires a hardware capability that is only cost-feasible in commodity devices manufactured in quantities of several million or more, you would push ASICs out completely, and keep them out for a very long time, perhaps indefinitely. It's one thing to fab an ASIC chip; it's another thing to couple it to a high-capacity, high-bandwidth memory system. If you design an algorithm that uses the "memory wall" as a fundamental feature, it will make ASICs no better than any other hardware approach.

Great post, and so true... If they want a level plane of mining, that should be the way... Best Regards, LPC

Ya, so there are already coins that do this. YACoin was the first, and currently takes 4 MB per thread to complete a calculation. That will be 8 MB on May 31st. All the other scrypt-chacha coins will get there eventually, but YAC is the trailblazer.

Sorry, but 4 MB isn't a lot of memory. 1 GB or more would start to be the size of memory I'm talking about. Anything that's just a few megabytes is small enough that someone who wanted it badly enough could just put SRAM on-die. CPUs and GPUs already have aggregate on-chip cache sizes 10 times that size, so 4 MB is nowhere near large enough. The data size has to be large enough that the on-chip caches are useless, and remain useless over at least a 10-year period. I would put that at something over 1 GB.

We'll have to disagree on what constitutes "a lot", but even in YACoin, the effects of 4 MB hashes are taking their toll. You can't parallelize as many threads on today's GPUs as you can at lower N-factors. A Radeon R9 290 with 2560 shaders would need 40 GB (no, not 4, 40!) to fully utilize the card. Luckily, OpenCL is flexible, and we can adapt the code and recompile the OpenCL kernel to use lookup-gap, which gives a larger effective memory size and thus lets us run more threads. If we were unable to change lookup-gap, performance would degrade MUCH faster than 50% for every N-factor change. An ASIC is, by definition, a piece of software hard-coded in silicon. If an ASIC could use lookup-gap, the setting would need to be fixed in the design, striking a balance between computation speed and the amount of memory included. But then it would only work for a given N-factor, so you'd have to switch to a different coin eventually. How much DRAM can you fit on an ASIC die? I would guess not enough to do more than a couple of hashes at a time, and unless the chip is significantly faster than today's GPU cores, I think we're still a long way off from ASICs for high-memory (even 4 MB, NF=14) coins.
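The memory-hard design being debated above (a large buffer plus data-dependent random reads) can be sketched as a toy sequential-memory-hard function. This is a hypothetical illustration of the access pattern only; `romix_sketch` is my own invention, not the real scrypt or scrypt-chacha construction:

```python
import hashlib

def romix_sketch(seed: bytes, n: int) -> bytes:
    """Toy scrypt-style sequential-memory-hard function.

    Phase 1 fills a buffer of n pseudorandom 32-byte blocks; phase 2
    performs n data-dependent random reads into that buffer. As n grows,
    memory capacity and bandwidth, not arithmetic, dominate the cost --
    which is exactly the property that disadvantages ASICs.
    """
    x = hashlib.sha256(seed).digest()
    v = []
    for _ in range(n):                          # phase 1: sequential writes
        v.append(x)
        x = hashlib.sha256(x).digest()
    for _ in range(n):                          # phase 2: data-dependent reads
        j = int.from_bytes(x[:4], "little") % n  # index depends on current state
        x = hashlib.sha256(bytes(a ^ b for a, b in zip(x, v[j]))).digest()
    return x
```

Real scrypt uses 128·r-byte blocks and Salsa/ChaCha mixing rather than SHA-256, but the two-phase fill-then-randomly-read structure is the same, and it is the buffer in phase 1 that drives the per-thread memory figures discussed here.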
|
|
|
|
atomicchaos
|
|
March 20, 2014, 05:26:20 PM |
|
If I'm not mistaken, you can bump up virtual memory to overcome any shortfall in memory.
|
BTC:113mFe2e3oRkZQ5GeqKhoHbGtVw16unnw2
|
|
|
gaalx
|
|
March 20, 2014, 05:36:27 PM |
|
poolwaffle, will we be mining Vertcoin (VTC) scrypt-N?
|
|
|
|
Eugenok
Newbie
Offline
Activity: 28
Merit: 0
|
|
March 20, 2014, 06:18:27 PM |
|
Now it's back to 0.00592860 BTC/Mh
|
|
|
|
cleanbaldy
Newbie
Offline
Activity: 42
Merit: 0
|
|
March 20, 2014, 06:19:51 PM Last edit: March 20, 2014, 06:37:39 PM by cleanbaldy |
|
I was reading about the new Stratum server and I think it's causing issues with my cudaminer (750 Ti) rig.
I'm running steady at 1.8 Mh/s, but since yesterday it is reporting less. (From 1.8 -> ~ 1.45)
On the below graph, you can see it drop down and stay well below where it usually is. The first dip, I'm assuming, is when the server switched over? After that, I did two reboots when I noticed the drop in khash, which you can see on the graph as well. (Windows Updates slowed down my shutdown/restart, allowing it to show up on the graph)
Can someone please tell me why I'm seeing the drop in khash all of a sudden, even when my rig is still running at 1.8 MH/s? Is there something I need to re-configure in cudaminer.bat to work better on intensity?
The biggest issue is that my BTC / Day / 1 MH DROPPED as well! So, not only is it reporting lower, it IS lower.
I just posted this up on /r/wafflepool and immediately someone else said "I have the same issue!"
Help?
https://i.imgur.com/jX3dRRK.png
|
|
|
|
tachyon_john
Newbie
Offline
Activity: 28
Merit: 0
|
|
March 20, 2014, 07:04:38 PM |
|
I was reading about the new Stratum server and I think it's causing issues with my cudaminer (750 Ti) rig.
I'm running steady at 1.8 Mh/s, but since yesterday it is reporting less. (From 1.8 -> ~ 1.45)
On the below graph, you can see it drop down and stay well below where it usually is. The first dip, i'm assuming, is when the server switched over? After that, I did two reboots when I noticed the drop in khash, which you can see on the graph as well. (Windows Updates slowed down my shutdown/restart, allowing it to show up on the graph)
Can someone please tell me why I'm seeing the drop in khash all of a sudden, even when my rig is running at 1.8 MH/s still? Something I need to re-configure in the cudaminer.bat to work better on intensity?
The biggest issue is that my BTC / Day / 1mH DROPPED as well! So, not only is it reporting lower, it IS lower.
I just posted this up on /r/wafflepool and immediately, someone else said "I have the same issue!"
Help?
Since the new stratum code went live on wafflepool, I have been seeing strange behavior on my cudaminer boxes: the GPUs idle down to zero activity, then ramp back up to full utilization when they get a "stratum detected new block" message, often around 10-15 seconds later. This idling/ramping behavior never happened prior to the new stratum code going online the other day.

cudaminer prints no timeouts or other debugging info to indicate that there's a problem, but it's very obvious that the miner code is waiting on a message from the server, which is arriving very "late", let's say. Once the detected-new-block message shows up, the GPUs ramp back up and typically continue running flat out for a minute or two, sometimes several minutes, before they idle out again. If I watch the GPUs using nvidia-smi, I can see that they drop to 0% utilization when this is occurring, so cudaminer is definitely waiting on something from the server. All three of my mining rigs started behaving this way when the new stratum server went online...
|
|
|
|
tachyon_john
Newbie
Offline
Activity: 28
Merit: 0
|
|
March 20, 2014, 07:19:18 PM |
|
We'll have to disagree on what constitutes "a lot", but even in YACoin, the effects of 4 MB hashes are taking their toll. You can't parallelize as many threads on today's GPUs as you can at lower N-factors. A Radeon R9 290 with 2560 shaders would need 40 GB (no, not 4, 40!) to fully utilize the card. Luckily, OpenCL is flexible, and we can adapt the code and recompile the OpenCL kernel to use lookup-gap, which gives a larger effective memory size and thus lets us run more threads. If we were unable to change lookup-gap, performance would degrade MUCH faster than 50% for every N-factor change. An ASIC is, by definition, a piece of software hard-coded in silicon. If an ASIC could use lookup-gap, the setting would need to be fixed in the design, striking a balance between computation speed and the amount of memory included. But then it would only work for a given N-factor, so you'd have to switch to a different coin eventually. How much DRAM can you fit on an ASIC die? I would guess not enough to do more than a couple of hashes at a time, and unless the chip is significantly faster than today's GPU cores, I think we're still a long way off from ASICs for high-memory (even 4 MB, NF=14) coins.

I'm not convinced that the only way to parallelize these algorithms is the way it's being done currently. It is often possible to write GPU code where several threads, or even a whole warp/wavefront, work collectively on an algorithm step. I haven't looked at the details of scrypt-chacha specifically, but I wouldn't be surprised if there are alternative formulations of the algorithm besides the one you refer to.

The top-end GPUs today have 8 GB to 12 GB of RAM. In the next two years, there will be GPUs and other GPU-like hardware (e.g. Xeon Phi) with significantly more memory than they have now, likely in the range of 32 GB. I've read analyst articles expecting Intel to put at least 16 GB of eDRAM onto the next Xeon Phi (though likely on its own separate die), a much larger-scale variant of what Intel is already doing for integrated graphics. Next week is NVIDIA's GPU conference; perhaps there will be some public announcements about what they're doing for their next-gen GPUs.
|
|
|
|
moon.raker
Newbie
Offline
Activity: 11
Merit: 0
|
|
March 20, 2014, 07:44:04 PM |
|
poolwaffle: do I need to tweak my expiry setting to avoid stales? Do they mean any kind of loss when they are accepted?
On another topic:
With all the extra server capacity, what do you think about setting up a scrypt-N based "altpool" on a different port? Or is the current server already capable of handling scrypt-N? Is there actually ANY scrypt-N based multipool around? Could it be a possible direction for WP to evolve, or would pool mining those coins simply destroy that market? Or is it just way too early to work on, or even think about, the implementation?
yeah i suck at english but i think u might get my point
|
|
|
|
Thirtybird
|
|
March 20, 2014, 07:57:16 PM |
|
I'm not convinced that the only way to parallelize these algorithms is the way that it's being done currently. It is often possible to write GPU codes where several threads or even a whole warp/wavefront work collectively on an algorithm step. I haven't looked at the details of scrypt-chacha specifically, but I wouldn't be surprised if there are alternative algorithm formulations other than the one you refer to. The top-end GPUs today have 8GB to 12GB of RAM. In the next two years, there will be GPUs and other GPU-like hardware (e.g. Xeon Phi) that will have significantly more memory than they do now, likely in the range of 32GB. I've read analyst articles that expect that Intel will put at least 16GB of eDRAM onto the next Xeon Phi (though likely on its own separate die), a much larger scale variant of what Intel is already doing for integrated graphics. Next week is NVIDIA's GPU conference, perhaps there will be some public announcements about what they're doing for their next-gen GPUs.
That's all good information, and I'm sure you're right that there are alternate ways to rework these hashes for new hardware; that's the one aspect of mining my knowledge is very shallow on.

Regarding those badboy GPUs with 8 and 12 GB of memory: they aren't very common, and their cost would be prohibitive compared to running multiple "smaller" GPUs. But that's where the schedule of increasing N comes in. It takes a stab at where computing power will be in the future, and it could be very wrong, but it still scales up the requirements over time.

The Xeon Phi looks to be an interesting beast, and I was unfamiliar with it until you brought it up. The specs call for 61 cores at 1.238 GHz for the high-end machine, which doesn't sound massively parallel. Time will tell, but I'm not going to be the guinea pig who plunks down $4,000 USD to find out.

Christian Buchner (of cudaminer) has been in touch with NVIDIA, and they're on board with crypto-mining; I believe they've even assisted with optimizing some of his kernel code. Regarding their announcement, I do know that their Maxwell architecture is already providing improved performance per watt on the mid-range 750 Ti card.
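The schedule of increasing N discussed above can be made concrete. A minimal sketch, assuming the usual scrypt-chacha convention N = 2^(Nfactor+1) with r = 1, giving a per-thread scratchpad of 128 * r * N bytes; the function name and its lookup-gap parameter are illustrative, not taken from any particular miner's code:

```python
def scratchpad_bytes(n_factor: int, r: int = 1, lookup_gap: int = 1) -> int:
    """Per-thread scrypt-chacha scratchpad, assuming N = 2**(n_factor + 1).

    A lookup-gap of g stores only every g-th scratchpad entry and
    recomputes the rest on demand, dividing memory by g at the cost
    of extra hashing work.
    """
    n = 2 ** (n_factor + 1)
    return 128 * r * n // lookup_gap
```

Under these assumptions, NF=14 gives 4 MiB per thread and NF=15 gives 8 MiB, matching the YACoin figures quoted in the thread, and a lookup-gap of 2 halves the footprint per thread in exchange for more computation.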
|
|
|
|
|