Bitcoin Forum
February 18, 2018, 11:28:06 AM *
  Show Posts
Pages: « 1 2 3 [4] 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 ... 85 »
61  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN]CureCoin - Protein Folding Research based Proof of Work on: October 23, 2016, 01:49:53 AM
It's interesting to see what will happen. In the end, in my opinion, potential worldwide merchant acceptance is really the only thing that pushes coins to the top of the market cap. And that simply cannot happen with the current centralized CureCoin 1.0.

In my opinion it's sad that the CC team was somehow lacking ambition for CC2.0... Their aim should have been to create THE ULTIMATE COIN that would consign to the trashcan all the useless, global-warming-increasing GPU-based coins out there and rival Bitcoin, because being the ultimate cryptocurrency would push the coin to the top of the market cap and in turn offer tens, maybe hundreds of GPPD to F@H.

But for CC2.0 to be the ultimate cryptocurrency, I think they should:
   - Remove all the flaws of CC 1.0 (mainly the premine -> DONE)
   - Not add flaws in CC 2.0 (the certification authorities)
   - Add massive advantages over Bitcoin (either new or inspired from other popular coins):
      - Protection against quantum computing (DONE)
      - Block time under 30s to allow effective payments
      - Anonymous transactions (Vorksholk argued that it might enable illegal activity, but cash payments in US$ allow that too, and they don't help F@H in return)

Maybe SigmaX or CC3.0 will be the ultimate coin...

And then, having the technically ultimate coin that moreover helps F@H, CC's PR should contact all the major merchants accepting Bitcoin and convince them to accept CureCoin.



Let me preface this by saying these are really good debates to have--and if you have specific ideas to address some of the concerns (namely certificates and WU validation of a dynamic WU system), we'd love to hear and discuss them. Here's how we're currently seeing these issues:

Currently, the certificate system looks like the best way to validate people's contributions to scientific computation networks. Bitcoin works because PoW uses SHA-256D, which everyone can validate--they know the exact requirements and validation procedure for any PoW. Since the very nature of scientific computing is large workloads which continually evolve and change format (not everyone is folding the same molecule in the same way--in fact, aside from redundancy, no one is), this means of decentralized work validation isn't possible.

It's really a tradeoff--technically it's possible to create a PoW based on validating a computation related to simulating a single molecular system, but then it's unusable as a means of scientific research, because research teams can't change the molecule, can't change what about it is being simulated, etc. They also can't improve the software (aside from optimizing it to produce the same results faster), and the workloads would have to be unimaginably small to be practical PoW targets (because everyone needs to recompute them).

It's also a DDoS vulnerability for the network: people could pretend to mine a ton of blocks with bogus WUs, and everyone would have to recompute the entire WU to see that it's false. With Bitcoin it's nanoseconds to compute a double-SHA256 hash. With even a small molecular simulation it's on the scale of seconds to minutes, which means someone could submit a few bogus blocks to the network, and the entire network would spend an hour frozen while it reprocesses every WU, just to find out that they weren't valid PoW submissions.
We also couldn't have a short blocktime for this reason: even the slowest full nodes need to validate blocks much faster than they are transmitted, and since this PoW validation takes more than a trivial amount of time, the blocktime must leave a large enough window to ensure that peers aren't being "buried" by PoW validations, even ignoring the DDoS factor.

We are considering a pretty fast blocktime--in the range of 1 to 2 minutes. 30s might be a bit short, but our goal is to get it as low as reasonably possible without causing any issues on the network. Anonymous transactions aren't our goal.

Thanks for your answer and the news!!

While I understand that you wanted to remove SHA256D PoW from CC2.0 because it is a waste of electricity, adding a third party to the loop (the certification authorities, who in the case of F@H we are not even sure are willing to participate, even though the recent results of the CureCoin team might help convince them) goes against the concept of decentralization, which is one of the pillars of the success of blockchain technology and Bitcoin.

If we want to replace Bitcoin someday, we need the trust of investors, so that they are ready to buy many millions of dollars' worth of CureCoin. For that we should be as decentralized as possible and not have vulnerabilities Bitcoin doesn't have. A third party is a potential vulnerability, and even if the problems an outside hacker or an insider at Stanford might create could be corrected in the blockchain after the fact, it would create hassle, make bad publicity for the coin, and make its value drop...

As far as motivation, CureCoin is waiting for a technology I'm currently working on to become available that will drastically improve CureCoin's network security. The technology isn't built for CureCoin specifically, but for the entire ecosystem to use, and CureCoin will be one of the first adopters. Update to come very soon (Tuesday) about what exactly the technology is. Smiley

Haha, nice teaser!! Cheesy On Tuesday we're going to break the record of people reading this topic! Cheesy

We're certainly open to ideas if anyone can think of a way to enable universities and other research institutes to still create work for which miners are rewarded without involving certificates (or a central service which just holds Curecoin and pays it out).

The announcement on Tuesday will clarify why a certificate blockchain isn't a problem for network security (double-spends, blockchain forking, etc.) though Smiley The main issue with the certificate blockchain is simply a university being able to give themselves more Curecoin than they should be getting--with the technology we're working on, it doesn't let them attack the consensus of the network to any concerning degree.
62  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 22, 2016, 11:25:10 PM
Found a block on both miners and all of my debugging shows nothing but good results, so here's the proxy version 2 with cuda miners updated to support proxy v2:

https://github.com/Vorksholk/PascalCoin-CUDA/releases/download/v1.03/CUDA_Pascal_v1.03.zip

AMD miner coming in the next few hours.

How can you tell if GPU 0 or GPU 1 found a block?

And I can not wait for the AMD GPU miner.

If you're using a name exclusively for a single rig, you can just look at the number after the name. If you're mining with multiple Pascal wallets on multiple machines, all using the same miner name, then you need to note which nonce shows up in the PascalCoin wallet when the block is announced in the logs, then check which of your machines mined that nonce by looking at the output in the proxy (or by converting the nonce to hex and finding it in the miner output itself). For example, VorkGPUs00 is the name all of the GPUs at device #0 in my rigs mine with, and VorkGPUs01 is for all GPUs at device #1.
63  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 22, 2016, 10:57:14 PM
Found a block on both miners and all of my debugging shows nothing but good results, so here's the proxy version 2 with cuda miners updated to support proxy v2:

https://github.com/Vorksholk/PascalCoin-CUDA/releases/download/v1.03/CUDA_Pascal_v1.03.zip

AMD miner coming in the next few hours.
64  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 22, 2016, 10:48:34 PM
Right--the problem stems from each GPU hashing the same data. Effectively, two GPUs will hash slightly faster than one with the current system if they are slow enough that they don't complete a kernel run every second, because the timestamp will then differ at some points while both miners are running. However, GPUs fast enough to complete one or more kernel runs per second will mostly duplicate each other's work: the headerout file contains the miner name (like VorkGPUs01 or VorkGPUs00), so both GPUs hash the same 164 bytes of input data, add their own 4-byte nonces which are deterministically produced (so two miners produce the same nonce iteration, assuming the same kernel launch parameters--block and grid size, which are currently hard-coded into the binaries at what seemed to be the most efficient settings on 9xx and 10xx cards during my testing), and use the same 4-byte timestamp.

Still waiting to find a block with my new solution, but everything appears to be working correctly. It was just a dumb oversight on my part; waiting to get a block from both GPUs on the test system to make sure everything's kosher Smiley

Looks like you got a few blocks the past hour.

Yup, but I have multiple systems mining under the same name. My test rig got 1 block with GPU #1, and if I get 1 block with GPU #0 I can be pretty sure everything is perfect.

The update will require new CUDA mining binaries too, btw.
65  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 22, 2016, 10:43:47 PM
Ahh. Gotcha. TY. Testing a multi-GPU rig vs a single one to see how many blocks they solve.

I ran a two-gpu rig overnight and it got the exact same number of blocks as a single-GPU rig (all GPUs are the same), and I looked more into this, so I know for sure multi-GPU rigs for the moment are only effectively mining with the power of one GPU. I'm testing a fix right now.

Ahh. That explains it. Even though in the .jar proxy it shows as shares being submitted by individual / different GPU's.

Right--the problem stems from each GPU hashing the same data. Effectively, two GPUs will hash slightly faster than one with the current system if they are slow enough that they don't complete a kernel run every second, because the timestamp will then differ at some points while both miners are running. However, GPUs fast enough to complete one or more kernel runs per second will mostly duplicate each other's work: the headerout file contains the miner name (like VorkGPUs01 or VorkGPUs00), so both GPUs hash the same 164 bytes of input data, add their own 4-byte nonces which are deterministically produced (so two miners produce the same nonce iteration, assuming the same kernel launch parameters--block and grid size, which are currently hard-coded into the binaries at what seemed to be the most efficient settings on 9xx and 10xx cards during my testing), and use the same 4-byte timestamp.

Still waiting to find a block with my new solution, but everything appears to be working correctly. It was just a dumb oversight on my part; waiting to get a block from both GPUs on the test system to make sure everything's kosher Smiley
66  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 22, 2016, 10:16:58 PM
Ahh. Gotcha. TY. Testing a multi-GPU rig vs a single one to see how many blocks they solve.

I ran a two-gpu rig overnight and it got the exact same number of blocks as a single-GPU rig (all GPUs are the same), and I looked more into this, so I know for sure multi-GPU rigs for the moment are only effectively mining with the power of one GPU. I'm testing a fix right now.
67  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN]CureCoin - Protein Folding Research based Proof of Work on: October 22, 2016, 09:56:43 PM
It's interesting to see what will happen. In the end, in my opinion, potential worldwide merchant acceptance is really the only thing that pushes coins to the top of the market cap. And that simply cannot happen with the current centralized CureCoin 1.0.

In my opinion it's sad that the CC team was somehow lacking ambition for CC2.0... Their aim should have been to create THE ULTIMATE COIN that would consign to the trashcan all the useless, global-warming-increasing GPU-based coins out there and rival Bitcoin, because being the ultimate cryptocurrency would push the coin to the top of the market cap and in turn offer tens, maybe hundreds of GPPD to F@H.

But for CC2.0 to be the ultimate cryptocurrency, I think they should:
   - Remove all the flaws of CC 1.0 (mainly the premine -> DONE)
   - Not add flaws in CC 2.0 (the certification authorities)
   - Add massive advantages over Bitcoin (either new or inspired from other popular coins):
      - Protection against quantum computing (DONE)
      - Block time under 30s to allow effective payments
      - Anonymous transactions (Vorksholk argued that it might enable illegal activity, but cash payments in US$ allow that too, and they don't help F@H in return)

Maybe SigmaX or CC3.0 will be the ultimate coin...

And then, having the technically ultimate coin that moreover helps F@H, CC's PR should contact all the major merchants accepting Bitcoin and convince them to accept CureCoin.



Let me preface this by saying these are really good debates to have--and if you have specific ideas to address some of the concerns (namely certificates and WU validation of a dynamic WU system), we'd love to hear and discuss them. Here's how we're currently seeing these issues:

Currently, the certificate system looks like the best way to validate people's contributions to scientific computation networks. Bitcoin works because PoW uses SHA-256D, which everyone can validate--they know the exact requirements and validation procedure for any PoW. Since the very nature of scientific computing is large workloads which continually evolve and change format (not everyone is folding the same molecule in the same way--in fact, aside from redundancy, no one is), this means of decentralized work validation isn't possible.

It's really a tradeoff--technically it's possible to create a PoW based on validating a computation related to simulating a single molecular system, but then it's unusable as a means of scientific research, because research teams can't change the molecule, can't change what about it is being simulated, etc. They also can't improve the software (aside from optimizing it to produce the same results faster), and the workloads would have to be unimaginably small to be practical PoW targets (because everyone needs to recompute them).

It's also a DDoS vulnerability for the network: people could pretend to mine a ton of blocks with bogus WUs, and everyone would have to recompute the entire WU to see that it's false. With Bitcoin it's nanoseconds to compute a double-SHA256 hash. With even a small molecular simulation it's on the scale of seconds to minutes, which means someone could submit a few bogus blocks to the network, and the entire network would spend an hour frozen while it reprocesses every WU, just to find out that they weren't valid PoW submissions.
We also couldn't have a short blocktime for this reason: even the slowest full nodes need to validate blocks much faster than they are transmitted, and since this PoW validation takes more than a trivial amount of time, the blocktime must leave a large enough window to ensure that peers aren't being "buried" by PoW validations, even ignoring the DDoS factor.

We are considering a pretty fast blocktime--in the range of 1 to 2 minutes. 30s might be a bit short, but our goal is to get it as low as reasonably possible without causing any issues on the network. Anonymous transactions aren't our goal.

As far as motivation, CureCoin is waiting for a technology I'm currently working on to become available that will drastically improve CureCoin's network security. The technology isn't built for CureCoin specifically, but for the entire ecosystem to use, and CureCoin will be one of the first adopters. Update to come very soon (Tuesday) about what exactly the technology is. Smiley
68  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 22, 2016, 09:43:11 PM
Well, it appears to be updating correctly for me on 2 systems, and each system appears to be hashing at the same rate.

1 system has 2 GPUs, both showing about the same reported hash speed and both finding nonces at close to a similar rate; one has 34 now and the other has 36.

On a totally different system with 1 GPU (similar to the GPUs in the above system), it has about 33 nonces and was started just after my first system.

No blocks yet, but I looked at the text files and watched the live update to each window when finding a nonce; on the multi-GPU system they all seem to update at the same time.



Yup, you're right. Given the design of the standalone miners, the proxy needs to create a different headerout.txt file for each GPU, since that data includes the miner name (which needs to be different for each GPU mining). Will update tonight, hopefully.

I see a lot of submitted shares from the GPUs like this:

GPU 0 submitted a share [payload: xxxxxxxx00 nonce: -1311227863 timestamp: 1477171165]

Some are showing a negative nonce, some positive.

Any ideas?

Negative nonces aren't a problem. Bit of a comp-sci lesson: "negative" numbers are actually just "large" numbers interpreted in such a way that they're negative. For example, with 8 bits, the largest number I can represent is 11111111, which is 255. Or I can interpret it as a negative number, in which case (a bit of a simplification here) it's -1. 11111110 is -2, 11111101 is -3, etc. It's called two's complement.

The idea is that, if I want negative numbers, I can take my normal range (0 to 255, in the case of 8-bit numbers) and partition about half of it as negative numbers. In this way, I can have numbers from -128 to 127. Similarly with 32-bit numbers, we can either have all non-negative integers from 0 to 2^32 - 1, or have negative and positive numbers, which span from -2^31 to 2^31 - 1.

In this way, -1 in a 32-bit signed integer is the same as 2^32 - 1 in an unsigned integer. In hex, the number FFFFFFFF is either 4,294,967,295 or -1, depending on whether you interpret it as an unsigned or signed integer. The binary -- 11111111111111111111111111111111 -- is the same either way. The way the JSON decoding works, a number larger than 2^31 - 1 isn't accepted as a 32-bit integer, because 2^31 doesn't fit in the 31 value bits plus a sign bit that a signed 32-bit integer has. So instead, we just give it a negative number which represents the same bit pattern, and it's happy.

The Pascal network itself uses unsigned integers--Cardinals--but the JSON decoder expects signed 32-bit integers. As such, approximately half of the nonces will actually be negative when interpreted as signed ints. They're not invalid or different; the negative number is just another way to express the same bit pattern as a positive number in a different system.
69  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 22, 2016, 03:56:20 PM
Thank you very much for what you are doing!
Can you help? I can not start mining, and can not figure out where to put the files from the "PascalCoin_CUDA_ProxyMiner" folder, or in what sequence to run them. Thank you. And please forgive me for my bad English.

You can leave all of the files in PascalCoin_CUDA_ProxyMiner where they are--just extract the zip file somewhere. Run the proxy first, give it the information it asks for (host will be 127.0.0.1, port will be 4009, number of GPUs to support is the number of cards you want to mine with, and your miner name is the miner name you entered in the PascalCoin wallet), and then run the PascalCoinCUDA_ProxyMiner_smXX.exe miner in the same directory. Multiple GPUs will have to be launched from the command line with an argument, so make sure you cd to the directory they are in first, to ensure their working directory is correct.
70  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 22, 2016, 03:53:10 PM
Thanks Vork.

I seem to have a problem: the proxy isn't updating the headerout.txt file when a block is found on the network.
In the console, the miner notify is working just fine.

Did anyone encounter the same issue?
Thanks!

So you're seeing miner-notify with PascalProxyv1.jar, but the file "headerout.txt" in the same directory as PascalProxyv1.jar isn't changing? Try deleting it and running PascalProxyv1.jar again, does it reappear?

Thanks for all the work. Hope it's worth your time to do it. :-)

Any way to put back the MH display into the proxy_smxx miners? It's not showing anything other than the shares.

That's weird, the proxy_smxx miners are the exact same code as before except the filename to write data to has changed (and on my machines, I'm seeing the hashrate printed out).
71  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 22, 2016, 04:43:01 AM
Alright, here's the CUDA miner with the working proxy: https://github.com/Vorksholk/PascalCoin-CUDA/releases/download/v1.02/CUDA_Pascal_v1.02.zip

VirusTotal: 0/55 https://virustotal.com/en/file/a7c451c3a19c4052cdf10d974da057f51bdb957734bb47c29115044741540648/analysis/1477111465/

These instructions are slightly different than the previous GPU miner! Take note!
1. You must already have PascalCoin installed. If you don't have it, download it from sourceforge here: https://sourceforge.net/projects/pascalcoin/. Once it is installed, run the PascalCoinWallet.exe provided in the download.
2. You must be using a 256-bit secp256k1 key. This is the default behavior of the PascalCoin wallet.
3. Your miner name must be exactly 8 characters long. The miner expects that the input is exactly 176 total bytes, which is achieved with a secp256k1 key and a 10-character name: your 8 characters plus a 2-character GPU identifier. NOTE: NOT 10 like before! 8 characters, because the last two will be used to identify each GPU!
4. You must have RPC enabled in your client (any port of your choosing, default is 4009)
5. You must run the proxy miner (PascalProxyv1.jar) in the same directory as the PascalCoinCUDA_ProxyMiner_smXX.exe file you run (everything is where it needs to be if you just extract the provided zip).

To run the proxy, just double-click on it on Windows, or in Linux do
Code:
java -jar /path/to/PascalProxyv1.jar

How does this work? The proxy connects to the PascalCoin wallet, grabs the current mining data, and then creates a headerout.txt file for the miner(s) to use. In return, the miner(s) write their shares to files called datainXX.txt, where XX is the GPU number (starting at 00, and going through 99). The miners use the device argument (0 if none is provided) to determine which device to use, and that's the file they write out to. The proxy waits for changes in those datainXX files, and then ferries the results back to the PascalCoin wallet.

You will notice that the PascalCoin wallet log constantly says "Sending Error JSON RPC id () : Error: Proof of work is higher than target" whenever you find a share. This is because the proxy submits any share that meets target 24000000, and since the network target is 29xxxxxx, only about 1 in 2^5 = 32 shares will be an actual block. At a target of 2Axxxxxx, that'd be 2^6 or 64 shares; at 2Bxxxxxx, 2^7 or 128 shares; etc.

Whenever I can get around to it, the OpenCL miner will work with this proxy.

Bottom line: you no longer need to rely on my modified PascalCoin wallet. You can use the official one from the developer, and run the proxy. The proxy won't work with old versions of the miner, but it will work with the version included in the zip linked above. Smiley
72  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 21, 2016, 10:01:24 PM
Hi everybody.

I've found an explanation for why PascalCoin is always "discovering servers" and it's difficult to connect: it's a bug  Grin

In a few hours I'll release build 1.0.9 correcting this issue (I'm testing it now). It's important that everybody updates too, to make the PascalCoin network more stable.

Affected versions are 1.0.6, 1.0.7 and 1.0.8 (earlier versions don't have this bug... but they have "other" bugs).
Of course Vorksholk's binaries have this bug too; I'll talk with him to sort it out.

The dev.

Hey, figured I'd just write a proxy that connects to 1.0.9, ferries the necessary data in a usable form to the CUDA miner, and then ferries results back. But when I submit a nonce like "3791967799" (bigger than 2^31 but smaller than 2^32) to your client, it is rejected for not being an integer, despite being smaller than 2^32. Did you use a signed integer instead of an unsigned integer for this? These nonces never had an issue before. If you can make a release that allows the full nonce range (up to 2^32 - 1), then everyone can GPU-mine with multiple GPUs to one wallet using the proxy I'm making.

Try StrToInt64 instead of StrToInt, mask out the extra 32 bits, and put it in a Cardinal?
73  Alternate cryptocurrencies / Marketplace (Altcoins) / Re: PASCAL COIN + ACCOUNT TRADING THREAD on: October 20, 2016, 08:46:32 PM
Agreed. The price will be going up even more

Yeah. A gtx 1060/70/80 is lucky to get 4-6 blocks now, which for a $250-700 per GPU investment is going to warrant a price jump.
 Sadly, it looks like a single mining farm will be controlling the price, as they have 50%+ of all coins generated.

They have 50%+ of the daily production, but probably 5% of the total float.
74  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 20, 2016, 02:15:02 PM
Vorksholk, will the OpenCL miner you are working on also work with the new Linux version of Pascal?  Thanks

I won't be providing Linux binaries, but the code should compile with few, if any, modifications on Linux.
75  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 20, 2016, 07:07:21 AM
That seems a lot less than it should be.
I mean that's only like 60 newer GPUs. Should be like 6 to 8 times that if we go by the top 2 miners.
Probably ~25-30 AMD GPUs with an unoptimized miner, or perhaps as few as ~15-20 with an optimized one.

Going by the performance difference of the GTX 1060 vs RX 480s in eth/dag, I don't think a well-running RX 480 will see more than 400 MH per card.
More like 340-350MH.

I tell you though, once the AMD miner is out, the net hash rate will hit 40PH within a day or two. Cheesy

Eth is a bad metric though, since it's memory bound Wink
76  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 20, 2016, 03:50:29 AM
That seems a lot less than it should be.
I mean that's only like 60 newer GPUs. Should be like 6 to 8 times that if we go by the top 2 miners.
Probably ~25-30 AMD GPUs with an unoptimized miner, or perhaps as few as ~15-20 with an optimized one.
77  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 20, 2016, 03:40:25 AM
My miner is showing negative hash rates. Any idea?

Yup, it's just a silly artifact of the dumb way I measure hashrate in my CUDA miner (and it will probably appear in the OpenCL version too, since I'm going to reuse as much code as possible to save time). It basically calculates a "time" value based only on the hour, minute, second, and millisecond. As such, when the day rolls over, the start time appears as a larger number than the "current time", so the hashrate is negative (and also wrong). Just restart the miner if you want a correct hashrate.

The GPU miner was an absolute hackjob, so I did the minimum work necessary to make it actually work for its intended purpose. One side effect is that, when running for a long time, the hashrate measurement is incorrect. Time permitting, I'll fix it at some point by using a non-disgusting way of measuring milliseconds elapsed since the beginning of mining.
 
78  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 20, 2016, 03:36:44 AM
OCminer has won 574 out of the last 1026 blocks which is 56%.  Will this change if Vorksholk comes out with the new miner?

Anyone know what is the cost of the type of computer OCminer is using or one that would win blocks?  Thanks

It certainly will. No promises, but you'll probably see a card like the RX 480 hit 500 MH/s+, and "ocminer" probably has ~10,000 MH/s, or ~20 RX 480s (or a more optimized/faster miner with fewer/different cards, of course). An optimized kernel with BFI and an even more intelligent midstate would certainly do even better, but just grabbing my CUDA code, shoving it in a .CL file, changing the method signature/params, and then (the time-consuming part) building the host code should see pretty significant improvements over CUDA. Also, for anyone wondering: the OpenCL implementation won't provide any benefit for NVidia cards.

For anyone who's of the technical persuasion wondering why AMD GPUs are so much faster than NVidia GPUs for SHA-256 and similar hashing functions:
NVidia focuses more on architectural complexity; AMD focuses more on raw compute throughput.
SHA-256 uses 32-bit integer rotation. AMD GPUs have a single instruction for this; NVidia GPUs have to do (a << r) | (a >> (32 - r)), which is two shifts and an 'or' instruction. So, as an oversimplification: AMD chips are about 3 times better at 32-bit integer rotation.

As we saw with Siacoin (Blake2b algo), NVidia GPUs performed amazingly well (still unable to match performance-per-dollar with AMD, but they were still incredibly competitive). Why was this? I was able to implement three of the four rotations used by Blake2b as byte_perm, since they were rotations by amounts divisible by 8 (so the rotations could be realized as selecting certain bytes in a certain order, rather than actually shifting bits). In SHA-256, this is not the case (the shift and rotation amounts are 2, 3, 6, 7, 10, 11, 13, 17, 18, 19, 22, and 25, none of which are divisible by 8...). Also, Blake2b uses 64-bit rotations, which AMD doesn't have a single instruction for (although their 32-bit instructions still offer some advantage).

GFLOPS isn't really a useful metric here, since we're not necessarily concerned with floating-point performance. GFLOPS is a nice tool for comparing the relative performance of multiple chips of the same architecture, but across architectures, for the purposes of mining, it isn't terribly useful.

Also, RX 480 arrived, so fingers crossed Saturday gives me enough time to throw this together.
79  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 19, 2016, 09:04:37 PM
That seems a lot less than it should be.
I mean that's only like 60 newer GPUs. Should be like 6 to 8 times that if we go by the top 2 miners.

Going by the difficulty, and assuming we're hitting a block every 5 minutes (which we aren't, but the difficulty has had long enough to adjust that its fluctuations are just statistical noise), the current network is at about 14 or 15 GH/s.
80  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Pascal Coin: P2P cryptocurrency without need of historical operations on: October 19, 2016, 04:03:03 PM
What's the estimated time for each block ? The time the difficulty tries to adjust itself to?
5 minutes.