

Time for some overclocking of the GeForce GTX 1080 Founders Edition and running the tests again to see what hashrate increase we can expect from the GPU at the increased operating frequencies. The Founders Edition cards are somewhat limited in the maximum power they can draw by the presence of only a single 8-pin PCI-E power connector, a default TDP limit of 180W and a power limiter that allows for just a 20% increase over the default TDP (216W maximum). We already know that the GTX 1080 and GTX 1070 GPUs handle overclocking quite well and you can squeeze a lot more out of them if you are not limited and don't care that much about the power usage. Unfortunately, for more serious overclocking you will have to wait for the GTX 1080 cards that come with custom cooling solutions from Nvidia's partners. Meanwhile we are pushing the GTX 1080 Founders Edition to what it can do without touching the core voltage and doing some benchmarks with: Power Limit +20%, Core Clock +240 MHz, Memory Clock +125 MHz (the maximum settings that run stable for 24/7 mining on our test card), and the results are below.
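A quick sanity check of what these settings work out to, based on the published GTX 1080 Founders Edition reference specifications (1607 MHz base clock, 1733 MHz boost clock, 180W TDP); the arithmetic is just the offsets from the article applied to those reference numbers:

```python
# GTX 1080 Founders Edition reference specifications (published by Nvidia)
BASE_CLOCK_MHZ = 1607   # reference base clock
BOOST_CLOCK_MHZ = 1733  # reference boost clock
DEFAULT_TDP_W = 180     # reference TDP
POWER_LIMIT_PCT = 20    # maximum power limiter increase on the FE card

# Offsets used for the tests in this article
CORE_OFFSET_MHZ = 240
MEM_OFFSET_MHZ = 125

max_tdp = DEFAULT_TDP_W * (1 + POWER_LIMIT_PCT / 100)
oc_boost = BOOST_CLOCK_MHZ + CORE_OFFSET_MHZ

print(f"Maximum TDP with the power limit slider maxed out: {max_tdp:.0f} W")
print(f"Rated boost clock with the +{CORE_OFFSET_MHZ} MHz offset: {oc_boost} MHz")
```

Note that the actual clock while mining is set by GPU Boost depending on power and temperature headroom, so the rated boost plus offset is only the target, not a guaranteed operating frequency.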


As you can see from the table comparing hashrate results at stock settings and with the overclocked GTX 1080, the average performance increase we are getting is about 12%. Not a bad result, but a more serious overclock can help us get even more hashrate from the GPU, unfortunately making it less attractive in terms of hashrate per Watt. The problems with the low performance for Neoscrypt and the not that great performance for Lyra2RE still remain. Unlike Ethereum, where testing under Windows 10 brings some (though not enough) performance boost, with these two algorithms Windows 10 does not help, so they really need GPU-specific optimizations to max out performance, unlike the other algorithms that already scale pretty well on the GTX 1080. The other algorithms also do not show any significant difference in results between Windows 7 and Windows 10, so no need to upgrade or downgrade your OS. Of course, for Ethereum mining on the GTX 1080 or GTX 1070 you would still need to go with Linux for the best possible performance, as the Windows hashrate is still not satisfactory at all.
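The ~12% figure is an average of the per-algorithm gains. A minimal sketch of that calculation, using hypothetical placeholder hashrates rather than the article's measured numbers:

```python
# Hypothetical (stock, overclocked) hashrates; units differ per algorithm,
# which does not matter since only the ratio per algorithm is used.
results = {
    "Lyra2REv2": (46.0, 52.0),
    "X11evo":    (12.0, 13.4),
    "Quark":     (28.0, 31.5),
}

gains = [(oc / stock - 1) * 100 for stock, oc in results.values()]
for (name, (stock, oc)), gain in zip(results.items(), gains):
    print(f"{name}: {stock} -> {oc} ({gain:+.1f}%)")

average_gain = sum(gains) / len(gains)
print(f"Average increase: {average_gain:.1f}%")
```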


Time for a quick look at the power usage of the GeForce GTX 1080, the reference design from Nvidia with a TDP of 180W set by the manufacturer, making it pretty energy efficient for the performance it provides for crypto currency mining. We are checking the situation with the different algorithms apart from Ethereum, because we already know that the GTX 1080 is far from a great choice for Ethereum mining.


The GTX 1080 Founders Edition has a TDP limit of 180 Watt and a power limiter that can give you up to an extra 20%: moving the slider increases the maximum power available to the GPU. This can help you get some extra hashrate for the algorithms that max out the 180 Watt TDP of the GPU, and as you can see these are actually quite a few. The non-reference versions of the GTX 1080 should come with higher TDP limits and even higher factory clocks, so they should be able to achieve even more performance, as they will be able to keep a higher boost frequency while mining. Then again, you might also want to lower the power limit in order to reduce the power usage and get an even better hashrate per Watt, but for best results there you may have to play with the voltage and the clocks of the GPU as well.
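The trade-off is easy to see in numbers: raising the power limit adds more Watts than it adds hashrate. A small sketch with hypothetical placeholder figures (not the article's measurements) illustrating why efficiency drops at the +20% power limit:

```python
def efficiency(hashrate_mhs, power_w):
    """Return hashrate per Watt (MH/s per W)."""
    return hashrate_mhs / power_w

# Hypothetical hashrates at the default and raised power limits
stock = efficiency(46.0, 180)   # default 180 W TDP
raised = efficiency(50.5, 216)  # power limit slider at +20% (216 W)

print(f"Default power limit: {stock:.3f} MH/s per W")
print(f"Raised power limit:  {raised:.3f} MH/s per W")
print(f"Hashrate gain: {50.5 / 46.0 - 1:+.1%}, power increase: {216 / 180 - 1:+.1%}")
```

With these example numbers the hashrate grows by under 10% while power grows by 20%, so hashrate per Watt goes down even though absolute hashrate goes up.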

In the table above you can see that Neoscrypt and Lyra2RE are far from maxing out the TDP of the GTX 1080; the reason is that they do not perform that well in terms of hashrate, so some optimizations for them might help deliver better performance. Other than that, the power usage is high, but the hashrate you get is seriously increased compared to the previous generation of Nvidia GPUs, so performance per Watt of the GTX 1080 is actually great in most crypto mining algorithms.


We continue our series of tests of the new Nvidia GeForce GTX 1080 Founders Edition (reference design) for crypto currency mining. Yesterday we checked the situation for mining Ethereum with the new Pascal GPUs and saw the not so great results. Ethereum and the Dagger-Hashimoto algorithm it uses are still doing better on AMD GPUs, but Ethereum mining is set to end at some point, with the altcoin switching to PoS only, and the forks that use the same algorithm are not yet providing a real alternative. So with the growing difficulty and the switch to PoS probably coming in less than a year, you should also be interested in how the GTX 1080 performs in other new GPU-mineable algorithms, as well as in the more popular ones. This is the reason we are now going to compare the GeForce GTX 1080 Founders Edition with the GTX 980 Ti and GTX 970 to see the difference in performance in other algorithms as well. As you will see, the GTX 1080 does provide a nice performance boost compared to the two older-generation alternatives, but with its currently high price even the power saving does not make it a great option for building multi-GPU mining rigs at the moment. The more interestingly priced GTX 1070 will most likely be the better choice for building 6-GPU mining rigs, succeeding the GTX 970 as the best Nvidia-based GPU for crypto mining, though we are yet to test the 1070 in order to confirm that.


The ccminer forks we have used for testing:
Blakecoin (Blake256-8rounds), Decred (Blake256-14rounds), Vcash (Blake256-8rounds) ccminer
Lyra2REv2 ccminer
X11evo ccminer
All other algorithms

Tests were done under Windows 7 with already existing ccMiner releases that are not compiled with CUDA 8.0 or with support for Compute 6.1, the compute capability the GTX 1080 uses, though without optimizations specific to the GTX 1080 or the Pascal architecture in general we are probably not going to see much of a difference from that alone. We do, however, plan to do additional testing comparing the performance of existing ccMiner forks compiled with CUDA 8.0 and Compute 6.1 support against the results achieved here with the CUDA 6.5/7.5 and Compute 5.2 releases.

We had an almost problem-free experience testing the above algorithms on the GTX 1080, though there are some things that we need to note. For example, the default Nist5 intensity (21?) was crashing the miner, so we tested with 20, where it was working fine. The very low Neoscrypt performance on the GTX 1080 is probably because the algorithm needs special optimizations to take advantage of the GPU. The Lyra2RE performance was not that much faster on the GTX 1080 compared to the GTX 980 Ti, so here, too, some optimizations will most likely result in increased performance. Note that we have also tested X11, though now that it has moved to the ASIC phase you probably won't want to GPU mine it anyway.

What is clearly seen from the comparison above is that the new GTX 1080 performs about twice as fast (on average) as a single GTX 970; note that we are using a Gigabyte WindForce GTX 970 OC card against a stock 980 Ti and GTX 1080. The GTX 1080 uses about the same power as a single GTX 970 while offering twice the performance, but the real issue here is the price at the moment. If the GTX 1080 was about twice the price of the GTX 970 it might have been an interesting option for miners currently running Nvidia GTX 970 mining rigs, but unfortunately it is more like 2.5 times, not two. Comparing the GTX 1080 to the GTX 980 Ti shows an average performance advantage of almost 50% in favor of the GTX 1080, with not that great a price difference between the two cards (the GTX 1080 is about 30% more expensive). So there is definitely no point in going for a GTX 980 Ti instead of a GTX 1080 for mining, though both cards are still not the best choice for multi-GPU mining rigs. Then again, if you are buying just a single GPU for gaming and want to also mine with it when not gaming, the GTX 1080 might be much more interesting than it is for use in multi-GPU mining rigs.
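The price argument above can be sketched as a performance-per-dollar calculation using only the ratios stated in the article (GTX 1080 at roughly 2x the performance and 2.5x the price of a GTX 970, and roughly 1.5x the performance and 1.3x the price of a GTX 980 Ti), with everything normalized to the GTX 970:

```python
# Performance and price normalized to the GTX 970 = 1.0, derived from
# the ratios in the article; the 980 Ti values follow from the 1080's.
cards = {
    "GTX 970":    {"perf": 1.0, "price": 1.0},
    "GTX 980 Ti": {"perf": 2.0 / 1.5, "price": 2.5 / 1.3},
    "GTX 1080":   {"perf": 2.0, "price": 2.5},
}

for name, card in cards.items():
    card["ppd"] = card["perf"] / card["price"]
    print(f"{name}: {card['ppd']:.2f}x the GTX 970's performance per dollar")
```

On these ratios the GTX 970 still wins on performance per dollar, and the GTX 1080 clearly beats the GTX 980 Ti, which matches the conclusions drawn above.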