Trying Gigabyte GTX 750 Ti Video Card For Crypto Mining

25 Feb 2014

[Image: Gigabyte GTX 750 Ti video card]

The GeForce GTX 750 Ti video cards based on Nvidia's new Maxwell architecture have generated quite a lot of interest among users mining crypto currencies thanks to their very good hashrate per watt of power used. After trying a reference GTX 750 Ti board, which performs pretty well and overclocks decently to provide some extra hashrate, we are now moving on to trying different non-reference design video cards based on the GTX 750 Ti. Our goal is to find the best choice for overclocking and for getting the maximum possible performance out of the GPU when mining crypto currencies. So we took a Gigabyte GTX 750 Ti (N75TOC-2GI) video card for a spin to see what we can get out of that board…

[Screenshot: Gigabyte GTX 750 Ti stock Scrypt mining performance]

The default Scrypt mining performance with CUDAminer was about 273 KHS, slightly more than the roughly 265 KHS we got with the reference card at stock frequencies. The two advantages of the Gigabyte board are the presence of an external PCI-E power connector and a much better cooling solution than the stock cooler. However, we found that the TDP limit of the Gigabyte card was still set to 38.5W in the video BIOS, though with the Power Target limit removal method you can set a much higher limit and prevent the Power Target functionality from limiting your performance.
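
For reference, a cudaminer command line along the lines of the one below is roughly what such a test looks like. This is only a sketch: the pool URL and worker credentials are placeholders you would replace with your own, and the T5x24 launch configuration is the one discussed in the comments below:

cudaminer.exe -o stratum+tcp://pool.example.com:3333 -u worker -p password -l T5x24 -H 2 -i 0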

[Screenshot: Gigabyte GTX 750 Ti overclocked Scrypt mining performance]

Overclocking the Gigabyte GTX 750 Ti to +135 MHz on the GPU and +700 MHz on the video memory (the maximum stable clocks for mining) brought the Scrypt performance up to about 303 KHS, but we were hitting the TDP limit. So we increased the TDP limit to 65.5W by modifying the video BIOS and flashing the modified version onto the Gigabyte board, and with the same overclocked frequencies the result went up to 322 KHS. Unfortunately the Gigabyte board did not allow GPU frequencies higher than +135 MHz, nor an increase of the GPU voltage above the default value. And while 322 KHS with silent operation and a GPU temperature of 42 degrees C is not a bad result at all, we are going to be checking out other GTX 750 Ti boards to see if we can get a bit more hashrate than that. So stay tuned for more updates on that…
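
For those who want to replicate the BIOS modification, the rough workflow is: back up the stock BIOS, raise the TDP/power limit in a BIOS editor (commenters below use the Kepler BIOS editor, which also handles these first-generation Maxwell cards), then flash the modified image back. Below is a sketch using NVFlash with example file names; the -6 switch is commonly used to override the ID mismatch check when flashing a modified image, so treat this as a guideline rather than exact instructions for your card:

nvflash --save gtx750ti-stock.rom
(edit the TDP limit in the BIOS editor and save the result as gtx750ti-mod.rom)
nvflash -6 gtx750ti-mod.rom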







22 Responses to Trying Gigabyte GTX 750 Ti Video Card For Crypto Mining

Robin

February 25th, 2014 at 15:48

Trying to get some clarification on the model number. You mention yours is: N75TOC-2GD and I can see that it has a 6 pin power connector. I’ve only been able to find the N75TOC-2GI but I can’t confirm whether that does have a power connector or not. Help much appreciated before everything sells out ;) Thanks for the write up!

admin

February 25th, 2014 at 20:16

It is the GI with a power connector; the GD was a typo :)

laptopfreek0

February 25th, 2014 at 21:19

Just wondering what config you used for this. Also, if you were using -H 1 or -H 0, could you try -H 2 and report your hashrate? I am running 5 Asus cards and am only getting 230-ish per card with -H 2.

admin

February 25th, 2014 at 21:41

Are you using the T5x24 kernel for cudaminer? The -H 1 and -H 2 options should not make that much of a difference.

You can try opening the Chrome browser, Media Player Classic or something else that uses GPU acceleration to see if you get higher performance; some people are reporting lower performance with only the miner running…

laptopfreek0

February 25th, 2014 at 22:20

T5x24, yup. I'm using the Asus GTX750TI-OC-2GD5 on Ubuntu with driver 331.49. I know that I can't overclock on Linux, but it should give almost the same performance as stock clocks. Also, -H 2 offloads everything to the GPU, whereas -H 1 and -H 0 hash on the CPU. I'm trying to avoid using the CPU because I have 5 cards in the system, the point is power efficiency, and CPU mining in general sucks.

admin

February 26th, 2014 at 00:56

It is possible that performance under Linux may not be as good as under Windows; we have not tested under Linux. You might want to try the autotune feature of cudaminer as well, as another kernel configuration might work better under Linux.
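
(For reference, cudaminer runs its autotune when no kernel is forced with -l, so a minimal autotuning test under Linux could look like the line below; the pool URL and credentials are placeholders:)

./cudaminer -o stratum+tcp://pool.example.com:3333 -u worker -p password -H 2 -i 0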

Ryan

February 26th, 2014 at 01:44

What arguments did you give to cudaminer? I am getting ~245 with -H 2, T5x24.

Nick

March 2nd, 2014 at 01:40

You couldn't squeeze any more out of the core because that was the most the boost table will allow without modification. While in the Kepler BIOS editor, adjust the top end of it a bit. I am stable at 1324 MHz core, 52C; however, my RAM doesn't want to go past about an extra 500 MHz without crashing. Oh well.

Robin

March 2nd, 2014 at 21:38

I feel like there must be something wrong with my first card.

Using the exact same model, plugged directly into a PCI-E x16 slot.

OS: Win 7 x64
Mobo: Gigabyte GA-990FXA-UD5

Cudaminer settings: --no-autotune -l T5x24 -H 2 -m 1 -i 0

Stock = 232KH/s

+60 core clock
+600 memory clock
= 257KH/s

Launch Chrome and start playing an HD video first, then start cudaminer:
= 269KH/s

Trying
+135/700 crashes cudaminer
+115/675 crashes cudaminer
+95/700 crashes cudaminer
+95/650 = 265KH/s (278KH/s with Chrome/Youtube trick).

Can't seem to get anywhere near 290, never mind 300 plus.

admin

March 2nd, 2014 at 23:47

Robin, have you tried modifying your video BIOS to increase the power limits? It might help you get some extra performance… we got about 20 KHS higher hashrate after modifying the BIOS.

Robin

March 3rd, 2014 at 17:50

Hi, yes I have. The most I could get out of it was 280. I tried a second card (without flashing) and that too was limited to 270-ish. I don't understand what the difference is. I can't even get +60/+600 stable, never mind +135/+700. Very strange. I just don't understand :/

admin

March 3rd, 2014 at 18:36

That is weird; there could be more variance in the video memory frequency, but the GPU should easily get to +135. What are the temperatures of the GPU? Maybe you are hitting the temperature target, and that is causing the frequency to drop and thus the drop in performance?
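
(One quick way to watch the GPU temperature while mining is the driver's nvidia-smi tool, assuming it is on your PATH; note that some query fields can be limited on GeForce cards depending on the driver version. This refreshes the readout every 5 seconds:)

nvidia-smi -q -d TEMPERATURE -l 5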

Robin

March 3rd, 2014 at 19:31

The Cudaminer author suggested using the x86 version of Cudaminer (I’d just defaulted to using x64). This got me from 260KH/s to 290KH/s (at +50/+500) on a desktop machine that I use for a few other things. Much better! +60/+600 is now stable and doing 297KH/s. The temp is 56 in a closed case at this speed. Thanks for your advice and help so far :) Definitely worth editing this article to mention that using the x86 version will likely be better!

Robin

March 3rd, 2014 at 19:56

Spoke too soon. +60/+600 crashed after about 8 minutes. +135/+700 crashed almost instantly. Stable at 292KH/s with +50/+500 though. Is there a methodology for increasing the core clock and memory clock rates (some sort of relationship between the two)?
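
(The usual approach is to find the highest stable core offset with the memory at stock, then raise the memory in steps with the core fixed; the two limits are mostly independent, but a memory overclock can destabilize a core overclock that was borderline. On Windows this can be scripted, for example with NVIDIA Inspector's command-line switches; a sketch only, assuming GPU index 0 and performance state 0:)

nvidiaInspector.exe -setBaseClockOffset:0,0,50 -setMemoryClockOffset:0,0,0
(mine for 10-15 minutes; if stable, raise the core offset by 10-15 MHz and repeat, then do the same for the memory offset)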

Tallec

March 6th, 2014 at 17:42

Have you tried modding both the BIOS TDP limit and the default and boost frequencies?

admin

March 6th, 2014 at 20:52

Robin, we have tried a few more Gigabyte cards and there seems to be a lot of variance in what they are capable of… we had cards not able to run at more than +100/+600 and others that had no problems at +135/+700 MHz.

Tallec, yes we did try that. On the reference board there was no problem with setting the higher frequencies, though the GPU did not handle much over +135 MHz well. On the Gigabyte, however, further increasing the frequencies in the BIOS did not have any effect on the actual operating frequencies.

Robin

March 9th, 2014 at 19:21

Thanks, that’s useful to know. Very strange that they can vary so much!

Nestor

March 12th, 2014 at 20:38

Can cudaminer be used on the same motherboard with this card, together with cgminer running on an ATI card?

RavenX

March 13th, 2014 at 10:26

Just got a Gigabyte GPU like that and I'm running some tests with several cudaminer parameters.
It is connected directly to a PCI-E x16 slot in a regular closed-case desktop computer.

– -H 1 gives me 10KHS more than -H 2, but we're "stressing" the CPU this way.

With -H 2 I get a smoother operation:
– 10ºC less on the CPU, usually at 53~55ºC (with -H 1 it always runs over 60ºC, which makes the fan very noisy).
– GPU always under 60ºC.
– low noise from the whole system.

Thus, I presume that -H 2 will result in lower power consumption.

admin

March 13th, 2014 at 12:35

With -H 1 you are also using the CPU for calculations, which gives you some extra hashrate. It is not worth the extra power usage, but with a single card you can still use it for a bit more hashrate. With multiple cards, however, you should stick to -H 2, otherwise you may get really low performance.
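
(For a multi-card rig, a command along these lines keeps all the hashing on the GPUs; a sketch assuming cudaminer's -d switch for device selection and placeholder pool details:)

cudaminer.exe -d 0,1,2,3,4 -l T5x24 -H 2 -i 0 -o stratum+tcp://pool.example.com:3333 -u worker -p password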

Daniel

March 14th, 2014 at 18:47

Thanks for the helpful posts! I am now running 5 of the Gigabyte cards and getting about 280 KH/s each with 420 Watts pulled at the wall. Now that the new Nvidia driver allows going beyond +135 MHz, I want to try increasing the TDP.

I came across a post about a new overclocked version that Gigabyte is coming out with: N75TWF2OC-2GI. There are details on the Gigabyte website.

As far as I can tell, the only difference is the clocks. Should I take these as achievable clock rates for the N75TOC-2GI model? Or do you suspect that something other than just the BIOS has been changed between the two cards?

admin

March 14th, 2014 at 21:43

Daniel, after checking the available information about the new N75TWF2OC-2GI model, it seems that only the BIOS might be different, setting the maximum GPU clocks a bit higher compared to the previous model we used for this article. The board and the cooler seem to be the same, the video memory as well, and it runs at the same default frequency on both models. So there is probably not much point in getting the higher-clocked model, as the N75TOC-2GI will most likely be able to provide the same overclock results.
