Meet The Gigabyte GeForce GTX 660 Ti OC

Our final GTX 660 Ti of the day is Gigabyte’s entry, the Gigabyte GeForce GTX 660 Ti OC. Unlike the other cards in our review today, this is not a semi-custom card but rather a fully-custom card, which brings with it some interesting performance ramifications.

GeForce GTX 660 Ti Partner Card Specification Comparison

              GeForce GTX 660 Ti (Ref)  EVGA GTX 660 Ti Superclocked  Zotac GTX 660 Ti AMP!  Gigabyte GTX 660 Ti OC
Base Clock    915MHz                    980MHz                        1033MHz                1033MHz
Boost Clock   980MHz                    1059MHz                       1111MHz                1111MHz
Memory Clock  6008MHz                   6008MHz                       6608MHz                6008MHz
Frame Buffer  2GB                       2GB                           2GB                    2GB
TDP           150W                      150W                          150W                   ~170W
Width         Double Slot               Double Slot                   Double Slot            Double Slot
Length        N/A                       9.5"                          7.5"                   10.5"
Warranty      N/A                       3 Year                        3 Year + Life          3 Year
Price Point   $299                      $309                          $329                   $319

The big difference between a semi-custom and a fully-custom card is of course the PCB; fully-custom cards pair a custom cooler with a custom PCB instead of a reference PCB. Partners can go in a few different directions with custom PCBs, using them to reduce the BoM, to reduce the size of the card, or to increase the capabilities of a product. For their GTX 660 Ti OC, Gigabyte has gone in the last direction, using a custom PCB to improve the card.

On the surface the specs of the Gigabyte GeForce GTX 660 Ti OC are relatively close to our other cards, primarily the Zotac. Like Zotac, Gigabyte is pushing the base clock to 1033MHz and the boost clock to 1111MHz, representing a sizable 118MHz (13%) base overclock and a 131MHz (13%) boost overclock respectively. Unlike the Zotac, however, there is no memory overclock, with Gigabyte shipping the card at the standard 6GHz.

What sets Gigabyte apart here in the specs is that they’ve equipped their custom PCB with better VRM circuitry, which means NVIDIA is allowing them to increase their power target from the GTX 660 Ti standard of 134W to an estimated 141W. This may not sound like much (especially since we’re working with an estimate on the Gigabyte board), but as we’ve seen time and time again, GK104 is power-limited in most scenarios. A good GPU can often boost to higher bins than the available power budget allows, which means that raising the power target indirectly raises performance. We’ll see how this works in detail in our benchmarks, but for now it’s enough to say that even with the same GPU overclock as the Zotac, the Gigabyte card usually ends up clocking higher.
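As a rough illustration of why the power target matters, here is a minimal sketch of bin selection under a power cap. The clock/power table is entirely hypothetical; NVIDIA does not publish GPU Boost’s actual tables or algorithm, so treat this as a conceptual model only.

```python
# Minimal sketch of boost bin selection under a power target.
# The clock/power pairs below are made-up illustration values,
# not NVIDIA's real GPU Boost data.

BOOST_BINS = [  # (core clock in MHz, estimated board power in W)
    (1097, 131),
    (1110, 134),
    (1124, 137),
    (1137, 140),
    (1150, 143),
]

def highest_sustainable_bin(power_target_w):
    """Return the highest clock whose estimated power fits within the target."""
    fitting = [clock for clock, power in BOOST_BINS if power <= power_target_w]
    return max(fitting) if fitting else None

print(highest_sustainable_bin(134))  # reference-style power target -> 1110
print(highest_sustainable_bin(141))  # higher (Gigabyte-style) target -> 1137
```

The point is simply that with more power headroom, the same silicon spends more time in its higher bins, which is consistent with what we see from the Gigabyte card in practice.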

Moving on, Gigabyte’s custom PCB measures 8.4” long, and it doesn’t bear a great resemblance to either the reference GTX 680 PCB or the reference GTX 670 PCB; as near as we can tell it’s completely custom. In terms of design it’s nothing fancy – though like the reference GTX 670, the VRMs are located at the front – and as we’ve said before, the real significance is the higher power target it allows. Otherwise the memory layout is the same as the reference GTX 660 Ti, with 6 chips on the front and 2 on the back. Due to its length we’d normally insist on some kind of stiffener for an open air card, but since Gigabyte has put the GPU back far enough, the heatsink mounting alone provides enough rigidity to the card.

Sitting on top of Gigabyte’s PCB is a dual-fan version of Gigabyte’s new Windforce cooler. The Windforce 2X cooler on the GTX 660 Ti is a bit of an unusual dual-fan design, pairing a relatively sparse aluminum heatsink with unusually large 100mm fans. This makes the card quite large and more fan than heatsink, which is not something we’ve seen before.

The heatsink itself is divided into three segments over the length of the card, with a pair of copper heatpipes connecting them. The bulk of the heatsink is over the GPU, while a smaller portion is at the rear and an even smaller portion is at the front, which is also attached to the VRMs. The frame holding the 100mm fans is then attached at the top, anchored at either end of the heatsink. Altogether this cooling contraption is both longer and taller than the PCB itself, bringing the final length of the card to nearly 10”.

Finishing up the card, we find the usual collection of ports and connections: 2 PCIe power sockets and 2 SLI connectors on the top, and 1 DL-DVI-D port, 1 DL-DVI-I port, 1 full size HDMI 1.4 port, and 1 full size DisplayPort 1.2 on the front. Meanwhile, toolless case users will be happy to see that the heatsink is well clear of the bracket, so toolless clips are more or less guaranteed to work here.

Rounding out the package is the usual collection of power adapters and a quick start guide. While it’s neither included in the box nor listed on it, the Gigabyte GeForce GTX 660 Ti OC works with Gigabyte’s OC Guru II overclocking software, which is available on Gigabyte’s website. Gigabyte has had OC Guru for a number of years now, and with this being our first look at OC Guru II, we can say it’s greatly improved from the functional and aesthetic mess that defined the previous versions.

While it won’t be winning any gold medals, in our testing OC Guru II gets the job done. Gigabyte offers all of the usual tweaking controls (including the necessary power target control), along with card monitoring/graphing and an OSD. Its only real sin is that Gigabyte hasn’t implemented sliders on their controls, meaning that you’ll need to press and hold buttons in order to dial in a setting. This is less than ideal, especially when you’re trying to crank up the 6000MHz memory clock by an appreciable amount.

Wrapping things up, the Gigabyte GeForce GTX 660 Ti OC comes with Gigabyte’s standard 3-year warranty. Gigabyte will be releasing it at an MSRP of $319, $20 over the price of a reference-clocked GTX 660 Ti and $10 less than the most expensive card in our roundup today.

Comments

  • TheJian - Monday, August 20, 2012

    Please point me to a 7970 for $360. The cheapest on newegg even after rebate is $410.

    Nice try though. "I'm pretty disappointed". Why? You got a 30in monitor or something? At 1920x1200 this card beats the 7970 GHz Edition in a lot of games. :) Skyrim being one, and by 10fps right here in this article...LOL.

    Mod to 670 isn't worth it when the shipped cards already beat it (3 of them did here). Remember, you should be looking at 1920x1200 and ignoring Ryan's BS resolution that only 2% or less use (it's a decimal point in the Steam hardware survey). If you're not running at 2560x1600, read the article again ignoring Ryan's comments. It's the best card at 1920x1200, regardless of Ryan's stupid page titles ("that darned memory")...ROFL. Why? Still tromps everything at 1920x1200...LOL.

    Got anything to say, Ryan? Any proof we'll use 2560x1600 in the world? Can you point to anything that says >2% use it? Can you point to a monitor using it that isn't a 27/30in? Raise your hand if you have a 30in...LOL.
  • JarredWalton - Tuesday, August 21, 2012

    http://www.microcenter.com/single_product_results....

    That's at least $20 cheaper than what you state.
  • CeriseCogburn - Thursday, August 23, 2012

    That's a whitebox version card only. LOL
  • TheJian - Friday, August 24, 2012

    And the prices just dropped, so yeah, I should be off by ~$20 by now :) White box, as stated. No game. Well, Dirt Showdown doesn't count, it's rated so low ;)

    But nothing that states my analysis is incorrect. His recommendations were based on 2560x1600 even though, as proven, 98% play at 1920x1200 or less, and the monitor he pointed me to isn't even sold in the USA. You have to buy it in Korea, from a site with a blank FAQ page, a blank help page, no phone, and a Gmail account for support. No returns. Are you going to buy one from out of the country from a site like that? Nothing I said wasn't true.
  • Mr Perfect - Thursday, August 16, 2012

    I wonder if any board partners will try making the board symmetrical again by pushing it up to 3GB? It's not like the extra RAM would do any good, but if you could keep an already memory-bandwidth-starved card humming along at 144GB/s and prevent it from dropping all the way down to 48GB/s, it might help.
  • CeriseCogburn - Sunday, August 19, 2012

    It doesn't drop to 48GB/s; that was just the reviewer's little attack.
    You should have noticed the reviewer can't find anything wrong, including any sudden loss of bandwidth, in this card or in the prior released nVidia models with a similarly weighted setup.
    The SPECULATION is what the amd fanboys get into; then for a year, or two, or more, they will keep talking about it, with zero evidence, and talk about the future date when it might matter... or they might "discover" an issue they have desperately been hunting for.
    In the meantime, they'll cover up amd's actual flaws.
    It's like the hallowed and holy of holies amd perfect circle algorithm.
    After years of the candy love for it, it was admitted it had major flaws in game, with disturbing border lines at shader transitions.
    That after the endless praise for the perfect circle algorithm, when push came to shove, and only in obscurity, we were told that no in-game advantage for it could be found, never mind the endless hours and tests spent searching for that desperately needed big amd fanboy win...
    So that's how it goes here. A huge nVidia advantage is either forgotten about and not mentioned, or actually derided and put down with misinformation and lies, until some amd next release, when it can finally be admitted that amd has had a huge fault in the exact area that was praised, and nVidia has a huge advantage and no fault even though it was criticized, and now it's okay because amd has fixed the problem in the new release... (then you find out the new release didn't really fix the problem, and a new set of spins and half-truths starts after a single mention of what went wrong).
    Happened on AA issues here as well. Same thing.
  • JarredWalton - Tuesday, August 21, 2012

    Most games are made to target specific amounts of memory, and often you won't hit the bottlenecks unless you run at higher detail settings. 1920x1200 even with 4xAA isn't likely to hit such limits, which is why the 2560x1600 numbers can tell us a bit more.

    Best case for accessing the full 2GB, NVIDIA would interleave the memory over the three 64-bit connections in a 1:1:2 ratio. That means in aggregate you would typically get 3/4 of the maximum bandwidth once you pass 1.5GB of usage. This would explain why the drop isn't as severe for the final 512MB, but however you want to look at it, there is technically a portion of RAM that can only be accessed at 1/3 the speed of the rest of the RAM.

    The better question to ask is: are we not seeing any major differences because NVIDIA masks this, or because the added bandwidth isn't needed by the current crop of games? Probably both are true to varying degrees.
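To put the interleaving arithmetic in the comment above into concrete numbers, here is a rough back-of-the-envelope sketch of the GTX 660 Ti's asymmetric 2GB/192-bit memory configuration. The split into a 1.5GB full-speed region and a 512MB single-controller region is the commonly assumed interpretation rather than anything NVIDIA has documented, and the 3/4-of-peak figure is Jarred's estimate for a 1:1:2 interleave.

```python
# Back-of-the-envelope bandwidth math for an asymmetric 192-bit / 2GB card.
# Assumes the first 1.5GB interleaves across all three 64-bit controllers
# and the last 512MB sits on a single controller; NVIDIA has not published
# the actual address mapping.

MEM_DATA_RATE_GBPS = 6.008   # effective per-pin data rate (6008MHz)
CONTROLLER_WIDTH_BITS = 64   # width of each of the three memory controllers
NUM_CONTROLLERS = 3

per_controller = MEM_DATA_RATE_GBPS * CONTROLLER_WIDTH_BITS / 8  # ~48.1 GB/s
full_speed = per_controller * NUM_CONTROLLERS                    # ~144.2 GB/s

print(f"First 1.5GB (all three controllers): {full_speed:.1f} GB/s")
print(f"Last 512MB (single controller):      {per_controller:.1f} GB/s")
print(f"~3/4-of-peak aggregate estimate:     {0.75 * full_speed:.1f} GB/s")
```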
  • CeriseCogburn - Thursday, August 23, 2012

    " GTX 660 Ti and 7950 tied at roughly 67fps. If you want a brief summary of where this is going, there you go. Though the fact that the GTX 660 Ti actually increases its lead at 2560 is unexpected. "

    Theory vs fact.
  • TheJian - Monday, August 20, 2012

    Memory starved at what? NEVER at 1920x1200 or less. Are you running a 30in monitor? All 24in monitors are 1920x1200 or below on Newegg (68 of them!). 80% of the 27inchers are also this way on newegg.com. 3GB has been proven useless (well, 4GB was):
    http://www.guru3d.com/article/palit-geforce-gtx-68...

    "The 4GB -- Realistically there was NOT ONE game that we tested that could benefit from the two extra GB's of graphics memory. Even at 2560x1600 (which is a massive 4 Mpixels resolution) there was just no measurable difference."

    "But 2GB really covers 98% of the games in the highest resolutions. "
    Game over even on 2560x1600 for 4GB or 3GB. Ryan is misleading you...Sorry. Though he's talking bandwidth mostly, the point is 98% of us (all 24in and down, most 27in) are running at 1920x1200 or BELOW.
  • Galcobar - Thursday, August 16, 2012

    Was wondering about how the Zotac was altered to stand in as a reference 660 Ti.

    Were the clock speeds and voltages lowered through one of the overclocking programs, or was a reference BIOS flashed onto it? I ask because as I understand NVIDIA's base/boost clock implementation, the base clock is set by the BIOS and is not alterable by outside software.
