GTX 680 - Post opinions on performance

Discussion in 'General Discussion' started by Bart S, Mar 22, 2012.

  1. Bart S

    Bart S Member

    Joined:
    Oct 5, 2010
    Messages:
    843
    Likes Received:
    86
    OK, so 6.5GHz is a long shot, but with the smaller, cooler-running architecture, gains of 1.5-2.5GHz are very reasonable. The new Z77 motherboard manufacturers are already claiming support for 2600MHz overclocking capabilities, but they don't mention anything about a cooler lol.
     
  2. DurgeDriven

    DurgeDriven Banned

    Joined:
    Mar 20, 2012
    Messages:
    6,320
    Likes Received:
    42
    I think that is short-sighted.

    You do not pay for PCI-e 3.0... it is free. lol?

    The 7970 tested 9% faster at PCIe 3 in compute; true, no advantage for games... but no foul either.

    If you're going to have 2 x Gen3 cards, Sandy Bridge is limited to 8x/8x.

    Sandy-E has 40 lanes, aka 16x/16x plus 8 spare lanes; you think that will not help a more powerful Gen3 dual-card setup?
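    The lane arithmetic here can be sketched as a toy illustration (the function name and simplified allocation rule below are my own; real boards vary, especially ones with an NF200-style switch):

```python
# Hypothetical sketch of the lane budget being argued: Sandy Bridge CPUs
# expose 16 PCIe lanes, Sandy Bridge-E exposes 40. With no switch chip,
# two cards split whatever the CPU provides.
def dual_card_split(cpu_lanes):
    """Return (card1, card2, spare) lane widths for a dual-GPU setup."""
    if cpu_lanes >= 32:
        return (16, 16, cpu_lanes - 32)  # full 16x/16x, lanes left over
    half = cpu_lanes // 2
    return (half, half, 0)               # even split, e.g. 8x/8x

print(dual_card_split(16))  # Sandy Bridge -> (8, 8, 0)
print(dual_card_split(40))  # Sandy-E      -> (16, 16, 8)
```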
     
  3. jonchard

    jonchard Registered

    Joined:
    Apr 7, 2012
    Messages:
    65
    Likes Received:
    0
    I had a 780 SLI board that allowed 3 x 16x slots. My last P67 board (UD7) was 3 x 16x, and this board, a Maximus Z68, is 2 x 8x. There is no difference. This doesn't mean that I wouldn't take PCI-E 3.0 for my next board. I just don't think it's a selling point or a reason to "jump ship".
     
  4. DurgeDriven

    DurgeDriven Banned

    Joined:
    Mar 20, 2012
    Messages:
    6,320
    Likes Received:
    42
    Sandy is 8x/8x no matter the motherboard... all she wrote.

    You're not going to dispute that 16x/16x is faster, no matter how small the difference.
    Why did so many people get a triple-channel CPU?

    Sandy-E and Ivy boards will offer more bandwidth to hungry dual Gen3 cards, I've got no doubt.

    How big a difference is anyone's guess. ;)
     
  5. jonchard

    jonchard Registered

    Joined:
    Apr 7, 2012
    Messages:
    65
    Likes Received:
    0
    I did get it wrong... here's the correct info (not that it matters, but I love to argue lol!!! - forgive me!)

    The UD7 is a tri-SLI board offering 16x/16x/8x:

    Multi-display Support with 3-way SLI™ and 3-way CrossFireX™

    1. 2 x PCI Express x16 slots, running at x16 (PCIEX16_1, PCIEX16_2) - mobo
    * For optimum performance, if only one PCI Express graphics card is to be installed, be sure to install it in the PCIEX16_1 slot; if you are installing two PCI Express graphics cards, it is recommended that you install them in the PCIEX16_1 and PCIEX16_2 slots.

    2. 2 x PCI Express x16 slots, running at x8 (PCIEX8_1, PCIEX8_2) - NF200
    * The PCIEX8_1 slot shares bandwidth with the PCIEX16_1 slot and the PCIEX8_2 slot with PCIEX16_2. The PCIEX16_1/PCIEX16_2 slot will operate at up to x8 mode when the PCIEX8_1/PCIEX8_2 is populated.

    My older 780i SLI board was 16x/16x/16x!! I believe Sandy Bridge got away with it due to lower latencies.

    My argument still stands that there will be zero FPS difference (although I'm truly taken by the GPU-Compute benchies you mentioned!)

    Interesting you mention X58 (the triple-channel CPU), as they released socket 1155 to blow away its triple-channel memory bandwidth with something running back on a 100MHz bus! Very interesting. I added 10GB/s more memory bandwidth by running a rig that was technically inferior!

    Sandy Bridge was a massive step forward and one of Intel's greatest achievements in the last 5 years. I'm hoping future tocks of Ivy Bridge bring about similar changes. Anyone putting their money into AMD CPUs/boards during the next 2 years is simply wasting their money and throwing FPS out the window.
     
  6. jonchard

    jonchard Registered

    Joined:
    Apr 7, 2012
    Messages:
    65
    Likes Received:
    0
    DurgeDriven, I also come from the past, where we saw the PC bus move from PCI to AGP 1.0, then AGP 2.0, then PCI-e 1.0, then PCI-e 2.0, and now PCI-e 3.0.

    Through all these changes, graphics cards' onboard memory has become big enough that data doesn't travel the bus back to system memory as it did in the old AGP days; performance relies more on latency than bandwidth. PCI-e 2.0 cards are all 16x-speed cards, but never suffer when put into 8x slots. So will PCI-e 3.0 really make a difference? I truly doubt it, but I forever live in hope that these ridiculous spec wars will one day lead to additional performance!! I would bet that there will be zero FPS difference for a PCI-e 3.0 card supposedly strangled in an 8x PCI-e 2.0 slot.
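    The bet is easy to frame numerically. A rough sketch, using the published per-lane signalling rates and encoding overheads as assumptions (real transfers see less than these theoretical maxima):

```python
# Back-of-envelope, per-direction PCIe slot bandwidth.
# Gen2: 5 GT/s per lane with 8b/10b encoding -> 0.5 GB/s usable per lane.
# Gen3: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s per lane.
RATE_GT_S = {2: 5.0, 3: 8.0}
ENCODING = {2: 8 / 10, 3: 128 / 130}

def slot_bandwidth_gb_s(gen, lanes):
    """Theoretical one-way bandwidth of a PCIe slot, in GB/s."""
    return RATE_GT_S[gen] * ENCODING[gen] * lanes / 8  # Gb/s -> GB/s

for gen, lanes in [(2, 8), (2, 16), (3, 8), (3, 16)]:
    print(f"PCIe {gen}.0 x{lanes}: {slot_bandwidth_gb_s(gen, lanes):.2f} GB/s")
```

    Notably, a Gen3 x8 slot (~7.9 GB/s) almost exactly matches a Gen2 x16 (8 GB/s), which is why the "strangled in an 8x slot" claim is worth questioning.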

    Where PCI-e 3.0 will come into effect is when we have graphics cards, SSDs and Thunderbolt-type bandwidth requirements all at the same time.
     
  7. MKD

    MKD Registered

    Joined:
    Mar 16, 2012
    Messages:
    49
    Likes Received:
    1
    10% on PCIe 3.0, EASY.. AND if you are tossing 2 or 3 in SLI driving 3 screens at huge resolution, YOU will USE and appreciate PCIe 3.0. On a single-card system driving one HD monitor it will have no effect.. BUT the bandwidth passing through the bus in super-high-resolution setups is seeing a big bonus..


    AND I can tell you personally, with my old GTX 280s SLI'd, Nvidia support narrowed my stutter issues down to one lane being 8x.. That was on a 1366 board.. the day I moved to another 1366 with two 16x lanes, my stutters were memories...

    It is the huge bandwidth of triple or more screens that is going to utilize PCIe 3.0.

    I'll let ya know in another week or two... as soon as Ivy comes out of hiding.
     
  8. Bart S

    Bart S Member

    Joined:
    Oct 5, 2010
    Messages:
    843
    Likes Received:
    86
    April 24-25 is the date; checkbook or credit card, I don't care, get me on a Z77 and an overclocking monster please. I believe the first gen of Ivy will not show off PCIe 3 to its potential; you will have to wait till the architecture sees its first early revisions, then I think we will all be stunned. I think I will go for a £170 mid-to-high-range mobo, a cheap Ivy chip to get me going, and the best memory on release day. Within a year you will find out which Ivys are the ones to go for, and no matter which I buy to start with, I will get good money for it on eBay.
     
  9. DurgeDriven

    DurgeDriven Banned

    Joined:
    Mar 20, 2012
    Messages:
    6,320
    Likes Received:
    42
    You are missing something, mate: the Sandy Bridge platform is limited to 16 PCI-e lanes, aka 8x/8x.

    Sandy-E has 40 PCI-e lanes, so you can run 16x/16x/8x.

    When you get your new Ivy Bridge CPU and 2 Gen3 cards, you will be running 8x/8x on Sandy. lol

    Tell me then it does not make a difference...... :)
     
  10. jonchard

    jonchard Registered

    Joined:
    Apr 7, 2012
    Messages:
    65
    Likes Received:
    0
    OK, for the record, I own 2 x 580s. I am a benchmark whore. Compared with the UD7, which I showed allows 16x/16x, I have now fallen back to a Maximus IV, which happens to be 8x/8x - and I SEE ZERO FPS differences, with both my cores screaming at 950MHz on the 580s and my processor running at 5.1GHz. Proof is in the pudding. I have the pudding! SB may drop to 8x, unless you have a dedicated board such as the UD7, but its LATENCY IS HALF the latency seen on 1366 boards. Hence the 8x makes NO DIFFERENCE.

    No, the 280s spoken about are lame bandwidth users compared to the 580s. My 580s in SLI, when clocked (to 950), are able to shift 200GB/s of pure memory bandwidth - do I see stutters falling back to an 8x/8x board? NO, absolutely NOT. Am I fanatical about smoothness, to the point that only a CRT reveals true smoothness, and ALL LCDs, even 120Hz, are faulted in one way or another? I am a fanatic.

    From my observations, I will hazard a guess that a PCI-e 3.0 card such as the 680 will NOT be hampered in an 8x slot. Why not? Because its bandwidth is significantly less than two screaming 580s. The maths is the maths! Stock GTX 580 memory bandwidth = 192.4 GB/sec; GTX 680 memory bandwidth = 192.2 GB/sec; the memory bandwidth of my clocked 580s is 220GB/sec each.
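    To put that maths side by side (card numbers are the ones quoted in this thread; slot numbers are theoretical per-direction maxima - a sketch, not a benchmark):

```python
# Rough ratio of on-card memory bandwidth to what the slot can deliver
# per direction. Card figures as quoted in this thread; slot figures are
# theoretical maxima (PCIe 2.0 x8 = 4, x16 = 8, 3.0 x16 ~ 15.75 GB/s).
CARD_MEM_BW_GB_S = {
    "GTX 580 (stock)": 192.4,
    "GTX 680 (stock)": 192.2,
    "GTX 580 @ 950MHz": 220.0,
}
SLOT_BW_GB_S = {"PCIe 2.0 x8": 4.0, "PCIe 2.0 x16": 8.0, "PCIe 3.0 x16": 15.75}

for card, mem_bw in CARD_MEM_BW_GB_S.items():
    for slot, bus_bw in SLOT_BW_GB_S.items():
        print(f"{card}: {mem_bw / bus_bw:.0f}x the bandwidth of a {slot} link")
```

    Even a Gen2 x8 link is a small fraction of what the card's local memory can feed it, which is the crux of the argument that the slot rarely bottlenecks once textures are resident on the card.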

    680s will NOT be affected in the FPS stakes by using them in a PCI-e 2.0 slot. I appreciate that optimised drivers may fluctuate a few FPS here and there, and I'm still intrigued by the GPU-Compute difference that was mentioned, but I would probably put this down to the change of board with optimised drivers more so than bandwidth ever making a difference in this given instance.

    I come from a background of a decade in PCs, watching bus bandwidth make little difference in the real world. The only bus I can really wax lyrical about is SATA3, and only because dual SSDs smash it to bits! (Don't get me started on why any SSD less than 512GB is a waste of money from a bandwidth perspective lol!)

    PS guys, I love the banter and I love to learn!
     
  11. jonchard

    jonchard Registered

    Joined:
    Apr 7, 2012
    Messages:
    65
    Likes Received:
    0
    MKD, did you try the same board from the same manufacturer, or a different manufacturer? My point being, I doubt it was the 16x change that made the difference!

    MKD, I argue that the bus width makes NO difference at all when all cards these days have enough RAM on them to do their thing. The display of 3 screens is not affected by bus bandwidth, as each screen's output comes directly out the back of the graphics card and hence uses the card's own bandwidth. If we were displaying across Thunderbolt, then bus bandwidth would become important; but when each card is responsible for the screen output, they do not use the bus to display. High res simply requires a large framebuffer and a monster graphics card to achieve it.

    I will admit that a 4x slot will definitely hamper performance, but for the most part, and especially in the low-latency Sandy Bridge 8x slots, these provide enough bandwidth for a card like a 580, which is comparable to the 680 in bandwidth terms.

    CONSIDER THIS:
    If they release another dual-GPU monster such as a 690 and they don't use an onboard NF200 equivalent, then that sort of card will need a bigger bus. But the manufacturers have always gotten around this by putting their own internal bus between the GPUs using the NF200. And in these instances, such as the 590, they do not suffer by not being in a PCI-e 3.0 socket!! I'd put a thousand-pound bet on this: take the 590, for which you should be claiming bandwidth starvation in a Sandy Bridge 8x socket, put it into a Z77 socket with PCI-e 3.0, and watch whether its FPS climbs through the roof. My thousand pounds says that moving a 590 onto PCI-e 3.0 will bring about next to ZERO FPS change.
     
    Last edited by a moderator: Apr 15, 2012
  12. DurgeDriven

    DurgeDriven Banned

    Joined:
    Mar 20, 2012
    Messages:
    6,320
    Likes Received:
    42
    How do you have a clue what Gen3 cards will need or use? :)
    What is their bandwidth in dual?

    The GTX 590 is not Gen3.

    I was not meaning a GTX 680 either, certainly not a single one.

    Sandy Bridge is 16 lanes, Sandy-E is 40 lanes... all I said.
     
  13. jonchard

    jonchard Registered

    Joined:
    Apr 7, 2012
    Messages:
    65
    Likes Received:
    0
    DurgeDriven, I'm offering a REALISTIC view based on experience. My 20 years in this game tells me with absolute certainty that Gen3 cards will perform no better in or out of a Gen2 slot - unless NVidia have managed to double their bandwidth usage overnight, I know that Gen2 will not be the limit. The example of a GTX 590 should really have been replaced with the 7970, as this is a Gen3 card and allows far higher bandwidth than any other single card (much higher than the GTX 680!). I cannot wholeheartedly tell you this is the case, but I KNOW, without having a Gen3 card, that its bandwidth effect will never be seen on Gen2 hardware. If TWO Gen3 cards shared a single slot with no NF200 equivalent, then indeed Gen3 bandwidth is vital.

    My point is, those upgrading purely for Gen3 PCI-e 3.0 and expecting great FPS boosts are simply spending money for nothing in return. There was a chap on a forum stating that he couldn't wait to upgrade his Gen2 board to Gen3 so he could finally see the gains on his dual 570s. Well, the guy is on a hiding to nothing! He wasn't interested in CPU, just GPUs. I think people who misunderstand to this degree need educating, and simply shouting that a Gen3 card would be starved in a Gen2 slot is erroneous and wrong at this current time.

    My original point being that for those who are cash-starved, buying a Z68-based Sandy Bridge is great value and awesome performance for little money, really. Those with cash, sure, buy Ivy Bridge, it will kick @rse! (I wouldn't be buying the 1155 version though!! I would want the all-singing, all-dancing 2011 version!)

    And Durge, to back you up: if I were buying next year's 512-bit-bus version of Kepler (it's coming!) rather than the 256-bit of the current GTX 680 equivalent (they will undoubtedly call it the 780), then I would buy NO LESS THAN THE IVY BRIDGE 2011 version. This new card may well starve the bus on Gen2!!

    DurgeDriven, I look forward to the next time we knock heads! :eek:)
     
  14. Bart S

    Bart S Member

    Joined:
    Oct 5, 2010
    Messages:
    843
    Likes Received:
    86
    I think we will have to wait till the Ivy Bridge 2nd-gen revision to see PCIe 3 shine for its purpose. It's always a gentle transition, so we aren't forced to upgrade everything at once but in a timely fashion; Ivy Bridge needs a trial run first in its shrink-wrapped package before any solid revision work is done on its architecture to bring out PCIe 3's potential. Buy a Z77 board soon, get a mid-range Ivy when it comes out, wait 12 months, then get your daddy Ivy.
     
  15. MKD

    MKD Registered

    Joined:
    Mar 16, 2012
    Messages:
    49
    Likes Received:
    1
    I changed boards..

    And is all your testing at the high resolutions required for 3 screens or more? I think that is the key to 8x vs 16x.. high resolution.. All the testing uses 1 screen..
     
  16. jonchard

    jonchard Registered

    Joined:
    Apr 7, 2012
    Messages:
    65
    Likes Received:
    0
    MKD, only the framebuffer affects high res, and of course brute-force throughput on the card itself. PCI-E at THIS stage will barely affect it, if at all. I have a friend who went to the expense of 3GB 580 SLI cards... his FPS is the same as with my 1.5GB 580 SLI cards. Now, there are instances where you can flood the framebuffer and show the difference between the two cards, but it only ever needs mild tweaking, like slightly lowering AA, to bring both back in line again, and it definitely needs 3 screens to see this effect. Single-screen performance is identical on both.

    AMD cards always needed a bit more RAM than the NV equivalent, especially with AA involved; I always felt NV with 1.5GB allows similar settings to AMD with 2GB. That said, I have never seen an AMD card match the suggested equivalent Nvidia card when it comes to outright quality, and by this I mean the feel in games on turning, shooting, precision and smoothness of frames. IMHO, AMD never came close. The worst example where benchmarks showed two cards to be all but identical was the 4870X2 and the GTX 295. The benchmarks were comparable, but when you had these things in your hands the GTX 295 wiped the floor with the AMD in every instance and on every game. The smoothness and "quality" of performance was incomparable.

    (It's only now I look at your specs to see you have an AMD lol!! Here I go offending again! - btw, nice clocks! But get your i5 up; it will surely do 4.6 on air without an issue. They are cool chips compared with the i7!)
     
  17. Bart S

    Bart S Member

    Joined:
    Oct 5, 2010
    Messages:
    843
    Likes Received:
    86
    He's on a watercooled GTX 680 now, jonchard, and from what it sounds like he will have 2 soon.
     
  18. jonchard

    jonchard Registered

    Joined:
    Apr 7, 2012
    Messages:
    65
    Likes Received:
    0
    MKD, if you do dual 680s then I would love to see what benchies you pull. You will need to push 5GHz to see the best from them both, though! But I would love to see any real-world benchies. I have been watching the dual-SLI results and have been a little subdued by the results I see. Compared with my own rig, I see little to jump ship for at the moment. I need to see some decent benchies at 5GHz-plus on the latest Ivy Bridge platforms. Where I really do see some impressive figures is with 3DMark 2011, but this just doesn't reflect the real world. MKD, or anyone out there, if you go down the dual-680 route with a fast CPU, then please post!!

    What clock speeds do you get watercooled? My 580s, when clocked, heat my water significantly, to the point that my two rather large rads get pretty warm, and there is probably 5 litres of water amongst it all! I was surprised tbh. Keeps my lounge, child and wife warm lol!
     