EVGA GeForce GTX 580 Graphics Card
by Mark W. Hibben
One of the Best
The GTX 580 series of graphics cards, based on the nVidia Fermi processor, currently occupies the top slot in Passmark's ratings of graphics cards, and for that reason it was chosen for our X79 reference system, as described in X79 System Design. The Passmark graphics benchmarks of the X79 system were described in our recent review of the Gigabyte X79-UD5 motherboard, and there the benchmarks at least partially reflected overall system performance, not just the graphics card. In this review I'll focus on the features and performance characteristics that are truly unique to this graphics card, and that primarily means gaming. Gaming is where the desktop PC is still king, due in large part to the graphics performance of cards such as the GTX 580. When it comes to photo-realistic rendering, frame rate, and resolution, a modern desktop PC equipped with the GTX 580 will out-perform any other platform, be it console or mobile.
Model Features and Specs
In the spirit of cost effective performance, we chose the least expensive GTX 580 model from EVGA, P/N 015-P3-1580-AR, the basic non-overclocked model with 1536 MB of memory.
The 580 series has all 512 CUDA cores in the Fermi processor enabled, and this probably has a more direct effect on card performance than any other single factor. Our card runs at the stock rates of 772 MHz for the graphics clock, 1544 MHz for the processor clock, and 2004 MHz for the memory clock, and all of these can be overclocked using EVGA's overclocking utilities. These double-width cards sport 2 Dual Link DVI-D connectors for a maximum display resolution of 2560 x 1600 pixels, as well as a Mini-HDMI connector. Any two connectors can be active at the same time, including the HDMI. The HDMI output also fully supports digital audio, with audio output selectable in the Windows Sound control panel. The card of course supports nVidia standards such as PhysX, CUDA, and SLI, and we tested two-way SLI gaming performance for this review. The GTX 580 is certified for Win 7 and of course supports DirectX 11. nVidia rates the GTX 580 for a maximum power draw of 244 W and a maximum GPU temperature of 97 C. The GTX 580 also supports nVidia's implementation of stereoscopic 3D using active glasses (available from nVidia) and compatible monitors.
Although this card doesn’t feature the latest PCIE 3.0 interface (still PCIE 2.0 x16), it holds its own against even the newest AMD Radeon HD 7970 card, just edging out the 7970 in the Passmark 3D Graphics benchmark.
Installation and Use
Installing the EVGA card produced no problems due to mechanical misalignment; the card slotted right in. If possible, the user should install the card next to an open slot, or at least a slot occupied by a short card, so that the fan is unobstructed. The card takes an 8-pin and a 6-pin PCIE power cable, and here we have our first complaint, regarding EVGA's power guidance. EVGA's specs for the GTX 580 forward nVidia's recommendation that the system power supply be at least 600 W, but don't include nVidia's power rating for the GTX 580 of 244 W. Instead, EVGA recommends that the system power supply be capable of 42 Amps on the +12 Volt rail. The 600 W recommendation for system power is almost ludicrous in its lack of specificity. Is this for a single or multiple graphics card system? And to what type of processor and chipset platform does it apply? Power supply sizing is one of the major areas where home PC builders go wrong, and poor guidance such as this is partly to blame. (And it doesn't help that inadequate power can manifest itself in all sorts of flaky system behavior that doesn't seem to have anything to do with power, including seemingly inexplicable BSODs.) The second requirement is equally ludicrous, but in its bloat: simple math says that a card drawing 42 Amps at 12 Volts consumes 504 Watts, and that is complete nonsense. We measured power consumption of the card for this review and never saw anything close to this in steady state, even with the GPU overclocked. If the 42 Amp requirement is meant to cover a current spike, EVGA should say so, although I don't know how a consumer would use that information, since I haven't seen a supply that specs transient current. Furthermore, even very large supplies, such as the 1200 W Thermaltake ToughPower Grand that we used for the X79 system, typically aren't rated for such high current loads.
The Thermaltake is rated for 40 Amps on the +12V rail, or 480 W, and this was just about right for our 2-way SLI testing. See the Power Consumption Measurements section for more specific guidance.
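For readers doing their own sizing, the rail arithmetic we're objecting to is just Ohm's law. The sketch below is ours, not EVGA's or nVidia's; it converts a card's rated power into +12 V rail current, using the 244 W nVidia rating quoted above:

```python
# Rough +12 V rail sizing from a card's rated power (P = V * I).
# 244 W is nVidia's maximum rating for the GTX 580, quoted above.
CARD_POWER_W = 244
RAIL_VOLTS = 12.0

def rail_current(power_w, volts=RAIL_VOLTS):
    """Amps a load of power_w watts draws from a single rail."""
    return power_w / volts

print(f"One GTX 580: {rail_current(CARD_POWER_W):.1f} A")      # ~20.3 A
print(f"Two-way SLI: {rail_current(2 * CARD_POWER_W):.1f} A")  # ~40.7 A
```

Even at nVidia's maximum rating, two cards need about 41 Amps, so a 42 Amp recommendation for a single card is hard to justify.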
Another installation headache we encountered was with EVGA's supplied driver disk. Oh, the driver worked just fine, until we tried SLI. Then the nVidia control panel simply refused to show SLI as an option, although it recognized the two GTX 580 cards. The solution turned out to be to download the very latest driver directly from nVidia. Everyone in the Windows PC universe is familiar with the need to stay up-to-date with drivers, so downloading a driver patch is no big deal, but is it too much to ask that a supplied driver actually work? Apparently so, when it comes to new platforms like X79. Once the new driver (version 285.62) was installed, everything was fine, and configuring SLI and PhysX was very straightforward. Overall, we much prefer the nVidia control panel to AMD's Catalyst Control Center, which takes consumer handholding to new heights of obfuscation.
A final note regarding HDMI output: although we like HDMI as a high performance video + audio cable system with nearly bulletproof connectors in the full size version, we aren't big fans of the smaller mini and micro HDMI connectors. EVGA uses a mini HDMI connector to save space on the card and provides a short mini HDMI to full HDMI adapter cable. The adapter cable actually worked just fine, but the mini HDMI connector in the 580 housing seemed to have flimsy mechanical support. In general, mini and micro HDMI receptacles need very rigid support to ensure that the mating cable connector stays properly seated, but all too often, as in the case of the GTX 580, the support for the receptacle flexes excessively. HDMI cables tend to be heavy, and the weight of the cable tended to pull the cable connector out of the receptacle. More than once, we had to firmly re-seat the connector in order to get a picture.
There's no graceful degradation with HDMI; it either works or it doesn't, and unfortunately, the state of the nVidia control panel doesn't serve as a good guide. The card can still communicate with the display device, and therefore show it as normal in the control panel, even when the display isn't able to show a picture because of a bad connection or other cabling problem. A bit more diagnostic capability in this area would be nice, especially for non-technical consumers. Other than this connector issue, we never had any problems using HDMI output, and most of our game testing was done using the HDMI output to a 1080p HD TV.
Power Consumption Measurements
The EVGA GTX 580 manages power consumption by varying clock rates and core voltage in much the same way as modern CPUs such as the Intel Core i7 3930K. In EVGA Precision, which provides constant monitoring of almost all graphics card functions, you can see the card throttling back when the computer is idle: the graphics (core) clock drops from its maximum of 772 MHz to 50 MHz, the processor (shader) clock from its maximum of 1544 MHz to 101 MHz, and the memory clock from 2004 MHz to 135 MHz.
At idle, the stock GTX 580 was measured to draw a mere 33 Watts of wall plug equivalent power. Wall plug equivalent power is the power the power supply draws at the AC electrical outlet in order to power the card; it is higher than the DC power drawn by the card itself because of power supply losses (our supply's efficiency is approximately 80%). The idle power was determined by measuring the wall plug power draw of our X79 system after waiting approximately ten minutes with the system idle at the Windows login screen, then repeating the measurement with the card removed. (Yes, Windows will boot without a video card installed; the Windows audio annunciation provides assurance of this.) The difference between the two measured values gives the graphics card idle power. Knowing the idle power is essential for calculating the maximum power drawn by the card under full graphics load.
In general the power drawn by a PC has four main components:
Psys = Pproc + Pdrive + Pgraph + Pchipset
where all quantities are expressed as wall plug equivalent AC power. To calculate the full load graphics power consumption, we take the difference in system power between the full graphics load Psys and the Psys at idle. Assuming there’s no difference in power consumption for the drives and chipset, then the full load graphics power is given by:
Pgraph-full = Psys-full - Psys-idle + Pgraph-idle + Pproc-idle - Pproc-full
where, once again, all quantities are wall plug equivalent AC power. Some authors, such as Hilbert Hagedoorn (Guru3d), use an approximation that assumes negligible change in processor power consumption, but we found that this yielded an overestimate of graphics card power for our tests. Using CPUID's HWMonitor, we observed significant processor power consumption change during the graphics benchmark testing. We performed the power measurements under a variety of system configurations in order to determine sensitivity to overclocking of the CPU and GPU. In general, we didn't see much sensitivity to overclocking of the CPU, although there is some slight effect if the base clock is increased, as in the case of our "sweet spot" overclocking via the 1.25 fixed base clock multiplier. The main effect here is to increase the graphics card idle power. Overclocking the graphics card also increases idle power draw, mostly due to higher fan speed. (See the details of our overclocking tests later in this article.) The table on the next panel summarizes our measurement results. All power results are expressed as wall plug equivalent AC power with the card driving a single 1920 x 1200 pixel monitor at 60 Hz.
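The full-load calculation just derived reduces to a few lines of arithmetic. A minimal sketch, using hypothetical readings rather than our lab data:

```python
# Pgraph-full = Psys-full - Psys-idle + Pgraph-idle + Pproc-idle - Pproc-full
# All quantities are wall plug equivalent AC watts, per the derivation above.
def graphics_full_load(p_sys_full, p_sys_idle, p_graph_idle,
                       p_proc_idle, p_proc_full):
    """Full-load graphics power, correcting for the processor's own
    power swing between idle and full graphics load."""
    return p_sys_full - p_sys_idle + p_graph_idle + p_proc_idle - p_proc_full

# Hypothetical example: system rises from 150 W idle to 420 W under load,
# card idles at 33 W, CPU goes from 40 W idle to 80 W during the benchmark.
print(graphics_full_load(420, 150, 33, 40, 80))  # 263
```

Dropping the processor terms (the approximation mentioned above) would give 303 W for this example, overestimating the card by exactly the CPU's 40 W swing.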
| PC Configuration | Graphics Card Configuration | Test | Graphics Idle Power (Watts) | Maximum Graphics Power (Watts) |
| --- | --- | --- | --- | --- |
| Stock with XMP Memory at 2133 MHz | Stock Clock Rate | PT 7 3D Graphics | 33 | 230 |
| Stock with XMP Memory at 2133 MHz | 5% Overclock | PT 7 3D Graphics | 36 | 254 |
| Overclocked to 4.375 GHz with 125 MHz base clock | Stock Clock Rate | PT 7 3D Graphics | 36 | 225 |
| Overclocked to 4.5 GHz with 125 MHz base clock | Stock Clock Rate | PT 7 3D Graphics | 36 | 232 |
| Overclocked to 4.5 GHz with 125 MHz base clock | 5% Overclock | PT 7 3D Graphics | 39 | 249 |
| Stock with XMP Memory at 2133 MHz | Stock Clock Rate | Crysis 2 | 33 | 242 |
| Stock with XMP Memory at 2133 MHz | 5% Overclock | Crysis 2 | 36 | 263 |
| Stock with XMP Memory at 2133 MHz | 10% Overclock | Crysis 2 | 36 | 311 |
| Overclocked to 4.5 GHz with 125 MHz base clock | Stock Clock Rate | Crysis 2 | 36 | 248 |
| Overclocked to 4.5 GHz with 125 MHz base clock | 5% Overclock | Crysis 2 | 39 | 268 |
The Performance Test 7 3D Graphics test drove the GPU to maximum utilization, 98%, and maximum allowed clock rate as indicated in EVGA Precision, but didn’t cause the graphics card to draw as much power as during Crysis 2 game play, which also achieved the same GPU utilization of 98%. This is because the PT 7 test didn’t fully utilize the GPU memory, whereas Crysis 2 did.
In PT 7, memory utilization in the GPU was a low 166 MB, or 10.8% of the total memory of 1536 MB, whereas in Crysis 2 memory utilization was 1493 MB or 97.2%. This was the only difference visible in the EVGA Precision readout. We consider that Crysis 2 fully maxed out the 580, and therefore the Crysis 2 power consumption values represent realistic maximum values. To get a conservative DC power estimate, multiply the above numbers by 0.9 (equivalent to a highly efficient power supply). For purposes of power budgeting for an SLI system we recommend using the above single card values multiplied by the number of cards in your system. This is conservative guidance, since often in SLI, both cards are not fully utilized, but we consider it appropriate margin. As usual, the reader uses these values at the reader’s risk, and the author accepts no responsibility for damage or unintended consequences of following the guidance offered here.
A point I've made in previous articles is that for modern DirectX 11 games, the choice of graphics card is far more important than the computer platform that hosts the card. To illustrate this, we performed a series of benchmarks using Crysis 2 at 1920 x 1080 screen resolution, with all graphics quality settings at maximum and the DX11 and Tessellation packs installed. We mixed and matched between our X79 system, an older X38 system, and the EVGA GTX 580 and EVGA GTX 470 cards. The GTX 470 was a good choice for this comparison because it has PhysX and DX11 capability just like the 580 and uses the same nVidia Win7 x64 driver (version 285.62), but is not quite as fast as the 580. Literally, all we did was plug cards in and out between the systems. The X38 system uses the Asus P5E WS motherboard with a Core 2 Quad Q9550, a 4-core, non-hyperthreaded processor running at 3.0 GHz, and 4 GB of DDR2 1066 memory. Our X79 system uses the Gigabyte X79-UD5 motherboard with a Core i7 3930K processor running at 3.8 GHz (stock clock rate) and 16 GB of DDR3 2133 MHz XMP memory. Both systems run the Windows 7 Ultimate 64-bit edition. As can be seen from the graph on the next panel, frame rate performance mostly correlated with the choice of graphics card, whereas the choice of processor platform made very little difference.
Naturally, this behavior may vary from game to game. We chose Crysis 2 specifically because it makes such heavy use of the nVidia card’s capabilities, not only for DirectX, but also PhysX. Our point here is that if the reader is contemplating a system upgrade to improve gaming performance, the first place to look is the graphics card, and the GTX 580 currently offers the best performance. Usually a graphics card will drop in without major disruption to the rest of the system, either hardware or software, so it makes a very convenient upgrade path.
If your motherboard supports SLI, adding a second high end card is another great way to boost performance for most games, as we found in our SLI testing. Not surprisingly, overclocking the CPU turned out to have little effect on frame rate performance for most games, underscoring the unimportance of the processor platform, so SLI will usually provide a larger performance increase than CPU overclocking. We tested both approaches in various combinations on four games: Batman – Arkham City, Crysis 2, Modern Warfare 3, and Elder Scrolls V: Skyrim. All testing was performed on our X79 system at a screen resolution of 1920 x 1080 (1080p), driven from the HDMI port of the GTX 580. The stock X79 configuration used default clock settings except for the XMP memory clock, which was set to 2133 MHz.
The overclocked configuration had the base clock set to 125 MHz, with a turbo mode maximum multiplier of 37 for a maximum clock rate of 4.625 GHz, and the XMP memory clock was set to 2000 MHz. Graphics settings were maxed out and vertical sync disabled wherever possible.
As can be seen, not all games benefit equally from overclocking or SLI, but generally SLI can yield frame rate improvements of about 50%, making it a good option for performance enhancement of a gaming rig. Skyrim doesn't provide a way to disable V-sync within the game, and you can see the effect of this in our results.
Overclocking the GPU
EVGA provides some useful tools for software controlled overclocking of the card from within Windows. The most useful tool is EVGA Precision (EVGAP), which provides a convenient way to bump up clock rates, as well as monitor the state of the GPU via a scrolling graph. In EVGA Precision you can also configure the software to load your settings on system startup, eliminating the need to manually set up overclocking when you turn on the computer. Almost. There’s one vital adjustment that was left out, Vcore of the GPU, which is set in another utility called EVGA Voltage Tuner, and unfortunately, this setting has to be applied manually with every startup of the computer, since it can’t be saved or applied automatically. This wouldn’t be so bad except that increasing the core voltage is essential for even mild overclocking in order to keep the card, and your display output, from locking up. Why voltage adjustments can’t be saved, we can’t imagine, except perhaps to make the overclocked versions of the GTX 580 more attractive.
For this review we explored two sets of overclock settings corresponding to 5% and 10% faster than the stock configuration. Our settings for the card are shown below.
| GPU Parameter | Stock | Overclock 5% | Overclock 10% |
| --- | --- | --- | --- |
| Vcore | 1.013 V | 1.063 V | 1.138 V |
| Core (Graphics) Clock | 772 MHz | 811 MHz | 849 MHz |
| Processor (Shader) Clock | 1544 MHz | 1622 MHz | 1698 MHz |
| Memory Clock | 2004 MHz | 2104 MHz | 2204 MHz |
| Fan Setting | Auto (1770 RPM at idle) | Manual, 4140 RPM | Manual, 4140 RPM |
For the overclock settings, we manually set the fan speed to the maximum allowed in EVGA Precision. This was noisy, but guaranteed that the GPU stayed below its thermal design limit of 97 C.
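As a quick sanity check on the table, the overclocked rates are simple percentage bumps on the stock clocks, with the shader clock running at exactly twice the core clock (a fixed ratio on Fermi GPUs, which the table reflects):

```python
# Derive the table's overclock settings from the stock clocks.
STOCK_CORE_MHZ = 772
STOCK_MEM_MHZ = 2004

def oc(stock_mhz, pct):
    """Stock rate bumped by pct percent, rounded to whole MHz."""
    return round(stock_mhz * (1 + pct / 100))

for pct in (5, 10):
    core = oc(STOCK_CORE_MHZ, pct)
    print(f"{pct:>2}%: core {core} MHz, shader {2 * core} MHz, "
          f"memory {oc(STOCK_MEM_MHZ, pct)} MHz")
```

Running this reproduces the 811/1622/2104 MHz and 849/1698/2204 MHz settings shown above.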
There is some benefit to overclocking the GPU, but not much. The Performance Test 7 results did show an improvement in the 3D Graphics Mark, as can be seen in the chart on the next panel, which compares stock, 5%, and 10% overclocked GPUs running on our stock X79 system driving a 1920 x 1200 pixel display. Note that the stock GPU configuration, at 4288.8, still scores higher than the average score of 3944 posted on Passmark's web site and shown earlier in this review. Passmark's 3D Graphics benchmark is not entirely platform independent, and does benefit from the high performance X79 system.
In Crysis 2 testing, the performance improvement was even more slender. Shown on the next panel is a table comparing the PT 7 3D Graphics Mark and the Crysis 2 frame rates for the stock X79 system and the stock and overclocked GPU. Crysis 2 frame rates reflect the fact that a 1080p monitor was used for the tests, whereas a 1920 x 1200 monitor was used for the Performance Test 7 benchmark.
| Test | Stock GPU | 5% OC GPU | 10% OC GPU |
| --- | --- | --- | --- |
| PT7 3D Graphics Score | 4288.8 | 4448.4 | 4501 |
| PT7 3D Graphics % Change | NA | 3.7% | 4.9% |
| Crysis 2 Frame Rate | 50.27 | 51.53 | 52.31 |
| Crysis 2 Frame Rate % Change | NA | 2.5% | 4.1% |
All this discussion of benchmarks and frame rates really ignores the fundamental virtue of the current generation of nVidia and AMD Radeon graphics cards. In modern DirectX 11 based games such as Crysis 2 and Modern Warfare 3 these cards produce a near photo-realistic rendering of the game environment that game consoles can’t match. An example of this from Crysis 2 is shown on the next panel. The quality difference can be hard to quantify, which is why reviewers tend to ignore it, but the difference is discernible. With the right hardware, the PC gaming experience is far superior to the current generation of consoles. For the reader looking to build or upgrade a gaming rig, the GTX 580 is a great choice.