System Setup
| CPU: | Intel Pentium IV 520 2.8E (14*200MHz) | Overclocked: 3.5GHz (14*250MHz) |
| Motherboard: | Asus P5LD2-Deluxe |
| Memory: | 2*512MB Crucial BallistiX PC6400 DDR2 |
| Hard Drives: | 80GB WD Caviar SE |
| Video Cards: | Gigabyte GV-RX18L256V-B - 500MHz/495MHz (overclocked: 558MHz/495MHz) | Asus Extreme AX800XL-2DTV - 398MHz/490MHz |
| Operating System: | Windows XP Professional Service Pack 2 w/DirectX 9.0c |
| Drivers: | ATi Catalyst 5.11 |
| Cooler: | Swiftech H20-120 Rev 3 |
| Case: | CoolerGuys Windtunnel IV |
| Power Supply: | Enermax |
| Software: | Fraps 2.5.3 | Bench'em All 2.654 |
| DirectX Benchmarks: | F.E.A.R. | Splinter Cell 3 - Built-in Demo | Far Cry | Half-Life 2 - Anandtech Canals Test | Serious Sam 2 Demo - Self-made Demo |
| OpenGL Benchmarks: | Doom 3 timedemo | SPECviewperf 8.0 |
All the tests were run at both 1024*768 and 1600*1200, with the exception of SPECviewperf, which ran at a 2D resolution of 1600*1200. Apart from SPECviewperf, every test was run both with no anisotropic filtering and no anti-aliasing, and again with 4X anti-aliasing and 16X anisotropic filtering. Splinter Cell 3 and Serious Sam 2 were also run with full PS 3.0 features enabled in addition to the other tests, but only when overclocked. SPECviewperf, Doom 3, and Far Cry were run through Bench'em All; the remaining tests were run with Fraps recording the per-frame and per-second results.
For F.E.A.R. we used the built-in graphics test demo, as it shows most, if not all, of the main portions of the game. For Splinter Cell 3 we used the built-in timedemo, with a batch file that allowed us to control which PS version was used (thanks to Beyond3D for this). Serious Sam 2 was run using a demo I recorded in the first part of the level included with the downloadable demo. The overclocked results were all run with the CPU overclocked as well, since some points in our tests appeared to be CPU bound (as you will see). In all honesty, the two video card clock speeds on their own don't show much of a difference, so much of the increase is due to the increase in CPU speed.
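Since Fraps only writes out raw numbers, a small script is useful for turning its per-second log into the averages quoted in the charts. Below is a minimal sketch, assuming a simple CSV with one FPS reading per line (roughly what Fraps' benchmark logging produces); the filename is hypothetical.

```
# Minimal sketch: average the per-second FPS values Fraps logs during a benchmark run.
# Assumes a simple CSV with one FPS reading per line (header optional);
# the filename "fear_1024x768_noAA.csv" is hypothetical.
import csv

def average_fps(path):
    values = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue
            try:
                values.append(float(row[0]))
            except ValueError:
                continue  # skip a header line such as "FPS"
    return sum(values) / len(values) if values else 0.0

if __name__ == "__main__":
    print(f"Average FPS: {average_fps('fear_1024x768_noAA.csv'):.1f}")
```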

| SPECviewperf 8.0 | Gigabyte GV-RX18L256V-B | Asus Extreme AX800XL-2DTV | Gigabyte GV-RX18L256V-B 558/495 |
| 3dsmax-03 | 15.24 | 13.21 | 20.71 |
| catia-01 | 11.79 | 11.63 | 14.66 |
| ensight-01 | 17.92 | 16.09 | 21.00 |
| light-07 | 10.11 | 10.04 | 12.59 |
| maya-01 | 14.42 | 14.32 | 18.12 |
| proe-03 | 14.55 | 14.32 | 17.44 |
| sw-01 | 12.79 | 12.47 | 14.96 |
| ugs-04 | 13.38 | 13.38 | 15.69 |
Of all the tests that SPECviewperf runs through, only two show any significant improvement from the newer core: 3dsmax-03 and ensight-01, which gain 15.4% and 11.3% respectively from the move to the x1800XL. Otherwise the GPU doesn't make any real difference, though with the CPU overclock every test improved by anywhere from 17% to 36%. Overall this synthetic professional application doesn't show much of a difference between newer generations of mainstream hardware, with the two exceptions mentioned above.
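For reference, the gains quoted here are simple percentage deltas over the x800XL and over the stock x1800XL. The short Python sketch below just recomputes them from the table above; the numbers are copied from the table and the code adds nothing beyond the arithmetic.

```
# Sketch of the percentage arithmetic behind the SPECviewperf comparison.
# Scores copied from the table: (x800XL, x1800XL stock, x1800XL 558/495 + CPU OC).
scores = {
    "3dsmax-03":  (13.21, 15.24, 20.71),
    "catia-01":   (11.63, 11.79, 14.66),
    "ensight-01": (16.09, 17.92, 21.00),
    "light-07":   (10.04, 10.11, 12.59),
    "maya-01":    (14.32, 14.42, 18.12),
    "proe-03":    (14.32, 14.55, 17.44),
    "sw-01":      (12.47, 12.79, 14.96),
    "ugs-04":     (13.38, 13.38, 15.69),
}

for test, (x800, x1800, x1800_oc) in scores.items():
    core_gain = (x1800 - x800) / x800 * 100      # x800XL -> stock x1800XL
    oc_gain = (x1800_oc - x1800) / x1800 * 100   # stock -> overclocked card + CPU
    print(f"{test:<11}  core: {core_gain:5.1f}%   overclock: {oc_gain:5.1f}%")
```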
Moving on to our game-based OpenGL test, we will look at Doom 3. The game and its engine were widely anticipated, and since its release other games have been built on the same engine, such as Quake IV. The quality of the rendered image is very nice indeed, though the environment is a little dark for some details. Let's see how this 'next' generation card handles the game.

At the lowest settings we tested, the x1800XL easily stays above 60fps. The x800XL essentially reaches the 60fps mark, falling only 0.3fps short. Moving to the x1800XL gives us an 11% increase in frame rate, which translates into the 7fps gain we see. Overclocking gives an 18% improvement, much of which can be attributed to the increase in CPU speed. What do the AA and anisotropic filtering tests show, though?

Here is where the marked difference between the generations starts to show. The x800XL averages just over 30fps, which some would consider 'playable'. The x1800XL, however, raises that to over 50fps at stock speed and to within two fps of the 60fps mark when overclocked. The gap between the two cards is 17fps, almost 50% of the x800XL's frame rate, quite an improvement. Overclocking provides a further 6fps, an 11% increase over stock speeds. Does this trend continue when we bump up the resolution?

The difference is not as pronounced as in the previous test, but it is still substantial. Again the x1800XL can almost reach the 60fps mark when overclocked. The gap between the x800XL and x1800XL is 13fps, or 34% of the x800XL's frame rate. Overclocking the card and CPU gives a 5fps increase, or about 10%. How does turning on AA and anisotropic filtering affect performance here?

Just as at 1024*768, the x1800XL provides a 50% increase in frame rate over the x800XL. Overclocking the card by roughly 10% gives a matching 10% increase. At this resolution with these settings the game isn't really playable, though; I'd suggest that either 1024*768 with 4X AA and 16X aniso, or 1600*1200 without them, would provide a very playable game. This is an OpenGL game, however, which ATi is still working to improve, so how does the card fare in DirectX based games?