ASUS Debuts AGEIA PhysX Hardware
A little over a year ago, we first heard about a company called AGEIA whose goal was to bring high quality physics processing power to the desktop. Today they have succeeded in their mission. For a short while, systems with the PhysX PPU (physics processing unit) have been shipping from Dell, Alienware, and Falcon Northwest. Soon, PhysX add-in cards will be available in retail channels. Today, the very first PhysX accelerated game has been released: Tom Clancy's Ghost Recon Advanced Warfighter, and to top off the excitement, ASUS has given us an exclusive look at their hardware.
We have put together a couple of benchmarks designed to illustrate the impact of AGEIA's PhysX technology on game performance, and we will certainly comment heavily on our experience while playing the game. The potential benefits have been discussed quite a bit over the past year, but now we finally get a taste of what the first PhysX accelerated games can do.
With NVIDIA and ATI starting to dip their toes into physics acceleration
as well (with Havok FX and in-house demos of other technology), knowing the playing field is very important for all parties involved. Many developers and hardware manufacturers will definitely give this technology some time before jumping on the bandwagon, as should be expected. Will our exploration show enough added benefit for PhysX to be worth the investment?
Before we hit the numbers, we want to take another look at the technology behind the hardware.
AGEIA PhysX Technology and GPU Hardware
First off, here is the lowdown on the hardware as we know it. AGEIA, being the first and only consumer-oriented physics processor designer right now, has not given us as much in-depth technical detail as other hardware designers. We certainly understand the need to protect intellectual property, especially at this stage in the game, but this is what we know.
PhysX Hardware:
- 125 million transistors
- 130nm manufacturing process
- 128MB GDDR3 RAM at a 733MHz data rate
- 128-bit memory bus interface
- 20 giga-instructions per second
- 2 Tb/sec internal memory bandwidth
- "Dozens" of fully independent cores
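As a quick sanity check on those memory numbers (assuming, as the spec suggests, that 733MHz is the effective data rate of the GDDR3), the external memory bandwidth works out to roughly 11.7 GB/sec:

```python
# External memory bandwidth implied by the quoted specs.
# Assumes the 733MHz figure is the effective (data) rate, per the spec sheet.
bus_width_bits = 128
data_rate_hz = 733e6

bandwidth_gb_per_sec = bus_width_bits * data_rate_hz / 8 / 1e9
print(f"{bandwidth_gb_per_sec:.1f} GB/sec")  # ~11.7 GB/sec
```

That is modest next to a high-end graphics card's external bandwidth, which makes the enormous internal bandwidth figure all the more interesting.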
There are quite a few things to note about this architecture. Even without knowing all the ins and outs, it is quite obvious that this chip will be a force to be reckoned with in the physics realm. A graphics card, even with a 512-bit internal bus running at core speed, has less than 350 Gb/sec internal bandwidth. There are also lots of restrictions on the way data moves around in a GPU. For instance, there is no way for a pixel shader to read a value, change it, and write it back to
the same spot in local RAM. There are ways to deal with this when tackling physics, but making highly efficient use of nearly 6 times the internal bandwidth for the task at hand is a huge plus. CPUs aren't able to touch this type of internal bandwidth either. (Of course, we're talking about internal theoretical bandwidth, but the best we can do for now is relay what AGEIA has told us.)
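To show where the "nearly 6 times" figure comes from (the ~650MHz core clock is our assumption for a high-end GPU of this generation; the comparison above only specifies a 512-bit internal bus running at core speed), the arithmetic looks like this:

```python
# Comparing claimed internal bandwidths. The 650MHz core clock is an
# assumption on our part; AGEIA's 2 Tb/sec figure is as quoted to us.
gpu_bus_bits = 512
gpu_core_hz = 650e6
gpu_internal_gbps = gpu_bus_bits * gpu_core_hz / 1e9    # ~333 Gb/sec

physx_internal_gbps = 2000.0                            # 2 Tb/sec claimed

print(f"GPU internal: {gpu_internal_gbps:.0f} Gb/sec")
print(f"PhysX advantage: {physx_internal_gbps / gpu_internal_gbps:.1f}x")
```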
Physics, as we noted in last year's article, generally presents itself in sets of highly dependent small problems. Graphics
has become sets of highly independent mathematically intense problems. It's not that GPUs can't be used to solve these problems where the input to one pixel is the output of another (performing multiple passes and making use of render-to-texture functionality is one obvious solution); it's just that much of the power of a GPU is wasted when attempting to solve this type of problem. Making use of a great deal of independent processing units makes sense as well. In a GPU's SIMD architecture, pixel pipelines
execute the same instructions on many different pixels. In physics, it is much more often the case that different things need to be done to every physical object in a scene, and it makes much more sense to attack the problem with a proper solution.
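To make the dependence point concrete, here is a minimal sketch of the general problem shape (our illustration, not AGEIA's actual solver): an iterative constraint solver updates bodies in place, so each correction depends on the corrections applied before it, while per-pixel work touches each element independently.

```python
# Minimal sketch: dependent physics work vs. independent pixel work.
# This illustrates the general shape of the problem, not AGEIA's solver.

# A Gauss-Seidel-style relaxation: each constraint reads velocities that
# earlier constraints in the same iteration may already have modified,
# so the updates cannot simply run in lockstep on SIMD pipelines.
def solve_constraints(velocities, constraints, iterations=10):
    for _ in range(iterations):
        for a, b in constraints:            # pairs of connected bodies
            correction = 0.5 * (velocities[a] - velocities[b])
            velocities[a] -= correction     # the output of one update...
            velocities[b] += correction     # ...is the input to the next

# Pixel-style work: every element is computed from its own inputs only,
# which is exactly what a GPU's SIMD pipelines are built for.
def shade_pixels(pixels):
    return [min(1.0, 2.0 * p) for p in pixels]   # independent per element

velocities = {0: 4.0, 1: 0.0, 2: -2.0}
solve_constraints(velocities, [(0, 1), (1, 2)])
print(velocities)
print(shade_pixels([0.1, 0.4, 0.9]))
```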
To be fair, NVIDIA and ATI are not arguing that they can compete with the physics processing power AGEIA is able to offer in the PhysX chip. The main selling point of physics on the GPU is that everyone who plays games (and would want a physics card) already has a graphics card. Solutions like Havok FX, which use SM3.0 to implement physics calculations on the GPU, are good ways to augment existing physics engines. These types of solutions will add a little more punch to what developers can do. This won't create a revolution, but it will get game developers to look harder at physics in the future, and that is a good thing. We have yet to see Havok FX or a competing solution in action, so we can't go into any detail on what to expect. However, it is obvious that a
multi-GPU platform will be able to benefit from physics engines that make use of GPUs: there are plenty of cases where games are not able to take 100% advantage of both GPUs. In single GPU cases, there could still be a benefit, but the more graphically intensive a scene, the less room there is for the GPU to worry about anything else. We are certainly seeing titles like Oblivion that can bring everything we throw at them to a crawl, so balance will certainly be an issue for Havok FX and similar
solutions.
DirectX 10 will absolutely benefit AGEIA, NVIDIA, and ATI. For physics-on-GPU implementations, DX10 will decrease overhead significantly. State changes will be more efficient, and many more objects can be sent to the GPU for processing every frame. This will obviously make it easier for GPUs to handle doing things other than graphics more efficiently. A little less obviously, PhysX hardware accelerated games will also benefit from a graphics standpoint. With the possibility for games to support orders of magnitude more rigid body objects under PhysX, overhead can become an issue when batching these objects to the GPU for rendering. This is a hard thing for us to test explicitly, but it is easy to understand why it will be a problem when we have developers already complaining about the overhead issue.
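As a rough illustration of why this matters (a toy model of ours with made-up per-call costs, not measured figures), consider how total submission time scales when each rigid body gets its own draw call versus being grouped into batches:

```python
# Toy model of draw submission cost: total time grows with the number of
# API calls, so thousands of individually submitted rigid bodies spend
# more time in driver overhead than in actual rendering.
# The per-call and per-object costs below are illustrative, not measured.
PER_CALL_OVERHEAD_US = 20.0   # fixed driver/state-change cost per draw call
PER_OBJECT_COST_US = 1.0      # incremental cost of one more object in a batch

def submission_time_us(num_objects, objects_per_batch):
    calls = -(-num_objects // objects_per_batch)  # ceiling division
    return calls * PER_CALL_OVERHEAD_US + num_objects * PER_OBJECT_COST_US

for batch in (1, 64, 1024):
    t = submission_time_us(10_000, batch)
    print(f"batch size {batch:>4}: {t / 1000:.1f} ms per frame")
```

Cheaper state changes under DX10 shrink the fixed per-call term in a model like this, which is exactly the relief developers are asking for.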
While we know the PhysX part can handle 20 GIPS, this measure likely counts simple, independent instructions. We would really like to get a better idea of how much actual "work" this part can handle, but for now we'll have to settle for this ambiguous number and some real world performance. Let's take a look at the ASUS card and then get to the numbers.
ASUS PhysX Card
It's not as dramatic as a 7900 GTX or an X1900 XTX, but here it is in all its glory. We welcome the new ASUS PhysX card to the fold:
The chip and the RAM are under the heatsink/fan, and there really isn't that much else going on here. The slot cover on the card has AGEIA PhysX written on it, and there's a 4-pin Molex connector on the back of the card for power. We're happy to report that the fan doesn't make much noise and the card doesn't get very warm (especially when
compared to GPUs).
We did have an occasional issue when installing the card after the drivers were already installed: after we powered up the system the first time, we couldn't use the AGEIA hardware until we hard-powered our system and then booted up again. This didn't happen every time we installed the card, but it did happen more than once. This is probably not a big deal and could easily be an issue with the fact that we are using early software and early hardware. Other than that, the card gave us no trouble.
If you have a question, suggestion, or comment regarding this review, our support would be glad to help: just join our forum section and ask.