PhysX for massive performance via GPU?
Posted by devdude on Stack Overflow, published 2009-06-02
I recently compared some of the physics engines out there for simulation and game development. Some are free, some are open source, some are commercial (one is even very commercial $$$$): Havok, ODE, Newton (aka oxNewton), Bullet, PhysX, and the "raw" built-in physics of some 3D engines.
At some stage I came to a conclusion, or rather a question: why should I use anything but NVidia PhysX if I can make use of its amazing performance (when I need it) thanks to GPU processing? With future NVidia cards I can expect further improvements, independent of the regular CPU generation steps. The SDK is free and it is available for Linux as well. Of course it is a bit of vendor lock-in, and it is not open source.
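To make the GPU angle concrete, here is a rough sketch of what enabling the GPU rigid-body path looks like in the PhysX SDK. Note that this follows the newer PhysX 4.x-style API rather than the 2.8 SDK that is current as of this question, so the specific names used here (PxCreateCudaContextManager, eENABLE_GPU_DYNAMICS, the eGPU broad phase) are assumptions about the later interface, not a definitive recipe:

```cpp
// Sketch only: GPU rigid-body setup with a PhysX 4.x-style API (post-2009 SDK).
// The flags and helper names below are assumptions about that newer interface.
#include <PxPhysicsAPI.h>

using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main()
{
    // Core SDK objects (CPU side).
    PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    // CUDA context manager: this is what lets the simulation run on the GPU.
    PxCudaContextManagerDesc cudaDesc;
    PxCudaContextManager* cudaCtx = PxCreateCudaContextManager(*foundation, cudaDesc);

    // Scene description: request GPU dynamics and the GPU broad phase.
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity            = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher      = PxDefaultCpuDispatcherCreate(2);
    sceneDesc.filterShader       = PxDefaultSimulationFilterShader;
    sceneDesc.cudaContextManager = cudaCtx;
    sceneDesc.flags             |= PxSceneFlag::eENABLE_GPU_DYNAMICS;
    sceneDesc.broadPhaseType     = PxBroadPhaseType::eGPU;

    PxScene* scene = physics->createScene(sceneDesc);

    // Step the simulation as usual; the CPU/GPU split is transparent to the caller.
    for (int i = 0; i < 600; ++i)
    {
        scene->simulate(1.0f / 60.0f);
        scene->fetchResults(true);
    }

    // Cleanup.
    scene->release();
    cudaCtx->release();
    physics->release();
    foundation->release();
    return 0;
}
```

The appeal is that the application-facing API stays the same whether the solver runs on the CPU or the GPU; the vendor lock-in sits entirely in the CUDA-backed context manager.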
What's your view or experience? If you were starting development right now, would you agree with the above?
cheers