PhysX later on?


(Dragzonox) #1

Weeeeellllll…

I’m getting a PhysX card in a few days, and a small thing I was thinking about…
Wouldn’t it be an idea, in a later patch, to add the possibility of using one’s PhysX card?

It could make blood and vehicle handling much smoooother :bump:


(Domipheus) #2

I wouldn’t count on it, to be honest; to use it efficiently they would no doubt have to pretty much re-write the game :slight_smile:


(Lanz) #3

I just have to ask… what’s the point of buying a card like that? How many games support it? Also, don’t expect it to work in a multiplayer game, since physics needs to be consistent over the network; it’s not a feature that can be changed to work differently on just your PC the way graphics can.
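To make that concrete, here is a tiny toy sketch (plain C++, nothing from the game or the PhysX SDK, and the numbers are made up) of why every machine has to run the same physics: the same projectile stepped at two different “quality” settings ends up in two different places, and in multiplayer that kind of drift between client and server is exactly what you can’t have.

```cpp
// Toy example: the same projectile stepped with two different physics
// "qualities" (timesteps). The positions drift apart, which is exactly
// what a networked game can't tolerate between client and server.
#include <cstdio>

struct Body { double pos, vel; };

// Simulate one second of flight with a fixed timestep using
// semi-implicit Euler integration.
Body simulate(double dt, int steps) {
    Body b{0.0, 20.0};               // launched straight up at 20 m/s
    const double gravity = -9.81;
    for (int i = 0; i < steps; ++i) {
        b.vel += gravity * dt;
        b.pos += b.vel * dt;
    }
    return b;
}

int main() {
    Body coarse = simulate(0.05, 20);    // "low quality" physics, 20 Hz
    Body fine   = simulate(0.005, 200);  // "high quality" physics, 200 Hz
    std::printf("coarse: %.4f  fine: %.4f  drift: %.4f\n",
                coarse.pos, fine.pos, fine.pos - coarse.pos);
    return 0;
}
```

The exact numbers don’t matter; the point is that the two simulations disagree, so everyone has to run the same physics at the same fidelity.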


(Ragnar_40k) #4

At least in multiplayer games you won’t see any use of it in the near future. The difference between what a player with such a card would see and what a player without one sees is too big. Additionally, such cards cause an fps drop, due to the increased number of objects which must be transferred via the PCI bus and drawn by the graphics card. You might use such a card to speed up the actual game physics (afaik the Unreal Engine 3 supports it), but imho 300 bucks is far too much for such a small advantage.

For single-player games, on the other hand, such a card might be used in the future when DirectX 10 and its physics engine are out.

I would compare the status of physics cards with the status of the S3 ViRGE chips some years ago: nice-looking games, but worse fps than usual and far too expensive (nevertheless, I had one ;)).


(Apocalypse) #5

According to most reviews, the PhysX card is currently too pointless to buy. Most even recommend that your money is better put into another gfx card for SLI or some other upgrade. There’s currently like ONE game (GRAW?) that supports it. The best part is that even utilising the advanced physics and PhysX, you are going to see a drop in FPS for basically more debris… Is that worth it?

Back to the topic of seeing it in QW? I really would doubt it. Using PhysX (according to what I read) would require integrating it into the game from the ground up. With QW nearing completion, I doubt they would want to do that. The best bet would be integrating Havok, which is basically bolted on, but last I heard it’s a 100k-200k per-game investment to use Havok.


(Jaquboss) #6

id’s physics engine rocks, they don’t need others’ work to have physics. I guess it will be much better in QW, so forget about Havok.


(Dragzonox) #7

Games like Cell Factor (which has to run with a PhysX card) and UT2007 are really big titles…

I have an almost top-end computer right now (AMD X2 4200+, 2 GB RAM and dual 7800GTs), and I just want to see what a PPU can do…
Not much to it, BUT still… Maybe some small fixes?..


(McAfee) #8

Both ATI and NVIDIA are considering adding more physics capabilities to their next-generation GPUs.
You really think they are going to let one small company take a bite out of their gaming share!?
They’ll try anything in their power (as long as it’s legal).


(Lanz) #9

If the money is burning in your pocket, then by all means go ahead and waste it on one. But tbh I think you will be disappointed. Even if I had unlimited resources to spend on gadgets, I don’t think I would buy one.


(McAfee) #10

Reviews for GRAW show it performing worse with the PPU card. Sure it looks prettier, but if I shell out US$300 for some hardware, I’d expect my games to perform better, not worse.

The worst part is you can’t tweak the physics levels, so you can’t even make a “real” comparison. If you add the card you are stuck with HQ physics (which makes the game slower), and if you go with software mode you get LQ physics (which still looks good and is faster than HW mode).

It would be good if you could use LQ physics WITH the hardware. That way you should get a boost in performance, and a proper comparison as well. But the developers of that game left that option out.

So to this date, you really don’t know if you are getting what you pay for.


(Apocalypse) #11

Yes, and both of which are not out yet. :slight_smile:

I guess it’s your money, but have you seen the effects that are in GRAW? It’s like, meh?


(carnage) #12

In terms of tech we are only really entering the era of “good” physics in games, with engines like HL2 and Doom 3, and the work these physics do in a game is relatively small compared to what I would expect them to be doing 10-15 years down the line.

So think of this kinda like Quake 1 and the way that expanded onto graphics cards. You could run ET in software mode, but it would be very ineffective because the advanced nature of the graphics means a lot more calculations. So as physics moves on, developers are going to have to find ways to optimise the calculations or support some kind of physics card that is more efficient than performing calculations like this on the CPU.

And although physics isn’t a very large part of MP games at the moment because of the strain on the net code, it is still creeping in, evidently in ETQW, so who knows what the future holds.


(SCDS_reyalP) #13

IMO, ‘physics coprocessors’ are a dead end. Physics doesn’t benefit from specialized hardware nearly as much as rendering does. For rendering, you can use massive parallelization and relatively simple processors. You can also gain a lot from fast specialized memory interfaces. Physics really wants something more like a general-purpose CPU and some vector units. It also tends to be more tightly connected to game logic, unlike rendering, where you can just throw a bunch of commands in a pipeline and forget about it.
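Purely as illustration of that contrast (a minimal C++ sketch, not engine code): the physics step below is straight vector math over all the bodies, the kind of work a general-purpose core with SIMD units handles fine, and the game logic right under it needs the results in the same frame, so it can’t just be handed off and forgotten the way draw calls can.

```cpp
// Purely illustrative: one frame where the physics step is plain vector
// math over all bodies, and the game logic right below needs those
// results in the same frame.
#include <vector>

struct Vec3 { float x, y, z; };
struct Body { Vec3 pos, vel; };

void physicsStep(std::vector<Body>& bodies, float dt) {
    const float g = -9.81f;
    for (Body& b : bodies) {          // simple, vectorisable inner loop
        b.vel.z += g * dt;
        b.pos.x += b.vel.x * dt;
        b.pos.y += b.vel.y * dt;
        b.pos.z += b.vel.z * dt;
    }
}

void gameFrame(std::vector<Body>& bodies, float dt) {
    physicsStep(bodies, dt);

    // Game logic consumes the new positions right away: triggers,
    // hit tests, what goes into the next network snapshot...
    for (const Body& b : bodies) {
        if (b.pos.z < 0.0f) {
            // gameplay reacts here, so the results must already be
            // available on the CPU at this point in the frame
        }
    }

    // Rendering, by contrast, could just be queued and forgotten
    // until the frame is presented.
}
```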

Not to say that physics processors won’t ever give you a performance gain, but I suspect it won’t be a big win in the long term. It makes more sense just to add more general purpose cores, which is the direction the CPU manufacturers have already taken.


(kamikazee) #14

Maybe the only use could be simulations where you would need lots of physics calculations.


(carnage) #15

Not to say that physics processors won’t ever give you a performance gain, but I suspect it won’t be a big win in the long term. It makes more sense just to add more general purpose cores, which is the direction the CPU manufacturers have already taken.

A very good point, makes me reconsider my argument. Rendering needs to be much less controlled, but in an MP environment especially, everyone needs parallel physics calculation, so every card would have to have exactly the same features or the game would only be able to support one card. And it’s pretty unlikely that games will go the way of requiring specific hardware from the user. More CPU power seems more likely.


(Ragnar_40k) #16

Afaik the PPUs are not accurate enough for scientific physics simulations since they don’t support the IEEE double-precision standard. They only use 32-bit floats for performance reasons (which is enough for games).
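For anyone curious what that gap looks like, a quick illustrative C++ snippet (nothing PPU-specific, just the arithmetic): accumulate a small timestep ten million times in 32-bit and in 64-bit floating point and compare against the exact answer of 10000.

```cpp
// Accumulate a small timestep ten million times in 32-bit and 64-bit
// floating point. The exact answer is 10000; single precision drifts
// visibly, double precision is off only far behind the decimal point.
#include <cstdio>

int main() {
    float  sum32 = 0.0f;
    double sum64 = 0.0;
    for (int i = 0; i < 10000000; ++i) {
        sum32 += 0.001f;   // 32-bit float, as on the PPU
        sum64 += 0.001;    // IEEE 754 double precision
    }
    std::printf("float: %.3f  double: %.6f\n", sum32, sum64);
    return 0;
}
```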


(Iced_Eagle) #17

Yes, and both of which are not out yet. :)

Cell Factor is out…

http://ageia.com/physx_in_action/cellfactor.html

UT2k7 is due out in a few months’ time… I really want to see how they will be using PhysX in that :slight_smile:

Right now, if a lot of major studios were putting out games that, by the looks of them, I know I couldn’t play without a PPU, then I might get one… Cell Factor right now is too much of a gimmick and you can’t play online… With UT2k7 it’s unknown how much it will use the PPU.

I mean if you only get more realistic crates falling apart, it’s not worth $300…


(datoo) #18

I don’t think this is true. A processor designed specifically for physics processing will be many times faster than a general-purpose core, from what I understand. The issue is getting games to support physics processing for more than just a few effects here and there (like in GRAW). Until a lot of people have PPUs, developers won’t make them required for their games, and people aren’t going to have PPUs until they come down in price or are embedded in gfx cards.


(SCDS_reyalP) #19

How so? What makes “physics” different from general-purpose calculations? As I already mentioned, it is more intensive on vector math, but that’s about it.

That might lead you to say “then put a bunch of vector units on an add-on card”, but then you have to send all your data over whatever bus that card is on. Physics is tightly connected to the rest of your game data, so synchronization is likely to be a bottleneck. Graphics gets away with this because it is mostly one-way, and so can be highly pipelined. Even so, graphics cards are one of the main drivers of improvements in peripheral buses.
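A rough sketch of that difference in data flow (illustrative C++ only; the “card” interface here is hypothetical, not a real API): draw commands go one way into a buffer, while an offloaded physics step is a round trip the frame has to wait for.

```cpp
// Rough sketch of the data-flow difference. The "card" interface here is
// made up for illustration -- the point is the shape of the traffic, not
// any real API.
#include <vector>

struct DrawCall  { int mesh, material; };
struct BodyState { float pos[3], vel[3]; };

// One-way: draw commands are appended to a buffer the GPU drains on its
// own schedule. Nothing needs to come back this frame.
void submitDraw(std::vector<DrawCall>& commandBuffer, const DrawCall& dc) {
    commandBuffer.push_back(dc);
}

// Hypothetical add-on card: both calls cross the peripheral bus.
void uploadToCard(const std::vector<BodyState>& bodies) {
    (void)bodies;   // stand-in for a transfer of the whole scene state
}
void readBackFromCard(std::vector<BodyState>& bodies) {
    (void)bodies;   // stand-in for waiting on the card and copying results back
}

void offloadedPhysicsStep(std::vector<BodyState>& bodies) {
    uploadToCard(bodies);       // state goes out over the bus...
    // ...the card simulates...
    readBackFromCard(bodies);   // ...and the frame stalls here, because the
                                // game logic needs the results right now
}
```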

If a “bunch of vector units on an add-on card” does solve the problem, then we are already very close to having one… the GPU. Modern GPUs are getting more flexible and higher precision every generation. People are already using them for some non-rendering tasks.

Also, CPUs and GPUs have huge economies of scale in their production. Even if an extra core or GPU is technically much more complicated than your PPU would be, the PPU is likely to cost more until its production reaches a similar scale.

For developers, there is a big advantage to limiting the number of instruction sets they have to code for. The variation among GPUs is bad enough.

(this concludes today’s issue of “nerdy technical rant” :moo:)


(Lanz) #20

Well, look at it this way: a specifically designed physics processor may or may not be the best design option, but compare this to the number of dual-core CPUs released nowadays. The next PC that a gaming enthusiast buys will very likely have a dual-core processor, but it’s far less likely to contain a physics card. As a game developer you have more use for one more general-purpose processor than for a physics-specific one, and more or less everyone will have one in a couple of years. That makes it ideal for developers and gamers, and imo it predicts the death of physics cards, which I see as hype nowadays that will not last.

Still, splitting physics and game logic in two is not as easy as it sounds. I would not be surprised if the games that actually lose performance/fps when using physics cards do so because you get threads waiting for other threads to finish.

There are a couple of things that make this a lot harder than when we went from software-rendered graphics to hardware-rendered:

  1. It’s easy to parallelise gfx because you send instructions to it and forget it until the next frame, while with physics you send calculations and then have to poll for the results, requiring some form of threading or a similar solution (there’s a rough sketch of this after the list).

  2. Physics really doesn’t scale well. E.g. if you have a game with 50 barrels falling down a mountain and you can’t see it happen from your fov, the game doesn’t have to render it, but the physics for it still has to be calculated. If you turn your back on a falling barrel, you don’t want it to land differently because the physics cheated.
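A minimal sketch of point 1 (illustrative C++ with a plain worker thread; not how any particular engine or the PhysX SDK actually does it):

```cpp
// Sketch of point 1: the physics step runs on another thread, but the
// game thread still has to wait for the answer before it can use the new
// positions. If the physics is the slow part, the frame is gated by that
// wait, and the extra thread (or card) doesn't buy you much.
#include <future>
#include <vector>

struct Body { float pos[3]; float vel[3]; };

std::vector<Body> stepPhysics(std::vector<Body> bodies, float dt) {
    for (Body& b : bodies) {
        b.vel[2] -= 9.81f * dt;
        for (int i = 0; i < 3; ++i)
            b.pos[i] += b.vel[i] * dt;
    }
    return bodies;
}

void gameFrame(std::vector<Body>& bodies, float dt) {
    // "Fire": hand the step to another thread...
    std::future<std::vector<Body>> result =
        std::async(std::launch::async, stepPhysics, bodies, dt);

    // ...do whatever work does not depend on the new physics state...

    // ...but not "forget": block until the results are in, because the
    // game logic and the network snapshot both need the updated positions.
    bodies = result.get();
}
```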

I had a couple more points, but heck, I’ve forgotten what I had in mind now…

edit: bah! I type too slow, I blame the English for it because it’s a convenient excuse… :smiley: