Any tweaks for increasing FPS???
[quote=“Cerebrate”]
I honestly can not see the difference between 16 and 24.[/quote]
i was just too lazy to go into detail. try loading up radar and stand outside the warehouse. look up at the windows and walk toward and away from the building. see that crappy depth error (z-fighting) on the wooden beams below them? that's the best example off the top of my head, but there are others. goldrush comes to mind.
also, i have to assume SD concurs with me, since none of their presets lower depthbits, not even "fastest". in fact they all explicitly set it to 24. that can only mean SD intended that cvar to remain at 24 for some reason. judging from what it does to my game, i believe i know why.
what i haven't noticed is any benefit from lowering it to 16. can't see why anyone would want to lower the quality of such a thing - seeing clearly as far away as possible can only be a "Good Thing". this one falls under the same category as colorbits IMO - the quality penalty for lowering it outweighs whatever fps you get back. r_colorbits 16 is absolutely horrid, IMO.
texturebits can actually give some fps gains with little or no downside, however.
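for reference, here's roughly what that combo looks like in an autoexec.cfg - the values are just my idea of a sane trade-off, not gospel, and these are latched cvars so they only take effect after a vid_restart:

seta r_depthbits 24      // keep depth precision, avoids the z-fighting on distant geometry
seta r_colorbits 32      // 16 looks horrid for almost no gain
seta r_texturebits 16    // the one that's actually cheap to lower
vid_restart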
Having it on 16 makes the graphics flicker at long distances on my computer. I have to set it to either 0 or 32. If memory serves, the same goes for Call of Duty, except that it flickers on anything below 32.
I found out a very important one: don't go to servers with a lot of players, 16 or fewer is best. (in multiplayer, that is)
more players in a map = more calculations for your cpu
works for me.
Greetz Way2Evil
well, since the thread starter didn’t seem to have much luck, i went poking around trying to find something else that might help. i set r_ext_compiled_vertex_array to 0 and my framerate was literally halved. obviously that cvar has a lot to do with framerate. so i googled it, and it seems that some people might get the opposite effect. so perhaps try that cvar at 1 and 0 and see which is better. for me, it’s obviously 1. for you, perhaps not? 1 is default, btw.
i can’t say i researched it in depth, i just skimmed over a few forum posts, etc. it’s a shot in the dark, but give it a try!
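if you want to compare the two properly instead of eyeballing it, a timedemo run works. this assumes you've already recorded a demo with /record mytest ("mytest" is just a placeholder name):

seta r_ext_compiled_vertex_array 1
vid_restart
timedemo 1
demo mytest              // note the average fps it reports at the end
// then set the cvar to 0, vid_restart, and run the same demo again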
has anyone suggested shutting down programs in the background and turning anti-virus software off? (if so, my bad) - it may not show up as an fps improvement, but believe me… it will help a lot.
just to add: it’s kind of nice to make toggles for some things…
bind "=" toggle cg_wolfparticles
bind "-" toggle cg_atmosphericeffects
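and if you want one key to kill several effects at once, a vstr pair does the trick. the fx_off / fx_on / fx_next names and the key are just examples, pick whatever you like:

set fx_off "cg_wolfparticles 0; cg_atmosphericeffects 0; r_dynamiclight 0; set fx_next vstr fx_on"
set fx_on "cg_wolfparticles 1; cg_atmosphericeffects 1; r_dynamiclight 1; set fx_next vstr fx_off"
set fx_next "vstr fx_off"
bind "0" "vstr fx_next"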
I don’t get any difference at all by having r_picmip at 3 instead of 0. Isn’t that weird?
Not really, it just means that texture memory and GPU bandwidth isn’t your limiting factor, which shouldn’t be a surprise if you have a decent video card.
[quote]Not really, it just means that texture memory and GPU bandwidth isn't your limiting factor, which shouldn't be a surprise if you have a decent video card.[/quote]
I suppose you’re right.
Yep Cerebrate, it's normal. Modern graphics cards have more fill rate than games will be able to use for the next few years. Polygon count is what matters most now.
And in the case of ET, polygon rate is largely limited by bus, cache and memory bandwidth, not the GPU. Also, running the physics etc. takes up a significant portion of these as level complexity goes up, so it isn't even drawing the polygons that is really the problem with high-poly levels.
all i know is i've got a P4 2.53 (no HT), running at 2.8, FSB at 147, memory at 184, a gig of PC2700 DDR, a geforce 6800 with 256 megs, 4x AGP at 66. i cap fps at 76 and i still drop into the 50's all the time (sometimes lower, *shudder*), which of course sucks because i run maxpackets at 76 and i refuse to lower it and get owned by the "packet lottery". i just deal with the choke when my frames drop, and i swear a lot.
i’ve found ET to be more demanding of my PC than D3 or HL2 were. and that’s no bullshit. it’s nearly impossible to get a constant, decent framerate from ET on my box. i’m not greedy either. i don’t even want 125 fps. my monitor runs at 85 hz - all i ask for is a ROCK solid 76 fps and i’d be happy as a clam.
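for anyone following along, the 76/76 setup i'm talking about is just this in the config - the rate/snaps lines are only example values, use whatever you normally run:

seta com_maxfps 76       // framerate cap
seta cl_maxpackets 76    // the client can't send more packets per second than frames it draws, which is why fps drops = choke
seta snaps 20
seta rate 25000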
ouroboro: I'm not trying to lecture you or anything, but you don't have r_swapInterval on, do you? Or r_displayRefresh set to something odd? It's just that it sounds so strange that you can't get a solid 76 with that system of yours.
nope. i've tweaked everything as finely as i can get it on my box - while still maintaining a playable level of visibility. i run a high res (1280x1024), which accounts for part of it. i absolutely refuse to use low res and play a game that looks like Wolf 3D. i like to be able to make out faraway targets easily, instead of trying to determine what that distant glob of pixels is. still, i made a fairly busy timedemo on a random server running radar, which i use for tests, and i get ~85 on that benchmark. and i usually score >100 on more framerate-friendly demos. my guess is that reyalP is right about the CPU being stressed when there are actually players on the server.
the bottom line is, my CPU is junk. when i clocked it up to 2.8, i saw an instant ~5 FPS gain. that told me where my bottleneck is.
ouroboro: I don’t understand how you prioritize. Virtually everyone will agree that a solid FPS should be considered above high resolution, but hey, it’s a free game. 
Does it help you to know that 90% of all the top players use r_mode 6?
BTW, have you considered overclocking? Just a little bit?
Go into display properties and click on advanced; you should be able to change some of the goodies if your video drivers allow it.
“top players use xyz” means nothing. half of the top CS players use 640x480 simply because the leet guy they learned from used it back in the day, so New Guy learned on it and became leet on it. there’s no justifiable reason to use it. i prefer 1280 because for one thing, faraway objects are easier to see. for another, although the hit detection in ET is not pixel-precise, i still consider it better that my targets be comprised of as many pixels as possible, for more precise aiming.
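for the record, here's how i run 1280x1024 - if i remember right it's mode 8 in the stock r_mode table (r_mode 6 being 1024x768), and the custom width/height route works too. either way it needs a vid_restart:

seta r_mode 8             // 1280x1024 in the stock mode list, if memory serves
// or the custom route:
// seta r_mode -1
// seta r_customwidth 1280
// seta r_customheight 1024
vid_restart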
as for overclocking, i OC’ed my 2.53 to 2.8 and got a few more frames. i went a tad higher for a while but settled on 2.8 as the most stable speed for my CPU.
