Indeed, not getting it to 100% means your graphics card is seriously the bottleneck. When I turn my max config on (unplayable quality, only for movie-making), both cores sit near 100%, and I'm running a 3.8GHz Core 2 Duo E8500 and an ATI HD 4850.
Of course I have tried plain high quality at playable settings, but this game does seem to draw a lot of CPU.
CPU usage?
I have found MW2 takes more CPU in each game than ET:QW.
I do understand that the better the graphics card, the less load the CPU takes… is that true?
Well, in actual fact, thinking about it: I have always had 3GB of RAM, and I have just upgraded my 9800 GT to a GTS 250. In COD:MW2 the CPU load now seems a lot higher, when before it didn't. Could it be that, because I have a better card, MW2 has chosen me as a host, and that's why I'm seeing greater resource usage?
ET:QW didn't change in terms of resource usage.
CPU cache is there because the link between the CPU and RAM has too little bandwidth (and too much latency) for information to be fetched in time. The CPU stores what it assumes will be needed first/most commonly in its various tiers of cache (Level 1 being the fastest - but also the most expensive).
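To give a feel for how much that matters, here's a toy C++ sketch (my own example, nothing to do with any particular engine): it sums the same 64MB array twice, once walking consecutive addresses and once jumping a whole row every step. The first walk hits cache almost every access, the second keeps missing and has to go out to RAM, and on typical hardware it is several times slower even though the work is identical.

[CODE]
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const int N = 4096;                        // 4096*4096 ints = 64 MB, far bigger than any CPU cache
    std::vector<int> grid(N * N, 1);

    auto time_sum = [&](bool row_major) {
        auto start = std::chrono::steady_clock::now();
        long long sum = 0;
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                // Row-major walk touches consecutive addresses (cache friendly);
                // column-major jumps N*sizeof(int) bytes every step (cache hostile).
                sum += row_major ? grid[i * N + j] : grid[j * N + i];
        long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                           std::chrono::steady_clock::now() - start).count();
        std::printf("%s: %lld ms (sum=%lld)\n",
                    row_major ? "row-major   " : "column-major", ms, sum);
    };

    time_sum(true);
    time_sum(false);
}
[/CODE]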
Yes, that’s correct. Thanks for elaborating (was in a hurry and couldn’t squeeze in more than a couple of words in my last post). 
I’m quite certain that Brink will be designed to work with multiple CPU cores. ETQW made good use of my dual-core E6700 and then my quad-core Q6600, so I’m quite sure that my current CPU (an i7 860 with 8 virtual cores) will be able to chew up Brink quite nicely. My video card has built-in PhysX support, so I’m more interested in the 3D physics system than the main game engine itself. I can’t imagine designing a game in this era that isn’t at least capable of multi-core operation… to not do so would greatly limit performance on mid- and high-level systems. I can’t imagine a cutting-edge developer like Splash Damage not continuing to embrace multi-core processing…
Any triple-A title is going to tap all the system power it can get…
BRINK != ET:QW.
The engine that BRINK is using is a whole new engine based on a fresh id Tech 4 codebase. It’s not the ET:QW engine tweaked yet again.
Now, regarding the console hardware, the game will probably be using a lot of threads.
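Just to illustrate what “using a lot of threads” tends to mean in practice (a generic sketch of my own, not anything from the actual BRINK or id Tech 4 code): a common approach is to chop per-frame work into small jobs and let however many hardware threads are present pull them off a shared queue.

[CODE]
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// Generic illustration only: spread a "per-entity update" across all hardware threads.
int main() {
    const int kEntities = 100000;
    std::vector<float> health(kEntities, 100.0f);

    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::atomic<int> next{0};

    auto worker = [&] {
        const int kBatch = 256;                       // grab work in small batches
        for (;;) {
            int begin = next.fetch_add(kBatch);
            if (begin >= kEntities) return;           // queue drained, thread is done
            int end = std::min(begin + kBatch, kEntities);
            for (int i = begin; i < end; ++i)
                health[i] -= 0.1f;                    // stand-in for real per-entity game logic
        }
    };

    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t) pool.emplace_back(worker);
    for (auto& t : pool) t.join();

    std::printf("updated %d entities on %u threads\n", kEntities, workers);
}
[/CODE]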
You’re right, ETQW is a cousin of Brink, not a parent. But they are in the same game “family”. It wouldn’t be shocking for them to have some comparable features…
Here is the idtech4 family portrait:
Doom 3 (2004) – id Software
Doom 3: Resurrection of Evil (2005) – Nerve Software
Quake 4 (2005) – Raven Software
Prey (2006) – Human Head Studios
Enemy Territory: Quake Wars (2007) – Splash Damage
Wolfenstein (2009) – Raven Software
Brink (2010) – Splash Damage
I stand corrected…
I seem to recall talk that at least some portion of the Brink code was ‘substantially rewritten’ or something to that effect. Maybe I’m just losing my mind in my old age…
Who are you replying to?
Awesome. You mentioned Brink was being built on the basis of id Tech 4. For a start, Brink’s been built from a new engine on the basis of Tech 4, yes?
Paul Wedgwood: Yes.
http://www.vg247.com/2010/05/04/interview-brink-part-two-splash-damages-paul-wedgwood/
:rolleyes:
There is a somewhat detailed discussion on a few changes in this thread:
http://www.splashdamage.com/forums/showthread.php?t=18326&highlight=renderer+brink+rewritten
Mostly detailing changes to the audio system…
Probably the same as the client, minus the graphics adaptor; I would guess dual core with 2GB of RAM.
But as always, more/faster is better.
You need a dual core and 2GB of RAM just to host a dedicated server?
I could run 2 ETQW servers on one 1.8GHz core…
And even then it stayed below 80%.
Cache impact on overall system performance? That depends.
It’s a [dramatically serious] issue in roughly 12% of game code [especially when engine developers get way too attracted (remember that one story from Gammeln city?) to various “optimizations” (Intel-style or otherwise)], which in fact hurt overall product performance more often than they improve it, in about 70% of cases.
The point is that recent engines/games are about 5x more bottlenecked by RAM AMOUNT than by CPU-to-RAM bandwidth.
Which leaves us keeping these numbers in mind: about 60% of today’s PCs have 4GB of RAM and ~14% have 8GB or more [and that part of the chart was growing explosively at the end of 2009].
So this leads us to the simple conclusion that x64 binaries are a mandatory requirement for any PC entertainment with decent performance/features.
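For what it’s worth, the x64 point is easy to demonstrate: a 32-bit build simply cannot address more than 4GB no matter how much RAM the box has. A quick sketch of my own, only illustrating pointer width:

[CODE]
#include <cstddef>
#include <cstdio>

int main() {
    // A 32-bit build has 4-byte pointers, so the whole process can address at most
    // 4 GB of virtual memory (in practice only ~2-3 GB usable on 32-bit Windows),
    // regardless of installed RAM. A 64-bit build has 8-byte pointers, and that
    // 4 GB ceiling simply goes away.
    const size_t ptr_bytes = sizeof(void*);
    const double addressable_gb = (ptr_bytes == 4)
                                      ? 4.0
                                      : 16.0 * 1024 * 1024 * 1024;  // 2^64 bytes expressed in GB
    std::printf("pointer size       : %zu bytes\n", ptr_bytes);
    std::printf("addressable memory : %.0f GB\n", addressable_gb);
}
[/CODE]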
p.s.
Wait for 6-6-6-20 1600MHz DDR3 to become available (made on 48nm and 32nm wafer processes) and buy it to solve the first problem; it should start becoming available around fall 2010.
The notorious DDR3 [first and second generation] was only marginally faster than DDR2 because of its horrific timings.
p.p.s.
Note: L1 cache size and performance have a [dramatically] more serious impact on overall system performance under heavyweight loads [number crunching, commercial computing (AV/3D rendering, 2D post-processing, etc.)],
mainly because L2/L3 cache doesn’t help when the traffic is [WAY] heavier, less predictable [less cacheable], and defeats the cache outright.
Hence the note about VIA CPUs [or some Transmeta/PPC chips, doesn’t matter] having advantages over Intel or AMD in some applications -
not only because of their shorter execution pipelines [lower latency].
Study some of Seymour Cray’s [http://en.wikipedia.org/wiki/Seymour_Cray] R&D results [good articles IMO, even nowadays] on the importance of eliminating bottlenecks rather than trying to polish over their impact.
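A rough illustration of that “less predictable, less cacheable” point (again a toy sketch of my own, not from any real workload): the moment the access order stops being predictable, the prefetcher and L2/L3 stop helping and nearly every access pays the full trip to RAM.

[CODE]
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Walk a big array once in order and once in a shuffled (unpredictable) order.
// Same amount of work, but the shuffled walk defeats prefetching and mostly
// misses cache, so it runs several times slower on typical hardware.
int main() {
    const size_t N = 1 << 24;                     // 16M entries; each array is ~64 MB
    std::vector<int> order(N);
    std::iota(order.begin(), order.end(), 0);     // 0, 1, 2, ... (predictable order)
    std::vector<int> data(N, 1);

    auto run = [&](const char* label) {
        auto t0 = std::chrono::steady_clock::now();
        long long sum = 0;
        for (size_t i = 0; i < N; ++i) sum += data[order[i]];
        long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                           std::chrono::steady_clock::now() - t0).count();
        std::printf("%-10s %lld ms (sum=%lld)\n", label, ms, sum);
    };

    run("sequential");
    std::shuffle(order.begin(), order.end(), std::mt19937{42});  // now unpredictable
    run("shuffled");
}
[/CODE]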
[QUOTE=Crytiqal;225990]you need a dual core and 2 gig of RAM just to host a dedicated server?
I could run 2 ETQW servers on 1 1.8 Ghz core…
And then it was still below 80%[/QUOTE]
If he can afford it, why not?
Note: even my home gateway [a mini-ITX-based thing in a pizza-box-sized case] has a quad-core CPU and 8GB of RAM.
Serious [decent/recent] games require serious hardware to host a server.
You “could” run an ETQW “server” even on an Atom-based netbook, but with predictable results [similar to the whine posts about servers rented from some ISP’s ranked hosting].
But that’s not a point, excuse, or reason to do such a thing.