Leaf nodes - how big?


(ACROBAT) #1

What is the optimal size for a 6-sided leaf node?

When I do my own hinting and use blocksize 0, I normally just make the nodes as large as possible unless there are thousands of brushes in one. In that case I break it up.

But are these nodes TOO big?

Second question…

Does it slow the game down a lot to have leaf nodes that aren’t terribly complex, but are, say, 20-sided instead of 6?


(Detoeni) #2

Q1. Leaf size is not that much of an issue; more important is which leafs can see into each other and what is then drawn on screen.
Splitting up large nodes is only of use if it reduces what is being drawn.

Q2. My best guess would be that 20 sides is getting too complex; a leaf is just a volume of space that has stuff drawn in it. Q3map2 does throw errors if a leaf/portal gets too complex, though.


(ACROBAT) #3

I was doing all the hinting for a map that someone else made, but the map is gigantic. Some of his leaf nodes, by his design, were like 20-30 sided, and in those cases I just broke them up for no other reason than to make them less complex.

Most of the large 6-sided leafs I left intact, so I guess unless there are a TON of detail brushes inside those big leafs, I will leave them large. Someone told me earlier that if you have too many detail brushes in a leaf, collision detection will lag.

Any other tidbits will be greatly appreciated.


(Detoeni) #4

Too much collision data can be bad (brush or q3map_clipmodel), but I’ve only seen it as an issue in vq3 and RTCW with very complex terrain; Wolf: ET is much more forgiving. In these cases, splitting the node(s) can sometimes make a difference.


(SCDS_reyalP) #5

[quote=Detoeni]Too much collision data can be bad (brush or q3map_clipmodel), but I’ve only seen it as an issue in vq3 and RTCW with very complex terrain; Wolf: ET is much more forgiving. In these cases, splitting the node(s) can sometimes make a difference.[/quote]
Oasis suffers noticeably from this, as did RTCW’s mp_assault.

One way to benchmark this is to run a dedicated server and watch how much CPU it uses as a single, high-FPS client moves around. The client should be capped at an FPS level it can actually maintain, because client FPS has a direct effect on server load. Since the server is essentially only doing that client’s physics, this gives you a good idea of how much CPU the physics in that area costs.
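A rough sketch of the setup on a Linux box (the etded binary name and the exact launch line are assumptions about a stock Wolf: ET install, so adjust them to your setup):

```
# start a bare dedicated server on the map you want to profile
./etded +set dedicated 1 +map oasis

# on the test client, cap the framerate at something it can always hold
/com_maxfps 125

# then watch the server process while the client runs around the area you care about
top -p $(pidof etded)
```

The absolute numbers will vary from machine to machine; it’s the difference between areas of the same map that tells you where the expensive collision geometry is.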

With this method, I find that one client getting 200 FPS in the Allies’ first spawn on Oasis takes 8-10% of my server’s CPU (Athlon XP 2000). On most of the rest of the map, it takes 2-4%. Note that in mods without an equivalent of ETPro’s b_optimizeprediction, the cost on clients (especially those with higher pings) is significantly higher.

There aren’t really any hard numbers that I know of (as there are many factors in performance and they all interact), but if you have a lot of complicated clipping or brushwork without much structure, it’s something to look at. There probably is a threshold where certain things stop fitting comfortably in typical cache sizes, but it would take a bit of work to identify.

Obviously this is only something to worry about if you are really interested in squeezing out every last bit of performance, or are running into serious performance problems.


(ACROBAT) #6

One nice advantage I have found is that using blocksize 0 speeds up even the light phase, for some reason.

You can have a map that would be an hour-long compile, and it becomes a 90-second compile. I have a friend working on a GIGANTIC map for JK2, one of the largest ones I’ve seen, and it compiles in about a minute now. It’s pretty nice.
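For anyone who hasn’t used it: blocksize is just a worldspawn key, so it lives at the top of the .map file. A minimal sketch is below; the key name follows the q3map2 worldspawn keys as I understand them (_blocksize, with blocksize/chopsize as older aliases), so double-check it against your compiler’s docs:

```
// worldspawn entity at the top of the .map file (brushes omitted)
{
"classname" "worldspawn"
// 0 disables q3map2's automatic 1024-unit block splits
"_blocksize" "0"
}
```

The usual caveat applies: with the automatic block splits gone, you’re relying entirely on your own structural brushwork and hints to keep VIS sane.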