safe_malloc failed in q3map2 2.5.15 light phase


(seven_dc) #1

I looked through these forums but did not find a solution for my error. Apparently my memory is running out when I try to compile my map with q3map2 2.5.15. Here is the error in more detail:

        0 light entities
--- SetupBrushes ---
     6023 opaque brushes
--- SetupDirt ---
       48 dirtmap vectors
--- SetupSurfaceLightmaps ---
    10639 surfaces
     9760 raw lightmaps
       71 surfaces vertex lit
    10568 surfaces lightmapped
    10171 planar surfaces lightmapped
      200 non-planar surfaces lightmapped
      197 patches lightmapped
       65 planar patches lightmapped
--- SetupTraceNodes ---
************ ERROR ************
safe_malloc failed on allocation of 180879360 bytes

So when q3map2's memory use climbs to about 2 gigs my computer says no (the allocation that actually fails is only ~172 MB). But how come it takes so much memory?
I am using this light compile:

-light -fast -samples 2 -shade -filter -patchshadows -external -lightmapsize 256 -approx 8

My map is pretty big but I am using lightmapscale 2.0 in worldspawn. I have only done basic building placement so far, so there are no details yet.


(Davros) #2

Do you need -shade? I thought this enabled phong shading for all lightmapped surfaces, whereas you're probably better off creating phong shaders for only the few shaders you actually want phong shading on. And you don't need -samples 2 AND -filter, as they pretty much cancel each other out (-samples sharpens, while -filter softens the shadow edges). Just try -light -fast -samples 2 and go from there.


(ydnar) #3

-shade does nothing. You need -shadeangle N or shaders with q3map_shadeAngle for that to work.
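
For example, per-shader it would look something like this (the shader name is made up here, and 60 is just a starting value; pick whatever angle suits your terrain):

    textures/mymap/terrain_rock
    {
        q3map_shadeAngle 60    // smooth normals across faces meeting at up to 60 degrees
        {
            map $lightmap
            rgbGen identity
        }
        {
            map textures/mymap/terrain_rock.tga
            blendFunc filter
        }
    }

Or put -shadeangle N on the light line if you really do want it applied globally.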

Also as Torchy said, using -filter disables -samples. Use one or the other. I recommend -samples.

Quit other running applications before trying to light the map, and make sure the virtual memory settings allow for large allocations. Lighting a large, complex map takes a lot of memory. Are you compiling the map with -meta in the BSP phase?
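
(By the BSP phase I mean the first q3map2 pass, i.e. something along the lines of the line below, with your own path and whatever other BSP switches you use:)

    q3map2 -meta mymapname.map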

y


(seven_dc) #4

do you need -shade?

I thought that -shade just enables the phong shading on textures with phong shaders, and my terrain has phong in them. But as ydnar said, it doesn't do anything on its own.
I tried removing the -filter and -shade switches, and got the same error.

@ydnar

Yes, I am using -meta; I even tried -patchmeta. I do not think my map is complicated, just big. I am not so familiar with my WinXP swap settings; maybe the fact that I have 1.5 gigs free on my Windows partition has something to do with the memory problems.


(seven_dc) #5

Mindlink suggested using the -lomem switch. Is that OK, and how much memory does it take? I cannot test it now because I am at work.


(WolfWings) #6

Stupid thought… if your map is THAT big, increase the gridsize, or, to test whether that's the problem, try a compile with -nogrid in the light phase.


(seven_dc) #7

It is not THAT big… Or is it? Overall piccie:
http://koti.mbnet.fi/seven/temp/et/flayout.jpg

How do I increase the gridsize?


(Godmil) #8

in worldspawn try: gridsize 256 256 512
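
(i.e. with nothing selected, the worldspawn entity in the .map file ends up carrying the key roughly like this; some builds/docs use _gridsize instead, as the next post mentions:)

    {
    "classname" "worldspawn"
    "gridsize" "256 256 512"
    // ... other worldspawn keys and world brushes follow ...
    }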


(WolfWings) #9

If #3 is a truck… good grief. That’s a fairly large map, so yes, the default gridsize (128 128 64) is likely too detailed for your map.

First tip, if your map has a large ‘sky area’ that can’t be gotten to by actual players, add a single rectangular, axis-aligned ‘lightgrid’ brush to your map to define the area that players and moving entities need to be lit inside.
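
If your common shader set doesn't already have one, the lightgrid shader is (going from memory here, so check against your own common.shader) basically just a nodraw, nonsolid shader carrying the lightgrid surfaceparm:

    textures/common/lightgrid
    {
        surfaceparm nodraw
        surfaceparm nonsolid
        surfaceparm lightgrid
    }

One axis-aligned box brush made of that around the playable space and q3map2 clamps the grid to it.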

Second, the entity key to set is _gridsize on the worldspawn. Try "256 256 128" to start with, and fiddle with that as you wish from that point. The first two numbers are the X and Y steps for the lightgrid, the third number is the Z-axis stepping. Doubling any one of those numbers cuts your lightgrid size in half. The lightgrid itself is used for lighting dynamic models, so having it too coarse (any number too high) can cause 'dark sunlight' or 'bright shadows' on entities such as players or hand-grenades.
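
To put rough, made-up numbers on it: a hypothetical 10240 x 10240 x 2048 unit map at the default 128 128 64 works out to roughly 80 x 80 x 32 = 204,800 grid points, while 256 256 128 gives roughly 40 x 40 x 16 = 25,600, an eighth of the lightgrid data for the same map.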


(seven_dc) #10

Thx!
I have set the lightgrid brush, but those gridsize parameters could be handy. I shall try a compile tonight and report my findings back here.
BTW, it takes about 40 seconds to escort the truck from start to finish, and I am planning to increase the speed so it would be around 30 secs.

Edit: It works! With a lower gridsize and the -lomem switch the memory consumption is only ~500 MB.


(]UBC[ McNite) #11

OK, I'll just hijack this thread for my compile problem then…

safe_malloc failed on allocation of 173015040 bytes

But the funny thing is: not much really changed in my map compared to the last compiles, which went well…

Log 1 (some days back):

    19131 total world brushes
    18690 detail brushes
        0 patches
    44465 boxbevels
    25490 edgebevels
      782 entities
    89134 planes
        0 areaportals
Size: -5152, -5664,    64 to  5152,  5664,  2464

Log today:

    19117 total world brushes
    18688 detail brushes
        0 patches
    44706 boxbevels
    26301 edgebevels
      790 entities
    91380 planes
        0 areaportals
Size: -5664, -5664,    64 to  5152,  5664,  2464

I can see the planes went up, but I have no clue why, because the number of brushes is practically the same.
What I did mainly between the last working compile and today is:
a) I ungrouped a lot of func_groups (barrels, furniture, segments of tunnel walls) that I had grouped for the sake of easy selection (I had about 810 in the last long and working compile).
b) I moved about 25% of my terrain into 1 func_group, now giving it standard caulk_terrain and a terrain surface tex (until now the terrain was about 25 func_groups with caulk and a tex on the surface, no alphamap used).

Suggestions anyone?
And what exactly happens in the SetupTraceNodes phase, so I can get around the problem?
(Btw, I read about the -lomem switch… only in my q3map2build with q3map2 2.5.16 it doesn't show up… is it a batch-file-only switch?)

And yeah… my virtual memory is at 2.5 GB max, but the process dies when it's using about 1.3 GB (when I looked at it the last time it went up to 1.7 GB and made a nice compile).


(]UBC[ McNite) #12

Hmmmm, I just went back to a save of the map from 2 days back, before I started merging all the terrain parts back into one, and it went through the SetupTraceNodes step. And it even has slightly more data:

    19186 total world brushes
    18755 detail brushes
        0 patches
    44794 boxbevels
    26279 edgebevels
      790 entities
    91456 planes
        0 areaportals
Size: -5664, -5664,    64 to  5152,  5664,  2464

So now I'm really confused :bash: :banghead: :bash:
In another thread I read ydnar say that having the terrain as 1 func_group done the standard way creates a mesh, and is therefore a lot better for game performance than having a terrain made of lots of single brushes (well, mine was about 25 func_groups, but still… they were func_groups with a shader using phong shading on them).

Edit: just in case those numbers don't say anything, I also checked the total verts… there are 800 more in the last compile that went well compared to the log of the failed one :eek2:


(kat) #13

BSP isn't quite set in stone; that means that although the final results may be approximately the same, the order in which the compiler comes across and then uses the map data changes from compile to compile. Every time you save your map file the editor rejigs that data and reorders things, and that is the order in which q3map/2 reads it (iirc).


(Twisted0n3) #14

It’s certainly rather odd that it would complain of being out of memory with nearly 1GB still available, though, don’t you think?


(kat) #15

Not really. The actual amount of data BSP can handle is 2 MB (or thereabouts). The memory the compilation 'process' uses (i.e. to get from 'A' to 'B') is different from the malloc error, which refers to the BSP data itself being uber-bloated, not the compilation process. It's usually caused by leaving too much of a map as structural brushwork; the more of that there is, the more splits, planes and so forth, and the more BSP data you use to describe the map's physical structure.

The workaround, mentioned above, to reduce BSP usage is to force-increase the blocksize from its default (1024 iirc), but that's a 'hack' imho, and a dirty one at that.
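
(For what it's worth, that is also just a worldspawn key, roughly as below; the exact key name here is from memory, and it also accepts three per-axis values like the 1024 1024 4096 mentioned further down:)

    {
    "classname" "worldspawn"
    "_blocksize" "2048"
    }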


(Twisted0n3) #16

So it's fixed by detailing? It should show as something other than an "out of memory" error then, since it sounds more like MAX_MAP_VISIBILITY than a genuine out-of-RAM situation.


(kat) #17

It's not exclusively fixed by detailing, put it that way. It tends to be more about 'intelligent' structural brush placement, which is kind of similar but not the same in terms of solving an M_M_V. 'malloc' is exclusively BSP data, not 'VIS(ibility)', iirc (although the two go hand in hand, so to speak).


(]UBC[ McNite) #18

I didn't touch the structurals between the compiles, and with a visdatasize of about 125,000 in a map of that size I'd rather say it's damn optimized when it comes to VIS and structural work. There are certainly no small structural brushes, as they show up in the visdatasize pretty much every time and then get turned into detail.
Blocksize is 1024 1024 4096, and that means I have only 1 level of blocks in my map. The map dimensions on the Z-axis are 64 to 2464, so all the horizontal splitting is done by hints throughout the map (or structurals of course). That was my way to avoid lots of leafs, and I reduced them from about 330 average visible to about 220 when I set the Z blocksize to 4096.

I'll try to track down what fucked it up by re-doing, one at a time, all the steps I did to the map that compiled well yesterday.


(=PoW= Kernel 2.6.5) #19

OK - Let me take a wild stab in the dark at this.

I was getting the safe_malloc error on my Windows 2000 laptop with 1 GB of RAM and almost 4 GB of virtual memory.
Yet the map compiles fine on my Linux box with 768 MB RAM and 768 MB swap.

If any C (or C++) Windows people are around maybe they can verify what I’m about to say.

I'm guessing that a safe_malloc() call on Windows means you only want memory allocated from physical RAM and NOT from virtual memory.
I'm guessing that the reason for the error is that q3map2 cannot get enough REAL memory due to all the other running processes that also require real memory.

Anyone want to comment on this theory?


(SCDS_reyalP) #20

safe_malloc just calls malloc and checks the result, and errors out if it fails. So it’s down to what the C runtime library does.

It certainly seems to fail before Windows is entirely out of swap, but exactly what conditions make the Windows malloc decide to fail isn't clear.
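
For reference, the whole function is roughly this (paraphrased from memory of the q3map2 source, with its Error() call swapped for a plain print-and-exit so it stands alone):

    #include <stdio.h>
    #include <stdlib.h>

    /* Wrap malloc and bail out with the message seen above
       if the C runtime can't satisfy the request. */
    void *safe_malloc( size_t size )
    {
        void *p = malloc( size );

        if ( p == NULL )
        {
            fprintf( stderr, "************ ERROR ************\n"
                             "safe_malloc failed on allocation of %lu bytes\n",
                             (unsigned long) size );
            exit( 1 );
        }
        return p;
    }

So the error just means that one malloc() call returned NULL; why the runtime refused (address space, swap, per-process limits) is a separate question.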