sigsegv


(jaybird) #1

Does anyone know how to force a core drop in linux on segmentation faults?

I’ve tried resetting SIGSEGV to SIG_DFL in a couple different ways, but when the server crashes it does not create a core file. I’ve looked in the et root, mod directory, and in /tmp.

I force a crash by calling ClientUserinfoChanged(-1) in G_RunFrame.

I’m running the code to change the signal action to default right before G_Init is called.


(tjw) #2

You will not be able to get a core.

However you can run et inside of gdb directly:

‘gdb ./etded.x86’

Also, you can attach gdb to a running instance of et. This works best since you can have your et console and gdb console in two separate terminals.

‘gdb ./etded.x86 `pidof etded.x86`’

Note that this command will halt the running instance of etded.x86. You’ll need to run ‘continue’ in the gdb console to get it to start running again.


(jaybird) #3

Actually I just figured it out.
You can get core files. If you want the code, tjw, let me know and I’ll hook it up.

The problem I had was that core files were seemingly disabled by default on my linux distro.
Running ‘ulimit -c unlimited’ from a bash shell fixed that problem. I’m getting very helpful core files now ;]
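For reference, the ulimit step looks like this (a minimal sketch; bash is assumed, and the raised soft limit only applies to processes started from that same shell):

```shell
# raise the soft core-size limit for this shell and its children
ulimit -c unlimited

# verify the new limit; prints "unlimited"
ulimit -c

# the server must then be started from this same shell so it
# inherits the limit, e.g.:
#   ./etded.x86 +set dedicated 2
```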

I need core dumps because a couple of my users are experiencing crashes that I’m unable to duplicate. Sending a core dump is the easy way to debug those.


(tjw) #4

I know I spent several hours once trying to get etded.x86 to dump a core and I was unsuccessful. I came to the conclusion that signal 11 was being trapped in the binary. Perhaps that was back in the 2.56 days and 2.60 changed things? Was there anything special you had to do (aside from setting your ulimit for core size)?


(jaybird) #5

Yeah, you need to manually override the signal handler that the main binary installs. I sent forty the code; I’ll PM it to you as well. Note that I only have this working for Linux. I still have no idea how to get something similar for win32 applications (any ideas on how to do this would be much appreciated).


(Mr.Mxyzptlk) #6

Wrap something like this in an #ifdef or behind a cvar check somewhere early in the game module
initialization process, like inside dllEntry(), vmMain(), or CG_Init(). Next, make sure
your linux distro enables cores.


    /* needs <signal.h> and <errno.h> at the top of the file;
       errorf()/debugf() stand in for the mod's own logging helpers */
    struct sigaction sa;

    /* fetch the current SIGSEGV action (the engine may have trapped it) */
    int rv = sigaction(SIGSEGV, 0, &sa);
    if (rv == -1)
        errorf("get SIGSEGV failed: %d\n", errno);

    /* restore the default action so a segfault drops a core again */
    sa.sa_handler = SIG_DFL;
    sa.sa_sigaction = 0;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;

    rv = sigaction(SIGSEGV, &sa, 0);
    if (rv == -1)
        errorf("set SIGSEGV failed: %d\n", errno);

    debugf("coredumps enabled\n");


(jaybird) #7

Well, the Windows dumps kinda took care of themselves. I had no idea what Dr. Watson did before, but the handy little fellow is giving me some nice dumps to debug with ;]


(jaybird) #8

Just curious how the etpro guys managed this in etpro?