clarry: What's incorrect about it? I'm not even sure there's a legit use case for handling sigsegv today, I think these programs should just be unconditionally killed.
dtgreene: Well, the program *does* crash when it's executed, right?
Right, but if it handles SIGSEGV, it could avoid crashing. I don't think that's something unprivileged code should be able to do, ever, as a program that's intended to be native and portable and run on a system with virtual memory. Maybe there's some excuse for emulators and such that need to support broken non-native code, but even then I think there should be proper architecture in place instead of spaghetti-ing things together with SIGSEGV handlers, unportable mmap extensions/assumptions, et cetera.
Debuggers and debugging tools often need to handle SIGSEGV in order to allow such errors to be debugged. (valgrind is an example here; if a SIGSEGV occurs, it prints a backtrace and a summary of memory usage.)
IMHO that's just a hack to work around the lack of better architecture, as expected from a system designed in the 70s. About as hacky as what ldd did (and on many systems still does), which is to just set an environment variable and then execute the program whose library dependencies you wanted to inspect, with the expectation that the run-time link editor does the right thing. Ugh.
What you can do safely or correctly in a signal handler is not much, so real debuggers are going to need something more advanced like ptrace anyway. I think that's kinda the way things should've always been; the system should provide an out-of-band (and higher privilege) mechanism for debuggers to interact/interfere with the debuggee. Delivering signals to the debuggee in order to enable some sort of limited debugging is very very backwards. If there's some asynchronous event that the debugger should be able to be notified of and which cannot be monitored e.g. by hooking into system calls, then provide a mechanism for the debugger only to subscribe to such events.
Also, IIRC user mode linux uses SIGSEGV internally. Interestingly enough, an OS kernel handles the kernel-mode equivalent of this itself: on a page fault, the kernel checks whether the memory reference was valid, and if it was, loads the memory contents from disk (either from the file backing the memory region, or from the swap partition/file), then lets the faulting process continue as if nothing happened.
Sure, handling page faults and other hardware exceptions in kernel (or other more privileged, if we want to consider microkernel architecture) code is fair game. I'm not sure how I feel about instruction emulation; I haven't thought about it enough to say whether it's reasonable to handle unimplemented instructions in userspace software without kernel intervention (and the associated performance penalty).