#111 closed defect (released)
omindex should impose resource limits on filter programs
Reported by: | Olly Betts | Owned by: | Olly Betts |
---|---|---|---|
Priority: | normal | Milestone: | |
Component: | Omega | Version: | SVN trunk |
Severity: | normal | Keywords: | |
Cc: | Blocked By: | ||
Blocking: | #120 | Operating System: | All |
Description
It would be a good idea to have some mechanism to limit the CPU and memory that a filter program can use: it's better to skip a problematic file than to have the indexer hang, or worse, render the server unusable because a filter program entered an infinite loop or consumed excessive amounts of memory.
setrlimit() allows limiting CPU and memory usage, but we still need some way to determine a sane memory limit. sysconf(_SC_AVPHYS_PAGES) seems suitable, on Linux at least.
Attachments (2)
Change History (7)
comment:1 by , 18 years ago
Status: | new → assigned |
---|
comment:3 by , 17 years ago
Partly committed to SVN - filters are now limited to 5 minutes of CPU time.
comment:4 by , 17 years ago
attachments.isobsolete: | 0 → 1 |
---|
comment:5 by , 17 years ago
Resolution: | → fixed |
---|---|
Status: | assigned → closed |
I've committed this patch (with an extra doxygen comment). It's not going to grow implementations for other platforms just sitting as an attachment here...
comment:6 by , 17 years ago
Operating System: | → All |
---|---|
Resolution: | fixed → released |
Solaris has sysconf(_SC_AVPHYS_PAGES) too:
http://bama.ua.edu/cgi-bin/man-cgi?sysconf+3C
We need to multiply it by sysconf(_SC_PAGESIZE) to get bytes, but being careful that the value doesn't overflow.
This wiki page may be useful:
http://www.net-snmp.org/wiki/index.php/Memory_HAL
Perhaps we should use something like:
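The snippet this comment introduced isn't preserved in this copy of the ticket. As a hedged sketch of the computation described just above (available pages times page size, taking care the product doesn't overflow), it might look something like the following; the function name and clamping behaviour are assumptions, not the actual proposal:

```c
#include <stdint.h>
#include <unistd.h>

/* Return available physical memory in bytes, clamping rather than
 * overflowing if the product won't fit in 64 bits; returns 0 if the
 * sysconf values are unavailable on this platform. */
static uint64_t avphys_bytes(void) {
    long pages = sysconf(_SC_AVPHYS_PAGES);
    long pagesize = sysconf(_SC_PAGESIZE);
    if (pages <= 0 || pagesize <= 0)
        return 0;
    uint64_t p = (uint64_t)pages;
    uint64_t sz = (uint64_t)pagesize;
    /* Guard the multiplication: clamp instead of wrapping around. */
    if (p > UINT64_MAX / sz)
        return UINT64_MAX;
    return p * sz;
}
```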