Opened 15 years ago
Last modified 22 months ago
#424 reopened defect
Magic filter limits are a bad idea
| Reported by: | Chris Collins | Owned by: | Olly Betts |
|---|---|---|---|
| Priority: | normal | Milestone: | |
| Component: | Omega | Version: | |
| Severity: | normal | Keywords: | |
| Cc: | | Blocked By: | |
| Blocking: | | Operating System: | All |
Description
Concocting magical execution limits is generally a very bad idea - leave this to the sysadmin or user who is more likely to have a good idea of the machine parameters and what is suitable for it.
Patch attached which adds configuration options to configure both the CPU runtime limit and the memory limit.
Attachments (1)
Change History (10)
by , 15 years ago
Attachment: xapian-omega-sysadmin-limits.diff added
comment:1 by , 15 years ago
Severity: major → normal
comment:2 by , 15 years ago
The last ditch catch is the process ulimits set before the indexer is even started. If somebody is running tools without reasonable limits set beforehand, they should be prepared to fix the mess.
The CPU limit is reasonable - it's a safe, fixed value on a parameter that scales relatively well between systems. The problem is that the memory limit is magic: you're trying to discern how much memory a child task should be allowed to use by looking at machine state, rather than just being told what is safe and what isn't, and trusting the OS to do the right thing or already be configured defensively.
The fact that this issue back in 1.0.7 rendered omindex practically useless on Linux should have been a warning sign that you shouldn't be second guessing this information, but you've instead chosen to completely ignore the warning sign and continue with this "oh, but I know better than the machine owner and the OS" attitude.
I'll accept that the patch isn't perfect, but giving some twiddles to the person setting it up is better than rendering the indexer useless because you presumed what was and wasn't acceptable runtime behaviour. Given you haven't done the right thing yourself, I've at least given you something to build on.
comment:3 by , 15 years ago
Our desire is for omindex to work out-of-the-box with sensible default behaviour - forcing all users to perform custom configuration to prevent denial of service issues, and issues with omindex failing to finish due to some bad filters, isn't in line with this aim. This applies whether the configuration required is setting ulimit, or setting up a configuration file. The default process memory ulimit on my (ubuntu) system is "unlimited", so your suggested patch would force me to perform such a configuration step, for example.
Therefore, if we were to start using a configuration option, it would need to default to using some kind of limit, and the current behaviour is probably the default we'd want to use; I haven't heard or thought of a better default than using a (large) proportion of the available memory on a system.
As Olly said, we're not necessarily against adding some configuration in this area, but if there's no actual situation in which users would benefit from adjusting the configuration, we'd just be adding code bloat - so we need a justification for adding configurability in the form of a description of a situation which is helped by the configurability.
I was not aware of this issue in 1.0.7 rendering omindex practically useless on Linux - there doesn't seem to be a ticket about this in this Trac instance, and I can't find anything on the subject with a quick google. Could you provide a reference for this, so we're not missing relevant information, please? I use omindex on Linux regularly and it works for the files I'm passing it; perhaps I'm not using the same filters as you, though.
The limits were added in response to ticket #111 - as Olly said, this was in response to real problems experienced by users. Setting ulimit is another mechanism for limiting resource usage, but would require users to do this explicitly, and anyway I'm not convinced we'd want the limit for the main omindex process to be the same as that for the filter programs - omindex can end up using a significant amount of memory while indexing, without that necessarily representing a problem.
I did find ticket #358 (and the linked http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=548987 ) which indicates that omindex used to use _SC_AVPHYS_PAGES instead of _SC_PHYS_PAGES, and this did cause a problem - however, it's been changed to use _SC_PHYS_PAGES, and I can't see any mention of this not working correctly since 1.0.17 where the fix was applied.
If there's a real (or perhaps even hypothetical) situation in which the current code fails to work well, please describe it to us so we can improve the design and code accordingly.
comment:4 by , 15 years ago
I'm fairly sure the reporter is referring to the issue in #358, as this report in the Debian BTS seems to be from the same person: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=564154
The 7/8 fraction is rather arbitrary - we could just limit to the total physical memory, which should still catch run-away filter processes before they cause the machine to swap excessively.
comment:5 by , 14 years ago
Milestone: → 1.2.4
Resolution: → fixed
Status: new → closed
Memory limit changed to the size of physical memory (rather than 7/8 of that) in trunk r15129.
Nobody has so far said why it would actually be useful to make the limits configurable - realistically, a filter which uses more than the size of physical memory isn't going to be usable, and if you want a smaller limit, you can use ulimit before running omindex (though that would apply to omindex itself too as Richard pointed out).
So I think it's best to just close this ticket now.
comment:6 by , 14 years ago
Resolution: fixed (removed)
Status: closed → reopened
The issue in 1.0.7 was a real usage condition: a busy system with 4 GB of RAM running a 64-bit kernel, building a very large index (process resident size was around 2 GB), would fail once 'free memory' dropped to about 120 MB - the AS limit would prevent fork() from building a valid process (minimum process space is about 128 MB), causing further indexing to fail. Part of this was caused by the incorrect use of RLIMIT_AS (which should never be used in such a clumsy way), but the main problem was the sliding memory limit.
Likewise, on similarly large-memory systems, 7/8 of physical memory is FAR TOO LARGE a limit to be useful - a run-away process has already done significant damage by the time it's been allowed to consume that much memory.
I accept that you need to stop the tools from running away, but you need to do so in a way with reasonable, controllable limits, not some magic number that you think is fine, and is only safe for the limited environment in which you test your code.
comment:7 by , 14 years ago
The only situation not catered for that I can see is wanting to set a more restrictive set of limits for filters than for omindex. Of course, people can do this anyway by wrapping (or auto-wrapping, in a ccache style) the programs invoked as filters.
So I'd be moderately in favour of:
- removing the filter limit entirely
- closing this ticket
and certainly in favour of:
- not making filter limits configurable
- closing this ticket
Either way, configurability doesn't belong inside omindex; use the operating system controls to do this sort of thing.
comment:8 by , 14 years ago
I'm happy with the proposal to remove the limit on the basis that wrappers are easy enough to implement: they will achieve Olly's requirements, make those limits admin-serviceable without rebuilding the binaries, and save omindex from doing a half-assed job of imposing the limit itself.
If that's acceptable to Olly, I'll even produce the patch myself.
comment:9 by , 13 years ago
Milestone: 1.2.4 → (unset)
1.2.4 is a closed milestone, so unsetting (I'm surprised to discover trac allows an open ticket to have a closed milestone set...)
These aren't meant to be "magic" limits, just a last ditch catch for a filter program which has gone into an infinite loop, or a finite loop with insane memory consumption. If they are too tight for a particular genuine situation, they should be relaxed.
I can see you might find them philosophically problematic, but they were added in response to actual instances of filter programs misbehaving in these ways, which prevents indexing the content. So these limits address a potential denial of service by someone able to supply content to the indexer, which is a common scenario. I don't see an alternative way to address this issue, but I'm happy to hear suggestions.
I'm not totally averse to making them configurable (though I'm dubious there's a practical benefit), but a default of "no protection" would be a regression on this denial of service issue.
Also, omega.conf is (at least currently) configuration for the omega CGI only. If we're going to use it from the indexers, then the current search behaviour (environment variable, then "same directory as the omega CGI", then sysconfdir) needs considering, as it means the CGI and the indexers can find different configuration files, which is likely to catch some people out.
Your patch is missing any documentation of the new options. Also it would be better to use the standard functions for parsing integers.