Opened 5 years ago

Closed 12 months ago

#781 closed defect (fixed)

Sockets not closed on RemoteDatabase error during initialisation

Reported by: German M. Bravo
Owned by: Olly Betts
Priority: normal
Milestone: 1.4.23
Component: Backend-Remote
Version: 1.4.11
Severity: normal
Keywords:
Cc:
Blocked By:
Blocking:
Operating System: All

Description

Sockets are never closed when the remote database throws an exception during RemoteDatabase initialisation.

The reason is that update_stats() (https://github.com/xapian/xapian/blob/789fb5215c63240c178bf98ad1e79544d682dc80/xapian-core/backends/remote/remote-database.cc#L94), or any other part of the constructor body, can throw an exception. Since the object isn't fully constructed yet, its destructor isn't called, so RemoteConnection's destructor never runs either, leaving an unreferenced open socket.

To fix this, I propose wrapping the body of the constructor in a try { ... } catch (...) { do_close(); throw; } block.
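
A minimal, generic sketch of that pattern (illustrative only - the class, member and function names here are invented, not the actual RemoteDatabase code):

#include <unistd.h>
#include <stdexcept>

// If a constructor body throws, the half-constructed object's destructor is
// never run, so the constructor itself has to release the socket.
class Connection {
    int fd;
    void handshake() { throw std::runtime_error("protocol error"); }

  public:
    explicit Connection(int fd_) : fd(fd_) {
        try {
            handshake();   // stands in for update_stats() etc.
        } catch (...) {
            close(fd);     // ~Connection() won't be called, so close here
            throw;         // re-throw the original exception
        }
    }

    ~Connection() { close(fd); }
};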

Attachments (4)

tst-xapian_remote.cc (2.2 KB) - added by German M. Bravo 5 years ago.
Reproducer of the socket leak
tst-remote-leak.cc (1.1 KB) - added by Olly Betts 5 years ago.
C++ helper
trigger-remote-fd-leak (389 bytes) - added by Olly Betts 5 years ago.
script to reproduce
remote-leak-test.cc (1.3 KB) - added by Olly Betts 4 years ago.
All-in-one C++ reproducer


Change History (31)

comment:1 by German M. Bravo, 5 years ago

comment:2 by Olly Betts, 5 years ago

It would be really good to have a regression test for this, to (a) make sure it is actually fixed and (b) ensure it stays fixed. Do you have a good way to trigger it on demand that we could turn into one?

comment:3 by Olly Betts, 5 years ago

Thinking about this some more, the basic idea of the fix seems right, but:

  • I'm not sure do_close() is quite the right way to achieve this - in particular it will try to wait for confirmation of the close for a writable database, which is probably foolish if we hit NetworkError. There can't be any changes to be written or committed at this point; it's possible the remote end managed to lock the database, but we aren't likely to be able to wait for it to confirm it has unlocked it. I'm thinking link.do_close(false) is a better approach.
  • If whatever we call to close the connection has any likelihood of throwing, we probably should catch and discard any exceptions from it (because it's more useful to re-throw the original exception than any exception we hit while trying to close the connection). If we switch to link.do_close(false) this is much less of a concern, since that boils down to one or two calls to close() - nothing in that code path should throw, as far as I can see. (A sketch of this shape follows the list.)
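
To make the second point concrete, the suggested shape would be roughly this (a sketch only, not the committed fix - it assumes the existing link member and constructor body of remote-database.cc):

try {
    // ... existing RemoteDatabase constructor body, which may throw ...
} catch (...) {
    try {
        link.do_close(false);  // just close, don't wait for confirmation
    } catch (...) {
        // Discard anything thrown while closing, so the original
        // exception is what propagates to the caller.
    }
    throw;
}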

comment:4 by German M. Bravo, 5 years ago

As an alternative, we could also put a try { ... } catch (...) { ... } around the calling code, where the object is being created, and close the socket directly there.

comment:5 by German M. Bravo, 5 years ago

If we use link.do_close(false), what about exceptions thrown by end_transaction()/commit() (as stated in the comment in do_close(), line 618)?

comment:6 by Olly Betts, 5 years ago

If the constructor fails we don't have an object so I'm not sure I see how we can close the socket in the calling code in that case.

It also seems a cleaner design for objects to deal with cleaning up their own resources, rather than relying on the caller having to do something in an error case (which is all too easy to fail to do).

Line 618 doesn't seem to be in do_close() on master (I guess you're looking at an unclean tree?), but the comment I think you must mean isn't relevant to what I'm suggesting, because (a) as the comment says, dtor_called() swallows any exceptions (it's intended to be called from the destructor, and we must not throw an exception from a destructor because destructors get called while propagating an exception), and (b) with my proposal we wouldn't be calling dtor_called() anyway (it's pointless to do so since there can't be an active transaction or anything to commit at this point).
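
A tiny standalone illustration of why throwing from a destructor during exception propagation is fatal (not Xapian code - just the language rule that dtor_called() has to work around):

#include <stdexcept>

struct Bad {
    // noexcept(false) so the throw isn't rejected up front.
    ~Bad() noexcept(false) { throw std::runtime_error("from destructor"); }
};

int main() {
    try {
        Bad b;
        throw std::runtime_error("original");  // unwinding destroys b ...
    } catch (...) {
        // ... but ~Bad() throws while that exception is still propagating,
        // so std::terminate() is called and we never get here.
    }
}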

Do you have a nice way to reproduce the socket leak we could use in a regression test?

comment:7 by Olly Betts, 5 years ago

It occurs to me that if there were a variant of RemoteConnection which owned its fds and was responsible for closing them, then this would be a non-problem (because the link object's destructor should get called even if the RemoteDatabase object's constructor hasn't completed).

Looking at remoteconnection.cc, the current semantics of the ownership of these fds is a little murky - the RemoteConnection object only closes them if explicitly asked to, except that if its read_at_least() method gets a zero-size read indicating EOF then it calls do_close(false). That seems buggy, because if the parent then tries to close those fds they may have been reused and now refer to other databases (this may only manifest in multi-threaded code). I've not tried to follow the code paths yet to see if this is a problem in practice.
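
A generic sketch of that idea (names are invented; this is not the OwnedRemoteConnection class that comment 17 later adds): a fully-constructed member which owns and closes its fds gets destroyed even when the enclosing constructor throws, so the leak can't happen.

#include <unistd.h>
#include <stdexcept>

class FdOwner {
    int fd;

  public:
    explicit FdOwner(int fd_) : fd(fd_) {}
    ~FdOwner() { if (fd >= 0) close(fd); }   // always releases the fd
};

class Database {
    FdOwner link;   // fully constructed before the constructor body runs

  public:
    explicit Database(int fd) : link(fd) {
        // If this throws, ~Database() is never called, but ~FdOwner() for
        // 'link' still runs, so the socket is closed anyway.
        throw std::runtime_error("handshake failed");
    }
};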

comment:8 by German M. Bravo, 5 years ago

That'll be a problem too. In my experience, properly handling file descriptors is always tricky, to say the least. What I do is only close file descriptors in the destructor of the owner object, and use shutdown() everywhere else. Otherwise really nasty things can happen, such as file descriptors being used where they shouldn't be.
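
A minimal sketch of that convention (generic, not Xapian code): everything except the owner's destructor uses shutdown(), which stops I/O but keeps the descriptor number allocated so it can't be recycled and accidentally reused; only the destructor calls close():

#include <sys/socket.h>
#include <unistd.h>

class SocketOwner {
    int fd;

  public:
    explicit SocketOwner(int fd_) : fd(fd_) {}

    // Anything that wants to terminate the connection early uses shutdown();
    // the fd number stays allocated (and owned here) until the destructor.
    void terminate() { shutdown(fd, SHUT_RDWR); }

    ~SocketOwner() { if (fd >= 0) close(fd); }  // the only place we close()
};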

by German M. Bravo, 5 years ago

Attachment: tst-xapian_remote.cc added

Reproducer of the socket leak

comment:9 by German M. Bravo, 5 years ago

I've added a reproducer which shows the socket leak. It implements the most basic server, which throws an exception after the client opens the connection.

Run it as c++ -std=c++14 -o tst-xapian_remote -lxapian tst-xapian_remote.cc && ./tst-xapian_remote. After a while, you'll start receiving NetworkError: Couldn't connect (context: remote:tcp(localhost:5000)) (Can't assign requested address), once there are no longer any file descriptors available.

comment:10 by Olly Betts, 5 years ago

Thanks. I needed to link with -pthread and apply this code tweak:

--- tst-xapian_remote.cc.orig	2019-05-01 18:37:20.246441957 +1200
+++ tst-xapian_remote.cc	2019-05-01 18:38:25.242556029 +1200
@@ -5,6 +5,7 @@
 #include <iostream>
 #include <string>
 #include <cstdio>
+#include <cstring>
 #include <cassert>
 #include <thread>
 
@@ -58,7 +59,7 @@
 	}
 
 	struct sockaddr_in serv_addr;
-	memset(&serv_addr, 0, sizeof(serv_addr));
+	std::memset(&serv_addr, 0, sizeof(serv_addr));
 	serv_addr.sin_family = AF_INET;
 	serv_addr.sin_addr.s_addr = INADDR_ANY;
 	serv_addr.sin_port = htons(port);

But on Linux it starts to fail with:

NetworkError: Couldn't resolve host localhost (context: remote:tcp(localhost:5000)) (No address associated with hostname)

I guess that's a symptom of running out of fds.

comment:11 by Olly Betts, 5 years ago

Indeed it seems to be - this shows the first unused fd increasing by 2 each time (the sleep(1) is to allow the server thread time to exit):

--- tst-xapian_remote.cc.orig	2019-05-01 18:37:20.246441957 +1200
+++ tst-xapian_remote.cc	2019-05-01 18:45:34.599309013 +1200
@@ -5,6 +5,7 @@
 #include <iostream>
 #include <string>
 #include <cstdio>
+#include <cstring>
 #include <cassert>
 #include <thread>
 
@@ -58,7 +59,7 @@
 	}
 
 	struct sockaddr_in serv_addr;
-	memset(&serv_addr, 0, sizeof(serv_addr));
+	std::memset(&serv_addr, 0, sizeof(serv_addr));
 	serv_addr.sin_family = AF_INET;
 	serv_addr.sin_addr.s_addr = INADDR_ANY;
 	serv_addr.sin_port = htons(port);
@@ -86,12 +87,16 @@
 int main() {
 	std::thread server(fake_server);
 
-	while (true) {
+	for (int i = 0; i < 10; ++i) {
 		try {
 			auto wsdb = Xapian::Remote::open_writable("localhost", port, 10000, 10000, Xapian::DB_CREATE_OR_OPEN);
 		} catch (const Xapian::Error& exc) {
 			std::cerr << exc.get_description() << std::endl;
 		}
+		sleep(1);
+		int fd = dup(1);
+		std::cerr << "First unused fd = " << fd << std::endl;
+		close(fd);
 	}
 
 	return 0;

comment:12 by German M. Bravo, 5 years ago

I suppose the error looks different on Linux, yes.

In the chat, Olly asked:

doesn't the fake server part of your reproducer leak fds?

The server doesn't ever close the socket, yes, but you could (and maybe should) also close the socket as the last line of fake_server_process() by calling close(sock), so the server doesn't leak. The reproducer will still work and show the problem, since the file descriptor on the client's end is still never closed.

but even patching that and using fork() to run the fake server I'm not able to reproduce it showing your patch as fixing the leak

Apparently, after reading your last comment above, you got it working?

comment:13 by Olly Betts, 5 years ago

Apparently, after reading your last comment above, you got it working?

No - my last comment above was before those on chat - in that comment I thought I could reproduce the leak, because the reproducer failed due to running out of fds (once I did the simple patching to get it to build on Linux).

But after I made the comment I then tried to use the reproducer to show the fix worked. I had to adjust it to run with xapian git master (it hard codes the remote protocol version for 1.4.x), but then after applying your patch to git master it still reported leaks - that presumably either means the reproducer is buggy or your patch doesn't solve the leak.

Then I spotted the server thread leaked fds, so moved that to a child process. But the parent process still leaks fds.

I'll try to debug what's going on.

comment:14 by Olly Betts, 5 years ago

I think there's something amiss with the fake server code, but I'm not sure what (aside from it implementing the 1.4.x remote protocol version which git master doesn't support).

However, I made a reproducer which uses the standard server (and a bit of low cunning). It's (currently) a shell script and C++ helper.

In the xapian-core directory of a built git master xapian tree:

$ c++ -Iinclude -std=c++14 -o tst-remote-leak .libs/libxapian-1.5.so -Wl,-rpath ~+/.libs tst-remote-leak.cc
$ ./trigger-remote-fd-leak
Starting xapian-tcpsrv
Starting writable server on host localhost, port 5050
Listening...
Opening DB for writing in background
olly     15973  0.0  0.0  38924  4812 pts/3    S+   11:23   0:00 /home/olly/git/xapian/xapian-core/examples/.libs/simpleindex tmp.db
olly     15989  0.0  0.0   6148   892 pts/3    S+   11:23   0:00 grep simpleindex
Trying to open remote
First unused fd = 3
total 0
lrwx------ 1 olly olly 64 May  2 11:23 0 -> /dev/pts/3
lrwx------ 1 olly olly 64 May  2 11:23 1 -> /dev/pts/3
lrwx------ 1 olly olly 64 May  2 11:23 2 -> /dev/pts/3
Connection from a00:d818:: port 6360
DatabaseLockError: REMOTE:Unable to get write lock on tmp.db: already locked (context: remote:tcp(localhost:5050))
First unused fd = 5
total 0
lrwx------ 1 olly olly 64 May  2 11:23 0 -> /dev/pts/3
lrwx------ 1 olly olly 64 May  2 11:23 1 -> /dev/pts/3
lrwx------ 1 olly olly 64 May  2 11:23 2 -> /dev/pts/3
lrwx------ 1 olly olly 64 May  2 11:23 3 -> 'socket:[174877618]'
lrwx------ 1 olly olly 64 May  2 11:23 4 -> 'socket:[174877619]'
Done

Your patch helps, but it still appears to leak one socket. Same with my alternate patch.

by Olly Betts, 5 years ago

Attachment: tst-remote-leak.cc added

C++ helper

by Olly Betts, 5 years ago

Attachment: trigger-remote-fd-leak added

script to reproduce

comment:15 by Olly Betts, 5 years ago

And it's not something like a cached socket in the libc DNS resolver code - if I loop in the helper I get an additional socket still open after each attempt.

comment:16 by Olly Betts, 5 years ago

Aha, there's another unrelated bug which causes us to leak sockets in some situations.

comment:17 by Olly Betts, 5 years ago

OK, I've pushed a fix for that other bug ([245d4fdd1aee03ac60e8996f26ce570536471ba2]) and a fix for this one ([3dcd784bc105d7dc0db10bba1c9a8be28f52611b], which adds an OwnedRemoteConnection subclass that takes care of releasing the fds, and uses it in the appropriate cases).

I haven't yet turned the reproducer into a testcase in the testsuite - I think it can be done, but I need to think about how best to do it. Keeping this ticket open so I don't forget.

comment:19 by Olly Betts, 5 years ago

Status: new → assigned
Summary: Sockets never closed on RemoteDatabase errors → Sockets not closed on RemoteDatabase error during initialisation

comment:20 by Olly Betts, 5 years ago

Milestone: 1.4.12

comment:21 by Olly Betts, 5 years ago

Priority: high → normal
Severity: blocker → normal
Version: 1.4.11

comment:22 by Olly Betts, 5 years ago

Milestone: 1.4.12 (removed)

Not a blocker for 1.4.12.

by Olly Betts, 4 years ago

Attachment: remote-leak-test.cc added

All-in-one C++ reproducer

comment:23 by Olly Betts, 4 years ago

The latest attachment is closer to something we can put in the testsuite - it's a single C++ source and will exit 0 for success and 1 for failure. I've checked it passes for current git master and fails if I revert the changes in 3dcd784bc105d7dc0db10bba1c9a8be28f52611b.

It doesn't really fit with how our test harness works, so it would need to be a separate test program I think. It really ought to start the server on a free port dynamically, like our test harness does.
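
For the "free port" part, one common approach (a generic sketch, not necessarily how the Xapian test harness does it) is to bind to port 0 and ask the kernel which port it picked:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { std::perror("socket"); return 1; }

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;  // 0 means "let the kernel pick a free port"

    if (bind(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        std::perror("bind");
        return 1;
    }

    socklen_t len = sizeof(addr);
    if (getsockname(s, reinterpret_cast<sockaddr*>(&addr), &len) < 0) {
        std::perror("getsockname");
        return 1;
    }
    std::printf("Kernel assigned port %d\n", ntohs(addr.sin_port));
    close(s);
}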

comment:24 by Olly Betts, 3 years ago

Milestone: 1.5.0

I thought of this very simple approach:

diff --cc xapian-core/backends/remote/remote-database.cc
index 5fb53b353045,8c2d469c1595..000000000000
--- a/xapian-core/backends/remote/remote-database.cc
+++ b/xapian-core/backends/remote/remote-database.cc
diff --git a/xapian-core/tests/api_wrdb.cc b/xapian-core/tests/api_wrdb.cc
index 9be2c04196e4..1bc1c4e1a156 100644
--- a/xapian-core/tests/api_wrdb.cc
+++ b/xapian-core/tests/api_wrdb.cc
@@ -1972,3 +1972,9 @@ DEFINE_TESTCASE(protocolbug1, remote && writable) {
                   db.replace_document(1, doc));
     db.commit();
 }
+
+DEFINE_TESTCASE(remotefdleak1, remote && writable) {
+    Xapian::WritableDatabase wdb = get_writable_database("");
+    TEST_EXCEPTION(Xapian::DatabaseLockError,
+                  auto wsdb = get_writable_database_again());
+}

With 3dcd784bc105d7dc0db10bba1c9a8be28f52611b reverted:

$ ./runtest ./apitest -v remotefdleak1
Running test './apitest -v remotefdleak1' under eatmydata and valgrind
Running tests with backend "honey"...
Running tests with backend "none"...
Running tests with backend "inmemory"...
Running tests with backend "glass"...
Running tests with backend "singlefile_glass"...
Running tests with backend "multi_glass"...
Running tests with backend "multi_glass_remoteprog_glass"...
Running test: remotefdleak1... POSSIBLE UNRELEASED MEMORY - RETRYING TEST ok
./apitest backend multi_glass_remoteprog_glass: All 1 tests passed.
Running tests with backend "multi_remoteprog_glass"...
Running test: remotefdleak1... POSSIBLE UNRELEASED MEMORY - RETRYING TEST FDLEAK: 10 -> socket:[66855]
./apitest backend multi_remoteprog_glass: 0 tests passed, 1 failed.
Running tests with backend "remoteprog_glass"...
Running test: remotefdleak1... POSSIBLE FDLEAK: 7 -> socket:[64031] FDLEAK: 8 -> socket:[64037]
./apitest backend remoteprog_glass: 0 tests passed, 1 failed.
Running tests with backend "remotetcp_glass"...
Running test: remotefdleak1... FAIL: EXCEPTION std::string Failed to get 'Listening...' from command './runsrv ../bin/xapian-tcpsrv --one-shot --interface 127.0.0.1 --port 1239 -t300000 --writable .glass/dbw' (output: DatabaseLockError: Unable to get write lock on .glass/dbw: already locked
)
./apitest backend remotetcp_glass: 0 tests passed, 1 failed.
./apitest total: 1 tests passed, 3 failed.

And with current git master:

$ ./runtest ./apitest -v remotefdleak1
Running test './apitest -v remotefdleak1' under eatmydata and valgrind
Running tests with backend "honey"...
Running tests with backend "none"...
Running tests with backend "inmemory"...
Running tests with backend "glass"...
Running tests with backend "singlefile_glass"...
Running tests with backend "multi_glass"...
Running tests with backend "multi_glass_remoteprog_glass"...
Running test: remotefdleak1... POSSIBLE UNRELEASED MEMORY - RETRYING TEST ok
./apitest backend multi_glass_remoteprog_glass: All 1 tests passed.
Running tests with backend "multi_remoteprog_glass"...
Running test: remotefdleak1... POSSIBLE UNRELEASED MEMORY - RETRYING TEST ok
./apitest backend multi_remoteprog_glass: All 1 tests passed.
Running tests with backend "remoteprog_glass"...
Running test: remotefdleak1... ok
./apitest backend remoteprog_glass: All 1 tests passed.
Running tests with backend "remotetcp_glass"...
Running test: remotefdleak1... FAIL: EXCEPTION std::string Failed to get 'Listening...' from command './runsrv ../bin/xapian-tcpsrv --one-shot --interface 127.0.0.1 --port 1239 -t300000 --writable .glass/dbw' (output: DatabaseLockError: Unable to get write lock on .glass/dbw: already locked
)
./apitest backend remotetcp_glass: 0 tests passed, 1 failed.
./apitest total: 3 tests passed, 1 failed.

So if we skipped it for remotetcp we'd have a regression test, but only for the prog variant.

I'll ponder if there's a way to get it to work for the tcp variant too.

comment:25 by Olly Betts, 3 years ago

The tcp failure is because this tries to start a second remote server rather than a second connection to the existing server.

Perhaps we should route all database access through stub files; then we could ask for the path of the database and do:

WritableDatabase wsdb(get_writable_database_path_again());

(Except that method isn't exposed currently, but it could be.)

comment:26 by Olly Betts, 12 months ago

A recent change has made the testcase above work for remotetcp too: 2bf8eee128456d0d2076096ab2b56f47b0b65776

The fix for this no longer cleanly reverts, so instead I tried applying this change to the parent of the original fix (3dcd784bc105d7dc0db10bba1c9a8be28f52611b).

Running test './apitest -v remotefdleak1' under eatmydata
Running tests with backend "honey"...
Running tests with backend "none"...
Running tests with backend "inmemory"...
Running tests with backend "glass"...
Running tests with backend "singlefile_glass"...
Running tests with backend "multi_glass"...
Running tests with backend "remoteprog_glass"...
Running test: remotefdleak1... POSSIBLE FDLEAK: 5 -> socket:[6132367] FDLEAK: 6 -> socket:[6132372]
./apitest backend remoteprog_glass: 0 tests passed, 1 failed.
Running tests with backend "remotetcp_glass"...
Running test: remotefdleak1...

and it seems to hang there. A clean failure without the fix would be nicer, but this will do.

comment:27 by Olly Betts, 12 months ago

Milestone: 1.5.0 → 1.4.23
Resolution: fixed
Status: assigned → closed