(foreign functions) Safe release of resources during cleanup

During cleanup, the normal order of reclaiming blobs isn’t followed. Quoting from the documentation:

These objects are reclaimed regardless of their registration count. The order in which the atoms or blobs are reclaimed under PL_cleanup() is undefined.

Streams are a kind of blob (and therefore a kind of atom), and their cleanup order is also undefined. One consequence is that if a stream is created using Snew(), the garbage collector can deadlock.

This is @jan’s response (lightly edited) when I sent a gdb traceback to him:

Shutdown first kills all threads, including the gc thread, so the final cleanup is single threaded (and skipped if killing the threads failed).

From the stack dump, this can happen if an archive becomes subject to GC that is still open (my guess, might be wrong). That poses an interesting problem. As the thread that abandoned the archive still has a lock on its streams we cannot close them and neither can the initiating thread as, while it has a lock, it no longer knows about the file.

This is an interesting case. I’ve considered before whether to generally close I/O streams when reclaiming stream blobs. This shows that doing so is problematic.

I guess we have some options. I think all of these options imply adding a function to pl-stream.c that must be called to release streams from GC. I see some possible functionality:

  • If the stream is locked, leave it alone. Fairly safe, but loses precious resources such as file descriptors.
  • Forcefully reclaim the stream and destroy it. As it is garbage, that should be safe. The stream may be linked to other (stream) resources that may also be garbage collected before or after us.

Might get complicated. Normal execution can guarantee the order of garbage collection between dependent objects using references on the atoms/blobs. Shutdown cannot. Do we need some API to find out that the system is performing its shutdown?

It’s not clear to me that an API for checking whether we’re in shutdown suffices. Such an API would help avoid double-frees, but relying on it could instead result in leaked resources.

The problem occurs if BLOB1 points to BLOB2 but BLOB2 is freed first. There are two ways of detecting that BLOB1 points to BLOB2:

  1. BLOB2 keeps some kind of reference count
  2. BLOB1 calls PL_register_atom(BLOB2) when it stores the reference and PL_unregister_atom(BLOB2) when it removes it.

For scenario #1, the “in shutdown” API can be used to prevent BLOB2 from freeing itself if its reference count is non-zero; eventually BLOB1 will be freed and it can call BLOB2’s release callback.

But scenario #2 seems to require re-running GC over both atoms and streams (PL_get_stream() and PL_release_stream() are similar to PL_register_atom() and PL_unregister_atom() in this respect). (In theory, there could be as many GC passes as blobs). [It’s not clear to me if PL_unregister_atom() also needs to check for being in shutdown, which could add unacceptable overhead for the normal situation.]
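
For concreteness, here is a minimal sketch of what scenario #2 looks like in foreign code. The blob type and helper names are made up; only PL_register_atom()/PL_unregister_atom(), PL_blob_data() and the PL_blob_t release hook are real API:

#include <SWI-Stream.h>
#include <SWI-Prolog.h>

typedef struct dependent_blob        /* hypothetical payload of BLOB1 */
{ atom_t parent;                     /* atom handle of BLOB2, or 0 */
  /* ... other data ... */
} dependent_blob;

static void
set_parent(dependent_blob *b, atom_t parent)
{ b->parent = parent;
  PL_register_atom(parent);          /* under normal AGC, BLOB2 can no      */
}                                    /* longer be reclaimed before BLOB1    */

static int                           /* release hook of BLOB1 */
release_dependent(atom_t a)
{ size_t len;
  PL_blob_t *type;
  dependent_blob *b = PL_blob_data(a, &len, &type);

  if ( b && b->parent )
    PL_unregister_atom(b->parent);   /* drop the reference; BLOB2 may go now */

  return TRUE;
}

static PL_blob_t dependent_blob_type =
{ PL_BLOB_MAGIC,
  0,                                 /* flags */
  "dependent_blob",
  release_dependent,                 /* release */
  NULL,                              /* compare */
  NULL,                              /* write */
  NULL                               /* acquire */
};

Under normal operation the registration guarantees that BLOB2 outlives BLOB1; during PL_cleanup() it does not, which is what the multiple-GC-passes question above is about.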

Thanks for starting this. There are two scenarios though. The deadlock in gc that you showed before is easy to reproduce. Put this in a file and run ?- t. Note that you should not call archive_open/4 from the toplevel, as the toplevel remembers the bindings and thus the archive handle does not become garbage.

t :-
    archive_open('z.zip', read, Handle, []),
    writeln(Handle).

Now, either immediately or after calling ?- garbage_collect_atoms., the system will deadlock. The reason is that the archive blob release hook calls Sclose() on the archive’s stream from the gc thread, while the main thread opened and locked the stream. It is safe to reclaim this archive and its open stream, as nobody can access them any longer. We could implement this by adding e.g. SForceClose(), which would not try to lock the stream but simply destroy the stream and its lock. Alternatively we could add e.g. STryClose(), which would detect that the stream is locked and simply return without doing anything.
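
The shape of the deadlock, in simplified code (archive_wrapper and its stream field are stand-ins here, not the actual package(archive) structures):

#include <SWI-Stream.h>
#include <SWI-Prolog.h>

typedef struct archive_wrapper      /* simplified stand-in */
{ IOSTREAM *stream;                 /* stream the archive was opened on */
  /* ... */
} archive_wrapper;

static int
release_archive(atom_t a)           /* blob release hook; runs in the gc thread */
{ size_t len;
  PL_blob_t *type;
  archive_wrapper *ar = PL_blob_data(a, &len, &type);

  if ( ar && ar->stream )
    Sclose(ar->stream);             /* Sclose() wants the stream lock, which is */
                                    /* still held by the thread that opened and */
  return TRUE;                      /* then abandoned the archive: deadlock     */
}

Either proposal would replace the Sclose() call above; note that neither SForceClose() nor STryClose() exists at this point.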

During shut-down (PL_cleanup()) we have a different problem. If we have an archive with an open entry we have (at least) two blobs: the archive blob and the stream blob that represents the open entry. Normally the latter should be released first, which happens because opening the archive entry references the archive blob. During PL_cleanup() though, both are reclaimed, but the order is not defined (and may vary between runs). The archive code must have enough information to ensure no bad things happen regardless of the order in which the two are released.

Calling PL_blob_data() on an atom that was released is safe. It currently returns a length of zero and a content that represents the const char* "<reclaimed>" while the type is a unique constant that will cause a crash if you try to use it. Note that under normal operation it may also return the content of a newly allocated atom/blob. During PL_cleanup() though, we are sure the handle will not be reused.

I’m tempted to add the two SClose() alternatives (or maybe an SCloseEx(Stream, Flags)?) and change PL_blob_data() to return NULL for content and type (as well as length) to provide a reliable way to verify that a blob handle that you own in foreign code has already been released (during shutdown; during normal operation this is not reliable, but it should never happen that the blob is released early). @peter.ludemann, would that suffice?
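
With that change, foreign code that still owns an atom_t could test for an already-released blob along these lines (a sketch; the helper name is made up):

#include <SWI-Stream.h>
#include <SWI-Prolog.h>

static int                          /* TRUE if the blob behind this handle */
blob_already_released(atom_t a)     /* has already been reclaimed          */
{ size_t len;
  PL_blob_t *type;

  return PL_blob_data(a, &len, &type) == NULL;
}

As noted, the result is only reliable during shutdown; during normal operation the handle may meanwhile refer to a newly allocated atom/blob.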

In case it’s not clear, the current library(archive) code is quite buggy during various kinds of close and cleanup situations – memory leaks, use-after-free, double-free. [“You are in a maze of twisty call-backs, all alike.”] So, if the current code deadlocks, that doesn’t necessarily mean that there’s a problem with PL_cleanup().

It seems that we’re seeing something similar to “recursive” or “reentrant” locks here – which is its own contentious philosophical discussion (if you ever meet Mike Burrows and lack a topic of conversation, just mention that you favor recursive locks). I haven’t finished fixing the library(archive) code yet, so I don’t know what features I need; the SClose() changes may not be needed. However, it probably is useful to have a way of verifying that a blob has been released; the PL_blob_data() trick for checking this requires a term_t but I need a similar test for an atom_t. Or, possibly a way for the blob to check its own “register_atom” reference count.

On the other hand, multiple GC passes at cleanup might be a better approach; in most cases there would be only one pass. Perhaps we need a stronger definition of what it means when a blob’s “release” callback returns FALSE (e.g., what should it return in the case of “something is using me”, or should it have a way of indicating that an error happened?).

Probably it’s best for me to finish fixing the cleanup code for library(archive) and see what I come up with. The current code is clearly wrong; I think I have a way of simplifying it a lot but there are some tricky bits (e.g., close_parent). When I have the code working properly when garbage_collect_atoms/0 is called, let’s see what’s missing during cleanup.

Well, it works mostly fine as long as you respect the libarchive restrictions and use setup_call_cleanup/3 to guarantee cleanup. It is really easy to crash the system though :frowning: Thanks for addressing this!

As I’ve shown, it is easy to deadlock without cleaning up. Simply opening an archive and leaving it to GC to reclaim is enough :frowning:

PL_blob_data() takes an atom_t. That is what you normally have access to in this context as while you can store atom_t in C structures, you typically cannot do that with term_t as the scope of these is limited to a foreign predicate call. It surely would be possible to get the reference count. I’m not sure what you can use it for though. During normal execution it can change asynchronously. During PL_cleanup() not.

Not sure. During PL_cleanup() you’ll probably have some atoms/blobs with zero references, but a lot with references. You could of course run multiple passes, trying to find the blobs with the lowest reference count and releasing those first. It is not clear whether it is better to release one with a reference count of 1 before one that has a reference count of 2, though. This is also rather costly: the atom array can have many millions of entries.

:+1: Whatever though, you need to be able to close streams from the GC thread, which is impossible when they are locked by some other thread. Of course, this is a more general problem. It is in general quite easy to get into deadlocks using the blob release handler. It was mostly designed to release external data structures that are not subject to locking.

I’ve pushed some stuff that we surely need:

  • PL_blob_data() now returns NULL if the blob was released before.
  • A new function Sgcclose(IOSTREAM *s, int flags) allows reclaiming streams in the garbage collector that may be locked by some other thread.
  • I’ve used the above in the archive blob release function. Some simple tests work fine (which is already a lot better than a deadlock).
  • I have added a Prolog flag agc_close_streams that is now by default false. When set to true any stream handle that is garbage collected while the stream is still open will cause the system to close the stream and write a warning. I’ve set it to true in my init.pl. If you are interested in this topic, please do the same and report if this causes problems.

The patch is quite involved and regression is not impossible.
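
For illustration, a stream-blob release hook can now be written along these lines. The wrapper type is made up, and the flags argument of Sgcclose() is left at 0 here because its accepted values are not discussed in this thread:

#include <SWI-Stream.h>
#include <SWI-Prolog.h>

typedef struct stream_wrapper       /* hypothetical blob payload holding a stream */
{ IOSTREAM *stream;
  /* ... */
} stream_wrapper;

static int
release_stream_wrapper(atom_t a)    /* runs in the atom-GC thread */
{ size_t len;
  PL_blob_t *type;
  stream_wrapper *w = PL_blob_data(a, &len, &type);

  if ( !w )                         /* blob was already released */
    return TRUE;

  if ( w->stream )
  { Sgcclose(w->stream, 0);         /* unlike Sclose(), does not block on a  */
    w->stream = NULL;               /* lock held by another thread; 0 is     */
  }                                 /* passed because the flag values are    */
                                    /* not covered in this thread            */
  return TRUE;
}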

I tried the latest sources, but – as expected – there’s no difference (I haven’t used Sgcclose() yet).

Question about stream locks – I assume that PL_get_stream() increments a use count and PL_release_stream() decrements it, with no locks involved … S__close() first locks the stream and then calls the IOFUNCTIONS::close callback. If it’s a stream that was created by Snew() for an archive entry, that stream has its own lock, distinct from the lock on the “parent” stream (the “parent” stream is in the “handle” given to Snew()) – so the only way to get into a deadlock would be if there’s a recursive close (or other I/O operation) on the same stream? (Which is very possible from looking at the current code.)

Details of package(archive): an archive blob is created with a “parent stream” (archive_wrapper::data), and when a new entry is started (either reading or writing), an “entry stream” is created using Snew(), whose handle has a pointer to the parent stream, which is what’s actually used for I/O … deadlock, use-after-free, etc. occur if the cleanups for the two streams happen in the wrong order, and that’s what I’m trying to fix.

[I hope this question is clear … my prose can be rather convoluted.]
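
To make the structure concrete, here is a simplified sketch of the wiring described above. The names and the read implementation are illustrative only, not the actual package(archive) code; Snew() and the IOFUNCTIONS callback signatures are the real API:

#include <SWI-Stream.h>
#include <stdlib.h>

typedef struct entry_handle         /* handle passed to Snew() for an entry */
{ IOSTREAM *parent;                 /* the archive’s own stream (has its own lock) */
  /* ... per-entry libarchive state ... */
} entry_handle;

static ssize_t
entry_read(void *h, char *buf, size_t size)
{ entry_handle *e = h;
  /* real code goes through libarchive; reading the parent directly is only */
  /* to show where the I/O ultimately ends up                               */
  return Sfread(buf, 1, size, e->parent);
}

static int
entry_close(void *h)                /* called for close/1 and for GC alike */
{ entry_handle *e = h;
  free(e);
  return 0;
}

static IOFUNCTIONS entry_functions =
{ entry_read,                       /* read */
  NULL,                             /* write */
  NULL,                             /* seek */
  entry_close,                      /* close */
  NULL,                             /* control */
  NULL                              /* seek64 */
};

static IOSTREAM *
open_entry(IOSTREAM *parent)
{ entry_handle *e = malloc(sizeof *e);

  if ( !e )
    return NULL;
  e->parent = parent;
  /* the entry stream gets its own lock, distinct from the parent’s */
  return Snew(e, SIO_INPUT|SIO_RECORDPOS, &entry_functions);
}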

Not wrt. double free, etc. You can open an archive, forget about the blob and run the atom garbage collector. That works fine for me.

There are locks involved. Only one thread can own a stream. It can own it “multiple times” though.

Yes.

Recursive close is detected.

As I understand it, during normal execution the entry is always closed before the archive itself. During shutdown it can be the other way around. I think all you need to do is for the archive close function to see whether there is an open entry (there can be only one) and, if so, tell this entry to go into close_parent mode if it is not already; beyond that, the archive blob release does nothing. It is then the task of the entry stream release to handle the whole thing, including the archive itself.
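
A sketch of that handoff (the names are illustrative and locking/error handling is glossed over; the point is only that cleanup works in either release order):

#include <SWI-Stream.h>
#include <SWI-Prolog.h>
#include <stdlib.h>

typedef struct entry_handle entry_handle;

typedef struct archive_wrapper      /* simplified stand-ins, not the real structs */
{ entry_handle *entry;              /* NULL, or the single open entry */
  /* ... libarchive handle, parent stream, ... */
} archive_wrapper;

struct entry_handle
{ archive_wrapper *archive;         /* back pointer to the archive */
  int close_parent;                 /* “take the archive down with you” */
  /* ... */
};

static void
close_archive(archive_wrapper *ar)
{ /* close the libarchive handle, the parent stream, etc. (elided) */
  free(ar);
}

static void
finish_entry(entry_handle *e)
{ /* tell libarchive the current entry is done (elided) */
  (void)e;
}

static int                          /* may run before or after the entry */
release_archive_blob(atom_t a)      /* stream blob during PL_cleanup()   */
{ size_t len;
  PL_blob_t *type;
  archive_wrapper *ar = PL_blob_data(a, &len, &type);

  if ( !ar )
    return TRUE;                    /* already reclaimed */

  if ( ar->entry )                  /* there is at most one open entry */
  { ar->entry->close_parent = TRUE; /* hand everything, archive included, */
    return TRUE;                    /* to the entry stream                */
  }

  close_archive(ar);                /* no open entry: close it here */
  return TRUE;
}

static int
close_entry(void *handle)           /* IOFUNCTIONS close of the entry stream */
{ entry_handle *e = handle;

  finish_entry(e);
  if ( e->close_parent )
    close_archive(e->archive);      /* we were told to also close the archive */
  else
    e->archive->entry = NULL;       /* normal close: the archive lives on */
  free(e);
  return 0;
}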

That was my plan – the current code seems to do the same thing in multiple places and gets itself confused. (I’ve started by documenting and simplifying the list of possible states.)


I’ve rewritten the close/release logic and most of the tests now pass without a memory leak. Sgcclose() made a difference - without it, my new code went into an infinite loop (possibly due to a use-after-free bug). Simplified close/release logic - fixed most memory leak problems · kamahen/packages-archive@faaee3a · GitHub (this is the latest of a series of changes, most of which will be squashed to make a PR).

I need to add some more tests - it turns out that the logic for a “read” archive is a bit different from a “write” archive, so a number of tests that are currently only for “read” need to have a similar “write” test.

In retrospect, a slightly different design for library(archive) would have made things quite a bit simpler.

Should the call to PL_release_stream() be made only if ! ar->argc? (That is, don’t call PL_release_stream() if the “close blob” is called in a garbage collection context.) That doesn’t seem right.

Currently, I can get a crash (or use-after-free error) when cleanup calls the blob’s “release” and that calls PL_release_stream(), but so far I haven’t seen that when I call garbage_collect/0 or garbage_collect_atoms/0. So, perhaps we need a way of determining that we’re in cleanup, when the proper order of calling blob “release” functions isn’t guaranteed?

Is it too late for that? As long as the API doesn’t change all should be fine. I think small API changes could also be tolerated given a good motivation.

It seems that reliable operation demands allowing an arbitrary order of releasing the blobs. I thought the approach was to have the release for the archive itself test whether there is an open member (that is already reflected by the status). If this is the case, set some flag. If the member stream is released next, it also deletes the archive. This is similar to (in fact the same as) the close_parent behavior.

No, I think it would be a non-compatible API change (add an archive_item blob).

I thought that was what I’m doing, but perhaps I’ve missed something when closing an open member (there are multiple call-backs, so some things are done indirectly). I’ll re-re-review the code and – if that doesn’t find the problem – send you a traceback offline.

How would that help? There is always at most one open member, which is now represented by a stream, while the archive structure has a state attribute that tells if there is an open member (or not).

Thanks!

From an operational point of view, this is correct – the blob allows only zero or one active “item” (or “member”) in the archive at a time.

But from an object modelling point of view, it’s nicer to have a variety of objects that have specific behaviors for operations such as “read” or “close” … the alternative is to have state transition diagrams which can get very messy and require careful checking everywhere (I’ve had to add quite a few “valid state” checks to the code for library(archive)). On the other hand, the multiple object style requires other checks, such as there being only one active “item” at a time.

The stream model fits somewhat uneasily into logic programming. There have been a few attempts to make it more logical, such as Mercury’s “state variable” (the “!” notation, as shown in the Wikipedia example). Mutable blobs are a generalization of streams, and they have problems, especially if they wrap an underlying foreign language implementation (library(archive) has its own state transitions on top of libarchive’s implicit state transitions).

A related question is: how to handle errors that occur during garbage collection.

For example, an output stream can get an error when it calls fclose(). E.g., when flushing the buffers, it could get an “out of disk space” error or a network error on a socket … the result is a corrupted file (or an incomplete network transmission), with no indication to the user.

Solutions:

  1. If the programmer doesn’t call close/1 (or equivalent), that’s just too bad - naughty programmer and an occasional upset user who discovers a year later that their file is corrupt.
  2. Raise an asynchronous exception, similar to how SIGINT or SIGTERM is handled.

Is it possible to define a new signal, call PL_raise_exception() and have it handled by on_signal/3? Or, perhaps, there should be a way for a blob’s release function to return an error (currently, it can only return “success” or “do not garbage collect” – the latter, presumably, because an underlying resource is still in use).

This also implies that anything that wraps a stream (e.g., using Snew()) should be able to distinguish between being closed due to a call to close/1 or due to garbage collection – currently, both cases result in a call to the “close” function defined in the stream’s “functions” (an IOFUNCTIONS struct).
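
One purely hypothetical way a wrapper could make that distinction today, without new API, is to record the close reason on its handle just before closing. Nothing below is existing SWI-Prolog machinery beyond Sclose(), Sgcclose() and the IOFUNCTIONS close signature:

#include <SWI-Stream.h>
#include <stdlib.h>

typedef struct wrapper_handle       /* hypothetical wrapper around a stream */
{ int closed_by_gc;                 /* set only on the GC/cleanup path */
  /* ... buffers, descriptors, ... */
} wrapper_handle;

static int
flush_and_release(wrapper_handle *w)
{ /* flush buffers, close descriptors, etc. (elided); nonzero on error */
  (void)w;
  return 0;
}

static int
close_wrapper(void *handle)         /* IOFUNCTIONS close; both paths end up here */
{ wrapper_handle *w = handle;
  int rc = flush_and_release(w);

  if ( rc != 0 && w->closed_by_gc )
  { /* no Prolog context to throw an exception in: at best log the error */
    /* or stash it somewhere the application can inspect later           */
  }
  free(w);
  return rc;
}

/* explicit close (e.g. a foreign close predicate):
 *     w->closed_by_gc = FALSE;  Sclose(stream);
 * blob release during GC/cleanup:
 *     w->closed_by_gc = TRUE;   Sgcclose(stream, 0);
 */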