Best practice for resource disposal in TCP connection

I’m writing some Node code for a Visual Studio Code extension that needs to communicate with a swipl process without blocking the terminal or “polluting” it, and thanks to suggestions by Jan I managed to achieve this quite easily through a TCP connection.

I have some questions though about how to best dispose of resources (both for sockets and their streams), plus some other things about the overall pattern.


The file creating the TCP server is loaded on starting the process (swipl -f …). As it is, the Prolog top level is blocked/not shown until a TCP client connects, but that’s fine with me, as I would have prevented the terminal from being shown to the user until the TCP connection is set up anyway.

(The code is without resource disposal as that is part of the questions below.)

:- initialization(enable_tcp_reply(_)).

enable_tcp_reply(Port) :-
  % Set the server up and start listening
  tcp_bind(Server, localhost:Port),
  tcp_listen(Server, 1),
  % Get the connecting client
  tcp_accept(Server, Client, _),
  tcp_open_socket(Client, In, Out),
  % Start the main query loop (in a thread)
  thread_create(read_and_reply(In, Out), _, [detached(true)]).

read_and_reply(In, Out) :-
  % Read the query sent by the client
  read_term(In, Goal, [variable_names(Vars)]),
  % Call it once and return the bound variables in JSON form (as per my requisites)
  call(Goal),
  write_bindings_as_json(Out, Vars),
  flush_output(Out),
  % Redo
  read_and_reply(In, Out).

For those interested, here are the parts in the Node script I’m using for testing related to the TCP client (actually the script is different and uses the readline module to test queries interactively, but that’s unneeded).

Note that using "localhost" as the host for createConnection() (which is also its default) causes the connection to fail because of some IPv6 assumptions (I found related issues on GitHub). The address must be specified directly.

const net = require("net");

// Create and connect the client
const client = net.createConnection({ host: "127.0.0.1", port: "..." }); // Add port

// Receive answers from the server (event-driven pattern; others are possible as sockets are also streams)
client.on("data", data => {
  console.log(`data from server: ${data}`); // Actually call JSON.parse() and do stuff
});

// Test a query (the trailing "." followed by whitespace is required by read_term)
client.write("A=2. ");

// Logs 'data from server: {"A":2}'
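One caveat worth noting on the client side: TCP is a byte stream, so a single "data" event is not guaranteed to carry exactly one complete reply (it may arrive split or with several replies concatenated). Below is a sketch of simple newline-based framing, assuming you terminate each JSON reply with a newline on the Prolog side (e.g. with nl(Out) before flushing); the helper names are my own:

```javascript
// Accumulate raw chunks and extract complete newline-terminated replies.
// Returns [completeMessages, remainingBuffer].
function extractMessages(buffer, chunk) {
  const lines = (buffer + chunk).split("\n");
  const rest = lines.pop(); // possibly an incomplete tail, kept for later
  return [lines.filter(l => l.trim() !== ""), rest];
}

// Wiring it to the socket (same client object as above):
function attachJsonReader(socket, onReply) {
  let buffer = "";
  socket.on("data", chunk => {
    let messages;
    [messages, buffer] = extractMessages(buffer, chunk.toString());
    for (const m of messages) onReply(JSON.parse(m));
  });
}
```

With only one small reply per query this may never bite in practice, but the buffering costs little and removes the assumption entirely.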

Aside from a general code review for the Prolog part, my questions are below.

1) tcp_accept/3 signature and overall pattern

Currently I’m calling tcp_accept right after tcp_listen (as also mentioned in tcp_bind/2) and this seems to work fine, but I see that it deviates from the docs’ example, where tcp_accept is called with the server socket’s input stream (after opening it with tcp_open_socket) rather than with the server socket itself.
What’s the correct pattern, and does it mean that tcp_accept/3 accepts both a socket and an input stream as its first argument (as possibly alluded to in tcp_open_socket/3)?

Maybe the pattern with tcp_open_socket on the server is just to be able to close its streams and both are valid?

2) tcp_listen/2 with 1 max pending connections

I went this way as I only expect one connection to the TCP server (the one from the VSC extension), but I’m not sure it’s a sensible choice.

3) tcp_setopts(Socket, reuseaddr)

Found this in some implementations and I don’t know about its use cases (and if it fits mine).

4) Resource disposal / error handling

Currently I would only add closing of the opened client’s streams (using a stream pair rather than separate streams), and I don’t know whether the server socket and its streams are correctly disposed of when the swipl process is terminated (either by the extension or by users closing the swipl terminal).
(For what concerns the spun thread, I guess I should be fine.)

Regarding errors, I’m thinking of handling both improper queries from the client (something that should never happen anyway, as queries in my case are in a precise form) and call errors in the same catch block (for now).
I’m less sure about handling the error that makes the thread fail when the client disconnects, as I don’t see how a disconnection could occur on the extension’s part (but I think I should probably handle it anyway).

Anyway, here are the edits for code review:

:- initialization(enable_tcp_reply(_)).

enable_tcp_reply(Port) :-
  tcp_bind(Server, localhost:Port),
  tcp_listen(Server, 1),
  tcp_accept(Server, Client, _),
  thread_create(
      setup_call_cleanup(
          tcp_open_socket(Client, InOut), % Using a stream pair for one-time closing
          ( stream_pair(InOut, In, Out),
            read_and_reply(In, Out)
          ),
          close(InOut)),
      _, [detached(true)]).

read_and_reply(In, Out) :-
  % Try to read the query, execute it, and return its results all at once
  catch(
      ( read_term(In, Goal, [variable_names(Vars)]),
        call(Goal),
        write_bindings_as_json(Out, Vars)
      ),
      _Error,
      write(Out, '"error"')), % Error notification (send anything)
  % Flush either the JSON result or an error
  flush_output(Out),
  % Redo
  read_and_reply(In, Out).


Regarding your questions:

  1. see the documentation of tcp_open_socket/2 which touches on this matter.
  2. what’s your specific concern here?
  3. see man 2 setsockopt for information about this socket option (and others).
  4. I couldn’t tell exactly what you’re asking for here, care to clarify?

Hi (and thanks)

  1. Ok, so is it wrong to use the pattern I’m using (calling tcp_accept right away on the socket without calling tcp_open_socket first)? And if so, what am I losing here (maybe that streams are not properly closed)?
  2. It’s my first time setting up a TCP connection (not only in Prolog but in general), so I just wondered whether limiting pending connection requests to 1 only because I’m expecting only one connection is the right choice.
  3. Ok, thanks for pointing me to the right docs.
  4. Just some advice about best patterns for closing sockets/streams and catching errors in a TCP connection (and whether the couple of points where I added code are enough).

It’s not wrong, and AFAICT you’re not losing anything. Either way you should ensure the listening server socket is closed after you finish using it.

There are different ways to do that, depending on your specific situation:

  1. If your server socket is intended to have the same lifetime as the entire swipl process, you don’t have to worry about closing it as it’ll be closed automatically when the process finishes.
  2. If you use tcp_open_socket/2-3 and obtain streams from the socket, closing the streams with close/1 closes the socket as well.
  3. Otherwise, close the socket with tcp_close_socket/1.