HTTP server performance?

Who has experience with the SWI-Prolog HTTP server? Should I put nginx in front of it? How many connections can the HTTP server handle? Is there a benchmark?

The biggest deployment is probably SWISH. It handles about 8 million HTTP requests per week, as well as a websocket per connected user (typically a couple of hundred, occasionally peaking at about 1200). Possibly the TerminusDB team has interesting data?

Generally not, as it just adds overhead and configuration. nginx may be better at fending off DDoS-like attacks. The core Prolog server lets you limit the number of worker threads and the number of pending connections (accepted connections that are waiting for a worker), but that is about it. You can also use nginx for load balancing, failover, etc.
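If you do decide to put nginx in front, a minimal reverse-proxy setup is all that is needed. This is a sketch, assuming the Prolog server listens on localhost port 8080 and the site is served on port 80 (adjust the names and ports to your deployment):

```nginx
# Minimal nginx reverse proxy in front of a SWI-Prolog HTTP server.
# Assumes the Prolog server listens on 127.0.0.1:8080.
server {
    listen 80;
    server_name example.com;          # hypothetical host name

    location / {
        proxy_pass http://127.0.0.1:8080;
        # Pass the original host and client address to Prolog,
        # so logs and redirects see the real request.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

For websocket endpoints (as used by SWISH) you would additionally need `proxy_http_version 1.1;` and the `Upgrade`/`Connection` headers on that location.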

That is limited by the number of open files (sockets) supported by the OS and the number of threads the system can handle (which depends on the OS and the hardware). As the SWI-Prolog server is based on the one-thread-per-connection model, it targets CPU-intensive rather than connection-intensive applications.
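The worker and pending-connection limits mentioned above are passed as options when starting the server. A minimal sketch using library(http/thread_httpd); the handler and the specific numbers are illustrative:

```prolog
:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).

% Hypothetical handler for the server root.
:- http_handler(root(.), say_hi, []).

server(Port) :-
    http_server(http_dispatch,
                [ port(Port),
                  workers(16),   % size of the worker thread pool
                  backlog(64)    % max pending (accepted, unserviced) connections
                ]).

say_hi(_Request) :-
    format('Content-type: text/plain~n~n'),
    format('Hello~n').
```

With one thread per connection, `workers(N)` effectively caps how many requests are serviced concurrently; further accepted connections queue up to the `backlog` limit.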

No. That is hard. There are a lot of configuration options, such as session management yes/no, thread scaling, etc., and a lot of different tasks one may want to measure (static file serving, HTML generation, JSON, etc.). Expect Linux to give the best performance and scalability.

If you are considering implementing a more widely used set of tasks to test server performance, please do. Please share the code and preliminary results before publishing, as some tweaking can make a huge difference.
