In my case, I’m aiming more for something akin to cluster computing.
This urge stems from discovering that I could dramatically speed up the labelling of a clpfd constraint problem with the predicate concurrent_and/2,
by labelling a few variables first and finishing the labelling of the remaining variables in parallel.
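Just to illustrate the idea on a single machine first, here is a minimal sketch using the n_queens/2 definition from the script below and the concurrent_and/3 variant from library(thread); the board size of 16 and the thread count of 4 are arbitrary choices. The first queen is labelled sequentially, and the labelling of the remaining queens is finished in parallel worker threads:

?- n_queens(16, [Q|Qs]),
   concurrent_and(label([Q]), label(Qs), [threads(4)]).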
With your help, I managed to write this script:
:- use_module(library(http/http_server)).
:- use_module(library(clpfd)).
:- use_module(library(pengines)).

n_queens(N, Qs) :-
    length(Qs, N),
    Qs ins 1..N,
    safe_queens(Qs).

safe_queens([]).
safe_queens([Q|Qs]) :- safe_queens(Qs, Q, 1), safe_queens(Qs).

safe_queens([], _, _).
safe_queens([Q|Qs], Q0, D0) :-
    Q0 #\= Q,
    abs(Q0 - Q) #\= D0,
    D1 #= D0 + 1,
    safe_queens(Qs, Q0, D1).

% One cluster/2 fact per node: cluster(ServerURL, Cpus).
% cluster('http://192.168.1.12:8001', 12).
cluster('http://localhost:8000', 7).

% Build a list containing each server URL once per CPU.
create_cluster(Cluster) :-
    findall(C, create_cluster_(C), Cs),
    append(Cs, Cluster).

create_cluster_(Cluster) :-
    cluster(Server, Cpus),
    length(Cluster, Cpus),
    maplist(=(Server), Cluster).

% The "cluster" is a message queue holding one token (a server URL)
% per declared CPU.
create_cluster_queue(ClusterSize) :-
    create_cluster(Cluster),
    length(Cluster, ClusterSize),
    message_queue_create(Queue, [alias(cluster)]),
    maplist(thread_send_message(Queue), Cluster).

destroy_cluster_queue :-
    message_queue_destroy(cluster).

% Remove module qualifications from the attribute goals.
strip_modules(MGs, Gs) :-
    maplist(strip_module, MGs, _, Gs).

% Run Goal on one of the cluster's pengines.  copy_term/3 extracts the
% attribute goals (e.g. clpfd constraints), which are re-posted on the
% remote side before running the copied goal.
run(Goal) :-
    copy_term(Goal, Copy, MGs),
    strip_modules(MGs, Gs),
    (   Gs = [G | OtherGs]
    ->  foldl([A, B, (A, B)]>>(true), OtherGs, G, Conjunction)
    ;   Conjunction = true
    ),
    setup_call_cleanup(
        thread_get_message(cluster, Server),
        pengine_rpc(Server, (Conjunction, Copy)),
        thread_send_message(cluster, Server)),
    Goal = Copy.

server(Port) :-
    http_server([port(Port)]).
With this you can add nodes with the predicate cluster/2
and use create_cluster_queue/1
to “create” a cluster (which is really just a message queue holding one server URL per CPU).
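As a quick sanity check (no queue is created here), you can build the underlying list directly; with the single cluster/2 fact above, Cluster should be bound to seven copies of 'http://localhost:8000':

?- create_cluster(Cluster), length(Cluster, Size).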
Then, the wrapper predicate run/1
can be used to run a goal on a pengine in the cluster.
What is special about run/1
is that it can deal with attributed variables and replicate their constraints in the remote pengine.
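For example, in this sketch (assuming a node is already serving on http://localhost:8000 and the cluster queue exists), the clpfd domain posted locally travels along with the goal, so the remote indomain/1 only enumerates 1, 2 and 3, and the chosen binding comes back to the caller:

?- X in 1..3, run(indomain(X)).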
Finally, this composes with concurrent_and/3
to label the n_queens problem in parallel on multiple nodes:
% distribute the script and start a server on each node
?- server(8000).
% create the cluster queue
?- create_cluster_queue(ClusterSize).
% then you can run your labelling problem on multiple nodes
?- n_queens(20, [A | Bs]),
   concurrent_and(label([A]), run(label(Bs)), [threads($ClusterSize)]).
A = 5,
Bs = [1, 4, 6, 3, 9, 14, 18, 15, 19|...],
ClusterSize = 7 .
If anyone asks whether this is faster than single-process labelling, I have no idea…
I was just interested to see if this was even possible at all.
However, I have a termination problem:
if I abort after the first solution, the interpreter takes a long time to return to the prompt.
I believe thread cancellation is not working well somewhere…