Performance cost of dictionaries?

If L has size N, then backtracking over all subsets of
L, i.e. enumerating the power set, takes only O(N) space,
provided you don’t table/memoize. It has a maximal call
depth of N, so you need only a Prolog stack and choice
points of size O(N):

% subset(+List, -Sub): enumerate every subset of List on backtracking.
subset([], []).
subset([E|Tail], [E|NTail]) :-   % keep the first element
  subset(Tail, NTail).
subset([_|Tail], NTail) :-       % drop the first element
  subset(Tail, NTail).
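
For example, this query enumerates all 2^3 = 8 subsets one
solution at a time; at no point is more than one subset held,
only the O(N) choice points:

?- subset([a,b,c], S).
S = [a, b, c] ;
S = [a, b] ;
S = [a, c] ;
S = [a] ;
S = [b, c] ;
S = [b] ;
S = [c] ;
S = [].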

I think any tabling/memoization here is a mistake, or at
least cannot be done meaningfully on a “micro” step level.
Evidence for my hypothesis is the list you produced:
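
Concretely, a sketch of what I mean, assuming SWI-Prolog’s
table/1 directive: tabling subset/2 cannot beat the size of
its answer set, it merely records every answer it produces.

:- table subset/2.

% ?- subset([a,b,c], S).
% still enumerates the same 8 answers, but each one is now
% also stored in the answer table, i.e. the O(N) stack space
% from above turns into O(2^N) table space.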

Basically, you materialize all solutions? Can we not get
more bang out of table/memoize than merely materializing
all solutions? For example, tabling/memoizing Fibonacci
numbers turns an exponential problem into a linear one.
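
As a minimal sketch of that classic effect, again assuming
SWI-Prolog’s table/1 directive (the naive fib/2 below is my
own illustration, not code from this thread):

:- table fib/2.

fib(0, 0).
fib(1, 1).
fib(N, F) :-
    N > 1,
    N1 is N - 1,
    N2 is N - 2,
    fib(N1, F1),
    fib(N2, F2),
    F is F1 + F2.

% Without the directive, fib(30, F) takes an exponential
% number of calls; with it, each subgoal fib(K, _) is solved
% once and reused, so ?- fib(30, F). gives F = 832040 with
% only N+1 table entries, one per K.
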
Why does this not happen for the rect problem? Or do you
not materialize? What do you mean by "number of entries":
is this only an integer in your program, or effectively the
number of entries, i.e. are the entries themselves
materialized?