The separation will lead to the same polynomial complexity
if you search in the appropriate target subspace. The key to
complexity reduction, or to switching to a tamer complexity class,
is the choice of the target subspace, not the intertwining.
The intertwining itself is not extremely successful, to be honest.
If intertwining worked, i.e. if we had a very powerful 2nd order
theorem prover, we could even have nice ZFC provers that
deduce sets, or nice Peano provers that invent formulas
for classical mathematical induction. But most provers choke,
since this intertwining is relatively poor: mostly lazy building of
formulas. Also, in your case it is only an intertwining at the top,
while the prototype of James Trewern, I suspect, also allows
dispersed intertwining. I think neither does any good. You could
equally well use the following, where invent/1 draws from a polynomial class only:
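A minimal sketch of such a split, under my own assumptions: the
ancestor/edge background, the example predicates pos/1 and neg/1,
and the rule pattern class are all hypothetical stand-ins.

```prolog
:- dynamic ancestor/2.

% Hypothetical background knowledge and examples.
edge(a, b).
edge(b, c).
pos(ancestor(a, b)).
pos(ancestor(b, c)).
neg(ancestor(c, a)).

% invent(-Rule): enumerate candidate rules from a small,
% polynomially bounded pattern class only.
invent((ancestor(X, Y) :- Body)) :-
    member(Body, [edge(X, Y),
                  (edge(X, Z), ancestor(Z, Y))]).

% validate(+Rule): assert the rule - the fresh variables
% become universally quantified by assertz/1 - then check
% all examples, retracting the rule again on failure.
validate(Rule) :-
    assertz(Rule),
    (   forall(pos(G), call(G)),
        forall(neg(G), \+ call(G))
    ->  true
    ;   retract(Rule),
        fail
    ).
```

A query like ?- invent(R), validate(R). then succeeds with the
base-case rule first.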
Note how Prolog homoiconicity is used for higher order: a
Prolog rule (i.e. a clause) is also a Prolog term, and the fresh
variables in the rule become universally quantified
during assertz/1 in the implementation of validate/1. The intertwining
would only fold invent/1 and validate/1 into one thing. You
could also do that automatically via freeze/2 and when/2; but
although freeze/2 and when/2 can reorder goals, they do not in
principle change complexity classes much. You can already study what folding
does with the Seven-Eleven problem. I don’t think there is a general
folding speed-up theorem anywhere? But I might be mistaken.
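What the fold via freeze/2 buys you can already be seen on a toy;
naive/2 and folded/2 below are my own illustration, not anyone's
prototype. The frozen test wakes as soon as X is bound, so pruning
moves earlier, but the search stays polynomial either way:

```prolog
% Plain generate-and-test: both variables are enumerated
% before any test runs.
naive(X, Y) :-
    between(1, 99, X),
    between(1, 99, Y),
    X * Y =:= 35,
    X + Y =:= 12.

% Folded via freeze/2: a partial test is attached to X and
% wakes the moment between/3 binds X, pruning candidates
% before the inner Y loop is even entered.
folded(X, Y) :-
    freeze(X, 35 mod X =:= 0),
    between(1, 99, X),
    between(1, 99, Y),
    X * Y =:= 35,
    X + Y =:= 12.
```

Both find X = 5, Y = 7 first; folded/2 merely visits fewer Y
candidates, while the worst case stays quadratic.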
BTW: the problem is partly how to formulate such a speed-up theorem.
While there are notions of NP-hard or PSPACE-complete, I am currently
short of material on the P class itself. Maybe we nevertheless have to
look at it from a combined complexity aspect. So if the patterns for your
2nd order SLD MIL can change, i.e. are part of the input metric, we might somehow
have a combined complexity again, which has some hardness.