Is it useful to try to capture the story of a predicate or a feature in the docs?

I think eLearning is the more fitting suggestion, since it’s simply a documentation tool without any modeling or code generation of any sort, which was usually the selling point of CASE (Computer Aided Software Engineering) …

The engineering of software continues to be a manual activity.

The broader category I see (read) emerging is dev-tech: new kinds of developer tools for increased productivity, which, I guess, includes those AI co-pilots …

Hmm… maybe. How do you measure the goodness of writing? By volume (more is better)? By sentence length distribution and deviation from the mean? By comma frequency? Or do you have some other measure?

If you want to go down the “literate programming” rabbit hole like I once did, start with WEB and CWEB (and their direct descendants). Next comes noweb, which is actually a great idea if you don’t pay attention to the fact that Norman Ramsey had to put his name into it :slight_smile: I extensively used the last stable version of noweb, 2.12, or at least some of it.

It is language-independent and extensible; in fact, you have to extend it and adjust it to the language you are using, your style, and so on. Some third-party extensions can be found, and some might even be useful. It does let you do almost anything you can think of, including any fancy stuff you’d like with inline comments. It does have cross-referencing facilities (but those are naturally language-specific and you might have to add your own). It is not a single platform but a collection of command-line tools. It also lets you produce your “weaved” documentation in HTML and LaTeX, in a fashion, but I ended up making my own crappy backend (I can’t remember why, but I had good reasons at the time).
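To give a flavour of what that looks like in practice, here is a tiny, made-up noweb source file with a Prolog chunk (the chunk name, file name, and predicate are all invented for illustration); documentation text and named code chunks simply alternate:

```
The input is a comma-separated line with an identifier and a name;
this chunk turns one such line into a record/2 term.

<<parse record>>=
parse_record(Line, record(Id, Name)) :-
    split_string(Line, ",", " ", [IdString, Name]),
    number_string(Id, IdString).
@
```

Tangling with something like `notangle -R'parse record' example.nw > parse.pl` extracts the code, and something like `noweave -index -latex example.nw` produces the cross-referenced “weaved” document.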

Some of the issues I experienced: first, the more useful you make it, the more conventions you must follow. Then, it is difficult to tell when to use the literate programming style and when to just type code. It is easy to fall into the trap of interleaving code and prose just for the sake of it (and with an expressive language like Prolog you can put a lot of your “literate” prose into the Prolog source itself; see the sketch below).
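As an illustration of that last point (the predicate below is invented, but the comment syntax is SWI-Prolog’s PlDoc): structured comments let much of the prose live in the source file itself, and the documentation tooling picks it up.

```prolog
:- module(intervals, [overlap/2]).

%!  overlap(+Interval1, +Interval2) is semidet.
%
%   True when the two closed intervals, each given as a From-To pair,
%   share at least one point.  The "story" of the predicate (why the
%   intervals are closed, which edge cases bit us, how callers use it)
%   can be told right here, next to the code, and PlDoc renders it as
%   browsable documentation.
overlap(From1-To1, From2-To2) :-
    From1 =< To2,
    From2 =< To1.
```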

Where I saw the biggest advantage is indeed notebook-style exploratory data analysis, where you want your code and your results (tables, visualizations) right next to each other and you iterate on the code based on the results you see. This is a highly specialized kind of programming.

I published a paper about it but it is useless because my only incentive at the time was to get a publication in a peer-reviewed journal with my name on it, and certain other names on it, too. The software that goes with it is abandonware.

No, we cannot dispose of natural language. Communication using written or spoken natural language, as well as pictures, body language (if we have a video!) and other non-verbal cues, is very efficient because most of the meaning is implicit and because it is ambiguous. The question that I tried to address in my post higher up in the thread was: what should we document in the code that the machine executes, and what must we document in some other way?

No need for all this heavy infrastructure when doing literate programming in Prolog. I was quite happy with a 100-line awk program that transformed the “literate” file into a .pl file.
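Just to show how little machinery such a “tangle” step needs, here is a comparable sketch in Prolog rather than awk (the chunk markers are borrowed from noweb’s `<<name>>=` / `@` convention; any pair of markers would do, and the original awk program may well have used something else):

```prolog
:- module(tangle, [tangle/2]).
:- use_module(library(readutil)).

%!  tangle(+LitFile, +PlFile) is det.
%
%   Copy only the code chunks of a literate file to a .pl file.
tangle(LitFile, PlFile) :-
    read_file_to_string(LitFile, Text, []),
    split_string(Text, "\n", "", Lines),
    setup_call_cleanup(
        open(PlFile, write, Out),
        copy_code(Lines, doc, Out),
        close(Out)).

%   copy_code(+Lines, +Mode, +Out): Mode is doc (skip) or code (copy).
copy_code([], _, _).
copy_code([Line|Rest], doc, Out) :-
    (   string_concat("<<", _, Line),
        string_concat(_, ">>=", Line)
    ->  copy_code(Rest, code, Out)      % chunk header: start copying
    ;   copy_code(Rest, doc, Out)
    ).
copy_code([Line|Rest], code, Out) :-
    (   Line == "@"
    ->  copy_code(Rest, doc, Out)       % back to documentation
    ;   format(Out, "~w~n", [Line]),
        copy_code(Rest, code, Out)
    ).
```

Running `?- tangle('notes.lit', 'notes.pl').` (file names again made up) then gives you a plain Prolog file you can load as usual.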

A long time ago, I worked on a complex map recognition project, using Quintus Prolog. When I started, I wasn’t sure how to solve the problem, and it involved things like image transforms using matrices, so I began by writing an essay in LaTeX about how I would solve it. As I worked through the details, I started adding code. This worked very well, especially because Prolog doesn’t care about the order of predicates. Also, I tend to work in a “batch” style (using make to keep a project-specific set of commands), so it fitted well with my style: I’d type make test1, which ran the “literate programming extraction”, then compiled with warnings (similar to swipl with the -o and --undefined=error options), then ran my test(s); and I’d then look at the results in the Emacs *compilation* window.

For people who don’t like the “batch” style workflow (even if it has a very fast turn-around) or Emacs, I suspect that the literate programming style isn’t as attractive. But for this kind of problem, with large amounts of data, interactive development didn’t make much sense anyway.

I wonder what you mean by “heavy” when you say “heavy infrastructure”. It is relatively easy to use, and the computational overhead is so small it is hard to measure. The one huge advantage is that you get cross-referenced “weaved” documents with an index at the end, for free (that is, the tool does the cross-linking for you). But maybe you had that already. I am not trying to convince anyone :slight_smile: I myself see little need for that kind of tool in my daily work. But it is good to know what is out there.