Over the years I have written lots of Prolog code, some of which might be useful generally. Is there anybody on the team who’d be interested in a guided tour, with the aim of identifying what might be good to share, and maybe how it should be adapted to fit in among the existing packs?
Okay, here’s a list of potentially useful stuff. Anything stand out as particularly wanted?
-
gen - declarative source code expansion system
- instead of using term_expansion, load_expand(File) reads File, performs the source code transformations, generates ‘xfile.pl’ and then loads that expanded file (XFile).
- this process vastly simplifies understanding the effects of code expansion, and debugging of expanded code.
- Supports variable name preservation and a make extension to handle changes to the original sources.
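- As a rough sketch of the idea (names here are hypothetical; the real library also preserves variable names and hooks into make):
- load_expand_sketch(File) :-
-     atom_concat(x, File, XFile),          % e.g. foo.pl -> xfoo.pl
-     setup_call_cleanup(open(File, read, In),
-         setup_call_cleanup(open(XFile, write, Out),
-             copy_expanded(In, Out),
-             close(Out)),
-         close(In)),
-     consult(XFile).
- copy_expanded(In, Out) :-
-     read_term(In, Term, []),
-     (   Term == end_of_file
-     ->  true
-     ;   expand_term(Term, Expanded),
-         (   is_list(Expanded)
-         ->  maplist(portray_clause(Out), Expanded)
-         ;   portray_clause(Out, Expanded)
-         ),
-         copy_expanded(In, Out)
-     ).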
-
A variety of non-standard extensions to CHR, implemented using source transformation.
-
struct.pl
- Lets you declare structs, which are handled by the gen.pl source code expansion
- E.g., declare a struct and use it:
- struct:struct_decl(rectangle(width, height)).
- same_width(Rect, Rect2) :-
- Rect = #rectangle(width: W),
- Rect2 = #rectangle(width: W2),
- W = W2.
- Gets preprocessed into
- Rect = rectangle(W, _),
- Rect2 = rectangle(W2, _),
- W = W2.
-
functional
- basic higher-order functions such as you’d typically find in a functional programming language
-
math
- tolerant float comparison
- rounding predicates
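- For instance, a tolerant comparison can be as simple as this sketch (hypothetical name and tolerance; the pack’s interface may differ):
- approx_equal(X, Y) :-
-     Tol = 1.0e-9,
-     abs(X - Y) =< Tol * max(1.0, max(abs(X), abs(Y))).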
-
poly
- linear polynomial handling over Prolog variables
- structure is like poly([3*A, 4*B], C), i.e., 3A + 4B + C
- fc.pl has helper code.
-
failure-driven-test
- failure_driven_test(FactName, Arity, FactTest)
- Easily test your predicate over a large variety of data.
- Uses bagof to iterate through all the facts, testing each with FactTest.
- Counts how many pass/fail.
- test_list([ a, b, c, … ])
- Runs each entry in the list as a goal.
- Each line is expected to pass, and any failure is reported.
- Similar to plunit, but without the overall test runner. Instead, you just write tests as predicates and call them.
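- The counting part can be sketched with aggregate_all instead of bagof (count_fact_tests/5 here is illustrative, not the pack’s actual API):
- count_fact_tests(FactName, Arity, FactTest, Passed, Failed) :-
-     functor(Fact, FactName, Arity),
-     aggregate_all(count, (call(Fact), call(FactTest, Fact)), Passed),
-     aggregate_all(count, (call(Fact), \+ call(FactTest, Fact)), Failed).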
-
equations.pl
- Adapted from The Art of Prolog, chapter 23.
- The main change is that it allows the variables to be Prolog variables instead of only atoms.
-
diff_list.pl
- create, append and close off diff lists.
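- The basic operations look roughly like this (predicate names are illustrative; the pack’s may differ):
- dl_empty(L-L).                       % an empty difference list
- dl_append(A-B, B-C, A-C).            % append in O(1)
- dl_close(List-[], List).             % close off the open tail, yielding a plain list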
-
id_var
- using CHR, attach names to variables
- aux code in repl_chr
-
table-0424
- read TSV files into nested lists of atoms
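- Roughly in the spirit of this sketch (simplified; the real pack presumably deals with quoting, encodings, etc.):
- read_tsv(File, Rows) :-
-     read_file_to_string(File, Text, []),
-     split_string(Text, "\n", "\r", Lines0),
-     exclude(==(""), Lines0, Lines),      % drop empty lines
-     maplist(tsv_row, Lines, Rows).
- tsv_row(Line, Row) :-
-     split_string(Line, "\t", "", Fields),
-     maplist(atom_string, Row, Fields).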
-
readtable
- interprets the above into typed data based on the header row
-
read_object
- used by readtable, also supports nested object data
- converts text according to a type declaration
- includes measurement unit declarations
-
repl_chr
- includes utilities to help display CHR information
- Show all current constraints in a module
- Show all constraints matching a pattern
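- library(chr) already provides chr_show_store/1 and find_chr_constraint/1; a pattern filter can be sketched like this (show_matching/1 is illustrative, not the pack’s API):
- show_matching(Pattern) :-
-     forall(( find_chr_constraint(C), subsumes_term(Pattern, C) ),
-            ( print(C), nl )).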
-
units.pl
- parse, portray, and convert measurements
- convert measurements between bases
- e.g., ‘13mm/kg’ to 0.013 to ‘X.XXXin/lb’
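- The arithmetic behind that example, as an illustration only (the pack’s own API and output formatting are not shown):
- mm_per_kg_to_in_per_lb(MmPerKg, InPerLb) :-
-     MPerKg is MmPerKg / 1000,              % mm -> m, the base unit
-     InPerLb is MPerKg * (1000/25.4)        % m -> in (25.4 mm per inch)
-                       * 0.45359237.        % per kg -> per lb (0.45359237 kg per lb)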
-
costvectors.pl
- implements an algebra over lists of numbers with lexical sorting
-
encoding
- DCG to convert terms to/from binary for serialization
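- As a toy illustration of the reversible-DCG idea (the real library covers arbitrary terms), a length-prefixed byte list that works in both directions:
- bytes(Bytes) -->
-     [Len],
-     { length(Bytes, Len) },
-     seq(Bytes).
- seq([]) --> [].
- seq([B|Bs]) --> [B], seq(Bs).
- % phrase(bytes([1,2,3]), E) gives E = [3,1,2,3]; phrase(bytes(D), [3,1,2,3]) gives D = [1,2,3].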
-
client4
- receives an instruction over TCP, acts on it, and sends a response
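- A generic sketch of that loop with library(socket) (handle/2 here is a stand-in; client4’s real protocol and error handling differ):
- :- use_module(library(socket)).
- serve(Port) :-
-     tcp_socket(Socket),
-     tcp_bind(Socket, Port),
-     tcp_listen(Socket, 5),
-     tcp_open_socket(Socket, AcceptFd, _),
-     accept_loop(AcceptFd).
- accept_loop(AcceptFd) :-
-     tcp_accept(AcceptFd, Slave, _Peer),
-     tcp_open_socket(Slave, In, Out),
-     read_term(In, Instruction, []),
-     (   handle(Instruction, Response) -> true ; Response = error(unknown) ),
-     writeq(Out, Response), nl(Out),
-     close(Out), close(In),
-     accept_loop(AcceptFd).
- handle(ping, pong).                   % stand-in for the real instruction handler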
-
log
- utility for sending Prolog and/or CHR traces to a text file, because sometimes it’s easier to debug a full execution trace listing than using an interactive debugger.
-
nb_queue
- uses C++ extension nb_vector
- Enables your backtracking parser to use a stream as a datasource. We lazily convert the stream to a list as the data is requested by the parser, and that list is preserved through backtracking operations.
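- For comparison, the usual freeze/2-based lazy list looks like the sketch below; unlike nb_queue, its bindings are undone on backtracking, so data already pulled from the stream can be lost:
- lazy_char_list(Stream, List) :-
-     freeze(List, fill(Stream, List)).
- fill(Stream, List) :-
-     get_char(Stream, Char),
-     (   Char == end_of_file
-     ->  List = []
-     ;   List = [Char|Rest],
-         freeze(Rest, fill(Stream, Rest))
-     ).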
-
util - tons of little functions
- [tolerant_unify_terms/2,member_rest/3,all_same/2,open_list_member/2,symbolic_member/2,take/3,delim_list/3
- ,expression_prolog_vars/2,leaves/2,update_value/2,listNear/2,open_list_close/1,drop/3, any_drop/3,expression_real_unify/2
- ,nub_sieve/2,output_error/1,update_nonvar/2,leaf/2,leaf_if/3,test_util/0,compare_with_vars/2,expect_phrase/4
- ,open_list_empty/1,skip/2,commas_list/2,list_real_unify/2,tail_sieve/2,integers_to_n/2
- ,table_member/2,blank_zero/2,expect/2,reshape/3,numerical_member/2,is_braces/1,split/4,member_no_unify/2
- ,update_term/3,compare_sets/5,integers_to_n0/2,in_both/3,blank_zeros/2,brace_index/3,unbind_value/3,update_term/2
- ,trace_if_source/0,ignore_vars/1,all_args/2,only_in_first/3,failure/2,object_member/2,expect2/3,list_open/3,safe_member/2
- ,delim_list/2,insert_operator/3,update_term_nonvar/2,unifiable/2,(->>)/2,boolean_member/2,first/2,braces_list/2
- ,insert_arg/4,update_value_unless_index/4,update_term_unless_index/3,find_parts/3,sorted_subset/2,elementNear/2
- ,are_different/2,skip/1,open_list_append/2,leaves_if/3,any_arg/2]
These are not in ready-to-share condition, so please just pick one and I will work to ready it.
Out of curiosity: what extensions have you implemented for CHR?
Thanks for sharing. There seems to be a lot of nice stuff there. I think this mostly raises the question of how the Prolog community can best share code. For the moment we have two mechanisms:
- Be part of the bundled libraries, either as part of the core library or one of the bundled extension modules. This code should be well maintained, documented, portable and serve some purpose that justifies the size and complexity it adds to the project. There is surely (mostly old) stuff that should never have gotten in there.
- Distribute as a pack add-on, typically based on a GitHub repository. That is lightweight and judging the quality and value is totally up to the publisher and users.
My original hope was that the packs would in part act as a playing field for stuff to be added to the main system. That didn’t really happen. The packs do influence the core system though. Consider, e.g., arouter, which influenced the http_dispatch library, or sort_dict, which gave rise to support for dict sorting in the ECLiPSe-derived sort/4, etc. One of the reasons is that many of the packs build on each other, creating a parallel universe that often duplicates significant parts of the core functionality. I typically do not want to copy that, as I want to avoid the core containing too many libraries that do more or less the same thing, just a little differently.
For CHR, I implemented an alternate notation that made it much easier to write the code for some rules.
qqq
++con1(…)
--con2(…)
++con3(…)
<=> effect.
Rather than worry about whether something is a simpagation, propagation, or simplification rule, I just declare which matched constraints I want to keep and which I want to remove. Actually it’s a bit more sophisticated than the above description, and has stuff helpful for managing constraint interaction and lifetime. It’s hard to explain.
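Roughly speaking (this is an approximation based on the description above, not the actual expansion), the rule corresponds to the plain CHR simpagation form
con1(…), con3(…) \ con2(…) <=> effect.
with the kept heads to the left of the \ and the removed head to the right.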
Query qis Matches <=> Code
Generates a CHR constraint and a rule that acts as a Prolog query for matching constraints. It will pass/fail/unify and not affect the database.
Only one question, Jan: what do other languages do? E.g., Python. I suspect part of their thing is a team of interested people who review contributions and enable people to propose amendments to core libraries, which may be integrated if the review board likes the change.
One possibility: a team of [volunteer] curators, community voting on which packs to review => code gets reviewed => constructive criticism + suggestions on “what to do next” get published (preferably on / reachable from the swipl packs pages == infrastructure requirements) => hopefully someone steps up.
hm, with Discourse, we could create a dedicated forum and we could discuss contributions.
My thoughts exactly. Before, I got lost being all non-committal (like equal opportunities, man), and then decided to scrap the next possible paragraph. What I wanted to say was: "I think the community is small & friendly enough that reviewing could be carried out in public, so even “non-curators” could chime in (or at least lurk and soak in the expected “wisdom of ages” … and the inevitable [religious] crusades, I fear)."
Learning from others cannot be a bad idea. Anyone with good/bad experiences in communities that are sufficiently comparable? There are IMO two aspects:
- How to make the pack system better?
- How to make the core system libraries grow? Along with the question of whether they should grow at all, or whether new functionality should primarily go into extension packs?
It seems a big selling point that the core system comes with a rich set of libraries that is maintained as a whole, is somewhat consistent without too much overlap, and quite rarely suffers from regressions. At the same time, there is still a lot of stuff missing there (e.g., type and mode handling), some stuff is dubious (e.g., the BerkeleyDB interface), some a mess (the organization of the RDF/XML libraries), etc.
A possible step could be really clean guidelines on what should be where. And, while I do think that a pack system with really minimal requirements is useful, could we somehow separate them into high-quality and maintained vs. one-shot partial attempts to support something?
Could start by improving the structure of the documentation for these. It’s a bit confusing as is.
- Under Download you have Add-Ons, which leads to Packs, described as “known packages”, which it instructs you to install using pack_install. Can we not just call these Packages?
Tried that and it worked fine. One thing is missing: A text search for packs. If I want a “machine learning” pack, I want to search, not browse alphabetically by name.
This is a nicely described pack:
http://www.swi-prolog.org/pack/list?p=plcairo
Would be best to put the extended description under the title rather than below the fold.
- Under Documentation you have Packages. The first is Paxos. Is that new? I don’t see it on my computer. Second is ssl. Given that these are known in the code as libraries, why does the website call them packages?
The way this is structured is also kind of clunky. See how they are cut off both horizontally and vertically? How do I find a library?
I’m also confused by the broader structure. Why is CLP not under packages? It’s under The SWI-Prolog Library. Why is Pengines in Packages and not SWI-Prolog Library?
Documentation aside: in Visual Studio, for example, I find two extension mechanisms. One is an IDE extension system, where within the IDE itself I get to browse, search and install extensions to the program. The other is a program library system, where I get to browse, search and install libraries into my current project. Both of these are user-contributed, and all the content is user-managed, much like the pack system Prolog has.
Additionally, Visual Studio Code is Microsoft’s open-source project: the code for the IDE and the code for the .NET libraries are open source, and grown by the community under the direction of the Microsoft .NET Core team.
The Microsoft team’s game plan is primary: they decide what they want added to the core, and they direct and vet contributions from others.
I think that’s how SWI should go. For the libraries you most want in the core, you set up a project to integrate them. Contributors can work together to improve (under experimental) the style, quality, test suite and documentation, and then when it’s ready it gets published and promoted.
For releasing and reviewing the code, why not just use github’s facilities for code review, with each piece having its own separate review? This will allow people to comment on specific parts (even after the code has been committed, I think), contribute changes, report bugs, suggest sharing a part or not sharing, etc.
Other choices include phabricator (I know of one project that moved from phabricator to github tools, but don’t know if that was because of features or for other reasons).
When it gets down to the coding details, I like github fine. For getting public engagement, I think a forum here, discussing what project to pursue, is the better fit.
So, use the forum to announce and point to github. The forum can still be used for design discussions and github for the details.
Another possibility is to use Google Docs for making comments and having discussions on a design doc. (Something similar can be done by having the design doc as a README.md and commenting on that in github … but when I worked at Google, we tended to use Docs comments rather than code review tools (and our tools were much nicer than github’s) and eventually moving the design doc to a Markdown or similar file.)
The silly answer is, of course, history:
- Packages reside in the packages directory of the source tree. They are git submodules wrt. the main project and may or may not be installed. They should be considered optional components of the installation. If they are installed they indeed show up in the system library. As a normal user, the only thing you should be aware of is that not all installed systems ship with all of these.
- Packs were invented to allow for lightweight user contributions. Anne used Add-ons on the website.
Paxos is not that new. It was extracted from the TIPC package as it can also work on top of the much more widely available UDP messaging. The package should not yet be in the distribution. It is far too immature. It would be great to have a mature implementation, so if anyone is interested …
Part of the CLP libraries have been developed as part of the core system. There is no clear boundary between what should be in the core library and what would better be a package.
So, yes this is all a bit of a mess. Maybe do this:
- Rename Packages into Optional components in the manual.
- Rename Add-ons into Packages (or Packs?)
Update of the overall add-on/package page as well as the contents is of course another issue. A search facility is only one step though. With 279 items listed your browser search is still acceptable for the short term.
Deviating a bit from the original posting (oh, how I miss threaded discussions!) … is there a mechanism for auto-updating of packs? (It would be so nice if I could just do “sudo apt-get upgrade” and magic happens everywhere …)
OTOH, is there much harm in making packs non-optional? If I don’t use something, it occupies a few kB on my disk, which hardly matters nowadays (and if you’re on something space-constrained, like a Raspberry Pi, you probably want less than the standard install anyway).
Core libraries
Optional libraries
Packs (user-contributed libraries)
Optional Libraries I think would make sense for things that fit one of these categories:
- Links to some other software that not everyone has
- Has a different license
- Is immature and we’re working on it becoming a core library.
I don’t see a reason CLP or Regex are not now core.
You’re adding to the confusion. User packs can be upgraded using ?- pack_upgrade(Pack).
Extensions/components are upgraded by running git submodule update before building.
I assume you refer to the extensions/components being optional? Some definitely need to be, as they pull in a lot of dependencies. Think of xpce, which you typically do not want on a server as it pulls in X11. But also JPL, which is hard to get properly configured, ODBC, etc.
If you do a default build you get it all and if you want to select I guess you know what you are doing.