Quite a few basic Prolog questions of ChatGPT


Feb 17 2023

Reorganized most of what was originally here into several separate topics with a table of contents.

The starting topic:

LLM (Large Language Model) such as ChatGPT prompts related to Prolog


Good point about numbers and tokenization. I have been asked about the abilities/limitations of ChatGPT as well. ChatGPT is impressive, but it’s important to understand its limitations. Here are some queries I’ve come up with that I believe illustrate the limitations of ChatGPT, even without numerical tokens.
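
For instance (an illustrative query of my own, not one from the original list), exact integer arithmetic shows the tokenization problem well: Prolog evaluates it exactly, while a model that predicts sub-word tokens often produces a plausible-looking but wrong digit string.

    % SWI-Prolog evaluates this exactly; an LLM, which sees the
    % operands as arbitrary sub-word tokens, frequently gets some
    % of the middle digits wrong.
    ?- X is 123456789 * 987654321.
    X = 121932631112635269.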

Other resources

Large Language Models from scratch (part 1) (part 2)

EMNLP: Prompt engineering is the new feature engineering

Learn Prompting - A Free, Open Source Course on Communicating with Artificial Intelligence

137 emergent abilities of large language models

First, for the record, I really appreciate the work you have done in demonstrating how useful ChatGPT can be, and the information on how to use it in practice.

I tried to stay out of this since I am well aware that my opinions on this hot topic are contrarian. I will limit myself to posing a few questions. I know they hint at my answers, but I am not wise enough to avoid that.

Q1: Have we decided that we are not willing to design languages and compilers and libraries, and would rather just have faster ways for generating boilerplate?

Q2: Are we ready to take answers that are good some of the time, even if we don’t know which ones are the good ones and which ones are hallucinated?

Q3: What would it take to have an AI that does the asking, instead of the answering?

Q4: If we had an AI that asks questions, would we take its questions as seriously as we take ChatGPT’s answers?

And for good measure I will share a thought on the topic by a revered and yet refreshingly divisive person. I share it because I suspect it planted the nucleus of an idea in my mind, and I am simply rehashing it here, poorly.

I feel that the effort to use machines to try to mimic human reasoning is both foolish and dangerous. It is foolish because if you look at human reasoning as is, it is pretty lousy; even the most trained mathematicians are amateur thinkers. Instead of trying to imitate what we are good at, I think it is much more fascinating to investigate what we are poor at. It is foolish to use machines to imitate human beings, while machines are very good at being machines, and that is precisely something that human beings are very poor at. Any successful AI project by its very nature would castrate the machine.

Sauce


No, there is no consensus that we should abandon the design of programming languages, compilers, and libraries in favor of faster methods for generating boilerplate. Boilerplate generation can be useful for speeding up certain routine tasks, but it is not a replacement for thoughtful language and library design. Both approaches have their own advantages and disadvantages, and the choice between them will depend on the specific requirements of a project or problem.

[Screenshot: the answer when asking in a new chat]

[Screenshot: the answer when asking in the same conversation, after the other questions were answered]

The readiness to accept answers from AI systems that are “good some of the time” will depend on the specific use case and the risks involved. In certain scenarios, such as using an AI system for personal assistance or entertainment, people may be more willing to tolerate errors or inaccuracies in the answers. However, in critical applications where the consequences of an incorrect answer can be severe, such as medical diagnosis or financial planning, people will require higher levels of confidence in the accuracy of the answers before they are willing to accept them.

In general, the acceptance of AI answers will depend on the level of trust people have in the AI system and its ability to provide reliable answers. As AI systems continue to improve and demonstrate their ability to provide accurate answers, it is likely that people will become more accepting of answers that are “good some of the time”. However, it is important to ensure that AI systems are transparent about their limitations and uncertainties, and that they provide clear explanations for their answers to build trust with users.

Developing an AI that can ask questions, instead of just answering them, is a complex challenge that requires advanced natural language processing and machine learning techniques. This AI would need to understand the context of the conversation, generate relevant questions based on that context, and continue to ask follow-up questions to gather more information. It would also require a large database of knowledge to draw from and the ability to learn from previous conversations. Additionally, the AI would need to be able to understand the motivations and goals of the user, and ask questions that are likely to lead to a resolution or solution. Such an AI would require a significant amount of research and development, but it has the potential to greatly enhance human-computer interaction.

Whether people would take the questions asked by an AI as seriously as answers provided by an AI would depend on a number of factors, such as the quality of the AI’s questions, the context in which the questions are asked, and the reputation and trust people have in the AI. If the AI is able to generate high-quality, relevant, and insightful questions, people may take its questions seriously and consider them valuable. On the other hand, if the AI is not able to generate questions that are relevant or insightful, people may not take its questions seriously. Ultimately, the level of trust and respect that people have for an AI’s questions will depend on the performance of the AI and its ability to demonstrate value.


Thanks for asking. :slightly_smiling_face:

Thanks for letting me know this is a useful topic. :slightly_smiling_face:

While there are many more questions that I am asking, I am not posting them because they are just not of value. I am also posting less each day as I try to post only information that is new, noteworthy, and/or of value. I do plan to reorganize this whole topic but don’t plan to delete anything, and I am still looking for different high-level sections that are of value.

One section I did try, but need more JavaScript code for, is translating JavaScript to TypeScript. I really think TypeScript is the better language to use, but I have no experience with production-quality TypeScript.

I am also working on some meta-predicates, and chatting with ChatGPT about them turned into me trying to train it and/or find the correct vocabulary to use. So I abandoned that until I have working code to try my reverse process for teaching ChatGPT: give it correct code first and try to get it to create a correct question that produces the code given.
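
To make the meta-predicate part concrete, here is the flavor of code I mean. This is an illustrative sketch of my own, patterned after include/3 from SWI-Prolog’s library(apply); it is not the actual code from those chats:

    % keep_if(:Goal, +List, -Kept)
    % Keep the elements of List for which call(Goal, Element) succeeds.
    keep_if(_, [], []).
    keep_if(Goal, [X|Xs], Kept) :-
        (   call(Goal, X)          % apply the passed-in goal to X
        ->  Kept = [X|Rest]
        ;   Kept = Rest
        ),
        keep_if(Goal, Xs, Rest).

    even(N) :- 0 =:= N mod 2.

Given those definitions, the query ?- keep_if(even, [1,2,3,4,5], Es). yields Es = [2, 4]. The reverse process would then be to show ChatGPT code like this and ask it to write the question whose answer is exactly this code.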

If English is not your native language and you have more questions, post them in your native language. I will let ChatGPT answer them in that language, and I can also ask ChatGPT for an English translation.

An observation already made by many others, backed by a lot of research, is that ChatGPT will easily outperform the huge majority of people in situations that require free-text answers to questions for which the asker thinks they know exactly what the answer should be. This includes university exams and job interviews. Funny thing.

It is the royal We, as in “We, the Boris of Prolog”