AI humor of the day

Dr. Vaughan Pratt’s 1994 Qs to Evaluate Cyc, Asked of ChatGPT (ref)

Just had to laugh at this one: San Francisco is not west of New York.

There are many more if one keeps reading.

1994… pretty impressive!

If you are reading the answers as if they were from an AI in 1994, then you are misreading the document.

Cyc was started in 1984 as a symbolic artificial intelligence project. In 1994 those questions were asked of Cyc (ref). However, the answers in the link are not the answers given by Cyc but those given by ChatGPT, which is a large language model.

oops… indeed, I was wondering how much the massive memory required by Cyc would have cost in 1994.

The comparisons in Q4.11 are a great example of the model not really having any idea what it’s saying :laughing:

I guess the logic component of ChatGPT needs some improvement. a) Let's first assure ourselves that p ∧ q → p is a tautology.

p  q  (p ∧ q) → p
F  F  T
F  T  T
T  F  T
T  T  T
(Truth Table Generator)
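
As a quick cross-check of a), here is a minimal SWI-Prolog sketch (the predicate names are mine, not from any post in this thread) that enumerates all four rows and confirms the implication holds in each:

/* enumerate all truth values of p and q and check (p /\ q) -> p */
and(true, true, true).
and(true, false, false).
and(false, true, false).
and(false, false, false).

implies(false, _, true).    % false antecedent: implication is true
implies(true, C, C).        % true antecedent: value of the consequent

tautology_p_and_q_implies_p :-
    forall(( member(P, [true, false]),
             member(Q, [true, false]),
             and(P, Q, PandQ) ),
           implies(PandQ, P, true)).

?- tautology_p_and_q_implies_p.
true.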

b) Now, what is ChatGPT's opinion on that:

No, “p and q” does not imply “p.”

The statement “p and q” means that both p
and q are true. However, this does not
necessarily mean that p is true on its own.
For example, consider the following statements:

“It is raining and the sun is shining.”
“The earth is round and the moon is round.”
In both of these cases, “p and q” is true,
but neither “p” nor “q” is true on its own.

I made a screenshot:

Edit 10.01.2023
Credits: Fritz Feldhase, who suggested the question on de.sci.mathematik.

The AI humor example “P and Q does not imply P” reminded me of a similar but more concrete example from linear logic (as a resource-aware logic), which I read a long time ago in a book by Gaishi Takeuti published in Japanese. Although I am not familiar with the theory of stable models in FOL, I have some doubts about such semantics, though my relevant knowledge is limited to the Henkin model construction well known from the FOL literature on Gödel’s completeness theorem. Anyway, I should read articles by experts and would like to see posts on the “semantics of stable models” from any context or point of view.

Bit related

Maybe ChatGPT is a human.

ChatGPT is not a good advisor in math:

Q: How is 15 * 15 * 15 * 15 * 15 * 15 * 15 * 15 * 15 * 15 = 32,576,665,039,062 ?

A: My apologies, that was a mistake in my previous response.
15 * 15 * 15 * 15 * 15 * 15 * 15 * 15 * 15 * 15 is actually equal to:

15,625,065,353,560,625

And this number is indeed odd, not even as I previously stated.
Thank you for bringing this to my attention.

Credits: https://9gag.com/gag/a4o1jqv#comment

Edit 01.02.2023
P.S.: Give the poor AI some bigints:

/* SWI-Prolog 9.1.2 */
?- X is 15^10.
X = 576650390625.
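
For what it's worth, the correct value is also easy to check for parity directly (the quoted answer got the parity right, but the number wrong):

?- X is 15^10, Parity is X mod 2.
X = 576650390625,
Parity = 1.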

I am impressed. Concerning Drinker paradox - Wikipedia, and after helping it with the non-empty domain assumption, ChatGPT can paraphrase the two cases:

(ChatGPT Jan 30 Version. Free Research Preview.)

A: I apologize for the mistake in my previous answers. You are correct,
if “a” represents a non-drinker, then the antecedent D(a) of the implication
would be false, and the formula EXIST(a):[D(a) => ALL(b):D(b)] would be
vacuously true, meaning that the conclusion follows by default.

Q: And what is the other scenario?

A: The other scenario would be if all guests in the bar are drinking. In
this case, “a” would represent a drinker, and the antecedent D(a) of the
implication would be true. The conclusion ALL(b):D(b) would also be true,
as all guests are drinking. Therefore, the formula EXIST(a):[D(a) => ALL(b):D(b)]
would be true in this scenario.
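
For fun, here is a small SWI-Prolog sketch (predicate names are mine, not from the thread) that brute-forces EXIST(a):[D(a) => ALL(b):D(b)] over a three-guest bar: every assignment of drinker/non-drinker has a witness, by exactly the two cases ChatGPT paraphrased above.

/* does some guest X satisfy D(X) => ALL(Y):D(Y) under this assignment? */
drinker_holds(Assignment) :-
    member(_X-Dx, Assignment),                          % pick a witness X
    (   Dx == false                                     % case 1: X is a non-drinker, antecedent false
    ;   forall(member(_Y-Dy, Assignment), Dy == true)   % case 2: everybody drinks
    ),
    !.

check_all_assignments :-
    forall(( member(Da, [true, false]),
             member(Db, [true, false]),
             member(Dc, [true, false]) ),
           drinker_holds([a-Da, b-Db, c-Dc])).

?- check_all_assignments.
true.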