Helpful hints about using ChatGPT

Handling error messages

If you spend more than a few hours on ChatGPT, you will likely encounter one of several errors. The obvious response is to click the “Regenerate response” button, but that may not be the best option.

Other options

  • Click the browser’s refresh button - This is often the better first step. The error is frequently due to a network failure or a lost connection, and clicking the refresh button will start the reconnection process, which may also complete the log-in process.
  • Start a new chat session - Once a conversation turns bad, it may stay that way until you log out and then log back in.
  • Check the status of the OpenAI servers - OpenAI has a server status page: https://status.openai.com/

List of errors (not complete)

See: Error Codes


Prompts, not questions

ChatGPT is not a search engine where you can only ask questions. Many people treat ChatGPT as a replacement for Google search, but it is not. If you give it some text without any directions, it will often just rephrase the text to make it more coherent. If you give it directions along with the text, the results change accordingly. For more details, see Prompt Engineering.

ChatGPT is built to chat, so start chatting with it and over time you will learn how to effectively use it.

Examples
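
Here is a simple illustration (a made-up sketch, not an actual transcript). Pasting a paragraph on its own will usually just get it rephrased; putting a direction in front of it tells ChatGPT what to do with it:

    Summarize the following text as three bullet points.

    <paste the text here>

The same text could instead be prefixed with “Proofread the following text and list the changes” or “Rewrite the following text for a ten year old”, and the completions will be very different.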


You don’t have to think in a sequence; you can think in an outline instead.

Since ChatGPT is for chatting, and chatting is just a series of posts (a conversation in ChatGPT lingo), that is how the ChatGPT web page is set up to work. You create a post (prompt) and then ChatGPT creates a response (continuation).

When working on a project, an outline may be the better way to think. So instead of keeping everything in one long chat conversation, break up the task as you would in an outline and start a new chat conversation for each subtask. You may have to supply the needed information in the first post (think of code or facts generated in another conversation that will be needed), as ChatGPT does not carry information over from one conversation to the next.
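
For example, the first post of a new subtask conversation might look something like this (a made-up sketch; the predicate and the task are only placeholders):

    In an earlier conversation this SWI-Prolog predicate was created:

    take(N, List, Front) :-
        length(Front, N),
        append(Front, _, List).

    Using take/3, write a predicate that returns the first N lines of a file as a list of strings.

Restating the code and facts up front gives the new conversation everything it needs without dragging along the history of the old one.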


You can rename the title of a chat session

New chats are created in the left margin and start with the default name of New chat. After you enter the first prompt, the name will change. If you don’t like the name, click it to show the action icons and then change the name.


Editing an earlier prompt will remove all of the prompts and completions that follow, both yours and ChatGPT’s

As you work on a task in a conversation, ChatGPT may not reply (completion) with what you want; for example, it may create code or facts that are not correct. So you edit the preceding prompt and resubmit it for ChatGPT to complete. Sometimes, however, you realize that you should go back a few prompts and rephrase one of them to get a different completion. Doing so will delete all of the prompts and completions that follow, which may be a benefit or a disaster. For that reason, any time ChatGPT gives me a lucid completion, I record the prompt and the completion lest they be lost forever.


Don’t get into a debate with ChatGPT

ChatGPT throttles the number of prompts (messages) per time period for the free version. So if you start debating with ChatGPT, you will just waste your quota of prompts (messages) for the time period.

As noted by OpenAI for GPT-4 (ref):

You’ll receive 100 messages to GPT-4 every 4 hours. After this, you will have to wait for your quota to reset.


Thumbs down or noting a correction in a post

For this one I am still trying to figure out the details, but I have noticed a difference between the two approaches.

ChatGPT can correct itself, but to what extent I don’t know. There are also two ways to correct ChatGPT:

  1. Thumbs Down
  2. In a following prompt

1. Thumbs Down

A thumbs down will show a dialog and give you some options.

Depending upon the option you select, ChatGPT takes the feedback more or less seriously. Using a Thumbs Down should be reserved for when you can give a factual response. Using the thumbs down does not reduce the hourly quota.

Before clicking the thumbs down icon, you should have the feedback text ready to paste into the text area, because once the dialog is active you cannot close it and then reopen it. While the dialog is open it covers the conversation, which means you cannot look at what you want to give feedback on, and the dialog cannot be moved to uncover the conversation. A simple way to write the feedback is to compose it in a separate editor, e.g. Notepad++, and then paste in the text.

From the ChatGPT article:

Feedback

If you think ChatGPT’s responses could be improved or corrected, please let us know by using the thumbs up/down icons in the ChatGPT interface, next to the relevant response. We appreciate your feedback!

2. In a following prompt

Correcting ChatGPT in a following prompt is like teaching someone something that you want to see reflected in a later reply (completion). This will use up your hourly prompt quota, so only do it when absolutely necessary.

Here is an example.

Prompt

Write a declarative reading for the Prolog predicate append/3.

ChatGPT Completion

The Prolog predicate append/3 is used to concatenate two lists and returns a new list as the result. The predicate takes three arguments, the first two arguments are the lists to be concatenated and the third argument is the result list. If the first argument is an empty list, the result list is equal to the second argument. If the second argument is an empty list, the result list is equal to the first argument. If both arguments are non-empty lists, the head of the first argument is added to the result list, which is then recursively concatenated with the tail of the first argument and the second argument.

Prompt

Sorry, a declarative reading states what, this is a procedural description of how. See: Reading Prolog Programs

ChatGPT Completion

The Prolog predicate append/3 defines the relationship between three lists. The first two lists are concatenated to form a third list. If a list L1 and a list L2 can be concatenated to form a list L3, then the relationship can be expressed as append(L1, L2, L3). The predicate holds true for any given L1, L2 and L3 that satisfies this relationship.

Prompt

The declarative reading for Prolog append/3 is List1AndList2 is the concatenation of List1 and List2 See: SWI-Prolog -- append/3

ChatGPT Completion

The declarative reading for the Prolog predicate append/3 is “List1 and List2 form List1AndList2.”

Prompt

Write a declarative reading for the Prolog predicate append/3.

ChatGPT Completion

The declarative reading for the Prolog predicate append/3 is “List1AndList2 is the concatenation of List1 and List2.”
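
For reference, append/3 is defined by just two clauses (the textbook definition, essentially what SWI-Prolog’s library(lists) provides), which is why the declarative reading can be stated so compactly:

    % append(?List1, ?List2, ?List1AndList2)
    % List1AndList2 is the concatenation of List1 and List2.
    append([], List, List).
    append([Head|Tail], List, [Head|Rest]) :-
        append(Tail, List, Rest).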


Regenerating a response

Starting on 03/23/2023, when ChatGPT is asked to regenerate a response, it follows the new response with a question asking whether the regenerated response was better or worse than the original.


Learning the preferred words

Remember that ChatGPT is a transformer, the T in GPT.

A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data.

Transformers are designed to process sequential input data, such as natural language, with applications towards tasks such as translation and text summarization.

So, depending upon the words used, the value of the reply changes.

I like to think of the user’s prompt as a template that steers the generation of the completion through a graph of tokens connected by weights generated using attention. Since I can’t visualize this graph, I have to guess how the words are related. Certain words will steer the generation of the response down different paths, affecting its quality.

In a ChatGPT completion, look for new words that better identify with what you seek and remember them as if building a glossary. To me, the response is based on a qualitative measure of a walk through the network while maintaining the post’s intent but adjusting the path to walk through preferred words. This also reminds me of graph databases, such as Neo4j, where you have to provide a means to locate a starting node, such as a node ID or a query, and then walk connected nodes until a terminal case is encountered.

Another valuable technique is to use a trigger word or phrase to ask ChatGPT for a different type of response. For example, after asking a question, you can ask ChatGPT to rewrite the question with more detail, or to suggest a better way to state the question. This may reveal new words.
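
For example (a made-up sketch):

    How could the following question be restated with more detail so that it gets a more precise answer?

    How do I make my Prolog program faster?

The restated question that comes back will often contain new words that make better prompts than the original wording.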


Creating multiple lines in a user prompt

Short answer: Use <Shift>+Enter

When starting a new chat for a subtask and needing to bring forward known information, the information is often more than a single line. If you paste only a portion of what is needed and press Enter, the partial text will be sent as a post.

However, if you open another editor, like Notepad++, and build up the post with new lines, you can copy and paste it as a single, nicely formatted prompt. Then, you can use Enter to send the entire prompt.


Directions before content

ChatGPT seems to understand prompts more clearly when the directions are provided before the content text. If the directions are after, before and after, or intermingled with the content text, then ChatGPT gets confused.
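
For example, this ordering works well (a made-up sketch):

    Translate the following error message into plain English and suggest a likely cause.

    <paste the error message here>

Putting the same direction after the pasted text, or splitting it across both ends, is more likely to confuse ChatGPT.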


Don’t keep reusing the same prompt

For certain tasks, such as proofreading, it is faster to edit an existing prompt by leaving the command “proofread” in place and changing only the text. However, after a few of these edits, ChatGPT may become confused and not respond. It is more effective to start a new chat for each prompt.


Pay attention to periods, format, and capitalization of words in a prompt

ChatGPT was trained using sentences, and sentences end with periods, so it makes sense that a period is a token that has a lot of weight in pulling the generation of a completion. While people may not like long sentences, sometimes combining multiple sentences into one works better because there are fewer periods; another option is to create a bullet list of details below the primary goal.

Formatting prompts like an outline helps, especially for more technical prompts. Using bullets or numbers in a list is also important: bullets are useful for constraints or items to take into consideration, while numbered lists are better for a sequence of steps.
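
For example, a technical prompt laid out as an outline might look like this (a made-up sketch; the task, constraints and steps are only placeholders):

    Write an SWI-Prolog predicate that splits a comma separated line into a list of fields.

    Constraints:
      • Use only standard library predicates.
      • Fields do not contain quoted commas.

    Steps:
      1. Split the line on commas.
      2. Trim the whitespace from each field.
      3. Return the fields as a list of atoms.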

Capitalization of words can also have an impact. In checking the tokenization, words without any capital letters tend to produce fewer tokens and so produce better results. (ref)


Avoid long technical responses

One reason you may need to learn to break your projects into subtasks with ChatGPT is that if you ask for a long technical response, ChatGPT might error out part of the way through writing the response. It can seem as if, while processing the tokens in the graph, ChatGPT can’t decide what to do next. There seems to be a time limit for each response, and the time limit can run out.


Long responses that did not complete - no error

Sometimes ChatGPT runs smoothly and then suddenly stops outputting without any error message. This is likely due to the combined prompt and completion token limit for the model. To see more output, you can ask ChatGPT to continue. Please note that the continued text may not match up exactly with the previous text, but it is still helpful in avoiding guesswork.


Completions are non-deterministic

Deterministic means that the same input always produces the same output, while non-deterministic means that there can be multiple outputs for a given input. You can get a new response to the same prompt by clicking “regenerate response.” The same thing happens if you submit the exact same prompt in a new chat, on a different day, or under different circumstances.


Request meaning before use

Do you know what a Rose Tree data structure is? Do you think ChatGPT knows? To find out whether ChatGPT knows, just ask.

Prompt

What is a Rose Tree data structure?

ChatGPT Completion

A Rose Tree is a type of data structure in computer science, where each node has a value and a list of children, similar to a tree structure. Unlike a traditional tree structure, however, each node in a Rose Tree can have an arbitrary number of children. This makes Rose Trees more flexible than other tree structures, such as binary trees.

If the meaning in the completion is what you need, then carry on; if not, more words may be needed to get ChatGPT to understand.
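
For the curious, a rose tree is easy to sketch in Prolog as a compound term holding a value and a list of child trees (my own minimal sketch, not ChatGPT output):

    % A rose tree is tree(Value, Children), where Children is a
    % (possibly empty) list of rose trees.

    % rose_tree_values(+Tree, -Values)
    % Values is the list of values in Tree, collected depth-first.
    rose_tree_values(tree(Value, Children), [Value|Rest]) :-
        maplist(rose_tree_values, Children, ChildValues),
        append(ChildValues, Rest).

    % ?- rose_tree_values(tree(a, [tree(b, []), tree(c, [tree(d, [])])]), Vs).
    % Vs = [a, b, c, d].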


Code blocks

A few unofficial sources mention that the GPT3.5 training data included Markdown. If GPT3.5 was trained on the raw data, then code blocks, indicated by ``` in Markdown and used for syntax highlighting, may have been encoded as tokens. Using a code block such as ```html <some html> ``` in a prompt could therefore have a positive impact on the completion. The prevalence of code blocks in many completions that contain code leads me to suspect that GPT3.5 may have used them.
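
For example, a prompt could give the direction first and then supply the code inside a fenced block (a made-up sketch; list_sum/2 is just a stand-in predicate):

    Explain what the following Prolog predicate does.

    ```prolog
    list_sum([], 0).
    list_sum([Head|Tail], Sum) :-
        list_sum(Tail, Rest),
        Sum is Head + Rest.
    ```

The ```prolog marker both sets the code apart from the directions and hints at the language.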


Don’t ask for DOI with research papers

I tried this. If you think titles of research papers and links to them generate lots of hallucinations, just about every DOI in a ChatGPT response is a hallucination.


Don’t give snarky comments to ChatGPT

See: a hilarious completion


Once a hard-to-create prompt works and you have saved the details, use regenerate response to learn more.

After engineering a prompt through many attempts to get a complicated technical completion that works, save the prompt and the details of the completion. Then, in a new chat, use the prompt again, review the completion for anything new and of interest while taking notes, and continue regenerating more completions and taking notes; you may learn a few things along the way.


ChatML

This is preliminary information but worth noting.

https://github.com/openai/openai-python/blob/main/chatml.md