Tips for LLM prompts

prompt engineering



April 17, 2023

General tips to get better outputs from ChatGPT.


This post recaps OpenAI’s suggestions (as of writing) for improving the outputs of ChatGPT.

These suggestions are based on different papers that explore how to improve an LLM's output. Many teams are working in this area, with a matching variety of proposed improvements.

Thankfully, we can also extract a broad set of suggestions from what the different improvements have in common.


The OpenAI cookbook has more details about the different proposed improvements.

# Tips and tricks for ChatGPT Prompts

The following lists are high-level suggestions for getting better results from ChatGPT. They are meant as a quick reference we can scan for good habits and practices before jumping into an LLM dialogue.

I hope it is helpful for others, and suggestions/improvements are always welcome!

## Improving outputs

  • Split large, complex tasks into subtasks
    • If possible, structure and isolate the instructions per each subtask
  • Prompt the model to explain its reasoning before answering
  • If the output was bad, try making the instructions clearer
    • Start with simple and direct language
    • You can get more complex as the conversation and context grow
  • Have the model generate many answers, then ask it to distill them into a single, best answer
  • If possible, fine-tune custom models to maximize performance
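The first two tips above can be sketched as plain prompt-building helpers. This is a minimal illustration, not an official recipe: the function names and prompt wording are my own, and the resulting strings would be sent through whatever chat-completion call you use.

```python
# Sketch: isolate instructions per subtask, and build a "distill many
# candidates into one answer" prompt. Wording here is illustrative.

def subtask_prompts(task, subtasks):
    """Build one isolated instruction per subtask of a larger task."""
    return [
        f"Overall task: {task}\nCurrent subtask: {sub}\nAnswer only this subtask."
        for sub in subtasks
    ]

def distill_prompt(candidates):
    """Ask the model to merge several candidate answers into one."""
    numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(candidates, 1))
    return (
        "Here are several candidate answers to the same question:\n"
        f"{numbered}\n"
        "Distill them into a single, best answer."
    )
```

Each subtask prompt goes out as its own request; the candidate answers then feed into the distill prompt as a final request.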

## Generic tips

  • Explicitly guide the model through the thought process
    • This helps it stay focused on sub-tasks
  • “Let’s think step by step…”
    • This trick works best on logical, mathematical, and reasoning tasks
    • We can possibly leverage it for other tasks by breaking them down into a “logical” structure
  • Give the model a few examples of the task you want (Few-Shot)
  • Split a question into different types of prompts and alternate between them
    • Selection Prompt: finds the relevant pieces of info
    • Inference Prompt: uses the relevant pieces to generate an answer
    • Halter Prompt: figures out when the alternating should stop
  • Reduce hallucinations by constraining what the model can say
    • Give it a way to back down
    • “If you do not know the answer, say ‘I don’t know’ …”
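Several of the tips above can be combined in one loop: alternate selection prompts until a halter condition, then finish with an inference prompt that includes the step-by-step nudge and the "I don't know" escape hatch. This is a sketch under my own assumptions — `ask` is a placeholder for any chat-completion call, and the prompt wording is illustrative.

```python
# Sketch of the selection / inference / halter alternation described above.
# `ask(prompt) -> str` stands in for a real chat-completion call.

def answer_with_alternation(question, context, ask, max_rounds=5):
    """Alternate selection prompts until the model halts, then infer."""
    facts = []
    for _ in range(max_rounds):
        # Selection prompt: pull out one more relevant piece of info.
        selection = ask(
            f"Context: {context}\n"
            f"Question: {question}\n"
            f"Facts so far: {facts}\n"
            "Select one more relevant fact, or reply DONE if none remain."
        )
        # Halter: stop the alternation when the model says so.
        if selection.strip() == "DONE":
            break
        facts.append(selection)
    # Inference prompt: use the selected facts to produce the answer.
    return ask(
        f"Facts: {facts}\n"
        f"Question: {question}\n"
        "Let's think step by step. "
        "If you do not know the answer, say 'I don't know'."
    )
```

The `max_rounds` cap is a second, cheaper halter in case the model never replies DONE.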

## API Tips

  • Give the model an identity with an explicit behavior and intent
    • “You are a [[creative and helpful]] [[expert editor]] who is [[helping to edit my writing]]…”
  • Ask the model to answer from the perspective of an expert
  • Try restating the original “system” message to keep the model on-task.
  • If the model is getting off-track, remind it of its instructions and current context at the end of the prompt
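The identity and reminder tips map naturally onto the chat API's message list: set the identity as the system message, then restate it at the end of the latest user turn. The message shapes below follow the standard chat format; the helper name and reminder wording are my own.

```python
# Sketch of the "identity + end-of-prompt reminder" pattern for a chat API.

def build_messages(identity, history, user_prompt):
    """Assemble a chat-format message list with the identity restated."""
    messages = [{"role": "system", "content": identity}]
    messages += history
    # Restate the instructions at the end to keep the model on-task.
    messages.append({
        "role": "user",
        "content": f"{user_prompt}\n\nReminder: {identity}",
    })
    return messages
```

The returned list is what you would pass as the `messages` argument of a chat-completion request.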