03/02/2026 7:19 PM

⬅️ [02/02/2026 10:20 AM](<./02_02_2026 10_20 AM.md>) | ⬆️ [2026 - February](<./README.md>) | [04/02/2026 9:14 AM](<./04_02_2026 9_14 AM.md>) ➡️

I have been messing with prompting for the user stand-in LLM. The best way I've found to get good responses is to work with the grain of instruct-tuned models: don't try anything fancy like having the LLM continue partially written text inside the user response. Just append an instruction as the user at the end of the clarification dialog, saying:

> Please write the response to the clarifying question above as if you were the user described in the system prompt. Reply directly with the clarification text only. Your text should start with something like 'Regarding your question, I am asking about ' and be short and concise.

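For my own reference, here's roughly what that looks like in code. This is a minimal sketch assuming an OpenAI-style chat completions client; the model name, persona text, and example clarifying question are placeholders, and mapping the clarifying question to a user-role message is just one way to present the dialog to the stand-in model.

```python
# Minimal sketch of the approach above: append the instruction as a plain
# user message at the end of the clarification dialog instead of trying to
# make the model continue partially written user text.
# Assumes an OpenAI-style chat completions client; names are placeholders.
from openai import OpenAI

client = OpenAI()

# Placeholder persona for the user stand-in.
USER_PERSONA_SYSTEM_PROMPT = (
    "You are standing in for the user described here: ..."
)

# The clarification dialog so far, as seen by the stand-in LLM.
# Here the assistant's clarifying question is passed as a user-role message;
# that role mapping is an assumption, not part of the original note.
dialog = [
    {"role": "system", "content": USER_PERSONA_SYSTEM_PROMPT},
    {"role": "user", "content": "Could you clarify which environment you mean?"},
]

# The appended instruction, verbatim from the note above.
dialog.append({
    "role": "user",
    "content": (
        "Please write the response to the clarifying question above as if you "
        "were the user described in the system prompt. Reply directly with the "
        "clarification text only. Your text should start with something like "
        "'Regarding your question, I am asking about ' and be short and concise."
    ),
})

response = client.chat.completions.create(model="gpt-4o-mini", messages=dialog)
print(response.choices[0].message.content)
```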

⬅️ [02/02/2026 10:20 AM](<./02_02_2026 10_20 AM.md>) | ⬆️ [2026 - February](<./README.md>) | [04/02/2026 9:14 AM](<./04_02_2026 9_14 AM.md>) ➡️