This whitepaper, recently shared by Lee Boonstra of Google, serves as a comprehensive guide to leveraging LLMs effectively in production environments, highlighting the techniques, strategies, and best practices shaping the future of AI applications. The whitepaper can be downloaded here.

Looking for a quick wrap-up of the whitepaper? Here goes!
Here are the key pointers from Google’s 69-page whitepaper on prompt engineering:
- Show examples: Use examples to guide the model toward the structure and output you want (few-shot prompting; see the first sketch after this list).
- Keep it simple: Write prompts that are clear and easy to understand to avoid confusion.
- Be clear about the output: Say exactly what kind of response you're looking for; this helps with both accuracy and format.
- Give instructions, not restrictions: Instead of saying what not to do, tell the model what you do want.
- Set token limits wisely: Use max token settings to control output length, cost, and performance (see the token-limit sketch below).
- Use variables in prompts: Add placeholders (like {{name}}) to make prompts reusable across tasks (see the templating sketch below).
- Try different input styles: Test different formats and writing tones to see what works best.
- Vary class order in classification tasks: When giving few-shot examples for a classifier, mix up the order of the classes so the model doesn't overfit to a fixed sequence.
- Adjust with model updates: Update your prompts over time to keep up with model improvements.
- Try different output formats: Use JSON, markdown, bullet points, etc., to match your desired structure (see the JSON sketch below).
- Collaborate with others: Work with other prompt engineers to find better techniques.
- Keep track of what you try: Log your prompt versions and results to improve over time (see the logging sketch below).
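
To make the first pointer concrete, here is a minimal sketch of few-shot prompting in plain Python. The sentiment task, example reviews, and formatting are invented for illustration; they are not taken from the whitepaper.

```python
# Few-shot prompt: a couple of worked examples steer the model
# toward the structure and labels we want. All examples are invented.

def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt from a task description, example pairs, and the new input."""
    parts = [task, ""]
    for text, label in examples:
        parts.append(f"Review: {text}")
        parts.append(f"Sentiment: {label}")
        parts.append("")
    parts.append(f"Review: {query}")
    parts.append("Sentiment:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    task="Classify the sentiment of each review as POSITIVE, NEUTRAL, or NEGATIVE.",
    examples=[
        ("The battery lasts all day and the screen is gorgeous.", "POSITIVE"),
        ("It works, but the fan noise is hard to ignore.", "NEGATIVE"),
    ],
    query="Setup took five minutes and everything just worked.",
)
print(prompt)
```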
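
For the token-limit pointer, here is a sketch using the google-generativeai Python SDK. The model name, API-key handling, and setting values are assumptions; check the current SDK docs, since the interface evolves.

```python
# Sketch: capping output length with the google-generativeai SDK.
# Model name and API-key handling are assumptions for illustration.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

response = model.generate_content(
    "Summarize the plot of Hamlet in one paragraph.",
    generation_config=genai.types.GenerationConfig(
        max_output_tokens=128,  # hard cap on response length: bounds cost and latency
        temperature=0.2,        # low temperature keeps the summary focused
    ),
)
print(response.text)
```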
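
The variables pointer can be implemented with a few lines of plain Python. The {{...}} placeholder syntax mirrors the bullet above, and the travel-guide template text is illustrative; the `fill` helper is a stand-in for whatever templating your stack already uses.

```python
# Sketch: reusable prompt templates with {{variable}} placeholders.
import re

TEMPLATE = "You are a travel guide. Tell me a fact about the city: {{city}}."

def fill(template: str, **values: str) -> str:
    """Replace each {{name}} placeholder with its value, failing loudly on gaps."""
    def sub(match: re.Match) -> str:
        key = match.group(1)
        if key not in values:
            raise KeyError(f"missing value for placeholder {{{{{key}}}}}")
        return values[key]
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

print(fill(TEMPLATE, city="Amsterdam"))
# -> You are a travel guide. Tell me a fact about the city: Amsterdam.
```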
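
For the output-format pointer, asking for JSON makes responses machine-checkable. The schema and prompt wording below are invented, and `response_text` stands in for whatever your model call returns.

```python
# Sketch: requesting JSON output and validating it before use.
import json

prompt = """Extract the product name and price from the text below.
Return ONLY valid JSON matching this schema:
{"product": string, "price_usd": number}

Text: The new ZoomBook Air is on sale for $899."""

# Placeholder for the model's reply.
response_text = '{"product": "ZoomBook Air", "price_usd": 899}'

try:
    data = json.loads(response_text)  # structured output is trivially verifiable
    print(data["product"], data["price_usd"])
except json.JSONDecodeError:
    print("Model did not return valid JSON; retry or repair.")
```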
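
Finally, the logging pointer: a minimal sketch that appends each prompt attempt to a JSON-lines file. The field set loosely follows the kind of documentation table the whitepaper suggests (name, model, settings, prompt, output), but treat the exact fields as an assumption to adapt.

```python
# Sketch: logging each prompt experiment as one JSONL record.
import json
import datetime

def log_attempt(path: str, *, name: str, model: str, temperature: float,
                max_tokens: int, prompt: str, output: str) -> None:
    """Append one prompt experiment to a JSON-lines log file."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "name": name,
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "prompt": prompt,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_attempt(
    "prompt_log.jsonl",
    name="sentiment-few-shot-v2",
    model="gemini-1.5-flash",
    temperature=0.1,
    max_tokens=5,
    prompt="Classify the sentiment ...",
    output="POSITIVE",
)
```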
