Best Practices for Writing AI Instructions in Auto-Fill Fields
Auto-Fill Fields allow you to generate values for Custom Fields using Automations. The quality of your AI instructions directly impacts the accuracy, consistency, and reliability of the results. Poorly written instructions lead to vague, inconsistent, or incorrect outputs, while clear and specific ones produce predictable behavior across requests.
This guide outlines best practices, common pitfalls, and practical examples to help you design effective AI instructions.
Best Practices
Be Specific
Vague prompts lead to vague answers. Tell the AI exactly what to do and in what form the answer should appear.
Example
Good: “From {{ticket.all_messages}}, identify the Issue Type.”
Poor: “Figure out the issue type.”
Provide Context
Include the information the AI should focus on (ticket title, request text, last message). Use available variables such as {{ticket.title}}, {{ticket.all_messages}}, or {{request.requester.name}}.
Example
Good: “Use {{ticket.request_message.text}} to determine the urgency level.”
Poor: “Set urgency.”
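Grounding an instruction in a specific variable can be pictured as a simple template substitution. The sketch below is purely illustrative (the actual Auto-Fill engine may render variables differently); the ticket text is a made-up example.

```python
# Hypothetical sketch of how a {{variable}} placeholder might be filled in
# before the instruction reaches the model.
instruction = "Use {{ticket.request_message.text}} to determine the urgency level."

ticket = {"ticket.request_message.text": "Our whole site is down before tomorrow's launch."}

def render(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

print(render(instruction, ticket))
```

Because the instruction names an exact variable, the model always receives the same slice of the ticket, which keeps results consistent across requests.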
Keep Instructions Concise
Long or complex instructions confuse AI. Short, direct instructions work best.
Guidelines
Use 1–2 sentences.
Avoid filler or redundant words.
Limit to a single task per instruction.
Example
Good: “Summarize the request in one sentence.”
Poor: “Provide a detailed overview of the user’s expressed issue in as much detail as possible.”
Show Examples in Instructions (Input → Output)
Demonstrate how a few inputs should map to outputs.
Example
Input: “Can’t log in” → Category: Access
Input: “Billing error on invoice” → Category: Billing
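Assembled into a single instruction, the input → output pairs above might read as follows. This is a hypothetical sketch of how you could lay out the examples; substitute your own categories.

```python
# Hypothetical sketch: turn a few input -> output pairs into the
# example section of an Auto-Fill instruction.
examples = [
    ("Can't log in", "Access"),
    ("Billing error on invoice", "Billing"),
]

lines = ["Classify the request into a Category. Examples:"]
for text, category in examples:
    lines.append(f'Input: "{text}" -> Category: {category}')
instruction = "\n".join(lines)
print(instruction)
```

Two or three representative pairs are usually enough to anchor the mapping without making the instruction long.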
Handle Uncertainty with Fallback Rules
Tell the AI what to do if it cannot decide.
Example
“If none of the categories apply, select ‘Other’.”
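A fallback rule guarantees that the field always lands on a known value. As a hypothetical sketch, the same guarantee could be enforced in post-processing (the category set here is invented for illustration):

```python
ALLOWED_CATEGORIES = {"Access", "Billing", "Other"}  # hypothetical field options

def enforce_fallback(model_output: str) -> str:
    """Coerce any unrecognized answer to the 'Other' fallback."""
    value = model_output.strip()
    return value if value in ALLOWED_CATEGORIES else "Other"

print(enforce_fallback("Billing"))
print(enforce_fallback("I'm not sure"))  # falls back to "Other"
```

Stating the fallback in the instruction itself serves the same purpose: the model never has to invent a value outside the field's options.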
Use Domain Language Carefully
Spell out acronyms and shorthand.
Do not assume the AI understands team-specific terms without examples.
Testing Your Instructions
Even well-designed instructions may need refinement. Testing ensures reliability across real-world requests.
Steps
Draft the instruction using best practices.
Run on a few sample tickets of varying complexity.
Check outputs for accuracy and consistency.
Refine wording as needed (simplify, add examples, clarify boundaries).
Confirm reliability before enabling in production.
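The testing loop above can be sketched as a small check over sample tickets. Everything here is hypothetical: `classify` stands in for whatever produces the Auto-Fill value (shown as a keyword heuristic only so the sketch runs), and the sample texts and expected categories are invented.

```python
# Hypothetical harness: run samples of varying complexity and compare
# the generated values against the expected ones.
samples = [
    ("Can't log in", "Access"),
    ("Billing error on invoice", "Billing"),
    ("Feature suggestion", "Other"),
]

def classify(text: str) -> str:
    # Stand-in for the AI call, using a keyword heuristic for illustration.
    lowered = text.lower()
    if "log in" in lowered:
        return "Access"
    if "billing" in lowered or "invoice" in lowered:
        return "Billing"
    return "Other"

failures = [(t, expected, classify(t)) for t, expected in samples if classify(t) != expected]
print(f"{len(samples) - len(failures)}/{len(samples)} samples passed")
```

Any failures point to the wording that needs refining: simplify, add an example, or tighten a boundary, then rerun the same samples.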
Common Pitfalls to Avoid
Undefined terms
Avoid vague words like “urgent,” “important,” or “high priority” without definition.
Instead: “Mark as High if the request mentions downtime, an outage, or a deadline-critical issue.”
Overly long instructions
More words do not mean more clarity. If an instruction grows past one or two sentences, cut it back to a single task.
Quick Reference Checklist
Is the task defined clearly and specifically? (Prevents vague outputs)
Did I tell the AI what part of the request to analyze? (Focuses the AI on the correct input)
Am I avoiding vague terms and multiple tasks? (Keeps instructions unambiguous)
Is the instruction short, simple, and easy to test? (Improves reliability)