Best Practices for Writing AI Instructions in Auto-Fill Fields

Auto-Fill Fields allow you to generate values for Custom Fields using Automations. The quality of your AI instructions directly impacts the accuracy, consistency, and reliability of the results. Poorly written instructions can lead to vague, inconsistent, or incorrect outputs, while clear and specific ones enable predictable behavior across requests.

This guide outlines best practices, common pitfalls, and practical examples to help you design effective AI instructions.

Best Practices

Be Specific

Vague prompts lead to vague answers. Tell the AI exactly what to do and in what form the answer should appear.

Example

  • Good: “From {{ticket.all_messages}}, identify the Issue Type.”

  • Poor: “Figure out the issue type.”

Provide Context

Include information the AI should focus on (ticket title, request text, last message). Use available variables like {{ticket.title}}, {{ticket.all_messages}}, or {{request.requester.name}}.

Example

  • Good: “Use {{ticket.request_message.text}} to determine the urgency level.”

  • Poor: “Set urgency.”
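
To make the variable mechanics concrete, here is a minimal Python sketch of how {{...}} placeholders might be filled in before the instruction reaches the AI. The ticket data and the fill_variables helper are assumptions for illustration, not the product's actual implementation.

  import re

  # Hypothetical ticket data; the keys mirror the variables mentioned above.
  ticket = {
      "ticket.title": "Checkout outage",
      "ticket.request_message.text": "Our checkout page has been down for an hour.",
  }

  def fill_variables(template, data):
      # Replace each {{variable}} with its value from the ticket data.
      return re.sub(r"\{\{(.+?)\}\}", lambda m: data.get(m.group(1), ""), template)

  instruction = "Use {{ticket.request_message.text}} to determine the urgency level."
  print(fill_variables(instruction, ticket))
  # Use Our checkout page has been down for an hour. to determine the urgency level.

The more concrete the substituted text is, the easier it is for the AI to ground its answer in the right part of the request.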

Keep Instructions Concise

Long or complex instructions confuse the AI; short, direct instructions work best.

Guidelines

  1. Use 1–2 sentences.

  2. Avoid filler or redundant words.

  3. Limit to a single task per instruction.

Example

  • Good: “Summarize the request in one sentence.”

  • Poor: “Provide a detailed overview of the user’s expressed issue in as much detail as possible.”

Show Examples in Instructions (Input → Output)

Demonstrate how a few inputs should map to outputs.

Example

  • Input: “Can’t log in” → Category: Access

  • Input: “Billing error on invoice” → Category: Billing
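
If you keep the example pairs in one place, you can assemble them into the instruction text programmatically. This is a minimal Python sketch using the pairs above; the assembly loop is an illustration, not a built-in feature.

  # Example pairs from this section, embedded as input -> output mappings.
  examples = [
      ("Can't log in", "Access"),
      ("Billing error on invoice", "Billing"),
  ]

  instruction = "From {{ticket.all_messages}}, identify the Category.\nExamples:\n"
  for text, category in examples:
      instruction += f'- "{text}" -> Category: {category}\n'

  print(instruction)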

Handle Uncertainty with Fallback Rules

Tell the AI what to do if it cannot decide.

Example

  • “If none of the categories apply, select ‘Other’.”
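
A fallback can also be enforced after the AI answers. The minimal Python sketch below keeps the AI's output only when it matches an allowed option and otherwise defaults to "Other"; the category list and helper name are assumptions for illustration.

  # Hypothetical allowed options; adjust to your field's actual choices.
  ALLOWED_CATEGORIES = {"Access", "Billing", "Other"}

  def apply_fallback(ai_output):
      # Keep the AI's answer only if it is a known category.
      answer = ai_output.strip()
      return answer if answer in ALLOWED_CATEGORIES else "Other"

  print(apply_fallback("Billing"))        # Billing
  print(apply_fallback("Uncategorized"))  # Other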

Use Domain Language Carefully

  • Spell out acronyms or shorthand.

  • Do not assume the AI understands team-specific terms without examples.

Testing Your Instructions

Even well-designed instructions may need refinement. Testing ensures reliability across real-world requests; a small test-harness sketch follows the steps below.

Steps

  1. Draft the instruction using best practices.

  2. Run on a few sample tickets of varying complexity.

  3. Check outputs for accuracy and consistency.

  4. Refine wording as needed (simplify, add examples, clarify boundaries).

  5. Confirm reliability before enabling in production.
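
Steps 2 and 3 can be automated with a small harness. In this minimal Python sketch, generate_field_value is a hypothetical stand-in for whatever runs your Auto-Fill instruction, and the sample tickets and expected values are assumptions.

  # Hypothetical sample tickets with the values you expect the field to take.
  sample_tickets = [
      {"text": "Can't log in to my account", "expected": "Access"},
      {"text": "I was charged twice on my invoice", "expected": "Billing"},
      {"text": "How do I export my data?", "expected": "Other"},
  ]

  def generate_field_value(ticket_text):
      # Stand-in for the Auto-Fill Field; replace with your real setup.
      text = ticket_text.lower()
      if "log in" in text:
          return "Access"
      if "invoice" in text or "charged" in text:
          return "Billing"
      return "Other"

  for ticket in sample_tickets:
      actual = generate_field_value(ticket["text"])
      status = "OK" if actual == ticket["expected"] else "MISMATCH"
      print(f'{status}: {ticket["text"]!r} -> {actual}')

Run the same samples after each wording change so you can tell whether a revision actually improved consistency.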

Common Pitfalls to Avoid

  1. Undefined terms

    • Avoid vague words like “urgent,” “important,” or “high priority” without defining them.

    • Instead: “Mark as High if the request mentions downtime, an outage, or a deadline-critical issue.” One way to encode such a definition appears in the sketch after this list.

  2. Overly long instructions

    • More words do not mean more clarity.
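
To see what defining a vague term can look like in practice, here is a minimal Python sketch that builds the High-priority rule from an explicit trigger-word list; the trigger words and helper name are assumptions for illustration.

  # Hypothetical trigger words that define "High" concretely.
  HIGH_PRIORITY_TRIGGERS = ("downtime", "outage", "deadline")

  def build_priority_instruction():
      # Spell the definition out in the instruction itself.
      triggers = ", ".join(HIGH_PRIORITY_TRIGGERS)
      return (f"Mark Priority as High if the request mentions any of: {triggers}. "
              "Otherwise, mark it as Normal.")

  print(build_priority_instruction())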

Quick Reference Checklist

Check → Why it matters

  • Is the task defined clearly and specifically? → Prevents vague outputs

  • Did I tell the AI what part of the request to analyze? → Focuses the AI on the correct input

  • Am I avoiding vague terms and multiple tasks? → Keeps instructions unambiguous

  • Is the instruction short, simple, and easy to test? → Improves reliability
