During mountain rescue training, our instructors spent as much time telling us what not to do as what we should do. "Do not fix the rope to that anchor point." "Do not move without three points of contact." These constraints weren't restrictive; they created the safe space within which we could work effectively.
It's the same with AI. One of the most powerful techniques in context engineering is setting clear constraints.
Instead of just giving the AI a goal, give it rules of engagement. Add a "Constraints" section to your prompt that tells it what to avoid.
· "Do not use these words: [list of forbidden words]."
· "Do not write in a passive voice."
· "Your tone should be professional but not academic."
This immediately refines the output, saving you editing time. You are fencing off the areas you don't want the AI to enter, forcing it down the path you do.
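
If you assemble prompts in code rather than typing them by hand, the "Constraints" section can live in a small helper. The sketch below is illustrative only: the `build_prompt` function, the example task, and the forbidden-word list are all assumptions, not a fixed recipe.

```python
# A minimal sketch: state the goal first, then the rules of engagement.
# build_prompt, the task text, and FORBIDDEN_WORDS are illustrative placeholders.

FORBIDDEN_WORDS = ["leverage", "synergy", "delve"]  # substitute your own list

def build_prompt(task: str, constraints: list[str]) -> str:
    """Assemble a prompt with the goal up top and an explicit Constraints section."""
    lines = [task, "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    task="Write a 150-word product update for our newsletter.",
    constraints=[
        f"Do not use these words: {', '.join(FORBIDDEN_WORDS)}.",
        "Do not write in the passive voice.",
        "Your tone should be professional but not academic.",
    ],
)
print(prompt)
```

Keeping the constraints in a plain list has a practical side benefit: you can add, remove, or reorder them one at a time and see exactly which rule changed the output.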
Take a prompt you use regularly. Add a single constraint to it (e.g., "Write this for a year 11 reading level"). Observe the difference in the result.
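
As a quick sketch of that exercise (the `base_prompt` here is a stand-in for whatever prompt you already use):

```python
# Take an existing prompt and add exactly one constraint, nothing else.
base_prompt = "Summarise the quarterly report for the leadership team."
constraint = "Write this for a year 11 reading level."

constrained_prompt = f"{base_prompt}\n\nConstraints:\n- {constraint}"

# Run both versions through your model and compare the outputs side by side.
print(constrained_prompt)
```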
If you could tell an AI to stop doing one annoying thing, what would it be?