Sets the model's personality, behavior rules, and context before the conversation starts.
The system prompt defines how the model should behave for the entire conversation. It sets the persona, establishes constraints, provides background knowledge, and specifies output format. Think of it as the model's job description.
The system prompt goes at the beginning of the context, before any user messages. The model treats it as high-priority instructions. Different providers handle system prompts differently, but the net effect is similar: the model tries to follow system prompt instructions throughout the conversation.
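A minimal sketch of that placement, using the OpenAI-style chat message format (a `role`/`content` list) for illustration. The helper function and example strings are hypothetical; the point is only that the system message comes first, before any user messages.

```python
# Illustrative sketch: position the system prompt at the start of the
# message list, before any user messages, in OpenAI-style chat format.

def build_messages(system_prompt: str, user_message: str) -> list[dict]:
    """Place the system prompt first, ahead of the user message."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_messages(
    system_prompt="You are a concise technical support agent. Answer in plain English.",
    user_message="My build is failing with a linker error.",
)
print(messages[0]["role"])  # the system prompt always comes first
```

Some providers accept the system prompt as a separate top-level parameter rather than a message in the list, but the effect is the same: it is treated as standing instructions for the whole conversation.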
Always. Every production API call should include a system prompt that defines what the model should and should not do, sets its tone and style, constrains the output format, and supplies relevant background knowledge.
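A sketch of what such a prompt might look like in practice, covering all four elements: behavior rules, tone, output format, and background knowledge. The company, policies, and wording are invented for illustration.

```python
# A hypothetical production system prompt covering the four elements:
# behavior rules, tone, output format, and background knowledge.
SYSTEM_PROMPT = """\
You are the support assistant for Acme Widgets (a hypothetical company).

Behavior: answer only questions about Acme products; refuse requests
for legal or medical advice.
Tone: friendly and concise; no marketing language.
Output format: short paragraphs; use numbered steps for instructions.
Background: Acme's return window is 30 days; support hours are 9-5 ET.
"""
```

Keeping each element on its own labeled line makes the prompt easy to audit and to update when a single policy changes.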
The most common mistakes: writing system prompts that are too vague ("be helpful"), making them so long that they consume valuable context, failing to test them against adversarial inputs, and putting instructions in the user message that belong in the system prompt.
Maximum length is limited only by the model's context window. As a best practice, keep the system prompt under roughly 1,000 tokens for most applications, since every token it consumes is context unavailable to the rest of the conversation.
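A simple budget check can catch oversized prompts before they ship. The sketch below uses the common chars-divided-by-four approximation for English text; it is not an exact token count, so use your provider's actual tokenizer for real measurements.

```python
# Rough guard that flags system prompts over a token budget.
# chars // 4 is a coarse approximation for English text, not an
# exact count; real tokenizers vary by model and provider.

def approx_tokens(text: str) -> int:
    """Estimate token count as roughly one token per four characters."""
    return max(1, len(text) // 4)

def fits_budget(system_prompt: str, budget: int = 1000) -> bool:
    """Return True if the prompt's estimated size is within the budget."""
    return approx_tokens(system_prompt) <= budget

print(fits_budget("You are a helpful assistant."))  # well under 1000 tokens
```

Wiring a check like this into CI keeps prompt growth visible as teams add rules over time.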