Order of Fields Matters in Structured LLM Responses

I ran into some very strange behavior with my model's responses while working on an email-processing tool. We use Pydantic models for structured responses.

We have something like:

from typing import Literal
from pydantic import BaseModel

class Action(BaseModel):
    # The action is constrained to one of three string values
    actionType: Literal[
        "IGNORE", "REPLY_WITH_TEMPLATE", "GENERATE_CUSTOM_EMAIL"
    ]
    reasoning: str
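For context, this model is what we pass as the response format when calling the LLM. Here is a minimal sketch of the call, assuming the OpenAI Python SDK's structured-output helper; the model name, system prompt, and email_body are placeholders, not our actual setup:

from openai import OpenAI

client = OpenAI()

# Hypothetical example input, not from the real pipeline
email_body = "Hi, I can't log into my account. Can you help?"

completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Decide how to handle this email."},
        {"role": "user", "content": email_body},
    ],
    response_format=Action,  # the Pydantic model above becomes the output schema
)

action = completion.choices[0].message.parsed  # an Action instance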

I kept getting self-contradictory responses like:

{
    "actionType": "REPLY_WITH_TEMPLATE",
    "reasoning": "There is no template available, we need to generate a response using the company's FAQ"
}

Note the contradiction: the reasoning says no template exists, yet the actionType is REPLY_WITH_TEMPLATE instead of GENERATE_CUSTOM_EMAIL. My first instinct was to tweak the prompt, and although that helped somewhat, these inconsistent responses kept showing up.

After some research, I arrived at a solution that is obvious in hindsight, yet elegant.

I just changed the order of the fields and put reasoning first:

class Action(BaseModel):
    reasoning: str  # generated first, so the model "thinks" before it decides
    actionType: Literal[
        "IGNORE", "REPLY_WITH_TEMPLATE", "GENERATE_CUSTOM_EMAIL"
    ]

This way, because the model generates tokens left to right, the full reasoning is already in its context by the time it produces the tokens for actionType. The action type is conditioned on the reasoning, so the two stay consistent. With the original order, the model had to commit to an action first and then justify it after the fact.
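One way to see what changed under the hood: Pydantic (v2, at least) emits the JSON schema properties in field declaration order, and that schema is what the structured-output machinery sends to the model, so the model is now asked for reasoning before actionType. A quick sanity check:

# Properties follow declaration order, so "reasoning" now precedes
# "actionType" in the schema the model is constrained by.
print(list(Action.model_json_schema()["properties"]))
# ['reasoning', 'actionType']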