
GovAI's Reasoning Model Training (Recording and Summary)

Written by Omar Salaymeh
Updated over 2 months ago

Below is a recording of GovAI's live training webinar on reasoning models. The webinar covers:

  • What makes reasoning models different

  • How reasoning models work

  • The concept of reasoning 'effort'

  • Practical heuristics on setting 'effort'

  • Prompt template and examples

Handout: A one-page 'Cheat Sheet' is attached at the bottom of this article.


Webinar Summary

Below is a summarized transcript of the webinar above. The summary was generated by GovAI using the "Automatic" reasoning setting.


What is a reasoning model?

  • A reasoning model (GPT‑5 in GovAI) plans before answering. It:

    1. drafts a plan,

    2. executes subtasks with tools (web search, file reading, code/data analysis),

    3. verifies and sanity‑checks results,

    4. formats a usable answer.

  • Result: stronger performance on complex, multi‑step work (policy analysis, data reviews, scenario planning), with the trade‑off of longer run time.
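
Below is a minimal Python sketch of this plan, execute, verify, format loop. It is purely illustrative: the class and function names are hypothetical stand-ins, not GovAI internals, and the "tools" are stubs.

```python
# Hypothetical sketch of the plan -> execute -> verify -> format loop.
# Nothing here is GovAI's actual implementation.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Subtask:
    description: str
    tool_name: str | None = None  # e.g. "web_search", "read_file", or None


def draft_plan(question: str, effort: str) -> List[Subtask]:
    """1. Draft a plan; higher effort levels produce more subtasks."""
    steps = [Subtask(f"recall background for: {question}")]
    if effort != "Minimal":
        steps.append(Subtask("gather supporting evidence", tool_name="web_search"))
    return steps


def execute(subtask: Subtask, tools: Dict[str, Callable[[str], str]]) -> str:
    """2. Execute a subtask, calling a tool when the plan asks for one."""
    tool = tools.get(subtask.tool_name or "")
    return tool(subtask.description) if tool else f"answered directly: {subtask.description}"


def verify(result: str) -> bool:
    """3. Sanity-check a result before using it (stubbed as a non-empty check)."""
    return bool(result.strip())


def answer(question: str, effort: str, tools: Dict[str, Callable[[str], str]]) -> str:
    """4. Keep verified results and format a usable answer."""
    results = [execute(s, tools) for s in draft_plan(question, effort)]
    checked = [r for r in results if verify(r)]
    return question + "\n- " + "\n- ".join(checked)


if __name__ == "__main__":
    stub_tools = {"web_search": lambda q: f"search notes for: {q}"}
    print(answer("Compare two grant programs", "Balanced", stub_tools))
```

GovAI runs this kind of loop internally; the sketch only mirrors the four steps listed above.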

How it differs from general models (e.g., GPT‑4‑class):

  • General models respond “top‑of‑mind” and are fast; great for simpler drafting and quick Q&A.

  • Reasoning models allocate time/effort, branch into subtasks, and revise mid‑plan; better for depth, accuracy, and multi‑document tasks.


Where do I select GPT‑5 and the “reasoning effort”?

  • In the composer, choose GPT‑5 (Reasoning) from the model switcher.

  • Use the Reasoning Effort selector to choose: Automatic, Minimal, Quick, Balanced, or Pro.

Tip: Start with Automatic or Quick. If the task is high‑stakes or multi‑document, try Balanced. Reserve Pro for exceptionally complex work, since it has the longest run times.


What do the effort modes do?

Mode | What it does | Tools available | When to use
--- | --- | --- | ---
Automatic | GovAI chooses an effort level for you, nudging toward higher quality. | Full toolset as needed | General use; lets the system trade time for quality when it matters.
Minimal | Little planning; no file reading, image gen, or web search. | Limited (no web/file tools) | Quick recall, grammar fixes, short rewrites, simple facts.
Quick | Light planning + tools when needed. | Web/file/code tools available | Most day‑to‑day tasks; concise analysis; short multi‑step asks.
Balanced | Deeper planning, broader tool use and verification. | Full toolset | Multi‑document review, nuanced analysis, policy/compliance research.
Pro | Maximum planning and verification; longest run times. | Full toolset | Only for unusually complex, high‑risk, or open‑ended research tasks.


How should I choose the effort level? (Practical heuristics)

  • Use Minimal for:

    • Grammar/polish on emails and memos

    • Straightforward fact recall or simple summaries

  • Use Quick for:

    • Nuanced recall with some interpretation

    • Short policy or by‑law summaries

    • Drafting responses and document comparisons

  • Use Balanced for:

    • Multi‑file analysis (consultant reports, spreadsheets, RFPs)

    • Compliance checks, cross‑referencing regulations

    • Data cleaning, trend analysis, and evidence‑based recommendations

  • Consider Pro for:

    • Complex scenario planning, long‑form research, or highly sensitive decisions when maximum diligence is warranted

Tip: Run the same prompt in Quick and Balanced to compare quality against latency, then choose the lightest mode that meets your bar.
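
These heuristics amount to a small decision rule. The sketch below is a hypothetical helper, not a GovAI feature; it simply encodes the rules of thumb above for picking a starting mode (the returned strings match the names in the effort selector).

```python
# Hypothetical helper encoding the heuristics above; illustrative only.
def suggest_effort(num_documents: int, needs_analysis: bool, high_stakes: bool) -> str:
    if high_stakes and (needs_analysis or num_documents > 1):
        return "Pro"        # maximum diligence for complex or sensitive work
    if num_documents > 1 or needs_analysis:
        return "Balanced"   # multi-file review, compliance checks, data work
    if num_documents == 1:
        return "Quick"      # short summaries, comparisons, nuanced recall
    return "Minimal"        # grammar fixes, simple facts, quick rewrites


print(suggest_effort(num_documents=3, needs_analysis=True, high_stakes=False))  # Balanced
```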


What’s visible while it “thinks”?

  • GovAI shows the model’s visible reasoning steps (tool calls, searches, file operations, verification checks) to build trust and help you intervene if it’s missing something.

  • You can steer the process with follow‑ups (“Also consider X”, “Check source Y”).


Prompt template (simple and effective)

Use three parts: Role, Task, Work Plan.

  • Role (optional; most useful when the task is outside your own expertise):

    • “You are a municipal compliance analyst specializing in procurement.”

  • Task (your ask):

    • “Compare the two attached grant programs and recommend best fit for a town of ~100,000 residents.”

  • Work Plan (scaffolding; the steps/outcomes you want):

    • “1) Summarize each in <300 words. 2) Identify eligibility and scoring criteria. 3) List 5 risks/opportunities per grant. 4) Provide a comparison table and a recommendation with rationale.”

These models fill gaps well, but explicit outcomes (tables, word limits, next steps) produce sharper results.
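
As a worked example, the sketch below assembles the three parts above into a single prompt you could paste into the composer. The assembly is illustrative; GovAI does not require any particular format.

```python
# Illustrative only: one way to combine Role, Task, and Work Plan into one prompt.
role = "You are a municipal compliance analyst specializing in procurement."
task = ("Compare the two attached grant programs and recommend the best fit "
        "for a town of ~100,000 residents.")
work_plan = (
    "1) Summarize each in <300 words. "
    "2) Identify eligibility and scoring criteria. "
    "3) List 5 risks/opportunities per grant. "
    "4) Provide a comparison table and a recommendation with rationale."
)

prompt = f"{role}\n\n{task}\n\nWork plan:\n{work_plan}"
print(prompt)
```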


Data handling and safety

  • Designed for public‑sector use: your conversation data is not used to train models.

  • Content stays within GovAI; tool use (web, files) is scoped to your actions and permissions.


Performance, cost, and limits

  • More reasoning effort = more latency and compute cost. Choose the lightest mode that meets your needs.


Common FAQs

  • Can GPT‑5 automatically switch to GPT‑4?

    • No. Models are independent. You choose GPT‑5 (Reasoning) or a general model explicitly.

  • Where do I set reasoning effort?

    • In the composer after selecting GPT‑5, use the Reasoning Effort dropdown (Automatic/Minimal/Quick/Balanced/Pro).

  • Will it always search the web?

    • No. It plans first. Depending on the mode and your prompt, it may decide web search or file reading is needed. You can force or forbid tools in your prompt.

  • Can it handle multi‑file tasks?

    • Yes. Reasoning models are built for multi‑document ingestion, cross‑reference, and verification. Prefer Balanced for depth.

  • API access?

    • Available; contact your GovAI admin for credentials and guidance.

  • Does “more thinking” always mean “better”?

    • Not necessarily. Sometimes the model over‑analyzes beyond your need. Start with Quick; escalate only when value justifies the time.
