Jay Dixit

Guide to Codex CLI

Overview

Codex is OpenAI’s interactive, terminal-based coding assistant. It runs in a terminal UI, reads your codebase, makes edits, and runs commands. It is similar to Claude Code, but from OpenAI.

Basic Usage

Running with an Input Prompt

You can run Codex directly with a prompt:

codex "explain this codebase"

Image Inputs

Paste images directly into the composer or attach via CLI:

codex -i screenshot.png "Explain this error"
codex --image img1.png,img2.jpg "Summarize these diagrams"

Non-Interactive Mode

Run Codex non-interactively with the exec command:

codex exec "fix the CI failure"
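
Because exec runs to completion without the interactive UI, it composes well with shell pipelines and the other CLI flags in this guide. A sketch, assuming codex exec accepts the same global flags shown elsewhere in this guide (--model, --sandbox) and that its output can be redirected like any other command:

codex exec --model gpt-5-codex --sandbox workspace-write "fix the CI failure"
codex exec "summarize the changes in this branch" > branch-summary.md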

Configuration

Configuration File Location

  • Location: ~/.codex/config.toml
  • Shared between CLI and IDE extension

Accessing Configuration

From IDE Extension

  1. Click gear icon (top right)
  2. Codex Settings → Open config.toml

High-Level Configuration Options

Default Model

Via config.toml:

model = "gpt-5"

Via CLI:

codex --model gpt-5

Model Provider

Select backend provider (must be defined in config first):

Via config.toml:

model_provider = "ollama"

Via CLI:

codex --config model_provider="ollama"

Approval Prompts

Control when Codex pauses before running commands:

Via config.toml:

approval_policy = "on-request"

Via CLI:

codex --approval-policy on-request

Sandbox Level

Adjust filesystem and network access:

Via config.toml:

sandbox_mode = "workspace-write"

Via CLI:

codex --sandbox workspace-write

Reasoning Depth

Tune reasoning effort (when supported):

Via config.toml:

model_reasoning_effort = "high"

Via CLI:

codex --config model_reasoning_effort="high"

Command Environment

Restrict/expand environment variables for spawned commands:

Via config.toml:

[shell_environment_policy]
include_only = ["PATH", "HOME"]

Via CLI:

codex --config shell_environment_policy.include_only='["PATH","HOME"]'

Profiles

Switch between different configurations:

  1. Define profiles in config.toml:
     [profiles.my-profile]
     # ... profile-specific settings
  2. Launch with the profile:
     codex --profile my-profile

Note: Profiles currently apply to CLI only.

IDE Extension Personalization

Settings

Click gear icon → IDE settings

Keyboard Shortcuts

Click gear icon → Keyboard shortcuts

For the complete configuration reference, see the Codex config documentation.

Models & Reasoning

  • GPT-5-Codex (optimized for agentic coding)
  • Default: GPT-5
  • Switch with /model command

Reasoning Levels

  • Default: Medium
  • Upgrade to High for complex tasks (via /model command)

Using Specific Models

Launch with specific model via flag:

codex --model gpt-5-codex

See the OpenAI models page for details.

Approval Modes

Auto

  • Read files automatically
  • Make edits automatically
  • Run commands in working directory automatically
  • Requires approval for: outside working directory, network access

Read Only

  • Chat and plan without actions
  • Switch with /approvals command
  • Use when you want to explore before diving in

Full Access

  • No approvals needed for any actions
  • Includes network access
  • Exercise caution before enabling

Key Differences from Claude Code

| Feature | Codex CLI | Claude Code |
| --- | --- | --- |
| Provider | OpenAI | Anthropic |
| Default Model | GPT-5-Codex | Claude Sonnet 4.5 |
| Approval Modes | Auto / Read Only / Full Access | Multiple modes |
| Image Support | CLI flags + paste | Direct paste |
| Non-interactive | codex exec | Task tool |

Resources

Advanced Configuration Reference

Codex supports several mechanisms for setting config values:

  • Config-specific command-line flags, such as --model o3 (highest precedence).
  • A generic -c/--config flag that takes a key=value pair, such as --config model="o3".
  • The key can contain dots to set a value deeper than the root, e.g. --config model_providers.openai.wire_api="chat".
  • For consistency with config.toml, values are a string in TOML format rather than JSON format, so use key='{a = 1, b = 2}' rather than key='{"a": 1, "b": 2}'.
    • The quotes around the value are necessary, as without them your shell would split the config argument on spaces, resulting in codex receiving -c key={a with (invalid) additional arguments =, 1,, b, =, 2}.
  • Values can contain any TOML object, such as --config shell_environment_policy.include_only='["PATH", "HOME", "USER"]'.
  • If value cannot be parsed as a valid TOML value, it is treated as a string value. This means that -c model='"o3"' and -c model=o3 are equivalent.
    • In the first case, the value is the TOML string "o3", while in the second the value is o3, which is not valid TOML and therefore treated as the TOML string "o3".
    • Because quotes are interpreted by one’s shell, -c key="true" will be correctly interpreted in TOML as key = true (a boolean) and not key = "true" (a string). If for some reason you needed the string "true", you would need to use -c key='"true"' (note the two sets of quotes). Worked examples follow this list.
  • The $CODEX_HOME/config.toml configuration file where the CODEX_HOME environment value defaults to ~/.codex. (Note CODEX_HOME will also be where logs and other Codex-related information are stored.)
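
To make the quoting rules above concrete, here are the same cases written as shell commands (key is the same placeholder used in the examples above; these are illustrative only):

codex -c model='"o3"'          # TOML string "o3"
codex -c model=o3              # not valid TOML on its own, so also treated as the string "o3"
codex -c key="true"            # the shell strips the quotes, so TOML sees the boolean true
codex -c key='"true"'          # TOML string "true"
codex -c shell_environment_policy.include_only='["PATH", "HOME", "USER"]'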

Both the --config flag and the config.toml file support the following options:

model

The model that Codex should use.

model = "o3"  # overrides the default of "gpt-5-codex"

model_providers

This option lets you override and amend the default set of model providers bundled with Codex. This value is a map where the key is the value to use with model_provider to select the corresponding provider.

For example, if you wanted to add a provider that uses the OpenAI 4o model via the chat completions API, then you could add the following configuration:


model = "gpt-4o"
model_provider_ = "openai-chat-completions"

[model_providers.openai_-chat-completions]

name = "OpenAI using Chat Completions"

base_url_ = "<LinkPeek href="https://api.openai.com/v1"></LinkPeek>"

env_key_ = "OPENAI_API__KEY_"

wire_api_ = "chat"

query_params_ = {}

Note this makes it possible to use Codex CLI with non-OpenAI models, so long as they use a wire API that is compatible with the OpenAI chat completions API. For example, you could define the following provider to use Codex CLI with Ollama running locally:

[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"

Or a third-party provider (using a distinct environment variable for the API key):

[model_providers.mistral]
name = "Mistral"
base_url = "https://api.mistral.ai/v1"
env_key = "MISTRAL_API_KEY"

It is also possible to configure a provider to include extra HTTP headers with a request. These can be hardcoded values (http_headers) or values read from environment variables (env_http_headers):

[model_providers.example]
http_headers = { "X-Example-Header" = "example-value" }
env_http_headers = { "X-Example-Features" = "EXAMPLE_FEATURES" }

Azure model provider example

Note that Azure requires api-version to be passed as a query parameter, so be sure to specify it as part of query_params when defining the Azure provider:

[model_providers.azure]
name = "Azure"
base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY"  # Or "OPENAI_API_KEY", whichever you use.
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"

Export your key before launching Codex: export AZURE_OPENAI_API_KEY=…

Per-provider network tuning

The following optional settings control retry behaviour and streaming idle timeouts per model provider. They must be specified inside the corresponding [model_providers.<id>] block in config.toml. (Older releases accepted top-level keys; those are now ignored.)

Example:

[model_providers.openai]
name = "OpenAI"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"

request_max_retries = 4            # retry failed HTTP requests
stream_max_retries = 10            # retry dropped SSE streams
stream_idle_timeout_ms = 300000    # 5m idle timeout

request_max_retries

How many times Codex will retry a failed HTTP request to the model provider. Defaults to 4.

stream_max_retries

Number of times Codex will attempt to reconnect when a streaming response is interrupted. Defaults to 5.

stream_idle_timeout_ms

How long Codex will wait for activity on a streaming response before treating the connection as lost. Defaults to 300_000 (5 minutes).

model_provider

Identifies which provider to use from the model_providers map. Defaults to "openai". You can override the base_url for the built-in openai provider via the OPENAI_BASE_URL environment variable.

Note that if you override model_provider, then you likely want to override model, as well. For example, if you are running ollama with Mistral locally, then you would need to add the following to your config in addition to the new entry in the model_providers map:

model_provider = "ollama"
model = "mistral"

approval_policy

Determines when the user should be prompted to approve a command before Codex executes it. The strictest setting is "untrusted", which only auto-runs a small set of known-safe commands and asks about everything else:

approval_policy = "untrusted"

If you only want to be prompted when a command fails (so Codex can ask to rerun it with escalated permissions), use "on-failure":

approval_policy = "on-failure"

If you want the model to run until it decides that it needs to ask you for escalated permissions, use “on-request”:


approval_policy_ = "on-request"

Alternatively, you can have the model run until it is done, and never ask to run a command with escalated permissions:


approval_policy_ = "never"

profiles

A profile is a collection of configuration values that can be set together. Multiple profiles can be defined in config.toml and you can specify the one you want to use at runtime via the --profile flag.

Here is an example of a config.toml that defines multiple profiles:

model = "o3"
approval_policy_ = "untrusted"

profile = "o3"

[model_providers.openai_-chat-completions]
name = "OpenAI using Chat Completions"
base_url_ = "<LinkPeek href="https://api.openai.com/v1"></LinkPeek>"
env_key_ = "OPENAI_API__KEY_"
wire_api_ = "chat"

[profiles.o3]
model = "o3"
model_provider_ = "openai"
approval_policy_ = "never"
model_reasoning__effort_ = "high"
model_reasoning__summary_ = "detailed"

[profiles.gpt3]
model = "gpt-3.5-turbo"
model_provider_ = "openai-chat-completions"

[profiles.zdr]
model = "o3"
model_provider_ = "openai"
approval_policy_ = "on-failure"

Users can specify config values at multiple levels. Order of precedence is as follows:

  1. custom command-line argument, e.g., --model o3
  2. as part of a profile, where the --profile is specified via the CLI (or in the config file itself)
  3. as an entry in config.toml, e.g., model = "o3"
  4. the default value that comes with Codex CLI (i.e., Codex CLI defaults to gpt-5-codex); see the example below
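
A sketch of how these levels interact, using a hypothetical o3-high profile:

# config.toml
model = "gpt-5"                              # level 3: config file entry

[profiles.o3-high]
model = "o3"                                 # level 2: used when --profile o3-high is passed

# shell
codex                                        # gpt-5 wins over the built-in default (level 3 over 4)
codex --profile o3-high                      # o3 wins over the config entry (level 2 over 3)
codex --profile o3-high --model gpt-5-codex  # the CLI flag wins over the profile (level 1 over 2)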

model_reasoning_effort

If the selected model is known to support reasoning (for example: o3, o4-mini, codex-*, gpt-5, gpt-5-codex), reasoning is enabled by default when using the Responses API. As explained in the OpenAI Platform documentation, this can be set to:

  • "minimal"
  • "low"
  • "medium" (default)
  • "high"

Note: to minimize reasoning, choose "minimal".

model_reasoning_summary

If the model name starts with "o" (as in "o3" or "o4-mini") or "codex", reasoning is enabled by default when using the Responses API. As explained in the OpenAI Platform documentation, this can be set to:

  • "auto" (default)
  • "concise"
  • "detailed"

To disable reasoning summaries, set model_reasoning_summary to "none" in your config:

model_reasoning_summary = "none"  # disable reasoning summaries

model_verbosity

Controls output length/detail on GPT‑5 family models when using the Responses API. Supported values:

  • "low"
  • "medium" (default when omitted)
  • "high"

When set, Codex includes a text object in the request payload with the configured verbosity, for example: "text": { "verbosity": "low" }.

Example:

model = "gpt-5"
model_verbosity = "low"

Note: This applies only to providers using the Responses API. Chat Completions providers are unaffected.

model_supports_reasoning_summaries

By default, reasoning summaries are only requested for OpenAI models that are known to support them. To force this behavior for the current model, set the following in config.toml:

model_supports_reasoning_summaries = true

sandbox_mode

Codex executes model-generated shell commands inside an OS-level sandbox.

In most cases you can pick the desired behaviour with a single option:


sandbox_mode_ = "read-only"

The default policy is read-only, which means commands can read any file on disk, but attempts to write a file or access the network will be blocked.

A more relaxed policy is workspace-write. When specified, the current working directory for the Codex task will be writable (as well as $TMPDIR on macOS). Note that the CLI defaults to using the directory where it was spawned as cwd, though this can be overridden using --cwd/-C.

On macOS (and soon Linux), all writable roots (including cwd) that contain a .git/ folder as an immediate child will configure the .git/ folder to be read-only while the rest of the Git repository will be writable. This means that commands like git commit will fail, by default (as it entails writing to .git/), and will require Codex to ask for permission.


sandbox_mode_ = "workspace-write"

[sandbox_workspace__write_]

exclude_tmpdir__env__var_ = false
exclude_slash__tmp_ = false

writable_roots_ = ["*Users/YOU*.pyenv/shims"]

network_access_ = false

To disable sandboxing altogether, specify danger-full-access like so:


sandbox_mode_ = "danger-full-access"

This is reasonable to use if Codex is running in an environment that provides its own sandboxing (such as a Docker container) such that further sandboxing is unnecessary.

Using this option may also be necessary if you run Codex in environments where its native sandboxing mechanisms are unsupported, such as older Linux kernels or Windows.
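
If the surrounding container already provides isolation, the equivalent one-off invocation from the command line would be a sketch like this, reusing the --sandbox flag shown earlier in this guide:

codex --sandbox danger-full-access "run the integration tests"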

Approval presets

Codex provides three main Approval Presets:

  • Read Only: Codex can read files and answer questions; edits, running commands, and network access require approval.
  • Auto: Codex can read files, make edits, and run commands in the workspace without approval; asks for approval outside the workspace or for network access.
  • Full Access: Full disk and network access without prompts; extremely risky.

You can further customize how Codex runs at the command line using the --ask-for-approval and --sandbox options.
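
For example, a session that roughly approximates the Auto preset could be started like this; this is a sketch that assumes --ask-for-approval accepts the same values as approval_policy:

codex --ask-for-approval on-request --sandbox workspace-write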

Connecting to MCP servers

You can configure Codex to use MCP servers to give it access to external applications, resources, or services.

Server configuration

STDIO

STDIO servers are MCP servers that you launch directly via commands on your computer.


[mcp_servers.server_name]
command = "npx"
args = ["-y", "mcp-server"]
env = { "API_KEY" = "value" }

# Alternatively, environment variables can go in their own table:
[mcp_servers.server_name.env]
API_KEY = "value"

Streamable HTTP

Streamable HTTP servers enable Codex to talk to resources accessed via an HTTP URL (either on localhost or another domain).


experimental_use_rmcp_client = true

[mcp_servers.figma]
url = "https://mcp.linear.app/mcp"
bearer_token = "<token>"

For OAuth login, you must enable experimental_use_rmcp_client = true and then run codex mcp login server_name.

Other configuration options


[mcp_servers.server_name]
# ...
startup_timeout_sec = 20   # default: 10; covers server startup and the initial tool listing
tool_timeout_sec = 30      # default: 60; per-tool-call timeout

Experimental RMCP client

Codex is transitioning to the official Rust MCP SDK.

The flag enables OAuth support for streamable HTTP servers and uses a new STDIO client implementation.

Please try it out and report issues with the new client. To enable it, add this to the top level of your config.toml:

experimental_use_rmcp_client = true

[mcp_servers.server_name]
…

MCP CLI commands


# Show help for the MCP subcommands
codex mcp --help

# Add a STDIO server named "docs" that is launched with `docs-server --port 4000`
codex mcp add docs -- docs-server --port 4000

# List configured servers (optionally as JSON)
codex mcp list
codex mcp list --json

# Show a single server's configuration (optionally as JSON)
codex mcp get docs
codex mcp get docs --json

# Remove a server
codex mcp remove docs

# OAuth login/logout for a streamable HTTP server (requires the experimental RMCP client)
codex mcp login SERVER_NAME
codex mcp logout SERVER_NAME

Examples of useful MCPs

There is an ever-growing list of useful MCP servers that can be helpful while you are working with Codex.

Some of the most common MCPs we’ve seen are:

  • Context7 — connect to a wide range of up-to-date developer documentation
  • Figma (Local and Remote) — access to your Figma designs
  • Playwright — control and inspect a browser using Playwright
  • Chrome Developer Tools — control and inspect a Chrome browser
  • Sentry — access to your Sentry logs
  • GitHub — control over your GitHub account beyond what git allows (like controlling PRs, issues, etc.)

shell_environment_policy

Codex spawns subprocesses (e.g. when executing a local_shell tool-call suggested by the assistant). By default it now passes your full environment to those subprocesses. You can tune this behavior via the shell_environment_policy block in config.toml:

[shell_environment_policy]
inherit = "core"                  # start from a minimal core set (HOME, PATH, USER, …); default is "all"
ignore_default_excludes = false   # when false, names containing KEY/SECRET/TOKEN are dropped first
exclude = ["AWS_*", "AZURE_*"]    # extra case-insensitive glob patterns to drop
set = { CI = "1" }                # explicit overrides/additions; always win
include_only = ["PATH", "HOME"]   # if non-empty, only matching variables survive

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| inherit | string | all | Starting template for the environment: all (clone full parent env), core (HOME, PATH, USER, …), or none (start empty). |
| ignore_default_excludes | boolean | false | When false, Codex removes any var whose name contains KEY, SECRET, or TOKEN (case-insensitive) before other rules run. |
| exclude | array<string> | [] | Case-insensitive glob patterns to drop after the default filter. Examples: "AWS_*", "AZURE_*". |
| set | table<string,string> | {} | Explicit key/value overrides or additions; always win over inherited values. |
| include_only | array<string> | [] | If non-empty, a whitelist of patterns; only variables that match one pattern survive the final step. (Generally used with inherit = "all".) |

The patterns are glob style, not full regular expressions: * matches any number of characters, ? matches exactly one, and character classes like [A-Z]/[0-9] are supported. Matching is always case-insensitive. This syntax is documented in code as EnvironmentVariablePattern (see core/src/config_types.rs).

If you just need a clean slate with a few custom entries you can write:

[shell_environment_policy]
inherit = "none"
set = { PATH = "/usr/bin", MY_FLAG = "1" }

Currently, CODEX_SANDBOX_NETWORK_DISABLED=1 is also added to the environment, assuming network is disabled. This is not configurable.

otel

Codex can emit OpenTelemetry log events that describe each run: outbound API requests, streamed responses, user input, tool-approval decisions, and the result of every tool invocation. Export is disabled by default so local runs remain self-contained. Opt in by adding an [otel] table and choosing an exporter.

[otel]
environment = "staging"   # defaults to "dev"
exporter = "none"          # defaults to "none"; set to otlp-http or otlp-grpc to send events
log_user_prompt = false    # defaults to false; redact prompt text unless explicitly enabled

Codex tags every exported event with service.name = $ORIGINATOR (the same value sent in the originator header, codex_cli_rs by default), the CLI version, and an env attribute so downstream collectors can distinguish dev/staging/prod traffic. Only telemetry produced inside the codex_otel crate—the events listed below—is forwarded to the exporter.

Event catalog

Every event shares a common set of metadata fields: event.timestamp, conversation.id, app.version, auth_mode (when available), user.account_id (when available), terminal.type, model, and slug.

With OTEL enabled Codex emits the following event types (in addition to the metadata above):

  • codex.conversation_starts
      • provider_name
      • reasoning_effort (optional)
      • reasoning_summary
      • context_window (optional)
      • max_output_tokens (optional)
      • auto_compact_token_limit (optional)
      • approval_policy
      • sandbox_policy
      • mcp_servers (comma-separated list)
      • active_profile (optional)
  • codex.api_request
      • attempt
      • duration_ms
      • http.response.status_code (optional)
      • error.message (failures)
  • codex.sse_event
      • event.kind
      • duration_ms
      • error.message (failures)
      • input_token_count (responses only)
      • output_token_count (responses only)
      • cached_token_count (responses only, optional)
      • reasoning_token_count (responses only, optional)
      • tool_token_count (responses only)
  • codex.user_prompt
      • prompt_length
      • prompt (redacted unless log_user_prompt = true)
  • codex.tool_decision
      • tool_name
      • call_id
      • decision (approved, approved_for_session, denied, or abort)
      • source (config or user)
  • codex.tool_result
      • tool_name
      • call_id (optional)
      • arguments (optional)
      • duration_ms (execution time for the tool)
      • success ("true" or "false")
      • output

These event shapes may change as we iterate.

Choosing an exporter

Set otel.exporter to control where events go:

  • none – leaves instrumentation active but skips exporting. This is the default.
  • otlp-http – posts OTLP log records to an OTLP/HTTP collector. Specify the endpoint, protocol, and headers your collector expects:

[otel]
exporter = { otlp-http = {
  endpoint = "https://otel.example.com/v1/logs",
  protocol = "binary",
  headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
}}

  • otlp-grpc – streams OTLP log records over gRPC. Provide the endpoint and any metadata headers:

[otel]
exporter = { otlp-grpc = {
  endpoint = "https://otel.example.com:4317",
  headers = { "x-otlp-meta" = "abc123" }
}}

If the exporter is none nothing is written anywhere; otherwise you must run or point to your own collector. All exporters run on a background batch worker that is flushed on shutdown.

If you build Codex from source the OTEL crate is still behind an otel feature flag; the official prebuilt binaries ship with the feature enabled. When the feature is disabled the telemetry hooks become no-ops so the CLI continues to function without the extra dependencies.

history

By default, Codex CLI records messages sent to the model in $CODEX_HOME/history.jsonl. Note that on UNIX, the file permissions are set to o600, so it should only be readable and writable by the owner.

To disable this behavior, configure [history] as follows:

[history]
persistence = "none"  # "save-all" is the default value

file_opener

Identifies the editor/URI scheme to use for hyperlinking citations in model output. If set, citations to files in the model output will be hyperlinked using the specified URI scheme so they can be ctrl/cmd-clicked from the terminal to open them.

For example, if the model output includes a reference such as 【F:/home/user/project/main.py†L42-L50】, then this would be rewritten to link to the URI vscode://file/home/user/project/main.py:42.

Note this is not a general editor setting (like $EDITOR), as it only accepts a fixed set of values:

  • "vscode" (default)
  • "vscode-insiders"
  • "windsurf"
  • "cursor"
  • "none" to explicitly disable this feature

Currently, "vscode" is the default, though Codex does not verify VS Code is installed. As such, file_opener may default to "none" or something else in the future.
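
For example, to emit links for Cursor instead (one of the accepted values listed above):

file_opener = "cursor"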

model_context_window

The size of the context window for the model, in tokens.

In general, Codex knows the context window for the most common OpenAI models, but if you are using a new model with an old version of the Codex CLI, then you can use model_context_window to tell Codex what value to use to determine how much context is left during a conversation.

model_max_output_tokens

This is analogous to model_context_window, but for the maximum number of output tokens for the model.
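
Both keys take a plain token count. A sketch with made-up numbers, for a hypothetical model with a 200k-token context window and a 64k output cap:

model_context_window = 200000
model_max_output_tokens = 64000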

project_doc_max_bytes

Maximum number of bytes to read from an AGENTS.md file to include in the instructions sent with the first turn of a session. Defaults to 32 KiB.
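
For example, to double the default 32 KiB limit:

project_doc_max_bytes = 65536  # 64 KiB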

project_doc_fallback_filenames

Ordered list of additional filenames to look for when AGENTS.md is missing at a given directory level. The CLI always checks AGENTS.md first; the configured fallbacks are tried in the order provided. This lets monorepos that already use alternate instruction files (for example, CLAUDE.md) work out of the box while you migrate to AGENTS.md over time.

project_doc_fallback_filenames = ["CLAUDE.md", ".exampleagentrules.md"]

We recommend migrating instructions to AGENTS.md; other filenames may reduce model performance.

tui

Options that are specific to the TUI.

[tui]

# Enable desktop notifications (default: false)
notifications = true

# Or limit notifications to specific event types
notifications = [ "agent-turn-complete", "approval-requested" ]

[!NOTE] Codex emits desktop notifications using terminal escape codes. Not all terminals support these (notably, macOS Terminal.app and VS Code’s terminal do not; iTerm2, Ghostty, and WezTerm do).

[!NOTE] tui.notifications is built-in and limited to the TUI session. For programmatic or cross-environment notifications, or to integrate with OS-specific notifiers, use the top-level notify option to run an external program that receives event JSON. The two settings are independent and can be used together.
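
A sketch of that top-level notify option: per the config reference below it takes an array of strings (the program plus its arguments), and the notifier script path here is purely hypothetical; the script is assumed to parse the event JSON that Codex passes to it.

notify = ["python3", "/path/to/notify.py"]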

Config reference

| Key | Type / Values | Notes |
| --- | --- | --- |
| model | string | Model to use (e.g., gpt-5-codex). |
| model_provider | string | Provider id from model_providers (default: openai). |
| model_context_window | number | Context window tokens. |
| model_max_output_tokens | number | Max output tokens. |
| approval_policy | untrusted / on-failure / on-request / never | When to prompt for approval. |
| sandbox_mode | read-only / workspace-write / danger-full-access | OS sandbox policy. |
| sandbox_workspace_write.writable_roots | array<string> | Extra writable roots in workspace-write. |
| sandbox_workspace_write.network_access | boolean | Allow network in workspace-write (default: false). |
| sandbox_workspace_write.exclude_tmpdir_env_var | boolean | Exclude $TMPDIR from writable roots (default: false). |
| sandbox_workspace_write.exclude_slash_tmp | boolean | Exclude /tmp from writable roots (default: false). |
| disable_response_storage | boolean | Required for ZDR orgs. |
| notify | array<string> | External program for notifications. |
| instructions | string | Currently ignored; use experimental_instructions_file or AGENTS.md. |
| mcp_servers.<id>.command | string | MCP server launcher command. |
| mcp_servers.<id>.args | array<string> | MCP server args. |
| mcp_servers.<id>.env | map<string,string> | MCP server env vars. |
| mcp_servers.<id>.startup_timeout_sec | number | Startup timeout in seconds (default: 10). Timeout is applied both for initializing the MCP server and initially listing tools. |
| mcp_servers.<id>.tool_timeout_sec | number | Per-tool timeout in seconds (default: 60). Accepts fractional values; omit to use the default. |
| model_providers.<id>.name | string | Display name. |
| model_providers.<id>.base_url | string | API base URL. |
| model_providers.<id>.env_key | string | Env var for API key. |
| model_providers.<id>.wire_api | chat / responses | Protocol used (default: chat). |
| model_providers.<id>.query_params | map<string,string> | Extra query params (e.g., Azure api-version). |
| model_providers.<id>.http_headers | map<string,string> | Additional static headers. |
| model_providers.<id>.env_http_headers | map<string,string> | Headers sourced from env vars. |
| model_providers.<id>.request_max_retries | number | Per-provider HTTP retry count (default: 4). |
| model_providers.<id>.stream_max_retries | number | SSE stream retry count (default: 5). |
| model_providers.<id>.stream_idle_timeout_ms | number | SSE idle timeout (ms) (default: 300000). |
| project_doc_max_bytes | number | Max bytes to read from AGENTS.md. |
| profile | string | Active profile name. |
| profiles.<name>.* | various | Profile-scoped overrides of the same keys. |
| history.persistence | save-all / none | History file persistence (default: save-all). |
| history.max_bytes | number | Currently ignored (not enforced). |
| file_opener | vscode / vscode-insiders / windsurf / cursor / none | URI scheme for clickable citations (default: vscode). |
| tui | table | TUI-specific options. |
| tui.notifications | boolean / array<string> | Enable desktop notifications in the TUI (default: false). |
| hide_agent_reasoning | boolean | Hide model reasoning events. |
| show_raw_agent_reasoning | boolean | Show raw reasoning (when available). |
| model_reasoning_effort | minimal / low / medium / high | Responses API reasoning effort. |
| model_reasoning_summary | auto / concise / detailed / none | Reasoning summaries. |
| model_verbosity | low / medium / high | GPT-5 text verbosity (Responses API). |
| model_supports_reasoning_summaries | boolean | Force-enable reasoning summaries. |
| model_reasoning_summary_format | none / experimental | Force reasoning summary format. |
| chatgpt_base_url | string | Base URL for ChatGPT auth flow. |
| experimental_resume | string (path) | Resume JSONL path (internal/experimental). |
| experimental_instructions_file | string (path) | Replace built-in instructions (experimental). |
| experimental_use_exec_command_tool | boolean | Use experimental exec command tool. |
| responses_originator_header_internal_override | string | Override originator header value. |
| projects.<path>.trust_level | string | Mark project/worktree as trusted (only "trusted" is recognized). |
| tools.web_search | boolean | Enable web search tool (alias: web_search_request) (default: false). |