# Code Mode
Code Mode (`pg_execute_code`) provides access to all 232 tools through a single JavaScript sandbox, eliminating the massive token overhead of multi-step tool calls. It dramatically reduces token usage (up to 90%) and is included by default in all presets.

Instead of spending thousands of tokens on back-and-forth tool calls, an agent completes the whole task in a single execution. Code executes in a sandboxed VM context with multiple layers of security. All `pg.*` API calls execute against the database from within the sandbox, providing:
- **Static code validation** — blocked patterns include `require()`, `process`, `eval()`, and filesystem access
- **Rate limiting** — 60 executions per minute per client
- **Hard timeouts** — configurable execution limit (default 30s)
- **Full API access** — all 22 tool groups are available via `pg.*` (e.g., `pg.core.readQuery()`, `pg.jsonb.extract()`, `pg.introspection.dependencyGraph()`)
- **Requires `admin` OAuth scope** — execution is logged for audit
Prefer Code Mode when a task requires:
- 3+ sequential tool calls — batch them into one execution
- Conditional branching — make decisions based on query results
- Data transformation — process and aggregate across multiple tables
- Complex orchestrations — loops, error handling, and multi-step workflows
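The conditional-branching case above can be sketched as a single execution that queries, decides, and acts. The table name, threshold, and `pg` stub here are hypothetical (the stub makes the sketch self-contained; inside Code Mode the sandbox provides the real `pg`):

```javascript
// Sketch: one execution that queries, branches on the result, and acts.
// `pg` is a stub with canned results here; Code Mode provides the real one.
const pg = {
  query: async (sql) => ({ rows: [{ dead: 5000 }] }), // canned result
  execute: async (sql) => ({ ok: true }),
};

async function maintainTable(table) {
  // pg_stat_user_tables and n_dead_tup are real PostgreSQL names,
  // but the 1000-row threshold is an illustrative choice.
  const res = await pg.query(
    `SELECT n_dead_tup AS dead FROM pg_stat_user_tables WHERE relname = '${table}'`
  );
  const dead = res.rows[0]?.dead ?? 0;
  if (dead > 1000) {
    // Conditional branch: only vacuum when bloat crosses the threshold.
    await pg.execute(`VACUUM ANALYZE ${table}`);
    return { table, action: "vacuumed", dead };
  }
  return { table, action: "skipped", dead };
}
```

With individual tool calls, the query, the decision, and the vacuum would each be a separate round trip; here they cost one execution.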
Code Mode is included in all tool filtering presets by default. To use it, call `pg_execute_code` with your JavaScript code:

```javascript
// Example: Get table stats with row counts
const tables = await pg.query(`
  SELECT tablename FROM pg_tables
  WHERE schemaname = 'public'
`);

const results = [];
for (const t of tables.rows) {
  const count = await pg.query(`SELECT COUNT(*) as cnt FROM ${t.tablename}`);
  results.push({
    table: t.tablename,
    rows: count.rows[0].cnt,
  });
}
return results;
```

Inside Code Mode, every tool is accessible through the `pg` API using the pattern `pg.{group}.{method}()`.
| Group | Methods | Example |
|---|---|---|
| `pg.core` | `readQuery`, `writeQuery`, `listTables`, `describeTable`, `createTable`, `createIndex`, etc. | `pg.core.readQuery({sql: "SELECT..."})` |
| `pg.jsonb` | `extract`, `set`, `insert`, `delete`, `contains`, `pathQuery`, `agg`, etc. | `pg.jsonb.extract({table, column, path})` |
| `pg.text` | `search`, `fuzzy`, `headline`, `rank`, etc. | `pg.text.search({table, columns, query})` |
| `pg.transactions` | `begin`, `commit`, `rollback`, `savepoint`, `release`, `rollbackTo`, `execute`, `status` | `pg.transactions.execute({statements})` |
| `pg.performance` | `explain`, `tableStats`, `indexStats`, `baseline`, etc. | `pg.performance.explain({sql})` |
| `pg.admin` | `vacuum`, `analyze`, `reindex`, `cancelBackend`, etc. | `pg.admin.analyze({table})` |
| `pg.monitoring` | `databaseSize`, `tableSizes`, `connectionStats`, etc. | `pg.monitoring.databaseSize()` |
| `pg.backup` | `dumpTable`, `dumpSchema`, `copyExport`, `copyImport`, etc. | `pg.backup.copyExport({table})` |
| `pg.schema` | `createSchema`, `createView`, `createSequence`, `listViews`, etc. | `pg.schema.listViews()` |
| `pg.introspection` | `dependencyGraph`, `topologicalSort`, `cascadeSimulator`, `schemaSnapshot`, `constraintAnalysis`, `migrationRisks` | `pg.introspection.dependencyGraph({schema})` |
| `pg.migration` | `init`, `record`, `apply`, `rollback`, `history`, `status` | `pg.migration.status()` |
| `pg.stats` | `descriptive`, `percentiles`, `correlation`, `timeSeries`, etc. | `pg.stats.descriptive({table, column})` |
| `pg.partitioning` | `createPartition`, `listPartitions`, etc. | `pg.partitioning.listPartitions({table})` |
| `pg.vector` | `search`, `createIndex`, `embed`, `distance`, etc. | `pg.vector.search({table, column, vector})` |
| `pg.postgis` | `distance`, `buffer`, `pointInPolygon`, etc. | `pg.postgis.distance({table, column, lat, lng})` |
| `pg.cron` | `schedule`, `unschedule`, `listJobs`, etc. | `pg.cron.listJobs()` |
| `pg.partman` | `createParent`, `runMaintenance`, etc. | `pg.partman.showConfig()` |
| `pg.kcache` | `queryStats`, `reset`, `topCpu`, `topIo`, etc. | `pg.kcache.queryStats()` |
| `pg.citext` | `convertColumn`, `listColumns`, `analyzeCandidates`, etc. | `pg.citext.analyzeCandidates()` |
| `pg.ltree` | `query`, `subpath`, `lca`, `match`, etc. | `pg.ltree.query({table, column, path})` |
| `pg.pgcrypto` | `hash`, `encrypt`, `decrypt`, `genRandomUuid`, etc. | `pg.pgcrypto.hash({data, algorithm})` |
Tool names map to API methods by dropping the group prefix:

- `pg_jsonb_extract` → `pg.jsonb.extract()`
- `pg_vector_search` → `pg.vector.search()`
- `pg_text_search` → `pg.text.search()`
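The mapping is mechanical, as this small helper illustrates. The helper is not part of the server; it just restates the drop-the-prefix rule (multi-word suffixes camelCase, matching method names like `pathQuery` in the table above):

```javascript
// Illustrative helper: convert an MCP tool name to its pg.* method path.
// Not part of the server; it restates the documented naming rule.
function toolNameToApiPath(toolName) {
  const [prefix, group, ...rest] = toolName.split("_");
  if (prefix !== "pg" || !group || rest.length === 0) {
    throw new Error(`Unexpected tool name: ${toolName}`);
  }
  // Remaining words become a camelCase method name,
  // e.g. ["path", "query"] -> "pathQuery".
  const method = rest
    .map((w, i) => (i === 0 ? w : w[0].toUpperCase() + w.slice(1)))
    .join("");
  return `pg.${group}.${method}()`;
}
```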
Common core tools are available directly on `pg` for convenience:

```javascript
pg.readQuery("SELECT...");          // pg.core.readQuery()
pg.listTables();                    // pg.core.listTables()
pg.describeTable({ table });        // pg.core.describeTable()
pg.exists("users", "id=1");         // pg.core.exists() — positional args work
pg.createIndex("users", ["email"]);
```

For raw SQL when no typed method exists:
```javascript
// Read queries
const result = await pg.query("SELECT * FROM users WHERE id = $1", [userId]);
return result.rows;

// Write queries
await pg.execute("UPDATE users SET last_login = NOW() WHERE id = $1", [userId]);
return { success: true };
```

Use `pg.help()` to list all available groups and methods:
```javascript
const api = pg.help();          // Returns {group: methods[]} mapping
const coreApi = pg.core.help(); // Group-specific methods and examples
```

Code Mode also normalizes common argument shapes:

- **Schema.Table**: `'public.users'` auto-parses to `{schema: 'public', table: 'users'}`
- **JSONB Paths**: both `'a.b.c'` (string) and `['a','b','c']` (array) work
- **Aliases**: `query`/`sql`, `table`/`tableName`, etc. resolve automatically
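A sketch of what this normalization might look like. The server's actual argument resolver is internal and may differ; this just mirrors the documented behavior:

```javascript
// Illustrative normalizer mirroring the documented conveniences.
// The server's real argument resolver is internal and may differ.
function normalizeArgs(args) {
  const out = { ...args };
  // Alias resolution: query/sql, table/tableName
  if (out.sql && !out.query) out.query = out.sql;
  if (out.tableName && !out.table) out.table = out.tableName;
  // 'public.users' -> { schema: 'public', table: 'users' }
  if (typeof out.table === "string" && out.table.includes(".")) {
    const [schema, table] = out.table.split(".");
    out.schema = out.schema ?? schema;
    out.table = table;
  }
  // JSONB path: accept 'a.b.c' or ['a','b','c'], normalize to array
  if (typeof out.path === "string") out.path = out.path.split(".");
  return out;
}
```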
Code Mode enforces strict security:

- `require()` — no module imports
- `process` — no process access
- `eval()` — no dynamic code execution
- Filesystem access — no file operations
- Network requests — no external HTTP calls
- Requires `admin` OAuth scope
- Rate limited: 60 executions/minute
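A simplified sketch of the kind of static screening described above. The server's real validator is certainly more thorough; these regexes are illustrative only:

```javascript
// Simplified sketch of blocked-pattern screening. The server's real
// validator is more thorough; these regexes are illustrative only.
const BLOCKED = [
  /\brequire\s*\(/,                      // module imports
  /\bprocess\b/,                         // process access
  /\beval\s*\(/,                         // dynamic code execution
  /\b(readFileSync|writeFileSync|fs\.)/, // filesystem access
  /\bfetch\s*\(/,                        // network requests
];

function validateCode(code) {
  const violation = BLOCKED.find((re) => re.test(code));
  return violation
    ? { ok: false, reason: `Blocked pattern: ${violation}` }
    : { ok: true };
}
```

Static screening like this runs before execution; the VM context provides a second layer, since blocked globals are also absent from the sandbox.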
Code Mode replaces multiple sequential tool calls with a single execution:
| Approach | Tool Calls | Typical Tokens |
|---|---|---|
| Individual tools | 5-10+ calls | ~5,000-15,000 |
| Code Mode | 1 call | ~500-1,500 |
> [!TIP]
> **Maximize token savings**: Instruct your AI agent to prefer Code Mode over individual tool calls:
>
> *"When using postgres-mcp, prefer pg_execute_code (Code Mode) for multi-step database operations to minimize token usage."*
>
> For maximum savings, use `--tool-filter codemode` to run with Code Mode as your only tool. See the section below for configuration.
> [!NOTE]
> **AntiGravity users**: Server instructions are automatically sent to MCP clients during initialization. However, AntiGravity does not currently support MCP server instructions. For optimal Code Mode usage in AntiGravity, manually provide the contents of `ServerInstructions.ts` to the agent in your prompt or user rules.
Code Mode (the `codemode` group) is automatically included in all tool filtering presets. You never need to add it manually — it's available in `starter`, `essential`, `dev-schema`, and every other shortcut.

If you control your own setup, you can run with only Code Mode enabled — a single tool that provides access to all 232 tools' worth of capability through the `pg.*` API:
```json
{
  "mcpServers": {
    "postgres-mcp": {
      "command": "node",
      "args": [
        "/path/to/postgres-mcp/dist/cli.js",
        "--transport",
        "stdio",
        "--tool-filter",
        "codemode"
      ],
      "env": {
        "POSTGRES_HOST": "localhost",
        "POSTGRES_PORT": "5432",
        "POSTGRES_USER": "your_user",
        "POSTGRES_PASSWORD": "your_password",
        "POSTGRES_DATABASE": "your_database"
      }
    }
  }
}
```

This exposes just `pg_execute_code`. The agent writes JavaScript against the typed `pg.*` SDK — composing queries, chaining operations across all 22 tool groups, and returning exactly the data it needs — in one execution. This mirrors the Code Mode pattern pioneered by Cloudflare for their entire API: fixed token cost regardless of how many capabilities exist.
If you don't have admin access or prefer individual tool calls:

```json
{
  "args": ["--tool-filter", "starter,-codemode"]
}
```

**Important**: AntiGravity does not currently support automatic MCP server instructions. For optimal Code Mode usage in AntiGravity, manually provide the contents of `src/constants/ServerInstructions.ts` to the agent in your prompt or user rules. This ensures the AI understands the full `pg.{group}.{method}()` API, response structures, and tool-specific gotchas available in Code Mode.
Example — table bloat analysis in a single execution:

```javascript
// Analyze query performance across all tables
const stats = await pg.query(`
  SELECT relname, n_live_tup, n_dead_tup,
         last_vacuum, last_analyze
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC
  LIMIT 10
`);

const analysis = stats.rows.map((row) => ({
  table: row.relname,
  liveRows: row.n_live_tup,
  deadRows: row.n_dead_tup,
  bloatRatio: row.n_dead_tup / (row.n_live_tup + row.n_dead_tup + 1),
  needsVacuum: row.n_dead_tup > 1000,
  lastVacuum: row.last_vacuum,
  lastAnalyze: row.last_analyze,
}));

return {
  summary: `Found ${analysis.filter((a) => a.needsVacuum).length} tables needing vacuum`,
  tables: analysis,
};
```

See also:

- Tool-Filtering — Presets and custom tool filtering
- OAuth-and-Security — Admin scope requirements
- Quick-Start — Installation and first steps