A SQL query complexity estimator helps developers understand the approximate computational cost of a query before running it. While EXPLAIN ANALYZE reports actual execution times and row counts, this estimator gives a quick big-O tier estimate based on table size, indexing, and query structure.
How to Use the SQL Query Complexity Estimator
This tool provides a rough big-O complexity tier for SQL queries based on table size and query structure. It is not a replacement for EXPLAIN ANALYZE — use this for quick architectural decisions before writing the query.
Step 1: Set Table Size
Select the approximate number of rows in your primary (largest) table. Complexity tiers are most meaningful at large scale — a slow O(n²) query might be acceptable at 1K rows but catastrophic at 1M rows.
Step 2: Specify Index Coverage
Whether the WHERE clause column has an index dramatically changes complexity. An indexed lookup on a 1M-row table requires ~20 comparisons (log₂ of 1M ≈ 20). Without an index, every row must be scanned: 1M comparisons. That is a 50,000x difference.
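The arithmetic behind that 50,000x figure can be checked in a few lines of Python (a sketch of the idealized comparison counts; real planners have different constant factors):

```python
import math

# Rough comparison counts for a single-row lookup on an n-row table:
# a B-tree index needs about log2(n) comparisons, a full scan needs n.
def lookup_comparisons(n_rows: int, indexed: bool) -> float:
    return math.log2(n_rows) if indexed else float(n_rows)

n = 1_000_000
indexed = lookup_comparisons(n, indexed=True)   # about 20
scanned = lookup_comparisons(n, indexed=False)  # 1,000,000
print(f"speedup: roughly {scanned / indexed:,.0f}x")  # about 50,000x
```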
Step 3: Count JOINs and Their Index Coverage
Each JOIN multiplies complexity. Joining two 1M-row tables on indexed columns stays around O(n log n); without indexes it approaches O(n²), potentially 1 trillion operations. Always ensure foreign key columns are indexed in both tables.
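You can see where an index on a join column appears in a query plan with a small SQLite sketch (the orders/customers tables here are hypothetical, and SQLite's plan text differs from PostgreSQL's EXPLAIN output, but the principle is the same):

```python
import sqlite3

# Illustrative only: inspect the join plan before and after indexing
# the foreign key column, as the step above recommends.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
""")

query = ("SELECT c.name, o.id FROM customers c "
         "JOIN orders o ON o.customer_id = c.id")

print("-- before indexing the foreign key --")
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(row[3])  # the human-readable plan detail column

conn.execute("CREATE INDEX idx_orders_customer_id ON orders(customer_id)")

print("-- after indexing the foreign key --")
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(row[3])
```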
Step 4: Check for Correlated Subqueries
A correlated subquery like WHERE salary > (SELECT AVG(salary) FROM employees WHERE dept = e.dept) runs the inner query once for every row of the outer query, creating O(n²) complexity. Rewrite it as a CTE or a join against a derived table to eliminate the per-row re-execution.
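That rewrite can be sketched in Python with SQLite, using a small hypothetical employees table; both forms return the same rows, but the CTE version computes each department's average only once:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (name TEXT, dept TEXT, salary REAL);
    INSERT INTO employees VALUES
        ('ann', 'eng', 120), ('bob', 'eng', 80),
        ('cat', 'ops', 90),  ('dan', 'ops', 70);
""")

# Correlated form: the inner SELECT re-runs for every outer row.
correlated = """
    SELECT name FROM employees e
    WHERE salary > (SELECT AVG(salary) FROM employees WHERE dept = e.dept)
"""

# Rewritten: compute each department's average once, then join.
rewritten = """
    WITH dept_avg AS (
        SELECT dept, AVG(salary) AS avg_salary
        FROM employees GROUP BY dept
    )
    SELECT e.name FROM employees e
    JOIN dept_avg d ON d.dept = e.dept
    WHERE e.salary > d.avg_salary
"""

a = sorted(r[0] for r in conn.execute(correlated))
b = sorted(r[0] for r in conn.execute(rewritten))
print(a, a == b)  # ['ann', 'cat'] True
```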
Frequently Asked Questions
Is this SQL complexity estimator free?
Yes, completely free with no account required. All analysis runs in your browser.
What does O(n log n) mean for a SQL query?
O(n log n) complexity means the query's execution time grows proportionally to n × log(n) where n is the number of rows. This is typical for indexed range scans and sorted operations. A table with 1M rows at O(n log n) requires roughly 20M operations — much better than O(n²) which would require 1 trillion operations.
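The figures above follow directly from the formulas; a quick sanity check in Python:

```python
import math

n = 1_000_000
n_log_n = n * math.log2(n)  # indexed range scans and sorts
n_squared = n ** 2          # unindexed nested-loop behavior

print(f"O(n log n): about {n_log_n:.1e} operations")  # roughly 20M
print(f"O(n^2):     about {n_squared:.1e} operations")  # 1 trillion
```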
Why does adding a JOIN increase query complexity?
Each JOIN multiplies the work before filtering. A JOIN between two 100K-row tables may have to examine up to 10 billion row pairs if no index is available. With proper indexes, the join can use nested-loop or hash-join strategies that reduce this dramatically. The estimator uses approximate big-O analysis based on index availability.
Does an index on a JOIN column always help?
Usually yes for nested-loop joins: the inner table lookup becomes O(log n) instead of O(n). However, for hash joins on large result sets, the database may choose a full scan plus hash build, which doesn't benefit from the index. EXPLAIN ANALYZE will show which join strategy the query planner chose.
What is a subquery's impact on query complexity?
Uncorrelated subqueries (independent of the outer query) are executed once and their result reused, so they add only the subquery's own one-time cost. Correlated subqueries (referencing outer query columns) are re-executed for each row of the outer query, potentially adding O(n²) or worse. Always prefer JOINs over correlated subqueries when possible.
How do aggregate functions affect query performance?
Aggregate functions (COUNT, SUM, AVG, MAX, MIN) require scanning all rows in the group or the entire table if ungrouped. With an index on the GROUP BY column, PostgreSQL can use an index scan. Without an index, the database must sort all rows first, adding O(n log n) overhead.
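As a sketch in SQLite (a hypothetical sales table; plan wording is engine-specific), you can compare the plan for a grouped aggregate before and after indexing the GROUP BY column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('east', 10), ('west', 20), ('east', 30);
""")

query = "SELECT region, SUM(amount) FROM sales GROUP BY region"

# Without a suitable index, SQLite typically sorts rows first
# (a temporary B-tree for the GROUP BY).
for _, _, _, detail in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(detail)

# With an index on the GROUP BY column, rows can arrive pre-sorted.
conn.execute("CREATE INDEX idx_sales_region ON sales(region)")
for _, _, _, detail in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(detail)

print(conn.execute(query).fetchall())  # [('east', 40.0), ('west', 20.0)]
```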