2026-04-12 · 3 min read · Tanuj Garg

Postgres as a Search Engine: Why You Probably Don't Need Elasticsearch

Backend & Systems · #Postgres #Backend #Search #Performance


Introduction

I’ve lost count of how many times I’ve seen a seed-stage startup add Elasticsearch to their stack before they even have 10,000 rows of data.

The justification is always the same: "We need powerful search." But what they actually get is a massive increase in operational complexity, a second source of truth to keep in sync, and a noticeably larger AWS bill.

In real systems, simplicity is a feature. Unless you are building the next Google or handling multi-language fuzziness at extreme scale, Postgres is likely the only search engine you need.


Section 1: The Trap of Premature Search Specialization

The biggest mistake engineers make is assuming that because a tool is "specialized," it is better for their specific use case.

Elasticsearch is brilliant, but it comes with a high "mental tax." You now have to handle:

  • Data synchronization (CDC or application-level hooks)
  • Indexing lag (consistency issues between writes and search results)
  • Cluster management and memory tuning

In a startup, your primary goal is velocity. Every minute spent debugging a sync lag between Postgres and Elastic is a minute you aren't shipping features.


Section 2: Postgres Is Already a Search Engine

Postgres has evolved. With tsvector and tsquery, you can perform complex, weighted searches directly on your relational data.

The Technical Edge: GIN Indexes

Generalized Inverted Indexes (GIN) allow Postgres to map words to the rows they appear in. When combined with a tsvector column, searching millions of rows takes milliseconds, not seconds.

It’s not just about simple LIKE queries. You get:

  • Stemming (searching "run" finds "running")
  • Weighting (making titles more important than bodies)
  • Ranking (ordering by relevance)
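
Each of these behaviors is easy to see in isolation. A quick sketch, runnable against any Postgres instance (the literal strings are illustrative, not from a real schema):

-- Stemming: 'running' and 'run' both normalize to the lexeme 'run'
SELECT to_tsvector('english', 'She was running the marathon')
       @@ to_tsquery('english', 'run');   -- returns true

-- Ranking: ts_rank scores how well a weighted vector matches a query
SELECT ts_rank(
  setweight(to_tsvector('english', 'Running shoes'), 'A'),
  to_tsquery('english', 'run')
);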

Section 3: Practical Application: Implementing Search in 5 Minutes

Stop reaching for a new service. Here is how I implement high-performance search in production systems using only Postgres:

  1. The Vector Column: Add a generated tsvector column to your table.
  2. The Index: Create a GIN index on that column.
  3. The Query: Use the @@ operator to match queries against your vector.
-- Adding search capability to a products table
ALTER TABLE products ADD COLUMN search_vector tsvector
GENERATED ALWAYS AS (
  setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
  setweight(to_tsvector('english', coalesce(description, '')), 'B')
) STORED;

CREATE INDEX idx_products_search ON products USING GIN(search_vector);
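
Step 3 is then a single query. A minimal sketch against the products table above (the search terms and the id column are assumptions for illustration):

-- @@ matches the stored vector against the parsed query;
-- ts_rank orders results by relevance
SELECT id, title
FROM products
WHERE search_vector @@ websearch_to_tsquery('english', 'wireless headphones')
ORDER BY ts_rank(search_vector,
                 websearch_to_tsquery('english', 'wireless headphones')) DESC
LIMIT 20;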

This simple setup handles 99% of search requirements for early-to-mid stage companies with zero extra infrastructure.


Section 4: Common Mistakes: Where People Get It Wrong

The most common mistake I see is using pg_trgm (trigrams) for everything. While trigrams are great for fuzzy matching (typos), they are significantly slower than full-text search for large bodies of text.
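
That said, trigrams do have a place: short fields where typo tolerance matters, with their own index. A sketch under those assumptions (table and column names are hypothetical):

-- pg_trgm for fuzzy matching on short fields like titles or SKUs
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_products_title_trgm ON products USING GIN (title gin_trgm_ops);

-- % is the trigram similarity operator; it tolerates the typo 'headphnes'
SELECT title
FROM products
WHERE title % 'headphnes'
ORDER BY similarity(title, 'headphnes') DESC;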

Another mistake is not using weights. An outcome-driven search should always prioritize attributes like titles or tags over raw body text. If your search results feel "off," it's usually because your weights are flat.
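
If your weights need tuning after the fact, ts_rank accepts an optional weight array at query time, with no reindexing required. A sketch (the numbers are illustrative, not a recommendation):

-- The array maps to weight labels in {D, C, B, A} order;
-- raising the last value boosts 'A'-weighted text (titles)
SELECT id, title,
       ts_rank('{0.1, 0.2, 0.4, 1.0}', search_vector,
               to_tsquery('english', 'headphones')) AS rank
FROM products
ORDER BY rank DESC
LIMIT 10;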


Final Thought

Scaling isn't about using the most complex tools; it's about getting the maximum leverage out of the tools you already have. Before you double your infra complexity with a dedicated search cluster, push Postgres to its limits. You’ll be surprised how far it takes you.