News from PostgreSQL Planet
Radim Marek: PostgreSQL MVCC, Byte by byte
You run SELECT * FROM orders in one psql session and see 50 million rows. A colleague in another session runs the same query at the same moment and sees 49,999,999. Neither of you is wrong, and neither is seeing stale data. You are both reading the same 8KB heap pages, the same bytes on disk.
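The row versions behind this behaviour can be inspected directly: every heap tuple carries hidden system columns such as xmin and xmax, which MVCC compares against each session's snapshot to decide visibility. A minimal sketch (the orders table is the hypothetical one from the excerpt):

```sql
-- xmin is the transaction that created this row version,
-- xmax the transaction that deleted or updated it (0 if none).
SELECT xmin, xmax, ctid, *
FROM orders
LIMIT 5;

-- Each session evaluates visibility against its own snapshot;
-- pg_current_snapshot() (PostgreSQL 13+) shows that snapshot.
SELECT pg_current_snapshot();
```

Two sessions reading the same pages can therefore count different row totals simply because their snapshots regard different xmin/xmax ranges as committed.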
Shaun Thomas: Enforcing Constraints Across Postgres Partitions
Postgres table partitioning is one of those features that feels like a superpower right up until it isn't. Just define a partition key, carve up data into manageable chunks, and everything hums along beautifully. And what's not to love? Partition pruning in query plans, smaller tables, faster maintenance, easy archiving of old data; it's a smorgasbord of convenience. Then you try to enforce a unique constraint without including the partition key, and Postgres behaves as if you just asked it to divide by zero. Well... about that.
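The limitation the teaser alludes to: a unique constraint on a partitioned table must include every partition key column, because Postgres enforces uniqueness per partition, not globally. A sketch with a hypothetical table:

```sql
CREATE TABLE orders (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

-- Fails with:
-- ERROR: unique constraint on partitioned table must include
--        all partitioning columns
ALTER TABLE orders ADD CONSTRAINT orders_id_uniq UNIQUE (id);

-- Works: the partition key is part of the constraint, so each
-- partition's local unique index suffices to prove global uniqueness.
ALTER TABLE orders ADD CONSTRAINT orders_id_uniq UNIQUE (id, created_at);
```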
Bruce Momjian: Postgres 19 Release Notes
I have just completed the first draft of the Postgres 19 release notes. It has received little developer community feedback so far and still needs more XML markup and links. This year I have created a wiki page explaining the process I use.
Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 19 – Online enabling and disabling of data checksums
Tudor Golubenco: Introducing Xata OSS: Postgres platform with branching, now Apache 2.0
Ahsan Hadi: pgEdge Vectorizer and RAG Server: Bringing Semantic Search to PostgreSQL (Part 2)
In my previous blog, I walked through setting up the pgEdge MCP Server with a distributed PostgreSQL cluster, and connecting Claude to live database data through natural language. In this blog I want to look at a different problem: how do you build AI-powered search over your own content, without adding a separate vector database to your infrastructure? This is where the pgEdge Vectorizer and RAG Server come in.
Lætitia AVROT: Postgres performance regression: are we there yet?
Andreas Scherbaum: PGConf India 2026 - Review
Gabriele Bartolini: Owning the pipe: physical replication, cloud neutrality, and the escape from DBaaS lock-in
This article examines how managed database services deliberately suppress access to the physical replication stream, turning operational convenience into permanent lock-in. It makes the case for a cloud-neutral stack — PostgreSQL, Kubernetes, and CloudNativePG — as the only architecture that returns full operational sovereignty to the organisation that owns the data.
Ming Ying: ParadeDB is Officially on Railway
David Wheeler: pg_clickhouse 0.2.0
In response to a generous corpus of real-world user feedback, we’ve been hard at work the past week adding a slew of updates to pg_clickhouse, the query interface for ClickHouse from Postgres. As usual, we focused on improving pushdown, especially for various date and time, array, and regular expression functions.
Cornelia Biacsics: Contributions for week 14, 2026
The Toulouse PostgreSQL User Group met on April 7, 2026, organized by:
- Geoffrey Coulaud
- Xavier SIMON
- Jean-Christophe Arnu
Speakers:
Richard Yen: Understanding PostgreSQL Wait Events
One of the most useful debugging tools in modern PostgreSQL is the wait event system. When a query slows down or a database becomes CPU bound, a natural question is: “What are sessions actually waiting on?” Postgres exposes this information through the pg_stat_activity view via two columns:
- wait_event_type
- wait_event

These fields reveal what the backend process is blocked on at a given moment. Among the different wait types, one category tends to cause confusion:
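A common way to sample those two columns is a quick query against active backends (one variant among many; a NULL wait_event means the backend is running on CPU rather than waiting):

```sql
-- Snapshot of what non-idle backends are currently waiting on,
-- excluding our own session.
SELECT pid,
       state,
       wait_event_type,
       wait_event,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND pid <> pg_backend_pid();
```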
Jeremy Schneider: Zero autovacuum_cost_delay, Write Storms, and You
A few days ago, Shaun Thomas published an article over on the pgEdge blog called "Checkpoints, Write Storms, and You". Sadly, a lot of corporate blogs don't have comment functionality anymore.
Ruohang Feng: 504 Extensions: Expand the PostgreSQL Landscape
Lukas Fittl: Waiting for Postgres 19: Reduced timing overhead for EXPLAIN ANALYZE with RDTSC
Shaun Thomas: Checkpoints, Write Storms, and You
Every database has to reconcile two uncomfortable truths: memory is fast but volatile, and disk is slow but durable. Postgres handles this tension through its Write-Ahead Log (WAL), which records every change before it happens. But the WAL can't grow forever. At some point, Postgres needs to flush all those accumulated dirty pages to disk and declare a clean starting point. That process is called a checkpoint, and when it goes wrong, it can bring throughput to its knees.
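The trade-off the excerpt describes is governed by a handful of standard settings. A sketch of how to observe and spread checkpoint I/O (the values are illustrative, not recommendations):

```sql
-- How checkpoints are being triggered. pg_stat_checkpointer is
-- PostgreSQL 17+; older versions expose similar counters in
-- pg_stat_bgwriter.
SELECT num_timed, num_requested FROM pg_stat_checkpointer;

-- Illustrative settings: checkpoint less often and spread the
-- resulting writes over most of the interval, rather than
-- flushing dirty pages in one burst (a write storm).
ALTER SYSTEM SET checkpoint_timeout = '15min';
ALTER SYSTEM SET max_wal_size = '8GB';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
SELECT pg_reload_conf();
```

A high num_requested relative to num_timed suggests max_wal_size is too small and checkpoints are firing on WAL volume rather than on schedule.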
Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 19 – new pg_get_*_ddl() functions