News from Planet PostgreSQL
Andrei Lepikhov: Inventing A Cost Model for PostgreSQL Local Buffers Flush
In this post, I describe experiments on the write-versus-read costs of PostgreSQL's temporary buffers. For accuracy, the set of PostgreSQL functions is extended with tools to measure buffer flush operations. The measurements show that writes are approximately 30% slower than reads. Based on these results, a cost estimation formula for the optimiser is proposed:
flush_cost = 1.30 × dirtied_bufs + 0.01 × allocated_bufs.
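As a rough illustration of how the proposed formula would be applied (the buffer counts below are hypothetical, not figures from the post):

    -- Hypothetical numbers: 1,000 dirtied buffers, 5,000 allocated buffers
    SELECT 1.30 * 1000 + 0.01 * 5000 AS flush_cost;  -- 1350.00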
Deepak Mahto: PostgreSQL Table Rename and Views – An OID Story
Recently, during a post-migration activity, we had to add a new UUID column (NOT NULL with a default) to a very large table and backfill it for all existing rows.
Instead of doing a straight:
    ALTER TABLE ... ADD COLUMN ... DEFAULT ... NOT NULL;
we chose the commonly recommended performance approach:
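A minimal sketch of that common approach, assuming a hypothetical table orders and column order_uuid (names and batching details are illustrative, not taken from the post):

    -- 1. Add the column as nullable, which avoids a full-table rewrite
    ALTER TABLE orders ADD COLUMN order_uuid uuid;
    -- 2. Set a default so newly inserted rows are populated automatically
    ALTER TABLE orders ALTER COLUMN order_uuid SET DEFAULT gen_random_uuid();
    -- 3. Backfill existing rows (in practice, in small batches to limit lock time and bloat)
    UPDATE orders SET order_uuid = gen_random_uuid() WHERE order_uuid IS NULL;
    -- 4. Enforce the constraint once every row is filled
    ALTER TABLE orders ALTER COLUMN order_uuid SET NOT NULL;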
Floor Drees: Chaos testing the CloudNativePG project
Ian Barwick: PgPedia Week, 2025-12-07
Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 19 – Implement ALTER TABLE … MERGE/SPLIT PARTITIONS … command
Floor Drees: Sticking with Open Source: pgEdge and CloudNativePG
Gabriele Bartolini: CloudNativePG in 2025: CNCF Sandbox, PostgreSQL 18, and a new era for extensions
2025 marked a historic turning point for CloudNativePG, headlined by its acceptance into the CNCF sandbox and a subsequent application for incubation. Throughout the year, the project transitioned from a high-performance operator to a strategic architectural partner within the cloud-native ecosystem, collaborating with projects like Cilium and Keycloak. Key milestones included the co-development of the extension_control_path feature for PostgreSQL 18, revolutionising extension management via OCI images, and the General Availability of the Barman Cloud Plugin.
Imran Zaheer: PostgreSQL Recovery Internals
Modern databases must handle failures gracefully, whether they are system failures, power failures, or software bugs, while also ensuring that committed data is not lost. PostgreSQL achieves this with its recovery mechanism, which allows it to recreate a valid, functioning system state from a failed one. The core component that makes this possible is Write-Ahead Logging (WAL): PostgreSQL records all changes before they are applied to the data files, which keeps recovery smooth and robust.
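For a quick hands-on look at the WAL side of this, a few standard functions and settings can be inspected on any running instance (generic examples, not specific to the post):

    SHOW wal_level;                                 -- how much information is written to WAL
    SELECT pg_current_wal_lsn();                    -- current write-ahead log position
    SELECT pg_walfile_name(pg_current_wal_lsn());   -- the WAL segment that position falls in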
Floor Drees: PostgreSQL Contributor Story: Manni Wood
REGINA OBE: FOSS4GNA 2025: Summary
Free and Open Source for Geospatial North America (FOSS4GNA) 2025 ran November 3rd-5th, 2025, and I think it was one of the better FOSS4GNAs we've had. I was on the programming and workshop committees, and with the government shutdown we were worried things could go badly, since people started withdrawing their talks and workshops very close to curtain time.
Cornelia Biacsics: Contributions for week 53, 2025
Emma Sayoran organized a PUG Armenia speed networking meetup on December 25 2025.
The FOSDEM PGDay 2026 schedule was announced on Dec 23, 2025. Call for Papers committee:
- Teresa Lopes
- Stefan Fercot
- Flavio Gurgel
Community Blog Posts:
- Chiira B. Mwangi: I joined a community!
Ryan Lambert: Improved Quality in OpenStreetMap Road Network for pgRouting
Recent changes in the software bundled in PgOSM Flex resulted in unexpected improvements when using OpenStreetMap roads data for routing. The short story: routing with PgOSM Flex 1.2.0 is faster, easier, and produces higher quality data for routing! I came to this conclusion after completing a variety of testing with the old and new versions of PgOSM Flex. This post outlines my testing and findings.
Taras Kloba: PostgreSQL as a Graph Database: Who Grabbed a Beer Together?
Graph databases have become increasingly popular for modeling complex relationships in data. But what if you could leverage graph capabilities within the familiar PostgreSQL environment you already know and love? In this article, I’ll explore how PostgreSQL can serve as a graph database using the Apache AGE extension, demonstrated through a fun use case: analyzing social connections in the craft beer community using Untappd data.
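A minimal sketch of what querying PostgreSQL through Apache AGE looks like; the graph name, labels, and data below are illustrative and not taken from the article:

    LOAD 'age';
    SET search_path = ag_catalog, "$user", public;
    SELECT create_graph('beer_graph');
    -- Create two people who checked in the same beer
    SELECT * FROM cypher('beer_graph', $$
        CREATE (:Person {name: 'alice'})-[:CHECKED_IN]->(:Beer {name: 'hypothetical ipa'})<-[:CHECKED_IN]-(:Person {name: 'bob'})
    $$) AS (result agtype);
    -- Find pairs of people who grabbed the same beer together
    SELECT * FROM cypher('beer_graph', $$
        MATCH (p1:Person)-[:CHECKED_IN]->(b:Beer)<-[:CHECKED_IN]-(p2:Person)
        WHERE p1.name < p2.name
        RETURN p1.name, p2.name, b.name
    $$) AS (person1 agtype, person2 agtype, beer agtype);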
Shinya Kato: New PostgreSQL Features I Developed in 2025
I started contributing to PostgreSQL around 2020. This year I wanted to work harder, so I will explain the PostgreSQL features I developed and committed in 2025.
I also committed some other patches, but they were bug fixes or small documentation changes. Here I explain the ones that seem most useful.
These are mainly features in PostgreSQL 19, now in development. They may be reverted before the final release.
Oleg Bartunov: Unpublished interview
Interview with Oleg Bartunov
“Making Postgres available in multiple languages was not my goal—I was just working on my actual task.”
Tomas Vondra: Don't give Postgres too much memory (even on busy systems)
A couple of weeks ago I posted about how setting maintenance_work_mem too high may make things slower, which can be surprising, as the intuition is that more memory makes things faster. I got an e-mail about that post, asking if the conclusion would change on a busy system. That’s a really good question, so let’s look at it.
To paraphrase the message I got, it went something like this:
Umair Shahid: PostgreSQL Column Limits
If you’ve ever had a deployment fail with “tables can have at most 1600 columns”, you already know this isn’t an academic limit. It shows up at the worst time: during a release, during a migration, or right when a customer escalation is already in flight.
But here’s the more common reality: most teams never hit 1,600 columns; they hit the consequences of wide tables first:
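As a quick aside, a catalog query along these lines (a generic sketch, not from the post) can show which tables are drifting toward the limit:

    -- Count live (non-dropped, user-visible) columns per ordinary table
    SELECT c.oid::regclass AS table_name,
           count(*) AS column_count
    FROM pg_class c
    JOIN pg_attribute a ON a.attrelid = c.oid
    WHERE c.relkind = 'r'
      AND a.attnum > 0
      AND NOT a.attisdropped
    GROUP BY c.oid
    ORDER BY column_count DESC
    LIMIT 10;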
Mayur B.: PostgreSQL Santa’s Naughty Query List: How to Earn a Spot on the Nice Query List?
Santa doesn’t judge your SQL by intent. Santa judges it by execution plans, logical I/O, CPU utilization, temp usage, and response time.
This is a practical conversion guide: common “naughty” query patterns and the simplest ways to turn each into a “nice list” version that is faster, more predictable, and less likely to ruin your on-call holidays.
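Most of those signals come straight out of a plan; a generic starting point (not taken from the post) is to run the query under EXPLAIN with buffer statistics enabled:

    -- Show the execution plan plus actual runtimes and buffer (logical I/O) usage
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pg_class;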
Hans-Juergen Schoenig: PostgreSQL Performance: Latency in the Cloud and On Premise
PostgreSQL is highly suitable for powering critical applications in all industries. While PostgreSQL offers good performance, there are issues not many users are aware of but which play a key role when it comes to efficiency and speed in general. Most people understand that more CPUs, better storage, more RAM and the like will speed things up. But what about something that is equally important?
We are of course talking about “latency”.
Radim Marek: Instant database clones with PostgreSQL 18
Have you ever watched a long-running migration script, wondering if it's about to wreck your data? Or wished you could "just" spin up a fresh copy of the database for each test run? Or wanted reproducible snapshots to reset between runs of your test suite (and yes, because you are reading boringSQL, needed to reset the learning environment)?
When your database is a few megabytes, pg_dump and restore work fine. But what happens when you're dealing with hundreds of megabytes, gigabytes - or more? Suddenly "just make a copy" becomes a burden.
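A minimal sketch, under the assumption that the post builds on PostgreSQL 18's new file_copy_method setting, which lets the FILE_COPY database creation strategy clone files via copy-on-write on supporting filesystems (database names are illustrative):

    -- Assumes PostgreSQL 18 and a filesystem that supports copy-on-write cloning
    SET file_copy_method = clone;
    CREATE DATABASE myapp_test TEMPLATE myapp STRATEGY = FILE_COPY;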