News from Planet PostgreSQL
Cornelia Biacsics: Contributions for week 13, 2026
The Prague PostgreSQL Meetup met on March 30, 2026, organized by Gulcin Yildirim Jelinek and Mayur B.
Speakers:
- Radim Marek
- Mayur B.
Community Blog Posts:
- Pat Wright about Nordic Pg Day 2026
Community Videos:
- Pavlo Golub about SCALE 23x
David Wheeler: pg_clickhouse 0.1.10
Hi, it’s me, back again with another update to pg_clickhouse, the query interface for ClickHouse from Postgres. This release, v0.1.10, maintains binary compatibility with earlier versions but ships a number of significant improvements that increase compatibility of Postgres features with ClickHouse. Highlights include:
Radim Marek: Don't let your AI touch production
Not so long ago, the biggest threat to production databases was the developer who claimed it worked on their machine. If you've attended my sessions, you know this is a topic I'm particularly sensitive to.
These days, AI agents are writing your SQL. The models are getting incredibly good at producing plausible code. It looks right, it feels right, and often it passes a cursory glance. But "plausible" isn't a performance metric, and it doesn't care about your execution plan or locking strategy.
Richard Yen: WAL as a Data Distribution Layer
Every so often, I talk to someone working in data analytics who wants access to production data, or at least a snapshot of it. Sometimes, they tell me about their ETL setup, which takes hours to refresh and can be brittle, with a lot of monitoring around it. For them, it works, but it sometimes gets me wondering if they need all that plumbing to get a snapshot of their live dataset. Back at Turnitin, I set up a way to get people access to production data without having to snapshot nightly, and I thought maybe I should share it with people here.
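One WAL-based way to feed a live copy of production data to analytics, without nightly snapshots, is built-in logical replication, which streams changes decoded from the WAL. The post does not spell out its exact setup, so take this as an illustrative sketch with made-up table, publication, and connection names:

```sql
-- On the production (publisher) side: expose selected tables via logical
-- replication; changes are decoded from the WAL and streamed to subscribers.
CREATE PUBLICATION analytics_feed FOR TABLE orders, customers;

-- On the analytics (subscriber) side: subscribe to that publication.
-- Requires wal_level = logical on the publisher.
CREATE SUBSCRIPTION analytics_sub
    CONNECTION 'host=prod-db dbname=app user=replicator'
    PUBLICATION analytics_feed;
```

Once the initial table sync completes, the subscriber keeps itself current from the WAL stream, with no scheduled ETL job to babysit.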
Lætitia AVROT: PAX: The Storage Engine Strikes Back
Pavel Stehule: Using non-ACID storage as a workaround for missing autonomous transactions
When I was younger, the culture war (in my bubble) was about transactional versus non-transactional engines: Postgres versus MySQL (MyISAM). I certainly preferred the transactional concept. Data integrity and crash safety are super important. But they are not without cost. That was visible 30 years ago, when MySQL was a super fast database and PostgreSQL a super slow one. On today's more powerful computers it is still visible, not as strongly, but still there. And we still use non-transactional storage a lot, for application logs, for example.
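Postgres's closest built-in expression of that trade-off is the unlogged table, which skips WAL writes in exchange for losing its contents after a crash. This is only an illustrative sketch of the idea; the post's actual workaround may use a different mechanism:

```sql
-- An UNLOGGED table skips WAL writes: inserts are faster and cheaper, but the
-- table is truncated after a crash and is not replicated. A reasonable
-- trade-off for low-stakes data such as application logs.
CREATE UNLOGGED TABLE app_log (
    logged_at  timestamptz NOT NULL DEFAULT now(),
    level      text        NOT NULL,
    message    text        NOT NULL
);
```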
Shaun Thomas: What is a Collation, and Why is My Data Corrupt
The GNU C Library (glibc) version 2.28 entered the world on August 1st, 2018, and Postgres hasn't been the same since. Among its many changes was a massive update to locale collation data, bringing it in line with the 2016 Edition 4 release of the ISO 14651 standard and Unicode 9.0.0. This was not a subtle tweak. It was the culmination of roughly 18 years of accumulated locale modifications, all merged in a single release. Nobody threw a party. What followed was one of the most significant and insidious data integrity incidents in the history of Postgres.
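The danger is that indexes built under the old collation order silently stop matching the new one. On Postgres 15 and later you can check for a mismatch and repair it; the database name below is a placeholder:

```sql
-- Compare the collation version recorded at database creation with the
-- version the OS collation library reports now (Postgres 15+).
SELECT datname, datcollate, datcollversion,
       pg_database_collation_actual_version(oid) AS actual_version
FROM pg_database
WHERE datname = current_database();

-- If they differ, rebuild indexes on affected text columns, then record
-- the new version so the warning goes away:
--   REINDEX DATABASE mydb;
ALTER DATABASE mydb REFRESH COLLATION VERSION;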
David Wheeler: pg_clickhouse 0.1.6
We fixed a few bugs this week in pg_clickhouse, the query interface for ClickHouse from Postgres. The fixes improve query cancellation and function & operator pushdown, including to_timestamp(float8), ILIKE, LIKE, and regex operators. Get the new v0.1.6 release from the usual places:
Antony Pegg: pgEdge MCP Server for Postgres Is Now GA. Here’s Why That Matters
If you’re building agentic AI applications, you’ve probably already hit the wall where your LLM needs to actually talk to a database. Not just dump a schema and hope for the best, but genuinely understand the data model, write reasonable queries, generate code for new UIs and even entire applications, and do it all without you holding its hand through every interaction.
Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 19 – Add UPDATE/DELETE FOR PORTION OF
Vibhor Kumar: pg_background v1.9: a calmer, more practical way to run SQL in the background
There is a kind of database pain that does not arrive dramatically. It arrives quietly.
A query runs longer than expected. A session stays occupied. Someone opens another connection just to keep moving. Then another task shows up behind it. Soon, a perfectly normal day starts to feel like too many people trying to get through one narrow doorway.
That is where pg_background becomes useful.
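pg_background lets a session hand a command to a background worker and move on. A minimal sketch of its core functions, with an illustrative table name and PID:

```sql
CREATE EXTENSION IF NOT EXISTS pg_background;

-- Launch a command in a background worker; the call returns the worker's
-- PID immediately, so the current session is not blocked.
SELECT pg_background_launch('VACUUM ANALYZE big_table');

-- Later, collect the command's result by PID (12345 here is illustrative);
-- this blocks until the background command has finished.
SELECT * FROM pg_background_result(12345) AS (result text);
```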
Antony Pegg: Replicating CrystalDBA With pgEdge MCP Server Custom Tools
A disclaimer before we start: I'm product management, no longer an engineer. I can read code, I can write it … incredibly slowly. I understand PostgreSQL at a product level, and I know what questions to ask. But the code in this project was written by Claude - specifically, Claude Code running in my terminal as a coding agent. I directed the architecture, made the design calls, reviewed the output, and did the testing. Claude wrote the code. This is a vibe-coding story as much as it is a technical one. The pgEdge Postgres MCP Server has a custom tool system.
Elizabeth Garrett Christensen: Postgres Vacuum Explained: Autovacuum, Bloat and Tuning
If you’ve been using Postgres for a while, you’ve probably heard someone mention “vacuuming” the database or use the term “bloat.” These both sound like annoying chores — but they’re just part of life in a healthy database. In modern Postgres versions, autovacuum usually handles these issues for you behind the scenes. But as your database footprint grows, you might start wondering: Are the default settings enough? Do I need to vacuum Postgres manually? Or why is my database suddenly taking up way more disk space than it should?
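A quick first check for both questions is the cumulative statistics views. This query (standard `pg_stat_user_tables` columns, nothing post-specific) surfaces tables where dead tuples are piling up faster than autovacuum reclaims them:

```sql
-- Tables with the most dead tuples, plus when autovacuum last touched them.
-- A persistently high n_dead_tup relative to n_live_tup hints at bloat or
-- an autovacuum that cannot keep up.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```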
Deepak Mahto: Why Ora2Pg Should Be Your First Stop for PostgreSQL Conversion
I have been doing Oracle-to-PostgreSQL migrations for over a decade across enterprises, cloud platforms, and everything in between. I have used commercial tools, cloud-native services, and custom scripts. And when it comes to table DDL conversion, I keep coming back to the same tool: Ora2Pg. Not because it is the flashiest or the easiest to set up, but because once you understand its configuration model, nothing else comes close to the control it gives you.
Umut TEKIN: Patroni: Cascading Replication with Standby Cluster
Patroni is a widely used solution for managing PostgreSQL high availability. It provides a robust framework for automatic failover, cluster management, and operational simplicity in PostgreSQL environments. Patroni offers many powerful features that make PostgreSQL clusters easier to manage while maintaining reliability and operational flexibility.
Lætitia AVROT: pg_service.conf: the spell your team forgot to learn
Richard Yen: The Hidden Behavior of plan_cache_mode
Most PostgreSQL users rely on prepared statements to boost performance and prevent SQL injection. Fewer know that the planner may silently switch a prepared statement to a cached generic plan after its first five executions.
This behavior often surprises engineers because a query plan can suddenly shift, sometimes dramatically, even though the query itself hasn’t changed. The reason lies in the planner’s choice between custom and generic plans, controlled by the parameter plan_cache_mode.
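The mechanics can be seen in a plain psql session; the table and predicate below are illustrative:

```sql
-- A parameterized prepared statement.
PREPARE q(int) AS SELECT count(*) FROM orders WHERE status_id = $1;

-- Executions 1-5: the planner builds a fresh "custom" plan per call,
-- using the actual parameter value for selectivity estimates.
EXECUTE q(1);
-- After five executions it may switch to a cached "generic" plan
-- (planned once, ignoring parameter values) if its estimated cost
-- is competitive with the average custom-plan cost.

-- plan_cache_mode (Postgres 12+) overrides that heuristic:
SET plan_cache_mode = force_custom_plan;  -- always replan per parameter
-- other settings: auto (the default), force_generic_plan
```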
Cornelia Biacsics: Contributions for week 12, 2026
From March 23 to March 26, the following contributions were made to PostgreSQL at SREcon26 (Americas):
PostgreSQL booth volunteers:
- Aya Griswold
- Erika Miller
- Gabrielle Roth
- Jennifer Scheuerell
- Umair Shahid
- Alex Wood
PostgreSQL speakers:
Henrietta Dombrovskaya: Prairie Postgres Second Developers’ Summit and Why You Should Participate
In my current position as Database Architect at DRW, I talk with end users more than I ever did in my life. Our end users are application developers who look at PostgreSQL from a very utilitarian perspective. Trust me, they do not care whether Postgres is the most advanced DBMS or not. They are very pragmatic: they need a database that will help them accomplish their goals: write the data fast, store reliably, read anything in milliseconds, and run analytics.
Radim Marek: Good CTE, bad CTE
CTEs are often the first feature developers reach for beyond basic SQL, and often the only one.
But the popularity of CTEs usually has less to do with modernizing code and more to do with the promise of imperative logic. For many, CTEs act as an easy-to-understand remedy for 'scary queries' and a way to force execution order on the database. Many write queries as if they were telling the optimizer: "first do this, then do that".
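Whether a CTE actually forces execution order depends on the Postgres version. Before 12, every CTE was materialized and acted as an optimization fence; since 12, the planner may inline it, and the old fencing behavior is opt-in. A sketch with illustrative table names:

```sql
-- Since Postgres 12 the planner may inline a CTE into the outer query.
-- MATERIALIZED restores the pre-12 behavior: compute the CTE once, as
-- written, before the outer query runs.
WITH customer_totals AS MATERIALIZED (
    SELECT customer_id, sum(amount) AS total
    FROM payments
    GROUP BY customer_id
)
SELECT * FROM customer_totals WHERE total > 1000;
```

Without MATERIALIZED (or with NOT MATERIALIZED), the planner is free to push the `total > 1000` filter logic into its plan however it sees fit, which is often what you want.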

