At FOSDEM someone asked how long 64-bit transaction IDs will last.
The current 32-bit transaction IDs will not last very long if you have a busy database doing many writes over the day. MVCC keeps the new and the old versions of a row in the table, and the TXID increases with every write transaction. At some point the 4 billion mark is reached, the TXID wraps around, and starts again at the beginning. Because of the way transaction visibility works in PostgreSQL, suddenly all data in your database would become invisible. No one wants that!
To limit this problem, PostgreSQL has a number of mechanisms in place:
- PostgreSQL splits the transaction ID space in half: the 2 billion IDs in the past are visible, the 2 billion in the future are not - all visible rows must live in the 2 billion in the past, at all times.
- Old, deleted row versions are eventually removed by VACUUM (or autovacuum), and their XIDs are then no longer in use.
- Old row versions which are still live are marked as “frozen” in the table, and assigned a special XID - the previously used XID is then no longer needed. The catch is that every single table in every database must be vacuumed before the 2 billion threshold is reached.
- PostgreSQL uses lazy XIDs: a “real” transaction ID is only assigned if the transaction changes something on disk - a read-only transaction that does not change anything consumes no transaction ID.
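The “split in half” from the first point relies on wraparound-aware comparison of IDs. A minimal Python sketch of the idea (a simplification of what PostgreSQL does in C with signed 32-bit arithmetic; this is not the actual implementation):

```python
# Wraparound-aware XID comparison, modulo 2^32: xid_a counts as "older"
# than xid_b if the wrapped difference falls into the "past" half
# (the high 2 billion values of the 32-bit difference).

def xid_precedes(xid_a: int, xid_b: int) -> bool:
    """True if xid_a is logically older than xid_b (wraparound-aware)."""
    diff = (xid_a - xid_b) & 0xFFFFFFFF  # wrap the difference to 32 bits
    # A negative signed 32-bit difference means xid_a lies in the past.
    return diff >= 0x80000000

# Plain ordering works as expected:
print(xid_precedes(100, 200))          # True
# ...and so does ordering across the wraparound point:
print(xid_precedes(0xFFFFFFF0, 10))    # True: 0xFFFFFFF0 is "before" 10
```

This is why no visible row may be more than 2 billion transactions in the past: beyond that distance, the comparison flips and an old row would suddenly look like it comes from the future.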
Now back to the original question: 64-bit transaction IDs. As we have seen above, it might still make sense to expand the ID range beyond the currently used 32 bits.
The problem is not new, and has been discussed a number of times. From the latest discussions it looks like the path forward will be a mix of 32-bit transaction IDs and an additional epoch. The underlying problem is that expanding to true 64 bit would instantly double the space requirement for the TXID in every single row (tuple header), from 32 bits today to 64 bits. And since it is stored twice, for xmin and xmax, that makes 16 bytes just for the transaction IDs.
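The epoch idea can be sketched in a few lines of Python: a 32-bit epoch counter that increments on every wraparound, combined with the regular 32-bit XID, yields a 64-bit value without widening the XIDs stored in each tuple header. The helper names below are made up for illustration:

```python
# Sketch: a "full" 64-bit transaction ID as (epoch, 32-bit xid).
# The epoch lives in the high 32 bits, the regular XID in the low 32.

XID_MASK = 0xFFFFFFFF

def full_xid(epoch: int, xid: int) -> int:
    """Combine a wraparound epoch and a 32-bit XID into one 64-bit value."""
    return (epoch << 32) | (xid & XID_MASK)

def split_full_xid(fxid: int) -> tuple[int, int]:
    """Recover (epoch, xid) from the combined 64-bit value."""
    return fxid >> 32, fxid & XID_MASK

fx = full_xid(epoch=3, xid=123_456)
print(split_full_xid(fx))  # (3, 123456)
```

The appeal of this approach is exactly what the text describes: rows keep storing only the 32-bit XID, while the epoch is tracked elsewhere, so the tuple header does not grow.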
But how long will 64 bit really last? And most importantly, is it enough for the foreseeable future?
To answer this question we first need to find out how many transaction IDs a database will use. There are a number of blog posts which aim at the general problem, like “how many transactions can a database do these days” or “how did transaction performance in PostgreSQL improve over time”?
From these blog posts it seems like the near-future aim is “1 million transactions per second”. Now, that counts read and write transactions combined, and the second blog post is more in the “several thousand write transactions per second” range. But nevertheless let's just use this number and see if it holds up.
2^64 transaction IDs is an insanely large number: 18,446,744,073,709,551,616 - that is eighteen quintillion, four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred sixteen.
Assuming 1 million transactions are used per second, how long does this last? Since I want to know the time which the 64-bit transaction IDs will last, I not only need to divide by 1M (10^6), but also by 2 - because PostgreSQL splits the ID space into a visible and an invisible half. The result is still large; let's see how much time that is:
A year has 31,536,000 seconds (ignoring leap years).
That is roughly 292 thousand years, doing 1 million write transactions every second.
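The back-of-envelope arithmetic above can be checked with a few lines of Python:

```python
# How long do 64-bit transaction IDs last at 1M write transactions/second?

total_ids = 2 ** 64              # 18,446,744,073,709,551,616
usable = total_ids // 2          # only the "past" half is usable at once
tps = 1_000_000                  # assumed write transactions per second
seconds_per_year = 31_536_000    # 365 days, ignoring leap years

years = usable / tps / seconds_per_year
print(f"{years:,.0f} years")     # roughly 292,471 years
```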
Now, that assumes linear usage of transaction IDs. Using an epoch might change that, depending on the implementation. But it does not look like the database will run out of 64-bit transaction IDs anytime soon.