Recently I was looking for a way to copy a directory, including all subdirectories, using Ansible. For reasons beyond the scope of this post I couldn't use the synchronize (rsync) module, so I had to find a way to copy everything with basic Ansible steps.
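A minimal sketch of what such a task can look like - the copy module is real, but the paths and names here are assumptions on my part, not necessarily what the post ends up using:

```yaml
# Hypothetical task: the copy module recurses into a directory given as src
- name: Copy a directory tree to the target host
  copy:
    src: files/mytree/      # trailing slash: copy the contents, recursively
    dest: /opt/mytree/
    owner: root
    mode: preserve          # keep the permissions of the source files
```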
Continue reading "Ansible: copy a directory recursive"
Most of my systems run on a software RAID 1 configuration (that is, two disks, where each disk is mirrored to the other). This way, one of the disks can fail and all the data is still available.
If a disk fails, it is replaced with a similar disk, which then needs to be partitioned and re-added to the RAID.
Newer systems all use the GUID Partition Table (GPT), and therefore allow almost unlimited disk sizes. The instructions for re-adding a disk using GPT differ a bit from the MBR days (when disks were limited to 2 TB), so I'm writing them down here for future use.
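For GPT disks the core steps typically look like this - the device names are assumptions (/dev/sda is the surviving disk, /dev/sdb the replacement, /dev/md0 the array), so adjust them to your system before running anything:

```shell
# Replicate the partition table of the healthy disk onto the replacement
sgdisk --replicate=/dev/sdb /dev/sda
# Give the new disk fresh GUIDs so they don't collide with the source disk
sgdisk --randomize-guids /dev/sdb
# Re-add the matching partition to the mirror and watch the resync
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat
```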
Continue reading "Replace and re-add a failed drive to a Linux software RAID"
A while ago I asked for recommendations for Android Podcast apps. From all over the Internet I got a great number of recommendations, and looked into all of them.
Continue reading "Android Podcast Apps"
At FOSDEM someone asked how long 64 bit Transaction-IDs will last.
To refresh: PostgreSQL currently uses 32 bits for the TXID, which is good for around 4 billion transactions:
fosdem=# SELECT 2^32;
That will not last very long if you have a busy database doing many writes over the day. MVCC keeps the new and old versions of a row in the table, and the TXID increases with every transaction. At some point the 4 billion transactions are reached, the TXID wraps around, and starts again at the beginning. Because of the way transaction visibility works in PostgreSQL, all data in your database would suddenly become invisible. No one wants that!
To limit this problem, PostgreSQL has a number of mechanisms in place:
- PostgreSQL splits the transaction ID space in half: the 2 billion IDs in the past are visible, the 2 billion in the future are not - all visible rows must live in the 2 billion in the past, at all times.
- Old, deleted row versions are eventually removed by VACUUM (or autovacuum), and their XIDs are no longer used.
- Old row versions which are still live are marked as "frozen" in the table, and assigned a special XID - the previously used XID is no longer needed. The problem here is that every single table in every database must be vacuumed before the 2 billion threshold is reached.
- PostgreSQL uses lazy XIDs, where a "real" transaction id is only assigned if the transaction changes something on disk - if a transaction is read only, and does not change anything, no transaction id is consumed.
Continue reading "How long will a 64 bit Transaction-ID last in PostgreSQL?"
After setting up OpenWeatherMap in openHAB, I had another project on my list: send a forecast for the next day.
That is rather easy to do with a Cron rule.
Continue reading "Weather Forecast in openHAB based on OpenWeatherMap, using Ansible"
Next item on my home automation todo list: weather, and forecast. No good system without that data!
After exploring the options which openHAB supports, I settled on OpenWeatherMap. Note: you need an account with OWM; the basic functionality is free, while the paid options give you more and better forecasts.
And of course, I install everything using Ansible, and can just repeat the entire installation if something does not work.
This setup is also used in a weather forecast for tomorrow.
Continue reading "Install OpenWeatherMap in openHAB, using Ansible"
You might know the problem: the brand new SSD in your system is super fast, but after some time of use, the card is dead. Unlike spinning disks, which usually fail over time and show I/O errors for individual blocks, SSD cards are prone to "wear-out": blocks which are written more often wear out and become unusable. More writes increase this risk. And a typical openHAB system writes all the time: every time an external status changes, it's written to the event log. By default the syslog is written to disk as well, and then there is a myriad of systemd services writing status information into files.
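One common mitigation - a general sketch, not necessarily what this post implements - is to keep write-heavy paths in RAM via tmpfs, so the constant log churn never reaches the flash cells:

```
# /etc/fstab - hold logs in RAM; contents are lost on reboot, which is
# usually acceptable for an appliance-style openHAB box
tmpfs  /var/log  tmpfs  defaults,noatime,nosuid,size=64m  0  0
```

In the same spirit, journald can be told to keep its journal in memory only (Storage=volatile in journald.conf).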
Continue reading "Avoid "wear out" of SSD-cards in an openHAB system"
A while ago I posted about adding a Fritz!Box to openHAB, using Ansible. Now I had to use the Playbook to install another Raspberry Pi, and found that some parts are missing from my post - mostly at the end, when it comes to configuring the Fritz!Box.
Therefore let's run over all the steps again, and make sure that everything is covered.
Continue reading "Configure a FRITZ!Box in openHAB, using Ansible"
The PostgreSQL Project participates in Google Code-In (GCI) 2018. This is a program which allows pre-university students to pick up tasks defined by the partnering open source projects, learn about these projects, and also win a prize (certificates, t-shirts, hoodies, but also a trip to Google HQ).
Every project creates a number of different tasks: some technical, some design based, some about updating documentation or validating bugs - whatever is useful in order to get to know the project better. Students can select tasks and submit their work. Mentors from the project then evaluate the work, and either approve it or send it back to the student because more work is needed.
Now that we are halfway through this year's competition, it's time to run the numbers.
Continue reading "Google Code-In 2018 - Halftime"
For a long time I was using a Makefile to quickly build, start, stop, and then wipe a predefined PostgreSQL version. That comes in handy if you just want to test something on an older version without actually installing the software. Everything happens in a single directory; even a different port is assigned.
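The general shape of such a throwaway build - the version, paths, and port below are made up for illustration; the actual Makefile in the post may differ:

```
# Build and run one PostgreSQL version entirely inside the current directory
tar -xjf postgresql-11.1.tar.bz2        # hypothetical version
cd postgresql-11.1
./configure --prefix=$PWD/inst
make && make install
inst/bin/initdb -D $PWD/data
inst/bin/pg_ctl -D $PWD/data -o "-p 5433" -l pg.log start
# ...test on port 5433, stop the server, then simply delete the directory
```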
When I needed that setup recently, I ran into unrelated build errors:
relpath.c:21:10: fatal error: catalog/pg_tablespace_d.h: No such file or directory
Can't be - pg_tablespace_d.h is included in the tarball I'm using.
Continue reading "Using Makefiles to build PostgreSQL"