
Daily Notes in Obsidian

Obsidian is note-taking and knowledge base software where the notes/files are written in Markdown. I have been using it in my daily work for quite a while.

One of the cool features it has is named "Daily Notes". As the name implies, a new note is generated for every day. For me, this is used for writing down notes which do not deserve their own note. But it is also rather heavily used to share all kinds of content from my mobile devices into the daily note in the first place. Content doesn't have to stay there; in fact most of it is either handled one way or another, or is moved to a different place. But it is a very nice collection point in the first place.

By default they are created in the main folder of the Obsidian vault - over time, that is hundreds of files with no real structure. Which calls for organizing the Daily Notes in a better way.
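One possible approach, for reference: the Daily Notes core plugin accepts a moment.js date format, and slashes in that format create subfolders. A format like this (the exact layout is a matter of taste) sorts the notes into one folder per year and month:

YYYY/MM/YYYY-MM-DD

The note for July 15, 2023 then lands in 2023/07/2023-07-15.md instead of the vault root.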

 

Continue reading "Daily Notes in Obsidian"

KeepingYouAwake on Mac OS X

On my Mac, one of the annoying "features" is that when the screensaver comes on, the device eventually goes to sleep and disconnects the network. This in turn times out services like Slack or Google, because these services keep a network connection open at all times. When waking up the device, I often have to log in to all the services again, even though the device just sits on the desk in my working room all day and night. Very annoying.
I suppose it's one of those things where Apple thinks it knows better how users want their device to behave.

KeepingYouAwake is a nice little tool which prevents all of this.

When it is running and activated, it prevents the Mac from going to sleep. Which in turn means the network is never disconnected, and the online services never time out.

What's not to love about it?

Picture made by Anton Atanasov

Extract better GPS coordinates from images using exiftool

Sometimes I have to extract Exif information from images, mostly the GPS coordinates. The raw coordinates from the images are not very helpful. Let's look at a picture I took today:

Bowl of ice cream

darktable shows the following coordinates:

latitude: N 52° 40,198'
longitude: E 013° 16,852'
elevation: 93,90 m above sea level

Now that is not very helpful, because neither OpenStreetMap nor Google recognize this format out of the box:

N 52° 40,198' E 013° 16,852'

Coordinates not working in OpenStreetMap

Coordinates not working in Google

Bummer. And I don't have the time or energy to fix that every time I need the coordinates. Luckily exiftool can output the coordinates in different formats, which is super helpful. For my use cases I choose the decimal degrees (DD) format, which shows the latitude and longitude geographic coordinates as decimal fractions of a degree.
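For reference, the conversion is DD = degrees + minutes/60: the latitude above becomes 52 + 40.198/60 ≈ 52.6700, which matches the exiftool output below up to display rounding.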

exiftool -time:all -location:all -G -a -s -c "%.6f"

The explanation for the options used here:

  • -time:all: extract all time-related tags
  • -location:all: extract all location-related tags
  • -G: Print group name for each tag
  • -a: Allow duplicate tags to be extracted
  • -s: Short output format (tag names instead of descriptions)
  • -c "%.6f": Set format for GPS coordinates to decimal degrees

Using these settings, I get the following coordinates:

[Composite]     GPSAltitude                     : 93.9 m Above Sea Level
[Composite]     GPSLatitude                     : 52.669959 N
[Composite]     GPSLongitude                    : 13.280862 E
[Composite]     GPSPosition                     : 52.669959 N, 13.280862 E

Which sure enough brings me right to the ice cream place "Il Pistacchio" in Hohen Neuendorf, which I visited earlier today.

Il Pistacchio on OpenStreetMap

Watch for changed files in SyncThing

For syncing files between my devices I'm using SyncThing. This tool is reliable, available on Linux, Android, Mac and iOS. And it encrypts the communication.


But sometimes I want to know when files in certain directories have changed - for example in my Obsidian vault. This allows me to post-process the files.

Some of the use cases I have in Obsidian:

  • Resolve links in Daily Notes: when I share a URL from my RSS reader or from other sources into Obsidian, the URL sometimes is just a link to a URL shortener. I then later need to resolve the link - or let a script do it right away for known shortlinks, and update the daily note.
  • Remove tracking information from URLs: many shared links include campaign and tracking information, and this can be removed straight away.
  • Extract content from certain Toots: I follow a couple of interesting accounts on Mastodon, and when I share a Toot link into Obsidian, the script extracts the Toot content and adds it, along with the original link, to a pre-defined note.
  • Extract links from Toots: many news websites include a link (sometimes again with tracking information) in their Toots. When I share such a Toot into Obsidian, a script picks up the link, extracts the target link, and updates the daily note.

All of this is not very complicated, and a couple of lines of Python do the job - a sketch follows after the list. The main parts of the script are:

  • Extract the API key from the SyncThing configuration
  • Open a connection to the local SyncThing instance
  • Watch for certain events
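A minimal sketch of the idea, assuming the default config location and GUI port (the path, the event filter, and the post-processing hook are all placeholders):

import json
import time
import urllib.request
import xml.etree.ElementTree as ET

CONFIG = "/home/user/.config/syncthing/config.xml"  # assumption: default location
EVENTS = "http://127.0.0.1:8384/rest/events"        # assumption: default GUI port

# 1. extract the API key from the SyncThing configuration
api_key = ET.parse(CONFIG).getroot().findtext("gui/apikey")

# 2. + 3. connect to the local instance and watch for finished items
last_id = 0
while True:
    url = f"{EVENTS}?since={last_id}&events=ItemFinished"
    req = urllib.request.Request(url, headers={"X-API-Key": api_key})
    with urllib.request.urlopen(req, timeout=120) as response:
        events = json.load(response)
    for event in events:
        last_id = event["id"]
        path = event["data"]["item"]   # relative path of the changed file
        print(f"changed: {path}")      # post-processing would start here
    time.sleep(1)

The /rest/events endpoint long-polls, so the loop mostly waits inside the request until SyncThing reports new events.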

 

Continue reading "Watch for changed files in SyncThing"

PGSQL Phriday #008 – pg_stat_statements

The topic for this month's PGSQL Phriday blogging challenge is: pg_stat_statements. And Michael Christofides gave me a perfect opener in his invitation.

For anyone who doesn't know, I'm running a weekly interview series with people from the PostgreSQL community. It's called "PostgreSQL Person of the Week". One of the questions in the default set I give everyone is:

What is your favorite PostgreSQL extension?

And guess what the answer is: by far everyone's favorite is pg_stat_statements!

What does this extension do? It tracks planning and execution statistics for the queries run by users. This can be used to find long-running queries, users who run too many or too heavy queries, or just to generate statistics about the workload. In short: very useful data. And the extension itself does not need a lot of resources. Even better.
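To give a taste of the data, here is how one might query it (column names as of PostgreSQL 13; older versions use total_time instead of total_exec_time):

-- pg_stat_statements must be listed in shared_preload_libraries, then:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- the five most expensive queries by total execution time
SELECT calls,
       round(total_exec_time::numeric, 2) AS total_ms,
       query
  FROM pg_stat_statements
 ORDER BY total_exec_time DESC
 LIMIT 5;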

This extension is so popular that it has double the interview mentions of the next one (which is PostGIS - by itself also a very popular extension), as the slides I occasionally present at conferences or meetups show.

PostgreSQL has a lot of extensions: head over to the PostgreSQL Extension Network (PGXN), which is operated by David E. Wheeler, to find out about around 360 of them. Currently, 51 different extensions have been mentioned in my interviews.

This extension is so useful that people say:

  • Julia Gugel: "I like pg_stat_statements as it helps a lot with performance troubleshooting."
  • Daniel Westermann: "pg_stat_statements, because it is just required if you want to troubleshoot performance related issues. I still wonder why it comes as an extension and is not there by default."
  • Lætitia Avrot: "Of course, I advise my customers to use pg_stat_statements to monitor their performance"
  • Alexander Kukushkin: "The pg_stat_statements extension is something that everyone must enable for performance monitoring and troubleshooting."
  • Flavio Gurgel: "I cannot live without pg_stat_statements, I think it’s mandatory for server optimisation."
  • Anthony Nowocien: "pg_stat_statements and I will be glad to see it in core."

This is just a small selection of quotes, but this shows that everyone loves pg_stat_statements. I encourage you to head over and read more interviews. There's plenty of insight from community members.

PGSQL Phriday #007 – Triggers for tracking changes in a table

This month's #PGSQLPhriday is hosted by Lætitia Avrot, and she asks about triggers.

History time. Shortly after I started using PostgreSQL, I needed to track changes in tables. Back then - this was in the early version 7.x days - there was no such option available, so I set out to write a tool for it. The logical choice was to implement this with triggers. Today the world is different: PostgreSQL gained replication, and along with it, one can hook tools into the replication stream and receive all the changes. Back then there was no replication.

The second thing I discovered was that pl/pgSQL can't do the job. That was a rather big disappointment. My idea was to have one function which can be used with a trigger; the function figures out the columns and writes the changes into a separate table. However, one can't access arbitrary column names in NEW and OLD in pl/pgSQL trigger functions. Something like this doesn't work:

columnname1 := "created_at";
columnname2 := "changed_at";

NEW.$columnname1 := OLD.$columnname1;
NEW.$columnname2 := OLD.$columnname2;

In pl/pgSQL, you have to "know" the column names in advance. Depesz recently posted some workarounds for this, but these options were not available back then either. One of those workarounds goes through jsonb, which does allow dynamic column access - a sketch of the idea (the change_log table is made up):
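-- sketch for an UPDATE trigger on modern PostgreSQL,
-- nothing like this existed in the 7.x days described above
CREATE OR REPLACE FUNCTION log_changes() RETURNS trigger AS $$
DECLARE
    col text;
BEGIN
    -- loop over the columns of the table the trigger fired on
    FOR col IN SELECT column_name
                 FROM information_schema.columns
                WHERE table_schema = TG_TABLE_SCHEMA
                  AND table_name = TG_TABLE_NAME
    LOOP
        -- to_jsonb(NEW) allows looking up a column by a runtime name
        IF to_jsonb(NEW) ->> col IS DISTINCT FROM to_jsonb(OLD) ->> col THEN
            INSERT INTO change_log (table_name, column_name, old_value, new_value)
            VALUES (TG_TABLE_NAME, col, to_jsonb(OLD) ->> col, to_jsonb(NEW) ->> col);
        END IF;
    END LOOP;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;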

Which made me write the tool in C. This at least allowed me to access the NEW and OLD values, and record the changes. The tool is called "table_log"; I presented it at the first PGDay in Prato, Italy in 2007, and originally it was hosted on pgFoundry. That site is also long gone, so I later copied the code to GitHub. But I also know that PostgreSQL 9.x had some internal changes which render the tool non-working. However, because the entire ecosystem has improved and other tools are available, I did not update the old code anymore.

My conclusion: Triggers were one of the first "advanced" features I used in PostgreSQL, and I like them very much. They allowed me to implement an audit feature I needed.

fwupdmgr: /usr/libexec/fwupd/efi/fwupdx64.efi and /usr/libexec/fwupd/efi/fwupdx64.efi.signed cannot be found

From time to time our laptops receive firmware updates through the Linux Vendor Firmware Service (LVFS), using the fwupd tool. This worked fine for a long time, until it didn't. One day I was facing the following error message:

root@laptop:/root# fwupdmgr update

╔══════════════════════════════════════════════════════════════════════════════╗
║ Upgrade Embedded Controller from 0.1.23 to 0.1.25?                           ║
╠══════════════════════════════════════════════════════════════════════════════╣
║                                                                              ║
║ ...                                                                          ║
║                                                                              ║
╚══════════════════════════════════════════════════════════════════════════════╝

Perform operation? [Y|n]: 
Downloading…             [***************************************]
Decompressing…           [***************************************]
Authenticating…          [***************************************]
Waiting…                 [***************************************]
Waiting…                 [***************************************]
/usr/libexec/fwupd/efi/fwupdx64.efi and /usr/libexec/fwupd/efi/fwupdx64.efi.signed cannot be found

On the first few occasions I basically ignored the error message, and attributed it to a glitch in a software package - maybe a later update would fix it.

But this never happened, so I looked into the issue.

Ubuntu split the 1.6.x version of fwupd into separate packages, and does not install fwupd-signed and fwupd-unsigned, the packages which provide the EFI binaries, by default. Installing them solves the problem:

apt-get install -y fwupd-signed fwupd-unsigned

Now everything is working again:

root@laptop:/root# fwupdmgr update
╔══════════════════════════════════════════════════════════════════════════════╗
║ Upgrade Embedded Controller from 0.1.23 to 0.1.25?                           ║
╠══════════════════════════════════════════════════════════════════════════════╣
║                                                                              ║
║ ...                                                                          ║
║                                                                              ║
╚══════════════════════════════════════════════════════════════════════════════╝

Perform operation? [Y|n]: 
Downloading…             [***************************************]
Downloading…             [***************************************]
Decompressing…           [***************************************]
Authenticating…          [***************************************]
Waiting…                 [***************************************]
Writing…                 [***************************************]
Waiting…                 [***************************************]
Successfully installed firmware
Do not turn off your computer or remove the AC adapter while the update is in progress.
Waiting…                 [***************************************]

Voilà! I also updated the playbook which installs the laptops, so it includes the two new packages.

QNAP: exclude files and directories from rsync

I'm moving files from one QNAP system to another, and I'm using rsync for this. It's preinstalled on a QNAP system. So far, so good.

To sync entire shared volumes, I want to exclude the '@Recently-Snapshot' and '@Recycle' entries - I don't want to sync the trash bin and I also don't want to sync entire snapshots.

The usual approach when using rsync is to just use the --exclude option.

rsync --exclude='@Recently-Snapshot' --exclude='@Recycle'

To my surprise, this does not work. rsync on the QNAP does not complain, but it also does not ignore the entries. Using escapes in front of the '@' doesn't work either. OK, which version is this rsync anyway?

[~] # rsync --version
rsync  version 3.0.7  protocol version 30
Copyright (C) 1996-2009 by Andrew Tridgell, Wayne Davison, and others.

Ouch, that is old. Very old. Released December 2009. Pretty sure QNAP did not fix all the bugs in there.

But according to the documentation, and the --help output, it accepts the --exclude option. Still not working though.

Ok, there is one more option: --exclude-from

I create a text file and add the two entries in there:

@Recently-Snapshot
@Recycle

And then I use the --exclude-from option to skip these two directories:

rsync --exclude-from=/tmp/pattern.txt

This option works. At least something.

Summary: The rsync on a QNAP system does not accept the --exclude parameter, but the --exclude-from parameter works.

And for everyone asking why I don't use the integrated file copy: this one skips certain files, but I also want my dot files copied over.

PGSqlPhriday #006: One Thing You Wish You Knew While Learning PostgreSQL: psql commands

For this month's #PGSQLPhriday, Grant Fritchey asks: what is the one thing you wish you knew while learning PostgreSQL?

My preferred client for PostgreSQL is psql, and while it is very powerful, there are a few features I like the most:

  • \timing
  • \watch

Both are internal commands in psql. \timing has been around for a very long time, but got improved at some point; \watch came later, but has also been available for a couple of years now.
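A made-up session to illustrate both (the query is just an example):

\timing on
SELECT count(*) FROM pg_class;

With \timing enabled, psql prints a "Time: ..." line after every statement. And \watch re-executes the query currently in the buffer at a fixed interval, here every five seconds:

SELECT now();
\watch 5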

 

Continue reading "PGSqlPhriday #006: One Thing You Wish You Knew While Learning PostgreSQL: psql commands"

Dynamic content in static websites in Hugo

With people moving away from Twitter, mostly to Mastodon, discovering the new accounts became a problem.

For people in the PostgreSQL community I created a website which lists different social media accounts. This website is part of the "PostgreSQL Person of the Week" interview project; however, the data source is dynamic, and stored in a different repository. This allows me to keep the repository for the website private, but publish the data for the social media links - this data is public anyway. The interview repository is private, because who wants to see upcoming interviews anyway? ;-)

The interview website is made with Hugo, a static website generator. Normally Hugo looks for content, templates, and other data in the current directory - my private repository.

As part of compiling the website, Hugo can fetch external data, in either JSON or CSV format. This uses the getJSON() and getCSV() functions, which can be used in shortcodes, for example.
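A minimal sketch of such a shortcode, assuming the external JSON is a list of objects with name and url fields (the URL is a placeholder):

{{ $accounts := getJSON "https://example.com/accounts.json" }}
<ul>
{{ range $accounts }}
  <li><a href="{{ .url }}">{{ .name }}</a></li>
{{ end }}
</ul>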

 

Continue reading "Dynamic content in static websites in Hugo"

PostgreSQL 95

Someone at FOSDEM 2023 asked the question: "What happens when PostgreSQL rolls over the version number to 95? Will this cause problems like back then in Windows?"

What does that even mean? When Microsoft released the successor to Windows 8.1, they opted for naming it Windows 10, not Windows 9. Because apparently a lot of code out there checks if the version string starts with "Windows 9", and then assumes that the OS is one of the very old ones (Windows 95 or 98). This might not be the only such problem, as another blog article by Microsoft shows: apparently they used "3.95" as the Windows 95 internal version, because of lazy programmers.

You may also remember that there was already a "Postgres95", released around 1994 (shortly before Windows 95 was released). This was before the name changed to "PostgreSQL", and it was the first version with SQL support - before that, POSTQUEL was the query language.

Enough history. And even though the release of PostgreSQL 95 is still decades away, it raises two questions:

  1. Does today's source code work (i.e. compile, and pass the tests) with version number 95?
  2. Does your application work against PostgreSQL 95?
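For the second question, an application can sidestep version-string parsing entirely by checking the numeric version instead:

-- e.g. 150002 for version 15.2; a hypothetical 95.1 would be 950001
SHOW server_version_num;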

 

Continue reading "PostgreSQL 95"

Relational and Non-relational Data: PGSQL Phriday #005

Ryan Lambert asks in this month's PGSQL Phriday:

  • What non-relational data do you store in Postgres and how do you use it?
  • Have you attempted non-relational uses of Postgres that did not work well? What was the problem?
  • What are the biggest challenges with your data, whatever its structure?
  • Bonus: How do you define non-relational data?

Looking over the many databases we operate at work, I can say that we are mostly a "relational shop". Almost every database we have uses relational structures, and does not attempt to go all in on "full NoSQL" or "storing non-relational data". Many of the databases are on the larger side, often in the hundreds of GB or even several TB. We have good people in our dev teams who understand both PostgreSQL and SQL, and who can develop a proper schema for the data and write performant queries.

There are a few exceptions though, but at least for us it's only a small number. Whenever we get data coming in from web requests, we may store it as JSON, and then we use jsonb. Quite often, however, it's rather "storage as JSON", not "querying the JSON in the database": it's just more convenient to keep the data in JSON format all the way, instead of transforming it several times.
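The typical shape of this "storage as JSON" pattern, with made-up table and field names:

CREATE TABLE web_requests (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    received_at timestamptz NOT NULL DEFAULT now(),
    payload     jsonb NOT NULL
);

-- mostly stored as-is, but the occasional lookup still works:
SELECT payload ->> 'user_agent'
  FROM web_requests
 WHERE received_at > now() - interval '1 day';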

 

Continue reading "Relational and Non-relational Data: PGSQL Phriday #005"

Fix LXC network issues in Ubuntu 22.04

Ran into a curious problem while updating the GitHub Actions workflow for a project:

Run sudo lxc exec test-container --env DEBIAN_FRONTEND=noninteractive -- apt-get -y install -y openssh-client openssh-server openssh-sftp-server
Reading package lists...
Building dependency tree...
Reading state information...
openssh-client is already the newest version (1:8.9p1-3).
The following additional packages will be installed:
  libpsl5 libwrap0 ncurses-term publicsuffix python3-distro ssh-import-id wget
Suggested packages:
  molly-guard monkeysphere ssh-askpass ufw
The following NEW packages will be installed:
  libpsl5 libwrap0 ncurses-term openssh-server openssh-sftp-server
  publicsuffix python3-distro ssh-import-id wget
0 upgraded, 9 newly installed, 0 to remove and 0 not upgraded.
Need to get 1371 kB of archives.
After this operation, 7679 kB of additional disk space will be used.
Ign:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 openssh-sftp-server amd64 1:8.9p1-3

...

Ign:9 http://archive.ubuntu.com/ubuntu jammy/main amd64 ssh-import-id all 5.11-0ubuntu1
Err:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 openssh-sftp-server amd64 1:8.9p1-3
  Cannot initiate the connection to archive.ubuntu.com:80 (2620:2d:4000:1::19). - connect (101: Network is unreachable) Cannot initiate the connection to archive.ubuntu.com:80 (2620:2d:4000:1::16). - connect (101: Network is unreachable) Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1562::18). - connect (101: Network is unreachable) Could not connect to archive.ubuntu.com:80 (185.125.190.39), connection timed out Could not connect to archive.ubuntu.com:80 (185.125.190.36), connection timed out Could not connect to archive.ubuntu.com:80 (91.189.91.39), connection timed out

...

E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/s/ssh-import-id/ssh-import-id_5.11-0ubuntu1_all.deb  Cannot initiate the connection to archive.ubuntu.com:80 (2620:2d:4000:1::19). - connect (101: Network is unreachable) Cannot initiate the connection to archive.ubuntu.com:80 (2620:2d:4000:1::16). - connect (101: Network is unreachable) Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1562::18). - connect (101: Network is unreachable)
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Error: Process completed with exit code 100.

For the tests, I'm spinning up an LXC container on the Runner, and then try to install software in it. This specific Runner uses Ubuntu 22.04 (new), and the network connection to archive.ubuntu.com is failing. Another Runner in the same workflow, using Ubuntu 20.04, works fine. 20.04 was the old test setup, 22.04 is the new one. No other changes. But why is it suddenly failing?

 

Continue reading "Fix LXC network issues in Ubuntu 22.04"

GitHub Actions: Node.js 12 actions are deprecated

If you use GitHub Actions to run Workflows and tests, you might have spotted this warning recently:

Node.js 12 actions are deprecated. For more information see: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/. Please update the following actions to use Node.js 16: actions/checkout@v2

This warning means that the named action (here actions/checkout@v2, which checks out the repository into the runner) still runs on the deprecated Node.js 12 runtime. This transition has been going on since early 2022, and by summer 2023 GitHub plans to move all actions to Node.js 16.
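On the workflow side, the fix is usually just bumping the action to a release that runs on Node.js 16:

# before: actions/checkout@v2 runs on Node.js 12
- uses: actions/checkout@v2
# after: actions/checkout@v3 runs on Node.js 16
- uses: actions/checkout@v3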

 

Continue reading "GitHub Actions: Node.js 12 actions are deprecated"

GitHub Actions: The `set-output` command is deprecated and will be disabled soon

If you use GitHub Actions to run Workflows and tests, you might have spotted this warning recently:

The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/

This warning means that GitHub deprecates a certain syntax which populates output variables, and will disable it by the end of May 2023.
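The migration itself is a one-line change in the step that sets the output (the variable and value here are made up):

# before: deprecated workflow command
echo "::set-output name=version::1.2.3"

# after: append to the environment file instead
echo "version=1.2.3" >> "$GITHUB_OUTPUT"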

 

Continue reading "GitHub Actions: The `set-output` command is deprecated and will be disabled soon"