
Read a JSON text into a variable in Ansible without parsing it

For one project I need to insert the content of a local file into another file on the remote system, and the first file happens to be JSON. The JSON file is in compact format (jq --compact-output) and is supposed to stay that way. When Ansible reads the file, it detects that the content is JSON and parses it into the variable - and along the way expands the compact format. Not what I want.


Continue reading "Read a JSON text into a variable in Ansible without parsing it"

Does an openHAB Item exist?

In a more complex openHAB2 Rule I'm writing, I need to find out whether a certain Item exists. If I try to access an Item that does not exist, the Rule will fail. Not good.

I could pass a parameter to the framework which builds the Rule, but that is cumbersome and requires changes in several places.

Let's see if I can have the Rule figure this out.


Continue reading "Does an openHAB Item exist?"

Hanging script at the end of Python multiprocessing

While writing and stress-testing a Python script which uses the multiprocessing library, I ran into the problem that the script occasionally hangs at the end. It literally runs past the last line of code and then hangs.

In this script I'm using Queues and Events, so I made sure that I properly close the queues in the forked/spawned (I tried both) processes and also drain all queues in the parent. Nevertheless, occasionally - rarely, but it happens - the script hangs. I checked the process list, the remaining threads, and the queues: all fine. Still ...

When I hit Ctrl+C, I get the following stack trace:

^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python3.9/multiprocessing/", line 300, in _run_finalizers
  File "/usr/lib/python3.9/multiprocessing/", line 224, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/usr/lib/python3.9/multiprocessing/", line 201, in _finalize_join
  File "/usr/lib/python3.9/", line 1053, in join
  File "/usr/lib/python3.9/", line 1073, in _wait_for_tstate_lock
    if lock.acquire(block, timeout):

(I can't change the Python 3.9, as I don't control the target system where this is supposed to run.)

There's nothing left to join; the only two remaining threads are MainThread and QueueFeederThread.
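Checking the remaining threads is a one-liner with threading.enumerate(); a minimal sketch:

```python
import threading

# Snapshot of all threads still alive at this point in the script.
alive = [t.name for t in threading.enumerate()]
print(alive)  # in the hanging state described above: MainThread and QueueFeederThread
```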


After some more debugging I had the simple idea to not only empty the queue before finishing the script, but to also set the queue variable to None:

worker_queue = None

This helps: the script no longer hangs at the end. At a later point I'll check whether the problem is fixed in newer Python versions.