Tracking ISS Position with TLE, JavaScript and CesiumJS

I wrote a short example of one way to visualise the orbits of satellites and other space objects. Here, I use CesiumJS, a wonderful
piece of software that you can tap into by simply referencing its JavaScript library and stylesheets in your web page. You'll
need to get an access key from their website to use it, though.

The rendering of the scenes happens fully on the client side, inside the browser, and can require a substantial amount of memory
and computing resources if you work with complex scenes. However, this one is simple enough to be enjoyed on systems with not
too many resources.

A TLE, which stands for “Two-Line Element set”, is a specially formatted text record containing two lines of orbital data for each tracked object, plus a header line giving the object's name. TLEs can be obtained from the Celestrak website and typically look like this:

ISS (ZARYA)             
1 25544U 98067A   19331.55255787  .00001394  00000-0  32378-4 0  9995
2 25544  51.6465 271.3494 0006428 331.6839  17.4235 15.50048091200593
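As a small illustration of what those two lines encode, here is a sketch (not part of the original example) that extracts a few elements from line 2 of the TLE above using fixed-column slicing, and derives the orbital period and semi-major axis from the mean motion. The column offsets follow the standard TLE format.

```python
import math

# TLE line 2 for the ISS, as shown above. Note these values were valid in
# late 2019 and will be stale by now; fetch fresh ones from Celestrak.
LINE2 = "2 25544  51.6465 271.3494 0006428 331.6839  17.4235 15.50048091200593"

def parse_tle_line2(line2):
    """Extract a few orbital elements from TLE line 2 (fixed-column format)."""
    return {
        "inclination_deg": float(line2[8:16]),
        "raan_deg": float(line2[17:25]),
        "eccentricity": float("0." + line2[26:33]),  # decimal point is implied
        "mean_motion_rev_per_day": float(line2[52:63]),
    }

elements = parse_tle_line2(LINE2)
period_min = 1440.0 / elements["mean_motion_rev_per_day"]  # minutes per orbit

# Semi-major axis via Kepler's third law: a = (mu / n^2)^(1/3)
MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2
n_rad_s = elements["mean_motion_rev_per_day"] * 2 * math.pi / 86400.0
a_km = (MU / n_rad_s ** 2) ** (1.0 / 3.0)

print(f"Inclination: {elements['inclination_deg']:.4f} deg")
print(f"Orbital period: {period_min:.1f} min")
print(f"Semi-major axis: {a_km:.0f} km (altitude ~{a_km - 6371:.0f} km)")
```

Full position prediction needs a proper SGP4 propagator, which is what libraries behind tools like CesiumJS-based trackers use under the hood.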

CesiumJS is very cool and lets you visualise data with a few lines of code, though you can of course also write complex applications with it.

Here’s a visualisation of what the code does. It also lets you zoom, pan and rotate.
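One convenient way to hand such data to CesiumJS is CZML, Cesium's JSON-based scene description format. Below is a minimal sketch in Python that writes a CZML file with a single point entity; the coordinates are placeholders for illustration, not a real propagated ISS position.

```python
import json

# A minimal CZML document: a header packet plus one point entity.
czml = [
    {"id": "document", "name": "ISS demo", "version": "1.0"},
    {
        "id": "iss",
        "name": "ISS (ZARYA)",
        # CZML order is longitude, latitude, height in metres (placeholder values)
        "position": {"cartographicDegrees": [-75.0, 40.0, 408000.0]},
        "point": {"pixelSize": 10},
    },
]

with open("iss.czml", "w") as f:
    json.dump(czml, f, indent=2)

print(czml[0]["version"])  # → 1.0
```

In the browser, the file can then be loaded with `viewer.dataSources.add(Cesium.CzmlDataSource.load('iss.czml'))`.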

RTL-SDR and ADS-B Messages

I am currently working with my RTL-SDR device to catch ADS-B messages from nearby airplanes.
Luckily, there’s an app for that called dump1090 that works with my device out of the box.
I run it on FreeBSD 12.1 and it catches the messages well. Unfortunately, the project seems a bit dated and I am not sure it is still maintained. In any case, my problem is with the way the app produces its JSON files and uses Google Maps to visualise the captured data. So I am currently rewriting part of it to produce a GeoJSON-formatted output file and a web page that uses Mapbox (OpenStreetMap) instead of Google Maps.
The C code currently compiles on Linux but not (yet) on FreeBSD, but I am confident I can have a working base version by the end of next week (depending on my schedule, of course).
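The GeoJSON side of the rewrite can be sketched like this. The aircraft records below are my own illustration of decoded positions, not dump1090's actual output format, but the wrapping into a FeatureCollection is exactly what a map library like Mapbox expects.

```python
import json

# Hypothetical decoded aircraft positions (illustrative field names, not
# dump1090's real schema).
aircraft = [
    {"hex": "4b1617", "flight": "SWR123", "lat": 47.45, "lon": 8.55, "alt_ft": 34000},
    {"hex": "3c6444", "flight": "DLH9CK", "lat": 47.60, "lon": 8.90, "alt_ft": 28000},
]

def to_geojson(records):
    """Wrap decoded positions in a GeoJSON FeatureCollection.
    Note GeoJSON uses [longitude, latitude] order, which is easy to get backwards."""
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [r["lon"], r["lat"]]},
            "properties": {"hex": r["hex"], "flight": r["flight"], "alt_ft": r["alt_ft"]},
        }
        for r in records
    ]
    return {"type": "FeatureCollection", "features": features}

print(json.dumps(to_geojson(aircraft), indent=2))
```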

Here’s a sneak peek at the current layout (which will be improved once the backend and data display work well enough).

The goal of this project is to learn and demonstrate how to visualise real-time data and to learn how to work with signals from antennas – but that will be another project…

Visualizing JSON Data with Pandas and Matplotlib

I wrote a quick example program in Python. The code consumes data in JSON format, uses Pandas to wrangle the data and Matplotlib to display it.
It is a Jupyter notebook, but can easily be adapted to run standalone. Find the code on GitHub.

The data is from the SWPC website and contains monthly predictions of the sunspot number and the solar 10.7 cm flux – this data is important, for example, for radio amateurs.
The data shown is valid as of November 11, 2019 but will change over time; the current data can be found on the SWPC website.
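The notebook's approach can be sketched as follows. The sample records below are made up in the shape of SWPC's prediction feed (the field names are an assumption; check the live feed before relying on them), whereas the real notebook reads the current JSON from the SWPC site.

```python
import io

import matplotlib
matplotlib.use("Agg")  # render off-screen so no display is needed
import matplotlib.pyplot as plt
import pandas as pd

# Made-up excerpt in the shape of SWPC's predicted solar-cycle JSON.
raw = """[
  {"time-tag": "2019-11", "predicted_ssn": 3.2, "predicted_f10.7": 68.0},
  {"time-tag": "2020-11", "predicted_ssn": 8.9, "predicted_f10.7": 72.1},
  {"time-tag": "2021-11", "predicted_ssn": 27.4, "predicted_f10.7": 83.5},
  {"time-tag": "2022-11", "predicted_ssn": 53.1, "predicted_f10.7": 101.8}
]"""

df = pd.read_json(io.StringIO(raw))
df["time-tag"] = pd.to_datetime(df["time-tag"])
df = df.set_index("time-tag")

# Plot both series against the month and save the figure to disk
ax = df.plot(y=["predicted_ssn", "predicted_f10.7"], marker="o",
             title="Predicted sunspot number and 10.7 cm flux")
ax.set_xlabel("Month")
plt.savefig("solar_cycle_prediction.png")
print(len(df))
```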

As you can see, the activity is predicted to be very low until December 2022. The new solar cycle, Solar Cycle 25, is believed to have either already started or to be starting before the end of 2019.

Basic DevOps with VirtualBox, Cron and Rsync

In a previous article, I described how I migrated some websites from a bare metal server to a virtual machine running on the same server.

It’s actually three virtual machines. One runs the development environment, one the test environment and one the production environment. The three are basically identical in setup and installed software, but production has more resources assigned to it (two processors instead of one, and more virtual disk space for data).

The next challenge was how to automate the development process and avoid having to manually copy the files each time an update is performed.

There are several ways to do this, of course, but ideally they all have something to do with applying DevOps methodology.
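The rsync-based route hinted at in the title can be sketched like this; the host name and paths are placeholders, not my actual setup, and the command is assembled with a dry run by default so it can be previewed safely.

```python
import subprocess

SRC = "/var/www/dev/"                       # placeholder: dev web root
DEST = "deploy@test-vm:/var/www/html/"      # placeholder: test VM over SSH

def build_rsync_command(src, dest, dry_run=True):
    """Assemble an rsync invocation; --delete mirrors removals as well."""
    cmd = ["rsync", "-az", "--delete"]
    if dry_run:
        cmd.append("--dry-run")  # preview what would change, copy nothing
    cmd += [src, dest]
    return cmd

cmd = build_rsync_command(SRC, DEST)
print(" ".join(cmd))

# A cron entry could then run the real sync nightly, e.g.:
#   0 2 * * * rsync -az --delete /var/www/dev/ deploy@test-vm:/var/www/html/
# subprocess.run(cmd, check=True)  # uncomment to actually execute the sync
```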


Running my own BOINC Server

BOINC is an open-source software platform provided by the University of California, Berkeley, that lets people contribute their computers' computing time to calculations for scientific projects.
Examples of such projects are Einstein@Home, SETI@Home and LHC@Home, among many others.

The BOINC client can run standalone or in connection with Oracle’s VirtualBox; some projects indeed require VirtualBox to run.

I have been running the BOINC client software for several years now on different platforms, including Fedora, Ubuntu, FreeBSD and Windows 10, and have contributed to a handful of science projects, mostly LHC@Home.

BOINC also has a server part that lets you host your own science projects. If you have a lot of computations to do and need additional computing power, you might want to look at this solution.
A BOINC server consists of several parts, such as an Apache HTTP server and a MySQL database server, and setting these up by hand is a bit tedious. Thankfully, volunteers provide Docker containers and VirtualBox VMs you can download and use. Details can be found here.


Publishing to Twitter with RabbitMQ and Pika

The goal was to replace the existing solution, which published to Twitter directly from the script producing the maps of earthquake locations, with a new one that publishes to Twitter asynchronously and implements a better throttling mechanism. The old way ran into Twitter rate limits several times, which led to the account being blocked or shadow-banned because it was identified as publishing spam during periods of high seismic activity.

Now, the application, after having successfully created the maps, places a message in a RabbitMQ queue. The queue is checked periodically and each message is published to Twitter after a short random delay of between 1 and 4 seconds.
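The pattern can be sketched without a broker. In the sketch below, an in-memory queue stands in for RabbitMQ and post_tweet is a placeholder for the actual Twitter API call, but the enqueue-then-throttled-drain logic is the same idea.

```python
import queue
import random
import time

outbox = queue.Queue()  # stand-in for the RabbitMQ queue
published = []

def post_tweet(text):
    """Placeholder for the real Twitter API call."""
    published.append(text)

def enqueue_map_notification(event_id, magnitude):
    """Called by the map-generating code after a map is created (illustrative names)."""
    outbox.put(f"Earthquake {event_id}: magnitude {magnitude}, map attached")

def drain_outbox(delay_range=(1.0, 4.0)):
    """Publish queued messages, pausing a random delay between posts
    to stay clear of Twitter's rate limits."""
    while not outbox.empty():
        post_tweet(outbox.get())
        if not outbox.empty():
            time.sleep(random.uniform(*delay_range))

enqueue_map_notification("us7000abcd", 5.6)
enqueue_map_notification("us7000abce", 4.9)
drain_outbox(delay_range=(0.01, 0.02))  # short delays just for the demo
print(len(published))  # → 2
```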

You can see the published Tweets here. This is the development and test account and the application is still in development mode, so the links may or may not work.

RabbitMQ and Pika – Error Messages

So I was trying to do some work with RabbitMQ using Python and Pika. Namely, I want to write message queues for use in some of my applications that let me do stuff asynchronously, so that the Python program does not block for any significant amount of time.

For that I installed rabbitmq-server and the package python3-pika on an Ubuntu 19.04 box.

Then, as I always do, I tested the installation with some sample code that I assumed would work. The most likely candidate for this is the “Hello World” example from the RabbitMQ website. It should work right out of the box, right?


#!/usr/bin/env python
import pika

# Open a connection to RabbitMQ running on localhost
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Make sure the queue exists before publishing to it
channel.queue_declare(queue='hello')

channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()


#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declaring the queue again is safe; it is only created if it does not exist
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(queue='hello', on_message_callback=callback, auto_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

Numba Example

An example program I wrote (actually adapted from an existing one) some time ago, showing the speed-up achieved when switching from CPU to GPU computation; it also served to test the setup and benchmark my systems.

The example can be run as a Jupyter notebook or in a terminal.

The program generates a fractal image, the Mandelbrot set, and measures the time the computation takes.
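The CPU side of the computation can be sketched like this. It is a minimal stand-in for the actual benchmark, not the original code: the real program renders a much larger image and has a GPU version via numba.cuda. The sketch falls back to plain Python when Numba is not installed, so it runs either way.

```python
import time

try:
    from numba import njit  # JIT-compile the hot loop when Numba is available
except ImportError:         # fall back to plain Python so the sketch still runs
    def njit(func):
        return func

@njit
def mandelbrot_escape(creal, cimag, max_iters):
    """Return the iteration at which |z| exceeds 2, or max_iters if it never does."""
    real, imag = 0.0, 0.0
    for i in range(max_iters):
        real, imag = real * real - imag * imag + creal, 2.0 * real * imag + cimag
        if real * real + imag * imag > 4.0:
            return i
    return max_iters

def render(width, height, max_iters=50):
    """Compute escape counts over the region [-2, 1] x [-1, 1]."""
    image = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            cr = -2.0 + 3.0 * x / width
            ci = -1.0 + 2.0 * y / height
            image[y][x] = mandelbrot_escape(cr, ci, max_iters)
    return image

start = time.perf_counter()
img = render(120, 80)
print(f"computed in {time.perf_counter() - start:.3f}s")
print(img[40][10])  # c = -1.75 + 0i lies inside the set, so this is max_iters
```

The GPU variant decorates the kernel with numba.cuda.jit instead and launches it over a grid of threads, which is where the large speed-up comes from.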

Image generated by CPU and GPU computations

Unsurprisingly, there is a considerable speed-up when switching from CPU to GPU.
On my systems the CPU version takes around 4.4 s to compute the image, while the GPU version does it in around 0.3 s – a considerable time saver, I’d say.

The other interesting thing here is how easy it is to use your NVIDIA card with Python and take advantage of these speed-ups.

My code is available on GitHub; feel free to use, comment or share.

Migrating Websites to VirtualBox™ – Part Two

Earthquakes in New Zealand, Sep. 7, 2019

As I announced a few days ago, I have been looking into how to migrate my websites to a virtual server environment using VirtualBox.

The installation and configuration were pretty straightforward and basically the same as on the original websites; the operating system remains Ubuntu 18.04 LTS and the software environment is identical. However, this was a good opportunity to clean up some things that had become outdated.

My company website now runs on a newer version of the Zotonic Erlang CMS (0.51 at the time of writing). There was no problem migrating the content and database from the previous version (namely 0.39).

My website is still running on the Yaws web server, but some of the data acquisition code needed to be updated, as the source format changed. Thankfully, we are close to the solar minimum of Solar Cycle 24, so there is time for a bigger update of how data on solar events is collected and displayed. For the time being, SDO videos are no longer produced; there was an upstream API change, which is fixed now, but I decided to redo the whole process of how this data is acquired and treated.

The site is also still running on the Yaws web server, and a handful of sources for earthquake data, namely Iceland, Turkey, Mexico, Switzerland, the Philippines and some others, were ditched, as they make it exceedingly difficult to acquire the data and I’ve decided it’s not worth my time. I will spend my efforts on improving the data display for the remaining data sources.

The website is also running on the Yaws web server and currently only displays data on near-Earth objects.

Now, the interesting part will be to see how the VirtualBox environment behaves in production and how easy it is to do DevOps style development with it.

Migrating Websites to VirtualBox™

I am currently running some websites on bare metal servers and while I am not prepared (yet) to move these to virtual servers in the cloud, I do want to virtualize them and run them on Oracle’s VirtualBox.

Most of the migration is straightforward, of course. I duplicated the Ubuntu 18.04 LTS environment in a VirtualBox and moved the configuration and files over. For the data collected I created a separate storage container which expands as needed.

There was only one issue, in networking. I used a bridged adapter in the network settings; however, the box was only reachable from the host operating system, not from other machines. That is fixed now, though I am not sure how – it’s one of those “change the settings multiple times until it works” type of fixes.

Currently the development and test environments are moved, and the development environment is set up so I can edit the files and connect to the database. Now the only thing left to figure out is how best to publish changes from development to test to production.
This should happen with the least possible effort and the highest possible degree of automation. Still working on that…