Then I saved this file and enabled the service with
sudo systemctl enable atlas-jupyter.service
After that I could start it by typing
sudo systemctl start atlas-jupyter.service
The Jupyter server now starts without problems every time I boot the virtual machine. I don’t have to log in, and I can even start the virtual machine in headless mode, meaning no console (GUI) is necessary.
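For reference, a unit file for this kind of setup generally looks something like the sketch below. This is only an illustration, not necessarily the exact file I saved: the user, working directory and jupyter command are placeholders you would adapt to your own VM.

[Unit]
Description=Jupyter notebook server (ATLAS OpenData)
After=network.target

[Service]
# User, working directory and command are placeholders; adjust to the VM setup.
User=atlas
WorkingDirectory=/home/atlas
ExecStart=/usr/bin/jupyter notebook --no-browser --ip=0.0.0.0
Restart=on-failure

[Install]
# This section is what makes "systemctl enable" start the service at boot.
WantedBy=multi-user.target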
This is a quick note for people using ATLAS Experiment OpenData with Oracle VirtualBox VMs on Fedora. The installation of the ATLAS VM is wonderfully described here. I just have two remarks: one for users who run Oracle VirtualBox on Fedora (31) and one about the VM itself.
First, you’ll need to install the VirtualBox Extension Pack, because the VM needs USB 2.0 support enabled in order to start. Setting the USB setting to USB 1.1 will not fix this: the VM will start, but will get stuck during boot. On Ubuntu, the VirtualBox Extension Pack is provided as an Ubuntu package and can be installed with the command
sudo apt install virtualbox-ext-pack
On Fedora there is no such package, so you’ll have to download the VirtualBox Extension Pack from the website via the link above and install it manually. This is quite easy, though.
Click the download link and choose Open with Oracle VM VirtualBox. After installation the ATLAS VM starts without issue.
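If you prefer the command line, the Extension Pack can also be installed with VBoxManage. The file name below is a placeholder for whatever version you downloaded:

sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-&lt;version&gt;.vbox-extpack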
Particle physics is kind of a hobby of mine, and for some time now it has even been possible to get access to some of the data generated with the LHC accelerator at CERN. One such dataset is from the LHCb experiment, which gives you access to data about decays of B mesons to three hadrons. The largest file is some 636 MB (B2HHH_MagnetDown.root), which I chose to start exploring.
Exploring means that at the start I do not know exactly what kind of data is in there, so I decided to start by simply reading out some data and drawing graphs with it. For starters, I am not primarily interested in the physics, but in how to work with the data and what I can use it for. So I study the tools and the programming, but it is clear that while analysing the data I will also have to study the physics behind it; otherwise it is not possible to make useful evaluations.
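As a very first step, a few lines like the following are enough to peek into the file and draw a histogram. This is just a minimal sketch using the uproot and Matplotlib libraries; the tree name DecayTree and the branch H1_PX are assumptions, so print the keys first to see what the file actually contains.

import uproot
import matplotlib.pyplot as plt

# Open the ROOT file and list its contents. The tree and branch names used
# below are assumptions; print the keys to see what is really inside.
events = uproot.open("B2HHH_MagnetDown.root")
print(events.keys())

tree = events["DecayTree"]    # assumed tree name
print(tree.keys())            # list the available branches

# Read one branch into a Pandas DataFrame and draw a simple histogram.
df = tree.arrays(["H1_PX"], library="pd")   # assumed branch name
df["H1_PX"].plot.hist(bins=100)
plt.xlabel("H1_PX")
plt.show()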
The rendering of the scenes happens fully on the client side inside the browser and can require a substantial amount of memory and computing resources if you work with complex scenes. However, this one is simple enough to be enjoyed on systems with not too many resources.
A TLE file, where TLE stands for “Two-Line Elements”, is a specially formatted text file containing, for each object being tracked, two lines of orbital data plus a header line giving the name of the object. It can be obtained from the Celestrak website.
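Fetching and splitting such a file in Python can look roughly like this. The URL is an assumption: Celestrak offers several predefined groups, and “stations” is used here only as an example.

import requests

# Fetch a TLE set from Celestrak; the group name "stations" is an example.
url = "https://celestrak.org/NORAD/elements/gp.php?GROUP=stations&FORMAT=tle"
lines = requests.get(url, timeout=10).text.strip().splitlines()

# Each object occupies three lines: a name header followed by the two element lines.
for name, line1, line2 in zip(lines[0::3], lines[1::3], lines[2::3]):
    print(name.strip())
    print(line1)
    print(line2)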
CesiumJS is very cool and lets you visualise data with a few lines of code. Of course, you can also write complex applications with it. The point to consider is that the rendering of the scenes you create happens fully on the client side and can require a large amount of RAM and processing power.
Here’s a visualisation of what the code does. It also lets you zoom, pan and rotate.
I’m currently working with my RTL-SDR device to catch ADS-B messages from nearby airplanes. Luckily, there’s an app for that called dump1090 that works with my device out of the box. I run it on FreeBSD 12.1 and it catches the messages well. Unfortunately, it is a bit dated and I am not sure the app is still maintained. In any case, the problem is with the way the app produces JSON files and uses Google Maps to visualize the captured data. So I am currently rewriting part of it to produce a GeoJSON-formatted output file and a web page that uses Mapbox (OpenStreetMap) instead of Google. The C code currently compiles on Linux but not (yet) on FreeBSD, but I am confident I can have a working base version by the end of next week (depending on my schedule, of course).
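The rewrite itself is in C, but to illustrate the kind of output I am aiming for, here is a rough Python sketch of turning decoded aircraft positions into GeoJSON. The field names and values are made up for illustration.

import json

# Hypothetical aircraft records, roughly in the shape dump1090 reports them
# (the field names and values here are made up for illustration).
aircraft = [
    {"hex": "4b1805", "flight": "SWR123", "lat": 46.95, "lon": 7.45, "altitude": 36000},
]

# Convert each aircraft into a GeoJSON Feature with a Point geometry.
features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [a["lon"], a["lat"]]},
        "properties": {"hex": a["hex"], "flight": a["flight"], "altitude": a["altitude"]},
    }
    for a in aircraft
]

geojson = {"type": "FeatureCollection", "features": features}

# Write the FeatureCollection to a file that a Mapbox map can load directly.
with open("aircraft.geojson", "w") as f:
    json.dump(geojson, f, indent=2)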
Here’s a sneak peek at the current layout (which will be improved once the backend and data display work well enough).
The goal of this project is to learn and demonstrate how to visualise real-time data and to learn how to work with signals from antennas – but that will be another project…
I wrote a quick example program in Python. The code consumes data in JSON format, uses Pandas to process the data and Matplotlib to display it. It is a Jupyter notebook, but can easily be adapted to work standalone. Find the code on GitHub.
The data is from the SWPC website and contains monthly predictions of the sunspot number and the solar 10.7 cm flux – this data is important, for example, for radio amateurs. The data is valid as of November 11, 2019 but will change over time. The current data can be found at services.swpc.noaa.gov.
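In essence, reading and plotting the predictions takes only a few lines like the sketch below. The exact endpoint and column names are assumptions; check df.columns against what the JSON actually contains.

import pandas as pd
import matplotlib.pyplot as plt

# SWPC publishes the predicted solar cycle as JSON; the exact endpoint and
# column names below are assumptions, inspect df.columns to be sure.
URL = "https://services.swpc.noaa.gov/json/solar-cycle/predicted-solar-cycle.json"

df = pd.read_json(URL)
df["time-tag"] = pd.to_datetime(df["time-tag"])
df = df.set_index("time-tag")

# Plot the predicted monthly sunspot number and the 10.7 cm radio flux.
df[["predicted_ssn", "predicted_f10.7"]].plot(subplots=True, figsize=(8, 6))
plt.show()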
As you can see, the activity is predicted to be very low until December 2022. The new solar cycle, Solar Cycle 25, is believed to have either already started or to be starting by the end of 2019.
In a previous article, I described how I migrated some websites from a bare metal server to a virtual machine running on the same server.
It’s actually three virtual machines: one runs the development environment, one the test environment, and one the production environment. The three are basically identical in setup and installed software, but production has more resources assigned to it (two processors instead of one, and more virtual disk space for data).
The next challenge was how to automate the development process and avoid having to manually copy the files each time an update is performed.
There are several ways to do this, of course, but ideally they all involve some form of DevOps methodology.
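Just to make the idea concrete before going into the tooling: the most basic variant would be a small script that pushes the files to a VM over SSH. The host name, user and paths below are placeholders, and this is only one of many possible approaches.

import subprocess

# Push the site files to a VM with rsync over SSH.
# Host name, user and paths are placeholders.
def deploy(host: str, src: str = "./site/", dest: str = "/var/www/html/") -> None:
    subprocess.run(
        ["rsync", "-avz", "--delete", src, f"deploy@{host}:{dest}"],
        check=True,
    )

deploy("test-vm.example.org")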
BOINC is open-source software provided by the University of California, Berkeley, and is intended to let people contribute their computers’ computing time to running calculations for scientific projects. Examples of such projects are Einstein@Home, SETI@Home and LHC@Home, among many others.
The BOINC client can be run standalone or in connection with Oracle’s VirtualBox; some projects indeed require VirtualBox to run.
I have been running the BOINC client software for several years now on different platforms like Fedora, Ubuntu, FreeBSD and Windows 10 and have contributed to a handful of science projects, but mostly to LHC@Home.
BOINC also has a server part that lets you host your own science projects. If you have a lot of computations to do and need additional computing power, you might want to look at this solution. The BOINC server consists of several parts, such as the Apache HTTP server and a MySQL database server, so setting it up by hand is a bit tedious. Thankfully, volunteers provide Docker containers and VirtualBox VMs you can download and use. Details can be found here.
The goal was to replace the existing solution, which published to Twitter directly from the script producing the maps of earthquake locations, with a new one that publishes to Twitter asynchronously and implements better throttling. The old approach ran into Twitter rate limits several times, which led to the account being blocked or shadow-banned because it was identified as publishing spam during periods of high seismic activity.
Now, after having successfully created the maps, the application places a message in a RabbitMQ queue. The queue is checked periodically, and each message is published to Twitter after a short random delay of between 1 and 4 seconds.
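A minimal sketch of what the consuming side could look like with Pika is shown below. The queue name is an assumption and the Twitter call is a placeholder; only the queue handling and the random delay are the point here.

import json
import random
import time

import pika

def publish_to_twitter(payload):
    # Placeholder: the real Twitter call lives elsewhere in the application.
    print("Would tweet:", payload)

# Connect to a local RabbitMQ broker; the queue name "tweets" is an assumption.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="tweets", durable=True)

def on_message(ch, method, properties, body):
    payload = json.loads(body)
    # Wait a short random time before publishing to stay well under the rate limits.
    time.sleep(random.uniform(1, 4))
    publish_to_twitter(payload)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="tweets", on_message_callback=on_message)
channel.start_consuming()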
You can see the published tweets here. This is the development and test account, and the application is still in development mode, so the links may or may not work.
So I have been doing some work with RabbitMQ using Python and Pika. Namely, I want to use message queues in some of my applications so that I can do things asynchronously and the Python program does not block for any significant amount of time.
For that I installed rabbitmq-server and the package python3-pika on an Ubuntu 19.04 box.
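To check that the setup works, a few lines are enough to publish a first test message. This is just a sketch; the queue name and message body are arbitrary examples.

import pika

# Connect to the local RabbitMQ server installed via rabbitmq-server.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare a queue (the name "hello" is just an example) and publish a test message.
channel.queue_declare(queue="hello")
channel.basic_publish(exchange="", routing_key="hello", body="Hello, RabbitMQ!")
print("Sent test message")

connection.close()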