Then I saved this file and enabled the service with
sudo systemctl enable atlas-jupyter.service
After that I could start it by typing
sudo systemctl start atlas-jupyter.service
The Jupyter server now starts reliably every time I boot the virtual machine, without anyone having to log in. This also means the VM can be started in headless mode, i.e. without a console (GUI).
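For completeness, this is roughly what such a unit file can look like. The user name, working directory and port below are placeholders, not necessarily the values from my setup:

```ini
[Unit]
Description=Jupyter notebook server (example)
After=network.target

[Service]
# Adjust user, working directory and port to your setup
User=atlas
WorkingDirectory=/home/atlas
ExecStart=/usr/bin/jupyter notebook --no-browser --ip=0.0.0.0 --port=8888
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

If you edit a unit file later, run `sudo systemctl daemon-reload` so systemd picks up the changes.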
This is just a quick note for people using ATLAS Experiment OpenData with Oracle VirtualBox VMs on Fedora. The installation of the ATLAS VM is wonderfully described here. I just have two quick notes: one for users running Oracle VirtualBox on Fedora (31), and one for the VM itself.
First, you’ll need to install the VirtualBox Extension Pack, because the VM needs USB 2.0 support enabled in order to start. Setting the USB settings to USB 1.1 will not fix this: the VM will start, but will get stuck during boot. On Ubuntu, the VirtualBox Extension Pack is provided as an Ubuntu package and can be installed with the command
sudo apt install virtualbox-ext-pack
On Fedora there is no such package, so you’ll have to download the VirtualBox Extension Pack from the Oracle website (the link above) and install it manually. This is quite easy, though.
Click the download link and choose “Open with Oracle VM VirtualBox”. After the installation, the ATLAS VM starts without issue.
Particle physics is kind of a hobby of mine, and for some time now it has even been possible to get access to some of the data generated with the LHC accelerator at CERN. One such dataset is from the LHCb experiment, which gives you access to data about decays of B-mesons to three hadrons. The largest file (B2HHH_MagnetDown.root, some 636 MB) is the one I chose to start exploring.
Exploring means that at the start I do not know exactly what kind of data is in there, so I decided to begin by simply reading out some data and drawing graphs with it. For starters, I am not primarily interested in the physics, but in how to work with the data and what I can use it for. So I am studying the tools and the programming; but it is clear that while I analyse the data, I will also have to study the physics behind it, otherwise it is not possible to make useful evaluations.
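As a concrete example of the kind of evaluation I mean: the file contains the momentum components of the three hadrons, so one of the first quantities to compute is the invariant mass of the three-hadron system. A minimal numpy sketch of that calculation; the assumption that all three hadrons are kaons is mine, and whether that is appropriate has to come from the actual physics:

```python
import numpy as np

KAON_MASS = 493.677  # MeV/c^2 (PDG); assuming all hadrons are kaons

def invariant_mass(momenta, mass=KAON_MASS):
    """Invariant mass of a multi-particle system.

    `momenta` is a list of (px, py, pz) triples (floats or numpy arrays),
    one per particle. Each particle is assigned rest mass `mass`, so its
    energy is E = sqrt(px^2 + py^2 + pz^2 + m^2). Natural units (c = 1).
    """
    e_tot = sum(np.sqrt(px**2 + py**2 + pz**2 + mass**2)
                for px, py, pz in momenta)
    px_tot = sum(px for px, _, _ in momenta)
    py_tot = sum(py for _, py, _ in momenta)
    pz_tot = sum(pz for _, _, pz in momenta)
    return np.sqrt(e_tot**2 - px_tot**2 - py_tot**2 - pz_tot**2)

# Toy check: two back-to-back particles with |p| = 1000 MeV each,
# so the total momentum cancels and m = 2 * sqrt(|p|^2 + m_K^2).
m = invariant_mass([(1000.0, 0.0, 0.0), (-1000.0, 0.0, 0.0)])
```

With arrays instead of floats, the same function computes the mass for every event in one go, which is exactly what is needed for a histogram.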
I was lucky enough to be able to participate in the 3rd FCC Workshop from January 13 to 17, 2020, and got a first-hand look behind the scenes of the planning of the Future Circular Collider (FCC), which is supposed to come after the current Large Hadron Collider (LHC) has gone through its High Luminosity (HL) upgrades and needs to be replaced sometime in the beginning of the 2040s. That sounds like a long time, but as was pointed out, there is a lot of civil engineering work to be done – namely digging the 100 km circumference tunnels – and this needs to be started soon.
FCC is actually several colliders, all of which together are referred to as FCC-INT. The first to be implemented is the FCC-ee collider, which is an electron-positron collider. Here, there is some competition between the circular FCC-ee (at CERN), the linear ILC (in Japan) and CLIC (at CERN) designs.
If you’re new to this subject I recommend reading Circular and Linear e+e− Colliders: Another Story of Complementarity by Alain Blondel and Patrick Janot (arxiv.org:1912.11871). In a nutshell, FCC-ee is the front-runner if you plan to do more than just Higgs physics, namely EW, flavour and top physics as well as Beyond Standard Model (BSM) physics, and if you want to keep the road open to a proton-proton machine (a hadron collider) called FCC-hh. Current thinking seems to favour FCC-ee, with possible synergies should the ILC (or even CLIC) be built in Japan.
What I profited from most in these 5 intense days was getting a number of points drawn that I can now connect: especially in QCD and EFT, BSM physics, but also collider technologies, the software used to do particle physics, and the data acquisition (DAQ) process.
I now have a much better general understanding of the actual data that is being collected. Unfortunately, with my Windows 10 notebook, I couldn’t really participate in the software workshop – this is corrected now: it’s running Fedora 31, which turns out to be noticeably faster…
I enjoyed my stay at CERN. Nice international atmosphere. Some buildings could use a make-over, though :-).
The rendering of the scenes happens fully on the client side inside the browser and can require a substantial amount of memory and computing resources if you work with complex scenes. However, this one is simple enough to be enjoyed on systems with not too many resources.
TLE stands for “Two-Line Elements”: a specially formatted text file containing, for each object being tracked, two lines of orbital data plus a header line giving the object’s name. It can be obtained from the Celestrak website and typically looks like this:
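A small Python sketch of how such a line can be parsed. The fixed column positions follow the standard TLE format, and I use the widely circulated ISS example TLE line as stand-in data, not my own download:

```python
# Line 2 of the well-known ISS example TLE, for illustration only;
# real data comes from Celestrak.
LINE2 = "2 25544  51.6416 247.4627 0006703 130.5360 325.0288 15.72125391563537"

def tle_checksum(line):
    """TLE checksum: sum of all digits, '-' counting as 1, modulo 10."""
    total = sum(int(c) if c.isdigit() else 1 if c == '-' else 0
                for c in line[:-1])
    return total % 10

def parse_line2(line):
    """Extract the orbital elements from line 2 by fixed columns."""
    assert tle_checksum(line) == int(line[-1]), "checksum mismatch"
    return {
        "catalog_number": int(line[2:7]),
        "inclination_deg": float(line[8:16]),
        "raan_deg": float(line[17:25]),
        # The eccentricity field has an implied leading decimal point
        "eccentricity": float("0." + line[26:33].strip()),
        "arg_perigee_deg": float(line[34:42]),
        "mean_anomaly_deg": float(line[43:51]),
        "mean_motion_rev_per_day": float(line[52:63]),
    }

elements = parse_line2(LINE2)
```

CesiumJS (via its satellite-propagation helpers) consumes such elements directly, but seeing the raw fields makes the file format much less mysterious.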
CesiumJS is very cool and lets you visualise data with a few lines of code. Of course, you can write complex applications with it. The point to consider is that the rendering of the scenes you create does happen fully on the client side and can require a large amount of RAM and processing power.
Here’s a visualisation of what the code does. It also lets you zoom, pan and rotate.
I am currently working with my RTL-SDR device to catch ADS-B messages from nearby airplanes. Luckily, there’s an app for that, called dump1090, which works with my device out of the box. I run it on FreeBSD 12.1 and it catches the messages well. Unfortunately, it is a bit dated and I am not sure the app is still maintained. In any case, the problem lies in the way the app produces JSON files and uses Google Maps to visualise the captured data. So I am currently rewriting part of it to produce a GeoJSON-formatted output file and a web page that uses Mapbox (OpenStreetMap) instead of Google. The C code currently compiles on Linux but not (yet) on FreeBSD, but I am confident I can have a working base version by the end of next week (depending on schedule, of course).
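The C code does the real work, but the target format is easy to sketch in a few lines of Python: a GeoJSON FeatureCollection with one Point feature per aircraft. The input field names below are placeholders, not dump1090’s actual output, which differs per version:

```python
import json

def to_geojson(aircraft):
    """Convert a list of aircraft dicts (hex id, lat, lon, altitude)
    into a GeoJSON FeatureCollection with one Point per aircraft."""
    features = []
    for ac in aircraft:
        features.append({
            "type": "Feature",
            "geometry": {
                "type": "Point",
                # GeoJSON coordinate order is [longitude, latitude]
                "coordinates": [ac["lon"], ac["lat"]],
            },
            "properties": {
                "hex": ac["hex"],
                "altitude_ft": ac.get("altitude"),
            },
        })
    return {"type": "FeatureCollection", "features": features}

# Illustrative aircraft record with made-up values
doc = to_geojson([{"hex": "4b1617", "lat": 47.45, "lon": 8.56,
                   "altitude": 38000}])
geojson_text = json.dumps(doc)
```

A file in this shape can be handed to Mapbox (or Leaflet) as a data source with no further conversion, which is the whole point of switching away from the app’s custom JSON.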
Here’s a sneak peek at the current layout (which will be improved once the backend and data display work well enough).
The goal of this project is to learn and demonstrate how to visualise real-time data and to learn how to work with signals from antennas – but that will be another project…
I wrote a quick example program in Python. The code consumes data in JSON format, uses Pandas to process the data and Matplotlib to display it. It is a Jupyter notebook, but can easily be adapted to work standalone. Find the code on GitHub.
The data is from the SWPC website and contains monthly predictions of the sunspot number and the solar 10.7 cm flux – this data is important, for example, for radio amateurs. The data is valid as of November 11, 2019 but is going to change over time. The current data can be found at services.swpc.noaa.gov.
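A condensed version of the idea, with two hard-coded records standing in for the downloaded JSON. The field names follow the SWPC predicted-solar-cycle file as I found it, and the numeric values here are made-up samples, not real predictions:

```python
import json
import pandas as pd

# Illustrative records in the shape of SWPC's predicted-solar-cycle JSON;
# field names as observed in late 2019 (treat them as an assumption).
raw = """[
  {"time-tag": "2019-11", "predicted_ssn": 2.0, "predicted_f10.7": 68.0},
  {"time-tag": "2022-12", "predicted_ssn": 29.0, "predicted_f10.7": 88.0}
]"""

df = pd.DataFrame(json.loads(raw))
df["time-tag"] = pd.to_datetime(df["time-tag"])

# Month with the lowest predicted sunspot number
quietest = df.loc[df["predicted_ssn"].idxmin(), "time-tag"]
```

From here, plotting is a one-liner such as `df.plot(x="time-tag", y="predicted_ssn")`; in the notebook, the real file is simply fetched from services.swpc.noaa.gov instead of the inline string.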
As you can see, the activity is predicted to stay very low until December 2022. The new solar cycle, Solar Cycle 25, is believed either to have already started or to be starting soon, by the end of 2019.
In a previous article, I described how I migrated some websites from a bare metal server to a virtual machine running on the same server.
It’s actually three virtual machines: one runs the development environment, one the test environment, and one the production environment. The three are basically identical in setup and installed software, but production has more resources assigned to it (two processors instead of one, and more virtual disk space for data).
The next challenge was how to automate the development process and avoid having to copy the files manually each time an update is performed.
There are several ways to do this, of course, but ideally they all have something to do with using a DevOps methodology.
After Firefox in early September, Google had also revealed plans to support DNS over HTTPS (DoH).
In traditional DNS, the traffic between the DNS server and the client looking up an address goes over the wire un-encrypted and un-authenticated. This means the client cannot know whether the DNS server it is talking to is actually the correct server, whether the connection has been hijacked, or whether it is being served spoofed entries.
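To see how little protection there is, it helps to look at what a plain DNS query actually is on the wire: a small binary packet with no signature or encryption anywhere. A Python sketch that builds one by hand, following the classic RFC 1035 message format:

```python
import struct

def build_dns_query(hostname, qtype=1, query_id=0x1234):
    """Build a plain DNS query packet for an A record (RFC 1035).

    Header: ID, flags with only RD (recursion desired) set, 1 question,
    0 answer/authority/additional records. Nothing in this packet is
    authenticated or encrypted, which is the gap DoH and DNSCrypt close.
    """
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
```

Sent over plain UDP port 53, these bytes (and the response) are readable and forgeable by anyone on the path; DoH instead tunnels exactly this payload inside an authenticated HTTPS connection.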
There have been efforts before to secure DNS traffic, and the most advanced and seasoned approach here is DNSCrypt, which also uses the default port TCP 443 (HTTPS) for its traffic. The DNSCrypt v2 protocol specification has existed since 2013, but the protocol goes back to around 2008. It is well tested and secure, and I would have expected it to become the quasi-standard used in web browsers. In fact, the Yandex browser already used it.
BOINC is an open-source software package provided by the University of California, Berkeley, intended to let people contribute their computers’ computing time to calculations for scientific projects. Examples of such projects are Einstein@Home, SETI@Home, or LHC@Home, among many others.
The BOINC client can be run standalone or in connection with Oracle’s VirtualBox. Some projects indeed require VirtualBox to run.
I have been running the BOINC client software for several years now on different platforms such as Fedora, Ubuntu, FreeBSD and Windows 10, and have contributed to a handful of science projects, but mostly to LHC@Home.
BOINC also has a server part that lets you host your own science projects. If you have a lot of computations to do and need additional computing power, you might want to look at this solution. The BOINC server consists of several parts, such as an Apache HTTP server and a MySQL database server, so setting it up by hand is a bit tedious. Thankfully, volunteers provide Docker containers and VirtualBox VMs you can download and use. Details can be found here.