Then I saved this file and enabled the service with
sudo systemctl enable atlas-jupyter.service
After that I could start it by typing
sudo systemctl start atlas-jupyter.service
The Jupyter server now starts reliably every time I boot the virtual machine, no login required, and I can also start the virtual machine in headless mode, meaning no console (GUI) is necessary.
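For reference, here is a minimal sketch of what such a unit file could look like. The paths, user name, and Jupyter command line are assumptions — adjust them to match your VM:

```ini
# /etc/systemd/system/atlas-jupyter.service (hypothetical example)
[Unit]
Description=Jupyter notebook server for ATLAS Open Data
After=network.target

[Service]
Type=simple
User=atlas
WorkingDirectory=/home/atlas
ExecStart=/usr/bin/jupyter notebook --no-browser --ip=0.0.0.0 --port=8888
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The [Install] section's WantedBy=multi-user.target is what makes `systemctl enable` start the service at boot without a graphical login.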
This is just a quick note for people using ATLAS Experiment OpenData with Oracle VirtualBox VMs on Fedora. The installation of the ATLAS VM is wonderfully described here. I have two quick notes: one for users that run Oracle VirtualBox on Fedora (31), and one for the VM itself.
First, you’ll need to install the VirtualBox Extension Pack, because the VM requires USB 2.0 support to start. Setting the USB controller to USB 1.1 will not fix this: the VM will start, but will get stuck during boot-up. On Ubuntu, the VirtualBox Extension Pack is provided as an Ubuntu package and can be installed with the command
sudo apt install virtualbox-ext-pack
On Fedora there is no such package, so you’ll have to download the VirtualBox Extension Pack from the website via the above link and install it manually. This is quite easy, though.
Click the download link and choose Open with Oracle VM VirtualBox. After installation the ATLAS VM starts without issue.
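If you prefer the command line, a downloaded .vbox-extpack file can also be installed with VBoxManage. The file name below is an example — use the one matching your VirtualBox version:

```shell
# Install a manually downloaded Extension Pack (file name is an example)
sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-6.0.12.vbox-extpack

# Verify that the Extension Pack is now registered
VBoxManage list extpacks
```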
Particle physics is kind of a hobby of mine, and for some time now it has been possible to get access to some of the data generated with the LHC accelerator at CERN. One such dataset is from the LHCb experiment, which gives you access to data about decays of B mesons to three hadrons. The largest file is some 636 MB (B2HHH_MagnetDown.root), which I chose to start exploring.
Exploring means that at the start I do not know exactly what kind of data is in there. So I decided to begin by simply reading out some data and drawing graphs with it. For starters, I am not primarily interested in the physics, but in how to work with the data and what I can use it for. So I study the tools and the programming, but it is clear that while analysing the data I will also have to study the physics behind it; otherwise it is not possible to make useful evaluations.
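A first look at an unknown ROOT file can be done with the uproot library (pip install uproot). This is only a sketch: the tree name "DecayTree" and the idea of reading the first few branches are assumptions — the point is to list what is in the file before reading anything:

```python
# Sketch: peeking into an unknown ROOT file with uproot.
# The tree name is an assumption; keys() tells you what is actually inside.
import uproot

f = uproot.open("B2HHH_MagnetDown.root")
print(f.keys())            # discover the top-level objects, e.g. trees

tree = f["DecayTree"]      # assumed tree name; use whatever keys() printed
print(tree.keys())         # branch names, one per measured quantity

# Read a few branches into NumPy arrays and look at basic statistics
arrays = tree.arrays(tree.keys()[:3], library="np")
for name, values in arrays.items():
    print(name, values.min(), values.max(), values.mean())
```

From there one can pick interesting branches and start drawing histograms, which is exactly the kind of exploration described above.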
CERN held its Open Days on September 14 and 15. As the LHC is in Long Shutdown 2 (LS2) for upgrades until early 2021, this was a good opportunity for CERN to present itself and its work to the public.
Both days drew huge crowds and lines for underground visits were long – at one point waiting times for ATLAS visits were 3 hours.
I arrived on Sunday, September 15, shortly before 10 a.m. After getting my wristband at the check-in tent, I went straight for transport to a remote site — I already knew part of the Meyrin site, and ATLAS was already overcrowded. So I went to the bus stop in search of Bus F to get to the CMS experiment site. Unfortunately, I couldn’t find this bus, so I decided to jump on the one going to the LHCb site. Good choice!
I have been using the BOINC software to participate in scientific computing projects for around four years and have contributed to several projects such as Einstein@Home, SETI@Home, Asteroids@Home and my personal favourite, LHC@Home.
I started by getting work directly from LHC@Home, then switched to a pool with Gridcoin. I am now switching back and letting my boxes crunch exclusively for LHC@Home.
My four BOINC clients now use a local Squid proxy configured especially for LHC@Home and CernVM-FS. While this small number of machines probably does not cut down on network usage much, it’s something I tried some years ago and had abandoned. Apparently, LHC@Home now recommends running a local proxy if you have several crunchers in your network.
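For illustration, a minimal squid.conf along the lines of the example in the CernVM-FS documentation could look like this. The subnet, port and cache sizes are assumptions — adjust them to your LAN and disk:

```
# Minimal Squid setup for CVMFS crunchers (values are examples)
http_port 3128

# Only allow machines on the local network to use the proxy
acl local_nodes src 192.168.1.0/24
http_access allow local_nodes
http_access deny all

# Generous caching so CVMFS objects are served locally
cache_mem 128 MB
maximum_object_size 1024 MB
cache_dir ufs /var/spool/squid 50000 16 256
```

Each BOINC client is then pointed at this proxy via the HTTP proxy settings in the BOINC Manager.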
As per the documentation, “CernVM 4 is a virtual machine image based on Scientific Linux 7 combined with a custom, virtualization-friendly Linux kernel”. Its base image is very small, currently around 20 MB. The rest of the OS and the applications are downloaded on demand via CernVM-FS.
I learned a lot at the recent CernVM/CernVM-FS workshop at CERN in Geneva (actually in the part of the site that lies in France). It offered interesting approaches and insights into how to work with big data in complex environments, where almost every user has her own requirements and software setup.
The current CernVM image can be downloaded from here. Images for different virtual environments are available. I chose the VirtualBox version, as I have worked with VirtualBox for quite a while through BOINC and LHC@Home, but you can choose another environment, for example AWS, Azure, or a Docker image.
So far so good. The next step is getting a CERN account. This is needed to access CernVM Online and create a CernVM context. Once the context is created online, the CernVM on the desktop needs to be paired with it, which automatically configures the CernVM for your needs.
This is simple enough. A few difficulties arise, however. First, for the uninitiated like me, identifying the resources – i.e. the software and data – needed to work with CernVM is quite a challenge. Second, identifying the right account type and determining the permissions needed is also challenging. While you can register for the CERN website with, say, your Google account, this gives you access to some resources but apparently not to CernVM Online. A CERN lightweight account gives access to CernVM, but so far, although I got CernVM running and associated it successfully with an online context, I still get ‘access denied’ on the underlying resources.
While CernVM is still a work in progress, it inspired me to look more closely into VirtualBox and its possibilities, and I am currently in the process of moving the development environment of my sun.spaceobservatory.ru (a.k.a. sun.ofehr.space) website onto a VM.