I wrote a quick example program in Python. The code consumes data in JSON format, uses Pandas to process the data and Matplotlib to display it. It is written as a Jupyter notebook, but can easily be adapted to run standalone. Find the code on GitHub.
The data is from the SWPC website and contains monthly predictions of the sunspot number and the solar 10.7 cm flux – data that is important, for example, for radio amateurs. The data is valid as of November 11, 2019, but will change over time. The current data can be found at services.swpc.noaa.gov.
As you can see, activity is predicted to remain very low until December 2022. The new solar cycle, Solar Cycle 25, is believed to have either already started or to be starting before the end of 2019.
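The core of the notebook can be sketched as follows. The inline sample records and their field names are assumptions modeled on the SWPC predicted-solar-cycle JSON; the live feed at services.swpc.noaa.gov may use different keys, so check the real data before adapting this.

```python
import json
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend, so this also runs outside Jupyter
import matplotlib.pyplot as plt

# Sample records in the rough shape of the SWPC prediction feed
# (keys are assumptions; in the notebook this comes from the live JSON)
raw = json.loads("""
[
  {"time-tag": "2019-11", "predicted_ssn": 3.5,  "predicted_f10.7": 68.1},
  {"time-tag": "2020-06", "predicted_ssn": 5.0,  "predicted_f10.7": 69.4},
  {"time-tag": "2022-12", "predicted_ssn": 60.2, "predicted_f10.7": 103.0}
]
""")

# Load into Pandas and index by month
df = pd.DataFrame(raw)
df["time-tag"] = pd.to_datetime(df["time-tag"])
df = df.set_index("time-tag")

# Plot sunspot number and 10.7 cm flux on twin axes
fig, ax1 = plt.subplots()
ax1.plot(df.index, df["predicted_ssn"], label="Predicted sunspot number")
ax1.set_xlabel("Month")
ax1.set_ylabel("Sunspot number")
ax2 = ax1.twinx()
ax2.plot(df.index, df["predicted_f10.7"], color="tab:orange",
         label="Predicted 10.7 cm flux")
ax2.set_ylabel("10.7 cm flux (sfu)")
fig.savefig("prediction.png")
```

Replacing the inline sample with a download of the live JSON is a one-line change with `urllib` or `requests`.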
In a previous article, I described how I migrated some websites from a bare metal server to a virtual machine running on the same server.
It’s actually three virtual machines: one runs the development environment, one the test environment, and one the production environment. The three are basically identical in setup and installed software, but production has more resources assigned to it (two processors instead of one, and more virtual disk space for data).
The next challenge was how to automate the development process and avoid having to manually copy the files each time an update is performed.
There are several ways to do this, of course, but ideally they all apply some form of DevOps methodology.
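One lightweight starting point, before reaching for a full CI/CD pipeline, is to script the copy step itself. The sketch below builds `rsync` commands for the test and production VMs; the host names and paths are made up for illustration, not my actual setup.

```python
import subprocess

# Hypothetical hosts and paths; adjust to the actual VM setup.
STAGES = [
    ("test", "deploy@test-vm:/var/www/site/"),
    ("production", "deploy@prod-vm:/var/www/site/"),
]

def rsync_cmd(src, dest):
    # -a: preserve permissions/times, -z: compress in transit,
    # --delete: make the remote tree an exact mirror of the local one
    return ["rsync", "-az", "--delete", src, dest]

def deploy_all(src, run=subprocess.run):
    # Deploy to test first; check=True raises on a non-zero exit,
    # so production is only reached if the test deploy succeeded.
    for name, dest in STAGES:
        run(rsync_cmd(src, dest), check=True)
```

The `run` parameter is injectable mainly so the logic can be exercised without touching real hosts; in practice `deploy_all("./build/")` does the work.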
Following Firefox in early September, Google has also revealed plans to support DNS over HTTPS (DoH).
In traditional DNS, the traffic between a DNS server and the client looking up an address goes over the wire unencrypted and unauthenticated. This means the client cannot know whether the DNS server it is talking to is actually the correct server, or whether the connection has been hijacked and it is being served spoofed entries.
There have been earlier efforts to secure DNS traffic, and the most advanced and seasoned approach is DNSCrypt, which also uses the default HTTPS port (TCP 443) for its traffic. The DNSCrypt v2 protocol specification has existed since 2013, and the protocol itself goes back to around 2008. It is well tested and secure, and I would have expected it to become the quasi-standard used in web browsers. In fact, the Yandex browser already uses it.
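For illustration, this is roughly what a DoH request looks like per RFC 8484: the client serializes an ordinary DNS query message and sends it base64url-encoded (without padding) in an HTTPS GET. The sketch below only constructs the message and the URL; the resolver endpoint shown is just an example.

```python
import base64
import struct

def dns_query(name, qtype=1):
    """Build a minimal raw DNS query message (qtype 1 = A record)."""
    # Header: ID=0 (recommended for GET, helps HTTP caching),
    # flags=0x0100 (recursion desired), QDCOUNT=1
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

msg = dns_query("example.com")
# base64url without padding, as RFC 8484 requires for the GET form
b64 = base64.urlsafe_b64encode(msg).rstrip(b"=").decode()
url = f"https://cloudflare-dns.com/dns-query?dns={b64}"
```

Fetching `url` with any HTTPS client returns a standard binary DNS response with content type `application/dns-message` – the point being that on the wire it is indistinguishable from ordinary HTTPS traffic.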
BOINC is an open-source software package provided by the University of California, Berkeley, and is intended to let people contribute their computers’ computing time to run calculations for scientific projects. Examples of such projects are Einstein@Home, SETI@Home, and LHC@Home, among many others.
The BOINC client can run standalone or in combination with Oracle’s VirtualBox. Some projects indeed require VirtualBox to run.
I have been running the BOINC client software for several years now on different platforms (Fedora, Ubuntu, FreeBSD and Windows 10) and have contributed to a handful of science projects, mostly to LHC@Home.
BOINC also has a server part that lets you host your own science projects. If you have a lot of computations to do and need additional computing power, you might want to look at this solution. A BOINC server consists of several parts, such as the Apache HTTP server and a MySQL database server, so setting one up manually is a bit tedious. Thankfully, volunteers provide Docker containers and VirtualBox VMs you can download and use. Details can be found here.
The goal was to replace the existing solution – publishing to Twitter directly from the script that produces the maps of earthquake locations – with a new one that publishes to Twitter asynchronously and implements a better throttling mechanism. The old approach ran into Twitter rate limits several times during periods of high seismic activity, which led to the account being blocked or shadow-banned because it was identified as publishing spam.
Now, after having successfully created the maps, the application places a message in a RabbitMQ queue. The queue is checked periodically, and each message is published to Twitter after a short random delay of between 1 and 4 seconds.
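The flow can be sketched as below. The queue name, the message fields, and the `publish_to_twitter` helper are assumptions for illustration; only the Pika calls themselves are the library’s real API.

```python
import json
import random
import time

def make_message(text, image_path):
    # What the map-producing script enqueues after rendering a map
    # (field names are illustrative, not my actual schema)
    return json.dumps({"text": text, "image_path": image_path})

def throttle_delay(low=1.0, high=4.0):
    # Random 1-4 s pause between tweets to stay under rate limits
    return random.uniform(low, high)

def consume_forever(host="localhost", queue="quake_maps"):
    import pika  # third-party: pip install pika

    conn = pika.BlockingConnection(pika.ConnectionParameters(host))
    ch = conn.channel()
    ch.queue_declare(queue=queue, durable=True)
    ch.basic_qos(prefetch_count=1)  # hand this consumer one message at a time

    def handle(channel, method, properties, body):
        msg = json.loads(body)
        time.sleep(throttle_delay())
        publish_to_twitter(msg["text"], msg["image_path"])  # hypothetical helper
        channel.basic_ack(delivery_tag=method.delivery_tag)  # ack only after publishing

    ch.basic_consume(queue=queue, on_message_callback=handle)
    ch.start_consuming()
```

Acknowledging only after the tweet goes out means a crash mid-publish leaves the message in the queue to be retried, which is the main point of decoupling the two steps.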
You can see the published Tweets here. This is the development and test account; the application is still in development mode, so the links may or may not work.
So I was trying to do some work with RabbitMQ using Python and Pika. Namely, I wanted to write message queues for use in some of my applications that let me do things asynchronously, so that the Python program does not block for any significant amount of time.
For that I installed rabbitmq-server and the package python3-pika on an Ubuntu 19.04 box.
An example program I wrote (actually adapted from an existing one) some time ago shows the speed-up achieved when switching from CPU to GPU computation; it also served to test the setup and benchmark my systems.
The example can be run as a Jupyter notebook or in a terminal.
The program generates a fractal image, the Mandelbrot set, and measures the time the computation takes.
Unsurprisingly, there is a considerable speed-up when switching from CPU to GPU. On my systems the CPU version takes around 4.4 s to compute the image, while the GPU version does it in around 0.3 s – a considerable time saver, I’d say.
The other interesting thing here is how easy it is to use your NVIDIA card from Python and take advantage of these speed-ups.
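To illustrate, here is the escape-time kernel at the heart of such a program, written as plain Python/NumPy so it also runs without a GPU. With Numba installed, the same `mandel` function can be compiled for CUDA (e.g. via `numba.cuda.jit` or `@vectorize(target="cuda")`), which is where the speed-up comes from. The image size and iteration count here are illustrative, not the ones I benchmarked.

```python
import numpy as np

def mandel(x, y, max_iters=20):
    # Escape-time iteration: count steps until |z| >= 2, capped at max_iters
    c = complex(x, y)
    z = 0.0j
    for i in range(max_iters):
        z = z * z + c
        if (z.real * z.real + z.imag * z.imag) >= 4.0:
            return i
    return max_iters

def render(width=64, height=48, max_iters=20):
    # Evaluate the kernel over a grid covering the classic Mandelbrot view
    img = np.empty((height, width), dtype=np.uint8)
    xs = np.linspace(-2.0, 1.0, width)
    ys = np.linspace(-1.0, 1.0, height)
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            img[j, i] = mandel(x, y, max_iters)
    return img

img = render()
```

Since every pixel is independent, the nested loop is embarrassingly parallel – exactly the shape of problem where offloading to the GPU pays off.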
My code is available on GitHub; feel free to use, comment, or share.
CERN had its Open Days on September 14 and 15. As the LHC is in Long Shutdown 2 (LS2) for upgrades until early 2021, this was a good opportunity for CERN to present itself and its work to the public.
Both days drew huge crowds and lines for underground visits were long – at one point waiting times for ATLAS visits were 3 hours.
I arrived on Sunday, September 15, shortly before 10 a.m. and, after getting my wristband at the check-in tent, went straight for transport to a remote site – I already knew part of the Meyrin site, and ATLAS was already overcrowded. I went to the bus stop in search of Bus F, to go to the CMS experiment site. Unfortunately, I couldn’t find this bus, so I decided to jump on the one going to the LHCb site. Good choice!
A nice day in Lucerne and an excellent opportunity to learn about CSCS’s work and interact with the staff. This was my second CSCS Lab Day, and although I am not working in the HPC field, I learned a lot. The event is interesting because it focuses on the interaction of HPC users with the CSCS infrastructure, so you get a lot of information about containers, virtualization, and the CSCS user environment without being overwhelmed by all the HPC-specific details.
The day started with a talk by Prof. Domenico Giardini of ETH Zurich, who described how the seismometer of the InSight Mars mission was developed and deployed, and what results have been obtained so far.
As I announced a few days ago, I was looking into how to migrate my websites to a virtual server environment using VirtualBox.
The installation and configuration was pretty straightforward and basically the same as on the original websites: the operating system remains Ubuntu 18.04 LTS and the software environment is identical. However, this was a good opportunity to clean up some things that had become outdated.
My company website ofehrmedia.com runs on a newer version of the Zotonic Erlang CMS (0.51 at the time of writing). There was no problem migrating the content and database from the previous version (namely 0.39).
My sun.ofehr.space website is still running on the Yaws web server, but some of the data acquisition code needed to be updated, as the source format changed. Thankfully, we are close to the solar minimum of Solar Cycle 24, so there is time for a bigger update of how data on solar events is collected and displayed. For the time being, SDO videos are no longer produced: there was an API change on Helioviewer.org, which is fixed now, but I decided to redo the whole process of how this data is acquired and processed.
The earth.ofehr.space website is also still running on the Yaws web server. A handful of sources for earthquake data – namely Iceland, Turkey, Mexico, Switzerland, the Philippines, and some others – were dropped, as they make it exceedingly difficult to acquire the data and I’ve decided it’s not worth my time. I will instead spend the effort on improving the data display for the remaining sources.
The planets.ofehr.space website also runs on the Yaws web server and currently only displays data on near-Earth objects.
Now, the interesting part will be to see how the VirtualBox environment behaves in production and how easy it is to do DevOps style development with it.