Following Firefox's announcement in early September, Google has also revealed plans to support DNS over HTTPS (DoH).
In traditional DNS, the traffic between the DNS server and the client looking up an address goes over the wire unencrypted and unauthenticated. This means the client cannot verify that the DNS server it is talking to is actually the correct one, that the connection has not been hijacked, or that it is not being served spoofed entries.
There have been earlier efforts to secure DNS traffic, and the most advanced and seasoned approach is DNSCrypt, which also uses TCP port 443 (HTTPS) for its traffic by default. The DNSCrypt v2 protocol specification has existed since 2013, but the protocol goes back to around 2008. It is well tested and secure, and I would have expected it to become the quasi-standard used in web browsers. In fact, the Yandex Browser already uses it.
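Note that DoH (RFC 8484) does not change the DNS message format itself: it wraps the standard wire-format query in an HTTPS request to the resolver. As a minimal sketch (the function name and defaults are my own, not from any particular library), here is what the byte string a DoH client POSTs to the resolver looks like:

```python
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query message in RFC 1035 wire format.

    A DoH client sends exactly these bytes as the body of an HTTPS POST
    (Content-Type: application/dns-message) to the resolver."""
    # Header: ID=0 (RFC 8484 recommends 0 for HTTP cache friendliness),
    # flags=0x0100 (recursion desired), 1 question, 0 answers/authority/additional.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: the name as length-prefixed labels, zero-terminated,
    # followed by QTYPE (1 = A record) and QCLASS (1 = IN).
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)
    return header + question

query = build_dns_query("example.com")
```

The only thing DoH adds on top of this is the HTTPS transport, which is what provides the encryption and server authentication that plain DNS lacks.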
CERN held its Open Days on September 14 and 15. As the LHC is in Long Shutdown 2 (LS2) for upgrades until early 2021, this was a good opportunity for CERN to present itself and its work to the public.
Both days drew huge crowds, and the lines for underground visits were long – at one point the waiting time for the ATLAS visit was three hours.
I arrived on Sunday, September 15, shortly before 10 a.m., and after getting my wristband at the check-in tent I went straight for transport to a remote site – I already know part of the Meyrin site, and ATLAS was already overcrowded, so I went to the bus stop in search of bus F to the CMS experiment site. Unfortunately, I couldn't find this bus, so I decided to jump on the one going to the LHCb site. Good choice!
A nice day in Lugano and an excellent opportunity to learn about CSCS's work and interact with the staff. This was my second CSCS Lab Day, and although I do not work in the HPC field, I learned a lot. The event is interesting because it focuses on the interaction of HPC users with the CSCS infrastructure, so you can pick up a lot of information about containers, virtualization and the CSCS user environment without being overwhelmed by all the HPC-specific material.
The day started with a talk by Prof. Domenico Giardini of ETH Zurich, who described how the seismometer of the InSight Mars mission was developed and deployed, and what results have been obtained so far.
As I announced a few days ago, I have been looking into how to migrate my websites to a virtual server environment using VirtualBox.
The installation and configuration were pretty straightforward and basically the same as for the original websites: the operating system remains Ubuntu 18.04 LTS and the software environment is identical. However, this was a good opportunity to clean up some things that had become outdated.
My company website ofehrmedia.com now runs on a newer version of the Zotonic Erlang CMS (at the time of writing, 0.51). There was no problem migrating the content and database from the previous version (namely 0.39).
My sun.ofehr.space website is still running on the Yaws web server, but some of the data acquisition code needed to be updated, as the source format had changed. Thankfully, we are close to the solar minimum of solar cycle 24, so there is time for a bigger update of how data on solar events is collected and displayed. For the time being, SDO videos are no longer produced: there was an API change on Helioviewer.org that is fixed now, but I have decided to redo the whole process of how this data is acquired and processed.
The earth.ofehr.space website also still runs on Yaws. A handful of sources for earthquake data – namely Iceland, Turkey, Mexico, Switzerland, the Philippines and some others – were ditched, as they make it exceedingly difficult to acquire the data, and I have decided it is not worth my time. I will spend the effort on improving the data display for the remaining sources.
The planets.ofehr.space website also runs on Yaws and currently only displays data on near-Earth objects.
Now, the interesting part will be to see how the VirtualBox environment behaves in production and how easy it is to do DevOps style development with it.
I am currently running some websites on bare metal servers and while I am not prepared (yet) to move these to virtual servers in the cloud, I do want to virtualize them and run them on Oracle’s VirtualBox.
Most of the migration is straightforward, of course. I duplicated the Ubuntu 18.04 LTS environment in a VirtualBox VM and moved the configuration and files over. For the collected data I created a separate storage container that expands as needed.
There was only one issue, with networking: I used a bridged adapter in the network settings, but the box was only reachable from the host operating system, not from other machines. That is fixed now – though I am not sure how; it was one of those "change settings multiple times until it works" fixes.
Currently the development and test environments have been moved, and the development environment is set up so that I can edit the files and connect to the database. The only thing left to figure out is how best to publish changes from development to test to production. This should happen with the least possible effort and the highest possible degree of automation. Still working on that…
The idea of a Tor bridge is that its IP address is not entered into the public database but remains hidden. With this wrong configuration line, the Tor node started up "normally" as a Tor exit node and published its IP address to the public database.
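For context, here is a hedged sketch of what a minimal bridge configuration in torrc looks like (option names are from the tor manual; the nickname and contact are placeholders). The key line is `BridgeRelay 1` – without it, tor runs as an ordinary public relay and its address ends up in the public directory:

```
ORPort 443
BridgeRelay 1                    # without this line the relay is public
Nickname MyBridge                # placeholder
ContactInfo admin@example.org    # placeholder
```

With `BridgeRelay 1` set, the relay's descriptor goes to the bridge authority instead of the public directory authorities, which is what keeps the bridge's address out of the public consensus.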
I ran into this nice little story on the Schneier on Security blog, describing how to transform a Tesla into a surveillance platform using its cameras, which give a 360° view of the car's surroundings. The article is a synopsis of a more detailed description on Wired of the Tesla modifications. In fact, it is quite easy to turn your Tesla into your own private surveillance platform: everything you need is already there, you just plug in a notebook running the right (open-source) software and you're set.
Now we can understand why the Basel police might find it interesting to own and run a few Teslas in the city of Basel. Not that I want to claim they actually used the cars in this way – I would have no knowledge of that. I am just saying it is a possibility that they will be used in this way, either by them or by any other entity.
I can also imagine it would be very interesting to have access to the data from all the Tesla cars in your fleet and record what is going on around them – even when parked. In fact, since the car is already a surveillance platform when you buy it, and you can use it as such with a few modifications, why not use it for your own purposes? Distributing your cars strategically in an area will give you information on traffic flow and on where people go.
Note that while I write here specifically about Tesla, this holds true for any manufacturer that produces similar cars, and it will be especially true for self-driving cars.
As per the documentation, "CernVM 4 is a virtual machine image based on Scientific Linux 7 combined with a custom, virtualization-friendly Linux kernel". Its base image is very small – currently around 20 MB. The rest of the OS and the applications are downloaded on demand via CernVM-FS.
I learned a lot at the recent CernVM/CernVM-FS workshop at CERN in Geneva (actually in the part of the site that lies in France). It offered interesting approaches and insights into how to work with big data in complex environments, where almost every user has her own requirements and software setup.
The current CernVM image can be downloaded from here. Images for different virtual environments are available. I chose the VirtualBox version, as I have worked with VirtualBox for quite a while using it with BOINC and LHC@Home, but you can choose another environment, for example AWS, Azure or a Docker image.
So far so good. The next step is getting a CERN account. This is needed to access CernVM Online and create a CernVM context. Once the context is created online, the CernVM on the desktop needs to be paired with it; this automatically configures the CernVM for your needs.
This is simple enough, but a few difficulties arise. First, for the uninitiated like me, identifying the resources – i.e. the software and data – needed to work with CernVM is quite a challenge. Second, identifying the right account and determining the permissions needed is also challenging. While you can register for the CERN website with, say, your Google account, this gives you access to some resources but apparently not to CernVM Online. A CERN lightweight account gives access to CernVM, but so far, while I got CernVM running and successfully associated it with an online context, I still get 'access denied' on the underlying resources.
While my CernVM setup is still a work in progress, it inspired me to look a bit closer into VirtualBox and its possibilities, and I am currently in the process of moving the development environment of my sun.spaceobservatory.ru (a.k.a. sun.ofehr.space) website onto a VM.