Don’t disturb my circles

In the digital age, we have grown used to the fact that it is possible to make a perfect copy of something. Since all digital data is just a series of bits, we don’t have to worry about imprecision, and we can employ all kinds of fancy error correction to make sure our copies are identical to the source material. It’s important to know that despite this, data isn’t always copied exactly: random noise can flip bits, and we often throw away information in the name of compression. Of course, this isn’t limited to digital information; legends, languages and memories evolve over time.

All this to say that if I wanted N pictures of the same circle, I could draw one and copy it N-1 times. This is obviously no fun, so I decided to copy them all by hand, and examine what kinds of error are introduced, and how they evolve.

Here’s my starting circle, drawn freehand with a mouse (freemouse?):

In retrospect, I probably should have started from a perfect circle, but this isn’t too bad

When copying, I would follow these two rules:

  1. Draw in a single stroke, without stopping or lifting the mouse
  2. Reproduce imperfections as closely as possible

When I first started doing this, I was thinking about ensō, the circular brushstrokes made by Zen Buddhists with the intent of allowing the body to create without the mind’s oversight (…I think). I guess I shouldn’t be surprised that my circles were not ensō, and that the mental effort of attempting repeated pixel-perfect tracing left me neither Zen nor zen.

Anyway, after 100 iterations, here’s how my circle evolved:

I don’t know about you, but I find this kind of mesmerising to watch. Here are a few observations that I’ll try to quantify below:

  1. My circle does not stay a circle for long. It only took 5 iterations for the start and end points to not line up, causing them to flail around for the next 95.
  2. The circle drifts upwards and to the right, almost hitting the rightmost edge of the frame. This doesn’t show up particularly well on the white background, but the line is only a few pixels away from the edge of the image in the final frames.
  3. The circle gets a lot more squiggly over time. This in itself shouldn’t be surprising, but the effect after 100 iterations is remarkably large. In particular, we see fissures opening up and starting to wiggle around. Some of these (like the one at 2 o’clock) are mouse slips, but most aren’t.
  4. Perhaps it’s just me, but the South-West and North-East quadrants of the circle seem much more badly behaved than the other two. Perhaps this says something about my mouse control?

To do anything quantitative with these circles, we’d better convert the images to something more pliable. Since I drew the circles with a stroke width of 5 pixels, it makes sense to skeletonize the line – i.e. trim it down until it is only one pixel thick. Luckily, there seem to be several Python libraries which have implemented this (I used skimage).
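If you fancy trying this yourself, the skeletonization step only takes a couple of lines (a minimal sketch – the filename is made up, and I’m assuming the drawing is dark-on-white):

```python
import numpy as np
from skimage import io
from skimage.morphology import skeletonize

image = io.imread("circle_000.png", as_gray=True)  # hypothetical filename
stroke = image < 0.5                # dark pixels belong to the drawn line
skeleton = skeletonize(stroke)      # boolean array; line thinned to 1 px wide
ys, xs = np.nonzero(skeleton)       # coordinates of the skeleton pixels
```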

Once skeletonized, we can then fit a circle to the line so we can look at the deviations from circularity later. I did this by placing the circle’s centre at the mean \(x\) and \(y\) coordinates of the line’s pixels, and setting its radius to be the mean distance of pixels from the centre. I’m not convinced this is the best-fitting circle, but it seems to work fine.
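In code, the fit is pleasantly simple (using the skeleton pixel coordinates xs and ys from the sketch above):

```python
import numpy as np

# Centre at the mean pixel position; radius as the mean distance from it
cx, cy = xs.mean(), ys.mean()
radius = np.hypot(xs - cx, ys - cy).mean()
```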

Finally, I can then unwrap the line to express the pixel’s locations in polar coordinates, centred on the circle:

I originally wanted to parametrise things along the path length of the circle, but this got messy fast. The main issue is that when skeletonized, some particularly spiky bits of the line end up as spurs off the main circle (you can see a few tiny spurs in panel 2 of the picture above). This makes counting length along the circle difficult, so I switched to just using angle as a coordinate.
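The switch to angle as a coordinate is then just a polar conversion about the fitted centre (again, a sketch continuing from the arrays above):

```python
import numpy as np

# Express each skeleton pixel as (angle, radius) about the fitted centre
theta = np.arctan2(ys - cy, xs - cx)
r = np.hypot(xs - cx, ys - cy)
order = np.argsort(theta)           # sort by angle, ready for analysis
theta, r = theta[order], r[order]
```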

OK, now that we’ve got some usable data, let’s take a look at some of the points I listed earlier. Looking at the animation, it’s obvious that the circle gets squigglier over time. One way to describe this would be to look at the circle’s circumference as a function of time:

Well, it’s definitely increasing, with a possible slowing after about 60 iterations. It’s difficult to get any more information out of just the circumference, so let’s look at something a bit more complex.


Since the squiggles I keep referring to can be thought of as waves distorting the shape of the circle, it makes sense to look at the Fourier series of the unwrapped points. We’ll consider a range of different wavelengths, and the coefficients of the resulting Fourier series will show what fraction of the distortion arises from waves with that particular wavelength.

So, for example, if the line were quite smooth but distorted into an ellipse, we would see large coefficients for long-wavelength disturbances, and low coefficients for the short-wavelength ones. On the other hand, if the shape of the circle were roughly right, but with lots of small deviations, the long-wavelength coefficients would be small, and the short-wavelength coefficients would be larger. Since my circles are basically neither, we should see perturbations (i.e. high coefficients) at a range of wavelengths. By the way, from now on, I’ll use ‘order’ to refer to the scale of the perturbations – this is the inverse of wavelength, so ‘high-order’ means ‘short-wavelength’ and vice versa.
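In case you want to reproduce this, here’s one way to get those coefficients with numpy (a sketch – the grid size is arbitrary, and where the line doubles back on itself this quietly averages the branches):

```python
import numpy as np

# Resample radius onto a regular angular grid, then take the FFT
grid = np.linspace(-np.pi, np.pi, 512, endpoint=False)
r_grid = np.interp(grid, theta, r, period=2 * np.pi)

coeffs = np.fft.rfft(r_grid - r_grid.mean())
amplitude = 2 * np.abs(coeffs) / len(r_grid)
modes = amplitude[:41]   # amplitude[n] = size of the perturbation with n waves
```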

Here are the results of the Fourier analysis for all 101 frames. In each image, the drawn circle is on the left, with its circle of (arguably-)best fit superimposed. Top right shows the unwrapped points, plotted as radius against angle, while the plot below it shows the amplitude of the Fourier modes, up to \(n = 40\).

We can see that the power in each mode starts out low, but tends to grow over time (which is what we should expect). Other than that, there’s not a huge amount that I can deduce from these plots – basically just:

  1. The fact that the amplitude of the 1st-order component is not zero means that the circle’s position and/or radius are not optimal. You can also see this in the unwrapped plot in the top-right, where the whole line occasionally shifts and warps between frames – in theory, most of the line should stay still. This might be due to sections where the line loops back on itself (i.e. multiple values of the radius for a given angle), which make it hard to define what the circle of best fit even is.
  2. The 4th-order component is consistently quite large. This has the effect of making the circle more square – you can see this particularly on the right-hand side towards the end of the animation. I’m not sure there’s any particular significance here; with 40 components, the odds of one being particularly high for a while just due to random chance are pretty good.
  3. Besides that, there aren’t really any standout wavelengths. When drawing the circles, I wondered whether the drawing generally introduces errors with a particular length scale; the fact that the amplitudes smoothly decrease for higher-order components shows that this isn’t the case.

OK – that’s definitely enough Fourier stuff for now. Let’s take a look at something else I spotted: the fact that big perturbations tend to appear in the bottom-left and top-right quadrants of the circle, while the other two stay comparatively smooth.

Now we could look at this by dividing the circle into its quadrants and repeating the Fourier analysis, but I’ve thought of a more interesting way: if we break the circle down into line segments, and look at the distribution of angles of those segments, differences in bumpiness in each quadrant should be visible as a surplus of lines pointing in a certain direction.
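A sketch of the segment-angle measurement (this assumes the skeleton pixels have already been ordered along the line – the fiddly bit I mentioned earlier; xs_path and ys_path are hypothetical ordered arrays):

```python
import numpy as np

dx, dy = np.diff(xs_path), np.diff(ys_path)    # consecutive steps along the line
angles = np.degrees(np.arctan2(dx, dy)) % 180  # angle from vertical, folded to [0, 180)
weights = np.hypot(dx, dy)                     # weight each segment by its length
hist, edges = np.histogram(angles, bins=36, range=(0, 180), weights=weights)
```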

To show what I mean, here’s a simplified example using an octagon, and then showing what happens when you add a bump in a particular quadrant:

Here, ‘angle’ is measured between the line segment and vertical; since the lines aren’t directed, the graph repeats at an offset of 180\(^\circ\) (as each segment is counted twice). I feel like the graphs on the right are easier to interpret wrapped back onto a circle, so let’s do that, and see how they evolve over time:

(I’ve changed the lines into bars, since they’re a bit easier to follow when they’re jumping around)

The units of the plot’s radial axis are normalised to the average size of the bars for my starting circle. The growth of the bars reflects the circles’ growing circumference. While it’s nowhere near as clear cut as the octagon example, you can see a clear drift from a circle to a NE-SW pointing ellipse as the circle evolves. Here’s just that last frame for comparison:

This is pretty good evidence for the badly-behaved vs well-behaved quadrants idea. I won’t dwell too much on the cause, but I suspect it’s to do with the fact that I’m right-handed. So if I’m more accurate with side-to-side motion of my hand (as opposed to forward-and-back), we’d expect to see more errors in regions where the line is perpendicular to the unreliable forward-and-back axis. If this were the case, then these slips would likely be closer to the North-South axis instead of the East-West, since my arm is aligned roughly SSW-NNE when using the mouse.

You can also see that the bars pointing East-West and North-South are usually pretty prominent, implying a bias toward drawing horizontal or vertical lines over diagonals. It’s hard to say whether this is caused by my perception, my mouse movement, or just representative of the fact that the drawing consists of pixels in a grid.


I hope this has been a useful primer on what to expect if you manually trace the same circle 100 times. If uniformity is what you’re after, you’ll need steadier hands than mine (or I could just introduce you to my friends Ctrl+C and Ctrl+V).

The topologist’s world map

A while ago, I saw this map, and thought it was pretty interesting:

Credit: Sian Zelbo

It represents the USA’s 48 contiguous states (plus Lake Michigan) crammed into a rectangle in a way that preserves their borders. For example, California (in the bottom-left corner) borders, going clockwise, Oregon (green), Nevada (blue), and Arizona (purple).

I describe this as a topologist’s map because topology is a branch of mathematics concerned with the way that space is connected. In topology it’s common to think of stretchy, distortable surfaces that can be moved around without being punctured or torn. In the map above, the shapes of states are distorted, but as long as they’re not torn or separated, this is topologically equivalent to a normal map.

My first thought after seeing this was ‘I wonder if you could make a world map this way?’

Obviously you can’t do it in exactly the same way (and haven’t been able to for the past 175 million years), but I decided to try it out and see what it looked like.

The plan, roughly speaking, was:

  1. Make a list of countries, and specify which other countries they border
  2. Use this list to make a graph, and relax the points so they end up reasonably spaced
  3. Make a Voronoi diagram of the points, and hope it ends up as a topologist’s map (see the sketch below)
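For the curious, here’s roughly what steps 2 and 3 look like in Python (a sketch – the border dict is a made-up fragment, not my real list):

```python
import networkx as nx
import numpy as np
from scipy.spatial import Voronoi

borders = {"ES": ["PT", "FR"], "FR": ["ES", "BE", "DE", "IT", "CH"]}  # etc.

G = nx.Graph()
G.add_edges_from((a, b) for a, neighbours in borders.items() for b in neighbours)

pos = nx.spring_layout(G, seed=0)   # force-directed 'relaxation' of the points
vor = Voronoi(np.array([pos[c] for c in G.nodes]))  # one cell per country
```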

I ended up having to manually collate a list of countries and their borders. Plotted as a network using the networkx and netgraph Python modules, this was the result:

Here you can see two distinct landmasses – America on the left and Afro-Eurasia on the right. North and Central America appear basically just as a string of countries, while South America is a bit more interesting. For the other landmass, Asia appears on the top-left, Europe on the bottom-left, and Africa on the right, connected by Egypt linking into the Middle East. Note the existence of two pairs of bordering countries at the top centre: the Dominican Republic and Haiti, and Ireland and the UK.

We can try to neaten this up a bit to get

which I guess you could interpret as a butterfly or a dragon or something. Not much has changed really, aside from a rotation and a flip.

We can immediately see that this is not going to fit in a neat rectangle like the USA map – well, it technically could, but it’s going to look awful. This has also demonstrated a problem with trying to automate this process: the graph has no idea of chirality.

Imagine that this graph existed as a series of balls connected with string. You would be able to spin the Egypt (EG) ball around and flip Africa relative to Eurasia. Looking at the string of American countries, Belize (BZ) and El Salvador (SV) are both shown on the same side, although they are on the Atlantic and Pacific coasts, respectively.

For a slightly more complex example, Europe is connected to Asia only through Russia and Turkey. There are two ways to walk along coastline from Russia to Turkey: the relatively short trip around the Black Sea, passing through Ukraine, Romania and Bulgaria; or the alternative, along the Baltic, Atlantic, and Mediterranean coasts. This graph has no idea which is ‘correct’ – you’d be able to turn the Black Sea inside-out while preserving the graph, in this model.

It was at some point while thinking about adding extra countries to represent the seas that I decided to call the automation quits, and just make the map manually. This has some advantages – I get to choose the overall layout, it’s much easier to include stylistic elements, and I don’t have to spend ages describing to a computer what common sense is.

My chosen software to make the map was Inkscape; it’s the first time I’ve tried making a map, but the ability to ignore complex coastlines and borders means that I can just represent each country as a simple shape. I decided to emulate a T and O map, for no particular reason beyond it looking nice. A few hours of surprisingly relaxing Inkscape later, I arrived at

Note: at the moment, I’m aware of a few errors in this version. They are:

  • Bolivia/Paraguay/Argentina/Brazil border needs flipping
  • South Sudan/Kenya/Ethiopia/Uganda border needs flipping
  • Missing Bahrain
  • “Nicuragua” and “Dominican Repbulic”

I’ve deliberately left the map unlabelled, because I enjoyed going through and trying to identify the countries. Swipe across the image to reveal a few hints – remember that east is up, and north is left.

This kind of map has a few interesting quirks: for example, Spain and Morocco appear nowhere near each other here, and Indonesia, Brunei, and Timor-Leste make an appearance, despite being on totally separate landmasses to the other nations.

There’s not a huge amount more to say about the final map, so here it is – I’ll add a couple of final comments below:

Since people were surprisingly interested in this map, I’ve made posters available to buy here (if you’re interested in a digital copy, send me an email: tom@(this website) ).

  • I’d previously heard the fact that Norway and North Korea are only separated by one country (Russia), but this map shows several other surprisingly short chains. I think my favourite is that only one country (Chad) separates countries with coastlines on the Red Sea, Mediterranean Sea, and Atlantic Ocean (Sudan, Libya, and Nigeria)
  • Some countries get really distorted – mostly when they find themselves near the centre of a continent. I’d often thought of Germany as the centre of Europe, but here, Austria and Hungary get really stretched out because they end up bordering countries on opposite sides of the continent
  • There are probably (hopefully?) other interesting things about this map, but I’ve been looking at it for too long, so I’ll leave it here

Distograms

Let’s say you work for a fast food chain, and you’re working out where to open a new location. For the sake of simplicity, let’s say that suitable units abound, and you have almost complete freedom of locale. Here’s the problem: you’re not the only chain around, and your new location will have to compete with surrounding businesses. I can imagine two possible plans of action in this case:

  1. Open your new location as far as possible from any competition and hope that business will be driven by convenience. I’ll call this the even spread strategy.
  2. Open your new location right next to the competition, under the assumption that they chose that place for a reason, and that you’ll be able to steal some of their customers. I’ll refer to this as clustering.

If either of these strategies dominates, we should be able to detect that in real-world data.

To get some real-world data, I headed over to OpenStreetMap, and made use of one of their bulk download tools. It turns out that all of greater London plus some surrounding countryside is only 1.4 GB, which is surprisingly manageable.

1.4 GB of London, with cycle paths highlighted, for some reason. Swipe across to reveal fast food chains (blue) and coffee shops (orange). There’s some nice clustering along the arterial routes, possibly lending some credence to the clustering strategy.

Since we’re only interested in fast food, I wrote a quick script to scan through the .xml files and extract anything matching fast_food or coffee_shop. After a little deduplication (apparently Costa, Domino’s, and Itsu are inconsistent with branding – or OSM users are), we’re left with a list of London’s most common eateries:

Fast food

Chain              Number
Pret a Manger ★       130
Subway ★              116
McDonald’s ★           94
KFC ★                  71
Domino’s Pizza ★       51
Burger King ★          39
Itsu                   27
Papa John’s            22
Morley’s               18
Chicken Cottage        13

Coffee shops

Chain              Number
Starbucks ★           118
Costa ★               100
Caffè Nero ★           62
Le Pain Quotidien      10

The leaders here will probably not surprise anyone who’s been to London.
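For reference, the tag-scanning step amounts to something like this (a sketch – it only checks nodes, whereas some amenities in OSM are attached to ways, and the real deduplication was messier):

```python
import xml.etree.ElementTree as ET

eateries = []
for _, elem in ET.iterparse("london.osm"):   # hypothetical filename
    if elem.tag == "node":
        tags = {t.get("k"): t.get("v") for t in elem.findall("tag")}
        if tags.get("amenity") == "fast_food" or tags.get("cuisine") == "coffee_shop":
            eateries.append((tags.get("name"), float(elem.get("lat")), float(elem.get("lon"))))
        elem.clear()                         # keep memory usage manageable
```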

To save us from having to deal with a 14×14 matrix, I’m going to restrict the list to chains with 30 or more franchises, leaving nine (marked with a ‘★’ above). Now, to see whether their locations are clustered or evenly spread, I’ll consider each pairing of chains, \((A, B)\). Then for each individual location of chain \(A\), I’ll find the closest location of chain \(B\). The result is then a 9×9 array in which each entry is a list of distances from each \(A\) to the closest \(B\).
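Here’s a sketch of that computation using a k-d tree (assuming the locations have been projected into metres and grouped into a hypothetical dict chains, mapping chain name to an N×2 coordinate array):

```python
import numpy as np
from scipy.spatial import cKDTree

distances = {}                        # (A, B) -> distance from each A to its nearest B
for b_name, b_xy in chains.items():
    tree = cKDTree(b_xy)
    for a_name, a_xy in chains.items():
        d, _ = tree.query(a_xy)       # nearest-B distance for every A location
        distances[(a_name, b_name)] = d

# the log-space means used for the rankings below
mean_distance = {pair: np.exp(np.log(d).mean()) for pair, d in distances.items()}
```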

Here’s an example:

When it comes to KFCs and Caffè Nero (Caffès Nero?), we see a range of distances between 100 metres and 5 kilometres. The vertical dotted line is the mean distance; in this case, about 700 m – all in all, it’s a pretty good log-normal distribution. Whether there is clustering or not will be evident in the distributions at low distance – clustered locations will have an apparent surplus here, while chains using the even spread strategy should have very few. The distribution seen here is pretty close to what we’d expect for a random distribution, which is unsurprising given the (presumably?) low level of competition between KFC and Caffè Nero.

One interesting property of these histograms is that you might expect them to be symmetric – i.e. the histogram of distances from Caffè Neros to the nearest KFC should be identical to the one above. This isn’t the case, however, a fact that can be shown with a simple example:

Here, every B is near an A, but most As are not near a B.

With that out of the way, let’s move on to the giant matrix of histograms:

From this distogram matrix, I can find the most and least ‘attractive’ pairings, based on their mean distance (note that these means are done in log space, which gives slightly different results than just the mean of all distances):

Most attractive:
  1. Starbucks → Pret (127 m)
  2. Pret → Caffè Nero (166 m)
  3. Costa → Pret (176 m)
Least attractive:
  1. Domino’s → Domino’s (2.53 km)
  2. Pret → Domino’s (2.52 km)
  3. Burger King → Domino’s (2.51 km)

In summary, coffee shops are densely clustered, and nobody wants to be near a Domino’s (especially Domino’s). That latter example is exactly the kind of thing I was expecting to find – given Domino’s’ reliance on delivery, it makes little sense to have two franchises within one delivery-distance of each other. I’m now wondering if the Pret → Domino’s distance is indicative of an urban/suburban divide.

On the other hand, coffee shops seem to show the exact opposite trend (as well as being a nice real-world demo of the asymmetry I mentioned above). It seems that the public demand for coffee is utterly insatiable, or that people are particularly loyal to specific chains.

For each chain, using the distograms, I can work out which other chain has the lowest and highest mean distance. From the data above, we already know that the closest chain to both Starbucks and Costa is Pret. This exercise isn’t a great way of showing trends, since there are unequal numbers of each chain; Pret is a common nearby cafe because there are an awful lot of them.

We now have enough information to answer the original question: competing chains tend to be attractive, rather than repulsive.

The size of the Earth

Or, Taking ‘geometry’ too literally

One of the advantages of this time of year is that sunrise tends to happen just after I get up in the morning, meaning that occasionally, my south-east facing room gets treated to a nice sunrise. One of the best kinds is when the sun shines upwards onto the bottom of a layer of clouds, producing a lovely orange-pink sky which only lasts a few minutes. After one such sunrise, while I was slowly waking up over breakfast, I realised that this is only possible because of the curvature of the Earth (the lighting of the bottom of the clouds, I mean – though of course, this is true of the sunrise itself, too).

After some further thought, I realised that you could use this to measure the height of the clouds if you know the radius of the Earth, or vice versa, which is a bit more interesting.

The geometry of the problem is pretty simple, and will be familiar to anyone who’s calculated the distance to the horizon:

In this highly technical rendering, the brown circle is the Earth, with its radius \(R\); and above it, a flat layer of clouds at an altitude \(h\). From this height, the distance to the horizon is \(d\), which can be calculated using Pythagoras. The sun is rising to the right, and the amount of time the clouds will be illuminated is just the time that the sun spends between the dashed and dotted lines.

So the thing we need to know is the angle \(\varphi\), which is easily obtained from the right triangle:

\(\cos \varphi = \frac{R}{R + h}\)

To make things a little simpler, we can assume that \(R \gg h\), i.e. the altitude of the clouds is very small compared to the Earth, so \(\varphi\) is small. Expanding \(\cos \varphi \approx 1 - \varphi^2/2\) then gives (the full \(\cos\) can be kept if you really want):

\(\varphi = \sqrt{\frac{2h}{R+h}}\)

Now we’ve calculated \(\varphi\) one way, let’s do it another way, and equate the two. Naïvely, this could be done by assuming that the Earth rotates at a rate of \(2\pi\) radians per day, so the angle is just

\(\varphi = \frac{T}{1~\mathrm{day}}2\pi\),

where \(T\) is time for which the clouds are illuminated. The problem here is that the sun does not (generally) rise perpendicular to the horizon; if you’re not near the equator, it will also move to the right (or left if you’re in the southern hemisphere). Generally speaking, the sun’s deviation from vertical will be roughly equal to your latitude:

This approximation won’t work so well during the solstices at higher latitudes, but it’s pretty good when you consider I’m going to have to guess the altitude of some clouds in a minute. This brings our modified, more accurate formula for \(\varphi\) to

\(\varphi = \frac{T \cos \ell}{1~\mathrm{day}}2\pi\),

where \(\ell\) is your latitude.

Finally, we can equate the two formulae for \(\varphi\) to give us

\(\sqrt{\frac{2h}{R+h}} = \frac{T \cos \ell}{1~\mathrm{day}}2\pi\)

If we want to work out the radius of the Earth, this can be contorted into

\( R = h \left( \frac{1}{2} \left( \frac{1~\mathrm{day}}{\pi T \cos \ell} \right)^{2} - 1\right) \)

We can see that as long as \(T \ll 1~\mathrm{day}\), we’re going to get a pretty big number for \(R\). We have a problem though: our estimate for the radius is proportional to the estimated cloud height, and inversely proportional to the square of the estimated illumination time:

\( R \propto h T^{-2} \)

Essentially, this means that errors in \(h\) will produce proportional errors in \(R\), but the effect of errors in \(T\) will be doubled. There’s not much that can be done about this, so let’s plug in some numbers and see how we did:

Luckily, I know \(\ell = 52^\circ\), so that’s easy enough. The clouds in the photo above are altocumulus, which means they’re probably somewhere between 3 and 6 kilometres up; I’ll take the middle of the range – \( h = 4.5\) km. Finally, \(T\): I wasn’t sitting around watching the sky all morning, so I don’t have a measurement. About 10 minutes, or maybe a bit less, seems reasonable.
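Putting those numbers through the formula, as a quick sanity check:

```python
import numpy as np

h = 4500.0                  # cloud altitude (m)
T = 600.0                   # illumination time (s)
day = 86400.0               # seconds per day
lat = np.radians(52)

phi = 2 * np.pi * (T / day) * np.cos(lat)
R = h * (2 / phi**2 - 1)    # rearranged from phi = sqrt(2h / (R + h))
print(f"R = {R / 1e3:.0f} km")   # ~12,500 km
```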

Taking these values gives a final result of 12 500 kilometres. This would have been really great if we were measuring the Earth’s diameter, but since we’re not, it means that a factor of two has crept in somewhere. The actual value is about 6 350 kilometres.

So, what went wrong? I believe the main culprit is that it’s still quite close to the winter solstice (the photo was taken on the 20th of January), meaning that the sun rose at a shallower angle than it would later in the year. This angle is a bit tedious to calculate (so I haven’t), but to explain the factor of 2 error, you only need to vary it by 13\(^\circ\), which seems pretty reasonable. The squared term in the formula is actually quite helpful here, as you only need a small change in angle to produce a large change in \(R\).

There are a couple of other possibilities, too: I assumed the cloud layer is flat, while it will probably conform neatly to the curvature of the Earth. This also introduces the possibility that the edge of the cloud sheet could shade the rest, making the sunrise seem faster. On the other hand, I feel like 10 minutes is an overestimate of the duration. A lower value would counteract these effects, but (probably) only partially.

So there you have it: if you ever find yourself wondering how big the Earth is, simply look up at sunrise (and know your latitude, and the date, and the height of the clouds. Oh, and keep a stopwatch handy.)

Precipitopography

As I write this, the UK and Ireland are being lightly scoured by winter storm Brendan, which is producing some lovely patterns on the radar rainfall maps. Generally speaking, we get our weather from the southwest, and the prominent western coastlines receive the brunt of the rainfall. When you look at one of these maps, it seems as though you can even make out the coastline, marked by the place where the rain begins (it’s a lot more obvious on an animated map – the fronts and clouds move, but the coast doesn’t).

An occluded front moving west to east
(15:00 on 2019-01-13)
Now with sketched outline!

So, today’s question is: can we reconstruct Great Britain’s coastline using only rainfall data?

With the help of Bash and wget, I am now the proud owner of 288 snapshots of the UK and Ireland, sampling every five minutes from 00:00 to 23:55 on the 13th January 2019. During this time, the front visible above swept across the entirety of Great Britain, generally from west to east.

The first step to turning these into useful data is dealing with the awful colourbar in use on netweather.tv. Luckily, the images aren’t antialiased, so it’s as simple as comparing the pixel values to those in the colourmap.
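The decoding step looks something like this (a sketch – palette stands in for the hand-transcribed legend, mapping RGB tuples to rain rates):

```python
import numpy as np

def decode(frame, palette):
    """frame: (H, W, 3) uint8 array -> (H, W) rainfall rate in mm/hr."""
    rain = np.zeros(frame.shape[:2])
    for colour, rate in palette.items():
        rain[np.all(frame == colour, axis=-1)] = rate   # exact match, no antialiasing
    return rain

# the 24-hour integration later is then just:
# total = sum(decode(f, palette) for f in frames)
```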

OK, I get that rate of rainfall is basically a continuous function, but why is 90 mm/hr basically the same as 0?! Maybe I’ve just looked at viridis too long, but I have a harder time than I should with non-perceptually-uniform colourmaps.
Before and after recolourmapping
Zoom-in on a portion of the front, in the Bristol Channel (spoiler). I thought the structure of the front was pretty neat. As the front moves north, the Coriolis force seems to be splitting it into chunks.

Now, we’ll integrate over our 24-hour period – this just amounts to stacking all the images and summing the values in each pixel. The result should be a map of (a pretty good approximation of) the total rainfall received at each point in the UK and Ireland:

Relief map from Wikimedia Commons

That’s not too bad! It’s a long way from accurately capturing the coastline, but there’s a lot of detailed topography on view here. Incidentally, this effect is called orographic rainfall, and is caused by the terrain lifting incoming air, which forces water to condense. The area to leeward is termed the rain shadow, and gets less rain because the atmosphere has lost water content, and while descending the hills, the air warms, which increases its water-holding capacity.

But first, let’s talk artifacts. The most prominent is the mysterious blocky island over St George’s Channel between Ireland and Wales. This isn’t really visible in any of the individual frames, so I guess it must correspond to a region where whatever algorithm they use overestimates the rate of rainfall; it does seem to be in an area that’s quite far away from the radar stations.

You can roughly guess where the stations are by looking for the rays emanating from them – I assume these are caused by shadows and reflections from terrain and objects near the detectors. One neat thing is that you can see how the range of each sensor is about 150 km; this is too far for line-of-sight, but the radio waves are able to diffract over the horizon. I’m able to spot seven stations in the UK and Ireland (marked with black dots on the right-hand map), near Limerick, Belfast, Exeter, Preston, Glasgow, the Isle of Lewis, and a field in Hertfordshire. It feels like there should be more coverage in the east, but it’s hard to tell if there is any there, since it’s so dry.

On that note, there’s a surprisingly huge variation in the amount of rainfall here: aside from two pixels in Ireland which claimed over half a metre each (which I do not believe), a few spots in the Scottish Highlands claimed over 100 mm of rainfall, while the east of England only got 3 mm (I guess it felt like more with the wind).

Finally, there seems to be a very prominent (and literal) rain shadow off the northeast coast of Great Britain; for a distance of about 50 km, the radar records no rainfall at all. I’m not sure if this is real, but would be surprised if it were – how can a cloud know if it’s over land or sea? I was wondering if cliffs on the coast could be barriers to the radar signal, but I can’t come up with any reason why they should behave differently to cliffs on land. A mystery to be sure… I might retry this on another day, and see if the shadow’s still there.

Anyway, here’s a nice animated map – watch that front sweep across from 13:00 onwards.

P.S. I blame YouTube for the darkness/compression

Stellar Etymology

I like astronomy, and I also like etymology. So a while ago, I set out on a project to combine the two. Astronomy is (admittedly,) one (of many) thing(s) that just about all cultures have in common, and I really enjoy learning about the various myths and figures that people thought deserved a place in the sky.

When it comes to modern usage, three cultures have left the strongest impression: Greek, Roman and Arabic. Most of the constellations we know today were described by Ptolemy, while most of the stars in them have Arabic names. Here and there, there’s a scattering of Latin, and – as I was surprised to find – Italian, Persian, Hebrew, and even English. I was curious whether the northern and equatorial latitudes were dominated by classical names, while more modern schemes (which I assumed to be mostly English and Latin) crept in further south, applied to stars invisible from the northern hemisphere.

To do this, I collated a list of all IAU-approved star names. In my mind, this is the closest to an ‘official’ list of star names, though it is of course very Eurocentrically biased for historical reasons. I used VizieR to make my catalogue, but later found this list, which would have been much easier. I then set about the task of assigning each a language of origin. For some, this was simple: Alkaid, Bellatrix, and Electra are reasonably obviously Arabic, Latin, and Greek, respectively, while the majority are (to me) less obvious, such as Denebola and Sualocin. These resulted in a lot of Wikipedia searches.

In the end, my list contained 336 stars, of which 209 have Arabic etymology, 57 Latin, and 28 Greek. If you’re keeping count, you’ll then know that that leaves 42 stars with ‘other’ etymology. This is a lot more than I expected, outnumbering Greek, and getting pretty close to Latin. I’ll put those aside for now, and concentrate on the Big 3.

That means it’s time for graphs! So, as a reminder of the original question:

Does the language of origin of star names vary with latitude?

The short answer is “no, not really”:

This figure has a fair amount going on, so probably warrants explaining. In the top-right is a map of the sky, oriented like a world map, using an equirectangular projection. The coordinates “right ascension” and “declination” can be thought of as longitude and latitude, respectively – both can be measured in degrees (although astronomers prefer to measure R.A. in hours, and yes, it increases right-to-left). Anyway, coloured circles represent stars with names from the Big 3 languages, while black crosses are other languages. If you look closely, you might be able to make out a couple of constellations:

  • Part of Ursa Major (the plough/big dipper) is at the top centre with its seven Arabic stars
  • Orion appears at (90, 0), very much shrunken by the projection. The one Latin star (Bellatrix) is a purple island in a sea of green
  • Above and to the right of Orion, at about (60, 30) the dark blue blob denotes the Pleiades (or seven sisters), where ten stars with Greek names are piled on top of each other

Below and to the left of the sky map are charts showing the fraction of stars from each language in bins of right ascension and declination. Declination is the one we’re looking for, as that corresponds to the latitude on Earth at which a given star passes directly overhead. The trends are roughly what we’d expect: Arabic is dominant, and Latin and Greek get 10-20% each. Toward the south celestial pole, we start to run into issues of having very few named stars; in my catalogue, there are only three stars south of declination −60°, and none between −70° and −80°. There might be hints of an increase in Latin there, but there are simply too few stars for this to be convincing.

We can compare this to the distribution in right ascension. If stars’ names depend on latitude, there shouldn’t be any trends in RA. Here, we see basically the same pattern, and we can identify peaks where, say, the Pleiades dominate the Greek contribution around 60°.

So, having debunked my original prediction, let’s take a look at the miscellaneous etymologies that I found while collating my catalogue. Here’s the sky distribution again, focussing on those other stars:

There’s something interesting to say about most of the stars here, so let’s go through them.

The first thing to note is that five different stars share two Italian names. I can only guess this is due to confusion when copying star maps over the ages.

The four English contributions consist of two stars named after people (one conveniently including a date), one description, and ‘Peacock’. ‘Peacock’ and ‘Avior’ (which I’ve labelled as unknown) were both named for use as navigation stars for the Royal Air Force, as they didn’t appear to have existing names. ‘Peacock’ is simply named after its host constellation, Pavo; I have no idea where ‘Avior’ came from, and haven’t been able to find any information on it. If we follow the pattern, it should be called ‘Keel’ because it’s in Carina, but this is probably the wrong place to expect consistency.

There are a slew of languages that get one star each. There’s not much to say about these, other than how surprisingly ancient some of the languages are – if ‘Sargas’ really is Sumerian (which is uncertain), the name could be up to 5000 years old. There are also a few more unknown names – entries that simply appeared on someone’s star chart and got copied.

Finally, we reach Rotanev and Sualocin, courtesy of one Niccolò Cacciatore, who, when helping to compile a catalogue in 1814, couldn’t resist Latinising his name to Nicolaus Venator and claiming ownership over the two brightest stars in Delphinus (the Dolphin). I can’t deny I’d be tempted to do the same – it definitely beats paying for them.

Shooting at a hurricane

Sign from here

A couple of years ago, Florida found itself staring down Hurricane Irma, a powerful category-5 storm which had already wrought havoc in the Leeward Islands and Greater Antilles. In true Florida Man fashion, a Facebook event sprang up that encouraged people to shoot at Irma – something which thousands of Floridians apparently agreed was a good idea.

Of course this was all tongue in cheek; nobody sane would suggest that a display of martial power could have any effect on a hurricane.

Nonetheless, it did make me wonder how much gunfire it would take to stop a hurricane, and how it could actually work. I came up with three possible scenarios:

( I should probably say at this point that I have an amateur knowledge of hurricanes, and even less knowledge about firearms. I accept no responsibility for facepalms induced by the following text – you have only yourself to blame )

  1. A steady stream of bullets will entrain air, producing wind in the same direction as the gunfire. If the wind is strong enough, and well aimed, this could modify the hurricane’s airflow – perhaps even enough to disrupt it.
  2. Hurricanes are heat engines, and fired rounds are warm. A huge number of bullets could be used to deliver huge quantities of thermal energy. The problem here is that adding extra energy will make the hurricane stronger, but there are ways around this.
  3. Hurricanes can strengthen over water, generally weaken over land, and don’t like mountains at all. If effects 1 and 2 are weak enough, you’re going to need a literal mountain of ammunition. In this case, you’re better off constructing a wall of bullets to shield you from the hurricane.

Now, before we actually start working things out, we need to pick a gun. I’m going to go with the highest-selling firearm in the USA, the Smith & Wesson M&P Shield 9mm pistol. This pistol can fire a 7.5 g bullet at a velocity of 360 m/s. Getting the temperature of the bullet is a bit trickier – you can find a surprising amount of discussion on this topic; the consensus for 9mm rounds seems to be around 150 °C. I’m going to assume that the entire bullet is this hot (even though this is likely an overestimate – the jacket gets heated due to friction with the barrel).

Finally, I’m going to make one further assumption. People who are willing to shoot at a hurricane will also be willing to take a boat out into the storm and deliver the momentum/heat/pile of bullets where it’s most effective. This avoids the problem that these bullets will get, at best, a couple of kilometres out to sea, and if the hurricane’s that close, you’re probably too late.

1: Disruption by wind

As bullets travel through the air, they slow due to air resistance. Momentum is still conserved, though, so the momentum lost from the bullet is transferred to the surrounding air. Each bullet carries a maximum of 2.7 N s of momentum; encouragingly, this is enough to slow 1 m³ of air by 2 m/s (about 5 mph). So now we need to estimate how much momentum it takes to disrupt a hurricane.

Hurricanes are roughly cylindrical, and extend most of the height of the troposphere. Since air density drops roughly exponentially with height, the total momentum should be relatively easy to estimate; starting from the integral

\( \mathrm{momentum} = \int_0^{\infty} \int_0^{\infty} \int_0^{2\pi} \rho(z) v(r) r ~ \mathrm{d}\phi~\mathrm{d}r~\mathrm{d}z\)

we can approximate the vertical (\(z\)) integration like so:

\(\int_0^{\infty} \rho(z)~\mathrm{d}z \approx \rho_0 H \)

I’ve introduced the variables \(\rho_0\) and \(H\), which are the surface air density and the scale height of the atmosphere – 1.2 kg/m³ and 8 km respectively.

NHC prediction cone for Irma as of 2017-09-09, taken from here

The velocity integral is storm-specific, and a little trickier. Immediately before landfall in Florida, the National Hurricane Center predicted that hurricane-force winds extended approximately 100 km from the centre, while storm-force winds reached 300 km. From these two points, we can fit a power law:

\( v(r) \approx 23 000~\left(r\,/\,\mathrm{m}\right)^{-0.57}~\mathrm{m/s} \)

This is obviously going to create problems when \(r\) is very small, so I’ll truncate the integral at 30 km, which was roughly the radius of Irma’s eyewall. This is less obviously going to create problems when \(r\) is large, so I’ll limit the outer radius to 300 km, the extent of the storm-force winds. Finally, putting all of this together, we get a total momentum of around \(7 \times 10^{16}\) kg m/s.
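If you want to check that number, the integral is quick to evaluate numerically (using the truncation radii above):

```python
import numpy as np
from scipy.integrate import quad

rho0, H = 1.2, 8000.0                      # surface density (kg/m^3), scale height (m)
v = lambda r: 23000 * r**-0.57             # fitted wind profile (m/s)

momentum, _ = quad(lambda r: rho0 * H * v(r) * 2 * np.pi * r, 30e3, 300e3)
print(f"{momentum:.1e} kg m/s")            # ~6-7e16, i.e. 'around 7e16'
```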

Since each bullet is capable of delivering 2.7 N s of momentum, we’ll need about \(\mathbf{2.6 \times 10^{16}}\) bullets. That’s quite a lot – let’s see if the heat is any better.

2: Disruption by heat

The problem with adding heat to a hurricane is that it’s quite likely to intensify. Hurricanes work by exchanging heat from the warm ocean and cold upper atmosphere, so adding heat near sea level only feeds the hurricane. What if, instead, we used gunfire to heat the upper atmosphere? No temperature gradient means no convection; no convection means no hurricane.

Well, there’s an immediate problem – I’m happy to let our gunners use a boat, but I think a plane is a bit too far, and since our 9mm bullets can only reach an altitude of 1 km or so, we don’t stand much chance of heating the upper atmosphere. It should be enough, however, to heat the lower kilometre: this will prevent convection of water vapour, which is what carries most of the heat.

So, to work out how much heat we need, we must know three things: how temperature changes with height, how air density changes with height, and the heat capacity of (pretty humid) air.

Graph from here, cropped and with the relevant bit in red

This graph gives us the first piece of information – assuming a surface temperature of 30 °C, the temperature of moist air at 1 km is 27 °C. Also, changing the surface temperature doesn’t have much effect on the temperature gradient; it’s always around 3 °C cooler at an altitude of 1 km. We can do a quick bit of interpolation to get

\( T(z) = (30 - 3\,z/\mathrm{km})~^\circ\mathrm{C} \)

For the density, I can use an exponential formula: \( \rho(z) \, = \, \rho_0 \, \mathrm{exp}\left(-\frac{z}{8~\mathrm{km}}\right) \). The heat capacity is more complex than it initially appears, since it depends on the humidity of the air. If we assume it’s saturated with water, we’ll get an upper bound for the heat capacity of \(C_v\) = 754 J / kg °C.

Combining these three, we get another integral for the total heat needed (where \(z\) is measured in km, and \(R\) is the nominal radius of the hurricane):

\( \mathrm{heat} = \pi R^2 C_v \rho_0 \int_0^1 (30 - 3z)~\mathrm{exp}(-z / 8)~\mathrm{d}z \)

Plugging in the numbers again, we get a total heat requirement of \(6.9 \times 10^{15}\) J, or in bullets, \(\mathbf{4.8 \times 10^{16}}\). Bizarrely, this is almost identical to the previous answer; if you found yourself in possession of around \(10^{17}\) bullets, you could try both and see what works better.

Alternatively, you could try…

3: Disruption by giant pile of ammo

Both the previous estimates gave numbers of bullets in the tens of quadrillions – for comparison, the total number of bullets fired during the Second World War is estimated at 100 billion (\(10^{11}\)) or so. If we armed every Floridian with two pistols, and each pistol fired 10 rounds per second, it would take just under four years to deliver the required number of bullets. That’s actually a lot less than I’d have thought, but still far too slow.

Since we’ve got all these bullets, why not just dump them in a huge pile? If our rounds are roughly 1×1×2 cm, they take up a volume of around 100 cubic kilometres. Surely this is enough to disrupt any weather system that tries to pass over.

Terrain images © Google

Oh – that’s actually quite a lot smaller than I expected.

Between Florida and the Bahamas, the sea is about 500 metres deep, meaning the rounds could form an artificial island with an area of over 200 km². Again, that’s big, but probably not big enough – for comparison, Irma’s eye had an area of around 3000 km².

So, how about instead of disrupting the hurricane, we just try to defend against it? 100 km3 of material would allow us to build a flood barrier 185 metres tall (including borders with Alabama and Georgia), which would protect it even if all the ice on Earth melted.

Unsurprisingly, this giant flood barrier does create one problem (aside from decreasing Florida’s tourism income): hurricanes cause a lot of damage from rainfall. This wall would do a pretty good job of retaining water, which slightly misses the point of avoiding hurricane damage.

Luckily, the wall has a convenient built-in solution: hold a match to it, and it will provide enough energy to evaporate the entire rainfall produced by Irma… in a few hours. Then you’ll need another wall.

This whole exercise was obviously ridiculous, but I think it really shows the power of slightly-organised hand-waving (a.k.a. Fermi estimation). Two very different methods involving very different physics gave basically the same answer. Taking one of these answers and calculating something different (but related), we get basically the same thing. That, and the knowledge that you can’t stop a hurricane by shooting at it.

Black Hole Score

Thanks to ESO/S. Brunier for use of their lovely starmap. And no, binary black holes don’t actually look like this.

About 1.5 billion years ago, a pair of black holes, each about 30 times the mass of the Sun, collided.

About 4 years ago, the spacetime ripples from this merger caused the length of a set of tunnels to change by about the diameter of a proton. This made a lot of people very excited, and is widely regarded as one of the most important observations in astronomy.

Of course, these kinds of discovery come with a surge of media attention, and this one had an extra advantage (in addition to being about black holes, which are always popular): when treated as a sound wave, the measured distortion in spacetime made a funny sound.

I’ve heard a few different representations of the chirp (as it’s called), and heard chirps from a few different gravitational wave detections, but I’ve never seen anyone try to express it using musical notation.

My self-set challenge then, was to convert this:

into something resembling sheet music.

To determine the correct note pitches, durations, and dynamics, I originally planned to extract the original LIGO data and calculate the frequency spectrum myself. Unsurprisingly, it turns out this is really hard, so I just fit functions for the frequency and amplitude to the spectrogram from the paper. I’d like to say I did something fancy here, but I just eyeballed it (using the same colour scheme helps a lot).

LIGO spectrogram compared to my fit functions – overlaid on the top, and separately on the bottom

The frequency starts at 45 Hz, and the signal is loudest at 163 Hz; these can be expressed musically as F#1 and F3, giving us the range of pitches in the final score.
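For anyone playing along, here’s a quick pitch-to-note converter (assuming A440 equal temperament – strictly, the nearest note to 163 Hz is E3, a semitone below the F3 used in the score):

```python
import numpy as np

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz):
    """Nearest equal-tempered note name for a frequency (A4 = 440 Hz)."""
    midi = int(round(69 + 12 * np.log2(freq_hz / 440.0)))
    return NOTES[midi % 12] + str(midi // 12 - 1)

print(nearest_note(45))    # F#1
print(nearest_note(163))   # E3
```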

Next up, I needed to work out the note durations. There was some freedom of choice here, as I could adopt a lower tempo with faster notes. The final scaling is a compromise between having enough resolution to express the note lengths, and resorting to using all quasihemidemisemiquavers (1/128 notes). I decided on using one bar to represent 0.01 seconds of time, which gives a tempo of 24000 crotchets (quarter notes) per minute. Since the spectrogram covers 0.15 seconds, the sheet music will be 15 bars long.

Finally, dynamics – this is arguably the handwaviest conversion. The amplitude of the gravitational waves is expressed as strain – the relative size of the spacetime distortion. For GW150914, this was about \(10^{-21}\) (strain has no units). Plugging the strain into this formula

\( \mathrm{amplitude} = 20 \log_{10}(\mathrm{strain})~\mathrm{dB} \)

gives a peak amplitude of -420 dB. Then taking this table and wildly extrapolating, we can determine that at its loudest, the gravitational wave reaches a volume of ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp, which I’ll express as p¹²⁹. From there, I can just check my amplitude function to place the other dynamical markings.

Adding a final decoration or two gives the final piece:

pdf download

By the way, those final few notes are each 0.0003 seconds long – on a piano, the player’s hands would have to move rightward at 64 km/h.