Mark @mwczrk challenged me to create a map of Mars' topography with a perceptually linear colormap for 1 Ethereum, after some such maps were called out as a bad example of rainbow colormaps in a recent Scientific American article. Here is the result! Read the full story.

Who wins if Bart Simpson, Son Goku and Johnny Bravo fight about decimal precision? And why on Earth is that related to their haircuts??

Posits are a recently proposed alternative to the widely used floating-point numbers. The approach: a mathematically cleaner design of computer-representable numbers. The aim: more precision with fewer bits. If we had a CPU based on posits, would there be any benefit for the weather and climate forecasting community?

Yes, and the reason is quite simple: posits have a higher decimal precision (i.e. the number of decimal places that are still correct after unavoidable rounding errors) around 1, and yet a wide range of representable numbers. Basically, this means you always get a reasonably precise answer even if you don't pay attention to where (on the real axis) you perform your calculations. But if you make sure your calculations are scaled to be performed around 1, you win several decimal places of accuracy!

Guess now, what happens in many fluid simulations? ;)
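To see where the extra precision comes from, here is a minimal sketch of the bit budget, assuming the standard posit(16,1) format (the bit counting below is my own back-of-the-envelope arithmetic, not a posit library): the further a value's magnitude 2^scale is from 1, the more bits the regime eats, and the fewer fraction bits remain. Float16, in contrast, always keeps 10 fraction bits for normal numbers.

```python
import math

def posit_fraction_bits(scale, nbits=16, es=1):
    """Fraction bits left in a posit(nbits, es) for a value of magnitude
    ~2**scale, after sign, regime and exponent bits are deducted."""
    k = scale >> es                          # regime value, floor(scale / 2**es)
    regime_len = k + 2 if k >= 0 else 1 - k  # run of identical bits + terminator
    return max(nbits - 1 - regime_len - es, 0)

# float16 always has 10 fraction bits; posit16 has more near 1, fewer far away
for scale in (0, 8, 16, 24):
    bits = posit_fraction_bits(scale)
    print(f"2^{scale:>2}: {bits:2d} fraction bits ≈ {bits * math.log10(2):.1f} decimal digits")
```

Around 1 (scale 0) a posit16 keeps 12 fraction bits, two more than float16, which is exactly the win you get by rescaling a calculation towards 1.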

(copyright for the cartoon characters lies with their respective owners...)

Ever played billiards with way too many balls, and then they even get stuck in the middle? Using a Brownian motion simulator with a fixed seed in the middle of the domain, particles touching the seed get stuck, and a fractal is built piece by piece. You can find the project description here or the model code on github.
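A minimal sketch of such a simulation (diffusion-limited aggregation) in numpy; grid size, particle count and launch rule below are my own simplifications, not the actual model code linked above:

```python
import numpy as np

rng = np.random.default_rng(42)

N = 101                               # odd grid size, so there is a centre cell
grid = np.zeros((N, N), bool)
c = N // 2
grid[c, c] = True                     # the fixed seed in the middle of the domain
moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
r_max = 0                             # radius of the aggregate so far

def walk_one_particle():
    """Release a particle just outside the aggregate and random-walk it
    until it touches a stuck cell (then it sticks) or leaves the domain."""
    global r_max
    while True:
        phi = rng.uniform(0, 2 * np.pi)
        r0 = min(r_max + 5, c - 2)    # launch circle just outside the cluster
        i = int(round(c + r0 * np.sin(phi)))
        j = int(round(c + r0 * np.cos(phi)))
        while abs(i - c) < c and abs(j - c) < c:
            if any(grid[i + di, j + dj] for di, dj in moves):
                grid[i, j] = True     # touching the aggregate -> stuck
                r_max = max(r_max, int(np.hypot(i - c, j - c)) + 1)
                return
            di, dj = moves[rng.integers(4)]   # one Brownian step
            i, j = i + di, j + dj
        # walker left the domain: relaunch it

for _ in range(150):
    walk_one_particle()

print(grid.sum(), "cells in the aggregate")   # 150 stuck particles + 1 seed
```

Plot `grid` with `plt.imshow` and the branching fractal structure appears; the branches screen the interior, which is why newly released particles rarely make it to the centre.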

A snapshot of relative vorticity from a shallow water model at dx = 3.75km resolution (1024x1024 grid points) reveals such a wide range of scales of motion ... stunning.

In many climate science studies a running mean filter is used as a low-pass filter. Authors then usually claim to find some large scale (e.g. decadal) co-variability based on running mean-filtered time series. What's the problem with that?

To illustrate the problem, I took two time series that have by construction a certain correlation: if *X* is a given random variable, then *Y* is constructed via

`Y_i = c*X_i + sqrt(1-c**2)*e`

where *c* is the desired correlation and *i* is the index, usually representing time; *e* follows a Gaussian distribution with mean 0 and standard deviation 1, drawn independently at every time step. As you can see from the figure, it gets interesting once *X* has some autocorrelation. In that case, the variability that is not responsible for the correlation of *X* and *Y* is smoothed out by the running mean filter, and one should expect a considerable increase in the correlation: a true correlation of 0.4 can easily go beyond 0.6 once a running mean filter is applied.
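A quick numerical check of that claim; the AR(1) coefficient, window length and series length below are arbitrary choices of mine, just to make the effect visible:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, a, w = 50_000, 0.4, 0.9, 11   # length, true correlation, AR(1) coeff, window

# X is an autocorrelated AR(1) process with unit variance
X = np.zeros(n)
for i in range(1, n):
    X[i] = a * X[i - 1] + np.sqrt(1 - a**2) * rng.standard_normal()

# Y = c*X + sqrt(1-c^2)*e with white noise e, as in the construction above
Y = c * X + np.sqrt(1 - c**2) * rng.standard_normal(n)

def running_mean(x, w):
    return np.convolve(x, np.ones(w) / w, mode="valid")

r_raw = np.corrcoef(X, Y)[0, 1]
r_smooth = np.corrcoef(running_mean(X, w), running_mean(Y, w))[0, 1]
print(f"correlation: {r_raw:.2f} raw, {r_smooth:.2f} after {w}-point running mean")
```

The white noise *e* is damped by the filter much more strongly than the autocorrelated *X*, so the sample correlation is inflated well beyond its true value of 0.4.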

# Reynolds, Rossby and Ekman numbers in geostrophic turbulence

The art of turbulence

What are the structures of Reynolds, Rossby and Ekman numbers in geostrophic turbulence?

To answer that question, I used a shallow water model, chose a resolution sufficiently high to simulate a wide range of eddies, and computed, for a single time step, the norm of the advective terms |adv|, Coriolis terms |cor| and diffusion terms |diff| (biharmonic mixing is applied in the model) for each grid cell. What you see is

```
Re = |adv| / |diff|
Ro = |adv| / |cor|
Ek = |diff| / |cor|
```

in log-scale. One day I want to make an art series of that...
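For illustration, here is a sketch of how those three fields can be computed from a velocity snapshot with centred finite differences on a doubly periodic grid. The velocity field, Coriolis parameter `f` and biharmonic viscosity `nu` below are made up for the example, not the shallow water model's actual values:

```python
import numpy as np

n, dx = 256, 3750.0        # grid points and spacing [m]
f, nu = 1e-4, 1e9          # Coriolis parameter [1/s], biharmonic viscosity [m^4/s]
L = n * dx
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

# a made-up doubly periodic velocity field standing in for the model snapshot
u = np.sin(2 * np.pi * X / L) * np.cos(2 * np.pi * Y / L)
v = -np.cos(2 * np.pi * X / L) * np.sin(2 * np.pi * Y / L)

def ddx(q): return (np.roll(q, -1, 0) - np.roll(q, 1, 0)) / (2 * dx)
def ddy(q): return (np.roll(q, -1, 1) - np.roll(q, 1, 1)) / (2 * dx)
def lap(q): return (np.roll(q, -1, 0) + np.roll(q, 1, 0)
                  + np.roll(q, -1, 1) + np.roll(q, 1, 1) - 4 * q) / dx**2

# per-grid-cell norms of the advective, Coriolis and diffusive terms
adv = np.hypot(u * ddx(u) + v * ddy(u), u * ddx(v) + v * ddy(v))
cor = f * np.hypot(u, v)                       # |f k x u| = f |u|
diff = nu * np.hypot(lap(lap(u)), lap(lap(v))) # biharmonic: double Laplacian

eps = 1e-300                                   # avoid division by zero
Re = adv / (diff + eps)                        # Reynolds number
Ro = adv / (cor + eps)                         # Rossby number
Ek = diff / (cor + eps)                        # Ekman number
# plot np.log10 of each field, masking the zeros, to get pictures like the above
```

Since biharmonic mixing acts as nabla^4, the diffusion term is a double Laplacian, and by construction Re * Ek = Ro wherever all three terms are non-zero.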