Who wins if Bart Simpson, Son Goku and Johnny Bravo fight about decimal precision? And why on Earth is that related to their haircuts??
Posits are a recently proposed alternative to the widely used floating-point numbers. The approach: a mathematically clearer design of computer-representable numbers. The aim: more precision with fewer bits. If we had a CPU based on posits, would there be any benefit for the weather and climate forecasting community?
Yes, and the reason is quite simple: posits have a higher decimal precision (i.e. the number of decimal places that are still correct after unavoidable rounding errors) around 1, and yet a wide range of representable numbers. Basically, this means you always get a fairly precise answer even if you don't pay attention to where (on the real axis) you perform your calculations. But if you make sure that your calculation is scaled to be performed around 1, you win several decimal places of accuracy!
Guess now, what happens in many fluid simulations? ;)
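The notion of decimal precision can be illustrated with NumPy for standard floats (posits themselves are not available in NumPy, so this sketch only shows the float32 baseline, which is roughly constant across magnitudes; posits, in contrast, concentrate their precision around 1):

```python
import numpy as np

# Decimal precision at x: the number of decimal digits that survive rounding
# to the nearest representable number, i.e. -log10 of the relative spacing.
def decimal_precision(x, dtype=np.float32):
    x = dtype(x)
    return -np.log10(np.spacing(x) / x)

for x in [1e-6, 1e-3, 1.0, 1e3, 1e6]:
    print(f"x = {x:.0e}: {decimal_precision(x):.1f} decimal digits in float32")
```

For float32 this gives roughly 7 digits everywhere on the real axis; a 16-bit posit instead peaks around 1 and tapers off towards very large and very small numbers.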
(copyright for the cartoon pictures by somebody else...)
Ever played billiards with way too many balls that then even get stuck in the middle? In a Brownian motion simulator with a fixed seed in the middle of the domain, particles touching the seed get stuck and a fractal is built piece by piece. You can find the project description here or the model code on github
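A minimal sketch of the idea (not the actual model on github): random walkers wander on a grid and stick to the cluster as soon as they touch it, i.e. diffusion-limited aggregation. Grid size and particle number here are arbitrary illustration values.

```python
import random

def dla(n_particles=50, size=31, seed=1):
    """Diffusion-limited aggregation: random walkers stick to a growing cluster."""
    random.seed(seed)
    grid = [[False] * size for _ in range(size)]
    mid = size // 2
    grid[mid][mid] = True                      # fixed seed in the middle of the domain
    for _ in range(n_particles):
        i, j = random.randrange(size), random.randrange(size)   # release a walker
        while True:
            di, dj = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            i = min(max(i + di, 0), size - 1)  # random step, walls reflect
            j = min(max(j + dj, 0), size - 1)
            # the particle gets stuck when a neighbouring cell is part of the cluster
            if any(grid[i + a][j + b]
                   for a in (-1, 0, 1) for b in (-1, 0, 1)
                   if 0 <= i + a < size and 0 <= j + b < size):
                grid[i][j] = True
                break
    return grid

cluster = dla()
print(sum(map(sum, cluster)), "cells in the cluster")
```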
A snapshot of relative vorticity from a shallow water model at dx = 3.75km resolution (1024x1024 grid points) reveals such a wide range of scales of motion ... stunning.
In many climate science studies a running mean filter is used as a low-pass filter. Authors then usually claim to find some large scale (e.g. decadal) co-variability based on running mean-filtered time series. What's the problem with that?
To illustrate that, I took two time series that have a certain correlation by construction, i.e. if X is a given random variable, then Y is constructed via
Y_i = c*X_i + sqrt(1-c**2)*e_i
where c is the desired correlation and i is the index, usually representing time. The noise e_i follows a Gaussian normal distribution with mean 0 and standard deviation 1. As you can see from the figure, it gets interesting once X has some autocorrelation. In that case, the variability that is not responsible for the correlation of X and Y is smoothed out by the running mean filter, and one should expect a large increase of the correlation: a true correlation of 0.4 can easily go beyond 0.6 once a running mean filter is applied.
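The effect can be reproduced in a few lines; this is a sketch with an AR(1) process for X (lag-1 autocorrelation phi, window length and parameter values chosen for illustration), not the exact setup behind the figure:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi, rng):
    """AR(1) process with lag-1 autocorrelation phi and unit variance."""
    eps = rng.standard_normal(n)
    s = np.sqrt(1 - phi**2)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + s * eps[i]
    return x

n, c, phi = 10_000, 0.4, 0.9
x = ar1(n, phi, rng)                      # autocorrelated X
y = c * x + np.sqrt(1 - c**2) * rng.standard_normal(n)   # true correlation c

def runmean(a, w):
    """Running mean filter of window length w."""
    return np.convolve(a, np.ones(w) / w, mode="valid")

w = 11
r_raw = np.corrcoef(x, y)[0, 1]
r_filt = np.corrcoef(runmean(x, w), runmean(y, w))[0, 1]
print(f"correlation raw: {r_raw:.2f}, after {w}-point running mean: {r_filt:.2f}")
```

The white noise in Y loses most of its variance under the filter, while the autocorrelated part of X survives, which is exactly the inflation described above.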
What are the structures of Reynolds, Rossby and Ekman numbers in geostrophic turbulence?
To answer that question, I used a shallow water model, chose a resolution sufficiently high to simulate a wide range of eddies, and computed for a single time step the norm of the advective terms |adv|, the Coriolis terms |cor| and the diffusion terms |diff| (biharmonic mixing is applied in the model) for each grid cell. What you see is
Re = |adv| / |diff|
Ro = |adv| / |cor|
Ek = |diff| / |cor|
in log-scale. One day I want to make an art series of that...
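Given the three term-magnitude fields, the maps follow directly; this sketch uses synthetic magnitudes (the actual ranges in the model will differ), and also shows a consistency check that follows from the definitions:

```python
import numpy as np

# Hypothetical per-grid-cell term magnitudes; in the shallow water model these
# would be |adv|, |cor| and |diff| evaluated at a single time step.
rng = np.random.default_rng(1)
shape = (64, 64)
adv = 10.0 ** rng.uniform(-7, -4, shape)    # |advective terms|
cor = 10.0 ** rng.uniform(-5, -3, shape)    # |Coriolis terms|
diff = 10.0 ** rng.uniform(-9, -6, shape)   # |biharmonic diffusion terms|

Re = np.log10(adv / diff)   # Reynolds number, in log-scale
Ro = np.log10(adv / cor)    # Rossby number
Ek = np.log10(diff / cor)   # Ekman number

# by construction Ro = Re * Ek, i.e. log10(Ro) = log10(Re) + log10(Ek) pointwise
print(np.allclose(Ro, Re + Ek))   # → True
```

Only two of the three numbers are independent, so any of the three maps is determined by the other two.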
How to identify the seasonal cycle from a time series?
One might think of the seasonal cycle as the climatological mean of a given variable for a certain time in the year. Hence, we define the seasonal cycle as the average of a variable not on a continuous time axis but on a cyclic one, starting from 0 on Jan 1 00:00 and ending one year later after 365.25 days.
However, we can also estimate the seasonal cycle by a series of sine waves, a constant and possibly a linear trend. This comes with the disadvantage that a seasonal cycle of arbitrary shape is only approximated by a finite (and usually small) number of sine waves. In turn, this has several advantages:
- There is an analytic and continuous function for the seasonal cycle, that can be evaluated at any time of the year.
- A much smaller number of parameters is necessary to describe the seasonal cycle. For 4 sine waves, a constant and a linear trend this would be 10.
- The seasonal cycle is smooth.
This is an example of sea surface temperature from the ECCO2 data set. The seasonal cycle is nicely estimated and the lagged autocorrelation at 365.25 days is removed from the time series. The result is fully comparable to the standard approach described above (denoted here by rmean, as a running mean filter is applied afterwards for smoothing).
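The sine-wave estimate above boils down to a least-squares fit; here is a sketch on synthetic daily "SST" data (the ECCO2 time series is not reproduced, and trend/amplitude values are made up for illustration):

```python
import numpy as np

def seasonal_design(t, nharm=4, period=365.25):
    """Design matrix: constant, linear trend and nharm annual harmonics (sin & cos)."""
    cols = [np.ones_like(t), t]
    for k in range(1, nharm + 1):
        w = 2 * np.pi * k * t / period
        cols += [np.sin(w), np.cos(w)]
    return np.column_stack(cols)     # 2 + 2*nharm columns: 10 parameters for nharm=4

# synthetic daily series: trend + annual + semi-annual harmonic + white noise
rng = np.random.default_rng(0)
t = np.arange(10 * 365, dtype=float)
sst = 0.0005 * t + 2 * np.sin(2 * np.pi * t / 365.25) \
      + 0.5 * np.cos(4 * np.pi * t / 365.25) + 0.3 * rng.standard_normal(t.size)

A = seasonal_design(t)
coef, *_ = np.linalg.lstsq(A, sst, rcond=None)
cycle = A @ coef                     # analytic, smooth seasonal cycle (plus trend)
anomaly = sst - cycle                # residual, 365.25-day cycle removed
print(coef.size, "parameters")      # → 10 parameters
```

Because the fitted cycle is a sum of sines and cosines, it can be evaluated at any time of the year, and storing the 10 coefficients is all that is needed.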