# Choose your optimization algorithm carefully

Why is simulation important anyway? Well, first off, we need it because many phenomena (I would even say all interesting phenomena) cannot be captured by a closed-form mathematical expression, which is essentially what you can derive with pen and paper or mathematical software.

Sounds abstract? OK, then let me give you an example of why we need simulation. Say your job is optimization and you have one variable, where all you have to do is find the point in that variable's range that gives you the best solution. Let's say we want to figure out the optimal time we need to boil our bolognese for a perfect taste. Now, the bolognese can boil anywhere from 0 minutes to 1200 minutes, which is a very wide range. Say also that we have a way to evaluate whether the bolognese is perfect or not given the number of minutes it has been cooked. Let's call this magical evaluation function f(t). Now the annoying thing about f is that we can only evaluate it at a given t, and we don't know what it looks like in general. As such, we have to search over t for the value that gives the perfect taste.
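To make this concrete, here is a minimal Python sketch of such an evaluation function. The function body is entirely made up for illustration (the real taste function is unknown); the point is that we can only probe it one t at a time:

```python
import math

def f(t):
    """Hypothetical taste score for bolognese cooked t minutes (0-1200).
    A bumpy, multimodal stand-in -- the real f is unknown and can only
    be sampled point by point, never written down in closed form."""
    return math.sin(t / 30.0) * math.exp(-((t - 240.0) / 400.0) ** 2)

# All we can do is sample it at individual points:
score = f(90)
```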

For this reason we have to search through the entire space, i.e., simulate the function until we have a clear idea of what it looks like. The landscape presented here is just an illustration of what it can look like in a one-dimensional problem. Now imagine that you have D dimensions, each cut into M pieces. The number of candidate points is M^D, so the space grows exponentially with the number of dimensions, which is a nasty equation. This basically means that if you have 10 variables, each cut into 10 pieces, you will have 10 billion possible solutions to search through. Does 10 billion sound OK to you? Well, maybe it is. But then let me put it in computing time instead. Let's say you have a fairly complicated evaluation function, such that you can only evaluate 1,000 points per second. This means that you will have to spend (10^10)/1000/3600/24 ≈ 116 days of simulation time on a computer to search through this landscape. To make matters worse, most variables or parameters you have to search through rarely decompose into only 10 valid parts. A more common scenario is that the parameters and variables are continuous by nature, which in principle means that the number of "parts" is infinite. For all practical purposes it might mean that you need hundreds of parts for each dimension, yielding (100^10)/1000/3600/24/360 ≈ 3.2 billion years of computation time. That's almost as old as our planet, which is currently about 4.5 billion years old. We cannot wait that long!
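The back-of-envelope numbers above are easy to check. A quick Python version of the same arithmetic:

```python
# Cost of exhaustive grid search at 1,000 evaluations per second.
evals_per_second = 1_000

# 10 variables x 10 grid points each -> 10^10 candidate points
coarse_grid = 10 ** 10
days = coarse_grid / evals_per_second / 3600 / 24
print(f"{days:.0f} days")      # ~116 days

# 10 variables x 100 grid points each -> 100^10 = 10^20 candidate points
fine_grid = 100 ** 10
years = fine_grid / evals_per_second / 3600 / 24 / 360
print(f"{years:.1e} years")    # ~3.2e9 years
```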

Luckily, most computers today can perform more than 200 billion calculations per second, which means that even our naive calculation comes down to (100^10)/200000000000/3600/24/360 ≈ 16 years. This in turn means that if you are able to cut the number of splits down to 40, then you can search through the landscape brute force in (40^10)/200000000000/3600 ≈ 14.6 hours. As it turns out, science has given us the ability to search through landscapes in a non-brute-force manner, which means that we do not have to evaluate every point we cut the space into. We can evaluate gradients, and we can fit functions to data to guide the exploration of the space. This all boils down to the fact that we can search through hundreds of variables and parameters within a day using advanced methods. We will look into some of these methods right now!
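The same arithmetic at 200 billion evaluations per second, again as a quick Python check:

```python
# Cost of exhaustive grid search at 200 billion evaluations per second.
evals_per_second = 200_000_000_000

# 100 grid points per dimension, 10 dimensions
fine_grid = 100 ** 10
years = fine_grid / evals_per_second / 3600 / 24 / 360
print(f"{years:.0f} years")    # ~16 years

# 40 grid points per dimension, 10 dimensions
coarse_grid = 40 ** 10
hours = coarse_grid / evals_per_second / 3600
print(f"{hours:.1f} hours")    # ~14.6 hours
```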

## Setting up the experiment

We would like to optimize the bolognese quality by tweaking the time we cook it. As we saw above, it's not a nice function to optimize, which makes it excellent for stress-testing algorithms for robustness and the ability to avoid local maxima.
Since there are many optimization methods and algorithms, I will benchmark the most common ones along with the ones I like to use. This is not an exhaustive list, and there may be many more worth adding, but this is the list I know. Hopefully it can guide you in your choices moving forward.
The algorithms we will look into are primarily implemented in NLopt, except for simulated annealing, which comes from the standard optim function in R. The list of algorithms is:

- Simulated Annealing (SANN)
- Improved Stochastic Ranking Evolution Strategy (ISRES)
- Constrained Optimization BY Linear Approximations (COBYLA)
- Bound Optimization BY Quadratic Approximation (BOBYQA)
- Low-storage Broyden–Fletcher–Goldfarb–Shanno algorithm (LBFGS)
- Augmented Lagrangian algorithm (AUGLAG)
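To give a feel for what these algorithms do under the hood, here is a minimal Python sketch of the first one on the list, simulated annealing. In the benchmark itself SANN comes from R's optim; this is just a toy re-implementation, run on a made-up multimodal function (an assumption for illustration, not the real taste landscape):

```python
import math
import random

def f(t):
    # Hypothetical multimodal objective standing in for the taste function
    return math.sin(t / 30.0) * math.exp(-((t - 240.0) / 400.0) ** 2)

def simulated_annealing(f, lo, hi, n_iter=20_000, t0=1.0, seed=42):
    """Maximize f on [lo, hi]: propose a random nearby point, always
    accept improvements, and accept worsening moves with probability
    exp(delta / temperature), cooling the temperature over time."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    val = f(x)
    best_x, best_val = x, val
    for i in range(n_iter):
        temp = t0 * (1.0 - i / n_iter) + 1e-9              # linear cooling
        cand = min(hi, max(lo, x + rng.gauss(0, (hi - lo) * 0.05)))
        cand_val = f(cand)
        delta = cand_val - val
        if delta > 0 or rng.random() < math.exp(delta / temp):
            x, val = cand, cand_val                         # accept move
        if val > best_val:
            best_x, best_val = x, val                       # track best seen
    return best_x, best_val

t_opt, score = simulated_annealing(f, 0, 1200)
```

The early high-temperature phase lets the search jump out of local maxima; as the temperature drops, the walk settles into the best basin it has found, which is exactly the behavior we want on a bumpy landscape like the one above.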