authorAndré Nusser <andre.nusser@googlemail.com>2020-02-01 11:25:17 +0100
committerAndré Nusser <andre.nusser@googlemail.com>2020-02-01 11:25:17 +0100
commit12b97c2d78627cadf43d2f76bedb98bbdb1ae5f2 (patch)
tree6a62decf8664bca1ae08623ae2a214c95ecf21b4
parent030fd17de3a9980e2c054f5bf1c09843f604da31 (diff)
Commit old uncommitted changes.
-rw-r--r--sampling_alg.tex13
1 file changed, 8 insertions, 5 deletions
diff --git a/sampling_alg.tex b/sampling_alg.tex
index 2a0ef2b..461e7e4 100644
--- a/sampling_alg.tex
+++ b/sampling_alg.tex
@@ -128,23 +128,26 @@ I will list a number of drawbacks of this algorithm in the following. These will
Velocity Layers. Round Robin and Random Selection. Is there actually any related academic work? What is the mathematical problem that we are actually trying to solve?
\section{Requirements}
-\paragraph{Normal Distribution.} The samples should roughly be drawn from a normal distribution.
+% \paragraph{Normal Distribution.} The samples should roughly be drawn from a normal distribution.
+\paragraph{Close Sample.} The chosen sample should be reasonably close to the requested power value.
\paragraph{Avoid Same Samples.} When we have multiple samples to choose from, we should always pick one that was last played far enough in the past.
\paragraph{Randomization.} To avoid patterns (like e.g. in round robin), we want some form of randomization.
\paragraph{Equal Probability.} Locally, samples should have almost the same probability of being chosen.
\section{Suggested Algorithms}
\subsection{Resampling}
-Resample from normal distribution until we find a fitting sample. This seems rather wasteful.
+Resample from the normal distribution until we find a fitting sample, aborting after a certain number of tries. This seems rather wasteful.
\subsection{Objective Function}
-Define an objective function which depends on the history of samples and the current power requested -- or better, a power which comes from the normal distribution of the power requested. The sample that we choose is then the one which minimizes this objective function. This is a nice way to balance the two contradictory requirements of sampling by normal distribution and avoiding samples that were just played.
+Define an objective function which depends on the history of samples and the power currently requested. We then choose the sample that minimizes this objective function. This is a clean way to balance the two contradictory requirements of choosing a sample close to the requested power and avoiding samples that were just played.
-The rough algorithm should go as follows. A sample with power $l$ is requested. We draw one sample from the normal distribution around $l$ and call it $p$. Let $t_q$ be the time at which $q$ was played last (the unit does not matter as it is parametrized by $\beta$ anyway), and let $r(q,t)$ be a random number generator uniformly producing numbers in the range $[0,1]$. At the current time $t$, we now want to find the sample $q$ minimizing the objective function
+The rough algorithm should go as follows. A sample with power $p$ is requested.
+% We draw one sample from the normal distribution around $l$ and call it $p$.
+For any sample $q$, let $p_q$ denote its power and $t_q$ the time at which $q$ was last played (the unit does not matter, as it is scaled by $\beta$ anyway), and let $r(q,t)$ be a random number generator producing numbers uniformly in the range $[0,1]$. At the current time $t$, we now want to find the sample $q$ minimizing the objective function
\[
f(q, t) \coloneqq \alpha \cdot (p - p_q)^2 + \beta \cdot (t_q - t)^{-2} + \gamma \cdot r(q,t).
\]
-We have to ensure that $t_q \neq t$ to avoid division by zero.
+We have to ensure that $t_q \neq t$ to avoid division by zero.\todo{Adapt to what is actually in the code.}
\section{Implementation Details}
Instead of iterating over all samples and computing the objective function for each, we can do a binary search for the sample whose power is closest to the requested power and then scan downwards and upwards from there until no better value is possible. We can stop in each direction as soon as the first summand alone exceeds the best objective value found so far, since the other two summands are non-negative.
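The objective-function selection described in the patch can be sketched as follows. This is a minimal illustration, not the actual implementation: the weight values, the data layout (a flat list of per-sample powers plus a dict of last-played times), and the function name are all assumptions.

```python
import random

# Hypothetical weights; the patch leaves alpha, beta, gamma unspecified.
ALPHA, BETA, GAMMA = 1.0, 1.0, 0.1

def choose_sample(powers, p, t, last_played):
    """Pick the sample minimizing f(q, t) from the text.

    powers:      list of per-sample powers p_q (assumed layout)
    p:           requested power
    t:           current time (must differ from every last-played time)
    last_played: dict mapping sample index -> time it was last played
    """
    best, best_val = None, float("inf")
    for q, p_q in enumerate(powers):
        t_q = last_played.get(q)
        # A never-played sample gets no recency penalty at all.
        recency = 0.0 if t_q is None else BETA * (t_q - t) ** -2
        val = ALPHA * (p - p_q) ** 2 + recency + GAMMA * random.random()
        if val < best_val:
            best, best_val = q, val
    return best
```

With these weights, a sample played a moment ago picks up a large recency penalty, so a slightly more distant power wins instead.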
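The pruned search from the implementation-details section can be sketched like this. Again a hypothetical illustration: it assumes the per-sample powers are kept in a sorted list, and the weights and names are invented for the sketch.

```python
import bisect
import random

ALPHA, BETA, GAMMA = 1.0, 1.0, 0.1  # hypothetical weights

def objective(p, p_q, t, t_q):
    recency = 0.0 if t_q is None else BETA * (t_q - t) ** -2
    return ALPHA * (p - p_q) ** 2 + recency + GAMMA * random.random()

def find_best_sample(powers, p, t, last_played):
    """powers must be sorted; scan outwards from the closest power."""
    start = bisect.bisect_left(powers, p)
    best, best_val = None, float("inf")
    # Upwards, then downwards. Once ALPHA * (p - p_q)^2 alone reaches
    # best_val, no sample further out in that direction can win, because
    # the other two summands are non-negative.
    for direction in (range(start, len(powers)), range(start - 1, -1, -1)):
        for i in direction:
            if ALPHA * (p - powers[i]) ** 2 >= best_val:
                break
            val = objective(p, powers[i], t, last_played.get(i))
            if val < best_val:
                best, best_val = i, val
    return best
```

The pruning test mirrors the sentence in the text: the scan in each direction stops as soon as the first summand exceeds the best value found so far.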