authorAndré Nusser <andre.nusser@googlemail.com>2020-02-04 09:50:31 +0100
committerAndré Nusser <andre.nusser@googlemail.com>2020-02-04 09:50:31 +0100
commit98d16e373c5c30dd4ea1cf37ce8603f82ea15b9f (patch)
tree7185824784850d913ef3b42a6232ee9d4fe50590
parent34afd53e09169371acef03db6bcd3e23cdad6640 (diff)
Write section about emulation capabilities.
-rw-r--r--sampling_alg_lac2020/LAC-20.tex19
1 files changed, 13 insertions, 6 deletions
diff --git a/sampling_alg_lac2020/LAC-20.tex b/sampling_alg_lac2020/LAC-20.tex
index b4a4648..f20f2cf 100644
--- a/sampling_alg_lac2020/LAC-20.tex
+++ b/sampling_alg_lac2020/LAC-20.tex
@@ -267,7 +267,7 @@
\noindent Sampling drum kits well is a difficult and challenging task. In particular, building a drum kit sample bank with different velocity layers requires producing samples of very similar loudness, since changing the gain of a sample after recording makes it sound less natural. An approach that avoids this issue is to not categorize the samples into fixed groups but to simply compute their loudness and then dynamically choose a sample whenever a sample corresponding to e.g.\ a specific MIDI value is requested. We present a first investigation of algorithms performing this selection and discuss their advantages and disadvantages. We implemented the seemingly best candidate in DrumGizmo -- a FLOSS drum plugin -- and conducted experiments on how our suggested algorithms perform on the sampled drum kits.
\end{abstract}
-\section{Introduction}
+\section{Introduction} \label{sec:introduction}
\todoandre{Talk about the general problem of sample selection.}
\todoandre{Limit scope to drums.}
\todoandre{Talk about round robin.}
@@ -427,13 +427,20 @@ We already explained the core part of the sample selection algorithm. The remain
Note that the worst-case complexity of evaluating the objective function is linear in the number of samples for the instrument that we are considering. However, in practice we can avoid evaluation for most samples by simply starting with the \enquote{most promising} sample and recursively evaluate the neighbors until the future possible evaluations cannot beat the currently best value.
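As an illustration, the pruning idea can be sketched in a few lines. The following Python sketch is ours, not DrumGizmo's actual implementation: it assumes the samples are kept sorted by power value, that the distance term enters the objective as $\alpha \cdot |p - p_s|$, and that the remaining summands (modeled here by an opaque, hypothetical `extra_cost`) are non-negative, so that once the distance term alone reaches the best value found, no farther sample can win.

```python
import bisect

def select_pruned(powers, p, extra_cost, alpha):
    """Sketch of the neighbor-expansion search with pruning.

    powers: sample power values, sorted ascending.
    extra_cost(j): the non-negative remaining summands for sample j.
    Returns the index minimizing alpha * |p - powers[j]| + extra_cost(j),
    stopping as soon as the distance term alone rules out all
    remaining (farther) samples.
    """
    i = bisect.bisect_left(powers, p)
    left, right = i - 1, i          # expand outward from the closest power
    best_val, best_idx = float("inf"), -1
    while left >= 0 or right < len(powers):
        dl = p - powers[left] if left >= 0 else float("inf")
        dr = powers[right] - p if right < len(powers) else float("inf")
        if dl <= dr:                # visit the nearer of the two frontiers
            j, d, left = left, dl, left - 1
        else:
            j, d, right = right, dr, right + 1
        if alpha * d >= best_val:   # distance term alone cannot beat the best
            break
        val = alpha * d + extra_cost(j)
        if val < best_val:
            best_val, best_idx = val, j
    return best_idx
```

Note how only samples whose distance term stays below the current best value are evaluated; in the common case this touches just a handful of neighbors.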
-\section{Emulation Capabilities}
-\todo{Talk about which other algorithms we are a general case of, i.e., which algorithms can we emulate using the right power values and parameter settings.}
-The main advantage of the described sampling algorithm is that it can emulate the most common sample choice algorithms. In the following we describe which algorithms can be emulated and how we have to set the parameters and power values for that.
+\section{Emulating Other Sample Selection Algorithms}
+One of the main advantages of the described sampling algorithm is that it can emulate the most common sample choice algorithms.
+Sometimes this can be done by merely adjusting the parameters $\alpha, \beta, \gamma$; in other cases we additionally have to prepare the power values of the drum kit accordingly.
+In the following, we describe which algorithms can be emulated and how the parameters and power values have to be set.
-\paragraph{Round Robin.} bla
+First, note that the extreme choices of the parameters -- setting one of $\alpha, \beta, \gamma$ to a positive value and the others to zero -- each emulate a different selection algorithm.
-\paragraph{Which other??} bla
+\paragraph{Choose Closest.} If we set $\alpha > 0$ and $\beta = \gamma = 0$, then the objective function reduces to the first summand, and thus we always choose the sample $s$ that minimizes $\abs{p - p_s}$, i.e., the closest sample.
+
+\paragraph{Choose Oldest.} Similarly, if $\beta > 0$ but $\alpha = \gamma = 0$, then the objective function reduces to the second summand and is thus minimized by the sample $s$ that maximizes $t-t_s$, i.e., the oldest sample.
+
+\paragraph{Random Selection.} If now $\gamma > 0$ and $\alpha = \beta = 0$, then the objective function reduces to the third summand and we thus always select a sample uniformly at random.
+
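These three extreme cases can be illustrated with a small sketch. The objective below is our guess at a form matching the three summands described above -- a distance term, a term that shrinks with age, and a random term; the function name and the exact shape of the age term are assumptions, not the paper's definition.

```python
import random

def select(samples, p, t, alpha, beta, gamma):
    # samples: list of (power value p_s, last-played time t_s).
    # Assumed objective: one summand per parameter, as described in the text.
    def objective(s):
        power, last = s
        return (alpha * abs(p - power)       # distance to requested power
                + beta / (t - last)          # small for old samples
                + gamma * random.random())   # uniform noise
    return min(samples, key=objective)

samples = [(0.2, 0.0), (0.5, 2.0), (0.9, 4.0)]
closest = select(samples, p=0.55, t=5.0, alpha=1, beta=0, gamma=0)  # (0.5, 2.0)
oldest = select(samples, p=0.55, t=5.0, alpha=0, beta=1, gamma=0)   # (0.2, 0.0)
```

With only $\alpha$ active the sample with the nearest power value wins; with only $\beta$ active the least recently played one wins.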
+\paragraph{Round Robin.} The emulations above are straightforward; the arguably most commonly used sample selection algorithm in practice, however, is round robin. As already discussed in Section~\ref{sec:introduction}, round robin assumes the samples to be grouped in advance. In our setting this means that samples $s_1, \dots, s_k$ belonging to the same velocity group should all have the same power value, i.e., $p_{s_1} = \cdots = p_{s_k}$. Given a query with power value $p$, we always want to choose the closest group of samples, so $\alpha$ should be very large. Among the samples of that group, we then always want to play the oldest one, for which $\beta > 0$ suffices. If we additionally want to randomize round robin so that occasionally the second or third oldest sample is chosen, we set $\gamma$ to a small positive value.
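To see the round-robin behavior emerge, one can simulate repeated queries against one velocity group. Again, the objective form below is our assumption, not the paper's definition; with a huge $\alpha$, $\beta > 0$, and $\gamma = 0$, repeated hits on the same group cycle through its samples oldest-first.

```python
import random

def choose(samples, p, t, alpha, beta, gamma):
    # samples: list of (power value, last-played time); objective form assumed.
    def objective(i):
        power, last = samples[i]
        return (alpha * abs(p - power)
                + beta / (t - last)
                + gamma * random.random())
    return min(range(len(samples)), key=objective)

# One velocity group of three equal-power samples, plus a sample from
# another group that the huge alpha keeps out of the selection.
samples = [(0.7, -1.0), (0.7, -2.0), (0.7, -3.0), (0.3, -4.0)]
order = []
for t in range(3):                   # three consecutive hits at power 0.65
    i = choose(samples, p=0.65, t=float(t),
               alpha=1000.0, beta=1.0, gamma=0.0)
    order.append(i)
    samples[i] = (samples[i][0], float(t))  # update last-played time
# order == [2, 1, 0]: oldest first, cycling within the group
```

Setting $\gamma$ to a small positive value instead of zero would occasionally swap neighbors in this order, yielding the randomized round robin mentioned above.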
\section{Implementation} \label{sec:implementation}
\todobent{Give a short introduction to DrumGizmo, including a link to the git repository.}