
Adjust thesis title and other minor fixes

master
Jack Henschel 2 years ago
parent 7e38eb74d3
commit 11db708b22

@@ -914,7 +914,7 @@
 \hspace*{\coverpageindent}%
 \parbox[t][132pt+6mm]{0.75\textwidth-\coverpageindent}{%
 \noindent% First position the title
-\parbox[t]{0.75\textwidth-\coverpageindent}{\raggedright%
+\parbox[t]{0.95\textwidth-\coverpageindent}{\raggedright%
 \usefont{T1}{phv}{b}{n}\fontsize{18}{21}\selectfont{\th@sistitl@}}\par%
 \vspace{8mm}%
 \noindent% followed by the author

@@ -236,7 +236,7 @@ Nevertheless, there are certainly advances to be made by having some amount of c
 <!-- challenges of autoscalers: See Qu et al Auto-scaling web applications -->
-\nop
+\nop{}
 {\renewcommand{\arraystretch}{1.5}
 \begin{table}[h!]
 \centering
@@ -512,13 +512,13 @@ Horovitz and Arian \cite{EfficientCloudAutoScalingSLA_2018} proposed an algorith
 It is important to note that this solution is not a full autoscaler by itself:
 instead it is a machine learning algorithm that only learns and suggests the ideal thresholds for scaling.
 For example, "to not raise response time above 100ms, add more replicas when CPU utilization is above 78.5%".
-This makes the operation more transparent to the cluster administrator and requires less trust, as the administrators remains in full control.
+This makes the operation more transparent to the cluster administrator and requires less trust, as the administrator remains in full control.
 <!-- At its core is the reinforcement learning-based Q-Learning approach but it is enhanced with several optimizations for faster convergence. -->
 As the name suggest, Q-Threshold leverages a Q-Learning algorithm and is enhanced with several optimizations for faster convergence.
 Q-learning is a model-free reinforcement-learning algorithm that learns the optimal actions in state space through a reward function.
 In the context of horizontal scaling the goal of Q-Learning is finding the optimal autoscaling policy while obeying a specified SLA.
-Therefore, the reward function needs to tradeoff SLA violations (in their case response time, which should be as low as possible) for resource utilization (which should be as high as possible).
+Therefore, the reward function needs to trade-off SLA violations (in their case response time, which should be as low as possible) for resource utilization (which should be as high as possible).
 When the SLA is violated, the reward is negative.
 The authors have conducted extensive simulations with different variations of their algorithm and found satisfying results: Q-Threshold completely avoids SLA violations and has a stable behavior when scaling due to workload changes.
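The Q-learning scheme described in this hunk — states, actions, and a reward that penalizes SLA violations while favoring utilization — can be sketched as a toy example. This is only an illustration of the general technique, not the authors' Q-Threshold algorithm or its convergence optimizations: the candidate thresholds, the response-time model, and the reward values below are all invented for the sketch.

```python
import random

random.seed(0)

THRESHOLDS = [50, 60, 70, 80, 90]  # candidate CPU-% scaling thresholds (states)
ACTIONS = [-1, 0, +1]              # lower / keep / raise the threshold
SLA_MS = 100                       # response-time SLA, as in the 100ms example

def simulate_response_ms(threshold):
    # Invented toy model: a higher threshold means scaling out later,
    # i.e. fewer replicas and longer response times.
    return 40 + threshold + random.uniform(-10, 10)

def reward(threshold):
    # Strongly negative reward on SLA violation,
    # otherwise reward high utilization.
    if simulate_response_ms(threshold) > SLA_MS:
        return -10.0
    return threshold / 100.0

# Tabular Q-values for every (state, action) pair.
Q = {(s, a): 0.0 for s in range(len(THRESHOLDS)) for a in range(len(ACTIONS))}
alpha, gamma = 0.1, 0.9

# Q-learning is off-policy, so we can sample states and actions uniformly.
for _ in range(20000):
    state = random.randrange(len(THRESHOLDS))
    action = random.randrange(len(ACTIONS))
    nxt = min(max(state + ACTIONS[action], 0), len(THRESHOLDS) - 1)
    r = reward(THRESHOLDS[nxt])
    target = r + gamma * max(Q[(nxt, a)] for a in range(len(ACTIONS)))
    Q[(state, action)] += alpha * (target - Q[(state, action)])

# Follow the learned greedy policy to its fixed point and report the threshold.
state = len(THRESHOLDS) - 1
for _ in range(len(THRESHOLDS)):
    best = max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])
    state = min(max(state + ACTIONS[best], 0), len(THRESHOLDS) - 1)
print("suggested scaling threshold:", THRESHOLDS[state], "% CPU")
```

Under this toy model only the 50% threshold never violates the 100ms SLA, so the greedy policy settles there; in the paper's setting the same mechanism would instead suggest a threshold such as the 78.5% from the quoted example.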

@@ -227,7 +227,8 @@
 %% argument in box brackets. This is done because the title is part of the
 %% metadata in the pdf/a file, and the metadata cannot contain linebreaks.
 %%
-\thesistitle{Dimensioning, Performance and Optimization of Cloud Applications}
+\thesistitle{Dimensioning, Performance and Optimization of Cloud-native Applications}
+%% \thesistitle[Dimensioning, Performance and Optimization of Cloud-native Applications]{Dimensioning, Performance\\and Optimization of\\Cloud-native Applications}
 %\thesistitle[Title of the thesis]{Title of\\ the thesis}
 %%
@@ -425,6 +426,8 @@ text must be identical to the text on the abstract page.
 skipbelow=\topsep
 ]{leftrule}
+\newcommand{\nop}[1]{}
 %% All that is printed on paper starts here
 %%
 \begin{document}
@@ -464,7 +467,7 @@ text must be identical to the text on the abstract page.
 \end{abstractpage}
 \begin{abstractpage}[french]
 Je suis Jack.
-TODO
+Lorem ipsum.
 \end{abstractpage}
