
French abstract and other frontmatter fixes

master
Jack Henschel 2 years ago
parent f9e038886c
commit 2613ecfcfc
  1. aaltothesis.cls (2 changes)
  2. thesis.tex (38 changes)

@@ -1036,7 +1036,7 @@
 \ifstrequal{#1}{french}{%
 \renewcommand*{\AbstractLang}{french}%
 \renewcommand*{\unilogo}{\hspace{-1em}\includegraphics[height=0.9in]{images/Eurecom.pdf}}%
-\renewcommand*{\Cod@}{2021}
+\renewcommand*{\Cod@}{2022}
 }{%
 \PackageError{aaltothesis}{%
 Only english, finnish or french is allowed as optional parameter%

@@ -151,7 +151,7 @@
 %% * the second when you want to print your thesis to bind it, or
 %% * the third when producing a ps file and a pdf/a from it.
 %%
-\documentclass[english, 12pt, a4paper, sci, utf8, a-1b, online]{aaltothesis}
+\documentclass[english, 12pt, a4paper, sci, utf8, a-2b, online]{aaltothesis}
 %\documentclass[english, 12pt, a4paper, elec, utf8, a-1b]{aaltothesis}
 %\documentclass[english, 12pt, a4paper, elec, dvips, online]{aaltothesis}
@@ -277,7 +277,19 @@
 %% as the abstract page.
 %%
 \thesisabstract{
-TODO TODO TODO TODO TODO TODO TODO TODO TODO
+Cloud computing and software containers have seen major adoption over the last decade.
+Due to this, several container orchestration platforms were developed, with Kubernetes gaining a majority of the market share.
+Applications running on Kubernetes are often developed according to the microservice architecture.
+This means that applications are split into loosely coupled services that are distributed across many servers.
+The distributed nature of this architecture poses significant challenges for the observability of application performance.
+We investigate how such a cloud-native application can be monitored and dimensioned to ensure smooth operation.
+Specifically, we demonstrate this work based on the concrete example of an enterprise-grade application in the telecommunications context.
+Finally, we explore autoscaling for performance and cost optimization in Kubernetes
+- i.e., automatically adjusting the amount of allocated resources based on the application load.
+Our results show that the elasticity obtained through autoscaling improves performance and reduces costs compared to static dimensioning.
+Moreover, we perform a survey of research proposals for novel Kubernetes autoscalers.
+The evaluation of these autoscalers shows that there is a significant gap between the available research and usage in the industry.
+We propose a modular autoscaling component for Kubernetes to bridge this gap.
 }
 %% Copyright text. Copyright of a work is with the creator/author of the work
@@ -453,21 +465,29 @@ TODO TODO TODO TODO TODO TODO TODO TODO TODO
 We investigate how such a cloud-native application can be monitored and dimensioned to ensure smooth operation.
 Specifically, we demonstrate this work based on the concrete example of an enterprise-grade application in the telecommunications context.
 Finally, we explore autoscaling for performance and cost optimization in Kubernetes
--- i.e., automatically adjusting the amount of allocated resources based on the application load.
+--- i.e., automatically adjusting the amount of allocated resources based on the application load.
 Our results show that the elasticity obtained through autoscaling improves performance and reduces costs compared to static dimensioning.
 Moreover, we perform a survey of research proposals for novel Kubernetes autoscalers.
 The evaluation of these autoscalers shows that there is a significant gap between the available research and usage in the industry.
 We propose a modular autoscaling component for Kubernetes to bridge this gap.
 %% This thesis also contributes an overview of available literature about cloud application autoscaling.
 \end{abstractpage}
 \begin{abstractpage}[french]
-TODO
-Lorem ipsum.
+Le cloud computing et les conteneurs logiciels ont connu une adoption majeure au cours de la dernière décennie.
+Par conséquent, plusieurs plateformes d'orchestration de conteneurs ont été développées, parmi lesquelles Kubernetes a obtenu la majorité des parts de marché.
+Les applications fonctionnant sur Kubernetes sont souvent développées selon l'architecture de microservices, ce qui signifie que les applications sont divisées en services faiblement couplés qui sont distribués sur de nombreux serveurs.
+La nature distribuée de cette architecture pose des défis importants pour l'observabilité des performances des applications.
+Nous étudions comment une telle application cloud-native peut être surveillée et dimensionnée pour assurer un fonctionnement sans heurts.
+Plus précisément, nous démontrons ce travail en nous appuyant sur l'exemple concret d'une application d'entreprise dans le contexte des télécommunications.
+Enfin, nous explorons l'autoscaling pour l'optimisation des performances et des coûts dans Kubernetes
+--- c'est-à-dire l'ajustement automatique de la quantité de ressources allouées en fonction de la charge de l'application.
+Nos résultats montrent que l'élasticité obtenue par l'autoscaling améliore les performances et réduit les coûts par rapport au dimensionnement statique.
+De plus, nous réalisons une étude des propositions de recherche pour de nouveaux autoscalers Kubernetes.
+L'évaluation de ces autoscalers montre qu'il existe un écart important entre la recherche disponible et l'application dans l'industrie.
+Nous proposons donc un composant modulaire de mise à l'échelle automatique pour Kubernetes afin de combler cet écart.
 \end{abstractpage}
 %% Preface
