Markus Ableidinger (Johannes Kepler Universität Linz)

Structure preserving splitting integrators for stochastic differential equations

In this talk we will discuss stochastic differential equations whose solution trajectories are governed by geometric structures such as energy preservation or dissipation. An efficient strategy for constructing structure-preserving integrators is to split the SDE into subsystems which inherit the geometric structure and to build a numerical integrator by composing the exact flows of the subsystems. We apply this approach to SDEs arising in micromagnetism (the stochastic Landau-Lifshitz-Gilbert equation) and in neuroscience (the stochastic Jansen and Rit neural mass model).
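A minimal toy sketch of the composition idea in Python (an illustration under simplifying assumptions, not one of the schemes from the talk): for geometric Brownian motion \(dX_t = a X_t\,dt + b X_t\,dW_t\), both the drift subsystem and the noise subsystem admit exact flows, and a Lie-Trotter composition of these flows gives a splitting integrator.

    import numpy as np

    # Split dX = a*X dt + b*X dW into the drift subsystem dX = a*X dt and
    # the noise subsystem dX = b*X dW; both flows are known exactly, and
    # one step of the splitting integrator composes them.
    a, b, x0, T, n = 0.5, 0.3, 1.0, 1.0, 1000
    h = T / n
    rng = np.random.default_rng(0)

    x, w = x0, 0.0
    for _ in range(n):
        dw = rng.normal(scale=np.sqrt(h))
        x *= np.exp(a * h)                    # exact flow of the drift part
        x *= np.exp(b * dw - 0.5 * b**2 * h)  # exact flow of the noise part (Ito)
        w += dw

    exact = x0 * np.exp((a - 0.5 * b**2) * T + b * w)
    print(x, exact)  # the two subflows commute here, so the split is even exact

For this toy equation the subsystems commute, so the composition reproduces the exact solution; in genuinely structured examples such as the stochastic Landau-Lifshitz-Gilbert equation the point is rather that each subflow preserves the geometric invariant.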

Randolf Altmeyer (Humboldt-Universität zu Berlin)

Covariance estimation in high dimensions using random matrix theory

Classically, covariance estimation is done using the empirical covariance, which has good properties for large sample sizes if the dimension is fixed. We consider the harder problem where the dimension \(p\) and the sample size \(n\) grow such that \(p/n \rightarrow c\) for some positive constant \(c\). The key tools are random matrix theory as well as some Fourier analysis. We will present the necessary background and related work. After that, we will extend the known methods and try to analyze the optimality of the extensions.
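As a quick numerical illustration of this regime (a sketch assuming i.i.d. standard Gaussian data with identity covariance), the eigenvalues of the empirical covariance do not concentrate near 1 but spread over the Marchenko-Pastur bulk \([(1-\sqrt{c})^2, (1+\sqrt{c})^2]\):

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 2000, 1000                     # c = p/n = 0.5
    X = rng.standard_normal((n, p))
    S = X.T @ X / n                       # empirical covariance; true covariance is I
    ev = np.linalg.eigvalsh(S)
    c = p / n
    print(ev.min(), (1 - np.sqrt(c))**2)  # both close to 0.086
    print(ev.max(), (1 + np.sqrt(c))**2)  # both close to 2.914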

Andre Beinrucker (Universität Potsdam)

The Wisdom of the Crowds

In 1907 Sir Francis Galton, a famous British mathematician, took part in a public event at which the weight of an ox was estimated by members of the audience. He remarked that the median of the estimates was surprisingly close to the true weight of the ox. This effect was later named the "Wisdom of the Crowds" and has been applied in many areas of our daily life. In this talk, we discuss what is needed to make a crowd wise and demonstrate the concept in practical exercises. Further, I show how the idea is related to the subject of my dissertation.

Todor Bilarev (Humboldt-Universität zu Berlin)

Multiplicative Market Impact Model with Transient Impact: Stability, Absence of Arbitrage and Hedging

In this talk, we will discuss a market model with a single risky asset and a large trader whose actions can affect its price. Both instantaneous market impact (when block trades are executed) and transient market impact are incorporated in our model. The price process is given by a positive process (a geometric Brownian motion) multiplied by a non-negative factor depending on the full trading history of the large trader. We will explain how the wealth process of the large trader evolves in time by linking our model to the so-called Marcus canonical stochastic differential equation, for which a Wong-Zakai-type approximation result holds. Thus, the wealth process for general semimartingale trading strategies is derived as the limit (in the ucp topology) of the wealth processes corresponding to absolutely continuous strategies. Moreover, we show how the same process appears as a limit of block-trade approximations. In addition, we show the absence of arbitrage opportunities (known as the NA condition in liquid markets).
In the proposed model we will discuss the optimal trade execution problem, which leads to free-boundary singular stochastic optimization problems, as well as the pricing and hedging of European contingent claims, which leads to non-linear pricing PDEs.

Laura Bittner (Bergische Universität Wuppertal)

Probabilistic Reliability Model in Design for Fatigue Life under Cyclic Thermomechanical Loading

Blades in gas turbines are exposed to very strong forces caused by, among other things, rotation, air pressure and temperature. These forces and temperature loads can be modeled mathematically by the PDE of thermal elasticity. The stress and strain states resulting from its solution influence the durability of the component.
If this process is repeated cyclically, the formation of cracks due to material fatigue is unavoidable. Since the place and time of cracks appearing at the surface are not predictable, it is important to adopt a stochastic approach. In particular, Poisson point processes turn out to be appropriate for this task. Furthermore, a Weibull approach is used to determine the failure probability.
These failure probabilities, which also depend on the component's shape, can be expressed by local cost functionals. Using methods of shape optimization, we prove the existence of optimal designs maximizing the component's reliability.

Thomas Buder (Technische Universität Dresden)

Field cancerization in the human colon

This work investigates the formation of a cancer field within the human colon by a stochastic approach. In particular, we are interested in the influence of the structure of the human colon on the formation of such a cancer field. In order to model the human colon, we introduce the two-folded crypt structure, which is composed of complete subgraphs representing human colonic crypts. Subsequently, basic dynamics of cell proliferation and cell death on this structure are modeled using Moran dynamics. The absorption probabilities and times are calculated and allow a quantitative estimate of the expected timescale needed to form a cancer field in the human colon.
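For the complete-graph building blocks, the absorption (fixation) probabilities of the Moran model are classical; the following Python snippet evaluates the textbook formula (with a relative fitness \(r\) as an assumed parametrisation, not necessarily the one used in the talk):

    # Fixation probability of i mutants with relative fitness r in a Moran
    # process on a complete graph of N individuals.
    def fixation_probability(i, N, r):
        if r == 1.0:
            return i / N                # neutral case
        return (1 - r**-i) / (1 - r**-N)

    print(fixation_probability(1, 100, 1.1))  # a single advantageous mutant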

Thomas Cayé (Eidgenössische Technische Hochschule Zürich)

Local Utility Maximization with Small Nonlinear Price Impact

We consider a market with one safe asset and one risky asset with general, not necessarily Markovian dynamics. In this setting, we study the tradeoff between expected returns, the variance of the corresponding positions, and nonlinear trading costs proportional to a power of the order flow. In the limit of small costs, explicit formulas are obtained.

This is joint work (in progress) with M. Herdegen and J. Muhle-Karbe.

Jamil Chaker (Universität Bielefeld)

Uniqueness of the martingale problem for a class of nonlocal operators

Let \(Z_t\) be an anisotropic stable Lévy process with a different stability index in each coordinate direction. In this talk we study stochastic differential equations of the form \(dX_t = \xi(X_{t-}) dZ_t\) by analyzing the associated martingale problem.

Peter Czuppon (Albert-Ludwigs-Universität Freiburg)

The one-dimensional cooperative biased voter model

Motivated by the phenomenon of cooperation in nature, we introduce a model with nearest-neighbor dynamics. To this end, we consider a biased voter model with types 0 (defectors) and 1 (cooperators). The bias \(\alpha\), the selection parameter, gives an advantage to the defectors. This interacting particle system is well understood and all kinds of limit behavior have been studied. The analysis relies heavily on the monotonicity of the process.
We extend this model by introducing a cooperation mechanism: individuals of type 1, the cooperators, help neighboring individuals of either type to reproduce, at rate \(\gamma\), onto another arbitrary neighboring site. Unfortunately, the monotonicity of the process is lost under this additional transition, so most of the known methods do not apply in this case.
The goal is to identify the limiting distributions of this interacting particle system and to study the asymptotic behavior of the model for different parameter ranges and dimensions. In the talk we will only cover the one-dimensional case. Here we see that coexistence is not possible: for \(\alpha>\gamma\) the defectors take over the system, whereas for \(\gamma>\alpha\) the cooperators survive almost surely on the infinite lattice.

Alexander Dalinger (Technische Universität Darmstadt)

On the hydrodynamic behavior of a 1D system with nearest-neighbor interactions

We consider a one-dimensional system of particles interacting with their nearest neighbors. Starting from an arbitrary configuration of particles, the aim is to understand the time evolution of the particle density. It can be proved that the particle density converges in the hydrodynamic limit to a solution of a nonlinear heat equation. This result will be discussed in the talk.

Todor Dinev (Universität Trier)

On Buehler confidence regions

We present a generalized method for constructing the smallest (with respect to set inclusion) confidence region that is monotone (with respect to a "designated statistic") for an arbitrary parameter of interest in a general model on a totally preordered sample space. This method, which goes back to Robert J. Buehler (1957), is known mainly in reliability theory, but it can be applied quite generally in interval estimation and, in our opinion, deserves wider attention.
After establishing some optimality properties of Buehler confidence regions, we briefly discuss, by means of several examples, the problem of choosing a sensible designated statistic.

Dennis Dobler (Universität Ulm)

Inference Procedures for Cumulative Incidence Functions

The analysis of cumulative incidence functions (and more generally of transition probability matrices of non-homogeneous Markov processes) is of great importance (especially in medical applications) and constantly gives rise to new statistical developments. Since observations may be incomplete, e.g. due to random left-truncation and right-censoring, estimation of these probabilities is conducted by employing counting processes and the Aalen-Johansen estimator. However, results on weak convergence towards a Gaussian process cannot be utilized straightforwardly, since the complicated limiting covariance structure depends on unknown quantities. Already established tests, which are popular in the literature, have a simple limit distribution but may have poor power.
In order to construct asymptotically valid and powerful inference procedures, we introduce a large variety of resampling techniques based on a martingale representation of the Aalen-Johansen estimator. A new aspect of this approach is the possibility to choose these multipliers flexibly depending on the data, covering, for instance, the wild bootstrap as well as the weird bootstrap. In doing so, we obtain conditional weak convergence towards a Gaussian process with the correct covariance functions, resulting in consistent tests and confidence bands.
For small samples, the performance in the competing risks set-up is assessed via simulation studies illustrating the type I error control and analyzing the power of the developed tests and confidence bands.

Markus Doktor (Technische Universität Kaiserslautern)

Gibbs Tessellations for Foam Modeling

If we look at the microstructure of a foam, we observe characteristic features such as the length of the edges and the shape of the cells. Common modeling techniques use the Laguerre tessellation generated by a parametric point process (e.g. a sphere packing or a Poisson process); these can capture the size of the cells but not the length of the edges or the shape of the cells. We therefore use a different approach - Gibbs processes - which, in contrast to other models, includes the geometric characteristics of the process in the model. In the talk we will see that an intuitive model causes problems in simulation and parameter estimation, which leads to a generalized model. For this model we obtain fast simulation and consistent estimation techniques. Finally, we investigate some examples.

Achim Dörre (Universität Rostock)

Bayesian Estimation of a Proportional Hazards Model for Doubly Censored Durations

Doubly censored data consist of uncensored, left censored and right censored observations and occur frequently in survival time contexts. Bayes estimators of a parametric proportional hazards model for random durations subject to double censoring are investigated. In particular, we prove consistency and asymptotic normality of the Bayes estimators with respect to quadratic loss. In addition, estimators of standard errors are derived. The proposed model is applied to rating class data from a large German bank, and the initial rating class effect and time effects for rating transitions are analysed. In the given setting, no conjugate prior distribution is available. Therefore, the computation of the estimators is performed by means of MCMC algorithms, for which logarithmic ratios of the posterior density are derived. A simulation study is conducted in order to analyse the finite-sample performance in comparison to maximum likelihood estimation. It turns out that parametric Bayes estimators and confidence intervals can be derived in a consistent manner and are asymptotically normal. Their performance in finite samples is satisfactory and does not require asymptotic justifications. It is argued that the proposed Bayes estimators are a reasonable alternative to maximum likelihood estimation for small and moderate sample sizes.

Karl Ewald (Technische Universität Wien)

Confidence Sets Based on the LASSO Estimator

In a linear regression model with fixed dimension, we investigate the distribution of the LASSO estimator in finite samples as well as in an asymptotic setup. In finite samples and asymptotically, in the case where the LASSO estimator is tuned to perform conservative model-selection, we derive formulas for computing the minimal coverage probability of the entire parameter vector for a large class of sets. This enables the construction of valid confidence sets based on the LASSO estimator. The choice of shape for the confidence sets is also discussed. Moreover, in the case where the LASSO estimator is tuned to enable consistent model-selection, we give a simple confidence set with minimal coverage probability converging to one. The findings are a generalization of results of Poetscher & Schneider (Electron. J. Stat., 2010).

Mark Feodoria (Christian-Albrechts-Universität Kiel)

Portfolio Optimization under Fixed Transaction Costs

My topic is the derivation of trading strategies that are asymptotically optimal with respect to expected utility under fixed transaction costs. Fixed costs means in this context that every transaction of an investor reduces his wealth by a fixed amount \(\varepsilon\). We consider a model consisting of a bond \(S^0=1\) and (for now) one stock \(S\) whose price process is assumed to be a general Itô process \begin{equation*} dS_t= \mu_t dt + (\sigma_t)^{\top} dW_t. \end{equation*} This class of models includes the Black-Scholes model. We seek a predictable process \(\varphi\) that maximizes the expected utility of terminal wealth, where \(\varphi_t\) denotes the (random) number of shares in the portfolio at time \(t\in[0,T]\).
From heuristic considerations one can derive a candidate strategy \(\varphi\) that always stays within an interval \([\varphi^* + \Delta\varphi^-, \varphi^* + \Delta\varphi^+]\) around the strategy \(\varphi^*\) that is optimal in the absence of transaction costs, and that trades back to \(\varphi^*\) whenever a boundary is reached. In my thesis I want to prove that the expected utility of this strategy is optimal up to a term that vanishes with the transaction costs \(\varepsilon\).
The core of my talk will be the heuristic derivation of the candidate strategy and the proof of its asymptotic optimality.

Tobias Fissler (Universität Bern)

Higher order elicitability and Osband's principle

A statistical functional, such as the mean or the median, is called elicitable if there is a scoring function or loss function such that the correct forecast of the functional is the unique minimizer of the expected score. Such scoring functions are called strictly consistent for the functional. The elicitability of a functional opens the possibility to compare competing forecasts and to rank them in terms of their realized scores.
In this talk, we explore the notion of elicitability for multi-dimensional functionals and give both necessary and sufficient conditions for strictly consistent scoring functions. One of the main results is a multi-dimensional version of Osband's principle. We cover the case of functionals with elicitable components, but we also show that one-dimensional functionals that are not elicitable can be a component of a higher order elicitable functional. In the case of the variance this is a known result. However, an important result of this paper is that spectral risk measures with a spectral measure with finite support are jointly elicitable if one adds the 'correct' quantiles. A direct consequence of applied interest is that the pair (Value at Risk, Expected Shortfall) is jointly elicitable under mild conditions that are usually fulfilled in risk management applications.
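Two standard one-dimensional examples (textbook facts, not specific to the preprint) illustrate these notions: the squared error \(S(x,y)=(x-y)^2\) is strictly consistent for the mean, since \(\mathbb{E}[S(x,Y)]=(x-\mathbb{E}[Y])^2+\operatorname{Var}(Y)\) is uniquely minimized at \(x=\mathbb{E}[Y]\); and the pinball loss \(S_\alpha(x,y)=(\mathbb{1}_{\{y\leq x\}}-\alpha)(x-y)\) is strictly consistent for the \(\alpha\)-quantile whenever the latter is unique.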

This talk is based on a joint work with Johanna F. Ziegel (University of Bern). The preprint is available at http://arxiv.org/abs/1503.08123.

Gordon Frank (Universität Rostock)

An additive hazard rate model for randomly doubly truncated durations

This paper deals with random double truncation in which the upper truncation variable equals the lower one plus a deterministic quantity. Using random point measures, a maximum likelihood estimator is derived for exponential durations with unknown truncation distribution. In addition, this exponential model is generalized to an additive hazard rate model. Both consistency and asymptotic normality are proven. The randomness of the number of observations, which arises under random truncation, is taken into account for both models. A simulation study shows that this results in a reduction of the mean squared error in the exponential model. Both models are applied to a data set containing the durations until insolvency of German companies.

Peter Frentrup (Humboldt-Universität zu Berlin)

Optimal Liquidation in a Multiplicative Market Impact Model

We describe a market model for trading a single risky asset, in which a large investor seeks to liquidate his position in an infinite time horizon, while maximizing expected proceeds. Trading large orders has an adverse effect on the asset's price, which is determined by the investor's current volume impact and is multiplicative in relation to the current price. The volume impact is a deterministically mean-reverting process whenever no trade occurs. We justify why the proceeds should have a certain form, heuristically by describing a multiplicative limit order book, and also by drawing a link to Marcus type SDEs. The martingale optimality principle suggests that the two dimensional state space of volume impact and number of held assets is separated by a free boundary into a wait- and a sell-region. We derive this free boundary using classical calculus of variations and prove optimality. If time permits, we discuss a variant of our model with stochastic volume impact, in which case verification of optimality reduces to showing certain analytic properties of Hermite functions, some of which remain to be fully proven.

Daniel Gaigall (Leibniz Universität Hannover)

Comparison of statistical tests in the paired and independent survey sampling case

Let \(n\in\mathbb N\) and \((X_1 , Y_1),\dots,(X_{2n},Y_{2n})\) be independent and identically distributed random variables with values in a product set \(S\times S\) with distribution \({\cal L}(X_1,Y_1)\). In many practical situations one can choose between the experimental design of a paired survey sampling with observations of the independent and identically distributed random variables \((X_1 , Y_1),\dots,(X_n,Y_n)\) and the experimental design of an independent survey sampling with observations of the two independent blocks of independent and identically distributed random variables \(X_1,\dots,X_n\) and \(Y_{n+1},\dots,Y_{2n}\).
An obvious and important question is which of the two designs is better. Assume that the underlying statistical problem is the testing problem of equality of the marginal distributions, \(\operatorname H:~{\cal L}(X_1)={\cal L}(Y_1),~\operatorname K:~{\cal L}(X_1)\neq {\cal L}(Y_1)\). Depending on the kind of experiment, different statistical tests apply.
In this talk, such tests will be compared by means of asymptotic relative efficiencies. Of course, these efficiencies depend on the distribution \({\cal L}(X_1,Y_1)\). The focus is on the practically most important case of the multivariate normal distribution.

Julian Gerstenberg (Leibniz Universität Hannover)

Rémy's chain and the Brownian excursion

A classical result (due to D. Aldous and others) on the asymptotics of random plane trees states: if \(T_n\) is a uniformly distributed random binary tree with \(n\) leaves and \(H'_{T_n}\) the associated, suitably rescaled Harris path (or "Dyck path") of \(T_n\), then \(H'_{T_n}\) converges in distribution, as \(n\rightarrow\infty\), to a Brownian excursion \(B^{ex}\) (of length \(1\)).
The talk first introduces Rémy's chain, a Markov chain \(T=(T_n)_{n\geq 1}\) in which, for every \(n\), \(T_n\) is a uniformly distributed binary tree with \(n\) leaves. The known convergence-in-distribution result is sharpened in this situation: convergence in distribution becomes almost surely uniform convergence to a Brownian excursion \(B^{ex}\), in symbols \begin{equation*} H'_{T_n}\longrightarrow_{\text{unif.}} B^{ex}~~~\text{almost surely as}~n\rightarrow\infty. \end{equation*} This limit turns out to be the "best possible" limit of Rémy's chain: \(B^{ex}\) generates the tail \(\sigma\)-algebra of \(T\) (modulo \(P\)), in symbols \begin{equation*} \sigma(B^{ex})~=_P~\mathcal{T}(T):=\bigcap\limits_{n\geq 1}\sigma(T_{n},T_{n+1},\dots). \end{equation*} In particular, every random variable \(X\) that is an almost sure limit of a sequence of random variables of the form \(f(T_n)\) admits a representation \(X=\Phi(B^{ex})\) (almost surely) for a suitable function \(\Phi\).
The talk sketches (roughly) how this result can be obtained from the known distributional asymptotics. Important tools are martingale convergence theorems and the Hewitt-Savage theorem.
As a simple corollary, the proof yields a fundamental property of the Brownian excursion: with \(\left\| B^{ex}\right\|:=\sup_{0\leq t\leq 1}B^{ex}(t)\) one has \begin{equation*} \sigma\left( B^{ex}\right) ~=_P~\sigma\left( \frac{B^{ex}}{\left\| B^{ex}\right\| }\right), \end{equation*} in words: rescaling a Brownian excursion to maximal height \(1\) loses no information. In particular, \(\left\| B^{ex}\right\|\) can be recovered from the normalized excursion (almost surely).
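Rémy's chain can be simulated directly: pick a uniform node of the current tree and split it with a fresh leaf on a random side. The Python sketch below does this for plane binary trees (representing leaves by integer labels in insertion order is an implementation assumption; the Harris path is omitted):

    import random

    def count_nodes(t):
        return 1 if isinstance(t, int) else 1 + count_nodes(t[0]) + count_nodes(t[1])

    def insert_at(t, k, leaf):
        # replace the k-th node of t (preorder) by a new internal node
        # joining the old subtree and a fresh leaf, side chosen by coin flip
        if k == 0:
            return (t, leaf) if random.random() < 0.5 else (leaf, t)
        left_size = count_nodes(t[0])
        if k - 1 < left_size:
            return (insert_at(t[0], k - 1, leaf), t[1])
        return (t[0], insert_at(t[1], k - 1 - left_size, leaf))

    def remy(n):
        tree, next_leaf = 0, 1
        while next_leaf < n:
            k = random.randrange(count_nodes(tree))
            tree = insert_at(tree, k, next_leaf)
            next_leaf += 1
        return tree

    tree = remy(100)
    print(count_nodes(tree))  # a binary tree with 100 leaves has 199 nodes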

Lukas Gonon (Eidgenössische Technische Hochschule Zürich)

Evolution of Firm Size

Recently, mathematical theories for optimal investment have been proposed that aim to base portfolio selection on quantities that are easy to observe, such as market capitalizations, rather than on quantities which can only be estimated with substantial error, such as growth rates and volatilities. What we propose in this talk is a very simple mechanism to explain why the relative size of a firm in the market could be informative about its performance: We suppose that firms are created at random times, each of them getting assigned a randomly chosen individual growth rate and volatility. The size of each firm then evolves as a geometric Brownian motion with this drift and volatility until it disappears from the market after a random lifetime. At any fixed time, there is a finite number of firms alive and their sizes contain information about their drifts and volatilities. Under suitable distributional assumptions, we are able to find the conditional distribution of the growth rate and the volatility given the current size of a firm, and this can then be used to make portfolio choices based on the observed sizes.

This is a joint work with Chris Rogers.

Adrian Gonzalez (Technische Universität Berlin)

An individual-based model for Lenski's long-term evolution experiment

The Lenski experiment investigates the long-term evolution of bacterial populations. Its design allows the direct comparison of the reproductive fitness of an evolved strain with its founder ancestor. It was observed by Wiser et al. (2013) that the relative fitness over time increases sublinearly, a behaviour which is commonly attributed to effects like clonal interference or epistasis. In this talk, we present an individual-based probabilistic model that captures essential features of the design of the Lenski experiment. We assume that each beneficial mutation increases the individual reproduction rate by a fixed amount, which corresponds to the absence of epistasis in the continuous-time (intraday) part of the model, but leads to an epistatic effect in the discrete-time (interday) part of the model. Using an approximation by near-critical Galton-Watson processes, we prove that under some assumptions on the model parameters which exclude clonal interference, the relative fitness process converges, after suitable rescaling, in the large population limit to a power law function.

The talk is based on a joint work with Noemi Kurt, Anton Wakolbinger and Linglong Yuan.

Claus Griessler (Universität Wien)

Optimality and cyclical monotonicity

The concept of cyclical monotonicity is crucial in optimal transport. Under mild assumptions on a cost function \(c\), defined on the product of two Polish spaces \(X\) and \(Y\), a probability measure \(\gamma\) is known to be a cost-minimizing coupling of probabilities \(\mu\) on \(X\) and \(\nu\) on \(Y\), if and only if \(\gamma\) is concentrated on a cyclically monotone set. Beiglböck and Juillet (2014) modified the concept for martingale transport and re-established the equivalence result using more restrictive assumptions on the cost function. In this talk, I shall present further variations of these connections. The necessity part can be generalized to a very large class of problems, including optimal transport with a prescribed continuum of marginal measures. Regarding sufficiency, the situation is more complicated, but it can be shown that the restrictions on the cost function in martingale transport can be considerably relaxed. Furthermore, cyclical monotonicity turns out to be sufficient in optimal transport with \(n\) given marginals if the cost function satisfies a mild integrability condition.

Based on joint work with Mathias Beiglböck and Augusto Gerolin, respectively.

Philipp Gschoepf (Humboldt-Universität zu Berlin)

TERES - Tail Event Risk Expectile based Shortfall

A flexible framework for the analysis of tail events is proposed. The framework contains tail moment measures that allow for Expected Shortfall (ES) estimation. By connecting the implied tail thickness of a family of distributions with quantile and expectile estimation, a platform for risk assessment is provided. ES and its implications for tail events under different distributional scenarios are investigated; in particular, we discuss the implications of increased tail risk for mixture distributions. Empirical results from the US, German and UK stock markets, as well as for selected currencies, indicate that ES can be successfully estimated on a daily basis using a one-year time horizon across different risk levels.
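Sample expectiles themselves are easy to compute; a minimal Python sketch of the asymmetric-least-squares fixed-point iteration (a generic sample expectile, not the full TERES estimator of the paper):

    import numpy as np

    # The tau-expectile e solves sum_i |tau - 1{y_i <= e}| * (y_i - e) = 0;
    # iterate the implied weighted-mean equation until convergence.
    def expectile(y, tau, tol=1e-10, max_iter=1000):
        e = y.mean()
        for _ in range(max_iter):
            w = np.where(y > e, tau, 1 - tau)
            e_new = np.sum(w * y) / np.sum(w)
            if abs(e_new - e) < tol:
                break
            e = e_new
        return e

    rng = np.random.default_rng(1)
    y = rng.standard_normal(10**5)
    print(expectile(y, 0.5), expectile(y, 0.95))  # tau = 0.5 recovers the mean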

Stefan Häfner (Universität Duisburg-Essen)

Weak Monte Carlo path generation method with optimal control variates

In this talk, a novel modification of the single level Monte Carlo path simulation approach is introduced, which allows for a further significant complexity reduction. The idea of the modification is to use the method of control variates to remove the variance. Under a proper choice of control variates, one can reduce the complexity order below \(\varepsilon^{-2}\), with \(\varepsilon\) being the precision to be achieved. These theoretical results are illustrated by several numerical examples.
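The control variate mechanism can be sketched in a few lines of Python (a generic textbook construction on an assumed Black-Scholes example, not the optimal control variates constructed in the talk):

    import numpy as np

    # Price a European call by Monte Carlo, using the terminal asset price
    # (whose mean is known) as control variate with the variance-minimising
    # coefficient beta.
    rng = np.random.default_rng(2)
    s0, r, sigma, T, K, n = 100.0, 0.05, 0.2, 1.0, 100.0, 10**5
    z = rng.standard_normal(n)
    sT = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    f = np.exp(-r * T) * np.maximum(sT - K, 0.0)  # discounted payoff
    y = sT                                        # control, E[Y] = s0 * exp(r*T)
    beta = np.cov(f, y)[0, 1] / np.var(y)
    cv = f - beta * (y - s0 * np.exp(r * T))
    print(f.mean(), f.std() / np.sqrt(n))    # plain estimate and standard error
    print(cv.mean(), cv.std() / np.sqrt(n))  # control-variate estimate, smaller error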

Felix Hermann (Albert-Ludwigs-Universität Freiburg)

The Partial Duplication Random Graph

We study a model for an evolving random graph: at each time step a vertex \(v\) is chosen uniformly at random and partially duplicated; that is, a new node \(w\) is added to the network and each neighbor of \(v\) is connected to \(w\) with probability \(p\) independently. Here \(p\in(0,1)\) is the only parameter of our model. Compared to the extensively discussed preferential attachment models, this process has substantially different topological properties, while its asymptotics are not yet well understood. After a brief introduction, this talk will focus on one issue in particular, namely the abundance of vertices without any edges, and its connection to the often desired power-law property. The main tool we use to compute the limit of the degree distribution is a duality to a piecewise deterministic Markov process. To conclude, further results and open questions will be discussed.
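The model is straightforward to simulate; the Python sketch below (starting from a single edge, which is an assumption on the initial graph) also reports the fraction of isolated vertices:

    import random

    def partial_duplication(n, p, seed=0):
        # at each step copy a uniform vertex v; the copy w inherits each
        # neighbour of v independently with probability p
        random.seed(seed)
        adj = {0: {1}, 1: {0}}
        while len(adj) < n:
            v = random.choice(list(adj))
            w = len(adj)
            adj[w] = {u for u in adj[v] if random.random() < p}
            for u in adj[w]:
                adj[u].add(w)
        return adj

    g = partial_duplication(2000, 0.3)
    print(sum(1 for v in g if not g[v]) / len(g))  # fraction of degree-0 vertices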

Simone Hermann (Technische Universität Dortmund)

Modeling Crack Growth in Fatigue Experiments - A Bayesian Prediction Approach Based on a Nonhomogeneous Poisson Process

In constructional engineering, experiments on material fatigue are very expensive and therefore rare. In our research project on "Statistical methods for damage processes under cyclic load" within the Collaborative Research Centre 823 at TU Dortmund University, the engineers conducted an experiment in which they put several prestressed concrete beams under cyclic load, starting with initial cracks. The observed crack widths exhibit irregular jumps with increasing frequency, which influence the growth process substantially. Sound measurements provide information on the break times of the tension wires, which perfectly match the observed jumps in the crack width data. This finding has important implications for the project, which aims to find a stochastic model that describes - and ultimately allows to predict - the development of the crack width curve.
A nonhomogeneous Poisson process is used to describe the number of broken tension wires over time. For the crack width curve, expert knowledge from the engineering researchers is used to form a stochastic model depending on the counting process. A Bayesian estimation procedure and the resulting predictive distribution will be presented, first for the Poisson process and then for the whole crack width process conditional on the Poisson process.
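The counting process part can be simulated by standard thinning (Lewis-Shedler); the increasing intensity below is a placeholder assumption, not the model fitted in the project:

    import numpy as np

    def nhpp_times(lam, lam_max, T, rng):
        # thinning: propose homogeneous arrivals at rate lam_max and accept
        # a proposal at time t with probability lam(t) / lam_max
        t, times = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > T:
                return np.array(times)
            if rng.uniform() < lam(t) / lam_max:
                times.append(t)

    rng = np.random.default_rng(3)
    lam = lambda t: 0.5 + 2.0 * t  # assumed increasing break intensity
    print(nhpp_times(lam, 2.5, 1.0, rng))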

Samuel Hetterich (Goethe-Universität Frankfurt a.M.)

Belief Propagation Guided Decimation on Random Formulas

Let \(\Phi\) be a uniformly distributed random \(k\)-SAT formula with \(n\) variables and \(m\) clauses. Non-constructive arguments (the second moment method) show that \(\Phi\) is satisfiable for clause densities \(m/n \leq r_{k-SAT} \sim 2^k\ln 2\) with high probability (tending to \(1\) as \(n\) tends to infinity). Yet no efficient algorithm is known to find a satisfying assignment beyond \(m/n \sim 2^k\ln(k)/k\) with non-vanishing probability. One class of algorithms are decimation algorithms (Unit Clause, Shortest Clause, ...), which at each step assign one variable according to a specific rule, simplify the formula and iterate this procedure on the decimated formula. On the basis of deep but non-rigorous statistical mechanics ideas, a message passing algorithm called Belief Propagation Guided Decimation has been put forward (Mezard, Parisi, Zecchina: Science 2002; Braunstein, Mezard, Zecchina: Random Struct. Alg. 2005).
A message passing algorithm is a recursive procedure which operates on 'messages' associated with the edges of the factor graph (representing the \(k\)-SAT formula - vertices correspond to variables, edges to clauses) and updates them recursively through local computations done at the vertices of the graph. Computing marginals of the uniform distribution over all satisfying assignments can be done in time of order \(2^n\). On trees, the complexity of computing marginals using a certain message passing algorithm reduces dramatically, down to linear order. The update rules that yield exact marginals on trees have been discovered independently in several different contexts: statistical physics (under the name 'Bethe-Peierls approximation'), coding theory (the 'sum-product' algorithm), and artificial intelligence ('belief propagation').
Thus, Belief Propagation Guided Decimation at each decimation step assigns variables according to marginals approximated by the Belief Propagation algorithm. Experiments suggested that the algorithm might succeed for densities very close to \(r_{k-SAT}\) for \(k=3,4,5\) (Kroc, Sabharwal, Selman: SAC 2009). Unfortunately, although the factor graph of \(\Phi\) for clause densities smaller than \(r_{k-SAT}\) is locally tree-like with high probability, Belief Propagation Guided Decimation fails to find satisfying assignments at clause densities beyond \(2^k\ln(k)/k\) with high probability, at least for sufficiently large \(k\) (Coja-Oghlan: 'Belief Propagation Guided Decimation Fails on Random Formulas').
In this talk I will give an introduction to Belief Propagation Guided Decimation and some intuition and heuristic explanation of the probabilistic analyses related to the phase diagram of the solution space geometry of random \(k\)-SAT formulas.
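The tree-exactness of the sum-product updates is easy to check on a toy example. The Python sketch below runs the message passing on a chain of binary variables with random positive pairwise factors (an assumption purely for illustration; random \(k\)-SAT factor graphs are only locally tree-like) and compares with brute-force marginals:

    import itertools
    import numpy as np

    rng = np.random.default_rng(4)
    n = 6
    psi = [rng.uniform(0.5, 2.0, size=(2, 2)) for _ in range(n - 1)]

    fwd = [np.ones(2)]                    # messages arriving from the left
    for k in range(n - 1):
        m = fwd[-1] @ psi[k]
        fwd.append(m / m.sum())
    bwd = [np.ones(2)]                    # messages arriving from the right
    for k in reversed(range(n - 1)):
        m = psi[k] @ bwd[-1]
        bwd.append(m / m.sum())
    bwd.reverse()

    marg_bp = fwd[3] * bwd[3]             # marginal of variable 3 via messages
    marg_bp /= marg_bp.sum()

    marg = np.zeros(2)                    # marginal of variable 3 by enumeration
    for x in itertools.product((0, 1), repeat=n):
        w = np.prod([psi[k][x[k], x[k + 1]] for k in range(n - 1)])
        marg[x[3]] += w
    marg /= marg.sum()
    print(marg_bp, marg)                  # the two agree on a tree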

Imke Höfers (BTU Cottbus-Senftenberg)

Portfolio optimization under dynamic risk constraints

We consider an investor faced with the classical portfolio problem of optimal investment in a log-Brownian share and a fixed-interest bond, but constrained to choose portfolio and consumption strategies which reduce the corresponding shortfall risk. Risk measures are calculated for short time intervals and imposed on the strategy as a risk constraint. To derive optimal strategies under this constraint, we apply dynamic programming techniques and combine the resulting Hamilton-Jacobi-Bellman equation with the method of Lagrange multipliers to handle the constraint. An approximate solution to the constrained portfolio problem is obtained via a policy improvement algorithm. In addition, we present various numerical methods to solve the partial differential equations arising in this algorithm. Our numerical results indicate that the effect of the risk constraint is very small, that is, the investor loses very little compared to the unconstrained case.

Maike Klein (Friedrich-Schiller-Universität Jena)

An Optimal Stopping Problem with Expectation Constraints

We maximize the expected reward \(\mathbb{E}[f(X_\tau)]\) for a time-homogeneous diffusion \(X\) over stopping times \(\tau\) with \(\mathbb{E}[\tau]\leq T\). The associated value function is compared to the one obtained in the classical problem where stopping times with values in \([0,T]\) are considered. Our optimal stopping problem is reformulated as an optimal control problem with controlled time horizon and it is shown that under certain assumptions the value function is the unique solution of a variational inequality.

Hakam Kondakji (BTU Cottbus-Senftenberg)

Portfolio optimization under partial information and expert opinions in a financial market with Gaussian drift

In this talk we study optimal portfolio strategies for utility-maximizing investors in a continuous-time financial market model with partial information about the drift. The drift, i.e. the mean stock return, is modeled by an Ornstein-Uhlenbeck process which is not directly observable. Classical models assume that information about the drift can only be obtained from observations of the stock prices. The resulting estimates often have unsatisfactory accuracy, since drift effects in stock prices are typically overlaid by high volatility. For this reason, practitioners try to include additional sources of information in their estimates, such as company reports, ratings, news, analysts' recommendations or their own subjective views on future returns. These pieces of information, referred to as expert opinions, are combined with the stock price observations to obtain an improved drift estimate. In the case of unbiased expert opinions, this improves the variance of the estimator and leads to a better performance of the optimal portfolio strategies built on it.
This approach can be regarded as a continuous-time version of the static Black-Litterman model known from the literature. Even with expert opinions, the drift remains only partially observable. The best estimate of the drift is the conditional expectation given the available information, the so-called filter. If, in the considered model with Gaussian drift, only stock prices are observed, this leads to the classical Kalman filter. In this work we additionally consider expert opinions which arrive at random times and are modeled as Gaussian random variables. For this setting we construct the corresponding modified filter by means of a marked point process.
For the dynamic portfolio optimization problem with a utility function, we give explicit solutions for the value function in special cases. We present numerical results and discuss the value of the additional information.
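The filtering step without expert opinions can be sketched as a discretised Kalman filter in Python (a toy version with assumed parameter values; the marked point process part of the talk is omitted):

    import numpy as np

    # Latent OU drift: d(mu) = kappa*(theta - mu) dt + eta dB,
    # observed returns: dR = mu dt + sigma dW, discretised with step h.
    rng = np.random.default_rng(5)
    kappa, theta, eta, sigma = 2.0, 0.05, 0.3, 0.2
    h, n = 1.0 / 250, 2500
    mu = np.empty(n); mu[0] = theta
    for k in range(n - 1):
        mu[k + 1] = mu[k] + kappa * (theta - mu[k]) * h \
                    + eta * np.sqrt(h) * rng.standard_normal()
    ret = mu * h + sigma * np.sqrt(h) * rng.standard_normal(n)

    m, p = theta, eta**2 / (2 * kappa)   # stationary prior mean and variance
    est = np.empty(n)
    for k in range(n):
        m = m + kappa * (theta - m) * h             # prediction step
        p = p - 2 * kappa * p * h + eta**2 * h
        K = p * h / (p * h**2 + sigma**2 * h)       # Kalman gain
        m = m + K * (ret[k] - m * h)                # update with return ret[k]
        p = p - K * h * p
        est[k] = m
    print(np.corrcoef(mu, est)[0, 1])    # correlation of filter and latent drift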

Benedikt Köpfer (Albert-Ludwigs-Universität Freiburg)

Comparison of European Option Prices in Semimartingale Models

A set of real-valued functions \(\mathcal{F}\) induces a so-called integral stochastic order on the set of random variables by \begin{align*} X \leq_\mathcal{F} Y \Leftrightarrow E[f(X)] \leq E[f(Y)], \forall f \in \mathcal{F}. \end{align*} In terms of financial mathematics the right-hand side can be interpreted as prices of European options with payoff function \(f\). Assume there are two different models \((X_t)_{t \geq 0}\), \((Y_t)_{t \geq 0}\) describing the evolution of asset prices under the same pricing measure. Then a stochastic order at maturity \(X_T \leq_\mathcal{F} Y_T\) directly provides an ordering of the corresponding European option prices.
Conditions for such comparison results can be obtained with methods from stochastic analysis and semigroup theory. Results of this type are known for Markov processes and for the case where one process is Markovian and the other a semimartingale. We will discuss approaches to comparing two semimartingales.

Kai Kümmel (Friedrich-Schiller-Universität Jena)

On the dynamics of stochastic differential equations with Lévy noise

For a special class of stochastic differential equations (MSDEs) we present a generalization of the Hartman-Grobman theorem. In particular, we show that certain MSDEs and their linearizations are locally conjugate. Furthermore, we establish basic dynamical properties for analogous equations with memory (SDDEs).

Christoph Kustosz (Technische Universität Dortmund)

Depth Based Estimation and Testing for Explosive Autoregressive Processes with Applications

We introduce methods to analyse growth models based on autoregressive equations with errors which do not satisfy standard assumptions. In particular, error distributions merely satisfying \(med(E_n)=0\) are considered, under the additional assumption that the underlying process is a growth process. We propose statistics based on simplicial depth and derive their asymptotic distributions to construct asymptotic tests. Furthermore, parameter confidence sets and depth-based estimators are defined. Thereby, for example, simultaneous parameter confidence sets for nonlinear autoregressive models can be obtained. The resulting methods are then compared with common estimators, tests and confidence sets to illustrate their robustness with respect to deviations from normally distributed errors. Based on the Paris-Erdogan equation, the methods can be applied to crack growth in pre-stressed concrete, where the experimental conditions imply skewed errors and growth processes quite naturally. In addition, we discuss an example from finance to show other potential fields of application.

Matti Leimbach (Technische Universität Berlin)

Noise-induced strong stability

It is a well-known fact that additive noise can turn an explosive ODE into a non-explosive SDE. In many cases, one can even show the existence of a unique invariant probability measure of the corresponding Markov process. This phenomenon is often called noise-induced stabilization or noise-induced stability. We investigate whether the noise can induce a stronger concept of stability - the existence of a random attractor. We call such a phenomenon noise-induced strong stability. First, we present an example which shows that these two concepts are not the same. Second, we show that noise-induced strong stability can indeed occur.

Anna Leister (Philipps-Universität Marburg)

Maximum likelihood estimation in hidden Markov models with state-dependent mixtures

Based on new results on the identifiability of nonparametric hidden Markov models, we study maximum likelihood estimation in hidden Markov models with general state-dependent mixtures. We prove that a maximum likelihood estimator exists and that the associated mixing distributions have finite support. Furthermore, we discuss the consistency of the estimator and show in particular that the state-dependent mixture densities can be estimated consistently, even without assuming identifiability of the mixture model. Numerical properties and comparisons with parametric estimation are investigated in simulations.

Claudine Leonhard (Universität zu Lübeck)

Approximation of stochastic evolution equations

Various phenomena in, e.g., biology or geology can be modeled by stochastic evolution equations. We are concerned with finding a solution to this type of equation. We give a short introduction to stochastic evolution equations: a notion of a solution for parabolic semilinear equations is presented, and the driving random process, the Q-Wiener process, is defined. Analytical solutions to stochastic evolution equations are usually not computable, so numerical approximation schemes are needed. The solution process takes values in an infinite-dimensional Hilbert space, so the approximation requires a discretization in time and space as well as an approximation of the Q-Wiener process. We illustrate the particular issues inherent in devising numerical schemes for these equations.
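A common approximation of the driving noise truncates the Karhunen-Loève expansion \(W(t)=\sum_j \sqrt{q_j}\,\beta_j(t)\,e_j\); the Python sketch below assumes eigenvalues \(q_j=j^{-2}\) and sine eigenfunctions on \([0,\pi]\):

    import numpy as np

    rng = np.random.default_rng(6)
    J, n, T = 50, 1000, 1.0                      # truncation level, steps, horizon
    h = T / n
    x = np.linspace(0, np.pi, 200)               # spatial grid
    e = [np.sqrt(2 / np.pi) * np.sin(j * x) for j in range(1, J + 1)]
    q = np.array([float(j)**-2 for j in range(1, J + 1)])

    beta = np.cumsum(np.sqrt(h) * rng.standard_normal((n, J)), axis=0)
    W_T = sum(np.sqrt(q[j]) * beta[-1, j] * e[j] for j in range(J))
    print(W_T[:5])                               # the field W(T, x) on the grid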

Jan-Erik Lübbers (Technische Universität Darmstadt)

A biased random walk on the ladder graph

We consider a biased random walk on a graph which arises from the ladder graph \(\mathbb{Z}\times\{0,1\}\) by percolation, conditioned on the existence of an infinite path. The walk has a tendency to move "to the right". It turns out that the walk is transient and that there is a critical value such that the walk moves with positive speed if and only if its bias lies below this value. In this situation the question arises whether the walk satisfies a central limit theorem.

Jens Lueddeckens (Martin-Luther-Universität Halle-Wittenberg)

An inhomogeneous stochastic differential equation with fractional Poisson measure and fractional Brownian motion

The aim of the talk is to establish the existence and uniqueness of a solution of an inhomogeneous stochastic differential equation with fractional Brownian motion and fractional Poisson measure in the Hida space \((S)_{-1}\).
To this end, we first introduce the Hida space \((S)_{-1}\), fractional Brownian motion and the fractional Poisson measure, and present the Hermite transform for elements of \((S)_{-1}\). Subsequently, an existence and uniqueness theorem concerning the solvability of an inhomogeneous fractional stochastic differential equation is formulated.
Finally, we point out connections between \(L^2(\Omega)\) and \((S)_{-1}\) as well as possible applications of the existence and uniqueness theorem in the space \(L^2(\Omega)\).

Andreas Mändle (Carl von Ossietzky Universität Oldenburg)

An Anderson-Darling approach for testing the goodness of fit of multivariate data

In the fields of finance and insurance we often face the problem of modelling extremal events, e.g. when measuring the risk of holding an equity portfolio or the insurance risk of possible losses in different lines of business. This involves making distributional assumptions for multivariate, sometimes even high-dimensional data. It has frequently been observed that in practice tails are heavier than "normal" and that extremes appear in clusters, indicating tail dependence. In such cases the assumptions of normality are violated, so there is often uncertainty whether the normal assumption can still be justified.
In the univariate case, a popular method of testing the assumption of normality is the Anderson-Darling test. It is known for its strong power, especially when detecting deviations in the tails of a distribution. Here a possible generalization of the Anderson-Darling test to the multivariate case will be considered. Although some theoretical results about a multivariate extension of the Anderson-Darling statistic are already known, its application in a multivariate test has so far seemed inconvenient, as the calculation of the \(n\)-variate test statistic required the calculation of an \(n\)-dimensional integral.
Here, a calculation formula for this multivariate Anderson-Darling statistic for finite, multidimensional samples will be presented. This formula simplifies the calculation immensely and thus serves as a key ingredient in facilitating the practical use of the test. In a multivariate example setting it will be demonstrated how the test can be applied to check the assumption of a multivariate normal distribution. Although methods exist that allow simulating the asymptotic statistic, the sample quantiles will conveniently and preferably be computed using Monte Carlo simulations. In order to evaluate this new approach, the proposed test will be run against the widely used tests for multivariate normality.
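For reference, the univariate statistic under a standard normal null is a one-liner (textbook formula; the multivariate statistic of the talk is not reproduced here):

    import numpy as np
    from scipy.stats import norm

    def anderson_darling(x):
        # A^2 = -n - (1/n) sum_i (2i-1) [ln u_(i) + ln(1 - u_(n+1-i))]
        x = np.sort(x)
        n = len(x)
        u = norm.cdf(x)                  # probability integral transform
        i = np.arange(1, n + 1)
        return -n - np.mean((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1])))

    rng = np.random.default_rng(7)
    print(anderson_darling(rng.standard_normal(500)))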

Marvin Müller (Technische Universität Dresden)

Stochastic Stefan-type Problems and Limit Order Book Models

We introduce a class of price-time-continuous models for the limit order book density, where the evolution of buy and sell side is described by a semilinear second-order SPDE and the mid price process defines a moving boundary separating buy and sell side. Following empirical observations by Cont et al. (2013) and Lipton et al. (2013) we assume price changes to be determined by the bid-ask imbalance. The resulting limit order book model can be considered as a generalization of the linear stochastic Stefan problem introduced by Kim, Sowers and Zheng (2012).
In order to show existence of a solution, we transform the problem into a stochastic evolution equation, where the boundary interaction leads to an additional drift. Regularity properties of the linear part of the equation allow us to control the non-linearities and to establish (local) existence and uniqueness results. This provides a framework for further analysis of the problem.

Alexandra Neamtu (Friedrich-Schiller-Universität Jena)

Dynamics of Non-densely Defined Stochastic Evolution Equations

We consider a class of stochastic evolution equations with a non-densely defined linear part. Such situations can occur due to additional restrictions incorporated in the domain of a linear operator. Our theory is motivated by the deterministic case considered by P. Magal and S. Ruan (2010). We transform such an SDE into a pathwise problem, from which we derive a random dynamical system. Furthermore, we investigate the existence of random fixed points and attractors. As applications, we analyze population models, transport equations with nonlinear boundary conditions and parabolic equations with nonlocal boundary conditions under stochastic influences.

David Prömel (Humboldt-Universität zu Berlin)

Robust Super-replication for Time-invariant Derivatives

The connection between model-independent pricing and the Skorokhod embedding problem has been a driving force in robust finance for the last decade. We establish a general pricing-hedging duality for financial derivatives which are susceptible to the Skorokhod approach. Using Vovk’s approach to mathematical finance we derive a model-independent super-replication theorem in continuous time, given information on finitely many marginals. Our result covers a broad range of exotic derivatives, including lookback options, discretely monitored Asian options, and options on realized variance.

The talk is based on a joint work with M. Beiglböck, A. M. G. Cox, M. Huesmann and N. Perkowski.

Felicia Raßmann (Goethe Universität Frankfurt a.M.)

Chasing Phase Transitions in Random Discrete Structures

The study of random discrete structures has been an active field of research since the pioneering work of Erdős and Rényi on random graphs in the 1950s/60s and has since played a key role in combinatorics. In particular, the identification of phase transitions, where a small change of a parameter entails a huge change in the structure of the outcome of the experiment, has been of great interest. One popular example of a phase transition is the emergence of a giant component in random graphs. Over the past decade, progress has been made in studying phase transitions in what physicists call "disordered systems". These systems exhibit a structure where the geometry of interactions is determined by a sparse random graph or hypergraph. The mathematically rigorous study of disordered systems is very demanding, but physicists have developed analytic, non-rigorous methods to put forward precise conjectures on the location and nature of phase transitions in these systems. The challenge is now to prove these predictions.
In this talk we investigate random hypergraph 2-coloring: the problem of coloring a random hypergraph \(H(n,p)\), on \(n\) vertices and with each hyperedge present with probability \(p\) independently, with two colors such that no monochromatic edge exists. We present the predictions for phase transitions in this model and what we can actually prove at the moment. We provide an insight into the methods we use, such as the second moment method and the so-called "planted model".

Jasmin Röder (Justus-Liebig-Universität Gießen)

Path-dependent BSDEs with jumps

We study path-dependent BSDEs with jumps, where path dependence means that the generator and the terminal condition of the BSDE depend on a path of an adapted càdlàg process \(X\). For each path \(\gamma_{t}\) stopped at time \(t\in[0,T]\), the BSDE under consideration has the form \begin{align*} Y_{\gamma_{t}}(s) = &\, \Phi(X_{\gamma_{t},T}) +\int_{s}^{T}f(r,X_{\gamma_{t},r},Y_{\gamma_{t}}(r-),Z_{\gamma_{t}}(r), U_{\gamma_{t}}(r))dr \\ &-\int_{s}^{T}Z_{\gamma_{t}}(r)dW(r) -\int_{s}^{T}\int_{\mathbb{R}^{l}}U_{\gamma_{t}}(r,x)\tilde{N}(dr,dx) \end{align*} for \(s\in[t,T]\). The path \(X_{\gamma_{t},T}\) stopped at time \(T\) is defined by \begin{equation*} X_{\gamma_{t},T}(u)=\gamma(u)\mathbb{1}_{[0,t)}(u) +(\gamma(t)+X(u)-X(t))\mathbb{1}_{[t,T]}(u) \end{equation*} for \(u\in[0,T]\). We study the existence and uniqueness of the solution of such a BSDE and also prove the associated comparison theorem. Furthermore, we analyze the path differentiability of the solution and show that the path derivatives are again the unique solution of a path-dependent BSDE with jumps. A further result is the connection between a path-dependent PIDE and the solution of a path-dependent BSDE with jumps.

Alexandros Saplaouras (Technische Universität Berlin)

Robustness of Backward Stochastic Differential Equations with Jumps

Over the last 25 years, since the seminal paper of Pardoux and Peng, backward stochastic differential equations (BSDEs) have gained a lot of attention due to their applications in stochastic control, theoretical economics, stochastic differential games and financial mathematics. In the first part of the current work we present a general existence and uniqueness result for solutions of BSDEs with jumps (BSDEJs) driven by a square integrable martingale. We will then present the robustness of the solutions, i.e. the continuity of the operator which maps the standard data (in a general sense) to the solution of the BSDEJ. An immediate application of the robustness property is the Euler scheme: the solution of the discretization of a BSDEJ driven by a square integrable Feller martingale with independent increments converges to the solution of the initial BSDEJ.

Anton Shardin (BTU Cottbus-Senftenberg)

Optimal control of an energy storage facility under partial information

The increasing feed-in of electricity from renewable energy sources such as solar and wind power, which fluctuate in time and space and are difficult to forecast, leads to new problems in balancing supply and demand in the electricity market and in the transmission grids. In this context, energy storage facilities such as pumped-storage hydroelectric plants gain importance. Their task is to absorb electricity in times of large supply and to feed it back into the grid in times of high demand. Besides numerous technical questions concerning the development of efficient and cost-effective storage facilities, the problem of the economic valuation of such storage facilities in the energy market also arises.
Such a valuation requires an adequate modeling of the fluctuating electricity feed-in from renewable energy sources by stochastic processes. The basic idea of storage valuation is to maximize, by a suitably chosen storage strategy, the expected cash flow comprising the costs of buying energy (charging) and the profits from selling energy (discharging). The optimal value of this optimization problem can at the same time be interpreted as the fair price of a real option. Using the optimal storage strategy, the storage manager can decide at which time which amount of energy should be charged or discharged.
We describe the energy price by an Ornstein-Uhlenbeck process whose mean-reversion level depends on a factor process that is not directly observable. This factor process describes the state of the energy market and is modeled as a Markov chain. The resulting optimal control problem under partial information can be transformed into a problem under complete information by including the filter for the factor process. For its solution we study the associated Hamilton-Jacobi-Bellman equation. This partial differential equation can only be solved numerically, using finite difference methods and a semi-Lagrangian approximation scheme. We present numerical results for a control problem with finite time horizon.

Tobias Siems (Universität Greifswald)

An EM-Algorithm for the L2-Potts functional

We consider a dataset of normally distributed observations which are subject to successive changes in mean. The goal is to estimate the time points at which the changes happen. This can be achieved by minimizing the so-called Potts functional. The problem is that the Potts functional, and therefore the estimated changepoints, depend on at least one crucial parameter. In my talk I will give a brief overview of the Potts functional and how it can be translated into a stochastic model using "spike and slab" priors. Finally, I will present an EM algorithm for estimating the parameters of the Potts functional.
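For orientation, the \(L^2\)-Potts functional with penalty \(\gamma\) per segment can be minimised exactly by dynamic programming; the Python sketch below implements this classical solver (not the EM algorithm of the talk) on simulated data:

    import numpy as np

    def potts_segmentation(y, gamma):
        # minimise sum of squared segment errors + gamma * (number of segments)
        n = len(y)
        cs = np.concatenate(([0.0], np.cumsum(y)))
        cs2 = np.concatenate(([0.0], np.cumsum(y**2)))
        def sse(i, j):                     # squared error of segment y[i:j]
            s = cs[j] - cs[i]
            return cs2[j] - cs2[i] - s * s / (j - i)
        B = np.zeros(n + 1)                # B[j]: optimal value for y[:j]
        last = np.zeros(n + 1, dtype=int)  # start index of the last segment
        for j in range(1, n + 1):
            cands = [B[i] + gamma + sse(i, j) for i in range(j)]
            i_best = int(np.argmin(cands))
            B[j], last[j] = cands[i_best], i_best
        bounds, j = [], n                  # backtrack the changepoints
        while j > 0:
            bounds.append(j)
            j = last[j]
        return sorted(bounds)[:-1]

    rng = np.random.default_rng(8)
    y = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
    print(potts_segmentation(y, gamma=10.0))  # ideally close to [50]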

Kathrin Skubch (Goethe Universität Frankfurt a.M.)

How does the core sit inside the mantle?

For any \(k\geq 3\) the \(k\)-core \(\mathcal C_k(G)\) of a graph \(G\) is defined as the (unique) maximal subgraph of \(G\) of minimum degree \(k\). For fixed \(d>0\) let \(G(n,d/n)\) denote the random graph on the vertex set \([n]=\{1,\ldots,n\}\) in which any two vertices are connected with probability \(p=d/n\) independently. Pittel, Spencer and Wormald were the first to determine the precise threshold \(d_k\) beyond which the \(k\)-core \(\mathcal C_k(G(n,d/n))\) is non-empty w.h.p. [Journal of Combinatorial Theory, Series B 67 (1996) 111-151]. Specifically, for any \(k\geq3\) there is a function \(\psi_k:(0,\infty)\to[0,1]\) such that for any \(d\in(0,\infty)\setminus\{d_k\}\) the sequence \((n^{-1}|\mathcal C_k(G(n,d/n))|)_n\) converges to \(\psi_k(d)\) in probability. Furthermore, Pittel, Spencer and Wormald pointed out that a simple "branching process" heuristic predicts the correct threshold and the correct size of the \(k\)-core, and this argument has subsequently been turned into an alternative proof of their result.
The aim of the present paper is to enhance this branching process perspective of the \(k\)-core problem. We derive a multi-type branching process that describes precisely how the \(k\)-core is "embedded" into the random graph for any \(k\geq3\) and any fixed average degree \(d=np>d_k\). This generalises prior results on, e.g., the internal structure of the \(k\)-core. Indeed, in the physics literature it was suggested to characterise the core by means of a "message passing" algorithm called Warning Propagation, and this Warning Propagation description of the core will play a key role in the present paper.
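The \(k\)-core itself is obtained by the standard peeling procedure: repeatedly delete vertices of degree less than \(k\). A Python sketch on a sample of \(G(n,d/n)\) (parameter values chosen for illustration only):

    import random
    from collections import deque

    def k_core(adj, k):
        deg = {v: len(nb) for v, nb in adj.items()}
        queue = deque(v for v, dv in deg.items() if dv < k)
        removed = set()
        while queue:
            v = queue.popleft()
            if v in removed:
                continue
            removed.add(v)
            for u in adj[v]:
                if u not in removed:
                    deg[u] -= 1
                    if deg[u] < k:
                        queue.append(u)
        return {v for v in adj if v not in removed}

    n, d = 2000, 5.0
    random.seed(9)
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for u in range(v + 1, n):
            if random.random() < d / n:
                adj[v].add(u)
                adj[u].add(v)
    print(len(k_core(adj, 3)) / n)  # d = 5 lies above the 3-core threshold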

This is joint work with Amin Coja-Oghlan, Oliver Cooley and Mihyun Kang.

Inga Stolz (Universität zu Lübeck)

Measuring complexity: a useful and interesting tool for time series analysis - the ordinal pattern distribution

The Kolmogorov-Sinai entropy is a well-known measure of the complexity of dynamical systems, but it is not easy to compute. The researchers Bandt and Pompe recently introduced the concept of permutation entropy, which has turned out to be closely related to the Kolmogorov-Sinai entropy. This measure exploits the fact that the ups and downs of a measured signal store a great deal of information about the underlying system. In ordinal time series analysis these ups and downs are described by so-called ordinal patterns, and their distributions are evaluated.
The aim of this talk is to present the method of ordinal patterns in time series analysis, to discuss modifications and their effect on the relation to the Kolmogorov-Sinai entropy, and to examine experimental data (in particular EEG data) in this context.
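A minimal Python sketch of the Bandt-Pompe procedure (window length and test signals are arbitrary assumptions): slide a window of length \(m\) over the series, record the ordinal pattern of each window, and take the entropy of the empirical pattern distribution:

    import numpy as np
    from math import factorial

    def permutation_entropy(x, m=3):
        patterns = {}
        for i in range(len(x) - m + 1):
            pat = tuple(np.argsort(x[i:i + m]))  # ordinal pattern of the window
            patterns[pat] = patterns.get(pat, 0) + 1
        p = np.array(list(patterns.values()), dtype=float)
        p /= p.sum()
        return -np.sum(p * np.log(p)) / np.log(factorial(m))  # normalised to [0,1]

    rng = np.random.default_rng(10)
    print(permutation_entropy(rng.standard_normal(10**4)))       # close to 1
    print(permutation_entropy(np.sin(0.01 * np.arange(10**4))))  # much smaller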

Sara Svaluto-Ferro (Eidgenössische Technische Hochschule Zürich)

Polynomial Preserving Jump-Diffusions on the Unit Interval

We present the class of polynomial preserving processes, which contains several important processes such as affine processes or processes with quadratic diffusion coefficients. Processes in this class have the property that the calculation of (mixed) moments only requires the computation of matrix exponentials. A polynomial preserving process can be defined as the solution to a martingale problem whose operator is polynomial preserving, namely maps any polynomial to a polynomial of the same or lower degree. Choosing the unit interval as state space, we characterize those polynomial preserving operators for which the existence of a solution to the martingale problem is guaranteed.
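The moment computation can be made concrete; the Python sketch below takes a Jacobi-type diffusion on \([0,1]\), \(dX_t=\kappa(\theta-X_t)\,dt+\sigma\sqrt{X_t(1-X_t)}\,dW_t\) (an assumed example of a polynomial preserving process), and propagates the moment vector with a matrix exponential:

    import numpy as np
    from scipy.linalg import expm

    kappa, theta, sigma = 1.0, 0.6, 0.5
    # generator on the basis (1, x, x^2):
    #   G 1   = 0
    #   G x   = kappa*theta - kappa*x
    #   G x^2 = (2*kappa*theta + sigma^2)*x - (2*kappa + sigma^2)*x^2
    G = np.array([
        [0.0, 0.0, 0.0],
        [kappa * theta, -kappa, 0.0],
        [0.0, 2 * kappa * theta + sigma**2, -(2 * kappa + sigma**2)],
    ])
    x0, t = 0.2, 1.5
    print(expm(t * G) @ np.array([1.0, x0, x0**2]))  # E[1], E[X_t], E[X_t^2]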

Andreas Thalhammer (Johannes Kepler Universität Linz)

Mean-square stiffness analysis of stochastic differential equations

Whereas the concept of stiffness is well examined for deterministic problems, the notion of stiffness for stochastic differential equations (SDEs) has not yet been treated in detail. A reliable characterisation of stiffness in the stochastic setting is of great practical importance, particularly when dealing with problems where a necessary reduction of time step sizes for explicit methods leads to significantly higher computational cost.
In this talk, we provide an overview of existing approaches to characterise stiffness for deterministic and stochastic problems. Furthermore, we extend a recently developed concept of stiffness for deterministic differential equations to the stochastic setting and link the proposed notion of stiffness in the mean-square sense with results from the stability theory of linear stochastic differential equations. Using these results, we conclude the talk by presenting numerical experiments with linear systems of SDEs coming from spatial discretisation schemes for stochastic partial differential equations.
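The step-size restriction behind stochastic stiffness is already visible for the scalar linear test equation \(dX=\lambda X\,dt+\mu X\,dW\): the Euler-Maruyama scheme satisfies \(\mathbb{E}|X_{n+1}|^2=(|1+\lambda h|^2+\mu^2 h)\,\mathbb{E}|X_n|^2\), so it is mean-square stable if and only if \(|1+\lambda h|^2+\mu^2 h<1\). A small Python check with assumed parameter values:

    def ms_stable(lam, mu, h):
        # mean-square stability of Euler-Maruyama for dX = lam*X dt + mu*X dW
        return abs(1 + lam * h)**2 + mu**2 * h < 1

    lam, mu = -50.0, 1.0           # the exact solution is mean-square stable
    for h in (0.05, 0.02, 0.01):
        print(h, ms_stable(lam, mu, h))  # unstable until h is small enough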

Maria Umlauft (Universität Ulm)

Analysis of time-dynamic oncological data in the absence of an event in the event category of interest

This talk presents various methods for the analysis of survival data under the particular circumstance that no event is observed in one event category. The data at hand stem from a cohort study and contain observations of patients suffering from a neuroendocrine tumor. These patients were treated with different radiopeptide therapies, whose influence on the incidence of diabetes is to be investigated. A special feature of the analysis of these data is the need to employ competing risks models. The otherwise standard Cox regression does not yield satisfactory results in the case of an unobserved event, owing to its multiplicative intensity structure. The challenge thus lies in the selection of suitable methods, which are described in this talk and compared with one another by means of a simulation study.

Martin Weidner (Imperial College London)

Smooth Densities for Rough Differential Equations on Manifolds

We want to show the smoothness of the law of solutions of rough differential equations on smooth manifolds where the driving noise is a suitable Gaussian process. The analogous result for vector-space-valued solutions has been established recently. Furthermore, the situation on manifolds is quite well understood if the driving noise is Brownian motion. We aim at combining these results while paying particular attention to whether additional structures on the manifold (such as a Riemannian metric) play a role.

This is work in progress.

Xiu Xu (Humboldt-Universität zu Berlin)

Localised Conditional Autoregressive Expectile Model

The localised conditional autoregressive expectile (CARE) model accounts for time-varying parameters in tail risk modelling. Our technique strikes a balance between parameter variability and modelling bias, resulting in potentially varying lengths of the parameter homogeneity intervals. Over these intervals one can safely assume a parametric model in expectile estimation. Based on empirical evidence from three stock markets between 2005 and 2014, we show that CARE parameters vary over time and exhibit changing distributional properties. It is recommended to use between one month and one year of data in expectile modelling on any trading day.