Seminar [POSTPONED] "Scientific research in companies": option pricing, hedges and deep learning

The seminar has been postponed due to the speaker's illness.

On 14 April at 14:30, the following seminar will take place in Aula Caldirola:

Option pricing, hedges and deep learning 

Simona Sheit and Matthias Blank (d-fine GmbH, Frankfurt)

Pricing and hedging of options (and other derivatives) are at the core of mathematical finance. Options are financial contracts that can be bought and sold and that give their owner the right to buy or sell a certain asset at a future point in time. Under ideal market assumptions, options and their underlying assets can be modelled using (continuous) stochastic processes whose dynamics can be described by stochastic differential equations. A classical solution to pricing and hedging options in such a setting is the Black-Scholes model, for which the authors received a Nobel Prize in 1997.
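As a small illustration of the model mentioned above (not part of the talk itself), the Black-Scholes price of a European call option can be computed in closed form. The sketch below uses standard notation (spot S, strike K, risk-free rate r, volatility sigma, maturity T); the parameter values at the end are invented for the example:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution, via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    # Black-Scholes closed-form price of a European call option.
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative at-the-money call: 1 year maturity, 20% vol, 1% rate.
price = bs_call(S=100.0, K=100.0, r=0.01, sigma=0.2, T=1.0)
```

In the Black-Scholes setting this price is exactly the cost of the replicating (perfect) hedging strategy, which is the property that breaks down in the incomplete markets discussed next.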

In incomplete markets we can, by their very definition, no longer hedge every claim, and techniques that are based on the possibility of perfect hedging, like the Black-Scholes ansatz, can no longer be used for pricing derivatives. 

On the other hand, over the last decade or so, deep learning algorithms have been successfully able to tackle many problems in machine learning that were previously thought to be extremely difficult, and neural networks are at the forefront of the current boom in machine learning applications. Areas where neural networks have accomplished stunning results range from image and speech recognition to the outright spectacular generation of art. 

Using these techniques in quantitative finance is intriguing, and a seminal recent paper in this area is the Deep Hedging framework of Bühler et al. (Deep Hedging, 2019), which applies deep neural networks to the problem of optimal hedging in incomplete markets.

In this presentation, we will give an overview of the basic concepts in mathematical finance and show how financial assets can be modelled and simulated in a rigorously mathematical framework. We then briefly introduce neural networks and discuss how they can be trained on simulated data of this type to find (model-dependent) trading strategies. In recent years, these techniques have been the focus of both active scientific research and heightened industry interest.
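A minimal sketch of the kind of simulation referred to above: sample paths of a geometric Brownian motion (the asset model underlying Black-Scholes) can be generated exactly on a time grid via the log-normal step. The parameter values are illustrative only:

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, T, n_steps, n_paths, seed=0):
    # Exact simulation of geometric Brownian motion on a uniform grid:
    # S_{t+dt} = S_t * exp((mu - sigma^2 / 2) * dt + sigma * sqrt(dt) * Z),
    # with Z a standard normal increment.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.cumsum(log_increments, axis=1)
    paths = s0 * np.exp(
        np.concatenate([np.zeros((n_paths, 1)), log_paths], axis=1))
    return paths  # shape: (n_paths, n_steps + 1)

# One year of daily steps, 1000 simulated paths.
paths = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2,
                     T=1.0, n_steps=252, n_paths=1000)
```

Simulated path data of exactly this type is what a hedging network is trained on in the Deep Hedging approach, with the network mapping observed prices to trading decisions.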

Keywords: Option pricing, SDEs, Black-Scholes, deep learning, (recurrent) neural networks

Seminar: "How to develop a responsible AI: explaining decisions and avoiding discriminations"

Wednesday 8 June 2022, Aula Caldirola

by Greta Greco and Daniele Regoli (Banca Intesa)

Explainable Artificial Intelligence (XAI) is a set of techniques that allows the understanding of both technical and non-technical aspects of Artificial Intelligence (AI) systems. Among XAI techniques, counterfactual explanations aim to provide end users with a set of features (and their corresponding values) that need to be changed in order to achieve a desired outcome. Current approaches rarely take into account the feasibility of the actions needed to achieve the proposed explanations, and in particular they fall short of considering the causal impact of such actions. In the first part of the talk, we present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations that captures by design the underlying causal relations in the data, and at the same time provides feasible recommendations to reach the proposed profile.
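As a toy illustration of the counterfactual idea only (not the CEILS algorithm), the sketch below greedily changes a single feature of a rejected applicant until a simple, invented linear scoring model flips its decision; all names, weights, and thresholds here are made up for the example:

```python
import numpy as np

def linear_decision(x, w, b):
    # Toy scoring model: approve (1) if w.x + b >= 0, else reject (0).
    return int(np.dot(w, x) + b >= 0)

def simple_counterfactual(x, w, b, feature, step=0.01, max_iter=10000):
    # Greedily increase one feature until the decision flips.
    # Real counterfactual methods, and CEILS in particular, additionally
    # account for feasibility of the change and for causal effects
    # between features; this toy search does neither.
    x_cf = x.astype(float).copy()
    for _ in range(max_iter):
        if linear_decision(x_cf, w, b) == 1:
            return x_cf
        x_cf[feature] += step
    return None  # no counterfactual found within the budget

# Invented example: features = (income, debt); applicant initially rejected.
w, b = np.array([1.0, -2.0]), -0.5
x = np.array([0.3, 0.2])          # score: 0.3 - 0.4 - 0.5 < 0, rejected
x_cf = simple_counterfactual(x, w, b, feature=0)
```

The returned profile x_cf shows the (minimal, along that one feature) income level at which the toy model would approve the application.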

Along with explainability, increasing importance is being dedicated to assessing to what extent AI models can be considered fair, in terms of the impact they might have on individuals. Since algorithms are developed by humans and learn from historical data, there is a risk that biases and prejudices are replicated, and even exacerbated, by models at scale. In the second part of the talk, we present how to measure discrimination in data and what strategies can mitigate bias in AI model predictions.
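One common (and by no means exhaustive) way to quantify discrimination in model outputs is demographic parity: the rate of positive predictions should be similar across protected groups. A minimal sketch on invented predictions:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    # Absolute difference in positive-prediction rates between two groups:
    # | P(yhat = 1 | group = 0) - P(yhat = 1 | group = 1) |.
    # A value near 0 indicates parity; larger values indicate disparity.
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return abs(rate0 - rate1)

# Invented binary predictions for applicants from two groups.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
dpd = demographic_parity_difference(y_pred, group)  # 0.75 - 0.25 = 0.5
```

Demographic parity is only one of several competing fairness criteria; which one is appropriate depends on the application and is part of what the talk discusses.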

Online symposium at CUNY on "Language, learning, and networks"

Fri 4 Dec 2020:  Language, learning, and networks

Learning to understand:  Statistical learning and infant language development
Jenny Saffran, University of Wisconsin

A mathematical theory of learning in deep neural networks
Surya Ganguli, Stanford University and Google

Neural scaling laws and GPT-3
Jared Kaplan, Johns Hopkins University and OpenAI
 

Sponsored by the Initiative for the Theoretical Sciences and the CUNY doctoral programs in Physics and Biology. 
Supported in part by the Center for the Physics of Biological Function, a joint effort of The Graduate Center and Princeton University.

Website: https://itsatcuny.org/biophysics