{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "

## Risk management with Python

Andrey Varkentin

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this tutorial we are going to learn some basic risk management instruments, that could be used not only in trading, but also in daily operations of every organisation, which strive to manage its own risks. However, now it is broadly used in financial organisations due to specifics of risks, these organisations are facing." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This tutorial is devoted to VaR and Monte-Carlo simulation and is organized as follows: first, we will introduce theoretical foundations of VaR, then apply basic VaR and Monte-Carlo simulation on Tesla equity prices, finally, look at Bocconi University Student Investment Club example realization of 4 types of VaR." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Theory" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Value-at-risk (VaR) has been used for decades to assess market risk. In a nutshell, the VaR is a statistical technique used to measure the level of risk of a portfolio given a certain confidence interval and within a fixed timeframe.[[1]](https://medium.com/@Francesco_AI/big-data-and-risk-management-in-financial-markets-part-i-eed4e245f3db)\n", "\n", "The VaR can be computed through:\n", "\n", "- Historical method - we organise histrical returns in descending order and choose a threshold to define possible losses;\n", "- Delta-Normal method - first step to modelling: we evaluate mean and standard deviation and model the distribution of returns, after that taking the required threshold;\n", "- Monte Carlo simulation - we model returns trajectories, and then run a multitude of simulated trials, which result in distribution of returns.\n", "\n", "We will try to implement them in our first part of Tesla equity analysis.\n", "\n", "Of course, this is not the complete list of methods. In BSIC approach there are 4 methods of VaR as follows [[2]](http://www.bsic.it/wp-content/uploads/2017/03/VaR-with-Python.pdf):\n", "- Parametric VaR -to build it, the only variables needed are the mean and the standard deviation of a portfolio/security. The problem is that it works under two restrictive assumptions, namely normal distribution and independence of returns. This leads to myopically equating all returns in terms of importance, overlooking big shocks that should be carried over and should be given more power to impact the actual VaR.\n", "- EWMA (Exponentially weighted moving average) tries to solve the problem of slow reaction to new information and the equal importance of returns. Using a decay factor the EWMA formula is able to weight different information as it comes in, giving more importance to recent returns and less importance to data far in the past by slowly decaying their contribution to the VaR. Through this, the measure limits the ‘echo effect’, occurring when a large shock of the past becomes too old to be considered and leaves the estimation, causing a big change in the VaR which is not due to a change in the markets.\n", "- Historical Simulation(HS) VaR is instead efficient when the risk manager cannot, or doesn’t intend to, make assumptions on the underlying distribution of returns as it is calculated by the simple picking of the chosen percentile loss in a given period of time. This method is even simpler than the parametric one and that is precisely its weakness. \n", "- Filtered Historical Simulation VaR can be described as being a mixture of the historical simulation and EWMA methods. 
Returns are first standardised, with the volatility estimate weighted as in EWMA VaR, before a historical percentile is applied to the standardised returns as in the historical model. From the graphs it is easy to spot that this model looks very much like EWMA, as returns are standardised and weighted by the same decay factor. The main difference lies in the fact that this model is generally more conservative, because it looks at the worst past losses and adjusts its VaR value accordingly.\n", "\n", "We will look at these in the second part of the Tesla equity analysis.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's go deeper into the formal definition of VaR. More information about this, portfolio optimisation techniques, etc. can be found [here [3]](http://www.quantatrisk.com/2013/03/08/var-expected-shortfall-black-swan/).\n", "\n", "More formally, given a daily (or weekly, monthly, etc.) distribution of returns (of, for example, a single stock or a portfolio of assets), we are interested in finding the value of VaRα for, say, α=0.05 (five percent), which would tell us that there is a 5% chance that the VaR 0.05 loss will be exceeded in trading on the next day. This value is located in the left tail of the distribution and, by convention, it is given as a positive number. Therefore, we define VaRα as:\n", "$$P[X \leq -VaR_{\alpha}]=\alpha$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Okay then, having calculated VaRα, we know how far the loss could reach at the 1−α confidence level. The next very important question in risk management, which every risk manager should ask or at least be interested in reporting, is: if the VaRα event occurs on the next day, what loss should we expect to suffer (say, in dollars)? VaRα is the threshold; now we are interested in the expected value of the loss given that this threshold is exceeded. This is the definition of the expected shortfall (ES) and it is based on the concept of conditional expectation:\n", "\n", "$$ES = -E[X \mid X \leq -VaR_{\alpha}]$$\n", "\n", "In general, if our daily distribution of returns can be described by a function f(x) representing its probability density function (pdf), then:\n", "\n", "$$ES = -\frac{1}{\alpha} \int_{-\infty}^{-VaR_{\alpha}}xf(x)dx.$$ " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "VaRα and ES can, in fact, be calculated by a method based on the empirical distribution, i.e. using the data as given:\n", "\n", "$$VaR_{\alpha}=-h_{i^{VaR}}$$ for the largest bin index $i^{VaR}$ such that $$ \sum_{i=1}^{i^{VaR}}H_{i}(h_{i+1}-h_{i}) \leq \alpha ,$$\n", "\n", "where H represents the normalised histogram of returns (i.e., its integral is equal to 1) and the h's are the edges of its M bins. Similarly, for ES we get:\n", "\n", "$$ES = -\frac{1}{\alpha}\sum_{i=1}^{i^{VaR}} h_{i}H_{i}(h_{i+1}-h_{i}).$$\n", "\n", "A small numerical sketch of this empirical estimation is given at the beginning of the practice section below." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Practice. Tesla equities from 2010." 
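] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make the empirical formulas above concrete, the next cell is a minimal, self-contained sketch (it is not part of the original tutorial code, and all variable names in it are illustrative): it simulates daily returns, builds a normalised histogram, accumulates probability from the left tail until it reaches α, and averages the tail bins to obtain the ES." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch of the empirical (histogram-based) VaR and ES described above.\n", "# Variable names are hypothetical; real data is used later in the tutorial.\n", "import numpy as np\n", "\n", "np.random.seed(0)\n", "x = np.random.normal(0.0005, 0.02, 10000)  # simulated daily returns\n", "alpha = 0.05\n", "\n", "# normalised histogram: H are densities, h are bin edges\n", "H, h = np.histogram(x, bins=100, density=True)\n", "widths = np.diff(h)\n", "\n", "# accumulate probability from the left tail until it reaches alpha\n", "cum_prob = np.cumsum(H * widths)\n", "i_var = np.searchsorted(cum_prob, alpha)  # index of the VaR bin\n", "var_alpha = -h[i_var]  # VaR reported as a positive loss\n", "\n", "# ES: probability-weighted average return of the tail bins, divided by alpha\n", "mids = (h[:-1] + h[1:]) / 2\n", "es_alpha = -(mids[: i_var + 1] * H[: i_var + 1] * widths[: i_var + 1]).sum() / alpha\n", "\n", "print(\"Empirical VaR(5%) =\", round(var_alpha, 4))\n", "print(\"Empirical ES(5%)  =\", round(es_alpha, 4))"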
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try to implement all theoretical constructions on history of Tesla equity prices from 29/06/2010" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# install package for downloading equity prices\n", "# !pip install pandas-datareader" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import math\n", "\n", "import matplotlib.pyplot as plt\n", "# necessary packages\n", "import numpy as np\n", "from pandas_datareader import data\n", "from scipy.stats import norm\n", "\n", "# download tesla price data into DataFrame\n", "tesla = data.DataReader(\"TSLA\", \"yahoo\", start=\"29/06/2010\")\n", "\n", "# calculate the compound annual growth rate (CAGR) which\n", "# will give us our mean return input (mu)\n", "days = (tesla.index[-1] - tesla.index[0]).days\n", "cagr = ((((tesla[\"Adj Close\"][-1]) / tesla[\"Adj Close\"][1])) ** (365.0 / days)) - 1\n", "print(\"CAGR =\", str(round(cagr, 4) * 100) + \"%\")\n", "mu = cagr\n", "\n", "# create a series of percentage returns and calculate\n", "# the annual volatility of returns\n", "tesla[\"Returns\"] = tesla[\"Adj Close\"].pct_change()\n", "vol = tesla[\"Returns\"].std() * math.sqrt(252) # 252 - number of trading days\n", "print(\"Annual Volatility =\", str(round(vol, 4) * 100) + \"%\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Uhm, high growth rates and volatility, situation typical for startups." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's plot charts for Tesla with Seaborn and Bokeh to see the whole dynamics." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tesla.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "\n", "ax = sns.lineplot(data=tesla[\"Adj Close\"])\n", "ax.set(xlabel=\"date\", ylabel=\"Adj close\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For Bokeh candlestick visualization we need column date" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tesla2 = tesla.reset_index()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from math import pi\n", "\n", "import pandas as pd\n", "from bokeh.plotting import figure, output_file, show\n", "\n", "tesla2[\"Date\"] = pd.to_datetime(tesla2[\"Date\"])\n", "\n", "mids = (tesla2.Open + tesla2.Close) / 2\n", "spans = abs(tesla2.Close - tesla2.Open)\n", "\n", "inc = tesla2.Close > tesla2.Open\n", "dec = tesla2.Open > tesla2.Close\n", "w = 12 * 60 * 60 * 1000 # half day in ms\n", "\n", "output_file(\"candlestick.html\", title=\"candlestick.py example\")\n", "\n", "TOOLS = \"pan,wheel_zoom,box_zoom,reset,save\"\n", "\n", "p = figure(\n", " x_axis_type=\"datetime\", tools=TOOLS, plot_width=1000, toolbar_location=\"left\"\n", ")\n", "\n", "p.segment(tesla2.Date, tesla2.High, tesla2.Date, tesla2.Low, color=\"black\")\n", "p.rect(\n", " tesla2.Date[inc], mids[inc], w, spans[inc], fill_color=\"#D5E1DD\", line_color=\"black\"\n", ")\n", "p.rect(\n", " tesla2.Date[dec], mids[dec], w, spans[dec], fill_color=\"#F2583E\", line_color=\"black\"\n", ")\n", "\n", "p.xaxis.major_label_orientation = pi / 4\n", "p.grid.grid_line_alpha = 0.3\n", "\n", "show(p) # open a browser" ] }, { "cell_type": "markdown", "metadata": {}, 
"source": [ "We see that since 2013 price of Tesla equities increased in 12 times and also there are lots of peaks and cavities. We also can divide all historical prices in 3 periods according to price levels: 2010-2013, 2014 - 2016, 2017-2018. For more thorough analisys it is better to calculate VaR separately for each period but for our tutorial we'll ignore this fact." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's do 1 circle of our modeling Tesla returns" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "S = tesla[\"Adj Close\"][\n", " -1\n", "] # starting stock price (i.e. last available real stock price)\n", "T = 252 # Number of trading days\n", "mu = mu # Return\n", "vol = vol # Volatility\n", "\n", "# create list of daily returns using random normal distribution\n", "daily_returns = np.random.normal((mu / T), vol / math.sqrt(T), T) + 1\n", "\n", "# set starting price and create price series generated by above random daily returns\n", "price_list = [S]\n", "\n", "for x in daily_returns:\n", " price_list.append(price_list[-1] * x)\n", "\n", "# Generate Plots - price series and histogram of daily returns\n", "\n", "plt.plot(price_list)\n", "plt.show()\n", "plt.hist(\n", " daily_returns - 1, 100\n", ") # Note that we run the line plot and histogram separately, not simultaneously.\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Histogram reminds normal distribution, though there are some overshoots. Let's repeat it for 1000 times, this is what is done in Monte-Carlo simulation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "result = []\n", "# choose number of runs to simulate - I have chosen 1000\n", "for i in range(1000):\n", " # create list of daily returns using random normal distribution\n", " daily_returns = np.random.normal(mu / T, vol / math.sqrt(T), T) + 1\n", "\n", " # set starting price and create price series generated by above random daily returns\n", " price_list = [S]\n", "\n", " for x in daily_returns:\n", " price_list.append(price_list[-1] * x)\n", "\n", " # plot data from each individual run which we will plot at the end\n", " plt.plot(price_list)\n", " # append the ending value of each simulated run to the empty list we created at the beginning\n", " result.append(price_list[-1])\n", "\n", "# show the plot of multiple price series created above\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.distplot(result, bins=50)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This distribution is not normal, more similar to lognormal, with some overshoots near 1750" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# use numpy mean function to calculate the mean of the result\n", "print(round(np.mean(result), 2), \"$\")\n", "print(\"5% quantile =\", np.percentile(result, 5))\n", "print(\"95% quantile =\", np.percentile(result, 95))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So we have found that we can expect Tesla equity value of \\$514.96. 
Moreover, there is a 5% chance that our stock price will end up below around \$191.92 and a 5% chance it will finish above \$1051.50.\n", "\n", "Thus, we can ask ourselves whether we are ready to accept a 5% chance of ending up with a stock worth less than \$191.92 in order to chase an expected return of around 38%, i.e. an expected stock price of around \$514.96." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's check the BSIC implementation of the four types of VaR. First, we compare the single VaR values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import math\n", "\n", "import numpy as np\n", "import pandas as pd\n", "import seaborn as sns\n", "from scipy.stats import norm\n", "\n", "\n", "def VaR(\n", "    Returns,\n", "    Formula=\"Parametric Normal\",\n", "    Confidence_Interval=0.95,\n", "    Period_Interval=None,\n", "    EWMA_Discount_Factor=0.94,\n", "    Series=False,\n", "    removeNa=True,\n", "):\n", "\n", "    \"\"\"\n", "    This function can calculate both a single VaR value and a series of VaR values through time.\n", "    Supported formulas are Parametric Normal, Parametric EWMA, Historical Simulation and Filtered Historical Simulation.\n", "    \"\"\"\n", "\n", "    # Remove the NAs from the series\n", "    if removeNa == True:\n", "        Returns = Returns[pd.notnull(Returns)]\n", "\n", "    # Data must already be returns; the sampling interval is set here when no input is given\n", "    if Series == True and Period_Interval == None:\n", "        Period_Interval = 100\n", "    elif Period_Interval == None:\n", "        Period_Interval = len(Returns)\n", "\n", "    # ===============================\n", "    # Parametric Normal VaR\n", "    # ===============================\n", "    if Formula == \"Parametric Normal\":\n", "\n", "        if Series == False:\n", "            Data = Returns[-Period_Interval:]\n", "            stdev = np.std(Data)\n", "            Value_at_Risk = stdev * norm.ppf(Confidence_Interval)\n", "        if Series == True:\n", "            Value_at_Risk = pd.Series(index=Returns.index, name=\"ParVaR\")\n", "            for i in range(0, len(Returns) - Period_Interval):\n", "                if i == 0:\n", "                    Data = Returns[-(Period_Interval):]\n", "                else:\n", "                    Data = Returns[-(Period_Interval + i) : -i]\n", "                stdev = np.std(Data)\n", "                Value_at_Risk[-i - 1] = stdev * norm.ppf(Confidence_Interval)\n", "\n", "    # ============================\n", "    # EWMA Parametric VaR\n", "    # ============================\n", "    if Formula == \"Parametric EWMA\":\n", "\n", "        # Define the exponentially smoothed weight components\n", "        Degree_of_Freedom = np.empty([Period_Interval,])\n", "        Weights = np.empty([Period_Interval,])\n", "        Degree_of_Freedom[0] = 1.0\n", "        Degree_of_Freedom[1] = EWMA_Discount_Factor\n", "        Range = range(Period_Interval)\n", "        for i in range(2, Period_Interval):\n", "            Degree_of_Freedom[i] = Degree_of_Freedom[1] ** Range[i]\n", "        for i in range(Period_Interval):\n", "            Weights[i] = Degree_of_Freedom[i] / sum(Degree_of_Freedom)\n", "\n", "        if Series == False:\n", "            sqrdData = (Returns[-Period_Interval:]) ** 2  # squared returns for the formula\n", "            EWMAstdev = math.sqrt(sum(Weights * sqrdData))\n", "            Value_at_Risk = EWMAstdev * norm.ppf(Confidence_Interval)\n", "        if Series == True:\n", "            Value_at_Risk = pd.Series(index=Returns.index, name=\"EWMAVaR\")\n", "            # For efficiency, square the returns once so the loop does not do it repeatedly\n", "            sqrdReturns = Returns ** 2\n", "            # This loop repeats the VaR calculation for every period interval\n", "            for i in range(0, len(Returns) - Period_Interval):\n", "                # pandas Series are sliced as a[x:] / a[x:-i], not indexed as a[x, 0]\n", "                if i == 0:\n", "                    sqrdData = sqrdReturns[-(Period_Interval):]\n", "                else:\n", "                    sqrdData = sqrdReturns[-(Period_Interval + i) : -i]\n", "\n", "                EWMAstdev = math.sqrt(sum(Weights * sqrdData))\n", "                # a slice like a[x:-0] is empty, so the result is written with a[-i - 1]\n", "                Value_at_Risk[-i - 1] = EWMAstdev * norm.ppf(Confidence_Interval)\n", "\n", "    # ============================\n", "    # Historical Simulation\n", "    # ============================\n", "    if Formula == \"Historical Simulation\":\n", "\n", "        if Series == False:\n", "            Data = Returns[-Period_Interval:]\n", "            # np.percentile expects a percentage in [0, 100], hence the factor of 100\n", "            Value_at_Risk = -np.percentile(Data, 100 * (1 - Confidence_Interval))\n", "        if Series == True:\n", "            Value_at_Risk = pd.Series(index=Returns.index, name=\"HSVaR\")\n", "            for i in range(0, len(Returns) - Period_Interval):\n", "                if i == 0:\n", "                    Data = Returns[-(Period_Interval):]\n", "                else:\n", "                    Data = Returns[-(Period_Interval + i) : -i]\n", "                Value_at_Risk[-i - 1] = -np.percentile(Data, 100 * (1 - Confidence_Interval))\n", "\n", "    # ====================================\n", "    # Filtered Historical Simulation\n", "    # ====================================\n", "    if Formula == \"Filtered Historical Simulation\":\n", "\n", "        # Define the exponentially smoothed weight components\n", "        Degree_of_Freedom = np.empty([Period_Interval,])\n", "        Weights = np.empty([Period_Interval,])\n", "        Degree_of_Freedom[0] = 1.0\n", "        Degree_of_Freedom[1] = EWMA_Discount_Factor\n", "        Range = range(Period_Interval)\n", "        for i in range(2, Period_Interval):\n", "            Degree_of_Freedom[i] = Degree_of_Freedom[1] ** Range[i]\n", "        for i in range(Period_Interval):\n", "            Weights[i] = Degree_of_Freedom[i] / sum(Degree_of_Freedom)\n", "\n", "        Value_at_Risk = pd.Series(index=Returns.index, name=\"FHSVaR\")\n", "        EWMAstdev = np.empty([len(Returns) - Period_Interval,])\n", "        stndrData = pd.Series(index=Returns.index)\n", "\n", "        # For efficiency, square the returns once so the loop does not do it repeatedly\n", "        sqrdReturns = Returns ** 2\n", "\n", "        # The computation happens in several passes, because all the EWMA stdevs are needed first\n", "        # First get the stdev according to the EWMA\n", "        for i in range(0, len(Returns) - Period_Interval):\n", "            if i == 0:\n", "                sqrdData = sqrdReturns[-(Period_Interval):]\n", "            else:\n", "                sqrdData = sqrdReturns[-(Period_Interval + i) : -i]\n", "\n", "            EWMAstdev[-i - 1] = math.sqrt(sum(Weights * sqrdData))\n", "\n", "        # Now standardize the data by dividing by the EWMA stdev.\n", "        # The loop length has -1 because we standardize by the stdev of the PREVIOUS period,\n", "        # hence EWMAstdev is indexed with [-i - 2] instead of [-i - 1].\n", "        for i in range(0, len(Returns) - Period_Interval - 1):\n", "            stndrData[-i - 1] = Returns[-i - 1] / EWMAstdev[-i - 2]\n", "        stndrData = stndrData[pd.notnull(stndrData)]\n", "        # Finally take the percentile and unfilter the data back\n", "        for i in range(0, len(stndrData) - Period_Interval):\n", "            if i == 0:\n", "                stndrData2 = stndrData[-(Period_Interval):]\n", "            else:\n", "                stndrData2 = stndrData[-(Period_Interval + i) : -i]\n", "\n", "            # np.percentile expects a percentage in [0, 100], hence the factor of 100\n", "            stndrData_pct = np.percentile(stndrData2, 100 * (1 - Confidence_Interval))\n", "            # Unfilter back with the CURRENT stdev\n", "            Value_at_Risk[-i - 1] = -(stndrData_pct * EWMAstdev[-i - 1])\n", "\n", "        # For FHS a single value cannot be taken directly, because returns are standardized by the\n", "        # preceding stdev; the whole series is always calculated and the last value is taken\n", "        if Series == True:\n", "            Value_at_Risk = Value_at_Risk\n", "        if Series == False:\n", "            Value_at_Risk = Value_at_Risk[-1]\n", "\n", "    return Value_at_Risk\n", "\n", "\n", "def VaR_Compare(\n", "    Returns, Confidence_Interval=0.95, Period_Interval=100, EWMA_Discount_Factor=0.94\n", "):\n", "\n", "    \"This function calculates the different VaR series and plots them in the same graph\"\n", "\n", "    # Use the same inputs for each VaR call; they are set here\n", "    Ret = Returns\n", "    CI = Confidence_Interval\n", "    PI = Period_Interval\n", "    EWMAdf = EWMA_Discount_Factor\n", "\n", "    # Call the single VaR series\n", "    VaRPN = VaR(\n", "        Ret,\n", "        Formula=\"Parametric Normal\",\n", "        Confidence_Interval=CI,\n", "        Period_Interval=PI,\n", "        EWMA_Discount_Factor=EWMAdf,\n", "        Series=True,\n", "        removeNa=True,\n", "    )\n", "    VaREWMA = VaR(\n", "        Ret,\n", "        Formula=\"Parametric EWMA\",\n", "        Confidence_Interval=CI,\n", "        Period_Interval=PI,\n", "        EWMA_Discount_Factor=EWMAdf,\n", "        Series=True,\n", "        removeNa=True,\n", "    )\n", "    VaRHS = VaR(\n", "        Ret,\n", "        Formula=\"Historical Simulation\",\n", "        Confidence_Interval=CI,\n", "        Period_Interval=PI,\n", "        EWMA_Discount_Factor=EWMAdf,\n", "        Series=True,\n", "        removeNa=True,\n", "    )\n", "    VaRFHS = VaR(\n", "        Ret,\n", "        Formula=\"Filtered Historical Simulation\",\n", "        Confidence_Interval=CI,\n", "        Period_Interval=PI,\n", "        EWMA_Discount_Factor=EWMAdf,\n", "        Series=True,\n", "        removeNa=True,\n", "    )\n", "\n", "    # Concatenate the different VaR series into one dataframe and plot it\n", "    AllVaR = pd.concat([VaRPN, VaREWMA, VaRHS, VaRFHS], axis=1)\n", "    from matplotlib import pyplot as plt\n", "\n", "    plt.figure(figsize=(8, 5))\n", "    sns.lineplot(data=AllVaR)\n", "    plt.legend()\n", "    plt.show()\n", "\n", "    return AllVaR.columns" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for i in [\n", "    \"Parametric Normal\",\n", "    \"Parametric EWMA\",\n", "    \"Historical Simulation\",\n", "    \"Filtered Historical Simulation\",\n", "]:\n", "    print(i, np.round(VaR(tesla[\"Returns\"], Formula=i, Series=False, removeNa=True), 4))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "FHS doesn't work with Series=False, so let's compute the series over the whole history and take the mean value." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "VaR(\n", "    tesla[\"Returns\"],\n", "    Formula=\"Filtered Historical Simulation\",\n", "    Series=True,\n", "    removeNa=True,\n", ").mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The numbers differ significantly.\n", "What about the values over time?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "VaR_Compare(tesla[\"Returns\"])  # ParVaR EWMAVaR HSVaR FHSVaR" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "FHS shows the highest volatility, while the conservative ParVaR is the smoothest. Which one to choose depends on you and on the environment in which the decision has to be made."
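] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The theory section also introduced the expected shortfall. As a final illustration (this cell is an added sketch, not part of the original BSIC code), let's estimate the empirical one-day ES for the Tesla returns: the average loss on the days when the 95% historical VaR threshold is breached." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch: empirical one-day expected shortfall (ES) for the Tesla returns\n", "import numpy as np\n", "\n", "returns = tesla[\"Returns\"].dropna()\n", "alpha = 0.05\n", "\n", "var_95 = -np.percentile(returns, 100 * alpha)  # 95% one-day historical VaR\n", "es_95 = -returns[returns <= -var_95].mean()  # mean loss beyond the VaR threshold\n", "\n", "print(\"1-day 95% historical VaR:\", round(var_95, 4))\n", "print(\"1-day 95% expected shortfall:\", round(es_95, 4))"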
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Conclusion" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this tutorial we have covered some basic instruments of risk management: several application of VaR and Monte-Carlo simulation. One should remember that each of them has its own pros and cons, so use them wisely.\n", "\n", "Moreover, there are lots of other methods and technics and even other types of VaR, that are not covered here. For example, there are some extensions called coherent risk measures alternatives. The conditional VaR (CVaR) or expected shortfall, computes the expected return of the portfolio in the worst scenarios for a certain probability level. The entropic VaR (EVaR) instead represents the upper bound for both VaR and CVaR, and its dual representation is related to the concept of relative entropy.\n", "\n", "Thus, I hope that this tutorial has helped you to get acquainted with some foundations and got hooked you with this topic." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }