**An invited tutorial by Ralph Vince**

*Leverage space is simply a manifold, structured such that we can examine and discuss matters pertaining to growth functions of stochastic outcomes, such as processes of money management and portfolio allocations in the capital markets and gambling scenarios. It is a framework for observing and analyzing these functions, as well as drawing conclusions about how we manage these functions for our benefit.*

*Since growth functions are a function of time, much pertaining to what occurs in the manifold of leverage space is a function of time also. Additionally, we examine what occurs therein in the asymptotic or long-run sense as well.*

We begin with the notion of a *player* who is confronted with a set of *discrete* possible outcomes for a random event to which he will allocate resources; the outcome of this event will determine how much of those resources he gets back or loses.

These discrete outcomes each have a probability associated with them. Let us take the hypothetical case of a player confronted with a coin-toss proposition that will return 2 units to him if heads is tossed, but will cost him 1 unit if tails is tossed. Thus, the set of discrete outcomes presented is:

| Outcome | Quantity | Probability |
|---------|----------|-------------|
| Heads   | 2        | 0.5         |
| Tails   | -1       | 0.5         |

The player wishes to maximize what he will get back, considering the sum of all possible discrete outcomes times their respective probabilities (the classical *‘expectation’*), and considering the amount he has available to risk on the *proposition*.

Since the expectation per unit is the same regardless of the amount risked, expected growth is maximized by wagering 100%.

Our player is permitted to risk a fraction, *f*, of his available capital. Being a fraction, it is bounded between the values of 0 and 1, and thus, given the independence of the expectation from the amount wagered, expected growth is always maximized for a positive expectation at *f* = 1, whereas expected growth is always maximized for a negative expectation at *f* = 0 (*i.e.* nothing risked).

However, we are looking at quitting after only one play. That is to say, for a positive expectation, such as this 2 to 1 coin toss, expected growth is maximized by risking 100% of the player's available resources if the player's horizon is 1 play. Thus, the fraction to risk is a function of the player's *horizon*, the point where he intends to cease risking resources on the proposition.
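These one-play claims are easy to verify numerically. A minimal Python sketch, using the outcome table above:

```python
# Discrete outcomes of the 2:1 coin toss as (quantity, probability) pairs.
outcomes = [(2.0, 0.5), (-1.0, 0.5)]

# Classical expectation: sum of quantity * probability.
expectation = sum(x * p for x, p in outcomes)  # 2*0.5 + (-1)*0.5 = 0.5

def expected_multiple_one_play(f):
    """Expected multiple of our stake after a single play, risking fraction f."""
    return sum(p * (1.0 + f * x) for x, p in outcomes)

# The expected multiple works out to 1 + 0.5*f, a straight line in f,
# so for a one-play horizon it is maximized at f = 1.
```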

Let us examine what now happens at two plays of this same proposition, this same coin toss that pays 2 to 1. Utilizing the model presented in [2] and [3] (which yield an expected growth optimal fraction to wager of 1 for a horizon of 1) we find that the fraction of available resources to wager for a horizon of 2 plays is now *f* = .5 per play.

And if we continue expanding the horizon, that is, the number of plays after which the player will cease risking resources on the proposition, we find that the fraction to risk (on each play) continues to diminish, up to a point:

This point, where the expected growth-optimal fraction settles to an asymptote as the horizon (which we will denote by the variable *Q*) approaches infinity, we refer to as the *Kelly Criterion* growth-optimal fraction (*f* = .25 in our hypothetical 2 to 1 coin toss case).
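The migration of the growth-optimal fraction with the horizon, and its asymptote, can be sketched numerically. Here I assume, per my reading of [2] and [3], that the finite-horizon criterion maximizes the expected average compound growth, E[(terminal multiple)^(1/*Q*)], while the asymptotic fraction maximizes the geometric-mean holding-period return; a brute-force grid search suffices for illustration:

```python
from math import comb

def expected_avg_compound_growth(f, Q, win=2.0, loss=-1.0, p=0.5):
    """E[(terminal wealth multiple)**(1/Q)] after Q i.i.d. plays at fraction f."""
    total = 0.0
    for k in range(Q + 1):                       # k heads, Q - k tails
        prob = comb(Q, k) * p**k * (1 - p)**(Q - k)
        w = (1 + f * win)**k * (1 + f * loss)**(Q - k)
        total += prob * w**(1.0 / Q)
    return total

def growth_optimal_f(Q, n=1000):
    """Grid search for the fraction maximizing expected average compound growth."""
    grid = [i / n for i in range(n + 1)]         # f in [0, 1]
    return max(grid, key=lambda f: expected_avg_compound_growth(f, Q))

def kelly_fraction(win=2.0, loss=-1.0, p=0.5, n=10000):
    """Asymptotic (Q -> infinity) optimum: maximize the geometric-mean HPR."""
    grid = [i / n for i in range(n + 1)]
    return max(grid, key=lambda f: (1 + f * win)**p * (1 + f * loss)**(1 - p))
```

For the 2 to 1 coin toss this reproduces the text: `growth_optimal_f(1)` is 1, `growth_optimal_f(2)` is .5, and the fraction keeps falling toward the asymptote `kelly_fraction()` of .25.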

In the real world, what is risked in terms of resources is often not that which is "put up." Consider the case of short sales, for example, or futures positions, where a certain good-faith margin is put up, but what is actually risked is not the same amount.

In the 2 to 1 coin toss example already cited, what is put up (1 unit) is also that which is risked, and this is almost always the case in gambling situations. In capital markets, trading situations, the mechanics are often a little more complicated.

Building on this example, let us suppose the player must put up 1 unit to assume a wager, but what he can actually lose is a different amount, ranging from 1 unit down to .1 unit. In such situations, the Kelly Criterion must be amended to weight for the largest losing outcome among the discrete outcomes which are possible. This amendment, which I refer to as Optimal *f*, is necessary to keep the value a fraction (0 ≤ *f* ≤ 1).

| Heads | Tails | Kelly Criterion | Optimal f |
|-------|-------|-----------------|-----------|
| 2     | -1    | 0.25            | 0.25      |
| 2     | -0.8  | 0.375           | 0.3       |
| 2     | -0.5  | 0.75            | 0.375     |
| 2     | -0.25 | 1.75            | 0.4375    |
| 2     | -0.1  | 4.75            | 0.475     |

The critical difference, as exemplified here, is that the answer returned by the Kelly Criterion, sans weighting by the largest potential loss, is *not* the optimal fraction but rather the optimal *leverage factor*. That is, if we look at the case of a 2 to .25 coin toss, we find the Kelly Criterion solution to be 1.75. This does not mean we should risk 175% of our available capital[1], but rather that we should operate as though we had 1.75 times our actual resources, dividing that notional amount by the 1 unit which must be put up to arrive at the number of units to wager.

In the largest-losing-outcome-weighted solution, the Optimal *f* solution, we have an actual fraction to wager. (It should be pointed out that the Kelly Criterion solution, being a leverage factor, never equals the expected asymptotic growth-optimal fraction to wager except in the special case where the amount put up to assume the wager equals the worst-case possible outcome.) The player thus multiplies the amount of available resources by this fraction, dividing the largest potential losing outcome into that product.

Ultimately, the number of wagers to assume is the same regardless of whether one uses the original Kelly Criterion calculation or the worst-possible outcome weighted solution (Optimal *f*) but only the latter is an actual fraction. By using the actual fraction, we bound the risk axes between 0 and 1.
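The table above can be reproduced numerically. A sketch, under the assumption that the Kelly value here is the unconstrained maximizer of the expected log return, and Optimal *f* is that value weighted (multiplied) by the absolute value of the largest losing outcome:

```python
from math import log

def kelly_leverage(win, loss, p=0.5, step=0.0001, fmax=10.0):
    """Unconstrained argmax of E[log(1 + f*x)]: the Kelly 'leverage factor'."""
    best_f, best_g = 0.0, float("-inf")
    f = 0.0
    while f < fmax and 1 + f * loss > 0:   # log undefined at or past total loss
        g = p * log(1 + f * win) + (1 - p) * log(1 + f * loss)
        if g > best_g:
            best_f, best_g = f, g
        f += step
    return best_f

def optimal_f(win, loss, p=0.5):
    """Largest-loss-weighted fraction, bounded to [0, 1]."""
    return kelly_leverage(win, loss, p) * abs(loss)
```

Note that the implied position count agrees either way, as the text describes: with 1 unit put up, 1.75 × capital ÷ 1 equals 0.4375 × capital ÷ 0.25.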

Thus far we have discussed the expected growth-optimal peak of the curve, which, for a set of discrete outcomes with a positive expectation, begins at *f* = 1 for a horizon of 1 (*Q* = 1) and migrates to its asymptotic peak. But there is much more to the character of the curve and its implications than simply the peak.

Let us examine some of these properties now. Shown in the next figure is our 2 to 1 coin toss game after 40 plays. The vertical axis represents what we would expect to make, as a multiple of our initial resources; the horizontal axis represents the fraction of our resources we risk. We can also state that the more we risk, the greater our drawdown will be.

Notice that if we risk a fraction of .4 we make exactly the same as if risking a fraction of .1 (both making less than the peak at .25), yet risking .4 will have approximately four times the drawdown that risking .1 has. Clearly, it does not pay to exceed the peak, a point we will refer to as *kappa*.

Additionally, notice that beyond risking a fraction of .5 the expected growth is less than 1. In other words, if we risk more than a fraction of .5 (that is, more aggressively than 1 unit for every 2 units we have) we are multiplying our resources by an amount less than 1. Thus, even though we are engaged in a very favorable proposition and *not* borrowing, we can still lose all of our resources by being aggressive beyond a certain point. We call this point *psi*.
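These features of the *Q* = 40 curve can be checked numerically. Here I take the expected multiple as the geometric-mean holding-period return raised to *Q*, a terminal-wealth-relative style calculation (per my reading of [4]); a minimal sketch:

```python
def twr(f, Q=40):
    """Expected multiple after Q plays of the 2:1 toss:
    geometric-mean holding-period return, raised to Q."""
    return ((1 + 2.0 * f)**0.5 * (1 - f)**0.5)**Q

# Risking .1 and .4 yield the same multiple; .5 is the break-even point psi;
# beyond psi the multiple falls below 1 despite the favorable proposition.
```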

Starting at *f* = 0 and increasing the value of *f*, we find that the expected return increases at an ever-faster rate up to a point, then that rate decreases. That is to say, there is a point left of the peak, *kappa*, where the marginal increase in gain slows down for increasing risk. This is the point where the curve goes from concave up to concave down, and it represents the peak of the first derivative of the curve. This *inflection point* we call *nu*, and we view it as the most conservative of the risk-adjusted return-optimal points.

The (left-of-the-peak) inflection point (*nu*) does not appear at first. Clearly there can be no inflection point at *Q* = 1, where the curve is a straight line (as shown in Figure 1), nor for a purely convex function when *Q* = 2 (as shown in Figure 2); in the 2 to 1 coin toss example, even at *Q* = 8 (as shown in Figure 3) there is still no inflection point. Figure 6, where *Q* = 40, clearly shows one. This point, *nu*, like the peak *kappa* itself, migrates towards the asymptotic peak value as *Q* grows toward infinity.

Another important point, the more aggressive of the two risk-adjusted optimal points, we refer to as *zeta*; it represents the point where gain with respect to risk is generally maximized. If we look at the vertical axis as gain and the horizontal axis as risk, then the point on the curve whose line to the point where *f* = 0 exhibits the steepest slope (a line therefore tangent to the curve, which can consequently only appear when *Q* is sufficiently large that a *nu* point appears) is the point where the ratio of return to risk is maximized. We call this point *zeta* (*nu* < *zeta* < *kappa*).
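Under the same terminal-wealth-relative curve, both *nu* (the sign change of the second derivative) and *zeta* (the steepest line from the *f* = 0 point) can be located by a simple scan; a sketch for *Q* = 40, not exact values:

```python
def curve(f, Q=40):
    """Expected multiple after Q plays of the 2:1 toss (geometric-mean HPR ** Q)."""
    return ((1 + 2.0 * f)**0.5 * (1 - f)**0.5)**Q

fs = [i * 0.001 for i in range(1, 250)]   # scan the interval (0, 0.25)
g = [curve(f) for f in fs]

# nu: first point where the discrete second difference turns negative,
# i.e. the curve goes from concave up to concave down.
nu = next(fs[i] for i in range(1, len(g) - 1)
          if g[i + 1] - 2 * g[i] + g[i - 1] < 0)

# zeta: point maximizing the slope of the line from (0, curve(0)) = (0, 1),
# i.e. maximizing (curve(f) - 1) / f, the return-to-risk ratio.
zeta = max(fs, key=lambda f: (curve(f) - 1.0) / f)
```

For this game the scan places *nu* near .13 and *zeta* between *nu* and the peak *kappa* at .25, consistent with the ordering *nu* < *zeta* < *kappa*.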

Notice in Figure 7 that the slope at the point *kappa* (in green) is shallower than that at the return/risk-optimal point, *zeta*, the point where the tangent line (in blue) touches the return function.

Finally, it should be noted that if we are using the fractional representation, the expected growth-optimal peak, *kappa*, will never be greater than the sum of the probabilities of the winning discrete outcomes. Thus, in a coin toss situation, *kappa* can never be greater than .5 for an unbiased coin, regardless of the payout. One can always be certain the peak of the curve, *kappa*, for all possible sets of discrete outcomes, will reside somewhere between 0 and the sum of the probabilities of the winning outcomes.
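This bound is easy to probe numerically: for an unbiased coin (winning probability .5) losing the full unit, the growth-optimal fraction stays below .5 no matter how generous the payout. A sketch using the geometric-mean criterion:

```python
def kappa_fraction(win, loss=-1.0, p_win=0.5, n=10000):
    """Asymptotic growth-optimal fraction: argmax of the geometric-mean HPR."""
    grid = [i / n for i in range(n + 1)]   # f in [0, 1]
    return max(grid, key=lambda f: (1 + f * win)**p_win * (1 + f * loss)**(1 - p_win))

# Even a 100-to-1 payout keeps kappa below the winning probability, .5.
```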

One final point that must be made here: whenever one assumes a position, whenever anyone places a wager, assumes a position in the capital markets or has a vested stake in a growth function based on a stochastic outcome, one resides somewhere on this very same curve. The peak and the other chronomorphically-important loci of growth regulation (*nu*, *zeta*, *psi*) are at different points for different values of *Q*, but one is always somewhere on this curve, the benefits or consequences of one's specific location at work whether acknowledged or not.

Thus far we have looked at only one proposition. We now turn our attention to multiple, simultaneous propositions.

If we have two coins, two propositions, two components, we are now looking for the two values of *f* that represent a coordinate in three-dimensional space. For *N* components, we are thus looking for a peak in an *N*+1 dimensional space. For two 2 to 1 coin tosses, played simultaneously, we therefore have a peak at *f*₁ = .23, *f*₂ = .23, for an aggregate fraction risked of .46.

The propositions do not have to be identical; they can be entirely different propositions. What is necessary, however, is that the propositions transpire over the same windows of time. That is, if we are looking at multiple simultaneous propositions, they should all transpire over what we call a *holding period*, and a holding period can be any uniform window of time: a day, a week, a month, a year, any uniform period across components. Thus, for example, if our holding periods are months, our table of potential outcomes must be outcomes over the course of a month.
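The two-coin peak cited above can be found by brute force. A sketch, assuming two independent 2 to 1 coin tosses per holding period (four equally likely joint outcomes) and the asymptotic expected-log criterion:

```python
from math import log

# Joint outcomes of two independent 2:1 coin tosses, each with probability 0.25.
joint_outcomes = [(2.0, 2.0), (2.0, -1.0), (-1.0, 2.0), (-1.0, -1.0)]

def growth_rate(f1, f2):
    """Expected log holding-period return for the coordinate (f1, f2)."""
    total = 0.0
    for x1, x2 in joint_outcomes:
        hpr = 1.0 + f1 * x1 + f2 * x2
        if hpr <= 0:                      # ruin region: exclude
            return float("-inf")
        total += 0.25 * log(hpr)
    return total

# Grid-search the unit square (coarsely) for the peak of the surface.
step = 0.005
grid = [i * step for i in range(100)]     # 0.000 .. 0.495 on each axis
f1, f2 = max(((a, b) for a in grid for b in grid),
             key=lambda pair: growth_rate(*pair))
```

The peak lands near (*f*₁, *f*₂) = (.23, .23), an aggregate of roughly .46.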

Notice now the necessity for scaling all axes to values between 0 and 1. Thus, we obtain an *N + 1* dimensional manifold scaled in all dimensions between 0 and 1 save for the return (altitude) dimension.

Also notice the grey area in Figure 8, representing those *f*-value coordinates where the growth function is less than 1. Notice how one can be at the asymptotic peak value for one axis (*f* = .23) while the other axis is such that the aggregate growth function for both simultaneous propositions is less than 1. This runs counter to the usually-accepted notions of diversification.

Just as with the single proposition, the surface changes shape as the number of plays or holding periods increases, the important points of growth regulation migrating at varying rates towards the asymptotic peak. Further, these important loci of growth regulation (*nu*, *zeta*, *psi*), although single points in the single-proposition case, are themselves manifolds of *N* dimensions in the *N*+1 dimensional leverage space manifold (with the exception of *kappa*, which is always a solitary point). The calculation of these manifolds is covered in depth in [1], and just as with the single proposition, one is always at some locus, some point on the surface in leverage space, whether acknowledged or not.

Therefore, one often does wish to allocate based upon the coordinates of a point in leverage space. For example, by residing within the *N*-dimensional *zeta* manifold at a given horizon, *Q*, in an *N*+1 dimensional leverage space of *N* components, one is seeking to satisfy the criterion of being reward/risk optimal (with risk defined as drawdown in the case of the *zeta* manifold), as opposed to simply solving, as with most portfolio-allocation strategies, the more conventional problem of being mean-variance return optimal over the next, solitary period. The manifold of leverage space allows us to satisfy other criteria, either by residing at certain loci to achieve those criteria, or by traversing paths along the surface within the leverage space manifold, based on other events, to achieve other criteria.

Often, various risk metrics are employed which, if violated, would result in ruin for the leverage space implementer. For example, if the probability of a given drawdown by a given horizon, *Q*, exceeds a given tolerance, then that part of the surface of leverage space should be removed (so that the surface at such loci is at an altitude of 0). This is exemplified in Figure 9 for the two-component case:

Other risk-violating criteria might be, for example, allocating more than, say, 10% to any one component (in which case the altitude, the return function, for all loci where any component has an *f* value greater than .1 would be reduced to 0). The superimposition of any risk metrics on the surface of leverage space, and the notion that various criteria can be satisfied by residing at different loci or traversing various paths along this surface, opens up a wide range of possibilities in portfolio construction and allocation, and money management techniques.
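A risk constraint of this kind can be sketched for the single-component 2 to 1 coin toss: estimate the probability of a given drawdown within the horizon by Monte Carlo, and zero the altitude wherever that probability exceeds a tolerance. The threshold values here (50% drawdown, 10% tolerance) are illustrative assumptions, not prescriptions:

```python
import random

def prob_drawdown(f, dd=0.5, Q=40, trials=4000, seed=7):
    """Monte Carlo estimate of P(peak-to-trough drawdown >= dd within Q plays)
    of the 2:1 coin toss, risking fraction f per play."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        wealth = peak = 1.0
        for _ in range(Q):
            wealth *= (1 + 2 * f) if rng.random() < 0.5 else (1 - f)
            peak = max(peak, wealth)
            if wealth <= peak * (1 - dd):   # constraint breached this path
                hits += 1
                break
    return hits / trials

def constrained_altitude(f, max_prob=0.10):
    """TWR altitude at Q = 40, zeroed where the drawdown constraint is violated."""
    altitude = ((1 + 2 * f)**0.5 * (1 - f)**0.5)**40
    return 0.0 if prob_drawdown(f) > max_prob else altitude
```

A very aggressive *f* is almost certain to breach the drawdown limit and its altitude is removed, while a timid *f* never can; the surviving region is the admissible portion of the curve.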

There is much more to explore here.

**References**

[1] M. L. de Prado, R. Vince, Q. J. Zhu, Optimal Risk Budgeting under a Finite Investment Horizon, SSRN, 2364092, 2013.

[2] R. Vince and Q. J. Zhu. Inflection point significance for the investment size. SSRN, 2230874, 2013.

[3] R. Vince and Q. J. Zhu. Optimal Betting Sizes for the Game of Blackjack. SSRN, 2324852, 2013.

[4] R. Vince. *Risk-Opportunity Analysis*. Createspace Division of Amazon, 2012.