Wednesday, February 17, 2016

Decoding Financial Networks: Hidden Dangers and Effective Policies 


Two changes have ushered in a new era of analyzing the complex and interdependent world surrounding us. One is related to the increased influx of data, furnishing the raw material for this revolution that is now starting to impact economic thinking. The second change is due to a subtler reason: a paradigm shift in the analysis of complex systems.

The buzzword "big data" is slowly being replaced by what is becoming established as "data science." The cost of computer storage keeps falling while storage capacity grows exponentially. As a result, seemingly endless streams of data, originating from countless human endeavors, continually flow along global information superhighways and are stored not only in server farms and the cloud, but -- importantly -- also in researchers' local databases. However, collecting and storing raw data is futile if there is no way to extract meaningful information from it. Here, the budding science of complex systems is helping distill meaning from this data deluge.

Traditional problem-solving has been strongly shaped by the success of the reductionist approach taken in science. Put in the simplest terms, the focus has traditionally been on things in isolation -- on the tangible, the tractable, the malleable. But not so long ago, this focus shifted to a subtler dimension of our reality, where the isolation is overcome. Indeed, seemingly single and independent entities are always components of larger units of organization and hence influence each other. Our world, while still being composed of many of the same "things" as in the past, has become highly networked and interdependent -- and, therefore, much more complex. From the interaction of independent entities, the notion of a system has emerged.

Understanding the structure of a system's components does not bring insights into how the system will behave as a whole. Indeed, the very concept of emergence fundamentally challenges our knowledge of complex systems, as self-organization allows for novel properties -- features not previously observed in the system or its components -- to unfold. The whole is literally more than the sum of its parts.

This shift away from analyzing the structure of "things" to analyzing their patterns of interaction represents a true paradigm shift, and one that has impacted computer science, biology, physics and sociology. The need to bring about such a shift in economics, too, can be heard in the words of Andy Haldane, chief economist at the Bank of England (Haldane 2011):
Economics has always been desperate to burnish its scientific credentials and this meant grounding it in the decisions of individual people. By itself, that was not the mistake. The mistake came in thinking the behavior of the system was just an aggregated version of the behavior of the individual. Almost by definition, complex systems do not behave like this. [...] Interactions between agents are what matters. And the key to that is to explore the underlying architecture of the network, not the behavior of any one node.

In a nutshell, the key to the success of complexity science lies in ignoring the complexity of the components while quantifying the structure of interactions. An ideal abstract representation of a complex system is given by a graph -- a complex network. This field has been emerging in a modern form since about the turn of the millennium (Watts and Strogatz 1998; Barabasi and Albert 1999; Albert and Barabasi 2002; Newman 2003).

Underpinning economics with insights from complex systems requires a major culture change in how economics is conducted. Specialized knowledge needs to be augmented with a diversity of expertise. Or, in the words of Jean-Claude Trichet, former president of the European Central Bank (Trichet 2010):

I would very much welcome inspiration from other disciplines: physics, engineering, psychology, biology. Bringing experts from these fields together with economists and central bankers is potentially very creative and valuable. Scientists have developed sophisticated tools for analyzing complex dynamic systems in a rigorous way.

What's more, scientists themselves have acknowledged this call for action (see, e.g., Schweitzer et al. 2009; Farmer et al. 2012).

In what follows, I will present two case studies that provide an initial glimpse of the potential of applying such a data-driven and network-inspired type of research to economic systems. By uncovering patterns of organization otherwise hidden in the data, these studies caught the attention not only of scholars and the general public, but also of policymakers.

The network of global corporate control

A specific constraint on the analysis of economic and financial systems is, unfortunately, a relative lack of data. While other fields are flooded with data, in the realm of economics a lot of potentially valuable information is deemed proprietary and is not disclosed, for strategic reasons. A viable detour is to utilize a good proxy that is exhaustive and widely available.

Ownership data, representing the percentages of equity a shareholder holds in companies, is one such dataset: the structure of the ownership network is thought to be a good proxy for that of the financial network (Vitali, Glattfelder and Battiston 2011). However, this is not the main reason for analyzing such a dataset. Ownership networks represent an interface between the fields of economics and complex networks because information on ownership relations crucially unlocks knowledge relating to the global power of corporations. As a matter of fact, ownership gives a certain degree of control to the shareholder. In other words, the signature of corporate control is encoded in these networks (Glattfelder 2013). These and similar issues are also investigated in the field of corporate governance.

Bureau van Dijk's commercial Orbis database comprises about 37 million economic actors (e.g., physical persons, governments, foundations and firms) located in 194 countries as well as roughly 13 million directed and weighted ownership links for the year 2007. In a first step, a cross-country analysis of this ownership snapshot was performed (Glattfelder and Battiston 2009). A key finding was that the more locally dispersed control appeared to be, the more concentrated it was globally in the hands of a few powerful shareholders. This is in contrast to the economic idea of "widely held" firms in the United States (Berle and Means 1932). In fact, these results show that the true picture can only be unveiled by considering the whole network of interdependence. By simply focusing on the first level of ownership, one is misled by a mirage.

In a next step, the Orbis data was used to construct the global network of ownership. By focusing on the 43,060 transnational corporations (TNCs) found in the data, a new network was constructed that comprised all the direct and indirect shareholders and subsidiaries of the TNCs. Then, this network of TNCs, containing 600,508 nodes and 1,006,987 links, was further analyzed (Vitali, Glattfelder and Battiston 2011). Figure 1 shows a small sample of the network.

Analyzing the topology of the TNC network reveals the first signs of an organizational principle at work. One can see that the network is actually made up of many sub-networks that are internally connected but disconnected from one another. The cumulative distribution function of the sizes of these connected components follows a power law: there are 23,824 such components, ranging from single isolated nodes up to a cluster of 230 connected nodes. However, the largest connected component (LCC) represents an outlier in the power-law distribution, as it contains 464,006 nodes and 889,601 links.

This super-cluster contains only 36 percent of all TNCs. In effect, most TNCs "prefer" to be part of isolated components that comprise a few hundred nodes at most. But what can be said about the TNCs in the LCC? By adding a proxy for the value or size of firms, the network analysis can be extended. In the study, operating revenue served as this proxy. Now it is possible to see where the valuable TNCs are located in the network. Strikingly, the 36 percent of TNCs in the LCC account for 94 percent of the total TNC operating revenue. This finding justifies focusing further analysis solely on the LCC.
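For the data-minded reader, here is a minimal sketch of how such a component analysis could be reproduced with a standard graph library, assuming the ownership data were available as a list of weighted edges plus a dictionary of operating revenues; the function name and input format are illustrative assumptions, not the study's actual pipeline.

```python
import networkx as nx

def lcc_revenue_share(ownership_edges, revenue):
    """ownership_edges: iterable of (shareholder, company, fraction) triples.
    revenue: dict mapping each node to its operating revenue (the value proxy)."""
    G = nx.DiGraph()
    G.add_weighted_edges_from(ownership_edges)
    # Components are taken on the undirected skeleton of the directed graph.
    components = sorted(nx.weakly_connected_components(G), key=len, reverse=True)
    lcc = components[0]
    share = sum(revenue.get(n, 0.0) for n in lcc) / sum(revenue.values())
    return len(components), len(lcc), share
```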

In general, assigning a value v_j to firm j gives additional meaning to the ownership network. As mentioned, a good proxy reflecting the economic value of a company is the operating revenue. Assigning such a non-topological variable to the nodes uncovers a deeper level of information embedded in the network. If shareholder i holds a fraction W_{ij} of the shares of firm j, W_{ij} v_j represents the value that i holds in j. Accordingly, the portfolio value of firm i is given by
p_i = sum_j W_{ij} v_j.    (1.1)
However, in ownership networks, there are also chains of indirect ownership links. For instance, firm i can gain value from firm k via firm j, if i holds shares in j, which, in turn, holds shares in k. Symbolically, this can be denoted as i -> j -> k.
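A toy example may help make this concrete. The sketch below computes the direct portfolio value of equation (1.1) for a three-firm chain i -> j -> k and then sums the contributions of all ownership chains as a geometric series; this simple summation illustrates the general idea and is not the exact correction applied in the study.

```python
import numpy as np

# W[i, j] is the fraction of firm j held by shareholder i.
# Chain i -> j -> k: firm 0 holds 60% of firm 1, which holds 50% of firm 2.
W = np.array([[0.0, 0.6, 0.0],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
v = np.array([10.0, 20.0, 40.0])   # firm values (e.g., operating revenue)

# Direct portfolio value, equation (1.1): p_i = sum_j W_ij v_j
p_direct = W @ v                    # [12. 20.  0.]

# Summing all chains of ownership, W + W^2 + W^3 + ... = (I - W)^{-1} W,
# assuming the series converges (chain contributions shrink fast enough).
W_total = np.linalg.inv(np.eye(3) - W) @ W
p_total = W_total @ v               # [24. 20.  0.]: firm 0 also reaches firm 2
```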

Using these building blocks, and the fact that ownership is related to control, a methodology is introduced that estimates the degree of influence that each agent wields as a result of the network of ownership relations. In other words, a network centrality measure is provided that not only accounts for the structure of the shareholding relations, but -- crucially -- also incorporates the distribution of value. This allows for the top shareholders to be identified. As it turns out, 730 top shareholders have the potential to control 80 percent of the total operating revenue of all TNCs. In effect, this measure of influence is one order of magnitude more concentrated than the distribution of operating revenue. These top shareholders are financial institutions located in the United States and the United Kingdom (note that holding many ownership links does not necessarily result in a high value of influence).

Combining these two dimensions of analysis -- that is, the topology and the shareholder ranking -- finally uncovers yet another pattern of organization. A striking feature of the LCC is that it has a tiny but distinct core of 1,318 nodes that are highly interconnected (12,191 links). Analyzing the identity of the firms present in this core reveals that many of them are also top shareholders. Indeed, the 147 most influential shareholders in the core can potentially control 38 percent of the total operating revenue of all TNCs. In other words, a "super-entity" with disproportionate power is identified in the already powerful core, akin to a fractal structure.
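One common way to operationalize such a tightly knit core is to extract the largest strongly connected component of the LCC, that is, the largest set of firms in which every firm can reach every other firm along ownership links. The sketch below follows that idea, in the spirit of the bow-tie analysis performed in the study; the function and its inputs are assumptions for illustration.

```python
import networkx as nx

def extract_core(lcc: nx.DiGraph):
    """Return the sub-network induced by the largest strongly connected
    component of the LCC, a natural candidate for the tightly knit core."""
    core_nodes = max(nx.strongly_connected_components(lcc), key=len)
    core = lcc.subgraph(core_nodes)
    return core.number_of_nodes(), core.number_of_edges(), core
```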

This emerging power structure in the global ownership network has possible negative implications. For instance, as will be discussed in the next section, global systemic risk is sensitive to the connectivity of the network (Battiston et al. 2007; Lorenz and Battiston 2008; Wagner 2009; Stiglitz 2010; Battiston et al. 2012a). Moreover, global market competition is threatened by potential collusion (O'Brien and Salop 2001; Gilo, Moshe and Spiegel 2006).

Subjecting a comprehensive global economic dataset to a detailed network analysis has the power to unveil organizational patterns that have previously gone undetected. Although the exact numbers in the study should be taken with a grain of salt, they still give a good first approximation. For instance, the very different methods that can be used to estimate control from ownership all provide very similar aggregated network statistics.

Finally, although it cannot be proved that the top influencers actually exert their power or are able to leverage their privileged position, it is also impossible to rule out such activities -- especially since these channels for relaying power can be utilized in a covert manner. In any case, the degree of influence assigned to the shareholders can be understood as the probability of achieving one's own interest against the opposition of the other actors -- a notion reminiscent of Max Weber's idea of potential power (Weber 1978).

An ongoing research effort aims to extend this analysis to include additional annual snapshots of the global ownership network up to 2012. The focus now lies on the dynamics and evolution of the network. In particular, the stability of the core over time will be analyzed. Preliminary results on a small subset of the data suggest that the structure of the core is indeed stable. If verified, this would imply that the emergent power structure is resilient to forces reshaping the network architecture, such as the global financial crisis. The structure could also potentially be resistant to market reforms and regulatory efforts.

DebtRank

In an interconnected system, the notion of risk can assume many guises. The simplest and most obvious manifestation is that of individual risk. The colloquialism "too big to fail" captures the promise that further disaster can be averted by identifying and assisting the major players. This approach, however, does not work in a network. In systems where the agents are connected and therefore codependent, the relevant measure is systemic risk. Only by understanding the architecture of the network's connectivity can the propagation of financial distress through the system be understood. In essence, systemic risk is akin to the process of an epidemic spreading through a population.

A naive intuition would suggest that by increasing the interconnectivity of the system, the threat of systemic risk is reduced. In other words, the overall system should be more resilient when agents diversify their individual risks by increasing the shared links with other agents. Unfortunately, this can be shown to be false (Battiston et al. 2012a). Granted, in systems with feedback loops, such as financial systems, initial individual risk diversification can indeed start off by reducing systemic risk. However, there is a threshold in the level of connectivity, and once it has been reached, any additional diversification effort will only result in increased systemic risk. Above this threshold, feedback loops and amplifications can lead to a knife-edge property, whereby stability is suddenly compromised.

Now a paradox emerges: Although individual financial agents become more resistant to shocks coming from their own business, the overall probability of failure in the system increases. In the worst-case scenario, the efforts of individual agents to manage their own risk increase the chances that other agents in the system will experience distress, thereby creating more systemic risk than the risk they reduced via risk-sharing. Against this backdrop, the highly interconnected core of the global ownership network looms ominously.

To summarize, in the presence of a network, it is not enough to simply identify the big players that have the potential to damage the system should they experience financial distress. Instead, it is crucial to analyze the network of codependency. The phrase "too connected to fail" captures this focus. However, for this approach to be implemented, a full-blown network analysis is required. Insights can only be gained by simulating the dynamics of such a system on its underlying network structure. For instance, one cannot calculate analytically the threshold of connectivity past which diversification has a destabilizing effect.

Still, there is a final step that can be taken in analyzing systemic risk in networks. Next to "too big to fail" (which focuses on the nodes) and "too connected to fail" (which incorporates the links), a third layer can be added by utilizing a more sophisticated network measure called "centrality." In a nutshell, a node's centrality simply depends on its neighbors' centrality. For example, PageRank, the algorithm that Google uses to rank websites in its search-engine results, is a centrality measure. A webpage is more important if other important webpages link to it. Recall also that the methodology for computing the degree of influence discussed in the previous section is another example of a centrality measure.
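To make this self-referential definition concrete, here is a minimal power-iteration sketch of a PageRank-style centrality; the damping factor and tolerance are illustrative defaults, not parameters taken from the studies discussed here.

```python
import numpy as np

def pagerank_centrality(adj, damping=0.85, tol=1e-10):
    """adj[i, j] = 1 if node i links to node j (e.g., i holds shares in j)."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; dangling nodes link uniformly to all.
    T = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    c = np.full(n, 1.0 / n)
    while True:
        # A node's centrality is a weighted sum of its neighbors' centrality.
        c_new = (1 - damping) / n + damping * (T.T @ c)
        if np.abs(c_new - c).sum() < tol:
            return c_new
        c = c_new
```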

A study focusing on this "too central to fail" notion of systemic risk has been conducted (Battiston et al. 2012b). The work employed previously confidential data on the 2008 crisis gathered by the US Federal Reserve to assess systemic risk as part of the Fed's emergency loans program. Inspired by the methodology behind the computation of shareholder influence and PageRank, a novel centrality measure for tracking systemic risk, called DebtRank, is introduced.

In the study, debt data from the Fed is augmented with the ownership data used in the analysis of the network of global corporate control. As mentioned, the ownership network is a valid proxy for the undisclosed financial network linking banks. The data also includes detailed information on daily balance sheets for 407 institutions that, together, received bailout funds worth $1.2 trillion from the Fed. The data covers 1,000 days from before, during and after the peak of the crisis, from August 2007 to June 2010. The study focuses on the 22 banks that collectively received three-quarters of that bailout money. It is interesting to observe that almost all of these banks were members of the "super-entity."

DebtRank computes the likelihood that a bank will default as well as how much this would damage the creditworthiness of the other banks in the network. In essence, the measure extends the notion of default contagion into that of distress propagation. Crucially, DebtRank proposes a quantitative method for monitoring institutions in a network and identifying the ones that are the most important for the stability of the system.
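The sketch below conveys the distress-propagation idea behind such a measure; the exposure convention, variable names and stopping rule are assumptions made for illustration, so it should be read as a toy version rather than the authors' implementation.

```python
import numpy as np

def debtrank(exposure, equity, value, shocked, psi=1.0):
    """Fraction of total economic value put at risk, beyond the initial shock,
    when the banks in `shocked` start out distressed by a fraction `psi`.

    exposure[i, j]: amount bank i has lent to bank j (i loses if j is distressed)
    equity[i]:      equity of bank i
    value[i]:       relative economic value of bank i (sums to one)
    All numeric inputs are NumPy arrays; `shocked` is a list of node indices.
    """
    n = len(equity)
    # impact[j, i]: relative equity loss of i if j becomes fully distressed
    impact = np.minimum(1.0, exposure / equity[:, None]).T

    h = np.zeros(n)                 # distress level of each bank, in [0, 1]
    h[shocked] = psi
    state = np.full(n, "U")         # U = undistressed, D = distressed, I = inactive
    state[shocked] = "D"

    while np.any(state == "D"):
        distressed = np.where(state == "D")[0]
        # Propagate distress once from the currently distressed banks.
        h_new = np.minimum(1.0, h + impact[distressed].T @ h[distressed])
        newly_hit = (state == "U") & (h_new > h)
        state[distressed] = "I"     # each bank propagates distress only once
        state[newly_hit] = "D"
        h = h_new

    # Value affected beyond the initial shock, as a fraction of total value.
    return float(h @ value - psi * value[shocked].sum())
```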

Figure 2 shows an "X-ray image" of the global financial crisis unfolding. It is striking to observe how many of the major players are affected and how some individual institutions threaten the majority of the economic value in the network (a DebtRank value larger than 0.5). Indeed, if a bank with a DebtRank value close to one defaults, it could potentially obliterate the economic value of the entire system. And, finally, the issue of "too central to fail" becomes dauntingly visible: Even institutions with relatively small asset size can become fragile and threaten a large part of the economy. The condition for this to happen is given by the position in the network as measured by the centrality.

In a forthcoming publication (Battiston et al. 2015), the notion of DebtRank is re-expressed making use of the more common notion of leverage, defined as the ratio between an institution's assets and equity. From this starting point, the authors develop a stress-test framework that allows the computation of a whole set of systemic risk measures. Again, since detailed data on the bilateral exposures between financial institutions is not publicly available, the true architecture of the financial network cannot be observed. In order to overcome this problem, the framework utilizes Monte Carlo samples of networks with realistic topologies (i.e., network realizations that match the aggregate level of interbank exposure for each financial institution).

As an illustrative exercise, the authors run the framework on a set of European banks, with empirical data on aggregated interbank lending and borrowing volumes obtained from Bankscope, covering 183 EU banks. The interbank network is reconstructed for the years 2008 to 2013 using the so-called fitness model. Importantly, the attention is placed not only on first-round effects of an initial shock, but also on the subsequent additional rounds of reverberations within the interbank network. A crucial result is given by the following relation:
L(2) = l^b S, (1.2)
where L(2) represents the total relative equity loss of the second round of distress propagation induced by the initial shock S, and with l^b > 0 being the weighted average of the interbank leverage. In other words, l^b is derived from the interbank assets and equity. In detail, S is computed from the unit shock on the value of external assets and the external leverage, that is, from the leverage related to the assets that do not originate from within the interbank system.

Equation (1.2) implies the highly undesirable conclusion that the second-round effect of distress propagation is at least as detrimental as the initial shock. This result highlights the important fact that waves of financial distress ripple multiple times through the network in a way that intensifies the problem for the individual nodes. This mechanism only truly becomes visible in a network analysis of the system. In empirical terms, this result is also compelling, as levels of interbank leverage are often around a value of two. In this light, the distress in the second round can be twice as big as the initial distress on the external assets. To conclude, neglecting second-round effects could therefore lead to a severe underestimation of systemic risk.
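As a back-of-the-envelope illustration of this arithmetic, assuming purely hypothetical figures (a 1 percent shock to external assets, an external leverage of 20 and an interbank leverage of 2):

```python
external_leverage = 20.0    # external assets / equity (assumed for illustration)
interbank_leverage = 2.0    # interbank assets / equity, i.e., l^b
asset_shock = 0.01          # relative loss on the value of external assets

S = asset_shock * external_leverage   # first-round relative equity loss
L2 = interbank_leverage * S           # second-round loss, equation (1.2)
print(S, L2)                          # 0.2 0.4 -- the second round doubles the hit
```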

Outlook for policy-making

What is the added value of trying to understand the economy as an interconnected complex system? The most important result to mention in this context is the power of such analysis to uncover hidden features that would otherwise go undetected. Stated simply, the intractable complexity of financial systems can be decoded and understood by unraveling the underlying network.

A prime example of a network analysis uncovering unsuspected latent features is the detection of the tiny, but highly interconnected core of powerful actors in the global ownership network. It is a novel finding that the most influential companies do not conduct their business in isolation, but rather are entangled in an extremely intricate web of control. Notably, the very existence of such a small, powerful and self-controlled group of financial institutions had gone unsuspected in the economics literature. Indeed, its existence is in stark contrast with many theories on corporate governance (see, e.g., Dore 2002).

However, understanding the structure of interaction in a complex system is only the first step. Once the underlying network architecture is made visible, the resulting dynamics of such systems can be analyzed. Recall that distress spreads through the network like an epidemic, infecting one node after another. In other words, the true understanding of the notion of systemic risk in a financial setting crucially relies on the knowledge of this propagation mechanism, which again is determined by the network topology. As discussed above, in a real-world setting in which feedback loops can act as amplifiers, the second-round effect of an initial shock is at least as big as the initial impact. It should be noted that the notorious "bank stress tests" also aim at assessing such risks. More specifically, these tests analyze whether, under unfavorable economic scenarios, banks have enough capital to withstand the impact of adverse developments. Unfortunately, while commendable, these efforts only emphasize first-round effects and therefore potentially underestimate the true dangers to a significant degree. A recent example is the Comprehensive Assessment conducted by the European Central Bank in 2014, which included the Asset Quality Review.

A first obvious application of the knowledge derived from a complex-systems approach to finance and economics is related to monitoring the health of the system. For instance, DebtRank allows systemic risk to be measured along two dimensions: the potential impact of an institution on the whole system as well as the vulnerability of an institution exposed to the distress of others. This identifies the most dangerous culprits, namely, institutions with both high vulnerability and impact. In Figure 3, the whole extent of the financial crisis becomes apparent, as high vulnerability was indeed compounded with high impact in 2008. In 2013, high vulnerability was offset by relatively low impact.

In addition to analyzing the health of the financial system at the level of individual actors, an index could be constructed that incorporates and aggregates the many facets of systemic risk. In this case, sectors and countries could also be scrutinized. A final goal would be the implementation of forecasting techniques. What probable trajectories leading into crisis emerge from the current state of the system? As Haldane (2011) noted in contemplating the idea of forecasting economic turbulence:

It would allow regulators to issue the equivalent of weather-warnings -- storms brewing over Lehman Brothers, credit default swaps and Greece. It would enable advice to be issued -- keep a safe distance from Bear Stearns, sub-prime mortgages and Icelandic banks. And it would enable "what-if?" simulations to be run -- if UK bank Northern Rock is the first domino, what will be the next?

In essence, a data- and complex systems-driven approach to finance and economics has the power to comprehensively assess the true state of the system. This offers crucial information to policymakers. By shedding light on previously invisible vulnerabilities inherent in our interconnected economic world, the blindfolds of ignorance can be removed, paving the way to policies that effectively mitigate systemic risk and avert future global crises.


References and Figures

 —  —  — 

This was a chapter contribution to “To the Man with a Hammer: Augmenting the Policymaker’s Toolbox for a Complex World”, Bertelsmann Stiftung, 2016:
This article collection helps point the way forward. Gathering a distinguished panel of complexity experts and policy innovators, it provides concrete examples of promising insights and tools, drawing from complexity science, the digital revolution and interdisciplinary approaches.

 —  —  — 

See also "Ökonomie neu denken", February 16, 2016, Frankfurt am Main, and the panel discussion.