Thursday, December 1, 2016

What is Real?

am I even real?
Met the wonderful Lucy Hawking at TEDxSalford by chance (Science and Storytelling, The Consciousness of Reality). This led to an amazing opportunity allowing me to contribute a science essay to her newest children's book:

George and the Blue Moon
Lucy and Stephen Hawking
Penguin, 2016

Staying true to my little hobby, it was called:

What is Reality?

And it started like this:
Every day you wake up. Returning from the wonderful adventures you may have been having in your dreams, you become you again. The memories of who you are and what you have been up to in your life come back. And you also realize that there is a world that lies outside of yourself, simply called reality. Then you get up.

This all seems very ordinary and not very exciting. However, all of this is linked to the hardest question that humans have ever asked themselves: What exactly is reality? What is this thing, made up of space, time and objects, we live in?

And ended like this:
But for the moment we can comfort ourselves with two answers to the question, ‘What is reality?’

One is that reality is a much bigger, richer and more complex thing than we ever dared to dream.

Or a short answer could be, ‘I create my reality!’

Thursday, November 24, 2016

creativity

reggie watts is a genius
Some people are just insanely creative. We already met Beardyman. Reggie Watts is equipped with a similar set of skills. His website labels him as a vocal artist, beatboxer, musician, and comedian. His performances are random and often improvised sequences of live looping and colorful vocal outbursts, comprising sounds, noises, and narration in various accents and (mock) languages.


To appreciate how his talent affects people, one can read the remarkable compliments in the comment sections of the YouTube videos of his performances, a refreshing, albeit rare, contrast to the hatred and animosity mostly encountered there:
  • "Reggie Watts is my favorite human."
  • "He really is light years ahead of the rest of us humans. I love him. He is a genius."
  • "What planet did he come from? We don't deserve this kind of being. We're not worthy."
  • "This man is light-years ahead of his time."
  • "Absolutely no clue what was happening but brilliant"
  • "What a fucking genius..."
  • "I hereby nominate Reggie Watts to be Ambassador of Earth, and be first to make contact with aliens should they visit."
  • "Reggie is at the end of my rainbow!"
  • "I don't even laugh when watching Reggie anymore. I just admire him."



One recurring theme is Watts speaking with a British accent reminiscent of a university professor, talking abstract nonsense. Or is it?
"And the important thing to remember is that this simulation is a good one. It's believable, it's tactile. You can reach out -- things are solid. You can move objects from one area to another. You can feel your body. You can say, 'I'd like to go over to this location,' and you can move this mass of molecules through the air over to another location, at will."


"Now, we know that everything here is an illusion and that we are somewhere else. But the cool thing about that is, it feels pretty real. I mean -- you know what I mean? Like, it's pretty convincing. So, big credit to those people working hard there."


Why should you consider reality to be a simulation/illusion? Well...


some stuff

on being idiosyncratic
next to my love of science (see all the boring stuff;)




I really enjoy

  • snowboarding (nearly 30 years), climbing (23 years), surfing (20 years plus), and skateboarding (1/2 year)



  • traveling

  • electronic music and related parties/festivals





and then (in random order)


I constantly have to wonder about the existence of my own mind, the conscious experience it gives me of an external reality, and what this all could possibly mean (yeah, book project). 


I also like to be highly critical of the socio-cultural environment I was born into and from there move on to being critical of other ones (rant and rant). I am highly skeptical of our financial systems (faults and greed).


I like to question myself and my ideas/beliefs.


I try to put myself into other people's shoes, as I believe I would be that same person, given the same biography and brain chemistry/hard-wiring.


I am an irrational optimist. although I see, in my opinion, so many things that are so terribly and depressingly wrong all over the world, I try to keep my faith (this here).


I get inspired by a spiritual outlook on life that seeks happiness and wisdom within oneself and allows for the existence of other realms of "reality" outside space and time (e.g., Buddhism and certain esoteric ideas). I totally and fundamentally reject institutionalized theologies. does the term "spiritual atheism" make any sense?


I had been vegetarian for 12 years before turning vegan (as best as I can) 4 years ago. why? environmental, ethical, and health considerations (once I get around to it, this will be a long and heavily referenced piece).


I aim at remaining grateful for experiencing this stream of consciousness, regardless of its contents.


I try to resist the urge to be cynical as fuck as much as I can (e.g., while interacting with crackpots in news groups or discussing climate change).


I am deeply thankful to all the loved ones in my life, especially my wife, who make this journey so much more fun <3




Monday, May 30, 2016

swimming in the sea of knowledge

we live in truly interesting times
We take one of the most amazing and far-reaching achievements in recent times for granted: free access to knowledge.

The advent of user-generated content, the so-called Web 2.0, has enabled initiatives like Wikipedia to assemble an unfathomable amount of human knowledge --- at your fingertips. The Google Books Project has scanned and digitized millions of books, making them searchable on-line.

Google Scholar is a search engine accessing countless published scholarly articles. Many publications nowadays are open access and often working papers or preprints are available (like arxiv.org, biorxiv.org, ssrn.com). If this isn't enough, "Alexandra Elbakyan, a researcher from Kazakhstan, created Sci-Hub, a website that bypasses journal paywalls, illegally providing access to nearly every scientific paper ever published immediately to anyone who wants it" (src). Obviously, this results in a cat-and-mouse game:
  • http://sci-hub.io/
  • http://sci-hub.bz/
  • ...
  • TOR scihub22266oqcxt.onion
But access alone is not enough. The sheer amount of information is mind-blowing. So, how can one navigate this sea of knowledge without drowning?

Enter YouTube, or rather its content creators. There exists a multitude of channels featuring videos aimed at explaining countless topics from science to philosophy. But crucially, this is done in an entertaining and/or visually appealing manner. Some of my favorites are: Kurzgesagt – In a Nutshell, CrashCourse, Vsauce, Veritasium, MinutePhysics or one of the channels of Brady Haran (list).

And, last but not least, TED and TEDx talks entertain "ideas worth spreading". In other words, personal insights from people working at the cutting edge of current knowledge or simply talks packed with inspiration.

This all means that you have a nearly inexhaustible treasure trove of knowledge at your free disposal, broken down into piecemeal units, ready for instant education.

Enjoy:)

Thursday, May 26, 2016

more random quotes: scott aaronson

new perspectives
So, John Horgan, the End of Science guy, interviewed Scott Aaronson, a theoretical computer scientist interested in quantum computing and computational complexity theory.

In the following, some random quotes.

On Quantum Mechanics

    [Q]uantum mechanics is astonishingly simple—once you take the physics out of it!  In fact, QM isn’t even “physics” in the usual sense: it’s more like an operating system that the rest of physics runs on as application software.

    [A]ccepting quantum mechanics didn’t mean giving up on the computational worldview: it meant upgrading it, making it richer than before.  There was a programming language fundamentally stronger than BASIC, or Pascal, or C—at least with regard to what it let you compute in reasonable amounts of time.  And yet this quantum language had clear rules of its own; there were things that not even it let you do (and one could prove that); it still wasn’t anything-goes. 


The Computational Universe

    If it’s worthwhile to build the LHC or LIGO—wonderful machines that so far, have mostly triumphantly confirmed our existing theories—then it seems at least as worthwhile to build a scalable quantum computer, and thereby prove that our universe really does have this immense computational power beneath the surface. 

    Firstly, quantum computing has supplied probably the clearest language ever invented—namely, the language of qubits, quantum circuits, and so on—for talking about quantum mechanics itself.
[...]
Secondly, one of the most important things we’ve learned about quantum gravity—which emerged from the work of Stephen Hawking and the late Jacob Bekenstein in the 1970s—is that in quantum gravity, unlike in any previous physical theory, the total number of bits (or actually qubits) that can be stored in a bounded region of space is finite rather than infinite.  In fact, a black hole is the densest hard disk allowed by the laws of physics, and it stores a “mere” 10^69 qubits per square meter of its event horizon!  And because of the dark energy (the thing, discovered in 1998, that’s pushing the galaxies apart at an exponential rate), the number of qubits that can be stored in our entire observable universe appears to be at most about 10^122.
[...]
So, that immediately suggests a picture of the universe, at the Planck scale of 10^-33 meters or 10^-43 seconds, as this huge but finite collection of qubits being acted upon by quantum logic gates—in other words, as a giant quantum computation. 

The Big Picture

    Ideas from quantum computing and quantum information have recently entered the study of the black hole information problem—i.e., the question of how information can come out of a black hole, as it needs to for the ultimate laws of physics to be time-reversible.  Related to that, quantum computing ideas have been showing up in the study of the so-called AdS/CFT (anti de Sitter / conformal field theory) correspondence, which relates completely different-looking theories in different numbers of dimensions, and which some people consider the most important thing to have come out of string theory. 

    [S]ome of the conceptual problems of quantum gravity turn out to involve my own field of computational complexity in a surprisingly nontrivial way.  The connection was first made in 2013, in a remarkable paper by Daniel Harlow and Patrick Hayden.  Harlow and Hayden were addressing the so-called “firewall paradox,” which had lit the theoretical physics world on fire (har, har) over the previous year.

    In summary, I predict that ideas from quantum information and computation will be helpful—and possibly even essential—for continued progress on the conceptual puzzles of quantum gravity. 


    If civilization lasts long enough, then there’s absolutely no reason why there couldn’t be further discoveries about the natural world as fundamental as relativity or evolution. One possible example would be an experimentally-confirmed theory of a discrete structure underlying space and time, which the black-hole entropy gives us some reason to suspect is there. 

P/NP

    [T]he ocean of mathematical understanding just keeps monotonically rising, and we’ve seen it reach peaks like Fermat’s Last Theorem that had once been synonyms for hopelessness.  I see absolutely no reason why the same ocean can’t someday swallow P vs. NP, provided our civilization lasts long enough.  In fact, whether our civilization will last long enough is by far my biggest uncertainty. 

    More seriously, it was realized in the 1970s that techniques borrowed from mathematical logic—the ones that Gödel and Turing wielded to such great effect in the 1930s—can’t possibly work, by themselves, to resolve P vs. NP.  Then, in the 1980s, there were some spectacular successes, using techniques from combinatorics, to prove limitations on restricted types of algorithms.  Some experts felt that a proof of P≠NP was right around the corner.  But in the 1990s, Alexander Razborov and Steven Rudich discovered something mind-blowing: that the combinatorial techniques from the 1980s, if pushed just slightly further, would start “biting themselves in the rear end,” and would prove NP problems to be easier at the same time they were proving them to be harder!  Since it’s no good to have a proof that also proves the opposite of what it set out to prove, new ideas were again needed to break the impasse. 


Musings

    This characteristic of quantum mechanics—the way it stakes out an “intermediate zone,” where (for example) n qubits are stronger than n classical bits, but weaker than 2^n classical bits, and where entanglement is stronger than classical correlation, but weaker than classical communication—is so weird and subtle that no science-fiction writer would have had the imagination to invent it.  But to me, that’s what makes quantum information interesting: that this isn’t a resource that fits our pre-existing categories, that we need to approach it as a genuinely new thing. 

    [I]f scanning my brain state, duplicating it like computer software, etc. were somehow shown to be fundamentally impossible, then I don’t know what more science could possibly say in favor of “free will being real”!


    I hate when the people in power are ones who just go with their gut, or their faith, or their tribe, or their dialectical materialism, and who don’t even feel self-conscious about the lack of error-correcting machinery in their methods for learning about the world.

    Just in the fields that I know something about, NP-completeness, public-key cryptography, Shor’s algorithm, the dark energy, the Hawking-Bekenstein entropy of black holes, and holographic dualities are six examples of fundamental discoveries from the 1970s to the 1990s that seem able to hold their heads high against almost anything discovered earlier (if not quite relativity or evolution).

Wednesday, February 17, 2016

Decoding Financial Networks: Hidden Dangers and Effective Policies 


Two changes have ushered in a new era of analyzing the complex and interdependent world surrounding us. One is related to the increased influx of data, furnishing the raw material for this revolution that is now starting to impact economic thinking. The second change is due to a subtler reason: a paradigm shift in the analysis of complex systems.

The buzzword "big data" is slowly being replaced by what is becoming established as "data science." While the cost of computer storage is continually falling, storage capacity is increasing at an exponential rate. In effect, seemingly endless streams of data, originating from countless human endeavors, are continually flowing along global information superhighways and being stored not only in server farms and the cloud, but -- importantly -- also in the researcher's local databases. However, collecting and storing raw data is futile if there is no way to extract meaningful information from it. Here, the budding science of complex systems is helping distill meaning from this data deluge.

Traditional problem-solving has been strongly shaped by the success of the reductionist approach taken in science. Put in the simplest terms, the focus has traditionally been on things in isolation -- on the tangible, the tractable, the malleable. But not so long ago, this focus shifted to a subtler dimension of our reality, where the isolation is overcome. Indeed, seemingly single and independent entities are always components of larger units of organization and hence influence each other. Our world, while still being comprised of many of the same "things" as in the past, has become highly networked and interdependent -- and, therefore, much more complex. From the interaction of independent entities, the notion of a system has emerged.

Understanding the structure of a system's components does not bring insights into how the system will behave as a whole. Indeed, the very concept of emergence fundamentally challenges our knowledge of complex systems, as self-organization allows for novel properties -- features not previously observed in the system or its components -- to unfold. The whole is literally more than the sum of its parts.

This shift away from analyzing the structure of "things" to analyzing their patterns of interaction represents a true paradigm shift, and one that has impacted computer science, biology, physics and sociology. The need to bring about such a shift in economics, too, can be heard in the words of Andy Haldane, chief economist at the Bank of England (Haldane 2011):
Economics has always been desperate to burnish its scientific credentials and this meant grounding it in the decisions of individual people. By itself, that was not the mistake. The mistake came in thinking the behavior of the system was just an aggregated version of the behavior of the individual. Almost by definition, complex systems do not behave like this. [...] Interactions between agents are what matters. And the key to that is to explore the underlying architecture of the network, not the behavior of any one node.

In a nutshell, the key to the success of complexity science lies in ignoring the complexity of the components while quantifying the structure of interactions. An ideal abstract representation of a complex system is given by a graph -- a complex network. This field has been emerging in a modern form since about the turn of the millennium (Watts and Strogatz 1998; Barabasi and Albert 1999; Albert and Barabasi 2002; Newman 2003).

Underpinning economics with insights from complex systems requires a major culture change in how economics is conducted. Specialized knowledge needs to be augmented with a diversity of expertise. Or, in the words of Jean-Claude Trichet, former president of the European Central Bank (Trichet 2010):

I would very much welcome inspiration from other disciplines: physics, engineering, psychology, biology. Bringing experts from these fields together with economists and central bankers is potentially very creative and valuable. Scientists have developed sophisticated tools for analyzing complex dynamic systems in a rigorous way.

What's more, scientists themselves have acknowledged this call for action (see, e.g., Schweitzer et al. 2009; Farmer et al. 2012).

In what follows, I will present two case studies that provide an initial glimpse of the potential of applying such a data-driven and network-inspired type of research to economic systems. By uncovering patterns of organization otherwise hidden in the data, these studies caught the attention not only of scholars and the general public, but also of policymakers.

The network of global corporate control

A specific constraint related to the analysis of economic and financial systems lies in an unfortunate relative lack of data. While other fields are flooded with data, in the realm of economics, a lot of potentially valuable information is deemed proprietary and not disclosed for strategic reasons. A viable detour is utilizing a good proxy that is exhaustive and widely available.

Ownership data, representing the percentages of equity a shareholder has in certain companies, is such a dataset. The structure of the ownership network is thought to be a good proxy for that of the financial network (Vitali, Glattfelder and Battiston 2011). However, this is not the main reason for analyzing such a dataset. Ownership networks represent an interface between the fields of economics and complex networks because information on ownership relations crucially unlocks knowledge relating to the global power of corporations. As a matter of fact, ownership gives a certain degree of control to the shareholder. In other words, the signature of corporate control is encoded in these networks (Glattfelder 2013). These and similar issues are also investigated in the field of corporate governance.

Bureau van Dijk's commercial Orbis database comprises about 37 million economic actors (e.g., physical persons, governments, foundations and firms) located in 194 countries as well as roughly 13 million directed and weighted ownership links for the year 2007. In a first step, a cross-country analysis of this ownership snapshot was performed (Glattfelder and Battiston 2009). A key finding was that the more dispersed control was at the local level, the more concentrated it lay globally in the hands of a few powerful shareholders. This is in contrast to the economic idea of "widely held" firms in the United States (Berle and Means 1932). In fact, these results show that the true picture can only be unveiled by considering the whole network of interdependence. By simply focusing on the first level of ownership, one is misled by a mirage.

In a next step, the Orbis data was used to construct the global network of ownership. By focusing on the 43,060 transnational corporations (TNCs) found in the data, a new network was constructed that comprised all the direct and indirect shareholders and subsidiaries of the TNCs. Then, this network of TNCs, containing 600,508 nodes and 1,006,987 links, was further analyzed (Vitali, Glattfelder and Battiston 2011). Figure 1 shows a small sample of the network.

Analyzing the topology of the TNC network reveals the first signs of an organizational principle at work. One can see that the network is actually made up of many interconnected sub-networks that are not connected among themselves. The cumulative distribution function of the size of these connected components follows a power law, as there are 23,824 such components varying in size from many single isolated nodes to a cluster of 230 connected nodes. However, the largest connected component (LCC) represents an outlier in the power-law distribution, as it contains 464,006 nodes and 889,601 links.
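Extracting connected components like these is standard graph traversal. A minimal sketch in Python, run on a hypothetical toy graph rather than the Orbis data:

```python
from collections import deque

def connected_components(adj):
    """BFS over an undirected graph given as {node: set_of_neighbors}."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        comp, queue = [], deque([start])
        while queue:
            u = queue.popleft()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

# Toy graph (hypothetical data): one 3-node cluster plus one isolated node.
adj = {1: {2}, 2: {1, 3}, 3: {2}, 4: set()}
comps = connected_components(adj)
largest = max(comps, key=len)   # the "LCC" of this toy network
```

Applied to the full TNC network, collecting the component sizes this way is what yields the power-law size distribution and singles out the LCC as an outlier.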

This super-cluster contains only 36 percent of all TNCs. In effect, most TNCs "prefer" to be part of isolated components that comprise a few hundred nodes at most. But what can be said about the TNCs in the LCC? By adding a proxy for the value or size of firms, the network analysis can be extended. In the study, the operating revenue was used for the value of firms. Now it is possible to see where the valuable TNCs are located in the network. Strikingly, the 36 percent of TNCs in the LCC account for 94 percent of the total TNC operating revenue. This finding justifies focusing further analysis solely on the LCC.

In general, assigning a value v_j to firm j gives additional meaning to the ownership network. As mentioned, a good proxy reflecting the economic value of a company is the operating revenue. Assigning such a non-topological variable to the nodes uncovers a deeper level of information embedded in the network. If shareholder i holds a fraction W_{ij} of the shares of firm j, W_{ij} v_j represents the value that i holds in j. Accordingly, the portfolio value of firm i is given by
p_i = sum_j W_{ij} v_j, (1.1)
However, in ownership networks, there are also chains of indirect ownership links. For instance, firm i can gain value from firm k via firm j, if i holds shares in j, which, in turn, holds shares in k. Symbolically, this can be denoted as i -> j -> k.
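To make the chain bookkeeping concrete, here is a minimal Python sketch that extends equation (1.1) over chains of any length, i.e., p = W v + W^2 v + W^3 v + ... The three-firm numbers are hypothetical, and summing this geometric series is one common way to integrate indirect ownership, not necessarily the exact methodology of the cited studies:

```python
# Hypothetical example: firm A holds 50% of B, and B holds 40% of C.
firms = ["A", "B", "C"]
W = [[0.0, 0.5, 0.0],   # W[i][j]: fraction of firm j's shares held by firm i
     [0.0, 0.0, 0.4],
     [0.0, 0.0, 0.0]]
v = [10.0, 20.0, 30.0]  # stand-in for operating revenue

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def network_value(W, v, max_chain=50):
    """Value each shareholder holds directly plus via indirect chains."""
    p = [0.0] * len(v)
    term = matvec(W, v)               # chains of length 1 (direct holdings)
    for _ in range(max_chain):
        p = [a + b for a, b in zip(p, term)]
        term = matvec(W, term)        # extend every chain by one more link
    return p

p = network_value(W, v)
```

For firm A this gives 0.5 * 20 (direct stake in B) plus 0.5 * 0.4 * 30 (indirect stake in C via B), i.e., a network value of 16.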

Using these building blocks, and the fact that ownership is related to control, a methodology is introduced that estimates the degree of influence that each agent wields as a result of the network of ownership relations. In other words, a network centrality measure is provided that not only accounts for the structure of the shareholding relations, but -- crucially -- also incorporates the distribution of value. This allows for the top shareholders to be identified. As it turns out, 730 top shareholders have the potential to control 80 percent of the total operating revenue of all TNCs. In effect, this measure of influence is one order of magnitude more concentrated than the distribution of operating revenue. These top shareholders consist of financial institutions located in the United States and the United Kingdom (note that holding many ownership links does not necessarily result in a high value of influence).

Combining these two dimensions of analysis -- that is, the topology and the shareholder ranking -- finally uncovers yet another pattern of organization. A striking feature of the LCC is that it has a tiny but distinct core of 1,318 nodes that are highly interconnected (12,191 links). Analyzing the identity of the firms present in this core reveals that many of them are also top shareholders. Indeed, the 147 most influential shareholders in the core can potentially control 38 percent of the total operating revenue of all TNCs. In other words, a "superentity" with disproportional power is identified in the already powerful core, akin to a fractal structure.

This emerging power structure in the global ownership network has possible negative implications. For instance, as will be discussed in the next section, global systemic risk is sensitive to the connectivity of the network (Battiston et al. 2007; Lorenz and Battiston 2008; Wagner 2009; Stiglitz 2010; Battiston et al. 2012a). Moreover, global market competition is threatened by potential collusion (O'Brien and Salop 2001; Gilo, Moshe and Spiegel 2006).

Subjecting a comprehensive global economic dataset to a detailed network analysis has the power to unveil organizational patterns that have previously gone undetected. Although the exact numbers in the study should be taken with a grain of salt, they still give a good first approximation. For instance, the very different methods that can be used to estimate control from ownership all provide very similar aggregated network statistics.

Finally, although it cannot be proved that the top influencers actually exert their power or are able to leverage their privileged position, it is also impossible to rule out such activities -- especially since these channels for relaying power can be utilized in a covert manner. In any case, the degree of influence assigned to the shareholders can be understood as the probability of achieving one's own interest against the opposition of the other actors -- a notion reminiscent of Max Weber's idea of potential power (Weber 1978).

An ongoing research effort aims to extend this analysis to include additional annual snapshots of the global ownership network up to 2012. The focus now lies on the dynamics and evolution of the network. In particular, the stability of the core over time will be analyzed. Preliminary results on a small subset of the data suggest that the structure of the core is indeed stable. If verified, this would imply that the emergent power structure is resilient to forces reshaping the network architecture, such as the global financial crisis. The structure could also potentially be resistant to market reforms and regulatory efforts.

DebtRank

In an interconnected system, the notion of risk can assume many guises. The simplest and most obvious manifestation is that of individual risk. The colloquialism "too big to fail" captures the promise that further disaster can be averted by identifying and assisting the major players. This approach, however, does not work in a network. In systems where the agents are connected and therefore codependent, the relevant measure is systemic risk. Only by understanding the architecture of the network's connectivity can the propagation of financial distress through the system be understood. In essence, systemic risk is akin to the process of an epidemic spreading through a population.

A naive intuition would suggest that by increasing the interconnectivity of the system, the threat of systemic risk is reduced. In other words, the overall system should be more resilient when agents diversify their individual risks by increasing the shared links with other agents. Unfortunately, this can be shown to be false (Battiston et al. 2012a). Granted, in systems with feedback loops, such as financial systems, initial individual risk diversification can indeed start off by reducing systemic risk. However, there is a threshold related to the level of connectivity, and once it has been reached, any additional diversification effort will only result in increased systemic risk. Above this certain value, feedback loops and amplifications can lead to a knife-edge property, in which case stability is suddenly compromised.

Now a paradox emerges: Although individual financial agents become more resistant to shocks coming from their own business, the overall probability of failure in the system increases. In the worst-case scenario, the efforts of individual agents to manage their own risk increase the chances that other agents in the system will experience distress, thereby creating more systemic risk than the risk they reduced via risk-sharing. Against this backdrop, the highly interconnected core of the global ownership network looms ominously.

To summarize, in the presence of a network, it is not enough to simply identify the big players that have the potential to damage the system should they experience financial distress. Instead, it is crucial to analyze the network of codependency. The phrase "too connected to fail" captures this focus. However, for this approach to be implemented, a full-blown network analysis is required. Insights can only be gained by simulating the dynamics of such a system on its underlying network structure. For instance, one cannot calculate analytically the threshold of connectivity past which diversification has a destabilizing effect.

Still, there is a final step that can be taken in analyzing systemic risk in networks. Next to "too big to fail" (which focuses on the nodes) and "too connected to fail" (which incorporates the links), a third layer can be added by utilizing a more sophisticated network measure called "centrality." In a nutshell, a node's centrality simply depends on its neighbors' centrality. For example, PageRank, the algorithm that Google uses to rank websites in its search-engine results, is a centrality measure. A webpage is more important if other important webpages link to it. Recall also that the methodology for computing the degree of influence that was discussed in the previous section is another example of centrality.
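The idea behind PageRank fits in a few lines of Python. This is the textbook power-iteration form on a hypothetical three-page toy graph, not Google's production algorithm:

```python
def pagerank(links, d=0.85, iters=100):
    """Minimal power-iteration PageRank; links: {node: list of outgoing links}."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:                          # split rank among outgoing links
                share = d * rank[u] / len(out)
                for w in out:
                    new[w] += share
            else:                            # dangling node: spread rank evenly
                for w in nodes:
                    new[w] += d * rank[u] / n
        rank = new
    return rank

# Toy web (hypothetical): two pages link to "hub", which links back to both.
r = pagerank({"hub": ["a", "b"], "a": ["hub"], "b": ["hub"]})
```

The "hub" page ends up with the highest rank precisely because important pages point to it, which is the self-referential property ("a node's centrality depends on its neighbors' centrality") shared by the shareholder-influence measure and by DebtRank.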

A study focusing on this "too central to fail" notion of systemic risk has been conducted (Battiston et al. 2012b). The work employed previously confidential data on the 2008 crisis gathered by the US Federal Reserve to assess systemic risk as part of the Fed's emergency loans program. Inspired by the methodology behind the computation of shareholder influence and PageRank, a novel centrality measure for tracking systemic risk, called DebtRank, is introduced.

In the study, debt data from the Fed is augmented with the ownership data used in the analysis of the network of global corporate control. As mentioned, the ownership network is a valid proxy for the undisclosed financial network linking banks. The data also includes detailed information on daily balance sheets for 407 institutions that, together, received bailout funds worth $1.2 trillion from the Fed. The data covers 1,000 days from before, during and after the peak of the crisis, from August 2007 to June 2010. The study focuses on the 22 banks that collectively received three-quarters of that bailout money. It is interesting to observe that almost all of these banks were members of the "super-entity."

DebtRank computes the likelihood that a bank will default as well as how much this would damage the creditworthiness of the other banks in the network. In essence, the measure extends the notion of default contagion into that of distress propagation. Crucially, DebtRank proposes a quantitative method for monitoring institutions in a network and identifying the ones that are the most important for the stability of the system.
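The following toy implementation sketches the core mechanics of DebtRank as described above: nodes pass from undistressed to distressed to inactive, so each institution propagates its distress exactly once and the reverberation is guaranteed to terminate. The impact matrix, node values, and example network shown here are simplified illustrations, not the exact specification of Battiston et al. (2012b).

```python
import numpy as np

def debtrank_sketch(W, v, seed, psi=1.0):
    """Toy DebtRank-style distress propagation.

    W[i, j] : fraction of j's equity lost if i defaults (zero diagonal).
    v       : relative economic value of each node (sums to one).
    seed    : index of the initially distressed institution.
    psi     : relative size of the initial shock, in (0, 1].

    Nodes move Undistressed -> Distressed -> Inactive; a node spreads
    its distress exactly once, so the dynamics stop. Returns the
    value-weighted distress induced in the rest of the network,
    excluding the seed's own initial shock.
    """
    n = W.shape[0]
    h = np.zeros(n)            # distress level of each node, in [0, 1]
    h[seed] = psi
    state = np.array(['U'] * n)
    state[seed] = 'D'
    while (state == 'D').any():
        distressed = np.where(state == 'D')[0]
        h_new = h.copy()
        for j in range(n):
            if state[j] != 'I':
                h_new[j] = min(1.0, h[j] + W[distressed, j] @ h[distressed])
        for j in range(n):
            if state[j] == 'U' and h_new[j] > 0:
                state[j] = 'D'
        state[distressed] = 'I'
        h = h_new
    return h @ v - psi * v[seed]

# A chain of exposures: bank 0's default distresses bank 1,
# which in turn distresses bank 2.
W = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
v = np.ones(3) / 3
induced_distress = debtrank_sketch(W, v, seed=0)
```

Running the sketch on the three-bank chain shows distress decaying along the exposures while still damaging institutions that have no direct link to the defaulting bank.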

Figure 2 shows an "X-ray image" of the global financial crisis unfolding. It is striking to observe how many of the major players are affected and how some individual institutions threaten the majority of the economic value in the network (a DebtRank value larger than 0.5). Indeed, if a bank with a DebtRank value close to one defaults, it could potentially obliterate the economic value of the entire system. And, finally, the issue of "too central to fail" becomes dauntingly visible: Even institutions with relatively small asset size can become fragile and threaten a large part of the economy. The condition for this to happen is given by the position in the network as measured by the centrality.

In a forthcoming publication (Battiston et al. 2015), the notion of DebtRank is re-expressed in terms of the more common notion of leverage, defined as the ratio between an institution's assets and equity. From this starting point, the authors develop a stress-test framework that allows the computation of a whole set of systemic risk measures. Again, since detailed data on the bilateral exposures between financial institutions is not publicly available, the true architecture of the financial network cannot be observed. In order to overcome this problem, the framework utilizes Monte Carlo samples of networks with realistic topologies (i.e., network realizations that match the aggregate level of interbank exposure for each financial institution).

As an illustrative exercise, the authors run the framework on a set of European banks, with empirical data on aggregated interbank lending and borrowing volumes obtained from Bankscope, covering 183 EU banks. The interbank network is reconstructed for the years 2008 to 2013 using the so-called fitness model. Importantly, attention is placed not only on the first-round effects of an initial shock, but also on the subsequent rounds of reverberation within the interbank network. A crucial result is given by the following relation:
L(2) = l^b S, (1.2)
where L(2) represents the total relative equity loss of the second round of distress propagation induced by the initial shock S, and with l^b > 0 being the weighted average of the interbank leverage. In other words, l^b is derived from the interbank assets and equity. In detail, S is computed from the unit shock on the value of external assets and the external leverage, that is, from the leverage related to the assets that do not originate from within the interbanking system.

Equation (1.2) has the highly undesirable implication that the second-round effect of distress propagation is at least as detrimental as the initial shock itself. This highlights the important fact that waves of financial distress ripple multiple times through the network in a way that intensifies the problem for the individual nodes, a mechanism that only truly becomes visible in a network analysis of the system. The result is also empirically compelling, as levels of interbank leverage are often around a value of two. In this light, the distress in the second round can be twice as big as the initial distress on the external assets. Neglecting second-round effects could therefore lead to a severe underestimation of systemic risk.
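A back-of-envelope calculation illustrates the relation. All balance-sheet figures below are made up for illustration; only the interbank-leverage value of two mirrors the empirical levels mentioned above.

```python
# Illustrative (made-up) balance-sheet figures for a single bank:
external_assets = 100.0   # assets held outside the interbank system
interbank_assets = 20.0   # claims on other banks
equity = 10.0

external_leverage = external_assets / equity    # 10.0
interbank_leverage = interbank_assets / equity  # 2.0, a typical level

# A 1% drop in the value of external assets...
shock = 0.01
# ...translates into a first-round relative equity loss of:
S = external_leverage * shock          # 0.1, i.e. 10% of equity
# Equation (1.2): the second round of distress propagation adds
L2 = interbank_leverage * S            # 0.2, i.e. 20% of equity

# With interbank leverage around two, the second round is twice
# as damaging as the initial shock itself.
assert L2 == 2 * S
```

Stopping the analysis after the first round would thus report only a third of the combined first- and second-round loss.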

Outlook for policy-making

What is the added value of trying to understand the economy as an interconnected complex system? The most important result to mention in this context is the power of such analysis to uncover hidden features that would otherwise go undetected. Stated simply, the intractable complexity of financial systems can be decoded and understood by unraveling the underlying network.

A prime example of a network analysis uncovering unsuspected latent features is the detection of the tiny but highly interconnected core of powerful actors in the global ownership network. It is a novel finding that the most influential companies do not conduct their business in isolation, but rather are entangled in an extremely intricate web of control. Notably, the very existence of such a small, powerful and self-controlled group of financial institutions was unsuspected in the economics literature; indeed, it stands in stark contrast with many theories on corporate governance (see, e.g., Dore 2002).

However, understanding the structure of interaction in a complex system is only the first step. Once the underlying network architecture is made visible, the resulting dynamics of such systems can be analyzed. Recall that distress spreads through the network like an epidemic, infecting one node after another. In other words, a true understanding of the notion of systemic risk in a financial setting crucially relies on knowledge of this propagation mechanism, which in turn is determined by the network topology. As discussed above, in a real-world setting in which feedback loops can act as amplifiers, the second-round effect of an initial shock can be at least as big as the initial impact. The notorious "bank stress tests" also aim at assessing such risks: more specifically, they analyze whether, under unfavorable economic scenarios, banks have enough capital to withstand the impact of adverse developments. Unfortunately, while commendable, these efforts only capture first-round effects and therefore potentially underestimate the true dangers to a significant degree. A recent example is the Comprehensive Assessment conducted by the European Central Bank in 2014, which included the Asset Quality Review.

A first obvious application of the knowledge derived from a complex-systems approach to finance and economics is related to monitoring the health of the system. For instance, DebtRank allows systemic risk to be measured along two dimensions: the potential impact of an institution on the whole system as well as the vulnerability of an institution exposed to the distress of others. This identifies the most dangerous culprits, namely, institutions with both high vulnerability and impact. In Figure 3, the whole extent of the financial crisis becomes apparent, as high vulnerability was indeed compounded with high impact in 2008. In 2013, high vulnerability was offset by relatively low impact.

In addition to analyzing the health of the financial system at the level of individual actors, an index could be constructed that incorporates and aggregates the many facets of systemic risk. In this case, sectors and countries could also be scrutinized. A final goal would be the implementation of forecasting techniques. What probable trajectories leading into crisis emerge from the current state of the system? As Haldane (2011) noted in contemplating the idea of forecasting economic turbulence:

It would allow regulators to issue the equivalent of weather-warnings -- storms brewing over Lehman Brothers, credit default swaps and Greece. It would enable advice to be issued -- keep a safe distance from Bear Stearns, sub-prime mortgages and Icelandic banks. And it would enable "what-if?" simulations to be run -- if UK bank Northern Rock is the first domino, what will be the next?

In essence, a data- and complex systems-driven approach to finance and economics has the power to comprehensively assess the true state of the system. This offers crucial information to policymakers. By shedding light on previously invisible vulnerabilities inherent in our interconnected economic world, the blindfolds of ignorance can be removed, paving the way to policies that effectively mitigate systemic risk and avert future global crises.


References and Figures
 —  —  — 

This was a chapter contribution to “To the Man with a Hammer: Augmenting the Policymaker’s Toolbox for a Complex World”, Bertelsmann Stiftung, 2016:
This article collection helps point the way forward. Gathering a distinguished panel of complexity experts and policy innovators, it provides concrete examples of promising insights and tools, drawing from complexity science, the digital revolution and interdisciplinary approaches.

 —  —  — 

See also "Ökonomie neu denken" ("Rethinking Economics"), February 16, 2016, Frankfurt am Main, and the panel discussion.


Friday, December 11, 2015

At the Dawn of Human Collective Intelligence

trusting the universe to reach ever higher levels of complexity

The following was a contribution first published on the 30th of November 2015 in “HOW TO SAVE HUMANITY — Essays and answers from the desks of futurists, economists, biologists, humanitarians, entrepreneurs, activists and other people who spend a lot of time caring about, improving, and supporting the future of humanity.”



It is an interesting idiosyncrasy of our times that we have become increasingly accustomed to the ongoing success of the human mind in probing reality and understanding the world we live in. Indeed, the relevance of this ever-growing body of knowledge, describing the universe and ourselves in greater and greater detail, cannot be overstated. But today, even the most breathtaking technological breakthroughs, fostered by this knowledge, can hardly capture the collective attention span for long. It is as if we have come to expect our technological abilities to steadily accelerate and reach breakneck speeds.

On the other hand, we have also become very accustomed to, and alarmingly indifferent about, the state of human affairs. As a species, our recent terraforming activities have fundamentally transformed the biosphere we rely on, with considerable consequences for each of us individually. In a nutshell, we have devised linear systems that extract resources at one end and, after consumption, dispose of them at the other end. However, on a finite planet, extraction soon becomes exploitation and disposal results in pollution.

Today, this can be witnessed at unprecedented global scales. Just consider the following: substantial levels of pesticides and BPA in vast and even remote populations (like Inuit women whose breast milk is toxic due to pollutants accumulating in the ocean's food chain), the increase of chronic diseases, antimicrobial resistance, the Great Pacific and North Atlantic garbage patches, e-waste, exploding levels of greenhouse gases, peak oil and phosphorus, land degradation, deforestation, water pollution, food waste, overfishing, dramatic loss of biodiversity... The list is constantly growing as we await the arrival of the next billion human inhabitants on this planet.

Compounding this acute problem is the fact that today's generations are living at the expense of future generations, ecologically and economically. For instance, in 2015 we reached Earth Overshoot Day on the 13th of August. Each year, this day approximately marks the point at which human consumption of Earth's natural resources, humanity's ecological footprint, reaches the world's biocapacity to regenerate those natural resources within that year. Since 1970, when the retrospectively computed Earth Overshoot Day fell on the 23rd of December, this tipping point has been occurring earlier and earlier. Moreover, just check the Global Debt Clock, recording public debt worldwide, to see an incomprehensibly and frighteningly high figure, casting an ominous shadow over future prosperity. Yes, the outlook is very dire indeed.
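As a rough illustration, the date can be approximated by scaling the length of the year by the ratio of biocapacity to footprint. The ratio used below is a hypothetical value chosen to land in mid-August, not an official Global Footprint Network figure.

```python
from datetime import date, timedelta

def overshoot_day(year, biocapacity, footprint):
    """Approximate Earth Overshoot Day: the day of the year by which
    humanity's footprint has used up the planet's annual biocapacity."""
    days_in_year = (date(year + 1, 1, 1) - date(year, 1, 1)).days
    day_number = int(biocapacity / footprint * days_in_year)
    return date(year, 1, 1) + timedelta(days=day_number - 1)

# A footprint about 1.62 times biocapacity puts the 2015 overshoot
# day in mid-August:
print(overshoot_day(2015, biocapacity=1.0, footprint=1.62))  # 2015-08-13
```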


The Two Modes of Intelligence

In essence, we have an abundance of individual intelligence, fueling knowledge generation and technological proficiency, but an acute lack of collective intelligence, which would allow our species to co-evolve and co-exist in a sustainable manner with the biosphere that keeps it alive. This is the true enigma of our modern times: why does individual intelligence not foster collective intelligence? Take, for instance, a single termite. Its biological capacity for cognition is very limited. As a collective swarm, however, termites engineer nests equipped with air-conditioning capabilities, ensuring a constant interior temperature that allows them to cultivate a fungus which digests food they could otherwise not utilize. Now take any human. Amazing feats of higher cognitive functioning are manifested: self-awareness, sentience, language capability, creativity, abstract reasoning, formation and defense of beliefs, and much, much more. Remarkably but regrettably, multiplying this amazing potential and capacity by a few billion results in our current state of affairs.

It is interesting to note that biological systems do not feature centralized decision-making. There are no architect or engineer termites overseeing construction, no CPU in our brains responsible for consciousness. This decentralized, bottom-up approach appears to result in the emergence of collective intelligence; in other words, in self-organization, adaptivity, and resilience. Indeed, this incredible robustness of biological complex systems is most probably the reason why we can still continue with "business as usual" despite the continued devastating blows we have delivered to the biosphere. In stark contrast to these natural systems, human systems, from political to economic, are characterized by centralized governance. This top-down approach to collective organization appears to systematically lack adaptivity, resilience, and, most importantly, sustainability.


The Zeitgeist and Beyond

We truly live in tumultuous times. Next to the increasing external pressures just outlined, we are also exposed directly to our own destructiveness. In a global environment where ignorance, myopia, denial, cynicism, indifference, callousness, alienation, disenchantment, and superficiality reign, it is not surprising to witness the rise of fundamentalism and violence in all corners of the world. Neither is it really surprising that many people then try to escape this angst in the short term through distracting consumerism and numbing materialism. Which then leads to the next predicament:

This is a strange, rather perverse story. Just to put it in very simple terms: it’s a story about us, people, being persuaded to spend money we don’t have, on things we don’t need, to create impressions that won’t last, on people we don’t care about.
(Tim Jackson’s 2010 TED talk.)

The reality of the society we're in, is there are thousands and thousands of people out there, leading lives of quiet screaming desperation, where they work long hard hours, at jobs they hate, to enable them to buy things they don't need, to impress people they don't like.
(Nigel Marsh’s 2011 TED talk.)

Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed. The moral and spiritual damage that comes from this situation is profound. It is a scar across our collective soul. Yet virtually no one talks about it.
(David Graeber, “On the Phenomenon of Bullshit Jobs”, 2013.)

Our collective psyche is suffering under the current zeitgeist. In just a few decades, the complexity and uncertainty of the lives we lead have dramatically increased, and we now struggle even harder to find meaning. So, was this it? Are we simply yet another civilization at the precipice of its demise? Are we just a very brief, albeit spectacular, perturbation in the billion-year history of life on Earth, which will undoubtedly adapt and continue for billions of years until our sun runs out of fuel?


At the Dawn

Perhaps things are not as they seem. Maybe the chaotic paths to destruction or survival really are only separated by the metaphorical flapping of the wings of a butterfly. In the case at hand, a mere flicker in the minds of people — for instance, a radical and contagious thought or idea — could alter the course of history.

Indeed, perhaps acquiring collective intelligence is not as hard as we might imagine. What is missing is possibly a subtle change in the way we perceive and think of ourselves and the world we inhabit; a change that would initiate a true shift in our behavior, which could lead to adaptive, resilient, and sustainable human systems and interactions. Maybe the difficulty lies in the simple fact that we all first need to focus on ourselves for the common ground to emerge on which global change could flourish.

One of the earliest and strongest constraints every one of us is confronted with as a child is the imprinting of local and static sociocultural and religious narratives, mostly emphasizing external authority. To resist this initial molding requires a very critical and open-minded worldview, not something every human child comes equipped with. What would happen if we replaced these obviously dysfunctional foundational stories that we have been telling our children? What if we, as a species, agreed to convey ideas to the next generation which do not simply depend on the geographic location of birth but represent something more functional, universal, and unifying? Ideas that also stress self-responsibility and self-reliance?

Modern neuroscience heavily emphasizes the plasticity of the human brain. This neuroplasticity reflects how the brain’s circuits constantly get rewired due to changes not only in the environment, but crucially also in response to inner changes within the mind. Cultivating different thought patterns results in different neural networks. As a consequence, we should never underestimate how untainted young brains, exposed to novel empowering ideas, could result in a generation of “new” humans, significantly different from the last one. Possibly some of the following ideas could meet this challenge — ideas capable of transforming the inner space of the mind and thus having the power to emanate into the outer world.


Cultivating a Responsible, Dynamic, and Inclusive Mindset

First, acknowledge that you are not the center of the universe. The local "reality bubble" you live in is arbitrary and infused with ideas relevant to the past. Your way of life is not representative or defining for the human species. Foreign ideas, beliefs, and ways of life are as justified as your own. The way you perceive reality depends on the exact levels of dozens of neurotransmitters and the biologically evolved hardwiring in your brain. In effect, what appears as real and true is always contingent and relative. Reality could be vastly richer, bigger, and more complex than anyone ever dared to dream. And never forget to appreciate the amazing string of measurable coincidences that had to conspire for you to read this sentence: from the creation of space, time, and energy, to the formation of the first heavy elements in the burning cores of stars, which were then scattered into the cosmos when those stars exploded as supernovae and began to assemble into organic matter, which could store information and spontaneously started to replicate, sparking the evolution of life, which gradually reached ever higher levels of complexity until a lump of organic matter, organized as a network of dozens of billions of nodes and roughly 100 trillion links, became self-aware.

Secondly, place yourself into the center of your universe. You alone are in charge of your life and solely responsible for your actions. You have the freedom in your mind to choose how you respond to internal urges and external influences. You can strive to cultivate a state of happiness and gratitude in your mind, regardless of the circumstances outside of your mind. Embrace change and accept that impermanence is an immutable fact of life. Let go of the illusion of control.

Finally, cultivate a dynamic and inclusive mindset. Assume that all people act to the best of their possibilities and capacities. Face the fact that you can be very wrong in the beliefs you deeply cherish and avoid the illusion of knowledge. Be open to the possibility that other people could be right. Allow your beliefs and ideas to be malleable, adaptive, and self-correcting. Try to strike a healthy balance between critical thinking and open-mindedness.

Can we dare to imagine a future in which we teach our children to be empathetic but critical thinkers? In which we teach them to be independent and to seek acknowledgment not from others but only from themselves? In which we teach them not to fear and discriminate against what is perceived as different and foreign; not to fear change and frantically cling to the status quo, but to face the never-ending challenges of life with confidence and trust? Imagine the collective intelligence that could emerge from a "swarm" of such individuals, emphasizing social inclusion next to cultivating a deep feeling of connectedness to the matrix of life and a profound appreciation of being an integral part of the enigma of existence. Simply leaving out one generation's worth of flawed and harmful imprinting, and filling the arising void with radically functional and dynamic ideas and concepts, has the power to change everything.


The First Rays of Light

What if we already are in the middle of the transition and have not yet realized that it is happening? Despite the fact that we are still fueling dysfunctional collective ideas, perhaps we are already witnessing the beginning of a profound paradigm shift towards collective intelligence.

Take the recent emergence of decentralized financial and economic interactions that are slowly disrupting the status quo. For instance, the nascent rise of the blockchain, a ledger maintained in a trustless peer-to-peer network, enabling previously unthinkable ways of human economic cooperation. Or the impact of free-access and free-content collaborative efforts providing us with unrestricted availability of nearly unlimited knowledge and constantly evolving, cutting-edge software. Or peer-to-peer lending, crowdfunding, and crowdsourcing, with the capacity to leverage the network effect created by a collective of like-minded people. And not to forget the success of shareconomies, offering a radically different blueprint to the way business has been conducted in the past. All these new technologies are based on bottom-up, dynamic, decentralized, networked, unconstrained, and self-organizing human interactions. It is impossible to gauge the future impact of these systems today. Similarly, imagine trying to assess the potential of a new technology, called the Internet, in the early 1990s. No one had the audacity to predict what has today emerged from that initial network, then comprised of a few million computers, now affecting every aspect of modern human life.

We are truly living in a brave new world of unprecedented potential, where future utopias or dystopias are only separated by a thought, an idea, a behavior able to replicate and trigger self-organizing and adaptive collective action. So, where will you be at the dawning of human collective intelligence?