Monday, May 30, 2016

swimming in the sea of knowledge

we live in truly interesting times
We take one of the most amazing and far-reaching achievements in recent times for granted: free access to knowledge.

The advent of user-generated content, the so-called Web 2.0, has enabled initiatives like Wikipedia to assemble an unfathomable amount of human knowledge --- at your fingertips. The Google Books Project has scanned and digitized millions of books, making them searchable online.

Google Scholar is a search engine accessing countless published scholarly articles. Many publications nowadays are open access, and working papers or preprints are often available. If this isn't enough, "Alexandra Elbakyan, a researcher from Kazakhstan, created Sci-Hub, a website that bypasses journal paywalls, illegally providing access to nearly every scientific paper ever published immediately to anyone who wants it" (src). Obviously, this results in a cat-and-mouse game:
  • ...
  • TOR scihub22266oqcxt.onion
But access alone is not enough. The sheer amount of information is mind-blowing. So, how can one navigate this sea of knowledge without drowning?

Enter YouTube, or rather its content providers. There exists a multitude of channels featuring videos aimed at explaining countless topics, from science to philosophy. But crucially, this is done in an entertaining and/or visually appealing manner. Some of my favorites are: Kurzgesagt – In a Nutshell, CrashCourse, Vsauce, Veritasium, MinutePhysics or one of the channels of Brady Haran (list).

And, last but not least, TED and TEDx talks entertain "ideas worth spreading". In other words, personal insights from people working at the cutting edge of current knowledge or simply talks packed with inspiration.

This all means that you have a nearly inexhaustible treasure trove of knowledge at your free disposal, broken down into piecemeal units, ready for instant education.


Edit: Some of my YouTube playlists:

Thursday, May 26, 2016

more random quotes: scott aaronson

new perspectives
So, John Horgan, the End of Science guy, interviewed Scott Aaronson, a theoretical computer scientist interested in quantum computing and computational complexity theory.

In the following, some random quotes.

On Quantum Mechanics

    [Q]uantum mechanics is astonishingly simple—once you take the physics out of it!  In fact, QM isn’t even “physics” in the usual sense: it’s more like an operating system that the rest of physics runs on as application software.

    [A]ccepting quantum mechanics didn’t mean giving up on the computational worldview: it meant upgrading it, making it richer than before.  There was a programming language fundamentally stronger than BASIC, or Pascal, or C—at least with regard to what it let you compute in reasonable amounts of time.  And yet this quantum language had clear rules of its own; there were things that not even it let you do (and one could prove that); it still wasn’t anything-goes. 

The Computational Universe

    If it’s worthwhile to build the LHC or LIGO—wonderful machines that so far, have mostly triumphantly confirmed our existing theories—then it seems at least as worthwhile to build a scalable quantum computer, and thereby prove that our universe really does have this immense computational power beneath the surface. 

    Firstly, quantum computing has supplied probably the clearest language ever invented—namely, the language of qubits, quantum circuits, and so on—for talking about quantum mechanics itself.
    Secondly, one of the most important things we’ve learned about quantum gravity—which emerged from the work of Stephen Hawking and the late Jacob Bekenstein in the 1970s—is that in quantum gravity, unlike in any previous physical theory, the total number of bits (or actually qubits) that can be stored in a bounded region of space is finite rather than infinite.  In fact, a black hole is the densest hard disk allowed by the laws of physics, and it stores a “mere” 10^69 qubits per square meter of its event horizon!  And because of the dark energy (the thing, discovered in 1998, that’s pushing the galaxies apart at an exponential rate), the number of qubits that can be stored in our entire observable universe appears to be at most about 10^122.
    So, that immediately suggests a picture of the universe, at the Planck scale of 10^-33 meters or 10^-43 seconds, as this huge but finite collection of qubits being acted upon by quantum logic gates—in other words, as a giant quantum computation. 

The Big Picture

    Ideas from quantum computing and quantum information have recently entered the study of the black hole information problem—i.e., the question of how information can come out of a black hole, as it needs to for the ultimate laws of physics to be time-reversible.  Related to that, quantum computing ideas have been showing up in the study of the so-called AdS/CFT (anti de Sitter / conformal field theory) correspondence, which relates completely different-looking theories in different numbers of dimensions, and which some people consider the most important thing to have come out of string theory. 

    [S]ome of the conceptual problems of quantum gravity turn out to involve my own field of computational complexity in a surprisingly nontrivial way.  The connection was first made in 2013, in a remarkable paper by Daniel Harlow and Patrick Hayden.  Harlow and Hayden were addressing the so-called “firewall paradox,” which had lit the theoretical physics world on fire (har, har) over the previous year.

    In summary, I predict that ideas from quantum information and computation will be helpful—and possibly even essential—for continued progress on the conceptual puzzles of quantum gravity. 

    If civilization lasts long enough, then there’s absolutely no reason why there couldn’t be further discoveries about the natural world as fundamental as relativity or evolution. One possible example would be an experimentally-confirmed theory of a discrete structure underlying space and time, which the black-hole entropy gives us some reason to suspect is there. 


    [T]he ocean of mathematical understanding just keeps monotonically rising, and we’ve seen it reach peaks like Fermat’s Last Theorem that had once been synonyms for hopelessness.  I see absolutely no reason why the same ocean can’t someday swallow P vs. NP, provided our civilization lasts long enough.  In fact, whether our civilization will last long enough is by far my biggest uncertainty. 

    More seriously, it was realized in the 1970s that techniques borrowed from mathematical logic—the ones that Gödel and Turing wielded to such great effect in the 1930s—can’t possibly work, by themselves, to resolve P vs. NP.  Then, in the 1980s, there were some spectacular successes, using techniques from combinatorics, to prove limitations on restricted types of algorithms.  Some experts felt that a proof of P≠NP was right around the corner.  But in the 1990s, Alexander Razborov and Steven Rudich discovered something mind-blowing: that the combinatorial techniques from the 1980s, if pushed just slightly further, would start “biting themselves in the rear end,” and would prove NP problems to be easier at the same time they were proving them to be harder!  Since it’s no good to have a proof that also proves the opposite of what it set out to prove, new ideas were again needed to break the impasse. 


    This characteristic of quantum mechanics—the way it stakes out an “intermediate zone,” where (for example) n qubits are stronger than n classical bits, but weaker than 2^n classical bits, and where entanglement is stronger than classical correlation, but weaker than classical communication—is so weird and subtle that no science-fiction writer would have had the imagination to invent it.  But to me, that’s what makes quantum information interesting: that this isn’t a resource that fits our pre-existing categories, that we need to approach it as a genuinely new thing. 

    [I]f scanning my brain state, duplicating it like computer software, etc. were somehow shown to be fundamentally impossible, then I don’t know what more science could possibly say in favor of “free will being real”!

    I hate when the people in power are ones who just go with their gut, or their faith, or their tribe, or their dialectical materialism, and who don’t even feel self-conscious about the lack of error-correcting machinery in their methods for learning about the world.

    Just in the fields that I know something about, NP-completeness, public-key cryptography, Shor’s algorithm, the dark energy, the Hawking-Bekenstein entropy of black holes, and holographic dualities are six examples of fundamental discoveries from the 1970s to the 1990s that seem able to hold their heads high against almost anything discovered earlier (if not quite relativity or evolution).

Wednesday, February 17, 2016

Decoding Financial Networks: Hidden Dangers and Effective Policies 

Two changes have ushered in a new era of analyzing the complex and interdependent world surrounding us. One is related to the increased influx of data, furnishing the raw material for this revolution that is now starting to impact economic thinking. The second change is due to a subtler reason: a paradigm shift in the analysis of complex systems.

The buzzword "big data" is slowly being replaced by what is becoming established as "data science." While the cost of computer storage is continually falling, storage capacity is increasing at an exponential rate. In effect, seemingly endless streams of data, originating from countless human endeavors, are continually flowing along global information superhighways and being stored not only in server farms and the cloud, but -- importantly -- also in the researcher's local databases. However, collecting and storing raw data is futile if there is no way to extract meaningful information from it. Here, the budding science of complex systems is helping distill meaning from this data deluge.

Traditional problem-solving has been strongly shaped by the success of the reductionist approach taken in science. Put in the simplest terms, the focus has traditionally been on things in isolation -- on the tangible, the tractable, the malleable. But not so long ago, this focus shifted to a subtler dimension of our reality, where the isolation is overcome. Indeed, seemingly single and independent entities are always components of larger units of organization and hence influence each other. Our world, while still being comprised of many of the same "things" as in the past, has become highly networked and interdependent -- and, therefore, much more complex. From the interaction of independent entities, the notion of a system has emerged.

Understanding the structure of a system's components does not bring insights into how the system will behave as a whole. Indeed, the very concept of emergence fundamentally challenges our knowledge of complex systems, as self-organization allows for novel properties -- features not previously observed in the system or its components -- to unfold. The whole is literally more than the sum of its parts.

This shift away from analyzing the structure of "things" to analyzing their patterns of interaction represents a true paradigm shift, and one that has impacted computer science, biology, physics and sociology. The need to bring about such a shift in economics, too, can be heard in the words of Andy Haldane, chief economist at the Bank of England (Haldane 2011):
Economics has always been desperate to burnish its scientific credentials and this meant grounding it in the decisions of individual people. By itself, that was not the mistake. The mistake came in thinking the behavior of the system was just an aggregated version of the behavior of the individual. Almost by definition, complex systems do not behave like this. [...] Interactions between agents are what matters. And the key to that is to explore the underlying architecture of the network, not the behavior of any one node.

In a nutshell, the key to the success of complexity science lies in ignoring the complexity of the components while quantifying the structure of interactions. An ideal abstract representation of a complex system is given by a graph -- a complex network. This field has been emerging in a modern form since about the turn of the millennium (Watts and Strogatz 1998; Barabasi and Albert 1999; Albert and Barabasi 2002; Newman 2003).

Underpinning economics with insights from complex systems requires a major culture change in how economics is conducted. Specialized knowledge needs to be augmented with a diversity of expertise. Or, in the words of Jean-Claude Trichet, former president of the European Central Bank (Trichet 2010):

I would very much welcome inspiration from other disciplines: physics, engineering, psychology, biology. Bringing experts from these fields together with economists and central bankers is potentially very creative and valuable. Scientists have developed sophisticated tools for analyzing complex dynamic systems in a rigorous way.

What's more, scientists themselves have acknowledged this call for action (see, e.g., Schweitzer et al. 2009; Farmer et al. 2012).

In what follows, I will present two case studies that provide an initial glimpse of the potential of applying such a data-driven and network-inspired type of research to economic systems. By uncovering patterns of organization otherwise hidden in the data, these studies caught the attention not only of scholars and the general public, but also of policymakers.

The network of global corporate control

A specific constraint in the analysis of economic and financial systems is the unfortunate relative scarcity of data. While other fields are flooded with data, in the realm of economics a lot of potentially valuable information is deemed proprietary and not disclosed for strategic reasons. A viable detour is utilizing a good proxy that is exhaustive and widely available.

Ownership data, representing the percentages of equity a shareholder has in certain companies, is such a dataset. The structure of the ownership network is thought to be a good proxy for that of the financial network (Vitali, Glattfelder and Battiston 2011). However, this is not the main reason for analyzing such a dataset. Ownership networks represent an interface between the fields of economics and complex networks because information on ownership relations crucially unlocks knowledge relating to the global power of corporations. As a matter of fact, ownership gives a certain degree of control to the shareholder. In other words, the signature of corporate control is encoded in these networks (Glattfelder 2013). These and similar issues are also investigated in the field of corporate governance.

Bureau van Dijk's commercial Orbis database comprises about 37 million economic actors (e.g., physical persons, governments, foundations and firms) located in 194 countries as well as roughly 13 million directed and weighted ownership links for the year 2007. In a first step, a cross-country analysis of this ownership snapshot was performed (Glattfelder and Battiston 2009). A key finding was that the more locally dispersed control was, the more concentrated global control became in the hands of a few powerful shareholders. This is in contrast to the economic idea of "widely held" firms in the United States (Berle and Means 1932). In fact, these results show that the true picture can only be unveiled by considering the whole network of interdependence. By simply focusing on the first level of ownership, one is misled by a mirage.

In a next step, the Orbis data was used to construct the global network of ownership. By focusing on the 43,060 transnational corporations (TNCs) found in the data, a new network was constructed that comprised all the direct and indirect shareholders and subsidiaries of the TNCs. Then, this network of TNCs, containing 600,508 nodes and 1,006,987 links, was further analyzed (Vitali, Glattfelder and Battiston 2011). Figure 1 shows a small sample of the network.

Analyzing the topology of the TNC network reveals the first signs of an organizational principle at work. One can see that the network is actually made up of many sub-networks that are internally connected but isolated from one another. The cumulative distribution function of the sizes of these connected components follows a power law; there are 23,824 such components, ranging from single isolated nodes up to a cluster of 230 connected nodes. However, the largest connected component (LCC) represents an outlier in the power-law distribution, as it contains 464,006 nodes and 889,601 links.
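The component analysis above boils down to finding connected components. As a minimal, self-contained sketch (using an invented toy graph, not the Orbis data), components can be extracted with a breadth-first search:

```python
from collections import deque

def connected_components(nodes, edges):
    """Return the connected components of an undirected graph,
    found by breadth-first search."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

# Toy ownership graph (hypothetical): two clusters and one isolated node.
nodes = ["A", "B", "C", "D", "E", "F"]
edges = [("A", "B"), ("B", "C"), ("D", "E")]
sizes = sorted(len(c) for c in connected_components(nodes, edges))
print(sizes)  # → [1, 2, 3]
```

The same idea, applied to the TNC network, is what yields the 23,824 components and singles out the LCC as the dominant outlier.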

This super-cluster contains only 36 percent of all TNCs. In effect, most TNCs "prefer" to be part of isolated components that comprise a few hundred nodes at most. But what can be said about the TNCs in the LCC? By adding a proxy for the value or size of firms, the network analysis can be extended. In the study, the operating revenue was used for the value of firms. Now it is possible to see where the valuable TNCs are located in the network. Strikingly, the 36 percent of TNCs in the LCC account for 94 percent of the total TNC operating revenue. This finding justifies focusing further analysis solely on the LCC.

In general, assigning a value v_j to firm j gives additional meaning to the ownership network. As mentioned, a good proxy reflecting the economic value of a company is the operating revenue. Assigning such a non-topological variable to the nodes uncovers a deeper level of information embedded in the network. If shareholder i holds a fraction W_{ij} of the shares of firm j, W_{ij} v_j represents the value that i holds in j. Accordingly, the portfolio value of firm i is given by
p_i = sum_j W_{ij} v_j.  (1.1)
However, in ownership networks, there are also chains of indirect ownership links. For instance, firm i can gain value from firm k via firm j, if i holds shares in j, which, in turn, holds shares in k. Symbolically, this can be denoted as i -> j -> k.
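Equation (1.1) and the indirect chain i -> j -> k can be sketched in a few lines. The three-firm network and revenue figures below are hypothetical, chosen only to make the direct and indirect contributions easy to trace:

```python
# Hypothetical three-firm chain: firm 0 holds 50% of firm 1,
# which in turn holds 40% of firm 2 (i.e., 0 -> 1 -> 2).
W = [[0.0, 0.5, 0.0],
     [0.0, 0.0, 0.4],
     [0.0, 0.0, 0.0]]
v = [10.0, 20.0, 30.0]  # invented operating revenues

def direct_portfolio(W, v):
    """Equation (1.1): p_i = sum_j W_ij * v_j."""
    n = len(v)
    return [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]

def integrated_value(W, v, max_len=10):
    """Value reachable through ownership chains of length 1..max_len,
    i.e., (W + W^2 + ...) applied to v."""
    n = len(v)
    total = [0.0] * n
    Wk = [row[:] for row in W]  # current power of W, starting at W^1
    for _ in range(max_len):
        for i in range(n):
            total[i] += sum(Wk[i][j] * v[j] for j in range(n))
        # Wk <- Wk @ W (next chain length)
        Wk = [[sum(Wk[i][m] * W[m][j] for m in range(n)) for j in range(n)]
              for i in range(n)]
    return total

print([round(x, 6) for x in direct_portfolio(W, v)])   # → [10.0, 12.0, 0.0]
print([round(x, 6) for x in integrated_value(W, v)])   # → [16.0, 12.0, 0.0]
```

Firm 0's extra 6.0 in the second line is exactly the indirect chain 0 -> 1 -> 2, worth 0.5 * 0.4 * 30.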

Using these building blocks, and the fact that ownership is related to control, a methodology is introduced that estimates the degree of influence that each agent wields as a result of the network of ownership relations. In other words, a network centrality measure is provided that not only accounts for the structure of the shareholding relations, but -- crucially -- also incorporates the distribution of value. This allows the top shareholders to be identified. As it turns out, 730 top shareholders have the potential to control 80 percent of the total operating revenue of all TNCs. In effect, this measure of influence is one order of magnitude more concentrated than the distribution of operating revenue. These top shareholders consist of financial institutions located in the United States and the United Kingdom (note that holding many ownership links does not necessarily result in a high value of influence).
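The headline figure (730 shareholders potentially controlling 80 percent) comes from ranking agents by influence and accumulating until a target fraction of the total is reached. A minimal sketch of that counting step, with invented influence scores:

```python
def holders_for_fraction(influence, fraction=0.8):
    """Smallest number of top-ranked holders whose combined
    influence reaches the given fraction of the total."""
    ranked = sorted(influence, reverse=True)
    target = fraction * sum(ranked)
    running = 0.0
    for count, score in enumerate(ranked, start=1):
        running += score
        if running >= target:
            return count
    return len(ranked)

# Invented, heavily skewed influence scores.
scores = [100, 60, 40, 25, 15, 10, 5, 3, 1, 1]
print(holders_for_fraction(scores))  # → 4: a few holders cover 80% of the total
```

The more skewed the distribution, the fewer holders are needed, which is precisely the sense in which influence is "one order of magnitude more concentrated" than operating revenue.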

Combining these two dimensions of analysis -- that is, the topology and the shareholder ranking -- finally uncovers yet another pattern of organization. A striking feature of the LCC is that it has a tiny but distinct core of 1,318 nodes that are highly interconnected (12,191 links). Analyzing the identity of the firms present in this core reveals that many of them are also top shareholders. Indeed, the 147 most influential shareholders in the core can potentially control 38 percent of the total operating revenue of all TNCs. In other words, a "superentity" with disproportional power is identified in the already powerful core, akin to a fractal structure.

This emerging power structure in the global ownership network has possible negative implications. For instance, as will be discussed in the next section, global systemic risk is sensitive to the connectivity of the network (Battiston et al. 2007; Lorenz and Battiston 2008; Wagner 2009; Stiglitz 2010; Battiston et al. 2012a). Moreover, global market competition is threatened by potential collusion (O'Brien and Salop 2001; Gilo, Moshe and Spiegel 2006).

Subjecting a comprehensive global economic dataset to a detailed network analysis has the power to unveil organizational patterns that have previously gone undetected. Although the exact numbers in the study should be taken with a grain of salt, they still give a good first approximation. For instance, the very different methods that can be used to estimate control from ownership all provide very similar aggregated network statistics.

Finally, although it cannot be proved that the top influencers actually exert their power or are able to leverage their privileged position, it is also impossible to rule out such activities -- especially since these channels for relaying power can be utilized in a covert manner. In any case, the degree of influence assigned to the shareholders can be understood as the probability of achieving one's own interest against the opposition of the other actors -- a notion reminiscent of Max Weber's idea of potential power (Weber 1978).

An ongoing research effort aims to extend this analysis to include additional annual snapshots of the global ownership network up to 2012. The focus now lies on the dynamics and evolution of the network. In particular, the stability of the core over time will be analyzed. Preliminary results on a small subset of the data suggest that the structure of the core is indeed stable. If verified, this would imply that the emergent power structure is resilient to forces reshaping the network architecture, such as the global financial crisis. The structure could also potentially be resistant to market reforms and regulatory efforts.


Systemic risk in financial networks

In an interconnected system, the notion of risk can assume many guises. The simplest and most obvious manifestation is that of individual risk. The colloquialism "too big to fail" captures the promise that further disaster can be averted by identifying and assisting the major players. This approach, however, does not work in a network. In systems where the agents are connected and therefore codependent, the relevant measure is systemic risk. Only by understanding the architecture of the network's connectivity can the propagation of financial distress through the system be understood. In essence, systemic risk is akin to the process of an epidemic spreading through a population.

A naive intuition would suggest that increasing the interconnectivity of the system reduces the threat of systemic risk. In other words, the overall system should be more resilient when agents diversify their individual risks by increasing their links with other agents. Unfortunately, this can be shown to be false (Battiston et al. 2012a). Granted, in systems with feedback loops, such as financial systems, initial risk diversification can indeed reduce systemic risk. However, there is a threshold related to the level of connectivity, and once it has been reached, any additional diversification effort only results in increased systemic risk. Above this threshold, feedback loops and amplification can lead to a knife-edge property, in which case stability is suddenly compromised.

Now a paradox emerges: Although individual financial agents become more resistant to shocks coming from their own business, the overall probability of failure in the system increases. In the worst-case scenario, the efforts of individual agents to manage their own risk increase the chances that other agents in the system will experience distress, thereby creating more systemic risk than the risk they reduced via risk-sharing. Against this backdrop, the highly interconnected core of the global ownership network looms ominously.

To summarize, in the presence of a network, it is not enough to simply identify the big players that have the potential to damage the system should they experience financial distress. Instead, it is crucial to analyze the network of codependency. The phrase "too connected to fail" captures this focus. However, for this approach to be implemented, a full-blown network analysis is required. Insights can only be gained by simulating the dynamics of such a system on its underlying network structure. For instance, one cannot calculate analytically the threshold of connectivity past which diversification has a destabilizing effect.

Still, there is a final step that can be taken in analyzing systemic risk in networks. Next to "too big to fail" (which focuses on the nodes) and "too connected to fail" (which incorporates the links), a third layer can be added by utilizing a more sophisticated network measure called "centrality." In a nutshell, a node's centrality simply depends on its neighbors' centrality. For example, PageRank, the algorithm that Google uses to rank websites in its search-engine results, is a centrality measure. A webpage is more important if other important webpages link to it. Recall also that the methodology for computing the degree of influence that was discussed in the previous section is another example of centrality.
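As a rough illustration of such a centrality measure, here is a bare-bones power-iteration PageRank on an invented four-node graph. This is the textbook simplification, not Google's production algorithm:

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank: a node's score is fed by the
    scores of the nodes linking to it."""
    n = len(adj)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for i, targets in enumerate(adj):
            if not targets:  # dangling node: spread its mass evenly
                for j in range(n):
                    new[j] += damping * rank[i] / n
            else:
                for j in targets:
                    new[j] += damping * rank[i] / len(targets)
        rank = new
    return rank

# Toy graph: nodes 1, 2 and 3 all link to node 0; node 0 links to node 1.
adj = [[1], [0], [0], [0]]
r = pagerank(adj)
print(max(range(len(r)), key=lambda i: r[i]))  # → 0: the most linked-to node
```

The recursion is the key point: node 0 ranks highest not merely because it has the most inlinks, but because its own high score in turn boosts whatever it links to.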

A study focusing on this "too central to fail" notion of systemic risk has been conducted (Battiston et al. 2012b). The work employed previously confidential data on the 2008 crisis gathered by the US Federal Reserve to assess systemic risk as part of the Fed's emergency loans program. Inspired by the methodology behind the computation of shareholder influence and PageRank, a novel centrality measure for tracking systemic risk, called DebtRank, is introduced.

In the study, debt data from the Fed is augmented with the ownership data used in the analysis of the network of global corporate control. As mentioned, the ownership network is a valid proxy for the undisclosed financial network linking banks. The data also includes detailed information on daily balance sheets for 407 institutions that, together, received bailout funds worth $1.2 trillion from the Fed. The data covers 1,000 days from before, during and after the peak of the crisis, from August 2007 to June 2010. The study focuses on the 22 banks that collectively received three-quarters of that bailout money. It is interesting to observe that almost all of these banks were members of the "super-entity."

DebtRank computes the likelihood that a bank will default as well as how much this would damage the creditworthiness of the other banks in the network. In essence, the measure extends the notion of default contagion into that of distress propagation. Crucially, DebtRank proposes a quantitative method for monitoring institutions in a network and identifying the ones that are the most important for the stability of the system.
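The propagation idea behind DebtRank can be sketched as follows. This is a simplified reading of the algorithm in Battiston et al. (2012b), not the published implementation: each node carries a distress level between zero and one and propagates it to its neighbors exactly once, and the impact matrix and bank values are invented toy numbers.

```python
def debtrank(W, value, seed):
    """Toy DebtRank-style distress propagation. W[i][j] is the fraction
    of j's equity wiped out if i defaults; each node propagates its
    distress exactly once, then becomes inactive."""
    n = len(value)
    h = [0.0] * n        # distress level of each bank, in [0, 1]
    state = ["U"] * n    # U = undistressed, D = distressed, I = inactive
    h[seed], state[seed] = 1.0, "D"
    while "D" in state:
        for i in [k for k in range(n) if state[k] == "D"]:
            for j in range(n):
                if state[j] == "I":
                    continue  # inactive banks absorb no further distress
                new = min(1.0, h[j] + W[i][j] * h[i])
                if new > h[j] and state[j] == "U":
                    state[j] = "D"
                h[j] = new
            state[i] = "I"  # i has propagated and will not act again
    # Damage inflicted on the rest of the system, as a value-weighted share.
    total = sum(value)
    return sum(h[j] * value[j] for j in range(n) if j != seed) / total

# Invented impact network: bank 0's default hits banks 1 and 2 hard.
W = [[0.0, 0.8, 0.6],
     [0.0, 0.0, 0.2],
     [0.0, 0.0, 0.0]]
value = [1.0, 1.0, 1.0]
print(round(debtrank(W, value, seed=0), 3))  # → 0.52
```

In this toy run, bank 0's default directly distresses banks 1 and 2, and bank 1 then passes a second wave on to bank 2: distress propagation rather than mere default contagion.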

Figure 2 shows an "X-ray image" of the global financial crisis unfolding. It is striking to observe how many of the major players are affected and how some individual institutions threaten the majority of the economic value in the network (a DebtRank value larger than 0.5). Indeed, if a bank with a DebtRank value close to one defaults, it could potentially obliterate the economic value of the entire system. And, finally, the issue of "too central to fail" becomes dauntingly visible: Even institutions with relatively small asset size can become fragile and threaten a large part of the economy. The condition for this to happen is given by the position in the network as measured by the centrality.

In a forthcoming publication (Battiston et al. 2015), the notion of DebtRank is re-expressed making use of the more common notion of leverage, defined as the ratio between an institution's assets and equity. From this starting point, the authors develop a stress-test framework that allows the computation of a whole set of systemic risk measures. Again, since detailed data on the bilateral exposures between financial institutions is not publicly available, the true architecture of the financial network cannot be observed. In order to overcome this problem, the framework utilizes Monte Carlo samples of networks with realistic topologies (i.e., network realizations that match the aggregate level of interbank exposure for each financial institution).

As an illustrative exercise, the authors run the framework on a set of European banks, using empirical data on aggregated interbank lending and borrowing volumes obtained from Bankscope, covering 183 EU banks. The interbank network is reconstructed for the years 2008 to 2013 using the so-called fitness model. Importantly, attention is placed not only on the first-round effects of an initial shock, but also on the subsequent additional rounds of reverberation within the interbank network. A crucial result is given by the following relation:
L(2) = l^b S, (1.2)
where L(2) represents the total relative equity loss in the second round of distress propagation induced by the initial shock S, and with l^b > 0 being the weighted average of the interbank leverage. In other words, l^b is derived from the interbank assets and equity. In detail, S is computed from the unit shock on the value of external assets and the external leverage, that is, from the leverage related to the assets that do not originate from within the interbank system.

Equation (1.2) implies the highly undesirable conclusion that, whenever l^b ≥ 1, the second-round effect of distress propagation is at least as detrimental as the initial shock. This result highlights the important fact that waves of financial distress ripple through the network multiple times, intensifying the problem for the individual nodes. This mechanism only truly becomes visible in a network analysis of the system. In empirical terms, the result is also compelling, as levels of interbank leverage are often around a value of two. In this light, the distress in the second round can be twice as big as the initial distress on the external assets. To conclude, neglecting second-round effects could therefore lead to a severe underestimation of systemic risk.
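The arithmetic behind equation (1.2) is simple enough to spell out. With a hypothetical initial relative equity loss of 5 percent and an interbank leverage of two (the typical value mentioned in the text):

```python
def second_round_loss(interbank_leverage, initial_shock):
    """Equation (1.2): L(2) = l^b * S, the total relative equity loss
    in the second round of distress propagation."""
    return interbank_leverage * initial_shock

l_b = 2.0   # interbank leverage, often around two per the text
S = 0.05    # hypothetical initial relative equity loss of 5%
L2 = second_round_loss(l_b, S)
print(L2)   # → 0.1: the second round doubles the initial distress
```

A stress test that stops after the first round would book a 5 percent loss and miss the further 10 percent arriving in round two, which is the underestimation the text warns about.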

Outlook for policy-making

What is the added value of trying to understand the economy as an interconnected complex system? The most important result to mention in this context is the power of such analysis to uncover hidden features that would otherwise go undetected. Stated simply, the intractable complexity of financial systems can be decoded and understood by unraveling the underlying network.

A prime example of a network analysis uncovering unsuspected latent features is the detection of the tiny, but highly interconnected core of powerful actors in the global ownership network. It is a novel finding that the most influential companies do not conduct their business in isolation, but rather are entangled in an extremely intricate web of control. Notice, however, that the very existence of such a small, powerful and self-controlled group of financial institutions was unsuspected in the economics literature. Indeed, its existence is in stark contrast with many theories on corporate governance (see, e.g., Dore 2002).

However, understanding the structure of interaction in a complex system is only the first step. Once the underlying network architecture is made visible, the resulting dynamics of such systems can be analyzed. Recall that distress spreads through the network like an epidemic, infecting one node after another. In other words, the true understanding of the notion of systemic risk in a financial setting crucially relies on the knowledge of this propagation mechanism, which again is determined by the network topology. As discussed above, in a real-world setting in which feedback loops can act as amplifiers, the second-round effect of an initial shock is at least as big as the initial impact. It should be noted that the notorious "bank stress tests" also aim at assessing such risks. More specifically, it is analyzed whether, under unfavorable economic scenarios, banks have enough capital to withstand the impact of adverse developments. Unfortunately, while commendable, these efforts only emphasize first-round effects and therefore potentially underestimate the true dangers to a significant degree. A recent example is the Comprehensive Assessment conducted by the European Central Bank in 2014, which included the Asset Quality Review.

A first obvious application of the knowledge derived from a complex-systems approach to finance and economics is related to monitoring the health of the system. For instance, DebtRank allows systemic risk to be measured along two dimensions: the potential impact of an institution on the whole system as well as the vulnerability of an institution exposed to the distress of others. This identifies the most dangerous culprits, namely, institutions with both high vulnerability and impact. In Figure 3, the whole extent of the financial crisis becomes apparent, as high vulnerability was indeed compounded with high impact in 2008. In 2013, high vulnerability was offset by relatively low impact.
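The impact dimension of DebtRank can be sketched as follows, in the spirit of Battiston et al. (2012). This is a simplified illustration, not the published method: the three-bank impact matrix and the node weights are hypothetical numbers, and the real algorithm involves further refinements (e.g. calibrating the impact matrix from interbank exposures and equity).

```python
import numpy as np

def debtrank(W, v, shocked):
    """Simplified DebtRank-style impact of fully distressing one node.

    W[i, j] : fraction of j's equity lost if node i is fully distressed
    v       : relative economic value of each node (sums to 1)
    Each node propagates distress only once, so the recursion always
    terminates; the result is the system-wide loss of value caused by
    the initial shock, excluding the shock itself.
    """
    n = len(v)
    h = np.zeros(n)              # distress levels in [0, 1]
    h[shocked] = 1.0
    state = np.full(n, "U")      # U: undistressed, D: distressed, I: inactive
    state[shocked] = "D"
    while (state == "D").any():
        active = (state == "D").astype(float)
        h = np.minimum(1.0, h + W.T @ (h * active))  # pass distress onward
        state[state == "D"] = "I"                    # spent nodes go inactive
        state[(h > 0) & (state == "U")] = "D"
    return v @ h - v[shocked]

# Hypothetical 3-bank network: W[i, j] and the weights v are invented.
W = np.array([
    [0.0, 0.5, 0.3],
    [0.2, 0.0, 0.4],
    [0.1, 0.6, 0.0],
])
v = np.array([0.5, 0.3, 0.2])
impacts = [debtrank(W, v, i) for i in range(3)]
```

Ranking nodes by this impact score singles out the institutions whose distress would propagate most widely, independently of their balance-sheet size; combining it with a vulnerability measure yields the two dimensions described above.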

In addition to analyzing the health of the financial system at the level of individual actors, an index could be constructed that incorporates and aggregates the many facets of systemic risk. In this case, sectors and countries could also be scrutinized. A final goal would be the implementation of forecasting techniques. Which probable trajectories leading into crisis emerge from the current state of the system? As Haldane (2011) noted in contemplating the idea of forecasting economic turbulence:

It would allow regulators to issue the equivalent of weather-warnings -- storms brewing over Lehman Brothers, credit default swaps and Greece. It would enable advice to be issued -- keep a safe distance from Bear Stearns, sub-prime mortgages and Icelandic banks. And it would enable "what-if?" simulations to be run -- if UK bank Northern Rock is the first domino, what will be the next?

In essence, a data- and complex systems-driven approach to finance and economics has the power to comprehensively assess the true state of the system. This offers crucial information to policymakers. By shedding light on previously invisible vulnerabilities inherent in our interconnected economic world, the blindfolds of ignorance can be removed, paving the way to policies that effectively mitigate systemic risk and avert future global crises.

References and Figures

 —  —  — 

This was a chapter contribution to “To the Man with a Hammer: Augmenting the Policymaker’s Toolbox for a Complex World”, Bertelsmann Stiftung, 2016:
This article collection helps point the way forward. Gathering a distinguished panel of complexity experts and policy innovators, it provides concrete examples of promising insights and tools, drawing from complexity science, the digital revolution and interdisciplinary approaches.

Table of contents:

 —  —  — 

See also "Ökonomie neu denken", February 16, 2016, Frankfurt am Main and Podiumsdiskussion.

Friday, December 11, 2015

At the Dawn of Human Collective Intelligence

trusting the universe to reach ever higher levels of complexity

The following was a contribution first published on the 30th of November 2015 in “HOW TO SAVE HUMANITY — Essays and answers from the desks of futurists, economists, biologists, humanitarians, entrepreneurs, activists and other people who spend a lot of time caring about, improving, and supporting the future of humanity.”

It is an interesting idiosyncrasy of our times that we have become increasingly accustomed to the ongoing success of the human mind in probing reality and understanding the world we live in. Indeed, the relevance of this ever growing body of knowledge, describing the universe and ourselves in greater and greater detail, cannot be overstated. But today, even the most breathtaking technological breakthroughs, fostered by this knowledge, can hardly capture the collective attention span for long. It is as if we have come to expect our technological abilities to steadily accelerate and reach breakneck speeds.

On the other hand, we have also become very accustomed to, and alarmingly indifferent and unconcerned about, the state of human affairs. As a species, our recent terraforming activities have fundamentally transformed the biosphere we rely on, with considerable consequences for each of us individually. In a nutshell, we have devised linear systems that extract resources at one end and, after consumption, dispose of them at the other. However, on a finite planet, extraction soon becomes exploitation and disposal results in pollution.

Today, this can be witnessed at unprecedented global scales. Just consider the following: substantial levels of pesticides and BPA in vast populations, even remote ones (like Inuit women whose breast milk is toxic due to pollutants accumulating in the ocean’s food chain), the increase of chronic diseases, antimicrobial resistance, the Great Pacific and North Atlantic garbage patches, e-waste, exploding levels of greenhouse gases, peak oil and peak phosphorus, land degradation, deforestation, water pollution, food waste, overfishing, dramatic loss of biodiversity... The list is constantly growing as we await the arrival of the next billion human inhabitants on this planet.

Compounding this acute problem is the fact that today’s generations are living at the expense of future generations, ecologically and economically. For instance, in 2015 we reached Earth Overshoot Day on the 13th of August. Each year, this day marks the point when humanity’s consumption of Earth’s natural resources, its ecological footprint, approximately exhausts the world’s biocapacity to regenerate those resources within that year. Since the measure was first computed for 1970, when the 23rd of December marked Earth Overshoot Day, this tipping point has been occurring earlier and earlier. Moreover, just check the Global Debt Clock, recording public debt worldwide, to see an incomprehensibly and frighteningly high figure, casting an ominous shadow over future prosperity. Yes, the outlook is very dire indeed.

The Two Modes of Intelligence

In essence, we have an abundance of individual intelligence, fueling knowledge generation and technological proficiency, but an acute lack of collective intelligence, which would allow our species to co-evolve and co-exist in a sustainable manner with the biosphere that keeps it alive. This is the true enigma of our modern times: why does individual intelligence not foster collective intelligence? Take, for instance, a single termite. Its biological capacity for cognition is very limited. However, as a collective swarm, termites engineer nests equipped with air-conditioning capabilities, ensuring a constant inside temperature that allows them to cultivate a fungus which digests food they could not otherwise utilize. Now take any human. Amazing feats of higher cognitive functioning are manifested: self-awareness, sentience, language capability, creativity, abstract reasoning, the formation and defense of beliefs, and much, much more. Remarkably but regrettably, multiplying this amazing potential and capacity by a few billion results in our current state of affairs.

It is interesting to note that biological systems do not feature centralized decision-making. There are no architect or engineering termites overseeing construction, no CPU in our brains responsible for consciousness. This decentralized, bottom-up approach appears to result in the emergence of collective intelligence: in self-organization, adaptivity, and resilience. Indeed, this incredible robustness of biological complex systems is most probably the reason why we can still continue with “business as usual” despite the continued devastating blows we have delivered to the biosphere. In stark contrast to these natural systems, human systems, from the political to the economic, are characterized by centralized governance. This top-down approach to collective organization appears to systematically lack adaptivity, resilience, and, most importantly, sustainability.

The Zeitgeist and Beyond

We truly live in tumultuous times. Next to the increasing external pressures just outlined, we are also directly exposed to our own destructiveness. In a global environment where ignorance, myopia, denial, cynicism, indifference, callousness, alienation, disenchantment, and superficiality reign, it is not surprising to witness the rise of fundamentalism and violence in all corners of the world. Nor is it really surprising that many people try to escape this angst in the short term through distracting consumerism and numbing materialism. Which then leads to the next predicament:

This is a strange, rather perverse story. Just to put it in very simple terms: it’s a story about us, people, being persuaded to spend money we don’t have, on things we don’t need, to create impressions that won’t last, on people we don’t care about.
(Tim Jackson’s 2010 TED talk.)

The reality of the society we’re in, is there are thousands and thousands of people out there, leading lives of quiet screaming desperation, where they work long hard hours, at jobs they hate, to enable them to buy things they don’t need, to impress people they don’t like.
(Nigel Marsh’s 2011 TED talk.)

Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed. The moral and spiritual damage that comes from this situation is profound. It is a scar across our collective soul. Yet virtually no one talks about it.
(David Graeber, “On the Phenomenon of Bullshit Jobs”, 2013.)

Our collective psyche is suffering under the current zeitgeist. In just a few decades, the complexity and uncertainty of the lives we lead have dramatically increased, and we now struggle even harder to find meaning. So, was this it? Are we simply yet another civilization at the precipice of its demise? Are we just a very brief, albeit spectacular, perturbation in the billion-year history of life on Earth, which will undoubtedly adapt and continue for billions of years until our sun runs out of fuel?

At the Dawn

Perhaps things are not as they seem. Maybe the chaotic paths to destruction or survival really are only separated by the metaphorical flapping of the wings of a butterfly. In the case at hand, a mere flicker in the minds of people — for instance, a radical and contagious thought or idea — could alter the course of history.

Indeed, perhaps acquiring collective intelligence is not as hard as we might imagine. What is missing is possibly a subtle change in the way we perceive and think of ourselves and the world we inhabit; a change that would initiate a true shift in our behavior, which could lead to adaptive, resilient, and sustainable human systems and interactions. Maybe the difficulty lies in the simple fact that we all first need to focus on ourselves for the common ground to emerge on which global change could flourish.

One of the earliest and strongest constraints every one of us is confronted with as a child is the imprinting of local and static sociocultural and religious narratives, mostly emphasizing external authority. Resisting this initial molding requires a very critical and open-minded worldview, not something every human child comes equipped with. What would happen if we replaced these obviously dysfunctional foundational stories that we have been telling our children? What if we, as a species, agreed to convey ideas to the next generation that do not simply depend on the geographic location of birth but represent something more functional, universal, and unifying? Ideas that also stress self-responsibility and self-reliance?

Modern neuroscience heavily emphasizes the plasticity of the human brain. This neuroplasticity reflects how the brain’s circuits constantly get rewired due to changes not only in the environment, but crucially also in response to inner changes within the mind. Cultivating different thought patterns results in different neural networks. As a consequence, we should never underestimate how untainted young brains, exposed to novel empowering ideas, could result in a generation of “new” humans, significantly different from the last one. Possibly some of the following ideas could meet this challenge — ideas capable of transforming the inner space of the mind and thus having the power to emanate into the outer world.

Cultivating a Responsible, Dynamic, and Inclusive Mindset

First, acknowledge that you are not the center of the universe. The local “reality bubble” you live in is arbitrary and infused with ideas relevant to the past. Your way of life is not representative or defining for the human species. Foreign ideas, beliefs, and ways of life are as justified as your own. The way you perceive reality depends on the exact levels of dozens of neurotransmitters and the biologically evolved hardwiring in your brain. In effect, what appears as real and true is always contingent and relative. Reality could be vastly richer, bigger, and more complex than anyone ever dared to dream. And never forget to appreciate the amazing string of measurable coincidences that had to conspire for you to read this sentence: from the creation of space, time, and energy, to the formation of the first heavy elements in the burning cores of stars, which were scattered into the cosmos when those stars exploded as supernovae and started to assemble into organic matter, which could store information and spontaneously began to replicate, sparking the evolution of life, which gradually reached ever higher levels of complexity until a lump of organic matter, organized as a network of dozens of billions of nodes and roughly 100 trillion links, became self-aware.

Secondly, place yourself into the center of your universe. You alone are in charge of your life and solely responsible for your actions. You have the freedom in your mind to choose how you respond to internal urges and external influences. You can strive to cultivate a state of happiness and gratitude in your mind, regardless of the circumstances outside of your mind. Embrace change and accept that impermanence is an immutable fact of life. Let go of the illusion of control.

Finally, cultivate a dynamic and inclusive mindset. Assume that all people act to the best of their possibilities and capacities. Face the fact that you can be very wrong in the beliefs you deeply cherish and avoid the illusion of knowledge. Be open to the possibility that other people could be right. Allow your beliefs and ideas to be malleable, adaptive, and self-correcting. Try to strike a healthy balance between critical thinking and open-mindedness.

Can we dare to imagine a future in which we teach our children to be empathetic but critical thinkers? In which we teach them to be independent and to seek acknowledgment not from others but only from themselves? In which we teach them not to fear and discriminate against what is perceived as different and foreign; not to fear change and frantically cling to the status quo, but to face the never-ending challenges of life with confidence and trust? Imagine the collective intelligence that could emerge from a “swarm” of such individuals, emphasizing social inclusion next to cultivating a deep feeling of connectedness to the matrix of life and a profound appreciation of being an integral part of the enigma of existence. Simply leaving out one generation’s worth of flawed and harmful imprinting, and filling the arising void with radically functional and dynamic ideas and concepts, has the power to change everything.

The First Rays of Light

What if we already are in the middle of the transition and have not yet realized that it is happening? Despite the fact that we are still fueling dysfunctional collective ideas, perhaps we are already witnessing the beginning of a profound paradigm shift towards collective intelligence.

Take the recent emergence of decentralized financial and economic interactions that are slowly disrupting the status quo. For instance, the nascent rise of the blockchain ledger in a trustless peer-to-peer network, enabling previously unthinkable ways of human economic cooperation. Or the impact of free-access and free-content collaborative efforts providing us with unrestricted availability of nearly unlimited knowledge and constantly evolving, cutting-edge software. Or peer-to-peer lending, crowdfunding, and crowdsourcing, with the capacity to leverage the network effect created by a collective of like-minded people. And not to forget the success of share economies, offering a radically different blueprint to the way business has been conducted in the past. All these new technologies are based on bottom-up, dynamic, decentralized, networked, unconstrained, and self-organizing human interactions. It is impossible to gauge the future impact of these systems today. Similarly, imagine trying to assess the potential of a new technology, called the Internet, in the early 1990s. No one had the audacity to predict what has emerged from that initial network, then comprised of a few million computers, now affecting every aspect of modern human life.

We are truly living in a brave new world of unprecedented potential, where future utopias or dystopias are only separated by a thought, an idea, a behavior able to replicate and trigger self-organizing and adaptive collective action. So, where will you be at the dawning of human collective intelligence?

Wednesday, July 22, 2015

The Consciousness of Reality / The Illusion of Knowledge

back in the game;)
The following is another iteration of my little hobby (see the "Evolution Section" at the end):

This talk is based largely on Parts II and III of the book I'm currently writing, with the working title:

The Illusion of Knowledge:
Why Uncertainty is Woven into Any Description of Reality—And Why It Does Not Matter

Part I focuses on formal thought systems, mathematics, and physics---in great detail, as witnessed by the over 140 equations introduced to the reader. In essence, Part I is a testimony to the human mind's unprecedented understanding of the workings of the reality it finds itself embedded in. Then, in Part II, notions of certainty are deconstructed, with respect to knowledge, truth, and reality. The subjective, context-dependent, and ambiguous nature of every experience and belief is emphasized.

So where does this leave us? Do we really live in a cynical universe, one which reveals itself to the human mind just far enough to awaken a false hope in its comprehensibility, leaving us forever in a state of epistemological nihilism? I sincerely believe otherwise: enter Part III. With brave, radical, out-of-the-box thinking, I believe we can advance our knowledge of the most fundamental questions relating to our existence and to existence itself. Some such ideas are: the information-theoretic and information-processing foundation of reality (the universe as a computer, reality as a simulation), next to the primacy and/or universality of consciousness (consciousness creates reality).

The TOK so far:

The book will be an open access publication with Springer. Yes, at one point I will try and crowd-fund the costs;)

The slides are found here and the transcript of the talk reads:

What is real? Well, all of this obviously. But what exactly is it? OK, so you all woke up this morning.

A sense of self kicked in. [break] Memories returned. [break] And you became aware of an external world. So, you are an entity that exists in a physical reality.

But this begs three questions. [break] What can we know about these things? [break] What is the true nature of reality? [break] And what is an “I” anyway? OK, so let’s start with the question of knowledge.

“The more you know, the less you understand.” [break] “I know that I know nothing.” To be fair, these quotes are quite old. Surely today people are less uncertain. Well…

“Those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt.” This is from the great philosopher and mathematician Bertrand Russell. [break] “While differing widely in the various little bits we know, in our infinite ignorance we are all equal.” Karl Popper was one of the most influential philosophers of science. And in the same vein: [break] “Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.” Daniel Kahneman is the father of behavioral economics and a Nobel laureate. OK, so let’s agree that from a philosophical point of view the notion of certainty is a bit tricky. But we have science, which is a knowledge-generation machine. Or not?

Ever since the Pythagoreans, people have realized that the book of nature is written in the language of mathematics. Or in the words of the great mathematician David Hilbert: [break] “Mathematics is the foundation of all exact knowledge of natural phenomena.” But this raises a very profound question.

“What is it that breathes fire into the equations and makes a universe for them to describe?” [break] So, basically, “Two miracles confront us here: the existence of laws of nature and the human mind's capacity to divine them.” You all know Stephen Hawking; Eugene Wigner received the Nobel Prize in Physics in the 1960s. Physicists have often been puzzled about the general nature of science, because

“Fundamentally, we do not know why our theories work so well.” [break] And “The deeper an explanation is, the more new problems it creates.” David Deutsch is one of the pioneers of quantum computing. But the bewilderment does not stop here.

“There is no logical path to these laws; only intuition can reach them.” [break] “Perhaps it is culture rather than nature that dictates the content of scientific theories.” Now intuition and culture don’t usually spring to mind when thinking about science. And they are also not really ideas one would associate with these two great physicists.

So, yes, science works and gives us the amazing gift of technology. But what science exactly is and why it works no one really knows. And surprisingly, Kurt Gödel and Gregory Chaitin showed us that at the heart of mathematics lurks incompleteness and randomness. OK, to summarize: What we have been talking about is called epistemology. It is the branch of philosophy concerned with the nature of knowledge. But just because our knowledge of reality turns out to be a bit elusive doesn’t mean that reality itself should be suspect. Now ontology is the word philosophers use when dealing with the nature of reality. OK, so, let’s move on to this question, what about the nature of reality? Well…

“Modern physics has conquered domains that display an ontology that cannot be coherently captured or understood by human reasoning.” Ernst von Glasersfeld was a distinguished philosopher who coined the term “radical constructivism”. This is the idea, that all knowledge is always subjective. But what exactly is he talking about here?

Let’s zoom into the fabric of reality. In other words, let’s enter the quantum world. [break] These are symbols from the book of nature. This equation describes the birth of quantum physics. Something no one saw coming. Indeed, Max Planck introduced it in an act of despair. [break] And it turns out that this new realm of reality is a truly bizarre place. Particles behave as waves and vice versa, depending on how you look at them. [break] There is an intrinsic limit to the amount of information you can have. [break] And everything is instantaneously connected to everything else. This is called entanglement and you can use it to encrypt information. But things get worse. Some quantum experiments are truly mindboggling: they appear to alter the past or break causality.

OK, let’s look at the universe. What is out there? [break] Well, it turns out that of all the stuff there is, only 5% is ordinary matter. 26% is called dark matter, and no one really knows what it is made of. And 69% is dark energy, some mysterious force in the vacuum making our universe expand faster and faster the bigger it gets. This was discovered in 1998 and was awarded a Nobel Prize.

And even things as innocuous as time can be very problematic on closer inspection. So much so that some physicists suspect it doesn’t really exist. [break] “The passage of time is simply an illusion created by our brains.” [break] And what about emergence and self-organization? This is a map of the internet. It is as though there is a fundamental force in the universe driving it to ever higher levels of complexity and structure. Just look at ants: where does the collective intelligence come from that allows them to become such an amazingly clever super-organism? And why can’t we humans achieve this?

OK, so reality is indeed a very weird place. But perhaps we can find a sanctuary of clarity and regularity within ourselves.

Let’s look at our brains and how we perceive the world. [break] “Instead of reality being passively recorded by the brain, it is actively constructed by it.” [break] “You're not perceiving what's out there. You're perceiving whatever your brain tells you.” [break] “What we call normal perception does not really differ from hallucinations, except that the latter is not anchored by external input.” Wow. But at least I am in control of my mind. Or not?

“The conscious mind is not at the center of action, but on a distant edge, hearing but whispers of the activity.” [break] “The exact levels of dozens of neurotransmitters are critical for who you believe yourself to be.” [break] “Beliefs about logic, economics, ethics, emotions, beauty, social interaction, love, are all products of the biologically evolved ‘hardwiring’ in the brain.”
These are the words of David Eagleman. He is a neuroscientist and writer. And yes, things get worse.

These two books are an embarrassment to any human being who believes in rationality. Countless experiments show how easily we can be manipulated. Without ever suspecting a thing. And don’t fool yourself. We all fail equally at this.

Other experiments have shown that the simple expectation of an experience changes how you perceive it. For instance, tasting wine you thought was expensive results in neural activity in your pleasure center. This does not happen for the same wine if you are told it is cheap. The same is true for how you feel pain. And then there are the placebo and nocebo effects, where your beliefs shape your reality. Like overdosing on sugar pills and nearly dying because you thought they were antidepressants. Then there is this phenomenon called false awakening, where you dream that you wake up. To experience this can be quite unsettling. [break] “To wake up twice in a row is something that can shatter many of the intuitions you have about consciousness: [break] that the vividness, the coherence, and the crispness of a conscious experience are evidence that you are really in touch with reality.” Thomas Metzinger is a philosopher of the mind interested in neuroscience. He asks: “Well how do you know that you actually woke up this morning?”

And then things can go terribly wrong in the mind. This book on psychopathology is a frightening 800 pages thick.

Jill Bolte Taylor studies brain anatomy. When she had a golf-ball-sized blood clot in her left hemisphere due to a stroke, this is what she experienced. [break] “My consciousness shifted away from my normal perception of reality, to some esoteric space where I'm witnessing myself having this experience.” [break] “I can no longer define the boundaries of my body - I can't define where I begin and where I end.” [break] “I felt at one with all the energy that was, and it was beautiful there.” Now these aren't really words I would expect from someone whose left brain is being damaged, but rather from someone like…

…this. This is Christian Rätsch. He is an anthropologist specialized in ethnopharmacology. His book, called “The Encyclopedia of Psychoactive Plants”, is nearly 1’000 pages thick. And remember what Eagleman said about hallucinations: they are just as real. Perhaps this realization prompted the next quote. [break] “There are these extraordinary other types of universe.” Aldous Huxley was talking about his experiences with LSD.

So what does this all mean? Where does it leave us? Well, if we are really honest, the answer is [break] We don’t know. Basically, we are back to René Descartes: “I think, therefore I exist.” So, the only thing I cannot deny is that I am having a subjective experience now. That’s all. But perhaps we can do better. Perhaps, if we are willing to abandon some of our cherished beliefs about reality, we can start to understand more. And there is a glimmer on the horizon.

“Information is physical.” [break] “All things physical are information-theoretic.” John Wheeler helped develop general relativity, giving us the term “black hole”. And Rolf Landauer made important contributions to information processing in the 60s.

“The universe is made not of chunks of stuff, but chunks of information — ones and zeros.” [break] “Quantum physics requires us to abandon the distinction between information and reality.” Seth Lloyd and Anton Zeilinger are currently pioneering the field of quantum information. They are helping us build quantum computers.

A second theme is that we are in fact involved in creating reality. This is an idea going back to Immanuel Kant and is also found in Buddhism. [break] “This is a participatory universe.” [break] “Reality is something that comes into being through the very act of human cognition.” [break] “Consciousness is all that exists. Space-time and matter never were fundamental denizens of the universe but have always been among the humbler contents of consciousness.” Richard Tarnas is a historian and author of the book “The Passion of the Western Mind”, an epic journey through all the ideas that have shaped our modern world view. And Donald Hoffman is a cognitive scientist. So, continuing with this idea:

“Our belief that there is a single universe shared by multiple observers is wrong. Instead, each observer has their own universe.” [break] “This cosmic solipsism turns on all of our common sense notions about the world; then again fundamental physics has a long history of disregarding our common sense notions.” Amanda Gefter is a science journalist and author. And solipsism is the view that only one’s own mind exists. Her idea can perhaps also be summarized as follows:

“Objectivity is the illusion that observations are made without an observer.” Heinz von Foerster was a physicist and philosopher and one of the pioneers of cybernetics.

But finally, a word of caution. Although these last quotes come from very sober and keen thinkers, they still could be wrong. In fact, everything I have been saying could be wrong. But if we all really are in charge of our own universe, made of pure information, it is essential for us to look for wisdom and truth within ourselves. Perhaps looking for reality outside of the mind is the wrong way to go. Thank you.

Index under construction (mostly just people for now):

  • I don't really recall when this all started. Ever since I was 15 I wanted to study physics. After graduating in 1999, I had more questions than answers relating to reality and consciousness.
  • In 2001: "On the Structure of the Vacuum and the Dynamics of Scalar Fields" (here) looking at some of the limits of modern physics.
  • My first job (which would last for 12 years, where I was developing trading model algorithms for the FX market at Olsen Ltd) was an intuitive transition from fundamental physics to complex systems. "A New Kind of Science" by S. Wolfram.
  • In 2005 I was looking for a new challenge in addition to my work and googled the Chair of Systems Design by chance.
  • Applied for a PhD there (50% next to my finance job) and had to give a talk that summer where I started to think about the analytical/algorithmic and fundamental/complex paradigms;  "Alternate Realities: Mathematical Models of Nature and Man" by J. Casti: science as the art of encoding reality domains into abstract representations.
  • In the summer of 2006 I read "Zen and the Art of Motorcycle Maintenance" by R. Pirsig in a hammock on one of the Andaman islands after visiting Delhi and Varanasi for our charity
  • I consolidated a lot of stuff between 2006 and 2009. These blog posts at Olsen Ltd and stuff on my old webpage: 1, 2, 3, 4.
  • It all got more serious when I took a course on the philosophy of science during my PhD in 2008: G. Brun and D. Kuenzle, ETH Zurich.
  • All of this now fuelled the contents of Appendix A of my dissertation (2010, PDF) which got updated and published in Springer's Theses series (2013): Laws of nature; paradigms of fundamental processes and complex systems; epistemological and ontological challenges; postmodernism, constructivism, and relativism.
  • Which then prompted these blog posts about certainty, reality, and perception: 1, 2, 3 (2011 - 2012).
  • Sometime: "Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos" S. Lloyd.
  • 2011: "The Passion of the Western Mind: Understanding the Ideas That Have Shaped Our World View" R. Tarnas, "Incognito: The Secret Lives of the Brain" D. Eagleman.
  • Ideas which also flowed into this Ignite talk in 2011 (or as blog post), which represents a rough sketch of the current talk. 
  • 2012: "The Ego Tunnel: The Science of the Mind and the Myth of the Self" T. Metzinger.
  • Not sure when the ideas of consciousness entered the picture, but happy to see such crazy ideas also being espoused by scientists and philosophers today.
  • In 2013 I started negotiating with Springer and started writing... 

Saturday, February 1, 2014

snow, wind, and avalanches

I <3 pow
Freeriding is arguably the most fun thing to do on a snowboard. But as the proverb has it: no risk, no fun. There is always a looming threat from avalanches. Although judging avalanche danger is today based on a lot of scientific knowledge, allowing for proper assessments and decision strategies (see, for instance, Werner Munter), there is always a residual risk. Avalanches are very complex phenomena, depending on a web of factors like temperature, slope orientation and steepness, terrain, vegetation, snowpack, ...

A very difficult variable to deal with is wind. Heavy winds during snow fall can pack incredible amounts of snow at very specific exposures. And windy conditions after the last snow fall can result in very local hot spots. Often only experience can help here.

Recently, we had to deal with this. In order to reach the side of the mountain we planned on descending, there was some windpacked powder to deal with. Between the three of us, we triggered four avalanches. Luckily they were all small and superficial - but you never know. Interestingly, the final couloirs greeted us with epic pow, very different in quality to the other slopes...

Perhaps the greatest safety accomplishment in recent years has been the introduction of avalanche airbags. A simple idea based on increasing the volume associated with the freerider. In an avalanche, understood as granular media moving under the influence of gravity, larger particles tend to travel to the surface. This is vital for survival, as being rescued within about 20 minutes results in a very good survival rate, which drops significantly after that.

One last thing. If you are "lucky" enough to be close to the tear where the avalanche rips away from the slope, you have a few seconds left to do the right thing. Besides deploying the airbag, you can actually try to ride out of the avalanche. When the snow silently crumbles around you, it's like surfing! Your board actually carries you, and if you are not distracted by the dynamics of everything around you moving, you can focus on a sideways exit. This happened to me here:

Not sure how easy this is on skis though, as you can see here, here, and here (note the effect of the airbag - the last guy didn't have one; those must have been long 5 1/2 minutes).

Watch the pros struggling: 1, 2, 3, 4, 5. And try not to do this, after you decide to gun it.

And then there's these guys: 1, 2.

Please, don't be one of those people who turn up with no safety equipment or say stuff like, "but I've never seen an avalanche come down on this slope" or "hey, there were already some tracks, no big deal"!

And finally, why bother? Why expose yourself to unnecessary risk?
Because it is so much fun, that's why:)

Safe and awesome freeriding!

Wednesday, November 6, 2013

old posts from

This is a collection of old blog posts, going back to 2006. For some strange reason I thought it would be a good idea to have two blogs. They have been migrated here from

a philosophy of science primer - part III

  • part I: some history of science and logical empiricism,
  • part II: problems of logical empiricism, critical rationalism and its problems.
After the unsuccessful attempts to found science on common sense notions as seen in the programs of logical empiricism and critical rationalism, people looked for new ideas and explanations.
the thinker

The Kuhnian View

Thomas Kuhn’s enormously influential work on the history of science is called The Structure of Scientific Revolutions. He revised the idea that science is an incremental process accumulating more and more knowledge. Instead, he identified the following phases in the evolution of science:
  • prehistory: many schools of thought coexist and controversies are abundant,
  • history proper: one group of scientists establishes a new solution to an existing problem which opens the doors to further inquiry; a so-called paradigm emerges,
  • paradigm based science: unity in the scientific community on what the fundamental questions and central methods are; generally a problem solving process within the boundaries of unchallenged rules (analogy to solving a Sudoku),
  • crisis: more and more anomalies and boundaries appear; questioning of established rules,
  • revolution: a new theory and weltbild takes over solving the anomalies and a new paradigm is born.
Another central concept is incommensurability, meaning that proponents of different paradigms cannot understand the other’s point of view because they have diverging ideas and views of the world. In other words, every rule is part of a paradigm and there exist no trans-paradigmatic rules.
This implies that such revolutions are not rational processes governed by insights and reason. In the words of Max Planck (the founder of quantum mechanics; from his autobiography):
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.
Kuhn gives additional blows to a commonsensical foundation of science with the help of Norwood Hanson and Willard Van Orman Quine:
  • every human observation of reality contains an a priori theoretical framework,
  • underdetermination of belief by evidence: any evidence collected for a specific claim is logically consistent with the falsity of the claim,
  • every experiment is based on auxiliary hypotheses (initial conditions, proper functioning of apparatus, experimental setup,…).
People slowly started to realize that Kuhn’s ideas, and the problems faced by the logical empiricists and critical rationalists in establishing a sound logical and empirical foundation of science, have serious consequences:
  • postmodernism,
  • constructivism or the sociology of science,
  • relativism.


Postmodernism

Modernism describes the development of Western industrialized society since the beginning of the 19th Century. A central idea was that there exist objective true beliefs and that progression is always linear.
Postmodernism replaces these notions with the belief that many different opinions and forms can coexist and all find acceptance. Core ideas are diversity, differences, and intermingling. It is seen as entering scientific and cultural thinking in the 1970s.
Postmodernism has taken a bad rap from scientists after the so-called Sokal affair, where the physicist Alan Sokal got a nonsensical paper published in a journal of postmodern cultural studies by flattering the editors' ideology with nonsense that sounds good.
Postmodernism has been associated with scepticism and solipsism, alongside relativism and constructivism.
Notable scientists identifiable as postmodernists are Thomas Kuhn, David Bohm and many figures in the 20th century philosophy of mathematics. As well as Paul Feyerabend, an influential philosopher of science.


Constructivism

To quote the Nobel laureate Steven Weinberg on Kuhnian revolutions:
If the transition from one paradigm to another cannot be judged by any external standard, then perhaps it is culture rather than nature that dictates the content of scientific theories.
Constructivism excludes objectivism and rationality by postulating that beliefs are always subject to a person’s cultural and theological embedding and inherent idiosyncrasies. It also goes under the label of the sociology of science.
In the words of Paul Boghossian (in his book Fear of Knowledge: Against Relativism and Constructivism):
Constructivism about rational explanation: it is never possible to explain why we believe what we believe solely on the basis of our exposure to the relevant evidence; our contingent needs and interests must also be invoked.
The proponents of constructivism go further:
[…] all beliefs are on a par with one another with respect to the causes of their credibility. It is not that all beliefs are equally true or equally false, but that regardless of truth and falsity the fact of their credibility is to be seen as equally problematic.
From Barry Barnes’ and David Bloor’s Relativism, Rationalism and the Sociology of Knowledge.
In its radical version, constructivism fully abandons objectivism:
  • Objectivity is the illusion that observations are made without an observer (from the physicist Heinz von Foerster; my translation),
  • Modern physics has conquered domains that display an ontology that cannot be coherently captured or understood by human reasoning (from the philosopher Ernst von Glasersfeld; my translation).
In addition, radical constructivism proposes that perception never yields an image of reality but is always a construction of sensory input and the memory capacity of an individual. An analogy would be the submarine captain who has to rely on instruments to indirectly gain knowledge from the outside world. Radical constructivists are motivated by modern insights gained by neurobiology.
Historically, Immanuel Kant can be understood as the founder of constructivism. On a side note, the bishop George Berkeley went even as far as to deny the existence of an external material reality altogether. Only ideas and thought are real.


Relativism

Another consequence of the foundations of science lacking commonsensical elements and the ideas of constructivism can be seen in the notion of relativism. If rationality is a function of our contingent and pragmatic reasons, then it can be rational for group A to believe P, while at the same time it is rational for group B to believe the negation of P.
Although, as a philosophical idea, relativism goes back to the Greek Protagoras, its implications are unsettling for the Western mind: anything goes (as Paul Feyerabend characterizes his idea of scientific anarchy). If there is no objective truth, no absolute values, nothing universal, then a great many of humanity's centuries-old concepts and beliefs are in danger.
It should, however, also be mentioned that relativism is prevalent in Eastern thought systems and is, for example, found in many Indian religions. In a similar vein, pantheism and holism are notions which are much more compatible with Eastern thought systems than Western ones.
Furthermore, John Stuart Mill’s arguments for liberalism appear to also work well as arguments for relativism:
  • fallibility of people’s opinions,
  • opinions that are thought to be wrong can contain partial truths,
  • accepted views, if not challenged, can lead to dogmas,
  • the significance and meaning of accepted opinions can be lost in time.
From his book On Liberty.


But could relativism possibly be true? Consider the following hints:
  • Epistemological
    • problems with perception: synaesthesia, altered states of consciousness (spontaneous, mystical experiences and drug induced),
    • psychopathology describes a frightening amount of defects in the perception of reality and one's self,
    • people suffering from psychosis or schizophrenia can experience a radically different reality,
    • free will and neuroscience,
    • synthetic happiness,
    • cognitive biases.
  • Ontological
    • nonlocal foundation of quantum reality: entanglement, delayed choice experiment,
    • illogical foundation of reality: wave-particle duality, superpositions, uncertainty, intrinsic probabilistic nature, time dilation (special relativity), observer/measurement problem in quantum theory,
    • discreteness of reality: quanta of energy and matter, constant speed of light,
    • nature of time: not present in fundamental theories of quantum gravity, symmetrical,
    • arrow of time: why was the initial state of the universe very low in entropy?
    • emergence, self-organization, and structure formation.
In essence, perception doesn’t necessarily say much about the world around us. Consciousness can fabricate reality. This makes it hard to be rational. Reality is a really bizarre place. Objectivity doesn’t seem to play a big role.
And what about the human mind? Is this at least a paradox-free realm? Unfortunately not. Even what appears as a consistent and logical formal thought system, i.e., mathematics, can be plagued by fundamental problems. Kurt Gödel proved that in every consistent system of mathematical axioms rich enough to express the elementary arithmetic of whole numbers, there exist statements which can neither be proven nor disproved within the system. So logical axiomatic systems are incomplete.
As an example, Bertrand Russell encountered the following paradox: let R be the set of all sets that do not contain themselves as members. Is R an element of itself or not?
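The paradox can be written out formally; assuming naive (unrestricted) set comprehension:

```latex
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R
```

Either answer implies its own negation, so naive comprehension is inconsistent; this is what motivated axiomatic set theories such as Zermelo-Fraenkel.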
If you really accede to the idea that reality and the perception of reality by the human mind are very problematic concepts, then the next puzzles are:
  • why has science been so fantastically successful at describing reality?
  • why is science producing amazing technology at breakneck speed?
  • why is our macroscopic, classical level of reality so well behaved and appears so normal although it is based on quantum weirdness?
  • are all beliefs justified given the believer's biography and brain chemistry?

a philosophy of science primer - part II

Continued from part I

The Problems With Logical Empiricism

The programme proposed by the logical empiricists, namely that science is built of logical statements resting on an empirical foundation, faces central difficulties. To summarize:
  • it turns out that it is not possible to construct pure formal concepts that solely reflect empirical facts without anticipating a theoretical framework,
  • how does one link theoretical concepts (electrons, utility functions in economics, inflational cosmology, Higgs bosons,…) to experiential notions?
  • how to distinguish science from pseudo-science?
Now this may appear a little technical and not very interesting or fundamental to people outside the field of the philosophy of science, but it gets worse:
  • inductive reasoning is invalid from a formal logical point of view!
  • causality defies standard logic!

This is big news. So, just because I have witnessed the sun going up every day of my life (single observations), I cannot say it will go up tomorrow (general law). Observation alone does not suffice; you need a theory. But the whole idea here is that the theory should come from observation. This leads to the dead end of circular reasoning.
But surely causality is indisputable? Well, apart from the problems coming from logic itself, there are extreme examples to be found in modern physics which undermine the common sense notion of a causal reality: quantum nonlocality, the delayed choice experiment.
But challenges often inspire people, so the story continues…

Critical Rationalism

OK, so the logical empiricists faced problems. Can’t these be fixed? The critical rationalists believed so. A crucial influence came from René Descartes’ and Gottfried Leibniz’ rationalism: knowledge can have aspects that do not stem from experience, i.e., there is an immanent reality to the mind.
The term critical refers to the fact that insights gained by pure thought cannot be strictly justified but only critically tested against experience. Ultimate justifications lead to the so-called Münchhausen trilemma, i.e., one of the following:
  • an infinite regress of justifications,
  • circular reasoning,
  • dogmatic termination of reasoning.
The most influential proponent of critical rationalism was Karl Popper. His central claims were, in essence:
  • use deductive reasoning instead of induction,
  • theories can never be verified, only falsified.
Although there are similarities with logical empiricism (an empirical basis, science as a set of theoretical constructs), the idea is that theories are simply invented by the mind and are temporarily accepted until they can be falsified. The progression of science is hence seen as an evolutionary process rather than a linear accumulation of knowledge.
Sounds good, so what went wrong with this ansatz?

The Problems With Critical Rationalism

In a nutshell:
  • basic formal concepts cannot be derived from experience without induction; how can they be shown to be true?
  • deduction turns out to be just as tricky as induction,
  • what parts of a theory need to be discarded once it is falsified?
To see where deduction breaks down, there is a nice story by Lewis Carroll (the mathematician who wrote the Alice in Wonderland stories): What the Tortoise Said to Achilles.
If deduction goes down the drain as well, not much is left to ground science on notions of logic, rationality and objectivity. Which is rather unexpected of an enterprise that in itself works amazingly well employing just these concepts.

Explanations in Science

And it gets worse. Inquiries into the nature of scientific explanation reveal further problems. The classic account is Carl Hempel’s and Paul Oppenheim’s formalisation of scientific inquiry in natural language. Two basic schemes are identified: deductive-nomological and inductive-statistical explanations. The idea is to show that what is being explained (the explanandum) is to be expected on the grounds of these two types of explanations.
The first tries to explain things deductively in terms of regularities and exact laws (nomological). The second uses statistical hypotheses and explains individual observations inductively. Albeit very formal, this inquiry into scientific inquiry is very straightforward and commonsensical.
Again, the programme fails:
  • can’t explain singular causal events,
  • asymmetric (a change in the air pressure explains the readings on a barometer, however, the barometer doesn’t explain why the air pressure changed),
  • many explanations are irrelevant,
  • as seen before, inductive and deductive logic is controversial,
  • how to employ probability theory in the explanation?
So what next? What are the consequences of these unexpected and spectacular failings of the simplest premises one would wish science to be grounded on (logic, empiricism, causality, common sense, rationality, …)?
The discussion is ongoing and isn’t expected to be resolved soon. See part III.

a philosophy of science primer - part I

Naively one would expect science to adhere to two basic notions:
  • common sense, i.e., rationalism,
  • observation and experiments, i.e., empiricism.
Interestingly, both concepts turn out to be very problematic if applied to the question of what knowledge is and how it is acquired. In essence, they cannot be seen as a foundation for science.
But first a little history of science…

Classical Antiquity

The Greek philosopher Aristotle was one of the first thinkers to introduce logic as a means of reasoning. His empirical method was driven by gaining general insights from isolated observations. He had a huge influence on the thinking within the Islamic and Jewish traditions next to shaping Western philosophy and inspiring thinking in the physical sciences.

Modern Era

Nearly two thousand years later, not much had changed. Francis Bacon (the philosopher, not the painter) made modifications to Aristotle’s ideas, introducing the so-called scientific method, in which inductive reasoning plays an important role. He paved the way for a modern understanding of scientific inquiry.
Approximately at the same time, Robert Boyle was instrumental in establishing experiments as the cornerstone of physical sciences.

Logical Empiricism

So far so good. By the early 20th Century the notion that science is based on experience (empiricism) and logic, and where knowledge is intersubjectively testable, has had a long history.
The philosophical school of logical empiricism (or logical positivism) tries to formalise these ideas. Notable proponents were Ernst Mach, Ludwig Wittgenstein, Bertrand Russell, Rudolf Carnap, Hans Reichenbach, Otto Neurath. Some main influences were:
  • David Hume’s and John Locke’s empiricism: all knowledge originates from observation, nothing can exist in the mind which wasn’t before in the senses,
  • Auguste Comte’s and John Stuart Mill’s positivism: there exists no knowledge outside of science.
In this paradigm (see Thomas Kuhn a little later) science is viewed as a building comprised of logical terms based on an empirical foundation. A theory is understood as having the following structure: observation -> empirical concepts -> formal notions -> abstract law. Basically a sequence of ever higher abstraction.
This notion of unveiling laws of nature by starting with individual observations is called induction (the other way round, starting with abstract laws and ending with a tangible factual description is called deduction, see further along).
And here the problems start to emerge. See part II

Stochastic Processes and the History of Science: From Planck to Einstein

How are the notions of randomness, i.e., stochastic processes, linked to theories in physics and what have they got to do with options pricing in economics?
How did the prevailing world view change from 1900 to 1905?
What connects the mathematicians Bachelier, Markov, Kolmogorov, Ito to the physicists Langevin, Fokker, Planck, Einstein and the economists Black, Scholes, Merton?

The Setting

  • Science up to 1900 was in essence the study of solutions of differential equations (Newton’s heritage);
  • Was very successful, e.g., Maxwell’s equations: four differential equations describing everything about (classical) electromagnetism;
  • Prevailing world view:
    • Deterministic universe;
    • Initial conditions plus the solution of a differential equation yield a certain prediction of the future.

Three Pillars

By the end of the 20th Century, it became clear that there are (at least?) two additional aspects needed for a more complete understanding of reality:
  • Inherent randomness: statistical evaluations of sets of outcomes of single observations/experiments;
    • Quantum mechanics (Planck 1900; Einstein 1905) contains a fundamental element of randomness;
    • In chaos theory (e.g., Mandelbrot 1963) non-linear dynamics leads to a sensitivity to initial conditions which renders even simple differential equations essentially unpredictable;
  • Complex systems (e.g., Wolfram 1983), i.e., self-organization and emergent behavior, best understood as outcomes of simple rules.
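The sensitivity to initial conditions mentioned above can be demonstrated in a few lines; a sketch using the logistic map (a standard textbook example, not one discussed in the text; the parameter choices are mine):

```python
def logistic_map(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), a simple non-linear
    difference equation, and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two trajectories starting a mere 1e-10 apart diverge to
# order-one differences within a few dozen steps.
a = logistic_map(0.2)
b = logistic_map(0.2 + 1e-10)
divergence = max(abs(x - y) for x, y in zip(a, b))
```

No randomness is involved: the rule is fully deterministic, yet any uncertainty in the initial condition makes long-term prediction impossible in practice.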

Stochastic Processes

  • Systems which evolve probabilistically in time;
  • Described by a time-dependent random variable;
  • The probability density function describes the distribution of the measurements at time t;
  • Prototype: The Markov process.
For a Markov process, only the present state of the system influences its future evolution: there is no long-term memory. Examples:
  • Wiener process or Einstein-Wiener process or Brownian motion:
    • Introduced by Bachelier in 1900;
    • Continuous (in t and the sample path);
    • Increments are independent and drawn from a Gaussian normal distribution;
  • Random walk:
    • Discrete steps (jumps), continuous in t;
    • Converges to a Wiener process in the limit of the step size going to zero.
To summarize, there are three possible characteristics:
  1. Jumps (in sample path);
  2. Drift (of the probability density function);
  3. Diffusion (widening of the probability density function).
Probability distribution function showing drift and diffusion:
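Drift and diffusion can be checked empirically with an ensemble of discrete random walks; a minimal sketch (the parameter names and values are my own choices):

```python
import random

def walk_ensemble(n_paths=5000, n_steps=100, drift=0.05, sigma=0.1, seed=1):
    """Simulate many independent random walks, each step adding a
    deterministic drift plus Gaussian noise; return final positions."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x += drift + sigma * rng.gauss(0.0, 1.0)
        finals.append(x)
    return finals

finals = walk_ensemble()
mean = sum(finals) / len(finals)
var = sum((f - mean) ** 2 for f in finals) / len(finals)
# The ensemble mean drifts like drift * n_steps, while the
# variance diffuses (widens) like sigma**2 * n_steps.
```

The two characteristics of the probability density function, drift of the mean and diffusion of the width, fall straight out of the ensemble statistics.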
But how to deal with stochastic processes?

The Micro View

  • Einstein presented a theory of Brownian motion in 1905;
  • New paradigm: stochastic modeling of natural phenomena; statistics as an intrinsic part of the time evolution of a system;
  • Mean-square displacement of Brownian particle proportional to time;
  • Equation for the Brownian particle similar to a diffusion (differential) equation.
  • Langevin presented a new derivation of Einstein’s results in 1908;
  • First stochastic differential equation, i.e., a differential equation involving a “rapidly and irregularly fluctuating random force” (today called a random variable);
  • Solutions of differential equation are random functions.
However, no formal mathematical grounding until 1942, when Ito developed stochastic calculus:
  • Langevin’s equations interpreted as Ito stochastic differential equations using Ito integrals;
  • Ito integral defined to deal with non-differentiable sample paths of random functions;
  • Ito lemma (generalized integration rule) used to solve stochastic differential equations.
  • The Markov process is a solution to a simple stochastic differential equation;
  • The celebrated Black-Scholes option pricing formula is derived from a stochastic differential equation employing Brownian motion.
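The process underlying the Black-Scholes model, geometric Brownian motion, can be integrated numerically; a sketch using the Euler-Maruyama scheme, the standard discretization of an Ito SDE (all parameter values here are illustrative):

```python
import math
import random

def gbm_final_prices(s0=100.0, mu=0.05, sigma=0.2, t=1.0,
                     n_steps=250, n_paths=4000, seed=7):
    """Euler-Maruyama discretization of the Ito SDE
    dS = mu*S dt + sigma*S dW (geometric Brownian motion)."""
    rng = random.Random(seed)
    dt = t / n_steps
    sqrt_dt = math.sqrt(dt)
    finals = []
    for _ in range(n_paths):
        s = s0
        for _ in range(n_steps):
            s += mu * s * dt + sigma * s * sqrt_dt * rng.gauss(0.0, 1.0)
        finals.append(s)
    return finals

finals = gbm_final_prices()
mean = sum(finals) / len(finals)
# In expectation S_T = s0 * exp(mu * t), about 105.13 for these parameters.
```

Each sample path is non-differentiable in the continuum limit, which is exactly why the Ito integral is needed to make sense of the SDE.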

The Fokker-Planck Equation: Moving To The Macro View

  • The Langevin equation describes the evolution of the position of a single “stochastic particle”;
  • The Fokker-Planck equation describes the behavior of a large population of “stochastic particles”;
    • Formally: the Fokker-Planck equation gives the time evolution of the probability density function of the system;
  • Results can be derived more directly using the Fokker-Planck equation than using the corresponding stochastic differential equation;
  • The theory of Markov processes can be developed from this macro point of view.
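The agreement between the micro and macro views can be illustrated for the simplest case of constant drift and diffusion, where the Fokker-Planck equation has a Gaussian solution; a sketch (the coefficient names a and b and all values are mine):

```python
import math
import random

def langevin_ensemble(a=1.0, b=0.5, t=2.0, dt=0.02, n=10000, seed=3):
    """Micro view: integrate the Langevin equation dx = a*dt + sqrt(b)*dW
    for many independent particles; return their positions at time t."""
    rng = random.Random(seed)
    steps = round(t / dt)
    xs = []
    for _ in range(n):
        x = 0.0
        for _ in range(steps):
            x += a * dt + math.sqrt(b * dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def fokker_planck_cdf(x, a=1.0, b=0.5, t=2.0):
    """Macro view: for constant drift a and diffusion b the Fokker-Planck
    equation is solved by the Gaussian N(a*t, b*t); this is its CDF."""
    return 0.5 * (1.0 + math.erf((x - a * t) / math.sqrt(2.0 * b * t)))

xs = langevin_ensemble()
empirical = sum(1 for x in xs if x <= 2.5) / len(xs)
analytic = fokker_planck_cdf(2.5)
# The empirical distribution of many Langevin particles matches
# the analytic Fokker-Planck solution.
```

The macro route gives the answer in closed form, while the micro route needs a large ensemble; this is the computational advantage mentioned above.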

The Historical Context


Bachelier

  • Developed a theory of Brownian motion (Einstein-Wiener process) in 1900 (five years before Einstein, and long before Wiener);
  • Was the first person to use a stochastic process to model financial systems;
  • Essentially his contribution was forgotten until the late 1950s;
  • Black, Scholes and Merton’s publication in 1973 finally gave Brownian motion its breakthrough in finance.


Planck

  • Founder of quantum theory;
  • 1900 theory of black-body radiation;
  • Central assumption: electromagnetic energy is quantized, E = h v;
  • In 1914 Fokker derived an equation on Brownian motion, which Planck later proved;
  • Planck applied the Fokker-Planck equation as a quantum mechanical equation, which turned out to be wrong;
  • In 1931 Kolmogorov presented two fundamental equations on Markov processes;
  • It was later realized, that one of them was actually equivalent to the Fokker-Planck equation.


Einstein

1905 “Annus Mirabilis” publications. Fundamental paradigm shifts in the understanding of reality:
  • Photoelectric effect:
    • Explained by giving Planck’s (theoretical) notion of energy quanta a physical reality (photons),
    • Further establishing quantum theory,
    • Winning him the Nobel Prize;
  • Brownian motion:
    • First stochastic modeling of natural phenomena,
    • The experimental verification of the theory established the existence of atoms, which had been heavily debated at the time,
    • Einstein’s most frequently cited paper, in the fields of biology, chemistry, earth and environmental sciences, life sciences, engineering;
  • Special theory of relativity: the relative speeds of the observers’ reference frames determine the passage of time;
  • Equivalence of energy and mass (follows from special relativity): E = m c^2.
Einstein was working at the Patent Office in Bern at the time and submitted his Ph.D. to the University of Zurich in July 1905.
Later Work:
  • 1915: general theory of relativity, explaining gravity in terms of the geometry (curvature) of space-time;
    • Planck also made contributions to general relativity;
  • Although having helped in founding quantum mechanics, he fundamentally opposed its probabilistic implications: “God does not throw dice”;
  • Dreams of a unified field theory:
    • Spent his last 30 years or so trying (unsuccessfully) to extend the general theory of relativity to unite it with electromagnetism;
    • Kaluza and Klein elegantly managed to do this in 1921 by developing general relativity in five space-time dimensions;
    • Today there is still no empirically validated theory able to explain gravity and the (quantum) Standard Model of particle physics, despite intense theoretical research (string/M-theory, loop quantum gravity);
    • In fact, one of the main goals of the LHC at CERN (officially operational on the 21st of October 2008) is to find hints of such a unified theory (supersymmetric particles, higher dimensions of space).

laws of nature

What are Laws of Nature?

  • Regularities/structures in a highly complex universe
  • Allow for predictions
  • Dependent on only a small set of conditions (i.e., independent of very many conditions which could possibly have an effect)

…but why are there laws of nature and how can these laws be discovered and understood by the human mind?

No One Knows!

  • G.W. von Leibniz in 1714 (Principes de la nature et de la grâce):
    • Why is there something rather than nothing? For nothingness is simpler and easier than anything
  • E. Wigner, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences“, 1960:
    • […] the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and […] there is no rational explanation for it
    • […] it is not at all natural that “laws of nature” exist, much less that man is able to discover them
    • […] the two miracles of the existence of laws of nature and of the human mind’s capacity to divine them
    • […] fundamentally, we do not know why our theories work so well

In a Nutshell

  • We happen to live in a structured, self-organizing, and fine-tuned universe that allows the emergence of sentient beings (anthropic principle)
  • The human mind is capable of devising formal thought systems (mathematics)
  • Mathematical models are able to capture and represent the workings of the universe
See also this post: in a nutshell.

The Fundamental Level of Reality: Physics

Mathematical models of reality are independent of their formal representation: invariance and symmetry
  • Classical mechanics: invariance of the equations under transformations (e.g., time => conservation of energy)
  • Gravitation (general relativity): geometry and the independence of the coordinate system (covariance)
  • The other three forces of nature (unified in quantum field theory): the mathematics of symmetry and a special kind of invariance
See also these posts: fundamental, invariant thinking.
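The link between invariance and conservation is made precise by Noether's theorem; schematically, for a classical Lagrangian with no explicit time dependence, the energy function is conserved:

```latex
\frac{\partial L}{\partial t} = 0
\quad\Longrightarrow\quad
E \;=\; \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L
\;=\; \text{const.}
```

This is the classical-mechanics case listed above: invariance under time translations yields conservation of energy.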

Towards Complexity

  • Physics was extremely successful in describing the inanimate world in the last 300 years or so
  • But what about complex systems comprised of many interacting entities, e.g., the life and social sciences?
  • “The rest is chemistry”; C. D. Anderson in 1932; echoing the success of a reductionist approach to understanding the workings of nature after having discovered the positron
  • “At each stage [of complexity] entirely new laws, concepts, and generalizations are necessary […]. Psychology is not applied biology, nor is biology applied chemistry”; P. W. Anderson in 1972; pointing out that knowledge about the constituents of a system doesn’t reveal any insights into how the system will behave as a whole; so it is not at all clear how you get from quarks and leptons via DNA to a human brain…

Complex Systems: Simplicity

The Limits of Physics
  • Closed-form solutions to analytical expressions are mostly only attainable if non-linear effects (e.g., friction) are ignored
  • Not too many interacting entities can be considered (e.g., three body problem)
The Complexity of Simple Rules
  • S. Wolfram’s cellular automaton rule 110: neither completely random nor completely repetitive
  • “[The] results [simple rules give rise to complex behavior] were so surprising and dramatic that as I gradually came to understand them, they forced me to change my whole view of science […]”; S. Wolfram reminiscing on his early work on cellular automata in the 80s (“A New Kind of Science”, p. 19)
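The flavor of rule 110 is easy to reproduce: the rule number’s binary expansion encodes the entire update table. A minimal Python sketch (grid width, boundary handling, and starting state are arbitrary choices here):

```python
# Elementary cellular automaton: the rule number's binary digits give the
# next state of a cell for each of the eight (left, center, right) patterns.
def step(cells, rule=110):
    """One update with periodic boundary conditions."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Evolve a single live cell and print the space-time diagram.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```

Printing many more rows shows the mix of regular and irregular structure Wolfram describes; rule 110 was later even proven to be Turing complete.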

Complex Systems: The Paradigm Shift

  • The interaction of entities (agents) in a system according to simple rules gives rise to complex behavior
  • The shift from mathematical (analytical) models to algorithmic computations and simulations performed in computers (only this bottom-up approach to simulating complex systems has been fruitful, all top-down efforts have failed: try programming swarming behavior, ant foraging, pedestrian/traffic dynamics, … not using simple local interaction rules but with a centralized, hierarchical setup!)
  • Understanding the complex system as a network of interactions (graph theory), where the complexity (or structure) of the individual nodes can be ignored
  • Challenge: how does the macro behavior emerge from the interaction of the system elements on the micro level?
See also these posts: complex, swarm theory, complex networks.

Laws of Nature Revisited

So are there laws of nature to be found in the life and social sciences?
  • Yes: scaling (or power) laws
  • Complex, collective phenomena give rise to power laws […] independent of the microscopic details of the phenomenon. These power laws emerge from collective action and transcend individual specificities. As such, they are unforgeable signatures of a collective mechanism; J.P. Bouchaud in “Power-laws in Economy and Finance: Some Ideas from Physics“, 2001

Scaling Laws

Scaling-law relations characterize an immense number of natural patterns (from physics, biology, earth and planetary sciences, economics and finance, computer science and demography to the social sciences) prominently in the form of
  • scaling-law distributions
  • scale-free networks
  • cumulative relations of stochastic processes
A scaling law, or power law, is a simple polynomial functional relationship
f(x) = a x^k     <=>   Y = (X/C)^E
Scaling laws
  • lack a preferred scale, reflecting their (self-similar) fractal nature
  • are usually valid across an enormous dynamic range (sometimes many orders of magnitude)
See also these posts: scaling laws, benford’s law.

Scaling Laws In FX

  • Event counts related to price thresholds
  • Price moves related to time thresholds
  • Price moves related to price thresholds
  • Waiting times related to price thresholds
FX scaling law

Scaling Laws In Biology

So-called allometric laws describe the relationship between two attributes of living organisms as scaling laws:
  • The metabolic rate B of a species scales with its mass M as B ~ M^(3/4)
  • The heartbeat (or breathing) rate T of a species scales with its mass as T ~ M^(-1/4)
  • The lifespan L of a species scales with its mass as L ~ M^(1/4)
  • Invariants: all species have the same number of heart beats in their lifespan (roughly one billion)
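Taken together, the quarter-power laws imply the heartbeat invariant directly: a rate scaling as M^(-1/4) times a lifespan scaling as M^(1/4) is independent of mass. A small sketch, where the prefactors a and b are illustrative assumptions (chosen only to land near the quoted ~10^9 beats, not fitted data):

```python
# Quarter-power allometry: heart rate falls and lifespan grows with mass,
# but their product (lifetime heartbeats) is mass-independent.
# The prefactors a and b are illustrative assumptions, not fitted values.

def heart_rate(mass_kg, a=200.0):      # beats per minute, ~ a * M^(-1/4)
    return a * mass_kg ** -0.25

def lifespan_years(mass_kg, b=10.0):   # years, ~ b * M^(1/4)
    return b * mass_kg ** 0.25

def lifetime_beats(mass_kg):
    minutes_per_year = 365.25 * 24 * 60
    return heart_rate(mass_kg) * lifespan_years(mass_kg) * minutes_per_year

for animal, mass in [("mouse", 0.02), ("human", 70.0), ("elephant", 5000.0)]:
    print(f"{animal:9s} {lifetime_beats(mass):.2e} beats")
```

The exponents cancel exactly, so the same number of lifetime beats is printed for every mass.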
allometric law
(Fig. G. West)
G. West et al. propose an explanation of the 1/4 scaling exponents, which follow from underlying principles embedded in the dynamical and geometrical structure of space-filling, fractal-like, hierarchical branching networks, presumed optimized by natural selection: organisms effectively function in four spatial dimensions even though they physically exist in three.


  • The natural world possesses structure-forming and self-organizing mechanisms leading to consciousness capable of devising formal thought systems which mirror the workings of the natural world
  • There are two regimes in the natural world: basic fundamental processes and complex systems comprised of interacting agents
  • There are two paradigms: analytical vs. algorithmic (computational)
  • There are ‘miracles’ at work:
    • the existence of a universe following laws leading to stable emergent features
    • the capability of the human mind to devise formal thought systems
    • the overlap of mathematics and the workings of nature
    • the fact that complexity emerges from simple rules
  • There are basic laws of nature to be found in complex systems, e.g., scaling laws

animal intelligence

This is the larger lesson of animal cognition research: It humbles us.
We are not alone in our ability to invent or plan or to contemplate ourselves—or even to plot and lie.
For a long time, many scientists believed animals were incapable of any thought. They were simply machines, robots programmed to react to stimuli but lacking the ability to think or feel.

We’re glimpsing intelligence throughout the animal kingdom.

Copyright Vincent J. Musi, National Geographic

A dog with a vocabulary of 340 words. A parrot that answers “shape” if asked what is different, and “color” if asked what is the same, when shown two items of different shape and the same color. An octopus with a “distinct personality” that amuses itself by shooting water at plastic-bottle targets (the first reported invertebrate play behavior). Lemurs with calculatory abilities. Sheep able to recognize faces (of other sheep and of humans) long term, and to discern moods. Crows able to make and use tools (in tests, even out of materials never seen before). Human-dolphin communication via an invented sign language (with simple grammar). Dolphins able to correctly interpret, on the first occasion, instructions given by a person displayed on a TV screen.
This may only be the tip of the iceberg…
Read the article Animal Minds in National Geographic’s March 2008 edition.
Ever think about vegetarianism?




complex networks

The study of complex networks was sparked at the end of the 90s by two seminal papers, describing their universal properties:
  • small-worlds property [1],
  • and scale-free nature [2] (see also this older post: scaling laws).
[Plots: weighted network (left), unweighted network (right)]

Today, networks are ubiquitous: phenomena in the physical world (e.g., computer networks, transportation networks, power grids, spontaneous synchronization of systems of lasers), biological systems (e.g., neural networks, epidemiology, food webs, gene regulation), and social realms (e.g., trade networks, diffusion of innovation, trust networks, research collaborations, social affiliation) are best understood if characterized as networks.
The explosion of this field of research was and is coupled with the increasing availability of
  • huge amounts of data, pouring in from neurobiology, genomics, ecology, finance, the World Wide Web, …,
  • computing power and storage facilities.


The new paradigm states that a complex system is best understood if it is mapped to a network: the links represent some kind of interaction and the nodes are stripped of any intrinsic quality. So, as an example, you can forget about the complexity of the individual bird if you model a flock’s swarming behavior. (See these older posts: complex, fundamental, swarm theory, in a nutshell.)


Only in recent years has attention shifted from this topological level of analysis (either links are present or not) to incorporating the weights of links, giving their strength relative to each other. Albeit harder to tackle, these networks are closer to the real-world systems they model.


However, there is still one step missing: the vertices of the network can also be assigned a value, which acts as a proxy for some real-world property that is coded into the network structure.
The two plots above illustrate the difference if the same network is visualized [3] using weights and values assigned to the vertices (left) or simply plotted as a binary (topological) network (right)…


[1] Strogatz S. H. and Watts D. J., 1998, Collective Dynamics of ‘Small-World’ Networks,
Nature, 393, 440–442.
[2] Albert R. and Barabasi A.-L., 1999, Emergence of Scaling in Random Networks,
Science, 286, 509–512.
[3] Cuttlefish Adaptive NetWorkbench and Layout

cool links…

think statistics are boring, irrelevant and hard to understand? well, think again.
two examples of visually displaying important information in an amazingly cool way:

territory size shows the proportion of all people living on less than or equal to US$1 in purchasing power parity a day. the site displays a large collection of world maps, where territories are re-sized on each map according to the subject of interest. sometimes an image says more than a thousand words…

want to see the global evolution of life expectancy vs. income per capita from 1975 to 2003? and additionally display the co2 emission per capita? choose indicators from areas as diverse as internet users per 1′000 people and contraceptive use amongst adult women and watch the animation.
gapminder is a fantastic tool that really makes you think…

work in progress…

Some of the stuff I do all week…

Complex Networks

Visualizing a shareholder network:

The underlying network visualization framework is JUNG, with the Cuttlefish adaptive networkbench and layout algorithm (coming soon). The GUI uses Swing.

Stochastic Time Series

Scaling laws in financial time series:
A Java framework allowing the computation and visualization of statistical properties. The GUI is programmed using SWT.

plugin of the month

The Firefox add-on Gspace allows you to use Gmail as a file server:

This extension allows you to use your Gmail space (4.1 GB and growing) for file storage. It acts as an online drive, so you can upload files from your hard drive and access them from every Internet-capable system. The interface will make your Gmail account look like an FTP host.

tech dependence…

Because technological advancement is mostly quite gradual, one hardly notices it creeping into one’s life. Only if these high-tech commodities were instantly removed would one realize how dependent one has become.

A random list of ‘nonphysical’ things I wouldn’t want to live without anymore:
  • everything you ever wanted to know — and much more
  • (e.g., news, scholar, maps, webmaster tools, …): basically the internet;-)
  • Web 2.0 communities (e.g., …): your virtual social network
  • towards the babel fish 
  • recommendations from the fat tail of the probability distribution
  • Web browsers (e.g., Firefox): your window to the world
  • Version control systems (e.g., Subversion): get organized
  • CMS (e.g., TYPO3): disentangle content from design on your web page and more
  • LaTeX typesetting software (btw, this is not a fetish;-): the only sensible and aesthetic way to write scientific documents
  • Wikies: the wonderful world of unstructured collaboration
  • Blogs: get it out there
  • Java programming language: truly platform independent and with nice GUI toolkits (SWT, Swing, GWT); never want to go back to C++ (and don’t even mention C# or .net)
  • Eclipse IDE: how much fun can you have while programming?
  • MySQL: your very own relational database (the next level: db4o)
  • PHP: ok, Ruby is perhaps cooler, but PHP is so easy to work with (e.g., integrating MySQL and web stuff)
  • Dynamic DNS (e.g., …): let your home computer be a node of the internet
  • Web server (e.g., Apache 2): open the gateway
  • CSS: ok, if we have to go with HTML, this helps a lot
  • VoIP (e.g., Skype): use your bandwidth
  • P2P (e.g., BitTorrent): pool your network
  • Video and audio compression (e.g., MPEG, MP3, AAC, …): information theory at its best
  • Scientific computing (R, Octave, gnuplot, …): let your computer do the work
  • Open source licenses (Creative Commons, Apache, GNU GPL, …): the philosophy!
  • Object-oriented programming paradigm: think design patterns
  • Rich Text editors: online WYSIWYG editing, no messing around with HTML tags
  • SSH network protocol: secure and easy networking
  • Linux shell programming (“grep”, “sed”, “awk”, “xargs”, pipes, …): old school Unix from the 70s
  • E-mail (e.g., IMAP): oops, nearly forgot that one (which reminds me of something i really, really could do without: spam)
  • Graylisting: reduce spam
  • Debian (e.g., Kubuntu): the basis for it all
  • apt-get package management system: a universe of software at your fingertips
  • Compiz Fusion window manager: just to be cool…
It truly makes one wonder how all this cool stuff can come for free!

climate change 2007

Confused about the climate? Not sure what’s happening? Exaggerated fears or impending cataclysm?
A good place to start is a publication by Swiss Re. It is done in a straightforward, down-to-earth, no-bullshit and sane manner. The source to the whole document is given at the bottom.

Executive Summary
The Earth is getting warmer, and it is a widely held view in the scientific community that much of the recent warming is due to human activity. As the Earth warms, the net effect of unabated climate change will ultimately lower incomes and reduce public welfare. Because carbon dioxide (CO₂) emissions build up slowly, mitigation costs rise as time passes and the level of CO₂ in the atmosphere increases. As these costs rise, so too do the benefits of reducing CO₂ emissions, eventually yielding net positive returns. Given how CO₂ builds up and remains in the atmosphere, early mitigation efforts are highly likely to put the global economy on a path to achieving net positive benefits sooner rather than later. Hence, the time to act to reduce these emissions is now.
The climate is what economists call a “public good”: its benefits are available to everyone and one person’s enjoyment and use of it does not affect another’s. Population growth, increased economic activity and the burning of fossil fuels now pose a threat to the climate. The environment is a free resource, vulnerable to overuse, and human activity is now causing it to change. However, no single entity is responsible for it or owns it. This is referred to as the “tragedy of the commons”: everyone uses it free of charge and eventually depletes or damages it. This is why government intervention is necessary to protect our climate.
Climate is global: emissions in one part of the world have global repercussions. This makes an international government response necessary. Clearly, this will not be easy. The Kyoto Protocol for reducing CO₂ emissions has had some success, but was not considered sufficiently fair to be signed by the United States, the country with the highest volume of CO₂ emissions. Other voluntary agreements, such as the Asia-Pacific Partnership on Clean Development and Climate – which was signed by the US – are encouraging, but not binding. Thus, it is essential that governments implement national and international mandatory policies to effectively reduce carbon emissions in order to ensure the well-being of future generations.
The pace, extent and effects of climate change are not known with certainty. In fact, uncertainty complicates much of the discussion about climate change. Not only is the pace of future economic growth uncertain, but also the carbon dioxide and equivalent (CO₂e) emissions associated with economic growth. Furthermore, the global warming caused by a given quantity of CO₂e emissions is also uncertain, as are the costs and impact of temperature increases.
Though uncertainty is a key feature of climate change and its impact on the global economy, this cannot be an excuse for inaction. The distribution and probability of the future outcomes of climate change are heavily weighted towards large losses in global welfare. The likelihood of positive future outcomes is minor and heavily dependent upon an assumed maximum climate change of 2° Celsius above the pre-industrial average. The probability that a “business as usual” scenario – one with no new emission-mitigation policies – will contain global warming at 2° Celsius is generally considered as negligible. Hence, the “precautionary principle” – erring on the safe side in the face of uncertainty – dictates an immediate and vigorous global mitigation strategy for reducing CO₂e emissions.
There are two major types of mitigation strategies for reducing greenhouse gas emissions: a cap-and-trade system and a tax system. The cap-and-trade system establishes a quantity target, or cap, on emissions and allows emission allocations to be traded between companies, industries and countries. A tax on, for example, carbon emissions could also be imposed, forcing companies to internalize the cost of their emissions to the global climate and economy. Over time, quantity targets and carbon taxes would need to become increasingly restrictive as targets fall and taxes rise. Though both systems have their own merits, the cap-and-trade policy has an edge over the carbon tax, given the uncertainty about the costs and benefits of reducing emissions. First, cap-and-trade policies rely on market mechanisms – fluctuating prices for traded emissions – to induce appropriate mitigating strategies, and have proved effective at reducing other types of noxious gases. Second, caps have an economic advantage over taxes when a given level of emissions is required. There is substantial evidence that emissions need to be capped to restrict global warming to 2 °C above preindustrial levels or a little more than 1 °C compared to today. Given that the stabilization of emissions at current levels will most likely result in another degree rise in temperature and that current economic growth is increasing emissions, the precautionary principle supports a cap-and-trade policy. Finally, cap-and-trade policies are more politically feasible and palatable than carbon taxes. They are more widely used and understood and they do not require a tax increase. They can be implemented with as much or as little revenue-generating capacity as desired. They also offer business and consumers a great deal of choice and flexibility. A cap-and-trade policy should be easier to adopt in a wide variety of political environments and countries.
Whichever system – cap-and-trade or carbon tax – is adopted, there are distributional issues that must be addressed. Under a quantity target, allocation permits have value and can be granted to businesses or auctioned. A carbon tax would raise revenues that could be recycled, for example, into research on energy-efficient technologies. Or the revenues could be used to offset inefficient taxes or to reduce the distributional aspects of the carbon tax.
Source: “The economic justification for imposing restraints on carbon emissions”, Swiss Re, Insights, 2007; PDF

scaling laws

Scaling-law relations characterize an immense number of natural processes, prominently in the form of
  1. scaling-law distributions,
  2. scale-free networks,
  3. cumulative relations of stochastic processes.

A scaling law, or power law, is a simple polynomial functional relationship, i.e., f(x) depends on a power of x. Two properties of such laws can easily be shown:
  • a logarithmic mapping yields a linear relationship,
  • scaling the function’s argument x preserves the shape of the function f(x), called scale invariance.
See (Sornette, 2006).
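Both properties can be checked numerically in a few lines; the values of a and k below are arbitrary:

```python
# For f(x) = a * x**k: (i) log f is linear in log x with slope k, and
# (ii) rescaling x -> c*x multiplies f by the constant c**k (scale invariance).
import math

a, k = 2.5, -1.7
f = lambda x: a * x ** k

# (i) the slope between any two points on log-log axes is the exponent k
x1, x2 = 3.0, 300.0
slope = (math.log(f(x2)) - math.log(f(x1))) / (math.log(x2) - math.log(x1))
print(slope)

# (ii) the ratio f(c*x)/f(x) depends only on c, not on x: the shape is preserved
c = 10.0
print(f(c * x1) / f(x1), c ** k, f(c * x2) / f(x2))
```

This is why scaling laws appear as straight lines on log-log plots, which is how they are usually spotted in data.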

Scaling-Law Distributions

Scaling-law distributions have been observed in an extraordinarily wide range of natural phenomena: from physics, biology, earth and planetary sciences, economics and finance, computer science and demography to the social sciences; see (Newman, 2004). It is truly amazing that such diverse topics as
  • the size of earthquakes, moon craters, solar flares, computer files, sand particles, wars and price moves in financial markets,
  • the number of scientific papers written, citations received by publications, hits on webpages and species in biological taxa,
  • the sales of music, books and other commodities,
  • the population of cities,
  • the income of people,
  • the frequency of words used in human languages and of occurrences of personal names,
  • the areas burnt in forest fires,
are all described by scaling-law distributions. First used by the economist Pareto in 1897 to describe the observed income distribution of households, this universal law has had some of its possible underlying mechanisms uncovered by the recent advancements in the study of complex systems. However, there is as yet no real understanding of the physical processes driving these systems.
Processes following normal distributions have a characteristic scale given by the mean of the distribution. In contrast, scaling-law distributions lack such a preferred scale. Measurements of scaling-law processes yield values distributed across an enormous dynamic range (sometimes many orders of magnitude), and for any section one looks at, the proportion of small to large events is the same. Historically, the observation of scale-free or self-similar behavior in the changes of cotton prices was the starting point for Mandelbrot’s research leading to the discovery of fractal geometry; see (Mandelbrot, 1963).
It should be noted that although scaling laws imply that small occurrences are extremely common whereas large instances are quite rare, these large events nevertheless occur much more frequently than a normal (or Gaussian) probability distribution would suggest. For a Gaussian, events that deviate from the mean by, e.g., 10 standard deviations (“10-sigma events”) are practically impossible to observe. For scaling-law distributions, extreme events have a small but very real probability of occurring. This fact is summed up by saying that the distribution has a “fat tail” (in the terminology of probability theory and statistics, distributions with fat tails are said to be leptokurtic or to display positive kurtosis), which greatly impacts risk assessment. So although most earthquakes, price moves in financial markets, intensities of solar flares, … will be very small, the possibility that a catastrophic event will happen cannot be neglected.
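The contrast between Gaussian and fat tails is easy to quantify. A sketch comparing a standard normal tail with a Pareto tail P(X > x) = x^(-alpha), where the exponent alpha = 3 is an illustrative choice:

```python
# Tail probabilities: a "10-sigma event" under a Gaussian vs. under a
# scaling-law (Pareto) distribution with tail P(X > x) = x**(-alpha).
import math

def gaussian_tail(x):
    """P(Z > x) for a standard normal variable."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pareto_tail(x, alpha=3.0):
    """P(X > x) for a Pareto variable with minimum value 1 (alpha is illustrative)."""
    return x ** -alpha

for x in (3, 5, 10):
    print(f"x = {x:2d}   Gaussian: {gaussian_tail(x):.1e}   power law: {pareto_tail(x):.1e}")
```

At x = 10 the Gaussian tail is below 10^-23 while the power-law tail is still 10^-3, a difference of some twenty orders of magnitude, which is the whole point about fat-tailed risk.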

Scale-Free Networks

Another modern research field marked by the ubiquitous appearance of scaling-law relations is the study of complex networks. Many different phenomena in the physical (e.g., computer networks, transportation networks, power grids, spontaneous synchronization of systems of lasers), biological (e.g., neural networks, epidemiology, food webs, gene regulation), and social (e.g., trade networks, diffusion of innovation, trust networks, research collaborations, social affiliation) worlds can be understood as network-based. In essence, the links and nodes are abstractions describing the system under study via the interactions of its constituent elements.
In graph theory, the degree of a node (or vertex), k, describes the number of links (or edges) the node has to other nodes. The degree distribution gives the probability distribution of degrees in a network. For scale-free networks, one finds that the probability that a node in the network connects with k other nodes follows a scaling law. Again, this power law is characterized by the existence of highly connected hubs, whereas most nodes have small degrees.
Scale-free networks are
  • characterized by high robustness against random failure of nodes, but susceptible to coordinated attacks on the hubs, and
  • thought to arise from a dynamical growth process, called preferential attachment, in which new nodes favor linking to existing nodes with high degrees.
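Preferential attachment is simple to simulate. In this minimal sketch (one link per new node; the network size and seed are arbitrary), picking a uniformly random entry of the list of link endpoints is equivalent to choosing a node with probability proportional to its degree:

```python
# Preferential attachment: each new node links to an existing node chosen
# with probability proportional to its degree. Hubs emerge spontaneously.
import random
from collections import Counter

def grow(n_nodes, seed=42):
    random.seed(seed)
    targets = [0, 1]                 # each node appears here once per link end
    degree = Counter({0: 1, 1: 1})   # start from a single edge 0 -- 1
    for new in range(2, n_nodes):
        old = random.choice(targets) # uniform over endpoints = degree-proportional
        degree[new] += 1
        degree[old] += 1
        targets += [new, old]
    return degree

deg = grow(10_000)
print("max degree:", max(deg.values()))
print("nodes of degree 1:", sum(1 for k in deg.values() if k == 1))
```

The result is heavy-tailed: a few hubs acquire very large degrees, while roughly two thirds of all nodes keep degree 1 (for this one-link-per-node model the degree distribution falls off as a power law, P(k) ~ 1/(k(k+1)(k+2))).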
It should be noted that another prominent feature of real-world networks, the so-called small-world property, is separate from a scale-free degree distribution, although scale-free networks are also small-world networks; see (Strogatz and Watts, 1998). In small-world networks, although most nodes are not neighbors of one another, most nodes can be reached from every other by a surprisingly small number of hops or steps.
Most real-world complex networks - such as those listed at the beginning of this section - show both scale-free and small-world characteristics.
Some general references include (Barabasi, 2002), (Albert and Barabasi, 2001), and (Newman, 2003). The emergence of scale-free networks in the preferential attachment model is shown in (Albert and Barabasi, 1999). An alternative explanation to preferential attachment, introducing non-topological values (called fitness) for the vertices, is given in (Caldarelli et al., 2002).

Cumulative Scaling-Law Relations

Next to distributions of random variables, scaling laws also appear in collections of random variables, called stochastic processes. Prominent empirical examples are financial time-series, where one finds empirical scaling laws governing the relationship between various observed quantities. See (Guillaume et al., 1997) and (Dacorogna et al., 2001).


Albert R. and Barabasi A.-L., 1999, Emergence of Scaling in Random Networks, Science, 286, 509–512.
Albert R. and Barabasi A.-L., 2001, Statistical Mechanics of Complex Networks,
Barabasi A.-L., 2002, Linked — The New Science of Networks, Perseus Publishing, Cambridge, Massachusetts.
Caldarelli G., Capocci A., Rios P. D. L., and Munoz M. A., 2002, Scale-free Networks without Growth or Preferential Attachment: Good Get Richer,
Dacorogna M. M., Gencay R., Müller U. A., Olsen R. B., and Pictet O. V., 2001, An Introduction to High-Frequency Finance, Academic Press, San Diego, CA.
Guillaume D. M., Dacorogna M. M., Dave R. D., Müller U. A., Olsen R. B., and Pictet O. V., 1997, From the Bird’s Eye to the Microscope: A Survey of New Stylized Facts of the Intra-Daily Foreign Exchange Markets, Finance and Stochastics, 1, 95–129.
Mandelbrot B. B., 1963, The variation of certain speculative prices, Journal of Business, 36, 394–419.
Newman M. E. J., 2003, The Structure and Function of Complex Networks,
Newman M. E. J., 2004, Power Laws, Pareto Distributions and Zipf’s Law,
Sornette D., 2006, Critical Phenomena in Natural Sciences, Series in Synergetics. Springer, Berlin, 2nd edition.
Strogatz S. H. and Watts D. J., 1998, Collective Dynamics of ‘Small-World’ Networks,
Nature, 393, 440–442.
See also this post: laws of nature.

swarm theory

National Geographic’s July 2007 edition: Swarm Theory
A single ant or bee isn’t smart, but their colonies are. The study of swarm intelligence is providing insights that can help humans manage complex systems.

benford’s law

In 1881 a result was published (by the astronomer S. Newcomb), based on the observation that the first pages of logarithm books, used at that time to perform calculations, were much more worn than the other pages. The conclusion was that computations involving numbers starting with 1 were performed more often than others: if d denotes the first digit of a number, the probability of its appearance is equal to log(1 + 1/d).

The phenomenon was rediscovered in 1938 by the physicist F. Benford, who confirmed the “law” for a large number of random variables drawn from geographical, biological, physical, demographical, economical and sociological data sets. It even holds for randomly compiled numbers from newspaper articles. Specifically, Benford’s law, or the first-digit law, states that a number has first digit 1 with probability 30.1%, 2 with 17.6%, 3 with 12.5%, 4 with 9.7%, 5 with 7.9%, 6 with 6.7%, 7 with 5.8%, 8 with 5.1% and 9 with 4.6%. In general, the leading digit d ∈ [1, …, b−1] in base b ≥ 2 occurs with probability proportional to log_b(d + 1) − log_b(d) = log_b(1 + 1/d).
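The listed percentages follow directly from P(d) = log10(1 + 1/d):

```python
# Benford's law: the probability that the leading digit (base 10) is d.
import math

def benford(d):
    return math.log10(1 + 1 / d)

for d in range(1, 10):
    print(f"digit {d}: {100 * benford(d):.1f}%")

# The probabilities telescope to log10(10) = 1, i.e., they sum to one.
print(sum(benford(d) for d in range(1, 10)))
```

The sum telescopes because log10(2/1) + log10(3/2) + … + log10(10/9) = log10(10).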
First explanations of this phenomenon, which appears to suspend the notions of probability, focused on its logarithmic nature, which implies a scale-invariant or power-law distribution. If the first digits have a particular distribution, it must be independent of the measuring system, i.e., conversions from one system to another don’t affect the distribution. (This requirement that physical quantities be independent of a chosen representation is one of the cornerstones of general relativity, called covariance.) So the common-sense requirement that the dimensions of arbitrary measurement systems shouldn’t affect the measured physical quantities is summarized in Benford’s law. In addition, the fact that many processes in nature show exponential growth is also captured by the law, which assumes that the logarithms of numbers are uniformly distributed.
So how come one observes random variables following both normal and scaling-law distributions? In 1996 the phenomenon was proven with mathematical rigor: if one repeatedly chooses different probability distributions and then randomly chooses a number according to each distribution, the resulting list of numbers will obey Benford’s law. Hence the law reflects the behavior of distributions of distributions.
Benford’s law has been used to detect fraud in insurance, accounting or expenses data, where people forging numbers tend to distribute their digits uniformly.


There is an interesting observation, or conjecture, to be made from the Metaphysics Map in the post what can we know?, concerning the nature of infinity.

The Finite

Many observations reveal a finite nature of reality:
  • Energy comes in finite parcels (quantum mechanics)
  • The knowledge one can have about quanta is bounded (the uncertainty principle)
  • Energy is conserved in the universe
  • The speed of light has the same constant value for all observers (special relativity)
  • The age of the universe is finite
  • Information is finite and hence can be coded into a binary language
Newer and more radical theories propose:
  • Space comes in finite parcels
  • Time comes in finite parcels
  • The universe is spatially finite
  • The maximum entropy in any given region of space is proportional to the region’s surface area and not its volume (this leads to the holographic principle, stating that our three-dimensional universe is a projection of physical processes taking place on a two-dimensional surface surrounding it)
So finiteness appears to be an intrinsic feature of the Outer Reality box of the diagram.
There is in fact a movement in physics subscribing to the finiteness of reality, called Digital Philosophy. Indeed, this finiteness postulate is a prerequisite for an even bolder statement, namely, that the universe is one gigantic computer (a Turing-complete cellular automaton), where reality (thought and existence) is equivalent to computation. As mentioned above, the self-organizing, structure-forming evolution of the universe can be seen to produce ever more complex modes of information processing (e.g., storing data in DNA, thoughts, computations, simulations and perhaps, in the near future, quantum computations).
There is also an approach to quantum mechanics focusing on information, stating that an elementary quantum system carries (is?) one bit of information. This can be seen to lead to the notions of quantisation, uncertainty and entanglement.

The Infinite

It should be noted that zero is infinity in disguise: if one lets the denominator of a fraction go to infinity, the result is zero. Historically, zero was discovered in the 3rd century BC in India and was introduced to the Western world by Arabian scholars in the 10th century AD. As ordinary as zero appears to us today, the great Greek mathematicians never came up with such a concept.
Indeed, infinity is something intimately related to formal thought systems (mathematics). Irrational numbers have an infinite number of digits. There are two measures of infinity: countability and uncountability. The former refers to infinite sequences such as 1, 2, 3, …, whereas for the latter, starting from 1.0 one cannot even reach 1.1, because there are infinitely many numbers in the interval between 1.0 and 1.1. In geometry, points and lines are idealizations of dimension zero and one, respectively.
So it appears as though infinity resides only in the Inner Reality box of the diagram.

The Interface

If it should be true that we live in a finite reality, with infinity residing only within the mind as a concept, then problems should arise when one tries to model this finite reality with an infinity-harboring formalism.
Perhaps this is indeed so. In chaos theory, the sensitivity to initial conditions (butterfly effect) can be viewed as the problem of measuring numbers: the measurement can only have a finite degree of accuracy, whereas the numbers have, in principle, an infinite amount of decimal places.
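A minimal sketch can make this concrete (the numbers are hypothetical, and the logistic map stands in for a chaotic system): two trajectories whose starting points differ by less than any realistic measurement accuracy soon disagree completely.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x):
# a difference far below any measurement accuracy grows to order one.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # "same" measured initial condition

# Early on the trajectories are indistinguishable; after a few dozen
# iterations they have completely decorrelated.
for n in (1, 10, 40):
    print(n, abs(a[n] - b[n]))
```

Since the measured value has only finitely many digits, the infinitely many discarded decimal places eventually dominate the prediction.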
In quantum gravity (the, as yet, unsuccessful merger of quantum mechanics and gravity) many of the inherent problems of the formalism could be bypassed when a theory was proposed (string theory) that replaced (zero-dimensional) point particles with one-dimensionally extended objects. Later incarnations, called M-theory, allow for multidimensional objects.
In the above mentioned information based view of quantum mechanics, the world appears quantised because the information retrieved by our minds about the world is inevitably quantised.
So the puzzle deepens. Why do we discover the notion of infinity in our minds while all our experiences and observations of nature indicate finiteness?

medical studies


medical studies often contradict each other. results claiming to have "proven" some causal connection are confronted with results claiming to have "disproven" the link, or vice versa. this dilemma affects even reputable scientists publishing in leading medical journals. the topics are diverse:
  • high-voltage power supply lines and leukemia [1],
  • salt and high blood pressure [1],
  • heart diseases and sport [1],
  • stress and breast cancer [1],
  • smoking and breast cancer [1],
  • praying and higher chances of healing illnesses [1],
  • the effectiveness of homeopathic remedies and natural medicine,
  • vegetarian diets and health,
  • low frequency electromagnetic fields and electromagnetic hypersensitivity [2].
basically, this is understood to happen for three reasons:
  • i.) the bias towards publishing positive results,
  • ii.) incompetence in applying statistics,
  • iii.) simple fraud.


i.)

publish or perish. in order to guarantee funding and secure the academic status quo, results are selected by their chance of being published.
an independent analysis of the original data used in 100 published studies exposed that roughly half of them showed large discrepancies in the original aims stated by the researchers and the reported findings, implying that the researchers simply skimmed the data for publishable material [3].
this proves fatal in combination with ii.), as every statistically significant result can occur (by definition) by chance in an arbitrary distribution of measured data. so if you only look long enough for arbitrary results in your data, you are bound to come up with something [1].


ii.)

often, due to budget reasons, the number of test persons in clinical trials is simply too small to allow for statistical significance. ref. [4] showed, among other things, that the smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
statistical significance - often evaluated by some statistics software package - is taken as proof without considering the plausibility of the result. many statistically significant results turn out to be meaningless coincidences after accounting for the plausibility of the finding [1].
one study showed that one third of frequently cited results fail a later verification [1].
another study documented that roughly 20% of the authors publishing in the magazine “nature” didn’t understand the statistical method they were employing [5].

iii.) a.)

two thirds of the clinical biomedical research in the usa is supported by the industry - twice as much as in 1980 [1].
it was shown that in 1000 studies done in 2003, the nature of the funding correlated with the results: 80% of industry-financed studies had positive results, whereas only 50% of the independent research reported positive findings.
it could be argued that the industry has a natural propensity to identify effective and lucrative therapies. however, the authors show that many impressive results were only obtained because they were compared with weak alternative drugs or placebos. [6]

iii.) b.)

quoted from
“Andrew Wakefield (born 1956 in the United Kingdom) is a Canadian trained surgeon, best known as the lead author of a controversial 1998 research study, published in the Lancet, which reported bowel symptoms in a selected sample of twelve children with autistic spectrum disorders and other disabilities, and alleged a possible connection with MMR vaccination. Citing safety concerns, in a press conference held in conjunction with the release of the report Dr. Wakefield recommended separating the components of the injections by at least a year. The recommendation, along with widespread media coverage of Wakefield’s claims was responsible for a decrease in immunisation rates in the UK. The section of the paper setting out its conclusions, known in the Lancet as the “interpretation” (see the text below), was subsequently retracted by ten of the paper’s thirteen authors.
In February of 2004, controversy resurfaced when Wakefield was accused of a conflict of interest. The London Sunday Times reported that some of the parents of the 12 children in the Lancet study were recruited via a UK attorney preparing a lawsuit against MMR manufacturers, and that the Royal Free Hospital had received £55,000 from the UK’s Legal Aid Board (now the Legal Services Commission) to pay for the research. Previously, in October 2003, the board had cut off public funding for the litigation against MMR manufacturers. Following an investigation of The Sunday Times allegations by the UK General Medical Council, Wakefield was charged with serious professional misconduct, including dishonesty, due to be heard by a disciplinary board in 2007.
In December of 2006, the Sunday Times further reported that in addition to the money given to the Royal Free Hospital, Wakefield had also been personally paid £400,000 which had not been previously disclosed by the attorneys responsible for the MMR lawsuit.”
wakefield had only ever expressed criticism of the combined triple vaccination, supporting single vaccinations spaced in time. the british tv station channel 4 exposed in 2004 that he had applied for patents on the single vaccines. wakefield dropped his subsequent slander action against the media company only in the beginning of 2007. as mentioned, he now awaits charges of professional misconduct. however, he has left britain and now works for a company in austin, texas. it has been uncovered that other employees of this us company had received payments from the same attorney preparing the original lawsuit. [7]


should we be surprised by all of this? next to the innate tendency of human beings to be incompetent and unscrupulous, there is perhaps another level that makes this whole endeavor special.
the inability of scientists to conclusively and reproducibly uncover findings concerning human beings is maybe better appreciated if one considers the nature of the subject under study. life, after all, is an enigma, and the connection linking the mind to matter is elusive at best (i.e., the physical basis of consciousness).
the body's capability to heal itself, i.e., the placebo effect and the resulting need for double-blind studies, is indeed very bizarre. however, there are studies questioning whether the effect exists at all ;-)
taken from
 (consult also for the corresponding links for the sources cited below)

[1] This article in the magazine issued by the Neue Zürcher Zeitung by Robert Matthews
[2] C. Schierz; Projekt NEMESIS; ETH Zürich; 2000
[3] A. Chan (Center of Statistics in Medicine, Oxford) et al.; Journal of the American Medical Association; 2004
[4] J. Ioannidis; “Why Most Published Research Findings Are False” ; University of Ioannina; 2005
[5] R. Matthews, E. García-Berthou and C. Alcaraz as reported in this “Nature” article; 2005
[6] C. Gross (Yale University School of Medicine) et al.; "Scope and Impact of Financial Conflicts of Interest in Biomedical Research"; Journal of the American Medical Association; 2003
[7] H. Kaulen; “Wie ein Impfstoff zu Unrecht in Misskredit gebracht wurde”; Deutsches Ärzteblatt; Jg. 104; Heft 4; 26. Januar 2007

in a nutshell


Science, put simply, can be understood as working on three levels:
  • i.) analyzing the nature of the object being considered/observed,
  • ii.) developing the formal representation of the object’s features and its dynamics/interactions,
  • iii.) devising methods for the empirical validation of the formal representations.
To be precise, level i.) lies more within the realm of philosophy (e.g., epistemology) and metaphysics (i.e., ontology), as notions of origin, existence and reality appear to transcend the objective and rational capabilities of thought. The main problem being:
“Why is there something rather than nothing? For nothingness is simpler and easier than anything.”; [1].
In the history of science, the formulation given above has made possible the understanding of at least three different levels of reality:
  • a.) the fundamental level of the natural world,
  • b.) inherently random phenomena,
  • c.) complex systems.
While level a.) deals mainly with the quantum realm and cosmological structures, levels b.) and c.) are comprised mostly of biological, social and economic systems.


a.) Fundamental
Many natural sciences focus on a.i.) fundamental, isolated objects and interactions, use a.ii.) mathematical models which are a.iii.) verified (falsified) in experiments that check the predictions of the model - with great success:
“The enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious. There is no rational explanation for it.”; [2].
b.) Random
Often the nature of the object b.i.) being analyzed is in principle unknown. Only statistical evaluations of sets of outcomes of single observations/experiments can be used to estimate b.ii.) the underlying model, and b.iii.) test it against more empirical data. This is often the approach taken in the social sciences, medicine, and business.
c.) Complex
Moving to c.i.) complex, dynamical systems, and c.ii.) employing computer simulations as a template for the dynamical process, unlocks a new level of reality: mainly the complex and interacting world we experience at our macroscopic length scales in the universe. Here two new paradigms emerge:
  • the shift from mathematical (analytical) models to algorithmic computations and simulations performed in computers,
  • simple rules giving rise to complex behavior: “And I realized, that I had seen a sign of a quite remarkable and unexpected phenomenon: that even from very simple programs behavior of great complexity could emerge.”; [3].
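The last bullet can be sketched in a few lines (using Wolfram's rule 30, the canonical example of this phenomenon): the update rule is trivially simple, yet the resulting pattern is famously complex.

```python
# Elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbors; the 8 possible neighborhoods index into
# the bits of the rule number (here rule 30).

def step(cells, rule=30):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right
        out.append((rule >> neighborhood) & 1)
    return out

width = 31
row = [0] * width
row[width // 2] = 1  # start from a single "on" cell

for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Running this prints the well-known irregular triangular pattern; the whole "program" is one line of bit arithmetic.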
However, things are not as clear anymore. What is the exact methodology, how does it relate to underlying concepts of ontology and epistemology, and what is the nature of these computations per se? Or, within the formulation given above, i.e., c.iii.), what is the "reality" of these models: what do the local rules determining the dynamics in the simulation have to say about the reality of the system c.i.) they are trying to emulate?


There are many coincidences that enabled the structured reality we experience on this planet to evolve: exact values of fundamental constants (initial conditions), emerging structure-forming and self-organizing processes, the possibility of (organic) matter storing information (after being synthesized in supernovae!), the right conditions on earth for harboring life, the emergent possibility of neural networks establishing consciousness and sentience above a certain threshold, …
Interestingly, there are also many circumstances that allow the observable world to be understood by the human mind:
  • the mystery allowing formal thought systems to map to patterns in the real world,
  • the development of the technology allowing for the design and realization of microprocessors,
  • the bottom-up approach to complexity identifying a micro level of simple interactions of system elements.
So it appears that the human mind is intimately interwoven with the fabric of reality that produced it.
But where is all this leading? There exists a natural extension of science which fuses the notions from levels a.) to c.), namely
  • information and information processing,
  • formal mathematical models,
  • statistics and randomness.
Notably, it comes from an engineering point of view, deals with quantum computers, and comes full circle back to level i.), the question about the nature of reality:
“[It can be shown] that quantum computers can simulate any system that obeys the known laws of physics in a straightforward and efficient way. In fact, the universe is indistinguishable from a quantum computer.”; [4].
At first blush the idea of substituting reality with a computed simulation appears rather ad hoc, but in fact it does have potentially falsifiable notions:
  • the discreteness of reality, i.e., the notion that continuity and infinity are not physical,
  • the reality of the quantum realm should be contemplated from the point of view of information, i.e., the only relevant reality subatomic quanta manifest is that they register one bit of information: “Information is physical.”; [5].
[1] von Leibniz, G. W., “Principes de la nature et de la grâce”, 1714
[2] Wigner, E. P., “Symmetries and Reflections”, MIT Press, Cambridge, 1967
[3] Wolfram, S., “A New Kind of Science”, Wolfram Media, pg. 19, 2002
[4] Lloyd, S., “Programming the Universe”, Random House, pgs. 53 - 54, 2006
[5] Landauer, R., Nature, 335, 779-784, 1988
See also: “The Mathematical Universe” by M. Tegmark.
Related post: laws of nature.

what can we know?

Put bluntly, metaphysics asks simple albeit deep questions:
  • Why do I exist?
  • Why do I die?
  • Why does the world exist?
  • Where did everything come from?
  • What is the nature of reality?
  • What is the meaning of existence?
  • Is there a creator or omnipotent being?
Although these questions may appear idle and futile, they seem to represent an innate longing of the human mind for knowledge. Indeed, children can and often do pose such questions, only to be faced with the resignation or impatience of adults.
To make things simpler and tractable, one can focus on the question “What can we know?”.
When you wake up in the morning, you instantly become aware of your self, i.e., you experience an immaterial inner reality you can feel and probe with your thoughts. Upon opening your eyes, a structured material outer reality appears. These two undeniable facts are enough to sketch a small metaphysical diagram:


Focussing on the outer reality, or physical universe, there exists an underlying structure-forming and self-organizing process starting with an initial singularity or Big Bang (an extremely low-entropy state, i.e., high order, giving rise to the arrow or direction of time). Due to the exact values of the physical constants in our universe, this organizing process yields structures eventually giving birth to stars, which, at the end of their life cycle, explode (supernovae), allowing nuclear reactions to fuse heavy elements.
One of these heavy elements brings with it novel bonding possibilities, resulting in a new pattern: organic matter. Within a couple of billion years, the structure-forming process gave rise to a plethora of living organisms. Although each organism dies after a short lifespan, the process of life as a whole continued in a sustainable equilibrium state and survived several extinction events (some of which eradicated nearly 90% of all species).
The second law of thermodynamics states that the entropy of the universe is increasing, i.e., the universe is becoming an ever more disordered place. It would seem that the process of life, creating stable and ordered structures, violates this law. In fact, complex structures spontaneously appear wherever there is a steady flow of energy from a high-temperature source (the sun) to a low-temperature sink (the earth). Pumping a system with energy drives it to a state far from thermodynamic equilibrium, and such states are characterized by the emergence of ordered structures.
Viewed from an information processing perspective, the organizing process suddenly experienced a great leap forward. The brains of some organisms had reached a critical mass, allowing for another emergent behavior: consciousness.


The majority of people in industrialized nations take a rational and logical outlook on life. Although one might think this is an inevitable mode of awareness, it is actually a cultural imprint, as there exist other civilizations putting far less emphasis on rationality.
Perhaps the divide between Western and Eastern thinking illustrates this best. Whereas the former is locked in continuous interaction with the outer world, the latter focuses on the experience of an inner reality. A history of meditation techniques underlines this emphasis on the nonverbal experience of one's self. Thought is either avoided entirely, or the mind is focused on repetitive activities, in effect deactivating it.
Recall from fundamental that there are two surprising facts. On the one hand, the physical laws dictating the fundamental behavior of the universe can be mirrored by formal thought systems devised by the mind. On the other hand, real complex behavior can be emulated by computer simulations following simple laws (computers themselves being an example of technological advances made possible by the successful modelling of nature by formal thought systems).


This conceptual map allows one to categorize a lot of stuff in a concise manner. Also, the interplay between the outer and inner realities becomes visible. However, the above mentioned questions remain unanswered. Indeed, more puzzles appear. So as usual, every advance in understanding just makes the question mark bigger…
Continued here: infinity?

invariant thinking…

Arguably the most fruitful principle in physics has been the notion of symmetry. Covariance and gauge invariance - two simply stated symmetry conditions - are at the heart of general relativity and the standard model (of particle physics).
This is not only aesthetically pleasing, it also illustrates a basic fact: in coding reality into a formal system, we should allow only the most minimal reference to the formal system itself. I.e., reality likes to be translated into a language that doesn't explicitly depend on its own peculiarities (coordinates, number bases, units, …). This is a pretty obvious idea and allows physical laws to be universal.
But what happens if we take this idea to the logical extreme? Will the ultimate theory of reality demand: I will only allow myself to be coded into a formal framework that makes no reference to itself whatsoever? Obviously a mind twister. But the question remains: what is the ultimate symmetry idea? Or: what is the ultimate invariant?
Does this imply “invariance” even with respect to our thinking? How do we construct a system that supports itself out of itself, without relying on anything external? Can such a magical feat be performed by our thinking?
Taken from this newsgroup message


See also: fundamental


While physics has had amazing success in describing most of the observable universe over the last 300 years, the formalism appears to be restricted to the fundamental workings of nature. Only solid-state physics attempts to deal with collective systems, and only thanks to the magic of symmetry can one deduce fundamental analytical solutions.
In order to approach real-life complex phenomena, one needs to adopt a more systems-oriented focus. This also means that the interactions of entities become an integral part of the formalism.
Some ideas should illustrate the situation:
  • Most calculations in physics are idealizations and neglect dissipative effects like friction
  • Most calculations in physics deal with linear effects, as non-linearity is hard to tackle and is associated with chaos; however, most physical systems in nature are inherently non-linear
  • The analytical solution for three gravitating bodies in classical mechanics, given their initial positions, masses, and velocities, cannot be found in general; the system turns out to be chaotic and can only be simulated on a computer; meanwhile, there are an estimated hundred billion galaxies in the universe
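To illustrate the last point, here is a toy sketch (naive Euler integration, unit masses, G = 1, with arbitrary, hypothetical initial conditions): the three-body trajectories can only be stepped forward numerically, there is no closed-form answer to consult.

```python
# Toy planar three-body simulation: step positions and velocities forward
# under pairwise Newtonian attraction (unit masses, G = 1).

def simulate(positions, velocities, dt=0.001, steps=500):
    pos = [list(p) for p in positions]
    vel = [list(v) for v in velocities]
    for _ in range(steps):
        acc = [[0.0, 0.0] for _ in range(3)]
        for i in range(3):
            for j in range(3):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy) ** 1.5 + 1e-9  # softened: no div by zero
                acc[i][0] += dx / r3
                acc[i][1] += dy / r3
        for i in range(3):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

start = [[1.0, 0.0], [-0.5, 0.9], [-0.5, -0.9]]  # hypothetical configuration
rest = [[0.0, 0.0] for _ in range(3)]            # released from rest
end = simulate(start, rest)                      # bodies fall toward each other
```

Because the system is chaotic, long-term predictions from such a simulation degrade no matter how small the step size; only short-horizon behavior is trustworthy.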

Systems Thinking

Systems theory is an interdisciplinary field which studies relationships of systems as a whole. The goal is to explain complex systems which consist of a large number of mutually interacting and interwoven parts in terms of those interactions.
A timeline:
  • Cybernetics (50s): Study of communication and control, typically involving regulatory feedback, in living organisms and machines
  • Catastrophe theory (70s): Phenomena characterized by sudden shifts in behavior arising from small changes in circumstances
  • Chaos theory (80s): Describes the behavior of non-linear dynamical systems that under certain conditions exhibit a phenomenon known as chaos (sensitivity to initial conditions, regimes of chaotic and deterministic behavior, fractals, self-similarity)
  • Complex adaptive systems (90s): The “new” science of complexity which describes emergence, adaptation and self-organization; employing tools such as agent-based computer simulations
In systems theory one can distinguish between three major hierarchies:
  • Suborganic: Fundamental reality, space and time, matter, …
  • Organic: Life, evolution, …
  • Metaorganic: Consciousness, group dynamical behavior, financial markets, …
However, it is not understood how one can traverse the following chain: bosons and fermions -> atoms -> molecules -> DNA -> cells -> organisms -> brains. I.e., how to understand phenomena like consciousness and life within the context of inanimate matter and fundamental theories.


e.g., systems view
Category Theory
The mathematical theory called category theory is a result of the "unification of mathematics" in the 40s. A category is the most basic structure in mathematics, consisting of a collection of objects and a collection of morphisms (maps) between them. A functor is a structure-preserving map between categories.
This dynamical systems picture can be linked to the notion of formal systems mentioned above: physical observables are functors, independent of a chosen representation or reference frame, i.e., invariant and covariant.
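As a playful, informal illustration (not rigorous category theory): the "list" construction behaves like a functor, in that mapping a function over a list preserves composition and identity.

```python
# The "list functor": fmap lifts a function f : A -> B to a function
# on lists, list[A] -> list[B], preserving the structure of composition.

def fmap(f):
    return lambda xs: [f(x) for x in xs]

def compose(g, f):
    return lambda x: g(f(x))

double = lambda x: 2 * x
inc = lambda x: x + 1

xs = [1, 2, 3]

# Functor law: mapping a composite equals composing the mapped functions.
lhs = fmap(compose(double, inc))(xs)
rhs = compose(fmap(double), fmap(inc))(xs)
assert lhs == rhs == [4, 6, 8]
```

The point is representation-independence in miniature: it does not matter whether the composition happens "inside" or "outside" the structure, the result is invariant.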
Object-Oriented Programming
This paradigm of programming can be viewed in a systems framework, where the objects are implementations of classes (collections of properties and functions) interacting via functions (public methods). A programming problem is analyzed in terms of objects and the nature of communication between them. When a program is executed, objects interact with each other by sending messages. The whole system obeys certain rules (encapsulation, inheritance, polymorphism, …).
Some advantages of this integral approach to software development:
  • Easier to tackle complex problems
  • Allows natural evolution towards complexity and better modeling of the real world
  • Reusability of concepts (design patterns) and easy modifications and maintenance of existing code
  • Object-oriented design has more in common with natural languages than other (i.e., procedural) approaches
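A minimal, hypothetical sketch of the rules mentioned above (the class names are invented for illustration): objects encapsulate state, specialize through inheritance, and respond to the same message in different ways (polymorphism).

```python
# Encapsulation, inheritance and polymorphism with interacting "agents".

class Agent:
    def __init__(self, name):
        self._name = name          # encapsulated state (private by convention)

    def receive(self, message):    # public method: how objects communicate
        return f"{self._name} ignores '{message}'"

class Ant(Agent):                  # inheritance: an Ant is a specialized Agent
    def receive(self, message):    # polymorphism: same message, new behavior
        return f"{self._name} follows pheromone trail '{message}'"

agents = [Agent("generic"), Ant("worker")]
replies = [a.receive("signal") for a in agents]  # one call, two behaviors
```

The sender of the message never needs to know which kind of agent is listening, which is precisely what makes the approach scale to complex systems of interacting parts.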

Algorithmic vs. Analytical

Perhaps the shift of focus in this new worldview can be understood best when one considers the paradigm of complex systems theory:
  • The interaction of entities (agents) in a system according to simple rules gives rise to complex behavior: Emergence, structure-formation, self-organization, adaptive behavior (learning), …
This allows a departure from equation-based descriptions to models of dynamical processes simulated in computers. This is perhaps the second miracle involving the human mind and the understanding of nature: not only does nature work on a fundamental level akin to formal systems devised by our brains, the hallmark of complexity appears to be coded in simplicity ("simple sets of rules give complexity"), allowing computational machines to emulate its behavior.
complex systems
It is very interesting to note that in this paradigm the focus is on the interaction, i.e., the complexity of the agent can be ignored. That is why the formalism works for chemicals in a reaction, ants in an anthill, humans in social or economic organizations, … In addition, one should also note that simple rules - the epitome of deterministic behavior - can also give rise to chaotic behavior.
The emerging field of network theory (an extension of graph theory, yielding results such as scale-free topologies, small-world phenomena, etc., observed in a stunning variety of complex networks) is also located at this end of the spectrum of formal descriptions of the workings of nature.
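One such result, scale-free topologies, can be sketched in a few lines (in the spirit of the Barabási-Albert model; the numbers are arbitrary): attaching each new node to an existing node with probability proportional to its degree produces hubs that are far better connected than the typical node.

```python
import random

# Preferential attachment: picking a uniformly random edge endpoint is
# equivalent to picking a node with probability proportional to its degree.

random.seed(42)

degree = {0: 1, 1: 1}      # start with two linked nodes
endpoints = [0, 1]         # every edge contributes both of its endpoints

for new_node in range(2, 2000):
    target = random.choice(endpoints)   # degree-proportional choice
    degree[new_node] = 1
    degree[target] += 1
    endpoints.extend([new_node, target])

hub = max(degree.values())
typical = sorted(degree.values())[len(degree) // 2]
# The best-connected hub dwarfs the median node ("rich get richer").
```

This "rich get richer" dynamic is one proposed mechanism behind the heavy-tailed degree distributions observed in real networks.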
Finally, to revisit the analytical approach to reality: note that in the loop quantum gravity approach, space-time is perceived as a causal network arising from graph-updating rules (spin networks, i.e., graphs associated with group-theoretic properties), where particles are envisaged as 'topological defects' and geometric properties of reality, such as dimensionality, are defined solely in terms of the network's connectivity pattern.
list of open questions in complexity theory.

2 Responses to “complex”

  1. jbg » Blog Archive » complex networks Says:
    […] The new paradigm states that it is best to understand a complex system, if it is mapped to a network. I.e., the links represent the some kind of interaction and the nodes are stripped of any intrinsic quality. So, as an example, you can forget about the complexity of the individual bird, if you model the flocks swarming behavior. (See these older posts: complex, fundamental, swarm theory, in a nutshell.) […]
  2. jbg » Blog Archive » laws of nature Says:
    […] See also these posts: complex, swarm theory, complex networks. […]



    What is science?
    • Science is the quest to capture the processes of nature in formal mathematical representations
    So “math is the blueprint of reality” in the sense that formal systems are the foundation of science.
    In a nutshell:
    • Natural systems are a subset of reality, i.e., the observable universe
    • Guided by thought, observation and measurement natural systems are “encoded” into formal systems
    • Using logic (rules of inference) in the formal system, predictions about the natural system can be made (decoding)
    • Checking the predictions with the experimental outcome gives the validity of the formal system as a model for the natural system
    Physics can be viewed as dealing with the fundamental interactions of inanimate matter.
    For a technical overview, go here.
    math models


    • Mathematical models of reality are independent of their formal representation
    This leads to the notions of symmetry and invariance. Basically, this requirement gives rise to nearly all of physics.
    Classical Mechanics
    Symmetry, understood as the invariance of the equations under temporal and spatial transformations, gives rise to the conservation laws of energy, momentum and angular momentum.
    In layman's terms this means that the outcome of an experiment is unchanged by the time and location of the experiment and the motion of the experimental apparatus. Just common sense…
    Mathematics of Symmetry
    The intuitive notion of symmetry has been rigorously defined in the mathematical terms of group theory.
    Physics of Non-Gravitational Forces
    The three non-gravitational forces are described in terms of quantum field theories. These in turn can be expressed as gauge theories, where the parameters of the gauge transformations are local, i.e., differ from point to point in space-time.
    The Standard Model of elementary particle physics unites the quantum field theories describing the fundamental interactions of particles in terms of their (gauge) symmetries.
    Physics of Gravity
    Gravity is the only force that can’t be expressed as a quantum field theory.
    Its symmetry principle is called covariance, meaning that in the geometric language of the theory describing gravity (general relativity) the physical content of the equations is unchanged by the choice of the coordinate system used to represent the geometrical entities.
    To illustrate, imagine an arrow located in space. It has a length and an orientation. In geometric terms this is a vector, let's call it a. If I want to compute the length of this arrow, I need to choose a coordinate system, which gives me the x-, y- and z-axis components of the vector, e.g., a = (3, 5, 1). So starting from the origin of my coordinate system (0, 0, 0), if I move 3 units in the x direction (left-right), 5 units in the y direction (forwards-backwards) and 1 unit in the z direction (up-down), I reach the end of my arrow. The problem is now that, depending on the choice of coordinate system - meaning the orientation and the size of the units - the same arrow can look very different: a = (3, 5, 1) = (0, 23.34, -17). However, every time I compute the length of the arrow in meters, I get the same number, independent of the chosen representation.
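The arrow example can be checked directly (a small sketch; the rotation angle is arbitrary): the components change with the coordinate system, the length does not.

```python
import math

# The same vector has different components in a rotated coordinate
# system, but its length is an invariant.

def rotate_z(v, angle):
    """Components of the same vector expressed in axes rotated by 'angle'."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * y, -s * x + c * y, z)

def length(v):
    return math.sqrt(sum(c * c for c in v))

a = (3.0, 5.0, 1.0)
b = rotate_z(a, 0.7)   # very different-looking components...
assert abs(length(a) - length(b)) < 1e-12   # ...but the same length
```

This is exactly the sense in which the physical content (the length) is independent of the representation (the components).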
    In general relativity the vectors have multidimensional equivalents called tensors, and the common-sense requirement that calculations involving tensors do not depend on how the tensors are represented in space-time is called covariance.
    It is quite amazing, but only one more ingredient is needed in order to construct one of the most aesthetic and accurate theories in physics. It is called the equivalence principle and states that the gravitational force is equivalent to the force experienced during acceleration. This may sound trivial, but it has very deep implications.
    micro/macro math models
    Physics of Condensed Matter
    This branch of physics, also called solid-state physics, deals with the macroscopic physical properties of matter. It is one of physics' first ventures into many-body problems in quantum theory. Although the employed notions of symmetry do not act at as fundamental a level as in the above-mentioned theories, they are a cornerstone of the theory: the complexity of the problems can be reduced using symmetry so that analytical solutions can be found. Technically, the symmetry groups enter as boundary conditions of the Schrödinger equation. This leads to the theoretical framework describing, for example, semiconductors and quasi-crystals (interestingly, they have fractal properties!). In the superconducting phase, the wave function becomes symmetric.


    The Success
    It is somewhat of a miracle that the formal systems the human brain discovers/devises find their match in the workings of nature. In fact, there is no reason for this to be the case, other than that this is the way things are.
    The following two examples should underline the power of this fact, where new features of reality were discovered solely from the requirements of the mathematical model:
    • In order to unify electromagnetism with the weak force (two of the three non-gravitational forces), the theory postulated two new elementary particles: the W and Z bosons. Needless to say, these particles were hitherto unknown, and it took 10 years for technology to advance sufficiently in order to allow their discovery.
    • The fusion of quantum mechanics and special relativity led to the Dirac equation, which demands the existence of an, up to then, unknown flavor of matter: antimatter. Four years after the formulation of the theory, antimatter was experimentally discovered.
    The Future…
    Despite this success, modern physics is still far from being a unified, paradox-free formalism describing all of the observable universe. Perhaps the biggest obstacle lies in the last missing step to unification. In a series of successes, forces that appeared to be independent phenomena turned out to be facets of the same formalism: electricity and magnetism were united in the four Maxwell equations; as mentioned above, electromagnetism and the weak force were merged into the electroweak force; and finally, the electroweak and strong forces were united in the framework of the standard model of particle physics. These three forces are all expressed as quantum (field) theories. There is only one observable force left: gravity.
    The efforts to quantize gravity and devise a unified theory have taken a strange turn in the last 20 years. The problem is still unsolved; however, the mathematical formalisms engineered for this quest - namely string/M-theory and loop quantum gravity - have had a twofold impact:
    • A new level in the application of formal systems has been reached. Whereas before, physics relied on mathematical branches that were developed independently of any physical application (e.g., differential geometry, group theory), string/M-theory is actually spawning new fields of mathematics (namely in topology).
    • These theories tell us very strange things about reality:
      • Time does not exist on a fundamental level
      • Space and time per se become quantized
      • Space has more than three dimensions
      • Another breed of fundamental particles is needed: supersymmetric matter
    Unfortunately, no one knows if these theories are hinting at a greater reality behind the observable world, or if they are “just” math. The main problem is that any kind of experiment to verify their claims appears to be out of reach of our technology…

    4 Responses to “fundamental”

    1. jbg » Blog Archive » complex networks Says:
      […] The new paradigm states that a complex system is best understood if it is mapped to a network. I.e., the links represent some kind of interaction and the nodes are stripped of any intrinsic quality. So, as an example, you can forget about the complexity of the individual bird if you model the flock's swarming behavior. (See these older posts: complex, fundamental, swarm theory, in a nutshell.) […]
    2. jbg » Blog Archive » what can we know? Says:
      […] Recall from fundamental that there are two surprising facts to be found. On the one hand, the physical laws dictating the fundamental behavior of the universe can be mirrored by formal thought systems devised by the mind. And on the other hand, real complex behavior can be emulated by computer simulations following simple laws (the computers themselves are an example of technological advances made possible by the successful modelling of nature by formal thought systems). […]
    3. jbg » Blog Archive » in a nutshell Says:
      […] fundamental and complex […]
    4. jbg » Blog Archive » laws of nature Says:
      […] See also this post: fundamental, invariant thinking. […]