The root of power law religion

A ‘power law’ refers to a statistical relationship between two quantities such that a relative change in one produces a proportional relative change in the other, regardless of the initial size of either quantity. One property of this law is scale invariance, otherwise known as being ‘scale-free,’ meaning the same proportion repeats at every scale in a self-similar pattern. Mathematical fractals are a familiar example of such scale invariance. Power laws have been taken as universal and applied to virtually any and all phenomena as proof of that universality.
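In formal terms (my own gloss, not language from the studies cited below), a power law and its scale invariance can be stated in a line each:

```latex
% A power law with exponent \alpha and normalization C:
\[
p(x) = C\,x^{-\alpha}
\]
% Rescaling x by any factor c multiplies p by a constant, so the
% distribution has the same shape at every scale:
\[
p(cx) = C\,(cx)^{-\alpha} = c^{-\alpha}\,p(x) \;\propto\; p(x).
\]
```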

 

However, a recent study (Broido and Clauset, 2019) claims that “scale-free networks are rare.” The authors conducted an extensive review of nearly one thousand social, biological, technological and information networks using state-of-the-art statistical methods and concluded exactly what their title states. On the contrary, “log-normal distributions fit the data as well or better than power laws,” and scale-free structure is “not an empirically universal pattern.” Hence it should not be the default for modeling and analyzing real-world structures.
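The statistical machinery behind such comparisons is available off the shelf. Below is a minimal sketch, assuming the `powerlaw` Python package (Alstott et al.), which implements the same likelihood-ratio model comparison; the degree sequence is synthetic, standing in for a real network's degrees:

```python
# Compare a power-law fit against a log-normal fit on a degree sequence.
import numpy as np
import powerlaw

rng = np.random.default_rng(42)
# Hypothetical degree sequence: log-normal, as many real networks turn out to be.
degrees = np.rint(rng.lognormal(mean=1.0, sigma=1.0, size=5000)).astype(int) + 1

fit = powerlaw.Fit(degrees, discrete=True)
print(f"fitted exponent alpha = {fit.power_law.alpha:.2f}, xmin = {fit.power_law.xmin}")

# Log-likelihood ratio R: positive favors the first model; p gives significance.
R, p = fit.distribution_compare('power_law', 'lognormal')
print(f"power law vs log-normal: R = {R:.2f}, p = {p:.3f}")
# For data like these, R comes out negative: the log-normal fits better.
```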

So why the fascination with trying to fit nearly all phenomena into the scale-free paradigm? Holme (2019) reviews the above article and the overall power law issue, noting that “in the Platonic realm of simple mechanistic models, extrapolated to infinite system size, the concepts of emergence, universality and scale-freeness are well-defined and clear. However, in the real world, where systems are finite and many forces affect them, they become blurry.” Klarreich (2018) reviewed an earlier version of Broido's paper and noted that, per mathematician Steven Strogatz, in physics there is a “power law religion.”

 

So what is the root of this religion? Holme nailed it when he said the power law universally applies “in the Platonic realm.” This is a long-held, guiding myth that has remained strong in math. Lakoff and Nunez (2001) dispel this myth, noting that there is no proof of an a priori mathematics; it is purely a premised axiom with no empirical foundation. Like the conception of God, it is a matter of religious faith. We can only understand math with the mind and brain, so understanding math requires us to understand how that brain and mind perceive and conceive. Hence there is no one correct or universal math. There are equally valid but mutually inconsistent maths, depending on one's premised axioms (354-55). This is because math is also founded on embodied basic categories and metaphors, on which particular axioms are unconsciously based (and biased), and it can go in a multitude of valid inferential directions depending on which metaphor (or blend) is used in a particular context. Lakoff and Nunez thereby dispel the myth of a transcendent, Platonic math while validating a plurality of useful and accurate maths.

However, Lakoff and Nunez do not see the above as relativistic postmodernism (pomo), because there is empirically demonstrated, convergent scientific evidence for the universal, embodied grounding of knowledge via image schemas and basic categories, extended through metaphor. They see both transcendent math and pomo as a priori investments. They too affirm universal validity, but through empirical methodology, not a priori speculation.

 

Lakoff (1987) also points out the following:

 

"The classical theory of categories provides a link between objectivist metaphysics and and set-theoretical models.... Objectivist metaphysics goes beyond the metaphysics of basic realism...[which] merely assumes that there is a reality of some sort.... It additionally assumes that reality is correctly and completely structured in a way that can be modeled by set-theoretic models" (159).

He argues that this arises from the correspondence-representation model, a model that has been legitimately questioned by postmetaphysical thinking.

Also see the above on the idealistic assumptions of modeling that come from a type of complexity theory which likewise assumed the universality of scale-free networks, even though most actual networks do not display this kind of mathematical distribution.

 

A prime example of the power law religion is found in the model of hierarchical complexity (MHC). Commons (2008) admits its Platonic roots when he says: “The ideal truth is the mathematical forms of Platonic ideal.” Granted, he qualifies this statement by noting the difference between the ideal and the real: we cannot know the ideal as pure form, only as it manifests in the real. He further notes that Aristotle elucidated the real with postulates of logic, but these too come from a priori axioms without empirical grounding. Yes, the logical entailments of this logic follow mathematical rules, but the axioms are presupposed a priori and taken as given. The MHC, then, is a combination of the ideal “perfect form, as Plato would have described it,” with the representation of that form in the real domain. The duality of the ideal and the real is apparent.

 

Lakoff and Johnson (1999, Ch. 7) show that abstract set theory has no connection to embodiment:

 

“Spatial relations concepts (image schemas), which fit visual scenes, are not characterizable in terms of set-theoretical structures. Motor concepts (verbs of bodily movement), which fit the body's motor schemas, cannot be characterized by set-theoretical models. Set-theoretical models simply do not have the kind of structure needed to fit visual scenes or motor schemas, since all they have in them are abstract entities, sets of those entities, and sets of those sets. These models have no structure appropriate to embodied meaning—no motor schemas, no visual or imagistic mechanisms, and no metaphor.”

 

However, Lakoff and Nunez note that math per se is not merely socially constructed:

 

"In recognizing all the ways that mathematics makes use of cognitive universals and universal aspects of experience, the theory of embodied mathematics explicitly rejects any possible claim that mathematics is arbitrarily shaped by history and culture alone. Indeed, the embodiment of mathematics accounts for real properties of mathematics that a radical cultural relativism would deny or ignore: conceptual stability, stability of inference, precision, consistency, generalizability, discoverability, calculability, and real utility in describing the world" (362).

 

Nonetheless, the MHC uses a particular kind of set theory, one in which sets cannot be members of themselves; in other set theories they can be:

 

“There are lots and lots of set theories, each defined by different axioms. You can construct a set theory in which the Continuum hypothesis is true and a set theory in which it is false. You can construct a set theory in which sets cannot be members of themselves and a set theory in which sets can be members of themselves. It is just a matter of which axioms you choose, and each collection of axioms defines a different subject matter. Yet each such subject matter is itself a viable and self-consistent form of mathematics. [...] There is no one true set theory" (WMCF, 355).

 

Commons, Ross, and Miller (2010) note that Axiom 1 of the MHC is based on set theory and that the orders are scale-free:

 

“Axiom 1 of the Model of Hierarchical Complexity (Commons, Goodheart, et al., 2008) posits, consistent with Piaget, that higher order actions are defined in terms of two or more lower-order actions. In terms of set theory, A = {a, b}, where A is the higher order set, and a and b are lower order actions that are elements of that set A. Note that the element a cannot equal the set A. An element cannot equal a set formed out of that element.”
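The contrast between the two kinds of set theory is easy to see in miniature. Here is a toy Python illustration (my own analogy, not anything from the MHC literature): immutable frozensets can only be built bottom-up, mirroring a well-founded set theory in which no set contains itself, while a mutable list can be made self-referential, a loose analogue of non-well-founded set theories:

```python
# Toy contrast, not a formalization. Python's frozensets can only be
# built "bottom-up," so a set can never contain itself -- mirroring the
# well-founded set theory the MHC's Axiom 1 assumes.
a = frozenset()                 # a lower-order action (empty set for brevity)
b = frozenset({frozenset()})    # another lower-order action
A = frozenset({a, b})           # A = {a, b}: the higher-order set
assert a in A and a != A        # an element cannot equal the set formed from it

# A mutable list, by contrast, can be made self-referential after the
# fact -- a loose analogue of non-well-founded (anti-foundation) set theories.
L = []
L.append(L)
assert L in L                   # L "contains itself"
```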

 

Sara Ross (2014) goes further, claiming that the MHC's orders are scale-free and fractal:

"To possess 'universal, scale-free' properties means the MHC’s orders of hierarchical complexity are fractal. Fractal means the repetition of self-similar patterns at different scales. Behavioral scales from the micro-biological to large social systems evidence the orders of hierarchical complexity (see Commons & Ross, 2008). The fractal transition theory is proposed as a universal, scale-free general model as well.”

 

Here we see the power law religion in action, given that the MHC's orders are ideal forms said to develop according to scale-free power laws. And yet, as Broido and Clauset noted above, real networks rarely display scale-free power laws. They further noted that, given the empirical data, different models would be required to explain these other networks; the scale-free model, while perhaps applicable to a few real-world networks, is inadequate to the task.

 

One example of a real-world network is the human brain connectome. Gastner and Odor (2016) note that the connectome is not scale-free, so why use scale-free models to measure it? And why then extend scale-free models to everything? As noted above, is this a power law religion of the ideal imposed on phenomena?

"The structural human connectome (i.e. the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to nodes and edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff."

 

A recent neuroimaging study (Smith et al., 2019) on brain connectome hierarchical complexity (HC) seems to support my notion that, like basic categories in cognitive science, HC arises from the middle out, as 'bridges,' rather than bottom-up or top-down. For example:

"Dividing the connectomes into four tiers based on degree magnitudes indicates that the most complex nodes are neither those with the highest nor lowest degrees but are instead found in the middle tiers. […] The most complex tier (Tier 3) involves regions believed to bridge high-order cognitive (Tier 1) and low-order sensorimotor processing (Tier 2)."

"The results show that hub nodes (Tier 1(t)) and peripheral nodes (Tier 4(b)) are contributing less to the greater complexity exhibited in the human brain connectome than middle tiers. In fact, this is particularly true of hub nodes."

Also note that "this concerns wholly separate considerations of topology to the well-known paradigms of small-world and scale-free complex networks." It is one of those new models that responds to the empirical data rather than trying to fit the latter into a one-size-fits-all scale-free model.
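For a rough picture of the tiering idea, here is a minimal sketch (my own construction; the quartile boundaries and the stand-in graph are assumptions, not the authors' exact method):

```python
# Divide a graph's nodes into four tiers by degree magnitude, hubs at
# the top and periphery at the bottom, as in the quoted study's setup.
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(n=500, m=3, seed=0)   # stand-in for a connectome
deg = dict(G.degree())
q1, q2, q3 = np.percentile(list(deg.values()), [75, 50, 25])

tiers = {
    1: [n for n, d in deg.items() if d > q1],        # hubs (top degrees)
    2: [n for n, d in deg.items() if q2 < d <= q1],
    3: [n for n, d in deg.items() if q3 < d <= q2],  # the "bridging" middle
    4: [n for n, d in deg.items() if d <= q3],       # periphery (bottom)
}
for t, nodes in sorted(tiers.items()):
    print(f"Tier {t}: {len(nodes)} nodes")
```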

 

This appears to be a matter of the guiding metaphors used in defining worldviews, which over time transcend and replace their forebears given new information in a different time and context (Berge, 2019). Sociological worldviews form a continuum across the broad categories of pre-modern, modern, postmodern and metamodern. The modern, scientific worldview is based on a mechanistic metaphor, with later iterations extending that metaphor to the computer. Both are premised on dualisms of various sorts, like the difference and separation of body and mind, or ideal and real; with computers, on and off, one pole in the duality being the source and the other its logical result. It's an either/or logic of the Aristotelian type noted above, itself based on a priori axioms.

The postmodern metaphor turned this dualism around, claiming that the ideal was fantasy and that only the concrete, real world had validity, the ideal being just so many hierarchical power relations imposed on the real without any basis. The metamodern worldview, however, syntegrates this dualism by acknowledging the duality but also everything in between the poles. In fact, it quits thinking in terms of poles altogether, e.g. Abramson (2014):

 

"But a still more intriguing question is whether antipodal analyses are any longer useful, or whether the time has come to speak of multiple dimensions of reality, actualities that are irresolvably contradictory and deliberately incalculable, and a state of affective response in which contemporary humans feel perpetually overwhelmed, but not critically degeneratively so. Whereas postmodern theories of hyperreality invariably metaphorized erasure of the line between fact and fiction as a gradual process of degeneration, collapse, and decomposition, metamodernism approaches contradiction, paradox, and ambiguity as reconstructive forces, and emphasizes not singularity qua collapse but multiplicity qua transcendence. […] The question to be asked of and into contemporary culture, then, is [...] indeed a transcendent metamodern condition in which the poles themselves have disappeared and we, collectively and individually, have found in the middle space between them an entirely new site of 'reconstructive deconstruction'" (7-8).

 

This is consistent with Lakoff and Johnson's cognitive science: a reconstruction of an empirically grounded plurality of mathematics, allowing for “contradiction and paradox” yet anchored in our universal embodiment, in the spaces between metaphysical paradigms.


    Edward theurj Berge

    Excerpts from The Number Sense by Dehaene (pp. 242-45) are below. I'd say that whether or not one believes in Platonic math, both it and formal math are abstract, with a priori axioms divorced from concrete reality. The intuitionist or constructivist math he describes, while accepting our innate categories of thought, is not the same as the image schemas and basic categories of cognitive linguistics (which in fact are not referenced). But intuitionism grounds a similar idea in the relation of math to our embodiment.

    "Twentieth-century mathematicians have been profoundly divided over this fundamental issue concerning the nature of mathematical objects. For some, traditionally labeled 'Platonists,' mathematical reality exists in an abstract plane, and its objects are as real as those of everyday life. [...] For an epistemologist, a neurobiologist, or a neuropsychologist, the Platonist position seems hard to defend—as unacceptable, in fact, as Cartesian dualism is as a scientific theory of the brain."

    "A second category of mathematicians, the 'formalists,' view the issue of the existence of mathematical objects as meaningless and void. For them, mathematics is only a game in which one manipulates symbols according to precise formal rules. Mathematical objects such as numbers have no relation to reality: They are defined merely as a set of symbols that satisfy certain axioms. [...] Though the formalist position may account for the recent evolution of pure mathematics, it does not provide an adequate explanation of its origins."

    "A third category of mathematicians is thus that of the 'intuitionists' or 'constructivists,' who believe that mathematical objects are nothing but constructions of the human mind. In their view, mathematics does not exist in the outside world, but only in the brain of the mathematician who invents it. [...] Among the available theories on the nature of mathematics, intuitionism seems to me to provide the best account of the relations between arithmetic and the human brain. The discoveries of the last few years in the psychology of arithmetic have brought new arguments to support the intuitionist view. [...] These empirical results tend to confirm Poincare's postulate that number belongs to the 'natural objects of thought,' the innate categories according to which we apprehend the world. [...] Intuition about numbers is thus anchored deep in our brain. Number appears as one of the fundamental dimensions according to which our nervous system parses the external world."


    Edward theurj Berge

    Power laws are a burr in my butt today. From this article:

    "Many self-similar systems are scale invariant only in discrete steps. A blood vessel tends to branch into two smaller vessels, a fluid vortex into two or three smaller vortices, and the Sierpinski triangle is self-simila
    r only by powers of two. These systems preserve relative proportions upon rescaling from one step to the next, but not upon arbitrary rescaling. This property is termed discrete-scale invariance or discrete renormalizability. It is a weaker condition than the continuous scale invariance underlying the Pareto distribution. Whereas strict scale invariance implies a power law and vice versa, discrete-scale invariance allows log-periodic modulations in the frequencies of observations that deviate from a pure power law such as Eq.(1). Such modulations are indeed observed in bronchial tube diameter, vortex ens-trophy, and financial asset prices."
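    In the usual formalism (my summary of the standard treatment of discrete scale invariance, e.g. Sornette's, not a quote from the article), the distinction looks like this:

    ```latex
    % Continuous scale invariance -- for every rescaling c -- forces a pure power law:
    \[
    f(cx) = c^{-\alpha} f(x) \;\;\forall c > 0
    \quad\Longrightarrow\quad
    f(x) = C\,x^{-\alpha}.
    \]
    % Discrete scale invariance holds only for c = \lambda, \lambda^2, \ldots,
    % which permits a log-periodic modulation on top of the power law:
    \[
    f(\lambda x) = \lambda^{-\alpha} f(x)
    \quad\Longrightarrow\quad
    f(x) = x^{-\alpha}\, P\!\left(\tfrac{\ln x}{\ln \lambda}\right),
    \quad P \text{ periodic with period } 1.
    \]
    ```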


    Edward theurj Berge

    From this article:

    "A common graph mining task is community detection, which seeks an unsupervised decomposition of a network into groups based on statistical regularities in network connectivity. Although many such algorithms exist, community detection’s No Free Lunch
    theorem implies that no algorithm can be optimal across all inputs. [...] We find that (i) algorithms vary widely in the number and composition of communities they find, given the same input; (ii) algorithms can be clustered into distinct high-level groups based on similarities of their outputs on real-world networks; (iii) algorithmic differences induce wide variation in accuracy on link-based learning tasks; and, (iv) no algorithm is always the best at such tasks across all inputs."
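    Point (i) is easy to reproduce. A minimal sketch using networkx's built-in community algorithms (the karate club graph is my example input, not one from the paper); three algorithms return different partitions of the same graph:

    ```python
    # Different community-detection algorithms, same input, different outputs.
    import networkx as nx
    from networkx.algorithms import community

    G = nx.karate_club_graph()

    results = {
        "greedy modularity": community.greedy_modularity_communities(G),
        "Louvain": community.louvain_communities(G, seed=7),  # networkx >= 2.8
        "label propagation": list(community.label_propagation_communities(G)),
    }
    for name, parts in results.items():
        sizes = sorted((len(c) for c in parts), reverse=True)
        print(f"{name}: {len(parts)} communities, sizes {sizes}")
    ```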