
The Power Law of Monkeys..

Had read this interesting paper by Michael Mitzenmacher some time ago and always wanted to blog about it. Now is as good a time as any, while I wait for a long download to finish.

The paper talks about the newfound interest in power-law and log-normal distributions, especially among web researchers. Michael provides ample evidence to show that these debates are nothing new: power-law versus log-normal processes have been debated in areas like biology, chemistry and astronomy for more than a century now.

A power-law distribution over a random variable X is one where the probability that X takes a value at least k is proportional to k^(-g). Here g > 0 is the power-law "exponent". Intuitively a power-law distribution represents a system where X has a very large number of instances of very small values and a very small number of instances of extremely large values.

Power-law distributions are also called scale-free distributions, as their overall shape appears the same no matter what scale they are viewed at. A characteristic feature of the power-law is that it appears as a straight line with a negative slope on a log-log plot.

A log-normal distribution, on the other hand, is one whose logarithm is normally (Gaussian) distributed. While the power-law is log-linear -- a straight line on a log-log plot -- the log-normal bends away from a straight line on those axes.
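To see the difference concretely, here is a minimal sketch in Python (using numpy and matplotlib; the particular exponent, sigma and sample size are arbitrary choices of mine) that samples from both distributions and plots the empirical probability that X is at least k on log-log axes. The power-law sample lines up as a straight line; the log-normal sample curves away.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
n = 100_000

# Power-law sample with exponent g = 2: P(X >= k) is proportional to k^(-2)
power_law = rng.pareto(2.0, size=n) + 1.0

# Log-normal sample: log(X) is Gaussian
log_normal = rng.lognormal(mean=0.0, sigma=1.0, size=n)

def ccdf(samples):
    """Sorted values and the empirical fraction of samples at least that large."""
    x = np.sort(samples)
    p = 1.0 - np.arange(len(x)) / len(x)
    return x, p

for data, label in [(power_law, "power law (g = 2)"), (log_normal, "log-normal")]:
    x, p = ccdf(data)
    plt.loglog(x, p, label=label)   # straight line vs. curved tail

plt.xlabel("k")
plt.ylabel("P(X >= k)")
plt.legend()
plt.show()
```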

Both power-law and log-normal distributions represent very skewed systems, with a small number of very resourceful entities and a large number of entities with very limited resources. Such skewed distributions are pervasive. Some examples: the sizes of blood vessels versus the number of such vessels in our body; income levels versus the number of people earning them; populations of cities versus the number of such cities; the frequency rank of a word in a document versus the number of times it appears; and so on.

There are several generative models that speculate on the underlying random processes behind power-law and log-normal distributions. And the sheer simplicity of some of them makes them most appealing.

One of the most common generative models for the power-law is preferential attachment, which is also one of the most popular models for explaining the in-degree distribution of web pages. Simply put, the preferential attachment model for the web says that the probability of a page obtaining a new incoming hyperlink is proportional to its existing in-degree. Pages with a higher in-degree are more popular and tend to be more visible -- which drives their popularity up further.
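For the curious, here is a rough simulation sketch of this idea in plain Python. It is not the exact model analyzed in the paper; in particular, the "+1" added to each in-degree (so that pages with no links yet can still attract their first link) is a simplifying assumption of mine.

```python
import random
from collections import Counter

random.seed(0)
NUM_PAGES = 100_000

in_degree = [0]  # start with a single page, page 0
targets = [0]    # one "ticket" per unit of (in-degree + 1)

for new_page in range(1, NUM_PAGES):
    # Picking uniformly from the ticket list is equivalent to sampling
    # a target with probability proportional to (in-degree + 1).
    target = random.choice(targets)
    in_degree[target] += 1
    targets.append(target)      # the target earned one more ticket
    in_degree.append(0)
    targets.append(new_page)    # the new page gets its baseline ticket

# Count how many pages have each in-degree: the counts should fall off
# roughly polynomially in the degree, i.e. a power law.
degree_counts = Counter(in_degree)
for degree in sorted(degree_counts)[:10]:
    print(degree, degree_counts[degree])
```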

There are several other models that generate the power-law, based for instance on information-theoretic optimizations and martingale processes.

Similarly, there are several models for generating log-normal distributions. One of the simplest is the multiplicative process. Suppose that on a social networking site like Orkut, each person brings about 'r' new friends into the system. Since each person is assumed to act independently, let r be modeled as an i.i.d. (independent, identically distributed) random variable with finite variance (i.e. there is a limit to how many friends any one person is likely to bring in). Suppose the site starts with an initial population of X(0). The population after one time step would be X(1) = r X(0).

As we can see, the population at any time t is given as X(t) = r X(t-1). Writing r_i for the independent draw of r at step i, this unrolls to X(t) = r_1 r_2 ... r_t X(0).

This is a multiplicative process, where each iteration has a multiplicative, rather than an additive, effect on the population. Now taking the log on both sides, we get: log X(t) = log X(0) + log r_1 + log r_2 + ... + log r_t.

Logarithms have this effect of converting multiplications into additions. So what was a multiplicative process in the linear domain is now an additive process in the log scale. Now, we had assumed that the r_i are i.i.d. random variables with finite variance (and so are the log r_i). There is this nice little theorem called the central limit theorem, which says that the sum of a large number of i.i.d. random variables with finite variance approaches the normal distribution asymptotically. So there we have it: a multiplicative process generating a distribution that is normal in the log scale -- in other words, a log-normal distribution.
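This is easy to check numerically. The sketch below (numpy; the uniform range for r, the number of runs and the number of steps are arbitrary choices of mine) runs many independent copies of the multiplicative process and shows that X(t) is heavily right-skewed while log X(t) is nearly symmetric, i.e. approximately normal.

```python
import numpy as np

rng = np.random.default_rng(1)
NUM_RUNS = 50_000   # independent copies of the process
NUM_STEPS = 30      # time steps t
X0 = 10.0           # initial population

# One multiplicative factor per run and step: i.i.d. with finite variance
r = rng.uniform(0.8, 1.6, size=(NUM_RUNS, NUM_STEPS))

# X(t) = r_1 * r_2 * ... * r_t * X(0)
X_t = X0 * r.prod(axis=1)
log_x = np.log(X_t)

# X(t) is strongly right-skewed; log X(t) has skewness close to zero
def skew(a):
    return ((a - a.mean()) ** 3).mean() / a.std() ** 3

print("mean of log X(t):", log_x.mean())
print("std  of log X(t):", log_x.std())
print("skew of X(t):    ", skew(X_t))
print("skew of log X(t):", skew(log_x))
```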

Anyway, one of the best generative models for the power-law is the following: a monkey randomly typing on a typewriter (having n letters and one space bar) generates words with a power-law frequency distribution just as readily as any of the other models do!

Assume that the monkey types a space with a probability q and any of the n letters with a probability of (1-q)/n. Spaces are used to delimit words. A word with c letters would then have a probability of ((1-q)/n)^c q.

There are n^c words of length c, all equally likely, so the word with frequency rank r has length roughly log_n(r) and occurs with probability q ((1-q)/n)^(log_n(r)), which in turn reduces to q r^(log_n(1-q) - 1)

.. giving us a power-law distribution. Reading this serves as a good reminder that it could be nothing but pure noise that determines how the web (for example) grows! Maybe there is a giant monkey sitting out there weaving the web...
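The monkey is also easy to simulate. The sketch below (plain Python; the values of n, q and the number of keystrokes are arbitrary choices for illustration) types a long random string, splits it on spaces, and prints the word frequencies at a few ranks along with the exponent log_n(1-q) - 1 predicted above.

```python
import math
import random
from collections import Counter

random.seed(0)
n = 5                      # letters on the typewriter
q = 0.2                    # probability of hitting the space bar
NUM_KEYSTROKES = 2_000_000

letters = "abcdefghijklmnopqrstuvwxyz"[:n]
keys = letters + " "
weights = [(1 - q) / n] * n + [q]

text = "".join(random.choices(keys, weights=weights, k=NUM_KEYSTROKES))
word_counts = Counter(word for word in text.split(" ") if word)

# Exponent of the rank-frequency power law predicted by the derivation above
predicted_exponent = math.log(1 - q, n) - 1
print("predicted exponent:", predicted_exponent)

# Frequencies at a few ranks; on log-log axes these should fall roughly on a
# straight line with the predicted slope.
ranked = word_counts.most_common()
for rank in (1, 10, 100, 1000):
    if rank <= len(ranked):
        word, count = ranked[rank - 1]
        print(f"rank {rank}: {word!r} appears {count} times")
```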

Comments

Sanket said…
What if the giant monkey is lazy? It might not be using all the 'n' keys uniformly randomly, but prefers those that it can reach with least effort much more than the rest. (Or say it is constantly eating a banana with one hand..) ;)
No, it has to be a giant, fair monkey.. ;)
If you look at my website's paper list

www.eecs.harvard.edu/~michaelm/ListByYear.html

you'll see I wrote a paper with Brian Conrad that shows even if the monkey is lazy (non-uniform distribution on keys) then you still get a power law. Strangely, that case was left open for about 40 years....

Thanks for the interest!

Michael Mitzenmacher
Hi Michael,

Nice to see your comments! Let me read the paper you've mentioned.

The lazy monkey problem seems somewhat analogous to preferential attachment, and so the occurrence of power-laws there seems more natural than in the fair monkey problem.

Thanks for the interesting paper!
