Recall Kolmogorov complexity. Consider the following problem, which I came up with for the honors section of the complexity class:

Suppose we have a “really good” lossless file compression algorithm C. It finds almost the smallest compression, with some small error. Suppose x is a compressible string; that is, K(x) is small compared to other strings of the same length. Let the output of the compression algorithm on x be y = C(x). You can imagine that |y| ≈ K(x), since our algorithm is “really good”. What can we say about the relationship between K(y) and |y|?
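Here is a sketch of the intended answer, in the notation above (y = C(x), |y| ≈ K(x)); the additive constants depend on the choice of universal machine:

```latex
% Sketch: a good compressor's output must itself be incompressible.
% Assumes y = C(x) and |y| \approx K(x), as in the problem statement.
\begin{align*}
  K(x) &\le K(y) + O(1)
    && \text{(a fixed decompressor recovers $x$ from $y$)} \\
  \text{If } K(y) &\le |y| - c \text{ for some large } c
    && \text{(i.e.\ $y$ is $c$-compressible)} \\
  \text{then } K(x) &\le |y| - c + O(1) \approx K(x) - c + O(1),
    && \text{a contradiction.}
\end{align*}
% Hence K(y) \approx |y|: the compressed file is (nearly) incompressible.
```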

The takeaway here is that if a compression algorithm is good, it should produce incompressible output; otherwise, it didn’t really compress as much as it could have. There is a good reason people do not zip files twice or more: it doesn’t really do anything. If there is metadata involved, like on Macs, it can even make the files bigger! The second idea is that given any bad but non-trivial compression algorithm, you can turn it into a good one by running it repeatedly until the output stops getting smaller, as in the sketch below.
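As a minimal sketch of that second idea, here is some Python using zlib as a stand-in for C; in practice the pass count would have to be stored alongside the output so decompression knows how many times to unwind:

```python
import zlib

def compress_to_fixpoint(data: bytes) -> tuple[bytes, int]:
    """Apply the compressor repeatedly until the output stops shrinking."""
    passes = 0
    while True:
        compressed = zlib.compress(data)
        if len(compressed) >= len(data):
            # No further gain: the data now looks incompressible to C.
            return data, passes
        data = compressed
        passes += 1

if __name__ == "__main__":
    blob = b"banana " * 10_000          # highly compressible input
    out, n = compress_to_fixpoint(blob)
    print(f"{len(blob)} bytes -> {len(out)} bytes after {n} pass(es)")
```

That the loop usually stops after a single pass is exactly the point: zlib’s output is already close to incompressible, so a second pass buys nothing.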

Why don’t we eat poop? Consider the following experiment. Put a gorilla or a monkey on an island with one banana and nothing else. The monkey will eat the banana, then poop. Having nothing else on the island, the monkey will be forced to eat its poop, and the monkey will eventually die. Here is a proof that poop has no nutrients. Suppose to the contrary that it did. Then nutrient-rich poop would mean your body was not doing a good job of extracting the nutrients, which is not an optimal evolutionary strategy. Therefore, poop has no nutrients.

The whole point is that these are really the same problem. The compression algorithm makes passes over the file, trying to find a smaller representation; this is analogous to the tubes in your gut, which keep extracting nutrients all the way to the end. Also, some animals don’t have digestive systems that are good at extracting nutrients, and those same animals happen to be the ones that eat their poop.