Nature Partially Described by Fourier Transform

Be back in a bit for this.

(The excerpted words below, keeping my place, are from a wondrous post at https://betterexplained.com/.)

When Richard Feynman was asked what single piece of information he would leave to humanity, to rebuild from in the event all knowledge was lost, he said it would be "All Things Are Made of Atoms." When Cornelius Lanczos was asked what mathematical equation he would leave to humanity, above all others, he said "The Fourier Transform." And Lanczos was probably the very best person to ask that question.

Think in Terms of Cycles, of Endless Returning

"From Smoothie to Recipe

"A math transformation is a change of perspective. We change our notion of quantity from "single items" (lines in the sand, tally system) to "groups of 10" (decimal) depending on what we're counting. 

"Tally it up.

"The Fourier Transform changes our perspective from consumer to producer, turning What do I have? into How was it made?

"In other words: given a smoothie, let's find the recipe.

"Why? Well, recipes are great descriptions of drinks. You wouldn't share a drop-by-drop analysis, you'd say "I had an orange/banana smoothie". A recipe is more easily categorized, compared, and modified than the object itself.

"So... given a smoothie, how do we find the recipe?"

Life carries and obeys a basic recipe, committed to memory by evolution, that most of us "understand" well enough to follow from birth without knowing anything about the protons and electrons, quarks, or even proteins of which our bodies (our living clay tablets :) are composed. We just know we are hungry, and what to do to try to fix that. Later, we may want to copy the recipe and pass it on, but it won't feel like passing on a memorized ancient family secret, like following directions, even though that is exactly what it will be.

Important counsel from Kalid Azad, the creator of this Fourier post and of the wonderful website Better Explained:

"Get a map, not directions.

"Memorization isn’t understanding:

"you follow the recipe, apply the formula, and get from A to B without
knowing why. Directions “work”, but what about wrong turns? A new
destination? Helping a friend who’s lost at point C, not A? This site is
about sharing maps, the intuitions that get you from any point to any
other point. We’ll leave the raw details for the encyclopedias."

Dependence on A Partial Description of Nature: "It Already Has a Name ... You Should Call It Entropy"

Entropy as Measure of Uncertainty, as von Neumann Interpreted Boltzmann's Meaning

"Information Theory, Relative Entropy and Statistics Fran¸cois Bavaud

Introduction: the relative entropy as an epistemological functional

"Shannon’s Information Theory (IT) (1948) definitely established the purely mathematical nature of entropy and relative entropy, in contrast to the previous identification by Boltzmann (1872) of his “H-functional” as the physical entropy of earlier thermodynamicians (Carnot, Clausius, Kelvin). The following declaration is attributed to Shannon (Tribus and McIrvine 1971):

"My greatest concern was what to call it. I thought of calling it “information”, but the word was overly used, so I decided to call it “uncertainty”. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.'

"I thought of calling it 'information,' but ... I decided to call it 'uncertainty' ... Von Neumann told me ... your uncertainty function has been used in statistical mechanics under that name, so it already has a name"

"You should call it entropy"

"In a nutshell, the relative entropy K(f||g) has two arguments f and g, which both are probability distributions belonging to the same simplex. Despite formally similar, the arguments are epistemologically contrasted: f represents the observations, the data, what we see, while g represents the expectations, the models, what we believe. K(f||g) is an asymmetrical measure of dissimilarity between empirical and theoretical distributions, able to capture the various aspects of the confrontation between models and data, that is the art of classical statistical inference, including Popper’s refutationism as a particulary case. Here lies the dialectic charm of K(f||g), which emerges in that respect as an epistemological functional."

an asymmetrical measure of dissimilarity between the observations, the data ("what we see") and "the expectations, the models, what we believe."

this is thus "able to capture the various aspects of the confrontation between models and data, that is the art of classical statistical inference"

"Here lies the dialectic charm of K(f||g)," "also known as the Kullback-Leibler divergence," "which emerges ... as an epistemological functional"

"K(f||g), also known as the Kullback-Leibler divergence ... constitutes a non-symmetric measure of the dissimilarity between the distributions f and g"

"Furthermore, the asymmetry of the relative entropy does not constitute a defect, but perfectly matches the asymmetry between data and models"

"Asymmetry of the relative entropy and hard falsificationism"

"the theory 'All crows are black' is refuted by the single observation of a white crow, while the theory 'Some crows are black' is not refuted by the observation of a thousand white crows. In this spirit, Popper’s falsificationist mechanisms (Popper 1963) are captured by the properties of the relative entropy, and can be further extended to probabilistic or 'soft falsificationist' situations, beyond the purely logical true/false context"

"Competition between simple hypotheses: Bayesian selection"

"the heart of the so-called model selection procedures, with the introduction of penalties ... increasing with the number of free parameters. In the alternative minimum description length (MDL) and algorithmic complexity theory approaches ... richer models necessitate a longer description and should be penalised accordingly. All those procedures, together with Vapnik’s Structural Risk Minimization (SRM) principle (1995), aim at controlling the problem of overparametrization in statistical modelling."

"Suppose data to be incompletely observed"

"For decades (ca. 1950-1990), the 'maximum entropy' principle, also called 'minimum discrimination information (MDI) principle' by Kullback (1959), has largely been used in science and engineering as a first-principle, 'maximally non-informative' method of generating models, maximising our ignorance (as represented by the entropy) under our available knowledge ... (see in particular Jaynes (1957), (1978)). However, (18) shows the maximum entropy construction ... points towards the empirical (rather than theoretical) nature of the latter. In the present setting, ˜f D appears as the most likely data reconstruction under the prior model and the incomplete observations (see also section 5.3)."

Examples:

Unobserved Category
Coarse Grained Observations
Symmetrical Observations
Average Value
Statistical Mechanics
Decompositions
Conditional Independence