May 2024



Question: Are the numbers realistic?


Answer: The question is reminiscent of the one about Pinocchio, but I will take this opportunity to look at it from another, unusual side: whether truth alone suffices to describe reality and whether it suffices for reality to exist. From the first part, let us recall the usefulness of contradiction (a statement that is always false); from the second, our ability to master nature using numbers.

Ancient Greek mathematicians believed that every number could be constructed as a ratio of two natural numbers; they would say that the rational numbers are all the numbers there are. On the other hand, among the best proofs are those obtained by the "method of contradiction": by reducing an assumption to a contradiction, we establish the truth of its negation. The Pythagoreans were pioneers of this method.

Legend has it that Hippasus was the first to discover the irrationality of the square root of two (√2), and that when he informed Pythagoras' followers of his discovery they threw him into the sea, enraged at this desecration of their science. He had assumed that the root of two, the length of the diagonal of the unit square (as in the earlier picture on the left), can be written as a quotient of two natural numbers (√2 = m/n) and that these numbers are mutually prime.

Prime numbers are the natural numbers divisible without remainder only by themselves and by one (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, ...); we say they have no proper divisor. Mutually prime (coprime) numbers have no common divisor other than one (e.g. 6 and 25). The Pythagoreans knew that any fraction can be reduced so that its numerator and denominator are mutually prime.
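The reduction to a coprime numerator and denominator can be sketched in a few lines of Python; the function name `reduce_fraction` is just an illustrative choice.

```python
from math import gcd

# Reduce a fraction so that numerator and denominator are mutually prime,
# by dividing out their greatest common divisor.
def reduce_fraction(m, n):
    g = gcd(m, n)
    return m // g, n // g

print(reduce_fraction(12, 30))  # (2, 5): 12/30 reduces to 2/5
print(gcd(6, 25))               # 1: 6 and 25 are mutually prime
```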

Well then, the assumed equality √2 = m/n, squared, can be written as twice the square of the denominator equalling the square of the numerator, 2n² = m². This means that m² is even, hence the numerator m is even, of the form m = 2k for some natural number k. Substituting back into the previous equality gives 2n² = 4k², or n² = 2k², which means that n is also even (divisible by two).

Since both numbers, the numerator and the denominator of the supposed root of two, turn out to be even, the assumption that they are mutually prime is contradicted, and therefore the initial assumption was false. The opposite statement is thus true: the root of two cannot be written as a quotient of natural numbers; it is not a rational number (√2 ∉ ℚ).
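The contradiction can also be probed by brute force: a minimal sketch searching for natural numbers m, n with 2n² = m², which, as the proof predicts, finds none (here up to an arbitrary bound of 200).

```python
# Search for a counterexample to the irrationality of sqrt(2):
# natural numbers m, n with 2*n*n == m*m would give sqrt(2) = m/n.
def rational_sqrt2_candidates(limit):
    return [(m, n)
            for n in range(1, limit)
            for m in range(1, 2 * limit)
            if 2 * n * n == m * m]

print(rational_sqrt2_candidates(200))  # []: no candidates exist
```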

From such familiar examples, typical mathematical proofs, we now draw a new lesson: falsehoods are also part of the "body of mathematics," which we tend to believe consists of exact truths alone. Note that, taking truths and only truths as the content of all mathematics, we could derive neither Russell's paradox about the non-existence of a universal set, nor Gödel's incompleteness theorems about the non-existence of a theory of all theories (Sufficiency), and especially not the countless proofs of theorems whose unprecedented accuracy has fascinated people for centuries, even when their exclusivity tires us.

Another way of seeing fictions as a kind of reality is the very possibility of having them at all. It is not possible to reason about "universal truths" without the use of falsehoods, as we now realize, and the entire previous blog (Memory) is devoted to the absence of deductions of the alternative-reality type. This is an emphasis on diversity, one of the principles of the information theory we are discussing, valid also in its non-physical part.

Deduction II

Question: Can Gödel's "impossibility" be understood by non-mathematicians?

Answer: We are talking about Gödel's incompleteness theorem (1931), about its precision and its scope, despite the pedantry of Gödel's original text and the difficulty of reading the proof of his so-called first theorem. These are such that even mathematicians are often not familiar with them.

From that difficult part, to begin with, we emphasize that Gödel does not refer to "truth" (whatever that means more deeply), but proves that any "sufficiently strong" (encompassing the important theorems) and correct (proving no falsehoods) deductive theory will always contain statements (so-called Gödel sentences) that are true but not provable in that theory.

"Deduction" (implication) means that only a "true" B can follow from a "true" A; we write A ⇒ B. Negating a "true" statement A yields a "false" statement ¬A, and the statements we are discussing here take only one of those two values. So much for "deduction" and "truth."

One of the easier interpretations of Gödel's incompleteness theorem (Quine, 1946) uses a "deductive engine." If the machine prints the expression X, we write PX, while ¬PX records that the machine does not print X. The duplicate of X is the expression XX, written dX; thus d applied to ¬Pd, written d¬Pd, denotes the expression ¬Pd¬Pd, and so on. Strings of these are "printing sentences" that tell what the machine prints. If such a self-referencing machine is correct, it is incomplete!

Namely, the printed sentence ¬Pd¬Pd, which is the duplication of ¬Pd, is true exactly when the machine does not print it, and false when it does. A machine can say "two plus two is four," but it cannot say "two plus two is five," since the first statement is true and the second is not. However, it can say, "I cannot say that two plus two is five," thereby cornering itself: while stating what it cannot do (being clear and correct), it also had to specify what it cannot say, and thus somehow pronounce "two plus two is five."
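Quine's duplication trick can be played with ordinary strings; a toy sketch (not Quine's original formalism), where `d` is the duplication operator from the previous paragraph:

```python
# Toy model of the "printing machine": d duplicates an expression, d(X) = XX.
def d(x):
    return x + x

# The sentence ¬Pd¬Pd says "the machine does not print the duplicate of ¬Pd".
sentence = d("¬Pd")
print(sentence)               # ¬Pd¬Pd

# The expression the sentence talks about, d applied to ¬Pd, is the
# sentence itself: that is the self-reference.
print(sentence == d("¬Pd"))   # True
```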

To be able to have clear and correct truths, I add in information theory, we must also use falsehoods. Recall the witty example of mathematics applied to dogma (Deduction): to the question "Is God omnipotent?", if the answer is yes, we ask the second question, "Can he create a stone so big that he himself cannot lift it?", thus contradicting the assumption. Now we point out that with every power comes some impotence: next to God stands the devil, with good comes evil, action is followed by reaction, and proving theorems cannot be done without using falsehood. Sometimes you have to step over the stream of untruth to get to the truth.

After this digression, Gödel's impossibility theorem is, I hope, a little clearer. A correct theory is not factually complete (containing every possible theorem) because it cannot "tell" the truth about what it "cannot tell." Let us extend this to physical reality, which likewise cannot do without "unreality": it cannot do without those fictions which we could prove cannot physically happen. Without such help, there is no correct theory of physical reality.

Any news communicated to us a second time is no longer "news." As soon as it is expressed, it ceases to be information and, by the law of conservation, must become something else. Between two or more such forms it can then transition, possibly repeating periodically, while the wider environment changes so that it never steps into the "same water" twice.


Question: Do you have any analogies between logic and interactions?


Answer: This is both a new and an old topic in this theory of information. For example, the photoelectric effect (Half Truths) helps in understanding how some information cannot reach the level of an interaction. Continuing in this way, we arrive at an example answering the question.

Namely, the sentence "I lie," a contradictory statement, is an analogy of a physically unrealizable phenomenon. So is "A says that B is lying, and B says that A is telling the truth." Even if we dilute the statement stochastically into "A says that B's statements are mostly false, and B says that all of A's statements are true," we still get contradictions, which can be expressions of excess vitality (intelligence) but not objects of physical interactions. Such is "A causes B not to exist, and B causes A to exist," or "A causes B to exist, and B causes A not to exist." We recognize this last one, especially, as the topic of the question asked.

A different analogy would be a machine that works only with pure water (fluid), where the "untruths" are contaminants that may or may not be carried along by its currents. When a pollutant is present at source A, it may or may not be carried downstream to B. Such a "flow" is equivalent to the implication A ⇒ B, which is false if and only if A is true (unpolluted) and B is false (polluted). But this is impossible in the case of the "deductive machine" from the previous answer, when the plumbing is sound and only "pure water" itself is fed in. Even then, however, the water in the system is never all the water there is, say because H2O molecules can be found in substances that neither look nor behave like water.

When devising such analogies, the aim is to capture some important feature of the logic being imitated. Above all it should simulate implication, and then the principle of economy (of my information theory), which in the first example above appears as the impermanence of lies (The Truth), and in the second, say, as the tendency of impurities to settle and leave the water clean. I will also mention their limitations.

"First-order logic," or first-order predicate calculus, is a formal system common in mathematics, philosophy, linguistics, and computer science. For such systems, the Löwenheim–Skolem theorem on the existence and cardinality of models holds. In short, it implies that a countable first-order theory with an infinite model has, for every infinite cardinal number κ, a model of size κ, so no first-order theory with an infinite model can have a unique model up to isomorphism (a bijection, a mutually unique mapping, that preserves structure). As a consequence, first-order theories cannot control the cardinality of their infinite models.

In other words, the Löwenheim–Skolem theorem says that any set of axioms meant to describe a structure with infinitely many objects (e.g. the natural numbers) will equally describe many other structures significantly different from the given one. First-order deductive theories that would characterize a single structure are therefore not actually possible. This conclusion agrees with the well-known, great variety of applications of mathematical abstractions, yet it may also disappoint us: the same theorem limits the scope of every single, concrete analogy with abstract logic!

Its breadth is what makes an idea abstract and unreal, in contrast to the focus of a concrete occurrence of physical information.


Question: Can you say anything else about this theorem that fascinates me?


Answer: Exchanging comments on the Löwenheim–Skolem theorem, we quickly agreed that it could be very important to the theory of information in the direction I am taking it. My serious interlocutors would not like me to reveal their identities, for now, until (and if) this theory itself becomes serious, as mentioned.

The theorem was established by Löwenheim (1915) as an idea developed by Skolem (1920) and others, and today it is a basic result in the model theory of first-order logic. The paradoxical flavour of the Löwenheim–Skolem theorem is that if a first-order theory (in mathematics, philosophy, linguistics, or computing) has infinite models, it also has models whose domains are merely countable, discrete. Yet some sets are uncountably infinite (Continuum, 2nd paragraph).

Paradoxes in mathematics most often arise from the insufficient development of our intuition and from inappropriate assumptions about a theory. Here, the paradox becomes explicable by there being at most a countably infinite number of actual events arising in any current reality, as opposed to an uncountably infinite number of possibilities. We can also understand this through countably infinite sequences (series) of decimals, which record an uncountable multitude of real numbers, since (almost) every decimal position admits two or more options. The analogy is, I hope, obvious.
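The step from countable sequences to an uncountable multitude is Cantor's diagonal argument, which can be sketched on finite fronts of such sequences; the function name `diagonal_escape` and the sample digits are illustrative only.

```python
# Cantor's diagonal argument in miniature: given rows of decimal digits,
# build a sequence that differs from row k at position k, so it cannot
# appear anywhere on the list.
def diagonal_escape(rows):
    return [(rows[k][k] + 1) % 10 for k in range(len(rows))]

listed = [
    [1, 4, 1, 5],
    [7, 1, 8, 2],
    [3, 3, 3, 3],
    [0, 0, 0, 1],
]
new = diagonal_escape(listed)
print(new)            # [2, 2, 4, 2]
print(new in listed)  # False: it escapes every listed row
```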

The original version of the Löwenheim–Skolem theorem simply asserts that any theory with an infinite model also has a countably infinite model. Skolem's paradox then arises when we notice that the standard axioms of set theory can themselves be formulated as a (countable) collection of first-order sentences. Thus, if those axioms have a model at all, the Löwenheim–Skolem theorem ensures they also have a model with a countable domain. This seems quite confusing, and it is precisely what lends weight to Cantor's discovery (Cantor's Theorem) of different infinities, which at first even top mathematicians did not want to accept.

I hope it is now less difficult to take a step further towards the theory of information, noting that this theorem marks a kind of boundary between the unreal and the real: between logic and the material world of physics, which we understand through the former. The special question is, "What kind of combination are we?"


Question: Explain to me the connection between impossibility with domain and perception?


Answer: The Löwenheim–Skolem theorem establishes that any satisfiable formula of first-order logic (one that takes the value true in some interpretation) is satisfiable in a domain of interpretation of cardinality aleph-zero (ℵ0). This means that ℵ0 domains (those that can be enumerated like the natural numbers) are sufficient for the interpretation of first-order logic.

In other words, it is the possibility of a set of knowledge larger than any given one, like Gödel's impossibility theorem (there is no theory of all theories), or analogously Russell's paradox (there is no set of all sets), and, on the other hand, the possibility that the domain of physical reality is countable (ℵ0).

From yet another side, Skolem's paradox is a surprising insight into the very core of the foundations of mathematics. It observes a necessary indeterminacy in our conceptions of infinity: every mathematical representation of infinity lacks, unfortunately, absoluteness. It turns out that whether a given set is countable or uncountable, and even whether it is infinite at all, is not an intrinsic (inherent) characteristic of that set, but may depend on the background set theory in which the question is asked. Different models of set theory can agree on the underlying collection of set objects yet disagree about their very sizes: about whether a set is countably or uncountably infinite, or even whether it is infinite at all.

In general, the cardinality (number of elements, finite or infinite) of a set S is always less than that of ℘(S), the set of all its subsets. For example, the set of natural numbers ℕ is discrete, i.e. countably infinite, but the set of all its subsets ℘(ℕ) is not: it is uncountably infinite. That there is no end to the formation of ever-larger cardinals (infinite quantities) is in itself surprising, and in the light of the Löwenheim–Skolem theorem and Skolem's paradox even absurd, because previous expectations (before my information theory) of the development of mathematics and science did not point in that direction.
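For finite sets the inequality |S| < |℘(S)| is concrete: a set of n elements has 2ⁿ subsets. A minimal sketch, with `powerset` as an illustrative helper name:

```python
from itertools import chain, combinations

# Enumerate all subsets of a finite set: the empty set, every singleton,
# every pair, and so on, up to the full set.
def powerset(s):
    s = list(s)
    return list(chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

S = {1, 2, 3}
P = powerset(S)
print(len(S), len(P))          # 3 8
print(len(P) == 2 ** len(S))   # True: |P(S)| = 2**|S| > |S|
```

Cantor's theorem extends the strict inequality to infinite sets as well, which is where the hierarchy of ever-larger cardinals begins.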

This is supported by the Borel–Cantelli lemma (Accumulating), which speaks of the sufficiency of a finite subset of outcomes from an infinite distribution, leaving wide open the ways of principled unpredictability and hence the well-foundedness of this theory of information. Let us put it this way: general uncertainty means the internal and external uncertainty of however large a set of natural phenomena, from the "highly visible" uncertainty (indispensable for explanations of the micro-world) in quantum mechanics, to the seemingly "ultimate certainty" of the laws of physics. This is why measurements (physical reality) are always only close to the predictions of formulas: absolute certainty is disturbed by the uncertainty of the smallest.


Question: How do you explain Gödel's "completeness"?


Answer: This question arose from a discussion about computer interpretations of the most common logic (first-order), and especially of Gödel's completeness theorem.

Gödel's completeness theorem (Vollständigkeit, 1929) claims that in the basic (first-order) predicate calculus all logically valid formulas are provable. A little more generally, we say that a statement S is a syntactic consequence of a theory T, denoted T ⊢ S, if S is provable from T in our deductive system. In addition, we say that S is a semantic consequence of T, denoted T ⊨ S, when S holds in every model of T. The completeness theorem then states that for any first-order theory T with a well-orderable language, and any sentence S in the language of T:

if T ⊨ S, then T ⊢ S.

Since the converse also holds (soundness), it follows that T ⊨ S if and only if T ⊢ S, and therefore syntactic and semantic consequence are equivalent in first-order logic. Gödel's original proof, by the way, is extensive.
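The semantic side, T ⊨ S, can be sketched at the propositional level by checking S in every model (truth assignment) satisfying T; here T = {A ⇒ B, A} and S = B, the modus ponens pattern. The helper `models` is an illustrative name, not a standard API.

```python
from itertools import product

# Yield every truth assignment (model) of the given variables that
# satisfies all formulas of the theory T.
def models(formulas, variables):
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(f(env) for f in formulas):
            yield env

T = [lambda e: (not e["A"]) or e["B"],  # A => B
     lambda e: e["A"]]                  # A
S = lambda e: e["B"]                    # B

# T |= S: S holds in every model of T.
print(all(S(m) for m in models(T, ["A", "B"])))  # True
```

Completeness then guarantees that such a semantically valid consequence is also derivable syntactically, T ⊢ S, in a first-order proof system.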

From Gödel's incompleteness theorem (1931), we conclude that the incompleteness of correct and sufficiently strong deductive theories follows from the incompleteness of their axioms: all correct and sufficiently strong deductive theories are logically complete yet factually incomplete (they do not contain every possible theorem).

The essence of Gödel's completeness theorem is that a system of axioms built on facts, truths, and mathematical operations can capture every consequence derivable from that framework. Paradoxically, precisely because there are self-sufficient, isolated islands of axioms, opening up different areas of truth so that a fact in one may not be true in another, we find that completeness inherently harbours contradiction.

Incompleteness, on the other hand, as a special subset of all possible truths, achieves consistency by avoiding these contradictions. By deliberately excluding certain regions of axioms, it achieves the coherence of its work at the price of incompleteness. Gödel discovered the deceptive nature of this navigation and its profound duality: it confines its local content of truths to particular axiomatic systems, set against broad contexts that transcend their borders.

By changing the so-called fifth Euclidean postulate, that through a given point outside a given line exactly one line parallel to it can be drawn, to Lobachevsky's new axiom, that through a given point outside a given line at least two such parallels can be drawn, we get two apparently contradictory geometries, the so-called flat (Euclidean) and the hyperbolic. However, leaving that axiom to the user as a choice, we obtain "absolute geometry," an exact branch of mathematics all the same.

So much for Gödel's theorem. On the other side is the "theory of information" which he and his followers (of course) did not deal with, and which is the subject of the sequel. For this reason, let's consider the following instructive situation.

Example. We are at an intersection of two roads with three townspeople, A, B, and C, all of whom we know to be liars. To find out which of the roads is the right one, we may ask only a single question. What is that question?

Solution: Suppose we ask A which road B would say that C would point to as the right one; the person asked then points to one of the two roads. With three liars present, C would lie, B would lie about what C would say, and the questioned A would lie about what B would say. The negation of a lie is a truth and the negation of a truth is a lie, so after three negations A's answer points to the wrong road. So, we take the other one. □

The lesson is that by deciphering pure lies we can get to the truth. We can also imagine the same intersection with three brothers, two of whom lie and one of whom tells the truth; then the same question always yields the correct road, no matter which of the three is asked and which of them lie. The lesson there is that truth can (though not necessarily) emerge from a mixture of lies and truths. We might finally notice that a lie is a diluted truth, a mixture to "fish" in when we are interested in a truth (The Truth) that is not obvious.
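Both versions of the puzzle can be simulated by letting the answer travel along the chain C, B, A, with every liar negating it; `reported_road` is an illustrative name, and `True` stands for "the right road".

```python
# The answer to "which road would B say that C would point to?" passes
# through C, then B, then A; each liar in the chain negates the claim.
def reported_road(liars):
    claim = True                 # the right road, known to everyone
    for is_liar in liars:        # C reports, B reports on C, A reports on B
        claim = (not claim) if is_liar else claim
    return claim

# Three liars: three negations, so the pointed road is the wrong one.
print(reported_road([True, True, True]))   # False

# Exactly two liars (in any order): the pointed road is always right.
two_liar_cases = [[True, True, False], [True, False, True], [False, True, True]]
print(all(reported_road(case) for case in two_liar_cases))  # True
```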

Truth is like cement, like rock, and a lie is like mortar, like connective tissue. This information theory expects a meaningful text to be less informative than a meaningless, random one, and simulations confirm it (Letter Frequency). Likewise, any arrangement, direction, or commitment, and hence efficiency or security, is a state of lower uncertainty: it has fewer options and therefore less information. Pure, abstract truth has the least amount of information, and diluted truth, i.e. a mixture with falsehoods, even less.
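The claim about meaningful versus random text can be tested with per-letter Shannon entropy; a minimal sketch, where the sample phrase and the fixed random seed are arbitrary choices.

```python
import math
import random
from collections import Counter

# Per-letter Shannon entropy in bits: H = -sum p(x) * log2 p(x).
def entropy(text):
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

meaningful = "the truth is like cement a rock and a lie is like mortar " * 20
random.seed(0)  # fixed seed so the comparison is reproducible
alphabet = "abcdefghijklmnopqrstuvwxyz "
scrambled = "".join(random.choice(alphabet) for _ in range(len(meaningful)))

# Meaningful text has skewed letter frequencies, hence lower entropy
# than letters drawn uniformly at random.
print(entropy(meaningful) < entropy(scrambled))  # True
```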

To this analogy of buildings with Gödel's "axiom islands," we can add the joining of two such "islands" into one logical system, as in the example of absolute geometry. When two opposing axioms (parallel postulates) appear in the same system, they become options. The information of such an enlarged system is greater in both density and sum, so the natural flow (minimalism) is the tendency towards greater certainty, that is, towards fragmentation and variety. This is new here, though not to my previous posts.

Viewed structurally, or informally speaking procedurally, and physically considered dynamic and changing, Gödel's "axiom islands" are like "sandwiches" packed with layers of truths and lies. The entire structure of mathematical theories, viewed as a series of movie frames, displays itself like a wave motion, like light, whose electric phase disappears as it induces a magnetic phase, which then induces an electric one and moves on again.


Question: So, untruth is not "surplus" in the perfection of nature around us?


Answer: Falsehood is required for the excess information that living beings have in contrast to the inanimate substance of which they are composed. This excess amount is "vitality".

1. For the interpretation of Gödel's theorem by a "deductive machine" (Deduction II) to do all that it can, the machine must print correct statements that contain, as parts, the inaccuracies clarifying what deduction itself cannot do. We imagine this fictitious machine to be mathematics itself, since we feed it only correct sentences and it returns no incorrect derivations. It is a step further to imagine it as some fictitious person who utters no inaccuracies, which is what Gödel's followers did.

Such a "person" (deductive machine) is deficient precisely in those truths that it cannot express when specifying what it cannot say, because those are statements about inaccuracies; it is therefore non-vital. In other words, the surplus of truths that a correct "deduction" must miss, because one reaches them through lies, is the surplus of information that dead, lifeless nature does not have. If the mentioned "deductive machine" contained an otherwise true sentence like "I can't say that 2 + 2 = 5," it would need at least some vitality; it has none, and the job of printing such sentences is left to us, who have the vitality.

2. Excess information is often imperceptible and genuinely unreal, like a ghost, as we see in the example of the slime mold (Slime Mold). These are creatures of the rainforest floor that, like plants, spread their spores in all directions. Every branch that does not come across food starves, dries up, and dies, and the whole survives wherever there is food. That is how this dancer moves, as if it had some intelligence, even though it does not.

Slime mold still has some vitality (excess information) because it is a living thing. An additional proof of its liveliness, important here, is that it competes against its own death with a "Monte Carlo" strategy, random trials like casting a hook in fishing, whereas winning games are unknown to the dead matter of physics (bound to a blind sequence of least action). Although the slime only imitates the decision "here I will, here I will not," the result is aliveness.

3. At the end, in the "miscellaneous" section of this answer, let us consider whether we discover mathematics or create it. The usual "reasonable" response is that we discover it, because its truths are something objective, like the ratio of the circumference to the diameter of a circle (π = 3.14159...), which we cannot declare to be even slightly different without endangering the rest of mathematics. However, what we see from this discussion (and these answers) about Gödel's theorems tells us that we create mathematics, at least in part.

Simply put, without our vitality there is no mathematics. It would not exist because it lives also in expressions like "I cannot say that 2 + 2 = 5," which cannot be reached by pure deduction without the use of lies. Dead nature alone cannot do this, and so mathematics is impossible without the likes of us, who understand it and (using falsehoods) prove its truths. Note that mathematics is exactly as "created" as our contribution is truly, physically unreal.

4. All the same, untruth is not an "excess" in the perfection of nature around us, unless we ourselves are an excess in it. That is the answer, expected (by this theory) and unexpected (by classical science), to the question I was asked.


Question: Is the logic concerned with our relationship to "truth," but not with the truth itself?


Answer: That's right. The above discussions of Gödel's theorems reveal this, starting from contradiction (Numbers), that apex of all mathematical methods and proofs, and beyond. It lies at the very base of this theory of information, which posits the idea, be it a statement or something else, as the fabric of everything.

A "statement" is a mathematical sentence that can be true (⊤) or false (⊥) and nothing else. The negation of true is false (¬⊤ = ⊥), while the negation of false gives true (¬⊥ = ⊤). Therefore, every true statement is mapped by a bijection (a mutually unique mapping) to some false statement and vice versa, which means the cardinal number of all true statements equals that of all false statements. Consequently, the fictional "worlds" of truth and falsehood are equivalent: equal in some respects, though not in all.

Unlike dead physical substance, vital beings, which have an excess of possibilities compared to it, can communicate with lies. Moreover, Gödel's theorems in fact prove this (let us count the hypothesis as a discovery of this theory): without lies we cannot reach the truth, and, conversely, every truth is cushioned (tucked away) by lies. Dead nature is incapable of lying, lying does not apply to it, so let us grant that what we "know" about truths has no "objective" meaning, unless we take such naked fictions to be objective phenomena. And that is where the circle closes: fiction becomes objective because we consider it so, and it fits into the same packaging as physical interactions.

We seem to have fallen into a circulus vitiosus, but only at first glance, much as Lobachevsky's geometry might appear at first. That geometry is self-sufficient yet not self-contradictory, and it has a model within Euclidean geometry (the chords of a circle), while, conversely, Euclidean geometry has the saddle surface as a model of Lobachevsky's. Here we have a bigger task, with a slightly wider "reality" and a blurrier edge, but the presence of information theory in it is certain, and vice versa, above all its non-contradiction.

Information theory is not the first to touch on the "unreality" of the laws of nature we are discovering, I hope, nor are we now straying too far from "common sense": what we discover as mathematics, and as science in general, is not bare abstraction nor dead nature in itself, but rather our experience of them. What they (the truths) are in themselves cannot be known without us (the liars), nor can we ever fully reach them.

Thus, logic is concerned with our relationship to "truth," not truth itself. We would have to be non-vital (a dead thing) to reach the essence of "truth," but then we would not be able to know it.


Question: Do you have a smaller "reality" with a clearer rim within which the certainty of that new theory of information is greater?


Answer: Yes, that is how it works in this case. In general, when we clarify something, we intervene so that the certainty of the content increases and, at the same time, its information decreases (Letter Frequency). That is, basically, the work of science: the removal of mystique.

Well, let the first example answering the question be the past (Memory), since it was the topic of my recent blog. Events from the past are not "real" (they were). We prove their alleged existence forensically, archaeologically, or simply by observing distant objects, assuming they lie as far in our past as the time it took their light to arrive.

The similarity between proving the past and proving truth in mathematics is visible, even down to the presence of "Gödel's impossibility" (Deduction II). The impossibility of discovering the entire past comes here from the obstruction of transmission along a Markov chain, when the (informatic) processes are ergodic, and then from the reduction of abstraction to the concrete. First-order deductive theories that would describe a single structure are not possible (Range), so the abstract has breadth and the concrete has focus. Because of the latter, every narrower (concrete) example of a "lesser reality" will be incomplete.

The present spends itself leaving traces of past events, reducing its concentration of information (its amount of options) against what it receives from the past. The certainty of the process grows, following the principle of minimalism, so we may say that the past dictates the future, the more strongly the closer it is. The "once" itself is thus (almost) completely fixed; yet moving away from the "now" slowly consumes it, and its information is diluted, to the detriment of the ever-longer past. Fiction is thus narrowed down to the "reality" of a non-contradictory past, with the limitations of concretization.

The illusion of the past and the consistency of the theory are reflected in the possibility of choice (Dimensions); this connects to the meaninglessness of the concept of time in mathematics. Broadly speaking, there is a continuum of possibilities, while the series of events in a particular reality is at best countably infinite. That is why the past does not even formally exist as some universal series of possibilities, a time flow behind or ahead of our history.

Finally, as hinted at the beginning of this answer, theoretical clarification means stating and proving theorems. This denial of other possibilities is analogous to the described development of physical reality: both tend towards a lower density of information. On the other hand, the streams of options resemble Gödel's isolated islands of axioms (Completeness).


Question: Understanding the topic turns out to be a type of communication?


Answer: Yes, understanding is a manner of communication, the interaction of the topic with us, as a more abstract form of feeling or measuring. Unlike a full matching, where subject and object are matched by an alternating series of questions and answers, let us analyze just one of those binary steps.

A theorem does not exist by itself; it is created by our understanding. More precisely, without us it is meaningless, and the meaning we add to it is not what it is in itself. So it turns out in Gödel's treatises (Deduction II). Without us, nature cannot state even a simple truth like "I can't say that two plus two is five," and can publish it only in the manner of liars like us. Without us those simple truths about physical reality do not exist, and the way we understand truths is not what they are in themselves. Bare matter "exists" as an object designed by a corresponding vital subject. Continuing this point of view, we arrive at the interpretations of the sum of products, i.e. the information of perception, which I explained earlier.

The illusion of our present and its past is extremely realistic. It is irrefutably "real" by any experiment or exact theorizing, and since we have nothing stronger than these, we consider it real. In it we are like shipwrecked people on an isolated island, who will live and die there without any hope of transport to the outside world or of rescue. The present is an unreal-real phenomenon of interactions, from its material-physical core to our most abstract insights.

This is a topic I keep repeating through information perception, piece by piece uncovering some aspects of it. Mathematics teaches us that every part of an exact theory is consistent (non-contradictory) with every other part of it, but also that it can be equally consistent with every part of any other exact theory. Thus we have proofs of classical theorems from analytic geometry, and then the same from analysis, probability theory, or beyond, as if truth is one big entity whose scope we cannot see at once. Logic shows us that it is not all like that, there is no "whole," but only true parts of some endless, complex family.

For example, when we roll a die, one of the six possibilities is realized, so that the entire previous amount of uncertainty (log 6) is delivered as the information of a random occurrence. Immediately afterward, that news is no longer real news but a fixed appearance of the past, with particular, increasingly weaker effects on the present, which slips away. In general, we notice that outcomes arise through interactions and then disappear as quickly as possible. Outcomes exert their influence only to become stuck in the fiction of the past. Information is born with difficulty (force) and similarly dies, so that it is realistically equivalent to a physical action (the product of changed energy and elapsed time).
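The amount of uncertainty in the die example can be checked in a few lines; a minimal sketch, where the choice of logarithm base (bits or nats) is mine, not fixed by the text:

```python
import math

# A fair die has six equally likely outcomes; the uncertainty of
# the roll is log 6, in whatever base we choose as the unit.
outcomes = 6
uncertainty_bits = math.log2(outcomes)   # binary units (bits)
uncertainty_nats = math.log(outcomes)    # natural units (nats)

# The realized face delivers exactly this amount as information.
assert abs(uncertainty_bits - 2.585) < 0.001
assert abs(uncertainty_nats - 1.792) < 0.001
```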

That is why, in physical phenomena, we speak of communication as interaction, and analogously, in non-physical phenomena, we speak of understanding or only of ideas. In general, we are talking about perception.


Question: Is a theorem "spooky" action at a distance?


Answer: It is something like that; I hope I understood what you mean, that it echoes Einstein's phrase from the discovery of the EPR paradox. This has continuations in quantum entanglement, and especially in the ways I interpret similar phenomena (spooky action at a distance) in information theory, and even in theorems.

An immediate physical reaction to a distant action is not possible because of the time light needs to travel between the given events, unless those events are simultaneous. The premise of this (my) information theory is that such a situation is physically possible even though the same events are non-simultaneous for other observers. Consistent with previous answers (Interaction), the all-time topics of mathematics are permanent, or timeless, and in that sense "spooky."

The theory is as abstract as it is universal (Range). Unreality and breadth are features of ideas, as opposed to the focus on the concrete that marks the appearance of physical information. We find this in Gödel's considerations, of course unrelated to this theory of information. However, when we try to multiply the "timelessness" of theorems by their physical "impotence," applying the formula for physical action (ΔE⋅Δt → 0⋅∞) does not lead to extreme illogicalities. Moreover, the old phenomena of physics do not oppose this transition to matters of logic.

We know that mathematics can "help" mathematics (for example, algebra or trigonometry helps geometry), or physics, the other sciences, and technology, so it is not surprising that physics may help mathematics. If I guessed what you meant by the "spooky" effect of a theorem at a distance (if this theory turns out to be correct, the questioner could self-publish), I hope the answer was acceptable.


Question: How does substance react with an idea?


Answer: One way is given by the photoelectric effect (Half Truths). Only a certain quantum of light energy can move or excite an electron and pull it out of the metal substrate. No matter how many photons hit the substrate, light of the wrong frequency, whatever its total energy, will not be able to eject its electrons. But the right frequency will "communicate" successfully.

Similarly, only the particular frequency of one tuning fork will be able to excite the vibrations of another. Vitality is the trigger of reactions; without it, they are impossible.

1. Firefighters know that a fire requires fuel (A), heat (B), and oxygen (C). When any of these three is absent, there is no burning. When there is abundant oxygen (C = ⊤), a fire can occur even with barely enough fuel and heat, which is symbolized by the upper picture on the right (disjunction, A ∨ B). But with barely enough oxygen, as much as there is in the air or a little less, burning requires a significant amount of both fuel and heat. This case, C = ⊥, is represented by the bottom right image (conjunction, A ∧ B).
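The "firefighter" gate just described can be sketched as a tiny truth function; the behavior with abundant versus scarce oxygen is taken from the paragraph above, while the function name is my own:

```python
def fire(a: bool, b: bool, c: bool) -> bool:
    """Fire gate: with abundant oxygen (c = True) barely enough
    fuel OR heat suffices (disjunction); with scarce oxygen
    (c = False) it takes both fuel AND heat (conjunction)."""
    return (a or b) if c else (a and b)

# Abundant oxygen: barely enough fuel alone can start a fire.
assert fire(True, False, True) is True
# Scarce oxygen: the same situation does not burn.
assert fire(True, False, False) is False
# Scarce oxygen: both fuel and heat together are required.
assert fire(True, True, False) is True
```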

Much has been written about these gates (Quantum Mechanics, 1.1.2 Logic), and there are many different types; this "firefighter" gate that I just described is atypical but perhaps the best suited to this answer. This is because we know from the algebra of logic that negation (¬A), disjunction (A ∨ B), and conjunction (A ∧ B) are sufficient to define any statement.

One way to prove the sufficiency of negation, disjunction, and conjunction is to reduce a statement f(A, B, ...), whose arguments A, B, ... can be true (⊤) or false (⊥), to its disjunctive or conjunctive normal form. The subject is not a difficult area, and it is easy to experiment in even for a beginner. Thus, we see that very many types of reactions of the two ingredients (A ∧ B) are possible when they are in the presence of the appropriate condition (C).
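The sufficiency claim can be tested mechanically. A minimal sketch (the function names are mine, not from the text) that builds the disjunctive normal form of an arbitrary two-argument statement using only negation, conjunction, and disjunction, then verifies the equivalence by exhausting the truth table:

```python
from itertools import product

def dnf(f):
    """Return a function equivalent to f(A, B) built from
    not/and/or alone: one conjunct per truth-table row where
    f is true, joined by disjunction."""
    rows = [(a, b) for a, b in product([True, False], repeat=2)
            if f(a, b)]
    def g(a, b):
        # disjunction over the selected rows; each row is a
        # conjunction of (possibly negated) arguments
        return any((a if ra else not a) and (b if rb else not b)
                   for ra, rb in rows)
    return g

xor = lambda a, b: a != b      # a statement outside the basic three
xor_dnf = dnf(xor)             # becomes (A and not B) or (not A and B)
assert all(xor(a, b) == xor_dnf(a, b)
           for a, b in product([True, False], repeat=2))
```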

2. In 1924, Louis de Broglie proposed a hypothesis about matter waves from which, in 1925, Schrödinger derived the wave equation that is very necessary today for predicting quantum mechanics experiments. Both discoveries establish the importance of wave interference for understanding the micro-world of physics. They deny the classical materialistic nature of the world.

According to this new physics, the quantum of action (h = 6.62607015×10⁻³⁴ J⋅Hz⁻¹) is the product of the momentum and wavelength of a particle-wave (pλ = h), that is, of its energy and period of oscillation (Eτ = h). When we use the frequency, the reciprocal of the period in seconds (ν = 1/τ), the light energy becomes proportional to the frequency (E = hν). I mention these as the easiest of the known formulas for the sake of understanding the current text. The third property of waves is amplitude, which defines the probability of interaction.
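The quoted relations can be illustrated with one worked number; the frequency of roughly green light (about 5.6×10¹⁴ Hz) is my example value, not from the text:

```python
# Planck's relations E = h*nu and E*tau = h, with tau = 1/nu.
h = 6.62607015e-34   # J/Hz (i.e., J*s), the quantum of action

nu = 5.6e14          # Hz, roughly green light (example value)
E = h * nu           # photon energy in joules, about 3.7e-19 J
tau = 1.0 / nu       # oscillation period in seconds

# Energy times period recovers the quantum of action.
assert abs(E * tau - h) < 1e-45
```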

By interference, two corresponding waves are amplified when their amplitudes are in the same direction, or weakened when their amplitudes are in opposite directions. In other words, regardless of, say, the energy of light, its rays (waves) can be canceled when they meet in counter-phase, with equal and opposite amplitudes. This does not mean that the light disappears (as is sometimes loosely stated in the literature), but that it does not "communicate" (I adapt the terms to information theory) with the environment. By canceling the amplitudes, the chance of the particle-wave interacting with the means of measurement, i.e., of perception, disappears.
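The cancellation of equal and opposite amplitudes can be checked numerically; the sampling grid and amplitude below are arbitrary choices of mine:

```python
import math

A = 1.0                                # common amplitude (example)
xs = [i * 0.01 for i in range(1000)]   # sample points

# Two waves in counter-phase: shifted by half a period (pi).
wave1 = [A * math.sin(x) for x in xs]
wave2 = [A * math.sin(x + math.pi) for x in xs]

superposition = [u + v for u, v in zip(wave1, wave2)]

# The resultant amplitude vanishes everywhere: no interaction,
# hence no "communication" with the means of measurement.
assert max(abs(s) for s in superposition) < 1e-12
```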

3. We know that not everything communicates with everything else. Without knowing the language, we cannot communicate well; without the code, we cannot read a coded text; photons that do not have the appropriate energy will not excite electrons; and sound without the right frequency will not set a glass ringing. Only in an environment with oxygen (C) will the corresponding fuel (A) and heat (B) burn. Finally, here we establish the (hypo)thesis that the "real" (whatever that is) environment enables the communication (interaction) of ideas with substance.

On the other hand, as the example of canceling waves of opposite amplitudes shows, even a substance that usually interacts with another substance does not always do so. Even the mind will not always understand or accept an explanation, no matter how clever it is. Our proverb says: you can lead a horse to water, but you can't make him drink. The reaction of substance with substance, of substance with an idea, as well as the understanding of one intelligence by another, is reached in triads of necessary factors. In two of these three, the third necessary component is vitality.


Question: Does the idea rule the substance?


Answer: Yes, when they are natural laws, ideas are above physical phenomena, and otherwise, they are ignored. But it is an interesting and open question as to how they manage to do it. Some of the answers are worth guessing.

Adhering to the "principle of saving" of information emission, we first note that vitality tends to become less vital, that systems with more options move toward systems with fewer options so as to reduce uncertainty, and that more certain outcomes are more common. Hence our brain likes to predict what will happen.

1. Domination, from dominance to subordination, is always some renunciation and limitation of freedom, usually for security or efficiency. More deeply, the appearance of dominance arises through repulsive forces of uncertainty and as a process of principled parsimony of information. This is how efficiency arises: from converting the potential quantity of possibilities into concrete, from larger to smaller information, like a pickaxe that needs momentum to dig deeper into the ground. You need to be more vital to become more organized, more successful, and then less vital (Degeneration).

As truth can be diluted with lies, the lie becomes more attractive than the truth (The Truth). Lies are fickle, but they spread faster and last longer than they deserve. We prefer to listen to them rather than the truth, so reading fiction (novels and short stories) is more interesting to us than theorems. On the other hand, the search for regularities is rooted in the principle of less information, which manifests itself in various ways: in the need for efficiency, in the attraction of riding on rails, and in dominance by or over phenomena.

2. Physical action (ΔE⋅Δt) is equivalent to information, but energy change and the flow of time are not equal from the point of view of its "saving." When the same action occurs in a place where time passes more slowly, it will have longer periods (larger Δt), slower oscillation, and less frequent events, so the place will appear as one of less information. Places with a slower flow of time are more gravitationally attractive; we know that, and now we see the same in physical action. Under such an expectation would fall the irresistible attraction of the timeless laws of nature, wherever a phenomenon recognizes them.

Diversity is the second principle of the current theory. It reduces the density of information (Extreme), and yet, without it, there is neither uncertainty nor Gödel's incompleteness, and now we see — nor mastery of the substance. For unconditional attachment to exist, there must be infinite periods (Δt → ∞). I take that conclusion with a reservation, until further notice, until something stronger appears to answer this or a similar question.

3. Entropy is also an interesting example for this question when looking at the ways it would be treated in this theory. Boltzmann (1877) developed the idea of the flickering of molecules in the phenomenon of temperature and heat, which is transmitted to the environment and therefore weakens. The expansion and drop of heat are thus phenomena of the consumption of this flickering by transmission. By reducing oscillation, I add, the information it contains is reduced, which means that less information corresponds to greater entropy — exactly the opposite of the usual (Shannon's) belief.

If we stick to this unusual understanding of the oscillation of molecules, it turns out that the intensity of the oscillations matters more for greater information, i.e., that in the expression of action (ΔE⋅Δt) longer periods (Δt) may be more attractive. This means, for example, that a photon of lower energy carries less information or, as remains for us to conclude, that not every quantum of action is equally informative. But more about that later.


Question: Can you explain the sufficiency proof of negation, disjunction, and conjunction (Triads, 1) using conjunctive and disjunctive forms?


Answer: In the picture on the right is a sketch of the intersection and union of sets, which are the equivalents of the conjunction A ∧ B and the disjunction A ∨ B of statements A and B. In addition, the complement A' of a set A consists of all elements outside the set A; correspondingly, the negation ¬A of a statement will be true (⊤) if A = ⊥ and false (⊥) if A = ⊤. I state this so you can more easily transfer the following proof from logic to its set-theoretic equivalent. A statement is a sentence that can be only true or false, with no third option.

1. Let the statement f(A, B) be given with two arguments, the statements A and B. When f is true iff (if and only if) both arguments are true, then f = A ∧ B, which is easy to check by giving the arguments all possible values:

A    B    f = A ∧ B
⊤    ⊤    ⊤
⊤    ⊥    ⊥
⊥    ⊤    ⊥
⊥    ⊥    ⊥

When the statement f is true iff the first argument is true and the second is false, then f = A ∧ ¬B, which we also check for all argument values:

A    B    f = A ∧ ¬B
⊤    ⊤    ⊥
⊤    ⊥    ⊤
⊥    ⊤    ⊥
⊥    ⊥    ⊥

Similarly, the statement f that is true iff A is false and B is true is f = ¬A ∧ B. Likewise, a statement f that is true iff both statements are false will be f = ¬A ∧ ¬B. Check this in a table, as with the previous two cases.

2. When the statement f(A, B, C) has three or more arguments, we work analogously. Say, if f is true iff A = ⊤, B = ⊥, and C = ⊥, then f = A ∧ ¬B ∧ ¬C. Check that with the table, now with 2³ = 8 rows. In general, a statement of n = 1, 2, 3, ... arguments, f(A₁, A₂, ..., Aₙ), which is true iff Aₖ is true (respectively false), is

f = ... ∧ Aₖ ∧ ... (respectively ... ∧ ¬Aₖ ∧ ...),

where the other arguments should be inserted analogously in place of the three dots. The check table will have 2ⁿ rows.

3. Findings 1 and 2 are further combined. When a statement f(A, B) of two arguments is true if both are true, or if the first is true and the second is false, while in all other cases (the remaining two) it is false, this gives

f = (A ∧ B) ∨ (A ∧ ¬B).

We notice how we can combine these alternative, therefore disjunctive, possibilities. These are not the only ways of forming the statement function f with its given domain and codomain, input and output. We form them using the conjunctive and disjunctive normal forms, or otherwise, simply because the options are finite and actually few.
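The combined statement can be verified over all four argument values; a small check (incidentally, the table shows that this particular f reduces to A alone):

```python
from itertools import product

# f is true iff (A and B), or (A and not B)
f = lambda a, b: (a and b) or (a and not b)

for a, b in product([True, False], repeat=2):
    # the disjunction of the two conjuncts is equivalent to A itself
    assert f(a, b) == a
```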

With the help of these combinations, we can build even more complex structures. Let the expression g(f₁, f₂, ..., fₙ) be a compound statement of the statement functions f₁, f₂, ..., fₙ. Each fₖ, with k = 1, 2, ..., n, can (repeatedly) be true or false. We have seen that each such fₖ can always be represented by some con-disjunctive normal form described above. Then the given compound statement g can be represented by a similar form of these fₖ.

4. Finally, we note that statements of the highest complexity are of this form g, so we can always represent them using only negation, disjunction, and conjunction. Physical information always comes in packages and is equivalent to action, so actions are quantized and also describable in the ways given here. The same holds with the complement, intersection, and union of sets.


Question: How is it that there is no uncertainty in the past or theorems, and it is the supposed essence of the fabric of this world?


Answer: I have written about the past several times, but something can always be added. A discrete sequence of our present is built along the continuum of possibilities. The increasingly long tail of its "memories" is due to the dilution of its uncertainty and the preservation of the information of the whole. However, the past becomes ever paler, so the old information, behind the "beginning" of time, would disappear, and the entire process of data transmission, the flow of time from then until today, would become a "black box." It both is and is not an "illusion" of time, depending on whether the possibility of seeing it from the outside requires physical proof.

The formation of the past is exactly as uncertain as the development of future events. What we call "the past" also changes (Genesis). It changes not only by fading with the information it sends to the present (Past II), but also in the range of the "black box," from the oldest time up to the youngest, the current present. Let us see, from the following cosmological prediction, how far, and how physically, this change can go.

The universe is expanding and expanding faster and faster. More precisely, more and more distant galaxies are moving away from us faster and faster (statistically). This information theory explains it (for now) by the more frequent formation of space (bosons) from matter (fermions) during random transitions of particles from one form to another (Quantum Transition). By the way, this phenomenon is a "consequence" of (unknown) dark energy.

Moving away, the most distant galaxies will cross the "event horizon," i.e., the boundary sphere beyond which their light can no longer reach us. Over billions of years, there would be more and more of them, until our galaxy is left alone in the darkness that surrounds it. How could one then convince a possible intelligence that this "only" galaxy was once among many (today we count more than 200 billion of them)? Not only can the pasts of different presents be very different, but to the minds of that time, the beginnings of their pasts will be no less uncertain than the time before the "Big Bang" of the universe is from our point of view today.

Parallel realities, or pseudo-realities of this theory (Dimensions), are realizations that could have been, but are not, outcomes of this present. If we could follow such a "present," we would see that it repeats the previous story of the past and that it, too, is just one more countably infinite sequence of events from the continuum of possibilities. Such a sequence is as uncertain to us as our own. The past remains uncertain both for them and for us, each in its own way.

When we reach infinity in abstractions, as in set theory with cardinals (infinite numbers), we notice that some properties of finite sets do not apply to them. For example, an infinite set can be equivalent to one of its own proper subsets, it cannot be "physical," and the conservation law does not apply to it. That "world" does not have to be made up of sequences of events (the continuum is an uncountable infinity), and talking about its "flow of time" makes no sense. Every single infinity extracted from it will be one of countless chances, 1 : ∞, to happen, and therefore these are worlds of endless uncertainties.

Theorems as we know them are derivatives of the axioms we assume for them, and, according to Gödel's discoveries mentioned above, there is zero chance and zero certainty (p = 1 : ∞) for them in those seas of possibilities. Therefore, there is uncertainty in both the past and the theorems, if uncertainty is assumed in the fabric of this world, broadly speaking.


Question: There is something paradoxical about the "event horizon" ... ?


Answer: Questions arise about a couple of absurd places in the cosmological interpretation of the most distant parts of the universe, the galaxies that a telescope can see. We here set aside the topics of flat space (Distances), as well as its homogeneity, that there is no special location, and its isotropy, that there is no particular direction in the cosmos (The Cosmological Principle). This old "principle" is being challenged by recent observations.

1. The questions refer to the very beginning of the universe, to the Big Bang about 13.8 billion years ago—that point-like appearance of the universe that should now be visible at the ends of all directions around us. I would avoid discussing known concrete disputed questions or mystical places in the physics of the cosmos, which would make the short answer too long. Instead, I will immediately sketch a proposal for one of the possible explanations within this theory of information.

In the continuum of possibilities, in that vast "unreal" assemblage, the seemingly ongoing present and past make up our at most countably infinite series of physically real space-time events. Their distant beginnings and endings are fiction. Here's how that is possible.

2. It is assumed that at the time of the Big Bang the universe was a super-hot "mush" of pure energy and space that expanded inflationarily, at speeds greater than light. Then from those particles (bosons) all the others (fermions) were created; the Higgs mechanism, still recognized today, appeared among the first. On this supposedly uncontroversial part, my addition to information theory further builds.

Fermions are now slightly more likely to become bosons than the other way around, so more and more space appears between galaxies, making them look as if they are moving away from each other. We see the most distant galaxies in their early past, and those that disappear beyond the "event horizon" undergo an actual disappearance (and then emergence) of physical substance. Behind that horizon there is no "physical memory" (Chanciness), but rather the continuum of possibilities (the world of ideas) itself.

3. By way of explanation, it is possible to add that during the period of "inflation" not only was the speed of expansion greater than that of light, but time itself ran faster. Time could be slowing down due to greater certainty now than then, so we see the speed of receding galaxies due to the relativity of our perception. These two accounts, of expanding space and of changing speed, can both be equally valid explanations, just as the number π = 3.14... can be calculated from the geometric ratio of a circle's circumference to its diameter, or from the probability experiment of Buffon's needle problem. The same applies to some other "alternative" (additional) interpretations.

For example, as the mass of celestial bodies in our environment lags behind, gravity slows them down, making them stay behind in time. Such slow "dilutions" are slowest for black holes, around their "event horizon." That is why we, and especially our galaxy, remain the last physical "lost" case. The credibility of this theory is another issue, I emphasize, as it is with entirely different ones.


Question: How to measure "unreal" in perceptions?


Answer: We study information and perception using numbers. As an example of "information of perception" I take measurement in quantum mechanics (Summary), and the same appears in the scalar product of vectors, the sum of products.

1. If we stick to the particle-waves of quantum physics and the atoms of microphysics in understanding the macro-world, considering that macro-bodies also have them, then the aforementioned scalar product of vectors has a consistent place as the information of perception. The complex vectors become states of the physical system, the linear operators physical processes, and the complex coefficients of the micro-world witness significant bypasses (Bypass); in the macro-world this is less visible.

2. By the way, perception is the ability to see, hear, or become consciously aware of something; it is also how something is observed, understood, or interpreted. Perception includes all those processes that give us information about our environment — sight, hearing, feeling, taste, and smell. It is also a transformation of becoming aware of situations and adding meaningful associations to sensations. Also, it is the process of receiving, selecting, organising, interpreting, checking, and reacting to sensory stimuli or data, as well as the procedures by which individuals organise or interpret their sensory impressions to give meaning to their environment. It is pretentious to reduce so much to numbers, but it is surprisingly possible.

3. Synergy is an interaction that gives the whole a greater quantity than the simple sum of its parts (Greek: συνεργος, to work together). In the "sum of products"

Q = ax + by + cz + ...

where the strings (vectors) u = (a, b, c, ...) and v = (x, y, z, ...) are some states of the subject and the object, so that Q = uv is the "information of perception." The information is greater the larger the scalar Q. If a given physical system has more information (quantity of options) than dead physical matter, let us call that surplus vitality. Vitality, among other things, also arises from synergy. Here's how.

4. We know from algebra that the scalar product (Q = uv = |u||v| cos θ) of given vectors increases as the angle (θ) between them decreases, that is, when the corresponding coefficients of the vectors are more nearly proportional. We also see this in personality traits, i.e., classifications of winning game strategies (Reciprocity), which I based precisely on vitality. Vitality is then higher if we combine the corresponding coefficients of the same sign of the two opponents (subject and object), which is easy to calculate:

... + 3⋅2 + 2⋅1 + ... < ... + 5⋅3 + ...

Here we have combined 3 + 2 = 5 and 2 + 1 = 3, so instead of 6 + 2 = 8 we have the larger term 5⋅3 = 15 of perceptual information, with all other terms unchanged.
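Item 4 can also be illustrated geometrically; a small sketch (the vector values are my own examples, and the helper names are mine) showing that, for object vectors of equal length, a smaller angle between subject and object means a larger perception information Q:

```python
import math

def dot(u, v):
    """Scalar product Q = uv, the 'information of perception'."""
    return sum(a * x for a, x in zip(u, v))

def angle(u, v):
    """Angle theta between u and v, from Q = |u||v|cos(theta)."""
    norm = lambda w: math.sqrt(sum(c * c for c in w))
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

u = (3.0, 2.0)   # subject state (example values)
v = (2.0, 1.0)   # object state, nearly proportional to u
w = (1.0, 2.0)   # object state of the same length, less aligned

# Same |v| = |w|, so only the angle decides: smaller theta, larger Q.
assert angle(u, v) < angle(u, w)
assert dot(u, v) > dot(u, w)   # 8 > 7
```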

5. Thus it happens that stores often increase turnover when combined in a shopping center (although then in greater competition) compared with operating separately, or that allied military units (with a common goal) have more success than the aggregate of the same units acting separately. In general, synergy arises when like aspirations are combined, as in the constructive interference of waves, whose combined intensity exceeds the sum of the individual intensities. There are no such positive additions above the simple sum in cases of unfolding opposites, when the information of perception, and vitality, decline.

6. An example of declining vitality, which means the power of the game to win, is:

... + 7⋅5 + (-2)⋅(-2) + ... > ... + 5⋅3 + ...

because the critical sum on the left is 39 and on the right 15, while all other terms of this sum of products remain the same. Namely, the state 5 = 7 + (−2) on the left is split into opposed parts, matched by the corresponding split 3 = 5 + (−2) of the other side.
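Both numeric inequalities (items 4 and 6) can be checked with the same sum of products; the helper name is mine:

```python
def dot(u, v):
    """Sum of products of corresponding coefficients."""
    return sum(a * x for a, x in zip(u, v))

# Item 4: merging same-sign coefficients (3 + 2 = 5 and 2 + 1 = 3)
# raises the perception information from 8 to 15.
assert dot([3, 2], [2, 1]) == 8
assert dot([5], [3]) == 15

# Item 6: splitting 5 = 7 + (-2) and 3 = 5 + (-2) into opposed
# pairs raises the critical sum from 15 to 39.
assert dot([7, -2], [5, -2]) == 39
assert dot([7, -2], [5, -2]) > dot([5], [3])
```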

7. Applied to complex contests, this means that it is worth detailing or breaking down "mostly good" into "good" and "bad", and countering that with the appropriate, proportionate "good" and "bad" at our disposal when we want to level up the game. In practice, this would be a divide-and-conquer strategy, or, on the other hand, a need for more subtlety during gameplay to increase its quality. Even though someone is recognized mainly as a friend, it is good to support the good and prevent the bad traits if we want to have an even more successful joint performance.

8. It is about the reasons for "Information of Perception" as I defined it ten years ago, as well as measuring the "unreal" with it in perceptions. That surplus from synergy, or deficit from conflict, is precisely the measurable increase and decrease of vitality, that is, those fictions that we consider physically immeasurable. Both of those assumptions are well understood. For example, the first is the need for both sun and rain for the growth of plants, and then, like the second, is the need for both praise and criticism to increase success.

