August 2024 (Original ≽)


Avoidance

Question: Can the "saddle point" lead to defeat, and then what?

Avoidance

Answer: Of course. That is when it occurs in classical game theory (Equilibrium II). Let the payoff table be:

I \ II A B
A -2 -3
B 1 2

When player I chooses move A and player II also chooses move A, the former loses 2 points. If II instead plays move B, the former will lose 3.

However, when player I chooses move B, player II loses in both cases: if he plays A, he loses 1 point, and if he plays B, he loses 2. There is no escape from the saddle point (1, at position B-A), because the second player loses as long as the first plays B. Therefore player II should avoid this game, and the first should try to keep the second in it. Methods of such deception, of luring the opponent into an unfavorable strategy, exist (Sneaking), and, whether we are aware of it or not, we use them.
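The saddle-point reasoning above can be checked mechanically. A minimal sketch in Python (the helper name `saddle_point` is my own, not from the text), applying the maximin/minimax criterion to the table of player I's payoffs:

```python
# Find a saddle point of a matrix game (payoffs to player I, rows = I's moves).

def saddle_point(matrix):
    """Return (value, row, col) of a saddle point, or None if there is none."""
    maximin = max(min(row) for row in matrix)        # player I's guarantee
    minimax = min(max(col) for col in zip(*matrix))  # player II's guarantee
    if maximin != minimax:
        return None
    for i, row in enumerate(matrix):
        for j, v in enumerate(row):
            # saddle point: minimum of its row and maximum of its column
            if v == min(row) and v == max(r[j] for r in matrix):
                return (v, i, j)
    return None

game = [[-2, -3],
        [ 1,  2]]  # rows: I plays A, B; columns: II plays A, B

print(saddle_point(game))  # (1, 1, 0): value 1 at row B, column A
```

The returned position (row B, column A) matches the saddle point named in the text.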

That is how things stand with a small number of moves, when we can see them all. In contrast, there are complex games (military, political, market) with too many moves or too little time, when we only estimate which of our possible moves would disable the opponent's best move. Then we more often use strategies of discouraging, delaying, and lulling the opponent into apathy or tolerance, all to establish a better position. This creates an additional difficulty (see the following table).

I \ II A1 A2 ... Ak ...
A1 a11 a12 ... a1k ...
A2 a21 a22 ... a2k ...
... ... ... ... ... ...
Ai ai1 ai2 ... aik ...
... ... ... ... ... ...

Even when it seems to player I that it is best to play Ai, given the range aik of player II's possible payoffs, he cannot be quite sure that no unpleasant surprises hide inside it. This is the problem of excess moves, that is, of lack of time or lack of knowledge. In fact, long strings such as ai1, ai2, ai3, ... probably do hide surprises, and courage often pays off. We know from experience that "without risk, there is no profit."

As the vitality of the game rises, the "amount of options" that would justify the risk increases; it grows with the sums of products, with the perception information. The point is that it is easier to know how to raise the vitality of the competition (by reciprocating) than to know all the possible responses of the opponent that would make his job more difficult, increase his chance of error, or discourage him from continuing the game.

In a multidimensional game, unlike chess and even simpler ones, against the best move, the sequence Ai = (aik), the adversary can set up a series of opportunities (bk) corresponding to his capacities, so that the sum:

\[ S_i = a_{i1}b_1 + a_{i2}b_2 + ... + a_{ik}b_k + ... \]

is reduced. He substitutes a negative (bk) for a positive coefficient (aik), or conversely a positive for a negative one, so that he meets "good" with "evil," or answers "evil" with "good."

Also, one can reduce the perception information (Si) and the level of our game. In all those cases it turns out that we have missed the optimal response, with weaker mastery, a drop in achievement, and possibly a lost game as the consequence. The rest are details I have explained earlier.
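How an adversary can shrink such a sum of products is easy to illustrate numerically. A hedged sketch with made-up coefficients (none of these numbers come from the text): pairing coefficients of opposite signs flips the sign of the perception sum.

```python
# Illustrative perception-information sum S_i = a_i1*b_1 + a_i2*b_2 + ...
# All numbers below are invented for the demonstration.

def perception(a, b):
    """Sum of products of two coefficient sequences."""
    return sum(x * y for x, y in zip(a, b))

a_i = [3, -1, 2, -2]        # player I's coefficients along strategy A_i

b_aligned = [2, -1, 1, -1]  # II matches signs: "good" meets "good"
b_opposed = [-2, 1, -1, 1]  # II pairs opposite signs to reduce the sum

print(perception(a_i, b_aligned))  # 11
print(perception(a_i, b_opposed))  # -11
```

The same magnitudes, differently paired, turn a strongly positive sum into an equally negative one.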

Interactions

Question: Do you see the player as part of the "game" itself, and not just as an external subject playing on an object?

Interactions

Answer: Good observation! Highly complex games change their opponents in more visible ways; they become significant interactions of the participants. Physical information and actions are otherwise equivalent (within this theory). Every piece of such information changes reality at least a little, and those who play, the vital subjects, have an excess of information.

Attracting "good" and repelling "evil," the major-league vitality strategy of the game (Win Lose), will not only move the player into a "good" environment but also change the environment itself. In quantum mechanics (Summation), the act of measurement is an exchange of information between the object of measurement and the measuring device. This transfer of information is a subtraction of uncertainty from the previous state, thereby redefining it. By observing the electron, its previous path is formed — claimed Heisenberg in the disputes among the founders of quantum mechanics — and this does not mean that "the Moon does not exist when we do not look at it" (Einstein disputing that position), because "perception is existence" (we say here). The Moon has someone or something to "notice" it even when many of us do not.

I write "notice" in quotation marks because the point of this theory, the equivalence of communication and interaction, is still widely unknown. Even if the reduction of uncertainty by the emission of information, and with it the redefinition of the electron's path, remains a debatable interpretation of the microworld until this theory is accepted in physics, we still see the effects of communication on people and the environment around us — as if they were real physical interactions.

Therefore, in complex, multidimensional games, it is unrealistic to consider the player separately from the game, like a chess player from chess pieces, because he is also what he plays, the way he plays (Traits), and then the result of the game. All that, more or less. Even if these changes seemed imperceptible to us, they exist, and the information of perception Si = ai1b1 + ai2b2 +... + aikbk +... records them.

Binary Example

Question: Can you walk me through this concept of perceptual information in detail, through a simple example?

Binary Example

Answer: Yes, and I hope I don't burden the blog with math. Take, for example, a simple stochastic matrix, as used in Markov chains:

\[ M = \begin{pmatrix} 0.7 & 0.6 \\ 0.3 & 0.4 \end{pmatrix}, \]

has a saddle point 0.6 at position (1, 2). The smallest numbers in the first and second rows are 0.6 and 0.3, and the larger of them is 0.6. The columns of this matrix are probability distributions (because it is stochastic), the rows are not, and larger numbers accumulate in the first row because of the saddle point.

In the equation q = Mp, by which M: p ↦ q maps the binary distribution p to q:

\[ \begin{pmatrix} 0.7 & 0.6 \\ 0.3 & 0.4 \end{pmatrix} \begin{pmatrix} 0.5 \\ 0.5 \end{pmatrix} = \begin{pmatrix} 0.7\cdot 0.5 + 0.6\cdot 0.5 \\ 0.3\cdot 0.5 + 0.4\cdot 0.5 \end{pmatrix} = \begin{pmatrix} 0.65 \\ 0.35 \end{pmatrix} \]

we see a slight increase in the upper coefficient of the result. However, the upper coefficient cannot go too far, because the same coefficient 0.6 then acts on the smaller lower number, so:

\[ \begin{pmatrix} 0.7 & 0.6 \\ 0.3 & 0.4 \end{pmatrix} \begin{pmatrix} 0.8 \\ 0.2 \end{pmatrix} = \begin{pmatrix} 0.7\cdot 0.8 + 0.6\cdot 0.2 \\ 0.3\cdot 0.8 + 0.4\cdot 0.2 \end{pmatrix} = \begin{pmatrix} 0.68 \\ 0.32 \end{pmatrix} \]

and there is no exaggeration. In cascade mapping, the matrix M converges to a "black box": M(M(...M(p)...)) = Mⁿp → M0p, as is generally the case with Markov chains. Here is one way to calculate this M0.

In the expression M = PDP⁻¹ (Diagonalization), that is:

\[ \begin{pmatrix} 0.7 & 0.6 \\ 0.3 & 0.4 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0.1 \end{pmatrix} \begin{pmatrix} \frac13 & \frac13 \\ \frac13 & -\frac23 \end{pmatrix}, \]

in the auxiliary matrix P = [v1, v2] the columns are the eigenvectors v1 and v2, belonging respectively to the eigenvalues λ1 = 1 and λ2 = 0.1, which in turn are the diagonal elements of the matrix D. The matrix P⁻¹ is the inverse of the auxiliary P. Check that indeed Mvk = λkvk for both k = 1, 2, that the product PP⁻¹ = I is the unit matrix, and, of course, that the above product is correct.

From the above expression it follows that Mⁿ = PDⁿP⁻¹ for all n = 1, 2, 3, ..., so:

\[ \begin{pmatrix} 0.7 & 0.6 \\ 0.3 & 0.4 \end{pmatrix}^n \to \begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} \frac13 & \frac13 \\ \frac13 & -\frac23 \end{pmatrix} = \begin{pmatrix} \frac23 & \frac23 \\ \frac13 & \frac13 \end{pmatrix}, \]

when n → ∞. This last one, the boundary matrix, is a black box M0. It transforms each distribution vector into a (2/3, 1/3) distribution, so we cannot tell from the output what the input was.
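This convergence is easy to verify numerically. A small pure-Python check (the helper `matmul` is my own) that the powers of M approach the black box computed above:

```python
# Check that powers of M approach the black box M0 = [[2/3, 2/3], [1/3, 1/3]].

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[0.7, 0.6],
     [0.3, 0.4]]

P = M
for _ in range(50):  # M^51; the second eigenvalue 0.1 dies out very quickly
    P = matmul(P, M)

print(P)  # each column is approximately (2/3, 1/3)
```

Whatever distribution the iteration starts from, the output columns settle on (2/3, 1/3), which is why the input can no longer be recovered.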

In general, the row of a stochastic matrix that contains its saddle point is the row whose coefficient in the output distribution vector tends to a number greater than 0.5. The opponent can try to avoid that increase in the probability of the given row, the effect of perception information discussed in the previous two answers. If he succeeds, the attacker will miss the optimum and, with the wrong reciprocity, weaken his game. However, the constraints on probability distributions do not leave much room for such maneuvering.

As a rule, the possibilities of choice grow with the complexity of the system and of the game. So does the vitality (amount of options) of the player, whose possession distinguishes the ranges of different individuals.

Binary Example II

Question: What would a generalization of this "Binary Example" look like?

Answer: In order to have a binary stochastic matrix with a saddle point at position (1, 2), the link of the Markov chain should be:

\[ M = \begin{pmatrix} a & b \\ 1-a & 1-b \end{pmatrix}, \]

where 0.5 < b < a < 1. The eigenvalues are the solutions of Mv = λv, i.e. (M - λI)v = 0, which means that this determinant is zero:

\[ \begin{vmatrix} a-\lambda & b \\ 1-a & 1-b - \lambda \end{vmatrix} = 0. \]

We immediately see two solutions, λ1 = 1 and λ2 = a - b, because the determinant is zero when the two columns are proportional. Returning these lambdas to the eigen-equation, we find the corresponding eigenvectors v1 = (b, 1 - a) and v2 = (1, -1). Hence the diagonalization M = PDP⁻¹, that is:

\[ \begin{pmatrix} a & b \\ 1-a & 1-b \end{pmatrix} = \begin{pmatrix} b & 1 \\ 1 - a & -1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ 0 & a - b \end{pmatrix} \cdot \frac{1}{1 + b - a}\begin{pmatrix} 1 & 1 \\ 1 - a & -b \end{pmatrix}, \]

which is easy to check by direct multiplication. The columns of the matrix P are the eigenvectors of the matrix M; on the diagonal of the matrix D are the eigenvalues; and the matrix P⁻¹ is the inverse of P. Further, Mⁿ = PDⁿP⁻¹ for all exponents n = 1, 2, 3, ..., so:

\[ \begin{pmatrix} a & b \\ 1-a & 1-b \end{pmatrix}^n \to \begin{pmatrix} b & 1 \\ 1 - a & -1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \cdot \frac{1}{1 + b - a}\begin{pmatrix} 1 & 1 \\ 1 - a & -b \end{pmatrix}, \]

when n → ∞, because 0 < a - b < 0.5 and (a - b)ⁿ → 0. Hence:

\[ M^n \to \frac{1}{1+b-a}\begin{pmatrix} b & b \\ 1 - a & 1 - a \end{pmatrix} = M_0 \]

and for verification, let's compare this general solution with the previous specific one: a = 0.7, b = 0.6, 1 + b - a = 0.9, and all the results coincide with the previous ones.
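The general limit can be spot-checked the same way for a few admissible pairs (a, b); a sketch under the stated constraint 0.5 < b < a < 1:

```python
# Spot-check of the limit M0 = [[b, b], [1-a, 1-a]] / (1 + b - a)
# against iterated multiplication.

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for a, b in [(0.7, 0.6), (0.9, 0.55), (0.8, 0.75)]:
    M = [[a, b], [1 - a, 1 - b]]
    P = M
    for _ in range(200):  # (a - b)^201 is negligible for these pairs
        P = matmul(P, M)
    d = 1 + b - a
    M0 = [[b / d, b / d], [(1 - a) / d, (1 - a) / d]]
    assert all(abs(P[i][j] - M0[i][j]) < 1e-9
               for i in range(2) for j in range(2))
print("limit formula confirmed")
```

The first pair (0.7, 0.6) reproduces the specific example, and the other two pairs are arbitrary admissible choices.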

I will repeat once again that the real scope of "perception information" lies in Russell's paradox (there is no set of all sets), in Gödel's theorem (there is no theory of all theories), and in the objectivity of uncertainty; in these overly simple examples it can be seen only in hints.

Dominance

Question: What is "strategic dominance"?

Dominance

Answer: Strategic dominance, in game theory, is a situation where one strategy is the best for a player regardless of the opponent's play. A weakly dominant strategy is one that is never worse than any other strategy and, against at least one combination of the opponents' moves, is strictly better.

1. Let's imagine two companies, I and II, which complement each other in joint projects but can also work separately, with another bidder. Say masonry with interior decoration, production with delivery of goods, software design with hardware procurement, or similar. When choosing jobs, they cannot know what the other will do, but they have payoff-table estimates, such as:

I \ II A B
A 5; 5 4; 2
B 2; 4 3; 3

If both firms do business with bidder A, each earns 5 monetary units. If both choose bidder B, each earns 3. However, if the first goes to A and the second to B, then, working with other associates, the first earns 4 and the second 2. It is the other way around if the first goes to B and the second to A: the first earns 2 and the second 4. Predictably, the two companies should cooperate with the common bidder, A.

The dominant strategy of each firm is A, hence the pair (A, A). A dominant strategy is a course of action a player should take regardless of the moves of other players; it is the best response to all possible actions of the opponent. The same (A, A) is also a Nash equilibrium, a choice from which no player has an incentive to deviate given the other player's strategy; the best available option for each party.

2. The previous example is not a zero-sum matrix. The following one, the matrix of the Rose-Colin game, is zero-sum:

Rose \ Colin A B C D
A 12 -1 1 0
B 5 1 7 -20
C 3 2 4 3
D -16 0 0 16

We find that C-B (Rose C, Colin B) stands out as the most cautious pair of strategies. Playing C guarantees Rose at least 2 points no matter what Colin does. Similarly, Colin playing B is sure to lose no more than 2, so neither player has an incentive to change his strategy. Therefore it is a Nash equilibrium. It is also a saddle point, because for Rose it is the maximum of the row minima, and for Colin the minimum of his column maxima.

Also, we say that C is the dominant strategy for Rose and B is the dominant strategy for Colin. A zero-sum matrix game has a saddle point if some outcome is the minimum of its row and the maximum of its column. If the matrix game has a saddle point, both players should play it, although a rational player will generally avoid dominant strategies, as they limit maneuvering. By repeating this game, Colin keeps losing 2 points each time.

3. An excellent example of game theory, the Prisoner's Dilemma was developed in the 1950s by RAND Corporation mathematicians Merrill Flood and Melvin Dresher, during the Cold War, and was named a little later by game theorist Albert Tucker. Some have speculated that the "Prisoner's Dilemma" was created to simulate the conflict strategy between the US and the USSR during the Cold War.

The prisoner's dilemma is a situation in which two parties, separated and unable to communicate, must choose whether to cooperate or not. The greatest reward for each side comes if both choose to cooperate. The classic interpretation is as follows:

I \ II Confession Denial
Confession -10; -10 -1; -25
Denial -25; -1 -3; -3

Two bank robbers, I and II, were arrested and are being interrogated in separate rooms. The authorities have no other witnesses and can prove the crime against them only if they manage to convince at least one of the bandits to betray the other and testify about the crime. Each of the robbers is faced with the choice of cooperating with their accomplice and remaining silent or defecting from the alliance with a confession to the prosecution.

If they cooperate and remain silent, both denying guilt, then the authorities will only try them on a lesser charge, and they will each receive three years in prison. If one testifies and the other does not, then the one who testifies will have only one year, and the other will have 25 years of punishment. However, if both testify against the other, each will receive 10 years in prison for being partly responsible for the robbery.

Confession is the dominant strategy for each of the sued suspects, and thus (Confession, Confession) is a mutually dominant pair that gives them the payoff vector (-10; -10). It is not Pareto efficient, because both would do better with (Denial, Denial) and its payoffs (-3; -3); an outcome is Pareto efficient when no individual's profit can be increased without decreasing someone else's.
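The dominance structure of this table can be checked mechanically; a minimal sketch, with the move encoding as my own assumption:

```python
# Check dominance in the prisoner's dilemma table above.
# payoff[i][j] = (years for I, years for II); move 0 = Confession, 1 = Denial.

payoff = [[(-10, -10), (-1, -25)],
          [(-25, -1), (-3, -3)]]

for j, name in enumerate(("Confession", "Denial")):
    confess = payoff[0][j][0]  # player I's payoff if he confesses
    deny = payoff[1][j][0]     # player I's payoff if he denies
    print(f"II plays {name}: I gets {confess} confessing, {deny} denying")
    assert confess > deny      # confessing is better against either move

# (Confession, Confession) yields (-10, -10), Pareto-inferior to (-3, -3).
```

By symmetry the same comparison holds for player II, which is exactly what makes the dilemma a dilemma: individually rational play lands both on the worse outcome.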

4. A dominant strategy directs the player towards win/lose and therefore gives the game a lower vitality. When such an equilibrium is the most favorable option each of the players has, whether in moves towards his greatest success or in avoiding a greater failure, a mutual pull appears. We then see a further cause of that pull in the principled striving for less information, less freedom (amount of options), and orderliness as a general cosmic phenomenon.

Without vitality (excess freedom relative to the physical body) there is no play, but nature spontaneously strives for less freedom (minimalism), so we can see "strategic dominance" as a kind of anti-game. Such "dominance," like a blind tit-for-tat strategy (without surprises), is good if you are on the winning side, but something to escape from, even at a cost, if you are the losing party.

Mixed

Question: What does a "mixed strategy" consist of?

Mixed

Answer: Practically, a mixed strategy is anything between "random" play and some "strategic dominance." This is because games belong to the vital, those with the gift of surplus options compared to dead physical substance, as well as the curse of making mistakes, that is, the impossibility of following a given distribution exactly. I will explain this using the game "Paper, Rock, Scissors."

Players simultaneously show their hands. The one who reveals a fist (stone) beats the one who extends the index and middle finger (scissors) and loses to the outstretched hand (paper), because stone breaks scissors and paper covers stone. The game matrix is as follows:

I \ II Paper Stone Scissors
Paper 0 1 -1
Stone -1 0 1
Scissors 1 -1 0

Abstractly, the odds of showing paper, stone, or scissors are equal. In practice, however, we always miss the expectations at least a little, so the one who favors stone slowly loses to the one who favors paper and slowly beats the one who favors scissors. The default is "random," and yet the game is more or less "dominant."

If a person tries to simulate the outcomes of a probability distribution from memory, he must guess its mean and variance, which is almost necessarily bound to fail. Just think how difficult it is to trace the curve of a Gaussian bell to understand why "random" events generated by rote almost always fail tests of randomness. We use this when testing the decimals of an allegedly irrational number, when checking the correctness of bookkeeping without knowing the company's methodology, or when checking the honesty of a lottery organizer. It makes favorites, or losers, of players in games where there should be none.

Statistical research shows, for example, that guys in the game "Paper, Rock, Scissors" prefer "stone" (35.4%), and girls more often start a series with "scissors." We have long known that this game, too, has successful strategies; here we are just adding some whys and hows. The greater tendency of the vital towards unpredictable errors forms their signature and uniqueness, and gives the opponent the possibility of strategic dominance.
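Such a bias can be quantified. A sketch of expected payoffs against a stone-heavy opponent (only the 35.4% figure is from the text; the other probabilities and the counter-strategy are illustrative assumptions):

```python
# Expected payoff to player I in Paper-Rock-Scissors under mixed strategies,
# using the zero-sum matrix from the text.

MOVES = ("paper", "stone", "scissors")
A = {("paper", "paper"): 0, ("paper", "stone"): 1, ("paper", "scissors"): -1,
     ("stone", "paper"): -1, ("stone", "stone"): 0, ("stone", "scissors"): 1,
     ("scissors", "paper"): 1, ("scissors", "stone"): -1, ("scissors", "scissors"): 0}

def expected(p, q):
    """Expected payoff to I when I mixes with p and II mixes with q."""
    return sum(p[m] * q[n] * A[(m, n)] for m in MOVES for n in MOVES)

uniform = {"paper": 1/3, "stone": 1/3, "scissors": 1/3}
biased = {"paper": 0.323, "stone": 0.354, "scissors": 0.323}  # stone-heavy II
counter = {"paper": 1.0, "stone": 0.0, "scissors": 0.0}       # I plays paper

print(expected(uniform, biased))  # ~0: uniform play cannot be exploited
print(expected(counter, biased))  # positive: paper exploits the stone bias
```

The uniform mix is the equilibrium and earns nothing against anyone; only a deviation from it, here the stone bias, opens the door to a dominant counter-play.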

The described game is an unusual example of intransitivity of the order relation (Chasing tail), and behind it is another way of explaining the existence of information packages contrary to the principle of its minimalism. Here it serves as a convenient example of a matrix game without a saddle point and of the paradoxical relationship between vitality and uncertainty, but it is not the simplest example of mixed strategies, which are found everywhere. The next one is simpler but therefore boring to players.

For example, two players each hold a coin in their palm; if both show the same (head-head or tail-tail), the point goes to the first, and when the coins differ (head-tail, tail-head), the point goes to the second player. This can be played without the players' will, using random number generators, and then it is an example of a truly random game and, of course, of a matrix game without a saddle point.
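That coin game, played by generators instead of people, can be sketched as follows (the seed and sample size are arbitrary choices of mine):

```python
# Matching-pennies game played by two random number generators.
import random

rng = random.Random(2024)  # fixed seed so the run is reproducible

score = 0  # +1 to player I on a match, -1 on a mismatch
n = 100_000
for _ in range(n):
    same = rng.randint(0, 1) == rng.randint(0, 1)
    score += 1 if same else -1

print(score / n)  # near 0: neither player has an edge in the long run
```

With genuine randomness on both sides there is no bias to exploit, which is exactly why such a game is "true" but boring.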

Grizzly

Question: What do you mean by "relative success"?

Grizzly

Answer: Do you know the joke about two men being chased by a grizzly bear? The first moans: "It's not worth running away; he's faster than us!" and the second replies, running: "You're wrong; I don't have to be faster than the bear; it's enough to be faster than you!" The story captures the essence of a tournament, whose final result is the sum of the points of individual parties, and depicts "relative success."

The calculation of strategies reduces to assigning to the coefficients of the game matrix the differences of the first player's gains aik and the second player's gains bik, the numbers sik = aik - bik. The strategies (of the first player) are the rows S1, S2, S3, ..., Sm, vectors which in applications we may also read as "states" of the game.

Looking at the following general table, we notice that a "zero-sum" game matrix has coefficients of the form sik = 2aik, because every bik = -aik. In addition, we look for the best strategies (of the first player) among those vectors Si (i = 1, 2, ..., m) whose coefficients sik are all positive, for k = 1, 2, ..., n.

I \ II Q1 Q2 ... Qn
S1 a11 - b11 a12 - b12 ... a1n - b1n
S2 a21 - b21 a22 - b22 ... a2n - b2n
... ... ... ... ...
Sm am1 - bm1 am2 - bm2 ... amn - bmn

Thus the extended Rose-Colin table (Dominance, 2) becomes:

I \ II Q1 Q2 Q3 Q4
S1 24 -2 2 0
S2 10 2 14 -40
S3 6 4 8 6
S4 -32 0 0 32

The table shows S3 as the best of the four strategies, because only it has all coefficients positive. For the optimal strategy, in the manner of the previous saddle point, we choose:

\[ s_i = \min \{s_{i1}, s_{i2}, ..., s_{in}\}, \quad s = \max \{s_1, s_2, ..., s_m\} \]

and, if it exists, we find the value s at some position (i, k) of the given table. In the extended Rose-Colin table, the row minima are s12 = -2, s24 = -40, s32 = 4, and s41 = -32, and the largest of these is the second member of strategy S3.

Imagine now a game with too many combinations, where the opponents have too little time or for other reasons cannot survey them all before answering. If at a given moment the first player must reciprocate with a single move against the opponent's counter-moves u = (u1, u2, ..., un) taken with equal chances, then S1 of the Rose-Colin matrix is as good a strategy as S3, because it gives the same sum of differences (24).

We will say that two strategies, Si and Sj, are equally "relatively successful" in a given game when they have equal sums of coefficients, when:

zi = si1 + si2 + ... + sin = sj1 + sj2 + ... + sjn = zj.

When zi < zj, then we can say that strategy Si is "relatively worse" than strategy Sj. I see the meaning of "grizzly," this method of calculating strategies, not only in complex games but also in tournaments where mistakes and successes of the participants are repeated multiple times and added up throughout the games.
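The "grizzly" sums zi for the extended Rose-Colin table can be computed directly; a minimal sketch:

```python
# Rank the extended Rose-Colin strategies by their coefficient sums
# z_i = s_i1 + ... + s_in, as defined in the text.

S = {"S1": [24, -2, 2, 0],
     "S2": [10, 2, 14, -40],
     "S3": [6, 4, 8, 6],
     "S4": [-32, 0, 0, 32]}

z = {name: sum(row) for name, row in S.items()}
print(z)  # {'S1': 24, 'S2': -14, 'S3': 24, 'S4': 0}

# S1 and S3 are equally "relatively successful" (z = 24), but only S3 has
# all coefficients positive, so only S3 guarantees a gain on every column.
best = [name for name, row in S.items() if all(v > 0 for v in row)]
print(best)  # ['S3']
```

The two criteria, the sum zi and the all-positive test, need not agree, which is the point of the comparison between S1 and S3.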

Theoretically, the number of strategies Si and the number of coefficients sik can grow indefinitely (m, n → ∞), but practice forbids this, in the spirit of the Borel-Cantelli lemma, which says that at most finitely many of them are relevant. In real complex conditions it usually makes sense to focus on an even smaller set of possibilities, prepare answers u' = (u1, u2, ..., up) for such selected situations, and set the remaining coefficients of that sequence to zero.

Then the sum of products siu = si1u1 + si2u2 + ... + sipup becomes another example, now of the perception information of the game. As the game becomes more interesting, or harder and more vital, the coefficients sij and uj become variables. Moreover, the pairs of corresponding products and the number of summands also change, because according to chaos theory there is the possibility of a sudden strong influence of a previously negligible coefficient.

Defiance III

Question: Why do you consider the play an expression of vitality?

Defiance III

Answer: Those parts of a "game" that make it a true game defy the principle of least action, by which we have determined every known trajectory of theoretical physics so far. This is also a basis for distinguishing inanimate physical matter from a living being. My information theory covers the domains of both physics and excess freedom, and what unites them are games, considered here an expression of vitality.

The emission of information is non-spontaneous according to the principle of minimalism and forced due to the law of conservation. In addition, it is unstable, so its appearance is accompanied by changes. Nevertheless, information is the fabric of all space, time, and matter, with uncertainty in its essence, so its physical emissions are always tied to actions (changes of energy over time, i.e., of momentum along a path). Hence the strain of uncertainty and the resistance of the force of probability, which makes more likely outcomes happen more often, so that laziness is in the nature of things, vitality an "unnatural" phenomenon, and the achievements of higher game levels (Reciprocity) more laborious.

That is why uncertainty is the basis of the game: only with its surpluses can we deviate from its minimum (Packages), leave the field of dead physical substance, and step onto the ground of "surplus options" (information), where someone or something can truly defeat an opponent. That is also why I don't consider "games" without a trace of uncertainty to be real games, because there is no real victory without excess uncertainty.

Physical nature itself cannot lie and cannot play. Because of the first, it cannot know itself (Not really), since, as with the method of contradiction (using a false assumption to arrive at a true position), there is no genuine mathematics without false statements as a tool; and because of the second, dead things cannot conquer, but only endure by changing spontaneously. Competing with the vital, still matter is the losing side. In general, it is harder to be the more vital side of a competition, but that side wins not only more often but also more skillfully.

Unstable

Question: Explain what you mean by "information is unstable"?

Unstable

Answer: The question comes in the context of games, so the answer follows suit, unlike some previous ones (Disorder). First of all, it is known that repeated "news" is no longer news, and since information is the fabric of the cosmos, this kind of impermanence is its consequence. Hence all the beginnings of similar answers, including this one.

The interesting becomes boring in principle (minimalism), in the same way that the brain prefers the familiar. Striving for less information, less action, or more likely outcomes, we prefer to follow the beaten track. We desire order, and we like to find rules, everything from better social laws to natural laws, more or less unaware that this has the same starting point as the need for association (surrendering personal freedoms for collective security or efficiency), or, on the other hand, that each new piece of news "disappears" and "appears" again, similar to or different from the previous one, the more strongly it is exchanged.

[Image: Unstable b]

Material fatigue in technology is a slower form of news fading, due to oscillation between close communication states. These cycles exist almost everywhere, from the vibration of molecules to sound. The chirping of a certain species of bird lasts 8 seconds at rest, and our hearts can beat once per second. Like the paths of deterministic chaos (see image), these repetitions constantly, at least slightly, change.

Alternations of dying and birth maintain a biological species, which also gives the cycles themselves the meaning of states (formally, states and processes are vectors), so the duration of a species has its limits. The failure of some on the market is often followed by the economic success of others; moreover, it leads to the well-being and longevity of society, so that society (as a kind of state) cycles through its ups and downs. The downfall of some states or empires is good for the prosperity of later ones, because historical flows are also cycles of constant yet slightly different phenomena.

All this is in accordance with the principle of uniqueness, without which the theory would not hold. Riesz's lemma of algebra establishes that every linear functional on a finite-dimensional vector space is defined by a unique vector, so in the interpretation:

(vector) → (state),   (functional) → (perception information),

we find the uniqueness of the participants in their perceptions. Hence the necessity of differences, from the internal ones of, say, societies throughout history, to the differences borne by a photon traveling through space.

All this points to the problem of using the sum of products (Grizzly):

siu = si1u1 + si2u2 + ... + sipup

of the strategy and the game participants. The coefficients sik change during the game itself, sometimes too quickly for all of them to be covered by the same formula. For example, as with new models of technical devices, such as TV sets or cars, the appeal declines even before they "conquer the world"; the manufacturer barely holds their value for four years, until they become obsolete and a new model is released. A similar problem arises with these long sums, starting from the very evaluation of information, which easily fades before its final use.

Divergence

Question: Can you be more specific with "Difficulty of sum of products"?

Divergence

Answer: I avoid burdening these answers with mathematics; more often than not I stick to interpretations and only occasionally to complete proofs. This is an opportunity for one such proof, the harder Example 3 from section 40, Continuous Perception (Informatic Theory I, page 106).

Probability distribution density:

\[ \varphi(x) = \begin{cases} \frac{1}{x\ln^2x}, & x \ge e \\ 0, & x \lt e, \end{cases} \]

is well defined, because φ(x) ≥ 0 for all -∞ < x < ∞, and:

\[ \int_{-\infty}^{+\infty} \varphi(x)\ dx = \int_{-\infty}^e \varphi(x)\ dx + \int_e^{+\infty} \varphi(x)\ dx = \] \[ = 0 + \int_e^{+\infty}\frac{dx}{x\ln^2x} = \int_e^{+\infty} \frac{d\ln x}{\ln^2x} = -\left.\frac{1}{\ln x}\right|_e^{+\infty} = 1. \]

This integral converges, but the Shannon information of this density diverges:

\[ S(\varphi) = -\int_{-\infty}^{+\infty} \varphi(x) \ln \varphi(x)\ dx = -\int_e^{+\infty} \frac{1}{x\ln^2x}\ln\frac{1}{x\ln^2x}\ dx = \] \[ = \int_e^{+\infty} \frac{\ln(x\ln^2x)}{x\ln^2x}\ dx = \int_e^{+\infty} \frac{\ln x + 2\ln(\ln x)}{x\ln^2x}\ dx = \] \[ = \int_e^{+\infty} \frac{dx}{x\ln x} + 2\int_e^{+\infty} \frac{\ln(\ln x)}{x\ln^2x}\ dx \ge \] \[ \ge \int_e^{+\infty} \frac{dx}{x\ln x} = \left. \ln(\ln x) \right|_e^{+\infty} = +\infty. \]

A normalized probability density has infinite Shannon information!
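The divergence can also be seen numerically. Substituting t = ln x turns the entropy integral into the integral of (t + 2 ln t)/t² over [1, ln X]; a sketch (the function name and step count are my own choices) showing the partial integrals growing without bound:

```python
# Numerical illustration that S(phi) diverges. With t = ln x the entropy
# integral becomes the integral of (t + 2*ln t)/t**2 over [1, T], T = ln X,
# which grows roughly like ln T as T -> infinity.
import math

def partial_entropy(T, steps=100_000):
    """Trapezoid rule for the entropy integral on [1, T] in the variable t."""
    f = lambda t: (t + 2 * math.log(t)) / t ** 2
    h = (T - 1) / steps
    s = (f(1) + f(T)) / 2
    for k in range(1, steps):
        s += f(1 + k * h)
    return s * h

for T in (10, 100, 1000, 10000):
    print(T, partial_entropy(T))  # the partial integrals keep growing
```

Each tenfold increase of T adds roughly the same amount, the signature of logarithmic, hence unbounded, growth.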

Shannon's (1948) information is the mean of Hartley's (1928) informations, which are logarithms of the numbers of (equally likely) outcomes. Let me recall what a mean value is: for, say, three lengths a = 3, b = 4, and c = 5, each taken equally often, it amounts to (the arithmetic mean):

\[ A = \frac13a + \frac13b + \frac13c = \frac13(a + b + c) = \frac{12}{3} = 4, \]

but if we take the second twice as often as the first and the third three times as often as the first, then the average length we take is:

\[ B = \frac16a + \frac26b + \frac36c = \frac16(3 + 2\cdot 4 + 3\cdot 5) = \frac{26}{6} \approx 4.33. \]

It is logical that the expected average length is now higher (B ≈ 4.33 > A = 4), because we took the longer lengths more often. The same logic applies when we take more than three lengths, say a1, a2, ..., an, according to some probability distribution, positive numbers with p1 + p2 + ... + pn = 1. Then:

\[ \mu = p_1a_1 + p_2a_2 + ... + p_na_n. \]

In Shannon's case, these "lengths" are Hartley's informations ak = -log pk, so everything should be in order, but it is not. It is as if mathematics itself is warning us that there is something we do not understand well.
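The arithmetic of these means, and Shannon's entropy as the weighted mean of Hartley's informations −log pₖ, can be sketched as follows (the three-outcome distribution at the end is an illustrative example):

```python
import math

def weighted_mean(values, weights):
    # mean of values taken with the given relative frequencies
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

a, b, c = 3, 4, 5
A = weighted_mean([a, b, c], [1, 1, 1])   # arithmetic mean: 4.0
B = weighted_mean([a, b, c], [1, 2, 3])   # 26/6 ≈ 4.33
print(A, B)

# Shannon's entropy: the weighted mean of Hartley's informations -log2(p)
p = [0.5, 0.25, 0.25]
shannon = weighted_mean([-math.log(q, 2) for q in p], p)
print(shannon)  # 1.5 bits
```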

I will give another example of a mean-value problem, discovered two centuries before Shannon's information by Bernoulli (1738), a Swiss mathematician who lived and worked in Russia; hence the name of the St. Petersburg paradox, which troubles mathematicians to this day. He imagined the following casino game and tried to calculate the entry fee from the average value of the payouts to the player, so that the casino would not be at a loss.

A coin is tossed repeatedly, and with each "head" the payout is doubled, until the first "tail" lands. Let's say the initial payout is $2 and the coin is fair. The probability of a "head" on the first toss is 1/2, of "heads" twice in a row 1/2², ..., and of k = 1, 2, 3, ... consecutive "heads" in general pk = 1/2^k. The mean payout is:

\[ \mu = \frac12\cdot 2 + \frac{1}{2^2}\cdot 2^2 + ... + \frac{1}{2^k}\cdot 2^k + ... = 1 + 1 + ... + 1 + ... \to \infty. \]

This is not logical, because the chance that a fair coin comes up "heads" over and over again indefinitely is zero; yet the mathematical expectation (as we call these mean values) of this game says that an arbitrary player earns an infinite amount of money. We are still waiting for a satisfactory explanation of the paradox.
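A short sketch of the game: the truncated expectation gains exactly 1 per term, so it diverges, while any finite simulation gives a finite yet unstable empirical mean (the seed and sample size are arbitrary choices).

```python
import random

def petersburg_round(rng):
    # start at $2 and double the payout for each consecutive "head";
    # stop at the first "tail"
    payout = 2
    while rng.random() < 0.5:
        payout *= 2
    return payout

# each term (1/2^k) * 2^k of the expectation contributes exactly 1:
truncated_mean = sum((0.5 ** k) * (2 ** k) for k in range(1, 31))
print(truncated_mean)  # 30.0; it grows without bound as terms are added

rng = random.Random(42)
empirical = sum(petersburg_round(rng) for _ in range(100_000)) / 100_000
print(empirical)  # finite in any sample, but it keeps drifting upward
```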

Win-Win

Question: Do you know why the "win-win" strategy always loses?

Win-Win

Answer: Discussions of this issue arise from the results of simulations of some games and from the analysis of algorithms. The answers mostly revolved around "I have no idea," so I'll spare you that bit of theorizing.

A "win-win" situation would be about "good for me and good for you," like in trade when you give money for goods. It is therefore a compromise, trade, or often-desired political situation. Therefore, politics will "spoil the toy" whenever it interferes with even a masterful game, because that priority situation, as a "win-win" strategy, loses almost every game of different players. I've written about it several times (Win Lose), but we always seem to be left without a good theoretical explanation.

1. In contrast to a "state" as a snapshot of a single moment of a sequence of events, we also have a "state" of a process that formally resembles the sinusoid from the previous image on the right. Nature is all in waves, so successes and failures alternate, whether we like it or not. That's why the "lose-lose" strategy, with sacrifices to victory, will almost always win "win-win," where the successes themselves are followed. Waiting for luck to fall into our lap without doing anything or getting tired will mean missing out on opportunities and shying away from successful ones.

In this attempt to explain theoretically the failure of "win-win" strategies in computer simulations, we note that their players deny themselves variety, and thus the power of uncertainty. To win, they descend to a lower level in the number of options (information of perception) in communication, becoming closer to dead physical matter and the pure principle of least action. In this sense, the interpretation belongs among the traits of personality.

2. Associating only with the "good" (abstractly, as a class of the environment), not using the "bad" even against the "bad" that would encroach, means being a player who does not play, giving up the competition, and also becoming an ever smaller island of "good." I am paraphrasing the topological sketch of the game. The graph of the "good guys" would avoid shrinking only if the "bad guys" gave up their advance or someone else stopped them, and that is the weakness of the "win-win" strategy. This is another explanation of its poor success.

In this second attempt to understand the failure of "win-win" strategies, the world is a stage for the battle of "good and evil" in the broadest sense of the words, where whoever finds himself on the stage has no choice but to fight or disappear. Indeed, vitality is primarily a matter of the biological evolution of species, which survive only by adapting to a constantly changing environment. And the environment is often "evil" to you, whether you are some other type of bacillus, plant, or animal. Whoever would only wait to combine "good with good" does not last.

3. According to the first, diversity is an advantage of the player, that is, the system, so uniformity is a disadvantage. Whether we call this second structure "good" or "evil" is only a matter of convention in abstract considerations. We also find this in practice, say in more prosperous and competitive companies or countries that are more tolerant of new, strange, or, at first glance, repulsive phenomena. Mixed environments, as well as the mixing of genes in the extension of species, have their own competitive advantages, conscious or not of the reasons for their quality.

Although it derives from the first, this is already the third attempt to interpret the poor showing of "win-win" strategies. Monotony, repulsive and unnatural due to the striving for less information (Extremes), is an unstable state. It is not a position in which one settles in order to last; rather, it turns into something else, often unexpected and sometimes unpleasant. Hence a prevailing state of "evil," like a prevailing "good," viewed abstractly, spatially, i.e., graphically, is unsustainable.

4. By breeding evil, we get more evil, just as by destroying a construct, good (+) and evil (-) multiply into bad (-). That is how we remember their effects, as we saw (Summation). At the same time, responding to good with goodwill returns good to us (+ ⋅ + = +), and evil should be returned with evil (- ⋅ - = +); otherwise we lose. It is the logic of "information of perception," i.e., of the "sum of products," the scalar product of the vectors whose components are the powers of the two communicating sides. That product then grows with the vitality that presupposes a better game.

In that game, dead nature is the bottom of the bottom, while the yielding of the weaker before the stronger and the advance of the stronger against the weaker, with no defiance between strong and strong and no tolerance between weak and weak, are the characteristics of a "win-win" strategy. Nature suffers it and goes on, allowing the existence of predators, carnivores, or violence in general. As a minority (predators are fewer than their prey), hunters often have less weight, speed, or physical strength. A moose (800 kg) is the prey of a wolf (40 kg), a zebra (400 kg) the prey of a lioness (140 kg); prey more massive than its hunters loses for lack of ferocity, of vitality.

Monopoly

Question: How does this theory explain monopolies?

Monopoly

Answer: In microeconomics, a monopoly is a firm that faces no sustainable competition. It is the sole producer of its industry's product and has enormous market power, for example in setting prices well above the firm's marginal cost. You can see the details of that classic type of monopoly under the image link on the left; here we will go further, following the topic of the question.

1. It is no longer a secret that the important thesis of this theory is that nature spontaneously strives for less information and that it replaces equality with order and efficiency. This is how I explain the "breakdown" of a network of equal links into unequal nodes, where the exceptional ones have many links versus the many with few. Thus, the phenomenon of "Six degrees of separation" is created for greater efficiency of connection.
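This "breakdown" into unequal nodes can be sketched as a preferential-attachment process (an illustrative model, not the text's own formalism): each newcomer links to an existing node with probability proportional to that node's degree, and a few hubs emerge among many one-link nodes.

```python
import random
from collections import Counter

def preferential_attachment(n, rng):
    # each new node links to one existing node chosen with probability
    # proportional to its degree ("the rich get richer")
    endpoints = [0, 1]              # every link contributes both endpoints
    degree = Counter({0: 1, 1: 1})  # start from a single link 0-1
    for node in range(2, n):
        target = rng.choice(endpoints)
        endpoints += [node, target]
        degree[node] += 1
        degree[target] += 1
    return degree

deg = preferential_attachment(5000, random.Random(1))
print(max(deg.values()))                       # a few heavily linked hubs...
print(sum(1 for d in deg.values() if d == 1))  # ...versus many leaf nodes
```

The handful of high-degree hubs is what shortens paths between arbitrary nodes, in the spirit of "six degrees of separation."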

In this way, the free capital market, with the idea of equality in terms of the circulation of money, goods, and services, grows into a place of the few who have many of these capital links versus the many who have relatively few of them. That "strange" phenomenon of the evolution of equal rights into unequal estates then grows into the subordination of voters (Win-Win players, good guys) to politicians (manipulators), and these to owners (bad guys).

The Deep State has developed the important slogan "lie locally and work globally," with which it and its politicians successfully rule their people. This is an explanation of current politics in an unusual way, but I reconcile it with the categorization of personality Traits, both of which are information about perception. That is the background of that "incomprehensible" talk about understandable things.

2. However, the same forces that do not allow equality will also prevent ultimate globalism (monopoly rulers). If only one node had all the links and all the others only one each, that would be the peak of efficiency in terms of the smallest number of steps between two arbitrary nodes, but it would also mean the equality of all the others, which is not allowed. The latter will prevail, and the development of network efficiency will stop somewhere amid the diversity of nodes. This place is like the "Nash equilibrium" of the classical economic theory of monopoly.

Attempts to build ever-greater globalism, like ever-greater monopolies of anything, will break like waves against rocks before succeeding, regardless of the funds invested or the efforts of globalists (monopolists), or the discovery and elimination of local culprits. Without understanding the mechanism of the theory we are working on now, the spontaneous flow of nature towards smaller information, all interpretations known today will remain "geese in the fog."

3. To avoid trivial examples, I will compare this to the genesis of the cosmos from the Big Bang, which is still evolving into the expected nothingness around us due to the accelerated receding of galaxies that should eventually all go beyond the limit of the visible universe. I consider the principle of saving information to be general enough to include cosmic dimensions, so we draw analogous conclusions from there. The universe is spontaneously expanding from the uniformly hot initial mushy mass and, in ways unknown to us, will avoid going into the dark uniform void around us.

An interesting example of this (hypo)thesis is the vacuum itself, which, as such, is not a sustainable phenomenon. Physics already works extensively with such a "void" from which various "particle waves" pop out in random ways but whose energy and duration products remain below the quantum of action, or below Heisenberg's uncertainty. Such a vacuum will gain weight (of information) while the information of the surrounding substance is diluted, but it certainly gains, based on this same theory, because physical matter becomes space.

Based on this theory of information and its (hypo)thesis about the slowing down of our time by the duration of the cosmos, because the processes go towards more probable, less informative, i.e., a smaller number of events, unexpected (according to other theories of physics) conclusions about the vacuum can be drawn. What is of too little duration today, in latent particles that "pop" out of the vacuum, may be real enough tomorrow.

Amalgamation

Question: Is the deep state using the method of "lying locally and working globally" to trojan its politicians an incurable disease of democracy?

Amalgamation

Answer: This is not the only weakness of democracy (Monopoly, 1). More emphasized (Democracy) is its naive aspiration towards the equality of capital flows, which, by analyzing the graphs, shows the inevitability of a decreasing number of relatively richer ones compared to the many.

The first weakness, the one from the question I was asked, is rather a part of game theory as a branch of the concept of "Information of Perception," which I am trying to clarify with these contributions. By the nature of his job, addressing the "good guys" as the largest population and the main source of his voters, a politician is forced to be a "manipulator," to defend falsehoods and distort reality. This alone makes him a poor performer against the "evil" ones, those extremely rare wizards in the tournaments of power. Politicians accept their humility before the power of play of those above them (league I) as their contribution to that team, together with the gift of lying in governing those below (league III), and everyone is happy. It is a "win-win" situation, they believe and hope.

The meaning of the state and its democracy is thus alienated, but who cares, until such symbioses break down? Nature does not like equality, nor such globalism in which everyone is below the few, so it will cost the builders more and more, with ever easier resistance from, say, nationalists, separatists, and particular communities in general. In the competition against nature, the creators of globalism run into various of her "tricks." Some of them are poorly known to many, such as Simpson's paradox, and some may not even be known today.

Simpson's paradox is a phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined. For example, say two politicians, call them I and II, had the following vote percentages in pre-election polls in two districts, A and B:

A B
I 30% 35%
II 60% 26%

Adding the percentages (A + B), the first politician has "only" 65, while the second has 86, and the first appears worse than the second. However, the first one had more support in those polls! Here's how.

For the first, in district A the poll had 30 "for" out of 100 respondents (30/100 = 0.30), and in district B it had 70 "for" out of 200 respondents (70/200 = 0.35). In total, that is 100 "yes" out of 300 surveyed (100/300 ≈ 0.333). For the second, in district A the poll had 60 "yes" out of 100 respondents (60/100 = 0.60), and in district B it had 140 "yes" out of 550 respondents (140/550 ≈ 0.255). The second thus has a total of 200 supporters out of 650 surveyed (200/650 ≈ 0.308), a lower overall percentage of support than the first politician.
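The counts above can be verified with exact fractions:

```python
from fractions import Fraction

# poll counts (supporters, respondents) per district, as in the text
first  = {"A": (30, 100), "B": (70, 200)}
second = {"A": (60, 100), "B": (140, 550)}

def combined(polls):
    # merge the districts: total supporters over total respondents
    yes = sum(y for y, _ in polls.values())
    total = sum(t for _, t in polls.values())
    return Fraction(yes, total)

print(combined(first))   # 1/3  ≈ 0.333
print(combined(second))  # 4/13 ≈ 0.308, fewer overall despite 60% in A
```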

Sports fans sometimes notice these paradoxical situations by following the statistics of players or teams through games, but if you ask them why, they will "explain" it mostly to computer and operator errors. This is often true, so these kinds of theories go unnoticed. That's why I refrained from listing them here (so as not to burden the interpretation of information and perception), despite one important point: the damage that thus occurs to democracies.

We can understand this "amalgamation paradox" by representing the number of survey responses and questions as vectors, let's say in the following, similar example:

A B A + B
I 10/40 → 25% 90/300 → 30% 100/340 → 29.4%
II 52/200 → 26% 64/200 → 32% 116/400 → 29.0%

We see that the sums of the partial percentages in the districts (A + B) for the first and second politicians are 55% < 58%, while the total percentages of votes are 29.4% > 29.0%. We define the vectors of the first as a1 = (40, 10) and b1 = (300, 90), and of the second as a2 = (200, 52) and b2 = (200, 64). The quotients 10/40 = 0.25 and 90/300 = 0.30 of the first, and 52/200 = 0.26 and 64/200 = 0.32 of the second, represent the slopes of the vectors towards the abscissa; they are the tangents of the angles the vectors form with the horizontal axis. When we add these vectors:

a1 + b1 = (340, 100)   →   100/340 ≈ 0.294   (29.4%)

a2 + b2 = (400, 116)   →   116/400 = 0.290   (29.0%)

we find that the slope of the second sum vector is lower, although both of the second politician's vectors are steeper than the corresponding vectors of the first. More often, steeper vectors add up to a steeper sum, so such results seem paradoxical to us.

If we take the abscissa as the time axis and the ordinate as distance, then the slope of a mean-velocity vector is the quotient of the distance traveled over the time taken. Velocities do add up, but not in this way when they are first averaged over road segments. This brings us back to Simpson's paradox of statistics and the idea of how to deceive a democracy. Gathering support is always partial, from the base upward, by grouping results. This process of delegation can be abused in the manner of the "amalgamation paradox" to gain where there is nothing to gain.
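The vector picture can be sketched directly:

```python
# each poll as a vector (respondents, supporters); the slope is the rate
a1, b1 = (40, 10), (300, 90)    # first politician: 25% and 30%
a2, b2 = (200, 52), (200, 64)   # second politician: 26% and 32%

def slope(v):
    return v[1] / v[0]

def add(u, v):
    # vector addition = merging the polls
    return (u[0] + v[0], u[1] + v[1])

print(slope(add(a1, b1)))  # 100/340 ≈ 0.294
print(slope(add(a2, b2)))  # 116/400 = 0.290, the flatter combined vector
```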

Analysts could do this tuning of poll and election results at the behest of the "evil" and perhaps with some insight from the "manipulators." The absence of the "good guys," primarily due to their lack of understanding of the methodology, is a convenient advantage of the method from the fraudsters' point of view in a democracy. In this way, through "correct" elections, it is possible to get deputies who in the majority sincerely represent a minority of the voters who participated, without additional bribery or blackmail of the elected representatives.

In the end, I admit, all this about democracies is a side story meant to "prank" you with a difficult topic served as an easy drink. One of the basic but difficult questions of the theory of the physics of information is how it is formally possible for less probable states (in principle unwanted) to become more probable by grouping them together. I hope that, with the above explanations, you recognize that mechanism in the "amalgamation paradox."

Merging

Question: Can you now clarify the mentioned calculations "from less to more probable states" by merging?

Merging

Answer: OK. In the picture on the left are the hyperbolas of the equations:

\[ y_1 = \frac{30}{x} + \frac{70}{300 - x}, \] \[ y_2 = \frac{60}{x} + \frac{140}{650-x}, \]

which correspond to the data of the first table of the answer "Amalgamation" on Simpson's paradox, evaluated at x = 100. One obtains:

y1(100) < y2(100), 0.65 < 0.86 ⇒ 100/300 > 200/650, 0.333 > 0.308.

In this picture, we have abscissas x1 < x2 and ordinates y1 > y2 with again the same combined, merged result, 100/300 > 200/650. By choosing different abscissa values x1 and x2, we change the ordinate values, but the combined result is always the same, fixed by the parameters of the given hyperbolas. So we see how partial probabilities can be manipulated. In the previous answer, these are the 30% and 35%, that is, 60% and 26%, polls from districts A and B. The ratio of the merged probabilities, the coefficients 0.333 > 0.308, remains the same.
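A small sketch of these hyperbolas: the partial sums y1(x), y2(x) depend on how the respondents are split between the districts (the abscissa x), while the merged rates are fixed by the hyperbolas' parameters alone.

```python
def y1(x):  # first politician: 30 and 70 "yes" out of x and 300 - x asked
    return 30 / x + 70 / (300 - x)

def y2(x):  # second politician: 60 and 140 "yes" out of x and 650 - x asked
    return 60 / x + 140 / (650 - x)

for x in (50, 100, 150):
    print(x, y1(x), y2(x))        # the partial sums vary with x...

print((30 + 70) / 300)   # ...but the merged rates stay fixed: ≈ 0.333
print((60 + 140) / 650)  # ≈ 0.308
```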

We arrive at the required explanation by looking at y1(x1) and y2(x2) as the lower probabilities of some smaller occurrences, subsets merged, by the principle of saving information (Minimalism), into larger occurrences of higher probabilities. Less likely states are more uncertain and more informative, and they spontaneously transition into more likely, and therefore less informative, states.

We have seen (Range) that deductive theories characterizing a single structure are not possible, which means that transitions "from less to more probable states" according to this merging model have various uses. It is the quality of a good theory that it is "everyone's," as opposed to practice, which is always "someone's" (Abstractions), as I paraphrased (Löwenheim–Skolem theorem) by underlining the unrepeatability of "information of perception," or the uniqueness (personality) of each subject of the cosmos in relation to its environment. Therefore, we are still at the beginning of a new theory of information.

Empire

Question: How should I define "game" in a simple and short conversation?

Empire

Answer: Put competition and options forward as the important ingredients of any game whatsoever. Domination is the goal of games, and its steps follow the path of freedom.

In the simpler cases, like the game "tic-tac-toe," the goal is to fill one line of the board with your signs (x or o) while making moves that give more options to you, or fewer options to the other; through chess games (Siege, Courtyard, etc.) the same principles appear (to master "something" through an excess of "options"); and in general, the aim is to increase the "Information of Perception."

The basis of such game rules is action-reaction (Reciprocity), whenever possible with creative (unpredictable to the opponent) moves, with good timing and measured strength (actually an "amount of uncertainty," or a "force of probability"). This is, admittedly, part of my "information theory," a novel game theory, a so-far unexplored story that we only glimpse in the following examples, but its basic idea has proven successful in simulations.

1. For example, the French philosopher Jean-Jacques Rousseau described a situation of trade, security, and trust for achieving greater profit using the example of two hunters. They hunt deer (A), bigger and tastier on the menu, and rabbit (B), also tasty but less filling. Catching A is more demanding and requires cooperation; a lone hunter has little chance of catching A. On the contrary, catching B is more of a solitary job. The first catch is the greater benefit for society, and it requires "trust" among the hunters, each believing that the others will join forces with him; in the "deer-rabbit" game, the first field of the table belongs to that catch:

I \ II A B
A 5; 5 0; 3
B 3; 0 3; 3

This game has two pure-strategy equilibria, the positions (A; A) and (B; B). A position Pareto dominates another if at least one party (game participant) would be strictly better off in the first position than in the second, and no one would be worse off. Position (A; A) Pareto dominates (B; B), and both are "socially" better than the other two fields. Both are Nash equilibria; (B; B) is special because it reflects the assumption that the others will not join forces, which leads to a worse catch.
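The equilibria can be checked by brute force. A minimal sketch (the encoding A = 0, B = 1 is an illustrative choice): a cell is a pure-strategy Nash equilibrium when neither player can gain by deviating unilaterally.

```python
# payoffs (row player, column player) for the deer-rabbit game above
A, B = 0, 1
payoff = {
    (A, A): (5, 5), (A, B): (0, 3),
    (B, A): (3, 0), (B, B): (3, 3),
}

def pure_nash(payoff, moves=(A, B)):
    # a cell (i, j) is an equilibrium if neither side profits by deviating
    eq = []
    for i in moves:
        for j in moves:
            u, v = payoff[(i, j)]
            row_ok = all(payoff[(k, j)][0] <= u for k in moves)
            col_ok = all(payoff[(i, k)][1] <= v for k in moves)
            if row_ok and col_ok:
                eq.append((i, j))
    return eq

print(pure_nash(payoff))  # [(0, 0), (1, 1)]: both (A; A) and (B; B)
```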

Now let's add to that classic description the value of "information of perception." It is a scalar product of the corresponding components (vectors, sequences) of two states that can (but need not) be perceived or communicated. The intensities of the components are the exchanged information; a higher value means a greater intensity of coupling, a higher level of play, i.e., a bigger catch, a bigger payoff. It is clear that, explained this way, this value as the aggregate payoff of the players becomes maximal in the field (A; A) of the above table.

2. Let us now imagine two hunters who cannot stand each other, in a situation similar to the previous one, when they would rather be a danger to each other than catch something hunting together. The game table is:

I \ II A B
A -5; -5 0; 3
B 3; 0 -3; -3

A similar table would hold for two hostile tribes deciding about prosperous projects A and B if they maliciously sought to damage the other side. Then they would choose "evil" over the common "good." However, when the fields of the table contain the intensities of the game, those minuses become pluses, and at the same time the goal and meaning of the game change; again, dominance through options.

The more vital game is the one with the greater confrontation of opponents, which means a greater departure from physics' principle of least action. Then "good" is answered with "good," and "evil" is returned with "evil," better still in proportion to one's powers, both timely and unpredictably. Every such distancing is measured, as vitality, by means of the "information of perception."

3. The game can also be understood as a competition with oneself. Scenes of animal play are widely known, whether it's puppies fighting, kittens attacking, or dolphins frolicking in the waves. Play is not limited to young animals; adults of many species also practice it (Play Behavior). Play is also an expression of the excess freedom of living beings in relation to the inanimate physical substance of which they are composed.

The excess of uncertainty is analogous to the potential energy we store by lifting a pickaxe in order to drive it deeper into the ground once all of it has been converted into kinetic energy, for purpose and efficiency. Thus, the analysis of writers' texts will show more sense or style when the information of the occurrence of letters or words is smaller (Letter Frequency). Similarly, the game moves towards dominance by way of freedom.

Criterion

Question: Do you have any ultimate criteria for "game"?

Criterion

Answer: If I understood your question, given the previous answer, it can be reduced to the distinction between living and non-living entities. To be able to "play," we need vitality, the excess of information (amount of options) that physical matter does not have.

In other words, if you are a completely physically real system (being), then you consistently follow the principle of least action, and the law of conservation applies to you (Conservation II). The feature of this theory of information is that it considers "reality" as the domain of information for which the law of conservation applies, while under "unreal" it places many other phenomena (fictions, ideas, laws) that could only be "tools" of physics. That excess of freedom (vitality) associated with physical substance has the power to lie, to deviate from the principle of least action, and to play. It's my personal thing, a semi-anonymous story that has its own history.

At first, when I was still looking at information like Shannon's (1948), which is the mean of Hartley's (1928), I found it interesting to discover that it behaves like a constant under the derivative (Conservation law, 2014) if we treat probability in an observable way. Among other things, sometime around that time, I entertained some mathematical forms of information similar to Shannon's and, a few years later, put some of it into the script "Physical Information" (2019). There, it is aimed at ensuring that the sum of the parts of the information is always equal to its whole.

Not long after that, I started writing this blog (Conservation) for a close circle of friends. Some of them were reviewers of my books or helped me in various ways in organizing this data. I answered questions like, "How do we know something is real?" (Conservation II), of course, reducing the answer to the law of conservation of information. Back then, I was already testing the (hypo)thesis that information is the basis of space, time, and matter and that uncertainty is its essence. Actually, I no longer had confidence in the concept of determinism (until then the "holy grail" of physics), nor in matter as the most important basis of the universe.

Among the interesting questions on that topic was the existence of mappings that can preserve a scalar product (Conservation III); the information of perception is a type of scalar product. A question like "What makes it possible to memorize information?" (Conservation IV) I also reduced to the law of conservation. A consistent continuation is to observe the game as such: whatever is subject to the conservation law does not play. That is, whatever can play must be able to lie and be inconsistent.

Hyperbole

Question: How does the information perception change during the game?

Hyperbole

Answer: The vectors \(\overrightarrow{OA} \) and \( \overrightarrow{OB} \) to the hyperbola x² - y² = 1, in the image on the right, are the states of the players. With the angle ∠(AOB) = φ between them and intensities a and b, respectively, they define the sum of products Q = a1b1 + ... + anbn, the scalar product of the two vectors:

a = (a1, ..., an),   b = (b1, ..., bn)

in some n-dim coordinate system. You will find a little more detail about hyperbolas as well as about curves of the second order (conic sections) in my contribution, "Analytical Geometry II".

However, even for a large number of dimensions n ∈ ℕ, two vectors with a common origin (O) lie in the same plane, as in that picture at the top right. The components, the factors ak and bk with index k = 1, ..., n, represent the intensities of the corresponding actions and reactions of the opponents. From vector algebra we know that this product is also Q = ab cos φ, so the information of perception Q of their communication increases with the lengths a and b and decreases as the angle φ grows.

1. That is a brief overview of the known, established things on "slippery" ground yet to be described. The picture shows vectors of equal length, as is usual in competitions between participants of the same category. Due to the symmetries of the hyperbola, the point A(x, -y) maps centrally symmetrically to the point B(x, y) around the center C(x, 0) (which is also a 180° rotation), which means that AB is perpendicular to OC and that AC = CB. Also, the scalar product is a⋅b = x² - y² = 1, because these are points of the hyperbola. This means that the scalar product of such symmetric pairs on the given hyperbola is constant!
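This constancy is easy to verify numerically. A small sketch, parametrizing the unit hyperbola by x = cosh t, y = sinh t (the parametrization is my illustrative choice):

```python
import math

# symmetric points A(x, -y) and B(x, y) on the hyperbola x^2 - y^2 = 1
def symmetric_pair(t):
    x, y = math.cosh(t), math.sinh(t)
    return (x, -y), (x, y)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

for t in (0.5, 1.0, 2.0):
    a, b = symmetric_pair(t)
    print(t, dot(a, b))  # always 1: the scalar product is constant
```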

On the other hand, in a fair competition (with competitors in the same category), the game can always be presented with the above sketch. The increase in the intensity of the two vectors corresponds to greater efforts by the participants, the participation of more options, or stronger actions, which speaks of the undecided position that I also explained, differently illuminated (Standoff). The intensities of the players a = b change with the angle φ, but not the strength of the game.

2. These formulas also tell us about the repulsive force of uncertainty, the equivalent of the force of probability. Left to that force, the states of the opponents would move along a hyperbola, which, along with the parabola and the ellipse, is a type of conic section. We know from earlier calculations that such trajectories of charges are characteristic of central forces (like the electric or gravitational), and also that Kepler's second law applies to them: the line from the Sun to a planet sweeps out equal areas in equal times.

Celestial bodies move on conic sections by inertia, closely adhering to the principle of least action, like opponents in a clinch who do not tire themselves with excessive communication (change of interactions). Only a change in the vitality of the game initiates a change in its flow, just as an additional action changes the established motion of a body.

3. This indicates the effort of defying the game, of opposing the principle of least action of physics. For that, the player's vitality is necessary but not sufficient. Remaining with symmetric efforts, the competitors stay on the same hyperbola, with vectors of unchanged scalar product and constant information of perception. To change the information of perception, Q = a b cos φ must change. That can happen on a different hyperbola, for example with players from a different league or category.

4. The information of perception will also be changed by a unilateral change in the parameters of one of the players; say it is B, if he were to play as if in a weaker league and start losing, so that only b⋅cos φ changes. Note that this can also happen by strengthening b while straining in vain, moving away from the game and increasing the angle φ with a significant decrease of the cosine: as the angle increases, the cosine decreases.

To make it difficult for the "loser" to return, and to spare ourselves more unpleasant personal efforts, it helps to intimidate the opponent, discourage him, exaggerate the value of tolerance, or slowly "lean on" him by advising compromises and, creeping bit by bit, conquering the trump cards of the other side (Sneaking). We know this last method as the political strategy of the West today.

5. It is possible for the weaker side to lose while the information of perception of the game (for a while) remains the same. In general, a game in which the information of perception is constant is not really a game, in the sense of the previous criterion. Its participants are like inanimate matter, inclined "not to make waves": the stronger so that the weaker does not come to life, and the weaker so as not to anger the stronger. A tension that remains unchanged simulates processes without uncertainty, lifeless and inert, like a dead body. Such is its form.

Penalties

Question: Do you agree that "severity of punishment has little or no effect" on, say, the illegal drug trade?

Penalties

Answer: We are talking about my text "1.10 Crime and Punishment" (p. 23), extracted from the script "Information Stories" (2021) previously attached to the newspaper. The quoted part of the question is recognizable as a quotation from the authors of drug-trafficking research that preceded that story (MacCoun and Reuter, 1998: 213).

Most of these deterrence analyses assume a decision-theoretic context: criminals weigh the costs and benefits of a given criminal behavior against a given level of punishment and probability of detection. However, this type of analysis ignores the rationality of the control agents. Punishment is imposed only if the criminals are actually detected. At the same time, detection requires effort, and rational control agents adjust their level of control to the level of crime, and vice versa. Game theory provides predictions about the interaction between crime rates and control. The statement is part of the introduction to an experimental (statistical) study of the so-called "inspection game" (Heiko Rauhut: Higher Punishment, Less Control? 2009).

Players want to avoid predictability in such a situation; in game-theoretic terms, there are no pure-strategy solutions. Making preliminary predictions, we are aware that criminals and control agents may constantly change their behavior in order to be unpredictable. These are the recommendations of classical game theory, and we now have a new conception of it in the theory of information, primarily because of the importance of "unpredictability."
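The absence of pure-strategy solutions can be illustrated numerically on a minimal 2×2 inspection game. The payoff numbers and the function below are hypothetical, chosen only for illustration: the criminal chooses to offend or comply, the inspector to inspect or not, and the mixed-strategy equilibrium makes each side indifferent between its own moves.

```python
# Minimal sketch of a 2x2 "inspection game" mixed-strategy equilibrium.
# All payoff parameters are hypothetical, for illustration only.

def inspection_equilibrium(gain, fine, cost, reward, loss):
    """Return (p_offend, q_inspect) that make both players indifferent.

    Criminal: offending yields `gain`, minus `fine` if inspected; complying yields 0.
    Inspector: inspecting costs `cost`; catching an offense yields `reward`;
    an uninspected offense costs the inspector `loss`.
    """
    q_inspect = gain / fine            # criminal indifferent: gain - q*fine = 0
    p_offend = cost / (reward + loss)  # inspector indifferent: p*reward - cost = -p*loss
    return p_offend, q_inspect

p, q = inspection_equilibrium(gain=1.0, fine=4.0, cost=1.0, reward=3.0, loss=2.0)
print(f"P(offend) = {p:.2f}, P(inspect) = {q:.2f}")  # → P(offend) = 0.20, P(inspect) = 0.25
```

Note a well-known property visible in this sketch: the equilibrium offense rate p does not depend on the fine at all, while a larger fine lowers the equilibrium inspection rate q, in line with the cited claim that severity of punishment has little deterrent effect.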

Setting aside practice, the injury of the victim and the compassion of the masses, the potential inner anger that lurks to explode into an excessive thirst to punish anyone, revenge for the injustices ever done to us, the death drive, and the like, let us see what today's theory says, in the words of professor emeritus David Brown: severity of punishment, in the sense of the phenomenon of "marginal deterrence," has no real deterrent effect, nor does it reduce recidivism.

But, alas, we know that this kind of theorizing goes unrecognized in the authors' countries of residence. On the contrary, it is precisely such countries that prescribe stricter punishments more often than the rest of the world and, at the same time, have more numerous, more expensive, or more terrible crimes.

Therefore, I agree that in punishing violations of the rules of social behavior, strictness toward the violator is not a measure of success. Cool-headed in rational discussion, many of us reach similar conclusions, so the more interesting question is why legislative practice does not follow the supposed real needs of society. This is a special topic of my theory of information, still unexplored, which I may eventually announce.

Falsehood is not consistent (Criterion), and, therefore, legislation aimed away from the real needs of society is beneficial for "manipulators" (Traits), who mediate the rule of the "evil" over the "good." In the more massive lower league, in addition to what has been said, many inhabitants fare well under bad laws, so they do not want to exchange them for "better" ones, or they do not understand them. They are strongly drawn to a distorted image of their "needs" and defend it, for example, as slaves once defended their master.

Hyperbole II

Question: Can you explain "penalties" using "hyperbole"?

Hyperbole II

Answer: Yes. Ideally, the "behavior" bad for the nation is A(x, -y) and the intended "legal" measure is B(x, y), as on the hyperbola on the right. The scalar product of those two states, thus interpreted, is:

\[ \overrightarrow{OA}\cdot\overrightarrow{OB} = x^2 - y^2 = 1. \]

It is the length OT from the origin to the vertex of this hyperbola.

The length OT = 1 will not change as the point A slides to A' along the hyperbola, so the scalar product will not change if the point B symmetrically follows to B'. These two conditions combine into the "crime-punishment" perception information of an optimal and ideal, perhaps unattainable "state of law."
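This invariance under sliding along the hyperbola can be checked numerically. A minimal sketch: parametrize the unit hyperbola by the hyperbolic functions, take A = (x, -y) and B = (x, y), and the scalar product stays at 1 for every parameter t.

```python
import math

# Points on the unit hyperbola x^2 - y^2 = 1, parametrized by t:
# A = (cosh t, -sinh t) and B = (cosh t, sinh t).
# Their scalar product OA.OB = x^2 - y^2 should equal 1 for every t.
for t in [0.0, 0.5, 1.0, 2.0]:
    x, y = math.cosh(t), math.sinh(t)
    dot = x * x + (-y) * y   # OA.OB = x^2 - y^2
    print(f"t = {t}: OA.OB = {dot:.6f}")  # stays at 1, up to rounding
```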

Let us recall that in another coordinate system these vectors can each have n ∈ ℕ corresponding coordinates. Otherwise, we write them in one of the following ways:

\[ \begin{cases} \overrightarrow{OA} = \vec{a} = \textbf{a} = (a_1, a_2, ..., a_n), \\ \overrightarrow{OB} = \vec{b} = \textbf{b} = (b_1, b_2, ..., b_n). \end{cases} \]

One such scalar product notation is:

\[ \textbf{a}\cdot\textbf{b} = a_1 b_1 + a_2 b_2 + ... + a_n b_n = ab\cos\varphi \]

where the vector intensities are a = ∥a∥ and b = ∥b∥, and φ = ∠(AOB) is the angle between them. Combined with the previous, this tells us that a growing number of components of the constantly ideal rule of law, n → ∞, will not change its perception information. It has the constant value OT = 1, which means an optimal strategy (Reciprocity), unbeatable in competition but static in the clinch, and which, as such, has passed into the cooperation of the opponents.
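The two notations of the scalar product agree for any number of components n. A quick numeric check, with arbitrarily chosen example vectors:

```python
import math

def dot(a, b):
    """Component form: a1*b1 + a2*b2 + ... + an*bn."""
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    """Vector intensity ||a||."""
    return math.sqrt(dot(a, a))

a = [1.0, 2.0, 2.0]   # arbitrary example vectors, n = 3
b = [3.0, 0.0, 4.0]

lhs = dot(a, b)                      # component form of the scalar product
cos_phi = lhs / (norm(a) * norm(b))  # cosine of the angle between a and b
rhs = norm(a) * norm(b) * cos_phi    # the a*b*cos(phi) form, the same value
print(f"{lhs:.4f} {rhs:.4f}")  # → 11.0000 11.0000
```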

Like two boxers resting in a clinch, or states exhausted by a long war without a winner, this static coupling of information simulates dead matter, or the principle of least action in physics. The behavior is similar to that of electrons trapped by the attractive force of protons in the shells of atoms, which would need additional energy to get out, energy they do not have. Like a vortex. In terms of classical game theory, this would be a type of equilibrium from which each side can move only to a less favorable, undesirable state.

From our point of view today, an individual in a rule of law with such invincible power would be very unfree. A highly ordered state resembles an independent living being, an individual with a specialized, subservient, and limited reproductive life. By giving up private freedoms for the sake of the collective, for more security or efficiency, we tend toward such a community.

In the extreme general case of infinitely large states (a, b → ∞), better called timeless, or omnipresent individual information, such as theorems, the points of the same hyperbola are, for example, attachments of certain phenomena. Not everything communicates with everything, and, therefore, not every vector b will be able to couple in this way with a given vector a. However, those that do form such a couple will have strong bonds, as laws do.

Due to the infinity of components, around every given a there are countless partners like b. Because of them, we can say that a good theory is "everyone's" while practice is always "someone's" (Abstractions), this time without the Löwenheim–Skolem theorem.
