An application of computability theory to epistemology in Popperian spirit

This quasi-mathematical and philosophical article in computer science and epistemology (or theory of science) by Dan Polansky shows a certain application of computability theory to epistemological problems of getting to know the world, in a Popperian falsificationist spirit.

The starting question is this: could we use a formal mathematical theory to model two systems, one being the reality and the other being the getting-to-know system ("epistemic agent"), and learn something from it about Popperianism vs. inductivism? The answer seems to be yes, in favor of Popperianism.

The key tenet of Popperianism somewhat supported here is this: induction is wrong and does not matter all that much as a source of scientific hypotheses. Scientific hypotheses need to be invented using scientific imagination; scientific hypotheses are not verified; they are corroborated by sincere and strenuous attempts at refutation.

Game-theoretic model for infinite sequences

The model developed is this:

  • The epistemic process is a two-player game in the game-theoretical sense. It is a normal-form game, meaning there are no successive moves; rather, each player independently chooses a strategy, and a payoff function assigns the result or payoff to each combination of the two players' strategies. The payoff is either win or lose (1 or 0, if one needs a number).
  • There are two players, the reality and the epistemic agent.
  • The strategies that the players use are Turing machines or computationally equivalent systems. One can equally well think of a strategy as a program in Python or C++.
  • The reality produces an infinite sequence of ones and zeros.
  • The epistemic agent receives finite prefixes of this sequence and can, if he so chooses, use them to determine the next symbol.
  • The epistemic agent wins if he correctly guesses an infinite suffix; that is, if after some time point, the epistemic agent determines the next symbol right every time. (A small simulation sketch in Python follows this list.)
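
The following is a minimal Python sketch of this game, written under the simplifying assumption of a finite horizon (no finite simulation can verify the actual win condition, which concerns an infinite suffix). The particular reality (a seeded pseudo-random bit stream) and the particular agent (naive repeat-the-last-bit induction) are arbitrary placeholders, not part of the model.

```python
import random

def make_reality_bits(seed=0):
    """Placeholder reality: a deterministic pseudo-random bit stream."""
    rng = random.Random(seed)
    while True:
        yield rng.getrandbits(1)

def naive_agent(prefix):
    """Placeholder agent: naive induction, predict a repeat of the latest bit."""
    return prefix[-1] if prefix else 0

def last_error_index(reality_bits, agent, horizon=1000):
    """Index of the agent's last wrong prediction within the horizon, or -1 if
    there was none; a finite stand-in for the win condition, which asks for
    correctness on an entire infinite suffix."""
    prefix, last_error = [], -1
    for n, bit in zip(range(horizon), reality_bits):
        if agent(prefix) != bit:
            last_error = n
        prefix.append(bit)
    return last_error

# Against a pseudo-random reality, the naive agent typically errs until (almost) the very end:
print(last_error_index(make_reality_bits(), naive_agent))
```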

The model has the following properties:

  • Neither the reality nor the epistemic agent has a strategy that is guaranteed to win.
  • For each strategy of the reality, there is a winning strategy for the epistemic agent: one that just happens to guess the reality correctly.
  • If the epistemic agent is an inductivist, the reality is probably well advised to use some deterministic chaotic function, some kind of pseudo-random generator. No simple inductive process can work it out, unlike, e.g., the case where the reality outputs 10101010 ...
  • A decent strategy of the epistemic agent seems to be this:
    • Have a computably enumerable list of candidate realities, that is, total computable functions or processes.
    • Iterate over the list of candidates.
    • For each candidate, stick to it until it is refuted. Thus, the prefix can be completely ignored. The method is well described as trial and error; the only learning from reality is via the events of refutation and non-refutation, which lead to rejection or retention of the candidate. (See the sketch after this list.)
  • The above strategy may appear to be a winning one, but it is not. It would need a complete computably enumerable list of total computable functions, but the set of all total computable functions is not computably enumerable.[1]
  • Thus, the best the epistemic agent can do is to have as long a list of candidate realities as possible. But then, one can always add further realities as individual items, in addition to using schemas that capture infinite sets of candidate realities. Thus, for each epistemic agent of this type there is a dominating epistemic agent, in the sense that it performs equally well against the realities covered by the first agent but performs better against its own additional ones.
  • The epistemic agent can explore multiple infinite sets of candidate realities. This may seem unintuitive to someone not acquainted with computability theory but it is so.
    • An infinite set can be represented using finite computable means; for instance, one can consider the set of sequences consisting of alternating blocks of ones and zeros of a fixed block length, that is, 101010..., 110011001100..., 111000111000111000..., etc. There is no need to store this set explicitly as a coded list; one only needs to know the block length currently being explored.
    • A finite number of computably enumerable infinite sets can be drawn from iteratively in a straightforward manner: one draws an item from the first set, then draws the next item from the next set, and so on until one arrives at the final set, after which one switches back to the first set to draw the next item.
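
Below is a minimal Python sketch of the trial-and-error strategy together with the round-robin interleaving just described. The two parametrized families of candidate realities (alternating blocks of a fixed length, and indicator sequences of the multiples of a number) are arbitrary illustrative choices, not prescribed by the model.

```python
from itertools import count

def block_alternation(k):
    """Candidate reality: blocks of k ones and k zeros, alternating forever."""
    return lambda n: 1 if (n // k) % 2 == 0 else 0

def multiples_of(m):
    """Candidate reality: bit n is 1 exactly when n is a multiple of m."""
    return lambda n: 1 if n % m == 0 else 0

def candidate_realities():
    """Round-robin interleaving of the two infinite families of candidates:
    draw one candidate from each family in turn, forever."""
    for k in count(1):
        yield block_alternation(k)
        yield multiples_of(k)

def trial_and_error_agent(prefix):
    """Stick to the current candidate until one of its predictions is refuted
    by an observed bit, then move on to the next candidate. The prefix is used
    only to detect refutations, never for inductive inference."""
    candidates = candidate_realities()
    current = next(candidates)
    for n, observed in enumerate(prefix):
        if current(n) != observed:
            current = next(candidates)
    return current(len(prefix))

# Reusing last_error_index from the earlier sketch: against a reality drawn from
# one of the two families, e.g. (multiples_of(5)(n) for n in count()), the agent
# stops erring after finitely many refutations; against the pseudo-random
# placeholder reality above, it keeps cycling through candidates.
```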

Statement: the epistemic agent has no winning strategy:

  • That is, for each strategy of the epistemic agent, there is a reality/strategy of the reality that beats the strategy.
  • The sketch of a proof: since the epistemic agent is given by a total computable function/Turing machine that always terminates when producing its prediction of the n-th item, the reality can emulate the epistemic agent and at each step do something different from what the epistemic agent predicts. A full technical development of this is missing; an illustrative sketch in the setting of the Python code above follows.
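
The diagonal construction can be sketched in the same Python setting as above; this is an illustration, not a proof, since a finite simulation cannot establish the infinite claim.

```python
def diagonal_reality_bits(agent):
    """Reality that emulates the given (deterministic, total) agent and always
    outputs the opposite of the agent's prediction, so the agent's guess is
    wrong at every single position."""
    prefix = []
    while True:
        bit = 1 - agent(prefix)
        prefix.append(bit)
        yield bit

# For any agent defined as above, e.g. trial_and_error_agent:
# last_error_index(diagonal_reality_bits(trial_and_error_agent), trial_and_error_agent)
# returns horizon - 1, i.e. the agent errs up to the very end of the horizon.
```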

Model limitations

Limitations of the model:

  • The epistemic agent is separated from the reality. In the real world, according to physicalism, the epistemic agent is part of the reality he is trying to learn about.
  • No memory or time constraints on computation are considered, which is typical of applications of computability theory.
  • The epistemic problem modeled is not one of conceptual analysis and concept acquisition (time, space, object extended in space, event, process, animal, plant, etc.). Someone has provided distinct symbols to be accessed by the epistemic agent. Perhaps conceptual analysis could be indirectly modelled as extraction of certain bit patterns. But then, the strategy of the epistemic agent proposed as promising disregards any patterns; it keeps on blindly guessing whenever a putative theory is refuted by the next incoming bit.
  • The epistemic agent is considered to succeed only if, from a certain time point, all the further predictions are correct. By contrast, a biological epistemic agent does not need to be right every time, provided food, predator evasion, etc. requirements are satisfied.
  • Nothing like an inductive, interpolative, extrapolative, curve-fitting or model-fitting epistemic process is used. That is suspect. The model can be charged with being biased or geared toward Popperian philosophy of science and against induction. Instead of our having learned something from the model about Popperianism vs. induction, the charge would be that Popperianism is implicitly embedded in the model. The true answer seems unclear.
  • The reality is represented in a way in which one would not represent the complete environment of a human epistemic agent. The sequence of ones and zeros suggests that a single time step corresponds to a single bit. One can reinterpret the sequence so that a finite block of bits corresponds to one macro-symbol matching a real time step, but even so, it is not clear that sensory input into human consciousness matches a fixed bit length per time step, e.g. eyes opened vs. closed. This problem is perhaps not as deep and insurmountable as one might think, but it shows a certain unintuitive aspect of the model. Any splitting of the input into objects and tracking of changes at the object level is left unmodeled.

Discussion

Discussion:

  • Given the limitations, one can object that the model is unrealistic. However, the crude features of the model and its properties do seem to have a bearing on real-world epistemology.
  • The idea that the epistemic agent can never know the reality is plausible.
  • Equally plausible is the idea that an agent who uses the full power of a Turing machine cannot possibly gain very good knowledge of a system that uses the full power of a Turing machine to evade, e.g. by producing deterministically chaotic behavior.
  • Deterministic chaos is seen in weather, which makes weather prediction hard.
  • However, weather is hard to interpret as a system that intentionally seeks to avoid getting to be known and predicted.
  • By contrast, humans are systems that plausibly often try to intentionally avoid getting to be fully known.
  • Human power corrupts. The strata or subsystems of a person that are hidden from surface behavioral view cannot be easily known, if at all. They may wait for an opportune time to switch behavior.
  • In a strange twist, a manager or boss of a person can be interpreted as a hidden stratum of a larger person. The behavior of a person reflects not only their own personality but also that of the manager and other communicating and controlling systems attached via communication channels.
  • These were asides, but they seem vaguely relevant and of interest.

Extending sequences with next item

It was objected that the design of the game may well be pro-Popperian/pro-falsificationist and anti-inductivist. Let us consider a different game design and see how it fares.

In intelligence tests, there is sometimes an activity in which one is presented with a finite prefix of an infinite mathematical sequence and has to provide the next item. Unlike the design of the epistemic game above, where an infinite match is required for success, here only a single element is required for the epistemic agent to win. Thus, the first player, the sequence proposer, may figure out a sly sequence and present its first several items, and the epistemic agent wins if he correctly determines the next item to fit into the infinite sequence.

One may ask what it means for the single next element to be the correct answer. After all, any finite sequence can be extended with zeros ad infinitum and is thereby finitely representable; one may note that by doing so, one performs no compression of the finite prefix into an algorithm. One idea could be that the "correct" next element is the one that yields the smallest Kolmogorov complexity, that is, such that the resulting sequence has the minimal finite representation in the form of a program that generates the sequence. This definition could stumble on technicalities: if the sequence prefix is too short, it may technically be more efficient (in terms of a smaller program) to enumerate it by hardcoding the items into the program and be done with it. For this purpose, one could posit the prefix to be, say, 1000 items long rather than 10 items long; as something of a downside, this does not match the kinds of assignments people are actually given.
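
Written out, one candidate formalization of this criterion (reading "the resulting sequence" as the extended finite prefix, with K denoting Kolmogorov complexity relative to a fixed universal machine U) is:

\[
b^{*} = \arg\min_{b}\, K(x_1 x_2 \cdots x_n\, b),
\qquad
K(y) = \min \{\, |p| : U(p) = y \,\}.
\]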

Assuming the task of extending sequences with the next item and something like Kolmogorov complexity to determine the correct next item, and for that purpose, say, 1000 revealed elements, we again have the structure of a game. As with the first game, it is far from obvious that trying to induce or extrapolate the next item from the finite prefix does much good. Indeed, having an extensive list of candidate sequences in compact form, represented as a list of programs, seems to be a better plan. Unlike the first game, this game seems much less pro-Popperian/pro-falsificationist, since guessing does the epistemic agent no good.

Let us consider some sequences that could be a bit harder for a human to complete with the next item (a few of them are sketched in code after the list):

  • Prime numbers times 3 plus 7. One will spot an apparent irregularity, but it is unclear whether one will link it to prime numbers.
  • Starting from the Collatz 3x + 1 / x/2 process, the number in the sequence would be the cycle length (the number of steps needed to reach 1) for the number that starts the process. See also "Collatz conjecture".
  • The expansion of pi expressed in base 57, each digit being the next item. Alternatively, one may pick sin(e / pi * 7) expressed in base 77, or the expansion of any other transcendental or algebraic real number that is compactly represented and comes to mind.
  • Concatenate the binary representations of the numbers 1, 2, 3, etc., then set a fixed bit length, e.g. 15 bits, and encode each successive block of bits as a positive integer.
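
The first two items can be sketched in Python (reading "cycle length" in the Collatz item as the number of steps needed to reach 1); the base-57 expansion of pi and the bit-re-chunking item are omitted for brevity.

```python
from itertools import count, islice

def primes():
    """Yield the prime numbers 2, 3, 5, 7, ... by trial division."""
    for n in count(2):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n

def primes_times_3_plus_7():
    """First listed sequence: each prime times 3 plus 7."""
    return (3 * p + 7 for p in primes())

def collatz_steps(n):
    """Number of 3x + 1 or x/2 steps needed to reach 1 from n."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

def collatz_step_counts():
    """Second listed sequence: step counts for the starting numbers 1, 2, 3, ..."""
    return (collatz_steps(n) for n in count(1))

print(list(islice(primes_times_3_plus_7(), 8)))  # [13, 16, 22, 28, 40, 46, 58, 64]
print(list(islice(collatz_step_counts(), 8)))    # [0, 1, 7, 2, 5, 8, 16, 3]
```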

The player picking the sequence in the examples above would be a human; it seems hard to imagine a natural process, say from physics, presenting one with this kind of problem. Alternatively, the sequence picker could be computer software, e.g. using a quasi-random process to build a syntax tree for an expression representing a real number, picking the base, and representing the number. Any program with hardwired parametrized sequence-generating subprograms, picking a subprogram pseudo-randomly (or obtaining quantum-generated random numbers over the Internet from a supplier), will do, and can be extended indefinitely with additional parametrized subprograms. The parameters would be set pseudo-randomly or quantum-randomly.
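
A toy sketch of such a software sequence proposer could look as follows; the pseudo-random source stands in for a quantum-random one, and the three parametrized families are arbitrary placeholders that could be extended at will.

```python
import random
from itertools import count, islice

def sequence_picker(seed=None, length=20):
    """Pseudo-randomly pick one of a few hardwired parametrized sequence-generating
    subprograms and its parameters, then emit a finite prefix of the sequence."""
    rng = random.Random(seed)
    families = [
        lambda a, b: (a * n + b for n in count(0)),      # arithmetic progressions
        lambda a, b: (a * n * n + b for n in count(0)),  # quadratic progressions
        lambda a, b: (a * b ** n for n in count(0)),     # geometric progressions
    ]
    family = rng.choice(families)
    a, b = rng.randint(1, 9), rng.randint(2, 9)
    return list(islice(family(a, b), length))

print(sequence_picker(seed=0))
```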

There is a website cataloguing mathematical sequences and assigning them a numerical identifier: The On-Line Encyclopedia of Integer Sequences (OEIS)[2].

One complication of this game design is that the task of determining the Kolmogorov complexity of a given string is algorithmically unsolvable. To address this for the practical purposes of a genuine real-world game, one could think of replacing this general concept with Lempel-Ziv complexity, where one is not allowed to use an arbitrary algorithm for compression but rather a specific algorithm of the kind used in zip compression. But this would break the character of the definition of the correct next item: one cannot expect to be able to Lempel-Ziv-compress, say, prime numbers times 3 plus 7. An amendment is this: the epistemic agent determines what he thinks is the next item, claims that it is the correct one, and states his best estimate of the Kolmogorov complexity, that is, the smallest found length of a sequence-generating program. The sequence proposer then either admits a loss or shows that there is a better estimate of the Kolmogorov complexity by exhibiting a shorter sequence-generating program. This is practicable: the sequence-proposing player can know a program generating the sequence, and the epistemic agent can know a program for his version of the sequence extended by the next item.
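
To make the Lempel-Ziv idea and its limitation concrete, here is a small sketch of such a compression-based proxy using Python's zlib module (whose DEFLATE algorithm is LZ77-based); the helper names are illustrative.

```python
import zlib

def compressed_length(items):
    """Length in bytes of the zlib-compressed encoding of the items; a crude
    practical stand-in for Kolmogorov complexity."""
    data = ",".join(str(x) for x in items).encode()
    return len(zlib.compress(data, level=9))

def best_next_item(prefix, candidates):
    """Among the candidate next items, prefer the one whose extension of the
    prefix compresses best under the zlib proxy."""
    return min(candidates, key=lambda c: compressed_length(list(prefix) + [c]))

# Compare competing continuations of a strongly repetitive prefix:
prefix = [1, 2, 3] * 100
for c in (1, 9):
    print(c, compressed_length(prefix + [c]))
```

On strongly repetitive prefixes such a proxy tends to favor the pattern-continuing item; on sequences like prime numbers times 3 plus 7 it does not, which is exactly the limitation noted above.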

One may now recall that the game was meant to represent a reality vs. an epistemic agent. If so, the reality does not need to care about Kolmogorov complexity. From that perspective, one could think of modifying the game so that the epistemic agent wins if the next item he provides is correct with respect to the underlying finite representation of the infinite sequence. Thus, the reality player would write down (or send to an intermediary) the proper next item together with the generating program matching it. But then, if there is no requirement relating to Kolmogorov complexity or anything similar, the reality player may generate a finite sequence using a quantum-random generator, remove the final item from the result, fill in the rest with zeroes (but the next item to be guessed is not zero but rather the item removed), hardcode the sequence into a program, and present the result to the epistemic agent ("knower"). Then the epistemic agent has tough luck: there is no genuine relation between the first n-1 items and the n-th item, and the only thing remaining is guessing. It is unclear what, if anything, could be learned from such a game design about Popperianism vs. inductivism. It favors Popperianism, but not in an interesting way: since guessing performs no worse than any other process against a reasonably sophisticated reality, one may as well stick to Popperian trial and error, but one does not learn anything from the possible error, since the next item is the only trial available.
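
A sketch of this degenerate reality, with a pseudo-random generator standing in for the quantum-random one, might look like this:

```python
import random

def degenerate_reality(n=1000, seed=None):
    """Generate n random items, reveal all but the last, and let the underlying
    program hardcode the full finite sequence followed by zeros forever; the
    revealed prefix carries no information about the item to be guessed."""
    rng = random.Random(seed)            # stand-in for a quantum-random source
    items = [rng.getrandbits(1) for _ in range(n)]
    revealed, answer = items[:-1], items[-1]
    def underlying_program(i):           # the finite representation held by the reality
        return items[i] if i < n else 0
    return revealed, answer, underlying_program
```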

This game is much more extrapolationist in some sense than the first game: the prefix revealed plays a central role in winning the game, whereas in the first game, the knower could conveniently ignore the prefix revealed.

One objection to the above model and its use of Kolmogorov complexity is that if the model should represent the reality and the knower, it is not clear why the reality should have the smallest Kolmogorov complexity rather than some other property (or, why Occam's razor?). Surely the reality we are living in is rather complex. And since fully computationally powerful systems ("Turing-complete") are present in the reality (including humans and computers), some parts of reality can be extremely complex in terms of Kolmogorov complexity; a computer can easily contain a very large program to produce a hard-to-extrapolate sequence. The objection seems valid, but a candidate replacement for Kolmogorov complexity does not readily come to mind.

Technical mathematical description

This article makes no attempt at technical mathematical specification. It considers the informal description above to be sufficient for philosophical purposes. A mathematician proper could try to develop the ideas into a real mathematical article.

Brim's thesis

Expanding on the above, let us consider the following thesis, which I have named Brim's thesis after teacher Luboš Brim:

  • When developing a mathematical proof in computer science, if one sees a problem as intuitively computable, one does not need to feel compelled to develop a detailed technical proof of the computability.

While the thesis relaxes stringency, it frees the scarce resources of a creative mathematician to focus on the substance or core of things. Whether the formulation matches Luboš Brim's views exactly is uncertain. The thesis is related to Church's thesis, but does not seem to be exactly the same thing. One formulation of Church's thesis can be this: since all hitherto developed formalizations of a computation mechanism intended to capture the concept of computability have proved to be equivalent (Turing machines, while programs, the lambda calculus), it seems unlikely that a search for a new, more powerful mechanism will discover a non-equivalent formalization. (Let us be careful about what we mean by this, though; an oracle machine is more powerful than an oracle-free machine.)

Logic and mathematics

A general counter-argument can be made to the effect of this:

  • The world cannot be understood using logic and mathematics, and therefore, the above attempt necessarily fails.

A proper response to this objection is missing. As a sketch of a response: the objection seems far from conclusive. Some parts of the world are in fact well modeled using logic and mathematics, for instance the motion of an accelerating aircraft.

Further considerations

To examine the subject of this article further, one could perhaps get acquainted with, e.g., machine vision and find out whether its algorithms more resemble inductivism, the exploration of a list of hardwired parametrizable models (more like Popperianism), a combination of both, or something else altogether.

References

  1. computability - Why are the total functions not enumerable?, Computer Science Stack Exchange
  2. The On-Line Encyclopedia of Integer Sequences (OEIS), oeis.org
