One man's look at logic
This resource includes primary and/or secondary research. Learn more about original research at Wikiversity.
This article by Dan Polansky looks at logic, the study of correct inference. It is in part idiosyncratic.
Let me open the discussion by asking why anyone would want to study correct inference, correct conclusion drawing, that is, production of correct/true statements from correct/true statements. Are we not all born with the ability to draw conclusions from premises? Can explicit articulation of the principles of correct inference really bring us forward in any way?
My tentative answer is yes, studying correct inference is of value. Above all, our experience shows that humans are too frail, too ready to make errors in inference/conclusion drawing. Given this fact, it does not yet follow that logic is going to help. Whether logic is going to help is an empirical question in the field of human psychology; it cannot be answered purely logically. It could turn out that people who learn logic (especially formal logic) do not really improve in their ability to draw correct conclusions.
One kind of logic taught is propositional logic. Here one learns to interpret the logical connectives (and, or, implication, not) as truth-value/boolean functions. Thus, one can think of them as algebraic operators defined by means of truth-value tables. The idea is of logic as algebra. One can ask whether this brings us any further. It does. For instance, in natural language, or is sometimes taken to mean exclusive or. By defining the logical or by means of a table, one removes all ambiguity. One says: in logic, when we say or, this is what we mean. Another important idea is to interpret sentences as propositions that have a truth value, true or false. That assumes the law of the excluded middle: a sentence has to be either true or false (whatever our knowledge of it). It is not obvious that sentences in natural language can generally be unambiguously interpreted in that way. Propositional logic requires us to try to think of unambiguous sentences that have a truth value; if a sentence is ambiguous, it cannot be immediately fed as an input into propositional logic. Another thing of note is the table-based implication. It is defined as follows: A ==> B =def= (not A) or B. One sometimes reads "==>" as implies or from which follows, but that does not really make sense. The idea that from an untrue statement any statement follows seems suspect. Thus, the idea that e.g. from the grass being always yellow it follows that all cars are green does not make sense. In case of doubt, one is well reminded that "==>" is defined by the truth table, which is equivalent to (not A) or B.
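The table-based view of the connectives can be sketched in a few lines of Python (the function names are my own; only the truth tables themselves come from the discussion above):

```python
from itertools import product

# Propositional connectives as boolean functions, i.e. as truth tables.
def land(a, b):
    return a and b

def lor(a, b):
    return a or b  # inclusive or: true when at least one argument is true

def lnot(a):
    return not a

def implies(a, b):
    # The table-based implication: A ==> B =def= (not A) or B.
    return (not a) or b

# Print the full truth table for implication.
for a, b in product([True, False], repeat=2):
    print(a, b, "=>", implies(a, b))
```

Note that the table makes the implication true whenever the antecedent is false, which is exactly the feature discussed above: it is a defined operator, not a claim that the consequent "follows from" the antecedent.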
The real workhorse is first-order predicate logic. It seems to be based on the 19th-century work of Frege. Here, one adds variables and existential and universal quantifiers as well as predicate symbols and function symbols. The variables refer to entities in the universe of discourse, that is, the entities one can talk about given the particular language of concern. A language of concern is a set of predicate and function symbols together with their arities; semantics is not involved. The "first-order" part in the name refers to the quantification being only over items in the universe of discourse and not over sets of such items.
Natural language is sometimes said to be not logical. That is misleading. In fact, language cannot violate the laws of logic. What is often meant by it is that language contains a lot of peculiarities, deviations from pattern-based expectations. For instance, one could think that "here" and "where" would be pronounced in a similar way, but that is not so. More importantly, there are semantic peculiarities, in which the semantics deviates from the pattern-based expectation. None of this violates the canons of logic. One simply has to learn that instead of making pattern-based guesses/estimates, one has to get more serious about word and phrase meaning, examining the meaning of each individual item in case of doubt regardless of the suggestiveness of the morphology or etymology.
We may also mention Aristotle. He pointed out that we can sometimes reliably produce true sentences from true sentences. Thus, we can in fact discover some purely mechanical rules. A classic example is this: Socrates is a human; all humans are mortal ==> Socrates is mortal. This reminds us of predicate logic, but Aristotelian logic is much less powerful. I will not delve more into this here since I find it mainly of historical interest; if one is serious about logic, one should go for first-order predicate logic.
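The mechanical character of the syllogism can be illustrated in Python by checking it semantically: over every interpretation of the predicates Human and Mortal on a small finite universe, whenever the premises hold, the conclusion holds too. This is a sketch under my own naming, and a two-element universe of course only illustrates, rather than proves, full first-order validity:

```python
from itertools import product

universe = ["socrates", "plato"]
valid = True

# Enumerate every interpretation of the two unary predicates on the universe.
for human_bits, mortal_bits in product(product([False, True], repeat=2), repeat=2):
    human = dict(zip(universe, human_bits))
    mortal = dict(zip(universe, mortal_bits))
    # Premises: Socrates is human; for all x, human(x) ==> mortal(x).
    premises = human["socrates"] and all((not human[x]) or mortal[x] for x in universe)
    # Conclusion: Socrates is mortal.
    conclusion = mortal["socrates"]
    if premises and not conclusion:
        valid = False

print(valid)  # no interpretation makes the premises true and the conclusion false
```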
Strangely enough, arithmetic calculation can be seen as a species of logic in that it is in the business of mechanically producing true sentences from true sentences. For instance, from noting that soldiers are in a rectangular formation of 6 rows and 8 columns, we may reliably conclude that there are 48 soldiers. One may then perhaps ask whether the whole of mathematics is a branch of logic. Whatever the case, there is a traditional separation of logic from mathematics.
One concern about the application of logic is that in order to produce true sentences from true sentences, we need some true sentences to start with, obtained without the use of logic. That is true enough; these can be observational report sentences. One can charge that the observational report sentences are uncertain, and that therefore the strictly logical conclusions are uncertain as well. That may be true in principle, but does not really seem practically relevant. For instance, we take ourselves to know reliably that Socrates is a human and that all humans are mortal; and then, we feel comfortable about drawing the conclusion that Socrates is mortal. That said, the GIGO problem (garbage in, garbage out) is in general a real one for mechanical/algorithmic sentence production. There are too many sentences that we do not know reliably enough and yet we want to draw correct conclusions. Importantly, mechanical conclusion drawing is of great value as part of falsificationism: if an uncertain sentence has a necessary logical consequence known to be untrue, the sentence cannot be true. Rejection of mechanical deductive inference as a principle would seem to prevent falsificationism from operating.
One idea brought forward by first-order predicate logic is that mechanical rules work well when all symbols are unambiguous. The mechanisms of this logic do not have any way to disambiguate by context; all occurrences of a symbol (predicate, function or variable) are taken to mean the same thing. One suggestion is then that the human mind is helped when the sentences deliberated about have reduced ambiguity; something like the logical engines in the background of the mind can start to work much better. However, this is an empirical hypothesis and would need a proper examination.
A related idea is something that I call export of semantic items onto the syntactic surface. Formal symbolic logic can only operate on what has been expressly stated using syntactic means as part of a sentence. Human deliberation about sentences often does not work like that; practical conclusion drawing often involves the incorporation of unstated assumptions. Symbolic logic can inspire us to state additional assumptions to make purely mechanical inference and argument verification work.
One interesting application of predicate logic is having the pronoun nothing disappear by translating sentences into their logical form. Thus, the sentence "there is nothing in the box" can be rendered as "for each macroscopic object, it holds that it is not in the box". This points to natural language syntactically constructing apparent objects that are in fact not there, as part of something like syntactic sugar, here the putative referent of the word nothing that is allegedly contained in the box. The syntactic sugar is nice to have; it is much nicer to say "there is nothing in the box" or respond to the question "what is in the box" with "nothing" than to use the more complex phrasing above. And then, one can suspect that inquiries into the so-called nothingness turn out to be nonsense (or maybe not?).
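The translation can be mimicked in Python over a small model (the objects and names are my own invention): the quantifier equivalence not-exists == for-all-not does the work that the apparent object "nothing" did in the surface sentence.

```python
# A small model: some macroscopic objects and an empty box (illustrative data).
macroscopic_objects = ["apple", "key", "coin"]
in_box = set()  # nothing has been put in the box

# "There is nothing in the box" as: not (there exists x such that x is in the box).
nothing_in_box_exists_form = not any(x in in_box for x in macroscopic_objects)

# The same sentence as: for each x, it holds that x is not in the box.
nothing_in_box_forall_form = all(x not in in_box for x in macroscopic_objects)

# The two logical forms agree; no object called "nothing" appears anywhere.
print(nothing_in_box_exists_form, nothing_in_box_forall_form)
```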
There are various specialized formal symbolic logics, e.g. modal and temporal logics. In modal logics, the formal operators are interrelated in the same way as the existential and universal quantifiers: <> =def= not [] not. The Wikipedia article on modal logic shows a lattice structure of different axiomatic modal logics. One can ask which of these logics is the true or valid one and why. This remains something of a puzzle. More is left for further reading.
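The duality <> =def= not [] not can be checked concretely on a tiny Kripke-style model in Python (worlds, an accessibility relation, and a valuation of one proposition; all data here is my own made-up example):

```python
# A tiny Kripke-style model: three worlds, an accessibility relation,
# and a valuation of a single proposition p at each world.
worlds = {1, 2, 3}
access = {1: {2, 3}, 2: {3}, 3: set()}
p = {1: True, 2: False, 3: True}

def box(w):
    # "[] p" at w: p holds in every world accessible from w.
    return all(p[v] for v in access[w])

def diamond(w):
    # "<> p" at w: p holds in some world accessible from w.
    return any(p[v] for v in access[w])

# The duality <> p == not [] not p, checked at every world.
for w in worlds:
    assert diamond(w) == (not all(not p[v] for v in access[w]))
```

Note the edge case of world 3, from which no world is accessible: there "[] p" is vacuously true while "<> p" is false, and the duality still holds.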
Apart from formal symbolic logic, there is also a thing called informal logic. It investigates e.g. logical fallacies, a classic being ad hominem. It does seem to have the capacity to reduce the rate of certain kinds of wrong arguments, but to what extent it really does is again an empirical question.
There is also something called argumentation theory. One would think it could be part of logic. I would need to have a closer look at it to see what it does. As a first note, it appears clear that the support relation (a statement supporting another statement) is usually not necessarily one of strict deductive inference. Something else must be going on, but what it is exactly I would need to figure out. One part of argumentation is something I would call argument and counter-argument, on a nested level. It is reminiscent of Popper's conjectures and refutations and Lakatos' proofs and refutations, but it can be something somewhat different. One idea is that in order to critically investigate a statement, one must allow even relatively weak counter-arguments into the discussion (but not completely irrelevant ones). And then one may criticize the counter-arguments as well, leading to a nested argument structure. Wikidebates in Wikiversity are a great example of this structure.
There is something called inductive logic. From what I remember, Popper says something to the effect that there is no such thing as inductive logic, since logic is the study of correct inference and inductive inferences are not correct. I would like to look more into the matter, paying more attention to defenders of induction (Carnap?). I also need to clarify whether I want to treat induction here or in the epistemology article.
The relation of logic to epistemology should be clarified. Logic could be seen as part of epistemology; since, if someone asks me how I know that Socrates is mortal, I can answer: I know it by applying mechanical rules of logical inference, taking reliably known facts as an input.
Mathematical symbolic formal logic can be contrasted with the logic used in mathematics by mathematicians. There is a certain degree of informality in mathematical proofs, even when they invoke existential and universal quantifiers. Mathematical logic sets up axioms and proofs (which it sometimes calls derivations) as formally mathematically conceived/defined entities, subject to rigorous mathematical analysis. And thus, mathematical logic is metamathematics (mathematics about the tools used by mathematics) as well as a metafield (a field about tools used by various fields of inquiry). Let us recall that mathematics was not in a very bad state before the arrival of Fregean logic in the 19th century. Mathematicians succeeded in doing mathematics at least since the ancient Greek Euclid, noted for the axiomatic system of Euclidean geometry. It would seem that mathematicians must have informally known something like first-order predicate logic all along. Whether that really is the case I do not know; this would require a thorough and serious look into the history of mathematics. One could argue that Newton and Leibniz did not practice the modern mathematical rigor with their early versions of calculus and that therefore something could have changed with the arrival of mathematical logic, especially with Cantor's set theory. One would do well to investigate the possible impact of Frege on Cantor.
One may wonder what value there can be in mathematical first-order logic. After all, in order to execute the study, one needs to assume something like informal logic anyway. Thus, to prove theorems that are part of mathematical logic, one needs something like informal logic. We already knew how informal logic works before we even started, so why the fuss? What is the point of this bureaucratic exercise, investigating something that was clear before we even started? Not so fast. For one thing, it is on the basis of mathematical formal symbolic logic that we can get such results as Gödel's incompleteness theorems. Without formalizing logic in this way to be studied as an object, it is unclear how these could be obtained. Moreover, first-order logic opens itself directly to computer support, such as in theorem provers.
There are multi-valued mathematical logics, including fuzzy logic. Thus, instead of a predicate being either true or false about its subject or subjects, the truth value can have degrees. In fuzzy logic, the truth value (also interpreted as a degree of membership in a fuzzy set) is a real number in [0, 1]. One then has to figure out how to calculate the logical connectives and, or, implication and not, and multiple proposals have been given. Fuzzy logic has applications in devices such as photographic cameras.
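Two of the commonly given proposals for the fuzzy connectives can be sketched in Python; the Zadeh-style min/max operators and the product-based operators are both standard, but the function names here are my own:

```python
# Fuzzy connectives on truth values in [0, 1].
def f_not(a):
    return 1.0 - a  # negation, shared by both proposals below

# Proposal 1: Zadeh (min/max) operators.
def zadeh_and(a, b):
    return min(a, b)

def zadeh_or(a, b):
    return max(a, b)

# Proposal 2: product operators.
def prod_and(a, b):
    return a * b

def prod_or(a, b):
    return a + b - a * b

a, b = 0.7, 0.4
print(zadeh_and(a, b), prod_and(a, b))  # the proposals genuinely differ
```

Note that both proposals agree with the classical truth tables when the inputs are restricted to 0 and 1; they disagree only on intermediate degrees, which is why the choice of connectives has to be argued for rather than read off from classical logic.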
There is what is known as intuitionistic logic. (One should not read too much into the name "intuitionistic", I think. The intent does not seem to be to abandon formal rigor in favor of something like Poincaré's mathematical intuition.) It is weaker than classical logic (such as classical first-order logic): it makes fewer inferences possible. There seems to be an idea of constructiveness involved. I know almost nothing about it; further reading can be found in SEP and WP.
One can sometimes read that logic is a normative field. I find that doubtful. Logic does not tell anyone how he ought to think or whether he has anything like a duty to think in a particular way. Logic says: if you want to avoid producing untrue statements from true statements, here is how to go about it. A society can in fact require people to adhere to canons of logic, but that is not because logic says it should. Similarly, bridge engineering studies parameters of bridges and manner of bridge building that lead to low likelihood of the bridge failing. Bridge engineering does not tell anyone that they have a duty to build good bridges. Thus, bridge engineering is not a normative field. And nor is logic. I do not find the idea of logic being normative entirely wrong, in part since indeed, if e.g. a judge openly violates canons of logic or sound reasoning, there is likely to be a complaint that he broke his duty. It is just that the putative duty to engage in correct reasoning can be separated from study of correct reasoning.
Logic is sometimes contrasted with the psychology of reasoning. Popper argues these are different fields or domains and I find that convincing. On one hand, people often do feel the force of logic as if it were innate (and perhaps it is in some sense). On the other hand, people in fact often do reason in incorrect or brutally heuristic ways; logic does not recognize that reasoning as valid merely because it is or seems natural. Thus, logic does not seem to be part of psychology. It is this contrast that may lead people to say that logic is normative. But perhaps it is more debatable than it seems to me.
Further reading
- Logic, wikipedia.org
- Classical Logic, Stanford Encyclopedia of Philosophy -- features first-order predicate logic
- Search for "logic" in Stanford Encyclopedia of Philosophy -- shows there are many articles on the subject
- Matematická logika by Antonín Kučera (in Czech)