Q
Question
Identify the fundamental problem with relying on intuition to test validity — and see why a symbolic language is the solution

Article A4 ended with a thought: throughout A1–A4, we have been testing arguments by asking whether a conclusion "follows" from premises — and relying on our intuition to tell us when it does. Now consider this argument:

An argument that feels valid — but isn't  ·  Deductive
P1: If it is raining, then the ground is wet.
P2: The ground is wet.
∴ C: Therefore, it is raining.
Read it slowly. It probably feels compelling — the conclusion seems to follow. But it does not. There are many other reasons the ground could be wet: a burst pipe, a garden hose, condensation, someone spilling water. The premises give no guarantee that rain is the explanation. This argument is invalid — and yet it is one of the most common patterns of mistaken reasoning in science, medicine, and everyday life. It has a name: affirming the consequent.

The uncomfortable truth is that intuition — even careful, philosophically trained intuition — can be fooled. The argument above has the same surface structure as valid arguments we have seen throughout this package, but the logical relationship between its premises and conclusion is subtly different in a way that defeats necessity. Without a formal language that makes the structure explicit and checkable by rule, this kind of error is easy to miss.

This is exactly the problem that formal logic was developed to solve. The philosophers and mathematicians who built propositional logic — from Aristotle through to Frege and Russell — wanted a language so precise that the question of whether an argument is valid could be settled by mechanical inspection, without relying on anyone's intuition at all.

Can we build a language so precise that the question "does this conclusion follow from these premises?" can be answered by rule alone — with no room for intuitive error?

The answer is yes — within limits. Propositional logic is that language. It replaces the ambiguous connectives of English ("or," "if," "and," "not") with symbols that have exact, stipulated, unambiguous meanings. It represents propositions as variables (P, Q, R) so that the logical structure of an argument can be displayed naked, independent of content. And it provides a mechanical procedure — the truth table — for testing whether any formula is valid, regardless of what the propositions are actually about.

By the end of this article, you will be able to represent philosophical arguments in propositional notation, construct truth tables to test their validity, and identify the most important valid and invalid argument forms by name. These skills connect directly back to everything in A1–A4 — they are the same ideas, now made fully explicit.

U
Unpack
Build the vocabulary of propositional logic from the ground up — propositions, connectives, truth tables, and argument forms

Atomic propositions and variables

Propositional logic begins with the same unit we identified in Article A1: the proposition. In formal logic, individual propositions are represented by capital letters called propositional variables: P, Q, R, and so on. These stand as placeholders for any declarative sentence you care to assign to them.

For example, we might let P stand for "It is raining" and Q stand for "The ground is wet." Once those assignments are fixed, we can build more complex propositions — called compound propositions — by combining variables using connectives.

The five connectives

Propositional logic uses five connectives. Each has a symbol, a name, and precise truth conditions — meaning we can specify exactly when the compound proposition it forms is true and when it is false.

¬  ·  Negation  ·  "not P"
True when P is false. False when P is true. Flips the truth value of any proposition.
∧  ·  Conjunction  ·  "P and Q"
True only when both P and Q are true. False if either or both are false.
∨  ·  Disjunction  ·  "P or Q"
True when at least one of P or Q is true. False only when both are false. This is inclusive or.
→  ·  Conditional  ·  "if P then Q"
False only when P is true and Q is false. True in all other cases — including when P is false.
↔  ·  Biconditional  ·  "P if and only if Q"
True when P and Q have the same truth value (both true or both false). False otherwise.

The connective that most students find counterintuitive is the conditional (→). In ordinary English, "if it is raining, then the ground is wet" implies a real-world causal connection. In propositional logic, the conditional merely says: it will not be the case that the first proposition is true while the second is false. Crucially, if P is false — if it is not raining — the conditional is true regardless of whether Q is true or false. A promise is not broken when the condition for it never arises. This matters enormously when evaluating arguments.
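These truth conditions can be written out as ordinary Boolean functions. The sketch below is purely illustrative (the function names neg, conj, disj, cond, and bicond are our own, not standard notation):

```python
# The five connectives as Boolean truth functions.
# Names (neg, conj, ...) are illustrative, not standard notation.

def neg(p):        # ¬P: flips the truth value
    return not p

def conj(p, q):    # P ∧ Q: true only when both are true
    return p and q

def disj(p, q):    # P ∨ Q: inclusive or; false only when both are false
    return p or q

def cond(p, q):    # P → Q: false only when P is true and Q is false
    return (not p) or q

def bicond(p, q):  # P ↔ Q: true when P and Q have the same truth value
    return p == q

# A false antecedent makes the whole conditional true:
print(cond(False, True))   # True
print(cond(False, False))  # True
print(cond(True, False))   # False, the only failing case
```

Note how `cond` makes no mention of causation: it checks only the truth-value pattern, which is exactly the point the paragraph above makes.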

Truth tables: the validity-checking machine

A truth table is a systematic procedure for testing a formula by working through every possible combination of truth values for its variables. With two variables (P and Q), there are four possible combinations (T/T, T/F, F/T, F/F). With three variables, there are eight. The truth table is exhaustive: it covers every case, so if a formula is true in all rows, it is a tautology — necessarily true. And if an argument form has no row at all in which every premise is true and the conclusion is false, the argument form is valid.
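This exhaustive procedure is easy to mechanise. Here is a minimal sketch (the helper `is_tautology` is our own invention): it enumerates every row with `itertools.product` and asks whether the formula is true in all of them.

```python
from itertools import product

def is_tautology(formula, num_vars):
    """True if the formula holds in every row of its truth table."""
    rows = product([True, False], repeat=num_vars)  # 2**num_vars rows
    return all(formula(*row) for row in rows)

# P ∨ ¬P: true in both rows, so a tautology.
print(is_tautology(lambda p: p or (not p), 1))     # True

# P → Q: false in the row P=T, Q=F, so not a tautology.
print(is_tautology(lambda p, q: (not p) or q, 2))  # False
```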

Here is the complete truth table for the conditional P → Q:

Truth Table — The Conditional (P → Q)
Read row by row. The critical row is highlighted.
P  Q  P → Q  Interpretation
T  T  T  It is raining, and the ground is wet. The conditional holds.
T  F  F  Critical row: It is raining, but the ground is dry. The conditional is false — this is the only case where it fails.
F  T  T  It is not raining, but the ground is wet (other cause). The conditional is not violated.
F  F  T  It is not raining, and the ground is dry. The conditional is not violated.
The conditional is false in exactly one case: when P is true (the antecedent holds) and Q is false (the consequent does not). This is the formal definition of "if…then" in propositional logic. Notice rows 3 and 4: a false antecedent makes the whole conditional true. This is sometimes called the "paradox of material implication" — it means "if 2 + 2 = 5, then the moon is made of cheese" is technically true in propositional logic, because the antecedent is false.

Key valid argument forms

Certain logical forms are valid — that is, they can never produce a false conclusion from true premises — and these forms recur so often in philosophical argument that they have earned proper names. Each one below should feel familiar: you have been using these patterns throughout A1–A4 without the formal notation.

Modus Ponens  ·  "affirming the antecedent" Valid ✓
P1: If P then Q.    P → Q
P2: P is true.    P
∴ C: Therefore, Q.    Q
The most fundamental valid argument form. We saw it in Socrates: "If all humans are mortal (P → Q), and Socrates is human (P), then Socrates is mortal (Q)." You cannot accept both premises and deny the conclusion.
Modus Tollens  ·  "denying the consequent" Valid ✓
P1: If P then Q.    P → Q
P2: Q is false.    ¬Q
∴ C: Therefore, P is false.    ¬P
If my theory predicts X, and X did not happen, my theory is refuted. This is exactly Popper's falsificationism from Article A2: testing a theory by its predictions is modus tollens. Science's most powerful logical form.
Hypothetical Syllogism  ·  "chain of conditionals" Valid ✓
P1: If P then Q.    P → Q
P2: If Q then R.    Q → R
∴ C: Therefore, if P then R.    P → R
Two conditionals chained together. "If it rains, the ground gets wet. If the ground gets wet, the game is cancelled. Therefore, if it rains, the game is cancelled." Legal reasoning and causal chains often have this form.
Disjunctive Syllogism  ·  "eliminating a disjunct" Valid ✓
P1: P or Q (at least one is true).    P ∨ Q
P2: P is false.    ¬P
∴ C: Therefore, Q.    Q
The valid version of what the false dilemma fallacy (A4) tries to do. A false dilemma presents a disjunction where not all options are actually included — so when one is eliminated, you cannot validly conclude the other. When all options genuinely are in the disjunction, disjunctive syllogism is perfectly valid.
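All four named forms can be checked mechanically with one helper. The sketch below (the function `is_valid` is our own) implements the definition directly: a form is valid exactly when no row makes every premise true and the conclusion false.

```python
from itertools import product

def is_valid(premises, conclusion, num_vars):
    """Valid iff no row makes all premises true and the conclusion false."""
    for row in product([True, False], repeat=num_vars):
        if all(prem(*row) for prem in premises) and not conclusion(*row):
            return False  # this row is a counterexample
    return True

cond = lambda p, q: (not p) or q  # the material conditional P → Q

# Modus ponens: P → Q, P ∴ Q
print(is_valid([lambda p, q: cond(p, q), lambda p, q: p],
               lambda p, q: q, 2))                        # True

# Modus tollens: P → Q, ¬Q ∴ ¬P
print(is_valid([lambda p, q: cond(p, q), lambda p, q: not q],
               lambda p, q: not p, 2))                    # True

# Hypothetical syllogism: P → Q, Q → R ∴ P → R (eight rows)
print(is_valid([lambda p, q, r: cond(p, q), lambda p, q, r: cond(q, r)],
               lambda p, q, r: cond(p, r), 3))            # True

# Disjunctive syllogism: P ∨ Q, ¬P ∴ Q
print(is_valid([lambda p, q: p or q, lambda p, q: not p],
               lambda p, q: q, 2))                        # True
```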
E
Examine
Verify valid forms with truth tables, expose formal fallacies, and trace the historical development of formal logic

Verifying modus ponens with a truth table

The claim that modus ponens is valid can now be proved mechanically. An argument form is valid if and only if there is no row in its truth table where all premises are true and the conclusion is false. We test this for P → Q, P ∴ Q:

Truth Table Verification — Modus Ponens (P → Q, P ∴ Q)
Look for any row where both premises are T but the conclusion is F.
P  Q  Premise 1: P → Q  Premise 2: P  Conclusion: Q
T  T  T  T  T
T  F  F  T  F
F  T  T  F  T
F  F  T  F  F
The only row where all premises are true is Row 1 (P=T, Q=T). In that row, the conclusion Q is also T. There is no row where the premises are all true and the conclusion is false; therefore, modus ponens is valid — proved mechanically, without intuition. Does Row 2 have both premises true? No: P → Q is F when P=T and Q=F, so Row 2 does not have all premises true. Rows 3 and 4 both have P=F, so Premise 2 (P) is false. Only Row 1 counts, and it delivers a true conclusion.

The formal fallacies: two invalid forms that look valid

We can now expose formally — with the same mechanical procedure — the two most important formal fallacies involving the conditional. Both involve mixing up the roles of antecedent and consequent in ways that natural language conceals.

Affirming the Consequent  ·  the argument from the Question section Invalid ✗
P1: If P then Q.    P → Q
P2: Q is true.    Q
✗ C: Therefore, P.    P
The ground is wet, therefore it is raining. Invalid. From Row 3 of the conditional truth table: when P is false and Q is true, P → Q is still true. So we can have Premise 1 (P → Q) true and Premise 2 (Q) true while the conclusion (P) is false. The ground being wet is consistent with P being false — a hose, not rain. This error is ubiquitous in scientific reasoning: a theory predicts observation X; we observe X; therefore the theory is confirmed. The observation deductively establishes the theory only if the theory is the only possible explanation for X.
Denying the Antecedent  ·  the mirror error Invalid ✗
P1: If P then Q.    P → Q
P2: P is false.    ¬P
✗ C: Therefore, Q is false.    ¬Q
"If it rains, the ground gets wet. It is not raining. Therefore the ground is not wet." Invalid — again, the ground could be wet from a hose. The error here mirrors modus tollens (which IS valid) — but the direction is reversed. In modus tollens we deny Q and conclude ¬P. Here we deny P and wrongly conclude ¬Q. The conditional "if P then Q" says nothing about what happens when P is absent. Row 3 of the truth table: P=F, Q=T, P→Q=T — all premises true, conclusion (¬Q = F) false. Invalid.

Placing these two invalid forms next to their valid counterparts makes the structure of the error immediately visible. Modus ponens affirms P and concludes Q — correct. Affirming the consequent affirms Q and concludes P — invalid. Modus tollens denies Q and concludes ¬P — correct. Denying the antecedent denies P and concludes ¬Q — invalid. The symmetry is exact: these are the four possible ways to reason from a conditional, and exactly two of them are valid.
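The same row-by-row search can be made to report the counterexample itself. In this sketch (the helper `counterexample` is our own), both fallacies are defeated by the row P = false, Q = true, precisely the hose-not-rain case from the text:

```python
from itertools import product

def counterexample(premises, conclusion, num_vars):
    """Return the first row with all premises true and conclusion false, if any."""
    for row in product([True, False], repeat=num_vars):
        if all(prem(*row) for prem in premises) and not conclusion(*row):
            return row
    return None  # no counterexample: the form is valid

cond = lambda p, q: (not p) or q

# Affirming the consequent: P → Q, Q ∴ P
print(counterexample([lambda p, q: cond(p, q), lambda p, q: q],
                     lambda p, q: p, 2))       # (False, True)

# Denying the antecedent: P → Q, ¬P ∴ ¬Q
print(counterexample([lambda p, q: cond(p, q), lambda p, q: not p],
                     lambda p, q: not q, 2))   # (False, True)
```

In both cases the reported row is P false, Q true: the ground is wet, but not from rain.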

From Aristotle to Frege: the long road to formal logic

Aristotle's Prior Analytics was the first systematic attempt to study logical form rather than logical content. His syllogistic — the theory of three-proposition arguments — dominated Western logic for two millennia. It captured a great deal of valid reasoning and did so with impressive rigour for its time.

But Aristotelian syllogistic had limits. It could not handle relational statements ("Every philosopher who has studied Kant admires him"), it could not represent the full range of conditional reasoning, and it had no way of testing complex arguments involving more than two or three propositions. It was a powerful but incomplete tool.

The decisive transformation came in 1879, in a slim, dense pamphlet that virtually no one read at the time.

GF
Philosopher & Mathematician
Gottlob Frege
1848–1925
Mathematical logic  ·  Philosophy of language  ·  Philosophy of mathematics
Frege's Begriffsschrift ("Concept Script") of 1879 is the most important work in the history of logic since Aristotle. In it, Frege invented a complete formal language for logic — one that could express not just the categorical statements of Aristotelian syllogistic, but quantified statements ("all," "some," "none"), relational predicates, and the full apparatus of modern predicate logic. He was motivated by a philosophical project: to prove that arithmetic was reducible to pure logic — that mathematical truths were in fact logical truths (a position called logicism). Bertrand Russell discovered a devastating paradox in Frege's system in 1902, just as the second volume of Frege's Grundgesetze was going to press. Frege received the letter pointing out the paradox with extraordinary intellectual honesty, noting in his published response that "a scientist can hardly meet with anything more undesirable than to have the foundation give way just as the work is finished." Logicism was not finished, but Frege's logical system survived — and it forms the foundation of all modern formal logic.
Relevant works: Begriffsschrift (1879) — the founding document of modern logic; Die Grundlagen der Arithmetik / The Foundations of Arithmetic (1884) — the clearest statement of Frege's philosophical programme, and more readable than the Begriffsschrift.
📐
Historical context
Russell, Whitehead, and the Principia Mathematica (1910–13)
After discovering the paradox in Frege's system, Bertrand Russell and Alfred North Whitehead spent a decade attempting to rebuild the logical foundations of mathematics on a paradox-free basis. The result — Principia Mathematica, three volumes totalling nearly 2,000 pages — is one of the most ambitious intellectual projects ever undertaken. It established that propositional logic and predicate logic could be axiomatised and that mathematical reasoning could, in principle, be reduced to logical derivation from those axioms. The work is famous for spending several hundred pages before being able to formally prove that 1 + 1 = 2. Kurt Gödel's incompleteness theorems (1931) later showed that any sufficiently powerful formal system must contain true statements that cannot be proved within the system — establishing a fundamental and permanent limit to what formal logic can achieve. This is not a defeat for logic; it is one of the most profound mathematical results of the twentieth century.
S
Synthesise
Learn to formalise natural-language arguments — and understand both the power and limits of propositional logic

The practical skill this article is building toward is formalisation: taking an argument expressed in English and rewriting it in propositional notation. This process forces precision — ambiguities in the original argument become visible when you must commit to a formal representation.

How to formalise an argument — four steps

Step 1: Identify the conclusion and premises in standard form (A1 — you know how to do this).
Step 2: Identify the atomic propositions — the simple, non-compound claims — and assign each a variable (P, Q, R…).
Step 3: Identify the logical structure — which premises are conditionals? Which involve conjunction or disjunction?
Step 4: Write the formalised argument using symbols in place of natural language, and check that the symbolic version accurately captures the original.

Worked formalisation 1: The Socrates argument

The argument from Article A1 — now rendered formally for the first time.

Formalisation — Modus Ponens / Universal Instantiation
Natural language
P1: All humans are mortal.
P2: Socrates is human.
∴ C: Therefore, Socrates is mortal.
Formal representation
P1: H → M
P2: H
∴ C: M
Key: H = "Socrates is human"; M = "Socrates is mortal". P1 is formalised as H → M ("if Socrates is human, then Socrates is mortal") — which captures the universal claim applied to this individual. P2 affirms H. The argument is modus ponens: H → M, H, ∴ M. Valid. Note: strictly speaking, "all humans are mortal" is a universal quantification that requires predicate logic (∀x: Human(x) → Mortal(x)) — propositional logic can represent its application to Socrates but not the full generalisation.

Worked formalisation 2: Popper's falsificationism as modus tollens

In Article A2, we saw that Popper's falsificationism involves testing a scientific theory by its predictions. It can now be rendered formally — and the argument form made explicit.

Formalisation — Modus Tollens (Popper's falsificationism)
Natural language
P1: If my theory is true, then this experimental result will occur.
P2: This experimental result did not occur.
∴ C: Therefore, my theory is not true.
Formal representation
P1: T → E
P2: ¬E
∴ C: ¬T
Key: T = "my theory is true"; E = "the experimental result occurs". P2 denies E (¬E). C denies T (¬T). This is modus tollens: T → E, ¬E, ∴ ¬T. Fully valid. This formalisation shows why falsificationism is logically rigorous: the refutation of a prediction is a valid deductive argument for the falsity of the theory. Notice also why confirmation is logically weaker: observing E does not validate T, because that would be affirming the consequent — T → E, E, ∴ T — which is invalid.
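This asymmetry between refutation and confirmation can be verified mechanically with the row-by-row test from Examine. A minimal sketch (helper names are our own; T = "my theory is true", E = "the result occurs"):

```python
from itertools import product

cond = lambda p, q: (not p) or q  # material conditional

def is_valid(premises, conclusion):
    # Two variables, T and E, so four rows to check.
    return all(not (all(prem(t, e) for prem in premises) and not conclusion(t, e))
               for t, e in product([True, False], repeat=2))

# Falsification (modus tollens): T → E, ¬E ∴ ¬T
print(is_valid([lambda t, e: cond(t, e), lambda t, e: not e],
               lambda t, e: not t))  # True: refutation is deductively valid

# "Confirmation" (affirming the consequent): T → E, E ∴ T
print(is_valid([lambda t, e: cond(t, e), lambda t, e: e],
               lambda t, e: t))      # False: confirmation is not
```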

The limit propositional logic cannot cross

Propositional logic is powerful, but it treats propositions as atomic — indivisible units that are simply true or false. It cannot look inside a proposition and reason about the subjects and predicates it contains. This means it cannot handle arguments whose validity depends on quantifier words such as "all," "some," "every," "no," and "none."

Consider: "All philosophers are curious. All curious people ask questions. Therefore, all philosophers ask questions." This argument is intuitively valid, and can be proved valid in predicate logic — the more powerful formal system that Frege actually developed. But in propositional logic, we would have to assign a single variable to "All philosophers are curious" (call it P), another to "All curious people ask questions" (Q), and another to "All philosophers ask questions" (R). Then our "argument" is just P, Q ∴ R — and a truth table immediately shows this is invalid (P and Q can both be true while R is false).
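That truth-table check takes moments to run. In this sketch (the helper `counterexample` is our own), the row P = T, Q = T, R = F falsifies the flattened form immediately:

```python
from itertools import product

def counterexample(premises, conclusion):
    # Three atomic variables, so eight rows to check.
    for p, q, r in product([True, False], repeat=3):
        if all(prem(p, q, r) for prem in premises) and not conclusion(p, q, r):
            return (p, q, r)
    return None

# P, Q ∴ R: the quantified argument flattened into unrelated atoms.
print(counterexample([lambda p, q, r: p, lambda p, q, r: q],
                     lambda p, q, r: r))   # (True, True, False)
```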

The formal representation has failed to capture the actual logical structure of the argument. This is not a failure of the argument — it is the limit of propositional logic. The validity of the argument depends on internal structure (the relationships between subjects, predicates, and quantifiers) that propositional logic is designed to ignore. Predicate logic (also called first-order logic) is the formal system that handles this — and it is taught in university-level formal logic courses. For senior secondary philosophy, propositional logic takes you very far indeed.

T
Transfer
See how formal logic underlies computing, shapes law, and raises profound philosophical questions about the limits of reason itself

Logic and the computer

The most consequential application of formal logic is one that Frege and Russell never anticipated: the digital computer. In 1936, the mathematician Alan Turing proved that any computation could be reduced to a sequence of simple logical operations — and that an abstract machine operating on a tape of symbols could, in principle, compute anything computable. In 1937, Claude Shannon showed that propositional logic could be implemented in electrical circuits using Boolean algebra — the same logical system, with True and False replaced by 1 and 0, represented as high and low electrical voltages.

Every processor, every transistor, every logic gate in every device you have ever used implements the connectives from Unpack. A NAND gate implements ¬(P ∧ Q). An OR gate implements P ∨ Q. An XOR gate implements exclusive disjunction. The billions of logical operations your phone performs every second are, at the deepest level, the same mechanical truth-table operations you worked through in this article — just running at speeds measured in gigahertz. Formal logic is not an abstract philosopher's game: it is the operational foundation of the information age.
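These gate correspondences can be modelled directly in code. A sketch (gates modelled here as Boolean functions rather than voltages; the function names are our own):

```python
def nand_gate(p, q):  # NAND implements ¬(P ∧ Q)
    return not (p and q)

def or_gate(p, q):    # OR implements P ∨ Q
    return p or q

def xor_gate(p, q):   # XOR implements exclusive disjunction: exactly one input true
    return p != q

# NAND is functionally complete: every other connective can be built
# from it. Negation, for example, is NAND with both inputs tied together.
def not_from_nand(p):
    return nand_gate(p, p)

print(nand_gate(True, True))   # False
print(xor_gate(True, False))   # True
print(not_from_nand(True))     # False
```

Functional completeness is why real chips are often built from vast arrays of a single gate type: one truth table, repeated billions of times, suffices for all of propositional logic.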

Logic and law

Legal reasoning has a deep structural affinity with conditional logic. Statutory law is often written in the form of conditionals: "If [legal conditions X and Y are satisfied], then [legal consequence Z follows]." Applying law to a case is modus ponens: establish the conditions (P1 and P2), derive the consequence (C). Contesting the application of a law is typically either a validity objection (this law does not have this form) or a soundness objection (condition X or Y is not satisfied in this case) — which maps directly to the A3 framework.

Hypothetical syllogism appears constantly in legal chains of reasoning: if the contract was formed, then consideration was required; if consideration was required, then it must have been provided; therefore, if the contract was formed, consideration must have been provided. Legal precedent reasoning has precisely this structure.

Wittgenstein and the limits of what can be said

Ludwig Wittgenstein — who studied under Russell at Cambridge and whose early philosophy grew directly from the work of Frege and Russell — drew a radical conclusion from the project of formal logic. In his Tractatus Logico-Philosophicus (1921), Wittgenstein argued that formal logic revealed the boundaries of what could be meaningfully said at all. Any proposition that could be expressed in a logically well-formed way was a picture of a possible state of the world. Propositions that could not be so expressed — including ethical, aesthetic, and metaphysical claims — were literally nonsense: not false, but outside the category of truth-apt statements entirely.

His famous conclusion — "Whereof one cannot speak, thereof one must be silent" — is not a counsel of quietism. It is an attempt to use the precision of formal logic to draw a boundary around meaningful discourse. Whether this boundary is rightly drawn is one of the most contested questions of twentieth-century philosophy. The later Wittgenstein effectively repudiated it. But the attempt itself illustrates something important: formal logic, taken seriously, does not leave philosophy's other questions intact. It changes what you think those questions are.

Gödel's incompleteness: a permanent limit

In 1931, Kurt Gödel proved something that stunned the mathematical world: any formal system powerful enough to express basic arithmetic must contain true statements that cannot be proved within that system. This is the first incompleteness theorem. The second states that no consistent formal system can prove its own consistency. Together they establish a permanent and ineliminable limit to formal methods: no matter how powerful a formal system you build, there will always be truths it cannot reach by its own rules.

This does not make formal logic useless — far from it. It establishes with mathematical precision what formal systems can and cannot do. The incompleteness theorems are themselves proved using formal logic, in one of the most dazzling self-referential arguments in the history of ideas. Philosophy of mathematics is still grappling with their implications. For the student of philosophy, the takeaway is this: even the most powerful formal tools have limits, and understanding those limits is part of understanding the tools.

Connecting forward to Article A6

You have now seen two different ways to represent the structure of an argument: the standard form layout from A1 (P1, P2 … ∴ C) and the symbolic notation of propositional logic introduced here. Both are linear — they present an argument as a sequence of statements moving toward a conclusion.

Article A6 introduces a third representation: the argument map. Instead of lines of text or symbols, an argument map is a diagram — a visual representation in which conclusions are connected to the premises that support them, objections are linked to the claims they challenge, and the overall architecture of complex reasoning becomes visible at a glance. For multi-layered philosophical arguments — the kind you encounter in ethics, political philosophy, and epistemology — argument mapping often reveals structure that neither linear presentation nor propositional notation makes immediately apparent. It is a tool that complements everything in this package.

The question to carry with you into Article A6
You can now represent an argument as a sequence of propositions in standard form, or as a string of symbols in propositional notation. But when an argument has many premises, nested sub-arguments, and several layers of objection and reply, can a linear sequence — however precise — really make the structure visible? Is there a better way to see an argument's shape?
Article A6 answers with a diagram. Argument mapping turns the architecture of complex reasoning into a picture — revealing connections, contradictions, and gaps that linear presentation conceals. It is the final tool in Package A's analytical kit.