Most bad arguments do not announce themselves as bad. They arrive dressed in the clothes of good reasoning — confident, structured, emotionally compelling. They look like they are giving you a reason to believe their conclusion. The problem, on inspection, is that they are not. The premises are irrelevant, or the evidence is too thin, or the terms shift halfway through, or a false choice has been smuggled in where many options actually exist.
A fallacy is a recurring pattern of defective reasoning. The word matters. Not every mistake in thinking is a fallacy — a fallacy is a pattern, one that recurs often enough and is recognisable enough to have earned a name. The catalogue of fallacies that follows is the accumulated critical intelligence of two and a half thousand years of philosophy, rhetoric, and logic: a list of the moves that look like argument but aren't.
But before we catalogue them, a crucial distinction.
Formal versus informal fallacies
Not all fallacies are the same kind of thing. There are two fundamentally different ways an argument can go wrong, and they have different names. A formal fallacy is a defect in the argument's logical form: the inference pattern is invalid no matter what the premises happen to say, as in affirming the consequent ("If P then Q; Q; therefore P"). An informal fallacy is a defect in the argument's content: the form may be unobjectionable, but the premises are irrelevant, the evidence insufficient, an assumption unwarranted, or the language ambiguous.
This article is entirely about informal fallacies. They are organised into four categories based on the type of error they make — a classification that connects directly to the evaluative vocabulary of A1–A3:
Relevance fallacies offer premises that are simply irrelevant to the conclusion — they fail the soundness test because the premises, even if true, give no support to the conclusion at all.
Weak induction fallacies offer premises that are relevant but insufficient — the evidence is real but too thin, too biased, or too distorted to make the conclusion probable.
Presumption fallacies sneak unwarranted assumptions into the argument's structure — the conclusion depends on a hidden premise that has not been established and may be false.
Clarity fallacies exploit ambiguity in language — either misrepresenting an opponent's argument or using the same word with two different meanings to make an inference appear to follow when it does not.
With that map in hand, let us examine the twelve most important informal fallacies.
Naming fallacies is easy. The skill philosophy actually demands is different: encountering an argument that feels compelling, pausing, and asking systematically which of the criteria from A1–A3 it fails to meet. The following three cases are drawn from political philosophy, applied ethics, and a real pattern of historical reasoning. Each is analysed using the full spotter's method.
The fallacies in Unpack are not a list to memorise and match. Used that way, fallacy-spotting becomes a parlour game — slapping labels on arguments without doing the philosophical work of showing precisely why each label applies. The point of having names for fallacies is to give you a vocabulary for a structured diagnostic process.
The method below extends the six-step evaluation framework from Article A3 with four targeted fallacy-detection questions. These questions correspond directly to the four categories introduced in the Question tab. Together, they form a complete analytical procedure — the "fallacy spotter's method."
One further caution deserves emphasis: the goal is not to "win" by attaching a label. It is to understand, precisely, why an argument fails — and to be honest about when an argument does not fail. Some slippery slope arguments are legitimate (when the causal chain is established). Some appeals to authority are legitimate (when genuine expert consensus exists within a relevant field). Some generalisations are strong (when the sample is large, diverse, and controlled). The method should make you more precise, not more dismissive.
Philosophy demands that you take arguments seriously — including arguments you disagree with. The principle of charity requires that you identify the best version of an argument before you evaluate it. Fallacy-naming should be the last step in a careful analysis, not a substitute for one.
Fallacies in political discourse
Political debate is arguably the domain in which informal fallacies are most densely and most consequentially deployed. Several structural features of democratic politics make this almost inevitable: politicians must persuade large audiences quickly, emotional engagement drives electoral outcomes more reliably than logical precision, and the media environment rewards simplification over nuance.
The false dilemma is the political fallacy par excellence. "You're either with us or against us," "you either support border security or you support open borders," "you either back economic growth or you care about the environment" — these framings do political work precisely by closing down the space for nuanced positions. Once a binary is accepted, the elimination of one unacceptable option forces acceptance of the other. Recognising the false dilemma means always asking: where are the third, fourth, and fifth options?
The ad hominem is the second great weapon of political discourse. Rather than engaging with a policy argument, political opponents routinely attack the motives, character, or associations of the person making it. This is psychologically effective — it is easier to distrust a source than to refute an argument — but as a refutation of the argument itself it is logically empty. A corrupt politician who proposes a good policy has still proposed a good policy. The corruption is a separate matter to be addressed separately.
Fallacies in scientific communication
Scientific reasoning itself is disciplined to avoid these errors — the controlled experiment is specifically designed to rule out post hoc reasoning, and peer review is designed to catch hasty generalisations. But the communication of science to the public is deeply vulnerable to fallacious interpretation.
Headlines regularly commit the post hoc fallacy: "Coffee drinkers live longer — here's why." The observed correlation (coffee consumption associated with longer life) is reported as though causation has been established, eliding the enormous methodological work required to move from correlation to causal conclusion. Epidemiology routinely produces correlations; establishing causation requires controlled trials, dose-response relationships, plausible mechanisms, and replication across diverse populations.
The appeal to authority becomes particularly consequential when it is invoked to dismiss scientific consensus rather than support a minority view. "Many scientists disagree with the consensus on climate change" — when the number of such scientists is vanishingly small, and their expertise is not in climate science — is an appeal to manufactured authority. Understanding how legitimate expert testimony differs from the appeal to authority fallacy is one of the most important skills for scientific literacy.
A note on charity and critical generosity
The tradition of identifying fallacies stretches back to Aristotle's Sophistical Refutations — a catalogue of the tricks used by sophists to win arguments they should have lost. The impulse to name and expose bad reasoning is as old as philosophy itself, and it is a genuinely important intellectual practice.
But there is a misuse of fallacy-naming that philosophy should resist: using labels as a substitute for engagement. When a fallacy label is applied too quickly — before the argument has been charitably reconstructed, before the specific premise or inference that fails has been identified — it becomes a rhetorical move, not a philosophical one. The slippery slope becomes a thought-terminating cliché rather than a substantive observation that the causal links have not been established. "That's an ad hominem!" becomes a way to refuse to engage with a legitimate point about the arguer's track record.
The principle of charity, introduced in A1, is the corrective. Always reconstruct the argument in its strongest form. Identify exactly which step fails. Name the fallacy after showing why it applies. This sequence — reconstruction, analysis, diagnosis — is philosophy, not rhetoric.
Connecting to Article A5
You have now worked through the informal fallacies — the ways arguments fail because of problems with their content: irrelevant premises, insufficient evidence, unwarranted assumptions, or ambiguous terms. These errors are detectable by careful attention to meaning and logical relationship, without any formal symbolic machinery.
Article A5 asks a different question: can we represent the logical structure of arguments in a purely formal, symbolic language — one where validity can be checked mechanically, without depending on your ability to "see" whether a conclusion follows? This is the project of propositional logic, and it takes you into a different mode of philosophical reasoning altogether: precise, symbolic, and provably complete in its own domain.
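What "checked mechanically" means can be made concrete with a minimal sketch (ours, not drawn from the article series): enumerate every assignment of truth values to the argument's variables, and declare the form valid only if no assignment makes all the premises true while the conclusion is false. The `valid` helper and the two example arguments below are illustrative, not the series' own notation.

```python
from itertools import product

def valid(premises, conclusion, variables):
    """An argument form is valid iff no assignment of truth values
    makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        row = dict(zip(variables, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # found a counterexample row
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: P -> Q, P, therefore Q — valid
print(valid([lambda r: implies(r["P"], r["Q"]), lambda r: r["P"]],
            lambda r: r["Q"], ["P", "Q"]))   # True

# Affirming the consequent: P -> Q, Q, therefore P — a formal fallacy
print(valid([lambda r: implies(r["P"], r["Q"]), lambda r: r["Q"]],
            lambda r: r["P"], ["P", "Q"]))   # False
```

The second check fails on the row where P is false and Q is true: both premises hold there, yet the conclusion does not — exactly the kind of counterexample that no amount of rhetorical skill can argue away.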