Predicate Logic
So propositional logic is great! It allows us to express practically anything.
I have a number of opinions on cheetahs, for example. I think cheetahs have spots (𝑆). I think they are fast (𝐹). And I think cheetahs are cats (𝐶). I could express this all as a compound proposition that I assert is true:
𝑆 ∧ 𝐹 ∧ 𝐶: ⊤
That is, cheetahs having spots and being fast and being cats must each be true for my compound proposition to be true, which I assert it is.
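If a concrete model helps, here’s a minimal Python sketch of that compound proposition (the variable names just mirror the propositions above, and Python’s `and` stands in for ∧):

```python
# Each proposition reduces to a single truth value -- nothing more.
S = True  # "cheetahs have spots"
F = True  # "cheetahs are fast"
C = True  # "cheetahs are cats"

# The compound proposition S ∧ F ∧ C, asserted to be ⊤:
assert S and F and C
```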
But I have a friend who is really into leopards. And they are a little confused about cheetah spots. “How many are there? What color are they? What size?” they ask.
I could set out a number of propositions for my friend carefully specifying each quality of cheetah spots and ∧ing them together. But given their love of leopards, it’s probably easier to establish a relationship. Something like “Cheetah spots are like leopard spots.¹”
How would I do that, though? We know “cheetahs have spots” (𝑆). Let’s say we also know “leopards have spots” (𝐿). So can we just say:
𝑆 ∧ 𝐿: ⊤
Hmm… this doesn’t seem quite right. We’re trying to express some sort of a relationship between cheetah and leopard spots. But it seems like we’ve only made a relationship between the claims of cheetahs and leopards having spots, not the spots themselves. The “∧” in our compound proposition feels like it only cares about the ⊤-ness or ⊥-ness of the propositions on either side of it. It’s not concerned with any of their opaque, internal qualities.
To further illustrate, let’s introduce a new proposition: “Paris is a city in France” (𝑃). And let’s set up the exact same relationship between it and cheetahs having spots:
𝑆 ∧ 𝑃: ⊤
This is, of course, ⊤; cheetahs do have spots and Paris is a city in France, so it must be. But further, it’s the same kind of true as 𝑆 ∧ 𝐿 is. In fact, we could say the relationship within 𝑆 ∧ 𝑃 is exactly the same as the relationship within 𝑆 ∧ 𝐿: the relationship between one proposition and another.
This feels weird because 𝑆 and 𝐿 both contain the words “have spots”. It seems like they should have something in common. And about the only thing 𝑆 and 𝑃 share between them is that they both happen to be ⊤. But as far as ∧ is concerned, that’s all that matters. 𝑆, 𝐿, and 𝑃 are all just propositions. And propositions can only be ⊤ or ⊥. No further details matter.
Maybe we try another connective, then? We could make it an implication (𝑆 → 𝐿) or biconditional (𝑆 ↔ 𝐿)? But all these suffer from the same problem as ∧. They relate propositions which can only be ⊤ or ⊥ and admit nothing of the qualities inside that make them so.
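A quick Python sketch shows how little any connective has to work with: by the time `and` sees them, 𝐿 and 𝑃 have both been reduced to the same bare truth value (names illustrative, as before):

```python
S = True  # "cheetahs have spots"
L = True  # "leopards have spots"
P = True  # "Paris is a city in France"

# To `and`, both compounds are the same computation over the same
# inputs: True and True. Nothing about spots survives.
assert (S and L) == (S and P)  # indistinguishably ⊤
```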
Indeed, the fault here lies not with the connectives but with what they connect. The fault lies in our very definition of “proposition”.
The Limits of Propositions
Here we uncover a frustrating aspect of our formal language as currently defined: the proposition is our language’s atomic unit, and we defined a proposition to be “A statement which can only be ⊤ or ⊥.”
But as our cheetah example illustrates, if we dig deeper into propositions we see they are more than the sum of their ⊤-ness or ⊥-ness. This ⊤-ness or ⊥-ness is actually derived from the combination of two distinct parts: a subject (“cheetah”, “leopard”) and a clause that modifies that subject (“has spots”).
Sometimes we want to relate these parts as opposed to the propositions as a whole. To liken the spots of cheetahs to the spots of leopards, for example, we have to be able to draw some sort of relationship between their shared “has spots” part. But our current definition of proposition tightly couples these parts with their respective subjects. Neither part can escape the confines of the proposition; both are instead ground up, mixed together, and baked into the proposition’s ultimate value of ⊤ or ⊥.
Enter Predicates
As with all strategies for breaking up tightly coupled values, we need to add indirection by extracting independent variables. We know we have the concept of “has spots” and that we want to apply it to different subjects; namely cheetahs and leopards. So let’s extract those subjects out into constants. We’ll use lower-case letters to represent them:
𝑐 = Cheetahs
𝑙 = Leopards
All we’re left with, then, is the “has spots” concept, which we’ll call a “predicate” (in grammar, the predicate is literally the part of a declarative sentence that is not the subject). We’ll denote this with a capital letter just as we did with propositions:
𝑆 = has spots
Now we can apply our predicate to a constant using syntax that looks rather like a function call:
𝑆(𝑐) = Cheetahs have spots
Taken together, a predicate applied to a constant forms our familiar proposition — a statement that must be either ⊤ or ⊥. And as such, all the logical connectives that work on propositions work on predicates applied to constants. Indeed, predicate logic is essentially a superset of propositional logic. So we can finally say something like:
𝑆(𝑐) ∧ 𝑆(𝑙)
This might not look very different from our original 𝑆 ∧ 𝐿, but it has one crucial advantage: we’re no longer using the same spelling for different predicates locked within distinct propositions (“cheetahs have spots” and “leopards have spots”); we’re actually applying the same predicate to each subject. We know that cheetah spots are like leopard spots because we’re literally using the same definition of “has spots” for both.
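The function-call look is more than a surface resemblance. Here’s a hypothetical Python sketch in which constants are plain values and a predicate is a function from a value to a truth value (the set of spotted animals is assumed purely for illustration):

```python
# Constants: the subjects themselves, not claims about them.
c = "cheetahs"
l = "leopards"

# A unary predicate: one subject in, one truth value out.
def S(x):
    return x in {"cheetahs", "leopards"}  # assumed "has spots" set

# S(c) ∧ S(l): the *same* predicate, applied to each constant.
assert S(c) and S(l)
```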
Arity
And there’s nothing limiting the number of constants a predicate can apply over. Let’s say, for 𝐹 = “is fast”, we want to say “cheetahs and leopards are fast”:
𝐹(𝑐) ∧ 𝐹(𝑙)
Awesome. But what if we wanted to say “cheetahs are faster than leopards”? There’s no logical connective for that. Propositions need to be either ⊤ or ⊥, and there’s no way for something to be “more ⊤” or “less ⊤” than something else.
No problem. We define a predicate that can be applied over two constants:
𝑅 = “the first is faster than the second”
Then, to express “cheetahs are faster than leopards”, we can simply write:
𝑅(𝑐,𝑙)
A predicate that applies over a single constant, like our original “has spots” example, is said to be unary. One that applies over two, like 𝑅, above, is called binary. One that can be applied over three constants (imagine 𝐵(𝑎,𝑏,𝑐) meaning “the second is between the first and third”) is called ternary, and so on.
This naming scheme gets complicated quickly. Past three, we usually just talk about a predicate’s “arity”. That is, a predicate that can be applied over five constants (𝐶(𝑎,𝑏,𝑐,𝑑,𝑒) = “the first carried the second between the third and the fourth via the fifth”) is said to have “an arity of five”.
In practice, unary predicates are common, binary predicates less so, and anything more than that downright rare.
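In this functional reading, a predicate’s arity is simply its parameter count. A sketch, using rough top speeds assumed purely for illustration:

```python
# Rough top speeds in km/h, assumed for illustration.
top_speed = {"cheetahs": 110, "leopards": 58}

# A binary predicate (arity two): "the first is faster than the second".
def R(a, b):
    return top_speed[a] > top_speed[b]

assert R("cheetahs", "leopards")      # ⊤: cheetahs are faster
assert not R("leopards", "cheetahs")  # ⊥ the other way around
```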
Relationships and Obvious Commonalities
Finally, apart from the mechanics of being able to draw relations between the independent parts of a proposition, predicate logic’s syntax makes relational fallacies easier to spot. For example, given:
𝑐 = Cheetahs
𝑙 = Leopards
𝑆 = has spots
We can see that 𝑆(𝑐) ∧ 𝑆(𝑙) is making some sort of valid relational claim because 𝑆 is common between our propositions. On the other hand, to revisit an absurd example from earlier, given:
𝑝 = Paris
𝑃 = is a city in France
We could rightly assert 𝑆(𝑐) ∧ 𝑃(𝑝), that is, “cheetahs have spots and Paris is a city in France”. Just as with the propositional example earlier, 𝑆(𝑐) is ⊤ and 𝑃(𝑝) is ⊤, so 𝑆(𝑐) ∧ 𝑃(𝑝) is ⊤. But there’s no actual relationship being asserted here because 𝑆(𝑐) has no parts in common with 𝑃(𝑝). This is easy to overlook when everything is hidden behind monolithic propositions like our earlier 𝑆 ∧ 𝑃. Breaking subjects and predicates out in predicate logic’s syntax, though, makes it obvious.
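In the functional sketch, spotting the fallacy amounts to noticing that no name is shared between the two conjuncts (definitions assumed for illustration, as before):

```python
def S(x):  # "has spots" (assumed set)
    return x in {"cheetahs", "leopards"}

def P(x):  # "is a city in France" (assumed set)
    return x in {"Paris", "Lyon"}

c, p = "cheetahs", "Paris"

# Both conjuncts are ⊤, so the compound is ⊤...
assert S(c) and P(p)
# ...but S(c) and P(p) share no predicate and no constant,
# so no genuine relationship is being claimed.
```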
-
I know. Leopards have rosettes, not spots. ↩︎