Previously we covered the ambiguity of spoken language, how compound propositions are just atomic propositions with spoken words between them, how these two combine to allow ambiguity to leak into our propositions, and how using a formal language instead of a spoken one fixes this.
```javascript
const score = 20;
let years = (4 * score) + 7;
```
Now let’s examine exactly how that transformation took place. Really, there were only three key steps:
- Identify the concepts and their connectors.
- Convert the concepts to constants (`score` most obviously, but also “four” and “seven” to `4` and `7`).
- Replace the connectors with unambiguous operators provided by the language.
These same steps, with some minor tweaks, allow us to convert spoken, compound propositions into the formal language of propositional logic. It goes something like:
- Identify the atomic propositions and their connectors.
- Convert the atomic propositions to constants.
- Replace the connectors with unambiguous “logical connectives” provided by the language.
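These steps mirror the code example above. As a sketch of where we’re headed (the variable names and truth values here are mine, chosen purely for illustration), the same three moves look like this in JavaScript:

```javascript
// "It is rainy and it is wet."
// Step 1: identify the atomic propositions ("it is rainy", "it is wet")
//         and their connector ("and").
// Step 2: convert the atomic propositions to constants.
const rainy = true;
const wet = true;
// Step 3: replace the connector with an unambiguous operator.
const rainyAndWet = rainy && wet;
console.log(rainyAndWet); // true
```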
So to get started with propositional logic, we really only need to know two things: “How do we represent constants?” and “What are the logical connectives provided by the language?”
Step one is converting atomic propositions to constants. These constants are usually represented with a capital letter.¹
It’s worth noting there’s a convention to use the constants 𝑃, 𝑄, and 𝑅 in examples or to refer to unknown/abstract atomic propositions (think of them as the `foo`, `bar`, and `baz` of propositional logic). In practice, though, any letter may be used.
It’s often helpful, for example, to use the first letter of the concept you want to express as its constant. It makes a handy mnemonic to keep otherwise terse-looking formulas reasonably comprehensible. Looking at the compound proposition “It is rainy and it is wet,” we might choose to represent the atomic proposition “it is rainy” with the constant 𝑅 and “it is wet” with the constant 𝑊. That is: “𝑅 and 𝑊”.
But that “and” is starting to stick out like a sore thumb, isn’t it? So let’s take a look at the “logical connectives” provided by propositional logic to chain our propositions together.
There are five connectives we have to worry about:² Negation (¬), Conjunction (∧), Disjunction (∨), Bicondition (↔), and Implication (→).³
The first three of these function exactly like the boolean operators we’re used to from programming, so we’ll tackle them first:
“Negation”, or ¬, is equivalent to “not” or `!` in most programming languages. It’s a prefix operator that inverts the truth value of its operand. So if 𝑃 is ⊤ and 𝑄 is ⊥, then ¬𝑃 is ⊥ and ¬𝑄 is ⊤.
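In JavaScript terms, with `true` standing in for ⊤ and `false` for ⊥ (a quick sketch):

```javascript
const p = true;  // 𝑃 is ⊤
const q = false; // 𝑄 is ⊥
console.log(!p); // false: ¬𝑃 is ⊥
console.log(!q); // true:  ¬𝑄 is ⊤
```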
“Conjunction”, or ∧, is exactly like our logical “and” or `&&` in programming languages. It can be thought of as an infix operator that evaluates to the product of its two operands — that is, it is ⊤ if both its operands are ⊤. Otherwise it is ⊥. To put it another way, 𝑃 ∧ 𝑄 is ⊤ only if 𝑃 is ⊤ and 𝑄 is ⊤. In all other cases, 𝑃 ∧ 𝑄 is ⊥.
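Enumerating all four combinations makes that “product” behavior plain (a sketch; the table-building is just for display):

```javascript
const rows = [];
for (const p of [true, false]) {
  for (const q of [true, false]) {
    rows.push(`${p} ∧ ${q} = ${p && q}`); // ∧ computed via &&
  }
}
console.log(rows.join("\n")); // only the true/true row yields true
```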
“Disjunction”, or ∨, is logic-speak for programming’s “or” (`||`). It’s like an infix operator that sums the truth values of its operands — if either one is ⊤, then it is ⊤. Only if both are ⊥ is it ⊥. So, in contrast to conjunction, 𝑃 ∨ 𝑄 is ⊥ only if both 𝑃 and 𝑄 are ⊥. If 𝑃 is ⊤ or 𝑄 is ⊤, then 𝑃 ∨ 𝑄 is ⊤.
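A couple of quick cases in JavaScript show the contrast with conjunction (a sketch):

```javascript
console.log(true || false);  // true: one ⊤ operand is enough
console.log(false || false); // false: the only case where ∨ is ⊥
```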
The Limits of Intuition
As we can see, our programmer’s intuition carries us quite a ways into propositional logic. With it, we can finally translate our “it is rainy and it is wet” example into concise formal logic: 𝑅 ∧ 𝑊.
We can also express more complicated ideas like “The stars are either visible or it is daytime… or nighttime but overcast.” Assuming 𝑆 for the proposition “stars are visible”, 𝐷 for “daytime”, and 𝑂 for “overcast”, we could say: 𝑆 ∨ (𝐷 ∨ (¬𝐷 ∧ 𝑂)).⁴
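Translating that formula back into code (a sketch; the truth values here describe just one possible evening):

```javascript
const starsVisible = false; // 𝑆
const daytime = false;      // 𝐷
const overcast = true;      // 𝑂
// 𝑆 ∨ (𝐷 ∨ (¬𝐷 ∧ 𝑂))
const proposition = starsVisible || (daytime || (!daytime && overcast));
console.log(proposition); // true: an overcast night
```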
In the case of the last two connectives, though, our programmer brain tends to get in the way. Breaking down the unintuitive so we can build it back up again as intuition… that’s a task for next week.
1. Specifically, a roman italic capital letter. I use the `MATHEMATICAL ITALIC CAPITAL...` unicode characters for this, and the keyboard layout at the bottom of the symbols page has these bound to ⌥I + [letter] for convenience. ↩︎
2. Like in programming, there is also an “exclusive or”, symbolized by ⊕ or ↮. But it’s just not as common in logical contexts as it is in computational ones. And when it does come up, it’s usually clearer to my thinking to say ¬(𝑃 ↔ 𝑄) when indicating non-equivalence. Or (𝑃 ∨ 𝑄) ∧ ¬(𝑃 ∧ 𝑄) in the context of sets. ↩︎
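That last formulation translates directly into boolean operators. A minimal sketch (the `xor` helper name is my own):

```javascript
// "Exclusive or": ⊤ when exactly one operand is ⊤.
// (𝑃 ∨ 𝑄) ∧ ¬(𝑃 ∧ 𝑄), written with && and ||.
const xor = (p, q) => (p || q) && !(p && q);
console.log(xor(true, false)); // true
console.log(xor(true, true));  // false
// For booleans this agrees with the simpler p !== q.
```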
3. See the symbols guide for more details on these and a helpful keyboard layout with shortcuts for typing them. ↩︎
4. Note the prodigious use of parentheses for grouping. As when evaluating code, there is an “order of operations” to disambiguate the precedence of sibling logical connectives. But (presumably in the name of crushing ambiguity) these rules are rarely relied upon. Even though 𝑃 ∧ 𝑄 ∨ 𝑅 is the same as (𝑃 ∧ 𝑄) ∨ 𝑅, the latter is considered best practice. The one exception to this rule is dealing with negation. Most seem to agree it’s obvious enough that it always happens first that it’s okay to write ¬𝑃 ∧ 𝑄 rather than (¬𝑃) ∧ 𝑄… though I have seen it both ways. ↩︎
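For what it’s worth, JavaScript’s `&&` and `||` have the same relative precedence, so the same advice applies in code (a quick sketch; the values are arbitrary):

```javascript
const P = false, Q = true, R = true;
// && binds tighter than ||, so these are the same expression...
console.log(P && Q || R);   // true
console.log((P && Q) || R); // true
// ...but the parenthesized form is easier to read at a glance.
```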