This commit is contained in:
Boris Marinov 2023-10-28 21:05:55 +03:00
parent f8d2ac5bd4
commit 7678990e4c
12 changed files with 189 additions and 91 deletions

View File

@ -1,6 +0,0 @@
<script>
location.href="/category-theory-illustrated/10_functors"
</script>
[Chapter moved, click here](/category-theory-illustrated/10_functors)
---

View File

@ -37,6 +37,8 @@ People have tried to be precise and at the same time down to Earth for centuries
Sets
===
> “A set is a gathering together into a whole of definite, distinct objects of our perception or of our thought—which are called elements of the set.” Georg Cantor
Perhaps unsurprisingly, everything in set theory is defined in terms of sets. A set is a collection of things where the "things" can be anything you want (like individuals, populations, genes, etc.) Consider, for example, these balls.
![Balls](../01_set/elements.svg)
@ -238,67 +240,12 @@ Purely-functional programming languages
We said that while all mathematical functions are also programming functions, the reverse is not true for *most* programming languages. There are some languages that don't permit non-mathematical functions, and for which this equality holds. They are called *purely-functional* programming languages.
A peculiarity in such languages is that they don't directly allow for writing functions (like rendering stuff on screen, I/O, etc.) that perform operations other than returning values.
A peculiarity in such languages is that they don't support functions that perform operations like rendering stuff on screen, doing I/O, etc. (in this context, such operations are called "side effects").
In purely functional programming languages, such operations are *outsourced* to the language's runtime, using a style of programming called *continuation passing style*.
In purely functional programming languages, such operations are *outsourced* to the language's runtime. Instead of writing functions that directly perform a side effect, for example `console.log('Hello')`, we write functions that return a type that represents that side effect (for example, in Haskell side effects are handled by the `IO` type), and the runtime then executes those functions for us.
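As a minimal sketch in Haskell: the function below does not print anything by itself; it merely returns an `IO` value that *describes* the printing, and the runtime carries it out.

```haskell
-- `greet` is a pure value of type `IO ()` that describes a side effect.
greet :: IO ()
greet = putStrLn "Hello"

-- The runtime executes whatever `IO` value `main` evaluates to.
main :: IO ()
main = greet
```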
{% if site.distribution == 'print'%}
We then link all those functions into a whole program, often by using a thing called *continuation passing style*.
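Here is a small illustrative sketch of the idea (the function names are made up for this example): each function, instead of returning its result directly, hands it to a *continuation*, a function that represents "the rest of the program".

```haskell
-- Continuation-passing style: the extra argument `k` is "what to do next".
addCPS :: Int -> Int -> (Int -> r) -> r
addCPS x y k = k (x + y)

squareCPS :: Int -> (Int -> r) -> r
squareCPS x k = k (x * x)

-- (2 + 3)^2, assembled by chaining continuations.
result :: Int
result = addCPS 2 3 (\s -> squareCPS s id)
```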
Interlude --- type theory
===
Type theory and set theory are related in that type theory can be seen as a more restrictive version of set theory. In set theory you have only one kind of object, which is (you guessed it) &mdash; the set, and a set can contain anything, including other sets. In type theory, you generally have two concepts &mdash; types and values.
Russell's paradox
---
In order to understand type theory better, it's useful to see why it was created originally. Its first formulation was developed by Bertrand Russell in response to a paradox in the original formulation of set theory (called *naive* set theory today), arising due to the fact that, unlike types (which can only contain *values*), sets can contain other sets.
In particular, a set can contain itself.
![A set that contains itself](../01_set/set_contains_itself.svg)
Unlike the set above, most sets that we discussed (like the empty set and singleton sets) do not contain themselves.
![Sets that don't contain themselves](../01_set/sets_dont_contain_themselves.svg)
In order to understand Russell's paradox we will try to visualize *the set of all sets that do not contain themselves*. In the original set notation, we can define this set as the one that contains all sets $x$ such that $x$ is not a member of $x$, or $\{x \mid x ∉ x \}$.
![Russel's paradox - option one](../01_set/russells_paradox.svg)
If we look at the definition, we recognize that the set that we just defined does not contain itself and therefore it belongs there as well.
![Russel's paradox - option one](../01_set/russells_paradox_2.svg)
Hmm, something is not quite right with this diagram as well --- because of the new adjustments that we made, our set now *contains itself*. And removing it from the set would just bring us back to the previous situation. So this is Russell's paradox.
From set theory to type theory
---
To avert this and related paradoxes, we have to impose certain restrictions on the ways in which sets can be defined. And while doing so is possible without any significant changes to set theory's core (the new paradox-free "version" of set theory by *Zermelo and Fraenkel* is still in use today), Russell himself took a different route: he developed an entirely new mathematical theory that is *like set theory*, but much more strict and rigid.
The theory of types defines two primitive concepts &mdash; *types and values*, which correspond to *sets and set elements*, but at the same time differ in many respects.
Types
---
Types contain values, so they are like sets in this respect (although this is not true for all formulations of type theory). In fact, every type can be seen as a set of its values. However, unlike sets, which can contain other sets, *a type cannot contain other types*. And so, *not every set is a type* (although the reverse is true). The proper way to think about a type is as a collection of values that *share some common characteristics*.
Values
---
Values are like set elements, in that they constitute the contents of a type. However, while a given object can be an element of many sets, a given value *belongs to only one type* (we can also say that it *is* a given type), i.e., the type of each value is an intrinsic property of the value.
This may seem weird at first. For example, when we create a subtype, as in the type-theoretic version of constructing the set of all balls with warm colors, we end up with two instances of every object that is a member of both types. But it actually makes sense once we get used to it: after all, we can always convert the more general version of a value to the more specific one, using the function that exists between each set and its subset.
Conclusion
---
I won't get into more details, as there are many versions of type theories which are very different from one another, so examining them wouldn't be easy (e.g., if we look into programming languages, each language uses a different type system and different ways to construct subtypes.) Instead, we will return to using set theory, which in contrast has just a few formulations that are very similar to one another.
But the choice of formal system is not important &mdash; all concepts that we are examining here are so essential that they have their counterparts in all set and type theories.
{%endif%}
Functional Composition
===

View File

@ -287,11 +287,11 @@ Note that to really prove that the two formulas are equivalent we have to also d
Intuitionistic logic. The BHK interpretation
===
> [...] logic is life in the human brain; it may accompany life outside the brain but it can never guide it by virtue of its own power. --- L.E.J. Brouwer
Although the classical truth-functional interpretation of logic works and is correct in its own right, it doesn't fit well with the categorical framework that we are using here: it is too "low-level", as it relies on manipulating the values of the propositions. According to it, the operations *and* and *or* are just 2 of the 16 possible binary logical operations, and they are not really connected to each other (but we know that they actually are.)
For these and other reasons (mostly other, probably), in the 20th century a whole new school of logic was founded, called *intuitionistic logic*. If we view classical logic as based on *set theory*, then intuitionistic logic would be based on *category theory* and its related theories. If *classical logic* is based on Plato's theory of forms, then intuitionism began with a philosophical idea originating from Kant and Schopenhauer: the idea that the world as we experience it is largely predetermined by our perceptions of it. As the mathematician L.E.J. Brouwer puts it.
> [...] logic is life in the human brain; it may accompany life outside the brain but it can never guide it by virtue of its own power.
For these and other reasons (mostly other, probably), in the 20th century a whole new school of logic was founded, called *intuitionistic logic*. If we view classical logic as based on *set theory*, then intuitionistic logic would be based on *category theory* and its related theories. If *classical logic* is based on Plato's theory of forms, then intuitionism began with a philosophical idea originating from Kant and Schopenhauer: the idea that the world as we experience it is largely predetermined by our perceptions of it. Thus, without absolute standards for truth, a proof of a proposition becomes something that you *construct*, rather than something you discover.
Classical and intuitionistic logic diverge from one another right from the start: because according to intuitionistic logic we are *constructing* proofs rather than *discovering* them or *unveiling* a universal truth, we *do away with the principle of bivalence*, that is, we have no basis to claim that each statement is necessarily *true or false*. For example, there might be statements that are not provable, not because they are false, but simply because they fall outside of the domain of a given logical system (the twin-prime conjecture is often given as an example of this.)
@ -382,7 +382,6 @@ The Curry-Howard isomorphism
---
Programmers might find the definition of the BHK interpretation interesting for another reason: it is very similar to the definition of a programming language. Propositions are *types*, the *implies* operations are *functions*, *and* operations are composite types (objects), and *or* operations are *sum types* (which are currently not supported in most programming languages, but that's a separate topic.) Finally, a proof of a given proposition is represented by a value of the corresponding type.
![Logic as a programming language](../05_logic/logic_curry.svg)
This similarity is known as the *Curry-Howard isomorphism*.
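To make the correspondence concrete, here is a small sketch in Haskell (a language that does support sum types; the names are illustrative): each definition is simultaneously a program and a proof of the proposition encoded by its type.

```haskell
-- "A and B" corresponds to the pair (product) type,
-- "A or B" to the sum type, and "A implies B" to the function type.
type And a b = (a, b)
type Or  a b = Either a b

-- A proof of "A and B implies A" is a value (function) of that type.
andElimLeft :: And a b -> a
andElimLeft (a, _) = a

-- A proof of "A implies A or B".
orIntroLeft :: a -> Or a b
orIntroLeft = Left
```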
@ -392,9 +391,9 @@ This similarity is known as the *Curry-Howard isomorphism*.
Cartesian closed categories
---
Knowing about the Curry-Howard isomorphism and knowing also that programming languages can be described by category theory may lead us to think that *category theory is part of this isomorphism as well*. And we would be quite correct, this is why it is sometimes known as the Curry-Howard-*Lambek* isomorphism, Lambek being the person who discovered the categorical side.
Knowing about the Curry-Howard isomorphism and knowing also that programming languages can be described by category theory may lead us to think that *category theory is part of this isomorphism as well*. And we would be quite correct --- this is why it is sometimes known as the Curry-Howard-*Lambek* isomorphism, Lambek being the person who discovered the categorical side. So let's examine this isomorphism. Like all other isomorphisms, it comes in two parts:
Let's examine this isomorphism (without being too formal about it). As all other isomorphisms, it comes in two parts. The first part is finding a way to convert a logical system into a category - this would not be hard for us, as sets form a category and the flavor of the BHK interpretation that we saw is based on sets.
The first part is finding a way to convert a *logical system* into a category - this would not be hard for us, as sets form a category and the flavor of the BHK interpretation that we saw is based on sets.
![Logic as a category](../05_logic/category_curry_logic.svg)
@ -404,7 +403,7 @@ The second part involves converting a category into a logical system - this is m
![Logic as a category](../05_logic/logic_curry_category.svg)
Categories that adhere to these criteria are called *cartesian closed categories*. We won't describe them here directly, but instead we would start with a similar but simpler structures that are instance of them and that we already examined - orders.
Categories that adhere to these criteria are called *cartesian closed categories*. Instead of describing them here directly, we will start with similar but simpler structures that we already examined - orders.
Logics as orders
---

View File

@ -1,11 +0,0 @@
Peano
----
Learning mathematics can feel overwhelming because of the huge, even infinite, body of knowledge: how do you even approach such a big task? But it turns out the answer is simple: you start off knowing 0 things, 0 theories. Then, you learn 1 theory. Congrats, you have learned your first theory, and so you now know a total of 1 theory. Then, you learn 1 more theory and you already know a total of 2 theories. Then you learn 1 more theory, and then 1 more, and, given enough time and dedication, you may learn all theories.
This little argument applies not only to learning mathematical theories, but to everything else that is "countable", so to say. This is because it is, in essence, the basis of the mathematical definition of the natural numbers, as famously synthesized in the 19th century by the Italian mathematician Giuseppe Peano:
1. $0$ is a natural number.
2. If $n$ is a natural number, $n+1$ is a natural number.
And then, he gave the following laws.

119
_chapters/06_type.md Normal file
View File

@ -0,0 +1,119 @@
---
layout: default
title: Types
---
In the last "category overview" chapter in the book, we will talk about, in my opinion, the most interesting category of all --- the category of **types**.
This might be disappointing if you expected to learn about as many *new* categories as possible (categories you don't even suspect are categories until the unexpected reveal). We've been giving examples with the category of types in programming languages ever since the first chapter, so we already know that types form a category, and *how* they form it. You have also heard about the Curry-Howard correspondence, which connects types and logic, and which we will be discussing in depth.
However, types are not just about programming languages --- they are an alternative to sets (and categories) as the foundational language of mathematics. What's more, they are as powerful a tool as any of those formalisms.
Sets and types
===
Most books about category theory (and mathematics in general) begin with sets, simply because *sets are simple to understand*, at least when we are operating on the conceptual level that is customary for introductory materials --- when we draw a circle around a few things, everyone knows what we are talking about. So, it is tempting to think that we can use sets to represent everything we want. But, as we will see shortly, this view is somewhat *naive*.
Type theory and set theory are related conceptually.
Russell's paradox
---
Type theory and set theory are also related historically: the first version of type theory was developed by Bertrand Russell in response to a paradox that he found in the first version of set theory (usually called *naive* set theory). This paradox is called "Russell's paradox".
Russell's paradox arises due to the very same characteristic of set theory that makes it so simple - the idea that everything is a set that contains other sets. The core of the paradox is that, unlike a type, a set can contain itself.
![A set that contains itself](../06_type/set_contains_itself.svg)
Unlike the set above, most sets that we discussed (like the empty set and singleton sets) do not contain themselves.
![Sets that don't contain themselves](../06_type/sets_dont_contain_themselves.svg)
In order to understand Russell's paradox we will try to visualize *the set of all sets that do not contain themselves*. In the original set notation, we can define this set as the one that contains all sets $x$ such that $x$ is not a member of $x$, or $\{x \mid x ∉ x \}$.
![Russel's paradox - option one](../06_type/russells_paradox.svg)
If we look at the definition, we recognize that the set that we just defined does not contain itself and therefore it belongs there as well.
![Russel's paradox - option one](../06_type/russells_paradox_2.svg)
Hmm, something is not quite right with this diagram as well --- because of the new adjustments that we made, our set now *contains itself*. And removing it from the set would just bring us back to the previous situation. So, this is Russell's paradox.
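Stated symbolically (this is just the argument above in one line): if we let $R = \{x \mid x ∉ x \}$ and ask whether $R$ contains itself, we get

$$R ∈ R \iff R ∉ R$$

which neither answer can satisfy.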
Resolving the paradox --- ZFC
---
Russell's paradox completely changed the game of set theory, as averting it requires that we impose certain restrictions on the ways in which sets can be defined, i.e., we can no longer allow sets to be generated freely.
And while doing so is possible (the new paradox-free "version" of set theory by Zermelo and Fraenkel is still in use today), the result is not simple.
Russell himself took a different route: he developed an entirely new mathematical theory that is like set theory, but which is much more strict and rigid. We will examine it in the rest of this chapter.
Resolving the paradox --- Types
---
Type theory and set theory are related in that type theory can be seen as a more restrictive version of set theory. In set theory you have only one kind of object, which is (you guessed it) &mdash; the set, and a set can contain anything, including other sets. In type theory, you generally have two concepts &mdash; **types** and **values** (which in a mathematical context we call **terms**).
From set theory to type theory
---
To avert this and related paradoxes, we have to impose certain restrictions on the ways in which sets can be defined. And while doing so is possible without any significant changes to set theory's core (the new paradox-free "version" of set theory by *Zermelo and Fraenkel* is still in use today), Russell himself took a different route: he developed an entirely new mathematical theory that is *like set theory*, but much more strict and rigid.
The theory of types defines two primitive concepts &mdash; *types and values*, which correspond to *sets and set elements*, but at the same time differ in many respects.
Types
---
Types contain values, so they are like sets in this respect (although this is not true for all formulations of type theory). In fact, every type can be seen as a set of its values. However, unlike sets, which can contain other sets, *a type cannot contain other types*. And so, *not every set is a type* (although the reverse is true). The proper way to think about a type is as a collection of values that *share some common characteristics*.
Values
---
Values are like set elements, in that they constitute the contents of a type. However, while a given object can be an element of many sets, a given value *belongs to only one type* (we can also say that it *is* a given type), i.e., the type of each value is an intrinsic property of the value.
This may seem weird at first. For example, when we create a subtype, as in the type-theoretic version of constructing the set of all balls with warm colors, we end up with two instances of every object that is a member of both types. But it actually makes sense once we get used to it: after all, we can always convert the more general version of a value to the more specific one, using the function that exists between each set and its subset.
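As a small illustration (a sketch in Haskell; the type and constructor names are made up for this example), here is how the warm-color balls example might look as two separate types, together with the conversion function between them:

```haskell
-- Every value belongs to exactly one type, so a warm ball and the
-- ball it corresponds to are two distinct values, one of each type.
data Ball     = Red | Orange | Yellow | Green | Blue
data WarmBall = WarmRed | WarmOrange | WarmYellow

-- The conversion function plays the role of the function
-- that exists between a subset and its containing set.
toBall :: WarmBall -> Ball
toBall WarmRed    = Red
toBall WarmOrange = Orange
toBall WarmYellow = Yellow
```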
Conclusion
---
I won't get into more details, as there are many versions of type theories which are very different from one another, so examining them wouldn't be easy (e.g., if we look into programming languages, each language uses a different type system and different ways to construct subtypes.) Instead, we will return to using set theory, which in contrast has just a few formulations that are very similar to one another.
Types as mathematical foundation
====
Types and computation
===
Numbers as types
===
Learning mathematics can feel overwhelming because of the huge, even infinite, body of knowledge: how do you even approach such a big task? But it turns out the answer is simple: you start off knowing 0 things, 0 theories. Then, you learn 1 theory. Congrats, you have learned your first theory, and so you now know a total of 1 theory. Then, you learn 1 more theory and you already know a total of 2 theories. Then you learn 1 more theory, and then 1 more, and, given enough time and dedication, you may learn all theories.
This little argument applies not only to learning mathematical theories, but to everything else that is "countable", so to say. This is because it is, in essence, the basis of the mathematical definition of the natural numbers, as famously synthesized in the 19th century by the Italian mathematician Giuseppe Peano:
1. $0$ is a natural number.
2. If $n$ is a natural number, $n+1$ is a natural number.
And then, he gave the following laws.
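As a sketch of where this is heading (illustrative Haskell; the names are made up), Peano's two rules translate almost literally into a type definition, and operations on numbers become recursive functions on that type:

```haskell
-- Rule 1: Zero is a natural number.
-- Rule 2: if n is a natural number, Succ n is a natural number.
data Nat = Zero | Succ Nat

-- Addition, defined by recursion on the first argument.
add :: Nat -> Nat -> Nat
add Zero     m = m
add (Succ n) m = Succ (add n m)

-- 2 + 1 = 3
three :: Nat
three = add (Succ (Succ Zero)) (Succ Zero)
```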
Types and categories
===
In thinking of a category as a type theory, the objects of a category are regarded as types (or sorts) and the arrows as mappings between the corresponding types. Roughly speaking, a category may be thought of as a type theory shorn of its syntax. In the 1970s Lambek established that, viewed in this way, cartesian closed categories correspond to the typed λ-calculus. Later Seely [1984] proved that locally cartesian closed categories correspond to Martin-Löf, or predicative, type theories. Lambek and Dana Scott independently observed that C-monoids, i.e., categories with products and exponentials and a single, nonterminal object, correspond to the untyped λ-calculus. The analogy between type theories and categories has since led to what Jacobs [1999] terms a "type-theoretic boom", with much input from, and applications to, computer science.
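As a small sketch of what "cartesian closed" buys us in a typed language (illustrative Haskell): products are pair types, exponentials are function types, and currying witnesses the correspondence between morphisms out of a product $A \times B \to C$ and morphisms into an exponential $A \to C^B$.

```haskell
-- Currying: a function out of a product corresponds to a function
-- returning a function (a value of the exponential object).
curry' :: ((a, b) -> c) -> (a -> b -> c)
curry' f a b = f (a, b)

-- And back the other way.
uncurry' :: (a -> b -> c) -> ((a, b) -> c)
uncurry' f (a, b) = f a b
```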

View File

@ -12,12 +12,6 @@ Sets
[On the cruelty of really teaching computer science](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EWD1036.html)
Numbers
---
[Number theory from first principles](https://explained-from-first-principles.com/number-theory/)
[The natural numbers game](https://www.ma.imperial.ac.uk/~buzzard/xena/natural_number_game/)
Orders
---
@ -28,6 +22,16 @@ Logic
[The HOTT game](https://homotopytypetheory.org/2021/12/01/the-hott-game/)
Types
---
[Sets types categories](../references/sets_types_categories.pdf)
[Number theory from first principles](https://explained-from-first-principles.com/number-theory/)
[The natural numbers game](https://www.ma.imperial.ac.uk/~buzzard/xena/natural_number_game/)
Functors
---
@ -46,7 +50,6 @@ Functors
etc
---
[The Principles of Mathematics](https://people.umass.edu/klement/pom/)

View File

@ -71,7 +71,7 @@ window.MathJax = {
<h1>Created by Jencel Panic</h1>
<p><a target="_blank" href="/">personal blog</a></p>
<p><a href="mailto:marinovboris@protonmail.com">email</a></p>
<p><a target="_blank" href="https://mathstodon.xyz/@abuseofnotation">mastodon</a></p>
<p><a rel="me" target="_blank" href="https://mathstodon.xyz/@abuseofnotation">mastodon</a></p>
<p><a target="_blank" href="https://twitter.com/AlexanderKatt/">twitter</a></p>
<p><a target="_blank" href="https://github.com/abuseofnotation/category-theory-illustrated">Github</a></p>

BIN
book.epub Normal file

Binary file not shown.

15
build
View File

@ -1 +1,14 @@
sudo apt-get install com.github.babluboy.bookworm
pandoc --from markdown --to epub3 \
epub/title.txt \
_chapters/00_about.md \
_chapters/01_set.md \
_chapters/02_category.md \
_chapters/03_monoid.md \
_chapters/04_order.md \
_chapters/05_logic.md \
_chapters/10_functors.md \
--output book.epub \
--toc \
--epub-cover-image=cover_print.png \
--resource-path=_chapters/02_category \
--css=epub/epub-style.css

28
epub/epub-style.css Normal file
View File

@ -0,0 +1,28 @@
/* This defines styles and classes used in the book */
body { margin: 5%; text-align: justify; font-size: medium; }
code { font-family: monospace; }
h1 { text-align: left; }
h2 { text-align: left; }
h3 { text-align: left; }
h4 { text-align: left; }
h5 { text-align: left; }
h6 { text-align: left; }
h1.title { }
h2.author { }
h3.date { }
nav#toc ol,
nav#landmarks ol { padding: 0; margin-left: 1em; }
nav#toc ol li,
nav#landmarks ol li { list-style-type: none; margin: 0; padding: 0; }
a.footnote-ref { vertical-align: super; }
em, em em em, em em em em em { font-style: italic;}
em em, em em em em { font-style: normal; }
code{ white-space: pre-wrap; }
span.smallcaps{ font-variant: small-caps; }
span.underline{ text-decoration: underline; }
q { quotes: "“" "”" "" ""; }
div.column{ display: inline-block; vertical-align: top; width: 50%; }
div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
img {
width: 100%;
}

6
epub/title.txt Normal file
View File

@ -0,0 +1,6 @@
---
title: Category Theory Illustrated
author: Jencel Panic
rights: Creative Commons Non-Commercial Share Alike 3.0
language: en-US
...

Binary file not shown.