Merge branch 'master' of github.com:boris-marinov/category-theory-illustrated

Boris Marinov 2023-09-14 10:45:13 +03:00
commit 8c7c57ac2d
10 changed files with 43 additions and 43 deletions


@ -94,7 +94,7 @@ Although I am not an expert in special relativity, I suspect that the way this c
>
> Engineer 2: Just adjust it by X and see if it works. Oh, and tell that to some physicist. They might find it interesting.
In other words, we can solve problems without any advanced math, or with no math at all, as evidenced by the fact that the Egyptians were able to build the pyramids without even knowing Euclidian geometry. And with that I am not claiming that math is so insignificant, that it is not even good enough to serve as a tool for building stuff. Quite the contrary, I think that math is much more than just a simple tool. Thinking itself is mathematical. So going through any math textbook (and of course especially this one) would help you in ways that are much more vital than finding solutions to "complex" problems.
In other words, we can solve problems without any advanced math, or with no math at all, as evidenced by the fact that the Egyptians were able to build the pyramids without even knowing Euclidean geometry. And by that I am not claiming that math is so insignificant that it is not even good enough to serve as a tool for building stuff. Quite the contrary, I think that math is much more than just a simple tool. Thinking itself is mathematical. So going through any math textbook (and of course especially this one) would help you in ways that are much more vital than finding solutions to "complex" problems.
And so "Who is this book for" is not to be read as who should, but who *can* read it. Then the answer is "anyone with some time and dedication to learn category theory".


@ -31,7 +31,7 @@ You already see how abstract theories may be useful. Because they are so simple,
<!-- comic - brain on category theory -->
<!--
People have tried to be precise and at the same time down to Earth for centuries, and only recently discovered that "precise and down to Earth" is an oxymoron. Let's take Euclidian geometry as an example. Yes, Euclidian geometry is precise, because it is valid for all sets of objects, called ("point", "line", "angle", "circle", etc.), which have relationships, as defined by the five famous axioms. Yes, geometry does, in many instances, describe the natural world, because there are many sets of objects which have these relations. However, its "precise" part and its "down to Earth" part have nothing to do with each other. We can, for example, define a point as any stain on the floor of your room and a line as a piece of duct tape, put on the same floor. That will be a completely valid application of the Euclidian laws, albeit not very useful one. Or we can try to use geometry to reason about points on the surface of the Earth, which is a very useful application of geometry, however not of Euclidian geometry, because Euclidian geometry only describes points on a flat plane, and the Earth is not flat. You can argue that these are actually two separate theories there, which just happen to be perceived as one. You have the axioms, or the postulates on one hand, which are not useful for anything on their own, and you have applications in science and engineering which are somewhat based on them, but not quite.
People have tried to be precise and at the same time down to Earth for centuries, and only recently discovered that "precise and down to Earth" is an oxymoron. Let's take Euclidean geometry as an example. Yes, Euclidean geometry is precise, because it is valid for all sets of objects (called "point", "line", "angle", "circle", etc.) which have relationships, as defined by the five famous axioms. Yes, geometry does, in many instances, describe the natural world, because there are many sets of objects which have these relations. However, its "precise" part and its "down to Earth" part have nothing to do with each other. We can, for example, define a point as any stain on the floor of your room and a line as a piece of duct tape, put on the same floor. That will be a completely valid application of the Euclidean laws, albeit not a very useful one. Or we can try to use geometry to reason about points on the surface of the Earth, which is a very useful application of geometry, however not of Euclidean geometry, because Euclidean geometry only describes points on a flat plane, and the Earth is not flat. You can argue that these are actually two separate theories there, which just happen to be perceived as one. You have the axioms, or the postulates on one hand, which are not useful for anything on their own, and you have applications in science and engineering which are somewhat based on them, but not quite.
-->
Sets


@ -34,15 +34,15 @@ The concept of *Cartesian product* was first defined by the mathematician and ph
Most people know how Cartesian coordinate systems work, but an equally interesting question, which few people think about, is how we can define them using sets and functions.
A Cartesian coordinate system consists of two perpendicular lines, situated on an *Euclidian plane* and some kind of mapping that resembles a function, connecting any point in these two lines to a number, representing the distance between the point that is being mapped and the lines' point of overlap (which is mapped to the number $0$).
A Cartesian coordinate system consists of two perpendicular lines, situated on a *Euclidean plane*, and some kind of mapping that resembles a function, connecting any point on these two lines to a number, representing the distance between the point that is being mapped and the lines' point of overlap (which is mapped to the number $0$).
![Cartesian coordinates](../02_category/coordinates_x_y.svg)
Using this construct (as well as the concept of a Cartesian product), we can describe not only the points on the lines, but any point on the Euclidian plane. We do that by measuring the distance between the point and those two lines.
Using this construct (as well as the concept of a Cartesian product), we can describe not only the points on the lines, but any point on the Euclidean plane. We do that by measuring the distance between the point and those two lines.
![Cartesian coordinates](../02_category/coordinates.svg)
And since the point is the main primitive of Euclidian geometry, the coordinate system allows us to also describe all kinds of geometric figures such as this triangle (which is described using products of products).
And since the point is the main primitive of Euclidean geometry, the coordinate system allows us to also describe all kinds of geometric figures such as this triangle (which is described using products of products).
![Cartesian coordinates](../02_category/coordinates_triangle.svg)
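For readers who like to see this in code, here is a small sketch (my own, not part of the book or of this commit) of a point as a pair of coordinates and a triangle as a product of such pairs:

```js
// A point on the plane as a pair of coordinates (a Cartesian product of two numbers),
// and a triangle as a product of three such points ("a product of products").
const point = (x, y) => ({ x, y });
const triangle = [point(0, 0), point(4, 0), point(0, 3)];
```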
@ -289,7 +289,7 @@ Now, as we came up with some definition of set *element*, using just functions,
However, our diagram is not yet fully external, as it depends on the idea of the singleton set, i.e. the set with one *element*. Furthermore, this makes the whole definition circular, as we have to already have the concept of an element in order to define the concept of a one-element set.
To avoid these dificulties, we devise a way to define the singleton set, using just functions. We do it in the same way that we did for products and sums - by using a unique property that the singleton set has. In particular, there is exactly one function from any other set to the singleton set i.e. if $1$ is the singleton set, then we have $\forall X \exists! X \to 1$.
To avoid these difficulties, we devise a way to define the singleton set, using just functions. We do it in the same way that we did for products and sums - by using a unique property that the singleton set has. In particular, there is exactly one function from any other set to the singleton set i.e. if $1$ is the singleton set, then we have $\forall X \exists! X \to 1$.
![Terminal object](../02_category/terminal_object_internal.svg)
@ -301,7 +301,7 @@ And because there is no other set, other than the singleton set that has this pr
![Terminal object](../02_category/terminal_object.svg)
With this, we aquire a fully-external definition (up to an isomorphism) of the singleton set, and thus a definition of a set element - the elements of set are just the functions from the singleton set to that set.
With this, we acquire a fully-external definition (up to an isomorphism) of the singleton set, and thus a definition of a set element - the elements of a set are just the functions from the singleton set to that set.
![Functions from the singleton set](../02_category/elements_external.svg)
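A rough programming analogy (my own sketch, in the spirit of the book's other JavaScript examples): an element of a set corresponds to a function that takes the single, uninteresting value of the one-element set and returns that element.

```js
// The singleton set has one uninteresting value; a function from it just picks out
// one element of the target set.
const pickThree = () => 3; // a "function from 1 to the numbers", i.e. the element 3
pickThree(); // 3
```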
@ -314,7 +314,7 @@ Note that from this property it follows that the singleton set has exactly one e
Defining the empty set using functions
---
The empty set is the set that has no elements, but how would we say this without refering to elements?
The empty set is the set that has no elements, but how would we say this without referring to elements?
We said that there exists a unique function that goes *from* the empty set *to* any other set. But the reverse is also true: the empty set is the only set such that there exists a function from it to any other set.
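Mirroring the arrow-only notation used above for the singleton set, and writing $0$ for the empty set, this property can be stated as $\forall X \exists! 0 \to X$ (for every set $X$ there is exactly one function from the empty set to it).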
@ -527,7 +527,7 @@ Associativity and reductionism
Associativity --- what does it mean and why is it there? In order to tackle this question, we must first talk about another concept --- the concept of *reductionism*:
Reductionism is the idea that the behaviour of some more complex phenomenon can be understood in terms of a number of *simpler* and more fundamental phenomena, or in other words, the idea that things keep getting simpler and simpler as they get "smaller" (or when they are viewed at a lower level), like for example, the behavior of matter can be understood through the understanding the behaviours of its costituents i.e. atoms. Whether the reductionist view is *universally valid*, i.e. whether it is possible to expain everything with a simpler things (and devise a *theory of everything* that reduces the whole universe to a few very simple laws) is a question that we can argue about until that universe's inevitable collapse. But, what is certain is that reductionism underpins all our understanding, especially when it comes to science and mathematics --- each scientific discipline has a set of fundaments using which it tries to explain a given set of more complex phenomena, e.g. particle physics tries to explain the behaviour of atoms in terms of a given set of elementary particles, chemistry tries to explain the behaviour of various chemical substances in terms of a the chemical elements that they are composed of etc. A behaviour that cannot be reduced to the fundamentals of a given scientific discipline is simply outside of the scope of this discipline (and so a new discipline has to be created to tackle it). So, if this principle is so important, it would be benefitial to be able to formalize it, and this is what we will try to do now.
Reductionism is the idea that the behaviour of some more complex phenomenon can be understood in terms of a number of *simpler* and more fundamental phenomena, or in other words, the idea that things keep getting simpler and simpler as they get "smaller" (or when they are viewed at a lower level), like for example, the behavior of matter can be understood through understanding the behaviours of its constituents i.e. atoms. Whether the reductionist view is *universally valid*, i.e. whether it is possible to explain everything with simpler things (and devise a *theory of everything* that reduces the whole universe to a few very simple laws) is a question that we can argue about until that universe's inevitable collapse. But, what is certain is that reductionism underpins all our understanding, especially when it comes to science and mathematics --- each scientific discipline has a set of fundaments using which it tries to explain a given set of more complex phenomena, e.g. particle physics tries to explain the behaviour of atoms in terms of a given set of elementary particles, chemistry tries to explain the behaviour of various chemical substances in terms of the chemical elements that they are composed of, etc. A behaviour that cannot be reduced to the fundamentals of a given scientific discipline is simply outside of the scope of this discipline (and so a new discipline has to be created to tackle it). So, if this principle is so important, it would be beneficial to be able to formalize it, and this is what we will try to do now.
Commutativity
---
@ -550,7 +550,7 @@ Or simply
$A \circ B = B \circ A$
Incidentally this is the definition of a mathematicall law called *commutativity*.
Incidentally this is the definition of a mathematical law called *commutativity*.
**Task:** if our objects are sets, which set operation can represent the sum?


@ -311,7 +311,7 @@ In both cases the monoid would be cyclic.
{%endif%}
Dihateral groups
Dihedral groups
===
Now, let's finally examine a non-commutative group --- the group of rotations *and reflections* of a given geometrical figure. It is the same as the last one, but here besides the rotation action that we already saw (and its composite actions), we have the action of flipping the figure vertically, an operation which results in its mirror image:
@ -493,7 +493,7 @@ The intuition behind this representation from a category-theoretic standpoint is
|---| --- | --- |
|Associativity| X | X | X |
|Identity| X | X | X |
|Invertability | | | X |
|Invertibility | | | X |
|Closure | | X | X |
When we view a monoid as a category, this law says that all morphisms in the category should be from one object to itself - a monoid, any monoid, can be seen as a *category with one object*.
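To make the monoid laws from the table above concrete, here is a small sketch (my own example, not from the book): strings under concatenation form a monoid, with the empty string as the identity.

```js
// Strings under concatenation: closed (the result is again a string), associative,
// and with the empty string acting as the identity element.
const stringMonoid = {
  identity: "",
  combine: (a, b) => a + b,
};
stringMonoid.combine("ab", stringMonoid.identity); // "ab"
stringMonoid.combine(stringMonoid.combine("a", "b"), "c") ===
  stringMonoid.combine("a", stringMonoid.combine("b", "c")); // true (associativity)
```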


@ -56,7 +56,7 @@ Let's get the most boring law out of the way --- each object has to be bigger or
![Reflexivity](../04_order/reflexivity.svg)
Thre is no special reason for this law to exist, except that the "base case" should be covered somehow.
There is no special reason for this law to exist, except that the "base case" should be covered somehow.
We can formulate it the opposite way too and say that each object should *not* have the relationship to itself, in which case we would have a relation that resembles *bigger than*, as opposed to *bigger or equal to* and a slightly different type of order, sometimes called a *strict* order.
@ -270,7 +270,7 @@ We mentioned order isomorphisms several times already so this is about time to e
![Divides poset](../04_order/divides_poset_isomorphism.svg)
> An order isomorphism is essentially an isomorphism between the orders' underlying sets (invertable function). However, besides their underlying sets, orders also have the arrows that connect them, so there is one more condition: in order for an invertable function to constitute an order isomorphism it has to *respect those arrows*, in other words it should be *order preserving*. More specifically, applying this function (let's call it $F$) to any two elements in one set ($a$ and $b$) should result in two elements that have the same corresponding order in the other set (so $a ≤ b$ if and only if $F(a) ≤ F(b)$).
> An order isomorphism is essentially an isomorphism between the orders' underlying sets (invertible function). However, besides their underlying sets, orders also have the arrows that connect them, so there is one more condition: in order for an invertible function to constitute an order isomorphism it has to *respect those arrows*, in other words it should be *order preserving*. More specifically, applying this function (let's call it $F$) to any two elements in one set ($a$ and $b$) should result in two elements that have the same corresponding order in the other set (so $a ≤ b$ if and only if $F(a) ≤ F(b)$).
Birkhoff's representation theorem
---
@ -325,7 +325,7 @@ The difference between the two is small but crucial: in a tree, each element ca
A good intuition for the difference between the two is that a semilattice is capable of representing much more general relations, so for example, the mother-child relation forms a tree (a mother can have multiple children, but a child can have *only one* mother), but the "older sibling" relation forms a lattice, as a child can have multiple older siblings and vice versa.
Why am I speaking about trees? It's because people tend to use them for modelling all kinds of phenomena and to imagine everything a a tree. The tree is the structure that all of us undestand, that comes at us naturally, without even realizing that we are using a structure --- most human-made hierarchies are modelled as trees. A typical organization of people are modelled as trees - you have one person at the top, a couple of people who report to them, then even more people that report to this couple of people.
Why am I speaking about trees? It's because people tend to use them for modelling all kinds of phenomena and to imagine everything as a tree. The tree is the structure that all of us understand, that comes to us naturally, without even realizing that we are using a structure --- most human-made hierarchies are modelled as trees. A typical organization of people is modelled as a tree - you have one person at the top, a couple of people who report to them, then even more people that report to this couple of people.
![Tree](../04_order/tree-organization.svg)
@ -337,14 +337,14 @@ The implications of the tendency to use trees, as opposed to lattices, to model
> In simplicity of structure the tree is comparable to the compulsive desire for neatness and order that insists the candlesticks on a mantelpiece be perfectly straight and perfectly symmetrical about the center. The semilattice, by comparison, is the structure of a complex fabric; it is the structure of living things, of great paintings and symphonies.
In general, it seems that hierachies that are specifically designed by *people*, such as cities tend to come up as trees, whereas hierarchies that are natural, such as the hierarchy of colors, tend to come be lattices.
In general, it seems that hierarchies that are specifically designed by *people*, such as cities, tend to come up as trees, whereas hierarchies that are natural, such as the hierarchy of colors, tend to be lattices.
{%endif%}
Interlude: Formal concept analysis
===
In the previous section we (along with Christopher Alexander) argued that lattice-based hierarchies are "natural", that is, they arize in nature. Now we will see a way to uncover such hierarchies given a set of objects that share some attributes. This is an overview of a mathematical method, called *formal context analysis*.
In the previous section we (along with Christopher Alexander) argued that lattice-based hierarchies are "natural", that is, they arise in nature. Now we will see a way to uncover such hierarchies given a set of objects that share some attributes. This is an overview of a mathematical method, called *formal concept analysis*.
The data structure that we will be analysing, called a *formal context*, consists of 3 sets. Firstly, the set containing all *objects* that we will be analysing (denoted as $G$).
@ -421,15 +421,15 @@ In short, for every preorder, we can define the *partial order of the equivalenc
Maps as preorders
---
We use maps to get around all the time, often without thinking about the fact that that they are actually diagrams. More specifically, some of them are preorders --- the objects represent cities or intercections, and the relations represent the roads.
We use maps to get around all the time, often without thinking about the fact that they are actually diagrams. More specifically, some of them are preorders --- the objects represent cities or intersections, and the relations represent the roads.
![A map as a preorder](../04_order/preorder_map.svg)
Reflexivity reflects the fact that if you have a route allowing you to get from point $a$ to point $b$ and one that allows you to go from $b$ to $c$, then you can go from $a$ to $c$ as well. Two-way roads may be represented by two arrows that form an isomorphism between objects. Objects that are such that you can always get from one object to the other form equivalence classes (ideally all intercections would be in one equivalence class, else you would have places from which you would not be able to go back from).
Transitivity reflects the fact that if you have a route allowing you to get from point $a$ to point $b$ and one that allows you to go from $b$ to $c$, then you can go from $a$ to $c$ as well. Two-way roads may be represented by two arrows that form an isomorphism between objects. Objects that are such that you can always get from one object to the other form equivalence classes (ideally all intersections would be in one equivalence class, else you would have places from which you would not be able to go back).
![preorder](../04_order/preorder_map_equivalence.svg)
However, maps that contain more than one road (and even more than one *route*) connecting two intercections, cannot be represented using preorders. For that we would need categories (don't worry, we will get there).
However, maps that contain more than one road (and even more than one *route*) connecting two intersections, cannot be represented using preorders. For that we would need categories (don't worry, we will get there).
State machines as preorders
---
@ -442,7 +442,7 @@ A specification of a finite state machine consists of a set of states that the m
But as we saw, a finite state machine is similar to a preorder with a greatest and least object, in which the relations between the objects are represented by functions.
Finite state machines are used in organization planning e.g. imagine a process where a given item gets manifactured, gets checked by a quality control person, who, if they find some defficiencies, pass it to the necessary repairing departments and then they check it again and send it for shipping. This process can be modelled by the above diagram.
Finite state machines are used in organization planning e.g. imagine a process where a given item gets manufactured, gets checked by a quality control person, who, if they find some deficiencies, passes it to the necessary repair departments, and then they check it again and send it for shipping. This process can be modelled by the above diagram.
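As a rough illustration of such a process (the state and input names below are my own, not taken from the book's diagram), the quality-control workflow can be written down as a plain transition table:

```js
// States and inputs for the manufacturing / quality-control example; each state maps
// the possible inputs to the next state. Unknown inputs leave the state unchanged.
const transitions = {
  manufactured: { inspect: "checked" },
  checked:      { pass: "shipping", fail: "repair" },
  repair:       { recheck: "checked" },
  shipping:     {},
};
const step = (state, input) => transitions[state][input] ?? state;
step("checked", "fail"); // "repair"
```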
{%endif%}
@ -456,16 +456,16 @@ https://www.cs.rochester.edu/u/nelson/courses/csc_173/fa/fa.html
|--- | --- | --- |
|| X | X | X |
|Identity| X | X | X |
|Invertability | | | X |
|Invertibility | | | X |
|Closure | | X | X |
Or imagine a computational alghorithm for parsing input which iterates a string of characters and converts them to some other objects until all of the input is parsed.
Or imagine a computational algorithm for parsing input which iterates a string of characters and converts them to some other objects until all of the input is parsed.
TODO
Turing machines
https://www.i2cell.science/how-a-turing-machine-works/
---
State machines are, however not Turing-complete, that is, they cannot encode any alghorithm.
State machines are, however, not Turing-complete, that is, they cannot encode every algorithm.
|Current State | Input | Next State | Write | Move |
|--- | --- | --- | --- | --- |


@ -282,14 +282,14 @@ Here is one way to do it. The formulas that are used at each step are specified
![Hilbert proof](../05_logic/hilbert_proof.svg)
Note that to really prove that the two formulas are equivalen we have to also do it the other way around (start with ($¬p ∨ q$) and ($p → q$)).
Note that to really prove that the two formulas are equivalent we have to also do it the other way around (start with ($¬p ∨ q$) and ($p → q$)).
Intuitionistic logic. The BHK interpretation
===
Although the classical truth-functional interpretation of logic works and is correct in its own right, it doesn't fit well with the categorical framework that we are using here: it is too "low-level", it relies on manipulating the values of the propositions. According to it, the operations *and* and *or* are just 2 of the 16 possible binary logical operations and they are not really connected to each other (but we know that they actually are).
For these and other reasons (mostly other, probably), in the 20th century a whole new school of logic was founded, called *intuitionistic logic*. If we view classical logic as based on *set theory*, then intuitionistic logic would be based on *category theory* and its related theories. If *classical logic* is based on Plato's theory of forms, then intuinism began with a philosophical idea originating from Kant and Schopenhauer: the idea that the world as we experience it is largely predetermined of out perceptions of it. As the mathematician L.E.J. Brouwer puts it.
For these and other reasons (mostly other, probably), in the 20th century a whole new school of logic was founded, called *intuitionistic logic*. If we view classical logic as based on *set theory*, then intuitionistic logic would be based on *category theory* and its related theories. If *classical logic* is based on Plato's theory of forms, then intuitionism began with a philosophical idea originating from Kant and Schopenhauer: the idea that the world as we experience it is largely predetermined by our perceptions of it. As the mathematician L.E.J. Brouwer puts it:
> [...] logic is life in the human brain; it may accompany life outside the brain but it can never guide it by virtue of its own power.
@ -453,7 +453,7 @@ The *negation* operation
In order for a distributive lattice to represent a logical system, it has to also have objects that correspond to the values $True$ and $False$. But to mandate that these objects exist, we must first find a way to specify what they are in order/category-theoretic terms.
A well-known result in logic, called *the principle of explosion*, states that if we have a proof of $False$ (or if "$False$ is true" if we use the terminology of classical logic), then any and every other statement can be proven. And we also know that no true statement implies $False$ (in fact in intuinistic logic this is the definition of a true statement). Based on these criteria we know that the $False$ object would look like this when compared to other objects:
A well-known result in logic, called *the principle of explosion*, states that if we have a proof of $False$ (or if "$False$ is true" if we use the terminology of classical logic), then any and every other statement can be proven. And we also know that no true statement implies $False$ (in fact in intuitionistic logic this is the definition of a true statement). Based on these criteria we know that the $False$ object would look like this when compared to other objects:
![False, represented as a Hasse diagram](../05_logic/lattice_false.svg)
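Stated in the order-theoretic language used here: for every proposition $A$ there is an arrow $False \to A$, and no proposition other than $False$ itself sits below $False$, which makes $False$ play the same role among propositions that the empty set plays among sets.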
@ -516,7 +516,7 @@ Translated to logical language, says that if $B → A$, then the proof of $(A
Note that this definition does not follow the one from the truth tables exactly. This is because this definition is valid specifically for intuitionistic logic. For classical logic, the definition of $(A → B)$ is simpler - it is just equivalent to ($\neg A ∨ B$).
By the way, the law of distributivity follows from this criteria, so the only criteria that are left for an lattice to follow the laws of intuinistic logic is for it to be *bounded* i.e. to have greatest and least objects ($True$ and $False$) and to have a function object as described above. Lattices that follow these criteria are called *Heyting algrebras*.
By the way, the law of distributivity follows from this criterion, so the only criteria that are left for a lattice to follow the laws of intuitionistic logic are for it to be *bounded*, i.e. to have greatest and least objects ($True$ and $False$), and to have a function object as described above. Lattices that follow these criteria are called *Heyting algebras*.
And for a lattice to follow the laws of classical logic it has to be *bounded* and *distributive* and to be *complemented*, which is to say that each proposition $A$ should be complemented with a unique proposition $\neg A$ (such that $A ∨ \neg A = 1$ and $A ∧ \neg A = 0$). These lattices are called *boolean algebras*.


@ -7,13 +7,13 @@ Adjunctions
In this chapter we will continue with this *leitmotif* that we developed in the previous two chapters - to begin each of them by introducing a new concept of equality between categories (and furthermore, for each new type of equality to be more relaxed than the previous one).
We started the chapter about functors by reviewing *categorical isomorphisms*, which are invertable functions between categories.
We started the chapter about functors by reviewing *categorical isomorphisms*, which are invertible functions between categories.
Then in the chapter on *natural transformations* we saw categories that are equivalent up to an isomorphism.
And now we will relax the condition even more and will review a relationship that is not exactly an equality, but it is not non-equality either. It is not two-way, but at the same time it is not exactly one-way either. A relationship called *adjunction*.
As you can see, I am not very good at explaining, so I got some examples alligned. But before we proceed with them, we will go through something else.
As you can see, I am not very good at explaining, so I got some examples aligned. But before we proceed with them, we will go through something else.
{% if site.distribution == 'print' %}
@ -65,7 +65,7 @@ Now let's review the functor that has certain relation to the forgetful functor,
![Free functors](free_functors.svg)
Saying "going the other way around" is actually not entirely accurate, as we cannot literary reverse the mapping from the forgetful functor. This is so, simply due to the fact that given one simple structure (such as a set) there can be more than one richer structures that correspond to it (e.g. the set of natural numbers is the underlying set of both the monoid of natural numbers under addition and the monoid of natural numbers under mutliplication).
Saying "going the other way around" is actually not entirely accurate, as we cannot literary reverse the mapping from the forgetful functor. This is so, simply due to the fact that given one simple structure (such as a set) there can be more than one richer structures that correspond to it (e.g. the set of natural numbers is the underlying set of both the monoid of natural numbers under addition and the monoid of natural numbers under multiplication).
But, although we cannot create a functor that is the reverse of the forgetful functor, there is one functor that still has some interesting connection to it - this functor is called the *free functor* for a given category. It works by connecting each object from the simpler category to the *free object* corresponding to it. In our case, the case of monoids, it is the free monoid generated by a given set.
@ -84,7 +84,7 @@ And a bunch of rules or equations describing how sequences of these generators c
![Rule of the monoid of rotations](rule_rotations.svg)
Here the rules for a given set of generators can be arbitrary, so the free monoid is the monoid that has no such rules. As a result, the free monoid of a given set is the monoid of all possible (endless) sequences of elements of a that set (which is taken as the monoid's set of generators.
Here the rules for a given set of generators can be arbitrary, so the free monoid is the monoid that has no such rules. As a result, the free monoid of a given set is the monoid of all possible (endless) sequences of elements of that set (which is taken as the monoid's set of generators).
If you think about this definition, you will realize that the free monoid is actually just the *list datatype* that we are familiar with from programming. And the free functor converting sets to monoids is actually the list functor that we saw in one of the previous sections.
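A quick sketch of that last point (my own code, in the spirit of the book's JavaScript examples): the free monoid on a set is just lists of its elements, with concatenation as the operation and the empty list as the identity.

```js
// The free monoid over some set of generators, represented as arrays:
// no extra rules hold beyond associativity and identity.
const freeMonoid = {
  identity: [],
  combine: (xs, ys) => [...xs, ...ys],
};
const generator = (x) => [x]; // embed a generator as a one-element sequence
freeMonoid.combine(generator("a"), generator("b")); // ["a", "b"]
```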
@ -114,7 +114,7 @@ Step two: there exist a morphism from any set to the underlying set of any monoi
![Adjunction](adjunction_2.svg)
Finally, there is a isomorphism between these two sets of functions, which in turn translates to a relationship between the free and forgetful functor that looks like this (the free fuctor is in green and the forgetful one is in red).
Finally, there is an isomorphism between these two sets of functions, which in turn translates to a relationship between the free and forgetful functors that looks like this (the free functor is in green and the forgetful one is in red).
![Adjunction](adjunction_3.svg)
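In the usual notation (my own addition, writing $F$ for the free functor and $U$ for the forgetful one), this isomorphism between the two sets of functions is commonly written as $\mathrm{Hom}_{\mathbf{Mon}}(F(S), M) \cong \mathrm{Hom}_{\mathbf{Set}}(S, U(M))$ for every set $S$ and monoid $M$.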


@ -1,7 +1,7 @@
Yoneda lemma
===
When thinking about some mathematical objects such as a groups, orders or categories, we often feel a need to get to the souce. We start asking ourselves what is "groupness" or "orderness". Like, given some (or any) set of objects, what is the ultimate group that we can create out of these objects - a group that includes all other groups, group such that all other groups are just a special case of it. Or the ultimate order? The ultimate category?
When thinking about some mathematical objects such as groups, orders or categories, we often feel a need to get to the source. We start asking ourselves what is "groupness" or "orderness". Like, given some (or any) set of objects, what is the ultimate group that we can create out of these objects - a group that includes all other groups, a group such that all other groups are just a special case of it. Or the ultimate order? The ultimate category?
Homomorphism functors


@ -237,7 +237,7 @@ If you know about semiotics, you may view the source and target categories of th
And so, you can already see that the concept of a functor plays a very important role in category theory. Because of it, diagrams in category theory can be *specified formally* i.e. they are categorical objects *per se*.
You might even say that they are categorical objects *par excellance* (TODO: remove that last joke).
You might even say that they are categorical objects *par excellence* (TODO: remove that last joke).
<!--
(TODO: By the way, the fact that a diagram commutes means just that the morphism in the finite category are sometimes composites of one another).
@ -494,7 +494,7 @@ a.map(f).map(g) == a.map((a) => g(f(a)))
What are functors for
===
Now, that we have seen so many examples of functors, we finally can attempt to answer the million-dollar question, namely what are functors for and why are they useful? (often formulated also as "Why are you wasting your/my time with this (abstact) nonsense?")
Now, that we have seen so many examples of functors, we finally can attempt to answer the million-dollar question, namely what are functors for and why are they useful? (often formulated also as "Why are you wasting your/my time with this (abstract) nonsense?")
Well, we saw that *maps are functors* and we know that *maps are useful*, so let's start from there.
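The functor composition law quoted in the diff context above, `a.map(f).map(g) == a.map((a) => g(f(a)))`, can be checked directly on an ordinary array (the concrete values here are mine):

```js
// Mapping twice is the same as mapping once with the composed function.
const a = [1, 2, 3];
const f = (x) => x + 1;
const g = (x) => x * 2;
JSON.stringify(a.map(f).map(g)) === JSON.stringify(a.map((x) => g(f(x)))); // true, both are [4, 6, 8]
```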


@ -9,7 +9,7 @@ Natural transformations
In this chapter, we will introduce the concept of a morphism between functors or *natural transformation*. Understanding natural transformations will enable us to define category equality and some other advanced concepts.
Natural transformations really are at the heart of category theory --- As a matter of fact, category theory was invented with the purpose of studying natural transformations. However, the importance of natural transformations is not obvious at first, and so, before introducing them, I like to talk about the body of knowledge that this heart maintains (I am good with methaphors... in principle).
Natural transformations really are at the heart of category theory --- As a matter of fact, category theory was invented with the purpose of studying natural transformations. However, the importance of natural transformations is not obvious at first, and so, before introducing them, I like to talk about the body of knowledge that this heart maintains (I am good with metaphors... in principle).
The categorical way AKA Objects are overrated
===
@ -24,7 +24,7 @@ Asking such questions might lead us to suspect that, although what we *see* when
Although old (dating back to Parmenides' rival Heraclitus), this view has been largely unexplored, both in the realm of philosophy, and that of mathematics. But it is deeply engrained in category theory. For example, when we say that a given property defines an object *up to a unique isomorphism* what we mean is exactly this --- that if there are two or more objects that are isomorphic to one another and have exactly the same morphisms from/to all other objects in the category (have the same *functions* in the category), then these objects are, for all intents and purposes, equivalent. And the key to understanding how this works are natural transformations.
So, are you ready to hear about natural transformations? Actually it is my opinion that you are not, so I would like to continue with something else. Let's ask ourselves the same question that we were poundering at the beginning of the previous chapter --- what does it mean for two categories to be equal.
So, are you ready to hear about natural transformations? Actually it is my opinion that you are not, so I would like to continue with something else. Let's ask ourselves the same question that we were pondering at the beginning of the previous chapter --- what does it mean for two categories to be equal.
Isomorphisms and equivalence
===
@ -49,7 +49,7 @@ To understand equivalent categories better, let's go back to the functor between
Such a map is necessary if your goal is to know about all *places*, however, like we said, when working with category theory, we are not so interested in *places*, but in the *routes* that connect them i.e. we focus not on *objects* but on *morphisms*.
For example, if there are intersections that are positioned in such a way that there are routes from one and to the other and vice-versa a map may may collapse them into one intercection and still show all routes that exist.
For example, if there are intersections that are positioned in such a way that there are routes from one to the other and vice-versa, a map may collapse them into one intersection and still show all routes that exist.
![Equivalent categories](equivalent_map.svg)
@ -66,7 +66,7 @@ Before we present a formal definition of order equivalence, we need to revise th
In the chapter about orders we presented a definition of order isomorphism that is based on *set* isomorphisms.
> An order isomorphism is essentially an isomorphism between the orders' underlying sets (invertable function). However, besides their underlying sets, orders also have the arrows that connect them, so there is one more condition: in order for an invertable function to constitute an order isomorphism it has to *respect those arrows*, in other words it should be *order preserving*. More specifically, applying this function (let's call it $F$) to any two elements in one set ($a$ and $b$) should result in two elements that have the same corresponding order in the other set (so $a ≤ b$ if and only if $F(a) ≤ F(b)$).
> An order isomorphism is essentially an isomorphism between the orders' underlying sets (invertible function). However, besides their underlying sets, orders also have the arrows that connect them, so there is one more condition: in order for an invertible function to constitute an order isomorphism it has to *respect those arrows*, in other words it should be *order preserving*. More specifically, applying this function (let's call it $F$) to any two elements in one set ($a$ and $b$) should result in two elements that have the same corresponding order in the other set (so $a ≤ b$ if and only if $F(a) ≤ F(b)$).
But, since we know about functors, we will present a new definition, based on functors:
> Given two orders $A$ and $B$, an *order isomorphism* consists of two functors $F: A \to B$ and $G: B \to A$, such that composing one with the other leads us back to the same object.
@ -125,7 +125,7 @@ Note that the functors are similar have the same signature --- both their input
Building the object mapping mapping
---
A functor is comprised of two components --- object mapping and morphism mapping, so a natural transformatiom, being a morphism between functors, should take those two mappings into account.
A functor is comprised of two components --- object mapping and morphism mapping, so a natural transformation, being a morphism between functors, should take those two mappings into account.
Let's first connect the object mappings of the two functors, creating what we called "object mapping mapping". It is simpler than it sounds when we realize that we only need to connect the objects in the functors' *target category*. The objects in the source category would just always be the same, as both functors would include *all* objects from the source category (because that is what functions do, right?)
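As a concrete, programming-flavoured illustration of such component-wise connections (my own example, not the book's): converting a possibly-missing value to a list gives, for every type of element, a function from the output of one functor to the output of the other.

```js
// One "component" of a natural transformation between two familiar functors:
// a possibly-missing value (modelled here as null-or-value) becomes a list of 0 or 1 elements.
const maybeToList = (maybe) => (maybe === null ? [] : [maybe]);

// Naturality, checked on an example: map-then-convert equals convert-then-map.
const maybeMap = (f, m) => (m === null ? null : f(m));
const f = (x) => x * 10;
JSON.stringify(maybeToList(maybeMap(f, 5))) === JSON.stringify(maybeToList(5).map(f)); // true
```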
@ -157,7 +157,7 @@ If you look just a little bit closely, you will see that the only difference bet
Natural transformations again
===
Now that we saw the definition of natural transformations, it is time to see the definition of natural transformations (and if you feel that the quality of the humour in this book is deteoriating, that's only because *things are getting serious*).
Now that we saw the definition of natural transformations, it is time to see the definition of natural transformations (and if you feel that the quality of the humour in this book is deteriorating, that's only because *things are getting serious*).
I am sure that once you saw one definition of a natural transformation, you just cannot get enough of them. So let's work out one more. Let's start with our two functors.
@ -195,7 +195,7 @@ That is because the category with two objects and one morphism (which, if you re
Formal definition
---
Let's state formally what we saw in these diagrams. As you remember we call our two functors $F$ and $G$ and the our categoris $C$ and $D$ (so $F : C \to D$ and $G : C \to D$).
Let's state formally what we saw in these diagrams. As you remember, we call our two functors $F$ and $G$ and our categories $C$ and $D$ (so $F : C \to D$ and $G : C \to D$).
Also, we call the category with two connected objects $2$ and the morphisms in it are $0$ and $1$.
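For reference (this spelling-out is mine, using the standard formulation): such a natural transformation $\alpha$ assigns to every object $X$ of $C$ a morphism $\alpha_X : F(X) \to G(X)$ in $D$, and for every morphism $f : X \to Y$ in $C$ the two ways around the square agree, i.e. $G(f) \circ \alpha_X = \alpha_Y \circ F(f)$.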