Small fixes + version bump!

This fixes several issues:
fixes #42
fixes #45
fixes #53
fixes #60
fixes #69
fixes #70
fixes #36
Igal Tabachnik 2017-10-16 21:26:02 +03:00
parent 51a71ee475
commit ee59574ad3
26 changed files with 214 additions and 224 deletions


@@ -100,7 +100,7 @@ in increasingly hot water, or start looking for some alternatives.
\begin{figure}
\centering
\fbox{\includegraphics[width=3.12500in]{images/img_1299.jpg}}
\includegraphics[width=70mm]{images/img_1299.jpg}
\end{figure}
One of the forces that are driving the big change is the multicore
@@ -131,7 +131,7 @@ Changes in hardware and the growing complexity of software are forcing
us to rethink the foundations of programming. Just like the builders of
Europe's great gothic cathedrals we've been honing our craft to the
limits of material and structure. There is an unfinished gothic
\href{http://en.wikipedia.org/wiki/Beauvais_Cathedral}{cathedral in
\urlref{http://en.wikipedia.org/wiki/Beauvais_Cathedral}{cathedral in
Beauvais}, France, that stands witness to this deeply human struggle
with limitations. It was intended to beat all previous records of height
and lightness, but it suffered a series of collapses. Ad hoc measures
@@ -148,6 +148,6 @@ those foundations if we want to move forward.
\begin{figure}
\centering
\fbox{\includegraphics[totalheight=8cm]{images/beauvais_interior_supports.jpg}}
\includegraphics[totalheight=8cm]{images/beauvais_interior_supports.jpg}
\caption{Ad hoc measures preventing the Beauvais cathedral from collapsing.}
\end{figure}


@@ -3,7 +3,7 @@ A category consists of \newterm{objects} and \newterm{arrows} that go between th
why categories are so easy to represent pictorially. An object can be
drawn as a circle or a point, and an arrow\ldots{} is an arrow. (Just
for variety, I will occasionally draw objects as piggies and arrows as
fireworks.) But the essence of a category is \newterm{composition}. Or, if you
fireworks.) But the essence of a category is \emph{composition}. Or, if you
prefer, the essence of composition is a category. Arrows compose, so
if you have an arrow from object $A$ to object $B$, and another arrow from
object $B$ to object $C$, then there must be an arrow --- their composition
@@ -17,17 +17,17 @@ then there must also be a direct arrow from $A$ to $C$ that is their composition
category because it's missing identity morphisms (see later).}
\end{figure}
\section{Arrows as Functions}\label{arrows-as-functions}
\section{Arrows as Functions}
Is this already too much abstract nonsense? Do not despair. Let's talk
concretes. Think of arrows, which are also called \newterm{morphisms}, as
functions. You have a function \code{f} that takes an argument of type $A$ and
returns a $B$. You have another function \code{g} that takes a $B$ and returns a $C$.
functions. You have a function $f$ that takes an argument of type $A$ and
returns a $B$. You have another function $g$ that takes a $B$ and returns a $C$.
You can compose them by passing the result of $f$ to $g$. You have just
defined a new function that takes an $A$ and returns a $C$.
In math, such composition is denoted by a small circle between
functions: \ensuremath{g \circ f}. Notice the right to left order of composition. For some
functions: $g \circ f$. Notice the right to left order of composition. For some
people this is confusing. You may be familiar with the pipe notation in
Unix, as in:
@@ -36,7 +36,7 @@ lsof | grep Chrome
\end{Verbatim}
or the chevron \code{>>} in F\#, which both
go from left to right. But in mathematics and in Haskell functions
compose right to left. It helps if you read \(g \circ f\) as ``g \emph{after} f.''
compose right to left. It helps if you read $g \circ f$ as ``g \emph{after} f.''
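The right-to-left order can be checked directly in Haskell, where composition is written with the (.) operator. A minimal sketch (the functions f and g here are illustrative, not taken from the text):

```haskell
-- (.) composes right to left: (g . f) means "g after f".
f :: Int -> Int
f x = x + 1

g :: Int -> Int
g x = x * 2

main :: IO ()
main = print ((g . f) 3)  -- f first, then g: (3 + 1) * 2 = 8
```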
Let's make this even more explicit by writing some C code. We have one
function \code{f} that takes an argument of type \code{A} and
@@ -83,12 +83,12 @@ Once you see how simple things are in Haskell, the inability to express
straightforward functional concepts in C++ is a little embarrassing. In
fact, Haskell will let you use Unicode characters so you can write
composition as:
% don't 'mathify' this block
\begin{Verbatim}
g ◦ f
\end{Verbatim}
You can even use Unicode double colons and arrows:
% don't 'mathify' this block
\begin{Verbatim}[commandchars=\\\{\}]
f \ensuremath{\Colon} A → B
\end{Verbatim}
@@ -97,7 +97,7 @@ of\ldots{}'' A function type is created by inserting an arrow between
two types. You compose two functions by inserting a period between them
(or a Unicode circle).
\section{Properties of Composition}\label{properties-of-composition}
\section{Properties of Composition}
There are two extremely important properties that the composition in any
category must satisfy.
@@ -108,7 +108,7 @@ Composition is associative. If you have three morphisms, $f$, $g$, and $h$,
that can be composed (that is, their objects match end-to-end), you
don't need parentheses to compose them. In math notation this is
expressed as:
\[h\circ{}(g\circ{}f) = (h\circ{}g)\circ{}f = h\circ{}g\circ{}f\]
\[h \circ (g \circ f) = (h \circ g) \circ f = h \circ g \circ f\]
In (pseudo) Haskell:
\begin{Verbatim}
@@ -186,7 +186,7 @@ do we bother with the number zero? Zero is a symbol for nothing. Ancient
Romans had a number system without a zero and they were able to build
excellent roads and aqueducts, some of which survive to this day.
Neutral values like zero or \code{id} are extremely useful when
Neutral values like zero or $\id$ are extremely useful when
working with symbolic variables. That's why Romans were not very good at
algebra, whereas the Arabs and the Persians, who were familiar with the
concept of zero, were. So the identity function becomes very handy as an
@@ -198,8 +198,7 @@ To summarize: A category consists of objects and arrows (morphisms).
Arrows can be composed, and the composition is associative. Every object
has an identity arrow that serves as a unit under composition.
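This summary can be spot-checked in Haskell. A small sketch (the sample functions are illustrative) that tests associativity and the identity unit on one argument:

```haskell
-- Checking the two category laws for Haskell functions on a sample input:
-- associativity of (.) and id as the unit of composition.
f, g, h :: Int -> Int
f = (+ 1)
g = (* 2)
h = subtract 3

main :: IO ()
main = do
  print (((h . g) . f) 10 == (h . (g . f)) 10)        -- associativity
  print ((f . id) 10 == f 10 && (id . f) 10 == f 10)  -- identity is a unit
```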
\section{Composition is the Essence of
Programming}\label{composition-is-the-essence-of-programming}
\section{Composition is the Essence of Programming}
Functional programmers have a peculiar way of approaching problems. They
start by asking very Zen-like questions. For instance, when designing an
@@ -223,7 +222,7 @@ This process of hierarchical decomposition and recomposition is not
imposed on us by computers. It reflects the limitations of the human
mind. Our brains can only deal with a small number of concepts at a
time. One of the most cited papers in psychology,
\href{http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two}{The
\urlref{http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two}{The
Magical Number Seven, Plus or Minus Two}, postulated that we can only
keep $7 \pm 2$ ``chunks'' of information in our minds. The details of our
understanding of the human short-term memory might be changing, but we
@@ -263,7 +262,7 @@ you have to dig into the implementation of the object in order to
understand how to compose it with other objects, you've lost the
advantages of your programming paradigm.
\section{Challenges}\label{challenges}
\section{Challenges}
\begin{enumerate}
\tightlist


@@ -3,7 +3,7 @@ their structure.
\begin{wrapfigure}[13]{R}{0pt}
\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
\fbox{\includegraphics[width=60mm]{images/1_functors.jpg}}}%
\includegraphics[width=60mm]{images/1_functors.jpg}}%
\end{wrapfigure}
\noindent
@@ -29,7 +29,7 @@ $G a$.
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=80mm]{images/2_natcomp.jpg}}
\includegraphics[width=80mm]{images/2_natcomp.jpg}
\end{figure}
\noindent
@@ -131,7 +131,7 @@ like saying they are the same. \newterm{Natural isomorphism} is defined as
a natural transformation whose components are all isomorphisms
(invertible morphisms).
\section{Polymorphic Functions}\label{polymorphic-functions}
\section{Polymorphic Functions}
We talked about the role of functors (or, more specifically,
endofunctors) in programming. They correspond to type constructors that
@@ -355,7 +355,7 @@ alpha :: Reader () a -> Maybe a
\end{Verbatim}
There are only two of these, \code{dumb} and \code{obvious}:
\begin{Verbatim}[commandchars=\\\{\}]
\begin{Verbatim}
dumb (Reader _) = Nothing
\end{Verbatim}
and
@@ -371,7 +371,7 @@ two elements of the \code{Maybe ()} type, which are \code{Nothing}
and \code{Just ()}. We'll come back to the Yoneda lemma later ---
this was just a little teaser.
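For a runnable version of this teaser, here is a self-contained sketch; the body of obvious is not shown in this excerpt, so its definition below is a plausible completion rather than a quotation:

```haskell
-- The two natural transformations Reader () a -> Maybe a.
newtype Reader e a = Reader (e -> a)

dumb :: Reader () a -> Maybe a
dumb (Reader _) = Nothing          -- ignores the function entirely

obvious :: Reader () a -> Maybe a
obvious (Reader g) = Just (g ())   -- applies it to the only value of ()

main :: IO ()
main = do
  print (dumb    (Reader (\_ -> 'x')))
  print (obvious (Reader (\_ -> 'x')))
```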
\section{Beyond Naturality}\label{beyond-naturality}
\section{Beyond Naturality}
A parametrically polymorphic function between two functors (including
the edge case of the \code{Const} functor) is always a natural
@@ -441,7 +441,7 @@ transformation. Interestingly, there is a generalization of natural
transformations, called dinatural transformations, that deals with such
cases. We'll get to them when we discuss ends.
\section{Functor Category}\label{functor-category}
\section{Functor Category}
Now that we have mappings between functors --- natural transformations
--- it's only natural to ask the question whether functors form a
@@ -550,7 +550,7 @@ And by ``object'' in $\Cat$ I mean a category, so
$\cat{D^C}$ is a category, which we can identify with the
functor category between $\cat{C}$ and $\cat{D}$.
\section{2-Categories}\label{categories}
\section{2-Categories}
With that out of the way, let's have a closer look at $\Cat$. By
definition, any Hom-set in $\Cat$ is a set of functors. But, as we
@@ -748,7 +748,7 @@ Functors may be looked upon as objects in the functor category. As such,
they become sources and targets of morphisms: natural transformations. A
natural transformation is a special type of polymorphic function.
\section{Challenges}\label{challenges}
\section{Challenges}
\begin{enumerate}
\tightlist
@@ -780,7 +780,7 @@ op = Op (\x -> x > 0)
\end{Verbatim}
and
\begin{Verbatim}[commandchars=\\\{\}]
\begin{Verbatim}
f :: String -> Int
f x = read x
\end{Verbatim}


@@ -1,7 +1,7 @@
\lettrine[lhang=0.17]{T}{he category of types and functions} plays an important role in
programming, so let's talk about what types are and why we need them.
\section{Who Needs Types?}\label{who-needs-types}
\section{Who Needs Types?}
There seems to be some controversy about the advantages of static vs.
dynamic and strong vs. weak typing. Let me illustrate these choices with
@@ -35,8 +35,7 @@ The analog of a type checker would go even further by making sure that,
once Romeo is declared a human being, he doesn't sprout leaves or trap
photons in his powerful gravitational field.
\section{Types Are About
Composability}\label{types-are-about-composability}
\section{Types Are About Composability}
Category theory is about composing arrows. But not any two arrows can be
composed. The target object of one arrow must be the same as the source
@@ -95,7 +94,7 @@ sites. Unit testing may catch some of the mismatches, but testing is
almost always a probabilistic rather than a deterministic process.
Testing is a poor substitute for proof.
\section{What Are Types?}\label{what-are-types}
\section{What Are Types?}
The simplest intuition for types is that they are sets of values. The
type \code{Bool} (remember, concrete types start with a capital letter
@@ -147,7 +146,7 @@ undecidable --- the famous halting problem. That's why computer
scientists came up with a brilliant idea, or a major hack, depending on
your point of view, to extend every type by one more special value
called the \newterm{bottom} and denoted by \code{\_|\_}, or
Unicode \ensuremath{\bot}. This ``value'' corresponds to a non-terminating computation.
Unicode $\bot$. This ``value'' corresponds to a non-terminating computation.
So a function declared as:
\begin{Verbatim}
@@ -180,7 +179,7 @@ Functions that may return bottom are called partial, as opposed to total
functions, which return valid results for every possible argument.
Because of the bottom, you'll see the category of Haskell types and
functions referred to as \textbf{Hask} rather than $\Set$. From
functions referred to as $\cat{Hask}$ rather than $\Set$. From
the theoretical point of view, this is the source of never-ending
complications, so at this point I will use my butcher's knife and
terminate this line of reasoning. From the pragmatic point of view, it's
@@ -189,8 +188,7 @@ okay to ignore non-terminating functions and bottoms, and treat
John Hughes, Patrik Jansson, Jeremy Gibbons, \href{http://www.cs.ox.ac.uk/jeremy.gibbons/publications/fast+loose.pdf}{
Fast and Loose Reasoning is Morally Correct}. This paper provides justification for ignoring bottoms in most contexts.}
\section{Why Do We Need a Mathematical
Model?}\label{why-do-we-need-a-mathematical-model}
\section{Why Do We Need a Mathematical Model?}
As a programmer you are intimately familiar with the syntax and grammar
of your programming language. These aspects of the language are usually
@@ -283,7 +281,7 @@ even when writing web applications for the health system, you may
appreciate the thought that functions and algorithms from the Haskell
standard library come with proofs of correctness.
\section{Pure and Dirty Functions}\label{pure-and-dirty-functions}
\section{Pure and Dirty Functions}
The things we call functions in C++ or any other imperative language,
are not the same things mathematicians call functions. A mathematical
@@ -311,7 +309,7 @@ about side effects separately. Later we'll see how monads let us model
all kinds of effects using only pure functions. So we really don't lose
anything by restricting ourselves to mathematical functions.
\section{Examples of Types}\label{examples-of-types}
\section{Examples of Types}
Once you realize that types are sets, you can think of some rather
exotic types. For instance, what's the type corresponding to an empty
@@ -384,8 +382,8 @@ effects, but we know that these are not real functions in the
mathematical sense of the word. A pure function that returns unit does
nothing: it discards its argument.
Mathematically, a function from a set A to a singleton set maps every
element of A to the single element of that singleton set. For every A
Mathematically, a function from a set $A$ to a singleton set maps every
element of $A$ to the single element of that singleton set. For every $A$
there is exactly one such function. Here's this function for
\code{Integer}:
@@ -458,7 +456,7 @@ Boolean. The actual predicates are defined in \code{std::ctype} and
have the form \code{ctype::is(alpha, c)},
\code{ctype::is(digit, c)}, etc.
\section{Challenges}\label{challenges}
\section{Challenges}
\begin{enumerate}
\tightlist


@@ -2,7 +2,7 @@
examples. Categories come in all shapes and sizes and often pop up in
unexpected places. We'll start with something really simple.
\section{No Objects}\label{no-objects}
\section{No Objects}
The most trivial category is one with zero objects and, consequently,
zero morphisms. It's a very sad category by itself, but it may be
@@ -10,7 +10,7 @@ important in the context of other categories, for instance, in the
category of all categories (yes, there is one). If you think that an
empty set makes sense, then why not an empty category?
\section{Simple Graphs}\label{simple-graphs}
\section{Simple Graphs}
You can build categories just by connecting objects with arrows. You can
imagine starting with any directed graph and making it into a category
@@ -34,18 +34,18 @@ given structure by extending it with a minimum number of items to
satisfy its laws (here, the laws of a category). We'll see more examples
of it in the future.
\section{Orders}\label{orders}
\section{Orders}
And now for something completely different! A category where a morphism
is a relation between objects: the relation of being less than or equal.
Let's check if it indeed is a category. Do we have identity morphisms?
Every object is less than or equal to itself: check! Do we have
composition? If $a \leq b$ and $b \leq c$ then $a \leq c$: check! Is composition associative? Check! A set with a
composition? If $a \leqslant b$ and $b \leqslant c$ then $a \leqslant c$: check! Is composition associative? Check! A set with a
relation like this is called a \newterm{preorder}, so a preorder is indeed
a category.
You can also have a stronger relation, that satisfies an additional
condition that, if $a \leq b$ and $b \leq a$ then $a$ must be
condition that, if $a \leqslant b$ and $b \leqslant a$ then $a$ must be
the same as $b$. That's called a \newterm{partial order}.
Finally, you can impose the condition that any two objects are in a
@@ -59,7 +59,7 @@ is a thin category.
A set of morphisms from object a to object b in a category $\cat{C}$ is called a
\newterm{hom-set} and is written as $\cat{C}(a, b)$ (or, sometimes,
$\cat{Hom_C}(a, b)$). So every hom-set in a preorder is either
$\mathbf{Hom}_{\cat{C}}(a, b)$). So every hom-set in a preorder is either
empty or a singleton. That includes the hom-set $\cat{C}(a, a)$, the set of
morphisms from a to a, which must be a singleton, containing only the
identity, in any preorder. You may, however, have cycles in a preorder.
@@ -70,7 +70,7 @@ and total orders because of sorting. Sorting algorithms, such as
quicksort, bubble sort, merge sort, etc., can only work correctly on
total orders. Partial orders can be sorted using topological sort.
\section{Monoid as Set}\label{monoid-as-set}
\section{Monoid as Set}
Monoid is an embarrassingly simple but amazingly powerful concept. It's
the concept behind basic arithmetic: Both addition and multiplication
@@ -109,14 +109,14 @@ class Monoid m where
mappend :: m -> m -> m
\end{Verbatim}
The type signature for a two-argument function,
\code{m->m->m}, might look strange at first,
\code{m -> m -> m}, might look strange at first,
but it will make perfect sense after we talk about currying. You may
interpret a signature with multiple arrows in two basic ways: as a
function of multiple arguments, with the rightmost type being the return
type; or as a function of one argument (the leftmost one), returning a
function. The latter interpretation may be emphasized by adding
parentheses (which are redundant, because the arrow is
right-associative), as in: \code{m->(m->m)}.
right-associative), as in: \code{m -> (m -> m)}.
We'll come back to this interpretation in a moment.
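The two readings can be sketched side by side. A small illustration using string concatenation as the two-argument operation (the function names are made up for the example):

```haskell
-- Two readings of a signature m -> m -> m, shown for m = String.
twoArg :: String -> String -> String
twoArg s t = s ++ t          -- a function of two arguments

oneArg :: String -> (String -> String)
oneArg s = \t -> s ++ t      -- a function returning a function

main :: IO ()
main = do
  putStrLn (twoArg "Hello, " "world")
  putStrLn (oneArg "Hello, " "world")  -- same call syntax, same result
```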
Notice that, in Haskell, there is no way to express the monoidal
@@ -222,7 +222,7 @@ std::string mappend(std::string s1, std::string s2) {
}
\end{Verbatim}
\section{Monoid as Category}\label{monoid-as-category}
\section{Monoid as Category}
That was the ``familiar'' definition of the monoid in terms of elements
of a set. But as you know, in category theory we try to get away from
@@ -253,7 +253,7 @@ corresponding to the identity function.
An astute reader might have noticed that the mapping from integers to
adders follows from the second interpretation of the type signature of
\code{mappend} as \code{m->(m->m)}. It
\code{mappend} as \code{m -> (m -> m)}. It
tells us that \code{mappend} maps an element of a monoid set to a
function acting on that set.
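That mapping from elements to functions can be sketched directly; an illustrative example with integers under addition (not code from the text):

```haskell
-- Each integer n is mapped to the "adder" (n +). Composing adders
-- mirrors adding the underlying numbers.
adder :: Int -> (Int -> Int)
adder n = (n +)

main :: IO ()
main = do
  print ((adder 5 . adder 7) 0)  -- composition of adders
  print (adder (5 + 7) 0)        -- adder of the sum: the same result
```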
@@ -310,7 +310,7 @@ follow the rules of composition, and as points in a set. Here,
composition of morphisms in M translates into monoidal product in the
set $\cat{M}(m, m)$.
\section{Challenges}\label{challenges}
\section{Challenges}
\begin{enumerate}
\tightlist


@@ -159,7 +159,7 @@ composition} itself. But composition is the essence of category theory,
so before we write more code, let's analyze the problem from the
categorical point of view.
\section{The Writer Category}\label{the-writer-category}
\section{The Writer Category}
The idea of embellishing the return types of a bunch of functions in
order to piggyback some additional functionality turns out to be very
@@ -300,7 +300,7 @@ writer should be able to identify the bare minimum of constraints that
make the library work --- here the logging library's only requirement is
that the log have monoidal properties.
\section{Writer in Haskell}\label{writer-in-haskell}
\section{Writer in Haskell}
The same thing in Haskell is a little more terse, and we also get a lot
more help from the compiler. Let's start by defining the \code{Writer}
@@ -395,7 +395,7 @@ process :: String -> Writer [String]
process = upCase >=> toWords
\end{Verbatim}
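A self-contained sketch of the pipeline above, assuming the pair-with-String-log representation of Writer used earlier in the chapter:

```haskell
import Data.Char (toUpper)

-- Writer embellishes a result with a log; >=> is Kleisli composition,
-- which threads results and concatenates logs (a monoid operation).
type Writer a = (a, String)

(>=>) :: (a -> Writer b) -> (b -> Writer c) -> (a -> Writer c)
m1 >=> m2 = \x ->
  let (y, s1) = m1 x
      (z, s2) = m2 y
  in (z, s1 ++ s2)

upCase :: String -> Writer String
upCase s = (map toUpper s, "upCase ")

toWords :: String -> Writer [String]
toWords s = (words s, "toWords ")

process :: String -> Writer [String]
process = upCase >=> toWords

main :: IO ()
main = print (process "category theory")
```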
\section{Kleisli Categories}\label{kleisli-categories}
\section{Kleisli Categories}
You might have guessed that I haven't invented this category on the
spot. It's an example of the so called Kleisli category --- a category
@@ -424,7 +424,7 @@ freedom which makes it possible to give simple denotational semantics to
programs that in imperative languages are traditionally implemented
using side effects.
\section{Challenge}\label{challenge}
\section{Challenge}
A function that is not defined for all possible values of its argument
is called a partial function. It's not really a function in the


@@ -21,7 +21,7 @@ hits, you refine your query. That increases its \emph{precision}.
Finally, the search engine will rank the hits and, hopefully, the one
result that you're interested in will be at the top of the list.
\section{Initial Object}\label{initial-object}
\section{Initial Object}
The simplest shape is a single object. Obviously, there are as many
instances of this shape as there are objects in a given category. That's
@@ -49,13 +49,13 @@ morphism going to any object in the category.
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=60mm]{images/initial.jpg}}
\includegraphics[width=60mm]{images/initial.jpg}
\end{figure}
\noindent
However, even that doesn't guarantee the uniqueness of the initial
object (if one exists). But it guarantees the next best thing:
uniqueness \emph{up to isomorphism}. Isomorphisms are very important in
uniqueness \newterm{up to isomorphism}. Isomorphisms are very important in
category theory, so I'll talk about them shortly. For now, let's just
agree that uniqueness up to isomorphism justifies the use of ``the'' in
the definition of the initial object.
@@ -77,7 +77,7 @@ absurd :: Void -> a
It's this family of morphisms that makes \code{Void} the initial
object in the category of types.
\section{Terminal Object}\label{terminal-object}
\section{Terminal Object}
Let's continue with the single-object pattern, but let's change the way
we rank the objects. We'll say that object $a$ is ``more terminal''
@@ -93,7 +93,7 @@ morphism coming to it from any object in the category.
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=60mm]{images/final.jpg}}
\includegraphics[width=60mm]{images/final.jpg}
\end{figure}
\noindent
@@ -132,7 +132,7 @@ no _ = False
Insisting on uniqueness gives us just the right precision to narrow down
the definition of the terminal object to just one type.
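In Haskell that type is the unit type (), and the family of morphisms into it can be sketched as:

```haskell
-- For every type a there is exactly one (total) function into ():
-- discard the argument and return the sole value of the unit type.
unit :: a -> ()
unit _ = ()

main :: IO ()
main = do
  print (unit (42 :: Int))
  print (unit "hello")
```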
\section{Duality}\label{duality}
\section{Duality}
You can't help but notice the symmetry between the way we defined the
initial object and the terminal object. The only difference between the
@@ -141,10 +141,10 @@ we can define the \newterm{opposite category} $\cat{C}^{op}$ just by
reversing all the arrows. The opposite category automatically satisfies
all the requirements of a category, as long as we simultaneously
redefine composition. If original morphisms
$f\Colon{}a\to b$ and $g\Colon{}b\to c$ composed
to $h\Colon{}a\to c$ with $h=g \circ f$, then the reversed
morphisms $f^{op}\Colon{}b\to a$ and $g^{op}\Colon{}c\to b$ will compose to
$h^{op}\Colon{}c\to a$ with $h^{op}=f^{op} \circ g^{op}$. And reversing
$f \Colon a \to b$ and $g \Colon b \to c$ composed
to $h \Colon a \to c$ with $h = g \circ f$, then the reversed
morphisms $f^{op} \Colon b \to a$ and $g^{op} \Colon c \to b$ will compose to
$h^{op} \Colon c \to a$ with $h^{op} = f^{op} \circ g^{op}$. And reversing
the identity arrows is a (pun alert!) no-op.
Duality is a very important property of categories because it doubles
@@ -159,7 +159,7 @@ the arrows twice gets us back to the original state.
It follows then that a terminal object is the initial object in the
opposite category.
\section{Isomorphisms}\label{isomorphisms}
\section{Isomorphisms}
As programmers, we are well aware that defining equality is a nontrivial
task. What does it mean for two objects to be equal? Do they have to
@@ -188,11 +188,10 @@ We understand the inverse in terms of composition and identity: Morphism
$g$ is the inverse of morphism $f$ if their composition is the
identity morphism. These are actually two equations because there are
two ways of composing two morphisms:
\begin{Verbatim}
f . g = id
g . f = id
\end{Verbatim}
\begin{gather*}
f \circ g = \id \\
g \circ f = \id
\end{gather*}
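A concrete instance: swap, which exchanges the components of a pair, is an isomorphism whose inverse is itself, so the composite equals the identity. A quick check:

```haskell
-- swap is an isomorphism between (a, b) and (b, a); composing it
-- with itself gives back the identity morphism.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

main :: IO ()
main = print ((swap . swap) (1 :: Int, "one") == id (1, "one"))
```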
When I said that the initial (terminal) object was unique up to
isomorphism, I meant that any two initial (terminal) objects are
isomorphic. That's actually easy to see. Let's suppose that we have two
@@ -205,7 +204,7 @@ these two morphisms?
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=1.56250in]{images/uniqueness.jpg}}
\includegraphics[width=40mm]{images/uniqueness.jpg}
\caption{All morphisms in this diagram are unique.}
\end{figure}
@@ -230,7 +229,7 @@ there could be more than one isomorphism between two objects, but that's
not the case here. This ``uniqueness up to unique isomorphism'' is the
important property of all universal constructions.
\section{Products}\label{products}
\section{Products}
The next universal construction is that of a product. We know what a
cartesian product of two sets is: it's a set of pairs. But what's the
@@ -284,7 +283,7 @@ q :: c -> b
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=1.56250in]{images/productpattern.jpg}}
\includegraphics[width=40mm]{images/productpattern.jpg}
\end{figure}
\noindent
@@ -293,7 +292,7 @@ the product. There may be lots of them.
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=1.56250in]{images/productcandidates.jpg}}
\includegraphics[width=40mm]{images/productcandidates.jpg}
\end{figure}
\noindent
@@ -331,31 +330,31 @@ too big --- it spuriously duplicated the \code{Int} dimension.
But we haven't explored yet the other part of the universal
construction: the ranking. We want to be able to compare two instances
of our pattern. We want to compare one candidate object \emph{c} and its
of our pattern. We want to compare one candidate object $c$ and its
two projections \emph{p} and \emph{q} with another candidate object
\emph{c'} and its two projections \emph{p'} and \emph{q'}. We would like
to say that \emph{c} is ``better'' than \emph{c'} if there is a morphism
\emph{m} from \emph{c'} to \emph{c} --- but that's too weak. We also
$c'$ and its two projections \emph{p'} and \emph{q'}. We would like
to say that $c$ is ``better'' than $c'$ if there is a morphism
$m$ from $c'$ to $c$ --- but that's too weak. We also
want its projections to be ``better,'' or ``more universal,'' than the
projections of \emph{c'}. What it means is that the projections
projections of $c'$. What it means is that the projections
\emph{p'} and \emph{q'} can be reconstructed from \emph{p} and \emph{q}
using \emph{m}:
using $m$:
\begin{Verbatim}[commandchars=\\\{\}]
\begin{Verbatim}
p' = p . m
q' = q . m
\end{Verbatim}
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=1.56250in]{images/productranking.jpg}}
\includegraphics[width=40mm]{images/productranking.jpg}
\end{figure}
\noindent
Another way of looking at these equations is that \emph{m}
Another way of looking at these equations is that $m$
\emph{factorizes} \emph{p'} and \emph{q'}. Just pretend that these
equations are in natural numbers, and the dot is multiplication:
\emph{m} is a common factor shared by \emph{p'} and \emph{q'}.
$m$ is a common factor shared by \emph{p'} and \emph{q'}.
Just to build some intuitions, let me show you that the pair
\code{(Int, Bool)} with the two canonical projections, \code{fst}
@@ -364,26 +363,26 @@ presented before.
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=2.20833in]{images/not-a-product.jpg}}
\includegraphics[width=50mm]{images/not-a-product.jpg}
\end{figure}
\noindent
The mapping \code{m} for the first candidate is:
\begin{Verbatim}[commandchars=\\\{\}]
\begin{Verbatim}
m :: Int -> (Int, Bool)
m x = (x, True)
\end{Verbatim}
Indeed, the two projections, \code{p} and \code{q} can be
reconstructed as:
\begin{Verbatim}[commandchars=\\\{\}]
\begin{Verbatim}
p x = fst (m x) = x
q x = snd (m x) = True
\end{Verbatim}
The \code{m} for the second example is similarly uniquely determined:
\begin{Verbatim}[commandchars=\\\{\}]
\begin{Verbatim}
m (x, _, b) = (x, b)
\end{Verbatim}
We were able to show that \code{(Int, Bool)} is better than either of
@@ -391,7 +390,7 @@ the two candidates. Let's see why the opposite is not true. Could we
find some \code{m'} that would help us reconstruct \code{fst}
and \code{snd} from \code{p} and \code{q}?
\begin{Verbatim}[commandchars=\\\{\}]
\begin{Verbatim}
fst = p . m'
snd = q . m'
\end{Verbatim}
@@ -405,12 +404,12 @@ to factorize \code{fst} and \code{snd}. Because both \code{p} and
\code{q} ignore the second component of the triple, our \code{m'}
can put anything in it. We can have:
\begin{Verbatim}[commandchars=\\\{\}]
\begin{Verbatim}
m' (x, b) = (x, x, b)
\end{Verbatim}
or
\begin{Verbatim}[commandchars=\\\{\}]
\begin{Verbatim}
m' (x, b) = (x, 42, b)
\end{Verbatim}
and so on.
@@ -420,7 +419,7 @@ Putting it all together, given any type \code{c} with two projections
to the cartesian product \code{(a, b)} that factorizes them. In fact,
it just combines \code{p} and \code{q} into a pair.
\begin{Verbatim}[commandchars=\\\{\}]
\begin{Verbatim}
m :: c -> (a, b)
m x = (p x, q x)
\end{Verbatim}
@@ -433,10 +432,10 @@ category using the same universal construction. Such product doesn't
always exist, but when it does, it is unique up to a unique isomorphism.
\begin{quote}
A \textbf{product} of two objects \emph{a} and \emph{b} is the object
\emph{c} equipped with two projections such that for any other object
\emph{c'} equipped with two projections there is a unique morphism
\emph{m} from \emph{c'} to \emph{c} that factorizes those projections.
A \textbf{product} of two objects $a$ and $b$ is the object
$c$ equipped with two projections such that for any other object
$c'$ equipped with two projections there is a unique morphism
$m$ from $c'$ to $c$ that factorizes those projections.
\end{quote}
\noindent
@@ -449,38 +448,38 @@ factorizer :: (c -> a) -> (c -> b) -> (c -> (a, b))
factorizer p q = \x -> (p x, q x)
\end{Verbatim}
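A usage sketch with hypothetical projections p and q out of Int (these particular projections are made up for the example):

```haskell
-- factorizer combines two projections into the unique morphism
-- into the product (a, b).
factorizer :: (c -> a) -> (c -> b) -> (c -> (a, b))
factorizer p q = \x -> (p x, q x)

p :: Int -> Int
p x = x

q :: Int -> Bool
q x = x > 0

m :: Int -> (Int, Bool)
m = factorizer p q

main :: IO ()
main = print (m 5)
```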
\section{Coproduct}\label{coproduct}
\section{Coproduct}
Like every construction in category theory, the product has a dual,
which is called the coproduct. When we reverse the arrows in the product
pattern, we end up with an object \emph{c} equipped with two
\emph{injections}, \code{i} and \code{j}: morphisms from \emph{a}
and \emph{b} to \emph{c}.
pattern, we end up with an object $c$ equipped with two
\emph{injections}, \code{i} and \code{j}: morphisms from $a$
and $b$ to $c$.
\begin{Verbatim}[commandchars=\\\{\}]
\begin{Verbatim}
i :: a -> c
j :: b -> c
\end{Verbatim}
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=1.56250in]{images/coproductpattern.jpg}}
\includegraphics[width=40mm]{images/coproductpattern.jpg}
\end{figure}
\noindent
The ranking is also inverted: object \emph{c} is ``better'' than object
\emph{c'} that is equipped with the injections \emph{i'} and \emph{j'}
if there is a morphism \emph{m} from \emph{c} to \emph{c'} that
The ranking is also inverted: object $c$ is ``better'' than object
$c'$ that is equipped with the injections $i'$ and $j'$
if there is a morphism $m$ from $c$ to $c'$ that
factorizes the injections:
\begin{Verbatim}[commandchars=\\\{\}]
\begin{Verbatim}
i' = m . i
j' = m . j
\end{Verbatim}
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=1.56250in]{images/coproductranking.jpg}}
\includegraphics[width=40mm]{images/coproductranking.jpg}
\end{figure}
\noindent
@@ -489,16 +488,16 @@ any other pattern, is called a coproduct and, if it exists, is unique up
to unique isomorphism.
\begin{quote}
-A \textbf{coproduct} of two objects \emph{a} and \emph{b} is the object
-\emph{c} equipped with two injections such that for any other object
-\emph{c'} equipped with two injections there is a unique morphism
-\emph{m} from \emph{c} to \emph{c'} that factorizes those injections.
+A \textbf{coproduct} of two objects $a$ and $b$ is the object
+$c$ equipped with two injections such that for any other object
+$c'$ equipped with two injections there is a unique morphism
+$m$ from $c$ to $c'$ that factorizes those injections.
\end{quote}
\noindent
In the category of sets, the coproduct is the \emph{disjoint union} of
-two sets. An element of the disjoint union of \emph{a} and \emph{b} is
-either an element of \emph{a} or an element of \emph{b}. If the two sets
+two sets. An element of the disjoint union of $a$ and $b$ is
+either an element of $a$ or an element of $b$. If the two sets
overlap, the disjoint union contains two copies of the common part. You
can think of an element of a disjoint union as being tagged with an
identifier that specifies its origin.
@ -539,7 +538,7 @@ In Haskell, you can combine any data types into a tagged union by
separating data constructors with a vertical bar. The \code{Contact}
example translates into the declaration:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Contact = PhoneNum Int | EmailAddr String
\end{Verbatim}
Here, \code{PhoneNum} and \code{EmailAddr} serve both as
@ -547,7 +546,7 @@ constructors (injections), and as tags for pattern matching (more about
this later). For instance, this is how you would construct a contact
using a phone number:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
helpdesk :: Contact
helpdesk = PhoneNum 2222222
\end{Verbatim}
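The dual role of the constructors can be seen in pattern matching. The following is an editorial sketch, not part of the text; `describe` is a hypothetical helper:

```haskell
data Contact = PhoneNum Int | EmailAddr String

-- The constructors also act as tags that let us dispatch
-- on the origin of a value:
describe :: Contact -> String
describe (PhoneNum n)  = "phone: " ++ show n
describe (EmailAddr s) = "email: " ++ s

helpdesk :: Contact
helpdesk = PhoneNum 2222222
```

Applying `describe` to `helpdesk` selects the `PhoneNum` clause, because that's the tag the value carries.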
@ -556,7 +555,7 @@ Haskell as the primitive pair, the canonical implementation of the
coproduct is a data type called \code{Either}, which is defined in the
standard Prelude as:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
data Either a b = Left a | Right b
\end{Verbatim}
It is parameterized by two types, \code{a} and \code{b} and has two
@ -568,13 +567,13 @@ for the coproduct. Given a candidate type \code{c} and two candidate
injections \code{i} and \code{j}, the factorizer for \code{Either}
produces the factoring function:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
factorizer :: (a -> c) -> (b -> c) -> Either a b -> c
factorizer i j (Left a) = i a
factorizer i j (Right b) = j b
\end{Verbatim}
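As a quick sanity check (an editorial aside, not from the text): this factorizer coincides with the Prelude function `either`, which is the canonical mapping out of the coproduct:

```haskell
-- The factorizer for Either, as defined above:
factorizer :: (a -> c) -> (b -> c) -> Either a b -> c
factorizer i j (Left a)  = i a
factorizer i j (Right b) = j b

-- For any candidate injections i and j, factorizer i j agrees
-- with the Prelude's (either i j) on both Left and Right values.
```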
-\section{Asymmetry}\label{asymmetry}
+\section{Asymmetry}
We've seen two sets of dual definitions: The definition of a terminal
object can be obtained from the definition of the initial object by
@ -610,14 +609,14 @@ Because the product is universal, there is also a (unique) morphism
morphism selects an element from the product set --- it selects a
concrete pair. It also factorizes the two projections:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
p = fst . m
q = snd . m
\end{Verbatim}
When acting on the singleton value \code{()}, the only element of the
singleton set, these two equations become:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
p () = fst (m ())
q () = snd (m ())
\end{Verbatim}
@ -669,7 +668,7 @@ collapsing. They are called \newterm{bijections} and they are truly
symmetric, because they are invertible. In the category of sets, an
isomorphism is the same as a bijection.
-\section{Challenges}\label{challenges}
+\section{Challenges}
\begin{enumerate}
\tightlist
@ -720,12 +719,12 @@ int j(bool b) { return b ? 0: 1; }
allows multiple acceptable morphisms from it to \code{Either}.
\end{enumerate}
-\section{Bibliography}\label{bibliography}
+\section{Bibliography}
\begin{enumerate}
\tightlist
\item
The Catsters,
-\href{https://www.youtube.com/watch?v=upCSDIO9pjc}{Products and
+\urlref{https://www.youtube.com/watch?v=upCSDIO9pjc}{Products and
Coproducts} video.
\end{enumerate}

View File

@ -12,7 +12,7 @@ large subset of composite types.
Let's have a closer look at product and sum types as they appear in
programming.
-\section{Product Types}\label{product-types}
+\section{Product Types}
The canonical implementation of a product of two types in a programming
language is a pair. In Haskell, a pair is a primitive type constructor;
@ -165,7 +165,7 @@ and error prone --- keeping track of which component represents what.
It's often preferable to give names to components. A product type with
named fields is called a record in Haskell, and a \code{struct} in C.
-\section{Records}\label{records}
+\section{Records}
Let's have a look at a simple example. We want to describe chemical
elements by combining two strings, name and symbol; and an integer, the
@ -226,7 +226,7 @@ startsWithSymbol e = symbol e `isPrefixOf` name e
The parentheses could be omitted in this case, because an infix operator
has lower precedence than a function call.
-\section{Sum Types}\label{sum-types}
+\section{Sum Types}
Just as the product in the category of sets gives rise to product types,
the coproduct gives rise to sum types. The canonical implementation of a
@ -392,7 +392,7 @@ You will rarely see \code{union} used as a sum type in C++ because of
severe limitations on what can go into a union. You can't even put a
\code{std::string} into a union because it has a copy constructor.
-\section{Algebra of Types}\label{algebra-of-types}
+\section{Algebra of Types}
Taken separately, product and sum types can be used to define a variety
of useful data structures, but the real strength comes from combining
@ -512,7 +512,7 @@ where we keep replacing \code{x} on the right hand side with
\code{(1 + a*x)}, and use the distributive property. This leads to
the following series:
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{Verbatim}
x = 1 + a*x
x = 1 + a*(1 + a*x) = 1 + a + a*a*x
x = 1 + a + a*a*(1 + a*x) = 1 + a + a*a + a*a*a*x
@ -558,7 +558,7 @@ This analogy goes deeper, and is the basis of the Curry-Howard
isomorphism between logic and type theory. We'll come back to it when we
talk about function types.
-\section{Challenges}\label{challenges}
+\section{Challenges}
\begin{enumerate}
\tightlist

View File

@ -78,7 +78,7 @@ $\idarrow[c]$. It acts like a black hole, compacting
everything into one singularity. We'll see more of this functor when we
discuss limits and colimits.
-\section{Functors in Programming}\label{functors-in-programming}
+\section{Functors in Programming}
Let's get down to earth and talk about programming. We have our category
of types and functions. We can talk about functors that map this
@ -88,7 +88,7 @@ types to types. We've seen examples of such mappings, maybe without
realizing that they were just that. I'm talking about definitions of
types that were parameterized by other types. Let's see a few examples.
-\subsection{The Maybe Functor}\label{the-maybe-functor}
+\subsection{The Maybe Functor}
The definition of \code{Maybe} is a mapping from type \code{a} to
type \code{Maybe a}:
@ -134,7 +134,7 @@ fmap :: (a -> b) -> (Maybe a -> Maybe b)
\begin{figure}[H]
\centering
-\fbox{\includegraphics[width=3.12500in]{images/functormaybe.jpg}}
+\includegraphics[width=3.12500in]{images/functormaybe.jpg}
\end{figure}
\noindent
@ -162,7 +162,7 @@ function \code{fmap} form a functor, we have to prove that
functor laws,'' but they simply ensure the preservation of the structure
of the category.
-\subsection{Equational Reasoning}\label{equational-reasoning}
+\subsection{Equational Reasoning}
To prove the functor laws, I will use \newterm{equational reasoning}, which
is a common proof technique in Haskell. It takes advantage of the fact
@ -275,7 +275,7 @@ the same result. Despite that, the C++ compiler will try to use
equational reasoning if you implement \code{square} as a macro, with
disastrous results.
-\subsection{Optional}\label{optional}
+\subsection{Optional}
Functors are easily expressed in Haskell, but they can be defined in any
language that supports generic programming and higher-order functions.
@ -341,7 +341,7 @@ auto g = fmap(f);
especially if, in the future, there are multiple functors overloading
\code{fmap}? (We'll see more functors soon.)
-\subsection{Typeclasses}\label{typeclasses}
+\subsection{Typeclasses}
So how does Haskell deal with abstracting the functor? It uses the
typeclass mechanism. A typeclass defines a family of types that support
@ -407,7 +407,7 @@ By the way, the \code{Functor} class, as well as its instance
definitions for a lot of simple data types, including \code{Maybe},
are part of the standard Prelude library.
-\subsection{Functor in C++}\label{functor-in-c}
+\subsection{Functor in C++}
Can we try the same approach in C++? A type constructor corresponds to a
template class, like \code{optional}, so by analogy, we would
@ -442,7 +442,7 @@ This definition works, but only because the second argument of
\code{fmap} selects the overload. It totally ignores the more generic
definition of \code{fmap}.
-\subsection{The List Functor}\label{the-list-functor}
+\subsection{The List Functor}
To get some intuition as to the role of functors in programming, we need
to look at more examples. Any type that is parameterized by another type
@ -529,7 +529,7 @@ functor is lost under the usual clutter of iterators and temporaries
the new proposed C++ range library makes the functorial nature of ranges
much more pronounced.
-\subsection{The Reader Functor}\label{the-reader-functor}
+\subsection{The Reader Functor}
Now that you might have developed some intuitions --- for instance,
functors being some kind of containers --- let me show you an example
@ -600,7 +600,7 @@ This combination of the type constructor \code{(->) r}
with the above implementation of \code{fmap} is called the reader
functor.
-\section{Functors as Containers}\label{functors-as-containers}
+\section{Functors as Containers}
We've seen some examples of functors in programming languages that
define general-purpose containers, or at least objects that contain some
@ -700,7 +700,7 @@ role in many constructions. In category theory, it's a special case of
the $\Delta_c$ functor I mentioned earlier --- the endo-functor
case of a black hole. We'll be seeing more of it in the future.
-\section{Functor Composition}\label{functor-composition}
+\section{Functor Composition}
It's not hard to convince yourself that functors between categories
compose, just like functions between sets compose. A composition of two
@ -793,7 +793,7 @@ these things because I find it pretty amazing that we can recognize the
same structures repeating themselves at many levels of abstraction.
We'll see later that functors form categories as well.
-\section{Challenges}\label{challenges}
+\section{Challenges}
\begin{enumerate}
\tightlist

View File

@ -4,7 +4,7 @@ it's interesting to see which type constructors (which correspond to
mappings between objects in a category) can be extended to functors
(which include mappings between morphisms).
-\section{Bifunctors}\label{bifunctors}
+\section{Bifunctors}
Since functors are morphisms in $\Cat$ (the category of categories),
a lot of intuitions about morphisms --- and functions in particular ---
@ -106,8 +106,7 @@ and \code{second} and accepting the default for \code{bimap} (of
course, you may implement all three of them, but then it's up to you to
make sure they are related to each other in this manner).
-\section{Product and Coproduct
-Bifunctors}\label{product-and-coproduct-bifunctors}
+\section{Product and Coproduct Bifunctors}
An important example of a bifunctor is the categorical product --- a
product of two objects that is defined by a \hyperref[products-and-coproducts]{universal
@ -159,8 +158,7 @@ morphisms. We are now one step closer to the full definition of a
monoidal category (we still need to learn about naturality, before we
can get there).
-\section{Functorial Algebraic Data
-Types}\label{functorial-algebraic-data-types}
+\section{Functorial Algebraic Data Types}
We've seen several examples of parameterized data types that turned out
to be functors --- we were able to define \code{fmap} for them.
@ -319,7 +317,7 @@ own typeclasses, but that's a bit more advanced. The idea though is the
same: You provide the behavior for the basic building blocks and sums
and products, and let the compiler figure out the rest.
-\section{Functors in C++}\label{functors-in-c}
+\section{Functors in C++}
If you are a C++ programmer, you obviously are on your own as far as
implementing functors goes. However, you should be able to recognize
@ -408,7 +406,7 @@ instance Functor Tree where
\end{Verbatim}
This implementation can also be automatically derived by the compiler.
-\section{The Writer Functor}\label{the-writer-functor}
+\section{The Writer Functor}
I promised that I would come back to the \hyperref[kleisli-categories-page]{Kleisli
category} I described earlier. Morphisms in that category were
@ -482,8 +480,7 @@ the way Haskell implements polymorphic functions. It's called
an implementation of \code{fmap} for a given type constructor, one
that preserves identity, then it must be unique.
-\section{Covariant and Contravariant
-Functors}\label{covariant-and-contravariant-functors}
+\section{Covariant and Contravariant Functors}
Now that we've reviewed the writer functor, let's go back to the reader
functor. It was based on the partially applied function-arrow type
@ -549,7 +546,7 @@ $G$. It's a mapping from $\cat{C}$ to $\cat{D}$. It maps objects the
same way $F$ does, but when it comes to mapping morphisms, it
reverses them. It takes a morphism $f \Colon b \to a$ in $\cat{C}$, maps
it first to the opposite morphism $f^{op} \Colon a \to b$
-and then uses the functor $F$ on it, to get $F f^{op} \Colon F a \to F b$.
+and then uses the functor $F$ on it, to get $F f^{op} \Colon F\ a \to F\ b$.
Considering that $F a$ is the same as $G a$ and $F b$ is
the same as $G b$, the whole trip can be described as: $G f \Colon (b \to a) \to (G a \to G b)$
@ -562,7 +559,7 @@ by the way --- the kind we've been studying thus far --- are called
\begin{figure}[H]
\centering
-\fbox{\includegraphics[width=60mm]{images/contravariant.jpg}}
+\includegraphics[width=60mm]{images/contravariant.jpg}
\end{figure}
\noindent
@ -599,7 +596,7 @@ With it, we get:
contramap = flip (.)
\end{Verbatim}
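A small usage sketch (an editorial aside, assuming the `Op` newtype wrapper for `a -> r`; the names `isLong` and `intIsLong` are hypothetical): `contramap` pre-composes, adapting the input side of a function:

```haskell
-- Op wraps a function a -> r; contramap pre-composes the wrapped
-- function -- this is flip (.) working under the newtype:
newtype Op r a = Op { runOp :: a -> r }

contramap :: (b -> a) -> Op r a -> Op r b
contramap f (Op g) = Op (g . f)

-- Adapting a String predicate so it can consume Ints:
isLong :: Op Bool String
isLong = Op ((> 3) . length)

intIsLong :: Op Bool Int
intIsLong = contramap show isLong
```

Note how the arrow is reversed: to get an `Op r` value over `Int`, we supply a function *from* `Int` *to* `String`.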
-\section{Profunctors}\label{profunctors}
+\section{Profunctors}
We've seen that the function-arrow operator is contravariant in its
first argument and covariant in the second. Is there a name for such a
@ -632,7 +629,7 @@ the defaults for \code{lmap} and \code{rmap}, or implementing both
\begin{figure}[H]
\centering
-\fbox{\includegraphics[width=60mm]{images/dimap.jpg}}
+\includegraphics[width=60mm]{images/dimap.jpg}
\caption{dimap}
\end{figure}
@ -649,7 +646,7 @@ instance Profunctor (->) where
Profunctors have their application in the Haskell lens library. We'll
see them again when we talk about ends and coends.
-\section{Challenges}\label{challenges}
+\section{Challenges}
\begin{enumerate}
\tightlist

View File

@ -3,7 +3,7 @@ type is different from other types.
\begin{wrapfigure}[12]{R}{0pt}
\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
-\fbox{\includegraphics[width=40mm]{images/set-hom-set.jpg}}}%
+\includegraphics[width=40mm]{images/set-hom-set.jpg}}%
\caption{Hom-set in Set is just a set}
\end{wrapfigure}
@ -21,7 +21,7 @@ a category. They are even called \emph{external} hom-sets.
\pagebreak
\begin{wrapfigure}[11]{R}{0pt}
\raisebox{0pt}[\dimexpr\height]{
-\fbox{\includegraphics[width=40mm]{images/hom-set.jpg}}}%
+\includegraphics[width=40mm]{images/hom-set.jpg}}%
\caption{Hom-set in category C is an external set}
\end{wrapfigure}
@ -31,7 +31,7 @@ function types special. But there is a way, at least in some categories,
to construct objects that represent hom-sets. Such objects are called
\newterm{internal} hom-sets.
-\section{Universal Construction}\label{universal-construction}
+\section{Universal Construction}
Let's forget for a moment that function types are sets and try to
construct a function type, or more generally, an internal hom-set, from
@ -67,7 +67,7 @@ element of $b$).
\begin{figure}
\centering
-\fbox{\includegraphics[width=60mm]{images/functionset.jpg}}
+\includegraphics[width=60mm]{images/functionset.jpg}
\caption{In Set we can pick a function $f$ from a set of functions $z$ and we can
pick an argument $x$ from the set (type) $a$. We get an element $f x$ in the
set (type) $b$.}
@ -85,7 +85,7 @@ $a$ connected to another object $b$ by a morphism $g$.
\begin{figure}
\centering
-\fbox{\includegraphics[width=60mm]{images/functionpattern.jpg}}
+\includegraphics[width=60mm]{images/functionpattern.jpg}
\caption{A pattern of objects and morphisms that is the starting point of the
universal construction}
\end{figure}
@ -110,7 +110,7 @@ That's when we apply our secret weapon: ranking. This is usually done by
requiring that there be a unique mapping between candidate objects --- a
mapping that somehow factorizes our construction. In our case, we'll
decree that $z$ together with the morphism $g$ from
-$z\times a$ to $b$ is \emph{better} than some other
+$z \times a$ to $b$ is \emph{better} than some other
$z'$ with its own application $g'$, if and
only if there is a unique mapping $h$ from $z'$ to
$z$ such that the application of $g'$ factors
@ -119,7 +119,7 @@ looking at the picture.)
\begin{figure}
\centering
-\fbox{\includegraphics[width=60mm]{images/functionranking.jpg}}
+\includegraphics[width=60mm]{images/functionranking.jpg}
\caption{Establishing a ranking between candidates for the function object}
\end{figure}
@ -128,7 +128,7 @@ particular universal construction till now. Given the morphism\\
$h \Colon z'\to z$, we want to close the diagram
that has both $z'$ and $z$ crossed with $a$.
What we really need, given the mapping $h$ from $z'$
-to $z$, is a mapping from $z'\times a$ to $z\times a$.
+to $z$, is a mapping from $z' \times a$ to $z \times a$.
And now, after discussing the \hyperref[functoriality]{functoriality
of the product}, we know how to do it. Because the product itself is a
functor (more precisely an endo-bi-functor), it's possible to lift pairs
@ -136,7 +136,7 @@ of morphisms. In other words, we can define not only products of objects
but also products of morphisms.
Since we are not touching the second component of the product
-$z'\times a$, we will lift the pair of morphisms
+$z' \times a$, we will lift the pair of morphisms
$(h, \id)$, where $\id$ is an identity on $a$.
So, here's how we can factor one application, $g$, out of another
@ -149,7 +149,7 @@ that is universally the best. Let's call this object $a \Rightarrow b$ (think
of this as a symbolic name for one object, not to be confused with a
Haskell typeclass constraint --- I'll discuss different ways of naming
it later). This object comes with its own application --- a morphism
-from $(a \Rightarrow b)\times{}a$ to $b$ --- which we will call
+from $(a \Rightarrow b) \times a$ to $b$ --- which we will call
$eval$. The object $a \Rightarrow b$ is the best if any other
candidate for a function object can be uniquely mapped to it in such a
way that its application morphism $g$ factorizes through
@ -158,7 +158,7 @@ our ranking.
\begin{figure}[H]
\centering
-\fbox{\includegraphics{images/universalfunctionobject.jpg}}
+\includegraphics{images/universalfunctionobject.jpg}
\caption{The definition of the universal function object. This is the same
diagram as above, but now the object $a \Rightarrow b$ is \emph{universal}.}
\end{figure}
@ -191,7 +191,7 @@ object is isomorphic to the hom-set $\Set(a, b)$.
This is why, in Haskell, we interpret the function type
\code{a -> b} as the categorical function object $a \Rightarrow b$.
-\section{Currying}\label{currying}
+\section{Currying}
Let's have a second look at all the candidates for the function object.
This time, however, let's think of the morphism $g$ as a function
@ -316,7 +316,7 @@ def catstr(s1: String)(s2: String) = s1 + s2
Of course that requires some amount of foresight or prescience on the
part of a library writer.
-\section{Exponentials}\label{exponentials}
+\section{Exponentials}
In mathematical literature, the function object, or the internal
hom-object between two objects $a$ and $b$, is often
@ -373,8 +373,7 @@ explains the identification of Haskell's function type with the
categorical exponential object --- which corresponds more to our idea of
\emph{data}.
-\section{Cartesian Closed
-Categories}\label{cartesian-closed-categories}
+\section{Cartesian Closed Categories}
Although I will continue using the category of sets as a model for types
and functions, it's worth mentioning that there is a larger family of
@ -415,8 +414,7 @@ is called a \newterm{bicartesian closed} category. We'll see in the next
section that bicartesian closed categories, of which $\Set$ is a
prime example, have some interesting properties.
-\section{Exponentials and Algebraic Data
-Types}\label{exponentials-and-algebraic-data-types}
+\section{Exponentials and Algebraic Data Types}
The interpretation of function types as exponentials fits very well into
the scheme of algebraic data types. It turns out that all the basic
@ -427,7 +425,7 @@ coproducts, products, and exponentials. We don't have the tools yet to
prove them (such as adjunctions or the Yoneda lemma), but I'll list them
here nevertheless as a source of valuable intuitions.
-\subsection{Zeroth Power}\label{zeroth-power}
+\subsection{Zeroth Power}
\[a^{0} = 1\]
In the categorical interpretation, we replace 0 with the initial object,
@ -455,7 +453,7 @@ matter what they do, nobody can ever execute them. There is no value
that can be passed to \code{absurd}. (And if you manage to pass it a
never ending calculation, it will never return!)
-\subsection{Powers of One}\label{powers-of-one}
+\subsection{Powers of One}
\[1^{a} = 1\]
This identity, when interpreted in $\Set$, restates the definition
@ -468,7 +466,7 @@ We've seen this function before --- it's called \code{unit}. You can
also think of it as the function \code{const} partially applied to
\code{()}.
-\subsection{First Power}\label{first-power}
+\subsection{First Power}
\[a^{1} = a\]
This is a restatement of the observation that morphisms from the
@ -478,7 +476,7 @@ itself. In $\Set$, and in Haskell, the isomorphism is between
elements of the set \code{a} and functions that pick those elements,
\code{() -> a}.
-\subsection{Exponentials of Sums}\label{exponentials-of-sums}
+\subsection{Exponentials of Sums}
\[a^{b+c} = a^{b} \times a^{c}\]
Categorically, this says that the exponential from a coproduct of two
@ -504,16 +502,14 @@ f (Right x) = if x < 0.0 then "Negative double" else "Positive double"
\end{minted}
Here, \code{n} is an \code{Int} and \code{x} is a \code{Double}.
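The isomorphism behind $a^{b+c} = a^{b} \times a^{c}$ can be witnessed directly in Haskell (an editorial sketch; `split` and `unsplit` are names of my choosing): a function out of a sum splits into a pair of functions by pre-composing with the injections, and the Prelude's `either` goes back:

```haskell
-- One direction: split a handler of Either into its two components
-- by pre-composing with the injections Left and Right.
split :: (Either b c -> a) -> (b -> a, c -> a)
split f = (f . Left, f . Right)

-- The other direction is just the Prelude's either.
unsplit :: (b -> a, c -> a) -> Either b c -> a
unsplit (g, h) = either g h
```

Round-tripping a handler through `split` and `unsplit` gives back a function that agrees with the original on every `Left` and `Right` value.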
-\subsection{Exponentials of
-Exponentials}\label{exponentials-of-exponentials}
+\subsection{Exponentials of Exponentials}
\[(a^{b})^{c} = a^{b \times c}\]
This is just a way of expressing currying purely in terms of exponential
objects. A function returning a function is equivalent to a function
from a product (a two-argument function).
-\subsection{Exponentials over
-Products}\label{exponentials-over-products}
+\subsection{Exponentials over Products}
\[(a \times b)^{c} = a^{c} \times b^{c}\]
In Haskell: A function returning a pair is equivalent to a pair of
@ -523,14 +519,14 @@ It's pretty incredible how those simple high-school algebraic identities
can be lifted to category theory and have practical application in
functional programming.
-\section{Curry-Howard Isomorphism}\label{curry-howard-isomorphism}
+\section{Curry-Howard Isomorphism}
I have already mentioned the correspondence between logic and algebraic
data types. The \code{Void} type and the unit type \code{()}
correspond to false and true. Product types and sum types correspond to
-logical conjunction \ensuremath{\wedge} (AND) and disjunction \ensuremath{\vee} (OR). In this scheme, the
-function type we have just defined corresponds to logical implication \ensuremath{\Rightarrow}.
-In other words, the type \code{a->b} can be read as ``if
+logical conjunction $\wedge$ (AND) and disjunction $\vee$ (OR). In this scheme, the
+function type we have just defined corresponds to logical implication $\Rightarrow$.
+In other words, the type \code{a -> b} can be read as ``if
a then b.''
According to the Curry-Howard isomorphism, every type can be interpreted
@ -630,13 +626,13 @@ coproducts, function types, and so on. And, as if to dispel any doubts,
the theory is being formulated in Coq and Agda. Computers are
revolutionizing the world in more than one way.
-\section{Bibliography}\label{bibliography}
+\section{Bibliography}
\begin{enumerate}
\tightlist
\item
Ralph Hinze, Daniel W. H. James,
-\href{http://www.cs.ox.ac.uk/ralf.hinze/publications/WGP10.pdf}{Reason
+\urlref{http://www.cs.ox.ac.uk/ralf.hinze/publications/WGP10.pdf}{Reason
Isomorphically!}. This paper contains proofs of all those high-school
algebraic identities in category theory that I mentioned in this
chapter.

View File

@ -201,7 +201,7 @@ like Haskell, product types, coproduct types, and function types are
built in, rather than being defined by universal constructions; although
there have been attempts at creating categorical programming languages
(see, e.g.,
-\href{http://web.sfc.keio.ac.jp/~hagino/thesis.pdf}{Tatsuya
+\urlref{http://web.sfc.keio.ac.jp/~hagino/thesis.pdf}{Tatsuya
Hagino's thesis}).
Whether used directly or not, categorical definitions justify

View File

@ -2,7 +2,7 @@
everything can be viewed from many angles. Take for instance the
universal construction of the \hyperref[products-and-coproducts]{product}.
Now that we know more about \hyperref[chap-functors]{functors} and
-\hyperref[chap-natural-transformations]{natural transformations}, can we simplify and, possibly, generalize it? Let us
+\hyperref[natural-transformations]{natural transformations}, can we simplify and, possibly, generalize it? Let us
try.
\begin{figure}[H]
@ -177,7 +177,7 @@ diagram is the product of two objects. The product (together with the
two projections) contains the information about both objects. And being
universal means that it has no extraneous junk.
-\section{Limit as a Natural Isomorphism}\label{limit-as-a-natural-isomorphism}
+\section{Limit as a Natural Isomorphism}
There is still something unsatisfying about this definition of a limit.
I mean, it's workable, but we still have this commutativity condition
@ -323,7 +323,7 @@ $Nat(\Delta_c, D)$ can be thought of as a hom-set in the functor
category; so our natural isomorphism relates two hom-sets, which points
at an even more general relationship called an adjunction.
-\section{Examples of Limits}\label{examples-of-limits}
+\section{Examples of Limits}
We've seen that the categorical product is a limit of a diagram
generated by a simple category we called $\cat{2}$.
@ -617,7 +617,7 @@ The dual of the pullback is called the \emph{pushout}. It's based on a
diagram called a span, generated by the category
$1\leftarrow2\rightarrow3$.
-\section{Continuity}\label{continuity}
+\section{Continuity}
I said previously that functors come close to the idea of continuous
mappings of categories, in the sense that they never break existing
@ -696,7 +696,7 @@ calculus. In calculus limits and continuity are defined in terms of open
neighborhoods. Open sets, which define topology, form a category (a
poset).
-\section{Challenges}\label{challenges}
+\section{Challenges}
\begin{enumerate}
\tightlist

View File

@ -63,7 +63,7 @@ identifications --- just enough to uphold the laws --- is called a free
construction. What we have just done is to construct a \newterm{free
monoid} from the set of generators $\{a, b\}$.
-\section{Free Monoid in Haskell}\label{free-monoid-in-haskell}
+\section{Free Monoid in Haskell}
A two-element set in Haskell is equivalent to the type \code{Bool},
and the free monoid generated by this set is equivalent to the type
@ -112,7 +112,7 @@ free monoid by identifying more than the minimum number of elements
required by the laws? I'll show you that this follows directly from the
universal construction.
-\section{Free Monoid Universal Construction}\label{free-monoid-universal-construction}
+\section{Free Monoid Universal Construction}
If you recall our previous experiences with universal constructions, you
might notice that it's not so much about constructing something as about
@ -262,7 +262,7 @@ have been made.
We'll come back to free monoids when we talk about adjunctions.
-\section{Challenges}\label{challenges}
+\section{Challenges}
\begin{enumerate}
\tightlist

View File

@ -332,7 +332,7 @@ valuable in practice. So, surprisingly, even though it's not concerned
with performance at all, category theory provides ample opportunities to
explore alternative implementations that have practical value.
-\section{Challenges}\label{challenges}
+\section{Challenges}
\begin{enumerate}
\tightlist
@ -359,12 +359,12 @@ Pair a = Pair a a
\code{tabulate} and \code{index}.
\end{enumerate}
-\section{Bibliography}\label{bibliography}
+\section{Bibliography}
\begin{enumerate}
\tightlist
\item
The Catsters video about
-\href{https://www.youtube.com/watch?v=4QgjKUzyrhM}{representable
-functors}.\urlfootnote{https://www.youtube.com/watch?v=4QgjKUzyrhM}
+\urlref{https://www.youtube.com/watch?v=4QgjKUzyrhM}{representable
+functors}.
\end{enumerate}

View File

@ -354,10 +354,10 @@ psi fa h = fmap h fa
functor.
\end{enumerate}
-\section{Bibliography}\label{bibliography}
+\section{Bibliography}
\begin{enumerate}
\tightlist
\item
-\href{https://www.youtube.com/watch?v=TLMxHB19khE}{Catsters} video.\urlfootnote{https://www.youtube.com/watch?v=TLMxHB19khE}
+\urlref{https://www.youtube.com/watch?v=TLMxHB19khE}{Catsters} video.
\end{enumerate}

View File

@ -156,8 +156,8 @@ the function being encoded.
The Yoneda embedding also explains some of the alternative
representations of data structures in Haskell. In particular, it
-provides a \href{https://bartoszmilewski.com/2015/07/13/from-lenses-to-yoneda-embedding/}
-{very useful representation}\urlfootnote{https://bartoszmilewski.com/2015/07/13/from-lenses-to-yoneda-embedding/}
+provides a \urlref{https://bartoszmilewski.com/2015/07/13/from-lenses-to-yoneda-embedding/}
+{very useful representation}
of lenses from the \code{Control.Lens} library.
\section{Preorder Example}

View File

@ -6,7 +6,7 @@ hom-sets. Also, you'll see that adjunctions provide a more general
language to describe a lot of constructions we've studied before, so it
might help to review them too.
-\section{Functors}\label{functors}
+\section{Functors}
To begin with, you should really think of functors as mappings of
morphisms --- the view that's emphasized in the Haskell definition of
@ -19,7 +19,7 @@ be composed. So if we want the composition of morphisms to be mapped to
the composition of \newterm{lifted} morphisms, the mapping of their
endpoints is pretty much determined.
-\section{Commuting Diagrams}\label{commuting-diagrams}
+\section{Commuting Diagrams}
A lot of properties of morphisms are expressed in terms of commuting
diagrams. If a particular morphism can be described as a composition of
@ -47,7 +47,7 @@ the mapping of functors. This way commutativity is reduced to the role
of the assembly language for the higher level language of natural
transformations.
-\section{Natural Transformations}\label{natural-transformations}
+\section{Natural Transformations}
In general, natural transformations are very convenient whenever we need
a mapping from morphisms to commuting squares. Two opposing sides of a
@ -198,7 +198,7 @@ generalization of a preorder.
A preorder is a thin category --- all hom-sets are either singletons or
empty. We can visualize a general category as a ``thick'' preorder.
-\section{Challenges}\label{challenges}
+\section{Challenges}
\begin{enumerate}
\tightlist

View File

@ -246,7 +246,7 @@ cannot prove it in finite amount of time. A topos with its more nuanced
truth object provides a more general framework for modeling interesting
logics.
-\section{Challenges}\label{challenges}
+\section{Challenges}
\begin{enumerate}
\tightlist

View File

@ -3,7 +3,7 @@ monads. But there is an alternative universe in which, by chance,
Eugenio Moggi turned his attention to Lawvere theories rather than
monads. Let's explore that universe.
-\section{Universal Algebra}\label{universal-algebra}
+\section{Universal Algebra}
There are many ways of describing algebras at various levels of
abstraction. We try to find a general language to describe things like
@ -268,7 +268,7 @@ The simplest nontrivial example of a Lawvere theory describes the
structure of monoids. It is a single theory that distills the structure
of all possible monoids, in the sense that the models of this theory
span the whole category $\cat{Mon}$ of monoids. We've already seen a
-\hyperref[chap-free-monoids]{universal
+\hyperref[free-monoids]{universal
construction}, which showed that every monoid can be obtained from an
appropriate free monoid by identifying a subset of morphisms. So a
single free monoid already generalizes a whole lot of monoids. There


@ -330,7 +330,7 @@ all of mathematics, this is a very humbling realization.
What's a monad algebra for a monad in $\cat{Span}$?
\end{enumerate}
-\section{Bibliography}\label{bibliography}
+\section{Bibliography}
\begin{enumerate}
\tightlist
\item


@ -576,7 +576,7 @@ unique object, up to isomorphism. That's why we have ``the'' product and
``the'' exponential. This property translates to adjunctions as well: if
a functor has an adjoint, this adjoint is unique up to isomorphism.
-\section{Challenges}\label{challenges}
+\section{Challenges}
\begin{enumerate}
\tightlist


@ -47,7 +47,7 @@ The functor $F$ that's the left adjoint to the forgetful functor
$U$ is the free functor that builds free monoids from their
generator sets. The adjunction follows from the free monoid
universal construction we've discussed before.\footnote{See ch.13 on
-\hyperref[chap-free-monoids]{free monoids}.}
+\hyperref[free-monoids]{free monoids}.}
In terms of hom-sets, we can write this adjunction as:
\[\cat{Mon}(F x, m) \cong \Set(x, U m)\]
@ -126,7 +126,7 @@ For simplicity I used the type \code{Int} rather than
\code{replicate} creates a list of length \code{n} pre-filled with a
given value --- here, the unit.
-\section{Some Intuitions}\label{some-intuitions}
+\section{Some Intuitions}
What follows are some hand-waving arguments. Those kinds of arguments are
far from rigorous, but they help in forming intuitions.


@ -310,7 +310,7 @@ instance (Monoid w) => Monad (Writer w) where
return a = Writer (a, mempty)
\end{Verbatim}
-\subsection{State}\label{state}
+\subsection{State}
Functions that have read/write access to state combine the
embellishments of the \code{Reader} and the \code{Writer}. You may
@ -599,7 +599,7 @@ the action that prints ``World!'' receives, as input, the Universe in
which ``Hello '' is already on the screen. It outputs a new Universe,
with ``Hello World!'' on the screen.
-\section{Conclusion}\label{conclusion}
+\section{Conclusion}
Of course I have just scratched the surface of monadic programming.
Monads not only accomplish, with pure functions, what normally is done


@ -493,7 +493,7 @@ returning the modified version of \code{a}. Similarly, \code{get}
could be implemented to read the value of the \code{s} field from
\code{a}. We'll explore these ideas more in the next section.
-\section{Challenges}\label{challenges}
+\section{Challenges}
\begin{enumerate}
\tightlist


@ -10,6 +10,7 @@
% ********************************************************** %
% HISTORY:
+% * Version 0.2 (October, 2017) - Math typesetting and lots of tweaks!
%
% * Version 0.1 (September, 2017) by Igal Tabachnik.
% Based on the SICP PDF work of Andres Raba et al.
@ -41,7 +42,7 @@ Bartosz Milewski\\
\vspace{1.26em}
\noindent
-Version 0.1, September 2017\\
+Version 0.2, October 2017\\
\vspace{1.6em}
\noindent
@ -94,7 +95,7 @@ PDF compiled by \href{https://github.com/hmemcpy/milewski-ctfp-pdf}{Igal Tabachn
\chapter{Simple Algebraic Data Types}\label{simple-algebraic-data-types}
\subfile{content/1.6/Simple Algebraic Data Types}
-\chapter{Functors}\label{chap-functors}
+\chapter{Functors}\label{functors}
\subfile{content/1.7/Functors}
\chapter{Functoriality}\label{functoriality}
@ -103,7 +104,7 @@ PDF compiled by \href{https://github.com/hmemcpy/milewski-ctfp-pdf}{Igal Tabachn
\chapter{Function Types}\label{function-types}
\subfile{content/1.9/Function Types}
-\chapter{Natural Transformations}\label{chap-natural-transformations}
+\chapter{Natural Transformations}\label{natural-transformations}
\subfile{content/1.10/Natural Transformations}
\part{Part Two}
@ -114,7 +115,7 @@ PDF compiled by \href{https://github.com/hmemcpy/milewski-ctfp-pdf}{Igal Tabachn
\chapter{Limits and Colimits}\label{limits-and-colimits}
\subfile{content/2.2/Limits and Colimits}
-\chapter{Free Monoids}\label{chap-free-monoids}
+\chapter{Free Monoids}\label{free-monoids}
\subfile{content/2.3/Free Monoids}
\chapter{Representable Functors}\label{representable-functors}