mirror of https://github.com/hmemcpy/milewski-ctfp-pdf.git
synced 2024-11-22 20:01:25 +03:00

A whole lotta tweaks! (#184)

* Parameter tweaks
* Image tweaks
* Line tweaks
* Rewording to avoid overflow
* Snippets tweaks
* Removing the ‘final’ keyword to conserve space
* Adding syntax highlighting

This commit is contained in:
parent 67aae992c1
commit fb7962437c
@@ -42,7 +42,8 @@ mkShell {
 xstring
 minifp
 titlecaps
-enumitem;
+enumitem
+l3packages;
 })
 gnumake
 pythonPackages.pygments
@@ -1,3 +1,5 @@
+% !TEX root = ../../ctfp-print.tex
+
 \begin{quote}
 For some time now I've been floating the idea of writing a book about
 category theory that would be targeted at programmers. Mind you, not
@@ -11,7 +13,8 @@ Feynman, but I will try my best. I'm starting by publishing this preface
 --- which is supposed to motivate the reader to learn category theory
 --- in hopes of starting a discussion and soliciting feedback.\footnote{
   You may also watch me teach this material to a live audience, at
-  \href{https://goo.gl/GT2UWU}{https://goo.gl/GT2UWU} (or search "bartosz milewski category theory" on YouTube.)}
+  \href{https://goo.gl/GT2UWU}{https://goo.gl/GT2UWU} (or search
+  ``bartosz milewski category theory'' on YouTube.)}
 \end{quote}

 \lettrine[lhang=0.17]{I}{will attempt}, in the space of a few paragraphs,
@@ -98,11 +101,12 @@ you keep heating water, it will eventually start boiling. We are now in
 the position of a frog that must decide if it should continue swimming
 in increasingly hot water, or start looking for some alternatives.

-\begin{figure}
+\begin{figure}[H]
 \centering
-\includegraphics[width=70mm]{images/img_1299.jpg}
+\includegraphics[width=0.5\textwidth]{images/img_1299.jpg}
 \end{figure}

 \noindent
 One of the forces that are driving the big change is the multicore
 revolution. The prevailing programming paradigm, object oriented
 programming, doesn't buy you anything in the realm of concurrency and
@@ -148,6 +152,6 @@ those foundations if we want to move forward.

 \begin{figure}
 \centering
-\includegraphics[totalheight=8cm]{images/beauvais_interior_supports.jpg}
+\includegraphics[totalheight=0.5\textheight]{images/beauvais_interior_supports.jpg}
 \caption{Ad hoc measures preventing the Beauvais cathedral from collapsing.}
 \end{figure}
@@ -1,4 +1,4 @@
-% !TEX root = ../../ctfp-reader.tex
+% !TEX root = ../../ctfp-print.tex

 \lettrine[lhang=0.17]{A}{category} is an embarrassingly simple concept.
 A category consists of \newterm{objects} and \newterm{arrows} that go between them. That's
@@ -13,7 +13,7 @@ object $B$ to object $C$, then there must be an arrow --- their composition

 \begin{figure}
 \centering
-\includegraphics[width=\textwidth]{images/img_1330.jpg}
+\includegraphics[width=0.8\textwidth]{images/img_1330.jpg}
 \caption{In a category, if there is an arrow going from $A$ to $B$ and an arrow going from $B$ to $C$
 then there must also be a direct arrow from $A$ to $C$ that is their composition. This diagram is not a full
 category because it's missing identity morphisms (see later).}
@@ -33,9 +33,9 @@ functions: $g \circ f$. Notice the right to left order of composition. For some
 people this is confusing. You may be familiar with the pipe notation in
 Unix, as in:

-\begin{Verbatim}
+\begin{snip}{text}
 lsof | grep Chrome
-\end{Verbatim}
+\end{snip}
 or the chevron \code{>>} in F\#, which both
 go from left to right. But in mathematics and in Haskell functions
 compose right to left. It helps if you read $g \circ f$ as ``g \emph{after} f.''
@@ -44,22 +44,22 @@ Let's make this even more explicit by writing some C code. We have one
 function \code{f} that takes an argument of type \code{A} and
 returns a value of type \code{B}:

-\begin{Verbatim}
+\begin{snip}{text}
 B f(A a);
-\end{Verbatim}
+\end{snip}
 and another:

-\begin{Verbatim}
+\begin{snip}{text}
 C g(B b);
-\end{Verbatim}
+\end{snip}
 Their composition is:

-\begin{Verbatim}
+\begin{snip}{text}
 C g_after_f(A a)
 {
     return g(f(a));
 }
-\end{Verbatim}
+\end{snip}
 Here, again, you see right-to-left composition: \code{g(f(a))}; this
 time in C.

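The hunk above retags the C composition snippets. As a runnable sketch of the same idea, with the abstract types `A`, `B`, `C` instantiated to concrete ones (our choice, not the book's):

```cpp
#include <cassert>

// Concrete stand-ins for the book's abstract types A, B, C
using A = int;
using B = double;
using C = int;

B f(A a) { return a / 2.0; }        // f : A -> B
C g(B b) { return (int)(b * 10); }  // g : B -> C

// Composition reads right to left: g after f
C g_after_f(A a)
{
    return g(f(a));
}
```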
@@ -80,14 +80,15 @@ straightforward functional concepts in C++ is a little embarrassing. In
 fact, Haskell will let you use Unicode characters so you can write
 composition as:
 % don't 'mathify' this block
-\begin{Verbatim}
+\begin{snip}{text}
 g ◦ f
-\end{Verbatim}
+\end{snip}

 You can even use Unicode double colons and arrows:
 % don't 'mathify' this block
-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{snipv}
 f \ensuremath{\Colon} A → B
-\end{Verbatim}
+\end{snipv}
 So here's the first Haskell lesson: Double colon means ``has the type
 of\ldots{}'' A function type is created by inserting an arrow between
 two types. You compose two functions by inserting a period between them
@@ -107,7 +108,7 @@ expressed as:
 \[h \circ (g \circ f) = (h \circ g) \circ f = h \circ g \circ f\]
 In (pseudo) Haskell:

-\src{snippet04}
+\src{snippet04}[b]
 (I said ``pseudo,'' because equality is not defined for functions.)

 Associativity is pretty obvious when dealing with functions, but it may
@@ -129,9 +130,9 @@ identity function that just returns back its argument. The
 implementation is the same for every type, which means this function is
 universally polymorphic. In C++ we could define it as a template:

-\begin{Verbatim}
+\begin{snip}{cpp}
 template<class T> T id(T x) { return x; }
-\end{Verbatim}
+\end{snip}
 Of course, in C++ nothing is that simple, because you have to take into
 account not only what you're passing but also how (that is, by value, by
 reference, by const reference, by move, and so on).
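The `id` template in the hunk above satisfies the category identity laws, $f \circ \mathrm{id} = \mathrm{id} \circ f = f$. A small spot-check (the `square` helper is ours, for illustration):

```cpp
#include <cassert>

template<class T> T id(T x) { return x; }

int square(int x) { return x * x; }

// Composing with id on either side leaves square unchanged
int square_after_id(int x) { return square(id(x)); }
int id_after_square(int x) { return id(square(x)); }
```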
@@ -1,4 +1,3 @@
 def predToStr[A]: Op[Boolean, A] => Op[String, A] = {
-  case Op(f) =>
-    Op(x => if (f(x)) "T" else "F")
+  case Op(f) => Op(x => if (f(x)) "T" else "F")
 }
@@ -1,2 +1 @@
-(op.contramap(func) compose predToStr) ==
-  (predToStr compose op.contramap(func))
+(op.contramap(func) compose predToStr) == (predToStr compose op.contramap(func))
@@ -1,20 +1,19 @@
-% !TEX root = ../../ctfp-reader.tex
+% !TEX root = ../../ctfp-print.tex

 \lettrine[lhang=0.17]{W}{e talked about} functors as mappings between categories that preserve
 their structure.

-\begin{wrapfigure}[13]{R}{0pt}
-\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
-\includegraphics[width=60mm]{images/1_functors.jpg}}%
-\end{wrapfigure}
-
 \noindent
 A functor ``embeds'' one category in another. It may
 collapse multiple things into one, but it never breaks connections. One
 way of thinking about it is that with a functor we are modeling one
 category inside another. The source category serves as a model, a
 blueprint, for some structure that's part of the target category.

+\begin{figure}[H]
+\centering\includegraphics[width=0.4\textwidth]{images/1_functors.jpg}
+\end{figure}
+
 \noindent
 There may be many ways of embedding one category in another. Sometimes
 they are equivalent, sometimes very different. One may collapse the
 whole source category into one object, another may map every object to a
@@ -31,7 +30,7 @@ $G a$.

 \begin{figure}[H]
 \centering
-\includegraphics[width=40mm]{images/2_natcomp.jpg}
+\includegraphics[width=0.3\textwidth]{images/2_natcomp.jpg}
 \end{figure}

 \noindent
@@ -78,7 +77,7 @@ that complete the diagram in \emph{D}:

 \begin{figure}[H]
 \centering
-\includegraphics[width=40mm]{images/3_naturality.jpg}
+\includegraphics[width=0.4\textwidth]{images/3_naturality.jpg}
 \end{figure}

 \noindent
@@ -96,7 +95,7 @@ $\alpha_a$ along $f$:

 \begin{figure}[H]
 \centering
-\includegraphics[width=40mm]{images/4_transport.jpg}
+\includegraphics[width=0.4\textwidth]{images/4_transport.jpg}
 \end{figure}

 \noindent
@@ -116,7 +115,7 @@ commuting naturality square in $\cat{D}$ for every morphism in $\cat{C}$.

 \begin{figure}[H]
 \centering
-\includegraphics[width=40mm]{images/naturality.jpg}
+\includegraphics[width=0.4\textwidth]{images/naturality.jpg}
 \end{figure}

 \noindent
@@ -146,9 +145,9 @@ $F a$. Another functor, \code{G}, maps it to $G a$.
 The component of a natural transformation \code{alpha} at \code{a}
 is a function from $F a$ to $G a$. In pseudo-Haskell:

-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{snipv}
 alpha\textsubscript{a} :: F a -> G a
-\end{Verbatim}
+\end{snipv}
 A natural transformation is a polymorphic function that is defined for
 all types \code{a}:

@@ -162,9 +161,9 @@ Keep in mind that it's really a family of functions parameterized by
 \code{a}. This is another example of the terseness of the Haskell
 syntax. A similar construct in C++ would be slightly more verbose:

-\begin{Verbatim}
+\begin{snip}{cpp}
 template<class A> G<A> alpha(F<A>);
-\end{Verbatim}
+\end{snip}
 There is a more profound difference between Haskell's polymorphic
 functions and C++ generic functions, and it's reflected in the way these
 functions are implemented and type-checked. In Haskell, a polymorphic
@@ -196,15 +195,15 @@ In Haskell, the action of a functor \code{G} on a morphism \code{f}
 is implemented using \code{fmap}. I'll first write it in
 pseudo-Haskell, with explicit type annotations:

-\begin{Verbatim}[commandchars=\\\{\}]
+\begin{snipv}
 fmap\textsubscript{G} f . alpha\textsubscript{a} = alpha\textsubscript{b} . fmap\textsubscript{F} f
-\end{Verbatim}
+\end{snipv}
 Because of type inference, these annotations are not necessary, and the
 following equation holds:

-\begin{Verbatim}
+\begin{snip}{text}
 fmap f . alpha = alpha . fmap f
-\end{Verbatim}
+\end{snip}
 This is still not real Haskell --- function equality is not expressible
 in code --- but it's an identity that can be used by the programmer in
 equational reasoning; or by the compiler, to implement optimizations.
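The naturality condition `fmap f . alpha = alpha . fmap f` in the hunk above can be spot-checked on concrete functors. Below, `alpha` is a "safe head" transformation from the `vector` functor to the `optional` functor; the functors and names are our illustrative choices, not from the text:

```cpp
#include <optional>
#include <vector>

// alpha : vector<A> -> optional<A>, a natural transformation ("safe head")
template<class A>
std::optional<A> alpha(const std::vector<A>& v) {
    if (v.empty()) return std::nullopt;
    return v.front();
}

// fmap for the vector functor: apply f to every element
template<class A, class F>
auto fmap_vec(const std::vector<A>& v, F f) {
    std::vector<decltype(f(v.front()))> out;
    for (const auto& x : v) out.push_back(f(x));
    return out;
}

// fmap for the optional functor: apply f under the optional
template<class A, class F>
auto fmap_opt(const std::optional<A>& o, F f)
    -> std::optional<decltype(f(*o))> {
    if (!o) return std::nullopt;
    return f(*o);
}
```

Both orders of travel around the naturality square give the same result, for empty and non-empty inputs alike.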
@@ -217,8 +216,8 @@ limitations translate into equational theorems about such functions. In
 the case of functions that transform functors, free theorems are the
 naturality conditions.\footnote{
   You may read more about free theorems in my
-  blog \href{https://bartoszmilewski.com/2014/09/22/parametricity-money-for-nothing-and-theorems-for-free/}{Parametricity:
-  Money for Nothing and Theorems for Free}.}
+  blog \href{https://bartoszmilewski.com/2014/09/22/parametricity-money-for-nothing-and-theorems-for-free/}{``Parametricity:
+  Money for Nothing and Theorems for Free}.''}

 One way of thinking about functors in Haskell that I mentioned earlier
 is to consider them generalized containers. We can continue this analogy
@@ -358,7 +357,7 @@ But since the two functors are not covariant, this is not a natural
 transformation in $\Hask$. However, because they are both
 contravariant, they satisfy the ``opposite'' naturality condition:

-\src{snippet24}
+\src{snippet24}[b]
 Notice that the function \code{f} must go in the opposite direction
 than what you'd use with \code{fmap}, because of the signature of
 \code{contramap}:
@@ -407,7 +406,7 @@ $\beta \cdot \alpha$ --- the composition of two natural transformations $\beta$

 \begin{figure}[H]
 \centering
-\includegraphics[width=40mm]{images/5_vertical.jpg}
+\includegraphics[width=0.4\textwidth]{images/5_vertical.jpg}
 \end{figure}

 \noindent
@@ -417,7 +416,7 @@ composition is indeed a natural transformation from F to H:

 \begin{figure}[H]
 \centering
-\includegraphics[width=40mm]{images/6_verticalnaturality.jpg}
+\includegraphics[width=0.35\textwidth]{images/6_verticalnaturality.jpg}
 \end{figure}

 \noindent
@@ -440,9 +439,10 @@ horizontal composition shortly.

 \begin{figure}[H]
 \centering
-\includegraphics[width=40mm]{images/6a_vertical.jpg}
+\includegraphics[width=0.3\textwidth]{images/6a_vertical.jpg}
 \end{figure}

 \noindent
 The functor category between categories $\cat{C}$ and $\cat{D}$ is written as
 $\cat{Fun(C, D)}$, or $\cat{{[}C, D{]}}$, or sometimes as
 $\cat{D^C}$. This last notation suggests that a functor category itself
@@ -459,9 +459,10 @@ $\cat{Cat(C, D)}$ is a set of functors between two categories $\cat{C}$ and $\cat{D}$.

 \begin{figure}[H]
 \centering
-\includegraphics[width=40mm]{images/7_cathomset.jpg}
+\includegraphics[width=0.3\textwidth]{images/7_cathomset.jpg}
 \end{figure}

 \noindent
 A functor category $\cat{{[}C, D{]}}$ is also a set of functors between two
 categories (plus natural transformations as morphisms). Its objects are
 the same as the members of $\cat{Cat(C, D)}$. Moreover, a functor category,
@@ -516,7 +517,7 @@ In the case of $\Cat$ seen as a $\cat{2}$-category we have:

 \begin{figure}[H]
 \centering
-\includegraphics[width=40mm]{images/8_cat-2-cat.jpg}
+\includegraphics[width=0.3\textwidth]{images/8_cat-2-cat.jpg}
 \end{figure}

 \noindent
@@ -547,7 +548,7 @@ respectively, on functors $F$ and $G$:

 \begin{figure}[H]
 \centering
-\includegraphics[width=40mm]{images/10_horizontal.jpg}
+\includegraphics[width=0.4\textwidth]{images/10_horizontal.jpg}
 \end{figure}

 \noindent
@@ -563,7 +564,7 @@ from $G \circ F$ to $G' \circ F'$? Let me sketch the construction.

 \begin{figure}[H]
 \centering
-\includegraphics[width=55mm]{images/9_horizontal.jpg}
+\includegraphics[width=0.5\textwidth]{images/9_horizontal.jpg}
 \end{figure}

 \noindent
@@ -620,7 +621,7 @@ We can think of it as connecting these two categories.

 \begin{figure}[H]
 \centering
-\includegraphics[width=60mm]{images/sideways.jpg}
+\includegraphics[width=0.5\textwidth]{images/sideways.jpg}
 \end{figure}

 \noindent
@@ -711,14 +712,14 @@ natural transformation is a special type of polymorphic function.
 transformations between different \code{Op} functors. Here's one
 choice:

-\begin{Verbatim}
+\begin{snip}{haskell}
 op :: Op Bool Int
 op = Op (\x -> x > 0)
-\end{Verbatim}
+\end{snip}
 and

-\begin{Verbatim}
+\begin{snip}{haskell}
 f :: String -> Int
 f x = read x
-\end{Verbatim}
+\end{snip}
 \end{enumerate}
@@ -11,11 +11,12 @@ a thought experiment. Imagine millions of monkeys at computer keyboards
 happily hitting random keys, producing programs, compiling, and running
 them.

-\begin{figure}
+\begin{figure}[H]
 \centering
-\includegraphics[width=40mm]{images/img_1329.jpg}
+\includegraphics[width=0.3\textwidth]{images/img_1329.jpg}
 \end{figure}

 \noindent
 With machine language, any combination of bytes produced by monkeys
 would be accepted and run. But with higher level languages, we do
 appreciate the fact that a compiler is able to detect lexical and
@@ -69,8 +70,8 @@ In Haskell, except on rare occasions, type annotations are purely
 optional. Programmers tend to use them anyway, because they can tell a
 lot about the semantics of code, and they make compilation errors easier
 to understand. It's a common practice in Haskell to start a project by
-designing the types. Later, type annotations drive the implementation
-and become compiler-enforced comments.
+designing the types. \sloppy{Later, type annotations drive the implementation
+and become compiler-enforced comments.}

 Strong static typing is often used as an excuse for not testing the
 code. You may sometimes hear Haskell programmers saying, ``If it
@@ -164,7 +165,7 @@ bottom, which is a member of any type, including \code{Bool}. You can
 even write:

 \src{snippet04}
-(without the \code{x}) because the bottom is also a member of the type\\
+(without the \code{x}) because the bottom is also a member of the type
 \code{Bool -> Bool}.

 Functions that may return bottom are called partial, as opposed to total
@@ -235,7 +236,7 @@ The function \code{product} multiplies all elements of a list. That's
 just like a definition of factorial taken from a math text. Compare this
 with C:

-\begin{Verbatim}
+\begin{snip}{c}
 int fact(int n) {
     int i;
     int result = 1;
@@ -243,7 +244,7 @@ int fact(int n) {
         result *= i;
     return result;
 }
-\end{Verbatim}
+\end{snip}
 Need I say more?

 Okay, I'll be the first to admit that this was a cheap shot! A factorial
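The Haskell definition referenced in the hunk builds the factorial as `product [1..n]`. The same range-then-fold shape can be approximated in C++ with `iota` and `accumulate` (a sketch of ours, not code from the book):

```cpp
#include <functional>
#include <numeric>
#include <vector>

// fact n = product [1..n], written as a range followed by a fold
long fact(int n) {
    std::vector<long> xs(n);
    std::iota(xs.begin(), xs.end(), 1L);  // xs = [1, 2, ..., n]
    return std::accumulate(xs.begin(), xs.end(), 1L,
                           std::multiplies<long>());
}
```

For `n = 0` the range is empty and the fold returns its unit, 1, matching `product [] = 1`.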
@@ -323,14 +324,14 @@ anything, as in the Latin adage ``ex falso sequitur quodlibet.''

 Next is the type that corresponds to a singleton set. It's a type that
 has only one possible value. This value just ``is.'' You might not
-immediately recognise it as such, but that is the C++ \code{void}.
+immediately recognize it as such, but that is the C++ \code{void}.
 Think of functions from and to this type. A function from \code{void}
 can always be called. If it's a pure function, it will always return the
 same result. Here's an example of such a function:

-\begin{Verbatim}
+\begin{snip}{c}
 int f44() { return 44; }
-\end{Verbatim}
+\end{snip}
 You might think of this function as taking ``nothing'', but as we've
 just seen, a function that takes ``nothing'' can never be called because
 there is no value representing ``nothing.'' So what does this function
@@ -350,9 +351,9 @@ defines \code{f44} by pattern matching the only constructor for unit,
 namely \code{()}, and producing the number 44. You call this function
 by providing the unit value \code{()}:

-\begin{Verbatim}
+\begin{snip}{c}
 f44 ()
-\end{Verbatim}
+\end{snip}
 Notice that every function of unit is equivalent to picking a single
 element from the target type (here, picking the \code{Integer} 44). In
 fact you could think of \code{f44} as a different representation for
@@ -392,10 +393,10 @@ to unit type? Of course we'll call it \code{unit}:
 \src{snippet10}
 In C++ you would write this function as:

-\begin{Verbatim}
+\begin{snip}{cpp}
 template<class T>
 void unit(T) {}
-\end{Verbatim}
+\end{snip}
 Next in the typology of types is a two-element set. In C++ it's called
 \code{bool} and in Haskell, predictably, \code{Bool}. The difference
 is that in C++ \code{bool} is a built-in type, whereas in Haskell it
@@ -406,12 +407,12 @@ can be defined as follows:
 \code{True} or \code{False}.) In principle, one should also be able
 to define a Boolean type in C++ as an enumeration:

-\begin{Verbatim}
+\begin{snip}{cpp}
 enum bool {
     true,
     false
 };
-\end{Verbatim}
+\end{snip}
 but C++ \code{enum} is secretly an integer. The C++11
 ``\code{enum class}'' could have been used instead, but then you
 would have to qualify its values with the class name, as in
@@ -462,24 +463,24 @@ have the form \code{ctype::is(alpha, c)}, \code{ctype::is(digit, c)}, etc.
 \item
   The factorial function from the example in the text.
 \item
-\begin{Verbatim}
+\begin{minted}{cpp}
 std::getchar()
-\end{Verbatim}
+\end{minted}
 \item
-\begin{Verbatim}
+\begin{minted}{cpp}
 bool f() {
     std::cout << "Hello!" << std::endl;
     return true;
 }
-\end{Verbatim}
+\end{minted}
 \item
-\begin{Verbatim}
+\begin{minted}{cpp}
 int f(int x) {
     static int y = 0;
     y += x;
     return y;
 }
-\end{Verbatim}
+\end{minted}
 \end{enumerate}
 \item
   How many different functions are there from \code{Bool} to
@@ -1,4 +1,4 @@
-% !TEX root = ../../ctfp-reader.tex
+% !TEX root = ../../ctfp-print.tex

 \lettrine[lhang=0.17]{Y}{ou can get} real appreciation for categories by studying a variety of
 examples. Categories come in all shapes and sizes and often pop up in
@@ -139,14 +139,14 @@ A word about Haskell syntax: Any infix operator can be turned into a
 two-argument function by surrounding it with parentheses. Given two
 strings, you can concatenate them by inserting \code{++} between them:

-\begin{Verbatim}
+\begin{snip}{haskell}
 "Hello " ++ "world!"
-\end{Verbatim}
+\end{snip}
 or by passing them as two arguments to the parenthesized \code{(++)}:

-\begin{Verbatim}
+\begin{snip}{haskell}
 (++) "Hello " "world!"
-\end{Verbatim}
+\end{snip}
 Notice that arguments to a function are not separated by commas or
 surrounded by parentheses. (This is probably the hardest thing to get
 used to when learning Haskell.)
@@ -154,15 +154,15 @@ used to when learning Haskell.)
 It's worth emphasizing that Haskell lets you express equality of
 functions, as in:

-\begin{Verbatim}
+\begin{snip}{haskell}
 mappend = (++)
-\end{Verbatim}
+\end{snip}
 Conceptually, this is different than expressing the equality of values
 produced by functions, as in:

-\begin{Verbatim}
+\begin{snip}{haskell}
 mappend s1 s2 = (++) s1 s2
-\end{Verbatim}
+\end{snip}
 The former translates into equality of morphisms in the category
 $\Hask$ (or $\Set$, if we ignore bottoms, which is the name
 for never-ending calculations). Such equations are not only more
@@ -179,7 +179,7 @@ so this might be a little confusing to the beginner.)
 The closest one can get to declaring a monoid in C++ would be to use the
 (proposed) syntax for concepts.

-\begin{Verbatim}
+\begin{snip}{cpp}
 template<class T>
 T mempty = delete;

@@ -191,7 +191,7 @@ template<class M>
     { mempty<M> } -> M;
     { mappend(m, m); } -> M;
 };
-\end{Verbatim}
+\end{snip}
 The first definition uses a value template (also proposed). A
 polymorphic value is a family of values --- a different value for every
 type.
@@ -207,14 +207,14 @@ type) that tests whether there exist appropriate definitions of
 An instantiation of the Monoid concept can be accomplished by providing
 appropriate specializations and overloads:

-\begin{Verbatim}
+\begin{snip}{cpp}
 template<>
 std::string mempty<std::string> = {""};

 std::string mappend(std::string s1, std::string s2) {
     return s1 + s2;
 }
-\end{Verbatim}
+\end{snip}

 \section{Monoid as Category}

@@ -251,11 +251,6 @@ adders follows from the second interpretation of the type signature of
 tells us that \code{mappend} maps an element of a monoid set to a
 function acting on that set.

-\begin{wrapfigure}[11]{R}{0pt}
-\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
-\includegraphics[width=40mm]{images/monoid.jpg}}
-\end{wrapfigure}
-
 Now I want you to forget that you are dealing with the set of natural
 numbers and just think of it as a single object, a blob with a bunch of
 morphisms --- the adders. A monoid is a single object category. In fact
@@ -263,6 +258,12 @@ the name monoid comes from Greek \emph{mono}, which means single. Every
 monoid can be described as a single object category with a set of
 morphisms that follow appropriate rules of composition.

+\begin{figure}[H]
+\centering
+\includegraphics[width=0.35\textwidth]{images/monoid.jpg}
+\end{figure}
+
+\noindent
 String concatenation is an interesting case, because we have a choice of
 defining right appenders and left appenders (or \emph{prependers}, if
 you will). The composition tables of the two models are a mirror reverse
@@ -286,12 +287,13 @@ the rules of category. The identity morphism is the neutral element of
 this product. So we can always recover a set monoid from a category
 monoid. For all intents and purposes they are one and the same.

-\begin{figure}
+\begin{figure}[H]
 \centering
-\includegraphics[width=60mm]{images/monoidhomset.jpg}
+\includegraphics[width=0.4\textwidth]{images/monoidhomset.jpg}
 \caption{Monoid hom-set seen as morphisms and as points in a set.}
 \end{figure}

 \noindent
 There is just one little nit for mathematicians to pick: morphisms don't
 have to form a set. In the world of categories there are things larger
 than sets. A category in which morphisms between any two objects form a
BIN src/content/1.3/images/monoid.png (normal file)
Binary file not shown. After Width: | Height: | Size: 208 KiB
@@ -1,2 +1,5 @@
 upCase :: String -> Writer String
 upCase s = (map toUpper s, "upCase ")
+
+toWords :: String -> Writer [String]
+toWords s = (words s, "toWords ")
@@ -1,2 +1,2 @@
-toWords :: String -> Writer [String]
-toWords s = (words s, "toWords ")
+process :: String -> Writer [String]
+process = upCase >=> toWords
@@ -1,2 +0,0 @@
-process :: String -> Writer [String]
-process = upCase >=> toWords
@@ -1,2 +1,5 @@
 val upCase: String => Writer[String] =
   s => (s.toUpperCase, "upCase ")
+
+val toWords: String => Writer[List[String]] =
+  s => (s.split(' ').toList, "toWords ")
@@ -1,2 +1,4 @@
-val toWords: String => Writer[List[String]] =
-  s => (s.split(' ').toList, "toWords ")
+val process: String => Writer[List[String]] = {
+  import kleisli._
+  upCase >=> toWords
+}
@@ -1,4 +0,0 @@
-val process: String => Writer[List[String]] = {
-  import kleisli._
-  upCase >=> toWords
-}
@@ -1,4 +1,4 @@
-% !TEX root = ../../ctfp-reader.tex
+% !TEX root = ../../ctfp-print.tex

 \lettrine[lhang=0.17]{Y}{ou've seen how to model} types and pure functions as a category. I also
 mentioned that there is a way to model side effects, or non-pure
@@ -7,14 +7,14 @@ functions that log or trace their execution. Something that, in an
 imperative language, would likely be implemented by mutating some global
 state, as in:

-\begin{Verbatim}
+\begin{snip}{cpp}
 string logger;

 bool negate(bool b) {
     logger += "Not so! ";
     return !b;
 }
-\end{Verbatim}
+\end{snip}
 You know that this is not a pure function, because its memoized version
 would fail to produce a log. This function has \newterm{side effects}.

@@ -27,25 +27,25 @@ have to pass the log explicitly, in and out. Let's do that by adding a
 string argument, and pairing regular output with a string that contains
 the updated log:

-\begin{Verbatim}
+\begin{snip}{cpp}
 pair<bool, string> negate(bool b, string logger) {
     return make_pair(!b, logger + "Not so! ");
 }
-\end{Verbatim}
+\end{snip}
 This function is pure, it has no side effects, it returns the same pair
 every time it's called with the same arguments, and it can be memoized
 if necessary. However, considering the cumulative nature of the log,
 you'd have to memoize all possible histories that can lead to a given
 call. There would be a separate memo entry for:

-\begin{Verbatim}
+\begin{snip}{cpp}
 negate(true, "It was the best of times. ");
-\end{Verbatim}
+\end{snip}
 and

-\begin{Verbatim}
+\begin{snip}{cpp}
 negate(true, "It was the worst of times. ");
-\end{Verbatim}
+\end{snip}
 and so on.

 It's also not a very good interface for a library function. The callers
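The text notes that the log-threading `negate` is pure and memoizable, but only per log history. A naive memo table keyed on both arguments makes that concrete (our sketch; the function is renamed `negate_p` to avoid clashing with other `negate`s):

```cpp
#include <map>
#include <string>
#include <utility>

// The pure, log-threading negate from the text (renamed)
std::pair<bool, std::string> negate_p(bool b, std::string logger) {
    return std::make_pair(!b, logger + "Not so! ");
}

// Memoized version: every distinct log history gets its own cache entry,
// which is exactly why this interface scales poorly
std::pair<bool, std::string> memo_negate(bool b, const std::string& logger) {
    static std::map<std::pair<bool, std::string>,
                    std::pair<bool, std::string>> cache;
    auto key = std::make_pair(b, logger);
    auto it = cache.find(key);
    if (it == cache.end())
        it = cache.emplace(key, negate_p(b, logger)).first;
    return it->second;
}
```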
@@ -62,11 +62,11 @@ continuous log is a separate concern. We still want the function to
 produce a string, but we'd like to unburden it from producing a log. So
 here's the compromise solution:

-\begin{Verbatim}
+\begin{snip}{cpp}
 pair<bool, string> negate(bool b) {
     return make_pair(!b, "Not so! ");
 }
-\end{Verbatim}
+\end{snip}
 The idea is that the log will be aggregated \emph{between} function
 calls.

@@ -74,25 +74,25 @@ To see how this can be done, let's switch to a slightly more realistic
 example. We have one function from string to string that turns lower
 case characters to upper case:

-\begin{Verbatim}
+\begin{snip}{cpp}
 string toUpper(string s) {
     string result;
     int (*toupperp)(int) = &toupper; // toupper is overloaded
     transform(begin(s), end(s), back_inserter(result), toupperp);
     return result;
 }
-\end{Verbatim}
+\end{snip}
 and another that splits a string into a vector of strings, breaking it
 on whitespace boundaries:

-\begin{Verbatim}
+\begin{snip}{cpp}
 vector<string> toWords(string s) {
     return words(s);
 }
-\end{Verbatim}
+\end{snip}
 The actual work is done in the auxiliary function \code{words}:

-\begin{Verbatim}
+\begin{snip}{cpp}
 vector<string> words(string s) {
     vector<string> result{""};
     for (auto i = begin(s); i != end(s); ++i)
@@ -104,11 +104,11 @@ vector<string> words(string s) {
     }
     return result;
 }
-\end{Verbatim}
+\end{snip}

-\begin{wrapfigure}[11]{R}{0pt}
-\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
-\includegraphics[width=1.83333in]{images/piggyback.jpg}}%
+\begin{wrapfigure}{r}{0pt}
+\raisebox{14pt}[\dimexpr\height-0.75\baselineskip\relax]{
+\includegraphics[width=0.4\textwidth]{images/piggyback.jpg}}%
 \end{wrapfigure}

 \noindent
@ -121,13 +121,13 @@ in a generic way by defining a template \code{Writer} that
|
||||
encapsulates a pair whose first component is a value of arbitrary type
|
||||
\code{A} and the second component is a string:
|
||||
|
||||
\begin{Verbatim}
|
||||
\begin{snip}{cpp}
|
||||
template<class A>
|
||||
using Writer = pair<A, string>;
|
||||
\end{Verbatim}
|
||||
\end{snip}
|
||||
Here are the embellished functions:
|
||||
|
||||
\begin{Verbatim}
|
||||
\begin{snip}{cpp}
|
||||
Writer<string> toUpper(string s) {
|
||||
string result;
|
||||
int (*toupperp)(int) = &toupper;
|
||||
@ -138,18 +138,18 @@ Writer<string> toUpper(string s) {
|
||||
Writer<vector<string>> toWords(string s) {
|
||||
return make_pair(words(s), "toWords ");
|
||||
}
|
||||
\end{Verbatim}
|
||||
\end{snip}
|
||||
We want to compose these two functions into another embellished function
|
||||
that uppercases a string and splits it into words, all the while
|
||||
producing a log of those actions. Here's how we may do it:
|
||||
|
||||
\begin{Verbatim}
|
||||
\begin{snip}{cpp}
|
||||
Writer<vector<string>> process(string s) {
|
||||
auto p1 = toUpper(s);
|
||||
auto p2 = toWords(p1.first);
|
||||
return make_pair(p2.first, p1.second + p2.second);
|
||||
}
|
||||
\end{Verbatim}
|
||||
\end{snip}
|
||||
We have accomplished our goal: The aggregation of the log is no longer
|
||||
the concern of the individual functions. They produce their own
|
||||
messages, which are then, externally, concatenated into a larger log.
|
||||
@ -177,32 +177,32 @@ important point is that this morphism is still considered an arrow
|
||||
between the objects \code{int} and \code{bool}, even though the
|
||||
embellished function returns a pair:
|
||||
|
||||
\begin{Verbatim}
|
||||
\begin{snip}{cpp}
|
||||
pair<bool, string> isEven(int n) {
|
||||
return make_pair(n % 2 == 0, "isEven ");
|
||||
}
|
||||
\end{Verbatim}
|
||||
\end{snip}
|
||||
By the laws of a category, we should be able to compose this morphism
|
||||
with another morphism that goes from the object \code{bool} to
|
||||
whatever. In particular, we should be able to compose it with our
|
||||
earlier \code{negate}:
|
||||
|
||||
\begin{Verbatim}
|
||||
\begin{snip}{cpp}
|
||||
pair<bool, string> negate(bool b) {
|
||||
return make_pair(!b, "Not so! ");
|
||||
}
|
||||
\end{Verbatim}
|
||||
\end{snip}
|
||||
Obviously, we cannot compose these two morphisms the same way we compose
|
||||
regular functions, because of the input/output mismatch. Their
|
||||
composition should look more like this:
|
||||
|
||||
\begin{Verbatim}
|
||||
\begin{snip}{cpp}
|
||||
pair<bool, string> isOdd(int n) {
|
||||
pair<bool, string> p1 = isEven(n);
|
||||
pair<bool, string> p2 = negate(p1.first);
|
||||
return make_pair(p2.first, p1.second + p2.second);
|
||||
}
|
||||
\end{Verbatim}
|
||||
\end{snip}
|
||||
So here's the recipe for the composition of two morphisms in this new
|
||||
category we are constructing:
|
||||
|
||||
@ -227,7 +227,7 @@ corresponding to three objects in our category. It should take two
|
||||
embellished functions that are composable according to our rules, and
|
||||
return a third embellished function:
|
||||
|
||||
\begin{Verbatim}
|
||||
\begin{snip}{cpp}
|
||||
template<class A, class B, class C>
|
||||
function<Writer<C>(A)> compose(function<Writer<B>(A)> m1,
|
||||
function<Writer<C>(B)> m2)
|
||||
@ -238,21 +238,21 @@ function<Writer<C>(A)> compose(function<Writer<B>(A)> m1,
|
||||
return make_pair(p2.first, p1.second + p2.second);
|
||||
};
|
||||
}
|
||||
\end{Verbatim}
|
||||
\end{snip}
|
||||
Now we can go back to our earlier example and implement the composition
|
||||
of \code{toUpper} and \code{toWords} using this new template:
|
||||
|
||||
\begin{minted}[breaklines,fontsize=\small]{text}
|
||||
\begin{snip}{cpp}
|
||||
Writer<vector<string>> process(string s) {
|
||||
return compose<string, string, vector<string>>(toUpper, toWords)(s);
|
||||
}
|
||||
\end{minted}
|
||||
\end{snip}
|
||||
There is still a lot of noise with the passing of types to the
|
||||
\code{compose} template. This can be avoided as long as you have a
|
||||
C++14-compliant compiler that supports generalized lambda functions with
|
||||
return type deduction (credit for this code goes to Eric Niebler):
|
||||
|
||||
\begin{Verbatim}
|
||||
\begin{snip}{cpp}
|
||||
auto const compose = [](auto m1, auto m2) {
|
||||
return [m1, m2](auto x) {
|
||||
auto p1 = m1(x);
|
||||
@ -260,33 +260,33 @@ auto const compose = [](auto m1, auto m2) {
|
||||
return make_pair(p2.first, p1.second + p2.second);
|
||||
};
|
||||
};
|
||||
\end{Verbatim}
|
||||
\end{snip}
|
||||
In this new definition, the implementation of \code{process}
|
||||
simplifies to:
|
||||
|
||||
\begin{Verbatim}
|
||||
\begin{snip}{cpp}
|
||||
Writer<vector<string>> process(string s) {
|
||||
return compose(toUpper, toWords)(s);
|
||||
}
|
||||
\end{Verbatim}
|
||||
\end{snip}
|
||||
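The Writer pieces shown in the hunks above can be assembled into a small, self-contained sketch (C++14). The `Writer` alias, the generalized-lambda `compose`, and the embellished functions follow the text; the surrounding glue and the composed `isOdd` are my own illustration:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Embellished functions return a value paired with a log string;
// composition concatenates the logs between calls.
template<class A>
using Writer = std::pair<A, std::string>;

// Generic composition of embellished functions (C++14 generic lambdas).
auto const compose = [](auto m1, auto m2) {
    return [m1, m2](auto x) {
        auto p1 = m1(x);
        auto p2 = m2(p1.first);
        return std::make_pair(p2.first, p1.second + p2.second);
    };
};

// Identity morphism: passes the value through, contributes an empty log.
template<class A>
Writer<A> identity(A x) {
    return std::make_pair(x, "");
}

Writer<bool> isEven(int n) {
    return std::make_pair(n % 2 == 0, "isEven ");
}

Writer<bool> negate(bool b) {
    return std::make_pair(!b, "Not so! ");
}

// isOdd built by composition rather than by hand.
Writer<bool> isOdd(int n) {
    return compose(isEven, negate)(n);
}
```

Composing `isEven` with `negate` produces both the final value and the aggregated log `"isEven Not so! "`, with no log plumbing inside either function.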
But we are not finished yet. We have defined composition in our new
category, but what are the identity morphisms? These are not our regular
identity functions! They have to be morphisms from type A back to type
A, which means they are embellished functions of the form:

\begin{Verbatim}
\begin{snip}{cpp}
Writer<A> identity(A);
\end{Verbatim}
\end{snip}
They have to behave like units with respect to composition. If you look
at our definition of composition, you'll see that an identity morphism
should pass its argument without change, and only contribute an empty
string to the log:

\begin{Verbatim}
\begin{snip}{cpp}
template<class A> Writer<A> identity(A x) {
return make_pair(x, "");
}
\end{Verbatim}
\end{snip}
You can easily convince yourself that the category we have just defined
is indeed a legitimate category. In particular, our composition is
trivially associative. If you follow what's happening with the first
@ -363,8 +363,6 @@ For completeness, let's have the Haskell versions of the embellished
functions \code{upCase} and \code{toWords}:

\src{snippet06}

\src{snippet07}
The function \code{map} corresponds to the C++ \code{transform}. It
applies the character function \code{toUpper} to the string
\code{s}. The auxiliary function \code{words} is defined in the
@ -373,7 +371,7 @@ standard Prelude library.
Finally, the composition of the two functions is accomplished with the
help of the fish operator:

\src{snippet08}
\src{snippet07}

\section{Kleisli Categories}

@ -412,7 +410,7 @@ mathematical sense, so it doesn't fit the standard categorical mold. It
can, however, be represented by a function that returns an embellished
type \code{optional}:

\begin{Verbatim}
\begin{snip}{cpp}
template<class A> class optional {
bool _isValid;
A _value;
@ -422,16 +420,16 @@ public:
bool isValid() const { return _isValid; }
A value() const { return _value; }
};
\end{Verbatim}
As an example, here's the implementation of the embellished function
\end{snip}
For example, here's the implementation of the embellished function
\code{safe\_root}:

\begin{Verbatim}
\begin{snip}{cpp}
optional<double> safe_root(double x) {
if (x >= 0) return optional<double>{sqrt(x)};
else return optional<double>{};
}
\end{Verbatim}
\end{snip}
Here's the challenge:

\begin{enumerate}
@ -444,7 +442,7 @@ Here's the challenge:
returns a valid reciprocal of its argument, if it's different from
zero.
\item
Compose \code{safe\_root} and \code{safe\_reciprocal} to implement\\
Compose the functions \code{safe\_root} and \code{safe\_reciprocal} to implement
\code{safe\_root\_reciprocal} that calculates \code{sqrt(1/x)}
whenever possible.
\end{enumerate}
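As a hedged aside, one possible shape of the composition the challenge describes (not necessarily the book's intended solution) reuses the `optional` template from the text and short-circuits on the invalid case:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// optional<A> models the result of a partial function, as in the text.
template<class A> class optional {
    bool _isValid;
    A _value;
public:
    optional() : _isValid(false), _value() {}   // invalid result
    optional(A v) : _isValid(true), _value(v) {} // valid result
    bool isValid() const { return _isValid; }
    A value() const { return _value; }
};

// Kleisli-style composition for optional-returning functions:
// if the first step fails, the whole pipeline fails.
template<class A, class B, class C>
std::function<optional<C>(A)> compose(std::function<optional<B>(A)> m1,
                                      std::function<optional<C>(B)> m2)
{
    return [m1, m2](A x) {
        auto p1 = m1(x);
        if (!p1.isValid()) return optional<C>{};
        return m2(p1.value());
    };
}

optional<double> safe_root(double x) {
    if (x >= 0) return optional<double>{std::sqrt(x)};
    else return optional<double>{};
}

optional<double> safe_reciprocal(double x) {
    if (x != 0) return optional<double>{1 / x};
    else return optional<double>{};
}

// sqrt(1/x), valid only when x > 0.
optional<double> safe_root_reciprocal(double x) {
    return compose<double, double, double>(safe_reciprocal, safe_root)(x);
}
```

Note the order of arguments: `safe_reciprocal` runs first, then `safe_root`, mirroring `sqrt(1/x)`.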
@ -1,4 +1,3 @@
def m: ((Int, Int, Boolean))
=> (Int, Boolean) = {
def m: ((Int, Int, Boolean)) => (Int, Boolean) = {
case (x, _, b) => (x, b)
}
@ -1,3 +1,3 @@
sealed trait Contact
final case class PhoneNum(num: Int) extends Contact
final case class EmailAddr(addr: String) extends Contact
case class PhoneNum(num: Int) extends Contact
case class EmailAddr(addr: String) extends Contact
@ -1,3 +1,3 @@
sealed trait Either[A, B]
final case class Left[A](v: A) extends Either[A, Nothing]
final case class Right[B](v: B) extends Either[Nothing, B]
case class Left[A](v: A) extends Either[A, Nothing]
case class Right[B](v: B) extends Either[Nothing, B]
@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{T}{he Ancient Greek} playwright Euripides once said: ``Every man is like
the company he is wont to keep.'' We are defined by our relationships.
@ -51,7 +51,7 @@ morphism going to any object in the category.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/initial.jpg}
\includegraphics[width=0.4\textwidth]{images/initial.jpg}
\end{figure}

\noindent
@ -93,7 +93,7 @@ morphism coming to it from any object in the category.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/final.jpg}
\includegraphics[width=0.4\textwidth]{images/final.jpg}
\end{figure}

\noindent
@ -193,7 +193,7 @@ these two morphisms?

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/uniqueness.jpg}
\includegraphics[width=0.4\textwidth]{images/uniqueness.jpg}
\caption{All morphisms in this diagram are unique.}
\end{figure}

@ -243,12 +243,12 @@ wildcards:

\src{snippet08}
In C++, we would use template functions, for instance:

\begin{Verbatim}
\begin{snip}{cpp}
template<class A, class B> A
fst(pair<A, B> const & p) {
return p.first;
}
\end{Verbatim}
\end{snip}
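As an illustrative aside (the names `snd` and `factorizer` are my own, echoing the factorizing morphism $m$ this chapter builds toward), the two projections and the factorizing morphism for the pair product can be sketched together:

```cpp
#include <cassert>
#include <functional>
#include <utility>

// Projections out of the product pair<A, B>.
template<class A, class B>
A fst(std::pair<A, B> const & p) {
    return p.first;
}

template<class A, class B>
B snd(std::pair<A, B> const & p) {
    return p.second;
}

// Given a candidate object C with projections p : C -> A and q : C -> B,
// the factorizing morphism m : C -> pair<A, B> combines them, so that
// fst(m(x)) == p(x) and snd(m(x)) == q(x).
template<class C, class A, class B>
std::function<std::pair<A, B>(C)> factorizer(std::function<A(C)> p,
                                             std::function<B(C)> q)
{
    return [p, q](C x) { return std::make_pair(p(x), q(x)); };
}
```

This is the sense in which `pair<A, B>` is "better" than any other candidate: every candidate's projections factor through it.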
Equipped with this seemingly very limited knowledge, let's try to define
a pattern of objects and morphisms in the category of sets that will
lead us to the construction of a product of two sets, $a$ and
@ -260,7 +260,7 @@ respectively:

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/productpattern.jpg}
\includegraphics[width=0.3\textwidth]{images/productpattern.jpg}
\end{figure}

\noindent
@ -269,7 +269,7 @@ the product. There may be lots of them.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/productcandidates.jpg}
\includegraphics[width=0.4\textwidth]{images/productcandidates.jpg}
\end{figure}

\noindent
@ -308,7 +308,7 @@ $p'$ and $q'$ can be reconstructed from $p$ and $q$ using $m$:

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/productranking.jpg}
\includegraphics[width=0.4\textwidth]{images/productranking.jpg}
\end{figure}

\noindent
@ -324,7 +324,7 @@ presented before.

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/not-a-product.jpg}
\includegraphics[width=0.4\textwidth]{images/not-a-product.jpg}
\end{figure}

\noindent
@ -400,7 +400,7 @@ and $b$ to $c$.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/coproductpattern.jpg}
\includegraphics[width=0.4\textwidth]{images/coproductpattern.jpg}
\end{figure}

\noindent
@ -413,7 +413,7 @@ factorizes the injections:

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/coproductranking.jpg}
\includegraphics[width=0.4\textwidth]{images/coproductranking.jpg}
\end{figure}

\noindent
@ -441,27 +441,27 @@ types: it's a tagged union of two types. C++ supports unions, but they
are not tagged. It means that in your program you have to somehow keep
track which member of the union is valid. To create a tagged union, you
have to define a tag --- an enumeration --- and combine it with the
union. For instance, a tagged union of an \code{int} and a\\
union. For instance, a tagged union of an \code{int} and a
\code{char const *} could be implemented as:

\begin{Verbatim}
\begin{snip}{cpp}
struct Contact {
enum { isPhone, isEmail } tag;
union { int phoneNum; char const * emailAddr; };
};
\end{Verbatim}
\end{snip}
The two injections can either be implemented as constructors or as
functions. For instance, here's the first injection as a function
\code{PhoneNum}:

\begin{Verbatim}
\begin{snip}{cpp}
Contact PhoneNum(int n) {
Contact c;
c.tag = isPhone;
c.phoneNum = n;
return c;
}
\end{Verbatim}
\end{snip}
It injects an integer into \code{Contact}.

A tagged union is also called a \newterm{variant}, and there is a very
@ -603,16 +603,16 @@ isomorphism is the same as a bijection.
Show that \code{Either} is a ``better'' coproduct than \code{int}
equipped with two injections:

\begin{Verbatim}
\begin{snip}{cpp}
int i(int n) { return n; }
int j(bool b) { return b ? 0: 1; }
\end{Verbatim}
\end{snip}

Hint: Define a function

\begin{Verbatim}
\begin{snip}{cpp}
int m(Either const & e);
\end{Verbatim}
\end{snip}

that factorizes \code{i} and \code{j}.
\item
@ -622,14 +622,14 @@ int m(Either const & e);
\item
Still continuing: What about these injections?

\begin{Verbatim}
\begin{snip}{cpp}
int i(int n) {
if (n < 0) return n;
return n + 2;
}

int j(bool b) { return b ? 0: 1; }
\end{Verbatim}
\end{snip}
\item
Come up with an inferior candidate for a coproduct of \code{int} and
\code{bool} that cannot be better than \code{Either} because it
@ -1,4 +1,3 @@
val startsWithSymbol: ((String, String, Int)) => Boolean = {
case (name, symbol, _) =>
name.startsWith(symbol)
case (name, symbol, _) => name.startsWith(symbol)
}
@ -1,3 +1,3 @@
sealed trait Either[+A, +B]
final case class Left[A](v: A) extends Either[A, Nothing]
final case class Right[B](v: B) extends Either[Nothing, B]
case class Left[A](v: A) extends Either[A, Nothing]
case class Right[B](v: B) extends Either[Nothing, B]
@ -1,4 +1,4 @@
sealed trait OneOfThree[+A, +B, +C]
final case class Sinistral[A](v: A) extends OneOfThree[A, Nothing, Nothing]
final case class Medial[B](v: B) extends OneOfThree[Nothing, B, Nothing]
final case class Dextral[C](v: C) extends OneOfThree[Nothing, Nothing, C]
case class Sinistral[A](v: A) extends OneOfThree[A, Nothing, Nothing]
case class Medial[B](v: B) extends OneOfThree[Nothing, B, Nothing]
case class Dextral[C](v: C) extends OneOfThree[Nothing, Nothing, C]
@ -1,3 +1,3 @@
sealed trait List[+A]
case object Nil extends List[Nothing]
final case class Cons[A](h: A, t: List[A]) extends List[A]
case class Cons[A](h: A, t: List[A]) extends List[A]
@ -1,3 +1,3 @@
sealed trait List[+A]
case object Nil extends List[Nothing]
final case class Cons[A](h: A, t: List[A]) extends List[A]
case class Cons[A](h: A, t: List[A]) extends List[A]
@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{W}{e've seen two basic} ways of combining types: using a product and a
coproduct. It turns out that a lot of data structures in everyday
@ -21,9 +21,9 @@ language is a pair. In Haskell, a pair is a primitive type constructor;
in C++ it's a relatively complex template defined in the Standard
Library.

\begin{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1.56250in]{images/pair.jpg}
\includegraphics[width=0.35\textwidth]{images/pair.jpg}
\end{figure}

\noindent
@ -215,9 +215,9 @@ sum type:
\src{snippet24}
is the C++:

\begin{Verbatim}
\begin{snip}{cpp}
enum { Red, Green, Blue };
\end{Verbatim}
\end{snip}
An even simpler sum type:

\src{snippet25}
@ -252,7 +252,7 @@ a Haskell list type, which can be defined as a (recursive) sum type:
can be translated to C++ using the null pointer trick to implement the
empty list:

\begin{Verbatim}
\begin{snip}{cpp}
template<class A>
class List {
Node<A> * _head;
@ -262,7 +262,7 @@ public:
: _head(new Node<A>(a, l))
{}
};
\end{Verbatim}
\end{snip}
Notice that the two Haskell constructors \code{Nil} and \code{Cons}
are translated into two overloaded \code{List} constructors with
analogous arguments (none, for \code{Nil}; and a value and a list for
@ -406,22 +406,22 @@ equation:
If we do our usual substitutions, and also replace \code{List a} with
\code{x}, we get the equation:

\begin{Verbatim}
\begin{snip}{text}
x = 1 + a * x
\end{Verbatim}
\end{snip}
We can't solve it using traditional algebraic methods because we can't
subtract or divide types. But we can try a series of substitutions,
where we keep replacing \code{x} on the right hand side with
\code{(1 + a*x)}, and use the distributive property. This leads to
the following series:

\begin{Verbatim}
\begin{snip}{text}
x = 1 + a*x
x = 1 + a*(1 + a*x) = 1 + a + a*a*x
x = 1 + a + a*a*(1 + a*x) = 1 + a + a*a + a*a*a*x
...
x = 1 + a + a*a + a*a*a + a*a*a*a...
\end{Verbatim}
\end{snip}
We end up with an infinite sum of products (tuples), which can be
interpreted as: A list is either empty, \code{1}; or a singleton,
\code{a}; or a pair, \code{a*a}; or a triple, \code{a*a*a};
@ -444,7 +444,6 @@ sum of two types, on the other hand, contains either a value of type
of them is inhabited. Logical \emph{and} and \emph{or} also form a
semiring, and it too can be mapped into type theory:

\begin{absolutelynopagebreak}
\begin{longtable}[]{@{}ll@{}}
\toprule
Logic & Types\tabularnewline
@ -456,7 +455,6 @@ $a \mathbin{||} b$ & \code{Either a b = Left a | Right b}\tabularnewline
$a \mathbin{\&\&} b$ & \code{(a, b)}\tabularnewline
\bottomrule
\end{longtable}
\end{absolutelynopagebreak}

\noindent
This analogy goes deeper, and is the basis of the Curry-Howard
@ -473,18 +471,18 @@ talk about function types.
\item
Here's a sum type defined in Haskell:

\begin{Verbatim}
\begin{snip}{haskell}
data Shape = Circle Float
| Rect Float Float
\end{Verbatim}
\end{snip}
When we want to define a function like \code{area} that acts on a
\code{Shape}, we do it by pattern matching on the two constructors:

\begin{Verbatim}
\begin{snip}{haskell}
area :: Shape -> Float
area (Circle r) = pi * r * r
area (Rect d h) = d * h
\end{Verbatim}
\end{snip}
Implement \code{Shape} in C++ or Java as an interface and create two
classes: \code{Circle} and \code{Rect}. Implement \code{area} as
a virtual function.
@ -493,11 +491,11 @@ area (Rect d h) = d * h
\code{circ} that calculates the circumference of a \code{Shape}.
We can do it without touching the definition of \code{Shape}:

\begin{Verbatim}
\begin{snip}{haskell}
circ :: Shape -> Float
circ (Circle r) = 2.0 * pi * r
circ (Rect d h) = 2.0 * (d + h)
\end{Verbatim}
\end{snip}
Add \code{circ} to your C++ or Java implementation. What parts of
the original code did you have to touch?
\item
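For contrast with the virtual-function approach the exercise asks for, here is a hedged sketch of the same `Shape` sum type using C++17 `std::variant`, with `std::visit` playing the role of pattern matching (my own variation, not the exercise's intended solution):

```cpp
#include <cassert>
#include <type_traits>
#include <variant>

// Shape = Circle Float | Rect Float Float, as a C++17 variant.
struct Circle { double r; };
struct Rect { double d; double h; };
using Shape = std::variant<Circle, Rect>;

// "Pattern matching" on the two constructors via std::visit.
double area(Shape const & s) {
    return std::visit([](auto const & sh) -> double {
        using T = std::decay_t<decltype(sh)>;
        if constexpr (std::is_same_v<T, Circle>)
            return 3.14159265358979323846 * sh.r * sh.r;
        else
            return sh.d * sh.h;
    }, s);
}
```

Like the Haskell version, adding a new function such as `circ` requires no change to `Shape` itself, whereas adding a new constructor forces every visitor to be revisited.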
@ -1,3 +1,3 @@
sealed trait List[+E]
case object Nil extends List[Nothing]
final case class Cons[E](h: E, t: List[E]) extends List[E]
case class Cons[E](h: E, t: List[E]) extends List[E]
@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{A}{t the risk of sounding} like a broken record, I will say this about
functors: A functor is a very simple but powerful idea. Category theory
@ -16,13 +16,15 @@ the image of $f$ in $\cat{D}$, $F f$, will connect the image of
$a$ to the image of $b$:
\[F f \Colon F a \to F b\]

\begin{wrapfigure}{r}{0pt}
\includegraphics[width=0.3\textwidth]{images/functor.jpg}
\end{wrapfigure}
(This is a mixture of mathematical and Haskell notation that hopefully
makes sense by now. I won't use parentheses when applying functors to
objects or morphisms.)

\begin{figure}[H]
\centering\includegraphics[width=0.3\textwidth]{images/functor.jpg}
\end{figure}

\noindent
As you can see, a
functor preserves the structure of a category: what's connected in one
category will be connected in the other category. But there's something
@ -35,7 +37,7 @@ and $g$:

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/functorcompos.jpg}
\includegraphics[width=0.3\textwidth]{images/functorcompos.jpg}
\end{figure}

\noindent
@ -43,10 +45,6 @@ Finally, we want all identity morphisms in $\cat{C}$ to be mapped to identity mo
$\cat{D}$:
\[F \idarrow[a] = \idarrow[F a]\]

\begin{wrapfigure}{r}{0pt}
\includegraphics[width=0.3\textwidth]{images/functorid.jpg}
\end{wrapfigure}

\noindent
Here, $\idarrow[a]$ is the identity at the object $a$,
and $\idarrow[F a]$ the identity at $F a$.
@ -120,7 +118,7 @@ signature:

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/functormaybe.jpg}
\includegraphics[width=0.35\textwidth]{images/functormaybe.jpg}
\end{figure}

\noindent
@ -172,13 +170,13 @@ There are two cases to consider: \code{Nothing} and \code{Just}.
Here's the first case (I'm using Haskell pseudo-code to transform the
left hand side to the right hand side):

\begin{Verbatim}
\begin{snip}{haskell}
fmap id Nothing
= { definition of fmap }
Nothing
= { definition of id }
= { definition of id }
id Nothing
\end{Verbatim}
\end{snip}
Notice that in the last step I used the definition of \code{id}
backwards. I replaced the expression \code{Nothing} with
\code{id\ Nothing}. In practice, you carry out such proofs by
@ -186,7 +184,7 @@ backwards. I replaced the expression \code{Nothing} with
in the middle --- here it was \code{Nothing}. The second case is also
easy:

\begin{Verbatim}
\begin{snip}{haskell}
fmap id (Just x)
= { definition of fmap }
Just (id x)
@ -194,13 +192,13 @@ easy:
Just x
= { definition of id }
id (Just x)
\end{Verbatim}
\end{snip}
Now, lets show that \code{fmap} preserves composition:

\src{snippet09}
First the \code{Nothing} case:

\begin{Verbatim}
\begin{snip}{haskell}
fmap (g . f) Nothing
= { definition of fmap }
Nothing
@ -208,10 +206,10 @@ First the \code{Nothing} case:
fmap g Nothing
= { definition of fmap }
fmap g (fmap f Nothing)
\end{Verbatim}
\end{snip}
And then the \code{Just} case:

\begin{Verbatim}
\begin{snip}{haskell}
fmap (g . f) (Just x)
= { definition of fmap }
Just ((g . f) x)
@ -223,28 +221,28 @@ And then the \code{Just} case:
fmap g (fmap f (Just x))
= { definition of composition }
(fmap g . fmap f) (Just x)
\end{Verbatim}
\end{snip}
It's worth stressing that equational reasoning doesn't work for C++
style ``functions'' with side effects. Consider this code:

\begin{Verbatim}
int square(int x) {
\begin{snip}{cpp}
int square(int x) {
return x * x;
}

int counter() {
int counter() {
static int c = 0;
return c++;
}

double y = square(counter());
\end{Verbatim}
\end{snip}
Using equational reasoning, you would be able to inline \code{square}
to get:

\begin{Verbatim}
\begin{snip}{cpp}
double y = counter() * counter();
\end{Verbatim}
\end{snip}
This is definitely not a valid transformation, and it will not produce
the same result. Despite that, the C++ compiler will try to use
equational reasoning if you implement \code{square} as a macro, with
@ -260,7 +258,7 @@ implementation is much more complex, dealing with various ways the
argument may be passed, with copy semantics, and with the resource
management issues characteristic of C++):

\begin{Verbatim}
\begin{snip}{cpp}
template<class T>
class optional {
bool _isValid; // the tag
@ -270,13 +268,13 @@ public:
optional(T x) : _isValid(true) , _v(x) {} // Just
bool isValid() const { return _isValid; }
T val() const { return _v; } };
\end{Verbatim}
\end{snip}
This template provides one part of the definition of a functor: the
mapping of types. It maps any type \code{T} to a new type
\code{optional<T>}. Let's define its action on
functions:

\begin{Verbatim}
\begin{snip}{cpp}
template<class A, class B>
std::function<optional<B>(optional<A>)>
fmap(std::function<B(A)> f) {
@ -287,11 +285,11 @@ fmap(std::function<B(A)> f) {
return optional<B>{ f(opt.val()) };
};
}
\end{Verbatim}
\end{snip}
This is a higher order function, taking a function as an argument and
returning a function. Here's the uncurried version of it:

\begin{Verbatim}
\begin{snip}{cpp}
template<class A, class B>
optional<B> fmap(std::function<B(A)> f, optional<A> opt) {
if (!opt.isValid())
@ -299,7 +297,7 @@ optional<B> fmap(std::function<B(A)> f, optional<A> opt) {
else
return optional<B>{ f(opt.val()) };
}
\end{Verbatim}
\end{snip}
There is also an option of making \code{fmap} a template method of
\code{optional}. This embarrassment of choices makes abstracting the
functor pattern in C++ a problem. Should functor be an interface to
@ -310,9 +308,9 @@ specified explicitly? Consider a situation where the input function
\code{f} takes an \code{int} to a \code{bool}. How will the
compiler figure out the type of \code{g}:

\begin{Verbatim}
\begin{snip}{cpp}
auto g = fmap(f);
\end{Verbatim}
\end{snip}
especially if, in the future, there are multiple functors overloading
\code{fmap}? (We'll see more functors soon.)

@ -374,22 +372,22 @@ template class, like \code{optional}, so by analogy, we would
parameterize \code{fmap} with a \newterm{template template parameter}
\code{F}. This is the syntax for it:

\begin{Verbatim}
\begin{snip}{cpp}
template<template<class> F, class A, class B>
F<B> fmap(std::function<B(A)>, F<A>);
\end{Verbatim}
\end{snip}
We would like to be able to specialize this template for different
functors. Unfortunately, there is a prohibition against partial
specialization of template functions in C++. You can't write:

\begin{Verbatim}
\begin{snip}{cpp}
template<class A, class B>
optional<B> fmap<optional>(std::function<B(A)> f, optional<A> opt)
\end{Verbatim}
\end{snip}
Instead, we have to fall back on function overloading, which brings us
back to the original definition of the uncurried \code{fmap}:

\begin{Verbatim}
\begin{snip}{cpp}
template<class A, class B>
optional<B> fmap(std::function<B(A)> f, optional<A> opt) {
if (!opt.isValid())
@ -397,7 +395,7 @@ optional<B> fmap(std::function<B(A)> f, optional<A> opt) {
else
return optional<B>{ f(opt.val()) };
}
\end{Verbatim}
\end{snip}
This definition works, but only because the second argument of
\code{fmap} selects the overload. It totally ignores the more generic
definition of \code{fmap}.
@ -414,7 +412,7 @@ container, the list:
We have the type constructor \code{List}, which is a mapping from any
type \code{a} to the type \code{List a}. To show that \code{List}
is a functor we have to define the lifting of functions: Given a
function \code{a -> b} define a function\\
function \code{a -> b} define a function
\code{List a -> List b}:

\src{snippet16}
@ -450,7 +448,7 @@ If you are more comfortable with C++, consider the case of a
container. The implementation of \code{fmap} for \code{std::vector}
is just a thin encapsulation of \code{std::transform}:

\begin{Verbatim}
\begin{snip}{cpp}
template<class A, class B>
std::vector<B> fmap(std::function<B(A)> f, std::vector<A> v) {
std::vector<B> w;
@ -460,17 +458,17 @@ std::vector<B> fmap(std::function<B(A)> f, std::vector<A> v) {
, f);
return w;
}
\end{Verbatim}
\end{snip}
We can use it, for instance, to square the elements of a sequence of
numbers:

\begin{Verbatim}
\begin{snip}{cpp}
std::vector<int> v{ 1, 2, 3, 4 };
auto w = fmap([](int i) { return i*i; }, v);
std::copy( std::begin(w)
, std::end(w)
, std::ostream_iterator(std::cout, ", "));
\end{Verbatim}
\end{snip}
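The vector `fmap` just shown can be exercised as a self-contained sketch. One caveat worth noting: with a bare lambda, template argument deduction against the `std::function` parameter fails, so this sketch pins the type arguments explicitly:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <iterator>
#include <vector>

// fmap for std::vector, a thin wrapper over std::transform:
// lifts f : A -> B to a function vector<A> -> vector<B>.
template<class A, class B>
std::vector<B> fmap(std::function<B(A)> f, std::vector<A> v) {
    std::vector<B> w;
    std::transform(std::begin(v), std::end(v),
                   std::back_inserter(w), f);
    return w;
}
```

With a lambda the call becomes `fmap<int, int>([](int i) { return i * i; }, v)`, since deduction cannot see through the implicit conversion to `std::function`.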
Most C++ containers are functors by virtue of implementing iterators
that can be passed to \code{std::transform}, which is the more
primitive cousin of \code{fmap}. Unfortunately, the simplicity of a
@ -605,23 +603,23 @@ This might be a little clearer in C++ (I never thought I would utter
those words!), where there is a stronger distinction between type
arguments --- which are compile-time --- and values, which are run-time:

\begin{Verbatim}
\begin{snip}{cpp}
template<class C, class A>
struct Const {
Const(C v) : _v(v) {}
C _v;
};
\end{Verbatim}
\end{snip}
The C++ implementation of \code{fmap} also ignores the function
argument and essentially re-casts the \code{Const} argument without
changing its value:

\begin{Verbatim}
\begin{snip}{cpp}
template<class C, class A, class B>
Const<C, B> fmap(std::function<B(A)> f, Const<C, A> c) {
return Const<C, B>{c._v};
}
\end{Verbatim}
\end{snip}
Despite its weirdness, the \code{Const} functor plays an important
role in many constructions. In category theory, it's a special case of
the $\Delta_c$ functor I mentioned earlier --- the endo-functor
@ -707,9 +705,9 @@ We'll see later that functors form categories as well.
Can we turn the \code{Maybe} type constructor into a functor by
defining:

\begin{Verbatim}
\begin{snip}{haskell}
fmap _ _ = Nothing
\end{Verbatim}
\end{snip}

which ignores both of its arguments? (Hint: Check the functor laws.)
\item
@@ -1,6 +1,5 @@
 trait Bifunctor[F[_, _]] {
-  def bimap[A, B, C, D](
-      g: A => C)(h: B => D): F[A, B] => F[C, D] =
+  def bimap[A, B, C, D](g: A => C)(h: B => D): F[A, B] => F[C, D] =
     first(g) compose second(h)

   def first[A, B, C](g: A => C): F[A, B] => F[C, B] =
@@ -1,6 +1,5 @@
 implicit val tuple2Bifunctor = new Bifunctor[Tuple2] {
-  override def bimap[A, B, C, D](
-      f: A => C)(g: B => D): ((A, B)) => (C, D) = {
+  override def bimap[A, B, C, D](f: A => C)(g: B => D): ((A, B)) => (C, D) = {
     case (x, y) => (f(x), g(y))
   }
 }
@@ -5,9 +5,7 @@ implicit def bicompBiFunctor[
 // partially-applied type BiComp
 type BiCompAB[A, B] = BiComp[BF, FU, GU, A, B]
 new Bifunctor[BiCompAB] {
-  override def bimap[A, B, C, D](
-      f1: A => C)(f2: B => D)
-      : BiCompAB[A, B] => BiCompAB[C, D] = {
+  override def bimap[A, B, C, D](f1: A => C)(f2: B => D): BiCompAB[A, B] => BiCompAB[C, D] = {
     case BiComp(x) =>
       BiComp(
         BF.bimap(FU.fmap(f1))(GU.fmap(f2))(x)
@@ -1,4 +1 @@
-def bimap: (FU[A] => FU[A1]) =>
-  (GU[B] => GU[B1]) =>
-  BiComp[BF, FU, GU, A, B] =>
-  BiComp[BF, FU, GU, A1, B1]
+def bimap: (FU[A] => FU[A1]) => (GU[B] => GU[B1]) => BiComp[BF, FU, GU, A, B] => BiComp[BF, FU, GU, A1, B1]
@@ -1,13 +1,10 @@
 trait Profunctor[F[_, _]] {
-  def bimap[A, B, C, D]: (A => B) => (C => D)
-    => F[B, C] => F[A, D] = f => g =>
+  def bimap[A, B, C, D]: (A => B) => (C => D) => F[B, C] => F[A, D] = f => g =>
     lmap(f) compose rmap[B, C, D](g)

-  def lmap[A, B, C]: (A => B) =>
-    F[B, C] => F[A, C] = f =>
+  def lmap[A, B, C]: (A => B) => F[B, C] => F[A, C] = f =>
     bimap(f)(identity[C])

-  def rmap[A, B, C]: (B => C) =>
-    F[A, B] => F[A, C] =
+  def rmap[A, B, C]: (B => C) => F[A, B] => F[A, C] =
     bimap[A, A, B, C](identity[A])
 }
@@ -1,14 +1,10 @@
 implicit val function1Profunctor = new Profunctor[Function1] {
-  override def bimap[A, B, C, D]
-    : (A => B) => (C => D) => (B => C)
-    => (A => D) = ab => cd => bc =>
+  override def bimap[A, B, C, D]: (A => B) => (C => D) => (B => C) => (A => D) = ab => cd => bc =>
     cd compose bc compose ab

-  override def lmap[A, B, C]
-    : (A => B) => (B => C) => (A => C) =
+  override def lmap[A, B, C]: (A => B) => (B => C) => (A => C) =
     flip(compose)

-  override def rmap[A, B, C]
-    : (B => C) => (A => B) => (A => C) =
+  override def rmap[A, B, C]: (B => C) => (A => B) => (A => C) =
     compose
 }
@@ -1,4 +1,4 @@
-% !TEX root = ../../ctfp-reader.tex
+% !TEX root = ../../ctfp-print.tex

 \lettrine[lhang=0.17]{N}{ow that you know} what a functor is, and have seen a few examples, let's
 see how we can build larger functors from smaller ones. In particular
@@ -17,10 +17,9 @@ one from category $\cat{C}$, and one from category $\cat{D}$, to an object in ca
 $\cat{E}$. Notice that this is just saying that it's a mapping from a
 \newterm{Cartesian product} of categories $\cat{C}\times{}\cat{D}$ to $\cat{E}$.

-\begin{wrapfigure}[10]{R}{0pt}
-  \raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
-    \includegraphics[width=50mm]{images/bifunctor.jpg}}
-\end{wrapfigure}
+\begin{figure}[H]
+  \centering\includegraphics[width=0.3\textwidth]{images/bifunctor.jpg}
+\end{figure}

 \noindent
 That's pretty straightforward. But functoriality means that a bifunctor
@@ -54,6 +53,12 @@ constructor that takes two type arguments. Here's the definition of the
 \code{Control.Bifunctor}:

 \src{snippet01}
+
+\begin{figure}[H]
+  \centering\includegraphics[width=0.3\textwidth]{images/bimap.jpg}
+  \caption{bimap}
+\end{figure}
+
 The type variable \code{f} represents the bifunctor. You can see that
 in all type signatures it's always applied to two type arguments. The
 first type signature defines \code{bimap}: a mapping of two functions
@@ -62,33 +67,29 @@ at once. The result is a lifted function,
 generated by the bifunctor's type constructor. There is a default
 implementation of \code{bimap} in terms of \code{first} and
 \code{second}, which shows that it's enough to have functoriality in
-each argument separately to be able to define a bifunctor. (This is not true in general in category theory, because the two maps may not commute: \code{first g . second h} might not be the same as \code{second h . first g}.)
-
-\begin{figure}[H]
-  \centering
-  \includegraphics[width=40mm]{images/bimap.jpg}
-  \caption{bimap}
-\end{figure}
+each argument separately to be able to define a bifunctor. (This is
+not true in general in category theory, because the two maps may not
+commute: \code{first g . second h} might not be the same as
+\code{second h . first g}.)

 \noindent
 The two other type signatures, \code{first} and \code{second}, are
 the two \code{fmap}s witnessing the functoriality of \code{f} in the
 first and the second argument, respectively.

-\begin{longtable}[]{@{}ll@{}}
-  \toprule
-  \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
-    \includegraphics[width=1.56250in]{images/first.jpg}
-
-    first\strut
-  \end{minipage} & \begin{minipage}[t]{0.48\columnwidth}\raggedright\strut
-    \hypertarget{attachment_4072}{}
-    \includegraphics[width=1.56250in]{images/second.jpg}
-
-    ~second\strut
-  \end{minipage}\tabularnewline
-  \bottomrule
-\end{longtable}
+\begin{figure}[H]
+  \centering
+  \begin{minipage}{0.45\textwidth}
+    \centering
+    \includegraphics[width=0.65\textwidth]{images/first.jpg} % first figure itself
+    \caption{first}
+  \end{minipage}\hfill
+  \begin{minipage}{0.45\textwidth}
+    \centering
+    \includegraphics[width=0.6\textwidth]{images/second.jpg} % second figure itself
+    \caption{second}
+  \end{minipage}
+\end{figure}

 \noindent
 The typeclass definition provides default implementations for both of
@@ -110,7 +111,7 @@ mapping from those objects to the product is bifunctorial. This is true
 in general, and in Haskell in particular. Here's the \code{Bifunctor}
 instance for a pair constructor --- the simplest product type:

-\src{snippet02}
+\src{snippet02}[b]
 There isn't much choice: \code{bimap} simply applies the first
 function to the first component, and the second function to the second
 component of a pair. The code pretty much writes itself, given the
@@ -120,18 +121,18 @@ types:
 The action of the bifunctor here is to make pairs of types, for
 instance:

-\begin{Verbatim}
+\begin{snip}{haskell}
 (,) a b = (a, b)
-\end{Verbatim}
+\end{snip}
 By duality, a coproduct, if it's defined for every pair of objects in a
 category, is also a bifunctor. In Haskell, this is exemplified by the
 \code{Either} type constructor being an instance of
 \code{Bifunctor}:

-\src{snippet04}
+\src{snippet04}[b]
 This code also writes itself.

-Now, remember when we talked about monoidal categories? A mo\-noidal
+Now, remember when we talked about monoidal categories? A monoidal
 category defines a binary operator acting on objects, together with a
 unit object. I mentioned that $\Set$ is a monoidal category with
 respect to Cartesian product, with the singleton set as a unit. And it's
@@ -229,7 +230,7 @@ know that there will be a definition of \code{bimap} available for
 instance declaration: a set of class constraints followed by a double
 arrow:

-\src{snippet10}
+\src{snippet10}[b]
 The implementation of \code{bimap} for \code{BiComp} is given in
 terms of \code{bimap} for \code{bf} and the two \code{fmap}s for
 \code{fu} and \code{gu}. The compiler automatically infers all the
@@ -248,7 +249,7 @@ and \code{f2} are:
 then the final result is of the type
 \code{bf (fu a') (gu b')}:

-\src{snippet13}
+\src{snippet13}[b]
 If you like jigsaw puzzles, these kinds of type manipulations can
 provide hours of entertainment.

@@ -262,14 +263,14 @@ can't it be automated and performed by the compiler? Indeed, it can, and
 it is. You need to enable a particular Haskell extension by including
 this line at the top of your source file:

-\begin{Verbatim}
+\begin{snip}{haskell}
 {-# LANGUAGE DeriveFunctor #-}
-\end{Verbatim}
+\end{snip}
 and then add \code{deriving Functor} to your data structure:

-\begin{Verbatim}
+\begin{snip}{haskell}
 data Maybe a = Nothing | Just a deriving Functor
-\end{Verbatim}
+\end{snip}
 and the corresponding \code{fmap} will be implemented for you.

 The regularity of algebraic data structures makes it possible to derive
@@ -307,31 +308,31 @@ The base class must define at least one virtual function in order to
 support dynamic casting, so we'll make the destructor virtual (which is
 a good idea in any case):

-\begin{Verbatim}
+\begin{snip}{cpp}
 template<class T>
 struct Tree {
     virtual ~Tree() {};
 };
-\end{Verbatim}
+\end{snip}
 The \code{Leaf} is just an \code{Identity} functor in disguise:

-\begin{Verbatim}
+\begin{snip}{cpp}
 template<class T>
 struct Leaf : public Tree<T> {
     T _label;
     Leaf(T l) : _label(l) {}
 };
-\end{Verbatim}
+\end{snip}
 The \code{Node} is a product type:

-\begin{Verbatim}
+\begin{snip}{cpp}
 template<class T>
 struct Node : public Tree<T> {
     Tree<T> * _left;
     Tree<T> * _right;
     Node(Tree<T> * l, Tree<T> * r) : _left(l), _right(r) {}
 };
-\end{Verbatim}
+\end{snip}
 When implementing \code{fmap} we take advantage of dynamic dispatching
 on the type of the \code{Tree}. The \code{Leaf} case applies the
 \code{Identity} version of \code{fmap}, and the \code{Node} case
@@ -340,7 +341,7 @@ is treated like a bifunctor composed with two copies of the
 analyzing code in these terms, but it's a good exercise in categorical
 thinking.

-\begin{Verbatim}
+\begin{snip}{cpp}
 template<class A, class B>
 Tree<B> * fmap(std::function<B(A)> f, Tree<A> * t) {
     Leaf<A> * pl = dynamic_cast <Leaf<A>*>(t);
@@ -352,7 +353,7 @@ Tree<B> * fmap(std::function<B(A)> f, Tree<A> * t) {
                         , fmap<A>(f, pn->_right));
     return nullptr;
 }
-\end{Verbatim}
+\end{snip}
 For simplicity, I decided to ignore memory and resource management
 issues, but in production code you would probably use smart pointers
 (unique or shared, depending on your policy).
@@ -532,7 +533,7 @@ which is contra-functorial in the first argument and functorial in the
 second. Here's the appropriate typeclass taken from the
 \code{Data.Profunctor} library:

-\src{snippet29}
+\src{snippet29}[b]
 All three functions come with default implementations. Just like with
 \code{Bifunctor}, when declaring an instance of \code{Profunctor},
 you have a choice of either implementing \code{dimap} and accepting
@@ -542,7 +543,7 @@ the defaults for \code{lmap} and \code{rmap}, or implementing both

 \begin{figure}[H]
   \centering
-  \includegraphics[width=60mm]{images/dimap.jpg}
+  \includegraphics[width=0.4\textwidth]{images/dimap.jpg}
   \caption{dimap}
 \end{figure}

@@ -550,7 +551,7 @@ the defaults for \code{lmap} and \code{rmap}, or implementing both
 Now we can assert that the function-arrow operator is an instance of a
 \code{Profunctor}:

-\src{snippet30}
+\src{snippet30}[b]
 Profunctors have their application in the Haskell lens library. We'll
 see them again when we talk about ends and coends.

@@ -584,9 +585,9 @@ As you can see, the hom-functor is a special case of a profunctor.
 \item
   Show that the data type:

-\begin{Verbatim}
+\begin{snip}{haskell}
 data Pair a b = Pair a b
-\end{Verbatim}
+\end{snip}

   is a bifunctor. For additional credit implement all three methods of
   \code{Bifunctor} and use equational reasoning to show that these
@@ -596,9 +597,9 @@ data Pair a b = Pair a b
   Show the isomorphism between the standard definition of \code{Maybe}
   and this desugaring:

-\begin{Verbatim}
+\begin{snip}{haskell}
 type Maybe' a = Either (Const () a) (Identity a)
-\end{Verbatim}
+\end{snip}

   Hint: Define two mappings between the two implementations. For
   additional credit, show that they are the inverse of each other using
@@ -608,9 +609,9 @@ type Maybe' a = Either (Const () a) (Identity a)
   it's a precursor to a \code{List}. It replaces recursion with a type
   parameter \code{b}.

-\begin{Verbatim}
+\begin{snip}{haskell}
 data PreList a b = Nil | Cons a b
-\end{Verbatim}
+\end{snip}

   You could recover our earlier definition of a \code{List} by
   recursively applying \code{PreList} to itself (we'll see how it's
@@ -621,17 +622,13 @@ data PreList a b = Nil | Cons a b
   Show that the following data types define bifunctors in \code{a} and
   \code{b}:

-\begin{Verbatim}
+\begin{snip}{haskell}
 data K2 c a b = K2 c
-\end{Verbatim}
-
-\begin{Verbatim}
 data Fst a b = Fst a
-\end{Verbatim}
-
-\begin{Verbatim}
 data Snd a b = Snd b
-\end{Verbatim}
+\end{snip}

   For additional credit, check your solutions against Conor McBride's
   paper \urlref{http://strictlypositive.org/CJ.pdf}{Clowns to the Left of
@@ -1,6 +1,4 @@
 val f: Either[Int, Double] => String = {
-  case Left(n) =>
-    if (n < 0) "Negative int" else "Positive int"
-  case Right(x) =>
-    if (x < 0.0) "Negative double" else "Positive double"
+  case Left(n) => if (n < 0) "Negative int" else "Positive int"
+  case Right(x) => if (x < 0.0) "Negative double" else "Positive double"
 }
@@ -1,14 +1,8 @@
-% !TEX root = ../../ctfp-reader.tex
+% !TEX root = ../../ctfp-print.tex

 \lettrine[lhang=0.17]{S}{o far} I've been glossing over the meaning of function types. A function
 type is different from other types.

-\begin{wrapfigure}[9]{R}{0pt}
-  \raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
-    \includegraphics[width=32mm]{images/set-hom-set.jpg}}%
-  \caption{Hom-set in Set is just a set}
-\end{wrapfigure}
-
 Take \code{Integer}, for instance: It's just a set of integers.
 \code{Bool} is a two element set. But a function type
 $a\to b$ is more than that: it's a set of morphisms
@@ -17,15 +11,23 @@ two objects in any category is called a hom-set. It just so happens that
 in the category $\Set$ every hom-set is itself an object in the
 same category ---because it is, after all, a \emph{set}.

+\begin{figure}[H]
+  \centering
+  \includegraphics[width=0.35\textwidth]{images/set-hom-set.jpg}
+  \caption{Hom-set in Set is just a set}
+\end{figure}
+
+\noindent
 The same is not true of other categories where hom-sets are external to
 a category. They are even called \emph{external} hom-sets.

-\begin{figure}[h]
-  \centering
-  \includegraphics[width=50mm]{images/hom-set.jpg}
-  \caption{Hom-set in category C is an external set}
+\begin{figure}[H]
+  \centering
+  \includegraphics[width=0.35\textwidth]{images/hom-set.jpg}
+  \caption{Hom-set in category C is an external set}
 \end{figure}

 \noindent
 It's the self-referential nature of the category $\Set$ that makes
 function types special. But there is a way, at least in some categories,
 to construct objects that represent hom-sets. Such objects are called
@@ -65,14 +67,14 @@ with an argument $x$ (an element of $a$) and map it to
 $f x$ (the application of $f$ to $x$, which is an
 element of $b$).

-\begin{figure}[h]
-  \centering
-  \includegraphics[width=50mm]{images/functionset.jpg}
+\begin{figure}[H]
+  \centering\includegraphics[width=0.35\textwidth]{images/functionset.jpg}
   \caption{In Set we can pick a function $f$ from a set of functions $z$ and we can
   pick an argument $x$ from the set (type) $a$. We get an element $f x$ in the
   set (type) $b$.}
 \end{figure}

 \noindent
 But instead of dealing with individual pairs $(f, x)$, we can as
 well talk about the whole \emph{product} of the function type $z$
 and the argument type $a$. The product $z\times{}a$ is an object,
@@ -83,13 +85,14 @@ function that maps every pair $(f, x)$ to $f x$.
 So that's the pattern: a product of two objects $z$ and
 $a$ connected to another object $b$ by a morphism $g$.

-\begin{figure}[h]
+\begin{figure}[H]
   \centering
-  \includegraphics[width=50mm]{images/functionpattern.jpg}
+  \includegraphics[width=0.4\textwidth]{images/functionpattern.jpg}
   \caption{A pattern of objects and morphisms that is the starting point of the
   universal construction}
 \end{figure}

 \noindent
 Is this pattern specific enough to single out the function type using a
 universal construction? Not in every category. But in the categories of
 interest to us it is. And another question: Would it be possible to
@@ -117,14 +120,15 @@ $z$ such that the application of $g'$ factors
 through the application of $g$. (Hint: Read this sentence while
 looking at the picture.)

-\begin{figure}[h]
+\begin{figure}[H]
   \centering
-  \includegraphics[width=50mm]{images/functionranking.jpg}
+  \includegraphics[width=0.4\textwidth]{images/functionranking.jpg}
   \caption{Establishing a ranking between candidates for the function object}
 \end{figure}

 \noindent
 Now here's the tricky part, and the main reason I postponed this
-particular universal construction till now. Given the morphism\\
+particular universal construction till now. Given the morphism
 $h \Colon z'\to z$, we want to close the diagram
 that has both $z'$ and $z$ crossed with $a$.
 What we really need, given the mapping $h$ from $z'$
@@ -156,9 +160,9 @@ way that its application morphism $g$ factorizes through
 $eval$. This object is better than any other object according to
 our ranking.

-\begin{figure}[h]
+\begin{figure}[H]
   \centering
-  \includegraphics[width=50mm]{images/universalfunctionobject.jpg}
+  \includegraphics[width=0.4\textwidth]{images/universalfunctionobject.jpg}
   \caption{The definition of the universal function object. This is the same
   diagram as above, but now the object $a \Rightarrow b$ is \emph{universal}.}
 \end{figure}
@@ -270,26 +274,26 @@ initializer lists).
 You can partially apply a C++ function using the template
 \code{std::bind}. For instance, given a function of two strings:

-\begin{Verbatim}
+\begin{snip}{cpp}
 std::string catstr(std::string s1, std::string s2) {
     return s1 + s2;
 }
-\end{Verbatim}
+\end{snip}
 you can define a function of one string:

-\begin{Verbatim}
+\begin{snip}{cpp}
 using namespace std::placeholders;

 auto greet = std::bind(catstr, "Hello ", _1);
 std::cout << greet("Haskell Curry");
-\end{Verbatim}
+\end{snip}
 Scala, which is more functional than C++ or Java, falls somewhere in
 between. If you anticipate that the function you're defining will be
 partially applied, you define it with multiple argument lists:

-\begin{Verbatim}
+\begin{snip}{cpp}
 def catstr(s1: String)(s2: String) = s1 + s2
-\end{Verbatim}
+\end{snip}
 Of course that requires some amount of foresight or prescience on the
 part of a library writer.

@@ -326,7 +330,7 @@ C++ Standard Library that are usually implemented using lookups.
 Functions like \code{isupper} or \code{isspace} are implemented
 using tables, which are equivalent to tuples of 256 Boolean values. A
 tuple is a product type, so we are dealing with products of 256
-Booleans: {\small\texttt{bool × bool × bool × ... × bool}}. We know from
+Booleans: \code{bool × bool × bool × ... × bool}. We know from
 arithmetics that an iterated product defines a power. If you
 ``multiply'' \code{bool} by itself 256 (or \code{char}) times, you
 get \code{bool} to the power of \code{char}, or \code{bool}\textsuperscript{\code{char}}.
@@ -563,14 +567,14 @@ Considering that \code{Void} translates into false, we get:
 Anything follows from falsehood (\emph{ex falso quodlibet}). Here's one
 possible proof (implementation) of this statement (function) in Haskell:

-\begin{Verbatim}
+\begin{snip}{haskell}
 absurd (Void a) = absurd a
-\end{Verbatim}
+\end{snip}
 where \code{Void} is defined as:

-\begin{Verbatim}
+\begin{snip}{haskell}
 newtype Void = Void Void
-\end{Verbatim}
+\end{snip}
 As always, the type \code{Void} is tricky. This definition makes it
 impossible to construct a value because in order to construct one, you
 would need to provide one. Therefore, the function \code{absurd} can
@@ -1,4 +1,4 @@
-% !TEX root = ../../ctfp-reader.tex
+% !TEX root = ../../ctfp-print.tex

 \lettrine[lhang=0.17]{I}{n the first part} of the book I argued that both category theory and
 programming are about composability. In programming, you keep
@@ -36,9 +36,9 @@ into computer code? The answer to this question is far from obvious and,
 if we could find it, we would probably revolutionize our understanding
 of the universe.

-\begin{wrapfigure}[9]{R}{0pt}
+\begin{wrapfigure}{r}{0pt}
   \raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
-    \includegraphics[width=50mm]{images/asteroids.png}}
+    \includegraphics[width=0.46\textwidth]{images/asteroids.png}}
 \end{wrapfigure}

 Let me elaborate. There is a similar duality in physics, which either
@@ -101,9 +101,10 @@ the speed of light in the water.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=1.56250in]{images/snell.jpg}
+  \includegraphics[width=0.3\textwidth]{images/snell.jpg}
 \end{figure}

 \noindent
 All of classical mechanics can be derived from the principle of least
 action. The action can be calculated for any trajectory by integrating
 the Lagrangian, which is the difference between kinetic and potential
@@ -117,7 +118,7 @@ area of low potential energy.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=1.56250in]{images/mortar.jpg}
+  \includegraphics[width=0.35\textwidth]{images/mortar.jpg}
 \end{figure}

 \noindent
@@ -129,7 +130,7 @@ probability of transition.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=1.56250in]{images/feynman.jpg}
+  \includegraphics[width=0.35\textwidth]{images/feynman.jpg}
 \end{figure}

 \noindent
@@ -181,9 +182,10 @@ factorizing the projections of other such objects.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=1.56250in]{images/productranking.jpg}
+  \includegraphics[width=0.35\textwidth]{images/productranking.jpg}
 \end{figure}

 \noindent
 Compare this with Fermat's principle of minimum time, or the principle
 of least action.
@@ -1,10 +1,10 @@
 trait Contravariant[F[_]] {
   def contramap[A, B](fa: F[A])(f: B => A) : F[B]
 }

 class ToString[A](f: A => String) extends AnyVal

-implicit val contravariant: Contravariant[ToString]
-  = new Contravariant[ToString] {
-  def contramap[A, B](fa: ToString[A])(f: B => A): ToString[B] =
-    ToString(fa.f compose f)
-}
+implicit val contravariant = new Contravariant[ToString] {
+  def contramap[A, B](fa: ToString[A])(f: B => A): ToString[B] =
+    ToString(fa.f compose f)
+}
@@ -1,4 +1,4 @@
-% !TEX root = ../../ctfp-reader.tex
+% !TEX root = ../../ctfp-print.tex

 \lettrine[lhang=0.17]{I}{t seems like in category theory} everything is related to everything and
 everything can be viewed from many angles. Take for instance the
@@ -9,7 +9,7 @@ try.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=1.56250in]{images/productpattern.jpg}
+  \includegraphics[width=0.3\textwidth]{images/productpattern.jpg}
 \end{figure}

 \noindent
@@ -28,19 +28,19 @@ functor collapses objects, which is fine too). It also maps morphisms
 --- in this case it simply maps identity morphisms to identity
 morphisms.

-\begin{figure}[H]
-  \centering
-  \includegraphics[width=1.56250in]{images/two.jpg}
-\end{figure}
-
-\noindent
 What's great about this approach is that it builds on categorical
 notions, eschewing the imprecise descriptions like "selecting
-objects", taken stra\-ight from the hunter-gatherer lexicon of our
+objects", taken straight from the hunter-gatherer lexicon of our
 ancestors. And, incidentally, it is also easily generalized, because
 nothing can stop us from using categories more complex than $\cat{2}$
 to define our patterns.

+\begin{figure}[H]
+  \centering
+  \includegraphics[width=0.35\textwidth]{images/two.jpg}
+\end{figure}
+
+\noindent
 But let's continue. The next step in the definition of a product is the
 selection of the candidate object $c$. Here again, we could
 rephrase the selection in terms of a functor from a singleton category.
@@ -53,7 +53,7 @@ objects into $c$ and all morphisms into $\idarrow[c]$.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=1.56250in]{images/twodelta.jpg}
+  \includegraphics[width=0.35\textwidth]{images/twodelta.jpg}
 \end{figure}

 \noindent
@@ -76,7 +76,7 @@ identities) in $\cat{2}$.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=1.56250in]{images/productcone.jpg}
+  \includegraphics[width=0.35\textwidth]{images/productcone.jpg}
 \end{figure}

 \noindent
@@ -100,7 +100,7 @@ under $D$.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=1.56250in]{images/cone.jpg}
+  \includegraphics[width=0.35\textwidth]{images/cone.jpg}
 \end{figure}

 \noindent
@@ -115,7 +115,7 @@ triangle are the components of the natural transformation.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=1.56250in]{images/conenaturality.jpg}
+  \includegraphics[width=0.35\textwidth]{images/conenaturality.jpg}
 \end{figure}

 \noindent
@@ -140,7 +140,7 @@ instance:

 \begin{figure}[H]
   \centering
-  \includegraphics[width=1.56250in]{images/productranking.jpg}
+  \includegraphics[width=0.35\textwidth]{images/productranking.jpg}
 \end{figure}

 This condition translates, in the general case, to the condition that
@@ -148,7 +148,7 @@ the triangles whose one side is the factorizing morphism all commute.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=40mm]{images/conecommutativity.jpg}
+  \includegraphics[width=0.35\textwidth]{images/conecommutativity.jpg}
   \caption{The commuting triangle connecting two cones, with the factorizing
   morphism $h$ (here, the lower cone is the universal one, with
   $\Lim[D]$ as its apex)}
@@ -227,13 +227,12 @@ And that's an element of $\cat{C}(c', \Lim[D])$--- so indeed, we
 have found a mapping of morphisms:

 \src{snippet02}
-
 Notice the inversion in the order of $c$ and $c'$
 characteristic of a \emph{contravariant} functor.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=40mm]{images/homsetmapping.jpg}
+  \includegraphics[width=0.35\textwidth]{images/homsetmapping.jpg}
 \end{figure}

 \noindent
@@ -267,7 +266,7 @@ natural transformation.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=50mm]{images/natmapping.jpg}
+  \includegraphics[width=0.4\textwidth]{images/natmapping.jpg}
 \end{figure}

 \noindent
@@ -348,7 +347,7 @@ and two projections:

 \begin{figure}[H]
   \centering
-  \includegraphics[width=40mm]{images/equalizercone.jpg}
+  \includegraphics[width=0.35\textwidth]{images/equalizercone.jpg}
 \end{figure}

 \noindent
@@ -403,7 +402,7 @@ with

 \begin{figure}[H]
   \centering
-  \includegraphics[width=40mm]{images/equilizerlimit.jpg}
+  \includegraphics[width=0.35\textwidth]{images/equilizerlimit.jpg}
 \end{figure}

 \noindent
@@ -430,7 +429,7 @@ and three morphisms:

 \begin{figure}[H]
   \centering
-  \includegraphics[width=40mm]{images/pullbackcone.jpg}
+  \includegraphics[width=0.35\textwidth]{images/pullbackcone.jpg}
 \end{figure}

 \noindent
@@ -443,7 +442,7 @@ A pullback is a universal cone of this shape.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=40mm]{images/pullbacklimit.jpg}
+  \includegraphics[width=0.35\textwidth]{images/pullbacklimit.jpg}
 \end{figure}

 \noindent
@@ -492,7 +491,7 @@ there is a name conflict between some methods of \code{B} and \code{C}.

 \begin{figure}[H]
   \centering
-  \includegraphics[width=35mm]{images/classes.jpg}
+  \includegraphics[width=0.25\textwidth]{images/classes.jpg}
 \end{figure}

 \noindent
@@ -500,46 +499,46 @@ There's also a more advanced use of a pullback in type inference. There
 is often a need to \emph{unify} types of two expressions. For instance,
 suppose that the compiler wants to infer the type of a function:

-\begin{Verbatim}
+\begin{snip}{haskell}
 twice f x = f (f x)
-\end{Verbatim}
+\end{snip}
 It will assign preliminary types to all variables and sub-expressions.
 In particular, it will assign:

-\begin{Verbatim}
+\begin{snip}{haskell}
 f :: t0
 x :: t1
 f x :: t2
 f (f x) :: t3
-\end{Verbatim}
+\end{snip}
 from which it will deduce that:

-\begin{Verbatim}
+\begin{snip}{haskell}
 twice :: t0 -> t1 -> t3
-\end{Verbatim}
+\end{snip}
 It will also come up with a set of constraints resulting from the rules
 of function application:

-\begin{Verbatim}
+\begin{snip}{haskell}
 t0 = t1 -> t2 -- because f is applied to x
 t0 = t2 -> t3 -- because f is applied to (f x)
-\end{Verbatim}
+\end{snip}
 These constraints have to be unified by finding a set of types (or type
 variables) that, when substituted for the unknown types in both
 expressions, produce the same type. One such substitution is:

-\begin{Verbatim}
+\begin{snip}{haskell}
 t1 = t2 = t3 = Int
twice :: (Int -> Int) -> Int -> Int
|
||||
\end{Verbatim}
|
||||
\end{snip}
|
||||
but, obviously, it's not the most general one. The most general
|
||||
substitution is obtained using a pullback. I won't go into the details,
|
||||
because they are beyond the scope of this book, but you can convince
|
||||
yourself that the result should be:
|
||||
|
||||
\begin{Verbatim}
|
||||
\begin{snip}{haskell}
|
||||
twice :: (t -> t) -> t -> t
|
||||
\end{Verbatim}
|
||||
\end{snip}
|
||||
with \code{t} a free type variable.
|
||||
|
||||
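As a side note on the hunk above: the most general type really does subsume the `Int` substitution. A minimal sketch (mine, not part of the book's snippet files):

```haskell
-- The most general type inferred for twice, as in the text above
twice :: (t -> t) -> t -> t
twice f x = f (f x)

-- Any substitution instance also typechecks, e.g. t = Int
twiceInt :: (Int -> Int) -> Int -> Int
twiceInt = twice

main :: IO ()
main = print (twice (+ 1) 0) -- prints 2
```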
\section{Colimits}
@ -553,7 +552,7 @@ co-cone.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/colimit.jpg}
\includegraphics[width=0.35\textwidth]{images/colimit.jpg}
\caption{Cocone with a factorizing morphism $h$ connecting two apexes.}
\end{figure}

@ -564,7 +563,7 @@ definition of the product.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/coproductranking.jpg}
\includegraphics[width=0.35\textwidth]{images/coproductranking.jpg}
\end{figure}

\noindent
@ -593,7 +592,7 @@ $F (\Lim[D])$.

\begin{figure}[H]
\centering
\includegraphics[width=3.12500in]{images/continuity.jpg}
\includegraphics[width=0.6\textwidth]{images/continuity.jpg}
\end{figure}

\noindent
@ -622,7 +621,6 @@ parameter, let's say to \code{String}, we get the contravariant
functor:

\src{snippet17}

Continuity means that when \code{ToString} is applied to a colimit,
for instance a coproduct \code{Either b c}, it will produce a limit;
in this case a product of two function types:

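The claim in the hunk above — a function out of a coproduct is a pair of functions — can be witnessed directly in plain Haskell; a sketch (names are illustrative, not from the book's snippets):

```haskell
-- A function out of Either b c is determined by two functions,
-- one per injection; split and combine witness the isomorphism.
split :: (Either b c -> String) -> (b -> String, c -> String)
split f = (f . Left, f . Right)

combine :: (b -> String, c -> String) -> Either b c -> String
combine (g, h) = either g h
```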
@ -1,9 +1,9 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{M}{onoids are an important} concept in both category theory and in
programming. Categories correspond to strongly typed langua\-ges, monoids
to untyped languages. That's because in a monoid you can compose any two
arrows, just as in an untyped language you can compose any two functions
\lettrine[lhang=0.17]{M}{onoids are an important} concept in both category
theory and in programming. Categories correspond to strongly typed languages,
monoids to untyped languages. That's because in a monoid you can compose any
two arrows, just as in an untyped language you can compose any two functions
(of course, you may end up with a runtime error when you execute your
program).

@ -44,7 +44,7 @@ $\{e, a, b, (a, a), (a, b), (b, a), (b, b)\}$.

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{images/bunnies.jpg}
\includegraphics[width=0.8\textwidth]{images/bunnies.jpg}
\end{figure}

\noindent
@ -187,7 +187,7 @@ which will pick the best representative of this pattern.

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/monoid-pattern.jpg}
\includegraphics[width=0.4\textwidth]{images/monoid-pattern.jpg}
\end{figure}

\noindent
@ -214,7 +214,7 @@ monoid, and so on.

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/monoid-ranking.jpg}
\includegraphics[width=0.4\textwidth]{images/monoid-ranking.jpg}
\end{figure}

\noindent
@ -248,9 +248,9 @@ We'll come back to free monoids when we talk about adjunctions.
homomorphism of monoids preserve the unit is redundant. After all, we
know that for all $a$

\begin{Verbatim}
\begin{snip}{text}
h a * h e = h (a * e) = h a
\end{Verbatim}
\end{snip}
So $h e$ acts like a right unit (and, by analogy, as a left
unit). The problem is that $h a$, for all $a$ might
only cover a sub-monoid of the target monoid. There may be a ``true''

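The point made in the hunk above can be made concrete: preserving multiplication alone does not force preservation of the unit. A small sketch (my own example, not from the book):

```haskell
-- h maps the additive monoid (Int, +, 0) into the multiplicative
-- monoid (Int, *, 1), sending everything to 0:
h :: Int -> Int
h _ = 0
-- h (a + b) == h a * h b holds (both sides are 0),
-- yet h 0 == 0, which is not the multiplicative unit 1.
-- The image {0} is a sub-monoid with its own "true" unit, 0.
```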
@ -1,13 +1,10 @@
implicit def arrowProfunctor = new Profunctor[Function1] {
def dimap[A, B, C, D]
(ab: A => B)(cd: C => D)(bc: B => C): A => D =
def dimap[A, B, C, D](ab: A => B)(cd: C => D)(bc: B => C): A => D =
cd compose bc compose ab

def lmap[A, B, C]
(f: C => A)(ab: A => B): C => B =
def lmap[A, B, C](f: C => A)(ab: A => B): C => B =
f andThen ab

def rmap[A, B, C]
(f: B => C)(ab: A => B): A => C =
def rmap[A, B, C](f: B => C)(ab: A => B): A => C =
f compose ab
}
@ -1,2 +1 @@
(fmap(f) _ compose alpha.apply) ==
(alpha.apply _ compose fmap(f))
(fmap(f) _ compose alpha.apply) == (alpha.apply _ compose fmap(f))
@ -1,6 +1,4 @@
fmap(f)(fmap(h)(List(12))) ==
fmap(f compose h)(List(12))
fmap(f)(fmap(h)(List(12))) == fmap(f compose h)(List(12))

// Or using scala stdlib:
List(12).map(h).map(f) ==
List(12).map(f compose h)
List(12).map(h).map(f) == List(12).map(f compose h)
@ -1,7 +1,5 @@
trait Representable[F[_]] {
type Rep

def tabulate[X](f: Rep => X): F[X]

def index[X]: F[X] => Rep => X
}
@ -1,4 +1,4 @@
final case class Stream[X](
case class Stream[X](
h: () => X,
t: () => Stream[X]
)
@ -1,17 +1,13 @@
implicit def streamRepresentable: Representable[Stream] {type Rep = Int} =
new Representable[Stream] {
type Rep = Int
implicit val streamRepresentable = new Representable[Stream] {
type Rep = Int

def tabulate[X](f: Rep => X)
: Stream[X] =
Stream[X](
() => f(0),
() => tabulate(f compose (_ + 1)))
def tabulate[X](f: Rep => X): Stream[X] =
Stream[X](() => f(0), () => tabulate(f compose (_ + 1)))

def index[X]
: Stream[X] => Rep => X = {
case Stream(b, bs) => n =>
def index[X]: Stream[X] => Rep => X = {
case Stream(b, bs) =>
n =>
if (n == 0) b()
else index(bs())(n - 1)
}
}
}
}
@ -1,11 +1,11 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{I}{t's about time} we had a little talk about sets. Mathematicians have a
love/hate relationship with set theory. It's the assembly language of
mathematics --- at least it used to be. Category theory tries to step
away from set theory, to some extent. For instance, it's a known fact
that the set of all sets doesn't exist, but the category of all sets,
$\Set$, does. So that's good. On the other hand, we assume that
\lettrine[lhang=0.17]{I}{t's about time} we had a little talk about sets.
Mathematicians have a love/hate relationship with set theory. It's the
assembly language of mathematics --- at least it used to be. Category
theory tries to step away from set theory, to some extent. For instance,
it's a known fact that the set of all sets doesn't exist, but the category
of all sets, $\Set$, does. So that's good. On the other hand, we assume that
morphisms between any two objects in a category form a set. We even
called it a hom-set. To be fair, there is a branch of category theory
where morphisms don't form sets. Instead they are objects in another
@ -59,7 +59,7 @@ mapping from $x$ to $\Set$.

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/hom-set.jpg}
\includegraphics[width=0.45\textwidth]{images/hom-set.jpg}
\end{figure}

\noindent
@ -86,7 +86,7 @@ member of $\cat{C}(a, y)$.

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/hom-functor.jpg}
\includegraphics[width=0.45\textwidth]{images/hom-functor.jpg}
\end{figure}

\noindent
@ -198,9 +198,9 @@ The other transformation, \code{beta}, goes the opposite way:
\src{snippet09}
It must respect naturality conditions, and it must be the inverse of \code{alpha}:

\begin{Verbatim}
\begin{snip}{text}
alpha . beta = id = beta . alpha
\end{Verbatim}
\end{snip}
We will see later that a natural transformation from $\cat{C}(a, -)$
to any $\Set$-valued functor always exists (Yoneda's lemma) but it
is not necessarily invertible.
@ -314,9 +314,9 @@ explore alternative implementations that have practical value.
\item
The functor:

\begin{Verbatim}
\begin{snip}{haskell}
Pair a = Pair a a
\end{Verbatim}
\end{snip}
is representable. Can you guess the type that represents it? Implement
\code{tabulate} and \code{index}.
\end{enumerate}

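One possible solution to the \code{Pair} exercise in the hunk above (a sketch, not the book's official answer): a \code{Pair a} holds exactly two values, so the two-element type \code{Bool} represents it.

```haskell
data Pair a = Pair a a deriving Show

-- tabulate and index witness Pair a being isomorphic to Bool -> a
tabulate :: (Bool -> a) -> Pair a
tabulate f = Pair (f False) (f True)

index :: Pair a -> Bool -> a
index (Pair x _) False = x
index (Pair _ y) True  = y
```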
@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{M}{ost constructions in} category theory are generalizations of results
from other more specific areas of mathematics. Things like products,
@ -56,7 +56,7 @@ regular functions between sets:

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/yoneda1.png}
\includegraphics[width=0.4\textwidth]{images/yoneda1.png}
\end{figure}

\noindent
@ -80,7 +80,7 @@ case of $x = a$.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/yoneda2.png}
\includegraphics[width=0.4\textwidth]{images/yoneda2.png}
\end{figure}

\noindent
@ -134,7 +134,7 @@ any choice of $q$ leads to a unique natural transformation.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/yoneda3.png}
\includegraphics[width=0.3\textwidth]{images/yoneda3.png}
\end{figure}

\noindent
@ -150,7 +150,7 @@ $\cat{C}(a, g)$; and under $F$ it's mapped to $F g$.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/yoneda4.png}
\includegraphics[width=0.4\textwidth]{images/yoneda4.png}
\end{figure}

\noindent
@ -187,7 +187,7 @@ thus determined.

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/yoneda5.png}
\includegraphics[width=0.4\textwidth]{images/yoneda5.png}
\end{figure}

\noindent
@ -234,9 +234,9 @@ to emphasize parametric polymorphism of natural transformations.
The Yoneda lemma tells us that these natural transformations are in
one-to-one correspondence with the elements of \code{F a}:

\begin{Verbatim}[commandchars=\\\{\}]
\begin{snipv}
forall x . (a -> x) -> F x \ensuremath{\cong} F a
\end{Verbatim}
\end{snipv}
The right hand side of this identity is what we would normally consider
a data structure. Remember the interpretation of functors as generalized
containers? \code{F a} is a container of \code{a}. But the left
@ -271,9 +271,9 @@ that is often used in compiler construction: the continuation passing
style or \acronym{CPS}. It's the simplest application of the Yoneda lemma to the
identity functor. Replacing \code{F} with identity produces:

\begin{Verbatim}[commandchars=\\\{\}]
\begin{snipv}
forall r . (a -> r) -> r \ensuremath{\cong} a
\end{Verbatim}
\end{snipv}
The interpretation of this formula is that any type \code{a} can be
replaced by a function that takes a ``handler'' for \code{a}. A
handler is a function accepting \code{a} and performing the rest of
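The CPS reading of the identity-functor formula in the hunk above can be written out directly; a sketch assuming the RankNTypes extension:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Both directions of: forall r. (a -> r) -> r  is isomorphic to  a
toCPS :: a -> (forall r. (a -> r) -> r)
toCPS a = \handler -> handler a

fromCPS :: (forall r. (a -> r) -> r) -> a
fromCPS k = k id

main :: IO ()
main = print (fromCPS (toCPS (42 :: Int))) -- prints 42
```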
@ -306,9 +306,9 @@ elements of the set $F a$:
\[\cat{Nat}(\cat{C}(-, a), F) \cong F a\]
Here's the Haskell version of the co-Yoneda lemma:

\begin{Verbatim}[commandchars=\\\{\}]
\begin{snipv}
forall x . (x -> a) -> F x \ensuremath{\cong} F a
\end{Verbatim}
\end{snipv}
Notice that in some literature it's the contravariant version that's
called the Yoneda lemma.

@ -320,14 +320,13 @@ called the Yoneda lemma.
Show that the two functions \code{phi} and \code{psi} that form
the Yoneda isomorphism in Haskell are inverses of each other.

\begin{Verbatim}
\begin{snip}{haskell}
phi :: (forall x . (a -> x) -> F x) -> F a
phi alpha = alpha id
\end{Verbatim}
\begin{Verbatim}

psi :: F a -> (forall x . (a -> x) -> F x)
psi fa h = fmap h fa
\end{Verbatim}
\end{snip}
\item
A discrete category is one that has objects but no morphisms other
than identity morphisms. How does the Yoneda lemma work for functors

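The \code{phi}/\code{psi} pair from the exercise above can be checked on a concrete functor; a sketch with \code{F} specialized to lists:

```haskell
{-# LANGUAGE RankNTypes #-}

-- phi and psi from the exercise, with F = []
phi :: (forall x. (a -> x) -> [x]) -> [a]
phi alpha = alpha id

psi :: [a] -> (forall x. (a -> x) -> [x])
psi fa h = fmap h fa

-- phi (psi fa) = psi fa id = fmap id fa = fa
main :: IO ()
main = print (phi (psi [1, 2, 3 :: Int])) -- prints [1,2,3]
```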
@ -6,8 +6,7 @@ trait ~>[F[_], G[_]] {
def apply[X](fa: F[X]): G[X]
}

def fromY[A, B]: (A => ?) ~> (B => ?) =
new ~>[A => ?, B => ?] {
def apply[X](f: A => X): B => X =
b => f(btoa(b))
}
def fromY[A, B]: (A => ?) ~> (B => ?) = new ~>[A => ?, B => ?] {
def apply[X](f: A => X): B => X =
b => f(btoa(b))
}
@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{W}{e've seen previously} that, when we fix an object $a$ in the
category $\cat{C}$, the mapping $\cat{C}(a, -)$ is a (covariant)
@ -42,7 +42,7 @@ $\cat{C}(b, -)$. We get:
\[[\cat{C}, \Set](\cat{C}(a, -), \cat{C}(b, -)) \cong \cat{C}(b, a)\]
\begin{figure}[H]
\centering
\includegraphics[width=3.87500in]{images/yoneda-embedding.jpg}
\includegraphics[width=0.6\textwidth]{images/yoneda-embedding.jpg}
\end{figure}

\noindent
@ -54,7 +54,7 @@ okay; it only means that the functor we are looking at is contravariant.

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{images/yoneda-embedding-2.jpg}
\includegraphics[width=0.65\textwidth]{images/yoneda-embedding-2.jpg}
\end{figure}

\noindent
@ -118,9 +118,9 @@ In Haskell, the Yoneda embedding can be represented as the isomorphism
between natural transformations amongst reader functors on the one hand,
and functions (going in the opposite direction) on the other hand:

\begin{Verbatim}[commandchars=\\\{\}]
\begin{snipv}
forall x. (a -> x) -> (b -> x) \ensuremath{\cong} b -> a
\end{Verbatim}
\end{snipv}
(Remember, the reader functor is equivalent to
\code{((->) a)}.)

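The Scala \code{fromY} in the hunk above has a direct Haskell counterpart; both directions of the embedding isomorphism, as a sketch assuming RankNTypes:

```haskell
{-# LANGUAGE RankNTypes #-}

-- b -> a gives a natural transformation between reader functors...
fromY :: (b -> a) -> (forall x. (a -> x) -> (b -> x))
fromY btoa f = f . btoa

-- ...and any such transformation yields b -> a back
toY :: (forall x. (a -> x) -> (b -> x)) -> (b -> a)
toY alpha = alpha id
```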
@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{I}{f I haven't} convinced you yet that category theory is all about
morphisms then I haven't done my job properly. Since the next topic is
@ -32,16 +32,17 @@ constructions (with the notable exceptions of the initial and terminal
objects). We've seen this in the definitions of products, coproducts,
various other (co-)limits, exponential objects, free monoids, etc.

\begin{wrapfigure}[8]{R}{0pt}
\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
\includegraphics[width=1.78125in]{images/productranking.jpg}}%
\end{wrapfigure}

The product is a simple example of a universal construction. We pick two
objects $a$ and $b$ and see if there exists an object
$c$, together with a pair of morphisms $p$ and $q$,
that has the universal property of being their product.

\begin{figure}[H]
\centering
\includegraphics[width=0.3\textwidth]{images/productranking.jpg}
\end{figure}

\noindent
A product is a special case of a limit. A limit is defined in terms of
cones. A general cone is built from commuting diagrams. Commutativity of
those diagrams may be replaced with a suitable naturality condition for
@ -59,7 +60,7 @@ of the natural transformation (which are also morphisms).

\begin{figure}[H]
\centering
\includegraphics[width=2.25000in]{images/3_naturality.jpg}
\includegraphics[width=0.35\textwidth]{images/3_naturality.jpg}
\end{figure}

\noindent
@ -75,10 +76,10 @@ you up and down or back and forth --- so to speak. You can visualize the
transformation maps one such sheet corresponding to F, to another,
corresponding to G.

\begin{wrapfigure}[10]{R}{0pt}
\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
\includegraphics[width=60mm]{images/sheets.png}}%
\end{wrapfigure}
\begin{figure}[H]
\centering
\includegraphics[width=0.35\textwidth]{images/sheets.png}
\end{figure}

\noindent
We've seen examples of this orthogonality in Haskell. There the action
@ -195,7 +196,7 @@ one direction.
A preorder, which is based on a relation that's not necessarily
antisymmetric, is also ``mostly'' directional, except for occasional
cycles. It's convenient to think of an arbitrary category as a
generalization of a preoder.
generalization of a preorder.

A preorder is a thin category --- all hom-sets are either singletons or
empty. We can visualize a general category as a ``thick'' preorder.
@ -208,7 +209,7 @@ empty. We can visualize a general category as a ``thick'' preorder.
Consider some degenerate cases of a naturality condition and draw the
appropriate diagrams. For instance, what happens if either functor
$F$ or $G$ map both objects $a$ and $b$
(the ends of\\ $f \Colon a \to b$) to the same
(the ends of $f \Colon a \to b$) to the same
object, e.g., $F a = F b$ or $G a = G b$?
(Notice that you get a cone or a co-cone this way.) Then consider
cases where either $F a = G a$ or $F b = G b$.

@ -1,2 +1,2 @@
(dimap(identity[A])(f) _ compose alpha) ==
(dimap(f)(identity[B] _ compose alpha)
(dimap(f)(identity[B] _ compose alpha)
@ -1,2 +1,2 @@
(dimap(f)(identity[B]) _ compose pi.apply) ==
(dimap(identity[A])(f) _ compose pi.apply)
(dimap(identity[A])(f) _ compose pi.apply)
@ -1,7 +1,5 @@
def lambda[A, B, P[_, _]](P: Profunctor[P])
: P[A, A] => (A => B) => P[A, B] =
def lambda[A, B, P[_, _]](P: Profunctor[P]): P[A, A] => (A => B) => P[A, B] =
paa => f => P.dimap(identity[A])(f)(paa)

def rho[A, B, P[_, _]](P: Profunctor[P])
: P[B, B] => (A => B) => P[A, B] =
def rho[A, B, P[_, _]](P: Profunctor[P]): P[B, B] => (A => B) => P[A, B] =
pbb => f => P.dimap(f)(identity[B])(pbb)
@ -1,5 +1,4 @@
def lambdaP[P[_, _]](P: Profunctor[P])
: DiaProd[P] => ProdP[P] = {
def lambdaP[P[_, _]](P: Profunctor[P]): DiaProd[P] => ProdP[P] = {
case DiaProd(paa) =>
new ProdP[P] {
def apply[A, B](f: A => B): P[A, B] =
@ -7,8 +6,7 @@ def lambdaP[P[_, _]](P: Profunctor[P])
}
}

def rhoP[P[_, _]](P: Profunctor[P])
: DiaProd[P] => ProdP[P] = {
def rhoP[P[_, _]](P: Profunctor[P]): DiaProd[P] => ProdP[P] = {
case DiaProd(paa) =>
new ProdP[P] {
def apply[A, B](f: A => B): P[A, B] =

@ -1,11 +1,11 @@
def lambda[P[- _, _]](P: Profunctor[P]): SumP[P] => DiagSum[P] =
sump => new DiagSum[P] {
def paa[A]: P[A, A] =
P.dimap(sump.f)(identity[A])(sump.pab)
}

def rho[P[_, _]](P: Profunctor[P]): SumP[P] => DiagSum[P] =
sump => new DiagSum[P] {
def paa[A]: P[A, A] =
P.dimap(identity[A])(sump.f)(sump.pab)
}
sump => new DiagSum[P] {
def paa[A]: P[A, A] =
P.dimap(sump.f)(identity[A])(sump.pab)
}

def rho[P[_, _]](P: Profunctor[P]): SumP[P] => DiagSum[P] =
sump => new DiagSum[P] {
def paa[A]: P[A, A] =
P.dimap(identity[A])(sump.f)(sump.pab)
}
@ -1,6 +1,6 @@
trait Procompose[Q[_, _], P[_, _], A, B]

object Procompose{
def apply[Q[_, _], P[_, _], A, B, C]
(qac: Q[A, C])(pcb: P[C, B])
: Procompose[Q, P, A, B] = ???
(qac: Q[A, C])(pcb: P[C, B]): Procompose[Q, P, A, B] = ???
}
@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{T}{here are many intuitions} that we may attach to morphisms in a category,
but we can all agree that if there is a morphism from the object
@ -78,7 +78,7 @@ for which the following diagram commutes, for any $f \Colon a \to b$:

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/end.jpg}
\includegraphics[width=0.35\textwidth]{images/end.jpg}
\end{figure}

\noindent
@ -90,7 +90,7 @@ $q$ preserving composition):

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/end-1.jpg}
\includegraphics[width=0.4\textwidth]{images/end-1.jpg}
\end{figure}

\noindent
@ -127,7 +127,7 @@ commute:

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/end-2.jpg}
\includegraphics[width=0.4\textwidth]{images/end-2.jpg}
\end{figure}

\noindent
@ -145,7 +145,7 @@ $h \Colon a \to e$ that makes all triangles commute:

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/end-21.jpg}
\includegraphics[width=0.4\textwidth]{images/end-21.jpg}
\end{figure}

\noindent
@ -172,9 +172,9 @@ $f \Colon a \to b$, the wedge condition reads:
\src{snippet06}
or, with type annotations:

\begin{Verbatim}[commandchars=\\\{\}]
\begin{snipv}
dimap f id\textsubscript{b} . pi\textsubscript{b} = dimap id\textsubscript{a} f . pi\textsubscript{a}
\end{Verbatim}
\end{snipv}
where both sides of the equation have the type:

\src{snippet07}
@ -207,7 +207,7 @@ notation, because mathematical notation seems to be less user-friendly
in this case). These functions correspond to the two converging branches
of the wedge condition:

\src{snippet09}
\src{snippet09}[b]
Both functions map diagonal elements of the profunctor \code{p} to
polymorphic functions of the type:

@ -246,7 +246,7 @@ condition:

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/end1.jpg}
\includegraphics[width=0.4\textwidth]{images/end1.jpg}
\end{figure}

\noindent
@ -272,12 +272,13 @@ As expected, the dual to an end is called a coend. It is constructed
from a dual to a wedge called a cowedge (pronounced co-wedge, not
cow-edge).

\begin{wrapfigure}[5]{R}{0pt}
\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
\includegraphics[width=30mm]{images/end-31.jpg}}%
\begin{figure}[H]
\centering
\includegraphics[width=0.25\textwidth]{images/end-31.jpg}
\caption{An edgy cow?}
\end{wrapfigure}
\end{figure}

\noindent
The symbol for a coend is the integral sign with the ``integration
variable'' in the superscript position:
\[\int^c p\ c\ c\]
@ -295,9 +296,9 @@ coend.

This is why, in pseudo-Haskell, we would define the coend as:

\begin{Verbatim}
\begin{snip}{text}
exists a. p a a
\end{Verbatim}
\end{snip}
The standard way of encoding existential quantifiers in Haskell is to
use universally quantified data constructors. We can thus define:

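The hunk above cuts off before the definition; a sketch of the standard encoding it describes (my reconstruction, not the verbatim book source):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- exists a. p a a, encoded with a universally quantified constructor:
-- the constructor works for all a, so the a inside stays hidden.
data Coend p = forall a. Coend (p a a)
```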
@ -359,9 +360,9 @@ identity for converting coends to ends:
\[\Set(\int^x p\ x\ x, c) \cong \int_x \Set(p\ x\ x, c)\]
Let's have a look at it in pseudo-Haskell:

\begin{Verbatim}[commandchars=\\\{\}]
\begin{snipv}
(exists x. p x x) -> c \ensuremath{\cong} forall x. p x x -> c
\end{Verbatim}
\end{snipv}
It tells us that a function that takes an existential type is equivalent
to a polymorphic function. This makes perfect sense, because such a
function must be prepared to handle any one of the types that may be
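The pseudo-Haskell identity above can be witnessed in real Haskell once the existential is wrapped in a data type; a sketch (the \code{Diag} wrapper is hypothetical):

```haskell
{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

-- Diag wraps the existential (exists x. p x x)
data Diag p = forall x. Diag (p x x)

-- A handler of the existential is exactly a polymorphic function:
fromPoly :: (forall x. p x x -> c) -> Diag p -> c
fromPoly f (Diag pxx) = f pxx

toPoly :: (Diag p -> c) -> p x x -> c
toPoly g pxx = g (Diag pxx)
```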
@ -426,7 +427,7 @@ and the Cartesian product of two sets corresponds to ``pairs of
proofs,'' we can define composition of profunctors using the following
formula:
\[(q \circ p)\ a\ b = \int^c p\ c\ a\times{}q\ b\ c\]
Here's the equivalent Haskell definition from\\
Here's the equivalent Haskell definition from
\code{Data.Profunctor.Composition}, after some renaming:

\src{snippet18}
@ -435,9 +436,9 @@ a free type variable (here \code{c}) is automatically existentially
quantified. The (uncurried) data constructor \code{Procompose} is
thus equivalent to:

\begin{Verbatim}
\begin{snip}{text}
exists c. (q a c, p c b)
\end{Verbatim}
\end{snip}
The unit of so defined composition is the hom-functor --- this
immediately follows from the Ninja Yoneda lemma. It makes sense,
therefore, to ask the question if there is a category in which

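With the existential made explicit, the uncurried constructor described above can be sketched as follows (modulo the renaming the text mentions; not the verbatim library source):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- exists c. (q a c, p c b), with the middle type c hidden
data Procompose q p a b = forall c. Procompose (q a c) (p c b)
```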
@ -1,19 +1,16 @@
|
||||
// Read more about foldMap:
|
||||
// typelevel.org/cats/typeclasses/foldable.html
|
||||
def foldMap[F[_], A, B](fa: F[A])(f: A => B)
|
||||
(implicit B: Monoid[B]): B = ???
|
||||
implicit def listMonoid[A]
|
||||
: Monoid[List[A]] = ???
|
||||
(implicit B: Monoid[B]): B = ???
|
||||
implicit def listMonoid[A]: Monoid[List[A]] = ???
|
||||
|
||||
def toLst[A]: List[A] => Lst[A] =
|
||||
as => new Lst[A] {
|
||||
def apply(): `PolyFunctionM`[aTo, Id] =
|
||||
new `PolyFunctionM`[aTo, Id] {
|
||||
def apply[I: Monoid]
|
||||
(fa: aTo[I]): Id[I] =
|
||||
foldMap(as)(fa)
|
||||
}
|
||||
}
|
||||
def toLst[A]: List[A] => Lst[A] = as => new Lst[A] {
|
||||
def apply(): `PolyFunctionM`[aTo, Id] =
|
||||
new `PolyFunctionM`[aTo, Id] {
|
||||
def apply[I: Monoid](fa: aTo[I]): Id[I] =
|
||||
foldMap(as)(fa)
|
||||
}
|
||||
}
|
||||
|
||||
def fromLst[A]: Lst[A] => List[A] =
|
||||
f => f().apply(a => List(a))
|
@ -1,13 +1,11 @@
|
||||
def fst[I]: ((I, _)) => I = _._1
|
||||
|
||||
def toExp[A, B]
|
||||
: (A => B) => Exp[A, B] = f =>
|
||||
new Lan[(A, ?), I, B] {
|
||||
def fk[L](ki: (A, L)): B =
|
||||
f.compose(fst[A])(ki)
|
||||
def toExp[A, B]: (A => B) => Exp[A, B] = f => new Lan[(A, ?), I, B] {
|
||||
def fk[L](ki: (A, L)): B =
|
||||
f.compose(fst[A])(ki)
|
||||
|
||||
def di[L]: I[L] = I()
|
||||
}
|
||||
def di[L]: I[L] = I()
|
||||
}
|
||||
|
||||
def fromExp[A, B]: Exp[A, B] => (A => B) =
|
||||
lan => a => lan.fk((a, lan.di))
|
@ -1,10 +1,8 @@
|
||||
implicit def freeFunctor[F[_]] =
|
||||
new Functor[FreeF[F, ?]] {
|
||||
def fmap[A, B](g: A => B)(fa: FreeF[F, A])
|
||||
: FreeF[F, B] = {
|
||||
new FreeF[F, B] {
|
||||
def h[I]: I => B = g compose fa.h
|
||||
def fi[I]: F[I] = fi
|
||||
}
|
||||
}
|
||||
}
|
||||
implicit def freeFunctor[F[_]] = new Functor[FreeF[F, ?]] {
|
||||
def fmap[A, B](g: A => B)(fa: FreeF[F, A]): FreeF[F, B] = {
|
||||
new FreeF[F, B] {
|
||||
def h[I]: I => B = g compose fa.h
|
||||
def fi[I]: F[I] = fi
|
||||
}
|
||||
}
|
||||
}
|
@ -1,12 +1,10 @@
|
||||
implicit def freeFunctor[F[_]] =
|
||||
new Functor[FreeF[F, ?]] {
|
||||
def fmap[A, B](g: A => B)(fa: FreeF[F, A])
|
||||
: FreeF[F, B] = fa match {
|
||||
case FreeF(r) => FreeF {
|
||||
new ~>[B => ?, F] {
|
||||
def apply[C](bi: B => C): F[C] =
|
||||
r(bi compose g)
|
||||
}
|
||||
}
|
||||
implicit def freeFunctor[F[_]] = new Functor[FreeF[F, ?]] {
|
||||
def fmap[A, B](g: A => B)(fa: FreeF[F, A]): FreeF[F, B] = fa match {
|
||||
case FreeF(r) => FreeF {
|
||||
new ~>[B => ?, F] {
|
||||
def apply[C](bi: B => C): F[C] =
|
||||
r(bi compose g)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
@ -1,13 +1,8 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{S}{o far we've been} mostly working with a single category or a pair of
categories. In some cases that was a little too constraining.

\begin{wrapfigure}[8]{R}{0pt}
\raisebox{0pt}[\dimexpr\height-0.75\baselineskip\relax]{
\includegraphics[width=1.70833in]{images/kan2.jpg}}%
\end{wrapfigure}

For instance, when defining a limit in a category $\cat{C}$, we introduced an
index category $\cat{I}$ as the template for the pattern that would
form the basis for our cones. It would have made sense to introduce
@ -20,6 +15,12 @@ categories. Let's start with the functor $D$ from the index
category $\cat{I}$ to $\cat{C}$. This is the functor that selects the base
of the cone --- the diagram functor.

\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{images/kan2.jpg}
\end{figure}

\noindent
The new addition is the category $\cat{1}$ that contains a single
object (and a single identity morphism). There is only one possible
functor $K$ from $\cat{I}$ to this category. It maps all objects
@ -29,7 +30,7 @@ potential apex for our cone.

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/kan15.jpg}
\includegraphics[width=0.4\textwidth]{images/kan15.jpg}
\end{figure}

\noindent
@ -40,7 +41,7 @@ transformation.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/kan3-e1492120491591.jpg}
\includegraphics[width=0.4\textwidth]{images/kan3-e1492120491591.jpg}
\end{figure}

\noindent
@ -59,7 +60,7 @@ $D$.

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/kan31-e1492120512209.jpg}
\includegraphics[width=0.4\textwidth]{images/kan31-e1492120512209.jpg}
\end{figure}

\noindent
@ -74,7 +75,7 @@ $\varepsilon$.

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/kan5.jpg}
\includegraphics[width=0.4\textwidth]{images/kan5.jpg}
\end{figure}

\noindent
@ -108,7 +109,7 @@ This is quite a mouthful, but it can be visualized in this nice diagram:

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/kan7.jpg}
\includegraphics[width=0.4\textwidth]{images/kan7.jpg}
\end{figure}

\noindent
@ -133,7 +134,7 @@ $F \circ K$ to $D$. (The left Kan extension picks the other direction.)

\begin{figure}[H]
\centering
\includegraphics[width=80mm]{images/kan6.jpg}
\includegraphics[width=0.4\textwidth]{images/kan6.jpg}
\end{figure}

\noindent
@ -158,7 +159,7 @@ transformation we called $\sigma$.

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/kan92.jpg}
\includegraphics[width=0.4\textwidth]{images/kan92.jpg}
\end{figure}

\noindent
@ -205,7 +206,7 @@ base, and the functor $F \Colon \cat{1} \to \cat{C}$ to select its apex.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/kan81.jpg}
\includegraphics[width=0.4\textwidth]{images/kan81.jpg}
\end{figure}

\noindent
@ -214,7 +215,7 @@ transformation $\eta$ from $D$ to $F \circ K$.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/kan10a.jpg}
\includegraphics[width=0.4\textwidth]{images/kan10a.jpg}
\end{figure}

\noindent
@ -224,7 +225,7 @@ $F'$ and a natural transformation

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/kan10b.jpg}
\includegraphics[width=0.4\textwidth]{images/kan10b.jpg}
\end{figure}

\noindent
@ -232,7 +233,7 @@ there is a unique natural transformation $\sigma$ from $F$ to $F'$

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/kan14.jpg}
\includegraphics[width=0.4\textwidth]{images/kan14.jpg}
\end{figure}

\noindent
@ -242,7 +243,7 @@ This is illustrated in the following diagram:

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/kan112.jpg}
\includegraphics[width=0.4\textwidth]{images/kan112.jpg}
\end{figure}

\noindent
@ -252,7 +253,7 @@ extension, denoted by $\Lan_{K}D$.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/kan12.jpg}
\includegraphics[width=0.4\textwidth]{images/kan12.jpg}
\end{figure}

\noindent
@ -300,7 +301,7 @@ hom-functor:

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/kan13.jpg}
\includegraphics[width=0.4\textwidth]{images/kan13.jpg}
\end{figure}

\noindent
@ -391,9 +392,9 @@ The left Kan extension is a coend:
\[\Lan_{K}D\ a = \int^i \cat{A}(K\ i, a)\times{}D\ i\]
so it translates to an existential quantifier. Symbolically:

\begin{Verbatim}
\begin{snip}{haskell}
Lan k d a = exists i. (k i -> a, d i)
\end{Verbatim}
\end{snip}
This can be encoded in Haskell using \acronym{GADT}s, or using a universally
quantified data constructor:

@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{A}{ category is small} if its objects form a set. But we know that
there are things larger than sets. Famously, a set of all sets cannot be
@ -165,7 +165,7 @@ of morphisms is replaced by a family of morphisms in $\cat{V}$:

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/composition.jpg}
\includegraphics[width=0.45\textwidth]{images/composition.jpg}
\end{figure}

\noindent
@ -176,7 +176,7 @@ where $i$ is the tensor unit in $\cat{V}$.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/id.jpg}
\includegraphics[width=0.4\textwidth]{images/id.jpg}
\end{figure}

\noindent

@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{I}{ realize that we might} be getting away from programming and diving into
hard-core math. But you never know what the next big revolution in
@ -63,7 +63,7 @@ Such a family of equivalent injections defines a subset of $b$.

\begin{figure}[H]
\centering
\includegraphics[width=2.29167in]{images/subsetinjection.jpg}
\includegraphics[width=0.4\textwidth]{images/subsetinjection.jpg}
\end{figure}

\noindent
@ -81,7 +81,7 @@ it must be that $g = g'$.

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/monomorphism.jpg}
\includegraphics[width=0.4\textwidth]{images/monomorphism.jpg}
\end{figure}

\noindent
@ -94,7 +94,7 @@ postcomposition with $m$ would then mask this difference.

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/notmono.jpg}
\includegraphics[width=0.4\textwidth]{images/notmono.jpg}
\end{figure}

\noindent
@ -113,7 +113,7 @@ $true$:

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/true.jpg}
\includegraphics[width=0.4\textwidth]{images/true.jpg}
\end{figure}

\noindent
@ -137,7 +137,7 @@ pullback diagram:

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/pullback.jpg}
\includegraphics[width=0.4\textwidth]{images/pullback.jpg}
\end{figure}

\noindent

@ -1,3 +1,3 @@
sealed trait Option[+A]
case object None extends Option[Nothing]
final case class Some[A](a: A) extends Option[A]
case class Some[A](a: A) extends Option[A]
@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{N}{owadays you can't} talk about functional programming without mentioning
monads. But there is an alternative universe in which, by chance,
@ -68,7 +68,7 @@ the roadmap:

\begin{figure}[H]
\centering
\includegraphics[width=90mm]{images/lawvere1.png}
\includegraphics[width=0.8\textwidth]{images/lawvere1.png}
\end{figure}

\section{Lawvere Theories}
@ -163,7 +163,7 @@ continuous).

\begin{figure}[H]
\centering
\includegraphics[width=90mm]{images/lawvere1.png}
\includegraphics[width=0.8\textwidth]{images/lawvere1.png}
\caption{Lawvere theory $\cat{L}$ is based on $\Fop$, from which
it inherits the ``boring'' morphisms that define the products. It adds
the ``interesting'' morphisms that describe the $n$-ary operations (dotted
@ -429,7 +429,7 @@ $(a_1, a_2,...a_n)$ (possibly with repetitions).

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/liftpower.png}
\includegraphics[width=0.5\textwidth]{images/liftpower.png}
\end{figure}

\noindent

@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{T}{here is no good place} to end a book on category theory. There's always
more to learn. Category theory is a vast subject. At the same time, it's
@ -48,7 +48,7 @@ morphisms are $2$-cells.

\begin{figure}[H]
\centering
\includegraphics[width=1.53125in]{images/twocat.png}
\includegraphics[width=0.35\textwidth]{images/twocat.png}
\caption{$0$-cells $a, b$; $1$-cells $f, g$; and a $2$-cell $\alpha$.}
\end{figure}

@ -100,9 +100,9 @@ that has an inverse.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/bicat.png}
\includegraphics[width=0.35\textwidth]{images/bicat.png}
\caption{Identity law in a bicategory holds up to isomorphism (an invertible
$2$-cell \rho).}
$2$-cell $\rho$).}
\end{figure}

\noindent
@ -123,7 +123,7 @@ g \Colon x \to b

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/span.png}
\includegraphics[width=0.35\textwidth]{images/span.png}
\end{figure}

\noindent
@ -138,7 +138,7 @@ g' \Colon y \to c

\begin{figure}[H]
\centering
\includegraphics[width=2.26042in]{images/compspan.png}
\includegraphics[width=0.5\textwidth]{images/compspan.png}
\end{figure}

\noindent
@ -156,7 +156,7 @@ which is universal among all such objects.

\begin{figure}[H]
\centering
\includegraphics[width=2.42708in]{images/pullspan.png}\\
\includegraphics[width=0.5\textwidth]{images/pullspan.png}
\end{figure}

\noindent
@ -170,7 +170,7 @@ triangles commute.

\begin{figure}[H]
\centering
\includegraphics[width=1.70833in]{images/morphspan.png}
\includegraphics[width=0.4\textwidth]{images/morphspan.png}
\caption{A $2$-cell in $\cat{Span}$.}
\end{figure}

@ -207,7 +207,7 @@ we get:

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/monad.png}
\includegraphics[width=0.3\textwidth]{images/monad.png}
\end{figure}

\noindent
@ -228,7 +228,7 @@ satisfying the monoid laws. We call \emph{this} a monad.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/bimonad.png}
\includegraphics[width=0.3\textwidth]{images/bimonad.png}
\end{figure}

\noindent
@ -239,7 +239,7 @@ bicategories.

Let's construct a monad in $\cat{Span}$. We pick a $0$-cell, which is a
set that, for reasons that will become clear soon, I will call
$Ob$. Next, we pick an endo-1-cell: a span from $Ob$ back
$Ob$. Next, we pick an endo-$1$-cell: a span from $Ob$ back
to $Ob$. It has a set at the apex, which I will call $Ar$,
equipped with two functions:
\begin{align*}
@ -249,7 +249,7 @@ cod &\Colon Ar \to Ob

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/spanmonad.png}
\includegraphics[width=0.3\textwidth]{images/spanmonad.png}
\end{figure}

\noindent
@ -272,7 +272,7 @@ cod &\circ \eta = \id

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/spanunit.png}
\includegraphics[width=0.4\textwidth]{images/spanunit.png}
\end{figure}

\noindent
@ -293,7 +293,7 @@ domain of one is the codomain of the other.

\begin{figure}[H]
\centering
\includegraphics[width=2.75000in]{images/spanmul.png}
\includegraphics[width=0.5\textwidth]{images/spanmul.png}
\end{figure}

\noindent

@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{I}{n mathematics we have} various ways of saying that one thing is like
another. The strictest is equality. Two things are equal if there is no
@ -37,7 +37,7 @@ and another in $\cat{D}$.

\begin{figure}[H]
\centering
\includegraphics[width=3.12500in]{images/adj-1.jpg}
\includegraphics[width=0.5\textwidth]{images/adj-1.jpg}
\end{figure}

\noindent
@ -105,7 +105,7 @@ counit in more detail.

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/adj-unit.jpg}
\includegraphics[width=0.5\textwidth]{images/adj-unit.jpg}
\end{figure}

\noindent
@ -124,7 +124,7 @@ shoot an arrow --- the morphism $\eta_d$ --- to our target.

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/adj-counit.jpg}
\includegraphics[width=0.5\textwidth]{images/adj-counit.jpg}
\end{figure}

\noindent
@ -255,7 +255,7 @@ adjunction.

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/adj-homsets.jpg}
\includegraphics[width=0.5\textwidth]{images/adj-homsets.jpg}
\end{figure}

\noindent
@ -335,7 +335,7 @@ $R$ are called \code{f} and \code{u}:

\src{snippet05}
The equivalence between the \code{unit}/\code{counit} formulation
and the\\ \code{leftAdjunct}/\code{rightAdjunct} formulation is
and the \code{leftAdjunct}/\allowbreak\code{rightAdjunct} formulation is
witnessed by these mappings:

\src{snippet06}
@ -412,7 +412,7 @@ morphisms in the product category $\cat{C}\times{}\cat{C}$.

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/adj-productcat.jpg}
\includegraphics[width=0.5\textwidth]{images/adj-productcat.jpg}
\end{figure}

\noindent
@ -467,7 +467,7 @@ $a\times{}b$. We recognize this element of the hom-set as the

\begin{figure}[H]
\centering
\includegraphics[width=3.12500in]{images/adj-product.jpg}
\includegraphics[width=0.5\textwidth]{images/adj-product.jpg}
\end{figure}

\noindent
@ -491,10 +491,10 @@ In Haskell, we can always construct the inverse of the
\code{factorizer} by composing \code{m} with, respectively,
\code{fst} and \code{snd}.

\begin{Verbatim}
\begin{snip}{haskell}
p = fst . m
q = snd . m
\end{Verbatim}
\end{snip}
To complete the proof of the equivalence of the two ways of defining a
product we also need to show that the mapping between hom-sets is
natural in $a$, $b$, and $c$. I will leave this as
@ -522,7 +522,7 @@ can be seen as an adjunction. Again, the trick is to concentrate on the
statement:

\begin{quote}
For any other object $z$ with a morphism $g \Colon z\times{}a \to b$ \\
For any other object $z$ with a morphism $g \Colon z\times{}a \to b$
there is a unique morphism $h \Colon z \to (a \Rightarrow b)$
\end{quote}
This statement establishes a mapping between hom-sets.
@ -542,7 +542,7 @@ redrawing the diagram that we used in the universal construction.

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/adj-expo.jpg}
\includegraphics[width=0.4\textwidth]{images/adj-expo.jpg}
\end{figure}

\noindent

@ -4,9 +4,7 @@ abstract class Adjunction[F[_], U[_]](
){
  // changed the order of parameters
  // to help Scala compiler infer types
  def leftAdjunct[A, B]
    (a: A)(f: F[A] => B): U[B]
  def leftAdjunct[A, B](a: A)(f: F[A] => B): U[B]

  def rightAdjunct[A, B]
    (fa: F[A])(f: A => U[B]): B
  def rightAdjunct[A, B](fa: F[A])(f: A => U[B]): B
}
@ -4,10 +4,8 @@ def unit[A](a: A): U[F[A]] =
def counit[A](a: F[U[A]]): A =
  rightAdjunct(a)(identity)

def leftAdjunct[A, B]
  (a: A)(f: F[A] => B): U[B] =
def leftAdjunct[A, B](a: A)(f: F[A] => B): U[B] =
  U.map(unit(a))(f)

def rightAdjunct[A, B]
  (a: F[A])(f: A => U[B]): B =
def rightAdjunct[A, B](a: F[A])(f: A => U[B]): B =
  counit(F.map(a)(f))
@ -1,5 +1,5 @@
def toNat: List[Unit] => Int =
  _.length

def toLst: Int => List[Unit] =
  n => List.fill(n)(())
  _.length

def toLst: Int => List[Unit] =
  n => List.fill(n)(())
@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{F}{ree constructions are} a powerful application of adjunctions. A
\newterm{free functor} is defined as the left adjoint to a \newterm{forgetful
@ -22,8 +22,8 @@ hand, it's a bunch of sets with multiplication and unit elements. On the
other hand, it's a category with featureless objects whose only
structure is encoded in morphisms that go between them. Every
set-function that preserves multiplication and unit gives rise to a
morphism in $\cat{Mon}$.

morphism in $\cat{Mon}$.\\
\newline
Things to keep in mind:

\begin{itemize}
@ -35,15 +35,6 @@ Things to keep in mind:
are functions between their underlying sets.
\end{itemize}

\begin{figure}[H]
\centering
\includegraphics[width=80mm]{images/forgetful.jpg}
\caption{Monoids $m_1$ and $m_2$ have the same
underlying set. There are more functions between the underlying sets of
$m_2$ and $m_3$ than there are morphisms
between them.}
\end{figure}

\noindent
The functor $F$ that's the left adjoint to the forgetful functor
$U$ is the free functor that builds free monoids from their
@ -51,6 +42,16 @@ generator sets. The adjunction follows from the free monoid
universal construction we've discussed before.\footnote{See ch.13 on
\hyperref[free-monoids]{free monoids}.}

\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{images/forgetful.jpg}
\caption{Monoids $m_1$ and $m_2$ have the same
underlying set. There are more functions between the underlying sets of
$m_2$ and $m_3$ than there are morphisms
between them.}
\end{figure}

\noindent
In terms of hom-sets, we can write this adjunction as:
\[\cat{Mon}(F x, m) \cong \Set(x, U m)\]
This (natural in $x$ and $m$) isomorphism tells us that:
@ -72,7 +73,7 @@ This (natural in $x$ and $m$) isomorphism tells us that:

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{images/freemonadjunction.jpg}
\includegraphics[width=0.8\textwidth]{images/freemonadjunction.jpg}
\end{figure}

\noindent
@ -174,7 +175,7 @@ whereas the ones in $\cat{D}(U c', U c)$ don't.

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/forgettingmorphisms.jpg}
\includegraphics[width=0.45\textwidth]{images/forgettingmorphisms.jpg}
\end{figure}

\noindent
@ -195,7 +196,7 @@ objects.

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/freeimage.jpg}
\includegraphics[width=0.45\textwidth]{images/freeimage.jpg}
\end{figure}

\noindent

@ -1,8 +1,8 @@
case class Writer[W, A](run: (A, W))

implicit def writerFunctor[W] = new Functor[Writer[W, ?]] {
  def fmap[A, B](f: A => B)(fa: Writer[W, A]): Writer[W, B] =
    fa match {
      case Writer((a, w)) => Writer(f(a), w)
    }
}
  def fmap[A, B](f: A => B)(fa: Writer[W, A]): Writer[W, B] =
    fa match {
      case Writer((a, w)) => Writer(f(a), w)
    }
}
@ -1,5 +1,4 @@
trait Monad[M[_]] {
  def >=>[A, B, C](m1: A => M[B], m2: B => M[C]): A => M[C]

  def pure[A](a: A): M[A]
}
@ -1,25 +1,20 @@
implicit def writerMonad[W: Monoid] =
  new Functor[Writer[W, ?]] {
    def >=>[A, B, C](
      f: A => Writer[W, B],
      g: B => Writer[W, C]) =
      a => {
        val Writer((b, s1)) = f(a)
        val Writer((c, s2)) = g(b)
        Writer((c, Monoid[W].combine(s1, s2)))
      }

    def pure[A](a: A) =
      Writer(a, Monoid[W].empty)
implicit def writerMonad[W: Monoid] = new Functor[Writer[W, ?]] {
  def >=>[A, B, C](f: A => Writer[W, B], g: B => Writer[W, C]) =
    a => {
      val Writer((b, s1)) = f(a)
      val Writer((c, s2)) = g(b)
      Writer((c, Monoid[W].combine(s1, s2)))
    }

object kleisliSyntax {
  //allows us to use >=> as an infix operator
  implicit class MonadOps[M[_], A, B]
    (m1: A => M[B]) {
    def >=>[C](m2: B => M[C])
      (implicit m: Monad[M]): A => M[C] = {
      m.>=>(m1, m2)
    }

  def pure[A](a: A) =
    Writer(a, Monoid[W].empty)
}

object kleisliSyntax {
  //allows us to use >=> as an infix operator
  implicit class MonadOps[M[_], A, B](m1: A => M[B]) {
    def >=>[C](m2: B => M[C])(implicit m: Monad[M]): A => M[C] = {
      m.>=>(m1, m2)
    }
  }
}
}
@ -1,2 +1,2 @@
def tell[W](s: W): Writer[W, Unit] =
  Writer((), s)
  Writer((), s)
@ -1,3 +1,2 @@
def >=>[A, B, C]
  (f: A => M[B], g: B => M[C]) =
def >=>[A, B, C](f: A => M[B], g: B => M[C]) =
  a => {...}
@ -1,5 +1,4 @@
def >=>[A, B, C]
  (f: A => M[B], g: B => M[C]) =
def >=>[A, B, C](f: A => M[B], g: B => M[C]) =
  a => {
    val mb = f(a)
    ...

@ -1,5 +1,4 @@
trait Monad[M[_]] {
  def flatMap[A, B](ma: M[A])(f: A => M[B]): M[B]

  def pure[A](a: A): M[A]
}
@ -1,9 +1,7 @@
object bindSyntax {
  //allows us to use flatMap as an infix operator
  implicit class MonadOps[A, B, W: Monoid]
    (wa: Writer[W, A]) {
    def flatMap(f: A => Writer[W, B])
      : Writer[W, B] = wa match {
  implicit class MonadOps[A, B, W: Monoid](wa: Writer[W, A]) {
    def flatMap(f: A => Writer[W, B]): Writer[W, B] = wa match {
      case Writer((a, w1)) =>
        val Writer((b, w2)) = f(a)
        Writer(b, Monoid[W].combine(w1, w2))

@ -1,2 +1,2 @@
def flatMap[A, B](ma: M[A])(f: A => M[B]): M[B] =
  flatten(fmap(f)(ma))
  flatten(fmap(f)(ma))
@ -1,5 +1,4 @@
trait Monad[M[_]] extends Functor[M] {
  def flatten[A](mma: M[M[A]]): M[A]

  def pure[A](a: A): M[A]
}
@ -1,2 +1,2 @@
def fmap[A, B](f: A => B)(ma: M[A]): M[B] =
  flatMap(ma)(a => pure(f(a)))
  flatMap(ma)(a => pure(f(a)))
@ -2,4 +2,4 @@ def flatten[A, W: Monoid](wwa: Writer[W, Writer[W, A]]): Writer[W, A] =
  wwa match {
    case Writer((Writer((a, w2)), w1)) =>
      Writer(a, Monoid[W].combine(w1, w2))
  }
}
@ -1,6 +1,6 @@
def words: String => List[String] =
  _.split("\\s+").toList

  _.split("\\s+").toList

def process: String => Writer[String, List[String]] =
  s => {
    import bindSyntax._

@ -1,7 +1,6 @@
object moreSyntax {
  // allows us to use >> as an infix operator
  implicit class MoreOps[A, B, W: Monoid]
    (m: Writer[W, A])
  implicit class MoreOps[A, B, W: Monoid](m: Writer[W, A])
    extends bindSyntax.MonadOps[A, B, W](m) {
    def >>(k: Writer[W, B]): Writer[W, B] =
      m >>= (_ => k)

@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{P}{rogrammers have developed} a whole mythology around monads. It's
supposed to be one of the most abstract and difficult concepts in
@ -54,7 +54,7 @@ directly from function to function. We also inline short segments of
glue code rather than abstract them into helper functions. Here's an
imperative-style implementation of the vector-length function in C:

\begin{Verbatim}
\begin{snip}{cpp}
double vlen(double * v) {
  double d = 0.0;
  int n;
@ -62,7 +62,7 @@ double vlen(double * v) {
    d += v[n] * v[n];
  return sqrt(d);
}
\end{Verbatim}
\end{snip}
Compare this with the (stylized) Haskell version that makes function
composition explicit:

@ -131,11 +131,11 @@ In this formulation, monad laws are very easy to express. They cannot be
enforced in Haskell, but they can be used for equational reasoning. They
are simply the standard composition laws for the Kleisli category:

\begin{Verbatim}
\begin{snip}{haskell}
(f >=> g) >=> h = f >=> (g >=> h) -- associativity
return >=> f = f -- left unit
f >=> return = f -- right unit
\end{Verbatim}
\end{snip}
This kind of a definition also expresses what a monad really is: it's a
way of composing embellished functions. It's not about side effects or
state. It's about composition. As we'll see later, embellished functions

@ -1,9 +1,10 @@
implicit def writerMonad[W: Monoid] = new Monad[Writer[W, ?]] {
  def flatMap[A, B](wa: Writer[W, A])(k: A => Writer[W, B]): Writer[W, B] = wa match {
    case Writer((a, w)) =>
      val ((a1, w1)) = runWriter(k(a))
      Writer((a1, Monoid[W].combine(w, w1)))
  }
  def flatMap[A, B](wa: Writer[W, A])(k: A => Writer[W, B]): Writer[W, B] =
    wa match {
      case Writer((a, w)) =>
        val ((a1, w1)) = runWriter(k(a))
        Writer((a1, Monoid[W].combine(w, w1)))
    }

  def pure[A](a: A) = Writer(a, Monoid[W].empty)
}
@ -1,8 +1,9 @@
implicit val optionMonad = new Monad[Option] {
  def flatMap[A, B](ma: Option[A])(k: A => Option[B]): Option[B] = ma match {
    case None => None
    case Some(a) => k(a)
  def flatMap[A, B](ma: Option[A])(k: A => Option[B]): Option[B] =
    ma match {
      case None => None
      case Some(a) => k(a)
    }


  def pure[A](a: A): Option[A] = Some(a)
}
@ -1,4 +1,3 @@
def flatMap[A, B]
  : ((A => R) => R) =>
    (A => (B => R) => R) =>
    ((B => R) => R)
def flatMap[A, B]: ((A => R) => R) =>
  (A => (B => R) => R) =>
  ((B => R) => R)
@ -1,2 +1,2 @@
def flatMap[A, B](ka: Cont[R, A])(kab: A => Cont[R, B]): Cont[R, B] =
  Cont(hb => ...)
  Cont(hb => ...)
@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{N}{ow that we know} what the monad is for --- it lets us compose
embellished functions --- the really interesting question is why

@ -1,5 +1,3 @@
mu.compose(bimap(eta)(identity[M]))(((), x))
  == lambda(((), x))
mu.compose(bimap(eta)(identity[M]))(((), x)) == lambda(((), x))

mu.compose(bimap(identity[M])(eta))((x, ()))
  == rho((x, ()))
mu.compose(bimap(identity[M])(eta))((x, ())) == rho((x, ()))
@ -1,10 +1,8 @@
implicit def state[S] =
  new Adjunction[Prod[S, ?], Reader[S, ?]] {

    def counit[A](a: Prod[S, Reader[S, A]]): A = a match {
      case Prod((Reader(f), s)) => f(s)
    }
implicit def state[S] = new Adjunction[Prod[S, ?], Reader[S, ?]] {
  def counit[A](a: Prod[S, Reader[S, A]]): A = a match {
    case Prod((Reader(f), s)) => f(s)
  }

    def unit[A](a: A): Reader[S, Prod[S, A]] =
      Reader(s => Prod((a, s)))
  }
  def unit[A](a: A): Reader[S, Prod[S, A]] =
    Reader(s => Prod((a, s)))
}
@ -1,4 +1,4 @@
% !TEX root = ../../ctfp-reader.tex
% !TEX root = ../../ctfp-print.tex

\lettrine[lhang=0.17]{I}{f you mention monads} to a programmer, you'll probably end up talking
about effects. To a mathematician, monads are about algebras. We'll talk
@ -16,7 +16,7 @@ programmers, we often think of expressions as trees.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/exptree.png}
\includegraphics[width=0.3\textwidth]{images/exptree.png}
\end{figure}

\noindent
@ -113,10 +113,10 @@ composition of $T$ after $T^2$. We can apply to it the
horizontal composition of two natural transformations:
\[I_T \circ \mu\]

\begin{wrapfigure}[8]{R}{0pt}
\raisebox{0pt}[\dimexpr\height-0\baselineskip\relax]{
\includegraphics[width=60mm]{images/assoc1.png}}%
\end{wrapfigure}
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{images/assoc1.png}
\end{figure}

\noindent
and get $T \circ T$; which can be further reduced to $T$ by
@ -132,7 +132,7 @@ We can also draw the diagram in the (endo-) functor category ${[}\cat{C}, \cat{C

\begin{figure}[H]
\centering
\includegraphics[width=50mm]{images/assoc2.png}
\includegraphics[width=0.3\textwidth]{images/assoc2.png}
\end{figure}

\noindent
@ -143,7 +143,7 @@ require that the two paths produce the same result.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/assoc.png}
\includegraphics[width=0.3\textwidth]{images/assoc.png}
\end{figure}

\noindent
@ -156,7 +156,7 @@ be true for $T \circ \eta$.

\begin{figure}[H]
\centering
\includegraphics[width=80mm]{images/unitlawcomp-1.png}
\includegraphics[width=0.4\textwidth]{images/unitlawcomp-1.png}
\end{figure}

\noindent
@ -276,7 +276,7 @@ isomorphisms. The laws can be represented by commuting diagrams.

\begin{figure}[H]
\centering
\includegraphics[width=80mm]{images/assocmon.png}
\includegraphics[width=0.5\textwidth]{images/assocmon.png}
\end{figure}

\noindent
@ -342,7 +342,7 @@ where $i$ is the unit object for our tensor product.

\begin{figure}[H]
\centering
\includegraphics[width=40mm]{images/monoid-1.jpg}
\includegraphics[width=0.4\textwidth]{images/monoid-1.jpg}
\end{figure}

\noindent
@ -351,12 +351,12 @@ be expressed in terms of the following commuting diagrams:

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/assoctensor.jpg}
\includegraphics[width=0.5\textwidth]{images/assoctensor.jpg}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=60mm]{images/unitmon.jpg}
\includegraphics[width=0.5\textwidth]{images/unitmon.jpg}
\end{figure}

\noindent
@ -396,7 +396,7 @@ which you may recognize as the special case of horizontal composition.

\begin{figure}[H]
\centering
\includegraphics[width=70mm]{images/horizcomp.png}
\includegraphics[width=0.4\textwidth]{images/horizcomp.png}
\end{figure}

\noindent
@ -418,12 +418,12 @@ Not only that, here are the monoid laws:

\begin{figure}[H]
\centering
\includegraphics[width=1.90625in]{images/assoc.png}
\includegraphics[width=0.4\textwidth]{images/assoc.png}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=2.86458in]{images/unitlawcomp.png}
\includegraphics[width=0.5\textwidth]{images/unitlawcomp.png}
\end{figure}

\noindent

@ -1,5 +1,4 @@
trait Comonad[W[_]] extends Functor[W] {
  def =>=[A, B, C](w1: W[A] => B)(w2: W[B] => C): W[A] => C

  def extract[A](wa: W[A]): A
}
@ -1,6 +1,4 @@
def =>=[E, A, B, C]:
  : (Product[E, A] => B) =>
    (Product[E, B] => C) =>
def =>=[E, A, B, C]: (Product[E, A] => B) => (Product[E, B] => C) =>
  (Product[E, A] => C) = f => g => {
    case Product((e, a)) =>
      val b = f(Product((e, a)))

@ -1,7 +1,2 @@
def =>=[W[_], A, B, C]
  : (W[A] => B) =>
    (W[B] => C) =>
    (W[A] => C) = f => g => {
      g
      ...
    }
def =>=[W[_], A, B, C]: (W[A] => B) => (W[B] => C) => (W[A] => C) =
  f => g => g ...
Some files were not shown because too many files have changed in this diff