My first programming languages were Lisp, Scheme, and ML. When I later started to work in OO languages like C++ and Java I noticed that idioms that are standard vocabulary in functional programming (fp) were not so easy to achieve and required sophisticated structures. Books like [Design Patterns: Elements of Reusable Object-Oriented Software](https://en.wikipedia.org/wiki/Design_Patterns) were a great starting point for reasoning about those structures. One of my earliest findings was that several of the GoF patterns bear a stark resemblance to structures that are built into functional languages: for instance the strategy pattern corresponds to higher order functions in fp (for more details see [below](#strategy)).
Recently, while re-reading through the [Typeclassopedia](https://wiki.haskell.org/Typeclassopedia) I thought it would be a good exercise to map the structure of software [design-patterns](https://en.wikipedia.org/wiki/Software_design_pattern#Classification_and_list) to the concepts found in the Haskell type class library and in functional programming in general.
By searching the web I found some blog entries studying specific patterns, but I did not come across any comprehensive study. As it seemed that nobody had done this kind of work yet, I thought it worthwhile to spend some time on it and write down all my findings on the subject.
>This project is still work in progress, so please feel free to contact me with any corrections, adjustments, comments, suggestions and additional ideas you might have.
> Please use the [Issue Tracker](https://github.com/thma/LtuPatternFactory/issues) to enter your requests.
The [Typeclassopedia](https://wiki.haskell.org/wikiupload/8/85/TMR-Issue13.pdf) is a now classic paper that introduces the Haskell type classes by clarifying their algebraic and category-theoretic background. In particular it explains the relationships among those type classes.
In this section I'm taking a tour through the Typeclassopedia from a design pattern perspective.
For each of the Typeclassopedia type classes (at least up to Traversable) I try to explain how it corresponds to structures applied in design patterns.
> "The strategy pattern [...] is a behavioral software design pattern that enables selecting an algorithm at runtime. Instead of implementing a single algorithm directly, code receives run-time instructions as to which in a family of algorithms to use"
> "In the above UML class diagram, the `Context` class doesn't implement an algorithm directly. Instead, `Context` refers to the `Strategy` interface for performing an algorithm (`strategy.algorithm()`), which makes `Context` independent of how an algorithm is implemented. The `Strategy1` and `Strategy2` classes implement the `Strategy` interface, that is, implement (encapsulate) an algorithm."
>(quoted from https://en.wikipedia.org/wiki/Strategy_pattern)
* In C a strategy would be modelled as a function pointer that can be used to dispatch calls to different functions.
* In an OO language like Java a strategy would be modelled as a single-method strategy interface that is implemented by different strategy classes, each providing its own implementation of the strategy method.
* In functional programming a strategy is simply a higher order function: a function parameter that itself has a function type.
```haskell
-- first we define two simple strategies that map numbers to numbers:
strategyId :: Num a => a -> a
strategyId n = n
strategyDouble :: Num a => a -> a
strategyDouble n = 2*n

-- now we define a context that applies a function of type Num a => a -> a to a list of a's:
context :: Num a => (a -> a) -> [a] -> [a]
context f = fmap f
```
Although it is fair to say that the type class `Functor` captures the essential idea of the strategy pattern - namely injecting a function into a computational context and executing it within that context - the usage of higher order functions (or strategies) is of course not limited to `Functors`: we could use just any higher order function fitting our purpose. Other type classes like `Foldable` or `Traversable` can serve as helpful abstractions when dealing with typical use cases of applying variable strategies within a computational context.
> "The singleton pattern is a software design pattern that restricts the instantiation of a class to one object. This is useful when exactly one object is needed to coordinate actions across the system."
> (quoted from https://en.wikipedia.org/wiki/Singleton_pattern)
The singleton pattern ensures that multiple requests to a given object always return one and the same singleton instance.
In functional programming these semantics can be achieved with a `let` binding.
Via the `let` binding we can thread the singleton through arbitrary code in the `in` block. All occurrences of `singleton` in the `mainComputation` will point to the same instance.
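A hedged sketch of this idea (the `Config` type and its content are just an illustration; `singleton` and `mainComputation` follow the naming used above):

```haskell
-- a hypothetical singleton type, used only for illustration
data Config = Config { dbUrl :: String } deriving Show

singletonDemo :: IO ()
singletonDemo =
    let singleton = Config "db://localhost/test"   -- created exactly once
        mainComputation = do
            -- every occurrence of 'singleton' refers to the very same instance
            print singleton
            print (dbUrl singleton)
    in  mainComputation
```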
Experienced Haskellers will notice the ["eta-reduction smell"](https://wiki.haskell.org/Eta_conversion) in `eval (Var x) env = fetch x env` which hints at the possibility to remove `env` as an explicit parameter. We cannot do this right away, as the other equations for `eval` do not allow eta-reduction. In order to do so we have to apply the combinators of the `Applicative Functor`:
```haskell
class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

instance Applicative ((->) a) where
    pure        = const
    (<*>) f g x = f x (g x)
```
This `Applicative` allows us to rewrite `eval` as follows:
```haskell
eval :: Num e => Exp e -> Env e -> e
eval (Var x) = fetch x
eval (Val i) = pure i
eval (Add p q) = pure (+) <*> eval p <*> eval q
eval (Mul p q) = pure (*) <*> eval p <*> eval q
```
Any explicit handling of the variable `env` is now removed.
(I took this example from the classic paper [Applicative programming with effects](http://www.soi.city.ac.uk/~ross/papers/Applicative.pdf) which details how `pure` and `<*>` correspond to the combinatory logic combinators `K` and `S`.)
> In software engineering, a pipeline consists of a chain of processing elements (processes, threads, coroutines, functions, etc.), arranged so that the output of each element is the input of the next; the name is by analogy to a physical pipeline.
This works exactly as stated in the Wikipedia definition of the pattern: the output of `echo "hello world"` is used as input for the next command `wc -w`. The output of this command is then piped as input into `xargs printf "%d*3\n"` and so on.
At first glance this might look like ordinary function composition. We could for instance come up with the following approximation in Haskell:
```haskell
((3 *) . length . words) "hello world"
6
```
But with this design we missed an important feature of the chain of shell commands: The commands do not work on elementary types like Strings or numbers but on input and output streams that are used to propagate the actual elementary data around. So we can't just send a String into the `wc` command as in `"hello world" | wc -w`. Instead we have to use `echo` to place the string into a stream that we can then use as input to the `wc` command:
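The `Stream` machinery itself is not reproduced in full here; a minimal sketch that is consistent with the signatures and the `(|>)` definition given later in this section could look like this:

```haskell
-- a minimal sketch: a Stream simply wraps a value, mimicking the
-- stdin/stdout streams of the shell
newtype Stream a = Stream a deriving Show

-- 'echo' places a value into a Stream
echo :: a -> Stream a
echo = Stream

-- '|>' ("andThen") feeds the payload of a Stream into the next processing step
(|>) :: Stream a -> (a -> Stream b) -> Stream b
Stream x |> f = f x

-- ghci> echo "hello world" |> echo . words |> echo . length |> echo . (3 *)
-- Stream 6
```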
The `|>` (pronounced as "andThen") does the function chaining:
```haskell
ghci> echo "hello world" |> echo . words
Stream ["hello","world"]
```
The second argument of `|>` must be a function returning a `Stream b`; that's why we cannot just write `echo "hello world" |> words`. We have to use `echo` to create a `Stream` output that can be digested by a subsequent `|>`.
The interplay of a Context type `Stream a` and the functions `echo` and `|>` is a well known pattern from functional languages: it's the legendary *Monad*. As the [Wikipedia article on the pipeline pattern](https://en.wikipedia.org/wiki/Pipeline_(software)) states:
> Pipes and filters can be viewed as a form of functional programming, using byte streams as data objects; more specifically, they can be seen as a particular form of monad for I/O.
There is an interesting paper available elaborating on the monadic nature of Unix pipes: http://okmij.org/ftp/Computation/monadic-shell.html.
The `Monad` type class is defined (in simplified form) as follows:

```haskell
class Applicative m => Monad m where
    -- | Sequentially compose two actions, passing any value produced
    -- by the first as an argument to the second.
    (>>=) :: m a -> (a -> m b) -> m b

    -- | Inject a value into the monadic type.
    return :: a -> m a
    return = pure
```
By looking at the types of `>>=` and `return` it's easy to see the direct correspondence to `|>` and `echo` in the pipeline example above:
```haskell
(|>) :: Stream a -> (a -> Stream b) -> Stream b
echo :: a -> Stream a
```
Mhh, this is nice, but still looks a lot like ordinary composition of functions, just with the addition of a wrapper.
In this simplified example that's true, because we have designed the `|>` operator to simply unwrap a value from the Stream and bind it to the formal parameter of the subsequent function:
```haskell
Stream x |> f = f x
```
But we are free to implement the `andThen` operator in any way we see fit as long as we maintain the type signature and the [monad laws](https://en.wikipedia.org/wiki/Monad_%28functional_programming%29#Monad_laws).
What's noteworthy here is that Monads allow us to make the mechanism of chaining functions *explicit*. We can define what `andThen` should mean in our pipeline by choosing a different Monad implementation.
So in a sense Monads could be called [programmable semicolons](http://book.realworldhaskell.org/read/monads.html#id642960).
>[...] a null object is an object with no referenced value or with defined neutral ("null") behavior. The null object design pattern describes the uses of such objects and their behavior (or lack thereof).
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Null_object_pattern)
In functional programming the null object pattern is typically formalized with option types:
> [...] an option type or maybe type is a polymorphic type that represents encapsulation of an optional value; e.g., it is used as the return type of functions which may or may not return a meaningful value when they are applied. It consists of a constructor which either is empty (named None or `Nothing`), or which encapsulates the original data type `A` (written `Just A` or Some A).
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Option_type)
(See also: [Null Object as Identity](http://blog.ploeh.dk/2018/04/23/null-object-as-identity/))
In Haskell the simplest option type is `Maybe`. Let's dive directly into an example: we define a reverse index, mapping songs to album titles.
If we now look up a song title we may either be lucky and find the respective album, or not so lucky when there is no album matching our song:
```haskell
import Data.Map (Map, fromList)
import qualified Data.Map as Map (lookup) -- avoid clash with Prelude.lookup
-- type aliases for Songs and Albums
type Song = String
type Album = String
-- the simplified reverse song index
songMap :: Map Song Album
songMap = fromList
    [("Baby Satellite","Microgravity")
    ,("An Ending", "Apollo: Atmospheres and Soundtracks")]
```
We can lookup this map by using the function `Map.lookup :: Ord k => k -> Map k a -> Maybe a`.
If no match is found it will return `Nothing`; if a match is found it will return `Just match`:
```haskell
ghci> Map.lookup "Baby Satellite" songMap
Just "Microgravity"
ghci> Map.lookup "The Fairy Tale" songMap
Nothing
```
Actually the `Maybe` type is defined as:
```haskell
data Maybe a = Nothing | Just a
    deriving (Eq, Ord)
```
Code using the `Map.lookup` function will never be confronted with exceptions, null pointers or other nasty things. Even in case of errors a lookup will always return a properly typed `Maybe` instance. By pattern matching for `Nothing` or `Just a` client code can react to failing matches or positive results:
```haskell
case Map.lookup "Ancient Campfire" songMap of
    Nothing -> print "sorry, could not find your song"
    Just a  -> print a
```
Let's try to apply this to an extension of our simple song lookup.
Let's assume that our music database has much more information available. Apart from a reverse index from songs to albums, there might also be an index mapping album titles to artists.
And we might also have an index mapping artist names to their websites:
```haskell
type Song = String
type Album = String
type Artist = String
type URL = String
songMap :: Map Song Album
songMap = fromList
    [("Baby Satellite","Microgravity")
    ,("An Ending", "Apollo: Atmospheres and Soundtracks")]

albumMap :: Map Album Artist
albumMap = fromList
    [("Microgravity","Biosphere")
    ,("Apollo: Atmospheres and Soundtracks", "Brian Eno")]

artistMap :: Map Artist URL
artistMap = fromList
    [("Biosphere","http://www.biosphere.no//")
    ,("Brian Eno", "http://www.brian-eno.net")]

lookup' :: Ord a => Map a b -> a -> Maybe b
lookup' = flip Map.lookup

findAlbum :: Song -> Maybe Album
findAlbum = lookup' songMap

findArtist :: Album -> Maybe Artist
findArtist = lookup' albumMap

findWebSite :: Artist -> Maybe URL
findWebSite = lookup' artistMap
```
With all this information at hand we want to write a function that has an input parameter of type `Song` and returns a `Maybe URL` by going from song to album to artist to website url:
```haskell
findUrlFromSong :: Song -> Maybe URL
findUrlFromSong song =
    case findAlbum song of
        Nothing    -> Nothing
        Just album ->
            case findArtist album of
                Nothing     -> Nothing
                Just artist ->
                    case findWebSite artist of
                        Nothing  -> Nothing
                        Just url -> Just url
```
This code makes use of the pattern matching logic described before. It's worth noting that there is some nice circuit breaking happening in case of a `Nothing`: it is directly returned as the result of the function and the rest of the case ladder is not executed.
What's not so nice is *"the dreaded ladder of code marching off the right of the screen"* [(quoted from Real World Haskell)](http://book.realworldhaskell.org/).
For each find function we have to repeat the same ceremony of pattern matching on the result and either return `Nothing` or proceed with the next nested level.
The good news is that it is possible to avoid this ladder.
We can rewrite our search by applying the `andThen` operator `>>=` as `Maybe` is an instance of `Monad`:
```haskell
findUrlFromSong' :: Song -> Maybe URL
findUrlFromSong' song =
    findAlbum song   >>= \album ->
    findArtist album >>= \artist ->
    findWebSite artist
```
or even shorter as we can eliminate the lambda expressions by applying [eta-conversion](https://wiki.haskell.org/Eta_conversion):
```haskell
findUrlFromSong'' :: Song -> Maybe URL
findUrlFromSong'' song =
    findAlbum song >>= findArtist >>= findWebSite
```
Using it in GHCi:
```haskell
ghci> findUrlFromSong'' "All you need is love"
Nothing
ghci> findUrlFromSong'' "An Ending"
Just "http://www.brian-eno.net"
```
The expression `findAlbum song >>= findArtist >>= findWebSite` and the sequencing of actions in the [pipeline](#pipeline---monad) example `return str >>= return . length . words >>= return . (3 *)` have a similar structure.
But the behaviour of both chains is quite different: in the `Maybe` Monad `a >>= b` does not evaluate `b` if `a == Nothing`, but stops the whole chain of actions by simply returning `Nothing`.
The pattern matching and 'short-circuiting' is directly coded into the definition of `(>>=)` in the Monad implementation of `Maybe`:
```haskell
instance Monad Maybe where
    (Just x) >>= k = k x
    Nothing  >>= _ = Nothing
```
This elegant feature of `(>>=)` in the `Maybe` Monad allows us to avoid ugly and repetitive coding.
`Maybe` is often used to avoid partial functions of any kind. Take for example division by zero or computing the square root of negative numbers, which are undefined (at least for real numbers).
Here are safe definitions of these functions that return `Nothing` for the undefined cases:
```haskell
safeRoot :: Double -> Maybe Double
safeRoot x
    | x >= 0    = Just (sqrt x)
    | otherwise = Nothing

safeReciprocal :: Double -> Maybe Double
safeReciprocal x
    | x /= 0    = Just (1/x)
    | otherwise = Nothing
```
As we have already learned, the monadic `>>=` operator allows us to chain such functions, as in the following example:
```haskell
safeRootReciprocal :: Double -> Maybe Double
safeRootReciprocal x = return x >>= safeReciprocal >>= safeRoot
```
This can be written even more tersely as:
```haskell
import Control.Monad ((>=>)) -- the Kleisli composition operator

safeRootReciprocal :: Double -> Maybe Double
safeRootReciprocal = safeReciprocal >=> safeRoot
```
The use of the Kleisli operator `>=>` makes it more evident that we are actually aiming at a composition of the monadic functions `safeReciprocal` and `safeRoot`.
There are many predefined Monads available in the Haskell curated libraries and it's also possible to combine their effects by making use of `MonadTransformers`. But that's a different story...
>In software engineering, the composite pattern is a partitioning design pattern. The composite pattern describes a group of objects that is treated the same way as a single instance of the same type of object. The intent of a composite is to "compose" objects into tree structures to represent part-whole hierarchies. Implementing the composite pattern lets clients treat individual objects and compositions uniformly.
> (Quoted from [Wikipedia](https://en.wikipedia.org/wiki/Composite_pattern))
A typical example for the composite pattern is the hierarchical grouping of test cases to TestSuites in a testing framework. Take for instance the following class diagram from the [JUnit cooks tour](http://junit.sourceforge.net/doc/cookstour/cookstour.htm) which shows how JUnit applies the Composite pattern to group `TestCases` to `TestSuites` while both of them implement the `Test` interface:
![Composite Pattern used in Junit](http://junit.sourceforge.net/doc/cookstour/Image5.gif)
In Haskell we could model this kind of hierarchy with an algebraic data type (ADT):
In order to aggregate TestComponents we follow the design of JUnit and define a function `addTest`. Adding two atomic Tests will result in a TestSuite holding a list with the two Tests. If a Test is added to a TestSuite, the test is added to the list of tests of the suite. Adding TestSuites will merge them.
What's not visible from the JUnit class diagram is how typical object oriented implementations will have to deal with null-references. That is the implementations would have to make sure that the methods `run` and `addTest` will handle empty references correctly.
With Haskell's algebraic data types we would rather make this explicit with a dedicated `Empty` element.
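A hedged sketch of the data type, the `run` function and the `addTest` function described above (representing an atomic test by a plain `Bool` result is an assumption; the actual code may differ):

```haskell
-- a Test is either empty, an atomic test case or a suite of Tests
data Test =
      Empty                -- explicit identity element instead of null references
    | TestCase Bool        -- an atomic test case (sketched here as a plain result)
    | TestSuite [Test]     -- a composite of Tests

-- a suite is "green" if and only if all enclosed Tests are green
run :: Test -> Bool
run Empty          = True
run (TestCase r)   = r
run (TestSuite ts) = all run ts

-- aggregating Tests follows the JUnit design:
addTest :: Test -> Test -> Test
addTest Empty t                       = t                      -- Empty is the identity element
addTest t Empty                       = t
addTest (TestSuite xs) (TestSuite ys) = TestSuite (xs ++ ys)   -- merging two suites
addTest (TestSuite xs) t              = TestSuite (xs ++ [t])  -- adding a Test to a suite
addTest t (TestSuite ys)              = TestSuite (t : ys)
addTest t1 t2                         = TestSuite [t1, t2]     -- two atomic Tests form a suite
```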
From our additions it's obvious that `Empty` is the identity element of the `addTest` function. In algebra, a Semigroup with an identity element is called a *Monoid*:
> In abstract algebra, [...] a monoid is an algebraic structure with a single associative binary operation and an identity element.
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Monoid)
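With `addTest` as the associative operation and `Empty` as its identity element, the sketched `Test` type forms a Monoid:

```haskell
-- Monoid instance for the Test type sketched above
instance Semigroup Test where
    (<>) = addTest

instance Monoid Test where
    mempty = Empty
```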
We can also use the function `mconcat :: Monoid a => [a] -> a` on a list of `Tests`: mconcat composes a list of Tests into a single Test. That's exactly the mechanism of forming a TestSuite from atomic TestCases.
```haskell
compositeDemo = do
    print $ run $ mconcat [t1,t2]
    print $ run $ mconcat [t1,t2,t3]
```
This particular feature of `mconcat :: Monoid a => [a] -> a` to condense a list of Monoids to a single Monoid can be used to drastically simplify the design of our test framework.
If `Bool` were a `Monoid` we could use `mconcat` to form test suite aggregates. `Bool` in itself is not a Monoid; but together with an associative binary operation like `(&&)` or `(||)` it forms a Monoid.
The intuitive semantics of a TestSuite is that a whole Suite is "green" only when all enclosed TestCases succeed. That is the conjunction of all TestCases must return `True`.
So we are looking for the Monoid of boolean values under conjunction `(&&)`. In Haskell this Monoid is called `All`:
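For reference, this is (slightly simplified) how `All` is defined in `Data.Monoid`:

```haskell
newtype All = All { getAll :: Bool }

instance Semigroup All where
    All x <> All y = All (x && y)

instance Monoid All where
    mempty = All True
```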
We now implement a new evaluation function `run'` which evaluates a `SmartTestCase` (which may be either an atomic TestCase or a TestSuite assembled by `mconcat`) to a single boolean result.
This version of `run` is much simpler than the original and we can completely avoid the rather laborious `addTest` function. We also don't need any composite type `Test`.
> [...] the visitor design pattern is a way of separating an algorithm from an object structure on which it operates. A practical result of this separation is the ability to add new operations to existent object structures without modifying the structures. It is one way to follow the open/closed principle.
> (Quoted from [Wikipedia](https://en.wikipedia.org/wiki/Visitor_pattern))
* The Haskell type classes `Functor`, `Foldable`, `Traversable`, etc. provide a generic framework to allow visiting any algebraic datatype by just deriving one of these type classes.
By virtue of the instance declaration `Exp` becomes a `Foldable` instance and can be used with arbitrary functions defined on `Foldable`, like `length` in the example.
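A hedged sketch of what such declarations could look like (assuming the `Exp` type from the `eval` example earlier, with `Var` holding a `String` name; the original code may derive the instances differently):

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

-- deriving Foldable is all that's needed to "visit" all 'a' values in an Exp
data Exp a =
      Var String
    | Val a
    | Add (Exp a) (Exp a)
    | Mul (Exp a) (Exp a)
    deriving (Show, Functor, Foldable, Traversable)

-- ghci> length (Add (Val 1) (Mul (Val 2) (Val 3)))
-- 3
```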
> [...] the iterator pattern is a design pattern in which an iterator is used to traverse a container and access the container's elements. The iterator pattern decouples algorithms from containers; in some cases, algorithms are necessarily container-specific and thus cannot be decoupled.
Compared with `Foldable` or `Functor` the declaration of a `Traversable` instance looks a bit intimidating. In particular the type declaration for `traverse`:
```haskell
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
```
looks like quite a bit of over-engineering for simple traversals as in the above example.
In order to explain the real power of the `Traversable` type class we will look at a more sophisticated example in this section.
The Unix utility `wc` is a good example for a traversal operation that performs several different tasks while traversing its input:
```bash
echo "counting lines, words and characters in one traversal" | wc
1 8 54
```
The output simply means that our input has 1 line, 8 words and a total of 54 characters.
Obviously an efficient implementation of `wc` will accumulate the three counters for lines, words and characters in a single pass over the input and will not run three separate iterations, one for each counter.
For efficiency reasons this solution may be okay, but from a design perspective the solution lacks clarity as the required logic for accumulating the three counters is heavily entangled within one code block. Just imagine how the complexity of the for-loop will increase once we have to add new features like counting bytes, counting white space or counting maximum line width.
So we would like to be able to isolate the different counting algorithms (*separation of concerns*) and be able to combine them in a way that provides efficient one-time traversal.
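The code developed in this section uses `Const`, `Compose` and friends for this purpose (see the imports below). As a simplified, hedged sketch of the core idea, two counting functions can be combined via an `Applicative` product so that a single `traverse` accumulates both results (all names here are illustrative, not the code from this section):

```haskell
import Data.Functor.Const   (Const (..))
import Data.Functor.Product (Product (..))
import Data.Monoid          (Sum (..))

-- a counting step ignores the element's result type and only accumulates a Sum
charCount :: Char -> Const (Sum Int) b
charCount _ = Const (Sum 1)

lineCount :: Char -> Const (Sum Int) b
lineCount c = Const (Sum (if c == '\n' then 1 else 0))

-- the Product of two Applicatives is again an Applicative,
-- so both counters can be driven by one and the same traversal
combine :: (Char -> Const (Sum Int) b)
        -> (Char -> Const (Sum Int) b)
        -> Char -> Product (Const (Sum Int)) (Const (Sum Int)) b
combine f g c = Pair (f c) (g c)

countCharsAndLines :: String -> (Int, Int)
countCharsAndLines str = (getSum cs, getSum ls)
  where
    counted :: Product (Const (Sum Int)) (Const (Sum Int)) String
    counted = traverse (combine charCount lineCount) str
    Pair (Const cs) (Const ls) = counted

-- ghci> countCharsAndLines "hello\nworld\n"
-- (12,2)
```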
So we have achieved our aim of separating line counting and character counting into separate functions while still being able to apply them in a single traversal.
The only piece missing is the word counting. This is a bit tricky as it involves dealing with a state monad and wrapping it as an Applicative Functor:
```haskell
import Data.Functor.Compose -- Composition of Functors
import Data.Functor.Const -- Const Functor
import Data.Functor.Identity -- Identity Functor (needed for coercion)
import Data.Monoid (Sum (..), getSum) -- Sum Monoid for Integers
import Control.Monad.State.Lazy -- State Monad
import Control.Applicative -- WrappedMonad (wrapping a Monad as Applicative Functor)
import Data.Coerce (coerce)              -- Coercion (forcing types to match when their runtime representations agree)
```
> [...] Dependency injection is a technique whereby one object (or static method) supplies the dependencies of another object. A dependency is an object that can be used (a service). An injection is the passing of a dependency to a dependent object (a client) that would use it. The service is made part of the client's state. Passing the service to the client, rather than allowing a client to build or find the service, is the fundamental requirement of the pattern.
>
> This fundamental requirement means that using values (services) produced within the class from new or static methods is prohibited. The client should accept values passed in from outside. This allows the client to make acquiring dependencies someone else's problem.
> (Quoted from [Wikipedia](https://en.wikipedia.org/wiki/Dependency_injection))
In functional languages this is simply achieved by binding a function's formal parameters to values.
See the following example, where the function `generatePage :: (String -> Html) -> String -> Html` requires not only a String input but also a rendering function that does the actual conversion from text to Html.
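A hedged sketch (the `Html` type and the `htmlRenderer` function are assumptions; only the signature of `generatePage` is taken from the text above):

```haskell
-- a hypothetical Html type
newtype Html = Html String deriving Show

-- the rendering function is injected as a parameter
generatePage :: (String -> Html) -> String -> Html
generatePage renderer content = renderer content

-- one possible rendering function that could be injected
htmlRenderer :: String -> Html
htmlRenderer content = Html ("<html><body>" ++ content ++ "</body></html>")

-- ghci> generatePage htmlRenderer "Hello, World"
-- Html "<html><body>Hello, World</body></html>"
```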
> "The adapter pattern is a software design pattern (also known as wrapper, an alternative naming shared with the decorator pattern) that allows the interface of an existing class to be used as another interface. It is often used to make existing classes work with others without modifying their source code."
> (Quoted from https://en.wikipedia.org/wiki/Adapter_pattern)
An example is an adapter that converts the interface of a Document Object Model of an XML document into a tree structure that can be displayed.
What does an adapter do? It translates a call to the adapter into a call to the adapted backend code, which may also involve translating the argument data.
Say we have some `backend` function that we want to provide with an adapter. We assume that `backend` has type `c -> d`:
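A hedged sketch of this general shape (the name `mkAdapter` and the marshalling helpers are placeholders):

```haskell
-- given a backend of type c -> d, a function marshalling the frontend argument
-- type a to c, and a function unmarshalling the backend result d back to the
-- frontend result type b, the adapter is just the composition of the three
mkAdapter :: (d -> b) -> (c -> d) -> (a -> c) -> (a -> b)
mkAdapter unmarshal backend marshal = unmarshal . backend . marshal
```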
So in essence the Adapter Pattern is just function composition.
Here is a simple example. Say we have a backend that understands only 24 hour arithmetic (e.g. 23:50 + 0:20 = 0:10).
But in our frontend we don't want to see this ugly arithmetic and want to be able to add minutes to a time representation in plain minutes (e.g. 100 + 200 = 300).
We solve this by using the above mentioned function composition of `unmarshal . backend . marshal`:
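A hedged sketch of what this can look like (the `WallTime` representation and the names `marshalMW` and `unmarshalWM` are assumptions; `addMinutesToWallTime` and `addMinutesAdapter` are the names used in the template method section below):

```haskell
-- a sketch; the concrete representation is an assumption
newtype WallTime = WallTime (Int, Int) deriving Show   -- (hours, minutes)

-- the backend: 24 hour arithmetic, e.g. 23:50 + 0:20 = 0:10
addMinutesToWallTime :: Int -> WallTime -> WallTime
addMinutesToWallTime x (WallTime (h, m)) =
    let (hAdd, mAdd) = x `divMod` 60
        hNew = h + hAdd
        mNew = m + mAdd
    in  if mNew >= 60
            then WallTime ((hNew + 1) `mod` 24, mNew - 60)
            else WallTime (hNew `mod` 24, mNew)

-- marshalling between the frontend representation (total minutes) and WallTime
marshalMW :: Int -> WallTime
marshalMW minutes = WallTime (minutes `divMod` 60)

unmarshalWM :: WallTime -> Int
unmarshalWM (WallTime (h, m)) = h * 60 + m

-- the adapter is just the composition unmarshal . backend . marshal
addMinutesAdapter :: Int -> Int -> Int
addMinutesAdapter x = unmarshalWM . addMinutesToWallTime x . marshalMW

-- ghci> addMinutesAdapter 100 200
-- 300
```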
> In software engineering, the template method pattern is a behavioral design pattern that defines the program skeleton of an algorithm in an operation, deferring some steps to subclasses.
> It lets one redefine certain steps of an algorithm without changing the algorithm's structure.
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Template_method_pattern)
The TemplateMethod pattern is quite similar to the [StrategyPattern](#strategy---functor). The main difference is the level of granularity.
In Strategy a complete block of functionality - the Strategy - can be replaced.
In TemplateMethod the overall layout of an algorithm is predefined and only specific parts of it may be replaced.
In functional programming the answer to this kind of problem is again the usage of higher order functions.
In the following example we come back to the example for the [Adapter](#adapter---function-composition).
The function `addMinutesAdapter` lays out a structure for interfacing to some kind of backend:
`addMinutesTemplate` has an additional parameter f of type `(Int -> WallTime -> WallTime)`. This parameter may be bound to `addMinutesToWallTime` or alternative implementations:
```haskell
-- implements linear addition (the normal case) even for values > 1440
-- (a sketch, reusing the assumed WallTime representation from the adapter example)
linearTimeAdd :: Int -> WallTime -> WallTime
linearTimeAdd x (WallTime (h, m)) = WallTime ((h * 60 + m + x) `divMod` 60)
```
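The `addMinutesTemplate` function itself, reusing the assumed helpers from the adapter example above, could be sketched like this (`addMinutes24h` and `addMinutesLinear` are illustrative names):

```haskell
-- the template: marshalling and unmarshalling form the fixed skeleton,
-- only the step function f is injected by the caller
addMinutesTemplate :: (Int -> WallTime -> WallTime) -> Int -> Int -> Int
addMinutesTemplate f x = unmarshalWM . f x . marshalMW

-- binding the template to the two step implementations:
addMinutes24h :: Int -> Int -> Int
addMinutes24h = addMinutesTemplate addMinutesToWallTime

addMinutesLinear :: Int -> Int -> Int
addMinutesLinear = addMinutesTemplate linearTimeAdd
```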
The type classes in Haskell's base library apply this template approach frequently to reduce the effort for implementing type class instances and to provide a predefined structure with specific 'customization options'.
Even though we specified only `mempty` and `(<>)` we can now use the functions `mappend :: Monoid a => a -> a -> a` and `mconcat :: Monoid a => [a] -> a` on WallTime instances:
So the Monoid type class definition forms a *template* where the default implementations define the 'invariant parts' of the type class and the part specified by us form the 'customization options'.
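For reference, this is (slightly simplified) how the `Monoid` type class is defined in the base library: `mempty` (and `(<>)` from the `Semigroup` superclass) are the 'customization options', while `mappend` and `mconcat` come with default implementations:

```haskell
class Semigroup a => Monoid a where
    -- the 'customization option': the identity element
    mempty  :: a

    -- the 'invariant parts': default implementations in terms of (<>) and mempty
    mappend :: a -> a -> a
    mappend = (<>)

    mconcat :: [a] -> a
    mconcat = foldr mappend mempty
```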
> The abstract factory pattern provides a way to encapsulate a group of individual factories that have a common theme without specifying their concrete classes.
> In normal usage, the client software creates a concrete implementation of the abstract factory and then uses the generic interface of the factory to create the concrete objects that are part of the theme.
> The client doesn't know (or care) which concrete objects it gets from each of these internal factories, since it uses only the generic interfaces of their products.
> This pattern separates the details of implementation of a set of objects from their general usage and relies on object composition, as object creation is implemented in methods exposed in the factory interface.
> [Quoted from Wikipedia](https://en.wikipedia.org/wiki/Abstract_factory_pattern)
There is a classic example that demonstrates the application of this pattern in the context of a typical problem in object oriented software design:
The example revolves around a small GUI framework that needs different implementations to render Buttons for different OS Platforms (called WIN and OSX in this example).
A client of the GUI API should work with a uniform API that hides the specifics of the different platforms. The problem then is: how can the client be provided with a platform specific implementation without explicitly asking for a given implementation, and how can we maintain a uniform API that hides the implementation specifics?
- The client calls an abstract factory `GUIFactory` interface to create a `Button` by calling `createButton() : Button` that somehow chooses (typically by some kind of configuration) which concrete factory has to be used to create concrete `Button` instances.
- The concrete classes `WinButton` and `OSXButton` implement the interface `Button` and provide platform specific implementations of `paint () : void`.
- As the client uses only the interface methods `createButton()` and `paint()` it does not have to deal with any platform specific code.
The following diagram depicts the structure of interfaces and classes in this scenario:
In a functional language this kind of problem would be solved quite differently. In FP functions are first class citizens and thus it is much easier to treat functions that represent platform specific actions as "normal" values that can be passed around.
So we could represent a Button type as a data type with a label (holding the text to display on the button) and an `IO ()` action that represents the platform specific rendering:
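A hedged sketch of these types (the field names and the two paint actions are assumptions, chosen to match the accessor functions used below):

```haskell
-- the supported platforms
data OS = WIN | OSX

data Button = Button
    { label :: String   -- the text to display on the button
    , paint :: IO ()    -- the platform specific rendering action
    }

-- hypothetical platform specific render actions
winPaint :: String -> IO ()
winPaint lbl = putStrLn ("winButton: " ++ lbl)

osxPaint :: String -> IO ()
osxPaint lbl = putStrLn ("osxButton: " ++ lbl)
```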
The nice thing about the Haskell record syntax is that we get accessor functions to the type fields for free.
That's why the type declaration of `Button` also creates a function `paint :: Button -> IO ()` which will return the function stored in the `paint` field of a `Button` instance.
The factory function `createButtonFor` selects the platform specific paint implementation based on the `OS` value:

```haskell
createButtonFor :: OS -> String -> Button
createButtonFor os lbl =
    case os of
        WIN -> Button lbl (winPaint lbl) -- use winPaint for WIN platform
        OSX -> Button lbl (osxPaint lbl) -- use osxPaint for OSX platform
```
By calling `createButtonFor` with only the `OS` argument (we assume that this flag comes from some initially available configuration) we can now create a partially applied function `createButton`:
```haskell
ghci> createButton = createButtonFor WIN   -- assuming the configured platform is WIN
ghci> :t createButton
createButton :: String -> Button
```
Now we have an API that hides all implementation specifics from the client and allows them to use only `createButton` and `paint` to work with Buttons for different OS platforms:
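A hedged sketch of such client code (the button labels are arbitrary examples):

```haskell
main :: IO ()
main = do
    let exitButton = createButton "exit"   -- illustrative labels
        okButton   = createButton "OK"
    paint exitButton
    paint okButton
```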
> The Builder is a design pattern designed to provide a flexible solution to various object creation problems in object-oriented programming. The intent of the Builder design pattern is to separate the construction of a complex object from its representation.
>
> Quoted from [Wikipedia](https://en.wikipedia.org/wiki/Builder_pattern)
The class provides a package private constructor that takes 5 arguments that are used to fill the instance attributes.
Using constructors with so many arguments is often considered inconvenient and potentially unsafe as certain constraints on the arguments might not be maintained by client code invoking this constructor.
The typical solution is to provide a Builder class that is responsible for maintaining internal data constraints and providing a robust and convenient API.
In the following example the Builder ensures that a BankAccount must have an accountNo and that non null values are provided for the String attributes:
```java
public class BankAccountBuilder {

    private int accountNo;
    private String name;
    private String branch;
    private double balance;
    private double interestRate;

    public BankAccountBuilder(int accountNo) {
        this.accountNo = accountNo;
        this.name = "Dummy Customer";
        this.branch = "London";
        this.balance = 0;
        this.interestRate = 0;
    }

    public BankAccountBuilder withAccountNo(int accountNo) {
        this.accountNo = accountNo;
        return this;
    }

    public BankAccountBuilder withName(String name) {
        this.name = name;
        return this;
    }

    public BankAccountBuilder withBranch(String branch) {
        this.branch = branch;
        return this;
    }

    public BankAccountBuilder withBalance(double balance) {
        this.balance = balance;
        return this;
    }

    public BankAccountBuilder withInterestRate(double interestRate) {
        this.interestRate = interestRate;
        return this;
    }

    public BankAccount build() {
        return new BankAccount(this.accountNo, this.name, this.branch, this.balance, this.interestRate);
    }
}
```
Next comes an example of how the builder is used in client code:
```java
public class BankAccountTest {

    public static void main(String[] args) {
        new BankAccountTest().testAccount();
    }

    public void testAccount() {
        BankAccountBuilder builder = new BankAccountBuilder(1234);
        BankAccount account = builder.build();
        // ... use the account, e.g. in test assertions
    }
}
```
As we can see, the Builder can either be used to create dummy instances that are still safe to use (e.g. for test cases), or the `withXxx` methods can be used to populate all attributes.
From an API client perspective the Builder pattern can help to provide safe and convenient object construction which is not provided by the Java core language.
As the Builder code is quite redundant (e.g. it repeats all attributes of the actual instance class), Builders are typically generated (e.g. with [Lombok](https://projectlombok.org/features/Builder)).
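For comparison: in Haskell much of this machinery is not needed, as record syntax plus a function providing safe defaults already gives builder-like construction. A hedged sketch mirroring the fields of the Java example (names are illustrative):

```haskell
-- a sketch; fields mirror the Java example above
data BankAccount = BankAccount
    { accountNo    :: Int
    , name         :: String
    , branch       :: String
    , balance      :: Double
    , interestRate :: Double
    } deriving Show

-- the 'builder': a safe default account that only requires the account number
buildAccount :: Int -> BankAccount
buildAccount i = BankAccount
    { accountNo    = i
    , name         = "Dummy Customer"
    , branch       = "London"
    , balance      = 0
    , interestRate = 0
    }

-- record update syntax plays the role of the withXxx methods:
-- ghci> (buildAccount 1234) { name = "Jane Doe", branch = "Paris" }
```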
> Design patterns are reusable abstractions in object-oriented software.
> However, using current mainstream programming languages, these elements can only be expressed extra-linguistically: as prose, pictures, and prototypes.
> We believe that this is not inherent in the patterns themselves, but evidence of a lack of expressivity in the languages of today.
> We expect that, in the languages of the future, the code parts of design patterns will be expressible as reusable library components.
> Indeed, we claim that the languages of tomorrow will suffice; the future is not far away. All that is needed, in addition to commonly-available features,
> are higher-order and datatype-generic constructs;
> these features are already or nearly available now.
> Quoted from [Design Patterns as Higher-Order Datatype-Generic Programs](http://www.cs.ox.ac.uk/jeremy.gibbons/publications/hodgp.pdf)
> To end with FP benefits, there is this curious thing called [Curry–Howard correspondence](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence) which is a direct analogy between mathematical concepts and computational calculus (which is what we do, programmers).
> This correspondence means that a lot of useful stuff discovered and proven for decades in Math can then be transposed to programming, opening a way for a lot of extremely robust constructs for free.
>
> In OOP, Design patterns are used a lot and could be defined as idiomatic ways to solve a given problems, in specific contexts but their existences won’t save you from having to apply and write them again and again each time you encounter the problems they solve.
>
> Functional programming constructs, some directly coming from category theory (mathematics), solve directly what you would have tried to solve with design patterns.
>
> Quoted from [Geekocephale](http://geekocephale.com/blog/2018/10/08/fp)
> In the functional-programming world, traditional design patterns generally manifest in one of three ways:
> - The pattern is absorbed by the language.
> - The pattern solution still exists in the functional paradigm, but the implementation details differ.
> - The solution is implemented using capabilities other languages or paradigms lack. (For example, many solutions that use metaprogramming are clean and elegant — and they're not possible in Java.)
>
> [Quoted from IBM developerworks](https://www.ibm.com/developerworks/library/j-ft10/index.html)