The F-words: functor and friends

Mateusz Kubuszok

2 Jan 2019

20 min read


While algebras are something we (programmers) rely on in our everyday work, we don't always use them knowingly. Functional programming, however, has a relatively high number of programmers who care about correctness and the mathematical formalism that leads to it. It is no surprise that it was FP that explored the idea that, if you perceive your program as a pipeline of operations, you could provide stronger guarantees if you had the tools to define such pipelines mathematically.

Pipeline of values

But let's not rush into mathematical abstractions right from the start. Let's start with some use cases, figure out some common patterns, and leave distilling mathematical models for the end, once we have gained some intuition.

The simplest case is when we put some data into a pipeline and get some data out of it. E.g.

final case class User(name: String, surname: String)

val getName: User => String =
  user => user.name

val toUpper: String => String =
  str => str.toUpperCase

val greet: String => String =
  name => s"Hello, $name!"

Since these pipelines are functions, we can combine them together by simple function composition. All we have to take care of is making sure the output of a preceding function matches the input of a succeeding one:

val greetUppercased = getName
  .andThen(toUpper)
  .andThen(greet)

val uppercaseGreet = getName
  .andThen(greet)
  .andThen(toUpper)

val user = User("John", "Smith")

greetUppercased(user) // "Hello, JOHN!"
uppercaseGreet(user)  // "HELLO, JOHN!"

But let's say we have a List of things, and we want to apply our pipeline to each element and receive a List of results. The imperative way of doing this would be:

import scala.collection.mutable

val users: List[User] = ???
val greetsMutable: mutable.ListBuffer[String] = mutable.ListBuffer.empty
users.foreach { user =>
  greetsMutable += greetUppercased(user)
}
val greets: List[String] = greetsMutable.toList

Quite a lot of code, and it is easy to make a mistake (even easier if you use a language requiring the for (int i = 0; i < list.size; i++) ceremony). Besides, if we wanted to do this in steps:

val users: List[User] = ???
// mutable stuff
val names: List[String] = ???
// mutable stuff
val greets: List[String] = ???

we would create even more boilerplate and make things even easier to break.

Another possibility: each of the operations in our pipeline might take some time, so it would make sense to avoid blocking and run each of them on a thread pool. In Scala we can achieve this with Futures (and Promises).

import scala.concurrent.{ Future, Promise }
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{ Success, Failure }

val userF: Future[User] = Future { User("John", "Smith") }
val greetP: Promise[String] = Promise[String]()
userF.onComplete {
  case Success(user)  => greetP.success(greetUppercased(user))
  case Failure(error) => greetP.failure(error)
}
val greetF: Future[String] = greetP.future

We can see that manual completion of a Promise is quite boilerplate-y and error-prone. What if we broke this into stages, just like with Lists? Here it makes more sense, as it makes things more granular at the thread-pool level. Again, we would end up with quite a lot of mess.

The last case we'll consider: let's say we had some sort of validation; the simplest data structure for that would be Either. By convention the correct (or right) values go into the Right type parameter, while errors are what is left if something went wrong, so they end up in the Left type parameter.

Let's say we don't want to change our error type while we go through the stages of our pipeline and the input is already validated. We don't intend to invalidate it, so we want to have some code like:

// here error is stored as String
val validatedUser: Either[String, User] = ???
val validatedGreet: Either[String, String] =
  validatedUser match {
    case Right(user) => Right(greetUppercased(user))
    case Left(error) => Left(error)
  }

What do these cases have in common?

  • each time we have some Container[A] that we want to turn into Container[B] using an A => B function. The container can contain 0, 1 or more values of type A and feeds them into our function, so we can also consider it a data source which ensures that the result will be embedded in the same type of Container as the source,
  • each time we process one value at a time and return exactly one result - we don't want to consider cases like a missing result, a failing Future, etc.,
  • we have an implicit assumption that whether we run the transformations in stages or pass the whole composed function at once, the result will be the same.

In mathematics a relation between input and output is called a map, and in many contexts it is virtually the same as a function. Assigning each value in a container a corresponding value would thus be called mapping. That is why a method that maps a container is called map:

val users: List[User] = ???
val greets: List[String] = users.map(greetUppercased)

val userF: Future[User] = ???
val greetF: Future[String] = userF.map(greetUppercased)

val validatedUser: Either[String, User] = ???
val validatedGreet: Either[String, String] = validatedUser.map(greetUppercased)

The abstraction we arrived at is called a functor. Let's formalize our findings.

Functor

Let's take a type constructor F[_]. Let's assume that each type A can be lifted to F[A] and each function A => B can be lifted to F[A] => F[B].

We also put some constraints on how we lift things:

  • lifting of identity[A] to F[A] => F[A] always behaves the same as identity[F[A]] (identity is a function returning its input unchanged),
  • lifting of f andThen g behaves the same as lifting of f composed with lifting of g.

If we use the notation that lifting of a function is implemented using the map method of fa: F[A] (so fa.map(f): F[B]), we can express these laws as:

  • fa.map(identity) == identity(fa) for each fa
  • fa.map(f).map(g) == fa.map(f.andThen(g)).

These constraints are called functor laws, and an F[_] that follows them we call a functor (or covariant functor). You can see the idea illustrated using a diagram well known to people in touch with category theory:

A --- F ---> F[A]
|              |
f            F[f]
|              |
V              V
B --- F ---> F[B]

(Trivia: a functor lifts A → F(A). If you reversed the arrow, getting F(A) → A, you would get an F-algebra).
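To make the lifting concrete, here is a minimal sketch of a Functor type class. It mirrors, but is not literally, the cats/scalaz definition - the names below are only illustrative:

trait Functor[F[_]] {
  // lifts f: A => B to F[A] => F[B]
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

object Functor {
  // summoner, so we can write Functor[List]
  def apply[F[_]](implicit F: Functor[F]): Functor[F] = F

  // a lawful instance for List, delegating to the standard library's map
  implicit val listFunctor: Functor[List] = new Functor[List] {
    def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
  }
}

With such a type class, the functor laws become properties we can check for every instance we write.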

Let's check if the structures we used so far can be considered functors.

List[_] can lift any A to List[A]. It also allows us to lift a function A => B to List[A] => List[B] using map:

val times2: Int => Int = i => i*2
val asString: Int => String = i => i.toString

List(1,2,3).map(identity) // List(1, 2, 3)
identity(List(1,2,3)) // List(1, 2, 3)

List(1,2,3).map(times2).map(asString) // List("2", "4", "6")
List(1,2,3).map(times2 andThen asString) // List("2", "4", "6")

As we can see, the functor laws hold for List[_]. How about Either[String, _]?

Left[String, Int]("failed").map(identity) // Left("failed")
identity(Left[String, Int]("failed")) // Left("failed")

Right[String, Int](2).map(identity) // Right(2)
identity(Right[String, Int](2)) // Right(2)

Left[String, Int]("failed").map(times2).map(asString) // Left("failed")
Left[String, Int]("failed").map(times2 andThen asString) // Left("failed")

Right[String, Int](2).map(times2).map(asString) // Right("4")
Right[String, Int](2).map(times2 andThen asString) // Right("4")

It seems the functor laws also hold for Either - while only the Right side changes its actual values (which is why we say that Either is right-biased), both Left and Right act in a way that preserves Either's lawfulness.

But let’s check something that is not a functor and try to illustrate why.

val isEven: Int => Boolean = i => i%2 == 0
def getRandom: Boolean => Int = {
  val r = new java.util.Random(0)
  _ => r.nextInt
}

Set(1,2,3,4).map(isEven).map(getRandom)
// Set(-1155484576, -723955400)
Set(1,2,3,4).map(isEven andThen getRandom)
// Set(-1155484576, -723955400, 1033096058, -1690734402)

As we can see, when side effects and state come into the picture, Set[_] becomes unlawful. Here we simulated what would happen if our invisible state (the contents of Random) behaved the same way in both cases (here: starting with the same seed). We end up with different results - in the first case the four results of the first mapping collapse into 2 results, and then these 2 results are mapped further. In the second case we don't give the intermediate results a chance to collapse, so we end up with a 4-element set. This shows one example where lawfulness matters - functions with side effects should compose without violating our assumptions, even the implicit ones.

Applicative functors

So far we have a wrapper/container and a value inside. We can take some function, lift it to a function taking and returning the wrapper, and that's it. But what if we wanted to combine 2 wrappers?

If you have Future { 2 } and Future { 3 } you cannot combine them using some f: (Int, Int) => Int = _ + _. (You could do it using flatMap, but in this article we are not discussing monads). How could we do it?

Well, what if we partially applied it? Future { 2 }.map(i => f(i, _)) gives us Future[Int => Int]. So if we only managed to move that other Int in… Except a functor doesn't give us a way to move it. However, there is an extension of functor that does.

An applicative functor differs from a normal functor in that it has an additional method, ap. It works like this:

f: A => B
a: A

fab: F[A => B] = F.pure(f)
fa: F[A] = F.pure(a)
fab.ap(fa): F[B]
fab.ap(fa) == F.pure(f(a))

By F.pure(a) I mean some wrapping function that lifts A to F[A]. It has many names, like pure or point, and it simply takes a value and wraps it up (so it would be List(a) for List, Future.successful(a) for Future, etc).

As you can see, we can use it to combine wrapped values together (that is, once someone provides you with the right method or extension method - Scala's standard collections don't have them):

val f1: F[Int] = F.pure(1)
val f2: F[Int] = F.pure(2)
val fAdd: F[Int => Int => Int] = F.pure(i => j => i + j)
fAdd.ap(f1).ap(f2)

The rules to follow regarding ap (the applicative laws) are:

  • homomorphism: F.pure(f).ap(F.pure(a)) == F.pure(f(a))
  • identity: F.pure(identity).ap(fa) == fa
  • interchange: fab.ap(F.pure(a)) == F.pure((f: A => B) => f(a)).ap(fab)
  • consistency with map: fa.map(f) == F.pure(f).ap(fa)

The best-known applicative functors in the Scala community (as far as I can tell - there was no formal research on this) are Validation/Validated from Scalaz/Cats. The composition property was used to implement the Cartesian syntax:

(validated1, validated2, validated3).mapN { (a1, a2, a3) =>
  buildSomething(a1, a2, a3)
}

This is used to build some bigger object once the parts used in its construction are validated and valid. Underneath it uses ap to apply arguments partially until it has a whole tuple (or, if validation fails, it just aggregates the errors). This example is kind of funny, as you almost never use ap directly. I am sure that there is quite a large group of people using applicative functors every day without knowing what an applicative functor is.
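To see how such syntax reduces to pure and ap, here is a minimal sketch specialised to Option - map2 and the helpers below are illustrative, not the Cats implementation:

def pure[A](a: A): Option[A] = Some(a)

def ap[A, B](ff: Option[A => B])(fa: Option[A]): Option[B] =
  (ff, fa) match {
    case (Some(f), Some(a)) => Some(f(a))
    case _                  => None
  }

// apply a curried function argument by argument, like mapN does underneath
def map2[A, B, C](fa: Option[A], fb: Option[B])(f: (A, B) => C): Option[C] =
  ap(ap(pure(f.curried))(fa))(fb)

map2(Some(1), Some(2))(_ + _)            // Some(3)
map2(Some(1), Option.empty[Int])(_ + _)  // None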

Sink input transformations

A functor can be treated as a way of constructing a (non-failing) pipeline starting at some source of values. As a matter of fact, we use this in Scala's Streams, FS2 streams and Reactive Streams (e.g. Akka Streams):

Stream
  .iterate(1) { i => i + 1 }
  .map(times2)
  .map(asString)
// Stream("2", "4", "6", ...)

import akka.stream.scaladsl._
Source
  .fromIterator(() => Iterator.from(1))
  .map(times2)
  .map(asString) 
// Source[String, akka.NotUsed]

But what if we looked from the other side? What if we have some Sink[Input] or Subscriber[A] that accepts values? It is the end of our pipeline (or at least it is a dead end in the graph of all inputs and outputs) - usually we cannot map it. We can only feed it with values.

trait Subscriber[A] {
  
  def consume(value: A): Unit
}

object IntSubscriber extends Subscriber[Int] {
  
  def consume(value: Int): Unit =
    println(s"Next int is $value")
}

Iterator.from(1).foreach(IntSubscriber.consume)

Exactly! We feed it with values. Which means that if there are some values that don't fit the consumer's requirements, but which we still want to consume, we can adjust them - transforming them before they go into the Subscriber/Sink or whatever we call it.

val asInt: Double => Int = _.toInt

Iterator.iterate(1.0){ d => d * 2.0 }.foreach { d =>
  IntSubscriber.consume(asInt(d))
}

It would be nice if we could make such an adjustment a part of our interface. It is not a map method, which transforms our current type into another, but its complement, which transforms another type into ours. That is why we'll call it comap:

trait Subscriber[A] { self =>
  
  def consume(value: A): Unit
  
  def comap[B](f: B => A): Subscriber[B] =
    new Subscriber[B] {
      def consume(value: B): Unit =
        self.consume(f(value))
    }
}


Iterator
  .iterate(1.0){ d => d * 2.0 }
  .foreach(IntSubscriber.comap(asInt).consume)

It shares some properties with a functor though - the fact that identity doesn't affect it and that mappings can be composed.

Cofunctor

A type constructor F[_] which lifts A to F[A], and B => A to F[A] => F[B] (using comap / contramap), that obeys:

  • fa.comap(identity) == fa
  • fa.comap(f).comap(g) == fa.comap(g andThen f)

is called a cofunctor or contravariant functor. (I like the sound of the former; certain FP programmers would kill you for not using the latter.)

To distinguish it from a functor we can imagine them using this diagram:

  C     --g-->       A    --f-->      B
F[C] <--comap(g)-- F[A] --map(f)--> F[B]

At some point in our pipeline we have an F[A] stage. We can then use map to travel down the pipeline (calculate the next step from the current one) and comap to travel up the pipeline (map a preceding step into the current one).

Another way to express it is by looking at the type signatures. In a functor the direction of the arrows stays the same during lifting; only the types get wrapped with F[_]. A cofunctor not only wraps, but also reverses the direction of the arrows.
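A contravariant functor you may already be using without naming it is Ordering - its on method plays the role of comap/contramap. A small sketch, reusing the User type from the beginning of the article:

val byString: Ordering[String] = Ordering.String

// comap: we know how to order Strings, and we can turn a User into a String
val byName: Ordering[User] = byString.on[User](_.name)

List(User("Zoe", "A"), User("Adam", "B")).sorted(byName)
// List(User("Adam", "B"), User("Zoe", "A"))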

Invariant and phantom

When we talk about variance in Scala, we mention:

  • covariance - A <: B implies F[A] <: F[B] - if you can travel from A to B, you can travel from F[A] to F[B] for free,
  • contravariance - A <: B implies F[B] <: F[A] - when you can travel from A to B, you can travel from F[B] to F[A] for free,
  • invariance - none of the above - you cannot travel for free in either direction, whether A <: B or A :> B.

What would invariance mean for a functor? What would it mean that a functor is invariant? F[A] is a drop-in replacement for F[B] only if A = B. You cannot take F[A], pass it one function to turn it into F[B] and forget that it ever was F[A]. I mean, we could, but here A is some algebra, and we want to keep its properties when we map F[A] to F[B].

What you can do is pass mappings both ways: A => B and B => A. This way, when you want to perform some operation on F[B], it will internally translate the operands back to A, run the computation and translate the result to B again. This two-way mapping is imap (or xmap).

import cats.Semigroup
import cats.implicits._
import io.circe._
import io.circe.generic.auto._
import io.circe.parser._
import io.circe.syntax._

val a = List(1, 2).asJson
val b = List(3, 4).asJson

Semigroup[List[Int]]
  .imap(_.asJson)(decode[List[Int]](_).toOption.get)
  .combine(a, b)
  .noSpaces // "[1,2,3,4]"
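The same mechanism can be sketched by hand, without circe or cats - Semi and the helper below are purely illustrative names, not a library API:

trait Semi[A] { def combine(x: A, y: A): A }

def imap[A, B](sa: Semi[A])(f: A => B)(g: B => A): Semi[B] =
  new Semi[B] {
    // translate the operands back to A, combine, translate the result to B
    def combine(x: B, y: B): B = f(sa.combine(g(x), g(y)))
  }

val intAddition: Semi[Int] = (x, y) => x + y
val stringAddition: Semi[String] = imap(intAddition)(_.toString)(_.toInt)

stringAddition.combine("1", "2") // "3"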

There is also a fourth kind of variance, non-existent in Scala. Phantom variance/anyvariance is when F[A] = F[B] for any A and B. Intuitively, the type doesn't matter; maybe it is not stored anywhere physically, like tags in tagged types. Phantom would use pmap to turn F[A] into F[B]. It wouldn't need any functions, as it doesn't operate on actual values.

Bifunctor

We know how to travel in both directions in our pipeline. However, so far we have always assumed that there is only one type describing the values we operate on. Even with Either[L, R] we assumed that the L type is fixed, to be able to treat Either[L, _] as our F[_]. But what if we wanted to map over both types?

The idea that you have two type parameters such that, if you fix one of them, you get a functor, is called a bifunctor.

In order to have a generic enough definition we require a type constructor F[_, _] that lifts a pair of types A and B to F[A, B], and a bimap function that lifts a pair of functions A => C and B => D into F[A, B] => F[C, D]. It is a bifunctor if it follows the laws:

  • fab.bimap(identity, identity) == identity(fab)
  • fab.bimap(f1, f2).bimap(g1, g2) == fab.bimap(f1 andThen g1, f2 andThen g2)

Quite often, you would like to avoid doing both mappings at once, so there are first and second helper methods defined:

bi.first(f) == bi.bimap(f, identity)
bi.second(g) == bi.bimap(identity, g)

At least, that is the nice theory - no one in Scala has to conform to such a convention. Either is a bifunctor which uses left.map and right.map in order to let you treat one of the types as a functor for one operation. In general, if you wanted to use a bifunctor you would have to use type classes (and extension methods) from Cats or Scalaz - the definitions we talk about here are specifications that an implementation would have to follow, but it doesn't necessarily mean that each eligible type already has one. That is why FP-heavy codebases are so eager to embrace the Cats/Scalaz libraries - they provide both syntax and implementations for certain operations that the standard library does not.

An example of a heavily (overly) advertised bifunctor for IO operations is ZIO. Scala's Either, Cats' Validated and Scalaz's Validation would also qualify as bifunctors. Same with the EitherT monad transformer - each of them can be used to embed the error type as the left type and the valid type as the right type (Either and EitherT only by convention, the rest also by nomenclature).

These are examples of coproduct bifunctors, but there are also product ones. A tuple (A, B) would be a bifunctor if we defined a bimap for it, as sketched below.
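A minimal sketch of what such bimaps could look like - standalone helper functions, so nothing here comes from a library:

// product bifunctor: map both sides of a pair
def bimapTuple[A, B, C, D](ab: (A, B))(f: A => C, g: B => D): (C, D) =
  (f(ab._1), g(ab._2))

// coproduct bifunctor: map whichever side is present
def bimapEither[A, B, C, D](ab: Either[A, B])(f: A => C, g: B => D): Either[C, D] =
  ab match {
    case Left(a)  => Left(f(a))
    case Right(b) => Right(g(b))
  }

bimapTuple((1, "a"))(_ + 1, _.toUpperCase)                      // (2, "A")
bimapEither(Left(1): Either[Int, String])(_ + 1, _.toUpperCase) // Left(2)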

Profunctor

A bifunctor might be thought of as 2 separate pipelines going in parallel (e.g. a pipeline of errors and a pipeline of successful computations).

However, if we had something like Flow[A, B] (using Reactive Streams nomenclature), we can see that a bifunctor would not work. We would like to be able to map over the B type and comap over the A type.

     A1 -------- f --------> A  B ------- g -------> B1
Flow[A1, B] <-comap(f)- Flow[A, B] -map(g)-> Flow[A, B1]

If we wanted to do comap with f and map with g at once, we could define a dimap method:

(flowAB: Flow[A, B])
  .dimap(f: A1 => A, g: B => B1): Flow[A1, B1]

As you might have figured, what we just defined is a profunctor - a bifunctor's sibling with a contravariant functor for the first type parameter and a covariant functor for the second one.

You can notice profunctors everywhere you abstract over a function, that is: you take some input, emit some output, and there is enough place to put something before and after the computation.
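The simplest profunctor is Function1 itself. A sketch of dimap written as a standalone helper (dimapFn and the values below are illustrative; in Cats you would get dimap from the Profunctor type class):

def dimapFn[A, B, A1, B1](fn: A => B)(f: A1 => A, g: B => B1): A1 => B1 =
  f andThen fn andThen g

val length: String => Int = _.length

val describeLength: Option[String] => String =
  dimapFn(length)(
    (maybe: Option[String]) => maybe.getOrElse(""), // comap the input
    (len: Int) => s"length: $len"                   // map the output
  )

describeLength(Some("functor")) // "length: 7"
describeLength(None)            // "length: 0"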

Another interesting usage of profunctors is optics. You basically start with a type Lens[A, A] or Prism[A, A] and then you dimap with pairs of functions (or partial functions) A => B and B => A, defined in a way that allows you to look inside your value to extract some specific part or modify your ADT without a cascade of .copy():

trait Lens[S, A] { self =>
  def get(s: S): A
  def set(s: S)(a: A): S
  def modify(s: S)(f: A => A): S = set(s)(f(get(s)))
  
  def compose[A1](l: Lens[A, A1]): Lens[S, A1] = new Lens[S, A1] {
    def get(s: S): A1 = l.get(self.get(s))
    def set(s: S)(a1: A1): S =
      self.set(s)(l.set(self.get(s))(a1))
  }
}
object Lens {
  def apply[A]: Lens[A, A] = new Lens[A, A] {
    def get(s: A): A = s
    def set(s: A)(a: A): A = a
  }
}

final case class Credentials(login: String)
final case class Config(credentials: Credentials)

val intoCredentials = new Lens[Config, Credentials] {
  def get(s: Config): Credentials = s.credentials
  def set(s: Config)(a: Credentials): Config = s.copy(credentials = a)
}
val intoLogin = new Lens[Credentials, String] {
  def get(s: Credentials): String = s.login
  def set(s: Credentials)(a: String): Credentials = s.copy(login = a)
}

val config = Config(Credentials("test"))
intoCredentials.compose(intoLogin).modify(config) { value =>
  value + value
} // Config(Credentials("testtest"))

I don’t want to get into more details here, as optics are interesting enough that they deserve their own, separate article.

Natural transformation / FunctionK

So far we have operated under one assumption - we are changing only the types of the values inside a container, never the type of the container itself. Mapping over a List still returns a List, map over a Future still returns a Future. What if we wanted to do the opposite? What if we wanted to preserve the inner type, but change the container?

// lazy value used for e.g.
// expensive but synchronous calculations
trait Eval[A] { self =>
  
  def value(): A // expensive calculation
  def map[B](f: A => B): Eval[B] = new Eval[B] {
    def value(): B = f(self.value())
  }
}
object Eval {
  
  def apply[A](a: => A): Eval[A] = new Eval[A] {
    def value(): A = a
  }
}

val expensiveInt: Eval[Int] = Eval {
  /* expensive int calculation */
  1
}

val expensiveString: Eval[String] =
  expensiveInt.map(i => i * 2).map(i => i.toString)

val futureString: Future[String] = ???
// how to interpret expensiveString into Future[String] 

This example is rather trivial (after all you can just do Future { expensiveString.value() }), but what if you had some more generic problem? You might decide to use a free functor and decide later what to do with it:

sealed trait Free[S[_], A] {
  def map[B](f: A => B): Free[S, B] =
    Free.Mapped(this, f)
}
object Free {
  final case class Point[S[_], A](a: A)
      extends Free[S, A]
  final case class Mapped[S[_], A, B](fa: Free[S, A], f: A => B)
      extends Free[S, B]
}

(Just in case: I wanted to implement free functors that are not free monads, which is why the implementation differs from what you'll find in Cats/Scalaz/Freestyle/whatever).

Similarly to free monoids, we call it free because it preserves the properties (and laws) of a functor no matter what kind of S[_] we put inside - its lawfulness is completely free of any dependency on the details of S. Point (or Pure) is the name of the function (or constructor) that simply wraps the value - point refers to the fact that a function's argument can also be called a point (a function has a value y at point x), while pure refers to the fact that it is a pure function - it has no side effects, it takes one argument, and there is not much it can do with it. If it were Int => Int it could increment, decrement, multiply etc. by a constant, but since it's A => F[A] it most likely just wraps the value and that's all.

So, we have a pretty useless Free[S, A] and want to interpret it into something more useful, for instance Future[A] or Eval[A]. We can implement it as:

object futureInterpreter {
  
  def apply[S[_], A](free: Free[S, A]): Future[A] =
    free match {
      case Free.Point(a) =>
        Future.successful(a)
      case Free.Mapped(fa, f) =>
        // just an example!! it is not stack safe!!
        futureInterpreter(fa).map(f)
    }
}

object evalInterpreter {
  
  def apply[S[_], A](free: Free[S, A]): Eval[A] =
    free match {
      case Free.Point(a) =>
        Eval(a)
      case Free.Mapped(fa, f) =>
        // just an example!! it is not stack safe!!
        evalInterpreter(fa).map(f)
    }
}
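To see the mechanics, here is a small usage sketch of the pieces above (the values are hypothetical; note that futureInterpreter needs an ExecutionContext in scope for Future's map):

val program: Free[Eval, String] =
  Free.Point[Eval, Int](2)
    .map(_ * 2)
    .map(_.toString)

val asFuture: Future[String] = futureInterpreter(program) // completes with "4"
val asEval: Eval[String] = evalInterpreter(program)       // asEval.value() == "4"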

This would do the job, but it wouldn't be a very generic solution. What if we wanted to interpret Free[S, _] into Eval[_] and then Eval[_] into Future[_]? What if we wanted to have more interpreters and make them more composable?

The first thing we need to do is extract the common interface:

trait Interpreter[F[_], G[_]] {
  def apply[A](fa: F[A]): G[A]
}

Any implementation conforming to such an interface allows us to translate values from one functor to another. As a matter of fact, such an interpreter already has a name: natural transformation. It also has another name: FunctionK - this one refers to the fact that it is a function operating not on actual types, but on type constructors (which makes it a higher-kinded function). Scalaz prefers the former nomenclature while Cats prefers the latter, but it is basically the same concept. A natural transformation is often denoted with the ~> type alias, and since in Scala types with two type parameters can be written with infix notation, don't be surprised to see things like this:

trait NaturalTransformation[F[_], G[_]] { self =>
  def apply[A](fa: F[A]): G[A]
  
  def andThen[H[_]](nt: G ~> H): F ~> H = new (F ~> H) {
    def apply[A](fa: F[A]): H[A] = nt(self(fa))
  }
}

type ~>[F[_], G[_]] = NaturalTransformation[F, G]

def futureInterpreter[S[_]] = new (Free[S, ?] ~> Future) {
  
  def apply[A](fa: Free[S, A]): Future[A] = ???
}

def evalInterpreter[S[_]] = new (Free[S, ?] ~> Eval) {
  
  def apply[A](fa: Free[S, A]): Eval[A] = ???
}

Because a natural transformation is a (parametric) function, you can compose natural transformations just like other functions.
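For instance, a very concrete natural transformation, sketched with the NaturalTransformation/~> defined above (not the Cats/Scalaz one):

val optionToList: Option ~> List = new (Option ~> List) {
  def apply[A](fa: Option[A]): List[A] = fa.toList
}

optionToList(Some(1))           // List(1)
optionToList(None: Option[Int]) // List()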

Yoneda lemma

So far we have evaluated map eagerly: at each point we remembered that F[_] is a functor, and when we mapped, we applied the transformation instantly (whether that transformation was immediately evaluated inside is another matter). So, let's try to figure out a clever trick showing how we could collect transformations and delay the actual mapping.

Let's start with the set of all functions from A to B. Let's call it the hom-set Hom(A, B) (if we wanted to express it using Scala's type system it would be the type Function[A, B]). If we make B a parameter, we'll end up with a functor Hom(A, −) - in this functor we can lift f : B → C into Hom(A, B) → Hom(A, C) (which we can also write as Hom(A, f)) (in other words we create a functor Function[A, ?], where we map over the returned value's type, turning Function[A, B] into Function[A, C]).

We also have a functor F (F[_]).

The Yoneda lemma states that for each possible A we can define a set of natural transformations Nat(Hom(A, −), F) (phi: Function[A, ?] ~> F), which is isomorphic to F(A) (there is a one-to-one correspondence between the elements of these sets).

Intuitively: when you have a set of functions A → B, you can always fix some value a : A which you apply to these functions, obtain a B which you can lift into F(B), and it will be the same as if you interpreted the functions into F(A → B) first and then fed it with the F(A) created by lifting a. The number of possible ways you can do that corresponds to the size of F(A).

Formally, the way we create such a correspondence is:

  • we take some natural transformation Φ ∈ Nat(Hom(A, −), F)
val phi: Function[A, ?] ~> F
  • we apply the type A to the parametric function Φ, obtaining Φ_A : (A → A) → F(A), and pass the identity function id_A into it, so that we end up with an element u : F(A)
val u: F[A]
  • we define the natural transformation as Φ_B(f) = F(f)(u)
object Phi extends (Function[A, ?] ~> F) {
  def apply[B](f: A => B): F[B] =
    F.lift(f).ap(u) // F.lift(f): F[A => B]
}

This takes care of the other direction of the correspondence. The proof of the Yoneda lemma can be found on Wikipedia; here we are only interested in the practical application.

What is the practical application?

The Yoneda lemma states that there is a natural transformation between Function[A, ?] and F[_] that we can construct using F[A]. It also shows that the order in which we apply some A => B and run this natural transformation is irrelevant. So, let's implement that! First, the interface:

trait Yoneda[F[_], A] {
  def apply[B](f: A => B): F[B]
  def map[B](f: A => B): Yoneda[F, B]
}
object Yoneda {
  // to skip explaining type classes and stuff:
  // here I am assuming import of type classes
  // and syntax from cats/scalaz, to be sure that
  // I can always call .map on fa
  def apply[F[_]: Functor, A](fa: F[A]): Yoneda[F, A] = ??? // implemented below
}

We can see that Yoneda[F, ?] represents Function[A, ?] from the lemma, map is the functor mapping and apply is the natural transformation Φ. With such an interface we could:

Yoneda(Eval(2)).map(_ * 2).apply(_.toString)
Yoneda(Eval(2)).apply(i => (i * 2).toString)

Both ways we end up with the same result. However, mapping things within Yoneda has some advantages; namely, we can implement things like:

trait Yoneda[F[_], A] { self =>
  def apply[B](f: A => B): F[B]
  def run: F[A] = apply(identity[A])
  def map[B](f: A => B): Yoneda[F, B] =
    new Yoneda[F, B] {
      def apply[C](g: B => C) = self(f andThen g)
    }
}
object Yoneda {
  def apply[F[_]: Functor, A](fa: F[A]): Yoneda[F, A] =
    new Yoneda[F, A] {
      def apply[B](f: A => B): F[B] = Functor[F].map(fa)(f)
    }
}

The trick here lies in the implementation of apply and map - as you can see, as long as we stay within Yoneda we can compose the functions that will eventually be applied inside map. However, until we call apply (or run), none of the maps will be evaluated on fa. So one way of thinking about Yoneda is that it maps your functor lazily. The other way (used in the Cats and Scalaz documentation) is that it is a partially applied map method of a Functor type class, where the f is passed only after you finish composing functions.

At the same time, we can completely forget that F[_] is a functor - we only see that Yoneda[F, ?] is a functor and use it as such.

If you still wonder what you could do with it - imagine that you want to run some computations at once, without breaking them into chunks that would be run on separate threads, or maybe in separate calls to a thread pool if you are thinking about Future. You want to optimize them by collapsing them into one function, but you don't want to lose that nice syntax where you just map things.

With Yoneda you can lift your algebra in the first call, keep your nice maps, and then post one aggregated function to a thread pool on run.
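A hedged usage sketch of that idea, assuming the Yoneda defined above, a cats Functor[Future] instance and an ExecutionContext in scope (the pipeline values are illustrative):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import cats.implicits._ // provides Functor[Future]

val pipeline: Yoneda[Future, String] =
  Yoneda(Future(2))   // lift once - this is the only place a Functor is needed
    .map(_ * 2)       // compose functions; nothing extra is posted to the pool
    .map(_ + 1)
    .map(_.toString)

val result: Future[String] = pipeline.run // one aggregated map, completing with "5"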

Dual to Yoneda is Coyoneda - it differs mainly in that it doesn't require you to prove that F[_] is a functor when you create the Coyoneda, but only when you exit it in run:

trait Coyoneda[F[_], A] { self =>
  
  type I // original value type
  
  val fi: F[I] // original F[A]
  val f: I => A
  
  def run(implicit functor: Functor[F]): F[A] =
    functor.map(fi)(f)
  
  def map[B](g: A => B): Coyoneda[F, B] =
    new Coyoneda[F, B] {
      type I = self.I
      val fi = self.fi
      val f = self.f andThen g
    }
}
object Coyoneda {
  
  def lift[F[_], A](fa: F[A]) =
    new Coyoneda[F, A] {
      type I = A
      val fi = fa
      val f = identity[A]
    }
}
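A matching usage sketch for Coyoneda (again assuming a Functor[List] instance, e.g. cats', is in scope - note that no Functor is needed until run):

import cats.implicits._ // provides Functor[List]

val coyo: Coyoneda[List, String] =
  Coyoneda.lift(List(1, 2, 3))
    .map(_ * 2)
    .map(_.toString)

coyo.run // List("2", "4", "6") - all composed functions applied in one traversal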

Like everything else discussed here, Yoneda and Coyoneda are defined in both Cats and Scalaz.

Summary

In this article, we briefly discussed some of the more common functors and related terms. We tried to build some intuition and show practical applications.

At this point we should already see that, when it comes to building pipelines of data, different kinds of functors are ubiquitous. However, none of the functors we discussed was able to bypass the 1-correct-input-1-correct-output requirement. For that - failing with an error and recovering, filtering, producing multiple values out of one - we need a more generic solution.

But that is a tale for another day.
