078df939d7faab59e38c949a6fcbe3853b615211 — zainab-ali 7 months ago 527587a
Add cats-effect IORuntime post
M nix/mdoc.bash => nix/mdoc.bash +5 -1
@@ 5,7 5,7 @@

set -ex
fs2=$(coursier fetch -p co.fs2:fs2-core_3:3.1.3)
# cats-effect=$(coursier fetch -p org.typelevel:cats-effect_3.0.2:3.2.9)
catseffect=$(coursier fetch -p org.typelevel:cats-effect_3:3.3.2)

# FIXME: coursier doesn't finish when there is an `error` snippet in mdoc
# Hack around this by timing out once we think it's finished

@@ 13,3 13,7 @@ timeout 1m coursier launch org.scalameta:mdoc_3:2.2.23 -- \
	 --classpath $fs2 \
	 --in src/chapters/fs2/snippets.md \
	 --out src/chapters/fs2/snippets.out.md || true
timeout 3m coursier launch org.scalameta:mdoc_3:2.2.23 -- \
	 --classpath $catseffect \
	 --in src/chapters/2022-02-12-cats-effect-ioruntime/snippets.md \
	 --out src/chapters/2022-02-12-cats-effect-ioruntime/snippets.out.md || true

A src/chapters/2022-02-12-cats-effect-ioruntime/abstract.html.pm => src/chapters/2022-02-12-cats-effect-ioruntime/abstract.html.pm +13 -0
@@ 0,0 1,13 @@
#lang pollen

◊(define-meta time (1 0))
◊(define-meta title "Understanding thread pools with the cats-effect IORuntime")

◊code-inline{Could not find an implicit IORuntime.} is a common error
when using cats-effect. But what is an ◊code-inline{IORuntime}, and
how should we use it? In this tutorial, you’ll use cats-effect 3 to
explore the basics of parallelism, thread pools and blocking. You’ll
see why the cats-effect ◊code-inline{IORuntime.global} is the best
model for your application.

A src/chapters/2022-02-12-cats-effect-ioruntime/introduction.html.pm => src/chapters/2022-02-12-cats-effect-ioruntime/introduction.html.pm +20 -0
@@ 0,0 1,20 @@
#lang pollen


◊headline2{Follow along}

◊p{Assuming you’re familiar with SBT, create a ◊code-inline{build.sbt}
with the following contents:}

◊snippet[#:name "sbt"]{}

◊p{Enter the console with ◊code-inline{sbt console}.}

◊p{Copy the following setup code into the repl. This code sets up a few
◊code-inline{IORuntime} variants for us to play with ⸺ you don’t need
to understand it:}

◊snippet[#:name "setup"]{}

◊p{Finally, make sure your laptop is plugged in. We’ll likely need a fair amount of power.}

A src/chapters/2022-02-12-cats-effect-ioruntime/overview.html.pm => src/chapters/2022-02-12-cats-effect-ioruntime/overview.html.pm +26 -0
@@ 0,0 1,26 @@
#lang pollen

◊(require (only-in "abstract.html.pm" [doc abstract/doc]))

◊headline2{You will learn}
  ◊item{How to run computations in parallel using cats-effect 3.}
  ◊item{What a thread pool is; and about bounded and unbounded thread pools.}
  ◊item{What the ◊code-inline{IORuntime} is made of, and why
  cats-effect 3 structures it the way it does.}
  ◊item{When you should use the ◊code-inline{IORuntime.global} (and when you shouldn’t).}

◊headline2{I assume you know}

◊item{A bit about the cats-effect ◊code-inline{IO} datatype, to the
extent that you can create and run an ◊code-inline{IO}.}

◊p{You’ll get the most out of this if cats-effect is your first
experience of parallel computation. You’re not too sure what thread
pools are (or threads, for that matter), and aren’t as confident in
using them as you’d like.}

A src/chapters/2022-02-12-cats-effect-ioruntime/recap.html.pm => src/chapters/2022-02-12-cats-effect-ioruntime/recap.html.pm +37 -0
@@ 0,0 1,37 @@
#lang pollen

◊p{To summarize:}

◊item{Operations can be thought of as either ◊em{blocking} or ◊em{compute-intensive}.}
◊item{A thread pool controls how many tasks can execute in parallel. An
◊code-inline{ExecutionContext} is just another name for a thread pool.} 
◊item{Blocking operations are best scheduled on an unbounded
thread pool, while compute-intensive operations are best on a thread pool limited to
the number of available processors.}
◊item{An ◊code-inline{IORuntime} has two thread pools: one for compute
operations and another for blocking operations.}
◊item{The global ◊code-inline{IORuntime} has a bounded compute pool
and an unbounded blocking pool.}
◊item{◊code-inline{IO} operations run on the compute pool by
default. You can tap into the blocking pool using ◊code-inline{IO.blocking}.}

◊p{We’ve dived deep into thread pools, but there’s plenty more to
  explore in cats-effect’s thread model. If you’ve followed along,
  you’re now well equipped to understand ◊external-link[#:href "https://gist.github.com/djspiewak/46b543800958cf61af6efa8e072bfd5c"]{Daniel
  Spiewak’s thoughts on the IORuntime design}. Have a look at
  ◊external-link[#:href "https://typelevel.org/cats-effect/docs/schedulers"]{how it handles
  scheduling}, and see how schedulers affect the threading landscape.}

◊p{Outside of Scala’s thread pools lies a whole ocean of
  concurrency. If you’ve a thirst for more, why not research
  threads in your operating system? Convert to Linux and play with its
  scheduler.}

A src/chapters/2022-02-12-cats-effect-ioruntime/snippets.md => src/chapters/2022-02-12-cats-effect-ioruntime/snippets.md +250 -0
@@ 0,0 1,250 @@
# sbt

```scala
ThisBuild / scalaVersion := "3.0.2"
ThisBuild / libraryDependencies +=
  "org.typelevel" %% "cats-effect" % "3.3.2"
```

# imports
```scala mdoc
import $ivy.`org.typelevel::cats-effect:3.3.2`
```

# setup

```scala mdoc
import cats.effect._
import cats.effect.unsafe._
import cats.effect.implicits._
import cats.implicits._

object Threading {

  val basicRuntime: IORuntime = IORuntime(
    compute = IORuntime.createDefaultBlockingExecutionContext("compute")._1,
    blocking = IORuntime.createDefaultBlockingExecutionContext("blocking")._1,
    scheduler = IORuntime.createDefaultScheduler()._1,
    shutdown = () => (),
    config = IORuntimeConfig()
  )

  def boundedRuntime(numThreads: Int): IORuntime = {
    lazy val lazyRuntime: IORuntime = IORuntime(
      compute = IORuntime
        .createDefaultComputeThreadPool(lazyRuntime, numThreads, "compute")._1,
      blocking =
        IORuntime.createDefaultBlockingExecutionContext("blocking")._1,
      scheduler = IORuntime.createDefaultScheduler()._1,
      shutdown = () => (),
      config = IORuntimeConfig()
    )
    lazyRuntime
  }

  def time(work: IO[Unit]): IO[String] =
    work.timed.map {
      case (t, _) => s"The task took ${t.toSeconds} seconds."
    }
}

import Threading._
```

# snooze

```scala mdoc:silent
val snooze: IO[Unit] = IO(Thread.sleep(2000L))
```

# run-snooze-no-runtime

```scala mdoc:fail
time(snooze).unsafeRunSync()
```
# run-snooze

```scala mdoc
time(snooze).unsafeRunSync()(basicRuntime)
```
# snooze-list

```scala mdoc:silent
val snoozes: List[IO[Unit]] = List(snooze, snooze)
```

# snooze-parallel

```scala mdoc:silent
val parallelSnoozes: IO[Unit] = snoozes.parSequence.void
```

# snooze-parallel-run

```scala mdoc
time(parallelSnoozes).unsafeRunSync()(basicRuntime)
```

# snooze-parallel-thousand

```scala mdoc:silent
val lotsOfSnoozes = List.fill(1000)(snooze).parSequence.void
```

# snooze-parallel-thousand-run

```scala mdoc
time(lotsOfSnoozes).unsafeRunSync()(basicRuntime)
```

# factorial

```scala mdoc:silent
val factorial: IO[Unit] = {
  @scala.annotation.tailrec
  def go(n: Long, total: Long): Long =
    if (n > 0) go(n - 1, total * n - 1) else total
  IO(go(2000000000L, 1)).void
}
```

# factorial-run

```scala mdoc
time(factorial).unsafeRunSync()(basicRuntime)
```

# factorial-io-parallelized

```scala mdoc:silent
val factorials: IO[Unit] = List.fill(10)(factorial).parSequence.void
```

# factorial-io-parallelized-run

```scala mdoc
time(factorials).unsafeRunSync()(basicRuntime)
```

# runtime-available-processors

```scala mdoc
val numProcessors = Runtime.getRuntime().availableProcessors()
```
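The processor limit isn’t specific to cats-effect. Here’s a small plain-JVM sketch (an editorial aside, not one of the post’s mdoc snippets; the pool size of 2 and task count of 4 are arbitrary choices) that measures the peak parallelism of a fixed pool:

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

object BoundedPoolSketch {
  // Runs `tasks` sleeping tasks on a fixed pool of `poolSize` threads and
  // reports the highest number of tasks ever observed running at once.
  def maxObservedParallelism(poolSize: Int, tasks: Int): Int = {
    val pool    = Executors.newFixedThreadPool(poolSize)
    val running = new AtomicInteger(0)
    val maxSeen = new AtomicInteger(0)
    (1 to tasks).foreach { _ =>
      pool.execute { () =>
        val now = running.incrementAndGet()
        maxSeen.getAndUpdate(m => math.max(m, now))
        Thread.sleep(100L) // stand-in for a task occupying its thread
        running.decrementAndGet()
        ()
      }
    }
    pool.shutdown()
    pool.awaitTermination(10, TimeUnit.SECONDS)
    maxSeen.get()
  }
}
```

However many tasks we submit, `maxObservedParallelism(2, 4)` can never report more than 2: a fixed pool never runs more tasks at once than it has threads.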

# factorial-io-parallelized-time-each

```scala mdoc:silent
val timedFactorial: IO[String] = time(factorial)
val timedFactorials: IO[List[String]] =
  List.fill(10)(timedFactorial).parSequence
```

# factorial-io-parallelized-time-each-run

```scala mdoc
timedFactorials.unsafeRunSync()(basicRuntime)
```

# factorial-bounded-threadpool

```scala mdoc
time(factorials).unsafeRunSync()(boundedRuntime(2))
```

# factorial-time-bounded-threadpool

```scala mdoc
timedFactorials.unsafeRunSync()(boundedRuntime(2))
```

# factorial-time-bounded-threadpool-20

```scala mdoc
timedFactorials.unsafeRunSync()(boundedRuntime(20))
```

# factorial-time-bounded-threadpool-available

```scala mdoc
timedFactorials.unsafeRunSync()(boundedRuntime(numProcessors))
```

# snooze-10

```scala mdoc:silent
val tenSnoozes: IO[Unit] = List.fill(10)(snooze).parSequence.void
```
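A back-of-the-envelope way to predict the timings in this post (an editorial sketch, not part of the original snippets): on a bounded pool, identical tasks complete in waves, so the total is roughly the number of waves times the time per task.

```scala
// Rough model: tasks run in ceil(tasks / threads) waves on a bounded pool.
def estimatedSeconds(tasks: Int, threads: Int, secondsPerTask: Int): Int =
  ((tasks + threads - 1) / threads) * secondsPerTask
```

For ten two-second snoozes on an eight-thread pool this predicts `estimatedSeconds(10, 8, 2) == 4` seconds, matching the four seconds discussed in the chapter text; a thousand snoozes on an unbounded pool stay at two.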

# snooze-10-run

```scala mdoc
time(tenSnoozes).unsafeRunSync()(boundedRuntime(numProcessors))
```

# combined

```scala mdoc:silent
val snoozeAndCompute: IO[Unit] =
  List(factorials, tenSnoozes).parSequence.void
```

# combined-run

```scala mdoc
time(snoozeAndCompute).unsafeRunSync()(boundedRuntime(numProcessors))
```

# setup-bounded-runtime

```scala
def boundedRuntime(numThreads: Int): IORuntime =
  IORuntime(
    compute = IORuntime.createDefaultComputeThreadPool(numThreads),
    blocking = IORuntime.createDefaultBlockingExecutionContext(),
    ...
  )
```

# compute-threadpool

```scala mdoc
boundedRuntime(2).compute
```

# better-snooze

```scala mdoc:silent
val betterSnooze: IO[Unit] = IO.blocking(Thread.sleep(2000L))
val tenBetterSnoozes: IO[Unit] =
  List.fill(10)(betterSnooze).parSequence.void
```

# better-snooze-run

```scala mdoc
time(tenBetterSnoozes).unsafeRunSync()(boundedRuntime(numProcessors))
```

# combined-blocking

```scala mdoc:silent
val betterSnoozeAndCompute: IO[Unit] =
  List(factorials, tenBetterSnoozes).parSequence.void
```

# combined-blocking-run

```scala mdoc
time(betterSnoozeAndCompute).unsafeRunSync()(boundedRuntime(numProcessors))
```

A src/chapters/2022-02-12-cats-effect-ioruntime/the-IORuntime.html.pm => src/chapters/2022-02-12-cats-effect-ioruntime/the-IORuntime.html.pm +163 -0
@@ 0,0 1,163 @@
#lang pollen

◊(define snooze-time "two")
◊(define snooze-time-double "four")

◊headline-link{Blocking and computing}

◊p{The ◊code-inline{snooze} task didn’t do any calculating — its only
operation was a ◊code-inline{Thread.sleep} — so its
thread didn’t occupy a processor in order for the task to progress. As
long as we had an unbounded thread pool, an unlimited
number of ◊code-inline{snooze} tasks could run at once, each on their
own thread. These sorts of tasks are known as “blocking”: the
application sits and waits for them to complete, but doesn’t actually
do any calculations. Blocking tasks are rare, and you should be very
reluctant to write one.}

◊note[#:title "By the way"]{
We’re using ◊code-inline{Thread.sleep} as an educational example. If
you ever need to pause in your own code, use
◊code-inline{IO.sleep}. This ◊em{doesn’t} block a thread, but you can
ponder that mystery later.}

◊p{The ◊code-inline{factorial} task, on the other hand, did a lot of 
multiplication. Each task occupied one of my eight processors as it ran, so
only eight of those tasks could run at the same time. These sorts of tasks are
termed “compute-intensive”. While ◊code-inline{factorial} doesn’t
resemble a typical Scala application, it’s more similar to one than
◊code-inline{snooze} is.}

◊note[#:title "Another thing"]{
You should never really write code like
◊code-inline{factorial} either. If you do have a compute-intensive
task that takes seconds, it’s best to break it up, ceding
control to other tasks as it runs.}

◊headline-link{Varying it up}

◊p{This difference between blocking and compute-intensive tasks poses
a problem for us: what if we want to run a load of ◊code-inline{factorial}
and ◊code-inline{snooze} tasks at the same time?}

◊snippet[#:name "combined"]{}

◊p{Which runtime should we choose?}

◊p{If we use the ◊code-inline{basicRuntime}, each task will be given
its own thread. This is good for the blocking ◊code-inline{snooze}
task, but bad for ◊code-inline{factorial}. But if we use a
◊code-inline{boundedRuntime} our ◊code-inline{snooze} task will block
a thread that ◊code-inline{factorial} could use to progress.}

◊snippet[#:name "combined-run"]{}

◊p{As expected, using a ◊code-inline{boundedRuntime} isn’t ideal.}

◊p{How can we give the blocking ◊code-inline{snooze} task unlimited scaling, but
bound the ◊code-inline{factorial} task at eight threads?}

◊p{Thankfully, there’s a way to get the best of both worlds. Instead
of having just one thread pool, we could have two: an unbounded
thread pool for blocking tasks and a bounded one for compute tasks.}

◊p{It turns out that cats-effect 3 ◊code-inline{IORuntime} supports
this exact use case. Let’s take a closer look at the setup code for
the ◊code-inline{boundedRuntime} to see how. Here’s a simplified version:}

◊snippet[#:name "setup-bounded-runtime"]{}

◊p{The ◊code-inline{IORuntime} accepts two thread pool arguments:
◊code-inline{compute} and ◊code-inline{blocking}. It uses these
thread pools for the compute-intensive and blocking operations
respectively.}

◊note[#:title "Key takeaway"]{The cats-effect 3 ◊code-inline{IORuntime} can be
thought of as two thread pools.}

◊p{We can access the compute thread pool using the
◊code-inline{compute} field. This gives us an ◊code-inline{ExecutionContext}:}

◊snippet[#:name "compute-threadpool"]{}

◊note[#:title "Point to note"]{
Thread pools in Scala are represented by the
◊code-inline{ExecutionContext} class. “Execution context” is just
another term for a thread pool.}

◊headline-link{A proper snooze}

◊p{You might be a bit confused by this: there are two pools in the
◊code-inline{IORuntime}, but haven’t we only been thinking about one?}
◊p{So far, we’ve thought of the ◊code-inline{basicRuntime} and
◊code-inline{boundedRuntime} functions as configuring a single
pool. In actual fact, they configure two: they both have a hard-coded
unbounded blocking pool. It’s just that we never used it.}

◊p{By default, cats-effect’s ◊code-inline{IO} will always use the
◊code-inline{compute} pool — this is the pool we set a bound on in
◊code-inline{boundedRuntime}. If we want to tap into the blocking
pool, we must use a different constructor: the aptly named ◊code-inline{IO.blocking}.}

◊p{Here’s a better snooze function:}

◊snippet[#:name "better-snooze"]{}

◊p{Let’s run a few better snoozes using our
◊code-inline{boundedRuntime}. How long should they take?}

◊snippet[#:name "better-snooze-run"]{}

◊p{Our previous ◊code-inline{tenSnoozes} task took
◊|snooze-time-double| seconds on the
◊code-inline{boundedRuntime} because it was run on the bounded
compute pool. On the other hand, ◊code-inline{tenBetterSnoozes} only
takes ◊|snooze-time| seconds: it’s run on the unbounded blocking pool.}

◊headline-link{A better work-sleep balance}

◊p{What happens if we interleave blocking operations with
compute-intensive ones?}

◊p{Let’s have a task composed of both:}

◊snippet[#:name "combined-blocking"]{}

◊p{The previous ◊code-inline{snoozeAndCompute} task took six
seconds. How long should this one take?}

◊snippet[#:name "combined-blocking-run"]{}

◊p{It’s much faster: the threads in the bounded compute pool no longer
need to handle the ◊code-inline{Thread.sleep}, and the unbounded
blocking pool lets the ◊code-inline{betterSnooze} task scale unlimitedly.}

◊headline-link{The global IORuntime}

◊p{We’ve explored a lot with our ◊code-inline{basicRuntime} and
◊code-inline{boundedRuntime} functions. But we really wanted to know
about ◊code-inline{IORuntime.global}.}

◊p{What’s special about it?}

◊p{In actual fact, you’ve already used it: the global runtime is
effectively a runtime with a compute pool bounded at the number of
available processors. In other words, it’s the same as the
◊code-inline{boundedRuntime(numProcessors)} we settled on earlier.}

◊note[#:title "Key takeaway"]{The ◊code-inline{IORuntime}, then, is composed of an unbounded blocking pool and a
bounded compute pool. The global ◊code-inline{IORuntime} has a compute pool bounded
at the number of processors in your computer.}

◊p{Whenever you need to use a thread pool, you can rarely do better
than importing ◊code-inline{IORuntime.global} and making use of it.}

◊p{The cats-effect ◊code-inline{IOApp} does this for you, so in most
cases you don’t even need to know that the ◊code-inline{IORuntime} exists.}

A src/chapters/2022-02-12-cats-effect-ioruntime/thread-pools.html.pm => src/chapters/2022-02-12-cats-effect-ioruntime/thread-pools.html.pm +273 -0
@@ 0,0 1,273 @@
#lang pollen

◊headline-link{Why have thread pools?}

◊(define snooze-time "two")
◊(define snooze-time-double "four")

◊p{A ◊keyword{thread pool}, also known as an ◊keyword{execution
context}, is a way of managing ◊keyword{parallelism}.}

◊p{To demonstrate, let’s have a look at a simple task: ◊code-inline{snooze}.}

◊snippet[#:name "snooze"]{}

◊p{◊code-inline{snooze} does absolutely nothing. More precisely, it
does absolutely nothing for ◊|snooze-time| seconds. We can double check this by
running it using our handy ◊code-inline{time} function:}

◊snippet[#:name "run-snooze-no-runtime"]{}

◊p{Whoops! We need an ◊code-inline{IORuntime}. Let’s use our own
◊code-inline{basicRuntime} explicitly:}

◊snippet[#:name "run-snooze"]{}

◊p{As expected, it took ◊|snooze-time| seconds to run.}

◊p{What if we have multiple snooze tasks?}

◊snippet[#:name "snooze-list"]{}

◊p{We can combine a list of tasks using ◊code-inline{parSequence}:}

◊snippet[#:name "snooze-parallel"]{}

◊p{The ◊code-inline{parSequence} function produces an ◊code-inline{IO}
that runs multiple tasks in
parallel. If each task takes ◊|snooze-time| seconds, how long should
◊code-inline{parallelSnoozes} take?}

◊snippet[#:name "snooze-parallel-run"]{}

◊p{Both tasks were run at the same time, so the total elapsed time was
still only ◊|snooze-time| seconds.}

◊p{If you’re used to parallel computations, you may look at
◊code-inline{parSequence} with a degree of suspicion. It lets us run
many tasks in parallel, but how many?}

◊p{For instance, we can declare a thousand ◊code-inline{snooze} tasks:}

◊snippet[#:name "snooze-parallel-thousand"]{}

◊p{Will they really only take ◊|snooze-time| seconds?}

◊snippet[#:name "snooze-parallel-thousand-run"]{}

◊p{Nice! There seems to be no upper limit on the tasks we can run in
parallel.}

◊headline-link{Knowing our limits}

◊p{Unlimited parallelism seems like a great idea, but it has
significant downsides. Let’s have a look at a different task to see why.}

◊snippet[#:name "factorial"]{}

◊p{Woah! That’s an odd bit of code.}

◊p{If you’re a functional programming enthusiast, you’re probably so
fond of factorials that you compute them in your sleep.}

◊p{Those skills, elegant as they are, aren’t too important here. Don’t
worry if you’ve never heard of a factorial, haven’t seen ◊code-inline{@scala.annotation.tailrec} before, or get a headache
reading this Escher-like code.}

◊p{The key part of ◊code-inline{factorial} is that, unlike ◊code-inline{snooze}, it does a lot of multiplication.}

◊p{Running this on my rusty old laptop takes approximately two seconds.}

◊snippet[#:name "factorial-run"]{}

◊p{The functional programmer within you might point out that this
code is pure: there’s no reason to wrap it in an
◊code-inline{IO}. While that’s true, doing so lets us parallelize it
with ◊code-inline{parSequence}:}

◊snippet[#:name "factorial-io-parallelized"]{}

◊p{How long should ◊code-inline{factorials} take to run?}
◊p{It took two seconds to run ◊code-inline{factorial} once, so it
should also take two seconds to run in parallel, shouldn’t it?:}

◊snippet[#:name "factorial-io-parallelized-run"]{}

◊p{If you ran the code above, you probably felt your laptop heat up a
bit. You might have also found that the code ◊em{didn’t} take two seconds
— it took longer.}

◊p{This is different to our ◊code-inline{snooze} task, which always took
◊|snooze-time| seconds regardless of whether we ran one, ten or a
thousand in parallel.}

◊p{Why would that be?}

◊p{To answer that question, we need to take a closer look at our computers.}

◊headline-link{The processor beneath}

◊p{Despite what we might wish, our laptops are not magical boxes with
unlimited compute power:
they’re made of physical devices, and those devices have limits. A
computer has a limited number of processors, each of which can compute
one thing at once.}

◊p{We can check that number in Scala by taking a look at the
◊code-inline{Runtime} object:}

◊snippet[#:name "runtime-available-processors"]{}

◊p{My humble laptop has eight processors: it can execute a maximum of
eight computations at once. Even if I ask it to calculate ten
factorials in parallel, it won’t actually do so.}

◊p{You might rightly wonder: why didn’t we hit this limit for the
◊code-inline{snooze} task? This is because the ◊code-inline{Thread.sleep}
operation in ◊code-inline{snooze} didn’t occupy a processor as it ran.}

◊todo{Go into more detail about the OS?}

◊headline-link{Setting our limits}

◊p{We can take a closer look at how our factorial task is getting run
by timing each task:}

◊snippet[#:name "factorial-io-parallelized-time-each"]{}

◊p{This gives a string description for each of the
ten tasks corresponding to how long the task took to run.}

◊p{Let’s run it to check the times:}

◊snippet[#:name "factorial-io-parallelized-time-each-run"]{}

◊p{That’s strange! The ◊code-inline{factorials} task may have taken
six seconds in total, but shouldn’t each task have taken two seconds?}

◊p{Instead, we see times of anywhere between two and six.}

◊p{This is because all tasks are fired off at the same time, but our
processors switch between them as they run. A processor might start
computing a task, but put it on hold in order to compute a different
one, switching back to it at a later time.

Tasks are started, halted and restarted as they all compete for
processor time.}

◊note[#:title "By the way"]{
If you’re confused, take a moment to poke at the code. Why not insert
an ◊code-inline{IO.println} at the end of ◊code-inline{factorial} and
see when it’s printed out?}

◊p{The more tasks we parallelize, the more switching each processor
has to do. This is problematic for a few reasons:}

◊item{Switching between tasks is expensive: a processor has to unload all
the information associated with the computation it’s about to pause,
and reload the information for the next.}
◊item{A paused computation still has resources hanging around. Our
◊code-inline{factorial} task doesn’t need too much memory, but we 
could easily write a task that used a lot of heap space. Running too
many memory-intensive computations would give us
◊code-inline{OutOfMemoryError} exceptions.}

◊p{This is why it’s generally much more useful to limit the number of
tasks that can be run in parallel.}

◊p{We can do this using ◊keyword{thread pools}.}

◊headline-link{Bounded and unbounded thread pools}

◊p{Our current limit, or lack thereof, is specified by our
thread pool. The cats-effect ◊code-inline{IORuntime} has a
thread pool under the hood. The ◊code-inline{basicRuntime} we’ve been
using has an ◊em{unbounded} thread pool: it can execute an unlimited
number of tasks in parallel.}

◊p{In our ◊code-inline{Threading} setup code, we declared another
◊code-inline{boundedRuntime} function. Let’s give it a spin.}

◊p{We can pick a bound of two for ten ◊code-inline{factorial} tasks:}

◊snippet[#:name "factorial-bounded-threadpool"]{}

◊p{It’s much slower than before — only two tasks are run at once.}

◊p{How long does each task take to run? We can check with ◊code-inline{timedFactorials}:}

◊snippet[#:name "factorial-time-bounded-threadpool"]{}

◊p{Unlike the previous unbounded thread pool, each task takes two
seconds ⸺ the tasks might be ◊em{scheduled} at once, but they’re fired
off over time, once a ◊keyword{thread} is free to compute them.}

◊note[#:title "Definition"]{
A ◊keyword{thread pool} controls the number of tasks that can be
executed in parallel. Each task is run using a ◊keyword{thread} in the
pool. A ◊keyword{bounded thread pool} has a limited number of threads,
so limits the number of parallel tasks.}

◊p{What if we set the bound higher?}

◊snippet[#:name "factorial-time-bounded-threadpool-20"]{}

◊p{The ◊code-inline{timedFactorials} task behaves as if it were running
on the ◊code-inline{basicRuntime}: it’s as if we didn’t have
a bound at all.}

◊p{If you think about it, this makes sense: if we have more
computations running than the number of processors, each processor
will still need to switch between them. Our ◊code-inline{factorial}
tasks will end up being paused by the processor and taking longer.}

◊p{So far, we’ve experimented with bounds of two and twenty. Having
two tasks run at once gets around our thread-switching 
problem: each processor can focus on a single task. But having only
two isn’t too useful: most of our processors aren’t doing anything.}

◊p{The best limit probably corresponds to the number of
processors. Let’s check:}

◊snippet[#:name "factorial-time-bounded-threadpool-available"]{}

◊p{Sure enough, each task takes two seconds.}

◊note[#:title "Key takeaway"]{A thread pool bounded at the number of
available processors makes the best use of your computer.}


◊p{A thread pool bounded at ◊code-inline{numProcessors} is the
best option for the ◊code-inline{factorial} task. But what about
◊code-inline{snooze}?}

◊p{We know that we can run an unlimited number of parallel
◊code-inline{snooze} tasks using the ◊code-inline{basicRuntime} — this
had an unbounded thread pool. What about our ◊code-inline{boundedRuntime}?}

◊p{Let’s test it by running more tasks than processors. We can
construct an ◊code-inline{IO} that runs ten tasks in parallel:}

◊snippet[#:name "snooze-10"]{}

◊p{Let’s try running this on our bounded thread pool. Each task takes
◊|snooze-time| seconds. How long should the ◊code-inline{tenSnoozes}
task take?}

◊snippet[#:name "snooze-10-run"]{}

◊p{Not ◊|snooze-time|, but ◊em{◊|snooze-time-double|} seconds.}

◊p{The ◊code-inline{Thread.sleep} call might not hog a processor, but
it does hog a thread in our pool.}

◊p{By choosing a bounded thread pool for our ◊code-inline{tenSnoozes} task,
we cause it to take longer.  If we want to get our task to complete as
fast as possible, it seems better to have an unbounded pool.}

M src/index.ptree => src/index.ptree +7 -0
@@ 10,3 10,10 @@ chapters/fs2/recap.html