scalajs & zio: react fetcher hooks and effects

Hooks are relatively new in the react world. Hooks provide slices of capabilities to a component in a more composable model than class-based components. A “fetcher hook” fetches data and provides it to a react function component asynchronously as the data becomes available. The component must already be rendered to use a fetcher hook.

I created a general scala.js react fetcher component in a previous blog. It works, but it is not a hook fetcher. Because it is not a hook, it is less flexible: it forces the application to use a specific component hierarchy. The fetcher in that blog is a react component, so it returns a “dom” component. A react hook is not a component; a hook returns data and functions.

The NPM registry is full of js/typescript react hook fetchers. They are easy to write. If a js Promise works for you, use them. Most hook implementations assume you are using HTTP transport via the browser’s “fetch” API and a “url”-based state model. async/await was recently added to javascript, making it easier to use js.Promise effects compared to the standard hierarchical, callback-tree syntax. Today, many programs use non-HTTP protocols such as websockets or rsocket (websockets with advanced protocols). While useful today, some of the react hook fetcher libraries may be less useful going forward.

The recent introduction of react “Suspense” has raised awareness of how rendering impacts the user experience. If you are suddenly shown a large spinner screen while data is being fetched, then flipped to a new screen, you may feel that there were too many flips and become distracted. Many fetcher libraries do not address this problem. React’s “concurrent” rendering support allows a react component to start rendering, easily inform the rendering engine that it cannot continue, allow the rendering engine to render other components, then render the component once an asynchronous “condition” is satisfied. Instead of trying to expose a complicated API, FB wants to make this easy to use.

Here are some links for react async data fetching. You may want to review them because they hint at the asynchronous fetching features that are useful in real applications. Keep an eye on these features as you work through this blog.

Suspense interrupts a component’s execution path and allows other parts of the UI to render. You set the alternative display logic in the parent component using a Suspense component. If the child’s render is interrupted, the fallback rendering logic is used. The child interrupts itself by throwing a js.Promise when it has a dependency that prevents it from rendering. The dependency is often, but not always, a data dependency. There can also be a “code” dependency via the new async import mechanism.
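
To make the mechanism concrete, here is a minimal, hypothetical sketch in scala.js. The Resource type and its states are my own placeholders, not a react or scalajs-reaction API; the point is simply that reading a pending resource throws the underlying js.Promise, which the nearest Suspense parent catches, rendering the fallback until the promise settles.

import scala.scalajs.js

// Hypothetical resource states; a real cache would populate these.
sealed trait Resource[+A]
case class Loaded[+A](value: A) extends Resource[A]
case class Pending(p: js.Promise[_]) extends Resource[Nothing]

def read[A](r: Resource[A]): A = r match {
  case Loaded(a)  => a
  // Throwing the pending promise is what "suspends" the component; the
  // nearest Suspense parent shows its fallback until the promise settles.
  case Pending(p) => throw js.JavaScriptException(p)
}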

By using a “Suspense” component, you can avoid adding excessive machinery in the parent and child. FB has updated relay with Suspense. relay focuses on graphql and advanced caching models. Unsurprisingly, relay is a complex library with code generators, compiler augmentation, string-based DSLs and a lot of abstractions. To solve these problems while keeping the result ergonomic for programmers, you may need industrial grade software engineering. Once you look at relay, you immediately wonder if there is an easier way, a subject for another blog.

Old fetcher component

Back to my poor, old fetcher component…

The code below is from the original blog with some minor changes. You must provide a “runner” to run the scala effect F. The strategy for running an effect is captured in a parameter. Separating out the “runner” makes the component simpler and more reusable, because you can use any “runner” strategy as long as it produces a non-effectful Either. An Either is a common scala data structure that holds one of two values; in our case, either an error value or a domain value. Unlike other react component libraries, there is no assumption about the effect: the scala.js component could work with a websocket or an rsocket. As an aside, neither the component function nor the hook described later is a pure function, due to the way that react works.

class Fetcher[F[_], P, E, T](Name: String) {
  /** Load state passed to a child. */
  sealed trait FetchState

  /** Load was successful, hold item. */
  case class Success(item: T) extends FetchState

  /** Load resulted in an error. */
  case class Error(content: E) extends FetchState

  /** Loading still in progress. */
  case object Fetching extends FetchState

  /** Initial state until a fetch request is made. */
  case object NotRequested extends FetchState

  /** Initiate a fetch for P. */
  type FetchCallback = F[P] => Unit
  /** Given a fetch request `F[P]` and a callback, run the F and call the callback
   * to process the results. The results have to be split into an error part and
   * a "value" part so that the proper fetch state can be passed to the child.
   */
  type Runner = F[P] => (Either[E, T] => Unit) => Unit

  trait Props extends js.Object {
    val children: (FetchState, FetchCallback) => ReactNode
    val run: Runner
    val initialValue: Option[F[P]]
  }

  /** Provide data loading status to a child.
   * @param children Callback when fetch state changes. Convenience thunk to
   *  initiate fetch. Return child.
   * @param run Run a F[T] to obtain an error or a result.
   * @param initialValue Optional initial fetch, to kick things off.
   */
  def apply(props: Props) = sfc(props)

  val sfc = SFC1[Props] { props =>
    import props._
    React.useDebugValue(Name)
    val (fstate, setFState) = React.useStateStrictDirect[FetchState](NotRequested)
    // setFState is guaranteed stable
    val makeRequest = React.useCallback[F[P], Unit](fstate.asJsAny){f =>
      if(fstate != Fetching) {
        setFState(Fetching)
        props.run(f){ _ match {
          case Right(item) => setFState(Success(item))
          case Left(e) => setFState(Error(e))
        }}}
    }
    children(fstate, makeRequest)
  }
}

The component can be inconvenient to use. I eventually defined a subclass of Fetcher that curried the runner argument to avoid passing it in every call. The signature narrows a P to a T, but the narrowing is not needed: you could manipulate the final value in your application code instead of in the runner callback, so T can be removed. I’m not sure why I did that in the initial version. The code also does not allow you to show a child component with old data while informing the child that the data will change soon, unless you cache the data in the child.

The component is based on the scalajs-reaction library, a scala.js library that relies exclusively on react hooks. You could write your own scala.js hooks facade in about 50 lines of code.
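
For a sense of what such a facade involves, here is a minimal, hypothetical sketch that binds a couple of hooks directly (the ReactRaw name is mine; the real scalajs-reaction facade is richer and adds type-safe wrappers):

import scala.scalajs.js
import scala.scalajs.js.annotation.JSImport

@js.native
@JSImport("react", JSImport.Namespace)
object ReactRaw extends js.Object {
  // useState returns a 2-element array: [state, setState]. Modeling the setter
  // as a simple function is a simplification; the real setState also accepts an updater.
  def useState[T](initial: T): js.Tuple2[T, js.Function1[T, Unit]] = js.native
  // useCallback memoizes a function against a dependency array.
  def useCallback[A, B](cb: js.Function1[A, B], deps: js.Array[js.Any]): js.Function1[A, B] = js.native
  def useDebugValue(value: String): Unit = js.native
}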

Fetcher Hooks

A react hook does not return a react node. A hook returns data and functions that allow the caller, a function component, to manage rendering.

The hook version is nearly identical to the component version, minus the drawing code. The hook is expressed as a scala function inside the class. However, the hook function can only be called inside a react function component, and there is no clever programming trick that restricts the function so it can only be called there.

To use the hook, instantiate a FetcherHook then call the class’s useFetcher in your function component. All instances will use the same effect runner. If we added a mutable, application-level cache to the class as a constructor parameter, all instances could use the same cache. Of course, FP-style programming would prefer an immutable data structure, so passing in a mutable cache directly would not reflect good FP practice.

Ideally, we would push the result type P into the actual hook function so it could return different types of data. A nice feature of the hook is that we cannot use FetchState instances across different instances of FetcherHook. FetchState is a path dependent type, so MyFetcherHook1.Success is not the same as MyFetcherHook2.Success.

class FetcherHook[F[_], P, E](Name: String, runner: F[P] => (Either[E, P] => Unit) => Unit) {
  /** Load state passed to a child. */
  sealed trait FetchState

  /** Load was successful, hold item. */
  case class Success(item: P) extends FetchState

  /** Load resulted in an error. */
  case class Error(content: E) extends FetchState

  /** Loading still in progress. */
  case object Fetching extends FetchState

  /** Initial state until a fetch request is made. */
  case object NotRequested extends FetchState

  /** Initiate a fetch for P. */
  type FetchCallback = F[P] => Unit

  def useFetcher() = {
    React.useDebugValue(Name)
    val (fstate, setFState) = React.useStateStrictDirect[FetchState](NotRequested)
    // setFState is guaranteed stable
    val makeRequest = React.useCallback[F[P], Unit](fstate.asJsAny){f =>
      if(fstate != Fetching) {
        setFState(Fetching)
        runner(f){ _ match {
          case Right(item) => setFState(Success(item))
          case Left(e) => setFState(Error(e))
        }}}
    }
    (fstate, makeRequest)
  }
}
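
As a quick illustration of the path dependent FetchState point, here is a small sketch; the instances and the do-nothing runner are placeholders:

import cats.effect.IO

// A do-nothing runner for illustration; a real one would actually run the effect.
def dummyRunner[T]: IO[T] => (Either[Throwable, T] => Unit) => Unit = _ => _ => ()

val UsersFetcher  = new FetcherHook[IO, String, Throwable]("Users", dummyRunner)
val OrdersFetcher = new FetcherHook[IO, Int, Throwable]("Orders", dummyRunner)

// UsersFetcher.FetchState and OrdersFetcher.FetchState are distinct, path dependent types:
// val bad: UsersFetcher.FetchState = OrdersFetcher.Fetching // does not compile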

Here’s the hook in action:

val Fetcher = new FetcherHook[IO, MyResponse, Throwable]("MyFetcher", CatsFetcher.runner[IO, MyResponse])
import Fetcher._

object MyComponent { 
  trait Props extends js.Object { }
  val sfc = SFC1[Props]{ props =>
    val (fstate, doFetch) = useFetcher()
    fstate match { 
      case Success(item) => div(s"item fetched: $item", button(...on click call doFetch...))
      case Error(t) => div(s"Error $t")
      // ...other cases: Fetching, NotRequested...
    }
  }
}

The cats runner is a bit complicated because it uses IO as an intermediary effect. zio is a bit cleaner.

object CatsFetcher {

  /** Run an `F` using cats-effect keeping `F` general. */
  def runner[F[_]: Effect, T]: F[T] => (Either[Throwable, T] => Unit) => Unit = {
    val F = Effect[F]
    f =>
      eicb =>
        F.runAsync(f) { asyncei =>
            F.toIO(F.pure[Unit](eicb(asyncei)))
          }
          .unsafeRunSync()
  }
}

We could use CatsFetcher with zio via the zio cats interop library, but it is cleaner to use zio directly:

object ZioFetcher {
  def runner[T](rts: Runtime[Any]): Task[T] => (Either[Throwable, T] => Unit) => Unit =
    f => eicb => rts.unsafeRunAsync(f)(e => eicb(e.toEither))
}
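
For completeness, here is a minimal sketch of wiring the zio runner into the hook; MyResponse is a placeholder domain type and the runtime is created the same way as later in this post:

import zio.{ DefaultRuntime, Task }

val rts = new DefaultRuntime {}
val Fetcher = new FetcherHook[Task, MyResponse, Throwable]("MyZioFetcher", ZioFetcher.runner[MyResponse](rts))
// inside a function component: val (fstate, doFetch) = Fetcher.useFetcher()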

Unlike JVM applications, there is no main entry point, so we cannot return an effect to an “App” subclass that runs the “application” effect on our behalf. In the browser, the effect runtime must run the effect based on the react component’s lifecycle. This leads to many “unsafeRunAsync” method calls throughout the code.

More Goodness

The fetcher above only remembers one piece of data at a time, but you may want to use a cache so it serves up multiple dependencies to the same component. In the example above, you would need three separate useFetcher() calls in your component if there were three data dependencies.

A unified cache using a data key would reduce the need for three useFetcher() calls and could also support returning stale data immediately, along with a “request-in-flight” flag indicating that a request for fresh data has been made. relay’s support for these features (https://relay.dev/docs/en/experimental/a-guided-tour-of-relay#reusing-cached-data-for-render) suggests that they are useful. Relay defines a customizable “Environment”, say with caching. The Environment is the first argument to most relay API functions, suggesting that a Reader/State monad formulation might also be useful! As a side note, the “store” API looks like the old ADO.NET interface for navigating master-detail relationships…what is old is new.

Existing fetcher libs like https://github.com/bghveding/use-fetch or https://github.com/jamesplease/react-request could use https://github.com/jamesplease/fetch-dedupe to dedupe requests. To use dedupe, you would just call fetchDedupe instead of fetch directly. That’s insanely easy, unless the fetch call is inside the library and you cannot customize it. These libraries are specific to the browser, but in some cases they can be run on the server for server-side rendering. Libraries vary in configurability, e.g., using a shared cache across the application or narrowing the cache scope to a few pages. A browser’s Response object is not idempotent, so there is also a bit of finesse needed, e.g., you cannot call response.json() more than once.

With zio, we could call .memoize (or .cached) on our effect and it will memoize the response for us without any further cache/fetch configuration or coding. You would need to add some boilerplate to your component to store the “value” and reuse it when appropriate. .memoize is not quite right though, as you would not want to memoize an error value, but in spirit, using effects directly could be useful.
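
Here is a minimal sketch of what .memoize gives you; the fetch effect itself is a placeholder:

import zio.Task

val fetchOnce: Task[String] = ??? // any fetch effect, e.g. an HTTP call wrapped in Task

val twice: Task[(String, String)] = for {
  cached <- fetchOnce.memoize // an effect that yields a memoizing version of fetchOnce
  a      <- cached            // the first evaluation actually runs the fetch
  b      <- cached            // subsequent evaluations reuse the stored result
} yield (a, b)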

We could use zio to implement “return a result immediately but fetch in the background” semantics. Perhaps the only value the hook provides is the ability to force a redraw as a fetch progresses through its states. The hook also transforms the return value into either an error value or a domain value. With zio, we can explicitly expose a customized error channel, so in theory we could drop the Either.

Let’s break down our problem. Assuming the runner is separate, the hook provides:

  1. Storage for the return value and fetch status.
  2. Fine grained concepts of fetch “state” which changes as the fetch lifecycle advances, e.g., NotRequested -> Fetching -> Success.
  3. Component notification when that state changes and it needs to redraw.

We need storage for (1). We need storage for (2). We need something react’ish for (3). For (3), given the react API, we can use setState to set the state and force a rerender; we cannot avoid this aspect. We might as well store the state of the fetch in a react state hook so it is convenient to access. We may also choose to store the fetch state elsewhere so we can manage the fetching lifecycle more cleverly, but we will always need at least one hook. To force a rerender in react, you need to change state, and state storage requires a “state” hook. So (3) will come from a react state hook. But (1) and (2) are open to change.

zio

There are some good scala FP libraries that could help with (1) and (2). fetch in particular was designed to abstract the fetching process, regardless of protocol, and it dedupes fetches and caches results. It has other features such as automatic batching. In a way, fetch is a spiritual cousin to relay. However, fetch or relay may be too much abstraction for your app, or it may not be appropriate for your app, e.g., you are not using graphql. Also, fetch does not provide all of the capabilities you may need; witness the capabilities of the fetch libraries mentioned in the first section. Perhaps we can use some zio features to incrementally enhance fetching. Ideally, the solution can be used anywhere, the browser or the server.

For (1) and (2) we can use some zio capabilities, similar to what is described at https://zio.dev/docs/overview/overview_testing_effects, to provide a cache to an effect in the zio “environment.” The scope of the cache can be the entire application or a few web pages, based on the environment value. Instead of an “Any” environment, we use an environment that has a “Cache” service. In the formulation below, the scope of the cache can be controlled by using an environment shared among all components, or you could create a new environment with a fresh cache specific to a page.

For illustration purposes, since not everyone uses scala.js, we make the service work on the JVM or in the browser by using a “Ref” data structure. A “Ref” controls access to its content and is safe to use in a multi-threaded environment. Hence, the version below is more complicated because it is built for both environments. I’ll publish the much shorter and simpler browser-only version in another blog or a github repo.
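
If you have not used Ref before, here is the basic shape; every interaction with it is itself an effect:

import zio.{ Ref, UIO }

// Create a Ref, update it atomically, and read it back; nothing runs until
// a runtime executes the effect.
val bumped: UIO[Int] = for {
  ref <- Ref.make(0)
  _   <- ref.update(_ + 1)
  n   <- ref.get
} yield n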

Admittedly, if we were just sticking with HTTP calls, we could use one of the packages mentioned above to easily employ a cache. To be fair, assuming the HTTP response contains the right headers, the browser will automatically cache the content. However, even if the browser caches the response, we still incur json parsing overhead. Caching will increase memory usage but decrease processing needs, the classic cache trade-off. Let’s ignore all of these design dimensions for the moment and just set up a zio based caching capability specific to a “fetching” concept.

FP style programming uses immutable data and effects-as-values. Once you decide on the FP style, you are immediately thrust into a world where you write your logic inside the effect. On the JVM, zio has the zio.App class you use for your main class. zio.App expresses a main function that expects an effect as its return value. In the browser, there is no “main.” We need to wrap the react “render to DOM” call in an effect in order to provision a global cache data structure, since the cache itself lives inside an effect like “Ref”.

There are extensive comments in the code below so you can read the code and comments to understand how it is designed and why. I know that excessive comments can make code hard to read but I thought this was the best way to go.

You should also notice the nearly complete lack of type annotations except on function declarations. zio is designed to work with scala’s type system. zio type inference is good and type annotations are seldom needed. Since it is just the zio effect, there are no F[_] type parameters.

I had nearly zero/zilch type inference issues, and the compiler always clearly expressed type problems, so writing my code was quite easy once I understood the zio API.

We define the caching system as a “module” using the pattern described on the zio site. We also rename the “fetch” concept to “data dependency”. There are various equivalent ways to declare the GADT for the data dependency information, so if you don’t like the way DataDependency is structured, you can change it. All of this code would live in a library that you never see if you are just consuming the capability.

/** Fetch is really just asynchronous access to a data dependency. Depending on
 * how this module is used, you may not receive some of these states.
 */
sealed trait DataDependency[+T]
/** Data is available for use. Includes an indicator of whether a request is in
 * flight (and hence the data may be updated soon) and whether the data came from
 * the cache (and hence may be old).
 */
case class Available[+T](data: T, inflight: Boolean, cache: Boolean) extends DataDependency[T]
/** Error occurred. Modeled here as a string. */
case class Error(m: String) extends DataDependency[Nothing]
/** Request for fresh data is in flight and there is no data available. */
case object InFlight extends DataDependency[Nothing]
/** No data has been requested. Use this as your initial value in react state. */
case object NotRequested extends DataDependency[Nothing]

/** Our fetching system that has a fetching "service". You mix this into the zio
  * environment so you can use it inside your effects via the "reader monad"
  * pattern.
 */
trait FetchingSystem {
  val fetching: FetchingSystem.Service
}

object FetchingSystem {
  // Service definition--the API. All of the methods return an effect because
  // the underlying resource's methods, the Ref[Data], return effects. This
  // is how a Ref protects your data in a concurrent environment.
  trait Service {
    // Last successful fetch
    def clearData(key: String): UIO[Unit]
    def put(key: String, last: Available[_]): UIO[Unit]
    def get[T](key: String): UIO[Option[Available[T]]]  
    // Current fetch cycle
    def status[T](key: String): UIO[Option[DataDependency[T]]]
    def setStatus[T](key: String, status: DataDependency[T]): UIO[Unit]
  }

  /** Default Service implementation based on a cache embedded in a Ref. This is
   * our concrete implementation of the service "interface." In the browser,
   * there is no concurrency so a Ref is overkill. But using a Ref makes this
   * code useful on the JVM. One impact of using a Ref, is that we must
   * provision service instances inside an effect---that makes the usage a bit
   * harder to see compared to, say, Clock in the zio standard package. zio's
   * Clock is a strict value.
   */
  case class DefaultService(state: zio.Ref[Data]) extends Service {
    def clearData(key: String) = 
	    state.update(data => data.copy(lastSuccessful = data.lastSuccessful - key)).unit
    def put(key: String, last: Available[_]) = 
	    state.update(data => data.copy(lastSuccessful = data.lastSuccessful.updated(key, last))).unit
    def get[T](key: String) = 
	    state.get.map(_.lastSuccessful.get(key).map(_.asInstanceOf[Available[T]]))
    def status[T](key: String) = 
	    state.get.map(_.current.get(key).map(_.asInstanceOf[DataDependency[T]]))
    def setStatus[T](key: String, status: DataDependency[T]) = 
	    state.update(data => data.copy(current = data.current.updated(key, status))).unit
  }

  /** Create a FetchingSystem. Due to the Ref in the service, the system must
   * be created inside an effect. However, since it is wrapped in an
   * effect it is less useful on its own than the "Live" instances in the zio
   * package. While you could use this method to create a system, the data will
   * not be shared since the Data value is immutable. Generally, you
   * would use `fromRef` to create a service instance and add that to your
   * environment object, because your environment object probably has other
   * services included in it. With this method, you could create your own
   * FetchingSystem and use it directly as a standalone object.
   */
  def makeDefault(data: Data) = 
    zio.Ref.make(data).map {s =>
      new FetchingSystem {
        val fetching = fromRef(s)
      }
    }

  /** Given a Ref of Data, make a service. Use the same Ref to share the data
   * cache among service instances. We could also instantiate the service once
   * and re-use that e.g. use the result of this function call to create the
   * environment.
   */
  def fromRef(ref: Ref[Data]) = DefaultService(ref)

  //
  // A few methods used directly in for-comprehensions for convenience. This is
  // the only boilerplate we have. `ZIO.accessM(...)` creates an effect that gives
  // you the environment, then you access the service API from there. In a
  // for-comprehension, the RHS is an effect, so you use these helpers to access
  // the cache contents or stash results. This is the standard approach when
  // using the Reader monad.
  //
  // We vary the amount of type specification in the defs below to
  // illustrate how well type inference is working...it's working well, we
  // really do not need to specify much at all.
  //
  // The getters/setters are a lot like using lenses with immutable data
  // structures. They are essentially getters/setters that navigate
  // effectful/immutable data structures. You almost always see getters/setters
  // like this in effect/immutable FP programs.
  //
  // For these first few, we need to specify the environment we are accessing on
  // `.accessM[]`. Type inference is good and we do not need to specify the return type.
  def clearData(key: String) = ZIO.accessM[FetchingSystem](_.fetching.clearData(key))
  def put(key: String, last: Available[_]) = ZIO.accessM[FetchingSystem](_.fetching.put(key, last))
  def get[T](key: String) = ZIO.accessM[FetchingSystem](_.fetching.get[T](key))
  // For these we spell out the full ZIO signature, using ZIO and RIO to show the similarities
  def status[T](key: String): ZIO[FetchingSystem,Nothing,Option[DataDependency[T]]] = ZIO.accessM(_.fetching.status[T](key))
  def setStatus[T](key: String, status: DataDependency[T]): RIO[FetchingSystem,Unit] = ZIO.accessM(_.fetching.setStatus[T](key, status))

  /** Our protected resource in the environment. */
  case class Data(
    /** Last available data. Always an Available; the inflight & cache flags are
      * irrelevant here, but we keep them in case we can return the same instance
      * to react, which does not handle object identity very well.
     */
    lastSuccessful: Map[String, Available[_]] = Map.empty,
    /** State of "current" fetch processing. */
    current: Map[String, DataDependency[_]] = Map.empty
  )
}
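
Before wiring this into a react application, here is a small standalone sketch of using the module directly (the key and value are arbitrary):

// Create a system inside an effect, stash a value, and read it back.
val demo: UIO[Option[Available[String]]] = for {
  sys <- FetchingSystem.makeDefault(FetchingSystem.Data())
  _   <- sys.fetching.put("blah", Available("hah", inflight = false, cache = true))
  v   <- sys.fetching.get[String]("blah")
} yield v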

There are a couple of casts in there that we could get rid of at the expense of making the API more complex. We’ll skip fixing the casts for the moment. Next, we need to define what our environment will look like:

object AppRuntime {
  // Our minimal environment for running the fetch. We really don't need the
  // Clock actually but I want to create a cache with expiring entries so I have
  // included it for now even though the cache is really just a hash map. This
  // also forces the `Env` class to be created that includes both services.
  type AppEnv = FetchingSystem with clock.Clock

  // Our environment needs to have the fetching member. Use the "Live" trait
  // in the zio package to mix in a pre-existing instance of the Clock
  // service. zio's default instances make creating environments much
  // easier. We `.provide` an instance to an effect whose R is AppEnv. Once
  // provided, the effect environment is `Any` and it can be run by the default
  // RTS. You almost always use an RTS with the Any environment to run effects
  // that have had their real environment dependency "provided" away.
  case class Env(fetching: FetchingSystem.Service) extends FetchingSystem with clock.Clock.Live

  // Wrapped data. The "Data" instance is actually wrapped twice. Once in an
  // outer effect and once with a Ref. To share the Ref everywhere so that the
  // Data is shared in the application, the entire application must run inside
  // the effect as a callback.
  //
  // When you want to share an immutable data structure, such as Data, you need
  // indirection. You could use a mutable data cache. In the browser, a mutable
  // data structure works fine because there is only one thread. So in the
  // browser, you do not need a Ref and can avoid the extra indirection.
  // However, on the JVM this would not work in a multi-threaded
  // application. Hence the price of sharing immutable data structures is
  // indirection. The price of concurrency is using the wrapped value inside an
  // effect. Here we show pre-set values for key "blah".
  val refInEffect = zio.Ref.make(FetchingSystem.Data(lastSuccessful =
    Map("blah" -> Available[String]("hah", false, true))))

  /** Default runtime system. Use this to run all your effects after you have
   * `.provided` an environment to remove the environment dependency. In the
   * browser you must use the asynchronous version:
   * `rts.unsafeRunAsync(effect)(e => ...)`.
   */
  val rts = new DefaultRuntime {}
}

In order to use the cache system, we need to write some code that structures the processing and implements our logic. The standard approach with effects is to use a for-comprehension. A for-comprehension requires that all of the RHS values have a compatible monad. In scala, this means they must provide map, flatMap and withFilter over the same data type; you can’t mix monad types in the same for-comprehension, although you can break it up into multiple for-comprehensions.

For us, the RHS of the for-comprehension will be ZIO monads.

val program = for {
  //          The RHS needs to have the same type.
  //          or you have to break up the for-comprehension.
  aValue <- ...a zio effect...
  _ <-      ...another zio effect...
  bValue <- ...a zio effect... 
} yield ()

The trick to using zio for-comprehensions is to define ergonomic methods for use on the RHS: they access the environment for services (like dependency injection) and call the service methods to perform the “caching/fetching” logic. If we are using zio effects, the environment, and hence the ZIO effect type, must match so that the for-comprehension will compile. If we have effects with different environment needs, we will need to use .provide(env) inside the for-comprehension at different points to standardize the environment types and produce RHS effect values of the same type. You should recognize that mixing zio.Task and zio.ZIO[MyEnv,...,...] types can require a bit of a nuanced dance to ensure you have the correct environment dependencies.
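
Here is a minimal sketch of that dance, using the module defined above; the key and the plain Task are placeholders:

import zio.Task

def plainTask: Task[Int] = Task.succeed(1)

// get(...) needs a FetchingSystem; provide it so that RHS becomes a plain, Any-environment effect.
def program(env: FetchingSystem): Task[Int] = for {
  a       <- plainTask
  lastOpt <- FetchingSystem.get[Int]("some-key").provide(env)
  b       =  lastOpt.fold(0)(_.data)
} yield a + b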

I will also make the disclaimer that the cache described below is insufficient for general use. We would want to bake in content expiration and other nifty cache features. There are some js and scala general purpose caches that provide this, and we could use one of those instead of a simple Map.

In addition, we have customized the cache contents to the lifecycle of a remote call, and we have provided a “logging” capability that lets us report state changes. The “logging” conveniently allows us to tie into the react component lifecycle. We would probably also want a way to turn off caching and to clear the cache if the routing changes, etc.; see the relay documentation for more ideas. The cache below just shows the basics of using a cache to dedupe requests and improve the user experience. The code would be bundled into a library, so a consumer would never need to see it.

object Requests {
  import AppRuntime._
  import FetchingSystem._

  /** Process an effectful request using the fetching environment. Cache behaviour
   * is determined by CachePolicy. Cache invalidation is a classically hard
   * problem :-) so I'm sure the logic below is not perfect. If older data needs
   * to be returned, the data from the last successful fetch is returned, even if
   * there were failures between that fetch and the current one. The resulting
   * effect returns the final status, but the real value is obtaining realtime
   * "status" updates via the `log` function to inform others.
   *
   * @param effect Effect to run. This encodes the application specific "fetch" concept.
   * @param key Identifier for tagging results.
   * @param log Async notification about status changes, similar in concept to a "logger."
   * @param policy Cache policy to use. Can be set per "run."
   */
  def withCache[T](
    effect: Task[T],
    key: String,
    log: DataDependency[T] => Task[Unit],
    policy: CachePolicy = CachePolicy.NetworkOnly,
  ): ZIO[FetchingSystem, Throwable, DataDependency[T]] =
    // The return type was set explicitly, so `r` is inferred as FetchingSystem.
    // Otherwise we would need `ZIO.accessM{(r: FetchingSystem) => ` so it knows
    // the R type. We could also skip `.accessM` and use
    // `ZIO.environment[FetchingSystem]` in the for-comprehension to obtain the
    // environment, but that would make the syntax to define `networkFetch` messier.
    ZIO.accessM{ r =>
      // If a network fetch is needed, use this effect. It returns the effect/fetch
      // result. We have to provide the environment to some effects to ensure
      // that we return a simple Task with an Any environment. Sometimes we have
      // to skip calling "log" because we do not want to log again until a result
      // is returned.
      def networkFetch(skipLog: Boolean) = {
        // Log, update cache status.
        val update = (if(skipLog)Task.unit else log(InFlight)) *> setStatus(key, InFlight)
        // Do the update and then run the effect. Save the output of the effect.
        (update.provide(r) *> effect
          // Process data value or error. We use foldM because the cache system
          // uses effects and we need to call those functions inside the fold.
          .foldM(
            e => Task.succeed(Error(e.getMessage)),
            // Always put the "last" successful result into the lastSuccessful cache.
            d => { 
              val s = Available(d, false, false)
              put(key, s).provide(r) *> Task.succeed(s)
            }
          )
        )
      }
      //
      // Cache and fetching logic. Using a for-comprehension is a lot like using
      // async/await in other languages in that it linearizes the code vs
      // expressing it with messier flatMap/map nesting.
      //
      for {
        // Get the last successful result and the current status
        lastopt <- get(key)
        statusopt <- status(key)
        // Use current values and policy to decide next actions.  result will be
        // just the status e.g. a DataDependency instance.  In my defense, I'm trying to
        // keep the same DataDependency[] instance below but I'm sure my logic is not
        // great.
        result <- (lastopt, statusopt, policy) match {
          case (_, _, CachePolicy.NetworkOnly) =>
            // Always fetch.
            networkFetch(false)
          case (_, Some(x@Available(s, i, c)), CachePolicy.CacheFirst) =>
            // Successful fetch had occurred, no need to hit network. Return the
            // same DataDependency instance if possible otherwise update the flags.
            if(!i && c) Task.succeed(x)
            else Task.succeed(x.copy(inflight = false, cache = true))
          case (Some(last), Some(InFlight), CachePolicy.CacheFirst) =>
            // Raw data exists and request is in flight. No need to hit network.
            Task(last.copy(inflight = true, cache = true))
          case (Some(last), _, CachePolicy.CacheFirst) =>
            Task(last.copy(inflight = false, cache = true))
          case (_, Some(x@Available(s, i, c)), CachePolicy.CacheAndNetwork) =>
            // Last fetch was successful, return it but initiate a fetch.
            networkFetch(true) *> Task.succeed(x)
          case (Some(last), Some(InFlight), CachePolicy.CacheAndNetwork) =>
            // In flight request but data is available.
            if(last.inflight && last.cache) networkFetch(true) *> Task.succeed(last)
            else networkFetch(true) *> Task.succeed(last.copy(inflight = true, cache = true))
          case (Some(last), _, CachePolicy.CacheAndNetwork) =>
            // Data exists from the last successful fetch, return it but
            // initiate a fetch.
            if(last.cache) networkFetch(true) *> Task.succeed(last.copy(inflight = false))
            else networkFetch(true) *> Task.succeed(last)
          case _ =>
            // I probably missed some logic above. The default is a pure network
            // fetch.
            networkFetch(false)
        }
        // Log and stash the final result.
        _ <- log(result) *> setStatus(key, result)
      } yield result
    }

  // An environment with the shared data cache. However, if we use this value
  // repeatedly, we create a new cache each time, since this is just an effect
  // value and the Ref[Data] is created inside it. The cache can only be shared
  // with consumers running within the same effect instance. So this is really
  // only useful at the top of a "sharing" effect.
  val appenv = refInEffect.map(FetchingSystem.fromRef(_)).map(Env(_))

  /** Given a Ref directly, not in an effect, make a zio environment. */
  def mkEnv(dataRef: Ref[FetchingSystem.Data]) = Env(FetchingSystem.fromRef(dataRef))
}

That’s the core of the system. But now we need to set it up for use in the browser. First, we need to define a couple of helpers to work with the browser fetch API:

  import scala.scalajs.js
  import org.scalajs.dom.experimental._
  import scala.util.chaining._
  import scala.scalajs.runtime.wrapJavaScriptException

  def jsPromiseToZIO[T <: js.Any](p: js.Thenable[T]) =
    Task.effectAsync[T]{ cb =>
      p.`then`[Unit](
        { (t:T) => cb(Task.succeed(t))},
        js.defined{ (e: scala.Any) => cb(Task.fail(wrapJavaScriptException(e))) }
      )
    }

  def zioGet[T <: js.Any](url: String) =
    Task.effectSuspendTotal(
      (Fetch.fetch(url) pipe jsPromiseToZIO)
        .flatMap(_.json() pipe jsPromiseToZIO)
        .map(_.asInstanceOf[T]))

Using the zio “fetching” environment

To use a global cache, we need to wrap our entire program inside the effect that holds the cache and unwrap the value from the effect. Essentially, our “main” program becomes a giant callback inside the Ref’s effect via the standard scala .map method.

Once we are inside the effect and can access the Ref, we create a react context to allow other components to access the Ref. Each component wishing to use the application-wide cache needs to tap into the context, obtain the Ref[Data] instance, create an environment, then run its effects using that environment. In the example below, the shared cache is put into the context, but the environment itself could also be placed directly into the context; it is an application dependent decision.

Here’s the wrapped “main” entry point. You can see that we perform the top level rendering into the dom inside the effect, after running that effect with the default runtime system.

// main.scala

/** Context for application. Note that it is the raw Ref. */
case class ZioContext(cache: zio.Ref[FetchingSystem.Data])

object Main {
  val Context = ttg.react.context.create[ZioContext](null)

  @JSExportTopLevel("App")
  def App(options: js.Object): Unit = {
    AppRuntime.rts.unsafeRunAsync_(AppRuntime.refInEffect.map { ref =>
      reactdom.renderToElementWithId(
        Context.provider(ZioContext(ref))(
          Application()
        ),
        targetContainerElement
      )
    })
  }
}

Let’s define a component that accesses the context, creates an environment and runs the data-fetching effects using the “withCache” method.

// Much component detail elided
import AppRuntime._
import Requests._
type RT = api.APIArrayResponse[DomainObject]
object MyComponent { 
  val Name = "MyComponent"
  // the fetch is just a value in this case, it may need to move inside
  // if it was dependent on the component's state
  val getlist = zioGet[RT](s"${BuildConstants.endpoint}/domainobjects")
  // props for the component
  trait Props extends js.Object { ... }
  // the functional component
  val sfc = SFC1[Props]{ props =>
    // The "recipe" below could of course be formulated as a hook so you don't
    // need to repeat it every time.
    // Access the context
    val zctx = React.useContext[ZioContext](Main.Context)
    // Create environment. We could memoize this as well, but it's a lightweight object for this app.
    // The environment could also be placed into the context. We could also, much more easily,
    // place all of this into a hook and use that directly :-).
    val env = mkEnv(zctx.cache)
    // We still need to have a place to set state and force a redraw
    val (fstate, setFetchState) = React.useStateStrictDirect[DataDependency[RT]](NotRequested)
    // Stable values with little overhead to create, no need to memoize them.
    // If we did not want the cache, we would skip bundling.
    val cached = withCache[RT](getlist, Name, status => Task(setFetchState(status)),
	    CachePolicy.CacheAndNetwork).provide(env)
    // On mount, fetch some data
    React.useEffectMounting{() =>
      rts.unsafeRunAsync(cached)(exit => println(s"mounting exit $exit value not used"))
      // could just call
      // rts.unsafeRunAsync_(cached)
    }
    fstate match {
      case Available(data, inflight, cache) => 
        // could add a button that calls rts.unsafeRunAsync again.
        if(inflight) PageWithDataIndicateFetching(data) else PageWithData(data) 
      // depending on our cache policy we may want a "loading" display
      case InFlight => ShowLoading()
      case Error(e) => ShowError(e)
      case _ => PageWithoutData()
    }
 }}

The “log” function is used to tie into the react state machinery. As the “status” of the fetch changes, the component is notified via a state change and react redraws the component. You will notice that we now have two hook calls to use the fetching system: one for the context and one for state. An important point is that all of the asynchronous machinery is independent of the actual function component. In order for the effect to notify the drawing machinery, a simple “log” function is used to set the react state, which then forces a re-render. It’s an important decoupling design: the caller chooses how to inform the component to render. We build up “features” of the asynchronous call using zio combinators, which are composable. We could also formulate the entire approach as a hook to make it much easier to use. I do not go through the above process in every component; I just use a hook, passing in the key and effect.

Since the caching function is just a higher-level service, we could also define:

trait DataManagement {
  val dm: DataManagement.Service
}

object DataManagement {

  trait Service {
    def cache[T](key: String, policy: CachePolicy)(effect: => Task[T]): ZIO[FetchingSystem, Throwable, DataDependency[T]]
  }
 // impl details
}

and use “withCache” as a service via the data management “service” available in the environment. There are actually multiple, orthogonal services we could use to run queries that are not used in the above formulation.
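
A hypothetical accessor for that service, in the same style as the FetchingSystem helpers above, might look like:

object DataManagementHelpers {
  // The inner effect still needs a FetchingSystem, so the combined environment
  // is DataManagement with FetchingSystem.
  def cache[T](key: String, policy: CachePolicy)(effect: => Task[T]) =
    ZIO.accessM[DataManagement with FetchingSystem](_.dm.cache(key, policy)(effect))
}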

Continuing with the prior formulation, here’s how the core part would look using a hook (the hook implementation is not shown). We would just call “run” to run the fetch.

    val (fstate, run, zenv) = hooks.useZio[ResultType](
      s"""${name}_entity_home_view_${state.searchText.getOrElse("na")}""",
      fetchItems(state.searchText.toOption),
      rts = rts
    )

Of course, accessing the rts like this is a bit crude; we would really just put the entire environment into the react context and create a service API that accesses the “client” and the “rts” as needed. We could also use zio environments with “query parameters” baked in so that there is no need to call fetchItems(state.searchText.toOption) like above, and instead access a query specific environment to obtain the query parameters.

Wrapping Up

The js world is complicated because each library tends to provide only a sliver of effects management. For example, if you look at this package for debouncing js Promises (https://github.com/slorber/awesome-debounce-promise), you realize that to get other effect features you need yet another library, which at best only composes through react components. At least with zio, it is all in one place. And you might use https://github.com/axios/axios just to create the initial effect.

In the formulation above, we could get rid of the context usage simply by evaluating the main application like before, but setting a global, mutable variable and accessing that instead of setting up a context. In the single threaded js world, that is OK to do. You would not use that approach in the JVM world.
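
Here is a minimal sketch of that variation; GlobalCache is my own name and the render call mirrors the main shown earlier (browser only):

object GlobalCache {
  // Single threaded js: a plain var is fine here; do not do this on the JVM.
  var dataRef: zio.Ref[FetchingSystem.Data] = _
}

object MainNoContext {
  @JSExportTopLevel("AppNoContext")
  def App(options: js.Object): Unit =
    AppRuntime.rts.unsafeRunAsync_(AppRuntime.refInEffect.map { ref =>
      GlobalCache.dataRef = ref
      reactdom.renderToElementWithId(Application(), targetContainerElement)
    })
}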

Fetching is changing in the react world. React “suspense” helps create a smoother data interface. It uses a cache behind the scenes to provide storage for asynchronous operations. You pragmatically need an application wide cache when using Suspense in order to store a result even if the component is not currently rendered. FB published an application-wide cache js package for use with Suspense which throws a Promise when the data is accessed but not available. I’ve used a variant of the above formulation with the new react suspense mechanism to delay drawing until data is ready and to push display logic up to the parent instead of down into the tree, while still allowing the data dependencies to live in the child.

However, it is clear that the entire idea around Suspense fetching is quite a small improvement in rendering. You can just start fetching the data as soon as possible, prior to rendering the consuming component if possible, and use one of a few different mechanisms to conditionally render when the data is ready. You do not need to throw a js.Promise or use a specific component (Suspense) to do that.

The above fetcher formulation is more complicated than just loading a package and using “fetchWithCache” instead of “fetch”. It is also a lot more complicated than the fetcher hook; you had to alter your “main” program! From a total lines-of-code count, it is less than or about the same as other packages. The level of abstraction is significantly higher, and processing overhead may or may not be higher. However, part of the complexity arises from the desire to use the same mechanism on both the JVM and the browser. I could have used a mutable cache and made this much smaller. The approach above does allow you to implement a consistent, application wide effects management system; you could instrument/monitor all of your calls using this approach.

With zio, I can also do a bunch of other tricks using its wide range of combinators. Given that fetching and other user experience enhancements require data fetching and asynchronous programming, the effects model in zio is helpful.

The zio approach above will work server side, browser side and across communication protocols, not just HTTP. In today’s world, HTTP-based web services are becoming less common as other protocols are significantly more efficient, even with compression. This is especially true in places where there are lots of people, cheaper/older phones and less powerful infrastructure.

There are other ergonomic factors involved in creating a good UI. I covered some of these in another blog, e.g., how to add delays to the fetch to improve the user experience and avoid display flipping. I will not cover them again here with zio, as they are easy to implement.

That’s it!

Note: I’ll publish a more production-ready version of all of this for scala.js browser work in an upcoming blog. Once you realize the pattern is no different than what you have seen before, and you get used to the Reader monad and effects (and simplify things since you are in the browser), it is amazing how nicely it all comes together. We can also use more features of zio to simplify the code while still retaining flexibility and type correctness. We do not actually leverage zio as much as we could in the above formulation. We can be much more clever.
