scala composition and extension patterns

I thought I would create a list of the compositional and extensible approaches in scala. Some of these can be done in java and other languages, but they may require more work or more source complexity than the benefit is worth.

I reviewed a number of books and articles and tried to cull out the patterns. Some of these patterns require you to modify existing code, which may decrease their value when that code is frozen or owned by a third party. Others allow composition and extensibility without touching existing code. Some of these approaches are harder in scala than in, say, groovy, because scala is statically typed, so you need to make sure the benefits of static typing outweigh the cost of the added complexity.
  • Traditional inheritance composition: This is the standard approach to ensure that a subclass has the behavior, and in some cases the properties, that you want to mix in. In java, a method specification can be a clue that it's really a property dependency when the methods are getters and setters. In scala, this is roughly equivalent to mixin composition, although traits offer some benefits, such as fine-grained slicing of concerns and implementation code in the traits themselves, that make this pattern easier to use in scala. You can use this method with and without Spring.
  • Traditional run-time composition: This is the classic approach: a service interface is declared as a trait, then at runtime a field on an object is set to a service instance. This is equivalent to programming to interfaces. You can use constructor or property injection from a container like Spring to ensure that the wiring occurs; @Autowired(required=true) comes in handy. The other methods below do not need the annotation and instead use the compiler to ensure that the service dependency is made available to the object. You can use this method with and without Spring.
  • Cake pattern and self types: A self type allows you to declare that a concrete class must, at some point, mix in certain values or methods, and that the declaring trait can rely on that assumption to use those values and methods; it indicates to the compiler a dependency that must be filled. The cake pattern builds on this: you create your software components as traits, declare another trait that uses self types to compose them, then create a concrete object that extends the compositional trait and "with"s the other traits. Some scala books say this is equivalent to dependency injection, but I think you need implicits to get automatic dependency resolution. Implicits and DI both address the same need: satisfying dependency requirements. Containers in spring actually do two things: they allow declaration (which in scala is done in code) and resolution (done by the spring engine). In spring java-config, you use java code to declare objects and annotations to indicate how the wiring should be performed, especially across third-party libraries.
  • Cake + abstract type member: This is the cake pattern combined with an abstract type member (a declaration of a type, with an optional type constraint, in the trait). You compose the objects as you would with the cake pattern, and the abstract types give you extensibility. The cake pattern does not require abstract type members to work, so I have called this out as a separate pattern.
  • Visitor pattern (this is an extension pattern): Scala can implement the standard visitor pattern just like java. You can also use scala's pattern matching to make the pattern more readable. But whether it's scala or java, you still suffer from the fact that when you add a new type, you have to touch code somewhere to inform the visitor of the new type, which may not be allowed.
  • Traits + abstract type members + type shadowing: This technique is described in the wonderful books Scala in Action (Manning, 2013) and Steps in Scala (Cambridge, 2010). It shows that you can use the standard concepts of traits, abstract type members and shadowing to redefine traits contained within a trait "module." By redefining a type in a sub-trait (which shadows the super-trait's version), any sub-traits or objects then receive the "new" definition, which would include, say, a new method. Because you are using traits within traits, you use path-dependent syntax to refer to classes in the super-trait. After adding the extension method to the sub-trait and shadowing some types, you create your concrete object. Since you are using abstract type members, when you create the concrete object you need to define the actual type (filling in the abstract type member) and any outstanding methods; those methods would employ the extension method in some way. This approach does not avoid new code (any technique based on the scala compiler will involve new code), but it lets you extend types with new methods and data without recompiling existing code.
  • Type class, implicits + adapter pattern: A type class is a parameterized type, roughly like using generics in java, with no constraints placed on the type parameter. For example, you might write trait MyTypeClass[A] { ... }. Since the type A can be of any type, you are essentially wrapping the methods defined in the trait around the type parameter. This concept of wrapping is the basis of the adapter pattern. By using a type class (this is not using an implicit to convert the type to another type; see below) we can add a method to an object. It may not have the fluid syntax of .member or .methodCall, but it allows you to write generic methods over objects. Combined with implicits that bring in the resources needed to evaluate the method call, you can create pluggable generic methods and use implicit scoping tricks (such as import statements) to ensure that the right dependent resources are available. In some cases, those dependent resources may include processing strategies specific to some of the "A" objects, at which point you are really implementing a functional version of the strategy pattern. There are a number of variations of this approach. Typically the method uses the implicitly(...) syntax to retrieve the implicit resources it needs.
  • Implicits and changing a type: The idea is that the compiler will look for an implicit definition to satisfy a member or method call on an object. If an implicit conversion is available within the scope of the expression, the compiler will apply it. This essentially extends the methods available on existing objects; it is, of course, the basic idea behind extension methods in .NET.
  • Currying: You can use currying to carry around a function that already has its dependent resources supplied. By currying the parameters with the specific resources, you are in effect specifying the resources needed to perform the work. The curried function is then applied to the object of interest, which then has the resources in scope for use during processing.
  • Applicative pattern: This is described in Chapter 11 of Scala in Depth. It uses pure functional patterns to build up objects that are then injected into the target object. Essentially, it assembles a set of dependency objects using conditional structures, like Option, then calls apply on the constructor of the target object. Because Option is being used (think monads and functors here), if any of the inputs are missing or invalid, the target object is not constructed. Since the target object is returned to the caller as an Option, you know whether you provided enough dependencies to make the object work. It's a form of dependency injection.
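A minimal sketch of the inheritance/mixin composition bullet; the trait and class names here (Logging, Timestamped, OrderService) are hypothetical:

```scala
// Two concerns sliced into separate traits, each carrying implementation code.
trait Logging {
  def log(msg: String): Unit = println(s"[log] $msg")
}

trait Timestamped {
  def now(): Long = System.currentTimeMillis()
}

// The service mixes in both concerns without a deep inheritance chain.
class OrderService extends Logging with Timestamped {
  def placeOrder(id: String): String = {
    log(s"placing order $id at ${now()}")
    s"order-$id"
  }
}
```

Because the traits carry implementations, the concrete class gets the behavior for free at mixin time.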
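The run-time composition bullet is just programming to interfaces plus injection; a sketch with hypothetical names (PaymentGateway, Checkout), using plain constructor injection in place of a container:

```scala
// The service interface, declared as a trait.
trait PaymentGateway { def charge(amount: BigDecimal): Boolean }

// One possible implementation, chosen at wiring time.
class TestGateway extends PaymentGateway {
  def charge(amount: BigDecimal): Boolean = amount > 0
}

// Constructor injection: Spring (or plain code) supplies the concrete instance.
class Checkout(gateway: PaymentGateway) {
  def pay(amount: BigDecimal): Boolean = gateway.charge(amount)
}
```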
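A small sketch of the cake pattern with a self type; the component names (UserRepositoryComponent, UserServiceComponent, Registry) are hypothetical:

```scala
trait UserRepositoryComponent {
  def userRepository: UserRepository
  trait UserRepository { def find(id: Int): String }
}

// The self type records a dependency the compiler must see satisfied
// in any concrete assembly that mixes this trait in.
trait UserServiceComponent { this: UserRepositoryComponent =>
  def userService: UserService = new UserService
  class UserService { def name(id: Int): String = userRepository.find(id) }
}

// The concrete object "bakes the cake" by mixing the components together.
object Registry extends UserServiceComponent with UserRepositoryComponent {
  val userRepository = new UserRepository {
    def find(id: Int): String = s"user-$id"
  }
}
```

If you forget to mix in UserRepositoryComponent, the object fails to compile rather than failing at runtime, which is the point of the pattern.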
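The cake + abstract type member variant can be sketched as follows; the module names and the Entity type member are hypothetical:

```scala
trait RepositoryModule {
  type Entity                      // abstract type member, fixed later
  def repository: Repository
  trait Repository { def save(e: Entity): Entity }
}

// Depends on RepositoryModule via a self type, and can talk about
// Entity without knowing what it is yet.
trait AuditModule { this: RepositoryModule =>
  def auditedSave(e: Entity): Entity = repository.save(e)
}

// The concrete object fills in the abstract type and the wiring.
object InvoiceApp extends RepositoryModule with AuditModule {
  type Entity = String
  val repository = new Repository {
    def save(e: String): String = e.toUpperCase
  }
}
```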
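The pattern-matching form of the visitor bullet; Shape, Circle and Rect are hypothetical types:

```scala
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

// Pattern matching stands in for accept/visit double dispatch. Because
// Shape is sealed, the compiler warns when a case is unhandled, but you
// still have to touch this function to add a new type.
def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}
```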
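A sketch of the type class bullet; Show and describe are hypothetical names, and the instances live in the companion object so the compiler finds them implicitly:

```scala
// The type class: a generic adapter wrapping any A with a show method.
trait Show[A] { def show(a: A): String }

object Show {
  // Instances in the companion are in the implicit scope for Show[_].
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int): String = s"Int($a)"
  }
  implicit val stringShow: Show[String] = new Show[String] {
    def show(a: String): String = s"String($a)"
  }
}

// A generic method; implicitly(...) pulls the instance from scope.
def describe[A: Show](a: A): String = implicitly[Show[A]].show(a)
```

Swapping in a different Show[A] instance (via an import, say) changes the behavior without touching describe, which is where the strategy-pattern flavor comes from.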
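The implicit-conversion bullet is the classic "enrich my library" idiom; a sketch with a hypothetical words method added to String:

```scala
object StringExtensions {
  // The compiler rewrites "abc".words into new WordOps("abc").words
  // whenever WordOps is in implicit scope.
  implicit class WordOps(s: String) {
    def words: Seq[String] = s.split("\\s+").toSeq
  }
}

import StringExtensions._
```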
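The currying bullet can be sketched in a couple of lines; render and the "TODO" prefix are hypothetical:

```scala
// The first parameter list carries the dependency (a prefix);
// the second carries the data the function is later applied to.
def render(prefix: String)(item: String): String = s"$prefix: $item"

// Partially apply with the dependency; pass the resulting function around.
val renderTodo: String => String = render("TODO")
```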
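A sketch of the Option-driven construction in the applicative bullet; DbConfig is a hypothetical target object, and the for-comprehension is a monadic stand-in for the applicative builders the chapter uses:

```scala
case class DbConfig(host: String, port: Int)

// Build the target only when every dependency resolves;
// any None short-circuits construction.
def dbConfig(host: Option[String], port: Option[Int]): Option[DbConfig] =
  for { h <- host; p <- port } yield DbConfig(h, p)
```

The caller gets an Option back, so a missing dependency surfaces as None rather than a half-wired object.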
I know it's hard to visualize these items without more examples, but I wanted to make sure that I had them listed in one place first. After reviewing this, it is clear that Spring should still be used to wire together different libraries and act as the glue. It's probably just easier to do some of this in Spring.

But if you are working in scala and building your abstractions all within scala, such as with a new system, or building a library so that it's self-contained, the scala compiler approaches make a lot of sense because they reduce the complexity of that particular application or library.

Some of the patterns above are more useful when you know the type of composition (and hence wiring) you want to do prior to the deployment cycle. If you have a dynamic composition problem, where the components you need may be determined by a database, you need to think through whether the "use the compiler" models will work (hint: lazy evaluation could help here). Of course, you could abstract away the very specific parts of the algorithm so that the highly configurable part is further down in the abstractions (this starts getting into the concepts of configuration versus customization). There would still be value, in terms of system evolution (the part of the code that you do control), in using the cake pattern internally.

There are many variants of the cake pattern (for example, Precog has a nice article on existential types as a variation of the cake pattern), and you'll want to think through how they apply to your problem.

What have we NOT touched on? AOP techniques, and what their equivalents might be in the scala world.
