Given all the learning on Scala, I wanted to do a proper project using Akka and Scala, so I chose a full FIX protocol implementation. On reflection that was too big a task, since I’m coding in 30-minute slots.

Anyway, some of the unexpected things that came up are below.

Akka – hmm

I love the Actor model. Don’t roll your own threads; use actors. It means that when your code is running on a 4-core or a 32-core host it will scale up automatically, as the execution context will grow with the number of cores.

Messages

However, you need a strong naming convention for the messages in and out of each actor, and you should embed them in the actor’s companion object. This should be a convention across the entire project, not per developer.

For example, MsgInBootstrap could be the message that starts up an actor’s state. It really is worth naming your in and reply messages consistently, as in the sketch below.
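
A minimal sketch of the convention using classic Akka actors; the actor and message names here are illustrative, not from a real codebase:

import akka.actor.Actor

object OrderBookActor {
  // Messages in: prefixed MsgIn...
  case class MsgInBootstrap(sessionId: String)
  // Replies out: prefixed MsgOut...
  case object MsgOutBootstrapped
}

class OrderBookActor extends Actor {
  import OrderBookActor._

  def receive: Receive = {
    case MsgInBootstrap(sessionId) =>
      // ...initialise this actor's state for the session...
      sender() ! MsgOutBootstrapped
  }
}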

State machines

Akka supports become to alter an actor’s behaviour. However, if your actor has more than a few states I would not use it. Instead, implement your own states as separate Scala classes and give them their own unit tests. The receive on your actor can simply delegate to whichever state is currently in play, and each state can return the next state to transition to.

This also means you can test without actor testing, i.e. simpler unit tests.
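
As a rough sketch of the idea (all names here are invented for illustration):

// Events and states are plain Scala, so each state can be unit tested
// without any actor test kit.
sealed trait SessionEvent
case object Logon extends SessionEvent
case object Logout extends SessionEvent

trait SessionState {
  // Handle an event and return the next state to transition to.
  def handle(event: SessionEvent): SessionState
}

object AwaitingLogon extends SessionState {
  def handle(event: SessionEvent): SessionState = event match {
    case Logon => Active
    case _     => this // ignore everything else until logged on
  }
}

object Active extends SessionState {
  def handle(event: SessionEvent): SessionState = event match {
    case Logout => AwaitingLogon
    case _      => this
  }
}

// The actor is just a messaging facade over the current state.
import akka.actor.Actor

class SessionActor extends Actor {
  private var state: SessionState = AwaitingLogon

  def receive: Receive = {
    case event: SessionEvent => state = state.handle(event)
  }
}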

It also lets you hedge your bets: the state machines you implement can know nothing about Akka. The actor can be a messaging facade, and the state machine only knows there is a trait which lets it do comms. So if, in future, you migrate from large JVMs to tiny services, you can pull your state machines into a microservice and change the trait to be a Kafka layer.
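
For example, the trait might look like this (a sketch; the Comms name and shape are my assumptions, and the Kafka variant uses the standard KafkaProducer API):

// The state machine only ever sees this abstraction, never Akka.
trait Comms {
  def send(destination: String, payload: String): Unit
}

// In-JVM implementation: forward to an ActorRef as a message.
import akka.actor.ActorRef

case class CommsSend(destination: String, payload: String)

class ActorComms(target: ActorRef) extends Comms {
  def send(destination: String, payload: String): Unit =
    target ! CommsSend(destination, payload)
}

// Later, a microservice could swap in a Kafka-backed implementation
// without the state machine changing at all.
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

class KafkaComms(producer: KafkaProducer[String, String]) extends Comms {
  def send(destination: String, payload: String): Unit =
    producer.send(new ProducerRecord[String, String](destination, payload))
}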

Asynchronous

Obviously everything is async. However, really think about this. Most actions will be founded on a human wanting some action to complete. So if they fire a ‘save thing’ into your system, and you then use an actor pipeline to ‘validate’, ‘enrich’, ‘commingle’ and ‘save’, then at the end of that pipeline you will have to send an event back to the initiator to show that it is complete. Not only that, but any failure at any point should result in a failure message back to the user.

You want each actor to be like a micro-service: Single Responsibility Principle, decoupled and so on. Ideally, if you are reactive as well, you don’t want every actor to respond to only a single initiation point, i.e. you don’t want to bake into each actor who the first initiator was. You also do not want to code it as a series of synchronous asks, i.e. send and wait for reply. This means every message in the pipeline has to carry the actor who needs to be notified of progress.
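
A sketch of what that looks like in practice (the message and actor names are made up): every message carries the replyTo ActorRef, so no stage blocks on ask and no stage hard-codes the initiator.

import akka.actor.{Actor, ActorRef}

// Every pipeline message carries the initiator to notify.
case class SaveThing(payload: String, replyTo: ActorRef)
case class SaveFailed(reason: String)
case object SaveComplete

class ValidateActor(next: ActorRef) extends Actor {
  def receive: Receive = {
    case msg @ SaveThing(payload, replyTo) =>
      if (payload.nonEmpty) next ! msg           // pass the initiator along
      else replyTo ! SaveFailed("empty payload") // fail straight back to the user
  }
}

class SaveActor extends Actor {
  def receive: Receive = {
    case SaveThing(payload, replyTo) =>
      // ...persist the payload...
      replyTo ! SaveComplete // the end of the pipeline notifies the initiator
  }
}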

This is basically SOA orchestration. The best material I have read about this recently is the Kafka Streams papers.

Akka vs Kafka Streams

Akka can do remoting of actor refs; in fact, Lagom is all about collaborating services built on the Actor infrastructure. But none of that remoting tech is nearly as interesting as Kafka’s streaming and processor topologies.
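
To give a flavour, here is a toy topology using the Kafka Streams Java API from Scala; the topic names and the enrichment step are invented:

import java.util.Properties
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.{KafkaStreams, StreamsBuilder, StreamsConfig}
import org.apache.kafka.streams.kstream.{Consumed, Produced, ValueMapper}

object EnrichTopology extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-enricher")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

  val strings = Serdes.String()
  val builder = new StreamsBuilder()

  // Read "orders", transform each value, write to "enriched-orders".
  builder
    .stream("orders", Consumed.`with`(strings, strings))
    .mapValues(new ValueMapper[String, String] {
      def apply(value: String): String = value.toUpperCase // stand-in for real enrichment
    })
    .to("enriched-orders", Produced.`with`(strings, strings))

  new KafkaStreams(builder.build(), props).start()
}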

Using Akka within a JVM is still a good idea – you get the Actor model, failure management via the Guardian and so on.

Using Kafka streaming challenges this… so that’s the next project, once I have done Sackfix.