Load-Balancing strategies

A load-balancing strategy is a way to spread load (e.g. HTTP requests) over a fleet of instances providing some functionality. If you’re like me, you probably think of a load balancer (e.g. Nginx, AWS ELB, …) sitting in front of several nodes (e.g. EC2 instances, Docker containers, …). This is indeed a very common pattern, but there are other alternatives, each with its own pros and cons. Continue reading “Load-Balancing strategies”

Back-pressure in Grpc-akkastream

Grpc-akkastream, the akka-stream implementation built on top of gRPC, looks good on the surface, but if you look under the hood there is one problem: it doesn’t provide any support for back-pressure.

Akka-streams (like any reactive-streams implementation) supports back-pressure, and gRPC (at least the Java version) also exposes an API that supports it. Let’s see how we can wire everything together and provide a truly reactive akka-stream implementation for gRPC! Continue reading “Back-pressure in Grpc-akkastream”
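
For a flavour of the building blocks involved: grpc-java exposes manual flow control through CallStreamObserver (request(n), isReady, setOnReadyHandler, disableAutoInboundFlowControl). Below is a minimal, hypothetical sketch (Record and Summary stand in for generated protobuf classes) of a client-streaming handler that only requests a new message once it has processed the previous one; the akka-stream integration discussed in the post builds on the same primitives.

```scala
import io.grpc.stub.{ServerCallStreamObserver, StreamObserver}

// Stand-ins for protobuf-generated message classes
final case class Record(payload: String)
final case class Summary(count: Int)

// A client-streaming handler using grpc-java's manual flow control
def recordStream(responseObserver: StreamObserver[Summary]): StreamObserver[Record] = {
  val serverObserver = responseObserver.asInstanceOf[ServerCallStreamObserver[Summary]]
  // Take control of inbound demand instead of letting grpc-java request messages eagerly
  serverObserver.disableAutoInboundFlowControl()
  serverObserver.request(1) // initial demand: one message

  new StreamObserver[Record] {
    private var count = 0
    override def onNext(record: Record): Unit = {
      count += 1                // "process" the message
      serverObserver.request(1) // signal demand for exactly one more message
    }
    override def onError(t: Throwable): Unit = ()
    override def onCompleted(): Unit = {
      responseObserver.onNext(Summary(count))
      responseObserver.onCompleted()
    }
  }
}
```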

Making sense of SBT

SBT is probably the most popular (as in “most used”) build tool for Scala, yet people (including me) have a hard time figuring out why it doesn’t behave as they expect.

Originally known as the “Simple Build Tool”, it was later renamed the “Scala Build Tool” (maybe to acknowledge that it was not so simple?).

Part of the reason is that it’s not really intuitive. True, there is documentation available, but many developers don’t bother reading it (at least until the pain of using SBT outweighs the effort of reading the docs), and it’s much easier to copy/paste code snippets without completely understanding what they do.

That being said, SBT is not magic, so let’s try to understand how it works so that next time we’re not reduced to copy/pasting cryptic code snippets until something works. Continue reading “Making sense of SBT”
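
As a tiny taste of the model that the post unpacks: everything in SBT is a key bound to either a value (setting) or a computation (task), and dependencies between keys are wired with .value. A minimal, self-contained build.sbt (the key names here are made up for illustration):

```scala
// build.sbt (illustrative keys, not part of any real plugin)
val greeting = settingKey[String]("A greeting, computed once when the build is loaded")
val greet    = taskKey[Unit]("A task, re-executed every time it is invoked")

greeting := "Hello from SBT"

greet := {
  // `.value` declares a dependency on the `greeting` setting;
  // SBT resolves the whole dependency graph before running the task
  println(greeting.value)
}
```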

Akka stream interface for gRPC

Back from holidays, let’s continue with some of my favourite topics: AkkaStreams and gRPC.

We’ve already seen how we can take advantage of the ScalaPB code generation tool to build new interfaces (GRPCMonix) on top of the grpc-java implementation, or to create new tools that integrate gRPC with other services (GRPCGateway).

Similarly to GRPCMonix, which provides a Monix interface – Task, Observable – on top of gRPC, it’s possible to develop an AkkaStream interface on top of gRPC. Continue reading “Akka stream interface for gRPC”
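
To make the target concrete, here is a rough sketch (trait and message names are illustrative, not the generated code from the post) of what an akka-stream flavoured service interface could look like, with the four gRPC call types mapped onto Future, Source and Flow:

```scala
import akka.NotUsed
import akka.stream.scaladsl.{Flow, Source}
import scala.concurrent.Future

// Stand-ins for protobuf-generated messages
final case class HelloRequest(name: String)
final case class HelloReply(message: String)

// One possible mapping of gRPC call types onto akka-stream abstractions
trait GreeterAkkaStream {
  // unary: one request, one response
  def sayHello(request: HelloRequest): Future[HelloReply]
  // server streaming: one request, a stream of responses
  def keepsReplying(request: HelloRequest): Source[HelloReply, NotUsed]
  // client streaming: a stream of requests, one response
  def keepsTalking(requests: Source[HelloRequest, NotUsed]): Future[HelloReply]
  // bidirectional streaming: a Flow from requests to responses
  def streamHellos: Flow[HelloRequest, HelloReply, NotUsed]
}
```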

Understanding Scala Futures and Execution Contexts

Viktor Klang recently published a set of useful tips on Scala Futures. Although Futures are widespread and heavily used, people (especially newcomers) still run into problems when working with them.

In my opinion, many of these problems come from misunderstandings or misconceptions about how Futures work (e.g. the strict nature of Futures and the way they interact with ExecutionContexts, to name a few).
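
To illustrate those two points with a small sketch (not taken from the post): a Future starts running as soon as it is constructed, and both its body and every subsequent transformation are scheduled on whatever ExecutionContext is in implicit scope.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// The implicit ExecutionContext decides where the Future's body and callbacks run
implicit val ec: ExecutionContext =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(2))

// Futures are strict: this block is submitted for execution immediately,
// not when we later map over the Future
val f: Future[Int] = Future {
  Thread.sleep(100)
  42
}

// Each transformation also needs an ExecutionContext to schedule its callback
f.map(_ * 2).foreach(println)
```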

That’s enough of an excuse to dive into the Scala Future and ExecutionContext implementation. Continue reading “Understanding Scala Futures and Execution Contexts”

Publishing gRPC services over REST/HTTP with gRPC Gateway

In the previous post we saw how to implement a gRPC service in Scala. While gRPC is a great way to implement remote services, many client/server interactions are still implemented over REST/HTTP nowadays.

So the question is: Is it possible to use gRPC to define and implement services and make them available over REST at the same time?

Well, it turns out it’s possible. The translation from protobuf to JSON is quite straightforward. The only thing left is to define an associated endpoint for each of the gRPC calls. Here again, this can easily be done alongside the gRPC definition, using the HTTP annotations from Google.

This is exactly what GRPC-Gateway provides. While GRPC-Gateway is implemented in Go, it can run alongside any gRPC implementation, as it only relies on the protobuf/gRPC definitions (.proto files). It spins up an HTTP proxy that translates REST (HTTP/JSON) calls and forwards them to a gRPC server.

Fortunately, ScalaPB’s code generation makes it possible to implement the same functionality in Scala. Continue reading “Publishing gRPC services over REST/HTTP with gRPC Gateway”

gRPC in Scala

Contrary to what many might think, gRPC doesn’t stand for “google Remote Procedure Call”; it’s a recursive acronym meaning “gRPC Remote Procedure Call”. I don’t know if you buy that, but the truth is that it was originally developed by Google and then open-sourced.

If you’ve been in IT for a while, RPC doesn’t necessarily bring back happy memories. On the JVM it all started with RMI in the 90s. RMI was inspired by CORBA and suffered from a lack of interoperability, as both the client and the server had to be implemented in Java. RMI was also particularly slow, as Java serialisation is not a very efficient protocol.

Later, in the 2000s, came XML-based RPC with XML-RPC and especially SOAP. Both of these formats address interoperability, as it no longer matters how the client and server are implemented; they only need to speak XML. However, XML is still not an efficient format, and communications remain slow.

SOAP provides an interesting definition language (WSDL – Web Services Description Language) that can be used to generate service implementations.

gRPC addresses all these drawbacks. By default, it uses protobuf (Protocol buffers) for the service definitions and as its serialisation mechanism, which allows it to interoperate with many different languages while providing an efficient serialisation protocol. gRPC also takes advantage of HTTP/2 to add streaming capabilities.

gRPC generates all the boilerplate code for you, so that you only have to implement the business logic inside your services. It supports a number of different languages out of the box: C++, Java (Netty), Python, Go, Ruby, C#, JavaScript (Node.js), Android, Objective-C and PHP.

Unfortunately, Scala is not on the list! … but we have ScalaPB (and sbt-protoc) to save the day! Continue reading “gRPC in Scala”
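
For reference, wiring ScalaPB into a build takes only a few lines of sbt configuration (a sketch; the coordinates and version numbers below are illustrative, so check the ScalaPB documentation for current ones):

```scala
// project/plugins.sbt
addSbtPlugin("com.thesamet" % "sbt-protoc" % "1.0.6")
libraryDependencies += "com.thesamet.scalapb" %% "compilerplugin" % "0.11.13"

// build.sbt: generate case classes and gRPC stubs from src/main/protobuf/*.proto
Compile / PB.targets := Seq(
  scalapb.gen() -> (Compile / sourceManaged).value / "scalapb"
)
libraryDependencies ++= Seq(
  "com.thesamet.scalapb" %% "scalapb-runtime-grpc" % scalapb.compiler.Version.scalapbVersion,
  "io.grpc" % "grpc-netty" % scalapb.compiler.Version.grpcJavaVersion
)
```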

Kafka concepts and common patterns

Many people see Kafka as a messaging system, but in reality it’s more than that: it’s a distributed streaming platform. While it can be used as a traditional messaging system, this versatility also means it’s more complex.

In this post we’ll introduce the main concepts of Kafka and see how they can be used to build different kinds of applications, from traditional publish/subscribe all the way up to streaming applications. Continue reading “Kafka concepts and common patterns”
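
As a preview of the publish/subscribe end of that spectrum, here is a minimal producer using the plain Java client from Scala (the broker address and topic name are made up for the example):

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Minimal configuration pointing at a (hypothetical) local broker
val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
// Publish one record to the "events" topic; any consumer subscribed to it will receive it
producer.send(new ProducerRecord[String, String]("events", "user-42", "page-view"))
producer.close()
```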

Generating protobuf formats with scala.meta macros

Today’s focus is on scalameta. In this introductory post we’re going to see how to create a macro annotation that generates protobuf formats for case classes.

The idea is to be able to serialise any case class to protobuf just by adding a @PBSerializable annotation to the case class declaration.

Behind the scenes, the macro generates implicit formats in the companion object. These implicit formats can then be used to serialise the case class to/from the protobuf binary format.

This is quite similar to the JSON formats of play-json.
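
In terms of intended usage it looks roughly like this (a sketch; PBWriter and PBReader are placeholder type names standing in for the formats defined in the post):

```scala
import scala.annotation.StaticAnnotation

// Placeholder definitions so the sketch is self-contained;
// the real macro annotation and format types are defined in the post
class PBSerializable extends StaticAnnotation
trait PBWriter[A]
trait PBReader[A]

// Intended usage: annotate the case class...
@PBSerializable
final case class Person(name: String, age: Int)

// ...and the macro expands it so that the companion object holds (roughly) these implicits
object Person {
  implicit val personPBWriter: PBWriter[Person] = new PBWriter[Person] {}
  implicit val personPBReader: PBReader[Person] = new PBReader[Person] {}
}
```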

In this post we’re going to cover the main principles of scalameta and how to apply them to create our own macros. Continue reading “Generating protobuf formats with scala.meta macros”

PBDirect – Protobuf without the .proto files

Protocol Buffers (aka Protobuf) is an efficient and fast way to serialise data into a binary format. It is much more compact than Java serialisation or any text-based format (JSON, XML, CSV, …).

Protobuf is schema based – it needs a description (in a .proto file) of the data structures to be serialised/deserialised.

For the JVM, protoc (the Protobuf compiler) reads the .proto description files and generates the corresponding Java classes. For Scala there is a very good sbt plugin, ScalaPB, that follows the same process and generates case classes corresponding to the .proto file definitions.

The .proto files are an easy way to describe a protocol between two components (e.g. services). However, there are some cases (e.g. writing to persistent storage) where the .proto definitions are just unnecessary and add superfluous complexity. (Who likes to read auto-generated code?)

In such cases it would be much easier to serialise an object directly into protobuf, using its class definition as the schema. After all, this is what the protobuf Java binding does: it serialises (auto-generated) Java classes into the protobuf binary format.

To that end, let me introduce PBDirect: a Scala library to encode Scala objects directly into protobuf. Continue reading “PBDirect – Protobuf without the .proto files”
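
A minimal sketch of how this looks with PBDirect (based on its README; the exact API may differ slightly):

```scala
import pbdirect._

// The case class itself acts as the schema: field order defines the protobuf field numbers
case class Person(name: String, age: Int)

val person = Person("Alice", 30)

// Serialise directly to the protobuf binary format...
val bytes: Array[Byte] = person.toPB

// ...and read it back
val decoded: Person = bytes.pbTo[Person]
```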