The problem

Last time we saw how to deal with futures inside an actor. However, sometimes you just need to wait for a future to finish before processing the next message.
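As a rough illustration of that idea (not Akka, just a minimal asyncio sketch; the `Actor`, `mailbox`, and `handle` names are made up for this example), an actor can drain its mailbox one message at a time and await the pending future before picking up the next message:

```python
import asyncio


class Actor:
    """Toy actor: processes one message at a time from a mailbox.

    A message that triggers an async call (our stand-in for a future)
    blocks the mailbox until that call completes, so the next message
    is only handled afterwards. Illustrative names, not an Akka API.
    """

    def __init__(self):
        self.mailbox = asyncio.Queue()
        self.log = []

    async def run(self, n_messages):
        for _ in range(n_messages):
            msg = await self.mailbox.get()
            # Wait for the future to finish before the next message.
            result = await self.handle(msg)
            self.log.append(result)

    async def handle(self, msg):
        await asyncio.sleep(0)  # stand-in for a slow external call
        return f"done:{msg}"


async def main():
    actor = Actor()
    for i in range(3):
        actor.mailbox.put_nowait(i)
    await actor.run(3)
    return actor.log


print(asyncio.run(main()))  # ['done:0', 'done:1', 'done:2']
```

The point is the ordering guarantee: even though each message involves asynchronous work, messages are completed strictly in arrival order.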
Distributed training of neural networks is something I’ve always wanted to try, but I couldn’t find much information about it. It seems most people train their models on a single machine, and in fact that makes sense: training on a single machine is much more efficient than distributed training. Distributed training incurs additional cost and is […]
We all know that Cassandra is a distributed database. However, there are situations where one needs to perform an atomic operation, and for such cases a consensus must be reached between the replicas. For instance, when dealing with payments we might require that a row be inserted only once.
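In Cassandra this is expressed with a conditional write, `INSERT ... IF NOT EXISTS`, which runs a Paxos round across the replicas and reports whether the write was applied. As a minimal sketch of those insert-once semantics (an in-memory stand-in, not a real Cassandra client; the `Table` class and the `"payment-42"` key are made up for this example):

```python
class Table:
    """Toy in-memory table illustrating Cassandra-style
    'INSERT ... IF NOT EXISTS' semantics.

    In real Cassandra the conditional write runs a Paxos round across
    replicas; here a single dict stands in for the whole cluster.
    """

    def __init__(self):
        self.rows = {}

    def insert_if_not_exists(self, key, value):
        # True only for the first writer, mirroring the [applied]
        # column Cassandra returns for conditional writes.
        if key in self.rows:
            return False
        self.rows[key] = value
        return True


payments = Table()
print(payments.insert_if_not_exists("payment-42", {"amount": 10}))  # True
print(payments.insert_if_not_exists("payment-42", {"amount": 10}))  # False: duplicate rejected
```

Only the first insert for a given key succeeds; a retried payment hits the existing row and is rejected instead of being applied twice.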