I think it is useful to have a quick discussion about what happened with moving to serverless over the last 3 years. First of all, I want to clarify that there are many discussions in my company related to that, some of which step into the field of black magic, meaning that what is told is mostly a half-truth of what is really happening.
Let’s say that if you use AWS, you have three choices for your computational power: hosted servers, like EC2; serverless long-running apps, like ECS with Fargate to be clear, or EKS, Kubernetes hosted by AWS; or on-demand computational power. I call these Lambdas because they are literally computational power that is started and billed per execution of a single unit of processing.
The logic is: if, for some reason, you want a machine (like a legacy product made of 30 apps written in Python, shell script and Java that you don’t want to dockerize because you often need to SSH into it to check its status, and so on), you go for EC2. If your microservice is a long-running application that is dockerized and you want to maintain only the code, not the machine, you may want to use Fargate or Kubernetes.
For simple pieces of code that are called infrequently, do a quick job and return, and then may not be invoked again until much later, or that have limited peaks per day, you may want to use Lambdas.
The black magic in the discussions is usually about when to use these technologies and how much they cost. When to use them is a philosophical discussion: with Lambdas you may end up creating architectures that are not microservices but nanoservices, and I leave you this article to understand why that is an anti-pattern.
For costs it is easier: AWS explicitly states the prices for Lambda and for Fargate. It is obviously difficult to compare, but let’s do an example: to keep a Fargate task up and running for 24 hours with 0.25 vCPU and 0.5 GB of memory (the minimum configuration), according to this page you pay $0.296 per day, or $9.01 per month. According to this page, that is the equivalent of calling a Lambda 45.05 million times, without considering processing time or the free tier, and using the minimum Lambda configuration, which is anyway a bit less powerful than the Fargate one. Therefore, if you have an app that computes very fast and is called far fewer than 45M times per month (or 1.5M per day), then be my guest.
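The break-even above can be checked with some back-of-envelope arithmetic. A minimal sketch, assuming the per-unit rates implied by the figures in the text (request cost only, ignoring Lambda duration charges and the free tier; real AWS prices vary by region and change over time):

```python
# Assumed rates (matching the figures in the text, not authoritative):
FARGATE_VCPU_HOUR = 0.04048   # $ per vCPU-hour
FARGATE_GB_HOUR = 0.004445    # $ per GB-hour
LAMBDA_PER_MILLION = 0.20     # $ per 1M Lambda requests

# Minimum Fargate configuration: 0.25 vCPU, 0.5 GB memory.
hourly = 0.25 * FARGATE_VCPU_HOUR + 0.5 * FARGATE_GB_HOUR
daily = hourly * 24
monthly = daily * 30.4        # average days per month

# How many Lambda requests cost the same as one month of Fargate.
break_even_requests = monthly / LAMBDA_PER_MILLION  # in millions

print(f"Fargate: ${daily:.3f}/day, ${monthly:.2f}/month")
print(f"Lambda break-even: {break_even_requests:.1f}M requests/month")
```

Running this reproduces the ~$0.296/day, ~$9/month Fargate figure and the ~45M-requests break-even point.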
The black magic comes when people speak about their usage of Lambdas and want to sell it as extremely cheap. I have the case of a team saying exactly that, but only because they have an API managed by an API Gateway that, for most of their calls, just passes data through to a DynamoDB.
This case is a bit unfair, because in the computational cost you should also consider the API Gateway, which is quite expensive. Indeed, even if we consider the best price (assuming you have already paid for all the lower bands of calls), it currently adds another $77.4 per month to cover your 1.5M-per-day / 45M-per-month Lambda requests, and this makes the picture a bit different: with an API Gateway in front of a Lambda, you can make fewer than 3 million requests per month before covering the cost of a Fargate task, which does not need an API Gateway.
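The same sketch extended with the gateway shows how much the break-even moves. The per-million API Gateway rate below is an assumption (the standard first-band REST API price); actual tiers depend on volume and API type:

```python
# Assumed rates (illustrative, not authoritative):
FARGATE_MONTHLY = 9.01        # $ per month, from the previous estimate
LAMBDA_PER_MILLION = 0.20     # $ per 1M Lambda requests
APIGW_PER_MILLION = 3.50      # $ per 1M requests, assumed first-band price

# Cost per million requests once the gateway sits in front of the Lambda.
combined_per_million = LAMBDA_PER_MILLION + APIGW_PER_MILLION

# Requests per month at which Lambda + API Gateway matches Fargate.
break_even = FARGATE_MONTHLY / combined_per_million  # in millions

print(f"Break-even with API Gateway: {break_even:.2f}M requests/month")
```

Under these assumptions the break-even drops from ~45M to well under 3M requests per month, which is the point the paragraph above makes.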
Then, in the last 3 years I improved my skills a lot in terms of Architecture as Code (Terraform), which I think is quite amazing: we create AWS resources and PagerDuty alerting all with Terraform. We also gained lower-level knowledge of some of the services we are using: Fargate, Lambda, Athena, Glue, relational databases, SQS, SNS, Kinesis, ElastiCache (both Redis and Memcached)… it is still amazing, at least from the tech point of view.
That’s all folks. Stay tuned!