Digital Transformation & Software Engineering Services
The Engagement of Microservices and Serverless Computing

After DevOps, microservices have been one of the most talked-about topics in almost every architecture forum lately. Recently we were discussing cloud adoption rates and wondering: what will be the next revolution in the infrastructure space? Cloud computing has been a technology game changer in many ways, but what comes next? The leading candidate appears to be the emergence of serverless computing, or serverless architectures, which all of the major cloud service providers are building out: AWS Lambda, Apache OpenWhisk on IBM Bluemix, and Google Cloud Functions.

While serverless computing is popularly referred to as Function-as-a-Service (FaaS), a better name in my opinion would be 'Compute-as-a-Service' (CaaS), if it weren't taken already, because what it offers is the ability to purchase compute in small increments, not functions in small increments.

The word 'serverless' doesn't mean 'no servers'. Serverless computing is an event-driven application design and deployment paradigm in which all computing resources are provided as scalable cloud services.

Main Difference Between Cloud Computing and Serverless Computing:

In traditional cloud computing, organizations pay a fixed, recurring amount to run their websites and applications, whether or not they use all of the provisioned instances. In serverless computing, by contrast, you pay only for the services or instances you actually use, with no charges for idle time or downtime.

Serverless computing is an extension of microservices:

As in a microservices architecture, a serverless architecture is divided into specific core components. Microservices group related functionality into a single service, while serverless computing divides functionality into even finer-grained components. Developers write custom code and execute it as autonomous, isolated functions that run in a stateless compute service.
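As a sketch of what "autonomous, isolated, stateless" means in practice, here is a minimal FaaS-style function in Python. The `(event, context)` signature mirrors the shape AWS Lambda uses for Python handlers, but the event fields and the validation logic are illustrative assumptions, not a real API payload:

```python
# Illustrative sketch of a stateless FaaS-style handler.
# The (event, context) signature mirrors AWS Lambda's Python handlers;
# the event fields below are hypothetical, not a real service payload.

def handler(event, context=None):
    """Validate a single payment record and return a verdict.

    Stateless: everything the function needs arrives in `event`, and
    nothing is cached between invocations, so any instance of the
    function can serve any request.
    """
    amount = event.get("amount", 0)
    currency = event.get("currency", "USD")
    if amount <= 0:
        return {"status": "rejected", "reason": "non-positive amount"}
    return {"status": "accepted", "amount": amount, "currency": currency}
```

Because the function holds no state of its own, the platform is free to spin up, reuse, or discard instances as traffic demands.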

Let's look at an example. Imagine a service in today's FinTech space, e.g., a generic non-compliance mailer that sends email every day at midnight. In a microservices architecture, where everything is decomposed into distinct APIs and independent microservices, many services run on demand and many on a schedule. A generic non-compliance mailer that fires once daily at midnight would be an independent microservice API (technically a function).

In serverless computing, on the other hand, no server runs to service the mail operation until the mail event fires at midnight. At that point a server is allocated, runs the code, and is then decommissioned.
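A minimal sketch of such a mailer as a Python Lambda-style handler. The scheduled trigger, the `fetch_non_compliant_accounts` data source, and the address scheme are all assumptions for illustration; a real deployment would wire the midnight schedule through something like an EventBridge cron rule and hand the messages to an email service:

```python
# Sketch: the "non-compliance mailer" as a stateless scheduled function.
# Hypothetically triggered by a cron rule firing at midnight; the
# compliance data source and the mail transport are stubbed out here.

def fetch_non_compliant_accounts():
    # Stand-in for a real compliance data-store query.
    return [{"account": "ACC-1001", "issue": "KYC document expired"}]

def handler(event, context=None):
    notices = [
        {
            "to": f"{item['account']}@example.com",  # hypothetical address scheme
            "subject": "Compliance notice",
            "body": f"Action required: {item['issue']}",
        }
        for item in fetch_non_compliant_accounts()
    ]
    # A real function would pass these to SES/SMTP at this point.
    return {"sent": len(notices), "notices": notices}
```

Outside the few seconds this handler runs each night, no compute is allocated to it at all.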

The main differences between microservices and serverless computing are:

  • Latency: The time required for a FaaS function to respond to a request depends on many factors and may be anywhere from 10 milliseconds to a minute. In our use case there is no stringent deadline for firing the emails, so this is acceptable. As soon as the API is called, a server is spawned, serves the request, and is decommissioned once the request completes.
  • Cost: In a microservices architecture, the cloud service provider charges you for the time an instance is allocated; in this scenario at least one instance would be dedicated to the mailer microservice even though it is used for only 5 to 10 minutes a day. In serverless computing, you are charged only for your actual usage of compute resources, which makes for an attractive pricing model.
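The cost difference can be made concrete with back-of-the-envelope arithmetic. The rates below are invented placeholders, not real cloud prices; the point is the shape of the comparison, an always-on instance billed around the clock versus a function billed only for its 5-10 minutes of daily runtime:

```python
# Illustrative monthly cost comparison (all rates are invented placeholders).

HOURS_PER_MONTH = 730
INSTANCE_RATE = 0.05        # $/hour for an always-on instance (assumed)
FAAS_RATE = 0.06            # $/hour-equivalent of function compute (assumed)

daily_runtime_minutes = 10  # the mailer actually runs ~10 minutes per day

# Always-on microservice instance: billed for every hour of the month.
always_on_cost = INSTANCE_RATE * HOURS_PER_MONTH

# Serverless: billed only for the minutes the function actually runs.
faas_cost = FAAS_RATE * (daily_runtime_minutes / 60) * 30

print(f"Always-on instance: ${always_on_cost:.2f}/month")
print(f"Pay-per-use FaaS:   ${faas_cost:.2f}/month")
```

Even with a higher per-hour rate for function compute, the pay-per-use total is two orders of magnitude smaller for a workload that is idle 99% of the day.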

Structure, automation and optimization are built in. You can isolate the business logic of each REST API in its own function. The result is a completely agile infrastructure, ready to deploy in a very short time.

The above image shows a typical microservices-based serverless architecture built on an Amazon tech stack.

The heart of the system is AWS Lambda, which receives its routing from the Amazon API Gateway and in turn carries out the designated functionality. One use case for API Gateway + FaaS is creating HTTP-fronted microservices in a serverless way, with all the scaling, management and other benefits that come with FaaS functions.
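A sketch of how such an HTTP-fronted function might dispatch on the routing information the gateway forwards. The event keys (`path`, `httpMethod`) follow the shape of API Gateway's Lambda proxy payload, but the routes and route handlers themselves are hypothetical:

```python
# Sketch: one function fronting several HTTP routes via an API gateway.
# Event keys mirror API Gateway's Lambda proxy payload ("path",
# "httpMethod"); the routes below are invented for illustration.
import json

def get_accounts(event):
    return {"accounts": ["ACC-1001", "ACC-1002"]}

def get_health(event):
    return {"status": "ok"}

ROUTES = {
    ("GET", "/accounts"): get_accounts,
    ("GET", "/health"): get_health,
}

def handler(event, context=None):
    route = (event.get("httpMethod"), event.get("path"))
    fn = ROUTES.get(route)
    if fn is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(fn(event))}
```

The gateway handles TLS, throttling and routing; the function sees only a plain event and returns a plain response, which is what makes the whole stack scale without any server management on the application side.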

The most important benefit, in my opinion, is the reduced feedback loop required to create new application components – there is a lot of value in putting technology in front of an end user as soon as possible to get early feedback, and the reduced time-to-market that comes with serverless fits right in with this philosophy.

The space looks very promising and a new silver lining has arrived, but it's certainly not for the faint-hearted.

About the Author

Akash Shah
Akash Shah is a Senior Architect at Ness MTIC with around 12 years of experience defining system architectures using proprietary and open-source components for enterprise integration, enterprise architecture and cloud integration.