
In my last blog post I described the basic set of serverless services provided by AWS, which can be used to create scalable, highly available, and performant cloud architectures. This blog post presents some scenarios showing how you can use these services to create serverless applications.

A simple use case for serverless applications on AWS is hosting a static website. Let’s assume we want to host a simple website on which registered users can customize their user attributes, which are exposed under the /user endpoint.

Hosting a static website

[Figure: Hosting a static website]

The website is hosted on Simple Storage Service (S3) using S3’s static website hosting. The website’s HTML, CSS, and JavaScript files are stored on S3 and can be served under a given domain. The user retrieves the static website directly from S3 (1), and the JavaScript code checks whether the user is logged in, e.g., by checking whether a token is stored in a cookie.
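As a quick sketch of the setup step, static website hosting can be enabled on an existing bucket with boto3; the bucket name and document names below are placeholders, and in a real setup the bucket must also allow public reads or sit behind a CDN.

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name.
BUCKET = "www.example.com"

# Enable static website hosting with an index and an error document.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload the site's entry point with the correct content type.
s3.upload_file(
    "index.html", BUCKET, "index.html",
    ExtraArgs={"ContentType": "text/html"},
)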

If the user is not authenticated, they are redirected to Cognito’s customizable login page, where they can sign in either through an identity provider like Google or Facebook or with their Cognito user pool credentials (2). After successful authentication, Cognito returns user pool tokens to your application. You can use these tokens to grant your user access to your own server-side resources, or exchange them for temporary AWS credentials to access other AWS services, like the API Gateway. The user pool token handling and management for your web or mobile app is provided on the client side through the Cognito SDK, which is available for all common client frameworks.
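The hosted login page and the client SDKs hide the token exchange, but as a rough illustration of what a sign-in returns, the same user pool flow can be exercised with boto3; the app client ID and credentials are placeholders, and the USER_PASSWORD_AUTH flow must be enabled on the app client.

import boto3

cognito = boto3.client("cognito-idp")

# Placeholder app client ID and user credentials.
response = cognito.initiate_auth(
    ClientId="example-app-client-id",
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "alice", "PASSWORD": "correct-horse"},
)

# On success, Cognito returns ID, access, and refresh tokens.
tokens = response["AuthenticationResult"]
access_token = tokens["AccessToken"]
id_token = tokens["IdToken"]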

To query the user attributes, the client makes a GET request to the /user endpoint with the Cognito access token added to the request’s authorization header (3). The API Gateway, in conjunction with Cognito, automatically checks whether the token is valid (4). If it is, the API Gateway makes an integration request to the backend. The endpoint is configured to trigger an associated Lambda function (5). This Lambda function is invoked with a request context that contains all relevant information about the initial request. The function processes the GET request and retrieves the user attributes from DynamoDB (6). These attributes are returned to the API Gateway as the integration response, which is forwarded back to the client, which in turn can display the user attributes accordingly. Since the Lambda function knows the context of the request, it can handle PUT, POST, or DELETE requests separately to update the database table containing the user attributes.
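A minimal sketch of such a function, assuming a Lambda proxy integration, a Cognito authorizer, and a hypothetical DynamoDB table named "users" keyed by the Cognito user ID:

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("users")  # hypothetical table name

def handler(event, context):
    # With a Cognito authorizer, API Gateway passes the verified
    # token claims in the request context.
    user_id = event["requestContext"]["authorizer"]["claims"]["sub"]

    if event["httpMethod"] == "GET":
        result = table.get_item(Key={"userId": user_id})
        attributes = result.get("Item", {})
        # default=str handles DynamoDB's Decimal number type.
        return {"statusCode": 200, "body": json.dumps(attributes, default=str)}

    if event["httpMethod"] == "PUT":
        attributes = json.loads(event["body"])
        table.put_item(Item={"userId": user_id, **attributes})
        return {"statusCode": 204, "body": ""}

    return {"statusCode": 405, "body": "Method not allowed"}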

With that architecture in place, you have a very typical serverless application providing a REST API to handle user state that is persisted in a database. The API is protected against unauthorized access. Users can sign up, reset their password, and log in using Cognito’s customizable UI. For production use, you could also add services like Amazon CloudFront as a content delivery network (CDN) to reduce latency for users spread around the world, or AWS Shield to protect against DDoS attacks.

Serverless data pipeline

Given this simple example, we will now look at a more sophisticated scenario showing how serverless services can be combined into powerful applications. We would like to create a scalable and fault-tolerant data pipeline with AWS Lambda functions and Kinesis Streams. The data pipeline should receive data from different sources, like frontend applications or other backend systems, and send it as events to different destinations, like databases or other third-party services. This architecture combines several serverless services into the basic data collection workflow.

[Figure: Serverless data pipeline]

The data that we want to collect comes from two different sources: end-user devices such as web browsers, frontend applications, or mobile clients, and backend servers. The API Gateway is used to expose a REST endpoint for all frontend applications (1).
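From the client’s perspective this is an ordinary HTTPS call; sketched here in Python for brevity, with a placeholder endpoint URL and event payload.

import requests

# Hypothetical API Gateway endpoint of the event-collection API.
ENDPOINT = "https://api.example.com/events"

response = requests.post(
    ENDPOINT,
    json={"userId": "alice", "type": "pageview", "source": "web"},
    timeout=5,
)
response.raise_for_status()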

The backend sends its data events directly to Kinesis Streams using the KPL (Kinesis Producer Library) (2). Kinesis Streams is a great service for collecting and processing large streams of data records in real time because it ensures scalability and prevents data loss. It is used to decouple the data ingestion tier from the data processing tier in order to gain more flexibility and fault tolerance.
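The KPL is a Java library with batching and retry logic built in; as a simplified stand-in, the basic put operation looks like this with boto3 (the stream name and event shape are assumptions):

import json
import boto3

kinesis = boto3.client("kinesis")

def send_event(event: dict) -> None:
    # The partition key controls shard assignment, so a per-user key
    # keeps each user's events in order within a shard.
    kinesis.put_record(
        StreamName="event-pipeline",  # hypothetical stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["userId"],
    )

send_event({"userId": "alice", "type": "signup", "source": "backend"})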

All data produced by our event sources is sent to the Kinesis data stream (3) and stored until it gets processed. AWS Lambda can be configured as a consumer of this data stream: it invokes the associated function synchronously, passing events containing stream records (4).
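A Kinesis-triggered function receives batches of base64-encoded records; a minimal consumer skeleton might look like this:

import base64
import json

def handler(event, context):
    # Each invocation carries a batch of records from one shard.
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        data = json.loads(payload)
        print(f"received event: {data}")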

In our example, this Lambda function is responsible for routing the events coming from the Kinesis stream to several destination services based on a specific set of rules. These rules could be declared in a JSON document stored in an S3 bucket. This way, the rules can be reused and updated at any time without editing the function code.

The Lambda function fetches the rules file from S3 to evaluate the next destination for the received event (6). Each rule defines a target Lambda function to which the event should be forwarded, based on metadata within the event. These functions are invoked asynchronously by the routing Lambda depending on the event attributes and work like connectors to the various event destinations (5). They provide the logic to connect to specific destination services like DynamoDB (7) or any other third-party web service (8).
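Putting steps (5) and (6) together, the routing function could look roughly like the sketch below; the bucket name, rules key, rule schema, and target function names are all assumptions.

import base64
import json
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

RULES_BUCKET = "pipeline-config"   # hypothetical bucket
RULES_KEY = "routing-rules.json"   # e.g. [{"type": "signup", "target": "dynamodb-connector"}]

def load_rules() -> list:
    obj = s3.get_object(Bucket=RULES_BUCKET, Key=RULES_KEY)
    return json.loads(obj["Body"].read())

def handler(event, context):
    rules = load_rules()
    for record in event["Records"]:
        data = json.loads(base64.b64decode(record["kinesis"]["data"]))
        for rule in rules:
            if rule["type"] == data.get("type"):
                # InvocationType "Event" invokes the connector
                # function asynchronously.
                lambda_client.invoke(
                    FunctionName=rule["target"],
                    InvocationType="Event",
                    Payload=json.dumps(data).encode("utf-8"),
                )

In practice you would cache the rules document across invocations instead of fetching it from S3 for every batch.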

While each Lambda function should implement its own retry strategy, some events may still not be successfully received by the destination service due to, e.g., connection problems or missing data. For such cases we implement a fallback strategy based on a dead letter queue (DLQ). Lambda can be configured to discard events that could not be processed successfully and to store them in a DLQ. The DLQ can be realized with either SQS or SNS and is configured to invoke a separate Lambda function for every element added to the queue. That way, you can process all failed events and add more fault tolerance to the data pipeline.
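Assuming the DLQ is realized with SNS, the separate function subscribed to the topic receives the failed payload in the standard SNS event format; a minimal sketch:

import json

def handler(event, context):
    for record in event["Records"]:
        # SNS wraps the original, failed invocation payload in Message.
        failed_event = json.loads(record["Sns"]["Message"])
        # Here you could alert, persist the event for later inspection,
        # or retry delivery with backoff.
        print(f"dead-lettered event: {failed_event}")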

Conclusion

AWS provides many services for building cloud infrastructures. Solution architects and developers can choose from a wide range of tools and services to design backend applications for every kind of workload in a very flexible and effective way. To access the services, you can always use the AWS Management Console UI, the command-line interface, or the software development kits (SDKs), which support a wide range of programming languages. The APIs are easy to use, and the documentation is well-structured and complete. By using these fully managed services, you can quickly create powerful prototypes and solid production solutions in a very cost-effective manner, without upfront investments or infrastructure contracts.
