Jetruby Blog

Serverless Computing: The Story of Success

The story began when we got a project from a client who wanted to try new approaches to application development.

He wanted to build an app using serverless computing without spending a huge amount of money on it.

The main goal of the project was to automate document management, with the ability to import document templates and export data into those templates. Additionally, it needed analytics and data aggregation in order to produce various kinds of metrics.

The target audience of the application were people who work in the public sector. That's why it was crucial to make all the data as secure as possible.

The Implementation Process

Backend

On the backend side, we implemented authentication using AWS Cognito, which covered the full authentication flow.

Additionally, we used OpenID Connect with an access token and a refresh token, both with restricted lifetimes. The token renewal mechanism we developed was built on AWS Cognito, which allowed us to track token expiration and control access.
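Cognito issues tokens as JWTs whose payload carries an `exp` claim. A minimal sketch of deciding when a token needs refreshing, under our own naming (this is an illustration of the expiration check, not the project's actual code):

```javascript
// Decode a JWT payload (signature verification happens server-side)
// and decide whether the access token should be refreshed.
function isTokenExpired(jwt, skewSeconds = 30) {
  const payload = JSON.parse(
    Buffer.from(jwt.split('.')[1], 'base64').toString('utf8')
  );
  // `exp` is seconds since the epoch; refresh slightly early to absorb clock skew.
  return payload.exp <= Date.now() / 1000 + skewSeconds;
}

// Build a fake token that expired an hour ago to demonstrate the check.
const payload = Buffer.from(
  JSON.stringify({ sub: 'user-1', exp: Math.floor(Date.now() / 1000) - 3600 })
).toString('base64');
const fakeToken = `header.${payload}.signature`;
console.log(isTokenExpired(fakeToken)); // true
```

When the check returns true, the client exchanges the refresh token for a new access token through Cognito.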

The mechanism of user access control was implemented with AWS IAM. This Amazon tool manages access policies; its main advantage is the high flexibility of access configuration.

Together with DynamoDB, our team could set policies so that users from one company had no way to read another company's data. All the restrictions were imposed at the level of IAM and DynamoDB. Put simply, even with direct access to the database, a user would see only the records available at their access level.
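A common way to enforce this kind of per-tenant isolation is IAM's `dynamodb:LeadingKeys` condition, which restricts reads to items whose partition key matches the caller's Cognito identity. A hypothetical policy sketch (the table name, region, and account ID are illustrative, not the project's actual configuration):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["dynamodb:GetItem", "dynamodb:Query"],
    "Resource": "arn:aws:dynamodb:eu-central-1:123456789012:table/Documents",
    "Condition": {
      "ForAllValues:StringEquals": {
        "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
      }
    }
  }]
}
```

With a policy like this, a query whose partition key is not the caller's own identity is rejected by IAM before it ever reaches the table.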

AWS Lambda allowed the team to lower infrastructure spending and avoid DevOps costs.

However, it was just the beginning.

We thoroughly studied data-organization techniques for NoSQL databases. As a result, after the first release we fully rebuilt the structure, which had been organized as separate tables per company.

We decided to collect all the data into a single table and control access levels with AWS IAM. All the data was reorganized so that selections could be made for every use case. In turn, this sped up work with DynamoDB and let us fetch only the data we really needed. As a result, it reduced the monthly cost of the service.

Nevertheless, this was not the final optimization for reducing the overall cost of the infrastructure.

DynamoDB pricing is fully bound to read capacity units (RCU) and write capacity units (WCU): you pay for the traffic you read from and write to the database. Consequently, we were interested in minimizing interaction with the database.

We developed a mechanism for caching read requests to the database, along with cache invalidation when the related data changed. The cache was partitioned by entity type. Since invalidation affected only a small share of the data, most responses could be served instantly from the cache inside the Lambda functions, which is almost free.
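The caching-by-entity-type idea can be sketched as a simplified in-memory model (the class and method names here are our own, not the project's actual code):

```javascript
// In-memory read-through cache partitioned by entity type, so a write
// to one entity type invalidates only that partition.
class EntityCache {
  constructor(fetcher) {
    this.fetcher = fetcher;      // fetcher(type, id) performs the real database read
    this.partitions = new Map(); // type -> Map(id -> cached value)
  }

  async get(type, id) {
    let partition = this.partitions.get(type);
    if (!partition) {
      partition = new Map();
      this.partitions.set(type, partition);
    }
    if (!partition.has(id)) {
      partition.set(id, await this.fetcher(type, id)); // cache miss: one read
    }
    return partition.get(id); // cache hit: no RCU spent
  }

  invalidate(type) {
    this.partitions.delete(type); // drop only the affected entity type
  }
}
```

On a write, the handler calls `invalidate` for the entity type it touched; reads of all other types keep hitting the cache.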

Front-end

The front-end applications were developed with React + Redux. Hosting on S3 is one of the ways to minimize spending on hosting.

At the same time, the client side of a web application carries far more logic than apps did 10 years ago. Even so, for most applications it is still just a set of static files, so any plain storage capable of serving those files on request is enough; the rest of the work happens in the browser.

Nevertheless, we wanted to secure user data on the client side. We used localStorage to keep information about the current session, knowing that it wasn't an absolutely secure solution: any JavaScript on the page has API access to the data in localStorage.

Our team studied ways of securing localStorage as well as the existing solutions. Starting from an open-source solution, we learned its working principles and successfully implemented it in our application.

For data encryption, we chose a solution based on the AES standard. We also considered the fact that encryption keys may be compromised, so we did our best to generate them in a way that makes them impossible to distinguish from obfuscated code.

We then applied the key-derivation strategy described in the PBKDF2 standard, which derives child keys from the key presented by the client. All the data is encrypted with AES using the PBKDF2-derived key (so it cannot be decrypted with the parent key alone). Finally, the encrypted files are compressed with zip.

Following Redux best practices, we optimized the state of the store so that data cannot be duplicated while access to it stays as fast as possible.
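Normalizing state along the lines the Redux docs recommend (a `byId` map plus an ordered `allIds` list) is one way to get both deduplication and fast lookups. A sketch with hypothetical names:

```javascript
// Normalize a list of documents into the { byId, allIds } shape recommended
// by the Redux docs: each entity is stored exactly once, and lookups by id
// are O(1) map accesses instead of array scans.
function normalizeDocuments(documents) {
  const byId = {};
  const allIds = [];
  for (const doc of documents) {
    if (!(doc.id in byId)) allIds.push(doc.id); // keep order, skip duplicate ids
    byId[doc.id] = doc;                         // later copies overwrite earlier ones
  }
  return { byId, allIds };
}
```

Components that need ordering iterate `allIds`; components that need a single entity read `byId[id]` directly.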

ElasticSearch

ElasticSearch was used for analytics and data aggregation in reports and other documents. We didn't use DynamoDB for data aggregation and collecting statistical metrics, because such requests require reading a large amount of data from storage, which drives up the cost of the service. Instead, we used DynamoDB Streams to index data in ElasticSearch as records are created in the database.
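A Lambda subscribed to a DynamoDB stream receives batches of records whose `NewImage` is in DynamoDB's attribute-value format. A sketch of the transformation step, with the indexer passed in as a placeholder for the actual ElasticSearch client call (all names here are our own):

```javascript
// Minimal converter for DynamoDB's attribute-value format: { S: '...' } / { N: '...' }.
function unmarshall(image) {
  const doc = {};
  for (const [field, value] of Object.entries(image)) {
    if ('S' in value) doc[field] = value.S;
    else if ('N' in value) doc[field] = Number(value.N);
  }
  return doc;
}

// Convert INSERT records from a DynamoDB stream into plain documents and hand
// them to an indexer (in production, a bulk call to ElasticSearch).
async function handler(event, indexDocument) {
  for (const record of event.Records) {
    if (record.eventName === 'INSERT') {
      await indexDocument(unmarshall(record.dynamodb.NewImage));
    }
  }
}
```

Because indexing happens off the stream, the write path to DynamoDB stays cheap and analytics queries never consume the table's read capacity.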

Infrastructure

To deploy the infrastructure connected with Lambda, we used Serverless, an open-source framework with a low barrier to entry for developers.

The framework uses an Infrastructure as Code approach and supports several cloud platforms (AWS, Google Cloud Platform, Azure, Kubernetes). Management is done through YML configs and a CLI.

Serverless includes all the functions needed for creating infrastructure based on AWS CloudFormation. At the same time, a developer can still use all the functions of the chosen platform. In our case, it's AWS.
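A minimal `serverless.yml` illustrating the approach (the service name, handler paths, and runtime are invented for the example, not the project's actual config):

```yaml
service: documents-api

provider:
  name: aws
  runtime: nodejs14.x
  region: eu-central-1

functions:
  createDocument:
    handler: src/documents.create
    events:
      - http:
          path: documents
          method: post
```

Running `serverless deploy` turns this config into a CloudFormation stack: the API Gateway route, the Lambda function, and its IAM role are all created from the one file.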

Let's look at AWS. Here, Serverless generates a CloudFormation config to deploy the infrastructure and a bundle of Lambda functions stored in an S3 bucket. Even if we later stopped using the open-source solution, we would still have access to the scripts in AWS's native syntax.

The technologies we used:

  • AWS Lambda
  • AWS DynamoDB
  • AWS DynamoDB Streams
  • Serverless
  • React, Redux
  • AntDesign
  • Node.js
  • ElasticSearch + Kibana
  • AWS API Gateway
  • AWS Cloudfront
  • AWS S3

The project was implemented by:

  • 2 full-stack developers
  • 1 Project Manager
  • 1 Quality Assurance

Timeline of the project:

 2018.03 -> 2018.15

Infrastructure cost:

The whole infrastructure cost approximately $15 per month.

We also included the cost of ElasticSearch in the spending: a t2.small.elasticsearch instance cost about ~$30.

The approximate cost of infrastructure with a Serverless approach (Frankfurt region):

  • Lambda + DynamoDB + S3 + Cognito + Cloudfront + API Gateway + DynamoDB Streams: ~$15
  • t2.small.elasticsearch: ~$30
  • Total: ~$45

The cost of a standard solution (Frankfurt region):

  • 1 EC2 t2.small instance + 1 RDS t2.small instance: ~$53
  • VPC / Internet Gateway / Route Tables / Security Groups: ~$20-25
  • t2.small.elasticsearch: ~$30
  • Total: ~$103
This post has been contributed by JetRuby Agency.