Serverless Cloud: Amazon AWS vs. Google Cloud vs. Microsoft Azure

With the help of AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions, a little business logic can do a lot.

If you've ever been woken up at 3 a.m. by a server failure, you'll understand the appeal of the buzzword "serverless." Machines take hours, days, and sometimes even weeks to configure, and they need frequent updates to patch bugs and security holes. Those updates often cause trouble of their own, because one update turns out to be incompatible with another, and the cycle never seems to end.

This annoying, endless loop of server care and feeding is one reason the big cloud companies are pushing "serverless" architectures. They know that bosses have been hearing "it's the server, it's the server" as an excuse for years, and those bosses can't help wondering whether the servers could simply go away.

It's a good marketing slogan; the only problem is that it isn't strictly true. These applications are serverless in the way a restaurant meal is kitchenless. Sitting in a restaurant is great if what's on the menu is what you want and you like the way the chef prepares it. But if you want a different dish, or a different flavor, you'd better have your own kitchen.

The three giants Amazon, Google, and Microsoft are vying to host the mainstream applications of the future, hoping we'll write them against their proprietary serverless APIs and manage them through their layers of automation. If the platforms live up to our requirements and the new model becomes commonplace, they'll undoubtedly be the easiest and fastest way to create the next multi-billion-dollar unicorn web app: we write a small amount of critical logic and the platform handles all the details.

Serverless functions are becoming the glue, or scripting language, that ties all of the cloud's features together. Mapping or AI tools that were once relatively independent are now linked through event-driven serverless functions. Much of what we do today can be handled by responding to event triggers from various parts of the cloud. If we want to experiment with machine learning on our data, the fastest way is to create a serverless application and send events to the machine-learning corner of the cloud.

The key here is that everything is sliced more finely, making it easier to share resources in the cloud. In the past, everyone was frantically creating new instances of Ubuntu running on their own virtual machines, all using the same operating system, heavily replicated on real servers carved into many virtual Ubuntu servers. Serverless operation avoids this duplication and thereby drastically reduces the cost of cloud computing, especially for jobs that run only sporadically and would otherwise leave an old server sitting idle in the machine room.

Of course, there are hidden costs behind all this convenience. If you ever want to migrate your code to another site, you may be terrified to find you need to rewrite most of the stack. The APIs are different, and even though popular languages like JavaScript are standardized, the way your functions are wired into each platform is close to proprietary. In other words, vendor lock-in is extremely likely.

To get a feel for the appeal of the serverless option, I spent some time building a few functions and putting them on each stack. I didn't write much code, but that's the point: I spent more time clicking buttons and filling out web forms to configure everything. Remember when we configured everything with XML and JSON? Now we fill out a web form and the cloud does the rest. Still, we have to think like programmers, working out what's going on inside the cloud and what's out of our control.

AWS Lambda

AWS Lambda is growing into a shell-scripting layer for Amazon's entire cloud. As an underlying system, it lets embedded functions respond to events generated by almost any part of Amazon's infrastructure. If a new file is uploaded to S3, we can have it trigger a function that does something interesting with that file. If a video is being transcoded with Amazon Elastic Transcoder, we can have a Lambda function wait for the job to finish and then fire in turn. These functions can trigger other Lambda functions, or simply send someone an update.
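Here is a minimal sketch, in Node.js, of what such an S3-triggered function might look like; the bucket and the event notification that invokes the function are assumptions configured separately in the AWS console.

    // Minimal sketch of an S3-triggered Lambda handler (Node.js). The bucket
    // and the notification wiring are assumed to be set up in the console.
    exports.handler = async (event) => {
      for (const record of event.Records) {
        const bucket = record.s3.bucket.name; // bucket that raised the event
        const key = record.s3.object.key;     // key of the newly uploaded file
        console.log(`New file uploaded: s3://${bucket}/${key}`);
        // ...do something interesting with the new file here...
      }
      return { status: 'done' };
    };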

You can write Lambda functions in JavaScript (Node.js), Python, Java, C#, Go, and more. And because those runtimes can embed or call into other languages, it's possible to run code written in Haskell, Lisp, or even C++.

Since Amazon provides so many options for configuration and optimization, writing Lambda functions turned out to be more complicated than I expected. While it's technically true that you can do a lot with just a few lines of code, I had to budget extra time for configuring how that code runs. Much of the work is done by filling out forms in the browser rather than typing into text files; at times it feels as though we've merely swapped the text editor for a browser-based form. That, however, is the price of all the flexibility Amazon wants to put in the hands of Lambda users.

Some of the extra steps come from Amazon giving users more options and expecting more people to be writing functions for the first time. On Google or Microsoft, once I've written a function I can point my browser at the right URL and test it immediately. On Amazon, I first have to click through configuring an API Gateway and make sure the right access controls are in place.

In the end, all this clicking adds a helper layer that's easier to get started with than a text file. After I created one function, the browser warned me that the function contains an external library. In the days of pure Node, that's something I would have had to discover for myself, googling the error message and crossing my fingers that the answer turned up on the results page. Now the cloud tells us.

If serverless simply means being freed from server management, then Amazon has plenty of other options that are just as "serverless" as AWS Lambda. It also offers elastic tools such as EC2 Auto Scaling, AWS Fargate, and AWS Elastic Beanstalk: EC2 Auto Scaling and Fargate start and stop servers for us, while Elastic Beanstalk takes uploaded code, deploys it to web servers, and handles the load balancing and scaling. With these automation tools, of course, we're still responsible for creating the server image.

AWS Step Functions is a helpful complement: a no-code flowchart tool for building the state machines that software architects call workflows. Part of the problem is that serverless functions are stateless, which works fine for executing basic business logic but causes trouble when you're walking a client through a checklist or flowchart and have to keep going back to the database to reload information about that client. Step Functions ties Lambda functions and state together.
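As a rough sketch, a Step Functions workflow is described in Amazon States Language; written out as a JavaScript object it might look like the following, where the state names and Lambda ARNs are placeholders rather than real resources.

    // Rough sketch of a two-step workflow in Amazon States Language,
    // written as a JavaScript object for readability. Placeholder ARNs.
    const checklistWorkflow = {
      StartAt: 'LoadClientRecord',
      States: {
        LoadClientRecord: {
          Type: 'Task',
          Resource: 'arn:aws:lambda:us-east-1:123456789012:function:loadClientRecord',
          Next: 'SendNextChecklistItem'
        },
        SendNextChecklistItem: {
          Type: 'Task',
          Resource: 'arn:aws:lambda:us-east-1:123456789012:function:sendNextChecklistItem',
          End: true
        }
      }
    };

The state machine, not the individual functions, is what remembers where each client is in the checklist.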

Google Cloud Functions with Firebase

If your goal is to get out of the business of configuring servers, Google Cloud offers a number of services that free you from ever supplying a root password or even touching the command line.

Starting with Google App Engine in 2008, Google has slowly added different "serverless" options offering various combinations of messaging and data transparency. Google Cloud Pub/Sub hides the message queue from us, so we only need to write code for the data producers and consumers. Google Cloud Functions provides event-driven computing for many of Google's major products, including its marquee tools and APIs. Meanwhile, Google's Firebase lets us mix JavaScript code into a data storage layer that delivers data straight to the client.
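A minimal sketch of the consumer side, assuming a background Cloud Function subscribed to a hypothetical Pub/Sub topic chosen at deployment time:

    // Minimal sketch of a background Cloud Function acting as a Pub/Sub
    // consumer. The topic it subscribes to is an assumption, set on deploy.
    exports.handleMessage = (message, context) => {
      // Pub/Sub delivers the payload as a base64-encoded string.
      const payload = message.data
        ? Buffer.from(message.data, 'base64').toString()
        : 'no payload';
      console.log(`Received message ${context.eventId}: ${payload}`);
    };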

Of these, Firebase is what interests me the most. Some people think of databases as the original serverless applications: they abstract away data structures and disk storage and serve everything up through a TCP/IP port. Firebase takes that abstraction to the extreme by adding JavaScript code and notifications to handle much of what you'd otherwise build server architecture for, such as authentication. Technically it's just a database, but it can carry a lot of your business logic and push notifications up and down the stack. You can build much of an application out of little more than client-side HTML, CSS, and JavaScript plus Firebase.

You might be tempted to call Firebase's JavaScript layer "stored procedures," as Oracle does, but that would miss the point. The Firebase code is written in JavaScript, so it runs on a local version of Node.js. You can embed a lot of business logic in this layer, because Node already has libraries for handling these workflows. And you get the convenience of homogeneous code running on the client, the server, and the database.
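As a sketch of what that business logic might look like, here is a hypothetical Firebase function (using the firebase-functions SDK) that reacts when a client writes a new record under an assumed /messages path:

    // Hypothetical Firebase function: runs whenever a client writes a new
    // record under an assumed /messages path in the Realtime Database.
    const functions = require('firebase-functions');
    const admin = require('firebase-admin');
    admin.initializeApp();

    exports.onNewMessage = functions.database
      .ref('/messages/{messageId}')
      .onCreate((snapshot, context) => {
        const message = snapshot.val() || {};
        const cleaned = (message.text || '').trim();
        // Store a tidied copy alongside the original entry (illustrative only).
        return admin.database()
          .ref(`/cleanMessages/${context.params.messageId}`)
          .set(cleaned);
      });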

Another thing that appeals to me is Firebase's sync layer, which synchronizes copies of items from the database across the network. With it, we can treat our client application as just another database node and subscribe to all changes to the relevant data (or only the relevant data). If the data changes in one place, it changes everywhere. We no longer have to push out notifications ourselves; we just write the data to Firebase, and Firebase copies it wherever it's needed.
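On the client side, subscribing to that sync layer might look something like this sketch, which assumes the classic Firebase web SDK is already loaded and initialized, and uses a hypothetical items path and UI helper:

    // Client-side sketch of the sync layer (classic Firebase web SDK assumed).
    // The 'items' path and the renderItems() helper are hypothetical.
    const itemsRef = firebase.database().ref('items');

    // Fires once with the current data and again after every change,
    // no matter where in the world that change was written.
    itemsRef.on('value', (snapshot) => {
      renderItems(snapshot.val());
    });

    // Writing locally is all we have to do; Firebase propagates the change
    // to every other subscribed client.
    itemsRef.push({ text: 'hello', createdAt: Date.now() });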

We don't need to focus solely on Firebase. The more basic Google Cloud Functions are a simpler option for embedding custom code throughout Google Cloud. For now, Cloud Functions is a good choice for writing Node.js code that runs in a pre-configured Node environment. While Google Cloud Platform as a whole supports languages such as Java, C#, Go, Python, and PHP, Cloud Functions is limited to JavaScript and Node. There are signs that other languages will be supported, and I wouldn't be at all surprised if that happens.

Google Cloud Functions isn't woven as deeply into Google Cloud as AWS Lambda is into AWS, at least not yet. When I tried to create a function to interact with Google Docs, I found I needed to go through its REST API and write code in Apps Script. In other words, Google Docs has its own REST API, and Google was experimenting with the concept long before the term "serverless" was coined.

It is worth noting that Google App Engine has kept up a good head of steam. In the beginning it offered only Python applications serving the needs of website visitors; after years of development it now handles many different language runtimes. Once the code is bundled into an executable, App Engine steps in whenever a user sends a request, spinning up enough nodes to handle the traffic and expanding or contracting as demand changes.

Nonetheless, there are still hurdles to overcome. As with Cloud Functions, our code must be written in a relatively stateless way, and each request must complete within a limited time. But App Engine doesn't throw away all the helpers, nor does it forget everything that happens between requests. While App Engine is a big part of the serverless revolution, it remains the better path for those who still build their stacks the old way in Python, PHP, Java, C#, or Go.

Microsoft Azure Functions

Of course, Microsoft is working just as hard as its rivals to bring the convenience of serverless to users of its Azure cloud. It has built its own basic functions, Azure Functions, along with tools so well designed they might be worth half a programmer.

Microsoft's biggest strength may be its Office applications, which once ran on the desktop and are now gradually migrating to the cloud. In fact, one big reason Microsoft's cloud revenue has surpassed Amazon's is that Office revenue is counted as cloud revenue.

One of the best examples in the Azure Functions documentation shows how a cloud function can be triggered when a user saves a spreadsheet to OneDrive: elves in the cloud suddenly spring to life and process the spreadsheet. This is a godsend for IT support teams that live on Excel spreadsheets (or other Office documents); they can do almost anything by writing Azure Functions. And while we tend to think of HTML and the web as the cloud's only interface, there's no reason the work can't flow through document formats like Microsoft Word or Excel instead.
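A sketch of the Azure side of such a function, in JavaScript; the trigger here is a generic file/blob binding, and the wiring from OneDrive to that binding is an assumption configured separately (in function.json or through a connector):

    // Sketch of an Azure Function (JavaScript) that wakes when a new file
    // lands in storage. The binding name "newFile" and the OneDrive wiring
    // are assumptions configured in function.json or a connector.
    module.exports = async function (context, newFile) {
      context.log(`New spreadsheet received, ${newFile.length} bytes`);
      // ...parse the spreadsheet and kick off whatever the team needs...
    };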

What appealed to me about Azure's Logic Apps is that they let us simply fill out a form instead of fretting over semantics and syntax. We still have to think like programmers and make informed decisions about abstractions and data, but we spend more time filling out forms than writing "code."

Like Amazon's Step Functions, Logic Apps are deliberately built to encode "workflows." Because some state is available, a workflow is slightly more sophisticated than a function in the usual sense. We still write logic connecting the various functions and connectors in a flowchart-like fashion, but we no longer have to spell it out in a formal programming language.

The biggest advantage of Logic Apps is their pre-built "connectors," which reach into a wide range of Microsoft and third-party applications. We can efficiently push data to and pull data from services such as Salesforce, Twitter, and Office 365. These connectors are of great value to corporate IT staff, who can now wire up external tools by writing Logic Apps much the way they used to write shell scripts.

Another interesting piece of Azure is Azure Cosmos DB, a database that is both NoSQL and SQL at the same time. Microsoft replicated the APIs for Cassandra and MongoDB, so we can push and pull information without rewriting our Cassandra or MongoDB code; if we'd rather write SQL, we can do that too. Cosmos DB also keeps things straightforward by indexing everything so queries run faster. If we have a lot of SQL and NoSQL code that we want to work together, Cosmos DB gives it a central home, while leaving the door open for different solutions in the future.
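Here's a sketch of what that reuse looks like with the standard Node.js MongoDB driver; only the connection string, a placeholder below, points at Cosmos DB instead of a MongoDB server:

    // Sketch of reusing plain MongoDB code against Cosmos DB. Only the
    // connection string changes; the one below is a placeholder.
    const { MongoClient } = require('mongodb');

    const uri = 'mongodb://my-account:<key>@my-account.mongo.cosmos.azure.com:10255/?ssl=true';

    async function run() {
      const client = await MongoClient.connect(uri);
      const orders = client.db('shop').collection('orders');
      await orders.insertOne({ sku: 'abc-123', qty: 2 }); // same driver calls as MongoDB
      console.log(await orders.findOne({ sku: 'abc-123' }));
      await client.close();
    }

    run().catch(console.error);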

A Comparison of the Three Serverless Clouds

Which serverless platform is best for us? While the effort of writing basic functions is similar across the three platforms, there are differences. The most obvious is probably the choice of languages, since each platform has its own favorites beyond the common support for Node.js and JavaScript. It's no surprise that C# is available on Microsoft Azure, though it is a little surprising that Azure is the only platform supporting F# and TypeScript. Amazon adds Java, C#, Python, and Go. And although Google's App Engine supports many languages, Google's Cloud Functions are currently limited to JavaScript.

The hardest part of comparing these serverless clouds is comparing price and speed, because so much is hidden behind the scenes. When I spin up VM instances, I can feel myself spending money, because they're priced by the hour. Now the providers slice the sausage thinner and thinner, and we can get hundreds of thousands of function calls for less than a dollar. We end up saying the word "million" the way Dr. Evil does in the Austin Powers movies.

Of course, these seemingly low prices can quickly lull us out of any budget-consciousness, just as a vacation in an unfamiliar country with a different currency does. Sure, we'll make another million database calls; it's like buying rounds at a bar in the Mexican resort town of Cancún, where we can't quite work out the real price in our heads.

When the cloud sells us raw virtual machines, we can evaluate them in terms of RAM and CPU, but on a serverless platform there's almost nothing to measure at all.

It's also worth noting that the serverless model forces us to keep our data in the local cloud's databases, since we aren't really allowed to maintain arbitrary state in our code. We have to trust those backends. Our functions must run without caching or local configuration, because new instances are constantly coming and going. So database glue code ends up filling our functions, like the vines in the Upside Down of the TV series Stranger Things.

The only practical way to compare costs is to build the same application on all three platforms, and that is a challenging task. Since they all run Node.js, it is possible to move some code among the three, but there are still differences to live with (for example, we can handle HTTP requests directly on Microsoft's and Google's platforms, while on AWS we have to go through an API Gateway).
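To make that difference concrete, here are hedged sketches of the same trivial "hello" endpoint in each platform's JavaScript style; the names are illustrative only:

    // Google Cloud Functions: an Express-style request and response.
    exports.hello = (req, res) => res.send('hello');

    // Azure Functions: a context object plus the HTTP request.
    module.exports = async function (context, req) {
      context.res = { body: 'hello' };
    };

    // AWS Lambda behind API Gateway: an event object in, a response object out.
    exports.handler = async (event) => ({ statusCode: 200, body: 'hello' });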

The good news is that we don't need to obsess over it. In my experience, many basic applications use very few resources, and we can do plenty on the free tiers all three platforms offer developers with no money in their pockets. In terms of overall spending, the serverless model really can save us money: unless we're the type who runs servers at full capacity 24/7 with free air conditioning, going serverless will cost less. Whether the price is $1 per million calls or $1.50, either way it's cheap; that much is indisputable.
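A back-of-envelope sketch, using the illustrative $1-per-million figure above and an assumed traffic level, shows why the arithmetic tends to favor serverless for light workloads:

    // Back-of-envelope sketch; the traffic level and the $1-per-million rate
    // are illustrative assumptions, and real bills also include compute time.
    const callsPerMonth = 5000000;                      // assumed traffic
    const pricePerMillionCalls = 1.0;                   // USD, figure from the text
    const functionBill = (callsPerMonth / 1000000) * pricePerMillionCalls;
    console.log(`Roughly $${functionBill} per month`);  // about $5 per month
    // A small VM left running 24/7 at an assumed $0.05 per hour would cost
    // about 24 * 30 * 0.05 = $36 per month before serving a single request.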

There is a deeper problem here. If we end up unhappy with any of these three cloud platforms, we're in real trouble. Pulling code out of them to run on commodity servers, the way we can with Docker containers, is nearly impossible. If we're lucky, we can replicate the same basic schema and the underlying JavaScript functions, but after that we'll be rewriting database glue code everywhere, because all three companies have their own proprietary data storage layers.

There's also some ambiguity about what happens in an operational failure. When we run our own servers, our bosses can throttle us if something goes wrong; what happens when a serverless platform fails is less clear. Google's page carries this warning: "This is a beta release of Google Cloud Functions. This API may be changed in incompatible ways and is not subject to any SLA or deprecation policy."

Amazon's terms of service have improved considerably since it first entered this space, but they still contain a warning worth keeping in mind: roughly, if your function hasn't been used for more than three (3) months, Amazon may give 30 days' notice and then remove it from AWS Lambda without any liability. If we want our functions to stay available, we need to make sure they keep running. The caution is understandable (I know my old Lambda functions will never be used again), but it shows how much control we're giving up.

Microsoft offers a service-level agreement for Azure services that promises financial compensation for downtime in the form of service credits. Do those promises still apply when our function doesn't run? Probably, provided we don't stray into the service's test features. If we're building something mission-critical rather than a chat room for kids, it's worth taking the time to study these terms.

In many cases, the real decision comes down to comparing Amazon's, Google's, and Microsoft's broader services and capabilities, with the function layer itself almost beside the point. If the users we support have a soft spot for Office apps, the ability to trigger Azure Functions from Office files in OneDrive will be very appealing. Google's Firebase makes it easier to add support services such as notifications and authentication to web applications through functions. AWS Lambda connects to a huge range of Amazon services, though that breadth also ties us more tightly to Amazon.

Mashing up these clouds and functions is technically possible, since they all speak the same PUT-and-GET language of HTTP API calls, and there's no rule saying that microservices combining the strengths of all three clouds can't live in one application. But the latency can make us give up, as packets leave the local cloud and travel across the open, sprawling internet. On top of that, small differences in parsing and structure nudge us toward settling on a single company's platform.
