The 8 flavors of serverless: How to choose wisely

Understanding the term serverless isn’t easy. By design, the definition describes what the technology lacks, and even then the claim is wrong. There are indeed servers in the mix, handling the computation and juggling the bits as the LEDs flash and the fans whir. It’s just that they’re out of sight, behind a curtain like the Wizard of Oz.
In this case, the “server” that’s missing doesn’t refer to hardware—it signifies all the worries involved with keeping the hardware running. These include chores such as updating the operating system, configuring the firewall, and fiddling with the drivers. The idea behind serverless is that it takes a weight off the shoulders of the customer.
System architects and developers like the idea of letting the platform handle as many of the particulars as possible. Serverless options promise to save developers and their support team all the work of building out and configuring the hardware. Humans need only concentrate on writing their own business logic because the platform promises to handle all the other details. 
Serverless options are proliferating, in part because there’s no one to patrol the term’s usage. There’s no “International Serverless Committee” along the lines of the bodies that govern the Olympics or Major League Baseball to curate the word and codify its variations. There are no serverless police watching for misappropriation of the term.
Serverless is evolving quickly as vendors attach the word to various products because they hope jumping on the buzzword bandwagon will attract attention. As options proliferate, though, some providers are starting to differentiate—by adding new terms that define things by what they are or how they work.
Here are eight serverless terms that matter—so that you can choose the right approach wisely.
The first generation of serverless tools allowed you to take your old code and hide it away behind one function that would act as a trigger. Once it was invoked, your code told the computer exactly what to do with the data.
Amazon’s Lambda functions, for instance, can be written in Java, Go, PowerShell, Node.js, C#, Python, or Ruby. If those aren’t enough choices, you can create a custom runtime for pretty much any language out there.
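For a sense of what that looks like, here’s a minimal imperative handler in Python. It follows AWS Lambda’s standard handler(event, context) convention, but the payload shape and field names are invented for the example.

```python
import json

def lambda_handler(event, context):
    """Entry point that AWS Lambda invokes when the trigger fires.

    The code spells out each step imperatively: pull out the records,
    loop over them, and build the response by hand.
    """
    records = event.get("Records", [])   # shape depends on the trigger (S3, SQS, API Gateway, ...)
    totals = {}
    for record in records:               # the classic imperative loop
        body = json.loads(record.get("body", "{}"))
        customer = body.get("customer", "unknown")   # hypothetical field names
        totals[customer] = totals.get(customer, 0) + body.get("amount", 0)

    return {
        "statusCode": 200,
        "body": json.dumps(totals),
    }
```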
The programming-language community uses the word imperative to describe code in which the programmer tells the computer exactly what to do, step by step. Some people joke that the only real difference is that imperative programmers write loops, while everyone else does not.
In the cloud world, imperative serverless tends to mean that the programmer handles more details about the way the function is applied and whether parallel processing is invoked. The serverless framework still handles the other details about the operating system, but the programmer is king inside the realm of the serverless function.
If your team regularly uses imperative languages or your application requires a number of libraries, you’ll want choices that give you more latitude to create a complex function. Your traditional imperative code can drop right into the function.
The word declarative is usually used as a way to contrast with imperative, and it’s usually associated with moving to a more abstract programming model. Instead of telling the computer exactly what to do, you simply declare what you want to happen, and the computer fills in the details. This only works when the computer has been taught enough about the problem to know how to handle the rest.
Serverless vendors are using the term when they have a sophisticated layer ready to help. Azure, for instance, offers a more “declarative” approach to some API calls. One new programming framework, Ripple, promises a simpler way to specify the problem because it is smart enough to know how and when to invoke parallel processing.
In general, declarative serverless applications can simplify your work—when you invoke them correctly.
This approach is often most attractive for simple functions that rely on libraries or features that are well supported by the serverless framework. It’s not unheard of for a developer to write just a single line of code.
If your team has a job that lines up well with the built-in powers of the declarative framework, then it can be a good match. Many common tasks, such as polling a web service or watching for database updates, are usually well represented. 
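To make the contrast concrete, here’s a minimal sketch of polling on a schedule, assuming the decorator-based Azure Functions Python programming model; the schedule and function name are placeholders. You declare when the code should run, and the platform fills in the scheduling and scale-out details.

```python
import logging
import azure.functions as func

app = func.FunctionApp()

# Declare the intent ("run this every five minutes"); the platform
# handles the polling loop, scheduling, and scale-out behind the scenes.
@app.timer_trigger(schedule="0 */5 * * * *", arg_name="timer")
def poll_web_service(timer: func.TimerRequest) -> None:
    logging.info("Checking the upstream service for new records")
```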
Some people call databases the original declarative serverless option. You present the data to the database as a quick transaction, and the database handles the job of storing it so that it can be searched quickly. Some serverless databases will also host code, sometimes called “stored procedures,” which allows for customization.
Some of the earliest cloud APIs, such as Amazon’s S3, store and retrieve bytes based on their names. You don’t need to worry about the file system, replication, or backups. These days, pretty much every database is also available as a service that’s billed per query.
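Here’s a quick sketch of that model using boto3 against S3 (the bucket and key names are hypothetical): you hand over bytes under a name and later ask for them back by the same name, with replication and durability handled behind the curtain.

```python
import boto3

s3 = boto3.client("s3")

# Store bytes under a name; S3 worries about disks, replication, and backups.
s3.put_object(
    Bucket="example-invoices",               # hypothetical bucket
    Key="2021/05/invoice-42.json",
    Body=b'{"customer": "acme", "amount": 125}',
)

# Retrieve the same bytes later simply by asking for the name.
response = s3.get_object(Bucket="example-invoices", Key="2021/05/invoice-42.json")
payload = response["Body"].read()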
Database administrators and others who value simplicity may choose to deploy a serverless database to handle all the work. If your application doesn’t require much logic or you enjoy using the embedded procedural language of your favorite database, embracing a very simple layer makes sense. 
Lately the cloud companies have been increasing the precision of their metering so they can charge the fastest functions less. Amazon, for instance, started billing its Lambda functions in 1ms intervals. Smaller, faster functions are going to be cheaper, and that encourages developers to squeeze out the inefficiencies of their code.
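To see why that matters, here’s a rough back-of-the-envelope comparison using an illustrative per-GB-second rate; the point is the ratio, not the exact prices.

```python
# Rough comparison of 1ms billing vs. the older 100ms rounding for a
# 128MB function that actually runs for 15ms. The rate is illustrative;
# check your provider's current price sheet.
RATE_PER_GB_SECOND = 0.0000166667   # illustrative on-demand rate
MEMORY_GB = 128 / 1024
invocations = 10_000_000

cost_1ms_billing = invocations * (15 / 1000) * MEMORY_GB * RATE_PER_GB_SECOND
cost_100ms_rounding = invocations * (100 / 1000) * MEMORY_GB * RATE_PER_GB_SECOND

print(f"15ms billed per 1ms:   ${cost_1ms_billing:.2f}")
print(f"15ms rounded to 100ms: ${cost_100ms_rounding:.2f}")  # roughly 6.7x more
```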
If your application values speed over an abundance of features or you just want to deliver a very low-cost service, focusing on building the shortest, fastest nanoservice is an ideal solution. 
At the same time that cloud companies are increasing billing precision, they’re also removing limits. The earliest cloud functions came with a time limit, which was more of a safety measure to prevent deadlocked code and endless loops from chewing up time and wasting money.
More and more developers, though, are using the serverless model for batch or background processing, and the cloud companies have been encouraging this by removing time limits. Azure, for instance, has been removing those limits for its premium plan so your code can run longer for complicated computations. It guarantees only 60 uninterrupted minutes, but that won’t stop you from running longer if the service stays up.
Some code is meant to be run only occasionally. Some requests arrive in big waves. Setting aside a dedicated set of servers that are idle most of the time is a waste. Even if your job can take hours to run, the serverless model is a good way to pay only for the compute time that you really need. If your team is looking for a cost-effective way to run functions occasionally, macroservices can accomplish quite a bit without requiring a dedicated server. 
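If you take this route on AWS, asynchronous invocation is one simple way to kick off occasional background work without keeping a server around; the function name and payload below are hypothetical.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Fire-and-forget: InvocationType="Event" queues the request and returns
# immediately, so a long-running batch job doesn't block the caller.
lambda_client.invoke(
    FunctionName="nightly-report-builder",        # hypothetical function name
    InvocationType="Event",
    Payload=json.dumps({"report_date": "2021-05-01"}),
)
```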
Many APIs are, in effect, just serverless options. Basic APIs do little more than respond to a query, but some can be customized with more elaborate routines. It’s rare, though, for APIs to offer the wide-open potential of general serverless approaches.
If your project requires some work that’s already handled by a good existing API, you can usually thread the API into your stack. Building out good APIs for code that may be used only internally can be more challenging, but it can be a big asset for future growth.
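In practice, threading an API into your stack is often just a thin wrapper like this sketch, which uses the requests library against a made-up endpoint.

```python
import requests

def lookup_exchange_rate(currency: str) -> float:
    """Thin wrapper around a third-party API; the URL is invented for the example."""
    response = requests.get(
        "https://api.example.com/v1/rates",
        params={"currency": currency},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["rate"]
```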
Static resources aren’t really the job of serverless, because they’re usually delivered without changes. But developers are increasingly interested in capturing the raw speed of CDNs by designing their apps to be as static as possible.
Some are converting formerly dynamic systems such as Drupal or WordPress by using a static site generator to precompute the response for all possible URLs. This may be the most lightweight and simplest expression of the serverless philosophy.
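The idea boils down to something like this sketch: walk every URL the site can answer and write the rendered HTML to disk ahead of time, so the CDN only ever serves files. The page list and render function here are stand-ins for whatever your generator provides.

```python
from pathlib import Path

# Stand-in for a real template engine or static site generator.
def render(title: str, body: str) -> str:
    return f"<html><head><title>{title}</title></head><body>{body}</body></html>"

pages = {                        # every URL the site can answer, precomputed
    "index.html": ("Home", "Welcome to the site."),
    "about/index.html": ("About", "A little history."),
}

out = Path("public")
for path, (title, body) in pages.items():
    target = out / path
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(render(title, body))   # the CDN serves these files as-is
```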
If your website or app can be boiled down to a set of HTML, CSS, and JavaScript files, plus some images perhaps, then a CDN is by far the cheapest serverless solution. The static site generators make it possible to build some feature-rich options that are still technically just static collections of files. 
One of the earliest ways to create a website was to get an account on shared hardware, an option that defined the basic LAMP stack. Many of the earliest dynamic websites were built with this model. Some created entire PHP applications, and others built upon open-source solutions such as WordPress and Drupal. 
Deploying a LAMP application to an old-style shared server isn’t much different from handing it over to an “official” serverless option. The major difference may be the billing: with a shared server, it’s usually a flat monthly subscription fee instead of the pay-per-invocation model used in most serverless options.
One big limitation is that these basic approaches don’t scale very well. You can move to a larger machine and, perhaps, pay more, but the infrastructure is not set up for multi-machine parallelism. While this approach seems dated, it continues to power a surprisingly large collection of websites.
The LAMP stack may be old, and PHP may seem like an ancient solution, but classic shared servers are often good, cheap solutions for basic applications. It’s amazing how much can be done. If your task list lines up well with the feature set of classic tools such as Drupal or WordPress, thinking of them as serverless options can deliver quite a bit of time-tested functionality. 
Serverless options will continue to attract more developer attention because they offer faster startup times and relatively trouble-free maintenance. Much of the work is done by the cloud companies rolling out the serverless platforms, saving the coders and DevOps teams the trouble of configuring and maintaining the hardware. 
The biggest argument against the approach will continue to be the feeling of being locked in. While it may not be too hard to rewrite a serverless function in many cases, the differences between the platforms are large enough to make developers pause. All of the support given by the serverless platform is like a pair of golden handcuffs for the enterprise. 
Choosing among the different options here, though, depends upon your taste and the style of your team. If you’re using plenty of custom code, you’ll like the options that give you the most control, such as the imperative or static solutions. If you’re stringing together other APIs and doing only a modest amount of work in your code, the declarative, nanoservices, or database solutions may be best.
In other cases, the nature of your problem will dominate. Older code will fit better in older formats such as the shared server.
In the future, there will be even more choices. The meaning of serverless is still evolving as more companies begin to attach the word to their tools. Some of the new versions will emphasize new features such as, perhaps, enhanced artificial intelligence.
Others will push integration with particular forms of infrastructure such as, perhaps, the banking networks for new financial ventures. In general, the extra sophistication will follow a trend of adding programming options that make the code more declarative and less imperative. It’s all about removing more work from the developer’s shoulders.
Is there a perfect choice? Maybe. Depending on your situation, you may want to prioritize the language and structure, because you’ll want to revise and extend your investment in a particular language. Or you might choose an imperative structure that offers more flexibility.
You may even want to extend your investment in a database by using that platform as a serverless option. Or you might be drawn to the opportunities that come with some of the declarative frameworks that offer features and optimizations that align with your mission.
Questions of size are largely immaterial to the debate about language and architecture. Some jobs require more work than others, and the biggest ones may be ideal choices for serverless if they run only occasionally. Serverless platforms are a natural fit for intermittent work, because there’s no charge when the platform isn’t being used.
Lurking in the background is always the question of whether serverless is an appropriate solution at all. The tasks that don’t fit the model well are consistent, unceasing loads that require sophisticated algorithms and plenty of custom code. In those cases, the overhead of maintaining your own server is small, and the cost of adopting a serverless platform may simply be too high.
In that case, the right choice may be none of the above.