5 fundamentals of successful microservice design

Bernard Golden, CEO, Navica

The tech world is all agog over microservices. Why? Because the ability to break up monolithic applications into smaller, independently managed and updated components seems like a heaven-sent approach for IT organizations overwhelmed by demands to move faster. Microservice success stories such as Wix's certainly generate excitement. But microservice design isn't exactly easy.

In truth, microservices do hold enormous potential for changing the enterprise application ground rules. Microservices-based applications let you distribute work across multiple groups in such a way that each group can work on individual application sections without imposing additional work on the others. Microservice architectures also let you decompose an application into independently executing services. You can update individual microservices more easily and place the resulting update into production without the need for lengthy integration work across all of your different development teams.

Unfortunately, most of the information out there about microservices explains why you should use them, but not how. It’s good to know that microservices could revolutionize application design, implementation, and operation. But exactly how do you build an individual microservice? You need to understand the fundamental components of a microservice if you want the resulting artifact to operate properly and not end up looking like the same old monolithic application with a new paint job. 

Here are five elements that your microservice will need before it can take its place in a distributed application architecture.

1. Properly scoped functionality

The biggest design issue with monolithic application architectures is that there’s so much code in them that implements widely differing functionality. To make any change to a monolithic app, you must coordinate across different groups in order to ensure that everyone’s code continues operating properly. As a result, developers often spend more time on integration and testing than on delivering new application capability.

For this reason, the first element of a microservice is to define what it should do. What is the breadth of functionality it should implement? On their initial foray into microservices, many people are concerned that they’ll overpartition their functionality and end up with too many tiny microservices. In my experience, overpartitioning is rarely the issue; it's more common to stuff too much into each service.

One way to define the proper scope is to partition the services along logical functionality lines. For example, if you have a tax lookup function in your monolithic app that many other functions call, it’s a candidate to be broken out into its own service.
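
To make that concrete, here's a minimal sketch of what such a narrowly scoped tax-lookup service could look like as its own HTTP service, written in Python with Flask. The route, parameter, and rate table are illustrative assumptions, not a prescribed design.

```python
# A minimal, narrowly scoped tax-lookup microservice (illustrative sketch).
# The rate table and route are hypothetical; a real service would load
# rates from a maintained data source.
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical sales-tax rates keyed by state code.
TAX_RATES = {"CA": 0.0725, "NY": 0.04, "TX": 0.0625}

@app.route("/tax-rate/<state>")
def tax_rate(state):
    rate = TAX_RATES.get(state.upper())
    if rate is None:
        return jsonify(error="unknown state"), 404
    return jsonify(state=state.upper(), rate=rate)

if __name__ == "__main__":
    app.run(port=5000)
```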

Another scoping approach is to mirror the development organization’s structure. Each application subgroup (e.g., the authentication group responsible for user identity and authorization) takes responsibility for creating one or more microservices for the functionality that falls into its area.

A third approach, recommended in the excellent Building Microservices book by Sam Newman, is to minimize a service to the amount of code that could be re-implemented by the team in a two-week period. Rationing the size of the microservice in this fashion ensures that you’ll avoid the problem of bloated services.

2. Presenting an API

Once you break up a single application into multiple cooperating services, how should the services talk to one another? Typically, this is done with REST web services API calls, although you can use other transport mechanisms as well.

Presenting an API to calling services revives the old challenge of integration. For an overall application to run properly, each of the individual services must be able to send and receive data reliably, and testing that APIs operate properly is necessary to ensure that everything hangs together.

The foundation of an API is a service exposed at a known location, in a known format, that can respond to client calls with the appropriate functionality and/or response data. Recognize, though, that as individual services mature, they may add new functionality that requires a richer API. This, in turn, implies that the new API must be exposed alongside the old one. Absent this, every API change cascades into a requirement that all callers update their code and retest, which recreates the same problem that monolithic applications pose.
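
One common way to expose a richer API alongside the old one is URL versioning: the original contract stays at /v1 while new capability ships at /v2. Here's a minimal sketch in Python with Flask; the paths, rate value, and response fields are illustrative assumptions, not a prescribed design.

```python
# Exposing a new API version alongside the old one so existing callers
# keep working (illustrative sketch; paths and fields are assumptions).
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/tax-rate/<state>")
def tax_rate_v1(state):
    # Original contract: returns only the rate.
    return jsonify(rate=0.0725)

@app.route("/v2/tax-rate/<state>")
def tax_rate_v2(state):
    # Richer contract: adds metadata without breaking /v1 callers.
    return jsonify(rate=0.0725, state=state.upper(), source="2024 table")
```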

It's a good idea to avoid jumping into API coding immediately. Instead, do some work on paper or whiteboards to define what a specific service must expose to operate properly. It will undoubtedly take several iterations to fully flesh out an API that can present the service behind it while managing calls from multiple client types.

3. Traffic management

Once the API is up and one service can call another, everything’s OK, right? Well, no, actually.

In the real world of operating applications, a service may run slowly, and calls to it may take a long time. Or a service can be overwhelmed with calls and lack the processing power needed to respond quickly enough. Even worse, a service might simply stop running due to a software or hardware crash. And sometimes a client issues more calls than the lower-level service can keep up with.

Addressing this too-heavy traffic situation requires management. There must be a way for calling and called services to communicate status and coordinate traffic loads.

From the perspective of the calling service, it should always track its calls and be prepared to terminate them if the response takes too long. From the perspective of the called service, the API design should include the ability to send a response that indicates overload. This response, typically referred to as backpressure, signals that the calling service should reduce or redirect its load.
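
In code, that means the caller sets an explicit timeout on every call and treats an overload response as a signal to back off. Here's a minimal sketch using Python's requests library, assuming (as one common convention, not something the article prescribes) that the called service signals backpressure with HTTP 429:

```python
# Calling a service with a timeout and honoring backpressure signals.
# The URL and retry policy are illustrative assumptions.
import time
import requests

def call_with_backpressure(url, retries=3, timeout=2.0):
    for attempt in range(retries):
        try:
            resp = requests.get(url, timeout=timeout)
        except requests.Timeout:
            # The called service took too long; give up on this attempt.
            continue
        if resp.status_code == 429:
            # Backpressure: the service asked us to slow down.
            time.sleep(2 ** attempt)  # exponential backoff
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("service unavailable after retries")
```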

One important note here about managing traffic: Calling services should have a graceful way to handle a nonresponsive called service. If the information the called service is supposed to return is unavailable, then your calling service should still be able to accept that the called service will not respond, and continue to serve up useful, if incomplete, information. This is commonly referred to as a “circuit breaker pattern.”
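
A bare-bones circuit breaker can wrap the call itself: after enough consecutive failures it "opens" and returns a fallback immediately rather than hammering a failing service. This Python sketch is illustrative only; production systems typically rely on a hardened library rather than hand-rolled logic.

```python
# A toy circuit breaker: after max_failures consecutive errors, calls
# short-circuit to a fallback until a cooldown period elapses.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()          # circuit open: fail fast
            self.opened_at = None          # cooldown over: try again
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()
        self.failures = 0
        return result
```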

Finally, services must be able to spawn and kill new service instances as needed to accommodate traffic load variations. Most sophisticated microservice applications achieve this through auto-scaling, a process in which a management system tracks service loads and adds or removes service instances as needed.
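
In practice this control loop usually belongs to the platform (a container orchestrator, for example), but its shape is easy to sketch. In the Python sketch below, get_average_load and set_instance_count are hypothetical placeholders standing in for a real metrics source and scaling API:

```python
# A naive auto-scaling control loop (illustrative only; real systems
# delegate this to an orchestrator). get_average_load() and
# set_instance_count() are hypothetical placeholders.
import time

def autoscale(get_average_load, set_instance_count,
              instances=2, min_n=2, max_n=10):
    while True:
        load = get_average_load()          # e.g., utilization per instance
        if load > 0.8 and instances < max_n:
            instances += 1                 # scale out under heavy load
        elif load < 0.2 and instances > min_n:
            instances -= 1                 # scale in when idle
        set_instance_count(instances)
        time.sleep(30)                     # re-evaluate periodically
```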

4. Data offloading

The vagaries and erratic traffic of microservice applications mean that individual services come and go. Adding to the constant service instance churn: the reality that the underlying infrastructure also is unreliable. Virtual machines crash, fail to respond, or go into high-load status while not performing any useful work (thereby requiring hard termination). Nevertheless, while individual service instances are transient, the overall service must be available and continue operating so that users will keep obtaining results from the application.

This need for continuous operation is quite different from traditional applications, which often stop operating if the underlying infrastructure fails.

To ensure that users can continue to perform useful work when the instance serving their session fails, you can migrate user-specific data off of service instances and into a shared, redundant storage system that's accessible from all service instances. In this way, you can ensure that no instance crash interrupts user interactions.
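
A common way to do this is to keep session state in a shared store such as Redis instead of in instance memory, so any surviving instance can pick up a user's session. A minimal sketch, assuming the redis-py client and an illustrative key layout:

```python
# Keeping session state in shared, redundant storage instead of instance
# memory, so any service instance can serve the user after a crash.
# Assumes a Redis instance at localhost:6379; key layout is illustrative.
import json
import redis

store = redis.Redis(host="localhost", port=6379)

def save_session(user_id, session_data):
    store.set(f"session:{user_id}", json.dumps(session_data), ex=3600)

def load_session(user_id):
    raw = store.get(f"session:{user_id}")
    return json.loads(raw) if raw else {}
```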

A further twist on the offloaded storage approach is to insert a shared, memory-based cache system between a given service and the storage associated with that service. This allows for quicker data access and improves application performance. Naturally, the caching system becomes another service in the application architecture and makes the overall application more complex, but data offloading and caching improve user satisfaction.
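
The usual shape of this is the cache-aside pattern: read from the cache first, fall back to the backing store on a miss, then populate the cache for the next reader. A sketch, again assuming redis-py; fetch_from_db is a hypothetical helper standing in for the real storage call:

```python
# Cache-aside read path: try the shared cache, fall back to storage on a
# miss, then populate the cache. fetch_from_db() is a hypothetical helper.
import json
import redis

cache = redis.Redis(host="localhost", port=6379)

def get_record(record_id, fetch_from_db, ttl=300):
    key = f"record:{record_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: fast path
    record = fetch_from_db(record_id)      # cache miss: hit storage
    cache.set(key, json.dumps(record), ex=ttl)
    return record
```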

5. Monitoring

Decomposing a monolithic application, along with inserting an offloaded data layer and caching to increase performance, inevitably means a more complex application topology. A lot more complex.

For this reason, traditional monitoring tools and approaches cannot deal with the scale and dynamic environments associated with microservices. The monitoring system for a microservices-based application must allow for ongoing resource change, be able to capture monitoring data in a central location, and display information that reflects the frequently changing nature of microservices applications.

But more is necessary to deliver useful metrics for microservices applications. As an end-user action triggers application work, API calls and service work cascade down the application topology, and a single action may result in tens, or hundreds, of monitorable events. Trying to manually correlate errors across a service cascade is nearly impossible, so use a monitoring system that can discover and display events based on a common timeline to support root cause analysis.
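
The usual mechanism for this kind of correlation is an ID generated at the edge and propagated on every downstream call, so the monitoring system can stitch a cascade back onto one timeline. A minimal sketch using Flask and requests; the X-Correlation-ID header name is a common convention rather than a standard, and the downstream URL is hypothetical:

```python
# Propagating a correlation ID so events from one user action can be
# correlated across services. Header name and downstream URL are
# illustrative conventions, not fixed standards.
import logging
import uuid
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

@app.route("/checkout")
def checkout():
    # Reuse the caller's ID if present; otherwise start a new trace.
    cid = request.headers.get("X-Correlation-ID", str(uuid.uuid4()))
    logging.info("checkout start correlation_id=%s", cid)
    # Pass the same ID to downstream services so their logs line up.
    requests.get("http://inventory:5001/reserve",
                 headers={"X-Correlation-ID": cid}, timeout=2.0)
    return jsonify(status="ok", correlation_id=cid)
```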

Most microservices monitoring systems place a monitoring agent on each service instance, where it can track specific instance data. These monitoring systems can also capture application-created log information. All of this data migrates to a centralized database, where the system does cross-correlation, allowing monitoring alerts or humans to track important event data.

Microservices flexibility worth the complexity

Microservices are the logical response to the shortcomings of monolithic applications in a time of frequent functionality change and constant operational churn. A microservices architecture allows much greater application flexibility and performance, but it's complex. With these five aspects of microservice design in hand, however, you'll be better prepared as you move to a more modern application architecture and topology.
