How to get started with event-driven microservices

Many organizations reach a stage in their growth where the monolithic applications that once served them well start to hold them back. Perhaps the business needs new functionality that the existing architecture can’t support, or more flexible ways to store and access data for their apps. Team growth, conflicting performance requirements, and new competitive technologies can also pose a challenge to a singular, monolithic codebase. Adopting an event-driven microservices architecture can help you address these challenges.

Microservices overcome the limitations of monolithic apps by dividing those apps into small, purpose-built services, each tailored to the business problem it’s meant to solve. They give you the freedom to choose programming languages, frameworks, and databases as you see fit. Microservices can remodel, manage, and store data according to their own needs, giving you complete control over how best to solve your business problems.

In an event-driven microservices architecture, systems communicate by producing and consuming events. These event-driven services consume events from input event streams (such as Apache Kafka topics) and apply their specific business logic, then emit their own output events, serve data for request-response access, call third-party APIs, or perform other required actions.
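
To make that loop concrete, here is a minimal sketch using Kafka’s plain Java clients. The topic names and the stand-in business logic are placeholders, and a real service would deserialize typed events against a schema rather than pass strings around.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEnricherService {
    public static void main(String[] args) {
        // Shared client settings; the producer simply ignores the consumer-only keys.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-enricher");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            consumer.subscribe(List.of("orders"));  // input event stream
            while (true) {
                for (ConsumerRecord<String, String> event : consumer.poll(Duration.ofMillis(250))) {
                    // Apply business logic, then emit the result as an output event.
                    String enriched = event.value().toUpperCase();  // stand-in for real logic
                    producer.send(new ProducerRecord<>("orders-enriched", event.key(), enriched));
                }
            }
        }
    }
}
```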

If you’d like a deeper dive, I wrote a book on this topic. In this article, I’ll cover the key points you need to know to get started.

Are event-driven microservices right for you?

If you’re considering an event-driven microservices architecture, the first step is to ensure it’s something your company needs. Like many technology decisions, this one involves trade-offs. Monolithic apps are generally tightly coupled to their data store, providing fast data access to other internal functions. But they serve that data according to a single internal data model, with performance and access patterns dictated by the underlying technology. For example, a key-value store makes for a lousy relational database, and neither is a good substitute for storing loosely structured documents.

One of the most common signs that you may be ready for event-driven microservices is finding yourself writing (or using) bulk-export data APIs. Another is writing periodic polling jobs that pull data from one database and load it into another. These are examples of an ad hoc data communication layer. You don’t actually need the monolith’s business functionality; you simply need its data, most likely so you can get on with writing new business functionality of your own.

Historically, you’d commonly find this pattern when extracting data from online transaction processing (OLTP) systems to do online analytical processing (OLAP). But with the massive growth in data, performance requirements, and business needs, these same extraction and loading patterns are now prevalent for any operational system that simply needs common business data to do its job. Event-driven microservices provide a way to access both historical and new data in the form of an append-only immutable log of events.

Event-driven microservices are also an excellent solution when you need to provide multiple departments and teams with access to the same set of data in a consistent way. For example, if sales data is packaged as an event stream, the analytics team can simply subscribe to it and be confident they’re working from exactly the same sales data the fulfillment team sees. The event streams provide the basis for the data communication layer, eliminating the tangled web of point-to-point, ad hoc, purpose-built connections and replacing them with a single set of streams for easy consumption.
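
With Kafka specifically, this pattern falls out of consumer groups: each team subscribes under its own group ID and independently receives every event in the stream. A minimal sketch, with the team and topic names invented for illustration:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SalesSubscriber {
    // Hypothetical helper: each team passes its own consumer group ID.
    // Distinct group IDs mean each team independently receives every event
    // in the "sales" stream -- the same source of truth, no extra exports.
    static KafkaConsumer<String, String> subscribe(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", groupId);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(List.of("sales"));
        return consumer;
    }

    public static void main(String[] args) {
        var analytics = subscribe("analytics-team");     // full copy of the stream
        var fulfillment = subscribe("fulfillment-team"); // its own full copy
    }
}
```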

Take advantage of modern cloud services

Each new service you create (micro or otherwise) requires a deployment pipeline, container management, monitoring, and scaling services. Dozens of microservices will require more overhead and management than a single monolithic service. This overhead is known as the “microservices tax.” Streamlining and automating your operations will help reduce the cost, but historically that has required a significant up-front investment.

Nowadays, we can rely much more heavily on managed cloud services to reduce the microservices tax. Deploying, managing, monitoring, and scaling Dockerized services on Kubernetes is now common and well supported. Similarly, creating and managing Kafka topics through cloud services, like Confluent Cloud, is easier than ever.
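
As one example, here is a minimal sketch of programmatic topic creation with Kafka’s Java Admin client. The broker address, partition count, and replication factor are placeholders, and a managed cluster would also need the security settings supplied by your provider.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // For a managed cluster (e.g., Confluent Cloud), you'd also set the
        // SASL/TLS security settings your provider gives you.
        props.put("bootstrap.servers", "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            NewTopic orders = new NewTopic("orders", 6, (short) 3); // name, partitions, replication
            admin.createTopics(List.of(orders)).all().get();        // block until created
        }
    }
}
```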

Pay close attention to the differences between “managed” and “fully managed” services. Choosing fully managed services lets you get on with the business of actually running your business, outsourcing all of the maintenance, monitoring, scaling, and security requirements to your service provider. Relying on cloud services lets you prototype and experiment with microservices without paying any microservices taxes up front. You can simply try them out and adopt the pieces and technologies that help you best. 

Start small and build on existing systems

Moving to a microservices architecture shouldn’t be a rip-and-replace exercise. Start by identifying a specific use case that meets a real business need. For example, you might need to source and remodel data from four different relational tables in one database to power the new document-based search functionality. How do you integrate existing non-streaming data sources into an event-driven architecture?

Kafka Connect is an excellent option for bootstrapping database tables into their own event streams. You can connect to a whole host of on-prem and cloud databases, snapshot historical data, filter data, mask columns, and more. Your source database remains independent of Kafka Connect, letting you incrementally source important business data without affecting existing systems. 
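
As a sketch of what this can look like, here is a hypothetical JDBC source connector registered through the Kafka Connect REST API. It snapshots a customers table into its own topic, then streams new rows as they arrive. The hosts, table, and column names are invented, and a real configuration would also carry connection credentials.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        // Hypothetical connector config: bootstrap the "customers" table into a
        // "db.customers" topic, picking up new rows via an incrementing ID column.
        String config = """
            {
              "name": "customers-source",
              "config": {
                "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
                "connection.url": "jdbc:postgresql://db-host:5432/shop",
                "table.whitelist": "customers",
                "mode": "incrementing",
                "incrementing.column.name": "id",
                "topic.prefix": "db."
              }
            }""";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://connect-host:8083/connectors")) // Connect REST API
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(config))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```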

You can build your microservices incrementally while maintaining the original monolithic application for as long as you need. It’s not a question of “either/or”—it’s about exposing the additional data and functionality you need at a speed that makes sense for the business. 

Build microservices aligned to business needs

Design your microservices to solve specific, closely related business problems, as similar problems tend to require similar data. Aligning your microservices with business problems means that your microservices are unlikely to need to change unless your business changes. For example, an e-commerce business could have one microservice that deals with payments, another that deals with inventory management, and a third that deals with shipping. Any changes made to the shipping workflow will only affect the shipping microservice. 

Aligning your microservice boundaries with business use cases reduces the risk of unintentional or incidental changes, as the functionality is encapsulated in one service. In more complex use cases, it’s not uncommon to have a business workflow span multiple microservices, though any atomic operations should remain within a single service for consistency.

Keep your microservice count low

A microservice doesn’t need to be tiny. In fact, you may find it helpful to think of a microservice as simply a dedicated service for a subset of business problems. A common pitfall is building a microservice for every single piece of functionality, often ending up with hundreds or thousands of services! The goal of an event-driven microservices architecture is not to build as many services as possible, but rather to enable dedicated solutions using the right tools for the job.

When adding business functionality, see if you can reasonably integrate it with an existing service first. If you can add functionality in a way that seems like a reasonable extension to an existing service, it may make sense to do that instead of building a new, possibly unnecessary service. For example, expanded inventory management functionality should be within the inventory management microservice, and not in yet another similar-yet-different microservice. 

Not everything needs to be a microservice from day one. One reasonable design choice is to prototype a solution using a modular monolith framework with healthy API boundaries and separation of concerns. You can treat the entire prototype monolith as a single (large) microservice, reading from and writing to event streams as needed. Once your business use cases harden and become clearer, you can split select modules off into their own microservices. Introduce new microservices only when necessary, and don’t forget that less is more, particularly when starting out.
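
Here is a minimal sketch of that shape, assuming invented module names: each module hides behind an interface inside a single deployable, so splitting one out into its own microservice later won’t disturb its callers.

```java
public class PrototypeMonolith {
    // Each module hides behind an interface -- a healthy API boundary that
    // later lets you split the module into its own microservice.
    interface InventoryModule { void reserve(String sku, int qty); }
    interface ShippingModule  { void ship(String orderId); }

    public static void main(String[] args) {
        // In the prototype, both modules run in one process, reading from and
        // writing to event streams as a single (large) "microservice."
        InventoryModule inventory = (sku, qty) ->
            System.out.println("reserved " + qty + " of " + sku);
        ShippingModule shipping = orderId ->
            System.out.println("shipped order " + orderId);

        inventory.reserve("sku-123", 2);
        shipping.ship("order-42");
    }
}
```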

Use a catalog to keep track of event streams and services

As you create more microservices and event streams, you’ll need some way to manage, discover, and track usage and metadata. A catalog serves two primary functions:

  1. Discovering who owns an event stream, what data it contains, and what schema and structure it uses.
  2. Discovering what microservices already exist, who owns them, and what event streams (and APIs) they are responsible for.

You can catalog metadata with something as simple as a shared spreadsheet when starting out. As your business grows, you’ll eventually need to move to a dedicated metadata service. Apache Atlas is a common open-source choice, though an easier answer is to look to your cloud service provider for a solution (such as Confluent Cloud’s Stream Catalog).
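
Whichever tool you choose, the metadata itself is simple. Here is a sketch of the fields a catalog entry might track, essentially the columns you’d start with in a spreadsheet; all names are illustrative.

```java
import java.util.List;

public class StreamCatalog {
    // A minimal sketch of the metadata worth tracking per stream. Fields
    // mirror the two discovery functions above: ownership, contents, schema,
    // and who consumes the stream.
    record StreamEntry(
        String name,            // e.g., "sales"
        String owningTeam,      // who to ask about the data
        String description,     // what the events contain
        String schemaRef,       // e.g., a schema registry subject
        List<String> consumers) {}

    public static void main(String[] args) {
        StreamEntry sales = new StreamEntry(
            "sales", "commerce-team",
            "One event per completed sale",
            "sales-value", List.of("analytics", "fulfillment"));
        System.out.println(sales);
    }
}
```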

Create a toolbox of approved tools, languages, and frameworks

One of the advantages of event-driven microservices is that they open the door to a wider choice of technologies, including various programming languages, frameworks, databases, and tools. This is great for innovation and accessibility, but it can become a problem if developers use too many different technologies, especially lesser-known or flavor-of-the-month selections.

To remedy this, get your key stakeholders together and decide on the toolbox of technologies that you’ll support. This shouldn’t be overly restrictive, but you don’t want to support tools and technologies unnecessarily because it increases cost, complexity, and management overhead. Application templates, code generators, event generators, testing frameworks, monitoring frameworks, and programming languages are just some examples of things you’ll find in the toolbox.

If a developer wants to stray outside of the toolbox, make sure they have a good reason to do so, such as achieving some desired business functionality they can’t create any other way. In that case, use their experience as a case study for expanding the toolbox to include the new option. But be careful: every new addition requires effort to support.

As a general rule, your goal should be to make it as easy as possible for developers to build and maintain the microservices your organization needs. You could even create a quick-start function that generates a GitHub repo with the skeleton for a service, builds the test framework, and so on.
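
Such a generator can start very small. The sketch below just stamps out a skeleton directory tree; a real version would also create the Git repo, CI pipeline, and test harness from your approved templates. Every path here is an example.

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class QuickStart {
    public static void main(String[] args) throws Exception {
        // Stamp out a minimal service skeleton from the approved toolbox.
        String service = args.length > 0 ? args[0] : "new-service";
        Path root = Path.of(service);
        Files.createDirectories(root.resolve("src/main/java"));
        Files.createDirectories(root.resolve("src/test/java"));
        Files.writeString(root.resolve("README.md"),
            "# " + service + "\nGenerated from the approved service template.\n");
        System.out.println("Skeleton created at " + root.toAbsolutePath());
    }
}
```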

Take advantage of full integration and unit testing

Event-driven microservices lend themselves well to full integration and unit testing. As the predominant form of input to an event-driven microservice, events can be easily composed to cover a wide range of inputs and corner cases. Event schemas constrain the range of values that need to be tested and provide the necessary structure for composing input test streams.

You can usually “black box test” your microservice by loading the inputs with certain events and seeing what results are produced. For Kafka-specific events, Kafka provides a built-in, Java-based, in-memory test broker. By starting the broker, you can programmatically generate events and evaluate the results produced.
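
If your service happens to be written with Kafka Streams, the test driver in the kafka-streams-test-utils artifact provides exactly this kind of in-memory, black-box test with no real broker involved. A minimal sketch, with an invented topology and invented topic names:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class EnricherTest {
    public static void main(String[] args) {
        // Build the same topology the service runs in production.
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(v -> v.toUpperCase())  // stand-in business logic
               .to("orders-enriched", Produced.with(Serdes.String(), Serdes.String()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "enricher-test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "ignored:9092"); // never contacted

        try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
            TestInputTopic<String, String> in = driver.createInputTopic(
                "orders", new StringSerializer(), new StringSerializer());
            TestOutputTopic<String, String> out = driver.createOutputTopic(
                "orders-enriched", new StringDeserializer(), new StringDeserializer());

            in.pipeInput("order-1", "widget");     // load an input event
            System.out.println(out.readValue());   // prints "WIDGET"
        }
    }
}
```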

For container-based microservice integration tests, you can either spin up your own containerized parallel Kafka instance or simply plug into a cloud cluster. Produce and consume your events, then tear everything down at the end.
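
One common way to get the containerized option, though it isn’t the only one, is the Testcontainers library, which starts a throwaway Kafka broker in Docker and discards it when the test ends. A sketch, with the image tag as an assumption:

```java
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

public class IntegrationTestHarness {
    public static void main(String[] args) {
        // Run a disposable Kafka broker in Docker for the duration of the test.
        try (KafkaContainer kafka =
                 new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"))) {
            kafka.start();
            String bootstrap = kafka.getBootstrapServers(); // point your service here
            System.out.println("Test broker at " + bootstrap);
            // ... produce and consume test events against the service under test ...
        } // container (and all test data) is torn down here
    }
}
```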

For unit testing, microservices are similar to most other application architectures. Test all of your functions to make sure the outputs match expectations. Unit tests are critical for making sure that your application works the first time you deploy it, and for protecting against unintentional changes.

Event streams meet diverse needs

Event-driven communication is not new, but the needs of most modern organizations have changed. Data sets have gotten much larger, and singular monoliths are rarely enough to handle all the complex and diverse needs of the modern organization. Event-driven microservices provide a powerful, flexible way to meet today’s requirements. Event streams form the basis of data communication, providing a reliable source of truth for other services to consume and use as they see fit.

Meanwhile, microservices provide the flexibility and choice to use the right tools for solving business problems. And modern cloud services mean you’ll spend far less time working on the platform and far more time working on the problems. I hope this guide has provided you with the basics you’ll need to get started with this incredibly powerful and flexible model in your own organization.

Adam Bellemare is staff technologist in the office of the CTO at Confluent.
