Gartner expects the worldwide market for technology that enables hyperautomation to reach $596.6 billion in 2022. Choosing the right architecture for your software is as important as having an architecture in the first place. At times, the big ball of mud over there is not an architecture at all.
There are several good approaches to architecting software. I am going to point out the merits of using Event-Driven Architecture (EDA) with an eye on hyperautomation, which has inherent cost benefits. While EDA is not new, using EDA for hyperautomation with the help of cloud services is what is new.
EDA is a distributed, asynchronous architecture that is often used to build highly scalable applications. Because it is organized around business events, the architecture is loosely coupled.
Some use cases of business events are invoice generation along with the downstream events of sending the invoice, sending notifications, receiving payment for that invoice, confirmation, updating AR records, and so on.
Other use cases are in insurance underwriting: receiving a request for a quote, quote generation, notification, acceptance, receiving payment, binding, policy issuance, and updating various areas of the platform with the new information.
Events can be simple, with only one step, like 'for this, send a notification'. Events can also be complex, with multiple steps that are 'ordered', so each step depends on an earlier one before it can proceed.
Complex, multi-step events need some central orchestration, or a Mediator. Sound familiar? This service broker mediates the interactions among the steps of that event.
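The mediator topology can be sketched in a few lines. This is an illustrative, in-memory sketch only; the class and step names (`Mediator`, `generate_invoice`, `send_notification`) are hypothetical, not from any particular framework:

```python
# Mediator-topology sketch: a central mediator receives a business event
# and runs its ordered steps, so individual handlers stay unaware of the
# overall workflow.

class Mediator:
    def __init__(self):
        self._workflows = {}  # event type -> ordered list of step handlers

    def register(self, event_type, steps):
        self._workflows[event_type] = steps

    def dispatch(self, event_type, payload):
        results = []
        # Steps run in order; each sees the output of the previous one.
        for step in self._workflows.get(event_type, []):
            payload = step(payload)
            results.append(payload)
        return results

def generate_invoice(order):
    # Hypothetical step: attach an invoice id to the order.
    return {**order, "invoice_id": f"INV-{order['order_id']}"}

def send_notification(invoice):
    # Hypothetical step: mark the customer as notified.
    return {**invoice, "notified": True}

mediator = Mediator()
mediator.register("order_placed", [generate_invoice, send_notification])
outcome = mediator.dispatch("order_placed", {"order_id": 42})
```

The key property is that only the mediator knows the step ordering; the steps themselves are independent functions that can be reused or reordered.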
The other type of EDA implementation does without a Mediator. This is the Broker implementation. Here there is no central mediator; instead, the actions are distributed and processed in a chain, with each component publishing the next event when its work is done.
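The broker topology can be sketched the same way. Again this is an illustrative, in-memory sketch with hypothetical names (`Broker`, the `quote.*` topics follow the insurance example above); a real deployment would use a message broker such as SNS or Azure Service Bus:

```python
# Broker-topology sketch: no central orchestrator. Each processor
# subscribes to a topic and, when done, publishes the next event itself,
# forming a processing chain.

from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

broker = Broker()
log = []

def on_quote_requested(event):
    log.append("quote_generated")
    # The chain continues: this handler publishes the next event itself.
    broker.publish("quote.generated", event)

def on_quote_generated(event):
    log.append("customer_notified")

broker.subscribe("quote.requested", on_quote_requested)
broker.subscribe("quote.generated", on_quote_generated)
broker.publish("quote.requested", {"customer": "Acme"})
```

Notice that no single component knows the whole chain, which is exactly why ordering guarantees are harder in the broker topology.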
There are significant challenges when the architecture is based on events. Primarily, events can be missed or dropped for a variety of reasons, such as resources being unavailable at a particular time. Further, when events are 'ordered', a break at one point will queue up errors; the downstream processes will be idle at first and then drown when the flood gates open.
To mitigate this, the central mediator becomes important. Think of it as an Event Service Bus. With the right tools, it orchestrates the processes to order, retry, replay, deduplicate, and batch events.
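Two of those mediator responsibilities, retrying a failed step and deduplicating events, can be sketched simply. This is an illustrative sketch with hypothetical names (`with_retry`, `Deduplicator`); managed buses provide these features out of the box:

```python
# Sketch of two mediator responsibilities: retrying a flaky step and
# dropping duplicate events by id.

import time

def with_retry(step, attempts=3, delay=0.0):
    """Wrap a step so transient failures are retried before giving up."""
    def wrapper(event):
        for attempt in range(1, attempts + 1):
            try:
                return step(event)
            except Exception:
                if attempt == attempts:
                    raise  # give up; a real bus would dead-letter the event
                time.sleep(delay)
    return wrapper

class Deduplicator:
    """Drops events whose id has already been seen."""
    def __init__(self):
        self._seen = set()

    def accept(self, event):
        if event["id"] in self._seen:
            return False
        self._seen.add(event["id"])
        return True

# Hypothetical flaky step: fails twice, then succeeds.
calls = {"count": 0}

def flaky_step(event):
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return {**event, "done": True}

safe_step = with_retry(flaky_step, attempts=3)
dedup = Deduplicator()
events = [{"id": 1}, {"id": 1}, {"id": 2}]  # second event is a duplicate
processed = [safe_step(e) for e in events if dedup.accept(e)]
```

In production you would not hand-roll this; the point is only that ordering, retry, and deduplication are explicit responsibilities that must live somewhere, and the mediator is that somewhere.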
EDA is a good use case for on-demand computing. We have transitioned from Own-It and Rent-It to Use-When-Needed. Good examples of this are Uber, Lyft, and other ride-sharing services. The parallel in IT is the transition from on-premises servers to colocated data centers, then cloud VMs, and now to on-demand compute like serverless functions (AWS Lambda, Azure Functions, Google Cloud Functions) and other cloud tools and services.
Why pay for dedicated compute resources for events when you can fire up a Lambda function on demand when the event happens? The Lambda handles the event and then stands down. Not only does this save compute cost, but it is highly scalable as well: fire up more Lambdas in parallel when there are more events!
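A minimal Lambda handler for this pattern might look like the sketch below. The record shape follows the documented SNS-to-Lambda event structure; `process_invoice` is a hypothetical placeholder for your business logic:

```python
# Sketch of an AWS Lambda handler invoked by an SNS notification.
# Lambda calls lambda_handler(event, context) once per invocation;
# SNS delivers one or more records, each carrying a JSON message body.

import json

def process_invoice(message):
    # Hypothetical business logic: acknowledge the invoice.
    return {"invoice_id": message.get("invoice_id"), "status": "processed"}

def lambda_handler(event, context):
    results = []
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        results.append(process_invoice(message))
    return {"processed": len(results), "results": results}
```

Nothing runs, and nothing is billed, until an event arrives; when many events arrive, the platform invokes the handler concurrently.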
Amazon, Microsoft, and Google Cloud Platform (GCP) each provide their own tools for serverless computing; their respective documentation for AWS, Azure, and GCP is a good place to start.
Some of the AWS tools for EDA are Simple Notification Service (SNS), EventBridge, Kinesis, and Amazon MQ. On the Azure side, you can accomplish this with Azure Service Bus, Event Grid, and Event Hubs.
For event-driven applications, by using the right combination of EDA, serverless computing, and cloud services, one can build highly scalable applications that are cost effective in the long run. Gartner predicts that by 2024, organizations combining hyperautomation with redesigned operational processes will lower operational costs by 30%.
Are you looking at hyperautomation to lower operational costs? Which approach are you following?