SOA
With SOA, you structure an application by decomposing it into multiple services (most commonly HTTP services) that can be classified into different types, such as subsystems or tiers. Microservices derive from SOA, but the two are not the same: big central brokers, organization-level central orchestrators, and the Enterprise Service Bus (ESB) are typical in SOA, yet in most cases these are considered anti-patterns in the microservices community. In fact, some people argue that "the microservices architecture is SOA done right."
Microservices architecture
As the name implies, a microservices architecture is an approach to building a server application as a set of small services. Each service runs in its own process and communicates with other processes using protocols such as HTTP/HTTPS, WebSockets, or AMQP. Each microservice implements a specific end-to-end domain or business capability within a certain context boundary, and each must be developed autonomously and be deployable independently. Finally, each microservice should own its related domain data model and domain logic (sovereignty and decentralized data management) based on different data storage technologies (SQL, NoSQL) and different programming languages.
As an additional benefit, microservices can scale out independently. Instead of having a single monolithic application that you must scale out as a unit, you can instead scale out specific microservices. That way, you can scale just the functional area that needs more processing power or network bandwidth to support demand, rather than scaling out other areas of the application that do not need to be scaled. That means cost savings because you need less hardware.
The microservices approach allows agile changes and rapid iteration of each microservice because you can change specific, small areas of complex, large, and scalable applications.
An important rule for microservices architecture is that each microservice must own its domain data and logic. Just as a full application owns its logic and data, so must each microservice own its logic and data under an autonomous lifecycle, with independent deployment per microservice.
Data consistency is hard in microservices; therefore, different services often use different data stores, e.g., SQL, NoSQL, or even graph databases. This is known as polyglot persistence.
A microservice is similar to a Bounded Context in Domain-Driven Design (DDD), and bounded contexts are a good starting point for defining and dividing your system.
Challenge #1: How to define the boundaries of microservices
Usually, a Bounded Context is a good place to start: each service should have its own context. The same entity can be referred to differently in different contexts; for example, a person may be a "user" in the auth service and a "customer" in the order service. They can share the same data or the same identity while carrying different attributes.
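As a minimal sketch of this idea (the model names and fields here are illustrative, not from any real system), the same identity can be modeled with different attributes in each context:

```python
from dataclasses import dataclass

# Hypothetical models: the same person, represented differently per bounded context.

@dataclass
class User:                 # the auth service's view
    id: int
    email: str
    password_hash: str

@dataclass
class Customer:             # the ordering service's view
    id: int                 # same identity as User.id
    shipping_address: str
    loyalty_points: int

user = User(id=42, email="ada@example.com", password_hash="hashed-secret")
customer = Customer(id=42, shipping_address="10 Downing St", loyalty_points=120)
assert user.id == customer.id  # same identity, different attributes
```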
Challenge #2: How to create queries that retrieve data from several microservices
• API Gateway (aggregating data from several services)
• CQRS with query/read tables
• "Cold data" in central databases (e.g., for reporting)
Challenge #3: How to achieve consistency across multiple microservices
No microservice should include tables or storage owned by another microservice in its own transactions, not even in direct queries. Instead, microservices should rely on eventual consistency, usually based on asynchronous communication such as integration events (message- and event-based communication).
Challenge #4: How to design communication across microservice boundaries
In this context, communication means how coupled your microservices should be. Depending on the level of coupling, when a failure occurs, the impact of that failure on your system will vary significantly.
For instance, imagine that your client application makes an HTTP API call to an individual microservice
like the Ordering microservice. If the Ordering microservice, in turn, calls additional microservices using
HTTP within the same request/response cycle, you’re creating a chain of HTTP calls. It might sound
reasonable initially. However, there are important points to consider when going down this path:
• Blocking and low performance. Due to the synchronous nature of HTTP, the original request
doesn’t get a response until all the internal HTTP calls are finished. Imagine if the number of
these calls increases significantly while, at the same time, one of the intermediate HTTP calls to a
microservice is blocked. The result is that performance is impacted, and the overall scalability will
be exponentially affected as additional HTTP requests increase.
• Coupling microservices with HTTP. Business microservices shouldn’t be coupled with other
business microservices. Ideally, they shouldn’t “know” about the existence of other microservices.
If your application relies on coupling microservices as in the example, achieving autonomy per
microservice will be almost impossible.
• Failure in any one microservice. If you implement a chain of microservices linked by HTTP
calls, when any of the microservices fails (and eventually they will fail), the whole chain of
microservices fails. A microservice-based system should be designed to continue to work as
well as possible during partial failures. Even if you implement client logic that uses retries with
exponential backoff or circuit breaker mechanisms, the more complex the HTTP call chains are,
the more complex it is to implement a failure strategy based on HTTP.
In fact, if your internal microservices are communicating by creating chains of HTTP requests as
described, it could be argued that you have a monolithic application, but one based on HTTP between
processes instead of intra-process communication mechanisms.
Therefore, to enforce microservice autonomy and have better resiliency, you should minimize
the use of chains of request/response communication across microservices. It’s recommended that
you use only asynchronous interaction for inter-microservice communication, either by using
asynchronous message- and event-based communication, or by using (asynchronous) HTTP polling
independently of the original HTTP request/response cycle.
API Gateway Pattern
An API Gateway, or BFF (Backend for Frontend), is a useful pattern in microservices. Instead of exposing all the microservices to the outside world, it is better to create an API Gateway for external consumers. The gateway acts as a facade between client applications and microservices, prevents too many round trips when a SPA or client app aggregates data, and gives you a single place to implement cross-cutting concerns: logging, authentication and authorization, caching, IP whitelisting, retries, circuit breaking and QoS, header, query string, and claims transformation, rate limiting and throttling, and load balancing. There are drawbacks to using an API Gateway too: it can become a single point of failure and a bottleneck for your system, and it adds development effort.
There are many API Gateways on the market, e.g., Azure API Management or Ocelot.
Upstream URL: the URL requested from the outside.
Downstream URL: the local (internal) service.
There is a many-to-many relation between upstream and downstream: an upstream can map to multiple downstreams and vice versa.
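As a sketch, a minimal Ocelot route in `ocelot.json` maps an upstream path to a downstream service like this (the host, port, and paths here are illustrative; note that older Ocelot versions call this section "ReRoutes" instead of "Routes"):

```json
{
  "Routes": [
    {
      "UpstreamPathTemplate": "/orders/{id}",
      "UpstreamHttpMethod": [ "Get" ],
      "DownstreamPathTemplate": "/api/orders/{id}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        { "Host": "ordering-service", "Port": 80 }
      ]
    }
  ]
}
```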
Microservice and Service Registry
Microservices should have unique names and be discoverable. A service shouldn't be addressed as an IP running on some machine (things can go bad), and it shouldn't depend on the infrastructure it runs on. For that, there needs to be a service registry: if one machine fails, the service is still discoverable through the registry.
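A minimal in-memory sketch of that idea, assuming a naive registry with random instance selection (real systems use tools like Consul, Eureka, or DNS-based discovery):

```python
import random

class ServiceRegistry:
    """Maps service names to live instance addresses (illustrative sketch)."""

    def __init__(self):
        self._instances = {}  # service name -> set of "host:port" addresses

    def register(self, name, address):
        self._instances.setdefault(name, set()).add(address)

    def deregister(self, name, address):
        self._instances.get(name, set()).discard(address)

    def resolve(self, name):
        """Pick one registered instance at random (naive load balancing)."""
        instances = self._instances.get(name)
        if not instances:
            raise LookupError(f"no instances registered for {name!r}")
        return random.choice(sorted(instances))

registry = ServiceRegistry()
registry.register("ordering", "10.0.0.5:80")
registry.register("ordering", "10.0.0.6:80")
registry.deregister("ordering", "10.0.0.5:80")   # that machine went down
print(registry.resolve("ordering"))              # -> 10.0.0.6:80
```

Callers resolve by name on every request, so a failed machine only needs to be deregistered for traffic to move to the surviving instances.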
Micro-frontends (Composite UI)
Resiliency And High Availability
A microservice should be resilient and available at all times: it should restart itself after failure, and if a faulty version is deployed, it should be possible to roll back to the stable one. Resilience also means no state or data loss should occur. For that, we need to implement patterns like circuit breaker, retry strategies, and exponential backoff, using libraries like Polly in .NET Core.
Health management, logging, and diagnostics are also part of resilience and high availability: there should be standardized health checks, event logging, and tracing, plus crash reporting and machine-restart alerts. Logging should go to a standardized output stream; for example, Microsoft.Diagnostics.EventFlow collects streams from multiple sources and publishes them to output systems. Orchestrators and cloud platforms (Azure, AWS, etc.) should also provide diagnostic monitoring.
Exponential Backoffs
A retry policy for when you get a failure from an API: retry immediately the first time, after 5 seconds the second time, and after 25 seconds the third time. This way you don't bombard the other service with n immediate retries.
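The policy above can be sketched like this (in .NET you would typically use Polly's wait-and-retry policy instead; the delays and the `flaky` operation here are illustrative):

```python
import time

def call_with_backoff(operation, delays=(0, 5, 25)):
    """Try `operation` once per delay, backing off before each attempt."""
    last_error = None
    for delay in delays:
        time.sleep(delay)             # back off before this attempt
        try:
            return operation()
        except Exception as exc:      # real code would catch transient errors only
            last_error = exc
    raise last_error                  # all retries exhausted

# Usage: a flaky operation that succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("service unavailable")
    return "ok"

print(call_with_backoff(flaky, delays=(0, 0.01, 0.02)))  # -> ok
```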
Circuit Breaker Pattern
A pattern that wraps or intercepts HTTP calls to a service and checks whether they return a success or failure response. If more than n percent of requests are timing out or failing, it breaks the circuit, letting the unresponsive service recover and stopping cascading failures from timeouts that keep threads busy. It helps the system fail fast rather than reach a state of exhaustion.
It has three states:
Closed: all requests go through.
Open: no calls go through. Once the circuit opens, a timer starts; when it expires, the circuit becomes half-open and lets some requests in. If they still fail, it goes back to open and the timer starts again.
Half-open: the probing state between open and closed.
Fallback is a related policy rather than a state: when the API fails, we define what to do instead, such as calling a backup API, serving a cached response, or just failing gracefully.
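A minimal sketch of those state transitions, assuming a simple consecutive-failure threshold rather than a percentage (libraries like Polly implement this properly):

```python
import time

class CircuitBreaker:
    """Illustrative circuit breaker: closed -> open -> half-open -> closed."""

    def __init__(self, failure_threshold=3, recovery_time=30.0):
        self.failure_threshold = failure_threshold
        self.recovery_time = recovery_time
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, operation):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.recovery_time:
                self.state = "half-open"    # timer expired: let a probe through
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"         # trip (or re-trip) the circuit
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                   # success resets the breaker
        self.state = "closed"
        return result
```

Once the circuit is open, callers get an immediate error instead of waiting on a timeout, which is exactly the "fail fast" behavior described above.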
Bulk Head Pattern
If a service calls lots of other services from a shared thread pool and one of them is slower than the rest, calls to that slow service can hog the resources. To solve this, we allocate resources per downstream service: each service gets a fixed budget of concurrent calls, and if that budget is exhausted when the next call is about to be made, we don't make it until previous calls complete.
Integration Event
An integration event is published on a common event bus and can be subscribed to by any service. It's decoupled in the sense that publisher and subscriber don't know about each other. Any message broker service can be used to achieve this.
1. Achieving idempotency with events: the queue's built-in capabilities can be used; for example, RabbitMQ adds a redelivered flag to unacknowledged, redelivered messages. Another approach is assigning an ID to each event and persisting the processed IDs in the subscribing microservice.
These events can be implemented with a message broker like RabbitMQ, which uses the AMQP messaging protocol: messages go to an exchange first and are then delivered to consumers.
Exchanges have multiple types: direct, fanout, and topic. Direct routes a message to queues whose binding key strictly matches the routing key of the published message; fanout is not strict and delivers all messages to all bound consumers; topic is a middle ground between the two, matching string patterns (with wildcards) to filter subscribers.
Each message needs an acknowledgment before it is deleted from the queue.
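Topic-style routing plus an idempotent subscriber can be sketched in memory like this (a real system would use a broker such as RabbitMQ; as a simplification, shell-style wildcards via `fnmatch` stand in for AMQP's `*`/`#` topic patterns, and the event names are illustrative):

```python
import fnmatch
import uuid

class EventBus:
    """In-memory bus with topic-pattern routing (illustrative sketch)."""

    def __init__(self):
        self._bindings = []  # (pattern, handler) pairs

    def subscribe(self, pattern, handler):
        self._bindings.append((pattern, handler))

    def publish(self, routing_key, event):
        for pattern, handler in self._bindings:
            if fnmatch.fnmatch(routing_key, pattern):
                handler(event)

processed_ids = set()
received = []

def on_order_event(event):
    if event["id"] in processed_ids:   # idempotency: skip redelivered events
        return
    processed_ids.add(event["id"])
    received.append(event["name"])

bus = EventBus()
bus.subscribe("order.*", on_order_event)

event = {"id": str(uuid.uuid4()), "name": "OrderCreated"}
bus.publish("order.created", event)
bus.publish("order.created", event)      # redelivery: ignored by the handler
bus.publish("payment.captured", {"id": "p1", "name": "PaymentCaptured"})  # no match

print(received)  # -> ['OrderCreated']
```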
Event Sourcing Pattern
You store only domain events in the database and replay them to get the current state of the object. For efficiency, you can also save snapshots of the current state.
It handles multiple concurrent requests on the same data and maintains an audit log. Consistency is loose, but scalability is high.
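Replay (with an optional snapshot) can be sketched as a fold over the event stream; the bank-account events here are illustrative:

```python
# The balance is never stored directly; it is rebuilt by replaying events.
events = [
    {"type": "AccountOpened", "amount": 0},
    {"type": "Deposited", "amount": 100},
    {"type": "Withdrawn", "amount": 30},
    {"type": "Deposited", "amount": 50},
]

def replay(event_stream, snapshot=0):
    """Fold the events over an initial state (optionally a saved snapshot)."""
    balance = snapshot
    for event in event_stream:
        if event["type"] == "Deposited":
            balance += event["amount"]
        elif event["type"] == "Withdrawn":
            balance -= event["amount"]
    return balance

print(replay(events))            # full replay -> 120
print(replay(events[2:], 100))   # snapshot after the first two events -> 120
```

A snapshot simply lets you start the fold from a known state instead of from the beginning of the stream, which is the efficiency trade-off mentioned above.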
Docker Container
For each service instance, you use one container.
Docker images/containers are units of deployment
A container is an instance of a docker image.
A host VM handles many containers.
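As a sketch, packaging a service as an image takes only a few lines of Dockerfile (the base image tag, paths, and assembly name here are illustrative for a .NET service):

```dockerfile
# Runtime-only image for a hypothetical Ordering service.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "Ordering.Api.dll"]
```

You would then build the image with `docker build -t ordering .` and run a container (an instance of that image) with `docker run -d -p 8080:80 ordering`.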
Transaction Outbox Pattern
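In short: write the integration event to an "outbox" table in the same local transaction as the state change, and have a separate relay publish it to the broker later, giving at-least-once delivery without a distributed transaction. A sketch using SQLite (table, column, and event names are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, event TEXT,"
           " published INTEGER DEFAULT 0)")

def place_order(total):
    # One atomic local transaction for both the state change and the event.
    with db:
        cur = db.execute("INSERT INTO orders (total) VALUES (?)", (total,))
        db.execute("INSERT INTO outbox (event) VALUES (?)",
                   (f"OrderCreated:{cur.lastrowid}",))

def relay_once(publish):
    """The relay: publish pending events, then mark them as published."""
    rows = db.execute("SELECT id, event FROM outbox WHERE published = 0").fetchall()
    for row_id, event in rows:
        publish(event)  # in real code, this sends to the message broker
        db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    db.commit()

published = []
place_order(99.5)
relay_once(published.append)
print(published)  # -> ['OrderCreated:1']
```

If the relay crashes between publishing and marking, the event is published again on the next pass, which is why subscribers need the idempotency handling described in the integration-event section.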
Service Mesh
In software architecture, a service mesh is a dedicated infrastructure layer that facilitates service-to-service communication between services or microservices, using a proxy. It helps deploy new code without disturbing the old, for example by migrating traffic gradually as in canary deployments. Service discovery can be outsourced to the mesh, and load balancing and fault tolerance can also be handled by it.
AWS Lambda
The function code is stored in AWS storage; when the trigger (defined for the function) fires, the function is deployed to compute resources and starts executing. It deploys one function instance per concurrent request, scales automatically, and is torn down automatically too. It is triggered by events and can also trigger other functions.
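A Lambda function is just a handler with a fixed signature; a minimal Python sketch (the event shape here is illustrative, since the real structure depends on the configured trigger, e.g. API Gateway or S3):

```python
import json

def handler(event, context):
    """Entry point Lambda invokes: receives the trigger's event payload."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally, the handler is just a function you can call directly:
print(handler({"name": "lambda"}, None))
```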