Cloud Design Pattern Tuesdays

May 7, 2025 · 1684 words · 9 min read
Tags: azure, waf, design patterns, architecture

WAF Design Patterns

This collection contains a summary and a simple example implementation of the cloud design patterns covered by the Azure Well-Architected Framework (WAF).

Ambassador Pattern

The Ambassador pattern is a design pattern where a helper service acts as an intermediary between a client and backend services.

It's particularly useful for:

  • Offloading responsibilities like logging, routing, retry policies, or monitoring
  • Supporting legacy applications or hard-to-change systems
  • Building shared client connectivity features
  • Offloading connectivity concerns to specialists

This pattern is best suited when you need to abstract cross-cutting concerns and is commonly used in microservices architectures. However, it's not recommended for latency-critical applications or when features require deep client integration.
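
As a rough illustration, here is a minimal Python sketch (the class, the backend call, and the retry policy are all invented for the example) of an ambassador that owns retry and logging behaviour so the client itself stays unchanged:

  import logging
  import time

  logging.basicConfig(level=logging.INFO)

  class Ambassador:
      """Sits between the client and a backend call, owning retry and logging policy."""
      def __init__(self, backend_call, max_retries=3, delay_seconds=0.5):
          self.backend_call = backend_call
          self.max_retries = max_retries
          self.delay_seconds = delay_seconds

      def send(self, request):
          for attempt in range(1, self.max_retries + 1):
              try:
                  logging.info("attempt %d: forwarding request", attempt)
                  return self.backend_call(request)
              except ConnectionError:
                  logging.warning("attempt %d failed, retrying", attempt)
                  time.sleep(self.delay_seconds)
          raise RuntimeError("backend unavailable after retries")

  # The (possibly legacy) client only ever talks to the ambassador:
  # ambassador = Ambassador(backend_call=call_legacy_service)  # call_legacy_service is hypothetical
  # ambassador.send({"order_id": 42})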

Read more about the Ambassador Pattern

Anti-Corruption Layer Pattern

The Anti-Corruption Layer pattern introduces a layer between two subsystems to prevent undesirable dependencies and preserve the integrity of the internal model.

It's particularly useful for:

  • Integrating legacy systems with modern applications
  • Working with external services that have different models or protocols
  • Ensuring an application's design isn't limited by external dependencies
  • Maintaining clean boundaries between different subsystems

This pattern is best suited when you need to integrate systems with differing semantics, but it's not recommended when there are no significant semantic differences between the systems or when the layer would introduce unnecessary complexity.
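
A minimal Python sketch of the idea (the legacy field names and the internal model are invented): a small adapter translates legacy records into the internal model so legacy naming and conventions never leak into the application.

  from dataclasses import dataclass

  # Internal domain model used by the modern application.
  @dataclass
  class Customer:
      customer_id: str
      full_name: str
      email: str

  class LegacyCustomerAdapter:
      """Anti-corruption layer: translates legacy records into the internal model."""
      def to_domain(self, legacy_record: dict) -> Customer:
          return Customer(
              customer_id=str(legacy_record["CUST_NO"]),
              full_name=f'{legacy_record["FIRST_NM"]} {legacy_record["LAST_NM"]}',
              email=legacy_record["EMAIL_ADDR"].lower(),
          )

  adapter = LegacyCustomerAdapter()
  customer = adapter.to_domain(
      {"CUST_NO": 1001, "FIRST_NM": "Ada", "LAST_NM": "Lovelace", "EMAIL_ADDR": "ADA@EXAMPLE.COM"}
  )
  print(customer)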

Read more about the Anti-Corruption Layer Pattern

Asynchronous Request-Reply Pattern

The Asynchronous Request-Reply pattern decouples backend processing from frontend hosts by allowing the sender to continue processing without waiting for an immediate reply.

It's particularly useful for:

  • Decoupling services and increasing system reliability
  • Handling long-running operations without blocking
  • Buffering workloads and managing system load
  • Enabling systems to operate independently

This pattern is best suited for distributed systems and event-driven architectures where immediate feedback isn't critical. However, it's not recommended for real-time systems requiring low latency or for simple operations where the added complexity isn't justified.
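
The sketch below is a toy, in-process Python version of the flow (a real implementation would typically return HTTP 202 plus a status endpoint): the submitter gets a job id immediately, the work runs in the background, and the client polls for the result when it chooses.

  import threading
  import time
  import uuid

  jobs = {}  # job_id -> {"status": ..., "result": ...}

  def submit(work_item):
      """Accept the request immediately and hand back a job id (the 'reply address')."""
      job_id = str(uuid.uuid4())
      jobs[job_id] = {"status": "Running", "result": None}
      threading.Thread(target=_process, args=(job_id, work_item), daemon=True).start()
      return job_id

  def _process(job_id, work_item):
      time.sleep(2)  # simulate a long-running backend operation
      jobs[job_id] = {"status": "Succeeded", "result": work_item.upper()}

  def get_status(job_id):
      """Polling endpoint the client calls until the job completes."""
      return jobs[job_id]

  job_id = submit("hello")
  while get_status(job_id)["status"] == "Running":
      time.sleep(0.5)  # the client is free to do other work between polls
  print(get_status(job_id))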

Read more about the Asynchronous Request-Reply Pattern

Backends for Frontends Pattern

The Backends for Frontends Pattern creates dedicated backend services for each frontend interface (web, mobile, IoT) to handle their unique requirements and optimize their specific needs.

It's particularly useful for:

  • Applications with multiple frontends requiring different data shapes
  • Reducing over-fetching and under-fetching of data
  • Improving modularity and maintainability
  • Simplifying API development for each frontend type

This pattern is best suited when frontends have distinct requirements that would make a shared backend overly complex. However, it's not recommended for applications with simple or uniform frontend requirements where a single API would suffice.
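
As a simplified Python sketch (the product record and the field choices are invented), two small BFF functions shape the same domain data differently for web and mobile clients:

  PRODUCT = {  # shared domain record served by a core product service
      "id": "p-1",
      "name": "Espresso Machine",
      "description": "15-bar pump, stainless steel body, 1.5 l tank.",
      "price": 249.0,
      "images": ["front.jpg", "side.jpg", "top.jpg"],
  }

  def web_bff(product: dict) -> dict:
      """Web frontend wants rich detail for a product page."""
      return {"id": product["id"], "name": product["name"],
              "description": product["description"], "price": product["price"],
              "images": product["images"]}

  def mobile_bff(product: dict) -> dict:
      """Mobile frontend wants a compact payload: no over-fetching."""
      return {"id": product["id"], "name": product["name"],
              "price": product["price"], "thumbnail": product["images"][0]}

  print(web_bff(PRODUCT))
  print(mobile_bff(PRODUCT))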

Read more about the Backends for Frontends Pattern

Bulkhead Pattern

The Bulkhead Pattern isolates elements of an application into pools to prevent cascading failures and ensure that a failure in one component doesn't impact others.

It's particularly useful for:

  • Preventing cascading failures across microservices
  • Isolating critical resources like database connections
  • Managing task queues and compute resources
  • Improving fault tolerance and recovery strategies

This pattern is best suited for complex applications where component isolation is crucial for reliability. However, it's not recommended for lightweight applications where the isolation complexity would outweigh the benefits, or for services with minimal risk of resource exhaustion.
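
A minimal Python sketch using separate thread pools as bulkheads (the services and timings are made up): flooding the pool for one dependency leaves the other pool, and the work it serves, untouched.

  import time
  from concurrent.futures import ThreadPoolExecutor

  # One isolated pool per downstream dependency: exhausting the payment pool
  # cannot consume the threads reserved for the inventory service.
  payment_pool = ThreadPoolExecutor(max_workers=2)
  inventory_pool = ThreadPoolExecutor(max_workers=2)

  def call_payment_service(order_id):
      time.sleep(2)  # simulate a slow or unresponsive dependency
      return f"payment ok for {order_id}"

  def call_inventory_service(sku):
      time.sleep(0.1)
      return f"{sku}: 12 in stock"

  # Flood the payment bulkhead with slow calls...
  for order_id in range(6):
      payment_pool.submit(call_payment_service, order_id)

  # ...inventory calls still complete promptly because their pool is isolated.
  print(inventory_pool.submit(call_inventory_service, "sku-42").result())

  # Cancel the queued slow calls so the example exits promptly (Python 3.9+).
  payment_pool.shutdown(cancel_futures=True)
  inventory_pool.shutdown()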

Read more about the Bulkhead Pattern

Cache-Aside Pattern

The Cache-Aside Pattern is a caching strategy where the application is responsible for managing both reading from and writing to the cache. Data is loaded into the cache on-demand.

It's particularly useful for:

  • Read-heavy applications that can tolerate minor data staleness
  • Reducing load on the primary data store
  • Improving application performance and scalability
  • Applications where you need granular control over the cache

This pattern is best suited for scenarios where read performance is a priority and eventual consistency is acceptable. However, it is not ideal for write-heavy workloads or when strong data consistency between the cache and the data store is required.
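
A small Python sketch of the load-on-miss and invalidate-on-write flow, using plain dictionaries in place of a real cache (such as Redis) and a real data store:

  import time

  cache = {}                               # stand-in for a distributed cache
  database = {"user:1": {"name": "Ada"}}   # stand-in for the primary data store
  TTL_SECONDS = 60

  def get_user(key):
      entry = cache.get(key)
      if entry and time.time() - entry["cached_at"] < TTL_SECONDS:
          return entry["value"]                                    # cache hit
      value = database[key]                                        # miss: read from the store
      cache[key] = {"value": value, "cached_at": time.time()}      # populate on demand
      return value

  def update_user(key, value):
      database[key] = value        # write to the store...
      cache.pop(key, None)         # ...and invalidate the cached copy

  print(get_user("user:1"))   # miss -> loads from the database
  print(get_user("user:1"))   # hit  -> served from the cache
  update_user("user:1", {"name": "Ada L."})
  print(get_user("user:1"))   # miss again after invalidation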

Read more about the Cache-Aside Pattern

Choreography Pattern

The Choreography Pattern is a decentralized approach to coordinating a workflow: rather than relying on a central orchestrator, each service reacts to events and independently performs its part.

It's particularly useful for:

  • Event-driven architectures and microservices
  • Systems that benefit from loose coupling and service autonomy
  • Improving reliability and scalability by removing central points of failure
  • Workflows that evolve independently across services

This pattern is best suited when you want services to operate independently and communicate through published events. However, it is not ideal for highly structured workflows that require strict ordering, centralized control, or complex error recovery logic.
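
The sketch below is a toy in-process Python event bus (the order-processing events and handlers are invented) showing each service reacting to an event and publishing its own, with no central coordinator:

  from collections import defaultdict

  subscribers = defaultdict(list)   # event name -> handlers (a tiny in-process event bus)

  def publish(event, payload):
      for handler in subscribers[event]:
          handler(payload)

  def on(event):
      def register(handler):
          subscribers[event].append(handler)
          return handler
      return register

  # Each service reacts to events and publishes its own; nothing coordinates the whole flow.
  @on("OrderPlaced")
  def reserve_stock(order):
      print(f"inventory: reserving stock for {order['id']}")
      publish("StockReserved", order)

  @on("StockReserved")
  def charge_payment(order):
      print(f"payment: charging order {order['id']}")
      publish("PaymentCaptured", order)

  @on("PaymentCaptured")
  def ship_order(order):
      print(f"shipping: dispatching order {order['id']}")

  publish("OrderPlaced", {"id": "o-1"})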

Read more about the Choreography Pattern

Circuit Breaker Pattern

The Circuit Breaker Pattern is a stability pattern that prevents an application from repeatedly trying operations likely to fail by temporarily halting calls to a service when failures cross a threshold.

It's particularly useful for:

  • External service calls and remote APIs that may experience intermittent issues
  • Preventing cascading failures and isolating faults
  • Improving system resilience and availability
  • Enabling fallback paths and graceful recovery

This pattern is best suited when you want to safeguard systems against persistent failures and avoid overwhelming dependent services. However, it may add unnecessary complexity if failures are rare or short-lived, or if retrying has minimal impact.
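
A minimal Python sketch of the closed / open / half-open state machine; the thresholds, timeout, and wrapped call are illustrative assumptions rather than a production-ready implementation.

  import time

  class CircuitBreaker:
      """Opens after `failure_threshold` consecutive failures, rejects calls while
      open, and allows a single trial call once `reset_timeout` has elapsed."""
      def __init__(self, failure_threshold=3, reset_timeout=30.0):
          self.failure_threshold = failure_threshold
          self.reset_timeout = reset_timeout
          self.failure_count = 0
          self.opened_at = None
          self.state = "closed"

      def call(self, func, *args, **kwargs):
          if self.state == "open":
              if time.time() - self.opened_at >= self.reset_timeout:
                  self.state = "half-open"   # allow one trial call
              else:
                  raise RuntimeError("circuit open: call rejected without hitting the service")
          try:
              result = func(*args, **kwargs)
          except Exception:
              self.failure_count += 1
              if self.state == "half-open" or self.failure_count >= self.failure_threshold:
                  self.state = "open"
                  self.opened_at = time.time()
              raise
          self.failure_count = 0
          self.state = "closed"
          return result

  # breaker = CircuitBreaker()
  # breaker.call(call_remote_api, "https://example.invalid/health")  # call_remote_api is hypothetical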

Read more about the Circuit Breaker Pattern

Claim-Check Pattern

The Claim-Check Pattern offloads large or sensitive data payloads to external storage and transmits only a reference (claim check) through the message pipeline.

It's particularly useful for:

  • Messaging systems with strict size limits
  • Reducing message size and optimizing bandwidth
  • Exchanging large or sensitive data without sending it directly through the messaging infrastructure
  • Improving performance, scalability, and security

This pattern is best suited when large objects or sensitive data need to be exchanged but shouldn't travel through messaging infrastructure directly. However, it adds operational complexity and potential latency, so it's not recommended for simple or lightweight messages.
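
A toy Python sketch, with a dictionary standing in for external blob storage and a list standing in for the message queue: only the claim check (a small reference) travels through the messaging pipeline, while the payload is redeemed on the receiving side.

  import uuid

  blob_store = {}   # stand-in for external storage such as a blob container
  queue = []        # stand-in for a messaging system with a small size limit

  def send_large_payload(payload: bytes):
      claim_check = str(uuid.uuid4())
      blob_store[claim_check] = payload            # store the heavy payload externally
      queue.append({"claim_check": claim_check})   # only the small reference goes on the queue

  def receive():
      message = queue.pop(0)
      return blob_store.pop(message["claim_check"])   # redeem the claim check to fetch the payload

  send_large_payload(b"x" * 10_000_000)   # far larger than most message size limits
  print(len(receive()), "bytes retrieved via the claim check")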

Read more about the Claim-Check Pattern

Compensating Transaction Pattern

The Compensating Transaction Pattern ensures system consistency by undoing work after failures, using defined compensation or rollback actions to revert the system to a stable state.

It's particularly useful for:

  • Long-running, multi-step workflows across distributed services
  • Scenarios where traditional ACID transactions aren't feasible
  • Improving resilience and consistency in distributed systems
  • Minimizing the impact of failures by isolating compensation logic per step

This pattern is best suited when operations can't be committed atomically and need a way to roll back only the affected parts of a process. However, it's not recommended for scenarios requiring strong consistency and atomicity, or where compensating logic is complex or risky to implement.
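
A simplified Python sketch of a travel-booking style workflow (the steps and the simulated failure are invented): each step is paired with a compensating action, and when a step fails the completed steps are undone in reverse order.

  def book_flight(ctx):
      ctx["flight"] = "FL-123"
      print("flight booked")

  def cancel_flight(ctx):
      print("compensate: flight cancelled")

  def book_hotel(ctx):
      ctx["hotel"] = "HT-456"
      print("hotel booked")

  def cancel_hotel(ctx):
      print("compensate: hotel cancelled")

  def charge_card(ctx):
      raise RuntimeError("payment declined")   # simulated failure

  def refund_card(ctx):
      print("compensate: card refunded")

  # Each step is paired with the action that undoes it.
  steps = [(book_flight, cancel_flight), (book_hotel, cancel_hotel), (charge_card, refund_card)]

  def run_workflow(steps):
      context, completed = {}, []
      try:
          for action, compensation in steps:
              action(context)
              completed.append(compensation)
      except Exception as error:
          print(f"step failed ({error}); compensating in reverse order")
          for compensation in reversed(completed):
              compensation(context)

  run_workflow(steps)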

Read more about the Compensating Transaction Pattern

Competing Consumers Pattern

The Competing Consumers Pattern allows multiple independent consumer instances to pull messages from a shared queue, enabling parallel processing and improved throughput.

It's particularly useful for:

  • Message-based systems like order processing and telemetry ingestion
  • Distributing load across multiple workers for better scalability
  • Improving system resilience by removing single processing bottlenecks
  • Enabling horizontal scaling to handle varying workloads

This pattern is best suited when workloads increase and processing needs to scale out. However, it's not recommended for scenarios requiring strict message ordering or transactional consistency across multiple consumers.
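
A minimal Python sketch using a shared in-process queue and several identical worker threads (a real system would typically use a message broker or queue service instead):

  import queue
  import threading
  import time

  work_queue = queue.Queue()   # shared queue that all consumers pull from

  def consumer(worker_id):
      while True:
          message = work_queue.get()
          if message is None:          # sentinel: shut this worker down
              work_queue.task_done()
              return
          time.sleep(0.1)              # simulate processing
          print(f"worker {worker_id} processed {message}")
          work_queue.task_done()

  # Several identical consumers compete for messages from the same queue.
  workers = [threading.Thread(target=consumer, args=(i,)) for i in range(3)]
  for worker in workers:
      worker.start()

  for order_id in range(9):
      work_queue.put(f"order-{order_id}")
  for _ in workers:
      work_queue.put(None)
  work_queue.join()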

Read more about the Competing Consumers Pattern

Compute Resource Consolidation Pattern

The Compute Resource Consolidation Pattern consolidates multiple tasks or operations into a single computational unit to maximize resource utilization and efficiency.

It's particularly useful for:

  • Multi-tenant systems and microservices that can share compute capacity safely
  • Reducing the number of underutilized compute instances for cost optimization
  • Simplifying resource management and lowering operational overhead
  • Lightweight, bursty, or predictable workloads that don't require dedicated resources

This pattern is best suited when workloads are lightweight and can run safely on shared compute resources without affecting each other. However, it's not recommended for scenarios requiring strong workload isolation for compliance or security, or where predictable performance is critical.
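
As a rough Python sketch (the tasks and their schedules are invented), several lightweight jobs that might otherwise each get their own compute instance are consolidated into a single host process sharing one scheduler:

  import sched
  import time

  # Instead of giving each lightweight job its own compute instance, consolidate
  # them into a single host process that shares one scheduler.
  scheduler = sched.scheduler(time.time, time.sleep)

  def cleanup_temp_files():
      print("cleanup task ran")

  def refresh_exchange_rates():
      print("exchange-rate refresh ran")

  def emit_heartbeat():
      print("heartbeat sent")

  # Schedule a few runs of each consolidated task (a real host would loop indefinitely).
  for delay in (1, 2, 3):
      scheduler.enter(delay, 1, emit_heartbeat)
  scheduler.enter(2, 1, refresh_exchange_rates)
  scheduler.enter(3, 1, cleanup_temp_files)

  scheduler.run()   # all tasks execute inside this one process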

Read more about the Compute Resource Consolidation Pattern

Command Query Responsibility Segregation (CQRS) Pattern

The CQRS Pattern separates the models for handling commands (writes that change state) and queries (reads that fetch data), allowing each to evolve independently.

It's particularly useful for:

  • Systems where reads and writes have differing workload profiles
  • High read-load applications and event-driven designs
  • Optimizing query performance without compromising write integrity
  • Allowing each model to scale and be maintained independently

This pattern is best suited when read and write workloads diverge in complexity or scale. However, it's not recommended for simple CRUD applications where separation adds unnecessary complexity, or when immediate consistency is critical across data operations.
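
A minimal Python sketch separating the write model from a denormalized read model (the order fields and the synchronous projection are simplifying assumptions; in practice the read model is often updated asynchronously from events):

  import uuid

  # Write side: commands mutate the authoritative store.
  orders_write_store = {}          # order_id -> full order aggregate
  order_summaries_read_model = []  # denormalized view optimized for queries

  def handle_place_order_command(customer, items):
      order_id = str(uuid.uuid4())
      order = {"id": order_id, "customer": customer, "items": items,
               "total": sum(price for _, price in items)}
      orders_write_store[order_id] = order
      _project_to_read_model(order)          # keep the read model in sync
      return order_id

  def _project_to_read_model(order):
      order_summaries_read_model.append(
          {"id": order["id"], "customer": order["customer"], "total": order["total"]})

  # Read side: queries never touch the write model.
  def query_order_summaries():
      return list(order_summaries_read_model)

  handle_place_order_command("Ada", [("espresso machine", 249.0), ("grinder", 99.0)])
  print(query_order_summaries())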

Read more about the CQRS Pattern

Deployment Stamps Pattern

The Deployment Stamps Pattern involves provisioning multiple independent copies ("stamps") of a solution, each hosting a subset of tenants or workloads, enabling linear scale-out, regional deployment, and data separation.

It's particularly useful for:

  • Multitenant or SaaS workloads needing horizontal scale and tenant isolation
  • Deploying in multiple regions for compliance and performance
  • Staggered updates using deployment rings
  • Solutions that hit scale limits or incur non-linear costs

This pattern is best suited when single deployments hit scale limits or must separate customers by size, update cadence, or region. However, it's not recommended for simple solutions that scale easily on one instance, or systems needing global data replication across all deployments.
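
A toy Python sketch of the routing layer that sits in front of independent stamps; the stamp names, regions, URLs, and tenant assignments are all invented for the example.

  # Each stamp is an independent copy of the whole solution; a lightweight
  # routing layer maps a tenant to its stamp.
  stamps = {
      "stamp-eu-1": {"region": "westeurope", "base_url": "https://eu1.example.invalid"},
      "stamp-us-1": {"region": "eastus",     "base_url": "https://us1.example.invalid"},
      "stamp-us-2": {"region": "eastus",     "base_url": "https://us2.example.invalid"},
  }

  tenant_catalog = {   # tenant -> stamp assignment, e.g. by region, size, or update ring
      "contoso": "stamp-eu-1",
      "fabrikam": "stamp-us-1",
      "adventureworks": "stamp-us-2",
  }

  def route_request(tenant: str, path: str) -> str:
      stamp = stamps[tenant_catalog[tenant]]
      return f'{stamp["base_url"]}{path}'   # the request is handled entirely inside one stamp

  print(route_request("contoso", "/api/orders"))
  print(route_request("fabrikam", "/api/orders"))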

Read more about the Deployment Stamps Pattern

© 2025 Andrei Bodea