Confluence

Unified Fan-in and Fan-out Event Processing

Confluence is Happen's system for handling multiple events and multiple nodes with minimal API surface. It provides powerful capabilities for both event batching (fan-in) and multi-node distribution (fan-out) through a single, intuitive container pattern.

Core Idea

Confluence embodies a simple concept: When something is in a container, it represents a collection. This principle applies consistently whether you're working with nodes or events:

  • An array of nodes means "multiple receivers"

  • An array of events means "a batch of events"

  • Handlers naturally work with both individual items and collections

This symmetry creates a powerful system with virtually no new API surface.

API Surface

The entire Confluence system introduces zero new methods, instead extending Happen's existing API to work with arrays:

Fan-out: One Event, Multiple Nodes

// Register a handler across multiple nodes using an array
[orderNode, paymentNode, inventoryNode].on("update", (event, context) => {
  // Handler receives events from any of these nodes
  console.log(`Processing update in ${context.node.id}`);
});

// Send an event to multiple nodes using an array
[orderNode, shippingNode, notificationNode].send({
  type: "order-completed",
  payload: { orderId: "ORD-123" }
});

Fan-in: Multiple Events, One Handler
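The sketch below assumes, per the container principle above, that an array of events passed to send() is delivered as a single batch and that the registered handler then receives that array; the node name, event type, and payloads are illustrative.

// Register a handler that works with a whole batch at once
// (the handler receiving the array is assumed from the container principle)
analyticsNode.on("sensor-reading", (events, context) => {
  const values = events.map((event) => event.payload.value);
  const average = values.reduce((sum, value) => sum + value, 0) / values.length;
  console.log(`Averaged ${events.length} readings: ${average}`);
});

// A batch is just an array of events
const readings = [
  { type: "sensor-reading", payload: { sensorId: "S-1", value: 21.4 } },
  { type: "sensor-reading", payload: { sensorId: "S-2", value: 22.1 } },
  { type: "sensor-reading", payload: { sensorId: "S-3", value: 20.9 } }
];

// Send the whole array to a single node in one call
analyticsNode.send(readings);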

The Event Continuum and Divergent Flows

When an event is sent to multiple nodes using Confluence, each node processes it through its own independent flow chain, creating "divergent flows" - parallel processing paths that naturally extend the Event Continuum model:
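As a sketch of what this looks like in practice, using only the array pattern shown above (node names, handler bodies, and return values are illustrative):

// Each node registers its own handler, so each branch runs in its own
// isolated context and can diverge based on node-specific logic
orderNode.on("order-completed", (event, context) => {
  return { archived: true };                            // branch A: archive the order
});

paymentNode.on("order-completed", (event, context) => {
  return { invoiceId: `INV-${event.payload.orderId}` }; // branch B: raise an invoice
});

inventoryNode.on("order-completed", (event, context) => {
  return { restockCheck: "scheduled" };                 // branch C: schedule a restock check
});

// A single send creates three parallel branches in the causal tree
[orderNode, paymentNode, inventoryNode].send({
  type: "order-completed",
  payload: { orderId: "ORD-123" }
});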

This creates a causal tree structure where:

  • Each node has its own isolated context

  • Flow paths can diverge based on node-specific logic

  • Return values are tracked per node

  • The complete causal history is preserved

When you need the results from multiple nodes:
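A sketch of collecting those results, under the assumption that a multi-node send() can be awaited and resolves to an object keyed by node ID (as described under Results Collection below); the IDs and values shown are illustrative:

// Assumed shape: awaiting a multi-node send yields one entry per node
const results = await [orderNode, paymentNode, inventoryNode].send({
  type: "order-completed",
  payload: { orderId: "ORD-123" }
});

console.log(results["payment"]);   // e.g. { invoiceId: "INV-ORD-123" }
console.log(results["inventory"]); // e.g. { restockCheck: "scheduled" }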

No Config Needed

Since batching is fully explicit with arrays, there's no magical configuration needed. Batches are simply arrays of events that you create and send directly:
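For example, a batch can be assembled with ordinary array operations and sent in one call; pendingOrders and analyticsNode here are placeholders:

// A batch is just an array you build yourself - no configuration involved
const batch = [];

for (const order of pendingOrders) {
  batch.push({ type: "order-created", payload: { orderId: order.id } });
}

// One explicit call sends the whole batch
analyticsNode.send(batch);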

This direct approach ensures complete predictability with no behind-the-scenes magic.

Causality Preservation

Even with batches and multi-node processing, Confluence maintains Happen's causality guarantees:

  • The causal context is preserved for each event in a batch

  • Each event maintains its position in the causal chain

  • Batch processing still records each event's causal history

  • Divergent flows are tracked as branches in the causal tree

This means you can always trace the complete history of events, even when they've been processed in batches or across multiple nodes.

Context in Multi-Node Operations

When working with multiple nodes, context handling is structured to maintain node-specific isolation while providing clear access to relevant information:

Node-Specific Context Structure

When a handler receives an event in a multi-node operation, the context structure clearly identifies which node is processing it:
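A minimal sketch using the context.node.id accessor from the fan-out example above; the node IDs are illustrative:

[orderNode, paymentNode].on("update", (event, context) => {
  // context.node identifies the node currently running this handler
  if (context.node.id === "payment") {
    // payment-specific handling for this branch
  }
  console.log(`Handled by ${context.node.id}`);
});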

Results Collection

When collecting results from multiple nodes, the returned object has a clear structure with node IDs as keys:
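An illustrative shape, assuming the awaited form sketched earlier; the node IDs and values are hypothetical:

// One key per receiving node, each holding that node's return value
const results = {
  "order":     { archived: true },
  "payment":   { invoiceId: "INV-ORD-123" },
  "inventory": { restockCheck: "scheduled" }
};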

Batch Context Structure

For batch operations (multiple events to a single node), the context provides batch-level information at the root, with individual event contexts in an array:
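A possible shape, reconstructed from the description above; the field names (size, events) are assumptions, not confirmed API:

// Illustrative only - batch-level information at the root,
// with one context entry per event in the batch
const batchContext = {
  node: { id: "analytics" },
  size: 3,
  events: [
    { /* context for event 0 */ },
    { /* context for event 1 */ },
    { /* context for event 2 */ }
  ]
};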

Combined Multi-Node and Batch Operations

In the rare case of both batching and multi-node operations, the context structure maintains clear separation:
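A sketch of the combined case, reusing the assumed shapes above; each node keeps its own identity at the top level of its context, with batch details alongside it:

// A batch of events sent to several nodes at once
[orderNode, auditNode].send([
  { type: "order-created", payload: { orderId: "ORD-1" } },
  { type: "order-created", payload: { orderId: "ORD-2" } }
]);

// Each node still receives its own isolated context, e.g.
// { node: { id: "audit" }, size: 2, events: [ ... ] }  // shape assumed, as above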

This context structure ensures that no matter how complex the operation, each node maintains its own isolated processing environment while still providing clear access to all necessary information.

Examples

Explicit Batch Creation and Processing
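A sketch of a bulk-persistence flow; dbNode, db.bulkInsert, and the payload fields are placeholders for your own node and storage layer, and the handler receiving the array is assumed as before:

// The handler receives the batch and writes it in a single database call
dbNode.on("user-signup", async (events, context) => {
  const rows = events.map((event) => ({
    id: event.payload.userId,
    email: event.payload.email
  }));
  await db.bulkInsert("users", rows); // placeholder for your storage layer
  return { inserted: rows.length };
});

// Explicitly create the batch...
const signups = [
  { type: "user-signup", payload: { userId: "U-1", email: "a@example.com" } },
  { type: "user-signup", payload: { userId: "U-2", email: "b@example.com" } }
];

// ...and send it in one call
dbNode.send(signups);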

Distributed Notification System with Divergent Flows
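A sketch of a notification fan-out in which each channel diverges onto its own flow; the node names and payload fields are illustrative:

// Three channels, three divergent flows for the same event
emailNode.on("user-registered", (event, context) => {
  return { channel: "email", to: event.payload.email };
});

smsNode.on("user-registered", (event, context) => {
  return { channel: "sms", to: event.payload.phone };
});

auditNode.on("user-registered", (event, context) => {
  return { channel: "audit", recordedAt: Date.now() };
});

// One send fans the event out to every channel
[emailNode, smsNode, auditNode].send({
  type: "user-registered",
  payload: { userId: "U-7", email: "ana@example.com", phone: "+1 555 0100" }
});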

Manual Batch Processing
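A sketch of processing a batch manually in a single pass; reportNode and the payload fields are illustrative:

// Aggregate the whole batch in one pass instead of handling events one by one
reportNode.on("sale-recorded", (events, context) => {
  const totals = events.reduce((acc, event) => {
    const { region, amount } = event.payload;
    acc[region] = (acc[region] || 0) + amount;
    return acc;
  }, {});
  return totals; // e.g. { EU: 1200, US: 3400 }
});

reportNode.send([
  { type: "sale-recorded", payload: { region: "EU", amount: 1200 } },
  { type: "sale-recorded", payload: { region: "US", amount: 1800 } },
  { type: "sale-recorded", payload: { region: "US", amount: 1600 } }
]);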

Performance Considerations

Confluence is designed to be explicit and predictable while still providing performance benefits:

When to Use Batching

  • High-volume event streams with similar event types

  • Processing that benefits from aggregation (like analytics)

  • Networks with significant latency where reducing round-trips helps

  • Calculations that can be optimized when performed on multiple items together

When to Use Multi-node Operations

  • Broadcast notifications that multiple subsystems need to process

  • Commands that affect multiple services simultaneously

  • Cross-cutting concerns like logging, monitoring, or security

  • Redundant processing for critical operations

Batch Processing Efficiency

For maximum efficiency when processing batches:

  1. Use Specialized Algorithms: Many operations are more efficient on batches (like bulk database inserts)

  2. Minimize Per-event Overhead: Amortize setup costs across multiple events

  3. Leverage Memory Locality: Process related data together to improve cache efficiency

  4. Prefer Single Passes: Process the entire batch in one pass rather than multiple iterations

Memory Management

Batch processing can also help with memory efficiency:
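For example, buffering a high-volume stream into fixed-size batches keeps memory bounded and lets each batch be released as soon as it is handed off; the batch size and node name here are illustrative:

const BATCH_SIZE = 500;
let pending = [];

function record(event) {
  pending.push(event);
  if (pending.length >= BATCH_SIZE) {
    analyticsNode.send(pending); // hand off one bounded batch...
    pending = [];                // ...and drop the references so it can be collected
  }
}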

Conclusion

Confluence provides powerful capabilities for handling multiple events and multiple nodes while maintaining Happen's commitment to radical simplicity. Through a single intuitive container pattern - using arrays for both nodes and events - it enables sophisticated batch processing and multi-node communication without introducing special methods or complex APIs.

The system offers:

  1. Pure Symmetry: The same pattern works for both nodes and events

  2. Explicit Control: No "magic" - batching and multi-node operations are explicit

  3. Divergent Flows: Natural extension of the Event Continuum to parallel processing

  4. Zero New Methods: Works entirely through existing APIs with array support

  5. Powerful Capabilities: Enables sophisticated patterns with minimal complexity

By recognizing that arrays naturally represent collections, Confluence stays true to Happen's core philosophy of simplicity while enabling a powerful system with virtually no learning curve.
