Features Overview¶
Curve provides production-ready features for event-driven microservices out of the box.
Core Features¶
Declarative Event Publishing¶
Publish events with a single annotation - no boilerplate code required.
```java
@PublishEvent(eventType = "ORDER_CREATED")
public Order createOrder(OrderRequest request) {
    return orderRepository.save(new Order(request));
}
```
Benefits:
- 90% less code compared to manual Kafka usage
- Type-safe with compile-time validation
- SpEL support for flexible payload extraction
Standardized Event Structure¶
All events follow a CloudEvents-inspired schema:

```json
{
  "eventId": "7355889748156289024",
  "eventType": "ORDER_CREATED",
  "occurredAt": "2026-02-03T10:30:00Z",
  "publishedAt": "2026-02-03T10:30:00.123Z",
  "severity": "INFO",
  "metadata": {
    "source": { ... },
    "actor": { ... },
    "trace": { ... },
    "tags": { ... }
  },
  "payload": { ... }
}
```
Metadata includes:
- Source: Service name, version, hostname
- Actor: User ID, session ID, roles
- Trace: Distributed tracing (trace ID, span ID)
- Tags: Custom key-value pairs
3-Tier Failure Recovery¶
Main Topic → DLQ → Local File Backup
Zero event loss even when Kafka is completely down.
- Primary: Publish to main Kafka topic
- DLQ: Failed events sent to Dead Letter Queue
- Backup: If Kafka unavailable, save to local disk
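The three tiers above can be sketched as a fallback chain. This is a minimal illustration, not Curve's actual internals; the `Publisher` and `FallbackChain` names are invented for the example:

```java
import java.nio.file.*;
import java.util.*;

// Illustrative sketch of the 3-tier cascade; names are hypothetical,
// not Curve's real classes.
interface Publisher {
    void publish(String event) throws Exception;
}

class FallbackChain {
    private final Publisher mainTopic;
    private final Publisher dlq;
    private final Path backupDir;

    FallbackChain(Publisher mainTopic, Publisher dlq, Path backupDir) {
        this.mainTopic = mainTopic;
        this.dlq = dlq;
        this.backupDir = backupDir;
    }

    void send(String event) throws Exception {
        try {
            mainTopic.publish(event);            // 1. primary Kafka topic
        } catch (Exception primaryFailure) {
            try {
                dlq.publish(event);              // 2. Dead Letter Queue
            } catch (Exception dlqFailure) {
                // 3. Kafka fully unavailable: persist to local disk
                Path file = backupDir.resolve(UUID.randomUUID() + ".json");
                Files.writeString(file, event);
            }
        }
    }
}
```

The key property is that each tier is only tried when the one before it fails, so the event always lands somewhere durable.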
Automatic PII Protection¶
Annotate sensitive fields and Curve handles the rest:
```java
public class UserPayload implements DomainEventPayload {

    @PiiField(type = PiiType.EMAIL, strategy = PiiStrategy.MASK)
    private String email;   // → "j***@ex***.com"

    @PiiField(type = PiiType.PHONE, strategy = PiiStrategy.ENCRYPT)
    private String phone;   // → Encrypted with AES-256-GCM

    @PiiField(type = PiiType.NAME, strategy = PiiStrategy.HASH)
    private String name;    // → HMAC-SHA256 hashed
}
```
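The `MASK` strategy shown for emails can be approximated like this. This is a simplified sketch written for illustration; Curve's real masking rules may handle edge cases differently:

```java
// Simplified email-masking sketch mirroring the MASK output shown above;
// Curve's actual rules may differ (this is not its real implementation).
class PiiMasker {
    static String maskEmail(String email) {
        int at = email.indexOf('@');
        int dot = email.lastIndexOf('.');
        String local = email.substring(0, at);
        String domain = email.substring(at + 1, dot);
        String tld = email.substring(dot + 1);
        return local.charAt(0) + "***@"
                + domain.substring(0, Math.min(2, domain.length()))
                + "***." + tld;
    }
}

// PiiMasker.maskEmail("john@example.com") → "j***@ex***.com"
```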
High Performance¶
| Mode | Throughput | Use Case |
|---|---|---|
| Sync | ~500 TPS | Strong consistency |
| Async | ~10,000+ TPS | High throughput |
| Transactional Outbox | ~1,000 TPS | Atomicity guarantee |
Async mode propagates MDC context across the thread handoff, so trace IDs stay attached to log lines written from the publishing thread.
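Context propagation across an executor follows the standard task-decorator pattern: capture the caller's context map before handing off, restore it on the worker thread. The sketch below uses a plain `ThreadLocal` map as a stand-in for SLF4J's MDC so it runs without a logging dependency; Curve's actual wiring may differ:

```java
import java.util.*;

// Generic context-propagation sketch. The ThreadLocal map stands in for
// SLF4J's MDC; Curve's real implementation may be wired differently.
class Context {
    static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    // Wrap a task so the submitter's context is visible on the worker thread.
    static Runnable propagate(Runnable task) {
        Map<String, String> captured = new HashMap<>(CTX.get()); // caller thread
        return () -> {
            CTX.set(captured);       // restore on worker thread
            try {
                task.run();
            } finally {
                CTX.remove();        // avoid leaking into pooled threads
            }
        };
    }
}
```

The `finally` cleanup matters with thread pools: without it, a recycled worker thread would report the previous task's trace ID.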
Transactional Outbox Pattern¶
Guarantee atomicity between the database write and event publishing:

```java
@Transactional
@PublishEvent(
    eventType = "ORDER_CREATED",
    outbox = true,
    aggregateType = "Order",
    aggregateId = "#result.id"
)
public Order createOrder(OrderRequest request) {
    return orderRepository.save(new Order(request));
}
```
How it works:
- Event saved to DB in same transaction
- Background poller publishes to Kafka
- Exponential backoff for retries
- `SKIP LOCKED` prevents duplicate processing
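The retry schedule follows the usual doubling pattern. A minimal sketch; the 1 s base and 60 s cap below are illustrative values, not Curve's documented defaults:

```java
// Exponential backoff sketch: the delay doubles per attempt up to a cap.
// Base (1s) and cap (60s) are illustrative, not Curve's actual defaults.
class Backoff {
    static long delayMillis(int attempt, long baseMillis, long capMillis) {
        long delay = baseMillis * (1L << Math.min(attempt, 30)); // clamp shift to avoid overflow
        return Math.min(delay, capMillis);
    }
}

// attempt 0 → 1000 ms, attempt 3 → 8000 ms, attempt 10 → capped at 60000 ms
```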
Built-in Observability¶
Health Checks¶
```json
{
  "status": "UP",
  "details": {
    "kafkaProducerInitialized": true,
    "clusterId": "lkc-abc123",
    "nodeCount": 3,
    "topic": "event.audit.v1",
    "dlqTopic": "event.audit.dlq.v1"
  }
}
```
Custom Metrics¶
```json
{
  "summary": {
    "totalEventsPublished": 1523,
    "successfulEvents": 1520,
    "failedEvents": 3,
    "successRate": "99.80%"
  }
}
```
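The `successRate` field is just the ratio of the two counters above, formatted as a percentage:

```java
// successRate = successfulEvents / totalEventsPublished, as a percentage.
class Metrics {
    static String successRate(long successful, long total) {
        // Locale.ROOT keeps the decimal point independent of system locale.
        return String.format(java.util.Locale.ROOT, "%.2f%%",
                100.0 * successful / total);
    }
}

// Metrics.successRate(1520, 1523) → "99.80%"
```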
Architecture¶
Hexagonal Architecture (Ports & Adapters)¶
```
┌─────────────────────────────────────┐
│        Domain Layer (Core)          │
│  • EventEnvelope, EventMetadata     │
│  • Framework-independent            │
└───────────────┬─────────────────────┘
                │
        ┌───────┴────────┐
        │                │
        ▼                ▼
  ┌───────────┐    ┌────────────┐
  │  Spring   │    │   Kafka    │
  │ (Adapter) │    │ (Adapter)  │
  └───────────┘    └────────────┘
```
Benefits:
- Framework-independent core
- Easy to test
- Extensible (can swap Kafka for RabbitMQ, etc.)
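The swap-the-broker benefit falls out of coding the domain core against a port interface. A minimal sketch of the shape; the type names here are hypothetical, not Curve's actual API:

```java
// Hexagonal sketch: the domain core depends only on the port interface,
// and brokers plug in as adapters. Names are illustrative, not Curve's API.
interface EventPublisherPort {
    void publish(String eventType, String payload);
}

class KafkaAdapter implements EventPublisherPort {
    public void publish(String eventType, String payload) {
        // a real adapter would call a KafkaProducer here
        System.out.println("kafka <- " + eventType);
    }
}

class RabbitMqAdapter implements EventPublisherPort {
    public void publish(String eventType, String payload) {
        // a real adapter would use a RabbitMQ channel here
        System.out.println("rabbitmq <- " + eventType);
    }
}

// Domain service: knows nothing about which broker sits behind the port.
class OrderService {
    private final EventPublisherPort events;

    OrderService(EventPublisherPort events) {
        this.events = events;
    }

    void createOrder(String id) {
        events.publish("ORDER_CREATED", "{\"id\":\"" + id + "\"}");
    }
}
```

Because `OrderService` only sees the port, tests can hand it an in-memory fake and production can swap adapters without touching domain code.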
What's Next?¶
- **Quick Start**: Get up and running in 5 minutes
- **PII Protection**: Protect sensitive data automatically
- **Failure Recovery**: Handle failures gracefully
- **Configuration**: Production-ready settings