Curve Configuration Guide¶
This document describes the detailed configuration methods for the Curve event publishing library.
Table of Contents¶
- Basic Configuration
- Configuration Validation
- Worker ID Configuration
- Kafka Transmission Mode Configuration
- DLQ Configuration
- Backup Strategy Configuration
- Retry Configuration
- AOP Configuration
- PII Protection Configuration
- Outbox Configuration
- Serialization Configuration
- Avro Serialization Configuration
- Logging Configuration
Basic Configuration¶
application.yml¶
curve:
enabled: true # Enable Curve (default: true)
kafka:
topic: event.audit.v1 # Main topic name
dlq-topic: event.audit.dlq.v1 # DLQ topic (optional)
id-generator:
worker-id: 1 # Snowflake Worker ID (0~1023)
auto-generate: false # Auto-generate based on MAC address
Configuration Validation¶
Curve automatically validates configuration values at application startup using @Validated. If invalid configuration values are entered, the application will fail to start with a clear error message.
Validation Rules¶
| Configuration Item | Validation Rule | Error Message |
|---|---|---|
| curve.kafka.topic | Required (non-empty string) | "Kafka topic is required" |
| curve.kafka.retries | 0 or greater | "retries must be 0 or greater" |
| curve.kafka.retry-backoff-ms | Positive number | "retryBackoffMs must be positive" |
| curve.kafka.request-timeout-ms | Positive number | "requestTimeoutMs must be positive" |
| curve.kafka.async-timeout-ms | Positive number | "asyncTimeoutMs must be positive" |
| curve.kafka.sync-timeout-seconds | Positive number | "syncTimeoutSeconds must be positive" |
| curve.kafka.dlq-executor-threads | 1 or greater | "dlqExecutorThreads must be 1 or greater" |
| curve.id-generator.worker-id | 0 ~ 1023 | "workerId must be between 0 and 1023" |
| curve.retry.max-attempts | 1 or greater | "maxAttempts must be 1 or greater" |
| curve.retry.initial-interval | Positive number | "initialInterval must be positive" |
| curve.retry.multiplier | 1 or greater | "multiplier must be 1 or greater" |
| curve.retry.max-interval | Positive number | "maxInterval must be positive" |
| curve.outbox.poll-interval-ms | Positive number | "pollIntervalMs must be positive" |
| curve.outbox.batch-size | 1 ~ 1000 | "batchSize must be between 1 and 1000" |
| curve.outbox.max-retries | 1 or greater | "maxRetries must be 1 or greater" |
| curve.outbox.send-timeout-seconds | Positive number | "sendTimeoutSeconds must be positive" |
| curve.outbox.retention-days | 1 or greater | "retentionDays must be 1 or greater" |
| curve.async.core-pool-size | 1 or greater | "corePoolSize must be at least 1" |
| curve.async.max-pool-size | 1 or greater | "maxPoolSize must be at least 1" |
| curve.async.queue-capacity | 0 or greater | "queueCapacity must be at least 0" |
| curve.kafka.backup.s3-bucket | Required when s3Enabled=true | "s3Bucket is required when s3Enabled=true" |
| curve.serde.schema-registry-url | Required when type=AVRO | "schemaRegistryUrl is required when serde type is AVRO" |
Validation Error Example¶
***************************
APPLICATION FAILED TO START
***************************
Description:
Binding to target org.springframework.boot.context.properties.bind.BindException:
Failed to bind properties under 'curve' to com.project.curve.autoconfigure.CurveProperties failed:
Property: curve.id-generator.worker-id
Value: "2000"
Reason: workerId must be between 0 and 1023
Worker ID Configuration¶
The Snowflake ID Generator uses a Worker ID to generate unique IDs in a distributed environment.
Method 1: Explicit Worker ID Configuration (Recommended)¶
Assign a unique Worker ID to each instance.
Kubernetes Environment Example:
# deployment.yaml
env:
- name: CURVE_ID_GENERATOR_WORKER_ID
valueFrom:
fieldRef:
fieldPath: metadata.uid # Use hashed Pod UID
Docker Compose Example:
# docker-compose.yml
services:
app-1:
environment:
- CURVE_ID_GENERATOR_WORKER_ID=1
app-2:
environment:
- CURVE_ID_GENERATOR_WORKER_ID=2
Method 2: Auto-Generation (Caution)¶
Auto-generate Worker ID based on MAC address.
⚠️ Caution:
- In virtual environments, MAC addresses may be identical, leading to conflicts
- MAC addresses may change when containers restart
- Explicit configuration is recommended for production environments
Worker ID Range¶
- Minimum value: 0
- Maximum value: 1023
- Recommended: Manage using environment variables or configuration management systems (Consul, etcd)
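The 0~1023 range follows from the classic Snowflake layout, which reserves 10 bits of the 64-bit ID for the worker ID (1 sign bit, 41 timestamp bits, 10 worker bits, 12 sequence bits). A minimal sketch of that bit layout — an illustration of the general scheme, not Curve's actual implementation:

```java
// Sketch of the classic Snowflake 64-bit layout (illustrative, not Curve's code):
// sign(1) | timestamp-ms(41) | workerId(10) | sequence(12)
public class SnowflakeLayout {
    static final long WORKER_ID_BITS = 10;
    static final long SEQUENCE_BITS = 12;
    static final long MAX_WORKER_ID = (1L << WORKER_ID_BITS) - 1; // 1023

    static long compose(long timestampMs, long workerId, long sequence) {
        if (workerId < 0 || workerId > MAX_WORKER_ID) {
            throw new IllegalArgumentException("workerId must be between 0 and 1023");
        }
        // Worker bits sit between the timestamp and the per-millisecond sequence.
        return (timestampMs << (WORKER_ID_BITS + SEQUENCE_BITS))
                | (workerId << SEQUENCE_BITS)
                | sequence;
    }
}
```

Because only 10 bits are available, two instances sharing a worker ID can emit identical IDs in the same millisecond — hence the recommendation to manage IDs centrally.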
Kafka Transmission Mode Configuration¶
Curve supports both synchronous and asynchronous transmission modes.
Synchronous Transmission (Default)¶
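Synchronous mode is the default and needs no extra configuration; it can be set explicitly with the async-mode key used in the examples later in this guide:

```yaml
curve:
  kafka:
    async-mode: false   # Synchronous transmission (default)
```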
Characteristics:
- ✅ Guaranteed transmission (clear success/failure confirmation)
- ✅ Easy error handling
- ❌ Performance degradation (blocking)
- ❌ Limited throughput

Suitable for:
- Financial transactions, payments, etc. where accuracy is critical
- Cases where event loss is not acceptable
- Low throughput (tens to hundreds of TPS)
Asynchronous Transmission¶
curve:
kafka:
async-mode: true # Asynchronous transmission
async-timeout-ms: 5000 # 5 seconds timeout
Characteristics:
- ✅ High performance (non-blocking)
- ✅ High throughput capability
- ⚠️ Callback-based error handling
- ⚠️ Relies on DLQ in case of transmission failure

Suitable for:
- Logs, analytics events, etc. where some loss is acceptable
- High throughput required (thousands to tens of thousands of TPS)
- Cases where latency is critical
Performance Comparison¶
| Item | Synchronous Transmission | Asynchronous Transmission |
|---|---|---|
| Throughput (TPS) | ~500 | ~10,000+ |
| Latency | High (10-50ms) | Low (1-5ms) |
| Transmission Guarantee | Strong | Moderate (DLQ dependent) |
| Resource Usage | High | Low |
DLQ Configuration¶
The Dead Letter Queue stores events that fail to be transmitted.
Enable DLQ¶
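A sketch, assuming the DLQ path is activated by configuring a DLQ topic (dlq-topic is marked optional in Basic Configuration):

```yaml
curve:
  kafka:
    topic: event.audit.v1
    dlq-topic: event.audit.dlq.v1   # failed events are routed here
```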
Disable DLQ¶
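A sketch, assuming the DLQ path is inactive when no DLQ topic is configured:

```yaml
curve:
  kafka:
    topic: event.audit.v1
    # dlq-topic omitted: transmission failures are not routed to a DLQ
```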
⚠️ Caution: Disabling DLQ may result in event loss in case of transmission failure.
DLQ Message Structure¶
{
"eventId": "123456789",
"originalTopic": "event.audit.v1",
"originalPayload": "{...}",
"exceptionType": "org.apache.kafka.common.errors.TimeoutException",
"exceptionMessage": "Failed to send message after 3 retries",
"failedAt": 1704067200000
}
Backup Strategy Configuration¶
Configure backup strategies for events that fail to be sent to DLQ.
S3 Backup (Recommended for Cloud)¶
Requirements:
- software.amazon.awssdk:s3 dependency
- S3Client bean in the Spring context
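The backup keys below are taken from the Production Environment example later in this guide:

```yaml
curve:
  kafka:
    backup:
      s3-enabled: true
      s3-bucket: "prod-event-backups"   # the S3Client bean must have write access
      local-enabled: false
```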
Local File Backup¶
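Based on the Development/Test example later in this guide (a local backup path key, if any, is not documented here):

```yaml
curve:
  kafka:
    backup:
      local-enabled: true   # failed events are written to local files
```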
Retry Configuration¶
Automatic retry configuration in case of transmission failure.
Basic Configuration¶
curve:
retry:
enabled: true # Enable retry
max-attempts: 3 # Maximum 3 attempts
initial-interval: 1000 # Initial 1 second wait
multiplier: 2.0 # Increase by 2x (1s -> 2s -> 4s)
max-interval: 10000 # Maximum 10 seconds
Exponential Backoff Example¶
| Attempt | Wait Time |
|---|---|
| 1st | 0ms (immediate) |
| 2nd | 1,000ms (1 second) |
| 3rd | 2,000ms (2 seconds) |
| 4th | 4,000ms (4 seconds) |
Disable Retry¶
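As in the High-Performance Environment example later in this guide:

```yaml
curve:
  retry:
    enabled: false   # No automatic retries on transmission failure
```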
AOP Configuration¶
AOP configuration based on @PublishEvent annotation.
Enable AOP (Default)¶
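Using the curve.aop.enabled key from the complete configuration examples:

```yaml
curve:
  aop:
    enabled: true   # @PublishEvent aspect is active (default)
```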
Disable AOP¶
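With the aspect disabled, @PublishEvent annotations have no effect:

```yaml
curve:
  aop:
    enabled: false   # @PublishEvent annotations are ignored
```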
Async Executor Configuration¶
Curve can register a dedicated curveAsyncExecutor bean for async event processing.
Note: This does NOT force @EnableAsync on the application. If you need @EnableAsync, enable it in your own configuration.
Enable Async Executor¶
curve:
async:
enabled: true # Register curveAsyncExecutor bean
core-pool-size: 2 # Core thread pool size (default: 2)
max-pool-size: 10 # Maximum thread pool size (default: 10)
queue-capacity: 500 # Task queue capacity (default: 500)
Disable Async Executor (Default)¶
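Mirroring the enable example above:

```yaml
curve:
  async:
    enabled: false   # curveAsyncExecutor bean is not registered (default)
```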
PII Protection Configuration¶
The PII (Personally Identifiable Information) protection feature can automatically mask, encrypt, or hash sensitive data.
Basic Configuration¶
curve:
pii:
enabled: true # Enable PII protection (default: true)
crypto:
default-key: ${PII_ENCRYPTION_KEY} # Encryption key (environment variable required)
salt: ${PII_HASH_SALT} # Hashing salt (environment variable recommended)
Encryption Key Configuration (Required)¶
When using @PiiField(strategy = PiiStrategy.ENCRYPT), an encryption key is mandatory.
1. Generate Key
# Generate 32-byte AES-256 key
openssl rand -base64 32
# Output example: K7gNU3sdo+OL0wNhqoVWhr3g6s1xYv72ol/pe/Unols=
2. Set Environment Variable (Recommended)
# Linux/macOS
export PII_ENCRYPTION_KEY=K7gNU3sdo+OL0wNhqoVWhr3g6s1xYv72ol/pe/Unols=
export PII_HASH_SALT=your-random-salt-value
# Windows PowerShell
$env:PII_ENCRYPTION_KEY="K7gNU3sdo+OL0wNhqoVWhr3g6s1xYv72ol/pe/Unols="
$env:PII_HASH_SALT="your-random-salt-value"
3. application.yml Configuration
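Referencing the environment variables set in the previous step (same keys as in Basic Configuration above):

```yaml
curve:
  pii:
    enabled: true
    crypto:
      default-key: ${PII_ENCRYPTION_KEY}   # never hardcode the key here
      salt: ${PII_HASH_SALT}
```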
⚠️ Caution:
- Do not hardcode the encryption key directly in application.yml
- For production environments, use environment variables or an external secret management system (Vault, AWS Secrets Manager)
- If the key is not configured, an exception will occur when using the ENCRYPT strategy
PII Strategies¶
| Strategy | Description | Reversible | Example |
|---|---|---|---|
| MASK | Pattern-based masking | Not possible | John Doe → J*** *** |
| ENCRYPT | AES-256-GCM encryption | Possible (key required) | Encrypted Base64 string |
| HASH | HMAC-SHA256 hashing | Not possible | Hashed Base64 string |
Masking Patterns by PII Type¶
| Type | Masking Pattern | Example |
|---|---|---|
| NAME | Keep first character, mask rest | John Doe → J*** *** |
| EMAIL | Keep local part, mask domain | user@example.com → user@***.com |
| PHONE | Keep first 3 and last 4 digits only | 010-1234-5678 → 010****5678 |
| DEFAULT | Keep first 30%, mask rest | Seoul Gangnam → Seou*** |
Usage Example¶
public class CustomerInfo {
@PiiField(type = PiiType.NAME, strategy = PiiStrategy.MASK)
private String name;
@PiiField(type = PiiType.EMAIL, strategy = PiiStrategy.MASK)
private String email;
@PiiField(type = PiiType.PHONE, strategy = PiiStrategy.ENCRYPT)
private String phone;
@PiiField(strategy = PiiStrategy.HASH)
private String ssn; // Social Security Number
}
Kubernetes Environment Configuration¶
# deployment.yaml
env:
- name: PII_ENCRYPTION_KEY
valueFrom:
secretKeyRef:
name: curve-secrets
key: pii-encryption-key
- name: PII_HASH_SALT
valueFrom:
secretKeyRef:
name: curve-secrets
key: pii-hash-salt
# Create Secret
kubectl create secret generic curve-secrets \
--from-literal=pii-encryption-key=$(openssl rand -base64 32) \
--from-literal=pii-hash-salt=$(openssl rand -base64 16)
Outbox Configuration¶
Use the Transactional Outbox Pattern to ensure atomicity between DB transactions and event publishing.
Basic Configuration¶
curve:
outbox:
enabled: true # Enable Outbox
poll-interval-ms: 1000 # Polling interval (1 second)
batch-size: 100 # Batch size
max-retries: 3 # Maximum retry count
send-timeout-seconds: 10 # Send timeout
cleanup-enabled: true # Enable old event cleanup
retention-days: 7 # Retention period (7 days)
cleanup-cron: "0 0 2 * * *" # Cleanup job execution time (2 AM daily)
initialize-schema: embedded # Schema initialization mode (embedded, always, never)
Schema Initialization Modes¶
- embedded: Automatically create tables only for embedded DBs like H2, HSQLDB (default)
- always: Always attempt to create tables (if they don't exist)
- never: No automatic creation (recommended when using Flyway/Liquibase)
Serialization Configuration¶
Configure the event payload serialization method.
Avro Serialization Configuration¶
Additional configuration is required to serialize events using Avro.
1. Curve Configuration¶
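Based on the curve.serde keys referenced in the validation rules (type and schema-registry-url):

```yaml
curve:
  serde:
    type: AVRO
    schema-registry-url: http://localhost:8081
```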
2. Spring Kafka Configuration (Required)¶
You must explicitly specify the value-serializer in Spring Kafka's Producer configuration.
spring:
kafka:
producer:
value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
properties:
schema.registry.url: http://localhost:8081
⚠️ Caution:
- When curve.serde.type=AVRO is configured, Curve internally creates a GenericRecord object and passes it to KafkaTemplate.
- Therefore, you must use KafkaAvroSerializer so that KafkaTemplate can serialize GenericRecord.
- schema.registry.url may need to be configured in both curve.serde and spring.kafka.properties (for Curve internal logic and the Kafka serializer, respectively).
Avro Schema Structure¶
Curve internally uses the following fixed Avro schema. Some fields in payload and metadata are stored as JSON strings for flexibility.
{
"type": "record",
"name": "EventEnvelope",
"namespace": "com.project.curve.core.envelope",
"fields": [
{"name": "eventId", "type": "string"},
{"name": "eventType", "type": "string"},
{"name": "severity", "type": "string"},
{"name": "metadata", "type": { ... }},
{"name": "payload", "type": "string"}, // JSON String
{"name": "occurredAt", "type": "long", "logicalType": "timestamp-millis"},
{"name": "publishedAt", "type": "long", "logicalType": "timestamp-millis"}
]
}
Complete Configuration Examples¶
Production Environment (Stability-focused)¶
curve:
enabled: true
id-generator:
worker-id: ${INSTANCE_ID} # Injected from environment variable
auto-generate: false
kafka:
topic: event.audit.v1
dlq-topic: event.audit.dlq.v1
async-mode: false # Synchronous transmission
retries: 5
retry-backoff-ms: 1000
request-timeout-ms: 30000
# Backup Strategy
backup:
s3-enabled: true
s3-bucket: "prod-event-backups"
local-enabled: false
retry:
enabled: true
max-attempts: 5
initial-interval: 1000
multiplier: 2.0
max-interval: 10000
aop:
enabled: true
pii:
enabled: true
crypto:
default-key: ${PII_ENCRYPTION_KEY} # Environment variable required
salt: ${PII_HASH_SALT}
async:
enabled: true
core-pool-size: 4
max-pool-size: 20
queue-capacity: 1000
outbox:
enabled: true
initialize-schema: never # Use Flyway
cleanup-enabled: true
retention-days: 14
Development/Test Environment (Performance-focused)¶
curve:
enabled: true
id-generator:
worker-id: 1
auto-generate: false
kafka:
topic: event.audit.dev.v1
dlq-topic: event.audit.dlq.dev.v1
async-mode: true # Asynchronous transmission
async-timeout-ms: 3000
retries: 3
backup:
local-enabled: true
retry:
enabled: true
max-attempts: 3
initial-interval: 500
multiplier: 1.5
aop:
enabled: true
outbox:
enabled: true
initialize-schema: always
async:
enabled: true
High-Performance Environment¶
curve:
enabled: true
id-generator:
worker-id: ${WORKER_ID}
auto-generate: false
kafka:
topic: event.audit.v1
dlq-topic: event.audit.dlq.v1
async-mode: true # Asynchronous transmission
async-timeout-ms: 5000
retries: 1 # Minimum retry
retry:
enabled: false # Disable retry (performance priority)
aop:
enabled: true
async:
enabled: true
core-pool-size: 8
max-pool-size: 32
queue-capacity: 2000
Environment-specific Configuration Recommendations¶
Local Development¶
- Worker ID: 1 (fixed)
- Transmission Mode: Synchronous (debugging convenience)
- DLQ: Enabled
- Retry: Minimum (fast failure)
- Outbox: Enabled (auto schema generation)
- Backup: Local File
Staging¶
- Worker ID: Environment variable
- Transmission Mode: Asynchronous
- DLQ: Enabled
- Retry: Medium level
- Outbox: Enabled
- Backup: S3 (if available) or Local
Production¶
- Worker ID: Centrally managed (Consul/etcd)
- Transmission Mode: Based on business requirements
- DLQ: Mandatory enabled
- Retry: High level
- Outbox: Mandatory enabled (data consistency)
- Backup: S3 (Mandatory for K8s)
Troubleshooting¶
Worker ID Conflict¶
Symptom: Identical IDs are being generated
Solution:
- Assign a unique worker-id to each instance explicitly (see Worker ID Configuration)
- Set auto-generate: false to avoid MAC-address-based conflicts in virtual environments
Transmission Timeout¶
Symptom: TimeoutException occurs
Solution:
- Increase request-timeout-ms and retry-backoff-ms
- Verify broker connectivity and that the configured topic exists
- Check that retries is set high enough for transient broker failures
High Latency¶
Symptom: Event publishing is slow
Solution:
- Switch to asynchronous transmission (async-mode: true)
- Tune the async executor (core-pool-size, max-pool-size, queue-capacity)
- Reduce retry attempts if latency matters more than delivery guarantees
PII Encryption Key Not Configured¶
Symptom:
ERROR: PII encryption key is not configured!
ERROR: An exception will occur when using @PiiField(strategy = PiiStrategy.ENCRYPT).
Solution:
# 1. Generate key
openssl rand -base64 32
# 2. Set environment variable
export PII_ENCRYPTION_KEY=generated_key_value
# 3. Configure application.yml
curve:
pii:
crypto:
default-key: ${PII_ENCRYPTION_KEY}
Configuration Validation Failure¶
Symptom: The application fails to start with "APPLICATION FAILED TO START" and a property binding error (see Validation Error Example)
Solution:
- Check whether the configuration values meet the validation rules
- Refer to the validation rules in the Configuration Validation section
Logging Configuration¶
By default, Curve outputs minimal logs. To see detailed configuration information or internal operations, enable the DEBUG level.
Basic Logging (INFO)¶
In the default configuration, only the following log is output:
INFO c.p.c.a.CurveAutoConfiguration : Curve auto-configuration enabled (disable with curve.enabled=false)
Enable DEBUG Logging¶
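To enable DEBUG for all Curve modules at once (the per-module package names are listed below):

```yaml
logging:
  level:
    com.project.curve: DEBUG
```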
Information Available at DEBUG Level¶
| Item | Description |
|---|---|
| Kafka Producer Configuration | Detailed configuration such as retries, timeout, async-mode |
| RetryTemplate Configuration | max-attempts, detailed backoff policy |
| SnowflakeIdGenerator | Worker ID and initialization information |
| DLQ ExecutorService | Thread pool size, shutdown timeout |
| PII Module | Encryption/salt configuration status, module registration |
| Event Transmission | Transmission details per event (eventId, topic, partition, offset) |
| Outbox Publisher | Polling, publishing, cleanup job logs |
Enable DEBUG for Specific Modules Only¶
logging:
level:
# DEBUG for Kafka transmission only
com.project.curve.kafka: DEBUG
# DEBUG for Auto-Configuration only
com.project.curve.autoconfigure: DEBUG
# DEBUG for PII processing only
com.project.curve.spring.pii: DEBUG
# DEBUG for Outbox only
com.project.curve.spring.outbox: DEBUG