Production Checklist Table
| Concern | Bad pattern | Better pattern |
|---|---|---|
| Connection lifecycle | new connection per publish | long-lived connection, managed channel usage |
| Error handling | log and ignore | classify and react |
| Duplicate processing | assume exactly-once | idempotent consumer design |
| Monitoring | notice only when the queue is full | track depth, lag, redelivery, DLQ count |
| Shutdown | abrupt exit | graceful close and context-driven stop |
The publish side of the first row — one long-lived connection and channel, reused across publishes:

```go
// Publisher wraps a long-lived connection and channel; both are
// expensive to open, so they are created once and reused.
type Publisher struct {
	conn *amqp.Connection
	ch   *amqp.Channel
}

// PublishJSON sends a persistent JSON message to the given exchange
// and routing key.
func (p *Publisher) PublishJSON(exchange, key string, body []byte) error {
	return p.ch.Publish(exchange, key, false, false, amqp.Publishing{
		ContentType:  "application/json",
		DeliveryMode: amqp.Persistent, // survives a broker restart (with a durable queue)
		Body:         body,
		Timestamp:    time.Now(),
	})
}
```

Operational Smell
Situation: queue depth is rising steadily, consumer count is 1, redelivery count is climbing, and the logs show intermittent DB timeouts.
Goal: identify the first three suspicions.
What to think about:
- Is consumer throughput too low?
- Is a downstream dependency slow?
- Is the retry strategy spinning in a hot loop?
Possible solution
Likely bottlenecks: the consumer is too slow or there are too few of them, the downstream DB is unstable, and the retry/requeue pattern is causing repeat churn. Watching queue depth alone is not enough; ack latency and the redelivery trend matter too.
Production metrics idea
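One cheap starting point is polling backlog per consumer. The sketch below hides the broker call behind a small `inspector` interface so it runs standalone; in production the implementation would wrap `Channel.QueueInspect` from the `amqp` client, which reports ready-message and consumer counts. `checkDepth` and the budget parameter are illustrative names.

```go
package main

import "fmt"

// queueStats mirrors the fields the broker reports for a queue:
// ready messages and attached consumers.
type queueStats struct {
	Messages  int
	Consumers int
}

// inspector abstracts the broker call so the sketch runs without a
// live RabbitMQ; production code would wrap ch.QueueInspect(name).
type inspector interface {
	Inspect(queue string) (queueStats, error)
}

// checkDepth flags a queue whose backlog per consumer exceeds a budget.
func checkDepth(in inspector, queue string, perConsumerBudget int) (bool, error) {
	s, err := in.Inspect(queue)
	if err != nil {
		return false, err
	}
	consumers := s.Consumers
	if consumers == 0 {
		consumers = 1 // avoid divide-by-zero; zero consumers is itself an alert
	}
	return s.Messages/consumers > perConsumerBudget, nil
}

type fakeInspector struct{ s queueStats }

func (f fakeInspector) Inspect(string) (queueStats, error) { return f.s, nil }

func main() {
	in := fakeInspector{queueStats{Messages: 5000, Consumers: 1}}
	alert, _ := checkDepth(in, "jobs", 1000)
	fmt.Println("alert:", alert)
}
```

Depth alone is still a lagging signal; pair it with redelivery and DLQ counters to catch the churn pattern described above.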
Checkpoint
Ask yourself: "If my consumer crashes mid-processing, will the business side effect be duplicated?"
- Is there a database unique constraint?
- Am I tracking processed message IDs?
- Are external API calls repeat-safe?
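The first two checklist items combine into one pattern: record the message ID before applying the side effect, and skip the effect if the ID was seen. A minimal sketch with an in-memory store — in production `markProcessed` would be an INSERT into a table with a unique constraint on the message ID, so deduplication survives consumer restarts; all names here are illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

// dedupStore records message IDs that have already been processed.
// The in-memory map keeps the sketch runnable; a real store must be
// durable (e.g. a DB unique constraint) to survive crashes.
type dedupStore struct {
	mu   sync.Mutex
	seen map[string]bool
}

func newDedupStore() *dedupStore {
	return &dedupStore{seen: make(map[string]bool)}
}

// markProcessed returns false if the ID was recorded before,
// mirroring an INSERT that hits a unique-constraint conflict.
func (d *dedupStore) markProcessed(id string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.seen[id] {
		return false
	}
	d.seen[id] = true
	return true
}

// handle applies the side effect only on first delivery; a redelivered
// duplicate is acknowledged without repeating the effect.
func handle(store *dedupStore, msgID string, effect func()) {
	if !store.markProcessed(msgID) {
		fmt.Printf("%s: duplicate, skipping side effect\n", msgID)
		return
	}
	effect()
}

func main() {
	store := newDedupStore()
	charge := func() { fmt.Println("charged customer once") }
	handle(store, "msg-42", charge)
	handle(store, "msg-42", charge) // redelivery after a crash
}
```

With this in place, a crash between the side effect and the ack costs a duplicate delivery but not a duplicate charge — which is exactly the at-least-once trade the broker makes.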