Designing a logging framework
In this series (15 parts)
- Introduction to low level design
- SOLID principles
- Design patterns: Creational
- Design patterns: Structural
- Design patterns: Behavioral
- Designing a parking lot
- Designing a library management system
- Designing an elevator system
- Designing a hotel booking system
- Designing a ride-sharing model
- Designing a rate limiter
- Designing a logging framework
- Designing a notification system
- API design and contract-first development
- Data modeling for system design
Logging is the most underestimated piece of infrastructure in a production system. When things go wrong at 3 AM, logs are the first place anyone looks. A poorly designed logging framework produces noise: unstructured strings dumped to stdout, mixed severity levels, no correlation between related events. A well-designed one gives you structured, searchable, severity-filtered diagnostic data that flows from your application to the right sink without slowing anything down.
This article covers the low-level design of a logging framework from scratch. For the broader picture of why observability matters at the system level, see the introduction to low-level design earlier in this series. For how the chain of responsibility pattern works in general, see the article on behavioral design patterns.
Core abstractions
A logging framework has three responsibilities: create log records, format them, and deliver them somewhere. These map to three abstractions: Logger, Formatter, and Handler.
public enum LogLevel {
    TRACE(0), DEBUG(1), INFO(2), WARN(3), ERROR(4), FATAL(5);

    private final int severity;

    LogLevel(int severity) { this.severity = severity; }

    public boolean isAtLeast(LogLevel other) {
        return this.severity >= other.severity;
    }
}
public record LogRecord(
    LogLevel level,
    String message,
    String loggerName,
    Instant timestamp,
    String threadName,
    Map<String, String> context,
    Throwable thrown
) {}
The LogRecord is the unit of work. Every log statement creates one. The context map carries structured key-value pairs like requestId, userId, or traceId that make logs searchable.
classDiagram
    class Logger {
        -String name
        -LogLevel level
        -List~Handler~ handlers
        +info(msg: String) void
        +warn(msg: String) void
        +error(msg: String, thrown: Throwable) void
        +isEnabled(level: LogLevel) boolean
        +addHandler(handler: Handler) void
    }
    class Handler {
        <<interface>>
        +handle(record: LogRecord) void
        +setLevel(level: LogLevel) void
        +setFormatter(formatter: Formatter) void
    }
    class Formatter {
        <<interface>>
        +format(record: LogRecord) String
    }
    class ConsoleHandler {
        -LogLevel level
        -Formatter formatter
        +handle(record: LogRecord) void
    }
    class FileHandler {
        -LogLevel level
        -Formatter formatter
        -Path filePath
        -long maxFileSize
        +handle(record: LogRecord) void
    }
    class JsonFormatter {
        +format(record: LogRecord) String
    }
    class PatternFormatter {
        -String pattern
        +format(record: LogRecord) String
    }
    class LogRecord {
        +LogLevel level
        +String message
        +Instant timestamp
        +String threadName
        +Map~String, String~ context
    }
    Logger --> Handler : delegates to
    Handler --> Formatter : uses
    Handler <|.. ConsoleHandler
    Handler <|.. FileHandler
    Formatter <|.. JsonFormatter
    Formatter <|.. PatternFormatter
    Logger ..> LogRecord : creates
Class diagram showing the Logger, Handler, and Formatter hierarchy with chain of responsibility for handlers.
The Logger class
The Logger is the entry point. Application code calls logger.info("message") and the Logger creates a LogRecord, checks the level filter, and passes it to each registered Handler.
public class Logger {
    private final String name;
    private LogLevel level;
    private final List<Handler> handlers;
    private final Map<String, String> contextMap;

    public Logger(String name) {
        this.name = name;
        this.level = LogLevel.INFO;
        this.handlers = new CopyOnWriteArrayList<>();
        this.contextMap = new ConcurrentHashMap<>();
    }

    public void log(LogLevel level, String message, Throwable thrown) {
        if (!level.isAtLeast(this.level)) return;
        LogRecord record = new LogRecord(
                level, message, name,
                Instant.now(), Thread.currentThread().getName(),
                Map.copyOf(contextMap), thrown
        );
        for (Handler handler : handlers) {
            handler.handle(record);
        }
    }

    public void info(String message) { log(LogLevel.INFO, message, null); }
    public void warn(String message) { log(LogLevel.WARN, message, null); }
    public void error(String message, Throwable t) { log(LogLevel.ERROR, message, t); }

    public void putContext(String key, String value) {
        contextMap.put(key, value);
    }

    public void addHandler(Handler handler) {
        handlers.add(handler);
    }
}
The level check at the top is critical. If the logger is set to WARN, an info() call returns immediately without creating a LogRecord or doing any string formatting. This is the most important optimization: the cost of a disabled log statement is a single integer comparison.
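The cost can drop even further with lazy message construction. The Logger above accepts only plain strings, so an expensive message is built before the level check can reject it; a sketch of a hypothetical `Supplier`-based overload (the `Level` enum and `expensiveDescription` method here are illustrative, not part of the framework above) shows how to defer that work:

```java
import java.util.function.Supplier;

// Minimal sketch of lazy message evaluation. The Supplier is only invoked
// when the record passes the level filter, so a disabled log statement does
// no string building at all.
public class LazyLogDemo {
    enum Level { DEBUG, INFO, WARN }

    static Level threshold = Level.WARN;
    static int evaluations = 0; // counts how often the costly formatting runs

    // Accepts a Supplier so the message string is built only if needed.
    static void log(Level level, Supplier<String> message) {
        if (level.ordinal() < threshold.ordinal()) return; // disabled: no work
        System.out.println(message.get());
    }

    static String expensiveDescription() {
        evaluations++;
        return "state=" + System.nanoTime();
    }

    public static void main(String[] args) {
        log(Level.DEBUG, () -> expensiveDescription()); // filtered: never runs
        log(Level.WARN, () -> expensiveDescription());  // passes: runs once
        System.out.println("evaluations=" + evaluations);
    }
}
```

SLF4J takes the related route of `{}` placeholder parameters; either way, the goal is the same: a disabled statement should cost only the level comparison.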
Handlers and the chain of responsibility
Each Handler decides independently whether to process a record. A ConsoleHandler might accept everything at DEBUG and above. A FileHandler might only accept WARN and above. An AlertHandler might only fire on FATAL.
public class ConsoleHandler implements Handler {
    private LogLevel level = LogLevel.DEBUG;
    private Formatter formatter = new PatternFormatter(
            "{timestamp} [{level}] {loggerName} - {message}"
    );

    @Override
    public void handle(LogRecord record) {
        if (!record.level().isAtLeast(this.level)) return;
        String output = formatter.format(record);
        System.out.println(output);
    }

    @Override
    public void setLevel(LogLevel level) { this.level = level; }

    @Override
    public void setFormatter(Formatter f) { this.formatter = f; }
}
This is the chain of responsibility pattern. The Logger does not know or care what the handlers do with the record. Each handler filters and processes independently. Adding a new destination (Kafka, Elasticsearch, a monitoring webhook) means adding a new Handler class. Nothing else changes.
sequenceDiagram
    participant App as Application
    participant L as Logger
    participant CH as ConsoleHandler
    participant FH as FileHandler
    participant AH as AlertHandler
    App->>L: error("Payment failed", exception)
    L->>L: create LogRecord(ERROR, ...)
    L->>CH: handle(record)
    CH->>CH: level check passes (DEBUG threshold)
    CH->>CH: format and print to console
    L->>FH: handle(record)
    FH->>FH: level check passes (WARN threshold)
    FH->>FH: format and write to file
    L->>AH: handle(record)
    AH->>AH: level check fails (FATAL threshold)
    Note over AH: Record dropped silently
Sequence diagram showing a log record flowing through three handlers with different level thresholds.
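Adding a destination really is just one class. Here is a sketch of a handler that buffers records in memory, with the article's Handler and LogRecord types restated minimally so the example compiles on its own; a real Kafka or webhook handler would replace the in-memory list with a producer or HTTP client:

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// A new sink added purely by writing another Handler. Nothing in Logger
// changes; the handler filters and processes records independently.
public class MemoryHandlerDemo {
    enum LogLevel {
        TRACE, DEBUG, INFO, WARN, ERROR, FATAL;
        boolean isAtLeast(LogLevel other) { return ordinal() >= other.ordinal(); }
    }

    record LogRecord(LogLevel level, String message, Instant timestamp) {}

    interface Handler { void handle(LogRecord record); }

    // New destination: buffers records at or above its threshold in memory.
    static class MemoryHandler implements Handler {
        final List<LogRecord> buffer = new ArrayList<>();
        final LogLevel threshold;

        MemoryHandler(LogLevel threshold) { this.threshold = threshold; }

        @Override
        public void handle(LogRecord record) {
            if (!record.level().isAtLeast(threshold)) return; // independent filter
            buffer.add(record);
        }
    }

    public static void main(String[] args) {
        MemoryHandler handler = new MemoryHandler(LogLevel.WARN);
        handler.handle(new LogRecord(LogLevel.INFO, "ignored", Instant.now()));
        handler.handle(new LogRecord(LogLevel.ERROR, "kept", Instant.now()));
        System.out.println("buffered=" + handler.buffer.size());
    }
}
```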
Formatters and structured logging
Plain text logs are human-readable but machine-hostile. When you have millions of log lines flowing into Elasticsearch or Splunk, you need structured output.
public class JsonFormatter implements Formatter {
    @Override
    public String format(LogRecord record) {
        var json = new StringBuilder();
        json.append("{");
        json.append("\"timestamp\":\"").append(record.timestamp()).append("\",");
        json.append("\"level\":\"").append(record.level()).append("\",");
        json.append("\"logger\":\"").append(record.loggerName()).append("\",");
        json.append("\"thread\":\"").append(record.threadName()).append("\",");
        json.append("\"message\":\"").append(escape(record.message())).append("\"");
        if (!record.context().isEmpty()) {
            json.append(",\"context\":{");
            var entries = record.context().entrySet().iterator();
            while (entries.hasNext()) {
                var e = entries.next();
                json.append("\"").append(e.getKey()).append("\":\"")
                        .append(escape(e.getValue())).append("\"");
                if (entries.hasNext()) json.append(",");
            }
            json.append("}");
        }
        if (record.thrown() != null) {
            json.append(",\"exception\":\"")
                    .append(escape(record.thrown().toString()))
                    .append("\"");
        }
        json.append("}");
        return json.toString();
    }

    // Minimal escaping for quotes, backslashes, and common control characters
    // so the output stays valid JSON.
    private static String escape(String s) {
        return s.replace("\\", "\\\\")
                .replace("\"", "\\\"")
                .replace("\n", "\\n")
                .replace("\r", "\\r")
                .replace("\t", "\\t");
    }
}
A structured log record looks like this:
{
  "timestamp": "2026-04-20T14:23:01.442Z",
  "level": "ERROR",
  "logger": "com.app.PaymentService",
  "thread": "http-worker-7",
  "message": "Payment processing failed",
  "context": {
    "requestId": "req-abc-123",
    "userId": "user-456",
    "amount": "99.99"
  },
  "exception": "java.net.SocketTimeoutException: connect timed out"
}
Every field is indexed. You can query for all errors from a specific user in a specific time range without parsing strings.
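The PatternFormatter that ConsoleHandler used earlier can be sketched as simple placeholder substitution. This is an assumed implementation of the `{timestamp} [{level}] {loggerName} - {message}` pattern shown above; a production version would pre-parse the pattern once rather than calling `replace()` per record:

```java
// Minimal sketch of pattern-based formatting: each {placeholder} in the
// pattern string is substituted with the corresponding record field.
public class PatternFormatterDemo {
    static String format(String pattern, String timestamp, String level,
                         String loggerName, String message) {
        return pattern
                .replace("{timestamp}", timestamp)
                .replace("{level}", level)
                .replace("{loggerName}", loggerName)
                .replace("{message}", message);
    }

    public static void main(String[] args) {
        String line = format("{timestamp} [{level}] {loggerName} - {message}",
                "2026-04-20T14:23:01Z", "ERROR", "PaymentService", "Payment failed");
        // 2026-04-20T14:23:01Z [ERROR] PaymentService - Payment failed
        System.out.println(line);
    }
}
```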
Async logging
Writing to a file or sending to a remote sink takes time. If the logging call blocks the application thread, you are paying for I/O on the hot path. Async logging decouples the application from the I/O.
public class AsyncHandler implements Handler {
    private final Handler delegate;
    private final BlockingQueue<LogRecord> queue;
    private final Thread writerThread;
    private volatile boolean running = true;

    public AsyncHandler(Handler delegate, int queueCapacity) {
        this.delegate = delegate;
        this.queue = new ArrayBlockingQueue<>(queueCapacity);
        this.writerThread = new Thread(this::drainLoop, "async-log-writer");
        this.writerThread.setDaemon(true);
        this.writerThread.start();
    }

    @Override
    public void handle(LogRecord record) {
        if (!queue.offer(record)) {
            // Queue full: drop the record or write to stderr
            System.err.println("Log queue full, dropping record");
        }
    }

    // Level and formatter configuration passes through to the wrapped handler.
    @Override
    public void setLevel(LogLevel level) { delegate.setLevel(level); }

    @Override
    public void setFormatter(Formatter f) { delegate.setFormatter(f); }

    private void drainLoop() {
        while (running || !queue.isEmpty()) {
            try {
                LogRecord record = queue.poll(100, TimeUnit.MILLISECONDS);
                if (record != null) {
                    delegate.handle(record);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
    }

    public void shutdown() {
        running = false;
        writerThread.interrupt();
        try {
            writerThread.join(5000);
        } catch (InterruptedException ignored) {}
    }
}
The trade-off is clear. The application thread does a non-blocking offer() and returns immediately. A background thread drains the queue and delegates to the real handler. If the queue fills up, you have to decide: block the application, drop the log, or fall back to stderr. Most systems drop with a counter metric, because slowing down the application to preserve a log line is rarely worth it.
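The drop-with-a-counter policy can be sketched in a few lines. The `droppedRecords` counter here is a stand-in for whatever metrics library the system actually uses (Micrometer, Dropwizard Metrics, and so on):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the "drop with a counter metric" overflow policy: a full queue
// costs the application thread one failed offer() and one atomic increment,
// never any I/O.
public class DropCounterDemo {
    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
    static final AtomicLong droppedRecords = new AtomicLong();

    static void offerRecord(String record) {
        if (!queue.offer(record)) {
            droppedRecords.incrementAndGet(); // visible in dashboards, cheap on the hot path
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) offerRecord("record-" + i);
        // capacity 2: two records queue, three are counted as dropped
        System.out.println("queued=" + queue.size() + " dropped=" + droppedRecords.get());
    }
}
```

A sustained non-zero drop rate on that counter is itself a useful alert: it means the sink cannot keep up with the application.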
Logger factory and hierarchy
Applications create loggers by name, typically matching the class name. A factory ensures you get the same logger instance for the same name.
public class LoggerFactory {
    private static final ConcurrentHashMap<String, Logger> loggers =
            new ConcurrentHashMap<>();
    // CopyOnWriteArrayList so handlers can be registered safely while other
    // threads are creating loggers.
    private static final List<Handler> defaultHandlers = new CopyOnWriteArrayList<>();

    public static Logger getLogger(String name) {
        return loggers.computeIfAbsent(name, n -> {
            Logger logger = new Logger(n);
            defaultHandlers.forEach(logger::addHandler);
            return logger;
        });
    }

    public static void addDefaultHandler(Handler handler) {
        defaultHandlers.add(handler);
    }
}
Usage in application code is minimal:
private static final Logger log = LoggerFactory.getLogger("PaymentService");

public void processPayment(PaymentRequest req) {
    log.putContext("requestId", req.id());
    log.info("Processing payment");
    try {
        // charge the payment provider ...
    } catch (RuntimeException exception) {
        log.error("Payment failed", exception);
    }
}
Structured log schema
Define a schema for your log records across the organization. This ensures consistency across services.
fields:
  - name: timestamp
    type: ISO-8601 datetime
    required: true
  - name: level
    type: enum [TRACE, DEBUG, INFO, WARN, ERROR, FATAL]
    required: true
  - name: service
    type: string
    required: true
  - name: traceId
    type: string (UUID)
    required: false
  - name: spanId
    type: string (UUID)
    required: false
  - name: message
    type: string
    required: true
  - name: context
    type: map of string to string
    required: false
  - name: exception
    type: object with class, message, stackTrace
    required: false
When every service emits logs in this format, your centralized logging platform can index, search, and correlate without per-service parsing rules.
What comes next
A logging framework captures what happened inside your system. But you also need to tell your users what happened, through channels like email, SMS, or push notifications. The notification system design takes the patterns you have seen here (strategy for handlers, async processing, configurable routing) and applies them to user-facing communication.