David Chen
Technical Writer. Expert in JSON data manipulation, API development, and web technologies. Passionate about creating tools that make developers' lives easier.
# JSON Monitoring & Logging: Observability Best Practices 2026
Production systems require comprehensive observability—monitoring API performance, tracking errors, and analyzing user behavior. This guide covers structured JSON logging, monitoring tools, and best practices for modern applications.
---
Structured Logging with JSON
Why JSON Logs?
Traditional logs:
2026-03-21 10:30:45 INFO User alice@example.com logged in from 192.168.1.1
JSON logs (searchable, parseable):
{
"timestamp": "2026-03-21T10:30:45.123Z",
"level": "info",
"message": "User logged in",
"userId": "user_123",
"email": "alice@example.com",
"ip": "192.168.1.1",
"userAgent": "Mozilla/5.0..."
}
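Because every field is a first-class JSON key, log lines can be filtered programmatically instead of grep-parsed. A minimal sketch (the sample lines below are made up for illustration):

```javascript
// Structured NDJSON logs: one JSON object per line (sample data).
const lines = [
  '{"level":"info","message":"User logged in","userId":"user_123"}',
  '{"level":"error","message":"Payment failed","userId":"user_456"}'
];

// Parse each line, then filter on any field -- no regex needed.
const errors = lines
  .map((line) => JSON.parse(line))
  .filter((entry) => entry.level === 'error');

console.log(errors[0].userId); // prints "user_456"
```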
Node.js with Winston
npm install winston
const winston = require('winston');
const logger = winston.createLogger({
level: 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.errors({ stack: true }),
winston.format.json()
),
defaultMeta: { service: 'user-service' },
transports: [
new winston.transports.File({ filename: 'error.log', level: 'error' }),
new winston.transports.File({ filename: 'combined.log' })
]
});
if (process.env.NODE_ENV !== 'production') {
logger.add(new winston.transports.Console({
format: winston.format.combine(
winston.format.colorize(),
winston.format.simple()
)
}));
}
// Usage
logger.info('User logged in', {
userId: 'user_123',
email: 'alice@example.com',
ip: req.ip
});
logger.error('Payment failed', {
userId: 'user_123',
orderId: 'order_456',
amount: 99.99,
error: error.message,
stack: error.stack
});
Pino (High Performance)
npm install pino pino-pretty
const pino = require('pino');
const logger = pino({
level: process.env.LOG_LEVEL || 'info',
formatters: {
level: (label) => {
return { level: label };
}
},
timestamp: pino.stdTimeFunctions.isoTime
});
// Development: pretty print
// NODE_ENV=development node app.js | pino-pretty
// Production: JSON output
logger.info({
userId: 'user_123',
action: 'login',
ip: '192.168.1.1'
}, 'User logged in');
// Child loggers (inherit parent context)
const requestLogger = logger.child({ requestId: 'req_abc123' });
requestLogger.info('Processing request');
requestLogger.error({ error: err }, 'Request failed');
Express.js Middleware
const express = require('express');
const { v4: uuidv4 } = require('uuid');
const app = express();
// Request logging middleware
app.use((req, res, next) => {
req.id = uuidv4();
req.startTime = Date.now();
req.log = logger.child({ requestId: req.id });
res.on('finish', () => {
const duration = Date.now() - req.startTime;
req.log.info({
method: req.method,
url: req.url,
statusCode: res.statusCode,
duration,
ip: req.ip,
userAgent: req.get('user-agent')
}, 'Request completed');
});
next();
});
// Use in routes
app.get('/api/users', async (req, res) => {
req.log.info('Fetching users');
try {
const users = await db.users.findAll();
res.json(users);
} catch (error) {
req.log.error({ error: error.message }, 'Failed to fetch users');
res.status(500).json({ error: 'Internal server error' });
}
});
---
ELK Stack (Elasticsearch, Logstash, Kibana)
Architecture
Application → Logstash → Elasticsearch → Kibana
                 ↓
        (Parse & Transform)
Docker Compose Setup
version: '3.8'
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:8.8.0
environment:
- discovery.type=single-node
- ES_JAVA_OPTS=-Xms512m -Xmx512m
ports:
- "9200:9200"
volumes:
- es-data:/usr/share/elasticsearch/data
logstash:
image: docker.elastic.co/logstash/logstash:8.8.0
volumes:
- ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
ports:
- "5000:5000"
depends_on:
- elasticsearch
kibana:
image: docker.elastic.co/kibana/kibana:8.8.0
ports:
- "5601:5601"
depends_on:
- elasticsearch
volumes:
es-data:
Logstash Configuration
# logstash.conf
input {
tcp {
port => 5000
codec => json
}
}
filter {
# Parse JSON logs
json {
source => "message"
}
# Add geo IP data
geoip {
source => "ip"
target => "geoip"
}
# Parse user agent
useragent {
source => "userAgent"
target => "ua"
}
}
output {
elasticsearch {
hosts => ["elasticsearch:9200"]
index => "app-logs-%{+YYYY.MM.dd}"
}
# Also output to console in development
stdout { codec => rubydebug }
}
Sending Logs to Logstash
const winston = require('winston');
require('winston-logstash');
const logger = winston.createLogger({
transports: [
new winston.transports.Logstash({
port: 5000,
host: 'localhost',
node_name: 'my-app',
max_connect_retries: 10
})
]
});
logger.info({
userId: 'user_123',
action: 'purchase',
amount: 99.99
});
Kibana Queries
Search logs in Kibana:
# Find all errors
level: "error"
# Find slow requests
duration > 1000
# Find specific user activity
userId: "user_123" AND action: "purchase"
# Time range: last 24 hours
@timestamp: [now-24h TO now]
---
Application Performance Monitoring
Datadog APM
npm install dd-trace
// tracer.js
const tracer = require('dd-trace').init({
service: 'my-app',
env: process.env.NODE_ENV,
version: process.env.APP_VERSION,
logInjection: true
});
module.exports = tracer;
// app.js (first line!)
const tracer = require('./tracer');
const express = require('express');
const app = express();
app.get('/api/users', async (req, res) => {
const span = tracer.startSpan('fetch-users');
try {
const users = await db.users.findAll();
span.setTag('user.count', users.length);
res.json(users);
} catch (error) {
span.setTag('error', error);
throw error;
} finally {
span.finish();
}
});
New Relic
npm install newrelic
// newrelic.js
exports.config = {
app_name: ['My Application'],
license_key: process.env.NEW_RELIC_LICENSE_KEY,
logging: {
level: 'info'
}
};
// app.js
require('newrelic');
const express = require('express');
// New Relic automatically instruments Express routes
Custom Metrics
const { StatsD } = require('node-statsd');
const statsd = new StatsD({
host: 'localhost',
port: 8125
});
// Increment counter
statsd.increment('api.requests');
// Timing
const startTime = Date.now();
await processData();
statsd.timing('api.processing_time', Date.now() - startTime);
// Gauge (current value)
statsd.gauge('api.active_connections', activeConnections);
// Histogram
statsd.histogram('api.response_size', responseSize);
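The manual start/stop timing pattern above is easy to get wrong around exceptions. A small wrapper keeps it in one place; this is a sketch where `record` stands in for `statsd.timing` so the snippet has no external dependency:

```javascript
// Wrap any async function and report its duration to a metric sink.
// `record(name, ms)` is a placeholder for statsd.timing or similar.
async function timed(metricName, record, fn) {
  const start = Date.now();
  try {
    return await fn(); // duration is recorded even if fn throws
  } finally {
    record(metricName, Date.now() - start);
  }
}

// Usage with an in-memory sink standing in for StatsD:
const recorded = [];
timed('api.processing_time', (name, ms) => recorded.push({ name, ms }),
  async () => 'done'
).then((result) => {
  console.log(result, recorded[0].name); // prints "done api.processing_time"
});
```

The `finally` block guarantees the timing metric is emitted on both the success and failure paths, which a hand-written start/stop pair often misses.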
---
Error Tracking & Alerting
Sentry
npm install @sentry/node
const Sentry = require('@sentry/node');
Sentry.init({
dsn: process.env.SENTRY_DSN,
environment: process.env.NODE_ENV,
tracesSampleRate: 1.0
});
// Express error handler
app.use(Sentry.Handlers.requestHandler());
app.use(Sentry.Handlers.tracingHandler());
// Routes...
app.use(Sentry.Handlers.errorHandler());
// Custom error tracking
try {
await riskyOperation();
} catch (error) {
Sentry.captureException(error, {
tags: {
component: 'payment',
userId: user.id
},
extra: {
orderId: order.id,
amount: order.total
}
});
}
// Breadcrumbs (track events leading to error)
Sentry.addBreadcrumb({
category: 'auth',
message: 'User logged in',
level: 'info'
});
Slack Alerts
const axios = require('axios');
async function sendSlackAlert(message, severity = 'error') {
const webhookUrl = process.env.SLACK_WEBHOOK_URL;
const payload = {
text: `[${severity.toUpperCase()}] ${message}`,
attachments: [{
color: severity === 'error' ? 'danger' : 'warning',
fields: [
{ title: 'Environment', value: process.env.NODE_ENV, short: true },
{ title: 'Service', value: 'user-service', short: true },
{ title: 'Timestamp', value: new Date().toISOString(), short: false }
]
}]
};
await axios.post(webhookUrl, payload);
}
// Usage
logger.on('error', (error) => {
sendSlackAlert(`Error in application: ${error.message}`);
});
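A repeating error can turn webhook notifications into an alert storm. A simple in-memory dedupe window suppresses repeats; this is a sketch (the window length is illustrative, and a shared store like Redis would be needed across multiple instances):

```javascript
// Suppress duplicate alerts inside a rolling time window.
const lastSent = new Map();
const WINDOW_MS = 60 * 1000;

function shouldAlert(message, now = Date.now()) {
  const prev = lastSent.get(message);
  if (prev !== undefined && now - prev < WINDOW_MS) {
    return false; // same message alerted recently -- skip
  }
  lastSent.set(message, now);
  return true;
}

console.log(shouldAlert('API is down!', 0));     // true  (first occurrence)
console.log(shouldAlert('API is down!', 1000));  // false (inside the window)
console.log(shouldAlert('API is down!', 61000)); // true  (window has passed)
```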
---
API Monitoring
Uptime Monitoring
const axios = require('axios');
async function healthCheck() {
const endpoints = [
{ name: 'API', url: 'https://api.example.com/health' },
{ name: 'Database', url: 'https://db.example.com/status' }
];
for (const endpoint of endpoints) {
try {
const start = Date.now();
const response = await axios.get(endpoint.url, { timeout: 5000 });
const duration = Date.now() - start;
logger.info({
service: endpoint.name,
status: 'up',
statusCode: response.status,
responseTime: duration
});
statsd.timing(`health.${endpoint.name.toLowerCase()}.response_time`, duration);
statsd.gauge(`health.${endpoint.name.toLowerCase()}.status`, 1);
} catch (error) {
logger.error({
service: endpoint.name,
status: 'down',
error: error.message
});
statsd.gauge(`health.${endpoint.name.toLowerCase()}.status`, 0);
sendSlackAlert(`${endpoint.name} is down!`, 'error');
}
}
}
// Run every minute
setInterval(healthCheck, 60 * 1000);
API Response Time Tracking
app.use((req, res, next) => {
const start = Date.now();
res.on('finish', () => {
const duration = Date.now() - start;
// Log slow requests
if (duration > 1000) {
logger.warn({
type: 'slow_request',
method: req.method,
url: req.url,
duration,
statusCode: res.statusCode
});
}
// Send metrics
statsd.timing('api.response_time', duration, [
`route:${req.route?.path || 'unknown'}`,
`method:${req.method}`,
`status:${res.statusCode}`
]);
});
next();
});
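Average response time hides exactly the outliers the slow-request log above is trying to catch; percentiles are what most dashboards chart. A quick sketch of a nearest-rank percentile over an in-memory sample (in production this is computed by the metrics backend, not the app):

```javascript
// Nearest-rank percentile: sort ascending, index by ceil(p% of n).
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

const durations = [120, 80, 450, 95, 1100, 60, 300]; // ms, sample data
console.log(percentile(durations, 50)); // 120
console.log(percentile(durations, 95)); // 1100
```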
---
Log Aggregation Patterns
Correlation IDs
// Generate correlation ID at entry point
app.use((req, res, next) => {
req.correlationId = req.get('X-Correlation-ID') || uuidv4();
res.set('X-Correlation-ID', req.correlationId);
next();
});
// Include in all logs
req.log = logger.child({ correlationId: req.correlationId });
// Include when calling other services
await axios.get('https://api.example.com/data', {
headers: { 'X-Correlation-ID': req.correlationId }
});
Context Propagation
const { AsyncLocalStorage } = require('async_hooks');
const asyncLocalStorage = new AsyncLocalStorage();
app.use((req, res, next) => {
const context = {
requestId: req.id,
userId: req.user?.id,
correlationId: req.correlationId
};
asyncLocalStorage.run(context, () => next());
});
// Access context anywhere
function logWithContext(message, data = {}) {
const context = asyncLocalStorage.getStore() || {};
logger.info({ ...context, ...data }, message);
}
// Usage in any function
async function processOrder(order) {
logWithContext('Processing order', { orderId: order.id });
// Automatically includes requestId, userId, correlationId
}
---
Security & Compliance
Redact Sensitive Data
const pino = require('pino');
const logger = pino({
redact: {
paths: [
'password',
'creditCard',
'ssn',
'req.headers.authorization',
'*.password',
'*.creditCard'
],
censor: '[REDACTED]'
}
});
logger.info({
userId: 'user_123',
password: 'secret123', // Will be redacted
email: 'alice@example.com'
});
// Output: { userId: 'user_123', password: '[REDACTED]', email: '...' }
PII Filtering
function sanitizePII(obj) {
const sensitiveFields = ['ssn', 'creditCard', 'password', 'apiKey'];
const clone = { ...obj };
for (const field of sensitiveFields) {
if (field in clone) {
clone[field] = '[REDACTED]';
}
}
return clone;
}
app.post('/api/users', (req, res) => {
logger.info({ data: sanitizePII(req.body) }, 'Creating user');
// ...
});
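`sanitizePII` above only checks top-level keys, so nested payloads like `{ user: { password } }` slip through. A recursive variant, as a sketch (the field list is illustrative):

```javascript
// Walk objects and arrays, redacting sensitive keys at any depth.
const SENSITIVE = new Set(['ssn', 'creditCard', 'password', 'apiKey']);

function sanitizePIIDeep(value) {
  if (Array.isArray(value)) return value.map(sanitizePIIDeep);
  if (value === null || typeof value !== 'object') return value;
  const clone = {};
  for (const [key, val] of Object.entries(value)) {
    clone[key] = SENSITIVE.has(key) ? '[REDACTED]' : sanitizePIIDeep(val);
  }
  return clone;
}

console.log(JSON.stringify(sanitizePIIDeep({
  user: { email: 'a@b.com', password: 'secret' },
  cards: [{ creditCard: '4111' }]
})));
// prints {"user":{"email":"a@b.com","password":"[REDACTED]"},"cards":[{"creditCard":"[REDACTED]"}]}
```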
Log Retention Policies
// Elasticsearch Index Lifecycle Management
PUT _ilm/policy/logs-policy
{
"policy": {
"phases": {
"hot": {
"actions": {}
},
"warm": {
"min_age": "7d",
"actions": {
"shrink": { "number_of_shards": 1 }
}
},
"delete": {
"min_age": "90d",
"actions": {
"delete": {}
}
}
}
}
}
---
Conclusion
Production observability essentials:
- Logging: structured JSON logs with Winston or Pino
- Aggregation: ELK Stack or managed services (Datadog, New Relic)
- Monitoring: APM, uptime checks, custom metrics
- Alerting: Sentry, Slack notifications
- Security: redact PII, implement retention policies

Track everything, alert on anomalies, debug faster!