Logging in Node.js (or Deno or Bun or edge functions) in 2026
It's 2 AM. Something is wrong in production. Users are complaining, but you're not sure what's happening—your only clues are a handful of console.log statements you sprinkled around during development. Half of them say things like “here” or “this works.” The other half dump entire objects that scroll off the screen. Good luck.
We've all been there. And yet, setting up “proper” logging often feels like overkill. Traditional logging libraries like winston or Pino come with their own learning curves, configuration formats, and assumptions about how you'll deploy your app. If you're working with edge functions or trying to keep your bundle small, adding a logging library can feel like bringing a sledgehammer to hang a picture frame.
I'm a fan of the “just enough” approach—more than raw console.log, but without the weight of a full-blown logging framework. We'll start from console.log(), understand its real limitations (not the exaggerated ones), and work toward a setup that's actually useful. I'll be using LogTape for the examples—it's a zero-dependency logging library that works across Node.js, Deno, Bun, and edge functions, and stays out of your way when you don't need it.
The console object is JavaScript's great equalizer. It's built-in, it works everywhere, and it requires zero setup. You even get basic severity levels: console.debug(), console.info(), console.warn(), and console.error(). In browser DevTools and some terminal environments, these show up with different colors or icons.
console.debug("Connecting to database...");
console.info("Server started on port 3000");
console.warn("Cache miss for user 123");
console.error("Failed to process payment");
For small scripts or quick debugging, this is perfectly fine. But once your application grows beyond a few files, the cracks start to show:
No filtering without code changes. Want to hide debug messages in production? You'll need to wrap every console.debug() call in a conditional, or find-and-replace them all. There's no way to say “show me only warnings and above” at runtime.
Everything goes to the console. What if you want to write logs to a file? Send errors to Sentry? Stream logs to CloudWatch? You'd have to replace every console.* call with something else—and hope you didn't miss any.
No context about where logs come from. When your app has dozens of modules, a log message like “Connection failed” doesn't tell you much. Was it the database? The cache? A third-party API? You end up prefixing every message manually: console.error("[database] Connection failed").
No structured data. Modern log analysis tools work best with structured data (JSON). But console.log("User logged in", { userId: 123 }) just prints User logged in { userId: 123 } as a string—not very useful for querying later.
Libraries pollute your logs. If you're using a library that logs with console.*, those messages show up whether you want them or not. And if you're writing a library, your users might not appreciate unsolicited log messages.
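To make the first two limitations concrete, here's the workaround plain console logging forces on you. This is a sketch; `LOG_LEVEL` is an ad-hoc convention for illustration, not a standard—every project invents its own.

```typescript
// The manual workaround: gate every call behind a flag you manage yourself.
function makeDebugLog(enabled: boolean) {
  return (...args: unknown[]): void => {
    if (enabled) console.debug(...args);
  };
}

// Flip verbosity via an environment variable—and hope every module
// remembered to use this wrapper instead of calling console.debug directly.
const debugLog = makeDebugLog(process.env.LOG_LEVEL === "debug");
debugLog("Connecting to database..."); // silent unless LOG_LEVEL=debug
```

And this only solves filtering; redirecting output to a file or a service would mean yet another hand-rolled wrapper.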
Before diving into code, let's think about what would actually solve the problems above. Not a wish list of features, but the practical stuff that makes a difference when you're debugging at 2 AM or trying to understand why requests are slow.
A logging system should let you categorize messages by severity—trace, debug, info, warning, error, fatal—and then filter them based on what you need. During development, you want to see everything. In production, maybe just warnings and above. The key is being able to change this without touching your code.
When your app grows beyond a single file, you need to know where logs are coming from. A good logging system lets you tag logs with categories like ["my-app", "database"] or ["my-app", "auth", "oauth"]. Even better, it lets you set different log levels for different categories—maybe you want debug logs from the database module but only warnings from everything else.
“Sink” is just a fancy word for “where logs go.” You might want logs to go to the console during development, to files in production, and to an external service like Sentry or CloudWatch for errors. A good logging system lets you configure multiple sinks and route different logs to different destinations.
Instead of logging strings, you log objects with properties. This makes logs machine-readable and queryable:
// Instead of this:
logger.info("User 123 logged in from 192.168.1.1");
// You do this:
logger.info("User logged in", { userId: 123, ip: "192.168.1.1" });
Now you can search for all logs where userId === 123 or filter by IP address.
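Once logs are structured, that kind of query is a few lines of code. A sketch, assuming one JSON object per line (the "JSON Lines" convention covered later):

```typescript
// Filtering structured logs: each line is one JSON object.
const lines = [
  '{"message":"User logged in","userId":123,"ip":"192.168.1.1"}',
  '{"message":"User logged in","userId":456,"ip":"10.0.0.7"}',
  '{"message":"Payment failed","userId":123,"ip":"192.168.1.1"}',
];

const forUser123 = lines
  .map((line) => JSON.parse(line))
  .filter((record) => record.userId === 123);

console.log(forUser123.length); // 2
```

Try doing that reliably with a regex over free-form strings.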
In a web server, you often want all logs from a single request to share a common identifier (like a request ID). This makes it possible to trace a request's journey through your entire system.
There are plenty of logging libraries out there. winston has been around forever and has a plugin for everything. Pino is fast and outputs JSON. bunyan, log4js, signale—the list goes on.
So why LogTape? A few reasons stood out to me:
Zero dependencies. Not “few dependencies”—actually zero. In an era where a single npm install can pull in hundreds of packages, this matters for security, bundle size, and not having to wonder why your lockfile just changed.
Works everywhere. The same code runs on Node.js, Deno, Bun, browsers, and edge functions like Cloudflare Workers. No polyfills, no conditional imports, no “this feature only works on Node.”
Doesn't force itself on users. If you're writing a library, you can add logging without your users ever knowing—unless they want to see the logs. This is a surprisingly rare feature.
Let's set it up:
npm add @logtape/logtape # npm
pnpm add @logtape/logtape # pnpm
yarn add @logtape/logtape # Yarn
deno add jsr:@logtape/logtape # Deno
bun add @logtape/logtape # Bun
Configuration happens once, at your application's entry point:
import { configure, getConsoleSink, getLogger } from "@logtape/logtape";
await configure({
sinks: {
console: getConsoleSink(), // Where logs go
},
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // What to log
],
});
// Now you can log from anywhere in your app:
const logger = getLogger(["my-app", "server"]);
logger.info`Server started on port 3000`;
logger.debug`Request received: ${{ method: "GET", path: "/api/users" }}`;
Notice a few things:

- You declare where logs go (`sinks`) and which logs to show (`lowestLevel`) in one place, at startup.
- Categories are hierarchical: `["my-app", "server"]` inherits settings from `["my-app"]`.

Here's a scenario: you're debugging a database issue. You want to see every query, every connection attempt, every retry. But you don't want to wade through thousands of HTTP request logs to find them.
Categories let you solve this. Instead of one global log level, you can set different verbosity for different parts of your application.
await configure({
sinks: {
console: getConsoleSink(),
},
loggers: [
{ category: ["my-app"], lowestLevel: "info", sinks: ["console"] }, // Default: info and above
{ category: ["my-app", "database"], lowestLevel: "debug", sinks: ["console"] }, // DB module: show debug too
],
});
Now when you log from different parts of your app:
// In your database module:
const dbLogger = getLogger(["my-app", "database"]);
dbLogger.debug`Executing query: ${sql}`; // This shows up
// In your HTTP module:
const httpLogger = getLogger(["my-app", "http"]);
httpLogger.debug`Received request`; // This is filtered out (below "info")
httpLogger.info`GET /api/users 200`; // This shows up
If you're using libraries that also use LogTape, you can control their logs separately:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
// Only show warnings and above from some-library
{ category: ["some-library"], lowestLevel: "warning", sinks: ["console"] },
],
});
Sometimes you want a catch-all configuration. The root logger (empty category []) catches everything:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
// Catch all logs at info level
{ category: [], lowestLevel: "info", sinks: ["console"] },
// But show debug for your app
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
],
});
LogTape has six log levels. Choosing the right one isn't just about severity—it's about who needs to see the message and when.
| Level | When to use it |
|---|---|
| `trace` | Very detailed diagnostic info. Loop iterations, function entry/exit. Usually only enabled when hunting a specific bug. |
| `debug` | Information useful during development. Variable values, state changes, flow control decisions. |
| `info` | Normal operational messages. “Server started,” “User logged in,” “Job completed.” |
| `warning` | Something unexpected happened, but the app can continue. Deprecated API usage, retry attempts, missing optional config. |
| `error` | Something failed. An operation couldn't complete, but the app is still running. |
| `fatal` | The app is about to crash or is in an unrecoverable state. |
const logger = getLogger(["my-app"]);
logger.trace`Entering processUser function`;
logger.debug`Processing user ${{ userId: 123 }}`;
logger.info`User successfully created`;
logger.warn`Rate limit approaching: ${980}/1000 requests`;
logger.error`Failed to save user: ${error.message}`;
logger.fatal`Database connection lost, shutting down`;
A good rule of thumb: in production, you typically run at info or warning level. During development or when debugging, you drop down to debug or trace.
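The filtering logic behind `lowestLevel` is easy to picture: levels form a fixed order, and a record passes if its level is at or above the configured threshold. A sketch of the idea (not LogTape's actual implementation):

```typescript
// Levels in ascending severity; a record passes if its level is
// at or above the configured lowest level.
const LEVELS = ["trace", "debug", "info", "warning", "error", "fatal"] as const;
type Level = (typeof LEVELS)[number];

function shouldLog(lowestLevel: Level, recordLevel: Level): boolean {
  return LEVELS.indexOf(recordLevel) >= LEVELS.indexOf(lowestLevel);
}

console.log(shouldLog("info", "debug"));   // false — filtered out
console.log(shouldLog("info", "warning")); // true — passes
```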
At some point, you'll want to search your logs. “Show me all errors from the payment service in the last hour.” “Find all requests from user 12345.” “What's the average response time for the /api/users endpoint?”
If your logs are plain text strings, these queries are painful. You end up writing regexes, hoping the log format is consistent, and cursing past-you for not thinking ahead.
Structured logging means attaching data to your logs as key-value pairs, not just embedding them in strings. This makes logs machine-readable and queryable.
LogTape supports two syntaxes for this:
const userId = 123;
const action = "login";
logger.info`User ${userId} performed ${action}`;
logger.info("User performed action", {
userId: 123,
action: "login",
ip: "192.168.1.1",
timestamp: new Date().toISOString(),
});
You can reference properties in your message using placeholders:
logger.info("User {userId} logged in from {ip}", {
userId: 123,
ip: "192.168.1.1",
});
// Output: User 123 logged in from 192.168.1.1
LogTape supports dot notation and array indexing in placeholders:
logger.info("Order {order.id} placed by {order.customer.name}", {
order: {
id: "ORD-001",
customer: { name: "Alice", email: "alice@example.com" },
},
});
logger.info("First item: {items[0].name}", {
items: [{ name: "Widget", price: 9.99 }],
});
For production, you often want logs as JSON (one object per line). LogTape has a built-in formatter for this:
import { configure, getConsoleSink, jsonLinesFormatter } from "@logtape/logtape";
await configure({
sinks: {
console: getConsoleSink({ formatter: jsonLinesFormatter }),
},
loggers: [
{ category: [], lowestLevel: "info", sinks: ["console"] },
],
});
Output:
{"@timestamp":"2026-01-15T10:30:00.000Z","level":"INFO","message":"User logged in","logger":"my-app","properties":{"userId":123}}
So far we've been sending everything to the console. That's fine for development, but in production you'll likely want logs to go elsewhere—or to multiple places at once.
Think about it: console output disappears when the process restarts. If your server crashes at 3 AM, you want those logs to be somewhere persistent. And when an error occurs, you might want it to show up in your error tracking service immediately, not just sit in a log file waiting for someone to grep through it.
This is where sinks come in. A sink is just a function that receives log records and does something with them. LogTape comes with several built-in sinks, and creating your own is trivial.
The simplest sink—outputs to the console:
import { getConsoleSink } from "@logtape/logtape";
const consoleSink = getConsoleSink();
For writing logs to files, install the @logtape/file package:
npm add @logtape/file
import { getFileSink, getRotatingFileSink } from "@logtape/file";
// Simple file sink
const fileSink = getFileSink("app.log");
// Rotating file sink (rotates when file reaches 10MB, keeps 5 old files)
const rotatingFileSink = getRotatingFileSink("app.log", {
maxSize: 10 * 1024 * 1024, // 10MB
maxFiles: 5,
});
Why rotating files? Without rotation, your log file grows indefinitely until it fills up the disk. With rotation, old logs are automatically archived and eventually deleted, keeping disk usage under control. This is especially important for long-running servers.
For production systems, you often want logs to go to specialized services that provide search, alerting, and visualization. LogTape has packages for popular services:
// OpenTelemetry (for observability platforms like Jaeger, Honeycomb, Datadog)
import { getOpenTelemetrySink } from "@logtape/otel";
// Sentry (for error tracking with stack traces and context)
import { getSentrySink } from "@logtape/sentry";
// AWS CloudWatch Logs (for AWS-native log aggregation)
import { getCloudWatchLogsSink } from "@logtape/cloudwatch-logs";
The OpenTelemetry sink is particularly useful if you're already using OpenTelemetry for tracing—your logs will automatically correlate with your traces, making debugging distributed systems much easier.
Here's where things get interesting. You can send different logs to different destinations based on their level or category:
await configure({
sinks: {
console: getConsoleSink(),
file: getFileSink("app.log"),
errors: getSentrySink(),
},
loggers: [
{ category: [], lowestLevel: "info", sinks: ["console", "file"] }, // Everything to console + file
{ category: [], lowestLevel: "error", sinks: ["errors"] }, // Errors also go to Sentry
],
});
Notice that a log record can go to multiple sinks. An error log in this configuration goes to the console, the file, and Sentry. This lets you have comprehensive local logs while also getting immediate alerts for critical issues.
Sometimes you need to send logs somewhere that doesn't have a pre-built sink. Maybe you have an internal logging service, or you want to send logs to a Slack channel, or store them in a database.
A sink is just a function that takes a LogRecord. That's it:
import type { Sink } from "@logtape/logtape";
const slackSink: Sink = (record) => {
// Only send errors and fatals to Slack
if (record.level === "error" || record.level === "fatal") {
fetch("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
text: `[${record.level.toUpperCase()}] ${record.message.join("")}`,
}),
});
}
};
The simplicity of sink functions means you can integrate LogTape with virtually any logging backend in just a few lines of code.
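To see how little machinery a sink really needs, here's an in-memory sink you might write for unit tests. The `LogRecord` shape below is simplified for illustration; the real LogTape type carries more fields (timestamp, properties, and so on):

```typescript
// Simplified stand-ins for LogTape's types, for illustration only.
interface LogRecord {
  level: string;
  category: readonly string[];
  message: readonly unknown[];
}
type Sink = (record: LogRecord) => void;

// An in-memory sink: collects records so tests can assert on them.
function createMemorySink(): { sink: Sink; records: LogRecord[] } {
  const records: LogRecord[] = [];
  return {
    sink: (record) => {
      records.push(record);
    },
    records,
  };
}

const { sink, records } = createMemorySink();
sink({ level: "info", category: ["my-app"], message: ["Server started"] });
console.log(records.length); // 1
```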
Here's a scenario you've probably encountered: a user reports an error, you check the logs, and you find a sea of interleaved messages from dozens of concurrent requests. Which log lines belong to the user's request? Good luck figuring that out.
This is where request tracing comes in. The idea is simple: assign a unique identifier to each request, and include that identifier in every log message produced while handling that request. Now you can filter your logs by request ID and see exactly what happened, in order, for that specific request.
LogTape supports this through contexts—a way to attach properties to log messages without passing them around explicitly.
The simplest approach is to create a logger with attached properties using .with():
function handleRequest(req: Request) {
const requestId = crypto.randomUUID();
const logger = getLogger(["my-app", "http"]).with({ requestId });
logger.info`Request received`; // Includes requestId automatically
processRequest(req, logger);
logger.info`Request completed`; // Also includes requestId
}
This works well when you're passing the logger around explicitly. But what about code that's deeper in your call stack? What about code in libraries that don't know about your logger instance?
This is where implicit contexts shine. Using withContext(), you can set properties that automatically appear in all log messages within a callback—even in nested function calls, async operations, and third-party libraries (as long as they use LogTape).
First, enable implicit contexts in your configuration:
import { configure, getConsoleSink } from "@logtape/logtape";
import { AsyncLocalStorage } from "node:async_hooks";
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
],
contextLocalStorage: new AsyncLocalStorage(),
});
Then use withContext() in your request handler:
import { withContext, getLogger } from "@logtape/logtape";
function handleRequest(req: Request) {
const requestId = crypto.randomUUID();
return withContext({ requestId }, async () => {
// Every log message in this callback includes requestId—automatically
const logger = getLogger(["my-app"]);
logger.info`Processing request`;
await validateInput(req); // Logs here include requestId
await processBusinessLogic(req); // Logs here too
await saveToDatabase(req); // And here
logger.info`Request complete`;
});
}
The magic is that validateInput, processBusinessLogic, and saveToDatabase don't need to know anything about the request ID. They just call getLogger() and log normally, and the request ID appears in their logs automatically. This works even across async boundaries—the context follows the execution flow, not the call stack.
This is incredibly powerful for debugging. When something goes wrong, you can search for the request ID and see every log message from every module that was involved in handling that request.
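The mechanism underneath is Node's `AsyncLocalStorage`, which carries a value along the asynchronous execution flow. A minimal sketch of the idea, independent of LogTape:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// AsyncLocalStorage carries a value along the async execution flow,
// so deeply nested code can read it without explicit parameter passing.
const storage = new AsyncLocalStorage<{ requestId: string }>();

async function deeplyNested(): Promise<string | undefined> {
  await Promise.resolve(); // cross an async boundary
  return storage.getStore()?.requestId; // context is still available here
}

async function handleRequest(): Promise<string | undefined> {
  // Everything run inside this callback sees { requestId: "req-42" }.
  return storage.run({ requestId: "req-42" }, () => deeplyNested());
}

handleRequest().then((id) => console.log(id)); // "req-42"
```

This is why `contextLocalStorage` must be passed to `configure()`: it's the storage LogTape uses to make `withContext()` work.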
Setting up request tracing manually can be tedious. LogTape has dedicated packages for popular frameworks that handle this automatically:
// Express
import { expressLogger } from "@logtape/express";
app.use(expressLogger());
// Fastify
import { getLogTapeFastifyLogger } from "@logtape/fastify";
const app = Fastify({ loggerInstance: getLogTapeFastifyLogger() });
// Hono
import { honoLogger } from "@logtape/hono";
app.use(honoLogger());
// Koa
import { koaLogger } from "@logtape/koa";
app.use(koaLogger());
These middlewares automatically generate request IDs, set up implicit contexts, and log request/response information. You get comprehensive request logging with a single line of code.
If you've ever used a library that spams your console with unwanted log messages, you know how annoying it can be. And if you've ever tried to add logging to your own library, you've faced a dilemma: should you use console.log() and annoy your users? Require them to install and configure a specific logging library? Or just... not log anything?
LogTape solves this with its library-first design. Libraries can add as much logging as they want, and it costs their users nothing unless they explicitly opt in.
The rule is simple: use getLogger() to log, but never call configure(). Configuration is the application's responsibility, not the library's.
// my-library/src/database.ts
import { getLogger } from "@logtape/logtape";
const logger = getLogger(["my-library", "database"]);
export function connect(url: string) {
logger.debug`Connecting to ${url}`;
// ... connection logic ...
logger.info`Connected successfully`;
}
What happens when someone uses your library?
If they haven't configured LogTape, nothing happens. The log calls are essentially no-ops—no output, no errors, no performance impact. Your library works exactly as if the logging code wasn't there.
If they have configured LogTape, they get full control. They can see your library's debug logs if they're troubleshooting an issue, or silence them entirely if they're not interested. They decide, not you.
This is fundamentally different from using console.log() in a library. With console.log(), your users have no choice—they see your logs whether they want to or not. With LogTape, you give them the power to decide.
You configure LogTape once in your entry point. This single configuration controls logging for your entire application, including any libraries that use LogTape:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // Your app: verbose
{ category: ["my-library"], lowestLevel: "warning", sinks: ["console"] }, // Library: quiet
{ category: ["noisy-library"], lowestLevel: "fatal", sinks: [] }, // That one library: silent
],
});
This separation of concerns—libraries log, applications configure—makes for a much healthier ecosystem. Library authors can add detailed logging for debugging without worrying about annoying their users. Application developers can tune logging to their needs without digging through library code.
If your application already uses winston, Pino, or another logging library, you don't have to migrate everything at once. LogTape provides adapters that route LogTape logs to your existing logging setup:
import { install } from "@logtape/adaptor-winston";
import winston from "winston";
install(winston.createLogger({ /* your existing config */ }));
This is particularly useful when you want to use a library that uses LogTape, but you're not ready to switch your whole application over. The library's logs will flow through your existing winston (or Pino) configuration, and you can migrate gradually if you choose to.
Development and production have different needs. During development, you want verbose logs, pretty formatting, and immediate feedback. In production, you care about performance, reliability, and not leaking sensitive data. Here are some things to keep in mind.
By default, logging is synchronous—when you call logger.info(), the message is written to the sink before the function returns. This is fine for development, but in a high-throughput production environment, the I/O overhead of writing every log message can add up.
Non-blocking mode buffers log messages and writes them in the background:
const consoleSink = getConsoleSink({ nonBlocking: true });
const fileSink = getFileSink("app.log", { nonBlocking: true });
The tradeoff is that logs might be slightly delayed, and if your process crashes, some buffered logs might be lost. But for most production workloads, the performance benefit is worth it.
Logs have a way of ending up in unexpected places—log aggregation services, debugging sessions, support tickets. If you're logging request data, user information, or API responses, you might accidentally expose sensitive information like passwords, API keys, or personal data.
LogTape's @logtape/redaction package helps you catch these before they become a problem:
import {
redactByPattern,
EMAIL_ADDRESS_PATTERN,
CREDIT_CARD_NUMBER_PATTERN,
type RedactionPattern,
} from "@logtape/redaction";
import { defaultConsoleFormatter, configure, getConsoleSink } from "@logtape/logtape";
const BEARER_TOKEN_PATTERN: RedactionPattern = {
pattern: /Bearer [A-Za-z0-9\-._~+\/]+=*/g,
replacement: "[REDACTED]",
};
const formatter = redactByPattern(defaultConsoleFormatter, [
EMAIL_ADDRESS_PATTERN,
CREDIT_CARD_NUMBER_PATTERN,
BEARER_TOKEN_PATTERN,
]);
await configure({
sinks: {
console: getConsoleSink({ formatter }),
},
// ...
});
With this configuration, email addresses, credit card numbers, and bearer tokens are automatically replaced with [REDACTED] in your log output. The @logtape/redaction package comes with built-in patterns for common sensitive data types, and you can define custom patterns for anything else. It's not foolproof—you should still be mindful of what you log—but it provides a safety net.
See the redaction documentation for more patterns and field-based redaction.
Edge functions (Cloudflare Workers, Vercel Edge Functions, etc.) have a unique constraint: they can be terminated immediately after returning a response. If you have buffered logs that haven't been flushed yet, they'll be lost.
The solution is to explicitly flush logs before returning:
import { configure, dispose } from "@logtape/logtape";
export default {
async fetch(request, env, ctx) {
await configure({ /* ... */ });
// ... handle request ...
ctx.waitUntil(dispose()); // Flush logs before worker terminates
return new Response("OK");
},
};
The dispose() function flushes all buffered logs and cleans up resources. By passing it to ctx.waitUntil(), you ensure the worker stays alive long enough to finish writing logs, even after the response has been sent.
Logging isn't glamorous, but it's one of those things that makes a huge difference when something goes wrong. The setup I've described here—categories for organization, structured data for queryability, contexts for request tracing—isn't complicated, but it's a significant step up from scattered console.log statements.
LogTape isn't the only way to achieve this, but I've found it hits a nice sweet spot: powerful enough for production use, simple enough that you're not fighting the framework, and light enough that you don't feel guilty adding it to a library.
If you want to dig deeper, the LogTape documentation covers advanced topics like custom filters, the “fingers crossed” pattern for buffering debug logs until an error occurs, and more sink options. The GitHub repository is also a good place to report issues or see what's coming next.
Now go add some proper logging to that side project you've been meaning to clean up. Your future 2 AM self will thank you.
Logging in Node.js (or Deno or Bun or edge functions) in 2026
It's 2 AM. Something is wrong in production. Users are complaining, but you're not sure what's happening—your only clues are a handful of console.log statements you sprinkled around during development. Half of them say things like “here” or “this works.” The other half dump entire objects that scroll off the screen. Good luck.
We've all been there. And yet, setting up “proper” logging often feels like overkill. Traditional logging libraries like winston or Pino come with their own learning curves, configuration formats, and assumptions about how you'll deploy your app. If you're working with edge functions or trying to keep your bundle small, adding a logging library can feel like bringing a sledgehammer to hang a picture frame.
I'm a fan of the “just enough” approach—more than raw console.log, but without the weight of a full-blown logging framework. We'll start from console.log(), understand its real limitations (not the exaggerated ones), and work toward a setup that's actually useful. I'll be using LogTape for the examples—it's a zero-dependency logging library that works across Node.js, Deno, Bun, and edge functions, and stays out of your way when you don't need it.
The console object is JavaScript's great equalizer. It's built-in, it works everywhere, and it requires zero setup. You even get basic severity levels: console.debug(), console.info(), console.warn(), and console.error(). In browser DevTools and some terminal environments, these show up with different colors or icons.
console.debug("Connecting to database...");
console.info("Server started on port 3000");
console.warn("Cache miss for user 123");
console.error("Failed to process payment");
For small scripts or quick debugging, this is perfectly fine. But once your application grows beyond a few files, the cracks start to show:
No filtering without code changes. Want to hide debug messages in production? You'll need to wrap every console.debug() call in a conditional, or find-and-replace them all. There's no way to say “show me only warnings and above” at runtime.
Everything goes to the console. What if you want to write logs to a file? Send errors to Sentry? Stream logs to CloudWatch? You'd have to replace every console.* call with something else—and hope you didn't miss any.
No context about where logs come from. When your app has dozens of modules, a log message like “Connection failed” doesn't tell you much. Was it the database? The cache? A third-party API? You end up prefixing every message manually: console.error("[database] Connection failed").
No structured data. Modern log analysis tools work best with structured data (JSON). But console.log("User logged in", { userId: 123 }) just prints User logged in { userId: 123 } as a string—not very useful for querying later.
Libraries pollute your logs. If you're using a library that logs with console.*, those messages show up whether you want them or not. And if you're writing a library, your users might not appreciate unsolicited log messages.
Before diving into code, let's think about what would actually solve the problems above. Not a wish list of features, but the practical stuff that makes a difference when you're debugging at 2 AM or trying to understand why requests are slow.
A logging system should let you categorize messages by severity—trace, debug, info, warning, error, fatal—and then filter them based on what you need. During development, you want to see everything. In production, maybe just warnings and above. The key is being able to change this without touching your code.
When your app grows beyond a single file, you need to know where logs are coming from. A good logging system lets you tag logs with categories like ["my-app", "database"] or ["my-app", "auth", "oauth"]. Even better, it lets you set different log levels for different categories—maybe you want debug logs from the database module but only warnings from everything else.
“Sink” is just a fancy word for “where logs go.” You might want logs to go to the console during development, to files in production, and to an external service like Sentry or CloudWatch for errors. A good logging system lets you configure multiple sinks and route different logs to different destinations.
Instead of logging strings, you log objects with properties. This makes logs machine-readable and queryable:
// Instead of this:
logger.info("User 123 logged in from 192.168.1.1");
// You do this:
logger.info("User logged in", { userId: 123, ip: "192.168.1.1" });
Now you can search for all logs where userId === 123 or filter by IP address.
In a web server, you often want all logs from a single request to share a common identifier (like a request ID). This makes it possible to trace a request's journey through your entire system.
There are plenty of logging libraries out there. winston has been around forever and has a plugin for everything. Pino is fast and outputs JSON. bunyan, log4js, signale—the list goes on.
So why LogTape? A few reasons stood out to me:
Zero dependencies. Not “few dependencies”—actually zero. In an era where a single npm install can pull in hundreds of packages, this matters for security, bundle size, and not having to wonder why your lockfile just changed.
Works everywhere. The same code runs on Node.js, Deno, Bun, browsers, and edge functions like Cloudflare Workers. No polyfills, no conditional imports, no “this feature only works on Node.”
Doesn't force itself on users. If you're writing a library, you can add logging without your users ever knowing—unless they want to see the logs. This is a surprisingly rare feature.
Let's set it up:
npm add @logtape/logtape # npm
pnpm add @logtape/logtape # pnpm
yarn add @logtape/logtape # Yarn
deno add jsr:@logtape/logtape # Deno
bun add @logtape/logtape # Bun
Configuration happens once, at your application's entry point:
import { configure, getConsoleSink, getLogger } from "@logtape/logtape";
await configure({
sinks: {
console: getConsoleSink(), // Where logs go
},
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // What to log
],
});
// Now you can log from anywhere in your app:
const logger = getLogger(["my-app", "server"]);
logger.info`Server started on port 3000`;
logger.debug`Request received: ${{ method: "GET", path: "/api/users" }}`;
Notice a few things:
- Configuration is explicit: you declare where logs go (sinks) and which logs to show (lowestLevel).
- Categories are hierarchical: ["my-app", "server"] inherits settings from ["my-app"].

Here's a scenario: you're debugging a database issue. You want to see every query, every connection attempt, every retry. But you don't want to wade through thousands of HTTP request logs to find them.
Categories let you solve this. Instead of one global log level, you can set different verbosity for different parts of your application.
await configure({
sinks: {
console: getConsoleSink(),
},
loggers: [
{ category: ["my-app"], lowestLevel: "info", sinks: ["console"] }, // Default: info and above
{ category: ["my-app", "database"], lowestLevel: "debug", sinks: ["console"] }, // DB module: show debug too
],
});
Now when you log from different parts of your app:
// In your database module:
const dbLogger = getLogger(["my-app", "database"]);
dbLogger.debug`Executing query: ${sql}`; // This shows up
// In your HTTP module:
const httpLogger = getLogger(["my-app", "http"]);
httpLogger.debug`Received request`; // This is filtered out (below "info")
httpLogger.info`GET /api/users 200`; // This shows up
If you're using libraries that also use LogTape, you can control their logs separately:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
// Only show warnings and above from some-library
{ category: ["some-library"], lowestLevel: "warning", sinks: ["console"] },
],
});
Sometimes you want a catch-all configuration. The root logger (empty category []) catches everything:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
// Catch all logs at info level
{ category: [], lowestLevel: "info", sinks: ["console"] },
// But show debug for your app
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
],
});
LogTape has six log levels. Choosing the right one isn't just about severity—it's about who needs to see the message and when.
| Level | When to use it |
|---|---|
| `trace` | Very detailed diagnostic info. Loop iterations, function entry/exit. Usually only enabled when hunting a specific bug. |
| `debug` | Information useful during development. Variable values, state changes, flow control decisions. |
| `info` | Normal operational messages. "Server started," "User logged in," "Job completed." |
| `warning` | Something unexpected happened, but the app can continue. Deprecated API usage, retry attempts, missing optional config. |
| `error` | Something failed. An operation couldn't complete, but the app is still running. |
| `fatal` | The app is about to crash or is in an unrecoverable state. |
const logger = getLogger(["my-app"]);
logger.trace`Entering processUser function`;
logger.debug`Processing user ${{ userId: 123 }}`;
logger.info`User successfully created`;
logger.warn`Rate limit approaching: ${980}/1000 requests`;
logger.error`Failed to save user: ${error.message}`;
logger.fatal`Database connection lost, shutting down`;
A good rule of thumb: in production, you typically run at info or warning level. During development or when debugging, you drop down to debug or trace.
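One common way to apply that rule is to derive the level from the environment at startup. This is a hedged sketch, not a LogTape feature; `pickLevel` and the `LOG_LEVEL` variable are my own naming:

```javascript
// Hypothetical helper: choose the lowestLevel passed to configure() from the
// environment, so production stays quiet and development stays verbose.
// An explicit LOG_LEVEL, when set, wins over NODE_ENV.
function pickLevel(env = process.env) {
  if (env.LOG_LEVEL) return env.LOG_LEVEL; // explicit override
  return env.NODE_ENV === "production" ? "info" : "debug";
}

console.log(pickLevel({ NODE_ENV: "production" })); // "info"
console.log(pickLevel({}));                         // "debug"
console.log(pickLevel({ LOG_LEVEL: "trace" }));     // "trace"
```

You would then use the result as `lowestLevel` in your logger configuration.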
At some point, you'll want to search your logs. “Show me all errors from the payment service in the last hour.” “Find all requests from user 12345.” “What's the average response time for the /api/users endpoint?”
If your logs are plain text strings, these queries are painful. You end up writing regexes, hoping the log format is consistent, and cursing past-you for not thinking ahead.
Structured logging means attaching data to your logs as key-value pairs, not just embedding them in strings. This makes logs machine-readable and queryable.
LogTape supports two syntaxes for this:
const userId = 123;
const action = "login";
logger.info`User ${userId} performed ${action}`;
logger.info("User performed action", {
userId: 123,
action: "login",
ip: "192.168.1.1",
timestamp: new Date().toISOString(),
});
You can reference properties in your message using placeholders:
logger.info("User {userId} logged in from {ip}", {
userId: 123,
ip: "192.168.1.1",
});
// Output: User 123 logged in from 192.168.1.1
LogTape supports dot notation and array indexing in placeholders:
logger.info("Order {order.id} placed by {order.customer.name}", {
order: {
id: "ORD-001",
customer: { name: "Alice", email: "alice@example.com" },
},
});
logger.info("First item: {items[0].name}", {
items: [{ name: "Widget", price: 9.99 }],
});
For production, you often want logs as JSON (one object per line). LogTape has a built-in formatter for this:
import { configure, getConsoleSink, jsonLinesFormatter } from "@logtape/logtape";
await configure({
sinks: {
console: getConsoleSink({ formatter: jsonLinesFormatter }),
},
loggers: [
{ category: [], lowestLevel: "info", sinks: ["console"] },
],
});
Output:
{"@timestamp":"2026-01-15T10:30:00.000Z","level":"INFO","message":"User logged in","logger":"my-app","properties":{"userId":123}}
So far we've been sending everything to the console. That's fine for development, but in production you'll likely want logs to go elsewhere—or to multiple places at once.
Think about it: console output disappears when the process restarts. If your server crashes at 3 AM, you want those logs to be somewhere persistent. And when an error occurs, you might want it to show up in your error tracking service immediately, not just sit in a log file waiting for someone to grep through it.
This is where sinks come in. A sink is just a function that receives log records and does something with them. LogTape comes with several built-in sinks, and creating your own is trivial.
The simplest sink—outputs to the console:
import { getConsoleSink } from "@logtape/logtape";
const consoleSink = getConsoleSink();
For writing logs to files, install the @logtape/file package:
npm add @logtape/file
import { getFileSink, getRotatingFileSink } from "@logtape/file";
// Simple file sink
const fileSink = getFileSink("app.log");
// Rotating file sink (rotates when file reaches 10MB, keeps 5 old files)
const rotatingFileSink = getRotatingFileSink("app.log", {
maxSize: 10 * 1024 * 1024, // 10MB
maxFiles: 5,
});
Why rotating files? Without rotation, your log file grows indefinitely until it fills up the disk. With rotation, old logs are automatically archived and eventually deleted, keeping disk usage under control. This is especially important for long-running servers.
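Conceptually, rotation is just bookkeeping: when the active file would exceed the size limit, shift the numbered archives up and start fresh. A sketch of that logic (illustrative only, not LogTape's actual implementation):

```javascript
// Decide what renames a rotation would perform. Returns null when the new
// entry still fits; otherwise returns [from, to] rename pairs, oldest last.
function planRotation(currentSize, entrySize, { maxSize, maxFiles }) {
  if (currentSize + entrySize <= maxSize) return null; // no rotation needed
  const renames = [];
  for (let i = maxFiles - 1; i >= 1; i--) {
    renames.push([`app.log.${i}`, `app.log.${i + 1}`]); // app.log.1 -> app.log.2, ...
  }
  renames.push(["app.log", "app.log.1"]); // current file becomes the newest archive
  return renames;
}

console.log(planRotation(100, 10, { maxSize: 200, maxFiles: 3 })); // null
console.log(planRotation(195, 10, { maxSize: 200, maxFiles: 3 })); // 3 rename pairs
```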
For production systems, you often want logs to go to specialized services that provide search, alerting, and visualization. LogTape has packages for popular services:
// OpenTelemetry (for observability platforms like Jaeger, Honeycomb, Datadog)
import { getOpenTelemetrySink } from "@logtape/otel";
// Sentry (for error tracking with stack traces and context)
import { getSentrySink } from "@logtape/sentry";
// AWS CloudWatch Logs (for AWS-native log aggregation)
import { getCloudWatchLogsSink } from "@logtape/cloudwatch-logs";
The OpenTelemetry sink is particularly useful if you're already using OpenTelemetry for tracing—your logs will automatically correlate with your traces, making debugging distributed systems much easier.
Here's where things get interesting. You can send different logs to different destinations based on their level or category:
await configure({
sinks: {
console: getConsoleSink(),
file: getFileSink("app.log"),
errors: getSentrySink(),
},
loggers: [
{ category: [], lowestLevel: "info", sinks: ["console", "file"] }, // Everything to console + file
{ category: [], lowestLevel: "error", sinks: ["errors"] }, // Errors also go to Sentry
],
});
Notice that a log record can go to multiple sinks. An error log in this configuration goes to the console, the file, and Sentry. This lets you have comprehensive local logs while also getting immediate alerts for critical issues.
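The routing rule in that configuration can be pictured as a small fan-out function. This is a plain-JavaScript sketch of the behavior, not LogTape internals; `levelRank` and `routes` are illustrative names:

```javascript
// Mirror of the configuration above: info and up go to console + file,
// error and up additionally go to the "errors" sink.
const levelRank = { trace: 0, debug: 1, info: 2, warning: 3, error: 4, fatal: 5 };

function routes(record) {
  const targets = [];
  if (levelRank[record.level] >= levelRank.info) targets.push("console", "file");
  if (levelRank[record.level] >= levelRank.error) targets.push("errors");
  return targets;
}

console.log(routes({ level: "debug" })); // [] (below info, filtered out)
console.log(routes({ level: "info" }));  // ["console", "file"]
console.log(routes({ level: "error" })); // ["console", "file", "errors"]
```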
Sometimes you need to send logs somewhere that doesn't have a pre-built sink. Maybe you have an internal logging service, or you want to send logs to a Slack channel, or store them in a database.
A sink is just a function that takes a LogRecord. That's it:
import type { Sink } from "@logtape/logtape";
const slackSink: Sink = (record) => {
// Only send errors and fatals to Slack
if (record.level === "error" || record.level === "fatal") {
fetch("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
text: `[${record.level.toUpperCase()}] ${record.message.join("")}`,
}),
});
}
};
The simplicity of sink functions means you can integrate LogTape with virtually any logging backend in just a few lines of code.
Here's a scenario you've probably encountered: a user reports an error, you check the logs, and you find a sea of interleaved messages from dozens of concurrent requests. Which log lines belong to the user's request? Good luck figuring that out.
This is where request tracing comes in. The idea is simple: assign a unique identifier to each request, and include that identifier in every log message produced while handling that request. Now you can filter your logs by request ID and see exactly what happened, in order, for that specific request.
LogTape supports this through contexts—a way to attach properties to log messages without passing them around explicitly.
The simplest approach is to create a logger with attached properties using .with():
function handleRequest(req: Request) {
const requestId = crypto.randomUUID();
const logger = getLogger(["my-app", "http"]).with({ requestId });
logger.info`Request received`; // Includes requestId automatically
processRequest(req, logger);
logger.info`Request completed`; // Also includes requestId
}
This works well when you're passing the logger around explicitly. But what about code that's deeper in your call stack? What about code in libraries that don't know about your logger instance?
This is where implicit contexts shine. Using withContext(), you can set properties that automatically appear in all log messages within a callback—even in nested function calls, async operations, and third-party libraries (as long as they use LogTape).
First, enable implicit contexts in your configuration:
import { configure, getConsoleSink } from "@logtape/logtape";
import { AsyncLocalStorage } from "node:async_hooks";
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
],
contextLocalStorage: new AsyncLocalStorage(),
});
Then use withContext() in your request handler:
import { withContext, getLogger } from "@logtape/logtape";
function handleRequest(req: Request) {
const requestId = crypto.randomUUID();
return withContext({ requestId }, async () => {
// Every log message in this callback includes requestId—automatically
const logger = getLogger(["my-app"]);
logger.info`Processing request`;
await validateInput(req); // Logs here include requestId
await processBusinessLogic(req); // Logs here too
await saveToDatabase(req); // And here
logger.info`Request complete`;
});
}
The magic is that validateInput, processBusinessLogic, and saveToDatabase don't need to know anything about the request ID. They just call getLogger() and log normally, and the request ID appears in their logs automatically. This works even across async boundaries—the context follows the execution flow, not the call stack.
This is incredibly powerful for debugging. When something goes wrong, you can search for the request ID and see every log message from every module that was involved in handling that request.
Setting up request tracing manually can be tedious. LogTape has dedicated packages for popular frameworks that handle this automatically:
// Express
import { expressLogger } from "@logtape/express";
app.use(expressLogger());
// Fastify
import { getLogTapeFastifyLogger } from "@logtape/fastify";
const app = Fastify({ loggerInstance: getLogTapeFastifyLogger() });
// Hono
import { honoLogger } from "@logtape/hono";
app.use(honoLogger());
// Koa
import { koaLogger } from "@logtape/koa";
app.use(koaLogger());
These middlewares automatically generate request IDs, set up implicit contexts, and log request/response information. You get comprehensive request logging with a single line of code.
If you've ever used a library that spams your console with unwanted log messages, you know how annoying it can be. And if you've ever tried to add logging to your own library, you've faced a dilemma: should you use console.log() and annoy your users? Require them to install and configure a specific logging library? Or just... not log anything?
LogTape solves this with its library-first design. Libraries can add as much logging as they want, and it costs their users nothing unless they explicitly opt in.
The rule is simple: use getLogger() to log, but never call configure(). Configuration is the application's responsibility, not the library's.
// my-library/src/database.ts
import { getLogger } from "@logtape/logtape";
const logger = getLogger(["my-library", "database"]);
export function connect(url: string) {
logger.debug`Connecting to ${url}`;
// ... connection logic ...
logger.info`Connected successfully`;
}
What happens when someone uses your library?
If they haven't configured LogTape, nothing happens. The log calls are essentially no-ops—no output, no errors, no performance impact. Your library works exactly as if the logging code wasn't there.
If they have configured LogTape, they get full control. They can see your library's debug logs if they're troubleshooting an issue, or silence them entirely if they're not interested. They decide, not you.
This is fundamentally different from using console.log() in a library. With console.log(), your users have no choice—they see your logs whether they want to or not. With LogTape, you give them the power to decide.
You configure LogTape once in your entry point. This single configuration controls logging for your entire application, including any libraries that use LogTape:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // Your app: verbose
{ category: ["my-library"], lowestLevel: "warning", sinks: ["console"] }, // Library: quiet
{ category: ["noisy-library"], lowestLevel: "fatal", sinks: [] }, // That one library: silent
],
});
This separation of concerns—libraries log, applications configure—makes for a much healthier ecosystem. Library authors can add detailed logging for debugging without worrying about annoying their users. Application developers can tune logging to their needs without digging through library code.
If your application already uses winston, Pino, or another logging library, you don't have to migrate everything at once. LogTape provides adapters that route LogTape logs to your existing logging setup:
import { install } from "@logtape/adaptor-winston";
import winston from "winston";
install(winston.createLogger({ /* your existing config */ }));
This is particularly useful when you want to use a library that uses LogTape, but you're not ready to switch your whole application over. The library's logs will flow through your existing winston (or Pino) configuration, and you can migrate gradually if you choose to.
Development and production have different needs. During development, you want verbose logs, pretty formatting, and immediate feedback. In production, you care about performance, reliability, and not leaking sensitive data. Here are some things to keep in mind.
By default, logging is synchronous—when you call logger.info(), the message is written to the sink before the function returns. This is fine for development, but in a high-throughput production environment, the I/O overhead of writing every log message can add up.
Non-blocking mode buffers log messages and writes them in the background:
const consoleSink = getConsoleSink({ nonBlocking: true });
const fileSink = getFileSink("app.log", { nonBlocking: true });
The tradeoff is that logs might be slightly delayed, and if your process crashes, some buffered logs might be lost. But for most production workloads, the performance benefit is worth it.
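To make the tradeoff concrete, here is a toy buffered sink: logging only appends to an array, and actual I/O happens in batches. The names are illustrative, not LogTape's internals:

```javascript
// Conceptual sketch of non-blocking logging: buffer records, write in batches.
function makeBufferedSink(write, { bufferSize = 3 } = {}) {
  let buffer = [];
  const flush = () => {
    if (buffer.length > 0) {
      write(buffer); // one I/O call for many records
      buffer = [];
    }
  };
  const sink = (record) => {
    buffer.push(record);                      // logging returns immediately...
    if (buffer.length >= bufferSize) flush(); // ...I/O happens in batches
  };
  sink.flush = flush; // call on shutdown, or records still buffered are lost
  return sink;
}

const batchSizes = [];
const sink = makeBufferedSink((batch) => batchSizes.push(batch.length), { bufferSize: 2 });
sink("a"); sink("b"); sink("c");
sink.flush();
console.log(batchSizes); // [2, 1]
```

The last line of the sketch is the crash-risk in miniature: the record `"c"` only reaches storage because `flush()` runs before the process ends.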
Logs have a way of ending up in unexpected places—log aggregation services, debugging sessions, support tickets. If you're logging request data, user information, or API responses, you might accidentally expose sensitive information like passwords, API keys, or personal data.
LogTape's @logtape/redaction package helps you catch these before they become a problem:
import {
redactByPattern,
EMAIL_ADDRESS_PATTERN,
CREDIT_CARD_NUMBER_PATTERN,
type RedactionPattern,
} from "@logtape/redaction";
import { defaultConsoleFormatter, configure, getConsoleSink } from "@logtape/logtape";
const BEARER_TOKEN_PATTERN: RedactionPattern = {
pattern: /Bearer [A-Za-z0-9\-._~+\/]+=*/g,
replacement: "[REDACTED]",
};
const formatter = redactByPattern(defaultConsoleFormatter, [
EMAIL_ADDRESS_PATTERN,
CREDIT_CARD_NUMBER_PATTERN,
BEARER_TOKEN_PATTERN,
]);
await configure({
sinks: {
console: getConsoleSink({ formatter }),
},
// ...
});
With this configuration, email addresses, credit card numbers, and bearer tokens are automatically replaced with [REDACTED] in your log output. The @logtape/redaction package comes with built-in patterns for common sensitive data types, and you can define custom patterns for anything else. It's not foolproof—you should still be mindful of what you log—but it provides a safety net.
See the redaction documentation for more patterns and field-based redaction.
Edge functions (Cloudflare Workers, Vercel Edge Functions, etc.) have a unique constraint: they can be terminated immediately after returning a response. If you have buffered logs that haven't been flushed yet, they'll be lost.
The solution is to explicitly flush logs before returning:
import { configure, dispose } from "@logtape/logtape";
export default {
async fetch(request, env, ctx) {
await configure({ /* ... */ });
// ... handle request ...
ctx.waitUntil(dispose()); // Flush logs before worker terminates
return new Response("OK");
},
};
The dispose() function flushes all buffered logs and cleans up resources. By passing it to ctx.waitUntil(), you ensure the worker stays alive long enough to finish writing logs, even after the response has been sent.
Logging isn't glamorous, but it's one of those things that makes a huge difference when something goes wrong. The setup I've described here—categories for organization, structured data for queryability, contexts for request tracing—isn't complicated, but it's a significant step up from scattered console.log statements.
LogTape isn't the only way to achieve this, but I've found it hits a nice sweet spot: powerful enough for production use, simple enough that you're not fighting the framework, and light enough that you don't feel guilty adding it to a library.
If you want to dig deeper, the LogTape documentation covers advanced topics like custom filters, the “fingers crossed” pattern for buffering debug logs until an error occurs, and more sink options. The GitHub repository is also a good place to report issues or see what's coming next.
Now go add some proper logging to that side project you've been meaning to clean up. Your future 2 AM self will thank you.
I couldn't find a logging library that worked for my library, so I made one
When I started building Fedify, an ActivityPub server framework, I ran into a problem that surprised me: I couldn't figure out how to add logging.
Not because logging is hard—there are dozens of mature logging libraries for JavaScript. The problem was that they're primarily designed for applications, not for libraries that want to stay unobtrusive.
I wrote about this a few months ago, and the response was modest—some interest, some skepticism, and quite a bit of debate about whether the post was AI-generated. I'll be honest: English isn't my first language, so I use LLMs to polish my writing. But the ideas and technical content are mine.
Several readers wanted to see a real-world example rather than theory.
Fedify helps developers build federated social applications using the ActivityPub protocol. If you've ever worked with federation, you know debugging can be painful. When an activity fails to deliver, the questions you need to answer span multiple subsystems: HTTP handling, cryptographic signatures, JSON-LD processing, queue management, and more. Without good logging, debugging turns into guesswork.
But here's the dilemma I faced as a library author: if I add verbose logging to help with debugging, I risk annoying users who don't want their console cluttered with Fedify's internal chatter. If I stay silent, users struggle to diagnose issues.
I looked at the existing options. With winston or Pino, I would have to either pick and configure a logger on my users' behalf, or ask every user to wire their own logger instance into the library.
There's also debug, which is designed for this use case. But it doesn't give you structured, level-based logs that ops teams expect—and it relies on environment variables, which some runtimes like Deno restrict by default for security reasons.
None of these felt right. So I built LogTape—a logging library designed from the ground up for library authors. And Fedify became its first real user.
The key insight was simple: a library should be able to log without producing any output unless the application developer explicitly enables it.
Fedify uses LogTape's hierarchical category system to give users fine-grained control over what they see. Here's how the categories are organized:
| Category | What it logs |
|---|---|
| `["fedify"]` | Everything from the library |
| `["fedify", "federation", "inbox"]` | Incoming activities |
| `["fedify", "federation", "outbox"]` | Outgoing activities |
| `["fedify", "federation", "http"]` | HTTP requests and responses |
| `["fedify", "sig", "http"]` | HTTP Signature operations |
| `["fedify", "sig", "ld"]` | Linked Data Signature operations |
| `["fedify", "sig", "key"]` | Key generation and retrieval |
| `["fedify", "runtime", "docloader"]` | JSON-LD document loading |
| `["fedify", "webfinger", "lookup"]` | WebFinger resource lookups |
…and about a dozen more. Each category corresponds to a distinct subsystem.
This means a user can configure logging like this:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
// Show errors from all of Fedify
{ category: "fedify", sinks: ["console"], lowestLevel: "error" },
// But show debug info for inbox processing specifically
{ category: ["fedify", "federation", "inbox"], sinks: ["console"], lowestLevel: "debug" },
],
});
When something goes wrong with incoming activities, they get detailed logs for that subsystem while keeping everything else quiet. No code changes required—just configuration.
The hierarchical categories solved the filtering problem, but there was another challenge: correlating logs across async boundaries.
In a federated system, a single user action might trigger a cascade of operations: fetch a remote actor, verify their signature, process the activity, fan out to followers, and so on. When something fails, you need to correlate all the log entries for that specific request.
Fedify uses LogTape's implicit context feature to automatically tag every log entry with a requestId:
await configure({
sinks: {
file: getFileSink("fedify.jsonl", { formatter: jsonLinesFormatter })
},
loggers: [
{ category: "fedify", sinks: ["file"], lowestLevel: "info" },
],
contextLocalStorage: new AsyncLocalStorage(), // Enables implicit contexts
});
With this configuration, every log entry automatically includes a requestId property. When you need to debug a specific request, you can filter your logs:
jq 'select(.properties.requestId == "abc-123")' fedify.jsonl
And you'll see every log entry from that request—across all subsystems, all in order. No manual correlation needed.
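If you'd rather stay in JavaScript than reach for jq, the same filter is a few lines over the JSON Lines text. The sample records below are invented for illustration:

```javascript
// The jq filter above, expressed as plain JavaScript over JSON Lines.
const jsonl = [
  '{"level":"info","message":"inbox: received","properties":{"requestId":"abc-123"}}',
  '{"level":"debug","message":"sig: verified","properties":{"requestId":"abc-123"}}',
  '{"level":"info","message":"outbox: sent","properties":{"requestId":"zzz-999"}}',
].join("\n");

const byRequest = (text, requestId) =>
  text
    .split("\n")
    .map((line) => JSON.parse(line))              // one JSON object per line
    .filter((r) => r.properties.requestId === requestId);

console.log(byRequest(jsonl, "abc-123").length); // 2
```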
The requestId is derived from standard headers when available (X-Request-Id, Traceparent, etc.), so it integrates naturally with existing observability infrastructure.
So what does all this configuration actually mean for someone using Fedify?
If a Fedify user doesn't configure LogTape at all, they see nothing. No warnings about missing configuration, no default output, and minimal performance overhead—the logging calls are essentially no-ops.
For basic visibility, they can enable error-level logging for all of Fedify with three lines of configuration. When debugging a specific issue, they can enable debug-level logging for just the relevant subsystem.
And if they're running in production with serious observability requirements, they can pipe structured JSON logs to their monitoring system with request correlation built in.
The same library code supports all these scenarios—whether the user is running on Node.js, Deno, Bun, or edge functions, without extra polyfills or shims. The user decides what they need.
Building Fedify with LogTape taught me a few things:
Design your categories early. The hierarchical structure should reflect how users will actually want to filter logs. I organized Fedify's categories around subsystems that users might need to debug independently.
Use structured logging. Properties like requestId, activityId, and actorId are far more useful than string interpolation when you need to analyze logs programmatically.
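To see why, here is a self-contained sketch with a plain array standing in for a JSON-lines log (LogTape records properties in a similar shape):

```typescript
// Each entry keeps its properties as addressable fields rather than
// baking them into the message string.
type LogEntry = { message: string; properties: Record<string, string> };

const entries: LogEntry[] = [];

function info(message: string, properties: Record<string, string>): void {
  entries.push({ message, properties });
}

info("activity delivered", { activityId: "a1", actorId: "alice" });
info("activity delivered", { activityId: "a2", actorId: "bob" });

// Analysis is a filter over fields, not a regex over prose.
function byActor(actorId: string): LogEntry[] {
  return entries.filter((e) => e.properties.actorId === actorId);
}
```

`byActor("alice")` returns the single entry with activityId "a1"; a string-interpolated equivalent would need fragile parsing to answer the same question.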
Implicit contexts turned out to be more useful than I expected. Being able to correlate logs across async boundaries without passing context manually made debugging distributed operations much easier. When a user reports that activity delivery failed, I can give them a single jq command to extract everything relevant.
Trust your users. Some library authors worry about exposing too much internal detail through logs. I've found the opposite—users appreciate being able to see what's happening when they need to. The key is making it opt-in.
If you're building a library and struggling with the logging question—how much to log, how to give users control, how to avoid being noisy—I'd encourage you to look at how Fedify does it.
The Fedify logging documentation explains everything in detail. And if you want to understand the philosophy behind LogTape's design, my earlier post covers that.
LogTape isn't trying to replace winston or Pino for application developers who are happy with those tools. It fills a different gap: logging for libraries that want to stay out of the way until users need them. If that's what you're looking for, it might be a better fit than the usual app-centric loggers.
When I started developing Fedify, an ActivityPub server framework, I ran into an unexpected problem: I couldn't figure out how to add logging.
Not because logging itself is hard: there are dozens of mature logging libraries for JavaScript. The problem was that they are designed primarily for applications, not for libraries that want to stay out of the way.
I wrote about this a few months ago, and the response was modest: some interest, some skepticism, and quite a bit of debate about whether the post was AI-generated. To be honest: English is not my first language, so I use an LLM to polish my writing. But the ideas and the technical content are my own.
Several readers wanted to see a real-world example rather than theory.
Fedify helps developers build federated social applications on the ActivityPub protocol. If you've ever worked with federation, you know how painful debugging can be. When activity delivery fails, you need to answer questions like:
These questions span multiple subsystems: HTTP handling, cryptographic signatures, JSON-LD processing, queue management, and so on. Without good logging, debugging becomes a guessing game.
But as a library author, my dilemma was this: add detailed logging to help with debugging, and I risk annoying users who don't want their consoles cluttered with Fedify's internal chatter; stay silent, and users struggle to diagnose problems.
I looked at the existing options. With winston or Pino, I would have had to either:
There is also debug, which was designed for this use case. But it doesn't produce the structured, level-based logs that operations teams expect, and it relies on environment variables, which some runtimes such as Deno restrict by default for security reasons.
None of these fit. So I built LogTape, a logging library designed from the ground up for library authors. And Fedify became its first real-world user.
The key insight was simple: a library should be able to log without producing any output unless the application developer explicitly enables it.
Fedify uses LogTape's hierarchical category system to give users fine-grained control over what they see. The categories are organized like this:
| Category | What it logs |
|---|---|
| ["fedify"] | Everything in the library |
| ["fedify", "federation", "inbox"] | Incoming activities |
| ["fedify", "federation", "outbox"] | Outgoing activities |
| ["fedify", "federation", "http"] | HTTP requests and responses |
| ["fedify", "sig", "http"] | HTTP Signatures operations |
| ["fedify", "sig", "ld"] | Linked Data Signatures operations |
| ["fedify", "sig", "key"] | Key generation and retrieval |
| ["fedify", "runtime", "docloader"] | JSON-LD document loading |
| ["fedify", "webfinger", "lookup"] | WebFinger resource lookups |
...and about a dozen more. Each category corresponds to a distinct subsystem.
This means users can configure logging like this:
import { configure, getConsoleSink } from "@logtape/logtape";

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    // Show all errors from Fedify
    { category: "fedify", sinks: ["console"], lowestLevel: "error" },
    // But show debug-level detail for inbox handling specifically
    { category: ["fedify", "federation", "inbox"], sinks: ["console"], lowestLevel: "debug" },
  ],
});
"Важные истории" (IStories) has released a follow-up to its investigation into Telegram.
I suspect that for what I'm about to write I'll be labeled anything up to an FSB employee, but I'm genuinely fuming.
As someone who doesn't specialize in this but understands a little about networks, even the first part of their investigation was painful to listen to.
In the second part, instead of addressing the real criticism of the first part (there were plenty of technical points!), they decided to publish a conversation with some guy who owns traffic exchange points and networks and also provides services to Telegram.
I don't even want to get started on journalistic ethics, but they openly admit that he asked them not to publish the conversation, and they (paragons of virtue) published it anyway, if only in part. Okaaaay.
Now, the conversation itself. At exactly the points where I want to tell them "damn it, you have no idea what you're talking about," the guy does exactly that and tries to explain, as if to idiots, that hosting some equipment for Telegram does not mean they have access to user data. He tells them outright: "I could go and put some box between their equipment and the network, it's just unclear why I would do that." And indeed: he has no real reason to 🤷
And on whether the FSB can obtain traffic data, he explains that every operator already has TSPU and SORM equipment installed, with traffic interception built in. If you can work with big data, no conspiracy between an operator and the security services is needed to identify communication sessions between two subscribers inside Russia.
For some reason the "journalists" take his words as confirmation of what they had said earlier, and they keep asking even dumber questions.
Yes, at some point they start going on about paperwork, who signed what for whom, and whether he held a position at Telegram or not. But from the technical standpoint (which is exactly what they were called out on earlier) this doesn't matter at all. Even if he had held a position at the FSB, it wouldn't change the fact that they're talking nonsense.
It's so stupid and repulsive to watch that I struggle to express the full range of my emotions after doing so.
Telegram is not a secure messenger. E2E encryption is off by default, and the UX of that E2E actively discourages using it. Experts also have questions about its custom protocols.
But this investigation doesn't point at Telegram's real problems; it invents a straw man and gleefully pokes at it.
One more thing. At the very beginning they claim that the many independent experts who supposedly criticized them were in fact all this same guy, the one whose conversation they published against his will.
No! They were criticized even by goddamn Klimarev, who has long since been designated an extremist here. And by plenty of other people who have at least a basic understanding of how networks work.
In other words, this whole "work" is manipulation stacked on manipulation. And I honestly don't know whether they're doing it deliberately, to order, or whether they're simply THAT incompetent.
In short: it's a disgrace, comrades. A disgrace and an embarrassment.
#log #Russia #Telegram #security #journalism #ethics #thoughts #review #ВажныеИстории