I've read this article https://thoughtbot.com/blog/a-javascript-developer-s-guide-to-rails-where-does-everything-come-from
It's a great guide for getting started with #ruby programming if you come from #javascript
#programming #coding #softwareEngineer #rails
Your CLI's completion should know what options you've already typed
Consider Git's -C option:
git -C /path/to/repo checkout <TAB>
When you hit Tab, Git completes branch names from /path/to/repo, not your
current directory. The completion is context-aware—it depends on the value of
another option.
Most CLI parsers can't do this. They treat each option in isolation, so
completion for --branch has no way of knowing the --repo value. You end up
with two unpleasant choices: either show completions for all possible
branches across all repositories (useless), or give up on completion entirely
for these options.
Optique 0.10.0 introduces a dependency system that solves this problem while preserving full type safety.
Static dependencies with or()
Optique already handles certain kinds of dependent options via the or()
combinator:
import { flag, object, option, or, string } from "@optique/core";

const outputOptions = or(
  object({
    json: flag("--json"),
    pretty: flag("--pretty"),
  }),
  object({
    csv: flag("--csv"),
    delimiter: option("--delimiter", string()),
  }),
);
TypeScript knows that if json is true, you'll have a pretty field, and if
csv is true, you'll have a delimiter field. The parser enforces this at
runtime, and shell completion will suggest --pretty only when --json is
present.
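In consuming code, this surfaces as an ordinary union you can narrow with a property check. A minimal sketch (the type of result below is written out by hand to mirror what Optique infers; it is not an API call):

// Narrowing the or() result by checking which branch's field is present.
declare const result:
  | { json: boolean; pretty: boolean }
  | { csv: boolean; delimiter: string };

if ("json" in result) {
  console.log(result.pretty ? "pretty JSON" : "compact JSON");
} else {
  console.log(`CSV with delimiter "${result.delimiter}"`);
}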
This works well when the valid combinations are known at definition time. But it can't handle cases where valid values depend on runtime input—like branch names that vary by repository.
Runtime dependencies
Common scenarios include:
- A deployment CLI where --environment affects which services are available
- A database tool where --connection affects which tables can be completed
- A cloud CLI where --project affects which resources are shown
In each case, you can't know the valid values until you know what the user
typed for the dependency option. Optique 0.10.0 introduces dependency() and
derive() to handle exactly this.
The dependency system
The core idea is simple: mark one option as a dependency source, then create derived parsers that use its value.
import {
  choice,
  dependency,
  message,
  object,
  option,
  string,
} from "@optique/core";

function getRefsFromRepo(repoPath: string): string[] {
  // In real code, this would read from the Git repository
  return ["main", "develop", "feature/login"];
}

// Mark as a dependency source
const repoParser = dependency(string());

// Create a derived parser
const refParser = repoParser.derive({
  metavar: "REF",
  factory: (repoPath) => {
    const refs = getRefsFromRepo(repoPath);
    return choice(refs);
  },
  defaultValue: () => ".",
});

const parser = object({
  repo: option("--repo", repoParser, {
    description: message`Path to the repository`,
  }),
  ref: option("--ref", refParser, {
    description: message`Git reference`,
  }),
});
The factory function is where the dependency gets resolved. It receives the
actual value the user provided for --repo and returns a parser that validates
against refs from that specific repository.
Under the hood, Optique uses a three-phase parsing strategy:
1. Parse all options in a first pass, collecting dependency values.
2. Call the factory functions with the collected values to create concrete parsers.
3. Re-parse the derived options using those dynamically created parsers.
This means both validation and completion work correctly—if the user has
already typed --repo /some/path, the --ref completion will show refs from
that path.
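You can exercise the runtime side directly with the core parse() function (a sketch, assuming parse() accepts a parser and an argv array, as the testing example later in this document shows):

import { parse } from "@optique/core/parser";

// Pass 1 collects --repo; pass 2 builds the ref parser from it;
// pass 3 validates --ref against that repository's refs.
const result = parse(parser, ["--repo", "/path/to/repo", "--ref", "main"]);
if (result.success) {
  console.log(result.value); // { repo: "/path/to/repo", ref: "main" }
}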
Repository-aware completion with @optique/git
The @optique/git package provides async value parsers that read from Git
repositories. Combined with the dependency system, you can build CLIs with
repository-aware completion:
import {
  command,
  dependency,
  message,
  object,
  option,
  string,
} from "@optique/core";
import { gitBranch } from "@optique/git";

const repoParser = dependency(string());

const branchParser = repoParser.deriveAsync({
  metavar: "BRANCH",
  factory: (repoPath) => gitBranch({ dir: repoPath }),
  defaultValue: () => ".",
});

const checkout = command(
  "checkout",
  object({
    repo: option("--repo", repoParser, {
      description: message`Path to the repository`,
    }),
    branch: option("--branch", branchParser, {
      description: message`Branch to checkout`,
    }),
  }),
);
Now when you type my-cli checkout --repo /path/to/project --branch <TAB>, the
completion will show branches from /path/to/project. The defaultValue of
"." means that if --repo isn't specified, it falls back to the current
directory.
Multiple dependencies
Sometimes a parser needs values from multiple options. The deriveFrom()
function handles this:
import {
  choice,
  dependency,
  deriveFrom,
  message,
  object,
  option,
} from "@optique/core";

function getAvailableServices(env: string, region: string): string[] {
  return [`${env}-api-${region}`, `${env}-web-${region}`];
}

const envParser = dependency(choice(["dev", "staging", "prod"] as const));
const regionParser = dependency(choice(["us-east", "eu-west"] as const));

const serviceParser = deriveFrom({
  dependencies: [envParser, regionParser] as const,
  metavar: "SERVICE",
  factory: (env, region) => {
    const services = getAvailableServices(env, region);
    return choice(services);
  },
  defaultValues: () => ["dev", "us-east"] as const,
});

const parser = object({
  env: option("--env", envParser, {
    description: message`Deployment environment`,
  }),
  region: option("--region", regionParser, {
    description: message`Cloud region`,
  }),
  service: option("--service", serviceParser, {
    description: message`Service to deploy`,
  }),
});
The factory receives values in the same order as the dependency array. If
some dependencies aren't provided, Optique uses the defaultValues.
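As a quick sketch of how this parses (using the same core parse() helper as before):

import { parse } from "@optique/core/parser";

// env and region are collected first; service is then validated against
// the choices the factory built from ("prod", "eu-west").
const result = parse(parser, [
  "--env", "prod",
  "--region", "eu-west",
  "--service", "prod-api-eu-west",
]);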
Async support
Real-world dependency resolution often involves I/O—reading from Git repositories, querying APIs, accessing databases. Optique provides async variants for these cases:
import { dependency, string } from "@optique/core";
import { gitBranch } from "@optique/git";

const repoParser = dependency(string());

const branchParser = repoParser.deriveAsync({
  metavar: "BRANCH",
  factory: (repoPath) => gitBranch({ dir: repoPath }),
  defaultValue: () => ".",
});
The @optique/git package uses isomorphic-git under the hood, so
gitBranch(), gitTag(), and gitRef() all work in both Node.js and Deno.
There's also deriveSync() for when you need to be explicit about synchronous
behavior, and deriveFromAsync() for multiple async dependencies.
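For instance, a database tool might derive table-name completion from a connection string. The sketch below assumes deriveFromAsync() mirrors the deriveFrom() options shown earlier but allows an async factory; listTables() is a hypothetical helper:

const connParser = dependency(string());

const tableParser = deriveFromAsync({
  dependencies: [connParser] as const,
  metavar: "TABLE",
  // listTables() is hypothetical: query the database for its table names.
  factory: async (conn) => choice(await listTables(conn)),
  defaultValues: () => ["postgres://localhost/dev"] as const,
});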
Wrapping up
The dependency system lets you build CLIs where options are aware of each other—not just for validation, but for shell completion too. You get type safety throughout: TypeScript knows the relationship between your dependency sources and derived parsers, and invalid combinations are caught at compile time.
This is particularly useful for tools that interact with external systems where the set of valid values isn't known until runtime. Git repositories, cloud providers, databases, container registries—anywhere the completion choices depend on context the user has already provided.
This feature will be available in Optique 0.10.0. To try the pre-release:
deno add jsr:@optique/core@0.10.0-dev.311
Or with npm:
npm install @optique/core@0.10.0-dev.311
See the documentation for more details.
Building CLI apps with TypeScript in 2026
We've all been there. You start a quick TypeScript CLI with process.argv.slice(2), add a couple of options, and before you know it you're drowning in if/else blocks and parseInt calls. It works, until it doesn't.
In this guide, we'll move from manual argument parsing to a fully type-safe CLI with subcommands, mutually exclusive options, and shell completion.
The naïve approach: parsing process.argv
Let's start with the most basic approach. Say we want a greeting program that takes a name and optionally repeats the greeting:
// greet.ts
const args = process.argv.slice(2);
let name: string | undefined;
let count = 1;

for (let i = 0; i < args.length; i++) {
  if (args[i] === "--name" || args[i] === "-n") {
    name = args[++i];
  } else if (args[i] === "--count" || args[i] === "-c") {
    count = parseInt(args[++i], 10);
  }
}

if (!name) {
  console.error("Error: --name is required");
  process.exit(1);
}

for (let i = 0; i < count; i++) {
  console.log(`Hello, ${name}!`);
}
Run node greet.js --name Alice --count 3 and you'll get three greetings.
But this approach is fragile. count could be NaN if someone passes --count foo, and we'd silently proceed. There's no help text. If someone passes --name without a value, we'd read the next option as the name. And the boilerplate grows fast with each new option.
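Patching these holes is possible, but every option needs its own hand-written guard, e.g. for --count:

// The kind of per-option validation the manual approach accumulates:
if (Number.isNaN(count) || count < 1) {
  console.error("Error: --count must be a positive integer");
  process.exit(1);
}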
The traditional libraries
You've probably heard of Commander.js and Yargs. They've been around for years and solve the basic problems:
// With Commander.js
import { program } from "commander";

program
  .requiredOption("-n, --name <n>", "Name to greet")
  .option("-c, --count <number>", "Number of times to greet", "1")
  .parse();

const opts = program.opts();
These libraries handle help text, option parsing, and basic validation. But they were designed before TypeScript became mainstream, and the type safety is bolted on rather than built in.
The real problem shows up when you need mutually exclusive options. Say your CLI works either in "server mode" (with --port and --host) or "client mode" (with --url). With these libraries, you end up with a config object where all options are potentially present, and you're left writing runtime checks to ensure the user didn't mix incompatible flags. TypeScript can't help you because the types don't reflect the actual constraints.
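In practice that means hand-rolled exclusivity checks like this (a sketch of the pattern, not code from either library):

// opts is the bag of optional fields a configuration-based parser returns.
if (opts.url && (opts.port !== undefined || opts.host !== undefined)) {
  console.error("Error: --url cannot be combined with --port/--host");
  process.exit(1);
}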
Enter Optique
Optique takes a different approach. Instead of configuring options declaratively, you build parsers by composing smaller parsers together. The types flow naturally from this composition, so TypeScript always knows exactly what shape your parsed result will have.
Optique works across JavaScript runtimes: Node.js, Deno, and Bun are all supported. The core parsing logic has no runtime-specific dependencies, so you can even use it in browsers if you need to parse CLI-like arguments in a web context.
Let's rebuild our greeting program:
import { object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { integer, string } from "@optique/core/valueparser";
import { withDefault } from "@optique/core/modifiers";
import { run } from "@optique/run";

const parser = object({
  name: option("-n", "--name", string()),
  count: withDefault(option("-c", "--count", integer({ min: 1 })), 1),
});

const config = run(parser);
// config is typed as { name: string; count: number }

for (let i = 0; i < config.count; i++) {
  console.log(`Hello, ${config.name}!`);
}
Types are inferred automatically. config.name is string, not string | undefined. config.count is number, guaranteed to be at least 1. Validation is built in: integer({ min: 1 }) rejects non-integers and values below 1 with clear error messages. Help text is generated automatically, and the run() function handles errors and exits with appropriate codes.
Install it with your package manager of choice:
npm add @optique/core @optique/run
# or: pnpm add, yarn add, bun add, deno add jsr:@optique/core jsr:@optique/run
Building up: a file converter
Let's build something more realistic: a file converter that reads from an input file, converts to a specified format, and writes to an output file.
import { object } from "@optique/core/constructs";
import { optional, withDefault } from "@optique/core/modifiers";
import { argument, option } from "@optique/core/primitives";
import { choice, string } from "@optique/core/valueparser";
import { run } from "@optique/run";

const parser = object({
  input: argument(string({ metavar: "INPUT" })),
  output: option("-o", "--output", string({ metavar: "FILE" })),
  format: withDefault(
    option("-f", "--format", choice(["json", "yaml", "toml"])),
    "json",
  ),
  pretty: option("-p", "--pretty"),
  verbose: option("-v", "--verbose"),
});

const config = run(parser, {
  help: "both",
  version: { mode: "both", value: "1.0.0" },
});

// config.input: string
// config.output: string
// config.format: "json" | "yaml" | "toml"
// config.pretty: boolean
// config.verbose: boolean
The type of config.format isn't just string. It's the union "json" | "yaml" | "toml". TypeScript will catch typos like config.format === "josn" at compile time.
The choice() parser is useful for any option with a fixed set of valid values: log levels, output formats, environment names, and so on. You get both runtime validation (invalid values are rejected with helpful error messages) and compile-time checking (TypeScript knows the exact set of possible values).
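For example, a log-level option follows the same shape as the format option above:

// The default "info" stays within the inferred union
// "debug" | "info" | "warn" | "error".
const logLevel = withDefault(
  option("--log-level", choice(["debug", "info", "warn", "error"])),
  "info",
);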
Mutually exclusive options
Now let's tackle the case that trips up most CLI libraries: mutually exclusive options. Say our tool can either run as a server or connect as a client, but not both:
import { object, or } from "@optique/core/constructs";
import { withDefault } from "@optique/core/modifiers";
import { argument, constant, option } from "@optique/core/primitives";
import { integer, string, url } from "@optique/core/valueparser";
import { run } from "@optique/run";
const parser = or(
  // Server mode
  object({
    mode: constant("server"),
    port: option("-p", "--port", integer({ min: 1, max: 65535 })),
    host: withDefault(option("-h", "--host", string()), "0.0.0.0"),
  }),
  // Client mode
  object({
    mode: constant("client"),
    url: argument(url()),
  }),
);
const config = run(parser);
The or() combinator tries each alternative in order. The first one that successfully parses wins. The constant() parser adds a literal value to the result without consuming any input, which serves as a discriminator.
TypeScript infers a discriminated union:
type Config =
  | { mode: "server"; port: number; host: string }
  | { mode: "client"; url: URL };
Now you can write type-safe code that handles each mode:
if (config.mode === "server") {
  console.log(`Starting server on ${config.host}:${config.port}`);
} else {
  console.log(`Connecting to ${config.url.hostname}`);
}
Try accessing config.url in the server branch. TypeScript won't let you. The compiler knows that when mode is "server", only port and host exist.
This is the key difference from configuration-based libraries. With Commander or Yargs, you'd get a type like { port?: number; host?: string; url?: string } and have to check at runtime which combination of fields is actually present. With Optique, the types match the actual constraints of your CLI.
Subcommands
For larger tools, you'll want subcommands. Optique handles this with the command() parser:
import { object, or } from "@optique/core/constructs";
import { optional } from "@optique/core/modifiers";
import { argument, command, constant, option } from "@optique/core/primitives";
import { string } from "@optique/core/valueparser";
import { run } from "@optique/run";
const parser = or(
  command("add", object({
    action: constant("add"),
    key: argument(string({ metavar: "KEY" })),
    value: argument(string({ metavar: "VALUE" })),
  })),
  command("remove", object({
    action: constant("remove"),
    key: argument(string({ metavar: "KEY" })),
  })),
  command("list", object({
    action: constant("list"),
    pattern: optional(option("-p", "--pattern", string())),
  })),
);
const result = run(parser, { help: "both" });
switch (result.action) {
  case "add":
    console.log(`Adding ${result.key}=${result.value}`);
    break;
  case "remove":
    console.log(`Removing ${result.key}`);
    break;
  case "list":
    console.log(`Listing${result.pattern ? ` (filter: ${result.pattern})` : ""}`);
    break;
}
Each subcommand gets its own help text. Run myapp add --help and you'll see only the options relevant to add. Run myapp --help and you'll see a summary of all available commands.
The pattern here is the same as mutually exclusive options: or() to combine alternatives, constant() to add a discriminator. This consistency is one of Optique's strengths. Once you understand the basic combinators, you can build arbitrarily complex CLI structures by composing them.
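For instance, nothing stops you from nesting command() inside or() under another command() to get git-style two-level subcommands (a sketch, assuming command() composes like any other parser):

const configCmd = command("config", or(
  command("get", object({
    action: constant("config-get"),
    key: argument(string({ metavar: "KEY" })),
  })),
  command("set", object({
    action: constant("config-set"),
    key: argument(string({ metavar: "KEY" })),
    value: argument(string({ metavar: "VALUE" })),
  })),
));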
Shell completion
Optique has built-in shell completion for Bash, zsh, fish, PowerShell, and Nushell. Enable it by passing completion: "both" to run():
const config = run(parser, {
  help: "both",
  version: { mode: "both", value: "1.0.0" },
  completion: "both",
});
Users can then generate completion scripts:
$ myapp --completion bash >> ~/.bashrc
$ myapp --completion zsh >> ~/.zshrc
$ myapp --completion fish > ~/.config/fish/completions/myapp.fish
The completions are context-aware. They know about your subcommands, option values, and choice() alternatives. Type myapp --format <TAB> and you'll see json, yaml, toml as suggestions. Type myapp a<TAB> and it'll complete to myapp add.
Completion support is often an afterthought in CLI tools, but it makes a real difference in user experience. With Optique, you get it essentially for free.
Integrating with validation libraries
Already using Zod for validation in your project? The @optique/zod package lets you reuse those schemas as CLI value parsers:
import { z } from "zod";
import { zod } from "@optique/zod";
import { option } from "@optique/core/primitives";
const email = option("--email", zod(z.string().email()));
const port = option("--port", zod(z.coerce.number().int().min(1).max(65535)));
Your existing validation logic just works. The Zod error messages are passed through to the user, so you get the same helpful feedback you're used to.
Prefer Valibot? The @optique/valibot package works the same way:
import * as v from "valibot";
import { valibot } from "@optique/valibot";
import { option } from "@optique/core/primitives";
const email = option("--email", valibot(v.pipe(v.string(), v.email())));
Valibot's bundle size is significantly smaller than Zod's (~10KB vs ~52KB), which can matter for CLI tools where startup time is noticeable.
Tips
A few things I've learned building CLIs with Optique:
Start simple. Begin with object() and basic options. Add or() for mutually exclusive groups only when you need them. It's easy to over-engineer CLI parsers.
Use descriptive metavars. Instead of string(), write string({ metavar: "FILE" }) or string({ metavar: "URL" }). The metavar appears in help text and error messages, so it's worth the extra few characters.
Leverage withDefault(). It's better than making options optional and checking for undefined everywhere. Your code becomes cleaner when you can assume values are always present.
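Concretely:

// Instead of optional() and undefined checks at every use site:
const port = optional(option("--port", integer())); // number | undefined
// prefer a default, so downstream code always gets a number:
const portOrDefault = withDefault(option("--port", integer()), 8080);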
Test your parser. Optique's core parsing functions work without process.argv, so you can unit test your parser logic:
import assert from "node:assert/strict";
import { parse } from "@optique/core/parser";

const result = parse(parser, ["--name", "Alice", "--count", "3"]);
if (result.success) {
  assert.equal(result.value.name, "Alice");
  assert.equal(result.value.count, 3);
}
This is especially valuable for complex parsers with many edge cases.
Going further
We've covered the fundamentals, but Optique has more to offer:
- Async value parsers for validating against external sources, like checking if a Git branch exists or if a URL is reachable
- Path validation with path() for checking file existence, directory structure, and file extensions
- Custom value parsers for domain-specific types (though Zod/Valibot integration is usually easier)
- Reusable option groups with merge() for sharing common options across subcommands
- The @optique/temporal package for parsing dates and times using the Temporal API
Check out the documentation for the full picture. The tutorial walks through the concepts in more depth, and the cookbook has patterns for common scenarios.
That's it
Building CLIs in TypeScript doesn't have to mean fighting with types or writing endless runtime validation. Optique lets you express constraints in a way that TypeScript actually understands, so the compiler catches mistakes before they reach production.
The source is on GitHub, and packages are available on both npm and JSR.
Questions or feedback? Find me on the fediverse or open an issue on the GitHub repo.
It's 2 AM. Something is wrong in production. Users are complaining, but you're not sure what's happening—your only clues are a handful of console.log statements you sprinkled around during development. Half of them say things like “here” or “this works.” The other half dump entire objects that scroll off the screen. Good luck.
We've all been there. And yet, setting up “proper” logging often feels like overkill. Traditional logging libraries like winston or Pino come with their own learning curves, configuration formats, and assumptions about how you'll deploy your app. If you're working with edge functions or trying to keep your bundle small, adding a logging library can feel like bringing a sledgehammer to hang a picture frame.
I'm a fan of the “just enough” approach—more than raw console.log, but without the weight of a full-blown logging framework. We'll start from console.log(), understand its real limitations (not the exaggerated ones), and work toward a setup that's actually useful. I'll be using LogTape for the examples—it's a zero-dependency logging library that works across Node.js, Deno, Bun, and edge functions, and stays out of your way when you don't need it.
Starting with console methods—and where they fall short
The console object is JavaScript's great equalizer. It's built-in, it works everywhere, and it requires zero setup. You even get basic severity levels: console.debug(), console.info(), console.warn(), and console.error(). In browser DevTools and some terminal environments, these show up with different colors or icons.
console.debug("Connecting to database...");
console.info("Server started on port 3000");
console.warn("Cache miss for user 123");
console.error("Failed to process payment");
For small scripts or quick debugging, this is perfectly fine. But once your application grows beyond a few files, the cracks start to show:
No filtering without code changes. Want to hide debug messages in production? You'll need to wrap every console.debug() call in a conditional, or find-and-replace them all. There's no way to say “show me only warnings and above” at runtime.
Everything goes to the console. What if you want to write logs to a file? Send errors to Sentry? Stream logs to CloudWatch? You'd have to replace every console.* call with something else—and hope you didn't miss any.
No context about where logs come from. When your app has dozens of modules, a log message like “Connection failed” doesn't tell you much. Was it the database? The cache? A third-party API? You end up prefixing every message manually: console.error("[database] Connection failed").
No structured data. Modern log analysis tools work best with structured data (JSON). But console.log("User logged in", { userId: 123 }) just prints User logged in { userId: 123 } as a string—not very useful for querying later.
Libraries pollute your logs. If you're using a library that logs with console.*, those messages show up whether you want them or not. And if you're writing a library, your users might not appreciate unsolicited log messages.
What you actually need from a logging system
Before diving into code, let's think about what would actually solve the problems above. Not a wish list of features, but the practical stuff that makes a difference when you're debugging at 2 AM or trying to understand why requests are slow.
Log levels with filtering
A logging system should let you categorize messages by severity—trace, debug, info, warning, error, fatal—and then filter them based on what you need. During development, you want to see everything. In production, maybe just warnings and above. The key is being able to change this without touching your code.
Categories
When your app grows beyond a single file, you need to know where logs are coming from. A good logging system lets you tag logs with categories like ["my-app", "database"] or ["my-app", "auth", "oauth"]. Even better, it lets you set different log levels for different categories—maybe you want debug logs from the database module but only warnings from everything else.
Sinks (multiple output destinations)
“Sink” is just a fancy word for “where logs go.” You might want logs to go to the console during development, to files in production, and to an external service like Sentry or CloudWatch for errors. A good logging system lets you configure multiple sinks and route different logs to different destinations.
Structured logging
Instead of logging strings, you log objects with properties. This makes logs machine-readable and queryable:
// Instead of this:
logger.info("User 123 logged in from 192.168.1.1");
// You do this:
logger.info("User logged in", { userId: 123, ip: "192.168.1.1" });
Now you can search for all logs where userId === 123 or filter by IP address.
Context for request tracing
In a web server, you often want all logs from a single request to share a common identifier (like a request ID). This makes it possible to trace a request's journey through your entire system.
Getting started with LogTape
There are plenty of logging libraries out there. winston has been around forever and has a plugin for everything. Pino is fast and outputs JSON. bunyan, log4js, signale—the list goes on.
So why LogTape? A few reasons stood out to me:
Zero dependencies. Not “few dependencies”—actually zero. In an era where a single npm install can pull in hundreds of packages, this matters for security, bundle size, and not having to wonder why your lockfile just changed.
Works everywhere. The same code runs on Node.js, Deno, Bun, browsers, and edge functions like Cloudflare Workers. No polyfills, no conditional imports, no “this feature only works on Node.”
Doesn't force itself on users. If you're writing a library, you can add logging without your users ever knowing—unless they want to see the logs. This is a surprisingly rare feature.
Let's set it up:
npm add @logtape/logtape # npm
pnpm add @logtape/logtape # pnpm
yarn add @logtape/logtape # Yarn
deno add jsr:@logtape/logtape # Deno
bun add @logtape/logtape # Bun
Configuration happens once, at your application's entry point:
import { configure, getConsoleSink, getLogger } from "@logtape/logtape";

await configure({
  sinks: {
    console: getConsoleSink(), // Where logs go
  },
  loggers: [
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // What to log
  ],
});

// Now you can log from anywhere in your app:
const logger = getLogger(["my-app", "server"]);
logger.info`Server started on port 3000`;
logger.debug`Request received: ${{ method: "GET", path: "/api/users" }}`;
Notice a few things:
- Configuration is explicit. You decide where logs go (sinks) and which logs to show (lowestLevel).
- Categories are hierarchical. The logger ["my-app", "server"] inherits settings from ["my-app"].
- Template literals work. You can use backticks for a natural logging syntax.
Categories and filtering: Controlling log verbosity
Here's a scenario: you're debugging a database issue. You want to see every query, every connection attempt, every retry. But you don't want to wade through thousands of HTTP request logs to find them.
Categories let you solve this. Instead of one global log level, you can set different verbosity for different parts of your application.
await configure({
  sinks: {
    console: getConsoleSink(),
  },
  loggers: [
    { category: ["my-app"], lowestLevel: "info", sinks: ["console"] }, // Default: info and above
    { category: ["my-app", "database"], lowestLevel: "debug", sinks: ["console"] }, // DB module: show debug too
  ],
});
Now when you log from different parts of your app:
// In your database module:
const dbLogger = getLogger(["my-app", "database"]);
dbLogger.debug`Executing query: ${sql}`; // This shows up
// In your HTTP module:
const httpLogger = getLogger(["my-app", "http"]);
httpLogger.debug`Received request`; // This is filtered out (below "info")
httpLogger.info`GET /api/users 200`; // This shows up
Controlling third-party library logs
If you're using libraries that also use LogTape, you can control their logs separately:
await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
    // Only show warnings and above from some-library
    { category: ["some-library"], lowestLevel: "warning", sinks: ["console"] },
  ],
});
The root logger
Sometimes you want a catch-all configuration. The root logger (empty category []) catches everything:
await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    // Catch all logs at info level
    { category: [], lowestLevel: "info", sinks: ["console"] },
    // But show debug for your app
    { category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
  ],
});
Log levels and when to use them
LogTape has six log levels. Choosing the right one isn't just about severity—it's about who needs to see the message and when.
| Level | When to use it |
|---|---|
| trace | Very detailed diagnostic info. Loop iterations, function entry/exit. Usually only enabled when hunting a specific bug. |
| debug | Information useful during development. Variable values, state changes, flow control decisions. |
| info | Normal operational messages. “Server started,” “User logged in,” “Job completed.” |
| warning | Something unexpected happened, but the app can continue. Deprecated API usage, retry attempts, missing optional config. |
| error | Something failed. An operation couldn't complete, but the app is still running. |
| fatal | The app is about to crash or is in an unrecoverable state. |
const logger = getLogger(["my-app"]);
logger.trace`Entering processUser function`;
logger.debug`Processing user ${{ userId: 123 }}`;
logger.info`User successfully created`;
logger.warn`Rate limit approaching: ${980}/1000 requests`;
logger.error`Failed to save user: ${error.message}`;
logger.fatal`Database connection lost, shutting down`;
A good rule of thumb: in production, you typically run at info or warning level. During development or when debugging, you drop down to debug or trace.
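A common way to switch levels without touching call sites is to pick the level from the environment when you call configure() (a sketch; the environment variable scheme is up to you):

import { configure, getConsoleSink } from "@logtape/logtape";

const lowestLevel =
  process.env.NODE_ENV === "production" ? "info" : "debug";

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    { category: ["my-app"], lowestLevel, sinks: ["console"] },
  ],
});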
Structured logging: Beyond plain text
At some point, you'll want to search your logs. “Show me all errors from the payment service in the last hour.” “Find all requests from user 12345.” “What's the average response time for the /api/users endpoint?”
If your logs are plain text strings, these queries are painful. You end up writing regexes, hoping the log format is consistent, and cursing past-you for not thinking ahead.
Structured logging means attaching data to your logs as key-value pairs, not just embedding them in strings. This makes logs machine-readable and queryable.
LogTape supports two syntaxes for this:
Template literals (great for simple messages)
const userId = 123;
const action = "login";
logger.info`User ${userId} performed ${action}`;
Message templates with properties (great for structured data)
logger.info("User performed action", {
userId: 123,
action: "login",
ip: "192.168.1.1",
timestamp: new Date().toISOString(),
});
You can reference properties in your message using placeholders:
logger.info("User {userId} logged in from {ip}", {
userId: 123,
ip: "192.168.1.1",
});
// Output: User 123 logged in from 192.168.1.1
Nested property access
LogTape supports dot notation and array indexing in placeholders:
logger.info("Order {order.id} placed by {order.customer.name}", {
order: {
id: "ORD-001",
customer: { name: "Alice", email: "alice@example.com" },
},
});
logger.info("First item: {items[0].name}", {
items: [{ name: "Widget", price: 9.99 }],
});
Machine-readable output with JSON Lines
For production, you often want logs as JSON (one object per line). LogTape has a built-in formatter for this:
import { configure, getConsoleSink, jsonLinesFormatter } from "@logtape/logtape";

await configure({
  sinks: {
    console: getConsoleSink({ formatter: jsonLinesFormatter }),
  },
  loggers: [
    { category: [], lowestLevel: "info", sinks: ["console"] },
  ],
});
Output:
{"@timestamp":"2026-01-15T10:30:00.000Z","level":"INFO","message":"User logged in","logger":"my-app","properties":{"userId":123}}
Sending logs to different destinations (sinks)
So far we've been sending everything to the console. That's fine for development, but in production you'll likely want logs to go elsewhere—or to multiple places at once.
Think about it: console output disappears when the process restarts. If your server crashes at 3 AM, you want those logs to be somewhere persistent. And when an error occurs, you might want it to show up in your error tracking service immediately, not just sit in a log file waiting for someone to grep through it.
This is where sinks come in. A sink is just a function that receives log records and does something with them. LogTape comes with several built-in sinks, and creating your own is trivial.
Console sink
The simplest sink—outputs to the console:
import { getConsoleSink } from "@logtape/logtape";
const consoleSink = getConsoleSink();
File sink
For writing logs to files, install the @logtape/file package:
npm add @logtape/file
import { getFileSink, getRotatingFileSink } from "@logtape/file";

// Simple file sink
const fileSink = getFileSink("app.log");

// Rotating file sink (rotates when file reaches 10MB, keeps 5 old files)
const rotatingFileSink = getRotatingFileSink("app.log", {
  maxSize: 10 * 1024 * 1024, // 10MB
  maxFiles: 5,
});
Why rotating files? Without rotation, your log file grows indefinitely until it fills up the disk. With rotation, old logs are automatically archived and eventually deleted, keeping disk usage under control. This is especially important for long-running servers.
External services
For production systems, you often want logs to go to specialized services that provide search, alerting, and visualization. LogTape has packages for popular services:
// OpenTelemetry (for observability platforms like Jaeger, Honeycomb, Datadog)
import { getOpenTelemetrySink } from "@logtape/otel";
// Sentry (for error tracking with stack traces and context)
import { getSentrySink } from "@logtape/sentry";
// AWS CloudWatch Logs (for AWS-native log aggregation)
import { getCloudWatchLogsSink } from "@logtape/cloudwatch-logs";
The OpenTelemetry sink is particularly useful if you're already using OpenTelemetry for tracing—your logs will automatically correlate with your traces, making debugging distributed systems much easier.
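Wiring it in looks like any other sink (a sketch; here getOpenTelemetrySink() is called with its defaults, and the exact options depend on your OpenTelemetry setup):

import { configure } from "@logtape/logtape";
import { getOpenTelemetrySink } from "@logtape/otel";

await configure({
  sinks: { otel: getOpenTelemetrySink() },
  loggers: [
    { category: [], lowestLevel: "info", sinks: ["otel"] },
  ],
});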
Multiple sinks
Here's where things get interesting. You can send different logs to different destinations based on their level or category:
await configure({
  sinks: {
    console: getConsoleSink(),
    file: getFileSink("app.log"),
    errors: getSentrySink(),
  },
  loggers: [
    { category: [], lowestLevel: "info", sinks: ["console", "file"] }, // Everything to console + file
    { category: [], lowestLevel: "error", sinks: ["errors"] }, // Errors also go to Sentry
  ],
});
Notice that a log record can go to multiple sinks. An error log in this configuration goes to the console, the file, and Sentry. This lets you have comprehensive local logs while also getting immediate alerts for critical issues.
Custom sinks
Sometimes you need to send logs somewhere that doesn't have a pre-built sink. Maybe you have an internal logging service, or you want to send logs to a Slack channel, or store them in a database.
A sink is just a function that takes a LogRecord. That's it:
import type { Sink } from "@logtape/logtape";

const slackSink: Sink = (record) => {
  // Only send errors and fatals to Slack
  if (record.level === "error" || record.level === "fatal") {
    fetch("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        text: `[${record.level.toUpperCase()}] ${record.message.join("")}`,
      }),
    });
  }
};
The simplicity of sink functions means you can integrate LogTape with virtually any logging backend in just a few lines of code.
Request tracing with contexts
Here's a scenario you've probably encountered: a user reports an error, you check the logs, and you find a sea of interleaved messages from dozens of concurrent requests. Which log lines belong to the user's request? Good luck figuring that out.
This is where request tracing comes in. The idea is simple: assign a unique identifier to each request, and include that identifier in every log message produced while handling that request. Now you can filter your logs by request ID and see exactly what happened, in order, for that specific request.
LogTape supports this through contexts—a way to attach properties to log messages without passing them around explicitly.
Explicit context
The simplest approach is to create a logger with attached properties using .with():
function handleRequest(req: Request) {
  const requestId = crypto.randomUUID();
  const logger = getLogger(["my-app", "http"]).with({ requestId });

  logger.info`Request received`; // Includes requestId automatically
  processRequest(req, logger);
  logger.info`Request completed`; // Also includes requestId
}
This works well when you're passing the logger around explicitly. But what about code that's deeper in your call stack? What about code in libraries that don't know about your logger instance?
Implicit context
This is where implicit contexts shine. Using withContext(), you can set properties that automatically appear in all log messages within a callback—even in nested function calls, async operations, and third-party libraries (as long as they use LogTape).
First, enable implicit contexts in your configuration:
import { configure, getConsoleSink } from "@logtape/logtape";
import { AsyncLocalStorage } from "node:async_hooks";
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
],
contextLocalStorage: new AsyncLocalStorage(),
});
Then use withContext() in your request handler:
import { withContext, getLogger } from "@logtape/logtape";
function handleRequest(req: Request) {
const requestId = crypto.randomUUID();
return withContext({ requestId }, async () => {
// Every log message in this callback includes requestId—automatically
const logger = getLogger(["my-app"]);
logger.info`Processing request`;
await validateInput(req); // Logs here include requestId
await processBusinessLogic(req); // Logs here too
await saveToDatabase(req); // And here
logger.info`Request complete`;
});
}
The magic is that validateInput, processBusinessLogic, and saveToDatabase don't need to know anything about the request ID. They just call getLogger() and log normally, and the request ID appears in their logs automatically. This works even across async boundaries—the context follows the execution flow, not the call stack.
This is incredibly powerful for debugging. When something goes wrong, you can search for the request ID and see every log message from every module that was involved in handling that request.
Framework integrations
Setting up request tracing manually can be tedious. LogTape has dedicated packages for popular frameworks that handle this automatically:
// Express
import { expressLogger } from "@logtape/express";
app.use(expressLogger());
// Fastify
import { getLogTapeFastifyLogger } from "@logtape/fastify";
const app = Fastify({ loggerInstance: getLogTapeFastifyLogger() });
// Hono
import { honoLogger } from "@logtape/hono";
app.use(honoLogger());
// Koa
import { koaLogger } from "@logtape/koa";
app.use(koaLogger());
These middlewares automatically generate request IDs, set up implicit contexts, and log request/response information. You get comprehensive request logging with a single line of code.
Using LogTape in libraries vs applications
If you've ever used a library that spams your console with unwanted log messages, you know how annoying it can be. And if you've ever tried to add logging to your own library, you've faced a dilemma: should you use console.log() and annoy your users? Require them to install and configure a specific logging library? Or just... not log anything?
LogTape solves this with its library-first design. Libraries can add as much logging as they want, and it costs their users nothing unless they explicitly opt in.
If you're writing a library
The rule is simple: use getLogger() to log, but never call configure(). Configuration is the application's responsibility, not the library's.
// my-library/src/database.ts
import { getLogger } from "@logtape/logtape";
const logger = getLogger(["my-library", "database"]);
export function connect(url: string) {
logger.debug`Connecting to ${url}`;
// ... connection logic ...
logger.info`Connected successfully`;
}
What happens when someone uses your library?
If they haven't configured LogTape, nothing happens. The log calls are essentially no-ops—no output, no errors, no performance impact. Your library works exactly as if the logging code wasn't there.
If they have configured LogTape, they get full control. They can see your library's debug logs if they're troubleshooting an issue, or silence them entirely if they're not interested. They decide, not you.
This is fundamentally different from using console.log() in a library. With console.log(), your users have no choice—they see your logs whether they want to or not. With LogTape, you give them the power to decide.
If you're writing an application
You configure LogTape once in your entry point. This single configuration controls logging for your entire application, including any libraries that use LogTape:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // Your app: verbose
{ category: ["my-library"], lowestLevel: "warning", sinks: ["console"] }, // Library: quiet
{ category: ["noisy-library"], lowestLevel: "fatal", sinks: [] }, // That one library: silent
],
});
This separation of concerns—libraries log, applications configure—makes for a much healthier ecosystem. Library authors can add detailed logging for debugging without worrying about annoying their users. Application developers can tune logging to their needs without digging through library code.
Migrating from another logger?
If your application already uses winston, Pino, or another logging library, you don't have to migrate everything at once. LogTape provides adapters that route LogTape logs to your existing logging setup:
import { install } from "@logtape/adaptor-winston";
import winston from "winston";
install(winston.createLogger({ /* your existing config */ }));
This is particularly useful when you want to use a library that uses LogTape, but you're not ready to switch your whole application over. The library's logs will flow through your existing winston (or Pino) configuration, and you can migrate gradually if you choose to.
Production considerations
Development and production have different needs. During development, you want verbose logs, pretty formatting, and immediate feedback. In production, you care about performance, reliability, and not leaking sensitive data. Here are some things to keep in mind.
Non-blocking mode
By default, logging is synchronous—when you call logger.info(), the message is written to the sink before the function returns. This is fine for development, but in a high-throughput production environment, the I/O overhead of writing every log message can add up.
Non-blocking mode buffers log messages and writes them in the background:
const consoleSink = getConsoleSink({ nonBlocking: true });
const fileSink = getFileSink("app.log", { nonBlocking: true });
The tradeoff is that logs might be slightly delayed, and if your process crashes, some buffered logs might be lost. But for most production workloads, the performance benefit is worth it.
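If you run non-blocking sinks in a long-lived server, consider flushing them on graceful shutdown with dispose(), the same function used for edge functions below. The signal handling here is illustrative:
import { dispose } from "@logtape/logtape";

process.once("SIGTERM", async () => {
  await dispose(); // Flush buffered log records before the process exits
  process.exit(0);
});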
Sensitive data redaction
Logs have a way of ending up in unexpected places—log aggregation services, debugging sessions, support tickets. If you're logging request data, user information, or API responses, you might accidentally expose sensitive information like passwords, API keys, or personal data.
LogTape's @logtape/redaction package helps you catch these before they become a problem:
import {
redactByPattern,
EMAIL_ADDRESS_PATTERN,
CREDIT_CARD_NUMBER_PATTERN,
type RedactionPattern,
} from "@logtape/redaction";
import { defaultConsoleFormatter, configure, getConsoleSink } from "@logtape/logtape";
const BEARER_TOKEN_PATTERN: RedactionPattern = {
pattern: /Bearer [A-Za-z0-9\-._~+\/]+=*/g,
replacement: "[REDACTED]",
};
const formatter = redactByPattern(defaultConsoleFormatter, [
EMAIL_ADDRESS_PATTERN,
CREDIT_CARD_NUMBER_PATTERN,
BEARER_TOKEN_PATTERN,
]);
await configure({
sinks: {
console: getConsoleSink({ formatter }),
},
// ...
});
With this configuration, email addresses, credit card numbers, and bearer tokens are automatically replaced with [REDACTED] in your log output. The @logtape/redaction package comes with built-in patterns for common sensitive data types, and you can define custom patterns for anything else. It's not foolproof—you should still be mindful of what you log—but it provides a safety net.
See the redaction documentation for more patterns and field-based redaction.
Edge functions and serverless
Edge functions (Cloudflare Workers, Vercel Edge Functions, etc.) have a unique constraint: they can be terminated immediately after returning a response. If you have buffered logs that haven't been flushed yet, they'll be lost.
The solution is to explicitly flush logs before returning:
import { configure, dispose } from "@logtape/logtape";
export default {
async fetch(request, env, ctx) {
await configure({ /* ... */ });
// ... handle request ...
ctx.waitUntil(dispose()); // Flush logs before worker terminates
return new Response("OK");
},
};
The dispose() function flushes all buffered logs and cleans up resources. By passing it to ctx.waitUntil(), you ensure the worker stays alive long enough to finish writing logs, even after the response has been sent.
Wrapping up
Logging isn't glamorous, but it's one of those things that makes a huge difference when something goes wrong. The setup I've described here—categories for organization, structured data for queryability, contexts for request tracing—isn't complicated, but it's a significant step up from scattered console.log statements.
LogTape isn't the only way to achieve this, but I've found it hits a nice sweet spot: powerful enough for production use, simple enough that you're not fighting the framework, and light enough that you don't feel guilty adding it to a library.
If you want to dig deeper, the LogTape documentation covers advanced topics like custom filters, the “fingers crossed” pattern for buffering debug logs until an error occurs, and more sink options. The GitHub repository is also a good place to report issues or see what's coming next.
Now go add some proper logging to that side project you've been meaning to clean up. Your future 2 AM self will thank you.
MicroQuickJS, https://github.com/bellard/mquickjs/blob/main/README.md.
> MicroQuickJS is a Javascript engine targetted at embedded systems. It compiles and runs Javascript programs with as low as 10 kB of RAM. The whole engine requires about 100 kB of ROM (ARM Thumb-2 code) including the C library. The speed is comparable to QuickJS.
>
> MicroQuickJS only supports a subset of Javascript close to ES5. It implements a stricter mode where some error prone or inefficient Javascript constructs are forbidden.
Boa release v0.21:
https://boajs.dev/blog/2025/10/22/boa-release-21
#Boa is an experimental #JavaScript lexer, #parser and #compiler written in #Rust. It now passes 94.12% of conformance tests in the official #ECMAScript Test Suite (Test262).
I couldn't find a logging library that worked for my library, so I made one
When I started building Fedify, an ActivityPub server framework, I ran into a problem that surprised me: I couldn't figure out how to add logging.
Not because logging is hard—there are dozens of mature logging libraries for JavaScript. The problem was that they're primarily designed for applications, not for libraries that want to stay unobtrusive.
I wrote about this a few months ago, and the response was modest—some interest, some skepticism, and quite a bit of debate about whether the post was AI-generated. I'll be honest: English isn't my first language, so I use LLMs to polish my writing. But the ideas and technical content are mine.
Several readers wanted to see a real-world example rather than theory, so this post walks through how Fedify actually uses LogTape.
The problem: existing loggers assume you're building an app
Fedify helps developers build federated social applications using the ActivityPub protocol. If you've ever worked with federation, you know debugging can be painful. When an activity fails to deliver, you need to answer questions like:
- Did the HTTP request actually go out?
- Was the signature generated correctly?
- Did the remote server reject it? Why?
- Was there a problem parsing the response?
These questions span multiple subsystems: HTTP handling, cryptographic signatures, JSON-LD processing, queue management, and more. Without good logging, debugging turns into guesswork.
But here's the dilemma I faced as a library author: if I add verbose logging to help with debugging, I risk annoying users who don't want their console cluttered with Fedify's internal chatter. If I stay silent, users struggle to diagnose issues.
I looked at the existing options. With winston or Pino, I would have to either:
- Configure a logger inside Fedify (imposing my choices on users), or
- Ask users to pass a logger instance to Fedify (adding boilerplate)
There's also debug, which is designed for this use case. But it doesn't give you structured, level-based logs that ops teams expect—and it relies on environment variables, which some runtimes like Deno restrict by default for security reasons.
None of these felt right. So I built LogTape—a logging library designed from the ground up for library authors. And Fedify became its first real user.
The solution: hierarchical categories with zero default output
The key insight was simple: a library should be able to log without producing any output unless the application developer explicitly enables it.
Fedify uses LogTape's hierarchical category system to give users fine-grained control over what they see. Here's how the categories are organized:
| Category | What it logs |
|---|---|
| ["fedify"] | Everything from the library |
| ["fedify", "federation", "inbox"] | Incoming activities |
| ["fedify", "federation", "outbox"] | Outgoing activities |
| ["fedify", "federation", "http"] | HTTP requests and responses |
| ["fedify", "sig", "http"] | HTTP Signature operations |
| ["fedify", "sig", "ld"] | Linked Data Signature operations |
| ["fedify", "sig", "key"] | Key generation and retrieval |
| ["fedify", "runtime", "docloader"] | JSON-LD document loading |
| ["fedify", "webfinger", "lookup"] | WebFinger resource lookups |
…and about a dozen more. Each category corresponds to a distinct subsystem.
This means a user can configure logging like this:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
// Show errors from all of Fedify
{ category: "fedify", sinks: ["console"], lowestLevel: "error" },
// But show debug info for inbox processing specifically
{ category: ["fedify", "federation", "inbox"], sinks: ["console"], lowestLevel: "debug" },
],
});
When something goes wrong with incoming activities, they get detailed logs for that subsystem while keeping everything else quiet. No code changes required—just configuration.
Request tracing with implicit contexts
The hierarchical categories solved the filtering problem, but there was another challenge: correlating logs across async boundaries.
In a federated system, a single user action might trigger a cascade of operations: fetch a remote actor, verify their signature, process the activity, fan out to followers, and so on. When something fails, you need to correlate all the log entries for that specific request.
Fedify uses LogTape's implicit context feature to automatically tag every log entry with a requestId:
import { AsyncLocalStorage } from "node:async_hooks";
import { configure, jsonLinesFormatter } from "@logtape/logtape";
import { getFileSink } from "@logtape/file";

await configure({
  sinks: {
    file: getFileSink("fedify.jsonl", { formatter: jsonLinesFormatter }),
  },
  loggers: [
    { category: "fedify", sinks: ["file"], lowestLevel: "info" },
  ],
  contextLocalStorage: new AsyncLocalStorage(), // Enables implicit contexts
});
With this configuration, every log entry automatically includes a requestId property. When you need to debug a specific request, you can filter your logs:
jq 'select(.properties.requestId == "abc-123")' fedify.jsonl
And you'll see every log entry from that request—across all subsystems, all in order. No manual correlation needed.
The requestId is derived from standard headers when available (X-Request-Id, Traceparent, etc.), so it integrates naturally with existing observability infrastructure.
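Fedify does this internally, but the idea is roughly the following sketch (handleInner is a hypothetical stand-in for the actual request handler):
import { withContext } from "@logtape/logtape";

declare function handleInner(request: Request): Promise<Response>;

function handle(request: Request): Promise<Response> {
  // Prefer an upstream correlation ID; otherwise mint a fresh one
  const requestId =
    request.headers.get("X-Request-Id") ?? crypto.randomUUID();
  return withContext({ requestId }, () => handleInner(request));
}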
What users actually see
So what does all this configuration actually mean for someone using Fedify?
If a Fedify user doesn't configure LogTape at all, they see nothing. No warnings about missing configuration, no default output, and minimal performance overhead—the logging calls are essentially no-ops.
For basic visibility, they can enable error-level logging for all of Fedify with three lines of configuration. When debugging a specific issue, they can enable debug-level logging for just the relevant subsystem.
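That minimal "errors only" setup is just the earlier configuration without the inbox override:
await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    { category: "fedify", sinks: ["console"], lowestLevel: "error" },
  ],
});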
And if they're running in production with serious observability requirements, they can pipe structured JSON logs to their monitoring system with request correlation built in.
The same library code supports all these scenarios—whether the user is running on Node.js, Deno, Bun, or edge functions, without extra polyfills or shims. The user decides what they need.
Lessons learned
Building Fedify with LogTape taught me a few things:
Design your categories early. The hierarchical structure should reflect how users will actually want to filter logs. I organized Fedify's categories around subsystems that users might need to debug independently.
Use structured logging. Properties like requestId, activityId, and actorId are far more useful than string interpolation when you need to analyze logs programmatically.
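For instance, a delivery failure carries its identifiers as properties rather than baked into the string (illustrative; activity and inbox are hypothetical variables here):
// Queryable later (e.g. with jq) instead of merely greppable
logger.error("Failed to deliver activity {activityId} to {inbox}", {
  activityId: activity.id,
  inbox: inbox.url,
});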
Implicit contexts turned out to be more useful than I expected. Being able to correlate logs across async boundaries without passing context manually made debugging distributed operations much easier. When a user reports that activity delivery failed, I can give them a single jq command to extract everything relevant.
Trust your users. Some library authors worry about exposing too much internal detail through logs. I've found the opposite—users appreciate being able to see what's happening when they need to. The key is making it opt-in.
Try it yourself
If you're building a library and struggling with the logging question—how much to log, how to give users control, how to avoid being noisy—I'd encourage you to look at how Fedify does it.
The Fedify logging documentation explains everything in detail. And if you want to understand the philosophy behind LogTape's design, my earlier post covers that.
LogTape isn't trying to replace winston or Pino for application developers who are happy with those tools. It fills a different gap: logging for libraries that want to stay out of the way until users need them. If that's what you're looking for, it might be a better fit than the usual app-centric loggers.
Stop writing if statements for your CLI flags
If you've built CLI tools, you've written code like this:
if (opts.reporter === "junit" && !opts.outputFile) {
throw new Error("--output-file is required for junit reporter");
}
if (opts.reporter === "html" && !opts.outputFile) {
throw new Error("--output-file is required for html reporter");
}
if (opts.reporter === "console" && opts.outputFile) {
console.warn("--output-file is ignored for console reporter");
}
A few months ago, I wrote “Stop writing CLI validation. Parse it right the first time.” about parsing individual option values correctly. But it didn't cover the relationships between options.
In the code above, --output-file only makes sense when --reporter is junit or html. When it's console, the option shouldn't exist at all.
We're using TypeScript. We have a powerful type system. And yet, here we are, writing runtime checks that the compiler can't help with. Every time we add a new reporter type, we need to remember to update these checks. Every time we refactor, we hope we didn't miss one.
The state of TypeScript CLI parsers
The old guard—Commander, yargs, minimist—were built before TypeScript became mainstream. They give you bags of strings and leave type safety as an exercise for the reader.
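For concreteness, here's a minimal sketch of that untyped experience with minimist (any flag parses, and everything comes back loosely typed):
import minimist from "minimist";

const argv = minimist(process.argv.slice(2));
// argv is a bag of loosely-typed properties: indexing into it gives `any`
const port = argv.port;       // number? string? undefined? who knows
const verbose = argv.verbose; // nothing guarantees this is a boolean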
But we've made progress. Modern TypeScript-first libraries like cmd-ts and Clipanion (the library powering Yarn Berry) take types seriously:
// cmd-ts
const app = command({
args: {
reporter: option({ type: string, long: 'reporter' }),
outputFile: option({ type: string, long: 'output-file' }),
},
handler: (args) => {
// args.reporter: string
// args.outputFile: string
},
});
// Clipanion
class TestCommand extends Command {
reporter = Option.String('--reporter');
outputFile = Option.String('--output-file');
}
These libraries infer types for individual options. --port is a number. --verbose is a boolean. That's real progress.
But here's what they can't do: express that --output-file is required when --reporter is junit, and forbidden when --reporter is console. The relationship between options isn't captured in the type system.
So you end up writing validation code anyway:
handler: (args) => {
// Both cmd-ts and Clipanion need this
if (args.reporter === "junit" && !args.outputFile) {
throw new Error("--output-file required for junit");
}
// args.outputFile is still string | undefined
// TypeScript doesn't know it's definitely string when reporter is "junit"
}
Rust's clap and Python's Click have requires and conflicts_with attributes, but those are runtime checks too. They don't change the result type.
If the parser configuration knows about option relationships, why doesn't that knowledge show up in the result type?
Modeling relationships with conditional()
Optique treats option relationships as a first-class concept. Here's the test reporter scenario:
import { conditional, object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { choice, string } from "@optique/core/valueparser";
import { run } from "@optique/run";
const parser = conditional(
option("--reporter", choice(["console", "junit", "html"])),
{
console: object({}),
junit: object({
outputFile: option("--output-file", string()),
}),
html: object({
outputFile: option("--output-file", string()),
openBrowser: option("--open-browser"),
}),
}
);
const [reporter, config] = run(parser);
The conditional() combinator takes a discriminator option (--reporter) and a map of branches. Each branch defines what other options are valid for that discriminator value.
TypeScript infers the result type automatically:
type Result =
| ["console", {}]
| ["junit", { outputFile: string }]
| ["html", { outputFile: string; openBrowser: boolean }];
When reporter is "junit", outputFile is string—not string | undefined. The relationship is encoded in the type.
Now your business logic gets real type safety:
const [reporter, config] = run(parser);
switch (reporter) {
case "console":
runWithConsoleOutput();
break;
case "junit":
// TypeScript knows config.outputFile is string
writeJUnitReport(config.outputFile);
break;
case "html":
// TypeScript knows config.outputFile and config.openBrowser exist
writeHtmlReport(config.outputFile);
if (config.openBrowser) openInBrowser(config.outputFile);
break;
}
No validation code. No runtime checks. If you add a new reporter type and forget to handle it in the switch, the compiler tells you.
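If you want the compiler to enforce that, the standard TypeScript exhaustiveness trick works on top of this (a generic pattern, not an Optique feature):
switch (reporter) {
  case "console":
  case "junit":
  case "html":
    // ...handled as above
    break;
  default: {
    // If the parser gains a new reporter value, `reporter` is no longer
    // `never` here and this assignment becomes a compile error.
    const unhandled: never = reporter;
    throw new Error(`Unhandled reporter: ${unhandled}`);
  }
}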
A more complex example: database connections
Test reporters are a nice example, but let's try something with more variation. Database connection strings:
myapp --db=sqlite --file=./data.db
myapp --db=postgres --host=localhost --port=5432 --user=admin
myapp --db=mysql --host=localhost --port=3306 --user=root --ssl
Each database type needs completely different options:
- SQLite just needs a file path
- PostgreSQL needs host, port, user, and optionally password
- MySQL needs host, port, user, and has an SSL flag
Here's how you model this:
import { conditional, object } from "@optique/core/constructs";
import { withDefault, optional } from "@optique/core/modifiers";
import { option } from "@optique/core/primitives";
import { choice, string, integer } from "@optique/core/valueparser";
const dbParser = conditional(
option("--db", choice(["sqlite", "postgres", "mysql"])),
{
sqlite: object({
file: option("--file", string()),
}),
postgres: object({
host: option("--host", string()),
port: withDefault(option("--port", integer()), 5432),
user: option("--user", string()),
password: optional(option("--password", string())),
}),
mysql: object({
host: option("--host", string()),
port: withDefault(option("--port", integer()), 3306),
user: option("--user", string()),
ssl: option("--ssl"),
}),
}
);
The inferred type:
type DbConfig =
| ["sqlite", { file: string }]
| ["postgres", { host: string; port: number; user: string; password?: string }]
| ["mysql", { host: string; port: number; user: string; ssl: boolean }];
Notice the details: PostgreSQL defaults to port 5432, MySQL to 3306. PostgreSQL has an optional password, MySQL has an SSL flag. Each database type has exactly the options it needs—no more, no less.
With this structure, writing dbConfig.ssl when the mode is sqlite isn't a runtime error—it's a compile-time impossibility.
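To make that concrete, here's a small sketch over the inferred tuple (openFile and connectMySql are hypothetical stand-ins for your own code):
const [db, config] = run(dbParser);

if (db === "sqlite") {
  openFile(config.file);  // config narrowed to { file: string }
  // config.ssl;          // compile error: 'ssl' does not exist on this branch
} else if (db === "mysql") {
  connectMySql(config.host, config.port, config.ssl); // ssl exists only here
}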
Try expressing this with requires_if attributes. You can't. The relationships are too rich.
The pattern is everywhere
Once you see it, you find this pattern in many CLI tools:
Authentication modes:
// Imports as in the earlier examples (assuming url() lives in the same
// valueparser module as choice() and string()):
import { conditional, object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { choice, string, url } from "@optique/core/valueparser";

const authParser = conditional(
option("--auth", choice(["none", "basic", "token", "oauth"])),
{
none: object({}),
basic: object({
username: option("--username", string()),
password: option("--password", string()),
}),
token: object({
token: option("--token", string()),
}),
oauth: object({
clientId: option("--client-id", string()),
clientSecret: option("--client-secret", string()),
tokenUrl: option("--token-url", url()),
}),
}
);
Deployment targets, output formats, connection protocols—anywhere you have a mode selector that determines what other options are valid.
Why conditional() exists
Optique already has an or() combinator for mutually exclusive alternatives. Why do we need conditional()?
The or() combinator distinguishes branches based on structure—which options are present. It works well for subcommands like git commit vs git push, where the arguments differ completely.
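For instance, a subcommand-style CLI where the branches have genuinely different shapes (a sketch using the same combinators):
const vcs = or(
  command("commit", object({
    action: constant("commit"),
    message: option("--message", string()),
  })),
  command("push", object({
    action: constant("push"),
    remote: argument(string()),
  })),
);
// The branches differ structurally (different commands, different options),
// so or() can tell them apart without inspecting any value.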
But in the reporter example, the structure is identical: every branch has a --reporter flag. The difference lies in the flag's value, not its presence.
// This won't work as intended
const parser = or(
object({ reporter: option("--reporter", choice(["console"])) }),
object({
reporter: option("--reporter", choice(["junit", "html"])),
outputFile: option("--output-file", string())
}),
);
When you pass --reporter junit, or() tries to pick a branch based on what options are present. Both branches have --reporter, so it can't distinguish them structurally.
conditional() solves this by reading the discriminator's value first, then selecting the appropriate branch. It bridges the gap between structural parsing and value-based decisions.
The structure is the constraint
Instead of parsing options into a loose type and then validating relationships, define a parser whose structure is the constraint.
| Traditional approach | Optique approach |
|---|---|
| Parse → Validate → Use | Parse (with constraints) → Use |
| Types and validation logic maintained separately | Types reflect the constraints |
| Mismatches found at runtime | Mismatches found at compile time |
The parser definition becomes the single source of truth. Add a new reporter type? The parser definition changes, the inferred type changes, and the compiler shows you everywhere that needs updating.
Try it
If this resonates with a CLI you're building: the next time you're about to write an if statement checking option relationships, ask whether the parser could express that constraint instead.
The structure of your parser is the constraint. You might not need that validation code at all.
Stop writing CLI validation. Parse it right the first time.
I have this bad habit. When something annoys me enough times, I end up building a library for it. This time, it was CLI validation code.
See, I spend a lot of time reading other people's code. Open source projects, work stuff, random GitHub repos I stumble upon at 2 AM. And I kept noticing this thing: every CLI tool has the same ugly validation code tucked away somewhere. You know the kind:
if (!opts.server && opts.port) {
  throw new Error("--port requires --server flag");
}
if (opts.server && !opts.port) {
  opts.port = 3000; // default port
}
// wait, what if they pass --port without a value?
// what if the port is out of range?
// what if...
It's not even that this code is hard to write. It's that it's everywhere. Every project. Every CLI tool. The same patterns, slightly different flavors. Options that depend on other options. Flags that can't be used together. Arguments that only make sense in certain modes.
And here's what really got me: we solved this problem years ago for other types of data. Just… not for CLIs.
The problem with validation
There's this blog post that completely changed how I think about parsing. It's called “Parse, don't validate”, by Alexis King. The gist? Don't parse data into a loose type and then check if it's valid. Parse it directly into a type that can only be valid.
Think about it. When you get JSON from an API, you don't just parse it as any
and then write a bunch of if-statements. You use something like Zod to parse
it directly into the shape you want. Invalid data? The parser rejects it. Done.
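For contrast, that JSON flow in a few lines (a minimal Zod sketch):
import { z } from "zod";

declare const payload: unknown; // e.g. the body of an API response

const Config = z.object({ host: z.string(), port: z.number() });
const config = Config.parse(payload); // throws on invalid data
// config: { host: string; port: number }, no if-statements needed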
But with CLIs? We parse arguments into some bag of properties and then spend the next 100 lines checking if that bag makes sense. It's backwards.
So yeah, I built Optique. Not because the world desperately needed another CLI parser (it didn't), but because I was tired of seeing—and writing—the same validation code everywhere.
Three patterns I was sick of validating
Dependent options
This one's everywhere. You have an option that only makes sense when another option is enabled.
The old way? Parse everything, then check:
const opts = parseArgs(process.argv);
if (!opts.server && opts.port) {
  throw new Error("--port requires --server");
}
if (opts.server && !opts.port) {
  opts.port = 3000;
}
// More validation probably lurking elsewhere...
With Optique, you just describe what you want:
const config = withDefault(
  object({
    server: flag("--server"),
    port: option("--port", integer()),
    workers: option("--workers", integer())
  }),
  { server: false }
);
Here's what TypeScript infers for config's type:
type Config =
  | { readonly server: false }
  | { readonly server: true; readonly port: number; readonly workers: number }
The type system now understands that when server is false, port literally doesn't exist. Not undefined, not null—it's not there. Try to access it and TypeScript yells at you. No runtime validation needed.
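A quick sketch of what that buys you downstream (startServer is a hypothetical function):
declare function startServer(port: number, workers: number): void;

if (config.server) {
  // config narrowed to { server: true; port: number; workers: number }
  startServer(config.port, config.workers);
} else {
  // config narrowed to { server: false }; config.port won't compile here
}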
Mutually exclusive options
Another classic. Pick one output format: JSON, YAML, or XML. But definitely not two.
I used to write this mess:
if ((opts.json ? 1 : 0) + (opts.yaml ? 1 : 0) + (opts.xml ? 1 : 0) > 1) {
  throw new Error('Choose only one output format');
}
(Don't judge me, you've written something similar.)
Now?
const format = or(
  map(option("--json"), () => "json" as const),
  map(option("--yaml"), () => "yaml" as const),
  map(option("--xml"), () => "xml" as const)
);
The or() combinator means exactly one succeeds. The result is just "json" | "yaml" | "xml". A single string. Not three booleans to juggle.
Environment-specific requirements
Production needs auth. Development needs debug flags. Docker needs different options than local. You know the drill.
Instead of a validation maze, you just describe each environment:
const envConfig = or(
  object({
    env: constant("prod"),
    auth: option("--auth", string()), // Required in prod
    ssl: option("--ssl"),
    monitoring: option("--monitoring", url())
  }),
  object({
    env: constant("dev"),
    debug: optional(option("--debug")), // Optional in dev
    verbose: option("--verbose")
  })
);
No auth in production? Parser fails immediately. Trying to access --auth in dev mode? TypeScript won't let you—the field doesn't exist on that type.
“But parser combinators though…”
I know, I know. “Parser combinators” sounds like something you'd need a CS degree to understand.
Here's the thing: I don't have a CS degree. Actually, I don't have any degree. But I've been using parser combinators for years because they're actually… not that hard? It's just that the name makes them sound way scarier than they are.
I'd been using them for other stuff—parsing config files, DSLs, whatever. But somehow it never clicked that you could use them for CLI parsing until I saw Haskell's optparse-applicative. That was a real “wait, of course” moment. Like, why are we doing this any other way?
Turns out it's stupidly simple. A parser is just a function. Combinators are just functions that take parsers and return new parsers. That's it.
// This is a parser
const port = option("--port", integer());

// This is also a parser (made from smaller parsers)
const server = object({
  port: port,
  host: option("--host", string())
});

// Still a parser (parsers all the way down)
const config = or(server, client);
No monads. No category theory. Just functions. Boring, beautiful functions.
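If you want the mental model in code, a parser is conceptually something like this (a simplification, not Optique's actual types):
type ParseResult<T> =
  | { ok: true; value: T; rest: string[] }
  | { ok: false; error: string };

type Parser<T> = (args: string[]) => ParseResult<T>;

// And a combinator is a plain function from parsers to a parser:
declare function or<A, B>(a: Parser<A>, b: Parser<B>): Parser<A | B>;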
TypeScript does the heavy lifting
Here's the thing that still feels like cheating: I don't write types for my CLI configs anymore. TypeScript just… figures it out.
const cli = or(
  command("deploy", object({
    action: constant("deploy"),
    environment: argument(string()),
    replicas: option("--replicas", integer())
  })),
  command("rollback", object({
    action: constant("rollback"),
    version: argument(string()),
    force: option("--force")
  }))
);

// TypeScript infers this type automatically:
type Cli =
  | {
      readonly action: "deploy"
      readonly environment: string
      readonly replicas: number
    }
  | {
      readonly action: "rollback"
      readonly version: string
      readonly force: boolean
    }
TypeScript knows that if action is "deploy", then environment exists but version doesn't. It knows replicas is a number. It knows force is a boolean. I didn't tell it any of this.
This isn't just about nice autocomplete (though yeah, the autocomplete is great). It's about catching bugs before they happen. Forget to handle a new option somewhere? Code won't compile.
What actually changed for me
I've been dogfooding this for a few weeks. Some real talk:
I delete code now. Not refactor. Delete. That validation logic that used to be 30% of my CLI code? Gone. It feels weird every time.
Refactoring isn't scary. Want to know something that usually terrifies me?
Changing how a CLI takes its arguments. Like going from --input file.txt to
just file.txt as a positional argument. With traditional parsers,
you're hunting down validation logic everywhere. With this?
You change the parser definition, TypeScript immediately shows you every place
that breaks, you fix them, done. What used to be an hour of “did I catch
everything?” is now “fix the red squiggles and move on.”
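The change itself really is that small (sketch):
// Before: a named option
// const input = option("--input", string());

// After: a positional argument. Every use site that stops type-checking
// lights up as a compile error.
const input = argument(string());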
My CLIs got fancier. When adding complex option relationships doesn't mean writing complex validation, you just… add them. Mutually exclusive groups? Sure. Context-dependent options? Why not. The parser handles it.
The reusability is real too:
const networkOptions = object({
  host: option("--host", string()),
  port: option("--port", integer())
});

// Reuse everywhere, compose differently
const devServer = merge(networkOptions, debugOptions);
const prodServer = merge(networkOptions, authOptions);
const testServer = merge(networkOptions, mockOptions);
But honestly? The biggest change is trust. If it compiles, the CLI logic works. Not “probably works” or “works unless someone passes weird arguments.” It just works.
Should you care?
If you're writing a 10-line script that takes one argument, you don't need this.
process.argv[2] and call it a day.
But if you've ever:
- Had validation logic get out of sync with your actual options
- Discovered in production that certain option combinations explode
- Spent an afternoon tracking down why --verbose breaks when used with --json
- Written the same “option A requires option B” check for the fifth time
Then yeah, maybe you're tired of this stuff too.
Fair warning: Optique is young. I'm still figuring things out, the API might shift a bit. But the core idea—parse, don't validate—that's solid. And I haven't written validation code in months.
Still feels weird. Good weird.
Try it or don't
If this resonates:
- Tutorial: Build something real, see if you hate it
- Concepts: Primitives, constructs, modifiers, value parsers, the whole thing
- GitHub: The code, issues, angry rants
I'm not saying Optique is the answer to all CLI problems. I'm just saying I was tired of writing the same validation code everywhere, so I built something that makes it unnecessary.
Take it or leave it. But that validation code you're about to write? You probably don't need it.
Ein komplexeres Beispiel: Datenbankverbindungen
Test-Reporter sind ein schönes Beispiel, aber versuchen wir etwas mit mehr Variation. Datenbankverbindungsstrings:
myapp --db=sqlite --file=./data.db
myapp --db=postgres --host=localhost --port=5432 --user=admin
myapp --db=mysql --host=localhost --port=3306 --user=root --ssl
Jeder Datenbanktyp benötigt völlig unterschiedliche Optionen:
- SQLite benötigt nur einen Dateipfad
- PostgreSQL benötigt Host, Port, Benutzer und optional ein Passwort
- MySQL benötigt Host, Port, Benutzer und hat ein SSL-Flag
So modellieren Sie dies:
import { conditional, object } from "@optique/core/constructs";
import { withDefault, optional } from "@optique/core/modifiers";
import { option } from "@optique/core/primitives";
import { choice, string, integer } from "@optique/core/valueparser";
const dbParser = conditional(
option("--db", choice(["sqlite", "postgres", "mysql"])),
{
sqlite: object({
file: option("--file", string()),
}),
postgres: object({
host: option("--host", string()),
port: withDefault(option("--port", integer()), 5432),
user: option("--user", string()),
password: optional(option("--password", string())),
}),
mysql: object({
host: option("--host", string()),
port: withDefault(option("--port", integer()), 3306),
user: option("--user", string()),
ssl: option("--ssl"),
}),
}
);
Der abgeleitete Typ:
type DbConfig =
| ["sqlite", { file: string }]
| ["postgres", { host: string; port: number; user: string; password?: string }]
| ["mysql", { host: string; port: number; user: string; ssl: boolean }];
Beachten Sie die Details: PostgreSQL verwendet standardmäßig Port 5432, MySQL 3306. PostgreSQL hat ein optionales Passwort, MySQL hat ein SSL-Flag. Jeder Datenbanktyp hat genau die Optionen, die er benötigt – nicht mehr und nicht weniger.
Mit dieser Struktur ist das Schreiben von dbConfig.ssl, wenn der Modus sqlite ist, kein Laufzeitfehler – es ist eine Kompilierzeit-Unmöglichkeit.
Versuchen Sie, dies mit requires_if-Attributen auszudrücken. Das geht nicht. Die Beziehungen sind zu komplex.
Das Muster ist überall
Wenn Sie es einmal sehen, finden Sie dieses Muster in vielen CLI-Tools:
Authentifizierungsmodi:
const authParser = conditional(
option("--auth", choice(["none", "basic", "token", "oauth"])),
{
none: object({}),
basic: object({
username: option("--username", string()),
password: option("--password", string()),
}),
token: object({
token: option("--token", string()),
}),
oauth: object({
clientId: option("--client-id", string()),
clientSecret: option("--client-secret", string()),
tokenUrl: option("--token-url", url()),
}),
}
);
Deployment-Ziele, Ausgabeformate, Verbindungsprotokolle – überall dort, wo Sie einen Modus-Selektor haben, der bestimmt, welche anderen Optionen gültig sind.
Warum conditional() existiert
Optique hat bereits einen or()-Kombinator für sich gegenseitig ausschließende Alternativen. Warum brauchen wir conditional()?
Der or()-Kombinator unterscheidet Zweige basierend auf der Struktur – welche Optionen vorhanden sind. Er funktioniert gut für Unterbefehle wie git commit vs. git push, bei denen sich die Argumente vollständig unterscheiden.
Aber im Reporter-Beispiel ist die Struktur identisch: Jeder Zweig hat ein --reporter-Flag. Der Unterschied liegt im Wert des Flags, nicht in seiner Präsenz.
// Das wird nicht wie beabsichtigt funktionieren
const parser = or(
object({ reporter: option("--reporter", choice(["console"])) }),
object({
reporter: option("--reporter", choice(["junit", "html"])),
outputFile: option("--output-file", string())
}),
);
Wenn Sie --reporter junit übergeben, versucht or(), einen Zweig basierend auf den vorhandenen Optionen auszuwählen. Beide Zweige haben --reporter, daher kann es sie strukturell nicht unterscheiden.
conditional() löst dieses Problem, indem es zuerst den Wert des Diskriminators liest und dann den entsprechenden Zweig auswählt. Es überbrückt die Lücke zwischen strukturellem Parsen und wertbasierten Entscheidungen.
Die Struktur ist die Einschränkung
Anstatt Optionen in einen lockeren Typ zu parsen und dann Beziehungen zu validieren, definieren Sie einen Parser, dessen Struktur die Einschränkung ist.
| Traditioneller Ansatz | Optique-Ansatz |
|---|---|
| Parsen → Validieren → Verwenden | Parsen (mit Einschränkungen) → Verwenden |
| Typen und Validierungslogik werden separat gepflegt | Typen spiegeln die Einschränkungen wider |
| Unstimmigkeiten werden zur Laufzeit gefunden | Unstimmigkeiten werden zur Kompilierzeit gefunden |
Die Parser-Definition wird zur einzigen Quelle der Wahrheit. Fügen Sie einen neuen Reporter-Typ hinzu? Die Parser-Definition ändert sich, der abgeleitete Typ ändert sich, und der Compiler zeigt Ihnen überall, was aktualisiert werden muss.
Probieren Sie es aus
Wenn dies mit einer CLI, die Sie entwickeln, in Resonanz steht:
Wenn Sie das nächste Mal dabei sind, eine if-Anweisung zu schreiben, die Optionsbeziehungen prüft, fragen Sie sich: Könnte der Parser diese Einschränkung stattdessen ausdrücken?
Die Struktur Ihres Parsers ist die Einschränkung. Sie brauchen diesen Validierungscode möglicherweise überhaupt nicht.
If you've built CLI tools, you've written code like this:
if (opts.reporter === "junit" && !opts.outputFile) {
throw new Error("--output-file is required for junit reporter");
}
if (opts.reporter === "html" && !opts.outputFile) {
throw new Error("--output-file is required for html reporter");
}
if (opts.reporter === "console" && opts.outputFile) {
console.warn("--output-file is ignored for console reporter");
}
A few months ago, I wrote “Stop writing CLI validation. Parse it right the first time.” about parsing individual option values correctly. But it didn't cover the relationships between options.
In the code above, --output-file only makes sense when --reporter is junit or html. When it's console, the option shouldn't exist at all.
We're using TypeScript. We have a powerful type system. And yet, here we are, writing runtime checks that the compiler can't help with. Every time we add a new reporter type, we need to remember to update these checks. Every time we refactor, we hope we didn't miss one.
The state of TypeScript CLI parsers
The old guard—Commander, yargs, minimist—was built before TypeScript became mainstream. These libraries give you bags of strings and leave type safety as an exercise for the reader.
But we've made progress. Modern TypeScript-first libraries like cmd-ts and Clipanion (the library powering Yarn Berry) take types seriously:
// cmd-ts
import { command, option, string } from 'cmd-ts';

const app = command({
  name: 'test',
  args: {
    reporter: option({ type: string, long: 'reporter' }),
    outputFile: option({ type: string, long: 'output-file' }),
  },
  handler: (args) => {
    // args.reporter: string
    // args.outputFile: string
  },
});
// Clipanion
import { Command, Option } from 'clipanion';

class TestCommand extends Command {
  reporter = Option.String('--reporter');
  outputFile = Option.String('--output-file');

  async execute() { /* ... */ }
}
These libraries infer types for individual options. --port is a number. --verbose is a boolean. That's real progress.
But here's what they can't do: express that --output-file is required when --reporter is junit, and forbidden when --reporter is console. The relationship between options isn't captured in the type system.
So you end up writing validation code anyway:
handler: (args) => {
// Both cmd-ts and Clipanion need this
if (args.reporter === "junit" && !args.outputFile) {
throw new Error("--output-file required for junit");
}
// args.outputFile is still string | undefined
// TypeScript doesn't know it's definitely string when reporter is "junit"
}
Rust's clap has requires and conflicts_with attributes (and Python's Click can express similar constraints through callbacks), but those are runtime checks too. They don't change the result type.
If the parser configuration knows about option relationships, why doesn't that knowledge show up in the result type?
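In plain TypeScript terms, the result we'd want is a discriminated union—hand-written here for illustration, before seeing how Optique can derive it for us:
// What we want the parse result to look like (written by hand for now):
type ReporterConfig =
  | { reporter: "console" }
  | { reporter: "junit"; outputFile: string }
  | { reporter: "html"; outputFile: string };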
Modeling relationships with conditional()
Optique treats option relationships as a first-class concept. Here's the test reporter scenario:
import { conditional, object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { choice, string } from "@optique/core/valueparser";
import { run } from "@optique/run";
const parser = conditional(
option("--reporter", choice(["console", "junit", "html"])),
{
console: object({}),
junit: object({
outputFile: option("--output-file", string()),
}),
html: object({
outputFile: option("--output-file", string()),
openBrowser: option("--open-browser"),
}),
}
);
const [reporter, config] = run(parser);
The conditional() combinator takes a discriminator option (--reporter) and a map of branches. Each branch defines what other options are valid for that discriminator value.
TypeScript infers the result type automatically:
type Result =
| ["console", {}]
| ["junit", { outputFile: string }]
| ["html", { outputFile: string; openBrowser: boolean }];
When reporter is "junit", outputFile is string—not string | undefined. The relationship is encoded in the type.
Now your business logic gets real type safety:
const [reporter, config] = run(parser);
switch (reporter) {
case "console":
runWithConsoleOutput();
break;
case "junit":
// TypeScript knows config.outputFile is string
writeJUnitReport(config.outputFile);
break;
case "html":
// TypeScript knows config.outputFile and config.openBrowser exist
writeHtmlReport(config.outputFile);
if (config.openBrowser) openInBrowser(config.outputFile);
break;
}
No validation code. No runtime checks. If you add a new reporter type and forget to handle it in the switch, the compiler tells you.
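That last guarantee relies on TypeScript's ordinary exhaustiveness checking. A common way to enforce it—plain TypeScript, not an Optique API—is a never-typed default branch added to the switch above:
switch (reporter) {
  case "console": runWithConsoleOutput(); break;
  case "junit": writeJUnitReport(config.outputFile); break;
  case "html": writeHtmlReport(config.outputFile); break;
  default: {
    // If a new reporter value is added to the parser, `reporter` no longer
    // narrows to `never` here, so this assignment stops compiling.
    const unhandled: never = reporter;
    throw new Error(`Unhandled reporter: ${String(unhandled)}`);
  }
}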
A more complex example: database connections
Test reporters are a nice example, but let's try something with more variation. Database connection strings:
myapp --db=sqlite --file=./data.db
myapp --db=postgres --host=localhost --port=5432 --user=admin
myapp --db=mysql --host=localhost --port=3306 --user=root --ssl
Each database type needs completely different options:
- SQLite just needs a file path
- PostgreSQL needs host, port, user, and optionally password
- MySQL needs host, port, user, and has an SSL flag
Here's how you model this:
import { conditional, object } from "@optique/core/constructs";
import { withDefault, optional } from "@optique/core/modifiers";
import { option } from "@optique/core/primitives";
import { choice, string, integer } from "@optique/core/valueparser";
const dbParser = conditional(
option("--db", choice(["sqlite", "postgres", "mysql"])),
{
sqlite: object({
file: option("--file", string()),
}),
postgres: object({
host: option("--host", string()),
port: withDefault(option("--port", integer()), 5432),
user: option("--user", string()),
password: optional(option("--password", string())),
}),
mysql: object({
host: option("--host", string()),
port: withDefault(option("--port", integer()), 3306),
user: option("--user", string()),
ssl: option("--ssl"),
}),
}
);
The inferred type:
type DbConfig =
| ["sqlite", { file: string }]
| ["postgres", { host: string; port: number; user: string; password?: string }]
| ["mysql", { host: string; port: number; user: string; ssl: boolean }];
Notice the details: PostgreSQL defaults to port 5432, MySQL to 3306. PostgreSQL has an optional password, MySQL has an SSL flag. Each database type has exactly the options it needs—no more, no less.
With this structure, writing dbConfig.ssl when the mode is sqlite isn't a runtime error—it's a compile-time impossibility.
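A minimal consumption sketch makes this concrete—the connect* helpers are hypothetical stand-ins, not part of Optique:
import { run } from "@optique/run";

const [db, cfg] = run(dbParser);
switch (db) {
  case "sqlite":
    openSqlite(cfg.file); // cfg: { file: string }
    break;
  case "postgres":
    // cfg.port is number (defaulted to 5432); cfg.password is string | undefined
    connectPostgres(cfg.host, cfg.port, cfg.user, cfg.password);
    break;
  case "mysql":
    connectMysql(cfg.host, cfg.port, cfg.user, cfg.ssl); // cfg has no password field
    break;
}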
Try expressing this with requires_if attributes. You can't. The relationships are too rich.
The pattern is everywhere
Once you see it, you find this pattern in many CLI tools:
Authentication modes:
import { conditional, object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { choice, string, url } from "@optique/core/valueparser";

const authParser = conditional(
option("--auth", choice(["none", "basic", "token", "oauth"])),
{
none: object({}),
basic: object({
username: option("--username", string()),
password: option("--password", string()),
}),
token: object({
token: option("--token", string()),
}),
oauth: object({
clientId: option("--client-id", string()),
clientSecret: option("--client-secret", string()),
tokenUrl: option("--token-url", url()),
}),
}
);
Deployment targets, output formats, connection protocols—anywhere you have a mode selector that determines what other options are valid.
Why conditional() exists
Optique already has an or() combinator for mutually exclusive alternatives. Why do we need conditional()?
The or() combinator distinguishes branches based on structure—which options are present. It works well for subcommands like git commit vs git push, where the arguments differ completely.
But in the reporter example, the structure is identical: every branch has a --reporter flag. The difference lies in the flag's value, not its presence.
// This won't work as intended
const parser = or(
object({ reporter: option("--reporter", choice(["console"])) }),
object({
reporter: option("--reporter", choice(["junit", "html"])),
outputFile: option("--output-file", string())
}),
);
When you pass --reporter junit, or() tries to pick a branch based on what options are present. Both branches have --reporter, so it can't distinguish them structurally.
conditional() solves this by reading the discriminator's value first, then selecting the appropriate branch. It bridges the gap between structural parsing and value-based decisions.
The structure is the constraint
Instead of parsing options into a loose type and then validating relationships, define a parser whose structure is the constraint.
| Traditional approach | Optique approach |
|---|---|
| Parse → Validate → Use | Parse (with constraints) → Use |
| Types and validation logic maintained separately | Types reflect the constraints |
| Mismatches found at runtime | Mismatches found at compile time |
The parser definition becomes the single source of truth. Add a new reporter type? The parser definition changes, the inferred type changes, and the compiler shows you everywhere that needs updating.
Try it
If this resonates with a CLI you're building, give it a try.
Next time you're about to write an if statement checking option relationships, ask: could the parser express this constraint instead?
The structure of your parser is the constraint. You might not need that validation code at all.
한국어:CLI 도구를 만들어 보셨다면, 이런 코드를 작성해 보셨을 겁니다:
if (opts.reporter === "junit" && !opts.outputFile) {
throw new Error("--output-file is required for junit reporter");
}
if (opts.reporter === "html" && !opts.outputFile) {
throw new Error("--output-file is required for html reporter");
}
if (opts.reporter === "console" && opts.outputFile) {
console.warn("--output-file is ignored for console reporter");
}
몇 달 전, 저는 CLI 유효성 검사 작성을 그만두세요. 처음부터 올바르게 파싱하세요. 라는 글에서 개별 옵션 값을 올바르게 파싱하는 방법에 대해 썼습니다. 하지만 그 글에서는 옵션 간의 관계를 다루지 않았습니다.
위 코드에서 --output-file은 --reporter가 junit이나 html일 때만 의미가 있습니다. console일 때는 이 옵션이 아예 존재하지 않아야 합니다.
우리는 TypeScript를 사용하고 있습니다. 강력한 타입 시스템이 있습니다. 그런데도 여기서는 컴파일러가 도울 수 없는 런타임 검사를 작성하고 있습니다. 새로운 리포터 타입을 추가할 때마다 이러한 검사를 업데이트해야 합니다. 리팩토링할 때마다 하나라도 놓치지 않았기를 바랄 뿐입니다.
TypeScript CLI 파서의 현재 상태
Commander, yargs, minimist와 같은 오래된 라이브러리들은 TypeScript가 주류가 되기 전에 만들어졌습니다. 이들은 문자열 묶음을 제공하고 타입 안전성은 사용자의 몫으로 남겨둡니다.
하지만 우리는 발전했습니다. cmd-ts와 Clipanion(Yarn Berry를 지원하는 라이브러리)과 같은 현대적인 TypeScript 우선 라이브러리들은 타입을 진지하게 다룹니다:
// cmd-ts
const app = command({
args: {
reporter: option({ type: string, long: 'reporter' }),
outputFile: option({ type: string, long: 'output-file' }),
},
handler: (args) => {
// args.reporter: string
// args.outputFile: string
},
});
// Clipanion
class TestCommand extends Command {
reporter = Option.String('--reporter');
outputFile = Option.String('--output-file');
}
이러한 라이브러리들은 개별 옵션에 대한 타입을 추론합니다. --port는 number입니다. --verbose는 boolean입니다. 이는 실질적인 발전입니다.
하지만 이들이 할 수 없는 것이 있습니다: --reporter가 junit일 때 --output-file이 필요하고, --reporter가 console일 때는 금지된다는 옵션 간의 관계를 표현하는 것입니다. 옵션 간의 관계는 타입 시스템에 포착되지 않습니다.
그래서 결국 유효성 검사 코드를 작성하게 됩니다:
handler: (args) => {
// Both cmd-ts and Clipanion need this
if (args.reporter === "junit" && !args.outputFile) {
throw new Error("--output-file required for junit");
}
// args.outputFile is still string | undefined
// TypeScript doesn't know it's definitely string when reporter is "junit"
}
Rust의 clap과 Python의 Click은 requires와 conflicts_with 속성을 가지고 있지만, 이것들도 런타임 검사일 뿐입니다. 결과 타입을 변경하지는 않습니다.
파서 구성이 옵션 간의 관계를 알고 있다면, 왜 그 지식이 결과 타입에 나타나지 않을까요?
conditional()로 관계 모델링하기
Optique는 옵션 간의 관계를 일급 개념으로 취급합니다. 다음은 테스트 리포터 시나리오입니다:
import { conditional, object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { choice, string } from "@optique/core/valueparser";
import { run } from "@optique/run";
const parser = conditional(
option("--reporter", choice(["console", "junit", "html"])),
{
console: object({}),
junit: object({
outputFile: option("--output-file", string()),
}),
html: object({
outputFile: option("--output-file", string()),
openBrowser: option("--open-browser"),
}),
}
);
const [reporter, config] = run(parser);
conditional() 컴비네이터는 구분자 옵션(--reporter)과 분기 맵을 받습니다. 각 분기는 해당 구분자 값에 대해 유효한 다른 옵션들을 정의합니다.
TypeScript는 결과 타입을 자동으로 추론합니다:
type Result =
| ["console", {}]
| ["junit", { outputFile: string }]
| ["html", { outputFile: string; openBrowser: boolean }];
reporter가 "junit"일 때, outputFile은 string | undefined가 아닌 string입니다. 관계가 타입에 인코딩되어 있습니다.
이제 비즈니스 로직에 진정한 타입 안전성이 생깁니다:
const [reporter, config] = run(parser);
switch (reporter) {
case "console":
runWithConsoleOutput();
break;
case "junit":
// TypeScript는 config.outputFile이 string임을 알고 있습니다
writeJUnitReport(config.outputFile);
break;
case "html":
// TypeScript는 config.outputFile과 config.openBrowser가 존재함을 알고 있습니다
writeHtmlReport(config.outputFile);
if (config.openBrowser) openInBrowser(config.outputFile);
break;
}
유효성 검사 코드가 없습니다. 런타임 검사도 없습니다. 새 리포터 타입을 추가하고 switch문에서 처리하는 것을 잊어버리면 컴파일러가 알려줍니다.
더 복잡한 예: 데이터베이스 연결
테스트 리포터는 좋은 예시지만, 더 다양한 변형이 있는 것을 시도해 봅시다. 데이터베이스 연결 문자열:
myapp --db=sqlite --file=./data.db
myapp --db=postgres --host=localhost --port=5432 --user=admin
myapp --db=mysql --host=localhost --port=3306 --user=root --ssl
각 데이터베이스 유형은 완전히 다른 옵션이 필요합니다:
- SQLite는 파일 경로만 필요합니다
- PostgreSQL은 호스트, 포트, 사용자, 그리고 선택적으로 비밀번호가 필요합니다
- MySQL은 호스트, 포트, 사용자가 필요하고 SSL 플래그가 있습니다
이를 모델링하는 방법은 다음과 같습니다:
import { conditional, object } from "@optique/core/constructs";
import { withDefault, optional } from "@optique/core/modifiers";
import { option } from "@optique/core/primitives";
import { choice, string, integer } from "@optique/core/valueparser";
const dbParser = conditional(
option("--db", choice(["sqlite", "postgres", "mysql"])),
{
sqlite: object({
file: option("--file", string()),
}),
postgres: object({
host: option("--host", string()),
port: withDefault(option("--port", integer()), 5432),
user: option("--user", string()),
password: optional(option("--password", string())),
}),
mysql: object({
host: option("--host", string()),
port: withDefault(option("--port", integer()), 3306),
user: option("--user", string()),
ssl: option("--ssl"),
}),
}
);
추론된 타입:
type DbConfig =
| ["sqlite", { file: string }]
| ["postgres", { host: string; port: number; user: string; password?: string }]
| ["mysql", { host: string; port: number; user: string; ssl: boolean }];
세부 사항을 주목하세요: PostgreSQL은 기본 포트가 5432, MySQL은 3306입니다. PostgreSQL은 선택적 비밀번호가 있고, MySQL은 SSL 플래그가 있습니다. 각 데이터베이스 유형은 필요한 옵션만 정확히 가지고 있습니다 - 더도 말고 덜도 말고.
이 구조에서는 모드가 sqlite일 때 dbConfig.ssl을 작성하는 것은 런타임 오류가 아니라 컴파일 타임에 불가능한 일입니다.
requires_if 속성으로 이것을 표현해 보세요. 할 수 없습니다. 관계가 너무 복잡합니다.
이 패턴은 어디에나 있습니다
한 번 보면, 많은 CLI 도구에서 이 패턴을 발견할 수 있습니다:
인증 모드:
const authParser = conditional(
option("--auth", choice(["none", "basic", "token", "oauth"])),
{
none: object({}),
basic: object({
username: option("--username", string()),
password: option("--password", string()),
}),
token: object({
token: option("--token", string()),
}),
oauth: object({
clientId: option("--client-id", string()),
clientSecret: option("--client-secret", string()),
tokenUrl: option("--token-url", url()),
}),
}
);
배포 대상, 출력 형식, 연결 프로토콜 - 다른 옵션의 유효성을 결정하는 모드 선택기가 있는 모든 곳에서 이 패턴을 볼 수 있습니다.
conditional()이 존재하는 이유
Optique에는 이미 상호 배타적인 대안을 위한 or() 컴비네이터가 있습니다. 왜 conditional()이 필요할까요?
or() 컴비네이터는 구조에 기반하여 분기를 구분합니다 - 어떤 옵션이 존재하는지에 따라 달라집니다. 이는 git commit과 git push와 같이 인수가 완전히 다른 하위 명령에 잘 작동합니다.
하지만 리포터 예제에서는 구조가 동일합니다: 모든 분기에 --reporter 플래그가 있습니다. 차이점은 플래그의 존재가 아니라 값에 있습니다.
// 이렇게 하면 의도한 대로 작동하지 않습니다
const parser = or(
object({ reporter: option("--reporter", choice(["console"])) }),
object({
reporter: option("--reporter", choice(["junit", "html"])),
outputFile: option("--output-file", string())
}),
);
--reporter junit을 전달하면, or()는 어떤 옵션이 존재하는지에 기반하여 분기를 선택하려고 합니다. 두 분기 모두 --reporter를 가지고 있으므로 구조적으로 구분할 수 없습니다.
conditional()은 먼저 구분자의 값을 읽은 다음 적절한 분기를 선택하여 이 문제를 해결합니다. 이는 구조적 파싱과 값 기반 결정 사이의 간극을 메웁니다.
구조가 제약 조건입니다
옵션을 느슨한 타입으로 파싱한 다음 관계를 검증하는 대신, 구조 자체가 제약 조건인 파서를 정의하세요.
| 전통적인 접근 방식 | Optique 접근 방식 |
|---|---|
| 파싱 → 검증 → 사용 | 파싱 (제약 조건 포함) → 사용 |
| 타입과 검증 로직이 별도로 유지됨 | 타입이 제약 조건을 반영함 |
| 불일치가 런타임에 발견됨 | 불일치가 컴파일 타임에 발견됨 |
파서 정의가 단일 진실 소스가 됩니다. 새 리포터 타입을 추가하시나요? 파서 정의가 변경되고, 추론된 타입이 변경되며, 컴파일러는 업데이트가 필요한 모든 곳을 보여줍니다.
시도해 보세요
이것이 여러분이 구축 중인 CLI와 공감된다면:
다음에 옵션 관계를 확인하는 if 문을 작성하려고 할 때, 이렇게 물어보세요: 파서가 이 제약 조건을 대신 표현할 수 있을까요?
파서의 구조가 제약 조건입니다. 검증 코드가 전혀 필요하지 않을 수도 있습니다.
Optique 0.8.0: Conditional parsing, pass-through options, and LogTape integration
We're excited to announce Optique 0.8.0! This release introduces powerful new features for building sophisticated CLI applications: the conditional() combinator for discriminated union patterns, the passThrough() parser for wrapper tools, and the new @optique/logtape package for seamless logging configuration.
Optique is a type-safe combinatorial CLI parser for TypeScript, providing a functional approach to building command-line interfaces with composable parsers and full type inference.
New conditional parsing with conditional()
Ever needed to enable different sets of options based on a discriminator value? The new conditional() combinator makes this pattern first-class. It creates discriminated unions where certain options only become valid when a specific discriminator value is selected.
import { conditional, object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { choice, string } from "@optique/core/valueparser";
const parser = conditional(
option("--reporter", choice(["console", "junit", "html"])),
{
console: object({}),
junit: object({ outputFile: option("--output-file", string()) }),
html: object({ outputFile: option("--output-file", string()) }),
}
);
// Result type: ["console", {}] | ["junit", { outputFile: string }] | ...
Key features:
- Explicit discriminator option determines which branch is selected
- Tuple result [discriminator, branchValue] for clear type narrowing
- Optional default branch for when the discriminator is not provided
- Clear error messages indicating which options are required for each discriminator value
The conditional() parser provides a more structured alternative to or() for discriminated union patterns. Use it when you have an explicit discriminator option that determines which set of options is valid.
See the conditional() documentation for more details and examples.
Pass-through options with passThrough()
Building wrapper CLI tools that need to forward unrecognized options to an underlying tool? The new passThrough() parser enables legitimate wrapper/proxy patterns by capturing unknown options without validation errors.
import { object } from "@optique/core/constructs";
import { option, passThrough } from "@optique/core/primitives";
const parser = object({
debug: option("--debug"),
extra: passThrough(),
});
// mycli --debug --foo=bar --baz=qux
// → { debug: true, extra: ["--foo=bar", "--baz=qux"] }
Key features:
- Three capture formats: "equalsOnly" (default, safest), "nextToken" (captures --opt val pairs), and "greedy" (captures all remaining tokens)
- Lowest priority (−10) ensures explicit parsers always match first
- Respects the -- options terminator in "equalsOnly" and "nextToken" modes
- Works seamlessly with object(), subcommands, and other combinators
This feature is designed for building Docker-like CLIs, build tool wrappers, or any tool that proxies commands to another process.
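As a sketch of the wrapper pattern—using Node's built-in child_process, with an illustrative tool name:
import { spawnSync } from "node:child_process";
import { run } from "@optique/run";

const { debug, extra } = run(parser);
if (debug) console.error("forwarding:", extra);

// Hand the captured, unrecognized options through to the wrapped tool.
const child = spawnSync("underlying-tool", extra, { stdio: "inherit" });
process.exit(child.status ?? 1);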
See the passThrough() documentation for usage patterns and best practices.
LogTape logging integration
The new @optique/logtape package provides seamless integration with LogTape, enabling you to configure logging through command-line arguments with various parsing strategies.
# Deno
deno add --jsr @optique/logtape @logtape/logtape
# npm
npm add @optique/logtape @logtape/logtape
Quick start with the loggingOptions() preset:
import { loggingOptions, createLoggingConfig } from "@optique/logtape";
import { object } from "@optique/core/constructs";
import { parse } from "@optique/core/parser";
import { configure } from "@logtape/logtape";
const parser = object({
logging: loggingOptions({ level: "verbosity" }),
});
const args = ["-vv", "--log-output=-"];
const result = parse(parser, args);
if (result.success) {
const config = await createLoggingConfig(result.value.logging);
await configure(config);
}
The package offers multiple approaches to control log verbosity:
- verbosity() parser: The classic -v/-vv/-vvv pattern where each flag increases verbosity (no flags → "warning", -v → "info", -vv → "debug", -vvv → "trace")
- debug() parser: Simple --debug/-d flag that toggles between normal and debug levels
- logLevel() value parser: Explicit --log-level=debug option for direct level selection
- logOutput() parser: Log output destination, with - for console or a file path for file output
See the LogTape integration documentation for complete examples and configuration options.
Bug fix: negative integers now accepted
Fixed an issue where the integer() value parser rejected negative integers when using type: "number". The regex pattern has been updated from /^\d+$/ to /^-?\d+$/ to correctly handle values like -42. Note that type: "bigint" already accepted negative integers, so this change brings consistency between the two types.
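A quick sanity check—a sketch reusing the import paths shown elsewhere in this post:
import { object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { integer } from "@optique/core/valueparser";
import { parse } from "@optique/core/parser";

const parser = object({ offset: option("--offset", integer()) });

// Rejected by integer() before 0.8.0; now parses successfully.
const result = parse(parser, ["--offset=-42"]);
if (result.success) console.log(result.value); // { offset: -42 }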
Installation
# Deno
deno add jsr:@optique/core
# npm
npm add @optique/core
# pnpm
pnpm add @optique/core
# Yarn
yarn add @optique/core
# Bun
bun add @optique/core
For the LogTape integration:
# Deno
deno add --jsr @optique/logtape @logtape/logtape
# npm
npm add @optique/logtape @logtape/logtape
# pnpm
pnpm add @optique/logtape @logtape/logtape
# Yarn
yarn add @optique/logtape @logtape/logtape
# Bun
bun add @optique/logtape @logtape/logtape
Looking forward
Optique 0.8.0 continues our focus on making CLI development more expressive and type-safe. The conditional() combinator brings discriminated union patterns to the forefront, passThrough() enables new wrapper tool use cases, and the LogTape integration makes logging configuration a breeze.
As always, all new features maintain full backward compatibility—your existing parsers continue to work unchanged.
We're grateful to the community for feedback and suggestions. If you have ideas for future improvements or encounter any issues, please let us know through GitHub Issues. For more information about Optique and its features, visit the documentation or check out the full changelog.
This is why #Rust is not only about memory safety/performance, but about correctness 👇
Did you know that in #JavaScript, `fetch`ing the same Request twice—or reading the same Response twice—will error (when it contains a body)!?
It is because bodies are Streams, which can only be read once! You can clone the Request/Response/Body, but this comes with its own caveats (in the worst case the clone will buffer the body fully in memory!).
#RustLang solves this by borrow checking and good API design...
1/3
Optique 0.7.0: Smarter error messages and validation library integrations
We're thrilled to announce Optique 0.7.0, a release focused on developer experience improvements and expanding Optique's ecosystem with validation library integrations.
Optique is a type-safe, combinatorial CLI argument parser for TypeScript. Unlike traditional CLI libraries that rely on configuration objects, Optique lets you compose parsers from small, reusable functions—bringing the same functional composition patterns that make Zod powerful to CLI development. If you're new to Optique, check out Why Optique? to learn how this approach unlocks possibilities that configuration-based libraries simply can't match.
This release introduces automatic “Did you mean?” suggestions for typos, seamless integration with Zod and Valibot validation libraries, duplicate option name detection for catching configuration bugs early, and context-aware error messages that help users understand exactly what went wrong.
“Did you mean?”: Automatic typo suggestions
We've all been there: you type --verbos instead of --verbose, and the CLI responds with an unhelpful “unknown option” error. Optique 0.7.0 changes this by automatically suggesting similar options when users make typos:
import { object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { parse } from "@optique/core/parser";

const parser = object({
verbose: option("-v", "--verbose"),
version: option("--version"),
});
// User types: --verbos (typo)
const result = parse(parser, ["--verbos"]);
// Error: Unexpected option or argument: --verbos.
//
// Did you mean one of these?
// --verbose
// --version
The suggestion system uses Levenshtein distance to find similar names, suggesting up to 3 alternatives when the edit distance is within a reasonable threshold. Suggestions work automatically for both option names and subcommand names across all parser types—option(), flag(), command(), object(), or(), and longestMatch(). See the automatic suggestions documentation for more details.
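For intuition, here's the rough shape of that logic—plain TypeScript, not Optique's actual implementation; the threshold and helper names are invented:
// Classic dynamic-programming edit distance.
function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Suggest up to three known names within an (invented) distance threshold.
function suggest(input: string, known: readonly string[], maxDistance = 2): string[] {
  return known
    .map((name) => [name, levenshtein(input, name)] as const)
    .filter(([, d]) => d <= maxDistance)
    .sort((x, y) => x[1] - y[1])
    .slice(0, 3)
    .map(([name]) => name);
}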
Customizing suggestions
You can customize how suggestions are formatted or disable them entirely through the errors option:
// Custom suggestion format for option/flag parsers
const portOption = option("--port", integer(), {
errors: {
noMatch: (invalidOption, suggestions) =>
suggestions.length > 0
? message`Unknown option ${invalidOption}. Try: ${values(suggestions)}`
: message`Unknown option ${invalidOption}.`
}
});
// Custom suggestion format for combinators
const config = object({
host: option("--host", string()),
port: option("--port", integer())
}, {
errors: {
suggestions: (suggestions) =>
suggestions.length > 0
? message`Available options: ${values(suggestions)}`
: []
}
});
Zod and Valibot integrations
Two new packages join the Optique family, bringing powerful validation capabilities from the TypeScript ecosystem to your CLI parsers.
@optique/zod
The new @optique/zod package lets you use Zod schemas directly as value parsers:
import { option, object } from "@optique/core";
import { zod } from "@optique/zod";
import { z } from "zod";
const parser = object({
email: option("--email", zod(z.string().email())),
port: option("--port", zod(z.coerce.number().int().min(1).max(65535))),
format: option("--format", zod(z.enum(["json", "yaml", "xml"]))),
});
The package supports both Zod v3.25.0+ and v4.0.0+, with automatic error formatting that integrates seamlessly with Optique's message system. See the Zod integration guide for complete usage examples.
@optique/valibot
For those who prefer a lighter bundle, @optique/valibot integrates with Valibot—a validation library with a significantly smaller footprint (~10KB vs Zod's ~52KB):
import { option, object } from "@optique/core";
import { valibot } from "@optique/valibot";
import * as v from "valibot";
const parser = object({
email: option("--email", valibot(v.pipe(v.string(), v.email()))),
port: option("--port", valibot(v.pipe(
v.string(),
v.transform(Number),
v.integer(),
v.minValue(1),
v.maxValue(65535)
))),
});
Both packages support custom error messages through their respective error handler options (zodError and valibotError), giving you full control over how validation failures are presented to users. See the Valibot integration guide for complete usage examples.
Duplicate option name detection
A common source of bugs in CLI applications is accidentally using the same option name in multiple places. Previously, this would silently cause ambiguous parsing where the first matching parser consumed the option.
Optique 0.7.0 now validates option names at parse time and fails with a clear error message when duplicates are detected:
const parser = object({
input: option("-i", "--input", string()),
interactive: option("-i", "--interactive"), // Oops! -i is already used
});
// Error: Duplicate option name -i found in fields: input, interactive.
// Each option name must be unique within a parser combinator.
This validation applies to object(), tuple(), merge(), and group() combinators. The or() combinator continues to allow duplicate option names since its branches are mutually exclusive. See the duplicate detection documentation for more details.
If you have a legitimate use case for duplicate option names, you can opt out with allowDuplicates: true:
const parser = object({
input: option("-i", "--input", string()),
interactive: option("-i", "--interactive"),
}, { allowDuplicates: true });
Context-aware error messages
Error messages from combinators are now smarter about what they report. Instead of generic "No matching option or command found" messages, Optique now analyzes what the parser expects and provides specific feedback:
// When only arguments are expected
const parser1 = or(argument(string()), argument(integer()));
// Error: Missing required argument.
// When only commands are expected
const parser2 = or(command("add", addParser), command("remove", removeParser));
// Error: No matching command found.
// When both options and arguments are expected
const parser3 = object({
port: option("--port", integer()),
file: argument(string()),
});
// Error: No matching option or argument found.
Dynamic error messages with NoMatchContext
For applications that need internationalization or context-specific messaging, the errors.noMatch option now accepts a function that receives a NoMatchContext object:
const parser = or(
command("add", addParser),
command("remove", removeParser),
{
errors: {
noMatch: ({ hasOptions, hasCommands, hasArguments }) => {
if (hasCommands && !hasOptions && !hasArguments) {
return message`일치하는 명령을 찾을 수 없습니다.`; // Korean
}
return message`잘못된 입력입니다.`;
}
}
}
);
Shell completion naming conventions
The run() function now supports configuring whether shell completions use singular or plural naming conventions:
run(parser, {
completion: {
name: "plural", // Uses "completions" and "--completions"
}
});
// Or for singular only
run(parser, {
completion: {
name: "singular", // Uses "completion" and "--completion"
}
});
The default "both" accepts either form, maintaining backward compatibility while letting you enforce a consistent style in your CLI.
Additional improvements
- Line break handling: formatMessage() now distinguishes between soft breaks (single \n, converted to spaces) and hard breaks (double \n\n, creating paragraph separations), improving multi-line error message formatting.
- New utility functions: Added extractOptionNames() and extractArgumentMetavars() to the @optique/core/usage module for programmatic access to parser metadata.
Installation
deno add --jsr @optique/core @optique/run
npm add @optique/core @optique/run
pnpm add @optique/core @optique/run
yarn add @optique/core @optique/run
bun add @optique/core @optique/run
For validation library integrations:
# Zod integration
deno add jsr:@optique/zod # Deno
npm add @optique/zod # npm/pnpm/yarn/bun
# Valibot integration
deno add jsr:@optique/valibot # Deno
npm add @optique/valibot # npm/pnpm/yarn/bun
Looking forward
This release represents our commitment to making CLI development in TypeScript as smooth as possible. The “Did you mean?” suggestions and validation library integrations were among the most requested features, and we're excited to see how they improve your CLI applications.
For detailed documentation and examples, visit the Optique documentation. We welcome your feedback and contributions on GitHub!
Optique 0.7.0: Smarter error messages and validation library integrations
We're thrilled to announce Optique 0.7.0, a release focused on developer experience improvements and expanding Optique's ecosystem with validation library integrations.
Optique is a type-safe, combinatorial CLI argument parser for TypeScript. Unlike traditional CLI libraries that rely on configuration objects, Optique lets you compose parsers from small, reusable functions—bringing the same functional composition patterns that make Zod powerful to CLI development. If you're new to Optique, check out Why Optique? to learn how this approach unlocks possibilities that configuration-based libraries simply can't match.
This release introduces automatic “Did you mean?” suggestions for typos, seamless integration with Zod and Valibot validation libraries, duplicate option name detection for catching configuration bugs early, and context-aware error messages that help users understand exactly what went wrong.
“Did you mean?”: Automatic typo suggestions
We've all been there: you type --verbos instead of --verbose, and the CLI responds with an unhelpful “unknown option” error. Optique 0.7.0 changes this by automatically suggesting similar options when users make typos:
const parser = object({
verbose: option("-v", "--verbose"),
version: option("--version"),
});
// User types: --verbos (typo)
const result = parse(parser, ["--verbos"]);
// Error: Unexpected option or argument: --verbos.
//
// Did you mean one of these?
// --verbose
// --version
The suggestion system uses Levenshtein distance to find similar names, suggesting up to 3 alternatives when the edit distance is within a reasonable threshold. Suggestions work automatically for both option names and subcommand names across all parser types—option(), flag(), command(), object(), or(), and longestMatch(). See the automatic suggestions documentation for more details.
Customizing suggestions
You can customize how suggestions are formatted or disable them entirely through the errors option:
// Custom suggestion format for option/flag parsers
const portOption = option("--port", integer(), {
errors: {
noMatch: (invalidOption, suggestions) =>
suggestions.length > 0
? message`Unknown option ${invalidOption}. Try: ${values(suggestions)}`
: message`Unknown option ${invalidOption}.`
}
});
// Custom suggestion format for combinators
const config = object({
host: option("--host", string()),
port: option("--port", integer())
}, {
errors: {
suggestions: (suggestions) =>
suggestions.length > 0
? message`Available options: ${values(suggestions)}`
: []
}
});
Zod and Valibot integrations
Two new packages join the Optique family, bringing powerful validation capabilities from the TypeScript ecosystem to your CLI parsers.
@optique/zod
The new @optique/zod package lets you use Zod schemas directly as value parsers:
import { option, object } from "@optique/core";
import { zod } from "@optique/zod";
import { z } from "zod";
const parser = object({
email: option("--email", zod(z.string().email())),
port: option("--port", zod(z.coerce.number().int().min(1).max(65535))),
format: option("--format", zod(z.enum(["json", "yaml", "xml"]))),
});
The package supports both Zod v3.25.0+ and v4.0.0+, with automatic error formatting that integrates seamlessly with Optique's message system. See the Zod integration guide for complete usage examples.
@optique/valibot
For those who prefer a lighter bundle, @optique/valibot integrates with Valibot—a validation library with a significantly smaller footprint (~10KB vs Zod's ~52KB):
import { option, object } from "@optique/core";
import { valibot } from "@optique/valibot";
import * as v from "valibot";
const parser = object({
email: option("--email", valibot(v.pipe(v.string(), v.email()))),
port: option("--port", valibot(v.pipe(
v.string(),
v.transform(Number),
v.integer(),
v.minValue(1),
v.maxValue(65535)
))),
});
Both packages support custom error messages through their respective error handler options (zodError and valibotError), giving you full control over how validation failures are presented to users. See the Valibot integration guide for complete usage examples.
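As a sketch of what a custom handler might look like, here we assume zodError is passed alongside the schema and receives the underlying ZodError; consult the integration guide for the exact signature:

import { message, option } from "@optique/core";
import { zod } from "@optique/zod";
import { z } from "zod";

// Hypothetical handler shape: receives the ZodError and returns an
// Optique message. The option placement is an assumption.
const emailOption = option("--email", zod(z.string().email(), {
  zodError: (error) =>
    message`Invalid email: ${error.issues[0]?.message ?? "validation failed"}`,
}));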
Duplicate option name detection
A common source of bugs in CLI applications is accidentally using the same option name in multiple places. Previously, this would silently cause ambiguous parsing where the first matching parser consumed the option.
Optique 0.7.0 now validates option names at parse time and fails with a clear error message when duplicates are detected:
const parser = object({
input: option("-i", "--input", string()),
interactive: option("-i", "--interactive"), // Oops! -i is already used
});
// Error: Duplicate option name -i found in fields: input, interactive.
// Each option name must be unique within a parser combinator.
This validation applies to object(), tuple(), merge(), and group() combinators. The or() combinator continues to allow duplicate option names since its branches are mutually exclusive. See the duplicate detection documentation for more details.
If you have a legitimate use case for duplicate option names, you can opt out with allowDuplicates: true:
const parser = object({
input: option("-i", "--input", string()),
interactive: option("-i", "--interactive"),
}, { allowDuplicates: true });
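That said, you rarely need allowDuplicates for mutually exclusive modes: restructuring with or() expresses the same intent while keeping the duplicate check for everything else. A quick sketch:

import { object, option, or, string } from "@optique/core";

// -i is reused across branches, which is fine: only one branch can match.
const parser = or(
  object({ input: option("-i", "--input", string()) }),
  object({ interactive: option("-i", "--interactive") }),
);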
Context-aware error messages
Error messages from combinators are now smarter about what they report. Instead of generic "No matching option or command found" messages, Optique now analyzes what the parser expects and provides specific feedback:
// When only arguments are expected
const parser1 = or(argument(string()), argument(integer()));
// Error: Missing required argument.
// When only commands are expected
const parser2 = or(command("add", addParser), command("remove", removeParser));
// Error: No matching command found.
// When both options and arguments are expected
const parser3 = object({
port: option("--port", integer()),
file: argument(string()),
});
// Error: No matching option or argument found.
Dynamic error messages with NoMatchContext
For applications that need internationalization or context-specific messaging, the errors.noMatch option now accepts a function that receives a NoMatchContext object:
const parser = or(
command("add", addParser),
command("remove", removeParser),
{
errors: {
noMatch: ({ hasOptions, hasCommands, hasArguments }) => {
if (hasCommands && !hasOptions && !hasArguments) {
return message`일치하는 명령을 찾을 수 없습니다.`; // Korean
}
return message`잘못된 입력입니다.`;
}
}
}
);
Shell completion naming conventions
The run() function now supports configuring whether shell completions use singular or plural naming conventions:
run(parser, {
completion: {
name: "plural", // Uses "completions" and "--completions"
}
});
// Or for singular only
run(parser, {
completion: {
name: "singular", // Uses "completion" and "--completion"
}
});
The default "both" accepts either form, maintaining backward compatibility while letting you enforce a consistent style in your CLI.
Additional improvements
- Line break handling: formatMessage() now distinguishes between soft breaks (a single \n, converted to a space) and hard breaks (a double \n\n, creating a paragraph separation), improving multi-line error message formatting (see the sketch below).
- New utility functions: Added extractOptionNames() and extractArgumentMetavars() to the @optique/core/usage module for programmatic access to parser metadata.
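Here is a sketch of the soft/hard break distinction. Import paths follow the facade convention used elsewhere in this post; formatMessage may actually live in a submodule:

import { formatMessage, message } from "@optique/core";

const note = message`This wraps onto\nthe same line.\n\nThis starts a new paragraph.`;

console.log(formatMessage(note));
// The single \n is rendered as a space; the \n\n produces a blank
// line separating the two paragraphs.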
Installation
deno add --jsr @optique/core @optique/run
npm add @optique/core @optique/run
pnpm add @optique/core @optique/run
yarn add @optique/core @optique/run
bun add @optique/core @optique/run
For validation library integrations:
# Zod integration
deno add jsr:@optique/zod # Deno
npm add @optique/zod # npm/pnpm/yarn/bun
# Valibot integration
deno add jsr:@optique/valibot # Deno
npm add @optique/valibot # npm/pnpm/yarn/bun
Looking forward
This release represents our commitment to making CLI development in TypeScript as smooth as possible. The “Did you mean?” suggestions and validation library integrations were among the most requested features, and we're excited to see how they improve your CLI applications.
For detailed documentation and examples, visit the Optique documentation. We welcome your feedback and contributions on GitHub!