· ai · tools + guides
AI, in one place.
Every AI-adjacent thing on Briskly — the LLM token counter, the prompt caching calculator, the pricing comparisons, and the guides on system prompts, MCP servers, and picking between Claude, ChatGPT, and Gemini. All free, all built for practitioners who actually ship.
Guides.
· long-form · honest
· guide · 2026-04-21
Claude vs ChatGPT vs Gemini
An honest side-by-side of the three flagship LLMs. Capability table across 12 dimensions, pricing at every tier, and five scenario-specific picks for when each one is actually the right choice.
open →
· guide · 2026-04-21
Claude pricing in 2026, compared
Current Claude API rates (Opus 4.7 at $5/$25, Sonnet 4.6 at $3/$15, Haiku 4.5 at $1/$5) benchmarked against GPT-5.4, Gemini 3.1 Pro, and Llama 4 — with per-workload math for chatbots, RAG, and coding agents.
open →
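The per-workload math in that comparison reduces to one rate formula. A minimal TypeScript sketch, using the Sonnet 4.6 rates quoted above ($3 in / $15 out per million tokens); the function name and the example traffic numbers are illustrative, not taken from the guide:

```typescript
// Monthly API cost in USD, given per-million-token rates.
function monthlyCostUSD(
  inputTokens: number,   // total prompt tokens for the month
  outputTokens: number,  // total completion tokens for the month
  inPerM: number,        // $ per 1M input tokens
  outPerM: number,       // $ per 1M output tokens
): number {
  return (inputTokens / 1e6) * inPerM + (outputTokens / 1e6) * outPerM;
}

// Example: a chatbot pushing 1M prompt tokens and 200k completion
// tokens through Sonnet 4.6 ($3/$15) costs $3 + $3 = $6 for the month.
const sonnetMonthly = monthlyCostUSD(1_000_000, 200_000, 3, 15);
```

Swap in the Opus 4.7 ($5/$25) or Haiku 4.5 ($1/$5) rates to see why workload shape, not headline price, usually decides the winner.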
· guide · 2026-04-21
Prompt caching explained
How Claude (90% off reads), OpenAI (up to 90% automatic), and Gemini (~75% with storage fees) implement prompt caching. Includes a live calculator that models your monthly savings under each provider.
open →
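The core of a savings model like that calculator's is small. A hedged sketch: assume some fraction of each month's input tokens is served from cache at a discounted read rate, with misses paying full price. The 90% Claude read discount comes from the description above; the function name and example numbers are mine, and the sketch ignores cache-write premiums and Gemini's storage fees, which a full calculator would model:

```typescript
// Monthly input cost with prompt caching: cache hits are billed at
// (1 - readDiscount) × the normal input rate; misses pay full price.
function cachedInputCostUSD(
  totalInputTokens: number,
  cacheHitRatio: number, // fraction of input tokens served from cache, 0..1
  inPerM: number,        // $ per 1M input tokens, uncached
  readDiscount: number,  // e.g. 0.9 for a 90%-off cache read
): number {
  const hit = totalInputTokens * cacheHitRatio;
  const miss = totalInputTokens - hit;
  return ((hit * (1 - readDiscount) + miss) / 1e6) * inPerM;
}

// Example: 10M input tokens/month at $3/M, 80% hit rate, 90% discount:
// 8M tokens at $0.30/M ($2.40) + 2M at $3/M ($6) = $8.40, vs $30 uncached.
const withCaching = cachedInputCostUSD(10_000_000, 0.8, 3, 0.9);
```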
· guide · 2026-04-21
System prompts, explained
The instruction you give an LLM before the user's first message. Anatomy (role, context, behavior, examples), a copy-paste template, anti-patterns that break output, and how each provider interprets it.
open →
· guide · 2026-04-21
MCP server primer
What a Model Context Protocol server actually is, how to plug one into Claude Desktop in three steps, and how to ship your own to npm in an afternoon.
open →
· guide · 2026-04-20
Claude Design: first look
A working developer's review of Anthropic's Claude Design brand-system generator — what it does well, what it doesn't, and how Briskly used it to build this site's actual visual identity.
open →
Why a whole AI corner on a small-tools site
Briskly started as a small-tools studio — invoicing, rate calculators, email signatures. The AI corner grew out of one observation: every resource about working with LLMs reads either like a product pitch (provider blogs, launch posts) or like a LinkedIn hype tour ("I tried GPT-5.4 and you won't believe…"). The practical, vendor-neutral, actually-useful version was missing.
What you'll find here: current API pricing that isn't marketing-washed, calculators that run actual math on your workload, and comparison pieces that name the model that wins each specific task rather than hand-waving about "it depends." Everything is written after using the tools on real projects, not after reading the press release.
The LLM token counter is the central hub. It runs exact tokenization for OpenAI models via the bundled gpt-tokenizer library, and approximate counts (within roughly 5-10%) for Claude, Gemini, and Llama, whose tokenizers aren't browser-compatible. Most of the guides cross-link to it because counting tokens is the first step of almost every AI-cost decision.
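For the models without a browser-compatible tokenizer, a counter like this has to estimate. The widely used rule of thumb is roughly four characters per token for English prose, which is the kind of heuristic that lands in a ~5-10% error band; whether Briskly's counter uses exactly this formula is an assumption, so treat the sketch as illustrative:

```typescript
// Rough token estimate for English text: ~4 characters per token.
// Exact counts come from a real tokenizer (e.g. the bundled
// gpt-tokenizer for OpenAI models); this is the fallback heuristic
// for models whose tokenizers can't run in the browser.
function approxTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}
```

The ratio drifts for code, non-English text, and whitespace-heavy input, which is exactly why an exact tokenizer is worth bundling where one exists.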
Updates
Pricing, model versions, and provider features change constantly — we try to update within a week when a major change lands (e.g., Claude Opus dropping from $15/$75 to $5/$25 in early 2026, or the GPT-5.4 family release). Everything on this page shows its last-updated date.
Missing something? Email us — the next guide is usually whatever a reader asked for twice in the same week.