Plan with AI.
Build with clarity.
You shouldn't have to re-explain your project every single time.
Free. Bring your own API key.
Add token counter to composer
Here's the implementation. Counts tokens client-side using tiktoken-lite.
// utils/tokens.ts
import { encode } from 'tiktoken-lite'

export function count(text: string) {
  return encode(text).length
}

// ~12k tokens (6%)
via Sonnet 4.6
Claude has no memory. Cursor has no strategy.
Pith has both.
The three things that change how you build.
Prompt enhancer
Type rough, hit Cmd+L. Pith rewrites your prompt for maximum AI clarity before sending.
Project memory
Your stack, decisions, conventions — saved once, injected into every single message automatically.
Multi-model, your keys
Claude, GPT-4o, Gemini, Groq, local Ollama. One interface, your own API keys, zero markup.
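To make "project memory" concrete, here is a minimal sketch of what injecting saved context into every outgoing message could look like. All names here (`ProjectMemory`, `buildMessages`, the message shape) are illustrative assumptions, not Pith's actual API.

```typescript
// Hypothetical sketch: per-project memory prepended to every request,
// so the model never starts from a blank slate.

interface ProjectMemory {
  stack: string[]        // e.g. ["Next.js", "Postgres"]
  conventions: string[]  // e.g. ["kebab-case file names"]
  decisions: string[]    // approved, logged decisions
}

interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

export function buildMessages(
  memory: ProjectMemory,
  userPrompt: string
): ChatMessage[] {
  // Flatten the saved memory into one system message.
  const context = [
    `Stack: ${memory.stack.join(', ')}`,
    `Conventions: ${memory.conventions.join('; ')}`,
    `Decisions: ${memory.decisions.join('; ')}`,
  ].join('\n')

  return [
    { role: 'system', content: context },
    { role: 'user', content: userPrompt },
  ]
}
```

The point of the pattern: context is written once and attached mechanically, so the user's prompt stays short.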
Three steps. Every session.
Set your context
Describe your stack once. Pith injects it forever.
Enhance your prompt
Rough idea in, precise prompt out. One shortcut.
Plan, then execute
Copy Claude's answer to Cursor. Ship.
Everything. No subscription.
Named sessions with history
Your planning conversations, organized and searchable. Never lose a decision again.
- Token counter design
- Onboarding flow
- Voice input fix
- Pricing page copy
Always-on rules
Injected into every response in this project.
Local Ollama support
Desktop app with free inference. No API costs. Your machine, your models.
Session export
Save any conversation as markdown.
Token counter
Know your context usage.
Voice input
Speak your prompt. Transcribed by Whisper.
Multi-project
Separate context, rules, and history per project.
- overnight.so
- New idea
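As an illustration of the session-export feature above ("save any conversation as markdown"), a serializer could look roughly like this sketch. The `Session` and `Message` shapes are hypothetical, not Pith's real types.

```typescript
// Hypothetical sketch: turn a chat session into a markdown document.

interface Message {
  role: 'user' | 'assistant'
  content: string
}

interface Session {
  name: string
  messages: Message[]
}

export function toMarkdown(session: Session): string {
  const header = `# ${session.name}\n`
  // One section per message, separated by horizontal rules.
  const body = session.messages
    .map((m) => `**${m.role}**\n\n${m.content}`)
    .join('\n\n---\n\n')
  return `${header}\n${body}\n`
}
```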
Built for where this is going.
AI tools are about to get much more powerful. Pith is being built to grow with that.
GitHub integration
Pull your actual repo context automatically. No more pasting architecture docs.
Team memory
Shared project brain for engineering teams. One update, everyone's AI gets smarter.
Decision capture
When Claude recommends something and you approve it, Pith saves it as a logged decision. Searchable. Referenced forever.
Everyone has a plan.
Nobody saves it.
Pith remembers everything
so you don't have to.
Because context rot kills momentum.
Questions, answered.
Where does my API key go?
It goes to your own server (Vercel) to make the API call, then to Anthropic/Google/OpenAI directly. Never stored, never logged.
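The flow described above (browser → your Vercel function → provider) might look roughly like this sketch of a key-forwarding route. The route shape is an assumption for illustration, not Pith's actual code; the Anthropic endpoint and headers are the real public ones.

```typescript
// Hypothetical sketch: bring-your-own-key proxy. The key arrives with each
// request, is forwarded to the provider, and is never persisted or logged.

export async function POST(req: Request): Promise<Response> {
  const { apiKey, body } = await req.json()

  // Forward directly to the provider; the key lives only in this request.
  const upstream = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': apiKey, // user's own key, never stored
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json',
    },
    body: JSON.stringify(body),
  })

  // Stream the provider's response straight back to the browser.
  return new Response(upstream.body, { status: upstream.status })
}
```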
How is this different from Claude.ai?
Pith saves your project context, rules, and session history permanently. Claude.ai starts fresh every time.
Does it work offline?
Yes, with Ollama. Pull a local model and plan without any internet connection or API costs.
Can I switch between models?
Yes. Pith supports Claude, GPT-4o, Gemini 2.5, Groq Llama, Grok, and local Ollama. Swap mid-conversation.
Is there a version for teams?
Coming soon. Shared project context for engineering teams. Join the waitlist.
Stop explaining your project from scratch.
Set it once. Every conversation knows everything.
Open Pith
Free. Your own API key. No subscription.