Zero dependencies · Runs entirely local · Free

cc-toolkit

106 free tools to understand what's really happening in your Claude Code sessions.

▸ TRY IT NOW
# Zero-install — open in browser, select ~/.claude
CC Wrapped → CC Score → CC Streak → CC Compare → Project Stats → CC Calendar → CC Roast → Health Check → Context Check →
# CLI tools — zero dependencies, just run
npx cc-session-stats    # How many hours? Which projects?
npx cc-agent-load       # You vs AI time split
npx cc-context-check    # How full is your context window?
npx cc-personality      # Your Claude Code personality type
106
Free tools
0
Dependencies
100%
Local — no data leaves your machine
THE DATA BEHIND THIS TOOLKIT
60+ days · 3,580 sessions · 142.3 hours · 40 Ghost Days
The autonomous AI experiment that made these tools necessary.
Read the story →
READY TO GO BEYOND ANALYSIS?
Claude Code Production Hooks Kit
16 hooks · 5 templates · 3 tools. Production-ready in 15 minutes. From 160+ hours of autonomous operation.
Hooks Kit →
Free quick start: npx cc-safe-setup — 8 hooks in 10 seconds
cc-session-stats
How much time do you actually spend with Claude Code?
npx cc-session-stats
Total hours, active days, streaks, top projects, day-of-week patterns, health warnings.
NEW
cc-live
Watch your active session: tokens, burn rate, cache hit rate.
npx cc-live
Real-time session monitor. Shows input/output/cache token breakdown, estimated cost at API rates, burn rate (tokens/min), and cache efficiency. Updates every 5s.
NEW
cc-context-check
How full is your Claude Code context window right now?
npx cc-context-check
Reads token usage from your ~/.claude session files and shows a color-coded progress bar: green (<70%), yellow (70–85%), red (>85% — time to /compact). Displays input tokens, cache tokens, remaining capacity, and last-active time across all your Claude Code projects.
NEW
cc-file-churn
Which files does Claude Code touch the most?
npx cc-file-churn
Scans all session transcripts, counts Edit/Write/Read/Grep calls per file, and ranks them. Find your highest-churn files — the ones the AI keeps coming back to. Options: --writes (most-modified), --reads (most-referenced), --days=7 (recent only), and a --project filter.
#57
cc-plan
How often does Claude Code use plan mode?
npx cc-plan
Plan mode adoption rate, cycles per session, and monthly trend. 10.9% of sessions enter plan mode — and when they do, the average session goes through 7.4 plan cycles. See which projects trigger structured planning most.
NEW
cc-fail
Which tools fail most in your Claude Code sessions?
npx cc-fail
Tool failure rates across 144K+ calls. Bash exits, WebFetch errors, token limits, permission denials. See which project has the highest failure rate and which session was the worst.
NEW
cc-output
How much text did Claude Code generate for you?
npx cc-output
Total output tokens across all sessions — converted to words, pages, and novel equivalents. 20M tokens = 192 novels. See your most verbose sessions, output by project, and monthly trends.
NEW
cc-bash
Which shell commands does Claude Code run most?
npx cc-bash
Breakdown of all 54K Bash calls — ranked by frequency. Highlights anti-patterns: cat (5,175×), ls (4,735×), and grep (4,498×) all have dedicated tool alternatives. 31% of Bash calls could use Read/Glob/Grep instead.
NEW
cc-streak
How long can Claude go without an error?
npx cc-streak
Median 12 successful tool calls between errors. Mean 22.3, max 829. Bash breaks 52% of streaks. 6,221 streaks across 1,994 sessions.
#105
cc-recovery
How does Claude recover from its own errors?
npx cc-recovery
99% self-recovery rate across 6,512 errors. Retry 55%, investigate 28%, fix 14%. 62% of retries thrash 3+ times. Track recovery patterns by tool.
#104
cc-save
How much money has Claude's cache saved you?
npx cc-save
$59.7K saved by prompt caching (86% of hypothetical bill). 22.1B cache-read tokens at 10× discount. 97% hit rate across 519 sessions. Supports custom pricing per model.
#103
cc-context
How does Claude's context window grow within a session?
npx cc-context
471 sessions. Median peak 90K tokens (45% of 200K window). 35% push past 150K — approaching compaction territory. Cache reads 22.1B tokens (97% of all input).
#102
cc-pulse
What's the rhythm of a Claude Code session?
npx cc-pulse
521 sessions, 887K gaps. 81% instant (<2s), 17% quick (2–15s). Median gap 1s. 98% of events fire within 15s — Claude Code runs in near-continuous bursts.
#101
cc-checkin
When does your human check in during a session?
npx cc-checkin
162 sessions: 30% autonomous (zero follow-ups). Of 939 check-ins: 43% happen in the last third — humans review results, not process. Median trust: 9min. Max solo run: 15.8h.
#100
cc-warmup
Does Claude Code warm up or fade within a session?
npx cc-warmup
330 sessions: 60% fade by end, 26% warmup. Early avg 195 tools/hr vs late 121. Median ratio 0.64× — context fills, pace drops. Top sessions 18× faster by end.
#99
cc-speed
How fast does Claude Code actually work?
npx cc-speed
393 sessions analyzed. Median 99 tools/hr — that's ~2 tool calls per minute. p90: 410/hr. Max burst: 1,435/hr. 28% slow, 38% medium, 30% fast, 3% burst.
#98
cc-think
How deeply does Claude think before acting?
npx cc-think
52.8% of sessions use thinking blocks. 54.1K blocks total — 25.2M chars of hidden reasoning. 48% brief, 37% medium, 3% deep (2,000+ chars). Median 23 blocks/session, max 4,504.
#97
cc-human
What does your human actually do during a session?
npx cc-human
35.5% pure-autonomous sessions — human writes once, CC does the rest. 64.5% interactive. Follow-ups: 69% brief direction, 26% ack, 4% big briefing. Median follow-up: 42 chars.
#96
cc-denied
Every Bash command your human said NO to
npx cc-denied
153 denials. 0.141% denial rate. 100% Bash — you never denied Read, Edit, or Grep. pkill was #1 at 73.9%. One rm was blocked. 95% of sessions approved everything.
#95
cc-error
Which Claude Code tools fail most often?
npx cc-error
54% of sessions hit at least one error. WebFetch fails 25% of the time — blocked URLs, auth walls, timeouts. Bash: 6.1% across 54K calls. Overall 4.5% error rate on 143K tool calls.
#94
cc-text
How much does Claude actually say?
npx cc-text
73% of assistant turns produce no visible text — 54% are tool-only, 20% are silent thinking. When Claude does speak, 47% of those turns are under 50 chars. Only 3% are essays.
#93
cc-scope
How many files does Claude touch per session?
npx cc-scope
43% of sessions touch 2–5 files. 14% are single-file. Only 4% sweep 31+ files. Median is 4 files, p90 is 16. Max seen: 1,288. Measure the blast radius of each Claude Code session.
#92
cc-bash-type
What kind of Bash commands does Claude actually run?
npx cc-bash-type
Shell utils dominate at 31% (sleep, echo, export). Inspect is 25% — cat/grep/ls are the real workhorses. Execute comes third at 15%. Git is a consistent 8%. Classify 54K+ Bash calls by intent.
#91
cc-turns
How many times do you intervene per session?
npx cc-turns
28% of sessions are fire-and-forget (1 user message). 42% need 2–3 turns. Sessions with 15+ turns average 182 tool calls vs 18 for single-turn. Median: 3 turns.
#90
cc-delta
How big are Claude's edits?
npx cc-delta
57% of edits are surgical (<100 chars). Massive rewrites are only 3.6% but dominate total changed content. Claude's median edit expands code by 1.12×. Markdown grows most.
#89
cc-multi
How many tools does Claude call per turn?
npx cc-multi
Glob calls are parallel 68% of the time. Edit stays solo 92% of the time. 35.8% of all tool-using turns call 2+ tools simultaneously. Top pair: Read+Read.
#88
cc-when
When in a session does each tool appear?
npx cc-when
Glob fires in the first 20% of sessions (median). ExitPlanMode waits until 72%. Bash/Read/Edit are flat throughout. Timeline visualization of 1,835 sessions.
#87
cc-mix
How many tools does a session mix?
npx cc-mix
3 is the magic number: 29.7% of sessions use exactly 3 tools. By the halfway mark, 61% have decided their full toolset. Solo: Bash 53%, Read 25%, WebSearch 20%.
#86
cc-ratio
What's your Read:Edit ratio?
npx cc-ratio
Read:Edit median 1.9× — balanced zone is the norm. Write:Edit 0.5× — you modify 2× more than you create. Bash:Grep bimodal: 36% search-heavy, 35% run-heavy. Bash max 447×.
#85
cc-pair
Which tools always travel together?
npx cc-pair
Bash+Read: 68% of sessions. TaskCreate+TaskUpdate: 13.63× affinity — inseparable. Edit never appears without Read (100%). WebSearch leads to WebFetch (88%). 1,837 sessions analyzed.
#84
cc-sequence
What follows what follows what?
npx cc-sequence
Edit→Read→Edit: 67% of reads lead back to editing. Bash→Read→Bash: the debugging ping-pong. Grep→Read→Edit: the fastest path to a fix. 142K trigrams analyzed.
#83
cc-arc
Does the Explore→Code→Verify arc exist?
npx cc-arc
No — Bash stays flat at 37% throughout. Glob is the only front-loaded tool (-2.4%). 35% of sessions never change mode. The arc is a myth. 1,595 sessions analyzed.
#82
cc-burst
How deep does each tool go before switching?
npx cc-burst
Bash × 275 is the all-time record. WebSearch averages 3.6 per burst (deepest!). TodoWrite never bursts (mean 1.03). 65% of all calls stand alone. 72K bursts analyzed.
#81
cc-mode
What kind of work does each session do?
npx cc-mode
EXECUTE dominates at 41%. CODE sessions are longest (median 40 calls). RESEARCH is rare (2%). Classify your sessions into 7 modes: EXECUTE, MIXED, READ, EXPLORE, CODE, RESEARCH, PLAN.
#80
cc-flow
What tool follows what in your sessions?
npx cc-flow
Bash→Bash 70% (momentum). Write→Bash 56% (verify immediately). Grep→Read 44% (search then inspect). Read is the hub — everything flows through it. 144K transitions analyzed.
#79
cc-toolbox
How many tools does Claude Code reach for?
npx cc-toolbox
Median session uses 3 distinct tool types. Read appears in 82.4% of sessions. Bash+Grep+Read is the canonical trio (16.1%). 61.7% of sessions are "focused" (≤3 types). 4.2% are marathon sessions (11+ types).
#78
cc-last
How does Claude Code close a session?
npx cc-last
Bash→Bash→Bash (18.6%) is the #1 closing triple. Read (36%) and Bash (34%) dominate endings. Bash rises from 28% (opener) to 34% (closer) — more action as sessions progress. Companion to cc-first.
#77
cc-first
What does Claude Code reach for first?
npx cc-first
Read (38%) is the #1 first tool. Bash (28%) second, Glob (17%) third. Top opening triple: Read→Read→Read (278 sessions). Context-gathering before action is the dominant pattern.
#76
cc-python
How does Claude Code use Python?
npx cc-python
3,790 python3 calls across 288 sessions. 55.9% inline (-c flag) — Python as a shell scripting language. py_compile runs 703 times as a quality gate. pip called only 5 times.
#75
cc-git
What git commands does Claude Code run?
npx cc-git
2,249 git commands across 187 sessions. git add is #1 (27.7%). git log (323) beats git status (162) — Claude reads history before acting. 2.0× more pushes than commits.
#74
cc-cmds
What shell commands does Claude Code run most?
npx cc-cmds
54,636 Bash calls across 1,484 sessions. cat is #1 at 5,198 calls. grep runs 4,516 times despite a dedicated Grep tool. sleep called 3,822 times. 42.4% are shell utilities.
#73
cc-tasks
How does Claude Code manage agent tasks?
npx cc-tasks
1,042 TaskCreate calls. 97.2% completion rate — 38pp higher than TodoWrite (59.2%). TaskUpdate 2,111 times. Max 34 tasks in a single session. Agent task lists are highly actionable.
#72
cc-glob
What file patterns does Claude Code search for?
npx cc-glob
2,810 Glob calls. 1,582 unique patterns. 79.4% recursive (**). 19.5% specific filenames. Top patterns: **/*.py (120), **/*.gd (113), **/dungeon_game.py (87). Always searching subdirectories.
#71
cc-grep
How does Claude Code search code?
npx cc-grep
13,162 Grep calls. 92.8% content mode — reads actual matching lines. 98.6% path-scoped. 28.6% use context lines (-C/-A/-B). .py (3%), .gd (1.8%), and .md (0.8%) are the most-searched file types.
#70
cc-edit
Which files does Claude Code edit most?
npx cc-edit
17,995 edits across 905 unique files. Hall of Fame: dungeon_game.py (1,957), dungeon.gd (1,666), index.html (1,441). Growth ratio 2.00× — new code twice as long as replaced. .gd 34%, .py 21%, .md 20%.
#69
cc-todo
How does Claude Code manage tasks?
npx cc-todo
12,067 tasks tracked across 1,786 todo lists. 59.2% completion rate. Avg 6.8 tasks per list. One session tracked 2,841 task snapshots. Breakdown: 59% done, 13% in-progress, 28% pending.
#68
cc-write
What does Claude Code create? File types, sizes, and projects.
npx cc-write
.md leads at 34.6%. .ps1 at 11.6% — all PowerShell scripts for WSL2 CDP automation. 5,803 Write calls, 24.4 MB created. Size distribution: 65.8% medium (1K–10K). Largest file: index.html at 88.1 KB.
#67
cc-read
Which files does Claude Code read most?
npx cc-read
dungeon_game.py was read 6,723×. .py leads at 34.3% of all 36,973 Read calls. Hall of Fame: 4 files with 1,000+ reads each. By extension, project, and Hall of Fame ranking.
#66
cc-time
When does Claude Code work? Hourly and daily activity heatmap.
npx cc-time
Peak activity: 10 PM JST. Evening (18–23) dominates at 37.8% of all 1.3M events. Monday is busiest. Hourly bar chart, day-of-week breakdown, night-owl vs early-bird verdict. Auto-detects your timezone.
#65
cc-fetch
What sites does Claude Code browse?
npx cc-fetch
Domain breakdown of 1,522 WebFetch calls across 363 unique sites. github.com #1 (208), itch.io #2 (157). Categories: Code/Docs 26%, Game/Assets 19%, Publishing 13%, Marketing 7%.
#64
cc-search
What does Claude Code search the web for?
npx cc-search
Topic breakdown of 2,407 WebSearch calls — 40% Game Dev, 22% Claude/AI, 3% Monetize. See top keywords, by project, and a live feed of recent queries. Reveals where Claude focuses its research.
#63
cc-ask
How often does Claude Code ask for help vs just doing things?
npx cc-ask
Measure your autonomy rate — the % of sessions where Claude asked zero questions. 98.8% of sessions operated autonomously. Shows distribution, hottest sessions, and which projects triggered the most check-ins.
#62
cc-lang
Which programming languages does Claude Code work with most?
npx cc-lang
Count every Edit and Write tool call by language. See your primary language, edits vs new files breakdown, and the edit:new ratio — GDScript at 13.8:1 means deep iteration; JavaScript at 0.2:1 means mostly scaffolding new files.
#61
cc-reread
Which files does Claude Code read over and over?
npx cc-reread
Find re-read patterns across 37K Read calls. One session re-read dungeon_game.py 928 times. See your most-read files globally, hottest sessions, and how many sessions hit 100+ re-reads of a single file.
#56
cc-web
How often does Claude Code use web search?
npx cc-web
WebSearch vs WebFetch split, session adoption rate, and per-project breakdown. 87% of sessions run entirely offline — but when web kicks in, the average session makes 25.6 calls. One session made 794 web calls.
#55
cc-subagent
How many subagents does your Claude Code spawn?
npx cc-subagent
Subagent adoption rate, total count, peak sessions, and per-project breakdown. 42.9% of sessions spawn at least one subagent — and one session spawned 284.
#54
cc-compact
How often does Claude Code hit the context limit?
npx cc-compact
Compaction frequency, pre-compaction token counts, and trigger breakdown (auto vs manual /compact). See what percentage of your sessions hit the 200K context limit and at exactly what token count compaction kicks in. 95% of compactions happen at 150–175K tokens.
#53
cc-cache
How effective is Claude Code's prompt caching?
npx cc-cache
Cache hit ratio and illustrative API cost savings across your session history. Shows token breakdown (fresh input / cache write / cache read / output), monthly efficiency chart, and per-project breakdown. Typical users see 90%+ hit ratio — saving $100K+ at API rates.
#52
cc-size
How much conversation history have you accumulated?
npx cc-size
Total disk usage and growth rate of your Claude Code session files. Shows monthly chart, per-project breakdown, and projects how long until you hit 10 GB. Reads file metadata only — no content access, instant results.
#51
cc-tools
Which tools does your Claude Code AI call most?
npx cc-tools
Distribution of tool usage across your entire session history. See how many times Read, Bash, Edit, Grep, and 40+ other tools were called. Use --categories for a grouped view: Explorer (Read+Grep+Glob), Executor (Bash), Builder (Edit+Write), Orchestrator, and more.
#50
cc-model
Which Claude AI models power your sessions?
npx cc-model
Distribution of Opus, Sonnet, and Haiku usage across your sessions, plus a week-by-week timeline showing exactly when you switched models. See whether you're running 84% Opus like a power user or mixing models strategically.
#49
cc-depth
How many turns per Claude Code session?
npx cc-depth
Shows the distribution of conversation depth — how many user messages occur per session. Classifications: Quick Prompter, Task Completer, Collaborative Coder, or Loop Runner. Reveals whether you use Claude Code for one-shot queries or extended back-and-forth.
#48
cc-momentum
Is your Claude Code usage growing or declining?
npx cc-momentum
Week-by-week session count chart with trend classification: Accelerating, Growing, Stable, Declining, or Sharply Declining. Excludes the current (incomplete) week from trend calculation. Shows peak week and overall total.
#47
cc-gap
Time between your Claude Code sessions
npx cc-gap
Gap distribution between consecutive sessions — from instant compaction restarts to multi-day breaks. Classifies your work rhythm: Always On, Rapid Cycler, Steady Pauser, Daily Worker, or Weekend Coder.
#46
cc-session-length
How long are your Claude Code sessions?
npx cc-session-length
Duration distribution of all your sessions — from 30-second compaction restarts to 100-hour marathon runs. Classifies you as Quick Iterator, Balanced Worker, Deep Worker, or Marathon Coder. Shows median, average, P90, and longest session.
#45
cc-night-owl
When does Claude Code actually work?
npx cc-night-owl
Shows which hours of the day your Claude Code sessions start. Night owl score: % of sessions between 22:00–05:59. Color-coded by period (night/morning/afternoon/evening). Discover if your AI is a midnight coder. Options: --days=30, --utc, --json.
NEW
cc-tool-mix
Which tools does Claude use most in your sessions?
npx cc-tool-mix
Break down your tool usage across all sessions: Bash, Read, Edit, Grep, Task, and more. Category breakdown (File Ops / Shell / Search / AI Agents), top-20 tools ranked, and actionable insights like "Heavy Bash user — consider Grep/Glob for searches."
NEW
cc-model-selector
Task complexity → Claude model recommendation.
npx cc-model-selector
Decision matrix: which tasks need Opus, Sonnet, or Haiku? Shows model pricing and cost multipliers (Opus is 5× Sonnet), and analyzes your 30-day session history to suggest optimizations.
NEW
cc-skill-audit
Audit your Claude Code skills: token overhead and prune candidates.
npx cc-skill-audit
Scans ~/.claude/skills/ and session transcripts. Shows skill sizes, 30-day usage frequency, and prune candidates (large skills with zero usage). Helps keep your context window clean.
NEW
cc-cost-forecast
Project your Claude Code spend to month-end.
npx cc-cost-forecast
Month-end cost forecaster. Shows today/week/month spend, daily average, projected month-end total, and compares against Max plan tiers ($20/$100/$200/$400). 30-day sparkline included.
NEW
cc-agent-load
How much of your usage is YOU vs AI subagents?
npx cc-agent-load
Splits interactive sessions from autonomous subagent runs. Shows autonomy ratio, Ghost Days, and a GitHub-style activity calendar.
NEW
cc-daily-report
What did your AI do today? Ghost Day report + tweet text.
npx cc-daily-report 2026-02-09 (npm pending)
Shows AI hours, projects active, and generates a tweet-ready summary for Ghost Days — days when AI ran while you were offline. Reads proof-log for per-project detail when available.
NEW
cc-compare
This week vs last week. Are you more or less active than before?
npx cc-compare (npm pending)
Period-over-period comparison of your Claude Code activity. Shows your hours delta, AI hours delta, autonomy ratio trend, Ghost Days change, and active days — with visual bars and percentage changes.
NEW
cc-project-stats
Where's your Claude Code time going? Hours ranked by project.
npx cc-project-stats --days=30 (npm pending)
Breaks down your Claude Code time by project — ranked by total hours, split between interactive (you) and autonomous (AI). Filters by any time range. Uses the same session-file methodology as cc-agent-load.
NEW
cc-score
Your AI Productivity Score. 0–100. Shareable.
npx cc-score --share (npm pending)
Combines consistency, autonomy ratio, Ghost Days, session volume, and streak into a single score. S-rank = Cyborg. Includes tweet-ready share text for your score.
NEW
cc-calendar
GitHub-style calendar: YOU vs AI, day by day.
npx cc-calendar (npm pending)
Contribution graph in your terminal. Two rows — cyan for YOU, yellow for AI. Ghost Days glow bright: the days AI kept working while you rested. Zero dependencies.
NEW
cc-ghost-log
What did your AI commit while you were gone?
npx cc-ghost-log
Shows git commits from Ghost Days — days when AI ran autonomously while you had zero interactive sessions. Your AI's work diary.
cc-personality
What kind of Claude Code developer are you?
npx cc-personality
Analyzes your session patterns and assigns you a developer archetype — Architect, Sprinter, Overnight Builder, and more.
cc-wrapped
Your Claude Code year in review. No install needed.
Open in browser → select ~/.claude
Spotify Wrapped–style visualization of your Claude Code usage. Browse your .claude folder directly — no npm required. 7 animated slides, 8 personality types, shareable stats card.
cc-roast
Your CLAUDE.md, brutally honest.
Open in browser → pick CLAUDE.md
Pick your CLAUDE.md file directly — no paste needed. Get a scored verdict: verbosity, strictness, complexity, thoroughness. Roasts and compliments included.
cc-bingo
How many of these have happened to you?
50 relatable Claude Code moments, randomized into a 5×5 bingo card. Click to mark, get bingo, share the pain.
cc-cost-check
Know what your AI costs per commit.
Interactive cost calculator. See cost per hour, per commit, per day — and compare against hiring a junior or senior developer.
cc-health-check
Is your Claude Code setup production-ready?
npx cc-health-check
20-question diagnostic across 6 dimensions — Safety, Quality, Monitoring, Recovery, Autonomy, Coordination. Instant score with actionable fixes.
NEW
cc-weekly-report
What did your AI accomplish this week?
npx cc-weekly-report
Reads your proof-log files and generates a weekly Markdown summary — sessions, hours, lines changed, top projects, daily breakdown. Ready to publish.
NEW
cc-standup
Your AI writes your daily standup. Paste it into Slack.
npx cc-standup --format slack (npm pending)
Reads yesterday's proof-log and generates a copy-paste-ready standup: what was worked on, time spent, lines added. Supports plain text, Slack formatting, and tweet format. Ghost Day aware.
NEW
cc-ai-heatmap
GitHub-style heatmap of your AI activity.
npx cc-ai-heatmap --open
Reads proof-log files and generates a beautiful standalone HTML heatmap — 52 weeks of AI sessions, color-coded by hours. Stats: total time, active days, streaks. Hover each day for details. Screenshot it.
NEW
claude-md-generator
Generate a custom CLAUDE.md for your project in 60 seconds.
yurukusa.github.io/claude-md-generator
Answer 5 questions about your project (type, autonomy level, language, pain points) and get a ready-to-use CLAUDE.md with decision rules, quality gates, and language-specific checks. Browser-based, zero install.
NEW
cc-review-queue
Files your AI touched that need a human look.
npx cc-review-queue --days=7 (npm pending)
Reads activity-log.jsonl and shows all files marked needs_review — hooks, config files, scripts — sorted by most recently edited. See what your AI has been touching in sensitive paths before it causes issues.
NEW
cc-receipt
The AI never clocks out. The receipt proves it.
npx cc-receipt (npm pending)
ASCII receipt of your AI's daily work. Shows every project, sessions, lines added — then compares AI active time to your sleep time. "AI WORKED WHILE YOU SLEPT." Screenshot it.
cc-audit-log
See what your Claude Code actually did. Human-readable audit trail.
npx cc-audit-log
Reads your session transcripts and shows every action: files created/modified, bash commands run, git commits made. Flags risky operations. For teams who run autonomous pipelines and want a paper trail.
NEW
cc-shift
See your AI's work schedule. Hour by hour, project by project.
npx cc-shift (npm pending)
Terminal bar chart of AI activity across 24 hours. Each ▓ block is ~30 minutes active. See which hours were busy, which were idle, which projects ran at 3am.
NEW
cc-predict
The only tool that looks forward. Forecast your month-end stats.
npx cc-predict (npm pending)
Projects month-end hours, Ghost Day count, and streak trajectory at your current pace. Uses your last 14 days as baseline. The only cc-toolkit tool that looks forward instead of back.
NEW
cc-peak
When are you most focused? Hour-of-day heatmap + peak window.
npx cc-peak (npm pending)
All other cc-toolkit tools answer "how much?" — cc-peak answers "when?" Shows your hour-of-day heatmap, day-of-week breakdown, and your optimal 4-hour working window. Insights: night owl vs early bird, weekend builder vs weekday focused.
NEW
cc-impact
What did you actually build? Commits, lines, files across all your git repos.
npx cc-impact (npm pending)
cc-session-stats shows how many hours you spent. cc-impact shows what those hours produced: total commits, lines added/removed, files changed, ranked by most active repo. Answers "I used 140h of Claude Code — so what did I build?"
NEW
cc-focus
Are you spreading too thin? Weekly project scatter trends.
npx cc-focus (npm pending)
Counts how many distinct projects you used Claude Code in per week. Shows convergence (getting focused) vs divergence (spreading thin). Labels: Deep coder / Balanced builder / Project juggler / Context switcher.
NEW
cc-day-pattern
What day of the week do you actually code with Claude Code?
npx cc-day-pattern (npm pending)
Groups your sessions by day-of-week to show your real coding pattern. Sessions, hours, and avg session length per day. Are you a Thursday builder or a Sunday night coder?
NEW
cc-burnout
Burnout risk detector. cc-score rewards high usage — cc-burnout gives the opposite signal.
npx cc-burnout (npm pending)
Analyzes streak length, late-night sessions, weekend work, session escalation, and weekly ramp-up. Score 0–100 with risk signals and concrete recommendations. Sustainable coding > heroic sprints.
NEW
cc-monthly
Your monthly Claude Code retrospective, auto-generated in Markdown.
npx cc-monthly (npm pending)
Generates a full monthly report: hours, sessions, commits, lines added/removed, Ghost Days, and week-by-week breakdown. Pipe to a file and paste into Zenn, note, or dev.to as your monthly retro.
NEW
cc-collab
Are you getting better at working with Claude Code? Weekly efficiency trends.
npx cc-collab (npm pending)
Combines your CC session hours with actual git commits to compute commits-per-hour weekly. Shows whether you're improving, plateauing, or declining — with a trend chart spanning 8 weeks.
NEW
cc-mcp
Give Claude itself access to your usage stats. Ask in plain English.
npx @yurukusa/cc-mcp
MCP server for Claude Desktop. Ask Claude "how much have I used Claude Code this month?" or "will my streak survive?" — Claude queries your actual data and answers. The only cc-toolkit component that feeds data back into Claude during a session.
NEW
cc-alert
Never lose a streak again. Get warned before it's too late.
npx @yurukusa/cc-alert
Streak risk notifier. Run as a cron job — warns you at 8pm if you haven't coded yet. Exit code 0 (safe) / 1 (at risk) / 2 (no streak). Supports --notify for OS alerts (macOS/Linux/WSL).
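A sketch of the crontab entry the description implies (the 20:00 schedule and the bare npx invocation are assumptions; adjust for your own setup and PATH):

```shell
# Run the streak check every day at 20:00; --notify pops an OS alert.
# Open with `crontab -e`, then add:
0 20 * * * npx @yurukusa/cc-alert --notify
```

Because the exit code distinguishes safe (0) from at-risk (1) and no streak (2), the same entry can also chain any custom fallback after `||`.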
NEW
cc-stats-badge
Put your Claude Code streak in your GitHub README.
npx @yurukusa/cc-stats-badge --out=badge.svg
Generates an SVG badge showing your streak, monthly hours, and AI autonomy ratio. Embed in your GitHub profile README. Auto-update daily with a cron job.
NEW
ai-life-level
What difficulty mode is your AI life on?
Open in browser → answer 7 questions
Answer 7 quick questions about your AI usage, spending, and life changes. Get your difficulty level: Easy Mode → Normal → Hard → Hardcore. Shareable result card included.
NEW
ai-cost-reality
Is your AI subscription actually worth it?
Open in browser → select your subscriptions
Bilingual (EN/JA) calculator showing what your AI stack really costs — broken down per hour, per session, per commit. Compared to hiring a developer. Based on 50 days of real data.
NEW
cc-achievements
Unlock your Claude Code milestones automatically.
Open in browser → select ~/.claude folder
20 achievements auto-detected from your session data. Volume milestones (First Steps → Half Grand), Consistency (Hat Trick → Month Strong), Ghost Days (First Ghost → Phantom), Patterns (Night Owl, Marathon), Projects (Juggler, Explorer), and Autonomy (AI Partners, AI Forward). Share what you've unlocked.
▸ COMMUNITY STATS

How do your stats compare?

Share your anonymized stats and see how you compare to other Claude Code users. No account needed — just a GitHub Gist.

📊 See community stats → npx cc-session-stats --json
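The --json flag makes those stats machine-readable, so you can trim or reformat them before posting a Gist. A minimal sketch of that post-processing step, assuming a flat JSON schema (the field names in the sample below are illustrative, not the tool's documented output):

```shell
# Sample of what a --json run might emit; field names are assumptions.
stats='{"total_hours": 142.3, "sessions": 3580, "streak_days": 40}'
# Condense it to a one-line summary worth sharing.
summary=$(echo "$stats" | python3 -c 'import json,sys; d=json.load(sys.stdin); print("%sh across %s sessions, %s-day streak" % (d["total_hours"], d["sessions"], d["streak_days"]))')
echo "$summary"
```

The same pattern works for piping real output straight into a Gist once you have confirmed the actual schema.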

Heavy Claude Code user?

If your stats show high usage, you probably want guardrails. Claude Code Production Hooks Kit — 16 hooks + 5 templates + 3 tools, production-ready in 15 minutes, built from 160+ hours of autonomous operation.

See the Hooks Kit — $19 →