DATA STORY · JAN 10 – MAR 1, 2026

51 Days of Autonomous AI

I gave Claude Code a $200/month subscription and minimal supervision. Here's what 51 days of session logs actually show — the good numbers, the embarrassing numbers, and the ones I didn't expect.

3,580
Sessions
142.3h
AI runtime
40
Ghost Days
22.6h
Longest session

The Numbers

From January 10 to March 1, 2026: 51 days. 3,580 Claude Code sessions logged. 142.3 hours of AI operating time.

That's 2.8 hours per day, every day, averaged across the full 51-day period. But averages hide the shape. Some days were 12-hour sessions. Others were zero.
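The per-day figure is just the two headline totals divided out:

```python
# Figures from the article; only the two totals are real data.
total_hours = 142.3  # AI runtime over the experiment
days = 51            # Jan 10 - Mar 1, 2026, inclusive

print(f"{total_hours / days:.1f} h/day")  # → 2.8 h/day
```

A mean over a bimodal distribution (12-hour days and zero days) is exactly the kind of number the rest of this page tries to unpack.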

The longest single session: 22.6 hours. February 28, 2026. The AI ran from evening through the next morning without a meaningful break. I was present for parts of it. Not all of it.

Ghost Days: The Hidden Number

A Ghost Day is a day where Claude Code was technically running but produced nothing: fewer than 10 meaningful tool calls, no shipped output. Opened, looked around, closed.

Out of 51 days: 40 were Ghost Days.
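The Ghost Day rule is easy to mechanize. A minimal sketch, assuming hypothetical per-day summary records — the real ~/.claude data is raw session transcripts, so an actual classifier would first have to aggregate tool calls per calendar day:

```python
# Sketch of the Ghost Day rule. The (date, tool_calls, shipped) records
# below are illustrative, not real log entries.
GHOST_THRESHOLD = 10  # "fewer than 10 meaningful tool calls"

def is_ghost_day(tool_calls: int, shipped: bool) -> bool:
    """Ghost Day: under the tool-call threshold and nothing shipped."""
    return tool_calls < GHOST_THRESHOLD and not shipped

days = [
    ("2026-01-12", 3, False),    # opened, looked around, closed
    ("2026-01-14", 210, True),   # active day
    ("2026-01-15", 9, False),    # just under the threshold
]
ghosts = [date for date, calls, shipped in days if is_ghost_day(calls, shipped)]
print(ghosts)  # → ['2026-01-12', '2026-01-15']
```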

51 days — ■ active ■ ghost

I expected maybe a 20% ghost rate. The actual rate was 78%.

The pattern wasn't random. Monday had an 88% ghost rate. Wednesday and Thursday were the most productive. Saturday evenings were reliably quiet.

What this means: On most days, I paid the $6.67 daily subscription cost and produced nothing deployable. The 11 active days had to carry the full weight.

AI Time vs. Human Time

cc-agent-load separates time into AI-driven work (autonomous operation) versus human-directed work (prompts, reviews, corrections).

AI: 84.2h (61%) Human: 54.3h (39%)

The AI logged about 1.5× as many hours as I did. Most of those autonomous hours happened while I was doing other things — or sleeping.

This ratio was lower than I expected. I thought I was more hands-off than a 39% share suggests. Turns out I was still present and directing during far more of the "autonomous" hours than I'd assumed.
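The split reduces to two numbers. A sketch using the published aggregates — how cc-agent-load attributes each span of time to one bucket or the other is not reproduced here:

```python
# Published aggregates from the article; the attribution logic
# inside cc-agent-load is not shown.
ai_hours, human_hours = 84.2, 54.3

ai_share = ai_hours / (ai_hours + human_hours)
print(f"AI {ai_share:.0%}, {ai_hours / human_hours:.2f}x human")  # → AI 61%, 1.55x human
```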

The 3AM Spike

The session heatmap shows activity by hour of day across 51 days. Peak activity: 2AM–4AM JST.

[Heatmap: sessions by hour of day, 0:00–23:00]

I didn't plan to be a 3AM coder. Some of it is autonomous sessions that started in the evening and ran through the night. Some of it is genuinely me, awake at 3AM, directing AI sessions because there's nobody asking for anything.
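The heatmap itself is just an hour-of-day histogram over session start times. A sketch with made-up timestamps — the real ones would come from the session logs:

```python
# Hour-of-day histogram behind the heatmap. Timestamps are
# illustrative, not real session data.
from collections import Counter
from datetime import datetime

starts = [
    "2026-02-10T02:14:00", "2026-02-10T03:40:00",
    "2026-02-11T03:05:00", "2026-02-11T14:22:00",
]
by_hour = Counter(datetime.fromisoformat(t).hour for t in starts)
peak_hour = max(by_hour, key=by_hour.get)
print(f"peak: {peak_hour:02d}:00")  # → peak: 03:00
```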

🌙
The Midnight Beast
"The compiler doesn't sleep, and neither do I."
52% night sessions · 1.6× weekend activity · 4-day longest streak

What 142.3 Hours Produced

Jan 10
Started the experiment
First Claude Code session. No system. Just prompts.
Jan–Feb
First tools built to understand what was happening
cc-session-stats, cc-audit-log, cc-agent-load — built because the data was invisible.
Feb
39 tools, 1 game shipped, 25+ articles published
The toolkit grew as each tool revealed a new question to answer.
Mar 1
This page
3,580 sessions later.
39
Tools built
25+
Articles written
1
Game shipped
$4.99
Revenue earned

The Uncomfortable Math

Two months of Claude Max at $200/month. $400 total cost. $4.99 total revenue.

[Revenue bar: $4.99 earned on a $0–$400 scale · break-even: $400]

1.25% of costs recovered. The experiment is not yet profitable.
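The cost-recovery arithmetic, using the figures above:

```python
# Two months of Claude Max vs. total revenue, from the article.
cost = 200 * 2   # $200/month for two months
revenue = 4.99

print(f"{revenue / cost:.2%} of costs recovered")  # → 1.25% of costs recovered
```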

The 39 tools are free. They build trust and audience. The paid products ($19 Ops Kit, ¥800 Zenn book, $2 game) are the monetization path. Whether that path reaches $400 is still an open question.

Why publish this? An AI that hides losses isn't trustworthy. The full P&L is tracked at ai-earns-its-keep →

Analyze Your Own Data

All 39 tools are free. Browser-based, zero dependencies — just select your ~/.claude folder. No npm, no install required for the browser tools.

Open cc-toolkit →

What does your session heatmap look like? Do you have Ghost Days too?