Should You Keep or Kill Your Notion Custom Agents in May 2026?

Written by
Pravin Kumar
Published on
May 6, 2026

On May 4, 2026, Notion flipped the meter on Custom Agents. The feature that launched free in February with Notion 3.3 now runs on Notion Credits, with each agent run consuming 30 to 60 credits depending on how many tools it uses, how much it reads, and how much it writes back into the workspace. At $10 per 1,000 credits, a solo Webflow practice running three or four always-on agents is suddenly looking at a real $20 to $50 monthly line item on top of the Business plan seat fee. Sunday morning I ran a five-minute keep-or-kill audit on my own three agents. Two survived. One did not. Here is the audit format and the cost math, in case you are running it on your own practice this week.

What Changed With Notion Custom Agents on May 4, 2026?

Notion Custom Agents launched on February 24, 2026 as part of the Notion 3.3 release with a free trial period that ran through May 3. On May 4, every agent run started consuming Notion Credits priced at $10 per 1,000. The feature is restricted to the Business plan at $20 per seat per month and the Enterprise plan.

The metering granularity is per-run, not per-token. A run that calls one tool, reads two database rows, and writes one block back consumes around 30 credits. A run that searches the workspace, reads ten documents, calls an external connector, and writes a long synthesis back can consume up to 60 credits. The variance is wide enough that the per-run cost depends entirely on what the agent actually does; run frequency then multiplies that per-run cost into the monthly bill.

The launch trajectory matters too. Notion shipped Custom Agents alongside reference customers like Ramp's Enablement Eddie, Clay's IT Buddy, and Braintrust's Competitive Intelligence Agent. Those are mid-to-large enterprise deployments. The feature was always priced for that segment. Solo practices that adopted during the free trial are now experiencing the actual cost shape Notion always intended.

Why Does This Matter for a Solo Webflow Practice?

For a solo Partner running three or four always-on agents, the new metering means roughly $20 to $50 per month just for AI runs, on top of the $20 seat fee. That is not a tooling crisis. It is a real budget item that needs to be sized against the work each agent actually returns. The honest question is whether each agent saves more time than it costs.

The deeper issue is that pricing transparency is now operational. Every agent run shows up on the credit dashboard. Notion is signaling that AI usage in the workspace is no longer an unbounded utility. Solo practices that built workflows around the assumption of free, frequent, low-friction agent runs need to revisit those workflows with the credit cost as a real input. The work the agents replace is mostly cheap manual work for a one-person practice, which makes the calculus tighter than it would be for a 50-person team.

How Do I Calculate the Real Monthly Cost of Each Agent?

The formula is straightforward. Average credits per run, multiplied by runs per day, multiplied by 30, divided by 100. A daily competitive intelligence agent at 50 credits per run costs $15 per month. A weekly client-status agent at 60 credits per run, run four times per month, costs $2.40. An hourly inbox triage agent at 30 credits per run, fired six times daily, costs $54.
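The formula as a sketch in Python, with the one-cent-per-credit rate from the pricing above baked in:

```python
def monthly_cost_usd(credits_per_run: float, runs_per_day: float, days: int = 30) -> float:
    """Monthly dollar cost of one agent. At $10 per 1,000 credits,
    each credit costs one cent, so dividing credits by 100 yields dollars."""
    return credits_per_run * runs_per_day * days / 100

# The three agents from the examples above:
print(monthly_cost_usd(50, 1))       # daily competitive intelligence: $15.00
print(monthly_cost_usd(60, 4 / 30))  # weekly client status, 4 runs a month: $2.40
print(monthly_cost_usd(30, 6))       # hourly inbox triage, 6 runs a day: $54.00
```

The weekly agent is expressed as 4/30 runs per day so the same formula covers all three cadences.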

The variance between these is enormous. The hourly triage agent becomes the third most expensive line on my Notion bill the moment metering turns on. The weekly client-status agent is a rounding error. Without the formula written out, you might assume all three agents cost roughly the same; in fact they differ by more than an order of magnitude. The formula is the entire audit. I covered the broader monthly tooling discipline in my monthly AI tooling cost piece, and the same exercise applies here at agent granularity.

Which Agents Pass the Keep-or-Kill Test for My Practice?

Three rules I apply. The agent must save at least 30 minutes of my time per run. The agent's output must be something I would have actually produced manually if it did not exist. The agent's failure mode, wrong output or hallucinated data, must not propagate into client deliverables. Agents that fail any of the three should be killed regardless of credit cost.
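The three rules reduce to a simple predicate. A minimal sketch, with field names that are my own shorthand rather than anything Notion exposes:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    minutes_saved_per_run: float  # rule 1: must be at least 30
    would_do_manually: bool       # rule 2: output I would actually produce by hand
    safe_failure_mode: bool       # rule 3: wrong output cannot reach client deliverables

def keep(agent: Agent) -> bool:
    """An agent survives only if it passes all three rules."""
    return (agent.minutes_saved_per_run >= 30
            and agent.would_do_manually
            and agent.safe_failure_mode)

agents = [
    Agent("competitive-intel", 60, True, True),
    Agent("inbox-triage", 10, True, True),    # fails rule 1
    Agent("client-status", 45, True, False),  # fails rule 3
]
for a in agents:
    print(a.name, "keep" if keep(a) else "kill")
```

Anything returning False goes on the kill list before credit cost even enters the picture.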

For my own practice, the daily competitive intelligence agent passed all three. It saves an hour per run, produces a brief I would actually write manually, and its failure mode is just slightly stale data, which is recoverable. The hourly inbox triage agent failed rule one. It saved maybe ten minutes per run for a $54 monthly cost, which is not the right trade. The weekly client-status agent passed rules one and two but failed rule three because its output went directly into a client-facing summary without a review step. I killed the triage agent, kept the competitive intelligence agent as-is, and rewired the client-status agent to require a manual review before anything reaches a client.

What Should I Replace With Inline Notion AI Instead?

Inline Notion AI questions, triggered with the slash command, are still free on the Plus and Business plans for most operations. Tasks that fit a single-question, single-answer pattern, like "rephrase this paragraph," "summarize this meeting note," or "list action items from this doc," belong as inline questions, not as Custom Agents. Converting these removes their entire credit cost.

The conversion test is whether the task needs persistent context between runs. If yes, an agent earns its keep because it remembers the running context. If no, an inline question covers the work without metering. Most tasks I had wired as agents in March turned out, on inspection, to be one-shot inline questions. The keep-or-kill audit surfaced this immediately. About 40 percent of what I had set up as Custom Agents was actually inline-question work that drifted into the agent abstraction during the free trial because there was no cost discipline to push back. I covered the related rhythm in my AI as senior team member framework piece.

How Does This Compare to Running the Same Agent in Claude?

Claude API pricing on Sonnet 4.6 runs around $3 per million input tokens and $15 per million output tokens at standard rates. A typical agent run with 5,000 input tokens and 1,500 output tokens costs about 4 cents through the Claude API directly. Running the same workload as a Notion Custom Agent at 50 credits costs 50 cents. The roughly twelve-times markup pays for the Notion workspace integration, not the underlying model capability.
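The comparison works out like this in code, using the Sonnet rates quoted above and one cent per Notion credit:

```python
def claude_run_cost_usd(input_tokens: int, output_tokens: int,
                        usd_per_m_in: float = 3.0, usd_per_m_out: float = 15.0) -> float:
    """Direct Claude API cost in dollars for a single run at standard rates."""
    return input_tokens / 1e6 * usd_per_m_in + output_tokens / 1e6 * usd_per_m_out

api_cost = claude_run_cost_usd(5_000, 1_500)  # 0.015 + 0.0225 = 0.0375, about 4 cents
notion_cost = 50 / 100                        # 50 credits at one cent each
print(f"API: ${api_cost:.4f}  Notion: ${notion_cost:.2f}  markup: {notion_cost / api_cost:.1f}x")
```

At these exact numbers the ratio lands near 13x; rounding the API cost up to 4 cents is what gives the twelve-times figure.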

The honest framing is that the markup buys real value when the workspace integration matters. Reading from a Notion database, writing back into a Notion page, traversing the workspace's permission model, all of that takes engineering work that Notion has done so I do not have to. For tasks that genuinely need that integration, the price is fair. For tasks that do not, paying twelve times the model cost for an integration the agent never uses is the operational mistake that the audit catches. I covered the related agent discipline in my AI as senior team member framework piece.

What Are the Hidden Costs Beyond Credit Consumption?

Three hidden costs are easy to miss. The seat-fee escalation if the practice was running on the cheaper Plus plan and now needs Business to keep agents at all, which adds another $13 per seat per month over Plus. The vendor lock-in from agents written in Notion's specific orchestration model, which does not port cleanly to other workspace tools. The maintenance tax of keeping agent prompts updated as Notion's underlying tools and database APIs evolve.

The maintenance tax is the most insidious. Notion has shipped at least four meaningful tool surface updates since February. Each update broke at least one of my agent prompts in subtle ways. The total maintenance time across the quarter was probably four hours, which is real money for a one-person practice. The cost is invisible because it gets absorbed into normal workflow time, but it is part of the actual run-rate of operating Custom Agents at scale. Partners running ten or twelve agents will spend a meaningful share of every quarter just keeping them functional.

How Should I Renegotiate My Notion Plan This Quarter?

Two options worth considering. Drop to the Plus plan if Custom Agents are not earning their keep after the audit, which removes the Business surcharge entirely and reverts the practice to inline AI questions only. Or stay on Business and credit-budget aggressively, capping monthly credit spend by setting an explicit dollar limit and reviewing weekly. Both are defensible. Both are better than letting credit consumption drift unexamined for the next quarter.
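The credit-budgeting option can be as simple as a script run during the weekly review. The cap, the agent numbers, and the days-elapsed projection below are all placeholders for illustration:

```python
MONTHLY_CAP_USD = 30.0  # the explicit dollar limit, a placeholder
DAYS_ELAPSED = 7        # how far into the month the review falls

# (agent name, credits per run, runs so far this month) — illustrative numbers
usage = [
    ("competitive-intel", 50, 7),
    ("client-status", 60, 1),
]

spend = sum(credits * runs / 100 for _, credits, runs in usage)
projected = spend * 30 / DAYS_ELAPSED  # naive straight-line projection
print(f"spent ${spend:.2f}, projected ${projected:.2f} against ${MONTHLY_CAP_USD:.2f} cap")
if projected > MONTHLY_CAP_USD:
    print("over budget: rerun the keep-or-kill audit")
```

The point is not the script itself but that the cap is written down somewhere a weekly review will actually hit it.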

The third option, abandoning Notion entirely for a competitor, is the worst choice for most practices because it triggers a much larger migration cost than the credit savings would justify. The exit calculus only works for practices that were already considering a tooling switch independent of the credit pricing. For everyone else, recalibrating within Notion is the cheaper response. The practice operator gets the workspace integrations they actually use, plus a sharper view of what each AI workflow costs to maintain. I covered the broader vendor independence question in my vendor lock-in audit piece from this batch.

What Is the Single Biggest Mistake Partners Are Making This Week?

The biggest mistake is treating the May 4 change as a one-time check-and-forget event. Notion's pricing is now metered, which means usage drift over weeks compounds invisibly. Partners who audit once and then ignore credit consumption will notice in two months that their monthly bill has crept past their original estimate by 40 to 60 percent because new agents got added without explicit credit budgets attached.

The defense is a recurring monthly review of credit consumption, ideally on the same day each month, with a written one-page summary that lists every agent, its credit cost, and a keep-or-kill verdict. The review takes 20 minutes. The discipline keeps the bill predictable. Partners who skip this will be the ones writing surprised LinkedIn posts in August about how their Notion bill tripled. The metering itself is reasonable. The drift around metering is what burns the budget.

What Action Should I Take in the Next 24 Hours?

Run the keep-or-kill test on every Custom Agent in your workspace today. List each agent. Calculate its monthly credit cost using the formula above. Compare against the time the agent actually saves you. Kill any agent that fails the three-rule test. The exercise takes 30 minutes and produces immediate clarity on which agents earn their keep and which were running on inertia from the free-trial period.

Then write a one-paragraph summary of the audit findings and post it where future you will see it next month. The summary is the artifact that turns a one-time review into an ongoing discipline. Without it, the audit's value compounds for one month and then fades. With it, the audit becomes the first iteration of a quarterly habit that keeps the practice's AI tooling cost honest. The total effort across the quarter is roughly two hours. The compounding effect on tooling discipline is significant. I covered the related quarterly rhythm in my quarterly retrospective piece from this batch.

If you are running a Webflow practice and want to compare keep-or-kill notes on your Notion Custom Agents this week, drop me a line and tell me which agent you most suspect is going to fail the test. Let's chat.
