Use Scheduled Scripts for simpler, bounded jobs. Use Map/Reduce when record volume, retries, and governance isolation matter more than implementation simplicity.
Verdict
Default to Scheduled Script until you have enough volume, retry complexity, or stage separation to justify Map/Reduce.
| Criterion | Scheduled Script | Map/Reduce | Recommendation |
|---|---|---|---|
| Implementation complexity | Lower. One execute path, fewer moving parts. | Higher. Multiple stages, more moving parts to reason about. | Scheduled Script wins when the job is still operationally small. |
| Governance isolation | Single execution budget for the whole run. | Per-stage and per-key isolation reduces blast radius. | Map/Reduce wins once volume is the actual constraint. |
| Retry behavior | Usually custom and manual: you track state and rerun failed work yourself. | Native per-key retries and summary-stage error reporting handle record-level failures. | Map/Reduce if you expect intermittent failures. |
| Debugging speed | Faster to trace for small jobs. | More indirection during debugging. | Scheduled Script for jobs you need to reason about quickly. |
Scheduled Script: best for straightforward jobs with bounded datasets and simple orchestration requirements.
Map/Reduce: best for high-volume or failure-prone workloads where governance isolation and retry handling matter more than simplicity.
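The "fewer moving parts" claim is concrete at the code level. The sketch below shows the shape of each script type's entry points in SuiteScript 2.x: one `execute` function for a Scheduled Script versus four stage functions for Map/Reduce. Outside NetSuite there is no AMD loader, so `define` is stubbed here purely to make the shapes visible; the bodies are placeholders, not a working integration.

```javascript
// Stub of NetSuite's AMD loader so the two entry-point shapes can be
// compared side by side outside the platform (illustrative only).
const define = (deps, factory) => factory();

// Scheduled Script: one entry point, one governance budget for the run.
const scheduled = define([], () => ({
  execute(context) {
    // Load, process, and save records inside a single execution context.
    return 'processed in one pass';
  },
}));

// Map/Reduce: four stages, each map/reduce invocation isolated per key.
const mapReduce = define([], () => ({
  getInputData() { return [1, 2, 3]; },     // stage 1: yield the work
  map(ctx) { /* per-record work, own governance allotment */ },
  reduce(ctx) { /* per-key aggregation */ },
  summarize(summary) { /* inspect errors, log totals */ },
}));
```

The extra stages are exactly the indirection the table's "debugging speed" row refers to: a failure can surface in any of four functions instead of one.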
Most teams make the same mistake in opposite directions. They either reach for Map/Reduce because it sounds more scalable, or they stay on Scheduled Script far too long because the first version worked in testing. Both are symptoms of choosing by reputation instead of workload shape.
If the job is small, predictable, and easy to rerun, a Scheduled Script is usually the better engineering choice because it is easier to build, debug, and hand off. If the job touches enough records that one execution context becomes the bottleneck, Map/Reduce stops being an optimization and becomes the correct architecture.
Do not use record count alone
Five hundred records can still justify Map/Reduce if each one triggers transforms, external calls, or multiple writes. The real threshold is cost per record multiplied by operational risk.
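A back-of-envelope cost model makes the point. The unit costs below are illustrative assumptions, not official NetSuite values; the shape of the math is what matters: extra work per record drags the ceiling down fast.

```javascript
// Toy governance-cost model. Unit costs here are assumptions for
// illustration, not documented values.
function unitsPerRecord({ loadSave = 30, httpsCalls = 0, extraWrites = 0 } = {}) {
  // Assume ~10 units per external call and ~20 per additional submit.
  return loadSave + httpsCalls * 10 + extraWrites * 20;
}

const simple = unitsPerRecord({});                               // 30 units
const heavy = unitsPerRecord({ httpsCalls: 2, extraWrites: 1 }); // 70 units

// Against a hypothetical 10,000-unit budget:
const simpleCeiling = Math.floor(10000 / simple); // 333 records
const heavyCeiling = Math.floor(10000 / heavy);   // 142 records
```

Five hundred "heavy" records blow through that budget more than three times over, which is why record count alone is the wrong yardstick.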
Governance
A Scheduled Script gives you one budget to spend: 10,000 usage units for the entire run. If each record costs roughly 30 governance units because you load and save it, the ceiling arrives near 333 records. Map/Reduce changes the economics: each map or reduce invocation draws from its own allotment, isolating work and shrinking the blast radius of expensive records.
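The difference between the two budget models can be simulated directly. This is not the NetSuite API, just a sketch; the 10,000-unit run budget and 1,000-unit per-invocation budget are the commonly cited governance figures for scheduled scripts and map stages, and the 30-unit per-record cost follows the example above.

```javascript
// Illustrative simulation (not the NetSuite API): one shared budget
// versus a fresh budget per invocation.
function runWithSharedBudget(records, costPerRecord, totalBudget) {
  // Scheduled Script model: every record draws from the same budget.
  let remaining = totalBudget;
  let processed = 0;
  for (const r of records) {
    if (remaining < costPerRecord) break; // would hit SS_USAGE_LIMIT_EXCEEDED
    remaining -= costPerRecord;
    processed += 1;
  }
  return processed;
}

function runWithIsolatedBudgets(records, costPerRecord, budgetPerInvocation) {
  // Map/Reduce model: each key gets its own budget, so no record can
  // starve the rest of the run.
  return records.filter(() => costPerRecord <= budgetPerInvocation).length;
}

const records = Array(500).fill('rec');
const shared = runWithSharedBudget(records, 30, 10000);    // caps at 333
const isolated = runWithIsolatedBudgets(records, 30, 1000); // all 500
```

At 30 units per record, the shared budget strands a third of the dataset; the isolated model finishes all of it.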
Performance
The crossover point is not theoretical. Small jobs often run slower in Map/Reduce because stage setup and orchestration overhead add real cost. High-volume jobs usually repay that overhead quickly.
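The crossover can be modeled with a simple linear cost function. The constants below are made up for illustration; the point is that fixed orchestration overhead only amortizes at volume.

```javascript
// Toy latency model with assumed constants: Map/Reduce pays a fixed
// orchestration cost per run but less per record.
const scheduledTime = (n) => n * 50;         // ms per record, no setup cost
const mapReduceTime = (n) => 60000 + n * 40; // 60s stage overhead, faster per record

// Crossover volume: below this, the Scheduled Script finishes first.
const crossover = Math.ceil(60000 / (50 - 40)); // 6000 records
```

Under these assumed numbers, a 100-record job is dominated by the 60-second overhead, while a 10,000-record job repays it with room to spare.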