Automating OTA Updates: How We Deploy to 20+ White-Label Apps Without Touching a Laptop
If you read Expo’s recent post about OneSpot automating OTA publishing at massive scale, you already know the core idea: stop deploying app-by-app from a laptop, and turn releases into a system.
This is our version of that same playbook—rebuilt for a Source Push workflow.
The scenario
We run a white-label React Native platform where every customer has their own branded app:
- Unique app name and icon
- Unique bundle/package identifiers
- Dedicated release channels
- Shared core codebase
That architecture is great for product velocity, but painful for operations when updates are manual.
Without automation, even a tiny fix can mean repeating the same deployment flow dozens of times across the fleet.
The bottleneck we had to remove
Our old OTA flow looked like this:
- Select one app
- Update local config manually
- Run release commands from a developer laptop
- Validate
- Repeat for the next app
At 20+ apps, this turns into hours of repetitive operational work and high risk of human error.
The key shift: treat app targeting as data
Instead of hardcoding per-app settings, we moved every variable into a central app registry (JSON).
Each app record includes:
- Brand name and slug
- iOS bundle ID / Android package
- Source Push app key and channel mapping
- Asset set references
- Runtime version and rollout metadata
Now “deploy this app” is just: load app config from data + run pipeline.
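To make that concrete, here is a minimal sketch of the registry idea in Python. The JSON shape and field names are illustrative, not our exact schema:

```python
import json

# Illustrative registry entries; real records also carry asset set
# references and rollout metadata.
REGISTRY_JSON = """
[
  {
    "slug": "school-app",
    "brand_name": "SchoolApp",
    "ios_bundle_id": "com.example.schoolapp",
    "android_package": "com.example.schoolapp",
    "srcpush_app": "SchoolApp-Production",
    "channels": {"internal": "Internal", "beta": "Beta", "production": "Production"},
    "runtime_version": "1.4.0"
  }
]
"""

def load_registry(raw: str) -> dict:
    """Index app records by slug so 'deploy this app' is a data lookup."""
    return {record["slug"]: record for record in json.loads(raw)}

apps = load_registry(REGISTRY_JSON)
print(apps["school-app"]["srcpush_app"])  # -> SchoolApp-Production
```

Once every per-app variable lives in a record like this, the pipeline never needs hand-edited config.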
Our Source Push automation architecture
1) Config generation layer
A script takes one app ID (or many) and generates runtime config files for that target app set.
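A hedged sketch of that generation step: render one runtime config per registry record. The record fields and output keys here are assumptions for illustration, not our real schema:

```python
import json

def generate_runtime_config(record: dict) -> str:
    """Render the per-app runtime config consumed by the release step."""
    config = {
        "name": record["brand_name"],
        "ios": {"bundleIdentifier": record["ios_bundle_id"]},
        "android": {"package": record["android_package"]},
        "runtimeVersion": record["runtime_version"],
    }
    return json.dumps(config, indent=2)

# Illustrative record; in practice this comes from the central registry.
record = {
    "brand_name": "SchoolApp",
    "ios_bundle_id": "com.example.schoolapp",
    "android_package": "com.example.schoolapp",
    "runtime_version": "1.4.0",
}
print(generate_runtime_config(record))
```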
2) Release execution layer
The same script calls Source Push CLI commands non-interactively.
# Example single-app OTA release
srcpush release-react SchoolApp-Production \
  --description "Fix session timeout handling"
# Example promotion flow
srcpush promote SchoolApp-Production Staging Production
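The fleet version of those commands is a loop over registry targets. This is a sketch, not our production script: the second app name is made up, and any `srcpush` flag beyond `--description` would be an assumption:

```python
import subprocess

def release_command(app: str, description: str) -> list:
    """Build the non-interactive srcpush invocation as an argv list."""
    return ["srcpush", "release-react", app, "--description", description]

def run_releases(apps, description, dry_run=True):
    for app in apps:
        cmd = release_command(app, description)
        if dry_run:
            print(" ".join(cmd))  # preview mode: show what would run
        else:
            subprocess.run(cmd, check=True)  # fail fast so CI marks the job red

run_releases(["SchoolApp-Production", "AcademyApp-Production"],
             "Fix session timeout handling")
```

`check=True` matters: a failed release for any one app should stop the run rather than silently continue through the rest of the fleet.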
3) CI runner (not local machine)
All releases run in CI (GitHub Actions in our case), not on any engineer’s laptop. That gives us:
- Repeatable execution environment
- Centralized secrets handling
- Full audit trail for every deployment
- Safer rollback and retry behavior
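For reference, a trimmed-down workflow of this shape might look like the fragment below. Job names, the config script path, and the token variable are assumptions, not our exact pipeline:

```yaml
# Illustrative GitHub Actions workflow for one-app OTA releases.
name: ota-release
on:
  workflow_dispatch:
    inputs:
      app_slug:
        description: "Registry slug of the app to deploy"
        required: true
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      # Hypothetical script names; config is generated from the registry, never hand-edited.
      - run: python scripts/generate_config.py "${{ github.event.inputs.app_slug }}"
      - run: python scripts/run_release.py "${{ github.event.inputs.app_slug }}"
        env:
          SRCPUSH_ACCESS_TOKEN: ${{ secrets.SRCPUSH_ACCESS_TOKEN }}
```

Because the token only ever exists as a CI secret, no engineer's laptop holds release credentials.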
4) Remote trigger interface
We expose a secure internal endpoint that triggers deployment jobs. From an ops dashboard (or even a mobile admin UI), authorized teammates can trigger:
- Publish one app
- Publish a segment (for example, all K-12 customers)
- Promote validated releases across channels
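The interesting part of the trigger is resolving a target (one app, or a segment like "k12") into concrete deployment jobs. A minimal sketch, assuming a `segment` field in the registry and an internal payload shape that is purely illustrative:

```python
# Toy registry; real records come from the central app registry.
REGISTRY = [
    {"slug": "school-app", "segment": "k12"},
    {"slug": "academy-app", "segment": "k12"},
    {"slug": "campus-app", "segment": "higher-ed"},
]

def resolve_targets(target: str) -> list:
    """Accept either a single slug or a segment name like 'k12'."""
    return [r["slug"] for r in REGISTRY
            if r["slug"] == target or r["segment"] == target]

def dispatch_payloads(target: str, action: str = "publish") -> list:
    """One deployment-job payload per resolved app."""
    return [{"action": action, "app_slug": slug}
            for slug in resolve_targets(target)]

print(dispatch_payloads("k12"))
```

The endpoint then forwards each payload to CI, so "publish all K-12 customers" is one authorized click rather than twenty manual runs.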
Guardrails that made this reliable
At this scale, speed without control is dangerous. We added guardrails from day one:
- Channel-based rollout: Internal → Beta → Production
- Automated checks: block promotion on failed health metrics
- Scoped credentials: least privilege for CI tokens
- Deterministic artifacts: generated config committed/traceable per run
- Instant rollback paths: fast channel re-pointing when needed
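The "block promotion on failed health metrics" check reduces to a small gate function. The metric names and thresholds below are illustrative, not our real values:

```python
def promotion_allowed(metrics: dict,
                      max_crash_rate: float = 0.01,
                      min_adoption: float = 0.20) -> bool:
    """Gate Beta -> Production promotion on fleet health:
    crash rate must stay low and the Beta build must have
    enough adoption to make the signal meaningful."""
    return (metrics["crash_rate"] <= max_crash_rate
            and metrics["adoption"] >= min_adoption)

healthy = {"crash_rate": 0.002, "adoption": 0.35}
regressed = {"crash_rate": 0.04, "adoption": 0.35}
print(promotion_allowed(healthy))    # True
print(promotion_allowed(regressed))  # False
```

CI runs this gate before any `srcpush promote`, so a bad Beta build can never reach Production automatically.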
What changed after implementation
Faster hotfix velocity
Critical fixes can be pushed across the fleet in minutes, not hours.
Lower operational load
Engineering stopped spending release windows on repetitive command execution.
More consistent deployments
Centralized, scripted runs eliminated machine-specific drift and “works on my laptop” failures.
Better confidence at scale
Because every release follows the same pipeline, the process is predictable even when fleet size grows.
If you want to replicate this pattern
Start simple:
- Build a single source of truth for per-app metadata.
- Generate app config from data, not manual edits.
- Run Source Push releases from CI only.
- Add staged promotion and rollback automation.
- Add a secure API trigger once core flow is stable.
The big idea is straightforward: OTA at white-label scale is a systems problem, not a terminal command problem.
Once you model deployments as data + automation, shipping to 20 apps can feel almost as simple as shipping to one.
