The Apple Platform Matrix: Which Targets Deserve Which App
Apple ships six consumer compute platforms a developer can target with a single Swift codebase: iPhone, iPad, Mac, Watch, Vision, and TV. SwiftUI plus the iOS 26 toolchain make adding any of them a check-the-box operation in Xcode. The check-the-box is the trap. Every additional target is an obligation, not a feature: each one extends the surface area of design, testing, accessibility, runtime model, and ongoing maintenance. The right number of targets for an app is fewer than the framework permits.
The cluster’s apps run different mixes. Return ships on six platforms (iPhone, iPad, Mac, Watch, Vision, TV). Get Bananas ships on four (iPhone, iPad, Mac, Watch). Reps and Water are pre-release with multiple compiled targets the apps will narrow before launch. Ace Citizenship and Tappy Color each ship on iPhone alone. Same developer, same toolchain, six different platform decisions. The decisions follow rules; the rules deserve a shared map.
This piece names the rules. Each platform earns inclusion through specific user value the platform genuinely adds, not through "it compiles." The matrix that survives shipping is what's left after each target either justifies itself or gets cut.
TL;DR
- Six platforms, six different obligations: iPhone (default), iPad (size-class adaptation), Mac (window/menu/keyboard idioms), Watch (runtime contract), Vision (spatial mental model), TV (focus engine).
- Each additional target adds testing surface, design work, accessibility, and ongoing release coordination. The Xcode “add platform” checkbox hides the cost.
- The right tests are user-value tests, not technical tests: does the user genuinely benefit from running this app on that platform? If the answer is “it works there too,” cut it.
- Most apps should ship one to three platforms. Four to six is uncommon and earns the slot only when every platform genuinely adds user value the others can’t deliver.
iPhone Is The Default
Every Apple app starts on iPhone or it doesn’t start. iPhone has the largest install base, the most refined accessibility surface, the strongest distribution path through the App Store, and the canonical SwiftUI design language. Every cluster app I’ve shipped runs on iPhone. None has shipped a non-iPhone-first design.
The user value test for iPhone: would a user open this app on a phone? For nearly every consumer category, yes. Health and fitness apps live on the phone. Productivity tools live on the phone. Games live on the phone. Communication tools live on the phone. The default is iPhone because the phone is where the user is.
The exceptions are dev tools (Xcode, Terminal) and creative tools that need real estate (Final Cut, Logic). Those start on Mac and earn an iPhone companion only if there's a clear handoff (Continuity Camera, or the Watch-during-a-workout pattern where another device shows the chart). For consumer software, iPhone-first is not a debate.
iPad Is Not An iPhone With More Pixels
UIKit made it possible to ship an iPhone app on iPad with size-class breakpoints. SwiftUI made the same thing easier through @Environment(\.horizontalSizeClass) and NavigationSplitView. The technical cost of "ship to iPad" is low. The product cost is the question of whether the app actually deserves the iPad's larger canvas.
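A minimal sketch of that adaptation, assuming a hypothetical `SectionListView` in the spirit of a sectioned shopping list: the same content renders as a sidebar-plus-detail split in a regular width and as a single-column stack in a compact width.

```swift
import SwiftUI

// Sketch: one root view, two layouts. `SectionListView` is an
// illustrative name, not an API from the post.
struct AdaptiveRootView: View {
    @Environment(\.horizontalSizeClass) private var sizeClass

    var body: some View {
        if sizeClass == .regular {
            // Wide canvas (iPad full screen, Mac): sidebar plus detail.
            NavigationSplitView {
                SectionListView()
            } detail: {
                Text("Select a section")
            }
        } else {
            // Narrow canvas (iPhone, iPad split view): single column.
            NavigationStack {
                SectionListView()
            }
        }
    }
}

struct SectionListView: View {
    var body: some View {
        List {
            Section("Produce") { Text("Bananas") }
            Section("Dairy") { Text("Milk") }
        }
    }
}
```

The point of the sketch is that the adaptation is a deliberate second layout, not a stretched first one.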
Three iPad-yes signals:
The app reads or creates content the user wants more screen for. Reading apps (Books, news, comics, long documents). Drawing/painting apps (Procreate). Note-taking with Apple Pencil (Notability, GoodNotes). Get Bananas earns its iPad slot because a shopping list with sections is more useful at a wider canvas than at a narrow one; the iPad design adapts the same section list to the larger area.
The app has handoff value with iPhone or Mac. Apple Notes, Reminders, and Mail all earn iPad slots because the user expects continuity. Return’s meditation history on iPad earns its slot for the same reason: the user starts on iPhone, glances at iPad while the timer runs.
The app has an iPad-native input affordance. Apple Pencil for sketching/handwriting. Multi-finger gestures on a larger surface. Stage Manager workflows that benefit from tile-based layout. If the affordance does not exist on iPhone, the iPad target earns its place.
The iPad-no signals:
Single-column content with no extra value at scale. A meditation timer’s primary view is a count and a button. iPad makes both bigger; that’s not a feature. A quick-log hydration tracker is the same; the wider screen does not change what the user does during a five-second logging session.
Apps that depend on iPhone-specific hardware (Lock Screen, Dynamic Island, ProMotion in specific iPhone-only configurations, certain camera formats). Those design assumptions translate poorly; either rebuild the design or skip the target.
Apps where the user already has a better destination on Mac. A code editor on iPad without keyboard support is a crippled version of the Mac app. Skip the target unless the design earns its place on the iPad’s specific input model.
Mac Is Window, Menu Bar, And Keyboard Idioms
A SwiftUI app shipped to Mac via Mac Catalyst or "Designed for iPad" runs on macOS without producing a real Mac app. A real Mac app honors window resizing semantics, the menu bar (the .commands scene modifier with CommandMenu in SwiftUI), keyboard shortcuts, the macOS file picker, drag-and-drop with the Finder, and Mac-native sharing. Skipping any of those produces an iPad-app-in-a-window experience that Mac users notice and judge.
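A sketch of the menu-bar and keyboard idioms, assuming a hypothetical app and a placeholder new-list action: `CommandMenu` adds a top-level menu on macOS and `.keyboardShortcut` wires Cmd+N.

```swift
import SwiftUI

// Sketch: Mac idioms in SwiftUI. The app name and action are
// illustrative, not from the post.
@main
struct GroceryApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .commands {
            CommandMenu("List") {
                Button("New List") { /* create a list here */ }
                    .keyboardShortcut("n", modifiers: .command)
            }
        }
    }
}

struct ContentView: View {
    var body: some View {
        Text("Shopping list")
    }
}
```

On iPhone the .commands block is inert; on Mac it is the difference between a window and a Mac app.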
Mac-yes signals:
The user spends time in the app and would benefit from it being a real desktop window. Get Bananas on Mac earns its slot because users editing a long shopping list at a desk benefit from a real window with keyboard navigation. Return on Mac earns it because users who want a meditation timer running on their work machine benefit from a real menubar app or a window pinned to a specific corner.
Multi-window or multi-document workflows. A code editor with split panes. A photo editor with reference images side-by-side. A spreadsheet. None of these survive on iPad or iPhone in their proper form; Mac is the right platform.
Keyboard-driven interaction. A SwiftUI app on Mac that ignores the keyboard is a Mac app in name only. Cmd+N for new, Cmd+W for close, Tab for focus traversal, arrow keys for selection. The cost is per-app: every Mac target needs a thought-through keyboard surface.
Mac-no signals:
The app is a small, single-task tool that doesn’t benefit from a window. A morning timer that the user runs once a day for five minutes does not need a Mac target. The user can run it on the iPhone next to the Mac.
Apps where the iPhone-shaped UI fundamentally doesn’t translate. Camera apps. Apple Watch companion apps. Apps where the input model is touch-first and the keyboard/mouse equivalent would be worse than touching the phone.
The team can’t commit to ongoing Mac-specific maintenance. Mac releases get scrutinized differently than iPhone releases. macOS update cycles, signing for Gatekeeper, notarization, the App Store review for Mac specifically, each of these adds release-cycle work the team has to budget for. If the team won’t budget it, skip the target.
Apple Watch Is A Runtime Contract
The Watch is the platform where “ship to it” is an actively misleading instruction. The Watch’s runtime model, covered in detail in watchOS Runtime Is a Contract, Not a Background Task, is fundamentally different from iOS’s: the wrist drops, the system suspends the app, and only specific session types (mindfulness, workout-processing, alarm, etc.) can run after suspension. A Watch target without a runtime story is broken on the second use.
Watch-yes signals:
The app has a session shape the watchOS runtime model recognizes. Meditation timers (mindfulness session). Fitness apps (workout-processing). Alarm clocks (alarm). Navigation (the navigation session type). Return ships on Watch because meditation maps cleanly to mindfulness; the Watch app keeps the timer running through wrist-drop because the runtime contract permits it.
The user genuinely wants the wrist as the input surface. A heart-rate viewer during exercise. A timer the user starts without pulling out the phone. A complication that surfaces information at a glance. Get Bananas ships on Watch as a fast list-checking surface paired with the iPhone canonical store; a workout app like Reps earns its Watch target for the same reason, because logging a set on the wrist is faster than fishing the phone out of a pocket.
The companion-app value is real. Some Watch apps exist primarily as displays for iPhone-driven data (e.g., a recipe timer where the iPhone runs the timer and the Watch shows the count remaining). Those earn their slot only if the cross-device sync is built honestly (covered in Single Source of Truth: SwiftData + MCP + iCloud) and the Watch view does real work beyond mirroring.
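The session shape above can be sketched with WKExtendedRuntimeSession. The session type itself (mindfulness, smart alarm, and so on) is declared in the Watch target's configuration, not in code; this controller, with illustrative names, only starts and observes the session.

```swift
import WatchKit

// Sketch: a meditation timer that survives wrist-drop because the
// extended runtime session keeps the app running after suspension.
final class MeditationSessionController: NSObject, WKExtendedRuntimeSessionDelegate {
    private var session: WKExtendedRuntimeSession?

    func begin() {
        let session = WKExtendedRuntimeSession()
        session.delegate = self
        session.start()            // the runtime contract kicks in here
        self.session = session
    }

    func extendedRuntimeSessionDidStart(_ session: WKExtendedRuntimeSession) {
        // Start the timer model.
    }

    func extendedRuntimeSessionWillExpire(_ session: WKExtendedRuntimeSession) {
        // Warn the user the session is about to end.
    }

    func extendedRuntimeSession(_ session: WKExtendedRuntimeSession,
                                didInvalidateWith reason: WKExtendedRuntimeSessionInvalidationReason,
                                error: Error?) {
        self.session = nil
    }
}
```

An app with no session type has nothing to put in begin(); that is the broken-on-second-use failure the section describes.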
Watch-no signals:
The app has no session type it can claim. A reading app on the Watch is a Watch app in name only. The user can’t read a book on a wrist; the runtime model doesn’t extend the session. Skip.
The team can’t commit to watchOS-specific debugging. Watch debugging is uniquely painful: simulator behavior diverges from real-device behavior in ways that only surface on a real Apple Watch under wrist-drop conditions. If the team doesn’t have hardware and willingness to test on it, the Watch target ships broken.
The data model doesn’t fit a 1MB cross-device sync envelope. Cross-device sync from iPhone to Watch usually goes through NSUbiquitousKeyValueStore (covered in the multi-platform shipping post). The store allows 1 MB total across all values, up to 1 MB per value, and 1024 keys. If the app’s session state can’t fit in that envelope, the Watch target needs a different sync architecture, which is real engineering investment.
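A sketch of session state inside that envelope, with illustrative key names: a single timestamp per timer fits trivially under the limits, and the external-change notification is how the Watch learns the iPhone moved the state.

```swift
import Foundation

// Sketch: cross-device timer state through NSUbiquitousKeyValueStore.
// Key names are illustrative, not from the post.
struct TimerStateSync {
    private let store = NSUbiquitousKeyValueStore.default

    func save(endDate: Date) {
        store.set(endDate.timeIntervalSince1970, forKey: "timer.endDate")
        store.synchronize() // hint the system to push soon
    }

    func loadEndDate() -> Date? {
        let raw = store.double(forKey: "timer.endDate")
        return raw > 0 ? Date(timeIntervalSince1970: raw) : nil
    }

    // Observe changes pushed from another device.
    func observe(_ handler: @escaping () -> Void) -> NSObjectProtocol {
        NotificationCenter.default.addObserver(
            forName: NSUbiquitousKeyValueStore.didChangeExternallyNotification,
            object: store,
            queue: .main
        ) { _ in handler() }
    }
}
```

State that does not compress to this shape (full workout histories, document bodies) is the signal that the Watch target needs different sync architecture.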
Vision Is The Spatial Mental Model
Vision Pro tempts apps to ship as a flat panel hovering in 3D space. That panel is a SwiftUI window, and SwiftUI on visionOS makes shipping it a one-click operation. The panel is a worse iPad. The platform’s real value comes from the spatial mental model, covered in RealityKit and the Spatial Mental Model, where content lives in the room rather than in the panel.
Vision-yes signals:
The app has 3D content that benefits from being in the room. A virtual sculpture the user can walk around. A tape measure laid along a real wall. A workout coach that projects form cues onto a mirror image of the user’s body. Most cluster apps that appear on visionOS do so via “Designed for iPad” compatibility rather than a native visionOS target; that compatibility is fine for the user but it does not make the app a Vision-native experience. The post on RealityKit’s spatial mental model argues that earning the platform means more than running on it.
Hand-tracking is the right input model. Pinch-to-grab, two-handed scaling, drawing in mid-air. visionOS gives an input affordance no other platform has; the apps that earn their visionOS slot lean into it.
The app handles spatial-specific surfaces (volumetric windows, immersive spaces, ornaments). Productivity apps that just float their iPhone UI on Vision are visible noise on the user’s first day with the device. The apps that make the user keep coming back are the ones that exploit the spatial surface.
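A minimal sketch of content-in-the-room rather than content-in-a-panel: a RealityView hosting a model entity the user can walk around. The sphere and its material are placeholders for real spatial content.

```swift
import SwiftUI
import RealityKit

// Sketch: a simple 3D entity placed into the scene instead of a
// flat panel. Names and geometry are illustrative.
struct SculptureView: View {
    var body: some View {
        RealityView { content in
            // Generate a sphere mesh and add it to the RealityKit content.
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.15),
                materials: [SimpleMaterial(color: .orange, isMetallic: false)]
            )
            content.add(sphere)
        }
    }
}
```

The difference between this and a floating SwiftUI window is the whole argument of the section: the entity occupies the room, the panel does not.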
Vision-no signals:
The app is fundamentally panel-shaped and benefits not at all from depth. A note-taking app, a chat app, a settings utility. visionOS will run them; the user has no reason to use them on visionOS instead of iPad. Skip.
The development team has no visionOS-specific testing. visionOS is the platform with the smallest install base and the most fragile patterns; testing a Vision target without a real device is uniquely difficult, more so than the watchOS case.
Privacy and presence concerns dominate. Health-disclosure apps. Sensitive workplace tools. The visionOS device is shared between household members in a way the iPhone is not; an app that surfaces private information needs a different posture there than on iPhone.
Apple TV Is The Focus Engine
TV apps are driven by the Siri Remote’s focus engine: the user moves a focus highlight with the remote, presses to select, and never sees a touch interaction. SwiftUI on tvOS supports this through the .focusable(...) modifier, the @FocusState property wrapper, and .focused(...) for state binding, but every TV app needs a focus design from scratch. The iPhone tap-and-scroll model does not translate.
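A sketch of that focus design, with illustrative item names: @FocusState tracks which cell the Siri Remote highlight is on, .focused binds each cell to it, and the visual response to focus is the app's responsibility.

```swift
import SwiftUI

// Sketch: a tvOS grid driven by the focus engine.
struct RemoteGridView: View {
    @FocusState private var focusedItem: Int?
    let items = Array(0..<6)

    var body: some View {
        LazyVGrid(columns: [GridItem(), GridItem(), GridItem()]) {
            ForEach(items, id: \.self) { item in
                Button("Item \(item)") { /* select */ }
                    .focused($focusedItem, equals: item)
                    // Visible focus feedback: the highlighted cell grows.
                    .scaleEffect(focusedItem == item ? 1.1 : 1.0)
            }
        }
    }
}
```

None of this exists in the iPhone layout; the focus surface is designed per-screen, which is the per-target cost the section describes.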
TV-yes signals:
The app is for content consumption at TV-watching distance. Streaming video (Apple TV+, Netflix). Photo slideshows. Family game apps the user controls with the remote. The user is on a couch, the screen is far away, the input is sparse, TV is the right platform.
The app has a “lean back” use case the iPhone doesn’t. Workout videos the user follows along with. Cooking apps the user references while at the stove. Bedtime stories the user listens to while falling asleep. None of these are well-served by a phone propped up on the coffee table.
The interaction model fits sparse, focus-driven navigation. A list of items the user picks one at a time. A grid of options. A linear playback flow. Anything more complex than that (multi-touch gestures, fine-grained text input, drag-and-drop) does not work on tvOS and means the target is wrong.
TV-no signals:
The app needs text input. TV text entry through the remote is one of the worst input models on any Apple platform. If the app needs anything more than a search box, skip TV.
The app’s value depends on the user’s hands being free for other tasks. Fitness tracking. Health monitoring. Real-time collaboration. The TV is a screen, not a wearable.
The maintenance cost is too high for the install base. tvOS has a small install base relative to iOS. The development cost is real (focus design, separate testing, App Store review for tvOS). For most apps, the math doesn’t justify the slot. A meditation app earns an Apple TV slot only with a real “leave it on the screen at low brightness while I sit on the couch” mode that the use case actually rewards; without that mode, even an app whose timer maps cleanly to TV’s lean-back pattern doesn’t deserve the maintenance bill.
The Matrix In Practice
Each cluster app’s actual matrix:
| App | Status | Targets | Why each slot |
|---|---|---|---|
| Return (meditation) | Shipped | iPhone, iPad, Mac, Watch, Vision, TV | Mindfulness session on Watch; iPad and Mac as desk companions; visionOS for immersive mode; TV for couch-mode lean-back. Six platforms only because each one earns it. |
| Get Bananas (shopping) | Shipped | iPhone, iPad, Mac, Watch | iPad and Mac for desk editing; Watch as fast on-the-wrist list checking paired with the iPhone canonical store. |
| Reps (workouts) | Pre-release | iPhone, iPad, Mac, Vision, Watch (compiled) | Wrist input is the value prop for set logging; the larger surfaces compile but the ship target is likely to narrow before launch. |
| Water (hydration) | Pre-release | iPhone, iPad, Mac, Vision (compiled) | Quick logging has no obvious benefit at scale; the ship target will narrow toward iPhone-first. |
| Ace Citizenship (study tool) | Shipped | iPhone | Study sessions are phone-shaped; iPad and Mac targets deferred until the user value is real. |
| Tappy Color (kid’s color game) | Shipped | iPhone | Tap-target game; doesn’t translate to wrist or remote. |
The pattern: each row is a deliberate cut as much as a deliberate addition. Return reaches six platforms because each one is justified through a specific user-value test; the iPhone-only apps stay there because nothing else is. Same toolchain, different products, different matrices.
Three Architecture Decisions That Make The Matrix Sustainable
Three patterns that keep multi-platform apps from collapsing under their own weight:
A shared core isolates platform-specific surfaces behind compile-time conditionals (#if os(...), #if canImport(WatchKit)). The core domain logic (data models, business rules, sync) compiles everywhere. Platform-specific UI lives behind the conditionals. SwiftUI’s @available(iOS 26.0, macOS 26.0, ...) covers most cases; surfaces that genuinely diverge (a Mac menu bar that has no iPhone equivalent, a Watch complication that has no iPad equivalent) get their own files in target-specific groups.
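A sketch of the split, with an illustrative `SessionTimer` model: the domain type compiles on every platform, while the surface branches at compile time per platform.

```swift
import SwiftUI

// Sketch: shared domain core, platform-conditional surface.
struct SessionTimer {
    var remaining: TimeInterval   // pure domain logic, compiles everywhere
}

struct TimerSurface: View {
    let timer: SessionTimer

    var body: some View {
        #if os(watchOS)
        // Wrist surface: one glanceable number.
        Text(timer.remaining, format: .number)
            .font(.largeTitle)
        #elseif os(macOS)
        // Desk surface: number plus a keyboard-reachable control.
        VStack {
            Text(timer.remaining, format: .number)
            Button("Reset") { /* reset the timer */ }
                .keyboardShortcut("r", modifiers: .command)
        }
        #else
        // iPhone/iPad default.
        Text(timer.remaining, format: .number)
        #endif
    }
}
```

Surfaces that diverge more than a branch can express move out of the conditional and into target-specific files, as the paragraph above says.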
Cross-device sync goes through one substrate, chosen per domain. NSUbiquitousKeyValueStore for small session-level state across devices (Return uses this for cross-device timer state). iCloud Drive JSON files for cross-process bridges (Get Bananas uses this with its MCP server, covered in Single Source of Truth). SwiftData for in-process state. Mixing the substrates per-domain is fine; using two substrates for the same domain is the failure mode that produces drift.
Each platform has explicit refusal patterns documented per-app. “We will not ship to TV unless the use case has a lean-back mode.” “We will not ship to Vision unless the app uses RealityKit, not a panel.” “We will not ship to Watch without a session type.” The refusals are project decisions captured per-app and applied consistently, in the spirit of What I Refuse To Write About.
When Adding Platforms Is Wrong
Three cases where the easier path is the wrong one.
The team adds a target because the toolchain makes it easy. Xcode’s “duplicate target” wizard makes adding a Mac or visionOS target a four-click operation. The four clicks do not include design review, accessibility audit, App Store screenshot creation, ongoing release coordination, or per-platform testing. A target shipped from the wizard without that work ships broken on day one.
The team treats target count as a status signal. “We ship on five Apple platforms” sounds impressive in a launch tweet. The launch tweet doesn’t run on user devices; the apps do. A two-platform app that nails both is a better product than a five-platform app that’s stretched.
The team underestimates ongoing maintenance. Each Apple platform releases major OS updates yearly. A five-platform app has five sets of release notes to react to, five sets of behavior changes to test, five sets of App Store metadata to keep current. The cost compounds; teams that ship to all five without the bandwidth to maintain them produce slowly-decaying products on three of those platforms.
What This Pattern Means For Apps Shipping On iOS 26+
Three takeaways.
- Platform inclusion is a product decision before it is an engineering one. The Xcode checkbox is the engineering side; the user-value test is the product side. Without a clear answer to “does the user genuinely benefit from running this app on that platform,” the target gets cut.
- Each platform brings specific obligations: size class (iPad), window/menu/keyboard (Mac), runtime contract (Watch), spatial model (Vision), focus engine (TV). Skipping the obligations produces an iPad-app-on-Mac, a phone-app-on-Vision, an iPhone-app-on-Watch, all visible failures the platform’s actual users notice.
- Most apps should ship one to three platforms. Four to six is uncommon and earned only when each platform genuinely adds user value. The cluster’s apps span one to six; the six-platform case (Return) is the exception that proves the rule, and every additional platform was a separate product decision with its own user-value test.
The full Apple Ecosystem cluster: typed App Intents for the Apple Intelligence surface; MCP servers for the agent surface; the routing question between them; Foundation Models for in-app on-device LLM features; the runtime vs tooling LLM distinction; the three surfaces synthesis; the single source of truth pattern; Two MCP Servers for the Xcode integration; hooks for Apple development; Live Activities; the watchOS runtime contract; SwiftUI internals; RealityKit’s spatial mental model; SwiftData schema discipline; Liquid Glass patterns; multi-platform shipping; what I refuse to write about. The hub is at the Apple Ecosystem Series. For broader iOS-with-AI-agents context, see the iOS Agent Development guide.
FAQ
How do I decide whether to add an Apple platform target?
Ask whether the user genuinely benefits from running the app on that platform, not whether the toolchain permits it. The Xcode checkbox is the technical side; the user-value test is the product side. Each platform brings specific obligations (size class, window/menu, runtime contract, spatial model, focus engine) and specific maintenance cost. If the answer is “it works there too,” the right call is usually to skip the target.
Should I ship to all six Apple platforms?
Usually no. Most apps benefit from one to three platforms. Four to six is uncommon and earned only when each platform genuinely adds user value (Return reaches all six because mindfulness session fits Watch, the timer fits iPad and Mac as desk companions, visionOS fits an immersive mode, and TV fits a lean-back couch use case). For most apps, tvOS’s interaction model and visionOS’s spatial requirements don’t fit, and the right call is to skip those targets.
What’s the most common mistake when adding an iPad target?
Treating iPad as “iPhone with more pixels.” A SwiftUI iPhone app dropped onto iPad without size-class adaptation produces a stretched single-column UI that iPad users immediately judge as half-effort. The right pattern is to read @Environment(\.horizontalSizeClass), adapt the layout to the larger canvas (two columns where it makes sense via NavigationSplitView, otherwise a single list with comfortable spacing), and consider Apple Pencil and multi-finger affordances as iPad-specific value.
Why is Apple Watch so different from the other platforms?
watchOS does not have iOS’s background-execution model. The wrist drops, the system suspends the app, and only specific session types (mindfulness, workout-processing, alarm, etc.) can run after suspension. Apps without a clean session type produce broken-on-second-use experiences. The cluster’s watchOS runtime post covers the contract in detail.
How does cross-device sync work across the platform matrix?
Three substrates: NSUbiquitousKeyValueStore for small key-value state (settings, last-selected-tab, cross-device timer state), iCloud Drive files for cross-process bridges (the Get Bananas + MCP server pattern), SwiftData for in-process state. Pick one substrate per domain; mixing two for the same domain produces drift. The cluster’s single source of truth post walks the decision matrix.