App Intents Are Apple's New API to Your App
On the morning of February 8, 2026, I asked Siri to log 8 oz of water from my Apple Watch while my hands were under the kitchen sink. The water logged. The watch dialog said 32 oz remaining. I had not touched a screen.1
Eleven weeks earlier I had added a single Swift file to Water, my hydration-tracking iOS app: LogWaterIntent.swift, 80 lines of AppIntent plus an AppShortcutsProvider declaring three Siri phrase variants. That file is now the hottest API surface I own.2
Here is the part that took me a while to internalize. App Intents are not a Siri feature. They are the contract third-party apps sign with Apple Intelligence, the system AI surfaces Apple began rolling out in iOS 18 and continued building through iOS 26.3 If you ship an iOS app and you are still treating App Intents as a “nice to have” voice feature, you are misreading what Apple has built. App Intents are the API that lets Apple’s AI act as your app on behalf of the user. Everything else (Siri, Spotlight, Shortcuts, Apple Intelligence summaries, the Watch and Vision Pro surfaces) is downstream of that contract. Foundation Models, the on-device LLM that shipped in iOS 26, exposes a separate Tool protocol for in-app tool calling; it runs parallel to App Intents rather than through them.
TL;DR
- App Intents declare what your app can do in a typed, structured way that Apple’s AI can call directly. They are Apple’s tool-use API for third-party apps.
- One real production example: LogWaterIntent in Water. 80 lines, full SwiftData write, HealthKit sync, locale-aware unit conversion, structured Siri dialog response.
- iOS 26 added Foundation Models, Apple's on-device LLM. Foundation Models exposes its own Tool protocol for in-app tool use; App Intents remain the canonical surface that Siri / Spotlight / Apple Intelligence call across apps. Same direction, two parallel contracts.
- An app without App Intents in 2026 is invisible to Apple Intelligence. The AI fabric routes through your declared intents or it routes around your app to a competitor.
- Apple has been telling us this for three years. The naming (App Intents, App Shortcuts, Apple Intelligence) is on purpose. The contract goes one level up the stack each WWDC.
What An App Intent Actually Is
The full source of LogWaterIntent as it shipped in commit e398c58 on February 8, 2026:2
import AppIntents
import SwiftData
struct LogWaterIntent: AppIntent {
static var title: LocalizedStringResource = "Log Water"
static var description: IntentDescription = "Log a glass of water to your daily intake"
@Parameter(title: "Amount", default: 8)
var amount: Int
static var parameterSummary: some ParameterSummary {
Summary("Log \(\.$amount) oz of water")
}
func perform() async throws -> some IntentResult & ProvidesDialog {
let container = try ModelContainer(for: WaterEntry.self, DailyLog.self, UserSettings.self)
let context = ModelContext(container)
let settingsDescriptor = FetchDescriptor<UserSettings>(
predicate: #Predicate { $0.id == "user-settings" }
)
let settings = try context.fetch(settingsDescriptor).first ?? UserSettings()
let amountMl: Double
if settings.unitSystem == .imperial {
amountMl = Double(amount) * 29.5735
} else {
amountMl = Double(amount)
}
let todayKey = DailyLog.todayKey()
let logDescriptor = FetchDescriptor<DailyLog>(
predicate: #Predicate { $0.dateKey == todayKey }
)
let log: DailyLog
if let existing = try context.fetch(logDescriptor).first {
log = existing
} else {
log = DailyLog(date: .now, goalAmount: settings.dailyGoal)
context.insert(log)
}
let entry = WaterEntry(amount: amountMl)
log.entries.append(entry)
try context.save()
if settings.healthKitEnabled {
try? await HealthKitService.shared.logWater(amount: amountMl, date: entry.timestamp)
}
let unit = settings.unitSystem == .imperial ? "oz" : "mL"
let totalDisplay = settings.formatAmount(log.totalAmount)
return .result(dialog: "Logged \(amount) \(unit). Today's total: \(totalDisplay)")
}
}
struct WaterShortcuts: AppShortcutsProvider {
static var appShortcuts: [AppShortcut] {
AppShortcut(
intent: LogWaterIntent(),
phrases: [
"Log water in \(.applicationName)",
"Add water in \(.applicationName)",
"Drink water in \(.applicationName)",
],
shortTitle: "Log Water",
systemImageName: "drop.fill"
)
}
}
(Water’s current production version of this file iterates the dialog further with a goal-reached/remaining-amount conditional. The Feb 8 ship code above is the one I tested at the kitchen sink.)
Four things here are worth naming because most "App Intents tutorials" gloss over them.
The @Parameter is the schema. Apple’s AI sees amount: Int with a default of 8. When Siri parses “log 12 oz of water” it produces LogWaterIntent(amount: 12) and calls perform(). There is no string parsing on my side. The type system is the schema.5
parameterSummary is the natural-language reflection of the parameter. Apple uses it to render the action in Shortcuts UI, in dialog, and increasingly in Apple Intelligence’s confirmation panels. The summary is read aloud back to the user. Get it wrong and the user hears an ugly sentence; get it right and the surface feels native.6
perform() returns IntentResult & ProvidesDialog. That is the structured return: the AI surface gets back not just success/failure but a dialog string the user hears. Apple is increasingly expecting ProvidesDialog, ProvidesView, or ReturnsValue so the result composes into Siri, Spotlight, the Watch, and (in iOS 26) Apple Intelligence’s response chain.7
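For results that feed further steps (a Shortcuts chain, an Apple Intelligence follow-up), the same perform() can also return a typed value alongside the dialog. A minimal sketch adapting the shipped intent's ending — the running-total value here is illustrative, not the production code:
func perform() async throws -> some IntentResult & ReturnsValue<Int> & ProvidesDialog {
    // ... same SwiftData write as LogWaterIntent above ...
    let totalOz = 32  // illustrative: today's running total in the user's unit
    return .result(value: totalOz, dialog: "Logged 8 oz. Today's total: \(totalOz) oz.")
}
The value is what a downstream Shortcuts action or system surface can consume; the dialog is what the user hears.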
The AppShortcutsProvider block at the bottom is what registers the Siri phrases. The \(.applicationName) token is where Siri inserts “Water” automatically. Three phrase variants with the same intent give Apple’s NL parser more room to match user phrasing without you maintaining a phrase dictionary. The systemImageName is a real SF Symbols name; that is how Spotlight, Shortcuts, and Apple Intelligence render the action’s icon.
Why This Is The Most Important iOS API Since SwiftUI
iOS APIs come in two shapes. Some are about how your app draws itself (UIKit, SwiftUI, Metal). Some are about how your app integrates with the system (URL schemes, Universal Links, Widgets). App Intents are a third shape: they are how Apple’s AI uses your app.
The progression is worth tracing.
- iOS 10 (2016) introduced SiriKit Intents (INIntent), the first time third-party apps could be addressed by voice. The surface was narrow: a fixed list of domains (messaging, payments, ride-booking) with strict schemas.8
- iOS 12 (2018) broadened the surface with Siri Shortcuts: any app could donate an NSUserActivity or INIntent and hope Siri suggested it.
- iOS 13 (2019) added in-app intent handling so apps could respond to shortcut invocations without backgrounding to system Siri UI.
- iOS 16 (2022) introduced the App Intents framework: typed, declarative, with @Parameter and AppShortcutsProvider. The INIntent predecessor was effectively superseded for new development.9
- iOS 18 (2024) introduced Apple Intelligence and started routing Siri requests through App Intents wherever possible. Apple Intelligence's "personal context" feature reads from App Entities (the data version of App Intents).10
- iOS 26 (2025) introduced the Foundation Models framework, Apple's on-device LLM. Foundation Models exposes a separate Tool protocol for in-app tool calling. App Intents remain the canonical cross-app surface for Apple Intelligence, while Tool is the in-app surface for direct LLM calls. The two contracts run parallel.4
The contract has moved one level up the stack every release. Originally the consumer of an App Intent was a person tapping Shortcuts. Then Siri voice. Then Spotlight. Then Apple Intelligence summaries. Now Apple Intelligence's LLM-backed system surfaces use them to act on user requests. The App Intent surface you ship in 2026 is the one Apple Intelligence will be calling on iOS 27, 28, and 29.
The pattern above is what I mean when I say App Intents are not a Siri feature. They are the structured-tool-use API for the entire Apple AI fabric. SwiftUI was the most important UI API because it became the only way to write an app for visionOS, watchOS 10+, and iOS 17+. App Intents are tracking the same arc on the AI side: the surface where Apple is putting all its bets.
What Changes Now That Foundation Models Has Shipped
Foundation Models is the framework that ships on every Apple Intelligence-eligible device. The hardware cutoff is the same Apple Intelligence list: iPhone 15 Pro and 15 Pro Max (A17 Pro), iPhone 16 line, iPhone 17 line, iPhone Air, iPhone 17e, iPad Pro with M1 or later, iPad Air with M1 or later, iPad mini with A17 Pro, Vision Pro with M2 or later, and Mac with M1 or later. Notably absent: base iPhone 15 / 15 Plus.4,12
The implication: if Apple’s system surfaces (Siri, Spotlight, Apple Intelligence) call your app at all, they call it through App Intents and App Entities. There is no setSystemPrompt(...) API for third-party apps in the system AI fabric. There is the intent registry. Foundation Models adds a parallel in-app Tool surface for developers who want their own on-device LLM features. The cross-app contract (the one Apple Intelligence and Siri use to find your app) runs through App Intents.
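For contrast, the in-app half looks roughly like this. The shape follows Apple's WWDC25 Foundation Models material; treat @Generable, @Guide, ToolOutput, and the exact Tool requirements as indicative rather than authoritative:
import FoundationModels

// Sketch of a Foundation Models Tool — the in-app contract, parallel to the
// App Intent. Verify exact signatures against the shipped SDK.
struct LogWaterTool: Tool {
    let name = "logWater"
    let description = "Log a water amount, in ounces, to the user's daily intake"

    @Generable
    struct Arguments {
        @Guide(description: "The amount of water in ounces")
        let amount: Int
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // Same SwiftData write LogWaterIntent performs, omitted here.
        ToolOutput("Logged \(arguments.amount) oz")
    }
}

// An in-app session that can call the tool:
// let session = LanguageModelSession(tools: [LogWaterTool()])
Note the symmetry with the App Intent: typed arguments, a natural-language description, a structured result. Different caller, same contract shape.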
Three concrete consequences for app developers:
An app without a relevant App Intent is not reachable from a Siri voice command in its category. Apple Intelligence routes phrases like "Hey Siri, log my water" to apps that have declared a matching intent. I shipped Water's intent in February 2026. My read of the framework direction: hydration apps that ship the intent in 2027 will be entering a market where the routing weights have already accumulated toward early movers. The same logic applies to shopping lists, workout logging, calendar entries, and photo searches. I expect the first-mover advantage on intent declarations to compound the way it has for other Apple platform-bet APIs (HealthKit categories, Spotlight rich results, Live Activities tokens).
Apple Intelligence personalization reads from App Entities, not just intents. An AppEntity declares "this app has data of this shape." When the user asks "what was the last book I added to my reading list," Apple Intelligence searches every AppEntity matching Book across every installed app. If your app has a reading list but no BookEntity declared, your data is invisible to Apple's AI surfaces: Apple Intelligence cannot retrieve or reference it.11
The IntentResult & ProvidesDialog return shape is increasingly important. Apple Intelligence is composing intent results into longer responses across Siri, Spotlight, and the Watch. A perform() that just returns success without a structured dialog is harder for the system to compose into a coherent reply. ProvidesDialog and ProvidesView are not optional politeness; they are how your action becomes a citation in the user’s AI surface.
What I Would Build Differently
Eleven weeks of production logs in Water tell me three things I should have done sooner.
Ship more intents than you think you need. I shipped one. I should have shipped four: LogWaterIntent, CheckTodaysProgressIntent, AdjustGoalIntent, ShowHistoryIntent. Each maps to a Siri phrase users actually try; in my logs, "how much water have I had today" routed to Apple's generic AI rather than to my app's data. Each missed intent is a query Apple Intelligence routes around me.
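A hedged sketch of the CheckTodaysProgressIntent I should have shipped, reusing the SwiftData types from LogWaterIntent above. The goal arithmetic assumes DailyLog.goalAmount and totalAmount are both stored in mL, which is how the shipped code uses them:
import AppIntents
import SwiftData

struct CheckTodaysProgressIntent: AppIntent {
    static var title: LocalizedStringResource = "Check Today's Progress"
    static var description: IntentDescription = "Check how much water you've logged today"

    func perform() async throws -> some IntentResult & ProvidesDialog {
        let container = try ModelContainer(for: WaterEntry.self, DailyLog.self, UserSettings.self)
        let context = ModelContext(container)
        let settings = try context.fetch(FetchDescriptor<UserSettings>(
            predicate: #Predicate { $0.id == "user-settings" }
        )).first ?? UserSettings()
        let todayKey = DailyLog.todayKey()
        guard let log = try context.fetch(FetchDescriptor<DailyLog>(
            predicate: #Predicate { $0.dateKey == todayKey }
        )).first else {
            return .result(dialog: "Nothing logged yet today.")
        }
        // Fact-led dialog: total first, remaining second.
        let remaining = max(0, log.goalAmount - log.totalAmount)
        return .result(dialog: "\(settings.formatAmount(log.totalAmount)) logged. \(settings.formatAmount(remaining)) to go.")
    }
}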
The dialog string is not the body of an email. I had ProvidesDialog from the start, but my early dialog was prose. The user hearing it through CarPlay or AirPods needs short, concrete, fact-led structure: “8 oz logged. 32 oz to go.” The Watch surface in particular truncates aggressively. Conversational dialog is a worse user experience than confident-fact dialog. I rewrote mine in week 4.2
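In code terms the week-4 rewrite was roughly this shape (the "before" string is illustrative, not the literal week-1 text):
// Before: conversational prose. CarPlay reads it slowly; the Watch truncates it.
return .result(dialog: "Nice! I've logged your water and you're making great progress toward today's goal.")

// After: fact-led and short. Survives every surface.
return .result(dialog: "8 oz logged. 32 oz to go.")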
App Entities matter more than I thought. I have a WaterEntry SwiftData model. I should also declare a WaterEntryEntity: AppEntity plus its companion WaterEntryQuery: EntityQuery so Apple Intelligence can answer "show me when I drank water yesterday." The minimal bridging, with the fetch glue sketched in (I am assuming WaterEntry exposes an id, a mL amount, and a timestamp, matching what LogWaterIntent writes):11
import AppIntents
import SwiftData

struct WaterEntryEntity: AppEntity {
static var typeDisplayRepresentation: TypeDisplayRepresentation = "Water Entry"
static var defaultQuery = WaterEntryQuery()
var id: UUID
var displayRepresentation: DisplayRepresentation {
DisplayRepresentation(title: "\(amount) oz at \(timestamp.formatted())")
}
var amount: Int
var timestamp: Date
}
struct WaterEntryQuery: EntityQuery {
    // Assumed fields: WaterEntry.id (UUID), .amount (Double, mL), .timestamp (Date).
    private func entity(for entry: WaterEntry) -> WaterEntryEntity {
        WaterEntryEntity(id: entry.id, amount: Int((entry.amount / 29.5735).rounded()), timestamp: entry.timestamp)
    }

    func entities(for identifiers: [UUID]) async throws -> [WaterEntryEntity] {
        // Fetch the specific entries Apple Intelligence asked for by ID.
        let context = ModelContext(try ModelContainer(for: WaterEntry.self, DailyLog.self, UserSettings.self))
        let descriptor = FetchDescriptor<WaterEntry>(predicate: #Predicate { identifiers.contains($0.id) })
        return try context.fetch(descriptor).map(entity(for:))
    }

    func suggestedEntities() async throws -> [WaterEntryEntity] {
        // Recent entries Apple Intelligence can suggest without an explicit query.
        let context = ModelContext(try ModelContainer(for: WaterEntry.self, DailyLog.self, UserSettings.self))
        var descriptor = FetchDescriptor<WaterEntry>(sortBy: [SortDescriptor(\.timestamp, order: .reverse)])
        descriptor.fetchLimit = 5
        return try context.fetch(descriptor).map(entity(for:))
    }
}
Two small Swift types plus the SwiftData fetch glue. To make entries individually surfaceable in Spotlight (so users searching “water” land on the right entry), conform the entity to IndexedEntity and donate index updates on writes. That is what Apple’s Spotlight pipeline expects beyond bare AppEntity exposure.
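A sketch of that Spotlight hookup, assuming the iOS 18-era IndexedEntity conformance and CSSearchableIndex.indexAppEntities bridge (verify the exact signatures against the current SDK):
import AppIntents
import CoreSpotlight

// Assumed sketch: IndexedEntity conformance plus a donation helper called after writes.
extension WaterEntryEntity: IndexedEntity {}

func donateEntriesToSpotlight(_ entries: [WaterEntryEntity]) async {
    // Re-donate after each write so Spotlight's index tracks the store.
    try? await CSSearchableIndex.default().indexAppEntities(entries)
}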
The same shape applies elsewhere in my apps. Get Bananas, my shopping list app, already has a SwiftData @Model ShoppingItem with @Attribute(.unique) var id: UUID, name, amount, section, isChecked, plus a lastModified field for iCloud Drive sync.13 Wrapping it as ShoppingItemEntity: AppEntity and shipping a couple of intents (AddShoppingItem, CheckOffItem, ShowList) would expose the same persistence layer to Apple Intelligence that Get Bananas already exposes to Claude Desktop through its .mcpb MCP server.14 Two LLM ecosystems, two different contracts, same shopping list. That is the parallel-contracts thesis as a single shipped app: the SwiftData model is the data, App Intents are Apple's contract, MCP is Anthropic's contract, and both surfaces operate on the same source of truth.
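The intent half would be small. A hypothetical sketch — the ShoppingItem initializer below is assumed, and the shipped @Model's init may take more fields:
import AppIntents
import SwiftData

// Hypothetical AddShoppingItem intent for Get Bananas — not shipped code.
struct AddShoppingItem: AppIntent {
    static var title: LocalizedStringResource = "Add Shopping Item"
    static var description: IntentDescription = "Add an item to the shopping list"

    @Parameter(title: "Item")
    var name: String

    static var parameterSummary: some ParameterSummary {
        Summary("Add \(\.$name) to the shopping list")
    }

    func perform() async throws -> some IntentResult & ProvidesDialog {
        let container = try ModelContainer(for: ShoppingItem.self)
        let context = ModelContext(container)
        // Assumed initializer; mirrors what the MCP server's add_item tool writes.
        let item = ShoppingItem(name: name)
        context.insert(item)
        try context.save()
        return .result(dialog: "Added \(name) to the list.")
    }
}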
When Not To Ship An App Intent
Refusal is part of the design.
If your app is purely consumption-driven (reading the user’s photos, displaying news, playing audio) with no mutable user state, App Intents may have nothing to expose. Apple’s framework supports OpenIntent (just open the app to a context) but if the only useful action is “open the app,” the intent is overhead. Don’t ship one for the sake of having one.
If the action depends on UI affordances that are hard to abstract (a complex multi-step canvas tool, a 3D editing app), the intent's parameterSummary will degrade to vague pseudo-natural-language that nobody actually says. The Siri phrase "edit my photo with the blur tool at strength 7" is technically possible, but no human will utter it. The intent's surface is a tax with no payoff.
The right rule: an App Intent earns its keep when there is a sentence a user would naturally say that triggers the action. “Log 8 oz of water” is that sentence. “Apply Gaussian blur with sigma 2.4 to layer 3” is not. If your app’s actions cluster on the second pattern, intents are not your conversion lever.
The Closing Take
For three years Apple has been signaling that the system AI fabric of iOS goes through App Intents. WWDC 2024 added Apple Intelligence routing through them. WWDC 2025 added Foundation Models alongside as a separate in-app tool-calling surface, leaving App Intents as the cross-app contract Siri / Spotlight / Apple Intelligence keep using. Every signal points the same direction: the typed, declarative App Intent is the contract third-party apps now sign with the system.
Most iOS apps still treat App Intents as Siri Shortcuts: a feature to ship if you have time. My read is that framing is going to age badly. As Apple Intelligence’s system surfaces extend (already through Siri, Spotlight, Shortcuts, and Apple Intelligence summaries today), apps without declared intents are likely to find themselves outside the routing graph. The first-mover surface, in my experience watching Apple’s other platform bets, compounds.
Water has had LogWaterIntent shipped for eleven weeks. The amount of code that ships an App Intent is small enough to fit in a single file. The cost of not shipping it grows with every Apple Intelligence release.
If you ship an iOS app in 2026 and you have not declared at least one App Intent, your roadmap has a missing line item. Add it.
FAQ
What is an App Intent in iOS development?
An App Intent is a typed, declarative Swift structure that exposes one of your app’s actions to Apple’s system AI surfaces. It declares parameters via @Parameter, a natural-language summary via parameterSummary, and an async perform() body that does the work and returns a structured result. Apple’s Siri, Spotlight, Shortcuts, and Apple Intelligence can call it. Foundation Models (Apple’s on-device LLM) uses a separate Tool protocol for direct in-app tool calls.
How is App Intents different from the older INIntent?
App Intents (introduced iOS 16, 2022) replaced INIntent as Apple’s primary intent framework. The newer framework is fully Swift-native, uses property wrappers like @Parameter, supports type-safe entity queries via AppEntity, and is the surface that Siri, Spotlight, Shortcuts, and Apple Intelligence call. The older INIntent is still supported but receives no new feature work.
Do I need iOS 26 to ship an App Intent?
No. App Intents are available from iOS 16 onward. iOS 26 adds the Foundation Models framework alongside, but the App Intent declarations themselves work on iOS 16+. The example code above uses SwiftData (iOS 17+) so the deployment target depends on what your perform() body imports. Bare App Intents work back to iOS 16; SwiftData-backed ones need iOS 17.
What is the difference between an App Intent and an App Entity?
An App Intent is an action (verb). An App Entity is the data your app knows about (noun). LogWaterIntent is an intent; WaterEntryEntity, which makes WaterEntry queryable, is an entity. Apple Intelligence uses both: intents to take actions, entities to retrieve and reference data in responses.
How do App Intents relate to Foundation Models tool calling?
Foundation Models exposes its own Tool protocol for direct in-app LLM tool calls. App Intents remain the canonical cross-app surface that Apple Intelligence, Siri, and Spotlight call. Same direction (typed, declarative tool use); two parallel contracts. An app that wants to be reachable by system AI surfaces ships App Intents; an app that wants to call its own on-device LLM with custom tools ships Tool conformances. Many apps will ship both.
App Intents are not a feature. They are the contract. The app that ships the intent first gets the surface; the app that ships it later finds the surface already routed elsewhere. Eleven weeks ago I shipped one in Water. The compounding has already started.
References
1. Personal field test, February 8, 2026, ~9:15 AM PT. Recorded as the first end-to-end Siri-to-LogWaterIntent-to-SwiftData write on a paired Apple Watch.
2. Author's Water iOS app, published by 941 Apps (941apps.com). LogWaterIntent.swift shipped in Water 1.4, commit e398c58, on February 8, 2026. The source code excerpt above is the production version as of that initial commit; the dialog string has been iterated since.
3. Apple, "Apple Intelligence Foundation Language Models," machinelearning.apple.com. On-device + Private Cloud Compute hybrid.
4. Apple Developer, "Foundation Models" framework. iOS 26+. LanguageModelSession exposes tool calling through the Tool protocol, separate from the AppIntent protocol used by Siri / Spotlight / Apple Intelligence. The two are parallel contracts in the same direction.
5. Apple Developer, "Creating Your First App Intent." Property-wrapper-based parameter declaration; types are the schema.
6. Apple Developer, "ParameterSummary." Used by Shortcuts UI, Siri dialog, and Apple Intelligence confirmations.
7. Apple Developer, "Returning a value from your intent." ProvidesDialog, ProvidesView, and ReturnsValue shapes.
8. Apple, "Introducing SiriKit," WWDC 2016. SiriKit Intents (INIntent) shipped in iOS 10. Siri Shortcuts followed in iOS 12 (2018) and in-app intent handling in iOS 13 (2019).
9. Apple, "What's new in App Intents," WWDC 2022. Introduction of the typed, declarative App Intents framework.
10. Apple, "Bring your app to Siri," WWDC 2024. Apple Intelligence routing through App Intents and App Entities.
11. Apple Developer, "AppEntity protocol." The data-type version of App Intents; queryable by Apple Intelligence and other system surfaces.
12. Apple, "Apple Intelligence System Requirements." Eligible devices: iPhone 15 Pro and Pro Max (A17 Pro), the iPhone 16 line, the iPhone 17 line, iPhone Air, iPhone 17e, iPad Pro with M1 or later, iPad Air with M1 or later, iPad mini with A17 Pro, Apple Vision Pro with M2 or later, and Mac with M1 or later. Notably absent: base iPhone 15 / 15 Plus. The Foundation Models framework inherits the same hardware gate.
13. Author's Get Bananas, a SwiftUI + SwiftData shopping list app for iOS, macOS, watchOS, and visionOS. The ShoppingItem SwiftData @Model lives in Item.swift: @Attribute(.unique) var id: UUID, name: String, amount: String, section: String, isChecked: Bool, isOptional: Bool, sortOrder: Int, lastModified: Date?. iCloud Drive sync via iCloudBackupManager.
14. Get Bananas ships an MCP (Model Context Protocol) server bundled as get-bananas.mcpb for Claude Desktop. Tools exposed: get_shopping_list, add_item, remove_item, update_item, update_shopping_list. Anthropic's MCP spec: modelcontextprotocol.io.