Guideline 5.1 Data Collection Rejection¶
This is the concrete rejection page for Guideline 5.1 data collection and privacy mismatch outcomes.
Use it when Apple has already signaled that the submitted app, its privacy form, ATT behavior, SDK traffic, or disclosure surfaces do not align closely enough for approval.
If you are still evaluating the next build before submission, use the App Store Privacy and Data Guide first.
What This Page Covers¶
Use this page when you need to determine:
- which declaration-to-runtime mismatch most likely triggered the rejection
- whether the problem is ATT, privacy form integrity, SDK behavior, or policy text drift
- what evidence Apple needs before resubmission
- which adjacent privacy pages to check next
Means¶
Data Declaration vs Runtime Behavior¶
In practice, review risk comes from divergence between four planes of truth:
- App Privacy form in App Store Connect, where you declare data types, linkage, and tracking use.
- ATT prompt behavior in the app, including whether tracking-capable logic executes before user choice.
- SDK transmission behavior at runtime, including third-party libraries, default settings, and fallback code paths.
- Server-side data flows after ingestion, including joins, enrichment pipelines, and downstream sharing.
Many teams control only one or two planes tightly. Rejections emerge when the uncontrolled planes contradict the declared one. Common failures include accurate client declarations with undocumented server enrichment that re-links data, or a clean form declaration with early-launch SDK traffic before ATT status is resolved.
Treat Apple review as a consistency test, not a static questionnaire. A technically correct declaration can still fail if runtime implies broader data use than declared. A privacy-forward implementation can also fail if metadata or policy text lags behind.
A reliable approach is to maintain a single internal privacy ledger per release:
- Each declared data type maps to an originating code path and endpoint.
- Each endpoint maps to storage, retention, and sharing behavior.
- Each third-party SDK maps to initialization timing and gating conditions.
- Each user-facing disclosure maps to the same behavior set.
With this ledger, reviewer follow-up can be answered with evidence instead of interpretation.
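The ledger above can be sketched as a small reconciliation check. This is a hypothetical, language-agnostic model (the `LEDGER` and `ENDPOINTS` structures and all names in them are assumptions for illustration, not any Apple or vendor format): every declared data type must map to at least one originating code path and endpoint, and every referenced endpoint must carry a storage/retention/sharing record.

```python
# Hypothetical per-release privacy ledger: declared data type -> evidence.
LEDGER = {
    "Email Address": {
        "code_paths": ["LoginViewModel.submit"],
        "endpoints": ["api.example.com/auth"],
    },
    "Coarse Location": {
        "code_paths": [],  # nothing maps here -- a ledger gap
        "endpoints": [],
    },
}

# Hypothetical endpoint register: endpoint -> storage/retention/sharing.
ENDPOINTS = {
    "api.example.com/auth": {
        "storage": "accounts-db",
        "retention_days": 365,
        "shared_with": [],
    },
}

def ledger_gaps(ledger, endpoints):
    """Return declared data types that lack runtime evidence, or whose
    endpoints lack a storage/retention/sharing record."""
    gaps = []
    for data_type, entry in ledger.items():
        if not entry["code_paths"] or not entry["endpoints"]:
            gaps.append((data_type, "no originating code path or endpoint"))
            continue
        for ep in entry["endpoints"]:
            if ep not in endpoints:
                gaps.append((data_type, f"endpoint {ep} has no storage/retention record"))
    return gaps
```

Running `ledger_gaps(LEDGER, ENDPOINTS)` before submission surfaces exactly the declarations a reviewer could not be shown evidence for.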
App Privacy Form Integrity Model¶
The App Privacy form is most reliable when handled as an integrity model with three dimensions:
- What data is collected: Account for all relevant collection initiated by the app or embedded SDKs, including telemetry and diagnostics where applicable. The right question is what the shipped app and integrated services actually transmit.
- Whether data is linked to the user: Data is effectively linked when it can reasonably connect to identity, account, or persistent profile context in your systems or partner systems. Teams often under-classify linkage by looking only at one payload and ignoring downstream joins.
- Whether collection is used for tracking: Tracking analysis should include cross-app or cross-company contexts, not just in-app personalization. If an integration contributes to attribution or audience profiling outside your first-party boundary, treat it as tracking risk until proven otherwise.
Common mismatch patterns that trigger scrutiny:
- Declaring a data type as not collected while an SDK sends it under a default config on first launch.
- Declaring data as not linked, but transmitting stable account or device-adjacent identifiers that your backend later joins.
- Declaring no tracking while ad/measurement partners receive identifiers used in cross-property attribution.
- Omitting server-derived inferences that remain tied to app-collected signals.
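The first mismatch pattern above can be caught mechanically by diffing the declaration snapshot against observed first-launch traffic. This is a hedged sketch: the `DECLARED` and `OBSERVED` shapes, and how traffic gets classified into data types, are assumptions about your own tooling, not a defined App Store Connect format.

```python
# Hypothetical declaration snapshot (per data type) vs. observed emissions
# classified from first-launch network traces.
DECLARED = {
    "Device ID": {"collected": False, "linked": False, "tracking": False},
    "Email Address": {"collected": True, "linked": True, "tracking": False},
}

OBSERVED = [
    # (data type, emitting component, destination domain)
    ("Device ID", "AdsSDK", "tracker.example.net"),
    ("Email Address", "app", "api.example.com"),
]

def declaration_mismatches(declared, observed):
    """Flag data types seen at runtime but declared as not collected --
    the classic 'SDK default config on first launch' failure."""
    return [
        (dtype, source, dest)
        for dtype, source, dest in observed
        if not declared.get(dtype, {}).get("collected", False)
    ]
```

Any non-empty result means either the form or the runtime behavior must change before resubmission.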
An integrity model should include explicit ownership:
- Product: user-facing explanation.
- Mobile engineering: runtime emission inventory.
- Backend engineering: joins and downstream sharing map.
- Legal/compliance: declaration language and policy concordance.
Without clear ownership, the form becomes stale between releases and rejection probability rises.
Trigger¶
ATT and Tracking Enforcement¶
ATT enforcement is sequence-sensitive. The core question is whether tracking-capable behavior is gated until authorization is known.
Operational points that matter in review:
- ATT prompt timing: If tracking-related SDK logic initializes before the ATT decision is available, reviewers may interpret this as noncompliant behavior even when your prompt appears later.
- IDFA access: Access to IDFA must align with user authorization state. Any code path assuming availability before consent introduces clear risk.
- Third-party SDK behavior: Some SDKs collect or infer identifiers via configuration flags, optional modules, or remote toggles. Your compliance posture must be based on active runtime configuration, not default documentation.
- Hidden fingerprinting risk: Combining device/network attributes to create a stable identity surrogate can create tracking exposure, even without explicit IDFA usage.
A robust ATT control design usually includes:
- Hard initialization gate for tracking-capable modules.
- Runtime assertions proving gate order.
- Environment parity checks so staging and production configs do not diverge.
- Regression tests for first launch, reinstall, and consent-state transitions.
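The hard initialization gate and its runtime assertions can be modeled as a fail-closed decision function. This is a platform-agnostic sketch, not iOS code: `AttStatus` is a hypothetical enum mirroring the four ATT authorization states, and only an explicit authorized status opens tracking-capable initialization; an unknown or unread state stays closed.

```python
from enum import Enum

class AttStatus(Enum):
    # Mirrors the four ATT authorization states; names are illustrative.
    NOT_DETERMINED = "notDetermined"
    RESTRICTED = "restricted"
    DENIED = "denied"
    AUTHORIZED = "authorized"

def may_init_tracking(status):
    """Fail-closed gate: tracking-capable modules initialize only on an
    explicit AUTHORIZED status. None (state not yet read) stays closed."""
    return status is AttStatus.AUTHORIZED

# Runtime assertions proving gate order, per the control design above.
assert not may_init_tracking(None)                     # unknown -> closed
assert not may_init_tracking(AttStatus.NOT_DETERMINED) # pre-prompt -> closed
assert not may_init_tracking(AttStatus.DENIED)
assert may_init_tracking(AttStatus.AUTHORIZED)
```

The design point is the default: any state that is not affirmatively authorized, including "not yet determined" at early launch, keeps tracking-capable modules uninitialized.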
Teams should also review iOS privacy manifests and required-reason declarations for API usage that may signal broader privacy risk if undocumented. Manifests do not replace App Privacy disclosures, but they add another consistency surface.
Privacy Policy Consistency¶
Your privacy policy is a consistency surface that should match both declared metadata and runtime behavior. Misalignment is a fast path to repeated follow-up.
Required consistency checks:
- Declared vs described: Every material data practice declared in App Store metadata should be reflected in policy language using equivalent scope.
- Described vs observed: Policy claims must remain true under real app execution, including third-party SDK behavior and server-side processing.
- Link reliability: The privacy policy URL in App Store Connect must resolve reliably. Missing URLs, dead links, irrelevant redirects, or geo-blocked content can cause enforcement.
- Version control: Policy updates should be timestamped and traceable to release changes so reviewer questions can be answered with concrete history.
Regional regulatory overlays affect what reviewers expect to see disclosed clearly:
- EU/EEA contexts may require clear treatment of lawful basis and user rights.
- US state regimes can require opt-out language for specific data uses or sharing contexts.
- Other jurisdictions may impose local disclosure and consent constraints.
Apple is not adjudicating every statute in review, but visible omissions in region-relevant disclosures can prolong review cycles. The practical target is one globally coherent policy model, with regional addenda where needed, synchronized with declarations and runtime evidence.
Risk¶
Escalation Pattern¶
Most teams should model 5.1 enforcement as an escalation curve:
- First rejection -> metadata correction: Initial outcomes are usually recoverable through declaration fixes, targeted SDK configuration changes, and an evidence-backed resubmission note.
- Repeat mismatch -> trust erosion: If later submissions show unresolved inconsistencies, review posture shifts from issue-level correction to team-level control confidence. Approval latency usually increases.
- Cross-flag with 2.1 or 4.x: When privacy mismatches coincide with incomplete artifacts, inaccessible flows, or degraded quality, signals can overlap with Guideline 2.1 (App Completeness / Information Needed). In recurring patterns, low-integrity behavior can also interact with 4.x concerns. Cross-flag risk increases when evidence is fragmented or contradictory.
- Potential account-level risk: Persistent unresolved mismatches across versions can shift enforcement from app-level friction to account-level trust concerns.
The control objective is to stop escalation at stage one by proving systemic correction, not by negotiating individual symptoms.
Trust Erosion Model¶
Teams can model enforcement risk as trust erosion across repeated mismatches.
First mismatch typically maps to metadata correction: align App Privacy declarations, policy text, and runtime controls, then resubmit with change notes.
Repeat mismatch usually increases scrutiny. Reviews tend to request clearer evidence, and teams may experience longer cycles because control reliability becomes the primary concern.
If similar inconsistencies appear across multiple apps under one organization, review intensity can increase across that set. Treat this as a cross-app signal to standardize privacy controls, SDK governance, and release evidence rather than fixing each app independently.
At the extreme, unresolved repeated mismatches can create portfolio-level review risk: broader friction across submissions, tighter expectation for proof, and reduced tolerance for documentation/runtime divergence. The objective is to break the cycle early by demonstrating systemic control, not one-off patching.
Pre-Check¶
- Verify App Privacy categories match current runtime and server data flows.
- Verify ATT gating blocks tracking-capable SDK initialization before user choice.
- Verify privacy policy URL is reachable and policy language matches observed behavior.
- Verify first-launch and consent-state network traces are archived for the build.
Structured Pre-Submission Audit¶
Run this audit before every submission that changes data flows, SDKs, login architecture, advertising, attribution, or policy language.
SDK audit¶
- Inventory all embedded SDKs, versions, and enabled modules for the release candidate.
- Confirm each SDK’s data emission behavior under production configuration, not sample defaults.
- Verify initialization order and gating logic for tracking-adjacent modules.
- Record vendor documentation references and runtime configuration deltas.
Network inspection¶
- Capture first-launch and post-consent network traces on clean devices.
- Run traces for ATT states: not determined, authorized, denied, and restricted.
- Identify payload fields that are stable identifiers, quasi-identifiers, or account keys.
- Confirm no unexpected endpoint domains appear due to remote config or fallback routing.
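Classifying payload fields from a captured trace can be partly automated. The heuristics below are a sketch under stated assumptions: the key list and the UUID-shape test are illustrative starting points for flagging stable identifiers, not an exhaustive or authoritative classifier.

```python
import re

# Hypothetical heuristics for flagging identifier-like payload fields
# in a captured first-launch trace. Key names are assumptions.
STABLE_ID_KEYS = {"idfa", "idfv", "device_id", "user_id", "account_id"}
UUID_RE = re.compile(
    r"^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
    r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$"
)

def flag_identifier_fields(payload):
    """Return payload keys that look like stable identifiers, either by
    name or because their value is UUID-shaped."""
    flagged = []
    for key, value in payload.items():
        if key.lower() in STABLE_ID_KEYS:
            flagged.append(key)
        elif isinstance(value, str) and UUID_RE.match(value):
            flagged.append(key)
    return flagged
```

Fields flagged this way still need human classification into stable identifier, quasi-identifier, or account key, but the scan keeps obvious candidates from slipping past review.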
Data mapping¶
- Build a source-to-sink map: event origin -> transport -> storage -> enrichment -> sharing.
- Mark where data is linked to account identity, household profile, or persistent device context.
- Mark each downstream processor (internal or vendor) and purpose of use.
- Identify derived fields that may create linkage or profiling risk.
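A minimal source-to-sink record, with linkage and sharing marked, makes the riskiest combination queryable. This sketch assumes a flat per-event record of your own design; the field names and vendor labels are hypothetical.

```python
# Hypothetical source-to-sink records; field names are assumptions.
FLOWS = [
    {
        "event": "purchase_completed",
        "transport": "https",
        "storage": "events-warehouse",
        "enrichment": ["join:accounts-db"],
        "shared_with": ["attribution-vendor"],
        "linked_to_identity": True,
    },
    {
        "event": "crash_report",
        "transport": "https",
        "storage": "diagnostics",
        "enrichment": [],
        "shared_with": [],
        "linked_to_identity": False,
    },
]

def linkage_risks(flows):
    """Events that are linked to identity AND leave the first-party
    boundary -- the combination most likely to need tracking review."""
    return [
        f["event"] for f in flows
        if f["linked_to_identity"] and f["shared_with"]
    ]
```

Anything this query returns should be reconciled against the tracking classification in the App Privacy form before submission.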
Privacy form review¶
- Reconcile every collected data category against the latest runtime and server map.
- Re-validate linked/not linked classification using downstream join logic, not only client payload shape.
- Re-validate tracking/not tracking classification against partner contracts and observed use.
- Ensure release notes, reviewer notes, and declaration updates describe the same behavior.
ATT flow validation¶
- Verify ATT prompt appears at the intended moment and before tracking-capable logic executes.
- Confirm IDFA-dependent code paths are disabled unless authorization is granted.
- Test reinstall and upgrade scenarios where cached state can bypass expected prompt logic.
- Validate behavior parity across iPhone/iPad and current supported iOS versions.
A submission is audit-ready only when all five sections reconcile without exceptions. If any section fails, fix behavior first, then update declarations and policy text.
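The all-sections-reconcile rule can be enforced as a simple release gate. The section keys below follow the five audit areas above; the `AUDIT` structure itself is an assumption about how you record audit results.

```python
# Hypothetical audit result per section; keys follow the five audit areas.
AUDIT = {
    "sdk_audit": True,
    "network_inspection": True,
    "data_mapping": False,  # one unresolved exception blocks submission
    "privacy_form_review": True,
    "att_flow_validation": True,
}

def submission_ready(audit):
    """Audit-ready only when every section reconciles without exception.
    Returns (ready, failing_sections)."""
    failing = sorted(k for k, ok in audit.items() if not ok)
    return (not failing, failing)
```

Wiring this into CI means a single failing section produces a named blocker rather than a silently shipped inconsistency.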
Fix¶
Third-Party SDK Accountability Model¶
Treat every SDK emission as your app's behavior. Operationally, there is no practical separation between first-party code and third-party runtime effects once traffic leaves the binary. If an SDK collects or transmits data, your declarations and policy posture must cover it as if written internally.
Default SDK auto-initialization is a common control gap. Many libraries boot on launch, register lifecycle hooks, or begin telemetry before your consent and configuration logic completes. This creates first-launch exposure where runtime behavior can exceed declared intent, especially in denied or not-determined ATT states.
A stronger pattern is deferred loading plus explicit gating. Initialize only a minimal shell at process start, then load tracking-capable modules after policy gates pass (for example consent state, regional rule set, and feature entitlement). Require deterministic initialization order, with fail-closed behavior when gate state is unknown.
Also plan for vendor configuration drift. Remote dashboards, partner defaults, new SDK versions, and silently enabled modules can change observed emissions without visible app code changes. Manage this as a release risk: snapshot vendor settings per build, diff them during QA, and block submission when runtime traces no longer match declared categories.
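Snapshot-and-diff for vendor settings can be sketched as a flat-dict comparison. This is illustrative: how you export a vendor dashboard into a key/value snapshot is tool-specific, and the example keys are assumptions.

```python
def config_drift(baseline, current):
    """Diff two vendor-config snapshots (flat key -> value dicts).
    Returns keys added, removed, or changed since the baseline
    captured for the previous build."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(
        k for k in set(baseline) & set(current) if baseline[k] != current[k]
    )
    return {"added": added, "removed": removed, "changed": changed}

# Hypothetical snapshots: a module silently enabled, a new flag appearing.
previous_build = {"ads_module": False, "session_replay": False}
release_candidate = {"ads_module": True, "session_replay": False,
                     "email_hashing": True}
```

Per the release-risk rule above, any non-empty drift result should block submission until runtime traces are re-verified against declared categories.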
Runtime Verification Layer¶
Start with network inspection on a clean physical device at first launch. Capture all outbound domains and payload classes before user interaction to identify unexpected startup emissions. Repeat under production-like config, not debug-only settings.
Run ATT state switching tests as a fixed matrix: not determined -> denied, not determined -> authorized, and post-install state changes. Confirm event volume, identifier fields, and destinations differ as intended by consent logic.
Compare fresh install and upgrade paths. Upgrade flows can preserve cached IDs, stale SDK state, or migrated settings that do not appear in clean-install tests. Validate both because risk can arise from either entry path.
Validate region variance directly. If geolocation, storefront, or locale drives privacy behavior, test at least one representative region per ruleset and confirm endpoint routing plus payload content are consistent with declarations for that region.
Maintain a telemetry emission inventory per build: event name, trigger point, fields, destination, linkage status, and tracking relevance. Version this inventory with the release artifact so metadata review is evidence-backed and reproducible.
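One way to make the inventory versionable with the release artifact is a plain CSV serialization, which diffs cleanly between builds. The column names below follow the fields listed above and are assumptions, not a required format.

```python
import csv
import io

# Columns follow the inventory fields described above (illustrative).
FIELDS = ["event", "trigger", "fields", "destination", "linked", "tracking"]

def write_inventory(rows):
    """Serialize the per-build emission inventory to CSV text so it can
    be committed alongside the release artifact and diffed per build."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

A one-line diff in this file between builds is exactly the kind of concrete, reproducible evidence a reviewer follow-up can be answered with.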
Resubmission Evidence Packet¶
- Reviewer summary: what changed since rejection, mapped directly to the cited 5.1 concern.
- Runtime proof: first-launch and consent-state network traces (before/after) with identifier fields called out.
- Control proof: ATT gate order logs/assertions and SDK initialization timeline for the reviewed build.
- Declaration proof: updated App Privacy form matrix aligned to actual collected/linked/tracking behavior.
- Policy proof: live privacy policy URL, effective date, and redlined language updates tied to the same behavior set.