| All plans | No audio or transcription leaves the device. Account holders: email (for account auth) plus non-identifying usage analytics; crash reports exclude dictated content. | No. "The Voibe application processes your voice entirely on your device. No audio is transmitted to our servers at any point." | Audio: not transmitted, not retained. Account email: kept while the account is active. | Yes (only mode) — Whisper models running on the Apple Silicon Neural Engine | Apr 27, 2026 | |
| Free (Privacy Mode OFF, default) | Audio, transcripts, edits, and optional Context Awareness (screen content from the active app) | Yes (opt-in). After the 2024 community backlash, training is off by default and requires opt-in. Audio is retained indefinitely; data passed to third-party LLMs (OpenAI, Meta) is kept for 30 days. | Indefinite for retained dictation data; 30 days for third-party LLM passthrough. | No — transcription always happens in the cloud, even in Privacy Mode (zero-retention cloud, not local). | Apr 27, 2026 | |
| On-device modes (Fast / Nano / Standard / Parakeet — Free + Pro) | None — audio is processed locally and never transmitted | No. "Your data is not retained on Superwhisper servers" and is "not used for training AI models or any other machine learning purposes." Audio recordings are saved to local disk by default; opt out in settings. | N/A on servers. Local recordings persist until the user deletes them. | Yes | Apr 27, 2026 | |
| Cloud modes (Ultra transcription / Super Mode LLMs — Pro) | Audio sent to Superwhisper's proxy infrastructure | No (per vendor). Superwhisper says cloud audio is proxied through its infrastructure, third-party providers do not see the user's account or content, and there is no training or retention. The public privacy policy does not currently distinguish cloud-mode handling from on-device modes — verify the latest terms with the vendor before sensitive use. | Stated as not retained on servers; not separately documented for cloud modes. | No | Apr 27, 2026 | |
| Pro (Gumroad) / Whisper Transcription (App Store) | On-device modes: none transmitted. The App Store version discloses "Usage Data" and "Product Interaction" as Data Not Linked to You. Cloud Assistant and BYOK (OpenAI / ElevenLabs) features send audio to those providers under their terms. | No (by MacWhisper). MacWhisper does not train its own models on user audio. Cloud Assistant and BYOK integrations inherit the chosen provider's terms (e.g., OpenAI Whisper API, Anthropic, ElevenLabs). | On-device transcription: not retained. Cloud Assistant / BYOK: per the third-party provider's terms. | Yes (primary mode) — local Whisper models plus Apple Foundation Models for AI features; Cloud Assistant is opt-in for higher-quality transcription. | Apr 27, 2026 | |
| Free / Pro / iOS Pro | Audio inputs, technical data (IP, browser, OS, performance metrics), and session metadata. With Privacy Mode disabled, "we may securely store transcript data on our servers." | Yes (opt-out). The Privacy Mode toggle stops transcript storage on Aqua Voice servers; with it enabled, "transcript data is not collected," though session metadata may still be. The privacy policy does not explicitly state whether stored transcript data is used for AI training. SOC 2 Type II certified by Advantage Partners. No HIPAA BAA publicly advertised. | With Privacy Mode disabled: not specified in the policy. With Privacy Mode enabled: transcripts not stored; session metadata (timestamps, device type, performance metrics) may be retained. | No — cloud transcription | Apr 27, 2026 | |
| Free / Pro | Audio plus limited contextual information, processed on Typeless's cloud servers. Subprocessors include third-party LLM providers, analytics, and cloud infrastructure. | No (per vendor). Privacy policy: "Your data is never used to train these services and is configured for zero retention by the providers." Note: a November 2025 reverse-engineering analysis, documented in our Typeless privacy issues investigation, reported collection beyond what the public policy describes — verify against the current policy and subprocessor list before sensitive use. | Per the privacy policy, audio and contextual information are "processed in real time on our cloud servers and immediately discarded once the result is returned to your device." | No — cloud-processed in real time | Apr 27, 2026 | |
| macOS / iOS (Apple Silicon, supported languages) | Audio inputs, plus contextual data (contacts, app names, etc.) when sent to servers | Opt-in only. "Improve Siri & Dictation" must be enabled; the setup default is to ask the user. | If opted in: audio and transcripts are kept under a rotating random ID for up to 6 months, then dissociated and kept up to 2 years for improvement; a reviewed subset is retained beyond 2 years. If opted out: not retained for improvement. | Yes (partially) — most languages on Apple Silicon process locally for general text fields (Notes, Mail, Messages). Server fallback applies to unsupported languages, search-box dictation, and some third-party Speech Recognition API uses. | Apr 27, 2026 | |