Typeless Privacy Issues: What Researchers Found (2026)
TL;DR: Typeless markets itself as privacy-first with claims of "on-device history" and "zero data retention," but the company's own privacy policy confirms that voice audio is sent to cloud servers for processing. In November 2025, a reverse-engineering analysis posted on X reported that Typeless routes audio to AWS servers in us-east-2 and collects additional contextual signals beyond voice, including browsing URLs and focused window titles. The incident highlights a broader pattern: cloud-based dictation apps cannot offer the same privacy guarantees as tools that process audio entirely on the device. Voibe runs 100% on-device on Apple Silicon — no audio leaves your Mac, ever.
This article walks through what Typeless publicly states, what independent researchers reported, what the community reaction revealed, and what any Mac user concerned about voice privacy should check before granting a dictation app access to their microphone. We are not accusing Typeless of malicious behavior — we are comparing their public claims against their own privacy policy, against the reverse-engineering analysis, and against the community response.
Disclosure: Voibe is our product. We compare Voibe to other tools using verifiable facts — pricing from vendor sites, claims from public privacy policies, and attributed third-party research.
Key Takeaway
Typeless markets "on-device" features but its own privacy policy confirms audio is processed on cloud servers. Independent researchers reported additional data collection beyond voice.
Key Takeaways: The Typeless Privacy Story
| Point | What to Know | Source |
|---|---|---|
| Cloud processing | Voice audio is processed on Typeless's cloud servers, then discarded. | Typeless Privacy Policy |
| "On-device" scope | The "on-device" claim refers to history storage after processing — not to where the audio is transcribed. | Typeless marketing vs. policy |
| Reverse-engineering findings | Researcher @medmuspg reported AWS us-east-2 routing, URL collection, window-title capture, and broad permission requests. | X post, November 2025 |
| Community response | Japanese tech and medical community members publicly uninstalled the app and deleted prior recommendations. | X posts, November 2025 |
| HIPAA claim | Typeless announced HIPAA compliance in March 2026 but no public BAA is advertised. | Paubox assessment, Typeless X announcement |
| Safer alternative | On-device dictation (Voibe, VoiceInk, Superwhisper offline mode) eliminates the cloud surface entirely. | Architectural comparison |
The rest of this article walks through each row in detail, explains why "zero data retention" is a weaker guarantee than "zero data transmission," and gives you an 8-point audit framework to evaluate any dictation app you install on Mac.
What Typeless Publicly Claims About Privacy
Typeless markets itself as a privacy-focused AI dictation app. The company's public positioning rests on three claims:
- "Zero data retention" — voice dictations, transcripts, and edits are not stored after processing.
- "On-device history" — dictation history remains on the user's device.
- "Never train on your data" — customer data is not used to train Typeless's AI models or third-party models.
Each of these claims, read narrowly, can be true even while voice audio leaves the device. To understand why, you have to read the actual Typeless privacy policy rather than the marketing copy. The policy states that audio inputs and contextual information are "processed in real time on our cloud servers and immediately discarded once the transcription result is returned to your local device." The same policy discloses that Typeless shares data with "third-party LLM providers, analytics providers, cloud providers, communications providers."
The gap between the marketing and the policy is the source of the privacy concerns. "On-device" sounds like the audio never leaves your Mac. The policy makes clear that it does.
Warning
"On-device history" and "on-device processing" are not the same thing. The former describes where history is stored. The latter describes where audio is transcribed. Typeless confirms the former; its own privacy policy confirms that the latter happens in the cloud.
What the Reverse-Engineering Analysis Reported
In November 2025, an independent researcher posting under the handle @medmuspg on X published a reverse-engineering analysis of the Typeless macOS app. The thread went viral in the Japanese Mac and medical technology communities and triggered a wave of uninstall recommendations. We are reporting the researcher's claims as they were published — we have not independently verified them.
The analysis made six specific claims:
- Cloud routing. Voice data is sent to AWS servers in the us-east-2 (Ohio) region. The "on-device" marketing, per the analysis, refers only to where dictation history is stored locally — not to where the audio is transcribed.
- Contextual data collection. The analysis reported that the app captures browsing URLs (including pages inside Gmail and Google Docs), focused application names, and window titles via the macOS accessibility API.
- Accessibility API scraping. The researcher reported that screen text and DOM-level elements in browsers were accessible to the app through standard macOS accessibility permissions.
- Clipboard monitoring. The app was reported to have access to clipboard content, which is unusual for a speech-to-text workflow.
- Plaintext local storage. Personal information, including transcribed text and URL metadata, was reported to be stored in plaintext within the local application database.
- Broad permission requests. The app reportedly requests screen recording, camera, Bluetooth, and full accessibility access — a permission surface far wider than a voice-input tool strictly requires.
The analysis also noted structural transparency concerns: no legal entity name in the terms of service, vague company location details, and private WHOIS registration for the domain. The researcher did not claim any of these items alone constitute a breach or a law violation — the argument was that the combination creates a privacy profile inconsistent with the "privacy-first" marketing.
Why Cloud Dictation Is Risky, Even With a Zero-Retention Policy
The Typeless case study illustrates a structural problem with all cloud dictation apps, not just one vendor. "Zero data retention" is a policy, not an architecture. The difference matters because architecture is enforced by physics — if audio never leaves your device, no policy change can expose it — while policies can be changed, breached, or circumvented.
Six ways a zero-retention policy can fail to protect your voice data:
- Transmission itself is a breach surface. Even with TLS encryption, your audio passes through your ISP, cloud provider networks, and data-center switches. Any intermediate layer can be misconfigured, logged, or compromised.
- Subprocessor logging. Typeless's privacy policy discloses that it uses "third-party LLM providers, analytics providers, cloud providers" and others. Each of these parties can log, cache, or retain data beyond what the primary vendor intends.
- Policy changes. A privacy policy can be updated with 30 days' notice. The same servers processing your audio today under "zero retention" can store it tomorrow under a revised policy.
- Acquisition risk. When a company is acquired, its data becomes an asset. When Microsoft acquired Nuance (Dragon) in 2022, all customer data moved under Microsoft's governance. A privacy-first startup's zero-retention promise does not survive a change in ownership.
- Legal compulsion. A subpoena or national security letter can compel a vendor to preserve and hand over data that would normally be discarded. On-device processing removes this vector because there is no data to preserve.
- Implementation bugs. A routing error, a debug log left enabled, or a caching layer that forgets to flush can cause data to persist longer than the policy says. Breach reports over the last decade routinely cite exactly these causes.
None of these failure modes are unique to Typeless. Wispr Flow, Aqua Voice, Otter.ai, and every other cloud-based dictation app share the same structural exposure. For a deeper comparison of the two architectural approaches, see our cloud vs. local dictation guide and our voice data privacy guide.
The Permission Problem: What macOS Accessibility Access Can See
The Typeless reverse-engineering analysis highlighted a concern that applies to every dictation app on Mac: the macOS accessibility API grants broad read access to running applications. When you click "Allow" on that permission prompt, you are authorizing the app to read window titles, focused text fields, menu contents, and — in browsers — elements of the page DOM.
This access is necessary for any dictation app to paste text into the active field. But the same API can be used to capture far more than that. According to Apple's own accessibility API documentation, an authorized app can query attributes of any UI element in any running app, including webpage content rendered in browsers.
A well-designed dictation app reads the accessibility tree only when it needs to paste transcribed text. A poorly designed or over-reaching app can read it continuously, log what it sees, and transmit it to a server. From a user's perspective, there is no visual indicator of which is happening.
This is why the permission model of a dictation app matters as much as its privacy policy. A minimal permission surface limits what a misbehaving app can do, regardless of what it claims to do.
What a voice dictation app actually needs on Mac:
- ✅ Microphone access — required to capture audio
- ✅ Accessibility permission — required to paste text into the active field
- ❓ Input monitoring — needed for global hotkeys; scope matters
- ❌ Screen recording — not required for voice-to-text
- ❌ Camera access — not required for voice-to-text
- ❌ Bluetooth access — not required for voice-to-text
- ❌ Full-disk access — not required for voice-to-text
If a dictation app requests anything beyond microphone and accessibility, the vendor should be able to explain why in one sentence. If they can't, treat it as a red flag.
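The checklist above can be expressed as a simple allow-list comparison. This is a minimal illustrative sketch, not a real macOS permission query — the permission names are informal labels, and the example set mirrors the permissions the reverse-engineering analysis reported:

```python
# Flag permissions a voice-to-text app should not need.
# Names are illustrative labels, not macOS API identifiers.

REQUIRED = {"microphone", "accessibility"}
CONDITIONAL = {"input-monitoring"}  # acceptable for global hotkeys

def audit_permissions(requested: set[str]) -> list[str]:
    """Return requested permissions outside the expected surface."""
    return sorted(requested - REQUIRED - CONDITIONAL)

# Permission set reported in the reverse-engineering analysis:
reported = {"microphone", "accessibility", "screen-recording",
            "camera", "bluetooth"}
flags = audit_permissions(reported)
print(flags)  # → ['bluetooth', 'camera', 'screen-recording']
```

Anything the function returns is a permission the vendor should be able to justify in one sentence.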
The Dictation Privacy Audit: 8 Red Flags to Check Before Installing
Use this framework — the Dictation Privacy Audit — to evaluate any dictation app before granting it access to your microphone. Each of the eight checks is independent; the more red flags, the higher the privacy risk. A tool with zero red flags does not guarantee privacy, but a tool with four or more is almost certainly routing your voice through infrastructure you don't control.
- Does the privacy policy confirm on-device processing? Read the actual policy, not the marketing. Look for phrases like "cloud servers," "real-time processing on our infrastructure," or "third-party LLM providers." If any appear, audio is leaving your device.
- Does the app work fully offline? Disconnect your Mac from the internet and try to dictate. If transcription fails, the app is cloud-based regardless of how it is marketed.
- Does the app avoid requiring an account? If you cannot dictate without creating an account and logging in, the app has at minimum tied your voice to an identity on the vendor's server — even if it claims zero retention.
- Is the permission scope minimal? Microphone plus accessibility is sufficient. Screen recording, camera, Bluetooth, or full-disk access on a voice tool is a red flag.
- Does the vendor publish a subprocessor list? A legitimate cloud vendor discloses its subprocessors (third-party LLM providers, cloud hosts, analytics). If the list is missing, vague, or requires account login to view, the data trail is opaque.
- Is there a named legal entity in the Terms of Service? A privacy-first vendor should identify the legal entity responsible for the service. Private WHOIS, no company name, or only a generic "contact us" form is a transparency concern.
- What does Little Snitch show during dictation? A network monitor tells you what the app actually does, not what it claims. A genuine on-device app shows zero outbound traffic while dictating. A cloud app shows connections to the vendor's or AWS/GCP/Azure endpoints.
- Do the Terms of Service grant the vendor a broad license to your content? Look for phrases like "access, copy, modify, distribute, transmit, export, display, store, and otherwise use" your content. Even with "zero retention" marketing, a broad ToS license permits future changes in how your content is used.
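The eight checks reduce to a simple tally. A minimal sketch — the check names paraphrase the list above, and the answers shown are hypothetical, not an assessment of any specific vendor:

```python
# Dictation Privacy Audit: count red flags across the eight checks.
# True means the check passed; False raises a flag.

AUDIT_CHECKS = [
    "Policy confirms on-device processing",
    "Works fully offline",
    "No account required",
    "Minimal permission scope",
    "Public subprocessor list (or none needed)",
    "Named legal entity in ToS",
    "Zero outbound traffic during dictation",
    "No broad ToS content license",
]

def red_flags(answers: dict[str, bool]) -> list[str]:
    """Return the failed checks; 4+ failures is a strong warning sign."""
    return [check for check in AUDIT_CHECKS if not answers.get(check, False)]

# Hypothetical answers for a typical cloud-based app:
answers = {check: False for check in AUDIT_CHECKS}
answers["Named legal entity in ToS"] = True
failed = red_flags(answers)
print(f"{len(failed)} red flags")  # → 7 red flags
```

An unanswered check counts as a flag, which matches the framework's intent: opacity is itself a risk signal.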
We use this framework across our reviews. For results on specific tools, see our guides on Apple Dictation privacy, voice data privacy, and our Typeless alternatives guide.
Typeless's Response: HIPAA Claim Without a Public BAA
Typeless responded to the privacy criticism over the following months. In March 2026, the company publicly announced HIPAA compliance on X. This response addresses healthcare-vertical concerns but does not directly rebut the reverse-engineering claims about AWS routing, URL capture, or permission scope.
An independent assessment by Paubox, a HIPAA-focused compliance vendor, noted an important gap: Typeless does not publicly advertise a standalone Business Associate Agreement (BAA) on its website. For covered entities under HIPAA, a signed BAA is a prerequisite — not an optional add-on — before any Protected Health Information can be processed by a third-party service.
Paubox's recommendation was that healthcare organizations considering Typeless contact the company directly to confirm BAA availability. This is standard diligence advice, but it signals that Typeless's HIPAA claim is less complete than the announcement suggests. Our view: a HIPAA compliance announcement without a public, signable BAA is a marketing milestone, not a compliance milestone. Healthcare professionals looking for truly private dictation should review our dedicated HIPAA dictation guide and best dictation software for doctors roundup.
On-Device Alternatives: What Actually Keeps Voice Data Private
If the Typeless findings concern you, the solution is not a different cloud dictation app — it is a dictation app that processes audio entirely on your device. Three Mac-native options process audio on-device using OpenAI Whisper models on Apple Silicon:
| Tool | Processing | Pricing | Key Strength |
|---|---|---|---|
| Voibe | 100% on-device | $4.90/mo or $99 lifetime | Developer Mode (Cursor/VS Code), minimal permissions, no account needed |
| VoiceInk | 100% on-device | $39 one-time | Open-source, auditable codebase |
| Superwhisper | On-device (with optional cloud mode) | $849 lifetime | Multiple customizable modes, broad language support |
All three run Whisper on Apple Silicon's Neural Engine and require only microphone + accessibility permissions. None transmits audio to external servers in on-device mode. For a broader comparison including cloud options, see our best offline dictation apps roundup, the Typeless alternatives guide, and our Voibe vs. VoiceInk comparison.
Key Takeaway
If "zero data retention" is not enough assurance, choose zero data transmission. Voibe, VoiceInk, and Superwhisper (offline mode) process audio entirely on Apple Silicon.
Voibe: Privacy as Architecture, Not as a Policy
Voibe is a Mac-native dictation app built around a single architectural principle: your audio never leaves the device. Voibe runs OpenAI Whisper models on Apple Silicon's Neural Engine. When you press your hotkey, audio is captured into memory, transcribed by the local Whisper model, written into the active text field, and discarded. No cloud servers, no third-party LLM providers, no network round-trip.
The practical consequences of this architecture, mapped against the Dictation Privacy Audit:
- Privacy policy language. Voibe's privacy model is stated plainly: no audio, transcripts, or usage data leave your device. There are no subprocessors because there is no data to process off-device.
- Offline test. Voibe works with Wi-Fi disabled. Dictation is unaffected by connectivity.
- Account requirement. Voibe does not require an account to dictate.
- Permission scope. Microphone and accessibility only. No screen recording, no camera, no Bluetooth, no full-disk access.
- Subprocessor list. Not applicable — Voibe has no subprocessors for dictation data because none is transmitted.
- Legal transparency. Voibe is published by a named legal entity with a public contact.
- Network monitor test. Little Snitch shows zero outbound traffic from Voibe during dictation.
- ToS license breadth. Voibe does not need a license to your content because your content is never transmitted.
Pricing: $4.90/month or $99 lifetime for unlimited dictation on Apple Silicon Macs (M1 through M4). That is $250–$350 cheaper than Wispr Flow over three years and $750 cheaper than Superwhisper's $849 lifetime. Voibe also includes a Developer Mode for VS Code and Cursor with file/folder name resolution — a feature actively requested in the Superwhisper and Wispr Flow communities but not available in either.
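The three-year arithmetic works out as follows. The $99 and $849 lifetime figures come from this article; the $12/month Wispr Flow rate is an assumption for illustration — verify current pricing on the vendor's site:

```python
# Three-year cost comparison against Voibe's lifetime license.
MONTHS = 36
voibe_lifetime = 99
wispr_flow_monthly = 12      # ASSUMED rate; check vendor pricing
superwhisper_lifetime = 849  # lifetime figure cited in this article

wispr_three_year = wispr_flow_monthly * MONTHS
print(wispr_three_year - voibe_lifetime)       # → 333 (inside the $250–$350 range)
print(superwhisper_lifetime - voibe_lifetime)  # → 750
```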
Try Voibe for Free — install, grant microphone and accessibility permissions, and dictate. No account, no credit card, no audio leaving your Mac.
The Bottom Line on Typeless and Cloud Dictation
The Typeless privacy controversy is less a scandal about one vendor than an illustration of a structural limit: cloud dictation apps cannot offer the same privacy guarantees as on-device dictation, no matter how carefully they word their policies. Typeless's own privacy policy discloses that audio is processed on cloud servers. The reverse-engineering analysis reported additional collection beyond voice. The community response was swift because the gap between "on-device" marketing and actual architecture is the kind of gap that breaks user trust.
If you value the convenience of AI-polished dictation and accept the trade-off of sending audio to the cloud, Typeless is a reasonable option among its peers. If you are a lawyer, doctor, developer, or anyone who would rather not have your voice cross network boundaries, on-device dictation is the only architectural answer. The privacy policy you trust most is the one the server can't break — and that means no server at all.
For side-by-side Typeless comparisons against other dictation tools, see our Typeless vs Superwhisper comparison, Typeless vs Aqua Voice comparison, and Typeless vs Wispr Flow comparison. For further reading on related topics, see our why offline dictation matters explainer, our complete dictation privacy hub, and our 11 best Typeless alternatives guide.
Ready to type 3x faster?
Voibe is the fastest, most private dictation app for Mac. Try it today.
Related Articles
Dictation Privacy Hub: The Complete Guide to Protecting Your Voice Data
Your voice is biometric data that can never be changed. Explore our complete library of dictation privacy guides covering HIPAA, voice data, Apple Dictation, and more.
Apple Dictation Privacy: What Data Apple Collects and How to Stop It
Apple Dictation on Mac processes most speech on-device but can still share audio with Apple. Learn exactly what data is sent, how to disable sharing, and limitations.
Cloud vs. Local Dictation: Privacy, Speed, and Accuracy Compared (2026)
Cloud dictation sends audio to servers. Local dictation processes on your device. Compare privacy, latency, accuracy, and cost to choose the right approach.

