
Three former Google X scientists are building what they call a virtual “second brain” — not through implants or sci-fi chips, but via an AI-powered app that learns from everything you say. Their startup, TwinMind, has raised $5.7 million in seed funding and launched both Android and iPhone apps, alongside a new AI speech model.
Co-founded in March 2024 by Daniel George (CEO) and former Google X colleagues Sunny Tang and Mahi Karim (both CTOs), TwinMind runs in the background — with user consent — to capture ambient speech and create a personal knowledge graph. By turning spoken thoughts, meetings, and conversations into structured memory, the app generates smart notes, to-dos, and contextual answers.
Unlike typical AI note-takers such as Otter, Granola, or Fireflies, TwinMind works passively throughout the day, processing audio on-device in real time and running for up to 17 hours without significant battery drain. It also supports offline use, data backup, and real-time translation across more than 100 languages.
To overcome Apple’s restrictions on background audio capture, the team built a native Swift service that runs continuously on iPhones, rather than relying on React Native or cloud-only processing. “We spent six to seven months just figuring out how to capture continuous audio without Apple shutting us down,” George told TechCrunch.
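George’s description points to the standard iOS pattern of pairing the “audio” background mode with a long-lived recording session driven by a native audio engine. The sketch below is a minimal illustration of that pattern under those assumptions, not TwinMind’s actual code; the class name and the on-device transcription hook are hypothetical.

```swift
import AVFoundation

// Minimal sketch of continuous background audio capture on iOS.
// Assumes the app's Info.plist declares "audio" under UIBackgroundModes
// and includes an NSMicrophoneUsageDescription, and that the user has
// granted microphone permission. Names here are illustrative only.
final class AmbientCaptureService {
    private let engine = AVAudioEngine()

    func start() throws {
        // Configure the shared audio session for long-running recording.
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.record, mode: .measurement, options: [])
        try session.setActive(true)

        // Tap the microphone input and hand small buffers to an on-device
        // transcription step, so raw audio never has to leave the phone.
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            // Hypothetical hook: feed `buffer` into a local speech model here.
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}
```

A setup along these lines keeps the audio pipeline alive while the app is backgrounded, which is the behavior George says took months to get working within Apple’s rules; a cross-platform layer like React Native cannot manage that session directly, hence the native Swift service.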
George conceived TwinMind in 2023 while working at JPMorgan as VP and Applied AI Lead. Frustrated by endless meetings, he wrote a script that transcribed meeting audio and fed it through ChatGPT, which quickly began producing summaries and even usable code. When colleagues showed interest, he realized the potential of a dedicated app that could run privately on a personal phone.
Beyond mobile, TwinMind also offers a Chrome extension that uses vision AI to scan browser tabs, pulling context from platforms like Gmail, Slack, and Notion to further enrich its memory.