Earlier today, I updated my AirPods 4 to the beta firmware that Apple released yesterday. I was curious to play around with the update for two reasons:
- AirPods are getting support for automatically pausing media playback when you fall asleep, and
- Apple is advertising improved “studio quality” recording on AirPods 4 and AirPods Pro 2 with this update.
I’ll cut to the chase: while I haven’t been able to test sleep detection yet since I don’t take naps during the day, I think Apple delivered on its promise of improved voice recordings with AirPods.
For starters, I’ll mention that the process of updating AirPods to beta software is better and less obscure than it used to be. If you’re running a Mac, iPhone, or iPad with the latest OS 26 developer release and open the AirPods menu in Settings, you’ll find the option to enable beta updates for compatible AirPods models. I enrolled my AirPods 4 and placed them in their case next to my iPhone, and after a few minutes, the beta firmware was installed. I still don’t love that there is no way to manually install a software update on AirPods, but at least the process is a little more streamlined now.
If you recall, a while back, I mentioned that I was disappointed with the recording quality of AirPods when I published a story about turning my own voice recordings into actionable items in Obsidian. At the time, AirPods 4 were essentially unusable if I wanted to record myself while doing chores around the house or driving. The earbuds would pick up a lot of external noise, and the resulting audio was heavily compressed. I resorted to using a pair of Xiaomi earbuds, combined with some LLM scripting on a Mac mini server, to process my brainstorming sessions, extract tasks from them, and save those tasks as notes.
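For the curious, the core of that old setup boiled down to a single step: send a transcript to a local LLM and parse the tasks out of its reply. Here’s a simplified sketch of the idea in Swift – the server address, model name, and prompt are stand-ins (I’m using Ollama’s /api/generate endpoint as an example), not my exact script:

```swift
import Foundation

// Simplified sketch: ask a local LLM (running on a Mac mini server) to pull
// actionable tasks out of a transcript. The endpoint and model name are
// placeholders; this example assumes Ollama's /api/generate API.
struct OllamaResponse: Decodable { let response: String }

func extractTasks(from transcript: String) async throws -> [String] {
    var request = URLRequest(url: URL(string: "http://mac-mini.local:11434/api/generate")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: [
        "model": "llama3",
        "prompt": "Extract actionable tasks from this transcript, one per line:\n\n\(transcript)",
        "stream": false
    ])
    let (data, _) = try await URLSession.shared.data(for: request)
    let reply = try JSONDecoder().decode(OllamaResponse.self, from: data)
    // The prompt asks for one task per line; split and clean up the reply.
    return reply.response
        .split(separator: "\n")
        .map { $0.trimmingCharacters(in: .whitespaces) }
        .filter { !$0.isEmpty }
}
```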
With the new beta firmware, things are much, much better. For context, here is what a simple voice recording sounded like while wearing two AirPods before the beta firmware update:
I recorded this on my balcony, with some construction work going on at a neighbor’s apartment, using AirPods’ standard voice settings. Here is audio from the same environment, using one AirPod only:
Both are…pretty bad. Well, here’s what a recording from the same scenario with two AirPods sounds like now, after the beta firmware update, with standard settings:
And here is a single AirPod:
It’s a fairly dramatic difference – enough that I can now seriously consider rebuilding my voice notes workflow around AirPods connected to my iPhone or Apple Watch. So what’s going on behind the scenes? Here’s how Apple officially described the feature in its press release:
Creating content gets even better with studio-quality audio recording. Interviewers, podcasters, singers, and other creators can record their content with greater sound quality, and even record while on the go or in noisy environments with Voice Isolation. With the H2 chip, beamforming microphones, and computational audio, users will also enjoy more natural vocal texture and clarity across iPhone calls, FaceTime, and CallKit-enabled apps. Studio-quality audio recording and improved call quality work across iPhone, iPad, and Mac, while also supporting the Camera app, Voice Memos, dictation in Messages, video conferencing apps like Webex, and compatible third-party camera apps.
That doesn’t say a lot, so I decided to take a look myself. Beyond the “computational audio” magic that Apple is working behind the scenes, the updated AirPods now save audio files with a sample rate of 48 kHz, as opposed to the 24 kHz files that were saved before the beta firmware:
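You can verify this yourself: `afinfo` in Terminal prints a file’s sample rate, or a few lines of AVFoundation will do the same. A minimal sketch (the file path is just a placeholder):

```swift
import AVFoundation

// Minimal check of a recording's sample rate; the path is a placeholder.
do {
    let file = try AVAudioFile(forReading: URL(fileURLWithPath: "voice-memo.m4a"))
    print("Sample rate: \(file.fileFormat.sampleRate) Hz")
    // Pre-beta AirPods recordings report 24000.0; post-update ones, 48000.0.
} catch {
    print("Could not open file: \(error)")
}
```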
I’m sure there’s more going on under the hood than a mere sample rate boost, but that certainly doesn’t hurt. It also explains the storage hit: doubling the sample rate at the same bit depth and channel count roughly doubles the audio data per second, and sure enough, the files I saved this morning after the beta update are almost double the size of the old ones.
Regardless of how Apple made this possible, I’m thrilled that AirPods are now a good solution for recording long voice memos or voiceover for quick videos. Would I use AirPods instead of my microphone to record my weekly podcasts? No, but for everything else – whether it’s recording voice notes or hopping on a non-podcast Zoom call – I can now use AirPods without having to say, “Sorry, I’m using AirPods for this”.
Speaking of notes, I’ll revisit this topic later in the summer as Apple releases more beta updates. But I was also curious to see whether my convoluted workflow – recording myself, then using an LLM to extract actionable items from the recording – could be replicated simply with Notes and Apple Intelligence. The answer is…yes, surprisingly.
After recording myself in Notes, I copied the transcript (generated instantly thanks to the excellent SpeechAnalyzer framework, which is powered by a separate model that Apple improved this year) and ran a shortcut that passed the text in my clipboard to the Reminders share sheet extension. That extension gained Apple Intelligence integration this year, and one of the things it can do is extract actionable items from the input text. The extension did exactly that, on-device, in seconds, letting me quickly turn my voice ramblings into a list of tasks in the Reminders app:
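If you ever want to script that final step yourself instead of relying on the share sheet, the saving part is plain EventKit. To be clear, this is not what the Apple Intelligence extension does internally – it’s just a rough sketch of the equivalent “save these strings as reminders” step:

```swift
import EventKit

// Minimal sketch: save a list of extracted task strings as reminders.
let store = EKEventStore()

func save(tasks: [String]) async throws {
    // Prompts for Reminders access on first run (iOS 17+/macOS 14+ API).
    guard try await store.requestFullAccessToReminders() else { return }
    for task in tasks {
        let reminder = EKReminder(eventStore: store)
        reminder.title = task
        reminder.calendar = store.defaultCalendarForNewReminders()
        try store.save(reminder, commit: false)
    }
    try store.commit()  // Write all reminders in one batch.
}
```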
Color me impressed. I’m going to keep an eye on all of this over the next few weeks.