April 11, 2026
it's been a year
Got CosmiCut out the door a year ago. And boy have I learned stuff.
When I started working on CosmiCut (shortly after receiving an Apple Vision Pro at launch), Apple hadn’t published the specs for “spatial” videos. Mike Swanson’s blog was a super helpful starting point.
From there I could see what a spatial video actually is: an MV-HEVC video with some extra metadata and two video layers, one frame buffer per eye. That’s effectively what “immersive” video is, too (just at a typically higher resolution with a different presentation technique).
Apple eventually added official docs. But by the time these docs existed, I had already made CosmiCut 😅.
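That two-layer structure is visible directly in AVFoundation. As a rough sketch, pulling both eye views out of a spatial file looks something like the following (API names are from CoreMedia/AVFoundation on visionOS 1 / macOS 14+, and `inputURL` is a placeholder, not anything from CosmiCut itself):

```swift
import AVFoundation
import CoreMedia
import VideoToolbox

/// Sketch: decode both eye views from a spatial (MV-HEVC) video.
func readStereoFrames(from inputURL: URL) async throws {
    let asset = AVURLAsset(url: inputURL)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    // Request both MV-HEVC layers so the decoder emits one tagged
    // buffer per eye instead of just the base (left) layer.
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [AVVideoDecompressionPropertiesKey: [
            kVTDecompressionPropertyKey_RequestedMVHEVCVideoLayerIDs as String: [0, 1]
        ]])
    reader.add(output)
    reader.startReading()

    while let sample = output.copyNextSampleBuffer() {
        // Each decoded sample carries a tagged buffer per eye view.
        for tagged in sample.taggedBuffers ?? [] {
            guard case .pixelBuffer(let pixelBuffer) = tagged.buffer else { continue }
            if tagged.tags.contains(.stereoView(.leftEye)) {
                // Left-eye frame: hand pixelBuffer to the edit pipeline…
            } else if tagged.tags.contains(.stereoView(.rightEye)) {
                // Right-eye frame…
            }
            _ = pixelBuffer
        }
    }
}
```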

We launched CosmiCut, built fully in SwiftUI, in the spring of 2025. Initially, it was really just a very simple way to trim and smash together multiple spatial videos. Over the last year, it’s become pretty full-featured. It now has:
- A full timeline with draggable clips.
- Music track support and a simple music trimmer.
- Support for adding filters and effects to spatial videos, via a custom pipeline that handles both of a frame’s pixel buffers. That’s something Apple still doesn’t offer within Photos or iMovie.
- A render pipeline and an in-app ML model (DepthAnythingV2) to generate spatial videos from 2D videos.
- Support for importing images as Ken Burns-style animated clips.
- Support for previewing spatial videos in 3D while editing on Vision Pro.
- Support for previewing spatial videos in 3D on iPhone, iPad, and Mac using RealityKit and some custom shaders.
- Ability to export a “spatial GIF” using the same RealityKit rendering pipeline.
- A complete UI/UX rewrite to support Liquid Glass.
- A migration to Swift 6 with strict concurrency.
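One wrinkle behind the filters bullet above: every effect has to be applied to both eye buffers with identical parameters, or the stereo pair drifts apart. A minimal Core Image sketch of that idea (the function and buffer names here are hypothetical, not CosmiCut’s actual pipeline):

```swift
import CoreImage
import CoreVideo

let ciContext = CIContext()

/// Sketch: apply the same filter, with the same parameters, to both
/// eye buffers of one stereo frame so the pair stays consistent.
func applySepia(leftEye: CVPixelBuffer, rightEye: CVPixelBuffer) {
    for buffer in [leftEye, rightEye] {
        let input = CIImage(cvPixelBuffer: buffer)
        guard let filter = CIFilter(name: "CISepiaTone") else { continue }
        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(0.8, forKey: kCIInputIntensityKey)
        // Render the filtered result back into the source buffer in place.
        if let outputImage = filter.outputImage {
            ciContext.render(outputImage, to: buffer)
        }
    }
}
```

The loop is the whole point: one filter instance, one set of parameter values, run twice. Diverging parameters between eyes is what produces that uncomfortable “shimmer” in stereo playback.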
There are a bunch of items on that list that deserve posts of their own. While I’ve been a professional software dev for some time now, my focus hasn’t been on this kind of low-level image and video processing, and I’ve learned a ton that I’m eager to share.
Now I just have to write it all up.