I encountered a unique challenge today - I needed to cut out a part of a video hosted online with Azure Media Services, for reference purposes. The video in question is Into Focus, the “show within a show” that aired at Microsoft’s Ignite conference earlier this week.
Now, ordinarily, and because I work on the team that produced those pages to begin with, I could reach out, ask where the source MP4 is located, and get it that way. But I needed it that very moment (thank you, instant gratification). So what do I do? Fire up the web browser inspector and hit play, hoping to capture a https://foobar/video.mp4 request. But wait, what am I getting instead?
A bunch of fragments! Azure Media Services, the service that underpins the player hosted on the docs.microsoft.com page, is not exposing the full URL. Instead, it’s doing the smart thing and pre-buffering only the necessary parts (that is, those within immediate play reach). The fragments are still MP4 chunks, but they can’t be played standalone if you copy and paste a fragment URL - they need to be assembled together.
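For a sense of what the Network inspector shows, the fragment requests follow the Smooth Streaming URL pattern - the host, asset path, bitrate, and time offsets below are made up for illustration:

```
https://example.streaming.mediaservices.windows.net/asset-id/IntoFocus.ism/QualityLevels(1128000)/Fragments(video=120000000)
https://example.streaming.mediaservices.windows.net/asset-id/IntoFocus.ism/QualityLevels(1128000)/Fragments(video=140000000)
```

Each request grabs one timed chunk at one quality level, which is exactly why no single URL gives you the whole file.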
This is all fine and dandy, but I still need to cut a part out of the whole video for my project. Let’s dig some more! When Azure Media Services first starts playing the video, it downloads a manifest that contains all the information necessary for streaming. Again, a very calculated and smart move, because the manifest describes different streaming options depending on the device and available bandwidth, and also carries packaging information about the supported audio tracks.
If I look back at the video I am trying to get, the manifest request can be extracted from the browser’s Network inspector, and ends up being the following:
That’s an intimidating URL with intimidating content:
Worry not, though - this is all part of the flexibility of Azure Media Services. The file is a descriptor of the content, and the embedded player is then responsible for grabbing the right chunks depending on the characteristics of the system where the content is played. Notice that there is a separate adaptation set for each available audio track (and a few languages are supported, such as Spanish and Japanese).
This doesn’t help me, though. I still don’t have a direct video link that would let me download the file, and I don’t want to write a custom script to re-assemble the MP4 chunks. After a bit of digging, I found out that I can use ffmpeg, an open-source video processing tool, to do this for me. To do that, however, I will need an M3U file. But how do I get one? Well, Azure Media Services already does this for me, as it offers dynamic manifests. I just need to tweak the manifest URL, like so:
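The exact URL is specific to the stream, but with a hypothetical endpoint (host and asset path made up here) the change looks like this:

```
# Original Smooth Streaming manifest:
https://example.streaming.mediaservices.windows.net/asset-id/IntoFocus.ism/manifest

# Same asset, dynamically re-packaged as Apple HLS:
https://example.streaming.mediaservices.windows.net/asset-id/IntoFocus.ism/manifest(format=m3u8-aapl-v3)
```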
The only thing that changed is the (format=m3u8-aapl-v3) at the end - I am requesting the Apple HTTP Live Streaming V3 format. And the content of that manifest is now a little less intimidating!
There’s a catch here, though. If you look at the contents of the M3U, you might also notice the audiotrack=Spanish prefix. Because our video has multiple audio tracks, it seems like the generated M3U manifest defaults to the last node in the original one. Using this particular variant would result in me downloading a 2GB+ file with a Spanish audio track, while I need the English one.
To fix this, all I need to do is append an audiotrack=English argument inside the parentheses, like so (this is called filter composition):
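With a made-up endpoint standing in for the real one, the composed manifest URL would look like:

```
https://example.streaming.mediaservices.windows.net/asset-id/IntoFocus.ism/manifest(format=m3u8-aapl-v3,audiotrack=English)
```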
This should do it! Still no direct URL, but I will trust ffmpeg to know what it’s doing. To download and re-assemble the streamed video, I can call it from the terminal:
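A minimal invocation looks like this - the manifest URL below is made up, so substitute the real one captured from the Network inspector:

```shell
# Base manifest URL captured from the browser's Network inspector (hypothetical here).
BASE='https://example.streaming.mediaservices.windows.net/asset-id/IntoFocus.ism/manifest'

# Compose the dynamic-manifest filters: Apple HLS v3 plus the English audio track.
MANIFEST="${BASE}(format=m3u8-aapl-v3,audiotrack=English)"

# -i reads the HLS playlist and downloads every fragment;
# -c copy stitches the chunks into one MP4 without re-encoding.
ffmpeg -i "$MANIFEST" -c copy into-focus.mp4
```

And since the goal was to cut out only part of the video, ffmpeg’s seek flags can trim in the same pass, e.g. ffmpeg -ss 00:05:00 -to 00:12:00 -i "$MANIFEST" -c copy clip.mp4, which copies just that window without re-encoding.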
In a couple of minutes, I had a working video! Now that I think about it, the initial instant-gratification assumption was probably flawed and I could’ve just asked someone for the file, but then I wouldn’t have gotten to figure out this problem with ffmpeg. And let’s be real - most problems can probably be solved by ffmpeg.