Why caption? Why subtitle?
Because a large chunk of your audience either needs or prefers to watch video with text and without sound. Subtitles and captions are an essential part of business for any media organization that wants to reach the widest possible audience, for a multitude of reasons: legal requirements, international distribution, accessibility. Captions provide a textual representation of dialogue and other important audio for people with hearing loss. They also give the viewer additional information about the video, such as context for a news clip. Subtitles provide a translation of the dialogue, which is essential for reaching international markets.
So, we all agree captions and subtitles are key, but… creating them is hard.
First, you need solid speech-to-text transcription so that the right words are produced from the audio stream. As many of us who have worked with this technology for a while quickly learnt, that alone is not enough. There is the complicated business of segmenting the text correctly, laying it out, punctuating it, timing it and so on, which often creates a great deal of manual work for captioning and subtitling teams.
What if you could be smart about captioning and subtitling?
What if you had a technology solution that automates most of these editorial decisions by applying artificial intelligence to the output of a good speech-to-text engine? Enter Dalet Media Cortex.
Over the last few months, our teams have worked tirelessly to improve the quality of our Smart Captions, part of the Dalet Media Cortex Speech service, providing users with high-quality automatic captions and subtitles for their video content.
If you are using the Dalet Media Cortex API, Smart Captions will be delivered to you in the form of SRT or TTML files. If you are using Dalet Media Cortex integrated with Dalet Galaxy five or the Ooyala Flex Media Platform, captions will also be displayed as timecoded locators so that users can search and navigate through subtitles and captions easily.
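For context on the SRT deliverable, an SRT file is a plain-text sequence of numbered cues, each with a start and end timecode and one or more lines of text. Here is a minimal illustrative parser for that format; it is a sketch for readers unfamiliar with SRT, not part of the Dalet Media Cortex API:

```python
# Minimal illustrative SRT parser (not part of the Dalet Media Cortex API).
# An SRT file is a series of cues separated by blank lines:
#   cue index, "start --> end" timecodes, then the caption text.
import re

def parse_srt(content):
    """Return a list of (start, end, text) tuples from SRT-formatted text."""
    cues = []
    for block in re.split(r"\n\s*\n", content.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed blocks
        start, end = (t.strip() for t in lines[1].split("-->"))
        cues.append((start, end, "\n".join(lines[2:])))
    return cues

sample = """1
00:00:01,000 --> 00:00:03,500
Hello, and welcome
to the evening news.

2
00:00:04,000 --> 00:00:06,000
Tonight's top story.
"""

print(parse_srt(sample))
```

TTML, the other delivery option, is an XML-based W3C standard that additionally carries styling and positioning information.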
What is so special about Smart Captions? We have developed algorithms based on speech density, natural language processing and speaker diarization (the process of partitioning an input audio stream into homogeneous segments according to speaker identity) to generate captions and subtitles that follow the BBC Subtitle Guidelines as closely as possible. This takes care of the essential elements of captioning and subtitling: text that flows well and is synchronized with the speakers’ voices and cadence, lines that split at natural points based on sentence structure, and text length that is properly adjusted to screen size.
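To give a feel for one of these constraints, the BBC guidelines recommend limiting line length (roughly 37 characters per line for teletext-style subtitles). The toy greedy splitter below wraps caption text at word boundaries under such a limit; it is purely illustrative and is not the Smart Captions algorithm, which also weighs sentence structure, speech density and speaker changes:

```python
# Toy illustration of wrapping caption text at word boundaries under a
# per-line character limit (the BBC guidelines suggest about 37 characters).
# This is NOT the Smart Captions algorithm, just a naive greedy wrap.
def split_caption(text, max_chars=37):
    lines, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= max_chars:
            current = candidate  # word still fits on the current line
        else:
            if current:
                lines.append(current)  # close the line and start a new one
            current = word
    if current:
        lines.append(current)
    return lines

print(split_caption("Good captions break at natural points in the sentence."))
```

A purely length-based wrap like this can still split a line in the middle of a grammatical phrase, which is exactly why the guidelines (and Smart Captions) also consider sentence structure when choosing break points.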
How does this help you?
Beyond the improved quality of these AI-generated subtitles and captions, there are immediate and tangible benefits. You will see the time you spend adjusting subtitles or captions cut in half compared to traditional speech-to-text solutions. You don’t have an automatic captioning or subtitling system today? Good news: you will save over 80% of the time it takes to generate quality captions and subtitles for your video content.
Moreover, having great captions and subtitles will increase the value of your media and bring you new business opportunities, such as expanding into new markets or increasing your online and social media engagement.
No doubt Smart Captions is one of the many differentiators that make Dalet Media Cortex an “IBC Best of Show Award” winner.