Captions and Subtitles

A look at captions and subtitles and why they’re useful for your video’s viewers


Written by Rowan

Updated 2019-01-14


Do you know the difference between closed captions, open captions and subtitles? And do you know why they're useful for your viewers? Take a look at the video below or continue reading to find out everything you need to know about this video feature.

Watch our video to learn everything you need to know about captions and subtitles

The biggest benefit of using captions and subtitles is providing accessibility to people who are deaf or hard of hearing. Without some type of captioning, these people are going to find it difficult or impossible to follow your video content.

According to Action On Hearing Loss, this affects one in six people in the UK. That’s a lot of people!

The second, and perhaps equally important, benefit is giving viewers a way to engage with your video when they can’t watch it with sound – for example, when they’re in a busy or loud environment, or watching on a muted device like a smartphone or tablet.


The Difference Between Captions and Subtitles

The reason I keep saying “captions and subtitles” is that, although they’re similar in function and appearance, the two are actually designed for different purposes.

Subtitles tend to assume that the viewer can still hear the sound, and are simply a transcription of the narration and dialogue of a video. They’re commonly used to provide translations for viewers who speak another language, as those viewers can usually hear the video (and don’t need the sounds described) but don’t understand what is being said.

Captions tend to be for viewers who can’t hear the sound at all and include on-screen descriptions of sound effects, which can lead to some rather funny descriptions.

The reason I say “tend to” is that the terms “captions” and “subtitles” are commonly interchanged in everyday conversation, and as video has become more commonplace, there are certainly times when captions are referred to as subtitles, and vice versa.

Closed Captions vs Open Captions

Captions come in two flavours: closed captions and open captions.

Closed captions are captions that a viewer can turn on or off within their video player or on their device. They’re not persistent, which is useful for people who can hear the sound and don’t want to see them, but not so good for people who need them and might not know how to turn them on.

Closed captions require a level of technical understanding to display, and for video creators, closed captions aren’t supported on all platforms – for example, a number of popular social media platforms don’t support closed captions for videos at the moment, which isn’t great.

Open captions are captions that are embedded into the video itself, which is what we’ve done for this video. There’s no way to turn them off, which has the benefit of working everywhere the video plays, and there’s no technical ability required to make them appear. However, they can be distracting for viewers who don’t need them, and sometimes they can obscure whatever is behind them.

Implementation Challenges

Given the benefits of subtitles and captions, putting the effort into creating them is a no-brainer, but therein lies the difficulty: every subtitle or caption implementation requires additional effort above and beyond creating the video itself.

The old-fashioned way of manually transcribing your video to a word-perfect level is a very time-consuming process, especially if the video is long. Sitting and transcribing a day’s worth of conference recordings isn’t really anybody’s idea of fun and could take days or weeks.

Scripted content is easier to manage as you have the script as a reference, but somebody still needs to spend time making sure that the subtitles and captions appear at the right time in the video, regardless of how you implement them. For some types of content such as training and educational videos, it might make more sense to provide a transcript in the video description, which is what we usually tend to do for our own videos.
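To make the timing point concrete, subtitle and caption files such as SubRip (.srt) pair each piece of text with a start and end timestamp, and that’s what has to line up with the video. The sketch below is a minimal, illustrative Python example (the cue text and timings are made up), not a tool we actually use:

```python
# Build a minimal SubRip (.srt) file: each cue is an index,
# a "start --> end" timestamp line, the text, and a blank line.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def build_srt(cues):
    """cues: list of (start_seconds, end_seconds, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)

# Hypothetical cues for a short intro clip:
cues = [
    (0.0, 2.5, "Welcome to the video."),
    (2.5, 6.0, "Today we're talking about captions and subtitles."),
]
print(build_srt(cues))
```

A file like this can then be uploaded alongside the video on platforms that support closed captions, which is why getting the timings right matters as much as getting the words right.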

Some platforms such as YouTube will automatically create closed captions for your videos using artificial intelligence, but this can be hit and miss, and I often find that the automatic closed captions are full of errors – brand names, such as Vimsy, often appear as something completely different. It’s a bit like typing on your phone when autocorrect is drunk.

However, on YouTube specifically, you can go into your video settings and edit the automated subtitles, which saves time. You can also download your subtitles in a variety of formats which you can use on other platforms too. YouTube is actually one of the best, most easy-to-use closed caption generator tools I’ve ever used – and it’s free!

Conclusion

As speech-to-text technology improves, I believe creating subtitles and captions will become easier and less time-consuming, but for now there’s still, unfortunately, a question of whether they’re worth the effort to reach people who might not be able to follow your videos properly without them.

In my opinion, catering for people with accessibility needs should always be worth the time and effort. And from a business perspective, where not doing so could mean missed opportunities or lost sales, it simply makes sense to cater to as many people as possible.


Get more content like this

We regularly post articles like this to our blog. Want to get them as soon as they're released? Sign up to our mailing list.
