Is there a way to preview a video with the translated subtitles displayed in some way or other?
Use Subtitle Edit instead. Using CAT tools for subtitling doesn't make much sense, as the texts are hardly ever repetitive... Plus, localized subtitles carry over the original MEANING, but are not necessarily a TRANSLATION of the original message. At least in MOVIE subtitles...
Not to mention the subtitle structure: sentences are usually broken down into separate parts, and the translation does not always match those parts 1:1 (different word orders, etc.). Plus the timing: subtitles need to be timed properly, and the translated subtitles usually need to be re-aligned (so that sentences/phrases are not cut in the middle, and the partial text displayed on the screen still makes some sense).
If your video is a file, like an MP4, open it in VLC (a free, open-source application). Then open the Subtitles menu and load your subtitle file (an SRT file, for example).
Then play your video, and the subtitles will appear in the lower part of the VLC window.
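If you want a repeatable preview setup, VLC can also load an external subtitle file from the command line via its `--sub-file` option (the file names below are placeholders; adjust them to your own files):

```shell
# Play an MP4 with an external SRT loaded as the subtitle track
vlc video.mp4 --sub-file=translated_subs.srt
```

This is handy when you regenerate the target SRT repeatedly during review and want to relaunch the same preview each time.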
I agree with regard to movies, but many videos are not movies and for sales and support videos CAT tools do make a lot of sense.
I completely agree with you, Daniel Hug. The amount of rich media being used these days has changed the face of video translation quite a bit, and it makes a lot of sense to be able to handle video translation with all the benefits of a CAT tool. We have one in Beta at the moment; it supports SRT, WebVTT, SBV and STL so far, all with some support for subtitle formatting in the video, positioning based on information in the file, and different aspect ratios. It looks like this, for example:
We have also added some QA checks... we could add more, but we want some feedback first to make sure the checks we put in here will be relevant:
And we have a TQA model using the FAR methodology:
We will add support for more filetypes in the future, but this needs to be based on demand and good reason, since this is not just about segmenting text: we also read and use a lot more information from the file to make the experience more useful.
We might be open to some external Beta testing if you are really going to test it and provide some feedback. All too often we get people asking who then provide nothing back to us at all, which makes our effort a waste of time. We might as well wait until we release and then let people complain!
To answer the points Evzen Polenka made, which are valid: we could have added the capability to adjust the frame information, update the corresponding information in the target files, etc., but we decided after discussion that it was better (for now) to leave those changes to an appropriately trained person and keep translation as a separate activity. Even so, with the information we provide there is considerable benefit in reducing the editing effort after translation.
That looks very promising! Is it going to be an add-on, or will it come with the next version or a CU?
As for segmenting, as Evzen Polenka pointed out, SRT segments are usually somewhat random, which is an obvious problem. I wonder whether, instead of segmenting by screen (bunching the text that appears on the screen at once into one segment, in this case "this is something completely new and unique"), you could apply the usual segmentation rules, ignoring paragraph marks and using only the full-stop rule (or at least offer that option), and treat the time markers as inline tags.
Here's an example:
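For reference, a small SRT file with cues matching the timestamps used below might look like this (the text is reconstructed from the segments shown; the exact line breaks are an assumption):

```
1
00:00:00,498 --> 00:00:02,827
- Here's what I love most about food and diet.

2
00:00:02,827 --> 00:00:06,383
We all eat several times a day,
and we're totally in charge

3
00:00:06,383 --> 00:00:09,427
of what goes on our plate
and what stays off.
```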
You could segment:
Segment 1: <Tag: 1><Tag: 00:00:00,498 --> 00:00:02,827>- Here's what I love most about food and diet.
Segment 2: <Tag: 2><Tag: 00:00:02,827 --> 00:00:06,383>We all eat several times a day,<Paragraph mark tag> and we're totally in charge <Tag: 3><Tag: 00:00:06,383 --> 00:00:09,427>of what goes on our plate<Paragraph mark tag> and what stays off.
It's easier and less error-prone (in terms of file corruption) to split segments than to merge them, if I understand past discussions on this forum correctly, so I'd rather segment less initially.
Daniel Hug said: "that looks very promising! Is it going to be an add-on or will it come with the next version or CU?"
Initial release will be as a plugin through the SDL AppStore for Studio 2019 only.
Daniel Hug said: "I wonder whether instead of segmenting by screen (bunching text that appears on the screen at once (in this case "this is something completely new and unique") into one segment), use the usual segmentation rules ignoring paragraph marks using only the full stop rule (or at least giving that option) and treat the time markers as inline tags."
The first point is that we deliberately restrict merging across paragraph breaks. The SRT filetype controls this, and we have done the same thing with WebVTT, SBV and STL. Allowing merging there adds a lot of complexity, because you don't have the ability to change the time frames, and you would also lose the ability to QA the work against the existing allowable time-per-character rates. Adding the times as internal tags would therefore remove the ability to QA altogether.
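To illustrate the kind of QA being referred to, a reading-speed check can be expressed as a maximum characters-per-second (CPS) rate per cue. This is a generic sketch, not the plugin's actual implementation; the 17 CPS default is just a commonly cited subtitling guideline, and real tools make the threshold configurable:

```python
def reading_speed_ok(timecode, text, max_cps=17.0):
    """Return True if the cue's text fits within a maximum
    characters-per-second reading rate for its time frame."""
    def to_seconds(ts):
        # "00:00:02,827" -> 2.827
        h, m, s_ms = ts.strip().split(":")
        s, ms = s_ms.split(",")
        return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

    start, end = (to_seconds(t) for t in timecode.split("-->"))
    duration = end - start
    chars = len(text.replace("\n", ""))  # count visible characters only
    return chars / duration <= max_cps
```

This also shows why merging across time frames breaks QA: once the text no longer maps onto a single cue's start and end time, there is no duration to check the character count against.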
The rationale behind the plugin is this:
If time frames or other changes need to be made, the translator can mark them for a localization engineer to handle later using suitable video subtitling/editing software.
We may revisit this in the future, but for now these are the problems we are addressing.
Hi Daniel Hug
We released the plugins and a TQA model for this yesterday. You can find them all on the AppStore here:
There's also a wiki here with more details on how it works:
Great, I just downloaded the plugins and am testing them now. Thanks for supplying this!
I will start a separate thread for any findings and questions, so this one does not get too frayed.