Studio Subtitling is a plugin for SDL Trados Studio 2019 that provides features for previewing subtitle captions within the video while translating segments in the Editor. It also includes a verification provider with QA checks specific to subtitle content, ensuring that the translations provided adhere to the agreed standards.
Going forward, we will extend our portfolio to support additional subtitle formats. The initial release includes file type support for the .srt (SubRip), .vtt (WebVTT), .sbv (YouTube) and .stl (Spruce Subtitle File) formats.
In addition, we are releasing a new TQA model generated from the FAR model, covering the primary requirements and providing a functional approach to assessing quality for subtitle formats, using the integrated Translation Quality Assessment feature of Studio.
Windows 8 and SDL Trados Studio 2019 are required as a minimum. The preview will not work with Windows 7 or earlier.
Complete the installation of the File Types from within the Studio Options.
The Subtitling preview and data controls are available for valid subtitle documents. The Preview control provides a real-time preview of the subtitles within the video, whereas the Data control displays the subtitle metadata for each of the segments, along with real time verification feedback as the linguist is providing translations in the editor.
Both the preview and data controls can be displayed and positioned anywhere within the Editor view. To display the preview or data control:
The selected segment from the editor is synchronized with the subtitle track that is displayed in the video. When the user moves to a new segment in the editor, the corresponding track is selected in the video; similarly, when the user selects a subtitle track in the video, focus is moved to the corresponding segment in the editor. Additionally, all changes applied to the translation are immediately visible in the subtitle caption in the video, including any formatting that was applied (e.g. bold, italic, underline…).
Displays real-time verification feedback as the content is being updated, giving the linguist a much more informed basis for making decisions while translating.
Support for splitting and merging segments within and across paragraph boundaries. When merging across paragraphs, all context and structure related to the paragraphs that have been fully merged into the parent paragraph are excluded from the native file that is regenerated with the target content.
Support for working with both merged and virtually merged files. Ideally, each subtitle file that is added to the project should have a corresponding video reference with the same name (with the exception of the file extension). Each time the user navigates to a new file within the document, the related video is loaded (if available) and the corresponding track in the video is selected.
Support for updating the display time of the subtitle captions. This feature permits a linguist to adapt the Start and End time-span data to align better with the translated content.
Support for converting and displaying the time-code format between the various standards (e.g. Milliseconds, Centiseconds & Frames)
These will be your default captions format settings until you change them again or click Reset to go back to the default captions format.
The time-code format is always displayed in milliseconds in the Subtitle Preview control. It is read as milliseconds, unless the subtitle document contains information to suggest otherwise. If the format should be read as Frames, select the 'Frames' option from the Time-code combobox in the Subtitle Preview Options dialog.
To successfully switch the Time-code format from Milliseconds to Frames:
Milliseconds Per Frame = 1000 / Frame Rate
Frames To Milliseconds = Frame * Milliseconds Per Frame
Given:
- Frame Rate = 24
- Frame = 4
Result:
- Milliseconds Per Frame = 41.666 (1000 / Frame Rate)
- Frames To Milliseconds = 166.666 (Frame * Milliseconds Per Frame)
Milliseconds Per Frame = 1000 / Frame Rate
Milliseconds To Frame = Milliseconds / Milliseconds Per Frame
Given:
- Frame Rate = 24
- Milliseconds = 166.666
Result:
- Milliseconds Per Frame = 41.666 (1000 / Frame Rate)
- Milliseconds To Frame = 4 (Milliseconds / Milliseconds Per Frame)
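The two conversions above can be sketched as small helpers (a minimal illustration; the function names are ours, not part of the plugin API — note that 1000 / 24 rounds to 41.667, while the worked example above truncates to 41.666):

```python
def ms_per_frame(frame_rate: float) -> float:
    """Duration of one frame in milliseconds."""
    return 1000 / frame_rate

def frames_to_ms(frame: int, frame_rate: float) -> float:
    """Convert a frame count to milliseconds."""
    return frame * ms_per_frame(frame_rate)

def ms_to_frames(milliseconds: float, frame_rate: float) -> int:
    """Convert milliseconds back to a whole frame count."""
    return round(milliseconds / ms_per_frame(frame_rate))

# Worked example from the text: at 24 fps, frame 4 is about 166.666 ms
print(round(ms_per_frame(24), 3))     # 41.667
print(round(frames_to_ms(4, 24), 3))  # 166.667
print(ms_to_frames(166.666, 24))      # 4
```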
Any document whose paragraphs have a context of type=sdl:section, code=sec, and contain a metadata entry with a key named timeStamp, whose value conforms to the following time span format:
Time span format: hh:mm:ss[.fffffff]
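Parsing that time span format can be sketched as follows (a minimal standard-library sketch; the fractional part is optional and is truncated to microsecond precision, since Python's `timedelta` does not hold seven fractional digits):

```python
import re
from datetime import timedelta

# Matches hh:mm:ss with an optional fractional part of up to 7 digits
TIMESTAMP = re.compile(r"^(\d{2}):(\d{2}):(\d{2})(?:\.(\d{1,7}))?$")

def parse_timestamp(value: str) -> timedelta:
    """Parse a time span in the hh:mm:ss[.fffffff] format."""
    match = TIMESTAMP.match(value)
    if not match:
        raise ValueError(f"Not a valid time span: {value!r}")
    hours, minutes, seconds, fraction = match.groups()
    # Right-pad the fraction to 7 digits, then keep microsecond precision
    micros = int((fraction or "0").ljust(7, "0")[:6])
    return timedelta(hours=int(hours), minutes=int(minutes),
                     seconds=int(seconds), microseconds=micros)

print(parse_timestamp("00:01:23.5"))  # 0:01:23.500000
```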
We have integrated a specific set of verification checks for working with Subtitle documents. You can specify these verification settings for your project, along with the existing verification tools (e.g. QA Checker 3.0, Tag Verifier and Terminology Verifier)
The number of characters per second
Default settings:
Report when less than: 12 (warning)
Report when greater than: 15 (warning)
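The CPS check can be sketched like this (an illustrative helper using the default thresholds above; the names are ours, not the plugin's internal API — spaces and punctuation are counted, per the note further down):

```python
def chars_per_second(text: str, start_ms: float, end_ms: float) -> float:
    """Characters per second; spaces and punctuation are counted."""
    duration = (end_ms - start_ms) / 1000
    return len(text) / duration

def check_cps(text: str, start_ms: float, end_ms: float,
              minimum: int = 12, maximum: int = 15) -> str:
    cps = chars_per_second(text, start_ms, end_ms)
    if cps < minimum:
        return "warning: CPS below minimum"
    if cps > maximum:
        return "warning: CPS above maximum"
    return "ok"

# A 40-character caption shown for 2 seconds is 20 cps, above the default max
print(check_cps("x" * 40, 0, 2000))  # warning: CPS above maximum
```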
Based on the recommended rate of 160-180 words per minute, you should aim to leave a subtitle on screen for a minimum period of around 3 words per second or 0.3 seconds per word (e.g. 1.2 seconds for a 4-word subtitle). However, timings are ultimately an editorial decision that depends on other considerations, such as the speed of speech, text editing and shot synchronization.
Default settings:
Report when less than: 160 (warning)
Report when greater than: 180 (warning)
Calculation:
Total Seconds = (End time - Start time) → Seconds
Words Per Second = Words / Total Seconds
Words Per Minute = Words Per Second * 60
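The calculation above can be sketched as follows (a minimal illustration; it counts words by whitespace splitting, which is an assumption on our part):

```python
def words_per_minute(text: str, start_ms: float, end_ms: float) -> float:
    """Words per minute, following the calculation steps above."""
    total_seconds = (end_ms - start_ms) / 1000
    words_per_second = len(text.split()) / total_seconds
    return words_per_second * 60

# A 12-word subtitle displayed for 4 seconds reads at 180 wpm
caption = "one two three four five six seven eight nine ten eleven twelve"
print(words_per_minute(caption, 0, 4000))  # 180.0
```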
The number of characters per line
Default settings:
Report when greater than: 39 (warning)
Method:
Returns the total number of characters for each line in the subtitle caption.
Report when any of the lines contains a number of characters greater than the CPL setting assigned by the user.
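The method described above can be sketched as (an illustrative helper with the default threshold; spaces and punctuation are counted):

```python
def check_cpl(caption: str, max_cpl: int = 39) -> list[str]:
    """Report each caption line whose character count exceeds the CPL setting."""
    warnings = []
    for number, line in enumerate(caption.splitlines(), start=1):
        if len(line) > max_cpl:
            warnings.append(f"warning: line {number} has {len(line)} characters")
    return warnings

print(check_cpl("A short line\n" + "x" * 45))
# ['warning: line 2 has 45 characters']
```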
LPS
Number of lines per subtitle
A maximum subtitle length of two lines is recommended. More than two lines should only be used if the linguist is confident that no important picture information will be obscured. When deciding between one long line or two short ones, consider line breaks, the number of words, the pace of speech and the image.
Default settings:
Report when greater than: 2 (warning)
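The LPS check reduces to a simple line count (an illustrative sketch with the default maximum of two lines):

```python
def check_lps(caption: str, max_lines: int = 2) -> str:
    """Warn when a subtitle has more lines than the LPS setting allows."""
    lines = caption.splitlines()
    if len(lines) > max_lines:
        return f"warning: {len(lines)} lines exceeds the maximum of {max_lines}"
    return "ok"

print(check_lps("line one\nline two\nline three"))
# warning: 3 lines exceeds the maximum of 2
```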
Spaces and punctuation are counted in all character counts.
TQA can be seen as a functional approach to measuring quality in the translated content and, from that assessment, evaluating and improving the process. We are introducing a new TQA model generated from the FAR model, covering the primary requirements and providing a functional approach to assessing quality for subtitle formats, using the integrated Translation Quality Assessment feature of Studio.
The definition of a standard semantic equivalence error would be a subtitle that contains errors, but still has bearing on the actual meaning and does not seriously hamper the viewers’ progress beyond that single subtitle. Standard semantic errors would also be cases where utterances that are important to the plot are left unsubtitled.
A serious semantic equivalence error scores 2 penalty points and is defined as a subtitle that is so erroneous that it makes the viewers’ understanding of the subtitle nil and would hamper the viewers’ progress beyond that subtitle, either by leading to plot misunderstandings or by being so serious as to disturb the contract of illusion for more than just one subtitle.
Stylistic errors are not as serious as semantic errors, as they cause nuisance, rather than misunderstandings.
Examples of stylistic errors would be erroneous terms of address, using the wrong register (too high or too low) or any other use of language that is out of tune with the style of the original (e.g. using modern language in historic films).
These are simply errors of target language grammar in various forms.
A serious grammar error makes the subtitle hard to read and/or comprehend. Minor errors are the pet peeves that annoy purists (e.g. misusing ‘whom’ in English). Standard errors fall in between.
Errors that fall into this category are not grammar errors, but errors which sound unnatural in the target language.
It should be pointed out that sometimes source text interference can become so serious that it becomes an equivalence issue.
Spelling errors could be judged according to gravity in the following way:
Consider penalizing anything over 15 cps and up to 20 cps as a standard error.
Above that is a serious error, as the viewer wouldn't have time to do anything apart from read the subtitles, and possibly not even finish that.
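The guideline in the two sentences above could be expressed as (an illustrative sketch; the thresholds are taken from the text, the severity labels from the FAR-based model):

```python
def cps_severity(cps: float) -> str:
    """Classify reading-speed (cps) issues per the guideline above."""
    if cps <= 15:
        return "ok"
    if cps <= 20:
        # Over 15 cps and up to 20 cps: a standard error
        return "standard error"
    # Above 20 cps: a serious error
    return "serious error"

print(cps_severity(17))  # standard error
print(cps_severity(25))  # serious error
```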
Spotting errors are caused by bad synchronization with speech (subtitles appear too soon, or disappear later than the permitted lag on out-times) or with the image (subtitles do not respect hard cuts).
Segmentation errors are when the semantic or syntactic structure of the message is not respected.
Q. Can I edit the Start and/or End times of the subtitle captions from the Subtitling Data control?
A. Yes, editing the time track data of the subtitle is fully supported. This feature permits a linguist to adapt the Start and End times to align better with the translated content.
Q. How does the plugin recognize which time-code format (i.e. milliseconds vs frames) to use?
A. The time-code format is read as milliseconds, unless the subtitle document contains information to suggest otherwise. If the format should be read as Frames, select the 'Frames' option from the Time-code combobox in the Subtitle Preview Options dialog. Please refer to Time-code format for more information.
Q. Can I merge across paragraphs when working with the supported subtitle formats?
A. Merging across paragraphs is fully supported with the latest release of the File Types (WebVTT, SBV & STL) version 1.0.3+ and Subtitling plugin version 1.0.8+. All context and structure related to the paragraphs that have been fully merged into the parent paragraph are excluded from the native file that is regenerated with the target content. The feature to merge across paragraphs in Studio is however limited, in so far as there is no action to un-merge paragraphs after they have been merged, other than performing an undo operation in the editor. If you perform an undo operation to un-merge merged paragraphs, you will subsequently need to click the Reload button on the Subtitling Preview control to refresh the data from both the Studio editor and the Subtitling data-grids.
Q. Can I add a new cue or remove an existing one from the list of cues visible from the Studio Preview Control?
A. Currently this is not possible, but we might consider this feature in the future. It would introduce a structural change to the paragraphs of the document that would then need to be reflected when regenerating the native format.
Q. I would like to add a verification check that is specific to subtitling, other than those included with the Studio Subtitling plugin; how can I do that?
A. Check whether what you are looking for is already included as a standard QA check in the other tools (e.g. QA Checker 3.0, Tag Verifier and Terminology Verifier). It is also possible to create/add your own via the Regular Expressions area in the QA Checker tool. In addition, please take the opportunity to communicate any improvements of this nature to the AppStore team, as we welcome any suggestions/feedback from the community in areas where we can improve the features for future releases.
Q. I don't have a video reference for the subtitle document that I'm working on; will the verification checks still work?
A. Yes, the QA checks will still function correctly, provided you are translating a document that is recognized correctly by the File Types we have released to support subtitle formats.
Q. Can I update the video reference after the project has been created?
A. The video reference can be associated with the subtitle document during project creation, or linked to the active document from the editor at a later time. Any updates to the video reference path from the editor are persisted in the project file without affecting the project resources.