Studio Subtitling is a plugin for Trados Studio 2019 SR2 and Trados Studio 2021 that provides features for previewing subtitle captions within the video while translating segments in the Editor. It includes a verification provider with specific QA checks for validating subtitle content, ensuring that the translations provided adhere to the agreed standards.
As we move forward, we will extend our portfolio to support additional subtitle formats.
The current release includes file type support for the .ass (SubStation Alpha), .srt (SubRip), .vtt (WebVTT), .sbv (YouTube) and .stl (Spruce Subtitle file) formats.
Trados Studio 2019 SR2+ and Trados Studio 2021 support the following formats ‘out of the box’: .srt, .vtt, .sbv and .sub.
There may, however, be certain scenarios where you are advised to install the separate plugin for a better experience.
If you are working with .stl and/or .ass/.ssa file types, you will need to install the relevant plugin.
In addition, we have also released a new TQA model generated from the FAR model, covering the primary requirements in providing a functional approach to assessing quality for subtitle formats, using the integrated Translation Quality Assessment feature of Studio.
We recommend Windows 8.1 (preferably Windows 10) and SDL Trados Studio 2019 SR2 as a minimum.
Download the required File Type plugins to your local drive
Double-click each of the plugins to launch the Plugin Installer and complete the installation, selecting the supported versions of SDL Studio.
Complete the installation of the File types from within the Studio Options
The Subtitling preview and data controls are available for valid subtitle documents. The Preview control provides a real-time preview of the subtitles within the video, whereas the Data control displays the subtitle metadata for each of the segments, along with real-time verification feedback as the linguist provides translations in the editor.
Both the preview and data controls can be displayed and positioned anywhere within the Editor view. To display the preview or data control:
The selected segment from the editor is synchronized with the subtitle track that is displayed in the video. When the user moves to a new segment in the editor, the corresponding track is selected in the video and similarly when the user selects a subtitle track in the video, focus is moved to the corresponding segment in the editor. Additionally, changes applied to the translation are immediately visible in the subtitle caption from the video, including any formatting that was applied (e.g. bold, italic, underline…).
Displays real-time verification feedback as the content is being updated, providing the linguist with a much more informed approach to making decisions as they are translating.
Support for splitting and merging segments within and across paragraph boundaries. When merging across paragraphs, all context and structure related to the paragraphs that have been fully merged to the parent paragraph are excluded from the native file that is regenerated with the target content.
Support for working with both merged and virtually merged files. Ideally, each subtitle file that is added to the project should have a corresponding video reference with the same name (with the exception of the file extension). Each time the user navigates to a new file within the document, the relative video is loaded (if available) and the corresponding track in the video is selected.
Support for updating the display time of the subtitle captions. This feature permits a linguist to adapt the Start and End time-codes to align better with the translated content. This also includes hotkeys to automatically set the Start/End times from the current position of the video.
Support for converting and displaying the time-code format between Frames and Milliseconds.
Support for applying the rules of a styleguide to set a minimum time span between subtitles.
Support for assigning keyboard shortcuts to help automate interaction with the subtitling controls from the editor (e.g. jump back/forward from the video in seconds or frames, link/unlink the video, play back from previous subtitle etc...)
These will be your default captions format settings until you change them again or click Reset to go back to the default captions format.
The time-code format is always displayed in milliseconds in the Subtitle Preview control. It is read as milliseconds unless the subtitle document has information to suggest otherwise. If the format should be read as frames, select the 'Frames' option from the Time-code combobox in the Subtitle Preview Options dialog.
To successfully switch the Time-code format from Milliseconds to Frames:
Frames to milliseconds:
Milliseconds Per Frame = 1000 / Frame Rate
Frames To Milliseconds = Frame * Milliseconds Per Frame

Given:
- Frame Rate = 24
- Frame = 4
Result:
- Milliseconds Per Frame = 41.666 (1000 / Frame Rate)
- Frame To Milliseconds = 166.666 (Frame * Milliseconds Per Frame)

Milliseconds to frames:
Milliseconds Per Frame = 1000 / Frame Rate
Milliseconds To Frame = Milliseconds / Milliseconds Per Frame

Given:
- Frame Rate = 24
- Milliseconds = 166.666
Result:
- Milliseconds Per Frame = 41.666 (1000 / Frame Rate)
- Milliseconds To Frame = 4 (Milliseconds / Milliseconds Per Frame)
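As a quick illustration, the two conversions above can be sketched in Python. This is only a worked example of the formulas in this section (the 24 fps frame rate is just the example value used here), not the plugin's own code:

```python
def ms_per_frame(frame_rate: float) -> float:
    """Milliseconds Per Frame = 1000 / Frame Rate."""
    return 1000.0 / frame_rate

def frames_to_ms(frame: int, frame_rate: float) -> float:
    """Frames To Milliseconds = Frame * Milliseconds Per Frame."""
    return frame * ms_per_frame(frame_rate)

def ms_to_frames(milliseconds: float, frame_rate: float) -> float:
    """Milliseconds To Frame = Milliseconds / Milliseconds Per Frame."""
    return milliseconds / ms_per_frame(frame_rate)

# Example from this section: at 24 fps, frame 4 is roughly 166.666 ms
print(round(frames_to_ms(4, 24), 3))   # 166.667
print(round(ms_to_frames(166.666, 24), 3))
```

Note that the conversion is symmetric: converting frame 4 to milliseconds and back at the same frame rate returns frame 4.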
Any document whose paragraphs have a context of type=sdl:section, code=sec, and contain a metadata entry with a key named timeStamp, whose value conforms to the following time span format:
Time span format: hh:mm:ss[.fffffff]
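For illustration, a value in this hh:mm:ss[.fffffff] shape could be validated and converted to seconds with a small helper. This is a hypothetical sketch of the format, not the plugin's own parsing code:

```python
import re

# hh:mm:ss with an optional fractional part of up to 7 digits
TIMESTAMP_RE = re.compile(r"^(\d{2}):(\d{2}):(\d{2})(?:\.(\d{1,7}))?$")

def parse_timestamp(value: str) -> float:
    """Return the total seconds for a hh:mm:ss[.fffffff] time span."""
    match = TIMESTAMP_RE.match(value)
    if not match:
        raise ValueError(f"not a valid time span: {value!r}")
    hours, minutes, secs = (int(match.group(i)) for i in (1, 2, 3))
    total = hours * 3600 + minutes * 60 + secs
    fraction = match.group(4)
    if fraction:
        total += int(fraction) / (10 ** len(fraction))
    return total

print(parse_timestamp("00:01:30.5"))  # 90.5
```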
We have integrated a specific set of verification checks for working with Subtitle documents. You can specify these verification settings for your project, along with the existing verification tools (e.g. QA Checker 3.0, Tag Verifier and Terminology Verifier)
The number of characters per second
Default settings:
- Report when less than: 12 (warning)
- Report when greater than: 15 (warning)
Based on the recommended rate of 160-180 words per minute, you should aim to leave a subtitle on screen for a minimum period of around 3 words per second or 0.3 seconds per word (e.g. 1.2 seconds for a 4-word subtitle). However, timings are ultimately an editorial decision that depends on other considerations, such as the speed of speech, text editing and shot synchronization.
Default settings:
- Report when less than: 160 (warning)
- Report when greater than: 180 (warning)
Calculation:
- Total Seconds = (End time - Start time) → Seconds
- Words Per Second = Words / Total Seconds
- Words Per Minute = Words Per Second * 60
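The calculation above can be sketched as a small helper (a hypothetical function for illustration, not part of the plugin):

```python
def words_per_minute(word_count: int, start_seconds: float, end_seconds: float) -> float:
    """Apply the calculation above: WPS = words / duration, WPM = WPS * 60."""
    total_seconds = end_seconds - start_seconds
    words_per_second = word_count / total_seconds
    return words_per_second * 60

# A 6-word subtitle displayed for 2 seconds reads at 180 words per minute,
# right at the upper default threshold.
print(words_per_minute(6, 10.0, 12.0))  # 180.0
```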
The number of characters per line
Default settings:
- Report when greater than: 39 (warning)
Method:
- Returns the total number of characters for each line in the subtitle caption.
- Report when any line contains a number of characters greater than the CPL setting assigned by the user.
LPS: Number of lines per subtitle
A maximum subtitle length of two lines is recommended. Anything greater than that should be used if the linguist is confident that no important picture information will be obscured. When deciding between one long line or two short ones, consider line breaks, number of words, pace of speech and the image.
Default settings:
- Report when greater than: 2 (warning)
Spaces and punctuation are counted in all character counts.
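Putting the CPS, CPL and LPS defaults together, a simplified check could look like the following. This is a sketch using the default thresholds from this section, not the verifier's actual code:

```python
def check_subtitle(text: str, start_seconds: float, end_seconds: float,
                   max_cpl: int = 39, max_lines: int = 2,
                   min_cps: float = 12, max_cps: float = 15) -> list[str]:
    """Return warnings using the default thresholds described above.

    Spaces and punctuation are counted in all character counts.
    """
    warnings = []
    lines = text.split("\n")

    # LPS: number of lines per subtitle
    if len(lines) > max_lines:
        warnings.append(f"LPS: {len(lines)} lines (max {max_lines})")

    # CPL: characters per line
    for number, line in enumerate(lines, start=1):
        if len(line) > max_cpl:
            warnings.append(f"CPL: line {number} has {len(line)} characters (max {max_cpl})")

    # CPS: characters per second over the display duration
    duration = end_seconds - start_seconds
    characters = sum(len(line) for line in lines)
    cps = characters / duration
    if cps < min_cps:
        warnings.append(f"CPS: {cps:.1f} (min {min_cps})")
    elif cps > max_cps:
        warnings.append(f"CPS: {cps:.1f} (max {max_cps})")

    return warnings
```

For example, a 45-character single-line subtitle shown for 3 seconds sits exactly at 15 CPS but breaks the 39-character line limit, so only a CPL warning is reported.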
TQA can be seen as a functional approach to measuring quality in the translated content and, from that assessment, evaluating and improving the process. We are introducing a new TQA model generated from the FAR model, covering the primary requirements in providing a functional approach to assessing quality for subtitle formats, using the integrated Translation Quality Assessment feature of Studio.
The definition of a standard semantic equivalence error would be a subtitle that contains errors, but still has bearing on the actual meaning and does not seriously hamper the viewers’ progress beyond that single subtitle. Standard semantic errors would also be cases where utterances that are important to the plot are left unsubtitled.
A serious semantic equivalence error scores 2 penalty points and is defined as a subtitle that is so erroneous that it makes the viewers’ understanding of the subtitle nil and would hamper the viewers’ progress beyond that subtitle, either by leading to plot misunderstandings or by being so serious as to disturb the contract of illusion for more than just one subtitle.
Stylistic errors are not as serious as semantic errors, as they cause nuisance, rather than misunderstandings.
Examples of stylistic errors would be erroneous terms of address, using the wrong register (too high or too low) or any other use of language that is out of tune with the style of the original (e.g. using modern language in historic films).
These are simply errors of target language grammar in various forms.
A serious grammar error makes the subtitle hard to read and/or comprehend. Minor errors are the pet peeves that annoy purists (e.g. misusing ‘whom’ in English). Standard errors fall in between.
Errors that fall into this category are not grammar errors, but errors which sound unnatural in the target language.
It should be pointed out that sometimes source text interference can become so serious that it becomes an equivalence issue.
Spelling errors could be judged according to gravity in the following way:
Consider penalizing anything over 15 cps and up to 20 cps as a standard error.
Anything above that is serious, as the viewer wouldn't have time to do anything apart from read the subtitles, and possibly not even finish doing that.
Spotting errors are caused by bad synchronization with speech (subtitles appear too soon or disappear later than the permitted lag on out-times) or with the image (subtitles do not respect hard cuts).
Segmentation errors are when the semantic or syntactic structure of the message is not respected.
Spruce Subtitle *.stl
Advanced Substation Alpha *.ass
Q. Can I edit the Start and/or End time-codes of the subtitle from the Subtitling Data control?
A. Yes, editing the time-codes of the subtitle is fully supported. This feature permits a linguist to adapt the Start and End times to align better with the translated content.

Q. How does the plugin recognize which time-code format (i.e. milliseconds vs frames) to use?
A. The time-code format is read as milliseconds, unless the subtitle document has information to suggest otherwise. If the format should be read as frames, select the 'Frames' option from the Time-code combobox in the Subtitle Preview Options dialog. Please refer to Time-code Format for more information.

Q. Can I merge across paragraphs when working with the supported subtitle formats?
A. Merging across paragraphs is now fully supported with the latest release of the File Types (WebVTT, SBV, STL & ASS) version 1.0.3+ and Subtitling plugin version 1.0.8+. All context and structure related to the paragraphs that have been fully merged into the parent paragraph are excluded from the native file that is regenerated with the target content. Unmerging segments is no longer supported from Studio 2019 SR1 CU3.

Q. Can I add a new subtitle record or remove an existing one from the list of subtitles visible from the Studio Data Control?
A. This is currently not supported, as it would introduce a requirement to manage a structural change to the paragraphs of the document that would then need to be reflected when regenerating the native format.

Q. I would like to add a verification check that is specific to subtitling other than those included with the Studio Subtitling plugin; how can I do that?
A. Check whether what you are looking for is already included as a standard QA check in the other tools (e.g. QA Checker 3.0, Tag Verifier and Terminology Verifier). It is also possible to create/add your own via the Regular Expressions area in the QA Checker tool. In addition, please take the opportunity to communicate any improvements of this nature to the AppStore team, as we welcome any suggestions/feedback from the community in areas where we can improve the features for future releases.

Q. I don't have a video reference for the subtitle document that I'm working on; will the verification checks still work?
A. Yes, the QA checks will still function correctly without a video reference, provided you are working with a document that is recognized correctly by the File Types we have released to support subtitle formats.

Q. Can I update the video reference after the project has been created?
A. The video reference can be associated with the subtitle document during project creation or linked to the active document from the editor at a later time. Any updates to the video reference path from the editor are persisted in the project file without affecting the project resources.