"Number is missing in target segment or is not properly localized"

Hi, I am a freelance translator for Czech, and my very important question about Studio 2019 is: how can I deal with the message "Number is missing in target segment or is not properly localized"? In fact, Studio 2019 (like all previous versions of Trados/Studio) does not know the rules for writing numbers in Czech documents, so the above-mentioned message is useless and disturbing.

As a translator, am I able to teach Studio to handle numbers as needed?

If not, shouldn't SDL make this serious failure known to everybody?

Many thanks in advance for all useful reactions.


36 Replies. Latest reply: 26 Oct 2018 11:06 AM by Steven Whale
  • In reply to Paul:

    Hi Paul,

    The problem arose because the source document was badly formatted: it contained a mixture of numbers correctly formatted in French with a non-breaking space and other sections where the period was used as the thousands separator. The number checking in Studio's standard QA checker was turned off, and the following images show the settings in the number verifier plugin. Even though both the space and the period are shown as valid thousands separators, I got hundreds of errors saying "numbers modified/unlocalised". This means that relatively unimportant "unlocalised" errors are mixed up with serious "modified" errors.

    My justification for my statement is that if error checking produces so many false positives that they cannot be examined manually, then there is, in effect, no error checking. It is a fact of life that translators have to deal with badly formatted documents and must do so without introducing errors of substance.

    XBench located the error immediately and produced no false positives.

    I hope this is of use and I am at your disposal if you have further questions.

    Thank you for your time.


  • In reply to Neil Allen:


    One simple way to detect such wrong or forgotten numbers and get a distinct error message would be this regex:

    This will interpret 123.456,12€ as three individual numbers, but it will flag up a modified number in all but one case I can think of. (That case would be that no thousands separators are used in the source, but some kind of separator is used in the target.) This is not as comprehensive as XBench, but I think it would catch most mistakes. Again, using Regex Autosuggest might greatly reduce the number of mistakes introduced into the target text.
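    The comparison described above can be sketched in Python. The regex itself did not survive in the thread, so the separator-agnostic \d+ pattern below is an assumption based on the behaviour described (treating 123.456,12 € as three separate numbers):

```python
import re
from collections import Counter

# Sketch of the described check: extract every bare digit run from source
# and target and compare them as multisets. \d+ deliberately splits
# "123.456,12" into "123", "456" and "12", which is what makes the
# comparison independent of the separator convention.
def numbers_match(source: str, target: str) -> bool:
    return Counter(re.findall(r"\d+", source)) == Counter(re.findall(r"\d+", target))
```

    A French source "123.456,12 €" then matches an English target "123,456.12 €", while a changed or missing digit run is flagged.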

    The formatting check might be done as a separate step, again with the advantage of getting a distinct error message:

    (((\d{1,3},)+\d+)(\.\d+))|((\d{1,3},)+\d{3}[ $€]?) ...or something similar would flag up numbers with a comma as thousands separator. (Just check the target.)
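    As a rough illustration, the pattern above can be exercised in Python; only the regex comes from the post, the variable and helper names are made up:

```python
import re

# Daniel's pattern from the post above: flags numbers that use a comma
# as the thousands separator (e.g. in a German or French target).
THOUSANDS_COMMA = re.compile(r"(((\d{1,3},)+\d+)(\.\d+))|((\d{1,3},)+\d{3}[ $€]?)")

def flag_comma_thousands(target: str) -> bool:
    """True if the target contains a number with comma thousands separators."""
    return THOUSANDS_COMMA.search(target) is not None
```

    Run against a German target, "1,234.56" and "1,234 €" are flagged, while "1.234,56 €" and the plain decimal "12,5" pass.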

    The position of a currency symbol or unit of measurement could be checked with a third regex, maybe (\d [€])|([€]\d) if you want to be alerted whenever the € symbol is anywhere but right after the last digit.
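    This third check can likewise be sketched in Python (again, only the regex is from the post; the helper name is illustrative):

```python
import re

# Flags a euro sign that is not directly after the last digit:
# either separated by a space ("12 €") or placed before the number ("€12").
EURO_POSITION = re.compile(r"(\d [€])|([€]\d)")

def flag_euro_position(target: str) -> bool:
    return EURO_POSITION.search(target) is not None
```

    With this pattern, "12€" passes while "12 €" and "€12" are flagged, matching the preference stated above.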

    I am not saying this rivals XBench - that is a whole product designed to do nothing but QA - but I think there are simple ways to use Studio's QA checks to safeguard against most number-related QA problems.


  • In reply to Daniel Hug:

    Daniel, thank you for your reply and your time.

    However, in all of the CAT tools I have used (WordFast, Déjà Vu (DVX) and MemoQ) I don't recall ever having to craft and test regular expressions in order to do a basic QA check on numbers.

    The problem lies in the attempt to detect non-localised numbers. If there were an option to turn this off and only detect numbers added or taken away, then this would completely resolve the problem.

    Until then, I see XBench or some equivalent solution as being an absolute necessity for anyone who does financial or technical translations.

  • In reply to Neil Allen:

    I have to agree with Neil about the irritation of false positives and how annoyed they make you with Studio.

    I have had problems with false positives from date recognition in Austrian for years.

    I reported it years ago and, since other people commented about other languages with date recognition problems in the same thread, I naively thought it would be corrected "soon".

    For example "1. Januar 2018" is NOT recognised as a date in Austrian, but its translation into English as "1 January 2018" IS recognised as a date.

    So I get three errors: two because the numbers "1." and "2018" are missing in the target segment ("1. Januar 2018" is recognised as two numbers separated by a word in Austrian), and a third because Studio thinks there is an extra date in the English target segment.

    I have been thinking about converting the Austrian TMs and termbases to German (which does not have this problem) to get rid of the annoyance, but that means convincing the clients and then actually spending the time to do the conversion, so I will probably let it slide another few years before I finally lose my patience. If I had known this in advance I definitely would not have used Austrian.

    Yes, it is technically possible to use regex when I happen to get another Austrian job that happens to have a lot of dates in it, but I don't have the patience for using a separate workflow in these cases. I just grit my teeth and spend the few additional minutes needed in that occasional job to check the list of "missing" numbers.
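    For anyone who does take the regex route, the date cross-check can be sketched in Python; the patterns and helper below are illustrative, not a Studio feature:

```python
import re

# Map German month names to English so dates can be paired across segments.
MONTHS = {
    "Januar": "January", "Februar": "February", "März": "March",
    "April": "April", "Mai": "May", "Juni": "June",
    "Juli": "July", "August": "August", "September": "September",
    "Oktober": "October", "November": "November", "Dezember": "December",
}
# "1. Januar 2018" in the source, "1 January 2018" in the target.
DE_DATE = re.compile(r"(\d{1,2})\.\s*(" + "|".join(MONTHS) + r")\s+(\d{4})")
EN_DATE = re.compile(r"(\d{1,2})\s+(" + "|".join(MONTHS.values()) + r")\s+(\d{4})")

def dates_match(source: str, target: str) -> bool:
    """True if every German date in the source has an English twin in the target."""
    src = {(d, MONTHS[m], y) for d, m, y in DE_DATE.findall(source)}
    return src == set(EN_DATE.findall(target))
```

    Segments whose dates pair up cleanly can then be skipped when working through the list of "missing" numbers.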

    It does, however, leave a bad taste in my mouth every time. (Customer satisfaction?)

    Best regards,
    Bruce Campbell
    ASAP Language Services

  • In reply to Neil Allen:

    Neil Allen wrote:

    Until then, I see XBench or some equivalent solution as being an absolute necessity for anyone who does financial or technical translations.

    XBench: I would definitely do the same if I were in your position.

    My point was not to say that everything is perfect with the Studio number check - it's not. I just meant to point out that there are simple ways (\d+ -> $1) to work around certain problems in most cases. Regex is far more limited than a programmatic solution; that's obvious. And, yes, checks that produce lots of false positives are useless - I turn them off.

