Forum: CAT Tools Technical Help
Topic: Trim internal fuzzies (AutoIt script)
Poster: Samuel Murray
Post title: @Hans
[quote]Hans Lenting wrote:
[url removed] [/quote]
Thanks. I don't have NXT, but from what I can tell from that blog post, these two features are not what we're looking for.
According to the blog post (and the screenshots), in NXT one can create two types of reduced sets of data, namely a "translation extract" (which extracts all untranslated segments, for re-import later) and a "reference extract" (which, I'm guessing, extracts TUs from TMs and possibly also glossaries).
The reference extract does have the option of specifying a fuzzy threshold, so if NXT retains multiple instances of TUs in its TMs, then perhaps this feature can be used after all: create a source=target TM from the source file, run a "reference extract" of the source file against that TM only, export the result to a format that one can process (e.g. TMX), remove any TUs that occur only once (TUs that occur more than once would be TUs that matched more than just their own segment), convert what remains to a new plaintext source file with one segment per line, and remove duplicate lines. This all hinges on the assumption that NXT writes (retains) multiple instances of identical translations into its own TM... or that its TM system contains a segment re-use counter.
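To illustrate the TMX post-processing step, here's a rough Python sketch of what I have in mind (not the AutoIt script itself; the language code and file names are just placeholders). It assumes a plain TMX export with one seg element per tuv, no inline tags, and one TU written out per match:
[code]
from collections import Counter
import xml.etree.ElementTree as ET

SRC_LANG = "en"           # assumption: source-language code as written in the TMX
TMX_FILE = "extract.tmx"  # placeholder file name
OUT_FILE = "fuzzies.txt"  # placeholder file name

def source_segments(tmx_path):
    """Yield the source-language text of every TU in the TMX, in document order."""
    tree = ET.parse(tmx_path)
    for tu in tree.iter("tu"):
        for tuv in tu.iter("tuv"):
            # TMX 1.4 uses xml:lang; some older exports use a plain lang attribute
            lang = tuv.get("{http://www.w3.org/XML/1998/namespace}lang") or tuv.get("lang")
            if lang and lang.lower().startswith(SRC_LANG):
                seg = tuv.find("seg")
                if seg is not None and seg.text:
                    yield seg.text.strip()

segments = list(source_segments(TMX_FILE))
counts = Counter(segments)

# A TU that occurs only once matched nothing but its own segment, so drop it;
# whatever occurs more than once has internal fuzzy matches elsewhere in the file.
seen = set()
with open(OUT_FILE, "w", encoding="utf-8") as out:
    for seg in segments:
        if counts[seg] > 1 and seg not in seen:
            seen.add(seg)
            out.write(seg + "\n")
[/code]
Of course, this only yields anything useful if NXT's export really does contain one TU per match; if the export is already deduplicated, every count would be 1 and nothing would survive.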
See also my updated first post with a test file.
[Edited at 2020-08-17 08:08 GMT]