NIST 2009 open machine translation (OpenMT) evaluation.

Saved in:
Bibliographic Details
Imprint:[Philadelphia, PA] : Linguistic Data Consortium, c2010.
Description:1 CD-ROM ; 4 3/4 in.
Language:English
Arabic
Urdu
Subject:
Format: E-Resource
URL for this record:http://pi.lib.uchicago.edu/1001/cat/bib/8287525
Hidden Bibliographic Details
Other authors / contributors:National Institute of Standards and Technology (U.S.)
Linguistic Data Consortium.
ISBN:1585635707
9781585635702
Notes:Title from disc label.
"LDC2010T23"
"Contains the evaluation sets (source data and human reference translations), DTDs, scoring software, and evaluation plans from the Current tests (Progress tests are not included) of the NIST Open Machine Translation 2009 Evaluation."--Readme.txt.
"Test sets: Arabic-to-English, current test, Urdu-to-English, current test"--Readme.txt.
Also available on the Internet.
Summary:NIST 2009 Open Machine Translation (OpenMT) Evaluation, Linguistic Data Consortium (LDC) catalog number LDC2010T23 and ISBN 1-58563-570-7, is a package containing source data, reference translations, and scoring software used in the NIST 2009 OpenMT evaluation. It is designed to help evaluate the effectiveness of machine translation systems. The package was compiled, and scoring software was developed, by researchers at NIST, making use of broadcast, newswire, and web data and reference translations collected and developed by LDC.
The objective of the NIST Open Machine Translation (OpenMT) evaluation series is to support research in, and help advance the state of the art of, machine translation (MT) technologies -- technologies that translate text between human languages. Input may include all forms of text. The goal is for the output to be an adequate and fluent translation of the original.
The MT evaluation series started in 2001 as part of the DARPA TIDES (Translingual Information Detection, Extraction and Summarization) program. Beginning with the 2006 evaluation, the evaluations have been driven and coordinated by NIST as NIST OpenMT. These evaluations provide an important contribution to the direction of research efforts and the calibration of technical capabilities in MT. The OpenMT evaluations are intended to be of interest to all researchers working on the general problem of automatic translation between human languages. To this end, they are designed to be simple, to focus on core technology issues and to be fully supported. The 2009 task was to evaluate translation from Arabic to English and Urdu to English.
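Scoring in OpenMT-style evaluations compares system output against human reference translations using automatic metrics such as BLEU. As an illustration only (this is not the NIST mteval scoring software distributed on the disc, and the function and variable names are hypothetical), a minimal sentence-level BLEU sketch with clipped n-gram precision and a brevity penalty might look like:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of the token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU against a single reference.

    candidate, reference: lists of tokens. Returns a float in [0, 1].
    Uniform n-gram weights; not the official NIST mteval implementation.
    """
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clipped matches: each candidate n-gram counts only up to its
        # frequency in the reference, so repetition is not rewarded.
        matches = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(1, len(candidate) - n + 1)
        if matches == 0:
            return 0.0  # geometric mean collapses if any precision is zero
        log_precisions.append(math.log(matches / total))
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(log_precisions) / max_n)

candidate = "the cat sat on the mat".split()
reference = "the cat is on the mat".split()
score = bleu(candidate, reference, max_n=2)
```

Production scorers differ in several ways -- multiple references, corpus-level rather than sentence-level aggregation, and smoothing of zero n-gram counts -- but the clipped-precision core is the same idea.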