The DKU-JNU-EMA Electromagnetic Articulography Database

Bibliographic Details
Imprint: [Philadelphia, PA] : Linguistic Data Consortium, 2019
Description: 1 CD-ROM ; 4 3/4 in.
Language: Chinese
English
Subject: Chinese language -- Data processing; Computational linguistics
Format: Unknown
URL for this record: http://pi.lib.uchicago.edu/1001/cat/bib/12739868
Hidden Bibliographic Details
Other authors / contributors: Qin, Xiaoyi
Liu, Xinzhong
Cai, Zexin
Li, Ming
Linguistic Data Consortium.
ISBN: 1585638943
9781585638949
Notes: Title from disc label.
Authors: Xiaoyi Qin, Xinzhong Liu, Zexin Cai, Ming Li.
Release Date: July 15, 2019
Language(s): Yue Chinese, Hakka Chinese, Min Nan Chinese, Mandarin Chinese
Data type: text.
Data source: broadcast conversation, transcribed speech.
Data: Articulatory measurements were made with the NDI Wave research system for electromagnetic articulography. Six sensors were placed at various locations in each subject's mouth, and one reference sensor was placed on the bridge of the nose. For simultaneous recording of the speech signal, subjects also wore a head-mounted close-talk microphone. Speakers took part in four types of recording sessions: one in which they read complete sentences or short texts, and three in which they read sets of related words sharing a specific consonant, vowel, or tone. Audio data is presented as single-channel, 16 kHz, 16-bit FLAC-compressed WAV files. Articulography data is stored as UTF-8 plain text files.
LDC2019S14
Access restricted to University of Sydney staff and students. Educational use only.
In Chinese with English translation.
Summary: Introduction: The DKU-JNU-EMA Electromagnetic Articulography Database was developed by Duke Kunshan University and Jinan University and contains approximately 10 hours of articulography and speech data in Mandarin, Cantonese, Hakka, and Teochew Chinese from two to seven native speakers for each dialect. Electromagnetic articulography (EMA) is a method of measuring the position of parts of the mouth and their movement over time during speech and swallowing. Measurements are made from sensors placed in the mouth to capture real-time vocal tract variable trajectories. EMA is used in linguistics and language-related research to study phonetics, in particular, articulation (how sounds are made).
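The Data note above names only the container formats (FLAC-compressed WAV audio and UTF-8 plain text articulography traces). As a rough, non-authoritative sketch, the Python snippet below shows one way such a recording pair might be loaded; the file names and the column layout of the text files are assumptions for illustration, since neither is documented in this record.

    # Minimal sketch, assuming the soundfile and numpy packages.
    # File names and the text-file column layout are hypothetical.
    import numpy as np
    import soundfile as sf

    # Audio: single-channel, 16 kHz, 16-bit FLAC-compressed WAV.
    audio, rate = sf.read("mandarin_speaker01_sentence001.wav")
    print(f"audio: {len(audio) / rate:.2f} s at {rate} Hz")

    # Articulography: UTF-8 plain text, assumed here to hold one sample per
    # line with whitespace-separated coordinates for the six oral sensors
    # and the nose-bridge reference sensor.
    ema = np.loadtxt("mandarin_speaker01_sentence001.txt", encoding="utf-8")
    print("EMA samples x channels:", ema.shape)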

MARC

LEADER 00000cmm a2200000 a 4500
001 12739868
008 220324s2019 pau a chi
005 20220714160918.5
040 |a AZU  |b chi  |c AZU  |d OCLCQ  |d OCLCF  |d OCLCO  |d CGU 
020 |a 1585638943 
020 |a 9781585638949 
035 |a (OCoLC)1305380085 
041 1 |a eng  |a chi  |h chi 
050 4 |a PL1074.5  |b D492 2019 
245 0 4 |a The DKU-JNU-EMA Electromagnetic Articulography Database /  |c Linguistic Data Consortium 
260 |a [Philadelphia, PA] :  |b Linguistic Data Consortium,  |c 2019 
300 |a 1 CD-ROM ;  |c 4 3/4 in. 
336 |a text  |b txt  |2 rdacontent/chi 
337 |a computer  |b c  |2 rdamedia/chi 
338 |a computer disc  |b cd  |2 rdacarrier/chi 
500 |a Title from disc label. 
500 |a "Authors: Xiaoyi Qin, Xinzhong Liu, Zexin Cai, Ming Li. 
500 |a Release Date: July 15, 2019 
500 |a Language(s): Yue Chinese, Hakka Chinese, Min Nan Chinese, Mandarin Chinese 
500 |a Data type: text. 
500 |a Data source: broadcast conversation, transcribed speech. 
506 |a Access restricted to University of Sydney staff and students. Educational use only. 
520 |a Introduction: The DKU-JNU-EMA Electromagnetic Articulography Database was developed by Duke Kunshan University and Jinan University and contains approximately 10 hours of articulography and speech data in Mandarin, Cantonese, Hakka, and Teochew Chinese from two to seven native speakers for each dialect. Electromagnetic articulography (EMA) is a method of measuring the position of parts of the mouth and their movement over time during speech and swallowing. Measurements are made from sensors placed in the mouth to capture real-time vocal tract variable trajectories. EMA is used in linguistics and language-related research to study phonetics, in particular, articulation (how sounds are made). 
500 |a Data: Articulatory measurements were made with the NDI Wave research system for electromagnetic articulography. Six sensors were placed at various locations in each subject's mouth, and one reference sensor was placed on the bridge of the nose. For simultaneous recording of the speech signal, subjects also wore a head-mounted close-talk microphone. Speakers took part in four types of recording sessions: one in which they read complete sentences or short texts, and three in which they read sets of related words sharing a specific consonant, vowel, or tone. Audio data is presented as single-channel, 16 kHz, 16-bit FLAC-compressed WAV files. Articulography data is stored as UTF-8 plain text files. 
546 |a In Chinese with English translation. 
500 |a LDC2019S14 
650 0 |a Chinese language  |x Data processing. 
650 0 |a Computational linguistics. 
650 7 |a Chinese language  |x Data processing.  |2 fast  |0 (OCoLC)fst00857415 
650 7 |a Computational linguistics.  |2 fast  |0 (OCoLC)fst00871998 
700 1 |a Qin, Xiaoyi 
700 1 |a Liu, Xinzhong 
700 1 |a Cai, Zexin 
700 1 |a Li, Ming 
710 2 |a Linguistic Data Consortium. 
929 |a cat 
999 f f |s 48dfc178-e672-4e77-a5ea-2c1aa2fe1966  |i 02a33744-2141-4f31-8745-7f8e3cc03c41 
928 |t Library of Congress classification  |a PL1074.5.D492 2019  |p CDRom  |l ASR  |c ASR-JRLASR  |i 12876758 
927 |t Library of Congress classification  |a PL1074.5.D492 2019  |p CDRom  |l ASR  |c ASR-JRLASR  |b 115529531  |i 10401750