{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:27:40.525143Z"
},
"title": "Feature Extraction based on Maximizing the Accuracy of States in Deep Acoustic Models",
"authors": [
{
"first": "Li-Chia",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Chi Nan University",
"location": {}
},
"email": ""
},
{
"first": "Jeih-Weih",
"middle": [],
"last": "Hung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Chi Nan University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this study, we focus on developing a novel speech feature extraction technique to achieve noise-robust speech recognition, which employs the information from the backend acoustic models. Without further retraining and adapting the backend acoustic models, we use deep neural networks to learn the front-end acoustic speech feature representation that can achieve the maximum state accuracy",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this study, we focus on developing a novel speech feature extraction technique to achieve noise-robust speech recognition, which employs the information from the backend acoustic models. Without further retraining and adapting the backend acoustic models, we use deep neural networks to learn the front-end acoustic speech feature representation that can achieve the maximum state accuracy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "\u5728\u73fe\u4eca\u7684\u6642\u4ee3\uff0c\u96fb\u5b50\u7522\u54c1\u548c\u76f8\u95dc\u670d\u52d9\uff0c\u4f8b\u5982\u624b\u6a5f\u3001\u52a9\u807d\u5668\u3001\u8033\u6a5f\u3001\u96fb\u8a71\u6703\u8b70\u7cfb\u7d71\u7b49\uff0c\u5728 \u6211\u5011\u7684\u751f\u6d3b\u4e2d\u9010\u6f38\u8b8a\u6210\u65e5\u5e38\u4e14\u4e0d\u53ef\u6216\u7f3a\u7684\u9700\u6c42\uff0c\u5728\u9019\u4e9b\u8a2d\u5099\u548c\u670d\u52d9\u4e2d\uff0c\u8a9e\u97f3\u7684\u529f\u80fd\u548c\u61c9 \u7528(\u8a9e\u97f3\u4e92\u52d5\uff0c\u8a9e\u97f3\u901a\u8a71\uff0c\u8a9e\u97f3\u8fa8\u8b58\u7b49)\u662f\u4e00\u500b\u76f8\u7576\u91cd\u8981\u7684\u74b0\u7bc0\u3002\u7136\u800c\uff0c\u73fe\u5be6\u901a\u8a0a\u74b0\u5883 \u4e2d\u4e2d\u5b58\u5728\u5404\u7a2e\u5e72\u64fe\u6e90\uff0c\u800c\u9019\u4e9b\u5e72\u64fe\u6e90\u6703\u5e72\u64fe\u8a9e\u97f3\u8a0a\u865f\uff0c\u56e0\u6b64\u6e1b\u640d\u4e86\u4e0a\u8ff0\u8a9e\u97f3\u529f\u80fd\u548c\u61c9\u7528 \u7684\u6027\u80fd\u3002\u9019\u4e9b\u5e72\u64fe\u6e90\u5305\u62ec\u52a0\u6210\u6027\u96dc\u8a0a\u3001\u901a\u9053\u5e72\u64fe\u548c\u6df7\u97ff\u7b49\uff0c\u8fd1\u5e7e\u5341\u5e74\u4f86\uff0c\u5404\u65b9\u5df2\u7d93\u7814\u7a76 \u958b\u767c\u51fa\u591a\u7a2e\u6280\u8853\u4f86\u964d\u4f4e\u9019\u4e9b\u5e72\u64fe\u6548\u61c9\u3001\u4ee5\u6539\u9032\u8a9e\u97f3\u76f8\u95dc\u7684\u529f\u80fd\u3002\u6cbf\u6b64\u65b9\u5411\uff0c\u5728\u672c\u7814\u7a76\u4e2d\uff0c \u6211\u5011\u8457\u773c\u65bc\u8a9e\u97f3\u8a0a\u865f\u4e2d\u52a0\u6210\u6027\u96dc\u8a0a\u7684\u5e72\u64fe\u7684\u554f\u984c\uff0c\u63d0\u51fa\u4e86\u4e00\u7a2e\u5c08\u9580\u7528\u65bc\u63d0\u9ad8\u8a9e\u97f3\u8fa8\u8b58\u6e96 \u78ba\u5ea6\u7684\u65b0\u578b\u964d\u566a\u65b9\u6cd5\u3002 \u8a9e\u97f3\u901a\u8a71\u548c\u8fa8\u8b58\u4e2d\uff0c\u74b0\u5883\u96dc\u8a0a\u7684\u5b58\u5728\u5f88\u53ef\u80fd\u53cd\u6620\u5728\u8a9e\u97f3\u8a0a\u865f\u7684\u54c1\u8cea\u548c\u80fd\u8fa8\u5ea6\u4ee5\u53ca\u8fa8 
\u8b58\u7684\u6e96\u78ba\u5ea6\u4e0a\u3002\u70ba\u4e86\u964d\u4f4e\u96dc\u8a0a\u7684\u5f71\u97ff\uff0c\u7814\u7a76\u8005\u5f9e\u8a9e\u97f3\u8655\u7406\u7cfb\u7d71\u7684\u4e0d\u540c\u89d2\u5ea6\u63d0\u51fa\u4e86\u591a\u7a2e\u65b9 \u6cd5 \uff0c \u4f8b \u5982 \u524d \u7aef \u8a0a \u865f \u8655 \u7406 (front-end signal processing) \uff0c \u8072 \u5b78 \u7279 \u5fb5 \u64f7 \u53d6 (acoustic feature representation)\u548c\u5f8c\u7aef\u8072\u5b78\u6a21\u578b(back-end acoustic model)\u7b49\u3002 \u5728\u8072\u5b78\u7279\u5fb5\u64f7\u53d6\u65b9\u6cd5\u4e2d\uff0c\u4f8b\u5982\uff0c\u76f8\u5c0d\u983b\u8b5c\u6cd5(relative spectral analysis, RASTA) (Hermansky & Morgan, 1994 )\u8a2d\u8a08\u4e00\u500b\u57fa\u65bc\u8072\u5b78\u77e5\u8b58\u7684\u7121\u9650\u8108\u885d\u97ff\u61c9\u6ffe\u6ce2\u5668\uff0c\u5176\u61c9\u7528\u5728\u8a9e \u97f3 \u8a0a \u865f \u7684 \u5c0d \u6578 \u983b \u8b5c \u4e2d \u53ef \u4ee5 \u6709 \u6548 \u7684 \u6291 \u5236 \u8a0a \u865f \u4e2d \u975e \u8a9e \u97f3 \u7684 \u6210 \u5206 \uff0c \u8457 \u540d \u7684 RASTA-PLP (Hermansky, Morgan, Bayya & Kohn, 1991) \u8a9e\u97f3\u7279\u5fb5\u8868\u793a\u5c31\u662f\u5c07\u611f\u77e5\u7dda\u6027\u4f30\u8a08(perceptual linear prediction, PLP) (Hermansky, 1990 )\u7684\u8a9e\u97f3\u7279\u5fb5\u7d93\u7531 RASTA \u8655\u7406\u3002\u6b64\u5916\uff0c\u5c0d\u8a9e\u97f3\u7279 \u5fb5\u5e8f\u5217\u9032\u884c\u4e0d\u540c\u968e\u7d1a\u7684\u6b63\u898f\u5316\u53ef\u6709\u6548\u5730\u6e1b\u8f15\u8a13\u7df4\u8207\u6e2c\u8a66\u8cc7\u6599\u7684\u7d71\u8a08\u4e0d\u5339\u914d\uff0c\u800c\u7814\u7a76\u4e2d\u4e5f \u6307\u51fa\u85c9\u6b64\u53ef\u4ee5\u540c\u6642\u964d\u4f4e\u96dc\u8a0a\u7684\u5f71\u97ff\uff0c\u76f8\u95dc\u7684\u65b9\u6cd5\u5305\u62ec\u5e73\u5747\u6b63\u898f\u5316\u6cd5(mean normalization, MN) (Liu, Stern, Huang & Acero, 1993) \u3001\u6b63\u898f\u5316\u6cd5(mean and variance normalization, MVN) (Viikki & Laurila, 1998 )\u548c\u7d71\u8a08\u5716\u7b49\u5316\u6cd5(histogram equalization, HEQ) (Torre et al., 2005) \uff0c 
\u4e0a\u8ff0\u65b9\u6cd5\u5206\u5225\u5c0d\u8a9e\u97f3\u7279\u5fb5\u6b63\u898f\u5316\u4e86\u5e73\u5747\u3001\u5e73\u5747\u8207\u8b8a\u7570\u6578\u3001\u6a5f\u7387\u5bc6\u5ea6\u51fd\u6578\u3002 \u5728\u5f8c\u7aef\u8072\u5b78\u6a21\u578b\u4e2d\uff0c\u8072\u5b78\u6a21\u578b\u4e4b\u8abf\u9069\u6cd5\u65e8\u5728\u8abf\u6574\u8072\u5b78\u6a21\u578b\u53bb\u9069\u61c9\u5608\u96dc\u74b0\u5883\u4e0b\u7684\u8f38\u5165 \u8a9e\u97f3\u7279\u5fb5\uff0c\u4e00\u4e9b\u77e5\u540d\u7684\u65b9\u6cd5\u4f8b\u5982\u6700\u5927\u5f8c\u9a57\u6a5f\u7387\u81ea\u9069\u61c9\u6a21\u578b(maximum a posteriori adaptation, MAP) (Su, Tsao, Wu & Jean, 2013 )\u3001\u6700\u5927\u4f3c\u7136\u7dda\u6027\u56de\u6b78(maximum likelihood linear regression, MLLR) (Stolcke, Ferrer, Kajarekar, Shriberg & Venkataraman, 2005 )\u8207\u6700 \u5927\u4f3c\u7136\u7dda\u6027\u8f49\u63db(maximum likelihood linear transformation, MLLT) (Gales, 1998) \u61c9\u7528\u65bc\u8072 \u5b78\u6a21\u578b\u7684\u53c3\u6578(\u4f8b\u5982\u9ad8\u65af\u6df7\u548c\u6a21\u578b\u7684\u5e73\u5747\u8207\u5171\u8b8a\u7570\u6578)\u4e26\u9032\u884c\u6620\u5c04\u8f49\u63db\u3002\u6b64\u5916\uff0c\u9451\u5225\u5f0f \u8072\u5b78\u6a21\u578b\u540c\u6a23\u6709\u975e\u5e38\u597d\u7684\u6548\u679c\uff0c\u5176\u900f\u904e\u8a2d\u8a08\u8a13\u7df4\u6642\u7684\u76ee\u6a19\u51fd\u5f0f\uff0c\u4ee5\u9054\u5230\u76f4\u63a5\u6539\u5584\u8fa8\u8b58\u6642 \u7684\u6e96\u78ba\u7387\uff0c\u4f8b\u5982\uff0c\u5728\u55ae\u8a9e\u53e5\u8fa8\u8b58\u4e2d\uff0c\u6700\u5c0f\u5316\u5206\u985e\u932f\u8aa4(minimum classification error, MCE) (Juang, Hou & Lee, 1997) \u7684\u76ee\u6a19\u51fd\u5f0f\u662f\u6700\u4f73\u5316\u5176\u8fa8\u8b58\u7684\u5206\u985e\u7d50\u679c\uff0c\u800c\u975e\u53ea\u662f\u6a21\u64ec\u8f38\u5165\u7684\u8a9e 
\u53e5\uff1b\u5728\u5927\u8fad\u5f59\u9023\u7e8c\u8a9e\u97f3\u8fa8\u8b58\u4e2d\uff0c\u6700\u5c0f\u5316\u97f3\u7d20\u932f\u8aa4(minimum phone error, MPE) (Povey, 2003) \u548c\u6700\u5c0f\u5316\u8a5e\u932f\u8aa4(minimum word error, MWE) (Kuo & Chen, 2005) \u6240\u5f97\u7684\u8072\u5b78\u6a21\u578b\u8a13\u7df4\u76ee \u6a19\u662f\u5c0d\u8a13\u7df4\u8cc7\u6599\u80fd\u6700\u5c0f\u5316\u5c0d\u8fa8\u8b58\u932f\u8aa4\u7684\u5e73\u6ed1\u4f30\u8a08\u3002 \u7531\u65bc\u8fd1\u5e74\u4f86\u6df1\u5c64\u795e\u7d93\u7db2\u7d61(deep neural network, DNN)\u6280\u8853\u7684\u84ec\u52c3\u767c\u5c55\uff0c\u8a9e\u97f3\u8655\u7406\u7684 \u524d\u5f8c\u7aef\u65b9\u6cd5\u90fd\u5f97\u5230\u4e86\u986f\u8457\u6539\u5584\u548c\u9032\u6b65\uff0c\u5f9e\u800c\u7372\u5f97\u66f4\u597d\u7684\u6548\u80fd\uff0c\u4f8b\u5982\uff0c\u5728\u8a9e\u97f3\u5f37\u5316\u9818\u57df\u4e2d\uff0c \u53ef\u4ee5\u4f7f\u7528 DNN \u900f\u904e\u5927\u91cf\u6210\u5c0d\u7684\u96dc\u8a0a\u8a9e\u97f3\u8207\u4e7e\u6de8\u8a9e\u97f3\uff0c\u4f86\u5b78\u7fd2\u4e8c\u8005\u4e4b\u9593\u7684\u6620\u7167(mapping) \u95dc\u4fc2\uff1b\u5728\u8072\u5b78\u7279\u5fb5\u64f7\u53d6\u8207\u8072\u5b78\u6a21\u578b\u65b9\u9762\uff0cDNN \u540c\u6a23\u4e5f\u90e8\u4efd\u751a\u81f3\u5168\u9762\u5730\u53d6\u4ee3\u4e86\u50b3\u7d71\u7684\u65b9\u6cd5\uff0c \u4f8b\u5982\uff0cANN-HMM(artificial neural network-hidden Markov model) (Bourlard & Morgan, 1994 )\u7684\u96d9\u5411\u7cfb\u7d71\u85c9\u7531 ANN \u66f4\u7cbe\u78ba\u5730\u4f30\u8a08\u51fa\u8a9e\u97f3\u7279\u5fb5\u7684\u4f3c\u7136\u5206\u6578\uff0c\u6b64\u5916\uff0cTANDEM \u7cfb\u7d71 (Hermansky, Ellis & Sharma, 2000) \u8a13\u7df4 DNN \u7522\u751f\u8a9e\u97f3\u7279\u5fb5\u7684\u5f8c\u9a57\u6a5f\u7387\uff0c\u4e26\u5c07\u5176\u4f5c\u70ba\u984d\u5916 \u7684\u8cc7\u8a0a\u53bb\u8a13\u7df4\u50b3\u7d71\u7684\u8072\u5b78\u6a21\u578b\uff0c\u7814\u7a76\u4e2d\u4e5f\u6307\u51fa\u6b64\u65b9\u6cd5\u53ef\u4f7f\u6a21\u578b\u5177\u6709\u66f4\u597d\u7684\u5f37\u5065\u6027\uff0c\u540c\u6a23 
\u5730\uff0c\u74f6\u9838\u7279\u5fb5(bottleneck feature)\u6280\u8853 (Grezl, Karafiat, Kontar & Cernocky, 2007) ",
"cite_spans": [
{
"start": 548,
"end": 573,
"text": "(Hermansky & Morgan, 1994",
"ref_id": "BIBREF11"
},
{
"start": 670,
"end": 709,
"text": "(Hermansky, Morgan, Bayya & Kohn, 1991)",
"ref_id": "BIBREF12"
},
{
"start": 761,
"end": 777,
"text": "(Hermansky, 1990",
"ref_id": "BIBREF9"
},
{
"start": 897,
"end": 930,
"text": "(Liu, Stern, Huang & Acero, 1993)",
"ref_id": "BIBREF16"
},
{
"start": 975,
"end": 998,
"text": "(Viikki & Laurila, 1998",
"ref_id": "BIBREF25"
},
{
"start": 1037,
"end": 1057,
"text": "(Torre et al., 2005)",
"ref_id": "BIBREF24"
},
{
"start": 1254,
"end": 1280,
"text": "(Su, Tsao, Wu & Jean, 2013",
"ref_id": "BIBREF22"
},
{
"start": 1336,
"end": 1394,
"text": "(Stolcke, Ferrer, Kajarekar, Shriberg & Venkataraman, 2005",
"ref_id": "BIBREF21"
},
{
"start": 1455,
"end": 1468,
"text": "(Gales, 1998)",
"ref_id": "BIBREF4"
},
{
"start": 1611,
"end": 1635,
"text": "(Juang, Hou & Lee, 1997)",
"ref_id": "BIBREF14"
},
{
"start": 1713,
"end": 1726,
"text": "(Povey, 2003)",
"ref_id": "BIBREF17"
},
{
"start": 1760,
"end": 1778,
"text": "(Kuo & Chen, 2005)",
"ref_id": "BIBREF15"
},
{
"start": 2050,
"end": 2074,
"text": "(Bourlard & Morgan, 1994",
"ref_id": "BIBREF1"
},
{
"start": 2118,
"end": 2151,
"text": "(Hermansky, Ellis & Sharma, 2000)",
"ref_id": "BIBREF10"
},
{
"start": 2247,
"end": 2289,
"text": "(Grezl, Karafiat, Kontar & Cernocky, 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u7dd2\u8ad6 (Introduction)",
"sec_num": "1."
},
{
"text": "\u64f7\u53d6\u4e86 ANN \u7684\u4e2d\u9593\u7279\u5fb5\u4f5c\u70ba\u50b3\u7d71\u8072\u5b78\u6a21\u578b\u7684\u8f38\u5165\uff0c\u4e5f\u53ef\u4ee5\u6709\u6548\u5730\u63d0\u9ad8\u5176\u8fa8\u8b58\u6e96\u78ba\u5ea6\u3002 \u7279 \u5225 \u4e00 \u63d0 \u7684 \u662f \uff0c \u7531 \u65bc \u8a9e \u97f3 \u5f37 \u5316 \u6216 \u662f \u5608 \u96dc \u8a9e \u97f3 \u7279 \u5fb5 \u7684 \u6620 \u5c04 (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u7dd2\u8ad6 (Introduction)",
"sec_num": "1."
},
{
"text": "(multi-condition) \u7684 \u8a13 \u7df4 \u96c6 \u4f86 \u8a13 \u7df4 \u8072 \u5b78 \u6a21 \u578b \uff0c \u5176 \u8a9e \u97f3 \u8a0a \u865f \u647b \u96dc \u4e86 \u4e0d \u540c \u7a2e \u985e \u8207 \u8a0a \u96dc \u6bd4 (signal-to-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u7dd2\u8ad6 (Introduction)",
"sec_num": "1."
},
{
"text": "\u5728\u672c\u7ae0\u7bc0\u4e2d\uff0c\u6211\u5011\u5448\u73fe\u5be6\u9a57\u7d50\u679c\u4e26\u52a0\u4ee5\u8a0e\u8ad6\uff0c\u70ba\u4e86\u65b9\u4fbf\u8a0e\u8ad6\uff0c\u6211\u5011\u5c07\u6240\u63d0\u51fa\u7684\u65b9\u6cd5\u547d\u540d \u70ba\"\u6700\u5927\u5316\u8072\u5b78\u6a21\u578b\u72c0\u614b\u7cbe\u78ba\u7387\u6cd5\"\uff0c\u82f1\u6587\u70ba\"maximum state accuracy\"\uff0c\u4ee5\u7e2e\u5beb\"MSA\"\u8868\u793a\uff0c \u540c\u6642\uff0c\u6211\u5011\u4f7f\u7528\u4e86\u5169\u7a2e\u8a9e\u97f3\u5f37\u5316\u6cd5\u9032\u884c\u6bd4\u8f03\uff0c\u5206\u5225\u70ba\u6700\u5c0f\u5747\u65b9\u8aa4\u5dee\u77ed\u6642\u983b\u8b5c\u5f37\u5ea6\u4f30\u6e2c (minimum mean-square error short-time spectral amplitude estimation, \u7e2e\u5beb\u70ba MMSE-STSA) (Ephraim & Malah, 1984) \uff0c\u53ca\u7406\u60f3\u6bd4\u4f8b\u906e\u7f69\u6cd5(ideal ratio masking, \u7e2e\u5beb\u70ba IRM) (Wang, 2005) Word error rates (WER, %) achieved by different methods (baseline, MSA, FMSE, MMSE-STSA and IRM) ",
"cite_spans": [
{
"start": 218,
"end": 241,
"text": "(Ephraim & Malah, 1984)",
"ref_id": "BIBREF2"
},
{
"start": 282,
"end": 294,
"text": "(Wang, 2005)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 295,
"end": 392,
"text": "Word error rates (WER, %) achieved by different methods (baseline, MSA, FMSE, MMSE-STSA and IRM)",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u5be6\u9a57\u7d50\u679c\u8207\u8a0e\u8ad6 (Experimental Results and Discussions)",
"sec_num": "4."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A compact model for speaker-adaptive training",
"authors": [
{
"first": "T",
"middle": [],
"last": "Anastasakos",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mcdonough",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of Fourth International Conference on Spoken Language Processing (ICSLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICSLP.1996.607807"
]
},
"num": null,
"urls": [],
"raw_text": "Anastasakos, T., McDonough, J., Schwartz, R., & Makhoul, J. (1996). A compact model for speaker-adaptive training. In Proceedings of Fourth International Conference on Spoken Language Processing (ICSLP) 1996. doi : 10.1109/ICSLP.1996.607807",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Connectionist Speech Recognition: A Hybrid Approach",
"authors": [
{
"first": "H",
"middle": [],
"last": "Bourlard",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Morgan",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bourlard, H. & Morgan, N. (1994). Connectionist Speech Recognition: A Hybrid Approach. New York, NY: Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Ephraim",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Malah",
"suffix": ""
}
],
"year": 1984,
"venue": "IEEE Trans on Acoustics, Speech, and Signal Processing",
"volume": "32",
"issue": "6",
"pages": "1109--1121",
"other_ids": {
"DOI": [
"10.1109/TASSP.1984.1164453"
]
},
"num": null,
"urls": [],
"raw_text": "Ephraim, Y. & Malah, D. (1984). Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator. IEEE Trans on Acoustics, Speech, and Signal Processing, 32(6), 1109-1121. doi: 10.1109/TASSP.1984.1164453",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "MetricGAN: Generative adversarial networks based black-box metric scores optimization for speech enhancement",
"authors": [
{
"first": "S.-W",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "C.-F",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Tsao",
"suffix": ""
},
{
"first": "S.-D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2031--2041",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fu, S.-W., Liao, C.-F., Tsao, Y., & Lin, S.-D. (2019). MetricGAN: Generative adversarial networks based black-box metric scores optimization for speech enhancement. In Proceedings of the 36th International Conference on Machine Learning 2019, 2031-2041.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Maximum likelihood linear transformations for hmm-based speech recognition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gales",
"suffix": ""
}
],
"year": 1998,
"venue": "Computer Speech and Language",
"volume": "12",
"issue": "2",
"pages": "75--98",
"other_ids": {
"DOI": [
"10.1006/csla.1998.0043"
]
},
"num": null,
"urls": [],
"raw_text": "Gales, M. (1998). Maximum likelihood linear transformations for hmm-based speech recognition. Computer Speech and Language, 12(2), 75-98. doi: 10.1006/csla.1998.0043",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1 (NASA STI/Recon",
"authors": [
{
"first": "J",
"middle": [],
"last": "Garofolo",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lamel",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fiscus",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pallett",
"suffix": ""
}
],
"year": 1993,
"venue": "Technical Report N",
"volume": "93",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Garofolo, J., Lamel, L., Fisher, W., Fiscus, J., & Pallett, D. (1993). DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1 (NASA STI/Recon Technical Report N, vol. 93, p. 27403).Retrieved from https://ui.adsabs.harvard.edu/abs/1993STIN...9327403G",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Probabilistic and bottleneck features for lvcsr of meetings",
"authors": [
{
"first": "F",
"middle": [],
"last": "Grezl",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Karafiat",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kontar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cernocky",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2007.367023"
]
},
"num": null,
"urls": [],
"raw_text": "Grezl, F., Karafiat, M., Kontar, S., & Cernocky, J. (2007). Probabilistic and bottleneck features for lvcsr of meetings. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 2007. doi: 10.1109/ICASSP.2007.367023",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Linear discriminant analysis for improved large vocabulary continuous speech recognition",
"authors": [
{
"first": "R",
"middle": [],
"last": "Haeb-Umbach",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICASSP.1992.225984"
]
},
"num": null,
"urls": [],
"raw_text": "Haeb-Umbach, R. & Ney, H. (1992). Linear discriminant analysis for improved large vocabulary continuous speech recognition. In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing 1992. doi : 10.1109/ICASSP.1992.225984",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep neural network based spectral feature mapping for robust speech recognition",
"authors": [
{
"first": "K",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bagchi",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of INTERSPEECH 2015",
"volume": "",
"issue": "",
"pages": "2484--2488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han, K., He, Y., Bagchi, D., Fosler-Lussier, E., & Wang, D. (2015). Deep neural network based spectral feature mapping for robust speech recognition. In Proceedings of INTERSPEECH 2015, 2484-2488",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Perceptual linear predictive (PLP) analysis of speech",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hermansky",
"suffix": ""
}
],
"year": 1990,
"venue": "The Journal of the Acoustical Society of America",
"volume": "87",
"issue": "4",
"pages": "1738--1752",
"other_ids": {
"DOI": [
"10.1121/1.399423"
]
},
"num": null,
"urls": [],
"raw_text": "Hermansky, H. (1990). Perceptual linear predictive (PLP) analysis of speech. The Journal of the Acoustical Society of America, 87(4), 1738-1752. doi: 10.1121/1.399423",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "TANDEM connectionist feature extraction for conventional hmm systems",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hermansky",
"suffix": ""
},
{
"first": "D",
"middle": [
"P W"
],
"last": "Ellis",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2000.862024"
]
},
"num": null,
"urls": [],
"raw_text": "Hermansky, H., Ellis, D. P. W., & Sharma, S. (2000). TANDEM connectionist feature extraction for conventional hmm systems. In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing 2000. doi: 10.1109/ICASSP.2000.862024",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "RASTA processing of speech",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hermansky",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Morgan",
"suffix": ""
}
],
"year": 1994,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "2",
"issue": "4",
"pages": "578--589",
"other_ids": {
"DOI": [
"10.1109/89.326616"
]
},
"num": null,
"urls": [],
"raw_text": "Hermansky, H. & Morgan, N. (1994). RASTA processing of speech. IEEE Transactions on Speech and Audio Processing, 2(4), 578-589. doi: 10.1109/89.326616",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Compensation for the effect of the communication channel in auditory-like analysis of speech (RASTA-PLP)",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hermansky",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Morgan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bayya",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kohn",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of EUROSPEECH 1991",
"volume": "",
"issue": "",
"pages": "1367--1370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hermansky, H., Morgan, N., Bayya, A., & Kohn, P. (1991). Compensation for the effect of the communication channel in auditory-like analysis of speech (RASTA-PLP). In Proceedings of EUROSPEECH 1991, 1367-1370.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups",
"authors": [
{
"first": "G",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "G",
"middle": [
"E"
],
"last": "Dahl",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE Signal Processing Magazine",
"volume": "29",
"issue": "6",
"pages": "82--97",
"other_ids": {
"DOI": [
"10.1109/MSP.2012.2205597"
]
},
"num": null,
"urls": [],
"raw_text": "Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A., Jaitly, N., \u2026 Kingsbury, B. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6), 82-97. doi: 10.1109/MSP.2012.2205597",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Minimum classification error rate methods for speech recognition",
"authors": [
{
"first": "B.-H",
"middle": [],
"last": "Juang",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Lee",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "5",
"issue": "3",
"pages": "257--265",
"other_ids": {
"DOI": [
"10.1109/89.568732"
]
},
"num": null,
"urls": [],
"raw_text": "Juang, B.-H., Hou, W. & Lee, C.H. (1997). Minimum classification error rate methods for speech recognition. IEEE Transactions on Speech and Audio Processing, 5(3), 257-265. doi : 10.1109/89.568732",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Minimum word error based discriminative training of language models",
"authors": [
{
"first": "J.-W",
"middle": [],
"last": "Kuo",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Interspeech'2005 -Eurospeech",
"volume": "",
"issue": "",
"pages": "1277--1280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuo, J.-W. & Chen, B. (2005). Minimum word error based discriminative training of language models. In Proceedings of Interspeech'2005 -Eurospeech, 1277-1280.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficient cepstral normalization for robust speech recognition",
"authors": [
{
"first": "F",
"middle": [
"H"
],
"last": "Liu",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Stern",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Acero",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the workshop on Human Language Technology (HLT '93)",
"volume": "",
"issue": "",
"pages": "69--74",
"other_ids": {
"DOI": [
"10.3115/1075671.1075688"
]
},
"num": null,
"urls": [],
"raw_text": "Liu, F. H., Stern, R. M., Huang, X., & Acero, A. (1993). Efficient cepstral normalization for robust speech recognition. In Proceedings of the workshop on Human Language Technology (HLT '93), 69-74. doi: 10.3115/1075671.1075688",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Discriminative training for large vocabulary speech recognition (Doctoral dissertation)",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D. (2003). Discriminative training for large vocabulary speech recognition (Doctoral dissertation). University of Cambridge, UK.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Kaldi speech recognition toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vesely",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek, O., Goel, N. \u2026 Vesely, K. (2011). The Kaldi speech recognition toolkit. In Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding 2011.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The PyTorch-Kaldi speech recognition toolkit",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ravanelli",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Parcollet",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2019.8683713"
]
},
"num": null,
"urls": [],
"raw_text": "Ravanelli, M., Parcollet, T., & Bengio, Y. (2019). The PyTorch-Kaldi speech recognition toolkit. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2019 ). doi: 10.1109/ICASSP.2019.8683713",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Perceptual evaluation of speech quality (PESQ) -a new method for speech quality assessment of telephone networks and codecs",
"authors": [
{
"first": "A",
"middle": [
"W"
],
"last": "Rix",
"suffix": ""
},
{
"first": "J",
"middle": [
"G"
],
"last": "Beerends",
"suffix": ""
},
{
"first": "M",
"middle": [
"P"
],
"last": "Hollier",
"suffix": ""
},
{
"first": "A",
"middle": [
"P"
],
"last": "Hekstra",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2001.941023"
]
},
"num": null,
"urls": [],
"raw_text": "Rix, A. W., Beerends, J. G., Hollier, M. P., & Hekstra, A. P. (2001). Perceptual evaluation of speech quality (PESQ) -a new method for speech quality assessment of telephone networks and codecs. In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing 2001. doi: 10.1109/ICASSP.2001.941023",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "MLLR transforms as features in speaker recognition",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ferrer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kajarekar",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Venkataraman",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Eurospeech 2005",
"volume": "",
"issue": "",
"pages": "2425--2428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, A., Ferrer, L., Kajarekar, S., Shriberg, E., & Venkataraman, A. (2005). MLLR transforms as features in speaker recognition. In Proceedings of Eurospeech 2005, 2425-2428.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Speech enhancement using generalized maximum a posteriori spectral amplitude estimator",
"authors": [
{
"first": "Y.-C",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Tsao",
"suffix": ""
},
{
"first": "J.-E",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "F.-R",
"middle": [],
"last": "Jean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2013.6639114"
]
},
"num": null,
"urls": [],
"raw_text": "Su, Y.-C., Tsao, Y., Wu, J.-E., & Jean, F.-R. (2013). Speech enhancement using generalized maximum a posteriori spectral amplitude estimator. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 2013. doi: 10.1109/ICASSP.2013.6639114",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A short-time objective intelligibility measure for time-frequency weighted noisy speech",
"authors": [
{
"first": "C",
"middle": [
"H"
],
"last": "Taal",
"suffix": ""
},
{
"first": "R",
"middle": [
"C"
],
"last": "Hendriks",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Heusdens",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Jensen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2010.5495701"
]
},
"num": null,
"urls": [],
"raw_text": "Taal, C. H., Hendriks, R. C., Heusdens, R., & Jensen, J. (2010). A short-time objective intelligibility measure for time-frequency weighted noisy speech. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 2010. doi: 10.1109/ICASSP.2010.5495701",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Histogram equalization of the speech representation for robust speech recognition",
"authors": [
{
"first": "A",
"middle": [],
"last": "Torre",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Peinado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Segura",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "P\u00e9rez-C\u00f3rdoba",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Benitez",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rubio",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "13",
"issue": "3",
"pages": "355--366",
"other_ids": {
"DOI": [
"10.1109/TSA.2005.845805"
]
},
"num": null,
"urls": [],
"raw_text": "Torre, A., Peinado, A., Segura, J., P\u00e9rez-C\u00f3rdoba, J., Benitez, C., & Rubio, A. (2005). Histogram equalization of the speech representation for robust speech recognition. IEEE Transactions on Speech and Audio Processing, 13(3), 355-366. doi: 10.1109/TSA.2005.845805",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Cepstral domain segmental feature vector normalization for noise robust speech recognition",
"authors": [
{
"first": "O",
"middle": [],
"last": "Viikki",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Laurila",
"suffix": ""
}
],
"year": 1998,
"venue": "Speech Communication",
"volume": "25",
"issue": "1-3",
"pages": "133--147",
"other_ids": {
"DOI": [
"10.1016/S0167-6393(98)00033-8"
]
},
"num": null,
"urls": [],
"raw_text": "Viikki, O. & Laurila, K. (1998). Cepstral domain segmental feature vector normalization for noise robust speech recognition. Speech Communication, 25(1-3), 133-147. doi: 10.1016/S0167-6393(98)00033-8",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "On ideal binary mask as the computational goal of auditory scene analysis",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2005,
"venue": "Speech Separation by Humans and Machines",
"volume": "",
"issue": "",
"pages": "181--197",
"other_ids": {
"DOI": [
"10.1007/0-387-22794-6_12"
]
},
"num": null,
"urls": [],
"raw_text": "Wang, D. (2005). On ideal binary mask as the computational goal of auditory scene analysis. In: Divenyi P. (eds) Speech Separation by Humans and Machines (pp. 181-197), Springer, Boston, MA. doi: 10.1007/0-387-22794-6_12",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Wiener filtering based speech enhancement with weighted denoising auto-encoder and noise classification",
"authors": [
{
"first": "B",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bao",
"suffix": ""
}
],
"year": 2014,
"venue": "Speech Communication",
"volume": "60",
"issue": "",
"pages": "13--29",
"other_ids": {
"DOI": [
"10.1016/j.specom.2014.02.001"
]
},
"num": null,
"urls": [],
"raw_text": "Xia, B. & Bao, C. (2014). Wiener filtering based speech enhancement with weighted denoising auto-encoder and noise classification. Speech Communication, 60, 13-29. doi: 10.1016/j.specom.2014.02.001",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Training supervised speech separation system to improve STOI and PESQ directly",
"authors": [
{
"first": "H",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2018.8461965"
]
},
"num": null,
"urls": [],
"raw_text": "Zhang, H., Zhang, X., & Gao, G. (2018). Training supervised speech separation system to improve STOI and PESQ directly. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 2018. doi: 10.1109/ICASSP.2018.8461965",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>\u5f35\u7acb\u5bb6\u8207\u6d2a\u5fd7\u5049</td></tr><tr><td>\u5c04 \uff0c \u7528 \u4ee5 \u6700 \u5927 \u5316 \u5730 \u63d0 \u9ad8 \u8a9e \u97f3 \u8fa8 \u8b58 \u7cfb \u7d71 \u4e2d \u5f8c \u7aef \u8072 \u5b78 \u6a21 \u578b \u72c0 \u614b \u7684 \u5f8c \u9a57 \u6a5f \u7387 (posterior</td></tr><tr><td>probability)\u3002\u6211\u5011\u9810\u671f\u5f97\u5230\u7684\u65b0\u8a9e\u97f3\u7279\u5fb5\u5728\u8fa8\u8b58\u6e96\u78ba\u5ea6\u65b9\u9762\u5c07\u512a\u65bc\u539f\u59cb\u7279\u5fb5\uff0c\u4e26\u4e14\u5177\u6709</td></tr><tr><td>\u5c0d\u96dc\u8a0a\u7684\u5f37\u5065\u6027\u3002</td></tr><tr><td>\u5728\u4ee5\u4e0b\u7ae0\u7bc0\u4e2d\uff0c\u6211\u5011\u4ecb\u7d39\u65b0\u63d0\u51fa\u7684\u8a9e\u97f3\u7279\u5fb5\u64f7\u53d6\u65b9\u6cd5\uff0c\u4e26\u63a2\u8a0e\u5b83\u7684\u7279\u9ede\u8207\u53ef\u80fd\u7684\u512a</td></tr><tr><td>\u52e2\u3002\u7136\u5f8c\u9032\u884c\u5be6\u9a57\u8207\u5206\u6790\u7d50\u679c\u3002\u800c\u5f8c\u4ee5\u7d50\u8ad6\u4f5c\u7d42\u3002</td></tr><tr><td>2</td></tr><tr><td>Han, He, Bagchi,</td></tr><tr><td>Fosler-Lussier &amp; Wang, 2015)\u65e8\u5728\u5c07\u96dc\u8a0a\u8a9e\u97f3\u8a0a\u865f\u6216\u662f\u5176\u7279\u5fb5\u8f49\u63db\u56de\u53d7\u96dc\u8a0a\u5e72\u64fe\u524d\u7684\u539f</td></tr><tr><td>\u59cb\u503c\uff0c\u9019\u662f\u6a5f\u5668/\u6df1\u5ea6\u5b78\u7fd2\u4e2d\u5178\u578b\u7684\u56de\u6b78(regression)\u554f\u984c\uff0c\u56e0\u6b64\uff0c\u5728\u5176\u65b9\u6cd5\u4e2d DNN \u7684\u8a13</td></tr><tr><td>\u7df4\u7d93\u5e38\u4f7f\u7528\u5747\u65b9\u8aa4\u5dee(mean squared error, MSE)\u4f5c\u70ba\u640d\u5931\u51fd\u6578\u3001\u85c9\u7531\u5176\u6700\u5c0f\u5316\u4f86\u5b78\u7fd2 DNN</td></tr><tr><td>\u6a21\u578b\u53c3\u6578\u3002\u7136\u800c\uff0c\u5728\u8a55\u4f30\u65b9\u6cd5\u7684\u6027\u80fd\u6642\uff0c\u901a\u5e38\u6703\u4f7f\u7528\u5176\u4ed6\u4e00\u4e9b\u5ba2\u89c0\u7684\u6307\u6a19\uff0c\u4f8b\u5982\u8a9e\u97f3\u54c1</td></tr><tr><td>\u8cea\u7684\u611f\u77e5\u8a55\u4f30(perceptual evaluation of speech quality, PESQ) 
(Rix, Beerends, Hollier &amp;</td></tr><tr><td>Hekstra, 2001)\u3001\u77ed\u6642\u5ba2\u89c0\u80fd\u8fa8\u5ea6(short-time objective intelligibility, STOI) (Taal, Hendriks,</td></tr><tr><td>Heusdens &amp; Jensen, 2010)\u6216\u8a5e\u932f\u8aa4\u7387(word error rate, WER)\u3002\u9019\u4e9b\u8a55\u4f30\u5206\u6578\u4e0d\u4e00\u5b9a\u8207\u9084\u539f</td></tr><tr><td>\u5f8c\u7684\u8a9e\u97f3\u548c\u539f\u59cb\u8a9e\u97f3\u4e4b\u9593\u7684\u5747\u65b9\u8aa4\u5dee(MSE)\u6709\u76f4\u63a5\u7684\u76f8\u95dc\uff0c\u4ea6\u5373 DNN \u8a13\u7df4\u76ee\u6a19\u8207\u8a55\u4f30\u6307</td></tr><tr><td>\u6a19\u4e26\u4e0d\u4e00\u81f4\uff0c\u56e0\u6b64\u964d\u4f4e MSE \u672a\u5fc5\u53ef\u76f4\u63a5\u63d0\u5347\u9019\u4e9b\u8a55\u4f30\u5206\u6578\u3002\u6709\u9451\u65bc\u6b64\uff0c\u5728\u4e00\u4e9b\u8fd1\u5e74\u958b\u767c</td></tr><tr><td>\u7684\u57fa\u65bc\u6df1\u5ea6\u5b78\u7fd2\u7684\u8a9e\u97f3\u5f37\u5316\u6cd5\u4e2d(Zhang, Zhang &amp; Gao, 2018)\uff0c\u76f4\u63a5\u5c07 PESQ \u548c STOI \u4f5c\u70ba</td></tr><tr><td>DNN \u6a21\u578b\u8a13\u7df4\u7684\u76ee\u6a19\u51fd\u6578\u3001\u52a0\u4ee5\u6700\u4f73\u5316\uff0c\u800c\u7372\u5f97\u66f4\u597d\u7684\u6548\u80fd\u3002</td></tr><tr><td>\u53d7\u4e0a\u8ff0\u89c0\u5bdf\u548c\u5176\u4ed6\u6587\u737b\u7684\u555f\u767c(Fu, Liao, Tsao &amp; Lin, 2019; Xia &amp; Bao, 2014)\uff0c\u672c\u7814\u7a76</td></tr><tr><td>\u63d0\u51fa\u4e00\u7a2e\u57fa\u65bc\u6df1\u5ea6\u5b78\u7fd2\u6a21\u578b\u4e4b\u5f37\u5065\u6027\u7279\u5fb5\u64f7\u53d6\u7684\u65b0\u65b9\u6cd5\uff0c\u5176\u5229\u7528\u4e86\u8207 MSE \u7121\u95dc\u7684\u76ee\u6a19\u51fd</td></tr><tr><td>\u6578\u4f86\u8a13\u7df4\u5176\u4e2d\u7684\u6df1\u5ea6\u7db2\u8def\u3002\u5728\u6b64\u65b0\u65b9\u6cd5\u4e2d\uff0c\u6211\u5011\u4f7f\u7528\u7684\u76ee\u6a19\u51fd\u6578\u662f\u7d66\u5b9a\u8a9e\u97f3\u7279\u5fb5\u5e8f\u5217\u4e0b\uff0c</td></tr><tr><td>\u5c0d\u61c9\u7684\u8072\u5b78\u6a21\u578b\u5176\u4e2d\u72c0\u614b\u5e8f\u5217(state 
sequence)\u8207\u771f\u5be6\u72c0\u614b\u5e8f\u5217\u76f8\u8f03\u4e4b\u4e0b\u7684\u7cbe\u78ba\u5ea6\uff0c\u8207\u8a9e\u97f3</td></tr><tr><td>\u8b58\u5225\u7684\u7cbe\u78ba\u5ea6\u6709\u76f4\u63a5\u7684\u76f8\u95dc\u3002\u7c21\u800c\u8a00\u4e4b\uff0c\u6211\u5011\u8a13\u7df4\u4e00\u500b\u6df1\u5ea6\u795e\u7d93\u7db2\u7d61\u4f86\u9032\u884c\u8a9e\u97f3\u7279\u5fb5\u6620</td></tr></table>",
"type_str": "table",
"html": null,
"text": "",
"num": null
},
"TABREF1": {
"content": "<table><tr><td colspan=\"2\">\u57fa\u65bc\u6df1\u5ea6\u8072\u5b78\u6a21\u578b\u5176\u72c0\u614b\u7cbe\u78ba\u5ea6\u6700\u5927\u5316\u4e4b\u5f37\u5065\u8a9e\u97f3\u7279\u5fb5\u64f7\u53d6\u7684\u521d\u6b65\u7814\u7a76 \u57fa\u65bc\u6df1\u5ea6\u8072\u5b78\u6a21\u578b\u5176\u72c0\u614b\u7cbe\u78ba\u5ea6\u6700\u5927\u5316\u4e4b\u5f37\u5065\u8a9e\u97f3\u7279\u5fb5\u64f7\u53d6\u7684\u521d\u6b65\u7814\u7a76</td><td>89 91</td></tr><tr><td colspan=\"3\">\u6b64\u5916\uff0c\u6211\u5011\u4f7f\u7528\u4e7e\u6de8\u72c0\u614b\u7684\u8a13\u7df4\u96c6\u7279\u5fb5(\u8207\u591a\u72c0\u614b\u8a13\u7df4\u96c6\u6709\u76f8\u540c\u7684\u539f\u59cb\u4e7e\u6de8\u8a9e\u53e5\u5167 \u6b64\u5916\uff0c\u900f\u904e Kaldi \u7684\u6a19\u6e96\u7a0b\u5e8f\uff0c\u6211\u5011\u5efa\u69cb\u4e86\u4e00\u7d44\u7528\u65bc\u8a13\u7df4\u8a9e\u97f3\u7684\u4e09\u5143\u8a9e\u6cd5\u7684\u8a9e\u8a00\u6a21</td></tr><tr><td colspan=\"3\">\u5bb9)\uff0c\u53e6\u5916\u8a13\u7df4\u4e00\u5957 DNN-HMM\uff0c\u85c9\u6b64 DNN-HMM\uff0c\u6211\u5011\u70ba\u6bcf\u53e5\u8a9e\u97f3\u7279\u5fb5\u6c42\u5176\u5c0d\u61c9\u7684\u72c0 \u578b(tri-gram)\u3002</td></tr><tr><td colspan=\"3\">\u614b\u5e8f\u5217 \uff0c\u6211\u5011\u628a\u5b83\u8996\u70ba\u771f\u5be6\u72c0\u614b\u5e8f\u5217(ground-truth state sequence)\uff0c\u56e0\u70ba\u5b83\u662f\u7531\u4e7e\u6de8\u8a9e\u97f3</td></tr><tr><td colspan=\"3\">\u6c42\u5f97\uff0c\u6c92\u6709\u96dc\u8a0a\u5e72\u64fe\u3002 1. 
\u6211\u5011\u7684\u65b9\u6cd5\u5176\u4e2d\u7684\u6297\u566a\u7db2\u8def\u5176\u8a13\u7df4\u76ee\u6a19\u662f\u5728\u96dc\u8a0a\u74b0\u5883\u4e0b\uff0c\u6700\u5927\u5316\u8072\u5b78\u6a21\u578b\u7684\u72c0\u614b\u7cbe\u78ba</td></tr><tr><td colspan=\"3\">\u6b65\u9a5f 4: \u7387\uff0c\u9032\u800c\u76f4\u63a5\u964d\u4f4e\u8a9e\u97f3\u8fa8\u8b58\u7684\u8a5e\u932f\u8aa4\u7387(word error rate, WER)\u3002\u76f8\u5c0d\u800c\u8a00\uff0c\u76f4\u63a5\u6700\u5c0f\u5316</td></tr><tr><td colspan=\"3\">\u6b64\u6b65\u9a5f\u662f\u6211\u5011\u65b9\u6cd5\u7684\u6838\u5fc3\u3002\u6211\u5011\u8a13\u7df4\u4e00\u500b\u53bb\u566a\u6df1\u5ea6\u795e\u7d93\u7db2\u8def(denoising network)\uff0c\u7528\u4f86\u5c07 \u8a13\u7df4\u96c6\u5167\u96dc\u8a0a\u8a9e\u97f3\u7279\u5fb5\u8207\u539f\u59cb\u4e7e\u6de8\u8a9e\u97f3\u7279\u5fb5\u4e4b\u9593\u7684\u5e73\u65b9\u8aa4\u5dee(MSE)\u7684 DNN \u6297\u566a\u7db2\u8def\uff0c</td></tr><tr><td colspan=\"3\">\u539f\u59cb\u8a9e\u97f3\u7279\u5fb5 \u0305 \u8f49\u63db\u70ba \u0305 \u2032\uff0c\u5982\u4e0b\u6240\u793a\uff1a \u53ef\u80fd\u5b58\u5728\u5982\u524d\u4e00\u7ae0\u7bc0\u8a0e\u8ad6\u7684\u76ee\u6a19\u4e0d\u5339\u914d\u554f\u984c\uff0c\u4e26\u4e0d\u80fd\u4fdd\u8b49\u5728\u6e2c\u8a66\u7684\u96dc\u8a0a\u74b0\u5883\u4e0b\u964d\u4f4e\u8fa8 (convolutional network)\uff0c\u5177\u6709 4 \u500b\u76f8\u540c\u5c3a\u5bf8\u7684\u4e00\u7dad\u5377\u7a4d\u5c64\uff0c\u6bcf\u5c64 kernel \u6578\u70ba 30\uff0ckernel</td></tr><tr><td colspan=\"3\">\u0305 \u0305 \u5927\u5c0f\u70ba 5\uff0cpadding \u6578\u70ba 2\u3002\u6b64\u5916\uff0c\u9019\u56db\u500b\u5377\u7a4d\u5c64\u5f8c\u9762\u63a5\u8457\u5169\u500b\u76f8\u540c\u7684\u5168\u9023\u63a5\u5c64\uff0c\u5404\u5c64\u5177 \u8b58\u932f\u8aa4\u7387\u3002 (1) \u5176\u4e2d \u6709 759 \u500b\u7bc0\u9ede\u3002\u6bcf\u5c64\u8f38\u51fa\u7684\u6fc0\u6d3b\u51fd\u6578\u662f\u7dda\u6027\u6574\u6d41\u51fd\u6578(ReLU)\u3002\u8a72\u964d\u566a\u6846\u67b6\u7684\u8a13\u7df4\u904e\u7a0b\u4f7f 2. 
\u6211\u5011\u7684\u65b9\u6cd5\u63a1\u7528\u591a\u689d\u4ef6\u8a13\u7df4\u96c6\u4f9d\u5e8f\u7372\u5f97 GMM-HMM \u548c DNN-HMM \u8072\u5b78\u6a21\u578b\uff0c\u85c9\u6b64\u8a13 . \u8868\u793a\u6b32\u8a13\u7df4\u4e4b\u53bb\u566a\u7db2\u8def\u51fd\u6578\uff0c\u5176\u8a13\u7df4\u76ee\u6a19\u662f\u4f7f\u65b0\u7684\u7279\u5fb5 \u0305 \u5728\u6b65\u9a5f 3 \u4e2d\u5275\u5efa\u7684 argmax Acc , (2) \u672c\u7684\u539f\u56e0\u662f\u6574\u500b\u964d\u566a\u67b6\u69cb\u8981\u6539\u5584\u539f\u59cb\u8a9e\u97f3\u7279\u5fb5\u4f86\u64ec\u5408\u5f8c\u7aef\u7684\u8072\u5b78\u6a21\u578b\u3002\u5728\u5f8c\u9762\u7684\u7ae0\u7bc0 \u0305 | \u578b\u6642\uff0c\u6b64\u964d\u566a\u795e\u7d93\u7db2\u8def\u4ecd\u7136\u53ef\u4ee5\u4f7f\u8f38\u5165\u96dc\u8a0a\u8a9e\u97f3\u7279\u5fb5\u4e4b\u8f38\u51fa\uff0c\u5c0d\u61c9\u5230\u8f03\u9ad8\u8fa8\u8b58\u7387\u3002\u6839 DNN-HMM \u88e1\uff0c\u53ef\u4ee5\u9810\u6e2c\u51fa\u66f4\u63a5\u8fd1\u771f\u5be6\u72c0\u614b\u7684\u8072\u5b78\u6a21\u578b\u72c0\u614b\u5e8f\u5217\uff0c\u5176\u6578\u5b78\u5f0f\u5982\u4e0b\uff1a \u7df4\u6211\u5011\u7684\u964d\u566a\u795e\u7d93\u7db2\u8def\u3002\u4f46\u662f\uff0c\u6211\u5011\u9810\u671f\u7576\u4f7f\u7528\u4e7e\u6de8\u7121\u96dc\u8a0a\u7684\u8a13\u7df4\u96c6\u6240\u8a13\u7df4\u7684\u8072\u5b78\u6a21 \u7528 Adam \u512a\u5316\u5668\u9032\u884c\u4e86 30 \u500b epochs\uff0c\u4e26\u4f7f\u7528\u4e86\u5c0d\u6578\u76f8\u4f3c\u5ea6(log-likelihood)\u4f5c\u70ba\u76ee\u6a19\u51fd\u6578\u3002</td></tr><tr><td colspan=\"2\">\u5176\u4e2d \u662f DNN-HMM \u8072\u5b78\u6a21\u578b\u3001 \u662f\u4e00\u500b\u7d66\u5b9a\u6a21\u578b \u7684\u4e00\u500b\u51fd\u6578\uff0c\u7528\u4f86\u7522\u751f\u65b0\u7279\u5fb5 \u4e2d\uff0c\u6211\u5011\u5c07\u6703\u5728\u8a55\u4f30\u5be6\u9a57\u4e2d\u89c0\u5bdf\u4e26\u8a0e\u8ad6\u9019\u65b9\u9762\u7684\u7d50\u679c\u3002</td><td>\u0305</td></tr><tr><td>\u5c0d\u61c9\u7684\u6700\u9ad8\u76f8\u4f3c\u5ea6(maximum likelihood)\u72c0\u614b\u5e8f\u5217</td><td colspan=\"2\">\u3001 
\u662f\u7531\u524d\u4e00\u6b65\u9a5f\u6240\u5f97\u4e4b\u771f\u5be6\u72c0\u614b</td></tr><tr><td colspan=\"2\">\u5e8f\u5217(ground-truth state sequence)\u3001Acc\u662f\u5c0d\u6578\u76f8\u4f3c\u5ea6(log-likelihood)\u51fd\u6578\uff0c\u7528\u65bc\u8a55\u4f30</td><td>\u76f8</td></tr><tr><td>\u5c0d\u65bc \u7684\u7cbe\u78ba\u5ea6\u3002</td><td/></tr><tr><td colspan=\"3\">\u7576\u8a13\u7df4\u597d\u53bb\u566a\u6df1\u5ea6\u795e\u7d93\u7db2\u8def \u5f8c\uff0c\u5728\u8fa8\u8b58\u904e\u7a0b\u4e2d\uff0c\u6211\u5011\u5c07\u5176\u7528\u65bc\u96dc\u8a0a\u5e72\u64fe\u7684\u8a9e\u97f3\u7279</td></tr><tr><td colspan=\"3\">\u5fb5 \u0305 DNN-HMM \u6240\u63d0\u4f9b\u7684\u771f\u5be6\u72c0\u614b\uff0c\u76f8\u7576\u65bc\u6574\u5408\u4e86\u96dc\u8a0a\u74b0\u5883\u5c0d\u6620\u81f3\u4e7e\u6de8\u72c0\u614b\u7684\u8cc7\u8a0a\uff1b\u540c\u6642\uff0c</td></tr><tr><td colspan=\"3\">\u7531\u65bc\u65b0\u7279\u5fb5 \u0305 \u6240\u5c0d\u61c9\u7684\u72c0\u614b\u5e8f\u5217\uff0c\u76f8\u8f03\u65bc\u539f\u59cb \u0305 \u800c\u8a00\u61c9\u6703\u5177\u6709\u8f03\u9ad8\u7684\u72c0\u614b\u7cbe\u78ba\u7387\uff0c\u56e0</td></tr><tr><td>\u6b64\u5b83\u5011\u5728\u8fa8\u8b58\u4e2d\u7406\u61c9\u7522\u751f\u8f03\u4f4e\u7684\u8a5e\u932f\u8aa4\u7387\u3002</td><td/></tr><tr><td colspan=\"3\">\u512a\u5316\u5668\uff0c\u4e26\u4e14\u4f7f\u7528\u5c0d\u6578\u76f8\u4f3c\u5ea6(log-likelihood)\u4f5c\u70ba\u76ee\u6a19\u51fd\u6578\u3002\u5728\u6a21\u578b\u8a13\u7df4\u4e2d\uff0c\u6703\u5c07\u4e09\u9023\u97f3</td></tr><tr><td colspan=\"3\">\u7d20\u548c\u55ae\u97f3\u7d20\u7684\u8aa4\u5dee\u76f8\u52a0\uff0c\u4e26\u5c07\u5176\u6700\u5c0f\u5316\u3002\u6211\u5011\u4f7f\u7528 Kaldi \u5de5\u5177\u5305(Povey et al., 2011)\u4f86\u5275 noise ratio, SNR)\u7684\u96dc\u8a0a\uff0c\u56e0\u6b64\uff0c\u9810\u671f\u7522\u751f\u7684 DNN-HMM \u6703\u6bd4\u4f7f\u7528\u4e7e\u6de8\u72c0\u614b\u7684 \u5efa GMM-HMM\uff0c\u800c Pytorch-Kaldi (Ravanelli, Parcollet &amp; Bengio, 2019)\u5de5\u5177\u5305\u5247\u7528\u65bc\u5275\u5efa 
\u8a13\u7df4\u96c6\u5c0d\u61c9\u7684\u8072\u5b78\u6a21\u578b\u5177\u6709\u66f4\u597d\u7684\u6297\u566a\u80fd\u529b\u3002 \u5716 1. \u6240\u63d0\u65b9\u6cd5\u4e4b\u6d41\u7a0b\u5716 DNN-HMM\u3002</td></tr></table>",
"type_str": "table",
"html": null,
"text": "\u6620\u5c04\u5230\u65b0\u7279\u5fb5 \u0305 \uff0c\u7136\u5f8c\u5c07 \u0305 \u8f38\u5165\u81f3\u539f\u672c (\u7121\u9808\u91cd\u65b0\u8a13\u7df4) DNN-HMM \u8072\u5b78\u6a21\u578b\u8207\u8a9e\u8a00 \u6a21\u578b\u3001\u5206\u5225\u751f\u6210\u6700\u9ad8\u76f8\u4f3c\u5ea6\u72c0\u614b\u5e8f\u5217\u8207\u8a5e\u5e8f\u5217\u3002\u8207\u539f\u59cb\u7279\u5fb5 \u0305 \u76f8\u6bd4\uff0c\u65b0\u7279\u5fb5 \u0305 \u9810\u671f\u6709 \u66f4\u5f37\u7684\u6297\u566a\u80fd\u529b\uff0c\u56e0\u70ba\u5b83\u662f\u5728\u591a\u689d\u4ef6\u8a13\u7df4\u7684 DNN-HMM \u5e6b\u52a9\u4e0b\u5275\u5efa\u7684\uff0c\u4e26\u6709\u4e7e\u6de8\u8a13\u7df4\u7684 \u5f35\u7acb\u5bb6\u8207\u6d2a\u5fd7\u5049 \u8207\u4f7f\u7528\u5e73\u5747\u5e73\u65b9\u8aa4\u5dee(mean squared error, MSE)\u4f5c\u70ba\u640d\u5931\u51fd\u6578\u7684\u4e4b DNN \u6c42\u53d6\u6297\u566a\u4e4b \u8a9e\u97f3\u7279\u5fb5\u65b9\u6cd5\u76f8\u6bd4(Garofolo, Lamel, Fisher, Fiscus & Pallett, 1993)\uff0c\u6211\u5011\u63d0\u51fa\u7684\u65b9\u6cd5\u5177\u6709 \u4ee5\u4e0b\u6f5b\u5728\u512a\u52e2\uff1a DNN-HMM\uff0c\u5177\u9ad4\u4f86 \u8aaa\uff0cGMM-HMM \u548c DNN-HMM \u5206\u5225\u662f\u4f7f\u7528 GMM \u548c DNN \u8868\u793a HMM \u7684\u5404\u7a2e\u72c0\u614b\u3002\u5c0d \u65bc GMM-HMM\uff0c\u6bcf\u500b\u55ae\u97f3\u7d20(monophone)\u7684\u8a9e\u97f3\u8a0a\u865f\u548c\u975c\u97f3\u5206\u5225\u7531\u5177\u6709 3 \u500b\u72c0\u614b\u7684 HMM(\u7e3d\u5171 1000 \u500b Gaussian)\u4f86\u8868\u793a\uff0c\u800c\u6bcf\u500b\u4e09\u9023\u97f3\u7d20\u7531\u5177\u6709 3 \u500b\u72c0\u614b\u7684 HMM \u4f86\u8868\u793a\uff0c \u7e3d\u5171 2500 \u500b leaves\u3002\u7e3d\u5171\u6709 15000 \u500b Gaussian\u3002\u6b64\u5916\uff0c\u5728\u4e09\u9023\u97f3\u7d20\u6a21\u578b\u8a13\u7df4\u671f\u9593\uff0c\u5c07 LDA\u3001 MLLT \u548c SAT \u61c9\u7528\u65bc\u8a9e\u97f3\u7279\u5fb5\u3002\u53e6\u4e00\u65b9\u9762\uff0c\u5c0d\u65bc DNN \u7684\u7d50\u69cb\uff0c\u4f7f\u7528\u4e86 5 \u5c64\u96b1\u85cf\u5c64\uff0c\u6bcf \u500b\u96b1\u85cf\u5c64\u5305\u542b 1024 
\u500b\u7bc0\u9ede\uff0c\u4e26\u4e14\u5206\u5225\u9023\u63a5\u5230 DNN-HMM \u4e2d\u7528\u65bc\u4e09\u9023\u97f3\u548c\u55ae\u97f3\u7684\u5169\u500b\u7368 \u7acb\u8f38\u51fa\u5c64\u3002\u6b64 DNN \u7684\u8a13\u7df4\u4f7f\u7528 Dropout \u6cd5\uff0c\u6bd4\u4f8b\u70ba 15\uff05\uff0c\u9032\u884c 24 \u500b epochs \u548c\u4f7f\u7528 SGD \u5c0d\u65bc\u8a13\u7df4\u548c\u6e2c\u8a66\u96c6\u4e2d\u7684\u6bcf\u500b\u8a9e\u53e5\uff0c\u6211\u5011\u4f7f\u7528 69 \u7dad\u7684 FBANK \u7279\u5fb5(\u6bcf\u500b\u97f3\u6846 23 \u7dad \u7684 FBANK \u4ee5\u53ca\u5176 delta \u548c delta-delta\uff0c\u97f3\u6846\u9577\u5ea6\u70ba 20 \u6beb\u79d2\uff0c\u6bcf\u6b21\u4f4d\u79fb 10 \u6beb\u79d2)\u4f86\u4f5c\u70ba \u57fa\u790e\u7279\u5fb5(baseline feature)\u3002\u6211\u5011\u63d0\u51fa\u7684\u964d\u566a DNN \u6846\u67b6\u5c07 FBANK \u4f5c\u70ba\u8f38\u5165\uff0c\u6309\u7167\u4e0a\u500b\u7ae0 \u7bc0\u4e2d\u7684\u6b65\u9a5f\u7522\u751f\u65b0\u7684\u7279\u5fb5\uff0c\u4ee5\u9032\u884c\u5f8c\u7e8c\u8fa8\u8b58\u3002\u53bb\u566a DNN \u6a21\u578b\u662f\u4e00\u500b\u5377\u7a4d\u795e\u7d93\u7db2\u7d61",
"num": null
},
"TABREF2": {
"content": "<table><tr><td/><td/><td colspan=\"2\">\u5f35\u7acb\u5bb6\u8207\u6d2a\u5fd7\u5049</td></tr><tr><td colspan=\"4\">\u8868 2. \u591a\u689d\u4ef6\u8a13\u7df4\u6a21\u5f0f\u5728 \"Engine\" \u96dc\u8a0a\u74b0\u5883\u4e0b\u6e2c\u8a66\u96c6\u7684\u57fa\u790e\u5be6\u9a57\u3001MSA\u3001FMSE\u3001 \u5165\u66f4\u591a\u53ef\u89c0\u5bdf\u5230\u7684\u5931\u771f\u3002 MMSE-STSA \u548c IRM \u6240\u5f97\u7684\u8a5e\u932f\u8aa4\u7387(WER, %) 2. SNR \u74b0\u5883\u5c0f\u5e45\u8f03\u4f4e\u8a5e\u932f\u8aa4\u7387\uff0c\u6b64\u7d50\u679c\u90e8\u5206\u9a57\u8b49\u4e86\u8a9e\u97f3\u5f37\u5316\u65b9\u6cd5\u96d6\u53ef\u6539\u5584\u8a9e\u97f3\u54c1\u8cea\uff0c\u4f46 [Table 2.</td></tr><tr><td>\u672a\u5fc5\u80fd\u6709\u6548\u63d0\u5347\u8a9e\u97f3\u8fa8\u8b58\u7cbe\u78ba\u5ea6\u3002</td><td/><td/></tr><tr><td colspan=\"4\">3. \u5728 White \u548c Engine \u96dc\u8a0a\u74b0\u5883\u4e2d\uff0c\u65b0\u63d0\u51fa\u7684 MSA \u6cd5\u5728\u5927\u591a\u6578 SNR \u74b0\u5883\u4e0b\u53ef\u4ee5\u5f97\u5230\u8f03\u4f4e</td></tr><tr><td colspan=\"4\">\u7684\u8a5e\u932f\u8aa4\u7387\uff0c\u4e26\u4e14\u66f4\u52dd\u904e\u5176\u4ed6\u65b9\u6cd5\uff0c\u9019\u9a57\u8b49\u4e86 MSA \u85c9\u7531\u63d0\u9ad8\u8a9e\u97f3\u7279\u5fb5\u4e4b\u72c0\u614b\u7cbe\u78ba\u5ea6\uff0c</td></tr><tr><td colspan=\"4\">\u53ef\u6539\u5584\u7279\u5fb5\u5c0d\u96dc\u8a0a\u7684\u5f37\u5065\u6027\u4e26\u589e\u52a0\u8fa8\u8b58\u6e96\u78ba\u7387\u3002\u7279\u5225\u7684\u662f\uff0c\u65b0\u63d0\u51fa\u7684 MSA \u4e2d\u7684\u964d\u566a\u7db2</td></tr><tr><td colspan=\"4\">\u8def\uff0c\u5176\u8a13\u7df4\u6240\u4f7f\u7528\u4e4b\u8a9e\u97f3\u5305\u542b\u7684\u96dc\u8a0a\u7a2e\u985e\u4e26\u975e\u662f\u6e2c\u8a66\u96c6\u4e4b White \u96dc\u8a0a\u8207 Engine \u96dc\u8a0a\u3002</td></tr><tr><td colspan=\"4\">\u56e0\u6b64\uff0cMSA \u5728\u67d0\u7a2e\u7a0b\u5ea6\u4e0a\u986f\u793a\u51fa\u5176\u4e00\u822c\u5316(generalization)\u7684\u80fd\u529b\uff0c\u5728\u672a\u77e5\u96dc\u8a0a(unseen</td></tr><tr><td 
colspan=\"2\">noise)\u74b0\u5883\u4e0b\uff0c\u4ecd\u53ef\u63d0\u5347\u8a9e\u97f3\u7279\u5fb5\u7684\u5f37\u5065\u6027\u3002</td><td/></tr><tr><td colspan=\"4\">4. FMSE \u65b9\u6cd5\u662f\u70ba\u4e86\u6700\u5c0f\u5316\u96dc\u8a0a\u8a9e\u97f3\u548c\u4e7e\u6de8\u8a9e\u97f3\u5176 FBANK \u7279\u5fb5\u4e4b\u9593\u7684\u5747\u65b9\u8aa4\u5dee(MSE)\uff0c</td></tr><tr><td colspan=\"4\">\u7136\u800c\u5728\u5e7e\u4e4e\u6240\u6709\u96dc\u8a0a\u60c5\u6cc1\u4e0b\uff0c\u5176\u6548\u679c\u90fd\u6bd4\u57fa\u790e\u5be6\u9a57\u7d50\u679c\u5dee\uff0c\u5982\u540c\u4e4b\u524d\u8a0e\u8ad6\uff0c\u5176\u53ef\u80fd\u539f</td></tr><tr><td colspan=\"4\">\u56e0\u662f\u5176\u8a55\u4f30\u8207\u512a\u5316\u6307\u6a19\u7684\u4e0d\u5339\u914d\uff0c\u9020\u6210 FMSE \u65b9\u6cd5\u8f49\u63db\u5f8c\u7684\u8a9e\u97f3\u7279\u5fb5\u53cd\u800c\u7522\u751f\u8f03\u5dee\u7684</td></tr><tr><td colspan=\"4\">\u8fa8\u8b58\u6e96\u78ba\u7387\uff0c\u53e6\u4e00\u500b\u539f\u56e0\u662f\uff0c\u5728 FMSE \u4e2d\u5b78\u7fd2\u5230\u7684 DNN \u904e\u5ea6\u64ec\u5408\u8a13\u7df4\u8cc7\u6599\uff0c\u56e0\u6b64\u7121\u6cd5</td></tr><tr><td>\u5f88\u597d\u5730\u6539\u5584\u6e2c\u8a66\u8cc7\u6599\u7684\u5931\u771f\u554f\u984c\u3002</td><td/><td/></tr><tr><td colspan=\"4\">\u8868 1. \u591a\u689d\u4ef6\u8a13\u7df4\u6a21\u5f0f\u5728 \"White\" \u96dc\u8a0a\u74b0\u5883\u4e0b\u6e2c\u8a66\u96c6\u7684\u57fa\u790e\u5be6\u9a57\u3001MSA\u3001FMSE\u3001</td></tr><tr><td colspan=\"2\">MMSE-STSA \u548c IRM \u6240\u5f97\u7684\u8a5e\u932f\u8aa4\u7387(WER, %)</td><td/></tr><tr><td colspan=\"4\">[Table 1. 
Word error rates (WER, %) achieved by different methods (baseline, MSA,</td></tr><tr><td colspan=\"4\">FMSE, MMSE-STSA and IRM) for the White noise-corrupted test set under the</td></tr><tr><td colspan=\"4\">(mean squared error, MSE)\u4f86\u5b78\u7fd2\u5176 DNN\uff0c\u9019\u7a2e\u65b9\u6cd5\u7a31\u70ba feature-based MSE\uff0c\u7e2e\u5beb\u70ba multi-condition-training mode]</td></tr><tr><td>FMSE\u3002</td><td>Signal-to-noise ratio (SNR)</td><td/></tr><tr><td colspan=\"4\">\u5728\u9019\u88e1\uff0c\u6211\u5011\u7684\u5be6\u9a57\u7d50\u679c\u5206\u70ba\u5169\u90e8\u5206\uff0c\u5206\u5225\u70ba\u591a\u689d\u4ef6\u8a13\u7df4\u6a21\u5f0f(multi-condition training -6 dB -3 dB 0 dB 3 dB 6 dB 12 dB</td></tr><tr><td colspan=\"4\">mode)\u548c\u4e7e\u6de8\u72c0\u614b\u8a13\u7df4\u6a21\u5f0f(clean-condition training mode)\u3002\u503c\u5f97\u6ce8\u610f\u7684\u662f\uff0c\u6211\u5011\u6240\u63d0\u51fa\u7684 baseline 66.1 62.1 57.0 49.8 44.8 34.4</td></tr><tr><td colspan=\"4\">\u65b9\u6cd5 MSA \u4e2d\u7684\u964d\u566a\u6a21\u578b\u5728\u5169\u500b\u6a21\u5f0f\u4e0b\u7686\u662f\u85c9\u7531\u591a\u689d\u4ef6\u8a13\u7df4\u96c6\u6240\u5b78\u7fd2\u800c\u5f97\uff0c\u4f46\u6211\u5011\u60f3\u6e2c MSA 65.5 61.0 54.9 48.9 43.2 34.7*</td></tr><tr><td colspan=\"2\">\u8a66\u7531\u6b64\u7522\u751f\u7684\u5f37\u5316\u5f8c\u8a9e\u97f3\u7279\u5fb5\u5728\u5169\u7a2e\u6a21\u5f0f\u4e0b\u53ef\u5426\u90fd\u80fd\u8868\u73fe\u826f\u597d\u3002 FMSE 69.2* 63.3* 57.2* 50.4* \uf0b7 \u591a\u689d\u4ef6\u8a13\u7df4\u6a21\u5f0f(multi-condition training mode)\u4e4b\u7d50\u679c\u8207\u8a0e\u8ad6 MMSE-STSA 70.3* 66.8* 60.4* 55.0*</td><td>44.8 49.7*</td><td>35.3* 40.1*</td></tr><tr><td colspan=\"4\">\u7576\u5229\u7528\u591a\u689d\u4ef6\u8a13\u7df4\u96c6\u6240\u8a13\u7df4\u800c\u5f97\u7684\u8072\u5b78\u6a21\u578b\u6642\uff0c\u8868 1\u3001\u8868 2 Jackhammer \u96dc\u8a0a\u4e0b\u7684\u57fa IRM 65.8 61.9 56.5 50.4* 44.4 34.3</td></tr><tr><td 
colspan=\"4\">\u790e\u5be6\u9a57\u7d50\u679c(baseline)\uff0c\u9019\u8868\u660e\u8a9e\u97f3\u5f37\u5316\u6216\u5f37\u5065\u6027\u7279\u5fb5\u65b9\u6cd5\u53ef\u80fd\u6703\u5c0d\u5e72\u64fe\u8f03\u5c11\u7684\u8a9e\u53e5\u5f15</td></tr></table>",
"type_str": "table",
"html": null,
"text": "\u3002\u6b64\u5916\uff0c\u6211\u5011\u63a1\u7528\u4e00\u7a2e\u57fa\u65bc DNN \u6c42\u53d6\u8072\u5b78\u7279\u5fb5\u8f49\u63db\u7684\u65b9\u6cd5(Han et al., 2015)\u9032\u884c\u6bd4 \u8f03\uff0c\u8a72\u65b9\u6cd5\u4e3b\u8981\u4f7f\u7528\u6df1\u5ea6\u795e\u7d93\u7db2\u7d61(DNN)\u4f86\u8f49\u63db\u8f38\u5165\u7684 FBANK \u7279\u5fb5\uff0c\u900f\u904e\u76f4\u63a5\u6700\u5c0f\u5316 \u591a\u689d\u4ef6\u8a13\u7df4\u96c6\u4e2d\u96dc\u8a0a-\u4e7e\u6de8\u914d\u5c0d(noisy-clean pair)\u4e4b\u8a9e\u97f3\u7684 FBANK \u7279\u5fb5\u4e4b\u9593\u7684\u5747\u65b9\u8aa4\u5dee \u8207\u8868 3 \u5217\u51fa\u4e86\u6211\u5011\u7684\u65b9\u6cd5 MSA \u53ca\u4e09\u7a2e\u6bd4\u8f03\u6cd5 FMSE\u3001MMSE-STSA \u8207 IRM \u5728\u4e09\u7a2e\u96dc\u8a0a\u6e2c\u8a66\u96c6\u6240\u5f97\u4e4b\u8a5e\u932f\u8aa4\u7387(word error rate, WER)\uff0c\u503c\u5f97\u6ce8\u610f\u7684\u662f\uff0cMMSE-STSA \u8207 IRM \u4e8c\u7a2e\u8a9e\u97f3\u5f37\u5316\u6cd5\u53ea\u4f7f\u7528\u65bc\u6e2c\u8a66\u96c6\u7684\u8a9e \u53e5\uff0c\u4e26\u672a\u4f5c\u7528\u65bc\u8a13\u7df4\u96c6\u4e4b\u8a9e\u53e5\uff0c\u539f\u56e0\u662f\u6211\u5011\u4e4b\u524d\u5be6\u9a57\u767c\u73fe\uff0c\u82e5\u5b83\u5011\u540c\u6642\u4f5c\u7528\u65bc\u8a13\u7df4\u96c6\uff0c \u5c07\u4f7f\u6e2c\u8a66\u96c6\u4e4b\u8a5e\u932f\u8aa4\u7387\u660e\u986f\u589e\u52a0\u3002\u5f9e\u9019\u4e09\u500b\u8868\u4e2d\uff0c\u6211\u5011\u6709\u4ee5\u4e0b\u89c0\u5bdf\u7d50\u679c\uff1a 1. 
\u5e73\u5747\u800c\u8a00\uff0c\u5404\u7a2e\u65b9\u6cd5\u5728 Jackhammer \u96dc\u8a0a\u74b0\u5883\u4e2d\u5f97\u5230\u7684\u8a5e\u932f\u8aa4\u7387\u660e\u986f\u4f4e\u65bc White \u548c Engine \u96dc\u8a0a\u74b0\u5883\u4e2d\u7684 WER\uff0c\u9019\u8868\u660e\u8207 White \u548c Engine \u96dc\u8a0a\u76f8\u6bd4\uff0cJackhammer \u96dc\u8a0a\u5c0d \u8a9e\u97f3\u8a0a\u865f\u7684\u5931\u771f\u8f03\u5c0f\u3002\u4f46\u662f\uff0c\u6211\u5011\u767c\u73fe\u6240\u6709\u7684\u65b9\u6cd5\u5747\u7121\u6cd5\u6539\u5584 \u5c0d\u65bc MMSE-STSA \u8207 IRM \u5169\u7a2e\u8a9e\u97f3\u5f37\u5316\u6cd5\u800c\u8a00\uff0cMMSE-STSA \u6548\u679c\u660e\u986f\u6bd4 IRM \u5dee\uff0c \u4e14\u6bd4\u57fa\u790e\u5be6\u9a57\u5f97\u5230\u8f03\u9ad8\u7684\u8a5e\u932f\u8aa4\u7387\uff0c\u800c IRM \u6cd5\u76f8\u8f03\u65bc\u57fa\u790e\u5be6\u9a57\u7d50\u679c\u800c\u8a00\uff0c\u53ea\u80fd\u5728\u90e8\u5206",
"num": null
},
"TABREF3": {
"content": "<table><tr><td/><td/><td/><td/><td/><td colspan=\"2\">\u5f35\u7acb\u5bb6\u8207\u6d2a\u5fd7\u5049</td></tr><tr><td colspan=\"7\">\u8868 6. \u4e7e\u6de8\u8a13\u7df4\u6a21\u5f0f\u5728\"Jackhammer\"\u96dc\u8a0a\u74b0\u5883\u4e0b\u6e2c\u8a66\u96c6\u7684\u57fa\u790e\u5be6\u9a57\u3001MSA\u3001FMSE\u3001 \u8fa8\u8b58\u6e96\u78ba\u7387\u3002 MMSE-STSA \u548c IRM \u6240\u5f97\u7684\u8a5e\u932f\u8aa4\u7387(WER, %) 3. \u5c0d\u65bc\u5927\u591a\u6578\u96dc\u8a0a\u4e4b\u72c0\u614b(\u9664\u4e86 SNR \u9ad8\u65bc-3 dB \u7684 Jackhammer \u96dc\u8a0a\u74b0\u5883)\uff0c\u6211\u5011\u6240\u63d0\u51fa\u7684 [Table 6. Word error rates (WER, %) achieved by different methods (baseline, MSA,</td></tr><tr><td colspan=\"7\">for the Engine noise-corrupted test set under the MSA \u6cd5\u76f8\u8f03\u65bc\u57fa\u790e\u5be6\u9a57\u7d50\u679c\uff0c\u7372\u5f97\u660e\u986f\u8f03\u4f4e\u7684\u8a5e\u932f\u8aa4\u7387\uff0c\u9019\u4e9b\u7d50\u679c\u8868\u660e\uff0c\u5373\u4f7f\u8a13\u7df4\u96c6 FMSE, MMSE-STSA and IRM) for the Jackhammer noise-corrupted test set under</td></tr><tr><td colspan=\"7\">multi-condition-training mode] \u7279\u5fb5\u672a\u7d93 MSA \u8655\u7406\uff0c\u82e5\u6e2c\u8a66\u8a9e\u53e5\u7279\u5fb5\u7d93\u904e MSA \u6cd5\u8655\u7406\u5f8c\uff0c\u4ecd\u53ef\u6539\u5584\u5176\u8a9e\u97f3\u8fa8\u8b58\u7cbe\u78ba the clean-condition training mode]</td></tr><tr><td colspan=\"7\">Signal-to-noise ratio (SNR) \u5ea6\u3002\u6211\u5011\u8a8d\u70ba\uff0c\u9019\u518d\u6b21\u8b49\u5be6\u4e86\u6211\u5011\u5148\u524d\u7684\u9673\u8ff0\uff0c\u5373 MSA \u5177\u6709\u4e00\u822c\u5316\u7684\u80fd\u529b\uff0c\u53ef\u514b\u670d\u672a Signal-to-noise ratio (SNR)</td></tr><tr><td>\u898b\u96dc\u8a0a\u4e4b\u554f\u984c\u3002</td><td>-6 dB -6 dB</td><td>-3 dB -3 dB</td><td>0 dB 0 dB</td><td>3 dB 3 dB</td><td>6 dB 6 dB</td><td>12 dB 12 dB</td></tr><tr><td colspan=\"7\">baseline 4. 
The FMSE method, which operates on the speech features, can outperform the baseline under some noise conditions (i.e., it yields lower word error rates), but its effectiveness still falls short of our newly proposed MSA method.</td></tr>
<tr><td colspan=\"7\">Table 3. Word error rates (WER, %) achieved by different methods (baseline, MSA, FMSE, MMSE-STSA and IRM) for the Jackhammer noise-corrupted test set under the multi-condition-training mode</td></tr>
<tr><td>Method</td><td colspan=\"6\">Signal-to-noise ratio (SNR)</td></tr>
<tr><td></td><td>-6 dB</td><td>-3 dB</td><td>0 dB</td><td>3 dB</td><td>6 dB</td><td>12 dB</td></tr>
<tr><td>baseline</td><td>67.6</td><td>64.6</td><td>61.0</td><td>55.6</td><td>50.9</td><td>41.2</td></tr>
<tr><td>MSA</td><td>64.3</td><td>60.6</td><td>56.3</td><td>50.8</td><td>45.4</td><td>36.6</td></tr>
<tr><td>FMSE</td><td>68.6*</td><td>64.3</td><td>58.9</td><td>53.0</td><td>47.7</td><td>38.9</td></tr>
<tr><td>MMSE-STSA</td><td>69.9*</td><td>67.1*</td><td>63.7*</td><td>59.1*</td><td>54.4*</td><td>45.6*</td></tr>
<tr><td>IRM</td><td>67.4</td><td>64.3</td><td>60.6</td><td>55.0</td><td>50.5</td><td>40.7</td></tr>
<tr><td colspan=\"7\">(table caption not recovered)</td></tr>
<tr><td>Method</td><td>-6 dB</td><td>-3 dB</td><td>0 dB</td><td>3 dB</td><td>6 dB</td><td>12 dB</td></tr>
<tr><td>baseline</td><td>35.3</td><td>31.5</td><td>28.5</td><td>26.7</td><td>24.9</td><td>23.1</td></tr>
<tr><td>MSA</td><td>33.8</td><td>30.6</td><td>28.7*</td><td>27.0*</td><td>26.3*</td><td>24.9*</td></tr>
<tr><td>FMSE</td><td>35.0</td><td>32.2*</td><td>29.7*</td><td>28.1*</td><td>27.0*</td><td>25.4*</td></tr>
<tr><td>MMSE-STSA</td><td>36.5*</td><td>32.8*</td><td>29.6*</td><td>27.7*</td><td>25.3*</td><td>23.7*</td></tr>
<tr><td>IRM</td><td>34.8</td><td>31.8*</td><td>28.8*</td><td>26.6</td><td>24.9</td><td>23.3*</td></tr>
<tr><td colspan=\"7\">(table caption not recovered)</td></tr>
<tr><td>Method</td><td>-6 dB</td><td>-3 dB</td><td>0 dB</td><td>3 dB</td><td>6 dB</td><td>12 dB</td></tr>
<tr><td>baseline</td><td>31.4</td><td>27.8</td><td>25.9</td><td>23.8</td><td>23.0</td><td>21.9</td></tr>
<tr><td>MSA</td><td>32.6*</td><td>29.4*</td><td>27.5*</td><td>25.7*</td><td>25.1*</td><td>23.9*</td></tr>
<tr><td>FMSE</td><td>34.4*</td><td>31.3*</td><td>29.1*</td><td>27.6*</td><td>27.0*</td><td>26.1*</td></tr>
<tr><td>MMSE-STSA</td><td>32.9*</td><td>29.0*</td><td>27.2*</td><td>25.2*</td><td>24.4*</td><td>23.2*</td></tr>
<tr><td>IRM</td><td>32.4*</td><td>29.8*</td><td>26.6*</td><td>25.2*</td><td>23.8*</td><td>22.8*</td></tr>
<tr><td colspan=\"7\">Results and discussion for the clean-condition training mode: With the acoustic models trained on the clean training set, Tables 4, 5 and 6 list the word error rates (WER) obtained on the three noise-corrupted test sets by our MSA method and the three comparison methods FMSE, MMSE-STSA and IRM. It is worth noting that, since the training set consists of clean speech, we do not apply any method to further enhance its features; that is, the acoustic models are trained with the original FBANK features of the clean training set, and each method is applied to the test sets only. From these three tables, we make the following observations:</td></tr>
<tr><td colspan=\"7\">1. Compared with the previous three tables (Tables 1, 2 and 3), the baseline results obtained under the clean-condition training mode are worse than those obtained under the multi-condition training mode (the former yields higher word error rates), most likely because the mismatch between the clean training utterances and the noise-corrupted test utterances is more pronounced than that for the multi-condition training utterances.</td></tr>
<tr><td colspan=\"7\">2. Similar to the previous observations, the two speech enhancement methods (MMSE-STSA and IRM) only achieve word error rates close to or higher than the baseline results, which once again shows that directly improving speech quality does not necessarily improve its recognition accuracy.</td></tr>
<tr><td colspan=\"7\">Table 4. Word error rates (WER, %) achieved by different methods (baseline, MSA, FMSE, MMSE-STSA and IRM) for the White noise-corrupted test set under the clean-condition training mode</td></tr>
<tr><td>Method</td><td>-6 dB</td><td>-3 dB</td><td>0 dB</td><td>3 dB</td><td>6 dB</td><td>12 dB</td></tr>
<tr><td>baseline</td><td>65.3</td><td>61.7</td><td>55.1</td><td>48.2</td><td>41.5</td><td>31.1</td></tr>
<tr><td>MSA</td><td>65.5*</td><td>60.2</td><td>54.6</td><td>47.9</td><td>41.7*</td><td>32.3*</td></tr>
<tr><td>FMSE</td><td>70.4*</td><td>64.5*</td><td>56.2*</td><td>48.2</td><td>41.0</td><td>31.2*</td></tr>
<tr><td>MMSE-STSA</td><td>68.3*</td><td>63.6*</td><td>57.3*</td><td>51.4*</td><td>44.9*</td><td>34.2*</td></tr>
<tr><td>IRM</td><td>65.9*</td><td>61.4</td><td>54.8</td><td>48.2</td><td>41.2</td><td>31.1</td></tr>
<tr><td colspan=\"7\">Table 5. Word error rates (WER, %) achieved by different methods (baseline, MSA, FMSE, MMSE-STSA and IRM) for the Engine noise-corrupted test set under the clean-condition training mode</td></tr>
<tr><td>Method</td><td>-6 dB</td><td>-3 dB</td><td>0 dB</td><td>3 dB</td><td>6 dB</td><td>12 dB</td></tr>
<tr><td>baseline</td><td>67.3</td><td>64.7</td><td>61.1</td><td>56.3</td><td>50.4</td><td>39.4</td></tr>
<tr><td>MSA</td><td>64.4</td><td>60.9</td><td>55.2</td><td>49.8</td><td>43.8</td><td>34.7</td></tr>
<tr><td>FMSE</td><td>69.9*</td><td>65.5*</td><td>59.7</td><td>52.3</td><td>46.8</td><td>36.6</td></tr>
<tr><td>MMSE-STSA</td><td>69.1*</td><td>65.9*</td><td>62.5*</td><td>57.3*</td><td>52.1*</td><td>40.7*</td></tr>
<tr><td>IRM</td><td>66.8</td><td>65.1*</td><td>60.6</td><td>55.7</td><td>49.7</td><td>39.4</td></tr>
<tr><td colspan=\"7\">5. Conclusion and Future Work: In this study, we focus on the noise problem in automatic speech recognition and propose a novel deep-learning-based method to create noise-robust speech features, which exploits a deep neural network to maximize the state accuracy of the acoustic models corresponding to the speech features. Preliminary experiments show that the newly proposed method improves the recognition accuracy of FBANK features, especially under moderate and severe noise interference, and that it performs well regardless of whether the acoustic models are trained under the multi-condition or the clean-condition mode. As for future work, we will further strengthen this denoising neural network by adopting more diverse types of training data or enlarging the training set, and then combine it with other feature-based or model-based noise-robustness algorithms to achieve better performance.</td></tr>
<tr><td colspan=\"7\">References</td></tr></table>",
"type_str": "table",
"html": null,
"text": "",
"num": null
}
}
}
}