{
"paper_id": "H91-1030",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:33:11.306087Z"
},
"title": "AUTODIRECTIVE MICROPHONE SYSTEMS FOR NATURAL COMMUNICATION WITH SPEECH RECOGNIZERS",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Flanagan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rutgers University",
"location": {
"settlement": "New Brunswick",
"country": "New Jersey"
}
},
"email": ""
},
{
"first": "R",
"middle": [],
"last": "Mammone",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rutgers University",
"location": {
"settlement": "New Brunswick",
"country": "New Jersey"
}
},
"email": ""
},
{
"first": "G",
"middle": [
"W"
],
"last": "Elko",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Two technological advances support new sophistication in sound capture; namely, high-quality lowcost electret microphones and high-speed economical signal processors. Combined with new understanding in acoustic beamforming, these technologies permit spatially-selective transduction of speech signals several octaves in bandwidth. Spatial selectivity mkigates the effects of noise and reverberation, and digital processing provides the capability for speechseeking, autodirective performance. This report outlines the principles of autodirective beamforming for acoustic arrays, and it describes two experimental implementations. It also summarizes the direction and emphasis of continuing research.",
"pdf_parse": {
"paper_id": "H91-1030",
"_pdf_hash": "",
"abstract": [
{
"text": "Two technological advances support new sophistication in sound capture; namely, high-quality lowcost electret microphones and high-speed economical signal processors. Combined with new understanding in acoustic beamforming, these technologies permit spatially-selective transduction of speech signals several octaves in bandwidth. Spatial selectivity mkigates the effects of noise and reverberation, and digital processing provides the capability for speechseeking, autodirective performance. This report outlines the principles of autodirective beamforming for acoustic arrays, and it describes two experimental implementations. It also summarizes the direction and emphasis of continuing research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In many applications of automatic speech recognition, it is desirable for the talker to have hands and eyes free for concurrent tasks. Typical examples include parcel sorting, product assembly and inspection, voice dialing for cellular telephones, and data plotting and manipulation in a situation room. The user frequently needs to move around in the workspace, which often is noisy and reverberant, while issuing commands to the speech recognizer. Electrical tethers, close-talking microphones and body-worn sound equipment represent undesirable encumbrances. Ideally, one would like an acoustic system able to capture high-quality sound from natural conversational exchanges in the work space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Speech-seeking autodirective microphone arrays enable unencumbered freedom of movement, while providing sound pickup quality approaching that of close-talking microphones. Low-cost high-quality electret microphones, in combination with economical signal processing, permit sophisticated beamforming and dynamic beam positioning for tracking a moving talker. Multiple beam formation permits \"track while scan\" performance, similar to phasedarray navigational radars, so that multiple sound sources can be monitored and algorithmic decisions made about the signals [1, 2] . Beamforming has been found to be more useful than adaptive noise filtering for sound pickup in noisy, reverberant enclosures [3] .",
"cite_spans": [
{
"start": 563,
"end": 566,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 567,
"end": 569,
"text": "2]",
"ref_id": "BIBREF1"
},
{
"start": 697,
"end": 700,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "This report mentions the acoustic principles involved in dynamic beamforming and the design factors governing the ability of steered arrays to combat noise and room reverberation. It discusses the as-yet rudimentary algorithms for sound source location and speech/non-speech detection. It then describes an initial application of an autodirective array and a limited-vocabulary connected-word speech recognizer for voice control of a video/audio teleconferencing system. It concludes by indicating the directions for research needed to refine further the capabilities of hands-free natural sound pickup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The signal output H from an arbitrary array of N discrete omnidirectional acoustic sensors due to a time-harmonic plane wave with wavevector k is N-1 H(k, r) = ~ a n e -jk'r\" ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Acoustic Beamforming",
"sec_num": null
},
{
"text": "n=O where an is the amplitude weighting of sensor n, r,t is the position vector of sensor n with respect to some defined origin, and the bold case indicates a vector quantity. The time-harmonic term is omitted for compactness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Acoustic Beamforming",
"sec_num": null
},
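The delay-and-sum response in (1) is easy to evaluate numerically. The following is a minimal sketch (not from the paper; the element count, spacing, and frequency are illustrative assumptions) that computes H for a plane wave impinging on a small linear array of omnidirectional sensors:

```python
import cmath
import math

def array_response(positions, weights, k_vec):
    # Eq. (1): H(k) = sum_n a_n * exp(-j * k . r_n)
    total = 0j
    for a, r in zip(weights, positions):
        phase = sum(kc * rc for kc, rc in zip(k_vec, r))
        total += a * cmath.exp(-1j * phase)
    return total

# Illustrative 4-element line array on the x-axis, half-wavelength spacing
c = 343.0                 # speed of sound (m/s)
f = 1000.0                # frequency (Hz)
lam = c / f
d = lam / 2.0
positions = [(n * d, 0.0, 0.0) for n in range(4)]
weights = [1.0] * 4

# Broadside arrival: k along y, so k . r_n = 0 for every sensor
k_broadside = (0.0, 2.0 * math.pi / lam, 0.0)
print(abs(array_response(positions, weights, k_broadside)))  # 4.0, the sum of the weights
```

For an endfire arrival (k along the array axis) the half-wavelength elements alternate in phase and the response cancels, which is the directional discrimination the text describes.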
{
"text": "The array can be steered to wave arrivals from different directions by intro4ucing a variable time delay x,~ for each sensor element. The response of the steered array is N-1 H(k, r ) = ~ an e -j(k'r'+\u00b0~x') ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Acoustic Beamforming",
"sec_num": null
},
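The effect of the steering delays in (2) can be sketched for the one-dimensional case. In this hedged example (the 8-element geometry and 2 kHz tone are illustrative assumptions, not from the paper), the delays are chosen per the paper's change of variables so that a wave from the steering direction is co-phased:

```python
import cmath
import math

c = 343.0                         # speed of sound (m/s)
f = 2000.0                        # illustrative frequency (Hz)
omega = 2.0 * math.pi * f
lam = c / f
xs = [n * (lam / 2.0) for n in range(8)]   # 8 elements, half-wavelength spacing

def steered_magnitude(theta_arrival, theta_steer):
    # Eq. (2) for a line array: H = sum_n exp(-j(k x_n cos(theta_a) + omega tau_n)),
    # with delays tau_n = -x_n cos(theta_steer) / c, the 1-D form of Eq. (3).
    k = omega / c
    total = 0j
    for x in xs:
        tau = -x * math.cos(theta_steer) / c
        total += cmath.exp(-1j * (k * x * math.cos(theta_arrival) + omega * tau))
    return abs(total)

# Steering at the true arrival direction co-phases all sensors: |H| = N
print(steered_magnitude(math.radians(40.0), math.radians(40.0)))  # 8.0
```

Steering anywhere else leaves residual phase differences across the aperture and a reduced response, which is what makes delay steering a spatial filter.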
{
"text": "where \u00a2o = 2nf is the radian frequency. It is convenient to make a change of variables and define k' as k' = ~ k', where k' is the unit vector in the C wavevector k' direction, c is the speed of sound, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Acoustic Beamforming",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "At rn \" k = -cxn .",
"eq_num": "(3)"
}
],
"section": "\u2022 Acoustic Beamforming",
"sec_num": null
},
{
"text": "Equation 2can then be rewritten as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Acoustic Beamforming",
"sec_num": null
},
{
"text": "H(k, r) = Σ_{n=0}^{N-1} a_n e^{-j k'' · r_n} , (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Acoustic Beamforming",
"sec_num": null
},
{
"text": "where k\" = k-k'. Equation 4shows that the array response is maximum when [k\"l is 0, or when the delays have been adjusted to co-phase the wave arrival at all sensors. The received spatial frequency is 0 (or DC), and the array has a maximum N-1 response which is equal to ~ an. For waves n=O propagating from directions other than k' the response is diminished.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Acoustic Beamforming",
"sec_num": null
},
{
"text": "This principle has been used to design onedimensional and two-dimensional arrays of sensors spaced by d distance. The element spacing dictates the highest frequency for which spatial aliasing (or, ambiguity in directivity) does not occur. This frequency also depends upon the steering parameters but has a lower bound offupp~r = c/2d. Alternatively the spacing is chosen as d=Xupper/2. The lowest frequency for which useful spatial discrimination occurs depends upon the overall dimensions of the array.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Acoustic Beamforming",
"sec_num": null
},
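The spacing rule above is a one-line design calculation. A small sketch (the 4 kHz band edge is an illustrative assumption) showing the round trip between element spacing and the alias-free upper frequency:

```python
c = 343.0  # speed of sound (m/s)

def upper_frequency(d):
    # Lower bound on the highest alias-free frequency for spacing d: f_upper = c / 2d
    return c / (2.0 * d)

def spacing_for(f_upper):
    # Equivalent design rule: d = lambda_upper / 2
    return (c / f_upper) / 2.0

d = spacing_for(4000.0)      # illustrative 4 kHz top of the band
print(d)                     # ~0.0429 m
print(upper_frequency(d))    # back to 4000.0 Hz
```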
{
"text": "For speech pickup applications, the desired bandwidth of the array is greater than three octaves. The magnitude of k\" in (4) is proportional to frequency, hence the beamwidth and directivity are inversely proportional to frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Acoustic Beamforming",
"sec_num": null
},
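The inverse dependence of beamwidth on frequency can be illustrated with the standard first-null approximation for a uniform line aperture (the 0.5 m aperture is an illustrative assumption; the formula is the textbook approximation, not a result from this paper):

```python
import math

c = 343.0   # speed of sound (m/s)
L = 0.5     # illustrative total aperture of a line array (m)

def first_null_beamwidth_deg(f):
    # For a uniform line aperture, the first null sits near sin(theta) = lambda / L,
    # so the full first-null beamwidth shrinks roughly in proportion to 1/f.
    lam = c / f
    return 2.0 * math.degrees(math.asin(min(1.0, lam / L)))

widths = [first_null_beamwidth_deg(f) for f in (1000.0, 2000.0, 4000.0)]
print(widths)  # the beam narrows as frequency rises
```

This frequency dependence is exactly what harmonic nesting, discussed next, is designed to counteract.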
{
"text": "A design artifice to combat this frequency dependence is to use \"harmonic nesting\" [1, 2] of the sensors, so that different harmonically-spaced groups of sensors are used to cover contiguous octaves. Some sensors in the nest serve every octave band. Figure 1 shows a nested two-dimensional array of sensors, its directivity index as a function of frequency, and its beam pattern when the a,,'s of (4) are Chebyshev weighted for -30 dB sidelobes.",
"cite_spans": [
{
"start": 83,
"end": 86,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 87,
"end": 89,
"text": "2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 250,
"end": 258,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "\u2022 Acoustic Beamforming",
"sec_num": null
},
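A minimal sketch of the nesting arithmetic (the 4 kHz top band and three-octave count are illustrative assumptions): each lower octave uses a half-wavelength spacing twice that of the band above it, so alternate elements of a lower band's sub-array are shared with the band above.

```python
c = 343.0  # speed of sound (m/s)

def nested_spacings(f_top, octaves=3):
    # Harmonic nesting: each octave band gets its own half-wavelength spacing,
    # so the spacing doubles for each lower octave and sub-arrays share elements.
    spacings = []
    for band in range(octaves):
        f_band_top = f_top / (2 ** band)
        spacings.append((c / f_band_top) / 2.0)
    return spacings

# Illustrative 3-octave nest with a 4 kHz top band
print(nested_spacings(4000.0))  # spacings double octave by octave
```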
{
"text": "Using these relations one-dimensional and twodimensional arrays have been designed for conferencing and voice-control applications (see Fig. 2 ). Digitally-addressable bucket brigade chips on each sensor provide the delay steering under control of a 386 computer.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 142,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u2022 Acoustic Beamforming",
"sec_num": null
},
{
"text": "Because of limited computational power in the control computer, algorithms for sound-source location and speech detection are, as yet, rudimentary. Sources are located by a blind search and energy detection, and speech/non-speech decisions are made by waveform heuristics. Beams can be positioned in less than a millisecond, but speech decisions require about twenty milliseconds in a given position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithms for Speech-Seeking Autodirective Performance",
"sec_num": null
},
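The blind-search-plus-energy-detection idea can be sketched as follows. This is a hedged toy model, not the paper's implementation: a single unit-amplitude plane wave arrives from a hypothetical talker direction, a coarse grid of steering directions is scanned, and the beam with maximum output energy is kept.

```python
import cmath
import math

c = 343.0
f = 1500.0
omega = 2.0 * math.pi * f
k = omega / c
xs = [n * (c / f) / 2.0 for n in range(8)]   # 8-element half-wavelength line array
true_dir = math.radians(65.0)                # hypothetical talker direction

def beam_energy(theta_steer):
    # Output energy of the delay-steered beam for a unit plane wave from true_dir;
    # the steering delays cancel the arrival phase only when theta_steer matches.
    h = sum(cmath.exp(-1j * k * x * (math.cos(true_dir) - math.cos(theta_steer)))
            for x in xs)
    return abs(h) ** 2

# Blind search: scan a coarse grid of steering directions, keep the max-energy beam
grid_deg = range(0, 181, 5)
best = max(grid_deg, key=lambda a: beam_energy(math.radians(a)))
print(best)  # 65
```

A real system must also reject non-speech sources at the energy peak, which is where the waveform heuristics (and, later, correlation and cepstral measures) come in.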
{
"text": "Full digital designs are in progress having enough signal processing power to make computations of correlations and cepstral coefficients. This will enable more sophistication in both source location and speech detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithms for Speech-Seeking Autodirective Performance",
"sec_num": null
},
{
"text": "The large two-dimensional array, consisting of over 400 electret microphones, has been in use for the past year and a half for interlocation conferencing from an auditorium seating more than 300 persons. Performance greatly surpasses the traditional isolated microphones in the room, and speech quality comparable to Lavalier pickups can be achieved (Fig. 3a) .",
"cite_spans": [],
"ref_spans": [
{
"start": 350,
"end": 359,
"text": "(Fig. 3a)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experimental Applications",
"sec_num": null
},
{
"text": "The small one-dimensional array, consisting of 21 pressure-gradient elements, is being used for an experimental multimedia conferencing system (Hu-MaNet) designed for ISDN telephone communications [4] , (Fig. 3b) .",
"cite_spans": [
{
"start": 197,
"end": 200,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 203,
"end": 212,
"text": "(Fig. 3b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experimental Applications",
"sec_num": null
},
{
"text": "With continued progress in arithmetic capability and economy of single-chip digital signal processors, substantial refinement and expanded performance are possible for autodirective microphone systems. Four areas in particular are receiving research effort. They are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Directions",
"sec_num": null
},
{
"text": "\u2022 accurate spatial location of multiple sound sources \u2022 reliable speech/non-speech discrimination \u2022 spatial volume selectivity in sound capture (and projection)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Directions",
"sec_num": null
},
{
"text": "\u2022 characterization of array performance in noisy reverberant enclosures",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Directions",
"sec_num": null
},
{
"text": "Properties of three-dimensional microphone arrays appear to provide advantages in some of these areas, and are presently being studied. In particular, 3D arrays can be delay-steered to beamforrn over 4 pi steradians without spatial ambiguity and with beamwidth independent of steering direction [5] .",
"cite_spans": [
{
"start": 295,
"end": 298,
"text": "[5]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Research Directions",
"sec_num": null
},
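The direction-independence of the co-phased gain for a 3D array follows directly from the delay rule of Eq. (3). A hedged sketch (the 3x3x3 geometry, frequency, and test directions are illustrative assumptions): for any steering direction, the delays cancel the arrival phase at every sensor, so the gain is always the element count.

```python
import cmath
import math

c = 343.0
f = 1000.0
lam = c / f
d = lam / 2.0

# Illustrative 3x3x3 uniform cubic array of omnidirectional sensors
positions = [(i * d, j * d, m * d)
             for i in range(3) for j in range(3) for m in range(3)]

def steered_gain(az, el):
    # Unit arrival direction khat; delays tau_n = -(r_n . khat)/c  (Eq. (3))
    # co-phase the arrival, so the exponent vanishes at every sensor.
    khat = (math.cos(el) * math.cos(az), math.cos(el) * math.sin(az), math.sin(el))
    kmag = 2.0 * math.pi / lam
    omega = 2.0 * math.pi * f
    total = 0j
    for r in positions:
        proj = sum(rc * kc for rc, kc in zip(r, khat))
        tau = -proj / c
        total += cmath.exp(-1j * (kmag * proj + omega * tau))
    return abs(total)

# The co-phased gain equals the element count from any steering direction
print(steered_gain(math.radians(30.0), math.radians(70.0)))  # 27.0
```

The sketch shows only the gain at the steered direction; the paper's stronger claims (no spatial ambiguity, direction-independent beamwidth) concern the full pattern [5].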
{
"text": "As with linear and planar arrays, harmonic nesting of the receiving elements in 3D arrays can be used to make beamwidth weakly dependent upon bandwidth coverage. For example, a uniform cubic array, shown in Fig. 4 , provides unique, constantwidth beam patterns over 4pi steradians. The 3D geometry can also provide range selectivity that goes beyond the point-focusing capabilities of 1D and 2D arrays. These properties are currently under study. ",
"cite_spans": [],
"ref_spans": [
{
"start": 207,
"end": 213,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Research Directions",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Computer-steered microphone arrays for sound transduction in large morns",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Flanagan",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Johnston",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zahn",
"suffix": ""
},
{
"first": "G",
"middle": [
"W"
],
"last": "Elko",
"suffix": ""
}
],
"year": 1985,
"venue": "J. Acoust. Soc. Amer",
"volume": "78",
"issue": "",
"pages": "1508--1518",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.L. Flanagan, J. D. Johnston, R. Zahn, G. W. Elko, \"Computer-steered microphone arrays for sound transduction in large morns, J. Acoust. Soc. Amer. 78, 1508-1518 (1985).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Autodirective microphone systems",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Flanagan",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Berldey",
"suffix": ""
},
{
"first": "G",
"middle": [
"W"
],
"last": "Elko",
"suffix": ""
},
{
"first": "J",
"middle": [
"E"
],
"last": "West",
"suffix": ""
},
{
"first": "M",
"middle": [
"M"
],
"last": "Sondhi",
"suffix": ""
}
],
"year": 1991,
"venue": "Acustica",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.L. Flanagan, D. A. Berldey, G. W. Elko, J. E. West, M. M. Sondhi, \"Autodirective micro- phone systems,\" Acustica, February 1991 (in press).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Speech enhancement for mobile telephony",
"authors": [
{
"first": "M",
"middle": [
"M"
],
"last": "Goulding",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Bird",
"suffix": ""
}
],
"year": 1990,
"venue": "IEEE Trans. Vehic. Tech",
"volume": "39",
"issue": "4",
"pages": "316--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.M. Goulding and J. S. Bird, \"Speech en- hancement for mobile telephony,\" IEEE Trans. Vehic. Tech. 39, no. 4, 316-326 (November 1990).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Integrated information modalities for Human/Machine communications: 'HuMaNet', an experimental system for conferencing",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Flanagan",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Berkley",
"suffix": ""
},
{
"first": "K",
"middle": [
"L"
],
"last": "Shipley",
"suffix": ""
}
],
"year": 1990,
"venue": "Jour. Visual Communication and Image Representation",
"volume": "1",
"issue": "",
"pages": "113--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.L. Flanagan, D. A. Berkley, K. L. Shipley, \"Integrated information modalities for Human/Machine communications: 'HuMaNet', an experimental system for conferencing,\" Jour. Visual Communication and Image Repre- sentation 1, 113-126 (November 1990).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "(a) Harmonic nesting of acoustic sensors for three octaves. Low-frequency elements are shown by the largest circles. Mid and high frequency elements are indicated by smaller and smallest circles, respectively. (b)Directivity index as a function of frequency for nested sensors. (c) Chebyshev weighted beam at broadside (sidelobes are -30 dB down). 5. J.L. Flanagan, \"Three-dimensional microphone arrays,\" J. Acoust. Soc. Amer. 82(",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "(a) Auditorium installation of a 2D autodirective array. (b)Teleconferencing application of a 1D autodirective array. The array provides input to a connected-word speech recognizer for controlling system features [4].",
"num": null,
"uris": null,
"type_str": "figure"
}
}
}
}