The HESP Department sponsors a series of talks on current research in the areas of hearing, speech, and language by visiting researchers or members of the HESP faculty. All students, faculty, staff, and affiliates are welcome and encouraged to attend. If you would like to join the email distribution list for upcoming HESP Seminar Series talks, please email Dr. Matt Goupell at goupell [at] umd.edu.

Certification Maintenance Hours can be earned by attending these talks.

Follow our Google calendar: //calendar.google.com/calendar/embed?src=6ss0j48ah4hkt25ij2o0dngnac [at] group.calendar.google.com&ctz=America/New_York&pli=1

 


Date/Time: 11/12/18, 12:00-1:00

Speakers: Kristina Milvae & Katie Von Holzen (HESP)

Titles:
Is cochlear gain reduction related to speech-in-noise performance? (Milvae)
Babies know words, even when they are mispronounced: A meta-analysis of mispronunciation sensitivity (Von Holzen)

Abstracts:
Listening to speech in noisy environments is difficult even for listeners with normal hearing, yet it is a task that is often accomplished successfully. The mechanisms that underlie this ability are not well understood. One known physiological mechanism that may contribute is the medial olivocochlear reflex (MOCR), a bilateral reflex between the brainstem and cochlea that reduces cochlear gain in response to sound. Correlational studies exploring the relationship between cochlear gain reduction and speech-in-noise processing have produced mixed results: some found that individuals with greater cochlear gain reduction performed better on speech-in-noise tasks, others found the opposite relationship, and still others found no relationship at all. The contrasting findings likely stem from differences in the research paradigms. This experiment aimed to overcome previous limitations by controlling the MOC pathways activated across measures and by measuring speech-in-noise understanding at multiple signal-to-noise ratios (SNRs). It was hypothesized that the relationship would depend on the SNR. Psychoacoustic (forward masking) and physiologic (otoacoustic emission) measures of cochlear gain reduction were used to examine the relationship between cochlear gain reduction and speech-in-noise performance at multiple SNRs. A significant relationship between the psychoacoustic measure and speech-in-noise performance was found at a negative SNR: individuals with larger gain-reduction estimates at 2 kHz performed more poorly on the speech-in-noise task. This result is consistent with the idea that turning down the gain should only improve performance when the overall response to the noise is decreased more than the response to the speech. (Milvae)

Before infants become mature speakers of their native language, they must acquire a robust word-recognition system that strikes a balance between tolerating some variation (mood, voice, accent) and recognizing variability that potentially changes meaning (e.g., cat vs. hat). Together with my co-author Christina Bergman, I conducted a meta-analysis aimed at quantifying how the latter ability, termed mispronunciation sensitivity, changes over infants’ first three years, testing competing predictions of mainstream language acquisition theories. Our results show that infants were sensitive to mispronunciations, but nonetheless accepted them as labels for target objects. Interestingly, and in contrast to the predictions of mainstream theories, mispronunciation sensitivity was not modulated by infant age, suggesting that a sufficiently flexible understanding of native-language phonology is in place at a young age. (Von Holzen)

Location: Tydings 2108

Date/Time: 11/5/18, 12:00-1:00

Speakers: Amritha Mallikarjun & Rochelle Newman (HESP)

Titles:
Toddlers' accommodation of accent: Acoustic and experiential factors (Newman)
The cocktail party effect in domestic dogs (Mallikarjun)

Abstracts:
Accents can pose multiple problems for a young learner. First, phonetic and rhythmic variations can make the mapping across accents difficult. Second, speakers with foreign accents may be less consistent across their productions. Third, some accents may have acoustic differences large enough to cause a perceived change in category identity, resulting in overlap between intended categories. Recent work from our lab has revealed that: 1) toddlers’ (aged 18-20 months) ability to generalize across accents appears to be affected more by dialectal changes in vowel realizations than by changes in linguistic rhythm; 2) children aged 32 months can successfully accommodate accents that do not result in category overlap, but have more difficulty when accents have the potential to cause phonetic confusions; and 3) bilingual children are better able to generalize across unfamiliar regional dialects than monolingual children. We have also found that while children’s recognition of known words is impaired by the presence of an accent, their short-term memory is not. That is, assuming a word was identified correctly when it was spoken, the presence of an accent does not appear to further affect children’s ability to store that information for subsequent recall. These findings support a more nuanced view of the way that children both accommodate, and are affected by, variation in their language input. (Newman)

Like humans, canine companions often find themselves in noisy environments and are expected to respond to human speech despite potential distractors. Such environments pose particular problems for young children, who have limited linguistic knowledge. Here, we examine whether dogs show similar difficulties. We found that dogs prefer their name to a stress-matched foil in quiet conditions, even when it is spoken by a novel talker, and they continued to do so at signal-to-noise ratios as low as 0 dB. We also found better name recognition in dogs trained to perform tasks for humans, such as service dogs, search-and-rescue dogs, and explosives-detection dogs. These tasking dogs came from several different breeds, and their tasks differed widely from one another, suggesting that their superior performance may reflect more training and better attention in general. (Mallikarjun)

Location: Tydings 2108

Date/Time: 10/22/18, 12:00-1:00

Speakers: Zoe Ovans & Margaret Cychosz (HESP)

Titles:
Rely on what’s reliable: Effects of cognitive-control engagement on children’s sentence comprehension (Ovans)
The lexical advantage: Four-year-old children acquire words, not sounds (Cychosz)

 
Abstracts:
Children parse sentences word by word, which can lead to interpretation errors when initial misanalyses conflict with later-arriving evidence. Such challenges have been attributed to immature cognitive control, though it remains unclear how sentence processing and cognitive control are linked. We tested 5-year-olds' ability to interpret sentences in real time under varying levels of cognitive-control engagement and with varying sentence types. Cognitive-control engagement was manipulated using interleaved, child-friendly Stroop trials (congruent/incongruent) preceding sentence trials. We found that incongruent Stroop trials caused children to more readily interpret sentences according to information gleaned from the verbs, regardless of whether the verbs were prediction or revision cues. This suggests that cognitive control fulfills the very general function of increasing reliance on reliable parsing cues like verb-specific biases (and doesn't always prevent children from revising an initial interpretation). (Ovans)

This study tested a potential lexical advantage in young children’s early speech production: do children produce consonant-vowel sequences less accurately in nonwords than in real words? Children aged 3;3-4;4 repeated both real words and nonwords after a model speaker. Each real word had a paired word-initial consonant-vowel sequence in the nonword (e.g., suitcase, soodross), and these word-initial sequences were kept constant between the paired words. Previous work on this topic compared different sequences of paired sounds, making it hard to determine whether those results reflected a lexical or a phonetic effect. Our results show that children consistently produced the sequences in real words more accurately than in nonwords, and the effect was most pronounced in children with smaller receptive vocabularies. Together, these results reinforce theories arguing for interactions between vocabulary size and phonological development in language development. (Cychosz)

Location: Tydings Hall, room 2108

Date/Time: 10/15/18, 12:00-1:00

Speaker: Alex Presacco (HESP)

Title: Tone-Evoked Acoustic Change Complex (ACC) Recorded in a Sedated Animal Model

Abstract:
The acoustic change complex (ACC) is a scalp-recorded cortical evoked potential generated in response to changes (e.g., frequency, amplitude) in an auditory stimulus. The ACC has been well studied in humans, but to our knowledge no animal model has been evaluated. In particular, it was not known whether the ACC could be recorded under the conditions of sedation that would likely be necessary for recordings from animals. For that reason, we tested the feasibility of recording the ACC from sedated cats in response to changes in the frequency and amplitude of pure-tone stimuli. Cats were sedated with ketamine and acepromazine, and subdermal needle electrodes were used to record electroencephalographic (EEG) activity. Tones were presented from a small loudspeaker located near the right ear. Continuous tones alternated at 500-ms intervals between two frequencies or two levels. Neurometric functions were created by recording neural response amplitudes while systematically varying the magnitude of frequency steps centered at octave frequencies of 2, 4, 8, and 16 kHz, all at 75 dB SPL, or of level steps around 75 dB SPL tested at 4 and 8 kHz. The ACC could be recorded readily under this ketamine/acepromazine sedation; in contrast, it could not be recorded reliably under any level of isoflurane anesthesia that was tested. The minimum frequency steps (expressed as Weber fractions, df/f) or level steps (expressed in dB) needed to elicit the ACC fell within the range of thresholds previously reported in animal psychophysical tests of discrimination. The success in recording the ACC in sedated animals suggests that it will be a useful tool for evaluating other aspects of auditory acuity in normal hearing and, presumably, in electrical cochlear stimulation, especially for novel stimulation modes that are not yet feasible in humans.

Location: Tydings 2108

Seminar Series Archive: To see a list of previous HESP seminars, click here.

 

Certification Maintenance Hours:

Those who attend these seminars will be awarded Certification Maintenance Hours (CMHs). A CMH is 60 minutes of time spent as a learner and participant in a non-ASHA CEU professional development activity. CMHs are different from ASHA-approved CEUs, which are also offered through the department. ASHA permits the use of CMHs for the purpose of maintaining your CCCs. However, you are responsible for maintaining documentation verifying completion of each activity. Documentation will not be maintained on the ASHA CE Registry. For additional information about CMHs and ASHA certification requirements, please click here. Please be aware that state regulatory agencies and boards of education might not recognize or accept CMHs.
